
Video exclusive: Oracle adds more wood to the MySQL HeatWave fire


 

(upbeat music) >> When Oracle acquired Sun in 2009, it paid $5.6 billion net of Sun's cash and debt. Now, I argued at the time that Oracle got one of the best deals in the history of enterprise tech, and I got a lot of grief for saying that, because Sun had a declining business, it was losing money, and its revenue was under serious pressure as it tried to hang on for dear life. But Safra Catz understood that Oracle could pare back Sun's lower-profit and lagging businesses, like its low-end x86 product lines, and even if Sun's revenue were cut in half, because Oracle has such a high revenue multiple as a software company, it could almost instantly generate $25 to $30 billion in shareholder value on paper. In addition, it was a catalyst for Oracle to initiate its highly differentiated engineered systems business, and was actually the precursor to Oracle's cloud. Oracle saw that it could capture high-margin dollars that used to go to partners like HP, its original Exadata partner, and get paid for the full stack across infrastructure, middleware, database, and application software when it eventually got really serious about cloud. Now, there was also a major technology angle to this story. Remember Sun's tagline, "the network is the computer"? Well, they should have just called it cloud. Through the Sun acquisition, Oracle also got a couple of key technologies: Java, the number one programming language in the world, and MySQL, a key ingredient of the LAMP stack, that's Linux, Apache, MySQL and PHP, Perl or Python, on which the internet is basically built, and which is used by many cloud services like Facebook, Twitter, WordPress, Flickr, and Amazon Aurora, among many other examples, including, by the way, MariaDB, which is a fork of MySQL created by MySQL's creator, basically in protest of Oracle's acquisition; the drama is Oscar-worthy. It gets even better. 
In 2020, Oracle began introducing a new version of MySQL called MySQL HeatWave, and since late 2020 it's been in sort of a super cycle, rolling out three new releases in less than a year and a half in an attempt to expand its TAM and compete in new markets. Now, we covered the release of MySQL Autopilot, which uses machine learning to automate management functions. And we also covered the benchmarks that Oracle produced against Snowflake, AWS, Azure, and Google. And Oracle's at it again with HeatWave, adding machine learning into its database capabilities, along with the previously available integration of OLAP and OLTP. This, of course, is in line with Oracle's converged database philosophy, which, as we've reported, is different from other cloud database providers, most notably Amazon, which takes the right-tool-for-the-right-job approach and chooses database specialization over a one-size-fits-all strategy. Now, we've asked Oracle to come on theCUBE and explain these moves, and I'm pleased to welcome back Nipun Agarwal, who's the senior vice president for MySQL Database and HeatWave at Oracle. And today, in this video exclusive, we'll discuss machine learning, other new capabilities around elasticity and compression, and then any benchmark data that Nipun wants to share. Nipun's been a leading advocate of the HeatWave program. He's led engineering in that team for over 10 years, and he has over 185 patents in database technologies. Welcome back to the show, Nipun. Great to see you again. Thanks for coming on. >> Thank you, Dave. Very happy to be back. >> Yeah, now for those who may not have kept up with the news, maybe to kick things off you could give us an overview of what MySQL HeatWave actually is so that we're all on the same page. >> Sure, Dave. MySQL HeatWave is a fully managed MySQL database service from Oracle, and it has a built-in query accelerator called HeatWave, and that's the part which is unique. 
So with MySQL HeatWave, customers of MySQL get a single database which they can use for transaction processing, for analytics, and for mixed workloads, because traditionally MySQL has been designed and optimized for transaction processing. So in the past, when customers had to run analytics with a MySQL-based service, they would need to move the data out of MySQL into some other database for running analytics. So they would end up with two different databases, and it would take some time to move the data out of MySQL into this other system. With MySQL HeatWave, we have solved this problem, and customers now have a single MySQL database for all their applications, and they can get good analytics performance without any changes to their MySQL application. >> Now, it's no secret that a lot of times, you know, queries are not written most efficiently, and critics of MySQL HeatWave will claim that this product is very memory- and cluster-intensive; it has a heavy footprint that adds to cost. How do you answer that, Nipun? >> Right, so for offering any database service in the cloud there are two dimensions, performance and cost, and we have been very cognizant of both of them. So it is indeed the case that HeatWave is an in-memory query accelerator, which is why we get very good performance, but it is also the case that we have optimized HeatWave for commodity cloud services. So, for instance, we use the least expensive compute. We use the least expensive storage. So what I would suggest to customers who would like to know the price-performance advantage of HeatWave is this: compared to any database we have benchmarked against, Redshift, Snowflake, Google BigQuery, Azure Synapse, HeatWave is significantly faster and significantly lower priced on a multitude of workloads. So not only is it an in-memory database and optimized for that, but we have also optimized it for commodity cloud services, which makes it much lower priced than the competition. 
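The price-performance metric Nipun keeps returning to reduces to a simple idea: the cost to complete a given workload, i.e. runtime multiplied by the hourly price. A toy sketch with made-up numbers (these are illustrative, not the benchmark figures discussed later in the interview):

```python
def price_performance_cost(runtime_hours, price_per_hour):
    """Cost to complete one run of the workload; lower is better."""
    return runtime_hours * price_per_hour

# Hypothetical services: A finishes faster AND bills less per hour.
cost_a = price_performance_cost(runtime_hours=1.0, price_per_hour=16.0)
cost_b = price_performance_cost(runtime_hours=4.0, price_per_hour=20.0)

print(cost_a, cost_b)   # 16.0 80.0
print(cost_b / cost_a)  # 5.0 -> A has 5x better price performance
```

A service can win on this metric by being faster, by being cheaper per hour, or, as claimed here, both at once.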
>> Well, at the end of the day, it's customers that sort of decide what the truth is. So to date, what's been the customer reaction? Are they moving from other clouds, from on-prem environments? Both? You know, what are you seeing? >> Right, so we are definitely seeing a whole bunch of migrations of customers who are running MySQL on-premise to the cloud, to MySQL HeatWave. That's definitely happening. What is also very interesting is we are seeing that a very large percentage of customers, more than half the customers who are coming to MySQL HeatWave, are migrating from other clouds. We have a lot of migrations coming from AWS Aurora, migrations from Redshift, migrations from RDS MySQL, Teradata, SAP HANA, right. So we are seeing migrations from a whole bunch of other databases and other cloud services to MySQL HeatWave. And the main reasons we are told customers are migrating from other databases to MySQL HeatWave are lower cost, better performance, and no change to their application, because many of these services, like AWS Aurora, are compatible with MySQL. So when customers try MySQL HeatWave, not only do they get better performance at a lower cost, but they find that they can migrate their application without any changes, and that's a big incentive for them. >> Great, thank you, Nipun. So can you give us some names? Are there some real-world examples of these customers that have migrated to MySQL HeatWave that you can share? >> Oh, absolutely, I'll give you a few names. Stutor.com, this is an educational SaaS provider based out of Brazil. They were using Google BigQuery, and when they migrated to MySQL HeatWave, they found a 300X, right, 300 times improvement in performance, and it lowered their cost by 85% (audio cut out). Another example is Neovera. 
They offer cybersecurity solutions, and they were running their application on an on-premise version of MySQL. When they migrated to MySQL HeatWave, their application improved in performance by 300 times and their cost reduced by 80%, right. So by going from on-premise to MySQL HeatWave, they reduced the cost by 80% and improved performance by 300 times. We are Glass, another customer based out of Brazil, was running on AWS EC2, and when they migrated, within hours they found that there was a significant improvement, like, you know, over 5X improvement in database performance, and they were able to accommodate a very large virtual event which had more than a million visitors. Another example, Genius Sonority. They are a game designer in Japan, and when they moved to MySQL HeatWave, they found a 90 times improvement in performance. And there are many, many more, a lot of migrations, again, from, you know, Aurora, Redshift, and many other databases as well. And consistently what we hear is (audio cut out) getting much better performance at a much lower cost without any change to their application. >> Great, thank you. You know, when I ask that question, a lot of times I get, "Well, I can't name the customer name," but I've got to give Oracle credit, a lot of times you guys have them at your fingertips. So you're not the only one, but it's somewhat rare in this industry. So, okay, so you got some good feedback from those customers that did migrate to MySQL HeatWave. What else did they tell you that they wanted? Did they, you know, kind of share a wishlist and some of the white space that you guys should be working on? What'd they tell you? >> Right, so as customers are moving more data into MySQL HeatWave, as they're consolidating more data into MySQL HeatWave, customers want to run other kinds of processing with this data. 
A very popular one is (audio cut out). So we have had multiple customers who told us that they wanted to run machine learning with data which is stored in MySQL HeatWave, and for that they have to extract the data out of MySQL (audio cut out). So that was the first feedback we got. Second thing is, MySQL HeatWave is a highly scalable system. What that means is that as you add more nodes to a HeatWave cluster, the performance of the system improves almost linearly. But currently customers need to perform some manual steps to add nodes to a cluster or to reduce the cluster size. So that was the other feedback we got, that people wanted this to be automated. Third thing is that we have shown, in previous results, that HeatWave is significantly faster and significantly lower priced compared to competitive services. So we got feedback from customers asking whether we could trade off some performance to get an even lower cost, and that's what we have looked at. And then finally, we have some results on various data sizes with TPC-H. Customers wanted to see if we could offer some more data points as to how HeatWave performs on other kinds of workloads. And that's what we've been working on for the last several months. >> Okay, Nipun, we're going to get into some of that, but, so how did you go about addressing these requirements? >> Right, so the first thing is, we are announcing support for in-database machine learning, meaning that customers who have their data inside MySQL HeatWave can now run training, inference, and prediction all inside the database, without the data or the model ever having to leave the database. So that's how we addressed the first one. Second thing is, we are offering support for real-time elasticity, meaning that customers can scale up or scale down to any number of nodes. This requires no manual intervention on the part of the user, and for the entire duration of the resize operation, the system is fully available. 
Third, in terms of cost, we have doubled the amount of data that can be processed per node. So if you look at a HeatWave cluster, the size of the cluster determines the cost. So by doubling the amount of data that can be processed per node, we have effectively halved the cluster size which is required for running a given workload, which means it reduces the cost to the customer by half. And finally, we have also run the TPC-DS workload on HeatWave and compared it with other vendors. So now customers have another data point in terms of the performance and the cost comparison of HeatWave with other services. >> All right, and I promise I'm going to ask you about the benchmarks, but I want to come back and drill into these a bit. How is HeatWave ML different from competitive offerings? Take Redshift ML, for example. >> Sure, okay, so this is a good comparison. Let's start with, say, Redshift ML. There are some systems, like, you know, Snowflake, which don't even offer any processing of machine learning inside the database, and they expect customers to write a whole bunch of code, in say Python or Java, to do machine learning. Redshift ML does have integration with SQL. That's a good start. However, when customers of Redshift need to run machine learning and they invoke Redshift ML, it makes a call to another service, SageMaker, right, where the data needs to be exported to a different service. The model is generated, and the model also lives outside Redshift. With HeatWave ML, the data always resides inside the MySQL database service. We are able to generate models, train the models, run inference, and run explanations, all inside the MySQL HeatWave service. So the data, and the model, never have to leave the database, which means that both the data and the models can be secured by the same access control mechanisms as the rest of the data. So that's the first part: there is no need for any ETL. 
The second aspect is the automation. Training is a very important part of machine learning, right, and it impacts the quality of the predictions and such. So traditionally, customers would employ data scientists to influence the training process so that it's done right. And even in the case of Redshift ML, the users are expected to provide a lot of parameters to the training process. So the second thing which we have worked on with HeatWave ML is that it is fully automated. There is absolutely no user intervention required for training. Third is in terms of performance. One of the things we are very, very sensitive to is performance, because performance determines the eventual cost to the customer. So again, in some benchmarks which we have published, and these are all available on GitHub, we are showing how HeatWave ML is 25 times faster than Redshift ML, and, here's the kicker, at 1% of the cost. So four benefits: the data all remains secure inside the database service, it's fully automated, it's much faster, and it's much lower cost than the competition. >> All right, thank you, Nipun. Now, there's a lot of talk these days about explainability in AI. You know, the system can very accurately tell you that it's a cat, or, for you Silicon Valley fans, that it's a hot dog or not a hot dog, but it can't tell you how it got there. So what is explainability, and why should people care about it? >> Right, so when we were talking to customers about what they would like from a machine learning based solution, one of the pieces of feedback we got is that enterprises are a little slow or averse to adopting machine learning, because it seems to be, you know, like magic, right? And enterprises have an obligation to be able to explain, or to provide an answer to their customers as to why the database made a certain choice. With a rule-based solution it's simple: it's rule based, and you know what the logic was. 
So the reason explanations are important is because customers want to know why the system made a certain prediction. One of the important characteristics of HeatWave ML is that any model which is generated by HeatWave ML can be explained, and we can do both global, or model, explanations, as well as local explanations. So when the system makes a specific prediction using HeatWave ML, the user can find out why the system made that prediction. So, for instance, if someone is being denied a loan, the user can figure out what were the attributes, the features, which led to that decision. So this ensures, you know, fairness, and many times there is also a need for regulatory compliance, where users have a right to know. So we feel that explanations are very important for enterprise workloads, and that's why every model which is generated by HeatWave ML can be explained. >> Now, I've got to give Snowflake some props, you know, for this whole idea of separating compute from storage, but also bringing the database to the cloud and driving elasticity. So that's been a key enabler and has solved a lot of problems, in particular the snake-swallowing-the-basketball problem, as I often say. But what about elasticity, and elasticity in real time? How is your version, and there are a lot of companies chasing this, how is your approach to an elastic cloud database service different from what others are promoting these days? >> Right, so a couple of characteristics. One is that we have now fully automated the process of elasticity, meaning that if a user wants to scale up or scale down, the only thing they need to specify is the eventual size of the cluster, and the system completely takes care of it transparently. But then there are a few characteristics which are very unique. So, for instance, we can scale up or scale down to any number of nodes. 
Whereas in the case of Snowflake, the number of nodes someone can scale up or scale down to are powers of two. So if a user needs 70 CPUs, well, their choice is either 64 or 128. So by providing this flexibility with MySQL HeatWave, customers get a custom fit. They can get a cluster which is optimized for their specific workload. So that's the first thing: flexibility of scaling up or down to any number of nodes. The second thing is that after the operation is completed, the system is fully balanced, meaning the data across the various nodes is fully balanced. That is not the case with many solutions. So, for instance, in the case of Redshift, after the resize operation is done, the user is expected to manually rebalance the data, which can be very cumbersome. And the third aspect is that while the resize operation is going on, the HeatWave cluster is completely available for queries, for DMLs, for loading more data. That is, again, not the case with Redshift. With Redshift, suppose the operation takes 10 to 15 minutes; during that window of time, the system is not available for writes, and for a big chunk of that time, the system is not even available for queries, which is very limiting. So the advantages we have are: it's fully flexible, the system ends in a balanced state, and the system is completely available for the entire duration of the operation. >> Yeah, I guess you've got that hypergranularity, which, you know, sometimes they say, "Well, t-shirt sizes are good enough," but then I think of myself, some t-shirts fit me better than others, so. Okay, I saw in the announcement that you have this lower price point for customers. How did you actually achieve this? Could you give us some details around that, please? >> Sure, so there are two things in this announcement which lower the cost for customers. The first thing is that we have doubled the amount of data that can be processed by a HeatWave node. 
So if we have doubled the amount of data which can be processed by a node, the cluster size which is required by customers reduces to half, and that's why the cost drops to half. The way we have managed to do this is by two things. One is support for Bloom filters, which reduces the amount of intermediate memory. And second is, we compress the base data. So these are the two techniques we have used to process more data per node. The second way by which we are lowering the cost for customers is by supporting pause and resume of HeatWave. Many times you find that customers of HeatWave and other services want to run some queries or some workloads for some duration of time, but then they don't need the cluster for a few hours. Now, with the support for pause and resume, customers can pause the cluster, and the HeatWave cluster instantaneously stops. And when they resume, not only do we fetch the data at a very quick pace from the object store, but we also preserve all the statistics which are used by Autopilot. So both the data and the metadata are fetched extremely fast from the object store. So with these two capabilities, we feel that it'll drive down the cost to our customers even more. >> Got it, thank you. Okay, I promised I was going to get to the benchmarks. Let's have it. How do you compare with others, but specifically cloud databases? I mean, and how do we know these benchmarks are real? My friends at EMC, back in the day, they were brilliant at doing benchmarks. They would produce these beautiful PowerPoint charts, but it was kind of opaque. What do you say to that? >> Right, so there are multiple things I would say. The first thing is that this time we have published two benchmarks: one is for machine learning, and the other is for SQL analytics. All the benchmarks, including the scripts which we have used, are available on GitHub. 
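The two cost levers described in the answer above, doubling how much data one node can process and letting a cluster be any size rather than a rounded-up one, reduce to simple sizing arithmetic. A rough sketch with assumed capacities and prices (the numbers are illustrative, not Oracle's actual figures):

```python
import math

NODE_PRICE_PER_HOUR = 10.0  # assumed illustrative rate, not a real price

def nodes_needed(data_tb, tb_per_node):
    """Minimum node count to process a workload at a per-node capacity."""
    return math.ceil(data_tb / tb_per_node)

def next_power_of_two(n):
    """Smallest power of two >= n: the only sizes some services offer."""
    return 1 << max(0, n - 1).bit_length()

data_tb = 35
old = nodes_needed(data_tb, 0.5)  # 70 nodes at the old per-node capacity
new = nodes_needed(data_tb, 1.0)  # 35 nodes once capacity doubles
print(old, new)  # 70 35 -> cluster size, and therefore cost, is halved

# Flexible sizing vs. rounding up to a power of two for the same need:
print(new * NODE_PRICE_PER_HOUR)                     # 350.0
print(next_power_of_two(new) * NODE_PRICE_PER_HOUR)  # 640.0
```

The same arithmetic shows why exact-fit clusters matter: rounding 35 nodes up to 64 nearly doubles the bill for the identical workload.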
So we have full transparency, and we invite and encourage customers or other service providers to download the scripts, to download the benchmarks, and see if they get any different results, right. So what we are seeing, we have published for other people to try and validate. That's the first part. Now, for machine learning, there hasn't been a precedent for enterprise benchmarks, so we took open data sets and we have published benchmarks for those, right? So both for classification as well as for regression, we have measured the training times, and that's where we find that HeatWave ML is 25 times faster than Redshift ML at one percent of the cost. So, fully transparent, available. For SQL analytics, in the past we have shown comparisons with TPC-H. So we would show TPC-H across various databases, across various data sizes. This time we decided to use TPC-DS. The advantage of TPC-DS over TPC-H is that it has a larger number of queries, the queries are more complex, the schema is more complex, and there is a lot more data skew. So it represents a different class of workloads, which is very interesting. So these are queries derived from the TPC-DS benchmark. So the numbers we have published this time are for 10 terabyte TPC-DS, and we are comparing with all four major services: Redshift, Snowflake, Google BigQuery, Azure Synapse. And in all the cases, HeatWave is significantly faster and significantly lower priced. Now, one of the things I want to point out is that when we are doing the cost comparison with other vendors, we are being overly fair. For instance, the cost of HeatWave includes the cost of both the MySQL node as well as the HeatWave nodes, and with this setup, customers can run transaction processing, analytics, as well as machine learning. So the price captures all of it. Whereas with the other vendors, the comparison is only for the analytic queries, right? 
So if customers wanted to run OLTP, you would need to add the cost of that database. Or if customers wanted to run machine learning, you would need to add the cost of that service. Furthermore, in the case of HeatWave, we are quoting the pay-as-you-go price, whereas for other vendors, like, you know, Redshift, and where applicable, we are quoting the one-year, fully-paid-upfront cost, right. So it's, you know, a very fair comparison. So in terms of the numbers, though, for price performance on TPC-DS, we are about 4.8 times better price performance compared to Redshift. We are 14.4 times better price performance compared to Snowflake, 13 times better than Google BigQuery, and 15 times better than Synapse. So across the board, we are significantly faster and significantly lower priced. And as I said, all of these scripts are available on GitHub for people to try for themselves. >> Okay, all right, I get it. So I think what you're saying is, you could have said, this is what it's going to cost for you to do both analytics and transaction processing on a competitive platform versus what it takes to do that on Oracle MySQL HeatWave, but you're not doing that. You're saying, let's take them head-on in their sweet spot of analytics, or OLTP separately, and you're saying you still beat them. Okay, so you've got this one database service in your cloud that supports transactions and analytics and machine learning. How much do you estimate you're saving companies with this integrated approach versus the alternative of, kind of what I called upfront, the right tool for the right job, and admittedly having to ETL between tools? How can you quantify that? >> Right, so, okay. The numbers, like I called out, right: at the end of the day, in a cloud service, price performance is the metric which gives a sense as to how much customers are going to save. 
So, for instance, for a TPC-DS workload, if we are 14 times better price performance than Snowflake, it means that our cost is going to be 1/14th of what customers would pay for Snowflake. Now, in addition, there are other costs, in terms of migrating the data, having to manage two different databases, having to pay for another service for, you know, machine learning; that's all extra, and that depends upon what tools customers are using or what other services they're using for transaction processing or for machine learning. But these numbers themselves, right, are very, very compelling. If we are 1/5th the cost of Redshift, right, or 1/14th of Snowflake, these numbers by themselves are very, very compelling. And that's the reason we are seeing so many of these migrations from these databases to MySQL HeatWave. >> Okay, great, thank you. Our last question: in the Q3 earnings call for fiscal '22, Larry Ellison said that "MySQL HeatWave is coming soon on AWS," and that caught a lot of people's attention. That's not like Oracle. I mean, people might say maybe that's an indication that you're not having success moving customers to OCI, so you've got to go to other clouds, which, by the way, I applaud, but any comments on that? >> Yep, this is very much like Oracle. So if you look at one of the big reasons for the success of the Oracle database, and why Oracle database is the most popular database, it's because Oracle database runs on all the platforms, and that has been the case from day one. So, very akin to that, the idea is that there's a lot of value in MySQL HeatWave, and we want to make sure that we can offer the same value to customers of MySQL running on any cloud, whether it's OCI, whether it's AWS, or any other cloud. 
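The arithmetic at the start of that answer, that a price-performance multiple translates directly into a cost fraction, is easy to check (the ratios are the ones quoted for the 10 TB TPC-DS comparison; what that means in actual dollars depends on the workload):

```python
# Being k times better in price performance for the same workload
# implies roughly 1/k of the competitor's cost.
QUOTED_RATIOS = {"Redshift": 4.8, "Snowflake": 14.4,
                 "BigQuery": 13.0, "Synapse": 15.0}

def relative_cost(ratio):
    """Fraction of the competitor's cost for the same work."""
    return 1.0 / ratio

for name in sorted(QUOTED_RATIOS):
    k = QUOTED_RATIOS[name]
    print(f"{name}: 1/{k:g} -> ~{relative_cost(k):.0%} of the cost")
# e.g. Snowflake: 1/14.4 -> ~7% of the cost, consistent with "1/14th";
# Redshift: 1/4.8 -> ~21%, consistent with "1/5th the cost of Redshift"
```

This is only the compute-cost side; as noted in the interview, migration effort and the cost of any second database or ML service sit on top of it.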
So this shows how confident we are in our offering, and we believe that in other clouds as well, customers will find significant advantage by having a single database which is much faster and much lower priced than the alternatives they currently have. So this shows how confident we are about our products and services. >> Well, that's great. I mean, obviously for you, you're in the MySQL group. You love that, right? The more places you can run, the better it is for you, of course, and your customers. Okay, Nipun, we've got to leave it there. As always, it's great to have you on theCUBE. Really appreciate your time. Thanks for coming on and sharing the new innovations. Congratulations on all the progress you're making here. You're doing a great job. >> Thank you, Dave, and thank you for the opportunity. >> All right, and thank you for watching this CUBE conversation with Dave Vellante for theCUBE, your leader in enterprise tech coverage. We'll see you next time. (upbeat music)

Published Date : Mar 29 2022


Stefanie Chiras, Ph.D., Red Hat | AnsibleFest 2019


 

>>Live from Atlanta, Georgia, it's theCUBE, covering AnsibleFest 2019. Brought to you by Red Hat. >>Welcome back, everyone, to theCUBE's live coverage of AnsibleFest here in Atlanta, Georgia. John Furrier here with my co-host Stu Miniman. Stefanie Chiras is here, the vice president and general manager of the RHEL business unit at Red Hat. Great to see you. >>Nice to see you, too. >>You had nearly all your career at IBM, and now Red Hat. Back, back in the fold. >>Yeah. >>So last time we chatted, at Red Hat Summit, about RHEL 8. How's it going? What's the update? >>Yeah, so we launched RHEL 8 at Summit. It was a huge opportunity for us to sort of show it off to the world. A couple of key things we really wanted to do there was make sure that we showed off the Red Hat portfolio. It wasn't just a product launch, it was really a portfolio launch. Feedback so far on RHEL 8 has been great. We have a lot of adopters on there early. It's still pretty early days; when you think about it, it's been a little over four and a half months. So, um, still early days, but the feedback has been good. You know, it's actually interesting when you run a subscription-based software model, because customers can choose to go to 8 when they need those features, and when they assess those features, they can pick and choose how they go. But we have a lot of folks who have areas of RHEL 8 that they're testing the feature function of. >>I saw a tweet you had on your Twitter feed: 28 years old, still growing up, still cool. >>Yeah. >>I mean, 28 years old. The world's an adult now. >>You know, Linux is running the enterprise now, and now it's about how you bring new innovation in. When we launched RHEL 8, we focused really on two sectors. One was, how do we help you run your business more efficiently? And then, how do we help you grow your business with innovation? 
One of the key things we did, which is probably the one that stuck with me the most, was we actually partnered with the Red Hat management organization and we pulled in the capability of what's called Insights into the product itself. So all RHEL subscriptions, 6, 7, and 8, all include Insights, which is a rules-based engine built upon the data that we have from, you know, over 15 years of helping customers run large-scale Linux deployments. And we leverage that data in order to bring that directly to customers. And that's been huge for us. And it's not only that, it's a first step into getting into Ansible. >> I want to get your thoughts on, we're here at AnsibleFest, day one of our two-day coverage. Red Hat announced the Ansible Automation Platform. That's the news. Why is this show so important in your mind? I mean, you've seen the history of the industry, a lot of technology changes happening in the modern enterprise. Now, as things become modernized, both public sector and commercial, what's the most important thing happening? Why is AnsibleFest so important this year? >>To me, it comes down to, I'd say, kind of two key things. Management and automation are becoming one of the key decision points that we see in our customers, and that's really driven by: they need to be efficient with what they have running today, and they need to be able to scale and grow into innovation. So management and automation is a core, critical decision point. I think the other aspect is, you know, Linux started out 28 years ago proving to the world how open source development drives innovation. And that's what you see here at AnsibleFest. This is the community coming together to drive innovation, super modular, able to provide impact right from everything from how you run your legacy systems to how you bring security to them, to how you bring new applications and deploy them in a safe and consistent way. It spans the whole gamut. 
>> So, Stefanie, there's so much change going on in the industry. You talked about what's happening in RHEL 8. I actually saw a couple of "Hello World" T-shirts, which were given out at Summit in Boston this year. Maybe help tie together how Ansible fits into this. How does it help customers take advantage of the latest technology and move their companies along to be able to take advantage of some of the new features? >> Yeah. So I really believe, of course, that in open hybrid cloud, which is our vision of where people want to go, you need Linux. So Linux sits at the foundation. But to really deploy it in a reasonable way, in a safe way, in an efficient way, you need management and automation. So we've started on this journey. We announced at Summit that we brought in Insights, and that was our first step. We've seen incredible uptick: since May, an 87% increase in the number of systems that are linked in, 33% more coverage of rules, and a 152% increase in customers who are using it. What that does is create a community of people using it and getting value from it, but also giving value back, because the more data we have, the better the rules get. So, one interesting thing: at the end of May, the engineering team worked with all the customers that currently have Insights linked in, and they did a scan for Spectre/Meltdown, which of course everyone in the industry knows about. Among the customers who had systems hooked up, they found 176,000 customer systems that were vulnerable to Spectre/Meltdown. We had an Ansible playbook that could remediate that problem, and we proactively alerted those customers. So now you start to see problems get identified with something like Insights. Now you bring in Ansible and Ansible Tower, and you can effectively decide: do I want to remediate? I can remediate automatically.
I can schedule that remediation for what's best for my company. So we've tied these three things together in kind of a stepwise function. In fact, if you have a RHEL subscription, you're hooked up to Insights; if Insights finds an issue, there's a fix for it, and with Ansible it creates a playbook. Now I can use that playbook in Ansible Tower. So it really ties nicely through the whole portfolio to be able to do everything seamlessly. >> It also creates collaboration, too. These playbooks can be portable, move across the organization -- do it once. That's the automation piece. >> Yeah, absolutely. So now we're seeing automation, and how you look at it across multiple teams within an organization. You could have a Tower admin be able to set rules and boundaries for teams. I can have an IT operations person be able to create playbooks for the security protocols. How do I set up a system? Being able to do things repeatedly and consistently brings a whole lot of value in security and efficiency. >> One of the powers of Ansible is that it can live in a heterogeneous environment. You've got your Windows environment -- I've talked to VMware customers that are using it -- and of course the cloud. Help us understand why RHEL plus Ansible is an optimal solution for customers in those heterogeneous environments. And I heard a little bit in the keynote about the road map -- maybe you can talk about where those fit together. >> Yeah, perfect. And I think your comment about a heterogeneous world is key. That is the way we live, and folks will have to live in a heterogeneous world as far as the eye can see. I think that's part of the value, right? To bring choice. When you look at what we do with RHEL, it's because of the close collaboration we have between my team and the team
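The Insights-to-playbook flow described above -- scan, alert, then remediate automatically or on a schedule -- can be sketched as an Ansible playbook. This is a minimal illustrative sketch, not the actual remediation Insights generates: the host group `rhel_servers` and the choice to remediate via kernel and microcode package updates are assumptions for the example.

```yaml
# Hypothetical remediation playbook in the spirit of the Insights workflow
# described above. The host group and package list are illustrative.
- name: Remediate Spectre/Meltdown-class kernel vulnerabilities
  hosts: rhel_servers
  become: true
  tasks:
    - name: Update kernel and CPU microcode packages
      yum:
        name:
          - kernel
          - microcode_ctl
        state: latest

    - name: Reboot so the patched kernel and microcode take effect
      reboot:
        reboot_timeout: 600
```

A playbook like this could be run ad hoc or scheduled through Ansible Tower, matching the "remediate automatically or schedule it for what's best for my company" decision Chiras describes.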
That's in the management BU around Insights: our engineering team is actively building rules, so we can bring added value in the sense that we have our Red Hat engineers who build RHEL creating rules to mitigate things and to help with migration, which eases RHEL 8 adoption. We put upgrades in place in the product, of course, but there's also a whole set of rules, curated and supported by Red Hat, that help you upgrade to RHEL 8 from a prior version. So it's the tight engineering collaboration that we can bring. But to your point, we want to make sure that Ansible, Ansible Tower, and the rules that are set up bring added value to RHEL and make that simple. But it does have to be in a heterogeneous world; I'm going to live with neighbors in any data center, of course. >> One of the pieces of the announcement talked about collections. Is there anything specific from your team that should be pointed out about collections in the platform announcement? >> So I think collections start to bring out the simplicity of being able to pull playbooks and roles all into one spot. We'll be looking at key scenarios that we pull together that mean the most to RHEL customers. Migration, of course, is one. We have other spaces where we work with key ecosystem partners: SAP HANA running on RHEL has been a big focus for us in partnership with SAP, and we have a playbook for installing SAP HANA on RHEL. So this collaboration will continue to grow. I think collections offer a huge opportunity for a simpler experience, to be able to deliver an automated solution, if you will, on your floor. >> Automation for all. That's the theme here. >> That's right. >> I want to get your thoughts on the comment you made about analytics inside RHEL. This seems to be a key area for Insights.
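Collections, as described above, package roles, modules, and playbooks under a single namespace so they can be pulled "into one spot." As a hedged illustration of how that looks in practice -- the second collection name below is hypothetical; real names are published on Ansible Galaxy or Red Hat Automation Hub:

```yaml
# requirements.yml -- the first entry is a real community collection;
# the second is a made-up example of vendor-curated SAP content.
collections:
  - name: community.general
  - name: redhat.sap_install   # hypothetical name, for illustration only
```

Installing is one command, `ansible-galaxy collection install -r requirements.yml`, after which playbooks reference the bundled content by fully qualified name (e.g. `community.general.timezone`), which is what makes collections portable across teams.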
Tying the two things together, so it's kind of cohesive but decoupled -- I see how that works. What kind of analytical capabilities are you guys serving up today, and what's coming around the corner? Because environments are changing: hybrid and multicloud are part of what everyone's talking about. Take care of the on-premises first, take care of the public cloud; now the hybrid operating model has to look the same. This is a key thing. What kind of new analytics capabilities do you see? >> Yes. So let me step you through that a little bit, because your point is exactly right. Our goal is to provide a single experience that can be on-prem or off-prem and provides value across both, as you choose to deploy. So Insights, which is the analytics engine that we use, built upon our data -- you can have that on-prem with RHEL, and you can have it off-prem with RHEL in the public cloud, where we have data coming in from customers who are running RHEL in the public cloud. That provides a single view. If you see a security vulnerability, you can scan your entire environment, which is great. I mentioned earlier: the more people we have participating, the more value comes, so new rules are being created. So as a subscription model, you get more value as you go. And you can see the automation analytics that was announced today as part of the platform. That brings analytics capabilities: first, to be able to see who's running what and how much value they're getting out of automation. The presentation by J.P. Morgan Chase was really compelling, to see the value that automation is delivering to them. For a company to be able to look at that in a dashboard with automation analytics, that's huge value. They can decide: do we need to leverage it more here? Do we need to bring value there? Now you combine those two together, and being informed is the best. >> I want to get your reaction.
We made a comment with Stu in our opening segment around the J.P. Morgan example: hours to minutes, days to minutes, depending on the configuration. Automation is a wonderful thing. We're pro-automation; as you know, we think it's going to be a huge category. But we took a survey inside our community. We asked our practitioners and community members about automation, and they came back with the following -- I want to get your reaction. Four major benefits: automation-focused efforts allow for better results and efficiency; security is a key driver in all this; automation drives job satisfaction; and finally, the infrastructure and DevOps folks are getting re-skilled up the stack as software abstraction takes hold. Those are the four main points of why automation is impacting the enterprise. Do you agree with that? Can you comment on some of those points? >> I do, I agree. I think skills is one thing that we've seen over and over again. Skills is key. We see it in Linux: we have to help bridge Windows skills into Linux skills. I think automation that helps with skills development helps not only individuals but the company. The second piece that you mentioned, about job satisfaction -- at the end of the day, all of us want to have impact. And when you can leverage automation for one individual to have impact that is much broader than they could have before with manual tasks, that's just powerful. >> You know, Stu and I were also talking about one of the keywords that kept coming up in the keynote: scale. Scale is driving a lot of change in the industry, at many levels. Certainly, software automation drives more value when you have scale, because you're scaling more stuff than you can manually configure. Software at scale is certainly going to be a big part of that.
But the role of the cloud providers -- the big cloud providers, IBM, Amazon, and big enterprises like Microsoft -- they're handling massive scale. So there's a huge change in the open source community around how to deal with scale. This is a big topic of conversation. What are your thoughts? Any general opinions on how scale is changing the open source equation? Is it more toward platforms, less tools, vice versa? Any trends you see? >> I think it's interesting, because when I think of scale, I think both volume, or quantity, as the hyperscalers do, and also complexity. The public clouds have great volume that they have to deal with, in numbers of systems, but they have the ability to customize, leveraging development teams and leveraging open source software. They can customize all the way down to the servers and the processor chips. As we know, for most folks, they scale too -- but when they scale across on-prem and off-prem, it's adding complexity for them. And I think automation has value both in solving volume issues around scale and in solving complexity issues around scale. So even midsize businesses, if they want to leverage on-prem and off-prem -- to them, that's complexity scale. And I think automation has a huge amount of value to bring there. >> It abstracts away the complexity. Automation drives job satisfaction, but also benefits of efficiency. >> Absolutely. And ultimately, the greatest value of efficiency is that there's now more time to bring in innovation. >> Exactly. So, Stefanie, last thing: what feedback are you hearing from customers? One of the things that struck me about the J.P. Morgan talk is that they made great progress, but he said they had about a year of working with security, the cyber and control groups, to get through that knothole of allowing them to really deploy automation.
So, you know, usually with something like Ansible you think: oh, I can get a team, let me get it going. But -- oh wait, no, hold on -- corporate needs to work its way through it. Is that something you hear generally? Is that a large-enterprise thing? What are you hearing from customers? >> I think we see it more and more, and it came up in the discussions today. The technical aspect is one aspect; the sort of cultural aspect, the ability to pull it in, is a whole separate aspect. For all of us who are engineers, we think, well, the technology is the tough bit. But actually, the culture bit is just as hard. One thing that I see over and over again is that the way companies are structured has a big impact. The more siloed the teams are -- do they have a way to communicate? Because when you bring in automation, it has the ability to drive more ubiquitous value across the organization. But if you're not structured to leverage that, it's really hard. If your IT ops people don't talk to the application folks, bringing that value is very hard. So I think it goes along in parallel, right? The technical capability is one aspect; how you get your organization structured to reap the benefits is another aspect. And it's a journey -- that's really what I see from folks. It is a journey. And I think it's inspiring to see the stories here when customers come back and talk about it. But to me, the greatest thing about it is: just start. Just start wherever you are, and our goal is to try to provide on-ramps for folks wherever they are in their journey. >> It's an arc over people's careers, and certainly the modernization of the enterprise, public sector, and governments -- from how they procure technology to how they deploy and consume it -- is radically changing, very quickly, to scale. These things are happening. I've got to get your take.
I want to get your expert opinion on this, because you've had some different experiences in the industry. Cloud 1.0 was the era of compute and storage startups; companies like Airbnb are examples of cloud scale. But now, as we start to get to the impact on businesses and the enterprise with hybrid and multicloud, there's a Cloud 2.0 equation. As I mentioned, observability was just network management -- a white-space, small category, with a company going public -- and it's important now, kind of a subsystem of Cloud 2.0. Automation seems to feel the same way, we believe. What's your definition of Cloud 2.0? Cloud 1.0 was simply stand up some storage and compute, use the public cloud; Cloud 2.0 is enterprise. What does that mean to you? How would you describe Cloud 2.0? >> So my view is: Cloud 1.0 was all about capability. Cloud 2.0 is all about experience, and that brings a whole new way that we look at every product in the stack, right? It has to be a seamless, simple experience, and that's where automation and management come in, in spades. All of that stuff you needed in capability -- having it be secure, having it be reliable, resilient -- all of that still has to be there. But now you need the experience, too. To me, it's all about the experience and how you pull that together. And that's why I'm thrilled to be here at AnsibleFest: the more I can work with the teams doing Ansible, Insights, and the management and automation aspects, the better it'll make the RHEL experience. >> It's more than people think. Software drives it all. >> Absolutely. >> Stefanie, thanks for sharing your insights on theCUBE. Appreciate you coming back on, and great to see you. >> Great to be here. Good to see you. >> Coverage here in Atlanta. I'm John Furrier with Stu Miniman. More coverage from AnsibleFest after the short break. We'll be right back.

Published Date : Sep 24 2019



Darryl Addington, Five9 | Enterprise Connect 2019


 

>> Live from Orlando, Florida, it's theCUBE, covering Enterprise Connect 2019. Brought to you by Five9. >> Hello from Orlando, Florida. I'm Lisa Martin with Stu Miniman of theCUBE, and we are on the floor in Five9's booth at Enterprise Connect 2019. We're excited to be joined by another guest from Five9: Darryl Addington, the director of portfolio marketing. Darryl, it's great to have you. >> So nice to be here. >> Yeah, this has been great for us. We've had a lot of conversations over the last few days with your execs, customers, and partners. This space -- enterprise communication and collaboration -- is really hot. >> It really is, yes. >> So much opportunity there. Give us a little bit of a picture of what you're seeing in the global market from a trend perspective, to where your team says: all right, let's really work with our customers right there on the problems. >> Yeah, so it's really interesting. As you said, it's a very hot space right now. Everybody that has any sort of customer service has a contact center, and moving from what they're currently using today into the cloud is what's happening en masse. The market as a whole -- contact centers as a whole -- has really said: okay, the cloud does look like it's got the right capabilities for me. And as they age out of those older systems, they're moving into the cloud. So it's very exciting, and there are a lot of new capabilities there as well. My team really focuses on what customers are trying to do in the contact center: what types of problems they're trying to solve today in order to meet the needs of their customers. And then we work to put together materials and things like that to help articulate what makes us different, what makes us unique, or how we can uniquely solve some of the issues that contact centers are facing today. >> Darryl, I love that. Bring us in a little bit more with those customers. theCUBE has been covering the cloud for
quite a long time, and speed and agility are some of the top things we hear when we're talking about cloud. Generally, I know that translates to the contact center, but what else? What are some of the key concerns you're meeting with customers on, and what are you hearing from them today? >> Yeah. I think one of the big things that people maybe don't realize when they're ready to move into the cloud, or are starting to move into the cloud, is that the pace of updates and innovation you're capable of in the cloud increases pretty drastically. There are two ways that happens. One is that we're a multi-tenant cloud offering -- you know what that is -- so when we push code out, we do it for all of our customers, all at the same time. There's no upgrade path they have to go on, and no extra work for them or our professional services teams; we just get it to them. So they have new capabilities at their fingertips, all the time, that they can take advantage of. That's one big area. >> It's such a great point. Take Microsoft as an example -- the Microsoft of previous years, where I have Patch Tuesday, and am I on the latest patch, and what's going on with everything. If I'm running Office 365, if I'm living on Microsoft Azure, I'm not asking what version I'm on or how I got that latest security patch, because they take care of it. >> Yeah, that absolutely all goes away. The other thing that's happened, and it's really interesting, is that in an on-premises world you get the software, you buy the perpetual license, of course, you load it onto servers, and then somebody integrates it all together so that the agent can see where the customer information is, and the contact center, and some of the WFO features. In a cloud environment, that's all handled by the vendor, and it has a kind of surprising result
as part of that. The surprising result is that it doesn't break as often. It's not as fragile as it was when it got all stitched together by a specialized team of people who, by the way, have moved on to the next project -- they're not there to support it anymore. Because we have hundreds of customers using those integrations on our platform, and they're using them every single month, we have to make sure each one of the integrations is functioning all the time. And that lets them -- this is the surprising part -- iterate in the contact center in a way that the contact center hasn't really been able to do until the cloud. You can make changes every couple of weeks if you want to. And actually, you'll have David on, I think, a little bit later, from Carfax, and that's one of the things he talks about, so hopefully you can ask him how they've changed since they came on board. >> We'd love to know that. As we talk about the value of cloud for contact center as a service, there are the obvious operational and cost benefits of the cloud, but I also know Five9 handles about five billion minutes of recorded customer conversations per year -- Rowan Trollope, your CEO, will be coming on later today. Previously, there was a ton of dark data in there, but there's so much insight in it. The problem is, consumer behavior is changing everything. We expect an omnichannel experience; we're demanding it. The contact center still has the original problem of meeting customer demands and resolving a problem, but now there's so much more complexity because of omnichannel and all of these different things. So talk to us about how you guys at Five9 enable customers to really ensure that the cloud is going to maximize, say, their opportunity to dial up AI. >> Right. So it's interesting. I mean, AI has got
so much potential in this marketplace, and Rowan will talk about that this afternoon. It's an exciting time because, first of all, AI is a whole bunch of different types of technologies. At the core of it is machine learning, of course, which allows you to create these different types of models for doing things that human beings have been good at. In the case of Google, what we've seen is that their dictation has improved drastically over previous generations of recognition. That's very exciting, because there are lots of things you can do with dictation. That's part of the dark-data story we've been talking about: you can pull the words out of the voice recordings sitting there and use them effectively. But there's another layer, which is natural language understanding: once you have the words, what do the words mean? And that's, I think, where Five9 will be able to provide significant value, again based on that dark data. If you think about the way a customer interacts with a business, they're not going to use the words you use when you're talking to Siri on your mobile phone, right? They're going to talk about whatever it is they're doing inside of that business. If it's pharmaceuticals, there's a whole different vocabulary that you're going to use communicating with an agent around pharmaceuticals, and the same for every single industry. And that's where this really starts to come into play: that machine learning system has to have all those specific words -- not just industry-level words but business-level words. If I'm calling Amazon about a specific capability I have on Amazon, that's a different set of words than if I'm calling my cable company, because there are specific products. So all of that is going to lead, I think, into
a future where we're able to provide some very interesting and compelling AI-driven solutions for contact centers. >> All right. So, Darryl, Lisa and I have had a chance to see the show floor and we've attended some of the keynotes, but one of the things we rarely get to do is actually go to the breakout sessions. You led one of those yesterday -- maybe bring us inside. How was the attendance? Any good questions from the audience, and what were some of the key takeaways? >> You know, attendance was good; it was a good session. We were talking about the intelligent cloud contact center. One of the things contact centers are trying to think about is: how do I create a system, or a platform and a process, within my contact center where I can better serve customers? So we've coined this term, the intelligent cloud contact center, which is what we deliver with Five9 Genius, the Five9 platform for delivering services out into contact centers. Essentially -- just running through it really quickly -- the intelligent cloud contact center is integrated into all the systems, as we talked about previously: CRM and WFO capabilities, out of the cloud. You just turn it on and configure it, and you can use it; of course, you can customize it for your specific business processes as well. The second element is around agent empowerment. The agent is really the human touch point between the customer and the business, and usually when the agent gets involved, it's this kind of critical moment, right? So the agent needs to know -- either in their head, but there's a lot of information, and that can extend your training time, or right in front of them on their desktop -- all the information about that customer, so they can help the customer continue their journey, whatever that happens to be, and hopefully drive it to conclusion. So that's the second element. And then the
third element is really about reliability. It has to be a reliable system, because you're offering it as a cloud service: if it's not available, you can't use it; if you can't use it, you can't run your contact center, and you're not going to provide great service to your customers or drive up that customer experience. So that's the third element of the intelligent cloud contact center. >> And you seemed to get good feedback from the crowd? >> Yeah, we had a number of interesting questions come back on that. >> Anything in particular -- a question that might be a common thing users would be asking? >> You know, none of them are coming to mind at the moment. I remember them as being interesting -- I flagged them as such, and we had an interesting conversation after the session -- but I don't actually remember. It's always a blur at these shows. >> There's a lot of activity; there's been good buzz. Any other key takeaways you've had as we get into the third day of the show, from some of the interactions? >> Well, I think every single business is at a different place. While it's fun to talk about AI, and it's very interesting -- I'm excited about it -- depending on where a business is right now in their particular path and journey, there are still a lot of things you can do that don't require AI to transform what you're doing in the contact center. And the intelligent cloud contact center, I think, is one of the ways a business can really do that: get that data all in one location, get it in front of the agents, let the agent know what's going on with that customer at that moment and be able to communicate with the customer, and then do that with confidence that you can iterate and improve your business. So, one of the things I didn't talk about yet -- I'll do it now -- is metrics, and being able to know what's happening in your
contact center, and that's obviously a fully integrated part of the platform. >> What are some of those key metrics? We think of Net Promoter Score, customer lifetime value, or predicting customer lifetime value. What are some of the key metrics that Five9 is helping your customers achieve through the intelligent cloud contact center? >> First-call resolution is, of course, a critical one that a lot of contact centers use to determine how well you're serving your customers. I think one of the key things that's relatively new for the contact center industry is giving the stats to the agents, so that the agents know how they're performing throughout the day. We all like to know how we're doing, right? We want feedback on a regular basis. When you don't provide that to an agent, or if you just have a really simple wallboard up that the agents are looking at, then all they have to go on is that weekly call with their supervisor, where the supervisor goes: well, actually, on Tuesday at 10:00 you forgot to introduce the brand correctly, and on Wednesday at 11:00 you didn't greet the customer in the appropriate way, and you didn't end the call with the appropriate greeting. That's no fun for an agent, right? It's basically a mistake-driven set of feedback. But if you give them the stats up front -- and the performance dashboard that we provide does that -- then they can see exactly how they're doing throughout the course of the day, and that helps them do better. People will auto-correct almost instantly when they get feedback on how they're doing. And it makes them feel better, because with the gamification aspect, you're ratcheting up how you're doing, you're starting to compete a little with your peers, and at least you're feeling good. Then at the end of the week, the supervisor can come back and say: hey, great job this week.
let's talk about some deeper skills that you can work on with you to you know improve your close rate on sales for example rather than yeah I love that because I cringe a little bit of gamification but knowing this kind of environment right if it's something that I can proactively look at it myself I can engage on I take ownership myself so right you can have more constructive engagement and work on you know strengths when you're going with your management that's what it's really all about it's all about making it fun it's not about having a you know a silly game on the side it's all about am I doing the things that the company wants me to do and then there's some nice fringe benefits that can be out of it too you know monetary awards or things that you can buy tickets to sporting events things like that it's all part of the so we talk so much about customer experience CX big theme of the show every conversation that we've had but I'm just kind of wondering and hearing you talk about how five nine five nine is enabling the agent experience when you're talking with customers do they see customer experience an agent experience as separate issues to deal with or are you really saying they are one in the same and here's how we're going to enable you through the agent empowerment to deliver that awesome CX yeah it's a it's a really great question because you're absolutely right if the agent is you know like we just talked about a second ago getting negative feedback every week about what they didn't do right well how is that gonna you know how is that going to motivate them to be excited about talking to the customer you know whereas if you've got an environment where the agent doesn't have to so part of the problem with the old desktops for the agents is this they're just too complicated right so if I'm on the phone and I'll just do it like a parody of it and I'm trying to talk to you guys you know but I'm constant trading on you know what I'm digging through different 
screens and it's just really hard to connect with the customer so if you create that environment where it's easy to understand what the customer data is intuitive you know almost so it doesn't require much training but then also you don't have to focus on it and you're giving constant feedback on how you're doing in your performance all throughout the day and your supervisor sessions at the end of every week are on a positive note doing great job let's see how we can increase your close rate now you've got an agent who's enjoying their job which is cool because then of course if they're enjoying their job then when they get that call from a customer they're interacting with customers and they're empowered to make decisions right and to have the right content to deliver that through the right channel to you know not make that the last exchange that that business has with a customer but to actually retain them as you said retention is huge that agent empowerment and being able to make decisions I can imagine can be an absolute game changer yeah I know absolutely and a lot of call centers would like to become they would like to transform and be able to help the business in those key areas whether it's retaining customers or whether it's cross-selling and upselling different capabilities and if you have the right tools it's much much easier to enable the agents to do that all right so Darryl we've been talking with your team this week about many of the announcements at the show what have you been involved in what's the ones that been exciting you the most to be able to get in front of customers and about well so one of the one of the things we've been talking about over and over is the agent desktop and the information that the agent has and so we renamed our platform at there as part of this show and so we've renamed it to five nine genius the intelligent cloud contact center as I've mentioned previously and we think that name gives us some interesting ways to talk 
about really the power that we're bringing to bear with the our cloud contact center because it's integrated because the desktop is intuitive because it pulls information from all the appropriate data sources including customer intent in a self-service channel and it's still bringing that to the agent it really does empower a different way in a different method of communicating with customers that businesses can use to improve their customer experience Darryl thank you so much for joining suing me on the Cuba separated sharing about what you're doing and how that voice of the customer is really impacting everything that 5/9 does we appreciate your time yeah thanks so much thanks for having me first a minute man I'm Lisa Martin you're watching the cube [Music]
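An aside on the metrics discussed in the exchange above: Net Promoter Score and first call resolution both come down to simple arithmetic over survey and ticket data. A minimal sketch in Python; the function names and sample data here are illustrative only, not part of any Five9 product:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 survey ratings: % promoters (9-10) minus % detractors (0-6)."""
    if not ratings:
        raise ValueError("no ratings")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

def first_call_resolution(contacts_per_issue):
    """FCR: share of issues resolved on the very first contact.
    Input maps an issue id to the number of contacts it took to resolve."""
    if not contacts_per_issue:
        raise ValueError("no issues")
    resolved_first = sum(1 for n in contacts_per_issue.values() if n == 1)
    return 100.0 * resolved_first / len(contacts_per_issue)

# Hypothetical data: eight survey responses, five resolved issues.
ratings = [10, 9, 9, 8, 7, 6, 3, 10]
issues = {"A": 1, "B": 2, "C": 1, "D": 1, "E": 3}
print(f"NPS: {net_promoter_score(ratings):+.0f}")        # 4 promoters, 2 detractors of 8
print(f"FCR: {first_call_resolution(issues):.0f}%")      # 3 of 5 resolved on first contact
```

Both numbers are percentages, which is why they work as agent-facing dashboard stats: they update immediately as new calls close.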

Published Date : Mar 20 2019

**Summary and Sentiment Analysis are not shown because of an improper transcript**


Linda Hill, Harvard | PTC LiveWorx 2018


 

>> From Boston, Massachusetts, it's the Cube, covering LiveWorx 18, brought to you by PTC. (light electronic music) >> Welcome back to Boston, everybody. This is the Cube, the leader in live tech coverage. We're covering day one of the LiveWorx conference that's hosted by PTC. I'm Dave Vellante with my cohost Stu Miniman. Professor Linda A. Hill is here. She's the Wallace Brett Donham Professor of Business Administration at the Harvard Business School. Professor Hill, welcome to the Cube. Thanks so much for coming on. >> Thank you for having me. >> So, innovation, lot of misconceptions about innovation and where it stems from. People think of Steve Jobs, well, the innovation comes from a single leader and a visionary who gets us in a headlock and makes it all happen. That's not really how innovation occurs, is it? >> No, it is not, actually. Most innovation is the result of a collaboration amongst people of different expertise and different points of view, and in fact, unless you have that diversity and some conflict, you rarely see innovation. >> So this is a topic that you've researched, so this isn't just an idea that you had. You've got proof and documentation of this, so tell us a little more about the work that you do at Harvard. >> So really over 10 years ago, I began to look at the connection between leadership and innovation, because it turns out that like a lot of organizations, the academy is quite siloed, so the people studying innovation were very separate from the ones who studied leadership, and we look at the connection between the two. When you look at that, what you discover is that leading innovation is actually different from leading change. Leading change is about coming up with a vision, communicating that vision, and inspiring people to want to fulfill that vision. Leading innovation is not about that. 
It's really more about, how do you create a space in which people will be willing and able to do the kind of collaborative work required for innovation to happen? >> Sometimes I get confused, maybe you can help me, between invention and innovation. How should we think about those two dimensions? >> Innovation and invention. The way I think about it is, an innovation is something that's both an invention, i.e. new, plus useful. So it can be an invention, or it can be creative, but unless it's useful and addresses an opportunity or a challenge that an organization faces, for me, that's not an innovation. So you need both, and that is really the paradox. How do you unleash people's talents and passions so you get the innovation or the invention or the new, and then how do you actually combine, or harness, all of those different ideas so that you get something that is useful, that actually solves a problem that the collective needs solved? >> So there's an outcome that involves changing something, adoption, as part of that innovation. >> For instance, one of the things that we're doing a lot right now is working with organizations, incumbents, I guess you'd call them, that have put together these innovation labs to create digital assets. And the problem is that those digital assets get created, they're new, if you will, but unless the core business will adopt them, use them, and implement them, they're not going to be useful. So we're trying to understand, how do you take what gets created in those innovation labs, those assets, if you will, and make sure that the organization takes them in and scales them so that you can actually solve a business problem? >> Professor Hill, a fascinating topic I love digging into here. Because you see it so many times: startups are often started by people who get frustrated inside a large company.
I've worked for some very large companies, some of which have had labs or a research division, and even when you carve out time for innovation and run programs on that, there are corporate antibodies that fight against it. Maybe talk a little bit about that dynamic. Can large companies truly innovate? >> Yes, large companies can truly innovate. We do see it happening, it is not easy by any means, and I think part of the dilemma for why we don't see more innovation is actually our mindset about what leadership is about and who can innovate. So if I could combine a couple of things you asked: invention. Often when we talk to people about what innovation is, they think about technology, and they think about new, and if I'm not a technologist and I'm not creative, then I can't play the game. But what we see in organizations, big ones that can innovate, is they don't separate out the innovators from the executors. They tell everybody, guess what, your job, no matter who you are, of course you need to deal with making sure we get done what we said we'd deliver, but if we're going to delight our customers, or we're ever going to really get them to be sticky with us, you also need to think about not just what you should be doing, but what you could be doing. In the literature, in the research, that's called, how do you close an opportunity gap and not just a performance gap? In the organizations we look at that are innovative, that can innovate time and again, they have a very democratic notion: everybody has a role to play. So our work, Collective Genius, is called Collective Genius because what we saw in Pixar, which was the touchstone for that work, is that they believe everybody has a slice of genius. They're not all equally big or whatever, but everybody has a contribution to make, and you need to use yours to come up with what's new and useful. A lot of that will be incremental, but some of it will be breakthrough.
So I think what we see with these innovation labs and the startups, if you will, is that often people do go to start them up, and of course they eventually have to grow their business, so a part of what I find myself doing now is helping startups that have to scale figure out how to maintain that culture, those capabilities, that allowed them to be successful in the first place, and that's a tough one for startups, right? >> Yeah, I think Pixar's only about a 1,500-person company, and they all have creativity in what they do. I'm wondering if there's some basic training that's missing. I studied engineering, and I didn't get design training in my undergraduate studies. It wasn't until I was out in the workforce that I learned about that. What kind of mindset and training do you have to do to make sure people are open to this? >> One of the things that I did related to this, about five years ago, is I told our dean at Harvard Business School that I needed to join the board of an organization called Art Center. I don't know if you're aware of Art Center in Pasadena. It's the number one school of industrial design in the U.S., and people don't know about it, 'cause, I always laugh at them, the man who designed the Apple Store is a graduate there, the man who designed the Tesla car, et cetera, so they're not so good at it. But one of the things that we've all come to understand is design thinking, lean startup: these are all tools that can help you be better at innovation, but unless you create an environment around that, where people are going to be willing to use those tools and make the missteps, the failures that might come with it, and know how to collaborate together, even when they're a large organization, I mean, it's easier when you're smaller. But unless you know how to do all that, those tools, the lean startup or digital or design thinking or whatever, 'cause I'm working with a lot of the people who do that, and I have deep respect for them, nothing gets done.
In the end, we are human. We all need to know, first off, that it's worthwhile to take the risk to get done whatever it is you want to get done, so what's the purpose of the work, how's it going to change the world? The second thing is we need to share a set of values about learning, because we have to understand, as you well know, you cannot plan your way to an innovation, you have to act your way. And with the startup, you act as fast as you can, right, so somebody will give you enough money before you run out of money. It's a similar process in a large company, an incumbent, but of course it's more complicated. The other thing that makes it more complicated is that companies are global, and the other part that makes it more complicated, that I'm seeing, like, in personalized medicine, is that you need to build an ecosystem of different kinds of expertise, nanotechnologists, biotechnologists, different expertise coming together. All of this, frankly, you don't learn any of it in school. I remember learning that you can't teach anyone how to lead. You actually have to help people learn how to lead themselves, and technologists will frequently say to me, I don't know why; you're a leadership professor? Well, this is a technical problem. We just haven't figured out the platform right, and once we get it right, all will be well. No, once you get it right, humans are still going to resist change and not know how to necessarily learn together to get this done. >> I wonder, are there any special leadership skills we need for digital transformation? Really kind of the overarching theme of the show here, help connect the dots for us. >> So the leading change piece is about having a vision, communicating it, and inspiring people. What it really turns out, when we look at exceptional leaders of innovation, and all of us would agree that they've done wonderful things time and again, not just once, is that they understand it is collective.
They spend time building a culture and capabilities that really will support people collaborating together. The first one they build is, how do we create a marketplace of ideas through debate and discourse? Yeah, you can brainstorm, but eventually we have to abrade and have conflict. They know how to have healthy debates in which people are taught, in terms of skills, basic stuff: not just listening and inquiring, but how to actively advocate in a constructive way for your point of view. These leaders have to learn how to amplify difference, whereas many leaders learn how to minimize it. And as the founder of Pixar once said, you can never have too many cooks in the kitchen. Many people believe you can. It's like today, you need as much talent as you can get. Your job as a leader is, what are the skills you need to get those top cooks to be able to cook a meal together, not to reduce the amount of diversity? You've got to be prepared for the healthy fight. >> You've pointed this out in some of your talks, that you've got to have that debate. >> Yes, you have to. >> That friction, to create innovation, but at the same time it has to be productive. I know it can be toxic to an organization, maybe talk about that a little. >> I think one of the challenges is, what skills do people need to learn? One is, how do you deal with conflict when people are very talented and passionate? I think many people avoid conflict or don't know how to engage it constructively, just truly don't, and they avoid it. I find that many times organizations aren't doing what they need to do because the leader is uncomfortable.
The other thing, and I'm going to stereotype horribly here, but I'm an introvert, and that book Quiet is wonderful, but one of the challenges you have if you're more introverted, or if you're more technical and you tend to look at things from a technical point of view, is that you often find the people with that kind of drive believe there's a right answer, a rational answer we need to get through to, as opposed to understanding that really innovative ideas are often the combination of ideas that look like they're in conflict initially. And by definition, you need to have the naive eye and the expert working together to come up with that innovative solution. So for someone who's a technologist to think they should listen to someone who's naive about a technical problem, just the very basic mindset you have about who's going to have the idea, that's a tricky one. It's a mindset, it's not even just a skill level; it's more, who do you think actually is valuable? Where is that slice that you need at this moment going to come from? It may not be from that expert, it may be from the one who had no point of view. I heard a story when I was collecting my data, that apparently Steve Jobs went to see Ed Land, we're here in Boston, over at Polaroid, which is one of the most innovative companies in history. And he said, what do I need to learn from you? And what Land said to him is, whenever my scientists and technologists get stuck, I have some of the art students or the humanities students come in and spend time in the lab. They will ask the stupid question because they don't know it's stupid. The expert's not going to ask the stupid question, particularly the tech expert, not going to ask it. They will ask the question that gets to first principles. I think, though I wouldn't want to be held to this, nor would the person who was telling me the story, that's partly how they came up with the instant camera.
Some naive person said, why do I have to wait? Why can't I have it now? And of course, silly so-and-so, you don't know it takes this, that, and the other. Then someone else thought, why does she have to wait? I think it was really a she who asked the question, according to the person telling me this, and they came up with a different way. Who said it has to be done in a darkroom in that way? I think that there are certain things about our mindset, independently of our skill, that get in the way of our actually hearing all the different voices we need to hear to get that abrasion going in the right way. >> Listening to those Columbo questions, you say, can sometimes lead to an outcome that is radically different. There's a lot of conversation in our industry, the technology industry, about what we call the quarterly shot clock: companies are on a quarterly reporting mechanism, a requirement from the SEC. A lot of complaints about that, but at the same time, it feels like, at least in the tech business, U.S. companies tend to be more innovative. But again, you hear a lot of complaints about, well, they can't think for the long term. Can you help us square that circle? >> It's funny, so one thing is, you rarely ever get innovation without constraint. If you actually talk to people who are trying to innovate, there need to be boundaries around what they're doing; to be completely free rarely leads to innovation, it is the constraint. Now, we did do a study of boards to try to understand when a board is facilitating innovation and when a board is interfering with it. We interviewed CEOs and lead directors of a number of companies and wrote an article about that last year, and what we did find is that many boards actually are seen as being inhibitors. They don't help management make the right decision.
Then of course the board would say management's the one that's too conservative, but this question about the board, and guidance, and all of these issues comes up when you're looking at research analysts and who you play to, and I've been on corporate boards. One thing is that the CEO needs to know that the board is actually going to be supportive of his or her choices, relative to how you communicate why you're making the choices you're making. So there is pressure, and I think it's real. We can't tell CEOs, no, you don't need to care about it, 'cause guess what, they do get in trouble if they don't. On the other hand, if they don't know how to make the argument for investing in terms of helping the company grow, then in the long run, innovation is not innovation for innovation's sake, it's to meet customer needs so you can grow, so you need to have a narrative that makes sense and be able to talk with the different stakeholders about why you're making certain choices. I must say that I think many times companies may be making the right choice for the long haul and get punished in the short run, for sure that happens, but I also think that there are those companies that get away with a lot of investment in the long haul, partly because they do, over time, deliver, and there is evidence that they're making the right choices, or they have built a culture where people think what they're saying might actually happen or be delivered. What's happening right now, because of the convergence of industries, is that I think for a lot of CEOs it's a frightening time; it is difficult to sustain success these days, because what you have to do is innovate at low cost. Going back to the earlier piece about boards, one of the things we've found is that so many board members define innovation as being technology.
Technology has a very important enabling role to play in all of this, but they have such a narrow definition of it that, again, they create a culture where they let the people in the innovation lab innovate, but not one where everybody understands that all of us, together, need to innovate in ways that will also prepare us to execute better. They don't see the whole culture transformation; digital transformation often requires cultural transformation for you to be able to get this stuff done, and that's what takes a long time. It takes a long time to get rid of your legacy systems and put in the new ones, or get that balance right, but what takes even longer is getting the culture to be receptive to using that new data capability they have, and working in different ways, and collaborating when they've been very siloed and they're paid to be very siloed. I think that unless you show, as a CEO, that you are actually putting all of those building blocks in place, that that's what you're about, that you understand it's a transformation at that level, and you're just talking to the analysts about, we're going to do x, with no evidence about your culture or anything else going on, or how you're going to lead to attract and retain the kind of talent you need, no one's buying that. I think that that's the problem: there's not a whole story that they're telling about how this all goes together and how they're going to move forward on it. >> To your other point, is there data to suggest, can you quantify, the relationship between diversity and innovation? >> There are some data about that; I don't have it at hand. I find it very funny, as you can see, I'm an African-American woman. My work is on leadership, globalization, and innovation. I do a lot of work on how you deliver global strategies. I often find when I'm working with senior teams, they'll ask me, would you help us with our inclusion effort?
And I think it's partly because of who I am, and diversity comes up in our work; if you actually build the environments we're talking about, they tend to be more inclusive about diversity of thought. Not demographic diversity; those can be separate, as we well know, because we know Silicon Valley is not a place where you see a lot of demographic diversity, but you might see diversity of thought. I haven't asked, it's interesting, but I have had some invitations by governments, too. Japan, which has womenomics as part of its policy: they need to get more women into the economy, frankly, otherwise they can't grow as an economy. It turns out that the innovation story is the business case that many businesses, or business people, find they can buy into. It doesn't feel like you're doing it just because it's the right thing, not that you shouldn't do the right thing, but it helps them understand how you really, really make sure that the minority voice is heard, and I mean minority of thought, independent of demographics. But if you create an environment as a leader where you actually run your team so that people do feel they can speak up, as you all know, so often I'll talk to people afterwards and they'll say, I didn't say what I really thought about those ideas, because I didn't want to be punished, or I didn't want to step into that person's territory. People are making decisions based on very incomplete information, everyone knows. What often happens is it gets escalated up. We had one senior team complaining, everything is so slow here, at a very big bank, not the one I'm on the board of, another very big bank we're working with. Everything's so slow, people won't do anything. So when we actually asked people, what's happening? Why aren't you making decisions? First off, decision-making rights are very fuzzy in this organization, except at the very top, so what they say is, all decisions, actually, are made on the 34th floor.
We escalate, 'cause if you make a decision, they're going to turn it over anyway, so we've backed off; or, we don't say what we think, 'cause I don't want them to say what they think about my ideas, 'cause we actually have very separate business units here. >> We might get shot. >> You might get shot. That's the reality that many people live in, so we're not surprised to see that not very many organizations can innovate time and again, when we think about the reality of what our contexts are. The good news for us is that, in part, millennials won't tolerate some of these environments in the same way, which is going to be a good thing. I think they're marvelous to work with, I'm not one of them obviously, but I think a lot of what they're requesting, the transparency, understanding the connections between what they do and whether they're having impact, the desire to be developed and be learning, and wanting to be in an organization they're not ashamed of but in fact are very proud to be a part of, I think that requires businesses and leaders to behave differently. In one of the businesses we studied, the millennial who's on the front line wants to know that he or she is making a difference. They had to do finance differently to be able to show, to draw the cause and effect between, what that person was doing every day and how it impacted the client's work. That ended up being a really interesting task. Or a supply chain leader who really needed them to think very differently about supply chain so they could innovate. What he ended up doing is, instead of thinking about our customers being the pharmaceutical company, the CVS or the big hospital chain or whatever it is, think about the end customer. What would we have to do with supply chain to ensure that that end patient took his or her pill on time and got better?
And when they shifted the whole meaning of the work to that individual patient in his or her home, he was able, over time, to get the whole supply chain organization to understand: we're not doing what we need to do if we're really going to reduce diabetes in the world, because the biggest problem we have is not when they go and get their medication, it's whether they actually use it properly when they're home. So when you switched that to being the purpose of the work, the mindset that everyone had to have, that's what we're delivering on, everyone said, oh, this is completely appropriate, we need digital, we need different kinds of data to know what's going on there. >> Don't get me started on human health. Professor Hill, for an introvert, you're quite a storyteller, and we appreciate you sharing your examples and your knowledge. Thanks so much for coming on the Cube. It was great to meet you. >> Been my pleasure, glad to know you, thank you. >> Keep it right there, everybody, Stu and I will be back right after this short break. You're watching the Cube from LiveWorx in Boston. We'll be right back. (light electronic music)

Published Date : Jun 18 2018


Dan Rogers, ServiceNow | ServiceNow Knowledge18


 

>> Announcer: Live from Las Vegas, it's theCUBE. Covering ServiceNow Knowledge 2018. Brought to you by ServiceNow. >> Welcome back to theCUBE's live coverage of ServiceNow Knowledge18, #Know18. We are theCUBE, the leader in live tech coverage. I'm your host Rebecca Knight along with my co-host, Dave Vellante. We are joined by Dan Rogers. He is the CMO of ServiceNow. Thanks so much for coming on theCUBE Dan. >> Thanks for inviting me. I always have a great conversation with you guys. >> Yeah, you're, you're back, you're back. So, this conference is amazing. There's so much buzz happening. 18,000 people. It gets bigger and better every year. >> How ironic, 18,000, K18. >> You got it. >> Oh my gosh. >> Well done. >> I didn't even, you did it, you must've done it, that's marketing genius, genius Dan. >> We might bend the curve next year though. We might bend the curve a little bit more. >> So, so what, what in your opinion are the most sort of new exciting things happening? >> Well you know we start the planning process as you can imagine, about six months prior. And we're really super focused this year on customer success. So, one of our principles was it's all about our customers, it's all for our customers. You probably know, unlike any other conference, most of the sessions are delivered by customers. So we have 85% of our breakouts are delivered by customers. So this is really our customers' event. And in the background here, you know we've created this customer success zone, which is where I've taken all the best practices from our customers and we're sharing that, and you'll see we've got the Genius Lounge, customer success clinics, customer theaters, and the whole vibe is supposed to be helping our customers be more successful. In some ways it's the anti-marketing conference. This isn't buy more stuff, this is we want to help you be successful. And so we wanted to keep the authenticity throughout. 
The keynotes were celebrating people, celebrating our users how users can use our products. The experiences that they can have. So I think that was the principle. Hopefully we pulled it off. >> So I wonder if you could talk about some of the challenges you have from a marketing standpoint. So let me just set it up. So, in the keynote this morning, if you didn't see it ServiceNow had kind of a fun little play on words where they had cave people in the cave trying to light a fire. We all know that, right? Light a fire under somebody's butt. And then fast forward to today's world and there's this thing called the saber tooth virus coming and so that was kind of really fun. And it explained things, you know, it resonated, I think, with a lot of people. But as you enter this new world beyond IT, I mean 2013, 5% of your business was outside of IT. You know, today it's you know, a third of your business. So you're reaching a new audience now. How do you handle sort of the marketing and messaging of that hybrid approach? That must've been a challenge for you. >> Well, you know I'm a story teller I love kind of starting with the stories. And, talking with our product leaders, the story that we're most deeply connected to really for our product road map is around experiences. So we knew this needed to be a conference about experiences. And we wanted to put a marker down that says this is the era of great experiences. You deserve great experiences at work. It really is the case that certainly when millennials come in to work they have expectations of what the work experience looks like and they arrive and it's like, wah, wah, wah, wah, No you can't, just swipe your finger, No, you have to stand in line. No you, yes, we really use telephones still, you know. And, chat experience isn't really what it ought to be. So we kind of said we're putting a marker down at this conference to say, Welcome to the Era of Great Experiences. 
You deserve great experiences, and we're going to create that. And if you look at our entire product roadmap, we're trying to create great experiences at work. CJ talked about the Now platform. He said there are three layers to the Now platform. The Now platform has user experiences. That's really how people want to interact with our products, how they want to interact with the world. Great service experiences, that's all the stuff that's happening in the background. Customers, employees, they just want to touch their phone; the 20 things that happen behind that need to be abstracted away. And then, service intelligence. This idea of prediction. Now these things are not new in the consumer world, but they're very new in the enterprise world. Take the consumer world. You think about Uber, you think about OpenTable, they spend a lot of time on the user experience. Think about the service experience of something like Amazon. Amazon, you touch, you swipe, you click, and they're orchestrating hundreds of processes behind the scenes. And then service intelligence. Netflix is a great example. Stuff's predicting for you, stuff's being recommended for you. Where are the recommendations at work? Where's the predictions at work? Where's the prioritization that's happening at work? And we've sort of said, that's what our Now platform is all about. It's about delivering those three great things that we think go into making great experiences at work. And that's what the show's about. And therefore, you see the people-centricity of the show. CJ celebrated four personas. He talked about the personas and their life. The IT topic, you know it's happening in a couple hours. We're going to talk about people. Real people and their lives, and how it's making it better. And that all rolls back to the central idea that we believe that technology should be in the service of people. Making work, work better for you. >> So that's the main spring. Love it, go ahead. 
>> No, I was just going to ask you, you were describing the millennial, or the post-millennial entering the workforce and this, wah, wah, wah, feeling of no it's not like that here, you got to, there's a lot of, onerous administrative tasks that you've got to do. So is that what's driving this, this change, this moment that you're saying that we're at this, this point in time where employees are demanding better and demanding more from their workplace. I mean, is that what's driving the change in your opinion? >> I think we have just this confluence of technologies around AI, around machine learning and a lot of the services being delivered by Cloud platforms. And then we have this contrast between people's work life and their home life. I have a nine-year-old son. I'll share a little experience with him. So he uses things like Khan Academy. Khan Academy, he uses his finger to write the answers and that gets converted into text. Well now when he tries to interact with any application, he's trying to use his finger and he's wondering, why you guys all using keyboards? What is this keyboard thing? And you know, and then when he interacts with any application, TV screen, he's trying to swipe on the TV screen. He can't understand why he can't swipe on the TV screen to get to the next show to the next channel. I look at that, and I'm like, it's so obvious this is where we're going, this is, this next generation, they want to interact with their applications in a very different way. And we need to get to that in the Enterprise. And we want to be first to get there in Enterprise. The acquisitions that we've made five acquisitions that we've made in the last nine months or year. I was actually just walking with some of the guys that, you know from Boas, from SkyGiraffe. SkyGiraffe, DxContinuum, Parlor, Parlo. 
And these are just kind of adding to our ability to create the experiences that we deserve, opposite all of those technologies, so you can just get your work done, get your work done. Get to the actions that you need. John I thought did an amazing job of explaining what it takes to create great experiences. And he had this, what I call the UX iceberg. This idea that appearances are on the top. Anyone can make an app, a mobile app, that has great appearances. Just put a nice skin on there, nice colors on it. But the hard work happens below the water line, which is where you think about the behaviors. How do people actually want to work? And we've filmed people, we've watched people, in their daily lives, how they want to work. Go down a layer, the relationships: who do they need to work with? Who do they interact with? And then, the workflows: what are the systems they need to interact with. And when we think about their entire paradigm of UX experience and then design from that paradigm, we end up not just with a pretty skin, we end up with actually something that fundamentally changes the way you get your work done, and that's what we're going after. >> So I've kind of resigned myself to the fact that I'm not going to be a ServiceNow customer anytime soon. When Jeff and I first saw it in like 2013, we were like, we want this. It's not designed for 50 person companies like ours. Okay, I can live with that. You guys aspire to be the next great Enterprise software company. As a marketing executive, you got to kind of be in Heaven, right now, because now, you and I have talked about this, I don't have the marketing gene, I find marketing very challenging, but for someone who has that marketing gene, if I compare you to the great software companies in the Enterprise, it's Oracle, it's SAP, it's Salesforce. Our HR system, our provider, it's Oracle, it's clunky. We use Salesforce, it's Oracle, right? I don't use SAP. I don't want to use SAP. 
Okay, so laying down the gauntlet on experience is I think brilliant because you're living in a sea of mediocrity when it comes to experience. Now, you have to stay ahead of the game. Acquisitions are one way to do that. But how does that all play into your marketing? >> You know, it actually starts with purpose. So we, about nine months ago, began a journey to, I'd say, get to the essence of our purpose. We talked to all of our employees, went on road shows around the world, talked to our customers around the world. And we kind of said, both what do we actually do for you, what do you want us to do for you, and we grounded ourselves in this central idea: we make the world of work work better for people. It turns out, that is a rallying cry, a firing signal for everything we do as a company. So when I think of marketing, marketing is about bringing that promise through our brand expression to life. We make the world of work, work better for people. That's a bar, a standard. This conference needs to feel like it's making work, work better for people. This conference needs to exude humanity and their experiences. This isn't a technology conference. You see the thing behind you, very deliberately. We're celebrating people, people's lives, people's work lives, so I think of the connection between our purpose and marketing. It's the standard, it's the bar for us. My website, which we refreshed in time for Knowledge, is no longer a taxonomy of products. It's talking about people, their lives, how we make their experiences better. So I think of it as this show, our keynotes, very deliberately focusing on those personas. I think of it as a watermark that kind of says make everything true to your purpose. It's also a watermark for our products. It's a litmus test for our products. Is this product ready to ship yet? Does it make the world of work work better for people? Yes, no? Yes, let's ship it. No, let's not. It's the litmus test for our sales engagements. 
Are you talking about how you're making experiences better for people? Or are you talking about some other abstract concept? You talking just about cost savings, you're talking about, if you're not talking about experiences, you're not living our purpose. So, it's going to exude through everything that we do. I think it's a really foundational idea for us. >> It's powerful when a brand can align its sales, its marketing, and its product and its delivery, you know to the customer. >> And the timing too just because we were really at low unemployment, we have this war for talent, particularly in technology but in other industries as well where employees are saying what can I do to attract and retain the best people. Make, make their work lives easier, more fun, more intuitive, simpler. >> I always joke that, you know, there's something that's written on a job description. And if you read the job description, You're like, yeah, I want to do that. I get to lead this thing, drive this thing, duh de tuh. The job description doesn't say, oh and by the way, you're going to spend 2.4% of your time filling in forms and you're going to spend 1.8% of your time handling manual IT requests. 4.2% of your time, you're going to, if it did, you wouldn't take the job. So we actually deserve the jobs just on our job description. And that's kind of what I think is that, you know, where we need to get to with work. >> Right, right, exactly. >> So what have we got goin' the rest of, of K18 here? You got a big show, I think Thursday night, you got the customer appreciation. What else is going on here that we should know about? >> Well the way we structure the event is we have these general session keynotes. And you can kind of think of it as John is explaining a lot about why we're doing what we're doing. CJ's explaining a lot about what are we doing. What have we been doing? What's our innovation road map look like? And then Pat Casey's going to pick up on how. 
How can you build those experiences that CJ's previewed, that fell into the reason why we're doing the things that CJ previewed. So there's kind of a method to the madness to the, to the three days as it were. And then below that, we have these things called topic keynotes, and as you remember we have these five Cloud services now. Of course HR, customer service, security operations, IT, and then really intelligent apps allowing me to build those up. So you have topic keynotes across each of those five Cloud services. And then beyond that, it's really the customer, customer breakouts. Interspersed amongst that is your ability to go along and have a session or success clinic in this customer success area. Or go into the Genius Lounge. Drop by the pavilion, have demos of our products. So those are some of the really, kind of exciting structural things we have around the conference. And then on Thursday night, you know, we wanted to go bigger and better than ever before, and we call it Vegas Nights. So Thursday night, instead of having, you know, the band, you know, of yesteryear, which many conferences, kind of love to do, we decided to have this kind of experiential thing. You can go and see Cirque De Soleil. You can go to the Tower Night Club. You can go to Topgolf. So there's a little menu you can choose from. We've actually reserved the Cirque De Soleil for the whole night so they're running multiple performances just for ServiceNow customers, which is pretty fun. >> So tailored to the individual. Whatever you want to do. Whatever will make your life better. >> That's the idea. Just drop it in, put it in your agenda and you're good to go. >> I love it. Well Dan, thanks so much for coming on the show. It was great to have you. >> Thank you, I enjoyed the discussion. >> Good to see ya again. >> Good to see you. >> I'm Rebecca Knight for Dave Vellante. We will have more from theCUBE's live coverage of ServiceNow Knowledge18 coming up in just a little bit. 
(upbeat music)

Published Date : May 9 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
John | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
Jeff | PERSON | 0.99+
Dan | PERSON | 0.99+
Dan Rogers | PERSON | 0.99+
Pat Casey | PERSON | 0.99+
Khan Academy | ORGANIZATION | 0.99+
2.4% | QUANTITY | 0.99+
CJ | PERSON | 0.99+
SkyGiraffe | ORGANIZATION | 0.99+
85% | QUANTITY | 0.99+
1.8% | QUANTITY | 0.99+
4.2% | QUANTITY | 0.99+
Thursday night | DATE | 0.99+
Amazon | ORGANIZATION | 0.99+
2013 | DATE | 0.99+
20 things | QUANTITY | 0.99+
50 person | QUANTITY | 0.99+
next year | DATE | 0.99+
Parlor | ORGANIZATION | 0.99+
Uber | ORGANIZATION | 0.99+
Netflix | ORGANIZATION | 0.99+
Las Vegas | LOCATION | 0.99+
Cirque De Soleil | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
DxContinuum | ORGANIZATION | 0.99+
ServiceNow | ORGANIZATION | 0.99+
Parlo | ORGANIZATION | 0.99+
18,000 people | QUANTITY | 0.99+
four personas | QUANTITY | 0.98+
5% | QUANTITY | 0.98+
this year | DATE | 0.98+
three days | QUANTITY | 0.97+
both | QUANTITY | 0.97+
theCUBE | ORGANIZATION | 0.97+
three layers | QUANTITY | 0.97+
five acquisitions | QUANTITY | 0.97+
today | DATE | 0.96+
each | QUANTITY | 0.95+
hundreds | QUANTITY | 0.95+
one | QUANTITY | 0.94+
five Cloud | QUANTITY | 0.94+
Boas | ORGANIZATION | 0.93+
about nine months ago | DATE | 0.93+
nine-year-old | QUANTITY | 0.93+
one way | QUANTITY | 0.93+
five Cloud services | QUANTITY | 0.92+
Tower Night Club | LOCATION | 0.92+
about six months prior | DATE | 0.9+
SAP | ORGANIZATION | 0.89+
saber tooth virus | OTHER | 0.89+
last nine months | DATE | 0.88+
#Know18 | EVENT | 0.87+
this morning | DATE | 0.87+
three great things | QUANTITY | 0.86+
Sales Force | TITLE | 0.81+
intel | ORGANIZATION | 0.79+
Vegas Nights | EVENT | 0.79+
2018 | TITLE | 0.75+
OpenTable | ORGANIZATION | 0.73+
ServiceNow | TITLE | 0.71+
18,000 | QUANTITY | 0.7+
ServiceNow Knowledge18 | EVENT | 0.65+
CJ | ORGANIZATION | 0.6+
third | QUANTITY | 0.59+
Genius | LOCATION | 0.59+
Topgolf | LOCATION | 0.55+
couple | QUANTITY | 0.55+

Veeru Ramaswamy, IBM | CUBEConversation


 

(upbeat music) >> Hi, we're at the Palo Alto studio of SiliconANGLE Media and theCUBE. My name is George Gilbert, we have a special guest with us this week, Veeru Ramaswamy, who is VP IBM Watson IoT platform, and he's here to fill us in on the incredible amount of innovation and growth that's going on in that sector of the world, and we're going to talk more broadly about IoT and digital twins as a broad new construct that we're seeing in how to build enterprise systems. So Veeru, good to have you. Why don't you introduce yourself and tell us a little bit about your background. >> Thanks George, thanks for having me. I've been in the technology space for a long time and if you look at what's happening in the IoT, in the digital space, it's pretty interesting the amount of growth, the amount of productivity and efficiency the companies are trying to achieve. It is just phenomenal and I think we're now turning off the hype cycle and getting into real actions in a lot of businesses. Prior to joining IBM, I was a junior officer and senior VP of data science with Cablevision, where I led the data strategy for the entire company, and prior to that, at GE, I was one of the first two guys who actually built the San Ramon digital center, the GE Digital center, it's a center of excellence. Looking at different kinds of IoT related projects and products along with leading some of the UX and the analytics and the collaboration or the social integration. So that's the background. >> So just to set context 'cause this is as we were talking before, there was another era when Steve Jobs was talking about the NeXT workstation and he talked about object orientation and then everything was sprinkled with fairy dust about objects. So help us distinguish between IoT and digital twins, which GE was brilliant in marketing 'cause that concept everyone could grasp. Help us understand where they fit. 
The idea of digital twin is, how do you abstract the actual physical entity out there in the world, and create an object model out of it. So it's very similar in that sense to what happened in the 90s for Steve Jobs, and if you look at that object abstraction, that is what is now happening in the digital twin space from the IoT angle. The way we look at IoT is we look at every sensor which is out there which can actually produce a metric; every device which produces a metric we consider as a sensor, so it could be as simple as the pressure, temperature, humidity sensors or it could be as complicated as cardio sensors in your healthcare and so on and so forth. The concept of bringing these sensors into the digital world, the data from that physical world to the digital world, is what is making it even more abstract from a programming perspective. >> Help us understand, so it sounds like we're going to have these fire hoses of data. How do we organize that into something that someone who's going to work on that data, someone is going to program to it. How do they make sense out of it the way a normal person looks at a physical object? >> That's a great question. We're looking at sensors as a device that we can measure out of, and that we call a device twin. Taking the data that's coming from the device, we call that a device twin, and then your physical asset, the physical thing itself, which could be elevators, jet engines, anything, that physical asset is what we call the asset twin, and there's a hierarchical model that we believe will have to exist for the digital twin to be actually constructed from an IoT perspective. The asset twins will basically encompass some of the device twins, and then we actually take that and represent the digital twin of that particular asset in the physical world. 
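The device-twin/asset-twin hierarchy described here can be sketched in a few lines of code. This is an editor's illustration only; the class and field names are invented for the example and are not the Watson IoT platform's API.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DeviceTwin:
    """Digital mirror of one sensor/device; holds the latest metric it produced."""
    name: str
    metric: str              # e.g. "temperature", "pressure", "humidity"
    reading: float = 0.0

@dataclass
class AssetTwin:
    """Digital mirror of a physical asset (elevator, jet engine), composed of device twins."""
    name: str
    devices: Dict[str, DeviceTwin] = field(default_factory=dict)

    def add(self, device: DeviceTwin) -> None:
        self.devices[device.name] = device

    def snapshot(self) -> Dict[str, float]:
        """Current state of the asset as seen through its device twins."""
        return {d.name: d.reading for d in self.devices.values()}

# An elevator asset twin encompassing a couple of device twins
elevator = AssetTwin("elevator-42")
elevator.add(DeviceTwin("hoist-motor", "temperature", 71.5))
elevator.add(DeviceTwin("door-sensor", "open_fraction", 0.0))
print(elevator.snapshot())   # {'hoist-motor': 71.5, 'door-sensor': 0.0}
```

The point of the hierarchy is that the asset twin never talks to raw telemetry directly; it only aggregates the device twins it encompasses.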
>> So that would be sort of like, as we were talking about earlier, an elevator might be the asset, but the devices within it might be the brakes and the pulleys and the panels for operating it. >> Veeru: Exactly. >> And it's then the hierarchy of these, or in manufacturing terms, the bill of materials, that becomes a critical part of the twin. What are some other components of this digital twin? >> When we talk about digital twin, we don't just take the blueprint as schematics. We also think about the system, the process, the operation that goes along with that physical asset, and when we capture that and are able to model that in the digital world, then that gives you the ability to do a lot of things where you don't have to do it in the physical world. For instance, you don't have to train your people on the physical world, if it is periodical systems and so on and so forth, you could actually train them in the digital world and then be able to allow them to operate on the physical world whenever it's needed. Or if you want to increase your productivity or efficiency doing predictive models and so forth, you can test all the models in your digital world and then you actually deploy it in your physical world. >> That's great for context setting. How would you think of, this digital twin is more than just a representation of the structure, but it's also got the behavior in there. So in a sense it's a sensor and an actuator in that you could program the real world. What would that look like? What things can you do with that sort of approach? >> So when you actually have the data coming in, this humongous amount of terabyte data that comes from the sensors, once you model it and you get the insights out of that, based on the insight, you can take an actionable outcome that could be turning off an actuator or turning on an actuator, and simple things like in the elevator case, open the door, shut the door, move the elevator up, move the elevator down, etc. 
All of these things can be done from a digital world. That's where it makes a humongous difference. >> Okay, so it's a structured way of interacting with the highly structured world around us. >> Veeru: That's right. >> Okay, so it's not the narrow definition that many of us have been used to, like an airplane engine or the autonomous driving capability of a car. It's more general than that. >> Yeah, it is more general than that. >> Now let's talk about, having sort of set context with the definition so everyone knows we're talking about a broader sense that's going on, what are some of the business impacts in terms of operational efficiency, maybe just the first-order impact. But what about the ability to change products into more customizable services that have SLAs, or entirely new business models including engineer to order instead of make to stock. Tell us something about that hierarchy of value. >> That's a great question. You're talking about things like operations optimization and predictive maintenance and all of that, which you can actually do from the digital world, it's all on the digital twin. You also can look into various kinds of business models now: instead of a product, you can actually have a service out of the product and then be able to have different business models like powered by the hour, pay per use and those kinds of things. So these kinds of models, business models, can be tried out. Think about what's happening in the world of Airbnb and Uber, nobody owns any asset but they're still able to make revenue by pay per use or power by the hour. I think that's an interesting model. I don't think it's been tested out so much in the physical asset world but I think that could be an interesting model that you could actually try. 
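The insight-to-actuator loop sketched above for the elevator case can be illustrated as a simple rule: model the incoming readings, derive an insight, and map it to an actuator command. The thresholds and command names below are invented for the example, not taken from any real elevator controller.

```python
def decide_action(readings: dict, temp_limit: float = 80.0) -> str:
    """Turn modeled sensor insight into an actuator command (names are illustrative)."""
    if readings.get("motor_temperature", 0.0) > temp_limit:
        return "stop_and_open_door"      # fail safe: halt the car, let passengers out
    if readings.get("door_obstructed", 0.0) > 0.5:
        return "reopen_door"
    return "continue_service"

print(decide_action({"motor_temperature": 92.0}))   # stop_and_open_door
print(decide_action({"door_obstructed": 1.0}))      # reopen_door
print(decide_action({"motor_temperature": 40.0}))   # continue_service
```

In a real deployment the "insight" step would be a trained predictive model rather than a hard threshold, but the shape of the loop, readings in, command out, is the same.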
>> One thing that I picked up at the Genius of Things event in Munich in February was that we really have to rethink software markets, in the sense that IBM's customers become, in a way, your channel, sometimes because they sell to their customers. Almost like a supply chain master or something similar. And also pricing changes: potentially we've already migrated or are migrating from perpetual licenses to software as a service, but now we could do unit pricing or SLA-based pricing, in which case you as a vendor have to start getting very smart about it, you own the risk in meeting an SLA for your customers, so it's almost more like insurance, actuarial modeling. >> Correct, so the way we want to think about it is, how can we make our customers more, what do you call, monetizable. Their products to be monetizable with their customers, and then in that case, when we enter into a service level agreement with our customers, there's always that risk of what we deliver to make their products and services more successful. There's always a risk component which we will have to work with the customers on, to make sure that combined model of what our customers are going to deliver is going to be more beneficial, more contributing to both bottom line and top line. >> That implies that you're modeling, someone's modeling, the risk from you the supplier to your customer as vendor to their customer. >> Right. >> That sounds tricky. >> I'm pretty sure we have a lot of financial risk modeling entered into our SLAs when we actually go to our customers. >> So that's a new business model for IBM, for IBM's sort of supply chain master type customers, if that's the right word. As this capability, this technology pervades more industries, customers become software vendors, or if not software vendors, services vendors for software enhanced products or service enhanced products. >> Exactly, exactly. 
>> Another thing, I'd listened to a briefing by IBM Global Services where they thought, ultimately, this might end up where far more industries are engineered to order instead of make to stock. How would this enable that? >> I think the way we want to think about it is that most of the IoT based services will actually start by co-designing and co-developing with your customers. And that's where you're going to start. That's how you're going to start. You're not going to say, here's my 100 data centers and you bring your billion devices and connect and it's going to happen. We are going to start that way and then our customers are going to say, hey by the way, I have these use cases that we want to start doing, so that's why the platform becomes so important. Once you have the platform, now you can scale individual silos as a vertical use case for them. We provide the platform and the use cases start driving on top of the platform. So the scale becomes much easier for the customers. >> So this sounds like the traditional application. The traditional way an application vendor might turn into a platform vendor, which is a difficult transition in itself, but you take a few use cases and then generalize into a platform. >> We call that zone application services. The zone application service basically draws on what we call platform services, which actually provide you the abilities. So for instance, take asset management. Asset management can be done in an oil and gas rig, you can look at asset management in a power turbine, you can look at asset management in a jet engine. You can do asset management across any different vertical, but that is a common horizontal application, so most of the time you get 80% of your asset management APIs, if you will. Then you can be able to scale across multiple different vertical applications and solutions. 
Hold that thought 'cause we're going to come back to joint development and leveraging expertise from vendor and customer and sharing that. Let's talk just at a high level. One of the things that I keep hearing is that in Europe industry 4.0 is sort of the hot topic, and in the states, it's more digital twins. Help parse that out for us. >> So the way we believe how digital twin should be viewed is a component view. What we mean by the component view is that we have your knowledge graph representation of the real assets in the digital world, and then you bring in your IoT sensors and connections to the models, then you have your functional, logical, physical models that you want to bring into your knowledge graph, and then you also want to be able to give the ability to search, visualize, analyze. Kind of an intelligent experience for the end consumer. And then you want to bring in your simulation models, the actual simulation models you do in the digital world, and then your enterprise asset management, your ERP systems, all of that. And then when you connect, when you're able to build a knowledge graph, that's when the digital twin really connects with your enterprise systems. Sort of bring the OT and the IT together. >> So this is sort of to try and summarize 'cause there are a lot of moving parts in there. You've got the product hierarchy which, in product speak, they call a bill of materials, sort of the explosion of parts in an assembly, sub-assembly, and then that provides like a structure, a data model, then the machine learning models and the different types of models that represent behavior, and then when you put a knowledge graph across that structure and behavior, is that what makes it simulation ready? >> Yes, so you're talking about entities and connecting these entities with the actual relationship between these entities. That's the graph that holds the relation between your nodes and your links. 
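The entities-and-links structure just described, twins and enterprise records as nodes, typed relationships as edges, can be sketched as a toy property graph. The schema and node names are this editor's illustration, not IBM's data model.

```python
class KnowledgeGraph:
    """Tiny property graph: nodes with attributes, edges as (source, relation, target)."""
    def __init__(self):
        self.nodes = {}        # node id -> attribute dict
        self.edges = []        # (src, relation, dst) triples

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def neighbors(self, node_id, relation=None):
        """Follow outgoing links, optionally filtered by relation type."""
        return [dst for src, rel, dst in self.edges
                if src == node_id and (relation is None or rel == relation)]

kg = KnowledgeGraph()
kg.add_node("elevator-42", kind="asset_twin")
kg.add_node("hoist-motor", kind="device_twin")
kg.add_node("PO-1187", kind="erp_record")          # enterprise-system node
kg.add_edge("elevator-42", "has_device", "hoist-motor")
kg.add_edge("elevator-42", "maintained_under", "PO-1187")
print(kg.neighbors("elevator-42", "has_device"))   # ['hoist-motor']
```

Connecting the twin node to ERP records through typed edges is what lets a query walk from sensor data to the enterprise side, the "OT and IT together" idea above.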
>> And then integrating the enterprise systems, and maybe the lower level operational systems. That's how you affect business processes. >> Correct. >> For efficiency or optimization, automation. >> Yes. Take a look at what you can do with, like, shop floor optimization. You have the whole bill of materials, which you need to know from your existing ERP systems, and then you will actually have the actual real parts that are coming to your shop floor to manage. And now, depending on whether you want to repair, you want to replace, you want an overhaul, you want to modify, whatever that is, you want to look at your existing bill of materials and see, okay, do I first have it? Do we need more? Do we need to order more? So your ordering system naturally gets integrated into that, and then you have to integrate the data that's coming from these models and the availability of the existing assets with you. You can integrate it and say, how fast can you actually start moving these out of your shop, into the-- >> Okay, that's where you translate essentially what's more like intelligence about an object, or a rich object, into sort of operational implications. >> Veeru: Yes. >> Okay, operational process. Let's talk about customer engagement so far. There's intense interest in this. I remember for the Munich event, they had to shut off attendance because they couldn't find a big enough venue. >> Veeru: That's true. >> So what are the characteristics of some of the most successful engagements, or the ones that are promising? Maybe it's a little early to say successful. >> So, I think the ways you can definitely see success from customer engagement are twofold. One is show what's possible. Show what's possible with the ability to connect, the collection of data, all of that, so that's one part of it. The second part is understand the customer. The customer has certain requirements in their existing processes and operations. 
Understand that, and then deliver based on what solutions they are expecting, what applications they want to build. How you bring those together is what we're thinking about. Take that Munich center you talked about. We are actually bringing in chip manufacturers, sensor manufacturers, device manufacturers. We are bringing in network providers. We are bringing in SIs, system integrators, all of them into the fold, to show what is possible, and then your partners enable you to get to market faster. That's how we see the engagement with customers should happen, in a much faster manner, showing them what's possible. >> It sounds like in the chip industry, with Moore's law, for many years it wasn't deterministic that we would double things every 18 months or two years; it was actually an incredibly complex ecosystem web where everyone's product release cycles were synchronized so as to enable that. And it sounds like you're synchronizing the ecosystem to keep up. >> Exactly. The success of a particular organization's IoT efforts is going to depend on how you build this ecosystem and how you establish that ecosystem to get to market faster. That's going to be extremely key for all your integration efforts with your customers. >> Let's start narrowly with you, IBM. What are the key skills that you feel you need to own, starting from sort of the base rocket scientists, you know, who not only work on machine learning models but come up with new algorithms on top of, say, TensorFlow or something like that, and all the way up to the guys who are going to work in conjunction with the customer to apply that science to a particular industry. How does that hold together? >> So it all starts on the platform. On the platform side we have all the developers, the engineers who build the platform, all the device connections and all of that, to make the connections. 
So you need the highest caliber software development engineers to build these on the platform, and then you also need the solution builders, who are in front of the customer understanding what kind of solutions they want to build. Solutions could be anything. It could be predictive maintenance, it could be as simple as asset management, it could be remote monitoring and diagnostics. It could be any of these solutions that you want to build, and then the solution builders and the platform builders work together to make sure that it's a holistic approach for the customer at the final deployment. >> And how much of the solution building in the early stages is typically IBM, or is there some expertise that the customer has to contribute? Almost like agile development, but not two programmers, more like 500 and 500 from different companies. >> 500 is a bit too much. (laughs) I would say this is the concept of co-designing and co-development. We definitely want the developers, the engineers from, the subject matter experts from, our customers, and we also need our analytics experts and software developers to come and sit together and understand what's the use case. How do we actually bring in the optimized solution for the customer? >> What level of expertise, or what type of expertise, do the developers who are contributing to this effort have? If you're working with manufacturing, let's say auto manufacturing, do they have to have automotive software development expertise, or are they more generically analytics, and the automotive customer brings in the specific industry expertise? >> It depends. In some cases we have, in GBS for instance, dedicated service providers for that particular vertical, so we understand some of this industry knowledge. In some cases we don't; in some cases it actually comes from the customer. 
But it has to be an aggregation of the subject matter experts with our platform developers and solution developers sitting together, finding what's the solution. Literally going through, think about how we actually bring in the UX: what does a typical day of a persona look like? We always, by the way, believe it's augmented intelligence, which means the human and the machine work together, rather than a complete AI that gives you the answer for everything you ask for. >> It's a debate that keeps coming up. Doug Engelbart sort of had his own answer, like 50 years ago, where he sort of set the path for modern computing by saying we're not going to replace people, we're going to augment them, and this is just a continuation of that. >> It's a continuation of that. >> Like UX design, it sounds like someone on the IBM side might be talking to the domain expert at the customer to say, how does this workflow work? >> Exactly. So we have these design thinking sessions with our customers, and then based on that we take that knowledge back, we build our mockups, we build our wireframes, visual designs, and the analytics and software that go behind it, and then we provide it on top of the platform. So most of the platform work, the standard, what do you call, table stakes, the connections, the collection of data, all of that is already existing; then it's one level above, as to what particular solution a customer wants. That's when we actually-- >> In terms of getting the customer organization aligned to make this project successful, what are some of the different configurations? Who needs to be a sponsor? Where does budget typically come from? How long are the pilots? That sort of stuff, to set expectations. >> We believe in all the agile thinking, agile development, and we believe in all of that. It's almost a given now. 
So, depending on where the customer comes from: the customer could actually directly come and sign up to our platform on the existing cloud infrastructure, and then they will say, okay, we want to build applications. Then there are some customers, really big customers, large enterprises, who want to say, give me the platform, we have our solution folks, we want to work on board with you, but we also want somebody who understands building solutions. We integrate with our solution developers and then we build on top of that. They build on top of that, actually. So you have that model as well, and then you have GBS, which actually does this, has been doing this for years, decades. >> George: Almost like from the silicon. >> All the way up to the application level. >> When the customer is not outsourcing completely, the custom app that they need to build, in other words, that's when they need to go to GBS, Global Business Services, whereas if they want a semi-packaged app, can they go to the industry solutions group? >> Yes. >> I assume it's the IoT Industry Solutions Group. >> Solutions group, yes. >> They then take, it's almost maybe a framework or an existing application that needs customization. >> Exactly, so we have IoT for manufacturing, IoT for retail, IoT for insurance, IoT for you name it. We have all these industry solutions, so there would be some amount of template already existing in some fashion. So when GBS gets a request to say, here is customer X coming and asking for a particular solution, they would come back to the IoT solutions group to say, do they already have some template solutions from where we can start, rather than building it from scratch? 
Your speed to market, again, is much faster, and then based on that, if it's something that has to be customized, both of them work together with the customer to make that happen, and they leverage our platform underneath to do all the connection, collection, data analytics, and so on and so forth that goes along with that. >> Tell me this: from everything we hear, there's a huge talent shortage. Tell me in which roles is there the greatest shortage, and then how do different members of the ecosystem, platform vendors, solution vendors, sort of supply-chain master customers and their customers, how do they attract and retain and train? >> It's a fantastic question. One of the difficulties, both in the valley and everywhere across, is that there is a skill gap. You want advanced data scientists, you want advanced machine learning experts, you want advanced AI specialists to actually come in. Luckily for us, we have about 1000 data scientists and AI specialists distributed across the globe. >> When you say 1000 data scientists and AI specialists, help us understand which layer they're at-- >> It could be all the way from, like, a BI person, all the way to people who can build advanced AI models. >> On top of an engine or a framework. >> We have our Watson APIs from which we build, then we have our Data Science Experience, which actually has some of the models, built on top of the Watson Data Platform, so we take that as well. There are many different ways by which we can actually bring the AI models, machine learning models, to build. >> Where do you find those people? Not just the sort of bench strength that's been with IBM for years, but to grow that skill base, and then where are they attracted to? >> It's a great question. The valley definitely has a lot of talent, but then we also go outside. We have multiple centers of excellence, in Israel, in India, in China. So we have multiple centers of excellence we gather from. 
It's difficult to get all the talent just from the US or just from one country, so naturally that talent has to be much more improved and enhanced, all the way from fresh graduates from colleges to more experienced folks in the actual profession. >> What about, when you say enhancing the pool of talent you have, could it also include productivity improvements, qualitative productivity improvements, in the tools that make machine learning more accessible at any level? The old story of rising abstraction layers, where deep learning might help design statistical models by doing feature engineering and optimizing the search for the best model, that sort of stuff. >> Tools are very, very helpful. There are so many. We have everything from R tools to Python tools to scikit-learn and all of that, which can help the data scientist. The key part is the knowledge of the data scientist. For data science, you need the algorithms, the statistical background, then you need your application software development background, and then you also need the domain expertise and engineering background. You have to bring all of them together. >> We don't have too many Michelangelos who are these all-around geniuses. There's the issue of how do you get them to work more effectively together, and then, assuming even each of those are in short supply, how do you make them more productive? >> So making them more productive is by giving them the right tools and resources to work with. I think that's the best way to do it, and in some cases, in my organization, we just say, okay, we know that a particular person is upskilled in certain technologies and certain skill sets, and then give them all the tools and resources for them to go and build. There's a constant education and training process that goes on; in fact, we have our entire Watson education platform that can be learned on Coursera today. >> George: Interesting. >> So people can go and learn how to build on the platform from Coursera. 
>> When we start talking with clients and with vendors, one of the things we hear, and we were kind of, I think, early in calling foul, is that in the open source infrastructure, big data infrastructure, this notion of mix-and-match and roll-your-own pipeline sounded so alluring, but in the end it was only the big internet companies, and maybe some big banks and telcos, that had the people to operate that stuff, and probably even fewer who could build stuff on it. Do we need to up-level or simplify some of those roles, because mainstream companies can't have, or won't have, enough data scientists or the other roles needed to make that whole team work? >> I think it will be a combination of both. One is we need to upskill our existing students with the STEM background, that's one thing, and the other aspect is, how do you upskill the existing folks in your companies with the latest tools, and how can you automate more things, so that people who may not be schooled will still be able to use the tool to deliver things, but they don't have to go through a rigorous curriculum to actually be able to deal with it. >> So what does that look like? Give us an example. >> Think of the tools today. There are a lot of BI folks who can actually build. BI is usually your trends and graphs and charts that come out of the data, which are simple things. So they understand the distributions and so on and so forth, but they may not know what a random forest model is. There are tools today that actually let you build them: once you give the data to that model, it actually gives you the outputs, so they don't really have to dig deep to understand the decision tree model and so on and so forth. They have the data, they can give the data to tools like that. 
There are so many different tools which will actually give you the outputs, and then they can actually start building the app, the analytics application, on top of that, rather than being worried about how do I write 1000 lines or 2000 lines of code to actually build that model itself. >> The in-built machine learning models, end to end, integrated into, like, Pentaho, or, what's another example? I'm trying to think. I've lost my, I'm having a senior moment. These happen too often now. >> We do have it in our own data science tools. We already have those models supported. You can actually go and call those in your web portal, and be able to call the data, and then call the model, and then you'll get all that. >> George: Splunk has something like that. >> Splunk does, yes. >> I don't know how functional it is, but it seems to be oriented towards, like, someone who built a dashboard can sort of wire up a model; it gives you an example of what type of predictions or what type of data you need. >> True. In the Splunk case, I think it is more of a BI tool actually supporting a level of data science model support on the back end. I do not know, maybe I have to look at this, but in our case we have a complete Data Science Experience, where you actually start from the minute the data gets ingested: you can actually do the storage, the transformation, the analytics, and all of that can be done in less than 10 lines of code. You can just actually do the whole thing. You just call those functions and it will be right there in front of you. End to end, you can do that. That, I think, is much more powerful, and there are tools, there are many, many tools today. >> So you're saying that Data Science Experience is an end-to-end pipeline, and therefore can integrate what were boundaries between separate products. >> The boundary is becoming narrower and narrower in some sense. You can go all the way from data ingestion to the analytics in just a few clicks or a few lines of code. That's what's happening today. 
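To make the "ingestion to analytics in under 10 lines" claim concrete, here is a toy sketch of such a pipeline. The helper names and the data are invented for illustration; this is not the actual Data Science Experience API:

```python
# Illustrative only: a toy ingest -> clean -> model -> insight pipeline
# in a handful of lines, mimicking the integrated experience described.

readings = [("pump-1", 71.0), ("pump-1", 74.5), ("pump-1", 88.9), ("pump-1", 90.2)]

ingest  = lambda rows: [temp for _, temp in rows]           # collect raw values
clean   = lambda xs: [x for x in xs if 0 <= x <= 150]       # drop bad readings
score   = lambda xs: sum(xs) / len(xs)                      # trivial "model"
insight = lambda avg: "overheating" if avg > 80 else "ok"   # decision

print(insight(score(clean(ingest(readings)))))  # overheating
```

In a real integrated platform each stage would be a managed service call rather than a lambda, but the point stands: the boundaries between ingestion, wrangling, and modeling collapse into a few chained calls.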
An integrated experience, if you will. >> That's different from the specialized skills where you might have a Trifacta, Paxata, or something similar for the wrangling, and then something else for sort of the visualizations, like Alteryx or Tableau, and then into modeling. >> A year or so ago, most data scientists had to spend a lot of time doing data wrangling, because some of the models they can actually call very directly, but the wrangling is actually where they spend their time: how do you get the data, crawl the data, cleanse the data, et cetera. That is all now part of our data platform. It is already integrated into the platform, so you don't have to go through some of these things. >> Where are you finding the first success for that tool suite? >> Today it is almost integrated. For instance, I had a case where we exchange the data, we integrate that into the Watson Data Platform, and the Watson APIs are a layer above that in the platform, where we actually use the analytics tools, the more advanced AI tools, but the simple machine learning models and so on and so forth are already integrated as part of the Watson Data Platform. It is going to become an integrated experience through and through. >> To connect Data Science Experience into the Watson IoT platform, and maybe a little higher, at this quasi-solution layer. >> Correct, exactly. >> Okay, interesting. >> We are doing that today, given the fact that we have so much happening on the edge side of things, which means mission critical systems today are expecting streaming analytics to get insights right there, and then be able to provide the outcomes at the edge, rather than pushing all the data up to your cloud and then bringing it back down. >> Let's talk about edge versus cloud. Obviously, for latency and bandwidth reasons, we can't forward all the data to the cloud, but there are different use cases. We were talking to Matei Zaharia at Spark Summit, and one of the use cases he talked about was video. 
You can't send, obviously, all the video back, and you typically, on an edge device, wouldn't have heavy-duty machine learning, but for a video camera, you might want to learn what is anomalous behavior, or a call-out, for that camera. Help us understand some of the different use cases: how much data do you bring back, and how frequently do you retrain the models? >> In the case of video, it's so true that you want to do a lot of object recognition and so on and so forth in the video itself. We have tools today, we have cameras outside, where if a van goes by, it detects the particular object in the video live. Realtime streaming analytics, so we can do that today. What I'm seeing today in the market is, in the transaction between the edge and the cloud, we believe the edge is an extension of the cloud, closer to the asset or device, and we believe that models are going to get pushed from the cloud closer to the edge, because the compute capacity, the storage, and the networking capacity are all improving. We are pushing more and more computing to the devices. >> When you talk about pushing more of the processing, you're talking more about prediction and inferencing than the training. >> Correct. >> Okay. >> I don't see so much of the training needing to be done at the edge. >> George: You don't see it. >> No, not yet at least. We see the training happening in the cloud, and then once the model has been trained, you come to a steady-state model, and that is the model you want to push. When you say model, it could be a bunch of coefficients. That could be pushed onto the edge, and then when new data comes in, you evaluate it, make decisions on that, create insights, and push it back as actions to the asset, and then that data can be pushed back into the cloud once a day or once a week, whatever that is, whatever the capacity of the device you have. And we believe that edge can go across multiple scales. 
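The cloud-to-edge flow Veeru describes, train in the cloud, push just the coefficients to the edge, score locally, and batch data back periodically, can be sketched like this. The training rule, threshold, and class names are all invented assumptions for illustration:

```python
# Sketch of the described flow: the cloud trains a model, pushes only its
# coefficients to the edge, the edge scores new readings locally for low
# latency, and only periodically uploads a batch back to the cloud.

def cloud_train(history):
    # "Training" reduced to its essence: produce a bundle of coefficients.
    mean = sum(history) / len(history)
    return {"threshold": mean * 1.2}       # the model IS the coefficients

class EdgeDevice:
    def __init__(self, coefficients):
        self.coefficients = coefficients   # pushed down from the cloud
        self.buffer = []                   # data held until the next sync

    def on_reading(self, value):
        self.buffer.append(value)
        if value > self.coefficients["threshold"]:
            return "alert"                 # act locally, right at the asset
        return "ok"

    def sync_to_cloud(self):
        # e.g. once a day or once a week, per device capacity.
        batch, self.buffer = self.buffer, []
        return batch

model = cloud_train([10.0, 12.0, 11.0])    # threshold ~= 13.2
edge = EdgeDevice(model)
print(edge.on_reading(12.5))               # ok
print(edge.on_reading(14.0))               # alert
print(len(edge.sync_to_cloud()))           # 2
```

Retraining then happens in the cloud on the synced batches, and an updated coefficient bundle is pushed back down, closing the loop.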
We believe it could be as small as 128 MB, or it could be a one- or two-U box sitting in your local data center, on premise. >> I've heard examples of 32 megs in elevators. >> Exactly. >> There might be more like a sort of bandwidth- and latency-oriented platform at the edge, and then throughput and volume in the cloud for training. And then there's the issue of, do you have a model at the edge that corresponds to that instance of a physical asset, and then do you have an ensemble, meaning the model that maps to that instance, plus a master canonical model. Does that work? >> In some cases, I think they'll have a master canonical model and other subsidiary models based on the asset. It could be a fleet, so in the fleet of assets which you have, you can ask, does one asset in the fleet behave similarly to another asset in the fleet? Then you could build similarity models on that. But then there will also be a model to look at, now that I have to manage this fleet of assets, which will be a different model compared to the asset similarity model: in terms of operations, in terms of optimization, if I want to make certain operations of that asset work more efficiently, that model could be completely different compared to when you look at the similarity of one asset with another. >> That's interesting, and then that model might fit into the information technology systems, the enterprise systems. Let's talk, I want to go a little lower level now, about the issue of intellectual property, joint development and sharing and ownership. IBM, it's a nuanced subject. So we get different sort of answers, definitive answers, from different execs, but at a high level, IBM says, unlike Google and Facebook, we will not take your customer data and make use of it. But there's more to it than that. It's not as black-and-white. Help explain that for us. 
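One simple way to ask "does asset A behave like asset B" across a fleet, as described above, is to correlate their sensor series. This is a hedged sketch, not IBM's actual similarity model; the data, threshold, and use of Pearson correlation are all illustrative choices:

```python
# Sketch of a fleet-similarity check: compare two assets' sensor series
# with Pearson correlation to decide whether they behave alike.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

asset_a = [1.0, 2.0, 3.0, 4.0]   # e.g. daily vibration averages
asset_b = [2.1, 4.0, 6.2, 8.1]   # roughly 2x asset_a: similar behavior
asset_c = [4.0, 1.0, 3.5, 0.5]   # erratic: different behavior

print(pearson(asset_a, asset_b) > 0.9)   # True
print(pearson(asset_a, asset_c) > 0.9)   # False
```

The separate fleet-management or optimization model Veeru mentions would consume these pairwise similarities as one input among many, which is why the two kinds of model stay distinct.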
>> The way you want to think of it is, and I would definitely parrot back what our chairman always says: customers' data is customers' data, and customer insights are customer insights. So the way we look at it is, if you look at a black box engine, that could be your analytics engine, whatever it is: the data is their input, and the insights are the outputs, so the insights and outputs belong to them. We don't take their data and marry it with somebody else's data and so forth. But we use the data to train the models, and the model, which is an abstract version of what that engine should be, the more we train it, the better the model becomes. We can then use it across many different customers, and as we improve the models, we might go back to the same customers and say, hey, we have an improved model, do you want to deploy this version rather than the previous version? We can go to customer Y and say, here is a model which we believe can take more of your data, fine-tune that model again, and then give it back to them. It is true that we don't actually take the data, or the insights, from one customer X and share them with another customer Y, but the models do get better. How you make that model more intelligent is what our job is, and that's what we do. >> If we go with precise terminology, it sounds like, when we talk about the black box having learned from the customer data, the insights also belong to the customer. Let's say one of the examples we've heard was architecture and engineering consulting for large capital projects: there's a model that's common, obviously, across that vertical, but also across large capital projects like oil and gas exploration, something like that. There, the model sounds like it's going to get richer with each engagement. And let's pin down: so what in the model is not exposed to the next customer, and what part of the model that has gotten richer does the next customer get the benefit of? 
>> When we actually build a model, when we pass the data in, in some cases the model built out of customer X's data may not work with customer Y's data, in which case you actually build it from scratch again. Sometimes it doesn't. In some cases it does help, because of the similarity of the data in some instances: if the data from company X in oil and gas is similar to company Y's in oil and gas, sometimes the data could be similar, in which case, when you train that model, it becomes more efficient, and the efficiency goes back to both customers. We will do that, but there are places where it would really not work. What we are trying to do, in fact, is build some kind of knowledge bundles, where what used to be a long process to train the model can now be shortened using that knowledge bundle of what we have actually gained. >> George: Tell me more about how it works. >> In retail, for instance, when we actually provide analytics from any kind of IoT sensor, whatever sensor data comes in, we train the model, we get analytics used for ads, pushing coupons, whatever it is. That knowledge, what you have gained off that retail engagement, it could be models of models, it could be metamodels, whatever you've built, can actually serve many different customers. But with the first customer who is trying to engage with us, you don't have any data for the model. It's almost starting from ground zero, and so that would actually take a longer time. When you are starting with a new industry and you don't have the data, it'll take you a longer time to understand what is that saturation point, or optimization point, where you think the model cannot go any further. In some cases, once you do that, you can take that saturated model, or near-saturated model, and improve it based on more data that actually comes from different other segments. 
>> When you have a model that has gotten better with engagements, and we've talked about the black box which produces the insights after taking in the customer data: inside that black box there's, at the highest level, what we might call the digital twin, with the broad definition that we started with; then there's a data model, which I guess could also be incorporated into the knowledge graph, for the structure; and then would it be fair to call the operational model the behavior? >> Yes, how does the system perform or behave with respect to the data and the asset itself. >> And then underpinning that, the different models that correspond to the behaviors of different parts of this overall asset. So if we were to be really precise about this black box, what can move from one customer to the next and what won't? >> The overall model: supposing I'm using a random forest model, that remains, but the actual coefficients, or the feature vector, or whatever I use, could be totally different for customers, depending on what kind of data they actually provide us. In data science, or in analytics, you have a whole plethora, all the way from simple classification algorithms to very advanced predictive modeling algorithms. If you take the whole class, when you start with a customer, you don't know which model is really going to work for a specific use case; you might get some idea, but you will not know exactly, this is the model that will work. Once you test it with one customer, that model could remain the same for a similar kind of use case at some other customer, but the actual coefficients, the depth of the tree, differ: in some cases it might be a two-level decision tree, in other cases it might be a six-level decision tree. >> So it's not like you take the model and the features, and then just let different customers tweak the coefficients for the features. 
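The distinction above, the model family transfers, the fitted coefficients do not, can be shown with the simplest possible case: the same least-squares line fit run on two customers' data yields different coefficients. The data is fabricated for illustration:

```python
# Sketch of "the model family stays the same, the coefficients differ per
# customer": the same closed-form least-squares fit of y = a*x + b is run
# on each customer's data, producing different (a, b) each time.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx          # slope and intercept

# Same "model" (a line), two customers, two sets of coefficients.
cust_x = fit_line([1, 2, 3], [2, 4, 6])     # slope 2, intercept 0
cust_y = fit_line([1, 2, 3], [5, 8, 11])    # slope 3, intercept 2

print(cust_x)  # (2.0, 0.0)
print(cust_y)  # (3.0, 2.0)
```

What IBM could carry between engagements is `fit_line` itself (and the knowledge that a line is the right family for this use case); the `(a, b)` pairs stay with the customers whose data produced them, just as the tree depth and feature vectors do for a decision tree or random forest.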
>> If you could do that, that would be great, but I don't know whether you can really do it; the data is going to change. The data is definitely going to change at some point in time, but in certain cases it might be directly correlated, where it can help; in certain cases it might not help. >> What I'm taking away is that this is fundamentally different from traditional enterprise applications, where you could standardize business processes and the transactional data that they were producing. Here it's going to be much more bespoke, because I guess the processes, the analytic processes, are not standardized. >> Correct, every business process is unique to a business. >> The Accentures of the world were trying to tell people that when SAP shipped packaged processes, which were pretty much good enough, but that convinced them to spend 10 times as much as the license fee on customization. But is there a qualitative difference between the processes here and the processes in the old ERP era? >> I think it's kind of different. In the ERP era and those processes, we are more talking about just data management. Here we're talking about data science. In the data management world, you're just moving data or transforming data and things like that. That's what you're doing: you're taking the data, transforming it to some other form, and then you're doing basic SQL queries to get some response, blah blah blah. That is a standard process; there is not much intelligence attached to it. But now you are trying to see, from the data, what kind of intelligence you can derive by modeling the characteristics of the data. That becomes a much tougher problem, so it is now one level higher of intelligence that you need to capture from the data itself, if you want to serve a particular outcome from the insights you get from this model. 
>> This sounds like the differences are based on, one, different business objectives, and perhaps data that's not as uniform: in enterprise applications you would standardize the data; here it's not standardized. >> I think because of the variety, the disparity, of the businesses and the kinds of verticals and things like that you're looking at, getting a completely unified business model is going to be extremely difficult. >> Last question. Back-office systems, at the highest level they got to, were maybe sold to the CFO, 'cause he had to sign off on a lot of the budget for the license, and a much, much bigger budget for the SI, but he was getting something that was, like, close your quarter in three days instead of two weeks. It was a control function. Who do you sell to now for these different systems, and what's the message? How much more strategic is it, and how do you sell the business impact differently? >> The platform we directly sell to the CIOs and CTOs, or the heads of engineering. And the actual solutions, or the insights, we usually sell to the COOs, or the operational folks, because the COO is responsible for showing productivity, efficiency, how much of a savings you can make on the bottom line, the top line. So the insights would actually go through the COOs, or in some sense through their CTOs to the COOs, but the actual platform itself will go to the enterprise IT folks, in that order. >> This sounds like it's a platform and a solution sell, which requires, is that different from the sales motions of other IBM technologies, or is this a new approach? >> IBM is transforming on its way. The strategy and products that we are aligned towards, that actually needs to be the key goal, because that's where the world is going. There are folks like Jeff Bezos, who talks about how in the olden days you needed 70 people to sell, or 70% of the people to sell, a 30% product. 
Today it's a 70% product and you need 30% of the people to actually sell the product. The model is completely changing the way we interact with customers. So I think that's what's going to drive us, and we are transforming in that area. We are becoming more conscious about all the strategy and operations that we want to deliver to the market; we want to be able to enable our customers with a much broader value proposition. >> Will the industry solutions group and the Global Business Services teams work on these solutions? They've already been selling line-of-business, CXO-type solutions. So is this more of the same, just better, or is this really a higher level than IBM's ever gotten to in terms of strategic value? >> This is possibly, I would say, the highest level of strategic value we've delivered in decades. >> Okay, on that note, Veeru, we'll call it a day. This was a great discussion, and we look forward to writing it up, clipping all the videos, and showering the internet with highlights. >> Thank you, George. Appreciate it. >> Hopefully we'll get you back soon. >> It was a pleasure, absolutely. >> With that, this is George Gilbert. We're in our Palo Alto studio for Wikibon and theCUBE, and we've been talking to Veeru Ramaswamy, who's VP of the Watson IoT platform. We look forward to coming back with Veeru sometime soon. (upbeat music)

Published Date : Aug 23 2017



Harriet Green, IBM - IBM Interconnect 2017 - #ibminterconnect - #theCUBE


 

(upbeat music) >> Announcer: Live from Las Vegas. It's The Cube. Covering Interconnect 2017. Brought to you by IBM. >> Welcome back everyone. We are here and live in Las Vegas. This is The Cube's coverage of IBM's Interconnect 2017. Three days of wall to wall coverage. Day two here. I'm John Furrier with my co-host Dave Vellante. Our next guest is Harriet Green, General Manager of Watson IoT, a Cube alumna. Great to see you again. Thanks for coming on The Cube this year again, appreciate it. >> Oh it's my pleasure. I hope we're going to talk about the Internet of Things, Watson customer engagement, and education. Those are things that we hope to talk about. >> Congratulations. You guys have an IoT center now in Munich. You guys had that big launch there, but the real thing that's happening in context, if you could zoom out on this, is that we're seeing the trends of cloud and the big data world being kind of accelerated together, and IoT seems to be the center point of the action because it's industrial, it's business, it's people, it's cars, it's the world now. The data piece of it is really accelerating. Now combine that with machine learning, and the glam of AI, the sizzle of artificial intelligence and cognitive, really kind of puts that at the center of the conversation. This is transformative. >> Oh totally, and I mean you guys were at the Genius of Things, so you know that there were 600 C-level executives: CIOs, Chief Innovation Officers, Chief Digital Officers from 400 different companies, accounting for about two trillion dollars of revenue. You're exactly right. It's across every major industry, every major sector. I think there are kind of three critical elements. The first is that with the whole proliferation of sensors and the cost points, etcetera, the amount of data and information that is being created is absolutely suited for Watson.
So all of those clients there, as you know, are working with us, and we shared 22 major outcomes: things faster, cheaper, better, that clients are actually experiencing. Watson is the differentiator, and from an IoT perspective, I think the other piece is that for a very long time IBM has proven that we respect and keep people's data perfectly safe. We don't use it, we don't open it, we don't go into it, we're not taking it for a future world of knowledge graph. We consider clients' data to be their DNA. People know that when you're doing IoT with IBM, that deep level of security is imbued within our capability. Then thirdly, whose data is it? Which is a huge thing in Europe, and we're able with our data centers to demonstrate, if you want to keep that data within Lower Bavaria, that's what we'll do. And those three elements, I think, are fundamental: cognitive, the protection of the data, and whose data is it? >> 'Cause who owns the data is really important. It's a big differentiator because the data informs the model. They're almost intertwined, so who owns the model? The client owns the model? Is that correct? >> Yeah, but I think people have over-complicated this, those who perhaps do not have such a simple and clear answer to it. Who don't have written into their terms and conditions that it's actually their data and they can hang onto it for as long as they like. We have always said to our clients, it's your data. It's absolutely your data. If we create something together with your data, it's still your data. People only start to confuse this when they have primary and secondary and tertiary levels of confusion to support their particular cause. There is no confusion with our clients. When you talk to the chief digital officers of Schaeffler, of ISS, of SNCF, who were up on stage with us yesterday, people who are demonstrating amazing outcomes that they didn't have before with IoT, they will say to you there are three reasons why we went with IBM. The first, the platform.
It is the best IoT platform. From an IDC, from a Gartner perspective, that's what Forrester, what the guys say. Secondly, our applications are very robust and help people get started on this IoT journey. Thirdly, the digital transformation that is happening alongside this, back to your convergence point, is something we're also able to assist with through our GBS IoT practice. >> And you're accelerating that too. Ginni Rometty on stage talking about how Watson's learning faster by industry, but it's not a silo thing. It's actually accelerating the transformation components. >> Well, you put your finger on that precisely, because the amazing thing about the Internet of Things is it's not just consumer, it's not just one industry. We're interfacing with 34 different industries that were represented at the Genius of Things. It's also affecting life. Yesterday you may have seen ISS and their amazing building that they've created, where now, as you arrive at terminal five, wherever it is, there's a huge rush and suddenly the elevators don't work. Remotely these elevators are being fixed, and the journey is absolutely amazing. It kind of is every industry. >> That social good angle is important; there's the cognitive-for-social-good trend going on right now culturally. That's really important. But I want to ask you- >> But I do think on the ... Ginni announced in Davos our cognitive principles. There's no client working with us that doesn't know we're working from a cognitive perspective. We go to great lengths to explain what we are doing and to whom it belongs, and that charter is not something that we just came up with. That's IBM for 105 years. It's why I chose to come here around the Internet of Things. >> It's super inspirational for me personally, and I want to ask you about a topic that's a passion for us as an organization. We've had the largest library of women in tech, going back to 2010; we've been interviewing some of the great leaders in the tech industry. This is really taking off now.
You heard Marc Benioff up on stage talking about all the goodness going on around equality and pay, everything else going on, but there are more women now instrumental on the computer science and business sides. How are you continuing that? We talked a little bit about this last year with the mentoring. How do you attract the talent? How do you get that inspiration for the young women and girls out there, whether grade school, high school, college? What's the plan? >> Well, first of all, I think IBM has on every level a proven history of diversity. 35 years before the Equal Pay Act, we had equal pay. We have an incredibly diverse cultural environment where, regardless of your age, your sex, your color, your creed, your sexuality, or your physical ability, if you're good you'll get on. IBM lives and breathes that in every sense. Now I think the challenge is, in North America particularly, in the '80s 30% of young women were going into the STEM subjects, and now it's dropped to just below 18%. I think it's absolutely critical that investors in companies are thinking about this equality and measuring the power of diversity and innovation. That leaders inside of businesses do more than just pontificate on stage but live and breathe it, as Ginni >> Walk the talk. >> Harriet: Does. And then also that all of us, in our decision making, particularly; I did, for International Women's Week last week, a whole WebEx around inclusion and how we include, how we exclude, and I shared a particular story: a couple of weeks ago someone said to me, you're just such a left field candidate, Harriet. And maybe that's a compliment. He happens to be a very nice guy, and maybe he's right, but we want people to feel included. One of the most amazing things that IBM has done for some time, which is almost unique, up there with Watson, is something we do to attract millennials particularly, but anyone can participate. It's a program where we take people who go on a totally immersive six or seven weeks.
It may be addressing human trafficking in Thailand. It may be helping to train and educate in sub-Saharan Africa. They work with local bodies, local institutions, and it really helps build this collaborative capability. And then there's all of the work we're doing with P-TECH around up-skilling and ensuring that the STEM subjects are really embraced by a very wide range of young people. >> Harriet, you're in demand, 'cause you've got to move around the events, so many places, and your time is very scarce and you have to move to the next event. Thank you for taking the time to share that with us, and also the awesomeness around IoT and Watson. Appreciate it, and good to see you. You look great. This is IBM Interconnect. Harriet Green, the leader of Watson IoT Customer Engagement and Support. I'm John Furrier with Dave Vellante. We'll be right back with more after this short break. (upbeat music)

Published Date : Mar 21 2017



Holden Karau, IBM Big Data SV 17 #BigDataSV #theCUBE


 

>> Announcer: Big Data Silicon Valley 2017. >> Hey, welcome back, everybody, Jeff Frick here with The Cube. We are live at the historic Pagoda Lounge in San Jose for Big Data SV, which is associated with Strata + Hadoop World, across the street, as well as Big Data week, so everything big data is happening in San Jose. We're happy to be here, love the new venue; if you're around, stop by, back of the Fairmont, Pagoda Lounge. We're excited to be joined in this next segment by, who's now become a regular, any time we're at a Big Data event, a Spark event, Holden always stops by. Holden Karau, she's a principal software engineer at IBM. Holden, great to see you. >> Thank you, it's wonderful to be back yet again. >> Absolutely, so the big data meme just keeps rolling, Google Cloud Next was last week, a lot of talk about AI and ML, and of course you're very involved in Spark, so what are you excited about these days? What are you, I'm sure you've got a couple presentations going on across the street. >> Yeah, so my two presentations this week, oh wow, I should remember them. So the one that I'm doing today is with my co-worker Seth Hendrickson, also at IBM, and we're going to be focused on how to use structured streaming for machine learning. And sort of, I think that's really interesting, because streaming machine learning is something a lot of people seem to want to do but aren't yet doing in production, so it's always fun to talk to people before they've built their systems. And then tomorrow I'm going to be talking with Joey on how to debug Spark, which is something that I, you know, a lot of people ask questions about, but I tend to not talk about, because it tends to scare people away, and so I try to keep the happy going. >> Jeff: Bugs are never fun. >> No, no, never fun.
>> Just picking up on that structured streaming and machine learning, there's this issue of, as we move more and more towards the industrial internet of things, having to process events as they come in and make a decision, and there's a range of latency that's required. Where do structured streaming and ML fit today, and where might that go? >> So structured streaming today, latency wise, is probably not something I would use for something like that right now. It's in the like sub-second range. Which is nice, but it's not what you want for like live serving of decisions for your car, right? That's just not going to be feasible. But I think it certainly has the potential to get a lot faster. We've seen a lot of renewed interest in MLlib local, which is really about making it so that we can take the models that we've trained in Spark and really push them out to the edge and sort of serve them at the edge, and apply our models on end devices. So I'm really excited about where that's going. To be fair, part of my excitement is someone else is doing that work, so I'm very excited that they're doing this work for me. >> Let me clarify on that, just to make sure I understand. So there's a lot of overhead in Spark, because it runs on a cluster, because you have an optimizer, because you have the high availability or the resilience, and so you're saying we can preserve the predict and maybe serve part and carve out all the other overhead for running in a very small environment. >> Right, yeah. So I think for a lot of these IoT devices and stuff like that, it actually makes a lot more sense to do the predictions on the device itself, right. These models generally are megabytes in size, and we don't need a cluster to do predictions on these models, right. We really need the cluster to train them, but I think for a lot of cases, pushing the prediction out to the edge node is actually a pretty reasonable use case.
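The split described here, training megabyte-scale models on the cluster and then serving them on a device with no Spark runtime at all, can be sketched in a few lines of plain Python. Everything in this sketch (the JSON weight format, the `predict` helper) is illustrative, not the actual MLlib local API:

```python
import json

# A linear model trained on the cluster can be exported as plain weights.
# This JSON layout is made up for illustration, not a real MLlib format.
exported = json.dumps({"weights": [0.4, -1.2, 3.0], "intercept": 0.5})

def predict(model_json, features):
    """Score a single event locally -- no cluster, no Spark runtime."""
    model = json.loads(model_json)
    return model["intercept"] + sum(
        w * x for w, x in zip(model["weights"], features)
    )

# Each incoming sensor event is scored as it arrives (roughly 0.6 here).
print(predict(exported, [1.0, 0.5, 0.1]))
```

The point of the exercise is that the serving side needs only the weights and a dot product, which is why pushing prediction to the edge avoids both cluster overhead and network latency.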
And so I'm really excited that we've got some work going on there. >> Taking that one step further, we've talked to a bunch of people, both like at GE, at their Minds and Machines show, and at IBM's Genius of Things, where you want to be able to train the models up in the cloud, where you're getting data from all the different devices, and then push the retrained model out to the edge. Can that happen in Spark, or do we have to have something else orchestrating all that? >> So actually pushing the model out isn't something that I would do in Spark itself, I think that's better served by other tools. Spark is not really well suited to large amounts of internet traffic, right. But it's really well suited to the training, and I think with MLlib local, essentially we'll be able to provide both sides of it, and the copy part will be left up to whoever it is that's doing the work, right, because like if you're copying over a cell network you need to do something very different than if you're broadcasting over terrestrial, XM, or something like that; you need to do something very different for satellite. >> If you're at the edge on a device, would you be actually running, like you were saying earlier, structured streaming, with the prediction? >> Right, I don't think you would use structured streaming per se on the edge device, but essentially there would be a lot of code shared between structured streaming and the code that you'd be using on the edge device. And it's being factored out now so that we can have this code sharing in Spark machine learning. And you would use structured streaming maybe on the training side, and then on the serving side you would use your custom local code. >> Okay, so tell us a little more about Spark ML today and how we can democratize machine learning, you know, for a bigger audience. >> Right, I think machine learning is great, but right now you really need a strong statistical background to really be able to apply it effectively.
And we probably can't get rid of that for all problems, but I think for a lot of problems, doing things like hyperparameter tuning can actually give really powerful tools to just like regular engineering folks who, they're smart, but maybe they don't have a strong machine learning background. And Spark's ML pipelines make it really easy to sort of construct multiple stages, and then just be like, okay, I don't know what these parameters should be, I want you to do a search over what these different parameters could be for me, and it makes it really easy to do this as just a regular engineer with less of an ML background. >> Would that be like, just for those of us who don't know what hyperparameter tuning is, that would be the knobs, the variables? >> Yeah, it's going to spin the knobs on like our regularization parameter on like our regression, and it can also spin some knobs on maybe the n-gram sizes that we're using on the inputs to something else, right. And it can compare how these knobs sort of interact with each other, because often you can tune one knob, but you actually have six different knobs that you want to tune, and you don't know, if you just explore each one individually, you're not going to find the best setting for them working together. >> So this would make it easier for, as you're saying, someone who's not a data scientist to set up a pipeline that lets you predict. >> I think so, very much. I think it does a lot of the, brings a lot of the benefits from sort of the SciPy world to the big data world. And SciPy is really wonderful about making machine learning really accessible, but it's just not ready for big data, and I think this does a good job of bringing these same concepts, if not the code, but the same concepts, to big data. >> The SciPy, if I understand, is it a notebook that would run essentially on one machine? >> SciPy can be put in a notebook environment, and generally it would run on, yeah, a single machine.
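The knob-spinning described in this exchange, trying combinations of settings together rather than tuning each knob in isolation, is at its core an exhaustive grid search. Spark's ParamGridBuilder and CrossValidator do this over a cluster with proper cross-validation; this toy sketch in plain Python (the parameter names and the scoring function are made up) only shows the idea:

```python
from itertools import product

# Hypothetical knobs echoing the conversation: regularization strength
# and n-gram size. The names mirror the talk, not a real Spark API.
grid = {"regParam": [0.01, 0.1, 1.0], "ngram": [1, 2, 3]}

def evaluate(params):
    # Stand-in for "fit the pipeline, score it on held-out data".
    # This fake score peaks at regParam=0.1 together with ngram=2.
    return -abs(params["regParam"] - 0.1) - abs(params["ngram"] - 2)

def grid_search(grid):
    # Try every combination, because the best value of one knob can
    # depend on the settings of the others.
    names = list(grid)
    candidates = (dict(zip(names, combo)) for combo in product(*grid.values()))
    return max(candidates, key=evaluate)

print(grid_search(grid))  # {'regParam': 0.1, 'ngram': 2}
```

An engineer without a statistics background only declares the candidate values; the search, not human intuition, finds the combination that works best together.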
>> And so to make that sit on Spark means that you could then run it on a cluster-- >> So this isn't actually taking SciPy and distributing it, this is just like stealing the good concepts from SciPy and making them available for big data people. Because SciPy's done a really good job of making a very intuitive machine learning interface. >> So just to put a fine sort of qualifier on one thing, if you're doing the internet of things and you have Spark at the edge and you're running the model there, it's the programming model, so structured streaming is one way of programming Spark, but if you don't have structured streaming at the edge, would you just be using the core batch Spark programming model? >> So at the edge you'd just be using, you wouldn't even be using batch, right, because you're trying to predict individual events, right, so you'd just be calling predict with every new event that you're getting in. And you might have a queue mechanism of some type. But essentially if we had this batch, we would be adding additional latency, and I think at the edge we really, the reason we're moving the models to the edge is to avoid the latency. >> So just to be clear then, is the programming model, so it wouldn't be structured streaming, and we're taking out all the overhead that forced us to use batch with Spark. So the reason I'm trying to clarify is a lot of people had this question for a long time, which is are we going to have a different programming model at the edge from what we have at the center? >> Yeah, that's a great question. And I don't think the answer is finished yet, but I think the work is being done to try and make it look the same.
Of course, you know, trying to make it look the same, this is Boosh, it's not like actually barking at us right now, even though she looks like a dog, she is, there will always be things which are a little bit different from the edge to your cluster, but I think Spark has done a really good job of making things look very similar on single node cases to multi node cases, and I think we can probably bring the same things to ML. >> Okay, so it's almost time, we're coming back, Spark took us from single machine to cluster, and now we have to essentially bring it back for an edge device that's really light weight. >> Yeah, I think at the end of the day, just from a latency point of view, that's what we have to do for serving. For some models, not for everyone. Like if you're building a website with a recommendation system, you don't need to serve that model like on the edge node, that's fine, but like if you've got a car device we can't depend on cell latency, right, you have to serve that in car. >> So what are some of the things, some of the other things that IBM is contributing to the ecosystem that you see having a big impact over the next couple years? >> So there's a lot of really exciting things coming out of IBM. And I'm obviously pretty biased. I spend a lot of time focused on Python support in Spark, and one of the most exciting things is coming from my co-worker Brian, I'm not going to say his last name in case I get it wrong, but Brian is amazing, and he's been working on integrating Arrow with Spark, and this can make it so that it's going to be a lot easier to sort of interoperate between JVM languages and Python and R, so I'm really optimistic about the sort of Python and R interfaces improving a lot in Spark and getting a lot faster as well. And we're also, in addition to the Arrow work, we've got some work around making it a lot easier for people in R and Python to get started. 
The R stuff is mostly actually the Microsoft people, thanks Felix, you're awesome. I don't actually know which camera I should have done that to, but that's okay. >> I think you got it! >> But Felix is amazing, and the other people working on R are too. But I think we've both been pursuing sort of making it so that people who are in the R or Python spaces can just use like pip install, conda install, or whatever tool it is they're used to working with, to just bring Spark onto their machine really easily, just like they would sort of any other software package that they're using. Because right now, for someone getting started in Spark, if you're in the Java space it's pretty easy, but if you're in R or Python you have to do sort of a lot of weird setup work, and it's worth it, but like if we can get rid of that friction, I think we can get a lot more people in these communities using Spark. >> Let me see, just as a scenario: R Server is getting fairly well integrated into SQL Server, so would you be able to use R as the language, with a Spark execution engine, to somehow integrate it into SQL Server as an execution engine for doing the machine learning and predicting? >> You definitely, well I shouldn't say definitely, you probably could do that. I don't necessarily know if that's a good idea, but that's the kind of stuff that this would enable, right; it'll make it so that people that are making tools in R or Python can just use Spark as another library, right, and it doesn't have to be this really special setup. It can just be this library, and they point it at the cluster, and they can do whatever work they want to do. That being said, the SQL Server R integration, if you find yourself using that to do like distributed computing, you should probably take a step back and like rethink what you're doing. >> George: Because it's not really scale out. >> It's not really set up for that.
And you might be better off connecting your Spark cluster to your SQL Server instance using something like JDBC or a special driver and doing it that way, but you definitely could do it in that inverted sort of way. >> So last question from me: if you look out a couple of years, how will we make machine learning accessible to a bigger and bigger audience? And I know you touched on the tuning of the knobs, hyperparameter tuning, but what will it look like ultimately? >> I think ML pipelines are probably what things are going to end up looking like. But the other part we'll see is a lot more examples of how to work with certain kinds of data, because right now, I know what I need to do when I'm ingesting some textual data, but I know that because I spent a week trying to figure out what the hell I was doing once, right? And I didn't bother to write it down, and it looks like no one else bothered to write it down either. So really, I think we'll see a lot of tools that look very similar to the tools we have today; they'll have more options and they'll be a bit easier to use, but the main thing we're really lacking right now is good documentation, good books, and just good resources for people to figure out how to use these tools. Now of course, I'm biased, because I work on these tools, so I'm like, yeah, they're pretty great. So there might be other people who are like, Holden, no, you're wrong, we need to rethink everything. But I think we can go very far with the pipeline concept. >> And that's good, right? The democratization of these things opens it up to more people; you get more creative people solving more different problems, and that makes the whole thing go.
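The pipeline concept referred to above, a chain of stages where each is fit on the data and then transforms it, can be illustrated without a cluster. This is a toy pure-Python sketch of the shape of the idea, not Spark ML's actual API:

```python
# Toy illustration of the ML-pipeline idea: a chain of stages,
# each fit on the data and then used to transform it, with the
# result fed forward to the next stage.

class Scaler:
    """Stage 1: scale values into [0, 1] using the max seen at fit time."""
    def fit(self, data):
        self.max_ = max(data)
        return self

    def transform(self, data):
        return [x / self.max_ for x in data]

class Thresholder:
    """Stage 2: turn scaled values into 0/1 labels."""
    def __init__(self, cutoff=0.5):
        self.cutoff = cutoff

    def fit(self, data):
        return self  # nothing to learn in this toy stage

    def transform(self, data):
        return [1 if x >= self.cutoff else 0 for x in data]

class Pipeline:
    """Run each stage's fit, then transform, feeding results forward."""
    def __init__(self, stages):
        self.stages = stages

    def fit_transform(self, data):
        for stage in self.stages:
            data = stage.fit(data).transform(data)
        return data

pipe = Pipeline([Scaler(), Thresholder(cutoff=0.5)])
print(pipe.fit_transform([1, 2, 3, 4]))  # -> [0, 1, 1, 1]
```

The appeal Holden describes is that hyperparameters (here, `cutoff`) live on the stages, so a tuner can search over them while the pipeline structure stays fixed.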
>> You can install Spark easily, you can, you know, set up an ML pipeline, you can train your model, you can start doing predictions. People who haven't been able to do machine learning at scale can get started super easily and build a recommendation system for their small little online shop, and be like, hey, you bought this, you might also want to buy Boosh. He's really cute, but you can't have this one. No, no, no, not this one. >> Such a tease! >> Holden: I'm sorry, I'm sorry. >> Well, Holden, we'll say goodbye for now; I'm sure we will see you in June in San Francisco at Spark Summit, and we look forward to the update. >> Holden: I look forward to chatting with you then. >> Absolutely, and break a leg this afternoon at your presentation. >> Holden: Thank you. >> She's Holden Karau, I'm Jeff Frick, he's George Gilbert, you're watching theCUBE. We're at Big Data SV, thanks for watching. (upbeat music)

Published Date : Mar 15 2017

