Image Title

Search Results for Turing:

Exascale – Why So Hard? | Exascale Day


 

from around the globe it's thecube with digital coverage of exascale day made possible by hewlett packard enterprise welcome everyone to the cube celebration of exascale day ben bennett is here he's an hpc strategist and evangelist at hewlett-packard enterprise ben welcome good to see you good to see you too son hey well let's evangelize exascale a little bit you know what's exciting you uh in regards to the coming of exoskilled computing um well there's a couple of things really uh for me historically i've worked in super computing for many years and i have seen the coming of several milestones from you know actually i'm old enough to remember gigaflops uh coming through and teraflops and petaflops exascale is has been harder than many of us anticipated many years ago the sheer amount of technology that has been required to deliver machines of this performance has been has been us utterly staggering but the exascale era brings with it real solutions it gives us opportunities to do things that we've not been able to do before if you look at some of the the most powerful computers around today they've they've really helped with um the pandemic kovid but we're still you know orders of magnitude away from being able to design drugs in situ test them in memory and release them to the public you know we still have lots and lots of lab work to do and exascale machines are going to help with that we are going to be able to to do more um which ultimately will will aid humanity and they used to be called the grand challenges and i still think of them as that i still think of these challenges for scientists that exascale class machines will be able to help but also i'm a realist is that in 10 20 30 years time you know i should be able to look back at this hopefully touch wood look back at it and look at much faster machines and say do you remember the days when we thought exascale was faster yeah well you mentioned the pandemic and you know the present united states was tweeting this morning that he was upset that you know the the fda in the u.s is not allowing the the vaccine to proceed as fast as you'd like it in fact it the fda is loosening some of its uh restrictions and i wonder if you know high performance computing in part is helping with the simulations and maybe predicting because a lot of this is about probabilities um and concerns is is is that work that is going on today or are you saying that that exascale actually you know would be what we need to accelerate that what's the role of hpc that you see today in regards to sort of solving for that vaccine and any other sort of pandemic related drugs so so first a disclaimer i am not a geneticist i am not a biochemist um my son is he tries to explain it to me and it tends to go in one ear and out the other um um i just merely build the machines he uses so we're sort of even on that front um if you read if you had read the press there was a lot of people offering up systems and computational resources for scientists a lot of the work that has been done understanding the mechanisms of covid19 um have been you know uncovered by the use of very very powerful computers would exascale have helped well clearly the faster the computers the more simulations we can do i think if you look back historically no vaccine has come to fruition as fast ever under modern rules okay admittedly the first vaccine was you know edward jenner sat quietly um you know smearing a few people and hoping it worked um i think we're slightly beyond that the fda has rules and regulations for a reason and we you don't have to go back far in our history to understand the nature of uh drugs that work for 99 of the population you know and i think exascale widely available exoscale and much faster computers are going to assist with that imagine having a genetic map of very large numbers of people on the earth and being able to test your drug against that breadth of person and you know that 99 of the time it works fine under fda rules you could never sell it you could never do that but if you're confident in your testing if you can demonstrate that you can keep the one percent away for whom that drug doesn't work bingo you now have a drug for the majority of the people and so many drugs that have so many benefits are not released and drugs are expensive because they fail at the last few moments you know the more testing you can do the more testing in memory the better it's going to be for everybody uh personally are we at a point where we still need human trials yes do we still need due diligence yes um we're not there yet exascale is you know it's coming it's not there yet yeah well to your point the faster the computer the more simulations and the higher the the chance that we're actually going to going to going to get it right and maybe compress that time to market but talk about some of the problems that you're working on uh and and the challenges for you know for example with the uk government and maybe maybe others that you can you can share with us help us understand kind of what you're hoping to accomplish so um within the united kingdom there was a report published um for the um for the uk research institute i think it's the uk research institute it might be epsrc however it's the body of people responsible for funding um science and there was a case a science case done for exascale i'm not a scientist um a lot of the work that was in this documentation said that a number of things that can be done today aren't good enough that we need to look further out we need to look at machines that will do much more there's been a program funded called asimov and this is a sort of a commercial problem that the uk government is working with rolls royce and they're trying to research how you build a full engine model and by full engine model i mean one that takes into account both the flow of gases through it and how those flow of gases and temperatures change the physical dynamics of the engine and of course as you change the physical dynamics of the engine you change the flow so you need a closely coupled model as air travel becomes more and more under the microscope we need to make sure that the air travel we do is as efficient as possible and currently there aren't supercomputers that have the performance one of the things i'm going to be doing as part of this sequence of conversations is i'm going to be having an in detailed uh sorry an in-depth but it will be very detailed an in-depth conversation with professor mark parsons from the edinburgh parallel computing center he's the director there and the dean of research at edinburgh university and i'm going to be talking to him about the azimoth program and and mark's experience as the person responsible for looking at exascale within the uk to try and determine what are the sort of science problems that we can solve as we move into the exoscale era and what that means for humanity what are the benefits for humans yeah and that's what i wanted to ask you about the the rolls-royce example that you gave it wasn't i if i understood it wasn't so much safety as it was you said efficiency and so that's that's what fuel consumption um it's it's partly fuel consumption it is of course safety there is a um there is a very specific test called an extreme event or the fan blade off what happens is they build an engine and they put it in a cowling and then they run the engine at full speed and then they literally explode uh they fire off a little explosive and they fire a fan belt uh a fan blade off to make sure that it doesn't go through the cowling and the reason they do that is there has been in the past uh a uh a failure of a fan blade and it came through the cowling and came into the aircraft depressurized the aircraft i think somebody was killed as a result of that and the aircraft went down i don't think it was a total loss one death being one too many but as a result you now have to build a jet engine instrument it balance the blades put an explosive in it and then blow the fan blade off now you only really want to do that once it's like car crash testing you want to build a model of the car you want to demonstrate with the dummy that it is safe you don't want to have to build lots of cars and keep going back to the drawing board so you do it in computers memory right we're okay with cars we have computational power to resolve to the level to determine whether or not the accident would hurt a human being still a long way to go to make them more efficient uh new materials how you can get away with lighter structures but we haven't got there with aircraft yet i mean we can build a simulation and we can do that and we can be pretty sure we're right um we still need to build an engine which costs in excess of 10 million dollars and blow the fan blade off it so okay so you're talking about some pretty complex simulations obviously what are some of the the barriers and and the breakthroughs that are kind of required you know to to do some of these things that you're talking about that exascale is going to enable i mean presumably there are obviously technical barriers but maybe you can shed some light on that well some of them are very prosaic so for example power exoscale machines consume a lot of power um so you have to be able to design systems that consume less power and that goes into making sure they're cooled efficiently if you use water can you reuse the water i mean the if you take a laptop and sit it on your lap and you type away for four hours you'll notice it gets quite warm um an exascale computer is going to generate a lot more heat several megawatts actually um and it sounds prosaic but it's actually very important to people you've got to make sure that the systems can be cooled and that we can power them yeah so there's that another issue is the software the software models how do you take a software model and distribute the data over many tens of thousands of nodes how do you do that efficiently if you look at you know gigaflop machines they had hundreds of nodes and each node had effectively a processor a core a thread of application we're looking at many many tens of thousands of nodes cores parallel threads running how do you make that efficient so is the software ready i think the majority of people will tell you that it's the software that's the problem not the hardware of course my friends in hardware would tell you ah software is easy it's the hardware that's the problem i think for the universities and the users the challenge is going to be the software i think um it's going to have to evolve you you're just you want to look at your machine and you just want to be able to dump work onto it easily we're not there yet not by a long stretch of the imagination yeah consequently you know we one of the things that we're doing is that we have a lot of centers of excellence is we will provide well i hate say the word provide we we sell super computers and once the machine has gone in we work very closely with the establishments create centers of excellence to get the best out of the machines to improve the software um and if a machine's expensive you want to get the most out of it that you can you don't just want to run a synthetic benchmark and say look i'm the fastest supercomputer on the planet you know your users who want access to it are the people that really decide how useful it is and the work they get out of it yeah the economics is definitely a factor in fact the fastest supercomputer in the planet but you can't if you can't afford to use it what good is it uh you mentioned power uh and then the flip side of that coin is of course cooling you can reduce the power consumption but but how challenging is it to cool these systems um it's an engineering problem yeah we we have you know uh data centers in iceland where it gets um you know it doesn't get too warm we have a big air cooled data center in in the united kingdom where it never gets above 30 degrees centigrade so if you put in water at 40 degrees centigrade and it comes out at 50 degrees centigrade you can cool it by just pumping it round the air you know just putting it outside the building because the building will you know never gets above 30 so it'll easily drop it back to 40 to enable you to put it back into the machine um right other ways to do it um you know is to take the heat and use it commercially there's a there's a lovely story of they take the hot water out of the supercomputer in the nordics um and then they pump it into a brewery to keep the mash tuns warm you know that's that's the sort of engineering i can get behind yeah indeed that's a great application talk a little bit more about your conversation with professor parsons maybe we could double click into that what are some of the things that you're going to you're going to probe there what are you hoping to learn so i think some of the things that that are going to be interesting to uncover is just the breadth of science that can be uh that could take advantage of exascale you know there are there are many things going on that uh that people hear about you know we people are interested in um you know the nobel prize they might have no idea what it means but the nobel prize for physics was awarded um to do with research into black holes you know fascinating and truly insightful physics um could it benefit from exascale i have no idea uh i i really don't um you know one of the most profound pieces of knowledge in in the last few hundred years has been the theory of relativity you know an austrian patent clerk wrote e equals m c squared on the back of an envelope and and voila i i don't believe any form of exascale computing would have helped him get there any faster right that's maybe flippant but i think the point is is that there are areas in terms of weather prediction climate prediction drug discovery um material knowledge engineering uh problems that are going to be unlocked with the use of exascale class systems we are going to be able to provide more tools more insight [Music] and that's the purpose of computing you know it's not that it's not the data that that comes out and it's the insight we get from it yeah i often say data is plentiful insights are not um ben you're a bit of an industry historian so i've got to ask you you mentioned you mentioned mentioned gigaflop gigaflops before which i think goes back to the early 1970s uh but the history actually the 80s is it the 80s okay well the history of computing goes back even before that you know yes i thought i thought seymour cray was you know kind of father of super computing but perhaps you have another point of view as to the origination of high performance computing [Music] oh yes this is um this is this is one for all my colleagues globally um you know arguably he says getting ready to be attacked from all sides arguably you know um computing uh the parallel work and the research done during the war by alan turing is the father of high performance computing i think one of the problems we have is that so much of that work was classified so much of that work was kept away from commercial people that commercial computing evolved without that knowledge i uh i have done in in in a previous life i have done some work for the british science museum and i have had the great pleasure in walking through the the british science museum archives um to look at how computing has evolved from things like the the pascaline from blaise pascal you know napier's bones the babbage's machines uh to to look all the way through the analog machines you know what conrad zeus was doing on a desktop um i think i think what's important is it doesn't matter where you are is that it is the problem that drives the technology and it's having the problems that requires the you know the human race to look at solutions and be these kicks started by you know the terrible problem that the us has with its nuclear stockpile stewardship now you've invented them how do you keep them safe originally done through the ascii program that's driven a lot of computational advances ultimately it's our quest for knowledge that drives these machines and i think as long as we are interested as long as we want to find things out there will always be advances in computing to meet that need yeah and you know it was a great conversation uh you're a brilliant guest i i love this this this talk and uh and of course as the saying goes success has many fathers so there's probably a few polish mathematicians that would stake a claim in the uh the original enigma project as well i think i think they drove the algorithm i think the problem is is that the work of tommy flowers is the person who took the algorithms and the work that um that was being done and actually had to build the poor machine he's the guy that actually had to sit there and go how do i turn this into a machine that does that and and so you know people always remember touring very few people remember tommy flowers who actually had to turn the great work um into a working machine yeah super computer team sport well ben it's great to have you on thanks so much for your perspectives best of luck with your conversation with professor parsons we'll be looking forward to that and uh and thanks so much for coming on thecube a complete pleasure thank you and thank you everybody for watching this is dave vellante we're celebrating exascale day you're watching the cube [Music]

Published Date : Oct 16 2020

SUMMARY :

that requires the you know the human

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
mark parsonsPERSON

0.99+

ben bennettPERSON

0.99+

todayDATE

0.99+

hundreds of nodesQUANTITY

0.99+

dave vellantePERSON

0.98+

pandemicEVENT

0.98+

united kingdomLOCATION

0.98+

seymour crayPERSON

0.98+

one earQUANTITY

0.98+

first vaccineQUANTITY

0.98+

markPERSON

0.98+

four hoursQUANTITY

0.97+

tens of thousands of nodesQUANTITY

0.97+

blaise pascalPERSON

0.97+

one percentQUANTITY

0.97+

50 degrees centigradeQUANTITY

0.97+

oneQUANTITY

0.97+

40QUANTITY

0.97+

nobel prizeTITLE

0.97+

rolls royceORGANIZATION

0.96+

each nodeQUANTITY

0.96+

early 1970sDATE

0.96+

hpcORGANIZATION

0.96+

10 million dollarsQUANTITY

0.95+

uk governmentORGANIZATION

0.95+

fdaORGANIZATION

0.95+

united statesORGANIZATION

0.94+

bothQUANTITY

0.94+

this morningDATE

0.94+

40 degrees centigradeQUANTITY

0.94+

one deathQUANTITY

0.93+

hewlett packardORGANIZATION

0.93+

earthLOCATION

0.93+

exascaleTITLE

0.93+

above 30QUANTITY

0.93+

99 of the populationQUANTITY

0.92+

Why So Hard?TITLE

0.92+

uk research instituteORGANIZATION

0.92+

lots of carsQUANTITY

0.92+

exascale dayEVENT

0.9+

conrad zeusPERSON

0.9+

firstQUANTITY

0.9+

edinburgh universityORGANIZATION

0.89+

many years agoDATE

0.89+

asimovTITLE

0.88+

Exascale DayEVENT

0.88+

ukLOCATION

0.87+

professorPERSON

0.87+

parsonsPERSON

0.86+

99 ofQUANTITY

0.86+

above 30 degrees centigradeQUANTITY

0.85+

edward jennerPERSON

0.85+

alan turingPERSON

0.83+

thingsQUANTITY

0.83+

80sDATE

0.82+

epsrcORGANIZATION

0.82+

last few hundred yearsDATE

0.82+

ExascaleTITLE

0.8+

a lot of peopleQUANTITY

0.79+

covid19OTHER

0.78+

hewlett-packardORGANIZATION

0.77+

britishOTHER

0.76+

tommyPERSON

0.75+

edinburgh parallel computing centerORGANIZATION

0.74+

one ofQUANTITY

0.73+

nordicsLOCATION

0.71+

so many drugsQUANTITY

0.7+

manyQUANTITY

0.69+

many yearsQUANTITY

0.68+

lots and lots of lab workQUANTITY

0.68+

large numbers of peopleQUANTITY

0.68+

hpcEVENT

0.68+

peopleQUANTITY

0.68+

Silvano Gai, Pensando | Future Proof Your Enterprise 2020


 

>> Narrator: From the Cube Studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi, and welcome to this CUBE conversation, I'm Stu Min and I'm coming to you from our Boston area studio, we've been digging in with the Pensando team, understand how they're fitting into the cloud, multi-cloud, edge discussion, really thrilled to welcome to the program, first time guest, Silvano Gai, he's a fellow with Pensando. Silvano, really nice to see you again, thanks so much for joining us on theCUBE. >> Stuart, it's so nice to see you, we used to work together many years ago and that was really good and is really nice to come to you from Oregon, from Bend, Oregon. A beautiful town in the high desert of Oregon. >> I do love the Pacific North West, I miss the planes and the hotels, I should say, I don't miss the planes and the hotels, but going to see some of the beautiful places is something I do miss and getting to see people in the industry I do like. As you mentioned, you and I crossed paths back through some of the spin-ins, back when I was working for a very large storage company, you were working for SISCO, you were known for writing the book, you were a professor in Italy, many of the people that worked on some of those technologies were your students. But Silvano, my understanding is you retired so, maybe share for our audience, what brought you out of that retirement and into working once again with some of your former colleagues and on the Pensando opportunity. >> I did retire for a while, I retired in 2011 from Cisco if I remember correctly. But at the end of 2016, beginning of 2017, some old friend that you may remember and know called me to discuss some interesting idea, which was basically the seed idea that is behind the Pensando product and their idea were interesting, what we built, of course, is not exactly the original idea because you know product evolve over time, but I think we have something interesting that is adequate and probably superb for the new way to design the data center network, both for enterprise and cloud. >> All right, and Silvano, I mentioned that you've written a number of books, really the authoritative look on when some new products had been released before. So, you've got a new book, "Building a Future-Proof Cloud Infrastructure," and look at you, you've got the physical copy, I've only gotten the soft version. The title, really interesting. Help us understand how Pensando's platform is meeting that future-proof cloud infrastructure that you discuss. >> Well, network have evolved dramatically in the data center and in the cloud. You know, now the speed of classical server in enterprise is probably 25 gigabits, in the cloud we are talking of 100 gigabit of speed for a server, going to 200 gigabit. Now, the backbone are ridiculously fast. We no longer use Spanning Tree and all the stuff, we no longer use access code aggregation. We switched to closed network, and with closed network, we have huge enormous amount of bandwidth and that is good but it also imply that is not easy to do services in a centralized fashion. If you want to do a service in a centralized fashion, what you end up doing is creating a giant bottleneck. You basically, there is this word that is being used, that is trombone or tromboning. You try to funnel all this traffic through the bottleneck and this is not really going to work. The only place that you can really do services is at the edge, and this is not an invention, I mean, even all the principles of cloud is move everything to the edge and maintain the network as simple as possible. So, we approach services with the same general philosophy. We try to move services to the edge, as close as possible to the server and basically at the border between the sever and the network. And when I mean services I mean three main categories of services. The networking services of course, there is the basic layer, two-layer, three stuff, plus the bonding, you know VAMlog and what is needed to connect a server to a network. But then there is the overlay, overlay like the xLAN or Geneva, very very important, basically to build a cloud infrastructure, and that are basically the network service. We can have others but that, sort of is the core of a network service. Some people want to run BGP layers, some people don't want to run BGP. There may be a VPN or kind of things like that but that is the core of a network service. Then of course, and we go back to the time we worked together, there are storage services. At that time, we were discussing mostly about fiber tunnel, now the BUS world is clearly NVMe, but it's not just the BUS world, it's really a new way of doing storage, and is very very interesting. So, NVMe kind of service are very important and NVMe as a version that is called NVMeOF, over fiber. Which is basically, sort of remote version of NVMe. And then the third, least but not last, most important category probably, is security. And when I say that security is very very important, you know, the fact that security is very important is clear to everybody in our day, and I think security has two main branches in terms of services. There is the classical firewall and micro-segmentation, in which you basically try to enforce the fact that only who is allowed to access something can access something. But you don't, at that point, care too much about the privacy of the data. Then there is the other branch that encryption, in which you are not trying to enforce to decide who can access or not access the resource, but you are basically caring about the privacy of the data, encrypting the data so that if it is hijacked, snooped or whatever, it cannot be decoded. >> Eccellent, so Silvano, absolutely the edge is a huge opportunity. When someone looks at the overall solution and say you're putting something in the edge, you know, they could just say, "This really looks like a NIC." You talked about some of the previous engagement we'd worked on, host bus adapters, smart NICs and the like. There were some things we could build in but there were limits that we had, so, what differentiates the Pensando solution from what we would traditionally think of as an adapter card in the past? >> Well, the Pensando solution has two main, multiple pieces but in term of hardware, has two main pieces, there is an ASIC that we call copper internally. That ASIC is not strictly related to be used only in an adapter form, you can deploy it also in other form factors in another part of the network in other embodiment, et cetera. And then there is a card, the card has a PCI-E interface and sit in a PCI-E slot. So yes, in that sense, somebody can can call it a NIC and since it's a pretty good NIC, somebody can call it a smart NIC. We don't really like that two terms, we prefer to call it DSC, domain specific card, but the real term that I like to use is domain specific hardware, and I like to use domain specific hardware because it's the same term that Hennessy and Patterson use in a beautiful piece of literature that is the Turing Award lecture. It's on the internet, it's public, I really ask everybody to go and try to find it and listen to that beautiful piece of literature, modern literature on computer architecture. The Turing Award lecture of Hennessy and Patterson. And they have introduced the concept of domain specific hardware, and they explain also the justification for why now is important to look at domain specific hardware. And the justification is basically in a nutshell and we can go more deep if you're interested, but in a nutshell is that the specing, that is the single tried performer's measurement of a CPU, is not growing fast at all, is only growing nowadays like a few point percent a year, maybe 4% per year. And with this slow grow, over specing performance of a core, you know the core need to be really used for user application, for customer application, and all what is known as Sentian can be moved to some domain specific hardware that can do that in a much better fashion, and by no mean I imply that the DSC is the best example of domain specific hardware. The best example of domain specific hardware is in front of all of us, and are GPUs. And not GPUs for graphic processing which are also important, but GPU used basically for artificial intelligence, machine learning inference. You know, that is a piece of hardware that has shown that something can be done with performance that the purpose processor can do. >> Yeah, it's interesting right. If you term back the clock 10 or 15 years ago, I used to be in arguments, and you say, "Do you build an offload, "or do you let it happen is software." And I was always like, "Oh, well Moore's law with mean that, "you know, the software solution will always win, "because if you bake it in hardware, it's too slow." It's a very different world today, you talk about how fast things speed up. From your customer standpoint though, often some of those architectural things are something that I've looked for my suppliers to take care of that. Speak to the use case, what does this all mean from a customer stand point, what are some of those early use cases that you're looking at? >> Well, as always, you get a bit surprised by the use cases, in the sense that you start to design a product thinking that some of the most cool thing will be the dominant use cases, and then you discover that something that you have never really fought have the most interesting use case. One that we have fought about since day one, but it's really becoming super interesting is telemetry. Basically, measuring everything in the network, and understanding what is happening in the network. I was speaking with a friend the other day, and the friend was asking me, "Oh, but we have SNMP for many many years, "which is the difference between SNMP and telemetry?" And the difference is to me, the real difference is in SNMP or in many of these management protocol, you involve a management plan, you involve a control plan, and then you go to read something that is in the data plan. But the process is so inefficient that you cannot really get a huge volume of data, and you cannot get it practically enough, with enough performance. Doing telemetry means thinking a data path, building a data path that is capable of not only measuring everything realtime, but also sending out that measurement without involving anything else, without involving the control path and the management path so that the measurement becomes really very efficient and the data that you stream out becomes really usable data, actionable data in realtime. So telemetry is clearly the first one, is important. One that you honestly, we had built but we weren't thinking this was going to have so much success is what we call Bidirectional ERSPAN. And basically, is just the capability of copying data. And sending data that the card see to a station. And that is very very useful for replacing what are called TAP network, Which is just network, but many customer put in parallel to the real network just to observe the real network and to be able to troubleshoot and diagnose problem in the real network. So, this two feature telemetry and ERSPAN that are basically troubleshooting feature are the two features that are beginning are getting more traction. >> You're talking about realtime things like telemetry. You know, the applications and the integrations that you need to deal with are so important, back in some of the previous start-ups that you done was getting ready for, say how do we optimize for virtualization, today you talk cloud-native architectures, streaming, very popular, very modular, often container based solutions and things change constantly. You look at some of these architectures, it's not a single thing that goes on for a long period of time, but it's lots of things that happen over shorter periods of time. So, what integrations do you need to do, and what architecturally, how do you build things to make them as you talk, future-proof for these kind of cloud architectures? >> Yeah, what I mentioned were just the two low hanging fruit, if you want the first two low hanging fruit of this architecture. But basically, the two that come immediately after and where there is a huge amount of radio are distributor's state for firewall, with micro-segmentation support. That is a huge topic in itself. So important nowadays that is absolutely fundamental to be able to build a cloud. That is very important, and the second one is wire rate encryption. There is so much demand for privacy, and so much demand to encrypt the data. Not only between data center but now also inside the data center. And when you look at a large bank for example. A large bank is no longer a single organization. A large bank is multiple organizations that are compartmentalized by law. That need to keep things separate by law, by regulation, by FCC regulation. And if you don't have encryption, and if you don't have distributed firewall, is really very difficult to achieve that. And then you know, there are other applications, we mentioned storage NVME, and is a very nice application, and then we have even more, if you go to look at load balance in between server, doing compression for storage and other possible applications. But I sort of lost your real question. >> So, just part of the pieces, when you look at integrations that Pensando needs to do, for maybe some of the applications that you would tie in to any of those that come to mind? >> Yeah, well for sure. It depends, I see two main branches again. One is the cloud provider, and one are the enterprise. In the cloud provider, basically this cloud provider have a huge management infrastructure that is already built and they want just the card to adapt to this, to be controllable by this huge management infrastructure. They already know which rule they want to send to the card, they already know which feature they want to enable on the card. They already have all that, they just want the card to provide the data plan performers for that particular feature. So they're going to build something particular that is specific for that particular cloud provider that adapt to that cloud provider architecture. We want the flexibility of having an API on the card that is like a rest API or a gRPC which they can easily program, monitor and control that card. When you look at the enterprise, the situation is different. Enterprise is looking to at two things. Two or three things. The first thing is a complete solution. They don't want to, they don't have the management infrastructure that they have built like a cloud provider. They want a complete solution that has the card and the management station and there's all what is required to make from day one, a working solution, which is absolutely correct in an enterprise environment. They also want integration, and integration is the tool that they already have. If you look at main enterprise, one of a dominant presence is clearly VMware virtualization in terms of ESX and vSphere and NSX. And so most of the customer are asking us to integrate with VMware, which is a very reasonable demand. And then of course, there are other player, not so much in the virtualization's space, but for example, in the data collections space, and the data analysis space, and for sure Pensando doesn't want to reinvent the wheel there, doesn't want to build a data collector or data analysis engine and whatever, there is a lot of work, and there are a lot out there, so integration with things like Splunk for example are kind of natural for Pensando. >> Eccellent, so wait, you talked about some of the places where Pensando doesn't need to reinvent the wheel, you talk through a lot of the different technology pieces. If I had to have you pull out one, what would you say is the biggest innovation that Pensando has built into the platform. >> Well, the biggest innovation is this P4 architecture. And the P4 architecture was a sort of gift that was given us in the sense that it was not invented for what we use it. P4 was basically invented to have programmable switches. The first big P4 company was clearly Barefoot that then was acquired by Intel and Barefoot built a programmable switch. But if you look at the reality of today, the network, most of the people want the network to be super easy. They don't want to program anything into the network. They want to program everything at the edge, they want to put all the intelligence and the programmability of the edge, so we borrowed the P4 architecture, which is fantastic programmable architecture and we implemented that yet. It's also easier because the bandwidth is clearly more limited at the edge compared to being in the core of a network. And that P4 architecture give us a huge advantage. If you, tomorrow come up with the Stuart Encapsulation Super Duper Technology, I can implement in the copper The Stuart, whatever it was called, Super Duper Encapsulation Technology, even when I design the ASIC I didn't know that encapsulation exists. Is the data plan programmability, is the capability to program the data plan and programming the data plan while maintaining wire-speed performance, which I think is the biggest benefit of Pensando. >> All right, well Silvano, thank you so much for sharing, your journey with Pensando so far, really interesting to dig into it and absolutely look forward to following progress as it goes. >> Stuart, it's been really a pleasure to talk with you, I hope to talk with you again in the near future. Thank you so much. >> All right, and thank you for watching theCUBE, I'm Stu Miniman, thanks for watching. (upbeat music)

Published Date : Jun 17 2020

SUMMARY :

leaders all around the world, I'm Stu Min and I'm coming to you and is really nice to and on the Pensando opportunity. that is behind the Pensando product I've only gotten the soft version. but that is the core of a network service. as an adapter card in the past? but the real term that I like to use "you know, the software and the data that you stream out becomes really usable data, and the integrations and the second one is and integration is the tool that Pensando has built into the platform. is the capability to program the data plan and absolutely look forward to I hope to talk with you you for watching theCUBE,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SilvanoPERSON

0.99+

OregonLOCATION

0.99+

SISCOORGANIZATION

0.99+

2011DATE

0.99+

Stu MinPERSON

0.99+

PensandoORGANIZATION

0.99+

TwoQUANTITY

0.99+

ItalyLOCATION

0.99+

Silvano GaiPERSON

0.99+

BarefootORGANIZATION

0.99+

BostonLOCATION

0.99+

StuartPERSON

0.99+

CiscoORGANIZATION

0.99+

two featuresQUANTITY

0.99+

two main piecesQUANTITY

0.99+

Stu MinimanPERSON

0.99+

200 gigabitQUANTITY

0.99+

OneQUANTITY

0.99+

Palo AltoLOCATION

0.99+

100 gigabitQUANTITY

0.99+

two termsQUANTITY

0.99+

25 gigabitsQUANTITY

0.99+

FCCORGANIZATION

0.99+

Pacific North WestLOCATION

0.99+

IntelORGANIZATION

0.99+

oneQUANTITY

0.99+

twoQUANTITY

0.99+

Bend, OregonLOCATION

0.99+

two thingsQUANTITY

0.99+

Building a Future-Proof Cloud InfrastructureTITLE

0.99+

thirdQUANTITY

0.98+

10DATE

0.98+

first oneQUANTITY

0.98+

Future Proof Your EnterpriseTITLE

0.98+

two main branchesQUANTITY

0.98+

vSphereTITLE

0.98+

ESXTITLE

0.98+

firstQUANTITY

0.98+

two-layerQUANTITY

0.98+

tomorrowDATE

0.98+

three thingsQUANTITY

0.97+

MoorePERSON

0.97+

Cube StudiosORGANIZATION

0.97+

two featureQUANTITY

0.97+

bothQUANTITY

0.97+

todayDATE

0.97+

two main branchesQUANTITY

0.96+

two mainQUANTITY

0.96+

single thingQUANTITY

0.96+

first timeQUANTITY

0.95+

4% per yearQUANTITY

0.95+

HennessyORGANIZATION

0.95+

first thingQUANTITY

0.95+

15 years agoDATE

0.94+

second oneQUANTITY

0.93+

single organizationQUANTITY

0.92+

NSXTITLE

0.91+

singleQUANTITY

0.9+

CUBEORGANIZATION

0.89+

ERSPANORGANIZATION

0.89+

SplunkORGANIZATION

0.88+

P4COMMERCIAL_ITEM

0.85+

P4ORGANIZATION

0.84+

PensandoLOCATION

0.84+

2016DATE

0.83+

TuringEVENT

0.82+

two low hangingQUANTITY

0.79+

VMwareTITLE

0.77+

2020DATE

0.77+

Super Duper Encapsulation TechnologyOTHER

0.77+

PattersonORGANIZATION

0.76+

UNLIST TILL 4/2 - A Technical Overview of Vertica Architecture


 

>> Paige: Hello, everybody and thank you for joining us today on the Virtual Vertica BDC 2020. Today's breakout session is entitled A Technical Overview of the Vertica Architecture. I'm Paige Roberts, Open Source Relations Manager at Vertica and I'll be your host for this webinar. Now joining me is Ryan Role-kuh? Did I say that right? (laughs) He's a Vertica Senior Software Engineer. >> Ryan: So it's Roelke. (laughs) >> Paige: Roelke, okay, I got it, all right. Ryan Roelke. And before we begin, I want to be sure and encourage you guys to submit your questions or your comments during the virtual session while Ryan is talking as you think of them as you go along. You don't have to wait to the end, just type in your question or your comment in the question box below the slides and click submit. There'll be a Q and A at the end of the presentation and we'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to get back to you offline. Now, alternatively, you can visit the Vertica forums to post your question there after the session as well. Our engineering team is planning to join the forums to keep the conversation going, so you can have a chat afterwards with the engineer, just like any other conference. Now also, you can maximize your screen by clicking the double arrow button in the lower right corner of the slides and before you ask, yes, this virtual session is being recorded and it will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now, let's get started. Over to you, Ryan. >> Ryan: Thanks, Paige. Good afternoon, everybody. My name is Ryan and I'm a Senior Software Engineer on Vertica's Development Team. I primarily work on improving Vertica's query execution engine, so usually in the space of making things faster. Today, I'm here to talk about something that's more general than that, so we're going to go through a technical overview of the Vertica architecture. So the intent of this talk, essentially, is to just explain some of the basic aspects of how Vertica works and what makes it such a great database software and to explain what makes a query execute so fast in Vertica, we'll provide some background to explain why other databases don't keep up. And we'll use that as a starting point to discuss an academic database that paved the way for Vertica. And then we'll explain how Vertica design builds upon that academic database to be the great software that it is today. I want to start by sharing somebody's approximation of an internet minute at some point in 2019. All of the data on this slide is generated by thousands or even millions of users and that's a huge amount of activity. Most of the applications depicted here are backed by one or more databases. Most of this activity will eventually result in changes to those databases. For the most part, we can categorize the way these databases are used into one of two paradigms. First up, we have online transaction processing or OLTP. OLTP workloads usually operate on single entries in a database, so an update to a retail inventory or a change in a bank account balance are both great examples of OLTP operations. Updates to these data sets must be visible immediately and there could be many transactions occurring concurrently from many different users. OLTP queries are usually key value queries. The key uniquely identifies the single entry in a database for reading or writing. Early databases and applications were probably designed for OLTP workloads. This example on the slide is typical of an OLTP workload. We have a table, accounts, such as for a bank, which tracks information for each of the bank's clients. An update query, like the one depicted here, might be run whenever a user deposits $10 into their bank account. Our second category is online analytical processing or OLAP which is more about using your data for decision making. If you have a hardware device which periodically records how it's doing, you could analyze trends of all your devices over time to observe what data patterns are likely to lead to failure or if you're Google, you might log user search activity to identify which links helped your users find the answer. Analytical processing has always been around but with the advent of the internet, it happened at scales that were unimaginable, even just 20 years ago. This SQL example is something you might see in an OLAP workload. We have a table, searches, logging user activity. We will eventually see one row in this table for each query submitted by users. If we want to find out what time of day our users are most active, then we could write a query like this one on the slide which counts the number of unique users running searches for each hour of the day. So now let's rewind to 2005. We don't have a picture of an internet minute in 2005, we don't have the data for that. We also don't have the data for a lot of other things. The term Big Data is not quite yet on anyone's radar and The Cloud is also not quite there or it's just starting to be. So if you have a database serving your application, it's probably optimized for OLTP workloads. OLAP workloads just aren't mainstream yet and database engineers probably don't have them in mind. So let's innovate. It's still 2005 and we want to try something new with our database. Let's take a look at what happens when we do run an analytic workload in 2005. Let's use as a motivating example a table of stock prices over time. In our table, the symbol column identifies the stock that was traded, the price column identifies the new price and the timestamp column indicates when the price changed. We have several other columns which, we should know that they're there, but we're not going to use them in any example queries. This table is designed for analytic queries. We're probably not going to make any updates or look at individual rows since we're logging historical data and want to analyze changes in stock price over time. Our database system is built to serve OLTP use cases, so it's probably going to store the table on disk in a single file like this one. Notice that each row contains all of the columns of our data in row major order. There's probably an index somewhere in the memory of the system which will help us to point lookups. Maybe our system expects that we will use the stock symbol and the trade time as lookup keys. So an index will provide quick lookups for those columns to the position of the whole row in the file. If we did have an update to a single row, then this representation would work great. We would seek to the row that we're interested in, finding it would probably be very fast using the in-memory index. And then we would update the file in place with our new value. On the other hand, if we ran an analytic query like we want to, the data access pattern is very different. The index is not helpful because we're looking up a whole range of rows, not just a single row. As a result, the only way to find the rows that we actually need for this query is to scan the entire file. We're going to end up scanning a lot of data that we don't need and that won't just be the rows that we don't need, there's many other columns in this table. Many information about who made the transaction, and we'll also be scanning through those columns for every single row in this table. That could be a very serious problem once we consider the scale of this file. Stocks change a lot, we probably have thousands or millions or maybe even billions of rows that are going to be stored in this file and we're going to scan all of these extra columns for every single row. If we tried out our stocks use case behind the desk for the Fortune 500 company, then we're probably going to be pretty disappointed. Our queries will eventually finish, but it might take so long that we don't even care about the answer anymore by the time that they do. Our database is not built for the task we want to use it for. Around the same time, a team of researchers in the North East have become aware of this problem and they decided to dedicate their time and research to it. These researchers weren't just anybody. The fruits of their labor, which we now like to call the C-Store Paper, was published by eventual Turing Award winner, Mike Stonebraker, along with several other researchers from elite universities. This paper presents the design of a read-optimized relational DBMS that contrasts sharply with most current systems, which are write-optimized. That sounds exactly like what we want for our stocks use case. Reasoning about what makes our queries executions so slow brought our researchers to the Memory Hierarchy, which essentially is a visualization of the relative speeds of different parts of a computer. At the top of the hierarchy, we have the fastest data units, which are, of course, also the most expensive to produce. As we move down the hierarchy, components get slower but also much cheaper and thus you can have more of them. Our OLTP databases data is stored in a file on the hard disk. We scanned the entirety of this file, even though we didn't need most of the data and now it turns out, that is just about the slowest thing that our query could possibly be doing by over two orders of magnitude. It should be clear, based on that, that the best thing we can do to optimize our query's execution is to avoid reading unnecessary data from the disk and that's what the C-Store researchers decided to look at. The key innovation of the C-Store paper does exactly that. Instead of storing data in a row major order, in a large file on disk, they transposed the data and stored each column in its own file. Now, if we run the same select query, we read only the relevant columns. The unnamed columns don't factor into the table scan at all since we don't even open the files. Zooming out to an internet scale sized data set, we can appreciate the savings here a lot more. But we still have to read a lot of data that we don't need to answer this particular query. Remember, we had two predicates, one on the symbol column and one on the timestamp column. Our query is only interested in AAPL stock, but we're still reading rows for all of the other stocks. So what can we do to optimize our disk read even more? Let's first partition our data set into different files based on the timestamp date. This means that we will keep separate files for each date. When we query the stocks table, the database knows all of the files we have to open. If we have a simple predicate on the timestamp column, as our sample query does, then the database can use it to figure out which files we don't have to look at at all. So now all of our disk reads that we have to do to answer our query will produce rows that pass the timestamp predicate. This eliminates a lot of wasteful disk reads. But not all of them. We do have another predicate on the symbol column where symbol equals AAPL. We'd like to avoid disk reads of rows that don't satisfy that predicate either. And we can avoid those disk reads by clustering all the rows that match the symbol predicate together. If all of the AAPL rows are adjacent, then as soon as we see something different, we can stop reading the file. We won't see any more rows that can pass the predicate. Then we can use the positions of the rows we did find to identify which pieces of the other columns we need to read. One technique that we can use to cluster the rows is sorting. So we'll use the symbol column as a sort key for all of the columns. And that way we can reconstruct a whole row by seeking to the same row position in each file. It turns out, having sorted all of the rows, we can do a bit more. We don't have any more wasted disk reads but we can still be more efficient with how we're using the disk. We've clustered all of the rows with the same symbol together so we don't really need to bother repeating the symbol so many times in the same file. Let's just write the value once and say how many rows we have. This one length encoding technique can compress large numbers of rows into a small amount of space. In this example, we do de-duplicate just a few rows but you can imagine de-duplicating many thousands of rows instead. This encoding is great for reducing the amounts of disk we need to read at query time, but it also has the additional benefit of reducing the total size of our stored data. Now our query requires substantially fewer disk reads than it did when we started. Let's recap what the C-Store paper did to achieve that. First, we transposed our data to store each column in its own file. Now, queries only have to read the columns used in the query. Second, we partitioned the data into multiple file sets so that all rows in a file have the same value for the partition column. Now, a predicate on the partition column can skip non-matching file sets entirely. Third, we selected a column of our data to use as a sort key. Now rows with the same value for that column are clustered together, which allows our query to stop reading data once it finds non-matching rows. Finally, sorting the data this way enables high compression ratios, using one length encoding which minimizes the size of the data stored on the disk. The C-Store system combined each of these innovative ideas to produce an academically significant result. And if you used it behind the desk of a Fortune 500 company in 2005, you probably would've been pretty pleased. But it's not 2005 anymore and the requirements of a modern database system are much stricter. So let's take a look at how C-Store fairs in 2020. First of all, we have designed the storage layer of our database to optimize a single query in a single application. Our design optimizes the heck out of that query and probably some similar ones but if we want to do anything else with our data, we might be in a bit of trouble. What if we just decide we want to ask a different question? For example, in our stock example, what if we want to plot all the trade made by a single user over a large window of time? How do our optimizations for the previous query measure up here? Well, our data's partitioned on the trade date, that could still be useful, depending on our new query. If we want to look at a trader's activity over a long period of time, we would have to open a lot of files. But if we're still interested in just a day's worth of data, then this optimization is still an optimization. Within each file, our data is ordered on the stock symbol. That's probably not too useful anymore, the rows for a single trader aren't going to be clustered together so we will have to scan all of the rows in order to figure out which ones match. You could imagine a worse design but as it becomes crucial to optimize this new type of query, then we might have to go as far as reconfiguring the whole database. The next problem of one of scale. One server is probably not good enough to serve a database in 2020. C-Store, as described, runs on a single server and stores lots of files. What if the data overwhelms this small system? We could imagine exhausting the file system's inodes limit with lots of small files due to our partitioning scheme. Or we could imagine something simpler, just filling up the disk with huge volumes of data. But there's an even simpler problem than that. What if something goes wrong and C-Store crashes? Then our data is no longer available to us until the single server is brought back up. A third concern, another one of scalability, is that one deployment does not really suit all possible things and use cases we could imagine. We haven't really said anything about being flexible. A contemporary database system has to integrate with many other applications, which might themselves have pretty restricted deployment options. Or the demands imposed by our workloads have changed and the setup you had before doesn't suit what you need now. C-Store doesn't do anything to address these concerns. What the C-Store paper did do was lead very quickly to the founding of Vertica. Vertica's architecture and design are essentially all about bringing the C-Store designs into an enterprise software system. The C-Store paper was just an academic exercise so it didn't really need to address any of the hard problems that we just talked about. But Vertica, the first commercial database built upon the ideas of the C-Store paper would definitely have to. This brings us back to the present to look at how an analytic query runs in 2020 on the Vertica Analytic Database. Vertica takes the key idea from the paper, can we significantly improve query performance by changing the way our data is stored and give its users the tools to customize their storage layer in order to heavily optimize really important or commonly wrong queries. On top of that, Vertica is a distributed system which allows it to scale up to internet-sized data sets, as well as have better reliability and uptime. We'll now take a brief look at what Vertica does to address the three inadequacies of the C-Store system that we mentioned. To avoid locking into a single database design, Vertica provides tools for the database user to customize the way their data is stored. To address the shortcomings of a single node system, Vertica coordinates processing among multiple nodes. To acknowledge the large variety of desirable deployments, Vertica does not require any specialized hardware and has many features which smoothly integrate it with a Cloud computing environment. First, we'll look at the database design problem. We're a SQL database, so our users are writing SQL and describing their data in SQL way, the Create Table statement. Create Table is a logical description of what your data looks like but it doesn't specify the way that it has to be stored, For a single Create Table, we could imagine a lot of different storage layouts. Vertica adds some extensions to SQL so that users can go even further than Create Table and describe the way that they want the data to be stored. Using terminology from the C-Store paper, we provide the Create Projection statement. Create Projection specifies how table data should be laid out, including column encoding and sort order. A table can have multiple projections, each of which could be ordered on different columns. When you query a table, Vertica will answer the query using the projection which it determines to be the best match. Referring back to our stock example, here's a sample Create Table and Create Projection statement. Let's focus on our heavily optimized example query, which had predicates on the stock symbol and date. We specify that the table data is to be partitioned by date. The Create Projection Statement here is excellent for this query. We specify using the order by clause that the data should be ordered according to our predicates. We'll use the timestamp as a secondary sort key. Each projection stores a copy of the table data. If you don't expect to need a particular column in a projection, then you can leave it out. Our average price query didn't care about who did the trading, so maybe our projection design for this query can leave the trader column out entirely. If the question we want to ask ever does change, maybe we already have a suitable projection, but if we don't, then we can create another one. This example shows another projection which would be much better at identifying trends of traders, rather than identifying trends for a particular stock. Next, let's take a look at our second problem, that one, or excuse me, so how should you decide what design is best for your queries? Well, you could spend a lot of time figuring it out on your own, or you could use Vertica's Database Designer tool which will help you by automatically analyzing your queries and spitting out a design which it thinks is going to work really well. If you want to learn more about the Database Designer Tool, then you should attend the session Vertica Database Designer- Today and Tomorrow which will tell you a lot about what the Database Designer does and some recent improvements that we have made. Okay, now we'll move to our next problem. (laughs) The challenge that one server does not fit all. In 2020, we have several orders of magnitude more data than we had in 2005. And you need a lot more hardware to crunch it. It's not tractable to keep multiple petabytes of data in a system with a single server. So Vertica doesn't try. Vertica is a distributed system so will deploy multiple severs which work together to maintain such a high data volume. In a traditional Vertica deployment, each node keeps some of the data in its own locally-attached storage. Data is replicated so that there is a redundant copy somewhere else in the system. If any one node goes down, then the data that it served is still available on a different node. We'll also have it so that in the system, there's no special node with extra duties. All nodes are created equal. This ensures that there is no single point of failure. Rather than replicate all of your data, Vertica divvies it up amongst all of the nodes in your system. We call this segmentation. The way data is segmented is another parameter of storage customization and it can definitely have an impact upon query performance. A common way to segment data is by using a hash expression, which essentially randomizes the node that a row of data belongs to. But with a guarantee that the same data will always end up in the same place. Describing the way data is segmented is another part of the Create Projection Statement, as seen in this example. Here we segment on the hash of the symbol column so all rows with the same symbol will end up on the same node. For each row that we load into the system, we'll apply our segmentation expression. The result determines which segment the row belongs to and then we'll send the row to each node which holds the copy of that segment. In this example, our projection is marked KSAFE 1, so we will keep one redundant copy of each segment. When we load a row, we might find that its segment had copied on Node One and Node Three, so we'll send a copy of the row to each of those nodes. If Node One is temporarily disconnected from the network, then Node Three can serve the other copy of the segment so that the whole system remains available. The last challenge we brought up from the C-Store design was that one deployment does not fit all. Vertica's cluster design neatly addressed many of our concerns here. Our use of segmentation to distribute data means that a Vertica system can scale to any size of deployment. And since we lack any special hardware or nodes with special purposes, Vertica servers can run anywhere, on premise or in the Cloud. But let's suppose you need to scale out your cluster to rise to the demands of a higher workload. Suppose you want to add another node. This changes the division of the segmentation space. We'll have to re-segment every row in the database to find its new home and then we'll have to move around any data that belongs to a different segment. This is a very expensive operation, not something you want to be doing all that often. Traditional Vertica doesn't solve that problem especially well, but Vertica Eon Mode definitely does. Vertica's Eon Mode is a large set of features which are designed with a Cloud computing environment in mind. One feature of this design is elastic throughput scaling, which is the idea that you can smoothly change your cluster size without having to pay the expenses of shuffling your entire database. Vertica Eon Mode had an entire session dedicated to it this morning. I won't say any more about it here, but maybe you already attended that session or if you haven't, then I definitely encourage you to listen to the recording. If you'd like to learn more about the Vertica architecture, then you'll find on this slide links to several of the academic conference publications. These four papers here, as well as Vertica Seven Years Later paper which describes some of the Vertica designs seven years after the founding and also a paper about the innovations of Eon Mode and of course, the Vertica documentation is an excellent resource for learning more about what's going on in a Vertica system. I hope you enjoyed learning about the Vertica architecture. I would be very happy to take all of your questions now. Thank you for attending this session.

Published Date : Mar 30 2020

SUMMARY :

A Technical Overview of the Vertica Architecture. Ryan: So it's Roelke. in the question box below the slides and click submit. that the best thing we can do

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
RyanPERSON

0.99+

Mike StonebrakerPERSON

0.99+

Ryan RoelkePERSON

0.99+

2005DATE

0.99+

2020DATE

0.99+

thousandsQUANTITY

0.99+

2019DATE

0.99+

$10QUANTITY

0.99+

Paige RobertsPERSON

0.99+

VerticaORGANIZATION

0.99+

PaigePERSON

0.99+

Node ThreeTITLE

0.99+

TodayDATE

0.99+

FirstQUANTITY

0.99+

each fileQUANTITY

0.99+

RoelkePERSON

0.99+

each rowQUANTITY

0.99+

Node OneTITLE

0.99+

millionsQUANTITY

0.99+

each hourQUANTITY

0.99+

eachQUANTITY

0.99+

SecondQUANTITY

0.99+

second categoryQUANTITY

0.99+

each columnQUANTITY

0.99+

One techniqueQUANTITY

0.99+

oneQUANTITY

0.99+

two predicatesQUANTITY

0.99+

each nodeQUANTITY

0.99+

One serverQUANTITY

0.99+

SQLTITLE

0.99+

C-StoreTITLE

0.99+

second problemQUANTITY

0.99+

Ryan RolePERSON

0.99+

ThirdQUANTITY

0.99+

North EastLOCATION

0.99+

each segmentQUANTITY

0.99+

todayDATE

0.98+

single entryQUANTITY

0.98+

each dateQUANTITY

0.98+

GoogleORGANIZATION

0.98+

one rowQUANTITY

0.98+

one serverQUANTITY

0.98+

single serverQUANTITY

0.98+

single entriesQUANTITY

0.98+

bothQUANTITY

0.98+

20 years agoDATE

0.98+

two paradigmsQUANTITY

0.97+

a dayQUANTITY

0.97+

this weekDATE

0.97+

billions of rowsQUANTITY

0.97+

VerticaTITLE

0.97+

4/2DATE

0.97+

single applicationQUANTITY

0.97+

each queryQUANTITY

0.97+

Each projectionQUANTITY

0.97+

Naveen Rao, Intel | AWS re:Invent 2019


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back to the Sands Convention Center in Las Vegas everybody, you're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my cohost Justin Warren, this is day one of our coverage of AWS re:Invent 2019, Naveen Rao here, he's the corporate vice president and general manager of artificial intelligence, AI products group at Intel, good to see you again, thanks for coming to theCUBE. >> Thanks for having me. >> Dave: You're very welcome, so what's going on with Intel and AI, give us the big picture. >> Yeah, I mean actually the very big picture is I think the world of computing is really shifting. The purpose of what a computer is made for is actually shifting, and I think from its very conception, from Alan Turing, the machine was really meant to be something that recapitulated intelligence, and we took sort of a divergent path where we built applications for productivity, but now we're actually coming back to that original intent, and I think that hits everything that Intel does, because we're a computing company, we supply computing to the world, so everything we do is actually impacted by AI, and will be in service of building better AI platforms, for intelligence at the edge, intelligence in the cloud, and everything in between. >> It's really come full circle, I mean, when I first started this industry, AI was the big hot topic, and really, Intel's ascendancy was around personal productivity, but now we're seeing machines replacing cognitive functions for humans, that has implications for society. But there's a whole new set of workloads that are emerging, and that's driving, presumably, different requirements, so what do you see as the sort of infrastructure requirements for those new workloads, what's Intel's point of view on that? >> Well, so maybe let's focus that on the cloud first. Any kind of machine learning algorithm typically has two phases to it, one is called training or learning, where we're really iterating over large data sets to fit model parameters. And once that's been done to a satisfaction of whatever performance metrics that are relevant to your application, it's rolled out and deployed, that phase is called inference. So these two are actually quite different in their requirements in that inference is all about the best performance per watt, how much processing can I shove into a particular time and power budget? On the training side, it's much more about what kind of flexibility do I have for exploring different types of models, and training them very very fast, because when this field kind of started taking off in 2014, 2013, typically training a model back then would take a month or so, those models now take minutes to train, and the models have grown substantially in size, so we've still kind of gone back to a couple of weeks of training time, so anything we can do to reduce that is very important. >> And why the compression, is that because of just so much data? >> It's data, the sheer amount of data, the complexity of data, and the complexity of the models. So, very broad or a rough categorization of the complexity can be the number of parameters in a model. So, back in 2013, there were, call it 10 million, 20 million parameters, which was very large for a machine learning model. Now they're in the billions, one or two billion is sort of the state of the art. To give you bearings on that, the human brain is about a three to 500 trillion model, so we're still pretty far away from that. So we got a long way to go. >> Yeah, so one of the things about these models is that once you've trained them, that then they do things, but understanding how they work, these are incredibly complex mathematical models, so are we at a point where we just don't understand how these machines actually work, or do we have a pretty good idea of, "No no no, when this model's trained to do this thing, "this is how it behaves"? >> Well, it really depends on what you mean by how much understanding we have, so I'll say at one extreme, we trust humans to do certain things, and we don't really understand what's happening in their brain. We trust that there's a process in place that has tested them enough. A neurosurgeon's cutting into your head, you say you know what, there's a system where that neurosurgeon probably had to go through a ton of training, be tested over and over again, and now we trust that he or she is doing the right thing. I think the same thing is happening in AI, some aspects we can bound and say, I have analytical methods on how I can measure performance. In other ways, other places, it's actually not so easy to measure the performance analytically, we have to actually do it empirically, which means we have data sets that we say, "Does it stand up to all the different tests?" One area we're seeing that in is autonomous driving. Autonomous driving, it's a bit of a black box, and the amount of situations one can incur on the road are almost limitless, so what we say is, for a 16 year old, we say "Go out and drive," and eventually you sort of learn it. Same thing is happening now for autonomous systems, we have these training data sets where we say, "Do you do the right thing in these scenarios?" And we say "Okay, we trust that you'll probably "do the right thing in the real world." >> But we know that Intel has partnered with AWS, I ran autonomous driving with their DeepRacer project, and I believe it's on Thursday is the grand final, it's been running for, I think it was announced on theCUBE last year, and there's been a whole bunch of competitions running all year, basically training models that run on this Intel chip inside a little model car that drives around a race track, so speaking of empirical testing of whether or not it works, lap times gives you a pretty good idea, so what have you learned from that experience, of having all of these people go out and learn how to use these ALM models on a real live race car and race around a track? >> I think there's several things, I mean one thing is, when you turn loose a number of developers on a competitive thing, you get really interesting results, where people find creative ways to use the tools to try to win, so I always love that process, I think competition is how you push technology forward. On the tool side, it's actually more interesting to me, is that we had to come up with something that was adequately simple, so that a large number of people could get going on it quickly. You can't have somebody who spends a year just getting the basic infrastructure to work, so we had to put that in place. And really, I think that's still an iterative process, we're still learning what we can expose as knobs, what kind of areas of innovation we allow the user to explore, and where we sort of walk it down to make it easy to use. So I think that's the biggest learning we get from this, is how I can deploy AI in the real world, and what's really needed from a tool chain standpoint. >> Can you talk more specifically about what you guys each bring to the table with your collaboration with AWS? >> Yeah, AWS has been a great partner. Obviously AWS has a huge ecosystem of developers, all kinds of different developers, I mean web developers are one sort of developer, database developers are another, AI developers are yet another, and we're kind of partnering together to empower that AI base. What we bring from a technological standpoint are of course the hardware, our CPUs, our AI ready now with a lot of software that we've been putting out in the open source. And then other tools like OpenVINO, which make it very easy to start using AI models on our hardware, and so we tie that in to the infrastructure that AWS is building for something like DeepRacer, and then help build a community around it, an ecosystem around it of developers. >> I want to go back to the point you were making about the black box, AI, people are concerned about that, they're concerned about explainability. Do you feel like that's a function of just the newness that we'll eventually get over, and I mean I can think of so many examples in my life where I can't really explain how I know something, but I know it, and I trust it. Do you feel like it's sort of a tempest in a teapot? >> Yeah, I think it depends on what you're talking about, if you're talking about the traceability of a financial transaction, we kind of need that maybe for legal reasons, so even for humans we do that. You got to write down everything you did, why did you do this, why'd you do that, so we actually want traceability for humans, even. In other places, I think it is really about the newness. Do I really trust this thing, I don't know what it's doing. Trust comes with use, after a while it becomes pretty straightforward, I mean I think that's probably true for a cell phone, I remember the first smartphones coming out in the early 2000s, I didn't trust how they worked, I would never do a credit card transaction on 'em, these kind of things, now it's taken for granted. I've done it a million times, and I never had any problems, right? >> It's the opposite in social media, most people. >> Maybe that's the opposite, let's not go down that path. >> I quite like Dr. Kate Darling's analogy from MIT lab, which is we already we have AI, and we're quite used to them, they're called dogs. We don't fully understand how a dog makes a decision, and yet we use 'em every day. In a collaboration with humans, so a dog, sort of replace a particular job, but then again they don't, I don't particularly want to go and sniff things all day long. So having AI systems that can actually replace some of those jobs, actually, that's kind of great. >> Exactly, and think about it like this, if we can build systems that are tireless, and we can basically give 'em more power and they keep going, that's a big win for us. And actually, the dog analogy is great, because I think, at least my eventual goal as an AI researcher is to make the interface for intelligent agents to be like a dog, to train it like a dog, reinforce it for the behaviors you want and keep pushing it in new directions that way, as opposed to having to write code that's kind of esoteric. >> Can you talk about GANs, what is GANs, what's it stand for, what does it mean? >> Generative Adversarial Networks. What this means is that, you can kind of think of it as, two competing sides of solving a problem. So if I'm trying to make a fake picture of you, that makes it look like you have no hair, like me, you can see a Photoshop job, and you can kind of tell, that's not so great. So, one side is trying to make the picture, and the other side is trying to guess whether it's fake or not. We have two neural networks that are kind of working against each other, one's generating stuff, and the other one's saying, is it fake or not, and then eventually you keep improving each other, this one tells that one "No, I can tell," this one goes and tries something else, this one says "No, I can still tell." The one that's trying with a discerning network, once it can't tell anymore, you've kind of built something that's really good, that's sort of the general principle here. So we basically have two things kind of fighting each other to get better and better at a particular task. >> Like deepfakes. >> I use that because it is relevant in this case, and that's kind of where it came from, is from GANs. >> All right, okay, and so wow, obviously relevant with 2020 coming up. I'm going to ask you, how far do you think we can take AI, two part question, how far can we take AI in the near to mid term, let's talk in our lifetimes, and how far should we take it? Maybe you can address some of those thoughts. >> So how far can we take it, well, I think we often have the sci-fi narrative out there of building killer machines and this and that, I don't know that that's actually going to happen anytime soon, for several reasons, one is, we build machines for a purpose, they don't come from an embattled evolutionary past like we do, so their motivations are a little bit different, say. So that's one piece, they're really purpose-driven. Also, building something that's as general as a human or a dog is very hard, and we're not anywhere close to that. When I talked about the trillions of parameters that a human brain has, we might be able to get close to that from a engineering standpoint, but we're not really close to making those trillions of parameters work together in such a coherent way that a human brain does, and efficient, human brain does that in 20 watts, to do it today would be multiple megawatts, so it's not really something that's easily found, just laying around. Now how far should we take it, I look at AI as a way to push humanity to the next level. Let me explain what that means a little bit. Simple equation I always sort of write down, is people are like "Radiologists aren't going to have a job." No no no, what it means is one radiologist plus AI equals 100 radiologists. I can take that person's capabilities and scale it almost freely to millions of other people. It basically increases the accessibility of expertise, we can scale expertise, that's a good thing. It makes, solves problems like we have in healthcare today. All right, that's where we should be going with this. >> Well a good example would be, when, and probably part of the answer's today, when will machines make better diagnoses than doctors? I mean in some cases it probably exists today, but not broadly, but that's a good example, right? >> It is, it's a tool, though, so I look at it as more, giving a human doctor more data to make a better decision on. So, what AI really does for us is it doesn't limit the amount of data on which we can make decisions, as a human, all I can do is read so much, or hear so much, or touch so much, that's my limit of input. If I have an AI system out there listening to billions of observations, and actually presenting data in a form that I can make better decisions on, that's a win. It allows us to actually move science forward, to move accessibility of technologies forward. >> So keeping the context of that timeframe I said, someday in our lifetimes, however you want to define that, when do you think that, or do you think that driving your own car will become obsolete? >> I don't know that it'll ever be obsolete, and I'm a little bit biased on this, so I actually race cars. >> Me too, and I drive a stick, so. >> I kind of race them semi-professionally, so I don't want that to go away, but it's the same thing, we don't need to ride horses anymore, but we still do for fun, so I don't think it'll completely go away. Now, what I think will happen is that commutes will be changed, we will now use autonomous systems for that, and I think five, seven years from now, we will be using autonomy much more on prescribed routes. It won't be that it completely replaces a human driver, even in that timeframe, because it's a very hard problem to solve, in a completely general sense. So, it's going to be a kind of gentle evolution over the next 20 to 30 years. >> Do you think that AI will change the manufacturing pendulum, and perhaps some of that would swing back to, in this country, anyway, on-shore manufacturing? >> Yeah, perhaps, I was in Taiwan a couple of months ago, and we're actually seeing that already, you're seeing things that maybe were much more labor-intensive before, because of economic constraints are becoming more mechanized using AI. AI as inspection, did this machine install this thing right, so you have an inspector tool and you have an AI machine building it, it's a little bit like a GAN, you can think of, right? So this is happening already, and I think that's one of the good parts of AI, is that it takes away those harsh conditions that humans had to be in before to build devices. >> Do you think AI will eventually make large retail stores go away? >> Well, I think as long as there are humans who want immediate satisfaction, I don't know that it'll completely go away. >> Some humans enjoy shopping. >> Naveen: Some people like browsing, yeah. >> Depends how fast you need to get it. And then, my last AI question, do you think banks, traditional banks will lose control of the payment systems as a result of things like machine intelligence? >> Yeah, I do think there are going to be some significant shifts there, we're already seeing many payment companies out there automate several aspects of this, and reducing the friction of moving money. Moving money between people, moving money between different types of assets, like stocks and Bitcoins and things like that, and I think AI, it's a critical component that people don't see, because it actually allows you to make sure that first you're doing a transaction that makes sense, when I move from this currency to that one, I have some sense of what's a real number. It's much harder to defraud, and that's a critical element to making these technologies work. So you need AI to actually make that happen. >> All right, we'll give you the last word, just maybe you want to talk a little bit about what we can expect, AI futures, or anything else you'd like to share. >> I think it's, we're at a really critical inflection point where we have something that works, basically, and we're going to scale it, scale it, scale it to bring on new capabilities. It's going to be really expensive for the next few years, but we're going to then throw more engineering at it and start bringing it down, so I start seeing this look a lot more like a brain, something where we can start having intelligence everywhere, at various levels, very low power, ubiquitous compute, and then very high power compute in the cloud, but bringing these intelligent capabilities everywhere. >> Naveen, great guest, thanks so much for coming on theCUBE. >> Thank you, thanks for having me. >> You're really welcome, all right, keep it right there everybody, we'll be back with our next guest, Dave Vellante for Justin Warren, you're watching theCUBE live from AWS re:Invent 2019. We'll be right back. (techno music)

Published Date : Dec 3 2019

SUMMARY :

Brought to you by Amazon Web Services and Intel, AI products group at Intel, good to see you again, Dave: You're very welcome, so what's going on and we took sort of a divergent path so what do you see as the Well, so maybe let's focus that on the cloud first. the human brain is about a three to 500 trillion model, and the amount of situations one can incur on the road is that we had to come up with something that was on our hardware, and so we tie that in and I mean I can think of so many examples You got to write down everything you did, and we're quite used to them, they're called dogs. and we can basically give 'em more power and you can kind of tell, that's not so great. and that's kind of where it came from, is from GANs. and how far should we take it? I don't know that that's actually going to happen it doesn't limit the amount of data I don't know that it'll ever be obsolete, but it's the same thing, we don't need to ride horses that humans had to be in before to build devices. I don't know that it'll completely go away. Depends how fast you need to get it. and reducing the friction of moving money. All right, we'll give you the last word, and we're going to scale it, scale it, scale it we'll be back with our next guest,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

DavePERSON

0.99+

Amazon Web ServicesORGANIZATION

0.99+

20 wattsQUANTITY

0.99+

AWSORGANIZATION

0.99+

2014DATE

0.99+

10 millionQUANTITY

0.99+

Naveen RaoPERSON

0.99+

Justin WarrenPERSON

0.99+

20 millionQUANTITY

0.99+

oneQUANTITY

0.99+

TaiwanLOCATION

0.99+

2013DATE

0.99+

100 radiologistsQUANTITY

0.99+

Alan TuringPERSON

0.99+

NaveenPERSON

0.99+

IntelORGANIZATION

0.99+

MITORGANIZATION

0.99+

two thingsQUANTITY

0.99+

twoQUANTITY

0.99+

last yearDATE

0.99+

billionsQUANTITY

0.99+

a monthQUANTITY

0.99+

2020DATE

0.99+

two partQUANTITY

0.99+

Las VegasLOCATION

0.99+

one pieceQUANTITY

0.99+

ThursdayDATE

0.99+

Kate DarlingPERSON

0.98+

early 2000sDATE

0.98+

two billionQUANTITY

0.98+

first smartphonesQUANTITY

0.98+

one sideQUANTITY

0.98+

Sands Convention CenterLOCATION

0.97+

todayDATE

0.97+

OpenVINOTITLE

0.97+

one radiologistQUANTITY

0.96+

Dr.PERSON

0.96+

16 year oldQUANTITY

0.95+

two phasesQUANTITY

0.95+

trillions of parametersQUANTITY

0.94+

firstQUANTITY

0.94+

a million timesQUANTITY

0.93+

seven yearsQUANTITY

0.93+

billions of observationsQUANTITY

0.92+

one thingQUANTITY

0.92+

one extremeQUANTITY

0.91+

two competing sidesQUANTITY

0.9+

500 trillion modelQUANTITY

0.9+

a yearQUANTITY

0.89+

fiveQUANTITY

0.88+

eachQUANTITY

0.88+

One areaQUANTITY

0.88+

a couple of months agoDATE

0.85+

one sortQUANTITY

0.84+

two neuralQUANTITY

0.82+

GANsORGANIZATION

0.79+

couple of weeksQUANTITY

0.78+

DeepRacerTITLE

0.77+

millions ofQUANTITY

0.76+

PhotoshopTITLE

0.72+

deepfakesORGANIZATION

0.72+

next few yearsDATE

0.71+

yearQUANTITY

0.67+

re:Invent 2019EVENT

0.66+

threeQUANTITY

0.64+

Invent 2019EVENT

0.64+

aboutQUANTITY

0.63+

Seth Juarez, Microsoft | Microsoft Ignite 2019


 

>>Live from Orlando, Florida. It's the cube covering Microsoft ignite brought to you by Cohesity. >>Good afternoon everyone and welcome back to the cubes live coverage of Microsoft ignite 26,000 people here at this conference at the orange County convention center. I'm your host, Rebecca Knight, alongside my cohost Stu Miniman. We are joined by Seth Juarez. He is the cloud developer advocate at Microsoft. Thank you so much for coming on the show. >>Glad to be here. You have such a lovely sad and you're lovely people. We just met up. You don't know any better? No. Well maybe after after the end of the 15 minutes we'll have another discussion. >>You're starting off on the right foot, so tell us a little bit about what you do. You're also a host on channel nine tell us about your role as a, as a cloud developer. >>So a cloud advocate's job is primarily to help developers be successful on Azure. My particular expertise lies in AI and machine learning and so my job is to help developers be successful with AI in the cloud, whether it be developers, data scientists, machine learning engineers or whatever it is that people call it nowadays. Because you know how the titles change a lot, but my job is to help them be successful and sometimes what's interesting is that sometimes our customers can't find success in the cloud. That's actually a win for me too because then I have a deep integration with the product group and my job is to help them understand from a customer perspective what it is they need and why. So I'm like the ombudsman so to speak because the product groups are the product groups. I don't report up to them. So I usually go in there and I'm like, Hey, I don't report to any of you, but this is what the customers are saying. >>We are very keen on being customer centered and that's why I do what I do. >> Seth, I have to imagine when you're dealing with customers, some of that skills gap and learning is something that they need to deal with. You know, we've been hearing for a long time, you know, there's not enough data scientists, you know, we need to learn these environments. Satya Nadella spent a lot of time talking about the citizen developers out there. So you know H bring us inside the customers you're talking to, you know, kind of, where do you usually start and you know, how do they pull the right people in there or are they bringing in outside people a little bit? Great organization, great question. It turns out that for us at Microsoft we have our product groups and then right outside we have our advocates that are very closely aligned to the product groups. >>And so anytime we do have an interaction with a customer, it's for the benefit of all the other customers. And so I meet with a lot of customers and I don't, I'm to get to talk about them too much. But the thing is I go in there, I see what they're doing. For example, one time I went to the touring Institute in the UK. I went in there and because I'm not there to sell, I'm there to figure out like what are you trying to do and does this actually match up? It's a very different kind of conversation and they'd tell me about what they're working on. I tell them about how we can help them and then they tell me where the gaps are or where they're very excited and I take both of those pieces of feedback to the, to the product group and they, they just love being able to have someone on the ground to talk to people because sometimes you know, when work on stuff you get a little siloed and it's good to have an ombudsman so to speak, to make sure that we're doing the right thing for our customers. >>As somebody that works on AI. You must've been geeking out working, working with the Turing Institute though. Oh yeah. Those people are absolutely wonderful and it was like as I was walking in, a little giddy, but the problems that they're facing in AI are very similar. The problems that people at the other people doing and that are in big organizations, other organizations are trying to onboard to AI and try to figure out, everyone says I need to be using this hammer and they're trying to hammer some screws in with the hammer. So it's good to figure out when it's appropriate to use AI and when it isn't. And I also have customers with that >>and I'm sure the answer is it depends in terms of when it's appropriate, but do you have any sort of broad brush advice for helping an organization determine is is this a job for AI? Absolutely. >>That's uh, it's a question I get often and developers, we have this thing called the smell that tells us if a code smell, we have a code smell tells us, maybe we should refactor, maybe we should. For me, there's this AI smell where if you can't precisely figure out the series of steps to execute an algorithm and you're having a hard time writing code, or for example, if every week you need to change your if L statements or if you're changing numbers from 0.5 to 0.7 and now it works, that's the smell that you should think about using AI or machine learning, right? There's also a set of a class of algorithms that, for example, AI, it's not that we've solved, solved them, but they're pretty much solved. Like for example, detecting what's in an image, understanding sentiment and text, right? Those kinds of problems we have solutions for that are just done. >>But if you have a code smell where you have a lot of data and you don't want to write an algorithm to solve that problem, machine learning and AI might be the solution. Alright, a lot of announcements this week. Uh, any of the highlights for from your area. We last year, AI was mentioned specifically many times now with you know, autonomous systems and you know it feels like AI is in there not necessarily just you know, rubbing AI on everything. >> I think it's because we have such a good solution for people building custom machine learning that now it's time to talk about the things you can do with it. So we're talking about autonomous systems. It's because it's based upon the foundation of the AI that we've already built. We released something called Azure machine learning, a set of tools called in a studio where you can do end and machine learning. >>Because what what's happening is most data scientists nowadays, and I'm guilty of this myself, we put stuff in things called Jupiter notebooks. We release models, we email them to each other, we're emailing Python files and that's kinda like how programming was in 1995 and now we're doing is we're building a set of tools to allow machine learning developers to go end to end, be able to see how data scientists are working and et cetera. For example, let's just say you're a data scientist. Bill. Did an awesome job, but then he goes somewhere else and Sally who was absolutely amazing, comes in and now she's the data scientist. Usually Sally starts from zero and all of the stuff that bill did is lost with Azure machine learning. You're able to see all of your experiments, see what bill tried, see what he learned and Sally can pick right up and go on. And that's just doing the experiments. Now if you want to get machine learning models into production, we also have the ability to take these models, version them, put them into a CIC, D similar process with Azure dev ops and machine learning. So you can go from data all the way to machine learning in production very easily, very quickly and in a team environment, you know? And that's what I'm excited about mostly. >>So at a time when AI and big and technology companies in general are under fire and not, Oh considered to not always have their users best interests at heart. I'd like you to talk about the Microsoft approach to ethical AI and responsible AI. >>Yeah, I was a part of the keynote. Scott Hanselman is a very famous dab and he did a keynote and I got to form part of it and one of the things that we're very careful even on a dumb demo or where he was like doing rock paper, scissors. I said, and Scott, we were watching you with your permission to see like what sequence of throws you were doing. We believe that through and through all the way we will never use our customers' data to enhance any of our models. In fact, there was a time when we were doing like a machine learning model for NLP and I saw the email thread and it's like we don't have language food. I don't remember what it was. We don't have enough language food. Let's pay some people to ethically source this particular language data. We will never use any of our customer's data and I've had this question asked a lot. >>Like for example, our cognitive services which have built in AI, we will never use any of our customer's data to build that neither. For example, if we have, for example, we have a custom vision where you upload your own pictures, those are your pictures. We're never going to use them for anything. And anything that we do, there's always consent and we want to make sure that everyone understands that AI is a powerful tool, but it also needs to be used ethically. And that's just on how we use data for people that are our customers. We also have tools inside of Azure machine learning to get them to use AI. Ethically. We have tools to explain models. So for example, if you very gender does the model changes prediction or if you've very class or race, is your model being a little iffy? We allow, we have those tools and Azure machine learning, so our customers can also be ethical with the AI they build on our platform. So we have ethics built into how we build our models and we have ethics build into how our customers can build their models too, which is to me very. >>And is that a selling point? Are customers gravitating? I mean we've talked a lot about it on the show. About the, the trust that customers have in Microsoft and the image that Microsoft has in the industry right now. But the idea that it is also trying to perpetuate this idea of making everyone else more ethical. Do you think that that is one of the reasons customers are gravitate? >>I hope so. And as far as a selling point, I absolutely think it's a selling point, but we've just released it and so I'm going to go out there and evangelize the fact that not only are we as tickle with what we do in AI, but we want our customers to be ethical as well. Because you know, trust pays, as Satya said in his keynote, tra trust the enhancer in the exponent that allows tech intensity to actually be tech intensity. And we believe that through and through not only do believe it for ourselves, but we want our customers to also believe it and see the benefits of having trust with our customers. One of the things we, we talked to Scott Hanselman a little bit yesterday about that demo is the Microsoft of today isn't just use all the Microsoft products, right? To allow you to use, you know, any tool, any platform, you know, your own environment, uh, to tell us how that, that, that plays into your world. >>It's, you know, like in my opinion, and I don't know if it's the official opinion, but we are in the business of renting computer cycles. We don't care how you use them, just come into our house and use them. You wanna use Java. We've recently announced a tons of things with spraying. We're become an open JDK contributor. You know, one of my colleagues, we're very hard on that. I work primarily in Python because it's machine learning. I have a friend might call a friend and colleague, David Smith who works in our, I have other colleagues that work in a number of different languages. We don't care. What we are doing is we're trying to empower every organization and every person on the planet to achieve more where they are, how they are, and hopefully bring a little bit of of it to our cloud. >>What are you doing that, that's really exciting to you right now? I know you're doing a new.net library. Any other projects that are sparking your end? >>Yeah, so next week I'm going to France and this is before anyone's going to see this and there is a, there is a company, I think it's called surf, I'll have to look it up and we'll put it in the notes, but they are basically trying to use AI to be more environmentally conscious and they're taking pictures of trash and rivers and they're using AI to figure out where it's coming from so they can clean up environment. I get to go over there and see what they're doing, see how I can help them improvement and promote this kind of ethical way of doing AI. We also do stuff with snow leopards. I was watching some Netflix thing with my kids and we were watching snow leopards and there was like two of them. Like this is impressive because as I'm watching this with my kids, I'm like, Hey we are at Microsoft, we're helping this population, you know, perpetuate with AI. >>And so those are the things it's actually a had had I've seen on TV is, you know, rather than spending thousands of hours of people out there, the AI can identify the shape, um, you know, through the cameras. So they're on a, I love that powerful story to explain some of those pieces as opposed to it. It's tough to get the nuance of what's happening here. Absolutely. With this technology, these models are incredibly easy to build on our platform. And, and I and I st fairly easy to build with what you have. We love people use TensorFlow, use TensorFlow, people use pie torch. That's great cafe on it. Whatever you want to use. We are happy to let you use a rent out our computer cycles because we want you to be successful. Maybe speak a little bit of that when you talk about, you know, the, the cloud, one of the things is to democratize, uh, availability of this. >>There's usually free tiers out there, especially in the emerging areas. Uh, you know, how, how is Microsoft helping to get that, that compute and that world technology to people that might not have had it in the past? I was in, I was in Peru a number of years ago and I and I had a discussion with someone on the channel nine show and it was absolutely imp. Like I under suddenly understood the value of this. He said, Seth, if I wanted to do a startup here in Peru, right, and it was a capital Peru, like a very industrialized city, I would have to buy a server. It would come from California on a boat. It would take a couple of months to get here and then it would be in a warehouse for another month as it goes through customs. And then I would have to put it into a building that has a C and then I could start now sat with a click of a button. >>I can provision an entire cluster of machines on Azure and start right now. That's what, that's what the cloud is doing in places like Peru and places that maybe don't have a lot of infrastructure. Now infrastructure is for everyone and maybe someone even in the United States, you know, in a rural area that doesn't, they can start up their own business right now anywhere. And it's not just because it's Peru, it's not just because it's some other place that's becoming industrialized. It's everywhere. Because any kid with a dream can spin up an app service and have a website done in like five minutes. >>So what does this mean? I mean, as you said, any, any kid, any person or rural area, any developing country, what does this mean in five or 10 years from now in terms of the future of commerce and work and business? >>Honestly, some people feel like computers are art, stealing, you know, human engineering. I think they are really augmenting it. Like for example, I don't have to, if I want to know something for her. Back when, when I was a kid, I had to, if I want to know something, sometimes I had to go without knowing where like I guess we'll never know. Right? And then five years later we're like, okay, we found out it was that a character on that show, you know? And now we just look at our phone. It's like, Oh, you were wrong. And I like not knowing that I'm wrong for a lot longer, you know what I'm saying? But nowadays with our, with our phones and with other devices, we have information readily available so that we can make appropriate response, appropriate answers to questions that we have. AI is going to help us with that by augmenting human ingenuity, by looking at the underlying structure. >>We can't, for example, if you look at, if you look at an Excel spreadsheet, if it's like five rows and maybe five columns, you and I as humans can look at and see a trend. But what if it's 10 million rows and 5,000 columns? Our ingenuity has been stretched too far, but with computers now we can aggregate, we can do some machine learning models, and then we can see the patterns that the computer found aggregated, and now we can make the decisions we could make with five columns, five rows, but it's not taking our jobs. It's augmenting our capacity to do the right thing. >>Excellent. We'll assess that. Thank you so much for coming on the Cuba. Really fun conversation. >>Glad to be here. Thanks for having me. >>Alright, I'm Rebecca Knight for Stu minimun. Stay tuned for more of the cubes live coverage of Microsoft ignite.

Published Date : Nov 6 2019

SUMMARY :

Microsoft ignite brought to you by Cohesity. Thank you so much for coming on the show. Glad to be here. You're starting off on the right foot, so tell us a little bit about what you do. So I'm like the ombudsman so to speak because the product groups are the product groups. You know, we've been hearing for a long time, you know, there's not enough data scientists, they just love being able to have someone on the ground to talk to people because sometimes you know, And I also have customers with that and I'm sure the answer is it depends in terms of when it's appropriate, but do you have any sort of broad brush if every week you need to change your if L statements or if you're changing numbers from 0.5 to 0.7 many times now with you know, autonomous systems and you know it feels like AI is to talk about the things you can do with it. So you can go from data all the way to machine learning in I'd like you to talk about the Microsoft approach to ethical AI and responsible AI. I said, and Scott, we were watching you with your permission to see For example, if we have, for example, we have a custom vision where you upload your own pictures, Do you think that that is one of the reasons customers are gravitate? any platform, you know, your own environment, uh, to tell us how that, We don't care how you use them, just come into our house What are you doing that, that's really exciting to you right now? we're helping this population, you know, perpetuate with AI. And, and I and I st fairly easy to build with what you have. Uh, you know, how, how is Microsoft helping to get that, that compute and that world technology to you know, in a rural area that doesn't, they can start up their own business right now anywhere. Honestly, some people feel like computers are art, stealing, you know, We can't, for example, if you look at, if you look at an Excel spreadsheet, if it's like five rows and maybe five Thank you so much for coming on the Cuba. Glad to be here. Alright, I'm Rebecca Knight for Stu minimun.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SallyPERSON

0.99+

Rebecca KnightPERSON

0.99+

ScottPERSON

0.99+

David SmithPERSON

0.99+

PeruLOCATION

0.99+

Seth JuarezPERSON

0.99+

CaliforniaLOCATION

0.99+

FranceLOCATION

0.99+

1995DATE

0.99+

Satya NadellaPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Turing InstituteORGANIZATION

0.99+

10 million rowsQUANTITY

0.99+

Scott HanselmanPERSON

0.99+

UKLOCATION

0.99+

Stu MinimanPERSON

0.99+

United StatesLOCATION

0.99+

five minutesQUANTITY

0.99+

five rowsQUANTITY

0.99+

5,000 columnsQUANTITY

0.99+

last yearDATE

0.99+

yesterdayDATE

0.99+

five columnsQUANTITY

0.99+

Orlando, FloridaLOCATION

0.99+

SatyaPERSON

0.99+

JavaTITLE

0.99+

next weekDATE

0.99+

ExcelTITLE

0.99+

PythonTITLE

0.99+

SethPERSON

0.99+

CubaLOCATION

0.99+

BillPERSON

0.99+

todayDATE

0.99+

26,000 peopleQUANTITY

0.99+

oneQUANTITY

0.99+

five years laterDATE

0.98+

this weekDATE

0.98+

bothQUANTITY

0.98+

15 minutesQUANTITY

0.98+

OneQUANTITY

0.97+

0.7QUANTITY

0.97+

AzureTITLE

0.96+

JDKTITLE

0.96+

thousands of hoursQUANTITY

0.95+

10 yearsQUANTITY

0.94+

fiveQUANTITY

0.93+

NetflixORGANIZATION

0.92+

0.5QUANTITY

0.91+

zeroQUANTITY

0.91+

TensorFlowTITLE

0.9+

orange County convention centerLOCATION

0.84+

snow leopardsTITLE

0.84+

nine showQUANTITY

0.76+

number of years agoDATE

0.73+

NLPORGANIZATION

0.72+

two of themQUANTITY

0.7+

billPERSON

0.67+

monthsQUANTITY

0.66+

StuORGANIZATION

0.65+

thingsQUANTITY

0.61+

igniteTITLE

0.6+

CohesityORGANIZATION

0.59+

coupleQUANTITY

0.54+

Around theCUBE, Unpacking AI Panel | CUBEConversation, October 2019


 

(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hello everyone, welcome to theCUBE studio here in Palo Alto. I'm John Furrier your host of theCUBE. We're here introducing a new format for CUBE panel discussions, it's called Around theCUBE and we have a special segment here called Get Smart: Unpacking AI with some great with some great guests in the industry. Gene Santos, Professor of Engineering in College of Engineering Dartmouth College. Bob Friday, Vice President CTO at Mist at Juniper Company. And Ed Henry, Senior Scientist and Distinguished Member of the Technical Staff for Machine Learning at Dell EMC. Guys this is a format, we're going to keep score and we're going to throw out some interesting conversations around Unpacking AI. Thanks for joining us here, appreciate your time. >> Yeah, glad to be here. >> Okay, first question, as we all know AI is on the rise, we're seeing AI everywhere. You can't go to a show or see marketing literature from any company, whether it's consumer or tech company around, they all have AI, AI something. So AI is on the rise. The question is, is it real AI, is AI relevant from a reality standpoint, what really is going on with AI, Gene, is AI real? >> I think a good chunk of AI is real there. It depends on what you apply it to. If it's making some sort of decisions for you, that is AI that's blowing into play. But there's also a lot of AI left out there potentially is just simply a script. So, you know, one of the challenges that you'll always have is that, if it were scripted, is it scripted because, somebody's already developed the AI and now just pulled out all the answers and just using the answers straight? Or is it active learning and changing on its own? I would tend to say that anything that's learning and changing on its own, that's where you're having the evolving AI and that's where you get the most power from. >> Bob what's your take on this, AI real? >> Yeah, if you look at Google, What you see is AI really became real in 2014. That's when the AI and ML really became a thing in the industry and when you look why did it become a thing in 2014? It's really back when we actually saw TensorFlow, open source technology really become available. It's all that Amazon Compute story. You know, you look what we're doing here at Mist, I really don't have to worry about compute storage, except for the Amazon bill I get every month now. So I think you're really seeing AI become real, because of some key turning points in the industry. >> Ed, your take, AI real? >> Yeah, so it depends on what lens you want to kind of look at it through. The notion of intelligence is something that's kind of ill defined and depending how how you want to interpret that will kind of guide whether or not you think it's real. I tend to all things AI if it has a notion of agency. So if it can navigate its problem space without human intervention. So, really it depends on, again, what lens you kind of want to look at it through? It's a set of moving goalposts, right? If you take your smartphone back to Turing When he was coming up with the Turing test and asked them if this intelligent, or some value intelligent device was AI, would that be AI, to him probably back then. So really it depends on how you kind of want to look at it. >> Is AI the same as it was in 1988? Or has it changed, what's the change point with AI because some are saying, AI's been around for a while but there's more AI now than ever before, Ed we'll start with you, what's different with AI now versus say in the late 80s, early 90s? >> See what's funny is some of the methods that we're using aren't different, I think the big push that happened in the last decade or so has been the ability to store as much data as we can along with the ability to have as much compute readily disposable as we have today. Some of the methodologies I mean there was a great Wired article that was published and somebody referenced called, method called Eigenvector Decomposition they said it was from quantum mechanic, that came out in 1888 right? So it really a lot of the methodologies that we're using aren't much different, it's the amount of data that we have available to us that represents reality and the amount of compute that we have. >> Bob. >> Yeah so for me back in the 80s when I did my masters I actually did a masters on neural networks so yeah it's been around for a while but when I started Mist what really changed was a couple things. One is this modern cloud stack right so if you're going to have to build an AI solution really have to have all the pieces ingest tons of data and process it in real time so that is one big thing that's changed that we didn't have 20 years ago. The other big thing is we had access to all this open source TensorFlow stuff right now. People like Google and Facebook have made it so easy for the average person to actually do an AI project right? You know anyone here, anyone in the audience here could actually train a machine learning model over the weekend right now, you just have to go to Google, you have to find kind of the, you know they have the data sets you want to basically build a model to recognize letters and numbers, those data sets are on the internet right now and you personally yourself could go become a data scientist over the weekend. >> Gene, your take. >> Yeah I think also on top of that because of all that availability on the open software anybody can come in and start playing with AI, it's also building a really large experience base of what works and what doesn't work and because they have that now you can actually better define the problem you're shooting for and when you do that you increase you know what's going to work, what's not going to work and people can also tell you that on the part that's not going to work, how's it going to expand but I think overall though this comes back to the question of when people ask what is AI, and a lot of that is just being focused on machine learning and if it's just machine learning that's kind of a little limited use in terms of what you're classifying or not. Back in the early 80s AI back then is really what people are trying to call artificial general intelligence nowadays but it's that all encompassing piece. All the things that you know us humans can do, us humans can reason about, all the decision sequences that we make and so you know that's the part that we haven't quite gotten to but there is all the things that's why the applications that the AI with machine learning classification has gotten us this far. >> Okay machine learning is certainly relevant, it's been one of the most hottest, the hottest topic I think in computer science and with AI becoming much more democratized you guys mentioned TensorFlow, a variety of other open source initiatives been a great wave of innovation and again motivation, younger generations is easier to code now than ever before but machine learning seems to be at the heart of AI and there's really two schools of thought in the machine learning world, is it just math or is there more of a cognition learning machine kind of a thing going on? This has been a big debate in the industry, I want to get your guys' take on this, Gene is machine learning just math and running algorithms or is there more to it like cognition, where do you guys fall on this, what's real? >> If I look at the applications and look what people are using it for it's mostly just algorithms it's mostly that you know you've managed to do the pattern recognition, you've managed to compute out the things and find something interesting from it but then on the other side of it the folks working in say neurosciences, the first people working in cogno-sciences. You know I have the interest in that when we look at that, that machine learning does it correspond to what we're doing as human beings, now because the reason I fall more on the algorithm side is that a lot of those algorithms they don't match what we're often thinking so if they're not matching that it's like okay something else is coming up but then what do we do with it, you know you can get an answer and work from it but then if we want to build true human intelligence how does that all stack together to get to the human intelligence and I think that's the challenge at this point. >> Bob, machine learning, math, cognition is there more to do there, what's your take? >> Yeah I think right now you look at machine learning, machine learning are the algorithms we use, I mean I think the big thing that happened to machine learning is the neural network and deep learning, that was kind of a mild stepping stone where we got through and actually building kind of these AI behavior things. You know when you look what's really happening out there you look at the self driving car, what we don't realize is like it's kind of scary right now, you go to Vegas you can actually get on a driving bus now, you know so this AI machine learning stuff is starting to happen right before our eyes, you know when you go to the health care now and you get your diagnosis for cancer right, we're starting to see AI in image recognition really start to change how we get our diagnosis. And that's really starting to affect people's lives. So those are cases where we're starting to see this AI machine learning stuff is starting to make a difference. When we think about the AI singularity discussion right when are we finally going to build something that really has human behavior. I mean right now we're building AI that can actually play Jeopardy right, and that was kind of one of the inspirations for my company Mist was hey, if they can build something to play Jeopardy we should be able to build something answer questions on par with network domain experts. So I think we're seeing people build solutions now that do a lot of behaviors that mimic humans. I do think we're probably on the path to building something that is truly going to be on par with human thinking right, you know whether it's 50 years or a thousand years I think it's inevitable on how man is progressing right now if you look at the technologically exponential growth we're seeing in human evolution. >> Well we're going to get to that in the next question so you're jumping ahead, hold that thought. Ed, machine learning just math, pattern recognition or is there more cognition there to be had? Where do fall in this? >> Right now it's, I mean it's all math, so we collect something some data set about the world and then we use algorithms and some representation of mathematics to find some pattern, which is new and interesting, don't get me wrong, when you say cognition though we have to understand that we have a fundamentally flawed perspective on how maybe the one guiding light that we have on what intelligence could be would be ourselves right. Computers don't work like brains, brains are what we determine embody our intelligence right, computers, our brains don't have a clock, there's no state that's actually between different clock cycles that light up in the brain so when you start using words like cognition we end up trying to measure ourselves or use ourselves as a ruler and most of the methodologies that we have today don't necessarily head down that path. So yeah that's kind of how I view it. >> Yeah I mean stateless those are API kind of mindsets, you can't run Kubernetes in the brain. Maybe we will in the future, stateful applications are always harder than stateless as we all know but again when I'm sleeping, I'm still dreaming. So cognition in the question of human replacement. This has been a huge conversation. This is one, the singularity conversation you know the fear of most average people and then some technical people as well on the job front, will AI replace my job will it take over the world is there going to be a Skynet Terminator moment? This is a big conversation point because it just teases out what could be and tech for good tech for bad. Some say tech is neutral but it can be shaped. So the question is will AI replace humans and where does that line come from. We'll start with Ed on this one. What do you see this singularity discussion where humans are going to be replaced with AI? >> So replace is an interesting term, so there I mean we look at the last kind of Industrial Revolution that happened and people I think are most worried about the potential of job loss and when you look at what happened during the Industrial Revolution this concept of creative destruction kind of came about and the idea is that yes technology has taken some jobs out of the market in some way shape or form but more jobs were created because of that technology, that's kind of our one again lighthouse that we have with respect to measuring that singularity in and of itself. Again the ill defined definition, or the ill defined notion of intelligence that we have today, I mean when you go back and you read some of the early papers from psychologists from the early 1900s the experiment specifically who came up with this idea of intelligence he uses the term general intelligence as kind of the first time that all of civilization has tried to assign a definition to what is intelligent right? And it's only been roughly 100 years or so or maybe a little longer since we have had this understanding that's been normalized at least within western culture of what this notion of intelligence is so singularity this idea of the singularity is interesting because we just don't understand enough about the one measure ruler or yardstick that we have that we consider intelligence ourselves to be able to go and then embed that inside of a thing. >> Gene what's your thoughts on this, reasoning is a big part of your research you're doing a lot of research around intent and contextual, all these cool behavioral things you know this is where machines are there to augment or replace, this is the conversation, your view on this? >> I think one of the things with this is that that's where the downs still lie, if we have bad intentions, if we can actually start communicating then we can start getting the general intelligence yeah I mean sort of like what Ed was referring to how people have been trying to define this but I think one of the problems that comes up is that computers and stuff like that don't really capture that at this time, the intentions that they have are still at a low level, but if we start tying it to you know the question of the terminator moment to the singularity, one of the things is that autonomy, you know how much autonomy that we give to the algorithm, how much does the algorithm have access to? Now there could be you know just to be on an extreme there could be a disaster situation where you know we weren't very careful and we provided an API that gives full autonomy to whatever AI we have to run it and so you can start seeing elements of Skynet that can come from that but I also tend to come to analysis that hey even with APIs, while it's not AI, APIs a lot of that also we have the intentions of what you're going to give us to control. Then you have the AI itself where if you've defined the intentions of what it is supposed to do then you can avoid that terminator moment in terms of that's more of an act. So I'm seeing it at this point. And so overall singularity I still think we're a ways off and you know when people worry about job loss probably the closest thing that I think that can match that in recent history is the whole thing on automation, I grew up at the time in Ohio when the steel industry was collapsing and that was a trade off between automation and what the current jobs are and if you have something like that okay that's one thing that we go forward dealing with and I think that this is something that state governments, our national government something we should be considering. If you're going to have that job loss you know what better study, what better form can you do from that and I've heard different proposals from different people like, well if we need to retrain people where do you get the resources from it could be something even like AI job pack. And so there's a lot of things to discuss, we're not there yet but I do believe the lower, repetitive jobs out there, I should say the things where we can easily define, those can be replaceable but that's still close to the automation side. >> Yeah and there's a lot of opportunities there. Bob, you mentioned in the last segment the singularity, cognition learning machines, you mentioned deep learning, as the machines learn this needs more data, data informs. If it's biased data or real data how do you become cognitive, how do you become human if you don't have the data or the algorithms? The data's the-- >> I mean and I think that's one of the big ethical debates going on right now right you know are we basically going to basically take our human biases and train them into our next generation of AI devices right. But I think from my point of view I think it's inevitable that we will build something as complex as the brain eventually, don't know if it's 50 years or 500 years from now but if you look at kind of the evolution of man where we've been over the last hundred thousand years or so, you kind of see this exponential rise in technology right from, you know for thousands of years our technology was relatively flat. So in the last 200 years where we've seen this exponential growth in technology that's taking off and you know what's amazing is when you look at quantum computing what's scary is, I always thought of quantum computing as being a research lab thing but when you start to see VC's and investing in quantum computing startups you know we're going from university research discussions to I guess we're starting to commercialize quantum computing, you know when you look at the complexity of what a brain does it's inevitable that we will build something that has basic complexity of a neuron and I think you know if you look how people neural science looks at the brain, we really don't understand how it encodes, but it's clear that it does encode memories which is very similar to what we're doing right now with our AI machine right? We're building things that takes data and memories and encodes in some certain way. So yeah I'm convinced that we will start to see more AI cognizance and it starts to really happen as we start with the next hundred years going forward. >> Guys, this has been a great conversation, AI is real based upon this around theCUBE conversation. Look at I mean you've seen the evidence there you guys pointed it out and I think cloud computing has been a real accelerant with the combination of machine learning and open source so you guys have illustrated and so that brings up kind of the final question I'd love to get each of you's thought on this because Bob just brought up quantum computing which as the race to quantum supremacy goes on around the world this becomes maybe that next step function, kind of what cloud computing did for revitalizing or creating a renaissance in AI. What does quantum do? So that begs the question, five ten years out if machine learning is the beginning of it and it starts to solve some of these problems as quantum comes in, more compute, unlimited resource applied with software, where does that go, five ten years? We'll go start with Gene, Bob, then Ed. Let's wrap this up. >> Yeah I think if quantum becomes a reality that you know when you have the exponential growth this is going to be exponential and exponential. Quantum is going to address a lot of the harder AI problems that were from complexity you know when you talk about this regular search regular approaches of looking up stuff quantum is the one that allows you now to potentially take something that was exponential and make it quantum. And so that's going to be a big driver. That'll be a big enabler where you know a lot of the problems I look at trying to do intentions is that I have an exponential number of intentions that might be possible if I'm going to choose it as an explanation. But, quantum will allow me to narrow it down to one if that technology can work out and of course the real challenge if I can rephrase it into say a quantum program while doing it. But that's I think the advance is just beyond the step function. >> Beyond a step function you see. Okay Bob your take on this 'cause you brought it up, quantum step function revolution what's your view on this? >> I mean your quantum computing changes the whole paradigm right because it kind of goes from a paradigm of what we know, this binary if this then that type of computing. So I think quantum computing is more than just a step function, I think it's going to take a whole paradigm shift of you know and it's going to be another decade or two before we actually get all the tools we need to actually start leveraging quantum computing but I think that is going to be one of those step functions that basically takes our AI efforts into a whole different realm right? Let us solve another whole set of classic problems and that's why they're doing it right now because it starts to let you be able to crack all the encryption codes right? You know where you have millions of billions of choices and you have to basically find that one needle in the haystack so quantum computing's going to basically open that piece of the puzzle up and when you look at these AI solutions it's really a collection of different things going underneath the hood. It's not this one algorithm that you're doing and trying to mimic human behavior, so quantum computing's going to be yet one more tool in the AI toolbox that's going to move the whole industry forward. >> Ed, you're up, quantum. >> Cool, yeah so I think it'll, like Gene and Bob had alluded to fundamentally change the way we approach these problems and the reason is combinatorial problems that everybody's talking about so if I want to evaluate the state space of anything using modern binary based computers we have to kind of iteratively make that search over that search space where quantum computing allows you to kind of evaluate the entire search space at once. When you talk about games like AlphaGo, you talk about having more moves on a blank 19 by 19 AlphaGo board than you have if you put 1,000 universes on every proton of our universe. So the state space is absolutely massive so searching that is impossible. Using today's binary based computers but quantum computing allows you to evaluate kind of search spaces like that in one big chunk to really simplify the aspect but I think it will kind of change how we approach these problems to Bob and Gene's point with respect to how we approach, the technology once we crack that quantum nut I don't think will look anything like what we have today. >> Okay thank you guys, looks like we have a winner. Bob you're up by one point, we had a tie for second but Ed and Gene of course I'm the arbiter but I've decided Bob you nailed this one so since you're the winner, Gene you guys did a great job coming in second place, Ed good job, Bob you get the last word. Unpacking AI, what's the summary from your perspective as the winner of Around theCUBE. >> Yeah no I think you know from a societal point of view I think AI's going to be on par with kind of the internet. It's going to be one of these next big technology things. I think it'll start to impact our lives and people when you look around it it's kind of sneaking up on us, whether it's the self driving car the healthcare cancer, the self driving bus, so I think it's here, I think we're just at the beginnings of it. I think it's going to be one of these technologies that's going to basically impact our whole lives or our next one or two decades. Next 10, 20 years is just going to be exponentially growing everywhere in all our segments. >> Thanks so much for playing guys really appreciate it we have an inventor entrepreneur, Gene doing great research at Dartmouth check him out, Gene Santos at Dartmouth Computer Science. And Ed, technical genius at Dell, figuring out how to make those machines smarter and with the software abstractions growing you guys are doing some good work over there as well. Gentlemen thank you for joining us on this inaugural Around theCUBE unpacking AI Get Smart series, thanks for joining us. >> Thank you. >> Thank you. >> Okay, that's a wrap everyone this is theCUBE in Palo Alto, I'm John Furrier thanks for watching. (upbeat funk music)

Published Date : Oct 23 2019

SUMMARY :

in the heart of Silicon Valley, and Distinguished Member of the Technical Staff is on the rise, we're seeing AI everywhere. the evolving AI and that's where you get in the industry and when you look and depending how how you want to interpret that of data that we have available to us to go to Google, you have to find All the things that you know us humans what do we do with it, you know you can to happen right before our eyes, you know or is there more cognition there to be had? of the methodologies that we have today of mindsets, you can't run Kubernetes in the brain. of job loss and when you look at what happened and what the current jobs are and if you have if you don't have the data or the algorithms? and I think you know if you look how people So that begs the question, five ten years out quantum is the one that allows you now Beyond a step function you see. because it starts to let you be able to crack the technology once we crack that quantum nut but Ed and Gene of course I'm the arbiter and people when you look around it you guys are doing some good work over there as well. in Palo Alto, I'm John Furrier thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
BobPERSON

0.99+

Gene SantosPERSON

0.99+

Ed HenryPERSON

0.99+

EdPERSON

0.99+

GenePERSON

0.99+

John FurrierPERSON

0.99+

2014DATE

0.99+

1988DATE

0.99+

Palo AltoLOCATION

0.99+

50 yearsQUANTITY

0.99+

1888DATE

0.99+

AmazonORGANIZATION

0.99+

FacebookORGANIZATION

0.99+

OhioLOCATION

0.99+

Bob FridayPERSON

0.99+

DellORGANIZATION

0.99+

October 2019DATE

0.99+

thousands of yearsQUANTITY

0.99+

GoogleORGANIZATION

0.99+

first questionQUANTITY

0.99+

Dartmouth Computer ScienceORGANIZATION

0.99+

one pointQUANTITY

0.99+

1,000 universesQUANTITY

0.99+

secondQUANTITY

0.99+

todayDATE

0.99+

five ten yearsQUANTITY

0.98+

Dell EMCORGANIZATION

0.98+

decadeQUANTITY

0.98+

oneQUANTITY

0.98+

two schoolsQUANTITY

0.98+

twoQUANTITY

0.98+

MistORGANIZATION

0.97+

80sDATE

0.97+

late 80sDATE

0.97+

first timeQUANTITY

0.97+

Juniper CompanyORGANIZATION

0.97+

early 1900sDATE

0.97+

early 90sDATE

0.97+

second placeQUANTITY

0.97+

20 years agoDATE

0.97+

early 80sDATE

0.97+

DartmouthORGANIZATION

0.96+

one needleQUANTITY

0.95+

last decadeDATE

0.95+

500 yearsQUANTITY

0.93+

eachQUANTITY

0.93+

100 yearsQUANTITY

0.93+

AlphaGoORGANIZATION

0.93+

one algorithmQUANTITY

0.92+

JeopardyTITLE

0.92+

theCUBEORGANIZATION

0.92+

OneQUANTITY

0.92+

one big thingQUANTITY

0.92+

Silicon Valley,LOCATION

0.92+

one thingQUANTITY

0.91+

19QUANTITY

0.91+

TensorFlowTITLE

0.91+

Industrial RevolutionEVENT

0.91+

millions of billions of choicesQUANTITY

0.9+

The Truth About AI and RPA | UiPath


 

>> From the SiliconANGLE Media Office in Boston, Massachusets, it's theCUBE! (techno music) Now, here's your host, Stu Miniman. >> Hi. I'm Stu Miniman and this is a Cube Conversation from our Boston area studio. Welcome back to the program. Bobby Patrick, who is the Chief Marketing Officer of UiPath. Bobby, good to see you. >> Great to be here Stu. >> Alright. Bobby, we're going to tackle head-on an interesting discussion that's been going on in the industry. Of course, Artificial Intelligence is this wave that is impacting a lot when you look at earnings reports, everyone's talking about it. Most companies are understanding how they're doing it. It is not a new term. I go back reading my history of technology, Ada Lovelace, 150 years ago when she was helping to define what a computer was. She made the Lovelace objective, I believe they said - >> Right. >> Which was later quoted by Turing and the like is that if we can describe it in code, it's probably not Artificial Intelligence cause their not building new things - >> Right. >> And being able to change on there, so there's hype around AI itself, but UiPath is one of the leaders in Robotic Process Automation and how that fits in with AI and Machine Learning, all of these other terms it can get a bit of an acronym soup and we all can't agree on what the terms are. So, let's start with some of the basics Bobby. Please give us RPA and AI and we'll get into it from there. >> Well, Robotic Process Automation, according to the analysts, like Forester are part of the overall AI broader kind of massive, massive market. AI itself has many different, different, routes. Deep learning right, and machine learning, natural language processing, right and so on. I think AI is a term that covers many different grounds. And RPA, AI applies two ways. It applies within RPA and that we have a technology called Computer Vision. It's how a robot looks at a screen like how a human does, which is very, very difficult actually. You look at a citrix terminal session, or a VDI session, different than an Excel sheet, different than as SASAB, and most processes across all of those, so a robot has to be able to look at all of, all of those screen elements, and understand them right. AI within Computer Vision around understanding documents, looking at unstructured data, looking at handwriting. Conversational understanding. Looking at text in an email determining context, helping with chatbots. But a number of those components, doesn't mean we have to build that all ourselves. What RPA does is we bring it all together. We make it easy to automate and build and create the data flow of a process. Then you can apply AI to that, right. So, I think, two years ago when I first joined UiPath, putting RPA and AI in the same sentence people laughed. Year ago we said, ya know what, RPA is really the path to AI in business operations. Now, ya know we say that we're the most highly valued AI company in the world and no one has ever disagreed. >> Yeah, so it's good to lay out some of the adopting cause one of the things to look at and say if I looked at this product two or three years ago, it's not the product that it is today. We know how fast software - >> Right. Is making changes along the line. Second thing, automation itself is something we've been talking about my entire career. >> Right. When I look at things we were doing 5, 10, 15 years ago, and calling automation, we kind of laugh at it. Because today, automation absolutely is making a lot of changes. RPA is taking that automation in a very strategic direction for many companies there. It's the conversation we had last year at your conference was, RPA is the gateway drug if you will. >> Right. >> Of that environment because automation has scared a lot of people. Am I just doing scripts, what do I control, what do I set? Maybe just give us that first grounding of where that automation path is, has come and is going. >> So, there's different kinds of automation right as you said. We've had automation for decades, primarily in IT. Automation was primarily around API to API integration. And that's really hard, right. It requires developers, engineers, it requires them to keep it current. It's expensive and takes a longer time. Along comes the technology, RPA and UiPath, right were you can automate fairly quickly. There's built in recorders and you can do it with a drag and drop, like a flow chart. You can automate a process, and that, that automation is immediately beneficial. Meaning that outcome, is immediate. And, the cost to doing that is small in comparison. And I think, maybe it's the longtail of automation in some ways. It's all of these things that we do around a SAP process. The reality is if you have SAP, or you have Oracle, or you have Workday, the human processes around that involve still a Spreadsheet. It involves PDF documents. A great, one of my favorite examples right now on YouTube with Microsoft is Chevron. Chevron has hundreds of thousands of PDF's that is generated from every oil rig every day. It has all kinds of data in different formats. Tables, different structured and semi-structured data. They would actually extract that data, manually. To be able to process that and analyze that, right. Working with Microsoft AI and UiPath RPA they're able to automate that entire massive process. And now they're on stage talking about it, Microsoft and UiPath events right. And, they call that AI. That's applying AI to a massive problem for them. They need the robot to be completely accurate though. You don't to worry that the data that is being extracted from the PDF's is inaccurate, right. So, Machine Learning goes into that. There's exception management that's a part of that process as well. They call it AI. >> Yeah, some of this is just, people in the industry, the industry watchers is, we get very particular on different terminology. Let's not conflate Artificial Intelligence, or Augmented Intelligence with Machine Learning, because their different environments. I've heard Forester talk about, right, it's a spectrum though, there's an umbrella for some of these. So, we like to get not too pedantic on individual terms itself. >> Right. >> Um - >> Let me give you more examples. I think the term robotic and RPA, yes, it's true that the vast majority of the last couple of years with RPA have been very rules based, right. Because most processes today like in a call center, there's a rule. Do this and this, then this and this. And so, you're automating that same rules based structure. But once that data's flowing through, you can actually then look at the history of that data and then turn a rules based automation into an experience based automation. And how do you do that? You apply Machine Learning algorithms. You apply Data Robot, LMAI, IBM Watson to it, right. But, it's still the RPA platform that is driving that automation, it's just no longer rules based it's experience based. A great example at UiPath Together Dubai recently, was Dubai customs. They had a process where when you declared something, let's say you box of chocolate, they had to open up a binder and find a classification code for that box of chocolate. Well, they use our RPA product and they make a call out to IBM Watson as a part of the automation, and they just write in, pink box of candy filled chocolate. And it takes its Deep Learning, it comes back with a classification code, all part of an automated process. What happens? Dubai customs lines go from being a two hours to a few minutes, right. It's a combination of our RPA capability and our automation board capability and the ability to bring in IBM Watson. Dubai customs says they applied AI now and solved a big problem. >> One of the things I was reading through the recent Gartner Magic Quadrant on RPA, and they had two classifications. One was, kind of the automation does it all, and the other was the people and machines. Things like chatbox, some of the examples you've been giving there seem to be that combination. Where do those two fit together or are those distinctions that you make? >> Yeah, I mean Gartner's interesting. Gartner's a very IT-centric analyst firm, right and IT often in my view are often very conventional thinkers and not the fastest to adopt breakthrough technologies. They weren't the fastest to adopt Cloud, they weren't the fastest to adopt on-demand CRM, and they weren't the fastest to jump onto RPA because they believe, why can't we use API for everything. And the Gartner analysts is kind of, in the beginning of the process of the Magic Quadrant, they spent a lot of time with us and they were trying hard to say that was, you should solve everything with an API. That's just not reality, right? It's not feasible, and it's not affordable, right? But, RPA is just not the automation of a task or process, it's then applying a whole other set of other technologies. We have 700 partners today in our ecosystem. Natural Language processing partners, right. Machine learning partners. Chatbox partners, you mentioned. So we want to be, we want to make it very easy. In a drag and drop way. To be able to apply these great technologies to an automation to solve some big problem. What's fun to me right now is there's a lot of great startups. They come out of say insurance, or they come out of financial services and they've got a great algorithm and they know the business really well. And they probably have one or two amazing customers, and they're stuck. We, for them, this came from a partner of ours, you're becoming, you UiPath, you're becoming our best route to market because you have the data. You have the work flow. Our job I think in some ways, is to make it easy to bring these technologies together to apply them to an automation to make that through a democratized way where a non-engineer can do this, and I think that's what's happening. >> Yeah, those integrations between environments can be very powerful something we see. Every shop has lots of applications, has lots of technical data and they're not just sweeping the floor of everything they have. What are some of the limits of AI and RPA today, what do you see things going? >> I think, Deep Learning we see very little of that. It's probably applied to some kind of science project and things within companies. I think for the vast majority of our customers, they use machine learning within RPA for Computer Vision by default. But, ya know they're still not really at a stage of mass adoption of what algorithms do I want to apply to a process. I think we're trying to make it easier for you to be able to drag and drop AI we call it, to make it easier to apply. But, I think we're in very early days. And as you mentioned, there's market confusion on it. I know one thing from our 90 plus customers that are in our advisory boards. I know from them they say their companies struggles with finding an ROI in AI, and, you know, I think we're helping there cause we're applying to real operations. They say the same thing about Blockchain. I don't know Stu. Do you know of a single example of a Blockchain ROI, great example? >> Yeah, it reminds me, Big Data was one of those, over half of the people failed to get the ROI they want. It's one of those promises of certain technology - >> Right. >> That high-level, you know let's poo-poo Bobby things that actually have tangible results - >> Yeah. >> And get things done. But you weren't following the strict guidelines of the API economy. >> Right, well true, exactly right. What I find amazing is, I mentioned in another one of our talks conversations that 23,000 have come to UiPath events this year. To our own events, not trade events and other shows, that's different. They want to get on stage and talk. They're delighted about this. And their talking about, generally speaking, RPA's helping them go digital. But they're all saying their ambition is to apply AI to make those processes smarter. To learn from - to go from rules based to experience based. I think what's beautiful about UiPath, is that we're a platform that you can get there overtime. You can apply - you can predict perhaps the algorithm 's you're going to want to use in two or three years. We're not going to force you, you can apply any algorithm you want to an automation work going through. I think that flexibility is actually for customers, they find it very comforting. >> It's one of those things I say, most companies have a cloud strategy. That needs to be written in, not etched in stone. You need to revisit it every quarter. Same thing with what happening AI and in your space things are changing so fast and they need to be agile. >> That's right. >> They need to be able to make changes. In October, you're going to have a lot of those customers up on stage talking. Where will this AI discussion fit into UiPath forward in Las Vegas. We talk a lot about our AI fabric, framework it's around document understanding, getting heavy robots getting smarter and smarter, what they see on the screen, what they see on a document, what they see with handwriting, and improving the accuracy of visual understanding. Looking at the, face recognition and other types of images and being able to understand the images. Conversational understanding. The tone of an email. Is this person really upset? How upset? Or, a conversational chatbot. Really evolving from mimicking humans with RPA to augmenting humans and I think that story, both in the innovations, the customer examples on stage, I think you're going to see the sophistication of automation's that are being used through UiPath grow exponentially. >> Okay, so I want to give you the final word on this. And I don't want to talk to the people that might poo-poo or argue RPA and AI and ML and all these things. Bring us inside your customers. What...where, how does that conversation start? Are they coming it from AI, ML, RPA or is there, ya know a business discussion that usually catalyzes this engagement? >> Our customer's are starting with digital. They're trying to go digital. They know they need digital transformation, it's been very, very hard. There's a real outcome that comes quickly from taking a mundane task that is expensive, and automating that. The outcomes are quick, often projects that involve our partners like Accenture and others. The payback period on the entire project with RPA can be 6 months, it's self-funding. What other technologies doing B2B is self-funding in one year? That's part of the incredible adoption birth. But, every single customer doesn't stop there. They say okay, I also want to know that this automation is, I want to know that I can go apply AI to this. It's in every conversation. So there's two big booms with UiPath and our RPA. The first is when you go digital, there's some great outcome. There's productivity gain, it's immediate, right. I guess I said the payback period is quick. The second big one is when you go and turn it from a rules based to an experience based process, or you apply AI to it, there's another set of business benefits down the road. As more algorithms come out and things, you keep applying to it. This is sort of the gift that keeps on giving. I think if we didn't have that connection to Machine Learning or AI, I think the enthusiasm level of the majority of our customers would not be anywhere near what it is today. >> Alright, well Bobby really appreciate digging into the customerality, RPA, AI all the acronym soup that was going on and we look forward to UiPath Forward at the Bellagio in Las Vegas this October. >> It'll be fun. Alright, I'm Stu Miniman, as always thank you so much for watching theCube.

Published Date : Jul 17 2019

SUMMARY :

From the SiliconANGLE Media Office Welcome back to the program. that is impacting a lot when you look at but UiPath is one of the leaders in RPA is really the path to AI in business operations. cause one of the things to look at and say Is making changes along the line. RPA is the gateway drug if you will. Am I just doing scripts, They need the robot to be completely accurate though. people in the industry, they had to open up a binder and find a and the other was the people and machines. But, RPA is just not the automation of a task the floor of everything they have. They say the same thing about Blockchain. over half of the people failed to get of the API economy. is that we're a platform that you can get there overtime. things are changing so fast and they need to be and improving the accuracy of visual understanding. I want to give you the final word on this. I guess I said the payback period is quick. all the acronym soup that was going on thank you so much

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Ada LovelacePERSON

0.99+

BobbyPERSON

0.99+

Stu MinimanPERSON

0.99+

oneQUANTITY

0.99+

Bobby PatrickPERSON

0.99+

GartnerORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

OctoberDATE

0.99+

BostonLOCATION

0.99+

IBMORGANIZATION

0.99+

6 monthsQUANTITY

0.99+

AccentureORGANIZATION

0.99+

one yearQUANTITY

0.99+

twoQUANTITY

0.99+

UiPathORGANIZATION

0.99+

Las VegasLOCATION

0.99+

two hoursQUANTITY

0.99+

last yearDATE

0.99+

700 partnersQUANTITY

0.99+

StuPERSON

0.99+

ExcelTITLE

0.99+

twoDATE

0.99+

firstQUANTITY

0.99+

5DATE

0.99+

two waysQUANTITY

0.99+

YouTubeORGANIZATION

0.98+

three yearsQUANTITY

0.98+

two years agoDATE

0.98+

90 plus customersQUANTITY

0.98+

OneQUANTITY

0.98+

ChevronORGANIZATION

0.98+

this yearDATE

0.98+

OracleORGANIZATION

0.98+

todayDATE

0.98+

two classificationsQUANTITY

0.98+

150 years agoDATE

0.97+

DubaiLOCATION

0.97+

bothQUANTITY

0.97+

23,000QUANTITY

0.97+

TuringPERSON

0.97+

three years agoDATE

0.97+

10DATE

0.97+

two big boomsQUANTITY

0.95+

Year agoDATE

0.95+

SiliconANGLE Media OfficeORGANIZATION

0.95+

singleQUANTITY

0.95+

one thingQUANTITY

0.94+

Second thingQUANTITY

0.93+

15 years agoDATE

0.93+

hundreds of thousandsQUANTITY

0.91+

ForesterORGANIZATION

0.89+

MassachusetsLOCATION

0.86+

LMAIORGANIZATION

0.85+

second bigQUANTITY

0.84+

SAPORGANIZATION

0.84+

over halfQUANTITY

0.79+

SASABTITLE

0.78+

single customerQUANTITY

0.78+

customersQUANTITY

0.77+

Rob Thomas, IBM | IBM Innovation Day 2018


 

(digital music) >> From Yorktown Heights, New York It's theCUBE! Covering IBM Cloud Innovation Day. Brought to you by IBM. >> Hi, it's Wikibon's Peter Burris again. We're broadcasting on The Cube from IBM Innovation Day at the Thomas J Watson Research Laboratory in Yorktown Heights, New York. Have a number of great conversations, and we got a great one right now. Rob Thomas, who's the General Manager of IBM Analytics, welcome back to theCUBE. >> Thanks Peter, great to see you. Thanks for coming out here to the woods. >> Oh, well it's not that bad. I actually live not to far from here. Interesting Rob, I was driving up the Taconic Parkway and I realized I hadn't been on it in 40 years, so. >> Is that right? (laugh) >> Very exciting. So Rob let's talk IBM analytics and some of the changes that are taking place. Specifically, how are customers thinking about achieving their AI outcomes. What's that ladder look like? >> Yeah. We call it the AI ladder. Which is basically all the steps that a client has to take to get to get to an AI future, is the best way I would describe it. From how you collect data, to how you organize your data. How you analyze your data, start to put machine learning into motion. How you infuse your data, meaning you can take any insights, infuse it into other applications. Those are the basic building blocks of this laddered AI. 81 percent of clients that start to do something with AI, they realize their first issue is a data issue. They can't find the data, they don't have the data. The AI ladder's about taking care of the data problem so you can focus on where the value is, the AI pieces. >> So, AI is a pretty broad, hairy topic today. What are customers learning about AI? What kind of experience are they gaining? How is it sharpening their thoughts and their pencils, as they think about what kind of outcomes they want to achieve? >> You know, its... For some reason, it's a bit of a mystical topic, but to me AI is actually quite simple. I'd like to say AI is not magic. Some people think it's a magical black box. You just, you know, put a few inputs in, you sit around and magic happens. It's not that, it's real work, it's real computer science. It's about how do I put, you know, how do I build models? Put models into production? Most models, when they go into production, are not that good, so how do I continually train and retrain those models? Then the AI aspect is about how do I bring human features to that? How do I integrate that with natural language, or with speech recognition, or with image recognition. So, when you get under the covers, it's actually not that mystical. It's about basic building blocks that help you start to achieve business outcomes. >> It's got to be very practical, otherwise the business has a hard time ultimately adopting it, but you mentioned a number of different... I especially like the 'add the human features' to it of the natural language. It also suggests that the skill set of AI starts to evolve as companies mature up this ladder. How is that starting to change? >> That's still one of the biggest gaps, I would say. Skill sets around the modern languages of data science that lead to AI: Python, AR, Scala, as an example of a few. That's still a bit of a gap. Our focus has been how do we make tools that anybody can use. So if you've grown up doing SPSS or SaaS, something like that, how do you adopt those skills for the open world of data science? That can make a big difference. On the human features point, we've actually built applications to try to make that piece easy. Great example is with Royal Bank of Scotland where we've created a solution called Watson Assistant which is basically how do we arm their call center representatives to be much more intelligent and engaging with clients, predicting what clients may do. Those types of applications package up the human features and the components I talked about, makes it really easy to get AI into production. >> Now many years ago, the genius Turing, noted the notion of the Turing machine where you couldn't tell the difference between the human and a machine from an engagement standpoint. We're actually starting to see that happen in some important ways. You mentioned the call center. >> Yep. >> How are technologies and agency coming together? By that I mean, the rate at which businesses are actually applying AI to act as an agent for them in front of customers? >> I think it's slow. What I encourage clients to do is, you have to do a massive number of experiments. So don't talk to me about the one or two AI projects you're doing, I'm thinking like hundreds. I was with a bank last week in Japan, and they're comment was in the last year they've done a hundred different AI projects. These are not one year long projects with hundreds of people. It's like, let's do a bunch of small experiments. You have to be comfortable that probably half of your experiments are going to fail, that's okay. The goal is how do you increase your win rate. Do you learn from the ones that work, and from the ones that don't work, so that you can apply those. This is all, to me at this stage, is about experimentation. Any enterprise right now, has to be thinking in terms of hundreds of experiments, not one, not two or 'Hey, should we do that project?' Think in terms of hundreds of experiments. You're going to learn a lot when you do that. >> But as you said earlier, AI is not magic and it's grounded in something, and it's increasingly obvious that it's grounded in analytics. So what is the relationship between AI analytics, and what types of analytics are capable of creating value independent of AI? >> So if you think about how I kind of decomposed AI, talked about human features, I talked about, it kind of starts with a model, you train the model. The model is only as good as the data that you feed it. So, that assumes that one, that your data's not locked into a bunch of different silos. It assumes that your data is actually governed. You have a data catalog or that type of capability. If you have those basics in place, once you have a single instantiation of your data, it becomes very easy to train models, and you can find that the more that you feed it, the better the model's going to get, the better your business outcomes are going to get. That's our whole strategy around IBM Cloud Private for Data. Basically, one environment, a console for all your data, build a model here, train it in all your data, no matter where it is, it's pretty powerful. >> Let me pick up on that where it is, 'cause it's becoming increasingly obvious, at least to us and our clients, that the world is not going to move all the data over to a central location. The data is going to be increasingly distributed closer to the sources, closer to where the action is. How does AI and that notion of increasing distributed data going to work together for clients. >> So we've just released what's called IBM Data Virtualization this month, and it is a leapfrog in terms of data virtualization technology. So the idea is leave your data where ever it is, it could be in a data center, it could be on a different data center, it could be on an automobile if you're an automobile manufacturer. We can federate data from anywhere, take advantage of processing power on the edge. So we're breaking down that problem. Which is, the initial analytics problem was before I do this I've got to bring all my data to one place. It's not a good use of money. It's a lot of time and it's a lot of money. So we're saying leave your data where it is, we will virtualize your data from wherever it may be. >> That's really cool. What was it called again? >> IBM Data Virtualization and it's part of IBM Cloud Private for Data. It's a feature in that. >> Excellent, so one last question Rob. February's coming up, IBM Think San Francisco thirty plus thousand people, what kind of conversations do you anticipate having with you customers, your partners, as they try to learn, experiment, take away actions that they can take to achieve their outcomes? >> I want to have this AI experimentation discussion. I will be encouraging every client, let's talk about hundreds of experiments not 5. Let's talk about what we can get started on now. Technology's incredibly cheap to get started and do something, and it's all about rate and pace, and trying a bunch of things. That's what I'm going to be encouraging. The clients that you're going to see on stage there are the ones that have adopted this mentality in the last year and they've got some great successes to show. >> Rob Thomas, general manager IBM Analytics, thanks again for being on theCUBE. >> Thanks Peter. >> Once again this is Peter Buriss of Wikibon, from IBM Innovation Day, Thomas J Watson Research Center. We'll be back in a moment. (techno beat)

Published Date : Dec 7 2018

SUMMARY :

Brought to you by IBM. at the Thomas J Watson Research Laboratory Thanks for coming out here to the woods. I actually live not to far from here. and some of the changes care of the data problem What kind of experience are they gaining? blocks that help you How is that starting to change? that lead to AI: Python, AR, notion of the Turing so that you can apply those. But as you said earlier, AI that the more that you feed it, that the world is not So the idea is leave your What was it called again? of IBM Cloud Private for Data. that they can take to going to see on stage there Rob Thomas, general Peter Buriss of Wikibon,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Peter BurissPERSON

0.99+

JapanLOCATION

0.99+

Rob ThomasPERSON

0.99+

PeterPERSON

0.99+

oneQUANTITY

0.99+

IBMORGANIZATION

0.99+

one yearQUANTITY

0.99+

Royal Bank of ScotlandORGANIZATION

0.99+

RobPERSON

0.99+

81 percentQUANTITY

0.99+

last weekDATE

0.99+

last yearDATE

0.99+

twoQUANTITY

0.99+

Peter BurrisPERSON

0.99+

FebruaryDATE

0.99+

first issueQUANTITY

0.99+

Yorktown Heights, New YorkLOCATION

0.99+

IBM Innovation DayEVENT

0.99+

IBM AnalyticsORGANIZATION

0.99+

hundredsQUANTITY

0.99+

WikibonORGANIZATION

0.98+

PythonTITLE

0.98+

Taconic ParkwayLOCATION

0.98+

40 yearsQUANTITY

0.98+

ScalaTITLE

0.98+

thirty plus thousand peopleQUANTITY

0.97+

IBM Cloud Innovation DayEVENT

0.96+

hundreds of experimentsQUANTITY

0.96+

todayDATE

0.96+

Watson AssistantTITLE

0.96+

one placeQUANTITY

0.94+

IBM Innovation Day 2018EVENT

0.93+

Thomas J Watson Research CenterORGANIZATION

0.93+

SPSSTITLE

0.89+

this monthDATE

0.88+

one environmentQUANTITY

0.86+

San FranciscoLOCATION

0.8+

half ofQUANTITY

0.79+

hundreds of peopleQUANTITY

0.78+

many years agoDATE

0.77+

hundreds of experimentsQUANTITY

0.76+

single instantiationQUANTITY

0.76+

hundred different AI projectsQUANTITY

0.76+

one last questionQUANTITY

0.73+

SaaSTITLE

0.71+

TuringORGANIZATION

0.71+

ARTITLE

0.7+

IBM ThinkORGANIZATION

0.69+

J Watson ResearchORGANIZATION

0.67+

ThomasLOCATION

0.62+

The CubeTITLE

0.58+

moneyQUANTITY

0.58+

VirtualizationCOMMERCIAL_ITEM

0.55+

LaboratoryLOCATION

0.54+

TuringPERSON

0.51+

Cloud PrivateCOMMERCIAL_ITEM

0.49+

Private forCOMMERCIAL_ITEM

0.47+

CloudTITLE

0.3+

Dan Barnhardt, Infor | Inforum DC 2018


 

>> Live, from Washington D., it's the Cube. Covering Inforum DC 2018. Brought to you by Infor. >> And welcome back to Inforum '18. We're live here in Washington DC as Inforum has brought its show to our nation's capital. I'm John Walls along with Dave Vellante. It's now a pleasure to welcome Vice President of corporate communications Dan Barnhardt. >> Thank you. >> Hey Dan, good morning to you. >> Good morning to you. Good to see you again. >> We were kidding before we got started about why you're here in Washington. We think it's for the weather, right, because it's so nice. >> It's gorgeous. >> But there is a reason. I mean, you've released a federal product today, have an announcement we'll get to in just a moment. But about coming to Washington. You've been in New York before, you've been in New Orleans. Why DC, why now? >> Well, it's important for us to make sure that our customers can access the event. We've got more customers that came this year than came previous years, certainly, than last year. And it's important to be in a city where it's accessible for our customers not just in the US, but also from Europe and Asia Pacific, Latin America and Washington DC's very accessible. We also are one of the largest suppliers to public sector organizations. That's, uh, local, state, and federal government. We've got a particular focus on federal government and fed ramp compliance this year, which we achieved. And, so, we're here so that we can show off some of that new technology that you just mentioned. >> Yeah, what about the significance of that? Of reaching the compliance goal. And what does that mean to the business going forward? >> Well, it's yet another example of the benefits of our cloud strategy and working with the AWS beginning in 2014. Infor was the first large ISV to embrace a public cloud. And Amazon and Amazon web services in particular has been very helpful in achieving fed ramp. They have a lot of federal customers. They've got a very large federal agency with three initials that is a customer and they require compliance with all of the federal regulations that continually change and the utmost security for customers and we're able to offer that to our customers as well. >> Yeah, we were talking on the kick off about that - how you guys can draft the AWS innovations and things like fed ramp and other compliance. They were first, they were way ahead of anybody. You as an ISV, you don't have to worry about all that stuff. I mean, you've still got to connect to it, but they do a lot of the heavy lifting, so that's cool. You got some other hard news. >> Well we also are able to focus on our products by doing that. We don't have to invest in proprietary cloud infrastructure or data centers or databases. We can focus on delivering innovation in our products and functionality that makes a difference for our customers. Their business is not - their customers don't care what infrastructure they're running on, they care how they're able to provide goods and services. So Infor focuses just on delivering better goods and services for our customers. >> What Charles said at the keynote this morning - our strategy, we didn't want to compete with Google and Amazon and Microsoft for scale of cloud. That made no sense. It also made the point that when we were an on prem - exclusively on prem software company, we didn't go out and manage servers for our clients. So we don't want to do that. So, big differentiator for sure, from some of the other SAS players. >> And it's paying off now in a way that our competitors are starting to come after us when they used to not want to acknowledge us. One of our larger competitors - on premise legacy vendor - had an anti-Infor ad on their homepage. They've got cabs outside of here. >> We're talking about- - Yeah >> And then Charles said, ya know if you're - we're welcome the competition here if you'd like to see innovation in enterprise software, this is the place to be. >> Well, congratulations, right, 'cause, well, you know, when Oracle's coming at you, it means you succeeded - that's good. Um, other hard news that you guys had this week - you got true cost accounting in healthcare and some other things, take us through those. >> Well health care has been a major focus industry for us, just along with government, which we mentioned. Um, seventy plus percent of large hospitals in the United States are automated using an Infor software. And healthcare has been an industry that's undergone a lot of disruption, obviously, for the last ten, twelve years, with the Affordable Care Act and others. And we're trying to figure out - we as a society are trying to figure out - how to deliver better care to patients, that's the goal for healthcare organizations. And to do that, they need to better understand what's the cost of care. So the Infor true cost, which we announced in January and have now delivered and have customers implementing, will help our customers understand better what is the cost of the care that they're giving so that they can give better care to their patients and allocate their resources in a way that will help more people heal better and feel better. >> We heard on the intro to the keynotes today, Turing, Edison, and Coleman. It sounded like it was Charles' voiceover. I don't know if it was or not, but >> It was. >> It was. He's got the smooth, mellifluous voice. Um, last year Coleman, Catherine, Coleman, Johnson - you named your AI offering platform after her. Give us the update where you're at today, you've got some other announcements around that as well. >> We do. It's a big announcement for Coleman here. We've got the GA of Coleman digital assistant, which is - enables humans to have - everyone to have an assistant at work with them to help automate certain functions such as search and gather, which can take twenty percent of people's time just collecting the information to make a decision. But now with Coleman digital assistant being live and customers implementing and going live on it right now, they're able - users are able to ask Coleman to fetch information and deliver not only the information but predictions and smart intelligence that helps people make better decisions and be more productive. >> So we had a lot of conversation this morning about robotic process automation, which is really interesting. I mean, essentially, we're talking about software robots taking over mundane tasks to humans. Now a lot of people like to talk about how - and we talked about this in the Cube all the time - how, oh, the machines are taking away jobs, but in speaking to numerous customers about RPA, they're thrilled that they don't have to do these mundane tasks because it makes them more valuable, they're doing more interesting things, and they're getting offers from others that are asking them to do this type of automation for their company. So they're more valuable to their existing company and outside companies. So, RPA - hot topic. You guys are leaning in hard. >> We definitely are. We definitely believe that there are jobs that - there are functions that can be better served by automation, particularly search and gather that we mentioned. There are multiple functions that will always be done by people. Human interaction is not going to change so we are looking to have a digital assistant make productivity better. Productivity is a function of being able to do more, having more workers, and we'd like to do both with this. We'd like people to be more productive using artificial intelligence assistance. And, also, a conversational user experience with software will make it easier and less intimidating for a lot of people to interact with technology at work. And we think that will also help people be able to be more productive in their jobs and have more people able to take more jobs that right now or in the past have required a level of technical expertise that you won't need when you can simply ask the computer to do something for you using your own conversational language. >> Some major data points - excuse me - >> That's okay. that came out of the keynote this morning - one is that there are now more job openings than there are unemployed individuals and productivity, even though the tech spending is booming, it doesn't show up in the productivity numbers. We saw this actually, you know, a couple decades ago in the nineties. And all of a sudden you saw this massive productivity boom. I've predicted that with automation and artificial intelligence you're going to see something similar. It seems like Infor's on a mission - that human potential tagline - on a mission to really drive that productivity and help close those gaps. >> We definitely are. Our tagline is "design for progress" and we are looking to promote progress around the world and do what we can in order to help human progress and the theme at Inforum is human potential and that's what we're looking to do here. We have seen a lot of productivity growth in people's personal lives. I now - I don't know how to set a timer to cook anymore, I just ask Alexa to do it, but we haven't seen that at enterprise yet. So we're bringing consumer grade technology that people have gotten used to in their everyday lives but they don't see at the office. We're bringing it to the office to help make them equally as productive as they are in their personal lives. >> Yeah, that's what I wanted to hit on, actually, was the theme of the show. We're talking about human potential and which Hervan Jones talking about that, you know, from a personal mission statement if you want - that's the way he worded it. But, what's the broad scope of that in terms of how you apply that thematically throughout the company when you talk about human potential, because it's just not you, obviously you're trying to do that for your clients, you're trying to do that for the people they serve, do it for taxpayers, right, through the federal sector. But talk about that from the thirty thousand foot level about human potential - unlocking that and how Infor totally is, I guess, trying to illustrate that or put that in place. >> Certainly. The first thing I would mention is our human capital management. Infor is a very large provider of HR software - there's others that are perhaps better known, but Infor has many customers that are using our HR software, but they're also using our software for other key functions. And by integrating those two things, we're able to help people be their best self at work. Because it's not just the HR management, but the HR system knows what you're working on, they can help with professional development, and talent management, and align that to the business processes that the company has. We're also looking to engage workers. As you mentioned, there are more job openings than there are unemployed people that we believe seeking employment right now, but they're not very engaged. So we're hoping to have technology and learning management to help engage more workers. And then we'd also like to increase new business creation. One of the things that Charles mentioned that slowed down is the introduction of new businesses and small businesses. We believe one of the reasons for that is that there is so much business automation that goes on that in order to achieve that to be competitive requires so much capital investment that it makes it difficult to start a new business. But if we're able to automate a lot of that business, we're able to make it really easy through Infor cloud suite for new business starting, we feel like we'll be able to help entrepreneurs generate new businesses which will employ more people and offer more engaging and rewarding jobs and help fill some of those gaps that we have. >> We've talked a lot about AI - not just some magic thing that you throw at your business - it has to be operationalized and the likely way in which organizations are going to consume AI is it's going to be infused in applications. And this is exactly what your strategy it, isn't it? >> It is. The artificial intelligence is only going to be as smart as the amount of data that it can access and that it can analyze. It doesn't have a brain, it looks at data and learns from that data and where it tells you. And Infor has access to data that very few companies have - mission critical data, ERP, data manufacturing, distribution - core processes that we're able to put in the cloud, and not just in the cloud, but in a multi-tenant cloud environment where it can be drawn from analytics, from our burst analytics engine. And then, Coleman can make decisions based on that data - not only from within the enterprise but across the network using GT Nexus commerce network. >> Yeah, so we're hearing a lot about HCM, of course, at this show, you know, human potential, fits into talent management, HCM. You guys have a very competitive product there, it's sort of a knife fight with some of the large SAS players, but I was excited to see so much attention paid to HCM as a key part of your SAS portfolio - your thoughts? >> I do, I agree with you and I think one of the differentiating points that we just mentioned was that Infor HCM also connects to Infor systems that automate core business processes. So it's not just about those business processes, but also knowing who the people are that work on them and helping companies navigate. So much time is wasted from what we would call tribal knowledge - an employee getting up to speed or figuring out how to navigate inside an organization, particularly a large enterprise. And Infor HCM can help make that easier, but they can do that while attached to a business process so that everything can move faster and more efficiently for the customer. >> I wonder if you could comment, Dan, on this notion of best of breed versus a full suite. It seems like - so for decades, there's been this argument of oh, best of breed point products will sometimes win but full suite, people want a single throat to choke and that integration - It seems like with your micro-vertical strategy you're trying to do both - be both best of breed and have a full suite across the enterprise application portfolio. Is that right, you know, do you feel like you guys are succeeding at that, uh where do you think you fit in that whole spectrum? >> That is correct, and it's one of the things that we're able to do because of our cloud strategy - is to offer the complete suite and the artificial intelligence that comes on top of it. In the past, when there wasn't an artificial intelligence layer, there wasn't the machine learning that needed to draw from all of that data, best of breed individual applications would work. But now that we're trying to pull data together so that you can make more intelligent and you get actionable insights that let you make more intelligent decisions, that requires an integrated suite. And that can be done now in a multi-tenant cloud environment that couldn't be done before. >> The other thing I would observe - we talked about this, John - is - >> I'd also really quick just add that I think that that's proving to be correct in the amount of growth that we're seeing. Infor is significantly outgrowing from a revenue perspective. Oracle, more than forty percent last year, more than double the rate of growth of SAP and our growth rate for cloud applications is up there with work day which is setting the bar for cloud software companies. >> Yeah, that's true, that's a great point. I mean work day has set the bar and this is an example of what was essentially a narrow point product there to, of course, trying to get into other spaces. Of course, SAP and Oracle always have had a large suite. Your strategy has seemed to be working in terms of being a place where a customer can come in and access a lot of different functionality. The other thing that we heard today - a year in - is the Koch Industries investment. I was noticing that you now see Accenture here, you see Grant Thorton, Deloitte- >> Capgemini >> Yeah, Capgemini - these people are taking notice of - I would imagine Koch Industries does a lot of business with those guys and one of the gentlemen from Koch told me last year - said "Hey, we're going to expose these SI's to the Infor opportunity." It seems like it started to happen and I've heard that there's been several large deals that they've helped to catalyze, so it's great to see that presence here. Talk a little bit about the Koch Industries dynamic and what that's brought to the table. >> Well, the Koch relationship for Infor has been so helpful. First, obviously, there's a large infusion of cash from the investment. It was 2.5 billion dollars - one of the largest tech investments that wasn't an acquisition in history. And we're able to use that capital to increase more functionality. Not only that, but Infor has an industrial background. The majority of our customers are in manufacturing or distribution - industries that Koch Industries is a big player in. So not only do we have a great partner, but we have a living lab in one of the world's best and most efficient companies with which to develop our software, implement our software, and test our software. And we've got a willing partner in Koch that can do that and provide a lot of that expertise. >> I was telling Dave that that's what really struck me listening to the keynote was that - yeah - it's this wonderful symbiotic relationship and they gave you money - that's nice, right - but you have an opportunity now to roll out services, products, experiment a little bit. >> We do. >> See how it works within the Koch family, if you will, before you take it out further and so you've got this great test lab at your disposal that you didn't have before. >> And like Infor, Koch is a private company, so we don't feel the same pressure to provide quarterly return to shareholders that public companies do. So we're able to invest more of our revenue in development and R and D in ensuring that our products are going to deliver the best experience and the best functionality for our customers. >> Well, to me, the key for Infor - a key - is you've got a large install base and you're trying to get that install base to come to a more modern, SAS-like, cloud-like platform. To do that, you got to be relevant. So, the stuff like Coleman, the burst acquisition, your micro-verticals - those are all highly relevant. You know, your ability eliminate custom mods because you go that last mile. Highly relevant to companies that have to place a bet. Now, when they have to move to this new world, you know, others are going to try to grab them, so you got to hang on to them. To me, relevance, and showing a road map, and showing an investment, and things like R and D, is critical - your thoughts? >> I agree with you, I think that's the reason that we're seeing those large global system integrators partner with Infor now and develop practices that Accenture and Deloitte, Grant Thornton, and Capgemini, that will implement Infor software at their customers. They're having the demand from the customers that they're working with, including up to the largest of enterprises, for Infor software, just simply because we are able to automate processes and help them get to a level of automation that will let them compete in the digital era. There are companies all over are fearing that they're going to be disrupted by a digital, native competitor or a digitally enabled competitor. And we're looking to help Infor customers become digitally enabled themselves and to be that disruptive competitor in their field. >> Well, Dan, we appreciate the time >> Thank you very much. >> Good seeing you, thanks for having us here. >> Thanks for coming back again. >> Overlooking the show floor, got a great seat - >> Yeah, a lot of activity down there. >> And, uh, good luck with the rest of the show. >> Thank you very much. >> Dan Barnhardt, from Infor back with more. Live on the Cube here from Washington DC at Inforum '18. (bright, electric music)

Published Date : Sep 25 2018

SUMMARY :

Brought to you by Infor. It's now a pleasure to welcome Vice President Good to see you again. because it's so nice. But about coming to Washington. And it's important to be in a city where Of reaching the compliance goal. and the utmost security for customers and we're You as an ISV, you don't have to worry about all that stuff. and functionality that makes a difference for our customers. It also made the point that when we were competitors are starting to come after us this is the place to be. Um, other hard news that you guys had this week - so that they can give better care to their patients We heard on the intro to the keynotes today, He's got the smooth, mellifluous voice. to fetch information and deliver not only the information Now a lot of people like to talk about how - a lot of people to interact with technology at work. that came out of the keynote this morning - anymore, I just ask Alexa to do it, but we But talk about that from the thirty thousand and talent management, and align that to the is it's going to be infused in applications. And Infor has access to data that very few companies have - so much attention paid to HCM as a key part and more efficiently for the customer. Is that right, you know, do you feel like you guys that let you make more intelligent decisions, that that's proving to be correct in the Your strategy has seemed to be working large deals that they've helped to catalyze, infusion of cash from the investment. really struck me listening to the keynote was that - and so you've got this great test lab and the best functionality for our customers. Well, to me, the key for Infor - a key - that they're going to be disrupted Live on the Cube here from Washington DC

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

DavePERSON

0.99+

MicrosoftORGANIZATION

0.99+

DanPERSON

0.99+

AmazonORGANIZATION

0.99+

CharlesPERSON

0.99+

GoogleORGANIZATION

0.99+

Dan BarnhardtPERSON

0.99+

2014DATE

0.99+

USLOCATION

0.99+

JanuaryDATE

0.99+

WashingtonLOCATION

0.99+

AccentureORGANIZATION

0.99+

AWSORGANIZATION

0.99+

EuropeLOCATION

0.99+

Koch IndustriesORGANIZATION

0.99+

OracleORGANIZATION

0.99+

DeloitteORGANIZATION

0.99+

New OrleansLOCATION

0.99+

John WallsPERSON

0.99+

New YorkLOCATION

0.99+

Charles'PERSON

0.99+

Affordable Care ActTITLE

0.99+

KochORGANIZATION

0.99+

JohnPERSON

0.99+

CapgeminiORGANIZATION

0.99+

Washington DCLOCATION

0.99+

last yearDATE

0.99+

twenty percentQUANTITY

0.99+

United StatesLOCATION

0.99+

Washington D.LOCATION

0.99+

DCLOCATION

0.99+

2.5 billion dollarsQUANTITY

0.99+

Asia PacificLOCATION

0.99+

ColemanPERSON

0.99+

Hervan JonesPERSON

0.99+

InforORGANIZATION

0.99+

firstQUANTITY

0.99+

seventy plus percentQUANTITY

0.99+

Latin AmericaLOCATION

0.99+

FirstQUANTITY

0.99+

SAPORGANIZATION

0.99+

more than forty percentQUANTITY

0.99+

this yearDATE

0.99+

OneQUANTITY

0.99+

two thingsQUANTITY

0.98+

CatherinePERSON

0.98+

oneQUANTITY

0.98+

this weekDATE

0.98+

bothQUANTITY

0.98+

thirty thousand footQUANTITY

0.98+

todayDATE

0.98+

ninetiesDATE

0.98+

SASORGANIZATION

0.97+

JohnsonPERSON

0.97+

Grant ThorntonORGANIZATION

0.97+

Pat Casey, ServiceNow | ServiceNow Knowledge18


 

>> Announcer: Live from Las Vegas, it's the Cube. Covering ServiceNow Knowledge 2018. Brought to you by ServiceNow. >> Welcome to day three of Knowledge18. You're watching the Cube, the leader in live tech coverage. Day three is when ServiceNow brings together its audience and talks about its platform, the creators, the developers, the doers get together in the room. Jeff Frick and I, my co-host, we've seen this show now, Jeff, for many, many years. I joked on Twitter today, it's not often you see a full room and this room was packed on day three. Unless Larry Ellison is speaking. Well, Larry Ellison is not here, but Pat Casey is. He's the Senior Vice President of DevOps at ServiceNow and a Cube alum, Pat, great to see you again. >> Absolutely, just glad to be back. >> So, my head is exploding. With all the innovation that's comin' out. I feel like I'm at a AWS re:Invent with Andy Jassy up on stage with all these features that are coming out. But wow, you guys are on it. And part of that is because of the platform. You're able to put out new features, but how's the week going? >> So far it's been great. But you're sort of right, we are super proud of this year. I think there's more new stuff that's valuable for our customers coming out this year than probably the three years prior to this. I mean you got the chat bot designer, and you got some great application innovation, you got Flow Designer, you've got the entire integration suite coming online, and then in addition to that you've got a whole new mobile experience coming out. Just all stuff that our customers can touch. You can go downstairs and see all that and they can get their hands on it. Super exciting. >> So consistent too with the messaging. We've been coming here, I this is our sixth year, with kind of the low-code and no-code vision that Fred had way at the beginning. To let lots of people build great workflows and then to start taking some of these crazy new applications like chat bots and integration platform, pretty innovative. >> Yeah, I think it's a mindset when you get down to it. I mean we, the weird failure mode of technology is technology tends to get built by by technologists. And I do this for a living. There's a failure mode where you design the tool you want to use. And those tend to be programmer tools 'cause they tend to get designed by programmers. It does take an extra mental shift to say no, my user is not me. My user is a different person. I want to build the tool that they want to use. And that sort of user empathy, you know Fred had that in spades. That was his huge, huge, huge strength. Among other things. One of his huge strengths. It's something that we're really trying to keep foreground in the company. And you see that in some of the new products we released as well. It's really aimed at our customers not at our developers. >> The other thing I think that's been consistent in all the interviews we've done, and John talked on the day one keynote one of his kind of three keys to success was try to stay with out of the box as much as you can as a rule, and we've had all the GMs of the various application stacks that you guys have, they've all talked consistently we really try to drive, even as a group our specific requests back into development on the platform level so we can all leverage it. So even though then the vertical applications you guys are building, it's still this drive towards leverage the common platform. >> Yeah, absolutely. And there is, what's the word I'm looking for? There's a lot of value in using the product the way it was shipped. For easiest thing is when it advances or when we ship you new features you can just turn 'em on, and it doesn't conflict with anything else you got going in there. There's always an element of, you know, this is enterprise software. Every customer's a little bit different. GE does not work the same way as Bank of America. So you probably never get away entirely from configuring, but doing the minimum that you can get away with, the minimum that'll let you put your business-specific needs in there, and being really sure of it, you need to do it, it's the right approach to take. The failure mode of technologists, the other one, is we like writing technology. So give me a platform and I'm going to just write stuff. Applying that only when it makes sense to the business is where you really need to be. Especially in this day and age. >> Well I wanted to ask you about that 'cause you guys talk about many applications one platform. But you used to be one platform one app. >> Pat: Yep. >> So as you have more, and more, and more apps, how are you finding it regarding prioritization of features, and capabilities? I imagine the GMs like any company are saying, hey, this is a priority. >> Sure. >> And because you have a platform there's I'm sure a lot more overlap than if you're a stovepipe development organization. But nonetheless you still got to prioritize. Maybe talk about that a little bit. >> Sure, you end up with two different levels of it though. At one level, you tend to want to pick businesses to go into, which you're aligned with the technology stack you have. I don't think we're going to go into video streaming business. It's a good business, but it's not our business. >> Too bad, we could use some of that actually. >> Well, maybe next year. (laughs) But when you get down to it we mostly write enterprise business apps. So HR is an enterprise business app, CSM, SecOps, ITSM, they're all kind of the same general application area. So we don't tend to have something which is totally out to lunch. But you're right in the sense that A, what's important to CSM might be less important to ITSM. And so we do prioritize. And we prioritize partly based on what the perceived benefit across the product line is. If something that a particular BU wants that five other BUs are going to benefit from that's pretty valuable. If only them, not so much. And part of it too is based on how big the BUs are. You know if you're an emerging product line you probably get few less features than like Feryl Huff. Like she has a very big product line. Or Pabla, he has a very big product line. But there's also an over-investment in the emerging stuff. Because you have to invest to build the product lines out. >> The other thing I think is you guys have been such a great opportunity is I just go back to those early Fred interviews with the copy room and the color paper 'cause nobody knows what that is anymore. >> Pat: Yep. >> But workflow just by its very nature lends itself so much to leveraging, AI, and ML, so you've already kind of approached it while trying to make work easier with these great workflow tools, but what an opportunity now to apply AI and machine learning to those things over time. So I don't even have to write the rules and even a big chunk of that workflow that I built will eventually go away for me actually having to interact with it. >> Yeah, there's a second layer to it too, which I'll call out. The workflows between businesses are different. But we have the advantage that we have the data for each of the businesses. So we can train AI on this is the way this particular workflow works at General Electric and use that bot at GE and train a different bot at maybe at Siemens. You know it's still a big industrial firm. It's a different way of doing it. That gives us a really big advantage over people who commingle the data together. Because of our architecture, we can treat every customer uniquely and we can train the automation for the unique workflows for that particular customer. It gives a much more accurate result. >> So thinking about, staying on the theme of machine intelligence for a moment, you're not a household name in the world of AI, so you've done some acquisitions and-- >> Pat: Yep. >> But it's really becoming a fundamental part of your next wave of innovation. As a technologist, and you look out at the landscape, you obviously you see Google, Apple, Facebook, IBM, with Watson, et cetera, et cetera, as sort of the perceived leaders, do you guys aspire to be at that level? Do you need to be? What's the philosophy and strategy with regard to implementing AI in the road map? >> Well if you cast your eyes forward to where we think the future's going to be, I do think there are going to be certain core AI services that they're going to call their volume plays. You need a lot of engineers, a lot of resources, a lot of time to execute them. Really good voice-to-text is an example. And that's getting pretty good. It's almost solved at this point. A general case conversational agent, not solved yet. Even the stuff you see at Google I/O, it's very specialized. It does one thing really well and it's a great demo, but ask it about Russian history, no idea what to talk about. Whereas, maybe you don't know a lot about Russian history, you as a human would at least have something interesting to say. We expect that we will be leveraging other people's core AI services for a lot of stuff out there. Voice-to-text is a good example. There may well be some language parsing that we can do out there. There may be other things we never even thought of. Maybe stuff that'll read text for you and give you back summaries. Those are the kinds of things that we probably won't implement internally. Where you never know, but that's my guess, where you look at where we think we need to write our own code or own our own IP, it's where the domain is specific to our customers. So when I talked about General Electric having a specific workflow, I need to be able to train something specific for that. And if you look at some other things like language processing, there's a grammar problem. Which is a fancy way of saying that the words that you use describing a Cube show are different than the words that I would use describing a trade show. So if I teach a bot to talk about the Cube, it can't talk about trade shows. If you're Amazon, you train your bot to talk in generic language. When you want to actually speak in domain-specific language, it gets a lot harder. It's not good at talking about your show. We think we're going to have value to provide domain-specific language for our customers' individualized domains. I think that's a big investment. >> But you don't have to do it all as well. We saw two actually interesting use cases talking to some of your customers this week. One was the hospital in Australia, I don't know if you're familiar with this, where they're using Alexa as the interface, and everything goes into the ServiceNow platform for the nurses. >> Yep. >> And so that's not really your AI, it's kind of Amazon's AI, that's fine. And the other was Siemens taking some of your data and then doing some stuff in Azure and Watson, although the Watson piece was, my take away was it was kind of a fail, so there's some work to be done there, but customers are going to use different technologies. >> Pat: Oh, they will. >> You have to pick your spots. >> You know we're, as a vendor, we're pretty customer-centric. We love it when you use our technology and we think it's awesome, otherwise we wouldn't sell it. But fundamentally we don't expect to be the only person in the universe. And we're also not, like you've seen us with our chat bot, our chat bot, you can use somebody else's chat client. You can use Slack, you can use Teams, you can use our client, we can use Jabber. It's great. If you were a customer and want to use it, use it. Same thing on the AI front. Even if you look at our chat bot right now, there's the ability to plug in third-party AIs for certain things even today. You can plug it in for language processing. I think out of box is configured for Google, but you can use Amazon, you can use Microsoft if you want to. And it'll parse your language for you at certain steps in there. We're pretty open to partnering on that stuff. >> But you're also adding value on top of those platforms, and that's the key point, right? >> The operating model we have is we want it to be transparent to our customers as to what's going on in the back end. We will make their life easy. And if we're going to make their life easy by behind the scenes, integrating somebody else's technology in there, that's what we're going to do. And for things like language processing, our customers never need to know about that. We know. And the customers might care if they asked because we're not hiding it. But we're not going to make them do that integration. We're going to do it for them, and just they click to turn it on. >> Pat, I want to shift gears a little bit in terms of the human factors point of all this. I laugh, I have an Alexa at home, I have a Google at home, and they send me emails suggesting ways that I should interact with these things that I've never thought of. So as you see kind of an increase in chat bots and you see it increase in things like voice-to-text and these kind of automated systems in the background, how are you finding people's adoption of it? Do they get it? Do the younger folks just get it automatically? Are you able to bury it such where it's just served up without much thought in their proc, 'cause it's really the behavior thing I think's probably a bigger challenge than the technology. >> It is and frankly it's varied by domain. If you look at something like Voice that's getting pretty ubiquitous in the home, it's not that common in a business world. And partly there frankly is just you've got a background noise problem. Engineering-wise, crowded office, someone's going to say Alexa and like nobody even knows what they're talking about. >> Jeff: And then 50 of 'em all-- >> Exactly. There's ways to solve that, but this is actual challenge. >> Right. >> If you look at how people like to interact with technologies, I would argue we've already gone through a paradigm shift that's generational. My generation by default is I get out a laptop. If you're a millennial your default is you get out your phone. You will go to a laptop and the same says I will go to a phone, but that's your default. You see the same thing with how you want to interact. Chat is a very natural thing on the phone. It's something you might do on a full screen, but it's a less common. So you're definitely seeing people shifting over to chat as their preferred interaction paradigm especially as they move onto the phones. Nobody wants to fill out a form on a phone. It's miserable. >> Jeff: Right. >> I wonder if we could, so when Jeff and I have Fred on, we always ask him to break out his telescope. So as the resident technologist, we're going to ask you. And I'm going to ask a bunch of open-ended questions and you can pick whatever ones you want to answer, so the questions are, how far can we take machine intelligence and how far should we take machine intelligence? What are the things that machines can do that humans really can't and vice versa? How will humans and machines come together in the future? >> That's a broad question. I'll say right now that AI is probably a little over-marketed. In that you can build really awesome demos that make it seem like it's thinking. But we're a lot further away from an actual thinking machine, which is aware of itself than I think it would seem from the demos. My kids think Alexa's alive, but my son's nine, right? There's no actual Alexa at the end of it. I doubt that one's going to get solved in my lifetime. I think what we're going to get is a lot better at faking it. So there's the classical the Turing test. The Turing test doesn't require that you be self-aware. The Turing test says that my AI passes the Turing test if you can't tell the difference. And you can do that by faking it really well. So I do think there's going to be a big push there. First level you're seeing it is really in the voice-to-text and the voice assistance. And you're seeing it move from the Alexas into the call centers into the customer service into a lot of those rote interactions. When it's positive it's usually replacing one of those horrible telephone mazes that everybody hates. It gets replaced by a voice assist, and as a customer you're like that is better. My life is better. When it's negative, it might replace a human with a not-so-good chat. The good news on that front is our society seems to have a pretty good immune system on that. When companies have tried to roll out less good experiences that are based on less good AI, we tend to rebel, and go no, no, we don't want that. And so I haven't seen that been all that successful. You could imagine a model where people were like, I'm going to roll out something that's worse but cheaper. And I haven't seen that happening. Usually when the AI rolls out it's doing it to be better at something for the consumer perspective. >> That's great. I mean we were talking earlier, it's very hard to predict. >> Pat: Of course. >> I mean who would have predicted that Alexa would have emerged as a leader in NLP or that, and we said this yesterday, that the images of cats on the internet would lead to facial recognition. >> I think Alexa is one example though. The thing I think's even more amazing is the Comcast Voice Remote. Because I used to be in that business. I'm like, how could you ever have a voice remote while you're watching a TV and watching a movie with the sound interaction? And the fact that now they've got the integration as a real nice consumer experience with YouTube and Netflix, if I want to watch a show, and I don't know where it is, HBO, Netflix, Comcast, YouTube, I just tell that Comcast remote find me Chris Rock the Tamborine man was his latest one, and boom there it comes. >> There's a school of thought out there, which is actually pretty widespread that feels like the voice technologies have actually been a bit of a fail from a pure technologies standpoint. In that for all the energy that we've spent on them, they're sort of stuck as a niche application. There's like Alexa, my kids talk to Alexa at home, you can talk to Siri, but when these technologies were coming online, I think we thought that they would replace hard keyboard interactions to a greater degree than they have. I think there's actually a bit of a learning in there that people are not as, we don't mandatorily, I'm not sure if that's a real word, but we don't need to go oral. There's actually a need for non-oral interfaces. And I do think that's a big learning for a lot of the technology is that there's a variety of interface paradigms that actual humans want to use, and forcing people into any one of them is just not the right approach. You have to, right now I want to talk, tomorrow I want to text, I might want to make hand gestures another time. You're mostly a visual media, obviously there's talking too, but it's not radio, right? >> You're absolutely right. That's a great point because when you're on a plane, you don't want to be interacting in a voice. And other times that there's background noise that will screw up the voice reactions, but clearly there's been a lot of work in Silicon Valley and other places on a different interface and it needs to be there. I don't know if neural will happen in our lifetime. I wanted to give you some props on the DevOps announcement that you sort of pre-announced. >> We did. >> It's, you know CJ looked like he was a little upset there. Was that supposed to be his announcement? >> In my version of the script, I announced it and he commented on my announcement. >> It's your baby, come on. So I love the way you kind of laid out the DevOps and kind of DevOps 101 for the audience. Bringing together the plan, dev, test, deploy, and operate. And explaining the DevOps problem. You really didn't go into the dev versus the ops, throwing it over the wall, but people I think generally understand that. But you announced solving a different problem. 500 DevOps tools out there and it gets confusing. We've talked to a bunch of customers about that. They're super excited to get that capability. >> Well, we're super, it's one of those cases where you have an epiphany, 'cause we solved it internally. >> Dave: Right. >> And we just ran it for like three years, and we kept hearing customers say, hey, what are you guys going to do about DevOps? And we're never like quite sure what they mean, 'cause you're like, well what do you mean? Do you want like a planning tool? And then probably about a year ago we sort of had this epiphany of, oh, our customers have exactly the same problem we do. Duh. And so from that it kind of led us to go down the product road of how can we build this kind of management layer? But if you look across our customer base and the industry, DevOps is almost a rebellion. It's a rebellion against the waterfall development model which has dominated things. It's a rebellion against that centralized control. And in a sense it's good because there's a lot of silliness that comes out of those formal development methodologies. Slow everybody down, stupid bureaucracy in there. But when you apply it in an enterprise, okay some of the stuff in there, you actually did need that. And you kind of throw the baby out with the bathwater. So adding that kind of enterprise DevOps layer back in, you still do get that speed. Your developers get to iterate, you get the automated tests, you get the operating model, but you still don't lose those kind of key things you need at the top enterprise levels. >> And most of the customers we've talked to this week have straight up said, look, we do waterfall for certain things, and we're not going to stop doing waterfall, but some of the new cool stuff, you know. (laughs) >> Well if you look at us, it's at the, if you take the microscope far enough away from ServiceNow, we're waterfall in that every six months we release. >> Dave: Yeah, right. >> But if you're an engineer, we're iterating in 24-hour cycles for you. 24-hour cycles, two-week sprints. It's a very different model when you're in the trenches than from the customer perspective. >> And then I think that's the more important part of the DevOps story. Again, there's the technology and the execution detail which you outlined, but it's really more the attitudinal way that you approach problems. We don't try to solve the big problems. We try to keep moving down the road, moving down the road. We have a vision of where we want to get, but let's just keep moving down the road, moving down the road. So it's a very, like you said, cumbersome MRD and PRD and all those kind of classic things that were just too slow for 2018. >> Nobody goes into technology to do paperwork. You go into technology to build things to create, it's a creative outlet. So the more time you can spend doing that, and the less time you're spending on overhead, the happier you're going to be. And if you fundamentally like doing administration, you should move into management. That's great. That's the right job for you. But if you're a hands on the keyboard engineer, you probably want to have your hands on the keyboard, engineering. That's what you do. >> Let's leave on a last thought around the platform. I mentioned Andy Jassy before and AWS. He talks about the flywheel effect. Clearly we're seeing the power of the platform and it feels like there's the developer analog to operating leverage. And that flywheel effect going from your perspective. What can we expect going forward? >> Well, I mean for us there's two parallel big investment vectors. One is clearly we want to make the platform better for our apps. And you asked earlier about how do we prioritize from our various BUs, and that is driving platform enhancements. But the second layer is, this is the platform our customers are using to automate their entire workflow across their whole organization. So there's a series of stuff we're doing there to make that easier for them. In a lot of cases, less about new capabilities. You look at a lot of our investments, it's more about taking something that previously was hard, but possible, and making it easier and still possible. And in doing that, that's been my experience, is Fred Luddy's experience, the easier you can make something, the more successful people will be with it. And Fred had an insight that you could almost over-simplify it sometimes. You could take something which had 10 features and was hard to use, and replace with something that had seven features and was easy to use, everyone would be super happy. At some level, that's the iPhone story, right? I could do more on my Blackberry, it just took me an hour of reading the documentation to figure out how. >> Both: Right, right. >> But I still miss the little side wheel. (laughs) >> Love that side wheel. All right, Pat, listen thanks very much for coming. We are humbled by your humility. You are like a rock star in this community, and congratulations on all this success and really thanks for coming back on the Cube. >> Thank you very much. It's been a pleasure meeting you guys again. >> All right, great. Okay, keep it right there, everybody. We'll be back with our next guest. You're watching the Cube live from ServiceNow Knowledge K18, #know18. We'll be right back. (upbeat music)

Published Date : May 10 2018

SUMMARY :

Brought to you by ServiceNow. great to see you again. And part of that is because of the platform. I mean you got the chat bot designer, and then to start taking some of these And you see that in some of the new products to stay with out of the box as much as you can to the business is where you really need to be. But you used to be one platform one app. So as you have more, and more, and more apps, And because you have a platform At one level, you tend to want to pick businesses But when you get down to it we mostly write The other thing I think is you guys have been and even a big chunk of that workflow for each of the businesses. As a technologist, and you look out at the landscape, Even the stuff you see at Google I/O, But you don't have to do it all as well. And the other was Siemens taking some of your data You can use Slack, you can use Teams, And the customers might care if they asked in the background, how are you finding people's If you look at something like Voice There's ways to solve that, but this is actual challenge. You see the same thing with how you want to interact. and you can pick whatever ones you want to answer, passes the Turing test if you can't tell the difference. I mean we were talking earlier, that the images of cats on the internet I'm like, how could you ever have a voice remote In that for all the energy that we've spent on them, that you sort of pre-announced. Was that supposed to be his announcement? and he commented So I love the way you kind of laid out the DevOps where you have an epiphany, 'cause we solved it internally. Your developers get to iterate, you get the but some of the new cool stuff, you know. Well if you look at us, it's at the, than from the customer perspective. So it's a very, like you said, cumbersome So the more time you can spend doing that, And that flywheel effect going from your perspective. is Fred Luddy's experience, the easier you can But I still miss the little side wheel. and really thanks for coming back on the Cube. It's been a pleasure meeting you guys again. We'll be back with our next guest.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JeffPERSON

0.99+

GoogleORGANIZATION

0.99+

General ElectricORGANIZATION

0.99+

Jeff FrickPERSON

0.99+

FacebookORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Larry EllisonPERSON

0.99+

AppleORGANIZATION

0.99+

Pat CaseyPERSON

0.99+

MicrosoftORGANIZATION

0.99+

DavePERSON

0.99+

AWSORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

ComcastORGANIZATION

0.99+

JohnPERSON

0.99+

Andy JassyPERSON

0.99+

SiemensORGANIZATION

0.99+

GEORGANIZATION

0.99+

FredPERSON

0.99+

twoQUANTITY

0.99+

10 featuresQUANTITY

0.99+

Bank of AmericaORGANIZATION

0.99+

2018DATE

0.99+

24-hourQUANTITY

0.99+

AustraliaLOCATION

0.99+

three yearsQUANTITY

0.99+

YouTubeORGANIZATION

0.99+

SiriTITLE

0.99+

Silicon ValleyLOCATION

0.99+

HBOORGANIZATION

0.99+

two-weekQUANTITY

0.99+

OneQUANTITY

0.99+

NetflixORGANIZATION

0.99+

sixth yearQUANTITY

0.99+

50QUANTITY

0.99+

seven featuresQUANTITY

0.99+

second layerQUANTITY

0.99+

nineQUANTITY

0.99+

CJPERSON

0.99+

next yearDATE

0.99+

PatPERSON

0.99+

Fred LuddyPERSON

0.99+

todayDATE

0.99+

ServiceNowORGANIZATION

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

BothQUANTITY

0.99+

AlexaTITLE

0.99+

Las VegasLOCATION

0.99+

yesterdayDATE

0.99+

tomorrowDATE

0.98+

First levelQUANTITY

0.98+

ServiceNowTITLE

0.98+

this weekDATE

0.98+

this yearDATE

0.98+

Chris RockPERSON

0.98+

BlackberryORGANIZATION

0.98+

AlexasTITLE

0.98+

eachQUANTITY

0.98+

one exampleQUANTITY

0.98+

DevOpsTITLE

0.97+

one platformQUANTITY

0.97+

one levelQUANTITY

0.97+

WatsonTITLE

0.97+

day threeQUANTITY

0.97+

AzureTITLE

0.97+

oneQUANTITY

0.97+

James Kobielus, Wikibon | The Skinny on Machine Intelligence


 

>> Announcer: From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. >> In the early days of big data and Hadoop, the focus was really on operational efficiency where ROI was largely centered on reduction of investment. Fast forward 10 years and you're seeing a plethora of activity around machine learning, and deep learning, and artificial intelligence, and deeper business integration as a function of machine intelligence. Welcome to this Cube conversation, The Skinny on Machine Intelligence. I'm Dave Vellante and I'm excited to have Jim Kobielus here up from the District area. Jim, great to see you. Thanks for coming into the office today. >> Thanks a lot, Dave, yes great to be here in beautiful Marlboro, Massachusetts. >> Yes, so you know Jim, when you think about all the buzz words in this big data business, I have to ask you, is this just sort of same wine, new bottle when we talk about all this AI and machine intelligence stuff? >> It's actually new wine. But of course there's various bottles and they have different vintages, and much of that wine is still quite tasty, and let me just break it out for you, the skinny on machine intelligence. AI as a buzzword and as a set of practices really goes back of course to the early post-World War II era, as we know Alan Turing and the Imitation Game and so forth. There are other developers, theorists, academics in the '40s and the '50s and '60s that pioneered in this field. So we don't want to give Alan Turing too much credit, but he was clearly a mathematician who laid down the theoretical framework for much of what we now call Artificial Intelligence. But when you look at Artificial Intelligence as a ever-evolving set of practices, where it began was in an area that focused on deterministic rules, rule-driven expert systems, and that was really the state of the art of AI for a long, long time. And so you had expert systems in a variety of areas that became useful or used in business, and science, and government and so forth. Cut ahead to the turn of the millennium, we are now in the 21st century, and what's different, the new wine, is big data, larger and larger data sets that can reveal great insights, patterns, correlations that might be highly useful if you have the right statistical modeling tools and approaches to be able to surface up these patterns in an automated or semi-automated fashion. So one of the core areas is what we now call machine learning, which really is using statistical models to infer correlations, anomalies, trends, and so forth in the data itself, and machine learning, the core approach for machine learning is something called Artificial Neural Networks, which is essentially modeling a statistical model along the lines of how, at a very high level, the nervous system is made up, with neurons connected by synapses, and so forth. It's an analog in statistical modeling called a perceptron. The whole theoretical framework of perceptrons actually got started in the 1950s with the first flush of AI, but didn't become a practical reality until after the turn of this millennium, really after the turn of this particular decade, 2010, when we started to see not only very large big data sets emerge and new approaches for managing it all, like Hadoop, come to the fore. But we've seen artificial neural nets get more sophisticated in terms of their capabilities, and a new approach for doing machine learning, artificial neural networks, with deeper layers of perceptrons, neurons, called deep learning has come to the fore. With deep learning, you have new algorithms like convolutional neural networks, recurrent neural networks, generative adversarial neural networks. These are different ways of surfacing up higher level abstractions in the data, for example for face recognition and object recognition, voice recognition and so forth. These all depend on this new state of the art for machine learning called deep learning. So what we have now in the year 2017 is we have quite a mania for all things AI, much of it is focused on deep learning, much of it is focused on tools that your average data scientist or your average developer increasingly can use and get very productive with and build these models and train and test them, and deploy them into working applications like going forward, things like autonomous vehicles would be impossible without this. >> Right, and we'll get some of that. But so you're saying that machine learning is essentially math that infers patterns from data. And math, it's new math, math that's been around for awhile or. >> Yeah, and inferring patterns from data has been done for a long time with software, and we have some established approaches that in many ways predate the current vogue for neural networks. We have support vector machines, and decision trees, and Bayesian logic. These are different ways of approaches statistical for inferring patterns, correlations in the data. They haven't gone away, they're a big part of the overall AI space, but it's a growing area that I've only skimmed the surface of. >> And they've been around for many many years, like SVM for example. Okay, now describe further, add some color to deep learning. You sort of painted a picture of this sort of deep layers of these machine learning algorithms and this network with some depth to it, but help us better understand the difference between machine learning and deep learning, and then ultimately AI. >> Yeah, well with machine learning generally, you know, inferring patterns from data that I said, artificial neural networks of which the deep learning networks are one subset. Artificial neural networks can be two or more layers of perceptrons or neurons, they have relationship to each other in terms of their activation according to various mathematical functions. So when you look at an artificial neural network, it basically does very complex math equations through a combination of what they call scalar functions, like multiplication and so forth, and then you have these non-linear functions, like cosine and so forth, tangent, all that kind of math playing together in these deep structures that are triggered by data, data input that's processed according to activation functions that set weights and reset the weights among all the various neural processing elements, that ultimately output something, the insight or the intelligence that you're looking for, like a yes or no, is this a face or not a face, that these incoming bits are presenting. Or it might present output in terms of categories. What category of face is this, a man, a woman, a child, or whatever. What I'm getting at is that so deep learning is more layers of these neural processing elements that are specialized to various functions to be able to abstract higher level phenomena from the data, it's not just, "Is this a face," but if it's a scene recognition deep learning network, it might recognize that this is a face that corresponds to a person named Dave who also happens to be the father in the particular family scene, and by the way this is a family scene that this deep learning network is able to ascertain. What I'm getting at is those are the higher level abstractions that deep learning algorithms of various sorts are built to identify in an automated way. >> Okay, and these in your view all fit under the umbrella of artificial intelligence, or is that sort of an uber field that we should be thinking of. >> Yeah, artificial intelligence as the broad envelope essentially refers to any number of approaches that help machines to think like humans, essentially. When you say, "Think like humans," what does that mean actually? To do predictions like humans, to look for anomalies or outliers like a human might, you know separate figure from ground for example in a scene, to identify the correlations or trends in a given scene. Like I said, to do categorization or classification based on what they're seeing in a given frame or what they're hearing in a given speech sample. So all these cognitive processes just skim the surface, or what AI is all about, automating to a great degree. When I say cognitive, but I'm also referring to affective like emotion detection, that's another set of processes that goes on in our heads or our hearts, that AI based on deep learning and so forth is able to do depending on different types of artificial neural networks are specialized particular functions, and they can only perform these functions if A, they've been built and optimized for those functions, and B, they have been trained with actual data from the phenomenon of interest. Training the algorithms with the actual data to determine how effective the algorithms are is the key linchpin of the process, 'cause without training the algorithms you don't know if the algorithm is effective for its intended purpose, so in Wikibon what we're doing is in the whole development process, DevOps cycle, for all things AI, training the models through a process called supervised learning is absolutely an essential component of ascertaining the quality of the network that you've built. >> So that's the calibration and the iteration to increase the accuracy, and like I say, the quality of the outcome. Okay, what are some of the practical applications that you're seeing for AI, and ML, and DL. >> Well, chat bots, you know voice recognition in general, Siri and Alexa, and so forth. Without machine learning, without deep learning to do speech recognition, those can't work, right? Pretty much in every field, now for example, IT service management tools of all sorts. When you have a large network that's logging data at the server level, at the application level and so forth, those data logs are too large and too complex and changing too fast for humans to be able to identify the patterns related to issues and faults and incidents. So AI, machine learning, deep learning is being used to fathom those anomalies and so forth in an automated fashion to be able to alert a human to take action, like an IT administrator, or to be able to trigger a response work flow, either human or automated. So AI within IT service management, hot hot topic, and we're seeing a lot of vendors incorporate that capability into their tools. Like I said, in the broad world we live in in terms of face recognition and Facebook, the fact is when I load a new picture of myself or my family or even with some friends or brothers in it, Facebook knows lickity-split whether it's my brother Tom or it's my wife or whoever, because of face recognition which obviously depends, well it's not obvious to everybody, depends on deep learning algorithms running inside Facebook's big data network, big data infrastructure. They're able to immediately know this. We see this all around us now, speech recognition, face recognition, and we just take it for granted that it's done, but it's done through the magic of AI. >> I want to get to the development angle scenario that you specialize in. Part of the reason why you came to Wikibon is to really focus on that whole application development angle. But before we get there, I want to follow the data for a bit 'cause you mentioned that was really the catalyst for the resurgence in AI, and last week at the Wikibon research meeting we talked about this three-tiered model. Edge, as edge piece, and then something in the middle which is this aggregation point for all this edge data, and then cloud which is where I guess all the deep modeling occurs, so sort of a three-tier model for the data flow. >> John: Yes. >> So I wonder if you could comment on that in the context of AI, it means more data, more I guess opportunities for machine learning and digital twins, and all this other cool stuff that's going on. But I'm really interested in how that is going to affect the application development and the programming model. John Farrier has a phrase that he says that, "Data is the new development kit." Well, if you got all this data that's distributed all over the place, that changes the application development model, at least you think it does. So I wonder if you could comment on that edge explosion, the data explosion as a result, and what it means for application development. >> Right, so more and more deep learning algorithms are being pushed to edge devices, by that I mean smartphones, and smart appliances like the ones that incorporate Alexa and so forth. And so what we're talking about is the algorithms themselves are being put into CPUs and FPGAs and ASICs and GPUs. All that stuff's getting embedded in everything that we're using, everything's that got autonomous, more and more devices have the ability if not to be autonomous in terms of making decisions, independent of us, or simply to serve as augmentation vehicles for our own whatever we happen to be doing thanks to the power of deep learning at the client. Okay, so when deep learning algorithms are embedded in say an internet of things edge device, what the deep learning algorithms are doing is A, they're ingesting the data through the sensors of that device, B, they're making inferences, deep learning algorithmic-driven inferences, based on that data. It might be speech recognition, face recognition, environmental sensing and being able to sense geospatially where you are and whether you're in a hospitable climate for whatever. And then the inferences might drive what we call actuation. Now in the autonomous vehicle scenario, the autonomous vehicle is equipped with all manner of sensors in terms of LiDAR and sonar and GPS and so forth, and it's taking readings all the time. It's doing inferences that either autonomously or in conjunction with inferences that are being made through deep learning and machine learning algorithms that are executing in those intermediary hubs like you described, or back in the cloud, or in a combination of all of that. But ultimately, the results of all those analytics, all those deep learning models, feed the what we call actuation of the car itself. Should it stop, should it put on the brakes 'cause it's about to hit a wall, should it turn right, should it turn left, should it slow down because it happened to have entered a new speed zone or whatever. All of the decisions, the actions that the edge device, like a car would be an edge device in this scenario, are being driven by evermore complex algorithms that are trained by data. Now, let's stay with the autonomous vehicle because that's an extreme case of a very powerful edge device. To train an autonomous vehicle you need of course lots and lots of data that's acquired from possibly a prototype that you, a Google or a Tesla, or whoever you might be, have deployed into the field or your customers are using, B, proving grounds like there's one out by my stomping ground out in Ann Arbor, a proving ground for the auto industry for self-driving vehicles and gaining enough real training data based on the operation of these vehicles in various simulated scenarios, and so forth. This data is used to build and iterate and refine the algorithms, the deep learning models that are doing the various operations of not only the vehicles in isolation but the vehicles operating as a fleet within an entire end to end transportation system. So what I'm getting at, is if you look at that three-tier model, then the edge device is the car, it's running under its own algorithms, the middle tier the hub might be a hub that's controlling a particular zone within a traffic system, like in my neck of the woods it might be a hub that's controlling congestion management among self-driving vehicles in eastern Fairfax County, Virginia. And then the cloud itself might be managing an entire fleet of vehicles, let's say you might have an entire fleet of vehicles under the control of say an Uber, or whatever is managing its own cars from a cloud-based center. So when you look at the tiering model that analytics, deep learning analytics is being performed, increasingly it will be for various, not just self-driving vehicles, through this tiered model, because the edge device needs to make decisions based on local data. The hub needs to make decisions based on a wider view of data across a wider range of edge entities. And then the cloud itself has responsibility or visibility for making deep learning driven determinations for some larger swath. And the cloud might be managing both the deep learning driven edge devices, as well as monitoring other related systems that self-driving network needs to coordinate with, like the government or whatever, or police. >> So envisioning that three-tier model then, how does the programming paradigm change and evolve as a result of that. >> Yeah, the programming paradigm is the modeling itself, the building and the training and the iterating the models generally will stay centralized, meaning to do all these functions, I mean to do modeling and training and iteration of these models, you need teams of data scientists and other developers who are both adept as to statistical modeling, who are adept at acquiring the training data, at labeling it, labeling is an important function there, and who are adept at basically developing and deploying one model after another in an iterative fashion through DevOps, through a standard release pipeline with version controls, and so forth built in, the governance built in. And that's really it needs to be a centralized function, and it's also very compute and data intensive, so you need storage resources, you need large clouds full of high performance computing, and so forth. Be able to handle these functions over and over. Now the edge devices themselves will feed in the data in just the data that is fed into the centralized platform where the training and the modeling is done. So what we're going to see is more and more centralized modeling and training with decentralized execution of the actual inferences that are driven by those models is the way it works in this distributive environment. >> It's the Holy Grail. All right, Jim, we're out of time but thanks very much for helping us unpack and giving us the skinny on machine learning. >> John: It's a fat stack. >> Great to have you in the office and to be continued. Thanks again. >> John: Sure. >> All right, thanks for watching everybody. This is Dave Vellante with Jim Kobelius, and you're watching theCUBE at the Marlboro offices. See ya next time. (upbeat music)

Published Date : Oct 18 2017

SUMMARY :

Announcer: From the SiliconANGLE Media office Thanks for coming into the office today. Thanks a lot, Dave, yes great to be here in beautiful So one of the core areas is what we now call math that infers patterns from data. that I've only skimmed the surface of. the difference between machine learning might recognize that this is a face that corresponds to a of artificial intelligence, or is that sort of an Training the algorithms with the actual data to determine So that's the calibration and the iteration at the server level, at the application level and so forth, Part of the reason why you came to Wikibon is to really all over the place, that changes the application development devices have the ability if not to be autonomous in terms how does the programming paradigm change and so forth built in, the governance built in. It's the Holy Grail. Great to have you in the office and to be continued. and you're watching theCUBE at the Marlboro offices.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JohnPERSON

0.99+

JimPERSON

0.99+

Jim KobeliusPERSON

0.99+

DavePERSON

0.99+

Jim KobielusPERSON

0.99+

Dave VellantePERSON

0.99+

FacebookORGANIZATION

0.99+

John FarrierPERSON

0.99+

GoogleORGANIZATION

0.99+

21st centuryDATE

0.99+

James KobielusPERSON

0.99+

TeslaORGANIZATION

0.99+

Alan TuringPERSON

0.99+

UberORGANIZATION

0.99+

SiriTITLE

0.99+

twoQUANTITY

0.99+

WikibonORGANIZATION

0.99+

last weekDATE

0.99+

AlexaTITLE

0.99+

MarlboroLOCATION

0.99+

TomPERSON

0.99+

Boston, MassachusettsLOCATION

0.99+

10 yearsQUANTITY

0.98+

Ann ArborLOCATION

0.98+

1950sDATE

0.98+

bothQUANTITY

0.97+

todayDATE

0.97+

Marlboro, MassachusettsLOCATION

0.97+

oneQUANTITY

0.96+

2017DATE

0.95+

three-tierQUANTITY

0.95+

2010DATE

0.95+

World War IIEVENT

0.95+

first flushQUANTITY

0.94+

three-tier modelQUANTITY

0.93+

Alan TuringTITLE

0.88+

'50sDATE

0.88+

eastern Fairfax County, VirginiaLOCATION

0.87+

The Skinny on Machine IntelligenceTITLE

0.87+

WikibonTITLE

0.87+

one modelQUANTITY

0.86+

'40sDATE

0.85+

CubeORGANIZATION

0.84+

DevOpsTITLE

0.83+

three-tieredQUANTITY

0.82+

one subsetQUANTITY

0.81+

The SkinnyORGANIZATION

0.81+

'60sDATE

0.8+

Imitation GameTITLE

0.79+

more layersQUANTITY

0.74+

theCUBEORGANIZATION

0.73+

SiliconANGLE MediaORGANIZATION

0.72+

post-DATE

0.56+

decadeDATE

0.46+

Wikibon Presents: Software is Eating the Edge | The Entangling of Big Data and IIoT


 

>> So as folks make their way over from Javits I'm going to give you the least interesting part of the evening and that's my segment in which I welcome you here, introduce myself, lay out what what we're going to do for the next couple of hours. So first off, thank you very much for coming. As all of you know Wikibon is a part of SiliconANGLE which also includes theCUBE, so if you look around, this is what we have been doing for the past couple of days here in the TheCUBE. We've been inviting some significant thought leaders from over on the show and in incredibly expensive limousines driven them up the street to come on to TheCUBE and spend time with us and talk about some of the things that are happening in the industry today that are especially important. We tore it down, and we're having this party tonight. So we want to thank you very much for coming and look forward to having more conversations with all of you. Now what are we going to talk about? Well Wikibon is the research arm of SiliconANGLE. So we take data that comes out of TheCUBE and other places and we incorporated it into our research. And work very closely with large end users and large technology companies regarding how to make better decisions in this incredibly complex, incredibly important transformative world of digital business. What we're going to talk about tonight, and I've got a couple of my analysts assembled, and we're also going to have a panel, is this notion of software is eating the Edge. Now most of you have probably heard Marc Andreessen, the venture capitalist and developer, original developer of Netscape many years ago, talk about how software's eating the world. Well, if software is truly going to eat the world, it's going to eat at, it's going to take the big chunks, big bites at the Edge. That's where the actual action's going to be. And what we want to talk about specifically is the entangling of the internet or the industrial internet of things and IoT with analytics. So that's what we're going to talk about over the course of the next couple of hours. To do that we're going to, I've already blown the schedule, that's on me. But to do that I'm going to spend a couple minutes talking about what we regard as the essential digital business capabilities which includes analytics and Big Data, and includes IIoT and we'll explain at least in our position why those two things come together the way that they do. But I'm going to ask the august and revered Neil Raden, Wikibon analyst to come on up and talk about harvesting value at the Edge. 'Cause there are some, not now Neil, when we're done, when I'm done. So I'm going to ask Neil to come on up and we'll talk, he's going to talk about harvesting value at the Edge. And then Jim Kobielus will follow up with him, another Wikibon analyst, he'll talk specifically about how we're going to take that combination of analytics and Edge and turn it into the new types of systems and software that are going to sustain this significant transformation that's going on. And then after that, I'm going to ask Neil and Jim to come, going to invite some other folks up and we're going to run a panel to talk about some of these issues and do a real question and answer. So the goal here is before we break for drinks is to create a community feeling within the room. That includes smart people here, smart people in the audience having a conversation ultimately about some of these significant changes so please participate and we look forward to talking about the rest of it. All right, let's get going! What is digital business? One of the nice things about being an analyst is that you can reach back on people who were significantly smarter than you and build your points of view on the shoulders of those giants including Peter Drucker. Many years ago Peter Drucker made the observation that the purpose of business is to create and keep a customer. Not better shareholder value, not anything else. It is about creating and keeping your customer. Now you can argue with that, at the end of the day, if you don't have customers, you don't have a business. Now the observation that we've made, what we've added to that is that we've made the observation that the difference between business and digital business essentially is one thing. That's data. A digital business uses data to differentially create and keep customers. That's the only difference. If you think about the difference between taxi cab companies here in New York City, every cab that I've been in in the last three days has bothered me about Uber. The reason, the difference between Uber and a taxi cab company is data. That's the primary difference. Uber uses data as an asset. And we think this is the fundamental feature of digital business that everybody has to pay attention to. How is a business going to use data as an asset? Is the business using data as an asset? Is a business driving its engagement with customers, the role of its product et cetera using data? And if they are, they are becoming a more digital business. Now when you think about that, what we're really talking about is how are they going to put data to work? How are they going to take their customer data and their operational data and their financial data and any other kind of data and ultimately turn that into superior engagement or improved customer experience or more agile operations or increased automation? Those are the kinds of outcomes that we're talking about. But it is about putting data to work. That's fundamentally what we're trying to do within a digital business. Now that leads to an observation about the crucial strategic business capabilities that every business that aspires to be more digital or to be digital has to put in place. And I want to be clear. When I say strategic capabilities I mean something specific. When you talk about, for example technology architecture or information architecture there is this notion of what capabilities does your business need? Your business needs capabilities to pursue and achieve its mission. And in the digital business these are the capabilities that are now additive to this core question, ultimately of whether or not the company is a digital business. What are the three capabilities? One, you have to capture data. Not just do a good job of it, but better than your competition. You have to capture data better than your competition. In a way that is ultimately less intrusive on your markets and on your customers. That's in many respects, one of the first priorities of the internet of things and people. The idea of using sensors and related technologies to capture more data. Once you capture that data you have to turn it into value. You have to do something with it that creates business value so you can do a better job of engaging your markets and serving your customers. And that essentially is what we regard as the basis of Big Data. Including operations, including financial performance and everything else, but ultimately it's taking the data that's being captured and turning it into value within the business. The last point here is that once you have generated a model, or an insight or some other resource that you can act upon, you then have to act upon it in the real world. We call that systems of agency, the ability to enact based on data. Now I want to spend just a second talking about systems of agency 'cause we think it's an interesting concept and it's something Jim Kobielus is going to talk about a little bit later. When we say systems of agency, what we're saying is increasingly machines are acting on behalf of a brand. Or systems, combinations of machines and people are acting on behalf of the brand. And this whole notion of agency is the idea that ultimately these systems are now acting as the business's agent. They are at the front line of engaging customers. It's an extremely rich proposition that has subtle but crucial implications. For example I was talking to a senior decision maker at a business today and they made a quick observation, they talked about they, on their way here to New York City they had followed a woman who was going through security, opened up her suitcase and took out a bird. And then went through security with the bird. And the reason why I bring this up now is as TSA was trying to figure out how exactly to deal with this, the bird started talking and repeating things that the woman had said and many of those things, in fact, might have put her in jail. Now in this case the bird is not an agent of that woman. You can't put the woman in jail because of what the bird said. But increasingly we have to ask ourselves as we ask machines to do more on our behalf, digital instrumentation and elements to do more on our behalf, it's going to have blow back and an impact on our brand if we don't do it well. I want to draw that forward a little bit because I suggest there's going to be a new lifecycle for data. And the way that we think about it is we have the internet or the Edge which is comprised of things and crucially people, using sensors, whether they be smaller processors in control towers or whether they be phones that are tracking where we go, and this crucial element here is something that we call information transducers. Now a transducer in a traditional sense is something that takes energy from one form to another so that it can perform new types of work. By information transducer I essentially mean it takes information from one form to another so it can perform another type of work. This is a crucial feature of data. One of the beauties of data is that it can be used in multiple places at multiple times and not engender significant net new costs. It's one of the few assets that you can say about that. So the concept of an information transducer's really important because it's the basis for a lot of transformations of data as data flies through organizations. So we end up with the transducers storing data in the form of analytics, machine learning, business operations, other types of things, and then it goes back and it's transduced, back into to the real world as we program the real world and turning into these systems of agency. So that's the new lifecycle. And increasingly, that's how we have to think about data flows. Capturing it, turning it into value and having it act on our behalf in front of markets. That could have enormous implications for how ultimately money is spent over the next few years. So Wikibon does a significant amount of market research in addition to advising our large user customers. And that includes doing studies on cloud, public cloud, but also studies on what's happening within the analytics world. And if you take a look at it, what we basically see happening over the course of the next few years is significant investments in software and also services to get the word out. But we also expect there's going to be a lot of hardware. A significant amount of hardware that's ultimately sold within this space. And that's because of something that we call true private cloud. This concept of ultimately a business increasingly being designed and architected around the idea of data assets means that the reality, the physical realities of how data operates, how much it costs to store it or move it, the issues of latency, the issues of intellectual property protection as well as things like the regulatory regimes that are being put in place to govern how data gets used in between locations. All of those factors are going to drive increased utilization of what we call true private cloud. On premise technologies that provide the cloud experience but act where the data naturally needs to be processed. I'll come a little bit more to that in a second. So we think that it's going to be a relatively balanced market, a lot of stuff is going to end up in the cloud, but as Neil and Jim will talk about, there's going to be an enormous amount of analytics that pulls an enormous amount of data out to the Edge 'cause that's where the action's going to be. Now one of the things I want to also reveal to you is we've done a fair amount of data, we've done a fair amount of research around this question of where or how will data guide decisions about infrastructure? And in particular the Edge is driving these conversations. So here is a piece of research that one of our cohorts at Wikibon did, David Floyer. Taking a look at IoT Edge cost comparisons over a three year period. And it showed on the left hand side, an example where the sensor towers and other types of devices were streaming data back into a central location in a wind farm, stylized wind farm example. Very very expensive. Significant amounts of money end up being consumed, significant resources end up being consumed by the cost of moving the data from one place to another. Now this is even assuming that latency does not become a problem. The second example that we looked at is if we kept more of that data at the Edge and processed at the Edge. And literally it is a 85 plus percent cost reduction to keep more of the data at the Edge. Now that has enormous implications, how we think about big data, how we think about next generation architectures, et cetera. But it's these costs that are going to be so crucial to shaping the decisions that we make over the next two years about where we put hardware, where we put resources, what type of automation is possible, and what types of technology management has to be put in place. Ultimately we think it's going to lead to a structure, an architecture in the infrastructure as well as applications that is informed more by moving cloud to the data than moving the data to the cloud. That's kind of our fundamental proposition is that the norm in the industry has been to think about moving all data up to the cloud because who wants to do IT? It's so much cheaper, look what Amazon can do. Or what AWS can do. All true statements. Very very important in many respects. But most businesses today are starting to rethink that simple proposition and asking themselves do we have to move our business to the cloud, or can we move the cloud to the business? And increasingly what we see happening as we talk to our large customers about this, is that the cloud is being extended out to the Edge, we're moving the cloud and cloud services out to the business. Because of economic reasons, intellectual property control reasons, regulatory reasons, security reasons, any number of other reasons. It's just a more natural way to deal with it. And of course, the most important reason is latency. So with that as a quick backdrop, if I may quickly summarize, we believe fundamentally that the difference today is that businesses are trying to understand how to use data as an asset. And that requires an investment in new sets of technology capabilities that are not cheap, not simple and require significant thought, a lot of planning, lot of change within an IT and business organizations. How we capture data, how we turn it into value, and how we translate that into real world action through software. That's going to lead to a rethinking, ultimately, based on cost and other factors about how we deploy infrastructure. How we use the cloud so that the data guides the activity and not the choice of cloud supplier determines or limits what we can do with our data. And that's going to lead to this notion of true private cloud and elevate the role the Edge plays in analytics and all other architectures. So I hope that was perfectly clear. And now what I want to do is I want to bring up Neil Raden. Yes, now's the time Neil! So let me invite Neil up to spend some time talking about harvesting value at the Edge. Can you see his, all right. Got it. >> Oh boy. Hi everybody. Yeah, this is a really, this is a really big and complicated topic so I decided to just concentrate on something fairly simple, but I know that Peter mentioned customers. And he also had a picture of Peter Drucker. I had the pleasure in 1998 of interviewing Peter and photographing him. Peter Drucker, not this Peter. Because I'd started a magazine called Hired Brains. It was for consultants. And Peter said, Peter said a number of really interesting things to me, but one of them was his definition of a customer was someone who wrote you a check that didn't bounce. He was kind of a wag. He was! So anyway, he had to leave to do a video conference with Jack Welch and so I said to him, how do you charge Jack Welch to spend an hour on a video conference? And he said, you know I have this theory that you should always charge your client enough that it hurts a little bit or they don't take you seriously. Well, I had the chance to talk to Jack's wife, Suzie Welch recently and I told her that story and she said, "Oh he's full of it, Jack never paid "a dime for those conferences!" (laughs) So anyway, all right, so let's talk about this. To me, things about, engineered things like the hardware and network and all these other standards and so forth, we haven't fully developed those yet, but they're coming. As far as I'm concerned, they're not the most interesting thing. The most interesting thing to me in Edge Analytics is what you're going to get out of it, what the result is going to be. Making sense of this data that's coming. And while we're on data, something I've been thinking a lot lately because everybody I've talked to for the last three days just keeps talking to me about data. I have this feeling that data isn't actually quite real. That any data that we deal with is the result of some process that's captured it from something else that's actually real. In other words it's proxy. So it's not exactly perfect. And that's why we've always had these problems about customer A, customer A, customer A, what's their definition? What's the definition of this, that and the other thing? And with sensor data, I really have the feeling, when companies get, not you know, not companies, organizations get instrumented and start dealing with this kind of data what they're going to find is that this is the first time, and I've been involved in analytics, I don't want to date myself, 'cause I know I look young, but the first, I've been dealing with analytics since 1975. And everything we've ever done in analytics has involved pulling data from some other system that was not designed for analytics. But if you think about sensor data, this is data that we're actually going to catch the first time. It's going to be ours! We're not going to get it from some other source. It's going to be the real deal, to the extent that it's the real deal. Now you may say, ya know Neil, a sensor that's sending us information about oil pressure or temperature or something like that, how can you quarrel with that? Well, I can quarrel with it because I don't know if the sensor's doing it right. So we still don't know, even with that data, if it's right, but that's what we have to work with. Now, what does that really mean? Is that we have to be really careful with this data. It's ours, we have to take care of it. We don't get to reload it from source some other day. If we munge it up it's gone forever. So that has, that has very serious implications, but let me, let me roll you back a little bit. The way I look at analytics is it's come in three different eras. And we're entering into the third now. The first era was business intelligence. It was basically built and governed by IT, it was system of record kind of reporting. And as far as I can recall, it probably started around 1988 or at least that's the year that Howard Dresner claims to have invented the term. I'm not sure it's true. And things happened before 1988 that was sort of like BI, but 88 was when they really started coming out, that's when we saw BusinessObjects and Cognos and MicroStrategy and those kinds of things. The second generation just popped out on everybody else. We're all looking around at BI and we were saying why isn't this working? Why are only five people in the organization using this? Why are we not getting value out of this massive license we bought? And along comes companies like Tableau doing data discovery, visualization, data prep and Line of Business people are using this now. But it's still the same kind of data sources. It's moved out a little bit, but it still hasn't really hit the Big Data thing. Now we're in third generation, so we not only had Big Data, which has come and hit us like a tsunami, but we're looking at smart discovery, we're looking at machine learning. We're looking at AI induced analytics workflows. And then all the natural language cousins. You know, natural language processing, natural language, what's? Oh Q, natural language query. Natural language generation. Anybody here know what natural language generation is? Yeah, so what you see now is you do some sort of analysis and that tool comes up and says this chart is about the following and it used the following data, and it's blah blah blah blah blah. I think it's kind of wordy and it's going to refined some, but it's an interesting, it's an interesting thing to do. Now, the problem I see with Edge Analytics and IoT in general is that most of the canonical examples we talk about are pretty thin. I know we talk about autonomous cars, I hope to God we never have them, 'cause I'm a car guy. Fleet Management, I think Qualcomm started Fleet Management in 1988, that is not a new application. Industrial controls. I seem to remember, I seem to remember Honeywell doing industrial controls at least in the 70s and before that I wasn't, I don't want to talk about what I was doing, but I definitely wasn't in this industry. So my feeling is we all need to sit down and think about this and get creative. Because the real value in Edge Analytics or IoT, whatever you want to call it, the real value is going to be figuring out something that's new or different. Creating a brand new business. Changing the way an operation happens in a company, right? And I think there's a lot of smart people out there and I think there's a million apps that we haven't even talked about so, if you as a vendor come to me and tell me how great your product is, please don't talk to me about autonomous cars or Fleet Managing, 'cause I've heard about that, okay? Now, hardware and architecture are really not the most interesting thing. We fell into that trap with data warehousing. We've fallen into that trap with Big Data. We talk about speeds and feeds. Somebody said to me the other day, what's the narrative of this company? This is a technology provider. And I said as far as I can tell, they don't have a narrative they have some products and they compete in a space. And when they go to clients and the clients say, what's the value of your product? They don't have an answer for that. So we don't want to fall into this trap, okay? Because IoT is going to inform you in ways you've never even dreamed about. Unfortunately some of them are going to be really stinky, you know, they're going to be really bad. You're going to lose more of your privacy, it's going to get harder to get, I dunno, mortgage for example, I dunno, maybe it'll be easier, but in any case, it's not going to all be good. So let's really think about what you want to do with this technology to do something that's really valuable. Cost takeout is not the place to justify an IoT project. Because number one, it's very expensive, and number two, it's a waste of the technology because you should be looking at, you know the old numerator denominator thing? You should be looking at the numerators and forget about the denominators because that's not what you do with IoT. And the other thing is you don't want to get over confident. Actually this is good advice about anything, right? But in this case, I love this quote by Derek Sivers He's a pretty funny guy. He said, "If more information was the answer, "then we'd all be billionaires with perfect abs." I'm not sure what's on his wishlist, but you know, I would, those aren't necessarily the two things I would think of, okay. Now, what I said about the data, I want to explain some more. Big Data Analytics, if you look at this graphic, it depicts it perfectly. It's a bunch of different stuff falling into the funnel. All right? It comes from other places, it's not original material. And when it comes in, it's always used as second hand data. Now what does that mean? That means that you have to figure out the semantics of this information and you have to find a way to put it together in a way that's useful to you, okay. That's Big Data. That's where we are. How is that different from IoT data? It's like I said, IoT is original. You can put it together any way you want because no one else has ever done that before. It's yours to construct, okay. You don't even have to transform it into a schema because you're creating the new application. But the most important thing is you have to take care of it 'cause if you lose it, it's gone. It's the original data. It's the same way, in operational systems for a long long time we've always been concerned about backup and security and everything else. You better believe this is a problem. I know a lot of people think about streaming data, that we're going to look at it for a minute, and we're going to throw most of it away. Personally I don't think that's going to happen. I think it's all going to be saved, at least for a while. Now, the governance and security, oh, by the way, I don't know where you're going to find a presentation where somebody uses a newspaper clipping about Vladimir Lenin, but here it is, enjoy yourselves. I believe that when people think about governance and security today they're still thinking along the same grids that we thought about it all along. But this is very very different and again, I'm sorry I keep thrashing this around, but this is treasured data that has to be carefully taken care of. Now when I say governance, my experience has been over the years that governance is something that IT does to make everybody's lives miserable. But that's not what I mean by governance today. It means a comprehensive program to really secure the value of the data as an asset. And you need to think about this differently. Now the other thing is you may not get to think about it differently, because some of the stuff may end up being subject to regulation. And if the regulators start regulating some of this, then that'll take some of the degrees of freedom away from you in how you put this together, but you know, that's the way it works. Now, machine learning, I think I told somebody the other day that claims about machine learning in software products are as common as twisters in trail parks. And a lot of it is not really what I'd call machine learning. But there's a lot of it around. And I think all of the open source machine learning and artificial intelligence that's popped up, it's great because all those math PhDs who work at Home Depot now have something to do when they go home at night and they construct this stuff. But if you're going to have machine learning at the Edge, here's the question, what kind of machine learning would you have at the Edge? As opposed to developing your models back at say, the cloud, when you transmit the data there. The devices at the Edge are not very powerful. And they don't have a lot of memory. So you're only going to be able to do things that have been modeled or constructed somewhere else. But that's okay. Because machine learning algorithm development is actually slow and painful. So you really want the people who know how to do this working with gobs of data creating models and testing them offline. And when you have something that works, you can put it there. Now there's one thing I want to talk about before I finish, and I think I'm almost finished. I wrote a book about 10 years ago about automated decision making and the conclusion that I came up with was that little decisions add up, and that's good. But it also means you don't have to get them all right. But you don't want computers or software making decisions unattended if it involves human life, or frankly any life. Or the environment. So when you think about the applications that you can build using this architecture and this technology, think about the fact that you're not going to be doing air traffic control, you're not going to be monitoring crossing guards at the elementary school. You're going to be doing things that may seem fairly mundane. Managing machinery on the factory floor, I mean that may sound great, but really isn't that interesting. Managing well heads, drilling for oil, well I mean, it's great to the extent that it doesn't cause wells to explode, but they don't usually explode. What it's usually used for is to drive the cost out of preventative maintenance. Not very interesting. So use your heads. Come up with really cool stuff. And any of you who are involved in Edge Analytics, the next time I talk to you I don't want to hear about the same five applications that everybody talks about. Let's hear about some new ones. So, in conclusion, I don't really have anything in conclusion except that Peter mentioned something about limousines bringing people up here. On Monday I was slogging up and down Park Avenue and Madison Avenue with my client and we were visiting all the hedge funds there because we were doing a project with them. And in the miserable weather I looked at him and I said, for godsake Paul, where's the black car? And he said, that was the 90s. (laughs) Thank you. So, Jim, up to you. (audience applauding) This is terrible, go that way, this was terrible coming that way. >> Woo, don't want to trip! And let's move to, there we go. Hi everybody, how ya doing? Thanks Neil, thanks Peter, those were great discussions. So I'm the third leg in this relay race here, talking about of course how software is eating the world. And focusing on the value of Edge Analytics in a lot of real world scenarios. Programming the real world for, to make the world a better place. So I will talk, I'll break it out analytically in terms of the research that Wikibon is doing in the area of the IoT, but specifically how AI intelligence is being embedded really to all material reality potentially at the Edge. But mobile applications and industrial IoT and the smart appliances and self driving vehicles. I will break it out in terms of a reference architecture for understanding what functions are being pushed to the Edge to hardware, to our phones and so forth to drive various scenarios in terms of real world results. So I'll move a pace here. So basically AI software or AI microservices are being infused into Edge hardware as we speak. What we see is more vendors of smart phones and other, real world appliances and things like smart driving, self driving vehicles. What they're doing is they're instrumenting their products with computer vision and natural language processing, environmental awareness based on sensing and actuation and those capabilities and inferences that these devices just do to both provide human support for human users of these devices as well as to enable varying degrees of autonomous operation. So what I'll be talking about is how AI is a foundation for data driven systems of agency of the sort that Peter is talking about. Infusing data driven intelligence into everything or potentially so. As more of this capability, all these algorithms for things like, ya know for doing real time predictions and classifications, anomaly detection and so forth, as this functionality gets diffused widely and becomes more commoditized, you'll see it burned into an ever-wider variety of hardware architecture, neuro synaptic chips, GPUs and so forth. So what I've got here in front of you is a sort of a high level reference architecture that we're building up in our research at Wikibon. So AI, artificial intelligence is a big term, a big paradigm, I'm not going to unpack it completely. Of course we don't have oodles of time so I'm going to take you fairly quickly through the high points. It's a driver for systems of agency. Programming the real world. Transducing digital inputs, the data, to analog real world results. Through the embedding of this capability in the IoT, but pushing more and more of it out to the Edge with points of decision and action in real time. And there are four capabilities that we're seeing in terms of AI enabled, enabling capabilities that are absolutely critical to software being pushed to the Edge are sensing, actuation, inference and Learning. Sensing and actuation like Peter was describing, it's about capturing data from the environment within which a device or users is operating or moving. And then actuation is the fancy term for doing stuff, ya know like industrial IoT, it's obviously machine controlled, but clearly, you know self driving vehicles is steering a vehicle and avoiding crashing and so forth. Inference is the meat and potatoes as it were of AI. Analytics does inferences. It infers from the data, the logic of the application. Predictive logic, correlations, classification, abstractions, differentiation, anomaly detection, recognizing faces and voices. We see that now with Apple and the latest version of the iPhone is embedding face recognition as a core, as the core multifactor authentication technique. Clearly that's a harbinger of what's going to be universal fairly soon which is that depends on AI. That depends on convolutional neural networks, that is some heavy hitting processing power that's necessary and it's processing the data that's coming from your face. So that's critically important. So what we're looking at then is the AI software is taking root in hardware to power continuous agency. Getting stuff done. Powered decision support by human beings who have to take varying degrees of action in various environments. We don't necessarily want to let the car steer itself in all scenarios, we want some degree of override, for lots of good reasons. They want to protect life and limb including their own. And just more data driven automation across the internet of things in the broadest sense. So unpacking this reference framework, what's happening is that AI driven intelligence is powering real time decisioning at the Edge. Real time local sensing from the data that it's capturing there, it's ingesting the data. Some, not all of that data, may be persistent at the Edge. Some, perhaps most of it, will be pushed into the cloud for other processing. When you have these highly complex algorithms that are doing AI deep learning, multilayer, to do a variety of anti-fraud and higher level like narrative, auto-narrative roll-ups from various scenes that are unfolding. A lot of this processing is going to begin to happen in the cloud, but a fair amount of the more narrowly scoped inferences that drive real time decision support at the point of action will be done on the device itself. Contextual actuation, so it's the sensor data that's captured by the device along with other data that may be coming down in real time streams through the cloud will provide the broader contextual envelope of data needed to drive actuation, to drive various models and rules and so forth that are making stuff happen at the point of action, at the Edge. Continuous inference. What it all comes down to is that inference is what's going on inside the chips at the Edge device. And what we're seeing is a growing range of hardware architectures, GPUs, CPUs, FPGAs, ASIC, Neuro synaptic chips of all sorts playing in various combinations that are automating more and more very complex inference scenarios at the Edge. And not just individual devices, swarms of devices, like drones and so forth are essentially an Edge unto themselves. You'll see these tiered hierarchies of Edge swarms that are playing and doing inferences of ever more complex dynamic nature. And much of this will be, this capability, the fundamental capabilities that is powering them all will be burned into the hardware that powers them. And then adaptive learning. Now I use the term learning rather than training here, training is at the core of it. Training means everything in terms of the predictive fitness or the fitness of your AI services for whatever task, predictions, classifications, face recognition that you, you've built them for. But I use the term learning in a broader sense. It's what's make your inferences get better and better, more accurate over time is that you're training them with fresh data in a supervised learning environment. But you can have reinforcement learning if you're doing like say robotics and you don't have ground truth against which to train the data set. You know there's maximize a reward function versus minimize a loss function, you know, the standard approach, the latter for supervised learning. There's also, of course, the issue, or not the issue, the approach of unsupervised learning with cluster analysis critically important in a lot of real world scenarios. So Edge AI Algorithms, clearly, deep learning which is multilayered machine learning models that can do abstractions at higher and higher levels. Face recognition is a high level abstraction. Faces in a social environment is an even higher level of abstraction in terms of groups. Faces over time and bodies and gestures, doing various things in various environments is an even higher level abstraction in terms of narratives that can be rolled up, are being rolled up by deep learning capabilities of great sophistication. Convolutional neural networks for processing images, recurrent neural networks for processing time series. Generative adversarial networks for doing essentially what's called generative applications of all sort, composing music, and a lot of it's being used for auto programming. These are all deep learning. There's a variety of other algorithm approaches I'm not going to bore you with here. Deep learning is essentially the enabler of the five senses of the IoT. Your phone's going to have, has a camera, it has a microphone, it has the ability to of course, has geolocation and navigation capabilities. It's environmentally aware, it's got an accelerometer and so forth embedded therein. The reason that your phone and all of the devices are getting scary sentient is that they have the sensory modalities and the AI, the deep learning that enables them to make environmentally correct decisions in the wider range of scenarios. So machine learning is the foundation of all of this, but there are other, I mean of deep learning, artificial neural networks is the foundation of that. But there are other approaches for machine learning I want to make you aware of because support vector machines and these other established approaches for machine learning are not going away but really what's driving the show now is deep learning, because it's scary effective. And so that's where most of the investment in AI is going into these days for deep learning. AI Edge platforms, tools and frameworks are just coming along like gangbusters. Much development of AI, of deep learning happens in the context of your data lake. This is where you're storing your training data. This is the data that you use to build and test to validate in your models. So we're seeing a deepening stack of Hadoop and there's Kafka, and Spark and so forth that are driving the training (coughs) excuse me, of AI models that are power all these Edge Analytic applications so that that lake will continue to broaden in terms, and deepen in terms of a scope and the range of data sets and the range of modeling, AI modeling supports. Data science is critically important in this scenario because the data scientist, the data science teams, the tools and techniques and flows of data science are the fundamental development paradigm or discipline or capability that's being leveraged to build and to train and to deploy and iterate all this AI that's being pushed to the Edge. So clearly data science is at the center, data scientists of an increasingly specialized nature are necessary to the realization to this value at the Edge. AI frameworks are coming along like you know, a mile a minute. TensorFlow has achieved a, is an open source, most of these are open source, has achieved sort of almost like a defacto standard, status, I'm using the word defacto in air quotes. There's Theano and Keras and xNet and CNTK and a variety of other ones. We're seeing range of AI frameworks come to market, most open source. Most are supported by most of the major tool vendors as well. So at Wikibon we're definitely tracking that, we plan to go deeper in our coverage of that space. And then next best action, powers recommendation engines. I mean next best action decision automation of the sort of thing Neil's covered in a variety of contexts in his career is fundamentally important to Edge Analytics to systems of agency 'cause it's driving the process automation, decision automation, sort of the targeted recommendations that are made at the Edge to individual users as well as to process that automation. That's absolutely necessary for self driving vehicles to do their jobs and industrial IoT. So what we're seeing is more and more recommendation engine or recommender capabilities powered by ML and DL are going to the Edge, are already at the Edge for a variety of applications. Edge AI capabilities, like I said, there's sensing. And sensing at the Edge is becoming ever more rich, mixed reality Edge modalities of all sort are for augmented reality and so forth. We're just seeing a growth in certain, the range of sensory modalities that are enabled or filtered and analyzed through AI that are being pushed to the Edge, into the chip sets. Actuation, that's where robotics comes in. Robotics is coming into all aspects of our lives. And you know, it's brainless without AI, without deep learning and these capabilities. Inference, autonomous edge decisioning. Like I said, it's, a growing range of inferences that are being done at the Edge. And that's where it has to happen 'cause that's the point of decision. Learning, training, much training, most training will continue to be done in the cloud because it's very data intensive. It's a grind to train and optimize an AI algorithm to do its job. It's not something that you necessarily want to do or can do at the Edge at Edge devices so, the models that are built and trained in the cloud are pushed down through a dev ops process down to the Edge and that's the way it will work pretty much in most AI environments, Edge analytics environments. You centralize the modeling, you decentralize the execution of the inference models. The training engines will be in the cloud. Edge AI applications. I'll just run you through sort of a core list of the ones that are coming into, already come into the mainstream at the Edge. Multifactor authentication, clearly the Apple announcement of face recognition is just a harbinger of the fact that that's coming to every device. Computer vision speech recognition, NLP, digital assistance and chat bots powered by natural language processing and understanding, it's all AI powered. And it's becoming very mainstream. Emotion detection, face recognition, you know I could go on and on but these are like the core things that everybody has access to or will by 2020 and they're core devices, mass market devices. Developers, designers and hardware engineers are coming together to pool their expertise to build and train not just the AI, but also the entire package of hardware in UX and the orchestration of real world business scenarios or life scenarios that all this intelligence, the submitted intelligence enables and most, much of what they build in terms of AI will be containerized as micro services through Docker and orchestrated through Kubernetes as full cloud services in an increasingly distributed fabric. That's coming along very rapidly. We can see a fair amount of that already on display at Strata in terms of what the vendors are doing or announcing or who they're working with. The hardware itself, the Edge, you know at the Edge, some data will be persistent, needs to be persistent to drive inference. That's, and you know to drive a variety of different application scenarios that need some degree of historical data related to what that device in question happens to be sensing or has sensed in the immediate past or you know, whatever. The hardware itself is geared towards both sensing and increasingly persistence and Edge driven actuation of real world results. The whole notion of drones and robotics being embedded into everything that we do. That's where that comes in. That has to be powered by low cost, low power commodity chip sets of various sorts. What we see right now in terms of chip sets is it's a GPUs, Nvidia has gone real far and GPUs have come along very fast in terms of power inference engines, you know like the Tesla cars and so forth. But GPUs are in many ways the core hardware sub straight for in inference engines in DL so far. But to become a mass market phenomenon, it's got to get cheaper and lower powered and more commoditized, and so we see a fair number of CPUs being used as the hardware for Edge Analytic applications. Some vendors are fairly big on FPGAs, I believe Microsoft has gone fairly far with FPGAs inside DL strategy. ASIC, I mean, there's neuro synaptic chips like IBM's got one. There's at least a few dozen vendors of neuro synaptic chips on the market so at Wikibon we're going to track that market as it develops. And what we're seeing is a fair number of scenarios where it's a mixed environment where you use one chip set architecture at the inference side of the Edge, and other chip set architectures that are driving the DL as processed in the cloud, playing together within a common architecture. And we see some, a fair number of DL environments where the actual training is done in the cloud on Spark using CPUs and parallelized in memory, but pushing Tensorflow models that might be trained through Spark down to the Edge where the inferences are done in FPGAs and GPUs. Those kinds of mixed hardware scenarios are very, very, likely to be standard going forward in lots of areas. So analytics at the Edge power continuous results is what it's all about. The whole point is really not moving the data, it's putting the inference at the Edge and working from the data that's already captured and persistent there for the duration of whatever action or decision or result needs to be powered from the Edge. Like Neil said cost takeout alone is not worth doing. Cost takeout alone is not the rationale for putting AI at the Edge. It's getting new stuff done, new kinds of things done in an automated consistent, intelligent, contextualized way to make our lives better and more productive. Security and governance are becoming more important. Governance of the models, governance of the data, governance in a dev ops context in terms of version controls over all those DL models that are built, that are trained, that are containerized and deployed. Continuous iteration and improvement of those to help them learn to do, make our lives better and easier. With that said, I'm going to hand it over now. It's five minutes after the hour. We're going to get going with the Influencer Panel so what we'd like to do is I call Peter, and Peter's going to call our influencers. >> All right, am I live yet? Can you hear me? All right so, we've got, let me jump back in control here. We've got, again, the objective here is to have community take on some things. And so what we want to do is I want to invite five other people up, Neil why don't you come on up as well. Start with Neil. You can sit here. On the far right hand side, Judith, Judith Hurwitz. >> Neil: I'm glad I'm on the left side. >> From the Hurwitz Group. >> From the Hurwitz Group. Jennifer Shin who's affiliated with UC Berkeley. Jennifer are you here? >> She's here, Jennifer where are you? >> She was here a second ago. >> Neil: I saw her walk out she may have, >> Peter: All right, she'll be back in a second. >> Here's Jennifer! >> Here's Jennifer! >> Neil: With 8 Path Solutions, right? >> Yep. >> Yeah 8 Path Solutions. >> Just get my mic. >> Take your time Jen. >> Peter: All right, Stephanie McReynolds. Far left. And finally Joe Caserta, Joe come on up. >> Stephie's with Elysian >> And to the left. So what I want to do is I want to start by having everybody just go around introduce yourself quickly. Judith, why don't we start there. >> I'm Judith Hurwitz, I'm president of Hurwitz and Associates. We're an analyst research and fault leadership firm. I'm the co-author of eight books. Most recent is Cognitive Computing and Big Data Analytics. I've been in the market for a couple years now. >> Jennifer. >> Hi, my name's Jennifer Shin. I'm the founder and Chief Data Scientist 8 Path Solutions LLC. We do data science analytics and technology. We're actually about to do a big launch next month, with Box actually. >> We're apparent, are we having a, sorry Jennifer, are we having a problem with Jennifer's microphone? >> Man: Just turn it back on? >> Oh you have to turn it back on. >> It was on, oh sorry, can you hear me now? >> Yes! We can hear you now. >> Okay, I don't know how that turned back off, but okay. >> So you got to redo all that Jen. >> Okay, so my name's Jennifer Shin, I'm founder of 8 Path Solutions LLC, it's a data science analytics and technology company. I founded it about six years ago. So we've been developing some really cool technology that we're going to be launching with Box next month. It's really exciting. And I have, I've been developing a lot of patents and some technology as well as teaching at UC Berkeley as a lecturer in data science. >> You know Jim, you know Neil, Joe, you ready to go? >> Joe: Just broke my microphone. >> Joe's microphone is broken. >> Joe: Now it should be all right. >> Jim: Speak into Neil's. >> Joe: Hello, hello? >> I just feel not worthy in the presence of Joe Caserta. (several laughing) >> That's right, master of mics. If you can hear me, Joe Caserta, so yeah, I've been doing data technology solutions since 1986, almost as old as Neil here, but been doing specifically like BI, data warehousing, business intelligence type of work since 1996. And been doing, wholly dedicated to Big Data solutions and modern data engineering since 2009. Where should I be looking? >> Yeah I don't know where is the camera? >> Yeah, and that's basically it. So my company was formed in 2001, it's called Caserta Concepts. We recently rebranded to only Caserta 'cause what we do is way more than just concepts. So we conceptualize the stuff, we envision what the future brings and we actually build it. And we help clients large and small who are just, want to be leaders in innovation using data specifically to advance their business. >> Peter: And finally Stephanie McReynolds. >> I'm Stephanie McReynolds, I had product marketing as well as corporate marketing for a company called Elysian. And we are a data catalog so we help bring together not only a technical understanding of your data, but we curate that data with human knowledge and use automated intelligence internally within the system to make recommendations about what data to use for decision making. And some of our customers like City of San Diego, a large automotive manufacturer working on self driving cars and General Electric use Elysian to help power their solutions for IoT at the Edge. >> All right so let's jump right into it. And again if you have a question, raise your hand, and we'll do our best to get it to the floor. But what I want to do is I want to get seven questions in front of this group and have you guys discuss, slog, disagree, agree. Let's start here. What is the relationship between Big Data AI and IoT? Now Wikibon's put forward its observation that data's being generated at the Edge, that action is being taken at the Edge and then increasingly the software and other infrastructure architectures need to accommodate the realities of how data is going to work in these very complex systems. That's our perspective. Anybody, Judith, you want to start? >> Yeah, so I think that if you look at AI machine learning, all these different areas, you have to be able to have the data learned. Now when it comes to IoT, I think one of the issues we have to be careful about is not all data will be at the Edge. Not all data needs to be analyzed at the Edge. For example if the light is green and that's good and it's supposed to be green, do you really have to constantly analyze the fact that the light is green? You actually only really want to be able to analyze and take action when there's an anomaly. Well if it goes purple, that's actually a sign that something might explode, so that's where you want to make sure that you have the analytics at the edge. Not for everything, but for the things where there is an anomaly and a change. >> Joe, how about from your perspective? >> For me I think the evolution of data is really becoming, eventually oxygen is just, I mean data's going to be the oxygen we breathe. It used to be very very reactive and there used to be like a latency. You do something, there's a behavior, there's an event, there's a transaction, and then you go record it and then you collect it, and then you can analyze it. And it was very very waterfallish, right? And then eventually we figured out to put it back into the system. Or at least human beings interpret it to try to make the system better and that is really completely turned on it's head, we don't do that anymore. Right now it's very very, it's synchronous, where as we're actually making these transactions, the machines, we don't really need, I mean human beings are involved a bit, but less and less and less. And it's just a reality, it may not be politically correct to say but it's a reality that my phone in my pocket is following my behavior, and it knows without telling a human being what I'm doing. And it can actually help me do things like get to where I want to go faster depending on my preference if I want to save money or save time or visit things along the way. And I think that's all integration of big data, streaming data, artificial intelligence and I think the next thing that we're going to start seeing is the culmination of all of that. I actually, hopefully it'll be published soon, I just wrote an article for Forbes with the term of ARBI and ARBI is the integration of Augmented Reality and Business Intelligence. Where I think essentially we're going to see, you know, hold your phone up to Jim's face and it's going to recognize-- >> Peter: It's going to break. >> And it's going to say exactly you know, what are the key metrics that we want to know about Jim. If he works on my sales force, what's his attainment of goal, what is-- >> Jim: Can it read my mind? >> Potentially based on behavior patterns. >> Now I'm scared. >> I don't think Jim's buying it. >> It will, without a doubt be able to predict what you've done in the past, you may, with some certain level of confidence you may do again in the future, right? And is that mind reading? It's pretty close, right? >> Well, sometimes, I mean, mind reading is in the eye of the individual who wants to know. And if the machine appears to approximate what's going on in the person's head, sometimes you can't tell. So I guess, I guess we could call that the Turing machine test of the paranormal. >> Well, face recognition, micro gesture recognition, I mean facial gestures, people can do it. Maybe not better than a coin toss, but if it can be seen visually and captured and analyzed, conceivably some degree of mind reading can be built in. I can see when somebody's angry looking at me so, that's a possibility. That's kind of a scary possibility in a surveillance society, potentially. >> Neil: Right, absolutely. >> Peter: Stephanie, what do you think? >> Well, I hear a world of it's the bots versus the humans being painted here and I think that, you know at Elysian we have a very strong perspective on this and that is that the greatest impact, or the greatest results is going to be when humans figure out how to collaborate with the machines. And so yes, you want to get to the location more quickly, but the machine as in the bot isn't able to tell you exactly what to do and you're just going to blindly follow it. You need to train that machine, you need to have a partnership with that machine. So, a lot of the power, and I think this goes back to Judith's story is then what is the human decision making that can be augmented with data from the machine, but then the humans are actually training the training side and driving machines in the right direction. I think that's when we get true power out of some of these solutions so it's not just all about the technology. It's not all about the data or the AI, or the IoT, it's about how that empowers human systems to become smarter and more effective and more efficient. And I think we're playing that out in our technology in a certain way and I think organizations that are thinking along those lines with IoT are seeing more benefits immediately from those projects. >> So I think we have a general agreement of what kind of some of the things you talked about, IoT, crucial capturing information, and then having action being taken, AI being crucial to defining and refining the nature of the actions that are being taken Big Data ultimately powering how a lot of that changes. Let's go to the next one. >> So actually I have something to add to that. So I think it makes sense, right, with IoT, why we have Big Data associated with it. If you think about what data is collected by IoT. We're talking about a serial information, right? It's over time, it's going to grow exponentially just by definition, right, so every minute you collect a piece of information that means over time, it's going to keep growing, growing, growing as it accumulates. So that's one of the reasons why the IoT is so strongly associated with Big Data. And also why you need AI to be able to differentiate between one minute versus next minute, right? Trying to find a better way rather than looking at all that information and manually picking out patterns. To have some automated process for being able to filter through that much data that's being collected. >> I want to point out though based on what you just said Jennifer, I want to bring Neil in at this point, that this question of IoT now generating unprecedented levels of data does introduce this idea of the primary source. Historically what we've done within technology, or within IT certainly is we've taken stylized data. There is no such thing as a real world accounting thing. It is a human contrivance. And we stylize data and therefore it's relatively easy to be very precise on it. But when we start, as you noted, when we start measuring things with a tolerance down to thousandths of a millimeter, whatever that is, metric system, now we're still sometimes dealing with errors that we have to attend to. So, the reality is we're not just dealing with stylized data, we're dealing with real data, and it's more, more frequent, but it also has special cases that we have to attend to as in terms of how we use it. What do you think Neil? >> Well, I mean, I agree with that, I think I already said that, right. >> Yes you did, okay let's move on to the next one. >> Well it's a doppelganger, the digital twin doppelganger that's automatically created by your very fact that you're living and interacting and so forth and so on. It's going to accumulate regardless. Now that doppelganger may not be your agent, or might not be the foundation for your agent unless there's some other piece of logic like an interest graph that you build, a human being saying this is my broad set of interests, and so all of my agents out there in the IoT, you all need to be aware that when you make a decision on my behalf as my agent, this is what Jim would do. You know I mean there needs to be that kind of logic somewhere in this fabric to enable true agency. >> All right, so I'm going to start with you. Oh go ahead. >> I have a real short answer to this though. I think that Big Data provides the data and compute platform to make AI possible. For those of us who dipped our toes in the water in the 80s, we got clobbered because we didn't have the, we didn't have the facilities, we didn't have the resources to really do AI, we just kind of played around with it. And I think that the other thing about it is if you combine Big Data and AI and IoT, what you're going to see is people, a lot of the applications we develop now are very inward looking, we look at our organization, we look at our customers. We try to figure out how to sell more shoes to fashionable ladies, right? But with this technology, I think people can really expand what they're thinking about and what they model and come up with applications that are much more external. >> Actually what I would add to that is also it actually introduces being able to use engineering, right? Having engineers interested in the data. Because it's actually technical data that's collected not just say preferences or information about people, but actual measurements that are being collected with IoT. So it's really interesting in the engineering space because it opens up a whole new world for the engineers to actually look at data and to actually combine both that hardware side as well as the data that's being collected from it. >> Well, Neil, you and I have talked about something, 'cause it's not just engineers. We have in the healthcare industry for example, which you know a fair amount about, there's this notion of empirical based management. And the idea that increasingly we have to be driven by data as a way of improving the way that managers do things, the way the managers collect or collaborate and ultimately collectively how they take action. So it's not just engineers, it's supposed to also inform business, what's actually happening in the healthcare world when we start thinking about some of this empirical based management, is it working? What are some of the barriers? >> It's not a function of technology. What happens in medicine and healthcare research is, I guess you can say it borders on fraud. (people chuckling) No, I'm not kidding. I know the New England Journal of Medicine a couple of years ago released a study and said that at least half their articles that they published turned out to be written, ghost written by pharmaceutical companies. (man chuckling) Right, so I think the problem is that when you do a clinical study, the one that really killed me about 10 years ago was the women's health initiative. They spent $700 million gathering this data over 20 years. And when they released it they looked at all the wrong things deliberately, right? So I think that's a systemic-- >> I think you're bringing up a really important point that we haven't brought up yet, and that is is can you use Big Data and machine learning to begin to take the biases out? So if you let the, if you divorce your preconceived notions and your biases from the data and let the data lead you to the logic, you start to, I think get better over time, but it's going to take a while to get there because we do tend to gravitate towards our biases. >> I will share an anecdote. So I had some arm pain, and I had numbness in my thumb and pointer finger and I went to, excruciating pain, went to the hospital. So the doctor examined me, and he said you probably have a pinched nerve, he said, but I'm not exactly sure which nerve it would be, I'll be right back. And I kid you not, he went to a computer and he Googled it. (Neil laughs) And he came back because this little bit of information was something that could easily be looked up, right? Every nerve in your spine is connected to your different fingers so the pointer and the thumb just happens to be your C6, so he came back and said, it's your C6. (Neil mumbles) >> You know an interesting, I mean that's a good example. One of the issues with healthcare data is that the data set is not always shared across the entire research community, so by making Big Data accessible to everyone, you actually start a more rational conversation or debate on well what are the true insights-- >> If that conversation includes what Judith talked about, the actual model that you use to set priorities and make decisions about what's actually important. So it's not just about improving, this is the test. It's not just about improving your understanding of the wrong thing, it's also testing whether it's the right or wrong thing as well. >> That's right, to be able to test that you need to have humans in dialog with one another bringing different biases to the table to work through okay is there truth in this data? >> It's context and it's correlation and you can have a great correlation that's garbage. You know if you don't have the right context. >> Peter: So I want to, hold on Jim, I want to, >> It's exploratory. >> Hold on Jim, I want to take it to the next question 'cause I want to build off of what you talked about Stephanie and that is that this says something about what is the Edge. And our perspective is that the Edge is not just devices. That when we talk about the Edge, we're talking about human beings and the role that human beings are going to play both as sensors or carrying things with them, but also as actuators, actually taking action which is not a simple thing. So what do you guys think? What does the Edge mean to you? Joe, why don't you start? >> Well, I think it could be a combination of the two. And specifically when we talk about healthcare. So I believe in 2017 when we eat we don't know why we're eating, like I think we should absolutely by now be able to know exactly what is my protein level, what is my calcium level, what is my potassium level? And then find the foods to meet that. What have I depleted versus what I should have, and eat very very purposely and not by taste-- >> And it's amazing that red wine is always the answer. >> It is. (people laughing) And tequila, that helps too. >> Jim: You're a precision foodie is what you are. (several chuckle) >> There's no reason why we should not be able to know that right now, right? And when it comes to healthcare is, the biggest problem or challenge with healthcare is no matter how great of a technology you have, you can't, you can't, you can't manage what you can't measure. And you're really not allowed to use a lot of this data so you can't measure it, right? You can't do things very very scientifically right, in the healthcare world and I think regulation in the healthcare world is really burdening advancement in science. >> Peter: Any thoughts Jennifer? >> Yes, I teach statistics for data scientists, right, so you know we talk about a lot of these concepts. I think what makes these questions so difficult is you have to find a balance, right, a middle ground. For instance, in the case of are you being too biased through data, well you could say like we want to look at data only objectively, but then there are certain relationships that your data models might show that aren't actually a causal relationship. For instance, if there's an alien that came from space and saw earth, saw the people, everyone's carrying umbrellas right, and then it started to rain. That alien might think well, it's because they're carrying umbrellas that it's raining. Now we know from real world that that's actually not the way these things work. So if you look only at the data, that's the potential risk. That you'll start making associations or saying something's causal when it's actually not, right? So that's one of the, one of the I think big challenges. I think when it comes to looking also at things like healthcare data, right? Do you collect data about anything and everything? Does it mean that A, we need to collect all that data for the question we're looking at? Or that it's actually the best, more optimal way to be able to get to the answer? Meaning sometimes you can take some shortcuts in terms of what data you collect and still get the right answer and not have maybe that level of specificity that's going to cost you millions extra to be able to get. >> So Jennifer as a data scientist, I want to build upon what you just said. And that is, are we going to start to see methods and models emerge for how we actually solve some of these problems? So for example, we know how to build a system for stylized process like accounting or some elements of accounting. We have methods and models that lead to technology and actions and whatnot all the way down to that that system can be generated. We don't have the same notion to the same degree when we start talking about AI and some of these Big Datas. We have algorithms, we have technology. But are we going to start seeing, as a data scientist, repeatability and learning and how to think the problems through that's going to lead us to a more likely best or at least good result? >> So I think that's a bit of a tough question, right? Because part of it is, it's going to depend on how many of these researchers actually get exposed to real world scenarios, right? Research looks into all these papers, and you come up with all these models, but if it's never tested in a real world scenario, well, I mean we really can't validate that it works, right? So I think it is dependent on how much of this integration there's going to be between the research community and industry and how much investment there is. Funding is going to matter in this case. If there's no funding in the research side, then you'll see a lot of industry folk who feel very confident about their models that, but again on the other side of course, if researchers don't validate those models then you really can't say for sure that it's actually more accurate, or it's more efficient. >> It's the issue of real world testing and experimentation, A B testing, that's standard practice in many operationalized ML and AI implementations in the business world, but real world experimentation in the Edge analytics, what you're actually transducing are touching people's actual lives. Problem there is, like in healthcare and so forth, when you're experimenting with people's lives, somebody's going to die. I mean, in other words, that's a critical, in terms of causal analysis, you've got to tread lightly on doing operationalizing that kind of testing in the IoT when people's lives and health are at stake. >> We still give 'em placebos. So we still test 'em. All right so let's go to the next question. What are the hottest innovations in AI? Stephanie I want to start with you as a company, someone at a company that's got kind of an interesting little thing happening. We start thinking about how do we better catalog data and represent it to a large number of people. What are some of the hottest innovations in AI as you see it? >> I think it's a little counter intuitive about what the hottest innovations are in AI, because we're at a spot in the industry where the most successful companies that are working with AI are actually incorporating them into solutions. So the best AI solutions are actually the products that you don't know there's AI operating underneath. But they're having a significant impact on business decision making or bringing a different type of application to the market and you know, I think there's a lot of investment that's going into AI tooling and tool sets for data scientists or researchers, but the more innovative companies are thinking through how do we really take AI and make it have an impact on business decision making and that means kind of hiding the AI to the business user. Because if you think a bot is making a decision instead of you, you're not going to partner with that bot very easily or very readily. I worked at, way at the start of my career, I worked in CRM when recommendation engines were all the rage online and also in call centers. And the hardest thing was to get a call center agent to actually read the script that the algorithm was presenting to them, that algorithm was 99% correct most of the time, but there was this human resistance to letting a computer tell you what to tell that customer on the other side even if it was more successful in the end. And so I think that the innovation in AI that's really going to push us forward is when humans feel like they can partner with these bots and they don't think of it as a bot, but they think about as assisting their work and getting to a better result-- >> Hence the augmentation point you made earlier. >> Absolutely, absolutely. >> Joe how 'about you? What do you look at? What are you excited about? >> I think the coolest thing at the moment right now is chat bots. Like to be able, like to have voice be able to speak with you in natural language, to do that, I think that's pretty innovative, right? And I do think that eventually, for the average user, not for techies like me, but for the average user, I think keyboards are going to be a thing of the past. I think we're going to communicate with computers through voice and I think this is the very very beginning of that and it's an incredible innovation. >> Neil? >> Well, I think we all have myopia here. We're all thinking about commercial applications. Big, big things are happening with AI in the intelligence community, in military, the defense industry, in all sorts of things. Meteorology. And that's where, well, hopefully not on an every day basis with military, you really see the effect of this. But I was involved in a project a couple of years ago where we were developing AI software to detect artillery pieces in terrain from satellite imagery. I don't have to tell you what country that was. I think you can probably figure that one out right? But there are legions of people in many many companies that are involved in that industry. So if you're talking about the dollars spent on AI, I think the stuff that we do in our industries is probably fairly small. >> Well it reminds me of an application I actually thought was interesting about AI related to that, AI being applied to removing mines from war zones. >> Why not? >> Which is not a bad thing for a whole lot of people. Judith what do you look at? >> So I'm looking at things like being able to have pre-trained data sets in specific solution areas. I think that that's something that's coming. Also the ability to, to really be able to have a machine assist you in selecting the right algorithms based on what your data looks like and the problems you're trying to solve. Some of the things that data scientists still spend a lot of their time on, but can be augmented with some, basically we have to move to levels of abstraction before this becomes truly ubiquitous across many different areas. >> Peter: Jennifer? >> So I'm going to say computer vision. >> Computer vision? >> Computer vision. So computer vision ranges from image recognition to be able to say what content is in the image. Is it a dog, is it a cat, is it a blueberry muffin? Like a sort of popular post out there where it's like a blueberry muffin versus like I think a chihuahua and then it compares the two. And can the AI really actually detect difference, right? So I think that's really where a lot of people who are in this space of being in both the AI space as well as data science are looking to for the new innovations. I think, for instance, cloud vision I think that's what Google still calls it. The vision API we've they've released on beta allows you to actually use an API to send your image and then have it be recognized right, by their API. There's another startup in New York called Clarify that also does a similar thing as well as you know Amazon has their recognition platform as well. So I think in a, from images being able to detect what's in the content as well as from videos, being able to say things like how many people are entering a frame? How many people enter the store? Not having to actually go look at it and count it, but having a computer actually tally that information for you, right? >> There's actually an extra piece to that. So if I have a picture of a stop sign, and I'm an automated car, and is it a picture on the back of a bus of a stop sign, or is it a real stop sign? So that's going to be one of the complications. >> Doesn't matter to a New York City cab driver. How 'about you Jim? >> Probably not. (laughs) >> Hottest thing in AI is General Adversarial Networks, GANT, what's hot about that, well, I'll be very quick, most AI, most deep learning, machine learning is analytical, it's distilling or inferring insights from the data. Generative takes that same algorithmic basis but to build stuff. In other words, to create realistic looking photographs, to compose music, to build CAD CAM models essentially that can be constructed on 3D printers. So GANT, it's a huge research focus all around the world are used for, often increasingly used for natural language generation. In other words it's institutionalizing or having a foundation for nailing the Turing test every single time, building something with machines that looks like it was constructed by a human and doing it over and over again to fool humans. I mean you can imagine the fraud potential. But you can also imagine just the sheer, like it's going to shape the world, GANT. >> All right so I'm going to say one thing, and then we're going to ask if anybody in the audience has an idea. So the thing that I find interesting is traditional programs, or when you tell a machine to do something you don't need incentives. When you tell a human being something, you have to provide incentives. Like how do you get someone to actually read the text. And this whole question of elements within AI that incorporate incentives as a way of trying to guide human behavior is absolutely fascinating to me. Whether it's gamification, or even some things we're thinking about with block chain and bitcoins and related types of stuff. To my mind that's going to have an enormous impact, some good, some bad. Anybody in the audience? I don't want to lose everybody here. What do you think sir? And I'll try to do my best to repeat it. Oh we have a mic. >> So my question's about, Okay, so the question's pretty much about what Stephanie's talking about which is human and loop training right? I come from a computer vision background. That's the problem, we need millions of images trained, we need humans to do that. And that's like you know, the workforce is essentially people that aren't necessarily part of the AI community, they're people that are just able to use that data and analyze the data and label that data. That's something that I think is a big problem everyone in the computer vision industry at least faces. I was wondering-- >> So again, but the problem is that is the difficulty of methodologically bringing together people who understand it and people who, people who have domain expertise people who have algorithm expertise and working together? >> I think the expertise issue comes in healthcare, right? In healthcare you need experts to be labeling your images. With contextual information where essentially augmented reality applications coming in, you have the AR kit and everything coming out, but there is a lack of context based intelligence. And all of that comes through training images, and all of that requires people to do it. And that's kind of like the foundational basis of AI coming forward is not necessarily an algorithm, right? It's how well are datas labeled? Who's doing the labeling and how do we ensure that it happens? >> Great question. So for the panel. So if you think about it, a consultant talks about being on the bench. How much time are they going to have to spend on trying to develop additional business? How much time should we set aside for executives to help train some of the assistants? >> I think that the key is not, to think of the problem a different way is that you would have people manually label data and that's one way to solve the problem. But you can also look at what is the natural workflow of that executive, or that individual? And is there a way to gather that context automatically using AI, right? And if you can do that, it's similar to what we do in our product, we observe how someone is analyzing the data and from those observations we can actually create the metadata that then trains the system in a particular direction. But you have to think about solving the problem differently of finding the workflow that then you can feed into to make this labeling easy without the human really realizing that they're labeling the data. >> Peter: Anybody else? >> I'll just add to what Stephanie said, so in the IoT applications, all those sensory modalities, the computer vision, the speech recognition, all that, that's all potential training data. So it cross checks against all the other models that are processing all the other data coming from that device. So that the natural language process of understanding can be reality checked against the images that the person happens to be commenting upon, or the scene in which they're embedded, so yeah, the data's embedded-- >> I don't think we're, we're not at the stage yet where this is easy. It's going to take time before we do start doing the pre-training of some of these details so that it goes faster, but right now, there're not that many shortcuts. >> Go ahead Joe. >> Sorry so a couple things. So one is like, I was just caught up on your incentivizing programs to be more efficient like humans. You know in Ethereum that has this notion, which is bot chain, has this theory, this concept of gas. Where like as the process becomes more efficient it costs less to actually run, right? It costs less ether, right? So it actually is kind of, the machine is actually incentivized and you don't really know what it's going to cost until the machine processes it, right? So there is like some notion of that there. But as far as like vision, like training the machine for computer vision, I think it's through adoption and crowdsourcing, so as people start using it more they're going to be adding more pictures. Very very organically. And then the machines will be trained and right now is a very small handful doing it, and it's very proactive by the Googles and the Facebooks and all of that. But as we start using it, as they start looking at my images and Jim's and Jen's images, it's going to keep getting smarter and smarter through adoption and through very organic process. >> So Neil, let me ask you a question. Who owns the value that's generated as a consequence of all these people ultimately contributing their insight and intelligence into these systems? >> Well, to a certain extent the people who are contributing the insight own nothing because the systems collect their actions and the things they do and then that data doesn't belong to them, it belongs to whoever collected it or whoever's going to do something with it. But the other thing, getting back to the medical stuff. It's not enough to say that the systems, people will do the right thing, because a lot of them are not motivated to do the right thing. The whole grant thing, the whole oh my god I'm not going to go against the senior professor. A lot of these, I knew a guy who was a doctor at University of Pittsburgh and they were doing a clinical study on the tubes that they put in little kids' ears who have ear infections, right? And-- >> Google it! Who helps out? >> Anyway, I forget the exact thing, but he came out and said that the principle investigator lied when he made the presentation, that it should be this, I forget which way it went. He was fired from his position at Pittsburgh and he has never worked as a doctor again. 'Cause he went against the senior line of authority. He was-- >> Another question back here? >> Man: Yes, Mark Turner has a question. >> Not a question, just want to piggyback what you're saying about the transfixation of maybe in healthcare of black and white images and color images in the case of sonograms and ultrasound and mammograms, you see that happening using AI? You see that being, I mean it's already happening, do you see it moving forward in that kind of way? I mean, talk more about that, about you know, AI and black and white images being used and they can be transfixed, they can be made to color images so you can see things better, doctors can perform better operations. >> So I'm sorry, but could you summarize down? What's the question? Summarize it just, >> I had a lot of students, they're interested in the cross pollenization between AI and say the medical community as far as things like ultrasound and sonograms and mammograms and how you can literally take a black and white image and it can, using algorithms and stuff be made to color images that can help doctors better do the work that they've already been doing, just do it better. You touched on it like 30 seconds. >> So how AI can be used to actually add information in a way that's not necessarily invasive but is ultimately improves how someone might respond to it or use it, yes? Related? I've also got something say about medical images in a second, any of you guys want to, go ahead Jennifer. >> Yeah, so for one thing, you know and it kind of goes back to what we were talking about before. When we look at for instance scans, like at some point I was looking at CT scans, right, for lung cancer nodules. In order for me, who I don't have a medical background, to identify where the nodule is, of course, a doctor actually had to go in and specify which slice of the scan had the nodule and where exactly it is, so it's on both the slice level as well as, within that 2D image, where it's located and the size of it. So the beauty of things like AI is that ultimately right now a radiologist has to look at every slice and actually identify this manually, right? The goal of course would be that one day we wouldn't have to have someone look at every slice to like 300 usually slices and be able to identify it much more automated. And I think the reality is we're not going to get something where it's going to be 100%. And with anything we do in the real world it's always like a 95% chance of it being accurate. So I think it's finding that in between of where, what's the threshold that we want to use to be able to say that this is, definitively say a lung cancer nodule or not. I think the other thing to think about is in terms of how their using other information, what they might use is a for instance, to say like you know, based on other characteristics of the person's health, they might use that as sort of a grading right? So you know, how dark or how light something is, identify maybe in that region, the prevalence of that specific variable. So that's usually how they integrate that information into something that's already existing in the computer vision sense. I think that's, the difficulty with this of course, is being able to identify which variables were introduced into data that does exist. >> So I'll make two quick observations on this then I'll go to the next question. One is radiologists have historically been some of the highest paid physicians within the medical community partly because they don't have to be particularly clinical. They don't have to spend a lot of time with patients. They tend to spend time with doctors which means they can do a lot of work in a little bit of time, and charge a fair amount of money. As we start to introduce some of these technologies that allow us to from a machine standpoint actually make diagnoses based on those images, I find it fascinating that you now see television ads promoting the role that the radiologist plays in clinical medicine. It's kind of an interesting response. >> It's also disruptive as I'm seeing more and more studies showing that deep learning models processing images, ultrasounds and so forth are getting as accurate as many of the best radiologists. >> That's the point! >> Detecting cancer >> Now radiologists are saying oh look, we do this great thing in terms of interacting with the patients, never have because they're being dis-intermediated. The second thing that I'll note is one of my favorite examples of that if I got it right, is looking at the images, the deep space images that come out of Hubble. Where they're taking data from thousands, maybe even millions of images and combining it together in interesting ways you can actually see depth. You can actually move through to a very very small scale a system that's 150, well maybe that, can't be that much, maybe six billion light years away. Fascinating stuff. All right so let me go to the last question here, and then I'm going to close it down, then we can have something to drink. What are the hottest, oh I'm sorry, question? >> Yes, hi, my name's George, I'm with Blue Talon. You asked earlier there the question what's the hottest thing in the Edge and AI, I would say that it's security. It seems to me that before you can empower agency you need to be able to authorize what they can act on, how they can act on, who they can act on. So it seems if you're going to move from very distributed data at the Edge and analytics at the Edge, there has to be security similarly done at the Edge. And I saw (speaking faintly) slides that called out security as a key prerequisite and maybe Judith can comment, but I'm curious how security's going to evolve to meet this analytics at the Edge. >> Well, let me do that and I'll ask Jen to comment. The notion of agency is crucially important, slightly different from security, just so we're clear. And the basic idea here is historically folks have thought about moving data or they thought about moving application function, now we are thinking about moving authority. So as you said. That's not necessarily, that's not really a security question, but this has been a problem that's been in, of concern in a number of different domains. How do we move authority with the resources? And that's really what informs the whole agency process. But with that said, Jim. >> Yeah actually I'll, yeah, thank you for bringing up security so identity is the foundation of security. Strong identity, multifactor, face recognition, biometrics and so forth. Clearly AI, machine learning, deep learning are powering a new era of biometrics and you know it's behavioral metrics and so forth that's organic to people's use of devices and so forth. You know getting to the point that Peter was raising is important, agency! Systems of agency. Your agent, you have to, you as a human being should be vouching in a secure, tamper proof way, your identity should be vouching for the identity of some agent, physical or virtual that does stuff on your behalf. How can that, how should that be managed within this increasingly distributed IoT fabric? Well a lot of that's been worked. It all ran through webs of trust, public key infrastructure, formats and you know SAML for single sign and so forth. It's all about assertion, strong assertions and vouching. I mean there's the whole workflows of things. Back in the ancient days when I was actually a PKI analyst three analyst firms ago, I got deep into all the guts of all those federation agreements, something like that has to be IoT scalable to enable systems agency to be truly fluid. So we can vouch for our agents wherever they happen to be. We're going to keep on having as human beings agents all over creation, we're not even going to be aware of everywhere that our agents are, but our identity-- >> It's not just-- >> Our identity has to follow. >> But it's not just identity, it's also authorization and context. >> Permissioning, of course. >> So I may be the right person to do something yesterday, but I'm not authorized to do it in another context in another application. >> Role based permissioning, yeah. Or persona based. >> That's right. >> I agree. >> And obviously it's going to be interesting to see the role that block chain or its follow on to the technology is going to play here. Okay so let me throw one more questions out. What are the hottest applications of AI at the Edge? We've talked about a number of them, does anybody want to add something that hasn't been talked about? Or do you want to get a beer? (people laughing) Stephanie, you raised your hand first. >> I was going to go, I bring something mundane to the table actually because I think one of the most exciting innovations with IoT and AI are actually simple things like City of San Diego is rolling out 3200 automated street lights that will actually help you find a parking space, reduce the amount of emissions into the atmosphere, so has some environmental change, positive environmental change impact. I mean, it's street lights, it's not like a, it's not medical industry, it doesn't look like a life changing innovation, and yet if we automate streetlights and we manage our energy better, and maybe they can flicker on and off if there's a parking space there for you, that's a significant impact on everyone's life. >> And dramatically suppress the impact of backseat driving! >> (laughs) Exactly. >> Joe what were you saying? >> I was just going to say you know there's already the technology out there where you can put a camera on a drone with machine learning within an artificial intelligence within it, and it can look at buildings and determine whether there's rusty pipes and cracks in cement and leaky roofs and all of those things. And that's all based on artificial intelligence. And I think if you can do that, to be able to look at an x-ray and determine if there's a tumor there is not out of the realm of possibility, right? >> Neil? >> I agree with both of them, that's what I meant about external kind of applications. Instead of figuring out what to sell our customers. Which is most what we hear. I just, I think all of those things are imminently doable. And boy street lights that help you find a parking place, that's brilliant, right? >> Simple! >> It improves your life more than, I dunno. Something I use on the internet recently, but I think it's great! That's, I'd like to see a thousand things like that. >> Peter: Jim? >> Yeah, building on what Stephanie and Neil were saying, it's ambient intelligence built into everything to enable fine grain microclimate awareness of all of us as human beings moving through the world. And enable reading of every microclimate in buildings. In other words, you know you have sensors on your body that are always detecting the heat, the humidity, the level of pollution or whatever in every environment that you're in or that you might be likely to move into fairly soon and either A can help give you guidance in real time about where to avoid, or give that environment guidance about how to adjust itself to your, like the lighting or whatever it might be to your specific requirements. And you know when you have a room like this, full of other human beings, there has to be some negotiated settlement. Some will find it too hot, some will find it too cold or whatever but I think that is fundamental in terms of reshaping the sheer quality of experience of most of our lived habitats on the planet potentially. That's really the Edge analytics application that depends on everybody having, being fully equipped with a personal area network of sensors that's communicating into the cloud. >> Jennifer? >> So I think, what's really interesting about it is being able to utilize the technology we do have, it's a lot cheaper now to have a lot of these ways of measuring that we didn't have before. And whether or not engineers can then leverage what we have as ways to measure things and then of course then you need people like data scientists to build the right model. So you can collect all this data, if you don't build the right model that identifies these patterns then all that data's just collected and it's just made a repository. So without having the models that supports patterns that are actually in the data, you're not going to find a better way of being able to find insights in the data itself. So I think what will be really interesting is to see how existing technology is leveraged, to collect data and then how that's actually modeled as well as to be able to see how technology's going to now develop from where it is now, to being able to either collect things more sensitively or in the case of say for instance if you're dealing with like how people move, whether we can build things that we can then use to measure how we move, right? Like how we move every day and then being able to model that in a way that is actually going to give us better insights in things like healthcare and just maybe even just our behaviors. >> Peter: Judith? >> So, I think we also have to look at it from a peer to peer perspective. So I may be able to get some data from one thing at the Edge, but then all those Edge devices, sensors or whatever, they all have to interact with each other because we don't live, we may, in our business lives, act in silos, but in the real world when you look at things like sensors and devices it's how they react with each other on a peer to peer basis. >> All right, before I invite John up, I want to say, I'll say what my thing is, and it's not the hottest. It's the one I hate the most. I hate AI generated music. (people laughing) Hate it. All right, I want to thank all the panelists, every single person, some great commentary, great observations. I want to thank you very much. I want to thank everybody that joined. John in a second you'll kind of announce who's the big winner. But the one thing I want to do is, is I was listening, I learned a lot from everybody, but I want to call out the one comment that I think we all need to remember, and I'm going to give you the award Stephanie. And that is increasing we have to remember that the best AI is probably AI that we don't even know is working on our behalf. The same flip side of that is all of us have to be very cognizant of the idea that AI is acting on our behalf and we may not know it. So, John why don't you come on up. Who won the, whatever it's called, the raffle? >> You won. >> Thank you! >> How 'about a round of applause for the great panel. (audience applauding) Okay we have a put the business cards in the basket, we're going to have that brought up. We're going to have two raffle gifts, some nice Bose headsets and speaker, Bluetooth speaker. Got to wait for that. I just want to say thank you for coming and for the folks watching, this is our fifth year doing our own event called Big Data NYC which is really an extension of the landscape beyond the Big Data world that's Cloud and AI and IoT and other great things happen and great experts and influencers and analysts here. Thanks for sharing your opinion. Really appreciate you taking the time to come out and share your data and your knowledge, appreciate it. Thank you. Where's the? >> Sam's right in front of you. >> There's the thing, okay. Got to be present to win. We saw some people sneaking out the back door to go to a dinner. >> First prize first. >> Okay first prize is the Bose headset. >> Bluetooth and noise canceling. >> I won't look, Sam you got to hold it down, I can see the cards. >> All right. >> Stephanie you won! (Stephanie laughing) Okay, Sawny Cox, Sawny Allie Cox? (audience applauding) Yay look at that! He's here! The bar's open so help yourself, but we got one more. >> Congratulations. Picture right here. >> Hold that I saw you. Wake up a little bit. Okay, all right. Next one is, my kids love this. This is great, great for the beach, great for everything portable speaker, great gift. >> What is it? >> Portable speaker. >> It is a portable speaker, it's pretty awesome. >> Oh you grabbed mine. >> Oh that's one of our guys. >> (lauging) But who was it? >> Can't be related! Ava, Ava, Ava. Okay Gene Penesko (audience applauding) Hey! He came in! All right look at that, the timing's great. >> Another one? (people laughing) >> Hey thanks everybody, enjoy the night, thank Peter Burris, head of research for SiliconANGLE, Wikibon and he great guests and influencers and friends. And you guys for coming in the community. Thanks for watching and thanks for coming. Enjoy the party and some drinks and that's out, that's it for the influencer panel and analyst discussion. Thank you. (logo music)

Published Date : Sep 28 2017

SUMMARY :

is that the cloud is being extended out to the Edge, the next time I talk to you I don't want to hear that are made at the Edge to individual users We've got, again, the objective here is to have community From the Hurwitz Group. And finally Joe Caserta, Joe come on up. And to the left. I've been in the market for a couple years now. I'm the founder and Chief Data Scientist We can hear you now. And I have, I've been developing a lot of patents I just feel not worthy in the presence of Joe Caserta. If you can hear me, Joe Caserta, so yeah, I've been doing We recently rebranded to only Caserta 'cause what we do to make recommendations about what data to use the realities of how data is going to work in these to make sure that you have the analytics at the edge. and ARBI is the integration of Augmented Reality And it's going to say exactly you know, And if the machine appears to approximate what's and analyzed, conceivably some degree of mind reading but the machine as in the bot isn't able to tell you kind of some of the things you talked about, IoT, So that's one of the reasons why the IoT of the primary source. Well, I mean, I agree with that, I think I already or might not be the foundation for your agent All right, so I'm going to start with you. a lot of the applications we develop now are very So it's really interesting in the engineering space And the idea that increasingly we have to be driven I know the New England Journal of Medicine So if you let the, if you divorce your preconceived notions So the doctor examined me, and he said you probably have One of the issues with healthcare data is that the data set the actual model that you use to set priorities and you can have a great correlation that's garbage. What does the Edge mean to you? And then find the foods to meet that. And tequila, that helps too. Jim: You're a precision foodie is what you are. in the healthcare world and I think regulation For instance, in the case of are you being too biased We don't have the same notion to the same degree but again on the other side of course, in the Edge analytics, what you're actually transducing What are some of the hottest innovations in AI and that means kind of hiding the AI to the business user. I think keyboards are going to be a thing of the past. I don't have to tell you what country that was. AI being applied to removing mines from war zones. Judith what do you look at? and the problems you're trying to solve. And can the AI really actually detect difference, right? So that's going to be one of the complications. Doesn't matter to a New York City cab driver. (laughs) So GANT, it's a huge research focus all around the world So the thing that I find interesting is traditional people that aren't necessarily part of the AI community, and all of that requires people to do it. So for the panel. of finding the workflow that then you can feed into that the person happens to be commenting upon, It's going to take time before we do start doing and Jim's and Jen's images, it's going to keep getting Who owns the value that's generated as a consequence But the other thing, getting back to the medical stuff. and said that the principle investigator lied and color images in the case of sonograms and ultrasound and say the medical community as far as things in a second, any of you guys want to, go ahead Jennifer. to say like you know, based on other characteristics I find it fascinating that you now see television ads as many of the best radiologists. and then I'm going to close it down, It seems to me that before you can empower agency Well, let me do that and I'll ask Jen to comment. agreements, something like that has to be IoT scalable and context. So I may be the right person to do something yesterday, Or persona based. that block chain or its follow on to the technology into the atmosphere, so has some environmental change, the technology out there where you can put a camera And boy street lights that help you find a parking place, That's, I'd like to see a thousand things like that. that are always detecting the heat, the humidity, patterns that are actually in the data, but in the real world when you look at things and I'm going to give you the award Stephanie. and for the folks watching, We saw some people sneaking out the back door I can see the cards. Stephanie you won! Picture right here. This is great, great for the beach, great for everything All right look at that, the timing's great. that's it for the influencer panel and analyst discussion.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JudithPERSON

0.99+

JenniferPERSON

0.99+

JimPERSON

0.99+

NeilPERSON

0.99+

Stephanie McReynoldsPERSON

0.99+

JackPERSON

0.99+

2001DATE

0.99+

Marc AndreessenPERSON

0.99+

Jim KobielusPERSON

0.99+

Jennifer ShinPERSON

0.99+

AmazonORGANIZATION

0.99+

Joe CasertaPERSON

0.99+

Suzie WelchPERSON

0.99+

JoePERSON

0.99+

David FloyerPERSON

0.99+

PeterPERSON

0.99+

StephaniePERSON

0.99+

JenPERSON

0.99+

Neil RadenPERSON

0.99+

Mark TurnerPERSON

0.99+

Judith HurwitzPERSON

0.99+

JohnPERSON

0.99+

ElysianORGANIZATION

0.99+

UberORGANIZATION

0.99+

QualcommORGANIZATION

0.99+

Peter BurrisPERSON

0.99+

2017DATE

0.99+

HoneywellORGANIZATION

0.99+

AppleORGANIZATION

0.99+

Derek SiversPERSON

0.99+

New YorkLOCATION

0.99+

AWSORGANIZATION

0.99+

New York CityLOCATION

0.99+

1998DATE

0.99+

Tal Klein, The Punch Escrow | VMworld 2017


 

>> Narrator: Live from Las Vegas, it's the Cube, covering VMWorld 2017. Brought to you by VMWare and its ecosystem partners. (bright music) >> Hi, I'm Stu Miniman with the Cube, here with my guest host, Justin Warren. Happy to have a returning Cube alum, but in a different role then we had. It's been a few years. Tal Klein, who is the author of The Punch Escrow. >> Au-tor, please. No, I'm just kidding. (laughing) Tal, thanks so much for joining us. It's great for you to be able to find time to hang out with the tech geeks rather than all the Hollywood people that you've been with recently. (laughing) >> You guys are more interesting. (laughing) >> Well thank you for saying that. So last time we interviewed you, you were working for a sizable tech company. You were talking about things like, you know, virtualization, everything like that. Your Twitter handle's VirtualTal. So how does a guy like that become not only an author but an author that's been optioned for a movie, which those of us that, you know, are geeks and everything are looking at, as a matter of fact, Pac Elsiger this morning said, "we are seeing science fiction become science fact." >> That's right. >> Stu: So tell us a little of the journey. >> Yeah, cool, I hope you read the book. (laughing) I don't know, the journey is really about marketing, right? Cause a lot of times when we talk about virtual, like, in fact last time I was on the Cube, we were talking about the idea that desktops could be virtual. Cause back then it was still this, you know, almost hypothetical notion, like could desktops be virtual, and so today, you know, so much of our life is virtual. So much of the things that we do are not actually direct. I was watching this great video by Apple's new augmented reality product, where you sit in the restaurant and you look at it with your iPad, and it's your plate, and you can just shift the menu items, and you see the menu items on your plate in the context of the restaurant and your seat and the person you're sitting across from. So I think the future is now. >> Yeah, it reminds of, you know, the movie Wall-E, the animated one. We're all going to be sitting in chairs with our devices or Ready Player One, you know, very popular sci-fi book that's being done by Speilberg, I believe. >> Yes, yeah, very exciting. >> Tell us a little bit about your book, you know, we talked, when I was younger and used to read a lot of sci-fi, it was like, what stuff had they done 50 years ago that now's reality, and what stuff had they predicted, like, you know, we're going to go away from currency and go digital currency, and it's like we're almost there. But we still don't have flying cars. >> Yeah, we're, I mean, the main problem with flying cars is that we need pilots. And I think actually we're very close to flying cars, cause once we have self-driving vehicles and we no longer need to worry about it being a person behind the joystick, then we're in really good shape. That's really the issue, you know, the problem with flying cars is that we are so incompetent at driving and or flying. That's not our core competency, so let's just put things that do understand how to make those things happen and eliminate us from the equation. >> Everything is a people problem. >> Yeah, so when I wrote the book, Punch Escrow, Punch Escrow, (laughing) when I wrote the book, I really thought about all the things that I read growing up in science fiction, you know, things like teleportation, things like nanotechnology, things like digital currency, you know, how do we make those, how do we present those in a viable way that doesn't seem too science fictiony. Like one of the things I really get when people read the book is it feels really near-future, even though it's set like 100 plus years in the future, all the concepts in it feel very pragmatic or within reach, you know? >> Yeah, absolutely. It's interesting, we look at, you know, what things happen in a couple of years and what things take a long time. So artificial intelligence, machine learning, it's not like these are new concepts, you know? I read a great book by, you know, it was Isaacson, The Innovators. You go back to like Aida Lovelace, and the idea of what a machine or computer would be able to do. So 100 years from now, what's real, what's not real? We still all have jobs or something? >> We have jobs but different. Remember, I don't know if you're a historian, but back in the industrial age, there was a whole bunch of people screaming doom and gloom. In fact, if we go way back to the age of the Luddites, who just hated machines of any kind. I think that in general, we don't like, you know, we're scared of change. So I do think a lot of the jobs that exist today are going to be done by machines or code. That doesn't mean the jobs are going away. It means jobs are changing. A lot of the jobs that people have today didn't exist in the industrial age. So I think that we have to accept that we are going to be pragmatic enough to accept the fact that humans will continue to evolve as the infrastructure powering our world evolves, you know? We talk about living in the age of the quantified self, right? There's a whole bunch that we don't understand how to do yet. For example, I can think of a whole industry that tethers my FitBit to my nutrition. You know, like there's so much opportunity that for us to say, oh that's going to be the end of jobs, or the end of innovation or the end of capitalism, is insane. I think this just ushers in a whole new age of opportunity. And that's me, I'm just an optimist that way, you know. >> So the Luddites did famously try to destroy the machines. But the thing is, the Luddites weren't wrong. They did lose their jobs. So what about the people whose jobs are replaced, as you say net new, there's a net new number of jobs. But specific individuals, like people who manufacture cars for example, lose their jobs because a robot can do that job safer and better and faster than a human can do it. So what do we do with those humans? Because how do we get people to have new jobs and retrain themselves? >> I address some of these notions in the book. For example, one of the weird things that we're suffering from is the lack of welders in society today, cause welding has become this weird thing that we don't think we need people for, so people don't really get trained up in it because, you know, machines do a lot of welding but there's actually specialty welding that machines can't do. So I think the people who are really good at the things that they do will continue to have careers. I think their careers will become more niche. Therefore they'll be able to create, to demand a higher wage for it because almost like a carpenter, you know, a specialist carpenter will be able to earn a much higher wage today by having fewer customers who want really custom carpentry versus things that can be carved up by a machine. So I think what we end up seeing is that it's not that those jobs go away. It's they become more specialized. People still want Rolls Royces. People still want McLarens. Those are not done by machines. Those are hand-made, you know? >> That's an interesting point, so the value of something being hand-made becomes, instead of it being a worse product, it's actually- >> Tal: That's a big concept in the book. >> Oh okay, right. >> A big concept in the book is that we place a lot of value on the uniqueness of an object. And that parlays in multiple ways. So one of the examples that I use in the book is the value of a Big Mac actually coming from McDonald's. Like, you can make a Big Mac. We know the recipe for a Big Mac. But there is a weird sort of nacent value to getting a Big Mac from McDonald's. It's something in our brain that clicks that tethers it to an originality. Diamonds, another really good example. Or you know, we know there's synthetic diamonds. We still want the ones that get mined in the cave. Why? We don't know. Right, they're just special. >> Because De Beers still has really good marketing. (laughing) >> So I think there's- >> That's interesting, so the concept of uniqueness, which again comes to scarcity and so on. As an author, someone who is no doubt, signed a lot of his book, that means that that book is unique because it's signed by the author, unlike something which is mass produced and there is hopefully thousands and thousands of copies that you sell. >> Going into this, I actually thought about that a lot. And that's why I've created like multiple editions of the book. So like the first 500 people who pre-ordered it, they get like a special edition of the book that's like stamped and all this kind of stuff. I even used different pens. (laughs) I appreciate that because I'm also a collector. I collect music, I collect books. And you know, so I see those aspects in myself. So I know what I value about them, you know? >> And the crossover between music and books is interesting. So as someone who has a musical background, I know that there's a lot of musicians who'll come out with special editions, and you know, because this is an age where we can download it. You can download the book. Do you think there is something, is there something that is intrinsic to having a physical object in a virtual world? >> I think to our generation, yes. I'm not so sure about millennials, when they grow up. But there are, for example, I'm going to see U2 next week, I'm very lucky to see that. But part of the U2 buying experience, to get access to the presale, you need to be part of their fan club. To be a part of their fan club, you need to get, you get like a whole bunch of limited edition posters, limited edition vinyl, and all this kind of stuff. So there's an experience. It's no longer just about going to see U2 at a concert. There's like the entire package of you being a special U2 fan. And they surround it with uniqueness. It's not necessarily limited, but there's an enhanced experience that can't just be, it's not just about you having a ticket to a single concert. >> Justin: Yeah, okay. >> I'm curious, the genre, if you'd call it, is hard science fiction. >> Yes. >> The challenge with that is, you know, what is an extension of what we're doing, and what is fiction? And people probably poke at that. Have you had any interesting experience, things like that? I mean, I've listened to a lot of stuff like Andy Weir, like let the community give feedback before he created the final The Martian. (laughing) But so yeah, what's it like, cause we can, the geeks can be really harsh. >> Yes, I've learned from my Reddit experience that, so what's really funny about it is the first draft of this novel was hard as nails. It was crazy. And my publisher read it, and it would have made all the hard science fiction guys super happy. My publisher read it, he was like, you've written a really great hard science fiction book, and all five people who read it are going to love it. (laughing) You know, but like, I came here with my buddy Danny. He couldn't even get through the first three pages of it. He's like, he wanted to read it. So part of working through the editorial process is saying, look, I care a lot about the science because one of my deep goals is to write a STEM-oriented book that gets people excited about technology and present the future as not a dystopian place. And so I wanted the science to be there and have a sort of gravity to the narrative. But yeah, it's tough. I worked with a physicist, a biologist, a geneticist, an anthropologist, and a lawyer. (laughs) Just to try to figure out, how do we carve out, you know, what does the future look like, what does the evolution of each individual sciences, we talked about the mosquitoes, right? You know, we're already doing a lot of crazy stuff with mosquitoes. We're modifying them so that the males mate with females that carry the Zika virus, you know, give birth to offspring that never reach maturity. I mean, this is just crazy, it's science fiction. And now that they're working on modifying female mosquitoes into vaccine carriers instead of disease carriers. I mean, this is science fiction, right? Like who believes this stuff? It's crazy. >> Christopher is amazing. >> Yeah, I've loved, there's been a bunch of movies recently that have kind of helped to educate on STEM some, you know, Martian got a lot of people excited, you know, Hidden Figures, the one that I could being my kids that are teenagers now into it and they get excited, oh, science is great. So the movie, how much will you be involved? You know, what can you share about that experience, too, so far? >> It's been, it's very surreal. That's the word is use to describe it, the honest, god's honest truth, I mean. I've been very lucky in that my representation in Hollywood is this rock-solid guy called Howie Sanders. And he's this bigger-than-life Hollywood agent guy. He's hooked me up, we've made a lot of business decisions that we're focused less on the money and more on the team, which is nice to be, like when you're in your 40s and you're more financially settled, you're not in the kind of situation where you might be in your 20s and just going to sign the first deal that people give you. So we really focused on hooking up with like the director, James Bovin is, you know, he's the guy who co-created Flight of the Concords. He did the Muppets movie, you know, Alice Through the Looking Glass. Really professional guy but also really understands the tone of the book, which is like humorous, you know, kind of sarcastic. It's not just about the technology. It's also about the characters. Same thing with the production team. The two producers, Mandeville Productions, I was just talking to Todd Lieberman, and we're talking about just what is augmented reality, like how does it look like on the screen? So I'm not- >> It's not going to look like Blade Runner is what I'm hearing. >> (laughs) I don't know. It's going to look real. I imagine, I don't know, they're going to make whatever movie they're going to make, but their perspective, one of the things we talked about is keeping the movie very grounded. Like you know, one of the big questions they ask first going into it is before we even had any sort of movie discussions is like is this more of like a Looper, Gattica, or District Nine, or is it more like The Fifth Element, you know, I mean, is it like, do you want it to be this sort of grounded movie that feels authentic and real and near future or do you want this to be like completely alien and weird and out of it. And the story is more grounded. So I think a lot, hopefully what we display on the screen will not feel that far away from reality. >> Okay, yeah. >> You do marketing in your day job. >> I do. >> I'm curious as you look at this, kind of the balance of educating, reaching a broad audience, you have passion for STEM, what's your thoughts around that? Is it, I worry there's so much general, like television or things like that, when I see the science stuff, it like makes me groan. Because you know, it's like I don't understand that. >> I am the worst, because I got a security background too, so that's the one I get scrambled on. The war, I mean, like. >> Wait, thank goodness I updated my firewall settings because I saved the world from terrorists. >> Hang on, we're breaking through the first firewall. Now we're through the second firewall. (laughing) Now we're going through the third firewall, like 15 firewalls. And let me upload the virus, like all that stuff. It's difficult for me. I think that, you know, hopefully, there's also a group in Hollywood called the Hollywood Science and Entertainment Exchange. And they're a group of scientists who work with film makers on, you know, reigning things in. And film makers don't usually take all their advice, i.e. Interstellar, (laughing) but you know, I think (laughing) in many cases there's some really good ideas that come to play into it that hopefully bring up, like I think Jarvis for example, in Iron Man or the Avengers is a really cool implementation of what the future of AI systems might be like. And I know they used the Hollywood Science Exchange to figure out how is that going to work? And I think the marketing aspect is, you know, the reason I came up with the idea for this book is because my CEO of a company I used to work for, he had this whole conversation about teleportation, like teleportation was impossible. And he's like, it's not because the science, yes, the science is a problem right now, but we'll get over it. The main issue is that nobody would ever step foot into a device that vaporizes them and then printed them out somewhere else. And I said, well that's great, cause that's a marketing problem. (laughing) >> Yeah, you're dead every time you do it. But it's the same you, I can't tell the difference. >> Well, you say you're dead, I'm saying you're just moving. (laughing) >> Artificial intelligence, you know, kind of a big gap between the hype to where we need to go. What's your thoughts on that space in general? >> I think that we have, it's a great question because I feel like that's a term that gets thrown around a lot, and I think as a result it's becoming watered down. So you've this sort of artificial intelligence that comes with like, you know, Google building an app that can beat the world's best Go player, which is a really, really difficult puzzle. The problem is, that app can do one thing, and that's play Go. You put in it a chess game, and it's like I don't know what's going on. >> It's a very specialized kind of intelligence, yeah. >> Now with Open AI, you know, they just had some pretty interesting implementations where they actually played video games with a real live competition and won. Again, you know, but without the smack talk, which really I think would add a lot. Now you got to get an AI to smack talk. So I think the problem is we haven't figured out a really good way of creating a general purpose AI. And there's a lot of parallels to the evolution of computing in general because if you look at how computers were before we had general purpose operating systems like Unix, every computer was built to do a very, very specific function, and that's kind of what AI is right now. So we're still waiting to have a sort of general purpose AI that can do a lot of specialized activities. >> Even most robots are still very single-purpose today. >> That's the fundamental problem. But you're seeing the Cambridge guys are working on sort of the bipedal robot that can do lots of things. And Siri's getting better, Cortana's getting better, Watson's getting better, but we're not there. We still need to find a really good way of integrating deep knowledge with general purpose conversational AI. Cause that's really what you need to like, Stu, what do you need? Here, let me give it to you, you know? >> Do you draw a distinction between AI that's able to simply sort of react as a fairly complex machine or something that can create new things and add something? >> That's in the book as well. So the fundamental thing that I don't think we get around even in the future is giving computers the ability to actually come up with new ideas. There's actually a career, the main job of the protagonist in the book, his job is a salter. And his job is to salt AI algorithms to introduce entropy so they can come up with new ideas. >> Okay, interesting. >> So based off the sort of chaos theory. >> Like chaos monkey, right? >> Yeah. And that's really what you're trying to do is like, okay, react to things that are happening because you can't just come up with them on their own. There's a whole, I don't want to bore you, but there's a whole bunch of stuff in the book about how that works. >> It's like hand-carving ideas that are then mass produced by machines. >> Yeah, I don't know if you guys are going to have Simon Crosby on here, he's kind of like an expert on that. He was the Dean of Kings College, which is where Turing came from. So he really knows a lot about that. He's got a lot of strong ideas about it. But I learned a lot from him in that regard. There's a lot of like, the snarky spirit of Simon Crosby lives on in my book somewhere. But he's just funny cause he's, coming from that field, he immediately sees a lot of BS right off the bat, whenever anybody's presenting. He's got like the ability to just cut through it. Because he understands what it would actually take to make that happen, you know? So I tried to preserve some of that in the book. >> That is refreshing in the tech industry. >> So Tal, I need to let you, you know, wrap this up. Give us a plug for the book, tell us, when are we going to be able to see this on the big screen? >> I don't know about the big screen, but the Punch Escrow is now available. You can get it on Amazon, Barnes and Noble, anywhere books are sold. It's been optioned by Lionsgate. The director attached to it is James Bovin, production team is Mandeville Productions. I'm very excited about it. Go check it out. It's a pretty quick read, reads like a technothriller. It's not too hard. And it's fun for the whole family. I think one of the coolest things about it is that the feedback I've been getting has been that it really is appealing to everybody. I've got mother-in-laws reading it, you know, it's pretty cool. Initially I sold it, my initial audience is like us, but it's kind of cool, like, Stu will finish the book, he'll give it to, you know, wife, daughter, anything, and they're really digging it. So it's kind of fun. >> Justin: Thanks a lot. >> Tal Klein, really appreciate you coming. Congratulations on the book, we look forward to the movie. Maybe, you know, we'll get the Cube involved down the road. (laughing) >> And we're giving away 75 copies of it here at Lakeside booth, if you guys want to come. >> Tal Klein, author of The Punch Escrow, also CMO of Lakeside, who is here in the thing. But yeah, (laughing) a lot of stuff. Justin and I will be back with more coverage here from VMWorld 2017. You're watching the Cube. (bright music)

Published Date : Aug 28 2017

SUMMARY :

Brought to you by VMWare but in a different role then we had. It's great for you to be able to find time (laughing) You were talking about things like, you know, So much of the things that we do are with our devices or Ready Player One, you know, you know, we talked, when I was younger you know, the problem with flying cars is that things like digital currency, you know, It's interesting, we look at, you know, of jobs, or the end of innovation So the Luddites did famously try because, you know, machines do a lot of welding So one of the examples that I use in the book (laughing) of copies that you sell. So I know what I value about them, you know? and you know, because this is an age of you being a special U2 fan. I'm curious, the genre, if you'd call it, The challenge with that is, you know, is the first draft of this novel was hard as nails. So the movie, how much will you be involved? He did the Muppets movie, you know, It's not going to look like Blade Runner Like you know, one of the big questions Because you know, it's like I don't understand that. I am the worst, because I got a security background too, because I saved the world from terrorists. I think that, you know, But it's the same you, I can't tell the difference. Well, you say you're dead, Artificial intelligence, you know, that comes with like, you know, Google building an app Now with Open AI, you know, Cause that's really what you need to like, So the fundamental thing that I don't think because you can't just come up with them on their own. that are then mass produced by machines. He's got like the ability to just cut through it. So Tal, I need to let you, you know, wrap this up. is that the feedback I've been getting has been Maybe, you know, we'll get the Cube involved down the road. at Lakeside booth, if you guys want to come. Justin and I will be back with more coverage here

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Todd LiebermanPERSON

0.99+

Justin WarrenPERSON

0.99+

Tal KleinPERSON

0.99+

James BovinPERSON

0.99+

JustinPERSON

0.99+

Alice Through the Looking GlassTITLE

0.99+

Andy WeirPERSON

0.99+

SpeilbergPERSON

0.99+

DannyPERSON

0.99+

75 copiesQUANTITY

0.99+

Howie SandersPERSON

0.99+

SiriTITLE

0.99+

Barnes and NobleORGANIZATION

0.99+

Flight of the ConcordsTITLE

0.99+

Hollywood Science ExchangeORGANIZATION

0.99+

VMWareORGANIZATION

0.99+

JarvisPERSON

0.99+

Hollywood Science and Entertainment ExchangeORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

Pac ElsigerPERSON

0.99+

The Punch EscrowTITLE

0.99+

LionsgateORGANIZATION

0.99+

iPadCOMMERCIAL_ITEM

0.99+

CortanaTITLE

0.99+

ChristopherPERSON

0.99+

Simon CrosbyPERSON

0.99+

Wall-ETITLE

0.99+

TuringPERSON

0.99+

AppleORGANIZATION

0.99+

next weekDATE

0.99+

AmazonORGANIZATION

0.99+

District NineTITLE

0.99+

first draftQUANTITY

0.99+

two producersQUANTITY

0.99+

second firewallQUANTITY

0.99+

oneQUANTITY

0.99+

15 firewallsQUANTITY

0.99+

Mandeville ProductionsORGANIZATION

0.99+

third firewallQUANTITY

0.99+

TalPERSON

0.99+

five peopleQUANTITY

0.99+

GoogleORGANIZATION

0.99+

Ready Player OneTITLE

0.98+

first firewallQUANTITY

0.98+

Blade RunnerTITLE

0.98+

firstQUANTITY

0.98+

20sQUANTITY

0.98+

Iron ManTITLE

0.98+

first 500 peopleQUANTITY

0.98+

first three pagesQUANTITY

0.98+

The Fifth ElementTITLE

0.98+

todayDATE

0.98+

StuPERSON

0.98+

LooperTITLE

0.98+

40sQUANTITY

0.97+

GatticaTITLE

0.97+

McDonald'sORGANIZATION

0.97+

The MartianTITLE

0.97+

TwitterORGANIZATION

0.97+

IsaacsonPERSON

0.97+

50 years agoDATE

0.96+

MartianTITLE

0.96+

100 plus yearsQUANTITY

0.96+

VMWorld 2017EVENT

0.95+

GoTITLE

0.95+

UnixTITLE

0.94+

CubePERSON

0.94+

single concertQUANTITY

0.94+

HollywoodLOCATION

0.93+

one thingQUANTITY

0.93+

Kings CollegeORGANIZATION

0.92+

CubeCOMMERCIAL_ITEM

0.92+

U2ORGANIZATION

0.92+

McLarensORGANIZATION

0.92+

first dealQUANTITY

0.91+

VMworld 2017EVENT

0.9+

Catherine Blackmore, Oracle Marketing Cloud | Oracle Modern Customer Experience 2017


 

(energetic upbeat music) >> Host: Live from Las Vegas, it's The CUBE. Covering Oracle Modern Customer Experience 2017. Brought to you by Oracle. >> Welcome back, everyone. We are here live in Las Vegas at the Mandalay Bay for Oracle's Modern CX show, Modern Customer Experience. The Modern Marketing Experience converted into the Modern CX Show. I'm John Furrier with The Cube. My co-host Peter Burris. Day two of coverage. Our next guest is Catherine Blackmore, Global Vice President, Customer Success, Global Customer Success at Oracle Marketing Cloud. Catherine, welcome back to The CUBE. Great to see you. >> Thank you so much for having me here. It's been an incredible week, just amazing. >> Last year we had a great conversation. Remember we had. >> Yes. >> It was one of those customer focused conversations. Because at the end of the day, the customers are the ones putting the products to use, solving their problems. You were on stage at the keynote. The theme here is journeys, and the heroes involved. What was the summary of the keynote? >> Sure. As you say, this theme has really been around heroic marketing moments. And in a way, I wanted to take our marketers and the audience to an experience and a time where I think a lot of folks can either remember or certainly relate where, what was the beginning of really one experience, which was Superman. If you think about heroism and a superhero, well, Superman will come to mind. But I think what was interesting about that is that it was created at a time where most folks were not doing well. It was actually during the Great Depression. And most folks wouldn't realize that Superman almost never came to be. It was an image, an icon, that was created by two teenage boys, Jerry Shuster and Joe Siegal. And what they did is they got audience. They understood, just as two teenage boys, my parents, my family, my community is just not doing well. And we see that folks are trying to escape reality. So we're going to come up with this hero of the people. And in doing so, what's interesting is, they really were bold, they were brave. They presented a new way to escape. And as a result, DC Comics took it up. And they launched, and they sold out every single copy. And I think it's just a really strong message about being able to think about creativity and being bold. Jerry and Joe were really the heroes of that story, which was around. My challenge to the audience is, who's your Superman? What is your creative idea that you need to get out there? Because in many ways, we need to keep moving forward. At the same time, though, balance running a business. >> It's interesting, you did mention Superman and they got passed over. And we do a lot of events in the industry, a lot of them are big data events. And it's one little insight could actually change a business, and most times, some people get passed over because they're not the decision maker or they may be lower in the organization or they may just be, not be knowing what to do. So the question on the Superman theme, I have to ask you, kind of put you on the spot here is, what is the kryptonite for the marketer, okay, because >> (laughing) Yes. >> there's a lot of obstacles in the way. >> Catherine: It is. >> And so people sometimes want to be Superman, but the kryptonite paralyzes them. >> Catherine: Yeah. >> Where's the paralysis? >> It's funny that you say that. I think I actually challenge folks to avoid the kryptonite. There was three things that we really talked about. Number one is, Modern Marketing Experience, it's just an incredible opportunity for folks to think ahead, dream big, be on the bleeding edge. But guess what, we're all going to go on flights, we're going to head home, and Monday morning's going to roll around and we're going to be stuck and running the business. And my inspiration and, really, challenge to the audience and to all of our marketers is how do we live Modern Marketing Experience everyday? How do we keep looking ahead and balance the business? And, really, those heroic marketers are able to do both. But it doesn't stop there. We talked a lot about this week, about talent. Do we have the right team? Kryptonite is not having the right people for today and tomorrow, and then in addition to that, you can't just have a team, you can't just have a vision, but what's your plan? Where actually having the right stakeholders engaged, the right sponsorship, that's certainly probably the ultimate kryptonite if you don't. >> The sponsorships are interesting because the people who actually will empower or have empathy for the users and empower their people and the team have to look for the yes's, not the no's. Right. And that's the theme that we see in the Cloud success stories is, they're looking for the yes. They're trying to get that yes. But they're challenging, but they're not saying no. That's going to shut it down. We've seen that in IT. IT's been a no-no, I was going to say no ops but in this digital transformation with the emphasis on speed, they have to get to the yes. So the question is, in your customer interactions, what are some of those use cases where getting to that yes, we could do this, What are some of the things, is it data availability? >> Catherine: Absolutely. >> Share some color on that. >> I think, So I actually had a wonderful time connecting with Marta Federici, she met with you earlier. And I love her story, because she really talks about the culture and placing the customer at the center of everything they're doing, to the extent that they're telling these stories about why are we doing this? We're trying to save lives, especially in healthcare. And just to have stories and images. And I know some companies do an amazing job of putting the customer up on the wall. When we talk to our customers about how do we actually advance a digital transformation plan? How do we actually align everyone towards this concept of a connected customer experience? It starts with thinking about everyone who touches the customer every day and inspiring them around how they can be part of being a customer centric organization. And that's really, that's really important. That's the formula, and that's what we see. Companies, that they can break through and have that customer conversation, it tends to align folks. >> Interesting. We were talking earlier, Mark Hurd's comment to both the CMO Summit that was happening in a separate part of the hotel here in the convention center, as well as his keynote. He was saying, look, we have all this technology. Why are we doing this one percent improvement? And he was basically saying, we have to get to a model where there's no data department anymore. There never was. >> That's right. >> And there shouldn't be. There shouldn't be, that department takes care of the data. That's kind of the old way of data warehousing. Everyone's a data department, and to your point, that's a liberating, and also enables opportunities. >> It does. We talked a lot. Actually, the CMO Summit that we had as well this week, a lot of our CMOs were talking about the democratization of data. And Elissa from Tableau, I think you also talked to. We talked about, how do you do that? And why, what are those use cases, where, Kristen O'Hara from Time Warner talked about it as well. And I think, that's where we have to go. And I think there's a lot of great examples on stage that I would like to think our marketers, and quite frankly, >> Which one's your favorite, favorite story? >> My favorite story. >> John: Your favorite story. >> Wow, that's really putting me on the spot. >> It's like picking your favorite child. I have four. I always say "well, they're good at this sport, or this kid's good in school." Is there? >> I guess one. >> John: Or ones that you want to highlight. >> Well one that I, because we talked about it today. And it was really a combination of team and plan. Just really highlighting on what Marta's driving. If you think about the challenges of a multinational >> Peter: Again, this is at Philips. >> John: Marta, yeah. >> Catherine: This is Philips, Royal Philips. So Marta, what she's really, her team has been trying to accomplish, both B to C and B to B, and it speaks to data, and it talks about obviously having CRM be kind of that central nervous system so that you can actually align your departments. But then, being able to think about team. They've done a lot of work, really making certain they have the team for today and the future. They're also leveraging partners, which is also key to success. And then, having a plan. We spent time with Royal Philips actually at headquarters a number of weeks ago and they are doing this transformation, this disruptive tour with all of their top folks across, around the world that running their different departments, to really have them up and them think differently which is aligning them around that culture of looking out to the future. >> Peter: Let's talk a bit about thinking differently. And I want to use you as an example. >> Catherine: Sure. >> So your title is Customer Success. Global Vice President, Global Customer Success. What does that mean? >> Sure. I know a lot of folks, I'd like to think that, that's just a household name right now in terms of Customer Success. But I realize it's still a little new and nascent. >> We've seen it elsewhere but it's still not crystal clear what it means. >> Sure, sure. So when I think of Customer Success, the shorter answer is, we help our customers be successful. But that, what does it really mean? And when I think about the evolution of what Customer Success, the department, the profession, the role, has really come to be, it's serving a very important piece of this Cloud story. Go back a decade when we were just getting started actually operationalizing SaaS and thinking about how to actually grow our businesses, we found that there just needed to be a different way of managing our customers and keeping customers, quite frankly. Cause as easy as it is to perhaps land a SaaS customer, and a Cloud customer, because it's easier to stand them up and it's easier for them to purchase, but then they can easily leave you too. And so what we found is, the sales organization, while, obviously understands the customer, they need to go after new customers. They need to grow share. And then in addition to that, in some organizations, there still are services to obviously help our customers be successful. And that's really important, but that is statement-of-work-based. There's a start and a stop and an end to that work. And then obviously there's support that is part of a services experience, but they tend to be queue-based, ticket-based, break-fix. And what we found in all of this is, who ultimately is going be the advocate of the customer? Who's going to help the customer achieve ROI business value and help them ensure that they are managing what they've purchased and getting value, but also looking out towards the future and helping them see what's around the corner. >> Catherine I want to ask the question. One of the themes in your keynote was live in the moment every day as a modern marketing executive, build your team for today and tomorrow, and plan for the future. You mentioned Marta, who was on yesterday, as well as Kristen O'Hara from Time Warner. But she made an interesting comment, because I was trying to dig into her a little bit, because Time Warner, everyone knows Time Warner. So, I was kind of curious. At the same time, it was a success story where there was no old way. It was only a new way, and she had a pilot. And she had enough rope to kind of get started, and do some pilots. So I was really curious in the journey that she had. And one thing she said was, it was a multi-year journey. >> Catherine: Yes. >> And some people just want it tomorrow. They want to go too fast. Talk through your experience with your customer success and this transformation for setting up the team, going on the transformational journey. Is there a clock? Is there a kind of order of magnitude time frame that you've seen, that works for most companies? >> Sure. And actually I want to bring in one more experience that I know folks had here at Modern Marketing, which was, also, Joseph Gordon-Levitt, he actually talked about this very thing. I think a lot of folks related to that because what he's been doing in terms of building out this community and creating crowd-sourced, or I should say, I think he would want to say community-sourced content and creativity. It was about, you can't really think about going big. Like I'm not thinking about feature film. I'm thinking about short video clips, and then you build. And I think everyone, the audience, like okay I get that. And Kristen's saying, it took many little moments to get to the big moment. I think folks want to do it all, right at the very beginning. >> John: The Big Bang Theory, just add, >> Absolutely. >> Just add water, and instant Modern Marketing. >> It is, it is. >> John: And it's hard. >> And what we have found, and this is why the planning part is so important, because what you have to do, and it might not be the marketer. The marketer, that VP of Marketing, even that CMO may know, it's going to be a three year journey. But sometimes it's that CEO, Board of Director alignment that's really required to mark, this is the journey. This is what year one's going to look like. This is what we're going to accomplish year two. There may be some ups and downs through this, because we need to transform sales, we need to transform back in operations in terms of how we're going to retire old processes and do new. And in doing so, we're going to get to this end state. But you need all of your stakeholders to be engaged, otherwise you do get that pressure to go big because, you know what Mark was saying, I've got 18 months, we need to be able to show improvement right away. >> We were talking about CIOs on another show that I was doing with Peter. And I think Peter made the comment that the CIO's job sometimes doesn't last three years. So these transformations can't be three years. They got to get things going quicker, more parallel. So it sounds like you guys are sharing data here at the event among peers >> Catherine: Yes. >> around these expectations. Is there anything in terms of the playbook? >> Catherine: Yes. >> Is it parallel, a lot of AGILE going on? How do you get those little wins for that big moment? >> So I think this is where the, what I would call, the League of Justice. You got to call in that League of Justice. For all you Superman out there. Because in many ways you're really challenged with running the business, and I think that's the pressure all of us are under. But when you think about speeding up that journey, it really is engaging partners, engaging, Oracle Marketing Cloud, our success and services team. I know you're going to be talking to Tony a little bit about some of the things we're building but that's where we can really come in and help accelerate and really demonstrate business value along the way. >> Well one more question I had for you. On the show floor, I noticed, was a lot of great traffic. Did you guys do anything different this year compared to last year when we talked to make this show a little bit more fluid? Because it seems to me the hallway conversation has been all about the adaptive intelligence and data is in every conversation that we have right now. What have you guys done differently? Did it magically just come to you, (Catherine laughing) Say, we're going to have to tighten it up this year? What was the aha moment between last year and this year? It's like night and day. >> I would like to think that we are our first and best customer, because as we ourselves are delivering technology, we ourselves also have to live what we tell our customers to do every day. Look at the data, look at the feedback. Understand what customers are telling you. How can you help customers achieve value? And we think of this as an important moment for our partners and our companies, that are here spending money and spending time to be here, achieve value. What we've done is really create an experience where it's so much easier to have those conversations. Really understanding the flow of traffic, and how we can actually ensure people are able to experience our partners, get to know them, get to know other customers. A lot of folks, too, have been saying, love keynote, love these different breakout sessions, but I want to connect with other folks going through that same thing that I am, so I can get some gems, get some ideas that I can pick up. >> And peer review is key in that. They talk to each other. >> Exactly. That's right, that's right. And so we've really enabled that, the way that we've laid out the experience this year. And I know it's even going to be better next year. Cause I know we're going to collect a lot more data. >> Well last year we talked a lot about data being horizontally scalable. That's all people are talking about now, is making that data free. The question for you is, in the customer success journeys you've been involved, what's the progress bar of the customer in terms of, because we live in Silicon Valley. So oh yeah, data driven marketer! Everyone's that. Well, not really. People are now putting the training wheels on to get there. Where are we on the progress bar for that data driven marketer, where there's really, the empathy for the users is there. There's no on that doubts that. But there's the empowerment piece in the organization. Talk about that piece. Where are we in that truly data driven marketer? >> Oh, we're still early days. It was obvious in talking to our various CMO's. We were talking about talent and the change, and what the team and the landscape needs to look like to respond to certainly what we've experienced in technology over the last number of years and then even what was introduced today. That level of, I need to have more folks that really understand data on my team but I'll tell you, I think the thing that's really interesting though about what we've been driving around technology and specifically AI. I love what Steve said, by the way, which is if a company is presenting AI as magic, well the trick's on you. Because truly, it's not that easy. So I think the thing that we need to think about and we will work with our customers on is that there's certainly a need and you have to be data driven but at the same time, we want to be innovation ready and looking and helping our customers see the future to the extent that how we think about what we're introducing is very practical. There's ways that we can help our customers achieve success in understanding their audience in a way that is, I wouldn't say, it's just practical. We can help them with use cases, and the way the technology is helping them do that, I think we're going to see a lot of great results this year. >> AI is great, I love to promote AI hype because it just makes software more cooler and mainstream, but I always get asked the question, how do you evaluate whether something is BS in AI or real? And I go, well first of all, what is AI? It's a whole 'nother story. It is augmented intelligence, that's my definition of it. But I always say, "It's great sizzle. Look for the steak." So if someone says AI, you got to look on the grill, and see what's on there, because if they have substance, it's okay to put a little sizzle on it. So to me, I'm cool with that. Some people just say, oh we have an AI magical algorithm. Uh, it's just predictive analytics. >> Catherine: Yes. >> So that's not really AI. I mean, you could say you're using data. So how do you talk to customers when they say, "Hey, AI magic or real? How do I grok that?" How do I figure it out? >> I think it's an important advancement, but we can't be distracted by words we place on things that have probably been around for a little while. It's an important way to think about the technology, and I think even Steve mentioned it on stage. But I think we're helping customers be smarter and empowering them to be able to leverage data in an easier way, and that's what we have to do. Help them, and I know this is talked a lot, not take the human and the people factor out because that's still required, but we're going to help them be able to concentrate on what they do best, whether it's, I don't want to have to diminish my creative team by hiring a bunch of data scientists. We don't want that. We want to be able to help brands and companies still focus on really understanding customers. >> You know, AI may be almost as old as Superman. >> Catherine: (laughing) I think you're right. >> Yeah, because it all comes back to Turing's test of whether or not you can tell the difference between a machine and a human being, and that was the 1930s. >> Well, neural networks is a computer science. It's a great concept, but with compute and with data these things really become interesting now. >> Peter: It becomes possible. >> Yeah, and it's super fun. But it promotes nuanced things like machine learning and Internet Of Things. These are geeky under-the-hood stuff that most marketers are like, uh what? Yeah, a human wearing a gadget is an Internet of Things device. That's important data. So then if you look at it that way, AI can be just a way to kind of mentally think about it. >> That's right, that's right. >> I think that's cool for me, I can deal with that. Okay, final question, Catherine, for you. >> Catherine: Yes. >> What's the most important thing that you think folks should walk away from Modern CX this year? What would you share from this show, given that, on the keynote, CMO Summit, hallways, exhibits, breakouts, if there's a theme or a catalyst or one? >> Peter: What should they put in the trip report? >> It's all about the people. I think that, if I were to distill it down, you think about that word bubble chart, that's people. I think that's the biggest word that came out of this. As much as technology is important, it's going to enable us, it's going to enable our people, and it's going to put a lot of attention on our talent and our folks that are going to be able to take our customers to the next level. >> And then people are the ones that are generating the data too, that want experiences, to them. >> Catherine: That's right. >> It's a people centric culture. >> Catherine: It is. >> Catherine Blackmore here on site, The CUBE, at Modern CX's The CUBE, with more live coverage here from the Mandalay Bay in Las Vegas, live after this short break. (electronic music)

Published Date : Apr 27 2017

SUMMARY :

Brought to you by Oracle. We are here live in Las Vegas at the Mandalay Bay Thank you so much for having me here. Remember we had. putting the products to use, solving their problems. and the audience to an experience and a time So the question on the Superman theme, I have to ask you, And so people sometimes want to be Superman, I think I actually challenge folks to avoid the kryptonite. And that's the theme that we see And just to have stories and images. And he was basically saying, we have to get to a model There shouldn't be, that department takes care of the data. And Elissa from Tableau, I think you also talked to. I always say "well, they're good at this sport, And it was really a combination of team and plan. and it speaks to data, And I want to use you as an example. What does that mean? I'd like to think that, that's just but it's still not crystal clear what it means. the profession, the role, has really come to be, And she had enough rope to kind of get started, And some people just want it tomorrow. I think a lot of folks related to that and it might not be the marketer. And I think Peter made the comment that Is there anything in terms of the playbook? about some of the things we're building and data is in every conversation that we have right now. and spending time to be here, achieve value. They talk to each other. And I know it's even going to be better next year. in the customer success journeys you've been involved, to the extent that how we think about And I go, well first of all, what is AI? I mean, you could say you're using data. and empowering them to be able to leverage data and that was the 1930s. It's a great concept, but with compute and with data So then if you look at it that way, I think that's cool for me, I can deal with that. and it's going to put a lot of attention that are generating the data too, from the Mandalay Bay in Las Vegas,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
CatherinePERSON

0.99+

MartaPERSON

0.99+

StevePERSON

0.99+

TonyPERSON

0.99+

Jerry ShusterPERSON

0.99+

PeterPERSON

0.99+

Peter BurrisPERSON

0.99+

Kristen O'HaraPERSON

0.99+

Marta FedericiPERSON

0.99+

JohnPERSON

0.99+

MarkPERSON

0.99+

Catherine BlackmorePERSON

0.99+

Mark HurdPERSON

0.99+

Joseph Gordon-LevittPERSON

0.99+

Joe SiegalPERSON

0.99+

ElissaPERSON

0.99+

John FurrierPERSON

0.99+

Time WarnerORGANIZATION

0.99+

Royal PhilipsORGANIZATION

0.99+

last yearDATE

0.99+

Silicon ValleyLOCATION

0.99+

KristenPERSON

0.99+

next yearDATE

0.99+

JerryPERSON

0.99+

DC ComicsORGANIZATION

0.99+

OracleORGANIZATION

0.99+

Las VegasLOCATION

0.99+

PhilipsORGANIZATION

0.99+

fourQUANTITY

0.99+

Mandalay BayLOCATION

0.99+

18 monthsQUANTITY

0.99+

three yearQUANTITY

0.99+

Last yearDATE

0.99+

Monday morningDATE

0.99+

tomorrowDATE

0.99+

yesterdayDATE

0.99+

JoePERSON

0.99+

this yearDATE

0.99+

three yearsQUANTITY

0.99+

League of JusticeTITLE

0.99+

firstQUANTITY

0.99+

one percentQUANTITY

0.99+

OneQUANTITY

0.99+

bothQUANTITY

0.99+

two teenage boysQUANTITY

0.99+

three thingsQUANTITY

0.99+

Oracle Marketing CloudORGANIZATION

0.98+

SupermanPERSON

0.98+

todayDATE

0.98+

this weekDATE

0.98+