

Randy Meyer, HPE & Paul Shellard, University of Cambridge | HPE Discover 2017 Madrid

>> Announcer: Live from Madrid, Spain, it's the Cube, covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid, Spain everybody, this is the Cube, the leader in live tech coverage. We're here covering HPE Discover 2017. I'm Dave Vellante with my cohost for the week, Peter Burris. Randy Meyer is back, he's the vice president and general manager of Synergy and Mission Critical Solutions at Hewlett Packard Enterprise, and Paul Shellard is here, the director of the Centre for Theoretical Cosmology at Cambridge University. Thank you very much for coming on the Cube. >> It's a pleasure. >> Good to see you again. >> Yeah good to be back for the second time this week. I think that's, day stay outlets play too. >> Talking about computing meets the cosmos. >> Well it's exciting, yesterday we talked about Superdome Flex that we announced, we talked about it in the commercial space, where it's taking HANA and Oracle databases to the next level, but there's a whole different side to what you can do with in-memory compute. It's all in this high performance computing space. You think about the problems people want to solve in fluid dynamics, in forecasting, in all sorts of analytics problems, high performance compute, one of the things it does is it generates massive amounts of data that people then want to do things with. They want to compare that data to what their model said, okay can I run that against, they want to take that data and visualize it, okay how do I go do that. 
The more you can do that in memory, it means it's just faster to deal with because you're not going and writing this stuff off the disk, you're not moving it to another cluster back and forth, so we're seeing this burgeoning, the HPC guys would call it fat nodes, where you want to put lots of memory and eliminate the IO to go make their jobs easier, and Professor Shellard will talk about a lot of that in terms of what they're doing at the Cosmos Institute, but this is a trend, you don't have to be a university. We're seeing this inside of oil and gas companies, aerospace engineering companies, anybody that's solving these complex computational problems that have an analytical element, whether it's compare it to the model, visualize, do something with that once you've done that. >> Paul, explain more about what it is you do. >> Well in the Cosmos Group, of which I'm the head, we're interested in two things, cosmology, which is trying to understand where the universe comes from, the whole big bang, and then we're interested in black holes, particularly their collisions which produce gravitational waves, so they're the two main areas, relativity and cosmology. >> That's a big topic. I don't even know where to start, I just want to know okay what have you learned and can you summarize it for a lay person, where are you today, what can you share with us that we can understand? >> What we do is we take our mathematical models and we make predictions about the real universe and so we try and compare those to the latest observational data. We're in a particularly exciting period of time at the moment because of a flood of new data about the universe and about black holes and in the last two years, gravitational waves were discovered, there's a Nobel prize this year so lots of things are happening. 
It's a very data driven science so we have to try and keep up with this flood of new data which is getting larger and larger and also with new types of data, because suddenly gravitational waves are the latest thing to look at. >> What are the sources of data and new sources of data that you're tapping? >> Well, in cosmology we're mainly interested in the cosmic microwave background. >> Peter: Yeah the sources of data are the cosmos. >> Yeah right, so this is relic radiation left over from the big bang fireball, it's like a photograph of the universe, a blueprint and then also in the distribution of galaxies, so 3D maps of the universe and we've only, we're in a new age of exploration, we've only got a tiny fraction of the universe mapped so far and we're trying to extract new information about the origin of the universe from that data. In relativity, we've got these gravitational waves, these ripples in space time, they're traversing across the universe, they're essentially earthquakes in the universe and they're sound waves or seismic waves that propagate to us from these very violent events. >> I want to take you to the gravitational waves because in many respects, it's an example of a lot of what's here in action. Here's what I mean, that the experiment and correct me if I'm wrong, but it's basically, you create a, have two lasers perpendicular to each other, shooting a signal about two or three miles in that direction and it is the most precise experiment ever undertaken because what you're doing is you're measuring the time it takes for one laser versus another laser and that time is a function of the slight stretching that comes from the gravitational waves. That is an unbelievable example of edge computing, where you have just the tolerances to do that, that's not something you can send back to the cloud, you gotta do a lot of the compute right there, right? 
>> That's right, yes so a gravitational wave comes by and you shrink one way and you stretch the other. >> Peter: It distorts the space time. >> Yeah you become thinner and these tiny, tiny changes are what's measured and nobody expected gravitational waves to be discovered in 2015, we all thought, oh another five years, another five years, they've always been saying, we'll discover them, we'll discover them, but it happened. >> And since then, it's been used two or three times to discover new types of things and there's now a whole, I'm sure this is very centric to what you're doing, there's now a whole concept of gravitational information, can in fact becomes an entirely new branch of cosmology, have I got that right? >> Yeah you have, it's called multimessenger astronomy now because you don't just see the universe in electromagnetic waves, in light, you hear the universe. This is qualitatively different, it's sound waves coming across the universe and so combining these two, the latest event was where they heard the event first, then they turned their telescope and they saw it. So much information came out of that, even information about cosmology, because these signals are traveling hundreds of millions of light years across to us, we're getting a picture of the whole universe as they propagate all that way, so we're able to measure the expansion rate of the universe from that point. >> The techniques for the observational, the technology for observation, what is that, how has that evolved? >> Well you've got the wrong guy here. I'm from the theory group, we're doing the predictions and these guys with their incredible technology, are seeing the data, and it's amazing. The whole point is you've gotta get the predictions and then you've gotta look in the data for a needle in the haystack which is this signature of these black holes colliding. 
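The needle-in-the-haystack search Paul describes is, at its heart, matched filtering: slide a predicted waveform template along the noisy detector stream and watch for the alignment where the correlation spikes. Here is a minimal sketch with invented numbers — a toy chirp and Gaussian noise, nothing like real detector data or an actual LIGO pipeline:

```python
import numpy as np

# Toy matched filter: hide a known "chirp" template (standing in for a
# black-hole merger waveform) inside noise, then recover its location by
# sliding the template along the data. All parameters are illustrative.
rng = np.random.default_rng(0)

fs = 1024                                   # sample rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
template = np.sin(2 * np.pi * (50 * t + 40 * t**2))   # rising-frequency chirp

data = rng.normal(0, 2.0, 4 * fs)           # four seconds of detector noise
offset = 1500
data[offset:offset + template.size] += template       # bury the signal

# Cross-correlate: score every possible alignment of template against data
scores = np.correlate(data, template, mode="valid")
best = int(np.argmax(scores))

print(f"template recovered near sample {best} (true offset {offset})")
```

The real searches differ in almost every detail (whitened data, template banks spanning masses and spins, frequency-domain filtering), but the structure — compare the theoretical prediction against the data at every offset and keep the best match — is the same.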
>> You think about that, I have a model, I'm looking for the needle in the haystack, that's a different way to describe an in-memory analytic search pattern recognition problem, that's really what it is. This is the world's largest pattern recognition problem. >> Most precise, and literally. >> And that's an observation that confirms your theory right? >> Confirms the theory, maybe it was your theory. >> I'm actually a cosmologist, so in my group we have relativists who are actively working on the black hole collisions and making predictions about this stuff. >> But they're dampening vibration from passing trucks and these things and correcting it, it's unbelievable. But coming back to the technology, the technology is, one of the reasons why this becomes so exciting and becomes practical is because for the first time, the technology has gotten to the point where you can focus on the problem you're trying to solve, and you don't have to translate it into technology terms, so talk a little bit about that, because in many respects, that's where business is. Business wants to be able to focus on the problem and how to think the problem differently and have the technology to just respond. They don't want to have to start with the technology and then imagine what they can do with it. >> I think from our point of view, it's a very fast moving field, things are changing, new data's coming in. The data's getting bigger and bigger because instruments are getting packed tighter and tighter, there's more information, so we've got a computational problem as well, so we've got to get more computational power but there's new types of data, like suddenly there's gravitational waves. 
There's new types of analysis that we want to do so we want to be able to look at this data in a very flexible way and ingest it and explore new ideas more quickly because things are happening so fast, so that's why we've adopted this in memory paradigm for a number of years now and the latest incarnation of this is the HPE Superdome Flex and that's a shared memory system, so you can just pull in all your data and explore it without carefully programming how the memory is distributed around. We find this is very easy for our users to develop data analytic pipelines to develop their new theoretical models and to compare the two on the single system. It's also very easy for new users to use. You don't have to be an advanced programmer to get going, you can just stay with the science in a sense. >> You gotta have a PhD in Physics to do great Physics, you don't have to have a PhD in Physics and technology. >> That's right, yeah it's a very flexible program. A flexible architecture with which to program so you can more or less take your laptop pipeline, develop your pipeline on a laptop, take it to the Superdome and then scale it up to these huge memory problems. >> And get it done fast and you can iterate. >> You know these are the most brilliant scientists in the world, bar none, I made the analogy the other day. >> Oh, thanks. >> You're supposed to say aw, shucks. >> Peter: Aw, shucks. >> Present company excepted. >> Oh yeah, that's right. >> I made the analogy of, imagine I.M. Pei or Frank Lloyd Wright or someone had to be their own general contractor, right? No, they're brilliant at designing architectures and imagining things that no one else could imagine and then they had people to go do that. This allows the people to focus on the brilliance of the science without having to go become the expert programmer, we see that in business too. 
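The shared-memory model Paul is describing — every analysis step sees the whole dataset in one address space, with no partitioning or shipping of data between nodes — can be toy-modeled with Python's `shared_memory` module. This is an illustration of the programming model only; the names and sizes are invented, and it bears no resemblance to actual Superdome-scale code:

```python
import numpy as np
from multiprocessing import shared_memory

# Toy of the shared-memory ("fat node") model: workers attach to the same
# data in one address space, so nothing is partitioned or shipped between
# cluster nodes. Python's shared_memory stands in for a large single
# system image, purely for illustration.
n = 1_000_000
shm = shared_memory.SharedMemory(create=True, size=8 * n)
data = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
data[:] = 1.0                      # "ingest" the dataset once

# A second handle, attached by name, plays the role of another analysis
# process: it sees the same memory, with no copy and no message passing.
view = shared_memory.SharedMemory(name=shm.name)
worker_data = np.ndarray((n,), dtype=np.float64, buffer=view.buf)
total = worker_data.sum()          # analyze in place

print(total)                       # the whole dataset summed, never moved

view.close()
shm.close()
shm.unlink()
```

Contrast this with a distributed cluster, where the same sum would mean deciding how to shard the array, sending chunks to workers, and gathering partial results back — exactly the programming burden being avoided here.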
Parallel programming techniques are difficult, spoken like an old Tandem guy, parallelism is hard but to the extent that you can free yourself up and focus on the problem and not have to mess around with that, it makes life easier. Some problems parallelize well, but a lot of them don't need to be and you can allow the data to shine, you can allow the science to shine. >> Is it correct that the barrier in your ability to reach a conclusion or make a discovery is the ability to find that needle in a haystack or maybe there are many, but. >> Well, if you're talking about obstacles to progress, I would say computational power isn't the obstacle, it's developing the software pipelines and it's the human personnel, the smart people writing the codes that can look for the needle in the haystack who have the efficient algorithms to do that and if they're hobbled by having to think very hard about the hardware and the architecture they're working with and how they've parallelized the problem, our philosophy is much more that you solve the problem, you validate it, it can be quite inefficient if you like, but as long as it's a working program that gets you to where you want, then your second stage you worry about making it efficient, putting it on accelerators, putting it on GPUs, making it go really fast and that's, for many years now we've bought these very flexible shared memory, or in-memory is the new word for it, architectures which allow new users, graduate students to come straight in without a Master's degree in high performance computing, they can start to tackle problems straight away. 
>> It's interesting, we hear the same, you talk about it at the outer reaches of the universe, I hear it at the inner reaches of the universe from the life sciences companies, we want to map the genome and we want to understand the interaction of various drug combinations with that genetic structure to say can I tune exactly a vaccine or a drug or something else for that patient's genetic makeup to improve medical outcomes? The same kind of problem, I want to have all this data that I have to run against a complex genome sequence to find the one that gets me to the answer. From the macro to the micro, we hear this problem in all different sorts of languages. >> One of the things we have our clients, mainly in business asking us all the time, is with each, let me step back, as analysts, not the smartest people in the world, as you'll attest I'm sure for real, as analysts, we like to talk about change and we always talked about mainframe being replaced by minicomputer being replaced by this or that. I like to talk in terms of the problems that computing's been able to take on, it's been able to take on increasingly complex, challenging, more difficult problems as a consequence of the advance of technology, very much like you're saying, the advance of technology allows us to focus increasingly on the problem. What kinds of problems do you think physicists are gonna be able to attack in the next five years or so as we think about the combination of increasingly powerful computing and an increasingly simple approach to use it? >> I think the simplification you're indicating here is really going to more memory. 
Holding your whole workload in memory, so that you, one of the biggest bottlenecks we find is ingesting the data and then writing it out, but if you can do everything at once, then that's the key element, so one of the things we've been working on a great deal is in situ visualization for example, so that you see the black holes coming together and you see that you've set the right parameters, they haven't missed each other or something's gone wrong with your simulation, so that you do the post-processing at the same time, you never need the intermediate data products, so larger and larger memory and the computational power that balances with that large memory. It's all very well to get a fat node, but you don't have the computational power to use all those terabytes, so that's why this in-memory architecture of the Superdome Flex is much more balanced between the two. What are the problems that we're looking forward to in terms of physics? Well, in cosmology we're looking for these hints about the origin of the universe and we've made a lot of progress analyzing the Planck satellite data about the cosmic microwave background. We're homing in on theories of inflation, which is where all the structure in the universe comes from, from Heisenberg's uncertainty principle, a rapid period of expansion in the very early universe, just like inflation in the financial markets, okay and so we're trying to identify can we distinguish between different types and are they gonna tell us whether the universe comes from a higher dimensional theory, ten dimensions, gets reduced to three plus one or lots of clues like that, we're looking for statistical fingerprints of these different models. 
In gravitational waves of course, this whole new area, we think of the cosmic microwave background as a photograph of the early universe, well in fact gravitational waves look right back to the earliest moment, fractions of a nanosecond after the big bang and so it may be that the answers, the clues that we're looking for come from gravitational waves and of course there's so much in astrophysics that we'll learn about compact objects, about neutron stars, about the most energetic events there are in the whole universe. >> I never thought about the idea, because cosmic radiation background goes back what, about 300,000 years if that's right. >> Yeah that's right, you're very well informed, 400,000 years because 300 is. >> Not that well informed. >> 370,000. >> I never thought about the idea of gravitational waves as being noise from the big bang and you make sense with that. >> Well with the cosmic microwave background, we're actually looking for a primordial signal from the big bang, from inflation, so it's yeah. Well anyway, what were you gonna say Randy? >> No, I just, it's amazing the frontiers we're heading down, it's kind of an honor to be able to enable some of these things, I've spent 30 years in the technology business and heard customers tell me you transformed my business or you helped me save costs, you helped me enter a new market. Never before in 30 plus years of being in this business have I had somebody tell me the things that you're providing are helping me understand the origins of the universe. It's an honor to be affiliated with you guys. >> Oh no, the honor's mine Randy, you're producing the hardware, the tools that allow us to do this work. >> Well now the honor's ours for coming onto the Cube. >> That's right, how do we learn more about your work and your discoveries, conclusions. >> In terms of looking at. >> Are there popular authors we could read other than Stephen Hawking? 
>> Well, read Stephen's books, they're very good, he's got a new one called A Briefer History of Time so it's more accessible than A Brief History of Time. >> So your website is. >> Yeah our website is ctc.cam.ac.uk, the Centre for Theoretical Cosmology, and we've got some popular pages there, we've got some news stories about the latest things that have happened like the HPE partnership that we're developing and some nice videos about the work that we're doing actually, very nice videos of that. >> Certainly, there were several videos run here this week that if people haven't seen them, go out, they're available on YouTube, they're available at your website, they're on Stephen's Facebook page also I think. >> Can you share that website again? >> Well, actually you can get the beautiful videos of Stephen and the rest of his group on the Discover website, is that right? >> I believe so. >> So that's at HPE Discover website, but your website is? >> Is ctc.cam.ac.uk and we're just about to upload those videos ourselves. >> Can I make a marketing suggestion. >> Yeah. >> Simplify that. >> Ctc.cam.ac.uk. >> Yeah right, thank you. >> We gotta get the Cube at one of these conferences, one of these physics conferences and talk about gravitational waves. >> Bone up a little bit, you're kind of embarrassing us here, 100,000 years off. >> He's better informed than you are. >> You didn't need to remind me sir. Thanks very much for coming on the Cube, great pleasure having you today. >> Thank you. >> Keep it right there everybody, Mr. Universe and I will be back after this short break. (upbeat techno music)

Published Date : Nov 29 2017



Day One Wrap - Inforum 2017 - #Inforum2017 - #theCUBE

(upbeat music) >> Announcer: Live from the Javits Center in New York City. It's the Cube. Covering Inforum 2017. Brought to you by Infor. >> Welcome back to the Cube's coverage of Inforum here at the Javits Center in New York City. I'm your host Rebecca Knight along with my co-host Dave Vellante, and Jim Kobielus who is the lead analyst for AI at Wikibon. So guys we're wrapping up day one of this conference. What do we think? What did we learn? Jim you've been, we've been here at the desk, interviewing people, and we've certainly learned a lot from them, but you've been out there talking to people, and off the record I should say. >> Yeah. >> So give us. >> I'm not going to name names. >> Yes. >> If I may, I want to clarify something. >> Yeah, okay, sorry. >> I said this morning that the implied valuation was like three point seven, three point eight billion. >> Rebecca: Okay. >> Charles Phillips indicated to us off camera actually it was more like 10 and a half billion. >> Yeah, yeah. >> But I still can't make the math work. So I'm working on that. >> Okay. >> I suspect what's happened, was that a pre debt number. Remember they have a lot of debt. >> Yes. >> So I will figure it out, find out, and report back, okay. >> You do. >> So I just wanted to clarify that. >> Run those numbers okay. >> I'll call George. >> Kay, right, but Jim back to you. What do think is the biggest impression you have of the day in terms of where Infor is? >> Yeah, I've had the better part of this day to absorb the Coleman announcement which of course, ya know AI is one of my core focus areas at Wikibon, and it really seems to me that, well Infor's direct competitors in the ERP space, all in cloud, it's SAP, it's Oracle, it's Microsoft. They all have AI investment strategies going in their ERP portfolios. So I was going back, and doing my own research today, just to get my head around where does Coleman put Infor in the race, cause it's a very competitive race. 
I referred to it this morning maybe a little bit extremely as a war of attrition, but what I think is that Coleman represents a milestone in the development of the ERP cloud, ERP market. Where with SAP, Oracle, and Microsoft, they're all going deep on AI and ERP, but none of them has the comprehensive framework or strategy to AI enable their suites for human augmentation, ya know, natural language processing, conversational UI's, ya know, recommenders in line to the whole experience of ya know inventory management, and so forth. What Infor has done with Coleman is laid out a, more than just a framework and a strategy, but they've got a lot of other assets behind the whole AI first strategy, that I think will put them in good stead in terms of innovating within their portfolio going forward. One of which is they've got this substantial infusion of capital from Koch Industries of course, and Koch is very much as we've heard today at this show very much behind where the Infor team under Charles is going with AI enabling everything, but also the Birst team is now on board with it, and the acquisition closed last month. Brad Peters spoke this morning, and of course he spoke yesterday at the analyst pre-brief, and so David and I have had more than 24 hours to absorb, what they're saying about where Birst fits into this. Birst has AI assets already. That, ya know Infor is very much committed to converging the best of what Birst has with where Coleman is going throughout their portfolio. What Infor announced this morning is all of that. Plus the fact that they've already got some Colemanized, it's a term I'm using, applications in their current portfolio. So it's not just a future statement of direction. It's all that they've already done. 
Significant development and productization of Coleman, and they've also announced a commitment, Infor, within the coming year, to introduce Coleman features throughout each of the industry vertical suites, cloud suites, like I said, human augmentation, plus automation, plus assistants, that are ya know, chat bots sort of inline. In other words, Infor has a far more ambitious and I think, potentially revolutionary strategy to really make ERP, to take ERP away from the legacy of predecessors that have all been based on deterministic business rules, that a thicket, a rickety thicket of business rules that need to be maintained. Bringing it closer to the future of cognitive applications, where the logic will be in predictive, non-deterministic, data driven algorithms that are continually learning, continually adapting, continually optimizing all interactions and transactions. That's the statement of direction that I think that Infor is on the path to making happen in the next couple of years in a way that will probably force SAP, Oracle, Microsoft to step up their game, and bring their cognitive or AI strategies into their portfolios. >> So I want to talk some more about the horse in the track, but I want to still understand what it is. >> Jim: Yes. >> So the competitors are going to say is oh. It's Alexa. Okay, okay it is partially. >> Jim: Yeah sure. It's very reductive that's their job to reduce. >> Yeah you're right, you've lived that world for a while. Actually that was not your job, so. >> If you don't understand technology, you're just some very smart guy who talks a good talk. >> Yeah, okay. >> So, yeah. >> So, okay, so what we heard yesterday in the analyst meeting, and maybe you found this out today, is conversational UX. >> Yes. >> It's chat wired into the APIs, and that's table stakes. It augments, it automates, an example is early payments versus cash on hand. 
Should I take the early payment deal, and take the discount, or, and so it helps decide those decisions, and which can, if you have a lot of volume could be complex, and it advises, it uncovers insights. Now what I don't know is how much of the IP is ya know, We'em defense essentially from Amazon, and how much is actual Infor IP, ya know. >> Good question, good question, whether it's all organically developed so far, or whether they've sourced it from partners, is an open issue. >> Question for Duncan Demarro. >> Duncan Demarra, exactly. >> Okay, so who are the horses in the track. I mean obviously there's Google, there's Amazon, there's I guess Facebook, even though they're not competing in the enterprise, there's IBM Watson, and then you mentioned Oracle, and SAP. >> Well, here's the thing. You named at least one of those solution providers, IBM for example, provides obviously a really sophisticated, cognitive AI suite under Watson that is not embedded however, within an ERP application suite from that vendor. >> No it's purpose built for whatever. >> It's purpose built for stand alone deployment into all manner of applications. What Infor is not doing with Coleman, and they make that very clear, they're not building a stand alone AI platform. >> Which strategy do you like better. >> Do I like? They're both valid strategies. First of all, Infor is very much a SaaS vendor, going forward in that they haven't given any indications of going into PaaS. I mean that's why they've partnered with Amazon, for example. So it's clear for a SaaS vendor like Infor going forward to do what they've done which is that they're not going to allow their customers apparently to decouple the Coleman infrastructure from everything else that ya know, Infor makes money on. >> Which for them is the right strategy. >> Yeah, that's the right strategy for them, and I'm not saying it's a bad strategy for anybody who wants to be in Infor's market. 
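The early-payment example Dave raised a moment ago reduces to a small calculation once it's framed as an annualized return on the cash paid early. Here is a sketch with illustrative terms ("2/10 net 30") and an invented cost of capital — a simplified decision rule, not Coleman's actual logic:

```python
def early_payment_annualized_return(discount_pct, discount_days, net_days):
    """Annualized return from taking an early-payment discount.

    e.g. '2/10 net 30': pay 20 days early, keep 2% of the invoice.
    """
    fraction = discount_pct / (1 - discount_pct)   # return on cash actually paid
    days_early = net_days - discount_days
    return fraction * 365 / days_early

# Illustrative decision rule: take the discount whenever its annualized
# return beats your cost of capital (the 8% here is made up).
cost_of_capital = 0.08
r = early_payment_annualized_return(0.02, 10, 30)
print(f"annualized return: {r:.1%}")               # roughly 37%
print("take the discount" if r > cost_of_capital else "keep the cash")
```

The advisory value comes from running this across thousands of invoices with live cash positions, which is where volume makes the decision genuinely complex.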
>> So what is in Oracle, or in a SAP, or for that matter, a Workday do, I mean ServiceNow made some AI announcements at their Knowledge event. So they're spending money on that. I think that was organic IP, or I don't know maybe they're open source AI components. >> Sure, sure. A, they need to have a cloud data platform that provides the data upon which to build and train the algorithm. Clearly Infor has cast their lot with AWS, ya know, SAP, Microsoft, Oracle, IBM they all have their own cloud platform. So >> And GT Nexus plays into that data corpus or? >> Yeah, cause GT Nexus is very much a commerce network, ya know, and there is EDI for this century, that is a continual free flowing, ever replenishing, pool of data. Upon which to build and train. >> Okay, but I interrupted you. You said number one, you need the cloud platform with data. >> Ya need the conversational UI, you know, the user reductive term chat bots, ya know, digital assistant. You need that technology, and it ya know, it's very much a technology in the works, it's not like. Everybody's building chat bots, doesn't mean that every customer is using them, or that they perform well, but chat bots are at the very heart of a new generation of application development, conversational interfaces. Which is why Wikibon, why we are doing a study, on the art of building, and training, and tuning chat bots. Cause they are so fundamental to the UX of every product category in the cloud. >> Rebecca: And only getting more so. >> IOT, right, desktop applications. Everything's moving towards more of a conversational interface, ya know. For starters, so you need a big data cloud platform. You need a chat bot framework, for building and ya know, the engagement, and ya know, the UI and all of that. You need obviously, machine learning, and deep learning capabilities. Ya know, open source. We are looking at a completely open source stack in the middle there for all the data. 
Ya know, you need obviously things like TensorFlow for deep learning, which is becoming the standard there. Things like Spark, ya know, for machine learning, streaming analytics, and so forth. You need all that plumbing to make it happen, but in terms of ERP, of course, you need business applications, and you need to have a business application stack to infuse with this capability, and there's only a hard core of really dominant vendors in that space. >> But the precious commodity seems to be data. >> Yeah. >> Right. >> The precious commodity is data, both to build the algorithms and, on an ongoing basis, to train them. Ya see, the thing is, training is just as important as building the algorithms, 'cause training makes all the difference in the world between whether a predictive, ya know, ML algorithm actually predicts what it's supposed to predict or doesn't. So without continual retraining of the algorithms, they'll lose their ability to do predictions, and classifications, and pattern recognition. So, ya know, the vendors in the cloud arena who are in a good place are the Googles and the Facebooks, and others who generate this data organically as part of their services. Google's got YouTube, and YouTube is a mother lode of video and audio and so forth for training all the video analytics, all the speech recognition, everything else that you might want to do. But also very much, ya know, you look at natural language processing, ya know, text data, social media data. I mean, everybody is tapping into the social media fire hose to tune all the NLP, ongoing. That's very, very important. So the vendor that can assemble a complete solution portfolio provides all the data, and also, and this is something people often overlook, training increasingly involves labeling the data, and labeling needs a hard core of resources, increasingly crowdsourced, to do that training.
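The point about continual retraining can be made concrete with a toy example: a one-parameter threshold classifier fit on last year's data loses accuracy when the data distribution drifts, and recovers only after retraining on fresh samples. The model and data here are invented purely for illustration.

```python
# Toy illustration of model drift and retraining: a threshold classifier
# fit to one data distribution degrades when the distribution shifts,
# then recovers after retraining. All numbers are made up.

def train(samples):
    """'Train' by placing the threshold midway between the class means."""
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, samples):
    """Fraction of samples where 'x above threshold' matches the label."""
    correct = sum(1 for x, label in samples if (x > threshold) == (label == 1))
    return correct / len(samples)

# Last year: negatives cluster near 1, positives near 10.
old_data = [(0, 0), (1, 0), (2, 0), (9, 1), (10, 1), (11, 1)]
model = train(old_data)                          # threshold = 5.5

# This year the whole distribution has drifted upward by 8.
new_data = [(x + 8, label) for x, label in old_data]
stale_acc = accuracy(model, new_data)            # 0.5 -- stale model is half wrong
fresh_acc = accuracy(train(new_data), new_data)  # 1.0 after retraining
```

The stale model misclassifies every negative in the shifted data, which is exactly the "loses its ability to predict" failure mode described above; production systems monitor for this drift and retrain on a schedule or a trigger.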
That's why companies like CrowdFlower, and Mighty AI, and of course Amazon with Mechanical Turk, are becoming ever more important. They are the go-to solution providers in the cloud for training these algorithms to keep them fit for purpose. >> Mmm, alright Rebecca, what are your thoughts, as a sort of newbie to Infor? >> I'm a newbie, yes, and, well, to be honest, yes, I'm a newbie, and I have only an inch-wide, inch-deep understanding of the technology, but one thing has really resonated with me. >> You fake it really well. >> Well, thank you, I appreciate that, thank you. What I've really taken away from this is the difficulty of implementing this stuff, and this is what you hear time and time again: the technology is tough, but it's the change-management piece that trips up these companies, because of personalities who are resistant to it, and just the entrenched ways of doing things. It is so hard. >> Yes, change management, yes, I agree. There are so many moving parts in these stacks, it's incredible. >> Rebecca: Yeah. >> If we just focus on the moving parts that represent the business logic driving all of this AI, that's a governance mess in its own right. Because what you're governing, I mean version controls and so forth, are both the traditional business rules that drive all of these applications, the application code, plus all of these predictive algorithms, model governance, and so forth, and so on. I mean, just making sure that you're controlling versions of all of that, that you've got stewards managing the quality of all that, and that it all moves in lockstep with each other. >> Rebecca: Exactly. >> So when you change the underlying coding of a chat bot, for example, you're also making sure to continue to refresh and train, and verify that the algorithms that were built along with that code are doing their job, and so forth.
I'm just giving, sort of, the metadata and all of that other stuff that needs to be managed in a unified way within what I call a business-logic governance framework for cloud, data-driven applications like AI. >> And in companies that are so big, and where people are so disparately located, these are the biggest challenges that companies are facing. >> Yeah, you're going to get your data scientists in, let's say, China to build the deep learning algorithms, and probably to train them; you're probably going to get coders in Poland, or in Uruguay, or somewhere else to build the code. And over time there'll be different pockets of development all around the world, collaborating within a unified, DevOps-like environment for data science. Another focus for us, by the way: DevOps for data science. Over time these applications, like any application, it'll be year after year after year of change upon change. The people who are building and tuning and tweaking this stuff now, as it gets older, probably weren't the people who built the original five years ago. So you're going to need to manage the end-to-end life cycle, ya know, documentation, and change control, and all that. It's a DevOps challenge, ongoing, within a broader development initiative, to keep this stuff from flying apart from the sheer complexity. >> Rebecca: Yes. >> So, and I don't know, Jim, if you can help me answer this, this might be more of a Floyer sort of issue, but when we heard from the analyst meeting yesterday from Soma, their chief technical guy, who's been on the Cube before in New Orleans, a very sharp dude, two things stood out. Remember that architecture slide they showed? They showed a slide of Xi and the architecture, and obviously they're building on the AWS cloud. So their greatest strengths are, in my view anyway, also the Achilles' heel here, and one is edge. Let's talk about edge. So, edge to cloud. >> Jim: Yes.
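The governance point about keeping application code and model artifacts in lockstep can be sketched with a tiny registry. This is an invented illustration of the pattern, not any vendor's product; the version strings and steward names are hypothetical.

```python
# Illustrative sketch of lockstep versioning for code and models: a
# deployment is only valid if its code version and model version were
# certified together by a steward, as the governance discussion suggests.

registry = {}  # (code_version, model_version) -> certification metadata

def register(code_version, model_version, steward):
    """A steward certifies that this code/model pairing was tested together."""
    registry[(code_version, model_version)] = {"steward": steward}

def is_valid_release(code_version, model_version):
    """Reject deployments that pair versions never certified together."""
    return (code_version, model_version) in registry

# Hypothetical release: app code 2.3.0 was certified with churn model v7.
register("app-2.3.0", "churn-model-7", steward="data-science-team")

ok = is_valid_release("app-2.3.0", "churn-model-7")    # certified pairing
drifted = is_valid_release("app-2.3.0", "churn-model-6")  # never certified
```

The check is trivial, but it captures the failure the speakers are warning about: shipping new chat-bot code against a model that was never retrained or verified alongside it.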
>> It's very expensive to move data into the cloud, and that's where, ya know, we heard today that all the analysis is going to be done. We know that, but you're really only going to be moving the needles, presumably, into the cloud. The haystack is going to stay at the edge, and the processing is going to be done at the edge; it's going to be interesting to see how Amazon plays there. We've seen Amazon make some moves to the edge with Snowball and Greengrass and things like that, but it just seems that analytics are going to happen at the edge, otherwise it's going to be too expensive. The economic model doesn't favor edge to cloud. That's one sort of caveat. The second was the complexity of the data pipeline. So we saw a lot of AWS in that slide yesterday. I mean, I wrote down DynamoDB, Kinesis, S3, Redshift, and I'm sure there's some EC2. These are all discrete, sort of one-trick-pony platforms with proprietary APIs, and that data pipeline is going to get very, very complex. >> Flywheel platforms, I think, is what you called them when you were talking to Charles Phillips. >> But when you talk to Andy Jassy, he says, look, we want to have primitive access to those APIs, 'cause we don't know what the market's going to do, so we have to have control. It's all about control. But that said, it's this burgeoning collection of at least 10 to 15 data services. So, end to end, the question I have is: Oracle threw down the gauntlet in cloud. They said they'll be able to service any user request in 150 milliseconds. What is the end-to-end performance going to be as that data pipeline gets more robust and more complicated? I don't know the answer to that, but I think it's something to watch. Can you deliver that in under 150 milliseconds? Can Oracle even do that? Who knows. >> Well, you can if you deliver more of the actual logic, ya know, machine learning and code, to the edge, I mean close to the user, close to the point of decision, yes. Keep in mind that the term pipeline is ambiguous here.
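The "move the needles, not the haystack" economics can be sketched as a simple edge filter: the device keeps the bulk of its readings local and forwards only the anomalous ones to the cloud. The readings and thresholds below are invented for illustration.

```python
# Sketch of the edge-filtering pattern discussed above: rather than ship
# every sensor reading to the cloud, an edge node forwards only readings
# far enough from the expected value to be worth the transfer cost.
# The mean and tolerance are hypothetical numbers.

def edge_filter(readings, mean=50.0, tolerance=10.0):
    """Return only the anomalous readings (the 'needles')."""
    return [r for r in readings if abs(r - mean) > tolerance]

# Seven readings at the edge; only two are anomalous enough to upload.
haystack = [49.8, 50.1, 50.3, 97.2, 50.0, 12.4, 49.9]
needles = edge_filter(haystack)  # the haystack never leaves the device
```

In this toy run, two of seven readings cross the wire instead of all seven, which is the cost asymmetry that makes edge analytics attractive.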
On one hand, it refers, in many people's minds, to the, ya know, end-to-end path of a packet, for example, from source to target application. But in the context of development, or DevOps, it refers to the end-to-end life cycle of a given asset, ya know, code, or machine learning, modeling, and so forth. In the context of data science, in the pipeline for data science, much of the training, the whole notion of training machine learning models, say for predictive analysis, that doesn't happen in real time, inline with the actual executing application. It happens, ya know, but it's not in the critical path of the performance of the application. Much of that will stay in the cloud, 'cause that's massively parallel processing of, ya know, TensorFlow graphs and so forth. It doesn't need to happen in real time. What needs to happen in real time is that the algorithms, like the TensorFlow models that are trained, will be pushed to the edge, and they'll execute on increasingly nanoscopic platforms, like your smartphone, and like smart sensors embedded in your smart car and so forth. So most of the application logic, the probabilistic, ya know, machine learning, will execute at the edge. More of the pipeline functions, like model building, model training and so forth, data ingest, and data discovery, that will not happen in real time, but it'll happen in the cloud. It need not happen at the edge. >> Kind of geeky topics, but still ones that I wanted to bring up and riff on a little bit. But let's bring it back up, and back into, sort of. >> And this is the thing, there's going to be a lot more to talk about. >> Geeking out, Rebecca, we apologize. >> You do indeed, it's okay, it's okay. >> Dave indulges me. >> No, you love it too. >> Of course. No, I learn every time I try to describe these things and get smart people like Jim to help unpack it, and so. >> And we'll do more unpacking tomorrow at day two of Inforum 2017. Well, we will all return.
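The train-in-cloud, infer-at-edge split described here can be sketched in a few lines: the expensive fitting happens offline, the learned parameters are serialized and shipped to the device, and the edge scores locally with no network round trip. The model below is a trivial stand-in for a trained network, with made-up data.

```python
import json

# Sketch of the cloud/edge split discussed above: training (the heavy,
# parallel part) stays in the cloud; only the fitted parameters travel
# to the edge, where inference is a cheap local computation.

def cloud_train(samples):
    """Cloud side: fit y = a*x by least squares (a stand-in for real
    training of, say, a TensorFlow model on a big cluster)."""
    a = sum(x * y for x, y in samples) / sum(x * x for x, _ in samples)
    return json.dumps({"a": a})  # serialized artifact pushed to devices

def edge_predict(artifact, x):
    """Edge side: deserialize once, then score locally with no cloud call."""
    params = json.loads(artifact)
    return params["a"] * x

artifact = cloud_train([(1, 2.0), (2, 4.0), (3, 6.0)])  # learns a = 2.0
prediction = edge_predict(artifact, 10)                  # scored on-device
```

Only the small JSON artifact crosses the network, which is why the real-time, latency-critical path can live entirely on the smartphone or sensor while retraining continues in the cloud.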
Jim Kobielus, Dave Vellante, I'm Rebecca Knight. We will see you back here tomorrow for day two. (upbeat music)

Published Date : Jul 11 2017
