Kazuhiro Gomi, NTT | Upgrade 2020 The NTT Research Summit


 

>> Narrator: From around the globe, it's theCUBE, covering Upgrade 2020, the NTT Research Summit, presented by NTT Research. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're in our Palo Alto studio for our ongoing coverage of Upgrade 2020, the NTT Research conference. It's our first year covering the event, and it's actually the inaugural year for the event itself, so we're really, really excited to get into this. It's basic research that drives a whole lot of innovation, and we're really excited to have our next guest. He is Kazuhiro Gomi, the President and CEO of NTT Research. Kazu, great to see you. >> Hi, good to see you. >> Yeah, so let's jump into it. So this event, like many events, was originally scheduled for March at Berkeley. Clearly COVID came along and you guys had to make some changes. I wonder if you can share a little bit about your thinking in terms of having this event, getting this great information out, but having to do it in a digital way and rethinking the conference strategy. >> Sure, yeah. So NTT Research started operations about a year ago, in July 2019, and I always wanted to give the world an update on what we have done in the areas of basic and fundamental research. We planned to do that in March, as you mentioned; however, the rest is to some extent history. We needed to cancel the event, and then we decided to hold it at this time of the year, virtually. We learned something, however: not everything is bad. By doing this virtually, we can reach so many people around the globe at the same time. So I think we're trying to get the best out of it. >> Right, right, so you've got a terrific lineup. So let's jump into it a little bit.
So first, just about NTT Research: we're all familiar, if you've been around for a little while, with Bell Labs, and we're fortunate to have Xerox PARC up the street here in Palo Alto. These are famous institutions doing basic research. People probably aren't as familiar, at least in the States, with NTT's basic research. But when you think about real bottom-line basic research and how it contributes, it ultimately gets into products, and solutions, and health care, and all kinds of places. How should people think about basic research and its role in ultimately coming to market in products, and services, and all different things? Because you're getting way down into the weeds, into the really, really basic, hardcore technology. >> Sure, yeah, so let me, from my perspective, define basic research versus other research and development. For us, basic research means that we don't necessarily have a product roadmap or commercialization roadmap; we just want to look at the fundamental core technology of things. And from the timescale perspective, obviously, we're not looking at something for next year or the next six months, that kind of thing. We are looking at five years out, or sometimes longer than that, potentially 10 years down the road. But you mentioned Bell Labs and Xerox PARC. Yeah, well, there used to be such organizations in the United States; however, arguably those days have kind of gone. So that's what's going on in the United States. In Japan, NTT has done quite a bit of basic research over the years. And in a lot of cases we can talk about the end of Moore's law, and it is a kind of scary time for that, and about the energy consumption of IT. Some huge, big, fundamental change has to happen to sustain the long-term development of these ideas, basically for the sake of human beings. >> Right, right.
>> So NTT sees that, and also we've been doing quite a bit of basic research in Japan. So we recognized this is the time to expand these activities, and as a part of doing so, we opened up a research lab in Silicon Valley, where we can work better and more easily with the global talent in this field. So that's how we started this endeavor, like I said, last year. And so far we have made tremendous progress, so that's where we are. >> That's great. So just a little bit more specific: you guys are broken down into three labs, as I understand it. You've got PHI, which is Physics and Informatics, the CIS lab, Cryptography and Information Security, and the MEI lab, Medical and Health Informatics. And the conference is really laid out along those same tracks. Day one is a whole lot of stuff, then they run the Physics and Informatics day, the next day is really Cryptography and Information Security, and then the Medical and Health Informatics. So those are super interesting but very diverse buckets of fundamental research, and you guys are attacking all three of those pillars. >> Yup, so on day one, the general session, we cover all the topics, but just at a general level. I think those who want to understand what NTT Research is all about will find day one a great day to get a more holistic view of what we are doing. However, given the type of research topics that we are tackling, we need deep-dive conversations, very specific to each topic, by the specialists and the experts in each field. Therefore we have days two, three, and four for the specific topics that we're going to talk about. So that's the configuration of this conference. >> Right, right, and I love it. I just have to read a few of the session breakout titles, 'cause I think they're just amazing, and I always love learning new vocabulary words.
Coherent nonlinear dynamics and combinatorial optimization with Lagrange multipliers, indistinguishability obfuscation from well-founded assumptions, fully deniable communications and computation. I mean, a brief history of quasi-adaptive NIZKs, which I don't even know what that stands for. (Gomi laughing) Really some interesting topics. But the other thing that jumps out when you go through the sessions is the representation of universities, and really top-flight universities. You've got people coming from MIT, Caltech, Stanford, Notre Dame, Michigan; the list goes on and on. Talk to us about the role of academic institutions, how NTT works in conjunction with academic institutions, and how, at this basic-research level, the commercial and academic interests align, come together, and work together to really move this basic research down the road. >> Sure, so working with academia, especially the top-notch universities, is crucial for us. Obviously, that's where the experts in each field of basic research are doing their superb work, and we definitely need to get connected and accelerate our activities together with those researchers. So that has been one of the number-one priorities for us, to jumpstart things and get going. So as you mentioned, Jeff, we have a lineup of professors and researchers from top-notch universities joining this event and talking in the different sessions. So I'm sure that those who are listening in to those sessions will learn what's going on in the minds of the NTT researchers tackling each problem. But at the same time, you will get to hear the top-level researchers and professors in each field. So I believe this is going to be a unique set of sessions for understanding what's happening in the research fields of quantum computing, encryption, and medical informatics. >> Right.
>> So that's, I am sure, going to be a pretty great lineup. >> Oh, absolutely, a lot of information exchange. And I'm not going to ask you to pick your favorite child, 'cause that would be unfair, but what I am going to do is note that you also write as a Forbes Technology Council member. So you're publishing on Forbes, and one of the articles that you published relatively recently was about biological digital twins. And this is a topic that I'm really interested in. We used to do a lot of stuff with GE, and there was always a lot of conversation about digital twins for turbines, and motors, and all this big, heavy industrial equipment, so that you could get ahead of the curve in terms of anticipating maintenance and basically run simulations over its lifetime. Neat concept. Now that's applied to people in biology, whether that's your heart, or maybe a bigger system, your cardiovascular system, or the person as a whole. I mean, that just opens up so many interesting opportunities in terms of modeling people and being able to run simulations if they do things differently, I would presume: eat differently, walk a little bit more, exercise a little bit more. And you wrote about it; I wonder if you could share your excitement about the potential for digital twins in the medical space. >> Sure, so I think the benefit is very clear to a lot of people. The hope is that, basically, a computer system can simulate or emulate your own body, not just a generic human body, but the body of Kazu Gomi at whatever age. (Jeff laughing) And so if you get that precise a simulation of your body, you can do a lot of things. Or rather, I think a medical professional can do a lot of things. You can predict what's going to happen to my body in the next year, six months, whatever.
Or if I'm feeling sick, or whatever the reason, and the doctor wants to prescribe a few different medicines, you can really test out the different medicines, not on you but on the twin, the medical twin, and then obviously it is safer to try specific medicines that way. So anyway, those are the kinds of visions that we have. And I have to admit that there are a lot of things we have to overcome technically, and it will take a lot of years to get there. But I think it's a pretty good goal to define, so we set it, and I talked with a couple of different experts, and I am definitely more convinced that this is a very nice goal to set. However, just talking about the goal, just talking about those kinds of futuristic things, you may end up with science fiction. So we need to be more specific, and our researchers are breaking it down into different pieces: how to get there. Again, it's going to be a pretty long journey, but we're starting by trying to create the digital twin for the cardiovascular system, so basically creating your own heart. Again, the important part is that this model of my heart is very similar to your heart, Jeff, but it's not identical; it is somehow different. >> Right, right. >> So we are working on it, and we're certainly not the only ones thinking about something like this; there are definitely like-minded researchers in the world. So we are getting together with those folks, exchanging ideas, and coming up with the plans and ideas; that's where we are. But like you said, this is a really exciting goal and an exciting project. >> Right, and I like the fact that, consistently in all the background material that I picked up preparing for this today, there is this focus on tech for good, and tech for helping the human species do better down the road.
In another blog post, you talked specifically about 15 amazing technologies contributing to the greater good, and you highlighted cryptography. So there's a lot of interesting conversation around encryption, the pending commercialization of quantum computing, and how that can break all the existing encryption, and how there's going to be this whole renaissance in cryptography. Why did you pick cryptography amongst the entire palette of technologies you could pick from? What's special about cryptography for helping people in the future? >> Okay, so encryption. I think for most people, when you hear about the study of encryption, you may think that the goal of these researchers is to make encryption more robust and more difficult to break. That's the type of research you can probably imagine we are doing. >> Jeff: Right. >> And yes, we are doing that, but that's not the only direction we are working in. Our researchers are working on different kinds of encryption, basically encryption with controls, so that you can reveal, say, only part of the data being encrypted, or so that, depending on the attributes of whoever holds the key, the information being revealed is slightly different. That kind of encryption, well, it's hard to explain verbally, but what they call functional encryption is becoming a reality. And I believe that having the data itself inherit that protection mechanism, and also controlling who has access to the information, is one of the keys to addressing the current situation. What I mean by that is that we are going to have a more connected world, with more information created through IoT and that kind of stuff, more sensors out there.
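The behavior described here, where a key reveals only part of an encrypted record, can be illustrated with a toy Python sketch. To be clear, this is not functional encryption itself, only a simulation of the access pattern using separate per-field secrets and a hash-derived XOR keystream; every name and helper below is hypothetical, and none of this is cryptographically secure.

```python
import hashlib

def keystream(secret: bytes, n: int) -> bytes:
    # Deterministic keystream derived from a per-field secret (toy only).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

def encrypt_record(record: dict, field_secrets: dict) -> dict:
    # Each field is encrypted under its own secret.
    return {f: xor(v.encode(), keystream(field_secrets[f], len(v)))
            for f, v in record.items()}

def decrypt_visible(ciphertexts: dict, partial_key: dict) -> dict:
    # A key holding only some field secrets reveals only those fields.
    return {f: xor(c, keystream(partial_key[f], len(c))).decode()
            for f, c in ciphertexts.items() if f in partial_key}
```

A key carrying only the "name" secret decrypts the name field and nothing else, which mimics, at a cartoon level, the attribute-dependent disclosure Gomi describes.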
So it is great, on the one side, that we can do a lot of things, but at the same time there are tons of concerns from the perspective of privacy and security, and the question is how to make those things happen together: addressing the concerns while keeping the benefits, so that you can create super-complex access systems. But those systems, I hate to say it, inherently bring in some vulnerabilities and break at some point, which we don't want to see. >> Right. >> So I think having those security and privacy mechanisms in the file itself is one of the keys to addressing those issues: getting the benefit of this connectedness while maintaining privacy and security for the future. >> Right. >> And that, in the end, will be better for everyone and a better society. So I could have picked other (Gomi and Jeff laughing) technologies, but I felt like this one was easier for me to explain to a lot of people. So that's mainly the reason I went with it. >> Well, you keep publishing, so I'm sure you'll work your way through most of the technologies over a period of time, but it's really good to hear. There's a lot of talk about security, not enough about privacy; usually the regs and the compliance laws lag what's happening in the marketplace. So it's good to hear that's really a piece of the conversation, because without the privacy, the other stuff is not as attractive. And we're seeing all types of issues coming up while the regs are catching up. So privacy is a super important piece. But the other thing that is so neat is to be exposed, not being an academic, not being in this basic research every day, to the opportunity to really hear, at this level of detail, the amount of work that's being done by big-brained, smart people to move these basic technologies along. We deal often in higher-level applications, versus the stuff that's really going on under the covers.
So it's really a great opportunity to learn more, and to hear from, and probably understand some, though not all, of these great baseline technologies. Really good stuff. >> Yup. >> Yeah, so thank you for inviting us for the first one. And we'll be excited to sit in on some sessions, and I'm going to learn. What's that one phrase that I've got to learn? The N-I-Z-K. NIZKs. (laughs) >> NIZKs. (laughs) >> Yeah, NIZKs, the brief history of quasi-adaptive NIZKs. >> Oh, all right, yeah, yeah. (Gomi and Jeff laughing) >> All right, Kazuhiro, I'll give you the final word. >> You will find out, yeah. >> You've been working on this thing for over a year. I'm sure you're excited to finally let it out to the world. I wonder if you have any final thoughts you want to share before we send people back off to their sessions. >> Well, let's see. I'm sure if you're watching this video, you are almost there for the actual summit. It's about to start, and I hope you enjoy it. I mentioned the benefits of this virtual format, that we can reach out to many people, but obviously there's also a flip side of the coin. With a physical event we can get more spontaneous conversations and more in-depth discussion; certainly we could do that, but perhaps not today, it's more difficult. But I encourage you, and I've encouraged my researchers on the NTT side as well, to communicate with all of you, and hopefully to have more in-depth, meaningful conversations starting from here. So just feel comfortable reaching out to me and all the other NTT folks. And also the researchers from other organizations, I'm sure they're looking for this type of interaction moving forward as well, yeah. >> Terrific. Well, thank you for that open invitation, and you heard it everybody: reach out, and touch base, and communicate, and engage.
And it's not quite the same as being physically in the halls, but you can talk to a whole lot more people. So Kazu, again, thanks for inviting us. Congratulations on the event, and we're really glad to be here covering it. >> Yeah, thank you very much, Jeff, appreciate it. >> All right, thank you. He's Kazu, I'm Jeff, we are at Upgrade 2020, the NTT Research Summit. Thanks for watching, we'll see you next time. (upbeat music)

Published Date : Sep 29 2020



Machine Learning Applied to Computationally Difficult Problems in Quantum Physics


 

>> My name is Franco Nori. It is a great pleasure to be here, and I thank you for attending this meeting. I'll be talking about some of the work we are doing within the NTT-PHI group. I would like to thank the organizers for putting together this very interesting event. The topics studied by NTT-PHI are very exciting, and I'm glad to be part of this great team. Let me first start with a brief overview of just a few interactions between our team and other groups within NTT-PHI. After this brief overview of these interactions, I'm going to talk about machine learning and neural networks applied to computationally difficult problems in quantum physics. The first question I would like to raise is the following: is it possible to have decoherence-free interaction between qubits? The solution proposed some years ago by a postdoc, a visitor, and myself was to study decoherence-free interaction between giant atoms made of superconducting qubits, in the context of waveguide quantum electrodynamics. The theoretical prediction was confirmed by a very nice experiment performed by Will Oliver's group at MIT, published a few months ago in Nature, called "Waveguide quantum electrodynamics with superconducting artificial giant atoms." This is the first joint MIT-Michigan Nature paper during this NTT-PHI grant period, and we're very pleased with it. I look forward to having additional collaborations like this one, also with other NTT-PHI groups. Another collaboration inside NTT-PHI regards the quantum Hall effect in rapidly rotating polariton condensates. This work is mainly driven by two people, Michael Fraser and Yoshihisa Yamamoto; they are the main driving forces of this project, and it has been great fun.
We're also interacting inside the NTT-PHI environment with the groups of Marandi at Caltech, McMahon at Cornell, Oliver at MIT, and, as I mentioned before, Fraser and Yamamoto at NTT; others at NTT-PHI are also very welcome to interact with us. NTT-PHI is interested in various topics, including how to use neural networks to solve computationally difficult and important problems. Let us now look at one example of using neural networks to study computationally difficult and hard problems. Everything I'll be talking about today is mostly work in progress, to be extended and improved in the future. The first example I would like to discuss is topological quantum phase transitions retrieved through manifold learning, which is a variety of machine learning. This work was done in collaboration with Che, Gneiting, and Liu, all members of the group; the preprint is available on the arXiv. Some groups are studying quantum-enhanced machine learning, where machine learning is supposed to be run on actual quantum computers, using exponential speed-ups and quantum error correction. We're not working on those kinds of things; we're doing something different. We're studying how to apply machine learning to quantum problems. For example: how to identify quantum phases and phase transitions, which we shall be talking about right now; how to perform quantum state tomography in a more efficient manner, which is another work of ours that I'll be showing later on; and how to assist experimental data analysis, which is a separate project that we recently published, but which I will not discuss today. Experiments can produce massive amounts of data, and machine learning can help us understand the huge tsunami of data provided by these experiments. Machine learning can be either supervised or unsupervised. Supervised learning requires human-labeled data. So here the blue dots have one label, and the red dots have a different label.
And the question is whether new data corresponds to the blue category or the red category. Many explanations of machine learning use the example of identifying cats and dogs; this is the typical example. However, there are also cases where there are no labels. Then you're looking at the cluster structure, and you need to define a metric, a distance between the different points, to be able to correlate them and create these clusters. Manifold learning is ideally suited to problems that are nonlinear and unsupervised. If you use principal component analysis along the green axis here, which is the principal axis, you can identify a simple structure with a linear projection: when you project onto the axis here, you get the red dots in one area and the blue dots down here. But in general you could get red, green, yellow, and blue dots mixed in a complicated manner, and the correlations are better seen when you do a nonlinear embedding. In unsupervised learning the colors represent similarities, not labels, because there are no prior labels. So we are interested in using machine learning to identify topological quantum phases. This requires looking at the actual phases and their boundaries, starting from a set of Hamiltonians or wave functions. Recall that this is difficult to do because there is no symmetry breaking and there are no local order parameters, and in complicated cases you cannot compute the topological properties analytically, while numerically it is very hard. So machine learning is enriching the toolbox for studying topological quantum phase transitions. Before our work, there were quite a few groups looking at supervised machine learning. The shortcomings are that you need prior knowledge of the system, and the data must be labeled for each phase; this is needed in order to train the neural networks.
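The unsupervised pipeline sketched here, a distance metric, a similarity matrix, then a nonlinear embedding, can be illustrated with a minimal diffusion-map-style sketch in Python. This is an illustrative toy, not the group's actual code: the function names, the kernel width `eps`, and the use of a plain eigendecomposition are all my assumptions.

```python
import numpy as np

def chebyshev_dists(X):
    # Pairwise Chebyshev (L-infinity) distances between the rows of X.
    diff = X[:, None, :] - X[None, :, :]
    return np.abs(diff).max(axis=-1)

def diffusion_map(X, eps=1.0, n_components=2):
    # Minimal diffusion-map embedding: Gaussian kernel on Chebyshev
    # distances, row-normalized into a Markov matrix; the leading
    # non-trivial eigenvectors give low-dimensional coordinates in
    # which cluster (phase) structure becomes visible.
    D = chebyshev_dists(X)
    K = np.exp(-D**2 / eps)
    P = K / K.sum(axis=1, keepdims=True)
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)
    # Skip the trivial constant eigenvector (eigenvalue 1).
    return V.real[:, order[1:1 + n_components]]
```

Feeding in two well-separated point clouds, the sign of the first embedding coordinate splits them into two clusters without any labels, which is the unsupervised behavior described in the talk.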
More recently, in the past few years, there has been an increased push toward unsupervised learning and nonlinear embeddings. One shortcoming we have seen is that they all use the Euclidean distance, which is a natural way to construct the similarity matrix, but we have shown that it is suboptimal; the Chebyshev distance provides better performance. The difficulty is that detecting topological phase transitions is a challenge, because there are no local order parameters. A few years ago we thought that machine learning might provide effective methods for identifying topological features, and in the past two years several groups have moved in this direction. We have shown that one type of machine learning, called manifold learning, can successfully retrieve topological quantum phase transitions in momentum space and real space. We have also shown that if you use the Chebyshev distance between data points, as opposed to the Euclidean distance, you sharpen the characteristic features of these topological quantum phases in momentum space; afterwards, the so-called diffusion map or isometric map can be applied to implement the dimensionality reduction and to learn about these phases and phase transitions in an unsupervised manner. So this is a summary of this work on how to characterize and study topological phases. The examples we used are canonical, famous models like the SSH model, the QWZ model, and the quenched SSH model. We looked at momentum space and real space, and we found that the method works very well in all of these models. Moreover, it provides implications and demonstrations for learning in real space, where the topological invariants could be either unknown or hard to compute.
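For context on the SSH model mentioned above: its topological invariant is a winding number, which in the clean momentum-space case can be computed directly. The short sketch below does that as a baseline; it is my own illustrative code, assuming the standard two-band SSH Bloch Hamiltonian with intra- and inter-cell hoppings `t1` and `t2`, and is not the manifold-learning method of the paper.

```python
import numpy as np

def ssh_winding(t1, t2, nk=400):
    # Winding number of the SSH Bloch vector around the origin.
    # h(k) = t1 + t2 * exp(i k) traces a circle of radius t2 centered
    # at t1; it encircles the origin (winding 1) only when t2 > t1.
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    h = t1 + t2 * np.exp(1j * k)
    # Accumulate the phase increments around the Brillouin zone.
    dphase = np.angle(np.roll(h, -1) / h)
    return int(round(dphase.sum() / (2 * np.pi)))
```

With `t1 > t2` the chain is trivial (winding 0); with `t1 < t2` it is topological (winding 1). The point of the manifold-learning approach is precisely that such invariants are not always available, e.g. in real space or with disorder.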
So it provides insight in both momentum space and real space, and the capability of manifold learning for exploring topological quantum phase transitions is very good, especially when you have a suitable metric. So this is one area in which we would like to keep working: topological phases and how to detect them. Of course, there are other problems where neural networks can be useful for solving computationally hard and important problems in quantum physics. One of them is quantum state tomography, which is important for evaluating the quality of state-production experiments. The problem is that quantum state tomography scales really badly: it is impossible to perform it beyond a handful of qubits, and if you have 20 or more, forget it, it's not going to work. So we have a very important process, quantum state tomography, which cannot be done at scale because there is a computationally hard bottleneck. Machine learning is designed to efficiently handle big data, so the question we asked a few years ago was: can machine learning help us solve this bottleneck in quantum state tomography? This is a project called eigenstate extraction with neural-network tomography, with a student, Melkani, and a research scientist of the group, Clemens Gneiting, and I'll be brief in summarizing it now. The specific machine learning paradigm is standard artificial neural networks, which have recently been shown to be successful for tomography of pure states. Our approach is to carry this over to mixed states, by successively reconstructing the eigenstates of the mixed states. So it is an iterative procedure where you slowly approach the desired target state. If you wish to see more details, this has been recently published in Physical Review A and was selected as an Editors' Suggestion. I mean, some of the referees liked it.
So tomography is very hard to do, but it's important, and machine learning can help us do it using neural networks, achieving mixed-state tomography through iterative eigenstate reconstruction. Why is it so challenging? Because you're trying to reconstruct quantum states from measurements. With a single qubit you have a few Pauli matrices, so there are very few measurements to make; when you have N qubits, the N appears in the exponent, the number of measurements grows exponentially, and this exponential scaling makes the computation very difficult. It is prohibitively expensive for large system sizes. So the bottleneck is this exponential dependence on the number of qubits: by the time you get to 20 or 24 qubits, it is impossible. It gets even worse: experimental data is noisy, so you need maximum-likelihood estimation in order to reconstruct the quantum state that fits the measurements best, and again this is expensive. There was a seminal work some time ago on ion traps where the post-processing for eight qubits took an entire week. Different ideas were proposed, regarding compressed sensing to reduce measurements, linear regression, et cetera, but they all have problems and you quickly hit a wall; there's no way to avoid it. Indeed, an initial estimate was that tomography of a 14-qubit state would take centuries, and you cannot support a graduate student for a century, because you would need to pay their retirement benefits, and it is simply complicated. So a team here, some time ago, looked at the question of how to do a full reconstruction of 14-qubit states within four hours; actually, it was 3.3 hours. Many experimental groups were telling us that was a very popular paper to read and study, because they wanted to do fast quantum state tomography; they could not support a student for one or two centuries.
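The exponential wall described here can be made concrete with one line of arithmetic: full tomography of an N-qubit state requires the expectation values of all non-identity Pauli strings, of which there are 4^N - 1.

```python
def pauli_settings(n_qubits: int) -> int:
    # Number of independent real parameters in an N-qubit density
    # matrix, i.e. the non-identity Pauli-string expectation values
    # needed for full state tomography: 4**N - 1.
    return 4 ** n_qubits - 1
```

For one qubit this gives 3 (the X, Y, Z expectations); for the 14-qubit states mentioned above it is already 4^14 - 1 = 268,435,455 parameters, which is why naive post-processing was estimated in centuries.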
They wanted to get the results quickly. We need to get these density matrices, and we need to make these measurements; with N qubits the number of expectation values goes like four to the N, as the number of Pauli strings becomes much bigger, and maximum likelihood makes it even more time consuming. This is the paper by the group in Innsbruck, with the one-week post-processing, and there were speed-ups by different groups down to hours, including how to do 14-qubit tomography in four hours using linear regression. But the next question is: can machine learning help with quantum state tomography? Can it give us the tools to take the next step and improve things even further? The standard setup is this one here: for a neural network there are some inputs, x1, x2, x3; there are some weighting factors; and you get an output through a nonlinear activation function φ, which could be Heaviside, sigmoid, piecewise linear, logistic, or hyperbolic tangent. This creates a decision boundary in input space, where you get, let's say, the red dots on the left and the blue dots on the right, with some separation between them. You could have two layers, three layers, or any number of layers, either shallow or deep, and this allows you to approximate any continuous function. You train on data via some cost-function minimization. There are different varieties of neural nets; we are looking at the so-called restricted Boltzmann machine. Restricted means that the input-layer spins are not talking to each other, and the output-layer spins are not talking to each other. We got reasonably good results with an input layer and an output layer, no hidden layer, and the probability of finding a spin configuration given by the Boltzmann factor. So we try to leverage pure-state tomography for mixed-state tomography, via an iterative process where you start here.
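The inputs-weights-activation picture described a moment ago can be sketched in a few lines. This is a generic single-neuron toy, not the restricted Boltzmann machine used in the work; the logistic activation is one of the choices listed in the talk.

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    # One artificial neuron: weighted sum of the inputs plus a bias,
    # passed through a logistic activation. The level set where the
    # output equals 0.5 (i.e. w . x + b = 0) is the linear decision
    # boundary separating the two classes of points.
    return logistic(np.dot(w, x) + b)
```

Points on opposite sides of the hyperplane w . x + b = 0 get outputs below and above 0.5, which is the "red dots on the left, blue dots on the right" separation in the slide; stacking such units into layers gives the universal function approximation mentioned above.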
So the mixed states are in the blue area, with the pure states on the boundary here. The initial state is here, and with the iterative process you get closer and closer to the actual mixed state. Eventually, once you get here, you do the final jump inside. So you're looking at the dominant eigenstate, which is the closest pure state, then compute some measurements, and then run an iterative algorithm that makes you approach this desired state. And after you do that, you can essentially compare results with some data. We got data for four to eight trapped-ion qubits, where approximate W states were produced, and we found that the dominant eigenstate is reliably recovered for N equal four, five, six, seven, eight. For the eigenvalues we're still working, because we're getting some results which are not as accurate as we would like. So that is still work in progress, but for the eigenstates it is working really well. There is some cost scaling which is beneficial: it goes like N times r, as opposed to N squared. And the most relevant information on the quality of the state production is retrieved directly. This works for flexible rank. So it is possible to extract the eigenstate with neural-network tomography; it is cost-effective and scalable, and it delivers the most relevant information about state generation. And it's an interesting and viable use case for machine learning in quantum physics. More recently we have also been working on how to do quantum state tomography using Conditional Generative Adversarial Networks, work done with a master's student, a PhD student, and two former postdocs. CGANs refers to these Conditional Generative Adversarial Networks. In this framework you have two neural networks which are essentially in a duel, competing with each other. One of them is called the generator, the other one is called the discriminator, and they are learning multi-modal models from the data.
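The dominant-eigenstate step above (the "final jump" to the closest pure state) can be illustrated with plain power iteration, one standard way to extract the largest-eigenvalue eigenvector of a matrix. The talk does not specify the exact algorithm used, so treat this as an assumption-laden sketch; a real density matrix is complex Hermitian, while this toy uses a real diagonal one:

```python
def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def normalize(v):
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def dominant_eigenstate(rho, iters=100):
    """Power iteration: repeatedly applying rho to a trial vector and
    renormalizing converges to the eigenvector with the largest
    eigenvalue, i.e. the pure state closest to the mixture."""
    v = normalize([1.0] * len(rho))
    for _ in range(iters):
        v = normalize(matvec(rho, v))
    return v

# 75/25 classical mixture of |0> and |1>: the dominant eigenstate is |0>.
rho = [[0.75, 0.0], [0.0, 0.25]]
print(dominant_eigenstate(rho))  # ≈ [1.0, 0.0]
```

Each iteration costs one matrix-vector product, which is where a favorable cost scaling in the state's rank can come from.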
And then we improved on this by adding custom neural network layers that enable the conversion of outputs from any standard neural network into a physical density matrix. So to reconstruct the density matrix, the two networks, the generator and the discriminator, must train each other on data using standard gradient-based methods. We demonstrated that our quantum state tomography with the adversarial network can reconstruct the optical quantum state with very high fidelity, orders of magnitude faster, and from less data, than standard maximum-likelihood methods. So we're excited about this. We also showed that this quantum state tomography with the adversarial networks can reconstruct a quantum state in a single evaluation of the generator network, if it has been pre-trained on similar quantum states, so it requires no additional training. All of this is still work in progress, with some preliminary results written up, but we're continuing. And I would like to thank all of you for attending this talk. And thanks again for the invitation.

Published Date : Sep 26 2020


Coherent Nonlinear Dynamics and Combinatorial Optimization


 

Hi, I'm Hideo Mabuchi from Stanford University. This is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we are taking to the analysis of the performance of Coherent Ising Machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with total energy given by the expression shown at the bottom left of the slide. Here the sigma variables are meant to take binary values. The matrix element Jij represents the interaction strength and sign between any pair of spins i, j, and hi represents a possible local magnetic field acting on each spin. The Ising ground state problem is defined as an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground state problem to correspond to what would be called quadratic binary optimization in the field of operations research. And in fact, in terms of computational complexity theory, it can be established that the Ising ground state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins n, for worst-case instances at each n. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances. And it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions.
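The total-energy expression just described (pairwise couplings Jij plus local fields hi, with binary spins) can be sketched in a few lines. The slide itself is not reproduced here, so the overall sign convention below is the usual physics one and is an assumption:

```python
def ising_energy(spins, J, h):
    """Total energy of an Ising spin configuration:
        E = - sum_{i<j} J[i][j] * s_i * s_j  -  sum_i h[i] * s_i,
    with each spin s_i in {-1, +1}.  The ground state problem is to
    find the spin assignment minimizing this energy."""
    n = len(spins)
    e = -sum(h[i] * spins[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * spins[i] * spins[j]
    return e

# Two ferromagnetically coupled spins (J > 0): aligned is lower energy.
J = [[0.0, 1.0], [1.0, 0.0]]
h = [0.0, 0.0]
print(ising_energy([+1, +1], J, h))  # -1.0
print(ising_energy([+1, -1], J, h))  # +1.0
```

Exhaustively minimizing this over all 2**n spin assignments is what becomes infeasible at large n, which is the point of the complexity discussion that follows.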
Usually we're more interested in just getting the best solution we can within an affordable cost, where costs may be measured in terms of time, service fees, and/or energy required for computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good, but not guaranteed optimum, solutions, and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median runtimes, across a library of problem instances, that scaled as a very steep root exponential, for n up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with n ranging from 131 to 744,710. Instances from this library with n between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of runtime on a 48-core, two-gigahertz cluster. All instances with n greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.014% of a known lower bound having been discovered for an instance with n equal to 19,289, requiring approximately two days of runtime on a single core at 2.4 gigahertz.
Now, if we simple-mindedly extrapolate the root-exponential scaling from that study out past n equals 4,500, we might expect that an exact solver would require something more like a year of runtime on the 48-core cluster used for the n equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has n equal to 85,900. This is an instance derived from 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 gigahertz. But the 20-fold larger, so-called world TSP benchmark instance, with n equals 1,904,711, has been solved approximately, with an optimality gap bounded below 0.0474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for MaxCut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results for MaxCut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms in the practice of solving hard optimization problems. There thus arises the critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high cost to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance.
Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This is certainly pinpointed by researchers in the field as a circumstance that must be addressed. So adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs, but also about highly customized, special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms. Ising machines in general are a novel class of information processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems. In contrast to both more traditional engineering approaches that build Ising machines using conventional electronics, and more radical proposals that would require large-scale quantum entanglement, the emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that exploit this Ising information dynamics.
The general structure of current CIM systems is as shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft, or perhaps mean-field, spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA, and the threshold quantization dynamics provided by the sync-pumped parametric amplifier, results in a final state of the optical pulse amplitudes at the end of the pump ramp that can be read out as a binary string, giving a proposed solution of the Ising ground state problem. This method of solving Ising problems seems quite different from a conventional algorithm that runs entirely on a digital computer.
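The pump-ramp procedure just described can be caricatured with a classical mean-field toy model. The equation of motion below is an assumption, not something stated in the talk; the linear injection term stands in for the FPGA feedback, and the cubic term for gain saturation:

```python
import random

def cim_anneal(J, steps=4000, dt=0.01, eps=0.5, p_max=2.0, seed=1):
    """Toy classical mean-field caricature of the CIM pump ramp:
        dx_i/dt = (p(t) - 1) * x_i - x_i**3 + eps * sum_j J[i][j] * x_j
    The pump p(t) ramps linearly from 0 to p_max.  Below threshold the
    amplitudes x_i stay near zero ("near vacuum"); above threshold they
    settle into +/- values, and the sign pattern sign(x_i) is read out
    as the proposed Ising spin configuration."""
    rng = random.Random(seed)
    n = len(J)
    x = [rng.uniform(-0.01, 0.01) for _ in range(n)]  # noise-like seed
    for t in range(steps):
        p = p_max * t / steps
        inj = [eps * sum(J[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + dt * ((p - 1.0) * x[i] - x[i] ** 3 + inj[i])
             for i in range(n)]
    return [1 if xi >= 0 else -1 for xi in x]

# Ferromagnetic triangle (all J > 0): all three spins should align.
J = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
spins = cim_anneal(J)
print(spins[0] == spins[1] == spins[2])  # True
```

The point of the toy model is only that the mode favored by the couplings crosses threshold first during the ramp, which is exactly the mechanism the talk analyzes with dynamical systems theory.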
As a crucial aspect of the computation is performed physically, by the analog, continuous, coherent nonlinear dynamics of the optical degrees of freedom, in our efforts to analyze CIM performance we have therefore turned to dynamical systems theory: namely, a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of instances. To provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of the slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition, and the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information.
If we consider two uncoupled OPOs, as shown in the upper right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing, it will inject a perturbation into the other that may interfere either constructively or destructively with the field that it is trying to generate via its own lasing process. As a result, one can easily show that for alpha positive there's an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the Jij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground state problem of the ferromagnetic or antiferromagnetic n equals two Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase. Clearly we can imagine generalizing the story to larger n; however, the story doesn't stay as clean and simple for all larger problem instances. And to find a more complicated example, we only need to go to n equals four. For some choices of Jij for n equals four, the story remains simple, like the n equals two case.
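The two-OPO thought experiment above can be checked numerically with a common soft-spin caricature of coupled degenerate OPOs. The equations below are an assumption (the talk gives no explicit formulas), but they reproduce the claimed behavior: positive alpha favors in-phase oscillation, negative alpha favors opposite phases:

```python
def simulate_two_opos(alpha, pump=1.5, dt=0.01, steps=5000):
    """Euler integration of a soft-spin caricature of two coupled OPOs:
        dx_i/dt = (p - 1) * x_i - x_i**3 + alpha * x_j
    Above threshold (p > 1) each amplitude settles near a +/- value;
    the sign of each amplitude plays the role of an OPO phase bit."""
    x = [0.01, -0.02]  # small, noise-like seed amplitudes
    for _ in range(steps):
        dx0 = (pump - 1.0) * x[0] - x[0] ** 3 + alpha * x[1]
        dx1 = (pump - 1.0) * x[1] - x[1] ** 3 + alpha * x[0]
        x = [x[0] + dt * dx0, x[1] + dt * dx1]
    return x

xf = simulate_two_opos(alpha=0.2)   # ferromagnetic-like coupling
print(xf[0] * xf[1] > 0)            # True: the two "spins" lase in phase
xa = simulate_two_opos(alpha=-0.2)  # antiferromagnetic-like coupling
print(xa[0] * xa[1] < 0)            # True: opposite phases
```

In the linearized dynamics, the symmetric mode has growth rate (p - 1) + alpha and the antisymmetric mode (p - 1) - alpha, which is exactly the threshold-lowering argument made in the talk.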
The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated n equals four instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good, but suboptimal, minimum at large pump power. The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behavior seems to become more common at larger n, as for the n equals 20 instance shown in the lower plots, where the lower right plot is just a zoom into a region of the lower left plot. It can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-n examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump-up. Of course, n equals 20 is still too small to be of interest for practical optimization applications.
But the advantage of beginning with the study of small instances is that we're able to reliably determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-n limit we can also analyze fully quantum mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of n equals 10 to the four, 10 to the five, 10 to the six, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger n. Our initial approach to characterizing CIM behavior in the large-n regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, et cetera. At present we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to explain differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So in closing, I should acknowledge the people who did the hard work on the things that I've shown: my group, including graduate students Edwin Ng, Daniel Wennberg, Ryatatsu Yanagimoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini.
All of us are within the Department of Applied Physics at Stanford University, working also in collaboration with Yoshihisa Yamamoto over at NTT-PHI research labs. And I should acknowledge funding support from the NSF through the Coherent Ising Machines Expedition in Computing, and also from NTT-PHI research labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much.

Published Date : Sep 21 2020


Joshua Yulish, TmaxSoft & Sri Akula, Health Plan Services | AWS re:Invent 2018


 

>> Live, from Las Vegas, it's theCUBE. Covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Well, we are nearly two days strong into our coverage here at AWS re:Invent. If you look behind us here on the set, this show floor is still jam-packed, still a lot of activity, as 40,000-plus have made their way to Las Vegas for this year's show. Along with Justin Warren, I'm John Walls. We're joined now by Josh Yulish, who's the CEO of TmaxSoft, and Sri Akula, who's the CIO of HealthPlan Services. Gentlemen, welcome to theCUBE, glad to have you. >> Thank you for having us. >> Thank you for having us. >> Well, first, let's just share the story at home a little bit, about TmaxSoft and HealthPlan. What your core functions are, and then we'll get into why you're here. >> Sure, great question. So at TmaxSoft, one of the key things we're doing right now is helping companies take their old, legacy mainframe applications and move them into the future, running them on the cloud. Enabling that digital transformation of taking the old and integrating it in with the new. >> And you're one of those companies, I assume. >> Yes, we are one of those companies, and we're a technology solutions company in health care. We're the market leaders in providing the platform for the archive business, and then we have a group of other health care solutions as well. >> All right, so you've got to get rid of the old at some point, you've got to move over to the new at some point, and you can't do it all at once. How do you start making those decisions about what legacy, what are we moving, what aren't we, what are we going to redo? I assume a lot of it's budget, but there've got to be other implications and other considerations as well. >> Yeah, you know, some of these systems evolved over the last two decades, especially in health care. And I think it's fair to say health care has been lagging in adopting cloud technology.
Whether it be PII, or PHI, or HIPAA regulations, but now it's starting to embrace cloud more. And that opens up the opportunity for us to take the investments we've made and move them to the cloud, so that we can get agility into our systems and get some efficiency, so that we can develop with the modern technologies and get more to our customers, our members. >> Yeah. That's often a challenge, how you choose which ones to do at what time. Because, I mean, IT projects don't have a great track record of being completed successfully. So when you decide to move something to cloud, you are taking on a bit of risk there, so you need to be able to manage that risk-reward. How do you consider which projects you should be running, so that I can get a bit of short-term gain now, but also make those more strategic decisions about, well, we actually wanted to have this happen over a longer period of time, and we're willing to take a little bit more risk? How do you balance that risk-reward ratio? >> I think there are multiple lenses we apply together. First, you need the right technology to get off the mainframe. And then you need the right partner, not just the technology, but someone who understands the nuances of the software you've built over the decades, to get to that. >> Yeah. >> And also, you know, if the change is less and it's working, don't fix something that's not broken. But still bring the agility, and leverage what cloud has to offer. I think that's where Tmax comes into the picture, helping us with the technology, and as a partner, to kind of guide us through this journey and identify the path. You can't do this in isolation. You've got to have the right technology and the right partner to help us get to a better place. >> Yeah. And Josh, with customers who are running through this, there are plenty of customers out there, I'm sure, who are considering this and struggle with it themselves. What are some of the behaviors you see from people who do this well?
So, when you've seen people who are succeeding at this transformation journey, what are some of the key things you look at and say, these are the markers of someone who really understands how to do this well? >> Sure, that's a great question. So everybody wants to do this; nobody really wants to stay in the past. The people that do it successfully are the people that have a change-agent mentality, that understand: if I ignore the problem, my business, not just my IT, but my business, is going to suffer. And the IT leaders and the CIOs that can see that vision are the ones that enable the business to move forward and give them a competitive differentiator. So that, to us, is what we really see as the differentiator for who's successful faster, versus who isn't. >> You know, Justin was talking about the long view, right, and having a firm strategy and taking a much deeper perspective. But how do you do that when you know that whatever course you're going to take is going to change? Because there's going to be a new technology, there's going to be a wrinkle that's going to come along, and it's going to upset the apple cart. And it might happen in six months. >> Amazon will announce something this afternoon. (laughs) >> So how do you have that long-view strategy, both of you, when you know that, whatever road we're going on right now, a year from now it's probably not going to look like this? >> I'll take a stab at it. Probably Josh, looking at other customers, may have more insights into it. The only lens I see is, the member experience is the key right now. The digital journey is all revolving around member experience. You take any vertical: the client-facing, client-touching side is changing and evolving much faster, but your core systems don't need to change that fast.
So, if you take the core systems, which are legacy, move them to a modern platform, and then embed them with, maybe, a mobile-native, multi- or omni-channel digital front end, there you probably don't want to re-modernize. Most core systems tend to stay there anyway. So take something core, where the data change is low, move it to the cloud, monetize, and get the most ROI out of the investment you made. At least, that's how we are looking at it. >> Yeah, I think we see the same thing with most of our customers. So when they look at, how do I get off the mainframe? These things have been around 20, 30, 40 years. If it was easy, they would have done it. It's not; it's a very closed system, difficult to disrupt. And when they think about, we'll re-write our application, or we'll do something else, that's a five to 10 year journey, on average. So then your disruption, or the new thing, comes out in a year, it impacts that project for them, and it makes it very difficult. We've helped them move off in six to 12 months, on average. So now you can more rapidly solve your initial pain, and then look at your longer-term journey in a way that allows you to do it all, at once and over time. >> I think being able to react to what's coming new, as you said, John, but also have that longer-term vision, that's a tricky thing to be able to do, but it is so important to be able to balance those short-term benefits that then actually support the longer-term vision. >> That's spot on. So get the TCO under control now, and then that gives us the flexibility to re-write in a way that is more forward-looking, and not necessarily re-writing the same old in a new way. >> And it builds credibility with the rest, because then, we're customers as well. If you were to go and try something like this that actually disrupts the business and disrupts customers, then they're probably not going to trust you when you try to do this again.
But if you've got a few wins on the board, then they're going to trust you with a slightly bigger project, and you can actually get further with it, I think. >> Yeah, absolutely. And I think what we've seen, and we're working with HealthPlan Services and all of our customers in the same way, is they can free up cash from their operational spend in IT very quickly, that they can then invest into innovation. And everything that you see here at the show, they can now go do those things. Where, before, most of their money was stuck in operating, just keeping the lights on. Keeping the lights on, on something 40 years old. Now you can invest in innovation without disrupting the customer: the experience is the same, performance is the same, all of those things are the same, but they get that value of, we can make it new as well. >> I know there's plenty of CFOs that'd be happy to hear that. Because they go, you want me to invest how much new money? Oh, no, no, we found plenty of money, it was just sort of lying over here. We were setting it on fire, for some reason. (laughs) Let's not do that. >> And it's an easy conversation, as a CIO, getting to the CEO saying, hey, I'm going to take the cost out, invest back into the product. That's an easy conversation to have. >> Josh talks about this, I guess, multi-faceted process that you're going to go through, right. How do you decide, on the customer side, how do you prioritize, particularly in your space, health care? What's going to go first, what's going to go second, and then what can we put off long enough that there's probably going to be something else coming, where we can adopt a different approach? So, who are your stakeholders, who do you answer to, how do you come up with that? >> There are multiple ways to look at it. Especially in our domain, with the technology Tmax has to offer, they're taking out most of the risk for us, because it's largely lift and shift, and the technology has matured so much.
They are minimizing the risk. And the timelines are also shrinking, because the longer it takes, the later you realize the ROI, and the project moves on and changes. I think that's where the technology maturity is coming in. It's really helping us a lot. And again, we look more at the member experience for ground-up building; for the core assets, we do a lift and shift and shrink the time. But again, Josh, if there is one thing I look at, it's how far the timeline can shrink. I know it used to be two years, 18 months, 24 months, coming down to nine months to 12 months. I would like to see that happen more and more, maybe six to nine months. And that gives us more leverage, and gives customers more confidence. The longer a project stays open, the higher and higher the failure rate goes. >> Right. >> Yes. >> We all want it now. (laughs) So, Josh, go deliver, would you please? >> That's the goal. >> You've got the mission. Thank you for the time, we appreciate it. And we wish you both success down the road. >> Thank you. >> Thank you very much. >> Thank you. >> We're back with more, we're live at AWS re:Invent, in Las Vegas, Nevada. (mellow music)

Published Date : Nov 29 2018


Radhesh Balakrishnan, Red Hat | Red Hat Summit 2018


 

[Music] >> Narrator: From San Francisco, it's theCUBE, covering Red Hat Summit 2018, brought to you by Red Hat. >> Hello everyone, welcome back, this is theCUBE's live coverage here in San Francisco of Red Hat Summit 2018. I'm John Furrier, co-host of theCUBE, with my co-host this week, analyst John Troyer, who's the co-founder of TechReckoning, an advisory and community development firm. Our next guest is Radhesh Balakrishnan, the general manager of OpenStack for Red Hat. Welcome to theCUBE, good to see you. >> Glad to be here. >> So, OpenStack is very hot, obviously. With the trends we've been covering from day one, it's been phenomenal to watch it grow and change, but with Kubernetes you're seeing cloud native, two robust communities: you've got application developers, and you've got under-the-hood infrastructure. So congratulations, and what's the impact of that? How is OpenStack impacted by the cloud native trend, and what is Red Hat doing there? >> The best epitome of that is OpenShift on OpenStack. If you had caught the keynotes earlier today, there was a demo that we did whereby we were spawning OpenShift on bare metal using OpenStack, and then you run OpenShift on top of that. That's what we see as the norm implementation for customers looking to get to an open infrastructure on-prem, which is OpenStack, and then eventually to a multi-cloud application platform on top of it. That makes up the hybrid cloud, right? So it's an essential ingredient of the hybrid cloud that customers are trying to get to. >> And OpenShift's role in this is what? >> I'm assuming we're asked about OpenShift: OpenShift's role will be multi-cloud, from an application platform perspective, right? So OpenStack is all about the infrastructure. As long as you're worrying about infra, deployment, management, lifecycle, that's going to be OpenStack's remit. Once you're thinking about the applications themselves, the packaging of them, the delivery of them, and the lifecycle of them, then you're in OpenShift land. So how do you bring
both these things together in a way that is easier, simpler, and long-standing is the opportunity and the challenge in front of us. The good news is customers are already taking us there, and there's a lot of production workload happening on OpenStack. >> But I've got to ask the question that someone might ask who hasn't been paying attention in a year or so. It was like, hey, OpenStack, good, I remember that. What's new with OpenStack? What would you say to that person if they asked you that question? >> The answer would be something along the lines of: boring is the new normal, right? We have taken the excitement out of OpenStack. You know, the conversations are on containers, so OpenStack has now become the open infrastructure that customers can bring in with confidence. So that's kind of the boring Linux story, but you know what, that's what we thrive on, right? Our job at Red Hat is to make sure that we take away the complexities involved in open source innovation and make it easy for production deployment. So that's what we're doing with OpenStack too, and I'm glad that in five years we've been able to get here. >> I definitely think along with boring goes clarity, right? Last year theCUBE was at OpenStack Summit, and we'll be there again in two weeks, so you and I will enjoy seeing each other again. Last year there was a lot of, you know, containers-versus confusion, and then people got it sorted out in their heads: oh, this is the infrastructure layer, and this is the application layer. I think now people have gotten it sorted out; OpenShift on OpenStack, very clear message. So, ahead of the meeting of the community in two weeks, any comments on the growth of the OpenStack community, the end users that are there, the depth of experience? It seemed like last year OpenStack was everywhere: on the edge, in set-top devices and pole-top devices, all the way to OpenStack in private data centers, and for various
security or logistical reasons. Where is OpenStack today? >> Yeah, I think the phrase would be workload optimization. OpenStack has now evolved to become optimized for various workloads. So NFV was a workload that people were talking about; now customers are in production across the globe, be it Verizon or some of the largest telcos that we have in APAC as well. The fact that you can actually transform the network using OpenStack has become real today. Now the conversation is going from the core of the data center to the edge, which is radio networks. The fact that you can have a unified fabric which can transcend from the data center all the way to a radio, and that can be OpenStack, is a great testament to the fact that the community has rallied around OpenStack and is delivering on features that customers are demanding. >> Part of boring being the new normal is that boring implies reliable, no-drama, clean, you know, working. If you had to put a priority on a list of the top things that are still being worked on, and I see the job is never done with infrastructure, always evolving, and DevOps certainly shows that with programmability, what are the key areas still on the table for OpenStack, the key discussion points where there's still innovation to be done and built upon? >> I think the first one is, it's like going from a car to a self-driving car: how can we get that infrastructure to autonomously manage itself? We were talking about the network earlier; even in that context, how do you get to an implementation of OpenStack that can self-manage itself? So there's a huge opportunity to make sure that the tooling gets richer, to be able to not just deploy and manage but fine-tune the infrastructure itself as we go along. So clearly, you can call it AI or machine learning implemented on OpenStack, to make sure that the benefit is accruing to the administrator. That's an opportunity area. The second thing is the containers and
OpenStack that we touched upon earlier. OpenShift on OpenStack in many ways is going to be the cookie cutter that we're going to see everywhere. There's going to be private cloud; if you've got a private cloud, it's got to be an OpenShift on OpenStack, and if it's not, I would like to know why, right? It becomes a de facto standard. >> You start to have enablement, skills, and training for folks. As you talk to the IT consumer, right, the IT admins out there, what's the message in terms of upskilling and managing, say, an OpenStack installation, and what is Red Hat doing to help them come along? >> So, those who are comfortable with RHEL Linux skills are able to graduate easily over to OpenStack as well. We've been naturally focused on making sure that we are training the loyal Linux installed-base customers, with the addition that the learning offerings we have now are not product-specific: at the level of the individual, you can get a subscription for all the products that Red Hat has, including access to learning. So that does help make sure that people are able to graduate, or evolve, from being able to manage Linux to managing a cloud, and face the brave new world of hybrid cloud that's happening in front of our eyes. >> Let's talk about the customer conversations you're having as the general manager of OpenStack at Red Hat. What's the nature of the conversations? Are they talking about high availability and performance, or is it more under the hood about OpenShift and containers, or do they range across the board depending upon the use cases? >> They do range, but the higher-order bit is that applications are where the focus is. So the infrastructure in many ways needs to get out of the way, to make sure that the applications can be moving from the speed of thought to execution, right? That's where the customer conversations are going, which kind of ties
back to boring being the new normal as well. So if we can make sure that OpenStack is boring enough that all the energy is focused on developing the applications that are needed for the enterprise, then I think the job is done. >> Self-driving OpenStack means applications are just running, and that self-healing concept you were talking about, the automation, is happening. >> Exactly, that's the opportunity in front of us. So, you know, bit by bit, code by code, we will get there. >> I loved the demo this morning which showed that off, right? Bare-metal stacks sitting there on stage from different vendors; you know, OpenStack is the infrastructure layer, so it's out there with servers from Dell and HPE and others, booting up, and then the demo with Amadeus showing OpenStack and public clouds with OpenShift on top also showed how it fits into this whole multi-cloud stack. Is it challenging to be that layer? Is the hardware heterogeneous enough at this point that OpenStack can handle it? Are there any issues working with different OEMs? >> If you look at the history of Red Hat, that's what we've done, right? RHEL became RHEL because of the fact that we were able to abstract the various innovation that was happening underneath. So being able to bring that to OpenStack is, like, we've got the right to, you know, swipe the employee card, if you will, right? So I think the game is evolving, going back to what you were talking about: now that you have the infrastructure which abstracts the compute, storage, networking, et cetera, how do you make sure that the capacity you've created is applied to where the need is most? For example, if you're a telco and you're enabling 5G IoT, you want to make sure that the capacity is closest to where the customer pool is, right? So being able to react to customer needs, or, you know, your customers' customers' needs, around where the
capacity has to be for infrastructure, is the programmability part that we can enable. So that's a fascinating place to get into. I know you are technology users yourself, right, so clearly you can relate to the fact that if you can make available just enough technology for the right use case, then I think we have a winner at hand. >> Yeah, and taking, as you said, taking the complexity out of it also means automating away some of those administrative roles and moving to the operational piece of it, where developers just want to run their code; it kind of makes things go a little faster. So okay, I get that, but I've got to ask a question that's more Red Hat-specific that you could weigh in on, because this is a real legacy question around Red Hat's business model. You guys have been very strong with RHEL; the record speaks for itself in terms of warranty and serviceability. I mean, how many years of support is it now for RHEL, like a zillion years? OpenStack is boring; is Red Hat bringing that level of support now, and for how many years? Because if I use it, I'm going to need to have support. What's the current Red Hat model on support, in terms of versioning and the things that you guys do with customers? >> Thank you for bringing that up. What we have been consciously doing is making sure that we have a lifecycle that meets the two different customer segments that we are talking about. One is customers who want to be with the latest and the greatest, closer to the trunk; every six months there is an OpenStack release, and they want to be close enough, they want to be consuming it, but it's got to be production-ready in their environment. The second set of customers are the ones who are saying, hey, look, the infrastructure part needs to stay there, cemented well, and then maybe every couple of years I'll take a real look at bringing in the new code to light up additional functionality on storage or network, et cetera. So when you look at
both the camps, the need is to have a dual lifecycle. So what we have done is, with OpenStack Platform 10, which came out two years ago, we have an up-to-five-year lifecycle release: OSP 10 is extensible up to five years, the two releases from there, 11 and 12, are for just one year each, and then we come back again to a major release, which is OSP 13, which will be another five years. >> And they get the full Red Hat support that they're used to? >> That's right. So there are customers who are able to either stay at 10, or be the ones going from 10 to 11 to 12 to 13. And there are some customers who are saying, I'm staying at 10 and then I'll go straight over to 13, and how do you do that? That will be an industry first, and that's what we have been addressing from an engineering perspective; it's differentiated. >> I think that's a good selling point. That's always been a great thing about Red Hat: you guys have good support, which gives the customers confidence, and you aren't new to the enterprise and these kinds of customers. So, what are you doing here at the show, Red Hat Summit 2018? What's on your agenda? What are some of the hallway conversations you're hearing, the customer briefings? Obviously some of the keynote highlights were pretty impressive. What's going on for you? >> It's all about OpenShift on OpenStack; that's where the current and the future is. And it's not something that you have to wait for. The reality is that when you're thinking about containers, you might be starting very small, but you're going to have a reasonably sized farm that needs to power all the innovation that's going to happen in your organization. Given that, you need to have an infrastructure management solution thought through and implemented on day one itself. That's what OpenStack does. So when you can roll out OpenStack and then, on top of it, bring in OpenShift, then you're not only taking care of today's needs but also, as you scale, and back
to the point we were talking about, moving the capacity to where it's needed, you have an elastic infrastructure that can go where the workload is demanding the most attention. >> Here's another question that might come up, and you've probably got this, but I'll just bring it up anyway. I'm a customer of OpenStack, or someone kicking the tires, learning about deploying OpenStack. I say, Radhesh, what is all this cloud native stuff? I see Kubernetes out there. What does that mean for me vis-a-vis OpenStack, and all the efforts going on around Kubernetes and above, and the application pieces of the stack? >> Right. Let's say you looked in the rear-view mirror five years ago, when we looked at cloud native as a construct. The tendency was that, hey, look, I need to be developing net-new applications; that's the only scenario where cloud native would be thought of. Now, fast-forward five years, and what has happened is that cloud native and the DevOps culture have become the default. If you are a developer and you're not sort of in that cloud native and DevOps mode, then you are working on yesterday's problem, in many ways. So if digital transformation is urging organizations to drive toward cloud-native applications, then cloud-native applications require an infrastructure that's fungible and elastic, and that's how OpenShift on OpenStack, again coming back to that point, is the future that customers can build on today and moving forward. >> So to summarize, I would say what I heard you saying, correct me if I'm wrong: OpenShift is a nice bridge layer, a connection point, and if you bet on OpenShift you're going to have the best of both worlds. >> That's a good summary. And, you know, betting on open, first of all, is the first-order bet that you should be making. Once you've bet on open, then you've got to bet on an infrastructure choice, that's OpenStack, and you've got to bet on an application platform choice, that's OpenShift. Once you've got both of these, I think then
the question is, what are you going to do with your spare time? >> Okay, count all the cash you're making from all the savings. But also, choice is key: you get all this choice, and flexibility is a big upside, I would imagine. Radhesh, thanks for coming on and sharing your insight on theCUBE, appreciate it. Thanks for letting us know what's going on, and best of luck. See you in Vancouver. >> Thank you for having me. >> Okay, theCUBE's live coverage here in San Francisco of Red Hat Summit 2018, John Furrier with John Troyer; more coverage after this short break.
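The dual lifecycle Radhesh describes, long-life releases every third release with one-year releases in between, can be summarized in a small sketch. This is a hypothetical illustration based only on the support lengths stated in the interview (OSP 10 and 13 at up to five years, 11 and 12 at one year); the dictionary and helper function are assumptions for illustration, not Red Hat tooling:

```python
# Support windows, in years, as stated in the interview above.
lifecycle_years = {
    "OSP 10": 5,  # long-life release
    "OSP 11": 1,
    "OSP 12": 1,
    "OSP 13": 5,  # next long-life release
}

def is_long_life(release):
    """A release is 'long-life' if its support window is five years or more."""
    return lifecycle_years[release] >= 5

# Conservative customers can ride the long-life releases and skip
# the one-year interim ones, e.g. going from OSP 10 straight to 13.
upgrade_path = [r for r in lifecycle_years if is_long_life(r)]
print(upgrade_path)  # ['OSP 10', 'OSP 13']
```

The point of the sketch is the shape of the policy: two distinct cadences served from one release stream, so both customer segments Radhesh mentions are covered.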

Published Date : May 8 2018


Peter Chen, Intel | The Computing Conference


 

>> SiliconANGLE Media presents theCUBE! Covering Alibaba Cloud's annual conference. Brought to you by Intel. Now, here's John Furrier... >> Hello everyone, I'm John Furrier, the co-founder of SiliconANGLE, Wikibon, and theCUBE, here for our exclusive coverage in Hangzhou, China of the Alibaba Cloud Conference, a cloud computing conference. The entire city is a cloud. We're here at the Intel booth with Peter Chen, who's the general manager of Products and Technology for Data Center Group Sales at Intel Corporation. Peter, AI is the hottest topic, IoT, Alibaba Cloud, I mean, a huge event here, mixing kind of a cultural shift, a generational shift, young developers. >> Definitely lots of crowd, you can see people surrounding us, right? So, artificial intelligence is definitely a hot word here in China for the past 12 months. Everybody's trying to figure out what's going on and how they can really use it, so we're very excited as well to really partner with Alibaba to explore some of the potential. >> I had a chance to speak with some of the Alibaba executives, and obviously it's a strategic partnership with Intel, pretty strategic, and it matters what's inside the cloud. But it's not an Intel Inside like a PC. AI is showing that there's a little bit of Intel in everything, from IoT and industrial IoT to the data center. It's a range of technology that's powering a new kind of software. This is where AI is shining. We're seeing that with machine learning and data-driven technology. So, I've got to ask you: what is the view from Intel on AI? Obviously, we see the commercials, we see the technology from Intel. How does that translate to your view on AI? What's that view? >> So, essentially, today's AI, artificial intelligence, is powered by three factors: the amount of data, the new algorithms, and lastly the compute power. And Intel has historically been the leader in compute.
So, for the past many years, we have always been generating new compute power into the cloud and data centers, as well as PCs. But going forward, as we look at applying AI to different usages, like autonomous driving, for example, you cannot expect everything to be done just in the cloud, because we need the real data to be inputted from a car, for instance, all the cameras, all the sensors. So, we definitely see a need for faster processors at the edge as well, to constantly bring the data back into the cloud, so there is an autonomous feedback loop to make sure there will be the right decision making. >> Yeah, so Cloud drives this, right? So, it's not just Cloud though, it's software. There's exponential growth in open source software that's causing a renaissance in the developer community. You're seeing it here in China, a lot of young demographics here. There's a software and data tsunami going on. You need compute power. >> Yes, yes. I think everybody knows Intel is a hardware company, but we do have a very large effort on engaging the software ecosystem. From the old days of engaging Linux, to the cloud's different software stacks, and working with CSPs like Alibaba in China, to really make sure they can create and write the latest AI software frameworks, taking the most advantage of our hardware platform as well. So, that's something that we've been very focused on. >> And one of the themes here is the IoT for traffic in China. Obviously, if you've been here, you know it's kind of congested. But Alibaba is giving a lot of talks on how they're using data in this cloud city for traffic, which is an example of IoT, Internet of Things, but applied to the real world. That's where the AI kind of connects with the data. Is that kind of where it's going? >> Yeah, so I think this is a great application, as you just mentioned. And Alibaba calls it City Brain. So, essentially, imagine a normal city; in China, it can easily go to five million, 10 million people.
Think of the amount of people, and the amount of traffic that goes on the road every day. So, if the city is able to utilize all these video streams of data, feedback from different traffic intersections, and is able to direct traffic and control the traffic lights dynamically, using artificial intelligence, you'd actually solve a lot of the city's congestion problems. So, I think this is where we are seeing a lot of applications being explored in China, in very innovative, different ways, by Alibaba. >> Peter, I've got to ask you, because one of the things we're seeing in theCUBE and Wikibon Research is the growth of new kinds of ecosystems. Karen Liu, who runs the Americas, the general manager for Alibaba's Americas business, said to me that the ecosystem is super important for Alibaba, as an example. But a new kind of ecosystem is developing. Cloud service providers are becoming a new hot growth area, because the specialty of building applications in the cloud is not like it was in the old days. You've got to have a little bit of a cloud native mindset, but also domain expertise, whether it's traffic or a certain vertical solution. So, it's a little bit of both: scalable, yet with specialism. This is going to create a lot of opportunities for cloud service providers. What's your view on that from Intel's perspective? How are you guys seeing that market? Do you agree? And what are you guys looking at, at that market? >> So, obviously cloud service providers, the likes of Alibaba or Amazon, are one of our fastest growing customer bases over the past five years. And in the near future, we expect this trend to continue to grow. We definitely see CSPs at the leading edge of driving innovation, because they are not just at the leading edge of driving consumer usages; like the City Brain project, they've also been really close to solving the enterprise problem as well with the public cloud.
So, I think we're very excited to have the opportunity to be a close partner with a CSP like Alibaba, to really help them, providing our latest hardware technology to allow them to drive innovation on top of it with their software, their programs, and their algorithms. >> How are those big cloud service providers, CSPs like Alibaba, and they're a big one, they're the fourth cloud in the world, enabling their own CSP ecosystems? Because I was just talking to someone on the floor here, an ISV in the old world, who was telling me that he's now a cloud service provider, so you now have this nice balance developing in the ecosystem. Do you see the same thing? How are you looking at that? >> So, this is what we call a hybrid situation. While the big CSPs like Alibaba have a lot of competency and a lot of internal engineering, it may not make sense for them to create every single application in the world. So there may be some legacy enterprise application, for instance a CRM software in China that was really popular, where it makes sense for that vendor to forge a collaboration with a leading company like Alibaba to translate their on-prem software stack into a cloud solution. So, I think we definitely see a lot of that collaboration happening, taking the best of the best from the legacy as well as the new public cloud environment, to really make a better service for the companies and the customers. >> It creates ecosystem opportunity. Okay, so I've got to ask: what is the Intel relationship, what are you guys doing on your end with Alibaba Cloud? Obviously, they're taking names, they're kicking butt. They're doing well. They're going global. They're not just in China; they're the first cloud provider here in China to go outside the mainland. Obviously, they're in the US, they're in Silicon Valley, our backyard. What's the collaboration? Share the relationship. >> We work very closely with Alibaba. Like you said, they're now the leading cloud service provider in China.
They're starting to go abroad. And from an ingredient and knowledge provider perspective, we have a very close collaboration with them, sharing our hardware roadmap as well as software enablement to make sure they can take full advantage of it. So, we're very excited to see the growth of Alibaba over the past few years, and we look forward to seeing them continue to expand their business together with us. >> Yeah, great company. So, I've got to ask you, one of the collaborations that got my attention was, I don't want to say hack-a-thon, it was a competition, the AI competition called Tianchi that you guys were a part of with Alibaba. What was Intel's role in that? I saw some of the winners earlier. I didn't get a chance to get the specifics, but take me through this AI competition that Alibaba did with these entrepreneurs. >> So, I'm actually very excited. I just talked to one of the winning teams just now. So, what happened is, when we talk about artificial intelligence today, it's a lot about image recognition and voice recognition, but that's just pure technology. So, what Alibaba decided to do, with us as a partner, is create a medical image contest. We picked a particular subject, for instance lung cancer, and we invited 16 local hospitals to provide some of the image data of their patients anonymously, and then we opened it up to the software ecosystem, the academia, professors, the schools, and said, hey, why don't you come in and try to compete on the image recognition accuracy based on those X-ray images? We had an overwhelming turnout: about 3,000 teams from 20 different countries applied to join the contest, and we just selected the winners yesterday. So, basically, of the three winning teams, two of them are from the best universities here in China, and one of them is from overseas. And again, Intel's role in this is that we provided a lot of consultation help.
First of all, we provided the hardware system based on our Xeon Phi clusters, and on top of that, we provided a lot of the software tools, Caffe, image recognition libraries, Intel math libraries, to really help the contestants use the Intel hardware to the maximum to drive the best performance. >> And so, you guys provided the technology, Alibaba the cloud, and let these teams just run with it. What were the results? Was there any success? Was there a winner? >> There was a winner. I think the big winner was Beijing University. But overall, we are not excited just because of the specific winners, but really because of the larger intent. If you can imagine, in a country like China there are a lot of people, meaning there are a lot of patients in different parts of the country, and not every tier-two or tier-three city has the same resources or access to the best doctors. If we're able to simplify lung cancer image recognition and provide it as a tool for all the tier-two and tier-three cities of China, imagine how much that will change. >> It's a societal impact. >> Definitely. >> And you've got a collective intelligence. It's almost like an open source kind of thing, where the more people doing it... >> It gets better, it gets better. >> The flywheel. >> And we definitely have a lot of hospitals who want to really take advantage of this as well. So, we're really glad about the results of this first round, and I think Alibaba will do a next round with a different subject as well, and we're looking forward to partnering with them again. >> That's inspirational. Okay, great to have you on. Thanks for the commentary. Exclusive coverage. Final thought: what are your thoughts on the event? Where's AI going? Where do you see this trajectory of Alibaba and Intel going? >> So, definitely the event is wonderful and great. This is my third year here. It gets bigger and bigger every time.
So, I'm looking forward to coming back for the next couple of years. Our collaboration with Alibaba has been very close. We work with each other deeply, with our engineers collaborating, and I look forward to continuing to bring out more successful projects. >> And they're really bringing together not just science and developers, but artists. You've got a music festival here; it feels like South by Southwest meets a developer conference. Societal impact, traffic, solving problems, lung cancer, big data, and data is changing the world. Now, you need the compute power, you need the analytics. Of course, you need SiliconANGLE and theCUBE and Wikibon, exclusive coverage here in China of the Alibaba Cloud Conference. Thanks for watching.
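The contest described in this interview ranked teams on image recognition accuracy over labeled X-ray scans. As a minimal sketch of how such a leaderboard metric might be computed (the function name, scan-ID scheme, and choice of metrics are illustrative assumptions, not the Tianchi contest's actual scoring code):

```python
def evaluate_predictions(predictions, labels):
    """Score a contestant's nodule predictions against ground truth.

    predictions / labels: dicts mapping scan ID -> 1 (nodule present) or 0 (clean).
    Returns accuracy, sensitivity (recall on positives), and specificity,
    the kinds of metrics a medical-imaging contest would rank teams on.
    """
    tp = fp = tn = fn = 0
    for scan_id, truth in labels.items():
        pred = predictions.get(scan_id, 0)  # a missing prediction counts as "clean"
        if truth == 1 and pred == 1:
            tp += 1
        elif truth == 1 and pred == 0:
            fn += 1
        elif truth == 0 and pred == 1:
            fp += 1
        else:
            tn += 1
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
    }
```

Reporting sensitivity alongside raw accuracy matters in a screening setting: a missed nodule (false negative) is far costlier than a false alarm, so a contest organizer would likely weight it heavily.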

Published Date : Oct 24 2017



Sheila FitzPatrick, NetApp & Michael Archuleta, Mt San Rafael Hospital | NetApp Insight 2017


 

>> Narrator: Live from Las Vegas, it's The Cube, covering NetApp Insight 2017, brought to you by NetApp. >> Welcome back to our live coverage. It's The Cube here at Mandalay Bay in Las Vegas. I'm John Furrier, the co-host and co-founder of SiliconANGLE Media, with Keith Townsend, my co-host, the CTO Advisor. Our next two guests are Sheila Fitzpatrick, the Chief Privacy Officer for NetApp, and Michael Archuleta, CIO, HIPAA and Information Security Officer at Mt. San Rafael Hospital. Thanks for joining us. >> Thank you. >> Thank you very much. >> Great topic: privacy, healthcare, ransomware, all these hacks going on. Although it's not a security conversation, it really is about how data is changing, certainly with HIPAA, which has a history around protecting data, but is that good? So, all kinds of hornets' nests of issues are going on. Michael, all for the good, right? I mean, everything's for the good, but at what point are things foreclosed by the role of the tech? What's your update on healthcare and the role of data, and kind of the state of the union? >> Yeah, absolutely. So, data right now is one of those assets that's really critical in a healthcare organization. When you look at value-based care, improvements, utilization of real-time data, it's really critical that we have the data in place. But the thing is, data is also very valuable to hackers, so it is really a major problem that we're having in healthcare organizations, because right now healthcare organizations are one of the most attacked sectors out there. There's an actual poll out there that stated that 43% of individuals don't even know what ransomware is. And you figure, in healthcare organizations, we're really behind the curve when it comes to technology. So when you bring that in and you say, okay guys, what's ransomware, what's cyber security, what's a breach? Everyone's like, well I-- >> Malware, that kind of thing. >> I don't know what it is.
So it becomes an issue, and the thing is, the culture has not been fully developed in organizations like healthcare, because we're so behind the curve. But what we've been focusing a lot on is employee cyber security awareness, bringing in that culture, having individuals understand, because as you were stating too, healthcare information is 10 to 20 times more valuable than a Social Security number or a credit card on the dark net right now. If you figure, PHI contains a massive amount of data, so it is very profitable, and these individuals go in and hack these systems, because of course healthcare organizations are so easy to hack; they place the data out on the dark net, you go out, buy some Bitcoin, and you can have some serious identity theft going on. And I mean, we have a massive issue here in the States with substance abuse, so if you want basically a script, or multiple scripts with different identities, you can go out there and purchase those specific things. So it is a problem, and from my standpoint, imagine if this was your mother's, your father's, your grandma's, any family member's information. That's why data is so valuable, and it's so critical that we take care of the information as securely as possible. But it starts with the people, because I always say, at the end of the day, our employees hold the keys to either keeping the bad actors out or inviting them in. So it is a problem, absolutely. >> Sheila, I want to get your thoughts, 'cause obviously this segment here is about why data privacy is always one of the top-five concerns for CXOs. And obviously, the tagline NetApp has for the show is "Change the World With Data". There are a lot of societal impacts going on. We're seeing it every day, in front of our eyes, certainly here in Vegas and throughout the world, with hacks; Equifax is still fresh in memory. And there's going to be another Equifax down the road.
The hackers are out there, lots of security concerns. You've got developers that are getting on the front lines, getting closer to business; that's a trend in the tech business. Data privacy has always been important, but there's a confluence of two things happening right now, and it's really a collision course: technology and policy. Privacy and policy are things that people spend a lot of time trying to get right, for all the right reasons, but, and I'll make some assumptions here, getting them wrong could foreclose options and impose penalties down the road. How should CEOs, COOs, CDOs, Chief Data Officers, chief everybody, they're all CXOs, think about privacy? >> Well, I think it starts with the fundamentals, and you're absolutely right, there's a real misperception out there around privacy. And I always tell people, people that know me know that my pet peeve is when people say to me, we have world-class security, therefore we're good on privacy. I literally want to slap them, because they're not the same thing. If you think about-- >> She's closer to John. >> Yeah, you better move that way. If you think about the analogy of a wheel, data privacy is the full life-cycle of that wheel. It's the data that you're collecting, from the time you collect it to the time you destroy it. It's the legal and regulatory requirements that say what you can have and what you can do with that data, obtaining the consent of the individual to have that data. Certainly, protecting that data is very important, that's one spoke on that wheel, but if you're only looking at encryption, that wheel's not going to turn, 'cause you're literally encrypting data you're not legally allowed to have.
So if you think about the healthcare industry, where I absolutely agree, the data that you deal with is some of the most valuable and sensitive data individuals can have, oftentimes even healthcare organizations don't know what they're collecting, or they're collecting data that maybe they don't necessarily need, or they only think about protecting the protected health information, but they don't think about the other personal data they collect. They collect information on your name, your phone number, your home address, dependent information, emergency contacts. That's not protected health information. That's personal data that's covered under privacy laws. >> Here's the dilemma I want to ask you guys to react to, because this is kind of the reality as we see it on The Cube. We go to hundreds of events a year, talk to a lot of thought leaders and experts. You guys are on the field every day. Here's the dilemma: I need to innovate my business, I've got to do a digital transformation. Data is the new competitive advantage. I've got to surface data, not on a batch basis but in real-time, so I can provide the kinds of services that use real-time data. At the same time as that innovative, organically growing, fast-paced technological advancement, I'm really nervous, because the impact of ransomware and some of these backlash events causes me to pause. So there's that balancing act between governance and policy, which could make you go slower, versus the let's-go, move-fast, break-stuff mindset, let's go build some new apps. I want to go faster, I want to innovate for my business and for my customers, but I don't want to screw myself at the same time. How do you think about that? How do you react to that? And how do you talk to customers about that when they try to figure it out?
>> So that's an area that I spend a lot of time talking about, 'cause I'm very fortunate that I get to travel the globe and meet with our customers all over the world. And it's those same issues: they want to adopt new technology. They want to invest in the cloud, they want to invest in AI, in the internet-of-things, but at the same time, I keep going back to, it's like building a house: you have to start with the ground floor. You have to build your privacy compliance program, and understand what data you need in order to drive your business. What data do you need to serve your customers, your patients, your employees? Once you've determined that fundamental need and what your legal requirements are, that's when you start looking at technology. What's the right technology to invest in? You don't start that journey by deciding on technology and then fitting the data in. You have to start with what the data is, what you want to do with that data, what service you're trying to provide, and what the basics are, and then you build up. >> So foundationally, data is the initial building block. >> Absolutely. You don't build a house by starting with the second floor. If you start looking at tools and technology to begin with, that house is going to collapse. So you start with the data and then you build up. >> Michael, you're on the front lines, and the realities are realities. Your thoughts? >> Absolutely. So you know, you have some excellent points. The thing is, at the end of the day, I always say security at times is inconvenient. I mean, we add two-factor authentication, we add all these additional fundamentals in what we do, but the bottom line is we're trying to secure this data. There has to be security governance to really focus on: okay, this is the information you need.
We need to go through legal, we need to go through compliance, and we need to determine that this is going to be easy to access for your group, and we need to make sure that we are keeping you secure as well. The bottom line is innovation, of course, with all the disruption it brings, et cetera. It's absolutely amazing. You know, I love innovation, honestly, but we still have to have some governance, and focus on keeping it secure, keeping it focused, and having the right individuals really-- >> How do you tackle that as a team, with your team? Is it cultural organizational behavior, or project management, product planning? How do you deal with the balance? >> Well, at the end of the day, as the CEO of NetApp basically states, it starts from the top down. You really have to have a data-driven CEO that understands at least the fundamentals of cyber security, information technology, and innovation, has those all combined, and keeps that main focus on governance, so everyone has a full, fundamental understanding, if that makes sense. >> Let's talk tech. You know, we've talked at the high level. I love it that you brought the global conversation into this; you're taking a global view. We talked a little bit before the show: there's a mismatch in taxonomy. Here in the U.S., we're focused first on security, maybe, and then secondarily on this concept of PII, which really doesn't exist outside of the U.S. Now we have GDPR. Talk to us about the gap in understanding of GDPR, what we consider as PII here in the U.S., and where U.S. companies need to get to. >> Okay, that's a great question. So, the minute an individual talks about PII, you automatically go U.S.-centric, signaling that you must operate in a purely domestic environment. The global term for personal data is personal data, it's not PII. There is a fundamental difference: in the U.S.
there is a respect for confidentiality, but there's no real respect for privacy. When you talk about GDPR, that is the biggest overhaul in data protection laws in 25 years. It is going to have ramifications and a ripple effect across the globe. It is the first extra-territorial data privacy law, and under GDPR, personal data is defined as any piece of information that is identifiable to an individual, or can identify an individual either directly or indirectly. But more importantly, it has expanded that definition to include location data, IP addresses, biometric information, and genetic information. So if you have that data and you say, well, I can't really tie that back to a person, if you can go through any kind of technology process to be able to tie it back to a person, it is now covered under GDPR. So one of the concepts under GDPR is privacy by design. It's saying that you have to think about privacy very similarly to how we've always thought about security up front: when you're investing in new technology, when you're investing in a new program, you need to think about, going back to what I said earlier, what data do you need? What problem are you trying to solve? What do you absolutely have to have to make this technology work? And then, what is the impact going to be on personal data? So I absolutely agree, security is incredibly important, because you need to build a fortress around that data. But if you haven't dealt with the privacy component of GDPR and other data protection laws, security would be like me going down and robbing a bank, coming home and putting that money in the vault in my house, locking it up, and saying that money's secure, no one can get to it. When the police come knocking on my door, they're not going to care that I have that money locked in a vault. That's not my money. And you have to think about personal data the same way, and certainly healthcare information the same way.
You need the consent of the individual, and you need to articulate what you're going to do with that data; be transparent. So the laws are not trying to inhibit or prohibit technology, they're just trying to get you to think about-- >> So Michael, as we think about how GDPR specifically impacts the healthcare industry, we talked about this a little bit at dinner. We're talking about medical records; doctors and medical professionals like to keep as much data as possible. Researchers want to get to as much data as possible. What are some of the ramifications, or considerations at least, for the medical industry? >> Yeah, absolutely. So you know, from my standpoint, at the end of the day, when we look at and focus on our security governance, we go over the same fundamentals as you were describing. What information is needed to access that patient's record? What is needed from the physician's standpoint? What is needed from the nurse's standpoint? Because the thing is, we don't just open it up to everyone; we come at it by different specific job functionalities, you know. We prioritize and put in different levels: this is the level of data this individual needs, versus this individual. And the beauty of what we've focused on a lot, too, is that we developed an overall security governance committee that focuses on the specific data requirements from HIPAA, HITECH, and the different laws that we're focused on in healthcare. And you know, we've really started focusing a lot on two-factor authentication for accessing information, so we're utilizing some of those VASCO tokens and RSA tokens, with algorithm changes, et cetera. But at the end of the day, the main focus is: what information do you need? And the bottom line, too, is it has to have that specific culture of understanding that cyber security and data are very important.
And the thing is, from a physician's standpoint, they want access to everything, literally everything, and that's understandable, because these individuals are saving lives. But there has to be governance in place, and they have to have the understanding that this can be an issue moving forward: these are the potential problems of a breach that could happen, and this is the information that you need. If there's more information that is needed, it will go through the security compliance governance committee. >> It's a hard job. They want the nirvana, they want the holy grail, they want everything right there. Thanks for coming on; appreciate you making us aware of the data privacy issues. Sheila, thanks so much for coming on. >> Thank you. >> Michael, I'll give you guys the final word on how management teams and executives should align around this important objective. Because there's some inconvenience happening in the short term, but automation is coming; machine learning, all this great stuff is being promised. Looks good off the tee, as they say in golf. But the reality is that there's a lot of lip service out there. So the taglines: oh, we're strong on privacy. Walking the talk is about having a position, not just the tagline or the talking points, having a positioning around it first, and getting executive alignment. So final point: what's your advice to folks out there who are thinking this through hard? Is it a matter of reducing choices, evaluation? What are your thoughts on how to attack and think about, and start moving the ball down the field on, privacy? >> Well, that's a great question. I think certainly at NetApp, as you mentioned earlier, our executive team, and certainly George Kurian, our CEO, absolutely has a philosophical belief in that fundamental right to privacy, and respects the fact that privacy is key to what we do.
It has become a competitive advantage, almost in an accidental way, because we take it so seriously. It's a matter of balance. Absolutely, we need to take advantage of new technology. We're a technology company, we're building technology, but we also have to respect the fact that we operate around the world, and there are laws that we have to comply with, and those laws dictate what data we can and cannot have, and what we can do with that data. So it's that balance: data is our greatest asset, and we need to protect it, but it can also be our greatest detriment if we're not treating it in a respectful manner, and if we're not building technology that enables our customers to protect that fundamental right to privacy. >> Michael, from a management team perspective, obviously, functioning with alignment implies a well-oiled machine. Not always the case these days. But how do you get there? What's your advice? >> You know, my advice is speak the language. CEOs, CFOs, administration, they basically don't want to hear tech lingo at times, okay? Have them understand the basic fundamentals of what cyber security is, what it can do to the operations of an organization, what a breach can do financially to an organization. Really have those put in place. Bring that story to the Board of Directors, have them focus on the fundamentals: this is why we're protecting our information, and this is why it is so critical to keep this information safe. Because the thing is, if you don't know how to tell the story, and if you don't know how to sell it, and really sell it to the point, you will not be successful-- >> That's a great point, Michael. And you know, we hear all the time too, the trend now is, IT has always been kind of a cost center. Security and data governance around privacy should be looked at not so much as a profit center, but as, you could go out of business.
So you don't treat it as maximizing your efficiency on costs; the effectiveness of privacy is a stay-in-business table stake. And that has an impact on revenue, so it's quasi-top line. >> Well, absolutely. If you think about the sanctions under the new GDPR alone, you could have one data privacy violation where the sanction could be equal to four percent of your annual global turnover. So it is something-- >> It's a revenue driver. >> It's a revenue driver. It's something you need-- >> It's a revenue saver. >> Yeah. Well, for some companies-- >> It's a revenue saver. >> It's become a revenue driver. Yeah, absolutely. >> Most people think P&L: the cost structure, the profit center, net profit, and then sales. This is a new dynamic where risk management actually is a profit objective. >> Absolutely. >> Absolutely. >> Guys, great topic. We should continue this back in California. >> I'd love to. >> Michael, thanks for coming on and sharing the CIO perspective. >> Thank you very much. >> Great content. It's The Cube, breaking it down here, getting all the data and keeping it public. That's our job: to make all our data public, sharing it on SiliconANGLE.com and TheCube.net. More live coverage here in Las Vegas with NetApp Insight 2017, after this short break. (electronic theme music) >> Narrator: Calling all barrier-breakers: status quo-smashers.
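The tiered, role-based access Michael describes, where each job function sees only the patient data it needs, can be sketched as a simple field filter. This is an illustrative sketch only; the role names and field groupings below are assumptions made for the example, not the hospital's actual policy.

```python
# Role-based filtering of a patient record: each role maps to the set of
# fields it is allowed to see, and everything else is withheld. Unknown
# roles default to seeing nothing (deny by default).
ROLE_FIELDS = {
    "physician": {"name", "dob", "diagnoses", "medications", "lab_results", "notes"},
    "nurse": {"name", "dob", "medications", "lab_results"},
    "billing": {"name", "dob", "insurance_id"},
}

def view_record(record, role):
    """Return only the fields of a patient record the given role may see."""
    allowed = ROLE_FIELDS.get(role, set())  # deny by default for unknown roles
    return {field: value for field, value in record.items() if field in allowed}
```

The deny-by-default lookup is the key design choice: a new or misspelled role gets an empty view rather than full access, which matches the governance-first posture described in the interview.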

Published Date : Oct 4 2017



Brian McDaniel, Baylor College of Medicine | Pure Accelerate 2017


 

>> Announcer: Live from San Francisco, it's theCUBE, covering PURE Accelerate 2017. Brought to you by PURESTORAGE. >> Welcome back to PURE Accelerate. This is theCUBE, the leader in live tech coverage. I'm Dave Vellante with my co-host Stu Miniman. This is PURE Accelerate. We're here at Pier 70. Brian McDaniel is here; he's an infrastructure architect at the Baylor College of Medicine, not to be confused with Baylor University in Waco, Texas, anymore. Brian, welcome to theCUBE. >> Thanks for having me, appreciate it. >> You're very welcome. Tell us about the Baylor College of Medicine. >> So, Baylor College of Medicine is, first and foremost, a teaching facility, but also the leader in research and development for healthcare in the Texas Medical Center in Houston, Texas. We currently employ roughly 1,500 physicians, and they occupy a multitude of institutions, not only at Baylor but other facilities and hospitals in and around the Texas Medical Center. >> So, it's kind of a healthcare morning here, Stu. We've been talking about electronic medical records, meaningful use, the Affordable Care Act, potential changes there, HIPAA, saving lives. These are big issues. >> We're not at the HIMSS Conference, Dave? >> We should be at HIMSS. So these are big issues for any organization in healthcare; it just exacerbates the challenges on IT. So, I wonder if you can talk about some of the drivers in your business, compliance, and new tech, and maybe share with us some of the things that you're seeing. >> Absolutely. So first and foremost, we are an Epic shop. That's our EMR. So, from an enterprise and clinical operations standpoint, that is our number one mission-critical application. It provides electronic medical records to our staff, regardless of where they're physically located. That alone is a demanding type of solution, if you will, with the mobility aspect of it.
Delivering that in a fast manner and a repeatable manner is of the utmost importance to our physicians because they're actually seeing patients, getting to your records, and being able to add notes and collaborate with other institutions if necessary. So, time to market is very important and accessibility is also up there. >> Right, so you mentioned that collaboration, and part of that collaboration is so much data now, being able to harness that data and share it. Data explodes everywhere, but in healthcare there's so much data, to the extent we start instrumenting things. What are you guys doing with all that data? >> Right now, it lives within the clinical application, right in Epic, but as you pointed out, that is where the value is, that is where your crown jewels, so to speak, are at. That data is now being looked at as a possible access point outside of the clinical operation. So, its environment is going to be even more important going forward, when you look to branch out into some of the basic sciences, in more of a research capacity, to gain access to that clinical data. Historically, it has been problematic for researchers to gain access to that information. >> So, in the corporate world we like to think, from an IT perspective, you've got to run the business, you've got to grow the business, you've got to transform the business. It's a little different in healthcare. You kind of have to comply. A lot of your time is spent on compliance and regulation changes and keeping up with that. And then there's got to be a fair amount that's at least attempting to do transformation and kind of keeping up with the innovations. Maybe you could talk about that a little bit. >> Absolutely, particularly on the innovation side, we work closely with our partners at Epic and we work to decide roadmaps and how that fits into the Baylor world. Case in point, a year ago we were set to go to the new version of Epic, which was 2015.
And Epic is nice enough to lay out requirements for you and say, here's what your system needs to meet in order to comply with Epic standards. So, they give you a seal of approval, so to speak. And there's monetary implications for not meeting those requirements. So it's actually dollars and cents. It's not just, we want you to meet this. If you do, then there's advantages to meeting it. So, they provided that to us and we went through the normal testing phases and evaluations of our current platform, both from compute and storage. And honestly, we struggled to meet their requirements with our legacy systems. So the team was challenged to say, well, what can we do to meet this? We have our historical infrastructures, so if we're going to deviate from that, let's really deviate and look at what's available in the market. So, Flash comes to mind immediately. So, there's a multitude of vendors that make Flash storage products. So we started meeting with all of 'em, doing our fact finding and our data gathering. First and foremost, they have to be Epic certified. That eliminated a couple of contenders right off the bat. Right? You're not certified. >> I would expect some of the startups especially. >> It did. Some of the smaller Flash vendors, for example, one of 'em came in and we said, well, what do you do with Epic? And they said, what's Epic? And you kind of scratch your head and say thank you. >> Thank you for playing. >> Here's the door. So, it eliminates people, but then when we met with PURE, and we talked to them and met 'em, you get to really know the family and the culture that they bring with the technology. Yes, it's got to be fast, but Flash is going to be fast. What else can you do?
And that's where you start learning about how it was born on Flash, how it was native to Flash, and so you get added benefits to the infrastructure by looking at that type of technology, which ultimately led us there, to where we're at, running Epic on our Flash arrays. >> And Brian, you're using the FlashStack configuration of converged infrastructure. It sounds like it was PURE that led you that way as opposed to Cisco? Could you maybe walk us through that? >> That's very interesting, so we're a UCS shop. We were before PURE. So when PURE came in, the fact that they had a validated design with the FlashStack infrastructure made it all that much easier to implement the PURE solution, because it just is modular enough to fit in with our current infrastructure. That made it very appealing, that we didn't have to change or alter much. We just looked at the validated design that says, here's your reference architecture, how it applies to the FlashStack. You already have UCS. We love it, we're a big fan. And here's how to implement it. And it made the time to market, to get production workloads on it, very quick.
So, we would like to think we've made their world and life a little bit more enjoyable, 'cause those weekends now, they're not having to babysit the Epic refreshes. Back to the point of Epic experience, that was instrumental in the decision making, from support with the PURESTORAGE help desk, awareness of what it takes to run Epic on PURE, and then going forward, knowing that there's a partnership behind Epic and PURE and certainly Baylor College of Medicine as we continue to look at the next versions of Epic, whether that's 2018 and on to 2020, whatever that decision is, we know that we have a solid foundation now to grow. >> And Brian, I'm curious, you've been a Cisco shop for a while, Cisco has lots of partnerships, as well as a hyper-converged offering that they sell themselves. What was your experience working with Cisco, and do they just let you choose, and you said, I want PURE and they're like, great? Do you know? What was that like? >> To your point, there's validated designs for many customers and Cisco is kind of at the hub of that, that core with the compute and memory of the blade systems, the UCS. They liked the fact that we went with PURE 'cause it does mean a validated design. And they have others with other vendors. The challenge there is how do they really integrate with each other, from tools to possibly automation down the road, and how do they truly integrate with each other. 'Cause we did bring in some of the other validated design architecture organizations, and I think we did our due diligence and looked at 'em to see how they differentiate between each other. And ultimately, we wanted something that was a new and different approach to storage. It wasn't just layering your legacy OS on a bunch of Flash drives and calling it good. Something that was natively born to take advantage of that technology. And that's what ultimately led us to PURE.
You heard Scott Dietzen talking this morning. What's your perspective on hyper-convergence? >> Hyper-converged is one of those buzzwords that I think gets thrown out there kind of off the cuff, if you will. But people hear it and get excited about it. But what type of workloads are you looking to take advantage of it? Is it truly hyper-converged, or is it just something that you can say you're doing because it sounds cool? I think to some degree, people are led astray by the buzzwords of the technology, where they get down to say, what's going to take advantage of it? What kind of application are you putting on it? If your application, in our case, was written by a grad student 20 years ago and a lab is still using it, does it make sense to put it on hyper-converged? No, because it can't take advantage of the architecture or the design. So, in a lot of ways, we're waiting and seeing. And the reason we didn't go to a hyper-converged platform is, a) Epic support, and b) we were already changing enough. To stay comfortable with the environment, and knowing that come Monday morning doctors will be seeing patients, and we're already changing enough, that was another layer that we chose not to change. We went with a standard UCS configuration that everyone was already happy with. That made a significant difference from an operational perspective. >> Essentially, your processes are tightly tied to Epic and the workflow associated with that. So from an infrastructure perspective, it sounds like you just don't want it to be in the way. >> We don't. The last thing we want is infrastructure getting in the way. And quite frankly, it was in the way. Whether that was meeting latency requirements or IOPS requirements from the Caché database or the Clarity database within the Epic system, or it was just everything taking a little bit longer than they expected.
We don't want to be that bottleneck, if you will, we want them to be able to see patients faster, run reports faster, gain access to that valuable data in a much faster way, to enable them to go about their business and not have to worry about infrastructure. >> Brian, PURE said that they had, I believe it's like 25 new announcements made this morning, a lot of software features. Curious, is there anything that jumped out at you, that you've been waiting for, and anything still on your to-do list that you're hoping for PURE, or PURE and its extended ecosystem, to deliver for you? >> Great question, so at the top of that list is the replication of the arrays, whether that's in an offsite data center or a colo, and how that applies to an Epic environment that has to go through this flux of refreshes, and from a disaster or business continuity standpoint, we're actively pursuing that, and how that's going to fit with Baylor. So, we're very excited to see what our current investment, free of charge by the way, once you do the upgrade to 5.0, is able to take advantage of, with replication being one of those features. >> And then, I thought I heard today, Third Sight is a service. Right? So you don't have to install your own infrastructure. So, I'm not sure exactly what that's all about. I got to peel the onion on that one. >> To be determined, right? When we look at things like that, particularly with Epic, we have to be careful, because that is the HIPAA, PHI, that's your records, yours and mine, medical records, right? You just don't want that, if I told you it's going to be hosted in a public cloud. Wait a minute. Where? No it's not. We don't want to be on the 10 o'clock news, right? However, there are things like SAP HANA and other enterprise applications where we certainly could look at leveraging that technology. >> Excellent, well listen, thank you very much Brian for coming on theCUBE.
We appreciate your perspectives and sort of educating us a little bit on your business and your industry anyway. And have a great rest of the show. >> Yeah, thank you very much. Appreciate it. >> You're welcome. Alright keep it right there everybody. This is theCUBE. We're back live right after this short break from PURE Accelerate 2017. Be right back.

Published Date : Jun 13 2017

SUMMARY :

Brought to you by PURESTORAGE. not to be confused with Baylor University You're very welcome. and so they occupy a multitude of institutions, So, it's kind of' healthcare morning here Stu. So, I wonder if you can talk about some of the drivers and getting to your records and being able to add notes there's so much data to the extent we start for the research to be done accessing that information. and in kind of keeping up with the innovations. And Epic is nice enough to lay out requirements for you And you kind of scratch your head and you get to really know that the family and the culture It sounds like it was PURE that lead you that way And it made the time to market, the environmental refreshes that they have to do. And you do right? and certainly Baylor College of Medicine as we continue and do they just let you choose and you said, They liked the fact that we went with PURE What's your perspective on hyper-convergence? kind of off the cuff if you will. and the workflow associated with that. and not have to worry about infrastructure. or PURE and it's extended ecosystem to deliver for you? and how that applies to an Epic environment So you don't have to install your own infrastructure. because that is the HIPAA, PHI, that's your records, Excellent, we listen, thank you very much Brian Yeah, thank you very much. This is theCUBE.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Brian | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Brian McDaniel | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Baylor College of Medicine | ORGANIZATION | 0.99+
PURE | ORGANIZATION | 0.99+
2015 | DATE | 0.99+
Scott Dietzen | PERSON | 0.99+
Baylor University | ORGANIZATION | 0.99+
Epic | ORGANIZATION | 0.99+
2020 | DATE | 0.99+
Stu Miniman | PERSON | 0.99+
Affordable Care Act | TITLE | 0.99+
2018 | DATE | 0.99+
25 new announcements | QUANTITY | 0.99+
Monday morning | DATE | 0.99+
Baylor | ORGANIZATION | 0.99+
10 o'clock | DATE | 0.99+
Houston Texas | LOCATION | 0.99+
a year ago | DATE | 0.99+
HIPAA | TITLE | 0.99+
Waco Texas | LOCATION | 0.99+
First | QUANTITY | 0.99+
San Francisco | LOCATION | 0.99+
both | QUANTITY | 0.99+
today | DATE | 0.98+
theCUBE | ORGANIZATION | 0.98+
Medical Center | ORGANIZATION | 0.98+
PURE Accelerate | ORGANIZATION | 0.98+
SAP HANA | TITLE | 0.98+
UCS | ORGANIZATION | 0.98+
Pier 70 | LOCATION | 0.97+
Flash | TITLE | 0.96+
one | QUANTITY | 0.96+
first | QUANTITY | 0.94+
Dave | PERSON | 0.93+
Third Sight | ORGANIZATION | 0.93+
20 years ago | DATE | 0.93+
HIMSS | ORGANIZATION | 0.89+
this morning | DATE | 0.88+
1,500 physicians | QUANTITY | 0.84+
Texas | LOCATION | 0.84+
Accelerate 2017 | COMMERCIAL_ITEM | 0.84+
PHI | TITLE | 0.82+

John Hodgson, Optum Technology - Red Hat Summit 2017


 

>> (Narrator) Live, from Boston, Massachusetts it's theCUBE, covering Red Hat Summit 2017, brought to you by Red Hat. >> Welcome back to Boston everybody, this is Red Hat Summit, and this is theCUBE, the leader in live tech coverage. I'm Dave Vellante, with my cohost Stu Miniman, and John Hodgson is here, he's the Senior Director of IT Program Management at Optum Technology. John, good to see ya. >> Good, it's good to be here. >> Fresh off the keynote, we were just talking about the large audience, a very large audience here. And Optum, you described a little bit at the keynote what Optum is within healthcare, sort of the technology arm. Which is not super common but not uncommon in your world. But describe Optum and where it fits. >> So in the grand scheme of things within UnitedHealth Group, you know, we have the parent company, of course, you know the Health Group, our insurance side, that does insurance, whether it's public sector for large corporations, as well as community and state government type work, as UnitedHealthcare. They do all that, and then Optum is our technology side. We do really all the development, both for supporting UHC as our main customer, you know, they're truly our focus, but we also do a lot of commercial development as well for UnitedHealthcare's competitors. So big, big group, as I mentioned in the keynote. Over 10,000 developers in the company, lots of spend, I think in the last year our internal IT budget alone was like $1.2 billion in just IT development capital. So it's huge. >> Dave: Mind-boggling. >> John, you've got that internal Optum Cloud. Can you give us just kind of the breadth and depth, you said $1.2 billion there. What does that make up, what geographies does that span, how many people support that kind of environment? >> As far as numbers of people supporting it, I think we've got a few hundred in our Enterprise Technology Services Group that supports Optum Cloud.
We started Optum Cloud probably a half a dozen years ago, and it's gone through its different iterations. And part of my job right now is all about Enterprise Cloud adoption and migration. So, we started with our own environment, we call it UCI, United, it was supposed to be Converged Infrastructure, but I call it our Cloud Infrastructure, that's really what it is. And we've continued to enhance that. So over the last few years, I think about three and a half to four years ago, we brought in Red Hat and OpenShift. We're on our third iteration of OpenShift. Very, very stable platform for us now. But we also have Azure Stack in there as well. I think, even as Paul and those guys mentioned in the keynote, there's a lot of different things that you can kind of pull from each one of the technology providers to help support what we're doing, kind of take the best of breed from each one of them, and use them in each solution. >> Organizations are always complaining that they spend all this money on keeping the lights on, and they're trying to make the shift, and obviously Cloud helps them do that, and things like OpenShift, etc. What's that like in your world? How much of your effort is spent on maintenance and keeping the lights on? Sounds like you've got a lot of cool, new development activity. Can you describe that dynamic for us? >> Yeah, we've got a really good support staff. Our group, SSMO, when we build an application, they kind of take it back over and run everything. We've got a fabulous support team in the background. And to that end, it's on both sides, right? We have our UnitedHealthcare applications that we build that have kind of their own feature set, because of what it's doing internally for us, versus what we do on the OptumInsight side, where it's more commercial in nature. So they have some different needs. Some of the things that we're developing, even for Cloud Scaffolding that I mentioned in the keynote.
We're kind of working on both sides of the fence there, to hit the different technologies that each one of them really needs to be successful, but doing it in a way that it doesn't matter if you're on one side of the fence or the other, it's a capability that everybody will be able to use. So if there's a pattern on one side that you want to be able to use for a UHC application, by all means, go ahead and grab it, take it. And a lot of what we're doing now is even kind of crowdsourcing things, and utilizing the really super intelligent people that we have, over 10,000 developers. And so many of them, we've got a lot of legacy stuff. So there's some old-school guys that are still doing their thing, but we've got a lot of new people. And they want to get their hands on the new, fresh stuff, and experience that. So there's really a good vibe going on right now, with how things are changing, all the TDP folks that we're bringing in. A lot of fresh college grads and things. And they love to see the new technologies, whether it's OpenShift or whatever. A lot are really getting into DevOps. Trying to make that change in a big organization is difficult, we've got a little ways to go with that. But that's kind of next up. >> You're an interesting case study, because you've got a lot of the old and a lot of cool innovation going on. And is it, how do you decide when to go, because DevOps is not always the answer. Sometimes waterfall is okay, you know. So, how do you make that determination, and where do you see that going? >> That's a great question, that's actually part of what my team does. So my specific team is all about Cloud adoption and migration, so our charter is really to work across the enterprise. So whether it's OptumInsight, OptumRx, UnitedHealthcare, we are working with them to evaluate their portfolios of applications to figure out legacy applications that we have that are still strategic. They've got life in them, they've got business benefit.
And we want to be able to take advantage of that, but at the same time, with some of these monolithic applications, we look at how we can take that application, decompose it down into microservices and APIs, things like that, to make it available to other applications that maybe are just greenfield, are coming out now, but still need that same technology and information. So that's really what my team is doing right now. So we sit down with those teams and go through an analysis, help them develop a road map. And sometimes that road map is two or three years long. Getting to fully cloud from where they're at right now in some of these legacy applications is a journey. And it costs money, right? There's a lot of budget concerns and things like that that go with it. So part of what we help develop is a business case for each one of those applications, so that we can support them in going back and getting the necessary capital to do the cloud migrations and the improvements, and really the modernization of their applications. We started the program a couple of years ago and found that if you want to hang your hat on just going from old physical infrastructure, some of the original VMs that we had, and just moving over to cloud infrastructure, whether that's UCI, OpenShift, Azure, whatever, and you're going to do your business case on that, you're going to be writing a lot of business cases before you get one approved. It's all about modernizing the applications. So if you fold in the move to new infrastructure, cloud infrastructure, along with the ability to modernize that application, get them doing agile development, getting down the DevOps path, looking at automated testing, automated deployment, zero downtime deployments. All of those things, when you add them up together and say, okay, here's what your real benefit looks like.
And you're able to present that back to the business, and show them speed to market, speed to value is a new metric that we have. Getting things out there quickly. We used to do quarterly releases, or even biannual releases. And now we're at monthly, weekly, some of our applications that are relatively new, Health4Me, if you go to the App Store, that's kind of our big app on the App Store. There's updates on a very frequent basis. >> So that's the operating model, really, that you're talking about, essentially, driving business value. We had a practitioner on a couple weeks ago, and he said, "If you just lift and shift to the cloud, "and you don't change your operating model, "you won't get a dime." >> Stu: You're missing the boat. >> Maybe there's something, some value there, a little faster, but you're talking about serious dollars, if you can change the operating model. And that's what you've found? >> Yeah absolutely, and that's the, it's a shift, and you've got to be able to prove it to the business that there's benefit there, and sometimes that's hard. Some of these cloud concepts and things are a little nebulous, so-- >> It's hard 'cause it's soft. >> It's soft, right, yeah, I mean, you're putting the business case together, the hard stuff is easy to document, but when you're talking about the soft benefits, and you're trying to explain to them the value that they're going to get out of their team switching from a waterfall development over to agile and DevOps, and automated testing and things like that, where I can say, "Hey listen, "you know your team over here that has been, "you know we took them out of the pocket, "from actually doing their day jobs for the last week, "because they needed to test this new version? "If I can take that out of the mix, "and they don't have to do that anymore, "and they can keep on doing what they're doing "and not get a week behind, what value is that for you?" And all of a sudden they're like, "Oh really?
"We don't have to do that anymore?" I'm like, "No, we can create test scripts and stuff. "We can automate your deployment. "We can make it zero downtime. "We have," there's an application that we're working on now that has 19,000 individual desktop deployments. And we're going to automate that, turn it into a software as a service application, host it on OpenShift, and completely knock that out. I mean deployments out to 19,000 people take weeks to get done. We only do a couple thousand a week, because there's obviously going to be issues. So now you've got helpdesk tickets, you've got desktop technicians that are going round, trying to fix things, or dialing in, remoting into somebody's desktop to try to help figure that all out. We can do the whole deployment in a day, and everybody logs in the next day, and they've got the new version. That kind of value in creating real cloud-based applications is what's driving the benefit for us. And they're finally starting to really see that. And as we're doing it, more application product owners are going, "Okay, now we're getting some traction. "We heard what you did over here. "Come talk to us, and let's talk "about building a road map and figuring out what we can do." >> John, one of the questions I got from the community after watching you keynote was, they want to understand how you handle security and enforce compliance in this new cloud development model. (laughs) >> That's beyond me, all I can tell you is that we have one of the most secure clouds out there. Our private cloud is beyond secure. We're working right now to try to get the public hybrid cloud space with both AWS and Azure, and working through contracts and stuff right now. But one of the sticking points is our security has to be absolutely top notch, if we're going to do anything that has HIPAA-related data, PHI, PII, PCI, any of that, it has got to be lock-solid secure. 
And we have a tremendous team led by Robert Booker, he's absolutely fabulous, I mean we're, our whole goal, security-wise, is don't be the next guy on the front page of the Wall Street Journal. >> You mentioned public cloud, how do you make your decisions as to what application, what data can live in which public cloud? You said you've got Azure Stack, and you've got OpenShift. How do you make those platform decisions? >> Well right now, both OpenShift and Azure Stack are on our internal private cloud. So we're in the process of kind of making that shift to move over towards public and hybrid cloud. So I'm working with folks on our team to help develop some of those processes and determine what's actually going to be allowed. And I think in a lot of cases the PHI and protected data is going to stay internal. And we'll be able to take advantage of hosting certain parts of an application on public cloud while keeping other parts of the data really secure and protected behind our private cloud. >> Red Hat made an announcement this morning with AWS, with OpenShift. >> Sounds like that might be of interest to you, would that impact what your doing? >> Absolutely, yeah, in fact I was talking with Jim and Paul back behind the screen this morning. And we were talking about that and I was like wow that is a game changer. With what we're thinking about doing in the hybrid cloud space, having all of the AWS APIs and services and stuff available to us. Part of the objection that I get from some folks now is knowing that we have this move toward public and hybrid cloud internally, and the limitations of our cloud. We're never going to be, our private Optum Cloud is never going to be AWS or Azure, it's just not. I mean they've spent billions of dollars getting those services and stuff in place. Why would we even bother to compete with that? So we do what we do well, and a big portion of that is security. 
But we want to be able to expand, and take advantage of the things that they have. So that's, this whole announcement of being able to take advantage of those services natively within OpenShift? If we're able to expose that, even internally, on our own private cloud? That's going to take away a lot of the objections, I think, from even our own folks, who are waiting to do the public hybrid cloud piece. >> When the Affordable Care Act hit, did your volume spike? And as things stand, there's a tug of war now in Washington, it could change again, does that drive changes in your application development in terms of the volume of requests that come in, and compliance things that you have to adhere to? And if so, does having a platform that's more agile, how does that affect your ability to respond? >> Yeah it does, I mean when we first got into the ACA, there were a number of markets that we got into. And there was definitely a ramp-up in development, new things that we had to do on the exchanges. Stuff like that. I mean, we even had groups from Optum that were participating directly with the federal government, because some of their exchanges were having issues, and they needed some help from us. So we had a whole team that was kind of embedded with the federal government, helping them out, just based on our experience doing it. And, yeah, having the flexibility, in our own cloud, to be able to spin up environments quickly, shut them down, all that, really it's invaluable.
So what is next for you? As you break out your telescope, what do you see? >> God, I don't know, I mean I never would have predicted containers. >> Even though they've been around forever, we-- >> Yeah I mean when we first went to VMs, you know back in the day I was a guy in the server room, racking and stacking servers and running cables, and doing all that, so I've seen it go from one extreme to the next. And going from VMs was a huge switch. Building our own private cloud was amazing to be a part of, and now getting into the container side of things, hybrid cloud, I think for us, really, the next big step for us is the hybrid cloud. So we're in the process of getting that, I assume by the end of this year, early next, we'll be a few steps into the hybrid cloud space. And then beyond that, gosh I don't know. >> So that's really extending the operating model into that hybrid cloud notion, bringing that security that you talked about, and that's, you got a lot of work to do. >> John: That's a big task in itself. >> Let's not go too far beyond that, John. Alright well listen, thanks for coming on theCUBE, it was really a pleasure having you. >> Yeah, thanks for having me guys, appreciate it. >> You're welcome, alright keep it right there everybody, Stu and I will be back with our next guest. This is theCUBE, we're live from Red Hat Summit in Boston. We'll be right back. (electronic music)

Published Date : May 3 2017


Bina Hallman, IBM & Tahir Ali | IBM Interconnect 2017


 

>> Narrator: Live from Las Vegas, it's the Cube covering Interconnect 2017, brought to you by IBM. >> Welcome back to Interconnect 2017 from Las Vegas everybody, this is the Cube, the leader in live tech coverage. Bina Hallman is here, she's a Cube alumna and the vice president of offering management for storage and software defined at IBM, and she's joined by Tahir Ali, who's the director of Enterprise Architecture at the City of Hope Medical Center. Folks, welcome to the Cube- >> Tahir: Thank you very much. >> Thanks so much for coming on. >> Bina: Thanks for having us. >> So Bina, we'll start with you, you've been on the Cube a number of times. >> Yes. >> Give us the update on what's happening with IBM and Interconnect. >> Yeah, no it's a great show. Lots of exciting announcements and such. From an IBM storage perspective we've been very busy. Filling out our all-flash portfolio. Adding a complete set of hybrid cloud capabilities to our software defined storage. It's been a great 2016 and we're off to a great start in 2017 as well. >> Yeah [Inaudible] going to be here tomorrow >> That's right. so everybody's looking forward to that. So Tahir, let's get into City of Hope. Tell us about the organization and your role. >> Sure, so City of Hope is one of the forty-seven comprehensive cancer centers in the nation. We deal with cancer of course, HIV, diabetes and other life threatening diseases. We are maybe 15 to 17 miles east of Los Angeles. My role in particular, I'm a Director of Enterprise Architecture, so all new technologies, all new applications that land on City of Hope, we go through all the background. See how the security is going to be, how it's going to be implemented in our environment, if it's even possible to implement it. Making sure we talk to our business owners, figure out if there's a disaster recovery requirement, if they have an HA requirement, if it's a clinical versus a non-clinical application.
So we look at a whole stack and see how a new application fits into the infrastructure of City of Hope. >> So you guys do a lot of research there as well or? >> Absolutely. >> Yeah. >> So we are research, we are the small EDU and we are the medical center so- >> So a lot of data. >> A whole lot of data. Data just keeps coming and keeps coming and it's almost like a never-ending stream of data. Now with the data it's not only just data- Individual data is also growing. So a lot of imaging that happens for cancer research, or a cancer medical center, gets bigger and bigger per patient as three dimensional imaging is here. We look at resolution that is so much more today than it used to be five years ago. So every single image itself is so much bigger today than it used to be five years ago. Just a sheer difference in the resolution and the dimensions of the data. >> So what are the big drivers in your industry, and how is it affecting the architecture that you put forward? >> Right, so I think there are maybe two or three huge conversion points, or the pivot points, that we see today. One of them is just the data stream as I mentioned earlier. The second is, because of a lot of the PHI and HIPAA data that we have today, security is a huge concern in a lot of the healthcare environment. So those two things, and it's almost like a catch-22. More data is coming in, you have to figure out where you're going to put that data. But at the same time you've got to make sure every single bit is secured enough. So there's a catch-22 where it's going, where you have to make sure that data keeps coming and you keep securing the same data. Right so, those two things are what we see pivoting the way we strategize around our infrastructure. >> It's hard, they're in conflict in a way, >> Tahir: Absolutely. >> Because you've got to lock the data up but then you want to provide accessibility... >> Tahir: Absolutely. >> as well.
So paint a picture of your infrastructure and the applications that it's supporting. >> Right, so our infrastructure is mainly in-house, and our EMR is currently off-prem. A lot of clinical and non-clinical applications also stay in-house with us in our data center on-prem. Now we are kind of starting to migrate to cloud technologies more and more, as just things are ballooning. So we are in that middle piece where some of our infrastructure is in-house, and slowly we are migrating to cloud. So we are at a hybrid currently. And as things progress I think more and more is going to go to the cloud. But for a medical center security is everything. So we have to be very careful where our data sits. >> So Bina when you hear that from a client >> Bina: Mm-hmm (affirmative) >> how do you respond? And you know, what do you propose? >> Bina: Yeah. >> How does it all... >> Yeah well- >> come about. >> You know as we see clients like Tahir, and some of the requirements in these spaces, security is definitely a key factor. So as we develop our products, as we develop capabilities, we ensure that security is a number one focus area for us. Whether it's for the on-prem storage, whether it's for the data that's in motion, moving from on-prem into the cloud, and secured completely all the way through, where the client has control of the security, the keys, et cetera. So a lot goes into making sure, as we architect these solutions for our clients, that we focus on security. And of course some of the other requirements, industry specific requirements, are also very important and we focus in on those as well. Whether it's regulatory or compliance requirements, right. >> So from a sort of portfolio standpoint, what do you guys do when there's all kinds of innovations over the last four or five years coming in with flash, we heard about object stores this morning, we got cloud, you got block, you've got file, what are you guys doing?
>> So we do a lot of different things, so from having filers in-house to doing block storage from- And the worst thing now these days with big data is, as the data is growing the security needs are growing, but the end result with the researchers and our physicians is the data availability needs to be fast. So now comes a bigger catch-22, where the data is so huge but at the same time they want all of that very quickly at their fingertips. So now what do you do? That's where we bring in a lot of the flash to upfront it. 10 to 12 percent of our infrastructure has flash in the front, this way all the rendering, or all the writes that happen or- First land on the flash. So everybody who writes feels like it's a very quick write. But there are petabytes and petabytes behind the scene that could be on-prem, it could be on the cloud, but they don't need to know that. Everything lands so fast that it looks like it's just local and fast. So there's a lot of crisscross that is happening, and it started maybe four, five years ago: the speed of data is not going to be slow. The size of data is increasing like crazy, and then security is becoming a bigger and bigger concern, as you know. Maybe every month or month and a half there's a breach somewhere that people have to deal with. So we have to handle all of that in one shot. So you know, it's more than just infrastructure itself. There's policies, there's procedures, there's a lot that goes around. >> So when you think about architecting, obviously you think about workloads and- >> Tahir: Of Course. >> what the workload requirement is, it's not a one-size-fits-all. >> Tahir: Right right. >> So where do you start, do you start with- >> Tahir: Sure. >> Sort of, you know a conversation with the business? >> Sure, sure. >> How much money do you got? >> So we don't really deal with the money at all. We provide the best possible solution for that business requirement.
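The write path Tahir describes here, where a small flash tier (10 to 12 percent of capacity) absorbs every incoming write and data is destaged to a much larger capacity tier behind the scenes, can be sketched as a toy model. All class and method names below are hypothetical illustrations, not any vendor's actual API:

```python
# Toy sketch of a flash-fronted write path: writes always land on a small
# flash tier first, and the oldest data is destaged to a large capacity
# tier so the writer never waits on slow media. Names are illustrative.

from collections import deque

class FlashFrontedStore:
    def __init__(self, flash_capacity):
        self.flash_capacity = flash_capacity   # blocks the flash tier holds
        self.flash = deque()                   # flash-resident block ids, oldest first
        self.flash_index = {}                  # block_id -> data on flash
        self.capacity_tier = {}                # large, slower backing store

    def write(self, block_id, data):
        """Writes always land on flash, so they complete quickly."""
        self.flash.append(block_id)
        self.flash_index[block_id] = data
        if len(self.flash) > self.flash_capacity:
            self._destage()

    def _destage(self):
        """Move the oldest flash-resident block down to the capacity tier."""
        oldest = self.flash.popleft()
        self.capacity_tier[oldest] = self.flash_index.pop(oldest)

    def read(self, block_id):
        """Reads hit flash when possible, else fall back to capacity."""
        if block_id in self.flash_index:
            return self.flash_index[block_id]
        return self.capacity_tier[block_id]
```

The reader of old data still gets an answer; it just comes from the petabytes behind the scenes rather than the flash front.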
So the conversation happens, "tell us what you're looking for." "We're looking for a very fast XYZ." "Okay tell us what exactly you need." "Here's the application, we want it available all the time, and this is what it's going to look like, it can't be down because our patients are depending on it". So on and so forth. We take that, we talk to our vendors. We look at exactly how it's architected. If it's- Let's just say it's three-tiered. There's a web, there's an app and then there's a database. You already know by default that if it's a database it's going to go on high transactional IO, where either it's flash or a very fast spinning disc with a lot of spindles. From there you get the application. Could be a virtual machine, could not be a virtual machine. From there you get to a web tier. Web tiers are usually always on a virtual infrastructure. Then you realize if you want to put it on a DMZ so people from outside can get to it, or it's only for internal use. Then you draw the entire architecture diagram out. Then you price it out, you say "Okay if you want this to be always on, maybe you need a database that is always on." Right, or you need a database that replicates 24/7. That has a cost associated to that. If you have an application- If you wanted two applications, maybe it's a costlier application, it could be HA, it could not be HA, so there's a cost to that. Web servers are kind of, you know, a cheaper tier of virtual machines. And then there's an architecture diagram, all the requirements are met in there. And there's a cost associated to that, saying business unit, here is how much it's going to cost and this is what you will have. >> Okay so that's where the economics, >> Exactly >> comes into play. Okay this is what your requirements are >> Yep. >> This is, based on that what we would advise. >> Exactly, yeah. >> And then essentially it's can you afford it. >> Right right.
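The sizing exercise Tahir walks through, mapping each tier of the application to a platform, applying an HA multiplier where always-on replication is required, and handing the business unit a price, can be sketched as a toy cost model. Every price and multiplier below is invented for illustration; these are not real infrastructure costs:

```python
# Toy cost model for the three-tier sizing exercise described above.
# All monthly prices and the HA multiplier are made-up numbers.

TIER_MONTHLY_COST = {
    "database": 900,   # high transactional IO, flash or fast spindles
    "app": 400,        # standard virtual machine
    "web": 150,        # cheaper web-tier VM, often in a DMZ
}

def estimate_monthly_cost(requirements):
    """Sum tier costs, doubling any tier that must be always on (HA pair)."""
    total = 0
    for tier, opts in requirements.items():
        cost = TIER_MONTHLY_COST[tier] * opts.get("instances", 1)
        if opts.get("ha"):
            cost *= 2          # e.g. a database that replicates 24/7
        total += cost
    return total

reqs = {
    "database": {"instances": 1, "ha": True},   # always-on, replicated
    "app": {"instances": 2},
    "web": {"instances": 2},
}
# database 900 * 2 + app 400 * 2 + web 150 * 2 = 2900 per month
```

The output of a model like this is exactly the conversation Tahir describes: here is the architecture diagram, and here is what it will cost you.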
(laughs) If you want to buy a house that has three bedrooms and three bathrooms in Palo Alto, versus six bedrooms and seven bathrooms in Palo Alto, it's going to be a financial impact that you might not like. (laughs) So it's one of those, right. So what you want has a financial impact on your end solution, and that's what we provide. We don't force somebody to get something. We just give them- Hey how many kids do you have? Four kids, then maybe you need a five bedroom house. Right so we kind of do that. >> Is that a common discussion? >> Yeah it is, it is. And that's, as you know, some of the things we do focus on. Right, as we- In addition to the security aspect of it of course, is around the automation, around driving in the efficiencies. Because at the end of the day, you know, whether it's capital expense or operational expense, you want to optimize for both of those. And that's where, as we architect the solutions, develop the offerings, we ensure that we build in capabilities, whether it's storage efficiency capabilities like virtualization, or de-dupe or compression. But as well as this automated tiering. Tiering off from flash to a lower tier, whether it's on-prem lower, slower- >> Tahir: Could be a disc. >> speed disc or tape, or even off to the cloud, right. And being able to do that, provide that, I think addresses many of our clients' needs. That's a common requirement that we do hear. >> And as mentioned, 10 to 12 percent of it is flash. >> Tahir: Right. >> The rest, you know, ninety percent or so, is something else. That's economics, correct? >> Right so-
You will almost never look at them. They're kind of moved away from even your memory banks in your head. Then you say, "Oh I was looking through it". And then maybe once in a while you look at it. So you have to look at the behavior. A lot of the applications have the same behavior, where the new data is required right away. The older the data gets, the more of an archival state it gets into. It gets warmer and then it gets colder. Now, as a healthcare institute we have to devise something that is great financially, also has the security, and is put away in a way where we can pull it without having pain to put it back. So that's where the tiering comes into play. Doesn't matter how we do it.
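The aging pattern Tahir describes, data that is hot for a few days, warm for a few months, then effectively archival, is the essence of an age-based tiering policy. A minimal sketch follows; the thresholds are illustrative, not any real institution's policy:

```python
# Age-based tiering sketch: like the photo analogy above, data is "hot"
# when new, "warm" for a while, and "cold"/archival after that.
# Thresholds are invented for illustration.

HOT_DAYS = 3        # looked at constantly in the first few days
WARM_DAYS = 90      # occasionally accessed for a few months

def tier_for(age_days):
    """Pick a storage tier from the age of the data."""
    if age_days <= HOT_DAYS:
        return "flash"        # fast tier, quickly at the fingertips
    if age_days <= WARM_DAYS:
        return "disk"         # on-prem capacity tier
    return "archive"          # cloud or tape; retrievable, rarely read

def place(objects):
    """Map {name: age_in_days} onto storage tiers."""
    return {name: tier_for(age) for name, age in objects.items()}
```

The point Tahir makes is that the policy, not the particular media, is what matters: the cold tier could be on-prem disc, tape, or cloud, as long as pulling data back is not painful.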
There's a lot of new players who come in, they stay for a couple of years, they explode, somebody takes them over or they just kind of vanish. Or certain people go outside of their core competency. So if Toyota started to make- Because they wanted to save money they said, "Hey Toyota from now on will make the tires that are called Toyota." But Toyota is not a tire company. Other companies, Bridgestone and Michelin, have been making tires for a very long time. So the core competency of Toyota is building the cars and not the tires. So when I see these people, or the vendors, saying, "Okay I can give you this and this and this and that and the security and that," maybe three out of those five things are not their core competency. So I start to wonder if the whole stack is worth it, because there's going to be some weakness, because they don't have the core competency. That's what I look at. What drives me crazy is, every single time somebody comes to meet with me they want to sell me everything and the kitchen sink under one umbrella. And the answer is one single pane of glass to manage everything. Life is not that easy, I wish it was but it really is not. (laughs) So those two things are- >> Selling the fantasy right. Now Bina we'll give you the last word. Interconnect, give us your final thoughts. What should we know about what's going on in software-defined and IBM storage?
>> Yeah you know, lots of announcements at Interconnect. You heard, as you talked about, cloud object storage, we've got great new pricing models and capabilities, and overall software-defined storage. We're continuing to innovate, continuing to add capabilities like analytics, and you'll see us doing more and more on cognitive. Cognitive storage management to get more out of the data, help clients get more and more information and value out of their data. >> What's the gist of the new pricing models, just um- >> Flexible pricing models, covering both hybrid as well as the tiers on-prem and in between, really cold storage as well. A flexible pricing model where, depending on how you use the data, you get consistent pricing between on-prem and in the cloud. >> So more cloud-like pricing >> Yes, exactly. >> Great. >> Yep. >> Easier consumption, excellent. Well Bina, Tahir, thanks very much for coming to the Cube. >> Yes yes thank you. >> Dave: Pleasure having you. >> Thank you. >> Thank you for having us. >> Dave: You're welcome. Alright keep it right there everybody, we'll be back with our next guest and a wrap, right after this short break. Right back. (upbeat music)

Published Date : Mar 22 2017


Lisa Spelman, Intel - Google Next 2017 - #GoogleNext17 - #theCUBE


 

(bright music) >> Narrator: Live from Silicon Valley. It's theCUBE, covering Google Cloud Next 17. >> Okay, welcome back, everyone. We're live in Palo Alto for theCUBE's special two day coverage here in Palo Alto. We have reporters, we have analysts on the ground in San Francisco, analyzing what's going on with Google Next, we have all the great action. Of course, we also have reporters at Open Compute Summit, which is also happening in San Jose, and Intel's at both places, and we have an Intel senior executive on the line here, on the phone, Lisa Spelman, vice president and general manager of the Xeon product line, with product management responsibility as well as marketing across the data center. Lisa, welcome to theCUBE, and thanks for calling in and dissecting Google Next, as well as teasing out maybe a little bit of OCP around the Xeon processor, thanks for calling. >> Lisa: Well, thank you for having me, and it's hard to be in many places at once, so it's a busy week and we're all over, so that's that. You know, we'll do this on the phone, and next time we'll do it in person.
>> Lisa: Yeah, you know, I'd like to think it's not too much of a surprise that we're there, arm in arm with Google, given all of the work that we've done together over the last several years in that tight engineering and technical partnership that we have. One of the big things that we've been working with Google on is, as they move from delivering cloud services for their own usage and for their own applications that they provide out to others, but now as they transition into being a cloud service provider for enterprises and other IT shops as well, so they've recently launched their Google Cloud platform, just in the last week or so. Did a nice announcement about the partnership that we have together, and how the Google Cloud platform is now available and running and open for business on our latest next generation Intel Xeon product, and that's codenamed Skylake, but that's something that we've been working on with them since the inception of the design of the product, so it's really nice to have it out there and in the market, and available for customers, and we very much value partnerships, like the one we have with Google, where we have that deep technical engagement to really get to the heart of the workload that they need to provide, and then can design product and solution around that. So you don't just look at it as a one off project or a one time investment, it's an ongoing continuation and evolution of new product, new features, new capabilities to continue to improve their total cost of ownership and their customer experience. >> Well, Lisa, this is your baby, the Xeon, codename Skylake, which I love that name. Intel always has great codenames, by the way, we love that, but it's real technology. 
Can you share some specific features of what's different around these new workloads? Because, you know, we've been teasing out over the past day, and we're going to be talking tomorrow as well, about these new use cases, because you're looking at a plethora of use cases, from IoT edge all the way down into cloud native applications. What specific things is Xeon doing that's next generation that you could highlight, that points to this new cloud operating system, the cloud service providers, whether it's managed services to full blown down and dirty cloud? >> Lisa: So it is my baby, I appreciate you saying that, and it's so exciting to see it out there and starting to get used and picked up, and to be unleashing it on the world. With this next generation of Xeon, it's always about the processor, but what we've done has gone so much beyond that. We have a ton of what we call platform level innovation that is coming in; we really see this as one of our biggest kind of step function improvements in the last 10 years that we've offered. Some of the features that we've already talked about are things like AVX-512 instructions, which I know just sounds fun and rolls off the tongue, but really it's very specific workload acceleration for things like high performance computing workloads. And high performance computing is something that we see more and more getting used and accessed in cloud style infrastructure. So it's this perfect marrying of that workload specifically deriving benefit from the new platforms, and seeing really strong performance improvements. It also speaks to the way with Intel and Xeon families, 'cause remember, with Xeon, we have Xeon Phi, you've got standard Xeon, you've got Xeon D. You can use these instructions across the families and have workloads that can move to the most optimized hardware for whatever you're trying to drive.
Some of the other things that we've talked about and announced: we'll have our next generation of Intel Resource Director Technology, which really helps you manage and provide quality of service within your application, which is very important to cloud service providers, giving them control over hardware and software assets so that they can deliver the best customer experience to their customers based on the service level agreement they've signed up for. And then the other one is Intel Omni-Path Architecture, so again, a fairly high performance computing focused product. Omni-Path is a fabric, and we're going to offer that in an integrated fashion with Skylake so that you can get an even higher level of performance and capability. So we're looking forward to a lot more that we have to come; the whole of the product line will continue to roll out in the middle of this year, but we're excited to be able to offer an early version to the cloud service providers, get them started, get it out in the market, and then do that full scale enterprise validation over the next several months. >> So I got to ask you the question, because this is something that's coming up. We're seeing a transition, also the digital transformation's been talked about for a while. Network transformation, IoT's all around the corner, we've got autonomous vehicles, smart cities, on and on. But I got to ask you though, the cloud service providers seem to be coming out of this show as a key storyline in Google Next, as the multi cloud architectures become very clear. So it's become clear, not just at this show but it's been building up to this, it's pretty clear that it's going to be a multi cloud world. As well, you're starting to see the providers talk about their SaaS offerings, Google talking about G Suite, Microsoft talks about Office 365, Oracle has their apps, IBM's got Watson, so you have this SaaSification. So this now creates a whole other category of what cloud is.
If you include SaaS, you're really talking about Salesforce, Adobe, you know, on and on down the list, everyone is potentially going to become a SaaS provider, whether they're a unique cloud or partnering with some other cloud. What does that mean for a cloud service provider, what do they need to support those application requirements to be successful? >> So when we look at the cloud service provider market inside of Intel, we are talking about infrastructure as a service, platform as a service and software as a service. So cutting across the three major categories, I'd say, up until now, infrastructure as a service has gotten a lot of the airtime or focus, but SaaS is actually the bigger business, and that's why you see, I think, people moving towards it, especially as enterprise IT becomes more comfortable with using SaaS applications. You know, maybe first they started with offloading their expense report tool, but over time, they've moved into more sophisticated offerings that free up resources for them to do their most critical or business critical applications that they require to stay in more of a private cloud. I think that evolution to a multi cloud, a hybrid cloud, has happened across the entire industry, whether you are an enterprise or whether you are a cloud service provider. And then the move to SaaS is logical, because people are demanding just more and more services. One of the things, through all our years of partnering with the biggest to the smallest cloud service providers and working so closely on those technical requirements, that we've continued to find is that total cost of ownership really is king. It's that performance per dollar, TCO, that they can provide and derive from their infrastructure, and we focused a lot of our engineering and our investment in our silicon design around providing that.
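The performance-per-dollar yardstick Lisa describes can be sketched in a few lines; the throughput and cost figures below are invented purely for illustration:

```python
# Toy illustration of the "performance per dollar" TCO comparison a cloud
# provider might make between platform options. All numbers are made up.

def perf_per_dollar(requests_per_sec, monthly_cost):
    """Higher is better: how much throughput each dollar buys."""
    return requests_per_sec / monthly_cost

def pick_best(options):
    """Choose the platform option with the best performance per dollar."""
    return max(options, key=lambda name: perf_per_dollar(*options[name]))

options = {
    # name: (requests_per_sec, monthly_cost_usd) -- hypothetical figures
    "standard_node": (40_000, 800),        # 50 req/s per dollar
    "flash_cached_node": (90_000, 1_500),  # 60 req/s per dollar
    "accelerated_node": (150_000, 3_000),  # 50 req/s per dollar
}
```

Note that the raw-fastest option is not necessarily the winner; the point of the metric is that the middle option can buy the most throughput per dollar.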
We have multiple generations that we've provided even just in the last five years to continue to drive those step function improvements and really optimize our hardware and the code that runs on top of it, to make sure that it does continue to deliver on those demanding workloads. The other thing that we see the providers focusing on is what's their differentiation. So you'll see cloud service providers that will look through the various silicon features that we offer and choose, they'll pick and choose based on whatever their key workload is or whatever their key market is, and really kind of hone in and optimize for those silicon features so that they can have a differentiated offering in the market about what capabilities and services they'll provide. So it's an area where we continue to really focus our efforts: understand the workload, drive the TCO down, and then focus in on the design point of what's going to give that differentiation and acceleration. >> It's interesting, the definition's also, where I would agree with you, the cloud service provider is a huge market when you even look at the SaaS. 'Cause whether you're talking about Uber or Netflix, for instance, examples people know about in real life, you can't ignore these new diverse use cases coming out. For instance, I was just talking with Stu Miniman, one of our analysts here at Wikibon, and Riot Games could be considered a cloud, right, I mean, 'cause it's a SaaS platform, it's gaming. You're starting to see these new apps coming out of the woodwork. There seems to be a requirement for being agile as a cloud provider. How do you enable that, what specifically can you share, if I'm a cloud service provider, to be ready to support anything that's coming down the pike?
>> Lisa: You know, we do do a lot of workload and market analysis inside of Intel and the data center group, and then if you have even seen over the past five years, again, I'll just stick with the new term, how much we've expanded and broadened our product portfolio. So again, it will still be built upon that foundation of Xeon and what we have there, but we've gone to offer a lot of varieties. So again, I mentioned Xeon Phi. Xeon Phi at the 72 cores, bootable Xeon but specific workload acceleration targeted at high performance computing and other analytics workloads. And then you have things at the other end. You've got Xeon D, which is really focused at more frontend web services and storage and network workloads, or Atom, which is even lower power and more focused on cold and warm storage workloads, and again, that network function. So you could then say we're not just sticking with one product line and saying this is the answer for everything, we're saying here's the core of what we offer, and the features people need, and finding options, whether they range from low power to high power high performance, and kind of mixed across that whole kind of workload spectrum, and then we've broadened around the CPU into a lot of other silicon innovation. So I don't know if you guys have had a chance to talk about some of the work that we're doing with FPGAs, with our FPGA group and driving and delivering cloud and network acceleration through FPGAs. We've also introduced new products in the last year like Silicon Photonics, so dealing with network traffic crossing through-- >> Well, is FPGA, that's the Altera stuff, we did talk with them, they're doing the programmable chips. 
>> Lisa: Exactly, so it requires a level of sophistication in understanding what you need the workload to accelerate, but once you have it, it is a very impressive and powerful performance gain for you. So the cloud service providers are a perfect market for that, because they have very sophisticated IT and very technically astute engineering teams that are able to really, again, go back to the workload, understand what they need and figure out the right software solution to pair with it. So that's been a big focus of our targeting. And then, like I said, we've added all these different new products to the platform that start to, over time, just work better and better together, so when you have things like Intel SSDs there together with Intel CPUs and Intel Ethernet and Intel FPGAs and Intel Silicon Photonics, you can start to see how the whole package, when it's designed together under one house, can offer a tremendous amount of workload acceleration. >> I've got to ask you a question, Lisa, 'cause this comes up while you're talking, I'm just in my mind visualizing a new kind of virtual computer server, the cloud is one big server, so it's a design challenge. And what was teased out at Mobile World Congress that was very clear was this new end to end architecture, you know, re-imagined, but if you have these processors that have unique capabilities, that have use case specific capabilities, in a way, you guys are now providing a portfolio of solutions so that it almost can be customized for a variety of cloud service providers. Am I getting that right, is that how you guys see this happening, where you guys can just say, "Hey, just mix and match what you want and you're good."
>> Lisa: Well, we try to provide a little bit more guidance than "as you wish," I mean, of course, people have their options to choose, so with the cloud service providers, that's what we have, really tight engineering engagement, so that we can, again, understand what they need, what their design point is, what they're honing in on. You might work with one cloud service provider that is very facilities limited, and you might work with others where one's space limited, the other one's power limited, and another one where performance is king, so we can cut some SKUs to help meet each of those needs. Another good example is in the artificial intelligence space, where we did another acquisition last year, a company called Nervana that's working on optimized silicon for neural networks. And so now we have put together this AI portfolio, so instead of saying, "Oh, here's one answer for artificial intelligence," it's, "Here's a multitude of answers where you've got Xeon," so if you have underutilized capacity and are starting down your artificial intelligence journey, just use your Xeon capacity with an optimized framework and you'll get great results and you can start your journey. If you are monetizing and running your business based on what AI can do for you and you are leading the pack out there, you've got the best data scientists and algorithm writers and machine learning experts in the world, then you're going to want to use something like the silicon that we acquired from the Nervana team, and that codename is Lake Crest, speaking of some lakes there. And you'll want to use something like Xeon with Lake Crest to get that ultimate workload acceleration. So we have the whole portfolio that goes from Xeon to Xeon Phi to Xeon with FPGAs or Xeon with Lake Crest. Depending on what you're doing, and again, what your design point is, we have a solution for you.
And of course, when we say solution, we don't just mean hardware, we mean the optimized software frameworks and the libraries and all of that, that actually give you something that can perform. >> On the competitive side, we've seen the processor landscape heat up in the server and cloud space. Obviously, whether it's from a competitor or a homegrown foundry, whatever fabs are out there, I mean, Intel's always had a great partnership with cloud service providers. Vis-a-vis the competition, and in context of that, what are you guys doing specifically, and how do you approach the marketplace in light of competition? >> Lisa: So we do operate in a highly competitive market, and we always take all competitors seriously. So far we've seen the press heat up, which is different than seeing all of the deployments, so what we look for is to continue to offer the highest performance and lowest total cost of ownership for all our customers, and in this case, the cloud service providers, of course. And what we do is we stick with our game plan of putting the best silicon in the world into the market on a regular beat rate and cadence, and so there's always news, there's always an interesting story, but when you look at having had eight new products and new generations in market since the last major competitive x86 product, that's kind of what we do, just keep delivering so that our customers know that they can bet on us to always be there and not have these massive gaps. And then I also talked to you about portfolio expansion, we don't bet on just one horse, we give our customers the choice to optimize for their workloads, so you can go up to 72 cores with Xeon Phi if that's important, or you can go as low as two cores with Atom, if that's what works for you. Just an example of how we try to address all of our customer segments with the right product at the right time.
>> And IoT certainly brings a challenge too, when you hear about the network edge, that's a huge, huge growth area, I mean, you can't deny that that's going to be amazing, you look at cars, they're data centers these days, right? >> Lisa: A data center on wheels. >> Data center on wheels. >> Lisa: That's one of the fun things about my role, even in the last year, is that growing partnership, even inside of Intel with our IoT team, and just really going through all of the products that we have in development, and how many of them can be reused and driven toward IoT solutions. The other thing is, if you look into the data center space, I genuinely believe we have the world's best ecosystem, you can't find an ISV that we haven't worked with to optimize their solution to run best on Intel architecture and get that workload acceleration. And now we have the chance to put that same playbook into play in the IoT space, so it's a growing, somewhat nascent but growing market with a ton of opportunity and a ton of standards still to be built, and a lot of full solution kits to be put together. And that's kind of what Intel does, you know, we don't just throw something out to the market and say, "Good luck," we actually put the ecosystem together around it so that it performs. But I think that's kind of what you see with, I don't know if you guys saw our Intel GO announcement, but it's really the software development kit and the whole product offering for what you need to truly deliver automated vehicles.
>> Well, Lisa, I've got to say, you guys have a great formula, why fix what's not broken, stay with Moore's law, keep that cadence going, but what's interesting is you are listening and adapting to the architectural shifts, which is smart, so congratulations. And I think, as the cloud service provider world changes, and certainly in the data center, it's going to be a turbulent time, but a lot of opportunity, and so it's good to have that reliability, and if you can make the software go faster, then they can write more software faster, so-- >> Lisa: Yup, and that's what we've seen: every time we deliver a step function improvement in performance, we see a step function improvement in demand, and so the world is still hungry for more and more compute, and we see this across all of our customer bases. And every time you make that compute more affordable, they come up with new, innovative, different ways to get things done and new services to offer, and that fundamentally is what drives us, that desire to continue to be the backbone of that industry innovation. >> If you could sum up in a bumper sticker what that step function is, what is that new step function? >> Lisa: Oh, when we say step functions of improvement, I mean, we're always looking at targeting over 20% performance improvement per generation, and then on top of that, we've added a bunch of other capabilities beyond it. So it might show up as, say, a security feature as well, so you're getting the massive performance improvement gen to gen, and then you're also getting new capabilities like security features added on top. So you'll see more and more of those types of announcements from us as well, where we highlight not just the performance but what else comes with it, so that you can continue to address, again, the growing needs that are out there. So all we're trying to do is stay a step ahead.
>> All right, Lisa Spelman, VP and GM of the Xeon product family as well as data center marketing. Thank you for spending the time and sharing your insights on Google Next, and giving us a peek at the portfolio of the Xeon next generation, really appreciate it, and again, keep on bringing that power, Moore's law, and more flexibility. Thank you so much for sharing. We're going to have more live coverage here in Palo Alto after this short break. (bright music)

Published Date : Mar 9 2017



Tom Davenport, Babson College - #MITCDOIQ - #theCUBE



>> Narrator: In Cambridge, Massachusetts, it's theCUBE, covering the MIT Chief Data Officer and Information Quality Symposium. Now here are your hosts, Stu Miniman and George Gilbert. >> You're watching theCUBE, SiliconANGLE Media's flagship program. We go out to lots of technology shows and symposiums like this one here to help extract the signal from the noise. I'm Stu Miniman, joined by George Gilbert from the Wikibon research team, and really thrilled to have on the program the keynote speaker from this MIT event, Tom Davenport, who's a professor at Babson, author of some books including a new one that just came out. Thank you so much for joining us. >> My pleasure, great to be here. >> All right, so there are so many things in your morning keynote that I know George and I want to dig into. I guess I'll start with this: you talked about the four eras of, you called it data today, but you said you started with three eras of analytics and now you've come to information. So I'm just curious, we get caught up sometimes on semantics, but is there a reason why you switched from analytics to information? >> Well, I'm not sure it's a permanent switch, I just did it for this occasion, but I think it's important for even people who don't have as their job doing something with analytics to realize that analytics are how we turn data into information. So kind of on a whim I changed it from four eras of analytics to four eras of information, to broaden it out in a sense and make people realize that the whole world is changing, it's not just about analytics. >> Yeah, you know, it resonated with me, because in the tech industry we get so caught up on the latest tool, George will be talking about how Hadoop is moving to Spark, and if we step back and take a longitudinal view, data is something that's been around for a long time, but as you said, from Peter Drucker's quote, when we endow that
with relevance and purpose, that's when we get information. >> Yeah, and that's why I got interested in analytics years ago, because we weren't thinking enough about how we endowed data with relevance and purpose, turning it into knowledge. Knowledge management was one of those ways, and I did that for a long time, but the people who were doing stuff with analytics weren't really thinking about any of the human mechanisms for adding value to data, so that moved me in the analytics direction. >> Okay, so Tom, you've been at this event before, you've taught and written books about this whole space. >> You're saying I'm old. >> No, no, it's that you've got a great perspective. So bring us up to date: what's exciting you these days, what are some of the big challenges and big opportunities that we're facing as humanity and as an industry? >> Well, I think for me the most exciting thing is that there are all these areas where there's just too much data and too much analysis for humans to do it anymore. When I first started working with analytics, the idea was that a human analyst would have a hypothesis about what's going on in the data, and you'd gather some data and test that hypothesis, and so on. It could take weeks if not months, and now we need to make decisions in milliseconds on way too much data for a human to absorb. Even in areas like health care, we have 400 different types of cancer, hundreds of genes that might be related to cancer, hundreds of drugs to administer. These decisions have to be made by technology now, and so it's very interesting to think about what's the remaining human role, how do we make sure those decisions are good, how do we review them and understand them. All sorts of fascinating new issues. >> I think along those lines, at a primitive level in the Big Data realm, the tools are still emerging and we want to keep track of every time
someone's touched it or transformed it. But when you talk about something as serious as cancer, and let's say we're modeling how we get to a diagnosis, do we need a similar mechanism so that it's not either/or, either the doctor or some sort of machine learning model or cognitive model, some way for the model to say, "here's how I arrived at that conclusion," and then for the doctor to say to the patient, "here's my thinking along those lines"? >> Yeah, I mean, Watson is being used for a lot of these oncology-oriented projects, and the good thing about Watson in that context is that it does presume a human asking a question in the first place, and then a human deciding whether to take the answer. The answers in most cases still have confidence levels associated with them. And in health care it's great that we have this electronic medical record, where the physician's or clinician's decision about how to treat that patient is recorded. In a lot of other areas of business we don't really have that kind of system of record to say what decision did we make and why did we make it, and so on. So in a way I think health care, despite being very backward in a lot of areas, is better off than a lot of areas of business. The other thing I often say about healthcare is, if they're treating you badly and you die, at least there will be a meeting about it in a healthcare institution. In business, we screw up a decision, we push it under the rug, nobody ever considers it. >> What about, 30 years ago I think it was, with Porter's second book and the concept of the value chain, sort of remaking the understanding of strategy. You're talking about the API economy and the data flows within that. Can you help tie your concept, the data flows, the data
value chain and the APIs that connect them, with Porter's value chain across companies? >> Well, it's an interesting idea. I think companies are just starting to realize that we are in this API economy, that you don't have to do it all yourself. The smart ones have, without modeling it in any systematic way like the Porter value chain, said, we need to have other people linking to our information through APIs. Google is fairly smart, I think, in saying we'll even allow that for free for a while, and if it looks like there's money to be made, we'll start charging for access to those APIs. So building the access, and then thinking about the revenue from it, is one of the new principles of this approach. But I haven't seen it done; I think it would be a great idea for a paper to say, how do we translate the value chain ideas of Michael Porter, which were, I don't know, 30 years ago, into something for the API-oriented world that we live in today. >> Do you think that might be appropriate for the sort of platform economics model of thinking that's emerging? >> That's an interesting question. I mean, the platform people are quite interested in inter-organizational connections. I don't hear them talking as much about the new rules of the API economy, it's more about how two-sided and multi-sided platforms work, and so on. Michael Porter was a sort of industrial economist, and a lot of those platform people are economists, so in that sense it's the same kind of overall thinking, but lots of opportunity there to exploit, I think. >> So Tom, I want to bring it back to the chief data officer, one of the main themes of the symposium here. I really liked that you talked about how there needs to be a balance of offense and defense, because at least in the last couple of years that we've been covering this, governance seems to be a central piece of it, and it's such an exciting subject, but you
put that purely in defense, and we get excited about the companies that are building new products, either saving or making more money with data. Can you talk a little bit about how you see what this chief data officer needs to be, how that fits into your four eras? >> Yeah, well, I don't know if I mentioned it in my talk, but I went back and confirmed my suspicion that Usama Fayyad was the world's first chief data officer, at Yahoo, and I looked at what Usama did at Yahoo, and it was very much data product and offense oriented. He established Yahoo Research Labs. Not everything worked out well at Yahoo in retrospect, but I think they were going in the direction of, what interesting data products can we create? And so I think we saw a lot of what I call 2.0 companies in the big data area in Silicon Valley saying it's not just about internal decisions from data, it's what can we provide to customers in terms of data, not just access but things that really provide value, and that means data plus analytics. LinkedIn, for example, attributes about half of their membership to the People You May Know data product, and everybody else has a "people you may know" now. Well, these companies haven't been that systematic about how you build them, and how you know which one to actually take to market, and so on, but I think now more and more companies, even big industrial companies, are realizing that this is a distinct possibility, and that we ought to look externally with our data for opportunities as much as supporting internal decisions. >> And I guess, you talk about companies like Yahoo, some of the big web companies, and the whole Big Data meme has been about allowing tools and processes to get to a broader piece of the economy. Counterbalance that a little bit with large public clouds and services: how much can a broad spectrum of
companies out there get the skill set and really take advantage of these tools, versus is it going to be something where I'm still going to need to go to some outside sources for some of this? >> Well, I think it's all being democratized fairly rapidly, and I read yesterday for the first time the quote, "nobody ever got fired for choosing Amazon Web Services." That's a lot cheaper than the previous company in that role, which was IBM, where you had to build up all these internal capabilities. The human side is being democratized too; there are over 100 universities in the US alone that have analytics-oriented degree programs. So I think there's plenty of opportunity for existing companies to do this, it's just a matter of awareness on the part of the management team. I think that's what's lacking in most cases. They're not watching your shows, I guess. >> Along those lines, going back 30 years, we had a precedent where PC software just exploded onto the scene, and it was, "I want control over my information," not just spreadsheets but creating my documents. But at the same time, IT did not have those guardrails to help keep people from falling off their bikes and getting injured. What tools and technologies do we have for both audiences today so that we don't repeat that mistake? >> It's a very interesting question, and I think spreadsheets were great, the ultimate democratization tool, but depending on which study you believe, 20 to 80 percent of them had errors in them, and some pretty bad decisions were sometimes made with them. We now have the tools so that we could tell people, that spreadsheet is not going to calculate the right value, or you should not be using a pie chart for that visual display. I think vendors need to start building in those guardrails, as you put it, to say here's how you use
this product effectively, in addition to just accomplishing your basic task. >> But you wouldn't see those guardrails extending all the way back to the data that's being provisioned for the users? >> Well, I think ultimately we'd get to the point of having better control over our data, of saying you should not be using that data element, it's not the right one for representing customer address or something along those lines. We're not there yet in the vast majority of companies. I've seen a few that have experimented with data watermarks or something, to say yes, this is the one that you're allowed to use, it's been certified as the right one for that purpose, but we need to do a lot more in that regard. >> All right, so Tom, you've got a new book that came out earlier this year, "Only Humans Need Apply: Winners and Losers in the Age of Smart Machines." So I'll ask you the same question we asked Erik Brynjolfsson and Andy McAfee when they wrote "The Second Machine Age": are we all out of a job soon? >> Well, I think they and I have become a little more optimistic as we look in some depth at the data. For one, there are a lot of jobs evolving working with these technologies. Somebody was telling me the other day, I was doing a radio interview for my book, and the interviewer said, "I've made a big transition into podcasting, but the vast majority of people in radio have not been able to make that transition." So if you're willing to go with the flow and learn about new technologies and how they work, I think there are plenty of opportunities. The other thing to think about is that these transitions tend to be rather slow. In the United States in 1980 we had about half a million bank tellers. Since then we've had ATMs, online banking, etc. Guess how many bank tellers we have in 2016: about half a million. It's rather shocking, I think, and I don't know exactly what they're all doing, but we're pretty slow in
making these transitions. So I think those of us sitting here today, or even watching, are probably okay. We'll see some job loss on the margins, but anybody who's willing to keep up with new technologies and add value to the smart machines that come into the workplace is, I think, likely to be okay. >> Okay, do you have any advice for people who are looking at becoming chief data officers? >> Well, yeah, as you said, balance offense and defense. Defense is a very tricky area to inhabit as a CDO, because if you succeed, and you prevent breaches and privacy problems and security issues and so on, nobody necessarily gives you any credit for it, or even knows that it's because of your work that you were successful. And if you fail, it's obviously very visible, and bad for your career too. So I think you need to supplement defense with offense activities: analytics adding value, information digitization, data products, etc. And then I think it's very important that you make nice with all the other data-oriented C-level executives. You may not want to report to the CIO, or if you have a chief analytics officer, or chief information security officer, chief digitization officer, chief digital officer, you've got to present a united front to your organization and figure out what's the division of labor, who's going to do what. In too many of these organizations, some of these people aren't even talking to each other, and it's crazy, really, and very confusing to the rest of the organization about who's doing what. >> Do you see the CDO role, say five years from now, being a standalone piece of the organization, and do you have any guidance on where that should sit structurally, compared to, say, the CIO? >> I've said that ideally you'd have a CIO, or somebody who all of these roles reported to, who could represent all these different interests to the rest of the organization. That doesn't mean that a CDO shouldn't engage with
the rest of the business, I think CIOs should be very engaged with the rest of the business, but I think this uncontrolled proliferation has not been a good thing. It does mean that information and data are really important to the organization, so we need multiple people to address it, but they need to be coordinated somehow, and a smart CEO would say, you guys get your act together, figure out who does what, and tell me a structure. I think multiple different things can work, you can have it inside of IT or outside of IT, but you can at least be collaborating. >> Okay, the last question I've got is, you talked about these eras, and that it's not that one dies and the next one comes, and we know how slow people especially are to change. So what happens to the companies that are still sitting in the 1.0 or 2.0 era as we see more 3.0 and 4.0 companies come? >> Yeah, well, it's not a good place to be in general, and I think what we've seen in many industries is that the companies sophisticated with regard to IT are the ones that get more and more market share, and the late adopters end up ultimately going out of business. I mean, you think about retail, who's still around? Walmart was the most aggressive company in terms of technology, and Walmart is the world's largest company. In moving packages around the world, FedEx was initially very aggressive with IT, UPS said we'd better get busy and they did too, and there's not too much left of anybody else sending packages around the world. So I think in every industry, ultimately, the ones that embrace these ideas tend to be the ones who prosper. >> All right, well, Tom Davenport, really appreciate this morning's keynote and sharing with our audience everything that's happening in this space. We'll be back with lots more coverage here from the MIT CDO IQ Symposium. You're watching theCUBE.

Published Date : Jul 14 2016

