Edouard Bugnion, EPFL | CUBE Conversation


 

(cheerful music) >> Hi, I'm Peter Burris and welcome once again to a Cube Conversation from our beautiful Studios here in Palo Alto, California. Today we've got a great guest. Ed Bugnion is a Professor of Computer Science at EPFL, one of the leading Swiss technology institutes or engineering institutes, a country known for engineering. Ed B, thanks very much for being here. >> Thanks for having me. >> So a lot going on this week but you're here for a particular reason here in Silicon Valley. Long journey, what are you here for? What's going on? >> Yeah, so I'm back to my old neighborhood in Palo Alto because VMware had its 20th birthday celebration this week and they were kind enough to invite me, invite all the founders and so it was a great event. Happy to be here. >> So what was your role in the early VMware? >> I had many, many different roles. I had many different lives. At one point I was the CTO of the company from the beginning until 2005. >> So this week a lot of catching up with folks, talking about a 20 year history, anything particular, interesting? >> I mean I think the nice thing was that VMware's actually doing great. It's got a great future ahead for itself but it was actually nice to to be able to communicate to the existing, the current employees what VMware was 20 years ago. >> And where it's meant so they can see a perspective. So I actually have an interesting thought, at least I think it's an interesting thought, and that is I've been doing this for a long time and if I look back over the last 20 years I think there were two really consequential technology changes. One was virtualization which obviously VMware popularized in kind of a open way. Because without it, first of all it created great as you said a great company, but also without it it would not have been possible to have the cloud because the cloud is effectively a whole bunch of virtualized resources. But the second one is flash. And the reason why I think flash is important is because for the first 50 years of computing we were focused on how do we reliably persist data onto these things called disks? And with flash now we're thinking about how we quickly deliver data out to applications. I don't see how AI and some of these new types of applications we're talking about, businesses we're talking about, are possible without flash. What do you think? >> Obviously these are two of the big pillars right? There are few other pillars, networking being one of them, both within the data center and delivery of content otherwise we would not have the network effect and all the applications that are pairing us mobile as well. But yes from a data center infrastructure perspective, virtualization which is you know started as a relatively point technology right? How to run two operating systems on a computer at the time, it wasn't even a laptop, it was a desktop into being what it is today has had a profound effect. It forced us to separate the logical from the physical and manage them separately and think about capacity differently. And then create the flexibility in the provisioning of these new resources applied to computing and networking and storage. >> And flash is also had a similar kind of effect, I mean would you agree with that as well? >> Yeah, I mean it's totally changed expectations right? Before flash and before in memory, the expectation was that anything that involved data warehousing and analytics was something that was a batch process. 
You have to wait for it and the notion that data is available on demand is something that is now taken for granted by users but it wouldn't have been possible without those new technologies. >> And it's had an enormous impact on system design and system architecture. Another thing we believe at Wikibon is that digital transformation is real. And by that we mean that the emergence of data as an asset is having a profound consequence on how business thinks because at the end of the day you institutionalize your work, your value propositions, how you get things done around the assets which you consider your assets and as you do that for data you're going to rebuild your business around data as an asset. But it also suggests that data is going to take a more central role in describing how future architectures are laid out. Now at EPFL you're doing a lot of research specifically on how data center infrastructure is going to be organized. What do you think? Is the data going to move to the cloud? Is the cloud going to come to the data? What does that mean? >> Well it's actually, my research is actually squarely on what's happening within the data center. And in particular whether you can actually make efficient use of the resources in a given data center while meeting service level objectives. How do you make sure that you can respond to user-facing requests fast enough and at the same time be able to deploy that with the right amount of capacity? >> When you say user you mean not only a human being but also other system resources right? (crosstalk) >> The interactive behavior makes things different right? Because then you actually have an actual time constraint. And it's actually difficult to be able to solve the problem of delivering latency critical, human real-time responses reliably and at the same time being able to do that without consuming an exorbitant amount of resources. You know energy is a big issue. If you can deliver the same amount of capacity of actual traffic with less underlying hardware capacity then it's a win. >> So as we think about data centers going forward, I presume that you believe that data centers are going to change and evolve but still in some capacity be very much in force as a design center for how an enterprise thinks about its resources. Is that accurate? >> Yeah, I mean the notion that everything is going to concentrate into a few mega data centers is obviously a little bit of a stretch right? There will always be a balance. There are economies of scale in these very large mega data centers. The sweet spot and the minimal operating point at which it makes sense to actually build out a data center and to deploy infrastructure has actually changed right? A few years ago it actually made sense to put three servers in a basement. That doesn't make any sense today. But for many enterprises it does still make sense to have some amount of capacity on-premise because it's an economic balance right? You get to own the assets but you need to have a certain scale. >> So as you're driving your research about the future of the data center and how it's going to be organized, what role does automation play in conceptualizing what the future of the data center looks like? >> There's an old friend of mine who once said screwdrivers don't scale. (laughter) If you want to be able to operate anything at any scale, you need to have automation. And virtualization is one of the mechanisms for automation, it's one of the foundational elements right?
You want to make absolutely clearly separate the act of operating screwdrivers which you need to do once in a while. You need to add capacity physically in a data center but you want to make sure that that is completely decoupled from the operations. >> So how do you think or where do you think some of the next set of advances are going to come as we think about the data center? You know given virtualization, given flash, given improvements in networking. Where do you see some of that next round of technological advances coming? >> Well if there were no new applications, if there were no digital transformation the answer would be easy right? It's not a hard problem. You just keep doing and it's going to get better over time. >> Just faster. Faster, cheaper. >> The reality is we have a digital transformation. It is, if anything, accelerating and so the question is how do you keep up with the growth complexity? And the reality of virtualization is whenever you apply to a particular domain, right, you allow that domain to scale by reducing operational complexity but part of that operational complexity actually gets shifted elsewhere. The early days of virtualization at VMware we virtualized servers, we virtualized clusters of servers. That was really nice right? You could actually move VMs around across you know transparently. We obviously push a lot of that complexity into storage area networks. And that was fine at small scale. At larger scale it creates again an operational issue with storage because we move some of that complexity into another subsystem. So it is about chasing where which subsystem actually has the pain point and has the complexity at any point in time. >> So as we start chasing these new opportunities, we're also seeing the business start to react as they try to exploit data differently. So that the whole concept of technology, not at the infrastructure level per se, but rather as an enabler or as a strategic capability within a business starts again elevating it up into the organization. We start worrying about security. We start worrying about customer experience and the rate at which we transition. When we substitute labor, technology for labor, in a customer experience kind of way. As we think about those types of things that suggests that the technology professional has to start becoming a little bit more mindful of their responsibilities, what do you envision will be the role of where that interplay between a sense of responsibility and engineering as we start to conceive of some of these more complex rich systems? >> So that's actually is the one of the big, big transitions because when I started in tech what we did effectively had a relatively moderate implication on people's lives right? It was basically business process that was being digitized and we were enabling a more efficient digitization of business processes but it was sort of left at that. Today tech is at a stage where we can actually directly impact people's lives for the better or for worse. And it's very important that as an industry we actually have the appropriate introspection so that we know we're doing things in a sensible way. It might involve actually slowing down at some times the pace of innovation. Trying to do things in a more deliberate, careful way. Other segments and industries had to do that. You can't you know come up with a new drug and simply put it on the market. There's a process. Obviously this is an extreme example but tech has always been on the other extreme. 
And the big trend is to find the appropriate level of balance. I live in Switzerland now and GDPR is all over Europe. It's actually a big change in the mindset because now you not only have to make sure that you can manage data for yourself as an enterprise but also that you actually abide by your responsibilities, as an enterprise, as a data processor for your customers and your users. >> For other people's data. Yeah and it's interesting because in many respects medicine has itself been at the vanguard of ethics for a long time and what we're suggesting is that eventually technology is going to have to start thinking about what do the new ethics mean. Now at EPFL are, I'm putting you on the spot, at EPFL are you starting to introduce these concepts of ethics directly into the curriculum? Are you teaching the next generation to think in these terms? >> Yeah, well actually the first thing we're doing is we're adding into the curriculum for all engineers not just the computer science crowd but all engineering students the notion of computational thinking as a freshman class, mandatory core freshman class. >> Peter Denning. >> And computational thinking is really about sort of we're positioning that as sort of a third pillar of the engineering foundation along with math and physics right? You need math to learn rigor and you need physics to sort of understand how to model the world. And we're adding computational thinking as a way to you know reason about how you can use computational methods to solve engineering problems because as an engineer all of us will actually use computers all the time. And yet we never really know what it actually means to apply computational methods and to think about it in those terms. >> So coming back to this notion of the role flash is playing in the industry, we also believe here at Wikibon that we are seeing a significant transformation in the computational model. The basic way that you approach a problem. And so taking the notion of computational thinking and I mentioned Peter Denning, who's a guy I've known for a long time, now down at the Naval Postgraduate School at the Cebrowski Institute. When you start asking that fundamental question, how do you approach a problem? How are people going to approach the problems going forward as a consequence of a new orientation of delivering data? >> Well Peter Denning obviously is known for the locality principle. And the locality principle says that you affect. >> Great segue by the way. >> I mean you need to have, you need to know what your working set of data is and you need to have it close to you know to operate because you cannot have uniform equal cost access to all data at all times. It's particularly interesting when you combine flash technologies from a latency and throughput perspective with networking technologies and computational technologies. It's about knowing where do you actually actuate the points, at what point do you go from an aggregate model to a disaggregated model? What are the pros and cons of each? But fundamentally you know recognizing that locality does exist and locality matters is fundamental to the scaling of the infrastructure. Obviously these are the problems that we infrastructure people worry about so that from an application perspective and from a policy and reflection perspective we don't have to worry about those. >> And so the application people don't but especially the business people can focus more on customer experience and those types of things.
Coming back to this notion of locality and tying it back into GDPR for example, it seems as though the constraints of locality are not just latency and cost but they also are increasingly in human terms, in ethical terms, including regulatory principles but also intellectual property principles. When you start to think about how again this notion of the data center gets organized where we probably increasingly start organizing data centers around the natural location of data, I don't mean geographic, I mean the natural location of data, do you foresee a new set of constraints starting to influence not just latency, not just cost but some of these other more human concerns starting to impact how we conceive of where we put data centers and what we put in those data centers? >> Well there are two different aspects to the answer. One is data centers consume energy. And so the location of the data center from an energy perspective will matter and will keep mattering and because we need to be very conscious about the overall global footprint of these data centers. And then the other consideration which is completely orthogonal is natural boundaries also matter and the notion of sovereignty and obviously I'm not a lawyer, I don't know if you're a lawyer. >> Nope. >> But the notion of sovereignty is rooted in the notion of of national boundaries right? It applies to land. It applies to water. It applies to airspace. >> Laws, culture. >> And so the question of how it applies to data is a really important one right? Does it matter where the data is actually stored? Can I reach into some other country's data? These questions are completely open at this point. They must be resolved. I think there is a global reflection among the industry right now that the time has come for both the govern entities and the industrial players to sort of take a position that this problem must be addressed. How it will be addressed? That I don't know. >> Well so I have a perspective on related I'm not going to answer how it's going to be addressed but security is a crucially important feature of how we think about computing going forward specifically data security. And it seems to me as though if we think about these data assets and how we institutionalize work around these assets, security is a significant feature of how we actually conceive of and create data assets because effectively it is through security that we privatize data. What laws and whatnot that we put on things turns into policy turns into technology for privatizing things. So talk a little bit about how you foresee the future of security, the data security, technology security and data coming together as people think about the role of data is going to play in our lives? >> So security is in a way a very technical way of looking at the problem right? Not everybody you know outside of tech actually appreciates what we all mean by security and within tech sometimes we mean different things when we talk about security. One of the themes we're trying to talk about is the notion that we need trust as a society irrespective of how it's done technologically. You need trust. We know how to establish trust in the physical world. We've been doing this for a few centuries or millennia. We need to learn how to establish trust in the digital world. So that's actually one of the initiatives we have right now at EPFL is actually establishing a center for digital trust. 
Whose goal is to basically try to ask the question of how do you actually have the same level of trust between players in the digital world that we can actually establish through known means, that we've learned to experience over centuries in the physical world? It's not an easy problem. >> No, it's not. So I got one more question for you. As you, so imagine you're writing a book in 2035 and you're writing a history of computing. You're looking back and you're saying, "Wow, look at all these things that happened." And we've already discussed some of the salient inflection points within the industry but if we think about an inflection point between 2018 and 2035, what do you think in a future perfect sense, looking back, what was the inflection point? When did it occur in the next 17 years? >> Well if you're an optimist then the path between today and 2035 was a positive one, free of any hardship or complications or unintended consequences. If you're a realist, we have to anticipate that there are some unanticipated consequences of tech and emergent properties of tech and where those evolutions will take us. I mean I'm not a futurist right? I try to, in my field, in my sort of own research agenda, I try to look five years out as to where things might go at a particular layer. If we look at the emergent properties, the emergent behavior I think they're very hard to anticipate. We're just trying to learn right now, collectively, the side effects of social networking on how we interact as a society, as a democracy. It's very difficult to imagine where we'll go between now and 2035. There are a few things that are obvious and I'm going to just state what is obvious is the digital transformation is accelerating. The importance of data is growing. The existential threat associated with the misuse of data is going to be greater and greater especially as we digitize you know our human lives, our biological lives get digitized for example. That's going to have a huge impact. And then the digital transformation is also going to change jobs and change entire industries. Automation, AI, is going to have a profound effect. How fast that effect will be, I think is the open question. The history has always been an evolution of technology. I think what may be different this time is that it's operating on a global scale faster than before. >> That affects a lot more people. So in certain respects it's especially crucial over the next few years to, as you said the word, the key word is emergent. That there's going to be a lot of emergent properties that come out of technologies. Accelerating technology programs itself for example. Those types of things and so you kind of summarize, it's that fine line between too much control and too much freedom and staying right there so we get the innovation while at the same time we can have some degree of say over how it actually behaves. Is that kind of where we're going to be thinking? >> Yeah, I mean that's one way to look at this. Obviously regulation is not the answer. The other way to solve these problems is to actually have the appropriate products. I'll just give an example. Database management systems were not designed with data privacy in mind. They were designed to process data. Now GDPR comes along and what does it mean if I have a SQL database and I also need to be GDPR compliant? That's, if you think about it, there's somewhat of a mismatch between the two if you look at it purely from a technical perspective.
Five years from now, does it make sense to have a GDPR by design database, whatever that means right? Maybe, I haven't thought about it too deeply but it's one of those examples where you have a new set of constraints and I think as an industry we need to take them as parameters. And what we've been consistently very good at in the tech industry is to actually take these constraints and actually turn them into products that people know how to operate and deploy. >> Excellent. Ed Bugnion, Computer Science Professor at EPFL. Thank you very much for being on The Cube. >> Thanks for having me. It was a pleasure. >> Once again, Peter Burris, Cube Conversation. Thanks for watching. See you again. (cheerful music)

Published Date : Apr 17 2018


Edouard Bugnion, EPFL - Second Segment | CUBE Conversation


 

(bright, upbeat music) >> Hi, I'm Peter Burris, and welcome to another CUBE Conversation. We've got another great guest this week, Ed Bugnion, who's a professor of computer science at EPFL, a leading technical university in Switzerland. Ed, welcome to theCUBE. >> Thanks for having me. >> So Ed, you, at EPFL, are leading research on the future of the data center. What I want to do, is I want to talk about the near term of the data center, 'cause a lot of people have questions about what's going to happen over the next few years. Let's posit that the data center's not going to go away any time soon, and instead talk about inside the data center. What's going to happen with the organization of technology inside data centers? >> Well it's always been a chase about how to reduce complexity. You always start with basically having a number of moving parts and then the business requirements keep increasing, and at some point, the complexity just overwhelms the operational model. So I was involved in virtualization. I've been working on virtualization for close to 20 years. Right, virtualization was about reducing the complexity for the servers, and basically moving from having to manage servers one by one, separating the physical from the logical and sort of solving that problem. Now what we actually did, as a side effect, is we actually pushed the remaining aspects of that complexity elsewhere. The servers were mobile, they were flexible, they could vMotion across a cluster, they had to be stored on a storage area network so as a result, we ended up having this entire operational complexity around the management of storage area networks for very large amounts of data and as the increase in virtualization became more and more important, that became bigger, more of an issue. So then I actually got involved in networking and networking was about the fact that a decade or so ago there was a proliferation of incompatible networks inside the data center. I was involved at Cisco in storage, in unified converged networking with the UCS product so we could do both storage and regular TCP/IP networking on the same underlying network. This was about reducing complexity, but we didn't address all the complexity problems, we created other bottlenecks so it's always this ever shifting issue with dealing with scale. >> So as we virtualized the servers, we virtualize the storage and now we're virtualizing the network, that suggests that we can start bringing these things together in new novel ways have I got that right? >> Yeah so we first virtualized the network access, right, the storage access and the SANs and then now we're obviously with hyperconvergence we're about disaggregating storage and rethinking storage because of these new requirements. That solves a number of the problems, right? It's actually now proving out to be sort of an industry-wide accepted model that we move away from storage arrays into hyperconverged models and hyperconvergence alone if the only thing you're doing is moving blocks around is again only solving part of the problem, you still need to worry about DR, you still need to worry about backup, you still need to worry about offsite. You still need to worry about locality, right, because having completely remote storage is a gross violation of the locality principle and the locality principle actually does come back and matter at some point in time.
So it's really about finding the balance between the speeds and feeds, what needs to be co-located and what can be disaggregated and then what use cases must be addressed. >> And I presume, how much control can be brought from a single point of presence, console, onto the underlying infrastructures, is that what the rest are worried about? >> Yeah so I think there's, you're going to have to separate two things. One is the physical building blocks and the other one are the operational consoles, right, and the physical building blocks, the number of people providing these physical building blocks is small and if anything, shrinking. If you think about the operational console, the different panels, right? If you think about the different software companies providing technology, they actually themselves offer different panels to different constituencies. The silos have not completely disappeared in the IT operation model today, they're, communication is much better, tools facilitate this communication but silos are not completely gone. So you still have these different panels, they can come from one vendor, different vendors, the same vendor can actually provide multiple capabilities but the theme is do you actually want to move away from having to deal with the complexity of having completely different universes into having much more coherent elements that talk to each other? >> So if we have this more coherence, presumably that means these more coherent elements can actually support each other in providing, as you said earlier, some of the crucial features of what a complex, large, scalable system needs to perform. You mentioned backup restore for example. How do you anticipate that the requirements of what constitutes a system, before it was scale compute and now we're actually worried about making sure that all those other issues from an automation, from a business requirement standpoint are increasingly impinging upon what we regard as design, like, having data protection. How do those new constraints start to impact folks to think about what to buy, what to use now? >> Yeah it's actually fascinating that tape, right, as we know it and as we knew it which largely has not changed, right, is actually still present. Tape obviously is a sequential approach, it's not by any stretch the easiest technology to operate and yet it still has sort of a presence so moving away from this, and the interesting observation is that you can now move away from these classic approaches of backup to object-based solutions. These object-based solutions, provided that you have the appropriate kind of connectivity assumptions, can either be offsite or onsite and it's a very fluid and transparent model. And these object-based solutions are actually now designed to scale and can be used to either store primary data and stream data, or also to store backups of data and so this convergence between using object storage between what is backup and what is live data is one of the interesting themes. >> So we're talking about convergence of the hardware elements, but now we're also talking about convergence of the services and the capabilities associated, all within the same console, all within the same platform, utilizing specialization where it makes sense, have I got that right? >> Yes I mean you obviously have different use cases right? One of the things is that it always goes back to the question of what is the API right?
If you have an API and it is really you know gets and sets on an object model, that is designed to operate on transactional objects right, you effectively are in a particular mindset. If you actually want to guarantee retention, you actually want a different set of APIs right, one of the things that's really important is to make sure that the data is actually safe and that the API will prevent a catastrophic misuse and deletion of the data, for example. >> So is there one bit of advice you can offer someone who's sitting in a data center today and thinking about what they should be doing to increase the returns on their data assets and what they provide to the business, what would that kind of one thing that you'd leave them with be? >> Almost depends on where you start from, right? >> Peter: Okay good point. >> But having said that, there are sort of two general approaches, one is sort of the incremental approach which is you try to catch up with the technology trends and the other one is to say, okay what are actually my problems that I'm trying to solve purely from an infrastructure perspective and how do I actually solve these problems in a reasonable timeframe? It's actually, if you think about the pros and cons between the two approaches, the first approach is pragmatic, it's going to be better this quarter than last quarter, but you may never be able to catch up. The other approach requires a little bit more thinking, sometimes process re-engineering, sometimes thinking about things differently. Changing the operational model, how your teams operate within the IT organization, sometimes it actually delivers the right solution. >> And we do have a model for how to do this, the big hyper-scalers are doing just that second approach and it's having a consequential impact on the industry isn't it? >> Yeah well storage, the storage industry has always been a fascinating industry, it was static for a few years, it's now an extremely dynamic industry, there's a lot of companies that went public in the storage space over the last few years as we all know. They went because there was new technology, right? Flash sort of was transforming the landscape. Now object and hyperconverged and post-hyperconverged solutions are actually also completely transforming the landscape because now, we think about storage differently because it's not, the paradigm is no longer the same. >> Thinking about computing entirely differently. Storage plus everything else. >> Well at the end of the day, this is purely, this is infrastructure right? >> Right. >> And infrastructure is never for infrastructure's sake. Infrastructure is to deliver new capabilities, new applications. The combination of you know phenomenal increases in primary memory, in Flash memory, and NVMe, all these technologies are sort of transforming our expectation with respect to responsiveness and access to data. And then the changes on the compute side and the huge specialization going on in hardware, in ASICs, so that we know how to process data in a much more efficient way and this is, we haven't talked about AI yet but fundamentally when you think about all these AI-based improvements, it is about being able to put massive amounts of computational capability onto massive amounts of data. >> So you've been part of VMware, you've been part of Nuova, you've been part of a lot of different companies, if you look out, what types of foci, what types of centers of innovation amongst, in the valley do you look to for leadership?
(laughing) >> The nice thing is, I was in the valley, I was in the industry and now I'm. >> And now you're out. (laughing) >> So I actually don't have to take a position. It's actually nice to be able to look at it much more from a principles perspective rather than to look at it as to which of the existing players are, the agenda they're trying to push. They each have legitimate agendas because they're driving their business and the evolution of their business for their customer and trying to deliver value to their customer. Obviously the customers have to choose. When I look at it sort of from my perspective both academically and so simply from an IT perspective, as I operate a fair amount of IT at EPFL, it's really this notion of what problems are we trying to solve? And whether the boundaries that we traditionally had between the classic large vendors still make sense in this sort of hyperconverged environment. >> Alright well, Ed Bugnion, Professor of computer science at EPFL, thanks again for being on theCUBE and this is Peter Burris and once again, great CUBE conversation and hope to see you soon. (bright upbeat music)

Published Date : Apr 17 2018


Zeb Ahmed, IBM Cloud | VeeamOn 2018


 

>> Narrator: Live from Chicago, Illinois, it's theCUBE! Covering VeeamOn 2018. Brought to you by Veeam. >> Welcome back to VeeamOn 2018 everybody, and you're watching theCUBE. The leader in live tech coverage. We go out to the events, we extract the signal from the noise. This is day one of our coverage of VeeamOn, the second year theCUBE has been here. I'm Dave Vellante with my co-host Stu Miniman. Zeb Ahmed is here, he's the Senior Offering Manager for VMware, with the IBM Cloud, at IBM of course. Thanks for coming on theCUBE, good to see you Zeb. >> Thank you for having me, very excited to be here. >> Yeah so IBM, Cloud, big part of our business. Obviously VMware, you've been there for a long time. Partnerships with Veeam. Lay it all out for us, what's going on at IBM, IBM Cloud. >> Yeah so we started the VMware partnership a couple years ago, and our goal was really to build a practice around VMware which was automated, take it to the next level essentially, not just be a me too player, what everybody else was doing out there, but rather, make the transition from on premises to the Cloud much easier for those VMware customers. So we've automated a lot of things on the VMware platform, you can deploy the entire stack in a matter of minutes, instead of days and months. So it's a much easier transition, we also work with a lot of partners, such as Veeam, that customers were using on premises, and we've allowed them to have those capabilities in the Cloud as well, in a very automated fashion. >> Quickly if I remember, I think you guys were first doing something with VMware in the Cloud, you're kind of a year ahead of most. I mean-- >> Stu: It was a few months ahead, they were the first big partner out there with the VMware Cloud basically. We got, put in vCloud Air and everything. >> But in terms of shipping, actually, you guys-- >> We were the first ones, yeah. So we were the first ones to market with the Cloud Foundation stack right? Yeah and then the other vendors followed as well, but yeah that's been doing great, right? And again, it's fully automated, in a matter of minutes you can deploy the whole stack, a lot of value add there. >> Yeah Zeb, maybe help set the picture for us a little bit. 'Cause we talk about this multi-cloud world, IBM owns a lot of applications, IBM partners with a lot, where does IBM see themselves playing in this multi-cloud, multi-app world? >> Great question, I think, so I refer to it as the two T's. So the first one being the transition, and then the transformation. So the transition is really where the challenge has been for those customers, the barrier to entry, how do these customers actually make that move seamless to the Cloud, especially the space that IBM is in on the enterprise side, these applications are legacy, very very complicated design, a lot of dependencies, so that was a challenge that we tried to solve for. And I think we're at a state now where we've not only solved for that, we've also, I don't know if you guys have seen HCX that we had with VMware recently, which was a great migration tool, and helps customers onboard to the Cloud, and adopt Cloud much much faster. And then also build that ecosystem partner network. So all those tools that we were using on premises, like Veeam, right? Making those available in the Cloud for those customers, and it has been great, and also on the transformation side, right? So not only just move them to the Cloud, but also help them leverage, and go up the stack.
So micro-services, blockchain, Watson, containers, all those things are available to our customers. >> I think that's a key point that I wanted to highlight is, people often say, how does IBM compete with some of the big Cloud players? You're not just infrastructure as a service, you've got a giant SaaS portfolio, you mentioned Watson so, talk about your strategy in that regard. >> Yeah I mean so, the enterprise customer, typical customer, whether it's financial industry or whether health care or transportation et cetera, nobody is just looking for a partner where they can just move the infrastructure to. They're looking for the next state, they're looking to transform the business, they're trying to utilize all those new capabilities that exist in the Cloud today. And IBM has solved for that exactly because not only can you just move your infrastructure and workloads, but now you can consume all those additional value adds in the Cloud like Watson, making for a more intelligent solution in the end. >> Right, so that's a key differentiator. There's only a couple of companies that have that, well I guess you guys, Oracle, Microsoft obviously has the applications, and IBM talks a lot about the cognitive piece, am I correct you can only get Watson in the IBM Cloud, is that still the case, or do you now have it on prem? >> No no, Watson can be consumed using an API. So it's a PaaS platform, and if somebody wanted to consume Watson for the on premises workloads and wanted to bring that intelligence for that on premises environment they can do that. >> Dave: Are you seeing more demand for that? >> Oh yes. >> Or is it primarily in the Cloud? >> We've got huge traction in the healthcare space especially, there's a lot of financial customers that are onboarding that as well. So Watson's doing great in that regard. >> Sort of privacy reasons and-- >> Zeb: That's right. >> Zeb, one of the things we've been watching with Veeam for the last few years is how do they penetrate deeper into the enterprise. Of course IBM has a strong position in the enterprise, help connect for us how the Veeam and IBM partnership goes together. >> So I think this was a very easy answer for a lot of our customers, because Veeam has a lot of penetration on the on premises workloads, especially in the back-up and business continuity space. So when we looked at the partners and the products that existed in the space, we really looked at the market space, what the customers were consuming. Veeam had a huge market share, and like I said previously, we wanted to solve for those problems and we wanted to keep the tools, the same tool set that they were using today on premises, so this was very seamless for us, and it is seamless for the customers, to move to IBM Cloud and leverage those same tools exactly. >> So talk about choice because, I can imagine you're getting a call from Ed Walsh, "Hey, how about using my data protection software instead of Veeam?". How do you manage that? >> Zeb: It is tough, right? It is obviously tough, IBM also has a huge portfolio of products, right? In the end the decision was, or it really came down to, what is it the customers are looking for? When it came to the back-up space, especially on the VMware platform, the answer was there, a lot of the VMware customers use Veeam. In addition to that, Veeam also checks a lot of other boxes for us.
So, not only the VMware stack, but also, I don't know if it's been announced yet or not, but AIX is something in beta that they're launching at this event, so that is huge for IBM. >> Dave: Really? >> Oh yes, they're also in the bare metal space, so a consolidated view of all your back-ups, all your bare metal, for AIX, for the virtualized platform. >> So the power guys will be happy. >> For those that aren't as familiar anymore, I mean remember AIX back in the day, but this is the second week in a row I'm talking about AIX. It was Nutanix last week, and it's Veeam today. How much AIX is there still out in the wild? >> There's quite a bit, I mean IBM, if you guys know the background, right? When SoftLayer was acquired it was a bare metal shop. So with that a lot of the power stuff came as well. So we have a huge power practice in IBM. >> Right, and well it's still, I remember the Steve Mills charts, which showed the availability of AIX versus, the only more available platform was the mainframe, and then with AIX, and then, and you had all that other stuff that everybody else buys but, it's a volume market so it kind of makes sense though. People will pay up for that. And still, a huge install base, now growing, and Nutanix has a relationship with the power guys, so maybe that's where it sort of factored in, right? But Linux, of course, is the hot space, right? I mean sure you see it's powering the web. >> Well I'm a VMware guy, so (laughs). >> There's Linux sitting on top of some of that. >> That's right, of course, of course. >> You've got Linux on mainframe, right? Okay, alright so, talk a little bit more about what you're seeing from the VMware customer base, how it's synergistic and not just sort of a one-way trip into a Hotel California. >> Yeah, so a typical VMware customer that we're seeing who's on premises today is looking to IBM Cloud, or actually to take a leap into the Cloud, especially on the enterprise base, these customers want to transform. I mean, there have been a lot of questions for them, especially the customer base IBM focused on, questions around security, compliance, business continuity and data protection and such. So these customers not only want to just make the leap into the Cloud, but they also want to solve for some of these challenges, and also go up the stack like I was mentioning. So, we're seeing a huge push for containers, for those customers that are moving to VMware, they want to build up the stack, on the PaaS layer, and also want to leverage Watson and services like that. >> Yeah, could you expand on that a little more, things like are you working with PKS, the solution with VMware and Pivotal, and the Kubernetes stuff, or? >> Yes, Kubernetes, Docker, we also give the customers the ability to do their own stuff, go up the stack. I mean, in the end, you know, they can consume us from an IaaS standpoint and build their PaaS on top, or we can, they can use our own, so Kubernetes, Docker, et cetera. >> What's the story, Stu, with Cloud Foundry these days? There was a big push early on, and I feel like I can, I'm not as close as you are, but there seems to be a, I don't want to say a pull back, but maybe less enthusiasm, what's the lay of the land? >> Sure, I mean IBM was one of the earliest with Bluemix, I believe, and with IBM Cloud, IBM has a few different offerings, I didn't see as big of a push from IBM at the Cloud Foundry Summit I was at last month, but IBM, like most of the Cloud providers, are giving customers choice. >> Zeb: That's right.
>> So I guess the question is what-- >> And heavy in Open Source, I mean I'm seeing a heavy IBM push, I'm wondering about server-less, if you've got any commentary there. We've been looking at like OpenWhisk and some of the pieces there. >> Yeah OpenWhisk is there, there's, server-less is a thing that a lot of these customers, back to your original question, a lot of these customers are looking for those types of services, and they're all available in the catalog. >> It's still pretty early, that hasn't overtaken the discussions of the (mumbles) and the (mumbles) stuff in your world has it? >> It hasn't, but I think the enterprise customers who are looking to move to Cloud, they are thinking about those things. So these are some of the check boxes that need to be checked for them for the future growth, et cetera. >> So you've got VMware's, obviously, virtualization strategy, you've got containers coming, I remember when we had Pat Gelsinger on theCUBE several years ago, when containers were, Docker was rocketing, and everybody was like oh Docker's going to kill VMware. And Pat's response was, "Look, we've got the best containers in the world. We're going to embrace containers". They're like, oh sure. But that's exactly what happened. What's IBM's point of view on it? >> Yeah, here's the thing, we want to give them the option to do whatever they want to do. We're seeing a lot of traction on the micro-services side, on the containerization, but I think it's going hand in hand, a lot of the customers are using the VMware platform still, yet they're also leveraging some of these other micro-services and containers, so I think Pat's right on. I think originally what people were talking about was getting rid of the IaaS layer of VMware and just going containers completely. Our take is, give the customers all the functionality and the ability to do whatever they want to do, we are seeing it's more of a mix at the moment. >> And we had Edouard Bugnion on recently, one of the founders of VMware, and he was talking about the challenges in the data center at scale. And in particular when you introduce virtualization and you reduce some of the hardware resources, how do you deliver predictable, high-performance, at scale, and some of the challenges there. That's even on prem. Now introduce Cloud, and you've got distance and latency and other physics so, what's the discussion like with customers around how to architect the ideal Cloud, on prem, hybrid. >> It's a great question, because that is a question I get asked all the time, because in the enterprise base like I said, these customers in a lot of cases have a hybrid or multi-cloud strategy, so network becomes a key part of that discussion. For us, the answer is very simple. We've laid down the fiber of (mumbles) across all these data centers, so when you're talking about latency, and data transfer, and those types of speeds, or having a multi-cloud strategy across the globe, it's a very simple and easy conversation because not only do we make all that information available to our customers, as far as what latency they can expect from which data center to another one across the globe, but also it's all private, and it's all secure, and it makes for a very good multi-cloud story. >> I don't know if you saw Ginni Rometty's talk at IBM Think, but she used the term, a lot of people tongue in cheek, but I kind of like it, "incumbent disruptors".
I mean look if you're IBM and you've got the client base that IBM has, you better come up with a term like that because that's exactly what you're trying to help your customers do. So, my question is, where does the Cloud and your offering with VMware fit into the incumbent disruptor scenario? >> Yeah, so VMware like I said earlier, we didn't want to be a me too player with VMware. Not only did we want to have a good story with VMware because obviously VMware has a huge market share when it comes to virtualization, but on top of that, we wanted to be more futuristic, and solve for those, some of those questions and concerns that the enterprise customer had. So, tight integration on the enterprise base, on the micro-services, containerization, Watson is a huge part of the VMware platform, you can seamlessly integrate into Watson and really have intelligent decision making on the VMware platform. So, we wanted to ensure that we were helping our enterprise customers move to Cloud, yet also solve for the future problems. >> So the incumbent piece, both VMware and IBM, right, incumbent customers, the disruptor would be I guess Cloud, all the new Cloud services, certainly the machine intelligence, cognitive, et cetera, components is the disruptive capability, now it's up to you to figure out, okay, how do you apply all that, presumably IBM and your partners can help. >> Yeah and here's the thing, you mentioned earlier, IBM is one of the only companies in the world that can have an end-to-end, not just infrastructure, but also services wrapped around it. So if you're a customer who's not only looking to move to the Cloud but also have services wrapped around, to go end-to-end, IBM is the company to do that for you. >> Dave: Well that's interesting. Okay, I got to ask him Stu. So we had, we were at Dell Technologies World a couple weeks ago, and we had Jeff Clarke on, and we asked him, we said, "Look, companies like IBM, HPE, sort of, IBM selling off its x86 division, and HPE splitting, Dell did the opposite. The mega merger". And his comment was, "Well I don't see how you can do end-to-end without both ends". Now, his definition of end is obviously different from your end definition, and I have to ask you, what do you mean by end-to-end? Is the client sort of just a commodity, we can get that anywhere, it's not really an integration challenge? >> So when I'm saying end-to-end what I'm talking about is an enterprise customer looking to move to the Cloud, solve for the future problems, essentially re-invent themselves, transform their business, leverage the new applications, micro-services that are there, but also have services wrapped around it, right? Somebody who's there to help them end-to-end, whether it's just doing migrations for example, right, from on premises to the Cloud, but also help them onboard and guide them on what is there in the Cloud, or the micro-services, or our PaaS layer, and how they can transform really. >> So that to me Stu is, Zeb's talking about not a hardware view of end-to-end, but a, maybe a systems and a software view of end-to-end, in the Cloud services. Alright, Zeb, thank you very much for, do you have one more? You good? Thanks so much for coming on theCUBE. >> Guys, thank you very much, appreciate it. >> Appreciate it. Alright, keep it right there buddy, Stu and I will be back with our next guest. This is theCUBE, we're live from VeeamOn 2018, in Chi-town, we'll be right back. (electronic music)

Published Date : May 15 2018
