Image Title

Search Results for cern:

Ricardo Rocha, CERN | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>>from around the globe. It's >>the cube >>with coverage of >>Kublai khan and >>Cloud Native Con, Europe 2021 virtual brought >>to you by red hat, >>the cloud Native >>Computing foundation and ecosystem partners. Hello, welcome back to the cubes coverage of Kublai khan. Cloud Native Con 2021 part of the CNC. S continuing cube partnership virtual here because we're not in person soon, we'll be out of the pandemic and hopefully in person for the next event. I'm john for your host of the key. We're here with ricardo. Roach computing engineers sir. In CUBA. I'm not great to see you ricardo. Thanks for remote ng in all the way across the world. Thanks for coming in. >>Hello, Pleasure. Happy to be here. >>I saw your talk with Priyanka on linkedin and all around the web. Great stuff as always, you guys do great work over there at cern. Talk about what's going on with you and the two speaking sessions you have it coop gone pretty exciting news and exciting sessions happening here. So take us through the sessions. >>Yeah. So actually the two sessions are kind of uh showing the two types of things we do with kubernetes. We we are doing we have a lot of uh services moving to kubernetes, but the first one is more on the services we have in the house. So certain is known for having a lot of data and requests, requiring a lot of computing capacity to analyze all this data. But actually we have also very large community and we have a lot of users and people interested in the stuff we do. So the first question will actually show how we've been uh migrating our group of infrastructure into the into communities and in this case actually open shift. And uh the challenge there is to to run a very large amount of uh global websites on coordinators. Uh we run more than 1000 websites and there will be a demonstration on how we do all the management of the website um life cycle, including upgrading and deploying new new websites and an operator that was developed for this purpose. And then more on the other side will give with a colleague also talk about machine learning. Machine learning has been a big topic for us. A lot of our workloads are migrating to accelerators and can benefit a lot from machine learning. So we're giving a talk about a new service that we've deployed on top of Cuban areas where we try to manage to uh lifecycle of machine learning workloads from data preparation all the way to serving the bottles, also exploring the communities features and integrating accelerators and a lot of accelerators. >>So one part of the one session, it's a large scale deployment kubernetes key to there and now the machine learning essentially service for other people to use that. Right? Like take me through the first large scale deployment. What's the key innovation there in your opinion? >>Yeah, I think compared to the infrastructure we had before, is this notion that we can develop an operator that will uh, manage resource, in this case a website. And this is uh, something that is not always obvious when people start with kubernetes, it's not just an orchestra, it's really the ap and the capability of managing a huge amount of resources, including custom resources. So the possibility to develop this operator and then uh, manage the lifecycle of uh, something that was defined in the house and that fits our needs. Uh, There are challenges there because we have a large amount of websites and uh, they can be pretty active. 
Uh, we also have to some scaling issues on the storage that serves these these websites and we'll give some details uh during the talk as well, >>so kubernetes storage, this is all kind of under the covers, making this easier. Um and the machine learning, it plays nicely in that what if you take us for the machine learning use case, what's going on there, wow, what was the discovery, How did you guys put that together? What's the key elements there? >>Right, so the main challenge there has been um that machine learning is is quite popular but it's quite spread as well, so we have multiple groups focusing on this, but there's no obvious way to centralize not only the resource usage and make it more efficient, but also centralize the knowledge of how these procedures can be done. So what we are trying to do is just offer a service to all our users where we help them with infrastructure so that they don't have to focus on that and they could focus just on their workloads and we do everything from exposing the data systems that we have in the house so that they can do access to the data and data preparation and then doing um some iteration using notebooks and then doing distributed training with potentially large amount of gps and that storage and serving up the models and all of this is uh is managed with the coordinates cluster underneath. Uh We had a lot of knowledge of how to handle kubernetes and uh all the features that everyone likes scalability. The reliability out of scaling is very important for this type of workload. This is, this is key. >>Yeah, it's interesting to see how kubernetes is maturing, um congratulations on the projects. Um they're going to probably continue to scale. Remember this reminds me of when I was uh you know coming into the business in the 98 late eighties early nineties with TCP I. P. And the S. I. Model, you saw the standards evolve and get settled in and then boom innovation everywhere. And that took about a year to digest state and scale up. It's happening much faster now with kubernetes I have to ask you um what's your experience with the question that people are looking to get answered? Which is as kubernetes goes, the next generation of the next step? Um People want to integrate. So how is kubernetes exposing a. P. I. S. To say integration points for tools and other things? Can you share your experience and where this is going, what's happening now and where it goes? Because we know there's no debate. People like the kubernetes aspect of it, but now it's integration is the conversation. Can you share your thoughts on that? >>I can try. Uh So it's uh I would say it's a moving target, but I would say the fact that there's such a rich ecosystem around kubernetes with all the cloud, David projects, uh it's it's uh like a real proof that the popularity of the A. P. I. And this is also something that we after we had the first step of uh deploying and understanding kubernetes, we started seeing the potential that it's not reaching only the infrastructure itself, it's reaching all the layers, all the stack that we support in house and premises. And also it's opening up uh doors to easily scale into external resources as as well. So what we've been trying to tell our users is to rely on these integrations as much as possible. 
So this means like the application lifecycle being managed with things like Helmand getups, but also like the monitoring being managed with Prometheus and once you're happy with your deployment in house we have ways to scale out to external resources including public clouds. And this is really like see I don't know a proof that all these A. P. I. S are not only popular but incredibly useful because there's such a rich ecosystem around it. >>So talk about the role of data in this obviously machine learning pieces something that everyone is interested in as you get infrastructure as code and devops um and def sec ops as everything's shifting left. I love that, love that narrative day to our priests. All this is all proving mature, mature ization. Um data is critical. Right? So now you get real time information, real time data. The expectations for the apps is to integrate the data. What's your view on how this is progressing from your standpoint because machine learning and you mentioned you know acceleration or being part of another system. Cashing has always done that would say databases. Right. So you've got now is databases get slower, caches are getting faster now they're all the ones so it's all changing. So what's your thoughts on this next level data equation into kubernetes? Because you know stateless is cool but now you've got state issues. >>Yeah so uh yeah we we've always had huge needs for for data we store and I I think we are over half an exhibit of data available on the premises but we we kind of have our own storage systems which are external and that's for for like the physics data, the raw data and one particular charity that we had with our workloads until recently is that we we call them embarrassing parallel in the sense that they don't really need uh very tight connectivity between the different workloads. So if it's people always say tens of thousands of jobs to do some analysis, they're actually quite independent, they will produce a lot more data but we can store them independently. Machine learning is is posing a challenge in the sense that this is a training tends to be a lot more interconnected. Um so it can be a benefit from from um systems that we are not so familiar with. So for us it's it's maybe not so much the cashing layers themselves is really understanding how our infrastructure needs to evolve on premises to support this kind of workloads. We had some smallish uh more high performance computing clusters with things like infinite and for low latency. But this is not the bulk of our workloads. This is not what we are experts on these days. This is the transition we are doing towards uh supporting this machine learning workers >>um just as a reference for the folks watching you mentioned embarrassing parallel and that's a quote that you I read on your certain tech blog. So if you go to tech blog dot web dot search dot ch or just search cern tech blog, you'll see the post there um and good stuff there and in there you go, you lay out a bunch of other things too where you start to see the deployment services and customer resource definitions being part of this, is it going to get to the point where automation is a bigger part of the cluster management setting stuff up quicker. 
Um As you look at some of the innovations you're doing with machines and Coubertin databases and thousands of other point things that you're working on there, I mean I know you've got a lot going on there, it's in the post but um you know, we don't want to have the problem of it's so hard to stand up and manage and this is what people want to make simpler. How do you how do you answer that when people say say we want to make it easier? >>Yeah. So uh for us it's it's really automate everything and up to now it has been automate the deployment in the kubernetes clusters right now we are looking at automating the kubernetes clusters themselves. So there's some really interesting projects, uh So people are used to using things like terra form to manage the deployment of clusters, but there are some projects like cross playing, for example, that allows us to have the clusters themselves being resources within kubernetes. Uh and this is something we are exploring quite a bit. Uh This allows us to also abstract the kubernetes clusters themselves uh as uh as carbonated resources. So this this idea of having a central cluster that will manage a much larger infrastructure. So this is something that we're exploring the getups part is really key for us to, it's something that eases the transition from from from people that are used already to manage large scale systems but are not necessarily experts on core NATO's. Uh they see that there's an easier past there if they if they can be introduced slowly through through the centralized configuration. >>You know, you mentioned cross plane, I had some on earlier, he's awesome dude, great guy and I was smiling because you know I still have you know flashbacks and trigger episodes from the Hadoop world, you know when it was such so promising that technology but it was just so hard to stand up and managed to be like really an expert to do that. And I think you mentioned cross plane, this comes up to the whole operator notion of operating the clusters, right? So you know, this comes back down to provisioning and managing the infrastructure, which is, you know, we all know is key, right? But when you start getting into multi cloud and multiple environments, that's where it becomes challenging. And I think I like what they're doing is that something that's on your mind to around hybrid and multi cloud? Can you share your thoughts on that whole trajectory? >>Absolutely. So I actually gave an internal seminar just last week describing what we've been playing with in this area and I showed some demo of using cross plane to manage clusters on premises but also manage clusters running on public clouds. A. W. S. Uh google cloud in nature and it's really like the goal there. There are many reasons we we want to explore external resources. We are kind of used to this because we have a lot of sites around the world that collaborate with us, but specifically for public clouds. Uh there are some some motivations there. The first one is this idea that we have periodic load spikes. So we knew we have international conferences, the number of analysis and job requests goes up quite a bit, so we need to be able to like scale on demand for short periods instead of over provisioning this uh in house. The second one is again coming back to machine learning this idea of accelerators. We have a lot of Cpus, we have a lot less gPS uh so it would be nice to go on fish uh for those in the public clouds. 
And then there's also other accelerators that are quite interesting, like CPUs and I p u s that will definitely play a role and we probably, or maybe we will never have among premises, will only be able to to use them externally. So in that, in that respect, actually coming back to your previous question, this idea of storage then becomes quite important. So what we've been playing with is not only managing this external cluster centrally, but also managing the wall infrastructure from a central place. So this means uh, making all the clusters, whatever they are look very, very much the same, including like the monitoring and the aggregation of the monitoring centrally. And then as we talked about storage, this idea of having local storage that that will be allow us to do really quick software distribution but also access to the data, >>what you guys are doing as we say, cool. And relevant projects. I mean you got the large scale deployments and the machine learning to really kind of accelerate which will drive a lot of adoption in terms of automation. And as that kicks in when you got to get the foundational work done, I see that clearly the right trajectory, you know, reminds me ricardo, um you know, again not do a little history lesson here, but you know, back when network protocols were moving from proprietary S N A for IBM deck net for digital back in the history the old days the os I Open Systems Interconnect Standard stack was evolving and you know when TCP I P came around that really opened up this interoperability, right? And SAM and I were talking about this kind of cross cloud connections or inter clouding as lou lou tucker. And I talked that open stack in 2013 about inter networking or interconnections and it's about integration and interoperability. This is like the next gen conversation that kubernetes is having. So as you get to scale up which is happening very fast as you get machine learning which can handle data and enable modern applications really it's connecting networks and connecting systems together. This is a huge architectural innovation direction. Could you share your reaction to that? >>Yeah. So actually we are starting the easy way, I would say we are starting with the workloads that are loosely coupled that we don't necessarily have to have this uh tighten inter connectivity between the different deployments, I would say that this is this is already giving us a lot because our like the bulk of our workloads are this kind of batch, embarrassing parallel, uh and we are also doing like co location when we have large workloads that made this kind of uh close inter connectivity then we kind of co locate them in the same deployment, same clouds in region. Um I think like what you describe of having cross clouds interconnectivity, this will be like a huge topic. It is already, I would say so we started investigating a lot of service measure options to try to learn what we can gain from it. There is clearly a benefit for managing services but there will be definitely also potential to allow us to kind of more easily scale out across regions. There's we've seen this by using the public cloud. 
Some things that we found is for example, this idea of infinite, infinite capacity which is kind of sometimes uh it feels kind of like that even at the scale we have for Cpus But when you start using accelerators, Yeah, you start negotiating like maybe use multiple regions because there's not enough capacity in a single region and you start having to talk to the cloud providers to negotiate this. And this makes the deployments more complicated of course. So this, this interconnectivity between regions and clouds will be a big thing. >>And, and again, low hanging fruit is just a kind of existing market but has thrown the vision out there mainly to kind of talk about what what we're seeing which is the world's are distributed computer. And if you have the standards, good things happen. Open systems, open innovating in the open really could make a big difference is going to be the difference between real value for the society of global society or are we going to get into the silo world? So I think the choice is the industry and I think, you know, Cern and C and C. F and Lennox Foundation and all the companies that are investing in open really is a key inflection point for us right now. So congratulations. Thanks for coming on the cube. Yeah, appreciate it. Thank you. Okay, Ricardo, rocha computing engineer cern here in the cube coverage of the CN Cf cube con cloud, native con europe. I'm john for your host of the cube. Thanks for watching.

Published Date : May 5 2021

SUMMARY :

from around the globe. I'm not great to see you ricardo. Happy to be here. what's going on with you and the two speaking sessions you have it coop gone pretty exciting news the two types of things we do with kubernetes. So one part of the one session, it's a large scale deployment kubernetes key to there and now So the possibility to Um and the machine learning, it plays nicely in that what if you take us for the machine learning use case, the data systems that we have in the house so that they can do access to the data and data preparation in the 98 late eighties early nineties with TCP I. P. And the S. I. Model, you saw the standards that the popularity of the A. P. I. And this is also something that we So talk about the role of data in this obviously machine learning pieces something that everyone is interested in as This is the transition we are doing towards So if you go to tech blog dot web dot search dot ch Uh and this is something we are exploring quite a bit. this comes back down to provisioning and managing the infrastructure, which is, you know, we all know is key, The first one is this idea that we have periodic load spikes. and the machine learning to really kind of accelerate which will drive a lot of adoption in terms of uh it feels kind of like that even at the scale we have for Cpus But when you open innovating in the open really could make a big difference is going to be the difference

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
PriyankaPERSON

0.99+

Ricardo RochaPERSON

0.99+

2013DATE

0.99+

DavidPERSON

0.99+

IBMORGANIZATION

0.99+

two sessionsQUANTITY

0.99+

first questionQUANTITY

0.99+

CERNORGANIZATION

0.99+

two typesQUANTITY

0.99+

RicardoPERSON

0.99+

more than 1000 websitesQUANTITY

0.99+

last weekDATE

0.99+

CUBALOCATION

0.99+

98 late eightiesDATE

0.99+

NATOORGANIZATION

0.99+

Lennox FoundationORGANIZATION

0.98+

two speaking sessionsQUANTITY

0.98+

first oneQUANTITY

0.98+

thousandsQUANTITY

0.98+

Cloud Native ConEVENT

0.98+

second oneQUANTITY

0.97+

Cloud Native Con 2021EVENT

0.97+

first stepQUANTITY

0.97+

one sessionQUANTITY

0.96+

C. FORGANIZATION

0.96+

KubeConEVENT

0.95+

CORGANIZATION

0.95+

ricardoPERSON

0.95+

linkedinORGANIZATION

0.95+

tens of thousands of jobsQUANTITY

0.95+

johnPERSON

0.95+

PrometheusTITLE

0.95+

one partQUANTITY

0.94+

europeLOCATION

0.94+

about a yearQUANTITY

0.93+

cloud NativeORGANIZATION

0.9+

2021EVENT

0.89+

one particular charityQUANTITY

0.88+

pandemicEVENT

0.81+

red hatORGANIZATION

0.81+

single regionQUANTITY

0.81+

HelmandTITLE

0.81+

Kublai khanPERSON

0.8+

first largeQUANTITY

0.8+

CubanLOCATION

0.8+

Cern andORGANIZATION

0.79+

EuropeLOCATION

0.78+

P.OTHER

0.77+

CoubertinORGANIZATION

0.75+

early ninetiesDATE

0.7+

CloudNativeCon Europe 2021EVENT

0.7+

over halfQUANTITY

0.68+

formTITLE

0.68+

conCOMMERCIAL_ITEM

0.67+

S. I. ModelOTHER

0.67+

Kublai khanPERSON

0.65+

TCP I.OTHER

0.65+

CfCOMMERCIAL_ITEM

0.64+

deploymentQUANTITY

0.56+

servicesQUANTITY

0.53+

googleORGANIZATION

0.48+

SAMORGANIZATION

0.46+

P. I.OTHER

0.4+

native conCOMMERCIAL_ITEM

0.37+

Ricardo Rocha, CERN | KubeCon + CloudNativeCon NA 2020


 

from around the globe it's thecube with coverage of kubecon and cloudnativecon north america 2020 virtual brought to you by red hat the cloud native computing foundation and ecosystem partners hey welcome back everybody jeff frick here with thecube coming to you from our palo alto studios for the continuing coverage of kubecon cloud native con 2020 north america there was the european version earlier in the summer it's all virtual uh so the good news is we don't have to get on planes and we can get guests from all over the world and we're excited to welcome back for his return to the cube ricardo rocha he is a staff member and computing engineer at cern ricardo great to see you hello thanks for having me absolutely and you're coming in from uh from geneva so you're you already had a good thursday i bet yeah we're just finishing right now yeah right so in in getting ready for this um interview i was looking at the interview that you did i think it was two cube cons ago uh in may of 2019 and it just strikes me a lot of people know what cern is but a lot of people don't know what's cern in so i wonder if you can just give you know kind of the 101 of what cern's mission is and what is some of the work that you guys do there yeah sure uh so cern is the european organization for uh nuclear research we are the largest particle physics laboratory in the world and our main mission is uh fundamental research so we try to answer big questions about why don't we see antimatter what is dark matter or dark energy other questions about the origin of the universe and to answer these questions we build very large machines particle accelerators where we try to recreate some of [Music] the moments just after the universe was created the big bang to try to understand better what was the state of the matter at that time the result of all of this is very often a lot of data that has to be analyzed and that's why we traditionally have had a huge requirements for computing resources during the the start of cern we always had this this large large requirements right and so you have this large particle accelerators as you said large machines the one that you've got now the the latest one how long has that one been operational yeah so it started uh like maybe around 10 years ago the first launch was a bit before that uh and it's uh it's a very large uh it's the largest one ever built so it's 27 kilometers in perimeter we inject protons into different uh directions and then we we make them collide where we build these huge detectors that can can see what's happening in these collisions uh the the main the main particle accelerator is this one we do have other experiments we have a nancy meta factory that is just uh down from my office and we have other types of experiments as well going right 27 kilometers that's a big that's a big number and then and then again just so people get some type of sense of scale so then you you you speed up the particles you smash them together you see what happens they collect all the data what types of data sets are generated off off just a one you know kind of event and i don't even know if that's a relative you know if that's a valid measure how do how do you measure kind of quantities of data around event just you know kind of for orders of magnitude right so uh the way it works is as you said we accelerate the particles to very close to the speed of light and we increase the energy by by having the beams well controlled and then at specific points we make them collide we have this 
gigantic detectors underground all of this is 100 meters in the ground and these detectors are pretty much a very large camera that would take something like 40 million pictures a second and the result of this is a huge amount of data each of these detectors can generate up to one petabyte of second this is not something we can record so what we do is we have hardware filters that will bring this down to something we can manage which is in the order of a few tens of gigabytes per second wow so you've been you've got a very serious computing challenge ahead of you because you're the one that's on the hook for for grabbing the data recording the data making the data available for for people to use um on their experiments um so we're here at kubecon cloud native con where did containers come into the story uh and and kubernetes specifically what was the real uh challenge that you're trying to overcome yeah so uh this is a a long story of uh using distributed computing at cern and other types of computing so as i mentioned we generate a lot of data we generate something like 7 but of 70 petabytes of data every year and we accumulated something over one half an exabyte of data by now so uh traditionally we've had to build this software ourselves um which was uh because there was not so many people around that would have this kind of needs but this revolution with containers and the clouds appearing kind of allowed us to to join other other communities and benefit also from their work and not have to do everything ourselves so this is the main probe for us to start doing this the other point is more containerization we traditionally are very we have a lot of needs to share information but also share resources between physicists and engineers so this idea of containerizing the work including all the code all the data and then sharing this with our colleagues is very appealing the fact that we can also take this unit of work and just deploy it in any infrastructure that has a standardized api like kubernetes and scale that monitoring the same way it's also very appealing so all of these things kind of connect with our way of working our natural way of working i would say right so you've talked about the this upgrade is coming um to the particle accelerator in a couple four or five years whatever that timeline is relatively soon um this as you've said before is a huge step function in the data that's that that's going to come off these experiments i mean how are you keeping up on the compute side with the fundamental shift in on kind of the physics side and the data that's going to be generated to make sure that you can keep up and i think you said it in a prior interview somewhere along the way that you know you don't want to be the bottleneck when there's all this great work being done but if it's not captured and made available for people to do stuff with the data then you know it's not uh it's not the greatest experiment so how are you keeping up and and what's the relative scale to have what you got to do on the compute side to keep up with the the guys on the physics side yeah so the the the idea well we what we will have to deal with is an increase of 10 times of more data than we have today we already have a lot and very soon we'll have a lot more but this is not i would say this is not the first time this kind of uh step happens uh in our computing we always kind of found a new technology or a new way to do things that would improve in in this case uh what we do is we do what we always do 
which is we try to look for all sorts of new technologies or all sorts of new resources that we could make use of in this case a lot is involving improving our own software to replace what we currently use with hardware triggers to replace that with software-based using accelerators gpus and other types of accelerators this will play a big role and also making our software more efficient in this way the second thing that we are doing is trying to make our infrastructure more agile and this is where cloud native kubernetes plays a huge role so that we can benefit from external resources uh we we can always think of like expanding our in on-premises resources but it's also very good to be able to just go and fish around if there's something available externally kubernetes plays a very big role in that respect as well yeah i'd love to dig into that a little deeper because the cloud native foundation is a super active foundation obviously a ton of activity around kubernetes so what does that mean to you as an infrastructure provider you know to your own company being on the hook to have now you know kind of an open source community that's supporting you indirectly via ongoing developments and ongoing projects and having as you said kind of this broader group of brain power to pull from to help you move your own infrastructure along yeah i think this this is great we've had really good experiences in the past we've been uh heavy users of uh linux from from from for a very long time we've used openstack for our private cloud and we've been heavily involved in that community as well we not only uh contribute as end users but we also uh offer some some manpower for development and helping with the community and we are doing the same with kubernetes uh and this is uh this is really we we end up getting a lot more than we we are putting in the community we are quite involved but uh it's so large and and and with such big players that have very similar needs to ours that uh we end up having a lot a lot more back than we are putting in we try to help as much as possible but uh yeah we have limited resources as well now open source is an amazing it's just an amazing innovation uh machine and and obviously it's proved as its value over a lot of things from linux to kubernetes being one of the most recent i want to shift gears a little bit right and ask you just your your take on public cloud right one of the huge benefits of public cloud is is the flexibility to add capacity shrink capacity as you need it and you talked again in a prior thing i was looking at you know that you definitely have spikes uh in demand spikes whether there's a high frequency of experiments i don't know how frequently you run those things versus maybe a conference or something where you said people you know want to get access to the data run experiments prior to your conference do you where does public cloud play in your thoughts and maybe you're there today maybe you're not how do you think about you know kind of public cloud generically but more specifically you know that ability to add a little bit more flex in your compute horsepower or are you just going up into the right up into the right and not really flexing down very much yeah so this is this is something we've been working on for a few years now uh we it's uh it's uh it's i would say it's an ongoing work it's a situation that will will not uh be very clear for the for the next few years but again what what we try to do is just to explore as much as possible all kinds 
of resources that can help us what we did in the kubecon last year was this demonstration that we can actually scale we can scale out and burst for for this uh spiky workloads we have we can burst to the to the public cloud quite easily using this kind of cloud native technologies that we have today and this is extremely important because it kind of changes our mindset instead of having to to think only on investing on premises we can think that maybe we can cover for the majority of use cases but then explore and burst to the public cloud this has to be easy in terms of infrastructure and that we are at that point right now with kubernetes we also have kind of workload that is maybe easier to do these things than than a traditional i.t where services are very interconnected in our case we are more thinking of batch workloads where we can just submit jobs uh and then fetch the data back right this also has a few challenges but but it's i would say it's it's easier than the traditional ite service deployments the other aspect where the public cloud is also very interesting is uh for resources that we don't have in large quantities so we have a very large farm for with cpus we have some gpus and it's very good to be able to explore this new accelerator technologies and maybe expand our available pool of accelerators by going to the public cloud maybe to use them but also to validate to see which ones are best for our use cases and explore that option as well it's not only general capacity it's really like dedicated um hardware that we might not even have ever like we think of tpus or ipu's it's something that is very interesting that we can scale and just go go use them in the public cloud yeah that's a really interesting point because because the cloud providers are big enough now right that they're building all kind of specialized specialized server specialized uh cpu specialized gpus dpus is a new one i've heard a data processing unit as you said there's fpgas and all kinds of accelerators so it is a really rich environment for as you said to do your experiments and find what the optimal solution is for whatever that particular workload is but ricardo i want to shift gears a little bit as we come to the end of 2020 thankfully for a whole bunch of reasons as you look forward to 2021 i mean clearly anticipating and starting to plan to get ready for your upgrade as a priority i'm just curious what are your other priorities and how does you know kind of the compute infrastructure in terms of an investment within cern you know kind of rank with the investment around the physical things that you're building the big machines because without the compute those other things really don't provide much data and i know those are we always talked about how expensive the particle accelerators is it's an interesting number and it's big but you guys are a big piece of that as well so what are your priorities looking forward to 2021 yeah from from the compute side i think we are keeping the the priorities in similar to what we've been doing the last few years which is to make sure that we improve all our automation to improve efficiency as well to prepare for these upgrades we have but also there's a lot of activity in this new uh area with machine learning popping up we have a ton of services appearing where people want to to start doing machine learning in many many use cases in some cases they want to do the filtering in the detectors in other cases they want to generate simulation data a lot faster 
using machine learning as well so i think this will be something that will be a huge topic for next year even for the next couple of years which is to see how we can offer our users and physicists the best service so that they don't have to care about the infrastructure they don't have to know about the details of how they scale their their model training their serving of their models all of this i think this will be a very big topic um it's something that it's becoming really a big part of of the world computing for high energy physics and for cern as well that's great we see that a lot you know just applied machine learning to very specific problems you talked about you still can't even record all that information that comes off those things you have to do some compression technology and other things so real opportunities barely scratched on the surface of machine learning and ai but i'm sure you're going to be using it a ton well ricardo give you give you the last word um we're in at cncf's uh kubecon cloud native con you know what do you get out of these types of shows and why is this such again kind of why is it such an important piece of your way you get your job done yeah honestly uh with all this uh situation right now i kind of really miss this kind of conferences in person uh it's really a huge opportunity to connect with uh with the other end users but also with with the community and to talk to the developers discuss things over uh coffee beer this is something that is really something that is really useful to to have this kind of meetings every year uh i think what what uh i always try to say is uh this this wall infrastructure is is truly making a big impact in the way we do things so we can only thank the community uh it's it allows us to to kind of shift to focusing on a higher level to focus more on our use cases instead of having to focus so much on the infrastructure we kind of start giving it as a given that the infrastructure scales and we can just use it and focus on optimizing our own software so this is a huge contribution we can only thank the cncf projects and everyone involved great well thank you for that uh that summary and that that's a terrific summary so ricardo thank you so much for all your hard work answering really big helping answer really big questions and uh and for joining us today and sharing your insight thank you very much all right he's ricardo i'm jeff you're watching the cube from our palo alto studios for continuing coverage of kubecon cloud nativecon 2020. thanks for watching see you next time [Music] you

Published Date : Nov 19 2020

SUMMARY :

the relative scale to have what you got

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Ricardo RochaPERSON

0.99+

100 metersQUANTITY

0.99+

10 timesQUANTITY

0.99+

2021DATE

0.99+

27 kilometersQUANTITY

0.99+

jeff frickPERSON

0.99+

last yearDATE

0.99+

CERNORGANIZATION

0.99+

todayDATE

0.99+

second thingQUANTITY

0.99+

five yearsQUANTITY

0.99+

ricardoPERSON

0.98+

palo altoORGANIZATION

0.98+

40 million picturesQUANTITY

0.98+

KubeConEVENT

0.98+

first launchQUANTITY

0.98+

first timeQUANTITY

0.98+

next yearDATE

0.98+

CloudNativeConEVENT

0.97+

jeffPERSON

0.96+

ricardo rochaPERSON

0.96+

north americaLOCATION

0.95+

around 10 years agoDATE

0.95+

genevaLOCATION

0.95+

fourQUANTITY

0.95+

101QUANTITY

0.94+

over one half an exabyte of dataQUANTITY

0.93+

70 petabytes of dataQUANTITY

0.93+

kubeconORGANIZATION

0.92+

next couple of yearsDATE

0.92+

7QUANTITY

0.92+

every yearQUANTITY

0.91+

linuxTITLE

0.9+

last few yearsDATE

0.89+

up to one petabyteQUANTITY

0.89+

may of 2019DATE

0.87+

end of 2020DATE

0.87+

2020DATE

0.87+

next few yearsDATE

0.86+

a ton of servicesQUANTITY

0.84+

nancy meta factoryORGANIZATION

0.82+

NA 2020EVENT

0.8+

eachQUANTITY

0.8+

cloudnativeconORGANIZATION

0.8+

a lot of peopleQUANTITY

0.79+

a lot of dataQUANTITY

0.79+

oneQUANTITY

0.78+

few tens of gigabytes per secondQUANTITY

0.78+

so many peopleQUANTITY

0.76+

kubeconEVENT

0.75+

openstackTITLE

0.74+

challengesQUANTITY

0.7+

kubecon cloudORGANIZATION

0.66+

thursdayDATE

0.66+

secondQUANTITY

0.66+

a secondQUANTITY

0.64+

lot of peopleQUANTITY

0.63+

a few yearsQUANTITY

0.62+

hatORGANIZATION

0.61+

cernORGANIZATION

0.61+

europeanOTHER

0.58+

lot of dataQUANTITY

0.58+

foundationORGANIZATION

0.57+

in the summerDATE

0.55+

redPERSON

0.54+

cloud nativecon 2020EVENT

0.54+

lot of activityQUANTITY

0.53+

two cubeQUANTITY

0.49+

conEVENT

0.4+

Lukas Heinrich & Ricardo Rocha, CERN | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering KubeCon + CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back to theCUBE, here at KubeCon CloudNativeCon 2019 in Barcelona, Spain. I'm Stu Miniman. My co-host is Corey Quinn and we're thrilled to welcome to the program two gentlemen from CERN. Of course, CERN needs no introduction. We're going to talk some science, going to talk some tech. To my right here is Ricardo Rocha, who is the computer engineer, and Lukas Heinrich, who's a physicist. So Lukas, let's start with you, you know, if you were a traditional enterprise, we'd talk about your business, but talk about your projects, your applications. What piece of, you know, fantastic science is your team working on? >> All right, so I work on an experiment that is situated with the Large Hadron Collider, so it's a particle accelerator experiments where we accelerate protons, which are hydrogen nuclei, to a very high energy, so that they almost go with the speed of light. And so, we have a large tunnel underground, 100 meters underground in Geneva, so straddling the border of France and Switzerland. And there, we're accelerating two beams. One is going clockwise. The other one is going counterclockwise, and there, we collide them. And so, I work on an experiment that kind of looks at these collisions and then analyzes this data. >> Lukas, if I can, you know, when you talk to most companies, you talk about scale, you talk about latency, you talk about performance. Those have real-world implications for your world. Do you have anything you could share there? >> Yeah, so, one of the main things that we need to do, so we collide 40 million times a second these protons, and we need to analyze them in real time, because we cannot write out all the collision data to disk because we don't have enough disk space, and so we've essentially run 10,000 core real-time application to analyze this data in real-time and see what collisions are actually most interesting, and then only those get written out to disk, so this is a system that I work on called The Trigger, and yeah, that's pretty dependent on latency. >> All right, Ricardo, luckily you know, your job's easy. We say most people you need to respond, you know, to what the business needs for you and, you know, don't worry, you can't go against the laws of physics. Well, you're working on physics here, and boy those are some hefty requirements there. Talk a little bit about that dynamic and how your team has to deal with some pretty tough challenges. >> Right, so, as Lukas was saying, we have this large amount of data. The machines can generate something around the order of a petabyte a second, and then, thanks to their hardware- and software-level triggers, they will reduce this to something that is 10 gigabytes a second, and that's what my side has to handle. So, it's still a lot of data. We are collecting something like 70 petabytes a year, and we keep adding, so right now we have, the amount of storage available is on the order of 400 petabytes. We're starting to get at a pretty large scale. And then we have to analyze all of this. So we have one big data center at CERN, which is 300,000 cores, or something like this, around that, but that's not enough, so what we've done over the last 15, 20 years, we've created this large distributed computing environment around the world. 
We link to many different institutes and research labs together, and this doubles our capacity. So that's our challenge, is to make sure all the effort that the physicists put into building this large machine, that, in the end, it's not the computing that is breaking the world system. We have to keep up, yup. >> One thing that I always find fascinating is people who are dealing with real problems that push our conception of what scale starts to look like, and when you're talking about things like a petabyte a second, that's beyond the comprehension of what most of us can wind up talking about. One problem that I've seen historically with a number of different infrastructure approaches is it requires a fair level of complexity to go from this problem to this problem to this problem, and you have to wind up working through a bunch of layers of abstraction, and the end result is, and at the end of all of this we can run our blog that gets eight visits a day, and that just doesn't seem to make sense. Whereas what you're talking about, that level of complexity is more than justified. So my question for you is, as you start seeing these things evolve and looking at other best practices and guidance from folks who are doing far less data-intensive applications, are you seeing that a lot of the best practices start to fall down as you're pushing theoretical boundaries of scale? >> Right, that's actually a good point. Like, the physicists are very good at getting things done, and they don't worry that much about the process, as long as in the end it works. But there's always this kind of split between the physicists and the more computing engineer where the practices, we want to establish practices, but at the end of the day, we have a large machine that has to work, so sometimes we skip a couple of steps, but we still need, there's still quite a lot of control on like data quality and the software validation and all of this. But yeah, it's a non-traditional environment in terms of IT, I would say. It's much more fast pacing than most traditional companies. >> You mentioned you had how many cores working on these problems on site? >> So in-house, we have 300,000. >> If you were to do a full migration to the public cloud, you'd almost have to repurpose that many cores just to calculating out the bill at that point. Just, because all the different dimensions, everything winds working on at that scale becomes almost completely non-trivial. I don't often say that I'm not sure public cloud can scale to the level that someone would need to. In your case, that becomes a very real concern. >> Yeah, so that's one debate we are having now, and it's, it has a lot of advantages to have the computing in-house, and also because we pretty much use it 24/7, it's a very different type of workload. So we need a lot of resources 24/7, like even the pricing is kind of calculated differently. But the issue we have now is that the accelerator will go through a major upgrade just in five years' time, where we will increase the amount of data by 100 times. Now we are talking about 70 petabytes a year and we're very soon talking about like exabytes. So the amount of computing we'll need there is just going to explode, so we need all the options. We're looking into GPUs and machine learning to change how we do computing, and we are looking at any kind of additional resources we might get, and there the public cloud will probably play a role. 
>> Could you speak to kind of the dynamic of how something like an upgrade of that, you know, how do you work together? I can't imagine that you just say, "Well, we built it, "whatever we needed and everything, and, you know, "throw it over the wall and make sure it works." >> Right, I mean, so I work a lot on this boundary between computing and physics, and so internally, I think we also go through the same processes as a lot of companies, that we're trying to educate people on the physics side how to go through the best practices, because it's also important. So one thing I stressed also in the keynote is this idea of reproducibility and reusability of scientific software is pretty important, so we teach people to containerize their applications and then make them reusable and stuff like that, yup. >> Anything about that relationship you can expound on? >> Yeah, so like this keynote we had yesterday is a perfect example of how this is improving a lot at CERN. We were actually using data from CMS, which was one of the experiments. Lukas is a physicist in ATLAS, which is like a computing experiment, kind of. I'm in IT, and like all this containerized infrastructure kind of is getting us all together because computing is getting much easier in terms of how to share pieces of software and even infrastructure, and this helps us a lot internally also. >> So what particular about Kubernetes helps your environment? You talk for 15 years that you've been on this distributed systems build-out, so sounds like you were the hipsters when it came to some of these solutions we're working on today. >> That has been like a major change. Lukas mentioned the container part for the software reproducibility, but I have been working on the infrastructure for, I joined CERN as a student and I've been working on the distributed infrastructure for many years, and we basically had to write our own tools, like storage systems, all the batch systems, over the years, and suddenly with this public cloud explosion and open source usage, we can just go and join communities that have requirements sometimes that are higher than ours and we can focus really on the application development. If we base, if we start writing software using Kubernetes, then not only we get this flexibility of choosing different public clouds or different infrastructures, but also we don't have to care so much about the core infrastructure, all the monitoring, log collection, restarting. Kubernetes is very important for us in this respect. We kind of remove a lot of the software we were depending on for many years. >> So these days, as you look at this build-out and what you're looking, not just what you're doing today but what you're looking to build in the upcoming years, are you viewing containers as the fundamental primitive of what empowers this? Are you looking at virtual machines as that primitive? Are you looking at functions? Where exactly do you draw the abstraction layer, as you start building this architecture? >> So, yeah, traditionally we've been using virtual machines for like the last maybe 10 years almost, or, I don't know, eight years at least, and we see containerization happening very quickly, and maybe Lukas can say a bit more about the physics, how this is important on the physics side? >> Yeah, what's been, so currently I think we are looking at containers for the main abstraction because it's also we go through things like functions as a service. 
What's kind of special about scientific applications is that we don't usually just have our entire code base on one software stack, right? It's not like we would deploy Node.js application or Python stack and that's it. And so, sometimes you have a complete mix between C++, Python, Fortran, and all that stuff. So this idea that we can build the entire software stack as we want it is pretty important. So even for functions as a service where, traditionally, you had just a limited choice of runtimes, this becomes important. >> Like, from our side, the virtual machines still had a very complex setup to be able to support all this diversity of software and the containerization, just all the people have to give us is like run this building block and it's kind of a standard interface, so we only have to build the infrastructure to be able to handle these pieces. >> Well, I don't think anyone can dispute that you folks are experts in taking larger things and breaking them down into constituent components thereof. I mean, you are, quite obviously, the leading world experts on that. But was there any challenge to you as you went through that process of, I don't necessarily even want to say modernizing, but in changing your viewpoint of those primitives as you've evolved, have you seen that there were challenges in gaining buy-in throughout the organization? Was there pushback? Was it culturally painful to wind up moving away from the virtual machine approach into a containerized world? >> Right, so yeah, a bit, of course. But traditionally we, like physicists really focus on their end goal. We often say that we don't count how many cores or whatever, we care about events per second, how many events we can process per second. So, it's a kind of more open-minded community maybe than traditional IT, so we don't care so much about which technology we use at some point, as long as the job gets done. So, yeah, there's a bit of traction sometimes, but there's also a push when you can demonstrate that we get a clear benefit, then it's kind of easier to push it. >> What's a little bit special maybe also for particle physics is that it's not only CERN that is the researcher. We are an international collaboration of many, many institutes all around the world that work on the same project, which is just hosted at CERN, and so it's a very flat hierarchy and people do have the freedom to try out things and so it's not like we have a top-down mandate what technology we use. And then somebody tries something out. If it works and people see a value in it then you get adoption from it. >> The collaboration with the data volumes you're talking about as well has got to be intense. I think you're a little bit beyond the, okay, we ran the experiment, we put the data in Dropbox, go ahead and download it, you'll get that in only 18 short years. It seems like there's absolutely a challenge in that. >> That was one of the key points actually in the keynote is that, so a lot of the experiments at CERN have an open data policy where we release our data, and so that's great because we think it's important for open science, but it was always a bit of an issue, like who can actually practically analyze this data for people who don't have a data center? And so one part of the keynote was that we could demonstrate that using Kubernetes and public cloud infrastructure actually becomes possible for people who don't work at CERN to analyze this large-scale scientific data sets. 
>> Yeah, I mean maybe just for our audience, the punchline is rediscovering the Higgs boson in the public cloud. Maybe just give our audience a little bit of taste of that. >> Right, yeah, so basically what we did is, so the Higgs boson was discovered in 2012 by both ATLAS and CMS, and a part of that data, we used open data from CMS and part of that data has now been released publicly, and basically this was a 70-terabyte data set which we, thanks to our Google Cloud partners, could put onto public cloud infrastructure and then we analyzed it on a large-scale Kubernetes cluster, and-- >> The main challenge there was that, like, we publish it and we say you probably need a month to process it, but we had like 20 minutes on the keynote, so we kind of needed a bit larger infrastructure than usual to run it down to five minutes or less. In the end, it all worked out, but that was a bit of a challenge. >> How are you approaching, I guess, making this more accessible to more people? By which I mean, not just other research institutions scattered around the world, but students, individual students, sometimes in emerging economies, where they don't have access to the kinds of resources that many of us take for granted, particularly work for a prestigious research institutions? What are you doing to make this more accessible to high school kids, for example, folks who are just dipping their toes into a world they find fascinating? >> We have entire programs, outreach programs that go to high schools. I've been doing this when I was a student in Germany. We would go to high schools and we would host workshops and people would analyze a lot of this data themselves on their computers. So we would come with a USB stick that have data on them, and they could analyze it. And so part of also the open data strategy from ATLAS is to use that open data for educational purposes. And then there are also programs in emerging countries. >> Lukas and Ricardo, really appreciate you sharing the open data, open science mission that you have with our audience. Thank you so much for joining us. >> Thank you. >> Thank you. >> All right, for Corey Quinn, I'm Stu Miniman. We're in day two of two days live coverage here at KubeCon + CloudNativeCon 2019. Thank you for watching theCUBE. (upbeat music)

Published Date : May 22 2019

SUMMARY :

Brought to you by Red Hat, What piece of, you know, fantastic science and there, we collide them. to most companies, you talk about scale, Yeah, so, one of the main things that we need to do, to what the business needs for you and, you know, and we keep adding, so right now we have, and at the end of all of this we can run our blog but at the end of the day, we have a large machine Just, because all the different dimensions, But the issue we have now is that the accelerator "whatever we needed and everything, and, you know, on the physics side how to go through the best practices, Yeah, so like this keynote we had yesterday so sounds like you were the hipsters and we basically had to write our own tools, is that we don't usually just have our entire code base just all the people have to give us But was there any challenge to you We often say that we don't count how many cores and so it's not like we have a top-down mandate okay, we ran the experiment, we put the data in Dropbox, And so one part of the keynote was that we could demonstrate in the public cloud. and we say you probably need a month to process it, And so part of also the open data strategy Lukas and Ricardo, really appreciate you sharing Thank you for watching theCUBE.


Derek Mathieson, CERN | PentahoWorld 2017


 

>> Announcer: Live from Orlando, Florida, it's theCUBE covering PentahoWorld 2017. Brought to you by Hitachi Vantara. >> Welcome back to theCUBE's live coverage of PentahoWorld brought to you by Hitachi Vantara. I'm your host Rebecca Knight, along with my cohost Dave Vellante. We are joined by Derek Mathieson, he is the group leader at CERN. Welcome, Derek, glad to have you on the show. >> Well, glad to be here, thank you very much. >> So, CERN, which is of course the European Organization for Nuclear Research. And you know we think of it as this place of physicists and engineers working together to solve these problems. And probe the mysteries of the universe but in fact, CERN is a technology organization. >> Absolutely, I mean, I think that's the- CERN has this reputation of being exclusively physics. I mean, it is the world leading particle physics laboratory. But in fact, in the end, yeah, we're an infrastructure organization who provides all the technology, all the science. And all the scientists and engineers come to CERN to do their work. But CERN itself provides the facilities. So, our main focus, in fact, is technology. Computer science, civil engineering, construction. I mean, we built cathedral size concrete structures 400 and 50 feet underground, 17 mile long tunnels. I mean, this is civil engineering in the grand scale. And that's actually one of the major focuses. Is that CERN, although it's a physics organization, one of the difficulties we have as an organization is to explain to people, in fact, what we're looking for when we're recruiting. When we're contacting other universities. It's all about the fact that we're not looking for physicists, we're looking for engineers and technology specialists to come and work at CERN. >> So talk to us about some of the new, exciting projects that you're working on there. >> Oh, I mean, there's a lot going on. Obviously, the reason I'm here today is all about the work that we're doing with Pentaho. So we're, you know, building a new data warehouse. My group's actually responsible for the administrative computing of CERN. So basically running CERN as a business. I mean this is, there's a budget of around about one billion U.S. dollars. Going into CERN every year, in order to do all this physics research. So obviously we have a responsibility to treat, be faithfully to these tax dollars, carefully and you know spend them wisely. So a lot of my work is to make sure that we have the appropriate infrastructure, controls and proper technology there. To make sure that it's used effectively and wisely. >> So paint a picture of that infrastructure for us, if you would. What's it look like if we took a peak under the tent? Well, I mean, it's what quite nice about it is with the technology infrastructure that we have. So we have a huge computer center. There's a hundred thousand CPU's in our computer center. That's mainly used for doing physics but because we have all this infrastructure there, we can use part of it to also run the administration. Which gives us the ability to run a real world class technology stack to actually run the organization. So we have a huge data warehouse. Which gives a very rapid response to the physicists and engineers who actually want to go on and do their work. My job is to make sure that the administration of CERN doesn't get in their way. 
So we want to provide them the facilities so they just get on with their job and all the other things to do with actually running the organization are my problem and the team that works for me. And good examples is that CERN literally sits on the border between France and Switzerland. So we have, you know, we care about things like, there's 80 different customs forms that we have to worry about on a daily basis just as we move materials around the site. So we have such an usual organization but it's unique in the world. And that's what attracts people to work there is all these new challenges that we got. It's really a fantastic place. >> And the view is pleasant I bet. >> Oh yeah. (all giggling) >> Okay, so tell us more about the infrastructure. So you talked about this really fast data warehouse. 100,000 CPUs, is it all sort of on prem? Is it a mix sort of on prem and the Cloud? What's the data warehouse, you know, give us a sense of what that infrastructure is. 'Cause people hear data warehouse, they think you know, kind of old, clunky data warehouse. You're talking about this super high performance. >> Exactly, in fact, that's one of the challenges that we face is. We've got scientists who are used to dealing with high volumes of data with high fixation. Our particle detectors produce around 2 petabytes of data per second. So they're used to dealing with large amount of data. So immediately when they started looking at the administration of the organization of the same high expectations. They want it to be fast, they want it to process the data. Large quantities of data, very quickly indeed and give the answers (snaps) in a split second. So to do that we have to obviously put quite a lot of hardware behind it and also use good technical strength as well. We're quite big users of Oracle at CERN. We have a big Oracle database which is for the principle, where we keep most of our data. And then we use Pentaho on top of that in order to do all the deporting, the analytics, the building the Cube, so all this kind of thing. And their user base is very transient. So there's around fifteen thousand people who're actually working at CERN at any one time. Half of the world's particle physicists work at CERN. >> Rebecca: Wow. >> So, they're coming and going all the time. They don't want to worry about how to get the data. So it has to be there, has to be there right away. Has to be easy to use and easy to understand. These people live and work and breathe particle physics. They don't worry about the budget and the details about how to do all this stuff. This is something where the accountants have to get there. Get it in such a way that it's easy for them to do the right thing and make sure that we stay compliance with the various regulations. And make sure that the organization continues to function as a business while still getting on with our primary mission of particle physics research. >> And that infrastructure is primarily on premise, that correct? >> It's on premise, the vast majority of it. In fact, one of the, we have two main data centers. So there's one physically located at Cern in Geneva. And then there's another one over in the (mumbles) institute, in (snaps) >> The other place. >> The other place. (both laughing) >> Okay. >> Yep. >> And that, presume, because you've got such volumes of data. You can't just be moving that stuff around up into the Cloud. >> Right, in fact yeah, we have a lot of high speed data links between the different data centers in order to. 
We have a copy of quite a lot of the data in fact. The principle physics data is copied, not only at CERN, which is what's called a 2-0 site where we have all the data to start with. But we also copy it to I think it's around about seven different institutes around the world. So they have a first-line copy as well. Altogether we have a network of around a hundred computer centers working for CERN in some way or other. That's part of what we call the LHC computing grids which is (mumbles) a planetary data center in computer infrastructure to do all this processing of the LHC data. >> I'm going to ask you to go back to about the organizational structure. I mean, you described this office situated on the border of France and Switzerland. Where half the world's particle physicists work. What is the culture like? And how do you get- and as you said also the administrations job is to really get out of their way so they can do their thing. What is the culture like there? How do people work together? How do people collaborate? What do you do when there's disagreement? >> I mean this is one of the unique aspects of CERN. Is bringing people together. There's around about 90 different countries represented at CERN. Around about 100 different nationalities, all working on site. It's very much like a university environment. We have a canteen where people will come in. Their always saying that probably most of the physics and most of the science discoveries are happening within the canteen as people meet together from all over the world. We have countries, India, Pakistan, have just joined as associate members. We've got 22 member states. Mainly around Europe but now we have a policy enlargement. So we're actually trying to make the organization even larger. Touching more countries around the world. United States is an observer now within the organization. So they actually participate in the CERN council and they're also major players in some of the large LHC experiments as well. But yeah, on a day to day basis, I'll be sitting in the restaurant and there will be Nobel Prize winners. We have our director general, she will be there as well, having lunch with everyone else. So it's a very much a leveling organization where everyone feels free to speak to each other. And discuss the matters of the day and particle physics. >> So what do you guys talk about? >> (laughs) What's the canteen conversation? >> I think this is the utter geek speak usually. That's the main problem in CERN is that people are passionate about what they do. So they come to CERN, they love what they do, they talk about it all the time. So, I mean, people will be talking about the latest generation of the CPU architecture, GPU programming. How do we do simulations with petabytes of data? This is lunch time conversation. And evening and everything else. >> So you're not talking about the a football game, right? You're talking about this sort of, talking shop mostly right? >> There is a football team, there is a rugby team as well. There's real life as well at CERN but yeah, I mean, most people are there because they're passionate about what they do. >> Obviously you're listening to those conversations you must pick up a lot of it. >> Yeah, I know, I mean, I think it's if you work at Cern and you're at a dinner party, someone laughs, "Oh you work at Cern, tell me all about physics." So you pick up a bit about it of course. 
Everyone can speak a little bit about what we're doing at Cern and I think that's an imperative because we work there. Of course you hear about what's going on and understand a little bit about it. But I would never claim to be a physicist of course. >> Rebecca: You can fake it though. >> I have lunch with physicists, I'm not one myself. >> How 'about Pentaho? You painted the picture of the infrastructure before. Where does Pentaho fit? And how are they adding value? >> We've been using Pentaho now for the last few years. We started, I mean, what really attracted is actually this combination of open-source plus propriety software. We like the core and the open-source nature of it which it very much fits with the values of CERN as well as being an open lab. And sharing everything that we do. So we started, as I say, with Pentaho a few years ago. Now, it's a core component. It's a core strategic component of the administration and also used in other areas as well. So it's also used in some of the more technical infrastructure areas in terms of: how do we actually run the lab? Parts of the infrastructure in terms of monitoring the different parts of the accelerator complex. And even in terms of, you know, the maintenance of the buildings, all of that. So it's really, you know, core within the organization as a core component for us. >> So, CERN is an organization then as- I'll use the word insistent, if you will, on open-source as a component. So that puts pressure on companies like Pentaho to pay attention to the next project. Maybe contribute, maybe not. But it certainly integrate. Score card, how have they done on that? What would you like to see them do better in that regard? And what kind of open-source projects do you- and you may not be able to answer this. But, might your organizations see in the horizon that you want Pentaho to capture? I mean, obviously 8.0, you've heard about, Spark and bringing in Kafka and the like. But maybe you could comment. >> Absolutely, I think this is one of the eighters who's really attracted us was the open-source nature. And certainly Pentaho's movement in that direction particularly, I think, was the integration with Hitachi as well. They're seeing many other projects now being integrated within to that sort of pentacle world. This is something that was interesting to us. Of course because of our Cloud based infrastructure. The idea of scaling up and scaling out. And they're going with the open-source projects to particular and the patchy projects. Which was really interesting to us as well. Something that we've been working on a bit ourselves. And now to hear that Pentaho was doing that as well. That was great, a good piece of news for me because it was something that we have been struggling with is basically spreading out. We've got fifteen thousand users. We want to have a dynamic infrastructure where we can actually provision more service where necessary in order be able to take load when we need it. But at the same time we don't want to waste the resources when they're off doing something else. >> Over the course of last decade, let's say, has there ever been a tendency for- 'cause you've got so many alpha geeks running around. To say, "Hey, I can take these open-source components and kind of do it myself." >> Derek: Yeah. >> "I don't need the Pentaho load bouncer, I got yarn to negotiate my resources. Look what I built." And so, how do you manage that? >> No, I mean, you're absolutely right. 
It's a problem here there's always the risk of the naught of engineer syndrome where, "I could do it better." And we have to pressure against that. But, I mean, I think the important of the issue is take the bigger picture. If it's already done well, we don't need to do it again. Build on top of it, make something better on top of something that already exists. And that's the thing, that's the message that we can give to any of the engineers working at CERN. Is, "You can do so much more if you already use the infrastructure that's already solid." And that's part of this, you know, reuse, of course. Open-source software allows us to build on things which are already solid. We don't need to make another one of them. We'll make something on top of it. That's a primary message that we try to give. >> So here we are at Pentaho World and you're with a bunch of other practitioners. Sharing best practices, talking about how you use the product, learning from them too. What are some of the take aways? And how much are you actually talking to them versus talking to the Pentaho product people? >> We did a presentation yesterday. The focus of our presentation was managing Pentaho. So, one of the things that we've been using now for a number of years is you have to have an infrastructure to be able to actually take care of all the different artifacts, all the different reports. We have many, many different user who want to be able to use Pentaho at the same time creating their own artifacts. I mean we have to have some way of managing to actually manage all this landscape. Although Pentaho has got some tools necessary, that was one of the areas that we felt we could add some value in there. So we've been building on top of the existing Pentaho APIs. Building an infrastructure to make it easier to support for other people. And what was quite nice is we were speaking to some of the other attendees. And that's exactly the kind of thing they've been worrying about as well. And there was even some presentations of people doing a similar approach in their own organizations. On how they were actually trying to build some kind of architecture on top of Pentaho just to manage the whole thing. When you have hundred of reports and hundred of artifacts and very complicated data warehouse cubes, you need something on top of that to actually just manage the whole thing. And that's something that we've been focused on. And I see other people are doing the same kind of thing. So I can imagine that Pentaho will be taking note of this and probable incorporating some of the ideas. >> It's sending a loud and clear message to Pentaho, yes absolutely. >> How about the event? You've been to at least two or that I know of. I don't know if you were at the original. >> I've been to three altogether. >> Okay, so you've been to, I think all of them, right? >> I could have been all of them, yeah. >> I think the first one was 14, I think, I'm pretty sure. Things you've taken away? You know, interesting conversations? >> I think it's the main reason we come in. It's a long way for us to come all the way from Geneva to come here. It's really important for us to touch base with other people using the product. It is an open community, people do like to talk to each other about, you know the new things that are happening within the Pentaho community. And I think face to face contact, in the end, is very hard to beat. And we're coming to an event like this you actually get the opportunity to speak to people over lunch. 
Or in the evening events you can talk to them and actually find out what it's really like to use Pentaho. >> Great, well thank you so much Derek for coming on theCUBE. >> Thank you very much. >> I'm Rebecca Knight for Dave Vellante. We well have more from Pentaho World just after this.
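Derek's point about needing something on top of Pentaho just to keep hundreds of reports and artifacts manageable is easy to picture with a small housekeeping script. The sketch below is only an illustration of that idea: it walks an exported repository tree on disk and flags reports with missing or stale ownership metadata. The directory layout, the .meta.json sidecar files, and the staleness rule are all assumptions made for the example; CERN's real tooling is built on the Pentaho repository APIs themselves.

```python
# Sketch: inventory report artifacts from an exported Pentaho repository tree.
# Hypothetical layout: reports exported as .prpt files, each optionally paired
# with a <name>.meta.json sidecar recording an owner and a last-review date.
# This stands in for tooling built on the actual Pentaho repository APIs.
import json
from datetime import date, datetime
from pathlib import Path

EXPORT_ROOT = Path("/exports/pentaho-repo")   # hypothetical export location
STALE_AFTER_DAYS = 365

def load_metadata(artifact: Path) -> dict:
    sidecar = artifact.with_suffix(".meta.json")
    if sidecar.exists():
        return json.loads(sidecar.read_text())
    return {}

def audit(root: Path) -> None:
    for artifact in sorted(root.rglob("*.prpt")):
        meta = load_metadata(artifact)
        problems = []
        if not meta.get("owner"):
            problems.append("no owner recorded")
        reviewed = meta.get("last_reviewed")
        if reviewed:
            age = (date.today() - datetime.fromisoformat(reviewed).date()).days
            if age > STALE_AFTER_DAYS:
                problems.append(f"not reviewed for {age} days")
        else:
            problems.append("never reviewed")
        if problems:
            print(f"{artifact.relative_to(root)}: {', '.join(problems)}")

if __name__ == "__main__":
    audit(EXPORT_ROOT)
```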

Published Date : Oct 27 2017

SUMMARY :

Derek Mathieson, group leader at CERN, joins Rebecca Knight and Dave Vellante at PentahoWorld 2017 in Orlando to explain that CERN, best known for particle physics, is above all a technology and infrastructure organization. His group runs CERN's administrative computing: a budget of around one billion dollars a year, roughly fifteen thousand transient users, and a large Oracle-based data warehouse with Pentaho on top for reporting and analytics. He also describes the open-source fit with CERN's values and the tooling his team built on the Pentaho APIs to manage hundreds of reports and artifacts, the subject of his conference presentation.


Marcel Hild, Red Hat & Kenneth Hoste, Ghent University | Kubecon + Cloudnativecon Europe 2022


 

(upbeat music) >> Announcer: theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome to Valencia, Spain, in KubeCon CloudNativeCon Europe 2022. I'm your host Keith Townsend, along with Paul Gillon. And we're going to talk to some amazing folks. But first Paul, do you remember your college days? >> Vaguely. (Keith laughing) A lot of them are lost. >> I think a lot of mine are lost as well. Well, not really, I got my degree as an adult, so they're not that far past. I can remember 'cause I have the student debt to prove it. (both laughing) Along with us today is Kenneth Hoste, systems administrator at Ghent University, and Marcel Hild, senior manager software engineering at Red Hat. You're working in office of the CTO? >> That's absolutely correct, yes >> So first off, I'm going to start off with you Kenneth. Tell us a little bit about the research that the university does. Like what's the end result? >> Oh, wow, that's a good question. So the research we do at university and again, is very broad. We have bioinformaticians, physicists, people looking at financial data, all kinds of stuff. And the end result can be very varied as well. Very often it's research papers, or spinoffs from the university. Yeah, depending on the domain I would say, it depends a lot on. >> So that sounds like the perfect environment for cloud native. Like the infrastructure that's completely flexible, that researchers can come and have a standard way of interacting, each team just use it's resources as they would, the Navana for cloud native. >> Yeah. >> But somehow, I'm going to guess HPC isn't quite there yet. >> Yeah, not really, no. So, HPC is a bit, let's say slow into adopting new technologies. And we're definitely seeing some impact from cloud, especially things like containers and Kubernetes, or we're starting to hear these things in HPC community as well. But I haven't seen a lot of HPC clusters who are really fully cloud native. Not yet at least. Maybe this is coming. And if I'm walking around here at KubeCon, I can definitely, I'm being convinced that it's coming. So whether we like it or not we're probably going to have to start worrying about stuff like this. But we're still, let's say, the most prominent technologies of things like NPI, which has been there for 20, 30 years. The Fortran programming language is still the main language, if you're looking at compute time being spent on supercomputers, over 1/2 of the time spent is in Fortran code essentially. >> Keith: Wow. >> So either the application itself where the simulations are being done is implemented in Fortran, or the libraries that we are talking to from Python for example, for doing heavy duty computations, that backend library is implemented in Fortran. So if you take all of that into account, easily over 1/2 of the time is spent in Fortran code. >> So is this because the libraries don't migrate easily to, distributed to that environment? >> Well, it's multiple things. So first of all, Fortran is very well suited for implementing these type of things. >> Paul: Right. >> We haven't really seen a better alternative maybe. And also it'll be a huge effort to re-implement that same functionality in a newer language. So, the use case has to be very convincing, there has to be a very good reason why you would move away from Fortran. And, at least the HPC community hasn't seen that reason yet. 
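The MPI (Message Passing Interface) model Kenneth refers to is easier to picture with a tiny example. The sketch below uses mpi4py, the Python bindings over the same MPI libraries that Fortran and C++ codes link against, to estimate pi with each rank integrating its own slice of the work. A real HPC application would be far larger and typically written in Fortran or C++, so treat this purely as an illustration of the programming model.

```python
# Sketch: the MPI single-program, multiple-data model in miniature.
# Run with e.g.:  mpirun -n 8 python pi_mpi.py
# Each rank integrates its own slice of the domain, then rank 0 sums the pieces.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 10_000_000                  # total integration steps, split across ranks
h = 1.0 / N
local_sum = 0.0
for i in range(rank, N, size):  # round-robin distribution of the work
    x = (i + 0.5) * h
    local_sum += 4.0 / (1.0 + x * x)

pi = comm.reduce(local_sum * h, op=MPI.SUM, root=0)

if rank == 0:
    print(f"pi ~= {pi:.10f} computed on {size} ranks")
```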
>> So in theory, and right now we're talking about the theory and then what it takes to get to the future. In theory, I can take that Fortran code put it in a compiler that runs in a container? >> Yeah, of course, yeah. >> Why isn't it that simple? >> I guess because traditionally HPC is very slow at adopting new stuff. So, I'm not saying there isn't a reason that we should start looking at these things. Flexibility is a very important one. For a lot of researchers, their compute needs are very picky. So they're doing research, they have an idea, they want you to run lots of simulations, get the results, but then they're silent for a long time writing the paper, or thinking about how to, what they can learn from the results. So there's lots of peaks, and that's a very good fit for a cloud environment. I guess at the scale of university you have enough diversity end users that all those peaks never fall at the same time. So if you have your big own infrastructure you can still fill it up quite easily and keep your users happy. But this busty thing, I guess we're seeing that more and more or so. >> So Marcel, talk to us about, Red Hat needing to service these types of end users. That it can be on both ends I'd imagine that you have some people still in writing in Fortran, you have some people that's asking you for objects based storage. Where's Fortran, I'm sorry, not Fortran, but where is Red Hat in providing the underlay and the capabilities for the HPC and AI community? >> Yeah. So, I think if you look at the user base that we're looking at, it's on this spectrum from development to production. So putting AI workloads into production, it's an interesting challenge but it's easier to solve, and it has been solved to some extent, than the development cycle. So what we're looking at in Kenneth's domain it's more like the end user, the data scientist, developing code, and doing these experiments. Putting them into production is that's where containers live and thrive. You can containerize your model, you containerize your workload, you deploy it into your OpenShift Kubernetes cluster, done, you monitor it, done. So the software developments and the SRE, the ops part, done, but how do I get the data scientist into this cloud native age where he's not developing on his laptop or on a machine, where he SSH into and then does some stuff there. And then some system admin comes and needs to tweak it because it's running out of memory or whatnot. But how do we take him and make him, well, and provide him an environment that is good enough to work in, in the browser, and then with IDE, where the workload of doing the computation and the experimentation is repeatable, so that the environment is always the same, it's reliable, so it's always up and running. It doesn't consume resources, although it's up and running. Where it's, where the supply chain and the configuration of... And the, well, the modules that are brought into the system are also reliable. So all these problems that we solved in the traditional software development world, now have to transition into the data science and HPC world, where the problems are similar, but yeah, it's different sets. It's more or less, also a huge educational problem and transitioning the tools over into that is something... >> Well, is this mostly a technical issue or is this a cultural issue? I mean, are HPC workloads that different from more conventional OLTP workloads that they would not adapt well to a distributed containerized environment? 
>> I think it's both. So, on one hand it's the cultural issue because you have two different communities, everybody is reinventing the wheel, everybody is some sort of siloed. So they think, okay, what we've done for 30 years now we, there's no need to change it. And they, so it's, that's what thrives and here at KubeCon where you have different communities coming together, okay, this is how you solved the problem, maybe this applies also to our problem. But it's also the, well, the tooling, which is bound to a machine, which is bound to an HPC computer, which is architecturally different than a distributed environment where you would treat your containers as kettle, and as something that you can replace, right? And the HPC community usually builds up huge machines, and these are like the gray machines. So it's also technical bit of moving it to this age. >> So the massively parallel nature of HPC workloads you're saying Kubernetes has not yet been adapted to that? >> Well, I think that parallelism works great. It's just a matter of moving that out from an HPC computer into the scale out factor of a Kubernetes cloud that elastically scales out. Whereas the traditional HPC computer, I think, and Kenneth can correct me here is, more like, I have this massive computer with 1 million cores or whatnot, and now use it. And I can use my time slice, and book my time slice there. Whereas this a Kubernetes example the concept is more like, I have 1000 cores and I declare something into it and scale it up and down based on the needs. >> So, Kenneth, this is where you talked about the culture part of the changes that need to be happening. And quite frankly, the computer is a tool, it's a tool to get to the answer. And if that tool is working, if I have a 1000 cores on a single HPC thing, and you're telling me, well, I can't get to a system with 2000 cores. And if you containerized your process and move it over then maybe I'll get to the answer 50% faster maybe I'm not that... Someone has to make that decision. How important is it to get people involved in these types of communities from a researcher? 'Cause research is very tight-knit community to have these conversations and help that see move happen. >> I think it's very important to that community should, let's say, the cloud community, HPC research community, they should be talking a lot more, there should be way more cross pollination than there is today. I'm actually, I'm happy that I've seen HPC mentioned at booths and talks quite often here at KubeCon, I wasn't really expecting that. And I'm not sure, it's my first KubeCon, so I don't know, but I think that's kind of new, it's pretty recent. If you're going to the HPC community conferences there containers have been there for a couple of years now, something like Kubernetes is still a bit new. But just this morning there was a keynote by a guy from CERN, who was explaining, they're basically slowly moving towards Kubernetes even for their HPC clusters as well. And he's seeing that as the future because all the flexibility it gives you and you can basically hide all that from the end user, from the researcher. They don't really have to know that they're running on top of Kubernetes. They shouldn't care. Like you said, to them it's just a tool, and they care about if the tool works, they can get their answers and that's what they want to do. How that's actually being done in the background they don't really care. 
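A minimal sketch of the "declare it and let it scale up and down" idea Marcel contrasts with booking a fixed time slice, again using the Kubernetes Python client: a HorizontalPodAutoscaler that grows and shrinks a pool of worker pods around a CPU target. The deployment name, namespace, and bounds are hypothetical.

```python
# Sketch: declaring elastic capacity instead of reserving a fixed time slice.
# A HorizontalPodAutoscaler asks the cluster to keep average CPU near a target
# by adding or removing worker pods; names and bounds here are placeholders.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="analysis-workers"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="analysis-workers"
        ),
        min_replicas=2,                        # idle floor when nobody is computing
        max_replicas=500,                      # burst ceiling during a peak
        target_cpu_utilization_percentage=70,  # add pods when average CPU exceeds this
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="hpc-batch", body=hpa
)
```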
>> So talk to me about the AI side of the equation, because when I talk to people doing AI, they're on the other end of the spectrum. What are some of the benefits they're seeing from containerization? >> I think it's the reproducibility of experiments. So, and data scientists are, they're data scientists and they do research. So they care about their experiment. And maybe they also care about putting the model into production. But, I think from a geeky perspective they are more interested in finding the next model, finding the next solution. So they do an experiment, and they're done with it, and then maybe it's going to production. So how do I repeat that experiment in a year from now, so that I can build on top of it? And a container I think is the best solution to wrap something with its dependency, like freeze it, maybe even with the data, store it away, and then come to it back later and redo the experiment or share the experiment with some of my fellow researchers, so that they don't have to go through the process of setting up an equivalent environment on their machines, be it their laptop, via their cloud environment. So you go to the internet, download something doesn't work, container works. >> Well, you said something that really intrigues me you know in concept, I can have a, let's say a one terabyte data set, have a experiment associated with that. Take a snapshot of that somehow, I don't know how, take a snapshot of that and then share it with the rest of the community and then continue my work. >> Marcel: Yeah. >> And then we can stop back and compare notes. Where are we at in a maturity scale? Like, what are some of the pitfalls or challenges customers should be looking out for? >> I think you actually said it right there, how do I snapshot a terabyte of data? It's, that's... >> It's a terabyte of data. (both conversing) >> It's a bit of a challenge. And if you snapshot it, you have two terabytes of data or you just snapshot the, like and get you to do a, okay, this is currently where we're at. So that's why the technology is evolving. How do we do source control management for data? How do we license data? How do we make sure that the data is unbiased, et cetera? So that's going more into the AI side of things. But at dealing with data in a declarative way in a containerized way, I think that's where currently a lot of innovation is happening. >> What do you mean by dealing with data in a declarative way? >> If I'm saying I run this experiment based on this data set and I'm running this other experiment based on this other data set, and I as the researcher don't care where the data is stored, I care that the data is accessible. And so I might declare, this is the process that I put on my data, like a data processing pipeline. These are the steps that it's going through. And eventually it will have gone through this process and I can work with my data. Pretty much like applying the concept of pipelines through data. Like you have these data pipelines and then now you have cube flow pipelines as one solution to apply the pipeline concept, to well, managing your data. >> Given the stateless nature of containers, is that an impediment to HPC adoption because of the very large data sets that are typically involved? >> I think it is if you have terabytes of data. Just, you have to get it to the place where the computation will happen, right? And just uploading that into the cloud is already a challenge. 
If you have the data sitting there on a supercomputer and maybe it was sitting there for two years, you probably don't care. And typically a lot of universities the researchers don't necessarily pay for the compute time they use. Like, this is also... At least in Ghent that's the case, it's centrally funded, which means, the researchers don't have to worry about the cost, they just get access to the supercomputer. If they need two terabytes of data, they get that space and they can park it on the system for years, no problem. If they need 200 terabytes of data, that's absolutely fine. >> But the university cares about the cost? >> The university cares about the cost, but they want to enable the researchers to do the research that they want to do. >> Right. >> And we always tell researchers don't feel constrained about things like compute power, storage space. If you're doing smaller research, because you're feeling constrained, you have to tell us, and we will just expand our storage system and buy a new cluster. >> Paul: Wonderful. >> So you, to enable your research. >> It's a nice environment to be in. I think this might be a Jevons paradox problem, you give researchers this capability you might, you're going to see some amazing things. Well, now the people are snapshoting, one, two, three, four, five, different versions of a one terabytes of data. It's a good problem to have, and I hope to have you back on theCUBE, talking about how Red Hat and Ghent have solved those problems. Thank you so much for joining theCUBE. From Valencia, Spain, I'm Keith Townsend along with Paul Gillon. And you're watching theCUBE, the leader in high tech coverage. (upbeat music)
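The pipeline idea Marcel mentions above, wrapping each experiment step in a container and declaring the whole workflow so it can be re-run later, is roughly what Kubeflow Pipelines provides. Below is a minimal sketch using the KFP v2 Python SDK; the two steps and their images are placeholder assumptions, not a CERN or Red Hat workflow.

```python
# Sketch: a declarative, containerized experiment pipeline with Kubeflow Pipelines (KFP v2).
# Each component runs in its own container, so the whole experiment is repeatable
# and shareable; compile it once and submit the YAML to any KFP installation.
from kfp import dsl, compiler

@dsl.component(base_image="python:3.11")
def prepare_data(source_uri: str) -> str:
    # Placeholder: a real pipeline would pull and clean the raw data set here.
    print(f"preparing data from {source_uri}")
    return source_uri + "/prepared"

@dsl.component(base_image="python:3.11")
def train_model(dataset_uri: str, epochs: int) -> str:
    # Placeholder: real training code (and its GPU-enabled image) would go here.
    print(f"training for {epochs} epochs on {dataset_uri}")
    return "model-v1"

@dsl.pipeline(name="reproducible-experiment")
def experiment(source_uri: str = "s3://example-bucket/raw", epochs: int = 5):
    data = prepare_data(source_uri=source_uri)
    train_model(dataset_uri=data.output, epochs=epochs)

if __name__ == "__main__":
    compiler.Compiler().compile(experiment, "experiment.yaml")
```

Compiling the pipeline produces a self-contained definition that can be submitted to any Kubeflow Pipelines installation, which is what makes the experiment shareable and repeatable a year later.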

Published Date : May 19 2022

SUMMARY :

Keith Townsend and Paul Gillon talk with Kenneth Hoste, systems administrator at Ghent University, and Marcel Hild of Red Hat's office of the CTO at KubeCon + CloudNativeCon Europe 2022 in Valencia about why HPC has been slow to adopt cloud native technology: Fortran and MPI still dominate supercomputing workloads, while containers and Kubernetes are only starting to appear in HPC clusters. They discuss reproducible machine-learning experiments, declarative data pipelines, the challenge of moving terabyte-scale data sets, and how a centrally funded university infrastructure keeps researchers from worrying about compute and storage costs.


Jasmine James and Ricardo Rocha | KubeCon + CloudNativeCon EU 2022


 

>>Welcome to theCUBE's coverage of CNCF's KubeCon + CloudNativeCon EU in Valencia, Spain. I'm John Furrier. This is a preview interview with the co-chairs: we have Jasmine James, senior engineering manager of developer experience and KubeCon + CloudNativeCon EU co-chair, and Ricardo Rocha, computing engineer at CERN and KubeCon co-chair as well for EU. Great to have you both on, great to see you, both of you. >>Happy to be here. >>Thanks for having us. >>Cube alumni. So, you know, KubeCon just continues to roll and get bigger and bigger, um, and watching all the end user action, watching the corporations and enterprises come in, and just all the open source projects being greenlit, and just all the developer onboarding, has been amazing. So it should be a great EU event in Valencia, great venue. A lot of people I'm talking to are very excited, so let's get into it. As co-chairs, take us through kind of the upcoming schedule at a very high level. Then I wanna dig into, uh, some of the new insights into selection and programming that you guys had to go through. I know every year it's hard. So let's start with the overall upcoming schedule for KubeCon. >>Yeah, so I'll dive into that. So the schedule represents quite a diverse set of topics. I would say, um, I personally am a fan of those, you know, more personal talks from an end user perspective. There's also a lot of representation from a community perspective and how folks can get involved. Um, as most of you know, the types of tracks have evolved over the years as well. So we now have a community track and a student track. So it's gonna be very exciting to hear content within those tracks, um, in Valencia. So a very exciting schedule. Um, yeah. >>And just real quick for the folks watching, it's virtual and physical, it's a hybrid event, May 4th through seventh. Ricardo, what's your take on the schedule? Uh, how do you see it breaking down from a high level standpoint? >>Yeah, so, um, I'm pretty excited. Um, I think the fact that this is hybrid will help build on the experiences we had, uh, during the pandemic times to give a better experience for people not making it, uh, to Valencia. I'm pretty excited also about the number of co-located events. So the two days before the conference will include, uh, a large number of co-located events, focusing on security, uh, and some new stuff for, like, batch and HPC workloads that I'm pretty close to as well. Uh, and then some really good consolidation in some tracks, like business value, which I think will be quite, quite interesting as well. >>So you mentioned this, it's gonna be like watch parties, people gonna be creating kind of satellite events? Is that what you're referring to, uh, in terms of the physical space? There's gonna be an event, obviously, um, and what's going on around, outside the event, either online or as part of the program? >>So, yeah, uh, all the sessions, uh, from the co-located events will be available virtually as well. I don't know if people will actually be setting up parties everywhere. <laugh> I'm sure some people will. Yeah. >>There'll definitely be >>Some. And then for the conference itself, there will be dedicated rooms for the virtual talks, uh, where people can just join in and sit for a while and watch the virtual talks, and then go back to the in-person ones. >>Yeah, it's always a good event. Uh, Jasmine, we talked about this last time, and Ricardo, we always get into the hood as well.
What's the vibe on the, the, the, the programming. And honestly, people wanna get, give talks. There's a virtual component, which opens up more aperture, uh, for more community and more actions as, as Ricardo pointed out. What's what was the process this year? Because we're seeing a lot of big trends emerge, obviously securities front and center, um, end user projects are growing data engineering is a new persona. That's just really emerged out of kind of the growth of data and the role of data that it plays and containers. And, and with Kubernetes, just a lot of action. What's the, what was it like this year in, in the selection process for the program? >>Yeah, I mean, the selection process is always lots of fun for the co-chairs. Um, you shout out to program committee, track chairs, you put in a lot of great work and reviewing talks and, and it's just a very, very thorough process. So kudos to all of us who are getting through it for this year. I think that lots of things emerge, but I still feel like security is top of mind for a lot of folks, like security is really has provided. One of the biggest, um, submissions is from, from a quantity perspective, there are tons of talks submitted for security track, and that just kind of speaks for itself, right? This is something that the cloud native community cares about, and there's still a lot of innovation and people wanna voice what they're doing and share it. >>Ricardo, what's your take, we've had a lot of chats around not only some of the hardcore tech, but some of the new waves that are emerging out of the growth, the mature maturization of, of, of the segment. What are you seeing, uh, as terms of like the, the key things that came out during the, the process? >>Yeah, exactly. So I think I would highlight something that Jasmine said, which is the, the emergency emergence of some new tracks as well. Uh, she mentioned the student track, but also we added a research track, which is actually the first time we'll have it. So I'm pretty excited about that. Of course, uh, then for the trends, clearly security observability are, uh, massive tracks for app dev operations, uh, extending Kubernetes had also a lot of submissions. Um, I think the, the main things I saw that, uh, kind of, uh, gain a bit of more consistency is the part for the business value. And, uh, the, the, the fact that people are now looking more at the second step, like managing cloud costs, uh, how to optimize, uh, spot usage and, um, usage of GPUs for machine learning, things like this. So I'm pretty excited. And all these hybrid deployments also is something that keeps coming back. So those were, are the ones that, uh, I, I think came out from, from, from the submission at this time, >>You know, it's interesting as the growth comes in, you see these cool new things happen, but there are also signs of problems that need to be solved to create opportunities. Jasmine, you mentioned security. Um, there's a lot of big trends, scale Ricardo kind of hinting at the scale piece of it, but there's all this now new things, the security posture changes, uh, as you shift left, it's not, it's not, it's not over when you shift left in security in the pipeline in there, but it's, there's audits. There's the size of, uh, the security elements, uh, there's bill of materials. 
Now, people who got supply chains, these are huge conversations right now in the industry, supply chain security, um, scale data, uh, optimization management, um, notifications, all this is built in, built into a whole nother level. What do you guys see in the key trends in the cloud native ecosystem? >>I, I would say that a lot of the key trends, like you said, it, right, these things are not going anywhere. It's actually coming to a point of maturation. Um, I see more of a focus on how consuming, how, how companies go about consuming these different capabilities. What is that experience like? There's a talk that's gonna be offered, um, as a keynote, um, just about that security and leveraging developers to scale security within your environment. And not only is it a tool problem, it's a mindset thing that you have to be able to get over and partner bridge gaps between teams in order to make this, um, a reality within, within, um, people, within certain organizations. So I see the experience part of it, um, coming a big, a big thing. Um, there's multiple talks about that. >>Ricardo, what's your take on these trends? Cause I look at the, the, the paragraph of the projects now it's like this big used to be like a couple sentences. Now you got more projects coming on, you got the rookies in there and you got the, the veterans, the veteran projects in there. So this speaks volumes to kind of things like notaries new, right? So this is cool. Wait, what does that mean? Okay. Security auditing all this is happening. What are the, what are the big trends that you're excited about that you see that people are gonna be digging in, in, in the pro in, in the event? >>Yeah, I think we, we, we talked about supply chain just before. I think that's, that's a big one. We, we saw a, a keynote back in north America already introducing this, and we saw a lot of consolidation happening now in projects, but also companies supporting this project. Um, I, I'm also quite interested, interested in the evolution of Kubernetes in the sense that it's not just for, what was it, it was traditionally used for like traditional it services and scaling. We start seeing, there will be a very cool keynote from, from deploying, uh, Kubernetes at the edge, but really at the edge with the lower orbit satellites running ES in basically, uh, space. So those things I think are, are, are very cool. Like we start seeing really a lot of consolidation, but also people looking at Kubernetes for, for pretty crazy things, which is very exciting. >>Yeah. You mention, you mentioned space that really takes us to a whole edge, another level of edge thinking, um, you know, I've had many conversations around how do you do break fixing space with some folks in, in the space industry, in, in public sector, software is key in all this. And again, back to open source, open source has to be secured. It has to be, be able to managed effectively. It needs to be optimized into the new workflows space is one of them, you know, you see in, um, 5g edge is huge, uh, with new kind of apps that are being built there. So open source plays a big role in all this. So the, the question I wanna ask you guys is as open source continues to grow and it's growing, we're seeing startups emerge with the playbook of you. You play an open source or you actually create a project and then you get funding behind it because I know at least three or four VCs here in Silicon valley that look at the projects and say, they're looking for deals. 
And they're taking it to a whole other level. Can you guys share your insights on how the ecosystem is evolving with entrepreneurship and startups? >>I guess I'll start. I think it's such a healthy thing to have this much innovation occurring. It's really a testament to how the cloud native community nurtures and cultivates these ideas and provides a great framework for them to develop over time, going from the sandbox to incubating to graduating. Having the support of a solid framework, I think, is a lot of the reason why so many of these projects grow so quickly and reach these high levels of adoption, so it's a really fantastic thing to see. I think VCs see an opportunity, and there's a lot of great innovation that can be operationalized, scaled, and applied to a lot of industries, so I feel like it's a very healthy thing. It also creates a lot of opportunities around something I'm passionate about, which is people getting involved in open source as a step into the world of tech. All of these projects coming about provide an opportunity for folks to get involved in a particular component they're interested in and then grow their career in open source. So a really great thing, in my opinion. >>And you mentioned the student track; by the way, I wanted to point that out. That's huge. That's going to be a lot of people in computer science programs or self-learning. The ability to get up to speed from a development standpoint, as a coder: you can be a formal computer science student or just a practitioner coding. Data's everywhere, so data engineering, coding. Ricardo, this is huge for students, and then just every sector's opening up. The color-coded calendar is larger than ever before. >>Yeah, I think the diversity of the usage and of the communities is something that is really important, and it's still growing, so I think this will not stop. I'm pretty excited to see how we'll handle this growth, because as you mentioned, everything is increasing in numbers: the number of projects, the number of startups around these projects. One thing that I'm particularly interested in as an end user is to understand how to help other end users that are jumping in, not only the developers or the people wanting to support these projects, but also the end users. How do they choose their stack? How should it look for their use cases? Much more than just selecting individual projects, it's understanding how they work together. So I think this is a challenge for the next couple of years. >>Yeah. Roll your own, building blocks, whatever you want to call it, you're starting to see people build their own stacks. And that's not a bad thing. It might be a feature, not a bug. >>Yeah, I would agree. I think it's something we have to work on together to help, especially people starting in the ecosystem, but also the experienced ones that start looking at other use cases as well. >>Okay, Jasmine, we talked about this last time: you've got to pick a favorite child in the agenda. What's your favorite session?
And you've got to pick one, or three, or maybe a handful. As you look through this year, what's the theme? People like you can kind of sense what's happening when you look at the agenda. Obviously observability is in there, all this great stuff is in there, but what's your favorite project or topic this year that you're jazzed about? >>For me, I'd say there are such diverse topics being presented, both on the keynote stage and throughout the various tracks. I'll just reference the talk that I sort of alluded to earlier about leveraging developers to scale Kubernetes. It's a talk given by Red Hat on the keynote stage. The abstract spoke to me because it talks about bridging two different roles together and scaling what we all know to be so important within the cloud native space: security and Kubernetes. So it's something that's very real for me in my current role and previous roles, and I think that's the one that spoke to me. >>Awesome. Ricardo, what's your favorite this year? If you had to put a little gold star on something you're interested in, what would it be? >>I think I hinted at it just before: I'm kind of a space enthusiast, so this whole idea of running Kubernetes in space makes me very excited, and I'm really looking forward to that one. But as an end user I'm also very interested in talks like the one Mercedes will be doing, which is about the transition from a more traditional company to this more modern world of cloud native, and I'm quite interested to hear what their experience has been like in the last few years. >>Well, you guys do a great job. I love chatting with you, and I love CNCF, having followed it from the beginning. We were there when it was created and watched it grow from an insider perspective: the hyperscalers, the people who were really eating glass and building scale, the SREs. Now you have the SRE concept going global mainstream, enterprises and end users contributing and participating, connecting those two worlds. Jasmine, as you said, you're starting to see the scale piece become huge. You mentioned it a little bit earlier: the SRE role was specific to servers and cloud, and you're seeing that kind of role needed for this cloud native layer. We're seeing it with data engineering. It's not for the faint of heart; it may not be a persona with zillions of people, but it scales. It's like an SRE role, and you're seeing it with monitoring and with containers and Kubernetes, where it's got to get easier and scale. How do you guys see that? Do you see this kind of new scale role emerging in the community, and what is this trend? Or maybe I'm misrepresenting it, or sensing it wrong, but what do you think about the scale piece? How is that falling into place? >>Yeah, I think as there's more adoption, more saturation of cloud native technologies within any environment, most companies realize that you have to have that represented within the role that is managing it, if you want it to be reliable. So I think a lot of roles are adopting those behaviors.
They need to be able to sustain this within their environment and to learn as they start to implement these things, so I see that as something that just happens. We saw it with DevOps, right? Engineers were starting to adopt working on the systems versus just working on software, so it's sort of encompassing all the things. We're seeing a shift in the role and the behaviors within it in order to maintain these cloud native services. >>Ricardo, what's your take? We've been seeing engineers get to the front lines more and more. You mentioned business value as one of the tracks and focus topics this year. It's happening: engineers and developers are getting on the front lines, because as you move up that stack, whether it's a headless system for retail or deploying something in another sector, they've got to be on the front lines. If you're going to be doing machine learning and have data, you've got to have domain skills about what the business is, right? >>Yeah, I agree very much with what Jasmine said, and if we add to this the business value and this opportunistic usage of all types of resources that can come from basically anywhere these days, I think this is really becoming a real role: understanding how to best use all of this and make the best of all these available resources. When we're talking about CPUs, it's already important. If we start talking about GPUs, which are more scarce, or some sort of specialized accelerators, then it really becomes something where you need people who know where to go and fish for those, because you can't just build your own data center and scale that anymore. So you really need to understand what's out there. >>Applications have got to have the security posture nailed down. They've got to have automation built in. You've got to have the observability, and you've got to have the business value. It sounds like a mature industry finally developing here. It's happening. Good job, guys. Thanks for coming on theCUBE, really appreciate it. >>Thank you. Thank you for having us. >>And we'll see theCUBE at KubeCon + CloudNativeCon, May 16th through the 20th in Valencia, Spain. TheCUBE will be there, and we'll have some online coverage as well. Look for the virtual coverage from CNCF. TheCUBE will bring all the action. I'm John Furrier, your host. See you in Spain, and see you on the 16th.

Published Date : May 10 2022

SUMMARY :

Jasmine James and Ricardo Rocha join John Furrier to preview KubeCon + CloudNativeCon Europe 2022 in Valencia, Spain, May 16th through the 20th. They discuss the co-located events and watch parties in the two days before the conference, the new student and research tracks, the work of the program committee and track chairs, the growth of the project ecosystem and the startups around it, the emergence of SRE-style scale roles, and the sessions they are most looking forward to, from leveraging developers to scale Kubernetes securely, to running Kubernetes in space, to Mercedes' transition to cloud native.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Ricardo | PERSON | 0.99+
Jasmine | PERSON | 0.99+
CERN | ORGANIZATION | 0.99+
Spain | LOCATION | 0.99+
May 4th | DATE | 0.99+
Jasmine James | PERSON | 0.99+
Coon | ORGANIZATION | 0.99+
Valencia | LOCATION | 0.99+
Jasmine James | PERSON | 0.99+
Ricardo Rocha | PERSON | 0.99+
Mercedes | ORGANIZATION | 0.99+
north America | LOCATION | 0.99+
both | QUANTITY | 0.99+
May 16th | DATE | 0.99+
second step | QUANTITY | 0.99+
Valencia Spain | LOCATION | 0.99+
Silicon valley | LOCATION | 0.99+
John fur | PERSON | 0.99+
two | QUANTITY | 0.98+
this year | DATE | 0.98+
three | QUANTITY | 0.98+
20th | DATE | 0.98+
Vale Spain | LOCATION | 0.97+
CloudNativeCon | EVENT | 0.97+
one | QUANTITY | 0.96+
ricotta Rocher | PERSON | 0.95+
CNCF | ORGANIZATION | 0.95+
zillions of people | QUANTITY | 0.94+
KubeCon | EVENT | 0.94+
four VCs | QUANTITY | 0.93+
one thing | QUANTITY | 0.93+
first time | QUANTITY | 0.92+
Kubernetes | TITLE | 0.92+
two worlds | QUANTITY | 0.91+
John furrier | PERSON | 0.91+
16th | DATE | 0.9+
C CFS | ORGANIZATION | 0.89+
EU | LOCATION | 0.89+
Monday | DATE | 0.87+
One | QUANTITY | 0.85+
Kubernetes | ORGANIZATION | 0.82+
two days | DATE | 0.81+
seventh | QUANTITY | 0.79+
roles | QUANTITY | 0.77+
EU | ORGANIZATION | 0.76+
pandemic | EVENT | 0.76+
COCOM | ORGANIZATION | 0.73+
next couple of years | DATE | 0.71+
tons of talks | QUANTITY | 0.7+
con | EVENT | 0.69+
ink | QUANTITY | 0.67+
SRE | TITLE | 0.61+
Koon | EVENT | 0.61+
at least three | QUANTITY | 0.58+
VCC | ORGANIZATION | 0.58+
last | DATE | 0.57+
Vale | LOCATION | 0.51+
cloud native | ORGANIZATION | 0.46+
5g | TITLE | 0.42+
EU | EVENT | 0.42+
2022 | DATE | 0.31+

The University of Edinburgh and Rolls Royce Drive in Exascale Style | Exascale Day


 

>>Welcome. My name is Ben Bennett. I am the director of HPC strategic programs here at Hewlett Packard Enterprise. It is my great pleasure and honor to be talking to Professor Mark Parsons from the Edinburgh Parallel Computing Centre. We're going to talk a little about exascale, what it means, and we're going to talk less about the technology and more about the science, the requirements and the need for exascale, rather than a deep dive into the enabling technologies. Mark, welcome. >>Ben, thanks very much for inviting me. >>Complete pleasure. So I'd like to kick off with quite an interesting look back. You and I are both of a certain age, 25-plus, and we've seen the milestones of high performance computing come and go: a gigaflop back in 1987, a teraflop in '97, a petaflop in 2008. But we seem to be taking longer in getting to an exaflop. So I'd like your thoughts: why is an exaflop taking so long? >>I think that's a very interesting question, because I started my career in parallel computing in 1989, and I joined just as EPCC was being set up; we're 30 years old this year. In 1990, the fastest computer we had was 800 megaflops, just under a gigaflop. So in my career, by the time we reached the petascale we'd already gone pretty much a million times faster, and the step from a teraflop to a petascale system really didn't feel particularly difficult. Yet the step from a petaflop, a petascale system, to an exaflop is a really, really big challenge. I think it's actually related to what's happened with computer processors over the last decade, where an individual processor core, like the one on your laptop, hasn't got much faster; we've just got more of them. So there's the perception of more speed, but it's actually just being delivered by more cores, and the same thing happens in the supercomputing world. In 2010, I think we had systems that were a few thousand cores. Our main national service in the UK for the last eight years has had 118,000 cores. But at the exascale we're looking at four or five million cores, and taming that level of parallelism is the real challenge. That's why it's taking an enormous amount of time to deliver these systems. And it's not just on the hardware front, where vendors like HPE have to deliver world-beating technology, and that's hard. There's also the challenge to the users: how do they get their codes to work in the face of that much parallelism? >>If you look at the complexity of delivering an exaflop, you could have bought an exaflop three or four years ago. You couldn't have housed it, you couldn't have powered it, you couldn't have afforded it, and you couldn't have programmed it, but you still could have bought one; we should have been so lucky as to be able to supply it. The software, I think from our standpoint, is where we're doing more enabling with our customers. You sell them a machine, and then the need to collaborate seems more and more to be around the software. So it's going to be relatively easy to get one exaflop using LINPACK, but that's not exascale. So what do you think an exascale machine versus an exaflop machine means to people like yourself, to your users, the scientists and industry? What is an exaflop versus an exascale? >>I think supercomputing moves forward by setting itself challenges, and when you look at all of the exascale programs worldwide that are trying to deliver systems that can do an exaflop or more, it's actually a very arbitrary challenge. We set ourselves a petascale challenge of delivering a petaflop and somebody managed that, and the world moves forward by setting itself challenges, but I think we use quite an arbitrary definition of what we mean by an exaflop as well. In your world and my world, first of all, a flop is a computation, a multiply or an add or whatever, and we tend to look at that as using very high precision, 64-bit numbers. We then say, well, to do an exaflop you've got to do a billion billion of those calculations every second. Now, that's a somewhat arbitrary target. Today from HPE I can buy a system that will do a billion billion calculations per second, and it will either do that as a theoretical peak, which would be almost unattainable, or using benchmarks that stress the system and demonstrate an exaflop. But those benchmarks are themselves tuned to just do those calculations and deliver an exaflop in a sustained way, if you like. So we've kind of set ourselves this big challenge, the big fence on the racecourse which we're clambering over, but the challenge itself should actually be much more interesting: what are we going to use these devices for, having built them? Getting into the exascale era is not so much about doing an exaflop. It's a new generation of capability that allows us to do better scientific and industrial research, and that's the interesting bit in this whole story. >>I would tend to agree with you. I think the focus around exascale is to look at new technologies, new ways of doing things, new ways of looking at data, and to get new results. So eventually you will get yourself an exascale machine, one hopes, sooner rather than later. >>Well, I'm sure you'd be happy to sell me one, Ben. >>It's got nothing to do with me. I can't sell you anything, Mark, but there are people outside the door over there who would love to sell you one. However, if you look at your exascale machine, how do you believe the workloads are going to be different on an exascale machine versus your current petascale machine? >>I think there's always a slight conceit when you buy a new national supercomputer, and that conceit is that you're buying a capability that many people will run on across the whole system. Now, in truth, we do have people that run on the whole of our ARCHER system, which today is 118,000 cores, but I would say the people that run over, say, half of that can be counted on a single hand in a year, and they're doing very specific things; it's very costly simulation they're running. So if you look at these systems today, two things show. One is that it's very difficult to get time on them; the application procedures are baroque, all of the requirements have to be assessed by your peers, and you're given quite a limited amount of time that you have to eke out to do science.
And people tend to run their applications in the sweet spot where their application delivers the best performance. We try to push our users over time to use reasonably sized jobs. I think our average job size is about 20,000 cores, which is not bad, but that does mean that as we move to the exascale, two things have to happen. One is that I think we've got to be more relaxed about giving people access to the system. So let's give more people access, let people play, let people try out ideas they've never tried out before, and I think that will lead to a lot more innovation in computational science. But at the same time, I think we also need to be less precious. We need to accept that these systems will have a variety of sizes of job on them. We're still going to have people that want to run four million cores or two million cores; that's absolutely fine, and I absolutely salute those people for trying something really, really difficult. But then we're going to have a huge spectrum of users all the way down to people that want to run on 500 cores or whatever. So I think we need to broaden the user base of an exascale system, and I know this is what's happening, for example, in Japan with the new Japanese system. >>So, Mark, if you cast your mind back to almost exactly a year ago, after the HPC User Forum you were interviewed for Premier Magazine, and you alluded in that article to the needs of scientific and industrial users requiring an exaflop or an exascale machine. It's clear from your previous answer regarding the workloads that some would say the majority of people would be happier with, say, ten 100-petaflop machines: democratization, more people getting access. But can you give us examples of the types of science, the needs of industrial users, that actually do require those resources to be put together as an exascale machine? >>I think it's a very interesting area. At the end of the day, these systems are bought because they are capability systems, and I absolutely take the argument: why shouldn't we buy ten 100-petaflop systems? But there are a number of scientific areas, even today, that would benefit from an exascale system, and these are the sort of scientific areas that will use as much of a system, as much time and as much scale as you can give them. An immediate example is people doing chromodynamics calculations in particle physics, theoretical calculations; they would just use whatever you give them. But I think one of the areas that is very interesting is actually the engineering space, where many people worry that the engineering applications over the last decade haven't really kept up with the sort of supercomputers that we have. I'm leading a project called ASiMoV, funded by EPSRC in the UK jointly with Rolls-Royce, and also working with the Universities of Cambridge, Oxford, Bristol and Warwick. We're trying to do a whole-engine gas turbine simulation for the first time. That's looking at the structure of the gas turbine, the airplane engine, how it's all built together; looking at the fluid dynamics of the air and the hot gases, the flow through it; looking at the combustion of the engine and how fuel is sprayed into the combustion chamber; looking at the electrics around it; and looking at the way the engine deforms as it heats up and cools down. All of that.
Now, Rolls-Royce has wanted to do that for 20 years. Whenever they certify a new engine, it has to go through a number of physical tests, and every time they do one of those tests it can cost them as much as 25 to 30 million dollars. These are very expensive tests, particularly when they do what's called a blade-off test, which simulates blade failure: they have to prove that the engine contains the fragments of the blade. It's a really important test and all engines have to pass it. What we want to do is use an exascale computer to properly model a blade-off test for the first time, so that in future some of those tests can become virtual rather than having to expend all of the money that Rolls-Royce would normally spend. It's a fascinating project and a really hard project to do. One of the things that I do is that I am deputy chair this year of the Gordon Bell Prize, which I've really enjoyed. That's one of the major prizes in our area; it gets announced at Supercomputing every year, so I have the pleasure of reading all the submissions each year. This is my third year on the committee, and what's really interesting is the way that big systems like Summit, for example, in the US have pushed the user communities to try and do simulations nobody's done before. We've seen this as well with papers coming after the first use of the Fugaku system in Japan, for example. These are very, very broad: earthquake simulation, large eddy simulations of boats, a number of things around genome-wide association studies, for example. So the use of these computers spans a vast area of computational science. I think the really important thing about these systems is that they challenge people to do calculations they've never done before. That's what's important. >>Okay, thank you. You talked about challenges. When you and I had lots of hair, and that's probably much more true of me, we used to talk about grand challenges. Especially around the teraflop era, the ASCI Red program drove the grand challenges of science, possibly to hide the fact that it was a bomb-designing computer, so they talked about the grand challenges. We don't seem to talk about that much any more; we talk about exascale, we talk about data. Where are the grand challenges that you see an exascale computer can help us with? >>I think grand challenges didn't go away; just the phrase went out of fashion, a bit like my hair. I do feel that science moves forward by setting itself grand challenges and always has done. My original background is in particle physics. I was very lucky to spend four years at CERN working in the early stage of the LEP accelerator when it first came online, and the scientists there had worked on LEP for 15 years before I came in and did my little PhD on it. I think that way of organizing science hasn't changed; we just talk less about grand challenges. What I've seen over the last few years is a renaissance in computational science, looking at things that people have previously said were impossible. A couple of years ago, for example, one of the key Gordon Bell Prize papers was on genome-wide association studies, and it
may have been one of the winners, if I remember right. That was really, really interesting because, first of all, genome-wide association studies had gone out of favor in the bioinformatics and bioscience community because people thought they weren't possible to compute. But that particular paper showed that yes, you could do these really, really big problems in a reasonable amount of time if you had a big enough computer. One thing I've felt all the way through my career, actually, is that we've probably discarded more simulations because they were impossible at the time than we've actually decided to do, and I sometimes think we need to challenge ourselves by looking at the things we've discarded in the past and saying, oh look, we could actually do that now. I think part of the challenge of bringing an exascale service to life is to get people to think about what they would use it for. That's a key thing. Otherwise, as I always say, a computer that is unused should just be turned off. There's no point in having an underutilized supercomputer; everybody loses from that. >>So let's bring ourselves slightly more up to date. We're in the middle of a global pandemic, and one of the things in our industry that I've been particularly proud about is that I've seen all the vendors offering up machines and making resources available for people to fight the current disease. How do you see supercomputers, now and in the future, speeding up things like vaccine discovery and helping doctors generally? >>I think you're quite right that the supercomputer community around the world did a really good job of responding to COVID-19. Speaking for the UK, we put in place a rapid access program, so anybody who wanted to do COVID research on the various national services we have, and it went to two services, could get really quick access, and that has worked really well in the UK. ARCHER is an old system, as you know; we didn't have the world's largest supercomputer, but it has happily been running lots of COVID-19 simulations, largely for the biomedical community, looking at drug modeling and molecular modeling. In the US they've been doing really large combinatorial parameter search problems on Summit, for example, looking to see whether or not old drugs could be reused to solve a new problem. And so I think, actually, in some respects COVID-19 has been, and this sounds wrong, but it's actually been good for supercomputing, inasmuch as it has pointed out to governments that supercomputers are an important part of any scientifically active country's research infrastructure. >>So I'll finish up and tap into your inner geek. There are a lot of technologies being bandied around to enable the first exascale machine, wherever that's going to be and from whomever. What are the current or emerging technologies that you are interested in, excited about, and looking forward to getting your hands on? >>In the business case I've written for the UK's exascale computer, I actually characterized this as a choice between the American model and the Japanese model. In America they've very much gone down the CPU plus GPU route.
So you might have an Intel Xeon or an AMD processor, or an Arm processor for that matter, and you might have two or four GPUs. I think the most interesting thing that I've seen is definitely this move to a single address space, so the data that you have will be accessible by both the GPU and the CPU. That's really been one of the key things that has stopped the uptake of GPUs to date, and that one single change is going to make things very, very interesting. But I'm not entirely convinced by the CPU plus GPU model, because I think it's very difficult to get all of the performance out of the GPU. It will do well in HPL, for example, the high performance LINPACK benchmark we were discussing at the beginning of this interview, but in real scientific workloads you still find it difficult to find all the performance that has been promised. So the Japanese approach, which is the CPU-only approach, I think is very attractive, inasmuch as they're using very high bandwidth memory and a very interesting processor, which they've developed together over a 10-year period. That's one thing people don't realize: the Japanese program and the American exascale program have been working for 10 years on these systems. I think the Japanese processor is really interesting because when you look at the performance, it really does work for their scientific workloads, and that does interest me a lot: this combination of a processor designed to do good science, high bandwidth memory, and a real understanding of how data flows around the supercomputer. Those are the things exciting me at the moment. Obviously there are new networking technologies; I think in the fullness of time, not necessarily for the first systems, over the next decade we're going to see much, much more activity on silicon photonics. I think that's really fascinating. In some respects the last decade has been quite incremental improvements, but where supercomputing is going at the moment, we're at a very, very disruptive moment again. That goes back to the start of this discussion: why has exascale been difficult to get to? Actually, because it's a disruptive moment in technology. >>Professor Parsons, thank you very much for your time and your insights. >>Thank you. Pleasure. >>And folks, thank you for watching. I hope you've learned something, or at least enjoyed it. With that, I would ask you to stay safe, and goodbye.
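To make Parsons' working definition of an exaflop concrete, the arithmetic can be written out explicitly. The sketch below only restates figures quoted in the interview (a billion billion 64-bit operations per second, and EPCC's 800 megaflop machine of 1990); it adds no new benchmark data.

```latex
% One exaflop per second: a billion billion 64-bit floating point operations
% (multiplies or adds) every second, as described in the interview.
1~\text{EFLOP/s} = 10^{18}~\text{FLOP/s} = 10^{9} \times 10^{9}~\text{FLOP/s}

% The milestones mentioned: gigaflop (1987), teraflop (1997), petaflop (2008), exaflop.
10^{9} \;\rightarrow\; 10^{12} \;\rightarrow\; 10^{15} \;\rightarrow\; 10^{18}

% EPCC's 800 megaflop system of 1990 compared with a one-exaflop machine:
\frac{10^{18}~\text{FLOP/s}}{8 \times 10^{8}~\text{FLOP/s}} = 1.25 \times 10^{9}
\quad\text{(roughly 1.25 billion times faster)}
```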

Published Date : Oct 16 2020

SUMMARY :

Ben Bennett of Hewlett Packard Enterprise talks with Professor Mark Parsons of the Edinburgh Parallel Computing Centre about why the step to an exaflop has taken so long, how exascale machines will change scientific and industrial workloads such as the ASiMoV whole-engine simulation with Rolls-Royce, the return of grand challenges, the supercomputing community's response to COVID-19, and the competing American and Japanese approaches to building the first exascale systems.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Ben Bennett | PERSON | 0.99+
1989 | DATE | 0.99+
Rolls Royce | ORGANIZATION | 0.99+
UK | LOCATION | 0.99+
500 cores | QUANTITY | 0.99+
10 years | QUANTITY | 0.99+
20 years | QUANTITY | 0.99+
Japan | LOCATION | 0.99+
Parsons | PERSON | 0.99+
1990 | DATE | 0.99+
Mark | PERSON | 0.99+
2010 | DATE | 0.99+
1987 | DATE | 0.99+
HP | ORGANIZATION | 0.99+
118,000 cores | QUANTITY | 0.99+
first time | QUANTITY | 0.99+
four years | QUANTITY | 0.99+
America | LOCATION | 0.99+
CERN | ORGANIZATION | 0.99+
third year | QUANTITY | 0.99+
four | QUANTITY | 0.99+
first | QUANTITY | 0.99+
30 years | QUANTITY | 0.99+
2000 | DATE | 0.99+
four million cores | QUANTITY | 0.99+
two million cores | QUANTITY | 0.99+
Genome Wide Association | ORGANIZATION | 0.99+
two services | QUANTITY | 0.99+
Ben | PERSON | 0.99+
first systems | QUANTITY | 0.99+
two forms | QUANTITY | 0.99+
US | LOCATION | 0.99+
both | QUANTITY | 0.99+
IPCC | ORGANIZATION | 0.99+
three | DATE | 0.99+
today | DATE | 0.98+
Hewlett Packard Enterprise | ORGANIZATION | 0.98+
University of Cambridge | ORGANIZATION | 0.98+
five million cores | QUANTITY | 0.98+
a year ago | DATE | 0.98+
single | QUANTITY | 0.98+
Mark Parsons | PERSON | 0.98+
two things | QUANTITY | 0.98+
$30 million | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Edinburgh Parallel Computing Center | ORGANIZATION | 0.98+
Aziz | PERSON | 0.98+
Gordon Bell | PERSON | 0.98+
May | DATE | 0.98+
64 bit | QUANTITY | 0.98+
Europe | LOCATION | 0.98+
One | QUANTITY | 0.97+
each year | QUANTITY | 0.97+
about 20,000 course | QUANTITY | 0.97+
Today | DATE | 0.97+
Alexa | TITLE | 0.97+
this year | DATE | 0.97+
HPC | ORGANIZATION | 0.96+
Intel | ORGANIZATION | 0.96+
Xeon | COMMERCIAL_ITEM | 0.95+
25 | QUANTITY | 0.95+
over 10 year | QUANTITY | 0.95+
1000 cores | QUANTITY | 0.95+
Thio | PERSON | 0.95+
800 mega flops | QUANTITY | 0.95+
Professor | PERSON | 0.95+
Andi | PERSON | 0.94+
one thing | QUANTITY | 0.94+
couple of years ago | DATE | 0.94+
over 19 | QUANTITY | 0.93+
U. K | LOCATION | 0.92+
Premier Magazine | TITLE | 0.92+
10 100 petaflop machines | QUANTITY | 0.91+
four years ago | DATE | 0.91+
Exascale | LOCATION | 0.91+
HPD Aiken | ORGANIZATION | 0.91+

Intro | Exascale Day


 

>>Hi everyone, this is Dave Vellante, and I want to welcome you to our celebration of Exascale Day, a community event with support from Hewlett Packard Enterprise. Now, Exascale Day is October 18th; that's 10/18, as in 10 to the power of 18. And on that day we celebrate the scientists and researchers who make breakthrough discoveries with the assistance of some of the most sophisticated supercomputers in the world, ones that can run at exascale. In this program we're going to kick off the weekend and discuss the significance of exascale computing, how we got here, why it's so challenging to get to the point where we're at now, where we can perform almost 10 to the 18th floating point operations per second, or an exaFLOP. We should be there by 2021. And importantly, we'll discuss what innovations and possibilities exascale computing will unlock. So today we've got a great program for you. We're not only going to dig into a bit of the history of supercomputing, we're going to talk with experts, folks like Dr. Ben Bennett, who's doing some work with the UK government, and he's going to talk about some of the breakthroughs that we can expect with exascale. You'll also hear from experts like Professor Mark Parsons of the University of Edinburgh, who cut his teeth at CERN in Geneva, and Dr. Brian Pigeon Nuskey of Purdue University, who's studying biodiversity. We're also going to hear about supercomputers in space, as we've got some great action going on with supercomputers up at the International Space Station. Think about that: powerful, high-performance, water-cooled supercomputers, running on solar power and mounted overhead. That's right. Even though at the altitude of the International Space Station there's 90% of the Earth's gravity, objects, including humans, are essentially in a state of free fall. At 400 kilometers above Earth there's no air; you're in a vacuum. Have you ever been on the Tower of Terror at Disney, in that free fall ride, or a nosedive in an airplane? I have, and if you have binoculars around your neck, they would float. So the supercomputers can actually go onto the ceiling. Crazy, right? And that's not all. We're going to hear from experts on what the exascale era will usher in, not only for space exploration but for things like weather forecasting, life sciences, complex modeling, and all types of scientific endeavors. So stay right there for all the great content. You can use #ExascaleDay on Twitter, and enjoy the program. Thanks, everybody, for watching.

Published Date : Oct 15 2020

SUMMARY :

Dave Vellante opens theCUBE's Exascale Day celebration, supported by Hewlett Packard Enterprise, previewing a program that digs into the history of supercomputing, supercomputers on the International Space Station, and what the exascale era will unlock.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Geneva | LOCATION | 0.99+
Ben Bennett | PERSON | 0.99+
2021 | DATE | 0.99+
90% | QUANTITY | 0.99+
October 18th | DATE | 0.99+
University of Edinburgh | ORGANIZATION | 0.99+
International Space Station | LOCATION | 0.99+
Brian Pigeon Nuskey | PERSON | 0.99+
Earth | LOCATION | 0.99+
400 kilometers | QUANTITY | 0.99+
Mark Parsons | PERSON | 0.99+
Exascale Day | EVENT | 0.99+
Hewlett Packard Enterprise | ORGANIZATION | 0.99+
earth | LOCATION | 0.99+
Exascale | TITLE | 0.98+
CERN | ORGANIZATION | 0.98+
18 | QUANTITY | 0.97+
today | DATE | 0.97+
Purdue University | ORGANIZATION | 0.97+
Disney | ORGANIZATION | 0.93+
#ExascaleDay | EVENT | 0.93+
UK government | ORGANIZATION | 0.92+
18th | QUANTITY | 0.92+
10 | DATE | 0.89+
Professor | PERSON | 0.87+
Exascale | EVENT | 0.82+
Twitter | ORGANIZATION | 0.79+
10 | QUANTITY | 0.72+
Tower of Terror | TITLE | 0.66+
second | QUANTITY | 0.61+
Day | TITLE | 0.59+

Clayton Coleman, Red Hat | Red Hat Summit 2020


 

>>From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >>Hi, I'm Stu Miniman, and this is theCUBE's coverage of Red Hat Summit 2020. The event this year is digital, of course. We're talking to Red Hat executives, partners and customers where they are around the globe, pulling them in remotely. Happy to welcome back to the program one of our Cube alumni, on a very important topic, of course, that being Red Hat OpenShift, and joining me is Clayton Coleman, who's the OpenShift chief architect with Red Hat. Clayton, thanks so much for joining us. >>Thank you for having me today. >>All right. So before we get into the product, it's probably worthwhile that we talk about what's happening in the community, specifically Kubernetes and the whole cloud native space. Normally we would have gotten together; I would have seen you at KubeCon at the end of March. But instead, here we are at the end of April, looking out at more CNCF events later this year. But first, Red Hat Summit is a great open source event with a broad community, so I would really love your viewpoint as to what's happening in that ecosystem. >>It's been a really interesting year, obviously. With an open source community, we react to all the things that go on in open source. People come to the community, and sometimes they have more time and sometimes they have less time. I think just from a community perspective, there have been a lot of people reaching out to their colleagues outside of their companies, to their friends and coworkers and all of the different participants in the community, and there have been a lot of people getting together for a little bit of extra time, trying to connect virtually where they can't connect physically. It's been great to at least see where we've come this year. We haven't had KubeCon; that'll be coming up later this year. But Kubernetes just had the 1.18 release, and I think Kubernetes is moving into that phase where it's a mature open source project. We've got a lot of the processes down. I'm really happy with the work that the steering committee has gone through; we handed off from the last of the bootstrap steering committee members to the new, fully elected steering committee last year, and it's gone absolutely smoothly, which has been phenomenal. The core project is trying to be a little bit more stable and to focus on closing out those loose ends, being a little bit more conservative to change, and at the same time the ecosystem has really exploded in a number of directions, as Kubernetes becomes more of a bedrock technology for enterprises and individuals and startups and everything in between. We've really seen a huge amount of innovation in the space, and every year it just gets bigger and bigger. There are a lot of exciting projects out there where I have never even talked to somebody on the Kubernetes project, but they have made and built and solved problems for their environments without us ever having to be involved, which I think is success. >>Yeah. Clayton, one of the challenges when you talk to practitioners out there is just keeping up with the pace of change; it can really be challenging. Something we saw acutely was Docker rolling out updates every six weeks.
Most customers aren't going to be able to change fast enough to keep up with that, so I'd love your viewpoint both as to what the CNCF says and how Red Hat thinks of products. You talked about Kubernetes 1.18; my understanding is even Google isn't yet packaging and offering that version, so there's a lag between things. And as we start talking about managing across lots of clusters, how does Red Hat think of this? How should customers think about this? How do we make sure that we're staying secure and keeping updated without getting run over by the constant treadmill of change? >>The interesting part about Kubernetes is that it's so much more than just that core project. No matter what any of us do in the core Kubernetes project, or in the products that Red Hat builds around OpenShift and the layers on top, there's a whole ecosystem of components that most people think of as fundamental to building applications, deploying them, and running them, whether it's their continuous integration pipelines or their monitoring stacks. We really, as communities, have become a little bit more conservative. I think we've really nailed down our processes for taking that change from the community and testing it. We run tens of thousands of automation tests a week on the latest and greatest Kubernetes code, give it time to soak, bring it together with all those pieces of the ecosystem, and then make sure that they work well together. And I've noticed over the last two years that the rate of "oops, we missed that in Kubernetes 1.17, and by the time someone saw it people were already using it" has started to go down. For us, it really hasn't been about the pace of keeping up with the upstream, but about making sure that we can responsibly pull together all the other ecosystem components that are much newer and, how should I say, still in the exciting phase of their development, while still giving a predictable, reliable update stream. I would say that the challenge most people are going to see is how they bring together all those pieces, and that's something that on OpenShift we think of as our goal: to help pull together all the pieces of this ecosystem, to make some choices for customers that make sense, and to give them flexibility where it's not clear yet what the right choice might be, or where different people could reasonably disagree. I'm really excited; I feel like we've got our release cadence down and we're shipping the latest Kubernetes after it's had time to soak and be reviewed, and I think we've gotten better and better at that. So I'm really proud of the team at Red Hat and how they've worked within the community so that everybody benefits from that testing and that stability. >>Great. I'd like to switch gears and dig in a little bit on the application side. What's happening with the workloads that customers are using? What other innovations are happening around that space? And how is Red Hat helping the infrastructure team and the developer team work even closer together, like Red Hat has done for a long time? >>This is a great question. I'd say there are two key groups coming together. People are bringing substantial, important, critical production workloads, and they expect things both to just work and to be understandable. And they're making the transition.
A lot of folks I talk to are making the transition from previous systems they've got. They've been running OpenShift for a while, or they've been running Kubernetes for a while, and they're getting ready to move a significant portion of their applications over. In the early days of any project you get the exciting greenfield development and you get to go play with new technologies, but as you start moving your first one, and then 10, and then 100 of your core business applications from VMs or from bare metal into containers, you're taking advantage of that technology in a responsible way. So the expectation on us as engineers and community members is to really make sure that we're closing out the little stuff; no bug is too small when it can trip up someone's production applications. We're seeing a lot of that, whether it's something new and exciting like models-as-a-service or AI workloads, or whether it's traditional big enterprise transaction processing apps. On the other side, on the development model, I think we're starting to see phase 2.0 in the community, which is people really leveraging the flexibility and the power of containers, things that aren't necessarily new to people who got into containers early and had a chance to go through a couple of iterations. But now people are starting to find patterns that up-level development teams, like being able to run applications the same way on a local machine as in a production environment. Most production environments are there now, so people are having to go through all of their tools and ask: does this process that works for an individual developer also work when I want to move it from my staging environment to production, and so on? New projects like Knative and Tekton, which are Kubernetes-native, are just one part of the ecosystem around development. On top of Kubernetes there are tons of exciting projects out there from companies that have adopted the full stack of Kubernetes. They've built this idea of flexible infrastructure into their mindset, and we're seeing this explosion of new ways where Kubernetes is really just a detail, containers are just a detail, and the fact that there's this little thing called Docker down at the heart of it, nobody talks about anymore. That transition has been really exciting. I think there's a lot we're trying to do to help developers and administrators see eye to eye, and a lot of it is learning from the customers and users out there who really paved the way, which is the open source way: learning from others and helping others benefit from that. >>Yeah, you bring up a really important point. We've been saying for a couple of years now that Kubernetes should get to the point where it's boring, and boring in a way also because it's going to be baked in everywhere. We've seen everything from customers just taking the code and building the stack themselves, to, of course, lots of customers using OpenShift over the years, to customers adopting public cloud more and more and using those services from that standpoint. Can you talk a bit about how Red Hat is really integrating with public clouds, and your architectural and technical philosophy on that? And how might that differ from some other companies that you might call a little bit more cloud-adjacent, as opposed to being deeply integrated with the public cloud? >>The interesting thing about Kubernetes is that while it was developed on top of the clouds, it wasn't really built from day one assuming a cloud underneath it, and I think that was an opportunity we really missed. To be fair, we had to make the thing work first before we depended on these unreliable clouds; when we started, the clouds were just hitting their stride on stability and reliability, and they were becoming the obvious choice. Some of what we've tried to do is take flexible infrastructure as a given and assume that the things the cloud provides should be programmed for the benefit of the developer and the application. I think that's a key trend: we're not using the cloud because our administration teams want us to, we're using the cloud because it makes us more powerful developers. That enables new scenarios and shortens the time between idea and reality. What we have done in OpenShift is really build around the idea that OpenShift running on a cloud should take advantage of that cloud to an extreme degree: infrastructure should be flexible, and the machines in that cluster need to come and go according to the demands of the applications on top of it. So we're giving a little bit more power to the cluster and taking a little bit away from the cloud. But that also needs to benefit those who are running on premise, because, as you noted, our goal is this ubiquitous Kubernetes environment everywhere, and the operations teams, the development teams, and the DevOps teams in between need a consistent environment. If you can do this on the cloud but you don't have that flexibility on premise, you've lost something. So what we've tried to do as well is think about the ideas we consider quote-unquote cloud native. That starts with immutable operating systems. It starts with everything being declarative and working backwards from "I want to have 15 machines," and then the controllers on the cluster say, oh, one of the machines has gone bad, let's replace it. On the cloud, you ask the cloud infrastructure provider, the cloud API, for a new machine, and then you replace it automatically and no one knows any better. On premise, we'd love to do the same thing with both bare metal and virtualization on top of Kubernetes, so we have the flexibility to say: you may not have all of the options, but we should certainly be able to say this hardware is bad, or the machine stopped, so let's reboot it. There's a lot of that same mindset that can be applied. If you need virtualization, you can always use it, but virtualization as a layer on top benefits from some of the same things that all the other extensions and applications on top of Kubernetes benefit from. So we're trying to take that layer and make sure you have flexible, reliable storage on premise through our Ceph and Red Hat storage products, which are built on top of the cluster exactly like virtualization is, so you get cloud native storage mixed in, working with those teams to take those operational best practices. One of the things that interests me is, you know, 20 years ago, whoever was running an early version of Ceph wouldn't have had a standard approach to running these very large systems at scale. Organizations like CERN have been using Ceph for over a decade at extremely large scales. Part of our mindset is that we think it's time to bake some of that knowledge actually into our software. For a very long time we've been building out and adding more and more software, but we always left the automation and the knowledge about how that software is supposed to be run off to the side. We've talked about operators; Kubernetes really enshrines this principle of taking that operational knowledge into the software we ship. That software can rely on Kubernetes, and OpenShift tries to hide the details of the infrastructure underneath. I think in the long run that will just make everybody's lives easier: I shouldn't have to ship you a Ceph admin for you to be successful. And we think there's a lot more room here that's really going to improve how operations teams work and the software that they use day to day.
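To make the declarative pattern Clayton describes concrete (declare "I want to have 15 machines" and let a controller replace anything that fails), here is a minimal, hedged sketch in Python. The names and interfaces are invented for illustration; they are not Kubernetes, OpenShift, or Machine API objects, and a real controller would watch API resources and reconcile on events rather than in a simple loop.

```python
# Hypothetical sketch of a declarative reconcile loop: compare desired state with
# observed state, delete failed machines, and top back up to the desired count.
from dataclasses import dataclass
from typing import List


@dataclass
class Machine:
    name: str
    healthy: bool = True


class InfrastructureProvider:
    """Stand-in for whatever provisions machines: a cloud API or bare-metal tooling."""

    def __init__(self) -> None:
        self._count = 0

    def create_machine(self) -> Machine:
        self._count += 1
        return Machine(name=f"worker-{self._count}")

    def delete_machine(self, machine: Machine) -> None:
        print(f"replacing failed machine {machine.name}")


def reconcile(desired: int, machines: List[Machine],
              provider: InfrastructureProvider) -> List[Machine]:
    """One pass of the loop: drop unhealthy machines, then create enough to reach the desired count."""
    for machine in [m for m in machines if not m.healthy]:
        provider.delete_machine(machine)
        machines.remove(machine)
    while len(machines) < desired:
        machines.append(provider.create_machine())
    return machines


if __name__ == "__main__":
    provider = InfrastructureProvider()
    fleet = reconcile(15, [], provider)      # "I want to have 15 machines"
    fleet[3].healthy = False                 # one machine goes bad
    fleet = reconcile(15, fleet, provider)   # the controller notices and replaces it
    print(len(fleet), "machines, all healthy:", all(m.healthy for m in fleet))
```

Run repeatedly against real infrastructure, this same shape of loop is what lets "one of the machines has gone bad, let's replace it" happen without anyone being paged.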
>>So, Clayton, you mentioned virtualization as one of the topics in there. Of course, virtualization is very prevalent in customers' data center environments today; Red Hat OpenShift oftentimes sits on VMware environments in data centers. Recently VMware announced that they have Kubernetes baked into their solution, and Red Hat has OpenShift with Red Hat Virtualization. Maybe, without going into too much depth, and you probably have breakouts and white papers on this, what kind of decision points should customers be thinking about when they're deciding whether to do this on bare metal or in virtualization? What are some of the high-level trade-offs when they need to make those decisions? >>I think the first one is that virtualization is a mature technology. It's a known quantity for many organizations, and for those who are comfortable with virtualization, I'd say, like any responsible architecture and engineering team, you don't want to stop using something that's working well just because you can. For some organizations without a big investment in virtualization, they don't see the need for it anymore, except maybe as a technical detail of how they isolate insecure workloads. One of the great things about virtualization technology, which we've all become aware of over the last couple of years, is that it creates a boundary between workloads and the underlying environment. That doesn't mean that the underlying environment and containers can't be as secure or benefit from those same techniques, and so we're starting to see in the community this spectrum of virtualization, all the way from big traditional virtualization to very streamlined, stripped-down virtualization wrappers around containers, like some of the cloud providers use for their application environments. I'm really excited that the open source community is touching each of these points on the spectrum. One of our goals is: if you're happy with your infrastructure provider, we want to work well with it, and that's the pragmatic view, because everyone's on a different step in that journey. The benefit of containers is that no matter how fast you make a VM, it's never going to be quite as fast as containers, and it's never going to be quite as easy for a developer to run on their laptop. There's still a lot of work that we as a community need to do around making it easier for developers to build containers and test them locally in smaller environments, but all of that flexibility can still benefit from virtualization underneath, or from virtualization used as an isolation technology. So projects like Kata, and some of the work being done in the open source community around projects like Firecracker, taking the same open source ideas and remixing them at different points, give us a lot of flexibility. So I would say I'm actually less interested in virtualization than in all of the other technologies that are application-centric. At the heart of it, a VM isn't really a developer-centric idea; it's specifically an administrative concept that benefits the administrator, and developers can take advantage of it. But all of the capabilities you think about when you think about building an application, like scaling out, making sure patches are applied, being able to roll back, separating your configuration, and then all of the hundreds of other layers of complexity we'll add around that, like service mesh and the ability to gracefully tolerate failures in your database: those are where I think virtualization needs to work with the platform, rather than being something that dominates how we think about the platform. It's application first, not VM first. >>Yeah, you're absolutely right. The critique I've always given, for a number of years now, is that with virtualization the promise was: let's take that old application that probably should have been updated, shove it in a VM, and never think about it again. That's not doing good things for the user. So at one end of the spectrum is that, and at the other end of the spectrum is trying not to think about infrastructure at all. You mentioned Knative, and one of the things I've been digging in trying to learn more about at Red Hat Summit has really been OpenShift Serverless. So give us the update on that piece; it's obviously a very different discussion than the one we were just having from a virtualization standpoint. How does OpenShift look at serverless? How does that tie in if I'm doing serverless on Amazon versus some of the other open source options for serverless? How should I be thinking about that? >>There are a lot of great choices on the spectrum out there, and I love the word spectrum here, because Knative kind of sits in a spot where, as the name says, it tries to be as Kubernetes-native as possible, which lets you tap into some of those additional capabilities when you need them. One of the things I've always appreciated is that the more restrictive a framework is, usually the better it is at doing that one thing and doing it really well. We learned this with Rails, we learned this with Node.js, as people have built the idea of simple development platforms over the years. The core function idea is a great, simple idea, but sometimes you need to break out of it: you need extra flexibility, or your application needs to run longer, or slow start is actually an issue. One of the things I think is most interesting about Knative, and I see newcomers and users think this way, is that it gives you some of the flexibility of Kubernetes and a lot of the simplicity of functions-as-a-service. I think there's going to be an inevitable set of use cases that tie into that which are simpler, where an organization has a very opinionated way of running applications, and I think that flexibility will really benefit Knative, whereas some of the more opinionated frameworks around serverless lose a little bit of that. So that's one dimension where I still think Knative is well positioned to capture the broadest possible audience, which for Kubernetes and containers was kind of our mindset: we wanted to solve enough of the problems that you can run all your software, without having to solve those problems to such a level that there's endless complexity, although we've been accused of having endless complexity in Kubernetes before. It's about thinking through the problems that everyone is going to have and giving them a way out. At the same time, for us, when we think about prioritization, functions-as-a-service is about integration. It's about taking applications and connecting them, connecting them through Kubernetes. So it really depends on identity and access to data, and tying that into your cloud environment if you're running on top of a cloud, or into your back-end databases if you're on premise.
I think that is where the ecosystem is still working to bring together and standardize some of those pieces, in Kubernetes or on top of Kubernetes. What I'm really excited about is that there's been this core community effort to get Knative to GA quality, and alongside that, the OpenShift Serverless team has been trying to make it dramatically simpler. If you have Kubernetes and OpenShift, it's a one-click action to get started with Knative, and just like any other technology, how accessible it is determines how easily users can get started and build the applications they need. So for us it's not just about the core technology; it's about someone who's not familiar with serverless, or not familiar with Kubernetes, being able to bring up an editor, build a function, deploy it on top of OpenShift, and see it scale out like a normal Kubernetes application, without having to know about pods or persistent volumes or nodes. These are some of the steps I've been really proud that the team has taken. I think there's a huge amount of innovation that will happen this year and next year. As the Kubernetes ecosystem matures, we'll start to see standardized technologies for sharing identity across multiple clouds and multiple environments. It's no good if you've got applications on the cloud that need to tie into your corporate LDAP but you can't connect your corporate LDAP to the cloud, so your applications need a third identity system, and nobody wants a third identity system. These are the challenges I think hybrid organizations are already facing, and our job is to work with them in the open source communities, and to partner with the cloud providers in open source, so that the technologies in Kubernetes fit very well into whatever environment they run in.
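As a hedged illustration of the developer experience Clayton sketches (write a small function, containerize it, and let the platform scale it with traffic), here is a minimal HTTP handler using only the Python standard library. It is not the Knative or OpenShift Serverless API; the PORT environment variable and the JSON payload are illustrative assumptions.

```python
# A minimal HTTP "function": the developer writes a small handler, builds it into a
# container image, and a serverless platform scales it out with request traffic and
# back to zero when idle. Standard library only; not a Knative-specific interface.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class FunctionHandler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        # The "business logic" of the function lives here.
        body = json.dumps({"message": "hello from a container",
                           "path": self.path}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Serverless platforms typically inject the listen port; 8080 is a common default.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), FunctionHandler).serve_forever()
```

Packaged into a container image, a handler like this is the unit a Knative-style platform would scale out under load, without the author needing to know about pods, persistent volumes, or nodes.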
And, of course, we look forward to seeing you on the team participating in many of the kubernetes related events happening later this >>year. That's right. It's ah, gonna be a good year. >>All right. Thanks so much for joining us. I'm still Minuteman and as always thank you for watching you. >>Yeah, yeah, yeah, yeah

Published Date : Apr 29 2020

SUMMARY :

Summit 2020 Brought to you by Red Hat. Who's the open shift chief architect with Red Hat. All right, So before we get into the product, it's probably worthwhile that we talked about you We handed off the last of the bootstrap Steering Committee members hand it off to the new, have never even talk to somebody on the Kubernetes project. going to be able to change fast enough to keep up with things you love your view point both in the products that red hat that build around open shift and layers on top, there's it really hasn't been about the pace of keeping up with the upstream. I'd like to teach here, you dig in a little bit on the application side what's And a lot of it's learning from the customers and users out there who really And you know your architectural technical philosophy on that? on the cluster say, Oh, well, you know, one of the machines has gone bad. What are some of the, you know, just high level trade offs the ability to gracefully tolerate failures in your database. the things that you know I've been digging in tryingto learn more about at Red Hat Summit has really the functions is a service, but I think that there's going to be an inevitable and open source so that the technologies in kubernetes fit very well into I know the community is definitely looking forward to digging It's ah, gonna be a good year. I'm still Minuteman and as always thank you for watching

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ClaytonPERSON

0.99+

15 machinesQUANTITY

0.99+

Red HatORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

ClintonPERSON

0.99+

CERNORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

100QUANTITY

0.99+

last yearDATE

0.99+

red hatORGANIZATION

0.99+

Clayton ColemanPERSON

0.99+

10QUANTITY

0.99+

next yearDATE

0.99+

two key groupsQUANTITY

0.99+

VM WareORGANIZATION

0.99+

one clickQUANTITY

0.99+

two keyQUANTITY

0.99+

CubeORGANIZATION

0.99+

Summit 2020EVENT

0.99+

end of AprilDATE

0.98+

Red Hat SummitEVENT

0.98+

SEFTITLE

0.98+

bothQUANTITY

0.98+

oneQUANTITY

0.98+

OneQUANTITY

0.98+

firstQUANTITY

0.98+

end of MarchDATE

0.97+

this yearDATE

0.97+

one partQUANTITY

0.97+

Red Hat Summit 2020EVENT

0.97+

one dimensionQUANTITY

0.97+

later this yearDATE

0.96+

todayDATE

0.96+

eachQUANTITY

0.93+

KubernetesTITLE

0.93+

Day oneQUANTITY

0.93+

hundredsQUANTITY

0.92+

KayPERSON

0.92+

one endQUANTITY

0.91+

20 years agoDATE

0.91+

one thingQUANTITY

0.91+

KataTITLE

0.91+

1st 1QUANTITY

0.91+

red hatTITLE

0.89+

CN CFORGANIZATION

0.87+

over a decadeQUANTITY

0.86+

tens of thousands of automation testsQUANTITY

0.85+

last two yearsDATE

0.84+

MinutemanPERSON

0.82+

KubernetesORGANIZATION

0.82+

CubeCOMMERCIAL_ITEM

0.82+

every six weeksQUANTITY

0.81+

1/3QUANTITY

0.79+

cfEVENT

0.75+

Steering CommitteeORGANIZATION

0.75+

last couple yearsDATE

0.74+

K native sORGANIZATION

0.74+

a weekQUANTITY

0.73+

BMORGANIZATION

0.68+

kubernetesTITLE

0.66+

later thisDATE

0.63+

Tom Eby, Micron | Micron Insight 2019


 

live from San Francisco it's the cube covering micron insight 2019 brought to you by micron welcome back to San Francisco everybody we're here at Pier 27 the Sun is setting behind the the buildings in San Francisco you're watching the cube the leader in live tech cover jump date Volante with my co-host David Flair we've been here all day covering micron insight 2019 Tommy Vee is here is the senior vice president and general manager of the compute and networking business unit at micron Tama great to see you again great to see you so you got compute and networking two of the big three you're in your business unit there you go but we're gonna talk about 3d crosspoint today but so anyway you know absolutely we're kind of bringing you outside the swimlane or maybe not but tell us about your bu and what's the update yes we you know we sell primarily memory today DRAM although in the future we see 3d crosspoint it's a great opportunity into the the data center you know both traditional servers and the cloud players pcs graphics and networking yes so you get some hard news today why don't we dig into that a little bit we surely haven't covered much of it but okay yeah so I guess you know a couple couple things of interest probably most directly as we we announced our our first 3d crosspoint storage device it's a it's a it's the highest performance SSD in the world and offers compared to other 3d crosspoint based solutions on the market you know anywhere from three and a half to five times the performance on a range of both sequential and random reads and writes two and a half million I ops bandwidth readin right north of nine gigabytes a second and I'm super fast super fast fast and you know similar similar you know a very positive comparisons up against up against me and SSDs ok and so we're excited about that so where's the fit what are the use cases who you're targeting with sure yeah I mean I think you know that one way to think about it is that anytime you introduce a new layer into the memory and storage hierarchy you know historically it was SRAM caches and then it was SSDs going in between dear and rotating media now this is 3d crosspoint sitting in between DRAM and and NAND and and the reason it is a benefit in terms of another layer is it's you know higher density and and greater persistence than DRAM it's greater performance and and you know you can it can cycle greater endurance than the man and and when you do that you do nibble away at either side of that layer so in this case that nibbles away a little bit from DRAM and a little bit from NAND but it grows the overall pie and it's the only player in the industry that provides DRAM 3d crosspoint in and we think that's a great opportunity at some code to the economics cuz it's more expensive than and less expensive than the DRAM higher performance than the traditional flash short lower performance well under the performance of DRAM so yeah I mean so again I think you know the the the you know the benefits like I said is it's it offers greater density and it offers greater persistence than DRAM and so that's the advantage there and it offers much greater performance on things like bandwidth and I ops and much greater endurance than the NAND and certainly our preliminary results are in in applications like databases in certain AI and machine learning workloads and in workloads that that benefit from low latency I think financial service markets is one specific example you know we think there's a good value bro so so a Colombo question if 
I may yeah so si P would say no throw it throw everything in memory in Hana and of course sell the DRAM and say ok that's ok with us so you mentioned databases how should we think about this relative to in-memory databases sure I mean I think that if if you can afford it and of course it will be more expensive we would love to provide you know our highest density DRAM modules on on the highest end server platforms and you know put put you know you mentioned you know Hana database in the terabytes and terabytes of the RAM that would be great that is is not free if we refer you to do it right exactly and and so if you have the need for that performance that's will do but we we see there's a you know a an attractive range of workloads that cannot afford you know there's a costume that very high-end solution and so this affords something that that gives you know good benefits a database performance but at a slightly more I know you want to jump in go oh yeah sure I compare yourself with Intel which is obviously got the same raw technology they have gone for consumer type obtain [Music] SSDs but they put all their effort into combining it with a DVD or envied him and have combined that with the processor itself and made a combination which is very good for storage controllers yeah so the quest you can very well in in in the SSD much much much more than they have are you looking to go into that and the dim because he obviously you don't have the processes themselves to to to man yeah I mean you know to be clear the you know what we're offering today you know is a product that runs on standard and yeah and via me and while there may in the future be opportunities to further enhance performance with software optimization it runs you know out of the box absolutely without any software optimization and but I do think that you know there are opportunities both to use this technology in you know more of a storage type of configuration and and looking forward there are also opportunities to use it in a memory configuration you know what what what we're announcing today is our is our first storage and with regard to additional products you know stay tuned so if I think about the storage hierarchy you know the the classical pyramid and forget about let's let's focus on the persistent end of that spectrum yeah this is at the tip right is that how we should think about this or not necessarily I mean it is at the storage tip yes but I think we 10 to think a little bit more holistically that you know that that triangle extends from you know from DRAM traditionally to SSDs to rotating and we're now inserting a 3d crosspoint based layer in between and and so from that perspective it is it is the tip of the storage triangle right but it does sit below it does sit below DRAM so in the overall and the reason for my question was sort of a loaded question because if you eliminate the the DRAM piece now you've got that tip sewn and benefits from the volume of consumer thoughts on how you get volume with 3d crosspoint sure you know again I think there are you know at a at a lower performance point you know you can get higher density you know more cost effective storage solutions with that um and we certainly don't see you know NAND going away or we're quite bullish on that you're like man you know it's both a both a SATA and a nvme 96 layer TLC nan based products today so that's that continues to be a major area of investment but you know from a you know from a from a value and opportunity point of view we see a 
better opportunity you know applying this technology again into this layer in the you know in the in the server or datacenter hierarchy um you know as opposed to what one might be able to do in the consumer space and your OEM say bring it on right I mean they they want this we're talking about the server manufacturers data center yeah I mean I think we're in you know we're in we're in limited sampling with select customers so you know more to say about our go-to-market you know at a at a future date but certainly we we see that there is you know we're we're bullish about the opportunity the marketplace so just asking a question about volume making sure you if you look at the marketplace it's arm has been incredibly successful and it's driven a huge amount of memory and and Nan for yourself then that seems to be where the volume is growing much faster than most other platforms are you looking to use this technology 3d crosspoint as in in in that environment as even memory as in DRAM itself as memory itself at a much lower level I'm just thinking of ways that you could increase volume sure I mean so to be just just to be clear you're talking about what's driven overwhelmingly by by the cell phone market right obviously it's it's proliferating into IOT you know I guess again our our our view of the of the first and best opportunity is in the data center which is still today an x86 dominated world I would say you know in terms of opportunities like I said for you know memory based solutions in the data center um and for how we apply this in other areas you know stay tuned let's talk about this forward next acquisition so it's really interesting to see micron making moves in an AI why the acquisition tell us more about it sure yeah so it's a it's a it's a small small start-up you know handful of players although you know fairly experienced as as I believe sanjay mentioned they're on their their fifth generation of their architecture and so what we've acquired it's both it's both the hardware architecture that currently runs on FPGAs along with the supporting software that supports all the common frameworks the tensorflow is the the PI torches as well as the range of the network architectures you know that that are necessary to support again primarily on the inference side you know are we see the best opportunities in edge in fencing but in terms of what's behind the acquisition first of all there is there's an explosion of opportunity in machine learning we see that in particular on you know on edge inferencing and we feel that in order for us to continue to optimize and develop the best solutions both over all of a deep learning platform that includes memories but also just memories that are best optimized we need to understand you know when you noticed in the workloads we understand the best solutions and and and so that's why we made this acquisition we integrated it with our team that has for some time developed FPGA based adding cards and it's actually the basis of the technology for some of the dialog that used to offer example with OHSU when you talk about edge inferencing we're envisioning this sort of massively scalable distributed system that of course comprises edge you want to bring the compute to the data wherever the data lives obviously don't want to start moving data around now you're bringing a eye to that data which is the data data ai cloud all these superpowers coming together uh-huh so our premise is that the inferencing is going to be done at the edge much much of the data 
if not most of the data is going to stay at the edge yeah so this is what you're enabling through that integration provision heterogeneous combination of technologies correct I mean you know to use the extreme example that we talked about you know on stage earlier you know CERN has this massive amount of information that comes from the I think it's 40 million collisions a second or I may have my figures wrong and you cannot possibly store nor do you want to transmit that data and and so you you have to be applying AI to figure out what the good stuff is and there's no stream it's exactly and that solution exists in a myriad of applications you know the very you know simplistic one you're not going to send you know the picture of who's at your front door you know to a core data center to figure out if it's somebody in your family yeah you don't want to be doing that maybe not in the camera but certainly a lot closer because you just you know the network simply will not can't handle the capacity all right we got to go but but last word you know what are the takeaways from today what do you want our audience to remember from this event well I think you know I think it's just we continue to build on our memory and storage base to to move up the stack and add values in way that maybe storage subsystems like our our NAND SSD and 3d crosspoint that you know go a little further up the stack in terms of our gaining greater expertise in you know machine learning solutions or or the example with authentic of providing you know a broader solution including key management for how we secure the billions of devices they're gonna be at the edge touching all the bases Tom all right congratulations on all the hard work and it was great to see you again thanks guys Dave and Dave thank you and you keep right there but it will be back to wrap micron insight 2019 right after this short break from San Francisco you watching the cube

Published Date : Oct 25 2019

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
David FlairPERSON

0.99+

DavePERSON

0.99+

San FranciscoLOCATION

0.99+

Tommy VeePERSON

0.99+

CERNORGANIZATION

0.99+

two and a half millionQUANTITY

0.99+

todayDATE

0.98+

MicronORGANIZATION

0.98+

five timesQUANTITY

0.98+

first storageQUANTITY

0.98+

three and a halfQUANTITY

0.97+

billions of devicesQUANTITY

0.97+

firstQUANTITY

0.97+

bothQUANTITY

0.97+

2019DATE

0.96+

TomPERSON

0.96+

Tom EbyPERSON

0.95+

40 million collisions a secondQUANTITY

0.94+

Pier 27LOCATION

0.94+

HanaORGANIZATION

0.94+

terabytesQUANTITY

0.93+

nine gigabytesQUANTITY

0.92+

one wayQUANTITY

0.9+

IntelORGANIZATION

0.9+

fifth generationQUANTITY

0.89+

twoQUANTITY

0.8+

micron TamaORGANIZATION

0.79+

a secondQUANTITY

0.79+

micron insightORGANIZATION

0.77+

threeQUANTITY

0.73+

first 3dQUANTITY

0.72+

one specific exampleQUANTITY

0.71+

Micron InsightORGANIZATION

0.68+

handful of playersQUANTITY

0.57+

x86COMMERCIAL_ITEM

0.54+

vice presidentPERSON

0.52+

couple coupleQUANTITY

0.51+

OHSUORGANIZATION

0.42+

Jason Bloomberg, Intellyx | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE! Covering KubeCon and CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Welcome back. This is theCUBE's live coverage of KubeCon, CloudNativeCon 2019 here in Barcelona, Spain. 7,700 here in attendance, here about all the Cloud Native technologies. I'm Stu Miniman; my cohost to the two days of coverage is Corey Quinn. And to help us break down what's happening in this ecosystem, we've brought in Jason Bloomberg, who's the president at Intellyx. Jason, thanks so much for joining us. >> It's great to be here. >> All right. There's probably some things in the keynote I want to talk about, but I also want to get your general impression of the show and beyond the show, just the ecosystem here. Brian Liles came out this morning. He did not sing or rap for us this morning like he did yesterday. He did remind us that the dinners in Barcelona meant that people were a little late coming in here because, even once you've got through all of your rounds of tapas and everything like that, getting that final check might take a little while. They did eventually filter in, though. Always a fun city here in Barcelona. I found some interesting pieces. Always love some customer studies. Conde Nast talking about what they've done with their digital imprint. CERN, who we're going to have on this program. As a science lover, you want to geek out as to how they're finding the Higgs boson and how things like Kubernetes are helping them there. And digging into things like storage, which I worked at a storage company for 10 years. So, understanding that storage is hard. Well, yeah. When containers came out, I was like, "Oh, god, we just fixed it for virtualization, "and it took us a decade. "How are we going to do it this time?" And they actually quoted a crowd chat that we had in our community. Tim Hawken, of course one of the first Kubernetes guys, was in on that. And we're going to have Tim on this afternoon, too. So, just to set a little context there. Jason, what's your impressions of the show? Anything that has changed in your mind from when you came in here to today? Let's get into it from there. >> Well, this is my second KubeCon. The first one I went to was in Seattle in December. What's interesting from a big picture is really how quickly and broadly KubeCon has been adopted in the enterprise. It's still, in the broader scheme of things, relatively new, but it's really taking its place as the only container orchestrator anybody cares about. It sort of squashed the 20-or-so alternative container orchestrators that had a brief day in the sun. And furthermore, large enterprises are rapidly adopting it. It's remarkable how many of them have adopted it and how broadly, how large the deployment. The Conde Nast example was one. But there are quite a number. So we turned the corner, even though it's relatively immature technology. That's the interesting story as well, that there's still pieces missing. It's sort of like flying an airplane while you're still assembling it, which makes it that much more exciting. >> Yeah, one of the things that has excited me over the last 10 years in tech is how fast it takes me to go from ideation to production, has been shrinking. Big data was: "Let's take the thing that used to take five years "and get it down to 18 months." We all remember ERP deployments and how much money and people you need to throw at that. >> It still takes a lot of money and people. 
>> Right, because it's ERP. I was talking to one of the booths here, and they were doing an informal poll of, "How many of you are going to have Kubernetes "in production in the next six months?" Not testing it, but in production in the next six months, and it was more than half of the people were going to be ramping it up in that kind of environment. Anything architecturally? What's intriguing you? What's the area that you're digging down to? We know that we are not fully mature, and even though we're in production and huge growth, there's still plenty of work to do. >> An interesting thing about the audience here is it's primarily infrastructure engineers. And the show is aimed at the infrastructure engineers, so it's technical. It's focused on people who code for a living at the infrastructure level, not at the application level. So you have that overall context, and what you end up having, then, is a lot of discussions about the various components. "Here's how we do storage." "Here's how we do this, here's how we do that." And it's all these pieces that people now have to assemble, as opposed to thinking of it overall, from the broader context, which is where I like writing about, in terms of the bigger picture. So the bigger picture is really that Cloud Native, broadly speaking, is a new architectural paradigm. It's more than just an architectural trend. It's set of trends that really change the way we think about architecture. >> One interesting piece about Kubernetes, as well. One of the things we're seeing as we see Kubernetes start to expand out is, unlike serverless, it doesn't necessarily require the same level of, oh, just take everything you've done and spend 18 months rewriting it from scratch, and then it works in this new paradigm in a better way. It's much less of a painful conversion process. We saw in the keynote today that they took WebLogic, of all things, and dropped that into Kubernetes. If you can do it with something as challenging, in some respects, and as monolithic as WebLogic, then almost any other stack you're going to see winds up making some sense. >> Right, you mentioned serverless in contrast with Kubernetes, but actually, serverless is part of this Cloud Native paradigm as well. So it's broader than Kubernetes, although Kubernetes has established itself as the container orchestration platform of choice. But it's really an overall story about how we can leverage the best practices we've learned from cloud computing across the entire enterprise IT landscape, both in the cloud and on premises. And Kubernetes is driving this in large part, but it's bigger picture than the technology itself. That's what's so interesting, because it's so transformative, but people here are thinking about trees, not the forest. >> It's an interesting thing you say there, and I'm curious if you can help our community, Because they look at this, and they're like, "Kubernetes, Kubernetes, Kubernetes." Well, a bunch of the things sit on Kubernetes. As they've tried to say, it's a platform of platforms. It's not the piece. Many of the things can be with Kubernetes but don't have to be. So, the whole observability piece. We heard the merging of the OpenCensus, OpenTracing with OpenTelemetry. You don't have to have Kubernetes for that to be a piece of it. It can be serverless underneath it. It can be all these other pieces. Cloud Native architecture sits on top of it. So when you say Cloud Native architecture, what defines that? What are the pieces? How do I have to do it? 
Is it just, I have to have meditated properly and had a certain sense of being? What do we have to do to be Cloud Native? >> Well, an interesting way of looking at it is: What we have subtracted from the equation, so what is intentionally missing. Cloud Native is stateless, it is codeless, and it is trustless. Now, not to say that we don't have ways of dealing with state, and of course there's still plenty of code, and we still need trust. But those are architectural principals that really percolate through everything we do. So containers are inherently stateless; they're ephemeral. Kubernetes deals with ephemeral resources that come and go as needed. This is key part of how we achieve the scale we're looking for. So now we have to deal with state in a stateless environment, and we need to do that in a codeless way. By codeless, I mean declarative. Instead of saying, how are we going to do something? Let's write code for that, we're going to say, how are we going to do that? Let's write a configuration file, a YAML file, or some other declarative representation of what we want to do. And Kubernetes is driven this way. It's driven by configuration, which means that you don't need to fork it. You don't need to go in and monkey with the insides to do something with it. It's essentially configurable and extensible, as opposed to customizable. This is a new way of thinking about how to leverage open-source infrastructure software. In the past, it was open-source. Let's go in an monkey with the code, because that's one of the benefits of open-source. Nobody wants to do that now, because it's declaratively-driven, and it's configurable. >> Okay, I hear what you're saying, and I like what you're saying. But one of the things that people say here is everyone's a little bit different, and it is not one solution. There's lots of different paths, and that's what's causing a little bit of confusion as to which service mesh, or do I have a couple of pieces that overlap. And every deployment that I see of this is slightly different, so how do I have my cake and eat it, too? >> Well, you mentioned that Kubernetes is a platform of platforms, and there's little discussion of what we're actually doing with the Kubernetes here at the show. Occasionally, there's some talk about AI, and there's some talk about a few other things, but it's really up to the users of Kubernetes, who are now the development teams in the enterprises, to figure out what they want to do with it and, as such, figure out what capabilities they require. Depending upon what applications you're running and the business use cases, you may need certain things more than others. Because AI is very different from websites, it's very different from other things you might be running. So that's part of the benefit of a platform of platforms, is it's inherently configurable. You can pick and choose the capabilities you want without having to go into Kubernetes and fork it. We don't want 12 different Kubernetes that are incompatible with each other, but we're perfectly okay with different flavors that are all based on the same, fundamental, identical code base. >> We take a look at this entire conference, and it really comes across as, yes, it's KubeCon and CloudNativeCon. We look at the, I think, 36 projects that are now being managed by this. But if we look at the conversations of what's happening here, it's very clear that the focus of this show is Kubernetes and friends, where it tends to be taking the limelight of a lot of this. 
One of the challenges you start seeing as soon as you start moving up the stack, out through the rest of the stack, rather, and seeing what all of these Cloud Native technologies are is, increasingly, they're starting to be defined by what they aren't. I mean, you have the old saw of, serverless runs on servers, and other incredibly unhelpful sentiments. And we talk about what things aren't more so than we do what they are. And what about capabilities story? I don't have an answer for this. I think it's one of those areas where language is hard, and defining what these things are is incredibly difficult. But I see what you're saying. We absolutely are seeing a transformative moment. And one of the strangest things about it, to me at least, is the enthusiasm with which we're seeing large enterprises, that you don't generally think of as being particularly agile or fast-moving, are demonstrating otherwise. They're diving into this in fascinating ways. It's really been enlightening to have conversations for the last couple of days with companies that are embracing this new paradigm. >> Right. Well, in our perspective at Intellyx, we're focusing on digital transformation in the enterprise, which really means putting the customer first and having a customer-driven transformation of IT, as well as the organization itself. And it's hard to think in those terms, in customer-facing terms, when you're only talking about IT infrastructure. Be that as it may, it's still all customer-driven. And this is sometimes the missing piece, is how do we connect what we're doing on the infrastructure side with what customers require from these companies that are implementing it? Often, that missing piece centers on the workload. Because, from the infrastructure perspective, we have a notion of a workload, and we want workload portability. And portability is one of the key benefits of Kubernetes. It gives us a lot of flexibility in terms of scalability and deployment options, as well as resilience and other benefits. But the workload also represents the applications we're putting in front of our end users, whether they're employees or end customers. So that's they key piece that is like the keystone that ties the digital story, that is the customer-facing, technology-driven, technology-empowered story, with the IT infrastructure stories. How do we support the flexibility, scalability, resilience of the workloads that the business needs to meet its business goals? >> Yeah, I'm really glad you brought up that digital transformation piece, because I have two questions, and I want to make sure I'm allowing you to cover both of them. One is, the outcome we from people as well: "I need to be faster, and I need to be agile." But at the same point, which pieces should I, as an enterprise, really need to manage? Many of these pieces, shouldn't I just be able to consume it as a managed service? Because I don't need to worry about all of those pieces. The Google presentation this morning about storage was: You have two options. Path one is: we'll take care of all of that for you. Path two is: here's the level of turtles that you're going to go all the way down, and we all know how complicated storage is, and it's got to work. If I lose my state, if I lose my pieces there, I'm probably out of business or at least in really big trouble. The second piece on that, you talked about the application. And digital transformation. 
Speed's great and everything, but we've said at Wikibon that the thing that will differentiate the traditional companies and the digitally transformed is data will drive your business. You will have data, it will add value of business, and I don't feel that story has come out yet. Do you see that as the end result from this? And apologies for having two big, complex questions here for you. >> Well, data are core to the digital transformation story, and it's also an essential part of the Kubernetes story. Although, from the infrastructure perspective, we're really thinking more about compute than about data. But of course, everything boils down to the data. That is definitely always a key part of the story. And you're talking about the different options. You could run it yourself or run it as a managed service. This is a key part of the story as well, is that it's not about making a single choice. It's about having options, and this is part of the modern cloud storage. It's not just about, "Okay, we'll put everything in one public cloud." It's about having multiple public clouds, private clouds, on-premises virtualization, as well as legacy environments. This is what you call hybrid IT. Having an abstracted collection of environments that supports workload portability in order to meet the business needs for the infrastructure. And that workload portability, in the context of multiple clouds, that is becoming increasingly dependent on Kubernetes as an essential element of the infrastructure. So Kubernetes is not the be-all and end-all, but it's become an essentially necessary part of the infrastructure, to make this whole vision of hybrid IT and digital transformation work. >> For now. I mean, I maintain that, five years from now, no one is going to care about Kubernetes. And there's two ways that goes. Either it dries up, blows away, and something else replaces it, which I don't find likely, or, more likely, it slips beneath the surface of awareness for most people. >> I would agree, yeah. >> The same way that we're not sitting here, having an in-depth conversation about which distribution of Linux, or what Linux kernel or virtual memory manager we're working with. That stuff has all slipped under the surface, to the point where there are people who care tremendously about this, but you don't need to employ them at every company. And most companies don't even have to think about it. I think Kubernetes is heading that direction. >> Yeah, it looks like it. Obviously, things continue to evolve. Yeah, Linux is a good example. TCP/IP as well. I remember the network protocol wars of the early 90s, before the web came along, and it was, "Are we going to use Banyan VINES, "are we going to use NetWare?" Remember NetWare? "Or are we going to use TCP/IP or Token Ring?" Yeah! >> Thank you. >> We could use GDP, but I don't get it. >> Come on, KOBOL's coming back, we're going to bring back Token Ring, too. >> KOBOL never went away. Token Ring, though, it's long gone. >> I am disappointed in Corey, here, for not asking the question about portability. The concern we have, as you say: okay, I put Kubernetes in here because I want portability. Do I end up with least-common-denominator cloud? I'm making a decision that I'm not going to go deep on some of the pieces, because nice as the IPI lets things through, but we understand if I need to work across multiple environments, I'm usually making a trade-off there. What do you hear from customers? Are they aware that they're doing this? 
Is this a challenge for people, not getting the full benefit out of whichever primary or whichever clouds they are using? >> Well, portability is not just one thing. It's actually a set of capabilities, depending upon what you are trying to accomplish. So for instance, you may want to simply support backing up your workload, so you want to be able to move it from here to there, to back it up. Or you may want to leverage different public clouds, because different public clouds have different strengths. There may be some portability there. Or you may be doing cloud migration, where you're trying to move from on-premises to cloud, so it's kind of a one-time portability. So there could be a number of reasons why portability is important, and that could impact what it means to you, to move something from here to there. And why, how often you're going to do it, how important it is, whether it's a one-to-many kind of thing, or it's a one-to-one kind of thing. It really depends on what you're trying to accomplish. >> Jason, last thing real quick. What research do you see coming out of this? What follow-up? What should people be looking for from Intellyx in this space in the near future? >> Well, we continue to focus on hybrid IT, which include Kubernetes, as well as some of the interesting trends. One of the interesting stories is how Kubernetes is increasingly being deployed on the edge. And there's a very interesting story there with edge computing, because the telcos are, in large part, driving that, because of their 5G roll-outs. So we have this interesting confluence of disruptive trends. We have 5G, we have edge computing, we have Kubernetes, and it's also a key use case for OpenStack, as well. So it's like all of these interesting trends are converging to meet a new class of challenges. And AI is part of that story as well, because we want to run AI at the edge, as well. That's the sort of thing we do at Intellyx, is try to take multiple disruptive trends and show the big picture overall. And for my articles for SiliconANGLE, that's what I'm doing as well, so stay tuned for those. >> All right. Jason Bloomberg, thank you for helping us break down what we're doing in this environment. And as you said, actually, some people said OpenStack is dead. Look, it's alive and well in the Telco space and actually merging into a lot of these environments. Nothing ever dies in IT, and theCUBE always keeps rolling throughout all the shows. For Corey Quinn, I'm Stu Miniman. We have a full-packed day of interviews here, so be sure to stay with us. And thank you for watching theCUBE. (upbeat techno music)

Published Date : May 22 2019

SUMMARY :

Brought to you by Red Hat, And to help us break down what's happening Tim Hawken, of course one of the first Kubernetes guys, and how broadly, how large the deployment. Yeah, one of the things that has excited me What's the area that you're digging down to? is a lot of discussions about the various components. One of the things we're seeing as we see Kubernetes but it's bigger picture than the technology itself. Many of the things can be with Kubernetes Now, not to say that we don't have But one of the things that people say here is You can pick and choose the capabilities you want One of the challenges you start seeing And portability is one of the key benefits of Kubernetes. One is, the outcome we from people as well: of the infrastructure, to make this whole vision beneath the surface of awareness for most people. And most companies don't even have to think about it. I remember the network protocol wars of the early 90s, we're going to bring back Token Ring, too. KOBOL never went away. because nice as the IPI lets things through, and that could impact what it means to you, What research do you see coming out of this? That's the sort of thing we do at Intellyx, And as you said, actually,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Tim HawkenPERSON

0.99+

JasonPERSON

0.99+

SeattleLOCATION

0.99+

Corey QuinnPERSON

0.99+

Stu MinimanPERSON

0.99+

Brian LilesPERSON

0.99+

Jason BloombergPERSON

0.99+

12QUANTITY

0.99+

BarcelonaLOCATION

0.99+

Cloud Native Computing FoundationORGANIZATION

0.99+

two questionsQUANTITY

0.99+

five yearsQUANTITY

0.99+

10 yearsQUANTITY

0.99+

Red HatORGANIZATION

0.99+

DecemberDATE

0.99+

bothQUANTITY

0.99+

18 monthsQUANTITY

0.99+

secondQUANTITY

0.99+

CERNORGANIZATION

0.99+

36 projectsQUANTITY

0.99+

20QUANTITY

0.99+

TimPERSON

0.99+

IntellyxORGANIZATION

0.99+

Barcelona, SpainLOCATION

0.99+

two waysQUANTITY

0.99+

second pieceQUANTITY

0.99+

OneQUANTITY

0.99+

two daysQUANTITY

0.99+

7,700QUANTITY

0.99+

KubeConEVENT

0.99+

two optionsQUANTITY

0.99+

KOBOLORGANIZATION

0.99+

oneQUANTITY

0.99+

firstQUANTITY

0.99+

yesterdayDATE

0.98+

one solutionQUANTITY

0.98+

LinuxTITLE

0.98+

GoogleORGANIZATION

0.98+

todayDATE

0.97+

KubernetesTITLE

0.97+

early 90sDATE

0.97+

Cloud NativeTITLE

0.96+

WikibonORGANIZATION

0.96+

more than halfQUANTITY

0.96+

this morningDATE

0.95+

CloudNativeCon Europe 2019EVENT

0.95+

one thingQUANTITY

0.95+

WebLogicTITLE

0.94+

first oneQUANTITY

0.94+

One interesting pieceQUANTITY

0.93+

Path oneQUANTITY

0.93+

single choiceQUANTITY

0.93+

this afternoonDATE

0.92+

CloudNativeCon 2019EVENT

0.92+

Path twoQUANTITY

0.92+

one of the boothsQUANTITY

0.92+

next six monthsDATE

0.91+

Linux kernelTITLE

0.9+

two bigQUANTITY

0.89+

Madhu Matta, Lenovo & Dr. Daniel Gruner, SciNet | Lenovo Transform 2018


 

>> Live from New York City it's theCube. Covering Lenovo Transform 2.0. Brought to you by Lenovo. >> Welcome back to theCube's live coverage of Lenovo Transform, I'm your host Rebecca Knight along with my co-host Stu Miniman. We're joined by Madhu Matta; He is the VP and GM High Performance Computing and Artificial Intelligence at Lenovo and Dr. Daniel Gruner the CTO of SciNet at University of Toronto. Thanks so much for coming on the show gentlemen. >> Thank you for having us. >> Our pleasure. >> So, before the cameras were rolling, you were talking about the Lenovo mission in this area to use the power of supercomputing to help solve some of society's most pressing challenges; and that is climate change, and curing cancer. Can you talk a little bit, tell our viewers a little bit about what you do and how you see your mission. >> Yeah so, our tagline is basically, Solving humanity's greatest challenges. We're also now the number one supercomputer provider in the world as measured by the rankings of the top 500 and that comes with a lot of responsibility. One, we take that responsibility very seriously, but more importantly, we work with some of the largest research institutions, universities all over the world as they do research, and it's amazing research. Whether it's particle physics, like you saw this morning, whether it's cancer research, whether it's climate modeling. I mean, we are sitting here in New York City and our headquarters is in Raleigh, right in the path of Hurricane Florence, so the ability to predict the next anomaly, the ability to predict the next hurricane is absolutely critical to get early warning signs and a lot of survival depends on that. So we work with these institutions jointly to develop custom solutions to ensure that all this research one it's powered and second to works seamlessly, and all their researchers have access to this infrastructure twenty-four seven. >> So Danny, tell us a little bit about SciNet, too. Tell us what you do, and then I want to hear how you work together. >> And, no relation with Skynet, I've been assured? Right? >> No. Not at all. It is also no relationship with another network that's called the same, but, it doesn't matter. SciNet is an organization that's basically the University of Toronto and the associated research hospitals, and we happen to run Canada's largest supercomputer. We're one of a number of computer sites around Canada that are tasked with providing resources and support, support is the most important, to academia in Canada. So, all academics, from all the different universities, in the country, they come and use our systems. From the University of Toronto, they can also go and use the other systems, it doesn't matter. Our mission is, as I said, we provide a system or a number of systems, we run them, but we really are about helping the researchers do their research. We're all scientists. All the guys that work with me, we're all scientists initially. We turned to computers because that was the way we do the research. You can not do astrophysics other than computationally, observationally and computationally, but nothing else. Climate science is the same story, you have so much data and so much modeling to do that you need a very large computer and, of course, very good algorithms and very careful physics modeling for an extremely complex system, but ultimately it needs a lot of horsepower to be able to even do a single simulation. 
So, what I was showing with Madhu at that booth earlier was results of a simulation that was done just prior us going into production with our Lenovo system where people were doing ocean circulation calculations. The ocean is obviously part of the big Earth system, which is part of the climate system as well. But, they took a small patch of the ocean, a few kilometers in size in each direction, but did it at very, very high resolution, even vertically going down to the bottom of the ocean so that the topography of the ocean floor can be taken into account. That allows you to see at a much smaller scale the onset of tides, the onset of micro-tides that allow water to mix, the cold water from the bottom and the hot water from the top; The mixing of nutrients, how life goes on, the whole cycle. It's super important. Now that, of course, gets coupled with the atmosphere and with the ice and with the radiation from the sun and all that stuff. That calculation was run by a group from, the main guy was from JPL in California, and he was running on 48,000 cores. Single runs at 48,000 cores for about two- to three-weeks and produced a petabyte of data, which is still being analyzed. That's the kind of resolution that's been enabled... >> Scale. >> It gives it a sense of just exactly... >> That's the scale. >> By a system the size of the one we have. It was not possible to do that in Canada before this system. >> I tell you both, when I lived on the vendor side and as an analyst, talking to labs and universities, you love geeking out. Because first of all, you always have a need for newer, faster things because the example you just gave is like, "Oh wait." "If I can get the next generation chipset." "If the networking can be improved." You know you can take that petabyte of data and process it so much faster. >> If I could only get more money to buy a bigger one. >> We've talked to the people at CERN and JPL and things like that. - Yeah. >> And it's like this is where most companies are it's like, yeah it's a little bit better, and it might make things a little better and make things nice, but no, this is critical to move along the research. So talk a little bit more about the infrastructure and what you look for and how that connects to the research and how you help close that gap over time. >> Before you go, I just want to also highlight a point that Danny made on solving humanity's greatest challenges which is our motto. He talked about the data analysis that he just did where they are looking at the surface of the ocean, as well as, going down, what is it, 264 nautical layers underneath the ocean? To analyze that much data, to start looking at marine life and protecting marine life. As you start to understand that level of nautical depth, they can start to figure out the nutrients value and other contents that are in that water to be able to start protecting the marine life. There again, another of humanity's greatest challenge right there that he's giving you... >> Nothing happens in isolation; It's all interconnected. >> Yeah. >> When you finally got a grant, you're able to buy a computer, how do you buy the computer that's going to give you the most bang for your buck? The best computer to do the science that we're all tasked with doing? It's tough, right? We don't fancy ourselves as computer architects; we engage the computer companies who really know about architecture to help us do it. 
The way we did our procurement was, 'Ok vendors, we have a set pot of money, we're willing to spend every last penny of this money, you give us the biggest and the baddest for our money." Now, it has to have a certain set of criteria. You have to be able to solve a number of benchmarks, some sample calculations that we provided. The ones that give you the best performance that's a bonus. It also has to be able to do it with the least amount of power, so we don't have to heat up the world and pay through the nose with power. Those are objective criteria that anybody can understand. But then, there's also the other criteria, so, how well will it run? How is it architected? How balanced is it? Did we get the iOS sub-system for all the storage that was the one that actually meets the criteria? What other extras do we have that will help us make the system run in a much smoother way and for a wide variety of disciplines because we run the biologists together with the physicists and the engineers and the humanitarians, the humanities people. Everybody uses the system. To make a long story short, the proposal that we got from Lenovo won the bid both in terms of what we got for in terms of hardware and also the way it was put together, which was quite innovative. >> Yeah. >> I want to hear about, you said give us the biggest, the baddest, we're willing to empty our coffers for this, so then where do you go from there? How closely do you work with SciNet, how does the relationship evolve and do you work together to innovate and kind of keep going? >> Yeah. I see it as not a segment or a division. I see High Performance Computing as a practice, and with any practice, it's many pieces that come together; you have a conductor, you have the orchestra, but the end of the day the delivery of that many systems is the concert. That's the way to look at it. To deliver this, our practice starts with multiple teams; one's a benchmarking team that understands the application that Dr. Gruner and SciNet will be running because they need to tune to the application the performance of the cluster. The second team is a set of solution architects that are deep engineers and understand our portfolio. Those two work together to say against this application, "Let's build," like he said, "the biggest, baddest, best-performing solution for that particular application." So, those two teams work together. Then we have the third team that kicks in once we win the business, which is coming on site to deploy, manage, and install. When Dr. Gruner talks about the infrastructure, it's a combination of hardware and software that all comes together and the software is open-source based that we built ourselves because we just felt there weren't the right tools in the industry to manage this level of infrastructure at that scale. All this comes together to essentially rack and roll onto their site. >> Let me just add to that. It's not like we went for it in a vacuum. We had already talked to the vendors, we always do. You always go, and they come to you and 'when's your next money coming,' and it's a dog and pony show. They tell you what they have. With Lenovo, at least the team, as we know it now, used to be the IBM team, iXsystems team, who built our previous system. A lot of these guys were already known to us, and we've always interacted very well with them. They were already aware of our thinking, where we were going, and that we're also open to suggestions for things that are non-conventional. 
Now, this can backfire, some data centers are very square they will only prescribe what they want. We're not prescriptive at all, we said, "Give us ideas about what can make this work better." These are the intangibles in a procurement process. You also have to believe in the team. If you don't know the team or if you don't know their track record then that's a no-no, right? Or, it takes points away. >> We brought innovations like DragonFly, which Dr. Dan will talk about that, as well as, we brought in for the first time, Excelero, which is a software-defined storage vendor and it was a smart part of the bid. We were able to flex muscles and be more creative versus just the standard. >> My understanding, you've been using water cooling for about a decade now, maybe? - Yes. >> Maybe you could give us a little bit about your experiences, how it's matured over time, and then Madhu will talk and bring us up to speed on project Neptune. >> Okay. Our first procurement about 10 years ago, again, that was the model we came up with. After years of wracking our brains, we could not decide how to build a data center and what computers to buy, it was like a chicken and egg process. We ended up saying, 'Okay, this is what we're going to do. Here's the money, here's is our total cost of operation that we can support." That included the power bill, the water, the maintenance, the whole works. So much can be used for infrastructure, and the rest is for the operational part. We said to the vendors, "You guys do the work. We want, again, the biggest and the baddest that we can operate within this budget." So, obviously, it has to be energy efficient, among other things. We couldn't design a data center and then put in the systems that we didn't know existed or vice-versa. That's how it started. The initial design was built by IBM, and they designed the data center for us to use water cooling for everything. They put rear door heat exchanges on the racks as a means of avoiding the use of blowing air and trying to contain the air which is less efficient, the air, and is also much more difficult. You can flow water very efficiently. You open the door of one of these racks. >> It's amazing. >> And it's hot air coming out, but you take the heat, right there in-situ, you remove it through a radiator. It's just like your car radiator. >> Car radiator. >> It works very well. Now, it would be nice if we could do even better by doing the hot water cooling and all that, but we're not in a university environment, we're in a strip mall out in the boonies, so we couldn't reuse the heat. Places like LRZ they're reusing the heat produced by the computers to heat their buildings. >> Wow. >> Or, if we're by a hospital, that always needs hot water, then we could have done it. But, it's really interesting how the option of that design that we ended up with the most efficient data center, certainly in Canada, and one of the most efficient in North America 10 years ago. Our PUE was 1.16, that was the design point, and this is not with direct water cooling through the chip. >> Right. Right. >> All right, bring us up to speed. Project Neptune, in general? >> Yes, so Neptune, as the name suggests, is the name of the God of the Sea and we chose that to brand our entire suite of liquid cooling products. Liquid cooling products is end to end in the sense that it's not just hardware, but, it's also software. 
The other key part of Neptune is a lot of these, in fact, most of these, products were built, not in a vacuum, but designed and built in conjunction with key partners like Barcelona Supercomputer, LRZ in Germany, in Munich. These were real-life customers working with us jointly to design these products. Neptune essentially allows you, very simplistically put, it's an entire suite of hardware and software that allows you to run very high-performance processes at a level of power and cooling utilization that's like using a much lower processor, it dissipates that much heat. The other key part is, you know, the normal way of cooling anything is run chilled water, we don't use chilled water. You save the money of chillers. We use ambient temperature, up to 50 degrees, 90% efficiency, 50 degree goes in, 60 degree comes out. It's really amazing, the entire suite. >> It's 50 Celsius, not Fahrenheit. >> It's Celsius, correct. >> Oh. >> Dr. Bruner talked about SciNet with the rado-heat exchanger. You actually got to stand in front of it to feel the magic of this, right? As geeky as that is. You open the door and it's this hot 60-, 65-degree C air. You close the door it's this cool 20-degree air that's coming out. So, the costs of running a data center drop dramatically with either the rado-heat exchanger, our direct to node product, which we just got released the SE650, or we have something call the thermal-transfer module, which replaces a normal heat sink. Where for an air cool we bring water cool goodness to an air cool product. >> Danny, I wonder if you can give us the final word, just the climate science in general, how's the community doing? Any technological things that are holding us back right now or anything that excites you about the research right now? >> Technology holds you back by the virtual size of the calculations that you need to do, but, it's also physics that hold you back. >> Yes. Because doing the actual modeling is very difficult and you have to be able to believe that the physics models actually work. This is one of the interesting things that Dick Peltier, who happens to be our scientific director and he's also one of the top climate scientists in the world, he's proven through some of his calculations that the models are actually pretty good. The models were designed for current conditions, with current data, so that they would reproduce the evolution of the climate that we can measure today. Now, what about climate that started happening 10,000 years ago, right? The climate was going on; it's been going on forever and ever. There's been glaciations; there's been all these events. It turns out that it has been recorded in history that there are some oscillations in temperature and other quantities that happen about every 1,000 years and nobody had been able to prove why they would happen. It turns out that the same models that we use for climate calculations today, if you take them back and do what's called paleoclimate, you start with approximating the conditions that happened 10,000 years ago, and then you move it forward, these things reproduce, those oscillations, exactly. It's very encouraging that the climate models actually make sense. We're not talking in a vacuum. We're not predicting the end of the world, just because. These calculations are right. They're correct. They're predicting the temperature of the earth is climbing and it's true, we're seeing it, but it will continue unless we do something. Right? It's extremely interesting. 
Now he's beginning to apply those results of the paleoclimate to studies with anthropologists and archeologists. We're trying to understand the events that happened in the Levant in the Middle East thousands of years ago and correlate them with climate events. Now, is that cool or what? >> That's very cool. >> So, I think humanity's greatest challenge is again to... >> I know! >> He just added global warming to it. >> You have a fun job. You have a fun job. >> It's all the interdisciplinarity that now has been made possible. Before we couldn't do this. Ten years ago we couldn't run those calculations, now we can. So it's really cool. - Amazing. Great. Well, Madhu, Danny, thank you so much for coming on the show. >> Thank you for having us. >> It was really fun talking to you. >> Thanks. >> I'm Rebecca Knight for Stu Miniman. We will have more from Lenovo Transform just after this. (tech music)

Published Date : Sep 13 2018

Doug VanDyke, AWS | AWS Public Sector Summit 2018


 

>> Live, from Washington DC, it's theCube, covering the AWS Public Sector Summit 2018. Brought to you by Amazon Web Services, and its ecosystem partners. (techno music) >> Welcome back everyone, it's theCube's exclusive coverage here, day two of the Amazon Web Services Public Sector Summit. This is the public sector across the globe. This is their reinvent, this is their big event. I'm John Furrier, Stu Miniman, and also David Vellante's been here doing interviews. Our next guest is, we got Doug Van Dyke, he's the director of U.S. Federal Civilian and Non Profit Sectors of the group, welcome to theCube, good to see you. >> John, thank you very much for having me. >> So you've been in the federal, kind of game, and public sector for a while. You've known, worked with Teresa, at Microsoft before she came to Reinvent. >> 15 years now. >> How is she doing? >> She's doing great, we saw her on main stage yesterday. Force of nature, love working with her, love working for her. This is, like you were saying, this is our re-invent here in D.C. and 14,000 plus, 15,000 registrations, she's on the top of her game. >> What I'm really impressed with, her and your team as well, is the focus on growth, but innovation, right? It's not just about knocking down the numbers and competing. Certainly you're competing against people who are playing all kinds of tricks. You got Oracle out there, you got IBM, you've beaten them at the CIA. It's a street battle out there in this area in D.C. You guys are innovative, in that you're doing stuff with non-profits, you got mission driven, you're doing the educate stuff, so it's not just a one trick pony here. Take us through where you guys' heads are at now, because you're successful, everyone's watching you, you're not small anymore. What's the story? >> So, I think the differentiator for us is our focus on the customers. You know, we've got a great innovation story at the Department of Veterans Affairs with vets.gov. So five years ago if a veteran went out to get the services that the government was going to provide them, they'd have to pick from 200 websites. It just wasn't easy to navigate through 200 websites. So, the innovation group at Veterans Affairs, the digital services team, figured out, let's pull this all together under a single portal with vets.gov. It's running on AWS, and now veterans have a single interface into all the services they want. >> Doug, one of the things I've been impressed by, my first year coming to this. I've been to many other AWS shows, but you've got all these kind of overlapping communities. Of course, the federal government, plus state and local, education. You've got these civilian agencies, so give us a little bit of flavor about that experience here at the show. What trends are you hearing from those customers? >> So what's great for me is I've been here almost six and a half years, and I've seen the evolution. And you know, there were the early customers who were just the pioneers, like Tom Soderstrom, from JPL, who was on main stage. And then we saw the next wave where there were programs that needed a course correction, like the Centers for Medicare and Medicaid with Healthcare.gov. Where Amazon Web Services came in, took over, helped them with the MarketPlace, you know, get that going. And now we're doing some great innovative things at CMS, aggregating data from all 50 states, about 75 terabytes, so they can do research on fraud, waste, and abuse that they couldn't do before.
So we're helping our customers innovate on the cloud, and in the cloud, and it's been a great opportunity. >> Oh my God, I had the pleasure of interviewing Tom Soderstrom two years ago. >> Okay. >> Everybody gets real excited when you talk about space. It's easy to talk about innovation there, but you know, talk about innovation throughout the customers, because some people will look at it, and be like, oh come on, government and their bureaucracy, and they're behind. What kind of innovation are you hearing from your customers? >> So there's an exciting one with the Department of Energy. They, you know, there's a limited amount of resources that you have on premise. Well, they're doing research on the Large Hadron Collider at CERN, in Switzerland. And they needed to double the amount of capacity that they had on premise. So, they went to the AWS cloud, fired up 50,000 cores, brought the data down, and they could do research on it. And so, we're making things possible that couldn't be done previously. >> What are some of the examples that government entities and organizations are doing to create innovation in the private sector? Cause the private sector's been the leader to the public sector, and now you're seeing people starting to integrate it. I mean, half the people behind us, that are exhibiting here, are from the commercial side doing business in the public sector. And public sector doing, enabling action in the private sector. Talk about that dynamic, cause it's not just public sector. >> Right. >> Can you just share your? >> These public, private. Great example with NOAA, the National Oceanic and Atmospheric Administration. They have a new program called NEXRAD. It's the next generation of doppler radar. They have 160 stations across the world, collecting moisture, air pressure, all of the indicators that help predict the weather. They partner with us at AWS to put this data out through our open data program. And then organizations like WeatherBug can grab that information, government information, and use it to build the application that you have on your iPhone that predicts the weather. So you know whether to bring an umbrella to work tomorrow. >> So you're enabling the data from, or stuff from the public, for private, entrepreneurial activity? >> Absolutely. >> Talk about the non-profits. What's going on there? Obviously, we heard some stuff on stage with Teresa. The work she's showcasing, a lot of the non-profit. A lot of mission driven entrepreneurship happening. Here in D.C, it's almost a Silicon Valley like dynamic, where stuff that was never funded before is getting funded because they can do Cloud. They can stand it up pretty quickly and get it going. So, you're seeing kind of a resurgence of mission driven entrepreneurship. What does the nonprofit piece of it look like now for AWS? How do you talk about that? >> Sure. Well again, one of the areas that I'm really passionate about being here, and being one of the people who helped start our nonprofit vertical inside of AWS, we now have over 12, I'm sorry, 22,000 nonprofits using AWS to keep going. And the mission of our nonprofit vertical is just to make sure that no nonprofit would ever fail for lack of infrastructure. So we partnered with TechSoup, which is an organization that helps vet and coordinate our Cloud credits. So nonprofits, small nonprofit organizations, can go out through TechSoup, get access to credits, so they don't have to worry about their infrastructure. And you know we.. >> Free credits?
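Doug's NEXRAD example is something anyone can try: the Level II radar archive sits in a public S3 bucket under the AWS Open Data program and can be read anonymously. A minimal sketch; the bucket name and the year/month/day/station key layout are assumptions taken from the public registry rather than anything stated in the interview:

```python
# Listing a few NOAA NEXRAD Level II objects from the public AWS Open Data bucket.
# Assumed details: bucket "noaa-nexrad-level2", keys laid out as YYYY/MM/DD/STATION/.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))  # anonymous access

resp = s3.list_objects_v2(
    Bucket="noaa-nexrad-level2",
    Prefix="2018/06/20/KTLX/",  # assumed layout: year/month/day/radar station
    MaxKeys=5,
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```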
>> Those credits, with the TechSoup membership, they get those, yeah, and using the word credit, it's more like a grant of AWS cloud. >> You guys are enabling almost grants. >> Yes, cloud grants. Not cash grants, but cloud grants. >> Yeah, yeah great. So, how is that converting for you, in your mind? Can you share some examples of some nonprofits that are successful? >> Sure. A great presentation, and I think it was your last interview. A game changer. Where these smaller nonprofits can have a really large impact. And, but then we're also working with some of the larger nonprofits too. The American Heart Association, that built their precision medicine platform to match genotype and phenotype information, so we can further cardiovascular research. They have this great mission statement, they want to reduce cardiovascular disease by 20 percent by 2020. And we're going to help them do that. >> You guys are doing a great job, I got to say. It's been fun to watch, and now, we've been covering you guys for the past two years now, here at the event. A lot more coming on, in D.C. The CIA went in a few years ago. Certainly a shot heard around the cloud. That's been well documented. The Department of Defense looking good off these certain indicators. But, what's going on in the trends in the civilian agencies? Can you take a minute to give an update on that? >> Yeah, so I started earlier saying I've seen the full spectrum. I saw the very beginning, and then I've seen all the way to the end. Where, I think it was three years ago at this event, I talked to Joe Piva, who is the former CIO for the Department of Commerce ITA, the International Trade Administration. He had data center contracts coming up for renewal. And he made a really brave decision to cancel those contracts. So he had 18 months to migrate the entire infrastructure for ITA over on to AWS. And you know, there's nothing like an impending date to move. So, we've got agencies that are going all in on AWS, and I think that's just a sign of the times. >> Data centers, I mean, we're a startup nine years into it, and we've never had a data center. I think most startups don't.. >> Born in the cloud. >> Born in the cloud. Thanks so much Doug, for coming on. Appreciate the time. Congratulations on your success. AWS public sector doing great, global public sector. You guys are doing great. Building nations, we had Bahrain on as well. Good luck, and the ecosystem looks good. You guys did a good job. So, congratulations. >> John, Stu, thank you very much for having me here today. >> Live coverage here, we are in Washington D.C. for theCUBE. Coverage of AWS Public Sector Summit. We'll be back with more. Stay with us, we've got some more interviews after this short break. (techno music)

Published Date : Jun 21 2018

Alan Clark, Board, SUSE & Lew Tucker, Cisco | OpenStack Summit 2018


 

(upbeat music) >> Announcer: Live from Vancouver, Canada. It's theCUBE covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Welcome back. This is theCUBE's exclusive coverage of OpenStack Summit 2018 in Vancouver. I'm Stu Miniman with my co-host John Troyer. Happy to welcome back to the program two CUBE alums. We have Alan Clark, who's the board chair of the OpenStack Foundation and in the CTO office of SUSE. >> Yep, thank you. >> Thanks for joining us again. It's been a few years. >> It's been a while, I appreciate being back. >> And Lew Tucker, the vice chair of the OpenStack Foundation and vice president and CTO of Cisco. Lew, it's been weeks. >> Exactly right. >> All right. >> I've become a regular here. >> Yeah, absolutely. So, first of all, John Furrier sent his regard. He wishes he was here, you know. John's always like come on Lew and I, everybody, we were talking about when this Kubernetes thing started and all the conferences, so it's been a pleasure for us to be here. Six years now at this show, as well as some of the remote days and other things there. It's been fun to watch the progressions of-- >> Isn't it amazing how far we've come? >> Yeah, absolutely. Here's my first question for you, Alan. On the one hand, I want you to talk about how far we've gone. But the other thing is, people, when they learn about something, whenever they first learn about it tends to fossilize in their head, this is what it is and always will be. So I think most people know that this isn't the Amazon killer or you know it's free VMware. That we talked about years ago. Bring us a little bit of that journey. >> Well, so, you know, it started with the basic compute storage and as we've watched open-source grow and adoption of open-source grow, the demands on services grow. We're in this transformation period where everything's growing and changing very rapidly. Open-source is driving that. OpenStack could not stay static. When it started, it solved a need, but the needs continued to grow and continued to change. So it's not surprising at all that OpenStack has grown and changed and will continue to grow and change. >> So Lew, it's been fascinating for me, you know. I've worked with and all these things with Cisco and various pieces for my entire career. You're here wearing the OpenStack @ Cisco shirt. And Cisco's journey really did through that to digital transformation themselves. When I talked to Rowan at Cisco Live Barcelona, the future of Cisco is as a software company. So, help set OpenStack into that kind of broader picture. >> Sure, I think one of the aspects of that is that we're seeing now it is becoming this multi-cloud world. And that we see all of our customers are running in the public cloud. They have their own private data centers. And what they're looking for is they want their whole development model and everything else to now become targeted towards that multi-cloud world. They're going to do services in the public cloud, they still have their private data center. OpenStack is a place for them to actually meet and run all their services 'cause now you can build your environment within your data center that makes it look very much like your public cloud, so your developers don't have two completely different mindsets. They have the same one, it's extracting resources on demand. And that one, we're putting on top of that other newer technology that's coming, such as Kubernetes. 
We've got a real consistency between those environments. >> Yeah, please Alan. >> I was going to say, it enables you to leverage your existing infrastructure, so you don't want to make them, particularly SUSE's customers, they don't want us to come in and say throw everything away, start afresh, right? But at the same time, you've got to be able to embrace what's new and what's coming. We're talking about many new technologies here at OpenStack Summit today, right? Containers and all sorts of stuff. A lot of those things are still very new to our customers and they're preparing for that. As Lew said, we're building that infrastructure. >> One of the things, as I'm thinking about it, some people look at, they look at Kata Containers and some of these pieces outside of the OpenStack project and they're like, well what's the Foundation doing? But I believe it should be framed, and please, please, I would love your insight on this, in that multi-cloud discussion because this is, it can't just be, well, this is how you build private. It needs to be, this is how you live in this multi-cloud environment. >> That's why I think you're beginning to see us talk about open infrastructure. And this is using open-source software to manage your infrastructure and build it out instead of configuration, cabling, having guys going out, plugging in, unplugging network ports and whatever. We want software and automation to do all that, so OpenStack is one of the cloud platforms. But these other projects are now coming into the Foundation, which also expand that notion of open infrastructure, and that's why we're seeing these projects expand. >> Lew's exactly right and it goes beyond that. Back in 2017, early 2017, we recognized, as a board, that it's not going to be just about the projects within OpenStack. We have to embrace our adjacent communities and embrace those technologies. So that's why you're hearing a lot about Kubernetes and containers and networking and all sorts of projects that are not necessarily being done within OpenStack but you're seeing how we're collaborating with all those other communities. >> And Kata is a perfect example of that. Kata Containers came out of Clear Containers. It's now combining the best of both worlds, 'cause now you get the speed of containers coming up, but you get the security and isolation of virtual machines. That's important in the OpenStack community, in our world, because that's what we want out of our clouds. >> Well you both have just mentioned community a few times. I saw one thing coming in to this conference, I'm so impressed by the prominence of community. It's up on stage from the first minutes of the first keynote. People, the call to action, the pleas, for the folks, some of us have been here years and years, for the new folks, please come meet us, right? That's really inviting, it's very clear that this is a community. >> Yeah I was surprised, actually, 'cause we saw it when we asked up on stage how many people were here for the first time? More than half the audience raised their hand. >> Alan: I was surprised by that as well. >> That was the real surprise. And at the same time, we're seeing, increasingly, users of OpenStack coming in as opposed to the people who are in core projects. We're seeing Progressive insurance coming in. We're seeing Adobe Marketing Cloud having over 100,000 cores running OpenStack. That's in addition to what we've had with Walmart and others, so the real users are coming.
So our communities, not just the developers but the users of OpenStack and the operators. >> That's always an interesting tension for an open-source project, right? You have the open-source contributors, and then you have the users and operators. But here at the show, right? All of these different technology tracks. Part of community is identity. And so, as the technical work has been split off, and is actually at another event, these are the users. But it does, with all these other technology conversations, I wonder what the core identity of, I'm an OpenStack member, like what does that end up meaning in a world of open infrastructure? If the projects, if OpenStack itself is more mature, and as we get up the letters of the alphabet towards Z, how do you all want to steer what it means to be a member of the OpenStack community? >> We met on Sunday as a joint leadership. So we had, it wasn't just a board meeting, it was a meeting with the technical committee, it was a meeting with the user committee. So we're very much pushing to make sure we have those high interactions, that the use cases are getting translated into requirements and getting translated into blueprints and so forth. We're working very, very hard to make sure we have that communication open. And I think one of the things that sets the OpenStack community apart is what we call our Four Opens. We base everything on our Four Opens and one of those is communication, transparency and communication. And that's what people are finding enticing. And one of the big reasons is I think they're coming to OpenStack to do that innovation and collaboration. >> We've seen the same thing with Linux, for example. Linux is no longer just the operating system when people think about the Linux community. The Linux community is the operating system and then all of these other projects associated with it. That's the same thing that we're seeing with OpenStack. That's why we're continuing to see, wherever there's a need as people are deploying OpenStack and operating it and running it, all of these other open-source components are coming into it, because that's what they really were running, that conglomerate of projects around it. >> Certainly, the hype cycle, and maybe Linux went through its own hype cycle, back in the day, and I'm from Silicon Valley. I think the hype cycle outside the community and what's actually happening on the ground here actually are meshed quite well. What I saw this week, like you said, real users, big users, infrastructure built into every bank, transport, telecom in the world. That's a global necessary part of the infrastructure of our planet. So outside of investment, things like that-- >> Well I hope you can help us get the message out. Because that is a major thing that we see and we experience: people who are not here, they still maybe look at OpenStack the way it was, maybe, four years ago, and it was difficult to deploy, and people were struggling with it, and there was a lot of innovation happening at a very, very fast rate. Well now, it's proven, it's sort of industrial grade, it's being deployed at a very large scale across many, many industries. >> Well it's interesting. Remember, Lew, when we were talking about ethernet fabrics. We would talk about some of SDN and some of these big things. Well, look, sometimes these things are over-hyped. It's like, well, there's a certain class of the market who absolutely needs this.
If I'm a telco, and I sat here a couple of years ago, and was like, okay, is it 20 or 50 companies in the world that this is going to be absolutely, majorly transformative for? And that's hugely important. If I'm a mid-sized enterprise, I'm still not sure how much I'm caring about what's happening here, no offense, I'd love to hear some points there. But what it is and what it isn't, with targets, absolutely, there are massive, massive clouds. Go to China, absolutely. You hear a lot about OpenStack here. Coming across the US, I don't hear a lot about it. We've known that for years. But I've talked to a cloud provider in Australia, we've talked to Europeans like @mail, who's the provider of email for certain providers around the world. It's kind of like, okay, what part of the market, and how do we make sure we target that, because otherwise, it's this megaphone of yeah, OpenStack, well I'm not sure that was for me. >> So, yeah, what's your thought? >> We're seeing a huge variety of implementations, users that are deploying OpenStack. And yeah, we always think about the great big ones, right? I love CERN, we love the Walmarts. We love China Mobiles, because they're huge, great examples. But I have to say we're actually seeing a whole range of deployments. They don't get the visibility 'cause they're small. Everybody goes, oh you're running on three machines or 10 machines, okay, right? Talk to me when you're the size of CERN. But that's not the case, we're seeing this whole range of deployments. They probably don't get much visibility, but they're just as important. So there's tons of use cases out there. There's tons of use cases published out there and we're seeing it. >> One of the interesting use cases with a different scale has been that edge discussion. I need a very small-- >> In fact that's a very pointed example, because they've had a ton of discussion because of that variety of needs. You get the telcos with their large-scale needs, but you've also got, you know, everybody else. >> It's OpenStack sitting at the bottom of a telephone pole. On a little blade with something embedded. >> In a retail store. >> It's in a retail store. >> Or in a coffee shop. >> Yeah. >> So this is really where we're recognizing, over and over again as we go through these transitions, that it used to be, even with the fixed devices out at the edge, to change that, you had to replace that device. Instead, we want automation and we want software to do it. That's why OpenStack is moving to the edge, where it's a smaller device, much more capability, but it's still compute, storage, and networking. And you want to have virtualized applications there so you can upgrade that, you can add new services without sending a truck out to replace that. >> Moving forward, do we expect to see more interaction between the Foundation itself and other foundations and open-source projects? And what might that look like? >> It depends on the community. It really does, we definitely have communications at the board level, board-to-board, between adjacent communities. It happens at the grassroots level, from what we call SIGs or work groups, with SIGs and work groups from those adjacent communities. >> I happen to sit on three boards, which are the OpenStack board, the CNCF board, and Cloud Foundry. And so what we're also seeing, though, now.
For example, running Kubernetes, we just have now the cloud provider, which, OpenStack, being a cloud provider for Kubernetes similar in the open way that Amazon had the cloud provider for Kubernetes or Google is the cloud provider. So that now we're seeing the communities working together 'cause that's what our customers want. >> And now it's all driven by SIGs. >> The special interest groups, both sides getting together and saying, how do we make this happen? >> How do we make this happen? >> All right. One of the things you look at, there's a lot going on at the show. There's the OpenDev activity, there's a container track, there's an edge track. Sometimes, you know, where it gets a little unfocused, it's like let's talk about all the adjacencies, wait what about the core? I'd love to get your final takeaways, key things you've seen at the show, takeaways you want people to have when they think about OpenStack the show and OpenStack the Foundation. >> From my point of view it actually is back to where we started the conversation, is these users that are now coming out and saying, "I've been running OpenStack for the last three years, "now we're up to 100,000 or 200,000 cores." That shows the real adoption and those are the new operators. You don't think of Walmart or Progressive as being a service provider but they're delivering their service through the internet and they need a cloud platform in which to do that. So that's one part that I find particularly exciting. >> I totally agree with Lew. The one piece I would add is I think we've proven that it's the right infrastructure for the technology of the future, right? That's why we're able to have these additional discussions around edge and additional container technologies and Zuul with containers testing and deployment. It fits right in, so it's not a distraction. It's an addition to our infrastructure. >> I think the idea around, and that's why we actually broke up into these different tracks and had different keynotes around containers and around edge because those are primary use cases now. Two years ago when I think we were talking here, and like NFV and all the telcos were, and now that has succeeded because almost all the NFV deployments now are based on OpenStack. Now we're seeing it go to containers and edge, which are more application specific deployments. >> I'd love for you to connect the dots for us from the NFV stuff we were talking about a couple of years ago to the breadth of edge. There is no edge, it depends on who you are as to what the edge is, kind of like cloud was a few years ago. >> I mean, we actually have a white paper. If you go to OpenStack.org or just Google OpenStack edge white paper, I think you'll see that there are a variety of cases that are from manufacturing, retail, telco, I saw even space, remote driving vehicles and everything else like that. It's where latency really matters. So that we know that cloud computing is the fastest way to deploy and maintain, upgrade new applications, virtualize applications on a cloud. It's unfortunately too far away from many the places that have much more real-time characteristics. So if you're under 40 milliseconds or whatever, or you want to get something done in a VR environment or whatever, under five milliseconds, you can't go back to the cloud. It also, if you have an application, for example, a security monitoring application, whatever. 
99% of the time, the video frames are the same and they're not interesting, don't push all that information back into the central cloud. Process it locally, now when you see frames that are changing, or whatever, you only use the bandwidth and the storage in the central cloud. So we're seeing this relationship between what do you want computed at the edge and how much computing can you do as we get more powerful there and then what do you want back in the centralized data centers. >> Daniel: While you simplify the management. >> Exactly right. >> Orchestration, policy. >> But you still need the automation, you need it to be virtualized, you need it to be managed in that way, so you can upgrade it. >> Alan Clark, Lew Tucker, always a pleasure to catch up. >> Thank you, yeah, >> Thank you so much for joining us. >> It's good to be here. >> John Troyer and I will be back with lots more coverage from OpenStack Summit 2018 here in Vancouver. Thanks for watching theCUBE. (upbeat music)
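The security-camera pattern Lew sketches above, keep the processing at the edge and only spend bandwidth when a frame actually changes, reduces to a few lines of frame differencing. A toy sketch; the threshold and the upload stub are placeholders, not any particular product's logic:

```python
# Toy edge filter: only ship frames whose content changed beyond a threshold.
import numpy as np

CHANGE_THRESHOLD = 12.0  # mean absolute pixel difference; tune per camera


def should_upload(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """Return True when the new frame differs enough from the previous one."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) > CHANGE_THRESHOLD


def upload(frame: np.ndarray) -> None:
    # Placeholder for pushing an interesting frame to the central cloud.
    print("uploading frame with shape", frame.shape)


# Simulated 8-bit grayscale frames standing in for a camera feed.
prev = np.zeros((480, 640), dtype=np.uint8)
new = prev.copy()
new[100:300, 100:300] = 255  # something moved in this region

if should_upload(prev, new):
    upload(new)
```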

Published Date : May 23 2018

Chris Hoge, OpenStack Foundation | OpenStack Summit 2018


 

>> Narrator: Live from Vancouver, Canada it's theCUBE covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Welcome back to theCUBE, I'm Stu Miniman, with my cohost John Troyer, and happy to welcome to the program, fresh off the container keynote, Chris Hoge, who's the senior strategic program manager with the OpenStack Foundation. Thanks so much for joining us. >> Oh yeah, thanks so much for having me. >> Alright, so a short trip for you, then; John's coming from the Bay Area, I'm coming from the east coast. You're coming up from Portland, which is where, at the Portland OpenStack Summit, one of the attendees said, "OpenStack has arrived, theCUBE's there." So, shout out to John Furrier and the team who were there early. I've been to all the North America ones since. You've been coming here for quite a while and it's now your job. >> I've been to every OpenStack Summit since then. And to the San Francisco Summit prior to that, so it was, yeah, I've been a regular. >> Okay so for those people that might not know, what does a Foundation member do these days? Other than, you know, you're working on some of the tech, you're giving keynotes, you know, what's a day in the life? >> Yeah, I mean, I mean for me, I feel like I'm really lucky because the OpenStack Foundation, you know, has you know, kind of given me a lot of freedom to go interact with other communities and that's been one of my primary tasks, to go out and work with adjacent communities and really work with them to build integrations between OpenStack and, right now, particularly, Kubernetes and the other applications that are being hosted by the CNCF. >> Yeah, so I remember, and I've mentioned it a few times this week, three years ago we were sitting on the other side of the convention center, with theCUBE and it was Docker, Docker, Docker. The container sessions were overflowing and then a year later it was, you know, oh my gosh, Kubernetes. >> Chris: Yeah. (chuckles) >> This wave of, does one overtake the other, how do they fit together, and you know, in the keynotes yesterday and I'm sure your keynote today, talked quite a bit about you know, the various ways that things fit together, because with open source communities in general and tech overall, it's never binary, it's always, it depends, and there's five different ways you could put things together depending on your needs. So, what are you seeing? >> I mean it's almost, yeah, I mean saying that it's one or the other and that one has to win and the other has to lose is actually kind of, it's kind of silly, because when we talk about Kubernetes and we talk about Docker, we're generally talking about applications. And, you know, and, with Kubernetes, when you're very focused on the applications you want to have existing infrastructure in place. I mean, this is what it's all about. People talk about, "I'm going to run my Kubernetes application on the cloud, and the cloud has infrastructure." Well, OpenStack is infrastructure. And in fact, it is open source, it's an open source cloud.
And so, so for me it feels like it's a very natural match, because you have your open application delivery system and then it integrates incredibly well with an open source cloud and so whether you're looking for a public cloud running on OpenStack or you're hosting a private cloud, you know, to me it's a very natural pairing to say that you have an OpenStack cloud, you have a bunch of integrations into Kubernetes and that the two work together. >> I think this year that that became a lot clearer, both in the keynotes and some of the sessions. The general conversation we've had with folks about the role of Kubernetes or an orchestration or the cloud layer, the application layer, the application deployment layer say, and the infrastructure somebody's got to manage the compute the network storage down here. At least, in this architectural diagram with my hands but, you can also, a couple of demos here showed deploying Kubernetes on bare metal alongside OpenStack, with that as the provider. Can you talk a little bit about that architectural pattern? It makes sense, I think, but then, you know, it's a apparent contradiction, wait a minute so now the Kubernetes is on the bare metal? So talk about that a little bit. >> So, I think, I think one of the ways you can think about resolving the contradiction is OpenStack is a bunch of applications. When you go and you install OpenStack we have all of these microsurfaces that are, some are user facing and some are controlling the architecture underneath. But they're applications and Kubernetes is well-suited for application delivery. So, say that you're starting with bare metal. You're starting with a bare metal cloud. Maybe managed by OpenStack, so you have OpenStack there at the bottom with Ironic, and you're managing your bare metal. You could easily install Kubernetes on that and that would be at your infrastructure layer, so this isn't Kubernetes that you're giving to your users, it's not Kubernetes that you're, you know, making world facing, this is internally for your organization for managing your infrastructure. But, you want OpenStack to provide that cloud infrastructure to all of your users. And since OpenStack is a big application with a lot of moving parts, Kubernetes actually becomes a very powerful tool, or any other container orchestration scheme becomes a very powerful tool for saying that you drop OpenStack on top of that and then all of a sudden you have a public cloud that's available for, you know, for the users within your organization, or you could be running a public cloud and providing those services for other people. And then suddenly that becomes a great platform for hosting Kubernetes applications on, and so the layers kind of interleave with one another. But even if you're not interested in that. Let's say you're running Kubernetes as bare metal and you're just, you want to have Kubernetes here providing some things. There's still things that OpenStack provides that you may already have existing in your infrastructure. >> Kubernetes kind of wants, it wants to access some storage. >> It wants to consume storage for example, and so we have OpenStack Cinder, which right now it supports you know, somewhere between, you know over 70 storage drivers, like these drivers exist and the nice thing about it is... You have one API to access this and we have two drivers within that, two Cinder drivers, you can either choose the, the flex volume storage or the container storage interface, the CSI storage interface. 
And Cinder just provides that for you. And that means if you have mixed storage within your data center, you put it all behind a Cinder API and you have one interface to your Kubernetes. >> So Chris, I believe that's one of the pieces of I believe it's called the Cloud Provider OpenStack. You talked about in the keynote. Maybe walk us through with that. >> Cloud Provider OpenStack is a project that is hosted within the, within the Kubernetes community. And it's... The owner of that code is the SIG OpenStack community inside of Kubernetes. I'm one of the three leads, one of the three SIG leads of that group and, that code does a number of things. The first is there's a cloud manager interface that is a consistent interface for Kubernetes to access infrastructure information in clouds. So information about a node, when a node joins a system, Kubernetes will know about it. Ways to attach storage, ways to provision load balancers. The cloud manager interface allows Kubernetes to do this on any cloud, whether it be Azure or GCE or Amazon. Also OpenStack. Cloud Provider OpenStack is the specific code that allows us to do that, and in fact we were, OpenStack was one of the first providers that existed in upstream Kubernetes you know, so it's kind of, we've been there since the very beginning, like this has been a, you know, an effort that's happened from the beginning. >> Somewhat non-ironically, right? A lot of that you've talked about, the OpenStack Foundation and this OpenStack Summit, a lot of the things talked about here are not OpenStack per se, the components, they are containers, there's the OpenDev Conference here, colocated. Is there confusion, there doesn't, I'm getting it straight in my head, Is there, was there, did you sense any confusion of folks here or is that, if you're in it you understand what's going on and why all these different threads are flowing together in kind of an open infrastructure conversation. It seems like the community gets it and understand it and is broadened because of it. >> Yeah, I mean, to me I've seen a tremendous shift over the last year in the general understanding of the community of the role all of these different applications play. And I think it's really, it's actually a testament to the success of all of these projects, in particular, we're building open APIs, we're building predictable behavior, and once you have that, and you have many people, many different organizations that are able to provide that, they're all able to communicate with one another and leverage the strengths of the other projects. >> All of a sudden, a standard interface, low and behold, right? A thousand flowers bloom on top. >> You know, it essentially allows you to build new things on top of that, new more interesting things. >> Alright, Chris, any interesting customer stories out of the keynote that we should share with the audience? >> I mean, there are so many fantastic stories that you can talk about, I mean, of course we saw the CERN keynote, where they're running managed Kubernetes on top of OpenStack. They have over 250 Kubernetes clusters doing research that are managed by OpenStack Magnum. I mean that's just, to me that's just tremendous. That this is being used in production, it's being used in science, and it's not just across one cloud, it's across many clouds and, You know, we also have AT&T, which has been working very hard on combining OpenStack and Kubernetes to manage their next generation of, of teleco infrastructure. 
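To make the Cinder integration Chris describes a bit more concrete: once the Cinder driver from cloud-provider-openstack is installed and an operator has published a StorageClass for it, a Kubernetes user just asks for storage through the ordinary PVC API. A minimal sketch using the official Kubernetes Python client; the StorageClass name csi-cinder-sc is a hypothetical example, not something named in this interview:

```python
# Requesting Cinder-backed storage from Kubernetes via a PersistentVolumeClaim.
# Assumes the Cinder CSI driver is installed and an admin created a StorageClass
# named "csi-cinder-sc" (hypothetical name used only for this sketch).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-cinder-volume"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="csi-cinder-sc",  # hypothetical StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```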
And so, they've been big drivers along with SK Telecom on using Kubernetes as an infrastructure layer and then putting OpenStack on top of that, and then delivering applications with that. And so those are, you know we, the OpenStack Foundation just published on Monday a new white paper about OpenStack, how OpenStack works with containers and these are just a couple of the case studies that we actually have listed in that white paper. >> Chris, you're at the interface between OpenStack, which has become more mature and more stable, and containers, which, although it is maturing is still a little bit, is moving fast, right? Containers and Kubernetes both, a lot of development. Every summit, a lot of new projects, lot of new ways of installing, lot of new components, lot of new snaps. All sorts of things. What are you looking forward to now over the next year in terms of container maturity and how that's going to help us? >> So... People are talking so much now about security with containers and this is another really exciting thing that's coming out of our work because, you know, during one of the container keynotes, one of the things that was kind of driven home was containers don't contain. But, we're actually, at the OpenStack Foundation, we're kind of taking that on, and we, and my colleague Anne Bertucio has been leading a project, you know, has been community manager for a product called Kata Containers, which is, you know, you could almost call it containers that do contain. So I think that this is going to be really exciting in the next year as we talk more and more about we're building more generic interfaces and allowing all sorts of new approaches to solving complex problems, be it in security, be it in performance, be it in logging and monitoring. And so, I think, so the tools that are coming out of this and you know, creating these abstractions and how people are creatively innovating on top of those is pretty exciting. >> The last thing I'm hoping you can help connect the dots for us on is, when we talk Kubernetes, we're talking about multi-cloud. One of the big problems about Kubernetes, you know, came out of Google from you know, if you just say, "Why would Google do this?" It's like, well, there's that one really big cloud out there and if I don't have some portability and be able to move things, that one cloud might just continue to dominate. So, help connect OpenStack to how it lives in this multi-cloud world. Kubernetes is a piece of that, but you know, maybe, would love your viewpoint. >> Yeah, so. This is happening on so many levels. We see lots of large organizations who want to take back control of the cost of cloud and the cost of their cloud infrastructure and so they're starting to pull away from the big public clouds and invest more in private infrastructure. We see this with companies like eBay, we see it with companies like AT&T and Walmart, where they're investing heavily in OpenStack clouds. So that they have more control over the cost and how their applications are delivered. But you're also seeing this in a lot of... Like especially municipalities outside of the United States, you know, different governments that have data restrictions, restrictions on where data lives and how it's accessed, and we're seeing more governments and more businesses overseas that are turning to OpenStack as a way to have cloud infrastructure that is on their home soil, that you know, kind of meets the requirements that are necessary, you know that are necessary for them. 
And then kind of the third aspect of all of this is sometimes you just, sometimes you need to have lots of availability across, you know, many clouds. And you can have a private cloud, but possibly, in order to serve your customers, you might need public cloud resources, and federation across, across this, both in OpenStack and Kubernetes is improving at such an incredible pace that it becomes very easy to say that I have two, three, four, five clouds, but we're able to, we're able to combine them all and make them all look like one. >> Alright, well Chris Hodge, we really appreciate the updates on OpenStack and Kubernetes in all the various permutations. >> Yeah, it was great talking about it. This is, I mean this is the work that I love and I'm excited about, and this is, you know, I'm looking forward to it, I have fun with it and I keep looking forward to everything that's coming. >> Awesome, well we love to be able to share these stories, the technologists, the customers and everything going on in the industry. For John Troyer, I'm Stu Miniman, back with more coverage here from OpenStack Summit 2018 in beautiful Vancouver, British Columbia. Thanks for watching theCUBE. (tech music)

Published Date : May 22 2018

Keynote Analysis | OpenStack Summit 2018


 

>> Announcer: Live from Vancouver, Canada it's theCUBE! Covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Hi and welcome to SiliconANGLE Media's production of theCUBE here at OpenStack Summit 2018 in Vancouver. I'm Stu Miniman with my cohost, John Troyer. We're here for three days of live wall-to-wall coverage at the OpenStack Foundation's show, which they have twice a year. John, pleasure to be with you again; you and I were together at the OpenStack show in Boston a year ago, a little bit further trip for me. But views like this, I'm not complaining. >> It's a great time to be in Vancouver, little bit overcast but the convention center's beautiful and the people seem pretty excited as well. >> Yeah so if you see behind us, the keynote let out. So John, we got to get into the first question, of course. For some reason the last month people are always, hey Stu, where are you, what're you doing, and when I walk through the various shows I'm doing, when it comes to this one they're like, why are you going to the OpenStack show? You know, what's going on there, hasn't that been replaced by everything else? >> I got the same thing, there seems to be kind of an almost antireligious thing here in the industry, maybe more emotional perhaps than at other projects. Although frankly, look, we're going to take the temperature of the community, we're going to take the temperature of the projects, the customers, we got a lot of customers here, that's really the key here: are people actually using this, being productive, functional, and is there enough of a vendor and a community ecosystem to make this go forward? >> Absolutely, so three years ago, when we were actually here in Vancouver, the container sessions were overflowing, people sitting in the aisles. You know containers, containers, containers, docker, docker, docker, you know, we went through a year or two of that. Then Kubernetes, really a wave that has taken over this piece of the infrastructure stack, the KubeCon and CloudNativeCon shows, in general, I think have surpassed this size, but as we know in IT, nothing ever dies, everything is always additive, and a theme that I heard here that definitely resonated is, we have complexity, we need to deal with interoperability, everybody has a lot of things and that's the, choose your word, hybrid, multi-cloud world that you have, and that's really the state of opensource, it's not a thing, there's lots of things, you take all the pieces you need and you figure out how to put 'em together, either buy them from a platform, you have some integrator that helps, so somebody that puts it all together, and that's where, you know, we live here, which is, by the way, I thought they might rename the show around open, and they didn't, but there's a lot of pieces to discuss.
>> Definitely an open infrastructure movement, we'll probably talk about that, look I loved the message this morning that the cloud is not consolidating, in fact it's getting more complicated, and so that was a practical message here, it's a little bit of a church of opensource as well, so the open message was very well received and, these are the people that are working on it, of course, but yeah, the fact that, like last year I thought in Boston, there was a lot of, almost confusion around containers, and where containers and Kubernetes fit in the whole ecosystem, I think, now in this year in 2018 it's a lot more clear and OpenStack as a project, or as a set of projects, which traditionally was, the hit on it was very insular and inward facing, has at least, is trying to become outward facing, and again that's something we'll be looking at this week, and how well will they integrate with other opensource projects. >> I mean John, you and I are both big supporters of the opensource movements, love the community at shows like this, but not exclusively, it's, you know, Amazon participating a little bit, using a lot of opensource, they take opensource and make it as a service, you were at Red Hat Summit last week, obviously huge discussion there about everything opensource, everything, so a lot going on there, let me just set for, first of all the foundation itself in this show, the thing that I liked, coming into it, one of the things we're going to poke at is, if I go up to the highest level, OpenStack is not the only thing here, they have a few tracks they have an Edge computer track, they have a container track, and there's a co-resident OpenDev Show happening a couple floors above us and, even from what the OpenStack Foundation manages, yes it OpenStack's the main piece of it, and all those underlying projects but, they had Katacontainers, which is, you know, high level project, and the new one is Zuul, talking about CI/CD, so there are things that, will work with OpenStack but not exclusively for OpenStack, might not even come from OpenStack, so those are things that we're seeing, you know, for example, I was at the Veeam show last week, and there was a software company N2WS that Veeam had bought, and that solution only worked on Amazon to start and, you know, I was at the Nutanix show the week before, and there's lots of things that start in the Amazon environment and then make their way to the on-premises world so, we know it's a complex world, you know, I agree with you, the cloud is not getting simpler, remember when cloud was: Swipe the credit card and it's super easy, the line I've used a lot of times is, it is actually more complicated to buy, quote, a server equivalent, in the public could, than it is if I go to the website and have something that's shipped to my data center. >> It's, yeah, it's kind of ironic that that's where we've ended up. 
You know, we'll see, with Zuul, it'll be very interesting, one of the hits again on OpenStack has been reinvention of the wheel, like, can you inter-operate with other projects rather than doing it your self, it sounds like there's some actually, some very interesting aspects to it, as a CI/CD system, and certainly it uses stuff like Ansible so it's, it's built using opensource components, but, other opensource components, but you know, what does this give us advantage for infrastructure people, and allowing infrastructure to go live in a CI/CD way, software on hardware, rather than, the ones that've been built from the dev side, the app side. I'm assuming there's good reasons, or they wouldn't've done it, but you know, we'll see, there's still a lot of projects inside the opensource umbrella. >> Yeah, and, you know, last year we talked about it, once again, we'll talk about it here, the ecosystem has shifted. There are some of the big traditional infrastructure companies, but what they're talking about has changed a lot, you know. Remember a few years ago, it was you know, HP, thousand people, billion dollar investment, you know, IBM has been part of OpenStack since the very beginning days, but it changes, even a company like Rackspace, who helped put together this environment, the press release that went was: oh, we took all the learnings that we did from OpenStack, and this is our new Kubernetes service that we have, something that I saw, actually Randy Bias, who I'll have on the show this week, was on, the first time we did this show five years ago, can't believe it's the sixth year we're doing the show, Randy is always an interesting conversation to poke some of the sacred cows, and, I'll use that analogy, of course, because he is the one that Pets vs Cattle analogy, and he said, you know, we're spending a lot of time talking about it's not, as you hear, some game, between OpenStack and Kubernetes, containers are great, isn't that wonderful. If we're talking about that so much, maybe we should just like, go do that stuff, and not worry about this, so it'll be fun to talk to him, the Open Dev Show is being, mainly, sponsored by Mirantis who, last time I was here in Vancouver was the OpenStack company, and now, like, I saw them a year ago, and they were, the Kubernetes company, and making those changes, so we'll have Boris on, and get to find out these companies, there's not a lot of ECs here, the press and analysts that are here, most of us have been here for a lot of time so, this ecosystem has changed a lot, but, while attendance is down a little bit, from what I've heard, from previous years, there's still some good energy, people are learning a lot. >> So Stu, I did want to point out, that something I noticed on the stage, that I didn't see, was a lot of infrastructure, right? OpenStack, clearly an infrastructure stack, I think we've teased that out over the past couple years, but I didn't see a lot of talk about storage subsystems, networking, management, like all the kind of, hard, infrastructure plumbing, that actually, everybody here does, as well as a few names, so that was interesting, but at the end of the day, I mean, you got to appeal to the whole crowd here. >> Yeah, well one of the things, we spent a number of years making that stuff work, back when it was, you know, we're talkin' about gettin' Cinder, and then all the storage companies lined up with their various, do we support it, is it fully integrated, and then even further, does it actually work really well? 
So, the same stuff we went through for about a decade in virtualization, we went through in OpenStack, and we actually said a couple years ago that some of the basic infrastructure stuff has gotten boring, so we don't need to talk about it anymore. Ironic, it's actually the non-virtualized environments, that's the project that they have here; we have a lot of people who are talking bare metal, who are talking containers, so that has shifted. An interesting one in the keynote is that you had the top-level sponsors getting up there: Intel bringing along a lot of their ecosystem partners, talking about Edge, talking about telecommunications; Red Hat giving a recap of what they did last week at their summit, they've got a nice cadence, the last couple of years they've done Red Hat Summit and OpenStack Summit back-to-back so that they can get that flow of information through; and then Mark Shuttleworth, who we'll have on a little bit later today, he came out punchin', you know, he started with some motherhood and apple pie about how Ubuntu is everywhere, but then it was like, and we're going to be so much cheaper, and we're so much easier than the VMwares and Red Hats of the world, and there was a little pushback from the community that maybe that wasn't the right platform to do it.
So, telecommunications still a hot topic, Edge is something, you know what I think back, it was like, oh, all those NFE conversations we've had here, it's not just the SDN changes that are happening, but this is the Edge discussion for the Telcos, and something people were getting their arms around, so. >> It's pretty interesting to think of the cloud out on telephone poles, and in branch offices, in data centers, in closets basically or under desks almost. >> No self-driving cars on the keynote stage though? >> No, nothing that flashy this year. >> No, definitely not too flashy so, the foundation itself, it's interesting, we've heard rumors that maybe the show will change name, the foundation will not change names. So I want to give you last things, what're you looking for this week, what were you hearing from the community leading up to the show that you want to validate or poke at? >> Well, I'm going to look at real deployments, I'd like to see how standard we are, if we are, if an OpenStack deployment is standardized enough that the pool of talent is growing, and that if I hire people from outside my company who work with OpenStack, I know that they can work with my OpenStack, I think that's key for the continuation of this ecosystem. I want to look at the general energy and how people are deploying it, whether it does become really invisible and boring, but still important. Or do you end up running OpenShift on bare metal, which I, as an infrastructure person, I just can't see that the app platform should have to worry about all this infrastructure stuff, 'cause it's complicated, and so, I'll just be looking for the healthy productions and production deployments and see how that goes. >> Yeah, and I love, one of the things that they started many years ago was they have a super-user category, where they give an award, and I'm excited, we have actually have the Ontario Institute for Cancer Research is one of our guests on today, they won the 2018 super-user group, it's always awesome when you see, not only it's like, okay, CERN's here, and they're doing some really cool things looking for the Higgs boson, and all those kind of things but, you know, companies that are using technology to help them attack the battle against cancer, so, you know, you can't beat things like that. We've got the person from the keynote, Melvin, who was up on stage talking about the open lab, you know, community, ecosystem, definitely something that resonates, I know, one of the reasons I pulled you into this show in the last year is you're got a strong background there. >> Super impressed by all the community activity, this still feels like a real community, lots of pictures of people, lots of real, exhortations from stage to like, we who have been here for years know each other, please come meet us, so that's a real sign of also, a healthy community dynamic. 
>> Alright, so John first of all, I want to say, Happy Victoria Day, 'cause we are here in Vancouver, and we've got a lot going on here, it's a beautiful venue, hope you all join us for all of the coverage here, and I have to give a big shout out to the companies that allowed this to happen, we are independent media, but we can't survive without the funding of our sponsors so, first of all the OpenStack Foundation, helps get us here, and gives us this lovely location overlooking outside, but if it wasn't for the likes of our headline sponsor Red Hat as well as Canonical, Kontron, and Nuage Networks, we would not be able to bring you this content so, be sure to checkout thecube.net for all the coverage, for John Troyer, I'm Stu Miniman, thanks so much for watching theCUBE. (bubbly music)

Published Date : May 21 2018

Joseph Jacks, StealthStartup | KubeCon + CloudNativeCon EU 2018


 

>> Announcer: Live, from Copenhagen, Denmark, it's theCUBE. Covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation and its Ecosystem Partners. >> Well everyone, welcome back to the live coverage of theCUBE here in Copenhagen, Denmark for KubeCon, Kubernetes Con 2018, part of the CNCF, Cloud Native Compute Foundation, part of the Linux Foundation. I'm John Furrier with Lauren Cooney, the founder of Spark Labs, breaking down day two, wrapping up our coverage of KubeCon and all the success that we've seen with Kubernetes, I thought it would be really appropriate to bring on the cofounder of KubeCon originally, Joseph Jacks, known as JJ in the industry, a good friend of theCUBE and part of the early formation of what is now Cloud Native. We were all riffing on that at the time. welcome back to theCUBE, great to see you. >> Thank you for having me John. >> So, for the story, for the folks out there, you know Cloud Native was really seen by the devops community, and infrastructure code was no secret to the insiders in the timeframes from 2010 through 2015, 16 timeframe, but really it was an open stack summit. A lot of people were kind of like, hey, you know, Google's got Kubernetes, they're going to open it up and this could be a real game changer, container, Docker was flying off the shelves. So we just kind of saw, right, and you were there and we were talking so there was a group of us. You were one of them. And you founded KubeCon, and bolted into the, at that time, the satellite Linux Foundation events, and then you pass it off as a good community citizen to the CNCF, so I wanted to just make sure that people knew that. What a great success. What's your impression? I mean, are you blown away? >> I am definitely blown away. I mean I think the size and scale of the European audience is remarkable. We had something like slightly less than half this in Austin last year. So to see more than that come here in Europe I think shows the global kind of growth curve as well as like, I think, Dan and someone else was asking sort of raise your hand if you've been to Kubecon Austin and very few actually, so there's a lot of new people showing up in Europe. I think it just shows the demand-- >> And Dan's been traveling around. I've seen him in China, some events I've been to. >> Joseph: All over. >> He's really working hard so props to him. We gave him some great props earlier. But he also told us Shanghai is coming online. >> Joseph: Yeah. >> So you got Shanghai, you to Barcelona next year for the European show, and of course Seattle. This is a community celebrating right now because there's a lot of high fives going on right now because there's a lot of cool, we've got some sort of core standard, defacto standard, now let's go to work. What are you working on now? You got a stealth startup? Share a little bit about it. I know you don't want to give the details out, but where is it kind of above the stack? Where you going to be playing? >> Sure, so we're not talking too much in terms of specifics and we're pretty stealthy, but I can tell you what I'm personally very excited about in terms of where Kubernetes is going and kind of where this ecosystem is starting to mature for practitioners, for enterprises. So one of the things that I think Kubernetes is starting to bring to bear is this idea of commoditizing distributed systems for everyday developers, for everyday enterprises. 
And I think that that is sort of the first time in sort of maybe, maybe the history of software development, software engineering and building applications, we're standardizing on a set of primitives, a set of building blocks for distributed system style programming. You know we had in previous eras things like Erlang and fault tolerant programming and frameworks, but those were sort of like pocketed into different programming communities and different types of stacks. I think Kubernetes is the one sort of horizontal technology that the industry's adopting and it's giving us these amazing properties, so I think some of the things that we're focusing on or excited about involve sort of the programming layer on top of Kubernetes in simplifying the experience of kind of bringing all stateful and enterprise workloads and different types of application paradigms natively into Kubernetes without requiring a developer to really understand and learn the Kubernetes primitives themselves. >> That's next level infrastructure as code. Yeah so as Kubernetes becomes more successful, as Kubernetes succeeds at a larger and larger scale, people simply shouldn't have to know or understand the internals. There's a lot of people, I think Kelsey and a few other people, started to talk about Kubernetes as the Linux kernel of distributed computing or distributed systems, and I think that's a really great way of looking at it. You know, do programmers make file system calls directly when they're building their applications? Do they script directly against the kernel for maybe some very high performance things. But generally speaking when you're writing a service or you're writing a microservice or some business logic, you're writing at a higher level of abstraction and a language that's doing some IO and maybe some reading and writing files, but you're using higher level abstractions. So I think by the same token, the focus today with Kubernetes is people are learning this API. I think over time people are going to be programming against that API at a higher level. And what are you doing here, the show? Obviously you're (mumbles) so you're doing some (mumbles) intelligence. Conversations you've been in, can you share your opinion of what's going on here? Your thoughts on the content program, the architecture, the decisions they've made. >> I think we've just, so lots of questions in there. What am I doing here? I just get so energized and I'm so, I just get reinvigorated kind of being here and talking to people and it's just super cool to see a lot of old faces, people who've been here for a while, and you know, one of the things that excites me, and this is just like proof that the event's gotten so huge. I walk around and I see a lot of familiar faces, but more than 80, 90% of people I've never seen before, and I'm like wow this has like gotten really super huge mainstream. Talking with some customers, getting a good sense of kind of what's going on. I think we've seen two really huge kind of trends come out of the event. One is this idea of multicloud sort of as a focus area, and you've talked with Bassam at Upbound and the sort of multicloud control plane, kind of need and demand out there in the community and the user base. I think what Bassam's doing is extremely exciting. The other, so multicloud is a really big paradigm that most companies are sort of prioritizing. 
Kubernetes is available now on all the cloud providers, but how do we actually adopt it in a way that is agnostic to any cloud provider service. That's one really big trend. The second big thing that I think we're starting to see, just kind of across a lot of talks is taking the Kubernetes API and extending it and wrapping it around stateful applications and stateful workloads, and being able to sort of program that API. And so we saw the announcement from Red Hat on the operator framework. We've seen projects like Kube Builder and other things that are really about sort of building native custom Kubernetes APIs for your applications. So extensibility, using the Kubernetes API as a building block, and then multicloud. I think those are really two huge trends happening here. >> What is your view on, I'm actually going to put you on test here. So Red Hat made a bet on Kubernetes years ago when it was not obvious to a lot of the other big wales. >> Joseph: From the very beginning really. >> Yeah from the very beginning. And that paid off huge for Red Hat as an example. So the question is, what bets should people be making if you had to lay down some thought leadership on this here, 'cause you obviously are in the middle of it and been part of the beginning. There's some bets to be made. What are the bets that the IBMs and the HPs and the Cisco's and the big players have to make and what are the bets the startups have to make? >> Well yeah, there's two angles to that. I mean, I think the investment startups are making, are different set of investments and motivated differently than the multinational, huge, you know, technology companies that have billions of dollars. I think in the startup category, startups just should really embrace Kubernetes for speeding the way they build reliable and scalable applications. I think really from the very beginning Kubernetes is becoming kind of compelling and reasonable even at a very small scale, like for two or three node environment. It's becoming very easy to run and install and manage. Of course it gives you a lot of really great properties in terms of actually running, building your systems, adopting microservices, and scaling out your application. And that's what's sort of like a direct end user use case, startups, kind of building their business, building their stack on Kubernetes. We see companies building products on top of Kubernetes. You see a lot of them here on the expo floor. That's a different type of vendor startup ecosystem. I think there's lots of opportunities there. For the big multinationals, I think one really interesting thing that hasn't really quite been done yet, is sort of treating Kubernetes as a first-class citizen as opposed to a way to commercialize and enter a new market. I think one of the default ways large technology companies tend to look at something hypergrowth like Kubernetes and TensorFlow and other projects is wrapping around it and commercializing in some way, and I think a deeper more strategic path for large companies could be to really embed Kubernetes in the core kind of crown jewel IP assets that they have. So I'll give you an example, like, for let's just take SAP, I'll just pick on SAP randomly, for no reason. This is one of the largest enterprise software companies in the world. I would encourage the co-CEOs of SAP, for example. >> John: There's only one CEO now. >> Is there one CEO now? Okay. >> John: Snabe left. It's now (drowned out by talking). >> Oh, okay, gotcha. 
I haven't been keeping up on the SAP... But let's just say, you know, a CEO boardroom level discussion of replatforming the entire enterprise application stack on something like Kubernetes could deliver a ton of really core meaningful benefits to their business. And I don't think like deep super strategic investments like that at that level are being made quite yet. I think at a certain point in time in the future they'll probably start to be made that way. But that's how I would like look at smart investments on the bigger scale. >> We're not seeing scale yet with Kubernetes, just the toe is in the water. >> I think we're starting to see scale, John. I think we are. >> John: What's the scale number in clusters? >> I'll give you the best example, which came up today, and actually really surprised me which I think was a super compelling example. The largest retailer in China, so essentially the Amazon of China, JD.com, is running in production for years now at 20,000 compute nodes with Kubernetes, and their largest cluster is a 5,000 node cluster. And so this is pushing the boundary of the sort of production-- >> And I think that may be the biggest one I've heard. >> Yeah, that's certainly, I mean for a disclosed user that's pretty huge. We're starting to see people actually talk publicly about this which is remarkable. And there are huge deployments out there. >> We saw Tyler Jewell come on from WSO2. He's got a new thing called Ballerina. New programming language, have you seen that? >> Joseph: I have, I have. >> Thoughts on that? What's your thoughts on that? >> You know, I think that, so I won't make any particular specific comments on Ballerina, I'm not extremely informed on it. I did play with a little bit, I don't want to give any of my opinions, but what I'd say, and I think Tyler actually mentioned this, one of the things that I believe is going to be a big deal in the coming years, is so, trying to think of Kubernetes as an implementation detail, as the kernel, do you interact directly with that? Do you learn that interface directly? Are you sort of kind of optimizing your application to be sort of natively aware of those abstractions? I think the answer to all of those questions is no, and Kubernetes is sort of delegated as a compiler target, and so frankly like directionally speaking, I think what Ballerina's sort of design is aspiring towards is the right one. Compile time abstraction for building distributed systems is probably the next logical progression. I like to think of, and I think Brendan Burns has started to talk about this over the last year or two. Everyone's writing assembly code 'cause we're swimming yaml and configuration based designs and systems. You know, sort of pseudodeclarative, but more imperative in static configurations. When in reality we shouldn't be writing these assembly artifacts. We should be delegating all of this complexity to a compiler in the same way that you know, we went from assembly to C to higher level languages. So I think over time that starts to make a lot of sense, and we're going to see a lot of innovation here probably. >> What's your take on the community formation? Obviously, it's growing, so, any observations, any insight for the folks watching what's happening in the community, patterns, trends you'd see, like, don't like. 
>> I think we could do a better job of reducing politics amongst the really sort of senior community leaders, particularly those who have incentives behind their sort of agendas and opinions, since they work for various, you know, large and small companies. >> Yeah, whose horse is in this race. >> Sure, and whether they're perverse incentives or not, I think, net, the project has such a high-quality, genuine, humble, focused group of people leading it that there isn't much pollution and negativity there. But I think there could be a higher standard in some cases. Since the project is so huge and there are so many very fast moving areas of evolution, there tends to be sort of a fast curve toward many cooks being in the kitchen, you know, when new things materialize, and I think that could be better handled. But on the positive side, I think the project is becoming incredibly diverse. I just get super excited to see Aparna from Google leading the project at Google, both on the hosted SaaS offering and the Kubernetes project. People like Liz and others. And I just think it's an awesome, welcoming, super diverse community. And people should really highlight that more. 'Cause I think it's a unique asset of the project. >> Well you're involved in some deep history. I think we're going to look back at this as the moment when there was once a KubeCon that was not part of the CNCF, and you know, you did the right thing, did a good thing. You could have kept it to yourself and made some good cash. >> It's definitely gotten really big, and it's way beyond me now at this point. >> Those guys did a good job with CNCF. >> They're doing phenomenal. I think the vast majority of the credit, at this scale, goes to Chris Aniszczyk and Dan Kohn, and the events team at the Linux Foundation, CNCF, and obviously Kelsey and Liz and Michelle Noorali and many others. But blood, sweat, and tears. It's no small feat pulling off an event like this. You know, corralling the CFP process, coordinating speakers, setting the themes, it's a really huge job. >> And now they've got to deal with all the community, licenses, Lauren your thoughts? >> Well they're consistent across Apache v2, I believe is what Dan said, so all the projects under the CNCF are consistently licensed. So I think that's great. I think they actually have it together there. You know, I do share your concerns about the politics that are going on a little bit back and forth at the high level. I tend to look back at history a little bit, and for those of us that remember JBoss and the JBoss fork, we're a little bit nervous, right? So I think that it's important to take a look at that and make sure that that doesn't happen. Also, you know, OpenStack and the stuff that we've talked about before with distros coming out, or too many distros going to be hitting the street, and how do we keep that more narrowly focused, so this can go across-- >> Yeah, I started this, I like to list, rank, and iterate things, and I started with this sheet of all the vendors, you know, all the Kubernetes vendors, and then the Linux Foundation, or CNCF, took it over, and they've got a phenomenal sort of conformance testing and compliance versioning sheet, which lists all the vendors and certification status and updates and so on, and I think there's 50 or 60 companies.
On one hand I think that's great, because it's more innovation, lots of service providers and offerings, but there is a concern that there might be some fragmentation, but again, this is a really big area of focus, and I think it's being addressed. Yeah, I think the right ones will end up winning, right? >> Joseph: Right, for sure. >> and that's what's going to be key. >> Joseph: Healthy competition. >> Yes. >> All right final question. Let's go around the horn. We'll start with you JJ, wrapping up KubeCon 2018, your thoughts, summary, what's happened here? What will we talk about next year about what happened this week in Denmark? >> I think this week in Denmark has been a huge turning point for the growth in Europe and sort of proof that Kubernetes is on like this unstoppable inflection, growth curve. We usually see a smaller audience here in Europe, relative to the domestic event before it. And we're just seeing the numbers get bigger and bigger. I think looking back we're also going to see just the quality of end users and the end user community and more production success stories starting to become front and center, which I think is really awesome. There's lots of vendors here. But I do believe we have a huge representation of end users and companies actually sharing what they're doing pragmatically and really changing their businesses from Financial Times to Cern and physics projects, and you know, JD and other huge companies. I think that's just really awesome. That's a unique thing of the Kubernetes project. There's some hugely transformative companies doing awesome things out there. >> Lauren your thoughts, summary of the week in Denmark? >> I think it's been awesome. There's so much innovation happening here and I don't want to overuse that word 'cause I think it's kind of BS at some point, but really these companies are doing new things, and they're taking this to new levels. I think that hearing about the excitement of the folks that are coming here to actually learn about Kubernetes is phenomenal, and they're going to bring that back into their companies, and you're going to see a lot more actually coming to Europe next year. I also true multicloud would be phenomenal. I would love that if you could actually glue those platforms together, per se. That's really what I'm looking for. But also security. I think security, there needs to be a security seg. We talked to customers earlier. That's something they want to see. I think that that needs to be something that's brought to the table. >> That's awesome. My view is very simple. You know I think they've done a good job in CNCF and Linux Foundation, the team, building the ecosystem, keeping the governance and the technical and the content piece separate. I think they did a good job of showing the future state that we'd like to get to, which is true multicloud, workload portability, those things still out of reach in my opinion, but they did a great job of keeping the tight core. And to me, when I hear words like defacto standard I think of major inflection points where industries have moved big time. You think of internetworking, you think of the web, you think of these moments where that small little tweak created massive new brands and created a disruptor enabler that just created, changed the game. We saw Cisco coming out of that movement of IP with routers you're seeing 3Com come out of that world. I think that this change, this new little nuance called Kubernetes is going to be absolutely a defacto standard. 
I think it's definitely an inflection point and you're going to see startups come up with new ideas really fast in a new way, in a new modern global architecture, new startups, and I think people are going to be blown away. I think you're going to see fast rising growth companies. I think it's going to be an investment opportunity whether it's token economics or a venture backer private equity play. You're going to see people come out of the wood work, real smart entrepreneur. I think this is what people have been waiting for in the industry so I mean, I'm just super excited. And so thanks for coming on. >> Thank you for everything you do for the community. I think you truly extract the signal from the noise. I'm really excited to see you keep coming to the show, so it's really awesome. >> I appreciate your support, and again we're co-developing content in the open. Lauren great to host with you this week. >> Thank you, it's been awesome. >> And you got a great new venture, high five there. High five to the founder of KubeCon. This is theCUBE, not to be confused with KubeCon. And we're theCUBE, C-U-B-E. I'm John Furrier, thanks for watching. It's a wrap of day two global coverage here exclusively for KubeCon 2018, CNCF and the Linux Foundation. Thanks for watching. (techno music)
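
The "extending the Kubernetes API" trend Jacks describes usually shows up in practice as a custom resource definition (CRD) plus an operator that reconciles it, while applications simply create instances of the new type through the ordinary Kubernetes API. A minimal sketch of what that can look like with the official Python client, not taken from the interview: the example.com/Database resource, its fields, and its plural name are hypothetical, and a CRD plus controller for it are assumed to already be installed in the cluster.

```python
# A hypothetical "Database" custom resource in the "example.com" group; the CRD
# and the operator that reconciles it are assumed to already be installed
# (for example, built with Kube Builder or the Operator Framework).
from kubernetes import client, config


def create_database(name: str, replicas: int, namespace: str = "default") -> dict:
    config.load_kube_config()  # local kubeconfig; in-cluster config also works
    api = client.CustomObjectsApi()

    # The custom resource itself is just a structured document; the operator
    # watching this kind is what turns it into Pods, Services, volumes, etc.
    body = {
        "apiVersion": "example.com/v1",
        "kind": "Database",
        "metadata": {"name": name},
        "spec": {"replicas": replicas, "storage": "10Gi"},
    }
    return api.create_namespaced_custom_object(
        group="example.com",
        version="v1",
        namespace=namespace,
        plural="databases",
        body=body,
    )


if __name__ == "__main__":
    create_database("demo-db", replicas=3)
```

On the controller side, tools like Kube Builder typically generate the matching types and reconciliation loop, which is what makes a custom resource feel native to kubectl and the rest of the tooling.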

Published Date : May 3 2018

Liz Rice, Aqua Security & Janet Kuo, Google | KubeCon + CloudNativeCon EU 2018


 

>> Announcer: Live from Copenhagen, Denmark, it's theCUBE. Covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. >> Hello, everyone. Welcome back to theCUBE's exclusive coverage here in Copenhagen, Denmark for KubeCon 2018, part of the CNCF Cloud Native Compute Foundation, which is part of the Linux Foundation. I'm John Furrier, your host. We've got two great guests here, we've got Liz Rice, the co-chair of KubeCon and CloudNativeCon, kind of a dual naming because it's Kubernetes and it's Cloud Native and also technology evangelist at Aqua Security. She's co-chairing with Kelsey Hightower who will be on later today, and CUBE alumni as well, and Janet Kuo who is a software engineer at Google. Welcome to theCUBE, thanks for coming on. >> Yeah, thanks for inviting us. >> Super excited, we have a lot of energy even though we've got interviews all day and it's kind of, we're holding the line here. It's almost a celebration but also not a celebration because there's more work to do with Kubernetes. Just the growth of the CNCF continues to hit some interesting good performance KPIs on metrics. Growth's up on the membership, satisfaction is high, Kubernetes is being called a de facto standard. So by all kind of general qualitative metrics and quantitative, it's doing well. >> Lauren: It's doing great. >> But it's just the beginning. >> Yeah, yeah. I talked yesterday a little bit in, in the keynote, about project updates, about how Kubernetes has graduated. That's a real signal of maturity. It's a signal to the end-user companies out there that you know, the risk, nothing is ever risk-free, but you know, Kubernetes is here to stay. It's stable, it's got stable governance model, you know, it's not going away. >> John: It's working. >> It's going to continue to evolve and improve. But it's really working, and we've got end users, you know, not only happy and using it, they're prepared to come to this conference and share their stories, share their learnings, it's brilliant. >> Yeah, and Janet also, you know, you talk about China, we have announcement that, I don't know if it's formally announced, but Shanghai, is it out there now? >> Lauren: It is. >> Okay, so Shanghai in, I think November 14th, let me get the dates here, 14th and 15th in Shanghai, China. >> Janet: Yeah. >> Where it's going to be presented in either English or in Chinese, so it's going to be fully translated. Give us the update. >> Yeah, it will be fully translated, and we'll have a CFP coming soon, and people will be submitting their talks in English but they can choose to present either in English or Chinese. >> Can you help us get a CUBE host that can translate theCUBE for us? We need some, if you're out there watching, we need some hosts in China. But in all seriousness, this is a global framework, and this is again the theme of Cloud Native, you know. Being my age, I've seen the lift and shift IT world go from awesome greatness to consolidation to VMwares. I've seen the waves. But this is a different phenomenon with Cloud Native. Take a minute to share your perspectives on the global phenomenon of Cloud Native. It's a global platform, it's not just IT, it's a global platform, the cloud, and what that brings to the table for end users. >> I think for end users, if we're talking about consumers, it actually is, well what it's doing is allowing businesses to develop applications more quickly, to respond to their market needs more quickly. 
And end users are seeing that in more responsive applications, more responsive services, improved delivery of tech. >> And the businesses, too, have engineers on the front lines now. >> Absolutely, and there's a lot of work going on here, I think, to basically, we were talking earlier about making technology boring, you know, this Kubernetes level is really an abstraction that most application developers don't really need to know about. And making their experience easier, they can just write their code and it runs. >> So if it's invisible to the application developer, that's the success. >> That's a really helpful thing. They shouldn't have to worry about where their code is running. >> John: That's DevOps. >> Yeah, yeah. >> I think the container in Kubernetes technology or this Cloud Native technology that brings developer the ability to, you know, move fast and give them the agility to react to the business needs very quickly. And also users benefit from that and operators also, you know, can manage their applications much more easily. >> Yeah, when you have that abstraction layer, when you have that infrastructure as code, or even this new abstraction layer which is not just infrastructure, it's services, micro-services, growth has been phenomenal. You're bringing the application developer into an efficiency productivity mode where they're dictating the business model through software of the companies. So it's not just, "Hey build me something "and let's go sell it." They're on the front lines, writing the business logic of businesses and their customers. So you're seeing it's super important for them to have that ability to either double down or abandon quickly. This is what agile is. Now it's going from software to business. This, to me, I think is the highlight for me on this show. You see the dots connecting where the developers are truly in charge of actually being a business impact because they now have more capability. As you guys put this together and do the co-chair, do you and Kelsey, what do you guys do in the room, the secret room, you like, "Well let's do this on the content." I mean, 'cause there's so much to do. Take us through the process. >> So, a little bit of insight into how that whole process works. So we had well over 1,000 submissions, which, you know, there's no, I think there's like 150 slots, something like that. So that's a pretty small percentage that we can actually accept. We had an amazing program committee, I think there were around 60 people who reviewed, every individual reviewer looked at a subset. We didn't ask them to look at all thousand, that would be crazy. They scored them, that gave us a kind of first pass, like a sort of an ability to say, "Well, anything that was below average, "we can only take the top 15%, "so anything that's below average "is not going to make the cut." And then we could start looking at trying to balance, say, for example, there's been a lot of talk about were there too many Istio talks? Well, there were a lot of Istio talks because there were a lot of Istio submissions. And that says to us that the community wants to talk about Istio. >> And then number of stars, that's the number one project on the new list. I mean, Kubeflow and Istio are super hot. >> Yeah, yeah, Kubeflow's another great example, there are lots of submissions around it. 
We can't take them all but we can use the ratings and the advice from the program committee to try and assemble, you know, the best talks to try and bring different voices in, you know, we want to have subject matter experts and new voices. We want to have the big name companies and start-ups, we wanted to try and get a mix, you know. A diversity of opinion, really. >> And you're a membership organization so you have to balance the membership needs with the content program so, challenging with given the growth. I mean, I can only imagine. >> Yeah, so as program co-chairs, we actually have a really free hand over the content, so it's one of the really, I think, nice things about this conference. You know, sponsors do get to stand on stage and deliver their message, but they don't get to influence the actual program. The program is put together for the community, and by doing things like looking at the number of submissions, using those signals that the community want to talk about, I hope we can carry on giving the attendees that format. >> I would just say from an outsider perspective, I think that's something you want to preserve because if you look at the success of the CNCF, one thing I'm impressed by is they've really allowed a commercial environment to be fostered and enabled. But they didn't compromise the technical. >> Lauren: Yeah. >> And the content to me, content and technical tracks are super important because content, they all work together, right? So as long as there's no meddling, stay in your swim lane, whatever, whatever it is. Content is really important. >> Absolutely, yeah. >> Because that's the learning. >> Yeah, yeah. >> Okay, so what's on the cut list that you wish you could have put back on stage? Or is that too risque? You'll come back to that. >> Yeah. >> China, talk about China. Because obviously, we were super impressed last year when we went to go visit Alibaba just to the order of magnitude to the cultural mindset for their thinking around Cloud Native. And what I was most impressed with was Dr. Wong was talking about artistry. They just don't look at it as just technology, although they are nerdy and geeky like us in Silicon Valley. But they really were thinking about the artistry 'cause the app side of it has kind of a, not just design element to the user perspective. And they're very mobile-centric in China, so they're like, they were like, "This is what we want to do." So they were very advanced in my mind on this. Does that change the program in China vis a vis Seattle and here, is there any stark differences between Shanghai and Copenhagen and Seattle in terms of the program? Is there a certain focus? What's the insight into China? >> I think it's a little early to say 'cause we haven't yet opened the CFP. It'll be opening soon but I'm fully expecting that there will be, you know, some differences. I think the, you know, we're hoping to have speakers, a lot more speakers from China, from Asia, because it's local to them. So, like here, we tried to have a European flavor. You'll see a lot of innovators from Europe, like Spotify and the Financial Times, Monzo Bank. You know, they've all been able to share their stories with us. And I think we're hoping to get the same kind of thing in China, hear local stories as well. >> I mean that's a good call. I think conferences that do the rinse and repeat from North America and just slap it down in different regions aren't as effective as making it localized, in a way. >> Yeah. >> That's super important. 
>> I know that a lot of China companies, they are pretty invested pretty heavily into Kubernetes and Cloud Native technology and they are very innovative. So I actually joined a project in 2015 and I've been collaborating with a lot of Chinese contributors from China remotely on GitHub. For example, the contributors from Huawei and they've been invested a lot in this. >> And they have some contributors in the core. >> Yeah, so we are expecting to see submissions from those contributors and companies and users. >> Well, that's super exciting. We look forward to being there, and it should be excellent. We always have a fun time. The question that I want to ask you guys now, just to switch gears is, for the people watching who couldn't make it or might watch it on YouTube on Demand who didn't make the trip. What surprised you here? What's new, I'm asking, you have a view as the co-chair, you've seen it. But was there anything that surprised you, or did it go right? Nothing goes perfect. I mean, it's like my wedding, everything happens, didn't happen the way you planned it. There's always a surprise. Any wild cards, any x-factors, anything that stands out to you guys? >> So what I see from, so I attend, I think around five KubeCons. So from the first one it's only 550 people, only the small community, the contributors from Google and Red Hat and CoreOS and growing from now. We are growing from the inner circle to the outside circle, from the just contributors to also the users of it, like and also the ecosystem. Everyone that's building the technology around Cloud Native, and I see that growth and it's very surprising to me. We have a keynote yesterday from CERN and everyone is talking about their keynote, like they have I think 200 clusters, and that's amazing. And they said because of Kubernetes they can just focus on physics. >> Yeah, and that's a testimonial right there. >> Yeah. >> That was really good stories to hear, and I think maybe one of the things that surprises me, it sort of continues to surprise me is how collaborative, it's something about this kind of organization, this conference, this whole kind of movement, if you like. Where companies are coming in and sharing their learnings, and we've seen that, we've seen that a lot through the keynotes. And I think we see it on the conference floor, we see it in the hallway chat. And I think we see it in the way that the different SIGs and working groups and projects are all, kind of, collaborating on problem solving. And that's really exciting. >> That's why I was saying earlier in the beginning that there's a celebration amongst ourselves and the community. But also a realization that this is just the beginning, it's not a, it's kind of like when you get venture funding if you're a start-up. That's really when it begins, you don't celebrate, but you take a little bit of a pause. Now my personal take only to all of the hundreds of events we do a year is that I that think this community here has fought the hard DevOps battle. If you go back to 2008 timeframe, and '08, '09, '10, '11, '12, those years were, those were hyper scale years. Look at Google, Facebook, all the original DevOps engineers, they were eating glass and spitting nails. It was hard work. And it was really build your own, a lot of engineering, not just software development. So I think this, kind of like, camaraderie amongst the DevOps community saying, "Look, this is a really big "step up function with Kubernetes." Everyone's had some scar tissue. 
>> Yeah, I think a lot of people have learned from previous, you know, even other open source projects that they've worked on. And you see some of the amazing work that goes into the kind of, like, community governance side. The things that, you know, Paris Pittman does around contributor experience. It's so good to see people who are experts in helping developers engage, helping engineers engage, really getting to play that role. >> There's a lot of common experiences for people who have never met each other because there's people who have seen the hard work pay with scale and leverage and benefits. They see it, this is amazing. We had Sheryl from Google on saying, "When I left Google and I went out into the real world, "I was like, oh my God, "they don't actually use Borg," like, what? "What do they, how do they actually write software?" I mean, so she's a fish out of water and that, it's like, so again I think there's a lot of commonality, and it's a super great opportunity and a great community and you guys have done a great job, CNCF. And we hope to nurture that, the principles, and looking forward to China. Thanks for coming on theCUBE, we appreciate it. >> Yeah. >> Okay we're here at CNCF's KubeCon 2018, I'm John Furrier, more live coverage. Stay with us, day two of two days of CUBE coverage. Go to thecube.net, siliconangle.com for all the coverage. We'll be back, stay with us after this short break.

Published Date : May 3 2018


Yaron Haviv, iguazio & Doug Davis, IBM | KubeCon + CloudNativeCon 2018


 

>> Presenter: Live from Copenhagen, Denmark, it's the Cube. Covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation, and its ecosystem partners. >> Well, welcome back everyone, we're live here with the Cube in Copenhagen, Denmark, for KubeCon 2018 Europe, via the CNCF, the Cloud Native Computing Foundation, part of the Linux Foundation. I'm John Furrier, my co-host Lauren Cooney here this week. And up next, Yaron Haviv, the founder and CTO of Iguazio, and Doug Davis, who is the co-chair of the serverless working group in the CNCF, as well as a developer advocate for IBM, IBM Cloud. Great to see you, welcome to the Cube. >> Thank you. >> Thanks. >> Thanks for coming in. So love the serverless work, and want to dig into that with a bunch of questions. So, super important trend as we see with serverless functions, and all the good stuff that's going on, programmable infrastructure. So I want to dig into that. But first, Yaron, I want to get into what's going on with the business, what's new with you? Iguazio, I saw you're on the sponsorship list here, you're doing a lot of work. You have some news as well. What's going on at KubeCon Europe for you? >> Yeah, so we're expanding on the business side very nicely, taking more momentum, and there's this strength towards edge analytics, edge cloud, people starting to understand that central cloud is not the only way to build clouds. We're also progressing nicely on our serverless framework, called Nuclio. It was only published maybe eight months ago, and it's already made 2000 stars on GitHub, you know, users. We've got some quotes around the production version of that, including a strong partnership with Azure, on being able to run the same functions in Azure and in the cloud in a joint development effort, as well as customers actually using it to build real-time analytics use cases, with development in the cloud and deployment in different locations. >> Our audience knows you well, you've been on the Cube many times. You also write for us, as well as other blogs, with your opinion pieces and commentary. It's always edgy, and strong, and right on the money. I want to ask you your thoughts on serverless, because you were there from day one, I remember the conversation. It wasn't called serverless, we were talking about resource pools and looking at cloud computing, pontificating about, potentially, what Kubernetes and orchestration was going to look like. It's happening. So, are you happy with the progress of the industry, performance of the tech stack? What's your thoughts on serverless today, state of the union? What's your opinion? >> I think it's progressing nicely. I think many people call almost everything serverless now. You have serverless databases, you have serverless everything. I think serverless will become, more and more, a feature of a platform, not necessarily a thing. But, like, Salesforce will have serverless functions, Wix will have serverless functions, for their own stuff. Obviously cloud platforms, analytic platforms, et cetera. So there'll be maybe a family of generic ones, and a family of platform-specific ones that are more use case oriented. >> Does that connect with your business plan for Iguazio? Are you evolving with it? How are you navigating those waters on the adoption side? >> So, you know, I'm sort of trying to be inclusive, I think there's room for more than one serverless framework. There's also OpenWhisk, and OpenFaaS, and a few of those.
Our focus is mainly real-time analytics and high performance data processing. Yes, we can also do other things, but maybe we won't invest too much in some features that are more front-end oriented, or stuff like that. >> John: So you're staying focused on the core. >> Yes, and on the other hand, other people can deal with the front-end; we'll focus on HTTP, and business logic, and things like that. Most of the frameworks don't have the same capabilities as Nuclio, like real-time stream distribution, real-time, low latencies, all that stuff. So, I think there's room for multiple frameworks, and that's also part of the relationship with Azure. Azure have their own product, which has very good integration with the Azure stack and the Azure components. On the other hand, in real-time analytics and IoT, Nuclio is stronger. So their interest is, rather than saying, no, we'll choose just one horse, why don't we enable the market, and allow the people the choice in solution. >> That's great. On IBM's side, Doug, I want to get your thoughts on the working group, as well as IBM. You guys have done a lot of open source, IBM well known in the Linux history books, as we know. And now very active again, continuing that mission, congratulations, and thanks for doing that. But the serverless working group. This is a broader scope now, can you just give us some color on the commentary around how that's evolving, because you guys have a lot of blue chip customers. Cloud Foundry just did a survey, I was talking to Abby Kearns yesterday about the results that came back, mainstream tech, not middle of the country, but they heard about Kubernetes like, what's Kubernetes? So you have people going, okay, I've got a job to do, but now Kubernetes has arrived, this is a key part of a micro-services focus. >> Right. Yeah, and so the way the serverless group got started was, about a year ago the CNCF TOC, the technical oversight committee, decided serverless is kind of a new technology, we want to figure out what's going on in that space, and so they started up a working group. And our job wasn't to really decide what to do about it yet, it was to sort of give us the landscape of what's going on out there, what are people doing? What does serverless even mean, relative to functions as a service, or even the other aaS's, and stuff like that. What does a serverless framework generally look like? What do people use it for? Use cases, and stuff like that. And then at the end of that we produced a white paper with our results, as well as a landscape spreadsheet, to say, of all the various technologies out there in that space, who's doing what. Without trying to pick winners, just saying what's there. And then we ended with a set of recommendations in terms of what possible next steps the CNCF could do in this space, with an eye towards building interoperability more than anything else, because that's what, really, we care about. We don't want vendor lock-in and all the other good stuff. And so we had a set of recommendations, and one of the main ones was, two main things, one was function signatures, which was a very popular one, but we decided to focus on eventing first, because we thought that might be an easier fruit to pick off the tree first. And so we were going to focus on the formats, or metadata, of an event as it transfers between systems.
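A quick aside on that point about event metadata, before the conversation continues: the idea is easiest to see with a small sketch. The snippet below is illustrative only; the camelCase attribute names echo the early 0.1-era CloudEvents drafts rather than quoting the spec, and the event type, source, and endpoint are made up for the example.

```python
# Illustrative sketch only: an event wrapped in a CloudEvents-style envelope.
# Attribute names loosely follow the early 0.1-era drafts (camelCase) and may
# not match the final specification; the endpoint and event type are invented.
import json
import urllib.request

event = {
    "cloudEventsVersion": "0.1",                 # spec version the producer used
    "eventType": "com.example.sensor.reading",   # hypothetical event type
    "source": "/factories/copenhagen/line-7",    # hypothetical producer URI
    "eventID": "a4c1e3b0-0001",
    "eventTime": "2018-05-02T10:15:00Z",
    "contentType": "application/json",
    "data": {"temperature_c": 21.4},             # the payload the function cares about
}

# A receiver that understands the envelope can route on the metadata without
# inspecting "data", much like HTTP routing on headers.
req = urllib.request.Request(
    "http://localhost:8080/events",              # hypothetical receiving endpoint
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    urllib.request.urlopen(req, timeout=2)
except OSError as exc:                           # nothing listening is fine for a sketch
    print("no receiver running:", exc)
```

The payoff is exactly what gets described next: any platform that understands the envelope can pass events back and forth on those common attributes without ever understanding the payload.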
And so from the serverless working group we created a sort of little sub-group, CloudEvents, within our working group, to focus on creating a specification around what the metadata around an event would look like, just so we can get some commonality. That way, at least the infrastructure between the two systems can transfer the events back and forth, much in the same way the HTTP layer doesn't have to understand the body of the message, but can look at common headers and know how to route it properly. Same kind of thing with eventing. And again, this is all about trying to get interoperability, and portability for applications, and users more than anybody else. And so that's kind of where our focus has been. How can we help the end user not get locked into one platform, not get locked into one solution, and make their life easier overall. >> Great. Where are you now with that? Is it running? Is it-- >> All done. No. >> Oh you're complete, yeah (laughs) >> Doug: But we did that last week. No, actually as of last week though, we just released our first version, 0.1. It's a very, very basic thing, and people might look at it and say, what's the big deal? But even with that simple little thing we've been able to get some level of interoperability between the various platforms. And if people actually join, when is it? Friday 11 o'clock? >> Yaron: Yeah. >> We have a session where someone's going to demonstrate interoperability between, oh gosh, IBM, you guys, Microsoft. >> Google. >> VMware, Google. All the various companies involved in this thing. >> Love it, that's great. >> Huawei. >> Yeah. They're all going to be either sending or receiving events, using the CloudEvents format, to prove interoperability around the specification. So we're just at 0.1, we have some way to go, but that first step was huge, just to get agreement, and everybody to the table to agree. So it's been really fun. >> And it wasn't easy, it wasn't easy. And he's the peacemaker in the group. (laughs) I'm the troublemaker, he's the peacemaker. >> We have a lot of vocal people in the group, yes. (laughs) >> We're not pointing at anyone. >> No, never. >> Important first step obviously, commonality, and having some sort of standardization kind of thinking. >> Doug: Yes. >> Yaron: Don't use the standard word. There are people allergic to that. >> Well yeah, the standards bodies and whatnot, but in terms of the community work going on, this is super important. What's the impact of that? Obviously it's a small step, but a big step, right? So, what's it going to impact? What's next, what's coming next now that you've got the metadata, and you've got the interoperability, what's next? >> Well, obviously we need to finish it up, because 0.1 is obviously just the first step. As I said, I think beyond that people are really itching to do function signatures. Because I think if you can get the event format coming in to be somewhat similar, and then you can get portability of moving your function from one platform to another, with hopefully minimal changes from a function signature point of view, you're a long way there towards getting portability for people. And I think that's probably the next step we're going to be looking at. >> What's the technical case from a commercial entity like yourself, who's in business to make money, obviously you have a business to run. As you build out your architecture, where is this going to be applied for you? What's the impact of this project to your product?
>> So beyond my strong religion around open APIs, and you've seen the blogs I've written about it, our interest is twofold. First, we're not the market leader, Amazon is the market leader, et cetera. So if we have a better technology, and things are standard, it's easier for customers to move. Second is, we believe in interoperability closer to the data, closer to where the processing is, especially when 5G is going to evolve and we're going to see bottlenecks between metro locations. Our sales pitch is, go develop in the cloud, and then push it, you know, the digital twin model. This is exactly what we're demonstrating with Azure. You could develop in Azure, our Nuclio functions, and deploy in a factory. So it may not be the same platform, it may not be the same serverless framework. So having the ability to run the same code in different frameworks or different platforms is very important. >> And IBM, you're doing a lot of work. OpenWhisk has been something that's gotten a lot of press and notoriety. What's up with you guys and open source? Obviously we see you guys out there doing a lot of studies and a lot of content, a lot of coding. What's new over on the IBM side of the house with serverless? >> From my point of view, I think probably the biggest thing is, we're leading the charge in putting OpenWhisk to run on top of Kubernetes. And I think what's interesting about that is we're going to see, probably, some changes to Kubernetes need to be made to get the better performance that we need. Because when OpenWhisk runs vanilla on top of, say, runC, or the Docker stuff, we have a lot more freedom there. Pausing containers, stuff like that. Stuff you can't do in Kubernetes. We're probably going to see some more pressure on Kubernetes to add some more features, to get the kind of performance numbers we need going forward. >> And scale too, is important to understand. I was just talking about the keynotes earlier with another guest, and CERN is up there. They have a thousand nodes, it's not massive numbers yet, at scale, I mean Amazon are the big clouds, you guys have clouds. You've got a lot of nodes, so it's a lot more scale going on in the cloud as Kubernetes starts to get its footing. >> How do you explain Kubernetes, how do both of you guys explain Kubernetes to the IT transformation group out there, that's going cloud operations? >> So what we've seen, because we're also selling an appliance, a fully integrated solution, is that people in the enterprise don't necessarily want to understand the low levels of Kubernetes. And actually serverless is a nice way for doing that. If you look at the new Nuclio dashboard, you just go, you write some code, you click deploy, it auto scales, you don't need to think about the underlying kubectl, the underlying networking. It's all done there for you. And I think what you see in the trend in the industry, some people call it serverless, some people call it other things, is more and more abstractions, where users will deploy code, will deploy containers, and some frameworks underneath will deal with the high availability, elasticity, all that. I think that's what enterprise customers are looking for. Not everyone is eBay, and Google, and Netflix. >> John: Your thoughts? >> What I think is interesting, and I agree with what you said, is that you actually have a wider range of people, right.
You have some people who think Kubernetes, as you said, is a nice abstraction layer, you don't have to get into the nitty gritty if you don't need to. But Kubernetes does allow you to get under the covers and twiddle those lower level bits if you actually need to. I think that's one of the things that... People who start out with Docker, they like it, it's so simple to use, and it's wonderful, and they love it. But they found it a little bit limiting, because it was too opinionated, or it didn't give you access to things under the covers. Kubernetes, I think, is trying to find that right balance between the two, and I think for the most part they kind of hit it. There's a little bit more of a learning curve, because it's not quite as user friendly as Docker is. But once you get over that learning hump, all the flexibility it gives you, people seem to really, really like that. >> What are some of the things that people do under the covers, you mentioned some tweaks here and there. Is it policy based stuff? What's happening under the covers now that Kubernetes is getting its groove swing on? >> There is something called a custom resource definition. So for example, when we deploy a Nuclio, maybe OpenWhisk or others have it as well, essentially Nuclio becomes another resource that you can actually view when you're running the Kubernetes CLI, along with all the other things that manage its liveness, et cetera. So those are services that you get for free as a platform. But if you want your function to keep being alive you need to code your functions into the liveness API, the thing that monitors it staying alive. So you're getting a generic service, but you need to work with it. >> Yeah, actually I'd go one step further with that and abstract it a little. Because obviously Kubernetes has a lot of knobs you can turn, a lot more than other platforms, like Docker has. But I think, for me, the biggest benefit of Kubernetes is the pluggability. Custom resource definitions are one of them. Ripping out schedulers, or whatever controllers you want, and replacing them with your own. That kind of flexibility to say, I don't have to leave the entire Kubernetes world just to run my own scheduler, or write the infrastructure around it, I can plug in my own. That's the kind of flexibility people seem to really, really like. That way they don't feel locked in, they can still play with part of the ecosystem, but get the flexibility and customization they need. >> Awesome, great commentary there. I want to get your thoughts on KubeCon 2018 Europe, for CNCF. Continuing to see growth in CNCF, fantastic to see. As the boat gets full of people, you've got to be the peacemaker if you're co-chair. As people want to start getting their claws into the projects, this imbalance on the community side, are you guys happy with the direction, obviously the success, and the visibility is increased. What's your take on the show here? What are you guys doing? What's going on around the event for you guys? >> So it only started today, but my impression, comparing it with the previous show in the U.S., is there are a lot more decision makers here. I don't know if it's the European culture of not funding every student to go to every show, or just the maturity of the ecosystem. But that's something I've noticed in the discussions I had with decision makers. And also, it's not like in the U.S.A., where everyone wants to build it their own way.
People here think about operationalizing solutions, so sometimes you need to take something that someone else already built and test. >> And what's the conversations like, that you're having? Is it architecture? Is it deploying production workloads? >> So for us it's a lot about use cases, because we're doing things in a very different way. We're doing some nice demos on how, we're running real-time analytics with the sample database as the core, and we're showing how it's equivalent to another solution that they may build. And that immediately clicks. The other aspect is really, there is so much technology, but we need someone to wrap it up for us as a package solution. >> Doug, your thoughts. First of all I love your shirt, it says code with all the words in the community. >> Doug: Yeah, it's one of my favorite shirts. I like it. >> Love that shirt. I'm just looking at it like, all these questions are popping in my head. What's your plan at the show here? What's your goal, what are you guys doing, what conversations are you hearing in the hallways? >> Well, obviously being from IBM, we just promote IBM as much as we can. But beyond that, really talk about interoperability around what we're doing here, and make sure people understand that we're not here to necessarily sell our products, which we obviously want to do. We want to make sure that we do it in a way that gives people choice. And that's why we have the serverless working group, the cloud events spec. It's all about giving everybody the choice to move from one platform to another, to get their job done. As much as we want people to buy our stuff, if the customer isn't happy in getting what they need, then we're all going to lose. >> And these projects are super important to get the solidarity around these, quote, standards. >> And just to follow on your previous question about the conference, and stuff that we'd like. Obviously it's great that it's growing so much, but what I really like about this conference, beyond some other ones that I've seen is, a lot of the other ones tend to have more marketing flair to them. And obviously there's a little bit of that here, people are promoting their stuff, but I love the fact that most of the stuff that I'm doing here aren't in the sessions. Because the sessions are great and interesting, but it's the hallway chatter, and interacting with people face to face, and not just to meet them, to actually have real technical, deep discussion with them, here at the conference, because everybody's here you can do that much better face to face than you can over a Zoom call, or something else. The productivity from that level is just astronomical, I love it. >> Yeah, I totally agree. And one thing I would add, just my observation, interviews in the hallways, is that we're living, and we talk about this on the Cube all the time, a modern software architectures here. And it's got some visibility around it, it's not filled in yet, but I think there's clear visibility. Cloud, micro-service, interoperability, portability, pretty clear. And I think people are engaged, people are excited. So you have the progressive new guard coming in, on board. Great job. Thanks for coming on the cube, we appreciate that. >> Thank you. >> Thank you. >> Iguazio and IBM, here on the Cube, breaking down KubeCon 2018 Europe. More live coverage, stay with us, we'll be right back after this short break. (electronic music)
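A technical footnote on the custom resource definition point raised in this conversation: the pattern of "a function becomes just another Kubernetes resource" looks roughly like the sketch below. The API group, kind, and spec fields are hypothetical stand-ins rather than Nuclio's actual CRD schema, and the sketch assumes kubectl is installed and pointed at a cluster where such a CRD has already been registered.

```python
# Rough sketch: registering an instance of a hypothetical "Function" custom
# resource so it shows up in the Kubernetes CLI like any built-in object.
# The group, kind, and spec fields below are invented for illustration; a real
# serverless framework defines its own CRD schema.
import json
import subprocess

function_resource = {
    "apiVersion": "serverless.example.com/v1",  # hypothetical API group/version
    "kind": "Function",                         # hypothetical custom kind
    "metadata": {"name": "hello", "namespace": "default"},
    "spec": {
        "runtime": "python:3.6",
        "handler": "main:handler",
        "replicas": 1,
    },
}

# kubectl accepts JSON manifests on stdin; after this, the object can be
# listed and inspected with `kubectl get functions`.
subprocess.run(
    ["kubectl", "apply", "-f", "-"],
    input=json.dumps(function_resource).encode("utf-8"),
    check=True,
)
```

A controller or operator watching for objects of that kind is then what handles deployment, scaling, and the liveness checks mentioned above.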

Published Date : May 2 2018


Bassam Tabbara, Upbound | KubeCon + CloudNativeCon 2018


 

>> Narrator: Live, from Copenhagen, Denmark. It's theCUBE. Covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation, and its ecosystem partners. >> Live in Copenhagen, Denmark, of KubeCon 2018 Europe. I'm John Furrier with Lauren Cooney, my cohost. Exciting startup news here. Obviously, it's a growing ecosystem, all the big names are in it, but, great ecosystem of startups. One launching here, we have Bassam Tabbara, who's the founder and CEO of Upbound, here on theCUBE. Website is going to go live in a few hours. We're here for a quick preview. Thanks for joining theCUBE today. Appreciate it. >> Oh, my pleasure. >> So, you got a company. No one knows about it. (Bassam laughing) Now they're going to hear about it. What are you guys doing? What is Upbound about? And what are you doing? >> So Upbound is going after the problem of multi-cloud. So the way to think about it is that, you know, we're seeing now the ubiquity of Kubernetes, and if you think about what Kubernetes has done, it has solved the problem of taking many machines and making them into one, and doing all the scheduling and management and becoming the operating system of a cluster, right? Upbound is the next level up. Upbound is essentially taking multiple clusters and solving a similar set of problems around running distributed systems, distributed services, global services across clusters. It was really interesting to hear CERN this morning talking about how their managing 210 clusters and you think about 210 clusters, if you would talk about 210 machines, you'd be like, "Wow, that's a lot of machines", right? This is 210 clusters, and so a similar set of problems exist at a higher level, and that's the focus of Upbound. >> So, you guys are announcing a financing, $9 million from an investment Series A financing. Google Ventures as a lead and a variety of industry super-reputable investors. What was the value proposition pitch? What got Google Ventures excited? What was the core value, technology, business model? Give us the deck. >> My understanding of their investment thesis, and it's hard to claim that you always understand this. Essentially, the next level of infrastructure problem is essentially around multi-cloud and enterprises are managing many clusters today, many different cloud environments, whether it's across regions of a public cloud vendor or it's across public cloud vendors or across hybrid boundaries, on premise verse private cloud versus public cloud. It's become a challenge to run things across clusters and there's a lot of interesting scenarios to be solved at that level. That was the premise of the investment. >> John: So, are you guys a management software piece? Are you guys technology? What's the product? >> We're essentially building a service that helps companies run across cloud environments. And it's based on Kubernetes, 'cause Kubernetes is an amazing platform to build on top of, and we've learned that through our investment in Rook. You know, it's a great extension points and awesome community to be working with, We're offering a service for multi-cloud. >> Right, is it going to be, some shape of it, going to be open source or what are you looking at in particular? >> Yeah, obviously there'll be parts of it that are open source. We're a big open-source company. The team that's in Upbound, that's actually the team that's behind Rook, and Rook is a CNCF project now and all open source, obviously. And so, yeah, we're definitely an open-source player. 
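As a side note, the "many clusters" problem Bassam describes shows up even in the simplest tooling: just getting one normalized view across clusters takes something like the sketch below. It assumes the official kubernetes Python client is installed and that each cluster is already a context in the local kubeconfig; it is a read-only illustration, not Upbound's product.

```python
# Minimal sketch: build one consolidated view of node counts across every
# cluster (context) in the local kubeconfig. Assumes the `kubernetes` Python
# client is installed and each cluster is reachable with current credentials.
from kubernetes import client, config

contexts, _active = config.list_kube_config_contexts()

for ctx in contexts:
    name = ctx["name"]
    # Build an API client scoped to this specific cluster/context.
    api = client.CoreV1Api(
        api_client=config.new_client_from_config(context=name)
    )
    nodes = api.list_node().items
    ready = sum(
        1
        for n in nodes
        for c in (n.status.conditions or [])
        if c.type == "Ready" and c.status == "True"
    )
    print(f"{name}: {len(nodes)} nodes, {ready} ready")
```

Everything beyond that read-only view (normalizing policy, users, and stateful workloads across those clusters) is the harder problem the rest of the conversation is about.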
>> Good. >> So you're exposed to the storage challenges with Rook and all the future kind of architecture. We just had Adrian Cockcroft on. we were both high fiving each other and celebrating that microservices is going to be a modern era. >> Bassam: Yes. >> How do you guys solve that problem? What is it going to be, the buyer going to be in a cloud architect? Is it going to be a storage person? Is it an ops person? Who's the target buyer of your service? Or user of your service? >> Well essentially people, DevOps people, that are managing multiple clusters today and understand the challenges around managing multiple clusters, no normalization of policies, separate users, separate user management, observability. All those things come up with a strategist, and of course, let's not forget, stateful workloads and managing state across environments is, I'd say, probably one of the harder problems. So, you know, the buyer is essentially somebody in DevOps, and then obviously, the CTO, CIO level gets involved at some point, but it's a draw. >> When you guys were forming the company. Obviously, with the Rook project, you were exposed to some of the pain points, you mentioned a few of them. What was the one pain point that jumped out at you the most and you said, "Hey, we can build a company around this"? >> The fact that most enterprises are now managing multiple cloud environments and they are completely independent. Anything that they try to normalize or do across them is... There's a human involved, or there's some homegrown script involved to actually run across clusters. And honestly, that's the same problem that people are trying to solve across machines, right? And that led to some, you know, the work that's happening around orchestration, Kubernetes, and others. It's only logical that we move up a level and solve similar set of problems. >> Yeah, I have a question about your service. Just, along the lines of, There are a lot of people coming into this market with, "We've got this integration solution that is multi-cloud," or, "We have this kind of API platform that can solve "for multi-cloud and run applications "cross multiple cloud platforms." >> Right. >> What is your differentiator? >> Yeah, so I mean, multi-cloud has become a thing now, as you've observed. I think the power of what we're doing is that we're building a control plane based on Kubernetes and the great work that's happening in the Kubernetes space around multi-cluster and federation and everything else, and offering a set of services that layer on top of that that solve some critical problems across clouds, including stateful workloads and migration portability across clouds. And essentially, inherently building this on the Kubernetes platform and our experience with that and our experience with the community around Kubernetes, I think is differentiated. >> So that leads me to our next question. So your pricing model, you said that you were going to be open source. So is that control plane going to be open source and then some services are going to be bucketed into-- >> Yeah, it's probably too early for us to talk about the pricing model, but think of it as a service a manage service for multi-cloud. >> Great. >> You can imagine that... Open source is actually quite compatible with a service play. >> Yes it is. >> And $9 million, that's a good chunk of cash. Congratulations. Use of funds? Obviously, hiring out of the gate? What's your priorities on the use of funds on the first round of funding? 
>> So we're going to accelerate hiring, we're going to accelerate delivering the service, and that's, you know, this is the fun part of a startup. (John laughing) This is my second one. It's the next 18 months is all building and growing and doing product. >> John: And what's your five-year pro-forma revenue projection? (laughing) You made it up on 30C. >> Let me pull up my spreadsheet. (hosts laughing) >> I love those VC slides, "Yeah, we're makin up year five." No, but you want to have some growth, so the trend is your friend. Here, it's multi-cloud, >> That's right. >> And obviously the growth of microservices. Obviously, right. >> That's right. >> Anything else out there that's on your mind observationally, looking at the market? As you start coming, certainly you're doing a lot of due diligence on the market. What are your risk factors? How you thinking about it? What are you looking at closely? How are you studying some of the trend data? >> I mean at some level, the way to think about this is Cloud Native is still at its infancy, right, despite all the amazing momentum that's building around it. I think, you know at some level, we use the term Cloud Native but it's really just cloud computing. I think the adoption cycle is going to be interesting, so that's something that I think about a lot. You know, how long will people kind of make transformative changes to what they're doing? But, I believe the power of open source and the community is that people are. I mean look at this conference. A lot of people are here, including-- >> No doubt open source is a good bet. >> That's right. >> I think the thing that we're watching, love to get your reaction to, and Lauren, you too, is that Stu Miniman, my cohost. He's not here, he's at the Dell EMC World event. We talk about this all the time around what's the migration going to look like from on-prem to cloud? Meaning, how's the on-prem is transferring to cloud ops? >> That's right. >> Right? So okay, perfect for your case, I think. What's the ratio, what's hybrid cloud going to look like when you have a true private cloud, true cloud environment on premise? >> Yeah. >> 'Cause this speaks to the multi-cloud trend because if I can have an on-premise operation, I can make-- >> Very much. >> Well you have to look at the applications too. I mean, that's critical because you've got these monolithic applications that have to be essentially changed and ported into different environments to become multi-cloud. There's heavy lifting there. >> Yeah, I think the interesting thing about what you're describing here is that it used to be that if you're running on premise you're using a completely different stack from say what you're running in public cloud, right? And so, not only was the choice about where you're hosting your compute and your networking and storage, but it was also a choice of stacks, right? Open stack or whatever you're running on premise, and then there was Amazon or others, right? What is happening now is that we're actually normalizing on stacks. So this whole movement around Kubernetes is essentially a way to say that there is now a common stack, regardless of where you're actually being deployed, right? The store is not always there and that's, but it'll get there, right? At some level, it gives people more choices about where they want to host, and in fact, if... Portability becomes more interesting 'cause you could move in and out of clouds, right? There are costs to doing that. 
Data gravity is a thing, but-- >> John: The containers are helpful. >> Containers are helpful, but, you know, that Amazon truck goes in one direction. (John chuckling) It is interesting to think about that. But at least it becomes possible for people to think about how to manage their infrastructure and how to manage their services across clouds. And, end result is it'll have more choices. >> Well, I think this community, you talked about on our intro today about portability is really what this community cares a lot about. >> Bassam: Very much. >> Choice and non-lock-in. >> Very much. It's amazing how many companies that we talk to that actually have a, like a strategies, you know, CTO, CIO-level down, around not getting locked in to any vendor. Yet, they are not able to fulfill that. >> Yeah, it's hard to talk about lock in when you actually don't even know what Cloud Native is. So, it's interesting discussion, and Adrian Cockcroft was just on from Amazon, and he and I were talking with Lauren about, that's a developer organization and management discussion. >> Yeah. >> First. >> Bassam: That's right. >> So you can't, you don't know what it is? How do you know what-- (Bassam laughing) >> Well, there's... >> A lock in looks like? You can't play chess if you don't know what checkmate looks like. >> Yeah, but the good news is that developers are high up on that food chain now and they're able to actually make these buy decisions. So I think that's going to be critical. >> Well congratulations on the financing. >> Bassam: Thank you. >> Love the company name, Upbound? >> Yes. >> Upbound. >> Bassam: Essentially going above the clouds. >> Above the clouds. >> Like it. >> Congratulations. Looking forward to tracking the progress. And great to have you on theCUBE. $9 million dollars in fresh financing. Upbound just scored a great deal for multi-cloud. Again, that's a great trend. Congratulations. More CUBE coverage here in a moment. Thanks for watching. Be right back, stay with us.

Published Date : May 2 2018


Michael Weiss & Shere Saidon, NASDAQ | PentahoWorld 2017


 

>> Narrator: Live from Orlando, Florida, it's theCube covering PentahoWorld 2017 brought to you by Hitachi Ventara. >> Welcome back to theCube's live coverage of PentahoWorld brought to you by Hitachi Ventara. My name is Rebecca Knight, I'm your host along with my co-host, Dave Vellante. We're joined by Michael Weiss, he is the senior manager at NASDAQ, and Shere Saidon, who is analytics manager at NASDAQ. Thanks so much for coming back to theCube, I should say, you're Cube veterans now. >> We are, at least I am. This is his first year, this is his first time at PentahoWorld. So, excited to bring him along. >> Okay so you're a newbie but you're a veteran so. (laughing) >> Great. So, tell us a little bit about what has changed since the last time you came on, which was 2015, back then? >> So the biggest thing that's happened in the past 18 months is we've launched seven new exchanges. Integrated seven new exchanges. We bought the ISE, the International Stock Exchange, which is three options markets. We just completed that integration in August. We've also bought the Canadian, CHI-X, the Canadian Exchange, which also had three equities markets, so we integrated them, and we went live with a dark pool offering for Goldman back in June. So now we operate a dark pool for Goldman Sachs, and we're looking to kind of expand that offering at this point. >> So you're just getting bigger and bigger. So tell our viewers a little bit how Pentaho fits into this. >> So Pentaho is the engine that kind of does all our analytics behind the scenes at post trade, right. So we do a lot of traditionally TL, where we're doing batch processing. In the back-end we're doing a little bit more with the Hadoop ecosystem leveraging things like EMR, Spark, Presto, that type of stuff, And Pentaho kind of helps blend that stuff together a little bit. We use it for reporting, we do some of the BA, we're actually now looking to have the data Pentaho generates plug in a little bit of Tableau. So, we're looking to expand it and really leverage that data in other ways at this point. Even doing some things more externally, doing more data offerings via Pentaho externally. >> So I got to do a NASDAQ 101 for my 13 year-old. Came up to me the other day and said, "Daddy, what's the NASDAQ index and how does it work?" Well, give us a 20 second answer. >> Michael: On the NASDAQ index? >> Yeah, what's the NASDAQ Index and how does it work? >> Probably the wrong person to answer that one but, the index is generally just a blend of various stocks. So the S&P 500 is a blend of different stocks, much like that the cues, are NASDAQ's equivalent of the S&P, right, so, we use a different algorithm to determine the companies that make up that blend, but it's an index just like at the S&P. >> They're weighted by market cap- >> Michael: Right, yeah. >> And that determines the number at the end- >> Michael: Correct. >> And it goes up and down based on what the stock's index. >> Right, and that's how most people know NASDAQ, right. They see the S&P went up by 5 points, The Dow went down by 3 and the NASDAQ went up by a point, right. But most people don't realize that NASDAQ also operates 27 exchanges worldwide, I think it is now. So, probably a little bit more, maybe closer to 32, but... >> So you mentioned that you're doing a dark pool for Goldman >> Michael: Yes. >> So that's interesting. We were talking off camera about HFT and kind of the old days, and dark pools were criticized at the time. 
Now Goldman was one of the ones shown to be honest and above board, but what does that mean the dark pool for your business and how does that all tie in? >> Michael: So, dark pools are isolated markets, right, so they don't necessarily interact with the NASDAQ exchange themselves, it's all done within the pool. You interact with only people trading on that pool. What NASDAQ has done is we took our technology and we now host it for Goldman so, we have I-NETs our trading system, so we gave them I-NET, we built all the surrounding solutions, how you manage symbols, how you manage membership. Even the data, we curate their data in the AWS. We do some Pentaho transformations for them. We do some analytics for them. And that's actually going to start expanding, but yeah, we've provided them an entire solution, so now they don't have to manage their own dark pool. And now we're going to look to expand that to other potential clients. >> Dave: So that's NASDAQ as a technology >> Yes. >> Dave: Provider. Very interesting. So I was saying, earlier, the Hong Kong Stock Exchange is basically closing the facility where they house humans, again another example of machines replacing humans. So the joining, well NASDAQ, kind of, but NYSE, London Stock Exchange, Singapore, now Hong Kong... Essentially, electronic trading. So, brings us to the sort of technology underpinnings of NASDAQ. Shere, maybe you can talk a little bit about your role, and paint a picture of the technology infrastructure. >> Yeah so I focus primarily on the financial side of corporate finance. So we leverage Pentaho to do a lot of data integration, allow us to really answer our business questions. So, previously it would take days to put basic reporting together, now you've got it all automated, or we're working towards getting it mostly automated, and it just answer the questions that we need. And no longer use our gut to drive decisions, we're using hard data. And so that's helped us instrumentally in a lot of different places. >> Dave: So, talk more about the data pipeline, where the data's coming from, how you're blending it, and how you're bringing it through the pipeline and operationalizing it. >> Yeah, so we've got a lot of different billing systems, so we integrate companies, and historically we've let them keep their billings systems. So just kind of bring it all together into our core ERP, seeing how quantities...and just getting the data, and just figuring out on the basic side, how much do we make from a certain customer? What are we making from them? What happens in different scenarios if they consolidate, or if they default? And some of the pipeline there is just blending it all together, normalizing the data, making sure it's all in the same format, and then putting it in a format where our executives or business managers can actually make decisions off of it. >> Well you're talking about the decision making process, and you said it's no longer gut, you're using data to drive your decisions, to know which direction is the right direction. How big a change is that, just culturally speaking? How has that changed? >> Yeah, it's huge, at least on our side, it's making us a long more confident in the decisions we're making. We're no longer going in saying, hey this is probably how we should do it. No, the numbers are showing us that this is going to pay off, and we stick to it and look at the hard facts, rather than what do we think is going to happen? 
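To make that "blending and normalizing" step concrete, here is a small hypothetical sketch of the kind of transformation being described. In practice this logic lives in Pentaho (PDI) jobs rather than a script, and every system name, column, customer, and exchange rate below is invented for illustration.

```python
# Hypothetical sketch: normalize two acquired billing systems into one schema
# and roll up revenue per customer. Column names, currencies, and values are
# invented; the real work happens in Pentaho (PDI) transformations.
import pandas as pd

ise_billing = pd.DataFrame({
    "member": ["Acme Trading", "Blue Harbor"],
    "fees_usd": [12500.0, 8300.0],
})

chix_billing = pd.DataFrame({
    "client_name": ["Acme Trading", "Maple Capital"],
    "fees_cad": [9100.0, 4400.0],
})

CAD_TO_USD = 0.78  # illustrative rate, not a real quote

# Normalize both feeds to a common schema: customer, revenue_usd.
normalized = pd.concat([
    ise_billing.rename(columns={"member": "customer", "fees_usd": "revenue_usd"}),
    chix_billing
        .assign(revenue_usd=lambda df: df["fees_cad"] * CAD_TO_USD)
        .rename(columns={"client_name": "customer"})[["customer", "revenue_usd"]],
])

# One consolidated answer to "how much do we make from a certain customer?"
print(normalized.groupby("customer", as_index=False)["revenue_usd"].sum())
```

That consolidated table is the sort of output that then feeds the executive reporting and the what-if questions mentioned above.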
>> So, talk a little bit about what you guys are seeing here, and you're doing a lot of speaking here, we were joking earlier, you're kind of losing your voice. You're telling your story, what kind of reactions you getting? Share with us the behind the scenes at the conference. >> I think at this conference you're seeing a lot of people kind of fall in line with similar ideas that we're trying to get to. Taking advantage more instead of your traditional MPPs, or your traditional relational databases, moving more towards this Hadoop ecosystem. Leveraging Spark, Presto, Flume, all these various new technologies that have emerged over the past two to five years, and are now more viable than ever. They're easier to scale, if you look at your traditional MPPs, like we're a big Redshift user, but every time you scale it there's a cost with that, and we don't necessarily need to maintain all that data all the time, so something in the Hadoop ecosystem now lets us maintain that data without all the unnecessary cost. I see a lot of more of that than I did two years ago, a lot more people are following that trend. I think the other interesting trend I've seen this week is this idea of becoming more cloud agnostic. Where do you operate, and how do you store your data should be irrelevant to the data processing, and I think it's going to be a tough nut to crack for Pentaho, or any vendor. But if you can figure out a way to either do some type of cloud parity, where you have support across all your services, but you don't have to know which service you deploy to when you design your pipelines, I think that's going to be huge. I think we're a little ways from that, but that's been a common theme this week as well, both private and your big three cloud providers right now, your Googles, your Azures, and your AWS. >> So when I asked you said cloud agnostic, that's great, good vision and aspiration. The follow up would be, am I correct that you don't see it as data location agnostic, right, you want to bring the cloud model to your data, versus try to force your data into a cloud? Or not necessarily? >> A lot of it I think is being driven by not wanting to be vendor locked in, so they want to have the ability to, and I think this is easier said than done, the ability to move your data to different cloud providers based on pricing or offerings, right, and right now going from AWS to Google to Azure would be a very painful process. So you move petabytes of data across, it's not cost efficient and all the savings you want to realize by moving to maybe a Google in the future, are not going to be realized cause of all the effort it's going to take to get there. >> Dave: We had CERN on earlier, and they were working on that problem... >> Yeah, it's not a trivial problem to solve, but if you can crack that, and you can then say hey I wanna...even if I have a service offering, Like our operating a dark pool for Goldman. We also have a market tech side, where we sell our trading platform and various solutions to other exchanges worldwide. If we can come up with a way to be able to deploy to any cloud provider, even on an on-prem cloud, without having to do a bunch of customizations each time, that would be huge, it would revolutionize what we do. We're, as our own company, starting to look at that, and talking with Pentaho, they're also... are going to eye that as a potential way to go, with abstractions and things like that, but it's going to take some time. >> We're you guys here yesterday for the keynotes? 
>> Michael: Saw some of the keynotes, yes. >> The big messaging, like every conference that you go to, is be the disruptor, or you're going to get disrupted. We talked earlier off camera... Trading volumes are down, so the way you traditionally did business is changing, and made money is changing. >> Michael: Right. >> We talked earlier about you guys becoming a technology provider, I wonder if you could help us understand that a little bit, from the standpoint of NASDAQ strategy, when we hear your CEOs talk, real visionary, technology driven transformations. >> Yeah, I think Adena's coming in is definitely looking at that as a trend, right? Trading volumes are down, they've been going down, they've kind of stabilized a little bit, and we're stable able to make money in that space, but the problem is there's not a ton of growth. We acquire the ISE, we acquire the CHI-X, we're buying market share at that point. So you increase revenue, but you also increase overhead in that way. And you can only do so many major acquisitions at a time, you can only do how many one billion dollar acquisitions a year before you have to call it a day. And we can look at more strategic, smaller acquisitions for exchanges, but that doesn't necessarily bring you the transformation, the net revenue you're looking for. So what Adena has started to look at is, how do we transform to more of a technology company? We're really good at operating exchanges, how do we take that, and we already have market tech doing it, but how do we make that more scalable, not just to the financial sector, but to your other exchanges, your Ubers or your StubHubs of the world? How do you become a service provider, or a platform as a service for these other companies, to come in and use your tech? So we're looking at how do we rewrite our entire platform, from trading to the back-end, to do things like: Can we deploy to any cloud provider? Can we deploy on-prem? Can we be a little bit more technology agnostic so to speak, and offer these as services, and offer a bunch of microservices, so that if a startup comes up and wants to set up an exchange, they can do it, they can leverage our services, then build whatever other applications they want on top of it. I think that's a transformation we need to go through, I think it's good vision, and I'm looking forward to executing it. It's going to be a couple years before we see the fruits of that labor, but Adena's really doing a great job of coming in, and really driving that innovation, and Brad Peterson as well, our CIO, has really been pushing this vision, and I think it's really going to work out for us, assuming we can execute. >> Well you know what's interesting about that, if I may, is financial services is usually so secretive about their technology, right? But your business, you guys are becoming a technology provider, so you got to face the world and start marketing your capabilities now, and opening about that. It's sort of an interesting change. >> I think you'll see that starting to become more of a thing over the next year or two, as we start actually looking to build out the platform and figure it out. 
We do market on the market tech side, I mean it's not a small business, but we're more strategic about who we market to, cause we're still targeting your financial exchanges, more internationally than in the U.S., but there's only so many of them, again you have to start looking at rebranding, rebuilding, and rethinking how we think about exchanges in general, and not thinking of them as just a financial thing. >> Well that's what I wanted to get into, because you're talking about this rebranding, and this rebuilding, this transformation, to the backdrop within an industry that is changing rapidly, and we have sort of the threat of legislative reform, perhaps some administrative reforms coming down all the time, so how do you manage that? I mean, those are a lot of pressures there, are you constantly trying to push the envelope right up until any changes take place? Or what would you say Shere and Michael? >> Probably again not the right person to ask about this, but we're definitely trying to stay on top of the cutting edge in innovation and the technologies out there that, whether it be Blockchain, or different types of technologies. I mean we're definitely trying to make sure we're investing in them, while maintaining our core businesses. >> Right, it's trying to find that balance right now of when to make the next step in the technology food chain, and when to balance that with regulatory obligations. And if you look at it, going back to the idea of being able to launch marketplaces, I think what you're ending up seeing over the coming years is your Ubers, your StubHubs, I think they're going to become more regulated at some level. And we're good at operating more regulated markets, so I think that's where we can kind of come in and play a role, and help wade through those regulations a little bit more, and help build software to adhere to those regulations. >> Since you brought up Blockchain, Jamie Dimon craps all over Blockchain, or you know, Bitcoin, and then clarifies his remarks, saying look, technology underneath is here to stay. Thoughts on Blockchain? Obviously Financial Services is looking at it very closely, doing some really advanced stuff, what can you tell us? >> Yeah, I think there's no argument that it's definitely an innovation and a disruptive technology. I think that it's definitely in it's early stages across the board, so we're investing in it where we can, and trying to keep a close eye on it. We think that there's a lot of potential in a lot of different applications. >> As the NASDAQ transforms its business, how does that effect the sort of back-end analytics activity and infrastructure? >> The data is just growing, that's like the biggest challenge we have now. Data that used to be done in Excel, it's just no longer an option, so now in order to get the insights that we used to get just from having a couple people doing Excel transformations, you need to now invest in the infrastructure in the back-end, and so there's a lot that needs to go into building out an infrastructure to be able to ingest the data, and then also having the UI on the front-end, so that the business can actually view it the way they want. >> So skills wise, how's that affecting who you guys are hiring and training? And how's that transformation going? >> Michael: I'll let you go first. >> I think there's definitely, data analytics is a hot field. It's very new, there's definitely a big skills gap in administrative work and in the analytics side. 
Usually you had people who could perform analytical functions just by being administrative or operational, and now it's really, we're investing in analysts, and making sure that we have the right people in place to be able to do these transformations, or pull the data and get the answers that we need from them. >> I mean from the tech side, I think what you're seeing is where we traditionally would just plug a developer in there, whether a Java developer, or an ETL developer, I think what you're seeing now is we're looking to bring more of a business-minded data analyst to the tech side, right? So we're looking to bring a data engineer, so to speak, more to the tech side. So we're not looking to hire a traditional four year Computer Science degree, or Software Engineering degree, you're looking for a different breed of person, cause quite honestly your traditional Java dev or C++ developer, they're not skilled or geared towards data. And when we've tried to plug that paradigm in, it just doesn't really work, so we're looking now to hire more of an analyst, but someone who's a little bit more techie as well. They still need to have those skills to do some level of coding, and what we are finding is that skill gap is still very much... There's a gap there. There's a huge gap. And I think it's closing, but- >> And as you have to fund those new areas, I presume, like many companies in your business, you're trying to move away from the sort of undifferentiated low-level infrastructure deployment hassles, and the IT labor costs there, especially as we move to the cloud, presumably, so is that shift palpable? I mean, can you see that going on? >> Yeah, I think we made a lot of progress over the past couple years in doing that. We do more one button deployments, where the operation cost is a lot lower, a lot more automation around alerting, around when things go wrong, so there's not necessarily a human being sitting there watching a computer. We've invested a lot in that area to kind of reduce the costs, and make the experience better for our end user. And even from a development side, the cost of a new application is a lot less every time you have to do a release. The question is, how do you balance that with the regulations, and make sure you still have a good process in place. The idea of putting single button deployments in place is a great one, but you still have to balance that with making sure that what you push to production has been tested, well defined, and it meets the need, and you're not just arbitrarily throwing things out there. So we're still trying to hit that balance a little bit, it's more on the back-end side. The trading system is not quite there for obvious reasons, we're way more protective of what goes out there than surrounding systems a lot of the time, but I can see a future where, again going back to this idea of transforming our business, where you can stand up and do an exchange with the click of a button. I think that's a trend we're looking at. >> Rebecca: It's not too far in the future. >> No, I don't think it is. >> Last question, Pentaho report card. What are they doing really well? What do you want to see them do better? >> I think they continue to focus in the right areas, focusing more on the data processing side, and with the big data technologies, trying to fill that gap in the big data, and be the layer where you don't have to tie yourself to, like, vCloud Air or MapR, you can kind of be a little bit more plug and play. 
I think they still need to do some improvements on their visualizations in their front-ends. I think they've been so much more focused on the data processing, that part of it, that the visualization side has kind of lagged behind, so I think they need to put a little more focus into that, but all in all, they're an A, and we've been extremely happy with them as a software provider. >> Great. >> Shere: I think the visualization part is the part that allows people to understand the value being created with Pentaho. So I think being able to maybe improve a little bit on the visualization could go a long way. >> Michael, Shere, it's been so much fun having you on theCube, and having this conversation, keep that bull market coming please, do whatever you can. >> We'll do our best. >> I'm Rebecca Knight. We are here at PentahoWorld, sponsored by Hitachi Vantara. For Dave Vellante, we will have more from theCube in just a little bit.
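
As a brief editorial illustration of the gated, one-button deployment idea Michael describes above, here is a minimal sketch in Python. It is hypothetical: the names (ReleaseCandidate, deploy, the approval flag) are illustrative stand-ins and do not reflect NASDAQ's or Pentaho's actual tooling; the point is only that a single-button push can still enforce test and change-approval gates before anything reaches production.

    # Hypothetical sketch of a gated "one button" deployment.
    # All names are illustrative, not actual NASDAQ or Pentaho tooling.
    from dataclasses import dataclass

    @dataclass
    class ReleaseCandidate:
        version: str
        tests_passed: bool       # automated test suite result
        change_approved: bool    # stands in for regulatory/change-management sign-off

    def deploy(candidate: ReleaseCandidate) -> str:
        if not candidate.tests_passed:
            return f"{candidate.version}: blocked, test suite not passing"
        if not candidate.change_approved:
            return f"{candidate.version}: blocked, awaiting change approval"
        # Only when both gates are green does the push actually go out.
        return f"{candidate.version}: deployed to production"

    print(deploy(ReleaseCandidate("2.4.1", tests_passed=True, change_approved=True)))

The same shape applies whether the target is a back-end analytics service or, further out, the "stand up an exchange with the click of a button" scenario he mentions.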

Published Date : Oct 27 2017



Lenovo Transform 2017 Kickoff with Stu & Rebecca


 

>> Announcer: Live from New York City, it's theCUBE. Covering Lenovo Transform 2017. Brought to you by Lenovo. >> Welcome to The Cube's coverage of the Lenovo Transform event. I am your host, Rebecca Knight, along with my co-host Stu Miniman. He is the senior analyst at Wikibon. Thanks so much, Stu, it's always great to be working with you here. >> It's great to be with you here, Rebecca, in New York City. What a time it is in New York City. >> Rebecca: How lucky we are to be alive right now. >> (chuckles) All right, enough Hamilton humor. Yeah, Y.Y., the CEO of Lenovo, got up on stage, talked about how there's no better transformation story than New York City, from a humble trading company city over 200 years ago to the center of innovation and just global commerce that it is today. >> So I want to ask you about Y.Y.'s keynote address. He was talking about how this was really an inflection point for Lenovo. He said this is the time where we celebrate what we've done, our past, and think about the impact we've had on society, and on business. And then also really look at the future, and what we aspire to, where Lenovo wants to go. I mean, where do you see Lenovo in terms of all your coverage of this company? >> Yeah, so we know that we're at an interesting time in really what's happening in IT today. One of my favorite lines that Y.Y. had is he said, you look back a hundred years, he said heck, look back 18 months, and you probably couldn't predict where we would be today 18 months ago. And that's true, the pace of change is just off the charts. On the one hand, they're talking about how ThinkPad is now 25 years old, and the server, the x86 line is also-- >> Also 25 years old. >> 25 years ago. >> Rebecca: We are grown up. >> But, you know, I've been in a lot of events this year where you talk whether it's 10, 25, or 100 years, and they say we know we're entering a new era where everything's going to change. Lenovo feels they are a good mashup of their tradition, but they're different and they're new, and one of the people in the keynote this morning said that they're a startup. Now, I wouldn't call them a startup with 43 billion in revenue, and 52,000 employees globally. >> A big startup. >> Um, no. You know, culturally, I think, Rebecca, you'd agree with me, a company of that size, I don't care if you started yesterday, because you all got moved in, you're not a startup. There's certain structure and certain things involved that make up startups and that innovation, you can't move a 52,000-person company on a dime, and say ope, hey, we're just going to go pivot into this. But, they are looking to take advantage of really the whole wave of AI, how do they harness the intelligence, is what they talked about. And what they said is they don't have some of the legacy. So what that means is that while they have a server business that has been around for many years, they've only had it for two years. They don't have the storage, they don't have some of the baggage that we've been watching the industry, storage especially, trying to transform away from. >> They're unencumbered. Particularly Kirk Skaugen, who we're going to have on the program later today, made the point about the lack of legacy and how that makes it easier not only to innovate, but also to sell. >> Yeah, absolutely. We've been watching that transformation about how software is eating the world, and Lenovo very much wants to focus on those software solutions. One of the two brand names that they put out today is the ThinkAgile brand. 
And ThinkAgile is really focused on those software-defined solutions, highlighted by, they've got the OEM of Nutanix solutions and they're also partnering with Microsoft, where we're going to have Azure Stack coming out later this year. And Lenovo of course being one of the top server manufacturers, close partnership with Microsoft is going to drive that forward for really delivering on the promise of hybrid cloud solutions. >> So, yeah, I want to hear what you think about these product announcements. This is the largest product launch in the data center portfolio in Lenovo history. Is it a game changer? >> So, ThinkSystems is the other big brand that they have, and it's server, storage and network. So, they have Intel up on stage, and as a matter of fact both Kirk and Kim Stevenson came from Intel, so we know Intel's place in the market. We understand how important they are, and with the Skylake chipset coming out later this year, it's important. Anytime Intel comes out with the next generation, it's important. The caution I have is this is, I think, the fourth or fifth show this year that theCUBE's done where Intel's up on stage talking about their next generation chipset. I was at the Google Cloud event in February, you were at the Dell EMC show in Las Vegas, we had the team at HPE Discover, and all of them, arm-in-arm with Intel, talking about how this next generation is going to be transformative, and of course leveraging the data, being ready for all of those edge solutions, devices, and really be able to take that infrastructure and tie it to lots of different devices. But it's really that wave that Intel is, that rising tide that rises all boats, because revenue for servers actually in the first quarter this year was down a little bit because really big companies, especially the hyper-scales, are waiting for this next generation chipset. >> So in talking about how Intel is this great partner to all of these companies, what do you think sets Lenovo apart? Where does it compete, what's unique about it? >> Yeah, so Kirk in the keynote this morning laid out a couple of places that they want to really tie their brand to. Their goal is to be the most trusted provider in the data center today, and trust is really important. Security, absolutely, it's at the board level, it's one of the top things that everyone discusses there. And when they talk about trust, it starts with uptime. So, if you start with we're all using some of the same base pieces, there shouldn't be much difference between them at that point, but Lenovo has some data points to show that they had the least amount of unplanned downtime of any of their competitors. Going out and saying, compare them to Dell and HPE, and they were far and away in the lead. >> And that is huge, particularly as you were saying, the pace of business change and innovation is so fast. >> And the second piece, customer support. So we hear lots of lip service to things like customer support. Lenovo, from a cultural standpoint, they push it through the entire product line. And really, you also hear some of the leverage between the PC, laptop, and even tablet market, and even the device all the way through the servers. So they talked about how when they bring in the sheet metal and the screws, you turn one way, and you go to the consumer side, you turn the other way in the factory, and it goes to the enterprise and the server division. And we know that there's leverage that can be made out of that; the economies of scale are good. 
And we've seen a lot of splitting of consumer and enterprise, HP cut those in two, there were rumors for years that Dell was going to sell off their PC division. Lenovo feels that they have the strength to do both of them. And as we start seeing edge solutions and mobile and all these other devices planned, Lenovo can build an end-to-end story that few companies still can. >> I want to keep, talk more about this end-to-end, because this is another thing that many executives played up in the keynote. I mean, how important is that in terms of how it competes? >> So, there are some pieces that are easy, and you say okay, from a brand standpoint, if I have the new Moto Z and I have a laptop that I like, you build that brand trust, you have a similar user interface. We've seen what Apple and Google can do pushing out across all those devices. But the second one is really if we start talking about data. If I want to have insight in con activity, Y.Y. said in his keynote, this fourth revolution is really going to be focused on the user and therefore you want to be where the data is, where the users are, where the devices are. And Lenovo has a lot of pieces that touch to those end devices. >> We're going to have a number of executives on the program too, also a customer too. One of the things that Y.Y. was talking about is harnessing AI to not only understand where your customers are today but also understand, anticipate their needs, where they want to go tomorrow. Is this something that you view as a strength of Lenovo? >> So, we're still pretty early in the AI. I feel like many of the times here, you heard Big Data and AI both being thrown out there. We know that there's so much data being created, especially with the peripheral proliferation of all of the end devices that are there. So how do we gather that data, turn that into insight, and we're starting to see where that goes. Lenovo still, primarily, is an infrastructure player, so it's devices, it's boxes, you want to hear more about the software that helps drive that, and a lot of that is through partnerships. So I walked around the area here around me. There are many partners here that are helping to be able to transfer that data and create more insight out of them. So, you know, we'll see. It's a lot of that is positioning where they want to be and where they know the new goal lines are, but I want to see some of the proof, I want to talk to customers that are using this and getting advantage from it. >> So much of Lenovo's strategy has really been about partnering and forging these alliances to augment its offerings. And Kirk had said he was going to foreshadow a bit of possible mergers and acquisitions, possible partnerships. What do you see in store for Lenovo in terms of how it moves forward in this hyper-converged world? >> Yeah, so in the software-defined storage space, Lenovo has a lot of partnerships. So whether it's Nexenta, the resale the solution, Nutanix is an OEM solution. Last year they had announced a deeper integration with a storage partner that was bought by one of their biggest competitors. So HPE has been acquisitive as of late. They've bought both SimpliVity and Nimble, both of which were good Lenovo partners. So, the question is, yeah, it's not surprising to hear Kirk say that they are going to be acquisitive. It's great to see him up on stage. I'm sure a question I'm going to have for him is what do you look for? 
I don't expect him to come out and say yes, this is the company I buy and I'm going to spend 10 billion dollars to go buy a company. But where are they going to fit and where are they going to partner in there? Just behind me here you've got VMware, Red Hat, Nutanix, Micron, all storage-based solutions that Lenovo can work with. Lenovo wants to be one of those platforms for infrastructure and partner with companies that help round out that stack. And therefore buying software solutions that help augment that software-defined infrastructure that Lenovo does would make a lot of sense. >> So you talked about some of your burning questions you have for Kirk, but what else do you want our viewers to come away with after a day of coverage about the Lenovo Transform event? >> Yeah, so one of the other things that Lenovo was highlighting is what they're doing in the HPC or supercomputer market. Because there's a supercomputing show going on in Europe right now, and Lenovo says that they now have 92 of the top 500 are running Lenovo, they're the fastest growth, but what I'd like to hear from him and I want to hear more of, is it's not just oh, we've got the speeds and feeds and this is great, but we're helping scientists do breakthroughs, we're helping the medical industry help out, find new cures for diseases. We usually hear about CERN and what they're doing with advancing science, so those are the kinds of things that connect the technology to the greater good. Y.Y. talked about it, Kirk talked about it, the greater good, because infrastructure at the end of the day, is only there for the applications that the business runs. And of course those applications are there to drive value to the business and hopefully for the greater world. >> Well, and that is true, and that is something that we've heard at a number of technology conferences is using technology, and these transformative new products to make huge advancements in society, and to solve big problems. I mean, how serious is the technology industry, I mean, is this just sort of a side note that you hear at conferences, do you think this really is a raison d'etre of tech right now? >> Yeah, so Rebecca, you and I were at the Red Hat Summit, and it felt ingrained in their culture. There were some companies, you hear, you talk about it, and like, oh great, you give employees time to go work on charitable events or what are you giving to schools, and helping to make things possible? So I'd love to hear from Lenovo, really, as John Furrier would say, the meat on the bone for some of these solutions. I think it is more than lip service, but how deeply ingrained is it? We'd love to hear. The technology industry in general seems to be understanding that their mission should be broader than just selling licenses or selling boxes. As a, I'm a sci-fi fan, and most science fiction is about how we can take technology and make a better future. I have friends of mine that say, if you're a technologist that means you're optimistic about what technology can do for you for the future. An area that you and I like to talk about is what will automation do to the future of jobs? So that needs to be part of the equation, 'cause it's not just oh hey, we've got this cool new data center, and I could just lock it and nobody needs to go into it. Well, what are those people doing, and what does that improve for the business, and improve the world? 
>> Right, and how will people work side-by-side with these technologies, how will their jobs be improved by the technology taking over some of the perhaps more monotonous tasks, things like that? >> Stu: Absolutely. >> Great. Thanks so much, Stu. I'm Rebecca Knight, we'll be back with more from Lenovo Transform just after this. (upbeat electronic music)

Published Date : Jun 20 2017



Eric Starkloff, National Instruments & Dr. Tom Bradicich, HPE - #HPEDiscover #theCUBE


 

>> Voiceover: Live from Las Vegas, it's theCUBE, covering Discover 2016, Las Vegas. Brought to you by Hewlett Packard Enterprise. Now, here are your hosts, John Furrier and Dave Vellante. >> Okay, welcome back everyone. We are here live in Las Vegas for SiliconANGLE Media's theCUBE. It's our flagship program, we go out to the events to extract the signal from the noise, we're your exclusive coverage of HP Enterprise, Discover 2016, I'm John Furrier with my co-host, Dave Vellante, extracting the signals from the noise with two great guests, Dr. Tom Bradicich, VP and General Manager of the servers and IoT systems, and Eric Starkloff, the EVP of Global Sales and Marketing at National Instruments, welcome back to theCUBE. >> Thank you. >> John: Welcome for the first time Cube alumni, welcome to theCUBE. >> Thank you. >> So we are seeing a real interesting historic announcement from HP, because not only is there an IoT announcement this morning that you are the architect of, but the twist that you're taking with IoT, is very cutting edge, kind of like I just had Google IO, and at these big conferences they always have some sort of sexy demo, that's to kind of show the customers the future, like AI, or you know, Oculus Rift goggles as the future of their application, but you actually don't have something that's futuristic, it's reality, you have a new product, around IoT, at the Edge, Edgeline, the announcements are all online. Tom, but you guys did something different. And Eric's here for a reason, we'll get to that in a second, but the announcement represents a significant bet. That you're making, and HP's making, on the future of IoT. Please share the vision, and the importance of this event. >> Well thank you, and it's great to be back here with you guys. We've looked around and we could not find anything that existed today, if you will, to satisfy the needs of this industry and our customers. So we had to create not only a new product, but a new product category. A category of products that didn't exist before, and the new Edgeline1000, and the Edgeline4000 are the first entrance into this new product category. Now, what's a new product category? Well, whoever invented the first automobile, there was not a category of automobiles. When the first automobile was invented, it created a new product category called automobiles, and today everybody has a new entry into that as well. So we're creating a new product category, called converged IoT systems. Converged IoT systems are needed to deliver the real-time insights, real-time response, and advance the business outcomes, or the engineering outcomes, or the scientific outcomes, depending on the situation of our customers. They're needed to do that. Now when you have a name, converged, that means somewhat, a synonym is integration, what did we integrate? Now, I want to tell you the three major things we integrated, one of which comes from Eric, and the fine National Instruments company, that makes this technology that we actually put in, to the single box. And I can't wait to tell you more about it, but that's what we did, a new product category, not just two new products. >> So, you guys are bringing two industries together, again, that's not only just point technologies or platforms, in tooling, you're bringing disparate kind of players together. >> Yes. >> But it's not just a partnership, it's not like shaking hands and doing a strategic partnership, so there's real meat on the bone here. 
Eric, talk about one, the importance of this integration of two industries, basically, coming together, converged category if you will, or industry, and what specifically is in the box or in the technology. >> Yeah, I think you hit it exactly right. I mean, everyone talks about the convergence of OT, or operational technology, and IT. And we're actually doing it together. I represent the OT side, National Instruments is a global leader. >> John: OT, it means, just for the audience? >> Operational Technology, it's basically industrial equipment, measurement equipment, the thing that is connected to the real world. Taking data and controlling the thing that is in the internet of things, or the industrial internet of things as we play. And we've been doing internet of... >> And IT is Information Technologies, we know what that is, OT is... >> I figured that one you knew, OT is Operational Technology. We've been doing IoT before it was a buzzword. Doing measurement and control systems on industrial equipment. So when we say we're making it real, this Edgeline system actually incorporates in National Instruments technology, on an industry standard called PXI. And it is a measurement and control standard that's ubiquitous in the industry, and it's used to connect to the real world, to connect to sensors, actuators, to take in image data, and temperature data and all of those things, to instrument the world, and take in huge amounts of analog data, and then apply the compute power of an Edgeline system onto that application. >> We don't talk a lot about analog data in the IT world. >> Yeah. >> Why is analog data so important, I mean it's prevalent obviously in your world. Talk a little bit more about that. >> It's the largest source of data in the world, as Tom says it's the oldest as well. Analog, of course if you think about it, the analog world is literally infinite. And it's only limited by how many things we want to measure, and how fast we measure them. And the trend in technology is more measurement points and faster. Let me give you a couple of examples of the world we live in. Our customers have acquired over the years, approximately 22 exabytes of data. We don't deal with exabytes that often, I'll give an analogy. It's streaming high definition video, continuously, for a million years, produces 22 exabytes of data. Customers like CERN, that do the Large Hadron Collider, they're a customer of ours, they take huge amounts of analog data. Every time they do an experiment, it's the equivalent of 14 million images, photographs, that they take per second. They create 25 petabytes of data each year. The importance of this and the importance of Edgeline, and we'll get into this some, is that when you have that quantity of data, you need to push processing, and compute technology, towards the edge. For two main reasons. One, is the quantity of data, doesn't lend itself, or takes up too much bandwidth, to be streaming all of it back to central, to cloud, or centralized storage locations. The other one that's very, very important is latency. In the applications that we serve, you often need to make a decision in microseconds. And that means that the processing needs to be done, literally the speed of light is a limiting factor, the processing must be done on the edge, at the thing itself. >> So basically you need a data center at the edge. >> A great way to say it. >> A great way to say it. 
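
A quick editorial aside on the scale Eric describes: the "22 exabytes is a million years of streaming HD video" analogy can be sanity-checked with a few lines of arithmetic. The sketch below is Python, and the 5 Mbit/s HD bitrate is an assumed figure, not one given in the interview.

    # Rough sanity check of the exabyte analogy; 5 Mbit/s HD is an assumption.
    HD_BITRATE_BPS = 5_000_000          # bits per second
    SECONDS_PER_YEAR = 3600 * 24 * 365

    bytes_per_year = HD_BITRATE_BPS / 8 * SECONDS_PER_YEAR   # roughly 20 TB per year
    years_for_22_exabytes = 22e18 / bytes_per_year
    print(f"{years_for_22_exabytes:,.0f} years")             # roughly 1.1 million

At that assumed bitrate the figure does land in the neighborhood of a million years, which is the point of the comparison.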
And this data, or big analog data as we love to call it, is things like particulates, motion, acceleration, voltage, light, sound, location, such as GPS, as well as many other things like vibration and moisture. That is the data that is pent up in things. In the internet of things. And Eric's company National Instruments, can extract that data, digitize it, make it ones and zeroes, and put it into the IT world where we can compute it and gain these insights and actions. So we really have a seminal moment here. We really have the OT industry represented by Eric, connecting with the IT industry, in the same box, literally in the same product in the box, not just a partnership as you pointed out. In fact it's quite a moment, I think we should have a photo op here, shaking hands, two industries coming together. >> So you talk about this new product category. What are the parameters of a new product category? You gave an example of an automobile, okay, but nobody had ever seen one before, but now you're bringing together sort of two worlds. What defines the parameters of a product category, such that it warrants a new category? >> Well, in general, never been done before, and accomplishes something that's not been done before, so that would be more general. But very specifically, this new product, EL1000 and EL4000, creates a new product category because this is an industry first. Never before have we taken data acquisition and capture technology from National Instruments, and data control technology from National Instruments, put that in the same box as deep compute. Deep x86 compute. What do I mean by deep? 64 xeon cores. As you said, a piece of the data center. But that's not all we converged. We took Enterprise Class systems management, something that HP has done very well for many, many years. We've taken the Hewlett Packard Enterprise iLo lights-out technology, converged that as well. In addition we put storage in there. 10s of terabytes of storage can be at the edge. So by this combination of things, that did exist before, the elements of course, by that combination of things, we've created this new product category. >> And is there a data store out there as well? A database? >> Oh yes, now since we have, this is the profundity of what I said, lies in the fact that because we have so many cores, so close to the acquisition of the data, from National Instruments, we can run virtually any application that runs on an x86 server. So, and I'm not exaggerating, thousands. Thousands of databases. Machine learning. Manageability, insight, visualization of data. Data capture tools, that all run on servers and workstations, now run at the edge. Again, that's never been done before, in the sense that at the edge today, are very weak processing. Very weak, and you can't just run an unmodified app, at that level. >> And in terms of the value chain, National Instruments is a supplier to this new product category? Is that the right way to think about it? >> An ingredient, a solution ingredient but just like we are, number one, but we are both reselling the product together. >> Dave: Okay. >> So we've jointly, collaboratively, developed this together. >> So it's engineers and engineers getting together, building the product. >> Exactly. His engineers, mine, we worked extremely close, and produced this beauty. 
We had a conversation yesterday, argument about the iPhone, I was saying hey, this was a game-changing category, if you will, because it was a computer that had software that could make phone calls. Versus the other guys, who had a phone, that could do text messages and do email. With a browser. >> Tom: With that converged product. >> So this would be similar, if I may, and you can correct me if I'm wrong, I want you to correct me and clarify, what you're saying is, you guys essentially looked at the edge differently, saying let's build the data center, at the edge, in theory or in concept here, in a little concept, but in theory, the power of a data center, that happens to do edge stuff. >> Tom: That's right. >> Is that accurate? >> I think it's very accurate. Let me make a point and let you respond. >> Okay. >> Neapolitan ice cream has three flavors. Chocolate, vanilla, strawberry, all in one box. That's what we did with this Edgeline. What's the value of that? Well, you can carry it, you can store it, you can serve it more conveniently, with everything together. You could have separate boxes, of chocolate, vanilla, and strawberry, that existed, right, but coming together, that convergence is key. We did that with deep compute, with data capture and control, and then systems management and Enterprise class device and systems management. And I'd like to explain why this is a product. Why would you use this product, you know, as well. Before I continue though, I want to get to the seven reasons why you would use this. And we'll go fast. But seven reasons why. But would you like to add anything about the definition of the convergence? >> Yeah, I was going to just give a little perspective, from an OT and an industrial OT kind of perspective. This world has generally lived in a silo away from IT. >> Mm-hmm. >> It's been proprietary networking standards, not been connected to the rest of the enterprise. That's the huge opportunity when we talk about the IoT, or the industrial IoT, is connecting that to the rest of the enterprise. Let me give you an example. One of our customers is Duke Energy. They've implemented an online monitoring system for all of their power generation plants. They have 2,000 of our devices called CompactRIO, that connect to 30,000 sensors across all of their generation plants, getting real-time monitoring, predictive analytics, predictive failure, and it needs to have processing close to the edge, that latency issue I mentioned? They need to basically be able to do deep processing and potentially shut down a machine. Immediately, if it's in a condition that warrants so. The importance here is that as those things are brought online, into IT infrastructure, the importance of deep compute, and the importance of the security and the capability that HPE has, becomes critical to our customers in the industrial internet of things. >> Well, I want to push back and just kind of play devil's advocate, and kind of poke holes in your thesis, if I can. 
>> It has to be an open standard. And there's two elements of open standard in this product, I'll let Tom come in on one, but one of them is, the actual IO standard, that connects to the physical world, we said it's something called PXI. National Instruments is a major vendor within this PXI market, but it is an open standard, there are 70 different vendors, thousands of products, so that part of it in connecting to the physical world, is built on an open standard, and the rest of the platform is as well. >> Indeed. Can I go back to your metaphor of the smartphone that you held up? There are times even today, but it's getting less and less, that people still carry around a camera. Or a second phone. Or a music player. Or the Beats headphones, et cetera, right? There's still time for that. So to answer your question, it's not a replacement for everything. But very frankly, the vision is over time, just like the smartphone, and the app store, more and more will get converged into this platform. So it's an introduction of a platform, we've done the inaugural convergence of the aforementioned data capture, high compute, management, storage, and we'll continue to add more and more, again, just like the smartphone analogy. And there will still be peripheral solutions around, to address your point. >> But your multi-vendor strategy if I get this right, doesn't prevent you, doesn't foreclose the customer's benefits in any way, so they connect through IT, they're connected into the box and benefits. You changed, they're just not converged inside the box. >> At this point. But I'm getting calls regularly, and you may too, Eric, of other vendors saying, I want in. I would like to relate that conceptually to the app store. Third party apps are being produced all the time that go onto this platform. And it's pretty exciting. >> And before you get to your seven killer attributes, what's the business model? So you guys have jointly engineered this product, you're jointly selling it through your channels, >> Eric: Yes. >> If you have a large customer like GE for example, who just sort of made the public commitment to HPE infrastructure. How will you guys "split the booty," so to speak? (laughter) >> Well we are actually, as Tom said we are doing reselling, we'll be reselling this through our channel, but I think one of the key things is bringing together our mutual expertise. Because when we talk about convergence of OT and IT, it's also bringing together the engineering expertise of our two companies. We really understand acquiring data from the real world, controlling industrial systems. HPE is the world leader in IT technology. And so, we'll be working together and mutually with customers to bring those two perspectives together, and we see huge opportunity in that. >> Yeah, okay so it's engineering. You guys are primarily a channel company anyway, so. >> Actually, I can make it frankly real simple, knowing that if we go back to the Neapolitan ice cream, and we reference National Instruments as chocolate, they have all the contact with the chocolate vendor, the chocolate customers if you will. We have all the vanilla. So we can go in and then pull each other that way, and then go in and pull this way, right? So that's one way as this market develops. And that's going to very powerful because indeed, the more we talk about when it used to be separated, before today, the more we're expressing that also separate customers. That the other guy does not know. And that's the key here in this relationship. 
>> So talk about the trend we're hearing here at the show, I mean it's been around in IT for a long time. But more now with the agility, the DevOps and cloud and everything. End to end management. Because that seems to be the table stakes. Do you address any of that in the announcement, is it part, does it fit right in? >> Absolutely, because, when we take, and we shift left, this is one of our monikers, we shift left. The data center and the cloud is on the right, and we're shifting left the data center class capabilities, out to the edge. That's why we call it shift left. And we meet, our partner National Instruments is already there, and an expert and a leader. As we shift left, we're also shifting with it, the manageability capabilities and the software that runs the management. Whether it be infrastructure, I mean I can do virtualization at the edge now, with a very popular virtualization package, I can do remote desktops like the Citrix company, the VMware company, these technologies and databases that come from our own Vertica database, that come from PTC, a great partner, with again, operations technology. Things that were running already in the data center now, get to run there. >> So you bring the benefit to the IT guy, out to the edge, to management, and Eric, you get the benefit of connecting into IT, to bring that data benefits into the business processes. >> Exactly. And as the industrial internet of things scales to billions of machines that have monitoring, and online monitoring capability, that's critical. Right, it has to be manageable. You have to be able to have these IT capabilities in order to manage such a diverse set of assets. >> Well, the big data group can basically validate that, and the whole big data thesis is, moving data where it needs to be, and having data about physical analog stuff, assets, can come in and surface more insight. >> Exactly. The biggest data of all. >> And vice versa. >> Yup. >> All right, we've got to get to the significant seven, we only have a few minutes left. >> All right. Oh yeah. >> Hit us. >> Yeah, yeah. And we're cliffhanging here on that one. But let me go through them real quick. So the question is, why wouldn't I just, you know, rudimentary collect the data, do some rudimentary analytics, send it all up to the cloud. In fact you hear that today a lot, pop-up. Censored cloud, censored cloud. Who doesn't have a cloud today? Every time you turn around, somebody's got a cloud, please send me all your data. We do that, and we do that well. We have Helion, we have the Microsoft Azure IoT cloud, we do that well. But my point is, there's a world out there. And it can be as high as 40 to 50 percent of the market, IDC is quoted as suggesting 40 percent of the data collected at the edge, by for example National Instruments, will be processed at the edge. Not sent, necessarily back to the data center or cloud, okay. With that background, there are seven reasons to not send all the data, back to the cloud. That doesn't mean you can't or you shouldn't, it just means you don't have to. There are seven reasons to compute at the edge. With an Edgeline system. Ready? >> Dave: Ready. >> We're going to go fast. And there'll be a test on this, so. >> I'm writing it down. >> Number one is latency, Eric already talked about that. How fast do you want your turnaround time? How fast would you like to know your asset's going to catch on fire? 
How fast would you like to know when the future autonomous car, that there's a little girl playing in the road, as opposed to a plastic bag being blown against the road, and are you going to rely on the latency of going all the way to the cloud and back, which by the way may be dropped, it's not only slow, but you ever try to make a phone call recently, and it not work, right? So you get that point. So that's latency one. You need to time to incite, time to response. Number one of seven, I'll go real quick. Number two of seven is bandwidth. If you're going to send all this big analog data, the oldest, the fastest, and the biggest of all big data, all back, you need tremendous bandwidth. And sometimes it doesn't exist, or, as some of our mutual customers tell us, it exists but I don't want to use it all for edge data coming back. That's two of seven. Three of seven is cost. If you're going to use the bandwidth, you've got to pay for it. Even if you have money to pay for it, you might not want to, so again that's three, let's go to four. (coughs) Excuse me. Number four of seven is threats. If you're going to send all the data across sites, you have threats. It doesn't mean we can't handle the threats, in fact we have the best security in the industry, with our Aruba security, ClearPass, we have ArcSight, we have Volt. We have several things. But the point is, again, it just exposes it to more threats. I've had customers say, we don't want it exposed. Anyway, that's four. Let's move on to five, is duplication. If you're going to collect all the data, and then send it all back, you're going to duplicate at the edge, you're going to duplicate not all things, but some things, both. All right, so duplication. And here we're coming up to number six. Number six is corruption. Not hostile corruption, but just package dropped. Data gets corrupt. The longer you have it in motion, e.g. back to the cloud, right, the longer it is as well. So you have corruption, you can avoid. And number three, I'm sorry, number seven, here we go with number seven. Not to send all the data back, is what we call policies and compliance, geo-fencing, I've had a customer say, I am not allowed to send all the data to these data centers or to my data scientists, because I can't leave country borders. I can't go over the ocean, as well. Now again, all these seven, create a market for us, so we can solve these seven, or at least significantly ameliorate the issues by computing at the edge with the Edgeline systems. >> Great. Eric, I want to get your final thoughts here, and as we wind down the segment. You're from the ops side, ops technologies, this is your world, it's not new to you, this edge stuff, it's been there, been there, done that, it is IoT for you, right? So you've seen the evolution of your industry. For the folks that are in IT, that HP is going to be approaching with this new category, and this new shift left, what does it mean? Share your color behind, and reasoning and reality check, on the viability. >> Sure. >> And relevance. >> Yeah, I think that there are some significant things that are driving this change. The rise of software capability, connecting these previously siloed, unconnected assets to the rest of the world, is a fundamental shift. And the cost point of acquisition technology has come down the point where we literally have a better, more compelling economic case to be made, for the online monitoring of more and more machine-type data. That example I gave of Duke Energy? 
Ten years ago they evaluated online monitoring, and it wasn't economical, to implement that type of a system. Today it is, and it's actually very, very compelling to their business, in terms of scheduled downtime, maintenance cost, it's a compelling value proposition. And the final one is as we deliver more analytics capability to the edge, I believe that's going to create opportunity that we don't even really, completely envision yet. And this deep computing, that the Edgeline systems have, is going to enable us to do an analysis at the edge, that we've previously never done. And I think that's going to create whole new opportunities. >> So based on your expert opinion, talk to the IT guys watching, viability, and ability to do this, what's the... Because some people are a little nervous, will the parachute open? I mean, it's a huge endeavor for an IT company to instrument the edge of their business, it's the cutting, bleeding edge, literally. What's the viability, the outcome, is it possible? >> It's here now. It is here now, I mean this announcement kind of codifies it in a new product category, but it's here now, and it's inevitable. >> Final word, your thoughts. >> Tom: I agree. >> Proud papa, you're like a proud papa now, you got your baby out there. >> It's great. But the more I tell you how wonderful the EL1000, EL4000 is, it's like my mother calling me handsome. Therefore I want to point the audience to Flowserve. F-L-O-W, S-E-R-V-E. They're one of our customers using Edgeline, and National Instruments equipment, so you can find that video online as well. They'll tell us about really the value here, and it's really powerful to hear from a customer. >> John: And availability is... >> Right now we have EL1000s and EL4000s in the hands of our customers, doing evaluations, at the end of the summer... >> John: Pre-announcement, not general availability. >> Right, general availability is not yet, but we'll have that at the end of the summer, and we can do limited availability as we call it, depending on the demand, and how we roll it out, so. >> How big the customer base is, in relevance to the... Now, is this the old boon shot box, just a quick final question. >> Tom: It is not, no. >> Really? >> We are leveraging some high-performance, low-power technology, that Intel has just announced, I'd like to shout out to that partner. They just announced and launched... Diane Bryant did her keynote to launch the new xeon, E3, low-power high-performance xeon, and it was streamed, her keynote, on the Edgeline compute engine. That's actually going into the Edgeline, that compute blade is going into the Edgeline. She streamed with it, we're pretty excited about that as well. >> Tom and Eric, thanks so much for sharing the big news, and of course congratulations, new category. >> Thank you. >> Let's see how this plays out, we'll be watching, got to get the draft picks in for this new sports league, we're calling it, like IoT, the edge, of course we're theCUBE, we're living at the edge, all the time, we're at the edge of HPE Discovery. Have one more day tomorrow, but again, three days of coverage. You're watching theCUBE, I'm John Furrier with Dave Vellante, we'll be right back. (electronic music)
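
As an editorial footnote to the seven reasons Tom walks through above, here is a minimal, hypothetical Python sketch of the pattern being described: act on a reading locally, inside the latency budget, and forward only a compact summary upstream instead of the raw analog stream. It is not the Edgeline or National Instruments API; the sensor read is simulated and the threshold is invented for illustration.

    # Hypothetical edge-processing loop illustrating the "shift left" idea:
    # respond locally to out-of-range readings (latency) and send only
    # aggregates upstream (bandwidth, cost). Not an actual Edgeline/NI API.
    import random
    import statistics

    VIBRATION_LIMIT = 8.0               # illustrative threshold

    def read_sensor() -> float:
        # Stand-in for a PXI/data-acquisition read; simulated here.
        return random.gauss(5.0, 2.0)

    def local_action(value: float) -> None:
        # The time-critical decision stays at the edge, e.g. trip a machine.
        print(f"ALERT: vibration {value:.2f} over limit, shutting down line")

    def edge_window(samples: int = 1000) -> dict:
        window = []
        for _ in range(samples):
            v = read_sensor()
            if v > VIBRATION_LIMIT:
                local_action(v)         # no round trip to the cloud needed
            window.append(v)
        # Only this small summary leaves the edge, not the raw stream.
        return {"count": len(window),
                "mean": statistics.fmean(window),
                "max": max(window)}

    print("summary sent upstream:", edge_window())

The Duke Energy example in the conversation follows the same shape at much larger scale: local monitoring and shutdown decisions at the plant, with aggregated results flowing back to central systems.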

Published Date : Jun 9 2016



Grady Booch - IBM Impact 2014 - TheCUBE


 

>> theCUBE at IBM Impact 2014 is brought to you by headline sponsor IBM. Here are your hosts, John Furrier and Paul Gillin. Okay, welcome back. Everyone, we're live in Las Vegas at IBM Impact. This is theCUBE, our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, the founder of SiliconANGLE, joined by my co-host Paul Gillin. And our next special guest is Grady Booch, a legend in the software development community. And you went to school in Santa Barbara. My son goes there, he's a freshman, but that's a whole nother conversation. Um, welcome to theCUBE. Thank you. Uh, one of the things we're really excited about, when we get all the IBM guys to get the messaging out, you know, the IBM talk, is the groundbreaking work around, um, computer software, where hardware is now exploding in capability, big data, instrumentation of data. >> Um, take us to a conversation around cognitive computing, the future of humanity, society, the societal changes that are happening. There's a huge, uh, intersection between computer science and social science. Something that's our tagline for SiliconANGLE, and so we are passionate about it. So I want to, I just want to get your take on that and, and tell us about some of the work you're doing at IBM. Um, what does all this, where's all this leading to? Where is this unlimited compute capacity, the mainframe in the cloud, big data, instrumentation, indexing human thought, um, Fitbits, wearable computers, um, the sensors, the internet of things, where is this all taking us, in what direction? What's your vision? There are three things that I think are inevitable and they're irreversible, that have unintended consequences, consequences that, you know, we can't, we have to attend to and they will be in our face eventually. >> The first of these is the growth of computational power in ways we've only begun to see. The second is the development of systems that never forget, with storage beyond even our expectations now. And the third is a pervasive connectivity such that we see the foundations for not just millions of devices, but billions upon billions of devices. Those three trends appear to be where technology is heading. And yet if you follow those trends out, one has to ask the question, what are the implications for us as humans? Um, I think that the net of those is an interesting question indeed. To put in a personal plug, my wife and I are developing a documentary on the history of computing with the Computer History Museum for public television on that very topic, looking at how computing intersects with the human experience. So we're seeing those changes in every aspect of it. Two that I'll dwell upon here, which I think are germane to this particular conference, are some of the ethical and moral implications. >> And second, what the implications are for cognitive systems. On the latter case we saw on the news, I guess it was today or yesterday, there's a foundation led by the Gates foundation. It's been looking at collecting data for kids in various schools. A number of states signed up for it. But as they began to realize what the implications of aggregating that information were for the privacy of that child, the parents became cognizant of the fact that, wow, we're disclosing things for which there can be identification of the kid in ways that maybe we wouldn't want. 
So I think the explosion of big data and explosion of computational power has a lot of us as a society to begin asking those questions, what are the limits of ownership and the rights of that kind of information. And that's a dialogue that will continue on in the cognitive space. >>It kind of follows on because one of the problems of big data, and it's not just you know, big, big data, but like you see in at CERN and the like, but also these problems of aggregation of data, there are, there are such an accumulation information at such a speed in ways that an individual human cannot begin to reason about it in reasonable ways. Thus was born. What we did with Watson a few years ago, Watson jeopardy. I think the most important thing that the Watson jeopardy experience led us to realize is that theory is an architectural framework upon which we can do many interesting reasoning things. And now that Watson has moved from research into the Watson group, we're seeing that expand out in so many domains. So the journey is really just beginning as we take what we can know to do in reason with automated systems and apply it to these large data systems. >>It's going to be a conversation we're going to have for a few generations. You were beginning to see, I mean computing has moved beyond the, the, the role of automate or of automating rote manual tasks. We're seeing, uh, it's been, uh, I've seen forecast of these. Most of the jobs that will be automated out of existence in the next 20 years will be, will be, uh, knowledge jobs and uh, even one journalism professor of forecasting, the 80% of journalism jobs will go away and be replaced by computer, uh, over the next couple of decades. Is this something for people to fear? I'm not certain fear will do us any good, especially if the change like that is inevitable. Fear doesn't help. But I think that what will help is an understanding as to where those kinds of software systems will impact various jobs and how we as individuals should relate to them. >>We as a society, we as individuals in many ways are slowly surrendering ourselves to computing technology. And what describe is one particular domain for that. There's been tremendous debate in the economic and business community as to whether or not computing has impacted the jobs market. I'm not an economist, I'm a computer scientist, but I can certainly say from my input inside perspective, I see that transformational shift and I see that what we're doing is radically going to change the job market. There was, you know, if you'd go back to the Victorian age where people were, were looking for a future in which they had more leisure time because we'd have these devices to give us, you know, free us up for the mundane. We're there. And yet the reality is that we now have so many things that required our time before. It means yours in a way, not enough work to go around. >>And that's a very different shift than I think what anyone anticipated back to the beginnings of the industrial age. We're coming to grips with that. Therefore, I say this, don't fear it, but begin to understand those areas where we as humans provide unique value that the automated systems never will. And then ask ourselves the question, where can we as individuals continue to add that creativity and value because there and then we can view these machines as our companions in that journey. Great. You want to, I want to ask you about, um, the role, I mean the humans is great message. 
I mean that's the, they're driving the car here, but I want to talk about something around the humanization piece. You mentioned, um, there's a lot of conversations around computer science does a discipline which, um, the old generation when a hundred computer science school was, it was code architecture. >>But now computer science is literally mainstreams. There's general interest, hence why we built this cube operation to share signal from the noise around computer science. So there's also been a discussion around women in tech tolerance and different opinions and views, freedom of speech, if you will, and sensors if everything's measured, politically correctness. All of this is now kind of being fully transparent, so, so let's say the women in tech issue and also kids growing up who have an affinity towards computer science but may not know us. I want to ask you the question. With all that kind of as backdrop, computer science as a discipline, how is it going to evolve in this space? What are some of those things for the future generation? For the, my son who's in sixth grade, my son's a freshman in college and then in between. Is it just traditional sciences? >>What are some of the things that you see and it's not just so much coding and running Java or objective C? I wish you'd asked me some questions about some really deep topics. I mean, gosh, these are, these are, I'm sorry. It's about the kids. In the early days of the telephone, phone, telephones were a very special thing. Not everybody had them and it was predicted that as the telephone networks grew, we were going to need to have many, many more telephone operators. What happened is that we all became, so the very nature of telephony changed so that now I as an individual have the power to reach out and do the connection that had to be done by a human. A similar phenomenon I think is happening in computing that it is moved itself into the interstitial spaces of our world such that it's no longer a special thing out there. We used to speak of the programming priesthood in the 60s where I lost my thing here. Hang on. >>Here we go. I think we're good. We're good. I'm a software guy. I don't do hardware so my body rejects hardware. So we're moving in a place where computing very much is, is part of the interstitial spaces of our world. This has led to where I think the generation after us, cause our, our median age is, let me check. It's probably above 20, just guessing here. Uh, a seven. I think you're still seven. Uh, we're moving to a stage where the notion of computational thinking becomes an important skill that everyone must have. My wife loves to take pictures of people along the beach, beautiful sunset, whales jumping and the family's sitting there and it did it again. My body's rejecting this device. Clearly I have the wrong shape. i-Ready got it. Yeah. There we go. Uh, taking pictures of families who are seeing all these things and they're, they're very, with their heads in their iPhones and their tablets and they're so wedded to that technology. >>We often see, you know, kids going by and in strollers and they've got an iPad in front of them looking at something. So we have a generation that's growing up, uh, knowing how to swipe and knowing how to use these devices. It's part of their very world. It's, it's difficult for me to relate to that cause I didn't grow up in that kind of environment. But that's the environment after us. 
So the question I think you're generally asking is what does one need to know to live in that kind of world? And I think it says notions of computational thinking. It's an idea that's come out of uh, the folks at Carnegie Mellon university, which asks the question, what are some of the basic skills we need to know? Well, you need to know some things about what an algorithm is and a little bit behind, you know, behind the screen itself. >>One of the things we're trying to do with the documentary is opening the curtain behind just the windows you say and ask the question, how do these things actually work because some degree of understanding to that will be essential for anyone moving into, into, into life. Um, you talked about women in tech in particular. It is an important question and I think that, uh, I worked with many women side by side in the things that I do. And you know, frankly it saddens me to see the way our educational system in a way back to middle school produces a bias that pushes young women out of this society. So I'm not certain that it's a bias, it's built into computing, but it's a bias built in to culture. It's bias built into our educational system. And that obviously has to change because computing, you know, knows no gender or religious or sexual orientation boundaries. >>It's just part of our society. Now. I do want to, everyone needs to contribute. I'm sorry. I do want to ask you about software development since you're devoted your career to a couple of things about to defining, uh, architectures and disciplines and software development. We're seeing software development now as epitomized by Facebook, perhaps moving to much more of a fail fast mentality. Uh, try it. Put it out there. If it breaks, it's okay. No lives were lost. Uh, pull it back in and we'll try it again. Is this, is there a risk in, in this new approach to software? So many things here are first, is it a new approach? No, it's part of the agile process that we've been talking about for well over a decade, if not 15 years or so. You must remember that it's dangerous to generalize upon a particular development paradigm that's applied in one space that apply to all others. >>With Facebook in general, nobody, no one's life depends upon it. And so there are things that one can do that are simplifying assumptions. If I apply that same technique to the dialysis machine, to the avionics of a triple seven, a simple fly, you know, so one must be careful to generalize those kinds of approaches to every place. It depends upon the domain, depends upon the development culture. Ultimately depends upon the risk profile that would lead you to high ceremony or low ceremony approaches. Do you have greater confidence in the software that you see being developed for mission critical applications today than you did 10 years ago? Absolutely. In fact, I'll tell you a quick story and I to know we need to wind down. I had an elective open heart surgery or a few years ago elective because every male in my family died of an aneurysm. They are an aneurism. >>So I went in and got checked and indeed I had an aneurysm developing as well. So we had, you know, hi my heart ripped open and then dealt with before it would burst on me. I remember laying there in the, in the, uh, in the CT scan machine looking up and saying, this looks familiar. Oh my God, I know the people that wrote the software where this thing and they use the UML and I realized, Oh this is a good thing. Which is your creation. Yes. Yes. 
So it's a good thing because I felt confidence in the software that was there because I knew it was intentionally engineered. Great. I want to ask you some society questions around it. And computing. I see green as key and data centers take up a lot of space, right? So obviously we want to get to a smarter data center environment. >>And how do you see the role of software? I see with the cognitive all things you talked about helping businesses build a physical plant, if you will. And is it a shared plan is a Terminus, you seeing open power systems here from IBM, you hear him about the open sources source. Um, what, what does that future look like from your standpoint? May I borrow that cup of tea or coffee? I want to use it as a aid. Let's presume, Oh, it's still warm. Let's say that this is some tea and roughly the energy costs to boil water for a cup of tea is roughly equivalent to the energy costs needed to do a single Google search. Now imagine if I multiply that by a few billion times and you can begin to see the energy costs of some of the infrastructure, which for many are largely invisible. >>Some studies suggest that computing is grown to the place releasing the United States. It's consuming about 10% of our electrical energy production. So by no means is it something we can sweep under the rug. Um, you address I think a fundamental question, which is the hidden costs of computing, which believe people are becoming aware of the meaning. Ask the question also. Where can cognitive systems help us in that regard? Um, we live in, in Maui and there's an interesting phenomenon coming on where there are so many people using solar power, putting into the power grid that the electrical grid companies are losing money because we're generating so much power there. And yet you realize if you begin to instrument the way that people are actually using power down to the level of the homes themselves, then power generation companies can start making much more intelligent decisions about day to day, almost minute to minute power production. >>And that's something that black box analytics would help. But also cognitive systems, which are not really black box analytic systems, they're more learn systems, learning systems can then predict what that might mean for the energy production company. So we're seeing even in those places, the potential of using cognitive systems for, for uh, attending to energy costs in that regard. The future is a lot of possibilities. I know you've got to go, we're getting the hook here big time cause you gotta well we really appreciate it. These are important future decisions that are, we're on track to, to help solve and I really appreciate it. Looking for the documentary anytime table on that, uh, sometime before I die. Great. Thanks for coming on the, we really appreciate this. This SiliconANGLE's we'll be right back with our next guest at to nature. I break.

Published Date : Apr 29 2014
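A minimal sketch of the kind of per-home instrumentation and demand forecasting described in the Maui example above. The telemetry values, the hourly granularity, and the simple least-squares model are illustrative assumptions standing in for what a real learning system would use; nothing here comes from the interview itself.

    import numpy as np

    # Illustrative telemetry for one day: hour of day, aggregate rooftop-solar
    # output fed into the grid (MW), and the net demand the utility actually
    # had to generate (MW). All values are made up for the sketch.
    hours         = np.array([6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0, 20.0])
    solar_mw      = np.array([1.0, 8.0, 20.0, 28.0, 26.0, 15.0, 4.0, 0.0])
    net_demand_mw = np.array([40.0, 38.0, 30.0, 25.0, 27.0, 35.0, 46.0, 50.0])

    # Fit net demand as a linear function of hour and solar output with
    # ordinary least squares; a production system would use a far richer model.
    X = np.column_stack([np.ones_like(hours), hours, solar_mw])
    coef, *_ = np.linalg.lstsq(X, net_demand_mw, rcond=None)

    def predict_net_demand(hour, solar):
        """Forecast the generation the utility must supply for one interval."""
        return float(coef @ np.array([1.0, hour, solar]))

    # Example: expected net demand at 13:00 if 27 MW of rooftop solar is forecast.
    print(f"Predicted net demand at 13:00: {predict_net_demand(13, 27):.1f} MW")

The point is not the particular model but the loop it sketches: instrument at the household level, learn the relationship between local generation and net demand, and let the forecast drive production decisions.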

SUMMARY :

At IBM Impact 2014, Grady Booch joins John Furrier and Paul Gillin to discuss where computing is taking society. He identifies three inevitable and irreversible trends (ever-growing computational power, systems that never forget, and pervasive connectivity) and walks through their consequences, from the privacy questions raised by aggregating student data to the way Watson-style cognitive systems can reason over data too large and fast for any human. He argues that automation will reshape knowledge work and that the right response is understanding rather than fear, makes the case for computational thinking as a basic skill for the next generation, and calls out the cultural and educational biases that push young women out of computing. On engineering practice, he cautions against generalizing fail-fast development to mission-critical systems, recalling the intentionally engineered, UML-designed software behind his own heart surgery. He closes on the hidden energy costs of computing and how learning systems could help utilities manage power production, and mentions the documentary on computing and the human experience he is producing with the Computer History Museum.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Paul Gillin | PERSON | 0.99+
Paul Gillen | PERSON | 0.99+
John Ferrari | PERSON | 0.99+
Santa Barbara | LOCATION | 0.99+
15 years | QUANTITY | 0.99+
Maui | LOCATION | 0.99+
80% | QUANTITY | 0.99+
IBM | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
iPad | COMMERCIAL_ITEM | 0.99+
Java | TITLE | 0.99+
iPhones | COMMERCIAL_ITEM | 0.99+
Las Vegas | LOCATION | 0.99+
Facebook | ORGANIZATION | 0.99+
seven | QUANTITY | 0.99+
billions | QUANTITY | 0.99+
first | QUANTITY | 0.99+
today | DATE | 0.99+
second | QUANTITY | 0.99+
CERN | ORGANIZATION | 0.99+
John furrier | PERSON | 0.99+
third | QUANTITY | 0.99+
one | QUANTITY | 0.99+
United States | LOCATION | 0.99+
sixth grade | QUANTITY | 0.99+
Victorian | DATE | 0.99+
Watson | PERSON | 0.99+
10 years ago | DATE | 0.98+
2014 | DATE | 0.98+
One | QUANTITY | 0.97+
about 10% | QUANTITY | 0.97+
Impact 2014 | EVENT | 0.97+
Carnegie Mellon university | ORGANIZATION | 0.97+
Fitbit | ORGANIZATION | 0.97+
Grady Booch | PERSON | 0.96+
60s | DATE | 0.96+
SiliconANGLE | ORGANIZATION | 0.95+
millions of devices | QUANTITY | 0.94+
Google | ORGANIZATION | 0.94+
one space | QUANTITY | 0.94+
three things | QUANTITY | 0.93+
three trends | QUANTITY | 0.93+
billions of devices | QUANTITY | 0.92+
next couple of decades | DATE | 0.92+
above 20 | QUANTITY | 0.9+
few years ago | DATE | 0.89+
Gates foundation | ORGANIZATION | 0.86+
next 20 years | DATE | 0.86+
Joe | PERSON | 0.85+
over a decade | QUANTITY | 0.81+
C | TITLE | 0.77+
single | QUANTITY | 0.75+
billion times | QUANTITY | 0.71+
TheCUBE | ORGANIZATION | 0.63+
a hundred | QUANTITY | 0.63+
impact | EVENT | 0.35+
Impact | TITLE | 0.33+