Shahid Ahmed, NTT | MWC Barcelona 2023
(inspirational music) >> theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (uplifting electronic music) (crowd chattering in background) >> Hi everybody. We're back at the Fira in Barcelona, winding up our four-day wall-to-wall coverage of MWC23. theCUBE has been thrilled to cover the telco transformation. Dave Vellante with Dave Nicholson. Really excited to have NTT on. Shahid Ahmed is the Group EVP of New Ventures and Innovation at NTT, in from Chicago. Welcome to Barcelona. Welcome to theCUBE. >> Thank you for having me over. >> So, really interesting title. You have, you know, people might not know NTT, you know, huge Japan telco, but a lot of other businesses. Explain your business. >> So we do a lot of things. Mostly we are known for our Docomo business in Japan. We have one of the largest wireless cellular carriers in the world. We serve most of Japan. Outside of Japan, we are a B2B systems integration and professional services company. So we offer managed services. We have data centers, we have undersea cables. We offer all kinds of outsourcing services. So we're a big company. >> So there's a narrative out there that says, you know, 5G, it's a lot of hype, not a lot of adoption. Nobody's ever going to make money at 5G. You have a different point of view, I understand. You're like leaning into 5G and you've actually got some traction there. Explain that. >> So 5G can be viewed from two lenses. One is just you and I using our cell phones and we get 5G coverage over it. And the other one is for businesses to use 5G, and we call that private 5G or enterprise-grade 5G. Two very separate, distinct things, but it is 5G in the end. Now the big debate here in Europe and the US is how to monetize 5G. As a consumer, you and I are not going to pay extra for 5G. I mean, I haven't. I just expect the carrier to offer faster, cheaper services. And so would I pay extra? Not really.
I just want a reliable network from my carrier. >> Paid up for the good camera though, didn't you? >> I did. (Dave and Dave laughing) >> I'm waiting for four cameras now. >> So the carriers are in this little bit of a pickle at the moment because they've just spent billions of dollars, not only on spectrum but the infrastructure needed to upgrade to 5G, yet nobody's willing to pay extra for that 5G service. >> Oh, right. >> So what do they do? And one idea is to look at enterprises, companies, industrial companies, manufacturing companies who want to build their own 5G networks to support their own use cases. And these use cases could be anything from automating the conveyor belt, to cameras with 5G in them, to AGVs. These are little carts running around warehouses picking up products and goods, but they have to be connected all the time. Wi-Fi doesn't work all the time there. And so those businesses are willing to pay for 5G. So your question is, is there a business case for 5G? Yes. I don't think it's on the consumer side. I think it's on the business side. And that's where NTT is finding success. >> So you said, you know, how are they going to make money, right? You very well described the telco dilemma. We heard earlier this week, you know, well, we could tax the OTT vendors. Netflix, of course, shot back and said, "Well, we spent a lot of money on content. We're driving a lot of value. Why don't you help us pay for the content development?" Which is incredibly expensive. I think I heard we're going to tax the developers for API calls on the network. I'm not sure how well that's going to work out. Look at Twitter, you know, we'll see. And then yeah, there's the B2B piece. What's your take on, we heard the Orange CEO say, "We need help." You know, maybe implying we're going to tax the OTT vendors, but we're for net neutrality, which seems like it's completely counter-posed. What's your take on, you know, fair share in the network?
>> Look, we've seen this debate unfold in the US for the last 10 years. >> Yeah. >> Tom Wheeler, the FCC chairman, started that debate and he made great progress on open internet and net neutrality. The thing is that if you create a lane, a tollway, where some companies have to pay a toll and others don't have to, you create an environment where the innovation could be stifled. Content providers may not appear on the scene anymore. And with everything happening around AI, we may see that backfire. So creating a toll for rich companies to be able to pay that toll and get on a faster-speed internet, that may work in some places, may backfire in others. >> It's, you know, you're bringing up a great point. It's one of those sort of unintended consequences. You got to be careful because the little guy gets crushed in that environment, and then what? Right? Then you stifle innovation. So, okay, so you're a fan of net neutrality. You think the balance that the US model, for a change, maybe the US got it right, instead of like GDPR, which sort of informed the US on privacy, maybe the opposite on net neutrality. >> I think so. I mean, look, the way the US, particularly the FCC and the FTC, have mandated these rules and regulations, I think it's a nice balance. The FTC is all looking at big tech at the moment, but- >> Lina Khan wants to break up big tech. I mean for, you know, you big tech, boom, break 'em up, right? So, but that's, you know- >> That's a whole different story. >> Yeah. Right. We could talk about that too, if you want. >> Right. But I think that we have a balanced approach, a measured approach. Asking the content providers or the developers to pay for your innovative, creative application that's on your phone, you know, that's asking for too much in my opinion. >> You know, I think you're right though. Government did do a good job with net neutrality in the US and, I mean, I'm just going to get on my high horse for a second, so forgive me. >> Go for it.
>> Market forces have always done a better job at adjudicating, you know, competition. Now, if a company's a monopoly, in my view they should be, you know, regulated, or at least penalized. Yeah, but generally speaking, you know, the attack on big tech, I think, is perhaps misplaced. I sat through, and the reason it's relevant to Mobile World Congress or MWC, is I sat through a Nokia presentation this week and they were talking about Bell Labs, when the United States broke up, you know, the US telcos, >> Yeah. >> Bell Labs was a gem in the US and now it's owned by Nokia. >> Yeah. >> Right? And so you got to be careful about, you know, what you wish for with breaking up big tech. You got AI, you've got, you know, competition with China- >> Yeah, but the upside to breaking up Ma Bell was not just the baby Bells and maybe the stranded orphan asset of Bell Labs, but I would argue it led to innovation. I'm old enough to remember- >> I would say it made the US less competitive. >> I know. >> You were in junior high school, but I remember as an adult, having a rotary dial phone and having to pay for that access, and there was no such- >> Yeah, but they all came back together. The baby Bells are all, they got all acquired. And the cable company, it was no different. So I don't know, do you have a perspective on this? Because you know this better than I do. >> Well, I think, look at Nokia, they just announced a whole new branding strategy and new brand. >> I like the brand. >> Yeah. And- >> It looks cool. >> But guess what? It's B2B oriented. >> (laughs) Yeah. >> It's no longer consumer, >> Right, yeah. >> because they felt that the Nokia brand phone was sort of misleading toward a lot of business-to-business work that they do. And so they've oriented themselves to B2B. Look, my point is, the carriers and the service providers, network operators, and look, I'm a network operator, too, in Japan. We need to innovate ourselves. Nobody's stopping us from coming up with a content strategy.
Nobody's stopping a carrier from building an interesting, new, over-the-top app. In fact, we have better control over that because we are closer to the customer. We need to innovate, we need to be more creative. I don't think taxing the little developer that's building a very innovative application is going to help in the long run. >> NTT Japan, what, do they have a content play? I, sorry, I'm not familiar with it. Are they strong in content, or competitive, like Netflix-like, or? >> We have relationships with them, and you remember i-mode? >> Yeah. Oh yeah, sure. >> Remember in the old days. I mean, that was a big hit. >> Yeah, yeah, you're right. >> Right? I mean, that was actually the original app marketplace. >> Right. >> And the application store. So, of course we've evolved from that and we should, and this is an evolution, and we should look at it more positively instead of looking at ways to regulate it. We should let it prosper and let it see where- >> But why do you think that telcos generally have failed at content? I mean, AT&T is sort of the exception that proves the rule. I mean, they got some great properties, obviously, CNN and HBO, but generally it's viewed as a challenging asset and others have had to diversify or, you know, sell the assets. Why do you think that telcos have had such trouble there? >> Well, Comcast also owns a lot of content. >> Yeah. Yeah, absolutely. >> And I think, I think that is definitely a strategy that should be explored here in Europe. And I think that has been underexplored. I, in my opinion, I believe that every large carrier must have some sort of content strategy at some point, or else you are a pipe. >> Yeah. You lose touch with the customer. >> Yeah. And by the way, being a dumb pipe is okay. >> No, it's a lucrative business. >> It's a good business. You just have to focus. And if you start to do a lot of ancillary things around it, then you start to see the margins erode.
But if you just focus on being a pipe, I think that's a very good business and it's very lucrative. Everybody wants bandwidth. There's insatiable demand for bandwidth all the time. >> Enjoy the monopoly, I say. >> Yeah, well, capital is like an organism in and of itself. It's going to seek a place where it can insert itself and grow. Do you think that the questions around fair share right now are having people wait in the wings to see what's going to happen? Because especially if I'm on the small end of creating content, creating services, and there's possibly a death blow to my fixed costs that could be coming down the line, I'm going to hold back and wait. Do you think that the answer is let's solve this sooner rather than later? What are your thoughts? >> I think in Europe the opinion has always been to go after the big tech. I mean, we've seen a lot of moves, either through antitrust or other means. >> Or the guillotine! >> That's right. (all chuckle) A guillotine. Yes. And I've heard those directly. I think, look, in the end, the EU has to decide what's right for their constituents, the countries they operate in, and the economy. Frankly, with where the economy is, you got recession, inflation pressures, a war, and who knows what else might come down the pipe. I would be very careful in messing with this equilibrium in this economy, at least until we have gone through this inflation and recessionary pressure and see what happens. >> I, again, I think I come back to markets, ultimately, will adjudicate. I think what we're seeing with ChatGPT is like a Netscape moment in some ways. And I can't predict what's going to happen, but I can predict that it's going to change the world. And there's going to be new disruptors that come about. That just, I don't think Amazon, Google, Facebook, Apple are going to rule the world forever. They're just, I guarantee they're not, you know. They'll make it through. But there's going to be some new companies.
I think it might be OpenAI, might not be. Give us a plug for NTT at the show. What do you guys got going here? Really appreciate you coming on. >> Thank you. So, you know, we're showing off our private 5G network for enterprises, for businesses. We see this as a huge opportunity. If you look around here you've got Rohde & Schwarz, that's the industrial company. You got Airbus here. All the big industrial companies are here. Automotive companies and private 5G. 5G inside a factory, inside a hospital, a warehouse, a mining operation. That's where the dollars are. >> Is it a meaningful business for you today? >> It is. We just started this business only a couple of years ago. We're seeing amazing growth and I think there's a lot of good opportunities there. >> Shahid Ahmed, thanks so much for coming to theCUBE. It was great to have you. Really a pleasure. >> Thanks for having me over. Great questions. >> Oh, you're welcome. All right. For David Nicholson, Dave Vellante. We'll be back, right after this short break, from the Fira in Barcelona, MWC23. You're watching theCUBE. (uplifting electronic music)
Ana Pinheiro Privette, Amazon | Amazon re:MARS 2022
>> Okay, welcome back, everyone. Live CUBE coverage here in Las Vegas for Amazon re:MARS, a hot event: machine learning, automation, robotics, and space. Two days of live coverage. We're talking to all the hot technologists. We've got all the action, startups, and a segment on sustainability with Ana Pinheiro Privette, global lead of the Amazon Sustainability Data Initiative. Thanks for coming on theCUBE. Can I get that right? >> You, you, you did. >> Absolutely. Okay, great. <laugh> >> Thank you. >> Great to see you. We met at the analyst, um, mixer and, um, blown away by the story going on at Amazon around the sustainability data initiative, because we were joking, everything's a data problem now, cuz that's cliche. But in this case you're using data in your program and it's really kind of got a bigger picture. Take a minute to explain what your project is, the scope of it on the sustainability side. >> Yeah, absolutely. And thank you for the opportunity to be here. Yeah. Um, okay. So, um, I, I lead this program that we launched several years back, in 2018 more specifically, and it's a tech-for-good program. And when I say tech for good, what that means is that we're trying to bring our technology and our infrastructure and lend that to the world, specifically to solve the problems related to sustainability. And as you said, sustainability, uh, inherently needs data. We need data to understand the baseline of where we are and also to understand the progress that we are making towards our goals, right? But one of the big challenges is that the data that we need is spread everywhere. Some of it is too large for most people to be able to, um, access and analyze. And so, uh, what we're trying to tackle is really the data problem in the sustainability space. Um, what we do more specifically is focus on democratizing access to data.
So we work with a broader community and we try to understand what are those foundational data sets that most people need to use in the space to solve problems like climate change or food security, or think about sustainable development goals, right? Yeah. Yeah. Like all the broad space. Um, and, and we basically then work with the data providers, bring the data to the cloud, make it free and open to everybody in the world. Um, I don't know how deep you want me to go into it. There's many other layers to that. So >> The perspective is zooming out. You're, you're, you're looking at creating a system where democratizing data means making it freely available so that practitioners or citizen data wranglers, people interested in helping the world, could get access to it and then maybe collaborate with people around the world. Is that right? >> Absolutely. So one of the advantages of using the cloud for this kind of, uh, effort is that, you know, the cloud is virtually accessible from anywhere where you have, you know, internet or bandwidth, right? So, uh, when, when you put data in the cloud in a centralized place next to compute, it really, uh, removes the, the need for everybody to have their own copy. Right. The traditional way is that you bring the data next to your compute, and so we have these multiple copies of data. Some of them are on the petabyte scale. There's obviously the, the carbon footprint associated with the storage, but there's also the complexity that not everybody's able to actually analyze and have that kind of storage. So by putting it in the cloud, now anyone in the world, independent of their compute capabilities, can have access to the same type of data to solve >> The problems. You know, I remember doing a report on this in 2018 or 2017.
I forget what year it was, but it was around public sector, where there was a movement with universities and academia, where they were doing some really deep compute where Amazon had big customers. And there was a movement towards an open commons of data, almost like a national data set, like a national park kind of vibe, that seems to be getting momentum. In fact, this kind of sounds like what you're doing is similar, where it's open to everybody. It's kinda like open source meets data. >> Uh, exactly. And, and the truth is that this data, the majority of it, uh, we primarily work with what we call authoritative data providers. So think of like NASA, NOAA, the UK Met Office, organizations whose mission is to create the data. So they, their mandate is actually to make the data public. Right. But in practice, that's not really the case. Right. A lot of the data is stored like in servers or tapes, or not accessible. Um, so yes, you bring the data to the cloud. And in this model that we use, Amazon never actually touches the data, and that's very intentional so that we preserve the integrity of the data. The data provider owns the data in the cloud. We cover all the costs, but they commit to making it public and free to anybody. Um, and obviously the compute is next to it. >> Okay, Ana. So give me some examples of, um, some successes you've had, some of the challenges and opportunities you've overcome. Take me through some of the activities, because, um, this is really needed, right? And we gotta, sustainability is a top-line conversation, even here at the conference, re:MARS. They're talking about saving climate change with space, mm-hmm <affirmative>, which is legitimate. And they're talking about all these new things. So it's only gonna get bigger. Yeah. This data, what are some of the things you're working on right now that you can share? >> Yeah.
So what, for me, honestly, the most exciting part of all of this is, is when I see the impact that it's creating on customers and the community in general, uh, and those are the stories that really bring home the value of opening access to data. And, and I would just say, um, the program actually offers, in addition to the data, um, access to free compute, which is very important as well. Right? You put the data in the cloud. It's great. But then if you wanna analyze that, there's the cost, and we want to offset that. So we have a, basically an open call for proposals. Anybody can apply and we subsidize that. But so what we see by putting the data in the cloud, making it free and making the compute accessible, is that, like, we see a lot, for instance, startups. Startups jump on it very easily because they're very nimble. They, we basically remove all the cost of investing in the acquisition and storage of the data. The data is connected directly to the source and they don't have to do anything. So they easily build their applications on top of it and workloads, and turn it on and off if, you know, >> So they don't have to pay for it. >> They have to pay, they basically just pay for the compute whenever they need it. Right. So all the data is covered. So that makes it very feasible for, for a lot of startups. And then we see anything like from academia and nonprofits and governments working extensively on the data, what >> Are some of the coolest things you've seen come out of the woodwork in terms of, you know, things that built on top of the, the data? The builders out there are creative, all that heavy lifting's gone, they're being creative. I'm sure there's been some surprises, um, or obvious verticals that jump out. Healthcare jumps out at me. I'm not sure if FinTech has a lot of data in there, but there's healthcare. I can see, uh, a big air vertical, obviously, you know, um, oil and gas, probably a concern. Um, >> So we see it all over the space, honestly.
But for instance, one of the things that is very, uh, common is for people to use the, uh, NOAA data, like weather data, because, you know, basically weather impacts almost anything we do, right? So you have this forecast data coming into the cloud, directly streamed from NOAA. And, um, a lot of applications are built on top of that. Like, um, forecasting radiation, for instance, for the solar industry, or helping with navigation. But I would say some of the stories I love to mention, because they are very impactful, are when we take data to remote places that traditionally did not have access to any data. Yeah. And for instance, we collaborate with a, with a program, a nonprofit called Digital Earth Africa, where they, this is a basically philanthropically supported program to bring earth observations to the African continent, making it available to communities and governments, in things like fighting illegal mining, deforestation, you know, from mangroves to deep forest. Um, it's really amazing what they are doing. And, uh, they are managing >> The low-cost nature of it makes it a great use case there. >> Yes. Cloud. So it makes it feasible for them to actually do this work. >> Yeah. You mentioned the NOAA data making me think of the Saildrone. Mm-hmm <affirmative>, my favorite, um, use case. Yes. Those Saildrones go around, we've had them twice on theCUBE at re:Invent over the years. Yeah. Um, really good innovation. That vibe is here too at the show, at re:MARS this week. At the robotics showcases you have startups and growing companies in the ML and AI areas. And you have that convergence, not obvious to many, but here, this culture is like, hey, we have, it's all coming together. Mm-hmm <affirmative>, you know, physical, industrial space is a function of the new OT landscape. Mm-hmm <affirmative>. I mean, there's no edge in space, as they say, right. So it's unlimited edge. So this kind of points to the major trend.
It's not stopping this innovation, but sustainability has limits on earth. We have issues. >> We do have issues. And, uh, and I, I think that's one of my hopes, is that when we come to the table with the resources and the skills we have, and others do as well, we try to remove some of these big barriers, um, that make things harder for us to move forward as fast as we need to. Right. We don't have time to spend that. Uh, you know, it's been accounted that 80% of the effort to generate new knowledge is spent on finding the data you need and cleaning it. Uh, we, we don't have time for that. Right. So can we remove that undifferentiated heavy lifting and allow people to start at a different place and generate knowledge and insights faster? >> So that's key, that's the key point, having them innovate on top of it, right. What are some things that you wanna see happen over the next year or two, as you look out, um, hopes, dreams, KPIs, performance metrics? What are you, what are you driving to? What's your north star? What are some of those milestones? >> Yeah, so some, we are investing heavily in some areas. Uh, we support, um, you know, we support sustainability broadly, which, as you know, it's like, it's all over <laugh> the space, but, uh, there's an area that is, uh, becoming more and more critical, which is climate risk. Um, climate risk, you know, for obvious reasons we are experiencing, but also there's more regulatory pressure on, uh, business and companies in general to disclose their risks, not only the physical, but also the transition risks. And that's a very, uh, data-heavy and compute-heavy space. Right. And so we are very focused on trying to bring the right data and the right services to support that kind of, of activity. >> What kind of break are you looking for? >> Um, so I think, again, it goes back to this concept that there's all that effort that needs to be done equally by so many people, that we are all repeating the effort.
So I'll put a plug here actually for a project we are supporting, which is called OS-Climate. Um, I don't know if you're familiar with it, but it's the Linux Foundation effort to create an open source platform for climate risk. And so they, they brought S&P Global, Airbus, you know, Allianz, all these big companies together. And we are one of the funding partners to basically do that baseline work. What is the data that is needed? What are the basic tools? Let's put it there and do the pre-competitive work. So then you can build the, the, the competitive part on top of it. So >> It's kinda like a data clean room. >> It kind of is, right. But we need to do those things, right. So >> Are they worried about competitive data, or is it more anonymized out? How do you, >> It has both actually. So we are primarily contributing, contributing with the open data part, but there's a lot of proprietary data that needs to be behind the, the walls. So, yeah, >> You're on the cutting edge of data engineering because, you know, web and ad tech technologies used to be where all that data sharing was done. Mm-hmm <affirmative>, for the commercial reasons, you know, the best minds in our industry, quoted by a CUBE alumni, are working on how to place ads better. Yeah. Jeff Hammerbacher, co-founder of Cloudera, said that on theCUBE. Okay. And he was like embarrassed, but the best minds are working on how to make ads get more efficient. Right. But that tech is coming to problem solving, and you're dealing with data exchange, data analysis from different sources, third parties. This is a hard problem. >> Well, it is a hard problem. And I'll, I'll, my perspective is that the hardest problem with sustainability is that it goes across all kinds of domains. Right. We've traditionally been very comfortable working in our little, you know, swimming lanes, yeah, where we don't need to deal with interoperability and, uh, extracting knowledge.
But sustainability, you, you know, you touch the economic side, it touches the social or the environmental, it's all connected. Right. And you cannot just work in the little space and then go assess the impact in the other one. So it's going to force us to work in a different way. Right. It's, uh, big data, complex data, yeah, from different domains. And we need to somehow make sense of all of it. And there's the potential of AI and ML and things like that that can really help us, right, to go beyond the, the modeling approaches we've done so >> Far. And trust is a huge factor in all this, trust. >> Absolutely. And, and just going back to what I said before, that's one of the main reasons why, when we bring data to the cloud, we don't touch it. We wanna make sure that anybody can trust that the data is NOAA data or NASA data, but not Amazon data. >> Yes. Like we always say on theCUBE, you should own your data plane. Don't give it up. <laugh> Well, that's cool. Great. Great to hear the update. Are there any other projects that you're working on you think might be cool for people that are watching, that you wanna plug or point out? Because this is an area people are, are leaning into, yeah, and learning more, younger talent coming in. Um, I, whether it's university students to people on side hustles who want to play with data. >> So we have plenty of data. So we have, uh, we have over a hundred data sets, uh, petabytes and petabytes of data, all free. You don't even need an AWS account to access the data and take it out if you want to. Uh, but I, I would say a few things that are exciting that are happening at re:MARS. One is that we actually got integrated into ADX, so the AWS Data Exchange, and what that means is that now you can find the open data, free data from ASDI in the same search capability and service as the paid data, right, licensed data. So hopefully we'll make it easier. If I, if you wanna play with data, we have actually something great.
We just announced a hackathon this week, uh, in partnership with UNESCO, uh, focused on sustainable development goals, uh, a hundred K in prizes and, uh, so much data. <laugh> >> There you go, the world is your oyster. Go check that out. Is there a URL, a website? I assume it's on Amazon. Is it your website or a project they can join, or how do people get in touch with you? >> Yeah. So, uh, Amazon SDI, like for Amazon Sustainability Data Initiative, so amazonsdi.com, and you'll find, um, all the data, a lot of examples of customer stories that are using the data for impactful solutions, um, and much more. >> So, and these are, there's a, there's a, a new kind of hustle going out there, seeing entrepreneurs do this. And very successfully, they pick a narrow domain and they, they own it. Something really obscure that could be off the big players' reservation. Mm-hmm <affirmative>, and they just become fluent in the data. And it's a big white space for them, right. There's market opportunities. And at the minimum you're playing with data. So this is becoming kind of like a long-tail domain expertise data opportunity. Yeah, absolutely. This is really hot. So yes. Yeah. Go play around with the data, check it out, it's for a good cause too. And it's free. >> It's all free. >> Almost free. It's not always free. Is it >> Always free? Well, if you, a friend of mine said it's only free if your time is worth nothing. <laugh> Yeah, >> Exactly. Well, Ana, great to have you on theCUBE. Thanks for sharing the stories. Sustainability is super important. Thanks for coming on. >> Thank you for the opportunity. >> Okay. theCUBE coverage here in Las Vegas. I'm John Furrier. We'll be back with more day one after this short break.
Stu Miniman, Red Hat | KubeCon + CloudNativeCon EU 2022
(upbeat music) >> Kubernetes is maturing, for example moving from quarterly releases to three per year. It's adding many of the capabilities that early on were avoided by Kubernetes committers but now are going more mainstream, for example more robust security and better support for multicluster management and other functions. But core Kubernetes by itself doesn't get organizations where they need to go. That's why the ecosystem has stepped up to fill the gaps in application development. Developers, as we know, don't care about infrastructure, but they do care about building new apps. They care about modernizing existing apps, leveraging data, scaling; they care about automation. Look, they want to be cloud native. And one of the companies leading the ecosystem charge and building out more robust capabilities is Red Hat. Ahead of KubeCon Spain, it's our pleasure to welcome in Stu Miniman, director of market insights at Red Hat, to preview the event. Stu, good to see you, how you been? >> I'm doing awesome, Dave. Thanks for having me, great to be here. >> Yeah. So what's going on in Kube land these days? >> So it's funny, Dave. If you just listen out there in the marketplace, the CNCF has a survey that says something like 96% of companies are running Kubernetes in production, everybody's doing it. And others will say, oh no, only a small group of people are using Kubernetes, and there are probably already newer technologies replacing it. The customers that I'm talking to, Dave: first of all, yes, containers and Kubernetes, great growth rate, good adoption overall. I think we said more than a year or two ago that we'd probably crossed that chasm, in the Geoffrey Moore sense. It's no longer just the early people building all their own thing, taking all the open source and building this crazy stack themselves; they had to do a lot of work.
Chewing glass, we used to say, to be able to make it work right. But it's still not as easy as you would like. Almost no company that I talk to, if you're talking about big enterprises, has Kubernetes enterprise wide with a hundred percent of their applications running on it. What is the tough challenge for people? I mean, Dave, it's something you and I have covered for many, many years: that application portfolio. Most enterprises have hundreds or thousands of applications, and modernizing that, having it truly be cloud native, is a really long journey, and we are still in the midst of it. So if you look at crossing the chasm, I still think we're in that early majority chunk. So some of it is, how do we mature things even better? And how do we make things simpler? Talk about things like automation, simplicity, security; we need to make sure they're all there so that it can be diffused and rolled out more broadly. And then we also need to think about where we are. We talk about the next million cloud customers: where do Kubernetes, containers, and all the cloud native pieces fit into that broader discussion? Yes, there's some maturity there and we can declare victory on certain things, but there's still a lot, a lot of work that everyone's doing, and that leads us into the show. I mean, dozens of projects have already graduated, and many more are along that process from sandbox on up. There's a whole bunch of co-located events, and it's always a great community event, which Red Hat, of course built on open source and community projects, is happy to have a good presence at as always.
>> So you and I have talked about this in the past, how essentially containers are going to be embedded into a lot of different places, and sometimes it's hard to find, it's hard to track. But look at the pre-DevOps-world skillsets, like provisioning LUNs, or configuring ports, or troubleshooting, or squeezing more server utilization; I mean, those were really in high demand. If that's your skillset today, you're probably out of a job. And so that's shifted toward things like Kubernetes. And you see it in the ETR data: along with cloud and RPA or automation, it is right up there. I mean, it's the big four, if you will: cloud, automation, RPA, and containers. And so we know there's a lot of spending activity going on there, but sometimes, like I said, it's hard to track. I mean, if you've got cloud growing at 35% a year, at least for the hyperscalers that we track, Kubernetes should be growing faster than that, should it not? >> Yeah, Dave, I would agree with you. When I look at the big analyst firms that track this, I believe they've only got the container space at about a 25 percent growth rate. >> Slower than cloud. >> But I compare that with Deepak Singh at AWS; he has the open source office, he has all the containers and Kubernetes, and he has visibility into all of that. And he says, basically, containers are the default when somebody's deploying to AWS today. Yes, serverless has its place, but it has not replaced, and is not slowing down, the growth of containers or Kubernetes. We've got a strong partnership; I have lots of customers running on AWS. I guess I look at the numbers and, like you, I would expect that growth rate to be north of where cloud in general is, because with the general adoption of containers and Kubernetes, we're still in the early phases of things. >> And I think a lot of the spending, Stu, is actually in labor resources within companies, and that's hard to track.
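As a back-of-the-envelope check on the rates Dave and Stu just quoted (35% for cloud versus an estimated 25% for containers, treated here as hypothetical constants), compounding over an illustrative five-year horizon shows how far apart those curves end up:

```python
def compound(rate: float, years: int) -> float:
    """Total growth multiple after `years` at a constant annual `rate`."""
    return (1.0 + rate) ** years

cloud = compound(0.35, 5)       # ~4.48x over five years
containers = compound(0.25, 5)  # ~3.05x over five years
print(f"cloud: {cloud:.2f}x, containers: {containers:.2f}x")
```

Even a ten-point gap in annual rate compounds into a roughly 50% larger market multiple after five years, which is why the discrepancy between the analyst figure and the cloud growth rate is worth flagging.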
Let's talk about what we should expect at the show. Obviously this whole notion of secure supply chain was a big deal last year in LA. What's hot? >> Yeah, so security, Dave, absolutely. You've said for years it's a board level discussion, and it's now something that really everyone in the organization has to know about. The DevSecOps movement has seen a lot of growth. Secure supply chain: we're just trying to make sure that when I use open source, and there are lots of projects and huge ecosystems and marketplaces out there, I know where all the pieces I grab came from, with the proper signatures and certification, so that I understand the full solution that I build. And if there are vulnerabilities, I know whether there's an issue and how I patch it. In the industry we talk about CVEs, those vulnerabilities and exploits that come out; everybody then has to do a quick runaround to understand, wait, is my configuration vulnerable? Do I have to patch things? So security, absolutely, still a huge, huge thing. Quickly, from a Red Hat standpoint, people might notice we made an acquisition a year ago of StackRox. That product now also has a completely, fully open source project behind it, also called StackRox. So the product is Red Hat Advanced Cluster Security for Kubernetes, and there's an open source equivalent called StackRox now: open source, community, with a monthly office hours livestream that a guy on my team actually does. So there'll be a lot of activity at the show talking about security. So many other things are happening at the show, Dave. Another key area: you talked about the developers and what they want to worry about and what they don't. In the container space, there's a project called Knative. Google helped create that, and it helps me have a really serverless operational model, with the containers and Kubernetes still underneath.
So at the show, there will be the first KnativeCon. And if you hadn't looked at Knative in a couple of years, one of the missing pieces that is now there is eventing. So if I look at functions and events, that event capability is now there. It's something a lot of customers I've talked to were waiting for. It's not quite the same as a Lambda, but it's similar functionality that I can have in my containers and Kubernetes world. So that's an area that's there, and there are so many others. I mean, GitOps was super hot at the last show. We've seen really broad adoption since Argo CD went generally available last year, and lots of customers are taking that to help them. That's both automation pieces put together, because I can allow GitHub to be my single source of truth for where I keep code, and make sure I don't have any deviation from where the golden image, if you will, lives. >> So we were talking earlier about how hard it is to track this stuff. With the steep trajectory of growth and new customers coming on, there's got to be a lot of experimentation going on, probably somebody downloading the open source code and starting to play with it. And then when they go to production, I would imagine, Stu, that's the point at which they say, hey, we need to fill some of these gaps, and they reach out to a company like yours and say, now we've got to have certifications and trust. Do you see that? >> So here's the big shift that happened. If we were looking four or five years ago, absolutely, I'd grab the open source code, and some people might still do that. But what cloud really enabled, Dave, is that rather than just going to the GitHub repo and pulling it down myself, I can go to the cloud; Microsoft, AWS, and Google all have their Kubernetes offering, and I click a button. But that just gives me Kubernetes, so there's still a steep learning curve.
And as you said, to build out that full stack: that is one of the big things we do with OpenShift. We take dozens of projects and pull them in together so you get a full platform. So you spend less time curating, integrating, and managing that platform, and more time on the real value for your business, which is the application stack itself, the security, and the like. And when we deliver OpenShift in the cloud, we have an SRE team that manages it for you. So one of the big challenges out there: there is a skillset gap. There are thousands of people getting certified on Kubernetes, and I think I saw over a hundred thousand job openings with Kubernetes mentioned in them. We just can't train people up fast enough, and the question I would have as an enterprise company is, if I'm going to the cloud, how much time do I want to spend having SREs focus on the infrastructure versus the things that are business specific? What did Amazon promise, Dave? We're going to help you get rid of undifferentiated heavy lifting. Well, I just consume things as a service, where an SRE team manages that environment for me. That might make more sense, so that I can spend more time focusing on my business activities. That's been a big focus at Red Hat: the managed offerings that we have with the cloud providers. >> Yeah, the managed service capability is key. We saw, going back to the Hadoop days, that's where Cloudera really struggled. They had to support every open source project, and then the customers largely had to figure it out themselves. Whereas look at what Databricks did with Spark: it was a managed service, and it was getting much greater adoption. So for these complex areas, that's what you need.
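To make the GitOps pattern Stu described a moment ago concrete, Git as the single source of truth with Argo CD continuously reconciling the cluster against it, here is a rough sketch of an Argo CD Application resource. The repo URL, paths, and names are hypothetical placeholders, not anything from the conversation:

```yaml
# Hypothetical Argo CD Application: the controller continuously syncs the
# cluster state to whatever is committed in the Git repo, so any drift from
# the "golden" manifests is detected and corrected.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                  # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/my-app-manifests  # placeholder repo
    targetRevision: main
    path: deploy/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true               # delete resources removed from Git
      selfHeal: true            # revert manual changes that drift from Git
```

With `selfHeal` and `prune` enabled, manual edits on the cluster are reverted and resources deleted from Git are removed, which is exactly the "no deviation from the golden image" property described above.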
So, people wince sometimes when I use the term super cloud, and we get into little debates on Twitter, which is a lot of fun. But the idea is that you create an abstraction layer that spans your on-prem and your cloud, so you've got a hybrid. You want to go across clouds, what people call multi-cloud, though as you know I've been sort of skeptical that multi-cloud is really anything more than multi-vendor. So we're talking about a substantial experience that's identical across those clouds, and then ultimately out to the edge, and we see a super PaaS layer emerging and people building on top of that, hiding the underlying complexity. What are your thoughts on that? How does Kubernetes, in your view, fit in? >> Yeah, it's funny, Dave. If you look at the beginning of this container space, Docker came out of a company called dotCloud. That was a PaaS company. And there have been so many runs at that core functionality of, how do I make my developers not have to worry about all that underlying gunk? But Dave, while the developers might not have to worry about the LUNs, somebody needs to understand how storage works, how networking works, and if something breaks, how to take care of it. Sometimes that's a service that the SRE team manages away from me, so that yes, there are things I don't need to think about, but these are technically tough configurations. So first, to one of your main questions: what do we see in customers with their hybrid and multi-cloud journey? OpenShift is over 10 years old; we started OpenShift before Kubernetes even was a thing. Lots of our customers run in what most people would consider hybrid. What does that mean? I have something in my data center and something in the cloud, and OpenShift helps, thanks to Kubernetes: I can have consistency for the developers, the operators, and the security team across those environments.
Over the last few years, we've been doing a lot in the Kubernetes space as a whole, as a community, to get Kubernetes out to the edge. So one of the nice things: where do containers live, Dave? Anywhere Linux does. Is Linux going to be out at the edge? Absolutely. It can be a small footprint, and we can do a lot with it. There were a lot of vendors that came out with something that wasn't quite Kubernetes; they would strip certain things out or make a configuration that was smaller out at the edge. But a lot of times it was just something for a developer, something I could play with, and what it would sometimes break was the consistency between the edge and what my other environments would like to have. And what if I'm a company that needs consistency there? Take, for example, an AI workload where I need the edge, and I need consistency with something in the cloud or in my data center. The easy use case that everybody thinks about is autonomous vehicles. We work with a lot of the big car manufacturers: when my developers build something, often the training will be done either in the data center or in the public cloud, but I need to be able to push that out to the vehicle itself and let it run. We've even, Dave, got Kubernetes running up on the ISS. And you want to make sure that we have consistency. >> The ultimate edge. >> Yeah, it's edge above and beyond the clouds even; we've gone beyond. So that is something that the industry as a whole has been working at. From a Red Hat standpoint, we can take OpenShift to a really small footprint. Last year we launched what's known as single node OpenShift. We have a project called MicroShift, also fully open source, that has fewer pieces of the overall environment, to be able to fit onto smaller and smaller devices. But we want to be able to manage all of them consistently, because you talked about multicluster management.
Well, what if I have thousands or tens of thousands of devices out at the edge? I don't necessarily have network, I don't have people, and I need to be able to do things in an automated way. That's where containers and Kubernetes really can shine, and where a lot of effort has been put in generally, and specifically something we're working on at Red Hat. We've had some great customers in the telecommunications space, talk about the 5G rollout, and industrial companies that need to be able to push out to the edge for these types of solutions. >> So you just kind of answered my next question, but I want to double click on it, which was: if I'm in the cloud, why do I need you? And you touched on it, because you've got primitives and APIs in AWS, Google, and Microsoft, and they're different. If you're going to hide the underlying complexity of that, it takes a lot of R&D and work. Now extend that to a Tesla: you've got to make it run there, a different use case. But that's kind of what Linux and OpenShift are designed to do, so double click on that. >> Yeah, so right, the discussion you've been having about super clouds is interesting, because many companies that we work with do live across multiple environments. So number one, if I'm a developer and my company came to me and said, hey, you've got all your certifications and years of experience running on Amazon, but we need you to go run over on Google, that developer might switch companies rather than switch clouds, because they've got all of that knowledge and skillset, and it's a steep learning curve. So there are a lot of companies that work on, how can we give you tools and solutions that can live across those environments? You mentioned companies like Snowflake and MongoDB; companies like Red Hat, HashiCorp, and GitLab also span all of those environments.
There's a lot of work, Dave, in being more than that. I don't love a term like "we're cloud agnostic," which would just mean, well, you can use any cloud. >> You can run on any cloud.
We build out and support lots of ecosystems. And this show itself is very much a community driven show. And, and therefore, that's why Red Hat has a strong presence at it, 'cause that's the open source community and everything that we built on. >> You guys are knee deep in it. You know I wrote down when you were talking about Snowflake and Mongo, HashiCorps, another one, I wrote down Dell, HP, Cisco, Lenovo, that to me, that should be their strategy. NetApp, their strategy should be to basically build out that abstraction layer, the so-called super cloud. So be interesting to see if they're going to be at this show. It requires a lot of R and D number one, number two, to your point, it requires an ecosystem. So you got all these guys, most of them now do in their own as a service, as a service is their own cloud. Their own cloud means you better have an ecosystem that's robust. I want to ask you about, do you ever think about what's next beyond Kubernetes? Or do you feel like, hey, there's just so much headroom in Kubernetes and so many active projects, we got ways to go. >> Yeah, so the Kubernetes itself Dave, should be able to fade into the background some. In many ways it does mirror what happened with Linux. So Linux is just the foundation of everything we have. We would not have the public cloud providers if it wasn't for Linux. I mean, Google, of course you wouldn't have without Linux, Amazon. >> Is on the internet. >> Right, but you might not have a lot of it. So Kubernetes, I think really goes the same way is, it is the foundational layer of what so much of it is built on top of it, and it's not really. So many people think about that portability. Oh, Google's the one that created it, and they wanted to make sure that it was easy if I want to go from the cloud provider that I had to use Kubernetes on Google cloud. And while that is a piece of it, that consistency is more important. 
And what I can build on top of it, it is really more of a distributed systems challenge that we are solving and that we've been working on in industry now for decades. So that is what we help solve, and what's really nice, containers and Kubernetes, it's less of an abstraction, it's more of new atomic unit of how we build things. So virtualization, I don't know what's underneath, and we spent like a decade fixing the storage networking components underneath so that the LANs matched right, and the network understood what was happening in the virtual machine. The atomic unit of a container, which is what Kubernetes manages is an application or a piece of an application. And therefore that there is less of an abstraction, more of just a rearchitecting of how we build things, and that is part of what is needed, and boy, Dave, the ecosystem, oh my God, yes, we've gone to only three releases a year, but I can tell you our roadmaps are all public on the internet and we talk heavily about them. There is still so many things that just at the basic Kubernetes piece, new architectures, arm devices are now in there, we're now supporting them, Kubernetes can support them too. So there are so many hardware pieces that are coming, so many software devices, the edge, we talked about it a bit, so there's so much that's going on. One of the areas that I love hearing about at the show, we have a community event called OpenShift Comments, which one of the main things of OpenShift Comments, is customers coming to talk about what they've been doing, and not about our products, we're talking about the projects and their journey overall. We've got a at Flenty Show, Airbus and Telefonica, are both going to be talking about what they're doing. We've seen Dave, every industry is going through their digital transformation journey. And it's great to hear straight from them what they're doing, and one of the big pieces in area, we actually spend a bunch of time on that application journey. 
There's a group of open source projects under what's known as Konveyor, that's conveyor with a K, Konveyor.io. It's modernization and migration. How do I go from a VM to a container? How do I go from my data center to a cloud? How do I switch between services? They're open source projects to help with that journey. And oh my gosh, Dave, you know in the cloud space that's what all the SIs and all the consultancies are throwing thousands of people at: helping us get along that curve of the modernization journey. >> Okay, so let's see, May 16th, the week of May 16th, is KubeCon in Valencia, Spain. theCUBE's going to be there. There was a little bit of a kerfuffle on Twitter because the mask mandate was lifted in Spain, and people had made plans thinking, okay, it's safe, everybody's going to be wearing masks. Well, now, I mean, you're going to have to make your own decisions on that front. I mean, you saw that, you follow Twitter quite closely. But hey, this is the world we live in. So I'll give you the last word. >> Yeah, we'll see if Twitter still exists by the time we get to that show.
We'd had that a whole bunch of times before I remember, back to back weeks in Boston one year where we had both of those events and everything. That's definitely. >> Connective tissue. >> Keeps us busy there. You've got a whole bunch of travel going on. I'm not doing too much travel just yet, Dave, but it's good to see you and it's great to be connected with community. >> Yeah, so theCUBE will be there. John Furrier is hosting with Keith Townsend. So if you're in Valencia, definitely stop by. Stu thanks so much for coming into theCUBE Studios I appreciate it. >> Thanks, Dave. >> All right, and thank you for watching. We'll see you the week of May 16th in Valencia, Spain. (upbeat music)
Jadesola Adedeji, STEM METS | Women in Tech: International Women's Day
(upbeat instrumental music) >> Hey, everyone, welcome to theCUBE's coverage of the International Women's Showcase 2022. I'm your host, Lisa Martin. I'm pleased to welcome my next guest, Jadesola Adedeji, the Chief Executive Officer of STEM METS. Jadesola, it's wonderful to have you on the program. >> Thank you so much, Lisa. It's great to be here, thank you. >> I was looking you up on LinkedIn and I noticed that your profile describes you as a social entrepreneur. Talk to me about that. >> Well, basically, the idea is that we are a business, but we are in the social segment. And of course, that segment for us is education, which is obviously one of the critical things that you need in life to thrive and to progress. So it's a social need, and we are in that space trying to make a difference and bridge a gap in the education sector, which is around digital skills, 21st century skills. >> Jadesola, talk to me about STEM METS, the impetus to found this organization, which you and a physician friend founded seven years ago. What was the genesis? >> Okay, so about 10 years ago, my husband and I moved back to Nigeria from North America, where we'd been working and studying. And we decided that we would take our experience and education back home, as well as our young kids, who were six and 10 at the time. But when we got home, what we found was a broken and impoverished educational system. And Nigeria was, you know, essential in our own foundational years, so it was really shocking and disappointing that our education system hadn't moved with the 21st century. A lot of our youth were leaving school without the relevant skills to get meaningful jobs. So my co-founder and I decided to do something about that by bringing in a different and more up-to-date way of learning and teaching, which was STEM education.
And so that's how we started, so both of us had a STEM background and we decided that, well, we would do something or attempt to do something about the state of our education in Nigeria. And so that's how we started. >> I love that. And you were talking to me a little bit earlier about the enrollment rate of students. Share with the audience what some of those statistics are and why this STEM METS program is so pivotal. >> Mm hmm. So as I said earlier, there are about 80 million school-age children in Nigeria. There are 10 million children that are out of school, of which about 50 to 60% are actually girls. So we are already at a disadvantage regarding our female population and even diversity in education. And so for us, we saw it as being bad enough that we can't even get into school and then when we get into school, you're not getting quality education. You get an education, but not sufficient enough with skills to get you meaningful jobs. And so for us, STEM education was the answer to trying to bring up the quality of our education and making sure that what the learning that was going on was relevant to the 21st century, which is innovation-driven, which is technology-driven, and combining that with soft skills that are required for the future workplace or even a life in entrepreneurship. And so, that's what we did in response to that. >> Tell us a little bit about the curriculum. And also, are you focused on young, school-age children, primary school, high school? >> Sure. So the great thing about what we do is that early years is essential, we feel, because those are the foundational years when the brain is developing. 
So we run programs for children from ages three to 16 and we run a variety of programs, so anything from construction with Lego, robotics, coding, UX design, sound and technology, just to be able to show the array of skills and modules that are available under the STEM umbrella, and also be able to showcase the diversity in terms of career options that are available to the children in our community. >> Who are some of the educators? Because one of the things that we say often when we talk about women in STEM and women in tech, or some of the challenges with respect to that, is: we can't be what we can't see. Talk to me about some of the mentors or the educators within STEM METS that these young girls, as young as three, can have a chance to look up to. >> Well, so that's the thing. I think, fundamentally, our co-founders, myself and my co-founder, were pivotal in terms of positioning ourselves as role models. We're female, we both had a STEM background. And then, secondly, our educators. Not being sexist, but about 90% of our educators are female. So we train them. We make sure they have the skills that they require to also implement our programs. And that is a secondary way of also showcasing to the children and the girls that we are teaching that, look, you know, STEM isn't just for boys. These are live and present role models that you can aspire to be. And we also felt that it was essential for us to recruit from the female pool, and it also helps working mothers. So they are able to look after their family, as well as still earn an income to support their families. Otherwise, they would have to give up one or the other. And because our programs are supplementary classes and we run them as after-school clubs or holiday clubs, they are able to manage their time and their family accordingly. So we see what we are doing as two programs. We are educating the kids, we are educating the girls, but we're also capacity building in terms of the female workforce.
So yes, we think that what we're doing is just really feeding the female ecosystem and just ensuring that we are developing women with relevant skills. >> So she can be what she can see, because you're enabling her to see it. Talk to me about the number of educators versus the number of girls that are in the program so far in the first seven years. >> Okay, so to date, we've reached about 10,000 learners, of which I would say about 40% are female. Obviously, our aim is to be sure that that number increases. So we're quite targeted in some of our programs, particularly the ones that we take to low-resource communities. We are supported by brands and organizations such as the Airbus Foundation, so that enables us to take our programs to the low-resource communities, and we ensure that the enrollment and the sign-up is equitable, ensuring that the girls also have access to it. >> I'm curious about your background. You said you were 20 years in the pharmaceutical industry. Were you always interested in STEM fields since you were a child or is that something that you got into a little bit later? >> Actually I think unconsciously, well, since I was a child. In our culture, at least then when I was growing up, you were either a doctor, or an engineer, or a lawyer. So there were specific pathways. So if you were in the liberal arts, you were expected to go into maybe law. If you were in science, engineering, or medicine. So I went down the pathway of pharmacy as a sort of in-between, because I wasn't very good at physics, so engineering wasn't an option. But I think growing up, you know, I felt that we had role models that we could also look up to, so going into the STEM field was something that, you know, was somewhat natural actually in my educational journey. Yeah, so that's how I got into the STEM field, encouraged by my dad actually.
You know, he said, "You know, if you're going to go into a life science sector, make sure you have something that is professional, something that can make you independent." So my career started in the pharma industry, but then I ended up running my own businesses as well, so I had a couple of pharmacies in Canada when we lived there. So I ran that as a businesswoman, but still in the life science field.
>> That's so important because, as you shared with us, in your 20-year history in the pharmaceutical industry you ran businesses, you ran your own pharmacies, and you parlayed your expertise in the STEM field into running STEM METS. But what you're showing these kids that you've reached so far, and all the many tens of thousands that you'll reach in the future, is that it's not just doctor, lawyer, firefighter. There are so many options. I love how you have a program with Spotify. Kids probably go, "Wait, what? Music production? I wouldn't have thought of that as under the STEM umbrella." But you're showing them, you're making them aware, that there's so much breadth to what STEM actually is.
But I mean, we exposed them to a plethora of different programs so we are here now. >> And you're a STEM family. But that exposure is what it's all about, like we talked a minute ago about, you know, she can be what she can see. She needs to be able to see that, she needs to have that exposure, and that's what you're helping to accomplish with the STEM METS. Talk to me, last question. What are some of the objectives that you have for the next, say, two to five years with STEM METS? >> So for us in the next two to five years is really looking for opportunities to extend the reach of our program. With COVID, obviously we had to pivot online so we're seeing ourselves now as a blended learning education company. So we want to build out our online presence and capability. We definitely are looking to reaching about five to 10 thousand learners per year so we're really looking at, you know, our path to scaling. And that could be things like trainer sessions where we also equip our teachers, who then go on to equip students in their community or in their schools, as well. So path to scaling is really important to us and we are looking to see how technology can help us do that. >> Excellent. Well, we wish you the best of luck on your path to scale, and congratulations on all the success and the youths that you have reached so far. Sounds like a great organization and we appreciate learning about that and having the chance to educate more folks on what the STEM METS program is all about. Jadesola, thank you so much for your time. >> Thank you, Lisa. For Jadesola Adedeji, I'm Lisa Martin. You're watching theCUBE's coverage of the International Women Showcase 2022. (upbeat instrumental music)
Blake Scholl, Boom Supersonic | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS.
>> Welcome back to theCUBE's live coverage of AWS re:Invent 2020. I'm Lisa Martin. Really exciting topic coming up for you next. Please welcome Blake Scholl, founder and CEO of Boom Supersonic. Blake, it's great to have you on the program. >> Thank you for having me, Lisa. >> Your background leads right into what we're going to talk about in the next few minutes or so. Supersonic flight has existed for quite a long time, 50 or so years. I think those of us in certain generations remember the Concorde, for example, but the technology to make it efficient and mainstream has only recently been accepted by regulators. Tell us a little bit about Boom and your mission to make the world more accessible with supersonic commercial flight.
>> Well, supersonic flight has actually been around since 1947, when Chuck Yeager broke the speed barrier, or sorry, the sound barrier. And as many of you know, he actually passed away yesterday at 97, so it's very, very sad to see one of the supersonic pioneers leave us. But as we say goodbye to Yeager, a new era of supersonic flight is here. If you look at the history of progress in transportation since the dawn of the industrial revolution, we used to make regular progress in speed, as we went from the horse to the iron horse, to the boats, to the early propeller airplanes, to the jet age. And what happened was, every time we made transportation faster, instead of spending less time traveling, we actually spent more time traveling, because there were more places to go and more people to meet. We haven't had a world war since the dawn of the jet age. Places like Hawaii have become major tourist destinations. But today, it's been 60 years since we've had a mainstream step forward in speed. So what we're doing here at Boom is picking up where Concorde left off: building an aircraft that flies faster, by a factor of two, than anything you can get a ticket on today, and yet is 75% more affordable than Concorde was. We want to make Australia as accessible as Hawaii is today. We want to enable you to cross the Atlantic, do business, and be home in time to tuck your kids into bed, or take a three-day business trip to Asia and do it in just 24 hours.
>> I like the sound of all of that, even getting on a plane right now in general. I think we all do. It's so interesting that you want to make this more accessible. And I did see the news about Chuck Yeager last night. You're designing the first supersonic airliner in decades; Overture, it's called. As you said, this dates back 60 years. The goal is to roll it out in 2025 and fly more than 500 transoceanic routes. Talk to me about how you're leveraging technology and AWS to help facilitate that.
>> Right. Well, one of the really fascinating things is that the new generation of airplanes are getting born in the cloud, and then they're going to go fly through actual clouds. There are a bunch of revolutions in technology that have happened since Concorde's time that are enabling what we're doing now. There are breakthroughs in materials: we've gone from aluminum to carbon fiber. There are breakthroughs in engines: we've gone from afterburning turbojets that are loud and inefficient to quiet, clean, efficient turbofans. But one of the most interesting breakthroughs has been the ability to do design and iteration digitally versus physically. When Concorde was designed, as an example, they were only able to do about a dozen wind tunnel tests, because they were so expensive and so time consuming. On our XB-1 aircraft, which is our prototype that rolled out in October, we did hundreds of iterations of the design in virtual wind tunnels, where we could spin up a simulation on an HPC cluster in AWS, often more than 500 cores. Then we'd have our airplanes flying through virtual wind tunnels in thousands of flight scenarios. You can figure out which were the losers and which were the winners, keep iterating on the winners, and you arrive at an aerodynamic design that is more efficient at high speed, where you're going very safely, very quickly in a straight line, but is also very smooth and controllable for safe takeoff and landing. Part of the art of supersonic airplane design is to accomplish both of those things in one airplane. And being able to design in the cloud allows a startup to do what previously only governments and militaries could do. I mentioned we rolled out our XB-1 prototype in October. That's the first time anyone has rolled out a civil supersonic aircraft since the Soviet Union did it in 1968, and we're able to do it as a startup because of computing.
>> That's incredible: born in the cloud to fly in the cloud. So talk to me about the opportunity that technology has really accelerated. We've seen a lot of acceleration this year, in particular in digital transformation; businesses that haven't pivoted are probably in some challenging waters. Talk to us about how you're going all in with AWS to facilitate all these things you just mentioned, which is a dramatic change over the dozen wind tunnel tests for the Concorde. And how many times did it fly?
>> I mean, it flew for 27 years, but not that many flights, and it never changed the way mainstream travelers, you and I, fly. So, how are we going all in? We've been using AWS basically since the founding of the company, but what we're doing now is taking things that we were doing outside of the cloud into the cloud. As an example, we have 525 terabytes of XB-1 design and test data that used to be backed up offsite, and what we're doing is migrating it into the cloud. Once your data is next to your compute, you can start to do these really interesting things. As an example, you can run machine learning models to calibrate your simulations to your wind tunnel results, which accelerates convergence, allows you to run more iterations even faster, and ultimately lets you come up with a more efficient airplane, which means it's going to be more affordable for all of us to go break the sound barrier.
>> And that sounds like one of the biggest differences: as you just said, it wasn't built for the mainstream before, and now it's going to be about accessibility and affordability as well. So how are you going to be leveraging the cloud, you know, in design and manufacturing, but also in other areas like the onboard experience, which I'm already really excited to be participating in in the next few years?
>> Yeah, there are so many examples. We've talked about design a little bit already. It's going to manifest in the manufacturing process, where the supply chain will be totally digital and the factory operations will be run out of the cloud. What that means concretely is, there will be literally a million parts in this airplane, and for any given unit that goes through the production line, you'll instantly know where they all are. You'll know which serial numbers went on which airplanes, and you'll understand, if there was a problem with one of them, how you fixed it. And as you continue to iterate and refine the airplane, this is one of the things that's actually a big deal with digital in the cloud: you know exactly what design iteration went into exactly which airplane, and that allows you to iterate faster. Any given airline with any given airplane will know exactly what airplane they have, but the next one that rolls off the line might be even a little bit better. So it allows you to keep track of all of that, it allows you to iterate faster, and it allows you to spot bottlenecks in your supply chain before they impact production. And then it allows you to do preventive maintenance later. There's going to be digital instrumentation all over the airplane, and it's going to update the cloud on, you know, are the engines running at the expected temperature or is one running a little bit hot, is something vibrating more than it should vibrate. So you catch these things way before there's any kind of real maintenance issue. You flag it in the cloud, and the next time the airplane lands, there's a tech waiting with whatever the part is, able to install it. You don't have any downtime, and you're never anywhere close to a safety issue. You're able to do a lot more preventively versus what you can do today.
>> Wow. To say that you're going to have a hundred percent visibility into manufacturing and design is kind of an understatement. But you launched XB-1, your prototype, in October, so during the pandemic. As I mentioned, we've been talking for months now on the virtual CUBE about the acceleration of digital transformation. Andy Jassy talked about it in his keynote at AWS re:Invent this year, virtual. What were some of the advantages that you got in being able to stay on track? Imagine being on track to launch in October during a time that has been so chaotic everywhere else, including air travel.
>> Well, some of it's very analog and some of it's very digital. To start with the analog: we took COVID really seriously at Boom. When the pandemic first hit, we shut the company down for a couple of weeks, so we could get our feet underneath us. Then we started testing everyone who had to work on the airplane every 14 days, and we were religious about wearing masks. As a result, we haven't had anyone catch COVID within the office, and I'm super proud that we were able to stay productive and stay safe during the pandemic. You do that by taking it seriously and doing common-sense things. And then there's the digital effort. Part of the company runs digitally. When there's a higher alert level, we go a little bit more digital; when there's a lower alert level, we have more people in the office, because we still really do value that in-person collaboration. Which brings it back to a bigger point: it's been predicted for a long time that the advent of digital communication would cause us not to need to travel. What we've seen, since the dawn of the telephone, is that it's actually been the opposite. The more you can know somebody, even a little bit, at a distance, the hungrier you are to go see them in person, whether it's a business contact or someone you're in love with. No matter what it is, there's still that appetite to be there in person. So I think the digitization of communication is ultimately going to be very complementary with supersonic, because you can get to know somebody a little bit over a long distance, you can have some kinds of exchanges, and then the friction of being able to see them in person is going to drop. And that's a wonderful combination.
>> I think everybody on the planet welcomes that for sure, given what we've all experienced in the last year. You can have a lot of conversations by Zoom; obviously this was one of them. But there is, to your point, something about in-person collaboration that really takes things to the next level. I am curious: you launched XB-1 in October, as I mentioned a minute ago, and I think I read in one of your press releases that you're planning to launch Overture in 2025, with over 500 transoceanic routes. What can we expect from Boom in the next year or two? Are you on track for that 2025?
>> Yeah, things are going great. To give a sense of what the next few years hold: we rolled out the assembled XB-1 aircraft this year, and next year it's going to fly. That will be the first civil supersonic aircraft ever built by an independent company. Along the way, we're building the foundation of Overture. That design effort is happening now, as XB-1 is breaking the sound barrier. We'll finalize the Overture design in '22, we'll break ground on the factory in '23, we'll start building the first airplane in '25 and roll it out, and in '26 we'll start flight tests. Then we'll go through flight test methodically, systematically, as carefully as we can, and be ready to carry passengers as soon as we're convinced it's safe, which will most likely be right around the end of the decade.
>> Okay, exciting. And it sounds like, beyond the safety protocols you put in place in the office, which are great to hear about, this time hasn't derailed you, because you have the massive capability to do all of the work that's necessary, way more than was done before with the Concorde. And being able to do that remotely, with the cloud as a big facilitator of that communication.
>> Yeah. The cloud enables a lot of computational efficiencies. I think about it this way: many times, projects are not really measured in how many months or years they take, but in number of iterations. Every time we do an airplane iteration, we look at the aerodynamics at high speed, we look at the low speed, we look at the engine, we look at the weights, we look at stability and control, we look at the pilot's sightlines, et cetera, et cetera. Every time you do an iteration, you're looking around all of those and saying, what can I make better? But each one of those lines up a little bit differently with the rest. For example, the best airplane aerodynamically doesn't have a good view for the pilot. That's why Concorde famously had that droop nose: get the nose out of the way so you can see the runway. We're able to do digital systems for virtual vision to let the pilot kind of look through the nose at the runway. But even then there are trade-offs, like how good of an actual window do you need. Your ability to make progress in all of this is proportional to how quickly you can make it around that iteration loop, that design-cycle loop, and that's part of where the cloud helps us. We've got some stuff we've built in-house that runs on the cloud that lets you basically press a button with a whole set of airplane parameters, and bam, it gives you an instant report: was this a good change or a bad change, based on running some pretty high-fidelity simulations with a very high degree of automation. And you can actually do many of those in parallel. So at this stage of the program, it's about accelerating your design iterations and giving everyone on the team visibility into them. And then you get together in person as it makes sense. We're actually hitting a major design milestone with Overture this week, and we're COVID testing everybody and getting them all in the same room, because sometimes that in-person collaboration is really significant, even though you can still do so much digitally.
>> I totally agree. There are certain things you just can't replicate. Last question: since my brother is a pilot for Southwest and a retired Lieutenant Colonel from the Air Force, any special training that pilots will have to have? Or are there certain pilots that are going to be lower-hanging fruit, if they have military experience versus commercial flight? Just curious.
>> Yeah. Our XB-1 aircraft is being flown by test pilots; there's one ex-Navy and one ex-Air Force pilot on our crew. But Overture will be accessible to any commercial pilot. Think about it as, if you're used to flying Boeing, it'd be like switching to Airbus, or vice versa. Concorde was a complicated aircraft to fly because they didn't have computers, and all the complexity of supersonic flight was right there with the pilots. On Overture, all of that gets abstracted by software. The ways the flight controls change over speed regimes, you don't have to worry about it; the airplane handles beautifully no matter what you're doing. So there are many, many places to innovate, but the pilot experience is actually not one of them, because the more conventional you can make it for people like your brother, the easier it's going to be for them to learn the aircraft, and therefore the safer it's going to be to fly.
>> I'll let him know. Blake, this has been fantastic. It's really exciting to see what Boom Supersonic is doing and the opportunity to make supersonic travel accessible. At a time when everybody wants the world to open up, by 2026 I'm going to be looking for my ticket. >> Awesome. Can't wait to have you on board. >> Likewise. For Blake Scholl, I'm Lisa Martin. You're watching theCUBE's live coverage of AWS re:Invent 2020.
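Scholl's design loop (score thousands of simulated scenarios, keep the winners, perturb them, and repeat) can be sketched in miniature. Everything below is an invented toy: the single sweep parameter, the two penalty terms standing in for high-speed drag and low-speed handling, and the population size are all assumptions for illustration, not Boom's actual tooling.

```python
import random

def evaluate(design):
    """Stand-in for a high-fidelity CFD run. A real evaluation would submit
    the geometry to an HPC cluster and post-process the flow solution; here
    two invented penalty terms represent supersonic drag and low-speed
    handling (lower total is better)."""
    supersonic_penalty = (design["sweep_deg"] - 60.0) ** 2
    landing_penalty = (design["sweep_deg"] - 45.0) ** 2
    return supersonic_penalty + 0.5 * landing_penalty

def iterate_designs(population, generations, rng):
    """Score every candidate, keep the best half, and perturb the winners:
    the 'keep iterating on the winners' loop from the interview."""
    for _ in range(generations):
        population.sort(key=evaluate)
        winners = population[: len(population) // 2]
        children = [{"sweep_deg": d["sweep_deg"] + rng.uniform(-2, 2)}
                    for d in winners]
        population = winners + children
    return min(population, key=evaluate)

rng = random.Random(0)
candidates = [{"sweep_deg": rng.uniform(30, 80)} for _ in range(16)]
best = iterate_designs(candidates, generations=20, rng=rng)
print(round(best["sweep_deg"], 1))  # settles near the compromise between the two penalties
```

The point of the sketch is the loop structure: the evaluation is the expensive step, so the more evaluations you can run in parallel (the 500-plus-core clusters mentioned above), the faster you get around the design cycle.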
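The calibration idea Scholl mentions (use machine learning to map simulated results onto wind-tunnel measurements) can be illustrated with the simplest possible model, a one-variable least-squares fit. The drag-coefficient numbers are synthetic, generated from an assumed linear relationship so the fit is easy to check; a real calibration would use far richer models and data.

```python
def fit_linear_correction(sim, measured):
    """Ordinary least-squares fit of measured = a * sim + b. The idea:
    learn a map from simulation output to wind-tunnel truth, then apply
    it to correct future simulated runs."""
    n = len(sim)
    mean_x = sum(sim) / n
    mean_y = sum(measured) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(sim, measured))
    var = sum((x - mean_x) ** 2 for x in sim)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical drag coefficients: the 'tunnel' values are synthetic,
# generated as 1.04 * simulated + 0.001 so the fit can be verified.
simulated = [0.0210, 0.0245, 0.0280, 0.0330]
tunnel    = [0.02284, 0.02648, 0.03012, 0.03532]
a, b = fit_linear_correction(simulated, tunnel)

def calibrated(cd_sim):
    """Correct a fresh simulated drag coefficient toward tunnel truth."""
    return a * cd_sim + b

print(round(a, 4), round(b, 4), round(calibrated(0.0300), 4))  # → 1.04 0.001 0.0322
```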
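The preventive-maintenance flow described in the interview (instrument the airplane, stream readings to the cloud, flag anything out of range before it becomes a real issue) reduces, at its core, to threshold checks over telemetry. The sensor names and limits below are made up for illustration; a real system would derive limits per engine from fleet data.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str
    value: float

# Hypothetical per-sensor limits, not real engine specifications.
LIMITS = {"turbine_temp_c": 650.0, "vibration_ips": 0.60}

def flag_for_maintenance(readings):
    """Return the sensors whose value exceeds its limit, so a part and a
    technician can be waiting at the next landing. Unknown sensors are
    never flagged (their limit defaults to infinity)."""
    return [r.sensor for r in readings
            if r.value > LIMITS.get(r.sensor, float("inf"))]

latest = [Reading("turbine_temp_c", 612.0),
          Reading("vibration_ips", 0.71),
          Reading("turbine_temp_c", 655.5)]
print(flag_for_maintenance(latest))  # → ['vibration_ips', 'turbine_temp_c']
```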
Keynote Analysis | Cisco Live EU Barcelona 2020
>> Live from Barcelona, Spain. It's theCUBE, covering Cisco Live 2020, brought to you by Cisco and its ecosystem partners. >> Welcome to theCUBE's live coverage here in Barcelona, Spain, for Cisco Live 2020. I'm John Furrier, host of theCUBE, with Dave Vellante and Stu Miniman here all week in Barcelona, kicking off 2020 with the keynote analysis. Cisco just unveiled what looks like their plan for the year and what looks like a future direction of Cisco. Again, we were here the past two years covering Cisco Live. We'll be at the US show this year as well. Dave, Stu, keynote analysis. Let's get into it right away. You start to see the messaging, the positioning, unfolding in front of us; it's clearly not there yet. A lot of people got their check boxes in that rotation. David Goeckeler kicked it off. I mean, when we kicked it off, David Goeckeler, key executive, really leading the charge here. But this is about Cisco setting the table. Let's get into it. What do you guys think? I thought it was a good keynote. I thought it was a little bit lacking in the storytelling; what was the thread? There was no common thread. Heard a lot of little cloud, I heard a lot of Cisco, a lot of speeds and feeds. Everyone kind of has their turn, and all the top people were on there. What's your thoughts? >> Well, "who is Cisco?" was my first thought. If you're a kid coming out of college and you hear that keynote, which I agree was a good keynote, I still wouldn't be sure exactly what Cisco does. And so I think that you're right, that messaging needs to be tightened up. There needs to be a thread. At the same time, we saw some innovation. They sort of doubled down on the December announcements and talked about that. I really liked the collaboration piece; that's been a sleepy market, Zoom changed that and woke everybody up. And so we saw some interesting features there, stuff on AppD. They made a lot of claims, which I don't know if they're true or not.
It seemed like VMware could do some of that stuff, and New Relic, and some of the others like Dynatrace. But Cisco is coming at it from a networking area of strength, and, um, so I guess my bottom line is, I still wanna understand what that thread is. And they talked about multi-cloud. I really do think that Cisco is in the best position to connect those clouds to on-prem and hybrid. They've got the data from the network, and they're in the best position to leverage that for value for their customers. It kind of came through, but I think it's my inference, not their claim. >> I was a little surprised. This is the third year we've done this show, and usually there's, you know, a new tagline, and they were reusing "The Bridge to Possible," and it feels like things are still coming together. For Cisco, as you and John were saying, some of the products are moving together. There was chatter on Twitter saying, oh great, Intersight and AppD are actually going to integrate and work well together, and that integration message is one that Cisco's highlighting. Cisco's always had a really broad ecosystem. They put up the video about, like, you know, if you know the Internet and everything you've done, we've been there, and we're going to drive that for the next generation. In the collaboration space, it's not the same WebEx that you've known forever. Heck, you know, we've got Microsoft with Teams, and WebEx trying to squint through that a little bit and say, okay, well, Cisco's got a bunch of devices. Is that all it is? Is it, you know, just saying, great, I've got Cisco devices and therefore, if I'm, you know, a Teams customer for Microsoft, I can plug into that? It seems like there's a lot of internetworking pieces underneath the covers there, because Microsoft is driving hard in that space, and Zoom, as you said, Dave, with the quick, easy experience, is coming at Cisco. So a lot of things moving in the collaboration space.
But in the hardcore data center space, Workload Optimizer is something that they were focused on. They talked about the new router; Jonathan Davidson, who we'll have on theCUBE tomorrow, is talking about that space. So Cisco's got a very broad portfolio, and John, I think you nailed it. I did not come out of it with a consistent, you know, "who Cisco is" message, or how we're going to partner with them in the future. >> Dave brings up a good point. The college kid looking at this is a good way to kind of zoom out of the technical world. Remember, David Goeckeler is a technical person. He ran engineering. His big marketing word is multi-domain. Come on, multi-domain is not a marketing word. It's just a technical feature. >> But this is a technical show, and a lot of their audience here at the show, we are techies. And so it's clear to me that Cisco's, brick by brick, building the SaaS-ification, the cloud-ification of Cisco, and this is something I think they're not yet ready to pull the switch on, Dave. To use a sailing analogy, as they tack into the marketplace, they've got to do a full turn on the boat. I think this is just the progression. I think it's natural to see Cisco spending millions, if not billions, of dollars, as we heard, cloudifying and creating this subscription business model. The other notable thing is you start to see some telltale signs from the keynote, a few little things I picked up that show they're kind of going in the right direction. Still a lot more work to do, and the story needs to be up-leveled a bit, I totally agree, rather than just speeds and feeds for the classic enterprise. But Wendy hit it clearly: business model is the new killer app, and I think all the things that we've discussed over the past 10 years, the past five in particular with cloud native, is that business outcomes is what the apps are focused on. And so they're headlining the event with AppDynamics, which makes sense.
But it's not clear enough that the business model is the key to everything, and you're gonna connect businesses; that's what Cisco does. I mean, what does Cisco do? They connect businesses. That's been their mission from day one. They got to take that message and bring it up with the applications that are driving business model changes and results. And I think that's the thread they're trying to get through, trying to thread the needle. They're just not ready. >> See, from an umbrella messaging standpoint, I think that would have been a lot more effective. But some of the things that I liked in the keynote: you know, Wendy Mars did talk about the importance of privacy, how Europe is leading in diversity. So that is really important. And they also talked about how last decade was all about enabling apps, and this decade is going to be all about apps and, to your point, about enabling business, John. They talked a lot about bringing IT and OT together. Liz Centoni really made a big point of that. When we walked into the DevNet Zone, there were all these network engineers looking at an IoT presentation, these are IT guys trying to learn about the edge and OT. And so I think that's a really important message. On the collaboration front, you know, some neat, neat features I just wanted to mention. But my understanding is that Microsoft Teams is all about taking the old Skype for Business, which has, like, fallen off a cliff because everybody hates Skype, and migrating it to Teams so they can compete more effectively with WebEx and the rest of them. So again, a lot of different parts of Cisco, but I think there was some definite innovation there. And then, when I talked about their December announcements, the optics, the Silicon One, and the software, bringing that together, you know, that is going to power service providers for the next 5-10 years. >> Well, Stu,
I want to get your thoughts here, because one of the things that we're observing, and they've got a hit with Teams, is that they're kind of groping a little bit in areas. Everyone's gonna get their time on stage, I get that. You know, the comment I made yesterday in our pregame, day-zero analysis was that there needs to be a Tesla of this industry to completely change the game. So I think Cisco, if they take the business, "we're connecting businesses," and look for a business model change, then we're gonna look for the engine of the car, of the application, of the company, and then what it is. So Cisco as a company is the car; the engine is there. The weakness is, if you look at Cisco, all they do is talk about the engine and the features of the pistons and all the technical speeds and feeds. That's great, but at the end of the day it's a new environment on the business front, and I think they got to get that kind of conversion and bring that together. Because, of course, they have to check the boxes: look, we've got a new engine, we've got new cloudification, this is where it's at. But it's the destination that you're driving to, which is business model outcomes. So, you know, under the hood, are they there? It seems to be they're still trying to get the engine fixed, and then they could roll out. >> One of the things, when we always look at all of these keynotes, is are they effectively letting customers tell their story? And does that resonate with what they're talking about? For the piece I saw, I only saw two customers. There was a video with Michael Bay. Great special effects. And actually, you know, I thought it kind of resonated, because it's like, okay, you know, I've got 10 locations shooting around the world and, you know, there's terabits of information. He's like, I don't even know what a terabyte is. It sounds like a dinosaur. And of course, all the networking folks are like, ha ha.
You know, you do cool exploding stuff, but you don't know what a terabyte is. And then they had Airbus. And Dave, you talked about it: Liz Centoni got up on stage and looked at IT and OT; they don't play well together. And we've done research looking at the challenge of really delivering on IoT; it is that schism between IT and OT. And I would have loved to hear a little bit more, because she said, oh well, our tools just enable OT to work on anything. It's not that easy to just, well, throw those two worlds together. >> The key there is security, and we're talking about securing critical infrastructure, and really, that's a whole new opportunity and realm. I mean, it kind of came through, but that's the linchpin: really securing that critical infrastructure, whether it's power plants, roads, all kinds of logistics, and a lot more. >> Dave, I mean, this is the whole point about Cisco's challenges. One, from a story standpoint it's complex; from a technology integration standpoint it's complex, because you've got application awareness, which is going down to the network. And then they showed a lot of that, and I thought that was a key highlight that didn't actually come through, but they did present it. They got the cloudification story, and then they got network automation, all those things, as well as 5G around the corner, Silicon One. A lot coming together. >> Nailed that, I mean, no doubt, a lot coming together. And I think the key is, and Scott Harrell nailed it, I think Goeckeler and the team are right on the money, in terms of the engine being intent-based networking. Multi-domain, to me, means multi-cloud and hybrid. Nail that, and you can get those kinds of innovations. And I think Scott Harrell said it: simplification is key, security, and inclusive of the cloud, that's the phrase he used. We're talking about something that's inclusive of cloud. He really slammed cloud; he said, you know, it's a fancy place, it's Nirvana.
But don't forget the intent of having the on-premise, basically. So I thought that was a nice thread: the three layers of insight, security, business, and IT. But to me it's simple. I think Cisco needs to think differently around how they position themselves, because if they're going to throw WebEx out there and throw out all these analytics and data, they're a data company. They're a data-first company, and they have to be a video-first company with 5G. And they got to be a virtual-first company, because the new future workplace is about having those kinds of workloads running those kinds of app sets, you know, to feed the modern enterprise. And to me, my premise is, if you can automate it, it's not a feature; it's something the modern enterprise has to have. Automation will be critical to everything, and you can't have bloated software running in a virtual-first environment. >> But to your point, Cisco's advantage is that the data is running through the network, so they have visibility on that data. So they are in a very good position to leverage that data for automation and to connect businesses. Networks of data, video, is a killer feature for that. I mean, they really are the only company right now in the business that can do that. >> Yeah, actually, I like the analogy they used: you should think of the network as a sensor. This is what's going to be able to drive your insight and outcomes. It's not just the plumbing anymore, but, you know, that's one of the earliest areas where we drove analytics and data out of everything that's going on, and set them up for that machine learning and AI world that people are driving to extract data. >> And to your point on cloud, I mean, look, they know, and you sort of referenced, that the cloud is slowly eating away at their opportunity, because IT practitioners will tell you, the more we do in the cloud, the less we're gonna have to spend on our own network gear. >> Yeah, but here's the thing that's coming out.
And during the SD-WAN section, I was making some comments on the YouTube channel. SD-WAN is really, to me, a bellwether of how this goes, because latency matters. If you're in the Cisco ecosystem, it's all about the latency. And if the WAN is the new LAN, which is my premise, then the interaction with security between the routes becomes critical, right? So you have to have that kind of insight. So when we look at something like the WebEx experience on the collaboration side: is that product truly designed for that environment? And I think you mentioned Zoom earlier as kind of waking everyone up; they've built a product around latency and around the environment, around the WAN, not the LAN. So WebEx on the desktop is not the state of the art. Unless you got an NVIDIA graphics card designed into it and a gaming rig, it's gotta be mobile. It's gonna be over a WAN link for virtual. And I think if the software is too bloated, it's not gonna work. And I think that's gonna be an area that Cisco is going to look at and say, do these products fit this new use case? >> Okay, so we've got three days of coverage, right? Well, day zero, so actually four days of coverage for us. We got a lot of good guests coming on, a lot of Cisco execs. What >> are you guys looking for this week? We had a lot of guests coming on. Dave, Stu, what are you guys looking for in terms of analysis? What are you looking to tease out of the show? >> Well, like any of these shows, I'm really trying to look at the substance, trying to understand the announcements that they're making, how real they are, and how they map into the customer's view of what it is that they need. I'd say the collaboration thing is interesting to me. I was really concerned about Cisco; I thought they were just sort of sitting on their laurels. I think their WebEx install base is gonna really look hard at these features, if, in fact, they're available.
I want to understand from practitioners, and particularly service providers, you know, what they think of all this new stuff that's coming out, 'cause it's expensive. That's a big, big capex investment for these guys. And I want to understand the core Cisco business: their data center business, their networks, their hyperconverged, where they stand competitively. And the last thing is the partner ecosystem. You know, we've talked about how they have to walk a fine line between, you know, servicing guys like IBM and NetApp, and then also competing with their former great partner in EMC, now Dell EMC, and how they're gonna go forward in the next 10 years. >> Yeah, you touched on the partner ecosystem and service providers. Edge is the next big opportunity for Cisco, and how will they leverage what they're doing to support all of those partners going forward? Big thing I'm looking for this week as well, as you said, Dave: maturation of a lot of the pieces that they've added. Where's the substance behind the announcements that they've made? How much of them are table stakes that we see in some of the other environments? Collaboration space, John, as you said: oh, here's these things on the desktop, I could do all these things on my phone. So trying to understand what is differentiated. >> Awesome. For me, I'm looking for, actually, we're in the DevNet Zone, I'm looking for the developer equation; that came up clear, kind of, last year with Susie Wee. She put out the new world of developers that's going to change the whole Cisco certification area and the ecosystem. And for the developers, it's ACI, IoT, DNA Center, Intersight, and Umbrella. Outside of that, I'm gonna be looking for how Cisco is looking at cloud-ification of networking: network as a service, the way into cloud versus internal SD-WAN, simplification of the edge, security and networking common policy, to name a few. Not to mention WiFi. I mean, WiFi is the preferred connectivity point inside the enterprise.
And how does that relate to the whole edge thing? Application awareness. I'm really jazzed up by AppD, and I think where they're going with that is really gonna be the front end of that network policy, and that application awareness is critical. And finally, network automation, from the CI/CD pipeline into analytics, and how that relates to fixed wireless, the 5G, which is going to be IoT in the subscription-based model. So yeah, to me, that's the big picture. I want to dig into those areas. >> The thing for me, if I may: one is this gestalt of, um, am I gonna buy best of breed, or am I going to buy from, you know, one throat to choke? And I think Cisco is obviously trying to be the latter. And the last thing for me: security, security, security. And how is Cisco going to help practitioners implement the best security possible? >> Yeah. And, John, John mentioned the DevNet Zone. It is that modernization of the workforce. One of the last things in the keynote: they want to accelerate the first 500 certified DevNet engineers out there. So what CCIEs have been doing for many decades, many of them in the future are going to be part of that DevNet, with security being one of the key areas that we focus >> on. And, of course, the top story so far out of the keynote, to me, is that Cisco is not gonna yield to the big cloud guys. They're brick by brick moving the needle on the rebooting of their products to be cloud-enabled for hybrid, and then ultimately multi-cloud. And I still think the big switch is coming. They haven't pulled that lever. They haven't yet made a big move; I think there's a lot more to come. So we're gonna be digging in. Guys, thanks for the analysis. Keynote analysis here, day one of Cisco Live in Barcelona, kicking off and setting the agenda for 2020. It's theCUBE coverage. I'm John Furrier, with Stu Miniman and Dave Vellante. We'll be right back with more live coverage after this short break.
Amit Sinha, Zscaler | CUBEConversations, January 2020
(funk music) >> Hello and welcome to theCUBE studios in Palo Alto, California, for another CUBE Conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Every enterprise is responding to the opportunities of cloud with significant changes in people, process, how they think about technology, and how they're going to align technology overall with their business and with their business strategies. Now those changes are affecting virtually every aspect of business, but especially every aspect of technology. Especially security. So what does it mean to envision a world in which significant new classes of services are being provided through cloud mechanisms and modes, but you retain, and in fact even enhance, the quality of security that your enterprise can utilize? To have that conversation, we're joined today by a great guest: Amit Sinha is president and CTO at Zscaler. Amit, welcome back to theCUBE. >> Thank you Peter, it's a pleasure to be here. >> So before we get into it, what's new at Zscaler? >> Well, at Zscaler our mission is to make the internet and cloud a secure place for businesses, and as I engage with our global 2000 customers and prospects, they are going through some of the digital transformation challenges that you just alluded to. Specifically for security, what is happening is that they had a lot of applications that were sitting in a data center or in their headquarters, and that center of gravity is now moving to the cloud. They've probably adopted Office 365, and Box, and Salesforce, and these applications have moved out. Now in addition, the users are everywhere. They're accessing those services not just from offices but also from their mobile devices and home. So if your users have left the building, and your applications are no longer sitting in your data center, that begs the question: where should the security stack be?
You know, it cannot be your legacy security appliances that sat in your DMZ and your IT closets. So that's the challenge that we see out there, and Zscaler is helping these large global organizations transform their security and network for a more mobile and a cloud-first world. >> Distributed world? So let me make sure I got this right. So basically, cause I think I totally agree with you >> Right. >> Just to test it, that many regarded the cloud as a centralization strategy. >> Correct. >> What we really see happening, is we're seeing enterprises more distribute their data, more distribute their processing, but they have not updated how they think about security so the presumption is, "yeah we're going to put more processing data out closer to the action but we're going to backhaul a whole bunch back to our security model," and what I hear you saying is no, you need to push those security services out to where the data is, out to where the process, out to where the user is. Have I got that right? >> You have nailed it, right. Think of it this way, if I'm a large global 2000 organization, I might have thousands of branches. All of those branches, traditionally, have used a hub-and-spoke network model. I might have a branch here in Palo Alto but my headquarters is in New York. So now I have an MPLS circuit connecting this branch to New York. If my Exchange server and applications and SAP systems are all there, then that hub-and-spoke model made sense. I am in this office >> Right. >> I connect to those applications and all my security stack is also there. But fast forward to today, all of those applications are moving and they're not just in one cloud. You know, you might have adopted Salesforce.com for CRM, you might have adopted Workday, you might have adopted Office 365. So these are SaaS services. 
Now if I'm sitting here in Palo Alto, and if I have to access my email, it makes absolutely no sense for me to VPN back to New York only to exit to the internet right there. What users want is a fast, nimble user experience without security coming in the way. What organizations want is no compromise in their security stack. So what you really need is a security stack that follows the user wherever they are. >> And the data. >> And the data, so my data...you know Microsoft has a front-door service here in Redwood City and if if you are a user here and trying to access that, I should be able to go straight with my entire security stack right next to it. That's what Gartner is calling SASE these days. >> Well, let's get into that in a second. It almost sounds as though what you're suggesting is that the enterprise needs to look at security as a SaaS service itself. >> 100 percent. If your users are everywhere and if your applications are in the cloud, your security better be delivered as a consistent "as-a-service," right next to where the users are and hopefully co-located in the same data center as where the applications are present so the only way to have a pervasive security model is to have it delivered in the cloud, which is what Zscaler has been doing from day one. >> Now, a little spoiler alert for everybody, Zscaler's been talking about this for 10-plus years. >> Right. >> So where are we today in the market place starting to recognize and acknowledge this transformation in the basic security architecture and platform that we're going through? >> I'm very excited to see that the market is really adopting what Zscaler has been talking about for over a decade. In fact, recently, Gartner released a paper titled "SASE," it stands for Secure Access Service Edge and there are, I believe, four principal tenets of SASE. The first one, of course, is that compute and security services have to be right at the edge. And we talked about that. It makes sense. 
>> For where the service is being delivered. >> You can't backhaul traffic to your data center or you can't backhaul traffic to Google's central data center somewhere. You need to have compute capabilities with things like SSL Interception and all the security services running right at the edge, connecting users to applications in the shortest path, right? So that's sort of principle number one of SASE. The second principle that Gartner talks about, which again you know, has been fundamental to Zscaler's DNA, is to keep your devices and your branch offices light. Don't shove too much complexity from a security perspective on the user devices and your branches. Keep it simple. >> Or the people running those user devices >> Absolutely >> in the branches >> Yeah, so you know, keep your branch offices like a light router, that forwards traffic to the cloud, where the heavy-lifting is done. >> Right. >> The third principle they talk about is to deliver modern security, you need to have a proxy-based architecture and essentially what a proxy architecture allows you to do is to look at content, right? Gone are the days where you could just say, stop a website called "evil.com" and allow a website "good.com," right? It's not like that anymore. You have to look at content, you know. You might get malware from a Google Drive link. You can't block Google now, right? So looking at SSL-encrypted content is needed and firewalls just can't do it. You have to have a proxy architecture that can decrypt SSL connections, look at content, provide malware services, provide policy-based access control services, et cetera and that's kind of the third principle. And finally what Gartner talks about is SASE has to be cloud-native, it has to be, sort of, born and bred in the cloud, a true multitenant, cloud-first architecture. 
You can't take, sort of, legacy security appliances and shove them into third-party infrastructure like AWS and GCP and deliver a cloud service, and the example I use often is: just because you had a great Blu-ray player or a DVD player in your home theater, you can't take 100,000 of these, shove them into AWS, and become a Netflix. You really need to build that service from the ground up, you know, in a multitenant fashion, and that's what we have done for security as a service through the cloud. >> So the market seems to be kind of converging on some of the principles that Zscaler's been talking about for quite some time. >> Right. >> When we think about 2020, how do you anticipate enterprises are going to respond as a consequence of this convergence, in acknowledging that the value proposition and the need are starting to come together? >> Absolutely. I think we see the momentum picking up in the market; we have lots of conversations with CIOs who are going through this digital transformation journey. You know, transformation is hard. There's an immune response in big organizations >> Sure. >> to change. Not much has changed from a security and network architecture perspective in the last two decades. But we're seeing more and more of that. In fact, over 400 of the Global 2000 organizations are 100 percent deployed on Zscaler. And so that momentum is picking up, and we see a lot of traction with other prospects who are beginning to see the light, as we say it. >> Well, as you start to imagine the relationship between security and data, one of the things that I find interesting in many respects is that cloud, especially as it becomes more distributed, is becoming better acknowledged almost as a network of services. >> Right. >> As opposed to AWS having a data center here, and that making it a cloud data center. >> Right.
>> It really is this network of services, which can happen from a lot of different places: big cloud service providers, your own enterprise, partners providing services to you. How is the relationship between Zscaler and kind of an openness going to come together >> Hm-mm. >> so that you can provide services for an enterprise to the enterprise's partners, customers, and others that the enterprise needs to work with? >> That's a great question, Peter, and I think one of the most important things I tell our customers and prospects is that if you look at a cloud-delivered security architecture, it had better embrace some of the SASE principles. One of the first things we did when we built the Zscaler platform was to distribute it across 150 data centers. And why did we do that? We did that because when a user is going to destinations, they need to be able to access any destination. The destination could be on Azure, could be on AWS, could be Salesforce, so by definition, it has to be carrier-neutral, it has to be cloud-neutral. I can't build a service that is designed for all internet traffic in a GCP or AWS, right? So how did we do that? We went and looked at the world's best co-location facilities that provide maximum connectivity options in any given region. So in North America, we might be in an Equinix facility, and we might use tier-one ISPs like GTT and Zayo that provide excellent connectivity to our customers and the destinations they want to visit. When you go to China, there's no GCP there, right? So we work with China Unicom and China Telecom. When we are in India, we might work with an Airtel or a Sify; when we are in Australia, we might be working with Telstra. So we work with, you know, world-class tier-one ISPs in the best data centers that provide maximum connectivity options. We invested heavily in internet exchange connectivity. Why?
Because once you come to Zscaler, you've solved the physics problem by building the data center close to you; the next thing is, you want to quickly go to your application. You don't want security to be in the way >> Right. >> of application access. So with internet exchange connectivity, we are peered in a settlement-free way over BGP with Microsoft, with Akamai, with Apple, with Yahoo, right? So we can quickly get you to the content while delivering the full security stack, right? So we had to really take no shortcuts; back to your point, the world is very diverse and you cannot operate in a walled garden of one provider anymore, and if you really build a cloud platform that is embracing some of the SASE principles we talked about, you have to do it the hard way: by building it out one data center at a time. >> Well, you don't want your services to fall down because you didn't put the partnerships in place >> and hardened them >> Correct. >> as much as you've hardened some of the other traffic. So as we think about kind of where this goes, what do you envision Zscaler's kind of big customer story is going to be in 2020 and beyond? Obviously, the service is going to be everywhere and change the way you think about security, but how, for example, is the relationship between the definition of the edge and the definition of the secure service going to co-evolve? Are people going to think about the edge differently as they start to think more in terms of a secure edge, or where the data resides and the secure data? What do you think? >> Let's start off with five years and go back, right? >> We're going forward. >> Work our way back. Well, five years from now, hopefully everyone is on a 5G phone, you know, with blazing-fast internet connections, on devices that you love; your applications are everywhere. So now think of it from an IT perspective. You know, my span of control is becoming thinner and thinner, right? My users are on devices that I barely control.
My network is the internet, which I really don't control. My applications have moved to the cloud, either hosted in third-party infrastructure or run as SaaS applications, which I really don't control. Now, in this world, how do I provide security? How do I provide user experience? Imagine if you are the CIO and your job is to make all of this work; where will you start, right? So those are some of the big problems that we are helping our customers with. So this-- >> Let me ask you a question, 'cause here's where I was going with the question. I would start with: if I can't control all these things, I'm going to apply my notion of security >> Hm-mm. >> and say I am going to control that which is within >> Right. >> my security boundaries, not at a perimeter level, not at a device level, but at a service level. >> Absolutely, and that's really the crux of the Zscaler platform service. We built this Zero Trust architecture. Our goal is to allow users to quickly come to Zscaler, and Zscaler becomes the policy engine that is securely connecting them to all the cloud services that they want to go to. Now in addition, we also allow the same users to connect to internal applications that might have required a traditional VPN. Now think of it this way, Peter. When you connect to Google today, do you VPN to Google's network to access Gmail? No. Why should you have to VPN to access an internal application? I mean, you get a link on your mobile phone, you click on it, and it doesn't work because it requires a separate form of network access. So with Zscaler Internet Access and Zscaler Private Access, we are delivering a beautiful service that works across 150 data centers. Users connect to the service, and the service becomes a policy engine that is securely connecting you to the destinations that you want. Now, in addition, you asked about what's going to happen in a couple of years. The same service can be extended for partners.
I'm a business; I have hundreds of partners who want to connect to me. Why should I allow legacy VPN access or private circuits that expose me? I don't even know who's on the other end of the line, right? They come onto my network, and you hear about the Target breach because some HVAC contractor had unrestricted access; you hear about the Airbus breach because another contractor had access. So how do we build a true Zero Trust cloud platform that is securely allowing users, whether it's your employees connecting to the named applications that they should, or your partners that need access to certain applications, without putting them on the network? We're decoupling application access from network access. And there's one final important linchpin in this whole thing. Remember we talked about how powerless organizations >> Right. >> feel in this distributed model? Now imagine your job is to also ensure that people are having a good user experience. How will you do that, right? What Zscaler is trying to do now is, we've been very successful in providing the secure and policy-based connectivity, and our customers are asking us: hey, you're sitting in between all of this, you have visibility into what's happening on the user's device. Clearly you're sitting in the middle, in the cloud, and you see what's happening on the left-hand side and what's happening on the right-hand side. You know, you have the cloud effect; you can see there's a problem going on with Microsoft's network in the China region, right? Correlate all of that information and give me proactive intelligence around user experience, and that's what we launched recently at Zenith Live. We call it Zscaler Digital Experience. >> Hmm. >> So overall, the goal of the platform is to securely connect users and entities to named applications with Zero Trust principles. We never want security and user experience to be orthogonal requirements, as has traditionally been the case.
And we want to provide great user experience and visibility to our customers who've started adopting this platform. >> That's a great story. It's a great story. So, once again, I want to thank you very much for coming in and that's Amit Sinha, who is the president and CTO at Zscaler, focusing a lot on the R&D types of things that Zscaler's doing. Thanks again for being on theCUBE. >> It's my pleasure, Peter. Always enjoy talking to you. >> And thanks for joining us for another CUBE conversation. I'm Peter Burris, see you next time. (funk music) (funk music)
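The Zero Trust model Amit Sinha describes above, a policy engine that connects named users to named applications without ever placing them on the corporate network, can be sketched roughly as follows. The policy table, user names, and application names are all hypothetical; this is an illustration of the decoupling idea, not the Zscaler Private Access API.

```python
# Minimal sketch of per-application Zero Trust access (all names invented).
# Access is granted per (user, application) pair, never to a network segment.
POLICY = {
    ("alice@acme.com", "payroll-app"): True,
    ("hvac-contractor@vendor.com", "building-controls"): True,
    # No entry means no access: the app is simply invisible to that user.
}

def broker_connection(user, app):
    """Policy engine sitting between user and app: connect only named pairs."""
    if POLICY.get((user, app), False):
        return f"tunnel established: {user} -> {app}"
    return "denied: application not visible to this user"

# An employee reaches her named application...
print(broker_connection("alice@acme.com", "payroll-app"))
# ...while the HVAC contractor cannot even see payroll, so there is no
# network-level foothold to move laterally from.
print(broker_connection("hvac-contractor@vendor.com", "payroll-app"))
```

Because an unlisted (user, application) pair simply yields no route, a partner such as the HVAC contractor in the Target example never gains the broad network reach that made that breach possible; application access and network access stay decoupled.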
René Dankwerth, RECARO Aircraft Seating Americas, LLC | Alaska Airlines Elevated Experience 2019
(upbeat music) >> Hey welcome back, Jeff Rick here with theCUBE. We're in San Francisco International, actually at gate 54B if you're trying to track us down. It's the Alaska Airlines improved flight experience launch event. A lot of vendors are here; they're rebranding their planes, they've rebranded all the Virgin Airbus planes, and they've taken that opportunity to add a lot of new innovations. So we're excited to be here, to talk to some of the people participating, and our first guest. It's René Dankwerth, he is the general manager of Aircraft Seating Americas for Recaro. René, great to see you. >> Thank you, great to be here. >> So I've seen a lot of people are familiar with the Recaro seats; we think of them as racing seats or, you know, upgrading our cars when we were kids, everybody wanted a Recaro seat. I had no idea you guys played such a major role in aviation. >> Absolutely. And we have been in the aircraft seating business since the early 70s already, and we are really a major player, a global player, in this business, and you know, it's a very long-term experience. People are often flying, and they're sitting on an aircraft, and being comfortable in traveling is very important; it's our mission. >> Right, it's funny because people probably usually don't think of the seat specifically until they're uncomfortable or, you know, they're in it. But you've got a lot of technology and a lot of innovation in the past, but also some of these new seats that you're showing here today. >> Right. So we are showing the seat for first class here that we have displayed for Alaska Airlines, and we developed, together in a very intensive process, a lot of things on the seat here.
We have a memory foam cushion with netting and a six-way headrest, which overall makes for a very comfortable seating experience for the passenger, and that's really one step ahead of other products. We went through a very intensive process with Alaska, and we are proud to present it and to see the roll-out now, because it's exciting, if you've worked all the time on such a project, to see it flying now. >> So there are a couple of components to this seat, right? There's obviously the safety; it's got to stay bolted on. But you've got kind of this limited ergonomic space in terms of what the pitch is from one seat to the other. What are some of the unique challenges there, and what are some of the things you guys have done to operate, you know, in kind of a restrained space? >> Of course, it's always about optimizing everything within the given conditions that you have, but really looking into the small details. Reduced pressure points on the body: we are using pressure-mapping methods to develop that together with the customer, looking into a cushy experience for the passenger, optimizing it so that you really have a kind of luxury feeling on the seat. But in addition, it's also important to look into solutions like content. How is content provided, and what kind of tablet integration is there? So we have very smart solutions there that we are showing today, with the right viewing angles, the right power, the high-power USB which supplies the power, so the overall package needs to be optimized, and that's what we are working on. >> And that's where I was going to go next. When you're sitting there for 2 hours, 5 hours, 10 hours, and now we're talking about 20-hour flights, right, some of these crazy ones, people are doing things in their seat. They're not just sitting, as you said. They want power, they want connectivity, they want to watch their movie on their laptop or their tablet or their phone.
So you guys have really incorporated kind of that next-gen entertainment experience into this new seat. >> Right. So as I explained, there is a lot about tablet integration, not only for the first class but also for the economy class, which you can see and experience today. But there's also a lot about stowage in total. You know, stowage is always a big topic. Where do you stow your belongings? And there you will also see smart solutions, lots of stowage options. For example, also on the coach class seat you can use the tablet, and you have the right viewing angle. In addition, you can fold or unfold the table and use the stowages, so everything is really optimized in the details. >> And this is a huge kind of change in thought process when you think of the entertainment world, right, where it used to be you had a projector TV, and then they put in individual seat screens, but the airlines woke up and figured out everyone's already packing their screen of choice, so how do we support that experience versus putting our own screen on that seat? >> Yeah, that's where we are going, and if you look into today's passengers, almost everybody has his own tablet or iPhone or whatever with him, so it's important to be able to stow everything, to connect every kind of device, and to have the power. But I think then the content is really important to be provided. The integrated solutions are not so important anymore. >> Right. Well, René, congratulations, and enjoy the flight and seeing all your hard work up in the air. >> Thank you very much. >> Alright, he's René, I'm Jeff, we're at San Francisco International Gate 54B at the Alaska Airlines Elevated Flight Experience. Thanks for watching.
Ben Minicucci, Alaska Airlines | Alaska Airlines Elevated Experience 2019
(energizing music) >> Hey welcome back, everybody! Jeff Frick here with theCUBE. We are at San Francisco International, Gate 54B if you want to stop by. We're here for the big Alaska Elevated Flying Experience event. They basically took advantage of this opportunity with the Virgin merger to kind of rebrand, rethink, and re-execute the travel experience. We're excited to have with us Ben Minicucci, the President and CEO of Alaska. First off, congratulations on a big event. >> Thank you, thank you so much Jeff. >> And I think you said you're two years into this merger. >> Two-- >> You're getting through it? >> We are. Two years into it, it's been a great experience bringing two great brands together and really, ya know, amplifying the flying experience. Getting a great product. We're unveiling our Airbus with new seating, new first class, premium class main cabin and we're so excited. And more than that, it's just bringing our people together and just enhancing our culture. >> Right, so you talked a lot about people and culture in your opening remarks. >> Right. >> What is it about the culture of Alaska, 85 or 86 year old airline, that makes it special? >> Yeah, you know it's a wonderful culture built on strong values. And what I'll say is, for people who know Alaska, the culture is built on kindness. People who fly us will say, "Your people are kind." And they're empowered to do the right thing. It's two of our biggest values and that's what I love about our people. And when you combine that with a great product on board. 'Cause people really feel great, they're comfortable where they're sitting and, but if our people connect with them, make them feel welcome, and they show kindness then the brand just comes to life. >> That's really interesting 'cause kindness is not something that you necessarily think about. >> With airlines. >> When you're rushing through airports. >> No. >> And you know, grinding on corporate travel, right. >> Right. >> It's tough. 
So that's pretty interesting. The other thing I thought was interesting in your remarks is really your focus on your partner brands. >> Right. >> Both in the community as well as with Recaro, the seat manufacturer who's doing your seats. And even down to the wine and Salt and Straw, we were jokin'. >> Right. >> So, you guys are really paying attention to these little details that maybe people don't notice individually but in aggregate really make for a different experience. >> No, and I think what we want to do is partner with brands that share our same values for, you know, producing a great product. And their employees love working for them. And they just love the spirit of partnership and doing something good for the community. So we always look for companies and brands that share our own values as well. >> Right, which is interesting 'cause it's such a hyper-competitive space. The airline industry's a tough space. >> Right. >> Tough margins, you've got fuel volatility, but a lot more people are flying all the time. >> Right. >> So it's a growing business. So, you know, how do you kind of keep it balan-- >> That's a great question though. >> and compete when a seat mile is a seat mile, right, at the end of the day. >> Yeah, no, it is. >> That's what the wall street guys would tell you. >> You know, the one thing about Alaska, we've been in business for almost 87 years and ya know we're in it for the long haul. So we make decisions based on long-term returns and we do have, we know that price is important. So we do work hard keeping our cost structure low so we can offer low fares but also a product where if people want to pay a little more, they can get into premium class or first class. But we're really an airline that wants to make sure that we appeal to all sorts of travelers.
From people who are just starting out, traveling in their teens or twenties, to people who are retired, we want to appeal to a wide range of demographics. >> Alright, well Ben, it looks like we're boarding the-- >> Okay good, yes enjoy! >> We're boarding the new Airbus so I will let you go. >> Well thank you Jeff, it was a pleasure. >> Thanks for inviting us. >> Okay, thank you so much. >> Alright he's Ben, I'm Jeff. You're watching theCUBE! We're at San Francisco International, at the Alaska Airlines Elevated Flight Experience. Thanks for watching, we'll see ya next time. (mellow music)
Day One Afternoon Keynote | Red Hat Summit 2018
[Music] Ladies and gentlemen, please welcome Red Hat senior vice president of engineering, Matt Hicks. [Music] Welcome back. I hope you're enjoying your first day of Summit. You know, for us it is a lot of work throughout the year to get ready to get here, but I love the energy walking into Summit on that first opening day. Now, this morning we kicked off with Paul's keynote, and you saw this morning just how evolved every aspect of open hybrid cloud has become, based on an open source innovation model. That power and potential of open source is really what brought me to Red Hat. But at the end of the day, the real value comes when we're able to make customers like yourself successful with open source, and as much passion and pride as we put into the open source community, that requires more than just Red Hat. Given the complexity of your various businesses, the solution set you're building requires an entire technology ecosystem: from system integrators that can provide the skills and domain expertise, to software vendors that are going to provide the capabilities for your solutions, even to the public cloud providers, whether it's on the hosting side or consuming their services. You need an entire technology ecosystem to be able to support you and your goals, and that is exactly what we are going to talk about this afternoon: the technology ecosystem we work with that's ready to help you on your journey. Now, you know, this year's Summit, as we talked about earlier, is about ideas worth exploring, and we want to make sure you have all of the expertise you need to make those ideas a reality. So with that, let's talk about the first partner we have here today, and that first partner is IBM. When I talk about IBM I have a little bit of nostalgia, and that's because 16 years ago I was at IBM. It was during my tenure at IBM where I deployed my first copy of Red Hat Enterprise Linux for a customer; it's actually where I did my first professional Linux development
as well. And that work on Linux really was the spark that showed me the potential that open source could have for enterprise customers. Now, IBM has always been a steadfast supporter of Linux and a great Red Hat partner. In fact, this year we are celebrating 20 years of partnership with IBM, but even after 20 years, two decades, I think we're working on some of the most innovative work that we ever have before. So please give a warm welcome to Arvind Krishna from IBM to talk with us about what we are working on. Arvind! [Applause] >> Hey, my pleasure to be here. Thank you. >> So, two decades, huh? You know, I think anything in this industry going for two decades is special. What would you say that link is that's made Red Hat and IBM so successful? >> Look, I've got to begin by first saying something that I've been waiting to say for years: it's a long strange trip it's been. And for the San Francisco folks, they'll get the connection. You know, I was just thinking, you said 16; it is strange, because I probably met Red Hat 20 years ago, and so that's a little bit longer than you, but that was out in Raleigh; it was a much smaller company. And when I think about the connection, I think, look, IBM's had a long investment in, and has long been a fan of, open source. And when I think of Linux, Linux really lights up our hardware, and I think of the Power box that you were showing this morning, as well as the mainframe, as well as all our other hardware; Linux really brings that to life, and I think that's been at the root of our relationship. >> Yeah, absolutely. Now, as I alluded to a little bit earlier, we're working on some new stuff, and this time it's a little bit higher in the software stack than we have before. So what would you say spearheaded that? >> So, we think of software; many people know about it, some people don't realize it, but a lot of the world's critical systems, you know, like reservation systems, ATM systems, retail banking, a lot of those systems run on IBM software
and when I say IBM software, names such as WebSphere and MQ and Db2 all sort of come to mind as being some of that software stack. And really, when I combine that with some of what you were talking about this morning around hybrid, and I think this thing called containers, you guys know a little about it, combining the two we think is going to make magic. >> Yeah, and I certainly know containers, and I think, for myself, seeing the rise of containers from just the introduction of the technology to customers consuming it at mission-critical capacities has been probably one of the fastest technology cycles I've ever seen. >> Look, we completely agree with that. When you think back to what Paul talked about this morning on hybrid, and we think about it, we have made a firm commitment to containers. All of our software will run on containers, and all of our software runs on RHEL. You put those two together, and this belief in hybrid, and containers giving you that hybrid motion so that you can pick where you want to run all the software, is really, I think, what has brought us together now even more than before. >> Yeah, and the best part, I think, is that we haven't just done the product and downstream alignment; we've been so tied in our technology approach that we've been aligned all the way to the upstream communities. >> Absolutely. Look, participating upstream, participating in these projects, really bringing all the innovation to bear; you know, when I hear all of you talk about it, you can't just be in a single company, you've got to tap into the world of innovation, and everybody should contribute. We firmly believe that, and helping to do that is kind of why we're here. >> Yeah, absolutely. Now, the best part: we're not just going to tell you about what we're doing together, we're actually going to show you. So how about, while you tell the audience a little bit more about what we're doing, I will go get the demo team ready in the back. Sound good? >> Okay. So look, we're doing a lot here together. We're taking our software and we
are beginning to put it on top of Red Hat and OpenShift, and that's really what I'm here to talk about for a few minutes before we show it to you live; the demo gods should be with us, so hopefully it'll go well. When we look at extending our partnership, it's based on three fundamental principles. One: it's a hybrid world. Every enterprise wants the ability to span public cloud, private cloud, and their own on-premise world, and we've got to go there. Two: containers are strategic to both of us. Enterprises need agility, a way to easily port things from place to place, and containers are more than just wrapping something up; containers give you all of the security, the automation, the deployability, and we firmly believe that. Three: innovation is the path forward. You've got to bring all the innovation to bear, whether it's around security or around going across multiple infrastructures, public or private, as we heard this morning. Those are three firm beliefs that both of us hold. So then, explicitly, what we'll be doing: number one, all the IBM middleware is going to be certified on top of OpenShift and RHEL, and delivered through Cloud Private from IBM. All the middleware will run in RHEL containers on OpenShift, with all the Cloud Private automation and deployability built in. Number two, we're going to make this a complete certified stack: from hardware to hypervisor to OS to the container platform to all of the middleware, certified up and down, so you can be comfortable it stands up against the cybersecurity attacks that come your way. Three, because we do that certification, the complete stack can be deployed wherever OpenShift runs, giving you complete flexibility, so you no longer have to worry about that. The development lifecycle
is extended all the way from inception to production, and the management plane then gives you all of the delivery and operations support needed to lower that cost. And lastly, professional services, through the IBM Garages as well as the Red Hat Innovation Labs. I think this combination really speaks to the power of both companies coming together, both of us working to give all of you flexibility and deployment capabilities. Now, I can't help it: one architecture chart, and that's the only architecture chart, I promise. If you look at it from the bottom, it speaks to what I'm talking about. You begin at the bottom with a choice of infrastructure: the IBM Cloud as well as other infrastructure-as-a-service providers, virtual machines, IBM Power, and IBM mainframe as the infrastructure choices underneath. You choose what's best suited for the workload. Above that, the container service, with the OpenShift platform, manages all of that environment and provides the orchestration that Kubernetes gives you. On top of that sit the platform services from IBM Cloud Private: it contains the catalog of all middleware, both IBM's and open source; it contains all the deployment capability to go deploy it; and it contains all the operational management, so things come back up if they go down, auto-scaling is handled, and all those features you want come to you from there. That is why the combination is so powerful. But rather than just hear me talk about it, I'm going to bring up a couple of people to show you. And what are they going to show you? They're going to show you how to deploy an application on this environment. You can think of that as a cloud-native application, but you can also think about how you modernize an application using microservices. And you don't want to just keep your application within its own walls; many times you also want to access different cloud
services from it, and how do you do that? I'm not going to tell you which ones; they're going to come and show you. And how do you tackle the complexity of hybrid data, data that crosses from the private world to the public world, as well as targeting the extra workloads that you want? That's the sense of what you're going to see through the demonstrations. With that, I'm going to invite Chris and Michael up. I'm not going to tell you which one's from IBM and which one's from Red Hat; hopefully you'll be able to make the right guess. So with that: Chris and Michael. [Music] >> Thank you, Arvind. Hopefully people can guess which one's from Red Hat based on the shoes. It's some really exciting stuff that we just heard there. What I'm most excited about, when I look out at the audience and the opportunity for customers, is that with this announcement there are quite literally millions of applications that can now be modernized and made available on any cloud, anywhere, with the combination of IBM Cloud Private and OpenShift. And I'm most thrilled to have Mr.
Michael Elder, a Distinguished Engineer from IBM, here with us today. Michael, would you describe for the folks what we're actually going to go over today? >> Absolutely. When you think about how you carry forward existing applications, and how you build new applications as well, you're creating microservices that always need a mixture of data, messaging, and caching. This example application shows Java-based microservices running on WebSphere Liberty, each of which leverages things like IBM MQ for messaging, IBM Db2 for data, and Operational Decision Manager, all of it fully containerized and running on top of the Red Hat OpenShift Container Platform. In fact, we're even going to enhance Stock Trader to help it understand how you feel. >> Okay, hang on, I'm a little slow to the draw sometimes: you said we're going to have an application tell me how I feel? >> Exactly. Think about your enterprise apps: you want to improve customer service, and understanding how your clients feel can help you do that. >> Okay, well, I'd like to see that in action. >> All right, let's do it. The first thing we'll do is take a look at the catalog. Here in the IBM Cloud Private catalog is all of the content that's available to deploy into this hybrid solution. We see workloads for IBM, workloads for other open-source packages, and so on. Each of these is packaged up as a Helm chart deploying a set of images that will be certified for Red Hat Enterprise Linux. In this case we'll start with a simple example with Node.js: we'll click a few actions here and give it a name. Now, do you have your console up over there? >> I certainly do. >> Perfect. We'll deploy this into the namespace and deploy Node.js. >> Okay. Anything happening? >> Of course, it's come right up. And what I really like about this is that whether I'm used to IBM Cloud Private or to OpenShift, the experience matches whatever tool I'm used to dealing with on a daily basis.
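Catalog entries like the one just described are packaged as Helm charts. As a rough, hypothetical sketch of what the values for such a chart might look like (the image path, keys, and numbers here are illustrative assumptions, not the actual IBM Cloud Private chart):

```yaml
# Hypothetical values.yaml for a catalog Helm chart (illustrative only;
# real catalog charts define their own keys and certified image paths).
replicaCount: 1
image:
  repository: registry.example.com/certified/nodejs-sample  # assumed path
  tag: "1.0"
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 3000            # typical Node.js listen port
resources:
  limits:
    cpu: 500m
    memory: 256Mi
```

The point of the chart packaging is that the same deployable unit works from either console, which is what the demo goes on to show.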
>> I've got to tell you, we deploy Node.js ourselves all the time. But when was the last time you deployed MQ on OpenShift? >> Maybe never. >> All right, let's fix that. MQ, obviously, is a critical component for messaging in lots of highly transactional systems. Here we'll deploy it as a container on the platform. I'm going to deploy this one into the same namespace, disable persistence, and, since my application will need a queue manager, have it automatically set up my queue manager as well. Now, this will deploy a couple of things. What do you see? >> I see IBM MQ. >> All right, so there's your StatefulSet running MQ, and of course a couple of other components get stood up as needed, including things like credentials, secrets, and the service, all of it out of the box. >> Okay, so that's impressive. But what I'm really looking at is how well this is running. What else does this partnership bring when I look at IBM Cloud Private? What does it bring? >> That's a key reason why it's not just about IBM middleware running on OpenShift, but also about IBM Cloud Private: ultimately you need that common management plane. When you deploy a container, the next thing you have to worry about is how you get its logs, how you manage its health, how you manage license consumption, how you have a common security plane. Cloud Private is that enveloping wrapper around IBM middleware that provides those capabilities in a common way. So here we'll switch over to our dashboard. This is our Grafana and Prometheus stack, also deployed on Cloud Private running on OpenShift. We're looking at a different namespace now, the stock-trader namespace; we'll come back to this app momentarily, and from here we can see all the different pieces.
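An MQ deployment like the one above can be sketched as a StatefulSet. The image name and the `LICENSE`/`MQ_QMGR_NAME` environment variables follow the public IBM MQ container conventions, but treat the details as assumptions to verify against the chart you actually deploy:

```yaml
# Illustrative sketch of IBM MQ as a Kubernetes StatefulSet with
# persistence disabled (no volumeClaimTemplates), as in the demo.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mq-demo
spec:
  serviceName: mq-demo
  replicas: 1
  selector:
    matchLabels:
      app: mq-demo
  template:
    metadata:
      labels:
        app: mq-demo
    spec:
      containers:
      - name: qmgr
        image: ibmcom/mq:latest        # assumed public image name
        env:
        - name: LICENSE
          value: accept                 # accept the developer license
        - name: MQ_QMGR_NAME
          value: QM1                    # queue manager created on startup
        ports:
        - containerPort: 1414           # MQ listener port
```

In the real chart, the credentials secret and service the speakers mention are created alongside this workload; they are omitted here for brevity.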
>> What if you switch over to the stock-trader project on OpenShift? >> I think we might be able to do that here... there it is. And what you're going to see here are all the different pieces of this app: there's Db2 over here, I see the portfolio Java microservice running on WebSphere Liberty, I see my Redis cache, I see MQ. All of these are the components we saw in the architecture picture a minute ago. >> Yeah, this is really great. So maybe let's take a look at the actual application. I see we have a fine Stock Trader app here. Now, we mentioned understanding how I feel. >> Exactly. >> Well, I feel good that this is a brand-new Stock Trader app, versus the one from ten years ago that felt like we used forever. >> So the key thing is that this app is all of those microservices, plus things like business rules to drive the loyalty program. One of the things we can do here is enhance it with an AI service from Watson: Tone Analyzer. It helps me understand how the user actually feels, and we'll be able to go through and submit some feedback to understand that user. >> Okay, well, let's see if we can take a look at that. So I tried to click on it... >> Clearly you're not very happy right now. Here, I'll do one quick thing over here. >> Go for it. >> We'll clear a cache for our sample app. So look, you guys don't actually know this: Michael and I just wrote this Node.js front end backstage while Arvind was talking with Matt, and we deployed it in real time using the continuous integration and continuous delivery we have available with OpenShift. >> Well, the great thing is it's a live demo, right? So we're going to do it all live, all the time. All right, so you mentioned it'll tell me how I'm feeling. So if we look... right there. It looks like they're pretty angry, probably because our cache hadn't been cleared before we started the demo. >> Well, that would make me angry, but I should be happy
because I have a lot of money. >> Well, it's more than I get today, for sure. >> But again, I don't want to remain angry. So does Watson actually understand Southern? I know it speaks like eighty different languages. >> Well, I'm from South Carolina, so it'll understand South Carolina Southern; I don't know about your North Carolina Southern. >> All right, well, let's give it a go here: "Y'all done a real, real..." no profanity now, this is live... "I've done a real, real nice job on this here fancy demo." All right, hey, it likes me now. >> All right, cool. And the key thing, just a quick note: it's showing you've got a free trade. So we can integrate those business rules and then decide, do I give out a free trade if you're angry; it's all brought together on one platform, all running on OpenShift. >> Yeah, and I can see the possibilities: we've not only deployed services, but we're getting that feedback from our customers to understand how well the services are being used and whether people are really happy with what they have. Hey, listen, Michael, this was amazing. I appreciate you joining us today. I hope you guys enjoyed this demo as well. Now, all of you know who this next company is. As I look out through the crowd, based on what I can actually see with the sun shining down on me right now, I can see their influence everywhere. Sports is in our everyday lives, and these guys are equally innovative in that space as they are with hybrid cloud computing, and they use that to help maintain and spread their message throughout the world. Of course, I'm talking about Nike. I think you'll enjoy this next video about Nike and their brand, and then we're going to hear directly from Mike Wittig about what they're doing with Red Hat technology. >> New developments in the top story of the day: the world has stopped turning on its axis. Top scientists are currently racing to come up with a solution. >> Everybody, going this way. [Music] The wrong way. [Music] >> Please welcome Nike Vice
President of Infrastructure Engineering, Mike Wittig. [Music] >> Hi, everybody. Over the last five years at Nike, we have transformed our technology landscape to allow us to connect more directly to our consumers: through our retail stores, through Nike.com, and through our mobile apps. The first step was redesigning our global network to give us direct connectivity into both Azure and AWS in Europe, in Asia, and in the Americas. Having that proximity to those cloud providers allows us to make decisions about application workload placement based on our strategy, instead of having to design around latency concerns. Now, some of those workloads are very elastic, things like our SNKRS app, for example, which needs to burst out during certain hours of the week and at certain moments of the year when we have our high-heat product launches. For those types of workloads we write the code ourselves and use native cloud services. But being hybrid has allowed us not to have to write everything that goes into that app, just the parts that make up the consumer-facing experience. There are other back-end systems, certain core functionalities like order management, warehouse management, finance, and ERP, that are third-party applications we host on RHEL. Over the last 18 months we have started to deploy certain elements of those core applications into both Azure and AWS, hosted on RHEL. At first we were pretty cautious, so we started with development environments. What we realized after those first successful deployments is that the impact of those cloud migrations on our operating model was very small, because the tools we use for monitoring, for security, for performance tuning didn't change, even though we moved those core applications into Azure and AWS, because of RHEL under the covers. Getting to the point where we have that flexibility is a real enabler. As an infrastructure team, it allows us to just
be in the "yes" business. It really doesn't matter where we want to deploy a given workload, on either cloud provider or on-prem, anywhere on the planet; it allows us to move much more quickly and stay much more connected to our consumers. So having RHEL at the core of our strategy is a huge enabler of that flexibility and of operating in this hybrid model. Thanks very much. [Applause] >> What a great example. It's really nice to hear a Nike story of using RHEL as that foundation to enable their hybrid cloud, to enable their infrastructure. And there's a lot to that story: we've spent over ten years making it possible for RHEL to be that foundation, and we've learned a lot in doing it. But let's circle back for a minute to the software vendors and what kicked off the day today with IBM. IBM has one of the largest software portfolios on the planet. But we learned through our journey on RHEL that you need thousands of vendors to be able to support you across all of your different industries and solve any challenge you might have, and you need those vendors aligned with your technology direction. This is doubly important when the technology direction is changing, as it is with containers. We saw that: two years ago Red Hat introduced our container certification program. That program focused on allowing you to identify vendors that had those shared technology goals. But identification by itself wasn't enough in this fast-paced world, so last year we introduced trusted content: we introduced our Container Health Index, publicly grading the Red Hat images that form the foundation for those vendor images. That mattered because, as those of you familiar with containers know, you're taking software from vendors, combining it with software from companies like Red Hat, and putting it all into a single container; for you to run that in a mission-critical capacity, you have to know that we can both stand by and support those deployments. But even trusted content
wasn't enough, so this year I'm excited that we are extending once again to introduce trusted operations. Last week at KubeCon, the Kubernetes conference, we announced the Kubernetes Operator SDK. The goal of Kubernetes Operators is to allow any software provider on Kubernetes to encode how that software should run. This is a critical part of a container ecosystem: not just being able to find the vendors you want to work with, not just knowing you can trust what's inside the container, but knowing that you can efficiently run that software. Now, the exciting part is that, because this is so closely aligned with the upstream technology, today we already have four partners with functioning Operators: Couchbase, Dynatrace, Crunchy, and Black Duck. So right out of the gate you have security, monitoring, and data-store options available to you. These partners are really leading the charge in what it means to run their software on OpenShift, but behind these four we have many more: this morning we announced over 60 partners committed to building Operators, taking their domain expertise and the software they wrote and know, and extending it into how you run that software on containers in environments like OpenShift. This really brings together the power of being able to find the vendors, being able to trust what's inside, and knowing you can run their software as efficiently as anyone else on the planet. But instead of just telling you about this, we actually want to show you it in action, so why don't we bring back up the demo team to give you a little tour of what's possible. Guys? >> Thanks, Matt. So Matt talked about the concept of Operators, and when I think about Operators and what they do, it's taking OpenShift-based services and making them even smarter, giving you insight into how they do things. For example, had we had an Operator for the Node.js service I was running earlier, it would have detected the problem
and fixed itself. But when I look at what Operators really do from an ecosystem perspective, for ISVs they're going to be a catalyst that allows them to make their services as manageable, as flexible, and as maintainable as any public cloud service, no matter where OpenShift is running. And to help demonstrate this, I've got my buddy Rob here. Rob, are we ready on the demo front? >> We're ready. >> Awesome. Now, I notice this screen looks really familiar to me, but I think we want to give folks here a dev preview of a couple of things. What we want to show you is the first substantial integration of the CoreOS Tectonic technology with OpenShift, and then we're going to dive a little deeper into Operators and their usefulness. So, Rob? >> Yeah. What we're looking at here is the service catalog that you know and love in OpenShift, and we've got a few new things in here: we've actually integrated Operators into the service catalog, and I'm going to take this filter and give you a look at some of the ones we have today. You can see we've got a list of Operators exposed, and this is the same way your developers are already used to integrating with products; they're right in your catalog, and now these are actually smarter services. >> But how can we look at that? I mentioned there's maybe a new view. I'm used to seeing this as a developer, but I hear we've got some really cool stuff if I'm the administrator of the console. >> Yeah, we've got a whole new side of the console for cluster administrators, a look at the infrastructure itself versus the dev-focused view we're looking at today. So let's go take a look at it. The first thing you see here is a really rich set of monitoring and health status: we can see that we've got some alerts firing, our control plane is up, and we can even do capacity planning, anything you need to do to maintain your cluster. >> Okay, so it's
not only for the services in the cluster, doing things that I as a human operator would normally have to do; this console view also gives me insight into the infrastructure itself, like the nodes, and maybe handling the security context. Is that true? >> Yes. These are new capabilities we're bringing to OpenShift: the ability to do node management, things like draining and unscheduling nodes for day-to-day maintenance, as well as security constraints and things like role bindings, for example. And the exciting thing is that this is a view you've never been able to see before: it's cross-cutting across namespaces. Here we've got a number of admin bindings, and we can see they're connected to a number of namespaces, which represent our engineering teams, all the groups using the cluster. We've never had this view before; it's a perfect way to audit your security. >> It actually is pretty exciting. I've been fortunate enough to be on the OpenShift team since day one, and I know that operations view is something we've strived for, so it's really exciting to see we can offer it now. But we really want to get into what Operators do and what they can do for us, so maybe you show us what the Operator console looks like. >> Yeah, let's jump over and see all the Operators we have installed on the cluster. You can see these mirror what we saw in the service catalog earlier. What we care about, though, is this Couchbase Operator, and we're going to jump into the demo namespace; as I said, a number of different teams can share a cluster, so we'll jump into this namespace. >> Okay, cool. So what we want to show you, when we think about Operators, is a scenario where there are multiple replicas of a Couchbase service running in the cluster, backed by a stateful set.
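The cross-namespace role bindings mentioned above are standard Kubernetes RBAC objects. A minimal sketch of one such binding (the group and namespace names are illustrative assumptions):

```yaml
# Minimal RBAC sketch: grant the "engineering" group admin rights in a
# single namespace. An admin view can then aggregate bindings like this
# across all namespaces to audit who can do what.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-admin-binding
  namespace: team-a            # illustrative namespace
subjects:
- kind: Group
  name: engineering            # illustrative group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                  # built-in aggregated admin role
  apiGroup: rbac.authorization.k8s.io
```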
What's interesting is that those two things alone are not enough if I'm really trying to run this as a true service that's highly available and persistent. There are things that, as a DBA, I would normally have to do if there's some sort of node failure. So what we want to demonstrate is how Operators, combined with the power already within OpenShift, come together to keep this particular database service highly available and usable. So Rob, what have you got? >> Yeah, as you can see, we've got our Couchbase demo cluster running here, and we can see that it's up: we've got three members, and we've got an auth secret, which is controlling access to a UI we're going to look at in a second. But what really shows the power of the Operator is this view of the resources it's managing: you can see we've got a service doing load balancing into the cluster, and then, like you said, we've got our pods actually running the software itself. >> Okay, that's cool. So maybe, for everyone's benefit, so we can show this happening live, could we bring up the Couchbase console, and keep up the OpenShift console, both side by side? There we go. What we see on the right-hand side is the same console Rob was working in; on the left-hand side, as you can see by the actual names of the pods, are the Couchbase services that are available. So Rob, maybe let's kill something; that's always fun to do on stage. >> Yeah, this is the power of the Operator: it's going to recover it. So let's browse over here and kill node number two. We'll forcefully kill this and kick off the recovery. >> And I see right away that, because of the integration we have with Operators, the Couchbase console immediately picked up that something has changed in the environment. Now why is that important?
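The Operator drives this from a custom resource describing the desired cluster. A rough sketch of what such a definition can look like (the field names approximate the early Couchbase Operator from memory and should be treated as illustrative, not authoritative):

```yaml
# Illustrative custom resource: the Operator watches objects like this
# and reconciles the running cluster to match (three members, as in
# the demo), re-creating and rebalancing members after a failure.
# Verify field names against the Operator version you install.
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-demo
  namespace: demo
spec:
  baseImage: couchbase/server
  version: enterprise-5.1.0
  authSecret: cb-demo-auth     # the auth secret seen in the console
  servers:
  - name: all_services
    size: 3                    # desired member count
    services:
    - data
    - index
    - query
```

This is the difference from a bare StatefulSet: the Operator encodes the database-specific recovery steps (rejoin, rebalance) that a DBA would otherwise perform by hand.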
Normally a human being would have to get that alert, right? And so with Operators, we've taken that capability, and the system has recognized that there has been a new event within the environment. This is not something Kubernetes or OpenShift by itself would be able to understand. >> Now, I'm presuming we're going to end up doing something else; it's not just seeing that it failed. >> And sure enough, there we go. Remember, when you have a stateful application, rebalancing that data and making it available again is just as important as ensuring the disk is attached. >> So Rob, thank you so much for driving this for us today and for being here. And not only Couchbase: as Matt mentioned, we also have Crunchy, Dynatrace, and Black Duck. I'd encourage you all to go visit their booths out on the floor today and see what they have available, all here as a dev preview, and then talk to the many other partners we have who are also looking at Operators. So again, Rob, thank you for joining us today. Matt, come on out. >> Okay. This is going to make for an exciting year in what it means to consume container-based content. I think containers change how customers can get that content; I believe Operators are going to change how much they can trust running that content. Let's circle back to one more partner. This next partner has changed the landscape of computing, specifically with their work on hardware design and their work on core Linux itself. In fact, I think they've become so ubiquitous in computing that we often overlook the technological marvels they've been able to overcome. Now, for myself, I studied computer engineering, so in the late '90s I had the chance to study processor design; I actually got to build one of my own processors. In my case it was the most trivial processor you could imagine: an 8-bit subtractor, which means it can subtract two numbers 256 or smaller. But in that process I learned the sheer complexity that
goes into processor design: things like wire placements so close that electrons can cut through the insulation and short, and doing those placements across three dimensions, on multiple layers, jamming in as many logic components as you possibly can. And again, in my case this was to make a processor that could subtract two numbers. But once I was done with that, the second part of the course was studying the Pentium processor. Now, I'll remember that moment forever, because looking at what the Pentium processor was able to accomplish was like looking at alien technology. And the incredible thing is that Intel, our next partner, has been able to keep up that alien-like pace of innovation twenty years later. So we're excited to have Doug Fisher here; let's hear a little bit more from Intel. >> For business: wide-open skies, an open mind. No matter the context, the idea of being open almost always suggests the potential of infinite possibilities. And that's exactly the power of open source, whether it's expanding what's possible in business, in science and technology, or for the greater good, which is why open source requires the involvement of a truly diverse community of contributors to scale and succeed, creating infinite possibilities for technology and, more importantly, for what we do with it. [Music] >> You know, at Intel one of our core values is risk-taking, and I'm going to go just a bit off script for a second and say I was just backstage and I saw a gentleman who looked a lot like Scott Guthrie, who runs all of Microsoft's cloud enterprise efforts, wearing a red shirt, talking to Cormier. I'm just saying. I don't know, maybe I need some more sleep, but that's what I saw. As we approach Intel's 50th anniversary, these words spoken by our co-founder Robert Noyce are as relevant today as they were decades ago: "Don't be encumbered by history. Go off and do something wonderful." This is about breaking boundaries in technology; it's about innovation and driving innovation in our industry. And at
Intel we're constantly looking to break boundaries to advance our technology; in the cloud and enterprise space that is no different. So I'm going to talk a bit about some of the boundaries we've been breaking and the innovations we've been driving at Intel, starting with our Intel Xeon platform. Our Xeon Scalable platform, which we launched several months ago, was the biggest and most advanced movement in this technology in over a decade: we were able to drive critical performance capabilities, unmatched agility, and the necessary security into that platform. And I couldn't be happier with the work we do with Red Hat in ensuring that the hero features we drive into our platform are fully exposed to all of you, to drive that innovation, to go off and do something wonderful. Whether it's performance and agility features like our Advanced Vector Extensions (AVX-512) or Intel QuickAssist, those technologies are fully embraced by Red Hat Enterprise Linux; and whether it's security technologies like TXT, Trusted Execution Technology, they are fully incorporated. We look forward to working with Red Hat on their next release to ensure our advancements continue to be exposed in their platform. And all these workloads that are driving the need for us to break boundaries in our technology are driving more and more need for flexibility in computing, and that's why we're excited about Intel's family of FPGAs to help deliver that additional flexibility for you to build those capabilities in your environment. We have a broad set of FPGA capabilities, from our power-efficient MAX product line all the way to our high-performance Stratix 10 line. Talking to customers, what's really exciting is to see the combination of our Intel Xeon Scalable platform with FPGAs, in addition to the acceleration development capabilities we've given to software developers,
combining all of that together to deliver better and better solutions, whether it's accelerating data compression, pattern recognition, or data encryption and decryption. One of the things I saw in a data center recently was our Intel Xeon Scalable platform using an FPGA to do data encryption between servers behind the firewall; by using the FPGA for that, they preserved those precious CPU cycles to deliver the SLA to the customer, yet provided more security for their data in the data center. One of the edges in cybersecurity is innovation, and the root of trust starts at the hardware. We recently renewed our commitment to security with our security-first pledge, which has three elements. First is customer-first urgency: we have now completed the release of the microcode updates for protection on our Intel platforms going back nine-plus years since launch, to protect against things like the side-channel exploits. Second, transparent and timely communication: we are going to communicate timely and openly on our intel.com website, whether it's about our patches, performance, or other relevant information. And third, ongoing security assurance: we drive security into every one of our products. We redesigned a portion of our processor to add partitioning capability, adding additional walls between applications and user-level privileges to further secure the environment from bad actors. I want to pause for a second and thank everyone in this room involved in helping us work through our security-first pledge. This isn't something we do on our own; it takes everyone in this room to help us do it. The partnership and collaboration was second to none; it's the most amazing thing I've seen since I've been in this industry, so thank you. And we don't stop there; we continue to advance our security capabilities with cross-platform solutions. We recently had a discussion at RSA where we talked about Intel Security
Essentials, where we deliver a framework of capabilities that are in our silicon, available for our customers and the security ecosystem to innovate on the platform in a consistent way, delivering the assurance that those capabilities will be on that platform. We also talked about things like our Threat Detection Technology, something that we believe in and that we launched at RSA. It incorporates several elements. One is the ability to utilize our integrated graphics to accelerate some of the memory scanning capabilities. We call this Accelerated Memory Scanning; it allows you to use the integrated graphics to scan memory, again preserving those precious cycles on the core processor. Microsoft adopted this, has now incorporated it into their Defender product, and is shipping it today. We also launched our threat SDK, which allows partners like Cisco to utilize telemetry information to further secure their environments for cloud workloads. So we'll continue to drive differentiated experiences into our platform for our ecosystem to innovate on and deliver more and more capabilities. One of the key aspects you have to protect is data. By 2020, the projection is that 44 zettabytes of data will be available. 44 zettabytes of data. By 2025, they project that will grow to 180 zettabytes of data. A massive amount of data, and what you want to do is drive value from that data. Driving value from that data is absolutely critical, and to do that you need to have that data closer and closer to your computation. This is why we've been working at Intel to break the boundaries in memory technology. With our investment in 3D NAND, we're reducing costs and driving up density in that form factor to ensure we get warm data closer to the computing. We're also innovating on form factors. We have here what we call our ruler form factor. This ruler form factor is designed to drive as much density as you can into a 1U rack. We're going to continue
to advance the capabilities to drive one petabyte of data, at low power consumption, into this ruler form factor SSD. So our innovation continues. The biggest breakthrough in memory media technology in the last 25 years was done by Intel. We call this our 3D XPoint technology, and our 3D XPoint technology is now going to be driven into SSDs as well as into a persistent memory form factor to sit on the memory bus, giving you the speed characteristics of memory as well as the characteristics of storage, giving a new tier of memory for developers to take full advantage of. And as you can see, Red Hat is fully committed to integrating this capability into their platform to take full advantage of that new capability. So I want to thank Paul and team for engaging with us to make sure that that's available for all of you to innovate on. And so we're breaking boundaries in technology across a broad set of elements that we deliver. That's what we're about. We're going to continue to do that, not be encumbered by the past. Your role is to go off and do something wonderful with that technology. All ecosystems are embracing this and driving it, including open source. Open source is a hub of innovation; it's been that way for many, many years. The innovation that's being driven in open source is starting to transform many, many businesses. It's driving business transformation. We're seeing this come to light in the transformation of 5G. Driving 5G into the network environment is a transformational moment, and open source is playing a pivotal role in that. With OpenStack, ONAP, OPNFV, and other open source projects we're contributing to and participating in, we are helping drive that transformation in 5G as you do software-defined networks on our barrier-breaking technology. We're also seeing this transformation rapidly occurring in the cloud. Enterprise clouds are growing rapidly and innovation continues. Our work with
virtualization and KVM continues; we are aggressive in adopting technologies to advance and deliver more capabilities in virtualization. As we look at this with Red Hat, we're now working on KubeVirt to help move virtualized workloads onto these platforms so that we can have them managed in an open platform environment, and KubeVirt provides that. So between Intel, Red Hat, and the community, we're investing resources to make certain that comes to product. As containers, a critical feature in Linux, become more and more prevalent across the industry, the growth of container deployments continues at a rapid, rapid pace. One of the things that we wanted to bring to that is the ability to provide isolation without impairing the flexibility, the speed, and the footprint of a container. With our Clear Containers effort, along with Hyper's runV, we were able to combine the two and create what we call Kata Containers. We launched this at the end of last year. Kata Containers is designed to have that container element available while adding elements like isolation. Both of these efforts need an orchestration and management capability, and Red Hat's OpenShift provides that capability for these workloads, whether containerized or KubeVirt-managed virtual environments. Red Hat OpenShift is designed to take that commercial capability to market, and we've been working with Red Hat for several years now to develop what we call our Intel Select Solutions. Intel Select Solutions are Intel technology optimized for downstream workloads: as we see growth in a workload, we work with a partner to optimize a solution on Intel technology to deliver the best solution that can be deployed quickly. Our effort here is to accelerate the adoption of these types of workloads in the market, working with Red Hat. So now we're going to be deploying an Intel Select Solution designed and optimized around Red Hat OpenShift. We expect the industry to start deploying this capability very rapidly. I'm excited to announce
today that Lenovo is committed to be the first platform company to deliver this solution to market; the Intel Select Solution will be delivered to market by Lenovo. Now, I've talked about what we're doing in industry and how we're transforming businesses. Our technology is also utilized for the greater good, and there's no better example of this than the work done with Dr. Stephen Hawking. It was a sad day on March 14th of this year when Dr. Stephen Hawking passed away, but not before Intel had a 20-year relationship with Dr. Hawking, driving breakthrough capabilities, innovating with him, and bringing those robust capabilities to the rest of the world. One of our Intel engineers, an Intel Fellow, which is the highest technical achievement you can reach at Intel, got to spend 10 years with Dr. Hawking, looking at innovative things they could do together with our technology and his breakthrough, innovative thinking. So I thought it'd be great to bring up our Intel Fellow, Lama Nachman, to talk about her work with Dr. Hawking and what she learned in that experience. Come on up, Lama. [Music] Great to see you. Thanks. So, we've been going on about breakthroughs, about breaking boundaries with Intel technology. Talk about how you used that in your work with Dr.
Hawking. Absolutely. So the most important part was to really make that technology contextually aware, because for people with a disability, every single interaction takes a long time. So whether it was adapting, for example, the language model of his word predictor to understand whether he's going to talk to people or whether he's writing a book on black holes, or to even understand what specific application he might be using, we had to make sure that we were surfacing only the actions that were relevant, to reduce that amount of interaction. So the tricky part is really to make all of that contextual awareness happen without totally confusing the user, because it's constantly changing underneath him. So how does your work involve open source? So, you know, the problem with assistive technology in general is that it needs to be tailored to the specific disability, which really makes it very hard and very expensive, because it can't utilize the economies of scale. So basically, with the system that we built, what we wanted to do is really enable unleashing innovation in the world, right? So you could take that framework, you could tailor it to a specific sensor, for example a brain-computer interface or something like that, where you could then support a different set of users. So that makes open source a perfect fit, because you could actually build on it and tailor it. And you spoke with Dr.
Hawking; what was his view of open source? Was it relevant to him? So yeah, Stephen was adamant from the beginning that he wanted a system to benefit the world and not just himself. So he spent a lot of time with us to actually build this system, and he was adamant from day one that he would only engage with us if we committed to actually open sourcing the technology. That's fantastic. And you had the privilege of working with him for 10 years; I know you have some amazing stories to share. So thank you so much for being here. Thank you so much. In order for us to scale, and that's what we're about at Intel, really scaling our capabilities, it takes this community. It takes this community of diverse capabilities; it takes diverse thought. The diverse thought of Dr. Hawking couldn't be more relevant, but we are also proud at Intel to be leading efforts of diverse thought, like Women in Linux, Women in Big Data, and other areas like that, where Intel feels that that diversity of thinking and engagement is critical for our success. So as we look at Intel, not to be encumbered by the past but to break boundaries to deliver the technology that you all will go off and do something wonderful with, we're going to remain committed to that, and I look forward to continuing to work with you. Thank you, and have a great conference. [Applause] Thank you. Now, we have one more customer story for you today. When you think about customers' challenges in the technology landscape, it is hard to ignore the public cloud these days. The public cloud is introducing capabilities that are driving the fastest rate of innovation that we've ever seen in our industry, and our next customer actually had that same challenge. They wanted to tap into that innovation, but they were also making bets for the long term: they wanted flexibility in providers, and they had to integrate with the systems that they already have. And they have done a phenomenal job in executing on this. So please give a warm welcome to Kerry Pierce from Cathay
Pacific. Kerry, come on up. Thanks very much, Matt. Hi, everyone. Thank you for giving me the opportunity to share a little bit about our cloud journey. Let me start by telling you a little bit about Cathay Pacific. We're an international airline based in Hong Kong, and we serve a passenger and a cargo network to over 200 destinations in 52 countries and territories. Over the last seventy years, we've made substantial investments to develop Hong Kong as one of the world's leading transportation hubs. We invest in what matters most to our customers, to you, focusing on our exemplary service and our great product, both on the ground and in the air. We're also investing in expanding our network, beyond our multiple frequencies to the financial districts such as Tokyo, New York, and London. We're connecting Asia and Hong Kong with key tech hubs like San Francisco, where we have multiple flights daily, and we're also connecting Asia and Hong Kong to places like Tel Aviv and our upcoming destination of Dublin. In fact, 2018 is actually going to be one of our biggest years in terms of network expansion and capacity growth, and in September we will be launching our longest flight, from Hong Kong direct to Washington, DC, using a state-of-the-art Airbus A350-1000 aircraft. So that's a little bit about Cathay Pacific. Let me tell you about our journey to the cloud. I'm not going to go into technical details; there are far smarter people out in the audience who will be able to do that for you. I'll just focus a little bit on what we were trying to achieve and the people side of it that helped us get there. A couple of years ago we had, no doubt, the same issues that many of you do; I don't think we're unique. We had a traditional, on-premise, non-standardized, fragile infrastructure. It didn't meet our infrastructure needs and it didn't meet our development needs. It was costly to maintain, it was costly to grow, and it really inhibited innovation. Most importantly, it slowed
the delivery of value to our customers. At the same time, you had the hype of cloud over the last few years: cloud this, cloud that, cloud's going to fix the world. We were really keen on making sure we didn't get wound up in that, so we focused on what we needed. We started bottom-up with a strategy. We knew we wanted to be cloud agnostic. We wanted to have active-active on-premise data centers with a single network and fabric, and we wanted public clouds that were trusted and acted as an extension of that environment, not independently. We wanted to avoid single points of failure, and we wanted to reduce interdependencies by having loosely coupled designs. And finally, we wanted to be scalable; we wanted to be able to cater for sudden surges of demand. In a nutshell, we kind of just wanted to make everything easier. At a management level, we wanted to be a broker of services. So not one size fits all, because that doesn't work, but also not one of everything. We wanted to standardize on a pragmatic range of services that met our development and support needs and worked in harmony with our public cloud, not against it. So we started on a journey with Red Hat. We implemented Red Hat CloudForms and Ansible to manage our hybrid cloud. We also implemented Red Hat Satellite to maintain a managed environment. We built a Red Hat OpenStack on-premise environment to give us an alternative, and at the same time we migrated a number of customer applications to a production public cloud OpenShift environment. But it wasn't all Red Hat. You'll have heard today that Red Hat fits within an overall ecosystem, and we looked at a number of third-party tools and services and looked at developing those into our core solution. I think at last count we had tried and tested somewhere past eighty different tools, and at the moment we still have around 62 in our environment that help us through that journey. But let me put the technical solution aside a little bit, because it doesn't matter how good your technical solution
is if you don't have the culture and the people to get it right. As a group, we needed to be aligned for delivery, and we focused on three core behaviors: accountability, agility, and collaboration. Now, I was really lucky; we've got a pretty fantastic team for whom that was actually pretty easy. But again, don't underestimate the importance of getting the culture and the people right, because all the technology in the world doesn't matter if you don't have that right. I asked the team what we did differently, because in our situation we didn't go out and hire a bunch of new people, and we didn't go out and hire a bunch of consultants. We had the staff that had been with us for 10, 20, and in some cases 30 years. So what did we do differently? It was really simple: we just empowered and supported our staff. We knew they were the smart ones; they were the ones that were dealing with the legacy environment, and they had the passion to make the change. So as a team, we encouraged suggestions and contributions from our overall IT community, from the bottom up. We started small, we proved the case, we told the story, and then we got buy-in, and only then did we implement wider. The benefits for our staff were a huge increase in staff satisfaction, a reduction in application and platform outage support incidents, risk-free and failsafe application releases, and work-life balance, with no more midnight deployments, and our application and infrastructure people could really focus on delivering customer value, not on firefighting. And for our end customers, the people that travel with us, it was really, really simple: we could provide a stable service that allowed for faster releases, which meant we could deliver value faster. In terms of stats, we migrated 16 production B2C applications to a public cloud OpenShift environment in 12 months. We decreased provisioning time from weeks, or occasionally months when we were waiting for hardware, to minutes, and we had one hundred percent availability of our key
customer-facing systems. But most importantly, it was about people. We'd built a culture, a culture of innovation, that was built on a foundation of collaboration, agility, and accountability, and that permeated throughout the IT organization, not just those people that were involved in the project. Everyone within IT could see what good looked like, and could see what it looked like in terms of working together, and that was a key foundation for us. As for the future, you will have heard today that everything's changing. So we're going to continue to develop our open hybrid cloud, onboard more public cloud service providers, continue to build more modern applications, leverage the emerging technology, integrate and automate everything we possibly can, and leverage more open source products with the great support of the open source community. So there you have it; that's our journey. I think we succeeded by not being overawed and by starting with the basics. The technology was key, obviously; it's a core component. But most importantly, it was the way we approached our transition. We had a clear strategy that was actually developed bottom-up by the people that were involved day to day, and we empowered those people to deliver, and that provided benefits to both our staff and to our customers. So thank you for giving me the opportunity to share, and I hope you enjoy the rest of the summit. [Applause] Thanks. What a great story, what a great customer story to close on. And we have one more partner to come up, and this is a partner that all of you know: Microsoft. Microsoft has gone through an amazing transformation, and we've built an incredibly meaningful partnership with them, all the way from our open source collaboration to what we do on the business side. We started with support for Red Hat Enterprise Linux on Hyper-V, and that was truly just the beginning. Today we're announcing one of the most exciting joint product offerings on the market. So let's please give a
warm welcome to Paul Cormier and Scott Guthrie to tell us about it. Guys, come on out. Scott, welcome to the Red Hat Summit. Thanks for coming; really appreciate it. Great to be here. You know, it surprised a lot of people when we published the list of speakers and you were on it, and now you and I are on stage here. It's really important and exciting to us, this exciting new partnership. We've worked together a long time, from the hypervisor up to common support, and now around hybrid cloud. Maybe share, from your perspective, a little bit of what led us here. Well, you know, I think the thing that's really led us here is customers. At Microsoft we've been on kind of a transformation journey the last several years, where we really try to put customers at the center of everything that we do. And as part of that, you quickly learn from customers, including everyone here, that you've got a hybrid estate, both in terms of what you run on premises, where there's a lot of Red Hat software and a lot of Microsoft software, and then, as they take the journey to the cloud, a hybrid estate in terms of how you run that between on-premises and a public cloud provider. And so I think the thing that both of us have recognized, and certainly our focus here at Microsoft, has been how do we really meet customers where they're at and where they want to go, and make them successful in that journey. And it's been fantastic working with Paul and the Red Hat team over the last two years in particular; we've spent a lot of time together, and we're really excited about the journey ahead. So maybe you can share a bit more about the announcement we're about to make today. Yeah, so it's a really exciting announcement, and really, I think, a first of its kind, in that we're delivering a Red Hat OpenShift on Azure service that we're jointly
developing and jointly managing together. So this is different than a traditional offering, where it's just running inside VMs and it's sort of two vendors working it; this is really a jointly managed service that we're providing, with full enterprise support and a full SLA, where there's a single throat to choke, if you will, although collectively it's both our throats in terms of making sure that it works well. And it's really uniquely designed around this hybrid world, in that it will support both Windows and Linux containers, and it's the same OpenShift that runs both in the public cloud on Azure and on-premises. It's something that we hear a lot from customers; I know there are a lot of people here that have asked both of us for this, and we're super excited to be able to talk about it today. We're going to show off the first demo of it in just a bit. Okay, well, I'm going to ask you to elaborate a bit more about how this fits into the bigger Microsoft picture, and I'll get out of your way. So thanks again. Thank you. Here we go. Thanks, Paul. So I thought I'd spend just a few minutes talking about some of the work that we're doing with Microsoft Azure and the overall Microsoft cloud, and then go deeper into the new offering that we're announcing today together with Red Hat and show a demo of it actually in action in a few minutes. At a high level, in terms of some of the work that we've been doing at Microsoft the last couple of years, it's really been around this journey to the cloud that we see every organization going on today. Specifically with Microsoft Azure, we've been providing a cloud platform that delivers the infrastructure, the application, and the core computing needs that organizations have as they want to take advantage of what the cloud has to offer. In terms of our focus with Azure, you know, we deliver
lots and lots of different services and features, but we've focused in particular on four key themes, and we see these four key themes aligning very well with the journey Red Hat has been on; it's partly why we think the partnership between the two companies makes so much sense. For us, the first thing we've been really focused on with Azure has been how do we deliver a really productive cloud, meaning how do we enable you to take advantage of cutting-edge technology, and how do we accelerate the successful adoption of it, whether it's around the integration of managed services that we provide, in the application space, the data space, the analytics and AI space, but also in terms of the end-to-end management and development tools and how all those services work together so that teams can adopt them and be super successful. Second, we deeply believe in hybrid, and believe that the world is going to be a multi-cloud and multi-distributed world, so how do we enable organizations to take the existing investments that they already have, easily integrate them with a public cloud environment, and get immediate ROI on day one, without having to rip and replace tons of solutions. Third, we're moving very aggressively in the AI space, and are looking to provide a rich set of AI services, both finished AI models, things like speech detection, vision detection, object motion, etc., that any developer, even a non-data scientist, can integrate to make applications smarter, and a rich set of AI tooling that enables organizations to build custom models and integrate them as part of their applications and with their data. And then fourth, we invest very, very heavily in trust. Trust is sort of at the core of Azure, and we now have more compliance certifications than any other cloud provider, we run in more countries than any other cloud provider, and we
really focus on unique promises around data residency, data sovereignty, and privacy that are really differentiated across the industry. In terms of where Azure runs today, we're in 50 regions around the world. A region for us is typically a cluster of multiple data centers that are grouped together, and you can see we're pretty much on every continent today, with the exception of Antarctica. And the beauty is, you're going to be able to take the Red Hat OpenShift service and run it on Azure in each of these different locations, and really have a truly global footprint as you look to build and deploy solutions. And we've seen this focus on productivity, hybrid, intelligence, and trust really resonate in the market: about 90 percent of Fortune 500 companies today are deployed on Azure, and you heard Nike talk a little bit earlier this afternoon about some of their journey as they've moved to the public cloud. This is a small set of logos, just a couple of the companies that are on Azure today. And what I'll do, actually, even before we dive into the OpenShift demo, is just show a quick video from one of those companies; there are actually several people from that organization here today. Deutsche Bank has been working with both Microsoft and Red Hat for many years, with Microsoft on one side and Red Hat both on the RHEL side and on the OpenShift side, and it's just one of those customers that have helped bring the two companies together to deliver this managed OpenShift service on Azure. So I'm just going to play a quick video of some of the folks at Deutsche Bank talking about their experiences and what they're trying to get out of it. So if we could roll the video, that'd be great. Technology is at the absolute heart of Deutsche Bank. We've recognized that the cost of running our infrastructure was particularly high, and there was an enormous amount of underutilization. We needed a platform which was open, with a polyglot architecture, supporting
any kind of application workload across the various business lines of the firm. We analyzed over 60 different vendor products, and we ended up with Red Hat OpenShift. I'm super excited that Microsoft is supporting Linux so strongly. In adopting a hybrid approach, we chose Azure because Microsoft was the ideal partner to work with on constructs around security, compliance, and business continuity, and Azure is in all the places geographically that we need to be. We have applications now able to go from a proof of concept to production in three weeks; that is already breaking records. OpenShift gives us Kubernetes and containers, which allow us to apply the same sets of processes and automation across a wide range of our application landscape. On any given day, we run between seven and twelve thousand containers across three regions. We're starting to see huge levels of cost reduction because of the level of multi-tenancy that we can achieve through containers. OpenShift gives us an abstraction layer which allows us to move our applications between providers without having to reconfigure or recode those applications. What's really exciting for me about this journey is the way that both Red Hat and Microsoft have embraced not just what we're doing, but what each other are doing, and have worked together to build OpenShift as a first-class citizen with Microsoft. [Applause] In terms of what we're announcing today, it is a new, fully managed OpenShift service on Azure, and it's really the first fully managed OpenShift service provided end-to-end across any of the cloud providers. It's jointly engineered, operated, and supported by both Microsoft and Red Hat, and that means, again, one service, one SLA, and both companies standing firmly behind it, really focusing on how we make customers successful. And as part of that, we're providing enterprise grade, not just SLAs but also support and integration testing, so you can take advantage of all your RHEL and Linux-based containers and all of
your Windows Server-based containers, and run them in a joint way with a common management stack, taking advantage of one service, getting maximum density, getting maximum code reuse, and being able to take advantage of a containerized world in a better way than ever before. This customer focus is very much at the center of what both companies are centered around. And so what I thought would be fun, rather than just talk about OpenShift, is to actually show off a little bit of the journey, in terms of what this move to take advantage of it looks like. So I'd like to invite Brendan and Chris on stage, who are going to show a live demo of OpenShift on Azure in action, and walk through how to provision the service and how to start taking advantage of it using the full OpenShift ecosystem. So please welcome Brendan and Chris, who are going to join us on stage for a demo. Thanks, Scott. Thanks, man. It's been a good afternoon. So, first I'd like to thank Brendan Burns for joining us from Microsoft Build; it's a busy week for you, and I'm sure you're on stage there a few times as well. You know what I like most about what we just announced? It's not only the business and technical aspects, it's the operational aspect: the uniqueness, the expertise that Red Hat has for running OpenShift, combined with the expertise that Microsoft has within Azure, and customers are going to get this joint offering, if you will, with Red Hat OpenShift on Microsoft Azure. And so, with that, again, Brendan, I really appreciate you being here; maybe talk to the folks about what we're going to show. Yeah, so we're going to take a look at what it looks like to deploy OpenShift onto Azure via the new OpenShift service, and the real selling point, the really great part of this, is the deep integration with the cloud-native Azure APIs. So the same tooling that you would use to create virtual machines, to create disks, to create
databases is now the tooling that you're going to use to create an OpenShift cluster. So to show you this, first we're going to create a resource group here. We're going to create that resource group in East US using the az tool, that's the Azure command-line tooling. A resource group is sort of a folder on Azure that holds all of your stuff. So that's going to come back in a second: I've created my resource group in East US, and now we're going to use that exact same tool, calling into Azure APIs, to provision an OpenShift cluster. So here we go: we have az openshift, that's our new command-line tool, putting it into that resource group in East US. All right, so it's going to take a little bit of time to deploy that OpenShift cluster; it's doing a bunch of work behind the scenes, provisioning all kinds of resources as well as credentials to access a bunch of different Azure APIs. So are we actually able to see this? Yeah, in just a second we can cut over to that resource group and reload. So, Brendan, while we're loading, the beauty of what the teams have been doing together already is the fact that now OpenShift is a first-class citizen, as it were. Yeah, absolutely, within Azure. So I presume not only can I do a deployment, but I can do things like scale and check my credentials, and pretty much everything that I could do with any other service? That's exactly right, so anything that you were used to doing via the... my computer has locked up... there we go, the demo gods are totally with me... oh, there we go... oh no, I hit reload. Yeah, that was just evil timing on the house. This is another use for Operators, as we talked about earlier today. That's right. My dashboard should be coming up. Do I dare click on something? That's awesome, it was there. There we go. Good job. So what's really interesting about this: I've also heard that it deploys in as little as five to six minutes, which is really
good for customers who want to get up and running with it. >> All right, there we go, there it is, we managed to make it. See, that shows that it's real, right? You see the sweat coming off of me there. >> I feel it. >> There you can see the various resources that are being created in order to create this OpenShift cluster, virtual machines, disks, all of the pieces, provisioned for you automatically via that one single command-line call. Now, of course, it takes a few minutes to create the cluster, so in order to show the other side of that integration, the integration between OpenShift and Azure, I'm going to cut over to an OpenShift cluster that I already have created. All right, so here you can see my OpenShift cluster that's running on Microsoft Azure. I'm going to actually log in over here, and the first sign you're going to see of the integration is that it's actually using my credentials, my login, going through Active Directory and any corporate policies that I may have around smart cards, two-factor auth, anything like that, to authenticate myself to that OpenShift cluster. So I'll accept that, and now we're going to load up the OpenShift web console. >> So now, this looks familiar to me. >> Oh yeah, so if anybody's used OpenShift out there, this is the exact same console. What we're going to show, though, is how this console, via the Open Service Broker and the Open Service Broker implementation for Azure, integrates natively with OpenShift. All right, so we can go down here and we can actually see, I want to deploy a database. I'm going to deploy Mongo as the key-value store that I'm going to use. But, you know, as we talk about management, and having an OpenShift cluster that's managed for you, I don't really want to have to manage my database either, so I'm actually going to use Cosmos DB. It's a native Azure service, it's a multilingual database that offers me the ability to access my data in a variety of different formats, including MongoDB, fully managed, replicated around the world,
a pretty incredible service. So I'm going to go ahead and create that. >> So now, Brendan, what's interesting to me is, you know, we talked about the operational aspects, and clearly it's not you and I running the clusters, but you do need that way to interface with it. And so when customers are able to deploy this, all of this is out of the box, there's no additional componentry? >> Right, this is what you get when you use that tool to create that OpenShift cluster, this is what you get, with all of that integration. Okay, great, step through here, and go ahead, don't have any IP ranges, there we go. All right, and we create that binding. All right, and so now, behind the scenes, OpenShift is integrated with the Azure APIs, with all of my credentials, to go ahead and create that distributed database. Once it's done provisioning, all of the credentials necessary to access the database are going to be automatically populated into Kubernetes, available for me inside of OpenShift via service discovery, to access from my application without any further work. So I think that really shows not only the power of integrating OpenShift with an Azure-based API, but actually the power of integrating Azure APIs inside of OpenShift, to make a truly seamless experience for managing and deploying your containers across a variety of different platforms. >> Yeah, hey, you know, Brendan, this is great. I know you've got a flight to catch, because I think you're back onstage in a few hours, but, you know, really appreciate you joining us today. >> Absolutely, I look forward to seeing what else we do. >> Yeah, absolutely, thank you so much. >> Thanks, guys. Matt, you want to come back on up? >> Thanks a lot, guys. If you have never had the opportunity to do a live demo in front of 8,000 people, it'll give you a new appreciation for standing up there and doing it, and that was really good. You know, every time I get the chance just to take a step back and think about the technology that we have at our command today, I'm in awe. Just
the progress over the last 10 or 20 years is incredible, and to think about what might come in the next 10 or 20 years really is unthinkable. Even forget 10 years, what might come in the next five years, even the next two years. This can create a lot of uncertainty about what's to come, but I am certain about one thing, and that is: if ever there was a time when any idea is achievable, it is now. Just think about what you've seen today, every aspect of open hybrid cloud. You have the world's infrastructure at your fingertips, and it's not stopping. You've heard about the innovation of open source, how fast that's evolving and improving this capability. You've heard this afternoon from an entire technology ecosystem that's ready to help you on this journey, and you've heard from customer after customer that's already started their journey, and the successes that they've had. One of the neat parts about this afternoon, and later this week, is that you will actually get to put your hands on all of this technology together in our live audience demos. You know, this is what Summit's all about for us. It's a chance to bring together the technology experts that you can work with to help formulate how to pull off those ideas. We have the chance to bring together technology experts, our customers, and our partners, and really create an environment where everyone can experience the power of open source. That same spark that I talked about, when I was at IBM, where I understood the potential that open source had for enterprise customers, we want to create the environment where you can have your own spark, you can have that same inspiration. In tomorrow's keynote, actually, you will hear a story about how open source is changing medicine as we know it, and literally saving lives. It is a great example of expanding the ideas we came into this event with. So let's make this the best Summit ever. Thank you
very much for being here. Let's kick things off right, head down to the Welcome Reception in the expo hall, and please enjoy the Summit. Thank you all so much. [Music]
SUMMARY :
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Doug Fisher | PERSON | 0.99+ |
Stephen | PERSON | 0.99+ |
Brendan | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Deutsche Bank | ORGANIZATION | 0.99+ |
Robert Noyce | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Michael | PERSON | 0.99+ |
Arvind | PERSON | 0.99+ |
20-year | QUANTITY | 0.99+ |
March 14th | DATE | 0.99+ |
Matt | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Nike | ORGANIZATION | 0.99+ |
Paul | PERSON | 0.99+ |
Hong Kong | LOCATION | 0.99+ |
Antarctica | LOCATION | 0.99+ |
Scott Guthrie | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
Asia | LOCATION | 0.99+ |
Washington DC | LOCATION | 0.99+ |
London | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
two minutes | QUANTITY | 0.99+ |
Arvin | PERSON | 0.99+ |
Tel Aviv | LOCATION | 0.99+ |
two numbers | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Paul Cormier | PERSON | 0.99+ |
September | DATE | 0.99+ |
Kerry Pierce | PERSON | 0.99+ |
30 years | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
8-bit | QUANTITY | 0.99+ |
Mike witig | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
2025 | DATE | 0.99+ |
five | QUANTITY | 0.99+ |
Dr. Hawking | PERSON | 0.99+ |
Linux | TITLE | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
Dublin | LOCATION | 0.99+ |
first partner | QUANTITY | 0.99+ |
Rob | PERSON | 0.99+ |
first platform | QUANTITY | 0.99+ |
Matt Hicks | PERSON | 0.99+ |
today | DATE | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
OpenShift | TITLE | 0.99+ |
last week | DATE | 0.99+ |
Namik Hrle, IBM | IBM Think 2018
>> Narrator: Live, from Las Vegas, it's theCUBE, covering IBM Think 2018, brought to you by IBM. >> Welcome back to theCUBE. We are live on day one of the inaugural IBM Think 2018 event. I'm Lisa Martin with Dave Vellante, and we are in sunny Vegas at the Mandalay Bay, excited to welcome to theCUBE one of the IBM Fellows, Namik Hrle. Welcome to theCUBE. >> Thank you so much. >> So you are not only an IBM Fellow, you're also the IBM analytics technical leadership team chair. Tell us about your role on that technical leadership team. What are some of the things that you're helping to drive? And maybe even give us some of the customer feedback that you're helping to incorporate into IBM's technical direction. >> Okay, so basically, the technical leadership team is a group of top technical leaders in the IBM analytics group, and we are chartered with evaluating new technologies, providing guidance to our business leaders on what to invest in and what to divest, listening to our customer requirements, listening to how the customers are actually using the technology, and making sure that IBM is there, in a timely way, when it's needed. And a very important element of the technical leadership team is also to promote innovation, innovative activities, particularly kind of grassroots innovative activities. Meaning helping our technical leaders across analytics, encouraging them to come up with innovations, to present the ideas, to follow up on those, to potentially turn them into projects, and so on. So that's it. >> And guide them, or just sort of send them off to discover? >> As a matter of fact, we should probably mostly be a sounding board. So not necessarily that this is coming from the top down, but trying to encourage them, trying to incite them, trying to make the innovative activity interesting, and also, at the same time, making sure that they see that there's something coming out of it.
It's not just that they come up with ideas and then nothing happens; we also try to turn them into reality by working with our business developers, who, by the way, control the resources, right? So, in order to do something like that. >> How much of it is guiding folks who want to go down a certain path that maybe you know has been attempted before in that particular way, so you know it's probably better to go elsewhere? Or do you let them go and make the same mistake? Is there any of that? Like, don't go down that path, don't go through that door. >> Well, as you can imagine, there's a human temptation to say, "Well, you know, I've already tried that, already done that." But we are really trying not to do that. >> Yeah. >> We are trying not to do that, trying to have an open mind, because in this industry there's always a new set of opportunities and new conditions. And even with our current topic, like fast data, I believe that many of these things have been around already; we just didn't know how to actually help, how to support something like that. But now, with the new set of knowledge, we can actually do that. >> So, let's get into fast data. I mean, it wasn't too long ago, we just asked an earlier guest what inning we are at in IoT. He said the third inning. It wasn't long ago we were in the third inning of Hadoop, and everything was batched, and then all of a sudden big data changed, everything became streaming, real-time, fast data. What do you mean by fast data? What is it? What's the state of fast data inside IBM? >> Well, thank you for that question, because when I was preparing for this interview, of course, I wanted first to make sure that we are all on the same page in terms of what fast data actually means, right? Because our industry, of course, is full of hype and misunderstanding and everything else.
And like many other things and concepts, it's not a fundamentally new thing. It's just that the current state of technology, and enhancements in the technology, allow us to do something that we couldn't do before. So, the requirements behind the fast data value proposition were always there, but right now technology allows us actually to derive real-time insight out of the data, irrespective of the data volume, variety, velocity. And when I just said those three V's, it sounds like big data, right? >> Dave: Yeah. >> And, as a matter of fact, there is a pretty large intersection with big data, but there's a huge difference. And the huge difference is that big data is typically associated with data at rest, while fast data is really associated with data in motion. So the examples of that particular pattern are all over the place. I mean, you can think of clickstreams. You can think about ticker financial data, right? You can think about manufacturing IoT data, sensors, logs. And the spectrum of industries that take advantage of that is all over the place: from financial and retail, from manufacturing, from utilities, all the way to advertising, to agriculture, and everything else. So, for example, very often when I talk about fast data, people jump immediately to, let's say, you know, this is YouTube streaming, or this is Facebook or Twitter kinds of postings, and everything else. While this is true, and certainly there are business cases built on something like that, what interests me more are the huge cases, like for example Airbus, right? With 10,000 sensors in each of the wings, producing 7 terabytes of information per day, which, by the way, cannot just be dumped somewhere like before, for some batch processing later. You actually have to process that data right there, when it happens, that millisecond, because, you know, the ramifications are pretty, pretty serious, right?
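That contrast between batch processing of data at rest and per-event processing of data in motion can be made concrete in a few lines. The following is an illustrative toy in plain Python, with invented names, not IBM code: both paths compute the same average, but only the streaming path has an answer available the moment each event arrives.

```python
# "Data at rest": collect everything first, then run one batch computation.
# "Data in motion": update the answer incrementally as each event arrives.

def batch_mean(readings):
    """Batch style: requires the full data set before any answer exists."""
    return sum(readings) / len(readings)

class StreamingMean:
    """Streaming style: O(1) state, answer always current."""
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value):
        self.count += 1
        self.total += value
        return self.total / self.count  # insight available per event

sensor_feed = [7.1, 7.3, 6.9, 7.0, 7.2]   # hypothetical sensor readings

stream = StreamingMean()
running = [stream.update(v) for v in sensor_feed]

# After the last event, the streaming answer matches the batch answer.
assert abs(running[-1] - batch_mean(sensor_feed)) < 1e-9
```

The constant-size state of the streaming version is the property stream engines rely on to keep up with a fire hose of events at wire speed.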
Or take, for example, the opportunity in the utility industry, in power and electricity, where the distributors and manufacturers really entice people to put smart metering in place. So they can basically measure the consumption of electricity on an hourly basis. And instead of getting one yearly bill, you know the consumption all the time, you can react to spikes, avoid blackouts, and come up with a totally new set of business models, in terms of, you know, offering special incentives for spending or not spending, adding additional providers. I mean, a fantastic set of use cases. I believe Gartner said that by 2020, like 80% of businesses will have some sort of situational-awareness application, which is basically a way of using this kind of event-driven messaging capability. And I agree with that 100%. >> So fast data is data that is analyzed in real time. >> Namik: Right. >> Such that you can affect an outcome. >> Namik: Right. >> Before, what, before something bad happens? Before you lose the buyer? Before-- >> All over the place. You know, before fraud happens in financials, right? Before a manufacturing line breaks, right? Before, you know, something happens with an airplane. So there are many, many, many examples of something like that, right? And when we talk about it, what we need to understand is that even the technologies needed in order to deliver fast data value propositions are kind of known technologies. I mean, what do you really need? You need very scalable pub/sub messaging systems, like Kafka, for example, right, in order to acquire the data. Then you need a system which is typically a streaming system, streams, and you have tons of offerings in the open source space, like, you know, Apache Spark streaming, you have Storm, you have Flink, the Apache Flink products, as well as our own IBM Streams.
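The acquisition and streaming stages just listed can be miniaturized into one file. The following is a conceptual sketch, not the Kafka or IBM Streams APIs: a standard-library queue stands in for the pub/sub topic, and a simple threshold check stands in for the analytic model applied on the wire.

```python
import queue

bus = queue.Queue()          # stand-in for a Kafka topic

def publish(event):
    """Acquisition stage: producers push events onto the bus."""
    bus.put(event)

def stream_processor(threshold):
    """Streaming stage: score each event as it arrives, with no batch delay."""
    alerts = []
    while not bus.empty():
        event = bus.get()
        if event["value"] > threshold:   # stand-in for a real analytic model
            alerts.append(event["sensor"])
    return alerts

# Hypothetical smart-meter readings: two of them spike past the threshold.
for i, v in enumerate([10, 12, 95, 11, 87]):
    publish({"sensor": f"s{i}", "value": v})

alerts = stream_processor(threshold=50)
print(alerts)   # → ['s2', 's4']
```

In a real deployment, the consumer loop would run continuously against a durable, partitioned topic rather than draining an in-process queue, but the shape of the pipeline, acquire, score on the wire, emit, is the same.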
That is typically for really enterprise-grade quality of service delivery. And then, very importantly, and this is something I hope we will have time to talk about today, you also need to be able to basically absorb that data. Not only do the analytics on the fly, but also store that data, and combine the analytics with the historical data. And typically for that, if you read what people are suggesting, you have lots of open source technology that can do it, like some HDFS-based systems, and so on. But what I'm saying is, all of them come with this kind of complexity: yes, you can land the data somewhere, but then you need to put it somewhere else in order to do the analytics. And basically, you are introducing latency between data production and data consumption. And this is why I believe that a technology like Db2 Event Store, which we announced just yesterday, is actually something that will become a very interesting, very powerful part of the whole fast data story. >> So, let's talk about that a little bit more. Fast data as a term, and thank you for clarifying what it means to IBM, isn't new, but to your point, as technology evolves, it's opening up new opportunities, much like the innovation lab that you have within IBM. There might be, as Dave was asking, ideas that people bring that aren't new, maybe they were tried before, but maybe now there are new enabling technologies. Tell us, how is IBM enabling organizations, whether they're fast-paced innovative startups or enterprise organizations, to avoid that sort of latency and actually achieve the business benefits that fast data can deliver, with the technologies that you're announcing at the show? >> Right, right. So again, let's go through the stages that I said every fast data technology and project and solution should really have.
As I said, first of all you need to have some pub/sub messaging system, and I believe that systems like Kafka are absolutely enough for something like that. >> Dave: Sure. >> Then you need a system that's going to take this data off that fire hose coming from Kafka, which is streaming technology. And as I said, there are lots of technologies in open source, but IBM Streams as a technology also has hundreds of different models, whether predictive analytics, prescriptive analytics, machine learning, basically kind of AI elements, text-to-speech, that you can apply on the data, on the wire, at wire speed. So you need that kind of enterprise quality of service in terms of applying analytics on the data that is streaming. And then we come to Db2 Event Store, basically a repository for that fire hose of data, where you can put this data in a format in which you can basically, immediately, without any latency between data creation and data consumption, do the analytics on it. That's what we did with our Db2 Event Store. Not only can we ingest literally millions and millions of events per second, but we can also store that in a basically open format, which is tremendous value. Remember, any database system in the past basically stored data in its own format, so you had to use the system that created the data in order to consume that data. >> Dave: Sure.
>> And it does that how? Through a set of API's that allows it to be read? >> So, basically, when the data is coming off the hose, you know, off the streams or something like that, what event store actually does, it puts the data, it's basically in memory database right? It puts the data in memory, >> Dave: Something else that's been around forever. >> Exactly, something else yeah. We just have more of it, right? (laughing) And guess what? If it is in memory, it's going to be faster than if it is on disk. What a surprise. >> Yeah. (chuckling) >> So, of course, when put the data into the memory, and immediately makes it basically available for querying, if you need this data that just came in. But then, kind of asynchronously, offloads the data into basically Apache Parquet format. Into the columnar store. Basically allowing very powerful analytical capabilities immediately on the data. And again, if you like, you can go to the event store to query that data, but you don't have to. You can basically use any kind of tool, like Spark, like Titan or Anaconda Stack, to go after the data and do the analytics on it, to build the models on it, and so on. >> And that asynchronous transformation is fast? >> Asynchronous transformation is such that it gives you this data, which we now call historical data, basically in a minute. >> Dave: Okay. >> So it's kind of like minutes. >> So reasonable low latency. >> But what's very important to understand that actually the union of that data and the data that is in the memory on this one, we by the way, make transparent, can give you 100% what we call kind of almost transactional consistency of your queries against the data that is kind of coming in. So, it's really now a hybrid kind of store, of the memory, in the memory, very fast log, because also logging this data in order for to have it for high visibility across multiple things because this is highly scalable, I mean, it's highly what we call web scale kind of data base. 
And then the Parquet format for open storage of the data for historical analysis. >> In our last 30 seconds or so, give us some examples. I know this was just announced, but maybe a genericized customer example, in terms of the business benefits that one of the beta customers is achieving leveraging this technology. >> So, in order for customers really to take advantage of all that, as I said, I would suggest customers first of all understand where these applications actually make sense for them. Where is the data coming in through fire hoses, not through the traditional transactional capabilities? And then apply these technologies, as I just said: acquisition of the data, streaming analytics on the wire, and then Db2 Event Store as the store for the data. For all that, just to tell you, you also need a kind of messaging runtime, typically products like, for example, the Akka technology, and that's why we have also entered a partnership with Lightbend, in order to deliver the entire experience for customers that want to build applications that run on fast data. >> So maybe enabling customers to become more proactive, maybe predictive, eventually? >> To enable customers to take advantage of this tremendously business-relevant data, whether it's clickstream data, financial data, or IoT data, and to combine it with the assets that they already have, coming from transactions. Well, that's a powerful combination. They can build totally brand new business models, as well as enhance existing ones, to something that is going to, you know, improve productivity, for example, or improve customer satisfaction, or grow the customer segments, and so on and so forth. >> Well, Namik, thank you so much for coming on theCUBE and sharing the insight on the announcements.
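The ingest pattern Hrle describes, a hot in-memory landing zone with asynchronous offload to a columnar format and queries over the union of both, can be sketched conceptually. The following is hypothetical illustration code, not the Db2 Event Store API; names like `TinyEventStore` are invented.

```python
class TinyEventStore:
    def __init__(self, flush_size=3):
        self.memory = []       # hot, row-oriented landing zone
        self.columnar = {}     # cold store, one list per column (Parquet's role)
        self.flush_size = flush_size

    def ingest(self, event):
        self.memory.append(event)            # immediately queryable
        if len(self.memory) >= self.flush_size:
            self._offload()                  # asynchronous in the real system

    def _offload(self):
        for event in self.memory:            # rewrite rows as columns
            for key, value in event.items():
                self.columnar.setdefault(key, []).append(value)
        self.memory = []

    def scan(self, key):
        """Query the union of cold columnar data and hot in-memory rows."""
        cold = self.columnar.get(key, [])
        hot = [e[key] for e in self.memory if key in e]
        return cold + hot

store = TinyEventStore(flush_size=3)
for reading in [5, 9, 4, 7]:                 # hypothetical meter readings
    store.ingest({"watts": reading})

print(store.scan("watts"))   # → [5, 9, 4, 7]: 3 offloaded + 1 still in memory
```

Because `scan` unions both tiers, there is no gap between data production and data consumption: an event is visible to queries the instant it lands in memory, and later migrates to the columnar tier for cheaper analytical scans.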
It's pretty cool, Dave, I'm sittin' between you and an IBM Fellow. >> Yeah, that's uh-- >> It's pretty good for a Monday. It's Monday, isn't it? >> Thank you so much. >> Not easy becoming an IBM Fellow, so congratulations on that. >> Thank you so much. >> Lisa: And thanks, again. >> Thank you for having me. >> Lisa: Absolutely, our pleasure. For Dave Vellante, I'm Lisa Martin. We are live at Mandalay Bay in Las Vegas, a nice, sunny day, on our first day of three days of coverage at IBM Think 2018. Check out our CUBE conversations on thecube.net. Head over to siliconangle.com to find our articles on everything we've done so far at this event and other events, and what we'll be doing for the next few days. Stick around, Dave and I are going to be right back with our next guest after a short break. (innovative music)
SUMMARY :
covering IBM Think 2018, brought to you by IBM. We are live on day one of the inaugural What are some of the things that you're helping to drive? providing the guidance to our business leaders So, in order to do something like that. before in that particular way, so you know what Well, as you can imagine, it's human attempt to say, and new conditions, and even if you are going to talk So, let's get into the fast data. and enhancements in the technology, allow us to do something of that are all over the place. So it's data, fast data is data that is analyzed Such that you can affect an outcome that yes, you can have land data somewhere, that you have within IBM, there might be, and I believe that the systems like Kafka off that fire hose coming from the cuff, it ingest that data, puts it into the format If it is in memory, it's going to be faster to query that data, but you don't have to. it gives you this data, which we now call that is in the memory on this one, we by the way, that one of the beta customers Acquisition of the data, streaming on the wire, to something that is going to, you know, and sharing the insight of the announcements. It's pretty good for a Monday. so congratulations on that. for the next few days.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Namik | PERSON | 0.99+ |
Namik Hrle | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
millions | QUANTITY | 0.99+ |
Lisa | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
80% | QUANTITY | 0.99+ |
10,000 sensors | QUANTITY | 0.99+ |
Mandalay Bay | LOCATION | 0.99+ |
Monday | DATE | 0.99+ |
Lightbend | ORGANIZATION | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
2020 | DATE | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
three days | QUANTITY | 0.99+ |
third inning | QUANTITY | 0.99+ |
Apache | ORGANIZATION | 0.99+ |
thecube.net | OTHER | 0.99+ |
first | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
today | DATE | 0.99+ |
hundreds | QUANTITY | 0.98+ |
IBM Think 2018 | EVENT | 0.98+ |
ORGANIZATION | 0.98+ | |
Kafka | TITLE | 0.98+ |
Airbus | ORGANIZATION | 0.98+ |
Spark | TITLE | 0.98+ |
Namik | ORGANIZATION | 0.97+ |
YouTube | ORGANIZATION | 0.97+ |
ORGANIZATION | 0.96+ | |
first day | QUANTITY | 0.96+ |
Spark Analytics | TITLE | 0.95+ |
Anaconda Stack | TITLE | 0.95+ |
DB2 | TITLE | 0.95+ |
Titan | TITLE | 0.94+ |
millions of events per second | QUANTITY | 0.94+ |
three V | QUANTITY | 0.92+ |
a minute | QUANTITY | 0.92+ |
millions events per second | QUANTITY | 0.89+ |
day one | QUANTITY | 0.88+ |
Stream | COMMERCIAL_ITEM | 0.86+ |
each | QUANTITY | 0.84+ |
7 terabytes of information | QUANTITY | 0.75+ |
one | QUANTITY | 0.74+ |
Flink | TITLE | 0.71+ |
DB2 | EVENT | 0.66+ |
Storm | TITLE | 0.65+ |
Akka | ORGANIZATION | 0.64+ |
theCUBE | ORGANIZATION | 0.64+ |
Action Item with Peter Burris
>> Hi, I'm Peter Burris. Welcome to Wikibon's Action Item. On Action Item, every week I assemble the core of the Wikibon research team, here in our theCUBE Palo Alto studios as well as remotely, to discuss a seminal topic facing the technology industry, and business overall, as we navigate this complex transition of digital business. Here in the studio with me this week, I have David Floyer. David, welcome. >> Thank you. >> And then remotely, we have George Gilbert, Neil Raden, Jim Kobielus, and Ralph Finos. Guys, thank you very much for joining today. >> Hi, how are you doing? >> Great to be here. >> This week, we're going to discuss something that's a challenge to talk about in a small format, but we're going to do our best, and that is: given that the industry is maneuvering through this significant transformation from a product orientation to a services orientation, what's that going to mean for business models? Now, this is not a small question, because there are some very, very big players that the technology industry has been extremely dependent upon to drive forward invention, and innovation, and new ideas, and customers, that are entirely dependent upon this ongoing stream of product revenue. On the other hand, we've got companies like AWS, and others that are much more dependent upon the notion of services revenue, where the delivery of the value is in a continuous service orientation. And we include most of the SaaS players in that as well, like Salesforce, etc. So how are those crucial companies, that have been so central to the development of the technology industry, and still are essential to its future, going to navigate this transition? Similarly, how are the services companies, for those circumstances in which the customer does want a private asset that they can utilize as a basis for performing their core business, how are they going to introduce a product orientation? What's that mix, what's that match, going to be? And that's what we're going to talk about today. So David, I've kind of laid it out, but really, where are we in this notion of product to service and some of these business model changes?
What's that mix, what's that match going to be? And that's what we're going to talk about today. So David, I've kind of laid it out, but really, where are we in this notion of product to service in some of these business model changes? >> It's an early stage, but it's very, very profound changes going on. We can see it from the amount of business of the cloud business supplies are providing. You can see that Amazon, Google, IBM, and Microsoft Azure, all of those are putting very large resources into creating services to be provided to the business itself. But equally, we are aware that services themselves need to be on premise as well, so we're seeing the movement of true private cloud, for example, which is going to be provided as a service as well, so if we take some examples, like for example, Oracle, the customer, they're a cloud customer, they're providing exactly the same service on premise as they provide in the cloud. >> And by service, you mean in how the customer utilizes the technologies. >> Correct. >> The asset arrangement may be very different, but the proposition of what the customer gets out of the assets are essentially the same. >> Yes, the previous model was, we provide you with a product, you buy a number of those products, you put them together, you service it, you look after it. The new model, here coming in with TPC, with the single throat to choke, is that the vendor will look after the maintenance of everything, putting in new releases, bringing things up to date, and they will have a smaller set of things that they will support, and as a result, it's win-win. It's win for the customer, because he's costs are lower, and he can concentrate on differentiated services. >> And secure and privatize his assets. >> Right, and the vendor wins because they have economies of scale, they can provide it at a much lower cost as well. 
And even more important to both sides is that the time to value of new releases is much, much quicker, and the time to fix security exposures, and a whole number of other things, improve with this new model. >> So Jim, when we think about this notion of a services orientation, ultimately it starts to change the relationships between the customer and the vendor. And the consequence of that is, not surprisingly, that a number of different considerations, whether they be metrics or other elements, become more important. Specifically, we start thinking about the experience that the customer has using something. Walk us through this kind of transition to an experience-oriented approach to conceiving of whether or not the business model is successful. >> Right. Your customer will now perceive the experience in the context of an entire engagement that is multi-channel, multi-touchpoint, multi-device, multi-application, and so forth, where they're expecting the same experience, the same value, the same repeatable package of goodies, whatever it is they get from you, regardless of the channel through which you're touching them or they're touching you. That channel may be provided through a private, on-premises implementation of your stack, or through a public cloud implementation of your capability, or, most likely, through all of the above, combined into a hybrid true private cloud. Regardless of the packaging, the delivery of that value in the context of the engagement is expected by the customer to be increasingly self-service, predictable, managed by the solution provider, and guaranteed with a fast, continuous release and update cycle. So, fundamentally, it's an experience economy, because the customer has many other options to go to: providers that can give them as good or better an experience in terms of the life cycle of things that you're doing for them.
So bottom line, the whole notion of a TPC really gets to that notion that the experience is the most important thing, the cloud experience, which can be delivered on-prem, or can be delivered in the public environment. And that's really the new world, with multi-cloud as that master matrix of the seamless cross-channel experience. >> We like to think of the notion of a business model as worrying about three fundamental questions. How are you going to create value? How are you going to deliver value? And how are you going to capture value? Where the creation is: how shared is it going to be, is it going to be a network of providers, are you going to have to work with OEMs? The delivery: is it going to be online, is it going to be on-prem? Those types of questions. But this notion of value capture is a key feature, David, of how this is changing. And George, I want to ask you a question. The historical norm is that value capture took place in the form of, I give you a product, you give me cash. But when we start moving to a services orientation, where the service is perhaps being operated and delivered by the supplier, it introduces softer types of exchange mechanisms, like, how are you going to use my data? Are you going to improve the fidelity of the system by pooling me with a lot of other customers? Am I losing my differentiation? My understanding of customers, is that being appropriated and munged with others' to create models? Take us through this soft value capture challenge that a service provider has, and what specifically, I guess actually, the real challenge that the customer has as they try to privatize their assets, George. >> So, it's a big question that you're asking, and let me use an example to help make the explanation concrete. So now we're not just selling software, but we might be selling analytic data services.
Let's say a vendor like IBM works with Airbus to build data services where the aircraft that Airbus sells to its airline customers provide feedback data that IBM has access to, to improve its models about how the aircraft work, and that data would also go back to Airbus. Now, Airbus then can use that data service to help its customers with prescriptions about how to operate better on certain routes, how to do maintenance better, not just predictive maintenance, but how to do it more just in time, with fewer huge manuals. The key here is that since it's a data service that's being embedded with the product, multiple vendors can benefit from that data service. And the customer of the traditional software company, so in this case, Airbus being the customer of IBM, has to negotiate to make sure its IP is protected to some extent, but at the same time, they want IBM to continue working with that data feedback, because it makes IBM's models richer, and the models that Airbus gets access to richer over time. >> But presumably that has to be factored into the contractual obligations that both parties enter into, to make sure that those soft dollars are properly compensated in the agreements. That's not something that we're seeing a lot in the industry, but the model of how we work closely with our clients and our customers is an important one. And it's likely to change the way that IT thinks about itself as a provider of services. Neil, what kinds of behaviors are IT likely to start exhibiting as it finds itself, if not competing, at least trying to mimic the classes of behaviors that we're seeing from service providers inside their own businesses? >> Yeah, well, IT organizations grew over the last, I dunno, 50 years or so, organically, and it was actually amazing how similar their habits, processes, and ways of doing things were across industries, and locations, and so forth.
But the problem was that everything they had to deal with, whether it was the computers, or the storage, or the networks, and so forth, was all really expensive. So they were always in a process of managing from scarcity. The business wanted more and more from them, and they had lower and lower budgets, because they had to maintain what they had, so it created a lot of tension between IT and the rest of the organization, and because of that, whenever a conversation happened between other groups within the business and IT, IT always seemed to have the last word, no, or okay. Whatever the decision was, it was really IT's. And what I see happening here is, when the IT business becomes less insular, I think a lot of this tension between IT and the rest of the organization will start to dissipate. And that's what I'm hoping will happen, because IT started this concept of IT vs the business, but if you went out in an organization and asked 100 people what they did, not one of them would say, "I'm the business," right? They have a function, but IT created this us vs them thing, to protect themselves, and I think that once they're able to utilize external services for hardware, for software, for whatever else they have to do, they become more like a commercial operation, like supply-side, or procurement, or something, managing those relationships, and getting the services that they're paying for, and I think ultimately that could really help organizations, by breaking down those walls in IT. >> So it used to be that an IT decision to make an investment would have uncertain returns, but certain costs, and there are multiple reasons why those returns would be uncertain, or those benefits would be uncertain. Usually it was because some other function would see the benefits under their umbrella, you know, marketing might see increased productivity, or finance would see increased productivity as a consequence of those investments, but the costs always ended up in IT.
And that's one of the reasons why we yet find ourselves in this nasty cycle of constantly trying to push costs down, because the benefits always showed up somewhere else, while the costs always showed up inside IT. But it does raise this question ultimately of, is this notion of an ongoing services orientation just another way of saying we're letting lock-in back in the door in a big way? Because we're now moving from a sourcing relationship that's procurement oriented, buy it, spend as little money as possible, get value out of it, to a services orientation, which is effectively, move responsibility for this part of the function off into some other service provider, perpetually. And that's going to have a significant implication, ultimately, on the question of whether or not we buy services, default to services. Ralph, what do you think, where are businesses going to end up on this, are we just going to see everything end up being a set of services, or is there going to be some model that we might use, and I'll ask the team this, some model that we might use to conceive when it should be a purchase, and when it should be a service? What do you think, Ralph? >> Yeah, I think the industry's gravitating towards a service model, and I think it's a function of differentiation. You know, if you're an enterprise, and you're running a hundred different workloads, and 15 of them are things that really don't differentiate you from your competition, or create value that's differentiable in some kind of way, it doesn't make any sense to own that kind of functionality. And I think, in the long run, more and more aspects, or a higher percentage of workloads, are going to be in that category. There will always be differentiating workloads, there will always be workloads requiring unique kinds of security, especially around transactions. But on net, the slow march of services makes a lot of sense to me. >> What do you think, guys?
Do we agree with Ralph, number one? And number two, what about those exceptions? Is there a framework that we can start to utilize to start helping folks imagine what are the exceptions to that rule, what do you think, David? >> Sure, I think that there are circumstances when... >> Well first, do we generally agree with the march? >> Absolutely, absolutely. >> I agree too. >> Yes, fully agree that more and more services are going to be purchased, and a smaller percentage of the IT budget from an enterprise will go into specific purchases of assets. But there are some circumstances where you will want to make sure that you have those assets on premise, that there is no other call on those assets, either from the courts, or from a difference of priority between what you need and what a service provider needs. So in both those circumstances, they may well choose to purchase it, or to have the asset on premise so that it's clearly theirs, and clearly their priority of when to use it, and how to use it. So yes, clearly, an example might be, for example, if you are a bank, and you need to guarantee that all of that information is yours, because you need to know what assets are owned by whom, and if you give it to a service provider, there are circumstances where there could be a legal claim on that service provider, which would mean that you'd essentially go out of business. So there are very clear examples of where that could happen, but in general, I agree. There's one other thing I'd like to add to this conversation. The interesting thing from an IT point of view, an enterprise IT, is that you'll have fewer people to do business with; you'll be buying a package of services. So that means many of the traditional companies that you did business with, both software and hardware, will no longer have you as a customer, and they will have to change their business models to deal with this.
So for example, Permabit has become an OEM supplier of data management capabilities. And Kaminario has just announced that it's becoming a software vendor. >> Nutanix. >> Nutanix is becoming a software vendor, and is either allowing other people to take the single throat to choke, or putting together particular packages where it will be the single throat to choke. >> Even NetApp, which is a pretty consequential business, and has been around for a long time, is moving in this direction. >> Yes, a small movement in that direction, but I think a key question for many of these vendors is, do I become an OEM supplier to the... >> Customer owner. >> The customer owner. Or what's my business model going to be? Should I become the OEM supplier, or should I try and market something directly in some sort of way to the vendors? >> Now this is a very important point, David, because one of the reasons, for a long time, why the OEM model ran into some challenges is precisely over customer ownership. But when data from operations of the product, or of the service, is capable of flowing not only to the customer engagement originator, but also to the OEM supplier, the OEM company has pretty significant visibility, ultimately, into what is going on with their product. And they can use that to continuously improve their product, while at the same time reducing some of the costs associated with engagement. So the flowing of data, the whole notion of digital business, allows a single stream of operational data to go to multiple parties, and as a consequence, all those parties now have viable business models, if they do it right. >> Yeah, absolutely. And Kaminario will be a case in point.
They need metadata about the whole system, as a whole, to help them know how to apply the best patches to their piece of software, and the same is true for other suppliers of software, Permabit, or whoever those are, and it's the responsibility of that owner, or the customer, to make sure that all of those people can work in that OEM environment effectively, and improve their product as well. >> Yeah, so great conversation, guys. This is a very, very rich and fertile domain, and I think it's one that we're going to come back to, if not directly, at least in talking about how different vendors are doing things, or how customers, or IT organizations, have to adjust their behaviors to move from a procurement to a strategic sourcing set of relationships, etc. But what I'd like to do now, as we try to do every week, is get to the Action Item round, and I'm going to ask each of you guys to give me, give our audience, give our users, the action item: what do they do differently on next Monday as a consequence of this conversation? And George Gilbert, I'm going to start with you. George, action item. >> Okay, so mine is really an extension of what we were talking about when I was raising my example, which is your OEM supplier, let's say IBM, or a company we just talked to recently, C3 IoT, is building essentially what are application data services that would accompany the products that you, who used to be just a customer, are selling, to a supply chain master, say. So, to boil that down: there is a model of your product or service, which could be the digital twin, and as your vendor keeps improving it while you offer it to your customers, you need to make sure that there is a version that is backward compatible with what you are using. So there's the IP protection part, but then there's also the compatibility protection part.
>> Alright, so George, your action item would be, don't focus narrowly on the dollars being spent; factor in those soft dollars as well, both from a value perspective, as well as an ongoing operational compatibility perspective. Alright, Jim Kobielus, action item. >> The action item's for IT professionals to take a quick inventory of which of your assets in computing you should be outsourcing to the cloud as services; it's almost everything. And also, to inventory which of your assets must remain in the form of hard, discrete, tangible goods or products, and my contention is, I would argue that the edge, the OT, the operational technology, the IoT, the sensors and actuators that are embedded in your machine tools and everything else that you're running the business on, are the last bastion of products in this new marketplace, where everything else becomes a service. Because the actual physical devices upon which you've built your OT are essentially going to remain hard tangible products forevermore, of necessity, and you'll probably want to own those, because those are the very physical fabric of your operation. >> So Jim, your action item is, start factoring the edge into your consideration of the arrangements of your assets, as you think about products vs services. >> Yes. >> Neil Raden, action item. >> Well, I want to draw a distinction between actually, sorry, between actually, ah damn, sorry. (laughs) >> Jim: I like your fan, Neil. >> Peter: Action item, get your monitor right. >> You know, I want to draw the distinction between actually moving to a service, as opposed to just doing something that's a funding operation. Suppose we have 500 Oracle applications in our company running on 35 or 40 Oracle instances, and we have this whole army of Oracle DBAs, and programmers, and instance tuners, and we say, well, we're going to give all the servers to the Salvation Army, and we're going to move everything to the Oracle cloud.
We haven't really changed anything in the way the IT organization works. So if we're really looking for change in culture and operation, and everything else, we have to make sure we're thinking about how we're changing and rethinking the way things get done and managed in the organization. And I think just moving to the cloud is very often just a budgetary thing. >> So your action item would be, as you go through this process, you're going to re-institutionalize the way you work, so get ready to do it. Ralph Finos, action item. >> Yeah, I think if you're a vendor, if you're an IT industry vendor, you kind of want to begin to look a lot like, say, a Honda or Toyota, in terms of selling the hardware to get the service, the long-term relationship, and the lock-in. I think that's really where the hardware vendors, as one group of providers, are going to want to go. And I think, as a user and an enterprise, you're going to want to drive your vendors in that direction. >> So your action item would be, for a user anyway, move from a procurement orientation that's focused on cost, to a vendor management orientation that's focused on co-development, co-evolution of the value that's being delivered by the service. David Floyer, action item. >> So my action item is for vendors, a whole number of smaller vendors. They have to decide whether they're going to invest in the single most expensive thing that they can do, which is an enterprise sales force, for direct selling of their products to enterprise IT, and/or whether they're going to take an OEM-type model, and provide services to a subset, for example, to focus on the cloud service providers, which Kaminario is doing, or focus on selling indirectly to the vendors who own the relationship with the enterprise. So that, to me, is a key decision, a very important decision, as the number of vendors will decline over the next five years.
>> Certainly, within the visibility we have right now. So your action item is, as a small vendor, choose whose sales force you're going to use, yours or somebody else's. >> Correct. >> Alright. So great conversation, guys. Let me kind of summarize this a bit. This week, we talked about the evolving business models in the industry, and the basic notion, or the reason why this has become such an important consideration, is because we're moving from an era where the types of applications that we were building were entirely being used internally, and were therefore effectively entirely private, vs increasingly trying to extend even those high-volume transaction processing applications into other types of applications that deliver things out to customers. So the consequence of the move to greater integration, greater external delivery of things within the business, has catalyzed this movement to the cloud. And as a consequence, this significant reformation, from a product to a services orientation, is gripping the industry, and that's going to have significant implications for how both buyers and users of technology, and sellers and providers of technology, are going to behave. We believe that the fundamental question is going to come down to, what process are you going to use to create value, with partnerships, or by going it alone? How are you going to deliver that value, through an OEM sales force, through a network of providers? And how are you going to capture value out of that process, through money, through capturing of data, or more of an advertising model? These are not just questions that feature in the consumer world; they're questions that feature significantly in the B2B world as well. Over the next few years, we expect to see a number of changes start to manifest themselves. We expect to see, for example, a greater drive towards the experience of the customer as a dominant consideration.
And today, it's the cloud experience that's driving many of these changes. Can we get the cloud experience both in the public cloud and on premise, for example? Secondly, our expectation is that we're going to see a lot of emphasis on how soft exchanges of value take place, and how we privatize those exchanges. Hard dollars are always going to flow back and forth, even if they take on a subscription, as opposed to a purchase, orientation, but what about the data that comes out of the operations? Who owns that, and who gets to lay claim to future revenue streams as a consequence of having that data? Similarly, we expect to see that we will have a new model that IT can use to start focusing its efforts on more of a business orientation, and therefore not treating IT as the managers of hardware assets, but rather as managers of business services that have to remain private to the business. And then finally, our expectation is that this march is going to continue. There will be a significant and ongoing drive to increase the role that a services business model plays in how value is delivered, and how value is captured, partly because of the increasingly dominant role that data's playing as an asset in digital business. But we do believe that there are some concrete formulas and frameworks that can be applied to best understand how to arrange those assets, and how to institutionalize and work around those assets, and that's a key feature of how we're working with our customers today. Alright, once again, team, thank you very much for this week's Action Item. From theCUBE studios in beautiful Palo Alto, I want to thank David Floyer, George Gilbert, Jim Kobielus, Neil Raden, and Ralph Finos. This has been Action Item.
Bill Magro, Intel | AWS re:Invent
>> Announcer: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2017, presented by AWS, Intel, and our ecosystem of partners. >> Welcome back everyone, we're here live in Las Vegas with 45,000 tech industry folks and customers at Amazon re:Invent 2017. This is theCUBE's exclusive coverage. I'm John Furrier, with my co-host Justin Warren for this segment. Our next guest, Bill Magro, is the Chief Technologist for Intel covering HPC, high performance computing. Bill, welcome to theCUBE. >> Thank you. >> John: Thanks for coming on. You guys, your booth's behind us, I don't know if they can see it in the wide shot, but Intel is really taking advantage of the, I don't want to say "Intel Inside" the Cloud, 'cause that's really what you guys are doing, but you've got so much compute, this is your wheelhouse. Compute is what Intel is. >> Bill: Right. >> Andy Jassy at AWS is talking with their customers, they want more compute, edge of the network, so HPC, high performance computing, has been around for a while. What's the state of the art, and how should people think about HPC versus the Cloud, are they the same, what's the relationship? >> Intel actually thinks of HPC, or high performance computing, more in terms of the activity and the workloads than the infrastructure that it runs on. So very early in the days of Cloud computing, there were a lot of people who said that the Cloud was kind of the opposite of HPC and therefore they could never go together. But we think of Cloud as a delivery vehicle, a way to get access to compute, storage, and networking, and HPC is what you're doing. And so then, if you think about HPC as kind of a range of workloads, you can start to think about which ones are a good fit for the Cloud, and which ones aren't. So we talk a little bit about high performance computing and tailored infrastructure for the most extreme cases of HPC. That's where you see the biggest differences with Cloud, 'cause they're at opposite ends of the spectrum.
But you see, holistically, the Cloud is interplaying with HPC. >> Yeah. >> They're not mutually exclusive. >> Absolutely, we see Cloud as a way to deliver HPC capabilities. So if you think of the most demanding HPC problems, the ones that are used in national security, that are used to design commercial airplanes, and so on, those are some of the hardest problems. Predicting climate change, predicting the weather, the paths of hurricanes, those are what we call grand challenge problems. Those are not running in the Cloud. Those are running on dedicated, tailored infrastructure built for high performance computing at that extreme. And those systems have a lot of characteristics such as very high performance networks, different from ethernet, custom topologies, and they are designed with software to really minimize variation, because it's one large problem that has to move forward. The Cloud is kinda the opposite in a sense. It started as taking a large amount of resources and making it possible to carve them up, right. It's the opposite of aggregating resources. And so that's where a lot of the early thought of Cloud and HPC being at odds with each other came from. >> It seems to be a dream scenario, because I mean, in the old days, in the '80s and '90s when I was breaking into the business, if you were a database guy or a compute guy, you were a specialist; it was a high-end kind of computing. Moore's Law, certainly Intel, you guys took advantage of it. But now, you see so much, it's cool to do more compute. So like, it's been democratized. Databases and compute are certainly in all the conversations, for everybody, not just the technologists. >> Right, and that's where Cloud fits in for HPC. So if you think of HPC in terms of the characteristics of the workload, it's something that's really demanding computationally. The product of the computation is like an intellectual insight.
You can design a better airplane wing, a safer car, you can figure out where that hurricane is going and tell which people to evacuate. There's an intellectual product to the compute. And then the last characteristic is when you apply more compute power appropriately, you get a more valuable result. So it could be better prediction of that hurricane path, it could be a safer car, because you have more time, you have more capability, and are able to build a better design ahead of that deadline to get that model year of the car out. And so, if you think about that, there's a lot, there's a broad spectrum, and I talked about some of those most extreme problems, but even in something like designing an airplane, there might be 16, 20, a hundred different small design variations you want to explore. Well those can actually be great for the Cloud, 'cause they're small calculations and you run many of them at the same time. And the elastic capability of the Cloud augments the supercomputer that you might be using to run your hardest problems. >> So the aperture of problem-solving is huge now. >> Bill: That's right. >> You can do more. I mean we had Thorn on yesterday. Thorn was a company that partners with Intel to, you know, find missing and exploited children. AI for good, so everything's possible. >> Yeah, even AI we think of as an example of a high performance computing workload, because what does it do? It gives you insights that you didn't have otherwise, it's compute intensive, and it does better when you apply more resources. So that fits our definition. So AI is definitely under the umbrella of high performance computing. >> One of the things, one of the great benefits of Cloud is the elasticity which you mentioned before. And we know that Amazon's just brought out the C5 instances, which are a specific instance type that would be quite useful for HPC.
But what is it about the bursting workloads or that elasticity that specifically works well for HPC, do you think? >> Well, there's a couple of use cases that we think are particularly relevant. One of them is an existing company. Just imagine some Fortune 50 manufacturer. They have a lot of stuff that they really need to build their own supercomputer for, their own high performance computing system, but even though they keep that system busy all the time, there is some variability in their usage, and they have the opportunity cost of an engineer sitting while their job is in the queue, 'cause you're paying that engineer but you're not giving them insights, right. And so the Cloud can augment that, and we have a lot of examples of large Fortune 500, Fortune 50 companies augmenting their on-premise with Cloud as a way to push those workloads that can run on the Cloud there, to free up those on-prem resources, which are much more tailored and much more expensive, and get more value out of them. >> Okay, and what's Intel doing to help customers figure out which of those workloads is best suited for Cloud and which ones are better suited for something which is running on site? >> Well, it's mostly through our influencer sales force, who engage with many, many major companies and provide consulting, because Intel doesn't sell computers directly to anyone, so it's more about our knowledge, and sharing that with people. And what we're trying to help enterprises understand is which workloads need to stay on premise, which ones can go to the Cloud, and how the elasticity of the Cloud can augment those on-premise resources and thus, you know, go back and forth. >> It's the classic mission for Intel, make the apps go faster, faster, smaller, cheaper, right.
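The hybrid pattern described here, keep the tailored on-premise system full and burst the overflow to the cloud, can be made concrete with a toy scheduler. The job names, node counts, and capacity below are hypothetical, purely to illustrate the routing logic:

```python
def place_jobs(jobs, onprem_capacity):
    """Toy placement for the hybrid HPC pattern: fill the fixed
    on-premise system first, burst cloud-eligible overflow to the
    cloud, and queue anything that is neither (e.g. IP-sensitive
    or tightly coupled work that must wait for on-prem nodes)."""
    onprem, cloud, queued = [], [], []
    free = onprem_capacity
    for name, nodes, cloud_ok in jobs:
        if nodes <= free:        # fits on the tailored on-prem system
            onprem.append(name)
            free -= nodes
        elif cloud_ok:           # elastic capacity absorbs the burst
            cloud.append(name)
        else:                    # must stay on-prem: wait in the queue
            queued.append(name)
    return onprem, cloud, queued

jobs = [
    ("crash-sim", 8, False),     # tightly coupled, on-prem only
    ("sweep-01", 2, True),       # design-space variant, cloud-eligible
    ("sweep-02", 2, True),
    ("sweep-03", 2, True),
]
print(place_jobs(jobs, onprem_capacity=10))
```

Real schedulers weigh queue wait time, data movement, and licensing as well, but the core decision, fits on-prem, cloud-eligible, or must wait, is the one Bill describes.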
So really, the two biggest considerations we find in deciding whether a workload goes into the Cloud or stays on-premise in high performance computing are the following, one, is really the sensitivity of the IP. There's a lot of workloads that could run in the Cloud and people simply want to keep it on-premise 'cause they're more comfortable knowing that their IP is sitting inside their own firewall. Though the reality is, more and more companies are getting comfortable with Cloud security as they see data breaches. And realize that some of the big Cloud providers, like Amazon, maybe have better access to the security talent than they do. >> I think Goldman-Sachs just announced they're going all in. That's Goldman-Sachs, they never do a testimonial. >> So the privacy and the sensitivity of the data is king, you know, you have to be willing to put it in the Cloud. Then the second question is, is it a technical fit? And that's where this spectrum of workloads comes in. The bigger a workload goes and the more you want to speed it up but keep the workload the same size, that's what we call strong scaling and that starts to stress the network, and stress the system. And that's where these tailored systems come in. And so, you have to look at where things fall on the spectrum. A good example of workloads that would fit is these design space explorations, anything we would call pleasingly parallel or embarrassingly parallel in the industry where the communication does happen, but it's not the limiter of the calculation. So screening for a drug candidates, for personalized medicine, lot of life sciences applications, financial services is a good fit, in manufacturing a design space exploration maybe for different designs and materials for a dashboard or a component of a car. >> Bill, when you were at your Thanksgiving dinner and your family or wherever, you're moving around in your personal life, you're a technologist. 
How do you explain the phenomenon of Amazon Web Services and the Cloud action right now? Because, you know, you're in it every day. You're close to all the action. But I get asked all the time, what's the hubbub about AWS, and it's hard to explain the phenomenon. How would you describe the, I mean you're talking about tailored systems, elasticity, I mean it's a tech dream. I mean, how do you explain it to like a normal person? >> The conversation's usually pretty short because my family includes a historian, an English major, an accountant, a musician, a singer, people who really don't have the slightest interest in technology. >> It's hard to talk about lambda, when you're. >> So I'm really the only technologist in my family so I just avoid it, but the question does come up with my parents. You know, parents like to brag on their kids so they like to know what you do, and every year my mom asks me what I do and I try to explain high performance computing to her and she says, oh, I don't get it. But when you explain it in terms of things like climate modeling and being able to support the worldwide nuclear test ban, that's done with high performance computing. Safer cars, finding missing children, better quality of life through all the AI that we're now experiencing. >> John: Analytics is a great use case. >> Then people say, oh, you know, they can understand the use cases. The elasticity of the Cloud really is not something that I discuss with family, but even coworkers, I think, that's what the conversation focuses on. Recognizing that high performance computing is a range of workloads. >> Okay, so I'll rephrase it differently. What's your perspective on, what observations do you get excited about that are enabled now by these new use cases? 'Cause there's new things now that are possible. The number of computations, you got analytics, you mentioned a few of them.
What jumps out at you, wow, that's really awesome, we can do that now? >> You know, this is gonna sound a little odd, and maybe not what you expected, but I'm not actually a technology enthusiast, believe it or not, despite. I think technology's cool, I like what it does, but I don't get super excited about technology. One of the things that I'm excited about with the Cloud is probably at the opposite extreme of what you would expect which is, back to, how does the elasticity of the Cloud fit? There's so many companies in this world who could benefit from high performance computing and don't today. A recent study showed that 95 percent of U.S. small and medium manufacturers, which is over 300,000 of them, are not using HPC today. And so, as they're part of this supply chain, whether it's into a Boeing or an Airbus or a Lockheed Martin or a Honda or a Toyota, there's this whole supply chain. HPC's being used at the top, it's not being used at the bottom, so I think the Cloud is actually really, really exciting because it allows somebody to get over those initial hurdles, the cap-ex, getting access to pay as you go, prove the value proposition, because a small medium business actually has to take a risk to use HPC. They have to divert capital and divert resources. And they could lose a contract. >> So do you see a lot more companies starting to take advantage of some of this high performance computing capability just because it's now, you can rent it by the hour and try it out, give it a bit of a whirl, and then see, actually this is going to be really valuable for us, and then deploy a lot more of it. >> Exactly and that's one of the key things we're promoting is 'cause we want to bring more people into the world of high performance computing. So, AWS provides all the building blocks. Compute, elastic storage and so on.
But high performance computing applications really expect a specific type of platform that they can run on, and that platform aggregates the resources, so there's a number of companies, Rescale is one, Cycle Computing, and others, who are actually providing that platform layer. And then once you've got the platform layer, all the, I'll call it the, geeky stuff that they do, AWS has abstracted away. Now the applications can run, and that's what's bringing new users in. >> Bill, final question for you. AWS launched its C5 Instances. What's that about, what's it mean for customers? Can you explain a little bit more on that one piece? >> Sure, we're delighted to see Amazon deploying the C5 Instances. It's based on our latest technology in the Xeon product family. We call that the Intel Xeon scalable processor family. It includes, it's based on what we call Skylake technology or code name Skylake. There's a lot of innovations in that processor and that platform that are specifically driven by the needs of high performance computing. There's something called AVX-512, which is a doubling of the vector width. It means that every core can actually do 32 double-precision floating point operations per clock. That's tremendous, tremendous compute capability, a 2X over the previous generation. On the memory bandwidth side, which is another huge factor for high performance computing applications, there's something like a 66 percent increase in memory bandwidth. So it's a balanced platform, and we're seeing improvements in high performance computing apps of anywhere from 1.7x sometimes almost up to 5x improvement in going from the C4 to C5 Instances on a per node basis. >> This is really going to enable a lot of action. IOT, tons of great stuff. >> Absolutely and as I talked about that range of HPC and you know, what fits and what doesn't fit in the Cloud, every generation of technology, what fits in the Cloud is growing, and C5 is another important step in that direction.
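The "32 floating point operations per clock" figure for AVX-512 can be checked with simple arithmetic. The two AVX-512 FMA units per core reflect the Skylake server parts discussed above; the core count and clock rate in the second step are placeholders, not figures from the interview:

```python
# AVX-512 peak double-precision math for a Skylake-SP core.
vector_bits = 512
double_bits = 64
lanes = vector_bits // double_bits         # 8 doubles per vector register
fma_units = 2                              # two AVX-512 FMA units per core
ops_per_fma = 2                            # a fused multiply-add counts as 2 FLOPs
flops_per_clock = lanes * fma_units * ops_per_fma
print(flops_per_clock)  # 32, matching the figure in the interview

# Peak for a hypothetical 18-core part at 2.0 GHz:
peak_gflops = 18 * 2.0e9 * flops_per_clock / 1e9
print(peak_gflops)  # 1152.0
```

The same arithmetic with half the vector width (256-bit AVX2) gives 16 FLOPs per clock, which is the "doubling" being described.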
>> Bill, thanks for coming on the Cube. Bill Magro, Chief Technologist at Intel for HPC, high performance computing. The Cloud is one big high performance machine in the sky, however you want to look at it, a really great opportunity for enabling all new use cases, doing things for society's benefit, and for customers. Great stuff here, Cloud impact is significant. IOT to the Cloud. This is the Cube, doing our share here at AWS re:Invent in Las Vegas. We'll be right back with more coverage after this short break. (electronic music)
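The "pleasingly parallel" design-space exploration described earlier in the interview maps directly onto independent tasks with no communication between them. A minimal sketch (the scoring function is a made-up stand-in for a real simulation):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(design):
    """Stand-in for an expensive simulation of one candidate design."""
    thickness, material_factor = design
    return thickness * material_factor  # hypothetical 'score'

# Each candidate is independent -- no communication between tasks,
# which is what makes the workload embarrassingly parallel.
designs = [(t, m) for t in (1, 2, 3) for m in (0.5, 1.0)]
with ThreadPoolExecutor() as pool:              # threads for brevity here;
    scores = list(pool.map(evaluate, designs))  # a real sweep fans out across nodes
best = max(zip(scores, designs))
print(best)  # (3.0, (3, 1.0))
```

Because no task waits on another, adding more workers (or more cloud instances) scales the sweep almost linearly, which is why this class of workload is the natural first fit for cloud HPC.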
Yanbing Li & Matt Amdur, VMware | VMworld 2017
>> Announcer: Live from Las Vegas, it's the Cube, covering VMworld 2017, brought to you by VMware and its ecosystem partners. (bright music) >> Welcome to VMworld 2017. This is the Cube. We are live in Las Vegas on day one of the event, a really exciting, high energy general session kicked things off. I'm Lisa Martin with my cohost, Stu Miniman. We're excited to be joined by two folks from VMware. We've got Cube alumni Yanbing Li, senior VP and GM of the storage and availability BU. Welcome back to the Cube. >> Good to be here. >> Lisa: And we've also got Matt Amdur, your first time on the Cube, principle VMware chief architect. >> Thanks for having me. >> We're excited to have you guys here so been waiting with baited breath, a lot of folks have, for what are VMware and AWS going to actually announce product-wise. Really exciting to see Pat Gelsinger on stage with Andy Jassy today. Talk to us about, as the world of hyper-converged infrastructure is changing, what does VMware cloud on AWS mean for, not just VMware customers, but new opportunities for VMware? >> Yeah, that's a great question Lisa. Let me get it started. You know, I think my biggest takeaway from the exciting keynote, a couple of things. One is private cloud is sexy again. You know, so we've been talking about cloud a lot, but there is so much opportunity and tremendous growth associated with private cloud, and certainly hyper-converged infrastructure being the next generation architecture shift is going to drive a lot of the modernization of our customers' private environment, so that's certainly very exciting. The other aspect of the excitement is how that same architecture and consistent operating model is extending into the cloud with our AWS relationship, and this is also why I have my colleague, Matt, here, because he's been the brain behind a lot of the things we're doing on AWS. >> Yeah, thanks so much, Yanbing, and I tell you, for years, it was like, ah, storage is sexy, storage is hot. 
Cloud's kind of sexy and hot, so we found a way to kind of connect storage into that. Matt, you know, a lot of people don't really understand what happened here. This isn't just, oh, you know, we're not layering, you know, VMware on top of the infrastructure as a service that they have. Last year, we kind of dug in a little bit with Cloud Foundation. Talk to us, what did it take to get this VMware cloud onto AWS, bring us inside a little bit, the sausage making if you would. >> I think Andy talked about this a little bit at the keynote this morning, where it's really been an incredible, collaborative effort between both engineering organizations, and it's taken a lot of effort from a huge number of people on both sides to really pull this off, and so you know, as we started looking at it, I think one of the challenges that we faced, and Andy mentioned this this morning was there was this really binary decision for customers. If you had vSphere workload, do you want them to bring them to the public cloud? There was nothing that was compatible. And so, we really sat down with Amazon and said, okay, how can we take advantage of the physical infrastructure and scale that Amazon built and provides today, and make it compatible with vSphere, and if you look at what we've done with VSAN on premise as an HCI solution, it's become a sort of ubiquitous storage platform, and it offers customers an operational and a management experience for how they think about managing their storage, and we can take that and uplift it into the cloud by doing the heavy lifting of how do we make VSAN run, scale, and operate on top of AWS's physical infrastructure. >> One of the things that I found was really interesting this morning was seeing the, I couldn't see it from where I was sitting, the sort of NASCAR slide of customers that were in beta. Talk to us a little about some of the pain points that you're helping with VMware cloud and AWS. 
What are some of those key pain points that those customers were facing that from an engineering perspective you took into the design of the solution? >> Sure, so I think if you look at it, some of the benefits that we see with public cloud infrastructure that our customers really want to take advantage of are flexibility and elasticity. One of the challenges that you have on premise today is if you need new hardware, you have to order it, it's got to ship on a truck, someone's got to rack it and hook it up, and if you're trying to operate and keep pace with your competition, and you have a need to allocate a lot of capacity to drive a project forward, that can be a huge impediment, and so what we wanted to do is make it really easy for our customers to configure, deploy, and provision our software. And so, one of the really interesting things about VMware managed cloud on AWS is that it's a managed service, so some of the things that, you know, we've talked about VCF and the things that we've done on premise to streamline physical infrastructure management is taken to the next level. Customers don't have to worry about managing the vSphere software lifecycle. VMware is now going to do that for them, and Amazon is going to manage the physical infrastructure, and that removes a lot of burdens and gives customers the opportunity to focus on their core business. >> If you think about, you know, Stu, you touched on Cloud Foundation, we were using Cloud Foundation to automate how our customer consumed the entire software-defined data center stack. And you think about moving that same goodness into, you know, the VMware cloud on database, and you know, really removing a lot of the complexity around managing your own infrastructure. And so that customer can truly focus on their value adds, through, you know, developing the next generation of applications that enable their business. It's been a great extension of what we're solving on premise to the public cloud. 
>> Yeah, I wonder if we can drill in a little bit deeper on this. So you know, most customers I think understand, okay, if I needed to set up a VSAN environment right, I got to get my servers, how long it takes, what skill set I had, virtualization admins have been doing this for a few years now, and congratulations, you've got the number up near 10,000 customers, which is, you know, a great milestone there. Walk us through, you know, when we're saying okay, I want to spin it up. If I know, swipe a credit card and turn on a VM, is it as fast? And what is that base configuration, what kind of scale can it go to? >> Sure, so to start with, what was announced today for initial availability, you can come to the VMware portal, so if you come to our portal, you give us your credit card, obviously, and then you can provision between four and 16 nodes. So you pick how many nodes you want. And you give us a little bit of networking-related information so we can understand how to lay out IP address ranges so we're not going to conflict with what you have on premise. And then you click provision, and in a few hours you'll have a fully stood up SDDC. And so that's going to include a vCenter instance that we've installed, all of the ESX hosts we've provisioned from Amazon, we install ESX, we configure VSAN for you. And it's basically like getting a brand new vSphere deployment, and you can start provisioning your VM workload as soon as it's ready. And then once it's there, if you want to grow your cluster, you can dynamically add hosts, on the order of about 10 minutes. And if you want to remove capacity, you can remove hosts as well. So it gives you that elasticity and flexibility from the public cloud. >> Awesome, so we're early with some of the early customers. I'm curious, do you have any compare and contrast as to what they like about, you know, doing it on Amazon, you know, VMware cloud on Amazon versus my own data center?
Of course there's things I could say, okay, I could spin it up faster, but I could turn it off and then not have to pay for it. What, are we at the point we understand some of those use cases to tell why they might do one versus the other? >> Yeah, I think lots of the customers interested in this new model are really liking that common operating experience. We have some of the flagship customers you've heard about this morning, you know, Medtronic for example. They are a VMware Cloud Foundation customer. They are running their entire, you know, SDDC through VMware Cloud Foundation, but because they really enjoy that experience and the simplicity that brings, now they're extending that into the cloud. So they're also one of the earlier customers for VMware cloud on AWS. So having that common operational experience is a big value prop to our customers. >> And I think we really see customers wanting both, right? The customers, you mentioned before, the private cloud is sexy again. The customers who have a lot of workloads, that makes sense to run in a private cloud. But they also want the flexibility of how they can take advantage of public cloud resources. And so depending on the problem that they're trying to solve, they view this as a complement to their existing infrastructure. >> And I have to think, some of the services I have available are a little different. Things like disaster recovery, if I'm doing it in kind of that cloud operating model, a little different. I now have Amazon services I can use, and VMware announced a whole, what was it, seven new SaaS services which kind of spanned some of those environments. >> Yeah, so the SaaS services we announced, they are truly cross-cloud. 'Cause they're not limited to only a vSphere-powered cloud, they truly are extending into this cross-cloud, multi-cloud world of, you know, heterogeneous types of cloud environments.
And now, you know, you spoke about DR, and certainly for someone coming from the storage and availability background, you know, in terms of our BU's role that we're playing in our cloud relationship, you know, certainly we are trying to provide the best storage infrastructure as part of our cloud service. But we are also looking at what are the next levels of data-related services, whether it's data mobility, application mobility, disaster recovery, or the futures of other aspects of data management. And that's what we've been focusing on. You know, we have lots of customers, you know, even thinking about what's happening with, you know, Hurricane Harvey, I still remember the Hurricane Sandy days. A lot of our Site Recovery Manager customers told us, you know, how SRM has saved their day. We're seeing the power of a disaster recovery solution. And now with the cloud, you can totally leverage the economics and the flexibility and scalability that cloud has to offer. So those are all the directions we're working on. >> So we're coming up on the one-year anniversary of the closure of the Dell acquisition of EMC and its companies. Would love to understand, looking at this great announcement today, VMware cloud on AWS, from a differentiation perspective, what does this provide to VMware as part of Dell EMC, this big partnership with AWS? >> Yeah, so let me, you know, maybe take it back a step, not just the AWS relationship but really look more broadly at what we're doing together with Dell. And certainly, you know, starting with the storage business, we're doing amazing work around our entire portfolio of software-defined storage, hyper-converged infrastructure. And the good thing is, as Stu pointed out, we're seeing tremendous growth in our core business around VSAN. You know, 10,000 customers, expanding rapidly.
But we're truly firing on multiple cylinders, both consuming it as a software model as well as working with partners like Dell EMC on turnkey appliances such as VxRail. They're seeing tremendous success. So we are extending into our partnership around data protection. This is why I'll be coming to the Cube with Matt Felon to talk about all the great things we're doing around data protection collaboration, both for on prem as well as in VMware cloud on AWS. So lots of things happening in different parts of the business unit. So but coming back to VMware on AWS, I think we're thinking about leveraging the strength of our portfolios, so this is not just a full VMware stack, but there is some of the Dell technology IP we're pulling in. So for example data protection, they're part of our ecosystem, being one of the very first partners enabling data protection on top of AWS. Yeah, so Matt, anything to add? >> Yeah, I think, you know, when we look at what's made us so successful on premise, it's been that extended storage ecosystem, of which Dell EMC is a huge part. And we continue to see that value as we go to the cloud. Yanbing mentioned backup and disaster recovery as sort of the obvious starting points, but I think beyond that there's a bunch of technology that they have that's equally applicable whether or not you're running on premise or in the public cloud. And the tighter we can integrate and the more we can take advantage of it, the more value we can derive for our customers. >> So VSAN 6.6 is now out. You know, any other things that we haven't talked about that you want to highlight there, and any roadmap items that you can share that are being kind of publicly discussed, you know, here at VMworld? >> So yeah, 6.6 was definitely a big hit, you know, with encryption and also lots of the cloud analytics things we were doing, which have been really hitting, you know, the hard core of what our customers are looking for.
So going forward with VSAN, we talked about AWS, our relationship with AWS, for a long time, but the fundamental product-level innovation is happening inside VSAN as well. One of the big focuses is really looking at our next generation architecture that truly enables us to leverage all the new device technology. You know, I keep saying, a software defined product is really driven sometimes by hardware innovation, and that's very true for VSAN. So at the foundational layer, we're looking at new hardware innovations and how to best leverage them. But moving up the stack, we're also looking at cloud analytics and, you know, proactive maintenance. I was just talking to one of our customers about what it takes to provide support in 2017. It's all this automated intelligence, proactive, you know, you heard Pat talk about Skyline. This is a new proactive support approach we've provided, and there will be a lot of cloud analytics driving technology like that. >> I was going to say, on the analytics side, what are you hearing from customers with respect to what they're needing on analytics as they have this big decision to make about cloud, private, public, hybrid? What are some of the analytics needs that you're starting to hear from customers that would then be incorporated into that roadmap? >> So from our view, we're looking at lots of the infrastructure-level analytics. Certainly there is also lots of application-level analytics. But from an infrastructure point of view, you know, to Matt's earlier point, customers do not want to really worry about, you know, the plumbing around their infrastructure. So we're gathering analytics, we're pumping them into the cloud, we're performing, you know, intelligent analysis so that we can proactively provide intelligence and support back to our customers.
>> I think it really, it helps customers to understand things about how they're using their storage, how they're using their data, what applications are consuming storage, who needs IOPs, who has latency constraints, all that type of data. And being able to package that up and show it to customers in real time and help them both understand what they're currently doing and future planning, we see a lot of value in. >> Matt, I'm curious, one of the challenges you have as a software product is you need to be able to live in lots of different environments. Amazon is kind of a different beast, you know, they hyper-optimize is what I said. There's kind of a misconception now. They're oh, they take, you know, white box and do this. I said, no, they will build a very specific architecture and build 10,000 nodes or more. Without sharing any trade secrets, any lessons learned or anything, you know, that kind of is like, wow, this was, you know, an interesting challenge and here's what we learned when you talk, 'cause the challenge of our time is building distributed architectures. And I'd have to think that porting over to Amazon was not a, you know, oh, yeah, I looked at the code and everything worked day one. So what can you share? >> I think it goes back to sort of the really interesting and tight collaboration from the engineering aspects. And it's really been phenomenal to see the level of detail that Amazon has in terms of how they operationalize hardware and what they can tell us about the hardware that they're building for us. And so I think it really highlights some of the value that you see in the public cloud, which is, it's not just about having physical infrastructure hosted somewhere else. It's about having a company like AWS that's understood how to deploy, monitor, and operate it at scale.
And that goes to everything from how they think about, you know, the clips that are holding power cables into servers to how they think about SSDs and how they roll out firmware changes. And so from an engineering standpoint, it's been a great collaboration to help us see the level of detail that they go to there, and then we're able to take that into account for how we design and build solutions. >> Yeah, we are definitely taking all that learning into, you know, how to build cloud scale solutions that truly empower, you know, cloud scale operations. And lots of the operational learning, you know, that we get from this exercise has been just tremendous. >> Yeah, well one of the bits of news I saw is that VMware's IT is now running predominantly or all on VSAN, right? What can you tell us about that? Are there still storage arrays somewhere inside the IT? >> So we're extremely excited about this, and we have a visionary CIO, Bask Iyer, I know he was a Cube guest as well. So he's been really pushing this notion of VMware running on top of VMware. So we have 119 clusters, you know, 30,000 VMs, probably close to 1,000 hosts, and seven petabytes of data running on VSAN. And so if VSAN as a product doesn't hold up, you know, I get to experience it firsthand. So it's been pretty phenomenal to see that happen. We are also deliberately running a range of different versions of VSAN. There's, you know, some that are GA versions. There are some that are cloud editions that are yet to be made GA to our customers. So this really helps us develop much more robust software. If you see what's happening here in the hands-on lab, that's being powered by VSAN as well behind the scenes. >> VMware's done a great job of leveraging kind of core competencies, like VSAN for the software defined data center. As you mentioned, 10,000 customers, I think Pat said adding 100 a week, >> Yanbing: Yeah. not sure if I heard that correctly. Wow, that's phenomenal.
So as, and another thing that he said that was interesting, right before we wrap up here, is we're moving from data centers to centers of data. As customers are transitioning and really kind of figuring out what flavors of cloud are ideal for them, are you seeing any industries really leading the charge with respect to, for example, VMware cloud on AWS? Are you seeing it in, you know, we saw Medtronic, but health care, financial services, any industry specificities that you're seeing that are really leading edge that need this type of infrastructure? >> I think it's happening across many different industries. So tomorrow, I'm going to be in a session called Modernizing Data Center, but there is also lots of emphasis on what's happening on the edge. So I have been exposed to customers from health care, customers from airlines, so we're going to be probably talking about examples of the Airbus A380, you know, the biggest airplane that's ever been built, and they have 300,000 sensors on the plane that's generating tons of data, and those data are being processed by technology like VSAN. And just, you know, stories across different industries. And I think that data center to edge story is very powerful. And this is also why the next generation architecture such as HCI makes it happen. Clearly we've seen tremendous adoption in the data center. Now we're seeing adoption in the cloud. And I have to say, it's not just the VMware cloud on AWS. We have about 300 cloud provider partners of VMware that have adopted and deployed VSAN to different degrees. And now we're seeing it go to the edge. We have some amazing announcements this morning around an HCI accelerator kit that is really providing a much more affordable solution to really enable edge use cases. >> Fantastic, well tremendous momentum, great growth, we wish you guys the best of luck. Congratulations on everything announced today. And we hope you have a great rest of the show.
Yanbing Li, Matt Amdur, thanks so much for joining us on the Cube. >> Thank you very much for having us. >> Thank you for having us. >> Woman: Absolutely. And we want to thank you for watching. I'm Lisa Martin with Stu Miniman, live from day one at VMworld 2017. Stick around, we'll be right back. (bright music)
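Matt's earlier description of the elastic cluster, provision between four and 16 nodes, then dynamically add or remove hosts, can be sketched as a tiny state machine. This is an illustrative model only, not the actual VMware Cloud on AWS API:

```python
class ElasticCluster:
    """Toy model of the 4-to-16-host elastic cluster described in the interview."""
    MIN_HOSTS, MAX_HOSTS = 4, 16

    def __init__(self, hosts=4):
        if not self.MIN_HOSTS <= hosts <= self.MAX_HOSTS:
            raise ValueError("initial size out of range")
        self.hosts = hosts

    def add_host(self):
        if self.hosts >= self.MAX_HOSTS:
            raise RuntimeError("cluster at maximum size")
        self.hosts += 1            # in the real service this takes ~10 minutes

    def remove_host(self):
        if self.hosts <= self.MIN_HOSTS:
            raise RuntimeError("cluster at minimum size")
        self.hosts -= 1

cluster = ElasticCluster()
cluster.add_host()
cluster.add_host()
print(cluster.hosts)  # 6
```

The interesting design point is the bounds: elasticity in the managed service is not unlimited, so capacity planning still matters even when the add/remove operations themselves are a button click.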
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
Matt Amdur | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Matt | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Matt Felon | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
119 clusters | QUANTITY | 0.99+ |
Pat | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Yanbing Li | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Last year | DATE | 0.99+ |
ESX | TITLE | 0.99+ |
VMware Cloud Foundation | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
10,000 customers | QUANTITY | 0.99+ |
Cloud Foundation | ORGANIZATION | 0.99+ |
Medtronic | ORGANIZATION | 0.99+ |
VMworld | ORGANIZATION | 0.99+ |
300,000 sensors | QUANTITY | 0.99+ |
16 nodes | QUANTITY | 0.99+ |
two folks | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.99+ |
Yanbing | PERSON | 0.99+ |
today | DATE | 0.98+ |
first time | QUANTITY | 0.98+ |
Dell EMC | ORGANIZATION | 0.98+ |
30,000 VMs | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
100 a week | QUANTITY | 0.98+ |
VMworld 2017 | EVENT | 0.98+ |
about 10 minutes | QUANTITY | 0.98+ |
Bask Iyer | PERSON | 0.98+ |
four | QUANTITY | 0.98+ |
Panel Discussion | IBM Fast Track Your Data 2017
>> Narrator: Live, from Munich, Germany, it's the CUBE. Covering IBM, Fast Track Your Data. Brought to you by IBM. >> Welcome to Munich everybody. This is a special presentation of the CUBE, Fast Track Your Data, brought to you by IBM. My name is Dave Vellante. And I'm here with my cohost, Jim Kobielus. Jim, good to see you. Really good to see you in Munich. >> Jim: I'm glad I made it. >> Thanks for being here. So last year Jim and I hosted a panel at New York City on the CUBE. And it was quite an experience. We had, I think it was nine or 10 data scientists and we felt like that was a lot of people to organize and talk about data science. Well today, we're going to do a repeat of that. With a little bit of twist on topics. And we've got five data scientists. We're here live, in Munich. And we're going to kick off the Fast Track Your Data event with this data science panel. So I'm going to now introduce some of the panelists, or all of the panelists. Then we'll get into the discussions. I'm going to start with Lillian Pierson. Lillian thanks very much for being on the panel. You are in data science. You focus on training executives, students, and you're really a coach but with a lot of data science expertise based in Thailand, so welcome. >> Thank you, thank you so much for having me. >> Dave: You're very welcome. And so, I want to start with sort of when you focus on training people, data science, where do you start? >> Well it depends on the course that I'm teaching. But I try and start at the beginning so for my Big Data course, I actually start back at the fundamental concepts and definitions they would even need to understand in order to understand the basics of what Big Data is, data engineering. So, terms like data governance. Going into the vocabulary that makes up the very introduction of the course, so that later on the students can really grasp the concepts I present to them. 
You know I'm teaching a deep learning course as well, so in that case I start at a lot more advanced concepts. So it just really depends on the level of the course. >> Great, and we're going to come back to this topic of women in tech. But you know, we looked at some CUBE data the other day. About 17% of the technology industry comprises women. And so we're a little bit over that on our data science panel, we're about 20% today. So we'll come back to that topic. But I don't know if there's anything you would add? >> I'm really passionate about women in tech and women who code, in particular. And I'm connected with a lot of female programmers through Instagram. And we're supporting each other. So I'd love to take any questions you have on what we're doing in that space. At least as far as what's happening across the Instagram platform. >> Great, we'll circle back to that. All right, let me introduce Chris Penn. Chris, Boston based, all right, SMI. Chris is a marketing expert. Really trying to help people understand how to get, turn data into value from a marketing perspective. It's a very important topic. Not only because we get people to buy stuff but also understanding some of the risks associated with things like GDPR, which is coming up. So Chris, tell us a little bit about your background and your practice. >> So I actually started in IT and worked at a start up. And that's where I made the transition to marketing. Because marketing has much better parties. But what's really interesting about the way data science is infiltrating marketing is the technology came in first. You know, everything went digital. And now we're at a point where there's so much data. And most marketers, they kind of got into marketing as sort of the arts and crafts field. And are realizing now, they need a very strong, mathematical, statistical background. 
So one of the things, Adam, the reason why we're here and IBM is helping out tremendously is, making a lot of the data more accessible to people who do not have a data science background and probably never will. >> Great, okay thank you. I'm going to introduce Ronald Van Loon. Ronald, your practice is really all about helping people extract value out of data, driving competitive advantage, business advantage, or organizational excellence. Tell us a little bit about yourself, your background, and your practice. >> Basically, I have three different backgrounds. On one hand, I'm a director at a data consultancy firm called Adversitement. Where we help companies to become data driven. Mainly large companies. I'm an advisory board member at Simplilearn, which is an e-learning platform, especially also for big data analytics. And on the other hand I'm a blogger and I host a series of webinars. >> Okay, great, now Dez, Dez Blanchfield, I met you on Twitter, you know, probably a couple of years ago. We first really started to collaborate last year. We've spent a fair amount of time together. You are a data scientist, but you're also a jack of all trades. You've got a technology background. You sit on a number of boards. You're very active with public policy. So tell us a little bit more about what you're doing these days, a little bit more about your background. >> Sure, I think my primary challenge these days is communication. Trying to join the dots between my technical background and deeply technical pedigree, to just plain English, everyday language, and business speak. So bridging that technical world with what's happening in the boardroom. Toe to toe with the geeks to plain English to execs in boards. And just hand hold them and steward them through the journey of the challenges they're facing. Whether it's the enormous rate of change and the pace of change, that's just almost exhausting and causing them to sprint.
But not just sprint in one race but in multiple lanes at the same time. As well as some of the really big things that are coming up, that we've seen like GDPR. So it's that communication challenge and just hand holding people through that journey and that mix of technical and commercial experience. >> Great, thank you, and finally Joe Caserta. Founder and president of Caserta Concepts. Joe you're a practitioner. You're in the front lines, helping organizations, similar to Ronald. Extracting value from data. Translate that into competitive advantage. Tell us a little bit about what you're doing these days in Caserta Concepts. >> Thanks Dave, thanks for having me. Yeah, so Caserta's been around. I've been doing this for 30 years now. And the natural progression has been just moving from application development, to data warehousing, to big data analytics, to data science. Very, very organically, that's just because it's where businesses need the help the most, over the years. And right now, the big focus is governance. At least in my world. Trying to govern when you have a bunch of disparate data coming from a bunch of systems that you have no control over, right? Like social media, and third party data systems. Bringing it in and how do you organize it? How do you ingest it? How do you govern it? How do you keep it safe? And also help to define ownership of the data within an organization, within an enterprise? That's also a very hot topic. Which ties back into GDPR.
Unified governance on the other. And hybrid data management. I want to circle back or focus on machine learning. Machine learning is the coin of the realm, right now in all things data. Machine learning is the heart of AI. Machine learning, everybody is going, hiring, data scientists to do machine learning. I want to get a sense from our panel, who are experts in this area, what are the chief innovations and trends right now on machine learning. Not deep learning, the core of machine learning. What's super hot? What's in terms of new techniques, new technologies, new ways of organizing teams to build and to train machine learning models? I'd like to open it up. Let's just start with Lillian. What are your thoughts about trends in machine learning? What's really hot? >> It's funny that you excluded deep learning from the response for this, because I think the hottest space in machine learning is deep learning. And deep learning is machine learning. I see a lot of collaborative platforms coming out, where people, data scientists are able to work together with other sorts of data professionals to reduce redundancies in workflows. And create more efficient data science systems. >> Is there much uptake of these crowd sourcing environments for training machine learning wells. Like CrowdFlower, or Amazon Mechanical Turk, or Mighty AI? Is that a huge trend in terms of the workflow of data science or machine learning, a lot of that? >> I don't see that crowdsourcing is like, okay maybe I've been out of the crowdsourcing space for a while. But I was working with Standby Task Force back in 2013. And we were doing a lot of crowdsourcing. And I haven't seen the industry has been increasing, but I could be wrong. I mean, because there's no, if you're building automation models, most of the, a lot of the work that's being crowdsourced could actually be automated if someone took the time to just build the scripts and build the models. 
And so I don't imagine that, that's going to be a trend that's increasing. >> Well, automation machine learning pipeline is fairly hot, in terms of I'm seeing more and more research. Google's doing a fair amount of automated machine learning. The panel, what do you think about automation, in terms of the core modeling tasks involved in machine learning. Is that coming along? Are data scientists in danger of automating themselves out of a job? >> I don't think there's a risk of data scientist's being put out of a job. Let's just put that on the thing. I do think we need to get a bit clearer about this meme of the mythical unicorn. But to your call point about machine learning, I think what you'll see, we saw the cloud become baked into products, just as a given. I think machine learning is already crossed this threshold. We just haven't necessarily noticed or caught up. And if we look at, we're at an IBM event, so let's just do a call out for them. The data science experience platform, for example. Machine learning's built into a whole range of things around algorithm and data classification. And there's an assisted, guided model for how you get to certain steps, where you don't actually have to understand how machine learning works. You don't have to understand how the algorithms work. It shows you the different options you've got and you can choose them. So you might choose regression. And it'll give you different options on how to do that. So I think we've already crossed this threshold of baking in machine learning and baking in the data science tools. And we've seen that with Cloud and other technologies where, you know, the Office 365 is not, you can't get a non Cloud Office 365 account, right? I think that's already happened in machine learning. What we're seeing though, is organizations even as large as the Googles still in catch up mode, in my view, on some of the shift that's taken place. 
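The guided flow Dez describes — the platform "shows you the different options" and "you might choose regression" — is worth making concrete. At bottom, the step such a tool wraps is an ordinary least squares fit. Here is a minimal, pure-Python sketch of that fit; it is an illustration only, not the actual internals of the Data Science Experience or any other product:

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single feature: the fit a
    guided tool performs when you "choose regression"."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x).
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Points lying exactly on y = 3x + 2 recover slope 3, intercept 2.
print(fit_line([0, 1, 2, 3], [2, 5, 8, 11]))  # -> (3.0, 2.0)
```

The guided platforms add model choice, validation, and visualization on top, but the mathematics being baked in is no more mysterious than this.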
So we've seen them write little games and apps where people do doodles and then it runs through the ML library and says, "Well that's a cow, or a unicorn, or a duck." And you get awards, and gold coins, and whatnot. But you know, as far as 12 years ago I was working on a project, where we had full size airplanes acting as drones. And we mapped with two and 3-D imagery. With 2-D high res imagery and LiDAR for 3-D point Clouds. We were finding poles and wires for utility companies, using ML before it even became a trend. And baking it right into the tools. And used to store on our web page and clicked and pointed on. >> To counter Lillian's point, it's not crowdsourcing but crowd sharing that's really powering a lot of the rapid leaps forward. If you look at, you know, DSX from IBM. Or you look at Node-RED, huge number of free workflows that someone has probably already done the thing that you are trying to do. Go out and find in the libraries, through Jupyter and R Notebooks, there's an ability-- >> Chris can you define before you go-- >> Chris: Sure. >> This is great, crowdsourcing versus crowd sharing. What's the distinction? >> Well, so crowdsourcing, kind of, where in the context of the question you ask is like I'm looking for stuff that other people, getting people to do stuff that, for me. It's like asking people to mine classifieds. Whereas crowd sharing, someone has done the thing already, it already exists. You're not purpose built, saying, "Jim, help me build this thing." It's like, "Oh Jim, you already "built this thing, cool. "So can I fork it and make my own from it?" >> Okay, I see what you mean, keep going. >> And then, again, going back to earlier. In terms of the advancements. Really deep learning, it probably is a good idea to just sort of define these things. Machine learning is how machines do things without being explicitly programmed to do them. Deep learning's like if you can imagine a stack of pancakes, right? 
Each pancake is a type of machine learning algorithm. And your data is the syrup. You pour the data on it. It goes from layer, to layer, to layer, to layer, and what you end up with at the end is breakfast. That's the easiest analogy for what deep learning is. Now imagine a stack of pancakes, 500 or 1,000 high, that's where deep learning's going now. >> Sure, multi layered machine learning models, essentially, that have the ability to do higher levels of abstraction. Like image analysis, Lillian? >> I had a comment to add about automation and data science. Because there are a lot of tools that are able to, or applications that are able to use data science algorithms and output results. But the reason that data scientists aren't in risk of losing their jobs, is because just because you can get the result, you also have to be able to interpret it. Which means you have to understand it. And that involves deep math and statistical understanding. Plus domain expertise. So, okay, great, you took out the coding element but that doesn't mean you can codify a person's ability to understand and apply that insight. >> Dave: Joe, you have something to add? >> I could just add that I see the trend. Really, the reason we're talking about it today is machine learning is not necessarily, it's not new, like Dez was saying. But what's different is the accessibility of it now. It's just so easily accessible. All of the tools that are coming out, for data, have machine learning built into it. So the machine learning algorithms, which used to be a black art, you know, years ago, now is just very easily accessible. That you can get, it's part of everyone's toolbox. And the other reason that we're talking about it more, is that data science is starting to become a core curriculum in higher education. Which is something that's new, right? That didn't exist 10 years ago? But over the past five years, I'd say, you know, it's becoming more and more easily accessible for education. 
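Chris's pancake analogy maps directly onto code. In the toy sketch below — a deliberately scalar, hypothetical example; real networks use vector activations, weight matrices, and weights learned from data — each "pancake" is one layer, a weighted sum pushed through a nonlinearity, and the data is poured through the stack layer by layer:

```python
import math

def forward(x, stack):
    """Push one input (the "syrup") through a stack of layers
    (the "pancakes"). Each layer is a (weight, bias) pair:
    a weighted sum followed by a tanh nonlinearity."""
    value = x
    for weight, bias in stack:
        value = math.tanh(weight * value + bias)
    return value

# A three-pancake stack; "500 or 1,000 high" is the same loop,
# just with far more layers and vector-valued activations.
stack = [(0.5, 0.1), (1.2, -0.3), (0.8, 0.0)]
print(forward(2.0, stack))
```

The breakfast at the end is whatever the final layer emits — a probability, a class label, a predicted value.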
So now, people understand it. And now we have it accessible in our tool sets. So now we can apply it. And I think that's, those two things coming together is really making it become part of the standard of doing analytics. And I guess the last part is, once we can train the machines to start doing the analytics, right? And get smarter as it ingests more data. And then we can actually take that and embed it in our applications. That's the part where you still need data scientists, to create that. But once we can have standalone appliances that are intelligent, that's when we're going to start seeing, really, machine learning and artificial intelligence really start to take off even more. >> Dave: So I'd like to switch gears a little bit and bring Ronald on. >> Okay, yes. >> Here you go, there. >> Ronald, the bromide in this sort of big data world we live in is, the data is the new oil. You've got to be a data driven company, and many other cliches. But when you talk to organizations and you start to peel the onion, you find that most companies really don't have a good way to connect data with business impact and business value. What are you seeing with your clients and just generally in the community, with how companies are doing that? How should they do that? I mean, is that something that is a viable approach? You don't see accountants, for example, quantifying the value of data on a balance sheet. There's no standards for doing that. And so it's sort of this fuzzy concept. How are organizations taking advantage of data and turning it into value, and how should they? >> So, I think in general, if you look at how companies look at data, they have departments, and within the departments they have tools specific to that department. And what you see is that there's no central, let's say, data collection. There's no central management of governance. There's no central management of quality. There's no central management of security. Each department manages its data on its own.
So if you then ask, on the one hand, "Okay, how should they do it?" It's basically: go back to the drawing table and say, "Okay, how should we do it?" We should collect the data centrally. And we should take care of central governance. We should take care of central data quality. We should take care of centrally managing this data. And look from a company perspective, and not from a department perspective, at what the value of data is. So, look at the perspective from your whole company. And this means that it has to be brought up to the C level, where most of them still fail to understand what it really means. And what the impact can be for that company. >> It's a hard problem. Because data by its very nature is now so decentralized. But Chris you have a-- >> The thing I want to add to that is, think about in terms of valuing data. Look at what it would cost you for a data breach. Like what is the expense of having your data compromised. If you don't have governance. If you don't have policy in place. Look at the major breaches of the last couple years. And how many billions of dollars those companies lost in market value, and trust, and all that stuff. That's one way you can value data very easily. "What will it cost us if we mess this up?" >> So a lot of CEOs will hear that and say, "Okay, I get it. "I have to spend to protect myself, "but I'd like to make a little money off of this data thing. "How do I do that?" >> Well, I like to think of it, you know, I think data's definitely an asset within an organization. And is becoming more and more of an asset as the years go by. But data is still a raw material. And that's the way I think about it. In order to actually get the value, just like if you're creating any product, you start with raw materials and then you refine it. And then it becomes a product. For data, data is a raw material. You need to refine it. And then the insight is the product. And that's really where the value is.
And the insight is absolutely, you can monetize your insight. >> So data is abundant, insights are scarce. >> Well, you know, actually you could say that intermediate between the insights and the data are the models themselves. The statistical, predictive, machine learning models. That are a crystallization of insights that have been gained by people called data scientists. What are your thoughts on that? Are statistical, predictive, machine learning models something, an asset, that companies, organizations, should manage governance of on a centralized basis or not? >> Well the models are essentially the refinery system, right? So as you're refining your data, you need to have process around how exactly you do that. Just like refining anything else. It needs to be controlled and it needs to be governed. And I think that data is no different from that. And I think that it's very undisciplined right now, in the market or in the industry. And maturing that discipline around data science, I think, is something that's going to be a very high focus this year and next. >> You were mentioning, "How do you make money from data?" Because there's all this risk associated with security breaches. But at the risk of sounding simplistic, you can generate revenue from system optimization, or from developing products and services. Using data to develop products and services that better meet the demands and requirements of your markets. So that you can sell more. So either you are using data to earn more money. Or you're using data to optimize your system so you have less cost. And that's a simple answer for how you're going to be making money from the data. But yes, there is always the counter to that, which is the security risks. >> Well, and my question really relates to, you know, when you think of talking to C level executives, they kind of think about running the business, growing the business, and transforming the business.
And a lot of times they can't fund these transformations. And so I would agree, there's many, many opportunities to monetize data, cut costs, increase revenue. But organizations seem to struggle to either make a business case. And actually implement that transformation. >> Dave, I'd love to have a crack at that. I think this conversation epitomizes the type of things that are happening in board rooms and C suites already. So we've really quickly dived into the detail of data. And the detail of machine learning. And the detail of data science, without actually stopping and taking a breath and saying, "Well, we've "got lots of it, but what have we got? "Where is it? "What's the value of it? "Is there any value in it at all?" And, "How much time and money should we invest in it?" For example, we talk about it being a resource. I look at data as a utility. When I turn the tap on to get a drink of water, it's there as a utility. I count on it being there but I don't always sample the quality of the water and I probably should. It could have Giardia in it, right? But what's interesting is I trust the water at home, in Sydney. Because we have a fairly good experience with good quality water. If I were to go to some other nation, I probably wouldn't trust that water. And I think, when you think about it, what's happening in organizations, it's almost the same as what we're seeing here today. We're having a lot of fun, diving into the detail. But what we've forgotten to do is ask the question, "Well why is data even important? "What's the reasoning to the business? "Why are we in business? "What are we doing as an organization? "And where does data fit into that?" As opposed to becoming so fixated on data because it's a media hyped topic. I think once you can wind that back a bit and say, "Well, we have lots of data, "but is it good data? "Is it quality data? "Where's it coming from? "Is it ours? "Are we allowed to have it? "What treatment are we allowed to give that data?"
As you said, "Are we controlling it? "And where are we controlling it? "Who owns it?" There's so many questions to be asked. But the first question I like to ask people in plain English is, "Well is there any value "in data in the first place? "What decisions are you making that data can help drive? "What things are in your organizations, "KPIs and milestones you're trying to meet "that data might be a support?" So then instead of becoming fixated with data as a thing in itself, it becomes part of your DNA. Does that make sense? >> Think about what money means. The economists' rhyme: "Money is a medium, a measure, a standard, and a store." So it's a medium of exchange. A measure of value, a way to exchange something. And a way to store value. Data, good clean data, well governed, fits all four of those. So if you're trying to figure out, "How do we make money out of stuff," figure out how money works. And then figure out how you map data to it. >> So if we approach and we start with a company, we always start with the business case, which is quite clear. And a defined use case, basically, start with a team on one hand, marketing people, sales people, operational people, and also the whole data science team. So start with this case. It's like defining, basically, a movie. If you want to create the movie, you know where you're going to. You know what you want to achieve to create the customer experience. And this is basically the same with a business case. Where you define, "This is the case. "And this is how we're going to derive value, "start with it and deliver value within a month." And after the month, you check, "Okay, where are we and how can we move forward? "And what's the value that we've brought?" >> Now I, as well, start with the business case. I've done thousands of business cases in my life, with organizations. And unless that organization was kind of a data broker, the business case rarely has a discrete component around data.
Is that changing, in your experience? >> Yes, so we guide companies into being data driven. So initially, indeed, they don't like to use the data. They don't like to use the analysis. So that's why, how we help. And is it changing? Yes, they understand that they need to change. But changing people is not always easy. So, you see, it's hard if you're not involved and you're not guiding it, they fall back into doing the daily tasks. So it's changing, but it's a hard change. >> Well and that's where this common parlance comes in. And Lillian, you, sort of, this is what you do for a living, is helping people understand these things, as you've been sort of evangelizing that common parlance. But do you have anything to add? >> I wanted to add that for organizational implementations, another key component to success is to start small. Start in one small line of business. And then when you've mastered that area and made it successful, then try and deploy it in more areas of the business. And as far as initializing a big data implementation, that's generally how to do it successfully. >> There's the whole issue of putting a value on data as a discrete asset. Then there's the issue, how do you put a value on a data lake? Because a data lake is essentially an asset you build on spec. It's an exploratory archive, essentially, of all kinds of data that might yield some insights, but you have to have a team of data scientists doing exploration and modeling. But it's all on spec. How do you put a value on a data lake? And at what point does the data lake itself become a burden? Because you've got to store that data and manage it. At what point do you drain that lake? At what point do the costs of maintaining that lake outweigh the opportunity costs of not holding onto it? >> So each Hadoop node costs approximately $20,000 per year for storage. So I think that there needs to be a test and a diagnostic, before even inputting, ingesting the data and storing it.
"Is this actually going to be useful? "What value do we plan to create from this?" Because really, you can't store all the data. And it's a lot cheaper to store data in Hadoop than it was in traditional systems, but it's definitely not free. So people need to be applying this test before even ingesting the data. Why do we need this? What business value? >> I think the question we need to also ask around this is, "Why are we building data lakes "in the first place? "So what's the function it's going to perform for you?" There's been a huge drive to this idea. "We need a data lake. "We need to put it all somewhere." But invariably they become data swamps. And we only half jokingly say that because I've seen 90 day projects turn from a great idea, to a really bad nightmare. And as Lillian said, it is cheaper in some ways to put it into an HDFS platform, in a technical sense. But when we look at all the fully burdened components, it's actually more expensive to find Hadoop specialists and Spark specialists to maintain that cluster. And invariably I'm finding that big data, quote unquote, is not actually so much lots of data, it's complex data. And as Lillian said, "You don't always "need to store it all." So I think if we go back to the question of, "What's the function of a data lake in the first place? "Why are we building one?" And then start to build some fully burdened cost components around that. We'll quickly find that we don't actually need a data lake, per se. We just need an interim data store. So we might take last year's data and tokenize it, and analyze it, and do some analytics on it, and just keep the metadata. So I think there is this rush, for a whole range of reasons, particularly vendor driven. To build data lakes because we think they're a necessity, when in reality they may just be an interim requirement and we don't need to keep them for a long term.
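Lillian's "test before you ingest" can be reduced to back-of-the-envelope arithmetic. The sketch below uses her figure of roughly $20,000 per Hadoop node per year; the 40 TB of usable capacity per node and HDFS's default 3x replication are assumptions added for illustration, not numbers from the conversation:

```python
import math

def yearly_storage_cost(raw_tb, node_capacity_tb=40.0,
                        replication=3, cost_per_node=20_000):
    """Estimate yearly HDFS storage cost for a raw data volume.

    cost_per_node: ~$20,000/node/year, per the panel.
    node_capacity_tb and replication are illustrative assumptions.
    """
    stored_tb = raw_tb * replication              # replicas actually stored
    nodes = math.ceil(stored_tb / node_capacity_tb)
    return nodes * cost_per_node

# Screening question: is 500 TB of raw data worth ingesting?
print(yearly_storage_cost(500))  # -> 760000 (38 nodes)
```

If the expected business value of the insight is below that number, the interim-data-store approach — keep the metadata, drop the bulk — starts to look attractive.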
And I think, they all belong together because one of the reasons why there's such hesitation about progress within the data world is because there's just so much accumulated tech debt already. Where there's a new idea. We go out and we build it. And six months, three years, it really depends on how big the idea is, millions of dollars is spent. And then by the time things are built the idea is pretty much obsolete, no one really cares anymore. And I think what's exciting now is that the speed to value is just so much faster than it's ever been before. And I think that, you know, what makes that possible is this concept of, I don't think of a data lake as a thing. I think of a data lake as an ecosystem. And that ecosystem has evolved so much more, probably in the last three years than it has in the past 30 years. And it's exciting times, because now once we have this ecosystem in place, if we have a new idea, we can actually do it in minutes not years. And that's really the exciting part. And I think, you know, data lake versus a data swamp, comes back to just traditional data architecture. And if you architect your data lake right, you're going to have something that's substantial, that's you're going to be able to harness and grow. If you don't do it right. If you just throw data. If you buy Hadoop cluster or a Cloud platform and just throw your data out there and say, "We have a lake now." yeah, you're going to create a mess. And I think taking the time to really understand, you know, the new paradigm of data architecture and modern data engineering, and actually doing it in a very disciplined way. If you think about it, what we're doing is we're building laboratories. And if you have a shabby, poorly built laboratory, the best scientist in the world isn't going to be able to prove his theories. So if you have a well built laboratory and a clean room, then, you know a scientist can get what he needs done very, very, very efficiently. 
And that's the goal, I think, of data management today. >> I'd like to just quickly add that I totally agree with the challenge between on premise and Cloud mode. And I think one of the strong themes of today is going to be the hybrid data management challenge. And I think organizations, some organizations, have rushed to adopt Cloud, thinking it's a really good place to dump the data and someone else has to manage the problem. And then they've ended up with a very expensive death by 1,000 cuts, in some senses. And then others have been very reluctant, and as a result have not gotten access to rapidly moving and disruptive technology. So I think there's a really big challenge to get a basic conversation going around what's the value of adopting Cloud technology, versus what are the risks? And when's the right time to move? For example, should we Cloud burst for workloads? Do we move whole data sets in there? You know, moving half a petabyte of data into a Cloud platform and back is a non-trivial exercise. But moving a terabyte isn't actually that big a deal anymore. So, you know, should we keep stuff behind the firewalls? I'd be interested in seeing this week where 80% of the data supposedly is. And just push out to Cloud tools, machine learning, data science tools, whatever they might be, cognitive analytics, et cetera. And keep the bulk of the data on premise. Or should we just move whole pools into the Cloud? There is no one size fits all. There's no silver bullet. Every organization has its own quirks and its own nuances they need to think through and make a decision themselves. >> Very often, Dez, organizations have zonal architectures. So you'll have a data lake that consists of a NoSQL platform that might be used for, say, mobile applications. A Hadoop platform that might be used for unstructured data refinement, so forth. A streaming platform, so forth and so on.
And then you'll have machine learning models that are built and optimized for those different platforms. So, you know, think of it in terms of, then, your data lake is a set of zones that-- >> It gets even more complex just playing on that theme, when you think about what Cisco started, called Fog Computing. I don't really like that term. But edge analytics, or computing at the edge. We've seen with the internet coming along where we couldn't deliver everything with a central data center. So we started creating this concept of content delivery networks, right? I think the same thing, I know the same thing has happened in data analysis and data processing. Where we've been pulling social media out of the Cloud, per se, and bringing it back to a central source. And doing analytics on it. But when you think of something like, say for example, when the Dreamliner 787 from Boeing came out, this airplane created 1/2 a terabyte of data per flight. Now let's just do some quick, back of the envelope math. There's 87,400 flights a day, just in the domestic airspace in the USA alone, per day. Now 87,400 by 1/2 a terabyte, that's 43 point five petabytes a day. You physically can't copy that from quote unquote in the Cloud, if you'll pardon the pun, back to the data center. So now we've got the challenge, a lot of our Enterprise data's behind a firewall, supposedly 80% of it. But what's out at the edge of the network? Where's the value in that data? So there are zonal challenges. Now what do I do with my Enterprise versus the open data, the mobile data, the machine data? >> Yeah, we've seen some recent data from IDC that says, "About 43% of the data is going to stay at the edge." We think that that's way understated, just given the examples. We think it's closer to 90% that's going to stay at the edge. >> Just on the airplane topic, right? So Airbus wasn't going to be outdone. Boeing put 4,000 sensors or something in their 787 Dreamliner six years ago.
Airbus just announced the A380-1000, with 10,000 sensors in it. Do the same math. Now the FAA in the US said that all aircraft and all carriers have to be, by early next year, I think it's like March or April next year, have to be at the same level of BIOS. Or the same capability of data collection and so forth. It's kind of like a mini GDPR for airlines. So with the A380-1000, with 10,000 sensors, that becomes two point five terabytes per flight. If you do the math, it's 220 petabytes of data just in one day's traffic, domestically in the US. Now, it's just so mind boggling that we're going to have to completely turn our thinking on its head, on what do we do behind the firewall? What do we do in the Cloud versus what we might have to do in the airplane? I mean, think about edge analytics in the airplane processing data, as you said, Jim, streaming analytics in flight. >> Yeah, that's a big topic within Wikibon, so, within the team, me and David Floyer, and my other colleagues. They're talking about the whole notion of edge architecture. Not only will most of the data be persisted at the edge, most of the deep learning models like TensorFlow will be executed at the edge. To some degree, the training of those models will happen in the Cloud. But much of that will be pushed in a federated fashion to the edge, or at least I'm predicting. We're already seeing some industry moves in that direction, in terms of architectures. Google has a federated training project or initiative. >> Chris: Look at TensorFlow Lite. >> Which is really fascinating, for it's geared to IoT, I'm sorry, go ahead. >> Look at TensorFlow Lite. I mean, the announcement of having every Android device having ML capabilities is Google's essential acknowledgment, "We can't do it all." So we need to essentially, sort of like at home, use everyone's smartphone, set top TV box, just to help with the processing.
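As an aside, the back-of-the-envelope arithmetic quoted above can be checked in a few lines. The inputs (87,400 domestic US flights per day, 0.5 and 2.5 terabytes per flight) are the panel's own figures, taken at face value rather than independently verified:

```python
# Back-of-envelope check of the in-flight data volumes quoted in the panel.
FLIGHTS_PER_DAY = 87_400  # the panel's figure for US domestic flights/day

def daily_petabytes(tb_per_flight: float, flights: int = FLIGHTS_PER_DAY) -> float:
    """Terabytes per flight times flights per day, expressed in petabytes."""
    return tb_per_flight * flights / 1_000  # 1,000 TB per PB (decimal units)

print(daily_petabytes(0.5))  # 787 Dreamliner at 0.5 TB/flight: ~43.7 PB/day
print(daily_petabytes(2.5))  # 10,000-sensor airframe at 2.5 TB/flight: ~218.5 PB/day
```

The results (roughly 43.7 and 218.5 petabytes per day) land close to the "43 point five" and "220" petabytes quoted in the discussion, so the order of magnitude holds up.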
>> Now we're talking about this, this sort of leads to this IoT discussion, but I want to underscore the operating model. As you were saying, "You can't just lift and shift to the Cloud." You're not going to, CEOs aren't going to get the billion dollar hit by just doing that. So you've got to change the operating model. And that leads to this discussion of IoT. And an entirely new operating model. >> Well, there are companies like Sisense who have worked with Intel. And they've taken this concept. They've taken the business logic and not just putting it in the chip, but actually putting it in memory, in the chip. So as data's going through the chip it's not just actually being processed, but it's actually being baked in memory. So level one, two, and three cache. Now this is a game changer. Because as Chris was saying, even if we were to get the data back to a central location, the compute load, I saw a real interesting thing from, I think it was Google, the other day, one of the guys was doing a talk. And he spoke about what it meant to add cognitive and voice processing into just the Android platform. And they used some number, like it had doubled the amount of compute they had, just to add voice, for free, to the Android platform. Now even for Google, that's a nontrivial exercise. So as Chris was saying, I think we have to, again, flip it on its head and say, "How much can we put at the edge of the network?" Because think about these phones. I mean, even your fridge and microwave, right? We put a man on the moon with something that, these days, we make for $89 at home, on the Raspberry Pi computer, right? And even that was 1,000 times more powerful. When we start looking at what's going into the chips, we've seen people build new, not even GPUs, but deep learning and stream analytics capable chips. Like Google, for example. That's going to make its way into consumer products.
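The federated pattern described above — models train on the device, and only small weight updates travel to the center, never the raw data — can be sketched with a toy example. This is a plain-Python stand-in, not TensorFlow Federated or Google's actual implementation; the one-parameter linear model and the three "devices" are invented purely for illustration:

```python
# Toy federated averaging: edge devices fit y = w*x on private data,
# and the server only ever sees their weights, never the samples.

def local_update(weights, data, lr=0.1):
    """One gradient step per sample of y = w*x under squared error, on-device."""
    w = weights
    for x, y in data:
        w -= lr * 2 * x * (w * x - y)  # derivative of (w*x - y)^2 w.r.t. w
    return w

def federated_average(updates):
    """Server step: average the edge models; raw data never left the devices."""
    return sum(updates) / len(updates)

# Three "devices", each holding a private sample of the same y = 2x relation.
devices = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
global_w = 0.0
for _ in range(50):  # communication rounds
    global_w = federated_average([local_update(global_w, d) for d in devices])
print(round(global_w, 2))  # converges toward the shared slope of 2.0
```

The design point matches the discussion: what moves over the network is one float per device per round, not the flight recorder, the sensor stream, or the user's keystrokes.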
So that, now, the compute capacity in phones is going to, I think, transmogrify in some ways, because there is some magic in there. To the point where, as Chris was saying, we're going to have the smarts in our phone. And a lot of that workload is going to move closer to us. And only the metadata that we need to move is going to go centrally. >> Well, here's the thing. The edge isn't the technology. The edge is actually the people. When you look at, for example, the MIT language Scratch. This is a kids' programming language. It's drag and drop. You know, kids can assemble really fun animations and make little movies. We're training them to build for IoT. Because if you look at a system like Node-RED, it's an IBM interface that is drag and drop. Your workflow is for IoT. And you can push that to a device. Scratch has a converter for doing those. So the edge is what those thousands and millions of kids who are learning how to code, learning how to think architecturally and algorithmically, what they're going to create is beyond what any of us can possibly imagine. >> I'd like to add one other thing as well. I think there's a topic we've got to start tabling. And that is what I refer to as the gravity of data. So when you think about how planets are formed, right? Particles of dust accrete. They form into planets. Planets develop gravity. And the reason we're not flying into space right now is that there's gravitational force. Even though it's one of the weakest forces, it keeps us on our feet. Oftentimes in organizations, I ask them to start thinking about, "Where is the center of your universe with regard to the gravity of data?" Because if you can follow the center of your universe and the gravity of your data, you can often, as Chris is saying, find where the business logic needs to be. And it could be that you've got to think about a storage problem. You can think about a compute problem. You can think about a streaming analytics problem.
But if you can find where the center of your universe and the center of gravity for your data is, often you can get a really good insight into where you can start focusing, on where the workloads are going to be, where the smarts are going to be. Whether it's small, medium, or large. >> So this brings up the topic of data governance. One of the themes here at Fast Track Your Data is GDPR. What it means. It's one of the reasons, I think, IBM selected Europe generally, Munich specifically. So let's talk about GDPR. We had a really interesting discussion last night. So let's kind of recreate some of that. I'd like somebody in the panel to start with, what is GDPR? And why does it matter, Ronald? >> Yeah, maybe I can start. Maybe a little bit more in general, unified governance. So if I talk to companies and I need to explain to them what's governance, I basically compare it with a crime scene. So in a crime scene, if something happens, they start with securing all the evidence. So they start sealing the environment. And take care that all the evidence is collected. And on the other hand, you see that they need to protect this evidence. There are all kinds of policies. There are all kinds of procedures. There are all kinds of rules that need to be followed. To take care that the whole evidence is secured well. And once you start, basically, investigating. So you have the crime scene investigators. You have the research lab. You have all different kinds of people. They need to have consent before they can use all this evidence. And the whole reason why they're doing this is in order to catch the villain, the crook. To catch him and, on the other hand, once he's there, to convict him. And we do this to have trust in the materials. Or trust in, basically, the analytics. And on the other hand, so the public has trust in everything that's happened with the data. So if you look at a company, where data is basically the evidence, this is the value of your data.
It's similar to the evidence within a crime scene. But most companies don't treat it like this. So if we then look at GDPR, GDPR basically shifts the power and the ownership of the data from the company to the person that created it. Which is often, let's say, the consumer. And there's a lot of paradox in this. Because all the companies say, "We need to have this customer data, because we need to improve the customer experience." So if you make it concrete, let's say it's the 1st of June, so GDPR is active. And it's the first of June 2018. And I go to iTunes, so I use iTunes, and I say, "Okay, Apple, please give me access to my data. I want to see which kind of personal information you have stored for me." On the other hand, I want to have the right to rectify all this data. I want to be able to change it and give them a different level of how they can use my data. So I ask this of iTunes. And then I say to them, "Okay, I basically don't like you anymore. I want to go to Spotify. So please transfer all my personal data to Spotify." So that's possible once it's June 2018. Then I go back to iTunes and say, "Okay, I don't like it anymore. Please reduce my consent. I withdraw my consent. And I want you to remove all my personal data for everything that you use." And I go to Spotify and I give them, let's say, consent for using my data. So this is a shift where you can, as a person, be the owner of the data. And this has a lot of consequences, of course, for organizations, how to manage this. So it's quite simple for the consumer. They get the power, it's maturing the whole legal system. But it's a big consequence, of course, for organizations. >> This is going to be a nightmare for marketers. But fill in some of the gaps there. >> Let's go back. So GDPR, the General Data Protection Regulation, was passed by the EU in 2016, in May of 2016. It is, as Ronald was saying, four basic things. The right to privacy. The right to be forgotten.
Privacy built into systems by default. And the right to data transfer. >> Joe: It takes effect next year. >> It is already in effect. GDPR took effect in May of 2016. The enforcement penalties take effect the 25th of May, 2018. Now here's where, there's two things on the penalty side that are important for everyone to know. Number one, GDPR is extraterritorial. Which means that an EU citizen, anywhere on the planet, has GDPR go with them. So say you're a pizza shop in Nebraska. And an EU citizen walks in, orders a pizza. Gives you her credit card and stuff like that. If you, for some reason, store that data, GDPR now applies to you, Mr. Pizza Shop, whether or not you do business in the EU. Because an EU citizen's data is with you. Two, the penalties are much stiffer than they ever have been. In the old days companies could simply write off penalties as saying, "That's the cost of doing business." With GDPR the penalties are up to 4% of your annual revenue or 20 million Euros, whichever is greater. And there may be criminal sanctions, charges, against key company executives. So there's a lot of questions about how this is going to be implemented. But one of the first impacts you'll see from a marketing perspective is all the advertising we do, targeting people by their age, by their personally identifiable information, by their demographics. Between now and May 25th, 2018, a good chunk of that may have to go away, because there's no way for you to say, "Well, this person's an EU citizen, this person's not." People give false information all the time online. So how do you differentiate it? Every company, regardless of whether they're in the EU or not, will have to adapt to it or deal with the penalties. >> So Lillian, as a consumer, this is designed to protect you. But you had a very negative perception of this regulation. >> I've looked over the GDPR and to me it actually looks like a socialist agenda.
It looks like (panel laughs) no, it looks like a full assault on free enterprise and capitalism. And on its face, from a legal perspective, it's completely and wholly unenforceable. Because they're assigning jurisdictional rights to the citizen. But what are they going to do? They're going to go to Nebraska and they're going to call in the guy from the pizza shop? And call him into what court? The EU court? It's unenforceable from a legal perspective. And if you write a law that's unenforceable, you know, it's got to be enforceable in every element. It can't be just, "Oh, we're only going to enforce it for Facebook and for Google. But it's not enforceable for," it needs to be written so that it's a complete and actionable law. And it's not written in that way. And from a technological perspective it's not implementable. I think you said something like 652 EU regulators or political people voted for this and 10 voted against it. But what do they know about actually implementing it? Is it possible? There's all sorts of regulations out there that aren't possible to implement. I come from an environmental engineering background. And it's absolutely ridiculous, because these agencies will pass laws that, actually, it's not possible to implement those in practice. The cost would be too great. And it's not even needed. So I don't know, I just saw this and I thought, "You know, if the EU wants to," what they're essentially trying to do is regulate what the rest of the world does on the internet. And if they want to build their own internet like China has, and police it the way that they want to. But Ronald here made an analogy between data, and free enterprise, and a crime scene. Now to me, that's absolutely ridiculous. What does data and someone signing up for an email list have to do with a crime scene? And if the EU wants to make it that way, they can police their own internet. But they can't go across the world.
They can't go to Singapore and tell Singapore, or go to the pizza shop in Nebraska, and tell them how to run their business. >> You know, EU overreach in the post Brexit era, what you're saying has a lot of validity. How far can the tentacles of the EU reach into other sovereign nations? >> What court are they going to call them into? >> Yeah. >> I'd like to weigh in on this. There are lots of unknowns, right? So I'd like us to focus on the things we do know. We've already dealt with similar situations before. In Australia, we introduced a goods and services tax. Completely foreign concept. Everything you bought had 10% on it. No one knew how to deal with this. It was a completely new practice in accounting. There's a whole bunch of new software that had to be written. MYOB had to have new capability, but we coped. No one has actually gone to jail yet, and it's decades later, for not complying with GST. So what it was, was a framework on how to shift from non sales tax related revenue collection to sales tax related revenue collection. I agree that there are some egregious things built into this. I don't disagree with that at all. But I think if I put my slightly broader view of the world hat on, we have well and truly gone past the point, in my mind, where data was respected, data was treated in a sensible way. I mean, I get emails from companies I've never done business with. And when I follow it up, it's because I did business with a credit card company, that gave it to a service provider, that thought that when I bought a holiday to come to Europe, I might want travel insurance. Now some might say there's value in that. And others say there's not, there's the debate. But let's just focus on what we're talking about. We're talking about a framework for governance of the treatment of data.
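For concreteness, the enforcement arithmetic quoted earlier in the discussion — fines of up to 4% of annual revenue or 20 million euros, whichever is greater — is at least simple to state precisely. This sketches only the statutory ceiling; actual fines are set case by case by the regulators:

```python
# The GDPR maximum administrative fine, as described in the panel:
# the greater of 4% of annual worldwide revenue or 20 million euros.

def max_gdpr_fine_eur(annual_revenue_eur: float) -> float:
    """Upper bound on the fine for the most serious infringements."""
    return max(0.04 * annual_revenue_eur, 20_000_000.0)

print(max_gdpr_fine_eur(100_000_000))     # small firm: the 20M floor dominates
print(max_gdpr_fine_eur(10_000_000_000))  # large firm: 4% of revenue, ~400M
```

The "whichever is greater" clause is what removes the old write-it-off-as-a-cost-of-doing-business escape hatch: the floor never drops below 20 million euros, no matter how small the company.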
If we remove all the emotive component, what we are talking about is a series of guidelines, backed by laws, that say, "We would like you to do this," in an ideal world. But I don't think anyone's going to go to jail on day one. They may go to jail on day 180, if they continue to do nothing about it. So they're asking you to sort of sit up and pay attention. Do something about it. There's a whole bunch of relief around how you approach it. The big thing for me is, there's no get out of jail card, right? There is no get out of jail card for not complying. But there's plenty of support. I mean, we're going to have ambulance chasers everywhere. We're going to have class actions. We're going to have individual suits. The greatest thing to do right now is get into GDPR law. Because you seem to think data scientists are unicorns? >> What kind of life is that, if there's ambulance chasers everywhere? You want to live like that? >> Well, I think we've seen ad blocking. I use ad blocking as an example, right? A lot of organizations with advertising broke the internet by just throwing too much content on pages, to the point where they're just unusable. And so we had this response with ad blocking. I think in many ways GDPR is a regional response to a situation where I don't think it's the exact right answer. But it's the next evolutionary step. We'll see things evolve over time. >> It's funny you mention it, because in the United States one of the things that has happened is that, with the change in political administrations, the regulations on what companies can do with your data have actually been relaxed, to the point where, for example, your internet service provider can resell your browsing history, with or without your consent. Or your consent's probably buried in there on page 47. And so GDPR is kind of a response to saying, "You know what?
"You guys over there across the Atlantic "are kind of doing some fairly "irresponsible things with what you allow companies to do." Now, to Lillian's point, no one's probably going to go after the pizza shop in Nebraska because they don't do business in the EU. They don't have an EU presence. And it's unlikely that an EU regulator's going to get on a plane from Brussels and fly to Topeka and say, or Omaha, sorry, "Come on Joe, let's get the pizza shop in order here." But for companies, particularly Cloud companies, that have offices and operations within the EU, they have to sit up and pay attention. So if you have any kind of EU operations, or any kind of fiscal presence in the EU, you need to get on board. >> But to Lillian's point it becomes a boondoggle for lawyers in the EU who want to go after deep pocketed companies like Facebook and Google. >> What's the value in that? It seems like regulators are just trying to create work for themselves. >> What about the things that say advertisers can do, not so much with the data that they have? With the data that they don't have. In other words, they have people called data scientists who build models that can do inferences on sparse data. And do amazing things in terms of personalization. What do you do about all those gray areas? Where you got machine learning models and so forth? >> But it applies-- >> It applies to personally identifiable information. But if you have a talented enough data scientist, you don't need the PII or even the inferred characteristics. If a certain type of behavior happens on your website, for example. And this path of 17 pages almost always leads to a conversion, it doesn't matter who you are or where you're coming from. If you're a good enough data scientist, you can build a model that will track that. >> Like you know, target, infer some young woman was pregnant. And they inferred correctly even though that was never divulged. 
I mean, there's all those gray areas that, how can you stop that slippery slope? >> Well, I'm going to weigh in really quickly. A really interesting experiment for people to do, when people get very emotional about it, I say to them, "Go to Google.com, view source, put it in seven point Courier font in Word, and count how many pages it is." I bet you can't guess how many pages. It's 52 pages of seven point Courier font HTML, to render one logo, and a search field, and a click button. Now why do we need 52 pages of HTML source code and JavaScript just to take a search query? Think about what's being done in that. It's effectively a mini operating system, to figure out who you are, and what you're doing, and where you've been. Now is that a good or bad thing? I don't know, I'm not going to make a judgment call. But what I'm saying is we need to stop and take a deep breath and say, "Does anybody need a 52 page home page to take a search query?" Because that's just the tip of the iceberg. >> To that point, I like the results that Google gives me. That's why I use Google and not Bing. Because I get better search results. So, yeah, I don't mind if you mine my personal data and give me Facebook ads, those are the only ads, I saw in your article that GDPR is going to take out targeted advertising. The only ads in the entire world that I like are Facebook ads. Because I actually see products I'm interested in. And I'm happy to learn about that. I think, "Oh, I want to research that. I want to see this new line of products and what are their competitors?" And I like the targeted advertising. I like the targeted search results, because it's giving me more of the information that I'm actually interested in.
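The view-source experiment described a moment ago can be roughly automated. The page geometry below (80 monospace characters per line, 65 lines per printed page) is an assumption standing in for seven point Courier in Word, not a measurement, and the sample input is synthetic rather than a real homepage:

```python
# Estimate how many printed pages of monospace text a chunk of HTML source fills.
CHARS_PER_LINE = 80   # assumed width of one printed monospace line
LINES_PER_PAGE = 65   # assumed number of lines on one printed page

def printed_pages(source: str) -> int:
    lines = 0
    for raw in source.splitlines() or [""]:
        # Long unwrapped lines (minified HTML/JS) wrap onto many printed lines.
        lines += max(1, -(-len(raw) // CHARS_PER_LINE))  # ceiling division
    return -(-lines // LINES_PER_PAGE)

# ~270 KB of minified markup, roughly what a modern homepage can ship:
sample = "x" * 270_000
print(printed_pages(sample))  # one long line -> 3,375 wrapped lines -> 52 pages
```

Under those assumptions, about 270 KB of source comes out at the 52 pages quoted — which is the real point of the experiment: the visible search box is a tiny fraction of what the page actually delivers.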
So it's not that it's restricting everything. It's giving consent. And I think it's similar, and the same type of response, to what happened when we had the Mad Cow Disease here in Europe, where you had the whole food chain that needed to be tracked. And everybody said, "No, it's not required." But now it's implemented. Everybody in Europe does it. So it's the same, what's probably going to happen over here as well. >> So what does GDPR mean for data scientists? >> I think GDPR is, I think it is needed. I think one of the things that may be slowing data science down is fear. People are afraid to share their data, because they don't know what's going to be done with it. If there are some guidelines around it that should be enforced, then I think, you know, I think it's been said, but as long as a company can prove that it's doing due diligence to protect your data, I think no one is going to go to jail. I think when there's, you know, we reference a crime scene, if there's a heinous crime being committed, all right, then it's going to become obvious. And then you do go directly to jail. But I think having guidelines and even laws around privacy and protection of data is not necessarily a bad thing. You can do a lot of really meaningful data science without understanding that it's Joe Caserta. All of the demographics about me, all of the characteristics about me as a human being, I think are still on the table. All that they're saying is that you can't go after Joe himself, directly. And I think that's okay. You know, there's still a lot of things. We could still cure diseases without knowing that I'm Joe Caserta, right? As long as you know everything else about me. And I think that's really at the core, that's what we're trying to do. We're trying to protect the individual and the individual's data about themselves.
But I think as far as how it affects data science, you know, a lot of our clients are afraid to implement things because they don't exactly understand what the guideline is. And they don't want to go to jail. So they wind up doing nothing. So now that we have something in writing that, at least, is something that we can work towards, I think is a good thing. >> In many ways, organizations are suffering from the deer in the headlights problem. They don't understand it. And so they just end up frozen in the headlights. But I just want to go back one step, if I could. We could get really excited about what it is and is not. But for me, the most critical thing to remember is, data breaches are happening. There are over 1,400 data breaches, on average, per day. And most of them are not trivial. And we saw 1/2 a billion from Yahoo. And then one point one billion, and then one point five billion. I mean, think about what that actually means. There were 47,500 MongoDBs breached in an 18 hour window, after an automated upgrade. And they were airlines, they were banks, they were police stations. They were hospitals. So when I think about frameworks like GDPR, I'm less worried about whether I'm going to see ads and be sold stuff. I'm more worried about, and I'll give you one example, my 12 year old son has an account at a platform called Edmodo. Now I'm not going to pick on that brand for any reason, but it's a current issue. Something like, I think it was like 19 million children in the world had their username, password, email address, home address, and all this social interaction on this Facebook for kids platform called Edmodo, breached in one night. Now I got my hands on a copy. And everything about my son is there. Now I have a major issue with that. Because I can't do anything to undo that, nothing. The fact that I was able to get a copy, within hours, on a dark website, for free.
The fact that his first name, last name, email, mobile phone number, all these personal messages from friends. Nobody has the right to allow that to breach on my son. Or your children, or our children. For me, GDPR is a framework for us to try and behave better about really big issues. Whether it's a socialist issue. Whether someone's got an issue with advertising. I'm actually not interested in that at all. What I'm interested in is, companies need to behave much better about the treatment of data when it's the type of data that's being breached. And I get really emotional when it's my son, or someone else's child. Because I don't care if my bank account gets hacked. Because they hedge that. They underwrite and insure themselves, and the money arrives back in my bank. But when it's my wife who donated blood and a blood donor website got breached, and her details got lost. Even things like sexual preferences, that they ask questions on, are out there. My 12 year old son is out there. Nobody has the right to allow that to happen. For me, GDPR is the framework for us to focus on that. >> Dave: Lillian, is there a comment you have? >> Yeah, I think that security concerns are 100% and definitely a serious issue. Security needs to be addressed. And I think a lot of the stuff that's happening is due to, I think we need better security personnel. I think we need better people working in the security area, where they're actually looking and securing. Because I don't think you can regulate. I was just, I wanted to take the microphone back when you were talking about taking someone to jail. Okay, I have a background in law. And if you look at this, you guys are calling it a framework. But it's not a framework. What they're trying to do is take 4% of your business revenues per infraction. They want to say, "If a person signs up on your email list and you didn't, like, necessarily give whatever disclaimer that the EU said you need to give.
"Per infraction, we're going to take "4% of your business revenue." That's a law, that they're trying to put into place. And you guys are talking about taking people to jail. What jail are you? EU is not a country. What jurisdiction do they have? Like, you're going to take pizza man Joe and put him in the EU jail? Is there an EU jail? Are you going to take them to a UN jail? I mean, it's just on its' face it doesn't hold up to legal tests. I don't understand how they could enforce this. >> I'd like to just answer the question on-- >> Security is a serious issue. I would be extremely upset if I were you. >> I personally know, people who work for companies who've had data breaches. And I respect them all. They're really smart people. They've got 25 plus years in security. And they are shocked that they've allowed a breach to take place. What they've invariably all agreed on is that a whole range of drivers have caused them to get to a bad practice. So then, for example, the donate blood website. The young person who was assist admin with all the right skills and all the right experience just made a basic mistake. They took a db dump of a mysql database before they upgraded their Wordpress website for the business. And they happened to leave it in a folder that was indexable by Google. And so somebody wrote a radio expression to search in Google to find sql backups. Now this person, I personally respect them. I think they're an amazing practitioner. They just made a mistake. So what does that bring us back to? It brings us back to the point that we need a safety net or a framework or whatever you want to call it. Where organizations have checks and balances no matter what they do. Whether it's an upgrade, a backup, a modification, you know. And they all think they do, but invariably we've seen from the hundreds of thousands of breaches, they don't. Now on the point of law, we could debate that all day. I mean the EU does have a remit. 
If I was caught speeding in Germany, as an Australian, I would be thrown into a German jail. If I got caught as an organization in France, breaching GDPR, I would be held accountable to the law in that region, by the organization pursuing me. So I think it's a bit of a misnomer saying I can't go to an EU jail. I don't disagree with you, totally, but I think it's regional. If I get a speeding fine and break the law of driving fast in the EU, it's in the country, in the region, that I'm caught. And I think GDPR's going to be enforced in that same approach. >> All right folks, unfortunately the 60 minutes flew right by. And it does when you have great guests like yourselves. So thank you very much for joining this panel today. And we have an action packed day here. So we're going to cut over. theCUBE is going to have its interview format starting in about a half hour. And then we cut over to the main tent. Who's on the main tent? Dez, you're doing a main stage presentation today. Data Science is a Team Sport. Hillary Mason has a breakout session. We also have a breakout session on GDPR and what it means for you. Are you ready for GDPR? Check out ibmgo.com. It's all free content, it's all open. You do have to sign in to see the Hillary Mason and the GDPR sessions. And we'll be back in about a half hour with theCUBE. We'll be running replays all day on SiliconAngle.tv and also ibmgo.com. So thanks for watching everybody. Keep it right there, we'll be back in about a half hour with theCUBE interviews. We're live from Munich, Germany, at Fast Track Your Data. This is Dave Vellante with Jim Kobielus, we'll see you shortly. (electronic music)
Wikibon Research Meeting
>> Dave: The cloud. There you go. I presume that worked. >> David: Hi there. >> Dave: Hi David. We had agreed, Peter and I had talked and we said let's just pick three topics, allocate enough time. Maybe a half hour each, and then maybe a little bit longer if we have the time. Then try and structure it so we can gather some opinions on what it all means. Ultimately the goal is to have an outcome with some research that hits the network. The three topics today: Jim Kobielus is going to present on agile and data science, David Floyer on NVMe over fabric, of course keying off of the Micron news announcement. I think Nick is, is that Nick who just joined? He can contribute to that as well. Then George Gilbert has this concept of digital twin. We'll start with Jim. I guess what I'd suggest is maybe present this in the context of, present a premise or some kind of thesis that you have and maybe the key issues that you see and then kind of guide the conversation and we'll all chime in. >> Jim: Sure, sure. >> Dave: Take it away, Jim.
It's not just unicorns that are producing real work on their own, but more to the point, it's teams of specialists that come together, increasingly in co-located environments or in co-located settings, to produce (banging) weekly check points and so forth. That's the basic premise that I've laid out for the piece. The themes. First of all, the themes, let me break it out. In terms of how I'm approaching agile in this context, I'm looking at the basic principles of agile. It's really practices that are minimal, modular, incremental, iterative, adaptive, and co-locational. I've laid out how all that maps into how data science is done in the real world right now in terms of tight teams working in an iterative fashion. A couple of issues that I see as regards the adoption and sort of the ramifications of agile in a data science context. One of which is co-location. What we have increasingly are data science teams that are virtual and distributed, where a lot of the functions are handled by statistical modelers and data engineers and subject matter experts and visualization specialists that are working remotely from each other and are using collaborative tools like the tools from the company that I just left. How can the co-location premise of agile stand up in a world where more of the development, deep learning and so forth, is being done on a distributed basis, and needs to be done by teams of specialists that may be in different cities or different time zones, operating around the clock, to produce brilliant results? Another one of which is that agile seems to be predicated on the notion that you improvise the process as you go, trial and error, which seems to fly in the face of documentation, or tidy documentation. Without tidy documentation about how you actually arrived at your results, those results can't easily be reproduced by independent researchers, independent data scientists.
If you don't have well-defined processes for achieving results in a certain data science initiative, it can't be reproduced, which means it's not terribly scientific. By definition it's not science if it can't be reproduced by independent teams. To the extent that it's all loosey-goosey and improvised and undocumented, it's not reproducible. If it's not reproducible, to what extent should you put credence in the results of a given data science initiative if it's not been documented? Agile seems to fly in the face of reproducibility of data science results. Those are sort of my core themes or core issues that I'm pondering, or will be. >> Dave: Jim, just a couple questions. You had mentioned, you rattled off a bunch of parameters. You went really fast. One of them was co-location. Can you just review those again? What were they? >> Sure. They are minimal. The minimum viable product is the basis for agile, meaning a team puts together not a complete monolithic stack, but an initial deliverable that can stand alone and provide some value to your stakeholders or users, and then you iteratively build upon that minimum viable product going forward to roll out more complex applications as needed. A minimum viable product is sort of at the heart of agile the way it's often looked at. The big question is, what is the minimum viable product in a data science initiative? One way you might approach that is saying, what you're doing, say you're building a predictive model. You're predicting a single scenario, for example such as whether one specific class of customers might accept one specific class of offers under certain constraining circumstances. That's an example of a minimum outcome to be achieved from a data science deliverable. A minimum product that addresses that requirement might be pulling the data from a single source.
We'll need a very simplified feature set of predictive variables, like maybe two or three at the most, to predict customer behavior, and use one very well understood algorithm like linear regression and do it. With just a few lines of programming code in Python or R or whatever, and build some very crisp, simple rules. That's the notion in a data science context of a minimum viable product. That's the foundation of agile. Then there's the notion of modular, which I've implied with minimum viable product. The initial product is the foundation upon which you build modular add-ons. The add-ons might be building out more complex algorithms based on more data sets, using more predictive variables, throwing other algorithms into the initiative like logistic regression or decision trees to do more fine-grained customer segmentation. What I'm giving you is a sense for the modular add-ons and builds onto the initial product that generally we can do incrementally in the course of a data science initiative. Then there's this, and I've already used the word incremental, where each new module that gets built up, or each new feature or tweak on the core model, gets added onto the initial deliverable in a way that's incremental. Ideally it should all compose ultimately the sum of the useful set of capabilities that deliver a wider range of value. For example, in a data science initiative where it's customer data, you're doing predictive analysis to identify whether customers are likely to accept a given offer. One way to add on incrementally to that core functionality is to embed that capability, for example, in a target marketing application, like an outbound marketing application that uses those predictive variables to drive responses in line to, say, an e-commerce front end. Then there's the notion of iterative, and iterative really comes down to check points.
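Jim's "few lines of Python" minimum viable model might look like the sketch below: one made-up predictor, a closed-form ordinary-least-squares fit, and a crisp propensity rule. The customer data and the feature are illustrative assumptions, not anything from the meeting:

```python
# Minimum viable model: predict offer-acceptance propensity from one
# well-understood predictor using ordinary least squares (closed form).
def fit_simple_ols(xs, ys):
    """Return (intercept, slope) minimizing squared error."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return mean_y - slope * mean_x, slope

# Synthetic history: past purchase count vs. accepted (1) / declined (0).
past_purchases = [0, 1, 2, 3, 4, 5, 6, 7]
accepted       = [0, 0, 0, 1, 0, 1, 1, 1]
intercept, slope = fit_simple_ols(past_purchases, accepted)

def propensity(x):
    """Predicted acceptance propensity, clipped to [0, 1]."""
    return min(1.0, max(0.0, intercept + slope * x))

print(round(propensity(6), 2))  # → 0.92 on this toy data
```

From here, the modular add-ons Jim lists (more data sources, logistic regression, decision trees) would build on this stand-alone deliverable rather than replace it.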
Regular reviews of the standards and check points where the team comes together to review the work in a context of data science. Data science by its very nature is exploratory. It's visualization, it's model building and testing and training. It's iterative scoring and testing and refinement of the underlying model. Maybe on a daily basis, maybe on a weekly basis, maybe ad hoc, but iteration goes on all the time in data science initiatives. Adaptive. Adaptive is all about responding to circumstances. Trial and error. What works, what doesn't work at the level of the technical approach. It's also in terms of, do we have the right people on this team to deliver on the end results? A data science team might determine midway through that, well, we're trying to build a marketing application, but we don't have the right marketing expertise in our team. Maybe we need to tap Joe over there who seems to know a little bit about this particular application we're trying to build and this particular scenario, these particular customers we're trying to get a good profile of how to reach. You might adapt by adding, like I said, new data sources, adding on new algorithms, totally changing your approach for feature engineering as you go along. In addition to supervised learning from ground truth, you might add some unsupervised learning algorithms to be able to find patterns in, say, unstructured data sets as you bring those into the picture. What I'm getting at is there's a lot, 10 zillion variables, that a data science team has to add in to its overall research plan going forward, based on what you're trying to derive from data science, which is insights. They're actionable and ideally repeatable. That you can embed them in applications. It's just a matter of figuring out what actually helps you, what set of variables and team members and data, what helps you to achieve the goals of your project. Finally, co-locational.
It's all about how the core team needs to be, usually, in the same physical location, according to the book, how people normally think of agile. The company that I just left is basically doing a massive social engineering exercise, ongoing, about making their marketing and R&D teams a little more agile by co-locating them in different cities like San Francisco and Austin and so forth. The whole notion that people will collaborate far better if they're not virtual. That's highly controversial, but nonetheless, that's the foundation of agile as it's normally considered. One of my questions, really an open question, is what hard core, you might have a sprawling team that's doing data science, doing various aspects, but what solid core of that team needs to be physically co-located all or most of the time? Is it the statistical modeler and a data engineer alone? The one who stands up the cluster and the person who actually does the building and testing of the model? Do the visualization specialists need to be co-located as well? Are other specialties like subject matter experts, who have the knowledge in marketing or whatever it is, do they also need to be in the physical location day in, day out, week in and week out to achieve results on these projects? Anyway, so there you go. That's how I sort of framed the argument of (mumbling). >> Dave: Okay. I got minimal, modular, incremental, iterative, adaptive, co-locational. What was six again? I'm sorry. >> Jim: Co-locational. >> Dave: What was the one before that? >> Jim: I'm sorry. >> Dave: Adaptive. >> Minimal, modular, incremental, iterative, adaptive, and co-locational. >> Dave: Okay, there were only six. Sorry, I thought it was seven. Good. A couple of questions, then we can get the discussion going here. Of course, you're talking specifically in the context of data science, but some of the questions that I've seen around agile generally are, it's not for everybody, when and where should it be used?
Waterfalls still make sense sometimes. Some of the criticisms I've read, heard, seen, and sometimes experienced with agile are sort of quality issues, I'll call it lack of accountability. I don't know if that's the right terminology. We're going for speed, so as long as we're fast, we checked that box; quality can suffer. Thoughts on that. Where does it fit, and again, understanding specifically you're talking about data science. Does it always fit in data science, or because it's so new and hip and cool, or like traditional programming environments, is it horses for courses? >> David: Can I add to that, Dave? It's a great, fundamental question. It seems to me there are two really important aspects of artificial intelligence. The first is the research part of it, which is developing the algorithms, developing the potential data sources that might or might not matter. Then the second is taking that and putting it into production. That is that somewhere along the line, it's saving money, time, etc., and it's integrated with the rest of the organization. The first piece, it seems to me, is like most research projects: the ROI is difficult to predict in any sort of way. The second piece, of actually implementing it, is where you're going to make money. Is agile, if you can integrate that with your systems of record, for example, and get automation of many of the aspects that you've researched, is agile the right way of doing it at that stage? How would you bridge the gap between the initial development and then the final instantiation? >> That's an important concern, David. Dev Ops, that's a closely related issue, but it's not exactly the same scope. As data science and machine learning, let's just net it out. As machine learning and deep learning get embedded in applications, in operations I should say, like in your e-commerce site or whatever it might be, then data science itself becomes an operational function.
The people who build those models continue to iterate them in line with the operational applications. Really, when it comes down to an operational function, everything that these people do needs to be documented and version controlled and so forth, these people meaning data science professionals. You need documentation. You need accountability. The development of these assets, machine learning and so forth, needs to be, is compliance. When you look at compliance, algorithmic accountability comes into it, where lawyers will, like e-discovery, they'll subpoena, theoretically, all your algorithms and data and say explain how you arrived at this particular recommendation that you made to grant somebody or not grant somebody a loan or whatever it might be. The transparency of the entire development process is absolutely essential to the data science process downstream, when it's a production application. In many ways, agile, by saying speed's the most important thing, screw documentation, you can sort of figure that out and that's not as important, that whole ethos, it goes by the wayside. Agile cannot, should not skimp on documentation. Documentation is even more important as data science becomes an operational function. That's one of my concerns. >> David: It seems to me that the whole rapid idea development is difficult to get a combination of that and operational, boring testing, regression testing, etc. The two worlds are very different. The interface between the two is difficult. >> Everybody does their e-commerce tweaks through AB testing of different layouts and so forth. AB testing is fundamentally data science, and so it's an ongoing thing. (static) ... On AB testing in terms of tweaking. All these channels and all the service flows, systems of engagement and so forth. All this stuff has to be documented, so agile sort of, in many ways, flies in the face of that or potentially compromises the visibility of (garbled) access. >> David: Right.
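The AB-testing thread above is, under the hood, a hypothesis test. A minimal sketch using a two-proportion z-test, one standard way (my choice, not something named in the discussion) to decide whether layout B really converts better than layout A; the visitor counts are invented:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative traffic split: layout A converts 200 of 5,000 visitors,
# layout B converts 260 of 5,000.
z = two_proportion_z(200, 5000, 260, 5000)
print(round(z, 2))  # → 2.86; |z| > 1.96 rejects H0 at the 5% level
```

Logging the inputs and the decision threshold for every such test is exactly the kind of documentation Jim argues agile teams cannot skimp on.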
If you're thinking about IOT, for example, you've got very expensive machines out there in the field for which you're trying to optimize throughput and trying to minimize machines breaking, etc. At the Micron event, it was interesting that in Micron's use of different methodologies of putting systems together, they were focusing on the data analysis, etc., to drive greater efficiency through their manufacturing process. Having said that, they need really, really tested algorithms, etc., to make sure there isn't a major (mumbling) or loss of huge amounts of potential revenue if something goes wrong. I'm just interested in how you would create the final product that has to go into production in a very high value chain like an IOT. >> When you're running, say, AI from learning algorithms all the way down to the end points, it gets even trickier than simply documenting the data and feature sets and the algorithms and so forth that were used to build up these models. It also comes down to having to document the entire life cycle, in terms of how these algorithms were trained to make the predictions of whatever it is you're trying to do at the edge with a particular algorithm. The whole notion of how are all of these edge-point applications being trained, with what data, at what interval? Are they being retrained on a daily basis, hourly basis, moment by moment basis? All of those are critical concerns in knowing whether they're making the best automated decisions or actions possible in all scenarios. That's like a black box in terms of the sheer complexity of what needs to be logged to figure out whether the application is doing its job as best as possible. You need a massive log, a massive event log from end to end of the IOT, to do that right and to provide that ongoing visibility into the performance of these AI driven edge devices. I don't know anybody who's providing the tool to do it.
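The end-to-end event log Jim is asking for can at least be sketched. A minimal, hypothetical training-event logger follows; the field names, model id, and JSON-lines format are my assumptions, not any existing tool's schema:

```python
import json
import time

def log_training_event(log, model_id, data_version, interval, metrics):
    """Record one training/retraining event for an edge model.
    `log` is any list-like sink; in production it might be a file or a stream."""
    event = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,          # which edge model was (re)trained
        "data_version": data_version,  # exactly which data trained it
        "retrain_interval": interval,  # daily / hourly / moment-by-moment
        "metrics": metrics,            # how well it fit after training
    }
    log.append(json.dumps(event, sort_keys=True))
    return event

audit_log = []
evt = log_training_event(audit_log, "edge-detector-07",
                         "sensor-feed/2017-06-18", "hourly", {"auc": 0.91})
print(evt["model_id"], evt["retrain_interval"])
```

One append-only record per training run, with the data version and retrain interval, is the minimum needed to answer "trained with what data, at what interval?" after the fact.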
>> David: If I think about how it's done at the moment, it's obviously far too slow at the moment. At the same time, you've got to have some testing and things like that. It seems to me that you've got a research model on one side, and then you need to create a working model from that, which is your production model. That's the one that goes through the testing and everything of that sort. It seems to me that the interface would be that transition from the research model to the working model, that would be critical here, and the working model is obviously a subset, and it's going to be optimized for performance, etc., in real time, as opposed to the development model, which can do a lot more and take half a week to run if necessary. It seems to me that you've got a different set of business pressures on the working model and a different set of skills as well. I think having one team here doesn't sound right to me. You've got to have a Dev Ops team who are going to take the working model from the developers and then make sure that it's sound and safe. Especially in a high value IOT area, the level of iteration is not going to be nearly as high as in a lower cost marketing type application. Does that sound sensible? >> That sounds sensible. In fact, in Dev Ops, the Dev Ops team would definitely be the ones that handle the continuous training and retraining of the working models on an ongoing basis. That's a core observation. >> David: Is that the right way of doing it, Jim? It seems to me that the research people would be continuing to adapt from data from a lot of different places, whereas the operational model would be at a specific location with a specific IOT, and they wouldn't necessarily have all the data there to do that. I'm not quite sure whether - >> Dave: Hey guys? Hey guys, hey guys? Can I jump in here?
Interesting discussion, but highly nuanced, and I'm struggling to figure out how this turns into a piece, or sort of debating certain specifics that are very kind of weedy. I wonder if we could just reset for a second and come back to what I was trying to get to before, which is really the business impact. Should this be applied broadly? Should this be applied specifically? What does it mean if I'm a practitioner? What should I take away from, Jim, your premise and your sort of six parameters? Should I be implementing this? Why? Where? What's the value to my organization - the value I guess is obvious, but does it fit everywhere? Should it be across the board? Can you address that? >> Neil: Can I jump in here for a second? >> Dave: Please, that would be great. Is that Neil? >> Neil: Neil. I've never been a data scientist, but I was an actuary a long time ago. When the chief actuary came to me and said we need to develop a liability insurance coverage for floating oil rigs in the North Sea, I'm serious, it took a couple of months of research and modeling and so forth. If I had to go to all of those meetings and stand-ups in an agile development environment, I probably would have gone postal on the place. I think that there's some confusion about what data science is. It's not a vector. It's not like a Dev Ops situation where you start with something and you go (mumbling). When a data scientist, or whatever you want to call them, comes up with a model, that model has to be constantly revisited until it's put out of business. It's refined, it's evaluated. It doesn't have an end point like that. The other thing is that a data scientist is typically going to be running multiple projects simultaneously, so how in the world are you going to agilize that?
I think if you look at the data science group, there are probably, I think Nick said this, there are probably groups in there that are doing pure Dev Ops, software engineering and so forth, and you can apply agile techniques to them. The whole data science thing is too squishy for that, in my opinion. >> Jim: Squishy? What do you mean by squishy, Neil? >> Neil: It's not one thing. I think if you try to represent data science as, here's a project, we gather data, we work on a model, we test it, and then we put it into production, it doesn't end there. It never ends. It's constantly being revised. >> Yeah, of course. It's akin to application maintenance. The application, meaning the model, the algorithm, to be fit for purpose has to continually be evaluated, possibly tweaked, always retrained to determine its predictive fit for whatever task it's been assigned. You don't build it once and assume its strong predictive fit forever and ever. You can never assume that. >> Neil: James and I called that adaptive control mechanisms. You put a model out there and you monitor the return you're getting. You talk about AB testing, that's one method of doing it. I think that a data scientist is somebody who really is keyed into the machine learning and all that jazz. I just don't see them as being project oriented. I'll tell you one other thing, I have a son who's a software engineer and he said something to me the other day. He said, "Agile? Agile's dead." I haven't had a chance to find out what he meant by that. I'll get back to you. >> Oh, okay. If you look at - Go ahead. >> Dave: I'm sorry, Neil. Just to clarify, he said agile's dead? Was that what he said? >> Neil: I didn't say it, my son said it. >> Dave: Yeah, yeah, yeah right. >> Neil: No idea what he was talking about. >> Dave: Go ahead, Jim. Sorry. >> If you look at waterfall development in general, for larger projects it's absolutely essential to get requirements nailed down and the functional specifications and all that.
Where you have some very extensive projects and many moving parts, obviously you need a master plan that it all fits into, and waterfall, those checkpoints and so forth, those controls that are built into that methodology, are critically important. Within the context of a broad project, some of the assets being built up might be machine learning models and analytics models and so forth, so in the context of a broader waterfall-oriented software development initiative, you might need to have multiple data science projects spun off within the sub-projects. Each of those by itself might be conducted sort of like an exploration task, where you have a team doing data visualization, exploration, in more of an open-ended fashion, while they're trying to figure out the right set of predictors and the right set of data to be able to build out the right model to deliver the right result. What I'm getting at is that agile approaches might be embedded into broader waterfall-oriented development initiatives, agile data science approaches. Fundamentally, data science began and still is predominantly very smart people, PhDs in statistics and math, doing open-ended exploration of complex data, looking for non-obvious patterns that you wouldn't be able to find otherwise. Sort of a fishing expedition, a high priced fishing expedition. Kind of a mode of operation as how data science often is conducted in the real world. Looking for that eureka moment when the correlations just jump out at you. There's a lot of that that goes on. A lot of that is very important data science; it's more akin to pure science. What I'm getting at is there might be some role for more structured waterfall development approaches in projects that have a core data science capability to them. Those are my thoughts. >> Dave: Okay, we probably should move on to the next topic here, but just in closing, can we get people to chime in on the bottom line here?
If you're writing to an audience of data scientists or data scientist wannabes, what's the one piece of advice, or a couple of pieces of advice, that you would give them? >> First of all, data science is a developer competency. The modern developers are, many of them, need to be data scientists or have a strong grounding in and understanding of data science, because much of that machine learning and all that is increasingly the core of what software developers are building, so you can't not understand data science if you're a modern software developer. You can't understand data science as it (garbled) if you don't understand the need for agile iterative steps, because they're looking for the needle in the haystack quite often. The right combination of predictive variables and the right combination of algorithms and the right training regimen in order to get it all fit. It's a new world competency that needs to be mastered if you're a software development professional. >> Dave: Okay, anybody else want to chime in on the bottom line there? >> David: Just my two penny worth is that the key task of the data scientists is to come up with the algorithms and then implement them in a way that is robust and part of the system as a whole. The return on investment on the data science piece as an insight isn't worth anything until it's actually implemented and put into production of some sort. It seems that second stage of creating the working model is what the output of your data scientists should be. >> Yeah, it's the repeatable, deployable asset that incorporates the crux of data science, which is algorithms that are data driven, statistical algorithms that are data driven. >> Dave: Okay. If there's nothing else, let's close this agenda item out. Is Nick on? Did Nick join us today? Nick, you there? >> Nick: Yeah. >> Dave: Sounds like you're on. Tough to hear you. >> Nick: How's that? >> Dave: Better, but still not great. Okay, we can at least hear you now.
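David's point that the ROI only arrives when the research model becomes a deployable working model is, in practice, a freeze-and-handoff step. A minimal sketch, assuming Python's standard pickle module and a toy model artifact (the coefficients and file name are made up, not from the meeting):

```python
import pickle

# Toy "research model": in practice this would be the trained artifact
# the data scientists hand over. The coefficients here are invented.
model = {"intercept": -0.08, "slope": 0.17, "version": "0.1.0"}

# Hand-off: the research side freezes the model to a file...
with open("working_model.pkl", "wb") as f:
    pickle.dump(model, f)

# ...and the Dev Ops side loads exactly that frozen artifact into production.
with open("working_model.pkl", "rb") as f:
    working = pickle.load(f)

def score(x):
    """The production scoring path uses only the frozen artifact."""
    return working["intercept"] + working["slope"] * x

assert working == model  # the deployable asset is a faithful, repeatable copy
print(round(score(5), 2))  # → 0.77
```

Versioning the frozen file gives the Dev Ops team a fixed, testable unit to regression-test and retrain against, separate from the ongoing research iteration.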
David, you wanted to present on NVMe over fabric, pivoting off the Micron news. What is NVMe over fabric and who gives a fuck? (laughing) >> David: This is Micron, we talked about it last week. This is Micron's announcement. What they announced is NVMe over fabric, which, as we talked about last time, is the ability to create a whole number of nodes. They've tested 250; the architecture will take them to 1,000. 1,000 processors, or 1,000 nodes, and be able to access the data on any single node at roughly the same speed. They are quoting 200 microseconds. It's 195 if it's local and it's 200 if it's remote. That is a very, very interesting architecture which is like nothing else that's been announced. >> Participant: David, can I ask a quick question? >> David: Sure. >> Participant: This latency and the node count sound astonishing. Is Intel not replicating this or challenging it in scope with their 3D XPoint? >> David: 3D XPoint, Intel would love to sell as a key component of this. 3D XPoint as a storage device is very, very, very expensive. You can replicate most of the function of 3D XPoint at a much lower price point by using a combination of DRAM and protected DRAM and flash. At the moment, 3D XPoint is a nice-to-have, and there'll be circumstances where they will use it, but at the meeting yesterday, I don't think they, they might have brought it up once. They didn't emphasize it (mumbles) at all as being part of it. >> Participant: To be clear, this means rather than buying Intel servers rounded out with lots of 3D XPoint, you buy Intel servers just with the CPU and then all the Micron niceness for their NVMe and their interconnect? >> David: Correct. They are still Intel servers. The ones they were displaying yesterday were HPE ones; they also used Supermicro. They want certain characteristics of the chipset that are used, but those are just standard pieces.
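The latency figures David quotes, 195 microseconds local, 200 microseconds remote, can be turned into a quick back-of-envelope model of blended access time. The function below is purely illustrative arithmetic on those two quoted numbers, not anything Micron published:

```python
# Blended access latency for the quoted figures: 195 us when the data is
# local to the node, 200 us when it lives anywhere else in the fabric.
LOCAL_US = 195.0
REMOTE_US = 200.0

def avg_access_us(remote_fraction):
    """Expected per-access latency when `remote_fraction` of reads are remote."""
    return LOCAL_US * (1 - remote_fraction) + REMOTE_US * remote_fraction

# The remote penalty is only 5 us, so even an all-remote workload pays
# under 3% over a fully local one -- the point being made here.
penalty = avg_access_us(1.0) - avg_access_us(0.0)
```

That near-flat curve is why "access the data on any single node at roughly the same speed" is the headline claim.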
The other parts of the architecture are the Mellanox 100 gigabit converged Ethernet and the use of RoCE, which is RDMA over Converged Ethernet. That is the secret sauce, and Mellanox themselves, their cards offload a lot of functionality. That's what allows you to go from any point to any point in five microseconds, then create a transfer and other things; files sit on top of that. >> Participant: David, another quick question. The latency is incredibly short. >> David: Yep. >> Participant: What happens if, say, an MPP SQL database with 1,000 nodes has to shuffle a lot of data? What's the throughput? Is it limited by that 100 gig, or is that so insanely large that it doesn't matter? >> David: The key is this: it allows you to move the processing to wherever the data is very, very easily. The principle that will evolve from this architecture is that you know where the data is, so don't move the data around, that'll block things up. Move the processing to that particular node, or some adjacent node, and do the processing as close as possible. That, as an architecture, is a long-term goal. Obviously in the short term, you've got to take things as they are. Clearly, a different type of architecture for databases will eventually need to evolve out of this. At the moment, what they're focusing on is big problems which need low latency solutions, using databases as they are and the whole end-to-end stack, which is a much faster way of doing it. Then over time, they'll adapt new databases, new architectures to really take advantage of it. What they're offering is a POC at the moment. It's in beta. They had their customers talking about it, and they were very complimentary in general about it. They hope to get it into full production this year. There are going to be a host of other people doing this.
I was trying to bottom-line this in terms of really what the link is with digital enablement. For me, true digital enablement is enabling any relevant data to be available for processing at the point of business engagement, in real time or near real time. That's the definition this architecture enables. It's, in my view, a potential game changer, in that this is an architecture which will allow any data to be available for processing. You don't have to move the data around; you move the processing to that data. >> Is Micron first to market with this capability, David? NV over Me? NVMe. >> David: Over fabric? Yes. >> Jim: Okay. >> David: Having said that, there are a lot of startups which have got a significant amount of money and who are coming to market with their own versions. You would expect Dell, HP to be following suit. >> Dave: David? Sorry. Finish your thought and then I have another quick question. >> David: No, no. >> Dave: The principle, and you've helped me understand this many times, going all the way back to Hadoop, is bring the application to the data, but when you're using conventional relational databases and you've had it all normalized, you've got to join stuff that might not be co-located. >> David: Yep. That's the whole point about the five microseconds. Now the impact of non-co-location, if you have to join stuff or whatever it is, is much, much lower. You can do the logical join, whatever it is, very quickly and very easily across that whole fabric. In terms of processing against that data, you would then choose to move the application to that node because it's much less data to move; that's an optimization of the architecture as opposed to a fundamental design point. You can then optimize where you run the thing.
This is the ideal architecture for where I personally see things going, which is traditional systems of record, which need to be exactly as they've ever been, and then alongside them, the artificial intelligence, the systems of understanding, data warehouses, etc. Having that data available in the same space so that you can combine those two elements in real time, or near real time, the advantage of that in terms of digital enablement and business value is the biggest thing of all. That's a 50% improvement in the overall productivity of a company; that's the thing that will drive, in my view, 99% of the business value. >> Dave: Going back just to the join thing, 100 gigs with five microseconds, that's really, really fast, but if you've got petabytes of data on these thousand nodes and you have to do a join, you've still got to go through that 100 gig pipe for stuff that's not co-located. >> David: Absolutely. The way you would design that is as you would design any query. You would need a process in front of that which does query optimization, to be able to farm out all of the independent jobs needed in each of the nodes, take the output of that, and bring it together. Both of those concepts are already there. >> Dave: Like a map. >> David: Yes. That's right. All of the computer science is there. You're starting from an architecture which is fundamentally different from the traditional architectures that have existed, by removing that huge overhead of going from one node to another. >> Dave: Oh, because this goes, it's like a mesh, not a ring? >> David: Yes, yes. >> Dave: It's like the high-performance compute, this MPI-type architecture? >> David: Absolutely. NVMe, by definition, is a point-to-point architecture. RoCE, underneath it, is a point-to-point architecture. Everything is point to point. Yes. >> Dave: Oh, got it. That really does call for a redesign. >> David: Yes, you can take it in steps.
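Dave's "like a map" reading is apt: the optimizer-in-front design David describes, farm independent jobs to the nodes where the data lives, then combine the outputs, is essentially scatter-gather. A toy sketch, with the shard layout and the distributed-SUM example entirely my own invention for illustration:

```python
# Minimal scatter-gather: run the per-node job where each shard of
# data lives, then merge the partial results into one answer.
def scatter_gather(shards, map_fn, reduce_fn):
    partials = [map_fn(shard) for shard in shards]  # "farm out" the jobs
    return reduce_fn(partials)                      # "bring it together"

# e.g. a distributed SUM over three node-local shards:
shards = [[1, 2, 3], [4, 5], [6]]
total = scatter_gather(shards, sum, sum)  # 21
```

In a real system the map step would execute on each remote node rather than in one process, and a query optimizer would decide which sub-jobs are independent, but the two-phase shape is the same.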
It'll work as it is, and then over time you'll optimize it to take more advantage of it. Does that definition of (mumbling) make sense to you guys? The one I quoted to you? Enabling any relevant data to be available for processing at the point of business engagement, in real time or near real time? That's where you're trying to get to, and this is a very powerful enabler of that design. >> Nick: You're emphasizing the network topology, while I kind of thought the heart of the argument was performance. >> David: Could you repeat that? It's very - >> Dave: Let me repeat. Nick's a little light, but I could hear him fine. You're emphasizing the network topology, but Nick's saying his takeaway was the whole thrust was performance. >> Nick: Correct. >> David: Absolutely. Absolutely. The result of that network topology is a many-times improvement in the performance of the systems as a whole that you couldn't achieve in any previous architecture. I totally agree. That's what it's about: enabling low latency applications with much, much more data available, by being able to break things up in parallel and deliver multiple streams to an end result. Yes. >> Participant: David, let me just ask, if I can, to play out how databases are designed now, how they can take advantage of it unmodified, but how things could be very, very different once they do take advantage of it. Today, if you're doing transaction processing, you're pretty much bottlenecked on a single node that maintains the fresh cache of shared data, and that cache, even if it's in memory, is associated with shared storage. What you're talking about means that because you've got memory-speed access to that cache from anywhere, it's no longer tied to a node. That's what allows you to scale out to 1,000 nodes even for transaction processing. That's something we've never really been able to do.
Then the fact that you have a large memory space means that you no longer optimize for mapping back and forth between disk and disk structures; you have everything in a memory-native structure, and you don't go through this thin straw of IO to storage, you go through memory-speed IO. That's a big, big - >> David: That's the end point. I agree. That's not here quite yet. It's still IO, though the IO has been improved dramatically, the protocol within the NVMe and the over-fabric part of it. The elapsed time has been improved, but it's not yet the same as, for example, the HPE initiative. That's saying you change your architecture, you change your way of processing, and everything happens just in memory. Everything is assumed to be memory. We're not there yet. 200 microseconds is still a lot, lot slower than the processor. One impact of this architecture is that the amount of data you can pass through it is enormously higher, and therefore the memory sizes themselves within each node will need to be much, much bigger. There is a real opportunity for architectures which minimize the impact, which hold data coherently across multiple nodes, with no tapping on the shoulder for every byte transferred, so you can move large amounts of data into memory and then tell people it's there and allow it to be shared, for example between the different cores and the GPUs and FPGAs that will be in these processors. There's more to come in terms of the architecture in the future. This is a step along the way; it's not the whole journey. >> Participant: Dave, another question. You just referenced 200 milliseconds or microseconds? >> David: Did I say milliseconds? I meant microseconds. >> Participant: You might have, I might have misheard. Relate that to the five microsecond thing again. >> David: If you have data directly attached to your processor, the access time is 195 microseconds.
If you need to go remote, anywhere else in the thousand nodes, your access time is 200 microseconds. In other words, the additional overhead of getting at that data is five microseconds. >> Participant: That's incredible. >> David: Yes, yes. That is absolutely incredible. That's something that computer scientists have been working on for years and years. Okay. That's the reason why you can now do what I talked about, which is you can have access from any node to any data within that large number of nodes. You can have petabytes of data there, and you can have access from any single node to any of that data. That, in terms of digital enablement, is absolutely amazing. In other words, you don't have to pre-place the data that's local to one application in one place. You're allowing an enormous flexibility in how you design systems. That, coming back to artificial intelligence, etc., allows you a much, much larger amount of data that you can call on for improving applications. >> Participant: You can explore and train models, huge models, really quickly? >> David: Yes, yes. >> Participant: Apparently that process works better when you have an MPI-like mesh than a ring. >> David: If you compare this architecture to the DSSD architecture, which was the first entrant into this space, the one EMC bought for a billion dollars, that one stopped at 40 nodes. Its architecture was very, very proprietary all the way through. This one takes you to 1,000 nodes at much, much lower cost. They believe the cost will be between 10 and 20% of the cost of the equivalent DSSD system. >> Dave: Can I ask a question about, you mentioned the query optimizer. Who develops the query optimizer for this system? >> David: Nobody does yet. >> Jim: The DBMS vendor would have to re-write theirs, at a whole different expense. >> Dave: So we would have an optimizer-less database system? >> David: Who's asking the question, I'm sorry. I don't recognize the voice. >> Dave: That was Neil.
Hold on one second, David. Hold on one second. Go ahead, Nick. You talk about translation. >> Nick: ... On a network. It's a SAN. It happens to be very low latency and very high throughput, but it's just a storage subsystem. >> David: Yep. Yep. It's a storage subsystem. It's called a server SAN. That's what we've been talking about for a long time: you need the same characteristics, which is that you can get at all the data, but you need to be able to get at it in compute time as opposed to taking-a-stroll-down-the-road time. >> Dave: Architecturally, it's a SAN without an array controller? >> David: Exactly. Yeah, the array controller is software from a company called Xcellate, what was the name of it? I can't remember now. Say it again. >> Nick: Excelero or Xceleron? >> David: Excelero. That's the company that has produced the software for the data services, etc. >> Dave: Let's, as we sort of wind down this segment, let's talk about the business impact again. We're talking about different ways, potentially, to develop applications. There's an ecosystem requirement here, it sounds like, from the ISVs to support this, and other developers. It portends the elimination of the last electromechanical device in computing, which has implications for a lot of things. Performance, value, application development, application capability. Maybe you could talk about that a little bit, again thinking in terms of how practitioners should look at this. What are the actions they should be taking, and what kinds of plans should they be making in their strategies? >> David: I thought Neil's comment last week was very perceptive, which is, you wouldn't start with people like me who have been imbued with the 100-database-call limits for umpteen years. You'd start with people, millennials, or sub-millennials or whatever you want to call them, who can take a completely fresh view of how you would exploit this type of architecture.
Fundamentally, you will be able to get through 10 or 100 times more data in real time than you can with today's systems. There are two parts to that data, as I said before: the traditional systems of record that need to be updated, and then a whole host of applications that will allow you to do processes which are either not possible or very slow today. To give one simple example, if you want to do real-time changing of pricing based on the availability in your supply chain, based on what you've got in stock, based on your delivery capabilities, that's a very, very complex problem, the optimization of all these different things, and there are many others you could include. This will give you the ability to automate that process and optimize it in real time as part of the systems of record, and update everything together. That, in terms of business value, is extracting a huge number of people who previously would be involved in that chain, reducing their involvement significantly, and making the company itself far more agile, far more responsive to change in the marketplace. That's just one example; you can think of hundreds for every marketplace, where the application now becomes the system of record, augmented by AI and huge amounts more data, and can improve the productivity and the agility of an organization in the marketplace. >> This is a godsend for AI. The draw of AI is all this training data. If you can just move that at memory speed to the application in real time, it makes the applications much sharper and more (mumbling). >> David: Absolutely. >> Participant: How long, David, would it take for the cloud vendors to not just offer some instances of this, but essentially to retool their infrastructure? (laughing) >> David: This is, to me, a disruption and a half. The people who can be first to market with this are the SaaS vendors, who can take their applications, or new SaaS vendors. ISVs. Sorry, say that again, sorry.
>> Participant: The SaaS vendors who have their own infrastructure? >> David: Yes, but it's not going to be long before the AWSes and Microsofts put this in their tool bag. The SaaS vendors have the greatest capability of making this change in the shortest possible time. To me, that's one area where we're going to see results. Make no mistake about it, this is a big change, and at the Micron conference, I can't remember what the guy's name was, he said it takes two Olympics for people to start adopting things for real. I think it's going to be shorter than two Olympics, but it's going to be quite a slow process pushing this out. It's radically different, and a lot of the traditional ways of doing things are going to be affected. My view is that SaaS is going to be first, and then there are going to be individual companies that solve the problems themselves. Large companies, even small companies, that put in systems of this sort and then use them to outperform the marketplace in a significant way, particularly in the finance area and in other data-intensive areas. That's my two pennies' worth. Anybody want to add anything else? Any other thoughts? >> Dave: Let's wrap some final thoughts on this one. >> Participant: Big deal for big data. >> David: Like it, like it. >> Participant: It's actually more than that, because there used to be a major trade-off between big data and fast data, latency and throughput, and this starts to push some of those boundaries out so that you sort of can have both at once. >> Dave: Okay, good. Big deal for big data and fast data. >> David: Yeah, I like it. >> Dave: George, you want to talk about digital twins? I remember when you first sort of introduced this, I was like, "Huh? What's a digital twin? That's an interesting name." I guess, I'm not sure you coined it, but why don't you tell us what a digital twin is and why it's relevant. >> George: All right. GE coined it.
I'm going to, at a high level, talk about what it is, why it's important, a little bit about, as much as we can tell, how it's likely to start playing out, and a little bit on the differences among the different vendors who are going after it. As far as defining it, I'm cribbing a little from a report that's just in the edit process. It's a data representation, this is important, or a model, of a product, process, service, customer, supplier. It's not just an industrial device; it can be any entity involved in the business. This is a refinement Peter helped with. The reason it's any entity is because it can represent the structure and behavior not just of a machine tool or a jet engine, but of a business process, like a sales order process when you see it on a screen with its workflow. That's a digital twin of what used to be a physical process. It applies to both the devices and assets and the processes, because when you can model them, you can integrate them within a business process and improve that process. Going back to something more physical so I can give a more concrete definition, you might take a device like a robotic machine tool, and the idea is that the twin captures the structure and the behavior across its lifecycle: as it's designed, as it's built, tested, deployed, operated, and serviced. I don't know if you all know the myth, among the Greek gods, where one of the goddesses sprang fully formed from the forehead of Zeus. I forget who it was. The point is that a digital twin is not going to spring fully formed from any developer's head. Getting to the level of fidelity I just described is a journey, and a long one, maybe a decade or more, because it's difficult. You have to integrate a lot of data from different systems, and you have to add structure and behavior for stuff that's not captured anywhere and may never be captured anywhere.
Just for example, CAD data might have design information; manufacturing information might come from there or another system. CRM data might have support information. Maintenance, repair, and overhaul applications might have information on how it's serviced. Then you also connect the physical version with the digital version, with essentially telemetry data that says how it's been operating over time. That sort of helps define its behavior, so you can manipulate that and predict things or simulate things that you couldn't do with just the physical version. >> You have to think, combined with, say, 3D printers, you could create a hot physical backup of some malfunctioning thing in the field, because you have the entire design, you have the entire history of its behavior, and its current state before it went kablooey. Conceivably, it can be fabricated on the fly and reconstituted as a physical object from the digital twin that was maintained. >> George: Yes, you know, actually that raises a good point, which is that the behavior represented in the telemetry helps the designer simulate a better version for the next iteration. Just what you're saying. Then with 3D printing, you can either make a prototype or another instance. Some of the printers are getting sophisticated enough to punch out better versions, or parts for better versions. That's a really good point. There's one thing that has to hold all this stuff together, which is really kind of difficult, which is challenging technology. IBM calls it a knowledge graph. It's pretty much in anyone's version; they might not call it a knowledge graph. A graph is, instead of a tree where you have a parent and then children and the children have more children, a graph where many things can relate to many things. The reason I point that out is that it puts a holistic structure over all these disparate sources of data and behavior. You essentially talk to the graph, sort of like with Arnold, talk to the hand.
That didn't, I got crickets. (laughing) Let me give you guys the, I put a definitions table in this doc. I had a couple things. Data models. These are some important terms. The data model represents the structure but not the behavior of the digital twin. The API represents the behavior of the digital twin, and it should conform to the data model for maximum developer usability. Jim, jump in anywhere you feel like you want to correct or refine. The object model is a combination of the data model and the API. You were going to say something? >> Jim: No, I wasn't. >> George: Okay. The object model ultimately is the digital twin. Another way of looking at it: it defines the structure and behavior. Then there's one of these SAT words, the canonical model. It's a generic version of the digital twin, really one where you have a representation that doesn't have customer-specific extensions. This is important because the way these things are getting built today is mostly custom, bespoke, and so you want to be able to reuse work. If someone's building this for you, like a system integrator, they want to be able to reuse it on the next engagement, and you want to be able to take the benefit of what they've learned on the next engagement back to you. There has to be this canonical model that doesn't break every time you add new capabilities; it doesn't break your existing stuff. The knowledge graph, again, is the thing that holds together all the pieces and makes them look like one coherent whole. I'll get to, I talked briefly about network compatibility, and I'll get to level of detail. Let me go back to, I'm sort of doing this from crib notes. We talked about telemetry, which is what connects the physical and the twin. Again, telemetry is really important because this is like the time series database. It says, this is all the stuff that was going on over time.
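George's definitions, data model as structure, API as behavior, object model as the combination, map naturally onto an ordinary class. A minimal sketch, where the field and method names are my own illustrative guesses at what a machine-tool twin might carry, not anything from a vendor's actual model:

```python
from dataclasses import dataclass, field

@dataclass
class MachineToolTwin:
    # Data model: the structure of the twin, no behavior.
    serial: str
    design_rev: str
    telemetry: list = field(default_factory=list)  # time-series readings

    # API: the behavior of the twin, conforming to the data model above.
    def record(self, reading: dict):
        self.telemetry.append(reading)

    def latest(self):
        return self.telemetry[-1] if self.telemetry else None

# The object model -- the digital twin itself -- is the combination:
# structure (the fields) plus behavior (the methods).
```

A canonical model, in these terms, would be this class without customer-specific fields; customer extensions would subclass or compose it rather than break it.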
Then you can look at telemetry data that tells you, we got a dirty power spike, and after three of those, this machine started vibrating. That's part of how you're looking to learn about its behavior over time. In that process, the models get better and better at predicting and at enabling you to optimize the device's behavior and the business process with which it integrates. I'll give some examples of that. These digital twins can themselves be composed at levels of detail. I used the example of a robotic machine tool. Then you might have a bunch of machine tools on an assembly line, and then you might have a bunch of assembly lines in a factory. As you start modeling not just the single instance but the collections, at higher and higher levels of abstraction, or levels of detail, you get a richer and richer way to model the behavior of your business. More and more of your business. Again, it's not just the assets, but some of the processes. Let me now talk a little bit about how the continual improvement works. As Jim was talking about, we have data feedback loops in our machine learning models. Once you have a good quality digital twin in place, you get the benefit of increasing returns from the data feedback loops. In other words, if you can get to a better starting point than your competitor, and then you get on the increasing returns of the data feedback loops, you are improving the fidelity of the digital twin faster than your competitor. That's for one twin; I'll talk about how you want to make the whole ecosystem of twins sort of self-reinforcing. I'll get to that in a sec. There's another point to make about these data feedback loops, which is that traditional apps, and this came up with Jim and Neil, traditional apps are static. If you want upgrades, you get stuff from the vendor.
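The dirty-power-spike example above, three spikes and then the machine starts vibrating, is exactly the kind of pattern a twin's telemetry makes queryable. A toy rule over an event list, where the event names and the window of three are my own assumptions for illustration:

```python
def spikes_before_vibration(telemetry, window=3):
    """True if at least `window` power spikes precede the first vibration event."""
    spikes = 0
    for event in telemetry:
        if event == "vibration":
            return spikes >= window
        if event == "power_spike":
            spikes += 1
    return False  # no vibration observed yet
```

A real twin would learn such thresholds from the time-series history rather than hard-code them, which is the feedback-loop point George makes next.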
With digital twins, they're always learning from the customer's data, and that has implications when the partner or vendor who helped build it for a customer takes learnings from that customer and goes to a similar customer for another engagement. I'll talk about the implications of that. This is important because it's half packaged application and half bespoke. The vendor doesn't have to take the customer's data, but their model learns from the data. Think of it as, I'm not going to take your coffee beans, your data, but I'm going to make coffee from your beans and I'm going to take that to the next engagement with another customer, who could be your competitor. In other words, you're extracting all the value from the data, that helps modify the behavior of the model, and the next guy gets the benefit of it. Dave, this is the stuff where IBM keeps saying, we don't take your data. You're right, but you're taking the juice you squeezed out of it. That's one of my next reports. >> Dave: It's interesting, George. Their contention is, they, uniquely, unlike Amazon and Google, don't swap spit, your spit, with their competitors. >> George: That's misleading. To say Amazon and Google, those guys aren't building digital twins. Parametric Technology is. I got this definitively from a Parametric technical fellow at an AWS event last week: they not only don't use the data, they don't use the structure of the twin either, from engagement to engagement. That's a big difference from IBM. I have a quote from Chris O'Connor of IBM in Munich saying, "We'll take the data model, but we won't take the data." I'm like, so you take the coffee from the beans even if you don't take the beans? I'm going to be very specific about saying that claiming you don't do what Google and Facebook do, while doing what you do, is misleading. >> Dave: My only caution there is do some more vetting and checking.
A lot of times what some guy says in a Cube interview, he or she doesn't even know, in my experience. Make sure you validate that. >> George: I'll send it to them for feedback, but it wasn't just him. I got it from the CTO of the IoT division as well. >> Dave: When you were in Munich? >> George: This wasn't on the Cube either. This was on the side, at the coffee table during a break. >> Dave: I understand, and CTOs in theory should know. I can't tell you how many times I've gotten a definitive answer from a pretty senior-level person and it turns out either they weren't listening to me, or they didn't know, or they were just yessing me, or whatever. Just be really careful and make sure you do your background checks. >> George: I will. I think the key is to leave them room to provide a nuanced answer. It's about being really, really, really concrete about really specific edge conditions and saying, do you or don't you. >> Dave: This is a pretty big one. If I'm a CIO, a chief digital officer, a chief data officer, COO, head of IT, head of data science, what should I be doing in this regard? What's the advice? >> George: Okay, can I go through a few more or are we out of time? >> Dave: No, we have time. >> George: Let me do a couple more points. I talked about training a single twin, or an instance of a twin, and I talked about the acceleration of the learning curve. There's edge analytics; David has educated us with the help of looking at GE Predix. David, you have been talking about this for a long time. You want edge analytics to inform or automate a low latency decision, and so this is where you're going to have to run some amount of analytics right near the device. Although I've got to mention, hopefully this will elicit a chuckle, when you get some vendors telling you what their edge and cloud strategies are. MapR said, we'll have a Hadoop cluster that only needs four or five nodes as our edge device. And we'll need five admins to care for and feed it.
He didn't say the last part, but that obviously isn't going to work. The edge analytics could be things like recalibrating the machine for a different tolerance, if it's seeing that it's getting out of the tolerance window or something like that. The cloud, and this is old news for anyone who's been around David, but you're going to have a lot of data, not all of it, going back to the cloud to train both the instances of each robotic machine tool and the master of that machine tool. The reason is, an instance would be, oh, I'm operating in a high-humidity environment, something like that. Another one would be operating where there's a lot of sand, or something that screws up the behavior. Then the master might be something that has behavior that's common to all of them. The training will take place on the instances and the master, and they will in all likelihood push down versions of each. Next to the physical device, process, whatever, you'll have the instance one and a class one, and between the two of them they should give you the optimal view of behavior and the ability to simulate and improve things. It's worth mentioning, again as David found out, not by talking to GE but by accidentally looking at their documentation, that their whole positioning of edge versus cloud is a little bit hand-waving, and in talking to the guys from ThingWorx, which is a division of what used to be called Parametric Technology and is now just PTC, it appears that they're negotiating with GE to give them the orchestration and distributed database technology that GE can't build itself. I've also heard from two ISVs, one major and one minor, who are both in the IoT ecosystem, one of whom is part of the GE ecosystem, that Predix is a mess. It's analysis paralysis. It's not that they don't have talent, it's just that they're not getting shit done.
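The instance-plus-master scheme George describes, a class-level model common to all machines and an instance-level model for local conditions like humidity, both pushed down to the edge, might combine roughly like this. The linear models and the blending weight are entirely my own illustrative assumptions, not GE's or anyone's actual scheme:

```python
# Toy blend of a class-level ("master") model with a device-local
# ("instance") model at the edge. The 50/50 weight is an assumption.
def edge_predict(master, instance, x, instance_weight=0.5):
    return (1 - instance_weight) * master(x) + instance_weight * instance(x)

def master_model(x):
    # Behavior common to all machine tools of this class.
    return 2.0 * x

def humid_instance(x):
    # Correction learned for one machine in a high-humidity plant.
    return 2.0 * x + 1.0
```

Training the master on pooled fleet data and each instance on its own local data, then pushing down versions of both, is the feedback-loop structure described above.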
Anyway, the key thing now is when you get all this - >> David: Just from what I learned when I went to the GE event recently, they're aware of their requirement. They've actually already got some subparts of Predix which they can put in the cloud, but there needs to be more of it and they're aware of that. >> George: As usual, just another reason I need a red-phone hotline to David for any and all questions I have. >> David: Flattery will get you everywhere. >> George: All right. One of the key takeaways, not the action item, but the takeaway for a customer is when you get these data feedback loops reinforcing each other, the instances of, say, the robotic machine tools to the master, then the instance to the assembly line to the factory, when all that is being orchestrated and all the data is continually enhancing the models as well as the manual process of adding contextual information or new levels of structure, this is when you're on an increasing-returns sort of curve that really contributes to sustaining competitive advantage. Remember, think of how when Google started off on search, it wasn't just their algorithm, but it was collecting data about which links you picked, in which order, and how long you were there that helped them reinforce the search rankings. They got so far ahead of everyone else that even if others had those algorithms, they didn't have that data to help refine the rankings. You get this same process going when you essentially have your ecosystem of learning models across the enterprise sort of all orchestrating. This sounds like motherhood and apple pie, and there are going to be a lot of challenges to getting there, and I haven't gotten all the warts from having gone through it or talked to a lot of customers who've gotten the arrows in the back, but that's the theoretical, really cool end point or position where the entire company becomes a learning organization from these feedback loops.
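The Google analogy above, in miniature: a ranking that re-sorts itself as usage data accumulates, so whoever has the data keeps pulling ahead. This is a deliberately toy sketch of the feedback-loop idea, not how any production search engine works:

```python
# A ranking that improves from its own usage data: more clicks on a result
# push it up, and the improved ranking then attracts more of the clicks
# that refine it further -- the increasing-returns loop described above.
from collections import defaultdict

class FeedbackRanker:
    def __init__(self, docs):
        self.docs = list(docs)
        self.clicks = defaultdict(int)

    def record_click(self, doc):
        self.clicks[doc] += 1

    def ranked(self):
        # More clicks -> higher rank; ties keep original order (stable sort).
        return sorted(self.docs, key=lambda d: -self.clicks[d])

ranker = FeedbackRanker(["a", "b", "c"])
for _ in range(3):
    ranker.record_click("c")
ranker.record_click("b")
print(ranker.ranked())  # ['c', 'b', 'a']
```

A competitor with the identical sorting code but no click history would still rank `["a", "b", "c"]` — the data, not the algorithm, is the moat.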
I want to, now that we're in the edit process on the overall digital twin, do a follow-up on IBM's approach. Hopefully we can do it both as a report and then as a version for SiliconANGLE, because that thing I wrote on Cloudera got the immediate attention of Cloudera and Amazon, and hopefully we can both provide client proprietary value-add, but also the public impact stuff. That's my high level. >> This is fascinating. If you're the Chief of Data Science, for example, in a large industrial company, having the ability to compile digital twins of all your edge devices can be extraordinarily valuable, because then you can use that data to do more fine-grained segmentation of the different types of edges based on their behavior and their state under various scenarios. Basically your team of data scientists can then begin to identify the extent to which they need to write different machine learning models that are tuned to the specific requirements or status or behavior of different endpoints. What I'm getting at is, ultimately you're going to have 10 zillion different categories of edge devices performing in various scenarios. They're going to be driven by an equal variety of machine learning, deep learning, AI, and all that. All that has to be built up by your data science team in some coherent architecture, where there might be a common canonical template that all the algorithms and so forth on those devices are being built from. Each of those algorithms will then be tweaked to the specific digital twin profile of each device, is what I'm getting at.
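The segmentation step just described can be sketched very simply: bucket edge devices by an observed behavior statistic so each segment can get its own tuned model. A real pipeline would cluster on many features; this one-dimensional, stdlib-only version (all names and thresholds invented) just shows the shape of the step:

```python
# Group edge devices into behavior segments by their mean telemetry reading,
# so a data science team can assign a differently tuned model per segment.

def segment_devices(readings, boundaries):
    """Assign each device to a segment by its mean reading.

    readings: {device_id: [samples]}
    boundaries: sorted upper bounds, e.g. [40, 70] -> 3 segments
    """
    segments = {i: [] for i in range(len(boundaries) + 1)}
    for device, samples in readings.items():
        mean = sum(samples) / len(samples)
        idx = sum(mean > b for b in boundaries)  # how many bounds exceeded
        segments[idx].append(device)
    return segments

telemetry = {
    "valve-1": [20, 25, 30],    # mean 25 -> segment 0 (low)
    "valve-2": [55, 60, 65],    # mean 60 -> segment 1 (mid)
    "valve-3": [80, 90, 100],   # mean 90 -> segment 2 (high)
}
print(segment_devices(telemetry, boundaries=[40, 70]))
# {0: ['valve-1'], 1: ['valve-2'], 2: ['valve-3']}
```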
>> George: That's a great point that I didn't bring up, which is, folks who remember object-oriented programming, not that I ever was able to write a single line of code, but the idea, go into this robotic machine tool, you can inherit a couple of essentially component objects that can also be used in slightly different models. Let's say in this machine tool, there's a model for a spinning device, I forget what it's called. Like a drive shaft. That drive shaft can be in other things as well. Eventually you can compose these twins, even instances of a twin, with essentially component models themselves. ThingWorx does this. I don't know if GE does this. I don't think IBM does. The interesting thing about IBM is, their go-to-market really influences their approach to this, which is they have this huge industry solutions group and then obviously the global business services group. These guys are all custom development and domain experts, so they'll go in; they're literally working with Airbus with the goal of building a model of a particular airliner. Right now I think they're doing the de-icing subsystem, I don't even remember on which model. In other words, they're helping to create this bespoke thing, and so that's what actually gets them into trouble with potentially channel conflict, or maybe it's more competitor conflict, because Airbus is not going to be happy if they take their learnings and go work with Boeing next. Whereas with PTC and ThingWorx, at least their professional services arm, they treat this much more like the implementation of a packaged software product and all the learnings stay with the customer. >> Very good. >> Dave: I got a question, George.
In terms of the industrial design and engineering aspect of building products, you mentioned PTC, which has been in the CAD business and the engineering software business for 50 years, and Ansys and folks like that who do the simulation of industrial products, or any kind of product that gets built. Is there a natural starting point for digital twin coming out of that area? The vice president of engineering would be the guy who'd be a key target for this kind of thinking. >> George: Great point. I think PTC is closely aligned with Teradata, and their attitude is, hey, if it's not captured in the CAD tool, then you're just hand-waving, because you won't have a high-fidelity twin. >> Dave: Yeah, it's a logical starting point for any mechanical kind of device. What's a thing built to do and what's it built like? >> George: Yeah, if it's something that was designed in a CAD tool, yes, but if it's something that was not, then you start having to build it up in a different way. I'm trying to remember, but IBM did not look like they had something that was definitely oriented around CAD. Theirs looked like it was more where the knowledge graph was the core glue that pulled all the structure and behavior together. Again, that was a reflection of their product line, which doesn't have a CAD tool, and the fact that they're doing these really, really, really bespoke twins. >> Dave: It strikes me that from the industrial design and engineering area, it's really the individual product that's the focus. That's one part of the map. The dynamic you're pointing at, there's lots of other elements of the map in terms of an operational, a business process. That might be the fleet of wind turbines or the fleet of trucks. How they behave collectively. There's lots of different entry points.
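Dave's fleet-level entry point can be sketched the same way: individual product twins roll up into a fleet-level twin so you can ask how the turbines behave collectively, not just one at a time. All names and numbers below are invented for illustration:

```python
# Product-level twins aggregated into an operational, fleet-level twin:
# the fleet answers collective questions the individual twins can't.

class TurbineTwin:
    def __init__(self, turbine_id, output_kw):
        self.turbine_id = turbine_id
        self.output_kw = output_kw


class FleetTwin:
    """Operational-level twin aggregating many product-level twins."""
    def __init__(self, twins):
        self.twins = list(twins)

    def total_output_kw(self):
        return sum(t.output_kw for t in self.twins)

    def underperformers(self, threshold_kw):
        return [t.turbine_id for t in self.twins if t.output_kw < threshold_kw]


fleet = FleetTwin([
    TurbineTwin("wt-01", 1500),
    TurbineTwin("wt-02", 900),   # degraded unit
    TurbineTwin("wt-03", 1480),
])
print(fleet.total_output_kw())       # 3880
print(fleet.underperformers(1000))   # ['wt-02']
```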
I'm just trying to grapple with, doesn't the CAD area, the engineering area, at least for hard products, have an obvious starting point for users to begin to look at this? The VP of engineering needs to be on top of this stuff. >> George: That's a great point that I didn't bring up, which is, a guy at Microsoft who was the CTO in their IT organization gave me an example: you have a pipeline that's 1,000 miles long. It's got 10,000 valves in it, but you're not capturing the CAD design of the valve; you just put in a really simple model that measures pressure, temperature, and leakage or something. You string 10,000 of those together into an overall model of the pipeline. That is a low-fidelity thing, but that's all they need to start with. Then they can see, when they're doing maintenance or when the flow-through is higher, what the impact is on each of the different valves or flanges or whatever. It doesn't always have to start with super high fidelity. It depends on what you're optimizing for. >> Dave: It's funny. I had a conversation years ago with a guy at MacNeal-Schwendler, the engineering software folks, if you remember them. He was telling us that about 30 to 40 years ago, when they were doing computational fluid dynamics, they were doing one-dimensional computational fluid dynamics, if you can imagine that. Then they were able, because of the compute power or whatever, to get to two-dimensional computational fluid dynamics, and finally they got to three-dimensional, and they're looking at four and five dimensions as well. I guess what I'm saying is, in that pipeline example, the way that they build that thing or the way that they manage that pipeline, the one-dimensional model of a valve is good enough, but over time, maybe a two- or three-dimensional one is going to be better. >> George: That's why I say that this is a journey that's got to take a decade or more. >> Dave: Yeah, definitely. >> Take the example of an airplane.
The old joke is it's six million parts flying in close formation. It's going to be a while before you fit that in one model. >> Dave: Got it. Yes. Right on. When you have that model, that's pretty cool. All right guys, we're about out of time. I need a little time to prep for my next meeting, which is in 15 minutes, but final thoughts. Do you guys feel like this was useful in terms of guiding things that you might be able to write about? >> George: Hugely. This is hugely more valuable than anything we've done as a team. >> Jim: This is great, I learned a lot. >> Dave: Good. Thanks, you guys. This has been recorded. It's up on the cloud and I'll figure out how to get it to Peter and we'll go from there. Thanks everybody. (closing thank-yous)
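The 10,000-valve pipeline from the discussion above can be sketched as a chain of deliberately low-fidelity component models; the point is that a useful twin doesn't have to start at CAD fidelity. Every number here is invented for illustration, not drawn from any real pipeline:

```python
# 10,000 low-fidelity valve models (just a pressure drop each) strung
# together into one end-to-end pipeline model, as in the example above.

class ValveModel:
    """Low-fidelity component: pressure in, slightly lower pressure out."""
    def __init__(self, drop_psi=0.01):
        self.drop_psi = drop_psi

    def step(self, pressure_psi):
        return pressure_psi - self.drop_psi


class PipelineModel:
    def __init__(self, valves):
        self.valves = valves

    def outlet_pressure(self, inlet_psi):
        p = inlet_psi
        for valve in self.valves:
            p = valve.step(p)
        return p


pipeline = PipelineModel([ValveModel() for _ in range(10_000)])
outlet = pipeline.outlet_pressure(inlet_psi=500.0)
print(round(outlet, 1))  # 400.0  (500 - 10,000 * 0.01)
```

Swapping one `ValveModel` for a higher-fidelity version later (two- or three-dimensional, as in the CFD story) would not change the shape of the pipeline model at all, which is exactly the incremental-fidelity journey described above.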
Deon Newman, IBM & Slava Rubin, Indiegogo - IBM Interconnect 2017 - #ibminterconnect - #theCUBE
>> Male Announcer: Live from Las Vegas, it's theCUBE, covering InterConnect 2017. Brought to you by IBM. >> Welcome back, we're live here in Las Vegas for IBM InterConnect 2017. This is theCUBE's coverage of InterConnect, I'm John Furrier with Dave Vellante, my co-host. Our next guest is Deon Newman, CMO of IBM Watson IoT, and Slava Rubin, the founder and Chief Business Officer of Indiegogo, great keynote today, you were on stage. Welcome to theCUBE. Deon, great to see you. >> Thanks for having me. >> So I've got to first set the context. Indiegogo, very successful crowd-funder, you guys pioneered. It's pretty obvious now looking back, this has created so much opportunity for people starting companies, whether it's a labor of love or growing into a great business, so congratulations on your success. What's the IBM connection? Because I don't want, you know, there was some stuff on the tweets, I don't want to break the news, but you guys are here. Share the connection. What's the packaging, why are IBM and Indiegogo working together? >> Yeah, so back up to 2008. We launched to be able to get people access to funding. And over the last several years, we've done a pretty good job of that, sending over a billion dollars to over half a million entrepreneurs around the world. And more recently, we've had a lot more requests of, Indiegogo, can you do more? And we knew that we couldn't do it all on our own. So we partnered first with Arrow to be able to bring these ideas more into reality around components and engineering and supply chain. And we knew we needed more in terms of these IoT products, so they need to be smart and they need software. So we were really excited to be able to announce today the partnership with IBM, around everything IoT, Cloud, security, and being able to provide all the blockchain and any other elements that we need.
>> Deon, I want to ask you, get your thoughts on, we had the Watson data platform guys on earlier in the segment, and composability is now the norm around data. This brings the hacker-maker culture to IoT, which, if you think about it, is a sweet spot for some of the innovations. They can start small and grow big. Is that part of the plan? >> Yeah, I mean, if you look at what's going on, we have about 6000 clients already with us in the IoT space. They tend to be the big end of town, you know, whether it be a Daimler or an Airbus, or whether it be a Kone, the world's biggest elevator company, or ISS, the world's biggest facilities management company. So we were doing a lot of work up there, really around optimizing their operations, connecting products, wrapping services around them so they can create new revenue streams. But where we didn't have an offering that was being used extensively was in the start-up space. And you know, when we saw what Indiegogo had been doing in the marketplace, and when our partner Arrow, who as Slava has said, has really built up an engineering capability and a component capability to support these makers, it was just a match made in heaven. You know, for an entrepreneur who needs to find a way to capture data, make that data valuable, you know, we can do that. We have the Cloud platform, we have the AI, et cetera. >> It's interesting, we just had our Big Data Silicon Valley event last week, and the big thing that came out of that event is finally the revelation, and this is probably not new to Slava and what you're doing, is that the production, under-the-hood hard stuff that's being done is in some ways stunting the creativity around some of the cooler stuff. Like whether it's data analytics or, in this case, starting a company. So, Slava, I want to get your thoughts on, your views on how the world is becoming democratized.
Because if you think about the entrepreneurship trend that you're riding, it's the democratization of invention. Alright, there's a democracy, this is the creative, it's the innovation, but yet it's all this hard stuff, like what's called production or under-the-hood, that IBM's bringing in. What do you expect that to fuel up? What's your vision of this democratization culture? >> I mean, it's my favorite thing that's happening. I think whether it's YouTube democratizing access to content or Indiegogo democratizing access to capital, the idea of democratizing access to entrepreneurship between our partnership just really makes me smile. I think that capital is just one of those first points, and now they're starting to get the money, but lots of other things are hard. When you can actually get artificial intelligence, get Cloud capabilities, get security capabilities, put it into a service so you don't need to figure all those things out on your own, so you can go from a small little idea to actually start scaling pretty rapidly, that's super exciting. When you can be on Indiegogo and in four weeks get 30,000 backers of demand across 100 countries, and people are saying, we want this, you know it's good to know you don't need to start ramping up your own dev team to figure out how to create a Cloud on your own, or create your own AI; you can tap right into a service that's provided. Which is really revolutionizing how quickly a small company can scale. So it proliferates more entrepreneurs starting because they know there's more accessibility. Plus it improves their potential for success, which in the long run just means there are more swings at the bat to be able to have an entrepreneur succeed, which I think all of us want. >> Explain to the audience how it works a little bit. You got the global platform that you built up. Arrow brings its resources and ideation. IBM brings the IoT, the cognitive platform.
Talk about how that all comes together and how people take advantage of it? >> Sure, I mean, you can look at it as one example, like Water Buy. So Water Buy is an actual sensor that you can deploy against your water system to be able to detect whether or not the water you're drinking is healthy. You're getting real-time data across your system, and if for some reason it's telling you that you have issues, you can react accordingly. So that was an idea. You go on Indiegogo, they post that idea and they're able to get the world to start funding it. You get customer engagement. You get actual market validation. And you get funding. Well, now you actually need to make these sensors, you need to make these products, so now you get the partnership with Arrow, which is really helpful because they're helping you with the engineering, the design, the components. Now you want to be able to figure out how you can store all that data. So it's not just your own house; maybe you're evaluating across an entire neighborhood. Or as a state, you want to see how the water is for the entire state. You put all of that data up into the Cloud, you want to be able to analyze the data rapidly through AI, and similarly, this is highly sensitive data so you want it to be secure. If Water Buy, on their own, had to build out all of this infrastructure, we're talking about dozens, hundreds, who knows how many people they would need? But here through the partnership you get the benefit of Indiegogo to get the brilliant idea actually validated, Arrow to bring your idea from the back of a napkin into reality, and then you get IBM Watson to help with all the software components and Cloud that we just talked about. >> And how did this get started? How did you guys, you know, fall into this, and how did it manifest itself? >> So can I tell the story? >> Go for it.
>> So I love this story. So as Slava's explained, at the front end of this it was really a partnership of Arrow and Indiegogo that came out of the need of entrepreneurs to actually build their stuff. You know, you get it funded and then you say, oh boy, now I've got a bunch of orders, how do I now make this stuff? And so Arrow had a capability of looking at the way you designed it, you know, looking at it deeply with their engineers, sourcing the components, putting it together, maybe white-boxing it even for you. So they put that together. Now, we're all seeing that IoT and the connected products are moving from disconnection to connection, which is actually generating data, and that data having value. And so Arrow didn't have that capability; we were great partners with Arrow, and you know, when we all looked at it, the need for AI coming into all these products, the need for security around the connection, the platform that could actually do that connection, we were a logical match here. So we're another set of components, not the physical. You know, we're the Cloud-based components and services that enable these connected devices.
Now you're seeing instead of being the last resort, Indiegogo is becoming the first resort because they're getting so much validation and market data. The incredible thing is not to think about it at scale when you think about 500 or 700 thousand entrepreneurs, or over a billion dollars, and it's in virtually every country in the world. If you really just look at it as one product. So like, Flow Hive is just one example. They've revolutionized how honey gets harvested. That product was bought in almost 170 countries around the world and it's something that hadn't been changed in over 150 years. And it's just so interesting to see that if it wasn't for Indiegogo that idea would not go from the back of a napkin to getting funded. And now through these partnerships they're able to realize so much more of their potential. >> So it's interesting, the machine learning piece is interesting to me because you take the seed-funding which is great product-market fit as they say in the entrepreneurial culture, is validated. So that's cool. But it could be in some cases, small amounts of cash before the next milestone. But if you think about the creativity impact that machine learning can give the entrepreneur, with through in their discovery process, early stage, that's an added benefit to the entrepreneur. >> Absolutely. Yeah, a great example there is against SmartPlate. SmartPlate is trying to use a combination of a weight-sensing plate as well with photo-detection, image detection software. The more data it can feed its image detection, the more qualified it can know, is that a strawberry or a cherry, or is that beef? 
And we take that for granted, that our eyes can detect all that, but it's really remarkable to think about: instead of having to journal everything by hand, or make sure you pick with your finger what's the right product and how many ounces, you can take a photo of something and now you'll know what you're eating, how much you're eating, and what the food composition is. And this all requires significant data, significant processing. >> I'm really pumped about that, congratulations to you on a great deal. I love the creativity and I think the impact to the globe is just phenomenal. Thinking about the game-changing things that are coming up, Slava, I've got to ask you, and Deon, if you could weigh in too, maybe you have some, your favorites. Your craziest thing that you've seen funded, and the coolest thing you've seen funded. (laughter) >> I mean, 'who' is hard, because it's kind of like asking, well, who's your favorite child? I have like 700,000 children, I'm not even Wilt Chamberlain (laughter) and I like them all. But you know, it's everything from an activity tracker to security devices, to being able to see what the trend is 24, 36 months ahead. Before things become mainstream today, we're seeing these things 3, 5 years ago. Things are showing up at CES, and you know, these are things we get to see in advance. In terms of something crazy, it's not quite IoT, but I remember when a young woman tried to raise $200,000 to be able to get enough money for her and Justin Bieber to fly to the moon. (laughter) >> That's crazy. >> That didn't quite get enough funding. But something that's fresh right now is Nimuno Loops, which is getting funded right now on Indiegogo, live. And they just posted less than seven days ago, and they have Lego-compatible tape. So it's something that you can tape onto any surface and the other side is actually Lego-compatible, so you actually put Legos onto that tape.
So imagine, instead of only a flat surface to do Legos, you could do Legos on any surface, even your jacket. It's not the most IoT-esque product right now, but you just asked for something creative. >> That's the creative. >> I think once you got Wilt Chamberlain and Justin Bieber in the conversation, I'm out. (laughter) (crosstalk) >> Well now, how does Indiegogo sustain itself? Does it take a piece of the action? Does it have other funding mechanisms? >> Yeah, and that's the beautiful thing about Indiegogo. It's a platform, and it's all about supply and demand. So supply is the ideas and the entrepreneurs, and the demand is the funders. It's totally free to use the website, and as long as you're able to get money in your pocket, then we take a percentage. If you're not taking any money into your pocket, then we get no money. As part of the process, you might benefit from actually not receiving money. You might try to raise a hundred grand, only raise thirty-one, and learn that your price point is wrong, your target audience is wrong, your color is wrong, your bottom cost is too high. All this feedback is super valuable. You just saved yourself a lot of pain. So really it's about building the marketplace. We're a platform; we started out just with funding, and we're really becoming now a springboard for entrepreneurs. We can't do it all ourselves, which is why we're bringing on these great partners. >> You know, we've done, just to add to that, I think it's a relevant part here too. We've actually announced a freemium-based service for the entrepreneurs to get onto the Cloud, to access the AI, to access the services, as a starting point through to the complete premium model, so they can get started with a very low barrier to entry and then scale as they grow. >> What do you call that? Is it IBM IoT Premium or? >> It hasn't got a name specific to the premium element; it's just the Watson IoT platform. Available on Bluemix.
So it's like a community edition of Watson. So Deon, new chapter for you. You know I saw a good quarter for mainframes, last quarter. It's still drafting off your great work and now you've shifted to this whole new IoT role, what's that been like? Relatively new initiative for IBM, building on some historical expertise. But give us the update on your business. >> Yes, so about 15 months ago, we announced a global headquarters that we were going to open in Munich, and we announced the Watson IT business. Which brought together a lot of IBM's expertise and a lot of our experience over the years through smarter cities, through the smarter planet initiative. You know we've been working The Internet Of Things, but we made a 3-billion dollar commitment to that marketplace, that we were going to go big and go strong. We've built out a horizontal platform, the Watson IoT platform. On top of that we've got market-leading enterprise asset management software, the Maximo portfolio, TRIRIGA for facilities management. And then we have a whole set of engineering software for designing connected products as well. So we've built out a very comprehensive industry-vertical-aligned IoT business. We added last year, we went from about 4000 to about 6000 clients. So we had a very good year in terms of real enterprises getting real outcomes. We continue to bring out new industry solutions around both connected products and then operations like retail, manufacturing, building management, telco, transportation. We're building out solutions and use-cases to leverage all that software. So business is going well. We officially the Watson IoT headquarters three weeks ago in Munich. And we're jam packed with clients coming through that building, building with us. We've got a lot of clients who've actually taken space in the building. And their using it as a co-laboratory with us to work on PSE's and see the outcomes they can drive. >> Alright, Deon Newman with IoT Watson, and IoT platforms. 
Slava Rubin, founder of Indiegogo. Collective intelligence is a cultural shift happening. Congratulations. Crowdsourcing and using all that crowdfunding: it's really good data, not just getting the entrepreneur innovations funded but really using that data, and right in your wheelhouse, IoT. Thanks for joining us on theCUBE, appreciate it. >> Thank you, John. >> More live coverage after this short break, with theCUBE live in Las Vegas for IBM InterConnect. We'll be right back, stay with us. (upbeat music)