
Search Results for Disco:

John Gromala, HPE GreenLake Lighthouse | HPE Discover 2021


 

(intro tune) >> Welcome back to HPE Discover 2021, the virtual version. My name is Dave Vellante and you're watching theCUBE's continuous coverage of the event. John Gromala is here. He's the Senior Director of Product Management for HPE GreenLake Lighthouse, a new offering from HPE. We're going to talk about that. We're going to talk about Cloud Native. Hey, John, welcome to theCUBE. Good to see you again. >> Awesome. Great to be with you again. >> All right. What is GreenLake Lighthouse? >> Yes, very excited. Another new offering and innovation from HPE to support our broader GreenLake strategy and plans. It's really a brand new purpose-built Cloud Native platform that we've developed and created that pulls together all of our infrastructure leadership with our platform software leadership into a single integrated system, built to run GreenLake Cloud Services. So think of it as, you know, fully integrated, deploy at any place you want on your premises, at a co-location provider or at the edge, wherever you need, they'll all inter-operate and work together, sharing data, you know, running apps together, great capability for people to bring the Cloud where they want. As we talk about with GreenLake, it's the Cloud that comes to you. >> So, should we think of this as a management platform? Is it also sort of a quasi development platform? Kind of where does it fit in that spectrum? >> Well, it's really more of an integrated system with all of the integrated control planes needed to run it, you know, in a distributed fashion. So it's a true distributed Cloud intended to run at any client location it's needed, connects back to GreenLake Central and our GreenLake Cloud operations teams to go ahead and run any Cloud services that they want. So you get the benefit of running those workloads wherever you need, but with that, you know, centralized control that people want in terms of how they run their Clouds. >> OK so, when we think of these things, for instance, how is it different from AWS Outposts or things like, you know, Azure Stack or Azure Stack Hub? >> Yeah. Very simply, you know, this is because it's a distributed Cloud intended to make it so you could run it wherever you need. You don't need to be tethered to any of the public Clouds or the various public Clouds out there. So people can now run their systems wherever they want, however they need, without that required tethering that many of those other vendors require. So you can really sort of own your own Cloud or have that Cloud come to wherever you need it within your overall IT. >> Can I tether to a public Cloud if I want to? >> Yeah. The Cloud services, like many other Cloud services, can interconnect together. So no issue if you want to run or even do failover between public Cloud or on premises, it's all how you want to set it up. But that connection to public Cloud, again, through GreenLake is done at that Cloud services level. You know, where you would connect one of these GreenLake Lighthouse systems to the public Cloud through services. >> OK so, maybe you could talk a little bit about the use cases in a minute, but how flexible is this? How do I configure Lighthouse? You know, what comes standard? What are my options? >> Yeah, so we've designed it in a very modular fashion so that people can really configure it to whatever their needs are at any given location. So there's a basic set of modules that align to a lot of the compute and storage instances that people are familiar with from all of the Cloud providers. 
You simply tell us which workloads you want to be running on it, and how much capacity you want. And that'll get configured in deploy to that given sight. In terms of the different types, we have what we're calling two series, or a set of series that are available for this to meet different sets of needs. One being more mainstream for, you know, broad use cases that people need, you know, virtualized container, any other type of enterprise workloads, and another more technically focused with higher performance networking for higher performance deployments. You can choose which of those fits your needs for those given areas. >> So maybe you could talk a little bit more about the workloads and what specifically is supported and how they get deployed. >> Yeah. Again, all of it is managed and run through GreenLake Central. That's our one location where people can go to watch these things, manage them. You can run, you know, container as a service, VM as a service, as needed on these different platforms. You can actually mix and match those as well. So one of these platforms can run multiple of those and you can vary the mix of those as your business needs change over time. So think of it as a very flexible way to manage this, which is really what Cloud Native is all about. Having that flexibility to run those workloads wherever and however you need. In addition, we can build a more advanced type of solutions on top of those sort of foundational capabilities with things like HPC as a service and Ops as a service to better enable clients to deploy any other given enterprise workloads. >> John, what about the security model for Lighthouse? That's obviously a big deal. Everybody's talking about these days. You can't open the news without seeing some kind of hack. How does lighthouse operate in a secure environment? >> Well, you know, first of all, that there's sort of a new standard that was established, you know, within these Cloud operating models. And HPE was leading in terms of infrastructure innovation with our Silicon Root of Trust, where we came out with the world's most secure infrastructure a few years ago. And what we're doing now, since this is a full platform and integrated system, we'll be extending that capability beyond just, you know, how we, you know, create a root of trust in our manufacturing facilities to ensure that it's secure, running it within the infrastructure itself. We'll be extending that vertically up into the software stacks of containers and VMs sort of using that route of trust up to make sure everything's secure in that sense and then eventually up to the workloads themselves. So by being able to go back to that root of trust it really makes a big difference in how people can run things in an enterprise secure way. Great innovations continued. And one of our big focus areas throughout this year. >> So where does it fit in the portfolio, John? I mean, how is it compliment or how is it different from, you know, the typical HPE systems, the hardware and software that we're used to? >> You might think of this as sort of a best of, bringing together all the great innovations of HPE. You know, we've got awesome infrastructure that we've led for many, many years. We've got, you know, great more Cloud Native software that's being developed. We've got great partnerships that we've got with a lot of the leading vendors out there. This allows us to bring all of those things together into a integrated platform that is really intended to run these Cloud Native services. 
So it builds on top of that leadership, fits in that sense with the portfolio, but it's ultimately about how it allows us to run and extend our GreenLake capabilities as we know them to make them more, more consumable, if you want to call it for a lot of our enterprise clients at whatever location that they. >> So when would I use Lighthouse and when would I use sort of a traditional HPE system? >> Yeah, again, it's a matter of which level of integration people want. You know, Cloud is really also in terms of experience about simplifying what people are purchasing and making it easier for them to consume, easier for them to roll out a lot of these things. That's when you'd want to purchase a Lighthouse versus our other infrastructure products. We'll always have those leading infrastructure products where people can put together everything in exactly the way that they want and go through the qualification and certification of a lot of those workloads. Or they can go ahead and select this GreenLake Lighthouse, where they have a lot of these things available in a catalog. We do validation of the workloads, and platform systems, so that it's all sort of ready for people to roll out at a much more secure, tested and agile fashion. >> So if I have a Cloud first strategy, but I don't want to put it in the public Cloud, but I want that Cloud experience. And I want to go fast. It sounds like Lighthouse, I'm the perfect customer for Lighthouse. >> Precisely, you know, this is taking that Cloud experience that people are wanting, the simplicity of those deployments and making it where it can come to them in whichever location that they want, you know, running it on a consumption basis. So that it's a lot easier way for them to go ahead and manage and deploy those things. Without a lot of the internal qualification and certifications that they've had to do over years. >> Versus OK, but and, or if I want to customize it maybe I want to, maybe I'm a channel partner. I want to bring some of my own value. I got a specific use case, that's not covered by something like Lighthouse. That's where I would go with a more traditional infrastructure. >> Correct. Yeah, if anyone wants to do customization, we've got a great set of products for that. We really want to use a Lighthouse as a mechanism for us to standardize and focus on more enabling these broader Cloud capabilities for clients. >> And lighthouse, talk a little bit more about the automation that I get that, you know, things like patching and software updates, that's sort of included in this integrated system, is that correct? >> Yeah. >> Absolutely. You know, when people think about you know, managing workloads in the Cloud, they don't worry about taking care of firmware updating and a lot of those things. That's all taken care of by the provider. So in that same experience, Lighthouse comes with all of the firmware updating, all of the software updating all included, all managed through our GreenLake managed services teams. So that's just part of how the system takes care of itself. You know, that's a new level of capability and experience that's consistent with all of the Cloud providers out there. >> And that's, OK so, that's something that is a managed service. So let's say I have a Lighthouse on prem, you're going to, that manage services doing all the patching and the releases, and the updates and that, that lives in the Cloud, that lives in HPE, that lives in my prem. 
>> Well, yeah, ultimately it all goes through GreenLake central and gets managed. You know, all of those deployments are automated in nature so that, you know, people don't have to worry about them. There's multiple ways that that can get delivered to them. We have some, you know, automation and control plane technology that brings that all together for them. You know, it can vary based on the client, on, you know, their degree of how they want to manage some of that, but it's all taken care of for them. >> And, you know, you've got GreenLake in the name, am I to infer from that that it sort of dovetails in, is one of the puzzles in the GreenLake mosaic. >> Yeah, exactly. So think of, think of GreenLake as our broader initiative for everything Cloud. And how do we start enabling not only these Cloud services, but make it easier for people to deploy those and consume them wherever they need. And this is the enablement piece. This is that portion of Greenlake that helps them enable that connected degree like central, where they can manage everything centrally. And then we've got that broad catalog of services available. >> And when can I get it, when's it go GA? >> Yeah. So it'll, July is when our first set of shipments and availability are there. So just a very, you know, few days after, you know, discover here and we'll expand the portfolio over time with more of a mainstream version, early, more technical or performance oriented ones available soon thereafter. And we've got plans even for edge type offerings, more in the future as well. So a case where we'll continue to build and expand more targeting these platforms to folks needs. Whether they're enterprise or maybe there are vertical offerings that they want, in terms of how they, you know, move all these things together. Think of Telco is a great case where people want this. Healthcare is another area where we can have the value of these integrated systems in a very purpose-built way. >> Can I ask you like, what's inside? You know, what can I get in terms of you know, basic infrastructure, compute, storage, networking? What are my options? >> All of the above, you know, what we'll do is we'll go through the basic selection of all of that greatest hits within our complete portfolio, pull them together, give you a few simple choices. You know, you think about it as, you want general purpose compute modules. You might want compute optimized or memory optimized modules. Each of those are simple choices that you'll make that come together. Underlying all that are the great infrastructure pieces that you've known for years. But we take care of simplifying that for you. So you don't have to worry about those details. >> Great. Well, John, congratulations on the new product, and thank you for sharing the update with theCUBE. >> Thank you very much. Great to talk to you. >> All right, thank you for watching theCUBE's coverage of HPE Discover 2021. My name is Dave Vellante. Keep it right there. We're right back with more coverage right after this short break. (outro tune)

Published Date : Jun 22 2021

SUMMARY :

Dave Vellante talks with John Gromala, Senior Director of Product Management at HPE, about HPE GreenLake Lighthouse, a purpose-built, cloud-native integrated system for running GreenLake cloud services on premises, at co-location providers, or at the edge. They cover how Lighthouse differs from tethered offerings such as AWS Outposts and Azure Stack, its modular configuration and workload support (containers and VMs as a service), security rooted in HPE's Silicon Root of Trust, fully managed firmware and software updates through GreenLake Central, and availability beginning in July 2021.


ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
John Gromala | PERSON | 0.99+
John | PERSON | 0.99+
HPE | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Telco | ORGANIZATION | 0.99+
Lighthouse | ORGANIZATION | 0.99+
July | DATE | 0.99+
two series | QUANTITY | 0.99+
Each | QUANTITY | 0.98+
GreenLake | ORGANIZATION | 0.98+
2021 | DATE | 0.97+
one | QUANTITY | 0.97+
first set | QUANTITY | 0.96+
single | QUANTITY | 0.96+
HPE Greenlake Lighthouse | ORGANIZATION | 0.95+
one location | QUANTITY | 0.95+
HPE GreenLake Lighthouse | ORGANIZATION | 0.95+
lighthouse | ORGANIZATION | 0.95+
Greenlake | ORGANIZATION | 0.94+
GreenLake Lighthouse | ORGANIZATION | 0.94+
Cloud Native | TITLE | 0.94+
theCUBE | ORGANIZATION | 0.93+
this year | DATE | 0.91+
Lighthouse | TITLE | 0.91+
GreenLake Central | ORGANIZATION | 0.9+
first strategy | QUANTITY | 0.89+
One | QUANTITY | 0.89+
few years ago | DATE | 0.88+
GreenLake | TITLE | 0.88+
Cloud | TITLE | 0.84+
a minute | QUANTITY | 0.79+
Lighthouse | COMMERCIAL_ITEM | 0.76+
few days | DATE | 0.74+
Discover 2021 | EVENT | 0.74+
HPE Disco | TITLE | 0.69+
GreenLake Cloud Services | TITLE | 0.68+
GreenLake Cloud | TITLE | 0.68+

Jasmine James, Twitter and Stephen Augustus, Cisco | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>> Narrator: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe, 2021 Virtual brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Hello, welcome back to theCUBE'S coverage of KubeCon and CloudNativeCon 2021 Virtual, I'm John Furrier your host of theCUBE. We've got two great guests here, always great to talk to the KubeCon co-chairs and we have Stephen Augustus Head of Open Source at Cisco and also the KubeCon co-chair great to have you back. And Jasmine James Manager and Engineering Effectives at Twitter, the KubeCon co-chair, she's new on the job so we're not going to grill her too hard but she's excited to share her perspective, Jasmine, Stephen great to see you. Thanks for coming on theCUBE. >> Thanks for having us. >> Thank you. >> So obviously the co-chairs you guys see everything upfront Jasmine, you're going to learn that this is a really kind of key fun position because you've got to multiple hats you got to wear, you got to put a great program together, you got to entertain and surprise and delight the attendees and also can get the right trends, pick everything right and then keep that harmonious vibe going at CNCF and KubeCon is hard so it's a hard job. So I got to ask you out of the gate, what are the top trends that you guys have selected and are pushing forward this year that we're seeing evolve and unfold here at KubeCon? >> For sure yeah. So I'm excited to see, and I would say that some of the top trends for Cloud Native right now are just changes in the ecosystem, how we think about different use cases for Cloud Native technology. So you'll see lot's of talk about new architectures being introduced into Cloud Native technologies or things like WebAssembly. WebAssembly Wasm used cases and really starting to and again, I think I mentioned this every time, but like what are the customer used cases actually really thinking about how all of these building blocks connect and create a cohesive story. So I think a lot of it is enduring and will always be a part. My favorite thing to see is pretty much always maintainer and user stories, but yeah, but architecture is Wasm and security. Security is a huge focus and it's nice to see it comes to the forefront as we talked about having these like the security day, as well as all of the talk arounds, supply chain security, it has been a really, really, really big event (laughs) I'll say. >> Yeah. Well, great shot from last year we have been we're virtual again, but we're back in, the real world is coming back in the fall, so we hopefully in North America we'll be in person. Jasmine, you're new to the job. Tell us a little about you introduce yourself to the community and tell more about who you are and why you're so excited to be the co-chair with Stephen. >> Yeah, absolutely. So I'm Jasmine James, I've been in the industry for the past five or six years previous at Delta Airlines, now at Twitter, as a part of my job at Delta we did a huge drive on adopting Kubernetes. So a lot of those experiences, I was very, very blessed to be a part of in making the adoption and really the cultural shift, easy for developers during my time there. I'm really excited to experience like Cloud Native from the co-chair perspective because historically I've been like on the consumer side going to talk, taking all those best practices, stealing everything I could into bring it back into my job. So make everyone's life easier. 
So it's really, really great to see all of the fantastic ideas that are being presented, all of the growth and maturity within the Cloud Native world. Similar to Stephen, I'm super excited to hear about the security stuff, especially as it relates to making it easy for developers to shift left on security versus it being such an afterthought, and making it something that you don't really have to think about. Developer experience is huge for me, which is why I took the job at Twitter six months ago, so I'm really excited to see what I can learn from the other co-chairs and to bring it back to my day-to-day. >> Yeah, Twitter's been very active in open source. Everyone knows that and it's a great chance to see you land there. One of the interesting trends I'll see this year besides security is GitOps, but the one that I think is relevant to your background, so fresh, is that end user contributions and involvement have been really exploding on the scene. It's always been there. We've covered Envoy with Lyft, but now mainstream enterprises have been kind of going to the open source well and bringing those goodies back to their camps and building out and bringing it back. So you're starting to see that flywheel developing, and you've been on that side, now here. Talk about that dynamic and how real and important that is, and share some perspective of what's really going on around this explosion around more end user contribution, more end user involvement. >> Absolutely. So I really think that a lot of industry players are starting to see the importance of contributing back to open source because historically we've done a lot of taking, utilizing these different components to drive the business logic and not really making an investment in the product itself. So it's really, really great to see large companies invest in open source, even have whole teams dedicated to open source and how it's consumed internally. So I really think it's going to be a big win for the companies and for the open source community, because I really am a big believer in, like, giving back and making sure that you give back as much as you're taking, and by making it easy for companies to do the right thing and then even highlighting it as a part of CNCF, it'll be really, really great, just a drive for a great environment for everyone. So really excited to see that. >> That's really good. That was awesome stuff. Great, great insight. Stephen, I'll just have you piggyback off that and comment on companies, enterprises that want to get more involved with the Cloud Native community from their respective experiences. What's the playbook? Are there new on-ramps? Are there new things? Is there a best practice? What's your view? I mean, obviously everyone's growing and changing. You look at IT, it has changed. I mean, IT is evolving completely to CloudOps, SRE, GitOps, day two operations. It's pretty much standard now but they need to learn and change. What's your take on this? >> Yeah, so I think that to Jasmine's point, and I'm not sure how much we've discussed my background in the past, but I actually came from the corporate IT background, did desktop, server, help desk support, all of that stuff, up into operations, DevOps, SRE, production engineering. I was an SRE at a startup that used CoreOS technologies and started using Kubernetes back when Kubernetes was at 1.2, I think. And that was my first journey into Cloud Native. And I became CoreOS's, like, only customer-to-employee convert, right? 
So I'm very much big on that end user story and figuring out how to get people involved, because that was my story as well. So I think that some of the work that we do, or a lot of the work that we do in contributor strategy, the CNCF SIG Contributor Strategy, is all around thinking through how to bring on new contributors to these various Cloud Native projects, right? So we've had chats with containerd and Linkerd and a bunch of other folks across the ecosystem, as well as the kind of maintainer circle sessions that we hold, which are kind of like a private, not recorded thing. So maintainers can kind of get raw and talk about what they're feeling, whether it be around bolstering contributions or whether it'd be like managing burnout, right? Or thinking about how you talk through the values and the principles for your projects. So I think that part of that story is building for multiple use cases, right? You take Kubernetes for example, right? So, as emeritus chair for SIG PM over in Kubernetes and one of the sub-project owners for the enhancements sub-project, which involves basically, like, figuring out how we intake new enhancements to the community, but as well as, like, what the end user cases are, all of the use cases for that, right? How do we make it easy to use the technology and how do we make it more effective for people to have conversations about how they use technology, right? So I think it's kind of a continuing story and it's delightful to see all of the people getting involved in SIG Contributor Strategy, because it means that they care about all of the folks that are coming into their projects and making it a more welcoming and easier to contribute place, so. >> Yeah. That's great stuff. And one of the things you mentioned about IT in your background and the scale change from IT and just the operational changeover is interesting. I was just talking with a friend and we were talking about GitOps and SREs and how, in colleges, is that an engineering track or is it computer science, and it's kind of a hybrid, right? So you're seeing essentially this new operational model at scale that's CloudOps. So you've got hybrid, you've got on-premise, you've got Cloud Native and now soon to be multi-cloud, so new things come into play: architecture, coding, and programmability. All these things are like projects now in CNCF. And that's a lot of vendors and contributors, but as a company, the IT function is changing fast. So that's going to require more training and more involvement, and yet open source is filling the void if you look at some of the successes out there, it's interesting. Can you comment on the companies that are out there saying, "Hey, I know my IT department is going to be turning into essentially SRE operations or CloudOps at scale." How do they get there? How could they work with KubeCon and what's the key playbook? How would you answer that? >> Yeah, so I would say, first off, the place to go is the 101 track. We specifically craft that 101 track to make sure that people who are new to Cloud Native get a very cohesive story around what they're trying to get into, right? At any one time. So head to the 101 track, please, head to the 101 track, hang out, definitely check out all of the keynotes. Again, the keynotes, we put a lot of work into making sure these keynotes tell a very nice story about all of the technology, and the amount of work that our presenters put into it as well is phenomenal. It's top notch. It's top notch every time. 
So those will always be my suggestions. Actually go to the keynotes and definitely check out the 101 track. >> Awesome. Jasmine, I got to get your take on this now that you're on the KubeCon side and you're co-chairing with Stephen. What's your story to the folks on the end user side out there that were in your old position? You were at Delta doing some great Kubernetes work, but now it's going beyond Kubernetes. I was just talking with another participant in the KubeCon ecosystem who was saying, "It's not just Kubernetes anymore. There's other systems that we're going to deploy our real-time metrics on and whatnot." So what's the story? What's the update? What do you see on the inside now that you're on board and you're at hyperscale at Twitter? What's your advice? What's your commentary to your old friends in the end user world? >> Yeah. It's not an easy task. I think that, as you had mentioned, starting with the 101 track is, like, super key. Like, that's where you should start. There's so many great stories out there in previous KubeCons that have been told. I was listening to those stories, and the great thing about our community is that it's authentic, right? We're telling, like, all of the ways we tripped up so we can prevent you from doing this same thing and having an easier path, which is really awesome. Another thing I would say is do not underestimate the cultural shift, right? There are so many tools and technologies out there, but there's also a cultural transformation that has to happen. You're shifting from traditional IT roles to a really holistic view, like, so many different things are changing about the way infrastructure is interacted with and the way developers are developing. So don't underestimate the cultural shift and make sure you're bringing everyone to the party, because there's a lot of perspectives from the development side that need to be considered before you make the shift initially. So that way you can make sure you're approaching the problem in the right way. So those would be my recommendations. >> Also, speaking of cultural shifts, Stephen, I know this is a big passion of yours: diversity in the ecosystem. I think with COVID we've seen, probably in the past two years, major cultural shifts in the personnel involved, the people participating, still a lot more work to get done. Where are we on diversity in the ecosystem? How would you rate the progress and the overall achievements? >> I would say doing better, but never stop. What has happened in COVID, I think, if you look across companies, if you look across the opportunities that have opened up for people in general, there have been plenty of doors that have shut, right? And doors that have really made the assumption that you need to be physically in person to do good work. And I think that the Cloud Native ecosystem, the work that the LF and CNCF do, and really the way that we interact in projects, has kind of pushed towards this async-first, this remote-first work culture, right? So you see it in these large corporations that have had to change their travel policies because of COVID, and really, for someone who's coming off of being, like, a field engineer and solutions architect, right? The bread and butter is hopping on and off a plane, shaking hands, going to dinner, doing the song and dance, right? With customers. And for that model to functionally shift, right? Having conversations in different ways, right? And yeah, sometimes it's a lot of Zoom calls, right? 
Zoom calls, webinars, all of these things. But I think some of what has happened is, you take the release team, for example, the Kubernetes release team. This is our first cycle where our 1.21 release team lead is based in India, right? And that's the first time that we've had an APAC-region release team lead, and what that forced us to do, we were already working on it, but what that forced us to do is really focus on asynchronous communication. How can we get things done without having to have people in the room? And we were like, with our release lead in India, it either works or it doesn't, like, we're either going to prove that what we've put in place works for asynchronous communication or it doesn't. And then, given that a project of this scale can operate just fine, right? Just fine, delivering a release with people all across the globe, it proves that we have a lot of flexibility in the way that we offer opportunities, both on the open source side, as well as on the company side. >> Yeah. And I got to say KubeCon has always been global from day one. I was in Shanghai and I was in Hangzhou, visiting Alibaba. And who do I see in the lobby? The CNCF crew. And I'm like, "What are you guys doing here?" "Oh, we're here talking cloud with Alibaba." So global is huge. You guys have nailed that. So congratulations and keep that going. Jasmine, your perspective is women in tech. I mean, you're seeing more and more focus and some great doors opening. It's still not enough. We've been covering this for a long time. Still the numbers are down, but we had a great conference recently at Stanford, Women in Data Science, amazing conference, a lot of power players coming in, women in tech is evolving. What's your take on this? Still a lot more work to be done. You're an inspiration. Share your story. >> Yeah. We have a long way to go. There's no question about it. I do think that there's a lot of great organizations, CNCF being one of them, really doing a great job at sharing networking opportunities, encouraging other women to contribute to open source and letting that be sort of the gateway into a tech career. My journey is starting as a systems engineer at Delta, working my way into leadership, somehow, I'm not sure how I ended up there, but really sort of shifting and being able to lift other women up, I've been, like, so fortunate to be able to do that. Women Who Code, being a mentor, things of that nature have been a great opportunity, but I do feel like the open source community has a long way to go to be a more welcoming place for women contributors, things like a code of conduct being very prevalent, making sure that it's not daunting and scary going into GitHub and starting to create a PR, out of fear of what someone might say about your contributions, instead of it being sort of an educational experience. So I think there's a lot of opportunities, but there's a lot of programs and networking opportunities out there, especially with everyone being remote now, that have presented themselves. So I'm very hopeful. And the CNCF, like I said, is doing a great job at highlighting these women contributors that are making changes to CNCF projects and really making it something that is celebrated, which is really great. >> Yeah. You know that I love Stephen, and we talked about this last time, and the Clubhouse app has come online since we were last talking, and it's all audio. So there's a lot of ideas and it's all open. So with asynchronous-first you have more access, but still, context matters. 
So the language, so there's still more opportunities potentially to offend or get it right so this is now becoming a new cultural shift. You brought this up last time we chatted around the language, language is important. So I think this is something that we're keeping an eye on and trying to keep open dialogue around, "Hey it matters what you say, asynchronously or in texts." We all know that text moment where someone said, "I didn't really mean that." But it was offensive or- >> It's like you said it. (laughs) >> (murmurs) you passionate about this here. This is super important how we work. >> Yeah. So you mentioned Clubhouse and it's something that I don't like. (laughs) So no offense to anyone who is behind creating new technologies for sure. But I think that Clubhouse from, if you take platforms like that, let's generalize, you take platforms like that and you think about the unintentional exclusion that those platforms involve, right? If you think about folks with disabilities who are not necessarily able to hear a conversation, right? Or you don't provide opportunities to like caption your conversations, right? That either intentionally or unintentionally excludes a group of folks, right? So I've seen Cloud Native, I've seen Cloud Native things happen on a Clubhouse, on a Twitter Spaces. I won't personally be involved in them until I know that it's a platform that is not exclusive. So I think that it's great that we're having new opportunities to engage with folks that are not necessarily, you've got people prefer the Slack and discord vibe, you've got people who prefer the text over phone calls, so to speak thing, right? You've got people who prefer phone calls. So maybe like, maybe Clubhouse, Twitter Spaces, insert new, I guess Disco is doing a thing too- >> They call it stages. Disco has stages, which is- >> Stages. They have stages. Okay. All right. So insert, Clubhouse clone here and- >> Kube House. We've got a Kube House come on in. >> Kube House. Kube House. >> Trivial (murmurs). >> So we've got great ways to engage there for people who prefer that type of engagement and something that is explicitly different from the I'm on a Zoom call all day kind of vibe enjoy yourselves, try to make it as engaging as possible, just realize what you may unintentionally be doing by creating a community that not everyone can be a part of. >> Yeah. Technical consequences. I mean, this is key language matters to how you get involved and how you support it. I mean, the accessibility piece, I never thought about that. If you can't listen, I mean, you can't there's no content there. >> Yeah. Yeah. And that's a huge part of the Cloud Native community, right? Thinking through accessibility, internationalization, localization, to make sure that our contributions are actually accessible, right? To folks who want to get involved and not just prioritizing, let's say the U.S. or our English speaking part of the world so. >> Awesome. Jasmine, what's your take? What can we do better in the world to make the diversity and inclusion not a conversation because when it's not a conversation, then it's solved. I mean, ultimately it's got a lot more work to do but you can't be exclusive. You got to be diverse more and more output happens. What's your take on this? >> Yeah. I feel like they'll always be work to do in this space because there's so many groups of people, right? That we have to take an account for. 
I think that thinking through inclusion in the onset of whatever you're doing is the best way to get ahead of it. There's so many different components of it and you want to make sure that you're making a space for everyone. I also think that making sure that you have a pipeline of a network of people that represent a good subset of the world is going to be very key for shaping any program or any sort of project that anyone does in the future. But I do think it's something that we have to consistently keep at the forefront of our mind always consider. It's great that it's in so many conversations right now. It really makes me happy especially being a mom with an eight year old girl who's into computer science as well. That there'll be better opportunities and hopefully more prevalent opportunities and representation for her by the time she grows up. So really, really great. >> Get her coding early, as I always say. Jasmine great to have you and Stephen as well. Good to see you. Final question. What do you hope people walk away with this year from KubeCon? What's the final kind of objective? Jasmine, we'll start with you. >> Wow. Final objective. I think that I would want people to walk away with a sense of community. I feel like the KubeCon CNCF world is a great place to get knowledge, but also an established sense of community not stopping at just the conference and taking part of the community, giving back, contributing would be a great thing for people to walk away with. >> Awesome. Stephen? >> I'm all about community as well. So I think that one of the fun things that we've been doing, is just engaging in different ways than we have normally across the kind of the KubeCon boundaries, right? So you take CNCF Twitch, you take some of the things that I can't mention yet, but are coming out you should see around and pose KubeCon week, the way that we're engaging with people is changing and it's needed to change because of how the world is right now. So I hope that to reinforce the community point, my favorite part of any conference is the hallway track. And I think I've mentioned this last time and we're trying our best. We're trying our best to create it. We've had lots of great feedback about, whether it be people playing among us on CNCF Twitch or hanging out on Slack silly early hours, just chatting it up. And are kind of like crafted hallway track. So I think that engage, don't be afraid to say hello. I know that it's new and scary sometimes and trust me, we've literally all been here. It's going to be okay, come in, have some fun, we're all pretty friendly. We're all pretty friendly and we know and understand that the only way to make this community survive and thrive is to bring on new contributors, is to get new perspectives and continue building awesome technology. So don't be afraid. >> I love it. You guys have a global diverse and knowledgeable and open community. Congratulations. Jasmine James, Stephen Augustus, co-chairs for KubeCon here on theCUBE breaking it down, I'm John Furrier for your host, thanks for watching. (upbeat music)

Published Date : May 4 2021

SUMMARY :

John Furrier talks with KubeCon + CloudNativeCon Europe 2021 co-chairs Jasmine James of Twitter and Stephen Augustus of Cisco. They discuss the event's top trends, including WebAssembly, supply chain security, and growing end user contributions to open source; how newcomers can get started through the 101 track and keynotes; asynchronous, remote-first collaboration across the Kubernetes community; accessibility concerns with audio-only platforms; and the continuing work on diversity and inclusion in the ecosystem.


ENTITIES

Entity | Category | Confidence
Stephen | PERSON | 0.99+
Jasmine | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Jasmine James | PERSON | 0.99+
India | LOCATION | 0.99+
Shanghai | LOCATION | 0.99+
Stephen Augustus | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
Cloud Native Computing Foundation | ORGANIZATION | 0.99+
Delta | ORGANIZATION | 0.99+
Alibaba | ORGANIZATION | 0.99+
Cisco | ORGANIZATION | 0.99+
last year | DATE | 0.99+
Delta Airlines | ORGANIZATION | 0.99+
North America | LOCATION | 0.99+
hung | LOCATION | 0.99+
CNCF | ORGANIZATION | 0.99+
Disco | ORGANIZATION | 0.99+
KubeCon | EVENT | 0.99+
six months ago | DATE | 0.99+
Clubhouse | TITLE | 0.99+
Twitter | ORGANIZATION | 0.99+
APAC | ORGANIZATION | 0.98+
first cycle | QUANTITY | 0.98+
Ecosystem Partners | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
CloudOps | TITLE | 0.98+
this year | DATE | 0.98+
Cloud Native | TITLE | 0.98+
first journey | QUANTITY | 0.97+
U.S. | LOCATION | 0.97+
first time | QUANTITY | 0.97+
two great guests | QUANTITY | 0.97+
GitOps | TITLE | 0.97+
one time | QUANTITY | 0.96+
Kubernetes | TITLE | 0.96+
both | QUANTITY | 0.96+
two | QUANTITY | 0.96+
LF | ORGANIZATION | 0.96+
SIG | ORGANIZATION | 0.96+
CloudNativeCon 2021 Virtual | EVENT | 0.95+
121 released team | QUANTITY | 0.94+
Clubhouse | ORGANIZATION | 0.94+

Jace Moreno, Microsoft | Enterprise Connect 2019


 

>> Live from Orlando, Florida, it's theCUBE, covering Enterprise Connect 2019. Brought to you by Five9. >> Hi, welcome back to theCUBE's coverage of Enterprise Connect 2019. I'm Lisa Martin with my co-host for the week Stu Miniman, we are in Five9's booth here at this event, excited to welcome to theCUBE for the first time Jace Moreno, Microsoft Teams Developer Platform Lead from Microsoft, Jace, welcome to theCUBE. >> Thank you for having me, it's a pleasure. >> So we're excited that you're here because you are on the main stage tomorrow morning with Lori Wright. But talk to us about Microsoft Teams. You've been with Microsoft for awhile now, about 10 months with Teams. Talk to us about this tool for collaboration that companies can use from 10 people in a meeting to 10,000? >> Yeah, you'll hear us tomorrow. The phrase we're coining is an intelligent workplace for everyone, right? And I think for a long time, we've been perceived as an organization who builds tools, a lot of times with the Enterprise Knowledge Worker, the whole goal is to dispel that. There's multiple people out there, millions of people who are frontline workers, whatever you want to call 'em but the folks that are interfacing with your actual customers. And so we need to make sure that we are developing tools that are for them. But overall as I look at the product and what we've delivered, it's about bringing you one single place to go to for collaboration, right? So and that is bringing together your tools, whether or not Microsoft built them into one experience and then process these in workflows around them. >> So do you find that in terms of traction that the, like the enterprises and maybe the more senior generations that have been working with Microsoft tools for a long time get it or I mean, 'cause I can imagine there's kind of a cultural gap there with, whether it's a large enterprise like a Microsoft or maybe a smaller organization, There are people in this modern workforce that have very different perspectives, different cultures. How can Teams help to maybe break down some of those barriers and really be a platform for innovation? >> That's a great question. I think we've been battling that cultural, digital clash for a long time to be fair. I think it really comes out with Teams, though. Because it is an entirely different way of working. It's not just chat anymore, right? It's collaboration. It's bringing together all of these experiences and so I think there's a maturity curve for some of our average users to be fair. We're already seeing that curve take off as we speak. But what I often give advice to customers and to partners, I call 'em superpowers but you got to find that one reason that really gets people over the line because we get asked all the time, "Hey, everybody loves it "but we want to get 'em to use this as the one tool, "the one place that I go so I know that everything "I send in our organization goes to that single place. "How do I deliver that?" And I go, "Just give 'em a reason." That's what it comes down to honestly and I genuinely see that with organizations. We're seeing incredible examples of organizations leveraging partner integrations where it's bringing out their culture rather than them trying to evolve it, if that makes sense. >> So Jace, I'm glad you brought up the partners there and when I hear developer platform, all right, bring us inside a little bit. Everything API compatible, when people think about developers, there have been developers in the Microsoft space. 
.NET's got its great ecosystem there but what is it like to be in the Microsoft ecosystem here in 2019? >> It's a fun place to be. I will say, I've even stopped using the term developer when I say platform though to be fair because, and the reason I bring this up, what we've actually built allows a lot of IT professionals to build as well on Teams. PowerShell Scripts as an example is a huge opportunity for customers. Frankly, I've never written a line of code in my life and I built a bot for Teams. So it's pretty amazing what we're enabling but when we look at a lot of what partners are building, it's where are they seeing opportunities in the marketplace? So Five9 as an example with customer care, great opportunity there where we can extend the capabilities that a contact center as an example might need inside of Teams if they want to explore that. >> I love, I actually got to interview Jeffrey Snover at Microsoft Ignite last year who of course created PowerShell and he was like more excited now than he was when it was created quite a long time ago. So when I look around this platform, tell us some of the partners that you're working with. I saw some of the early notes that things like Zoom, and gosh you know, talk about some of the partners you're working with. >> So one thing I'll touch on too that I don't know if I fully answered your last question is what I'm hearing from our partners who have built on Teams and I'll touch on which ones in a second, we call it the extensibility of our platform but quite literally what it means is they are, we are allowing partners to allow their solutions to render in different ways inside of Teams and what we're hearing from partners, I had a conversation with Disco the other day as an example, so they built a, I'm not doing them a service by explaining it like this but it's a kudos bot essentially that they've delivered and it's actually bringing out that culture. But they told us the beauty of the Teams platform is that they don't only show up as a bot to the end users, they actually, we've offered them other ways to interact with the end user, so whatever's more comfortable for me inside of team, and my interaction with that solution, it's easy for them to have that correspondence. But in terms of top partnerships that we're looking at, we've had some incredible integrations built recently. ADP just launched theirs pretty recently to check payroll and build sort of a time off process flow if you will, with the bot. Polly's been a great one from day one. We have integrations with partners like Atlassian for a DevOps tool, so Jira and Confluence Cloud, Trello for project management, I could go on forever but we have over 250 in the store right now and that is growing very rapidly. This is what we spend most of our time on. So the initial focus was what are the tools out there that most people need to get their job done every day? That's where we'll start and now we're really evolving that and we're seeing some incredible things being built as we speak. >> So Jace, being at Enterprise Connect, this is an event where it's been around for a long time and has evolved quite considerably as Enterprise Communication and Collaborations has but one of things that when I was doing research to prep for the show that I'm reading is that the customer experience is table stakes. It's make or break. 
But some of the recommendations that when a company is, whether it's within a business unit buying software and services or at the corporate level, the customer has to have a seat there so that the decision is being made. Are we implementing tools and technologies and services that are actually going to delight our customers, not just retain them but drive customer lifetime value? In your role, where are some of Microsoft's customers in terms of helping to evolve the evolution of the platform? >> That's a great question, I'm really glad you asked it. It's been fun in my role because what we're seeing is a lot of customers who have taken the platform and built integrations to their tools. So think outside of productivity for a second, think IT support, think employee resources, they're building those integrations and they're leveraging those as a way to drive that organic broad adoption inside of their companies. Because they don't want to do the IT force anymore, they want people to love it like you said and naturally take to it and so I keep coming back to that, I call it superpowers, again it might be a ridiculous term but it's those superpowers you deliver to your people that allow them to get their work done better, get them to love that product and to your point, not want to ever leave it 'cause you can get a majority of your work done every day in that place. So we've seen some really cool ones. A couple examples that we just shared recently, Dentsu's a great one, so they have a three person Change Management Team for a 50,000 person global organization, okay? Three people, got to scale that right? Can't do that one on one training and so they initially took Teams and integrated it into their current website, internet, internal portals to essentially create a chatbot that helped people learn how to use the technology they delivered. Now they're taken that one step further because they saw such great success and they're going to different centers of excellence inside the organization saying, "Hey, do you want to get on board? "Because we'd like to make this the bot "that you interact with as an employee of Dentsu." So it's just incredible but it's driving again that adoption they're seeing, leveraging some of the simple stuff that we have on the platform. Does that answer your question? >> Yes very well, thank you. >> So when I look at some of the macro trends about communication, where I've heard some great success stories is internally just being able to collaborate with some of my internal people, Teams has done really well. Collaborating between various organizations still seems to have more challenges. Can you just bring us a little bit of insight as to why I hear great success stories there and not negatives on Teams but just it's still challenging if I have multiple organizations? We all understand even just doing a conference call or heck, a video call between lots of different companies still in 2019's a challenge. >> Yeah look, I mean I'll give you a couple answers here. We are young, I mean it's two years old as a product. So the momentum's been incredible but I'm not going to sit here and tell you we don't have things to work on, we absolutely do. What I will say though, take Enterprise Connect for example, we actually have a Teams team for Enterprise Connect. 
There's, I actually checked this morning, there's 181 people in that team and a majority of them are guests, so external users, So vendors that we work with to help us plan this conference and bring it all together and a lot of that has been seamless. Yes, there are little things here or there that we're working on but in that respect it's been pretty incredible. I constantly am using it with external parties and I find though, I don't necessarily know if the challenge is in the interface itself, I think it ends up becoming this opportunity to really educate people on this new way of working. And so going back to our partners again, we're sitting here with Five9, but that becomes critical. How do we work better with these organizations who we have mutual customers with to create that experience together, right? And bring again, superpowers to the users. >> What about a security as a superpower? Where is that in these conversations? >> I mean everything we build has a layer of security. I actually just got out of a meeting, you'll see, we've got an announcement around this tomorrow. So I can't blow it unfortunately but the bottom, the foundation and core of everything that we do will be security focused, absolutely. >> All right, so I went to the Microsoft show last year, AI is also one of those things besides security. AI's infused anywhere, so where does AI fit into the whole Teams story? >> The way we see it, I look at this in a couple angles. So most people get onto Teams and it's kind of chat and collab at first, right? Not always the case but a lot of organizations do that. Then it goes to meetings then I think, and you'll see a lot of this cool stuff tomorrow, we're doing it on AI but it's how then do you proactively start delivering better experiences to your end users? So I think of things that we're looking at right now is taking data, and sending those as an example to your IT admins about giving them insight into how users are leveraging Teams. How do you improve that experience for them? So again, you drive that natural broad adoption but kind of assist them a little bit along the way. So tons of great examples around the board. I'm not sure if that fully answers your question but just the sky's the limit. I think of some other things we're looking at though, you'll see a lot coming in the form of transcription, translation, those services that really create inclusiveness which is a big focus for us. Again back to that point earlier, it's the intelligent workplace for everyone. We want to be able to provide services with our partnerships that can really reach anybody in the business world, right? And even in the consumer world in some sense. >> Well Jace, thanks so much for joining Stu and me on the program this afternoon. We're looking forward to hearing your keynote in the morning and sharing with us some of the excitement and things that are happening and announcements we're going to hear from Microsoft Teams tomorrow. >> My pleasure. Thank you so much for having me, appreciate it. >> Our pleasure, fFor Stu Miniman, I'm Lisa Martin. You're watching theCUBE's coverage of day one, Enterprise Connect 2019 from Orlando. Stick around, Stu and I will be right back with our next guest. (upbeat electronic jingle)
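For readers who want a concrete sense of the platform extensibility Moreno describes, the sketch below shows roughly what a simple recognition ("kudos") style bot looks like when built on Microsoft's Bot Framework SDK for Node.js (the botbuilder package), which Teams bots commonly use. This example is not from the interview: the KudosBot class name, trigger word, and reply text are illustrative assumptions, and the adapter wiring, hosting, Azure bot registration, and Teams app manifest are omitted.

```typescript
import { ActivityHandler, TurnContext } from "botbuilder";

// Hypothetical kudos bot: echoes recognition messages back into the
// conversation so wins stay visible, similar in spirit to the culture
// bots discussed in the interview.
class KudosBot extends ActivityHandler {
  constructor() {
    super();

    // Runs for every message the bot receives in a chat or channel it's added to.
    this.onMessage(async (context: TurnContext, next) => {
      const text = (context.activity.text ?? "").trim();

      if (text.toLowerCase().startsWith("kudos")) {
        // Post the recognition back to the conversation.
        await context.sendActivity(`🎉 ${text}`);
      } else {
        await context.sendActivity(
          "Try: kudos @teammate for helping ship the release!"
        );
      }

      await next();
    });

    // Greet members when the bot (or a new user) is added to the conversation.
    this.onMembersAdded(async (context, next) => {
      await context.sendActivity(
        "Hi! Send 'kudos @someone <reason>' and I'll share it with the team."
      );
      await next();
    });
  }
}

export { KudosBot };
```

In a real deployment, a handler like this is exposed on an HTTPS endpoint through a Bot Framework adapter and surfaced in Teams via an app manifest; messages sent in a channel where the bot is installed are then routed to the onMessage handler above.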

Published Date : Mar 19 2019

SUMMARY :

Lisa Martin and Stu Miniman talk with Jace Moreno, Microsoft Teams Developer Platform Lead, at Enterprise Connect 2019. They discuss Teams as a single, integrated place for collaboration; the platform's extensibility for partners such as Five9, ADP, Atlassian, Trello, and Disco; how customer-built integrations drive organic adoption; security; and upcoming AI-driven capabilities such as transcription and translation, ahead of Moreno's keynote with Lori Wright.


ENTITIES

Entity | Category | Confidence
Jace | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Lori Wright | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Jace Moreno | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
Jeffrey Snover | PERSON | 0.99+
2019 | DATE | 0.99+
two years | QUANTITY | 0.99+
10 people | QUANTITY | 0.99+
Stu | PERSON | 0.99+
Five9 | ORGANIZATION | 0.99+
181 people | QUANTITY | 0.99+
tomorrow morning | DATE | 0.99+
Dentsu | ORGANIZATION | 0.99+
last year | DATE | 0.99+
Three people | QUANTITY | 0.99+
Orlando | LOCATION | 0.99+
10,000 | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
50,000 person | QUANTITY | 0.99+
Orlando, Florida | LOCATION | 0.99+
three person | QUANTITY | 0.98+
over 250 | QUANTITY | 0.98+
PowerShell | TITLE | 0.98+
ADP | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
one tool | QUANTITY | 0.98+
Polly | PERSON | 0.97+
Enterprise Connect | ORGANIZATION | 0.96+
millions of people | QUANTITY | 0.96+
about 10 months | QUANTITY | 0.96+
Confluence Cloud | ORGANIZATION | 0.95+
first time | QUANTITY | 0.95+
one reason | QUANTITY | 0.95+
this afternoon | DATE | 0.94+
theCUBE | ORGANIZATION | 0.93+
Atlassian | ORGANIZATION | 0.91+
Disco | ORGANIZATION | 0.91+
one step | QUANTITY | 0.9+
single place | QUANTITY | 0.9+
this morning | DATE | 0.88+
one experience | QUANTITY | 0.88+
Microsoft Ignite | ORGANIZATION | 0.85+
couple examples | QUANTITY | 0.84+
DevOps | TITLE | 0.79+
first | QUANTITY | 0.77+
one thing | QUANTITY | 0.76+
one place | QUANTITY | 0.75+
Jira | ORGANIZATION | 0.74+
.NET | ORGANIZATION | 0.73+
couple | QUANTITY | 0.72+
Enterprise Connect | EVENT | 0.71+
day one | QUANTITY | 0.67+
Enterprise Connect 2019 | EVENT | 0.66+
a second | QUANTITY | 0.66+
Connect 2019 | TITLE | 0.66+
Enterprise Communication | ORGANIZATION | 0.64+
Enterprise Connect | TITLE | 0.61+

Wikibon Action Item | De-risking Digital Business | March 2018


 

>> Hi I'm Peter Burris. Welcome to another Wikibon Action Item. (upbeat music) We're once again broadcasting from theCube's beautiful Palo Alto, California studio. I'm joined here in the studio by George Gilbert and David Floyer. And then remotely, we have Jim Kobielus, David Vellante, Neil Raden and Ralph Finos. Hi guys. >> Hey. >> Hi >> How you all doing? >> This is a great, great group of people to talk about the topic we're going to talk about, guys. We're going to talk about the notion of de-risking digital business. Now, the reason why this becomes interesting is, the Wikibon perspective for quite some time has been that the difference between business and digital business is the role that data assets play in a digital business. Now, think about what that means. Every business institutionalizes its work around what it regards as its most important assets. A bottling company, for example, organizes around the bottling plant. A financial services company organizes around the regulatory impacts or limitations on how they share information and what is regarded as fair use of data and other resources, and assets. The same thing exists in a digital business. There's a difference between, say, Sears and Walmart. Walmart makes use of data differently than Sears, and the specific assets that are employed had a significant impact on how the retail business was structured. Along comes Amazon, which is even deeper in the use of data as a basis for how it conducts its business, and Amazon is institutionalizing work in quite different ways and has been incredibly successful. We could go on and on and on with a number of different examples of this, and we'll get into that. But what it means ultimately is that the tie between data and what is regarded as valuable in the business is becoming increasingly clear, even if it's not perfect. And so traditional approaches to de-risking data, through backup and restore, now need to be re-thought so that it's not just de-risking the data, it's de-risking the data assets. And, since those data assets are so central to the business operations of many of these digital businesses, what it means to de-risk the whole business. So, David Vellante, give us a starting point. How should folks think about this different approach to envisioning business? And digital business, and the notion of risk? >> Okay thanks Peter, I mean I agree with a lot of what you just said and I want to pick up on that. I see the future of digital business as really built around data, sort of agreeing with you, building on what you just said. Really where organizations are putting data at the core, and increasingly I believe that organizations that have traditionally relied on human expertise as the primary differentiator will be disrupted by companies where data is the fundamental value driver, and I think there are some examples of that and I'm sure we'll talk about it. And in this new world humans have expertise that leverages the organization's data model and creates value from that data with augmented machine intelligence. I'm not crazy about the term artificial intelligence. And you hear a lot about data-driven companies, and I think such companies are going to have a technology foundation that is increasingly described as autonomous, aware, anticipatory, and importantly in the context of today's discussion, self-healing. So able to withstand failures and recover very quickly.
So de-risking a digital business is going to require new ways of thinking about data protection and security and privacy. Specifically as it relates to data protection, I think it's going to be a fundamental component of the so-called data-driven company's technology fabric. This can be designed into applications, into data stores, into file systems, into middleware, and into infrastructure, as code. And many technology companies are going to try to attack this problem from a lot of different angles, trying to infuse machine intelligence into the hardware, software and automated processes. And the premise is that these companies will architect their technology foundations, not as a set of remote cloud services that they're calling, but rather as a ubiquitous set of functional capabilities that largely mimic a range of human activities, including storing, backing up, and virtually instantaneous recovery from failure. >> So let me build on that. So what you're kind of saying, if I can summarize, and we'll get into whether or not it's human expertise or some other approach or notion of business. But you're saying that increasingly patterns in the data are going to have absolutely consequential impacts on how a business ultimately behaves. We got that right? >> Yeah absolutely. And how you construct that data model, and provide access to the data model, is going to be a fundamental determinant of success. >> Neil Raden, does that mean that people are no longer important? >> Well no, no I wouldn't say that at all. I was talking with the head of a medical school a couple of weeks ago, and he said something that really resonated. He said that there are as many doctors who graduated at the bottom of their class as the top of their class. And I think that's true of organizations too. You know what, 20 years ago I had the privilege of interviewing Peter Drucker for an hour and he foresaw this, 20 years ago. He said that people who run companies have traditionally had IT departments that provided operational data, but they needed to start to figure out how to get value from that data, and not only get value from that data but get value from data outside the company, not just internal data. So he kind of saw this big data thing happening 20 years ago. Unfortunately, he had a prejudice for senior executives. You know, he never really thought about any other people in an organization except the highest people. And I think what we're talking about here is really the whole organization. I think that, I have some concerns about the ability of organizations to really implement this without a lot of fumbles. I mean it's fine to talk about the five digital giants, but there's a lot of companies out there where, you know, the bar isn't really that high for them to stay in business. And they just seem to get along. And I think if we're going to de-risk we really need to help companies understand the whole process of transformation, not just the technology. >> Well, take us through it. What is this process of transformation? That includes the role of technology but is bigger than the role of technology.
And I would say you start with assumptions, I call it assumption analysis. In other words, let's all get together and figure out what our assumptions are, and see if we can't line 'em up. Typically IT is not good at this. So I think it's going to require the help of a lot of practitioners who can guide them. >> So Dave Vellante, reconcile one point that you made. I want to come back to this notion of how we're moving from businesses built on expertise and people to businesses built on expertise resident as patterns in the data, or data models. Why is it that the most valuable companies in the world seem to be the ones that have the most real hardcore data scientists? Isn't that expertise and people? >> Yeah it is, and I think it's worth pointing out. Look, the stock market is volatile, but right now the top-five companies: Apple, Amazon, Google, Facebook and Microsoft, in terms of market cap, account for about $3.5 trillion, and there's a big distance between them, and they've clearly surpassed the big banks and the oil companies. Now again, that could change, but I believe that it's because they are data-driven. So-called data-driven. Does that mean they don't need humans? No, but human expertise surrounds the data, as opposed to most companies, where human expertise is at the center and the data lives in silos, and I think it's very hard to protect data, and leverage data, that lives in silos. >> Yes, so here's where I'll take exception to that, Dave. And I want to get everybody to build on top of this just very quickly. I think that human expertise has surrounded, in other businesses, the buildings. Or, the bottling plant. Or, the wealth management. Or, the platoon. So I think that the organization of assets has always been the determining factor of how a business behaves, and we institutionalized work, in other words where we put people, based on the business' understanding of assets. Do you disagree with that? Is that, are we wrong in that regard? I think data scientists are an example of reinstitutionalizing work around a very core asset, in this case, data. >> Yeah, you're saying that the most valuable asset is shifting from some of those physical assets, the bottling plant et cetera, to data. >> Yeah we are, we are. Absolutely. Alright, David Floyer. >> Neil: I'd like to come in. >> Panelist: I agree with that too. >> Okay, go ahead Neil. >> I'd like to give an example from the news. Cigna's acquisition of Express Scripts for $67 billion. Who the hell is Cigna, right? Connecticut General is just a sleepy life insurance company and INA was a second-tier property and casualty company. They merged a long time ago, they got into health insurance and suddenly, who's Express Scripts? I mean that's a company that nobody ever even heard of. They're a pharmacy benefit manager, what is that? They're an information management company, period. That's all they do. >> David Floyer, what does this mean from a technology standpoint? >> So I wanted to emphasize one thing that evolution has always taught us. That you have to be able to come from where you are. You have to be able to evolve from where you are and take the assets that you have. And the assets that people have are their current systems of record, other things like that. They must be able to evolve into the future to better utilize what those systems are. And the other thing I would like to say-- >> Let me give you an example just to interrupt you, because this is a very important point.
One of the primary reasons why the telecommunications companies, whom so many people believed, analysts believed, had this fundamental advantage, because so much information's flowing through them is when you're writing assets off for 30 years, that kind of locks you into an operational mode, doesn't it? >> Exactly. And the other thing I want to emphasize is that the most important thing is sources of data not the data itself. So for example, real-time data is very very important. So what is your source of your real-time data? If you've given that away to Google or your IOT vendor you have made a fundamental strategic mistake. So understanding the sources of data, making sure that you have access to that data, is going to enable you to be able to build the sort of processes and data digitalization. >> So let's turn that concept into kind of a Geoffrey Moore kind of strategy bromide. At the end of the day you look at your value proposition and then what activities are central to that value proposition and what data is thrown off by those activities and what data's required by those activities. >> Right, both internal-- >> We got that right? >> Yeah. Both internal and external data. What are those sources that you require? Yes, that's exactly right. And then you need to put together a plan which takes you from where you are, as the sources of data and then focuses on how you can use that data to either improve revenue or to reduce costs, or a combination of those two things, as a series of specific exercises. And in particular, using that data to automate in real-time as much as possible. That to me is the fundamental requirement to actually be able to do this and make money from it. If you look at every example, it's all real-time. It's real-time bidding at Google, it's real-time allocation of resources by Uber. That is where people need to focus on. So it's those steps, practical steps, that organizations need to take that I think we should be giving a lot of focus on. >> You mention Uber. David Vellante, we're just not talking about the, once again, talking about the Uberization of things, are we? Or is that what we mean here? So, what we'll do is we'll turn the conversation very quickly over to you George. And there are existing today a number of different domains where we're starting to see a new emphasis on how we start pricing some of this risk. Because when we think about de-risking as it relates to data give us an example of one. >> Well we were talking earlier, in financial services risk itself is priced just the way time is priced in terms of what premium you'll pay in terms of interest rates. But there's also something that's softer that's come into much more widely-held consciousness recently which is reputational risk. Which is different from operational risk. Reputational risk is about, are you a trusted steward for data? Some of that could be personal information and a use case that's very prominent now with the European GDPR regulation is, you know, if I ask you as a consumer or an individual to erase my data, can you say with extreme confidence that you have? That's just one example. >> Well I'll give you a specific number on that. We've mentioned it here on Action Item before. I had a conversation with a Chief Privacy Officer a few months ago who told me that they had priced out what the fines to Equifax would have been had the problem occurred after GDPR fines were enacted. It was $160 billion, was the estimate. 
There's not a lot of companies on the planet that could deal with a $160 billion liability. Like that. >> Okay, so we have a price now that might have been kind of, sort of mushy before. And the notion of trust hasn't really changed over time; what's changed is the technical implementations that support it. And in the old world with systems of record, we basically collected from our operational applications as much data as we could, put it in the data warehouse and its data mart satellites, and we tried to govern it within that perimeter. But now we know that data basically originates and goes just about anywhere. There's no well-defined perimeter. It's much more porous, far more distributed. You might think of it as a distributed data fabric, and the only way you can be a trusted steward of that is if you now, across the silos, without trying to centralize all the data that's in silos or across them, you can enforce who's allowed to access it, what they're allowed to do, audit who's done what to what type of data, when and where. And then there's a variety of approaches. Just to pick two, one is where it's discovery-oriented, to figure out what's going on with the data estate using machine learning; Alation is an example. And then there's another example, which is where you try and get everyone to plug into what's essentially a new system catalog. That's not in a deviant mesh, but that acts like the fabric for your data fabric, deviant mesh. >> That's an example of another, one of the properties of looking at coming at this. But when we think, Dave Vellante, coming back to you for a second. When we think about the conversation there's been a lot of presumption or a lot of bromide. Analysts like to talk about, don't get Uberized. We're not just talking about getting Uberized. We're talking about something a little bit different, aren't we? >> Well yeah, absolutely. I think Uber's going to get Uberized, personally. But I think there's a lot of evidence, I mentioned the big five, but if you look at Spotify, Waze, Airbnb, yes Uber, yes Twitter, Netflix, Bitcoin is an example, 23andMe. These are all examples of companies that, I'll go back to what I said before, are putting data at the core and building human expertise around that core to leverage that expertise. And I think it's easy to sit back, for some companies to sit back and say, "Well I'm going to wait and see what happens." But to me anyway, there's a big gap between kind of the haves and the have-nots. And I think that, that gap is around applying machine intelligence to data and applying cloud economics. Zero marginal cost economics and the API economy. An always-on sort of mentality, et cetera et cetera. And that's what the economy, in my view anyway, is going to look like in the future. >> So let me put out a challenge, Jim I'm going to come to you in a second, very quickly on some of the things that start looking like data assets. But today, when we talk about data protection we're talking about simply a whole bunch of applications and a whole bunch of devices just spinning that data off, so we have it at a third site, and then if there's a catastrophe, large or small, being able to restore it, often in hours or days. So we're talking about an improvement on RPO and RTO, but when we talk about data assets, and I'm going to come to you in a second with that David Floyer, but when we talk about data assets, we're talking about not only the data, the bits.
We're talking about the relationships and the organization, and the metadata, as being a key element of that. So David, I'm sorry, Jim Kobielus, just really quickly, thirty seconds. Models, what do they look like? What does the new nature of some of these assets look like? >> Well the new nature of these assets are the machine learning models that are driving so many business processes right now. And so really the core assets there are the data, obviously, from which they are developed and also from which they are trained, but also very much the knowledge of the data scientists and engineers who build and tune this stuff. And so really, what you need to do is, you need to protect that knowledge and grow that knowledge base of data science professionals in your organization, in a way that builds on it. And hopefully you keep the smartest people in house. And they can encode more of their knowledge in automated programs to manage the entire pipeline of development. >> We're not talking about files. We're not even talking about databases, are we David Floyer? We're talking about something different. Algorithms and models: are today's technologies really set up to do a good job of protecting the full organization of those data assets? >> I would say that they're not even being thought about yet. And going back to what Jim was saying, those data scientists are the only people who understand that, in the same way as in the year 2000, the COBOL programmers were the only people who understood what was going on inside those applications. And we as an industry have to allow organizations to be able to protect the assets inside their applications, and use AI if you like to actually understand what is in those applications and how they are working. And I think that's an incredibly important piece of de-risking: ensuring that you're not dependent on a few experts who could leave at any moment, in the same way as COBOL programmers could have left. >> But it's not just the data, and it's not just the metadata, it really is the data structure. >> It is the model. Just the whole way that this has been put together and the reason why. And the ability to continue to upgrade that and change that over time. So those assets are incredibly important, but at the moment there is no way that you can, there isn't technology available for you to actually protect those assets. >> So if I combine what you just said with what Neil Raden was talking about, David Vellante's put forward a good vision of what's required. Neil Raden's made the observation that this is going to be much more than technology. There's a lot of change, not change management at a low level inside the IT, but business change, and the technology companies also have to step up and be able to support this. We're seeing this, we're seeing a number of different vendor types start to enter into this space. Certainly storage guys, Dylon Sears talking about doing a better job of data protection; we're seeing middleware companies, TIBCO and DISCO, talk about doing this differently. We're seeing file systems, Scality, WekaIO, talk about doing this differently. Backup and restore companies, Veeam, Veritas. I mean, everybody's looking at this and they're all coming at it. Just really quickly David, where's the inside track at this point? >> For me it is so much whitespace as to be unbelievable.
The cost of moving data around an organization from inside to out, is crazy. >> So companies that keep data in place, or technologies to keep data in place, are going to have an advantage. >> Much, much, much greater advantage. Sure, there must be backups somewhere. But you need to keep the working copies of data where they are because it's the real-time access, usually that's important. So if it originates in the cloud, keep it in the cloud. If it originates in a data-provider, on another cloud, that's where you should keep it. If it originates on your premise, keep it where it originated. >> Unless you need to combine it. But that's a new origination point. >> Then you're taking subsets of that data and then combining that up for itself. So that would be my first point. So organizations are going to need to put together what George was talking about, this metadata of all the data, how it interconnects, how it's being used. The flow of data through the organization, it's amazing to me that when you go to an IT shop they cannot define for you how the data flows through that data center or that organization. That's the requirement that you have to have and AI is going to be part of that solution, of looking at all of the applications and the data and telling you where it's going and how it's working together. >> So the second thing would be companies that are able to build or conceive of networks as data. Will also have an advantage. And I think I'd add a third one. Companies that demonstrate perennial observations, a real understanding of the unbelievable change that's required you can't just say, oh Facebook wants this therefore everybody's going to want it. There's going to be a lot of push marketing that goes on at the technology side. Alright so let's get to some Action Items. David Vellante, I'll start with you. Action Item. >> Well the future's going to be one where systems see, they talk, they sense, they recognize, they control, they optimize. It may be tempting to say, you know what I'm going to wait, I'm going to sit back and wait to figure out how I'm going to close that machine intelligence gap. I think that's a mistake. I think you have to start now, and you have to start with your data model. >> George Gilbert, Action Item. >> I think you have to keep in mind the guardrails related to governance, and trust, when you're building applications on the new data fabric. And you can take the approach of a platform-oriented one where you're plugging into an API, like Apache Atlas, that Hortonworks is driving, or a discovery-oriented one as David was talking about which would be something like Alation, using machine learning. But if, let's say the use case starts out as an IOT, edge analytics and cloud inferencing, that data science pipeline itself has to now be part of this fabric. Including the output of the design time. Meaning the models themselves, so they can be managed. >> Excellent. Jim Kobielus, you've been pretty quiet but I know you've got a lot to offer. Action Item, Jim. >> I'll be very brief. What you need to do is protect your data science knowledge base. That's the way to de-risk this entire process. And that involves more than just a data catalog. You need a data science expertise registry within your distributed value chain. And you need to manage that as a very human asset that needs to grow. That is your number one asset going forward. >> Ralph Finos, you've also been pretty quiet. Action Item, Ralph. 
>> Yeah, I think you've got to be careful about what you're trying to get done. Whether it's, it depends on your industry, whether it's finance or whether it's the entertainment business, there are different requirements about data in those different environments. And you need to be cautious about that, and you need leadership on the executive business side of things. The last thing in the world you want to do is depend on data scientists to figure this stuff out. >> And I'll give you the second to last answer or Action Item. Neil Raden, Action Item. >> I think there's been a lot of progress lately in creating tools for data scientists to be more efficient, and they need to be, because the big digital giants are draining them from other companies. So that's very encouraging. But in general I think becoming a data-driven, digital transformation company, for most companies, is a big job, and I think they need to do it in piece parts, because if they try to do it all at once they're going to be in trouble. >> Alright, so that's great conversation guys. Oh, David Floyer, Action Item. David's looking at me saying, ah what about me? David Floyer, Action Item. >> (laughing) So my Action Item comes from an Irish proverb, which is, if you ask for directions they will always answer you, "I wouldn't start from here." So the Action Item that I have is, if somebody is coming in saying you have to re-do all of your applications and re-write them from scratch, and start in a completely different direction, that is going to be a 20-year job and you're not going to ever get it done. So you have to start from what you have, the digital assets that you have, and you have to focus on improving those with additional applications, additional data, using that as the foundation for how you build that business with a clear long-term view. And if you look at some of the examples that were given earlier, particularly in the insurance industry, that's what they did.
And use that as a basis for then starting to put forward plans for bringing technologies in that are capable of not just supporting the data and protecting the data, but protecting the overall organization of data in the form of these models, in the form of these relationships, so that the business can, as it creates these, as it throws off these new assets, treat them as the special resource that the business requires. Once that is in place, we'll start seeing businesses more successfully reorganize, reinstitutionalize the work around data, and it won't just be the big technology companies, who people call digital natives, that are well down this path. I want to thank George Gilbert, David Floyer here in the studio with me. David Vellante, Ralph Finos, Neil Raden and Jim Kobielus on the phone. Thanks very much guys. Great conversation. And that's been another Wikibon Action Item. (upbeat music)
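To make the panel's point about models as data assets a little more concrete, here is a minimal sketch of what capturing a model together with its lineage might look like. This is purely illustrative, not something the panel built or endorsed; the scikit-learn model choice, file names, and manifest fields are all assumptions.

```python
# Sketch: persist a model plus the lineage metadata that makes it a
# protectable data asset (training-data fingerprint, schema, metrics).
# Assumes scikit-learn and joblib are installed; paths are illustrative.
import hashlib
import json
from datetime import datetime, timezone

import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)

model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)

# Fingerprint the training data so provenance can be verified after a
# replication or restore; this is the "relationships" part of the asset.
data_digest = hashlib.sha256(X_tr.tobytes() + y_tr.tobytes()).hexdigest()

manifest = {
    "created_utc": datetime.now(timezone.utc).isoformat(),
    "algorithm": "LogisticRegression",
    "n_features": int(X_tr.shape[1]),
    "training_data_sha256": data_digest,
    "holdout_accuracy": float(accuracy_score(y_te, model.predict(X_te))),
}

# The model file and its manifest travel together; backing up one without
# the other loses exactly the context the panel says is hardest to protect.
joblib.dump(model, "churn_model.joblib")
with open("churn_model.manifest.json", "w") as fh:
    json.dump(manifest, fh, indent=2)
```

The idea is simply that the model artifact and the manifest describing where it came from are replicated and restored as a unit, so the organization of the data, not just the bits, survives a failure.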

Published Date : Mar 16 2018

SUMMARY :

Peter Burris leads a Wikibon Action Item discussion with George Gilbert and David Floyer in studio, and Jim Kobielus, David Vellante, Neil Raden, and Ralph Finos remotely, on de-risking digital business: why data assets, including machine learning models and the knowledge of the data scientists who build them, are becoming the core around which companies organize work, why traditional backup and restore must evolve to protect those assets and their relationships, the reputational and regulatory stakes illustrated by GDPR-scale fines, the case for keeping data close to its source, and practical action items for starting from the systems and data a business already has.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jim Kobielus | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
David Vellante | PERSON | 0.99+
David | PERSON | 0.99+
Apple | ORGANIZATION | 0.99+
Facebook | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Neil | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Walmart | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
David Floyer | PERSON | 0.99+
George Gilbert | PERSON | 0.99+
Jim Kobelius | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Jim | PERSON | 0.99+
Geoffrey Moore | PERSON | 0.99+
George | PERSON | 0.99+
Ralph Finos | PERSON | 0.99+
Neil Raden | PERSON | 0.99+
INA | ORGANIZATION | 0.99+
Equifax | ORGANIZATION | 0.99+
Sears | ORGANIZATION | 0.99+
Peter | PERSON | 0.99+
March 2018 | DATE | 0.99+
Uber | ORGANIZATION | 0.99+
TIBCO | ORGANIZATION | 0.99+
DISCO | ORGANIZATION | 0.99+
David Vallante | PERSON | 0.99+
$160 billion | QUANTITY | 0.99+
20-year | QUANTITY | 0.99+
30 years | QUANTITY | 0.99+
Ralph | PERSON | 0.99+
Dave | PERSON | 0.99+
Netflix | ORGANIZATION | 0.99+
Peter Drucker | PERSON | 0.99+
Express Scripts | ORGANIZATION | 0.99+
Veritas | ORGANIZATION | 0.99+
David Foyer | PERSON | 0.99+
Veeam | ORGANIZATION | 0.99+
$67 billion | QUANTITY | 0.99+
Palo Alto, California | LOCATION | 0.99+
first point | QUANTITY | 0.99+
thirty seconds | QUANTITY | 0.99+
second | QUANTITY | 0.99+
Spotify | ORGANIZATION | 0.99+
Twitter | ORGANIZATION | 0.99+
Connecticut General | ORGANIZATION | 0.99+
two things | QUANTITY | 0.99+
both | QUANTITY | 0.99+
about $3.5 trillion | QUANTITY | 0.99+
Hortonworks | ORGANIZATION | 0.99+
Cigna | ORGANIZATION | 0.99+
Both | QUANTITY | 0.99+
2000 | DATE | 0.99+
today | DATE | 0.99+
one | QUANTITY | 0.99+
Dylon Sears | ORGANIZATION | 0.98+

Day One Kickoff | BigData NYC 2017


 

(busy music) >> Announcer: Live from Midtown Manhattan, it's the Cube, covering Big Data New York City 2017, brought to you by SiliconANGLE Media and its ecosystem sponsors. >> Hello, and welcome to the special Cube presentation here in New York City for Big Data NYC, in conjunction with all the activity going on with Strata, Hadoop, Strata Data Conference right around the corner. This is the Cube's special annual event in New York City where we highlight all the trends, technology experts, thought leaders, entrepreneurs here inside the Cube. We have our three days of wall to wall coverage, evening event on Wednesday. I'm John Furrier, the co-host of the Cube, with Jim Kobielus, and Peter Burris will be here all week as well. Kicking off day one, Jim, the monster week of Big Data NYC, which now has turned into, essentially, the big data industry is a huge industry. But now, subsumed within a larger industry of AI, IoT, security. A lot of things have just sucked up the big data world that used to be the Hadoop world, and it just kept on disrupting, and creative disruption of the old guard data warehouse market, which now, looks pale in comparison to the disruption going on right now. >> The data warehouse market is very much vibrant and alive, as is the big data market continuing to innovate. But the innovations, John, have moved up the stack to artificial intelligence and deep learning, as you've indicated, driving more of the Edge applications in the new generation of mobile and smart appliances and things that are coming along like smart, self-driving vehicles and so forth. What we see is data professionals and developers are moving towards new frameworks, like TensorFlow and so forth, for development of the truly disruptive applications. But big data is the foundation. >> I mean, the developers are the key, obviously, open source is growing at an enormous rate. We just had the Linux Foundation, we now have the Open Source Summit, they have kind of rebranded that. They're going to see explosion from code from 64 million lines of code to billions of lines of code, exponential growth. But the bigger picture is that it's not just developers, it's the enterprises now who want hybrid cloud, they want cloud technology. I want to get your reaction to a couple of different threads. One is the notion of community based software, which is open source, extending into the enterprise. We're seeing things like blockchain is hot right now, security, two emerging areas that are overlapping in with big data. You obviously have classic data market, and then you've got AI. All these things kind of come in together, kind of just really putting at the center of all that, this core industry around community and software AI, particular. It's not just about machine learning anymore and data, it's a bigger picture. >> Yeah, in terms of a community, development with open source, much of what we see in the AI arena, for example, with the up and coming, they're all open source tools. There's TensorFlow, there's Cafe, there's Theano and so forth. What we're seeing is not just the frameworks for developing AI that are important, but the entire ecosystem of community based development of capabilities to automate the acquisition of training data, which is so critically important for tuning AI, for its designated purpose, be it doing predictions and abstractions. DevOps, what are coming into being are DevOps frameworks to span the entire life cycle of the creation and the training and deployment and iteration of AI. 
What we're going to see is, like at the last Spark Summit, there was a very interesting discussion from a Stanford researcher, new open source tools that they're developing out in, actually, in Berkeley, I understand, for, related to development of training data in a more automated fashion for these new challenges. The communities are evolving up the stack to address these requirements with fairly bleeding edge capabilities that will come in the next few years into the mainstream. >> I had a chat with a big time CTO last night, he worked at some of the big web scale company, I won't say the name, give it away. But basically, he asked me a question about IoT, how real is it, and obviously, it's hyped up big time, though. But the issue in all this new markets like IoT and AI is the role of security, because a lot of enterprises are looking at the IoT, certainly in the industrial side has the most relevant low hanging fruit, but at the end of the day, the data modeling, as you're pointing out, becomes a critical thing. Connecting IoT devices to, say, an IP network sounds trivial in concept, but at the end of the day, the surface area for security is oak expose, that's causing people to stop what they're doing, not deploying it as fast. You're seeing kind of like people retrenching and replatforming at the core data centers, and then leveraging a lot of cloud, which is why Azure is hot, Microsoft Ignite Event is pretty hot this week. Role of cloud, role of data in IoT. Is IoT kind of stalled in your mind? Or is it bloating? >> I wouldn't say it's stalled or that it's bloating, but IoT is definitely coming along as the new development focus. For the more disruptive applications that can derive more intelligence directly to the end points that can take varying degrees of automated action to achieve results, but also to very much drive decision support in real time to people on their mobiles or in whatever. What I'm getting at is that IoT is definitely a reality in the real world in terms of our lives. It's definitely a reality in terms of the index generation of data applications. But there's a lot of the back end in terms of readying algorithms and in training data for deployment of really high quality IoT applications, Edge applications, that hasn't come together yet in any coherent practice. >> It's emerging, it's emerging. >> It's emerging. >> It's a lot more work to do. OK, we're going to kick off day one, we've got some great guests, we see Rob Bearden in the house, Rob Thomas from IBM. >> Rob Bearden from Hortonworks. >> Rob Bearden from Hortonworks, and Rob Thomas from IBM. I want to bring up, Rob wrote a book just recently. He wrote Big Data Revolution, but he also wrote a new book called, Every Company is a Tech Company. But he mentions, he kind of teases out this concept of a renaissance, so I want to get your thoughts on this. If you look at Strata, Hadoop, Strata Data, the O'Reilly Conference, which has turned into like a marketing machine, right. A lot of hype there. But as the community model grows up, you're starting to see a renaissance of real creative developers, you're starting to see, not just open source, pure, full stack developers doing all the heavy lifting, but real creative competition, in a renaissance, that's really the key. You're seeing a lot more developer action, tons outside of the, what was classically called the data space. The role of data and how it relates to the developer phenomenon that's going on right now. >> Yeah, it's the maker culture. 
Rob, in fact, about a year or more ago, IBM, at one of their events, they held a very maker oriented event, I think they called it Datapalooza at one point. What it's looking at, what's going on is it's more than just classic software developers are coming to the fore. When you're looking at IoT or Edge applications, it's hardware developers, it's UX developers, it's developers and designers who are trying to change and drive data driven applications into changing the very fabric of how things are done in the real world. What Peter Burris, we had a wiki about him called Programming in the Real World. What that all involves is there's a new set of skill sets that are coming together to develop these applications. It's well beyond just simply software development, it's well beyond simply data scientists. Maker culture. >> Programming in the real world is a great concept, because you need real time, which comes back down to this. I'm looking for this week from the guests we talked to, what their view is of the data market right now. Because if you want to get real time, you've got to move from that batch world to the real time world. I'm not saying batch is over, you've still got to store data, and that's growing at an exponential rate as well. But real time data, how do you use data in real time, how do the modelings work, how do you scale that. How do you take a DevOps culture to the data world is what I'm looking for. What are you looking for this week? >> What I'm looking for this week, I'm looking for DevOps solutions or platforms or environments for teams of data scientists who are building and training and deploying and evaluating, iterating deep learning and machine learning and natural language processing applications in a continuous release pipeline, and productionizing them. At Wikibon, we are going deeper in that whole notion of DevOps for data science. I mean, IBM's called it inside ops, others call it data ops. What we're seeing across the board is that more and more of our customers are focusing on how do we bring it all together, so the maker culture. >> Operationalizing it. >> Operationalizing it, so that the maker cultures that they have inside their value chain can come together and there's a standard pattern workflow of putting this stuff out and productionizing it, AI productionized in the real world. >> Moving in from the proof of concept notion to actually just getting things done, putting it out in the network, and then bringing it to the masses with operational support. >> Right, like the good folks at IBM with Watson data platform, on some levels, is a DevOPs for data science platform, but it's a collaborative environment. That's what I'm looking to see, and there's a lot of other solution providers who are going down that road. >> I mean, to me, if people have the community traction, that is the new benchmark, in my opinion. You heard it here on the Cube. Community continues to scale, you can start seeing it moving out of open source, you're seeing things like blockchain, you're seeing a decentralized Internet now happening everywhere, not just distributed but decentralized. When you have decentralization, community and software really shine. It's the Cube here in New York City all week. Stay with us for wall to wall coverage through Thursday here in New York City for Big Data NYC, in conjunction with Strata Data, this is the Cube, we'll be back with more coverage after this short break. 
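As a rough illustration of the "DevOps for data science" pipelines Jim describes, the sketch below shows the kind of automated train, evaluate, and gate step such a pipeline might run before promoting a model. It is a hypothetical example; the dataset, metric threshold, and file names are assumptions rather than anything IBM or Wikibon prescribes.

```python
# Sketch: one stage of a model release pipeline -- retrain, evaluate on a
# holdout, and only "promote" the artifact if it clears a quality gate.
# Threshold, paths, and model choice are illustrative assumptions.
import sys

import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

PROMOTION_THRESHOLD = 0.95  # the gate this pipeline stage enforces

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print(f"holdout AUC: {auc:.4f}")
if auc < PROMOTION_THRESHOLD:
    # Fail the pipeline stage; the previous production model stays live.
    sys.exit(f"model rejected: AUC {auc:.4f} below {PROMOTION_THRESHOLD}")

# Passed the gate: write the artifact that downstream deployment picks up.
joblib.dump(model, "candidate_model.joblib")
print("model promoted to candidate_model.joblib")
```

In a real continuous release pipeline this stage would run on every change to the model code or training data, with the artifact handed to deployment only when the gate passes.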
(busy music) (serious electronic music) (peaceful music) >> Hi, I'm John Furrier, the Co-founder of SiliconANGLE Media, and Co-host of the Cube. I've been in the tech business since I was 19, first programming on mini computers in a large enterprise, and then worked at IBM and Hewlett Packard, a total of nine years in the enterprise, various jobs from programming, training, consulting, and ultimately, as an executive sales person, and then started my first company in 1997, and moved to Silicon Valley in 1999. I've been here ever since. I've always loved technology, and I love covering emerging technology. I was trained as a software developer and love business. I love the impact of software and technology to business. To me, creating technology that starts a company and creates value and jobs is probably one of the most rewarding things I've ever been involved in. I bring that energy to the Cube, because the Cube is where all the ideas are, and where the experts are, where the people are. I think what's most exciting about the Cube is that we get to talk to people who are making things happen, entrepreneurs, CEO of companies, venture capitalists, people who are really, on a day in and day out basis, building great companies. In the technology business, there's just not a lot real time live TV coverage, and the Cube is a non-linear TV operation. We do everything that the TV guys on cable don't do. We do longer interviews, we ask tougher questions. We ask, sometimes, some light questions. We talk about the person and what they feel about. It's not prompted and scripted, it's a conversation, it's authentic. For shows that have the Cube coverage, it makes the show buzz, it creates excitement. More importantly, it creates great content, great digital assets that can be shared instantaneously to the world. Over 31 million people have viewed the Cube, and that is the result of great content, great conversations. I'm so proud to be part of the Cube with a great team. Hi, I'm John Furrier, thanks for watching the Cube. >> Announcer: Coming up on the Cube, Tekan Sundar, CTO of Wine Disco. Live Cube coverage from Big Data NYC 2017 continues in a moment. >> Announcer: Coming up on the Cube, Donna Prlich, Chief Product Officer at Pentaho. Live Cube coverage from Big Data New York City 2017 continues in a moment. >> Announcer: Coming up on the Cube, Amit Walia, Executive Vice President and Chief Product Officer at Informatica. Live Cube coverage from Big Data New York City continues in a moment. >> Announcer: Coming up on the Cube, Prakash Nodili, Co-founder and CEO of Pexif. Live Cube coverage from Big Data New York City continues in a moment. (serious electronic music)

Published Date : Sep 27 2017

SUMMARY :

John Furrier and Jim Kobielus kick off theCUBE's coverage of Big Data NYC 2017, held alongside the Strata Data Conference in New York City, discussing how the big data market has been subsumed into a larger wave of AI, IoT, and security, the move up the stack to deep learning frameworks like TensorFlow, the rise of the maker culture and community-driven open source, the security concerns slowing IoT adoption, and the need for DevOps-style pipelines that operationalize data science, with guests including Rob Bearden of Hortonworks and Rob Thomas of IBM on the day's schedule.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jim Kobielus | PERSON | 0.99+
Donna Prlich | PERSON | 0.99+
Rob Bearden | PERSON | 0.99+
Amit Walia | PERSON | 0.99+
Rob Thomas | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Prakash Nodili | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Jim | PERSON | 0.99+
1997 | DATE | 0.99+
Berkeley | LOCATION | 0.99+
Silicon Valley | LOCATION | 0.99+
1999 | DATE | 0.99+
Hewlett Packard | ORGANIZATION | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
Thursday | DATE | 0.99+
New York City | LOCATION | 0.99+
John | PERSON | 0.99+
nine years | QUANTITY | 0.99+
Hortonworks | ORGANIZATION | 0.99+
Wednesday | DATE | 0.99+
Rob | PERSON | 0.99+
Pexif | ORGANIZATION | 0.99+
Tekan Sundar | PERSON | 0.99+
Linux Foundation | ORGANIZATION | 0.99+
first company | QUANTITY | 0.99+
first | QUANTITY | 0.99+
three days | QUANTITY | 0.99+
Wikibon | ORGANIZATION | 0.99+
Datapalooza | EVENT | 0.99+
64 million lines | QUANTITY | 0.98+
NYC | LOCATION | 0.98+
Midtown Manhattan | LOCATION | 0.98+
Big Data | ORGANIZATION | 0.98+
19 | QUANTITY | 0.98+
this week | DATE | 0.97+
Over 31 million people | QUANTITY | 0.97+
Spark Summit | EVENT | 0.97+
last night | DATE | 0.97+
Open Source Summit | EVENT | 0.97+
Strata | EVENT | 0.96+
One | QUANTITY | 0.96+
Programming in the Real World | TITLE | 0.96+
Big Data | EVENT | 0.96+
Informatica | ORGANIZATION | 0.96+
day one | QUANTITY | 0.96+
Strata Data | ORGANIZATION | 0.95+
two emerging areas | QUANTITY | 0.95+
billions of lines | QUANTITY | 0.93+
Microsoft | ORGANIZATION | 0.93+
TensorFlow | TITLE | 0.92+
Strata Data Conference | EVENT | 0.92+
Day One | QUANTITY | 0.92+
Live Cube | COMMERCIAL_ITEM | 0.92+
Cube | ORGANIZATION | 0.91+
Every Company is a Tech Company | TITLE | 0.9+
Azure | TITLE | 0.9+
about a year or more ago | DATE | 0.9+
Cube | COMMERCIAL_ITEM | 0.9+
2017 | EVENT | 0.89+
Wine Disco | ORGANIZATION | 0.89+
Big Data Revolution | TITLE | 0.88+
Strata | ORGANIZATION | 0.88+
Theano | TITLE | 0.88+
Watson | ORGANIZATION | 0.85+
DevOps | TITLE | 0.84+
Ignite Event | EVENT | 0.84+

Brett Rudenstein - Hadoop Summit 2014 - theCUBE - #HadoopSummit


 

the cube and hadoop summit 2014 is brought to you by anchor sponsor Hortonworks we do have do and headline sponsor when disco we make hadoop invincible okay welcome back and when we're here at the dupe summit live is looking valance the cube our flagship program we go out to the events expect a signal from noise i'm john per year but Jeff Rick drilling down on the topics we're here with wind disco welcome welcome Brett room Stein about senior director tell us what's going on for you guys I'll see you at big presence here so all the guys last night you guys have a great great booth so causing and the crew what's happening yeah I mean the show is going is going very well what's really interesting is we have a lot of very very technical individuals approaching us they're asking us you know some of the tougher more technical in-depth questions about how our consensus algorithm is able to do all this distributor replication which is really great because there's a little bit of disbelief and then of course we get to do the demonstration for them and then suspend disbelief if you will and and I think the the attendance has been great for our brief and okay I always get that you always we always have the geek conversations you guys are a very technical company Jeff and I always comment certainly de volada and Jeff Kelly that you know when disco doesn't has has their share pair of geeks and that dudes who know they're talking about so I'm sure you get that but now them in the business side you talk to customers I want to get into more the outcome that seems to be the show focused this year is a dupe of serious what are some of the outcomes then your customers are talking about when they get you guys in there what are their business issues what are they tore what are they working on to solve yeah I mean I think the first thing is to look at you know why they're looking at us and then and then with the particular business issues that we solve and the first thing and sort of the trend that we're starting to see is the prospects and the customers that we have are looking at us because of the data that they have and its data that matters so it's important data and that's when people start to come to is that's when they look to us as they have data that's very important to them in some cases if you saw some of the UCI stuff you see that the data is you know doing live monitoring of various you know patient activity where it's not just about about about a life and monitoring a life but potentially about saving the life and systems that go down not only can't save lives but they can potentially lose them so you have a demos you want to jump into this demo here what is this all about you know the demo that the demonstration that I'm going to do for you today is I want to show you our non-stop a new product i'm going to show you how we can basically stand up a single HDFS or a single Hadoop cluster across multiple data centers and I think that's one of the tough things that people are really having trouble getting their heads wrapped around because most people when they do multi data center Hadoop they tend to do two different clusters and then synchronize the data between the two of them the way they do that is they'll use you know flume or they'll use some form of parallel ingest they'll use technologies like dis CP to copy data between the data centers and each one of those has sort of an administrative burden on them and then some various flaws in their and their underlying architecture that don't allow 
them to do a really really detailed job as ensuring that all blocks are replicated properly that no mistakes are ever made and again there's the administrative burden you know somebody who always has to have eyes in the system we alleviate all those things so I think the first thing I want to start off with we had somebody come to our booth and we were talking about this consensus algorithm that we that we perform and the way we synchronize multiple name nodes across multiple geographies and and again and that sort of spirit of disbelief I said you know one of the key tenants of our application is it doesn't underlie it doesn't change the behavior of the application when you go from land scope to win scope and so I said for example if you create a file in one data center and 3,000 miles apart or 7,000 miles apart from that you were to hit the same create file operation you would expect that the right thing happens what somebody gets the file created and somebody gets file already exists even if at 7,000 miles distance they both hit this button at the exact same time I'm going to do a very quick demonstration of that for you here I'm going to put a file into HDFS the my top right-hand window is in Northern Virginia and then 3,000 miles distance from that my bottom right-hand window is in Oregon I'm going to put the etsy hosts file into a temp directory in Hadoop at the exact same time 3,000 miles distance apart and you'll see that exact behavior so I've just launched them both and again if you look at the top window the file is created if you look at the bottom window it says file already exists it's exactly what you'd expect a land scope up a landscape application and the way you'd expect it to behave so that is how we are ensure consistency and that was the question that the prospect has at that distance even the speed of light takes a little time right so what are some of the tips and tricks you can share this that enable you guys to do this well one of the things that we're doing is where our consensus algorithm is a majority quorum based algorithm it's based off of a well-known consensus algorithm called paxos we have a number of significant enhancements innovations beyond that dynamic memberships you know automatic scale and things of that nature but in this particular case every transaction that goes into our system gets a global sequence number and what we're able to do is ensure that those sequence numbers are executed in the correct order so you can't create you know you can't put a delete before a create you know everything has to happen in the order that it actually happened occurred in regardless of the UN distance between data centers so what is the biggest aha moment you get from customer you show them the demo is it is that the replication is availability what is the big big feature focus that they jump on yeah I think I think the biggest ones are basically when we start crashing nodes well we're running jobs we separate the the link between the win and maybe maybe I'll just do that for you now so let's maybe kick into the demonstration here what I have here is a single HDFS cluster it is spanning two geographic territory so it's one cluster in Northern Virginia part of it and the other part is in Oregon I'm going to drill down into the graphing application here and inside you see all of the name notes so you see I have three name nodes running in Virginia three name nodes running in Oregon and the demonstration is as follows I'm going to I'm going to run Terrigen and Terra 
>> So what is the biggest aha moment you get from a customer when you show them the demo? Is it the replication? Is it the availability? What is the big feature they jump on? >> Yeah, I think the biggest ones are basically when we start crashing nodes while we're running jobs, or when we sever the WAN link between the sites, and maybe I'll just do that for you now. So let's kick into the demonstration here. What I have here is a single HDFS cluster spanning two geographic territories: part of the one cluster is in Northern Virginia and the other part is in Oregon. I'm going to drill down into the graphing application here, and inside you see all of the NameNodes; you see I have three NameNodes running in Virginia and three NameNodes running in Oregon. The demonstration is as follows: I'm going to run TeraGen and TeraSort. In other words, I'm going to create some data in the cluster, I'm then going to sort it into a total order, and then I'm going to run TeraValidate in the alternate data center and prove that all the blocks replicated from one side to the other. However, along the way I'm going to create some failures. I am going to kill some of the active NameNodes during this replication process, I am going to shut down the WAN link between the two data centers during the replication process, and then I'll show you how we heal from those kinds of conditions, because our algorithm treats failure as a first-class citizen, so there's really no special failure case to deal with in the system, if you will. >> So let's start unplugging things. >> So let's go ahead and run the TeraGen and the TeraSort; I'm going to put it in a directory called cube1. We're creating about 400 megabytes of data, a fairly small set that we're going to replicate between the two data centers. Now, the first thing you see over here on the right-hand side is that all of these NameNodes kind of sprung to life. That is because in an active-active configuration with multiple NameNodes, clients actually load balance their requests across all of them. Also, it's a synchronous namespace, so any change that I make to one immediately occurs on all of them. The next thing you might notice in the graphing application is these blue lines, and only in the Oregon data center. The blue lines essentially represent what we call a foreign block: a block that has not yet made its way across the wide area network from the site of ingest. Now, we move these blocks asynchronously from the site of ingest so that I have LAN-speed performance; in fact, you can see I just finished the TeraGen part of the application, all the while pushing data across the wide area network as fast as possible. Now, as we start to get into the next phase of the application here, which is going to run TeraSort, I'm going to start creating some failures in the environment. The first thing I'm going to do is pick two NameNodes: I'm going to fail a local NameNode, and then we're also going to fail a remote NameNode. So let's pick one of these. I'm going to pick hdp2, that's the name of the machine, so I'll ssh into hdp2 and I'm just going to reboot that machine. As I hit the reboot button, the next time the graphing application updates, what you'll notice here in the monitor is a flat line; it's no longer taking any data in. But if you're watching the application on the right-hand side, there's no interruption of the service; the application is going to continue to run. You'd expect that to happen in a LAN-scope cluster, but remember, this is a single cluster at WAN scope with 3,000 miles between the two halves. So I've killed one of the six active NameNodes. The next thing I'm going to do is kill one of the NameNodes over in the Oregon data center, so I'm going to ssh into, I don't know, let's pick the bottom one, hdp9 in this case, and again do another reboot operation. So I've just rebooted two of the six NameNodes while running the job, but again, if you look in the upper right-hand corner, the job running in Oregon and the job running in North Virginia continue without any interruption, and you can see we just went from 84 to 88 percent on the MapReduce and so forth. So again, uninterrupted; that's what I like to call continuous availability at WAN distances.
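The workflow just narrated (generate data, sort it, then validate it from the other data center) corresponds to the standard TeraGen, TeraSort and TeraValidate examples that ship with Hadoop. A rough driver sketch follows; the row count and directory names are illustrative, and in the demo the validate step is launched from the Oregon side of the same cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.examples.terasort.TeraGen;
import org.apache.hadoop.examples.terasort.TeraSort;
import org.apache.hadoop.examples.terasort.TeraValidate;
import org.apache.hadoop.util.ToolRunner;

public class TeraWorkflow {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // ~400 MB of 100-byte rows, written into an illustrative /cube1 tree.
        ToolRunner.run(conf, new TeraGen(),      new String[] {"4000000", "/cube1/gen"});
        // Sort the generated rows into a total order.
        ToolRunner.run(conf, new TeraSort(),     new String[] {"/cube1/gen", "/cube1/sorted"});
        // Check the sorted output is globally ordered; in the demo this step
        // is run in the alternate data center against the replicated blocks.
        ToolRunner.run(conf, new TeraValidate(), new String[] {"/cube1/sorted", "/cube1/report"});
    }
}
```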
>> Continuous availability at WAN distances, because that's really important; drill down on that. >> Yeah, I think if you look at the difference: what people traditionally call high availability means that, generally speaking, the system is there, there is a very short time during which the system will be unavailable, and then it will come available again. A continuously available system ensures that regardless of the failures that happen around it, the system is always up and running; something is able to take the request. And in a leaderless system like ours, where no single node actually holds a leadership role, we're able to continue the replication and we're also able to continue the coordination. >> So those are two distinct things: high availability, which everyone kind of knows and loves, and which is expensive, and then continuous availability, which is a bit of a son or cousin of it, I guess you could say. Can you put that in context in terms of cost and implementation? >> From the perspective of a WANdisco deployment, it's a continuously available system, even though people look at us as somewhat traditional disaster recovery, because we are replicating data to another data center. But remember, it's active-active: that means both data centers are able to write at the same time, and you get to maximize your cluster resources. And again, if we go back to one of the first questions you asked, what are customers doing with this, what do prospects want to do: they want to maximize their resource investment. If they have half a million dollars sitting in another data center that is only able to perform in an emergency recovery situation, that means they either have to (a) scale the primary data center, or (b), which is what they want to do, utilize the existing resource in an active-active configuration, which is why I say continuous availability: they're able to do that in both data centers, maximizing all their resources. >> Versus the consequences of not having that, which would be? >> The consequences of not being able to do that are that you have a one-way synchronization; a disaster occurs, you then have to bring that data center online, you have to make sure that all the appropriate resources are there, and you have an administrative burden, meaning a lot of people have to go into action very quickly. >> With the WANdisco system, what would that look like in terms of time, effort and cost? Do you have any kind of order of magnitude, like a day, a week, call some guy up, dude, get in the office and log in? >> You have to look at individual customers' service level agreements. A number that I hear thrown out very often is about 16 hours: we can be back online within 16 hours. Really, the RTO for a WANdisco deployment is essentially zero, because both sites are active; you're able to essentially continue without any downtime. >> Some would say that's the difference between continuous availability and high availability, because it's essentially zero versus 16 hours; I mean, any time down is bad, but 16 hours is huge. >> Yeah, and that's the service level agreement; then everyone says, but we know we can do it in five hours. The other part of that, of course, is ensuring that once a year somebody runs through the emergency configuration procedure to know that they truly can be back up online within the service-level-agreement timeframe. So again, there's a tremendous amount of effort that goes into the ongoing administration.
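The continuous-availability claim rests on the majority-quorum idea mentioned earlier: an operation is agreed once more than half of the coordination nodes acknowledge it, so rebooting a minority of NameNodes, as in the demo, never blocks progress. The toy sketch below illustrates only that general Paxos-style principle; the Acceptor interface and its methods are hypothetical and are not WANdisco's actual engine.

```java
import java.util.List;

public class QuorumSketch {

    // Hypothetical stand-in for a coordination node.
    interface Acceptor {
        boolean acknowledge(long proposalNumber) throws Exception;
    }

    // A proposal is "agreed" once a strict majority has acknowledged it,
    // so unreachable or crashed nodes simply do not count against it.
    static boolean isAgreed(List<Acceptor> acceptors, long proposalNumber) {
        int acks = 0;
        for (Acceptor a : acceptors) {
            try {
                if (a.acknowledge(proposalNumber)) {
                    acks++;
                }
            } catch (Exception unreachable) {
                // A rebooted NameNode or a severed WAN link lands here.
            }
        }
        return acks > acceptors.size() / 2;
    }
}
```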
>> Some great comments here on our CrowdChat at crowdchat.net/hadoopsummit; join the conversation. We have one that says, nice, he's talking about how the system handles latency, the demo is pretty cool, the map was an excellent visual. Dave Vellante just weighed in and said he did a survey with Jeff Kelly: a large portion, twenty-seven percent of respondents, said lack of enterprise-grade availability was the biggest barrier to adoption. Is this what you're referring to? >> Yeah, this is exactly what we're seeing. People are not able to meet the uptime requirements, and therefore applications stay in proof-of-concept mode, or those that make it out of proof of concept are heavily burdened by administrators and a large team to ensure the same level of uptime that can be handled, without error, through a software configuration like WANdisco's. >> Another comment, from Burt; thanks, Burt, for watching: there's availability, how about security? >> Yeah, security is a good one. Of course, we run on standard Hadoop distributions, and as such, if you want to run your cluster with on-wire encryption, that's okay; if you want to run your cluster with Kerberos authentication, that's fine. We fully support those environments. >> We've got new use cases and more questions coming in on the CrowdChat, so send them in; we're watching the CrowdChat at crowdchat.net/hadoopsummit, great questions. I think people have a hard time parsing HA versus continuous availability, because you can get confused between the two. Is it semantics, or is it infrastructure concerns? How do you differentiate between those two definitions? >> I think part of it is semantics, but also, from a WANdisco perspective, we like to differentiate because there really isn't that moment of downtime; there really isn't that switchover moment where something has to fail over and then go somewhere else. That's why I use the words continuous availability: the system is able to simply continue operating, with clients load balancing their requests to the available nodes. In a similar fashion, when you have multiple data centers, as I do here, I'm able to continue operations simply by running the jobs in the alternate data center. Remember that it's active-active, so any data ingested on one side immediately transfers to the other. So maybe let me do the next part. I showed you one failure scenario, and you've seen that all the nodes have actually come back online and self-healed. For the next part of this I want to do a separation, and I want to run it again, so let me kick that off; I'm going to create another directory structure here, only this time I'm going to actually chop the network link between the two data centers. And after I do that, I'm going to show you some of our new products in the works and give you a demonstration of that as well. >> While that's running, Brett, what are some of the applications this enables people to use Hadoop for that they were afraid to before? >> Well, when we look at our customer base and our prospects who are evaluating our technologies, it opens up all the regulated industries: things like pharmaceutical companies, financial services companies, healthcare companies, all these people who have strict regulations and auditing requirements, and who now have a very clear, concise way to prove not only that they're replicating data, but that the data has actually made its way across. They can prove that it's in both locations, and not just that it's in both locations, but that it's the correct data.
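One concrete way to check that a replica really is the correct data, and not just a file of the same name, is to compare HDFS checksums from both sites. A minimal sketch follows; the NameNode endpoints and the path are hypothetical, and matching checksums assume both sites use the same block and CRC chunk sizes.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CrossSiteChecksum {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical endpoints for the two halves of the cluster.
        FileSystem virginia = FileSystem.get(URI.create("hdfs://nn-virginia.example.com:8020"), conf);
        FileSystem oregon   = FileSystem.get(URI.create("hdfs://nn-oregon.example.com:8020"), conf);

        Path p = new Path("/cube1/sorted/part-r-00000");
        FileChecksum east = virginia.getFileChecksum(p);
        FileChecksum west = oregon.getFileChecksum(p);

        // Equal checksums mean the two copies are byte-for-byte the same file.
        System.out.println(east != null && east.equals(west) ? "identical" : "diverged");
    }
}
```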
Sometimes we see, in cases like DistCp copying files between data centers, that a file isn't actually copied because the tool thinks it's the same, yet there is a slight difference between the two. When the clusters diverge like that, it's days of administration, depending on the size of the cluster, to figure out what went wrong and what is different, and then of course you have to involve multiple users to figure out which of the two files you have is the correct one to keep. So let me go ahead and stop the WAN link here. Of course, with WANdisco technology there's nothing to keep track of; you simply allow the system to do HDFS replication, because it is essentially native HDFS. So I've stopped the tunnel between the two data centers while running this job. One of the things you're going to see on the left-hand side is that it looks like all the nodes no longer respond; of course, that's just because I have no visibility into those nodes. They're no longer replicating any data, because the tunnel between the two has been shut down. But if you look on the right-hand side of the application, in the upper right-hand window, you see that the MapReduce job is still running; it's unaffected. And what's interesting is that once I start the tunnel up again between the two data centers, I'll immediately start replicating data, and this happens at the block level. Again, when we look at other copy technologies, they work at the file level, so if you had a large file, say 10 gigabytes in size, and for some reason your transfer crashed when you were seventy percent of the way through, you're starting that whole transfer again. Because we're doing block replication, if seventy percent of your blocks had already gone through, like perhaps what I've done here, then when I start the tunnel back up, which I'm going to do now, what's going to happen of course is that we just continue from those blocks that simply haven't made their way across the network. So I've started the tunnel back up, and you'll see the monitor springs back to life. All the NameNodes will have to resync, since they've been out of sync for some period of time; they'll learn any transactions that they missed, they'll heal themselves back into the cluster, and we immediately start replicating blocks. And then, to show you the bidirectional nature of this, I'm going to run TeraValidate in the opposite data center, over in Oregon, and I'll just do it on that first directory we created. What you'll see is that we now wind up with foreign blocks on both sides: I'm running applications at the same time across data centers, in a fully active-active configuration, in a single Hadoop cluster. >> Okay, so the question on that one is, what's the net-net? Summarize that demo real quick, bottom line, in two sentences. >> The bottom line is that if NameNodes fail, or if the WAN fails, you are still continuously operational.
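The block-level resume behavior described above can be pictured as a simple set difference: after the WAN link comes back, only the blocks the remote site has not yet confirmed are sent, rather than restarting whole files the way a file-level copy would. The sketch below is purely illustrative; the block-ID sets and the surrounding transfer machinery are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;

public class BlockResumeSketch {

    // Blocks still owed to the remote site = blocks we hold locally
    // minus blocks the remote site has already confirmed receiving.
    static Set<Long> blocksStillToSend(Set<Long> localBlocks, Set<Long> remoteConfirmed) {
        Set<Long> missing = new HashSet<>(localBlocks);
        missing.removeAll(remoteConfirmed);
        return missing;
    }

    public static void main(String[] args) {
        Set<Long> local = Set.of(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L);
        Set<Long> confirmed = Set.of(1L, 2L, 3L, 4L, 5L, 6L, 7L); // ~70% already across
        // Only the remaining ~30% is re-sent after the link is restored.
        System.out.println(blocksStillToSend(local, confirmed));
    }
}
```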
>> Okay, we have questions from the commentary here on the CrowdChat: does this eliminate the need for backup, and what is actually transferring? Certainly not petabytes of data? >> You somewhat have to transfer what's important, so if it were important for you to transfer a petabyte of data, then you would need the bandwidth to support the transfer of a petabyte of data. >> We talk to a lot of Hollywood studios; we were at OpenStack Summit and that was a big concern. A lot of people are moving to the cloud for workflow and for optimization. The Star Wars guys were telling us off the record that the new film is in remote locations; they set up data centers basically in the desert and actually provisioned infrastructure. So, huge issues. >> Yeah, absolutely. So what we're replicating, of course, is HDFS. In this particular case I'm replicating all the data in this fairly small cluster between the two sites; this demo is only between two sites, but I could add a third site, and then a failure between any two would still allow complete availability across all the other sites that still participate in the algorithm. >> Brett, great to have you on. I want to get the perspective from you, in the trenches, out with customers: what's going on at WANdisco? Tell us about the culture there, what's going on at the company, what's it like to work there, what are the people like? I mean, we know some of the folks there because we always drink some vodka with them, you know, they like to tip one back once in a while, but great guys, great geeks. What's it like at WANdisco? >> I think you touched on a little piece of it at first: there are a lot of smart people at WANdisco. In fact, when I first came on board I was like, wow, I'm probably the least smart person at this company. But culturally this is a great group of people; they like to work very hard, but equally they like to play very hard, and as you said, I've been out with them several times myself; these are all great people to be out with. The culture is great, it's a great place to work, and people who are interested should certainly take a look. >> Great culture, and it fits in here. We were talking last night; it's a very social crowd here, out with the Hortonworks guys, and I just saw some of them walk up, IBM is here, people are really sociable. This event really has a camaraderie feel to it, yet it's serious business. Back in the day it was all a bunch of geeks building an industry, and now it's got everyone's attention: Cisco's here, Intel's here, IBM's here. What's your take on the big guys coming in? >> I think the big guys realize that Hadoop, the elephant, is as large as it appears; the elephant is in the room, it's exciting, and everybody wants a little piece of it. >> As well they should want a piece of it. Brett, thanks for coming on theCUBE, really appreciate it. WANdisco, you guys are a great company, we love having your support, thanks for supporting theCUBE, we appreciate it. We'll be right back after this short break with our next guest. Thank you.

Published Date : Jun 4 2014

