Image Title

Search Results for REST:

How to Make a Data Fabric "Smart": A Technical Demo With Jess Jowdy


 

>> Okay, so now that we've heard Scott talk about smart data fabrics, it's time to see this in action. Right now we're joined by Jess Jowdy, who's the manager of Healthcare Field Engineering at InterSystems. She's going to give a demo of how smart data fabrics actually work, and she's going to show how embedding a wide range of analytics capabilities including data exploration, business intelligence natural language processing, and machine learning directly within the fabric, makes it faster and easier for organizations to gain new insights and power intelligence, predictive and prescriptive services and applications. Now, according to InterSystems, smart data fabrics are applicable across many industries from financial services to supply chain to healthcare and more. Jess today is going to be speaking through the lens of a healthcare focused demo. Don't worry, Joe Lichtenberg will get into some of the other use cases that you're probably interested in hearing about. That will be in our third segment, but for now let's turn it over to Jess. Jess, good to see you. >> Hi. Yeah, thank you so much for having me. And so for this demo we're really going to be bucketing these features of a smart data fabric into four different segments. We're going to be dealing with connections, collections, refinements and analysis. And so we'll see that throughout the demo as we go. So without further ado, let's just go ahead and jump into this demo and you'll see my screen pop up here. I actually like to start at the end of the demo. So I like to begin by illustrating what an end user's going to see and don't mind the screen 'cause I gave you a little sneak peek of what's about to happen. But essentially what I'm going to be doing is using Postman to simulate a call from an external application. So we talked about being in the healthcare industry. This could be for instance, a mobile application that a patient is using to view an aggregated summary of information across that patient's continuity of care or some other kind of application. So we might be pulling information in this case from an electronic medical record. We might be grabbing clinical history from that. We might be grabbing clinical notes from a medical transcription software or adverse reaction warnings from a clinical risk grouping application and so much more. So I'm really going to be assimilating a patient logging on in on their phone and retrieving this information through this Postman call. So what I'm going to do is I'm just going to hit send, I've already preloaded everything here and I'm going to be looking for information where the last name of this patient is Simmons and their medical record number their patient identifier in the system is 32345. And so as you can see I have this single JSON payload that showed up here of just relevant clinical information for my patient whose last name is Simmons all within a single response. So fantastic, right? Typically though when we see responses that look like this there is an assumption that this service is interacting with a single backend system and that single backend system is in charge of packaging that information up and returning it back to this caller. But in a smart data fabric architecture we're able to expand the scope to handle information across different, in this case, clinical applications. So how did this actually happen? Let's peel back another layer and really take a look at what happened in the background. What you're looking at here is our mission control center for our smart data fabric. On the left we have our APIs that allow users to interact with particular services. On the right we have our connections to our different data silos. And in the middle here we have our data fabric coordinator which is going to be in charge of this refinement and analysis those key pieces of our smart data fabric. So let's look back and think about the example we just showed. I received an inbound request for information for a patient whose last name is Simmons. My end user is requesting to connect to that service and that's happening here at my patient data retrieval API location. Users can define any number of different services and APIs depending on their use cases. And to that end we do also support full lifecycle API management within this platform. When you're dealing with APIs I always like to make a little shout out on this that you really want to make sure you have enough like a granular enough security model to handle and limit which APIs and which services a consumer can interact with. In this IRIS platform, which we're talking about today we have a very granular role-based security model that allows you to handle that, but it's really important in a smart data fabric to consider who's accessing your data and in what contact. >> Can I just interrupt you for a second? >> Yeah, please. >> So you were showing on the left hand side of the demo a couple of APIs. I presume that can be a very long list. I mean, what do you see as typical? >> I mean you can have hundreds of these APIs depending on what services an organization is serving up for their consumers. So yeah, we've seen hundreds of these services listed here. >> So my question is, obviously security is critical in the healthcare industry and API securities are really hot topic these days. How do you deal with that? >> Yeah, and I think API security is interesting 'cause it can happen at so many layers. So there's interactions with the API itself. So can I even see this API and leverage it? And then within an API call, you then have to deal with all right, which end points or what kind of interactions within that API am I allowed to do? What data am I getting back? And with healthcare data, the whole idea of consent to see certain pieces of data is critical. So the way that we handle that is, like I said, same thing at different layers. There is access to a particular API, which can happen within the IRIS product and also we see it happening with an API management layer, which has become a really hot topic with a lot of organizations. And then when it comes to data security, that really happens under the hood within your smart data fabric. So that role-based access control becomes very important in assigning, you know, roles and permissions to certain pieces of information. Getting that granular becomes the cornerstone of security. >> And that's been designed in, >> Absolutely, yes. it's not a bolt-on as they like to say. Okay, can we get into collect now? >> Of course, we're going to move on to the collection piece at this point in time, which involves pulling information from each of my different data silos to create an overall aggregated record. So commonly each data source requires a different method for establishing connections and collecting this information. So for instance, interactions with an EMR may require leveraging a standard healthcare messaging format like FIRE, interactions with a homegrown enterprise data warehouse for instance may use SQL for a cloud-based solutions managed by a vendor. They may only allow you to use web service calls to pull data. So it's really important that your data fabric platform that you're using has the flexibility to connect to all of these different systems and and applications. And I'm about to log out so I'm going to keep my session going here. So therefore it's incredibly important that your data fabric has the flexibility to connect to all these different kinds of applications and data sources and all these different kinds of formats and over all of these different kinds of protocols. So let's think back on our example here. I had four different applications that I was requesting information for to create that payload that we saw initially. Those are listed here under this operations section. So these are going out and connecting to downstream systems to pull information into my smart data fabric. What's great about the IRIS platform is it has an embedded interoperability platform. So there's all of these native adapters that can support these common connections that we see for different kinds of applications. So using REST or SOAP or SQL or FTP regardless of that protocol there's an adapter to help you work with that. And we also think of the types of formats that we typically see data coming in as, in healthcare we have H7, we have FIRE we have CCDs across the industry. JSON is, you know, really hitting a market strong now and XML, payloads, flat files. We need to be able to handle all of these different kinds of formats over these different kinds of protocols. So to illustrate that, if I click through these when I select a particular connection on the right side panel I'm going to see the different settings that are associated with that particular connection that allows me to collect information back into my smart data fabric. In this scenario, my connection to my chart script application in this example communicates over a SOAP connection. When I'm grabbing information from my clinical risk grouping application I'm using a SQL based connection. When I'm connecting to my EMR I'm leveraging a standard healthcare messaging format known as FIRE, which is a rest based protocol. And then when I'm working with my health record management system I'm leveraging a standard HTTP adapter. So you can see how we can be flexible when dealing with these different kinds of applications and systems. And then it becomes important to be able to validate that you've established those connections correctly and be able to do it in a reliable and quick way. Because if you think about it, you could have hundreds of these different kinds of applications built out and you want to make sure that you're maintaining and understanding those connections. So I can actually go ahead and test one of these applications and put in, for instance my patient's last name and their MRN and make sure that I'm actually getting data back from that system. So it's a nice little sanity check as we're building out that data fabric to ensure that we're able to establish these connections appropriately. So turnkey adapters are fantastic, as you can see we're leveraging them all here, but sometimes these connections are going to require going one step further and building something really specific for an application. So let's, why don't we go one step further here and talk about doing something custom or doing something innovative. And so it's important for users to have the ability to develop and go beyond what's an out of the box or black box approach to be able to develop things that are specific to their data fabric or specific to their particular connection. In this scenario, the IRIS data platform gives users access to the entire underlying code base. So you cannot, you not only get an opportunity to view how we're establishing these connections or how we're building out these processes but you have the opportunity to inject your own kind of processing your own kinds of pipelines into this. So as an example, you can leverage any number of different programming languages right within this pipeline. And so I went ahead and I injected Python. So Python is a very up and coming language, right? We see more and more developers turning towards Python to do their development. So it's important that your data fabric supports those kinds of developers and users that have standardized on these kinds of programming languages. This particular script here, as you can see actually calls out to our turnkey adapters. So we see a combination of out of the box code that is provided in this data fabric platform from IRIS combined with organization specific or user specific customizations that are included in this Python method. So it's a nice little combination of how do we bring the developer experience in and mix it with out of the box capabilities that we can provide in a smart data fabric. >> Wow. >> Yeah, I'll pause. >> It's a lot here. You know, actually, if I could >> I can pause. >> If I just want to sort of play that back. So we went through the connect and the collect phase. >> And the collect, yes, we're going into refine. So it's a good place to stop. >> Yeah, so before we get there, so we heard a lot about fine grain security, which is crucial. We heard a lot about different data types, multiple formats. You've got, you know the ability to bring in different dev tools. We heard about FIRE, which of course big in healthcare. >> Absolutely. >> And that's the standard and then SQL for traditional kind of structured data and then web services like HTTP you mentioned. And so you have a rich collection of capabilities within this single platform. >> Absolutely, and I think that's really important when you're dealing with a smart data fabric because what you're effectively doing is you're consolidating all of your processing, all of your collection into a single platform. So that platform needs to be able to handle any number of different kinds of scenarios and technical challenges. So you've got to pack that platform with as many of these features as you can to consolidate that processing. >> All right, so now we're going into refine. >> We're going into refinement, exciting. So how do we actually do refinement? Where does refinement happen and how does this whole thing end up being performant? Well the key to all of that is this SDF coordinator or stands for smart data fabric coordinator. And what this particular process is doing is essentially orchestrating all of these calls to all of these different downstream systems. It's aggregating, it's collecting that information it's aggregating it and it's refining it into that single payload that we saw get returned to the user. So really this coordinator is the main event when it comes to our data fabric. And in the IRIS platform we actually allow users to build these coordinators using web-based tool sets to make it intuitive. So we can take a sneak peek at what that looks like and as you can see it follows a flow chart like structure. So there's a start, there is an end and then there are these different arrows that point to different activities throughout the business process. And so there's all these different actions that are being taken within our coordinator. You can see an action for each of the calls to each of our different data sources to go retrieve information. And then we also have the sync call at the end that is in charge of essentially making sure that all of those responses come back before we package them together and send them out. So this becomes really crucial when we're creating that data fabric. And you know, this is a very simple data fabric example where we're just grabbing data and we're consolidating it together. But you can have really complex orchestrators and coordinators that do any number of different things. So for instance, I could inject SQL Logic into this or SQL code, I can have conditional logic, I can do looping, I can do error trapping and handling. So we're talking about a whole number of different features that can be included in this coordinator. So like I said, we have a really very simple process here that's just calling out, grabbing all those different data elements from all those different data sources and consolidating it. We'll look back at this coordinator in a second when we introduce or we make this data fabric a bit smarter and we start introducing that analytics piece to it. So this is in charge of the refinement. And so at this point in time we've looked at connections, collections, and refinements. And just to summarize what we've seen 'cause I always like to go back and take a look at everything that we've seen. We have our initial API connection we have our connections to our individual data sources and we have our coordinators there in the middle that are in charge of collecting the data and refining it into a single payload. As you can imagine, there's a lot going on behind the scenes of a smart data fabric, right? There's all these different processes that are interacting. So it's really important that your smart data fabric platform has really good traceability, really good logging 'cause you need to be able to know, you know, if there was an issue, where did that issue happen, in which connected process and how did it affect the other processes that are related to it. In IRIS, we have this concept called a visual trace. And what our clients use this for is basically to be able to step through the entire history of a request from when it initially came into the smart data fabric to when data was sent back out from that smart data fabric. So I didn't record the time but I bet if you recorded the time it was this time that we sent that request in. And you can see my patient's name and their medical record number here and you can see that that instigated four different calls to four different systems and they're represented by these arrows going out. So we sent something to chart script to our health record management system, to our clinical risk grouping application into my EMR through their FIRE server. So every request, every outbound application gets a request and we pull back all of those individual pieces of information from all of those different systems and we bundle them together. And for my FIRE lovers, here's our FIRE bundle that we got back from our FIRE server. So this is a really good way of being able to validate that I am appropriately grabbing the data from all these different applications and then ultimately consolidating it into one payload. Now we change this into a JSON format before we deliver it, but this is those data elements brought together. And this screen would also be used for being able to see things like error trapping or errors that were thrown alerts, warnings, developers might put log statements in just to validate that certain pieces of code are executing. So this really becomes the one stop shop for understanding what's happening behind the scenes with your data fabric. >> Etcher, who did what, when, where what did the machine do? What went wrong and where did that go wrong? >> Exactly. >> Right in your fingertips. >> Right, and I'm a visual person so a bunch of log files to me is not the most helpful. Well, being able to see this happened at this time in this location gives me that understanding I need to actually troubleshoot a problem. >> This business orchestration piece, can you say a little bit more about that? How people are using it? What's the business impact of the business orchestration? >> The business orchestration, especially in the smart data fabric is really that crucial part of being able to create a smart data fabric. So think of your business orchestrator as doing the heavy lifting of any kind of processing that involves data, right? It's bringing data in, it's analyzing that information, it's transforming that data, in a format that your consumer's not going to understand it's doing any additional injection of custom logic. So really your coordinator or that orchestrator that sits in the middle is the brains behind your smart data fabric. >> And this is available today? This all works? >> It's all available today. Yeah, it all works. And we have a number of clients that are using this technology to support these kinds of use cases. >> Awesome demo. Anything else you want to show us? >> Well we can keep going. 'Cause right now, I mean we can, oh, we're at 18 minutes. God help us. You can cut some of this. (laughs) I have a lot to say, but really this is our data fabric. The core competency of IRIS is making it smart, right? So I won't spend too much time on this but essentially if we go back to our coordinator here we can see here's that original that pipeline that we saw where we're pulling data from all these different systems and we're collecting it and we're sending it out. But then we see two more at the end here which involves getting a readmission prediction and then returning a prediction. So we can not only deliver data back as part of a smart data fabric but we can also deliver insights back to users and consumers based on data that we've aggregated as part of a smart data fabric. So in this scenario, we're actually taking all that data that we just looked at and we're running it through a machine learning model that exists within the smart data fabric pipeline and producing a readmission score to determine if this particular patient is at risk for readmission within the next 30 days. Which is a typical problem that we see in the healthcare space. So what's really exciting about what we're doing in the IRIS world is we're bringing analytics close to the data with integrated ML. So in this scenario we're actually creating the model, training the model, and then executing the model directly within the IRIS platform. So there's no shuffling of data, there's no external connections to make this happen. And it doesn't really require having a PhD in data science to understand how to do that. It leverages all really basic SQL like syntax to be able to construct and execute these predictions. So it's going one step further than the traditional data fabric example to introduce this ability to define actionable insights to our users based on the data that we've brought together. >> Well that readmission probability is huge. >> Yes. >> Right, because it directly affects the cost of for the provider and the patient, you know. So if you can anticipate the probability of readmission and either do things at that moment or you know, as an outpatient perhaps to minimize the probability then that's huge. That drops right to the bottom line. >> Absolutely, absolutely. And that really brings us from that data fabric to that smart data fabric at the end of the day which is what makes this so exciting. >> Awesome demo. >> Thank you. >> Fantastic people, are you cool? If people want to get in touch with you? >> Oh yes, absolutely. So you can find me on LinkedIn, Jessica Jowdy and we'd love to hear from you. I always love talking about this topic, so would be happy to engage on that. >> Great stuff, thank you Jess, appreciate it. >> Thank you so much. >> Okay, don't go away because in the next segment we're going to dig into the use cases where data fabric is driving business value. Stay right there.

Published Date : Feb 15 2023

SUMMARY :

for organizations to gain new insights And to that end we do also So you were showing hundreds of these APIs in the healthcare industry So the way that we handle that it's not a bolt-on as they like to say. that data fabric to ensure that we're able It's a lot here. So we went through the So it's a good place to stop. the ability to bring And so you have a rich collection So that platform needs to we're going into refine. that are related to it. so a bunch of log files to of being able to create this technology to support Anything else you want to show us? So in this scenario, we're Well that readmission and the patient, you know. to that smart data fabric So you can find me on you Jess, appreciate it. because in the next segment

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jessica JowdyPERSON

0.99+

Joe LichtenbergPERSON

0.99+

InterSystemsORGANIZATION

0.99+

Jess JowdyPERSON

0.99+

ScottPERSON

0.99+

JessPERSON

0.99+

18 minutesQUANTITY

0.99+

hundredsQUANTITY

0.99+

32345OTHER

0.99+

PythonTITLE

0.99+

SimmonsPERSON

0.99+

eachQUANTITY

0.99+

IRISORGANIZATION

0.99+

third segmentQUANTITY

0.99+

EtcherORGANIZATION

0.99+

todayDATE

0.99+

LinkedInORGANIZATION

0.98+

SQLTITLE

0.98+

single platformQUANTITY

0.98+

oneQUANTITY

0.98+

JSONTITLE

0.96+

each data sourceQUANTITY

0.96+

singleQUANTITY

0.95+

one stepQUANTITY

0.94+

one stepQUANTITY

0.94+

single backendQUANTITY

0.92+

single responseQUANTITY

0.9+

two moreQUANTITY

0.85+

single payloadQUANTITY

0.84+

SQL LogicTITLE

0.84+

a secondQUANTITY

0.83+

IRISTITLE

0.83+

four different segmentsQUANTITY

0.82+

PostmanPERSON

0.78+

FIRETITLE

0.77+

SOAPTITLE

0.76+

four different applicationsQUANTITY

0.74+

one stopQUANTITY

0.74+

PostmanTITLE

0.73+

one payloadQUANTITY

0.72+

each ofQUANTITY

0.71+

RESTTITLE

0.7+

Healthcare Field EngineeringORGANIZATION

0.67+

next 30 daysDATE

0.65+

fourQUANTITY

0.63+

these APIsQUANTITY

0.62+

secondQUANTITY

0.54+

GodPERSON

0.53+

everyQUANTITY

0.53+

servicesQUANTITY

0.51+

H7COMMERCIAL_ITEM

0.5+

applicationQUANTITY

0.48+

FIREORGANIZATION

0.38+

XMLTITLE

0.38+

DockerCon2021 Keynote


 

>>Individuals create developers, translate ideas to code, to create great applications and great applications. Touch everyone. A Docker. We know that collaboration is key to your innovation sharing ideas, working together. Launching the most secure applications. Docker is with you wherever your team innovates, whether it be robots or autonomous cars, we're doing research to save lives during a pandemic, revolutionizing, how to buy and sell goods online, or even going into the unknown frontiers of space. Docker is launching innovation everywhere. Join us on the journey to build, share, run the future. >>Hello and welcome to Docker con 2021. We're incredibly excited to have more than 80,000 of you join us today from all over the world. As it was last year, this year at DockerCon is 100% virtual and 100% free. So as to enable as many community members as possible to join us now, 100%. Virtual is also an acknowledgement of the continuing global pandemic in particular, the ongoing tragedies in India and Brazil, the Docker community is a global one. And on behalf of all Dr. Khan attendees, we are donating $10,000 to UNICEF support efforts to fight the virus in those countries. Now, even in those regions of the world where the pandemic is being brought under control, virtual first is the new normal. It's been a challenging transition. This includes our team here at Docker. And we know from talking with many of you that you and your developer teams are challenged by this as well. So to help application development teams better collaborate and ship faster, we've been working on some powerful new features and we thought it would be fun to start off with a demo of those. How about it? Want to have a look? All right. Then no further delay. I'd like to introduce Youi Cal and Ben, gosh, over to you and Ben >>Morning, Ben, thanks for jumping on real quick. >>Have you seen the email from Scott? The one about updates and the docs landing page Smith, the doc combat and more prominence. >>Yeah. I've got something working on my local machine. I haven't committed anything yet. I was thinking we could try, um, that new Docker dev environments feature. >>Yeah, that's cool. So if you hit the share button, what I should do is it will take all of your code and the dependencies and the image you're basing it on and wrap that up as one image for me. And I can then just monitor all my machines that have been one click, like, and then have it side by side, along with the changes I've been looking at as well, because I was also having a bit of a look and then I can really see how it differs to what I'm doing. Maybe I can combine it to do the best of both worlds. >>Sounds good. Uh, let me get that over to you, >>Wilson. Yeah. If you pay with the image name, I'll get that started up. >>All right. Sen send it over >>Cheesy. Okay, great. Let's have a quick look at what you he was doing then. So I've been messing around similar to do with the batter. I've got movie at the top here and I think it looks pretty cool. Let's just grab that image from you. Pick out that started on a dev environment. What this is doing. It's just going to grab the image down, which you can take all of the code, the dependencies only get brunches working on and I'll get that opened up in my idea. Ready to use. It's a here close. We can see our environment as my Molly image, just coming down there and I've got my new idea. >>We'll load this up and it'll just connect to my dev environment. There we go. It's connected to the container. So we're working all in the container here and now give it a moment. What we'll do is we'll see what changes you've been making as well on the code. So it's like she's been working on a landing page as well, and it looks like she's been changing the banner as well. So let's get this running. Let's see what she's actually doing and how it looks. We'll set up our checklist and then we'll see how that works. >>Great. So that's now rolling. So let's just have a look at what you use doing what changes she had made. Compare those to mine just jumped back into my dev container UI, see that I've got both of those running side by side with my changes and news changes. Okay. So she's put Molly up there rather than mobi or somebody had the same idea. So I think in a way I can make us both happy. So if we just jumped back into what we'll do, just add Molly and Moby and here I'll save that. And what we can see is, cause I'm just working within the container rather than having to do sort of rebuild of everything or serve, or just reload my content. No, that's straight the page. So what I can then do is I can come up with my browser here. Once that's all refreshed, refresh the page once hopefully, maybe twice, we should then be able to see your refresh it or should be able to see that we get Malia mobi come up. So there we go, got Molly mobi. So what we'll do now is we'll describe that state. It sends us our image and then we'll just create one of those to share with URI or share. And we'll get a link for that. I guess we'll send that back over to you. >>So I've had a look at what you were doing and I'm actually going to change. I think that might work for both of us. I wondered if you could take a look at it. If I send it over. >>Sounds good. Let me grab the link. >>Yeah, it's a dev environment link again. So if you just open that back in the doc dashboard, it should be able to open up the code that I've changed and then just run it in the same way you normally do. And that shouldn't interrupt what you're already working on because there'll be able to run side by side with your other brunch. You already got, >>Got it. Got it. Loading here. Well, that's great. It's Molly and movie together. I love it. I think we should ship it. >>Awesome. I guess it's chip it and get on with the rest of.com. Wasn't that cool. Thank you Joey. Thanks Ben. Everyone we'll have more of this later in the keynote. So stay tuned. Let's say earlier, we've all been challenged by this past year, whether the COVID pandemic, the complete evaporation of customer demand in many industries, unemployment or business bankruptcies, we all been touched in some way. And yet, even to miss these tragedies last year, we saw multiple sources of hope and inspiration. For example, in response to COVID we saw global communities, including the tech community rapidly innovate solutions for analyzing the spread of the virus, sequencing its genes and visualizing infection rates. In fact, if all in teams collaborating on solutions for COVID have created more than 1,400 publicly shareable images on Docker hub. As another example, we all witnessed the historic landing and exploration of Mars by the perseverance Rover and its ingenuity drone. >>Now what's common in these examples, these innovative and ambitious accomplishments were made possible not by any single individual, but by teams of individuals collaborating together. The power of teams is why we've made development teams central to Docker's mission to build tools and content development teams love to help them get their ideas from code to cloud as quickly as possible. One of the frictions we've seen that can slow down to them in teams is that the path from code to cloud can be a confusing one, riddle with multiple point products, tools, and images that need to be integrated and maintained an automated pipeline in order for teams to be productive. That's why a year and a half ago we refocused Docker on helping development teams make sense of all this specifically, our goal is to provide development teams with the trusted content, the sharing capabilities and the pipeline integrations with best of breed third-party tools to help teams ship faster in short, to provide a collaborative application development platform. >>Everything a team needs to build. Sharon run create applications. Now, as I noted earlier, it's been a challenging year for everyone on our planet and has been similar for us here at Docker. Our team had to adapt to working from home local lockdowns caused by the pandemic and other challenges. And despite all this together with our community and ecosystem partners, we accomplished many exciting milestones. For example, in open source together with the community and our partners, we open sourced or made major contributions to many projects, including OCI distribution and the composed plugins building on these open source projects. We had powerful new capabilities to the Docker product, both free and subscription. For example, support for WSL two and apple, Silicon and Docker, desktop and vulnerability scanning audit logs and image management and Docker hub. >>And finally delivering an easy to use well-integrated development experience with best of breed tools and content is only possible through close collaboration with our ecosystem partners. For example, this last year we had over 100 commercialized fees, join our Docker verified publisher program and over 200 open source projects, join our Docker sponsored open source program. As a result of these efforts, we've seen some exciting growth in the Docker community in the 12 months since last year's Docker con for example, the number of registered developers grew 80% to over 8 million. These developers created many new images increasing the total by 56% to almost 11 million. And the images in all these repositories were pulled by more than 13 million monthly active IP addresses totaling 13 billion pulls a month. Now while the growth is exciting by Docker, we're even more excited about the stories we hear from you and your development teams about how you're using Docker and its impact on your businesses. For example, cancer researchers and their bioinformatics development team at the Washington university school of medicine needed a way to quickly analyze their clinical trial results and then share the models, the data and the analysis with other researchers they use Docker because it gives them the ease of use choice of pipeline tools and speed of sharing so critical to their research. And most importantly to the lives of their patients stay tuned for another powerful customer story later in the keynote from Matt fall, VP of engineering at Oracle insights. >>So with this last year behind us, what's next for Docker, but challenge you this last year of force changes in how development teams work, but we felt for years to come. And what we've learned in our discussions with you will have long lasting impact on our product roadmap. One of the biggest takeaways from those discussions that you and your development team want to be quicker to adapt, to changes in your environment so you can ship faster. So what is DACA doing to help with this first trusted content to own the teams that can focus their energies on what is unique to their businesses and spend as little time as possible on undifferentiated work are able to adapt more quickly and ship faster in order to do so. They need to be able to trust other components that make up their app together with our partners. >>Docker is doubling down and providing development teams with trusted content and the tools they need to use it in their applications. Second, remote collaboration on a development team, asking a coworker to take a look at your code used to be as easy as swiveling their chair around, but given what's happened in the last year, that's no longer the case. So as you even been hinted in the demo at the beginning, you'll see us deliver more capabilities for remote collaboration within a development team. And we're enabling development team to quickly adapt to any team configuration all on prem hybrid, all work from home, helping them remain productive and focused on shipping third ecosystem integrations, those development teams that can quickly take advantage of innovations throughout the ecosystem. Instead of getting locked into a single monolithic pipeline, there'll be the ones able to deliver amps, which impact their businesses faster. >>So together with our ecosystem partners, we are investing in more integrations with best of breed tools, right? Integrated automated app pipelines. Furthermore, we'll be writing more public API APIs and SDKs to enable ecosystem partners and development teams to roll their own integrations. We'll be sharing more details about remote collaboration and ecosystem integrations. Later in the keynote, I'd like to take a moment to share with Docker and our partners are doing for trusted content, providing development teams, access to content. They can trust, allows them to focus their coding efforts on what's unique and differentiated to that end Docker and our partners are bringing more and more trusted content to Docker hub Docker official images are 160 images of popular upstream open source projects that serve as foundational building blocks for any application. These include operating systems, programming, languages, databases, and more. Furthermore, these are updated patch scan and certified frequently. So I said, no image is older than 30 days. >>Docker verified publisher images are published by more than 100 commercialized feeds. The image Rebos are explicitly designated verify. So the developers searching for components for their app know that the ISV is actively maintaining the image. Docker sponsored open source projects announced late last year features images for more than 200 open source communities. Docker sponsors these communities through providing free storage and networking resources and offering their community members unrestricted access repos for businesses allow businesses to update and share their apps privately within their organizations using role-based access control and user authentication. No, and finally, public repos for communities enable community projects to be freely shared with anonymous and authenticated users alike. >>And for all these different types of content, we provide services for both development teams and ISP, for example, vulnerability scanning and digital signing for enhanced security search and filtering for discoverability packaging and updating services and analytics about how these products are being used. All this trusted content, we make available to develop teams for them directly to discover poll and integrate into their applications. Our goal is to meet development teams where they live. So for those organizations that prefer to manage their internal distribution of trusted content, we've collaborated with leading container registry partners. We announced our partnership with J frog late last year. And today we're very pleased to announce our partnerships with Amazon and Miranda's for providing an integrated seamless experience for joint for our joint customers. Lastly, the container images themselves and this end to end flow are built on open industry standards, which provided all the teams with flexibility and choice trusted content enables development teams to rapidly build. >>As I let them focus on their unique differentiated features and use trusted building blocks for the rest. We'll be talking more about trusted content as well as remote collaboration and ecosystem integrations later in the keynote. Now ecosystem partners are not only integral to the Docker experience for development teams. They're also integral to a great DockerCon experience, but please join me in thanking our Dr. Kent on sponsors and checking out their talks throughout the day. I also want to thank some others first up Docker team. Like all of you this last year has been extremely challenging for us, but the Docker team rose to the challenge and worked together to continue shipping great product, the Docker community of captains, community leaders, and contributors with your welcoming newcomers, enthusiasm for Docker and open exchanges of best practices and ideas talker, wouldn't be Docker without you. And finally, our development team customers. >>You trust us to help you build apps. Your businesses rely on. We don't take that trust for granted. Thank you. In closing, we often hear about the tenant's developer capable of great individual feeds that can transform project. But I wonder if we, as an industry have perhaps gotten this wrong by putting so much emphasis on weight, on the individual as discussed at the beginning, great accomplishments like innovative responses to COVID-19 like landing on Mars are more often the results of individuals collaborating together as a team, which is why our mission here at Docker is delivered tools and content developers love to help their team succeed and become 10 X teams. Thanks again for joining us, we look forward to having a great DockerCon with you today, as well as a great year ahead of us. Thanks and be well. >>Hi, I'm Dana Lawson, VP of engineering here at get hub. And my job is to enable this rich interconnected community of builders and makers to build even more and hopefully have a great time doing it in order to enable the best platform for developers, which I know is something we are all passionate about. We need to partner across the ecosystem to ensure that developers can have a great experience across get hub and all the tools that they want to use. No matter what they are. My team works to build the tools and relationships to make that possible. I am so excited to join Scott on this virtual stage to talk about increasing developer velocity. So let's dive in now, I know this may be hard for some of you to believe, but as a former CIS admin, some 21 years ago, working on sense spark workstations, we've come such a long way for random scripts and desperate systems that we've stitched together to this whole inclusive developer workflow experience being a CIS admin. >>Then you were just one piece of the siloed experience, but I didn't want to just push code to production. So I created scripts that did it for me. I taught myself how to code. I was the model lazy CIS admin that got dangerous and having pushed a little too far. I realized that working in production and building features is really a team sport that we had the opportunity, all of us to be customer obsessed today. As developers, we can go beyond the traditional dev ops mindset. We can really focus on adding value to the customer experience by ensuring that we have work that contributes to increasing uptime via and SLS all while being agile and productive. We get there. When we move from a pass the Baton system to now having an interconnected developer workflow that increases velocity in every part of the cycle, we get to work better and smarter. >>And honestly, in a way that is so much more enjoyable because we automate away all the mundane and manual and boring tasks. So we get to focus on what really matters shipping, the things that humans get to use and love. Docker has been a big part of enabling this transformation. 10, 20 years ago, we had Tomcat containers, which are not Docker containers. And for y'all hearing this the first time go Google it. But that was the way we built our applications. We had to segment them on the server and give them resources. Today. We have Docker containers, these little mini Oasys and Docker images. You can do it multiple times in an orchestrated manner with the power of actions enabled and Docker. It's just so incredible what you can do. And by the way, I'm showing you actions in Docker, which I hope you use because both are great and free for open source. >>But the key takeaway is really the workflow and the automation, which you certainly can do with other tools. Okay, I'm going to show you just how easy this is, because believe me, if this is something I can learn and do anybody out there can, and in this demo, I'll show you about the basic components needed to create and use a package, Docker container actions. And like I said, you won't believe how awesome the combination of Docker and actions is because you can enable your workflow to do no matter what you're trying to do in this super baby example. We're so small. You could take like 10 seconds. Like I am here creating an action due to a simple task, like pushing a message to your logs. And the cool thing is you can use it on any the bit on this one. Like I said, we're going to use push. >>You can do, uh, even to order a pizza every time you roll into production, if you wanted, but at get hub, that'd be a lot of pizzas. And the funny thing is somebody out there is actually tried this and written that action. If you haven't used Docker and actions together, check out the docs on either get hub or Docker to get you started. And a huge shout out to all those doc writers out there. I built this demo today using those instructions. And if I can do it, I know you can too, but enough yapping let's get started to save some time. And since a lot of us are Docker and get hub nerds, I've already created a repo with a Docker file. So we're going to skip that step. Next. I'm going to create an action's Yammel file. And if you don't Yammer, you know, actions, the metadata defines my important log stuff to capture and the input and my time out per parameter to pass and puts to the Docker container, get up a build image from your Docker file and run the commands in a new container. >>Using the Sigma image. The cool thing is, is you can use any Docker image in any language for your actions. It doesn't matter if it's go or whatever in today's I'm going to use a shell script and an input variable to print my important log stuff to file. And like I said, you know me, I love me some. So let's see this action in a workflow. When an action is in a private repo, like the one I demonstrating today, the action can only be used in workflows in the same repository, but public actions can be used by workflows in any repository. So unfortunately you won't get access to the super awesome action, but don't worry in the Guild marketplace, there are over 8,000 actions available, especially the most important one, that pizza action. So go try it out. Now you can do this in a couple of ways, whether you're doing it in your preferred ID or for today's demo, I'm just going to use the gooey. I'm going to navigate to my actions tab as I've done here. And I'm going to in my workflow, select new work, hello, probably load some workflows to Claire to get you started, but I'm using the one I've copied. Like I said, the lazy developer I am in. I'm going to replace it with my action. >>That's it. So now we're going to go and we're going to start our commitment new file. Now, if we go over to our actions tab, we can see the workflow in progress in my repository. I just click the actions tab. And because they wrote the actions on push, we can watch the visualization under jobs and click the job to see the important stuff we're logging in the input stamp in the printed log. And we'll just wait for this to run. Hello, Mona and boom. Just like that. It runs automatically within our action. We told it to go run as soon as the files updated because we're doing it on push merge. That's right. Folks in just a few minutes, I built an action that writes an entry to a log file every time I push. So I don't have to do it manually. In essence, with automation, you can be kind to your future self and save time and effort to focus on what really matters. >>Imagine what I could do with even a little more time, probably order all y'all pieces. That is the power of the interconnected workflow. And it's amazing. And I hope you all go try it out, but why do we care about all of that? Just like in the demo, I took a manual task with both tape, which both takes time and it's easy to forget and automated it. So I don't have to think about it. And it's executed every time consistently. That means less time for me to worry about my human errors and mistakes, and more time to focus on actually building the cool stuff that people want. Obviously, automation, developer productivity, but what is even more important to me is the developer happiness tools like BS, code actions, Docker, Heroku, and many others reduce manual work, which allows us to focus on building things that are awesome. >>And to get into that wonderful state that we call flow. According to research by UC Irvine in Humboldt university in Germany, it takes an average of 23 minutes to enter optimal creative state. What we call the flow or to reenter it after distraction like your dog on your office store. So staying in flow is so critical to developer productivity and as a developer, it just feels good to be cranking away at something with deep focus. I certainly know that I love that feeling intuitive collaboration and automation features we built in to get hub help developer, Sam flow, allowing you and your team to do so much more, to bring the benefits of automation into perspective in our annual October's report by Dr. Nicole, Forsgren. One of my buddies here at get hub, took a look at the developer productivity in the stork year. You know what we found? >>We found that public GitHub repositories that use the Automational pull requests, merge those pull requests. 1.2 times faster. And the number of pooled merged pull requests increased by 1.3 times, that is 34% more poor requests merged. And other words, automation can con can dramatically increase, but the speed and quantity of work completed in any role, just like an open source development, you'll work more efficiently with greater impact when you invest the bulk of your time in the work that adds the most value and eliminate or outsource the rest because you don't need to do it, make the machines by elaborate by leveraging automation in their workflows teams, minimize manual work and reclaim that time for innovation and maintain that state of flow with development and collaboration. More importantly, their work is more enjoyable because they're not wasting the time doing the things that the machines or robots can do for them. >>And I remember what I said at the beginning. Many of us want to be efficient, heck even lazy. So why would I spend my time doing something I can automate? Now you can read more about this research behind the art behind this at October set, get hub.com, which also includes a lot of other cool info about the open source ecosystem and how it's evolving. Speaking of the open source ecosystem we at get hub are so honored to be the home of more than 65 million developers who build software together for everywhere across the globe. Today, we're seeing software development taking shape as the world's largest team sport, where development teams collaborate, build and ship products. It's no longer a solo effort like it was for me. You don't have to take my word for it. Check out this globe. This globe shows real data. Every speck of light you see here represents a contribution to an open source project, somewhere on earth. >>These arts reach across continents, cultures, and other divides. It's distributed collaboration at its finest. 20 years ago, we had no concept of dev ops, SecOps and lots, or the new ops that are going to be happening. But today's development and ops teams are connected like ever before. This is only going to continue to evolve at a rapid pace, especially as we continue to empower the next hundred million developers, automation helps us focus on what's important and to greatly accelerate innovation. Just this past year, we saw some of the most groundbreaking technological advancements and achievements I'll say ever, including critical COVID-19 vaccine trials, as well as the first power flight on Mars. This past month, these breakthroughs were only possible because of the interconnected collaborative open source communities on get hub and the amazing tools and workflows that empower us all to create and innovate. Let's continue building, integrating, and automating. So we collectively can give developers the experience. They deserve all of the automation and beautiful eye UIs that we can muster so they can continue to build the things that truly do change the world. Thank you again for having me today, Dr. Khan, it has been a pleasure to be here with all you nerds. >>Hello. I'm Justin. Komack lovely to see you here. Talking to developers, their world is getting much more complex. Developers are being asked to do everything security ops on goal data analysis, all being put on the rockers. Software's eating the world. Of course, and this all make sense in that view, but they need help. One team. I told you it's shifted all our.net apps to run on Linux from windows, but their developers found the complexity of Docker files based on the Linux shell scripts really difficult has helped make these things easier for your teams. Your ones collaborate more in a virtual world, but you've asked us to make this simpler and more lightweight. You, the developers have asked for a paved road experience. You want things to just work with a simple options to be there, but it's not just the paved road. You also want to be able to go off-road and do interesting and different things. >>Use different components, experiments, innovate as well. We'll always offer you both those choices at different times. Different developers want different things. It may shift for ones the other paved road or off road. Sometimes you want reliability, dependability in the zone for day to day work, but sometimes you have to do something new, incorporate new things in your pipeline, build applications for new places. Then you knew those off-road abilities too. So you can really get under the hood and go and build something weird and wonderful and amazing. That gives you new options. Talk as an independent choice. We don't own the roads. We're not pushing you into any technology choices because we own them. We're really supporting and driving open standards, such as ISEI working opensource with the CNCF. We want to help you get your applications from your laptops, the clouds, and beyond, even into space. >>Let's talk about the key focus areas, that frame, what DACA is doing going forward. These are simplicity, sharing, flexibility, trusted content and care supply chain compared to building where the underlying kernel primitives like namespaces and Seagraves the original Docker CLI was just amazing Docker engine. It's a magical experience for everyone. It really brought those innovations and put them in a world where anyone would use that, but that's not enough. We need to continue to innovate. And it was trying to get more done faster all the time. And there's a lot more we can do. We're here to take complexity away from deeply complicated underlying things and give developers tools that are just amazing and magical. One of the area we haven't done enough and make things magical enough that we're really planning around now is that, you know, Docker images, uh, they're the key parts of your application, but you know, how do I do something with an image? How do I, where do I attach volumes with this image? What's the API. Whereas the SDK for this image, how do I find an example or docs in an API driven world? Every bit of software should have an API and an API description. And our vision is that every container should have this API description and the ability for you to understand how to use it. And it's all a seamless thing from, you know, from your code to the cloud local and remote, you can, you can use containers in this amazing and exciting way. >>One thing I really noticed in the last year is that companies that started off remote fast have constant collaboration. They have zoom calls, apron all day terminals, shattering that always working together. Other teams are really trying to learn how to do this style because they didn't start like that. We used to walk around to other people's desks or share services on the local office network. And it's very difficult to do that anymore. You want sharing to be really simple, lightweight, and informal. Let me try your container or just maybe let's collaborate on this together. Um, you know, fast collaboration on the analysts, fast iteration, fast working together, and he wants to share more. You want to share how to develop environments, not just an image. And we all work by seeing something someone else in our team is doing saying, how can I do that too? I can, I want to make that sharing really, really easy. Ben's going to talk about this more in the interest of one minute. >>We know how you're excited by apple. Silicon and gravis are not excited because there's a new architecture, but excited because it's faster, cooler, cheaper, better, and offers new possibilities. The M one support was the most asked for thing on our public roadmap, EFA, and we listened and share that we see really exciting possibilities, usership arm applications, all the way from desktop to production. We know that you all use different clouds and different bases have deployed to, um, you know, we work with AWS and Azure and Google and more, um, and we want to help you ship on prime as well. And we know that you use huge number of languages and the containers help build applications that use different languages for different parts of the application or for different applications, right? You can choose the best tool. You have JavaScript hat or everywhere go. And re-ask Python for data and ML, perhaps getting excited about WebAssembly after hearing about a cube con, you know, there's all sorts of things. >>So we need to make that as easier. We've been running the whole month of Python on the blog, and we're doing a month of JavaScript because we had one specific support about how do I best put this language into production of that language into production. That detail is important for you. GPS have been difficult to use. We've added GPS suppose in desktop for windows, but we know there's a lot more to do to make the, how multi architecture, multi hardware, multi accelerator world work better and also securely. Um, so there's a lot more work to do to support you in all these things you want to do. >>How do we start building a tenor has applications, but it turns out we're using existing images as components. I couldn't assist survey earlier this year, almost half of container image usage was public images rather than private images. And this is growing rapidly. Almost all software has open source components and maybe 85% of the average application is open source code. And what you're doing is taking whole container images as modules in your application. And this was always the model with Docker compose. And it's a model that you're already et cetera, writing you trust Docker, official images. We know that they might go to 25% of poles on Docker hub and Docker hub provides you the widest choice and the best support that trusted content. We're talking to people about how to make this more helpful. We know, for example, that winter 69 four is just showing us as support, but the image doesn't yet tell you that we're working with canonical to improve messaging from specific images about left lifecycle and support. >>We know that you need more images, regularly updated free of vulnerabilities, easy to use and discover, and Donnie and Marie neuro, going to talk about that more this last year, the solar winds attack has been in the, in the news. A lot, the software you're using and trusting could be compromised and might be all over your organization. We need to reduce the risk of using vital open-source components. We're seeing more software supply chain attacks being targeted as the supply chain, because it's often an easier place to attack and production software. We need to be able to use this external code safely. We need to, everyone needs to start from trusted sources like photography images. They need to scan for known vulnerabilities using Docker scan that we built in partnership with sneak and lost DockerCon last year, we need just keep updating base images and dependencies, and we'll, we're going to help you have the control and understanding about your images that you need to do this. >>And there's more, we're also working on the nursery V2 project in the CNCF to revamp container signings, or you can tell way or software comes from we're working on tooling to make updates easier, and to help you understand and manage all the principals carrier you're using security is a growing concern for all of us. It's really important. And we're going to help you work with security. We can't achieve all our dreams, whether that's space travel or amazing developer products ever see without deep partnerships with our community to cloud is RA and the cloud providers aware most of you ship your occasion production and simple routes that take your work and deploy it easily. Reliably and securely are really important. Just get into production simply and easily and securely. And we've done a bunch of work on that. And, um, but we know there's more to do. >>The CNCF on the open source cloud native community are an amazing ecosystem of creators and lovely people creating an amazing strong community and supporting a huge amount of innovation has its roots in the container ecosystem and his dreams beyond that much of the innovation is focused around operate experience so far, but developer experience is really a growing concern in that community as well. And we're really excited to work on that. We also uses appraiser tool. Then we know you do, and we know that you want it to be easier to use in your environment. We just shifted Docker hub to work on, um, Kubernetes fully. And, um, we're also using many of the other projects are Argo from atheists. We're spending a lot of time working with Microsoft, Amazon right now on getting natural UV to ready to ship in the next few. That's a really detailed piece of collaboration we've been working on for a long term. Long time is really important for our community as the scarcity of the container containers and, um, getting content for you, working together makes us stronger. Our community is made up of all of you have. Um, it's always amazing to be reminded of that as a huge open source community that we already proud to work with. It's an amazing amount of innovation that you're all creating and where perhaps it, what with you and share with you as well. Thank you very much. And thank you for being here. >>Really excited to talk to you today and share more about what Docker is doing to help make you faster, make your team faster and turn your application delivery into something that makes you a 10 X team. What we're hearing from you, the developers using Docker everyday fits across three common themes that we hear consistently over and over. We hear that your time is super important. It's critical, and you want to move faster. You want your tools to get out of your way, and instead to enable you to accelerate and focus on the things you want to be doing. And part of that is that finding great content, great application components that you can incorporate into your apps to move faster is really hard. It's hard to discover. It's hard to find high quality content that you can trust that, you know, passes your test and your configuration needs. >>And it's hard to create good content as well. And you're looking for more safety, more guardrails to help guide you along that way so that you can focus on creating value for your company. Secondly, you're telling us that it's a really far to collaborate effectively with your team and you want to do more, to work more effectively together to help your tools become more and more seamless to help you stay in sync, both with yourself across all of your development environments, as well as with your teammates so that you can more effectively collaborate together. Review each other's work, maintain things and keep them in sync. And finally, you want your applications to run consistently in every single environment, whether that's your local development environment, a cloud-based development environment, your CGI pipeline, or the cloud for production, and you want that micro service to provide that consistent experience everywhere you go so that you have similar tools, similar environments, and you don't need to worry about things getting in your way, but instead things make it easy for you to focus on what you wanna do and what Docker is doing to help solve all of these problems for you and your colleagues is creating a collaborative app dev platform. >>And this collaborative application development platform consists of multiple different pieces. I'm not going to walk through all of them today, but the overall view is that we're providing all the tooling you need from the development environment, to the container images, to the collaboration services, to the pipelines and integrations that enable you to focus on making your applications amazing and changing the world. If we start zooming on a one of those aspects, collaboration we hear from developers regularly is that they're challenged in synchronizing their own setups across environments. They want to be able to duplicate the setup of their teammates. Look, then they can easily get up and running with the same applications, the same tooling, the same version of the same libraries, the same frameworks. And they want to know if their applications are good before they're ready to share them in an official space. >>They want to collaborate on things before they're done, rather than feeling like they have to officially published something before they can effectively share it with others to work on it, to solve this. We're thrilled today to announce Docker, dev environments, Docker, dev environments, transform how your team collaborates. They make creating, sharing standardized development environments. As simple as a Docker poll, they make it easy to review your colleagues work without affecting your own work. And they increase the reproducibility of your own work and decreased production issues in doing so because you've got consistent environments all the way through. Now, I'm going to pass it off to our principal product manager, Ben Gotch to walk you through more detail on Docker dev environments. >>Hi, I'm Ben. I work as a principal program manager at DACA. One of the areas that doc has been looking at to see what's hard today for developers is sharing changes that you make from the inner loop where the inner loop is a better development, where you write code, test it, build it, run it, and ultimately get feedback on those changes before you merge them and try and actually ship them out to production. Most amount of us build this flow and get there still leaves a lot of challenges. People need to jump between branches to look at each other's work. Independence. Dependencies can be different when you're doing that and doing this in this new hybrid wall of work. Isn't any easier either the ability to just save someone, Hey, come and check this out. It's become much harder. People can't come and sit down at your desk or take your laptop away for 10 minutes to just grab and look at what you're doing. >>A lot of the reason that development is hard when you're remote, is that looking at changes and what's going on requires more than just code requires all the dependencies and everything you've got set up and that complete context of your development environment, to understand what you're doing and solving this in a remote first world is hard. We wanted to look at how we could make this better. Let's do that in a way that let you keep working the way you do today. Didn't want you to have to use a browser. We didn't want you to have to use a new idea. And we wanted to do this in a way that was application centric. We wanted to let you work with all the rest of the application already using C for all the services and all those dependencies you need as part of that. And with that, we're excited to talk more about docket developer environments, dev environments are new part of the Docker experience that makes it easier you to get started with your whole inner leap, working inside a container, then able to share and collaborate more than just the code. >>We want it to enable you to share your whole modern development environment, your whole setup from DACA, with your team on any operating system, we'll be launching a limited beta of dev environments in the coming month. And a GA dev environments will be ID agnostic and supporting composts. This means you'll be able to use an extend your existing composed files to create your own development environment in whatever idea, working in dev environments designed to be local. First, they work with Docker desktop and say your existing ID, and let you share that whole inner loop, that whole development context, all of your teammates in just one collect. This means if you want to get feedback on the working progress change or the PR it's as simple as opening another idea instance, and looking at what your team is working on because we're using compose. You can just extend your existing oppose file when you're already working with, to actually create this whole application and have it all working in the context of the rest of the services. >>So it's actually the whole environment you're working with module one service that doesn't really understand what it's doing alone. And with that, let's jump into a quick demo. So you can see here, two dev environments up and running. First one here is the same container dev environment. So if I want to go into that, let's see what's going on in the various code button here. If that one open, I can get straight into my application to start making changes inside that dev container. And I've got all my dependencies in here, so I can just run that straight in that second application I have here is one that's opened up in compose, and I can see that I've also got my backend, my front end and my database. So I've got all my services running here. So if I want, I can open one or more of these in a dev environment, meaning that that container has the context that dev environment has the context of the whole application. >>So I can get back into and connect to all the other services that I need to test this application properly, all of them, one unit. And then when I've made my changes and I'm ready to share, I can hit my share button type in the refund them on to share that too. And then give that image to someone to get going, pick that up and just start working with that code and all my dependencies, simple as putting an image, looking ahead, we're going to be expanding development environments, more of your dependencies for the whole developer worst space. We want to look at backing up and letting you share your volumes to make data science and database setups more repeatable and going. I'm still all of this under a single workspace for your team containing images, your dev environments, your volumes, and more we've really want to allow you to create a fully portable Linux development environment. >>So everyone you're working with on any operating system, as I said, our MVP we're coming next month. And that was for vs code using their dev container primitive and more support for other ideas. We'll follow to find out more about what's happening and what's coming up next in the future of this. And to actually get a bit of a deeper dive in the experience. Can we check out the talk I'm doing with Georgie and girl later on today? Thank you, Ben, amazing story about how Docker is helping to make developer teams more collaborative. Now I'd like to talk more about applications while the dev environment is like the workbench around what you're building. The application itself has all the different components, libraries, and frameworks, and other code that make up the application itself. And we hear developers saying all the time things like, how do they know if their images are good? >>How do they know if they're secure? How do they know if they're minimal? How do they make great images and great Docker files and how do they keep their images secure? And up-to-date on every one of those ties into how do I create more trust? How do I know that I'm building high quality applications to enable you to do this even more effectively than today? We are pleased to announce the DACA verified polisher program. This broadens trusted content by extending beyond Docker official images, to give you more and more trusted building blocks that you can incorporate into your applications. It gives you confidence that you're getting what you expect because Docker verifies every single one of these publishers to make sure they are who they say they are. This improves our secure supply chain story. And finally it simplifies your discovery of the best building blocks by making it easy for you to find things that you know, you can trust so that you can incorporate them into your applications and move on and on the right. You can see some examples of the publishers that are involved in Docker, official images and our Docker verified publisher program. Now I'm pleased to introduce you to marina. Kubicki our senior product manager who will walk you through more about what we're doing to create a better experience for you around trust. >>Thank you, Dani, >>Mario Andretti, who is a famous Italian sports car driver. One said that if everything feels under control, you're just not driving. You're not driving fast enough. Maya Andretti is not a software developer and a software developers. We know that no matter how fast we need to go in order to drive the innovation that we're working on, we can never allow our applications to spin out of control and a Docker. As we continue talking to our, to the developers, what we're realizing is that in order to reach that speed, the developers are the, the, the development community is looking for the building blocks and the tools that will, they will enable them to drive at the speed that they need to go and have the trust in those building blocks. And in those tools that they will be able to maintain control over their applications. So as we think about some of the things that we can do to, to address those concerns, uh, we're realizing that we can pursue them in a number of different venues, including creating reliable content, including creating partnerships that expands the options for the reliable content. >>Um, in order to, in a we're looking at creating integrations, no link security tools, talk about the reliable content. The first thing that comes to mind are the Docker official images, which is a program that we launched several years ago. And this is a set of curated, actively maintained, open source images that, uh, include, uh, operating systems and databases and programming languages. And it would become immensely popular for, for, for creating the base layers of, of the images of, of the different images, images, and applications. And would we realizing that, uh, many developers are, instead of creating something from scratch, basically start with one of the official images for their basis, and then build on top of that. And this program has become so popular that it now makes up a quarter of all of the, uh, Docker poles, which essentially ends up being several billion pulse every single month. >>As we look beyond what we can do for the open source. Uh, we're very ability on the open source, uh, spectrum. We are very excited to announce that we're launching the Docker verified publishers program, which is continuing providing the trust around the content, but now working with, uh, some of the industry leaders, uh, in multiple, in multiple verticals across the entire technology technical spec, it costs entire, uh, high tech in order to provide you with more options of the images that you can use for building your applications. And it still comes back to trust that when you are searching for content in Docker hub, and you see the verified publisher badge, you know, that this is, this is the content that, that is part of the, that comes from one of our partners. And you're not running the risk of pulling the malicious image from an employee master source. >>As we look beyond what we can do for, for providing the reliable content, we're also looking at some of the tools and the infrastructure that we can do, uh, to create a security around the content that you're creating. So last year at the last ad, the last year's DockerCon, we announced partnership with sneak. And later on last year, we launched our DACA, desktop and Docker hub vulnerability scans that allow you the options of writing scans in them along multiple points in your dev cycle. And in addition to providing you with information on the vulnerability on, on the vulnerabilities, in, in your code, uh, it also provides you with a guidance on how to re remediate those vulnerabilities. But as we look beyond the vulnerability scans, we're also looking at some of the other things that we can do, you know, to, to, to, uh, further ensure that the integrity and the security around your images, your images, and with that, uh, later on this year, we're looking to, uh, launch the scope, personal access tokens, and instead of talking about them, I will simply show you what they look like. >>So if you can see here, this is my page in Docker hub, where I've created a four, uh, tokens, uh, read-write delete, read, write, read only in public read in public creeper read only. So, uh, earlier today I went in and I, I logged in, uh, with my read only token. And when you see, when I'm going to pull an image, it's going to allow me to pull an image, not a problem success. And then when I do the next step, I'm going to ask to push an image into the same repo. Uh, would you see is that it's going to give me an error message saying that they access is denied, uh, because there is an additional authentication required. So these are the things that we're looking to add to our roadmap. As we continue thinking about the things that we can do to provide, um, to provide additional building blocks, content, building blocks, uh, and, and, and tools to build the trust so that our DACA developer and skinned code faster than Mario Andretti could ever imagine. Uh, thank you to >>Thank you, marina. It's amazing what you can do to improve the trusted content so that you can accelerate your development more and move more quickly, move more collaboratively and build upon the great work of others. Finally, we hear over and over as that developers are working on their applications that they're looking for, environments that are consistent, that are the same as production, and that they want their applications to really run anywhere, any environment, any architecture, any cloud one great example is the recent announcement of apple Silicon. We heard from developers on uproar that they needed Docker to be available for that architecture before they could add those to it and be successful. And we listened. And based on that, we are pleased to share with you Docker, desktop on apple Silicon. This enables you to run your apps consistently anywhere, whether that's developing on your team's latest dev hardware, deploying an ARM-based cloud environments and having a consistent architecture across your development and production or using multi-year architecture support, which enables your whole team to collaborate on its application, using private repositories on Docker hub, and thrilled to introduce you to Hughie cower, senior director for product management, who will walk you through more of what we're doing to create a great developer experience. >>Senior director of product management at Docker. And I'd like to jump straight into a demo. This is the Mac mini with the apple Silicon processor. And I want to show you how you can now do an end-to-end arm workflow from my M one Mac mini to raspberry PI. As you can see, we have vs code and Docker desktop installed on a, my, the Mac mini. I have a small example here, and I have a raspberry PI three with an led strip, and I want to turn those LEDs into a moving rainbow. This Dockerfile here, builds the application. We build the image with the Docker, build X command to make the image compatible for all raspberry pies with the arm. 64. Part of this build is built with the native power of the M one chip. I also add the push option to easily share the image with my team so they can give it a try to now Dr. >>Creates the local image with the application and uploads it to Docker hub after we've built and pushed the image. We can go to Docker hub and see the new image on Docker hub. You can also explore a variety of images that are compatible with arm processors. Now let's go to the raspberry PI. I have Docker already installed and it's running Ubuntu 64 bit with the Docker run command. I can run the application and let's see what will happen from there. You can see Docker is downloading the image automatically from Docker hub and when it's running, if it's works right, there are some nice colors. And with that, if we have an end-to-end workflow for arm, where continuing to invest into providing you a great developer experience, that's easy to install. Easy to get started with. As you saw in the demo, if you're interested in the new Mac, mini are interested in developing for our platforms in general, we've got you covered with the same experience you've come to expect from Docker with over 95,000 arm images on hub, including many Docker official images. >>We think you'll find what you're looking for. Thank you again to the community that helped us to test the tech previews. We're so delighted to hear when folks say that the new Docker desktop for apple Silicon, it just works for them, but that's not all we've been working on. As Dani mentioned, consistency of developer experience across environments is so important. We're introducing composed V2 that makes compose a first-class citizen in the Docker CLI you no longer need to install a separate composed biter in order to use composed, deploying to production is simpler than ever with the new compose integration that enables you to deploy directly to Amazon ECS or Azure ACI with the same methods you use to run your application locally. If you're interested in running slightly different services, when you're debugging versus testing or, um, just general development, you can manage that all in one place with the new composed service to hear more about what's new and Docker desktop, please join me in the three 15 breakout session this afternoon. >>And now I'd love to tell you a bit more about bill decks and convince you to try it. If you haven't already it's our next gen build command, and it's no longer experimental as shown in the demo with built X, you'll be able to do multi architecture builds, share those builds with your team and the community on Docker hub. With build X, you can speed up your build processes with remote caches or build all the targets in your composed file in parallel with build X bake. And there's so much more if you're using Docker, desktop or Docker, CE you can use build X checkout tonus is talk this afternoon at three 45 to learn more about build X. And with that, I hope everyone has a great Dr. Khan and back over to you, Donnie. >>Thank you UA. It's amazing to hear about what we're doing to create a better developer experience and make sure that Docker works everywhere you need to work. Finally, I'd like to wrap up by showing you everything that we've announced today and everything that we've done recently to make your lives better and give you more and more for the single price of your Docker subscription. We've announced the Docker verified publisher program we've announced scoped personal access tokens to make it easier for you to have a secure CCI pipeline. We've announced Docker dev environments to improve your collaboration with your team. Uh, we shared with you Docker, desktop and apple Silicon, to make sure that, you know, Docker runs everywhere. You need it to run. And we've announced Docker compose version two, finally making it a first-class citizen amongst all the other great Docker tools. And we've done so much more recently as well from audit logs to advanced image management, to compose service profiles, to improve where you can run Docker more easily. >>Finally, as we look forward, where we're headed in the upcoming year is continuing to invest in these themes of helping you build, share, and run modern apps more effectively. We're going to be doing more to help you create a secure supply chain with which only grows more and more important as time goes on. We're going to be optimizing your update experience to make sure that you can easily understand the current state of your application, all its components and keep them all current without worrying about breaking everything as you're doing. So we're going to make it easier for you to synchronize your work. Using cloud sync features. We're going to improve collaboration through dev environments and beyond, and we're going to do make it easy for you to run your microservice in your environments without worrying about things like architecture or differences between those environments. Thank you so much. I'm thrilled about what we're able to do to help make your lives better. And now you're going to be hearing from one of our customers about what they're doing to launch their business with Docker >>I'm Matt Falk, I'm the head of engineering and orbital insight. And today I want to talk to you a little bit about data from space. So who am I like many of you, I'm a software developer and a software developer about seven companies so far, and now I'm a head of engineering. So I spend most of my time doing meetings, but occasionally I'll still spend time doing design discussions, doing code reviews. And in my free time, I still like to dabble on things like project oiler. So who's Oberlin site. What do we do? Portal insight is a large data supplier and analytics provider where we take data geospatial data anywhere on the planet, any overhead sensor, and translate that into insights for the end customer. So specifically we have a suite of high performance, artificial intelligence and machine learning analytics that run on this geospatial data. >>And we build them to specifically determine natural and human service level activity anywhere on the planet. What that really means is we take any type of data associated with a latitude and longitude and we identify patterns so that we can, so we can detect anomalies. And that's everything that we do is all about identifying those patterns to detect anomalies. So more specifically, what type of problems do we solve? So supply chain intelligence, this is one of the use cases that we we'd like to talk about a lot. It's one of our main primary verticals that we go after right now. And as Scott mentioned earlier, this had a huge impact last year when COVID hit. So specifically supply chain intelligence is all about identifying movement patterns to and from operating facilities to identify changes in those supply chains. How do we do this? So for us, we can do things where we track the movement of trucks. >>So identifying trucks, moving from one location to another in aggregate, same thing we can do with foot traffic. We can do the same thing for looking at aggregate groups of people moving from one location to another and analyzing their patterns of life. We can look at two different locations to determine how people are moving from one location to another, or going back and forth. All of this is extremely valuable for detecting how a supply chain operates and then identifying the changes to that supply chain. As I said last year with COVID, everything changed in particular supply chains changed incredibly, and it was hugely important for customers to know where their goods or their products are coming from and where they were going, where there were disruptions in their supply chain and how that's affecting their overall supply and demand. So to use our platform, our suite of tools, you can start to gain a much better picture of where your suppliers or your distributors are going from coming from or going to. >>So what's our team look like? So my team is currently about 50 engineers. Um, we're spread into four different teams and the teams are structured like this. So the first team that we have is infrastructure engineering and this team largely deals with deploying our Dockers using Kubernetes. So this team is all about taking Dockers, built by other teams, sometimes building the Dockers themselves and putting them into our production system, our platform engineering team, they produce these microservices. So they produce microservice, Docker images. They develop and test with them locally. Their entire environments are dockerized. They produce these doctors, hand them over to him for infrastructure engineering to be deployed. Similarly, our product engineering team does the same thing. They develop and test with Dr. Locally. They also produce a suite of Docker images that the infrastructure team can then deploy. And lastly, we have our R and D team, and this team specifically produces machine learning algorithms using Nvidia Docker collectively, we've actually built 381 Docker repositories and 14 million. >>We've had 14 million Docker pools over the lifetime of the company, just a few stats about us. Um, but what I'm really getting to here is you can see actually doctors becoming almost a form of communication between these teams. So one of the paradigms in software engineering that you're probably familiar with encapsulation, it's really helpful for a lot of software engineering problems to break the problem down, isolate the different pieces of it and start building interfaces between the code. This allows you to scale different pieces of the platform or different pieces of your code in different ways that allows you to scale up certain pieces and keep others at a smaller level so that you can meet customer demands. And for us, one of the things that we can largely do now is use Dockers as that interface. So instead of having an entire platform where all teams are talking to each other, and everything's kind of, mishmashed in a monolithic application, we can now say this team is only able to talk to this team by passing over a particular Docker image that defines the interface of what needs to be built before it passes to the team and really allows us to scalp our development and be much more efficient. >>Also, I'd like to say we are hiring. Um, so we have a number of open roles. We have about 30 open roles in our engineering team that we're looking to fill by the end of this year. So if any of this sounds really interesting to you, please reach out after the presentation. >>So what does our platform do? Really? Our platform allows you to answer any geospatial question, and we do this at three different inputs. So first off, where do you want to look? So we did this as what we call an AOI or an area of interest larger. You can think of this as a polygon drawn on the map. So we have a curated data set of almost 4 million AOIs, which you can go and you can search and use for your analysis, but you're also free to build your own. Second question is what you want to look for. We do this with the more interesting part of our platform of our machine learning and AI capabilities. So we have a suite of algorithms that automatically allow you to identify trucks, buildings, hundreds of different types of aircraft, different types of land use, how many people are moving from one location to another different locations that people in a particular area are moving to or coming from all of these different analyses or all these different analytics are available at the click of a button, and then determine what you want to look for. >>Lastly, you determine when you want to find what you're looking for. So that's just, uh, you know, do you want to look for the next three hours? Do you want to look for the last week? Do you want to look every month for the past two, whatever the time cadence is, you decide that you hit go and out pops a time series, and that time series tells you specifically where you want it to look what you want it to look for and how many, or what percentage of the thing you're looking for appears in that area. Again, we do all of this to work towards patterns. So we use all this data to produce a time series from there. We can look at it, determine the patterns, and then specifically identify the anomalies. As I mentioned with supply chain, this is extremely valuable to identify where things change. So we can answer these questions, looking at a particular operating facility, looking at particular, what is happening with the level of activity is at that operating facility where people are coming from, where they're going to, after visiting that particular facility and identify when and where that changes here, you can just see it's a picture of our platform. It's actually showing all the devices in Manhattan, um, over a period of time. And it's more of a heat map view. So you can actually see the hotspots in the area. >>So really the, and this is the heart of the talk, but what happened in 2020? So for men, you know, like many of you, 2020 was a difficult year COVID hit. And that changed a lot of what we're doing, not from an engineering perspective, but also from an entire company perspective for us, the motivation really became to make sure that we were lowering our costs and increasing innovation simultaneously. Now those two things often compete with each other. A lot of times you want to increase innovation, that's going to increase your costs, but the challenge last year was how to do both simultaneously. So here's a few stats for you from our team. In Q1 of last year, we were spending almost $600,000 per month on compute costs prior to COVID happening. That wasn't hugely a concern for us. It was a lot of money, but it wasn't as critical as it was last year when we really needed to be much more efficient. >>Second one is flexibility for us. We were deployed on a single cloud environment while we were cloud thought ready, and that was great. We want it to be more flexible. We want it to be on more cloud environments so that we could reach more customers. And also eventually get onto class side networks, extending the base of our customers as well from a custom analytics perspective. This is where we get into our traction. So last year, over the entire year, we computed 54,000 custom analytics for different users. We wanted to make sure that this number was steadily increasing despite us trying to lower our costs. So we didn't want the lowering cost to come as the sacrifice of our user base. Lastly, of particular percentage here that I'll say definitely needs to be improved is 75% of our projects never fail. So this is where we start to get into a bit of stability of our platform. >>Now I'm not saying that 25% of our projects fail the way we measure this is if you have a particular project or computation that runs every day and any one of those runs sale account, that is a failure because from an end-user perspective, that's an issue. So this is something that we know we needed to improve on and we needed to grow and make our platform more stable. I'm going to something that we really focused on last year. So where are we now? So now coming out of the COVID valley, we are starting to soar again. Um, we had, uh, back in April of last year, we had the entire engineering team. We actually paused all development for about four weeks. You had everyone focused on reducing our compute costs in the cloud. We got it down to 200 K over the period of a few months. >>And for the next 12 months, we hit that number every month. This is huge for us. This is extremely important. Like I said, in the COVID time period where costs and operating efficiency was everything. So for us to do that, that was a huge accomplishment last year and something we'll keep going forward. One thing I would actually like to really highlight here, two is what allowed us to do that. So first off, being in the cloud, being able to migrate things like that, that was one thing. And we were able to use there's different cloud services in a more particular, in a more efficient way. We had a very detailed tracking of how we were spending things. We increased our data retention policies. We optimized our processing. However, one additional piece was switching to new technologies on, in particular, we migrated to get lab CICB. >>Um, and this is something that the costs we use Docker was extremely, extremely easy. We didn't have to go build new new code containers or repositories or change our code in order to do this. We were simply able to migrate the containers over and start using a new CIC so much. In fact, that we were able to do that migration with three engineers in just two weeks from a cloud environment and flexibility standpoint, we're now operating in two different clouds. We were able to last night, I've over the last nine months to operate in the second cloud environment. And again, this is something that Docker helped with incredibly. Um, we didn't have to go and build all new interfaces to all new, different services or all different tools in the next cloud provider. All we had to do was build a base cloud infrastructure that ups agnostic the way, all the different details of the cloud provider. >>And then our doctors just worked. We can move them to another environment up and running, and our platform was ready to go from a traction perspective. We're about a third of the way through the year. At this point, we've already exceeded the amount of customer analytics we produce last year. And this is thanks to a ton more albums, that whole suite of new analytics that we've been able to build over the past 12 months and we'll continue to build going forward. So this is really, really great outcome for us because we were able to show that our costs are staying down, but our analytics and our customer traction, honestly, from a stability perspective, we improved from 75% to 86%, not quite yet 99 or three nines or four nines, but we are getting there. Um, and this is actually thanks to really containerizing and modularizing different pieces of our platform so that we could scale up in different areas. This allowed us to increase that stability. This piece of the code works over here, toxin an interface to the rest of the system. We can scale this piece up separately from the rest of the system, and that allows us much more easily identify issues in the system, fix those and then correct the system overall. So basically this is a summary of where we were last year, where we are now and how much more successful we are now because of the issues that we went through last year and largely brought on by COVID. >>But that this is just a screenshot of the, our, our solution actually working on supply chain. So this is in particular, it is showing traceability of a distribution warehouse in salt lake city. It's right in the center of the screen here. You can see the nice kind of orange red center. That's a distribution warehouse and all the lines outside of that, all the dots outside of that are showing where people are, where trucks are moving from that location. So this is really helpful for supply chain companies because they can start to identify where their suppliers are, are coming from or where their distributors are going to. So with that, I want to say, thanks again for following along and enjoy the rest of DockerCon.

Published Date : May 27 2021

SUMMARY :

We know that collaboration is key to your innovation sharing And we know from talking with many of you that you and your developer Have you seen the email from Scott? I was thinking we could try, um, that new Docker dev environments feature. So if you hit the share button, what I should do is it will take all of your code and the dependencies and Uh, let me get that over to you, All right. It's just going to grab the image down, which you can take all of the code, the dependencies only get brunches working It's connected to the container. So let's just have a look at what you use So I've had a look at what you were doing and I'm actually going to change. Let me grab the link. it should be able to open up the code that I've changed and then just run it in the same way you normally do. I think we should ship it. For example, in response to COVID we saw global communities, including the tech community rapidly teams make sense of all this specifically, our goal is to provide development teams with the trusted We had powerful new capabilities to the Docker product, both free and subscription. And finally delivering an easy to use well-integrated development experience with best of breed tools and content And what we've learned in our discussions with you will have long asking a coworker to take a look at your code used to be as easy as swiveling their chair around, I'd like to take a moment to share with Docker and our partners are doing for trusted content, providing development teams, and finally, public repos for communities enable community projects to be freely shared with anonymous Lastly, the container images themselves and this end to end flow are built on open industry standards, but the Docker team rose to the challenge and worked together to continue shipping great product, the again for joining us, we look forward to having a great DockerCon with you today, as well as a great year So let's dive in now, I know this may be hard for some of you to believe, I taught myself how to code. And by the way, I'm showing you actions in Docker, And the cool thing is you can use it on any And if I can do it, I know you can too, but enough yapping let's get started to save Now you can do this in a couple of ways, whether you're doing it in your preferred ID or for today's In essence, with automation, you can be kind to your future self And I hope you all go try it out, but why do we care about all of that? And to get into that wonderful state that we call flow. and eliminate or outsource the rest because you don't need to do it, make the machines Speaking of the open source ecosystem we at get hub are so to be here with all you nerds. Komack lovely to see you here. We want to help you get your applications from your laptops, And it's all a seamless thing from, you know, from your code to the cloud local And we all And we know that you use So we need to make that as easier. We know that they might go to 25% of poles we need just keep updating base images and dependencies, and we'll, we're going to help you have the control to cloud is RA and the cloud providers aware most of you ship your occasion production Then we know you do, and we know that you want it to be easier to use in your It's hard to find high quality content that you can trust that, you know, passes your test and your configuration more guardrails to help guide you along that way so that you can focus on creating value for your company. that enable you to focus on making your applications amazing and changing the world. Now, I'm going to pass it off to our principal product manager, Ben Gotch to walk you through more doc has been looking at to see what's hard today for developers is sharing changes that you make from the inner dev environments are new part of the Docker experience that makes it easier you to get started with your whole inner leap, We want it to enable you to share your whole modern development environment, your whole setup from DACA, So you can see here, So I can get back into and connect to all the other services that I need to test this application properly, And to actually get a bit of a deeper dive in the experience. Docker official images, to give you more and more trusted building blocks that you can incorporate into your applications. We know that no matter how fast we need to go in order to drive The first thing that comes to mind are the Docker official images, And it still comes back to trust that when you are searching for content in And in addition to providing you with information on the vulnerability on, So if you can see here, this is my page in Docker hub, where I've created a four, And based on that, we are pleased to share with you Docker, I also add the push option to easily share the image with my team so they can give it a try to now continuing to invest into providing you a great developer experience, a first-class citizen in the Docker CLI you no longer need to install a separate composed And now I'd love to tell you a bit more about bill decks and convince you to try it. image management, to compose service profiles, to improve where you can run Docker more easily. So we're going to make it easier for you to synchronize your work. And today I want to talk to you a little bit about data from space. What that really means is we take any type of data associated with a latitude So to use our platform, our suite of tools, you can start to gain a much better picture of where your So the first team that we have is infrastructure This allows you to scale different pieces of the platform or different pieces of your code in different ways that allows So if any of this sounds really interesting to you, So we have a suite of algorithms that automatically allow you to identify So you can actually see the hotspots in the area. the motivation really became to make sure that we were lowering our costs and increasing innovation simultaneously. of particular percentage here that I'll say definitely needs to be improved is 75% Now I'm not saying that 25% of our projects fail the way we measure this is if you have a particular And for the next 12 months, we hit that number every month. night, I've over the last nine months to operate in the second cloud environment. And this is thanks to a ton more albums, they can start to identify where their suppliers are, are coming from or where their distributors are going

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Mario AndrettiPERSON

0.99+

DaniPERSON

0.99+

Matt FalkPERSON

0.99+

Dana LawsonPERSON

0.99+

AmazonORGANIZATION

0.99+

Maya AndrettiPERSON

0.99+

DonniePERSON

0.99+

MicrosoftORGANIZATION

0.99+

MonaPERSON

0.99+

NicolePERSON

0.99+

UNICEFORGANIZATION

0.99+

25%QUANTITY

0.99+

GermanyLOCATION

0.99+

14 millionQUANTITY

0.99+

75%QUANTITY

0.99+

ManhattanLOCATION

0.99+

KhanPERSON

0.99+

10 minutesQUANTITY

0.99+

last yearDATE

0.99+

99QUANTITY

0.99+

1.3 timesQUANTITY

0.99+

1.2 timesQUANTITY

0.99+

ClairePERSON

0.99+

DockerORGANIZATION

0.99+

ScottPERSON

0.99+

BenPERSON

0.99+

UC IrvineORGANIZATION

0.99+

85%QUANTITY

0.99+

OracleORGANIZATION

0.99+

34%QUANTITY

0.99+

JustinPERSON

0.99+

JoeyPERSON

0.99+

80%QUANTITY

0.99+

160 imagesQUANTITY

0.99+

2020DATE

0.99+

$10,000QUANTITY

0.99+

10 secondsQUANTITY

0.99+

23 minutesQUANTITY

0.99+

JavaScriptTITLE

0.99+

AprilDATE

0.99+

twoQUANTITY

0.99+

56%QUANTITY

0.99+

PythonTITLE

0.99+

MollyPERSON

0.99+

Mac miniCOMMERCIAL_ITEM

0.99+

Hughie cowerPERSON

0.99+

two weeksQUANTITY

0.99+

100%QUANTITY

0.99+

GeorgiePERSON

0.99+

Matt fallPERSON

0.99+

MarsLOCATION

0.99+

Second questionQUANTITY

0.99+

KubickiPERSON

0.99+

MobyPERSON

0.99+

IndiaLOCATION

0.99+

DockerConEVENT

0.99+

Youi CalPERSON

0.99+

three ninesQUANTITY

0.99+

J frogORGANIZATION

0.99+

200 KQUANTITY

0.99+

appleORGANIZATION

0.99+

SharonPERSON

0.99+

AWSORGANIZATION

0.99+

10 XQUANTITY

0.99+

COVID-19OTHER

0.99+

windowsTITLE

0.99+

381QUANTITY

0.99+

NvidiaORGANIZATION

0.99+

Jill Rouleau, Brad Thornton & Adam Miller, Red Hat | AnsibleFest 2020


 

>> (soft upbeat music) >> Announcer: From around the globe, it's the cube with digital coverage of Ansible Fest 2020, brought to you by RedHat. >> Hello, welcome to the cubes coverage of Ansible Fest 2020. We're not in person, we're virtual. I'm John Furrier , your host of theCube. We've got a great power panel here of RedHat engineers. We have Brad Thorton, Senior Principle Software Engineer for Ansible networking. Adam Miller, Senior Principle Software Engineer for Security and Jill Rouleau, who's the Senior Software Engineer for Ansible Cloud. Thanks for joining me today. Appreciate it. Thanks for coming on. >> Thanks. >> Good to be here. >> We're not in person this year because of COVID, a lot going on but still a lot of great news coming out of Ansible Fest this year. Last year, you guys launched a lot since last year. It's been awesome. Launched the new platform. The automation platform, grown the collections, certified collections community from five supported platforms to over 50, launched a lot of automation services catalog. Brad let's start with you. Why are customers successful with Ansible in networking? >> Why are customers successful with Ansible in networking? Well, let's take a step back to a bit of classic network engineering, right? Lots of CLI interaction with the terminal, a real opportunity for human error there. Managing thousands of devices from the CLI becomes very difficult. I think one of the reasons why Ansible has done well in the networking space and why a lot of network engineers find it very easy to use is because you can still see an attack at the CLI. But what we have the ability to do is pull information from the same COI that you were using manually, and showed that as structured data and then let you return that structured data and push it back to the configuration. So what you get when you're using Ansible is a way to programmatically interface and do configuration management across your entire fleet. It brings consistency and stability, and speed really to network configuration management. >> You know, one of the big hottest areas is, you know, I always ask the folks in the cloud what's next after cloud and pretty much unanimously it's edge, and edge is super important around automation, Brad. What's your thoughts on, as people start thinking about, okay, I need to have edge devices. How does automation play into that? And cause networking, edge it's kind of hand in hand there. So what's your thought on that? >> Yeah, for sure. It really depends on what infrastructure you have at the edge. You might be deploying servers at the edge. You may be administering IOT devices and really how you're directing that traffic either into edge compute or back to your data center. I think one of the places Ansible is going to be really critical is administering the network devices along that path from the edge, from IOT back to the data center, or to the cloud. >> Jill, when you have a Cloud, what's your thoughts on that? Because when you think about Cloud and Multicloud, that's coming around the horizon, you're looking at kind of the operational model. We talked about this a lot last year around having Cloud ops on premises and in the Cloud. What should customers think about when they look at the engineering challenges and the development challenges around Cloud? >> So cloud gets used for a lot of different things, right? But if we step back Cloud just means any sort of distributed applications, whether it's on prem in your own data center, on the edge, in a public hosted environment, and automation is critical for making those things work, when you have these complex applications that are distributed across, whether it's a rack, a data center or globally. You need a tool that can help you make sense of all of that. You've got to... We can't manage things just with, Oh, everything is on one box anymore. Cloud really just means that things have been exploded out and broken up into a bunch of different pieces. And there's now a lot more architectural complexity, no matter where you're running that. And so I think if you step back and look at it from that perspective, you can actually apply a lot of the same approaches and philosophies to these new challenges as they come up without having to reinvent the wheel of how you think about these applications. Just because you're putting them in a new environment, like at the edge or in a public Cloud or on a new, private on premise solution. >> It's interesting, you know, I've been really loving the cloud native action lately, especially with COVID, we're seeing a lot of more modern apps come out of that. If I could follow up there, how do you guys look at tools like Terraform and how does Ansible compare to that? Because you guys are very popular in the cloud configuration, you look at cloud native, Jill, your thoughts. >> Yeah. So Terraform and tools like that. Things like cloud formation or heat in the OpenStack world, they do really, really great at things like deploying your apps and setting up your stack and getting them out there. And they're really focused on that problem space, which is a hard problem space that they do a fantastic job with where Ansible tends to come in and a tool like Ansible is what do you do on day two with that application? How do you run an update? How do you manage it in the longterm of something like 60% of the workloads or cloud spend at least on AWS is still just EC2 instances. What do you do with all of those EC2 instances once you've deployed them, once they're in a stack, whether you're managing it, whatever tool you're managing it with, Ansible is a phenomenal way of getting in there and saying, okay, I have these instances, I know about them, but maybe I just need to connect out and run an update or add a package or reconfigure a service that's running on there. And I think you can glue these things together and use Ansible with these other stack deployment based tools really, really effectively. >> Real quick, just a quick followup on that. what's the big pain point for developers right now when they're looking at these tools? Because they see the path, what are some of the pain points that they're living right now that they're trying to overcome? >> I think one of the problems kind of coincidentally is we have so many tools. We're in kind of a tool explosion in the cloud space, right now. You could piece together as as many tools to manage your stack, as you have components in your stack and just making sense of what that landscape looks like right now and figuring out what are the right tools for the job I'm trying to do, that can be flexible and that are not going to box me into having to spend half of my engineering time, just managing my tools and making sense of all of that is a significant effort and job on its own. >> Yes, too many may add, would choke in years ago in the big data search, the tools, the tool train, one we call the tool shed, after a while, you don't know what's in the back, what you're using every day. People get comfortable with the right tools, but the platform becomes a big part of that thinking holistically as a system. And Adam, this comes back to security. There's more tools in the security space than ever before. Talking about tool challenges, security is the biggest tool shed everyone's got tools they'd buy everything, but you got to look at, what a platform looks like and developers just want to have the truth. And when you look at the configuration management piece of it, security is critical. What's your thoughts on the source of truth when it comes into play for these security appliances? >> So these are... Source of truth piece is kind of an interesting one because this is going to be very dependent on the organization. What type of brownfield environment they've developed, what type of things that they rely on, and what types of data they store there. So we have the ability for various sources of truth to come in for your inventory source and the types of information you store with that. This could be tagged information on a series of cloud instances or series about resources. This could be something you store in a network management tool or a CMDB. This could even be something that you put into a privilege access management system, such as, CyberArk or hashivault. Like those are the things and because of Ansible flexibility and because of the way that everything is put together in a pluggable nature, we have the capability to actually bring in all of these components from anywhere in a brownfield environment, in a preexisting infrastructure, as well as new decisions that are being made for the enterprise as I move forward. And, and we can bring all that together and be that infrastructure glue, be that automation component that can tie all these disjoint loosely coupled, or complete disc couple pieces, together. And that's kind of part of that, that security posture, remediation various levels of introspection into your environment, these types of things, as we go forward, and that's kind of what we're focusing on doing with this. >> What kind of data is stored in the source of truth? >> I mean... So what type of data? This could be credential. It could be single use credential access. This could be your inventory data for your systems, what target systems you're trying to do. It could be, various attributes of different systems to be able to classify them ,and codify them in different ways. It's kind of kind of depending, be configuration data. You know, we have the ability with some of the work that Brad and his team are doing to actually take unstructured data, make it structured, bullet into whatever your chosen source of truth is, store it, and then utilize that to, kind of decompose it into different vendors, specific syntax representations and those types of things. So we have a lot of different capability there as well. >> Brad, you were mentioned, do you have a talk on parsing, can you elaborate on that? And why should network operators care about that? >> Yeah, welcome to 2020. We're still parsing network configuration and operational state. This is an interesting one. If you had asked me years ago, did I think that we would be investing development time into parsing with Ansible network configurations? I would have said, "Well, I certainly hope not. "I hope programmability of network devices and the vendors "really have their API's in order." But I think what we're seeing is network containers are still comfortable with the command line. They're still very familiar with the command line and when it comes time to do operational state assessment and health assessment of your network, engineers are comfortable going to the command line and running show commands. So really what we're trying to do in the parsing space is not author brand new parking and parsing engine ourselves, but really leverage a lot of the open source tools that are already out there bringing them into Ansible, so network engineers can now harvest the critical information from usher operational state commands on their network devices. And then once they've gotten to the structure data, things get really interesting because now you can do entrance criteria checks prior to doing configuration changes, right? So if you want to ensure a network device has a very particular operational state, all the BGP neighbors are, for example before pushing configuration changes, what we have the ability to do now is actually parse the command that you would have run from the command line. Use that within a decision tree in your Ansible playbook, and only move forward when the configuration changes. If the box is healthy. And then once the configuration changes are made at the end, you run those same health checks to ensure that you're in a speck can do a steady state and are production ready. So parsing is the mechanism. It's the data that you get from the parsing that's so critical. >> If I had to ask you real quick, just while it's on my mind. You know, people want to know about automation. It's top of mind use case. What are some of these things around automation and configuration parsing, whether it's parsing to other configuration manager, what are the big challenges around automation? Because it's the Holy grail. Everyone wants it now. What are the couches? where's the hotspots that needs to be jumped on and managed carefully? Or the easiest low hanging fruit? >> Well, there's really two pieces to it, right? There's the technology. And then there's the culture. And, and we talk really about a culture of automation, bringing the team with you as you move into automation, ensuring that everybody has the tools and they're familiar with how automation is going to work and how their day job is going to change because of automation. So I think once the organization embraces automation and the culture is in place. On the technology side, low hanging fruit automation can be as simple as just using Ansible to push the commands that you would have previously pushed to the device. And then as your organization matures, and you mature along this kind of path of network automation, you're dealing with larger pieces, larger sections of the configuration. And I think over time, network engineers will become data managers, right? Because they become less concerned about the network, the vendors specific configuration, and they're really managing the data that makes up the configuration. And I think once you hit that part, you've won at automation because you can move forward with Ansible resource modules. You're well positioned to do NETCONF for RESTCONF or... Right once you've kind of grown to that it's the data that we need to be concerned about and it could fit (indistinct) and the operational state management piece, you're going to go through a transformation on the networking side. >> So you mentioned-- >> And one thing to note there, if I may, I feel like a piece of this too, is you're able to actually bridge teams because of the capability of Ansible, the breadth of technologies that we've had integrations with and our ability to actually bridge that gap between different technologies, different teams. Once you have that culture of automation, you can start to realize these DevOps and DevSecOps workflow styles that are top of everybody's mind these days. And that's something that I think is very powerful. And I like to try to preach when I have the opportunity to talk to folks about what we can do, and the fact that we have so much capability and so many integrations across the entire industry. >> That's a great point. DevSecOps is totally a hop on. When you have software and hardware, it becomes interesting. There's a variety of different equipment, on the security automation. What kind of security appliances can you guys automate? >> As of today, we are able to do endpoint management systems, enterprise firewalls, security information, and event management systems. We're able to do security orchestration, automation, remediation systems, privileged access management systems. We're doing some threat intelligence platforms. And we've recently added to the I'm sorry, did I say intrusion detection? We have intrusion detection and prevention, and we recently added endpoint security management. >> Huge, huge value there. And I think everyone's wants that. Jill, I've got to ask you about the Cloud because the modules came up. What use cases do you see the Ansible modules in for the public cloud? Because you got a lot of cloud native folks in public cloud, you've got enterprises lifting and shifting, there's a hybrid and multicloud horizon here. What's some of the use cases where you see those Ansible modules fitting well with public level. >> The modules that we have in public cloud can work across all of those things, you know. In our public clouds, we have support for Amazon web services, Azure GCP, and they all support your main services. You can spin up a Lambda, you can deploy ECS clusters, build AMI, all of those things. And then once you get all of that up there, especially looking at AWS, which is where I spend the most time, you get all your EC2 instances up, you can now pull that back down into Ansible, build an inventory from that. And seamlessly then use Ansible to manage those instances, whether they're running Linux or windows or whatever distro you might have them running, we can go straight from having deployed all of those services and resources to managing them and going between your instances in your traditional operating system management or those instances and your cloud services. And if you've got multiple clouds or if you still have on prem, or if you need to, for some reason, add those remote cloud instances into some sort of on-prem hardware load balancer, security endpoint, we can go between all of those things and glue everything together, fairly seamlessly. You can put all of that into tower and have one kind of view of your cloud and your hardware and your on-prem and being able to move things between them. >> Just put some color commentary on what that means for the customer in terms of, is it pain reduction, time savings? How would you classify their value? >> I mean, both. Instead of having to go between a number of different tools and say, "Oh, well for my on-prem, I have to use this. "But as soon as I shift over to a cloud, "I have to use these tools. "And, Oh, I can't manage my Linux instances with this tool "that only knows how to speak to, the EC2 to API." You can use one tool for all of these things. So like we were saying, bring all of your different teams together, give them one tool and one view for managing everything end to end. I think that's, that's pretty killer. >> All right. Now I get to the fun part. I want you guys to weigh in on the Kubernetes. Adam, if you can start with you, we'll start with you go in and tell us why is Kubernetes more important now? What does it mean? A lot of hype continues to be out there. What's the real meet around Kubernetes what's going on? >> I think the big thing is the modernization of the application development delivery. When you talk about Kubernetes and OpenShift and the capabilities we have there, and you talk about the architecture, you can build a lot of the tooling that you used to have to maintain, to be able to deliver sophisticated resilient architectures in your application stack, are now baked into the actual platform, so the container platform itself takes care of that for you and removes that complexity from your operations team, from your development team. And then they can actually start to use these primitives and kind of achieve what the cloud native compute foundation keeps calling cloud native applications and the ability to develop and do this in a way that you are able to take yourself out of some of the components you used to have to babysit a lot. And that becomes in also with the OpenShift operator framework that came out of originally Coral S, and if you go to operator hub, you're able to see these full lifecycle management stacks of infrastructure components that you don't... You no longer have to actually, maintain a large portion of what you start to do. And so the operator SDK itself, are actually developing these operators. Ansible is one of the automation capabilities. So there's currently three supported there's Ansible, there's one that you just have full access to the Golang API and then helm charts. So Ansible's specifically obviously being where we focus. We have our collection content for the... carries that core, and then also ReHat to OpenShift certified collection's coming out in, I think, a month or so. Don't hold me to the timeline. I'm shoving in trouble for that one, but we have those things going to come out. Those will be baked into the operator's decay that we fully supported by our customer base. And then we can actually start utilizing the Ansible expertise of your operations team to container native of the infrastructure components that you want to put into this new platform. And then Ansible itself is able to build that capability of automating the entire Kubernetes or OpenShift cluster in a way that allows you to go into a brownfield environment and automate your existing infrastructure, along with your more container native, futuristic next generation, net structure. >> Jill this brings up the question. Why don't you just use native public cloud resources versus Kubernetes and Ansible? What's the... What should people know about where you use that, those resources? >> Well, and it's kind of what Adam was saying with all of those brownfield deployments and to the same point, how many workloads are still running just in EC2 instances or VMs on the cloud. There's still a lot of tech out there that is not ready to be made fully cloud native or containerized or broken up. And with OpenShift, it's one more layer that lets you put everything into a kind of single environment instead of having to break things up and say, "Oh, well, this application has to go here. "And this application has to be in this environment.' You can do that across a public cloud and use a little of this component and a little of that component. But if you can bring everything together in OpenShift and manage it all with the same tools on the same platform, it simplifies the landscape of, I need to care about all of these things and look at all of these different things and keep track of these and are my tools all going to work together and are my tools secure? Anytime you can simplify that part of your infrastructure, I think is a big win. >> John: You know, I think about-- >> The one thing, if I may, Jill spoke to this, I think in the way that a architectural, infrastructure person would, but I want to try to really quick take the business analyst component of it as the hybrid component. If you're trying to address multiple footprints, both on prem, off prem, multiple public clouds, if you're running OpenShift across all of them, you have that single, consistent deployment and development footprint for everywhere. So I don't disagree with anything they said, I just wanted to focus specifically on... That piece is something that I find personally unique, as that was a problem for me in a past life. And that kind of speaks to me. >> Well, speaking of past lives-- >> Having me as an infrastructure person, thank you. >> Yeah. >> Well, speaking of past lives, OpenStack, you look at Jill with OpenStack, we've been covering the Cuba thing when OpenStack was rolling out back in the day, but you can also have private cloud. Where you used to... There's a lot of private cloud out there. How do you talk about that? How do people understand using public cloud versus the private cloud aspect of Ansible? >> Yeah, and I think there is still a lot of private cloud out there and I don't think that's a bad thing. I've kind of moved over onto the public cloud side of things, but there are still a lot of use cases that a lot of different industries and companies have that don't make sense for putting into public cloud. So you still have a lot of these on-prem open shift and on-prem OpenStack deployments that make a ton of sense and that are solving a bunch of problems for these folks. And I think they can all work together. We have Ansible that can support both of those. If you're a telco, you're not going to put your network function, virtualization on USC as to one in spot instances, right? When you call nine one one, you don't want that going through the public cloud. You want that to be on dedicated infrastructure, that's reliable and well-managed and engineered for that use case. So I think we're going to see a lot of ongoing OpenStack and on-prem OpenShift, especially with edge, enabling those types of use cases for a long time. And I think that's great. >> I totally agree with you. I think private cloud is not a bad thing at all. Things that are only going to accelerate my opinion. You look at the VM world, they talked about the telco cloud and you mentioned edge when five G comes out, you're going to have basically have private clouds everywhere, I guess, in my opinion. But anyway, speaking of VMware, could you talk about the Ansible VMware module real quick? >> Yeah, so we have a new collection that we'll be debuting at Ansible Fest this year bore the VMware REST API. So the existing VMware modules that we have usually SOAP API for VMware, and they rely on an external Python library that VMware provides, but with these fare 6.0 and especially in vSphere 6.5, VMware has stepped up with a REST API end point that we find is a lot more performance and offers a lot of options. So we built a new collection of VMware modules that will take advantage of that. That's brand new, it's a lighter way. It's much faster, we'll get better performance out of it. You know, reduced external requirements. You can install it and get started faster. And especially with these sphere seven, continuing to build on this REST API, we're going to see more and more interfaces being exposed so that we can take advantage. We plan to expand it as new interfaces are being exposed in that API, it's compatible with all of the existing modules. You can go back and forth, use your existing playbooks and start introducing these. But I think especially on the performance side, and especially as we get these larger clouds and more cloud deployments, edge clouds, where you have these private clouds and lots and lots of different places, the performance benefits of this new collection that we're trying to build is going to be really, really powerful for a lot of folks. >> Awesome. Brad, we didn't forget about you. We're going to bring you back in. Network automation has moved towards the resource modules. Why should people care about them? >> Yeah. Resource modules, excuse me. Probably I think having been a network engineer for so long, I think some of the most exciting work that has gone into Ansible network over the past year and a half, what the resource modules really do for you is they will reach out to network devices. They will pull back that network native, that vendor native configuration. While the resource module actually does the parsing for you. So there's none of that with the resource modules. And we returned structured data back to the user that represents the configuration. Going back to your question about source of truth. You can take that structure data, maybe for your interface CONFIG, your OSPF CONFIG, your access list CONFIG, and you can store that data in your source of truth under source of truth. And then where you are moving forward, is you really spend time as every engineer managing the data that makes up the configuration, and you can share that data across different platforms. So if you were to look at a lot of the resource modules, the data model that they support, it's fairly consistent between vendors. As an example, I can pull OSPF configuration from one vendor and with very small changes, push that OSPF configuration to a different vendor's platform. So really what we've tried to do with the resource modules is normalize the data model across vendors. It'll never be a hundred percent because there's functionality that exists in one platform that doesn't exist and that's exposed through the configuration, but where we could, we have normalized the data model. So I think it's really introducing the concept of network configuration management through data management and not through CLI commands anymore. >> Yeah, that's a great point. It just expands the network automation vision. And one of the things that's interesting here in this panel is you're talking about, cloud holistically, public multicloud, private hybrid security network automation as a platform, not just a tool, we're still going to have all kind of tools out there. And then the importance of automating the edge. I mean, that's a network game Brad. I mean, it's a data problem, right? I mean, we all know about networking, moving packets from here to there, but automating the data is critical and you give have bad data and you don't have... If you have misinformation, it sounds like our current politics, but you know, bad information is bad automation. I mean, what's your thoughts? How do you share that concept to developers out there? What should they be thinking about in terms of the data quality? >> I think that's the next thing we have to tackle as network engineers. It's not, do I have access to the data? You can get the data now for resource modules, you can get the data from NETCONF, from RESTCONF, you can get it from OpenConfig, you can get it from parsing. The question really is, how do you ensure the integrity and the quality of the data that is making up your configurations and the consistency of the data that you're using to look at operational state. And I think this is where the source of truth really becomes important. If you look at Git as a viable source of truth, you've got all the tools and the mechanisms within Git to use that as your source of truth for network configuration. So network engineers are actually becoming developers in the sense that they're using Git ops to worklow to manage configuration moving forward. It's just really exciting to see that transformation happen. >> Great panel. Thanks for everyone coming on, I appreciate it. We'll just end this by saying, if you guys could just quickly summarize Ansible fast 2020 virtual, what should people walk away with? What should your customers walk away with this year? What's the key points. Jill, we'll start with you. >> Hopefully folks will walk away with the idea that the Ansible community includes so many different folks from all over, solving lots of different, interesting problems, and that we can all come together and work together to solve those problems in a way that is much more effective than if we were all trying to solve them individually ourselves, by bringing those problems out into the open and working together, we get a lot done. >> Awesome, Brad? >> I'm going to go with collections, collections, collections. We introduced in last year. This year, they are real. Ansible2.10 that just came out is made up of collections. We've got certified collections on automation. We've got cloud collections, network collections. So they are here. They're the real thing. And I think it just gets better and deeper and more content moving forward. All right, Adam? >> Going last is difficult. Especially following these two. They covered a lot of ground and I don't really know that I have much to add beyond the fact that when you think about Ansible, don't think about it in a single context. It is a complete automation solution. The capability that we have is very extensible. It's very pluggable, which has a standing ovation to the collections and the solutions that we can come up with collectively. Thanks to ourselves. Everybody in the community is almost infinite. A few years ago, one of the core engineers did a keynote speech using Ansible to automate Philips hue light bulbs. Like this is what we're capable of. We can automate the fortune 500 data centers and telco networks. And then we can also automate random IOT devices around your house. Like we have a lot of capability here and what we can do with the platform is very unique and something special. And it's very much thanks to the community, the team, the open source development way. I just, yeah-- >> (Indistinct) the open source of truth, being collaborative all is what it makes up and DevOps and Sec all happening together. Thanks for the insight. Appreciate the time. Thank you. >> Thank you. I'm John Furrier, you're watching theCube here for Ansible Fest, 2020 virtual. Thanks for watching. (soft upbeat music)

Published Date : Sep 29 2020

SUMMARY :

brought to you by RedHat. and Jill Rouleau, who's the Launched the new platform. and then let you return I always ask the folks in the along that path from the edge, from IOT and the development lot of the same approaches and how does Ansible compare to that? And I think you can glue that they're trying to overcome? as you have components in your And when you look at the and because of the way that and those types of things. It's the data that you If I had to ask you real quick, bringing the team with you and the fact that we on the security automation. and we recently added What's some of the use cases where you see those Ansible and being able to move Instead of having to go between A lot of hype continues to be out there. and the capabilities we have there, about where you use that, and a little of that component. And that kind of speaks to me. infrastructure person, thank you. but you can also have private cloud. and that are solving a bunch You look at the VM world, and lots and lots of different places, We're going to bring you back in. and you can store that data and you give have bad data and the consistency of What's the key points. and that we can all come I'm going to go with collections, and the solutions that we can Thanks for the insight. Thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
BradPERSON

0.99+

Adam MillerPERSON

0.99+

Brad ThortonPERSON

0.99+

JohnPERSON

0.99+

60%QUANTITY

0.99+

AdamPERSON

0.99+

JillPERSON

0.99+

Jill RouleauPERSON

0.99+

AnsibleORGANIZATION

0.99+

John FurrierPERSON

0.99+

two piecesQUANTITY

0.99+

Last yearDATE

0.99+

This yearDATE

0.99+

last yearDATE

0.99+

AmazonORGANIZATION

0.99+

GitTITLE

0.99+

AWSORGANIZATION

0.99+

vSphere 6.5TITLE

0.99+

OpenShiftTITLE

0.99+

RedHatORGANIZATION

0.99+

PhilipsORGANIZATION

0.99+

KubernetesTITLE

0.99+

PythonTITLE

0.99+

LinuxTITLE

0.99+

twoQUANTITY

0.99+

EC2TITLE

0.99+

five supported platformsQUANTITY

0.99+

Ansible FestEVENT

0.99+

one toolQUANTITY

0.99+

todayDATE

0.99+

thousands of devicesQUANTITY

0.99+

over 50QUANTITY

0.99+

bothQUANTITY

0.98+

USCORGANIZATION

0.98+

2020DATE

0.98+

oneQUANTITY

0.98+

one boxQUANTITY

0.98+

LambdaTITLE

0.98+

this yearDATE

0.98+

Brad ThorntonPERSON

0.98+

windowsTITLE

0.98+

telcoORGANIZATION

0.98+

one more layerQUANTITY

0.98+

one platformQUANTITY

0.98+

Ansible Fest 2020EVENT

0.97+

DevSecOpsTITLE

0.97+

AnsibleFestEVENT

0.96+

day twoQUANTITY

0.96+

one vendorQUANTITY

0.96+

NETCONFORGANIZATION

0.95+

threeQUANTITY

0.95+

nineQUANTITY

0.95+

one viewQUANTITY

0.95+

hundred percentQUANTITY

0.94+

Frank Slootman, Snowflake | CUBE Conversation, April 2020


 

(upbeat music) >> Narrator: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCUBE Coversation. >> All right everybody, this is Dave Vellante and welcome to this special CUBE Conversation. I first met Frank Slootman in 2007 when he was the CEO of Data Domain. Back then he was the CEO of a disruptive company and still is. Data Domain, believe or not back then, was actually replacing tape drives as the primary mechanism for backup. Yes, believe it or not, it used to be tape. Fast forward several years later, I met Frank again at VMworld when he had become the CEO of ServiceNow. At the time ServiceNow was a small company, about 100 plus million dollars. Frank and his team took that company to 1.2 billion. And Gartner, at the time of IPO said "you know, this doesn't make sense. "It's a small market, it's a very narrow help desk market, "it's maybe a couple billion dollars." The vision of Slootman and his team was to really expand the total available market and execute like a laser. Which they did and today, ServiceNow a very, very successful company. Snowflake first came into my line of sight in 2015 when SiliconANGLE wrote an article, "Why Snowflake is Better "Than Amazon Redshift, Re-imagining Data". Well last year Frank Slootman joined Snowflake, another disruptive company. And he's here today to talk about how Snowflake is really participating in this COVID-19 crisis. And I really want to share some of Frank's insights and leadership principles, Frank great to see you, thanks for coming on. >> Yeah, thanks for having us Dave. >> So when I first reported earlier this year on Snowflake and shared some data with the community, you reached back out to me and said "Dave, I want to just share with you. "I am not a playbook CEO, I am a situational CEO. "This is what I learned in the military." So Frank, this COVID-19 situation was thrown at you, it's a black swan, what was your first move as a leader? >> Well, my first move is let's not overreact. Take a deep breath. Let's really examine what we know. Let's not jump to conclusions, let's not try to project things that we're not capable of projecting. That's hard because we tend to have sort of levels of certainty about what's going to happen in the week, in the next month and so on and all of a sudden that's out of the window. It creates enormous anxiety with people. So in other words you got to sort of reset to okay, what do we know, what can we do, what do we control? And not let our minds sort of go out of control. So I talk to our people all the time about maintain a sense of normalcy, focus on the work, stay in the moment and by the way, turn the newsfeed off, right, because the hysteria you get fed through the media is really not helpful, right? So just cool down and focus on what we still can do. And then I think then everybody takes a deep breath and we just go back to work. I mean, we're in this mode now for three weeks and I can tell you, I'm on teleconferencing calls, whatever, eight, nine hours a day. Prospects, customers, all over the world. Pretty much what I was doing before except I'm not traveling right now. So it's not, >> Yeah, so it sounds clear-- >> Not that different than what it was before. (laughs) >> It sounds very Bill Belichickian, you know? >> Yeah. >> Focus on those things of which you can control. When you were running ServiceNow I really learned it from you and of course Mike Scarpelli, your then and current CFO about the importance of transparency. And I'm interested in how you're communicating, it sounds like you're doing some very similar things but have you changed the way in which you've communicated to your team, your internal employees at all? >> We're communicating much more. Because we can no longer rely on sort of running into people here, there and everywhere. So we have to be much more purposeful about communications. For example, I mean I send an email out to the entire company on Monday morning. And it's kind of a bunch of anecdotes. Just to bring the connection back, the normalcy. It just helps people get connected back to the mothership and like well, things are still going on. We're still talking in the way we always used to be. And that really helps and I also, I check in with people a lot more, I ask all of our leadership to constantly check in with people because you can't assume that everybody is okay, you can't be out of sight, out of mind. So we need to be more purposeful in reaching out and communicating with people than we were previously. >> And a lot of people obviously concerned about their jobs. Have you sort of communicated, what have you communicated to employees about layoffs? I mean, you guys just did a large raise just before all this, your timing was kind of impeccable. But what have you communicated in that regard? >> I've said, there's no layoffs on our radar, number one. Number two, we are hiring. And number three is we have a higher level of scrutiny on the hires that we're making. And I am very transparent. In other words I tell people look, I prioritize the roles that are closest to the direct train of the business. Right, it's kind of common sense. But I wanted to make sure that this is how we're thinking about it. There are some roles that are more postponable than others. I'm hiring in engineering without any reservation because that is the long term strategic interest of the company. One the sales side, I want to know that sales leaders know how to convert to yields, that we're not just sort of bringing capacity online. And the leadership is not convinced or confident that they can convert to yield. So there's a little bit finer level of scrutiny on the hiring. But by and large, it's not that different. There's this saying out there that we should suspend all non-essential spending and hiring, I'm like you should always do that. Right? I mean what's different today? (both laugh) If it's non-essential, why do it, right? So all of this comes back to this is probably how we should operate anyways, yep. >> I want to talk a little bit about the tech behind Snowflake. I'm very sensitive when CEOs come on my program to make sure that we're not, I'm not trying to bait CEOs into ambulance chasing, that's not what it's about. But I do want to share with our community kind of what's new, what's changed and how companies like Snowflake are participating in this crisis. And in particular, we've been reporting for awhile, if you guys bring up that first slide. That the innovation in the industry is really no longer about Moore's Law. It's really shifted. There's a new, what we call an innovation cocktail in the business and we've collected all this data over the last 10 years. With Hadoop and other distributed data and now we have Edge Data, et cetera, there's this huge trove of data. And now AI is becoming real, it's becoming much more economical. So applying machine intelligence to this data and then the Cloud allows us to do this at scale. It allows us to bring in more data sources. It brings an agility in. So I wonder if you could talk about sort of this premise and how you guys fit. >> Yeah, I would start off by reordering the sequence and saying Cloud's number one. That is foundational. That helps us bring scale to data that we never had to number two, it helps us bring computational power to data at levels we've never had before. And that just means that queries and workloads can complete orders of magnitude faster than they ever could before. And that introduces concepts like the time value of data, right? The faster you get it, the more impactful and powerful it is. I do agree, I view AI as sort of the next generation of analytics. Instead of using data to inform people, we're using data to drive processes and businesses directly, right? So I'm agreeing obviously with these strengths because we're the principal beneficiaries and drivers of these platforms. >> Well when we talked about earlier this year about Snowflake, we really brought up the notion that you guys were one of the first if not the first. And guys, bring back Frank, I got to see him. (Frank chuckles) One of the first to really sort of separate the notion of being able to scale, compute independent of storage. And that brought not only economics but it brought flexibility. So you've got this Cloud-native database. Again, what caught my attention in that Redshift article we wrote is essentially for our audience, Redshift was based on ParAccel. Amazon did a great job of really sort of making that a Cloud database but it really wasn't born in the Cloud and that's sort of the advantage of Snowflake. So that architectural approach is starting to really take hold. So I want to give an example. Guys if you bring up the next chart. This is an example of a system that I've been using since early January when I saw this COVID come out. Somebody texted me this. And it's the Johns Hopkins dataset, it's awesome. It shows you, go around the map, you can follow it, it's pretty close to real time. And it's quite good. But the problem is, all right thank you guys. The problem is that when I started to look at, I wanted to get into sort of a more granular view of the counties. And I couldn't do that. So guys bring up the next slide if you would. So what I did was I searched around and I found a New York Times GitHub data instance. And you can see it in the top left here. And basically it was a CSV. And notice what it says, it says we can't make this file beautiful and searchable because it's essentially too big. And then I ran into what you guys are doing with Star Schema, Star Schema's a data company. And essentially you guys made the notion that look, the Johns Hopkins dataset as great as it is it's not sort of ready for analytics, it's got to be cleaned, et cetera. And so I want you to talk about that a little bit. Guys, if you could bring Frank back. And share with us what you guys have done with Star Schema and how that's helping understand COVID-19 and its progression. >> Yeah, one of the really cool concepts I've felt about Snowflake is what we call the data sharing architecture. And what that really means is that if you and I both have Snowflake accounts, even though we work for different institutions, we can share data optics, tables, schema, whatever they are with each other. And you can process against that in place if they are residing in a local, to your own platform. We have taken that concept from private also to public. So that data providers like Star Schema can list their datasets, because they're a data company, so obviously it's in their business interest to allow this data to be profiled and to be accessible by the Snowflake community. And this data is what we call analytics ready. It is instantly accessible. It is also continually updated, you have to do nothing. It's augmented with incremental data and then our Snowflake users can just combine this data with supply chain, with economic data, with internal operating data and so on. And we got a very strong reaction from our customer base because they're like "man, you're saving us weeks "if not months just getting prepared to start to do an al, let alone doing them." Right? Because the data is analytics ready and they have to do literally nothing. I mean in other words if they ask us for it in the morning, in the afternoon they'll be running workloads again. Right, and then combining it with their own data. >> Yeah, so I should point out that that New York Times GitHub dataset that I showed you, it's a couple of days behind. We're talking here about near realtime, or as close as realtime as you can get, is that right? >> Yep. Yeah, every day it gets updated. >> So the other thing, one of the things we've been reporting, and Frank I wondered if you could comment on this, is this new emerging workloads in the Cloud. We've been reporting on this for a couple of years. The first generation of Cloud was IS, was really about compute, storage, some database infrastructure. But really now what we're seeing is these analytic data stores where the valuable data is sitting and much of it is in the Cloud and bringing machine intelligence and data science capabilities to that, to allow for this realtime or near realtime analysis. And that is a new, emerging workload that is really gaining a lot of steam as these companies try to go to this so-called digital transformation. Your comments on that. >> Yeah, we refer to that as the emergence or the rise of the data Cloud. If you look at the Cloud landscape, we're all very familiar with the infrastructure clouds. AWS and Azure and GCP and so on, it's just massive storage and servers. And obviously there's data locked in to those infrastructure clouds as well. We've been familiar for it for 10, 20 years now with application clouds, notably Salesforce but obviously Workday, ServiceNow, SAP and so on, they also have data in them, right? But now you're seeing that people are unsiloing the data. This is super important. Because as long as the data is locked in these infrastructure clouds, in these application clouds, we can't do the things that we need to do with it, right? We have to unsilo it to allow the scale of querying and execution against that data. And you don't see that any more clear that you do right now during this meltdown that we're experiencing. >> Okay so I learned long ago Frank not to argue with you but I want to push you on something. (Frank laughs) So I'm not trying to be argumentative. But one of those silos is on-prem. I've heard you talk about "look, we're a Cloud company. "We're Cloud first, we're Cloud only. "We're not going to do an on-prem version." But some of that data lives on-prem. There are companies out there that are saying "hey, we separate compute and storage too, "we run in the Cloud. "But we also run on-prem, that's our big differentiator." Your thoughts on that. >> Yeah, we burnt the ship behind us. Okay, we're not doing this endless hedging that people have done for 20 years, sort of keeping a leg in both worlds. Forget it, this will only work in the public Cloud. Because this is how the utility model works, right? I think everybody is coming to this realization, right? I mean excuses are running out at this point. We think that it'll, people will come to the public Cloud a lot sooner than we will ever come to the private Cloud. It's not that we can't run on a private cloud, it just diminishes the potential and the value that we bring. >> So as sort of mentioned in my intro, you have always been at the forefront of disruption. And you think about digital transformation. You know Frank we go to all of these events, it used to be physical and now we're doing theCUBE digital. And so everybody talks about digital transformation. CEOs get up, they talk about how they're helping their customers move to digital. But the reality is is when you actually talk to businesses, there was a lot of complacency. "Hey, this isn't really going to happen in my lifetime" or "we're doing pretty well." Or maybe the CEO might be committed but it doesn't necessarily trickle down to the P&L managers who have an update. One of the things that we've been talking about is COVID-19 is going to accelerate that digital transformation and make it a mandate. You're seeing it obviously in retail play out and a number of other industries, supply chains are, this has wreaked havoc on supply chains. And so there's going to be a rethinking. What are your thoughts on the acceleration of digital transformation? >> Well obviously the crisis that we're experiencing is obviously an enormous catalyst for digital transformation and everything that that entails. And what that means and I think as a industry we're just victims of inertia. Right, I mean haven't understood for 20 years why education, both K through 12 but also higher ed, why they're so brick and mortar bound and the way they're doing things, right? And we could massively scale and drop the cost of education by going digital. Now we're forced into it and everybody's like "wow, "this is not bad." You're right, it isn't, right but we haven't so the economics, the economic imperative hasn't really set in but it is now. So these are all great things. Having said that, there are also limits to digital transformation. And I'm sort of experiencing that right now, being on video calls all day. And oftentimes people I've never met before, right? There's still a barrier there, right? It's not like digital can replace absolutely everything. And that is just not true, right? I mean there's some level of filter that just doesn't happen when you're digital. So there's still a need for people to be in the same place. I don't want to sort of over rotate on this concept, that like okay, from here on out we're all going to be on the wires, that's not the way it will be. >> Yeah, be balanced. So earlier you made a comment, that "we should never "be spending on non-essential items". And so you've seen (Frank laughs) back in 2008 you saw the Rest in Peace good times, you've seen the black swan memos that go out. I assume that, I mean you're a very successful investor as well, you've done a couple of stints in the VC community. What are you seeing in the Valley in regard to investments, will investments continue, will we continue to feed innovation, what's your sense of that? Well this is another wake up call. Because in Silicon Valley there's way too much money. There's certainly a lot of ideas but there's not a lot of people that can execute on it. So what happens is a lot of things get funded and the execution is either no good or it's just not a valid opportunity. And when you go through a downturn like this you're finding out that those businesses are not going to make it. I mean when the tide is running out, only the strongest players are going to survive that. It's almost a natural selection process that happens from time to time. It's not necessarily a bad thing because people get reallocated. I mean Silicon Valley is basically one giant beehive, right? I mean we're constantly repurposing money and people and talent and so on. And that's actually good because if an idea is not worth in investing in, let's not do it. Let's repurpose those resources in places where it has merit, where it has viability. >> Well Frank, I want to thank you for coming on. Look, I mean you don't have to do this. You could've retired long, long ago but having leaders like you in place in these times of crisis, but even when in good times to lead companies, inspire people. And we really appreciate what you do for companies, for your employees, for your customers and certainly for our community, so thanks again, I really appreciate it. >> Happy to do it, thanks Dave. >> All right and thank you for watching everybody, Dave Vellante for theCUBE, we will see you next time. (upbeat music)

Published Date : Apr 1 2020

SUMMARY :

this is theCUBE Coversation. And I really want to share some of Frank's insights and said "Dave, I want to just share with you. So in other words you got to sort of reset to okay, Not that different than what it was before. I really learned it from you and of course Mike Scarpelli, I ask all of our leadership to constantly check in But what have you communicated in that regard? So all of this comes back to this is probably how and how you guys fit. And that just means that queries and workloads And then I ran into what you guys are doing And what that really means is that if you and I or as close as realtime as you can get, is that right? Yeah, every day it gets updated. and much of it is in the Cloud And you don't see that any more clear that you do right now Okay so I learned long ago Frank not to argue with you and the value that we bring. But the reality is is when you actually talk And I'm sort of experiencing that right now, And when you go through a downturn like this And we really appreciate what you do for companies, Dave Vellante for theCUBE, we will see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
FrankPERSON

0.99+

Mike ScarpelliPERSON

0.99+

2007DATE

0.99+

SlootmanPERSON

0.99+

Frank SlootmanPERSON

0.99+

Dave VellantePERSON

0.99+

2008DATE

0.99+

Bill BelichickianPERSON

0.99+

2015DATE

0.99+

AmazonORGANIZATION

0.99+

April 2020DATE

0.99+

DavePERSON

0.99+

20 yearsQUANTITY

0.99+

Data DomainORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

Monday morningDATE

0.99+

1.2 billionQUANTITY

0.99+

three weeksQUANTITY

0.99+

BostonLOCATION

0.99+

eightQUANTITY

0.99+

Star SchemaORGANIZATION

0.99+

early JanuaryDATE

0.99+

ServiceNowORGANIZATION

0.99+

last yearDATE

0.99+

10QUANTITY

0.99+

firstQUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

GartnerORGANIZATION

0.99+

first moveQUANTITY

0.99+

SnowflakeORGANIZATION

0.99+

COVID-19OTHER

0.99+

bothQUANTITY

0.98+

AWSORGANIZATION

0.98+

VMworldORGANIZATION

0.98+

OneQUANTITY

0.98+

about 100 plus million dollarsQUANTITY

0.98+

earlier this yearDATE

0.98+

theCUBE StudiosORGANIZATION

0.98+

first slideQUANTITY

0.98+

several years laterDATE

0.98+

SiliconANGLEORGANIZATION

0.98+

both worldsQUANTITY

0.98+

playbookORGANIZATION

0.97+

oneQUANTITY

0.97+

next monthDATE

0.97+

New York TimesORGANIZATION

0.97+

GitHubORGANIZATION

0.97+

first generationQUANTITY

0.96+

nine hours a dayQUANTITY

0.96+

todayDATE

0.96+

12QUANTITY

0.95+

Johns HopkinsORGANIZATION

0.94+

Scott Hanselman, Microsoft | Microsoft Ignite 2019


 

>> Announcer: Live from Orlando, Florida it's theCUBE! Covering Microsoft Ignite, brought to you by Cohesity. >> Hello, and happy taco Tuesday CUBE viewers! You are watching theCUBE's live coverage of Microsoft's Ignite here in Orlando, Florida. I'm your host Rebecca Knight, along with Stu Miniman. We're joined by Scott Hanselman, he is the partner program manager at Microsoft. Thank you so much for coming on theCUBE! >> Absolutely, my pleasure! >> Rebecca: And happy taco Tuesday to you! Will code for tacos. >> Will code for tacos. >> I'm digging it, I'm digging it >> I'm a very inexpensive coder. >> So you are the partner program manager, but you're really the people's programmer at Microsoft. Satya Nadella up on the main stage yesterday, talking about programming for everyone, empowering ordinary citizen developers, and you yourself were on the main stage this morning, "App Development for All", why is this such a priority for Microsoft at this point in time? >> Well there's the priority for Microsoft, and then I'll also speak selfishly as a priority for me, because when we talk about inclusion, what does that really mean? Well it is the opposite of exclusion. So when we mean inclusion, we need to mean everyone, we need to include everyone. So what can we do to make technology, to make programming possible, to make everyone enabled, whether that be something like drag and drop, and PowerApps, and the Power platform, all the way down to doing things like we did in the keynote this morning with C# on a tiny micro-controller, and the entire spectrum in between, whether it be citizen programmers in Excel using Power BI to go and do machine learning, or the silly things that we did in the keynote with rock, paper scissors that we might be able to talk about. All of that means including everyone and if the site isn't accessible, if Visual Studio as a tool isn't accessible, if you're training your AI in a non-ethical way, you are consciously excluding people. So back to what Satya thinks is why can't everyone do this? SatyaSacha thinks is why can't everyone do this? Why are we as programmers having any gate keeping, or you know, "You can't do that you're not a programmer, "you know, I'm a programmer, you can't have that." >> So what does the future look like, >> Rebecca: So what does the future look like, if everyone knows how to do it? I mean, do some imagining, visioning right now about if everyone does know how to do this, or at least can learn the building blocks for it, what does technology look like? >> Well hopefully it will be ethical, and it'll be democratized so that everyone can do it. I think that the things that are interesting, or innovative today will become commoditized tomorrow, like, something as simple as a webcam detecting your face, and putting a square around it and then you move around, and the square, we were like, "Oh my God, that was amazing!" And now it's just a library that you can download. What is amazing and interesting today, like AR and VR, where it's like, "Oh wow, I've never seen augmented reality work like that!" My eight-year-old will be able to do it in five years, and they'll be older than eight. >> So Scott, one of the big takeaways I had from the app dev keynote that you did this morning was in the past it was trying to get everybody on the same page, let's move them to our stack, let's move them to our cloud, let's move them on this programming language, and you really talked about how the example of Chipotle is different parts of the organization will write in a different language, and there needs to be, it's almost, you know, that service bus that you have between all of these environments, because we've spent, a lot of us, I know in my career I've spent decades trying to help break down those silos, and get everybody to work together, but we're never going to have everybody doing the same jobs, so we need to meet them where they are, they need to allow them to use the tools, the languages, the platforms that they want, but they need to all be able to work together, and this is not the Microsoft that I grew up with that is now an enabler of that environment. The word we keep coming back to is trust at the keynote. I know there's some awesome, cool new stuff about .net which is a piece of it, but it's all of the things together. >> Right, you know I was teaching a class at Mesa Community College down in San Diego a couple of days ago and they were trying, they were all people who wanted jobs, just community college people, I went to community college and it's like, I just want to know how to get a job, what is the thing that I can do? What language should I learn? And that's a tough question. They wonder, do I learn Java, do I learn C#? And someone had a really funny analogy, and I'll share it with you. They said, well you know English is the language, right? Why don't the other languages just give up? They said, you know, Finland, they're not going to win, right? Their language didn't win, so they should just give up, and they should all speak English, and I said, What an awful thing! They like their language! I'm not going to go to people who do Haskell, or Rust, or Scala, or F#, and say, you should give up! You're not going to win because C won, or Java won, or C# won. So instead, why don't we focus on standards where we can inter-operate, where we can accept that the reality is a hybrid cloud things like Azure Arc that allows us to connect multiple clouds, multi-vendor clouds together. That is all encompassing the concept of inclusion, including everyone means including every language, and as many standards as you can. So it might sound a little bit like a Tower of Babel, but we do have standards and the standards are HTTP, REST, JSON, JavaScript. It may not be the web we deserve, but it's the web that we have, so we'll use those building block technologies, and then let people do their own thing. >> So speaking of the keynote this morning, one of the cool things you were doing was talking about the rock, paper, scissors game, and how it's expanding. Tell our viewers a little bit more about the new elements to rock, paper, scissors. >> So folks named Sam Kass, a gentleman named Sam Kass many, many years ago on the internet, when the internet was much simpler web pages, created a game called Rock, Paper, Scissors, Lizard, Spock, and a lot of people will know that from a popular TV show on CBS, and they'll give credit to that show, in fact it was Sam Kass and Karen Bryla who created that, and we sent them a note and said, "Hey can I write a game about this?" And we basically built a Rock, Paper, Scissors, Lizard, Spock game in the cloud containerized at scale with multiple languages, and then we also put it on a tiny device, and what's fun about the game from a complexity perspective is that rock, paper, scissors is easy. There's only three rules, right? Paper covers rock, which makes no sense, but when you have five, it's hard! Spock shoots the Rock with his phaser, and then the lizard poisons Spock, and the paper disproves, and it gets really hard and complicated, but it's also super fun and nerdy. So we went and created a containerized app where we had all different bots, we had node, Python, Java, C#, and PHP, and then you can say, I'm going to pick Spock and .net, or node and paper, and have them fight, and then we added in some AI, and some machine learning, and some custom vision such that if you sign in with Twitter in this game, it will learn your patterns, and try to defeat you using your patterns and then, clicking on your choices and fun, snd then, clicking on your choices and fun, because we all want to go, "Rock, Paper, Scissors shoot!" So we made a custom vision model that would go, and detect your hand or whatever that is saying, this is Spock and then it would select it and play the game. So it was just great fun, and it was a lot more fun than a lot of the corporate demos that you see these days. >> All right Scott, you're doing a lot of different things at the show here. We said there's just a barrage of different announcements that were made. Love if you could share some of the things that might have flown under the radar. You know, Arc, everyone's talking about, but some cool things or things that you're geeking out on that you'd want to share with others? >> Two of the things that I'm most excited, one is an announcement that's specific to Ignite, and one's a community thing, the announcement is that .net Core 3.1 is coming. .net Core 3 has been a long time coming as we have began to mature, and create a cross platform open source .net runtime, but .net Core 3.1 LTS Long Term Support means that that's a version of .net core that you can put on a system for three years and be supported. Because a lot of people are saying, "All this open source is moving so fast! "I just upgraded to this, "and I don't want to upgrade to that". LTS releases are going to happen every November in the odd numbered years. So that means 2019, 2021, 2023, there's going to be a version of .net you can count on for three years, and then if you want to follow that train, the safe train, you can do that. In the even numbered years we're going to come out with a version of .net that will push the envelope, maybe introduce a new version of C#, it'll do something interesting and new, then we tighten the screws and then the following year that becomes a long term support version of .net. >> A question for you on that. One of the challenges I hear from customers is, when you talk about hybrid cloud, they're starting to get pulled apart a little bit, because in the public cloud, if I'm running Azure, I'm always on the latest version, but in my data center, often as you said, I want longer term support, I'm not ready to be able to take that CICD push all of the time, so it feels like I live, maybe call it bimodal if you want, but I'm being pulled with the am I always on the latest, getting the latest security, and it's all tested by them? Or am I on my own there? How do you help customers with that, when Microsoft's developing things, how do you live in both of those worlds or pull them together? >> Well, we're really just working on this idea of side-by-side, whether it be different versions of Visual Studio that are side-by-side, the stable one that your company is paying for, and then the preview version that you can go have side-by-side, or whether you could have .net Core 3, 3.1, or the next version, a preview version, and a safe version side-by-side. We want to enable people to experiment without fear of us messing up their machine, which is really, really important. >> One of the other things you were talking about is a cool community announcement. Can you tell us a little bit more about that? >> So this is a really cool product from a very, very small company out of Oregon, from a company called Wilderness Labs, and Wilderness Labs makes a micro-controller, not a micro-processor, not a raspberry pie, it doesn't run Linux, what it runs is .net, so we're actually playing Rock, Paper, Scissors, Lizard, Spock on this device. We've wired it all up, this is a screen from our friends at Adafruit, and I can write .net, so somehow if someone is working at, I don't know, the IT department at Little Debbie Snack Cakes, and they're making WinForms applications, they're suddenly now an IOT developer, 'cause they can go and write C# code, and control a device like this. And when you have a micro-controller, this will run for weeks on a battery, not hours. You go and 3D print a case, make this really tiny, it could become a sensor, it could become an IOT device, or one of thousands of devices that could check crops, check humidity, moisture wetness, whatever you want, and we're going to enable all kinds of things. This is just a commodity device here, this screen, it's not special. The actual device, this is the development version, size of my finger, it could be even smaller if we wanted to make it that way, and these are our friends at Wilderness Labs. and they had a successful Kickstarter, and I just wanted to give them a shout out, and I just wanted to give them a shoutout, I don't have any relationship with them, I just think they're great. >> Very cool, very cool. So you are a busy guy, and as Stu said, you're in a lot of different things within Microsoft, and yet you still have time to teach at community college. I'm interested in your perspective of why you do that? Why do you think it's so important to democratize learning about how to do this stuff? >> I am very fortunate and I think that we people, who have achieved some amount of success in our space, need to recognize that luck played a factor in that. That privilege played a factor in that. But, why can't we be the luck for somebody else, the luck can be as simple as a warm introduction. I believe very strongly in what I call the transitive value of friendship, so if we're friends, and you're friends, then the hypotenuse can be friends as well. A warm intro, a LinkedIn, a note that like, "Hey, I met this person, you should talk to them!" Non-transactional networking is really important. So I can go to a community college, and talk to a person that maybe wanted to quit, and give a speech and give them, I don't know, a week, three months, six months, more whatever, chutzpah, moxie, something that will keep them to finish their degree and then succeed, then I'm going to put good karma out into the world. >> Paying it forward. >> Exactly. >> So Scott, you mentioned that when people ask for advice, it's not about what language they do, is to, you know, is to,q you know, we talk in general about intellectual curiosity of course is good, being part of a community is a great way to participate, and Microsoft has a phenomenal one, any other tips you'd give for our listeners out there today? >> The fundamentals will never go out of style, and rather than thinking about learning how to code, why not think about learning how to think, and learning about systems thinking. One of my friends, Kishau Rogers, talked about systems thinking, I've hade her on my podcast a number of times, and we were giving a presentation at Black Girls Code, and I was talking to a fifteen-year-old young woman, and we were giving a presentation. It was clear that her mom wanted her to be there, and she's like, "Why are we here?" And I said, "All right, let's talk about programming "everybody, we're talking about programming. "My toaster is broken and the toast is not working. "What do you think is wrong?" Big, long, awkward pause and someone says, "Well is the power on?" I was like, "Well, I plugged a light in, "and nothing came on" and they were like, "Well is the fuse blown?" and then one little girl said "Well did the neighbors have power?", And I said, "You're debugging, we are debugging right?" This is the thing, you're a systems thinker, I don't know what's going on with the computer when my dad calls, I'm just figuring it out like, "Oh, I'm so happy, you work for Microsoft, "you're able to figure it out." >> Rebecca: He has his own IT guy now in you! >> Yeah, I don't know, I unplug the router, right? But that ability to think about things in the context of a larger system. I want toast, power is out in the neighborhood, drawing that line, that makes you a programmer, the language is secondary. >> Finally, the YouTube videos. Tell our viewers a little bit about those. you can go to D-O-T.net, so dot.net, the word dot, you can go to d-o-t.net, so dot.net, the word dot, slash videos and we went, and we made a 100 YouTube videos on everything from C# 101, .net, all the way up to database access, and putting things in the cloud. A very gentle, "Mr. Rodgers' Neighborhood" on-ramp. A lot of things, if you've ever seen that cartoon that says, "Want to draw an owl? "Well draw two circles, "and then draw the rest of the fricking owl." A lot of tutorials feel like that, and we don't want to do that, you know. We've got to have an on-ramp before we get on the freeway. So we've made those at dot.net/videos. >> Excellent, well that's a great plug! Thank you so much for coming on the show, Scott. >> Absolutely my pleasure! >> I'm Rebecca Knight, for Stu Miniman., stay tuned for more of theCUBE's live coverage of Microsoft Ignite. (upbeat music)

Published Date : Nov 5 2019

SUMMARY :

Covering Microsoft Ignite, brought to you by Cohesity. he is the partner program manager at Microsoft. Rebecca: And happy taco Tuesday to you! and you yourself were on the main stage this morning, and if the site isn't accessible, and the square, we were like, "Oh my God, that was amazing!" and there needs to be, it's almost, you know, and as many standards as you can. one of the cool things you were doing was talking about and then you can say, I'm going to pick Spock and Love if you could share some of the things and then if you want to follow that train, the safe train, but in my data center, often as you said, that you can go have side-by-side, One of the other things you were talking about and I just wanted to give them a shout out, and yet you still have time to teach at community college. and talk to a person that maybe wanted to quit, and we were giving a presentation at Black Girls Code, drawing that line, that makes you a programmer, and we don't want to do that, you know. Thank you so much for coming on the show, Scott. of Microsoft Ignite.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
RebeccaPERSON

0.99+

Rebecca KnightPERSON

0.99+

Scott HanselmanPERSON

0.99+

ScottPERSON

0.99+

Karen BrylaPERSON

0.99+

Satya NadellaPERSON

0.99+

Stu MinimanPERSON

0.99+

Wilderness LabsORGANIZATION

0.99+

OregonLOCATION

0.99+

MicrosoftORGANIZATION

0.99+

San DiegoLOCATION

0.99+

fiveQUANTITY

0.99+

six monthsQUANTITY

0.99+

three yearsQUANTITY

0.99+

ExcelTITLE

0.99+

Kishau RogersPERSON

0.99+

Sam KassPERSON

0.99+

2019DATE

0.99+

Visual StudioTITLE

0.99+

three monthsQUANTITY

0.99+

JavaTITLE

0.99+

Orlando, FloridaLOCATION

0.99+

TwoQUANTITY

0.99+

2023DATE

0.99+

PythonTITLE

0.99+

LinuxTITLE

0.99+

2021DATE

0.99+

five yearsQUANTITY

0.99+

100QUANTITY

0.99+

CBSORGANIZATION

0.99+

Little Debbie Snack CakesORGANIZATION

0.99+

SatyaPERSON

0.99+

three rulesQUANTITY

0.99+

PHPTITLE

0.99+

AdafruitORGANIZATION

0.99+

LinkedInORGANIZATION

0.99+

EnglishOTHER

0.99+

yesterdayDATE

0.99+

Rodgers'PERSON

0.99+

tomorrowDATE

0.99+

eight-yearQUANTITY

0.99+

ScalaTITLE

0.98+

OneQUANTITY

0.98+

StuPERSON

0.98+

RustTITLE

0.98+

C#TITLE

0.98+

TwitterORGANIZATION

0.98+

YouTubeORGANIZATION

0.98+

bothQUANTITY

0.98+

nodeTITLE

0.98+

ChipotleORGANIZATION

0.98+

HaskellTITLE

0.97+

AzureTITLE

0.97+

Tower of BabelTITLE

0.97+

oneQUANTITY

0.97+

Power BITITLE

0.97+

SatyaSachaPERSON

0.97+

Azure ArcTITLE

0.97+

SpockPERSON

0.97+

todayDATE

0.96+

.netOTHER

0.96+

C wonTITLE

0.95+

3.1TITLE

0.94+

a weekQUANTITY

0.94+

Sandeep Singh, HPE | CUBEConversation, May 2019


 

from our studios in the heart of Silicon Valley Palo Alto California this is a cute conversation welcome to the cube studios for another cube conversation where we go in-depth with thought leaders driving business outcomes with technology I'm your host Peter Burris one of the challenges enterprises face as they consider the new classes of applications that they are going to use to create new levels of business value is how to best deploy their data in ways that don't add to the overall complexity of how the business operates and to have that conversation we're here with Sandeep Singh who's the VP of storage marketing at HPE Sandeep welcome to the cube Peter thank you I'm very excited so Sandeep I started off by making the observation that we've got this mountain of data coming in a lot of enterprises at the same time there seems to be a the the notion of how data is going to create new classes of business value seems to be pretty deeply ingrained and acculturated to a lot of decision makers so they want more value out of their data but they're increasingly concerned about the volume of data that's going to hit them how in your conversations with customers are you hearing them talk about this fundamental challenge and so that that's a great question you know across the board data is at the heart of applications pretty much everything that organizations do and when they look at it in conversations with customers it really boils down to a couple of areas one is how is my data just effortlessly available all the time it's always fast because fundamentally that's driving the speed of my business and that's incredibly important and how can my various audiences including developers just consume it like the public cloud in a self-service fashion and then the second part of that conversation is really about this massive data storm or mountain of data that's coming and it's gonna be available how do how do I Drive a competitive advantage how do i unlock these hidden inside in that data to uncover new revenue streams new customer experiences those are the areas that we hear about and fundamentally underlying it the challenge for customers is boy I have a lot of complexity and how do I ensure that I have the necessary insights in a the infrastructure management so I am not beholden and more my IT staff isn't beholden to fighting the IT fires that can cause disruptions and delays to projects so fundamentally we want to be able to push time and attention in the infrastructure in the administration of those devices that handle the data and move that time and attention up into how we deliver the data services and ideally up into the applications that are going to actually generate dense new class of work within a digital business so I got that right absolutely it's about infrastructure that just runs seamlessly it's always on it's always fast people don't have to worry about what is it gonna go down is my data available or is it gonna slow down people don't want sometimes faster one always fast right and that's governing the application performance that ultimately I can deliver and you talked about well geez if it if the data infrastructure just works seamlessly then can I eventually get to the applications and building the right pipelines ultimately for mining that data drive doing the AI and the machine learning analytics driven insights from that so we've got the significant problem we now have to figure out how to architect because we want predictability and certainty and and and cost clarity and to how we're going to do this part of the challenge or part of the pushier is new use cases for AI so we're trying to push data up so that we can build these new use cases but it seems as though we have to also have to take some of those very same technologies and drive them down into the infrastructure so we get greater intelligence greater self meter and greater self management self administration within the infrastructure itself oh I got that right yes absolutely lay what becomes important for customers is when you think about data and ultimately storage that underlies the data is you can build and deploy fast and reliable storage but that's only solving half the problem greater than 50% of the issues actually end up arising from the higher layers for example you could change the firmware on the host bus adapter inside a server that can trickle down and cause a data unavailability or a performance low down issue you need to be able to predict that all the way at that higher level and then prevent that from occurring or your virtual machines might be in a state of over memory commitment at the server level or you could CPU over-commitment how do you discover those issues and prevent them from happening the other area that's becoming important is when we talk about this whole notion of cloud and hybrid cloud right that complexity tends to multiply exponentially so when the smarts you guys are going after building that hybrid cloud infrastructure fundamental challenges even as I've got a new workload and I want to place that you even on-premises because you've had lots of silos how do you even figure out where should I place a workload a and how it'll react with workloads B and C on a given system and now you multiply that across hundreds of systems multiple clouds and the challenge you can see that it's multiplying exponentially oh yeah well I would say that having you know where do I put workload a the right answer today maybe here but the right answer tomorrow maybe somewhere else and you want to make sure that the service is right required to perform workload a our resident and available without a lot of administrative work necessary to ensure that there's commonality that's kind of what we mean by this hybrid multi-cloud world isn't it absolutely and yet when you start to think about it basically you end up in requiring and fundamentally meeting the data mobility aspect of it because without the data you can't really move your workloads and you need consistency of data services so that your app if it's architected for reliability and a set of data services those just go along with the application and then you need building on top of that the portability for your actual application workload consistently managed with a hybrid management interface there so we want to use an intelligent data platform that's capable of assuring performance assuring availability and assuring security and going beyond that to then deliver a simplified automated experience right so that everything is just available through a self-service interface and then it brings along a level of intelligence that's just built into it globally so that in instead of trying to manually predict and landing in a world of reactive after IT fires have occurred is that there are sea of sensors and it's automatic the infrastructures automatically for predicting and preventing issues before they ever occur and then going beyond that how can you actually fingerprint the individual application workloads to then deliver prescriptive insights right to keep the infrastructure always optimized in that sense so discerning the patterns of data utilization so that the administrative costs of making sure the data is available where it needs to be number one number two assuring that data as assets is made available to developers as they create new applications new new things that create new work but also working very closely with the administrators so that they are not bound to as an explosion of the number of tasks adapt to perform to keep this all working across the board yes ok so we've got we've we've got a number of different approaches to how this class of solution is going to hit the marketplace look HP he's been around for 70 years yeah something along those lines you've been one of the leaders in the complex systems arena for a long time and that includes storage where are you guys taking some of these to oh geez yeah so our strategy is to deliver an intelligent data platform and that intelligent data platform begins with workload optimized composable systems that can span the mission critical workloads general purpose secondary Big Data ai workloads we also deliver cloud data services that enable you to embrace hybrid cloud all of these systems including all the way to Cloud Data Services are plumbed with data mobility so for example use cases of even modernizing protection and going all the way to protecting cost effectively in the public cloud are enabled but really all of these systems then are imbued with a level of intelligence with a global intelligence engine that begins with predicting and proactively resolving issues before they occur but it goes way beyond that in delivering these prescriptive insights that are built on top of global learning across hundreds of thousands of systems with over a billion data points coming in on a daily basis to be able to deliver at the information at the fingertips so even the virtual machine admins to say this virtual machine is sapping the performance of this node and if you were to move it to this other node the performance or the SLA for all of the virtual machine farm will be even better we build on top of that to deliver pre-built automation so that it's hooked in with a REST API for strategy so that developers can consume it in a containerized application that's orchestrated with kubernetes or they can leverage it as infrastructure eyes code whether it's with ansible puppet or chef we accelerate all of the application workloads and bring up where data protection so it's available for the traditional business applications whether they're built on SA P or Oracle or sequel or the virtual machine farms or the new stack containerized applications and then customers can build their ai and big data pipelines on top of the infrastructure with a plethora of tools whether they're using basically Kafka elastic map are h2o that complete flexibility exists and within HPE were then able to turn around and deliver all of this with an as a service experience with HPE Green Lake to customers so that's where I want to take you next so how invasive is this going to be to a large shop well it is completely seamless in that way so with Green Lake we're able to deliver a fully managed service experience with a cloud like pay-as-you-go consumption model and combining it with HPE financial services we're also able to transform their organization in terms of this journey and make it a fully self-funding journey as well so today the typical administrator of this typical shop has got a bunch of administrators that are administrating devices that's starting to change they've introduced automation that typically is associated with those devices but in we think three to five years out folks gonna be thinking more in terms of data services and how those services get consumed and that's going to be what the storage part of I t's can be thinking about it can almost become day to administrators if I got that right yes intelligence is fundamentally changing everything not only on the consumer side but on the business side of it a lot of what we've been talking about is intelligence is the game changer we actually see the dawn of the intelligence era and through this AI driven experience what it means for customers as a it enables a support experience that they just absolutely love secondly it means that the infrastructure is always on it's always fast it's always optimized in that sense and thirdly in terms of making these data services that are available and data insights that are being unlocked it's all about how can enable your innovators and the data scientists and the data analysts to shrink that time to deriving insights from months literally down to minutes today there's this chasm that exists where there's a great concept of how can i leverage the AI technology and between that concept to making it real to thinking about a where can it actually fit and then how do i implement an end-to-end solution and a technology stack so that I just have a pipeline that's available to me that chasm you literally as a matter of months and what we're able to deliver for example with HPE blue data is literally a catalog self-service experience where you can select and seamlessly build a pipeline literally in a matter of minutes and it's just all completely hosted seamlessly so making AI and machine learning essentially available for the mainstream through so the ontology data platform makes it possible to see these new classes of applications become routine without forcing the underlying storage administrators themselves to become data scientists absolutely all right well thank you for joining us for another cute conversation Sandeep Singh really appreciate your time in the cube thank you Peter and fundamentally what we're helping customers do is really to unlock data potential to transform their businesses and we look forward to continuing that conversation excellent I'm Peter Burris see you next time you [Music]

Published Date : May 15 2019

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
Sandeep SinghPERSON

0.99+

May 2019DATE

0.99+

Peter BurrisPERSON

0.99+

threeQUANTITY

0.99+

tomorrowDATE

0.99+

PeterPERSON

0.99+

Silicon ValleyLOCATION

0.99+

todayDATE

0.99+

second partQUANTITY

0.99+

HPEORGANIZATION

0.98+

KafkaTITLE

0.98+

70 yearsQUANTITY

0.98+

over a billion data pointsQUANTITY

0.98+

hundreds of systemsQUANTITY

0.97+

greater than 50%QUANTITY

0.97+

Green LakeORGANIZATION

0.97+

five yearsQUANTITY

0.96+

hundreds of thousands of systemsQUANTITY

0.96+

SandeepPERSON

0.92+

Palo Alto CaliforniaLOCATION

0.92+

HPORGANIZATION

0.9+

HPE Green LakeORGANIZATION

0.85+

oneQUANTITY

0.81+

SA PTITLE

0.8+

sea of sensorsQUANTITY

0.8+

half the problemQUANTITY

0.77+

secondlyQUANTITY

0.74+

OracleORGANIZATION

0.72+

twoQUANTITY

0.7+

lots of silosQUANTITY

0.68+

one of the leadersQUANTITY

0.62+

intelligenceORGANIZATION

0.61+

thirdlyQUANTITY

0.55+

sequelTITLE

0.54+

HPETITLE

0.54+

lotQUANTITY

0.52+

issuesQUANTITY

0.51+

RESTTITLE

0.49+

h2oTITLE

0.46+

Anne Gentle, Cisco DevNet | DevNet Create 2019


 

>> Live from Mountain View, California, it's theCUBE! Covering DevNet Create 2019, brought to you by Cisco. >> Hi, welcome to theCUBE's coverage of Cisco DevNet Create 2019, Lisa Martin with John Furrier, we've been here all day, talking about lots of very inspiring, educational, collaborative folks, and we're pleased to welcome to theCUBE Anne Gentle, developer experience manager for Cisco DevNet, Anne, thank you so much for joining us on theCUBE today. >> Thank you so much for having me. >> So this event, everything's like, rockstar start this morning with Susie, Mandy, and the team with the keynotes, standing room only, I know when I was walking out. >> I loved it, yes. >> Yes, there's a lot of bodies in here, it's pretty toasty. >> Yeah. >> The momentum that you guys have created, pun intended. >> Oh, yes. >> No, I can't take credit for that, is really, you can feel it, there's a tremendous amount of collaboration, this is your second create? >> Second create, yeah, so I've been with DevNet itself for about year and a half, and started at Cisco about three years ago this month, but I feel like developer experience is one of my first loves, when I really started to understand how to advocate for the developer experience. So DevNet just does a great job of not only advocating within Cisco, but outside of Cisco as well, so we make sure that their voice is heard, if there's some oddity with an API, which, you know, I'm really into API design, API style, we can kind of look at that first, and kind of look at it sideways and then talk to the teams, okay is there a better way to think about this from a developer standpoint. >> It's great, I love the API love there, it's going around a lot here. DevNet create a cloud native vibe that's kind of integrating and cross-pollinating into DevNet, Cisco proper. You're no stranger to cloud computing early days, and ecosystems that have formed naturally and grown, some morph, some go different directions, so you're involved in OpenStack, we know that, we've talked before about OpenStack, just some great successes as restarts, restarts with OpenStack ultimately settled in what it did, the CNCF, the Cloud Native Computing Foundation, is kind of the cloud native OpenStack model. >> Yeah, yeah. >> You've seen the communities grow, and the market's maturing. >> Definitely, definitely. >> So what's your take on this, because it creates kind of a, the creator builder side of it, we hear builder from Amazon. >> Yeah, I feel like we're able to bring together the standards, one of the interesting things about OpenStack was okay, can we do open standards, that's an interesting idea, right? And so, I think that's partially what we're able to do here, which is share, open up about our experiences, you know, I just went to a talk recently where the SendGrid former advocate is now working more on the SDK side, and he's like, yeah the travel is brutal, and so I just kind of graduated into maintaining seven SDKs. So, that's kind of wandering from where you were originally talking, but it's like, we can share with each other not only our hardships, but also our wins as well, so. >> API marketplaces is not a new concept, Apache was acquired-- >> Yes. >> By a big company, we know that name, Google. But now it's not just application programming interface marketplaces, with containers and server space, and microservices. >> Right. >> The role of APIs growing up on a whole other level is happening. >> Exactly. >> This is where you hear Cisco, and frankly I'm blown away by this, at the Cisco Live, that all the portfolio (mumbles) has APIs. >> True, yes, exactly. >> This is just a whole changeover, so, APIs, I just feel a whole other 2.0 or 3.0 level is coming. >> Absolutely. >> What's your take on this, because-- >> So, yeah, in OpenStack we documented like, two APIs to start, and then suddenly we had 15 APIs to document, right, so, learn quick, get in there and do the work, and I think that that's what's magical about APIs, is, we're learning from our designs in the beginning, we're bringing our users along with us, and then, okay, what's next? So, James Higginbotham, I saw one of his talks today, he's really big in the API education community, and really looking towards what's next, so he's talking about different architectures, and event-driven things that are going to happen, and so even talking about, well what's after APIs, and I think that's where we're going to start to be enabled, even as end users, so, sure, I can consume APIs, I'm pretty good at that now, but what are companies building on top of it, right? So like GitHub is going even further where you can have GitHub actions, and this is what James is talking about, where it's like, well the API enabled it, but then there's these event-driven things that go past that. So I think that's what we're starting to get into, is like, APIs blew up, right? And we're beyond just the create read. >> So, user experience, developer experience, back to what you do, and what Mandy was talking about. You can always make it easier, right? And so, as tools change, there's more tools, there's more workloads, there's more tools, there's more this, more APIs, so there's more of everything coming. >> Yeah. >> It's a tsunami to the developer, what are some of the trends that you see to either abstract away complexities, and, or, standardize or reduce the toolchains? >> Love where you're going with this, so, the thing is, I really feel like in the last, even, since 2010 or so, people are starting to understand that REST APIs are really just HTTP protocol, we can all understand it, there's very simple verbs to memorize. So I'm actually starting to see that the documentation is a huge part of this, like a huge part of the developer experience, because if, for one, there are APIs that are designed enough that you can memorize the entire API, that blows me away when people have memorized an API, but at the same time, if you look at it from like, they come to your documentation every day, they're reading the exact information they can give, they're looking at your examples, of course they're going to start to just have it at their fingertips with muscle memory, so I think that's, you know, we're starting to see more with OpenAPI, which is originally called Swagger, so now the tools are Swagger, and OpenAPI is the specification, and there's just, we can get more done with our documentation if we're able to use tools like that, that start to become industry-wide, with really good tools around them, and so one of the things that I'm really excited about, what we do at DevNet, is that we can, so, we have a documentation tool system, that lets us not only publish the reference information from the OpenAPI, like very boring, JSON, blah blah blah, machines can read it, but then you can publish it in these beautiful ways that are easy to read, easy to follow, and we can also give people tutorials, code examples, like everything's integrated into the docs and the site, and we do it all from GitHub, so I don't know if you guys know that's how we do our site from the back side, it's about 1000 or 2000 GitHub repos, is how we build that documentation. >> Everything's going to GitHub, the network configurations are going to GitHub, it's programmable, it's got to be in GitHub. >> Yes, it's true, and everything's Git-based right? >> So, back to the API question, because I think I'm connecting some dots from some of the conversations we had, we heard from some of the community members, there's a lot of integration touchpoints. Oh, a call center app on their collaboration talks to another database, which talks to another database, so these disparate systems can be connected through APIs, which has been around for a while, whether it's an old school SOAP interface, to, you know, HTTP and REST APIs, to full form, cooler stuff now. But it's also more of a business model opportunity, because the point is, if your API is your connection point-- >> Yes. >> There's potential business deals that could go on, but if you don't have good documentation, it's like not having a good business model. >> Right, and the best documentation really understands a user's task, and so that's why API design is so important, because if you need to make sure that your API looks like someone's daily work, get the wording right, get the actual task right, make sure that whatever workflow you've built into your API can be shown through in any tutorial I can write, right? So yeah, it's really important. >> What's the best practice, where should I go? I want to learn about APIs, so then I'm going to have a couple beers, hockey's over, it's coming back, Sharks are going to the next round, Bruins are going to the next round, I want to dig into APIs tonight. Where do I go, what are some best practices, what should I do? >> Yeah, alright, so we have DevNet learning labs, and I'm telling you because I see the web stats, like, the most popular ones are GitHub, REST API and Python, so you're in good company. Lots of people sitting on their couches, and a lot of them are like 20 minutes at a time, and if you want to do like an entire set that we've kind of curated for you all together, you should go to developer.cisco.com/startnow, and that's basically everything from your one-on-ones, all the way up to, like, really deep dive into products, what they're meant to do, the best use cases. >> Okay, I got to ask you, and I'll put you on the spot, pick your favorite child. Gold standard, what's the best APIs that you like, do you think are the cleanest, tightest? >> Oh, best APIs I like, >> Best documented? >> So in the technical writing world, the gold standard that everyone talks about is the Stripe documentation, so that's in financial tech, and it's very clean, we actually can do something like it with a three column layout-- >> Stripe (mumbles) payment gateway-- >> Stripe is, yeah, the API, and so apparently, from a documentation standpoint, they're just, people just go gaga for their docs, and really try to emulate them, so yeah. And as far as an API I use, so I have a son with type one diabetes, I don't know if I've shared this before, but he has a continuous glucose monitor that's on his arm, and the neat thing is, we can use a REST API to get the data every five minutes on how his blood sugar is doing. So when you're monitoring this, to me that's my favorite right now, because I have it on my watch, I have it on my phone, I know he's safe at school, I know he's safe if he goes anywhere. So it's like, there's so many use cases of APIs, you know? >> He's got the policy-based program, yeah. >> He does, yes, yes. >> Based upon where's he's at, okay, drink some orange juice now, or, you know-- >> Yes, get some juice. >> Get some juice, so, really convenient real-time. >> Yes, definitely, and he, you know, he can see it at school too, and just kind of, not let his friends know too much, but kind of keep an eye on it, you know? >> Automation. >> Yeah, exactly, exactly. >> Sounds like great cloud native, cool. You have a Meraki hub in your house? >> I don't have one at home. >> Okay. >> Yeah, I need to set one up, so yeah, we're terrible net nannies and we monitor everything, so I think I need Meraki at home. (laughing) >> It's a status symbol now-- >> It is now! >> We're hearing in the community. Here in the community of DevNet, you got to have a Meraki hub in your, switch in your house. >> It's true, it's true. >> So if you look back at last year's Create versus, I know we're just through almost day one, what are some of the things that really excite you about where this community of now, what did they say this morning, 585,000 strong? Where this is going, the potential that's just waiting to be unlocked? >> So I'm super excited for our Creator awards, we actually just started that last year, and so it's really neat to see, someone who won a Creator award last year, then give a talk about the kind of things he did in the coming year. And so I think that's what's most exciting about looking a year ahead for the next Create, is like, not only what do the people on stage do, but what do the people sitting next to me in the talks do? Where are they being inspired? What kind of things are they going to invent based on seeing Susie's talk about Wi-Fi 6? I was like, someone invent the thing so that when I go to a hotel, and my kids' devices take all the Wi-Fi first, and then I don't have any, someone do that, you know what I mean, yeah? >> Parental rights. >> So like, because you're on vacation and like, everybody has two devices, well, with a family of four-- [John] - They're streaming Netflix, Amazon Prime-- >> Yeah, yeah! >> Hey, where's my video? >> Like, somebody fix this, right? >> Maybe we'll hear that next year. >> That's what I'm saying, someone invent it, please. >> And thank you so much for joining John and me on theCUBE this afternoon, and bringing your wisdom and your energy and enthusiasm, we appreciate your time. >> Thank you. >> Thank you. >> For John Furrier, I am Lisa Martin, you're watching theCUBE live from Cisco DevNet Create 2019. Thanks for watching. (upbeat music)

Published Date : Apr 25 2019

SUMMARY :

Covering DevNet Create 2019, brought to you by Cisco. Anne, thank you so much for joining us on theCUBE today. and the team with the keynotes, Yes, there's a lot of bodies in here, The momentum that you guys have created, and kind of look at it sideways and then talk to the teams, is kind of the cloud native OpenStack model. and the market's maturing. the creator builder side of it, but it's like, we can share with each other By a big company, we know that name, Google. APIs growing up on a whole other level is happening. This is where you hear Cisco, This is just a whole changeover, and event-driven things that are going to happen, back to what you do, and what Mandy was talking about. and so one of the things that I'm really excited about, the network configurations are going to GitHub, from some of the conversations we had, but if you don't have good documentation, Right, and the best documentation so then I'm going to have a couple beers, and if you want to do like an entire set Gold standard, what's the best APIs that you like, of APIs, you know? He's got the policy-based so, really convenient real-time. You have a Meraki hub in your house? Yeah, I need to set one up, so yeah, We're hearing in the community. and so it's really neat to see, And thank you so much for joining John and me you're watching theCUBE live from Cisco DevNet Create 2019.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Lisa MartinPERSON

0.99+

JamesPERSON

0.99+

Anne GentlePERSON

0.99+

James HigginbothamPERSON

0.99+

20 minutesQUANTITY

0.99+

John FurrierPERSON

0.99+

CiscoORGANIZATION

0.99+

AnnePERSON

0.99+

JohnPERSON

0.99+

Cloud Native Computing FoundationORGANIZATION

0.99+

SusiePERSON

0.99+

two devicesQUANTITY

0.99+

AmazonORGANIZATION

0.99+

last yearDATE

0.99+

CNCFORGANIZATION

0.99+

MandyPERSON

0.99+

OpenAPITITLE

0.99+

secondQUANTITY

0.99+

15 APIsQUANTITY

0.99+

GoogleORGANIZATION

0.99+

2010DATE

0.99+

next yearDATE

0.99+

Mountain View, CaliforniaLOCATION

0.99+

SecondQUANTITY

0.99+

GitTITLE

0.99+

SharksORGANIZATION

0.99+

DevNetORGANIZATION

0.98+

developer.cisco.com/startnowOTHER

0.98+

SwaggerTITLE

0.98+

PythonTITLE

0.98+

three columnQUANTITY

0.98+

oneQUANTITY

0.98+

tonightDATE

0.97+

GitHubORGANIZATION

0.97+

todayDATE

0.96+

585,000QUANTITY

0.96+

NetflixORGANIZATION

0.96+

OpenStackTITLE

0.95+

first lovesQUANTITY

0.95+

two APIsQUANTITY

0.94+

SendGridORGANIZATION

0.94+

theCUBEORGANIZATION

0.93+

REST APITITLE

0.93+

this morningDATE

0.93+

this afternoonDATE

0.93+

firstQUANTITY

0.92+

BruinsPERSON

0.91+

three years agoDATE

0.9+

coming yearDATE

0.89+

RESTTITLE

0.88+

Cisco DevNetORGANIZATION

0.87+

2019DATE

0.86+

JSONTITLE

0.86+

seven SDKsQUANTITY

0.85+

family ofQUANTITY

0.84+

GitHubTITLE

0.81+

a yearQUANTITY

0.79+

PrimeCOMMERCIAL_ITEM

0.79+

five minutesQUANTITY

0.78+

MerakiORGANIZATION

0.75+

DevNet CreateTITLE

0.75+

DevNet Create 2019TITLE

0.73+

about 1000QUANTITY

0.73+

Dan Burkland, Five9 | Enterprise Connect 2019


 

(funky music) >> [Narrator voiceover] Live from Orlando, Florida, it's theCube, covering Enterprise Connect 2019. Brought to you by Five9. >> Hello from Orlando. I'm Lisa Martin on theCube with Stu Miniman, and we are in Five9's booth at Enterprise Connect 2019. Can you hear all of the attendees behind me? There's about 6,500 people here. In the expo hall, there's 140 exhibitors. I mentioned we are in Five9's booth, and we're pleased to welcome to theCube the president of Five9, Dan Burkland. Dan, welcome to theCube. >> Thank you, Lisa. Thank you, Stu. It's great to be here. What an event. This is amazingly well attended, and, uh, can't wait. Let's get to it. >> It is, so let's do a little bit of by the numbers. Four years in a row that Five9 has been a leader in the Gartner Magic Quadrant for Contact Center as a Service. You have, I think we were talking with some of your guys yesterday, five billion recorded customer conversations. Oh the data and the opportunities in that. Couple thousand customers worldwide, and a big, strong finish to FY18. Lot of momentum. >> Right, and 2018 couldn't have been better. We had wonderful growth, capped off with a Q4 that showed 31% revenue growth year over year. We continue to increase our profitability as well, EBITDA being 23%. So for us it was a phenomenal year. The combination of revenue growth plus the EBITDA, put us over 50. Oftentimes companies are evaluated for the rule of 40. And we shattered that and came in over 50, so we're very excited. It's helping fuel the growth for us, which we believe is still ahead of us, for the most part. >> Dan, congratulations on the progress. We've been watching, there's some big-name hires that have happened in the company. Why I was excited to talk to you, is not only have you been with the company coming up on 10 years now, but you've got sales underneath what you're doing, and when you watch a company that is exceeding the industry growth rate by, like, 2 to 3X, you know hiring and culture is so important. Bring us inside a little bit. What's happening at Five9? How do you maintain the momentum? What is that, you know, Five9 employee that you look for? >> So, Stu, and you just touched on it. We've been an execution machine for many years, growing the top line of the business, while keeping an eye on bottom line profitability, and we believe doing that as well as anybody in our industry. And what's very powerful now is those new hires that you mentioned. Wouldn't have guess in my wildest dreams that we could attract somebody with Rowan's, uh, pedigree and really his reputation of being able to take companies, transform them, and take them to new heights. You know, with his reputation he was able to attract Jonathan Rosenberg, co-author of the SIP protocol, and really thought leader in the collaboration space for us. So, we see taking what we've done over the past several years and growing the business into what's now, you know a $250 million plus company and applying their thought leadership and expertise to an already highly executing machine. Really the sky's the limit for us, and I think what it now brings us is the ability to expand the product and really take the products into whole new areas, like artificial intelligence and being able to leverage those technologies and really change the way customers provide support and really evolve the customer experience as a whole. >> Yeah, it's been interesting. I've watched the cloud space in general for a number of years, and, you know the numbers sometimes bely what we're used to. It's like, oh well okay, you know, somewhere 40 to 90% growth for some of the public cloud providers. Oh well, when's that going to slow down? And sometimes it actually still accelerates, you know. And when I look at the cloud contact centers, you know, once again, you know, you're growing at a pretty good clip, but you've still got lots of head room there. So talk a little bit about that dynamic, you know, we're past the evangelization phase, and now, you know, generally Cloud it's here it's still growing massively. >> Well said, Stu, and not only is it still here, it's just the beginning. If you look at the Cloud penetration rate to date we now see that about 10 to 15% of companies have moved and transitioned to the Cloud for their contact center needs. We see that continuing for a decade or more as we move forward, and some of the drivers for that, if you look at it, are... Uh, you mentioned at the outset, Lisa the fact that we're recording five million minutes of conversations, and that is valuable, valuable data that we're sitting on. One of the true, uh, advents is when you look at AI and you look at artificial intelligence, lots of folks around this show are talking about how AI is going to revolutionize and change the way contact centers operate. And we know it's going to do that. Lots of us are experimenting and building out proofs of concepts in the areas like agent assistance. For the first time ever, we can get strong transcription tools that allow us to take speech and convert it into text at a very high rate. And be able to then apply natural language understanding tools to that same text to be able to derive what a customer's asking for in real time, and, therefore, it takes the responsibility away from the agent and not burden the agent with having to go hunt and search for the solutions, but actually let the system go hunt and search for that, while the agent can pay attention and focus on the client or the end user customer. And then have the system be able to give responses, so if the system's giving responses to, uh, the agent, the agent then has the ability to chose which response is accurate, and the system will learn over time and become smarter. The machine learning portion of that is it will get better and better at suggesting responses to the agent. It allows us to take a very junior or unseasoned agent and make them a very experienced agent very, very quickly. So, in the past, we've had to rely on scripts that were very form-driven, with a few variables being filled in. Now we can be very dynamic, with AI providing those responses. >> Let's talk about the impacts of the consumer. We are consumers, right? We're so empowered. We can make any decision, and we have these expectations on any business that we're dealing with, you're going to be able to... If it's an agent we're dealing with because we have a problem, you're going to be able to identify my problem right away, um, in whatever channel it is that I want to communicate with you, and I have this expectation that... That whatever... I want it to be as easy as, uh, you know, downloading something on Netflix. So, in terms of the consumer influence, what are some of the ways in which Five9 can help those agents really become empowered decision makers and help the businesses be able to have the content that they can, in real time, distribute through the appropriate channel? >> Right, well, that's a great question because there's nothing more important... I should say nothing more frustrating for a consumer who contacts a business and has to go through an IVR, which we've all done. It's not a pleasant experience. We're oftentimes trying to input information first so the company can identify who we are, second to derive intent. Why are we contacting them? And third, now that the company knows why we are contacting them, how they can find and locate and route us to the most appropriate resource to handle that transaction. And with AI coming, it can allow us to very quickly identify the caller, ask them why they're calling, and, through that natural language understanding, then we can derive why they're calling and not only distribute the call to the right agent, but as I mentioned earlier, be able to actually provide the pertinent data to help them interact with the consumer. So as consumer expectations continue to rise, and they expect to not have to give information more than once, regardless of the channel, it puts the onus on us as technology solution providers to build the solutions that will accomplish just that. >> Yeah, Dan, I wonder if you can help us peel the onion a little bit when you talk about the opportunity for growth. We know where Cloud adoption is still a little bit more heavy here in North America, so bring us, bring us global, you know, Five9 is a global company, but what's the international opportunity there? >> So, great, we've had small European teams and Latin America teams for several years now, in fact we set up our data centers in Europe with a fully geographic-redundant solution. And it's been hardened. We have over a hundred customers now on our European data centers, and so we're now in the mode of scale and execution. It's very critical in this space to establish presence, get reference ability within a region, and then scale it. That's the same we did here in the US, and we're doing the same thing now in Europe and Latin America. So that's a big area for our expansion needs for 2019. We're also seeing an area along, erhm... with the large systems intergrators, the global companies like Deloitte and also with PWC, EY, Accenture, IBM. Being able to really leverage what they have is a lot of account control, and they're look at as trusted advisors to come in and help companies go through that digital transformation, which includes so much more than just contact center, but we're a critical element. And so what we've done is we've worked very closely with them to make sure that we can participate in that transformation, and they've been wonderful about introducing us into domestic customers but many global customers in nature. >> I love that you brought up the digital transformation. When we talk to companies, data is so important to what they're doing, is at the center of that digital transformation. Data plays a pretty important role in the contact centers. We talked to Jonathan yesterday about some of the future of AI. That's there. When you talk about your field and your engagement with your customers, you know, where does contact center fit into some of those big themes and big transformations that they're doing? >> So, um, uh, data's critical because it's, A, how it was mentioned yesterday by Jonathan, it's really what's going to allow us to teach the machine learning to be so much more accurate, you've got to have millions of conversations in order to do that effectively. The other is for our customers and potential customers. Well, the time is now to move to the Cloud, so that when the application that we're doing PoCs with and we're testing the different use cases, when those become relevant and prevalent within production environments, the first step is you need to be in the Cloud already, and the next step is start recording your data today. So if you're recording calls and you're flushing those calls and you're not saving them, the key is that's valuable data and can create valuable insights to what conversations are happening between your agents and your customers. Today many of our customers record all their calls, and they put them in a vault, and about 1 to 2% get listened to again. And they're getting listened to, not for the content, but for training purposes. And we would argue that there's so much valuable content in there, that if it can be taken, transcribed, organized, filtered, and then brought back to the business, there's many insights that can be driven because of that. One example is if you take data from a conversation and you're able to transcribe it, the system can have the intelligence to go in and mine for topics. What are the key elements that this conversation just had? And take that information and disposition a call. So today many organizations have their agents do wrap up on their keyboard and type in notes about what occurred on a call. Sometimes those notes are accurate, sometimes they're not. Same with dispositioning. I may have a pull-down menu and disposition a call. It may be accurate, oftentimes it's not. Well, with the technology that available to us today, we can auto-disposition, if you will, or let the system disposition the call, let the system put in the relevant notes, and let the agent move on to their next call. The key there is, tomorrow, when I call back, if Lisa answers my call, Lisa can say, oh great. I see you talked to Stu yesterday for 20 minutes, and you talked about X, Y, and Z. Is that why you're calling? A much more personalized conversation. >> So since customers are going to these great lengths to record and store these conversations, and as you're saying, there's so much value there, way besides training, what are some of the barriers that you're finding that they need help getting over to actually start mining that data to dramatically improve their competitive advantage, reduce churn, increase CLV? How do you help them get over that, all right we have to invest here. >> Yeah, great. Some of the barriers historically have been just the sheer accuracy. There's been speech to text technology and algorithms available on the market for many years. They just haven't had the accuracy rate of that of a human. So we've always relied on the humans to extract the detail and be able to transcribe. The trouble is going from a active conversation to transcribing what I really just heard... a lot gets lost in that translation as well. And so with human levels of accuracy of the new technology coming from companies like Google and others, to be able to transcribe that information, then we need to then apply the NLU to that same text. And so those historically have been the challenges. Now that we're there with the technology, it becomes for companies like us to be able to apply that technology in a way that's fitting for the customer. >> Dan, I expect you're meeting with a lot of customers at a show like this, and just in your job you talk to a lot of customers. When we've been talking to many of your partners here at the show, we hear about, uh, you know, if only we could get greater adoption, we need to help people train up, we need to help them get over the barrier of learning something new. What, is that one of the main challenges you hear from your users or are there anything else that's kind of rising up as the, kind of, the biggest challenges that your customers face today? From a distribution standpoint, and channel perspective, we have lots of channels that are embracing Five9 and bringing for the first time in many cases a Cloud solution to their customers. Historically that's been met with mixed results because they felt like, hey, there's a large CAPX solution that I can sell, or I can go on a subscription basis, and I don't see that revenue for many years to come. And so we're finding that that channel is really opening up nicely for us. And that goes from everything from a regional VAR that may have been providing and selling the premises-based solutions to the large global CDWs of the world that have global reach and global scale. And then, of course, like I mentioned earlier, the SIs like Deloitte and Accenture and those that have strategic value that they bring to their clients. So we're seeing it hit on many different cylinders to help fuel our expansion, and that's what's really, what we're looking forward to go from, you know, the, the, the 250 million to 500 million to a billion. Those are the types of channels that will help us scale much more effectively. >> Yeah, the last thing I wanted to, to ask about is your Cloud-based solution, but my understanding is you still treat this, it's white glove, every customer, you're engaging there. It's not some hands-off relationship there. >> Right. Could you talk a little bit about that? That differentiation for Five9? >> Yes, that's an excellent point, and it's something that I brought over from my previous experience of selling into larger enterprises. We said, you know there's this misnomer about, oh if it's in the Cloud, it's off the shelf, and it's not as customizable. That's... Nothing could be further from the truth, for one. We have to... All of our enterprise customers have a different deployment and a different effective implementation of our solution. Uh, and having built out 300 REST-based APIs, we give tremendous flexibility and allow our customers to get very creative on how they customize it, whether that be for integrating to back-office data systems, building their own UIs or dashboards that are in the look and feel that the want, there's a variety of different ways that they can leverage and customize the product. So it's important for us to have a solution that has that flexibility, and um, it allows customers to move to the Cloud and yet still have that much greater flexibility than they had with premises systems. Um, and that's come about over the last several years. >> Okay, Dan, last question. Let's bring it home. You're on the road all the time. You talk with a ton of customers. Give us one by name, if you can, or industry that really epitomizes the breadth and the strength that Five9 and your channel partners can deliver. >> Yeah, so, getting back, and I'll touch on the channel partners first. The channel partners element of them being able to recognize, the first step we want for them is to leverage and empower them to recognize a Cloud offering or an application that might be a fit for Five9. And then we're now in the process of enabling them not only on the pre-sale side but on the post-sale side to be able to leverage their expertise from a services perspective. So when you look at customers that really want to take that and go to the next level, a lot of them have been brought to us by I'll us Deloitte as an example. When Deloitte first brought us in to Lili, they were able to bring us in and help Lili transform their global centers over to Five9. And they're taking out and migrating their premises-based solutions over the last three years, over to Five9. And it has to do with the program management and the strategic nature in which Deloitte helps them time those projects and the technology that Five9 brings to the table to allow us to eventually take on their entire enterprise, move it over. >> Wow. Dan, thank you so much for sharing all of this great excitement about what you're doing at Five9 and how you're really helping to move the business forward. Stu and I appreciate your time. >> Excellent. Thank you, Lisa. Thank you, Stu. Great to be here. >> For Stu Miniman, I'm Lisa Martin. You're watching theCube. (upbeat music)

Published Date : Mar 20 2019

SUMMARY :

Brought to you by Five9. In the expo hall, there's 140 exhibitors. It's great to be here. It is, so let's do a little bit of by the numbers. The combination of revenue growth plus the EBITDA, the industry growth rate by, like, 2 to 3X, and really take the products into whole new areas, So talk a little bit about that dynamic, you know, One of the true, uh, advents is when you look at AI the businesses be able to have the content that they can, and not only distribute the call to the right agent, a little bit when you talk about the opportunity for growth. That's the same we did here in the US, I love that you brought up the digital transformation. Well, the time is now to move to the Cloud, So since customers are going to these great lengths and be able to transcribe. What, is that one of the main challenges you hear Yeah, the last thing I wanted to, to ask about Could you talk a little bit about that? that are in the look and feel that the want, that really epitomizes the breadth and the strength and the technology that Five9 brings to the table and how you're really helping to move the business forward. Great to be here. For Stu Miniman, I'm Lisa Martin.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
PWCORGANIZATION

0.99+

DeloitteORGANIZATION

0.99+

Jonathan RosenbergPERSON

0.99+

IBMORGANIZATION

0.99+

EYORGANIZATION

0.99+

EuropeLOCATION

0.99+

Lisa MartinPERSON

0.99+

JonathanPERSON

0.99+

Dan BurklandPERSON

0.99+

DanPERSON

0.99+

AccentureORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

LisaPERSON

0.99+

$250 millionQUANTITY

0.99+

20 minutesQUANTITY

0.99+

USLOCATION

0.99+

StuPERSON

0.99+

Five9ORGANIZATION

0.99+

todayDATE

0.99+

yesterdayDATE

0.99+

2019DATE

0.99+

23%QUANTITY

0.99+

five million minutesQUANTITY

0.99+

tomorrowDATE

0.99+

North AmericaLOCATION

0.99+

31%QUANTITY

0.99+

AmericaLOCATION

0.99+

250 millionQUANTITY

0.99+

10 yearsQUANTITY

0.99+

140 exhibitorsQUANTITY

0.99+

2018DATE

0.99+

Orlando, FloridaLOCATION

0.99+

LatLOCATION

0.99+

500 millionQUANTITY

0.99+

Four yearsQUANTITY

0.99+

OrlandoLOCATION

0.99+

first stepQUANTITY

0.99+

five billionQUANTITY

0.99+

first timeQUANTITY

0.99+

One exampleQUANTITY

0.99+

NLUORGANIZATION

0.99+

Couple thousand customersQUANTITY

0.98+

NetflixORGANIZATION

0.98+

thirdQUANTITY

0.98+

40QUANTITY

0.98+

TodayDATE

0.98+

3XQUANTITY

0.98+

Enterprise Connect 2019EVENT

0.98+

300 RESTQUANTITY

0.98+

90%QUANTITY

0.98+

LisPERSON

0.97+

firstQUANTITY

0.96+

Five9TITLE

0.96+

over a hundred customersQUANTITY

0.96+

LiliORGANIZATION

0.96+

over 50QUANTITY

0.96+

EuropeanOTHER

0.95+

about 6,500 peopleQUANTITY

0.95+

15%QUANTITY

0.94+

Jim Long, Sarbjeet Johal, and Joseph Jacks | CUBEConversation, February 2019


 

(lively classical music) >> Hello everyone, welcome to this special Cube conversation, we are here at the Power Panel Conversation. I'm John Furrier, in Palo Alto, California, theCUBE studies we have remote on the line here, talk about the cloud technology's impact on entrepreneurship and startups and overall ecosystem is Jim Long, who's the CEO of Didja, which is a startup around disrupting digital TV, also has been an investor and a serial entrepreneur, Sarbjeet Johal, who's the in-cloud influencer of strategy and investor out of Berkeley, California, The Batchery, and also Joseph Jacks, CUBE alumni, actually you guys are all CUBE alumni, so great to have you on. Joseph Jacks is the founder and general partner of OSS Capital, Open Source Software Capital, a new fund that's been raised specifically to commercialize and fund startups around open source software. Guys, we got a great panel here of experts, thanks for joining us, appreciate it. >> Go Bears! >> Nice to be here. >> So we have a distinguished panel, it's the Power Panel, we're on cloud technos, first I'd like to get you guys' reaction you know, you're to seeing a lot of negative news around what Facebook has become, essentially their own hyper-scale cloud with their application. They were called the digital, you know, renegades, or digital gangsters in the UK by the Parliament, which was built on open source software. Amazon's continuing to win, Azure's doing their thing, bundling Office 365, making it look like they've got more revenue with their catching up, Google, and then you got IBM and Oracle, and then you got an ecosystem that's impacted by this large scale, so I want to get your thoughts on first point here. Is there room for more clouds? There's a big buzzword around multiple clouds. Are we going to see specialty clouds? 'Causes Salesforce is a cloud, so is there room for more cloud? Jim, why don't you start? >> Well, I sure hope so. You know, the internet has unfortunately become sort of the internet of monopolies, and that doesn't do anyone any good. In fact, you bring up an interesting point, it'd be kind of interesting to see if Facebook created a social cloud for certain types of applications to use. I've no idea whether that makes any sense, but Amazon's clearly been the big gorilla now, and done an amazing job, we love using them, but we also love seeing, trying out different services that they have and then figuring out whether we want to develop them ourselves or use a specialty service, and I think that's going to be interesting, particularly in the AI area, stuff like that. So I sure hope more clouds are around for all of us to take advantage of. >> Joseph, I want you to weigh in here, 'cause you were close to the Kubernetes trend, in fact we were at a OpenStack event when you started Kismatic, which is the movement that became KubeCon Cloud Native, many many years ago, now you're investing in open source. The world's built on open source, there's got to be room for more clouds. Your thoughts on the opportunities? >> Yeah, thanks for having me on, John. I think we need a new kind of open collaborative cloud, and to date, we haven't really seen any of the existing major sort of large critical mass cloud providers participate in that type of model. Arguably, Google has probably participated and contributed the most in the open source ecosystem, contributing TensorFlow and Kubernetes and Go, lots of different open source projects, but they're ultimately focused on gravitating huge amounts of compute and storage cycles to their cloud platform. So I think one of the big missing links in the industry is, as we continue to see the rise of these large vertically integrated proprietary control planes for computing and storage and applications and services, I think as the open source community and the open source ecosystem continues to grow and explode, we'll need a third sort of provider, one that isn't based on monopoly or based on a traditional proprietary software business like Microsoft kind of transitioning their enterprise customers to services, sort of Amazon in the first camp vertically integrated many a buffet of all these different compute, storage, networking services, application, middleware. Microsoft focused on sort of building managed services of their software portfolio. I think we need a third model where we have sort of an open set of interfaces and an open standards based cloud provider that might be a pure software company, it might be a company that builds on the rails and the infrastructure that Amazon has laid down, spending tens of billions in cap ex, or it could be something based on a project like Kubernetes or built from the community ecosystem. So I think we need something like that just to sort of provide, speed the innovation, and disaggregate the services away from a monolithic kind of closed vendor like Amazon or Azure. >> I want to come back to that whole startup opportunity, but I want to get Sarbjeet in here, because we've been in the B2B area with just last week at IBM Think 2019. Obviously they're trying to get back into the cloud game, but this digital transformation that has been the cliche for almost a couple of years now, if not five or plus. Business has got to move to the cloud, so there's a whole new ball game of complete cultural shift. They need stability. So I want to talk more about this open cloud, which I love that conversation, but give me the blocking and tackling capabilities first, 'cause I got to get out of that old cap ex model, move to an operating model, transform my business, whether it's multi clouds. So Sarbjeet, what's your take on the cloud market for say, the enterprise? >> Yeah, I think for the enterprise... you're just sitting in that data center and moving those to cloud, it's a cumbersome task. For that to work, they actually don't need all the bells and whistles which Amazon has in the periphery, if you will. They need just core things like compute, network, and storage, and some other sort of services, maybe database, maybe data share and stuff like that, but they just want to move those applications as is to start with, with some replatforming and with some changes. Like, they won't make changes to first when they start moving those applications, but our minds are polluted by this thinking. When we see a Facebook being formed by a couple of people, or a company of six people sold for a billion dollars, it just messes up with our mind on the enterprise side, hey we can do that too, we can move that fast and so forth, but it's sort of tragic that we think that way. Well, having said that, and I think we have talked about this in the past. If you are doing anything in the way of systems innovation, if your building those at, even at the enterprise, I think cloud is the way to go. To your original question, if there's room for newer cloud players, I think there is, provided that we can detach the platforms from the environments they are sitting on. So the proprietariness has to kinda, it has to be lowered, the degree of proprietariness has to be lower. It can be through open source I think mainly, it can be from open technologies, they don't have to be open source, but portable. >> JJ was mentioning that, I think that's a big point. Jim Long, you're an entrepreneur, you've been a VC, you know all the VCs, been around for a while, you're also, you're an entrepreneur, you're a serial entrepreneur, starting out at Cal Berkeley back in the day. You know, small ideas can move fast, and you're building on Amazon, and you've got a media kind of thing going on, there's a cloud opportunity for you, 'cause you are cloud native, 'cause you're built in the cloud. How do you see it playing out? 'Cause you're scaling with Amazon. >> Well, so we obviously, as a new startup, don't have the issues the enterprise folks have, and I could really see the enterprise customers, what we used to call the Fortune 500, for example, getting together and insisting on at least a base set of APIs that Amazon and Microsoft et cetera adopt, and for a startup, it's really about moving fast with your own solution that solves a problem. So you don't necessarily care too much that you're tied into Amazon completely because you know that if you need to, you can make a change some day. But they do such a good job for us, and their costs, while they can certainly be lower, and we certainly would like more volume discounts, they're pretty darn amazing across the network, across the internet, we do try to price out other folks just for the heck of it, been doing that recently with CDNs, for example. But for us, we're actually creating a hybrid cloud, if you will, a purpose-built cloud to support local television stations, and we do think that's going to be, along with using Amazon, a unique cloud with our own APIs that we will hopefully have lots of different TV apps use our hybrid cloud for part of their application to service local TV. So it's kind of a interesting play for us, the B2B part of it, we're hoping to be pretty successful as well, and we hope to maybe have multiple cloud vendors in our mix, you know. Not that our users will know who's behind us, maybe Amazon, for something, Limelight for another, or whatever, for example. >> Well you got to be concerned about lock-in as you become in the cloud, that's something that everybody's worried about. JJ, I want to get back to you on the investment thesis, because you have a cutting edge business model around investing in open source software, and there's two schools of thought in the open source community, you know, free contribution's great, and let tha.t be organic, and then there's now commercialization. There's real value being created in open source. You had put together a chart with your team about the billions of dollars in exits from open source companies. So what are you investing in, what do you see as opportunities for entrepreneurs like Jim and others that are out there looking at scaling their business? How do you look at success, what's your advice, what do you see as leading indicators? >> I think I'll broadly answer your question with a model that we've been thinking a lot about. We're going to start writing publicly about it and probably eventually maybe publish a book or two on it, and it's around the sort of fundamental perspective of creating value and capturing value. So if you model a famous investor and entrepreneur in Silicon Valley who has commonly modeled these things using two different letter variables, X and Y, but I'll give you the sort of perspective of modeling value creation and value capture around open source, as compared to closed source or proprietary software. So if you look at value creation modeled as X, and value capture modeled as Y, where X and Y are two independent variables with a fully proprietary software company based approach, whether you're building a cloud service or a proprietary software product or whatever, just a software company, your value creation exponent is typically bounded by two things. Capital and fundraising into the entity creating the software, and the centralization of research and development, meaning engineering output for producing the software. And so those two things are tightly coupled to and bounded to the company. With commercial open source software, the exact opposite is true. So value creation is decoupled and independent from funding, and value creation is also decentralized in terms of the research and development aspect. So you have a sort of decentralized, community-based, crowd-sourced, or sort of internet, global phenomena of contributing to a code base that isn't necessarily owned or fully controlled by a single entity, and those two properties are sort of decoupled from funding and decentralized R and D, are fundamentally changing the value creation kind of exponent. Now let's look at the value capture variable. With proprietary software company, or proprietary technology company, you're primarily looking at two constituents capturing value, people who pay for accessing the service or the software, and people who create the software. And so those two constituents capture all the value, they capture, you know, the vendor selling the software captures maybe 10 or 20% of the value, and the rest of the value, I would would express it say as the customer is capturing the rest of the value. Most economists don't express value capture as capturable by an end user or a customer. I think that's a mistake. >> Jim, you're-- >> So now... >> Okay, Jim, your reaction to that, because there's an article went around this weekend from Motherboard. "The internet was built on free labor "of open source developers. "Is that sustainable?" So Jim, what's your reaction to JJ's comments about the interactions and the dynamic between value creation, value capture, free versus sustainable funding? >> Well if you can sort of mix both together, that's what I would like, I haven't really ever figured out how to make open source work in our business model, but I haven't really tried that hard. It's an intriguing concept for sure, particularly if we come up with APIs that are specific to say, local television or something like that, and maybe some special processes that do things that are of interest to the wider community. So it's something I do plan to look at because I do agree that if you, I mean we use open source, we use this thing called FFmpeg, and several other things, and we're really happy that there's people out there adding value to them, et cetera, and we have our own versions, et cetera, so we'd like to contribute to the community if we could figure out how. >> Sarbjeet, your reactions to JJ's thesis there? >> I think two things. I will comment on two different aspects. One is the lack of standards, and then open source becoming the standard, right. I think open source kind of projects take birth and life in its own, because we have lack of standard, 'cause these different vendors can't agree on standards. So remember we used to have service-oriented architecture, we have Microsoft pushing some standards from one side and IBM pushing from other, SOAP versus xCBL and XML, different sort of paradigms, right, but then REST API became the de facto standard, right, it just took over, I think what REST has done for software in last about 10 years or so, nothing has done that for us. >> well Kubernetes is right now looking pretty good. So if you look at JJ, Kubernetes, the movement you were really were pioneering on, it's having similar dynamic, I mean Kubernetes is becoming a forcing function for solidarity in the community of cloud native, as well as an actual interoperable orchestration layer for multiple clouds and other services. So JJ, your thoughts on how open source continues as some of these new technologies, like Kubernetes, continue to hit the scene. Is there any trajectory change in open source that you see, that you could share, I'd love to get your insights on what's next behind, you know, the rise of Kubernetes is happening, what's next? >> I think more abstractly from Kubernetes, we believe that if you just look at the rate of innovation as a primary factor for progress and forward change in the world, open source software has the highest rate of innovation of any technology creation phenomena, and as a consequence, we're seeing more standards emerge from the open source ecosystem, we're seeing more disruption happen from the open source ecosystem, we're seeing more new technology companies and new paradigms and shifts happen from the open source ecosystem, and kind of all progress across the largest, most difficult sort of compound, sensitive problems, influenced and kind of sourced from the open source ecosystem and the open source world overall. Whether it's chip design, machine learning or computing innovations or new types of architectures, or new types of developer paradigms, you know, biological breakthroughs, there's kind of things up and down the technology spectrum that have a lot to sort of thank open source for. We think that the future of technology and the future of software is really that open source is at the core, as opposed to the periphery or the edges, and so today, every software technology company, and cloud providers included, have closed proprietary cores, meaning that where the core is, the data path, the runtime, the core business logic of the company, today that core is proprietary software or closed source software, and yet what is also true, is at the edges, the wrappers, the sort of crust, the periphery of every technology company, we have lots of open source, we have client libraries and bindings and languages and integrations, configuration, UIs and so on, but the cores are proprietary. We think the following will happen over the next few decades. We think the future will gradually shift from closed proprietary cores to open cores, where instead of a proprietary core, an open core is where you have core open source software project, as the fundamental building block for the company. So for example, Hadoop caused the creation of MapR and Cloudera and Hortonworks, Spark caused the creation of Databricks, Kafka caused the creation of Confluent, Git caused the creation of GitHub and GitLab, and this type of commercial open source software model, where there's a core open source project as the kernel building block for the company, and then an extension of intellectual property or wrappers around that open source project, where you can derive value capture and charge for licensed product with the company, and impress customer, we think that model is where the future is headed, and this includes cloud providers, basically selling proprietary services that could be based on a mixture of open source projects, but perhaps not fundamentally on a core open source project. Now we think generally, like abstractly, with maybe somewhat of a reductionist explanation there, but that open core future is very likely, fundamentally because of the rate of innovation being the highest with the open source model in general. >> All right, that's great stuff. Jim, you're a historian of tech, you've lived it. Your thoughts on some of the emerging trends around cloud, because you're disrupting linear TV with Didja, in a new way using cloud technology. How do you see cloud evolving? >> Well, I think the long lines we discussed, certainly I think that's a really interesting model, and having the open source be the center of the universe, then figure out how to have maybe some proprietary stuff, if I can use that word, around it, that other people can take advantage of, but maybe you get the value capture and build a business on that, that makes a lot of sense, and could certainly fit in the TV industry if you will from where I sit... Bring services to businesses and consumers, so it's not like there's some reason it wouldn't work, you know, it's bound to, it's bound to figure out a way, and if you can get a whole mass of people around the world working on the core technology and if it is sort of unique to what mission of, or at least the marketplace you're going after, that could be pretty interesting, and that would be great to see a lot of different new mini-clouds, if you will, develop around that stuff would be pretty cool. >> Sarbjeet, I want you to talk about scale, because you also have experience working with Rackspace. Rackspace was early on, they were trying to build the cloud, and OpenStack came out of that, and guess what, the world was moving so fast, Amazon was a bullet train just flying down the tracks, and it just felt like Rackspace and their cloud, you know OpenStack, just couldn't keep up. So is scale an issue, and how do people compete against scale in your mind? >> I think scale is an issue, and software chops is an issue, so there's some patterns, right? So one pattern is that we tend to see that open source is now not very good at the application side. You will hardly see any applications being built as open source. And also on the extreme side, open source is pretty sort of lame if you will, at very core of the things, like OpenStack failed for that reason, right? But it's pretty good in the middle as Joseph said, right? So building pipes, building some platforms based on open source, so the hooks, integration, is pretty good there, actually. I think that pattern will continue. Hopefully it will go deeper into the core, which we want to see. The other pattern is I think the software chops, like one vendor has to lead the project for certain amount of time. If that project goes into sort of open, like anybody can grab it, lot of people contribute and sort of jump in very quickly, it tends to fail. That's what happened to, I think, OpenStack, and there were many other reasons behind that, but I think that was the main reason, and because we were smaller, and we didn't have that much software chops, I hate to say that, but then IBM could control like hundred parties a week, at the project >> They did, and look where they are. >> And so does HP, right? >> And look where they are. All right, so I'd love to have a Power Panel on open source, certainly JJ's been in the thick of it as well as other folks in the community. I want to just kind of end on lightweight question for you guys. What have you guys learned? Go down the line, start with Jim, Sarbjeet, and then JJ we'll finish with you. Share something that you've learned over the past three months that moved you or that people should know about in tech or cloud trends that's notable. What's something new that you've learned? >> In my case, it was really just spending some time in the last few months getting to know our end users a little bit better, consumers, and some of the impact that having free internet television has on their lives, and that's really motivating... (distorted speech) Something as simple as you might take for granted, but lower income people don't necessarily have a TV that works or a hotel room that has a TV that works, or heaven forbid they're homeless and all that, so it's really gratifying to me to see people sort of tuning back into their local media through television, just by offering it on their phone and laptops. >> And what are you going to do as a result of that? Take a different action, what's the next step for you, what's the action item? >> Well we're hoping, once our product gets filled out with the major networks, et cetera, that we actually provide a community attachment to it, so that we have over-the-air television channels is the main part of the app, and then a side part of the app could be any IP stream, from city council meetings to high schools, to colleges, to local community groups, local, even religious situations or festivals or whatever, and really try to tie that in. We'd really like to use local television as a way to strengthening all local media and local communities, that's the vision at least. >> It's a great mission you guys have at Didja, thanks for sharing that. Sarbjeet, what have learned over the past quarter, three months that was notable for you and the impact and something that changed you a little bit? >> What actually I have gravitated towards in last three to six months is the blockchain, actually. I was light on that, like what it can do for us, and is there really a thing behind it, and can we leverage it. I've seen more and more actually usage of that, and sort of full SCM, supply chain management and healthcare and some other sort of use cases if you will. I'm intrigued by it, and there's a lot of activity there. I think there's some legs behind it, so I'm excited about that. >> And are doing a blockchain project as a result, or are you still tire-kicking? >> No actually, I will play with it, I'm a practitioner, I play with it, I write code and play with it and see (Jim laughs) what does that level of effort it takes to do that, and as you know, I wrote the Alexa scale couple of weeks back, and play with AI and stuff like that. So I try to do that myself before I-- >> We're hoping blockchain helps even out the TV ad economy and gets rid of middle men and makes more trusting transactions between local businesses and stuff. At least I say that, I don't really know what I'm talking about. >> It sounds good though. You get yourself a new round of funding on that sound byte alone. JJ, what have you learned in the past couple months that's new to you and changed you or made you do something different? >> I've learned over the last few months, OSS Capital is a few months and change old, and so just kind of getting started on that, and it's really, I think potentially more than one decade, probably multi-decade kind of mostly consensus building effort. There's such a huge lack of consensus and agreement in the industry. It's a fascinatingly polarizing area, the sort of general topic of open source technology, economics, value creation, value capture. So my learnings over the past few months have just intensified in terms of the lack of consensus I've seen in the industry. So I'm trying to write a little bit more about observations there and sort of put thoughts out, and that's kind of been the biggest takeaway over the last few months for me. >> I'm sure you learned about all the lawyer conversations, setting up a fund, learnings there probably too right, (Jim laughs) I mean all the detail. All right, JJ, thanks so much, Sarbjeet, Jim, thanks for joining me on this Power Panel, cloud conversation impact, to entrepreneurship, open source. Jim Long, Sarbjeet Johal and Joseph Jacks, JJ, thanks for joining us, theCUBE Conversation here in Palo Alto, I'm John Furrier, thanks for watching. >> Thanks John. (lively classical music)

Published Date : Feb 20 2019

SUMMARY :

so great to have you on. Google, and then you got IBM and Oracle, sort of the internet of monopolies, there's got to be room for more clouds. and the open source that has been the cliche So the proprietariness has to kinda, Berkeley back in the day. across the internet, we do in the open source community, you know, and the rest of the value, about the interactions and the dynamic to them, et cetera, and we have One is the lack of standards, the movement you were and the future of software is really that How do you see cloud evolving? and having the open source be just flying down the tracks, and because we were smaller, and look where they are. over the past three months that moved you and some of the impact that of the app could be any IP stream, and the impact and something is the blockchain, actually. and as you know, I wrote the Alexa scale the TV ad economy and in the past couple months and agreement in the industry. I mean all the detail. (lively classical music)

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JimPERSON

0.99+

IBMORGANIZATION

0.99+

Jim LongPERSON

0.99+

JJPERSON

0.99+

AmazonORGANIZATION

0.99+

OracleORGANIZATION

0.99+

SarbjeetPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Sarbjeet JohalPERSON

0.99+

JosephPERSON

0.99+

JohnPERSON

0.99+

Joseph JacksPERSON

0.99+

OSS CapitalORGANIZATION

0.99+

FacebookORGANIZATION

0.99+

February 2019DATE

0.99+

GoogleORGANIZATION

0.99+

six peopleQUANTITY

0.99+

John FurrierPERSON

0.99+

Silicon ValleyLOCATION

0.99+

Palo AltoLOCATION

0.99+

10QUANTITY

0.99+

two thingsQUANTITY

0.99+

20%QUANTITY

0.99+

CUBEORGANIZATION

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

fiveQUANTITY

0.99+

HPORGANIZATION

0.99+

twoQUANTITY

0.99+

two constituentsQUANTITY

0.99+

Open Source Software CapitalORGANIZATION

0.99+

UKLOCATION

0.99+

Office 365TITLE

0.99+

last weekDATE

0.99+

DidjaORGANIZATION

0.99+

two propertiesQUANTITY

0.99+

bothQUANTITY

0.98+

two schoolsQUANTITY

0.98+

OneQUANTITY

0.98+

first pointQUANTITY

0.98+

RackspaceORGANIZATION

0.98+

third modelQUANTITY

0.98+

first campQUANTITY

0.98+

AlexaTITLE

0.98+

Susie Wee, Cisco DevNet | Cisco Live EU 2019


 

>> Live from Barcelona, Spain, its theCUBE, covering Cisco Live! Europe, brought to you by Cisco, and its ecosystem partners. >> Hello everyone, welcome back to theCUBE's live coverage here in Barcelona, Spain, for Cisco Live! Europe 2019, I'm John Furrier, with my co-host Dave Vellante as well as Stu Miniman has been co-hosting all week, three days of coverage, we're in day two. We're here with very special guest, we're in the DevNet Zone, and we're here with the leader of the DevNet team of Cisco, Susie Wee, Senior Vice President, CTO of Cisco DevNet, welcome, good to see you. >> Thank you, good to see you, and I'm glad that we have you here again in the DevNet Zone. >> You've been running around, it's been super exciting to watch the evolution, we chatted a couple of years ago, okay we're going to get some developer-centric APIs and a small community growing, now it's exploding. (Susie laughs) Feature of the show, the size gets bigger every year. >> It was interesting, yeah, we took a chance on it right? So we didn't know and you took this bet with me is just that the network is becoming programmable, the infrastructure is programmable, and not only is the technology becoming programmable, but we can take the community of networkers, IT infrastructure folks, app developers and get them to understand the programmability of the infrastructure, and it's really interesting that, you know, these classes are packed, they're very deep they're very technical, the community's getting along and, you know, networkers are developers. >> Yeah you know, you nailed it, because I think as a CTO, you understood the dev-ops movement, saw that in cloud. And I remember my first conversation with you like, you know, the network has a dev-ops angle too if you can make it programmable, and that's what it's done, and you're seeing Cisco's wide having this software extraction, ACI anywhere, hyperflux anywhere, connected to the cloud, now Edge. APIs are at the center, the DNA Center platform. >> Yes! >> API First, very successful project. >> Yes yes, it's-- >> This is the new DNA of Cisco is APIs, this is what it's all about. >> It is, it is and you know, like at first, you know, when we started this journey five years ago a few of our products had APIs, like a few of them were programmable. But you know, you don't take your network in overnight, it's programmable when you have this type of thing. But we've been building it in, and now practically every product is programmable, every product has APIs, so now you have a really rich fabric of yeah, security, data center, enterprises and campus and branch networks. Like, and it can now, put together really interesting things. >> Well congratulations, it happened and it's happening, so I got to ask the question, now that it's happening, happened and happening, continuing to happen, what's the impact to the customer base because now you're now seeing Cisco clearly defining the network and the security aspect of what the network can do, foundationally, and then enabling it to be programmable. >> Yeah. >> What's happening now for you guys, obviously apps could take advantage of it, but what else is the side effect of this investment? >> Yeah so, the interesting thing is, if we take a look at the industry at large, what happens is, you kind of have the traditional view of, IT, you know, so if you take a look at IT, you know, what do you need it for? I need it to get my compute, just give me my servers, give me my network, and let's just hope it works. And then it was also viewed as being old, like I can get all this stuff on the cloud, and I can just do my development there, why do I need all of that stuff right? But once you take it, and you know, the industry has come along, what happens is, you need to bring those systems together, you need to modernize your IT, you need to be able to just, you know, take in the cloud services, to take the applications come across, but the real reason you need it is because you want to impact the business, you know, so kind of what happens is like, every business in the world, every, is being disrupted right, and if you take a look, it has a digital disruptor going on. If you're in retail, then, you know, you're a brick and mortar, you know, traditionally a brick and mortar store kind of company, and then you have an online retailer that's kind of starting to eat your lunch, right, if you're in banking, you have the digital disruption like every, manufacturing is starting to get interesting and you know, what you're doing in energy. So all of this has kind of disruption angles, but really the key is that, IT holds the keys. So, IT can sit there and keep its old infrastructure and say, I have all this responsibility, I'm running this machinery, I have this customer database, or you can modernize, right? And so you can either hold your business back, or you can modernize, make it programmable and then suddenly allow cloud native, public, private cloud, deploy new applications and services and suddenly become an innovative platform for the company, then you can solve business problems and make that real, and we're actually seeing that's becoming real. (laughs) >> Well and you're seeing it right in front of us. So a big challenge there of what you just mentioned, is just having the skills to be able to do that but the appetite of this audience to absorb that knowledge is very very high, so for example, we've been here all week watching, essentially Cisco users, engineers, absorb this new content to learn how to basically program infrastructure. >> That's right, and it's not Cisco employees, it's the community, it's the world of like, Cisco-certified engineers like, people who are doing networking and IT for companies and partners around the world. >> And so, what do they have to go through to get from, you know, where they were, not modernized to modernized? >> Yeah, and actually, and that's a good way 'cause when we look back to five years ago, it was a question, like we knew the technology was going to become programmable and the question is, are these network guys, you know, are these IT guys everywhere are they going to stay in the old world are they really going to be the ones that can work in the new world, or are we going to hire a bunch of new software guys who just know it, are cloud native, they get it all, to do it all. Well, it doesn't work that way because to work in oil and gas, you need some expertise in that and those guys know about it, to work in, you know, retail and banking, and all of these, there's some industry knowledge that you need to have. But then you need to pick up that software skill and five years ago, we didn't know if they would make that transition, but we created DevNet to give them the tools within their language and kind of, you know if they do and what we found is that, they're making the jump. And you see it here with everyone behind us, in front of us, like they are learning. >> Your community said we're all in. Well I'm interested in, we've seen other large organizations infrastructure companies try to attract developers like this, I'm wondering is it because of the network, is it because of Cisco? Are there some other ingredients that you could buy, is it the certified engineers who have this appetite? Why is it that Cisco has been so successful, and I can name a number of other companies that have tried and failed, some of them even owned clouds, and have really not been able to get traction with developers, why Cisco? >> Well I mean, I think we've been fortunate in many ways, as we've been building it out but I think part of it, you know like the way any company would have to go about you know, kind of taking on programmability, dev-ops, you know, these types of models, is tough, and it's, there's not one formula for how you do it, but in our case, it was that Cisco had a very loyal community. Or we have, and we appreciate that very loyal community 'cause they are out there, workin' the gear, building the networks like, running train stations, transportation systems you know, running all around the world, and so, and they've had to invest a lot into that knowledge. Now we then, gave them the tools to learn, we said, here's coding 101, here's your APIs, here's how to learn about it, and your first API call will be get network devices. Here's how you automate your infrastructure, here's how you do your things, and because we put it in, they're grabbing on and they're doing it and you know, so, it was kind of having that base community and being respectful of it and yet, bringing them along, pushing them. Like we don't say keep doing things the old way yes, learn software, and we're not going to water down how you have to learn software. Like you're going to get in there, you're going to use Rest APIs, you're going to use Postman, you're going to use Git, and we have that kind of like first track to just get 'em using those tools. And we also don't take an elitist culture like we're very welcoming of it, and respectful of what they've done and like, just teach 'em and let 'em go. And the thing is like, once you do it, like once you spend your time and you go oh, okay, so you get the code from GitHub, I got it, now I see all this other stuff. Now I made my Rest API call and I've used Postman. Oh, I get it, it's a tool. Just, once you've done just that, you are a different person. >> And then it's business impact. >> Then it's business, yeah no and like then you're also able to experiment, like you suddenly see a bigger world. 'Cause you've been responsible for this one thing, but now you see the bigger world and you think differently, and then it's business impact, because then you're like okay, how do I modernize my infrastructure? How can I just automate this task that I do every day? I'm like, I don't want to do that anymore, I want to automate it, let me do this. And once you get that mindset, then you're doing more, and then you're saying wait, now can I install applications on this, boy, my network and my infrastructure can gives lots of business insights. So I can start to get information about what applications are being called, what are being used, you know, when you have retail operations you can say, oh, what's happening in this store versus that store? When you have a transportation system, where are we most busy? When you're doing banking, where is like, are you having mobile transactions or in-store transactions? There's all this stuff you learn and then suddenly, you can, you know, really create the applications that-- >> So they get the bug, they get inspired they stand up some quick sandbox with some value and go wow-- >> Or they use our DevNet Sandbox so that they can start stuff and get experi-- >> It's a cloud kind of mindset of standing something up and saying look at it, wow, I can do this, I can be more contributing to the organization. Talk about the modernization, I want to get kind of the next step for you 'cause the next level for you is what? Because if this continues, you're going to start to see enterprises saying oh, I can play in the cloud, I can use microservices. >> Yes. >> I can tap into that agility and scale of the cloud, and leverage my resources and my investment I have now to compete, you just mentioned that. How is that going to work, take us through that. >> Yeah and there's more, in addition to that, is also, I can also leverage the ecosystem, right? 'Cause you're used to doing everything yourself, but you're not going to win by doing everything yourself, even if you made everything modern, right? You still need to use the ecosystem as well. But you know, but then at that stage what you can do and actually we're seeing this as, like our developers are not only the infrastructure folks, but now, all of the sudden our ISVs, app developers, who are out there writing apps, are able to actually put stuff into the infrastructure, so we actually had some IoT announcements this week, where we have these industrial routers that are coming out, and you can take an industrial router and put it into a police car and because a police car has a dashboard camera, it has a WiFi system, it has on-board computer, tablets, like all of this stuff, the officer has stuff, that's a mobile office. And it has a gateway in it. Well now, the gateway that we put in there does app hosting, it can host containerized applications. So then if you take a look at it, all the police cars that are moving around are basically hosting containerized apps, you have this kind of system, and Cisco makes that. >> In a moveable edge. >> And then we have the gateway manager that does it, and if you take a look at what does the gateway manager do it has to manage all of those devices, you know, and then it can also deploy applications. So we have an ability to now manage, we also have an ability to deploy containers, pull back containers, and then this also works in manufacturing, it works in utility, so you have a substation, you have these industrial routers out there that can host apps, you know, then all of a sudden edge computing becomes real. But what this brings together is that now, you can actually get ISVs who can actually now say, hey I'm an app developer, I wanted to write an app, I have one that could be used in manufacturing. I could never do it before, but oh, there's this platform, now I can do it, and I don't have to start installing routers, like a Cisco partner will do it for a customer, and I can just drop my app in and it's, we're actually seeing that now-- >> So basically what's happening, the nirvana is first of all, intelligent edge is actually possible. >> Yes. >> With having the power at the edge with APIs, but for the ISVs, they might have the domain expertise at saying, hey I'm an expert on police, fire, public safety, vertical. >> Yes. >> But, I could build the best app, but I don't need to do all this other stuff. >> Yes. >> So I can focus all my attention on this. >> Yes. >> And their bottleneck was having that kind of compute and or Edge device. >> Yes. >> Is that what you're kind of getting at? >> Yeah, and there's, exactly it was because you know, I mean an app developer is awesome at writing apps. They don't want to get into the business of deploying networks and like even managing and operating how that is, but there's a whole like kind of Cisco ecosystem that does that. Like we have a lot of people who will love to operationalize that system, deploy that, you know, kind of maintain it. Then there's IT and OT operators who are running that stuff, but that app developer can write their app drop it into there, and then all of that can be taken care of. And we actually have two ISVs here with us, one in manufacturing, one in utilities, who are, you know, DevNet ISV partners, they've written applications and they actually have real stories about this, and kind of what they had to say is, like in the manufacturing example, is okay, so they write, they have this innovation, I wrote this cool app for manufacturing, right? So there's something that it does, it's building it, you know, they've gotten expertise in that, and then, as they've been, they're doing something innovative, they actually need the end customer, who does, the manufacturer, to use it, and adopt a new technology. Well, hey, you know, I'm running my stuff, why should I use that, how would I? So they actually work with a systems integrator, like a channel partner that actually will customize the solution. But even that person may not have thought about edge computing, what can you do, what's this crazy idea you have, but now they've actually gotten trained up, they're getting trained up on our IoT technologies, they're getting trained up on how to operationalize it, and this guy just writes his app, he actually points them to the DevNet Sandbox to learn about it, so he's like, no let me show you how this Edge processing thing works, go use the DevNet Sandbox, you can spin up your instance, you can see it working, oh look there's these APIs, let me show you. And it turns out they're using the Sandbox to actually train the partners and the end customer about what this model is like. And then, these guys are adopting it, and they're getting paying customers through this. >> Did you start hunting for ISVs, did they find you, how did that all transpire? >> It kind of happens in all different ways. (laughter) >> So yes. >> Yeah yeah, it happens in all different ways, and basically, in some cases like we actually sometimes have innovation centers and then you have you know, kind of as you know, the start-up that's trying to figure out how to get their stuff seen, they show up, we look for it. In our case in Italy, with the manufacturing company, then what happened was, the government was actually investing and the government was actually giving tax subsidies for manufacturing plants to modernize. And so, what they were doing was actually giving an incentive and then looking for these types of partners, so we actually teamed up with our country teams to find some of these and they have a great product. And then we started, you know, working with them. They actually already had an appreciation for Cisco because they, you know, in their country, they did computer science in college, they might've done some networking with the Cisco Networking Academy, so they knew about it, but finally, it came that they could actually bring this ecosystem together. >> Susie, congratulations on all your success, been great to be part of it in our way, but you and your team have done an amazing job, great feedback on Twitter on the swag got the-- (laughter) Swag bag's gettin' a lot of attention, which is always a key important thing. But in general, super important initiative, share some insight into how this has changed Cisco's executive view of the world because, you know, the cloud had horizontal scalability, but Cisco had it too. And now the new positioning, the new branding that Karen Walker and her team are putting out, the bridge to tomorrow, the future, is about almost a horizontally scalable Cisco. It's everywhere now so-- >> Yeah the bridge to possible, yeah. >> Bridge to possible, yes. >> Yeah well I mean, really what happens is, you know, there was a time when you're like, I'm going to buy my security, I'm going to buy my networking, I'm going to buy my data center, but really more and more people just want an infrastructure that works, right? An infrastructure that's capable that can allow you to innovate, and really what happens, when you think about how do you put all of these systems together, 'cause they're still individual, and they need to be individual in best in class products, well the best way to put 'em together is with APIs. (laughs) So, it's not that you need to architect them all into one big product, it's actually better to have best in class, clearly define the APIs, and then allow, as kind of modularity and to build it out. So, really we've had tremendous support from Chuck Robbins, our CEO, and he's understood this vision and he's been helping, kind of, you know, like DevNet is a start-up itself, like he's been helping us navigate the waters to really make it happen and as we moved and as he's evolved the organization, we've actually started to get more and more support from our executives and we're working across the team, so everything that we do is together with all the teams. And now what we're doing is we're co-launching products. Every time we launch a new product, we launch a new product with the product offer and the developer offer. >> Yeah. >> So, you know, here we've launched the new IoT products. >> With APIs. >> And, with APIs, and IOX and App-posting capabilities and we launched them together with a new DevNet IoT developer center. At developer.cisco.com/iot, and this is actually, if you take a look at the last say half year or year, our products have been launching, you'll see, oh here's the new DNA Center, and here's the new DevNet developer center. You know, then we can say, here's the new kind of ACI, and here's the new ACI developer center. Here's the new Meraki feature, here's the new ACI-- >> And it's no secret that DNA Center has over 600 people engineers in there. >> Yeah (laughs) >> That public information might not be-- >> You know, but we've actually gotten in the mode in the understanding of you know, every product should have a developer offer because it's about the ecosystem, and we're getting tremendous support now. >> Yeah a lot of people ask me about Amazon Web Services 'cause we're so close, we cover them deeply. They always ask me, hey John, why is that, why is Amazon so successful I go, well they got a great management team, they've got a great business model, but it was built on APIs first. It was a web service framework. You guys have been very smart by betting on the API because that's where the growth is, so it's not Amazon being the cloud, it's the fact that they built building blocks with APIs, that grew. >> Yes. >> And so I think what you've got here, that's lightening in the bottle is, having an API strategy creates more connections, connections create more fabric, and then there's more data, it's just, it's a great growth vehicle. >> Absolutely. >> So, congratulations. >> Thank you. >> So is that your market place, do you have a market place so it's just, I guess SDKs and APIs and now that you have ISVs comin' in, is that sort of in the plan? >> We do, no we do actually so, so yeah so basically, when you're in this world, then you have your device, you know, it's your phone, and then you have apps that you download and you get it from an app store. But when we're talking about, you know, the types of solutions we're talking about, there is infrastructure, there is infrastructure for you know, again, utilities companies, for police stations, for retail stores, and then, you have ISV applications that can help in each of those domains. There's oftentimes a systems integrator that's putting something together for a customer. And so now kind of the app store for this type of thing actually involves, you know, our infrastructure products together with kind of, and infrastructure, and third-party ones, you know, ISV software that can be customized and have innovation in different ways together with that system integrator and we're training them all, people across that, but we actually have something called DevNet Exchange. And what we've done is there's actually two parts, there's Code Exchange, which is basically, pointers out to you know, source code that's out in GitHub, so we're just going out to code repos that are actually helping people get started with different products. But in addition, we have Ecosystem Exchange, which actually lists the ISV solutions that can be used as well as the system's integrators who can actually deliver solutions in these different domains, so you know, DevNet Ecosystem Exchange is the place where we actually do list the ISVs with the SIs you know, with the different platforms so, that's the app store for a programmable infrastructure. >> Susie, congratulations again, thank you so much for including us in your DevNet Zone with theCUBE here for three days. >> Thank you for coming to us and for really helping us tell the story. >> It' a great story to tell and it's kickin' butt and takin' names-- (laughter) Susie Wee, Senior Vice President and CTO of DevNet, makin' it happen just the beginning, scratching the surface of the explosion of API-based economies, around the network, the network value, and certainly cloud and IoT. Of course, we're bringing you the edge of the network here with theCUBE, in Barcelona, we'll be back with more live coverage day two, after this short break. (upbeat music)

Published Date : Jan 30 2019

SUMMARY :

brought to you by Cisco, and its ecosystem partners. with the leader of the DevNet team of Cisco, that we have you here again in the DevNet Zone. Feature of the show, the size gets bigger every year. the community's getting along and, you know, Yeah you know, you nailed it, This is the new DNA of Cisco is APIs, But you know, you don't take your network in overnight, and the security aspect of what the network can do, and you know, what you're doing in energy. So a big challenge there of what you just mentioned, it's the community, it's the world of like, to work in oil and gas, you need some expertise in that is it because of the network, is it because of Cisco? and they're doing it and you know, so, and then suddenly, you can, you know, kind of the next step for you 'cause I have now to compete, you just mentioned that. So then if you take a look at it, it has to manage all of those devices, you know, the nirvana is first of all, intelligent edge but for the ISVs, they might have But, I could build the best app, And their bottleneck was having that it's building it, you know, they've gotten It kind of happens in all different ways. And then we started, you know, working with them. because, you know, the cloud had horizontal and he's been helping, kind of, you know, So, you know, here we've launched if you take a look at the last say half year or year, And it's no secret that DNA Center of you know, every product should have it's the fact that they built building blocks and then there's more data, it's just, and then you have apps that you download thank you so much for including us in your DevNet Zone Thank you for coming to us and for really Of course, we're bringing you the edge of the network here

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Chuck RobbinsPERSON

0.99+

Susie WeePERSON

0.99+

CiscoORGANIZATION

0.99+

Karen WalkerPERSON

0.99+

JohnPERSON

0.99+

SusiePERSON

0.99+

Stu MinimanPERSON

0.99+

AmazonORGANIZATION

0.99+

John FurrierPERSON

0.99+

BarcelonaLOCATION

0.99+

ItalyLOCATION

0.99+

Cisco Networking AcademyORGANIZATION

0.99+

GitHubORGANIZATION

0.99+

three daysQUANTITY

0.99+

Amazon Web ServicesORGANIZATION

0.99+

Barcelona, SpainLOCATION

0.99+

developer.cisco.com/iotOTHER

0.99+

two partsQUANTITY

0.99+

DevNet ExchangeTITLE

0.98+

GitTITLE

0.98+

DevNetORGANIZATION

0.98+

over 600 peopleQUANTITY

0.98+

Cisco DevNetORGANIZATION

0.98+

first trackQUANTITY

0.98+

oneQUANTITY

0.98+

DNA CenterORGANIZATION

0.98+

this weekDATE

0.98+

first conversationQUANTITY

0.98+

five years agoDATE

0.97+

DevNetTITLE

0.96+

day twoQUANTITY

0.96+

PostmanTITLE

0.95+

DevNet Ecosystem ExchangeTITLE

0.95+

Ecosystem ExchangeTITLE

0.95+

ACIORGANIZATION

0.95+

TwitterORGANIZATION

0.94+

CTOPERSON

0.93+

Code ExchangeTITLE

0.93+

eachQUANTITY

0.93+

firstQUANTITY

0.91+

tomorrowDATE

0.9+

Senior Vice PresidentPERSON

0.87+

DevNet SandboxTITLE

0.87+

all weekQUANTITY

0.84+

couple of years agoDATE

0.83+

two ISVsQUANTITY

0.81+

Cisco Live! Europe 2019EVENT

0.78+

theCUBEORGANIZATION

0.75+

EdgeTITLE

0.72+

every yearQUANTITY

0.71+

first APIQUANTITY

0.7+

IOXTITLE

0.7+

MerakiORGANIZATION

0.7+

George Bentinck, Cisco Meraki | Cisco Live EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering Cisco Live! Europe. Brought to you by Cisco and its ecosystem partners. >> Welcome back to Cisco Live! We're in Barcelona, Dave Villante and Stu Miniman. You're watching theCUBE, the leader in live tech coverage. We go out to the events, we extract the signal from the noise. George Bentinck is here. He's a product manager for Camera Systems at Cisco Meraki. >> Hi. >> Great to see you. Thanks for coming on theCUBE. >> Thanks very much. >> So, we were saying, Meraki's not just about wireless. It's all about cameras now. Tell us about your role. >> The Meraki camera is relatively new. It's one of the newer products. It came out just over two years ago and it's really embodying what we're about as a business unit at Cisco, which is about simplicity. It's about taking normally complex technology and sort of distilling it so customers can really use it. So what we did with the camera was we spoke to a lot of our customers, listened what they had to say, and they were fed up with the boxes. They don't want these servers, they don't want the recording solutions, they just want to get video. And so we built a camera which has everything inside it. All the video is stored in the camera using the latest solid state storage. And then we did all the analytics and the other sort of cool things people want to do with video in the camera as well. And yet to make it easy to use, it's all managed from the Meraki cloud. So that allows you to scale it from one camera to 100 cameras to 100,000 cameras and yet have nothing else other than the cameras and the management from the cloud. >> Well the way you describes it sounds so simple, but technically, it's a real challenge, what you've described. What were some of the technical challenges of you guys getting there? >> Well, there are sort of two components. There's the device piece and when we look at the device piece, we basically leverage the latest advances in the mobile phone industry. So if you look at the latest iPhones and Android phones, we've taken that high density, highly reliable storage and integrated it into the camera. And then we've also taken the really powerful silicone, so we have Qualcomm Snapdragon system-on-chip in there and that performance allows us to do all the analytics in the camera. And so the second piece is the cloud, the scaling, and the management. And with video, it's lots of big data, which I'm guessing you guys are probably pretty familiar with. And trying to search that and know what's going on and managing its scale can be really painful. But we have a lot of experience with this. Meraki's cloud infrastructure manages millions of connected nodes with billions of connected devices and billions of pieces of associated metadata. This is just like video, so we can reuse a lot of the existing technology we've built in the cloud and now move it to this other field of video and make it much easier to find things. >> And when people talk about, y'know, the camera systems, IoT obviously comes into play and security's a big concern. Y'know, people are concerned about IP cameras off the shelf. Y'know, everybody knows the stories about the passwords where, y'know, they never changed out of the factory and they're the same passwords across the, and so, y'know, presumably, Cisco Meraki, trusted name, and there's a security component here as well. >> Yeah, absolutely. This is actually one of my favorite topics because, unfortunately, not many people ask about it. It's one of those, it's not an issue until it's an issue type of things and we put a lot of work in it. I mean, Cisco has security in its DNA. It's just like part of what we do. And so we did all of the things which I think every camera vendor and IoT vendor should be doing anyway. So that's things like encryption for everything and by default. So all the storage on the camera is encrypted. It's mandatory so you can't turn it off. And there's zero configuration, so when you turn it on, it won't record for a few minutes while it encrypts its storage volume and then you're good to go. We also manage all the certificates on the camera and we also have encrypted management for the camera with things like two-factor authentication and other authentication mechanisms on top of that as well. So it's sort of leaps and bounds ahead of where most of the decision makers are thinking in this space because they're physical security experts. They know about locks and doors and things like that. They're not digital security experts but the Cisco customer and our organization, we know this and so we have really taken that expertise and added it to the camera. >> Yeah, George, security goes hand-in-hand with a lot of the Cisco solutions. Is that the primary or only use case for the Meraki camera? Y'know, I could just see a lot of different uses for this kind of technology. >> It really is very varied and the primary purpose of it is a physical security camera. So being able to make sure that if there's an incident in your store, you have footage of maybe the shoplifting incident or whatever. But, because it's so easy to use, customers are using it for other things. And I think one of the things that's really exciting to me is when I look at the data. And if I look at the data, we know that about 1% of all the video we store is actually viewed by customers. 99% just sits there and does nothing. And so, as we look at how we can provide greater value to customers, it's about taking the advances in things such as machine learning for computer vision, sort of artificial intelligence, and allowing you to quantify things in that data. It allows you to, for example, determine how many people are there and where they go and things like that. And to maybe put it all into context, because one of my favorite examples is a Cisco case study in Australia, where they're using cameras at a connected farm as part of an IoT deployment, to understand sheep grazing behavior and so this camera watches the sheep all day. Now as a human, I don't want to watch the sheep all day, but the camera doesn't care. And so the farmer looks at eight images representing eight hours, which is a heat map of the animals' movement in the field, and they can know where they've been grazing, where they need to move them, where this might be overgrazed. And so the camera's not security at this point, it really is like a sensor for the enterprise. >> Yeah, it's interesting, actually I did a walk through the DevNet Zone and I saw a lot of areas where I think they're leveraging some of your technology. Everything from let's plug in some of the AI to be able to allow me to do some interesting visualizations. What we're doing, there's a magic mirror where you can ask it like an Alexa or Google, but it's Debbie, the robot here as to give you answers of how many people are in a different area here. A camera is no longer just a camera. It's now just an end node connected and there's so many technologies. How do you manage that as a product person where you have the direction, where you put the development? You can't support a million different customer use cases. You want to be able to scale that business. >> Absolutely, I think the North Star always has to be simplistic. If you can't go and deploy it, you can't use it. And so we see a lot of these cool science projects trapped in proof of concept. And they never go into production and the customers can't take advantage of it. So we want to provide incredibly simple, easy out-the-box technology, which allows people to use AI and machine learning, and then we're the experts in that, but we give you industry-standard APIs using REST or MQTT, to allow you to build business applications on it directly or integrate it into Cisco Kinetic, where you can do that using the MQTT interface. >> So, Stu, you reminded me so we're here in the DevNet Zone and right now there's a Meraki takeover. So what happens in the DevNet Zone is they'll pick a topic or a part of Cisco's business unit, right now, it's the Meraki, everyone's running around with Meraki takeover shirts, and everybody descends on the DevNet Zone. So a lot of really cool developer stuff going on here. George, I wanted to ask you about where the data flows. So the data lives at the edge, y'know, wherever you're taking the video. Does it stay there? Given that only 1% is watched, are you just leaving it there, not moving it back into the cloud? Are you sometimes moving it back into the cloud? What's the data flow look like? >> You can think of this interesting sort of mindset, which is let's have a camera where we don't ever want to show you video, we want to give you the answer because video is big, it's heavy. Let's give you the answer and if that answer means we give you video, we give you video. But if we can give you the answer through other forms of information, like a still image, or an aggregate of an image, or metadata from that, then we'll give you that instead. And that means customers can deploy this on cellular networks out in the middle of nowhere and with much fewer constraints than they had in the past. So it really depends but we try and make it as efficient as possible for the person deploying it so they don't have to have a 40G network connection to every camera to make the most of it. >> Yeah, so that would mean that most of it stays-- >> Most of it stays at the edge in the camera. >> Talk a little bit more about the analytics component. Is that sort of Meraki technology the came over with the acquisition? What has Cisco added to that? Maybe speak to that a little bit. >> So the camera is a relatively new product line within the last two and a half years and the Meraki acquisition was, I think we're only like five years or more now down that road, so this is definitely post-acquisition and part of the continued collaboration between various departments at Cisco. What it enables you to do is object detection, object classification, and object tracking. So it's I know there's a thing, I know what that thing is, and I know where that thing goes. And we do it for a high level object class today, which is people. Because if you look at most business problems, they can be broken down into understanding location, dwell times, and characteristics of people. And so if we give you the output of those algorithms as industry-standard APIs, you can build very customized business analytics or business logics. So let me give you a real world example. I have retail customers tell me that one of the common causes of fraud is an employee processing a refund when there's no customer. And so what if you could know there was no customer physically present in front of the electronic point of sale system where the refund is being processed? Well, the camera can tell you. And it's not a specialist analytics camera, it's a security camera you were going to buy anyway, which will also give this insight. And now you know if that refund has a customer at the other side of the till. >> Well, that's awesome. Okay, so that's an interesting use case. What are some of the other ones that you foresee or your customers are pushing you towards? Paint a picture as to what you think this looks like in the future. >> It really is this camera as a sensor so one of the newer things we've added is the ability to have real-time updates of the lights' conditions from the camera, so you can get from the hardware-backed light sensor on the camera the lux levels. And what that means is now you have knowledge of people, where they are, where they go, knowledge of lights, and now you can start going okay, well maybe we adjust the lighting based on these parameters. And so we want to expose more and more data collection from this endpoint, which is the camera, to allow you to make either smarter business decisions or to move to the digital workplace and that's really what we're trying to do in the Meraki offices in San Francisco. >> And do you get to the point or does the client get to the point where they know not only that information you just described but who the person is? >> Yes and no. I think one of the things that I'm definitely advocating caution on is the face recognition technology has a lot of hype, has a lot of excitement, and I get asked about it regularly. And I do test state-of-the-art and a lot of this technology all the time. And I wear hats because I find them fun and entertaining but they're amazingly good at stopping most of these systems from working. And so you can actually get past some of the state-of-the-art face recognition systems with two simple things, a hat and a mobile phone. And you look at your phone as you walk along and they won't catch you. And when I speak to customers, they're expectation of the performance of this technology does not match the investment cost required. So I'm not saying it isn't useful to someone, it's just, for a lot of our customers, when they see what they would get in exchange for such a huge investment, it's not something they are interested in. >> Yeah, the ROI's just really not there today. >> Not today, but the technology's moving very fast so we'll see what the future brings. >> Yeah, great. Alright, George, thanks so much for coming to theCUBE. It was really, really interesting. Leave you the last word. Customer reactions to what you guys are showing at the event? Any kind of new information that you want to share? >> There are some that we'll talk about in the Whisper Suite, which I will leave unsaid, unfortunately. It's just knowing that you can use it so simply and that the analytics and the machine learning come as part of the product at no additional cost. Because this is pretty cutting-edge stuff. You see it in the newspapers, you see it in the headlines and to say I buy this one camera and I can be a coffee shop, a single owner, and I get the same technology as an international coffee organization is pretty compelling and that's what's getting people excited. >> Great and it combines the sensor at the edge and the cloud management so-- >> Best of both worlds. >> That's awesome, I love the solution. Thanks so much for sharing with us. >> Fantastic. >> Alright, keep it right there, everybody. Stu and I will be back with our next guest right after this short break. You're watching theCUBE from Cisco Live! Barcelona. We'll be right back. (techno music)

Published Date : Jan 30 2019

SUMMARY :

Brought to you by Cisco and its ecosystem partners. We go out to the events, Thanks for coming on theCUBE. So, we were saying, Meraki's not just about wireless. and the management from the cloud. Well the way you describes it sounds so simple, And so the second piece is the cloud, Y'know, people are concerned about IP cameras off the shelf. and so we have really taken that expertise Is that the primary or only use case for the Meraki camera? And so the camera's not security at this point, but it's Debbie, the robot here as to and the customers can't take advantage of it. and everybody descends on the DevNet Zone. and if that answer means we give you video, the came over with the acquisition? And so if we give you the output of those algorithms Paint a picture as to what you think and now you can start going okay, And so you can actually get past some of the so we'll see what the future brings. Customer reactions to what you guys are showing and that the analytics and the machine learning That's awesome, I love the solution. Stu and I will be back with our next guest

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
GeorgePERSON

0.99+

George BentinckPERSON

0.99+

CiscoORGANIZATION

0.99+

Dave VillantePERSON

0.99+

Stu MinimanPERSON

0.99+

AustraliaLOCATION

0.99+

99%QUANTITY

0.99+

BarcelonaLOCATION

0.99+

five yearsQUANTITY

0.99+

100,000 camerasQUANTITY

0.99+

San FranciscoLOCATION

0.99+

second pieceQUANTITY

0.99+

MerakiORGANIZATION

0.99+

100 camerasQUANTITY

0.99+

eight hoursQUANTITY

0.99+

iPhonesCOMMERCIAL_ITEM

0.99+

eight imagesQUANTITY

0.99+

one cameraQUANTITY

0.99+

StuPERSON

0.99+

two-factorQUANTITY

0.99+

QualcommORGANIZATION

0.99+

AndroidTITLE

0.99+

todayDATE

0.99+

DebbiePERSON

0.98+

Barcelona, SpainLOCATION

0.98+

Whisper SuiteTITLE

0.98+

two componentsQUANTITY

0.98+

DevNet ZoneTITLE

0.98+

two simple thingsQUANTITY

0.98+

billions of connected devicesQUANTITY

0.98+

AlexaTITLE

0.98+

1%QUANTITY

0.97+

EuropeLOCATION

0.97+

40GQUANTITY

0.97+

oneQUANTITY

0.97+

zero configurationQUANTITY

0.96+

North StarORGANIZATION

0.96+

single ownerQUANTITY

0.95+

about 1%QUANTITY

0.94+

millions of connected nodesQUANTITY

0.94+

MQTTTITLE

0.92+

both worldsQUANTITY

0.91+

Cisco MerakiORGANIZATION

0.86+

customerQUANTITY

0.8+

Cisco Live EU 2019EVENT

0.79+

RESTTITLE

0.78+

two years agoDATE

0.78+

billions of pieces of associated metadataQUANTITY

0.76+

one ofQUANTITY

0.72+

overDATE

0.7+

KineticCOMMERCIAL_ITEM

0.67+

one of my favorite topicsQUANTITY

0.67+

last twoDATE

0.64+

MerakiPERSON

0.63+

Cisco LiveORGANIZATION

0.63+

LiveORGANIZATION

0.62+

theCUBEORGANIZATION

0.62+

millionQUANTITY

0.61+

harePERSON

0.6+

half yearsQUANTITY

0.6+

GoogleORGANIZATION

0.59+

MerakiTITLE

0.57+

MerakiCOMMERCIAL_ITEM

0.57+

examplesQUANTITY

0.56+

CiscoEVENT

0.53+

George Bentinck, Cisco Meraki | Cisco Live EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering Cisco Live! Europe. Brought to you by Cisco and its ecosystem partners. >> Welcome back to Cisco Live! We're in Barcelona, Dave Villante and Stu Miniman. You're watching theCUBE, the leader in live tech coverage. We go out to the events, we extract the signal from the noise. George Bentinck is here. He's a product manager for Camera Systems at Cisco Meraki. >> Hi. >> Great to see you. Thanks for coming on theCUBE. >> Thanks very much. >> So, we were saying, Meraki's not just about wireless. It's all about cameras now. Tell us about your role. >> The Meraki camera is relatively new. It's one of the newer products. It came out just over two years ago and it's really embodying what we're about as a business unit at Cisco, which is about simplicity. It's about taking normally complex technology and sort of distilling it so customers can really use it. So what we did with the camera was we spoke to a lot of our customers, listened what they had to say, and they were fed up with the boxes. They don't want these servers, they don't want the recording solutions, they just want to get video. And so we built a camera which has everything inside it. All the video is stored in the camera using the latest solid state storage. And then we did all the analytics and the other sort of cool things people want to do with video in the camera as well. And yet to make it easy to use, it's all managed from the Meraki cloud. So that allows you to scale it from one camera to 100 cameras to 100,000 cameras and yet have nothing else other than the cameras and the management from the cloud. >> Well the way you describes it sounds so simple, but technically, it's a real challenge, what you've described. What were some of the technical challenges of you guys getting there? >> Well, there are sort of two components. There's the device piece and when we look at the device piece, we basically leverage the latest advances in the mobile phone industry. So if you look at the latest iPhones and Android phones, we've taken that high density, highly reliable storage and integrated it into the camera. And then we've also taken the really powerful silicone, so we have Qualcomm Snapdragon system-on-chip in there and that performance allows us to do all the analytics in the camera. And so the second piece is the cloud, the scaling, and the management. And with video, it's lots of big data, which I'm guessing you guys are probably pretty familiar with. And trying to search that and know what's going on and managing its scale can be really painful. But we have a lot of experience with this. Meraki's cloud infrastructure manages millions of connected nodes with billions of connected devices and billions of pieces of associated metadata. This is just like video, so we can reuse a lot of the existing technology we've built in the cloud and now move it to this other field of video and make it much easier to find things. >> And when people talk about, y'know, the camera systems, IoT obviously comes into play and security's a big concern. Y'know, people are concerned about IP cameras off the shelf. Y'know, everybody knows the stories about the passwords where, y'know, they never changed out of the factory and they're the same passwords across the, and so, y'know, presumably, Cisco Meraki, trusted name, and there's a security component here as well. >> Yeah, absolutely. This is actually one of my favorite topics because, unfortunately, not many people ask about it. It's one of those, it's not an issue until it's an issue type of things and we put a lot of work in it. I mean, Cisco has security in its DNA. It's just like part of what we do. And so we did all of the things which I think every camera vendor and IoT vendor should be doing anyway. So that's things like encryption for everything and by default. So all the storage on the camera is encrypted. It's mandatory so you can't turn it off. And there's zero configuration, so when you turn it on, it won't record for a few minutes while it encrypts its storage volume and then you're good to go. We also manage all the certificates on the camera and we also have encrypted management for the camera with things like two-factor authentication and other authentication mechanisms on top of that as well. So it's sort of leaps and bounds ahead of where most of the decision makers are thinking in this space because they're physical security experts. They know about locks and doors and things like that. They're not digital security experts but the Cisco customer and our organization, we know this and so we have really taken that expertise and added it to the camera. >> Yeah, George, security goes hand-in-hand with a lot of the Cisco solutions. Is that the primary or only use case for the Meraki camera? Y'know, I could just see a lot of different uses for this kind of technology. >> It really is very varied and the primary purpose of it is a physical security camera. So being able to make sure that if there's an incident in your store, you have footage of maybe the shoplifting incident or whatever. But, because it's so easy to use, customers are using it for other things. And I think one of the things that's really exciting to me is when I look at the data. And if I look at the data, we know that about 1% of all the video we store is actually viewed by customers. 99% just sits there and does nothing. And so, as we look at how we can provide greater value to customers, it's about taking the advances in things such as machine learning for computer vision, sort of artificial intelligence, and allowing you to quantify things in that data. It allows you to, for example, determine how many people are there and where they go and things like that. And to maybe put it all into context, because one of my favorite examples is a Cisco case study in Australia, where they're using cameras at a connected farm as part of an IoT deployment, to understand sheep grazing behavior and so this camera watches the sheep all day. Now as a human, I don't want to watch the sheep all day, but the camera doesn't care. And so the farmer looks at eight images representing eight hours, which is a heat map of the animals' movement in the field, and they can know where they've been grazing, where they need to move them, where this might be overgrazed. And so the camera's not security at this point, it really is like a sensor for the enterprise. >> Yeah, it's interesting, actually I did a walk through the DevNet Zone and I saw a lot of areas where I think they're leveraging some of your technology. Everything from let's plug in some of the AI to be able to allow me to do some interesting visualizations. What we're doing, there's a magic mirror where you can ask it like an Alexa or Google, but it's Debbie, the robot here as to give you answers of how many people are in a different area here. A camera is no longer just a camera. It's now just an end node connected and there's so many technologies. How do you manage that as a product person where you have the direction, where you put the development? You can't support a million different customer use cases. You want to be able to scale that business. >> Absolutely, I think the North Star always has to be simplistic. If you can't go and deploy it, you can't use it. And so we see a lot of these cool science projects trapped in proof of concept. And they never go into production and the customers can't take advantage of it. So we want to provide incredibly simple, easy out-the-box technology, which allows people to use AI and machine learning, and then we're the experts in that, but we give you industry-standard APIs using REST or MQTT, to allow you to build business applications on it directly or integrate it into Cisco Kinetic, where you can do that using the MQTT interface. >> So, Stu, you reminded me so we're here in the DevNet Zone and right now there's a Meraki takeover. So what happens in the DevNet Zone is they'll pick a topic or a part of Cisco's business unit, right now, it's the Meraki, everyone's running around with Meraki takeover shirts, and everybody descends on the DevNet Zone. So a lot of really cool developer stuff going on here. George, I wanted to ask you about where the data flows. So the data lives at the edge, y'know, wherever you're taking the video. Does it stay there? Given that only 1% is watched, are you just leaving it there, not moving it back into the cloud? Are you sometimes moving it back into the cloud? What's the data flow look like? >> You can think of this interesting sort of mindset, which is let's have a camera where we don't ever want to show you video, we want to give you the answer because video is big, it's heavy. Let's give you the answer and if that answer means we give you video, we give you video. But if we can give you the answer through other forms of information, like a still image, or an aggregate of an image, or metadata from that, then we'll give you that instead. And that means customers can deploy this on cellular networks out in the middle of nowhere and with much fewer constraints than they had in the past. So it really depends but we try and make it as efficient as possible for the person deploying it so they don't have to have a 40G network connection to every camera to make the most of it. >> Yeah, so that would mean that most of it stays-- >> Most of it stays at the edge in the camera. >> Talk a little bit more about the analytics component. Is that sort of Meraki technology the came over with the acquisition? What has Cisco added to that? Maybe speak to that a little bit. >> So the camera is a relatively new product line within the last two and a half years and the Meraki acquisition was, I think we're only like five years or more now down that road, so this is definitely post-acquisition and part of the continued collaboration between various departments at Cisco. What it enables you to do is object detection, object classification, and object tracking. So it's I know there's a thing, I know what that thing is, and I know where that thing goes. And we do it for a high level object class today, which is people. Because if you look at most business problems, they can be broken down into understanding location, dwell times, and characteristics of people. And so if we give you the output of those algorithms as industry-standard APIs, you can build very customized business analytics or business logics. So let me give you a real world example. I have retail customers tell me that one of the common causes of fraud is an employee processing a refund when there's no customer. And so what if you could know there was no customer physically present in front of the electronic point of sale system where the refund is being processed? Well, the camera can tell you. And it's not a specialist analytics camera, it's a security camera you were going to buy anyway, which will also give this insight. And now you know if that refund has a customer at the other side of the till. >> Well, that's awesome. Okay, so that's an interesting use case. What are some of the other ones that you foresee or your customers are pushing you towards? Paint a picture as to what you think this looks like in the future. >> It really is this camera as a sensor so one of the newer things we've added is the ability to have real-time updates of the lights' conditions from the camera, so you can get from the hardware-backed light sensor on the camera the lux levels. And what that means is now you have knowledge of people, where they are, where they go, knowledge of lights, and now you can start going okay, well maybe we adjust the lighting based on these parameters. And so we want to expose more and more data collection from this endpoint, which is the camera, to allow you to make either smarter business decisions or to move to the digital workplace and that's really what we're trying to do in the Meraki offices in San Francisco. >> And do you get to the point or does the client get to the point where they know not only that information you just described but who the person is? >> Yes and no. I think one of the things that I'm definitely advocating caution on is the face recognition technology has a lot of hype, has a lot of excitement, and I get asked about it regularly. And I do test state-of-the-art and a lot of this technology all the time. And I wear hats because I find them fun and entertaining but they're amazingly good at stopping most of these systems from working. And so you can actually get past some of the state-of-the-art face recognition systems with two simple things, a hat and a mobile phone. And you look at your phone as you walk along and they won't catch you. And when I speak to customers, they're expectation of the performance of this technology does not match the investment cost required. So I'm not saying it isn't useful to someone, it's just, for a lot of our customers, when they see what they would get in exchange for such a huge investment, it's not something they are interested in. >> Yeah, the ROI's just really not there today. >> Not today, but the technology's moving very fast so we'll see what the future brings. >> Yeah, great. Alright, George, thanks so much for coming to theCUBE. It was really, really interesting. Leave you the last word. Customer reactions to what you guys are showing at the event? Any kind of new information that you want to share? >> There are some that we'll talk about in the Whisper Suite, which I will leave unsaid, unfortunately. It's just knowing that you can use it so simply and that the analytics and the machine learning come as part of the product at no additional cost. Because this is pretty cutting-edge stuff. You see it in the newspapers, you see it in the headlines and to say I buy this one camera and I can be a coffee shop, a single owner, and I get the same technology as an international coffee organization is pretty compelling and that's what's getting people excited. >> Great and it combines the sensor at the edge and the cloud management so-- >> Best of both worlds. >> That's awesome, I love the solution. Thanks so much for sharing with us. >> Fantastic. >> Alright, keep it right there, everybody. Stu and I will be back with our next guest right after this short break. You're watching theCUBE from Cisco Live! Barcelona. We'll be right back. (techno music)

Published Date : Jan 29 2019

SUMMARY :

Brought to you by Cisco and its ecosystem partners. We go out to the events, Thanks for coming on theCUBE. So, we were saying, Meraki's not just about wireless. and the management from the cloud. Well the way you describes it sounds so simple, And so the second piece is the cloud, Y'know, people are concerned about IP cameras off the shelf. and so we have really taken that expertise Is that the primary or only use case for the Meraki camera? And so the camera's not security at this point, but it's Debbie, the robot here as to and the customers can't take advantage of it. and everybody descends on the DevNet Zone. and if that answer means we give you video, the came over with the acquisition? And so if we give you the output of those algorithms Paint a picture as to what you think and now you can start going okay, And so you can actually get past some of the so we'll see what the future brings. Customer reactions to what you guys are showing and that the analytics and the machine learning That's awesome, I love the solution. Stu and I will be back with our next guest

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
GeorgePERSON

0.99+

George BentinckPERSON

0.99+

CiscoORGANIZATION

0.99+

Dave VillantePERSON

0.99+

Stu MinimanPERSON

0.99+

AustraliaLOCATION

0.99+

99%QUANTITY

0.99+

BarcelonaLOCATION

0.99+

five yearsQUANTITY

0.99+

100,000 camerasQUANTITY

0.99+

San FranciscoLOCATION

0.99+

second pieceQUANTITY

0.99+

MerakiORGANIZATION

0.99+

100 camerasQUANTITY

0.99+

eight hoursQUANTITY

0.99+

iPhonesCOMMERCIAL_ITEM

0.99+

eight imagesQUANTITY

0.99+

one cameraQUANTITY

0.99+

StuPERSON

0.99+

two-factorQUANTITY

0.99+

QualcommORGANIZATION

0.99+

AndroidTITLE

0.99+

todayDATE

0.99+

DebbiePERSON

0.98+

Barcelona, SpainLOCATION

0.98+

Whisper SuiteTITLE

0.98+

two componentsQUANTITY

0.98+

DevNet ZoneTITLE

0.98+

two simple thingsQUANTITY

0.98+

billions of connected devicesQUANTITY

0.98+

AlexaTITLE

0.98+

1%QUANTITY

0.97+

EuropeLOCATION

0.97+

40GQUANTITY

0.97+

oneQUANTITY

0.97+

zero configurationQUANTITY

0.96+

North StarORGANIZATION

0.96+

single ownerQUANTITY

0.95+

about 1%QUANTITY

0.94+

millions of connected nodesQUANTITY

0.94+

MQTTTITLE

0.92+

both worldsQUANTITY

0.91+

Cisco MerakiORGANIZATION

0.86+

customerQUANTITY

0.8+

Cisco Live EU 2019EVENT

0.79+

RESTTITLE

0.78+

two years agoDATE

0.78+

billions of pieces of associated metadataQUANTITY

0.76+

one ofQUANTITY

0.72+

overDATE

0.7+

KineticCOMMERCIAL_ITEM

0.67+

one of my favorite topicsQUANTITY

0.67+

last twoDATE

0.64+

MerakiPERSON

0.63+

Cisco LiveORGANIZATION

0.63+

LiveORGANIZATION

0.62+

theCUBEORGANIZATION

0.62+

millionQUANTITY

0.61+

harePERSON

0.6+

half yearsQUANTITY

0.6+

GoogleORGANIZATION

0.59+

MerakiTITLE

0.57+

MerakiCOMMERCIAL_ITEM

0.57+

examplesQUANTITY

0.56+

CiscoEVENT

0.53+

Venkat Venkataramani, Rockset & Jerry Chen, Greylock | CUBEConversation, November 2018


 

[Music] we're on welcome to the special cube conversation we're here with some breaking news we got some startup investment news here in the Q studios palo alto I'm John for your host here at Jerry Chen partnered Greylock and the CEO of rock said Venkat Venkat Rahmani welcome to the cube you guys announcing hot news today series a and seed and Series A funding 21 million dollars for your company congratulations thank you Roxette is a data company jerry great this is one of your nest you kept this secret forever it was John was really hard you know over the past two years every time I sat in this seat I'd say and one more thing you know I knew that part of the advantage was rocks I was a special company and we were waiting to announce it and that's right time so it's been about two and half years in the making I gotta give you credit Jerry I just want to say to everyone I try to get the secrets out of you so hard you are so strong and keeping a secret I said you got this hot startup this was two years ago yeah I think the probe from every different angle you can keep it secrets all the entrepreneurs out there Jerry Chen's your guide alright so congratulations let's talk about the startup so you guys got 21 million dollars how much was the seed round this is the series a the seed was three million dollars both Greylock and Sequoia participating and the series a was eighteen point five all right so other investors Jerry who else was in on this I just the two firms former beginning so we teamed up with their French from Sequoia and the seed round and then we over the course of a year and half like this is great we're super excited about the team bank had Andrew bhai belt we love the opportunity and so Mike for an office coin I said let's do this around together and we leaned in and we did it around alright so let's just get into the other side I'm gonna read your your about section of the press release roxette's visions to Korea to build the data-driven future provide a service search and analytics engine make it easy to go from data to applications essentially building a sequel layer on top of the cloud for massive data ingestion I want to jump into it but this is a hot area not a lot of people are doing this at the level you guys are now and what your vision is did this come from what's your background how did you get here did you wake up one Wednesday I'm gonna build this awesome contraction layer and build an operating system around data make this thing scalable how did it all start I think it all started from like just a realization that you know turning useful data to useful apps just requires lots of like hurdles right you have to first figure out what format the data is in you got to prepare the data you gotta find the right specialized you know data database or data management system to load it in and it often requires like weeks to months before useful data becomes useful apps right and finally you know after I you know my tenure at Facebook when I left the first thing I did was I was just talking you know talking to a lot of people with real-world companies and reload problems and I started walking away from moremore of them thinking that this is way too complex I think the the format in which a lot of the data is coming in is not the format in which traditional sequel based databases are optimized for and they were built for like transaction processing and analytical processing not for like real-time streams of data but there's JSON or you know you know parque or or any of these other formats that are very very popular and more and more data is getting produced by one set of applications and getting consumed by other applications but what we saw it was what is this how can we make it simpler why do we need all this complexity right what is a simple what is the most simple and most powerful system we can build and pulled in the hands of as many people as possible and so we very sort of naturally relate to developers and data scientists people who use code on data that's just like you know kind of like our past lives and when we thought about it well why don't we just index the data you know traditional databases were built when every byte mattered every byte of memory every byte on disk now in the cloud the economics are completely different right so when you rethink those things with fresh perspective what we said was like what if we just get all of this data index it in a format where we can directly run very very fast sequel on it how simple would the world be how much faster can people go from ideas to do experiments and experiments to production applications and how do we make it all faster also in the cloud right so that's really the genesis of it well the real inspiration came from actually talking to a lot of people with real-world problems and then figuring out what is the simplest most powerful thing we can build well I want to get to the whole complexity conversation cuz we were talking before we came on camera here about how complexity can kill and why and more complexity on top of more complexity I think there's a simplicity angle here that's interesting but I want to get back to your background of Facebook and I want to tell a story you've been there eight years but you were there during a very interesting time during that time in history Facebook was I think the first generation we've taught us on the cube all the time about how they had to build their own infrastructure at scale while they're scaling so they were literally blitzscaling as reid hoffman and would say and you guys do it the Greylock coverage unlike other companies at scale eBay Microsoft they had old-school one dotto Technology databases Facebook had to kind of you know break glass you know and build the DevOps out from generation one from scratch correct it was a fantastic experience I think when I started in 2007 Facebook had about 40 million monthly actives and I had the privilege of working with some of the best people and a lot of the problems we were very quickly around 2008 when I went and said hey I want to do some infrastructure stuff the mandate that was given to me and my team was we've been very good at taking open source software and customizing it to our needs what would infrastructure built by Facebook for Facebook look like and we then went into this journey that ended up being building the online data infrastructure at Facebook by the time I left the collectively these systems were surveying 5 plus billion requests per second across 25 plus geographical clusters and half a dozen data centers I think at that time and now there's more and the system continues to chug along so it was just a fantastic experience I think all the traditional ways of problem solving just would not work at that scale and when the user base was doubling early in the early days every four months every five months yeah and what's interesting you know you're young and here at the front lines but you're kind of the frog in boiling water and that's because you are you were at that time building the power DevOps equation automating scale growth everything's happening at once you guys were right there building it now fast forward today everyone who's got an enterprise it's it wants to get there they don't they're not Facebook they don't have this engineering staff they want to get scale they see the cloud clearly the value property has got clear visibility but the economics behind who they hire so they have all this data and they get more increasing amount of data they want to be like Facebook but can't be like Facebook so they have to build their own solutions and I think this is where a lot of the other vendors have to rebuild this cherry I want to ask you because you've been looking at a lot of investments you've seen that old guard kind of like recycled database solutions coming to the market you've seen some stuff in open source but nothing unique what was it about Roxette that when you first talk to them that but you saw that this is going to be vectoring into a trend that was going to be a perfect storm yeah I think you nailed it John historic when we have this new problems like how to use data the first thing trying to do you saw with the old technology Oh existing data warehouses akin databases okay that doesn't work and then the next thing you do is like okay you know through my investments in docker and B and the boards or a cloud aerosol firsthand you need kind of this rise of stateless apps but not stateless databases right and then I through the cloud area and a bunch of companies that I saw has an investor every pitch I saw for two or three years trying to solve this data and state problem the cloud dudes add more boxes right here's here's a box database or s3 let me solve it with like Oh another database elastic or Kafka or Mongo or you know Apache arrow and it just got like a mess because if almond Enterprise IT shop there's no way can I have the skill the developers to manage this like as Beckett like to call it Rube Goldberg machination of data pipelines and you know I first met Venkat three years ago and one of the conversations was you know complexity you can't solve complex with more complexity you can only solve complexity with simplicity and Roxette and the vision they had was the first company said you know what let's remove boxes and their design principle was not adding another boxes all a problem but how to remove boxes to solve this problem and you know he and I got along with that vision and excited from the beginning stood to leave the scene ah sure let's go back with you guys now I got the funding so use a couple stealth years to with three million which is good a small team and that goes a long way it certainly 2021 total 18 fresh money it's gonna help you guys build out the team and crank whatnot get that later but what did you guys do in the in those two years where are you now sequel obviously is lingua franca cool of sequel but all this data is doesn't need to be scheming up and built out so were you guys that now so since raising the seed I think we've done a lot of R&D I think we fundamentally believe traditional data management systems that have been ported over to run on cloud Williams does not make them cloud databases I think the cloud economics is fundamentally different I think we're bringing this just scratching the surface of what is possible the cloud economics is you know it's like a simple realization that whether you rent 100 CPUs for one minute or or one CPU 400 minutes it's cost you exactly the same so then if you really ask why is any of my query is slow right I think because your software sucks right so basically what I'm trying to say is if you can actually paralyze that and if you can really exploit the fluidity of the hardware it's not easy it's very very difficult very very challenging but it's possible I think it's not impossible and if you can actually build software ground-up natively in the cloud that simplifies a lot of this stuff and and understands the economics are different now and it's system software at the end of the day is how do I get the best you know performance and efficiency for the price being paid right and the you know really building you know that is really what I think took a lot of time for us we have built not only a ground-up indexing technique that can take raw data without knowing the shape of the data we can turn that and index it in ways and store them maybe in more than one way since for certain types of data and then also have built a distributed sequel engine that is cloud native built by ground up in the cloud and C++ and like really high performance you know technologies and we can actually run distributor sequel on this raw data very very fast my god and this is why I brought up your background on Facebook I think there's a parallel there from the ground this ground up kind of philosophy if you think of sequel as like a Google search results search you know keyword it's the keyword for machines in most database worlds that is the standard so you can just use that as your interface Christ and then you using the cloud goodness to optimize for more of the results crafty index is that right correct yes you can ask your question if your app if you know how to see you sequel you know how to use Roxette if you can frame your the question that you're asking in order to answer an API request it could be a micro service that you're building it could be a recommendation engine that you're that you're building or you could you could have recommendations you know trying to personalize it on top of real time data any of those kinds of applications where it's a it's a service that you're building an application you're building if you can represent ask a question in sequel we will make sure it's fast all right let's get into the how you guys see the application development market because the developers will other winners here end of the day so when we were covering the Hadoop ecosystem you know from the cloud era days and now the important work at the Claire merger that kind of consolidates that kind of open source pool the big complaint that we used to hear from practitioners was its time consuming Talent but we used to kind of get down and dirty the questions and ask people how they're using Hadoop and we had two answers we stood up Hadoop we were running Hadoop in our company and then that was one answer the other answer was we're using Hadoop for blank there was not a lot of those responses in other words there has to be a reason why you're using it not just standing it up and then the Hadoop had the problem of the world grew really fast who's gonna run it yeah management of it Nukem noose new things came in so became complex overnight it kind of had took on cat hair on it basically as we would say so how do you guys see your solution being used so how do you solve that what we're running Roxette oh okay that's great for what what did developers use Roxette for so there are two big personas that that we currently have as users right there are developers and data scientists people who program on data right - you know on one hand developers want to build applications that are making either an existing application better it could be a micro service that you know I want to personalize the recommendations they generated online I mean offline but it's served online but whether it is somebody you know asking shopping for cars on San Francisco was the shopping you know was the shopping for cars in Colorado we can't show the same recommendations based on how do we basically personalize it so personalization IOT these kinds of applications developers love that because often what what you need to do is you need to combine real-time streams coming in semi structured format with structured data and you have no no sequel type of systems that are very good at semi structured data but they don't give you joins they don't give you a full sequel and then traditional sequel systems are a little bit cumbersome if you think about it I new elasticsearch but you can do joins and much more complex correct exactly built for the cloud and with full feature sequel and joins that's how that's the best way to think about it and that's how developers you said on the other side because its sequel now all of a sudden did you know data scientist also loved it they had they want to run a lot of experiments they are the sitting on a lot of data they want to play with it run experiments test hypotheses before they say all right I got something here I found a pattern that I don't know I know I had before which is why when you go and try to stand up traditional database infrastructure they don't know how what indexes to build how do i optimize it so that I can ask you know interrogatory and all that complexity away from those people right from basically provisioning a sandbox if you will almost like a perpetual sandbox of data correct except it's server less so like you don't you never think about you know how many SSDs do I need how many RAM do I need how many hosts do I need what configure your programmable data yes exactly so you start so DevOps for data is finally the interview I've been waiting for I've been saying it for years when's is gonna be a data DevOps so this is kind of what you're thinking right exactly so you know you give us literally you you log in to rocks at you give us read permissions to battle your data sitting in any cloud and more and more data sources we're adding support every day and we will automatically cloudburst will automatically interested we will schematize the data and we will give you very very fast sequel over rest so if you know how to use REST API and if you know how to use sequel you'd literally need don't need to think about anything about Hardware anything about standing up any servers shards you know reindex and restarting none of that you just go from here is a bunch of data here are my questions here is the app I want to build you know like you should be bottleneck by your career and imagination not by what can my data employers give me through a use case real quick island anyway the Jarius more the structural and architectural questions around the marketplace take me through a use case I'm a developer what's the low-hanging fruit use case how would I engage with you guys yeah do I just you just ingest I just point data at you how do you see your market developing from the customer standpoint cool I'll take one concrete example from a from a developer right from somebody we're working with right now so they have right now offline recommendations right or every night they generate like if you're looking for this car or or this particular item in e-commerce these are the other things are related well they show the same thing if you're looking at let's say a car this is the five cars that are closely related this car and they show that no matter who's browsing well you might have clicked on blue cars the 17 out of 18 clicks you should be showing blue cars to them right you may be logging in from San Francisco I may be logging in from like Colorado we may be looking for different kinds of cars with different you know four-wheel drives and other options and whatnot there's so much information that's available that you can you're actually by personalizing it you're adding creating more value to your customer we make it very easy you know live stream all the click stream beta to rock set and you can join that with all the assets that you have whether it's product data user data past transaction history and now if you can represent the joins or whatever personalization that you want to find in real time as a sequel statement you can build that personalization engine on top of Roxanne this is one one category you're putting sequel code into the kind of the workflow of the code saying okay when someone gets down to these kinds of interactions this is the sequel query because it's a blue car kind of go down right so like tell me all the recent cars that this person liked what color is this and I want to like okay here's a set of candidate recommendations I have how do I start it what are the four five what are the top five I want to show and then on the data science use case there's a you know somebody building a market intelligence application they get a lot of third-party data sets it's periodic dumps of huge blocks of JSON they want to combine that with you know data that they have internally within the enterprise to see you know which customers are engaging with them who are the persons churning out what are they doing and they in the in the market and trying to bring they bring it all together how do you do that when you how do you join a sequel table with a with a JSON third party dumb and especially for coming and like in the real-time or periodic in a week or week month or one month literally you can you know what took this particular firm that we're working with this is an investment firm trying to do market intelligence it used age to run ad hoc scripts to turn all of this data into a useful Excel report and that used to take them three to four weeks and you know two people working on one person working part time they did the same thing in two days and Rock said I want to get to back to microservices in a minute and hold that thought I won't go to Jerry if you want to get to the business model question that landscape because micro services were all the world's going to Inc so competition business model I'll see you gets are funded so they said love the thing about monetization to my stay on the core value proposition in light of the red hat being bought by by IBM had a tweet out there kind of critical of the transactions just in terms of you know people talk about IBM's betting the company on RedHat Mike my tweet was don't get your reaction will and tie it to the visible here is that it seems like they're going to macro services not micro services and that the world is the stack is changing so when IBM sell out their stack you have old-school stack thinkers and then you have new-school stack thinkers where cloud completely changes the nature of the stack in this case this venture kind of is an indication that if you think differently the stack is not just a full stack this way it's this way in this way yeah as we've been saying on the queue for a couple of years so you get the old guard trying to get a position and open source all these things but the stacks changing these guys have the cloud out there as a tailwind which is a good thing how do you see the business model evolving do you guys talk about that in terms of you can hey just try to find your groove swing get customers don't worry about the monetization how many charging so how's that how do you guys talk about the business model is it specific and you guys have clear visibility on that what's the story on that I mean I think yeah I always tell Bank had this kind of three hurdles you know you have something worthwhile one well someone listen to your pitch right people are busy you like hey John you get pitched a hundred times a day by startups right will you take 30 seconds listen to it that's hurdle one her will to is we spend time hands on keyboards playing around with the code and step threes will they write you a check and I as a as a enter price offered investor in a former operator we don't overly folks in the revenue model now I think writing a check the biz model just means you're creating value and I think people write you checking screening value but you know the feedback I always give Venkat and the founders work but don't overthink pricing if the first 10 customers just create value like solve their problems make them love the product get them using it and then the monetization the actual specifics the business model you know we'll figure out down the line I mean it's a cloud service it's you know service tactically to many servers in that sentence but it's um it's to your point spore on the cloud the one that economists are good so if it works it's gonna be profitable yeah it's born the cloud multi-cloud right across whatever cloud I wanna be in it's it's the way application architects going right you don't you don't care about VMs you don't care about containers you just care about hey here's my data I just want to query it and in the past you us developer he had to make compromises if I wanted joins in sequel queries I had to use like postgrads if I won like document database and he's like Mongo if I wanted index how to use like elastic and so either one I had to pick one or two I had to use all three you know and and neither world was great and then all three of those products have different business models and with rocks head you actually don't need to make choices right yes this is classic Greylock investment you got sequoia same way go out get a position in the market don't overthink the revenue model you'll funded for grow the company let's scale a little bit and figure out that blitzscale moment I believe there's probably the ethos that you guys have here one thing I would add in the business model discussion is that we're not optimized to sell latte machines who are selling coffee by the cup right so like that's really what I mean we want to put it in the hands of as many people as possible and make sure we are useful to them right and I think that is what we're obsessed about where's the search is a good proxy I mean that's they did well that way and rocks it's free to get started right so right now they go to rocks calm get started for free and just start and play around with it yeah yeah I mean I think you guys hit the nail on the head on this whole kind of data addressability I've been talking about it for years making it part of the development process programming data whatever buzzword comes out of it I think the trend is it looks a lot like that depo DevOps ethos of automation scale you get to value quickly not over thinking it the value proposition and let it organically become part of the operation yeah I think we we the internal KPIs we track are like how many users and applications are using us on a daily and weekly basis this is what we obsess about I think we say like this is what excellence looks like and we pursue that the logos in the revenue would would you know would be a second-order effect yeah and it's could you build that core kernels this classic classic build up so I asked about the multi cloud you mention that earlier I want to get your thoughts on kubernetes obviously there's a lot of great projects going on and CN CF around is do and this new state problem that you're solving in rest you know stateless has been an easy solution VP is but API 2.0 is about state right so that's kind of happening now what's your view on kubernetes why is it going to be impactful if someone asked you you know at a party hey thank you why is what's all this kubernetes what party going yeah I mean all we do is talk about kubernetes and no operating systems yeah hand out candy last night know we're huge fans of communities and docker in fact in the entire rock set you know back-end is built on top of that so we run an AWS but with the inside that like we run or you know their entire infrastructure in one kubernetes cluster and you know that is something that I think is here to stay I think this is the the the programmability of it I think the DevOps automation that comes with kubernetes I think all of that is just like this is what people are going to start taking why is it why is it important in your mind the orchestration because of the statement what's the let's see why is it so important it's a lot of people are jazzed about it I've been you know what's what's the key thing I think I think it makes your entire infrastructure program all right I think it turns you know every aspect of you know for example yeah I'll take it I'll take a concrete example we wanted to build this infrastructure so that when somebody points that like it's a 10 terabytes of data we want to very quickly Auto scale that out and be able to grow this this cluster as quickly as possible and it's like this fluidity of the hardware that I'm talking about and it needs to happen or two levels it's one you know micro service that is ingesting all the data that needs to sort of burst out and also at the second level we need to be able to grow more more nodes that we we add to this cluster and so the programmability nature of this like just imagine without an abstraction like kubernetes and docker and containers and pods imagine doing this right you are building a you know a lots and lots of metrics and monitoring and you're trying to build the state machine of like what is my desired state in terms of server utilization and what is the observed state and everything is so ad hoc and very complicated and kubernetes makes this whole thing programmable so I think it's now a lot of the automation that we do in terms of called bursting and whatnot when I say clock you know it's something we do take advantage of that with respect to stateful services I think it's still early days so our our position on my partner it's a lot harder so our position on that is continue to use communities and continue to make things as stateless as possible and send your real-time streams to a service like Roxette not necessarily that pick something like that very separate state and keep it in a backhand that is very much suited to your micro service and the business logic that needs to live there continue should continue to live there but if you can take a very hard to scale stateful service split it into two and have some kind of an indexing system Roxette is one that you know we are proud of building and have your stateless communal application logic and continue to have that you know maybe use kubernetes scale it in lambdas you know for all we care but you can take something that is very hard to you know manage and scale today break it into the stateful part in the stateless part and the serval is back in like like Roxette will will sort of hopefully give you a huge boost in being able to go from you know an experiment to okay I'm gonna roll it out to a smaller you know set of audience to like I want to do a worldwide you know you can do all of that without having to worry about and think about the alternative if you did it the old way yeah yeah and that's like talent you'd need it would be a wired that's spaghetti everywhere so Jerry this is a kubernetes is really kind of a benefit off your your investment in docker you must be proud and that the industry has gone to a whole nother level because containers really enable all this correct yeah so that this is where this is an example where I think clouds gonna go to a whole nother level that no one's seen before these kinds of opportunities that you're investing in so I got to ask you directly as you're looking at them as a as a knowledgeable cloud guy as well as an investor cloud changes things how does that change how is cloud native and these kinds of new opportunities that have built from the ground up change a company's network network security application era formants because certainly this is a game changer so those are the three areas I see a lot of impact compute check storage check networking early days you know it's it's it's funny it gosh seems so long ago yet so briefly when you know I first talked five years ago when I first met mayor of Essen or docker and it was from beginning people like okay yes stateless applications but stateful container stateless apps and then for the next three or four years we saw a bunch of companies like how do I handle state in a docker based application and lots of stars have tried and is the wrong approach the right approach is what these guys have cracked just suffered the state from the application those are app stateless containers store your state on an indexing layer like rock set that's hopefully one of the better ways saw the problem but as you kind of under one problem and solve it with something like rock set to your point awesome like networking issue because all of a sudden like I think service mesh and like it's do and costs or kind of the technologies people talk about because as these micro services come up and down they're pretty dynamic and partially as a developer I don't want to care about that yeah right that's the value like a Roxanna service but still as they operate of the cloud or the IT person other side of the proverbial curtain I probably care security I matters because also India's flowing from multiple locations multiple destinations using all these API and then you have kind of compliance like you know GDP are making security and privacy super important right now so that's an area that we think a lot about as investors so can I program that into Roxette what about to build that in my nap app natively leveraging the Roxette abstraction checking what's the key learning feature it's just a I'd say I'm a prime agent Ariane gdpr hey you know what I got a website and social network out in London and Europe and I got this gdpr nightmare I don't we don't have a great answer for GDP are we are we're not a controller of the data right we're just a processor so I think for GDP are I think there is still the controller still has to do a lot of work to be compliant with GDP are I think the way we look at it is like we never forget that this ultimately is going to be adding value to enterprises so from day one we you can't store data and Roxette without encrypting it like it's just the on you know on by default the only way and all transit is all or HTTPS and SSL and so we never freaked out that we're building for enterprises and so we've baked in for enterprise customers if they can bring in their own custom encryption key and so everything will be encrypted the key never leaves their AWS account if it's a you know kms key support private VP ceilings like we have a plethora of you know security features so that the the control of the data is still with the data controller with this which is our customer but we will be the the processor and a lot of the time we can process it using their encryption keys if I'm gonna build a GDP our sleeves no security solution I would probably build on Roxette and some of the early developers take around rocks at our security companies that are trying to track we're all ideas coming and going so there the processor and then one of the companies we hope to enable with Roxette is another generation security and privacy companies that in the past had a hard time tracking all this data so I can build on top of rocks crack okay so you can built you can build security a gbbr solution on top rock set because rock set gives you the power to process all the data index all the data and then so one of the early developers you know stolen stealth is they looking at the data flows coming and go he's using them and they'll apply the context right they'll say oh this is your credit card the Social Security is your birthday excetera your favorite colors and they'll apply that but I think to your point it's game-changing like not just Roxette but all the stuff in cloud and as an investor we see a whole generation of new companies either a to make things better or B to solve this new category problems like pricing the cloud and I think the future is pretty bright for both great founders and investors because there's just a bunch of great new companies and it's building up from the ground up this is the thing I brought my mother's red hat IBM thing is that's not the answer at the root level I feel like right now I'd be on I I think's fastenings but it's almost like you're almost doubling down to your your comment on the old stack right it's almost a double down the old stack versus an aggressive bet on kind of what a cloud native stack will look like you know I wish both companies are great people I was doing the best and stuff do well with I think I'd like to do great with OpenStack but again their product company as the people that happen to contribute to open source I think was a great move for both companies but it doesn't mean that that's not we can't do well without a new stack doing well and I think you're gonna see this world where we have to your point oh these old stacks but then a category of new stack companies that are being born in the cloud they're just fun to watch it all it's all big all big investments that would be blitzscaling criteria all start out organically on a wave in a market that has problems yeah and that's growing so I think cloud native ground-up kind of clean sheet of paper that's the new you know I say you're just got a pic pick up you got to pick the right way if I'm oh it's gotta pick a big wave big wave is not a bad wave to be on right now and it's at the data way that's part of the cloud cracked and it's it's been growing bigger it's it's arguably bigger than IBM is bigger than Red Hat is bigger than most of the companies out there and I think that's the right way to bet on it so you're gonna pick the next way that's kind of cloud native-born the cloud infrastructure that is still early days and companies are writing that way we're gonna do well and so I'm pretty excited there's a lot of opportunities certainly this whole idea that you know this change is coming societal change you know what's going on mission based companies from whether it's the NGO to full scale or all the applications that the clouds can enable from data privacy your wearables or cars or health thing we're seeing it every single day I'm pretty sad if you took amazon's revenue and then edit edit and it's not revenue the whole ready you look at there a dybbuk loud revenue so there's like 20 billion run which you know Microsoft had bundles in a lot of their office stuff as well if you took amazon's customers to dinner in the marketplace and took their revenue there clearly would be never for sure if item binds by a long shot so they don't count that revenue and that's a big factor if you look at whoever can build these enabling markets right now there's gonna be a few few big ones I think coming on they're gonna do well so I think this is a good opportunity of gradual ations thank you thank you at 21 million dollars final question before we go what are you gonna spend it on we're gonna spend it on our go-to-market strategy and hiding amazing people as many as we can get good good answer didn't say launch party that I'm saying right yeah okay we're here Rex at SIA and Joe's Jerry Chen cube cube royalty number two all-time on our Keeble um nine list partner and Greylock guy states were coming in I'm Jeffrey thanks for watching this special cube conversation [Music]

Published Date : Nov 1 2018

SUMMARY :

the enterprise to see you know which

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
San FranciscoLOCATION

0.99+

amazonORGANIZATION

0.99+

2007DATE

0.99+

five carsQUANTITY

0.99+

Jerry ChenPERSON

0.99+

three million dollarsQUANTITY

0.99+

10 terabytesQUANTITY

0.99+

30 secondsQUANTITY

0.99+

ColoradoLOCATION

0.99+

EuropeLOCATION

0.99+

LondonLOCATION

0.99+

one minuteQUANTITY

0.99+

twoQUANTITY

0.99+

21 million dollarsQUANTITY

0.99+

IBMORGANIZATION

0.99+

November 2018DATE

0.99+

FacebookORGANIZATION

0.99+

JerryPERSON

0.99+

17QUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

two peopleQUANTITY

0.99+

2021DATE

0.99+

AWSORGANIZATION

0.99+

second levelQUANTITY

0.99+

ExcelTITLE

0.99+

MikePERSON

0.99+

three millionQUANTITY

0.99+

eight yearsQUANTITY

0.99+

reid hoffmanPERSON

0.99+

RoxetteORGANIZATION

0.99+

five years agoDATE

0.99+

Rube GoldbergPERSON

0.99+

three yearsQUANTITY

0.99+

two answersQUANTITY

0.99+

two levelsQUANTITY

0.99+

threeQUANTITY

0.99+

both companiesQUANTITY

0.99+

RoxannaORGANIZATION

0.99+

RockPERSON

0.99+

C++TITLE

0.99+

two big personasQUANTITY

0.99+

21 million dollarsQUANTITY

0.99+

18 clicksQUANTITY

0.99+

HadoopTITLE

0.99+

oneQUANTITY

0.99+

SequoiaORGANIZATION

0.98+

Venkat VenkataramaniPERSON

0.98+

three years agoDATE

0.98+

JeffreyPERSON

0.98+

JohnPERSON

0.98+

two firmsQUANTITY

0.98+

eBayORGANIZATION

0.98+

one personQUANTITY

0.98+

VenkatORGANIZATION

0.98+

100 CPUsQUANTITY

0.98+

AndrewPERSON

0.98+

25 plus geographical clustersQUANTITY

0.98+

todayDATE

0.98+

half a dozen data centersQUANTITY

0.98+

four weeksQUANTITY

0.98+

both companiesQUANTITY

0.98+

one monthQUANTITY

0.97+

two years agoDATE

0.97+

400 minutesQUANTITY

0.97+

more than one wayQUANTITY

0.97+

one answerQUANTITY

0.97+

two daysQUANTITY

0.96+

SIAORGANIZATION

0.96+

John Thomas, IBM | Change the Game: Winning With AI


 

(upbeat music) >> Live from Time Square in New York City, it's The Cube. Covering IBM's change the game, winning with AI. Brought to you by IBM. >> Hi everybody, welcome back to The Big Apple. My name is Dave Vellante. We're here in the Theater District at The Westin Hotel covering a Special Cube event. IBM's got a big event today and tonight, if we can pan here to this pop-up. Change the game: winning with AI. So IBM has got an event here at The Westin, The Tide at Terminal 5 which is right up the Westside Highway. Go to IBM.com/winwithAI. Register, you can watch it online, or if you're in the city come down and see us, we'll be there. Uh, we have a bunch of customers will be there. We had Rob Thomas on earlier, he's kind of the host of the event. IBM does these events periodically throughout the year. They gather customers, they put forth some thought leadership, talk about some hard dues. So, we're very excited to have John Thomas here, he's a distinguished engineer and Director of IBM Analytics, long time Cube alum, great to see you again John >> Same here. Thanks for coming on. >> Great to have you. >> So we just heard a great case study with Niagara Bottling around the Data Science Elite Team, that's something that you've been involved in, and we're going to get into that. But give us the update since we last talked, what have you been up to?? >> Sure sure. So we're living and breathing data science these days. So the Data Science Elite Team, we are a team of practitioners. We actually work collaboratively with clients. And I stress on the word collaboratively because we're not there to just go do some work for a client. We actually sit down, expect the client to put their team to work with our team, and we build AI solutions together. Scope use cases, but sort of you know, expose them to expertise, tools, techniques, and do this together, right. And we've been very busy, (laughs) I can tell you that. You know it has been a lot of travel around the world. A lot of interest in the program. And engagements that bring us very interesting use cases. You know, use cases that you would expect to see, use cases that are hmmm, I had not thought of a use case like that. You know, but it's been an interesting journey in the last six, eight months now. >> And these are pretty small, agile teams. >> Sometimes people >> Yes. use tiger teams and they're two to three pizza teams, right? >> Yeah. And my understanding is you bring some number of resources that's called two three data scientists, >> Yes and the customer matches that resource, right? >> Exactly. That's the prerequisite. >> That is the prerequisite, because we're not there to just do the work for the client. We want to do this in a collaborative fashion, right. So, the customers Data Science Team is learning from us, we are working with them hand in hand to build a solution out. >> And that's got to resonate well with customers. >> Absolutely I mean so often the services business is like kind of, customers will say well I don't want to keep going back to a company to get these services >> Right, right. I want, teach me how to fish and that's exactly >> That's exactly! >> I was going to use that phrase. That's exactly what we do, that's exactly. So at the end of the two or three month period, when IBM leaves, my team leaves, you know, the client, the customer knows what the tools are, what the techniques are, what to watch out for, what are success criteria, they have a good handle of that. >> So we heard about the Niagara Bottling use case, which was a pretty narrow, >> Mm-hmm. How can we optimize the use of the plastic wrapping, save some money there, but at the same time maintain stability. >> Ya. You know very, quite a narrow in this case. >> Yes, yes. What are some of the other use cases? >> Yeah that's a very, like you said, a narrow one. But there are some use cases that span industries, that cut across different domains. I think I may have mentioned this on one of our previous discussions, Dave. You know customer interactions, trying to improve customer interactions is something that cuts across industry, right. Now that can be across different channels. One of the most prominent channels is a call center, I think we have talked about this previously. You know I hate calling into a call center (laughter) because I don't know Yeah, yeah. What kind of support I'm going to get. But, what if you could equip the call center agents to provide consistent service to the caller, and handle the calls in the best appropriate way. Reducing costs on the business side because call handling is expensive. And eventually lead up to can I even avoid the call, through insights on why the call is coming in in the first place. So this use case cuts across industry. Any enterprise that has got a call center is doing this. So we are looking at can we apply machine-learning techniques to understand dominant topics in the conversation. Once we understand with these have with unsupervised techniques, once we understand dominant topics in the conversation, can we drill into that and understand what are the intents, and does the intent change as the conversation progress? So you know I'm calling someone, it starts off with pleasantries, it then goes into weather, how are the kids doing? You know, complain about life in general. But then you get to something of substance why the person was calling in the first place. And then you may think that is the intent of the conversation, but you find that as the conversation progresses, the intent might actually change. And can you understand that real time? Can you understand the reasons behind the call, so that you could take proactive steps to maybe avoid the call coming in at the first place? This use case Dave, you know we are seeing so much interest in this use case. Because call centers are a big cost to most enterprises. >> Let's double down on that because I want to understand this. So you basically doing. So every time you call a call center this call may be recorded, >> (laughter) Yeah. For quality of service. >> Yeah. So you're recording the calls maybe using MLP to transcribe those calls. >> MLP is just the first step, >> Right. so you're absolutely right, when a calls come in there's already call recording systems in place. We're not getting into that space, right. So call recording systems record the voice calls. So often in offline batch mode you can take these millions of calls, pass it through a speech-to-text mechanism, which produces a text equivalent of the voice recordings. Then what we do is we apply unsupervised machine learning, and clustering, and topic-modeling techniques against it to understand what are the dominant topics in this conversation. >> You do kind of an entity extraction of those topics. >> Exactly, exactly, exactly. >> Then we find what is the most relevant, what are the relevant ones, what is the relevancy of topics in a particular conversation. That's not enough, that is just step two, if you will. Then you have to, we build what is called an intent hierarchy. So this is at top most level will be let's say payments, the call is about payments. But what about payments, right? Is it an intent to make a late payment? Or is the intent to avoid the payment or contest a payment? Or is the intent to structure a different payment mechanism? So can you get down to that level of detail? Then comes a further level of detail which is the reason that is tied to this intent. What is a reason for a late payment? Is it a job loss or job change? Is it because they are just not happy with the charges that I have coming? What is a reason? And the reason can be pretty complex, right? It may not be in the immediate vicinity of the snippet of conversation itself. So you got to go find out what the reason is and see if you can match it to this particular intent. So multiple steps off the journey, and eventually what we want to do is so we do our offers in an offline batch mode, and we are building a series of classifiers instead of classifiers. But eventually we want to get this to real time action. So think of this, if you have machine learning models, supervised models that can predict the intent, the reasons, et cetera, you can have them deployed operationalize them, so that when a call comes in real time, you can screen it in real time, do the speech to text, you can do this pass it to the supervise models that have been deployed, and the model fires and comes back and says this is the intent, take some action or guide the agent to take some action real time. >> Based on some automated discussion, so tell me what you're calling about, that kind of thing, >> Right. Is that right? >> So it's probably even gone past tell me what you're calling about. So it could be the conversation has begun to get into you know, I'm going through a tough time, my spouse had a job change. You know that is itself an indicator of some other reasons, and can that be used to prompt the CSR >> Ah, to take some action >> Ah, oh case. appropriate to the conversation. >> So I'm not talking to a machine, at first >> no no I'm talking to a human. >> Still talking to human. >> And then real time feedback to that human >> Exactly, exactly. is a good example of >> Exactly. human augmentation. >> Exactly, exactly. I wanted to go back and to process a little bit in terms of the model building. Are there humans involved in calibrating the model? >> There has to be. Yeah, there has to be. So you know, for all the hype in the industry, (laughter) you still need a (laughter). You know what it is is you need expertise to look at what these models produce, right. Because if you think about it, machine learning algorithms don't by themselves have an understanding of the domain. They are you know either statistical or similar in nature, so somebody has to marry the statistical observations with the domain expertise. So humans are definitely involved in the building of these models and claiming of these models. >> Okay. >> (inaudible). So that's who you got math, you got stats, you got some coding involved, and you >> Absolutely got humans are the last mile >> Absolutely. to really bring that >> Absolutely. expertise. And then in terms of operationalizing it, how does that actually get done? What tech behind that? >> Ah, yeah. >> It's a very good question, Dave. You build models, and what good are they if they stay inside your laptop, you know, they don't go anywhere. What you need to do is, I use a phrase, weave these models in your business processes and your applications. So you need a way to deploy these models. The models should be consumable from your business processes. Now it could be a Rest API Call could be a model. In some cases a Rest API Call is not sufficient, the latency is too high. Maybe you've got embed that model right into where your application is running. You know you've got data on a mainframe. A credit card transaction comes in, and the authorization for the credit card is happening in a four millisecond window on the mainframe on all, not all, but you know CICS COBOL Code. I don't have the time to make a Rest API call outside. I got to have the model execute in context with my CICS COBOL Code in that memory space. >> Yeah right. You know so the operationalizing is deploying, consuming these models, and then beyond that, how do the models behave over time? Because you can have the best programmer, the best data scientist build the absolute best model, which has got great accuracy, great performance today. Two weeks from now, performance is going to go down. >> Hmm. How do I monitor that? How do I trigger a loads map for below certain threshold. And, can I have a system in place that reclaims this model with new data as it comes in. >> So you got to understand where the data lives. >> Absolutely. You got to understand the physics, >> Yes. The latencies involved. >> Yes. You got to understand the economics. >> Yes. And there's also probably in many industries legal implications. >> Oh yes. >> No, the explainability of models. You know, can I prove that there is no bias here. >> Right. Now all of these are challenging but you know, doable things. >> What makes a successful engagement? Obviously you guys are outcome driven, >> Yeah. but talk about how you guys measure success. >> So um, for our team right now it is not about revenue, it's purely about adoption. Does the client, does the customer see the value of what IBM brings to the table. This is not just tools and technology, by the way. It's also expertise, right? >> Hmm. So this notion of expertise as a service, which is coupled with tools and technology to build a successful engagement. The way we measure success is has the client, have we built out the use case in a way that is useful for the business? Two, does a client see value in going further with that. So this is right now what we look at. It's not, you know yes of course everybody is scared about revenue. But that is not our key metric. Now in order to get there though, what we have found, a little bit of hard work, yes, uh, no you need different constituents of the customer to come together. It's not just me sending a bunch of awesome Python Programmers to the client. >> Yeah right. But now it is from the customer's side we need involvement from their Data Science Team. We talk about collaborating with them. We need involvement from their line of business. Because if the line of business doesn't care about the models we've produced you know, what good are they? >> Hmm. And third, people don't usually think about it, we need IT to be part of the discussion. Not just part of the discussion, part of being the stakeholder. >> Yes, so you've got, so IBM has the chops to actually bring these constituents together. >> Ya. I have actually a fair amount of experience in herding cats on large organizations. (laughter) And you know, the customer, they've got skin in the IBM game. This is to me a big differentiator between IBM, certainly some of the other technology suppliers who don't have the depth of services, expertise, and domain expertise. But on the flip side of that, differentiation from many of the a size who have that level of global expertise, but they don't have tech piece. >> Right. >> Now they would argue well we do anybodies tech. >> Ya. But you know, if you've got tech. >> Ya. >> You just got to (laughter) Ya. >> Bring those two together. >> Exactly. And that's really seems to me to be the big differentiator >> Yes, absolutely. for IBM. Well John, thanks so much for stopping by theCube and explaining sort of what you've been up to, the Data Science Elite Team, very exciting. Six to nine months in, >> Yes. are you declaring success yet? Still too early? >> Uh, well we're declaring success and we are growing, >> Ya. >> Growth is good. >> A lot of lot of attention. >> Alright, great to see you again, John. >> Absolutely, thanks you Dave. Thanks very much. Okay, keep it right there everybody. You're watching theCube. We're here at The Westin in midtown and we'll be right back after this short break. I'm Dave Vellante. (tech music)

Published Date : Sep 13 2018

SUMMARY :

Brought to you by IBM. he's kind of the host of the event. Thanks for coming on. last talked, what have you been up to?? We actually sit down, expect the client to use tiger teams and they're two to three And my understanding is you bring some That's the prerequisite. That is the prerequisite, because we're not And that's got to resonate and that's exactly So at the end of the two or three month period, How can we optimize the use of the plastic wrapping, Ya. You know very, What are some of the other use cases? intent of the conversation, but you So every time you call a call center (laughter) Yeah. So you're recording the calls maybe So call recording systems record the voice calls. You do kind of an entity do the speech to text, you can do this Is that right? has begun to get into you know, appropriate to the conversation. I'm talking to a human. is a good example of Exactly. a little bit in terms of the model building. You know what it is is you need So that's who you got math, you got stats, to really bring that how does that actually get done? I don't have the time to make a Rest API call outside. You know so the operationalizing is deploying, that reclaims this model with new data as it comes in. So you got to understand where You got to understand Yes. You got to understand And there's also probably in many industries No, the explainability of models. but you know, doable things. but talk about how you guys measure success. the value of what IBM brings to the table. constituents of the customer to come together. about the models we've produced you know, Not just part of the discussion, to actually bring these differentiation from many of the a size Now they would argue Ya. But you know, And that's really seems to me to be Six to nine months in, are you declaring success yet? Alright, great to see you Absolutely, thanks you Dave.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

JohnPERSON

0.99+

DavePERSON

0.99+

Rob ThomasPERSON

0.99+

twoQUANTITY

0.99+

John ThomasPERSON

0.99+

IBMORGANIZATION

0.99+

SixQUANTITY

0.99+

Time SquareLOCATION

0.99+

tonightDATE

0.99+

first stepQUANTITY

0.99+

threeQUANTITY

0.99+

three monthQUANTITY

0.99+

nine monthsQUANTITY

0.99+

thirdQUANTITY

0.98+

TwoQUANTITY

0.98+

OneQUANTITY

0.98+

New York CityLOCATION

0.98+

todayDATE

0.98+

PythonTITLE

0.98+

IBM AnalyticsORGANIZATION

0.97+

Terminal 5LOCATION

0.97+

Data Science Elite TeamORGANIZATION

0.96+

NiagaraORGANIZATION

0.96+

oneQUANTITY

0.96+

IBM.com/winwithAIOTHER

0.96+

first placeQUANTITY

0.95+

eight monthsQUANTITY

0.94+

Change the Game: Winning With AITITLE

0.89+

The WestinORGANIZATION

0.89+

Niagara BottlingPERSON

0.89+

Theater DistrictLOCATION

0.88+

four millisecond windowQUANTITY

0.87+

step twoQUANTITY

0.86+

CubePERSON

0.85+

Westside HighwayLOCATION

0.83+

firstQUANTITY

0.83+

Two weeksDATE

0.82+

millions of callsQUANTITY

0.79+

two three data scientistsQUANTITY

0.78+

CICSTITLE

0.77+

COBOLOTHER

0.69+

Rest API callOTHER

0.68+

The TideLOCATION

0.68+

theCubeORGANIZATION

0.67+

The WestinLOCATION

0.66+

Rest APIOTHER

0.66+

AppleLOCATION

0.63+

BigORGANIZATION

0.62+

WestinLOCATION

0.51+

last sixDATE

0.48+

HotelORGANIZATION

0.45+

theCubeTITLE

0.33+

BottlingCOMMERCIAL_ITEM

0.3+

Mike Bollman, Enterprise Products Company and Scott Delandy, Dell EMC | Dell Technologies World 2018


 

>> Announcer: Live from Las Vegas it's theCUBE covering Dell Technologies World 2018, brought to you by Dell EMC and its ecosystem partners. (bright music) >> Welcome back to Las Vegas. I'm Lisa Martin with Keith Townsend. We are with Dell Technologies World and about 14,000 other people here. You're watching theCUBE. We are excited to welcome back to theCUBE Scott Delandy, the Technical Director of Dell EMC. Hey, Scott! >> Hey guys, how are you? >> And you have a featured guest, Mike Bollman, the Director of Server and Storage Architecture from Enterprise Products Company, welcome! >> Thanks for having me. >> So you guys are a leader in oil and gas. I hear some great things. Talk to us about what it is that you're doing and how you're working with Dell EMC to be innovative in the oil and gas industry. >> So we're actually a Dell EMC storage customer for about the last two years now, and working with them on how we can bring in a lot of the data that we have from the field. The buzzword today is Internet of Things, or IoT. We've been doing it for many, many years, though, so we pull that data in and we look and analyze it and figure out how we can glean more information out of it. How can we tune our systems? As an example, one of the things that we do is we model a product as it flows through a pipeline because we're looking for bubbles. And bubbles mean friction and friction means less flow and we're all about flow. The more product we can flow the more money we can make. So it's one of the interesting things that we do with the data that we have. >> And Scott, talk to us about specifically oil and gas in terms of an industry that is helping Dell EMC really define this next generation of technology to modernize data centers and enable companies to kind of follow along the back of one and start doing IoT as well. >> Yeah, so the things that Mike has been able to accomplish within Enterprise Products is amazing because they truly are an innovator in terms of how they leverage technology to not just kind of maintain sort of the core applications that they need to support just to keep the business up and running but how they're investing in new applications, in new concepts to help further drive the business and be able to add value back into the organization. So we love working with Enterprise and users like Mike just because they really push technology, they're, again, very innovative in terms of the things that they're trying to do, and they provide us incredible feedback in terms of the things that we're doing, the things that we're looking to build and helping us understand what are the challenges that users like Mike are facing and how do we take our technology and adapt it to make sure that we're meeting his requirements. >> So unlike any other energy, oil and gas, you guys break scale. I mean you guys define scale when it comes to the amount of data and the need to analyze that data. How has this partnership allowed you to, what specifically have you guys leveraged from Dell EMC to move faster? >> So we've done a number of things. Early on when we first met with Scott and team at Dell EMC we said we're not looking to establish a traditional sales-customer relationship. We want a two-way business partnership. We want to be able to take your product, leverage it in our data centers, learn from it, provide feedback, and ask for enhancements, things that we think would make it better not only for us but for other customers. So one of the examples if I can talk to it. >> Scott: Please. >> One of the examples was early on when PowerMax was kind of going through its development cycle, there was talk about introducing data deduplication. And one of the things that we knew from experiences is that there are some workloads that may not do well with data dedup, and so we wanted some control over that versus some of the competitor arrays that just say everything's data dedup, good, bad, or indifferent, right? And we have some of that anecdotal knowledge. So that was a feature that the team listened to and introduced into the product. >> Yeah, yeah, I mean it was great because we were able to take the feedback and because we worked so closely with the engineering teams and because we really value the things that Mike brings to the table in terms of how he wants to adopt the technology and the things that he wants to support from a functionality perspective, we were able to basically build that into the product. So the technology that we literally announced earlier this morning, there are pieces of code that were specifically written into that system based on some of the comments that Mike had provided a year plus ago when we were in the initial phases of development. >> So being an early adopter and knowing that you were going to have this opportunity to collaborate and really establish this symbiotic relationship that allows you to test things, allows Dell EMC to get that information to make the product better, what is it that your company saw in Dell EMC to go, "Yeah, we're not afraid to send them back," or, "Let's try this together and be that leading edge"? >> I think honestly it came down to the very first meeting that we had. We had a relationship with some of the executives inside of EMC from other business relationships years ago, and we reached out and said, "Look, we want to have a conversation," and we literally put together a kind of a bullet-pointed list of here's how we want to conduct business and here's what we want to talk about. And they brought down some of their best and brightest within the engineering organization to have a open discussion with us. And really we're very open and honest with what we were trying to accomplish and how they could fit in, and then, again, we had that two-way dialogue back of, "Okay, well what about this," or, "What about that?" And so from day one it has been truly a two-way partnership. >> So Lisa's all about relationships and governance. I'm all about speeds and feeds. (Mike laughing) I'm a geek, and I want to hear some numbers, man. (Mike laughing) So you guys got the PowerMax. We had Caitlin Gordon on earlier. She's Product Marketing for the PowerMax, very, very proud of the product, but you're a customer that had it in your data center. Tell us the truth. (Mike laughing) How is, is it... Is it what you need to move forward? >> It is unbelievably fast in all honesty. So early on we brought it into our lab environment and we got it online and we stood it up, and so we were basically generating simulated workloads, right? And so you've got all of these basically host machines that are just clobbering it as fast as you can. We ran into a point where we just didn't have any more hardware to throw at it. The box just kept going, and it's like okay, well we're measuring 700,000 IOPS, it's not breaking a sweat. It's submillisecond (laughs) leads. It's like well, what else do we have? (laughs) And so it just became one of those things. Well, all right, let's start throwing snapshots at it and let's do this and let's do that. It truly is a remarkable box. And keep in mind we had the smallest configurable system you could get. We had the what is now, I guess, the PowerMax 2000, >> The 2000, yeah. yeah, in a very, very small baseline configuration. And it was just phenomenal in what it could do. >> So I would love to hear a little bit more about that. When we look at things such as the VMAX, incredible platform which had been positioned as a data center consolidator, but a lot of customers I saw using that as purpose-built for a mission critical set of applications, subset of applications in the data center. Sounds like the PowerMax, an example of the beta relationship you guys have, is a true platform that you can run an entire data center on and realistically get mission critical support out of a single platform. >> Absolutely, yeah, so even today in our production data center we have VMAX 450, VMAX 950s in today running. And we have everything from Oracle databases, SQL databases, Exchange, various workloads, a tremendous number of virtual iServers running on there, I mean hundreds and hundreds or actually probably several thousand. And it doesn't matter how we mix and match those. I have Exchange running on one array along with an Oracle database and several dozen SQL databases and hundreds of VMs all on one array and it's no problems whatsoever. There's no competition for I/O or any latency issues that are happening. It just works really well. >> And I think one of the other powerful use cases, if I could just talk to this, in your environment specifically there's some of the things you're doing around replication where you're doing multi-site replication, and on a regular basis you're doing failover, recovery, failback as part of the testing process. >> Mike: Absolutely. >> So it's not just running the I/O and getting the performance of the system, it's making sure that from a service-level perspective from the way the data's being protected being able to have the right recovery time objectives, recovery point objectives for all of the applications that you're running in your environment, to be able to have the infrastructure in place that could support that. >> Lisa: So I want to, oh. >> Go head. >> Sorry, thanks Keith. So I want to, I'm going to go ahead and go back up a little bit. >> Mike: Sure. >> One of the announcements that came out today from Dell Technologies was about modernizing the data center. You've just given us a great overview of what you're doing at the technical level. Where are you in developing a modern data center? Are you where you want to be? What's next steps for that? >> So I don't think we're ever where we want to be. There's always something else so we're always chasing things. But where we are today is that there's a lot of talk for the last several years around cloud, cloud this, cloud that. Everybody has a hardware, software, or service offering that's cloud-something. We look at cloud more as an operational model. And so we're looking at how can we streamline our internal business taking advantages of, say, RESTful APIs that are in PowerMax and basically automating end to end from a provisioning or a request perspective all the way through the provisioning all the way to final deployment and basically pulling the people out of that, the touchpoints, trying to streamline our operations, make them more efficient. It's been long said that we can't get more people in IT. It's just do more for less and that's not stopping. >> And if I could just make another plug for Mike, so I visited Mike in his data center it was about a year ago or something like that. And I've been in a lot of data centers and I've seen all kinds of organizations of all different size and scale and still today I talk about the lab tour that we went on because just the efficiency in how everything was racked, how everything was labeled, there was no empty boxes scattered around. Just the operational efficiency that you've built into the organization is, and it's part of the culture there. That's what gives Mike the ability to do the types of things that he's able to do with what's really a pretty limited staff of resources that support all of those different applications. So it's incredibly impressive not just in terms of what Mike has been able to do in terms of the technology piece but just kind of the people and the operational side of things. It's really, really impressive. I would call it a gold standard (Mike laughing) from an IT organization. >> And you're not biased about it. (Lisa and Mike laughing) >> Mental note, complete opposite of any data center I've ever met. (Lisa, Mike, and Scott laughing) Okay, so Mike, talk to us about this automation piece. We hear a lot about the first step to modernization is automation, but when I look at the traditional data center and I look at all the things that could be automated how do you guys prioritize where to go first? >> So we look at it from where are we spending our time, so it's really kind of simple of looking at what are your trouble tickets and what are your change control processes or trouble control tickets that are coming in and where are you spending the bulk of your time. And it's all about bang for the buck. So you want to do the things that you're going to get the biggest payback on first and then the low-hanging fruit, and then you go back and you tweak further and further from there. So from our perspective we did an analysis internally and we found that we spent a lot of time doing basic provisioning. We get a tremendous number of requests from our end users, from our app devs and from our DBAs. They're saying, "Hey, I need 10 new servers by Monday," and it's Friday afternoon, that sort of request. And so we spend the time jumping through hoops. It was like, well, why? We can do better than that. We should do better than that. >> So PowerMax built in modern times for the modern data center. Have you guys seen advantages for this modern platform for automation? Have you looked at it and been like, "Oh, you know what? "We love that Dell EMC took this angle "towards building this product "because they had the modern data center in mind"? >> So again I think it goes back to largely around REST APIs. So with PowerMax OS 5978 there's been further enhancements there. So pretty much anything that you could do before with SIM CLI or through the GUI has now been exposed to the REST API and everybody in the industry's kind of moving that way whether you're talking about a storage platform or a server platform, even some of the networking vendors. I had a meeting earlier today and they're moving that way as well. It's like whoa, have you seen what we're doing with REST? So from an infrastructure standpoint, from a plumbing perspective, that's really what we're looking at in tracking-- >> And if I can add to that I think one of the other sort of core enablers for that is just simply to move to an all flash-based system because in the world of spinning drives, mechanical systems, hybrid systems, an awful lot of administrative time is spent in kind of performance tuning. How do I shave off milliseconds of response time? How do I minimize those response time peaks during different parts of the day? And when you move to the all flash there's obviously a boost in terms of performance. But it's not just the performance, it's the predictability of that performance and not having to go in and figure out okay, what happened Tuesday night between four and six that caused this application to go from here to here? What do we have to do to go and run the analysis to figure all of that out? You don't see that type of behavior anymore. >> Yeah, it's that indirect operational savings. So before when flash drives kind of first got introduced to the market we had these great things like FaaS where you could go in and you could tune stuff and these algorithms that would watch those workloads and make their best guesses at what data to move when and where. Without flash, that's out the window. There's no more coming in on Monday and all of a sudden then something got tuned over the weekend down to a lower tier storage and it's too slow for the performance requirements Monday morning. That problem's gone. >> And when you look under the covers of the PowerMax we talked a lot today about some of the machine learning and the predictive analytics that are built into that system that help people like Mike to be able to consolidate hundreds, thousands of applications onto this single system. But now to have to go in and worry about how do I tune, how do I optimize not just based on a runtime of applications but real-time changes that are happening into those workloads and the system being able to automatically adjust and to be able to do the right thing to be able to maintain the level of performance that they require from that environment. >> Last question, Scott, we just have a few seconds left. Looking at oil and gas and what Mike and team have done in early adoption context, helping Dell EMC evolve this technology, what are some of the other industries that you see that can really benefit from this early adopter in-- >> I, what I would say is there are lots of industries out there that we work with and they all have sort of unique challenges and requirements for the types of things that they're trying to do to support their businesses. What I would say, the real thing is to be able to build the relationships and to have the trust so that when they're asking for something on our side we're understanding what that requirement and if there are things that we can do to help that we can have that conversation. But if there are things that we can't control or if there are things that are very, very specific to a small set of customers but require huge investments in terms of R&D and resources to do the development, we can have that honest conversation and say, "Hey Mike, it's a really good idea "and we understand how it helps you here, "but we're still a business. "We still have to make money." So we can do some things but we have to be realistic in terms of being able to balance helping Mike but still being able to run a business. >> Sure, and I wish we had more time to keep going, but thanks, guys, for stopping by, talking about how Dell EMC and Enterprise Products Company are collaborating and all of the anticipated benefits that will no doubt proliferate among industries. We want to thank you for watching theCUBE. I'm Lisa Martin with Keith Townsend. We're live, day two of Dell Technologies World in Vegas. Stick around, we'll be right back after a short break. (bright music)

Published Date : May 2 2018

SUMMARY :

brought to you by Dell EMC and its ecosystem partners. We are excited to welcome back to theCUBE Talk to us about what it is that you're doing So it's one of the interesting things that we do And Scott, talk to us about specifically oil and gas Yeah, so the things that Mike has been able to accomplish and the need to analyze that data. So one of the examples if I can talk to it. And one of the things that we knew from experiences the things that Mike brings to the table and then, again, we had that two-way dialogue back and I want to hear some numbers, man. and so we were basically And it was just phenomenal in what it could do. an example of the beta relationship you guys have, and hundreds of VMs all on one array and on a regular basis you're doing and getting the performance of the system, So I want to, I'm going to go ahead and go back up a little bit. One of the announcements that came out today and basically pulling the people out of that, and it's part of the culture there. (Lisa and Mike laughing) and I look at all the things that could be automated and we found that we spent a lot of time for the modern data center. and everybody in the industry's kind of moving that way and not having to go in and figure out kind of first got introduced to the market and the system being able to automatically adjust that you see that can really benefit and if there are things that we can do to help that are collaborating and all of the anticipated benefits

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ScottPERSON

0.99+

Mike BollmanPERSON

0.99+

KeithPERSON

0.99+

Keith TownsendPERSON

0.99+

Lisa MartinPERSON

0.99+

MikePERSON

0.99+

LisaPERSON

0.99+

Scott DelandyPERSON

0.99+

MondayDATE

0.99+

EMCORGANIZATION

0.99+

Monday morningDATE

0.99+

Tuesday nightDATE

0.99+

Dell EMCORGANIZATION

0.99+

Las VegasLOCATION

0.99+

todayDATE

0.99+

Dell TechnologiesORGANIZATION

0.99+

hundredsQUANTITY

0.99+

Friday afternoonDATE

0.99+

10 new serversQUANTITY

0.99+

Enterprise Products CompanyORGANIZATION

0.99+

oneQUANTITY

0.99+

OneQUANTITY

0.99+

PowerMaxORGANIZATION

0.99+

Caitlin GordonPERSON

0.99+

first stepQUANTITY

0.99+

two-wayQUANTITY

0.99+

VegasLOCATION

0.98+

VMAX 950sCOMMERCIAL_ITEM

0.98+

2000COMMERCIAL_ITEM

0.98+

ExchangeTITLE

0.98+

Dell Technologies World 2018EVENT

0.97+

fourQUANTITY

0.97+

one arrayQUANTITY

0.97+

VMAX 450COMMERCIAL_ITEM

0.97+

firstQUANTITY

0.96+

single platformQUANTITY

0.96+

first meetingQUANTITY

0.95+

SQLTITLE

0.95+

700,000 IOPSQUANTITY

0.95+

PowerMax 2000COMMERCIAL_ITEM

0.95+

a year plus agoDATE

0.95+

hundreds, thousands of applicationsQUANTITY

0.95+

Kostas Tzoumas, data Artisans | Flink Forward 2018


 

(techno music) >> Announcer: Live, from San Francisco, it's theCUBE. Covering Flink Forward, brought to you by data Artisans. (techno music) >> Hello again everybody, this is George Gilbert, we're at the Flink Forward Conference, sponsored by data Artisans, the provider of both Apache Flink and the commercial distribution, the dA Platform that supports the productionization and operationalization of Flink, and makes it more accessible to mainstream enterprises. We're priviledged to have Kostas Tzoumas, CEO of data Artisans, with us today. Welcome Kostas. >> Thank you. Thank you George. >> So, tell us, let's start with sort of an idealized application-use case, that is in the sweet spot of Flink, and then let's talk about how that's going to broaden over time. >> Yeah, so just a little bit of an umbrella above that. So what we see very, very consistently, we see it in tech companies, and we see, so modern tech companies, and we see it in traditional enterprises that are trying to move there, is a move towards a business that runs in real time. Runs 24/7, is data-driven, so decisions are made based on data, and is software operated. So increasingly decisions are made by AI, by software, rather than someone looking at something and making a decision, yeah. So for example, some of the largest users of Apache Flink are companies like Uber, Netflix, Alibaba, Lyft, they are all working in this way. >> Can you tell us about the size of their, you know, something in terms of records per day, or cluster size, or, >> Yeah, sure. So, latest I heard, Alibaba is powering Alibaba Certs, more than a thousand nodes, terabytes of states, I'm pretty sure they will give us bigger numbers today. Netflix has reported of doing about one trillion events per day. >> George: Wow. >> On Flink. So pretty big sizes. >> So and is Netflix, I think I read, is powering their real time recommendation updates. >> They are powering a bunch of things, a bunch of applications, there's a lot of routing events internally. I think they have a talk, they had a talk definitely at the last conference, where they talk about this. And it's really a variety of use cases. It's really about building a platform, internally. And offering it to all sorts of departments in the company, be that for recommendations, be that for BI, be that for running, state of microservices, you know, all sorts of things. And we also see, the more traditional enterprise moving to this modus operandi. For example, ING is also one of our biggest partners, it's a global consumer bank based in the Netherlands, and their CEO is saying that ING is not a bank, it's a tech company that happens to have a banking license. It's a tech company that inherited a banking license. So that's how they want to operate. So what we see, is stream processing is really the enabler for this kind of business, for this kind of modern business where we interact with, in real time, they interact with the consumer in real time, they push notifications, they can change the pricing, et cetera, et cetera. So this is really the crux of stateful stream processing , for me. >> So okay, so tell us, for those who, you know, have a passing understanding of how Kafka's evolving, how Apache Spark and Structured Streaming's evolving, as distinct from, but also, Databricks. What is it about having state management that's sort of integrated, that for example, might make it easy to elastically change a cluster size by repartitioning. What can you assume about managing state internally, that makes things easier? >> Yeah, so I think really the, the sweet spot of Flink, is that if you are looking for stream process, from a stream processing engine, and for a stateful stream processing engine for that matter, Flink is the definition of this. It's the definite solution to this problem. It was created from scratch, with this in mind, it was not sort of a bolt-on on top of something else, so it's streaming from the get-go. And we have done a lot of work to make state a first-class citizen. What this means, is that in Flink programs, you can keep state that scales to terabytes, we have seen that, and you can manage this state together with your application. So Flink has this model based on check points, where you take a check point of your application and state together, and you can restart at any time from there. So it's really, the core of Flink, is around state management. >> And you manage exactly one semantics across the checkpointing? >> It's exactly once, it's application-level exactly once. We have also introduced end-to-end exactly once with Kafka. So Kafka-Flink-Kafka exactly once. So fully consistent. >> Okay so, let's drill down a little bit. What are some of the things that customers would do with an application running on a, let's say a big cluster or a couple clusters, where they want to operate both on the application logic and on the state that having it integrated you know makes much easier? >> Yeah, so it is a lot about a flipped architecture and about making operations and DevOps much, much easier. So traditionally what you would do is create, let's say a containerized stateless application and have a central centralized data store to keep all your states. What you do now, is the state becomes part of the application. So this has several benefits. It has performance benefits, it has organizational benefits in the company. >> Autonomy >> Autonomy between teams. It has, you know it gives you a lot of flexibility on what you can do with the applications, like, for example right, scaling an application. What you can do with Flink is that you have an application running with parallelism over 100 and you are getting a higher volume and you want to scale it to 500 right, so you can simply with Flink take a snapshot of the state and the application together, and then restart it at a 500 and Flink is going to resolve the state. So no need to do anything on a database. >> And then it'll reshard and Flink will reshard it. >> Will reshard and it will restart. And then one step further with the product that we have introduced, dA Platform which includes Flink, you can simply do this with one click or with one rest command. >> So, the the resharding was possible with core Flink, the Apache Flink and the dA Platform just makes it that much easier along with other operations. >> Yeah so what the dA Platform does is it gives you an API for common operational tasks, that we observed everybody that was deploying Flink at a decent scale needs to do. It abstracts, it is based on Kubernetes, but it gives you a higher-level API than Kubernetes. You can manage the application and the state together, and it gives that to you in a rest API, in a UI, et cetera. >> Okay, so in other words it's sort of like by abstracting even up from Kubernetes you might have a cluster as a first-class citizen but you're treating it almost like a single entity and then under the covers you're managing the, the things that happen across the cluster. >> So what we have in the dA Platform is a notion of a deployment which is, think of it as, I think of it as a cluster, but it's basically based on containers. So you have this notion of deployments that you can manage, (coughs) sorry, and then you have a notion of an application. And an application, is a Flink job that evolves over time. And then you have a very, you know, bird's-eye view on this. You can, when you update the code, this is the same application with updated code. You can travel through a history, you can visit the logs, and you can do common operational tasks, like as I said, rescaling, updating the code, rollbacks, replays, migrate to a new deployment target, et cetera. >> Let me ask you, outside of the big tech companies who have built much of the application management scaffolding themselves, you can democratize access to stream processing because the capabilities, you know, are not in the skill set of traditional, mainstream developers. So question, the first thing I hear from a lot of sort of newbies, or people who want to experiment, is, "Well, it's so easy to manage the state "in a shared database, even if I'm processing, "you know, continuously." Where should they make the trade-off? When is it appropriate to use a shared database? Maybe you know, for real OLTP work, and then when can you sort of scale it out and manage it integrally with the rest of the application? >> So when should we use a database and when should we use streaming, right? >> Yeah, and even if it's streaming with the embedded state. >> Yeah, that's a very good question. I think it really depends on the use case. So what we see in the market, is many enterprises start with with a use case that either doesn't scale, or it's not developer friendly enough to have these database application levels. Level separation. And then it quickly spreads out in the whole company and other teams start using it. So for example, in the work we did with ING, they started with a fraud detection application, where the idea was to load models dynamically in the application, as the data scientists are creating new models, and have a scalable fraud detection system that can handle their load. And then we have seen other teams in the company adopting processing after that. >> Okay, so that sounds like where the model becomes part of the application logic and it's a version of the application logic and then, >> The version of the model >> Is associated with the checkpoint >> Correct. >> So let me ask you then, what happens when you you're managing let's say terabytes of state across a cluster, and someone wants to query across that distributed state. Is there in Flink a query manager that, you know, knows about where all the shards are and the statistics around the shards to do a cost-based query? >> So there is a feature in Flink called queryable state that gives you the ability to do, very simple for now, queries on the state. This feature is evolving, it's in progress. And it will get more sophisticated and more production-ready over time. >> And that enables a different class of users. >> Exactly, I wouldn't, like to be frank, I wouldn't use it for complex data warehousing scenarios. That still needs a data warehouse, but you can do point queries and a few, you know, slightly more sophisticated queries. >> So this is different. This type of state would be different from like in Kafka where you can store you know the commit log for X amount of time and then replay it. This, it's in a database I assume, not in a log form and so, you have faster access. >> Exactly, and it's placed together with a log, so, you can think of the state in Flink as the materialized view of the log, at any given point in time, with various versions. >> Okay. >> And really, the way replay works is, roll back the state to a prior version and roll back the log, the input log, to that same logical time. >> Okay, so how do you see Flink spreading out, now that it's been proven in the most demanding customers, and now we have to accommodate skills, you know, where the developers and DevOps don't have quite the same distributed systems knowledge? >> Yeah, I mean we do a lot of work at data Artisans with financial services, insurance, very traditional companies, but it's definitely something that is work in progress in the sense that our product the dA Platform makes operation smarts easier. This was a common problem everywhere, this was something that tech companies solved for themselves, and we wanted to solve it for everyone else. Application development is yet another thing, and as we saw today in the last keynote, we are working together with Google and the BIM Community to bring Python, GOLD, all sorts of languages into Flink. >> Okay so that'll help at the developer level, and you're also doing work at the operations level with the platform. >> And of course there's SQL right? So Flink has Stream SQL which is standard SQL. >> And would you see, at some point, actually sort of managing the platform for customers, either on-prem or in the cloud? >> Yeah, so right now, the platform is running on Kubernetes, which means that typically the customer installs it in their clusters, in their Kubernetes clusters. Which can be either their own machines, or it can be a Kubernetes service from a cloud vendor. Moving forward I think it will be very interesting yes, to move to more hosted solutions. Make it even easier for people. >> Do you see a breakpoint or a transition between the most sophisticated customers who, either are comfortable on their own premises, or who were cloud, sort of native, from the beginning, and then sort of the rest of the mainstream? You know, what sort of applications might they move to the cloud or might coexist between on-prem and the cloud? >> Well I think it's clear that the cloud is, you know, every new business starts on the cloud, that's clear. There's a lot of enterprise that is not yet there, but there's big willingness to move there. And there's a lot of hybrid cloud solutions as well. >> Do you see mainstream customers rewriting applications because they would be so much more powerful in stream processing, or do you see them doing just new applications? >> Both, we see both. It's always easier to start with a new application, but we do see a lot of legacy applications in big companies that are not working anymore. And we see those rewritten. And very core applications, very core to the business. >> So could that be, could you be sort of the source and in an analytic processing for the continuous data and then that sort of feeds a transaction and some parameters that then feed a model? >> Yeah. >> Is that, is that a, >> Yeah. >> so in other words you could augment existing OLTP applications with analytics then inform them in real time essentially. >> Absolutely. >> Okay, 'cause that sounds like then something that people would build around what exists. >> Yeah, I mean you can do, you can think of stream processing, in a way, as transaction processing. It's not a dedicated OLTP store, but you can think of it in this flipped architecture right? Like the log is essentially the re-do log, you know, and then you create the materialized views, that's the write path, and then you have the read path, which is queryable state. This is this whole CQRS idea right? >> Yeah, Command-Query-Response. >> Exactly. >> So, this is actually interesting, and I guess this is critical, it's sort of like a new way of doing distributed databases. I know that's not the word you would choose, but it's like the derived data, managed by, sort of coming off of the state changes, then in the stream processor that goes through a single sort of append-only log, and then reading, and how do you manage consistency on the materialized views that derive data? >> Yeah, so we have seen Flink users implement that. So we have seen, you know, companies really base the complete product on the CQRS pattern. I think this is a little bit further out. Consistency-wise, Flink gives you the exactly once consistency on the write path, yeah. What we see a lot more is an architecture where there's a lot of transactional stores in the front end that are running, and then there needs to be some kind of global, of single source of truth, between all of them. And a very typical way to do that is to get these logs into a stream, and then have a Flink application that can actually scale to that. Create a single source of truth from all of these transactional stores. >> And by having, by feeding the transactional stores into this sort of hub, I presume, some cluster as a hub, and even if it's in the form of sort of a log, how can you replay it with sufficient throughput, I guess not to be a data warehouse but to, you know, have low latency for updating the derived data? And is that derived data I assume, in non-Flink products? >> Yeah, so the way it works is that, you know, you can get the change logs from the databases, you can use something like Kafka to buffer them up, and then you can use Flink for all the processing and to do the reprocessing with Flink, this is really one of the core strengths of Flink. Basically what you do is, you replay the Flink program together with the states you can get really, really high throughput reprocessing there. >> Where does the super high throughput come from? Is that because of the integration of state and logic? >> Yeah, that is because Flink is a true streaming engine. It is a high-performance streaming engine. And it manages the state, there's no tier, >> Crossing a boundary? >> no tier crossing and there's no boundary crossing when you access state. It's embedded in the Flink application. >> Okay, so that you can optimize the IO path? >> Correct. >> Okay, very, very interesting. So, it sounds like the Kafka guys, the Confluent folks, their aspirations, from the last time we talked to 'em, doesn't extend to analytics, you know, I don't know whether they want partners to do that, but it sounds like they have a similar topology, but they're, but I'm not clear how much of a first-class citizen state is, other than the log. How would you characterize the trade-offs between the two? >> Yeah, so, I mean obviously I cannot comment on Confluent, but like, what I think is that the state and the log are two very different things. You can think of the log as storage, it's a kind of hot storage because it's the most recent data but you know, you cannot query it, it's not a materialized view, right. So for me the separation is between processing state and storage. The log is is a kind of storage, so kind of message queue. State is really the active data, the real-time active data that needs to have consistency guarantees, and that's a completely different thing. >> Okay, and that's the, you're managing, it's almost like you're managing under the covers a distributed database. >> Yes, kind of. Yeah a distributed key-value store if you wish. >> Okay, okay, and then that's exposed through multiple interfaces, data stream, table. >> Data stream, table API, SQL, other languages in the future, et cetera. >> Okay, so going further down the line, how do you see the sort of use cases that are going to get you across the chasm from the big tech companies into the mainstream? >> Yeah, so we are already seeing that a lot. So we're doing a lot of work with financial services, insurance companies a lot of very traditional businesses. And it's really a lot about maintaining single source of truth, becoming more real-time in the way they interact with the outside world, and the customer, like they do see the need to transform. If we take financial services and investment banks for example, there is a big push in this industry to modernize the IT infrastructure, to get rid of legacy, to adopt modern solutions, become more real-time, et cetera. >> And so they really needed this, like the application platform, the dA Platform, because operationalizing what Netflix did isn't going to be very difficult maybe for non-tech companies. >> Yeah, I mean, you know, it's always a trade-off right, and you know for some, some companies build, some companies buy, and for many companies it's much more sensible to buy. That's why we have software products. And really, our motivation was that we worked in the open-source Flink community with all the big tech companies. We saw their successes, we saw what they built, we saw, you know, their failures. We saw everything and we decided to build this for everybody else, for everyone that, you know, is not Netflix, is not Uber, cannot hire software developers so easily, or with such good quality. >> Okay, alright, on that note, Kostas, we're going to have to end it, and to be continued, one with Stefan next, apparently. >> Nice. >> And then hopefully next year as well. >> Nice. Thank you. >> Alright, thanks Kostas. >> Thank you George. Alright, we're with Kostas Tzoumas, CEO of data Artisans, the company behind Apache Flink and now the application platform that makes Flink run for mainstream enterprises. We will be back, after this short break. (techno music)

Published Date : Apr 11 2018

SUMMARY :

Covering Flink Forward, brought to you by data Artisans. and makes it more accessible to mainstream enterprises. Thank you George. application-use case, that is in the sweet spot of Flink, So for example, some of the largest users of Apache Flink I'm pretty sure they will give us bigger numbers today. So pretty big sizes. So and is Netflix, I think I read, is powering it's a tech company that happens to have a banking license. So okay, so tell us, for those who, you know, and you can restart at any time from there. We have also introduced end-to-end exactly once with Kafka. and on the state that having it integrated So traditionally what you would do is and you want to scale it to 500 right, which includes Flink, you can simply do this with one click So, the the resharding was possible with and it gives that to you in a rest API, in a UI, et cetera. you might have a cluster as a first-class citizen and you can do common operational tasks, because the capabilities, you know, are not in the skill set So for example, in the work we did with ING, and the statistics around the shards that gives you the ability to do, but you can do point queries and a few, you know, where you can store you know the commit log so, you can think of the state in Flink and roll back the log, the input log, in the sense that our product the dA Platform at the operations level with the platform. And of course there's SQL right? Yeah, so right now, the platform is running on Kubernetes, Well I think it's clear that the cloud is, you know, It's always easier to start with a new application, so in other words you could augment Okay, 'cause that sounds like then something that's the write path, and then you have the read path, I know that's not the word you would choose, So we have seen, you know, companies Yeah, so the way it works is that, you know, And it manages the state, there's no tier, It's embedded in the Flink application. doesn't extend to analytics, you know, but you know, you cannot query it, Okay, and that's the, you're managing, it's almost like Yeah a distributed key-value store if you wish. Okay, okay, and then that's exposed other languages in the future, et cetera. and the customer, like they do see the need to transform. like the application platform, the dA Platform, and you know for some, some companies build, and to be continued, one with Stefan next, apparently. and now the application platform

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AlibabaORGANIZATION

0.99+

NetflixORGANIZATION

0.99+

UberORGANIZATION

0.99+

INGORGANIZATION

0.99+

GeorgePERSON

0.99+

George GilbertPERSON

0.99+

Kostas TzoumasPERSON

0.99+

GoogleORGANIZATION

0.99+

LyftORGANIZATION

0.99+

KostasPERSON

0.99+

StefanPERSON

0.99+

San FranciscoLOCATION

0.99+

FlinkORGANIZATION

0.99+

next yearDATE

0.99+

NetherlandsLOCATION

0.99+

twoQUANTITY

0.99+

BothQUANTITY

0.99+

KafkaTITLE

0.99+

bothQUANTITY

0.99+

one clickQUANTITY

0.99+

PythonTITLE

0.99+

SQLTITLE

0.98+

first thingQUANTITY

0.98+

more than a thousand nodesQUANTITY

0.98+

KubernetesTITLE

0.98+

500QUANTITY

0.98+

todayDATE

0.98+

oneQUANTITY

0.98+

ConfluentORGANIZATION

0.97+

ArtisansORGANIZATION

0.96+

single sourceQUANTITY

0.96+

2018DATE

0.96+

over 100QUANTITY

0.95+

dA PlatformTITLE

0.95+

FlinkTITLE

0.94+

about one trillion events per dayQUANTITY

0.94+

Apache FlinkORGANIZATION

0.93+

singleQUANTITY

0.92+

Flink Forward ConferenceEVENT

0.9+

one stepQUANTITY

0.9+

ApacheORGANIZATION

0.89+

DatabricksORGANIZATION

0.89+

KafkaPERSON

0.88+

firstQUANTITY

0.86+

single entityQUANTITY

0.84+

one rest commandQUANTITY

0.82+

Kyle Ruddy, VMware | VTUG Winter Warmer 2018


 

>> Announcer: From Gillette Stadium in Foxborough, Massachusetts, it's theCube! Covering VTUG Winter Warmer 2018. Presented by SiliconeANGLE. (energetic music) >> Hi, I'm Stu Miniman and this is theCube's coverage of the VTUG Winter Warmer 2018, the 12th year of this user group, fifth year we've had theCube here. I happen to have on the program a first-time guest, Kyle Ruddy, who's a Senior Technical Marketing Engineer with VMware, knows a thing or two about virtualization. >> Maybe a couple of things. >> Stu: Thanks for joining us, Kyle. >> Oh, thank you for having me. I'm happy to be here. >> All right, so Kyle, I know you were sitting at home in Florida and saying, "What I'd like to do is come up in the 20s. "It kind of feels like single digits." Why did you leave the warmth of the south to come up here to the frigid New England? >> (chuckles) Yeah, well, it was a great opportunity. I've never been to one of the VTUGs before, so they gave me a chance to talk about something that I'm extremely passionate about which is API usage. Once I got the invite, no-brainer, made the trip. >> Awesome! So definitely, Jonathan Frappier who we asked to be on the program but he said Kyle's going to be way better. (Kyle chuckles) Speak better, you got the better beard. (Kyle laughs) I think we're just going to give Frappier a bunch of grief since he didn't agree to come on. Give us first a little bit about your background, how long you been VMware, what kind of roles have you had there? >> Yeah, absolutely! So I've probably been in IT for over 15 years, a long-time customer. I did that for about 10 to 12 years of the IT span doing everything from help desk working my way up to being on the engineer side. I really fell in love with automation during that time period and then made the jump to the vendor side. I've been at VMware for about two years now where I focus on creating content and being at events like these to talk about our automation strategy for vSphere. >> Before you joined VMware, were you a vExpert? Have you presented at VMUGs? >> Yes, yes, so I've been a vExpert. I think I'm going on seven years now. I've helped run the Indianapolis VMUG for five to six years. I've presented VMUGs all over the country. >> Yeah, one of the things we always emphasize, especially at groups like this, is get involved, participate, it can do great things for your career. >> Yes, absolutely! I certainly wouldn't be here without that kind of input and guidance. >> Indy VMUG's a great one, a real large one here, even though I hear this one here has tended to be a little bit bigger, but a good rivalry going on there. I want to talk about the keynote you talked about, automation and APIs. It's not kind of the virtualization 101, so what excites you so much about it? And let's get in a little bit, talk about what you discussed there. >> Yeah, absolutely! We were talking about using Ansible with the vSphere 6.5 RESTful APIs. That's something that's new, brand new, to vSphere 6.5, and really just being able to, when those were released, allow our users and our customers to make use of those APIs in however way that they wanted to. If you look back at some of our prior APIs and our SDKs, you were a little more constrained. They were SOAP-based so there was a lot of overhead that came with those. There was a large learning curve that also came along with those. So by switching to REST, it's a whole lot more user friendly. You can use it with tools like Ansible which that was just something that Jon knew quite well. I thought that was a perfect opportunity for me to finally do a presentation with Jon. It went quite well. I think the audience learned quite a bit. We even kind of relayed to the audience that this isn't something that's just for vSphere. Ansible is something you can use with anything. >> For somebody out there watching this, how do they get started? What's kind of some of the learning curve that they need to do? What skillsets are they going to build on versus what they need to learn for new? >> Sure. A lot of the ways to really get started with these things, I've created a ton of blog posts that are out there on the VMware {code} blog. The first one is just getting started with the RESTful APIs that we've provided. There's a program that's called Postman, we give a couple of collections that you can automatically import and start using that. Ansible has some really good documentation on getting started with Ansible and whichever environment you're choosing to work or use it with. So they've got a Getting Started with vSphere, they've got a Getting Started with different operating systems as well. Those are really good tools to get started and get that integrated into your normal working environment. Obviously, we're building on automation here. We're building on... At least when I was in admin, I got involved in automation because there was a way for me to automate and get rid of those tasks, those menial tasks that I didn't really enjoy doing. So I could automate that, push that off, and get back to something that I cared about that I enjoyed. >> Yeah, great point there 'cause, yeah, some people, they're a little bit nervous, "Oh, wait, are these tools going to take away my job?" And to repeat what you were just saying, "No, no." There's the stuff that you don't really love doing and that you probably have to do a bunch. Those are the things that are probably, maybe the easiest to be able to move to the automation. How much do people look at this and be like, "Wait, no, once I start automating it, "then I kind of need to care, and feed, and maintain that, "versus just buying something off the shelf "or using some service that I can do." Any feedback on that? >> Well, it's more of a... It's a passion thing. If it's something that you're really get ingrained in, you really enjoy, then you're going to want to care and feed that because it's going to grow. It's going to expand into other areas of your environment. It's going to expand into other technologies that are within your environment. So of course, you can buy something. You could get somebody from... There are professional services organizations involved, so you don't have to do the menial tasks of updating that. Say if you go from one version to a next version, you don't have to deal with that. But if you're passionate about it, you enjoy doing that, and that's where I was. >> The other thing I picked up on is you said some of these things are new only in 6.5. One of the challenges we've always had out there is, "Oh, wait, I need to upgrade. "When can I do it? "What challenges I'm going to have?" What's the upgrade experience like now and anything else that you'd want to point out that said, "Hey, it's time to plan for that upgrade "and here are some of the things that are going to help you"? >> We actually have an End of Availability and End of Support coming up for vSphere 5.5. That's going to be coming up in here later this year in September-October timeframe. So you're not going to be able to open up a support request for that. This is a perfect time to start planning that upgrade to get up to at least 6.0, if not 6.5. And the other thing to keep in mind is that we've announced deprecation for the Windows version of vSphere. Moving forward past our next numbered release, that's going to be all vCenter Server Appliance from that point forward. Now we also have a really great tool that's called the VCSA Migration tool that you can use to help you migrate from Windows to the Appliance. Super simple, very straightforward, gives you a migration assistant to even point out some of those places where you might miss if you did it on your own. So that's a really great tool and really helps to remove that pain out of that process. >> Yeah, it's good, you've got a mix of a little bit of the stick, you got to get off! (Kyle chuckles) I know a lot of people still running 5.5 out there as well as there's the carrot out there. All the good stuff that's going to get you going. All right, hey, Kyle, last thing I want to ask is 2018. Boy, there's a lot of change going on in the industry. One, how do you keep up with everything, and two, what's exciting you about what's happening in the industry right now? >> As far as what excites me right now, Python. That's been something that's been coming up a lot more with the folks that I'm talking to. Even today, just at lunch, I was talking to somebody and they were bringing up Python. I'm like, "Wow!" This is something that keeps coming up more and more often. I'm using a lot more of my time, even my personal time, to start looking at that. And so when you start hearing the passion of people who are using some of these new technologies, that's when I start getting interested because I'm like, "Hey, if you're that interested, "and you're that passionate about it, "I should be too." So that's kind of what drives me to keep learning and to keep up with all of the latest and greatest things that are out there. Plus when you have events like this, you can go talk to some of the sponsors. You can talk and see what they're doing, how to make use of their product, and some of their automation frameworks, and with what programming languages. That kind of comes back to Python on that one because a lot more companies are releasing their automation tools for use with Python. >> Yeah, and you answered the second part of my question probably without even thinking about it. The passion, the excitement, talking to your peers, coming to events like this. All right, Kyle Ruddy, really appreciate you joining us here. We'll be back with more coverage here from the VTUG Winter Warmer 2018. I'm Stu Miniman. You're watching theCube. (energetic music)

Published Date : Jan 30 2018

SUMMARY :

it's theCube! I happen to have on the program I'm happy to be here. "What I'd like to do is come up in the 20s. so they gave me a chance to talk about something on the program but he said Kyle's going to be way better. I did that for about 10 to 12 years of the IT span for five to six years. Yeah, one of the things we always emphasize, that kind of input and guidance. even though I hear this one here has tended to be We even kind of relayed to the audience and get back to something that I cared about And to repeat what you were just saying, and feed that because it's going to grow. "and here are some of the things that are going to help you"? And the other thing to keep in mind is that All the good stuff that's going to get you going. and to keep up with all of the latest and greatest things Yeah, and you answered the second part of my question

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jonathan FrappierPERSON

0.99+

KylePERSON

0.99+

Kyle RuddyPERSON

0.99+

fiveQUANTITY

0.99+

JonPERSON

0.99+

FloridaLOCATION

0.99+

seven yearsQUANTITY

0.99+

FrappierPERSON

0.99+

PythonTITLE

0.99+

2018DATE

0.99+

second partQUANTITY

0.99+

fifth yearQUANTITY

0.99+

VMwareORGANIZATION

0.99+

12th yearQUANTITY

0.99+

StuPERSON

0.99+

Stu MinimanPERSON

0.99+

six yearsQUANTITY

0.99+

vSphere 6.5TITLE

0.99+

Gillette StadiumLOCATION

0.99+

WindowsTITLE

0.99+

over 15 yearsQUANTITY

0.99+

todayDATE

0.99+

twoQUANTITY

0.98+

first-timeQUANTITY

0.98+

first oneQUANTITY

0.98+

vSphereTITLE

0.98+

one versionQUANTITY

0.98+

Foxborough, MassachusettsLOCATION

0.98+

New EnglandLOCATION

0.98+

OctoberDATE

0.98+

SeptemberDATE

0.97+

firstQUANTITY

0.97+

6.5QUANTITY

0.97+

about two yearsQUANTITY

0.96+

later this yearDATE

0.96+

OneQUANTITY

0.96+

12 yearsQUANTITY

0.95+

about 10QUANTITY

0.94+

vSphere 5.5TITLE

0.94+

VTUGEVENT

0.94+

VTUG Winter Warmer 2018EVENT

0.94+

PostmanTITLE

0.93+

VMUGsORGANIZATION

0.87+

SiliconeANGLEORGANIZATION

0.87+

a thingQUANTITY

0.87+

RESTTITLE

0.86+

VMwareTITLE

0.86+

AnsibleORGANIZATION

0.84+

oneQUANTITY

0.82+

VCSATITLE

0.82+

at least 6.0QUANTITY

0.74+

theCubeORGANIZATION

0.72+

vExpertORGANIZATION

0.68+

Winter WarmerEVENT

0.68+

5.5QUANTITY

0.66+

theCubeCOMMERCIAL_ITEM

0.65+

One of the challengesQUANTITY

0.62+

vCenterTITLE

0.61+

Indy VMUGORGANIZATION

0.6+

ton of blog postsQUANTITY

0.56+

singleQUANTITY

0.54+

Indianapolis VMUGORGANIZATION

0.54+

AnsibleTITLE

0.53+

20sQUANTITY

0.51+

101QUANTITY

0.44+

coupleQUANTITY

0.38+

VTUGsORGANIZATION

0.32+

Krishna Subramanian, Komprise | CUBEConversation Dec 2017


 

(techy music playing) >> Hey, welcome back, everybody. Jeff Frick here at the CUBE, we're in our Palo Alto Studios for a CUBE Conversation. You know, it's kind of when we get a break, we're not at a show. It's a little bit quieter, a little calmer situation so we can have a little bit different kinds of conversations and we're excited to have our next guest and talk about a really important piece of this whole cloud thing, which is not only do you need to turn things on, but you need to also turn them off and that's what gets people in trouble, I think, on the cost comparison. We're joined by Krishna Subramanian, she is the co-founder and COO of Komprise, welcome. >> Thank you, thanks for having me on the show. >> Absolutely, so just real briefly for people that aren't familiar, just give them kind of the overview of Komprise. >> Komprise is the only solution that provides analytics and data management in a single package and the reason we started the company is because customers told us that they're literally drowning in data these days. As data for print continues to grow, a lot of it is in unstructured data and data, you know, what's unique about it is that you never just keep one copy of data because if your data is lost, like if your child's first year birthday picture is lost you wouldn't like that, right? >> Jeff: Do not bring that kind of stuff up in an interview. (laughs) We don't want to talk about lost photographs or broken RAID boxes, that's another conversation, but yes, you do not want to lose those pictures. >> So, you keep multiple copies. >> Right, right. >> And that's what businesses do. They usually keep a DR copy, a few backup copies of their data, so if you have 100 terabytes of data you probably have three to four copies of it, that's 400 terabytes and if 70% of that data hasn't been touched in over six months 280 of your 400 terabytes is being actively managed for no reason. >> Jeff: Right, right. >> And Komprise analyzes and finds all that data for you and shows you how much you can save by managing it at lower cost, then it actually moves and archives and reduces the cost of managing that data so you can save 70% or more on your storage. >> Right, so there's a couple components to that that you talked about. So, break it down a little bit more. One is how actively is the data managed, how hot is the data, you know, what type of storage the data is based on, its importance, its relevance and how often you're accessing it. So, one of the big problems, if I heard you right, is you guys figure out what stuff is being managed that way, as active, high value, sitting on flash, paying lots of money, that doesn't need to be. >> That's exactly right, we find that all the cold data on your current storage... We show you how much more you're spending to manage that data than you need to. >> So, how do you do that in an environment where, you know, that data is obviously connected to applications, that data might be in my data center, it could be Amazon or could be at GCP, how do you do that without interfering with my active applications on that data, because even though some of it might be ready for cold storage there might be some of it, obviously, that isn't. So, how do you manage that without impacting my operations? >> That's a great question, because really, you know, data management is like a good housekeeper. You should never know that the housekeeper is there, they should never get in the way of what you're doing, but they keep your house clean, right? And that's kind of what Komprise does for your data, and how do we do that? Well, we do that by being adaptive. So, Komprise connects to your storage just through open protocols. So, we don't make any changes to your environment and our software automatically slows itself down and runs in the background to not interfere with anything active on your storage. So, we are like a good partner to your storage. You don't even know we're there, we're invisible to all the active work and yet we're giving all these important analytics and when we move the data, all the data looks like it's still there, so it's fully transparent. >> Okay, you touched on a couple things. So, one is how do you sit there without impacting it? I think you said you partner with all the big data, or excuse me, all the big storage providers. >> Krishna: Yes. >> You partner with all the three big cloud providers, just won an award at re:Invent, congratulations. >> Krishna: Thank you. >> So, how do you do that, where does your software sit, does it sit in the data center or does it sit at Amazon and how does it interact with other management tools that I might already have in place? >> That's a great question, so Komprise runs as a hybrid cloud service, and essentially there is a console that's running in the cloud, but the actual analysis and data movement is done by virtual machines that are running at the customer's site and you literally just point our virtual machine at any storage you have and we work through standard protocols, through NFS, SMB CIFS, and REST S3, so whether you have NetApp storage or EMC storage or Windows File Servers or Hitachi NAS or you're putting data on Amazon or Azure or Google or an object storage, it doesn't actually matter. Komprise works with all those environments because we are working through open standards, and because we're adaptive we're automatically running in the background, so it's working through open standards and it's non-intrusive. >> Okay, and then if you designate that some percentage of this storage does not need to be in the high, expensive environment, you actually go to the next step and you actually help manage it and move it, so how does that impact my other kind of data management procedures? >> Yes, so it's a great question. So, most of the time you would probably have some DR copy and some backups running on your hot storage, on your flash storage, say, and you don't want to change that and you don't want users to point anywhere else, so what Komprise does is it takes the cold data from all that storage and when it moves that data it's fully transparent. The moved data looks like it's still there on that storage, it's just that the footprint is reduced now, so for 100MB file you just have a one kilobyte link on that storage, and we don't use any stub files, we don't put any agents on the storage, so we don't make any changes to your active environment. It's fully transparent, users and applications think all the data is still there, but the data is now sitting in something lower cost and it's dynamically managed through open standards, just like you and I are talking now and I don't need a translator between us because we both understand English. >> Jeff: Right. >> But maybe if I were speaking Japanese you might need a translator, right? >> Jeff: I would, yeah. (laughs) Yes. >> Krishna: That was just a guess, I didn't know. So, that's kind of how we do it, we work through the open standards and in the past solutions were... We didn't do that, they would have a proprietary protocol and that's why they could only work with some storage and not all, and they would get in the way of all the access. >> But do I want it to look like it looked before if in fact it's ready to be retired into cold storage or Glacier or whatever, because I would imagine there's a reason and I don't know that I necessarily want the app to have access. I would imagine my access and availability of stuff that's in cold storage is very different kind of profile than the hot stuff. >> It depends, you know, sometimes some data you may want to truly archive and never be able to see it live. Like, maybe you're putting it in Glacier, and you can control how the data looks, but sometimes you don't want to interrupt what the applications are doing. You want to just go to a lower cost of storage, like an object storage on-premise. >> Right. >> But you still want the data accessible because you don't want a vague user and application behavior. >> Jeff: Right, right. >> Yeah. >> Okay, so give us a little bit more information on the company. So, you've been around for three years. We talked a little bit before we turned the cameras on, you know, kind of how many people do you have, how many customers, how many rounds of funding have you guys raised? >> Komprise is growing rapidly. We have about 60 people, we have a headquarters in Campbell, California, we also have offices in Bangalore, India. We just hired a new VP of worldwide sales and we're putting field sales teams in different regions, we have over 60 customers worldwide. Our customer base is growing rapidly. Just this last quarter we added about four times the number of customers, and we're seeing customers all the way from general mix and healthcare to big insurance and financial services companies, anywhere where there's data, you know. Universities, all the major research universities are our customers and government institutions, you know, state and local governments, et cetera. So, these are all good markets for us. >> Right, and you said it's a services, like a SAS model, so you charge based on how much data that's under management. >> Yeah, we charge for all the data that's under management and it's a fraction of what you pay to store the data, so our cost is like less than half a penny a gig a month. >> Right, it's pretty interesting, you know, we just got back from AWS re:Invent as well, over 40,000 people, it's bananas. But this whole kind of rent versus buy conversation is really interesting to me, and again, I always go back to Netflix. If anybody uses a massive amount of storage and a massive amount of network and computing where they own like, I don't know, 50% of the Friday night internet traffic, right, in the States is Netflix and they're still on Amazon. I think what's really interesting is that if you... The flexibility of the cloud to be able to turn things on really easily is important, but I think what people often forget is it's also you need to turn it off and so much activity around better managing your investment and the resources at Amazon to use what you need when you need it, but don't pay for what you don't need when you don't, and that seems to be, you know, something that you guys are right in line with and consistent with. >> Yeah, I think that's actually a good way to put it. Yeah, don't pay for data when you don't need to, right? You can still have it but you don't need to pay for it. >> Right, well Krishna, thanks for taking a few minutes out of your day to stop by and give us the story on Komprise. >> Yeah, thank you very much, thanks for having me. >> All right, pleasure, she's Krishna, I'm Jeff, you're watching the CUBE. We're at Palo Alto Studios, CUBE Conversation, we'll see you next time, thanks for watching. (techy music playing)

Published Date : Dec 21 2017

SUMMARY :

but you need to also turn them off for people that aren't familiar, that you never just keep one copy of data but yes, you do not want to lose those pictures. of data you probably have three to four copies of it, so you can save 70% or more on your storage. how hot is the data, you know, what type of storage to manage that data than you need to. So, how do you do that in an environment where, That's a great question, because really, you know, So, one is how do you sit there without impacting it? You partner with all the three big cloud providers, at the customer's site and you literally So, most of the time you would probably Jeff: I would, yeah. and in the past solutions were... different kind of profile than the hot stuff. and you can control how the data looks, accessible because you don't want kind of how many people do you have, you know, state and local governments, et cetera. Right, and you said it's a services, of what you pay to store the data, so our cost and that seems to be, you know, something that you guys Yeah, don't pay for data when you don't need to, right? to stop by and give us the story on Komprise. we'll see you next time, thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JeffPERSON

0.99+

Jeff FrickPERSON

0.99+

KrishnaPERSON

0.99+

AmazonORGANIZATION

0.99+

100MBQUANTITY

0.99+

50%QUANTITY

0.99+

100 terabytesQUANTITY

0.99+

Dec 2017DATE

0.99+

70%QUANTITY

0.99+

400 terabytesQUANTITY

0.99+

NetflixORGANIZATION

0.99+

Campbell, CaliforniaLOCATION

0.99+

Krishna SubramanianPERSON

0.99+

threeQUANTITY

0.99+

first yearQUANTITY

0.99+

AWSORGANIZATION

0.99+

four copiesQUANTITY

0.99+

KompriseORGANIZATION

0.99+

Bangalore, IndiaLOCATION

0.99+

one kilobyteQUANTITY

0.99+

Palo AltoLOCATION

0.98+

over 60 customersQUANTITY

0.98+

CUBEORGANIZATION

0.98+

GoogleORGANIZATION

0.98+

over 40,000 peopleQUANTITY

0.98+

HitachiORGANIZATION

0.98+

OneQUANTITY

0.98+

EnglishOTHER

0.98+

one copyQUANTITY

0.98+

over six monthsQUANTITY

0.97+

GCPORGANIZATION

0.97+

WindowsTITLE

0.97+

about 60 peopleQUANTITY

0.97+

single packageQUANTITY

0.96+

Friday nightDATE

0.96+

bothQUANTITY

0.96+

JapaneseOTHER

0.95+

oneQUANTITY

0.95+

last quarterDATE

0.95+

280QUANTITY

0.94+

three yearsQUANTITY

0.93+

less than half a penny a gig a monthQUANTITY

0.91+

three big cloud providersQUANTITY

0.9+

StatesLOCATION

0.89+

NetAppTITLE

0.89+

about four timesQUANTITY

0.83+

couple thingsQUANTITY

0.8+

SASORGANIZATION

0.79+

AzureORGANIZATION

0.73+

couple componentsQUANTITY

0.72+

GlacierTITLE

0.71+

REST S3TITLE

0.68+

KompriseTITLE

0.66+

Alto StudiosORGANIZATION

0.6+

PaloLOCATION

0.52+

SMBTITLE

0.51+

ConversationEVENT

0.5+

NFSTITLE

0.49+

InventEVENT

0.48+

CUBEConversationORGANIZATION

0.42+

GlacierORGANIZATION

0.4+

EMCORGANIZATION

0.37+

ConversationTITLE

0.33+

Amanda Whaley, Cisco | Cisco DevNet Create 2017


 

>> Narrator: Live from San Francisco it's The Cube. Covering Devnet Create 2017. Brought to you by Cisco. >> Welcome back everyone. Live in San Francisco this is The Cube's exclusive coverage of Cisco Systems inaugural DevNet Create event an augmentation, extension and build upon their successful three year old DevNet Developer Program. Our next guest is Amanda Whaley who's the director of development experience at Cisco DevNet. Congratulations Amanda on one DevNet being successful for three years and now your foray into DevNet Create which is some call it the hoodie crowd, the cloud native developers, open source, completely different animal but important. >> Yes. >> From DevNet. >> Absolutely so the hoodie crowd is more my tribe that's my background is from software development and I came to Cisco because I was intrigued when they reached out and said we want to start a developer community, we want to start a developer program. I talked to Suzie Wee for a long time about it and what was interesting to me was there were new problems to solve in developer experience. So we know how to do rest APIs, there's a lot of best practices around how you make those easy for developers to use. How you make very consumable and developer friendly and there's a lot of work to do there but we do know how to do that. When you start adding in hardware so IOT, network devices, infrastructure, collaboration, video, there's a lot of new interesting developer experience problems to solve. So I was really intrigued to join Cisco bringing my software developer background and coming from more the web and startup world, coming into Cisco and trying to tackle what's this new connection of hardware plus software and how do we do the right developer experience around... >> Okay so I have to ask you what was your story, take us through the day in the life as you enter in to Cisco, you have Suzie wooed you in you got into the tractor beam 'cause she's brilliant she's awesome and then you go woah I'm in Cisco. >> Amanda: Yeah! >> You're looking around what was the reaction? >> So what was interesting was so DevNet started three years ago at Cisco live we had our first DevNet developer zone within Cisco Live. That was actually my first day at Cisco so my first day at Cisco. >> Peter: Baptism by fire. >> Yes absolutely and so that was my first day at Cisco and Suzie talked to me and she said hey there's a lot of network engineers that want to learn how to code and they want to learn about rest APIs. Could you do like a coding 101 and start to teach them about that so literally my first day at Cisco I was teaching this class on what's a rest API, how do you make the call, how do you learn about that and then how do you write some Python to do that? And I thought is anyone interested in this that's here? And I had this room packed with network engineers which I at that time I mean I knew some networking but definitely nothing compared to the CCIEs that were in the audience. >> John: Hardcore plumber networking guys. >> Yeah very very yeah. And so I taught the course and it just like caught on like wildfire they were so excited about because they saw this is actually pretty accessible and easy to do and one thing that stood out was we made our first rest call from Python and instead of getting your twitter followers or something like that it retrieved a list of network devices. You got IP addresses back and so it related to their world and so I think it was very fortunate that I had that on my first day 'cause I had an instant connection to what that community... >> They're like who is she's awesome come on! >> Co-Ost: Gimme that code! >> You're like ready to go for a walk around the block now come on kindergartners come on out. No but these network guys they're smart >> Really smart. so they can learn I mean it's not like they're wet behind the ears in terms smarts it's just new language for them. >> And that was the point of the class was like you guys are super smart you know all of this you just need some help getting tarted on this tooling. And so many of them I keep up with them on Twitter and other places and they have taken it so far beyond and they just needed that start and they were off to the races. So that's been really interesting and then the other piece of it has been working in our more app developer technologies as developer experience for DevNet I get to work across collaboration, IOT, Networking, data center like the whole spectrum of Cisco technologies. So on the other side in application we have Cisco Spark they have javascript SDKs and it's very developer friendly and so that is kind of going back to my developer tribe and bringing them in and saying to you want to sell to the enterprise, do you want to work with the enterprise, Cisco's got a lot to offer and there's a lot of interesting things to do there. >> Yeah a lot of them have Cisco networks and gear all around the place so it's important. Now talk about machine learning and AI the hottest trend on the planet right now in your tribe and in developer tribe a lot of machine learning going on and machine learning's been around data center, networking guys it's not new to them either so that's an interesting convergence point. IOT as a network device. >> Amanda: Right right. >> So you got IOT you got AI and machine learning booming, this seems like it's a perfect storm for the melting pot of... >> It really is so today in my keynote I talked a little bit about first of all why have I always liked working with the APIs and doing these integrations and I've always thought that it's what I like about it is the possibility you have a defined set of tools or Legos and then you can build them into whatever interesting thing you want to and I would say right now developers have a really interesting set of Legos, a new set of Legos because with sensors, whether that's an IOT sensor or a phone or a video camera or a piece of a switch in your data center a lot of those you can get information from them. So whatever kind of sensor it is plus easy connectivity and kind of connectivity everywhere plus could computing plus data equals like magic because now you can do now machine learning finally has enough data to do the real thing. My original background was chemical engineering and I actually did predictive model control and we did machine learning on it but we didn't have quite enough data. We couldn't store quite enough of it, we didn't have enough connectivity we couldn't really get there. And now it's like all of my grad school dreams are coming true and you can do all these amazing things that seemed possible then and so I think that's what DevNet Create has been about to me is getting the infrastructure, the engineers, the app developers together with the machine learning community and saying like now's the time there's a lot of interesting things we can build. >> And magic can come out of that. >> Magic yeah right! >> And you think about it that's chemical reaction. The chemistry of bringing multiple things together and there's experimentation sometimes it might blow up. >> Amanda: Hopefully not! >> Innovation you know has is about experimentation and Andy Jassy at Amazon web services I mean I've talked to him multiple times and him and Jeff Bezos consistently talk about do experiments try things and I think that is the ethos. >> It is and that is particularly our ethos in DevNet in fact in DevNet Create an experiment right a new conference let's get people together and start this conversation and see how it comes together. >> What's your reaction to the show here? The vibe your feeling? Feedback your getting? Observations. >> I'm so happy it's been great. I had someone tell mt today that this was the most welcome they had felt at any developer conference that they'd been to and I took that as a huge complement that they felt very comfortable, they liked the conversations they were having they were learning lots of new information so I think that's been good and then I think exactly that mix of infrastructure plus app developer that we were trying to put together is absolutely happening. I see it in the sessions I see it in the birds of a feather and there's a lot of good conversations happening around that. >> Question for you that we get all the time and it comes up on crowd chat I'd like to ask you the question just get your reaction to is what misperception of devops is out there that you would like to correct? If there could be one and you say you know it's not that what's your... >> The one that seems the most prevalent to me and I think it's starting to get some attention but it's still out there is that devops is just about about the tools. Like just pick the right devops tools. Docker docker docker or use puppet and chef and you're good you're devopsing and it's like that is not the case right? It's really a lot more about the culture and the way the teams work together so if there was anything I could, and the people right, so it's flipping the emphasis from what's the devops tool that you're using to how are you building the right culture and structure of people? That's the one I would correct. >> Suzie was on yesterday and Peter and Suzie had a little bit of a bonding moment because they recognize each other from previous lives HP and his old job and it brought up a conversation around what Peter also did at his old job at Metagroup where he talked about this notion of an infrastructure engineer and what's interesting. >> Peter: Infrastructure developer. >> I mean infrastructure developer sorry. That was normally like a network engineer. So the network engineer's now on the engineering side meeting with developers almost like there seems I can't put my finger on it just like I can feel it my knee weather patterns coming over that a new developer is emerging. And we've talked a little bit about it last night about this what is a full stack developer it doesn't stop at the database it can go all the way down to the network so you're starting to see the view a little bit of a new kind of developer. Kind of like when data science emerged from not being an analyst but to being an algorithms specialist meets data person. >> Right I think it's interesting and this shows up in a lot of different places. When I think about devops I think about this spectrum of the teams working and there's the infrastructure teams who are working on the most deepest layer of the infrastructure and you kind of build up through there into the Devops teams into the app dev teams into maybe even something sort of above the app dev team which would be like a low code solution where you're just using something like build.io or something like that. Something that we wouldn't normally think of as developers right. So that spectrum is broadening on both ends and people are moving down the stack and moving up the stack. The network engineers one of the things in DevNet we're working on is what we call the evolution of the network engineer and where is that going and network engineers have had to learn new technology before and now there's just a new set which includes automation and APIs and configuration management, infrastructures, code and so they're moving up the stack. And then developers are also starting to think I really want my application to run well on the network because if no one can use it then my application's not doing anything and so things like the optimized for business that we have with Apple where a developer can go in through an SDK and say I want to set these QOS settings so that my app gets treatment like that's a way that they're converging and I think that's really interesting. >> Peter: So one of the things that we've been working on at Wikibon I want to test this assumption by we've talked a little bit about it is the idea of a data zone. Where just as we use a security zone as a concept where everything that's in that zone and it's both the technologies there's governmental there's other types of, has this seized security characteristics and if it's going to be part of that conglomeration it must have these security characteristics. And we're no thinking you could do the same thing with data. Where you start saying so for example we talked earlier about the idea that the network is what connects places together and that developers think in terms of the places things are like the internet of things. I'm wondering if it's time for us to think in terms of the network in time or the network is time and not think in terms of where something is but think in terms of when it is. And whether or not that's going to become a very powerful way of helping developers think about the role that the network's going to play is the data available now because I have an event that I have to support now and it seems as though that could be one of those things that snaps this group, these two communities together to think it's in time that you're trying to make things happen and the network has to be able to present things in time and you have to be cognisant of in time. It's one of the reasons for example why restful is not the only way to do things. >> Right exactly. >> IOT thinks in time what do you think about that? >> Yeah I think that's really interesting and actually that's something we're diving in with our community on is so you've been a developer you've worked with rest services and now you're doing IOT well you need to learn a lot of new protocols and how to do things more in real time and that's a skill set that some developers maybe don't have they're interested in learning so we're looking at how do we help people along that way. >> John: Well data in motion is a big topic. >> Exactly yeah absolutely. And so I think and then the network, thinking about from a network provider like I need this data here at this time is very interesting concept and that starts to speak to what can be done at the edge which is obviously like an interesting concept for us. >> But also the role the network's going to play in terms of predicatively anticipating where stuff is and when it needs to be there. >> Yeah yeah I think that's a really interesting space. >> But it's programmable if you think about what' Cisco's always been good at and most network and ops guys is they've been good at policy based stuff and they really they know what events are they have network events right things happen all the time. Network management software principles have always been grounded in software so now how do you take that to bridging against hat's why I see a convergence. >> Amanda: We should have a conference around that. >> It's called DevNet Create. Okay so final question for you as you guys have done this how's your team doing with the talks was one going on behind us is a birds of a feather IOT session you've got a hack-a-thon over here. Pretty cool by design that we heard yesterday that it's not 90% Cisco it's 90% community 10% Cisco so this is not a Cisco coming in and saying hey we're in cloud native get used to us we're here you know. >> Absolutely not so it's I'm really proud of how my team came together around that so I have our team of developer evangelists who we connect with the developer community and we really look at our job as this full circle of we get materials out and learning and get people excited about using Cisco APIs and we also bring information back about like here's what customers think about using it, here's what the community's doing all of that. So when we started DevNet Create we set the stake in the ground of we want this to be way more community content than our content we produce ourselves. And so the evangelists did a great job of reaching out into communities, connecting with speakers, finding the content that we wanted to highlight to this audience and bringing it in so that the talks have been fabulous, the workshops have been a huge hit it's like standing room only in there and people getting a seat and not wanting to leave because they want to keep their seat and so they'll stay for four workshops in a row you know it's been amazing. >> I think it's great it's exciting for me to watch 'cause I know the developer goodness is happening. People are donating soft we see Google donating a lot of open source even Amazon on the machine learning you guys have a lot of people that open source but I got to ask you know within Cisco and it's ecosystem of a company we see a lot or Cisco on our Cube events that we go to. We go to 100 events last year we've been to 150 this year. We saw Dehli and Ciro we saw some Cisco folks there. Sapphire there's a deal with Century Link and Honna Cloud, Enterprise Cloud so there's Cisco everywhere. There's relationships that Cisco has, how are you looking at taking DevNet Create or are you going to stay a little bit decoupled, be more startup like and kind of figure that scene out or is that on the radar yet? >> So I think we know with starting DevNet Create for this first year what we really want to do is get foundation out there, stake in the ground, get a community started and get this conversation started. And we're really looking to in the iterative experimental way look at what comes out of this year and where the community really wants to take it. So I think we'll be figuring that out. >> John: So see what grows out of it. It's a thousand flowers kind of thing. >> Yeah and I think that it will be, we will always have the intention of keeping that we want to keep the mix of audience of infrastructure and app and we'll see how that grows so... >> Well Amanda congratulations to you, Rick and Suzie and the teams. I'd like to get some of those experts on the Cube interviews as soon as possible. >> Absolutely! >> And some crowd chats. You guys did an amazing IOT crowd chat. I'll share that out to the hashtag. >> That was really fun. >> Very collaborative you guys are a lot of experts and Cisco's got a lot of experts in hiding behind the curtain there you're bringing them out in public here. >> That's right. >> Congratulations. >> Thank you very much. >> We're here live with special inaugural coverage of DevNet Create, Cisco's new event. Cloud native, open source, all about the community. Like The Cube we care about that and we'll bring you more live coverage after this short break. >> Hi I'm April Mitchell and I'm the Senior director of Strategy and Planning for Cisco.

Published Date : May 24 2017

SUMMARY :

Brought to you by Cisco. and now your foray into DevNet Create and coming from more the web and startup world, Okay so I have to ask you what was your story, at Cisco live we had our first DevNet developer Yes absolutely and so that was my first day And so I taught the course and it just like the block now come on kindergartners come on out. so they can learn I mean it's not like they're and so that is kind of going back to and gear all around the place so it's important. for the melting pot of... and so I think that's what DevNet Create and there's experimentation sometimes and I think that is the ethos. It is and that is particularly our ethos The vibe your feeling? the birds of a feather and there's a lot like to ask you the question just get your reaction to and it's like that is not the case right? and it brought up a conversation around So the network engineer's now on of the infrastructure and you kind about the role that the network's going to play and how to do things more in real time that starts to speak to what can be done But also the role the network's and they really they know what events are Okay so final question for you so that the talks have been fabulous, but I got to ask you know within Cisco So I think we know with starting DevNet Create John: So see what grows out of it. of keeping that we want to keep Rick and Suzie and the teams. I'll share that out to the hashtag. in hiding behind the curtain there and we'll bring you more live coverage Hi I'm April Mitchell and I'm the Senior director

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
PeterPERSON

0.99+

Jeff BezosPERSON

0.99+

SuziePERSON

0.99+

AmandaPERSON

0.99+

Amanda WhaleyPERSON

0.99+

Andy JassyPERSON

0.99+

RickPERSON

0.99+

AmazonORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

Century LinkORGANIZATION

0.99+

Suzie WeePERSON

0.99+

JohnPERSON

0.99+

AppleORGANIZATION

0.99+

April MitchellPERSON

0.99+

three yearsQUANTITY

0.99+

yesterdayDATE

0.99+

last yearDATE

0.99+

todayDATE

0.99+

oneQUANTITY

0.99+

first dayQUANTITY

0.99+

90%QUANTITY

0.99+

GoogleORGANIZATION

0.99+

Cisco SystemsORGANIZATION

0.99+

PythonTITLE

0.99+

HPORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

firstQUANTITY

0.99+

last nightDATE

0.99+

bothQUANTITY

0.98+

Honna CloudORGANIZATION

0.98+

10%QUANTITY

0.98+

twitterORGANIZATION

0.98+

three years agoDATE

0.98+

this yearDATE

0.98+

WikibonORGANIZATION

0.98+

two communitiesQUANTITY

0.97+

Enterprise CloudORGANIZATION

0.97+

three year oldQUANTITY

0.97+

first rest callQUANTITY

0.97+

DevNet CreateEVENT

0.96+

DevNetTITLE

0.96+

TwitterORGANIZATION

0.95+

one thingQUANTITY

0.95+

100 eventsQUANTITY

0.95+

four workshopsQUANTITY

0.95+

Jack Berkowitz, Oracle - Oracle Modern Customer Experience #ModernCX - #theCUBE


 

(upbeat music) [Narrator] Live from Las Vegas. It's the CUBE, covering Oracle Modern Customer Experience 2017. Brought to you by Oracle. >> Welcome back everyone. We're live in Las Vegas here at the Mandalay Bay for Oracle's Modern Customer Experience conference, their second year. This is the CUBE, Silicon ANGLES flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier. My co-host Peter Burris, head of research at Wikibon.com. Our next guest is Jack Berkowitz who's the Vice President of Products and Data Science at Oracle. Well, great to have you on the CUBE. Thanks for coming on. >> Thanks a lot. >> Appreciate it. Love talking to the product guys, getting down and dirty on the products. So, AI is hot this year. It's everywhere. Everyone's got an AI in their product. What is the AI component in your product? >> Well, what we're working on is building truly adaptive experiences for people. So, we have a whole bunch of different techniques and technologies all of it comes together essentially to create a system that amplifies peoples capabilities. That's really the key thing. Two real important components. First of all, it's all about data. Everybody talks about it. Well, what we've put together is, in terms of consumers, is the largest collection of consumer data in the Oracle data cloud. So we take advantage of all that consumer data. We also have a lot of work going on with collecting business data, both Oracle originated data as well as partner data. We're bringing that all that together and it sets the context for the AI. Now on top of that we have not just the latest trends in terms of machine learning or neural networks or things like that, but we're borrowing concepts from advertising, borrowing concepts from hedge funds so that we can make a real-time system. It's all about real-time. >> You mentioned neural networks. A lot of stuff conceptually in computer science has been around literally for decades. What is, from your definition - obviously cloud creates a lot of data out there now, but what is AI these days? Because everyone now is seeing AI as a mainstream term. Even the word metadata, since Snowden's thing, is now a mainstream term. Who would have thought metadata and AI would be talked about at kitchen tables? >> Yeah. >> What is AI from your perspective? >> Yeah, from my perspective it's really about augmenting folks. It's really about helping people do things. So maybe we'll automate some very manual tasks out, right, that will free up people to have more time to do some other things. I don't think it's about replacing people. People are creative. We want to get people back to being creative and people are great at problem solving so let's get them that information. Let's get them aid so they can get back to it. >> And give them options. >> Give them options, exactly. Exactly. You know, if you can free up somebody from having to manipulate spreadsheets and all this other stuff so they can just get the answer and get on with things, people are happier. >> So Oracle is using first-person data and third-person data to build these capabilities, right? >> Jack: Yeah, exactly. >> How is that going to play out? How is Oracle going to go to a customer and say we will appropriately utilize this third-person data in a way that does not undermine your first-person rights or value proposition? >> That's a great question. So, privacy and respect has been sort of the principle we've been driving at here. So there's the mechanics of it. People can opt in. People can opt out. There's all the mechanics and the regulatory side of it but it's really about how do you use these things so that it doesn't feel creepy. How do you do this in a subtle way so that somebody accepts the fact that that's the case? And it's really about the benefit to the person as to whether or not they're willing to make that trade-off. A great example is Waze. Waze I use all the time to get around San Francisco traffic. You guys probably use it as well. Well, guess what? If you really think about it, Waze knows what time I leave the house in the morning, what time I come home. Uber knows that once a month I leave at 2:00 on a Sunday and come back a week later. So, as long as you think about that, I'm getting a benefit from Waze I'm happy to have that partnership with them in terms of my data and they respect it and so therefore it works. >> And that comes back to some of the broader concepts of modern customer experience. It is that quid pro quo that I'll take a little data from you to improve the service that I'm able to provide as measured by the increasing value customer experience that's provided. >> Yeah, that's right. I used to live in London and in London there's these stores where you can go in and that sales guy has been there for like twenty years and you just develop a relationship. He knows you. He knows your kids, and so sure enough, stationary store or whatever it is and he gives you that personal experience. That's a relationship that I've built. That really all we're trying to do with all of this. We're trying to create a situation where people can have relationships again. >> And he's prompted with history of knowing you, just give you a pleasant surprise or experience that makes you go wow. And that's data driven now. So how do you guys do that? Cause this is something that, you know, Mark Heard brought up in his keynote that every little experience in the world is a data touchpoint. >> Jack: Yeah. >> And digital, whatever you're doing, so how do you guys put that in motion for data because that means data's got to be freely available. >> Data's got to be freely available. One of the big things that we brought to bear with the Suite X is that the data is connected and the experiences are connected so really we're talking about adding that connected intelligence on top of that data. So, it's not just the data. In fact we talked about it last night. It's not just the data even from the CX systems from service, but even the feed of what inventory's going on in real-time. So I can tell somebody if something's broken, hey, tell you what. This store has it. You can go exchange it, in real-time. Instead of having to wait for a courier or things like that. So it is that data being connected and the fact that our third-party data, you know this consumer data, is actually connected as well. So we bring that in on the fly with the appropriate context so it just works. >> So one of the new things here is the adaptive intelligence positioning products. What is that and take a minute to explain the features of how that came to be and how it's different from the competition. >> Okay, great. So the products are very purposeful built apps that plug in and amplify Oracle cloud apps and you can actually put in a third-party capability if you happen to have it. So that's the capability and it's got the decision science and machine learning and the data. >> Peter: So give me an example of a product. >> So a product is adaptive intelligence offers which we were showing here. It gives product recommendations, gives promotions, gives content recommendations on websites but also in your email. If you go into the store you get the same stuff and we can then go and activate advertising campaigns to bring in more people based on those successful pick ups of products or promotions. Its a great example. Very constrained use case addressed? >> Peter: Fed by a lot of different data. >> Fed by a lot of different data. The reason why they're adaptive is because they happen in real-time. So this isn't a batch mode thing. We don't calculate it the day before. We don't calculate it a week before or every three hours. It's actually click by click for you, and for you, reacting and re-scoring and re-balancing. And so we can get a wisdom of the crowds going on and an individual reaction, click by click, interaction by interaction. >> This is an important point I think that's nuanced in the industry. You mentioned batch mode which talks about how things are processed and managed to real-time and the big data space is a huge transition whether you're looking at hadoop or in memory or at all the architectures out there from batch data lakes to data in motion they're calling it. >> Yeah, exactly. >> So now you have this free flowing scalable data layers, if you will, every where, so being adaptive means what? Being ready? Being ... >> Being ready is the fundamental principle to getting to being adaptive. Being adaptive is just like this conversation. Being able to adjust, right? And not giving you the same exact answer seven times in a row because you asked me the same question. >> Or if it's in some talking point database you'd pull up from a FAQ. >> Peter: So it adapts to context. >> It's all about adapting to context. If the concepts change, then the system will adopt that context and adapt it's response. >> That's right. And we were showing last night, even in the interaction, as more context is given, the system can then pick that up and spin and then give you what you need? >> The Omni Channel is a term that's not new but certainly is amplified by this because now you have a world certainly with multiple clouds available to customers but also data is everywhere. Data is everywhere and channels are everywhere. >> Data is everywhere. And being adaptive also means customizing something at a point and time >> Exactly. and you might not know what it is up until seconds or near real-time or actually real-time. >> Real time, right? Real human time. 100 milliseconds. 150 milliseconds, anywhere in the world, is what we're striving for. >> And that means knowing that in some database somewhere you checked into a hotel, The Four Seasons, doing a little check in the hotel and now, oh, you left your house on Uber. Oh, you're the CEO of Oracle. You're in a rental car. I'm going to give you a different experience. >> Jack: Yeah. >> Knowing you're a travel warrior, executive. That's kind of what Mark Heard was trying to get to yesterday. >> Yeah, that's what he's getting to. So it's a bit of a journey, right? This is not a sprint. So there's been all this press and you think, oh my god, if I don't have ... It's a journey. It's a bit of a marathon, but these are the experiences that are happening. >> I want to pick up on 150 milliseconds is quite the design point. I mean human beings are not able to register information faster than about 80 milliseconds. >> Jack: Yeah, yeah. So you're talking about two brain cycles coming back to that. >> Jack: Yeah. >> I mean it's an analogy but it's not a bad one. >> Jack: No. >> 150 milliseconds anywhere in the world. That is a supreme design point. >> And it is what we're shooting for. Obviously there's things about networks and everything that have to be worked through but yeah, that responsiveness, but you're seeing that responsiveness at some of the big consumer sites. You see that type of responsiveness. That's what we want to get to. >> So at the risk of getting too technical here, how does multiple cloud integration or hopping change that equation? Is this one of the reasons it's going to drive customers to a tighter relationship with Oracle because it's going to be easier to provide the 150 millisecond response inside the Oracle fabric? >> Yeah, you nailed it. And I don't want to take too many shots at my competitors, but I'm going to. We don't have to move data. I don't have to move my data from me to AWS to some place else, right, Blue Mix, whatever it happens to be. And because we don't have to move data, we can get that speed. And because it's behind the fabric, as you put it, we can get that speed. We have the ability to scale the data centers. We have the data centers located where we need them. Now your recommendations, if you happen to be here today, they're here. They may transition to Sydney if you're in Australia to be able to give you that speed but that is the notion to have that seamless experience for you, even for travelers. >> That's a gauntlet. You just threw down a gauntlet. >> Jack: I did. Yeah. >> And that's what we're going to go compete against. Because what we're competing is on the experience for people. We're not competing on who's got the better algorithm. We're competing on that experience to people and everything about that. >> So that also brings up the point of third-party data because to have that speed certainly you have advantages in your architecture but humans don't care about Oracle and on which server. They care about what's going on on their phone, on their mobile. >> Jack: That's right. >> Okay, so the user, that requires some integration. So it won't be 100 percent Oracle. There's some third-party. What's the architecture, philosophy, guiding principles around integrating third-party data for you guys. Because it's certainly part of the system. It's part of the product, but I don't think it's ... >> So there's third=party data which could be from data partners or Oracle originated data through our Oracle data cloud or the 1500 licensed data partners there and there's also third-party systems. So for example if somebody had Magento Commerce and they wanted to include that into our capability. On the third party systems, we actually have built this around an API architecture or infrastructure using REST and it's basically a challenge I gave my PMs. I said look, I want you to test against the Oracle cloud system. I want you to test against the Oracle on-prem system and I want you to find the leading third-party system. I don't care if it's sales force or anybody else and I want you to test against that and so as long as people can map to the REST APIs that we have, they can have inter-operation with their systems. >> I mean the architectural philosophy is to decouple and make highly cohesive elements and you guys are a big part of that with Oracle as a component. >> Jack: That's right. >> But I'm still going to need to get stuff from other places and so API is a strategy and microservices are all going to be involved with that. >> Yeah, and actually we deployed a full microservice architecture so behind the scenes on that offers one, 19 microservices interplaying and operating. >> But the reality is this is going to be one of the biggest challenges that answers faces is that how we bridge, or how we gateway, cloud services from a lot of different providers is a non-trivial challenge. >> Jack: That's right. >> I remember back early on in my career when we had all these mini computer companies and each one had their own proprietary network on the shop floor for doing cell controllers or finance or whatever it might be and when customers wanted to bring those things together the mini computer companies said, yeah, put a bridge in place. >> Yeah, exactly. >> And along came TCPIP and Cisco and said forget that. Throw them all out. It wasn't the microprocessor that couldn't stick to those mini computer companies. It was TCPIP. The challenge that we face here is how are we going to do something similar because we're not going to bridge these things. The latency and the speed, and you hit the key point, where is the data, is going to have an enormous impact on this. >> That's right. And again, the investments we have been making with the CX Cloud Suite will allow us to do that. Allow us to take advantage with a whole bunch of data right away and the integration with the ODCs, so we couldn't probably have done this two or three years ago because we weren't ready. We're ready now. And now we can start to build it. We can start to take it now up to the next level. >> And to his point about the road map and TCPIP was interesting. We're all historians here. We're old enough to remember those days, but TCPIP standardized the OSI model which was a fantasy of seven layers of open standards if you remember. >> Jack: Seven layers, yep, whew. >> Peter: See we still talk about it. >> What layer are you on? >> But at the time, the proprietary was IBM and DEC owned the network stacks so that essentially leveled off there so the high-water mark was operating at TCPIP. Is there an equivalent analog to that in this world because IF you can almost take what he said and say take it to the cloud and say look at some point in this whatever stack you want to call it, if it is a stack, there has to be a moment of coalescing around something for everybody. And then a point of differentiation. >> So yeah, and again I'm just going to go back - and that's a great question by the way and it's - I'm like thinking this through as I say it, but I'm going to go right back to what I said. It's about people. So if I coalesce the information around that person, whether that person is a consumer or that person's a sales guy or that person's working on inventory management or better yet disaster relief, which is all those things put together. It's about them and about what they need. So if I get that central object around people, around companies then I have something that I can coalesce and share a semantic on. So the semantic is another old seven layer word. I didn't want to say it today but I can have ... >> Disruptive enabler. >> So then what you're saying is that we need a stack, and I use that word prohibitively, but we need a way of characterizing layer seven application so that we have ... >> Or horizontal >> Either way. But the idea is that we need to get more into how the data gets handled and not just how the message gets handled. >> Jack: That's right. >> OSI's always focused on how the message got handled. Now we're focused on how the data gets handled given that messaging substraight and that is going to be the big challenge for the industry. >> Jack: Yeah. >> Well, certainly Larry Ellis is going to love this conversation, OSI, TCPIP, going old school right here. >> Jack: Like you said, we're all old and yeah, that's what we grew up in. >> Yeah, but this is definitely ... >> Hey, today's computers and today's notions are built on the shoulders of giants. >> Well the enabling that's happening is so disruptive it's going to be a 20 or 30 year innovation window and we're just at the beginning. So the final question I have for you Jack is summarize for the folks watching. What is the exciting things about the AI and the adaptive intelligence announcements and products that you guys are showing here and how does that go forward into the future without revealing any kind of secrets on Oracle like you're a public company. What's the bottom line? What's the exciting thing they should know about? >> I think the exciting thing is that they're going to be able to take advantage of these technologies, these techniques, all this stuff, without having to hire a thousand data scientists in a seven month program or seven year program to take advantage of it. They're going to be able to get up and running very, very quickly. They can experiment with it to be able to make sure that it's doing the right thing. From a CX company, they can get back to doing what they do which is building great product, building great promotions, building a great customer service experience. They don't have to worry about gee, what's our seven year plan for building AI capabilities? That's pretty exciting. It lets them get back to doing what they do which is to compete on their products. >> And I think the messaging of this show is really good because you talk about empowerment, the hero. It's kind of gimmicky but the truth is what cloud has shown in the world is you can offload some of those mundane stuff and really focus on the task at hand, being creative or building solutions, or whatever you're doing. >> Yeah. Mark was talking about it. You have this much money to spend, what's my decision to spend it on. Spend it on competing with your products. >> All right, Jack Berkowitz live here inside the CUBE here at Oracle's Modern Customer Experience, talking about the products, the data science, AI's hot. Great products. Thanks for joining us. Appreciate it. Welcome to the CUBE and good job sharing some great insight and the data here. I'm John Furrier with Peter Burris. We'll be back with more after this short break. (upbeat music)

Published Date : Apr 26 2017

SUMMARY :

Brought to you by Oracle. Well, great to have you on the CUBE. What is the AI component in your product? and it sets the context for the AI. Even the word metadata, since Snowden's thing, Let's get them aid so they can get back to it. from having to manipulate spreadsheets And it's really about the benefit to the person And that comes back to some of the broader concepts or whatever it is and he gives you that personal experience. that every little experience in the world got to be freely available. One of the big things that we brought to bear What is that and take a minute to explain the features and machine learning and the data. to bring in more people based on those successful pick ups We don't calculate it the day before. and the big data space is a huge transition So now you have this free flowing scalable data layers, Being ready is the fundamental principle Or if it's in some talking point database If the concepts change, then the system will adopt and then give you what you need? available to customers but also data is everywhere. Data is everywhere. and you might not know what it is 150 milliseconds, anywhere in the world, I'm going to give you a different experience. to get to yesterday. So there's been all this press and you think, is quite the design point. coming back to that. 150 milliseconds anywhere in the world. that have to be worked through but yeah, but that is the notion to have that seamless experience That's a gauntlet. Jack: I did. We're competing on that experience to people because to have that speed certainly It's part of the product, but I don't think it's ... and so as long as people can map to the REST APIs I mean the architectural philosophy is to decouple and microservices are all going to be involved with that. full microservice architecture so behind the scenes on But the reality is this is going to be one on the shop floor for doing cell controllers or finance The latency and the speed, and you hit the key point, And again, the investments we have been making And to his point about the road map and say take it to the cloud and say look and that's a great question by the way so that we have ... But the idea is that we need to get more OSI's always focused on how the message got handled. to love this conversation, OSI, TCPIP, Jack: Like you said, we're all old and yeah, are built on the shoulders of giants. and how does that go forward into the future without It lets them get back to doing what they do in the world is you can offload some of those mundane stuff You have this much money to spend, and the data here.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Mark HeardPERSON

0.99+

Peter BurrisPERSON

0.99+

IBMORGANIZATION

0.99+

Jack BerkowitzPERSON

0.99+

Larry EllisPERSON

0.99+

AustraliaLOCATION

0.99+

LondonLOCATION

0.99+

OracleORGANIZATION

0.99+

MarkPERSON

0.99+

JackPERSON

0.99+

PeterPERSON

0.99+

UberORGANIZATION

0.99+

DECORGANIZATION

0.99+

SydneyLOCATION

0.99+

20QUANTITY

0.99+

John FurrierPERSON

0.99+

150 millisecondsQUANTITY

0.99+

CiscoORGANIZATION

0.99+

seven yearQUANTITY

0.99+

AWSORGANIZATION

0.99+

Las VegasLOCATION

0.99+

San FranciscoLOCATION

0.99+

30 yearQUANTITY

0.99+

SnowdenPERSON

0.99+

twenty yearsQUANTITY

0.99+

a week laterDATE

0.99+

100 millisecondsQUANTITY

0.99+

WazeORGANIZATION

0.99+

seven timesQUANTITY

0.99+

1500 licensed data partnersQUANTITY

0.99+

150 millisecondQUANTITY

0.99+

todayDATE

0.99+

yesterdayDATE

0.99+

Mandalay BayLOCATION

0.99+

100 percentQUANTITY

0.99+

bothQUANTITY

0.99+

oneQUANTITY

0.98+

Seven layersQUANTITY

0.98+

Blue MixORGANIZATION

0.98+

19 microservicesQUANTITY

0.98+

twoDATE

0.98+

last nightDATE

0.98+

FirstQUANTITY

0.98+

second yearQUANTITY

0.98+

Wikibon.comORGANIZATION

0.98+

each oneQUANTITY

0.98+

about 80 millisecondsQUANTITY

0.97+

OneQUANTITY

0.97+

Greg Benson, SnapLogic - AWS Summit SF 2017 - #AWSSummit - #theCUBE


 

>> Voiceover: Live from San Francisco it's theCUBE. Covering AWS Summit 2017. Brought to you by Amazon Web Services. (upbeat music) >> Hey welcome back to theCUBE live at the Moscone Center at the Amazon Web Services Summit San Francisco. Very excited to be here, my co-host Jeff Rick. We're now talking to the Chief Scientist and professor at University of San Francisco, Greg Benson of SnapLogic. Greg, welcome to theCUBE, this is your first time here we're excited to have you. >> Thanks for having me. >> Lisa: So talk to us about what SnapLogic is, what do you do, and what did announce recently, today, with Amazon Web Services? >> Greg: Sure, so SnapLogic is a data integration company. We deliver a cloud-native product that allows companies to easily connect their different data sources and cloud applications to enrich their business processes and really make some of their business processes a lot easier. We have a very easy-to-use what we call self-service interface. So previously a lot of the things that people would have to do is hire programmers and do lots of manual programming to achieve some of the same things that they can do with our product. And we have a nice drag-and-drop. We call it digital programming interface to achieve this. And along those lines, I've been working for the last two years on ways to make that experience even easier than it already is. And because we're Cloud-based, because we have access to all of the types of problems that our customers run into, and the solutions that they solve with our product, we can now leverage that, and use it to harness machine-learning. We call this technology Iris, is what we're calling it. And so we've built out this entire meta-data framework that allows us to do data science on all of our meta-data in a very iterative and rapid fashion. And then we look for patterns, we look for historical data that we can learn from. And then what we do is we use that to train machinery and algorithms, in order to improve the customer experience in some way. When they're trying to achieve a task, specifically the first product feature that is based on the Iris technology is called the Integration Assistant. And the Integration Assistant is a very practical tool that is involved in the process of actually building out these pipelines. We call, when you build a pipeline it consists of these things called snaps, right? Snaps encapsulate functionality and then you can connect these snaps together. Now, it's often challenging when you have a problem to figure, OK, it's like a puzzle what snaps do I put together, and when do I put them together? Well, now that we've been doing this for a little while and we have quite a few customers with quite a few pipelines, we have a lot of knowledge about how people have solved those puzzles in the past. So, what we've done with Iris, is we've learned from all of those past solutions and now we give you automatic suggestions on where you might want to head next. And, we're getting pretty good accuracy for what we're predicting. So, we're basically, and this integration system is, a recommendation engine for connecting snaps into your pipelines as they're developing. So it's a real-time assistant. >> Jeff: So if I'm getting this right, it's really the intelligence of the crowd and the fact that you have so many customers that are executing many of the similar, same processes that you use as the basis to start to build the machine-learning to learn the best practices to make suggestions as people are going through this on their own. >> Greg: That's absolutely right. And furthermore, not only can we generalize from all of our customers to help new customers take advantage of this past knowledge, but what we can also do is tailor the suggestions for specific companies. So as you, as a company, as you start to build out more solutions that are specific to your problems, your different integration problems... >> Jeff: Right. >> The algorithms can now be, can learn from those specific things. So we both generalize and then we also make the work that you're doing easier within your company. >> And what's the specific impact? Are there any samples, stories you can share of what is the result of this type of activity? >> Greg: We're just, we're releasing it in May. >> Jeff: Oh OK. >> So it's going to be generally available to customers. >> Couple weeks still. >> Greg: Yeah. So... So... And... So... So we've done internal tests, so we've dove both through sort of the data science, so the experimentation to see, to feed it and get the feedback around how accurately it works. But we've also done user studies and what the user studies, not only did the science show but the user studies show that it can improve the time to completion of these pipelines, as you're building them. >> Lisa: So talk to us a little bit about who your target audience is. We're AWS, as we said. They really started 10 years ago in the start of space and have grown tremendous at getting to enterprise. Who is the target audience for SnapLogic that you're going after to help them really significantly improve their infrastructure get to the cloud, and beyond? >> Greg: So, so, so basically, we work with, largely with IT organizations within enterprises, who are, you know, larger companies are tasked with having sort of a common fabric for connecting, you know, which in an organization is lots of different databases for different purposes, ERP systems, you know, now, increasingly, lots of cloud applications and that's where part of our target is, we work with a lot of companies that still have policies where of course their data must be behind their firewall and maybe even on their premise, so our technology, while we're... we're hosted and run in the cloud, and we get the advantage of the SAS, a SAS platform, we also have the ability to run behind a firewall, and execute these data pipelines in the security domains of the customers themselves. So, they get the advantage of SAS, they get the advantage of things like Iris, and the Integration Assistant, right, because we can leverage all of the knowledge, but they get to adhere to any, you know, any regulatory or security policies that they have. And we don't have to see their data or touch their data. >> Lisa: So helping a customer that was, you know, using a service-oriented architecture or an ETL, modernize their infrastructure? >> Greg: Oh it's completely about modernization. Yeah, I mean, we, you know, our CEO, Gaurav Dhillon has been in the space for a while. He was formerly the CEO of Informatica. And so he has a lot of experience. And when he set out to start SnapLogic he wanted to look, you know, embrace the technologies of the time, right? So we're web-focused, right? We're HTTP and REST and JSON data. And we've centered the core technologies around these modern principles. So that makes us work very well with all the modern applications that you see today. >> Jeff: Look Greg, I want to shift gears a little bit. >> Greg: Yeah. >> You're also a professor. >> Greg: Correct. >> At University of San Francisco and UC Davis. I'd just love to get your perspective from the academic side of the house on what's happening at schools, around this new opportunity with big data, machine-learning, and AI and how that world is kind of changing? And then you are sitting in this great position where you kind of cross-over both... How does that really benefit, you know, to have some of that fresh, young blood, and learning, and then really take that back over, back into the other side of the house? >> Greg: Yeah, so a couple of things. Yeah, professor at University of San Francisco for 19 years. I did my PhD at UC Davis in computer science. And... My background is research in operating systems, parallel and distributed computing, in recent years, big data frameworks, big data processing. And University of San Francisco, itself, we have a, what we call the Senior and Masters Project Programs. Where, we've been doing this for, ever since I've been at USF, where what we do is we partner groups of students with outside sponsors, who are looking for opportunities to explore a research area. Maybe one that they can't allocate, you know, they can't justify allocating funds for, because it's a little bit outside of the main product, right? And so... It's a great win, 'cause our students get experience with a San Francisco, Silicon Valley company, right? So it helps their resume. It enhances their university experience, right? And because, you know, a lot of research happens in academia and computer science but a lot of research is also happening in industry, which is a really fascinating thing, if you look at what has come out of some of the bigger companies around here. And we feel like we're doing the same thing at SnapLogic and at the University of San Francisco. So just to kind of close that loop, students are great because they're not constrained by, maybe, some of us who have been in the industry for a while, about maybe what is possible and what's no so possible. And it's great to have somebody come and look at a problem and say, "You know, I think we could approach this differently." And, in fact, really, the impetus for the Integration Assistant came out of one of these projects where I pitched to our students, and I said "OK, we're going to explore SnapLogic meta-data and we're going to look at ways we can leverage machine-learning in the product on this data." But I left it kind of vague, kind of open. This fantastic student of mine from Thailand, his name is Jump, he kind of, he spent some time looking at the data and he actually said, "You know I'm seeing some patterns here. I'm seeing that, you know, we've got this great repository of these," like I described, "of these solved puzzles. And I think we could use that to train some algorithms." And so we spent, in the project phase, as part of his coursework, he worked on this technology. Then we demoed it at the company. The company said, "Wow, this is great technology. Let's put this into production." And then, there was kind of this transition from sort of this more academic, sort of experimental project into, going with engineers and making it a real feature. >> Lisa: What a great opportunity though, not just for the student to get more real-world applicability, like you're saying, taking it from that very experimental, investigational, academic approach and seeing all of the components within a business, that student probably gets so much more out of just an experiment. But your other point is very valid of having that younger talent that maybe doesn't have a lot of the biases and the pre-conceived notions that those of us that have been in the industry for a while. That's a great pipeline, no pun intended... >> Greg: Sure. >> For SnapLogic, is that something that you helped bring into the company by nature of being a professor? Just sort of a nice by-product? >> Well, so a couple of things there. One is that, like I said, University of San Francisco we were running this project class for a while, and... I got involved, you know, I had been at USF for a long time before I got involved with SnapLogic. I was introduced to Gaurav and there was this opportunity. And initially, right, initially, I was looking to apply some of my research to the technology, their product and their technology. But then it became clear that hey, you know we have this infrastructure in place at the university, they go through the academic training, our students are, it's a very rigorous program, back to your point about what they are exposed to, we have, you know, we're very modern, around big data, machine-learning, and then all of the core computer science that you would expect from a program. And so, yeah, it's been... It's been a great mutually beneficial relationship with SnapLogic and the students. But many other companies also come and pitch projects and those students also do similar types of projects at other companies. I would like to say that I started it at USF but I didn't. It was in existence. But I helped carry it forward. >> Jeff: That's great. >> Lisa: That is fantastic. >> And even before we got started, I mean you said your kind of attitude was to be the iPhone in this space. >> Greg: Of integration, yeah. >> Jeff: So again, taking a very different approach a really modern approach, to the expected behavior of things is very different. And you know, the consumerization of IT in terms of the expected behavior of how we interact with stuff has been such a powerful driver in the development of all these different applications. It's pretty amazing. >> Greg: And I think, you know, just like maybe, now you couldn't imagine most sort-of consumer-facing products not having a mobile application of some sort, increasingly what you're seeing is applications will require machine-learning, right, will require some amount of augmented intelligence. And I would go as far to say that the technology that we're doing at SnapLogic with self-service integration is also going to be a requirement. That, you just can't think of self-service integration without having it powered by a machine-learning framework helping you, right? It almost, like, in a few years we won't imagine it any other way. >> Lisa: And I like the analogy that Jeff, you just brought up, Greg, the being the iPhone of data integration. The simplicity message, something that was very prevalent today at the keynote, about making things simpler, faster, enabling more. And it sounds like that's what you're leveraging computer science to do. So, Greg Benson, Chief Scientist at SnapLogic. Thank you so much for being on theCUBE, you're now CUBE alumni, so that's fantastic. >> Alright. >> Lisa: We appreciate you being here and we appreciate you watching. For my co-host Jeff Rick, I'm Lisa Martin, again we are live from the AWS Summit in San Francisco. Stick around, we'll be right back. (upbeat music)

Published Date : Apr 19 2017

SUMMARY :

Brought to you by Amazon Web Services. live at the Moscone Center at the and now we give you automatic suggestions and the fact that you have so many customers that are more solutions that are specific to your problems, make the work that you're doing easier so the experimentation to see, to feed it Lisa: So talk to us a little bit about but they get to adhere to any, you know, any regulatory all the modern applications that you see today. How does that really benefit, you know, And because, you know, a lot of research happens not just for the student to get more real-world we have, you know, we're very modern, And even before we got started, I mean you said And you know, the consumerization of IT Greg: And I think, you know, just like maybe, And it sounds like that's what you're leveraging and we appreciate you watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JeffPERSON

0.99+

Jeff RickPERSON

0.99+

GregPERSON

0.99+

LisaPERSON

0.99+

Greg BensonPERSON

0.99+

Lisa MartinPERSON

0.99+

Amazon Web ServicesORGANIZATION

0.99+

Gaurav DhillonPERSON

0.99+

AWSORGANIZATION

0.99+

SnapLogicORGANIZATION

0.99+

ThailandLOCATION

0.99+

MayDATE

0.99+

19 yearsQUANTITY

0.99+

InformaticaORGANIZATION

0.99+

UC DavisORGANIZATION

0.99+

University of San FranciscoORGANIZATION

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

San FranciscoLOCATION

0.99+

first timeQUANTITY

0.99+

10 years agoDATE

0.98+

bothQUANTITY

0.98+

Silicon ValleyLOCATION

0.98+

Moscone CenterLOCATION

0.98+

GauravPERSON

0.98+

JumpPERSON

0.98+

AWS SummitEVENT

0.97+

CUBEORGANIZATION

0.97+

todayDATE

0.96+

#AWSSummitEVENT

0.94+

AWS Summit 2017EVENT

0.93+

Amazon Web Services SummitEVENT

0.93+

Couple weeksQUANTITY

0.93+

USFORGANIZATION

0.92+

IrisTITLE

0.92+

OneQUANTITY

0.91+

first product featureQUANTITY

0.89+

AWS Summit SF 2017EVENT

0.87+

last two yearsDATE

0.83+

theCUBEORGANIZATION

0.8+

JSONOTHER

0.76+

IrisORGANIZATION

0.73+

RESTOTHER

0.67+

one of theseQUANTITY

0.66+