Jill Rouleau, Brad Thornton & Adam Miller, Red Hat | AnsibleFest 2020
>> (soft upbeat music) >> Announcer: From around the globe, it's theCUBE, with digital coverage of AnsibleFest 2020, brought to you by Red Hat. >> Hello, welcome to theCUBE's coverage of AnsibleFest 2020. We're not in person, we're virtual. I'm John Furrier, your host of theCUBE. We've got a great power panel here of Red Hat engineers. We have Brad Thornton, Senior Principal Software Engineer for Ansible networking; Adam Miller, Senior Principal Software Engineer for security; and Jill Rouleau, who's the Senior Software Engineer for Ansible Cloud. Thanks for joining me today. Appreciate it. Thanks for coming on. >> Thanks. >> Good to be here. >> We're not in person this year because of COVID, a lot going on, but still a lot of great news coming out of AnsibleFest this year. You guys have launched a lot since last year. It's been awesome. Launched the new platform, the Automation Platform; grown the collections and certified collections community from five supported platforms to over 50; launched the automation services catalog. Brad, let's start with you. Why are customers successful with Ansible in networking? >> Why are customers successful with Ansible in networking? Well, let's take a step back to a bit of classic network engineering, right? Lots of CLI interaction with the terminal, a real opportunity for human error there. Managing thousands of devices from the CLI becomes very difficult. I think one of the reasons why Ansible has done well in the networking space, and why a lot of network engineers find it very easy to use, is because you can still work at the CLI. But what we have the ability to do is pull information from the same CLI that you were using manually, and show that as structured data, and then let you return that structured data and push it back to the configuration. So what you get when you're using Ansible is a way to programmatically interface and do configuration management across your entire fleet. It brings consistency and stability, and speed really, to network configuration management. >> You know, one of the biggest, hottest areas is, you know, I always ask the folks in the cloud what's next after cloud, and pretty much unanimously it's edge, and edge is super important around automation, Brad. What's your thoughts on, as people start thinking about, okay, I need to have edge devices, how does automation play into that? And 'cause networking and edge, it's kind of hand in hand there. So what's your thought on that? >> Yeah, for sure. It really depends on what infrastructure you have at the edge. You might be deploying servers at the edge. You may be administering IoT devices, and really how you're directing that traffic either into edge compute or back to your data center. I think one of the places Ansible is going to be really critical is administering the network devices along that path from the edge, from IoT, back to the data center, or to the cloud. >> Jill, when it comes to cloud, what's your thoughts on that? Because when you think about cloud and multicloud, that's coming around the horizon, you're looking at kind of the operational model. We talked about this a lot last year around having cloud ops on premises and in the cloud. What should customers think about when they look at the engineering challenges and the development challenges around cloud? >> So cloud gets used for a lot of different things, right? 
But if we step back, cloud just means any sort of distributed applications, whether it's on prem in your own data center, on the edge, or in a public hosted environment, and automation is critical for making those things work when you have these complex applications that are distributed across, whether it's a rack, a data center, or globally. You need a tool that can help you make sense of all of that. You've got to... we can't manage things with just, "Oh, everything is on one box" anymore. Cloud really just means that things have been exploded out and broken up into a bunch of different pieces. And there's now a lot more architectural complexity, no matter where you're running that. And so I think if you step back and look at it from that perspective, you can actually apply a lot of the same approaches and philosophies to these new challenges as they come up, without having to reinvent the wheel of how you think about these applications, just because you're putting them in a new environment, like at the edge or in a public cloud or on a new private on-premise solution. >> It's interesting, you know, I've been really loving the cloud native action lately, especially with COVID, we're seeing a lot more modern apps come out of that. If I could follow up there, how do you guys look at tools like Terraform, and how does Ansible compare to that? Because you guys are very popular in the cloud configuration, you look at cloud native, Jill, your thoughts. >> Yeah. So Terraform and tools like that, things like CloudFormation or Heat in the OpenStack world, they do really, really great at things like deploying your apps and setting up your stack and getting them out there. And they're really focused on that problem space, which is a hard problem space that they do a fantastic job with. Where Ansible tends to come in, and a tool like Ansible, is: what do you do on day two with that application? How do you run an update? How do you manage it in the long term? Something like 60% of the workloads, or cloud spend at least, on AWS is still just EC2 instances. What do you do with all of those EC2 instances once you've deployed them, once they're in a stack, whatever tool you're managing it with? Ansible is a phenomenal way of getting in there and saying, okay, I have these instances, I know about them, but maybe I just need to connect out and run an update or add a package or reconfigure a service that's running on there. And I think you can glue these things together and use Ansible with these other stack deployment based tools really, really effectively. >> Real quick, just a quick follow-up on that: what's the big pain point for developers right now when they're looking at these tools? Because they see the path, what are some of the pain points that they're living right now that they're trying to overcome? >> I think one of the problems, kind of coincidentally, is we have so many tools. We're in kind of a tool explosion in the cloud space right now. You could piece together as many tools to manage your stack as you have components in your stack, and just making sense of what that landscape looks like right now, figuring out what are the right tools for the job I'm trying to do, tools that can be flexible and that are not going to box me into having to spend half of my engineering time just managing my tools, making sense of all of that is a significant effort and job on its own. 
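To make the day-two pattern Jill describes concrete, here is a minimal sketch of what that could look like with Ansible's AWS content: a dynamic inventory that pulls back the EC2 instances a stack tool has already deployed, and a small playbook run against them. The region, tag names, and service name are assumptions for illustration, not anything prescribed in the conversation.

```yaml
# Sketch, file 1 (e.g. inventory_aws_ec2.yml): build an Ansible inventory
# from instances that Terraform, CloudFormation, or another tool deployed.
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1                  # assumed region
filters:
  "tag:env": production        # assumed tag applied at deploy time
keyed_groups:
  - key: tags.role             # group hosts by an assumed "role" tag
    prefix: role
```

A routine day-two playbook could then be run against that inventory:

```yaml
---
# Sketch, file 2: day-two maintenance against the dynamic inventory above.
- name: Routine day-two maintenance on deployed instances
  hosts: all
  become: true
  tasks:
    - name: Apply pending OS package updates
      ansible.builtin.package:
        name: "*"
        state: latest

    - name: Make sure the application service is still running
      ansible.builtin.service:
        name: myapp              # illustrative service name
        state: started
        enabled: true
```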
>> Yes, too many, I may add. We would joke years ago, in the big data surge, about the tools, the tool train, what we called the tool shed: after a while, you don't know what's in the back, what you're using every day. People get comfortable with the right tools, but the platform becomes a big part of that, thinking holistically as a system. And Adam, this comes back to security. There's more tools in the security space than ever before. Talking about tool challenges, security is the biggest tool shed; everyone's got tools, they buy everything, but you've got to look at what a platform looks like, and developers just want to have the truth. And when you look at the configuration management piece of it, security is critical. What's your thoughts on the source of truth when it comes into play for these security appliances? >> So the source of truth piece is kind of an interesting one, because this is going to be very dependent on the organization: what type of brownfield environment they've developed, what type of things they rely on, and what types of data they store there. So we have the ability for various sources of truth to come in for your inventory source and the types of information you store with that. This could be tagged information on a series of cloud instances or a series of resources. This could be something you store in a network management tool or a CMDB. This could even be something that you put into a privileged access management system, such as CyberArk or HashiCorp Vault. Those are the things, and because of Ansible's flexibility, and because of the way that everything is put together in a pluggable nature, we have the capability to actually bring in all of these components from anywhere in a brownfield environment, in a preexisting infrastructure, as well as new decisions that are being made for the enterprise as they move forward. And we can bring all that together and be that infrastructure glue, be that automation component that can tie all these disjoint, loosely coupled, or completely decoupled pieces together. And that's kind of part of that security posture, remediation, various levels of introspection into your environment, these types of things as we go forward, and that's kind of what we're focusing on doing with this. >> What kind of data is stored in the source of truth? >> So what type of data? This could be credentials. It could be single-use credential access. This could be your inventory data for your systems, what target systems you're trying to automate. It could be various attributes of different systems, to be able to classify them and codify them in different ways. It kind of depends. It could be configuration data. You know, we have the ability, with some of the work that Brad and his team are doing, to actually take unstructured data, make it structured, put it into whatever your chosen source of truth is, store it, and then utilize that to kind of decompose it into different vendor-specific syntax representations and those types of things. So we have a lot of different capability there as well. >> Brad, you mentioned you have a talk on parsing, can you elaborate on that? And why should network operators care about that? >> Yeah, welcome to 2020. We're still parsing network configuration and operational state. This is an interesting one. If you had asked me years ago, did I think that we would be investing development time into parsing network configurations with Ansible? I would have said, "Well, I certainly hope not. 
"I hope programmability of network devices and the vendors "really have their API's in order." But I think what we're seeing is network containers are still comfortable with the command line. They're still very familiar with the command line and when it comes time to do operational state assessment and health assessment of your network, engineers are comfortable going to the command line and running show commands. So really what we're trying to do in the parsing space is not author brand new parking and parsing engine ourselves, but really leverage a lot of the open source tools that are already out there bringing them into Ansible, so network engineers can now harvest the critical information from usher operational state commands on their network devices. And then once they've gotten to the structure data, things get really interesting because now you can do entrance criteria checks prior to doing configuration changes, right? So if you want to ensure a network device has a very particular operational state, all the BGP neighbors are, for example before pushing configuration changes, what we have the ability to do now is actually parse the command that you would have run from the command line. Use that within a decision tree in your Ansible playbook, and only move forward when the configuration changes. If the box is healthy. And then once the configuration changes are made at the end, you run those same health checks to ensure that you're in a speck can do a steady state and are production ready. So parsing is the mechanism. It's the data that you get from the parsing that's so critical. >> If I had to ask you real quick, just while it's on my mind. You know, people want to know about automation. It's top of mind use case. What are some of these things around automation and configuration parsing, whether it's parsing to other configuration manager, what are the big challenges around automation? Because it's the Holy grail. Everyone wants it now. What are the couches? where's the hotspots that needs to be jumped on and managed carefully? Or the easiest low hanging fruit? >> Well, there's really two pieces to it, right? There's the technology. And then there's the culture. And, and we talk really about a culture of automation, bringing the team with you as you move into automation, ensuring that everybody has the tools and they're familiar with how automation is going to work and how their day job is going to change because of automation. So I think once the organization embraces automation and the culture is in place. On the technology side, low hanging fruit automation can be as simple as just using Ansible to push the commands that you would have previously pushed to the device. And then as your organization matures, and you mature along this kind of path of network automation, you're dealing with larger pieces, larger sections of the configuration. And I think over time, network engineers will become data managers, right? Because they become less concerned about the network, the vendors specific configuration, and they're really managing the data that makes up the configuration. And I think once you hit that part, you've won at automation because you can move forward with Ansible resource modules. You're well positioned to do NETCONF for RESTCONF or... Right once you've kind of grown to that it's the data that we need to be concerned about and it could fit (indistinct) and the operational state management piece, you're going to go through a transformation on the networking side. 
>> So you mentioned-- >> And one thing to note there, if I may, I feel like a piece of this too is you're able to actually bridge teams because of the capability of Ansible, the breadth of technologies that we've had integrations with, and our ability to actually bridge that gap between different technologies, different teams. Once you have that culture of automation, you can start to realize these DevOps and DevSecOps workflow styles that are top of everybody's mind these days. And that's something that I think is very powerful, and I like to try to preach it when I have the opportunity to talk to folks about what we can do, and the fact that we have so much capability and so many integrations across the entire industry. >> That's a great point. DevSecOps is totally spot on. When you have software and hardware, it becomes interesting. There's a variety of different equipment in security automation. What kind of security appliances can you guys automate? >> As of today, we are able to do endpoint management systems, enterprise firewalls, security information and event management systems. We're able to do security orchestration, automation, and remediation systems, privileged access management systems. We're doing some threat intelligence platforms. And we've recently added, I'm sorry, did I say intrusion detection? We have intrusion detection and prevention, and we recently added endpoint security management. >> Huge, huge value there. And I think everyone wants that. Jill, I've got to ask you about the cloud, because the modules came up. What use cases do you see the Ansible modules in for the public cloud? Because you've got a lot of cloud native folks in public cloud, you've got enterprises lifting and shifting, there's a hybrid and multicloud horizon here. What's some of the use cases where you see those Ansible modules fitting well with public cloud? >> The modules that we have in public cloud can work across all of those things, you know. In our public clouds, we have support for Amazon Web Services, Azure, GCP, and they all support your main services. You can spin up a Lambda, you can deploy ECS clusters, build AMIs, all of those things. And then once you get all of that up there, especially looking at AWS, which is where I spend the most time, you get all your EC2 instances up, you can now pull that back down into Ansible, build an inventory from that, and seamlessly then use Ansible to manage those instances, whether they're running Linux or Windows or whatever distro you might have them running. We can go straight from having deployed all of those services and resources to managing them, and going between your instances and your traditional operating system management, or those instances and your cloud services. And if you've got multiple clouds, or if you still have on prem, or if you need to, for some reason, add those remote cloud instances into some sort of on-prem hardware load balancer or security endpoint, we can go between all of those things and glue everything together fairly seamlessly. You can put all of that into Tower and have one kind of view of your cloud and your hardware and your on-prem, and be able to move things between them. >> Just put some color commentary on what that means for the customer in terms of, is it pain reduction, time savings? How would you classify the value? >> I mean, both. Instead of having to go between a number of different tools and say, "Oh, well, for my on-prem, I have to use this, 
"But as soon as I shift over to a cloud, "I have to use these tools. "And, Oh, I can't manage my Linux instances with this tool "that only knows how to speak to, the EC2 to API." You can use one tool for all of these things. So like we were saying, bring all of your different teams together, give them one tool and one view for managing everything end to end. I think that's, that's pretty killer. >> All right. Now I get to the fun part. I want you guys to weigh in on the Kubernetes. Adam, if you can start with you, we'll start with you go in and tell us why is Kubernetes more important now? What does it mean? A lot of hype continues to be out there. What's the real meet around Kubernetes what's going on? >> I think the big thing is the modernization of the application development delivery. When you talk about Kubernetes and OpenShift and the capabilities we have there, and you talk about the architecture, you can build a lot of the tooling that you used to have to maintain, to be able to deliver sophisticated resilient architectures in your application stack, are now baked into the actual platform, so the container platform itself takes care of that for you and removes that complexity from your operations team, from your development team. And then they can actually start to use these primitives and kind of achieve what the cloud native compute foundation keeps calling cloud native applications and the ability to develop and do this in a way that you are able to take yourself out of some of the components you used to have to babysit a lot. And that becomes in also with the OpenShift operator framework that came out of originally Coral S, and if you go to operator hub, you're able to see these full lifecycle management stacks of infrastructure components that you don't... You no longer have to actually, maintain a large portion of what you start to do. And so the operator SDK itself, are actually developing these operators. Ansible is one of the automation capabilities. So there's currently three supported there's Ansible, there's one that you just have full access to the Golang API and then helm charts. So Ansible's specifically obviously being where we focus. We have our collection content for the... carries that core, and then also ReHat to OpenShift certified collection's coming out in, I think, a month or so. Don't hold me to the timeline. I'm shoving in trouble for that one, but we have those things going to come out. Those will be baked into the operator's decay that we fully supported by our customer base. And then we can actually start utilizing the Ansible expertise of your operations team to container native of the infrastructure components that you want to put into this new platform. And then Ansible itself is able to build that capability of automating the entire Kubernetes or OpenShift cluster in a way that allows you to go into a brownfield environment and automate your existing infrastructure, along with your more container native, futuristic next generation, net structure. >> Jill this brings up the question. Why don't you just use native public cloud resources versus Kubernetes and Ansible? What's the... What should people know about where you use that, those resources? >> Well, and it's kind of what Adam was saying with all of those brownfield deployments and to the same point, how many workloads are still running just in EC2 instances or VMs on the cloud. There's still a lot of tech out there that is not ready to be made fully cloud native or containerized or broken up. 
And with OpenShift, it's one more layer that lets you put everything into a kind of single environment instead of having to break things up and say, "Oh, well, this application has to go here, and this application has to be in this environment." You can do that across a public cloud and use a little of this component and a little of that component. But if you can bring everything together in OpenShift and manage it all with the same tools on the same platform, it simplifies the landscape of, I need to care about all of these things and look at all of these different things and keep track of these, and are my tools all going to work together, and are my tools secure? Anytime you can simplify that part of your infrastructure, I think, is a big win. >> John: You know, I think about-- >> The one thing, if I may: Jill spoke to this, I think, in the way that an architectural, infrastructure person would, but I want to really quickly take the business analyst component of it, the hybrid component. If you're trying to address multiple footprints, both on prem, off prem, multiple public clouds, if you're running OpenShift across all of them, you have that single, consistent deployment and development footprint everywhere. So I don't disagree with anything they said, I just wanted to focus specifically on... That piece is something that I find personally unique, as that was a problem for me in a past life. And that kind of speaks to me. >> Well, speaking of past lives-- >> Having me as an infrastructure person, thank you. >> Yeah. >> Well, speaking of past lives, OpenStack. You look at Jill with OpenStack, we've been covering it on theCUBE since OpenStack was rolling out back in the day, but you can also have private cloud. There's a lot of private cloud out there. How do you talk about that? How do people understand using public cloud versus the private cloud aspect of Ansible? >> Yeah, and I think there is still a lot of private cloud out there, and I don't think that's a bad thing. I've kind of moved over onto the public cloud side of things, but there are still a lot of use cases that a lot of different industries and companies have that don't make sense for putting into public cloud. So you still have a lot of these on-prem OpenShift and on-prem OpenStack deployments that make a ton of sense and that are solving a bunch of problems for these folks. And I think they can all work together. We have Ansible that can support both of those. If you're a telco, you're not going to put your network function virtualization on us-east-1 in spot instances, right? When you call nine-one-one, you don't want that going through the public cloud. You want that to be on dedicated infrastructure that's reliable and well-managed and engineered for that use case. So I think we're going to see a lot of ongoing OpenStack and on-prem OpenShift, especially with edge, enabling those types of use cases for a long time. And I think that's great. >> I totally agree with you. I think private cloud is not a bad thing at all. Things are only going to accelerate, in my opinion. You look at VMworld, they talked about the telco cloud, and you mentioned edge; when 5G comes out, you're going to basically have private clouds everywhere, I guess, in my opinion. But anyway, speaking of VMware, could you talk about the Ansible VMware module real quick? >> Yeah, so we have a new collection that we'll be debuting at AnsibleFest this year for the VMware REST API. 
So the existing VMware modules that we have use the SOAP API for VMware, and they rely on an external Python library that VMware provides, but with vSphere 6.0, and especially in vSphere 6.5, VMware has stepped up with a REST API endpoint that we find is a lot more performant and offers a lot of options. So we built a new collection of VMware modules that will take advantage of that. That's brand new, it's lighter weight, it's much faster, we'll get better performance out of it, you know, reduced external requirements. You can install it and get started faster. And especially with vSphere 7 continuing to build on this REST API, we're going to see more and more interfaces being exposed that we can take advantage of. We plan to expand it as new interfaces are being exposed in that API, and it's compatible with all of the existing modules. You can go back and forth, use your existing playbooks, and start introducing these. But I think especially on the performance side, and especially as we get these larger clouds and more cloud deployments, edge clouds, where you have these private clouds in lots and lots of different places, the performance benefits of this new collection that we're trying to build are going to be really, really powerful for a lot of folks. >> Awesome. Brad, we didn't forget about you. We're going to bring you back in. Network automation has moved towards the resource modules. Why should people care about them? >> Yeah, resource modules, excuse me. Having been a network engineer for so long, I think it's some of the most exciting work that has gone into Ansible network over the past year and a half. What the resource modules really do for you is they will reach out to network devices, they will pull back that network native, that vendor native configuration, and the resource module actually does the parsing for you, so there's none of that manual work with the resource modules, and we return structured data back to the user that represents the configuration. Going back to your question about source of truth: you can take that structured data, maybe for your interface config, your OSPF config, your access list config, and you can store that data in your source of truth. And then where you are moving forward is you really spend time, as a network engineer, managing the data that makes up the configuration, and you can share that data across different platforms. So if you were to look at a lot of the resource modules, the data model that they support is fairly consistent between vendors. As an example, I can pull OSPF configuration from one vendor and, with very small changes, push that OSPF configuration to a different vendor's platform. So really what we've tried to do with the resource modules is normalize the data model across vendors. It'll never be a hundred percent, because there's functionality that exists in one platform that doesn't exist in another and that's exposed through the configuration, but where we could, we have normalized the data model. So I think it's really introducing the concept of network configuration management through data management, and not through CLI commands anymore. 
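As a small illustration of the workflow Brad describes, a resource module can return a device's configuration as structured data and accept the same data model back. The sketch below uses the cisco.ios interfaces resource module as an assumed example platform; the host group, interface name, and description are illustrative.

```yaml
---
# Sketch: treat interface configuration as data rather than CLI commands.
- name: Manage interface config as structured data
  hosts: ios_routers                  # assumed inventory group
  gather_facts: false
  tasks:
    - name: Pull the vendor config back as structured data
      cisco.ios.ios_interfaces:
        state: gathered
      register: iface_data

    - name: Inspect the normalized data model
      ansible.builtin.debug:
        var: iface_data.gathered

    - name: Apply a desired state expressed purely as data
      cisco.ios.ios_interfaces:
        config:
          - name: GigabitEthernet0/1   # illustrative interface
            description: Managed by Ansible
            enabled: true
        state: merged
```

Because the data model is largely normalized across vendors, the same config block could, with small changes, be fed to another vendor's interfaces resource module.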
>> Yeah, that's a great point. It just expands the network automation vision. And one of the things that's interesting here in this panel is you're talking about cloud holistically, public multicloud, private hybrid, security, network automation as a platform, not just a tool; we're still going to have all kinds of tools out there. And then the importance of automating the edge. I mean, that's a network game, Brad. It's a data problem, right? We all know about networking, moving packets from here to there, but automating the data is critical, and if you have bad data, if you have misinformation, it sounds like our current politics, but you know, bad information is bad automation. What's your thoughts? How do you share that concept with developers out there? What should they be thinking about in terms of data quality? >> I think that's the next thing we have to tackle as network engineers. It's not, do I have access to the data? You can get the data now from resource modules, you can get the data from NETCONF, from RESTCONF, you can get it from OpenConfig, you can get it from parsing. The question really is, how do you ensure the integrity and the quality of the data that is making up your configurations, and the consistency of the data that you're using to look at operational state? And I think this is where the source of truth really becomes important. If you look at Git as a viable source of truth, you've got all the tools and the mechanisms within Git to use that as your source of truth for network configuration. So network engineers are actually becoming developers in the sense that they're using a GitOps workflow to manage configuration moving forward. It's just really exciting to see that transformation happen. >> Great panel. Thanks for everyone coming on, I appreciate it. We'll just end this by saying, if you guys could just quickly summarize AnsibleFest 2020 virtual, what should people walk away with? What should your customers walk away with this year? What's the key points? Jill, we'll start with you. >> Hopefully folks will walk away with the idea that the Ansible community includes so many different folks from all over, solving lots of different, interesting problems, and that we can all come together and work together to solve those problems in a way that is much more effective than if we were all trying to solve them individually ourselves. By bringing those problems out into the open and working together, we get a lot done. >> Awesome, Brad? >> I'm going to go with collections, collections, collections. We introduced them last year. This year, they are real. Ansible 2.10, which just came out, is made up of collections. We've got certified collections for automation, we've got cloud collections, network collections. So they are here. They're the real thing. And I think it just gets better and deeper with more content moving forward. >> All right, Adam? >> Going last is difficult, especially following these two. They covered a lot of ground, and I don't really know that I have much to add, beyond the fact that when you think about Ansible, don't think about it in a single context. It is a complete automation solution. The capability that we have is very extensible, it's very pluggable, which is a standing ovation to the collections, and the solutions that we can come up with collectively, thanks to everybody in the community, are almost infinite. A few years ago, one of the core engineers did a keynote speech using Ansible to automate Philips Hue light bulbs. Like, this is what we're capable of. We can automate Fortune 500 data centers and telco networks, and then we can also automate random IoT devices around your house. We have a lot of capability here, and what we can do with the platform is very unique and something special. 
And it's very much thanks to the community, the team, the open source development way. I just, yeah-- >> (Indistinct) the open source of truth, being collaborative, is what makes it all up, with DevOps and Sec all happening together. Thanks for the insight. Appreciate the time. Thank you. >> Thank you. >> I'm John Furrier, you're watching theCUBE here for AnsibleFest 2020 virtual. Thanks for watching. (soft upbeat music)
Anne Gentle, Cisco DevNet | DevNet Create 2019
>> Live from Mountain View, California, it's theCUBE! Covering DevNet Create 2019, brought to you by Cisco. >> Hi, welcome to theCUBE's coverage of Cisco DevNet Create 2019, Lisa Martin with John Furrier, we've been here all day, talking about lots of very inspiring, educational, collaborative folks, and we're pleased to welcome to theCUBE Anne Gentle, developer experience manager for Cisco DevNet. Anne, thank you so much for joining us on theCUBE today. >> Thank you so much for having me. >> So this event, everything's like, rockstar start this morning with Susie, Mandy, and the team with the keynotes, standing room only, I know when I was walking out. >> I loved it, yes. >> Yes, there's a lot of bodies in here, it's pretty toasty. >> Yeah. >> The momentum that you guys have created, pun intended. >> Oh, yes. >> No, I can't take credit for that, but really, you can feel it, there's a tremendous amount of collaboration, this is your second Create? >> Second Create, yeah, so I've been with DevNet itself for about a year and a half, and started at Cisco about three years ago this month, but I feel like developer experience is one of my first loves, when I really started to understand how to advocate for the developer experience. So DevNet just does a great job of not only advocating within Cisco, but outside of Cisco as well, so we make sure that their voice is heard. If there's some oddity with an API, which, you know, I'm really into API design, API style, we can kind of look at that first, and kind of look at it sideways and then talk to the teams, okay, is there a better way to think about this from a developer standpoint? >> It's great, I love the API love there, it's going around a lot here. DevNet Create has a cloud native vibe that's kind of integrating and cross-pollinating into DevNet, Cisco proper. You're no stranger to cloud computing early days, and ecosystems that have formed naturally and grown, some morph, some go different directions, so you were involved in OpenStack, we know that, we've talked before about OpenStack, just some great successes and restarts; OpenStack ultimately settled into what it did, and the CNCF, the Cloud Native Computing Foundation, is kind of the cloud native OpenStack model. >> Yeah, yeah. >> You've seen the communities grow, and the market's maturing. >> Definitely, definitely. >> So what's your take on this, because it creates kind of a, the creator-builder side of it, we hear builder from Amazon. >> Yeah, I feel like we're able to bring together the standards, one of the interesting things about OpenStack was okay, can we do open standards, that's an interesting idea, right? And so, I think that's partially what we're able to do here, which is share, open up about our experiences, you know, I just went to a talk recently where the former SendGrid advocate is now working more on the SDK side, and he's like, yeah, the travel is brutal, and so I just kind of graduated into maintaining seven SDKs. So, that's kind of wandering from where you were originally talking, but it's like, we can share with each other not only our hardships, but also our wins as well, so. >> API marketplaces are not a new concept, Apigee was acquired-- >> Yes. >> By a big company, we know that name, Google. But now it's not just application programming interface marketplaces, with containers and serverless, and microservices. >> Right. >> The role of APIs growing up on a whole other level is happening. >> Exactly. 
>> This is where you hear Cisco, and frankly I'm blown away by this, at the Cisco Live, that all the portfolio (mumbles) has APIs. >> True, yes, exactly. >> This is just a whole changeover, so, APIs, I just feel a whole other 2.0 or 3.0 level is coming. >> Absolutely. >> What's your take on this, because-- >> So, yeah, in OpenStack we documented like, two APIs to start, and then suddenly we had 15 APIs to document, right, so, learn quick, get in there and do the work, and I think that that's what's magical about APIs, is, we're learning from our designs in the beginning, we're bringing our users along with us, and then, okay, what's next? So, James Higginbotham, I saw one of his talks today, he's really big in the API education community, and really looking towards what's next, so he's talking about different architectures, and event-driven things that are going to happen, and so even talking about, well what's after APIs, and I think that's where we're going to start to be enabled, even as end users, so, sure, I can consume APIs, I'm pretty good at that now, but what are companies building on top of it, right? So like GitHub is going even further where you can have GitHub actions, and this is what James is talking about, where it's like, well the API enabled it, but then there's these event-driven things that go past that. So I think that's what we're starting to get into, is like, APIs blew up, right? And we're beyond just the create read. >> So, user experience, developer experience, back to what you do, and what Mandy was talking about. You can always make it easier, right? And so, as tools change, there's more tools, there's more workloads, there's more tools, there's more this, more APIs, so there's more of everything coming. >> Yeah. >> It's a tsunami to the developer, what are some of the trends that you see to either abstract away complexities, and, or, standardize or reduce the toolchains? >> Love where you're going with this, so, the thing is, I really feel like in the last, even, since 2010 or so, people are starting to understand that REST APIs are really just HTTP protocol, we can all understand it, there's very simple verbs to memorize. 
So I'm actually starting to see that the documentation is a huge part of this, like a huge part of the developer experience, because if, for one, there are APIs that are designed enough that you can memorize the entire API, that blows me away when people have memorized an API, but at the same time, if you look at it from like, they come to your documentation every day, they're reading the exact information they can give, they're looking at your examples, of course they're going to start to just have it at their fingertips with muscle memory, so I think that's, you know, we're starting to see more with OpenAPI, which is originally called Swagger, so now the tools are Swagger, and OpenAPI is the specification, and there's just, we can get more done with our documentation if we're able to use tools like that, that start to become industry-wide, with really good tools around them, and so one of the things that I'm really excited about, what we do at DevNet, is that we can, so, we have a documentation tool system, that lets us not only publish the reference information from the OpenAPI, like very boring, JSON, blah blah blah, machines can read it, but then you can publish it in these beautiful ways that are easy to read, easy to follow, and we can also give people tutorials, code examples, like everything's integrated into the docs and the site, and we do it all from GitHub, so I don't know if you guys know that's how we do our site from the back side, it's about 1000 or 2000 GitHub repos, is how we build that documentation. >> Everything's going to GitHub, the network configurations are going to GitHub, it's programmable, it's got to be in GitHub. >> Yes, it's true, and everything's Git-based right? >> So, back to the API question, because I think I'm connecting some dots from some of the conversations we had, we heard from some of the community members, there's a lot of integration touchpoints. Oh, a call center app on their collaboration talks to another database, which talks to another database, so these disparate systems can be connected through APIs, which has been around for a while, whether it's an old school SOAP interface, to, you know, HTTP and REST APIs, to full form, cooler stuff now. But it's also more of a business model opportunity, because the point is, if your API is your connection point-- >> Yes. >> There's potential business deals that could go on, but if you don't have good documentation, it's like not having a good business model. >> Right, and the best documentation really understands a user's task, and so that's why API design is so important, because if you need to make sure that your API looks like someone's daily work, get the wording right, get the actual task right, make sure that whatever workflow you've built into your API can be shown through in any tutorial I can write, right? So yeah, it's really important. >> What's the best practice, where should I go? I want to learn about APIs, so then I'm going to have a couple beers, hockey's over, it's coming back, Sharks are going to the next round, Bruins are going to the next round, I want to dig into APIs tonight. Where do I go, what are some best practices, what should I do? >> Yeah, alright, so we have DevNet learning labs, and I'm telling you because I see the web stats, like, the most popular ones are GitHub, REST API and Python, so you're in good company. 
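For readers who have not seen one, a minimal sketch of the kind of OpenAPI (formerly Swagger) document Anne is describing is below; the endpoint, schema, and descriptions are invented for illustration rather than taken from any Cisco API.

```yaml
# Sketch of a minimal OpenAPI 3.0 document: the machine-readable reference
# that documentation tools can render alongside tutorials and code examples.
openapi: 3.0.3
info:
  title: Example Device Inventory API    # illustrative name
  version: 1.0.0
paths:
  /devices:
    get:
      summary: List managed devices
      description: Returns the devices known to the inventory service.
      responses:
        "200":
          description: A list of devices
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/Device"
components:
  schemas:
    Device:
      type: object
      properties:
        id:
          type: string
        hostname:
          type: string
```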
Lots of people sitting on their couches, and a lot of them are like 20 minutes at a time, and if you want to do like an entire set that we've kind of curated for you all together, you should go to developer.cisco.com/startnow, and that's basically everything from your one-on-ones, all the way up to, like, really deep dive into products, what they're meant to do, the best use cases. >> Okay, I got to ask you, and I'll put you on the spot, pick your favorite child. Gold standard, what's the best APIs that you like, do you think are the cleanest, tightest? >> Oh, best APIs I like, >> Best documented? >> So in the technical writing world, the gold standard that everyone talks about is the Stripe documentation, so that's in financial tech, and it's very clean, we actually can do something like it with a three column layout-- >> Stripe (mumbles) payment gateway-- >> Stripe is, yeah, the API, and so apparently, from a documentation standpoint, they're just, people just go gaga for their docs, and really try to emulate them, so yeah. And as far as an API I use, so I have a son with type one diabetes, I don't know if I've shared this before, but he has a continuous glucose monitor that's on his arm, and the neat thing is, we can use a REST API to get the data every five minutes on how his blood sugar is doing. So when you're monitoring this, to me that's my favorite right now, because I have it on my watch, I have it on my phone, I know he's safe at school, I know he's safe if he goes anywhere. So it's like, there's so many use cases of APIs, you know? >> He's got the policy-based program, yeah. >> He does, yes, yes. >> Based upon where's he's at, okay, drink some orange juice now, or, you know-- >> Yes, get some juice. >> Get some juice, so, really convenient real-time. >> Yes, definitely, and he, you know, he can see it at school too, and just kind of, not let his friends know too much, but kind of keep an eye on it, you know? >> Automation. >> Yeah, exactly, exactly. >> Sounds like great cloud native, cool. You have a Meraki hub in your house? >> I don't have one at home. >> Okay. >> Yeah, I need to set one up, so yeah, we're terrible net nannies and we monitor everything, so I think I need Meraki at home. (laughing) >> It's a status symbol now-- >> It is now! >> We're hearing in the community. Here in the community of DevNet, you got to have a Meraki hub in your, switch in your house. >> It's true, it's true. >> So if you look back at last year's Create versus, I know we're just through almost day one, what are some of the things that really excite you about where this community of now, what did they say this morning, 585,000 strong? Where this is going, the potential that's just waiting to be unlocked? >> So I'm super excited for our Creator awards, we actually just started that last year, and so it's really neat to see, someone who won a Creator award last year, then give a talk about the kind of things he did in the coming year. And so I think that's what's most exciting about looking a year ahead for the next Create, is like, not only what do the people on stage do, but what do the people sitting next to me in the talks do? Where are they being inspired? What kind of things are they going to invent based on seeing Susie's talk about Wi-Fi 6? I was like, someone invent the thing so that when I go to a hotel, and my kids' devices take all the Wi-Fi first, and then I don't have any, someone do that, you know what I mean, yeah? >> Parental rights. 
>> So like, because you're on vacation and like, everybody has two devices, well, with a family of four-- [John] - They're streaming Netflix, Amazon Prime-- >> Yeah, yeah! >> Hey, where's my video? >> Like, somebody fix this, right? >> Maybe we'll hear that next year. >> That's what I'm saying, someone invent it, please. >> And thank you so much for joining John and me on theCUBE this afternoon, and bringing your wisdom and your energy and enthusiasm, we appreciate your time. >> Thank you. >> Thank you. >> For John Furrier, I am Lisa Martin, you're watching theCUBE live from Cisco DevNet Create 2019. Thanks for watching. (upbeat music)
Jim Long, Sarbjeet Johal, and Joseph Jacks | CUBEConversation, February 2019
(lively classical music) >> Hello everyone, welcome to this special CUBE Conversation, we are here at the Power Panel Conversation. I'm John Furrier, in Palo Alto, California, at theCUBE studios. We have remote on the line here, to talk about cloud technology's impact on entrepreneurship, startups, and the overall ecosystem: Jim Long, who's the CEO of Didja, which is a startup around disrupting digital TV, and who has also been an investor and a serial entrepreneur; Sarbjeet Johal, who's the in-cloud influencer of strategy and investor out of Berkeley, California, The Batchery; and also Joseph Jacks, CUBE alumni, actually you guys are all CUBE alumni, so great to have you on. Joseph Jacks is the founder and general partner of OSS Capital, Open Source Software Capital, a new fund that's been raised specifically to commercialize and fund startups around open source software. Guys, we got a great panel here of experts, thanks for joining us, appreciate it. >> Go Bears! >> Nice to be here. >> So we have a distinguished panel, it's the Power Panel, we're on cloud technologies. First I'd like to get you guys' reaction, you know, you're seeing a lot of negative news around what Facebook has become, essentially their own hyper-scale cloud with their application. They were called the digital, you know, renegades, or digital gangsters, in the UK by the Parliament, which was built on open source software. Amazon's continuing to win, Azure's doing their thing, bundling Office 365, making it look like they've got more revenue as they're catching up, Google, and then you've got IBM and Oracle, and then you've got an ecosystem that's impacted by this large scale, so I want to get your thoughts on the first point here. Is there room for more clouds? There's a big buzzword around multiple clouds. Are we going to see specialty clouds? 'Cause Salesforce is a cloud, so is there room for more cloud? Jim, why don't you start? >> Well, I sure hope so. You know, the internet has unfortunately become sort of the internet of monopolies, and that doesn't do anyone any good. In fact, you bring up an interesting point, it'd be kind of interesting to see if Facebook created a social cloud for certain types of applications to use. I've no idea whether that makes any sense, but Amazon's clearly been the big gorilla now, and done an amazing job, we love using them, but we also love seeing, trying out different services that they have and then figuring out whether we want to develop them ourselves or use a specialty service, and I think that's going to be interesting, particularly in the AI area, stuff like that. So I sure hope more clouds are around for all of us to take advantage of. >> Joseph, I want you to weigh in here, 'cause you were close to the Kubernetes trend, in fact we were at an OpenStack event when you started Kismatic, which is the movement that became KubeCon and Cloud Native, many many years ago, and now you're investing in open source. The world's built on open source, there's got to be room for more clouds. Your thoughts on the opportunities? >> Yeah, thanks for having me on, John. I think we need a new kind of open collaborative cloud, and to date, we haven't really seen any of the existing major sort of large critical mass cloud providers participate in that type of model. 
Arguably, Google has probably participated and contributed the most in the open source ecosystem, contributing TensorFlow and Kubernetes and Go, lots of different open source projects, but they're ultimately focused on gravitating huge amounts of compute and storage cycles to their cloud platform. So I think one of the big missing links in the industry is, as we continue to see the rise of these large, vertically integrated, proprietary control planes for computing and storage and applications and services, I think as the open source community and the open source ecosystem continue to grow and explode, we'll need a third sort of provider, one that isn't based on monopoly, or based on a traditional proprietary software business like Microsoft kind of transitioning their enterprise customers to services: sort of Amazon in the first camp, vertically integrated, with a buffet of all these different compute, storage, networking services, applications, middleware; Microsoft focused on sort of building managed services of their software portfolio. I think we need a third model where we have sort of an open set of interfaces and an open standards based cloud provider that might be a pure software company, it might be a company that builds on the rails and the infrastructure that Amazon has laid down, spending tens of billions in cap ex, or it could be something based on a project like Kubernetes or built from the community ecosystem. So I think we need something like that just to sort of provide, speed the innovation, and disaggregate the services away from a monolithic kind of closed vendor like Amazon or Azure. >> I want to come back to that whole startup opportunity, but I want to get Sarbjeet in here, because we were in the B2B area just last week at IBM Think 2019. Obviously they're trying to get back into the cloud game, but this digital transformation has been the cliche for almost a couple of years now, if not five plus. Business has got to move to the cloud, so there's a whole new ball game of complete cultural shift. They need stability. So I want to talk more about this open cloud, which I love that conversation, but give me the blocking and tackling capabilities first, 'cause I got to get out of that old cap ex model, move to an operating model, transform my business, whether it's multi clouds. So Sarbjeet, what's your take on the cloud market for, say, the enterprise? >> Yeah, I think for the enterprise... you're just sitting in that data center and moving those to cloud, it's a cumbersome task. For that to work, they actually don't need all the bells and whistles which Amazon has in the periphery, if you will. They need just core things like compute, network, and storage, and some other sorts of services, maybe database, maybe data share and stuff like that, but they just want to move those applications as is to start with, with some replatforming and with some changes. Like, they won't make changes at first when they start moving those applications, but our minds are polluted by this thinking. When we see a Facebook being formed by a couple of people, or a company of six people sold for a billion dollars, it just messes with our minds on the enterprise side: hey, we can do that too, we can move that fast and so forth, but it's sort of tragic that we think that way. Well, having said that, and I think we have talked about this in the past. 
If you are doing anything in the way of systems innovation, if you're building those, even at the enterprise, I think cloud is the way to go. To your original question, if there's room for newer cloud players, I think there is, provided that we can detach the platforms from the environments they are sitting on. So the proprietariness has to kinda, it has to be lowered, the degree of proprietariness has to be lower. It can be through open source I think mainly, it can be from open technologies, they don't have to be open source, but portable. >> JJ was mentioning that, I think that's a big point. Jim Long, you're an entrepreneur, you've been a VC, you know all the VCs, been around for a while, you're also, you're an entrepreneur, you're a serial entrepreneur, starting out at Cal Berkeley back in the day. You know, small ideas can move fast, and you're building on Amazon, and you've got a media kind of thing going on, there's a cloud opportunity for you, 'cause you are cloud native, 'cause you're built in the cloud. How do you see it playing out? 'Cause you're scaling with Amazon. >> Well, so we obviously, as a new startup, don't have the issues the enterprise folks have, and I could really see the enterprise customers, what we used to call the Fortune 500, for example, getting together and insisting on at least a base set of APIs that Amazon and Microsoft et cetera adopt, and for a startup, it's really about moving fast with your own solution that solves a problem. So you don't necessarily care too much that you're tied into Amazon completely, because you know that if you need to, you can make a change some day. But they do such a good job for us, and their costs, while they can certainly be lower, and we certainly would like more volume discounts, they're pretty darn amazing across the network, across the internet, we do try to price out other folks just for the heck of it, been doing that recently with CDNs, for example. But for us, we're actually creating a hybrid cloud, if you will, a purpose-built cloud to support local television stations, and we do think that's going to be, along with using Amazon, a unique cloud with our own APIs that we will hopefully have lots of different TV apps use for part of their application to service local TV. So it's kind of an interesting play for us, the B2B part of it, we're hoping to be pretty successful as well, and we hope to maybe have multiple cloud vendors in our mix, you know. Not that our users will know who's behind us, maybe Amazon for something, Limelight for another, or whatever, for example. >> Well, you got to be concerned about lock-in as you move into the cloud, that's something that everybody's worried about. JJ, I want to get back to you on the investment thesis, because you have a cutting edge business model around investing in open source software, and there's two schools of thought in the open source community, you know, free contribution's great, and let that be organic, and then there's now commercialization. There's real value being created in open source. You had put together a chart with your team about the billions of dollars in exits from open source companies. So what are you investing in, what do you see as opportunities for entrepreneurs like Jim and others that are out there looking at scaling their business? How do you look at success, what's your advice, what do you see as leading indicators? >> I think I'll broadly answer your question with a model that we've been thinking a lot about. 
We're going to start writing publicly about it and probably eventually maybe publish a book or two on it, and it's around the sort of fundamental perspective of creating value and capturing value. So if you model a famous investor and entrepreneur in Silicon Valley who has commonly modeled these things using two different letter variables, X and Y, but I'll give you the sort of perspective of modeling value creation and value capture around open source, as compared to closed source or proprietary software. So if you look at value creation modeled as X, and value capture modeled as Y, where X and Y are two independent variables with a fully proprietary software company based approach, whether you're building a cloud service or a proprietary software product or whatever, just a software company, your value creation exponent is typically bounded by two things. Capital and fundraising into the entity creating the software, and the centralization of research and development, meaning engineering output for producing the software. And so those two things are tightly coupled to and bounded to the company. With commercial open source software, the exact opposite is true. So value creation is decoupled and independent from funding, and value creation is also decentralized in terms of the research and development aspect. So you have a sort of decentralized, community-based, crowd-sourced, or sort of internet, global phenomena of contributing to a code base that isn't necessarily owned or fully controlled by a single entity, and those two properties are sort of decoupled from funding and decentralized R and D, are fundamentally changing the value creation kind of exponent. Now let's look at the value capture variable. With proprietary software company, or proprietary technology company, you're primarily looking at two constituents capturing value, people who pay for accessing the service or the software, and people who create the software. And so those two constituents capture all the value, they capture, you know, the vendor selling the software captures maybe 10 or 20% of the value, and the rest of the value, I would would express it say as the customer is capturing the rest of the value. Most economists don't express value capture as capturable by an end user or a customer. I think that's a mistake. >> Jim, you're-- >> So now... >> Okay, Jim, your reaction to that, because there's an article went around this weekend from Motherboard. "The internet was built on free labor "of open source developers. "Is that sustainable?" So Jim, what's your reaction to JJ's comments about the interactions and the dynamic between value creation, value capture, free versus sustainable funding? >> Well if you can sort of mix both together, that's what I would like, I haven't really ever figured out how to make open source work in our business model, but I haven't really tried that hard. It's an intriguing concept for sure, particularly if we come up with APIs that are specific to say, local television or something like that, and maybe some special processes that do things that are of interest to the wider community. So it's something I do plan to look at because I do agree that if you, I mean we use open source, we use this thing called FFmpeg, and several other things, and we're really happy that there's people out there adding value to them, et cetera, and we have our own versions, et cetera, so we'd like to contribute to the community if we could figure out how. >> Sarbjeet, your reactions to JJ's thesis there? 
>> I think two things. I will comment on two different aspects. One is the lack of standards, and then open source becoming the standard, right. I think open source kind of projects take birth and life in its own, because we have lack of standard, 'cause these different vendors can't agree on standards. So remember we used to have service-oriented architecture, we have Microsoft pushing some standards from one side and IBM pushing from other, SOAP versus xCBL and XML, different sort of paradigms, right, but then REST API became the de facto standard, right, it just took over, I think what REST has done for software in last about 10 years or so, nothing has done that for us. >> well Kubernetes is right now looking pretty good. So if you look at JJ, Kubernetes, the movement you were really were pioneering on, it's having similar dynamic, I mean Kubernetes is becoming a forcing function for solidarity in the community of cloud native, as well as an actual interoperable orchestration layer for multiple clouds and other services. So JJ, your thoughts on how open source continues as some of these new technologies, like Kubernetes, continue to hit the scene. Is there any trajectory change in open source that you see, that you could share, I'd love to get your insights on what's next behind, you know, the rise of Kubernetes is happening, what's next? >> I think more abstractly from Kubernetes, we believe that if you just look at the rate of innovation as a primary factor for progress and forward change in the world, open source software has the highest rate of innovation of any technology creation phenomena, and as a consequence, we're seeing more standards emerge from the open source ecosystem, we're seeing more disruption happen from the open source ecosystem, we're seeing more new technology companies and new paradigms and shifts happen from the open source ecosystem, and kind of all progress across the largest, most difficult sort of compound, sensitive problems, influenced and kind of sourced from the open source ecosystem and the open source world overall. Whether it's chip design, machine learning or computing innovations or new types of architectures, or new types of developer paradigms, you know, biological breakthroughs, there's kind of things up and down the technology spectrum that have a lot to sort of thank open source for. We think that the future of technology and the future of software is really that open source is at the core, as opposed to the periphery or the edges, and so today, every software technology company, and cloud providers included, have closed proprietary cores, meaning that where the core is, the data path, the runtime, the core business logic of the company, today that core is proprietary software or closed source software, and yet what is also true, is at the edges, the wrappers, the sort of crust, the periphery of every technology company, we have lots of open source, we have client libraries and bindings and languages and integrations, configuration, UIs and so on, but the cores are proprietary. We think the following will happen over the next few decades. We think the future will gradually shift from closed proprietary cores to open cores, where instead of a proprietary core, an open core is where you have core open source software project, as the fundamental building block for the company. 
So for example, Hadoop caused the creation of MapR and Cloudera and Hortonworks, Spark caused the creation of Databricks, Kafka caused the creation of Confluent, Git caused the creation of GitHub and GitLab, and this type of commercial open source software model, where there's a core open source project as the kernel building block for the company, and then an extension of intellectual property or wrappers around that open source project, where you can derive value capture and charge for licensed product with the company, and impress customer, we think that model is where the future is headed, and this includes cloud providers, basically selling proprietary services that could be based on a mixture of open source projects, but perhaps not fundamentally on a core open source project. Now we think generally, like abstractly, with maybe somewhat of a reductionist explanation there, but that open core future is very likely, fundamentally because of the rate of innovation being the highest with the open source model in general. >> All right, that's great stuff. Jim, you're a historian of tech, you've lived it. Your thoughts on some of the emerging trends around cloud, because you're disrupting linear TV with Didja, in a new way using cloud technology. How do you see cloud evolving? >> Well, I think the long lines we discussed, certainly I think that's a really interesting model, and having the open source be the center of the universe, then figure out how to have maybe some proprietary stuff, if I can use that word, around it, that other people can take advantage of, but maybe you get the value capture and build a business on that, that makes a lot of sense, and could certainly fit in the TV industry if you will from where I sit... Bring services to businesses and consumers, so it's not like there's some reason it wouldn't work, you know, it's bound to, it's bound to figure out a way, and if you can get a whole mass of people around the world working on the core technology and if it is sort of unique to what mission of, or at least the marketplace you're going after, that could be pretty interesting, and that would be great to see a lot of different new mini-clouds, if you will, develop around that stuff would be pretty cool. >> Sarbjeet, I want you to talk about scale, because you also have experience working with Rackspace. Rackspace was early on, they were trying to build the cloud, and OpenStack came out of that, and guess what, the world was moving so fast, Amazon was a bullet train just flying down the tracks, and it just felt like Rackspace and their cloud, you know OpenStack, just couldn't keep up. So is scale an issue, and how do people compete against scale in your mind? >> I think scale is an issue, and software chops is an issue, so there's some patterns, right? So one pattern is that we tend to see that open source is now not very good at the application side. You will hardly see any applications being built as open source. And also on the extreme side, open source is pretty sort of lame if you will, at very core of the things, like OpenStack failed for that reason, right? But it's pretty good in the middle as Joseph said, right? So building pipes, building some platforms based on open source, so the hooks, integration, is pretty good there, actually. I think that pattern will continue. Hopefully it will go deeper into the core, which we want to see. The other pattern is I think the software chops, like one vendor has to lead the project for certain amount of time. 
If that project goes into sort of open, like anybody can grab it, a lot of people contribute and sort of jump in very quickly, it tends to fail. That's what happened to, I think, OpenStack, and there were many other reasons behind that, but I think that was the main reason, and because we were smaller, and we didn't have that much software chops, I hate to say that, but then IBM could contribute like a hundred patches a week to the project. >> They did, and look where they are. >> And so does HP, right? >> And look where they are. All right, so I'd love to have a Power Panel on open source, certainly JJ's been in the thick of it as well as other folks in the community. I want to just kind of end on a lightweight question for you guys. What have you guys learned? Go down the line, start with Jim, Sarbjeet, and then JJ we'll finish with you. Share something that you've learned over the past three months that moved you or that people should know about in tech or cloud trends that's notable. What's something new that you've learned? >> In my case, it was really just spending some time in the last few months getting to know our end users a little bit better, consumers, and some of the impact that having free internet television has on their lives, and that's really motivating... (distorted speech) Something as simple as you might take for granted, but lower income people don't necessarily have a TV that works or a hotel room that has a TV that works, or heaven forbid they're homeless and all that, so it's really gratifying to me to see people sort of tuning back into their local media through television, just by offering it on their phones and laptops. >> And what are you going to do as a result of that? Take a different action, what's the next step for you, what's the action item? >> Well we're hoping, once our product gets filled out with the major networks, et cetera, that we actually provide a community attachment to it, so that we have over-the-air television channels as the main part of the app, and then a side part of the app could be any IP stream, from city council meetings to high schools, to colleges, to local community groups, local, even religious situations or festivals or whatever, and really try to tie that in. We'd really like to use local television as a way of strengthening all local media and local communities, that's the vision at least. >> It's a great mission you guys have at Didja, thanks for sharing that. Sarbjeet, what have you learned over the past quarter, three months, that was notable for you and the impact and something that changed you a little bit? >> What I have actually gravitated towards in the last three to six months is the blockchain, actually. I was light on that, like what it can do for us, and is there really a thing behind it, and can we leverage it. I've seen more and more actually usage of that, in sort of full SCM, supply chain management, and healthcare and some other sort of use cases if you will. I'm intrigued by it, and there's a lot of activity there. I think there's some legs behind it, so I'm excited about that. >> And are you doing a blockchain project as a result, or are you still tire-kicking? >> No actually, I will play with it, I'm a practitioner, I play with it, I write code and play with it and see (Jim laughs) what level of effort it takes to do that, and as you know, I wrote an Alexa skill a couple of weeks back, and play with AI and stuff like that. 
So I try to do that myself before I-- >> We're hoping blockchain helps even out the TV ad economy and gets rid of middle men and makes more trusting transactions between local businesses and stuff. At least I say that, I don't really know what I'm talking about. >> It sounds good though. You get yourself a new round of funding on that sound byte alone. JJ, what have you learned in the past couple months that's new to you and changed you or made you do something different? >> I've learned over the last few months, OSS Capital is a few months and change old, and so just kind of getting started on that, and it's really, I think potentially more than one decade, probably multi-decade kind of mostly consensus building effort. There's such a huge lack of consensus and agreement in the industry. It's a fascinatingly polarizing area, the sort of general topic of open source technology, economics, value creation, value capture. So my learnings over the past few months have just intensified in terms of the lack of consensus I've seen in the industry. So I'm trying to write a little bit more about observations there and sort of put thoughts out, and that's kind of been the biggest takeaway over the last few months for me. >> I'm sure you learned about all the lawyer conversations, setting up a fund, learnings there probably too right, (Jim laughs) I mean all the detail. All right, JJ, thanks so much, Sarbjeet, Jim, thanks for joining me on this Power Panel, cloud conversation impact, to entrepreneurship, open source. Jim Long, Sarbjeet Johal and Joseph Jacks, JJ, thanks for joining us, theCUBE Conversation here in Palo Alto, I'm John Furrier, thanks for watching. >> Thanks John. (lively classical music)
Venkat Venkataramani, Rockset & Jerry Chen, Greylock | CUBEConversation, November 2018
[Music] we're on welcome to the special cube conversation we're here with some breaking news we got some startup investment news here in the Q studios palo alto I'm John for your host here at Jerry Chen partnered Greylock and the CEO of rock said Venkat Venkat Rahmani welcome to the cube you guys announcing hot news today series a and seed and Series A funding 21 million dollars for your company congratulations thank you Roxette is a data company jerry great this is one of your nest you kept this secret forever it was John was really hard you know over the past two years every time I sat in this seat I'd say and one more thing you know I knew that part of the advantage was rocks I was a special company and we were waiting to announce it and that's right time so it's been about two and half years in the making I gotta give you credit Jerry I just want to say to everyone I try to get the secrets out of you so hard you are so strong and keeping a secret I said you got this hot startup this was two years ago yeah I think the probe from every different angle you can keep it secrets all the entrepreneurs out there Jerry Chen's your guide alright so congratulations let's talk about the startup so you guys got 21 million dollars how much was the seed round this is the series a the seed was three million dollars both Greylock and Sequoia participating and the series a was eighteen point five all right so other investors Jerry who else was in on this I just the two firms former beginning so we teamed up with their French from Sequoia and the seed round and then we over the course of a year and half like this is great we're super excited about the team bank had Andrew bhai belt we love the opportunity and so Mike for an office coin I said let's do this around together and we leaned in and we did it around alright so let's just get into the other side I'm gonna read your your about section of the press release roxette's visions to Korea to build the data-driven future provide a service search and analytics engine make it easy to go from data to applications essentially building a sequel layer on top of the cloud for massive data ingestion I want to jump into it but this is a hot area not a lot of people are doing this at the level you guys are now and what your vision is did this come from what's your background how did you get here did you wake up one Wednesday I'm gonna build this awesome contraction layer and build an operating system around data make this thing scalable how did it all start I think it all started from like just a realization that you know turning useful data to useful apps just requires lots of like hurdles right you have to first figure out what format the data is in you got to prepare the data you gotta find the right specialized you know data database or data management system to load it in and it often requires like weeks to months before useful data becomes useful apps right and finally you know after I you know my tenure at Facebook when I left the first thing I did was I was just talking you know talking to a lot of people with real-world companies and reload problems and I started walking away from moremore of them thinking that this is way too complex I think the the format in which a lot of the data is coming in is not the format in which traditional sequel based databases are optimized for and they were built for like transaction processing and analytical processing not for like real-time streams of data but there's JSON or you know you know parque or or any of these 
other formats that are very very popular and more and more data is getting produced by one set of applications and getting consumed by other applications but what we saw it was what is this how can we make it simpler why do we need all this complexity right what is a simple what is the most simple and most powerful system we can build and pulled in the hands of as many people as possible and so we very sort of naturally relate to developers and data scientists people who use code on data that's just like you know kind of like our past lives and when we thought about it well why don't we just index the data you know traditional databases were built when every byte mattered every byte of memory every byte on disk now in the cloud the economics are completely different right so when you rethink those things with fresh perspective what we said was like what if we just get all of this data index it in a format where we can directly run very very fast sequel on it how simple would the world be how much faster can people go from ideas to do experiments and experiments to production applications and how do we make it all faster also in the cloud right so that's really the genesis of it well the real inspiration came from actually talking to a lot of people with real-world problems and then figuring out what is the simplest most powerful thing we can build well I want to get to the whole complexity conversation cuz we were talking before we came on camera here about how complexity can kill and why and more complexity on top of more complexity I think there's a simplicity angle here that's interesting but I want to get back to your background of Facebook and I want to tell a story you've been there eight years but you were there during a very interesting time during that time in history Facebook was I think the first generation we've taught us on the cube all the time about how they had to build their own infrastructure at scale while they're scaling so they were literally blitzscaling as reid hoffman and would say and you guys do it the Greylock coverage unlike other companies at scale eBay Microsoft they had old-school one dotto Technology databases Facebook had to kind of you know break glass you know and build the DevOps out from generation one from scratch correct it was a fantastic experience I think when I started in 2007 Facebook had about 40 million monthly actives and I had the privilege of working with some of the best people and a lot of the problems we were very quickly around 2008 when I went and said hey I want to do some infrastructure stuff the mandate that was given to me and my team was we've been very good at taking open source software and customizing it to our needs what would infrastructure built by Facebook for Facebook look like and we then went into this journey that ended up being building the online data infrastructure at Facebook by the time I left the collectively these systems were surveying 5 plus billion requests per second across 25 plus geographical clusters and half a dozen data centers I think at that time and now there's more and the system continues to chug along so it was just a fantastic experience I think all the traditional ways of problem solving just would not work at that scale and when the user base was doubling early in the early days every four months every five months yeah and what's interesting you know you're young and here at the front lines but you're kind of the frog in boiling water and that's because you are you were at that time building the 
power DevOps equation automating scale growth everything's happening at once you guys were right there building it now fast forward today everyone who's got an enterprise it's it wants to get there they don't they're not Facebook they don't have this engineering staff they want to get scale they see the cloud clearly the value property has got clear visibility but the economics behind who they hire so they have all this data and they get more increasing amount of data they want to be like Facebook but can't be like Facebook so they have to build their own solutions and I think this is where a lot of the other vendors have to rebuild this cherry I want to ask you because you've been looking at a lot of investments you've seen that old guard kind of like recycled database solutions coming to the market you've seen some stuff in open source but nothing unique what was it about Roxette that when you first talk to them that but you saw that this is going to be vectoring into a trend that was going to be a perfect storm yeah I think you nailed it John historic when we have this new problems like how to use data the first thing trying to do you saw with the old technology Oh existing data warehouses akin databases okay that doesn't work and then the next thing you do is like okay you know through my investments in docker and B and the boards or a cloud aerosol firsthand you need kind of this rise of stateless apps but not stateless databases right and then I through the cloud area and a bunch of companies that I saw has an investor every pitch I saw for two or three years trying to solve this data and state problem the cloud dudes add more boxes right here's here's a box database or s3 let me solve it with like Oh another database elastic or Kafka or Mongo or you know Apache arrow and it just got like a mess because if almond Enterprise IT shop there's no way can I have the skill the developers to manage this like as Beckett like to call it Rube Goldberg machination of data pipelines and you know I first met Venkat three years ago and one of the conversations was you know complexity you can't solve complex with more complexity you can only solve complexity with simplicity and Roxette and the vision they had was the first company said you know what let's remove boxes and their design principle was not adding another boxes all a problem but how to remove boxes to solve this problem and you know he and I got along with that vision and excited from the beginning stood to leave the scene ah sure let's go back with you guys now I got the funding so use a couple stealth years to with three million which is good a small team and that goes a long way it certainly 2021 total 18 fresh money it's gonna help you guys build out the team and crank whatnot get that later but what did you guys do in the in those two years where are you now sequel obviously is lingua franca cool of sequel but all this data is doesn't need to be scheming up and built out so were you guys that now so since raising the seed I think we've done a lot of R&D I think we fundamentally believe traditional data management systems that have been ported over to run on cloud Williams does not make them cloud databases I think the cloud economics is fundamentally different I think we're bringing this just scratching the surface of what is possible the cloud economics is you know it's like a simple realization that whether you rent 100 CPUs for one minute or or one CPU 400 minutes it's cost you exactly the same so then if you really ask why is 
any of my query is slow right I think because your software sucks right so basically what I'm trying to say is if you can actually paralyze that and if you can really exploit the fluidity of the hardware it's not easy it's very very difficult very very challenging but it's possible I think it's not impossible and if you can actually build software ground-up natively in the cloud that simplifies a lot of this stuff and and understands the economics are different now and it's system software at the end of the day is how do I get the best you know performance and efficiency for the price being paid right and the you know really building you know that is really what I think took a lot of time for us we have built not only a ground-up indexing technique that can take raw data without knowing the shape of the data we can turn that and index it in ways and store them maybe in more than one way since for certain types of data and then also have built a distributed sequel engine that is cloud native built by ground up in the cloud and C++ and like really high performance you know technologies and we can actually run distributor sequel on this raw data very very fast my god and this is why I brought up your background on Facebook I think there's a parallel there from the ground this ground up kind of philosophy if you think of sequel as like a Google search results search you know keyword it's the keyword for machines in most database worlds that is the standard so you can just use that as your interface Christ and then you using the cloud goodness to optimize for more of the results crafty index is that right correct yes you can ask your question if your app if you know how to see you sequel you know how to use Roxette if you can frame your the question that you're asking in order to answer an API request it could be a micro service that you're building it could be a recommendation engine that you're that you're building or you could you could have recommendations you know trying to personalize it on top of real time data any of those kinds of applications where it's a it's a service that you're building an application you're building if you can represent ask a question in sequel we will make sure it's fast all right let's get into the how you guys see the application development market because the developers will other winners here end of the day so when we were covering the Hadoop ecosystem you know from the cloud era days and now the important work at the Claire merger that kind of consolidates that kind of open source pool the big complaint that we used to hear from practitioners was its time consuming Talent but we used to kind of get down and dirty the questions and ask people how they're using Hadoop and we had two answers we stood up Hadoop we were running Hadoop in our company and then that was one answer the other answer was we're using Hadoop for blank there was not a lot of those responses in other words there has to be a reason why you're using it not just standing it up and then the Hadoop had the problem of the world grew really fast who's gonna run it yeah management of it Nukem noose new things came in so became complex overnight it kind of had took on cat hair on it basically as we would say so how do you guys see your solution being used so how do you solve that what we're running Roxette oh okay that's great for what what did developers use Roxette for so there are two big personas that that we currently have as users right there are developers and data scientists people who 
program on data right - you know on one hand developers want to build applications that are making either an existing application better it could be a micro service that you know I want to personalize the recommendations they generated online I mean offline but it's served online but whether it is somebody you know asking shopping for cars on San Francisco was the shopping you know was the shopping for cars in Colorado we can't show the same recommendations based on how do we basically personalize it so personalization IOT these kinds of applications developers love that because often what what you need to do is you need to combine real-time streams coming in semi structured format with structured data and you have no no sequel type of systems that are very good at semi structured data but they don't give you joins they don't give you a full sequel and then traditional sequel systems are a little bit cumbersome if you think about it I new elasticsearch but you can do joins and much more complex correct exactly built for the cloud and with full feature sequel and joins that's how that's the best way to think about it and that's how developers you said on the other side because its sequel now all of a sudden did you know data scientist also loved it they had they want to run a lot of experiments they are the sitting on a lot of data they want to play with it run experiments test hypotheses before they say all right I got something here I found a pattern that I don't know I know I had before which is why when you go and try to stand up traditional database infrastructure they don't know how what indexes to build how do i optimize it so that I can ask you know interrogatory and all that complexity away from those people right from basically provisioning a sandbox if you will almost like a perpetual sandbox of data correct except it's server less so like you don't you never think about you know how many SSDs do I need how many RAM do I need how many hosts do I need what configure your programmable data yes exactly so you start so DevOps for data is finally the interview I've been waiting for I've been saying it for years when's is gonna be a data DevOps so this is kind of what you're thinking right exactly so you know you give us literally you you log in to rocks at you give us read permissions to battle your data sitting in any cloud and more and more data sources we're adding support every day and we will automatically cloudburst will automatically interested we will schematize the data and we will give you very very fast sequel over rest so if you know how to use REST API and if you know how to use sequel you'd literally need don't need to think about anything about Hardware anything about standing up any servers shards you know reindex and restarting none of that you just go from here is a bunch of data here are my questions here is the app I want to build you know like you should be bottleneck by your career and imagination not by what can my data employers give me through a use case real quick island anyway the Jarius more the structural and architectural questions around the marketplace take me through a use case I'm a developer what's the low-hanging fruit use case how would I engage with you guys yeah do I just you just ingest I just point data at you how do you see your market developing from the customer standpoint cool I'll take one concrete example from a from a developer right from somebody we're working with right now so they have right now offline recommendations right or every 
night they generate like if you're looking for this car or or this particular item in e-commerce these are the other things are related well they show the same thing if you're looking at let's say a car this is the five cars that are closely related this car and they show that no matter who's browsing well you might have clicked on blue cars the 17 out of 18 clicks you should be showing blue cars to them right you may be logging in from San Francisco I may be logging in from like Colorado we may be looking for different kinds of cars with different you know four-wheel drives and other options and whatnot there's so much information that's available that you can you're actually by personalizing it you're adding creating more value to your customer we make it very easy you know live stream all the click stream beta to rock set and you can join that with all the assets that you have whether it's product data user data past transaction history and now if you can represent the joins or whatever personalization that you want to find in real time as a sequel statement you can build that personalization engine on top of Roxanne this is one one category you're putting sequel code into the kind of the workflow of the code saying okay when someone gets down to these kinds of interactions this is the sequel query because it's a blue car kind of go down right so like tell me all the recent cars that this person liked what color is this and I want to like okay here's a set of candidate recommendations I have how do I start it what are the four five what are the top five I want to show and then on the data science use case there's a you know somebody building a market intelligence application they get a lot of third-party data sets it's periodic dumps of huge blocks of JSON they want to combine that with you know data that they have internally within the enterprise to see you know which customers are engaging with them who are the persons churning out what are they doing and they in the in the market and trying to bring they bring it all together how do you do that when you how do you join a sequel table with a with a JSON third party dumb and especially for coming and like in the real-time or periodic in a week or week month or one month literally you can you know what took this particular firm that we're working with this is an investment firm trying to do market intelligence it used age to run ad hoc scripts to turn all of this data into a useful Excel report and that used to take them three to four weeks and you know two people working on one person working part time they did the same thing in two days and Rock said I want to get to back to microservices in a minute and hold that thought I won't go to Jerry if you want to get to the business model question that landscape because micro services were all the world's going to Inc so competition business model I'll see you gets are funded so they said love the thing about monetization to my stay on the core value proposition in light of the red hat being bought by by IBM had a tweet out there kind of critical of the transactions just in terms of you know people talk about IBM's betting the company on RedHat Mike my tweet was don't get your reaction will and tie it to the visible here is that it seems like they're going to macro services not micro services and that the world is the stack is changing so when IBM sell out their stack you have old-school stack thinkers and then you have new-school stack thinkers where cloud completely changes the nature of 
the stack in this case this venture kind of is an indication that if you think differently the stack is not just a full stack this way it's this way in this way yeah as we've been saying on the queue for a couple of years so you get the old guard trying to get a position and open source all these things but the stacks changing these guys have the cloud out there as a tailwind which is a good thing how do you see the business model evolving do you guys talk about that in terms of you can hey just try to find your groove swing get customers don't worry about the monetization how many charging so how's that how do you guys talk about the business model is it specific and you guys have clear visibility on that what's the story on that I mean I think yeah I always tell Bank had this kind of three hurdles you know you have something worthwhile one well someone listen to your pitch right people are busy you like hey John you get pitched a hundred times a day by startups right will you take 30 seconds listen to it that's hurdle one her will to is we spend time hands on keyboards playing around with the code and step threes will they write you a check and I as a as a enter price offered investor in a former operator we don't overly folks in the revenue model now I think writing a check the biz model just means you're creating value and I think people write you checking screening value but you know the feedback I always give Venkat and the founders work but don't overthink pricing if the first 10 customers just create value like solve their problems make them love the product get them using it and then the monetization the actual specifics the business model you know we'll figure out down the line I mean it's a cloud service it's you know service tactically to many servers in that sentence but it's um it's to your point spore on the cloud the one that economists are good so if it works it's gonna be profitable yeah it's born the cloud multi-cloud right across whatever cloud I wanna be in it's it's the way application architects going right you don't you don't care about VMs you don't care about containers you just care about hey here's my data I just want to query it and in the past you us developer he had to make compromises if I wanted joins in sequel queries I had to use like postgrads if I won like document database and he's like Mongo if I wanted index how to use like elastic and so either one I had to pick one or two I had to use all three you know and and neither world was great and then all three of those products have different business models and with rocks head you actually don't need to make choices right yes this is classic Greylock investment you got sequoia same way go out get a position in the market don't overthink the revenue model you'll funded for grow the company let's scale a little bit and figure out that blitzscale moment I believe there's probably the ethos that you guys have here one thing I would add in the business model discussion is that we're not optimized to sell latte machines who are selling coffee by the cup right so like that's really what I mean we want to put it in the hands of as many people as possible and make sure we are useful to them right and I think that is what we're obsessed about where's the search is a good proxy I mean that's they did well that way and rocks it's free to get started right so right now they go to rocks calm get started for free and just start and play around with it yeah yeah I mean I think you guys hit the nail on the head on this 
whole kind of data addressability I've been talking about it for years making it part of the development process programming data whatever buzzword comes out of it I think the trend is it looks a lot like that depo DevOps ethos of automation scale you get to value quickly not over thinking it the value proposition and let it organically become part of the operation yeah I think we we the internal KPIs we track are like how many users and applications are using us on a daily and weekly basis this is what we obsess about I think we say like this is what excellence looks like and we pursue that the logos in the revenue would would you know would be a second-order effect yeah and it's could you build that core kernels this classic classic build up so I asked about the multi cloud you mention that earlier I want to get your thoughts on kubernetes obviously there's a lot of great projects going on and CN CF around is do and this new state problem that you're solving in rest you know stateless has been an easy solution VP is but API 2.0 is about state right so that's kind of happening now what's your view on kubernetes why is it going to be impactful if someone asked you you know at a party hey thank you why is what's all this kubernetes what party going yeah I mean all we do is talk about kubernetes and no operating systems yeah hand out candy last night know we're huge fans of communities and docker in fact in the entire rock set you know back-end is built on top of that so we run an AWS but with the inside that like we run or you know their entire infrastructure in one kubernetes cluster and you know that is something that I think is here to stay I think this is the the the programmability of it I think the DevOps automation that comes with kubernetes I think all of that is just like this is what people are going to start taking why is it why is it important in your mind the orchestration because of the statement what's the let's see why is it so important it's a lot of people are jazzed about it I've been you know what's what's the key thing I think I think it makes your entire infrastructure program all right I think it turns you know every aspect of you know for example yeah I'll take it I'll take a concrete example we wanted to build this infrastructure so that when somebody points that like it's a 10 terabytes of data we want to very quickly Auto scale that out and be able to grow this this cluster as quickly as possible and it's like this fluidity of the hardware that I'm talking about and it needs to happen or two levels it's one you know micro service that is ingesting all the data that needs to sort of burst out and also at the second level we need to be able to grow more more nodes that we we add to this cluster and so the programmability nature of this like just imagine without an abstraction like kubernetes and docker and containers and pods imagine doing this right you are building a you know a lots and lots of metrics and monitoring and you're trying to build the state machine of like what is my desired state in terms of server utilization and what is the observed state and everything is so ad hoc and very complicated and kubernetes makes this whole thing programmable so I think it's now a lot of the automation that we do in terms of called bursting and whatnot when I say clock you know it's something we do take advantage of that with respect to stateful services I think it's still early days so our our position on my partner it's a lot harder so our position on that is continue 
to use communities and continue to make things as stateless as possible and send your real-time streams to a service like Roxette not necessarily that pick something like that very separate state and keep it in a backhand that is very much suited to your micro service and the business logic that needs to live there continue should continue to live there but if you can take a very hard to scale stateful service split it into two and have some kind of an indexing system Roxette is one that you know we are proud of building and have your stateless communal application logic and continue to have that you know maybe use kubernetes scale it in lambdas you know for all we care but you can take something that is very hard to you know manage and scale today break it into the stateful part in the stateless part and the serval is back in like like Roxette will will sort of hopefully give you a huge boost in being able to go from you know an experiment to okay I'm gonna roll it out to a smaller you know set of audience to like I want to do a worldwide you know you can do all of that without having to worry about and think about the alternative if you did it the old way yeah yeah and that's like talent you'd need it would be a wired that's spaghetti everywhere so Jerry this is a kubernetes is really kind of a benefit off your your investment in docker you must be proud and that the industry has gone to a whole nother level because containers really enable all this correct yeah so that this is where this is an example where I think clouds gonna go to a whole nother level that no one's seen before these kinds of opportunities that you're investing in so I got to ask you directly as you're looking at them as a as a knowledgeable cloud guy as well as an investor cloud changes things how does that change how is cloud native and these kinds of new opportunities that have built from the ground up change a company's network network security application era formants because certainly this is a game changer so those are the three areas I see a lot of impact compute check storage check networking early days you know it's it's it's funny it gosh seems so long ago yet so briefly when you know I first talked five years ago when I first met mayor of Essen or docker and it was from beginning people like okay yes stateless applications but stateful container stateless apps and then for the next three or four years we saw a bunch of companies like how do I handle state in a docker based application and lots of stars have tried and is the wrong approach the right approach is what these guys have cracked just suffered the state from the application those are app stateless containers store your state on an indexing layer like rock set that's hopefully one of the better ways saw the problem but as you kind of under one problem and solve it with something like rock set to your point awesome like networking issue because all of a sudden like I think service mesh and like it's do and costs or kind of the technologies people talk about because as these micro services come up and down they're pretty dynamic and partially as a developer I don't want to care about that yeah right that's the value like a Roxanna service but still as they operate of the cloud or the IT person other side of the proverbial curtain I probably care security I matters because also India's flowing from multiple locations multiple destinations using all these API and then you have kind of compliance like you know GDP are making security and privacy super 
important right now, so that's an area that we think a lot about as investors. >> So can I program that into Rockset — what about building that into my app natively, leveraging the Rockset abstraction? What's the key thing there? Say I'm a developer and, around GDPR — hey, you know what, I've got a website and a social network out in London and Europe, and I've got this GDPR nightmare. >> We don't have a great answer for GDPR in the sense that we're not a controller of the data, right — we're just a processor. So for GDPR I think the controller still has to do a lot of work to be compliant. The way we look at it is, we never forget that this ultimately is going to be adding value to enterprises. So from day one you can't store data in Rockset without encrypting it — it's on by default, it's the only way — and everything in transit is over HTTPS and SSL. We never forget that we're building for enterprises, so we've baked in features for enterprise customers: they can bring in their own custom encryption key, everything will be encrypted, and the key never leaves their AWS account — you know, KMS key support, private VPC links — we have a plethora of security features, so that control of the data stays with the data controller, which is our customer, while we act as the processor, and a lot of the time we can process it using their encryption keys. If I were going to build a GDPR or security solution, I would probably build it on Rockset, and some of the early developers building around Rockset are security companies that are trying to track where all their data is coming and going. So they're the processor, and one of the categories we hope to enable with Rockset is a new generation of security and privacy companies that in the past had a hard time tracking all this data. >> Okay, so you can build a security or GDPR solution on top of Rockset, because Rockset gives you the power to process all the data and index all the data. >> Right — so one of the early developers, still in stealth, is looking at the data flows coming and going, using us, and they'll apply the context. They'll say, oh, this is your credit card, this is your Social Security number, this is your birthday, your favorite colors, et cetera, and they'll apply that. >> And I think to your point it's game-changing — not just Rockset but all of this stuff in the cloud. As an investor we see a whole generation of new companies either, A, making existing things better or, B, solving new categories of problems, like pricing in the cloud. I think the future is pretty bright for both founders and investors, because there's just a bunch of great new companies, and it's being built up from the ground up. >> This is the thing I brought up earlier, the Red Hat-IBM thing: that's not the answer at the root level, I feel like. I think it's fascinating, but it's almost like doubling down — to your comment — on the old stack, right? It's almost a double down on the old stack versus an aggressive bet on what a cloud-native stack will look like. >> You know, I wish both companies well — they're great people, I want them to do their best and do well, and I think they'll do great with OpenStack. But again, a product company acquiring the people who happen to contribute to open source — I think it was a great move for both companies — but it doesn't mean a new stack can't do well too. And I think you're going to see this world where, to your point, you have these old stacks, but then a category of new-stack companies that are being born in the cloud, and they're just fun to watch. All the big investments that would meet the blitzscaling criteria start out organically, on a wave, in a market that has problems and that's growing. So I think cloud-native, ground-up, clean-sheet-of-paper — that's the new thing. You've just got to pick the right wave, and a big wave is not a bad wave to be on right now. >> And the data wave — that's part of the cloud, correct? And it's been growing bigger. It's arguably bigger than IBM, bigger than Red Hat, bigger than most of the companies out there. >> And I think that's the right wave to bet on. If you pick the next wave — cloud-native, born-in-the-cloud infrastructure — it's still early days, and the companies riding that wave are going to do well. So I'm pretty excited; there are a lot of opportunities. >> Certainly this whole idea that change is coming — societal change, mission-based companies, whether it's NGOs or full-scale enterprises, and all the applications the cloud can enable, from data privacy to wearables to cars to health — we're seeing it every single day. And if you took Amazon's revenue — and their cloud revenue alone is something like a 20 billion run rate, while Microsoft bundles in a lot of their Office business as well — and then took the revenue of Amazon's customers in that marketplace on top of it, they'd clearly be number one by a long shot. They don't count that revenue, and that's a big factor. If you look at whoever can build these enabling markets right now, there are going to be a few big ones coming, and they're going to do well. So I think this is a good opportunity. Congratulations. Twenty-one million dollars — final question before we go: what are you going to spend it on? >> We're going to spend it on our go-to-market strategy and hiring amazing people, as many as we can get. >> Good answer — he didn't say launch party, that's all I'm saying, right? (laughs) Okay, we've been here with the Rockset CEO and with Jerry Chen — Cube royalty, number two all-time on our Cube alumni list — partner at Greylock. Guys, thanks for coming in. I'm Jeff, thanks for watching this special Cube conversation. [Music]
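The encryption-by-default and bring-your-own-key design described above maps onto a well-known pattern: envelope encryption against a customer-managed KMS key. Below is a minimal sketch of that pattern in Python, assuming boto3 and the cryptography package; the key alias, region, and record fields are invented for illustration, and this shows the general shape of the approach rather than Rockset's actual implementation.

```python
# Hypothetical sketch of the "bring your own KMS key" pattern described above.
# The key alias, region, and record fields are illustrative placeholders.
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms", region_name="us-east-1")
CUSTOMER_KEY_ALIAS = "alias/customer-owned-cmk"  # assumed alias in the customer's AWS account


def encrypt_record(plaintext: bytes) -> dict:
    """Envelope-encrypt one record: data key from the customer's KMS key, AES-GCM locally."""
    dk = kms.generate_data_key(KeyId=CUSTOMER_KEY_ALIAS, KeySpec="AES_256")
    nonce = os.urandom(12)
    ciphertext = AESGCM(dk["Plaintext"]).encrypt(nonce, plaintext, None)
    # Only the KMS-wrapped data key is stored; the customer's master key never leaves KMS.
    return {"ciphertext": ciphertext, "nonce": nonce, "wrapped_key": dk["CiphertextBlob"]}


def decrypt_record(record: dict) -> bytes:
    """Ask KMS (in the customer's account) to unwrap the data key, then decrypt locally."""
    plaintext_key = kms.decrypt(CiphertextBlob=record["wrapped_key"])["Plaintext"]
    return AESGCM(plaintext_key).decrypt(record["nonce"], record["ciphertext"], None)


if __name__ == "__main__":
    stored = encrypt_record(b'{"user": "jane", "favorite_color": "green"}')
    print(decrypt_record(stored))
```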
SUMMARY :
the enterprise to see you know which
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
San Francisco | LOCATION | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
2007 | DATE | 0.99+ |
five cars | QUANTITY | 0.99+ |
Jerry Chen | PERSON | 0.99+ |
three million dollars | QUANTITY | 0.99+ |
10 terabytes | QUANTITY | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
Colorado | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
London | LOCATION | 0.99+ |
one minute | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
21 million dollars | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
November 2018 | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
Jerry | PERSON | 0.99+ |
17 | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
two people | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
second level | QUANTITY | 0.99+ |
Excel | TITLE | 0.99+ |
Mike | PERSON | 0.99+ |
three million | QUANTITY | 0.99+ |
eight years | QUANTITY | 0.99+ |
reid hoffman | PERSON | 0.99+ |
Roxette | ORGANIZATION | 0.99+ |
five years ago | DATE | 0.99+ |
Rube Goldberg | PERSON | 0.99+ |
three years | QUANTITY | 0.99+ |
two answers | QUANTITY | 0.99+ |
two levels | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
both companies | QUANTITY | 0.99+ |
Roxanna | ORGANIZATION | 0.99+ |
Rock | PERSON | 0.99+ |
C++ | TITLE | 0.99+ |
two big personas | QUANTITY | 0.99+ |
21 million dollars | QUANTITY | 0.99+ |
18 clicks | QUANTITY | 0.99+ |
Hadoop | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
Sequoia | ORGANIZATION | 0.98+ |
Venkat Venkataramani | PERSON | 0.98+ |
three years ago | DATE | 0.98+ |
Jeffrey | PERSON | 0.98+ |
John | PERSON | 0.98+ |
two firms | QUANTITY | 0.98+ |
eBay | ORGANIZATION | 0.98+ |
one person | QUANTITY | 0.98+ |
Venkat | ORGANIZATION | 0.98+ |
100 CPUs | QUANTITY | 0.98+ |
Andrew | PERSON | 0.98+ |
25 plus geographical clusters | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
half a dozen data centers | QUANTITY | 0.98+ |
four weeks | QUANTITY | 0.98+ |
both companies | QUANTITY | 0.98+ |
one month | QUANTITY | 0.97+ |
two years ago | DATE | 0.97+ |
400 minutes | QUANTITY | 0.97+ |
more than one way | QUANTITY | 0.97+ |
one answer | QUANTITY | 0.97+ |
two days | QUANTITY | 0.96+ |
SIA | ORGANIZATION | 0.96+ |
John Thomas, IBM | Change the Game: Winning With AI
(upbeat music) >> Live from Time Square in New York City, it's The Cube. Covering IBM's change the game, winning with AI. Brought to you by IBM. >> Hi everybody, welcome back to The Big Apple. My name is Dave Vellante. We're here in the Theater District at The Westin Hotel covering a Special Cube event. IBM's got a big event today and tonight, if we can pan here to this pop-up. Change the game: winning with AI. So IBM has got an event here at The Westin, The Tide at Terminal 5 which is right up the Westside Highway. Go to IBM.com/winwithAI. Register, you can watch it online, or if you're in the city come down and see us, we'll be there. Uh, we have a bunch of customers will be there. We had Rob Thomas on earlier, he's kind of the host of the event. IBM does these events periodically throughout the year. They gather customers, they put forth some thought leadership, talk about some hard dues. So, we're very excited to have John Thomas here, he's a distinguished engineer and Director of IBM Analytics, long time Cube alum, great to see you again John >> Same here. Thanks for coming on. >> Great to have you. >> So we just heard a great case study with Niagara Bottling around the Data Science Elite Team, that's something that you've been involved in, and we're going to get into that. But give us the update since we last talked, what have you been up to?? >> Sure sure. So we're living and breathing data science these days. So the Data Science Elite Team, we are a team of practitioners. We actually work collaboratively with clients. And I stress on the word collaboratively because we're not there to just go do some work for a client. We actually sit down, expect the client to put their team to work with our team, and we build AI solutions together. Scope use cases, but sort of you know, expose them to expertise, tools, techniques, and do this together, right. And we've been very busy, (laughs) I can tell you that. You know it has been a lot of travel around the world. A lot of interest in the program. And engagements that bring us very interesting use cases. You know, use cases that you would expect to see, use cases that are hmmm, I had not thought of a use case like that. You know, but it's been an interesting journey in the last six, eight months now. >> And these are pretty small, agile teams. >> Sometimes people >> Yes. use tiger teams and they're two to three pizza teams, right? >> Yeah. And my understanding is you bring some number of resources that's called two three data scientists, >> Yes and the customer matches that resource, right? >> Exactly. That's the prerequisite. >> That is the prerequisite, because we're not there to just do the work for the client. We want to do this in a collaborative fashion, right. So, the customers Data Science Team is learning from us, we are working with them hand in hand to build a solution out. >> And that's got to resonate well with customers. >> Absolutely I mean so often the services business is like kind of, customers will say well I don't want to keep going back to a company to get these services >> Right, right. I want, teach me how to fish and that's exactly >> That's exactly! >> I was going to use that phrase. That's exactly what we do, that's exactly. So at the end of the two or three month period, when IBM leaves, my team leaves, you know, the client, the customer knows what the tools are, what the techniques are, what to watch out for, what are success criteria, they have a good handle of that. 
>> So we heard about the Niagara Bottling use case, which was a pretty narrow one. >> Mm-hmm. >> How can we optimize the use of the plastic wrapping, save some money there, but at the same time maintain stability — you know, quite a narrow case. >> Yes, yes. >> What are some of the other use cases? >> Yeah, that's a very, like you said, a narrow one. But there are some use cases that span industries, that cut across different domains. I think I may have mentioned this on one of our previous discussions, Dave. You know, customer interactions — trying to improve customer interactions is something that cuts across industry, right. Now that can be across different channels. One of the most prominent channels is a call center, I think we have talked about this previously. You know I hate calling into a call center (laughter) because I don't know what kind of support I'm going to get. >> Yeah, yeah. >> But what if you could equip the call center agents to provide consistent service to the caller, and handle the calls in the most appropriate way? Reducing costs on the business side, because call handling is expensive. And eventually lead up to: can I even avoid the call, through insights on why the call is coming in in the first place? So this use case cuts across industry. Any enterprise that has got a call center is doing this. So we are looking at: can we apply machine-learning techniques to understand dominant topics in the conversation? Once we understand those dominant topics with unsupervised techniques, can we drill into that and understand what are the intents, and does the intent change as the conversation progresses? So you know, I'm calling someone, it starts off with pleasantries, it then goes into weather, how are the kids doing, you know, complaining about life in general. But then you get to something of substance — why the person was calling in the first place. And then you may think that is the intent of the conversation, but you find that as the conversation progresses, the intent might actually change. And can you understand that in real time? Can you understand the reasons behind the call, so that you could take proactive steps to maybe avoid the call coming in in the first place? This use case, Dave — we are seeing so much interest in this use case, because call centers are a big cost to most enterprises. >> Let's double down on that because I want to understand this. So what you're basically doing — so every time you call a call center, this call may be recorded, >> (laughter) Yeah. >> for quality of service. >> Yeah. >> So you're recording the calls, maybe using NLP to transcribe those calls. >> NLP is just the first step, >> Right. >> so you're absolutely right, when calls come in there are already call recording systems in place. We're not getting into that space, right. So call recording systems record the voice calls. So often in offline batch mode you can take these millions of calls, pass them through a speech-to-text mechanism, which produces a text equivalent of the voice recordings. Then what we do is we apply unsupervised machine learning, and clustering, and topic-modeling techniques against it to understand what are the dominant topics in this conversation. >> You do kind of an entity extraction of those topics. >> Exactly, exactly, exactly. Then we find what is most relevant — what are the relevant ones, what is the relevancy of topics in a particular conversation. That's not enough; that is just step two, if you will. 
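As a rough illustration of the unsupervised step described here — turning transcribed calls into dominant topics before any intent work — the sketch below uses scikit-learn's TF-IDF vectorizer and NMF on a handful of invented snippets. It is a toy stand-in for the batch pipeline over millions of transcripts, not the production tooling itself.

```python
# Illustrative only: a toy version of the unsupervised topic step on call transcripts.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

transcripts = [  # invented snippets standing in for speech-to-text output
    "I want to dispute a charge on my last bill",
    "my payment is going to be late this month because I changed jobs",
    "how do I set up a different payment plan",
    "the charge on my card looks wrong and I want a refund",
    "I lost my job and cannot make the payment on time",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(transcripts)

n_topics = 2  # in practice chosen by inspection or a coherence measure
nmf = NMF(n_components=n_topics, random_state=0)
doc_topic_weights = nmf.fit_transform(X)   # how strongly each call expresses each topic
terms = vectorizer.get_feature_names_out()

for topic_idx, component in enumerate(nmf.components_):
    top_terms = [terms[i] for i in component.argsort()[::-1][:4]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
```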
Then you have to, we build what is called an intent hierarchy. So this is at top most level will be let's say payments, the call is about payments. But what about payments, right? Is it an intent to make a late payment? Or is the intent to avoid the payment or contest a payment? Or is the intent to structure a different payment mechanism? So can you get down to that level of detail? Then comes a further level of detail which is the reason that is tied to this intent. What is a reason for a late payment? Is it a job loss or job change? Is it because they are just not happy with the charges that I have coming? What is a reason? And the reason can be pretty complex, right? It may not be in the immediate vicinity of the snippet of conversation itself. So you got to go find out what the reason is and see if you can match it to this particular intent. So multiple steps off the journey, and eventually what we want to do is so we do our offers in an offline batch mode, and we are building a series of classifiers instead of classifiers. But eventually we want to get this to real time action. So think of this, if you have machine learning models, supervised models that can predict the intent, the reasons, et cetera, you can have them deployed operationalize them, so that when a call comes in real time, you can screen it in real time, do the speech to text, you can do this pass it to the supervise models that have been deployed, and the model fires and comes back and says this is the intent, take some action or guide the agent to take some action real time. >> Based on some automated discussion, so tell me what you're calling about, that kind of thing, >> Right. Is that right? >> So it's probably even gone past tell me what you're calling about. So it could be the conversation has begun to get into you know, I'm going through a tough time, my spouse had a job change. You know that is itself an indicator of some other reasons, and can that be used to prompt the CSR >> Ah, to take some action >> Ah, oh case. appropriate to the conversation. >> So I'm not talking to a machine, at first >> no no I'm talking to a human. >> Still talking to human. >> And then real time feedback to that human >> Exactly, exactly. is a good example of >> Exactly. human augmentation. >> Exactly, exactly. I wanted to go back and to process a little bit in terms of the model building. Are there humans involved in calibrating the model? >> There has to be. Yeah, there has to be. So you know, for all the hype in the industry, (laughter) you still need a (laughter). You know what it is is you need expertise to look at what these models produce, right. Because if you think about it, machine learning algorithms don't by themselves have an understanding of the domain. They are you know either statistical or similar in nature, so somebody has to marry the statistical observations with the domain expertise. So humans are definitely involved in the building of these models and claiming of these models. >> Okay. >> (inaudible). So that's who you got math, you got stats, you got some coding involved, and you >> Absolutely got humans are the last mile >> Absolutely. to really bring that >> Absolutely. expertise. And then in terms of operationalizing it, how does that actually get done? What tech behind that? >> Ah, yeah. >> It's a very good question, Dave. You build models, and what good are they if they stay inside your laptop, you know, they don't go anywhere. 
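Alongside the intent hierarchy described above, and before the deployment details that follow, here is a hedged sketch of the supervised side: a small intent classifier whose labels mimic a two-level hierarchy such as payment/late or payment/dispute. The training snippets and label names are invented for illustration; a real system would be trained on large labeled corpora and would sit behind the real-time scoring path discussed next.

```python
# Illustrative only: a tiny supervised intent classifier with hierarchical string labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

snippets = [  # invented training snippets
    "I will not be able to pay until next month",
    "I lost my job so the payment will be late",
    "this charge is wrong and I want it removed",
    "I do not recognize this transaction on my bill",
    "can I switch to a quarterly payment schedule",
    "is there a way to spread this payment over three months",
]
labels = [  # invented two-level labels of the form topic/intent
    "payment/late", "payment/late",
    "payment/dispute", "payment/dispute",
    "payment/restructure", "payment/restructure",
]

intent_model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
intent_model.fit(snippets, labels)

new_snippet = ["my spouse changed jobs so this month's payment will be a little late"]
print(intent_model.predict(new_snippet)[0])  # expected: payment/late
print(dict(zip(intent_model.classes_, intent_model.predict_proba(new_snippet)[0].round(2))))
```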
What you need to do is — I use a phrase — weave these models into your business processes and your applications. So you need a way to deploy these models. The models should be consumable from your business processes. Now it could be a REST API call to a model. In some cases a REST API call is not sufficient, the latency is too high. Maybe you've got to embed that model right into where your application is running. You know, you've got data on a mainframe. A credit card transaction comes in, and the authorization for the credit card is happening in a four millisecond window on the mainframe in, you know, CICS COBOL code. I don't have the time to make a REST API call outside. I've got to have the model execute in context with my CICS COBOL code in that memory space. >> Yeah, right. >> You know, so the operationalizing is deploying, consuming these models, and then beyond that, how do the models behave over time? Because you can have the best programmer, the best data scientist build the absolute best model, which has got great accuracy, great performance today. Two weeks from now, performance is going to go down. >> Hmm. >> How do I monitor that? How do I trigger an alert when it falls below a certain threshold? And can I have a system in place that retrains this model with new data as it comes in? >> So you got to understand where the data lives. >> Absolutely. >> You got to understand the physics, >> Yes. >> the latencies involved. >> Yes. >> You got to understand the economics. >> Yes. >> And there's also probably in many industries legal implications. >> Oh yes. Or the explainability of models — you know, can I prove that there is no bias here? >> Right. >> Now all of these are challenging but, you know, doable things. >> What makes a successful engagement? Obviously you guys are outcome driven, >> Yeah. >> but talk about how you guys measure success. >> So, um, for our team right now it is not about revenue, it's purely about adoption. Does the client, does the customer see the value of what IBM brings to the table? This is not just tools and technology, by the way. It's also expertise, right? >> Hmm. >> So this notion of expertise as a service, which is coupled with tools and technology to build a successful engagement. The way we measure success is: have we built out the use case in a way that is useful for the business? Two, does the client see value in going further with that? So this is right now what we look at. You know, yes, of course everybody cares about revenue, but that is not our key metric. Now in order to get there, though, what we have found — and it's a little bit of hard work, yes — is you need different constituents of the customer to come together. It's not just me sending a bunch of awesome Python programmers to the client. >> Yeah, right. >> On the customer's side we need involvement from their Data Science Team. We talk about collaborating with them. We need involvement from their line of business. Because if the line of business doesn't care about the models we've produced, you know, what good are they? >> Hmm. >> And third — people don't usually think about it — we need IT to be part of the discussion. Not just part of the discussion, part of being the stakeholder. >> Yes, so you've got, so IBM has the chops to actually bring these constituents together. >> Ya. >> I actually have a fair amount of experience in herding cats in large organizations. (laughter) And you know, the customer, they've got skin in the IBM game. 
This is to me a big differentiator between IBM and certainly some of the other technology suppliers who don't have the depth of services, expertise, and domain expertise. But on the flip side of that, it's a differentiation from many of the SIs who have that level of global expertise, but they don't have the tech piece. >> Right. >> Now they would argue, well, we do anybody's tech. >> Ya. >> But you know, if you've got the tech... >> Ya. >> You just got to (laughter) >> Ya. >> bring those two together. >> Exactly. >> And that really seems to me to be the big differentiator >> Yes, absolutely. >> for IBM. Well John, thanks so much for stopping by theCube and explaining sort of what you've been up to, the Data Science Elite Team, very exciting. Six to nine months in, >> Yes. >> are you declaring success yet? Still too early? >> Uh, well, we're declaring success and we are growing, >> Ya. >> Growth is good. >> A lot of, lot of attention. >> Alright, great to see you again, John. >> Absolutely, thank you Dave. Thanks very much. >> Okay, keep it right there everybody. You're watching theCube. We're here at The Westin in midtown and we'll be right back after this short break. I'm Dave Vellante. (tech music)
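To make the monitoring and retraining idea from this conversation concrete — accuracy drifting over weeks, a trigger firing when a metric drops below a threshold, retraining on fresh data — here is a bare-bones sketch. The feedback source, threshold value, and retrain hook are all placeholders standing in for a real model-serving platform and a governed retraining pipeline.

```python
# Illustrative only: a bare-bones accuracy monitor with a retrain trigger.
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.85  # assumed service-level target, not a real benchmark


def fetch_recent_scored_calls():
    """Placeholder: a real system would pull recent predictions plus
    human-confirmed labels from the serving platform's feedback store."""
    predictions = ["payment/late", "payment/dispute", "payment/late", "payment/restructure"]
    actuals = ["payment/late", "payment/dispute", "payment/dispute", "payment/restructure"]
    return predictions, actuals


def trigger_retraining():
    """Placeholder: kick off whatever governed retraining pipeline the team uses."""
    print("accuracy below threshold -> queueing retraining job with fresh labeled data")


def monitor_once():
    predictions, actuals = fetch_recent_scored_calls()
    acc = accuracy_score(actuals, predictions)
    print(f"rolling accuracy: {acc:.2f}")
    if acc < ACCURACY_THRESHOLD:
        trigger_retraining()


if __name__ == "__main__":
    monitor_once()  # in production this would run on a schedule, not once
```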
SUMMARY :
Brought to you by IBM. he's kind of the host of the event. Thanks for coming on. last talked, what have you been up to?? We actually sit down, expect the client to use tiger teams and they're two to three And my understanding is you bring some That's the prerequisite. That is the prerequisite, because we're not And that's got to resonate and that's exactly So at the end of the two or three month period, How can we optimize the use of the plastic wrapping, Ya. You know very, What are some of the other use cases? intent of the conversation, but you So every time you call a call center (laughter) Yeah. So you're recording the calls maybe So call recording systems record the voice calls. You do kind of an entity do the speech to text, you can do this Is that right? has begun to get into you know, appropriate to the conversation. I'm talking to a human. is a good example of Exactly. a little bit in terms of the model building. You know what it is is you need So that's who you got math, you got stats, to really bring that how does that actually get done? I don't have the time to make a Rest API call outside. You know so the operationalizing is deploying, that reclaims this model with new data as it comes in. So you got to understand where You got to understand Yes. You got to understand And there's also probably in many industries No, the explainability of models. but you know, doable things. but talk about how you guys measure success. the value of what IBM brings to the table. constituents of the customer to come together. about the models we've produced you know, Not just part of the discussion, to actually bring these differentiation from many of the a size Now they would argue Ya. But you know, And that's really seems to me to be Six to nine months in, are you declaring success yet? Alright, great to see you Absolutely, thanks you Dave.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Rob Thomas | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
John Thomas | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Six | QUANTITY | 0.99+ |
Time Square | LOCATION | 0.99+ |
tonight | DATE | 0.99+ |
first step | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
three month | QUANTITY | 0.99+ |
nine months | QUANTITY | 0.99+ |
third | QUANTITY | 0.98+ |
Two | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
New York City | LOCATION | 0.98+ |
today | DATE | 0.98+ |
Python | TITLE | 0.98+ |
IBM Analytics | ORGANIZATION | 0.97+ |
Terminal 5 | LOCATION | 0.97+ |
Data Science Elite Team | ORGANIZATION | 0.96+ |
Niagara | ORGANIZATION | 0.96+ |
one | QUANTITY | 0.96+ |
IBM.com/winwithAI | OTHER | 0.96+ |
first place | QUANTITY | 0.95+ |
eight months | QUANTITY | 0.94+ |
Change the Game: Winning With AI | TITLE | 0.89+ |
The Westin | ORGANIZATION | 0.89+ |
Niagara Bottling | PERSON | 0.89+ |
Theater District | LOCATION | 0.88+ |
four millisecond window | QUANTITY | 0.87+ |
step two | QUANTITY | 0.86+ |
Cube | PERSON | 0.85+ |
Westside Highway | LOCATION | 0.83+ |
first | QUANTITY | 0.83+ |
Two weeks | DATE | 0.82+ |
millions of calls | QUANTITY | 0.79+ |
two three data scientists | QUANTITY | 0.78+ |
CICS | TITLE | 0.77+ |
COBOL | OTHER | 0.69+ |
Rest API call | OTHER | 0.68+ |
The Tide | LOCATION | 0.68+ |
theCube | ORGANIZATION | 0.67+ |
The Westin | LOCATION | 0.66+ |
Rest API | OTHER | 0.66+ |
Apple | LOCATION | 0.63+ |
Big | ORGANIZATION | 0.62+ |
Westin | LOCATION | 0.51+ |
last six | DATE | 0.48+ |
Hotel | ORGANIZATION | 0.45+ |
theCube | TITLE | 0.33+ |
Bottling | COMMERCIAL_ITEM | 0.3+ |
Mike Bollman, Enterprise Products Company and Scott Delandy, Dell EMC | Dell Technologies World 2018
>> Announcer: Live from Las Vegas it's theCUBE covering Dell Technologies World 2018, brought to you by Dell EMC and its ecosystem partners. (bright music) >> Welcome back to Las Vegas. I'm Lisa Martin with Keith Townsend. We are with Dell Technologies World and about 14,000 other people here. You're watching theCUBE. We are excited to welcome back to theCUBE Scott Delandy, the Technical Director of Dell EMC. Hey, Scott! >> Hey guys, how are you? >> And you have a featured guest, Mike Bollman, the Director of Server and Storage Architecture from Enterprise Products Company, welcome! >> Thanks for having me. >> So you guys are a leader in oil and gas. I hear some great things. Talk to us about what it is that you're doing and how you're working with Dell EMC to be innovative in the oil and gas industry. >> So we're actually a Dell EMC storage customer for about the last two years now, and working with them on how we can bring in a lot of the data that we have from the field. The buzzword today is Internet of Things, or IoT. We've been doing it for many, many years, though, so we pull that data in and we look and analyze it and figure out how we can glean more information out of it. How can we tune our systems? As an example, one of the things that we do is we model a product as it flows through a pipeline because we're looking for bubbles. And bubbles mean friction and friction means less flow and we're all about flow. The more product we can flow the more money we can make. So it's one of the interesting things that we do with the data that we have. >> And Scott, talk to us about specifically oil and gas in terms of an industry that is helping Dell EMC really define this next generation of technology to modernize data centers and enable companies to kind of follow along the back of one and start doing IoT as well. >> Yeah, so the things that Mike has been able to accomplish within Enterprise Products is amazing because they truly are an innovator in terms of how they leverage technology to not just kind of maintain sort of the core applications that they need to support just to keep the business up and running but how they're investing in new applications, in new concepts to help further drive the business and be able to add value back into the organization. So we love working with Enterprise and users like Mike just because they really push technology, they're, again, very innovative in terms of the things that they're trying to do, and they provide us incredible feedback in terms of the things that we're doing, the things that we're looking to build and helping us understand what are the challenges that users like Mike are facing and how do we take our technology and adapt it to make sure that we're meeting his requirements. >> So unlike any other energy, oil and gas, you guys break scale. I mean you guys define scale when it comes to the amount of data and the need to analyze that data. How has this partnership allowed you to, what specifically have you guys leveraged from Dell EMC to move faster? >> So we've done a number of things. Early on when we first met with Scott and team at Dell EMC we said we're not looking to establish a traditional sales-customer relationship. We want a two-way business partnership. We want to be able to take your product, leverage it in our data centers, learn from it, provide feedback, and ask for enhancements, things that we think would make it better not only for us but for other customers. So one of the examples if I can talk to it. >> Scott: Please. 
>> One of the examples was early on when PowerMax was kind of going through its development cycle, there was talk about introducing data deduplication. And one of the things that we knew from experiences is that there are some workloads that may not do well with data dedup, and so we wanted some control over that versus some of the competitor arrays that just say everything's data dedup, good, bad, or indifferent, right? And we have some of that anecdotal knowledge. So that was a feature that the team listened to and introduced into the product. >> Yeah, yeah, I mean it was great because we were able to take the feedback and because we worked so closely with the engineering teams and because we really value the things that Mike brings to the table in terms of how he wants to adopt the technology and the things that he wants to support from a functionality perspective, we were able to basically build that into the product. So the technology that we literally announced earlier this morning, there are pieces of code that were specifically written into that system based on some of the comments that Mike had provided a year plus ago when we were in the initial phases of development. >> So being an early adopter and knowing that you were going to have this opportunity to collaborate and really establish this symbiotic relationship that allows you to test things, allows Dell EMC to get that information to make the product better, what is it that your company saw in Dell EMC to go, "Yeah, we're not afraid to send them back," or, "Let's try this together and be that leading edge"? >> I think honestly it came down to the very first meeting that we had. We had a relationship with some of the executives inside of EMC from other business relationships years ago, and we reached out and said, "Look, we want to have a conversation," and we literally put together a kind of a bullet-pointed list of here's how we want to conduct business and here's what we want to talk about. And they brought down some of their best and brightest within the engineering organization to have a open discussion with us. And really we're very open and honest with what we were trying to accomplish and how they could fit in, and then, again, we had that two-way dialogue back of, "Okay, well what about this," or, "What about that?" And so from day one it has been truly a two-way partnership. >> So Lisa's all about relationships and governance. I'm all about speeds and feeds. (Mike laughing) I'm a geek, and I want to hear some numbers, man. (Mike laughing) So you guys got the PowerMax. We had Caitlin Gordon on earlier. She's Product Marketing for the PowerMax, very, very proud of the product, but you're a customer that had it in your data center. Tell us the truth. (Mike laughing) How is, is it... Is it what you need to move forward? >> It is unbelievably fast in all honesty. So early on we brought it into our lab environment and we got it online and we stood it up, and so we were basically generating simulated workloads, right? And so you've got all of these basically host machines that are just clobbering it as fast as you can. We ran into a point where we just didn't have any more hardware to throw at it. The box just kept going, and it's like okay, well we're measuring 700,000 IOPS, it's not breaking a sweat. It's submillisecond (laughs) leads. It's like well, what else do we have? (laughs) And so it just became one of those things. Well, all right, let's start throwing snapshots at it and let's do this and let's do that. 
It truly is a remarkable box. And keep in mind we had the smallest configurable system you could get. We had the what is now, I guess, the PowerMax 2000, >> The 2000, yeah. yeah, in a very, very small baseline configuration. And it was just phenomenal in what it could do. >> So I would love to hear a little bit more about that. When we look at things such as the VMAX, incredible platform which had been positioned as a data center consolidator, but a lot of customers I saw using that as purpose-built for a mission critical set of applications, subset of applications in the data center. Sounds like the PowerMax, an example of the beta relationship you guys have, is a true platform that you can run an entire data center on and realistically get mission critical support out of a single platform. >> Absolutely, yeah, so even today in our production data center we have VMAX 450, VMAX 950s in today running. And we have everything from Oracle databases, SQL databases, Exchange, various workloads, a tremendous number of virtual iServers running on there, I mean hundreds and hundreds or actually probably several thousand. And it doesn't matter how we mix and match those. I have Exchange running on one array along with an Oracle database and several dozen SQL databases and hundreds of VMs all on one array and it's no problems whatsoever. There's no competition for I/O or any latency issues that are happening. It just works really well. >> And I think one of the other powerful use cases, if I could just talk to this, in your environment specifically there's some of the things you're doing around replication where you're doing multi-site replication, and on a regular basis you're doing failover, recovery, failback as part of the testing process. >> Mike: Absolutely. >> So it's not just running the I/O and getting the performance of the system, it's making sure that from a service-level perspective from the way the data's being protected being able to have the right recovery time objectives, recovery point objectives for all of the applications that you're running in your environment, to be able to have the infrastructure in place that could support that. >> Lisa: So I want to, oh. >> Go head. >> Sorry, thanks Keith. So I want to, I'm going to go ahead and go back up a little bit. >> Mike: Sure. >> One of the announcements that came out today from Dell Technologies was about modernizing the data center. You've just given us a great overview of what you're doing at the technical level. Where are you in developing a modern data center? Are you where you want to be? What's next steps for that? >> So I don't think we're ever where we want to be. There's always something else so we're always chasing things. But where we are today is that there's a lot of talk for the last several years around cloud, cloud this, cloud that. Everybody has a hardware, software, or service offering that's cloud-something. We look at cloud more as an operational model. And so we're looking at how can we streamline our internal business taking advantages of, say, RESTful APIs that are in PowerMax and basically automating end to end from a provisioning or a request perspective all the way through the provisioning all the way to final deployment and basically pulling the people out of that, the touchpoints, trying to streamline our operations, make them more efficient. It's been long said that we can't get more people in IT. It's just do more for less and that's not stopping. 
>> And if I could just make another plug for Mike, so I visited Mike in his data center it was about a year ago or something like that. And I've been in a lot of data centers and I've seen all kinds of organizations of all different size and scale and still today I talk about the lab tour that we went on because just the efficiency in how everything was racked, how everything was labeled, there was no empty boxes scattered around. Just the operational efficiency that you've built into the organization is, and it's part of the culture there. That's what gives Mike the ability to do the types of things that he's able to do with what's really a pretty limited staff of resources that support all of those different applications. So it's incredibly impressive not just in terms of what Mike has been able to do in terms of the technology piece but just kind of the people and the operational side of things. It's really, really impressive. I would call it a gold standard (Mike laughing) from an IT organization. >> And you're not biased about it. (Lisa and Mike laughing) >> Mental note, complete opposite of any data center I've ever met. (Lisa, Mike, and Scott laughing) Okay, so Mike, talk to us about this automation piece. We hear a lot about the first step to modernization is automation, but when I look at the traditional data center and I look at all the things that could be automated how do you guys prioritize where to go first? >> So we look at it from where are we spending our time, so it's really kind of simple of looking at what are your trouble tickets and what are your change control processes or trouble control tickets that are coming in and where are you spending the bulk of your time. And it's all about bang for the buck. So you want to do the things that you're going to get the biggest payback on first and then the low-hanging fruit, and then you go back and you tweak further and further from there. So from our perspective we did an analysis internally and we found that we spent a lot of time doing basic provisioning. We get a tremendous number of requests from our end users, from our app devs and from our DBAs. They're saying, "Hey, I need 10 new servers by Monday," and it's Friday afternoon, that sort of request. And so we spend the time jumping through hoops. It was like, well, why? We can do better than that. We should do better than that. >> So PowerMax built in modern times for the modern data center. Have you guys seen advantages for this modern platform for automation? Have you looked at it and been like, "Oh, you know what? "We love that Dell EMC took this angle "towards building this product "because they had the modern data center in mind"? >> So again I think it goes back to largely around REST APIs. So with PowerMax OS 5978 there's been further enhancements there. So pretty much anything that you could do before with SIM CLI or through the GUI has now been exposed to the REST API and everybody in the industry's kind of moving that way whether you're talking about a storage platform or a server platform, even some of the networking vendors. I had a meeting earlier today and they're moving that way as well. It's like whoa, have you seen what we're doing with REST? 
So from an infrastructure standpoint, from a plumbing perspective, that's really what we're looking at in tracking-- >> And if I can add to that I think one of the other sort of core enablers for that is just simply to move to an all flash-based system because in the world of spinning drives, mechanical systems, hybrid systems, an awful lot of administrative time is spent in kind of performance tuning. How do I shave off milliseconds of response time? How do I minimize those response time peaks during different parts of the day? And when you move to the all flash there's obviously a boost in terms of performance. But it's not just the performance, it's the predictability of that performance and not having to go in and figure out okay, what happened Tuesday night between four and six that caused this application to go from here to here? What do we have to do to go and run the analysis to figure all of that out? You don't see that type of behavior anymore. >> Yeah, it's that indirect operational savings. So before when flash drives kind of first got introduced to the market we had these great things like FaaS where you could go in and you could tune stuff and these algorithms that would watch those workloads and make their best guesses at what data to move when and where. Without flash, that's out the window. There's no more coming in on Monday and all of a sudden then something got tuned over the weekend down to a lower tier storage and it's too slow for the performance requirements Monday morning. That problem's gone. >> And when you look under the covers of the PowerMax we talked a lot today about some of the machine learning and the predictive analytics that are built into that system that help people like Mike to be able to consolidate hundreds, thousands of applications onto this single system. But now to have to go in and worry about how do I tune, how do I optimize not just based on a runtime of applications but real-time changes that are happening into those workloads and the system being able to automatically adjust and to be able to do the right thing to be able to maintain the level of performance that they require from that environment. >> Last question, Scott, we just have a few seconds left. Looking at oil and gas and what Mike and team have done in early adoption context, helping Dell EMC evolve this technology, what are some of the other industries that you see that can really benefit from this early adopter in-- >> I, what I would say is there are lots of industries out there that we work with and they all have sort of unique challenges and requirements for the types of things that they're trying to do to support their businesses. What I would say, the real thing is to be able to build the relationships and to have the trust so that when they're asking for something on our side we're understanding what that requirement and if there are things that we can do to help that we can have that conversation. But if there are things that we can't control or if there are things that are very, very specific to a small set of customers but require huge investments in terms of R&D and resources to do the development, we can have that honest conversation and say, "Hey Mike, it's a really good idea "and we understand how it helps you here, "but we're still a business. "We still have to make money." So we can do some things but we have to be realistic in terms of being able to balance helping Mike but still being able to run a business. 
>> Sure, and I wish we had more time to keep going, but thanks, guys, for stopping by, talking about how Dell EMC and Enterprise Products Company are collaborating and all of the anticipated benefits that will no doubt proliferate among industries. We want to thank you for watching theCUBE. I'm Lisa Martin with Keith Townsend. We're live, day two of Dell Technologies World in Vegas. Stick around, we'll be right back after a short break. (bright music)
SUMMARY :
brought to you by Dell EMC and its ecosystem partners. We are excited to welcome back to theCUBE Talk to us about what it is that you're doing So it's one of the interesting things that we do And Scott, talk to us about specifically oil and gas Yeah, so the things that Mike has been able to accomplish and the need to analyze that data. So one of the examples if I can talk to it. And one of the things that we knew from experiences the things that Mike brings to the table and then, again, we had that two-way dialogue back and I want to hear some numbers, man. and so we were basically And it was just phenomenal in what it could do. an example of the beta relationship you guys have, and hundreds of VMs all on one array and on a regular basis you're doing and getting the performance of the system, So I want to, I'm going to go ahead and go back up a little bit. One of the announcements that came out today and basically pulling the people out of that, and it's part of the culture there. (Lisa and Mike laughing) and I look at all the things that could be automated and we found that we spent a lot of time for the modern data center. and everybody in the industry's kind of moving that way and not having to go in and figure out kind of first got introduced to the market and the system being able to automatically adjust that you see that can really benefit and if there are things that we can do to help that are collaborating and all of the anticipated benefits
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Scott | PERSON | 0.99+ |
Mike Bollman | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Mike | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Scott Delandy | PERSON | 0.99+ |
Monday | DATE | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Monday morning | DATE | 0.99+ |
Tuesday night | DATE | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
today | DATE | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Friday afternoon | DATE | 0.99+ |
10 new servers | QUANTITY | 0.99+ |
Enterprise Products Company | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
PowerMax | ORGANIZATION | 0.99+ |
Caitlin Gordon | PERSON | 0.99+ |
first step | QUANTITY | 0.99+ |
two-way | QUANTITY | 0.99+ |
Vegas | LOCATION | 0.98+ |
VMAX 950s | COMMERCIAL_ITEM | 0.98+ |
2000 | COMMERCIAL_ITEM | 0.98+ |
Exchange | TITLE | 0.98+ |
Dell Technologies World 2018 | EVENT | 0.97+ |
four | QUANTITY | 0.97+ |
one array | QUANTITY | 0.97+ |
VMAX 450 | COMMERCIAL_ITEM | 0.97+ |
first | QUANTITY | 0.96+ |
single platform | QUANTITY | 0.96+ |
first meeting | QUANTITY | 0.95+ |
SQL | TITLE | 0.95+ |
700,000 IOPS | QUANTITY | 0.95+ |
PowerMax 2000 | COMMERCIAL_ITEM | 0.95+ |
a year plus ago | DATE | 0.95+ |
hundreds, thousands of applications | QUANTITY | 0.95+ |