Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives
>> Sue: Hello everybody. Thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives. My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Tom Wall, a member of the Vertica engineering team. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click submit. There will be a Q and A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions that we don't get to, we'll do our best to answer them offline. Alternatively, you can visit the Vertica forums to post your questions after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand later this week. We'll send you a notification as soon as it's ready. So let's get started. Tom, over to you. >> Tom: Hello everyone and thanks for joining us today for this talk. My name is Tom Wall and I am the leader of Vertica's ecosystem engineering team. We are the team that focuses on building out all the developer tools and third party integrations that enable the software ecosystem that surrounds Vertica to thrive. So today, we'll be talking about some of our new open source initiatives and how those can be really effective for you and make things easier for you to build and integrate Vertica with the rest of your technology stack. We've got several new libraries, integration projects and examples, all open source, to share, all being built out in the open on our GitHub page. Whether you use these open source projects or not, this is a very exciting new effort that will really help to grow the developer community and enable lots of exciting new use cases. So, every developer out there has probably had to deal with a problem like this. You have some business requirements, to maybe build some new Vertica-powered application. Maybe you have to build some new system to visualize some data that's managed by Vertica. In various circumstances, lots of choices might be made for you that constrain your approach to solving a particular problem. These requirements can come from all different places. Maybe your solution has to work with a specific visualization tool, or web framework, because the business has already invested in the licensing and the tooling to use it. Maybe it has to be implemented in a specific programming language, since that's what all the developers on the team know how to write code with. While Vertica has many different integrations with lots of different programming languages and systems, there's a lot of them out there, and we don't have integrations for all of them. So how do you make ends meet when you don't have all the tools you need? Well, you have to get creative, using tools like PyODBC, for example, to bridge between programming languages and frameworks to solve the problems you need to solve. Most languages do have an ODBC-based database interface. ODBC is a C library, and most programming languages know how to call C code, somehow.
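As a rough illustration of that ODBC bridging approach, a Python script using PyODBC might look something like the sketch below. The DSN name and credentials are placeholders, and it assumes the Vertica ODBC driver and driver manager are already installed and registered on the machine:

```python
# A minimal sketch of bridging to Vertica through ODBC with pyodbc.
# "VerticaDSN" and the credentials are hypothetical; the Vertica ODBC driver
# and the DSN must already be configured in odbc.ini / odbcinst.ini.
import pyodbc

conn = pyodbc.connect("DSN=VerticaDSN;UID=dbadmin;PWD=secret", autocommit=True)
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone()[0])
conn.close()
```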
So that's doable, but it often requires lots of configuration and troubleshooting to make all those moving parts work well together. So that's enough to get the job done, but native integrations are usually a lot smoother and easier. So rather than, for example, in Python trying to fight with PyODBC, to configure things and get Unicode working, and to compile all the different pieces the right way to make it all work smoothly, it would be much better if you could just pip install a library and get to work. And with Vertica-Python, a new Python client library, you can actually do that. That story, I assume, probably sounds pretty familiar to a lot of the audience here, because we're all using Vertica. And our challenge, as Big Data practitioners, is to make sense of all this stuff, despite those technical and non-technical hurdles. Vertica powers lots of different businesses and use cases across all kinds of different industries and verticals. While there's a lot different about us, we're all here together right now for this talk because we do have some things in common. We're all using Vertica, and we're probably also using Vertica with other systems and tools too, because it's important to use the right tool for the right job. That's a founding principle of Vertica and it's true today too. In this constantly changing technology landscape, we need lots of good tools and well established patterns, approaches, and advice on how to combine them so that we can be successful doing our jobs. Luckily for us, Vertica has been designed to be easy to build with and extended in this fashion. Databases as a whole have had this goal from the very beginning. They solve the hard problems of managing data so that you don't have to worry about it. Instead of worrying about those hard problems, you can focus on what matters most to you and your domain: implementing that business logic, solving that problem, without having to worry about all of those intense, low-level details of what it takes to manage a database at scale. With the declarative syntax of SQL, you tell Vertica what the answer is that you want. You don't tell Vertica how to get it. Vertica will figure out the right way to do it for you so that you don't have to worry about it. So this SQL abstraction is very nice because it's a well defined boundary where lots of developers know SQL, and it allows you to express what you need without having to worry about those details. So we can be the experts in data management while you worry about your problems. This goes beyond, though, what's accessible through SQL to Vertica. We've got well defined extension and integration points across the product that allow you to customize this experience even further. So if you want to do things like write your own SQL functions, or extend the database software with UDXs, you can do so. If you have a custom data format that might be a proprietary format, or some source system that Vertica doesn't natively support, we have extension points that allow you to use those. These make it very easy to do massive, parallel data movement, loading into Vertica but also exporting from Vertica to send data to other systems. And with newer features, over time, we can also do the same kinds of things with Machine Learning models, importing and exporting to tools like TensorFlow.
And it's these integration points that have enabled Vertica to build out this open architecture and a rich ecosystem of tools, both open source and closed source, of different varieties that solve all the different problems that are common in this big data processing world. Whether it's open source streaming systems like Kafka or Spark, or more traditional ETL tools on the loading side, but also BI tools and visualizers and things like that to view and use the data that you keep in your database on the right side. And then of course, Vertica needs to be flexible enough to be able to run anywhere. So you can really take Vertica and use it the way you want it to solve the problems that you need to solve. So Vertica has always employed open standards, and integrated with all kinds of different open source systems. What we're really excited to talk about now is that we are taking our new integration projects and making those open source too. In particular, we've got two new open source client libraries that allow you to build Vertica applications for Python and Go. These libraries act as a foundation for all kinds of interesting applications and tools. Upon those libraries, we've also built some integrations ourselves. And we're using these new libraries to power some new integrations with some third party products. Finally, we've got lots of new examples and reference implementations out on our GitHub page that can show you how to combine all these moving parts in exciting ways to solve new problems. And the code for all these things is available now on our GitHub page. And so you can use it however you like, and even help us make it better too. So the first such project that we have is called Vertica-Python. Vertica-Python began at our customer, Uber. And then in late 2018, we collaborated with them and we took it over and made Vertica-Python the first official open source client for Vertica. You can use this to build your own Python applications, or you can use it via tools that were written in Python. Python has grown a lot in recent years and it's a very common language to solve lots of different problems and use cases in the Big Data space, from things like DevOps automation and Data Science or Machine Learning, to just homegrown applications. We use Python a lot internally for our own QA testing and automation needs. And with the Python 2 End Of Life that happened at the end of 2019, it was important that we had a robust Python solution to help migrate our internal stuff off of Python 2, and also to provide a nice migration path for all of you, our users, who might be worried about the same problems with your own Python code. So Vertica-Python is used already for lots of different tools, including Vertica's admintools, now starting with 9.3.1. It was also used by DataDog to build a Vertica-DataDog integration that allows you to monitor your Vertica infrastructure within DataDog. So here's a little example of how you might use the Python Client to do some work. So here we open a connection, we run a query to find out what node we've connected to, and then we do a little data load by running a COPY statement. And this is designed to have a familiar look and feel if you've ever used a Python Database Client before. So we implement the DB API 2.0 standard and it feels like a Python package. So that includes things like, it's part of the centralized package manager, so you can just pip install this right now and go start using it.
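To make that slide example concrete, a script along those lines might look like the sketch below. The host, credentials, table, and file names are placeholders rather than the slide's actual code, but the calls shown (connect, cursor, execute, copy) follow vertica-python's DB API 2.0 surface:

```python
# A minimal sketch of the kind of example described on the slide.
# Connection details, table, and file names are made up for illustration.
import vertica_python

conn_info = {
    'host': 'vertica.example.com',   # hypothetical host
    'port': 5433,
    'user': 'dbadmin',
    'password': '...',
    'database': 'VMart',
}

with vertica_python.connect(**conn_info) as connection:
    cur = connection.cursor()

    # Which node did we land on?
    cur.execute("SELECT node_name FROM current_session")
    print(cur.fetchone()[0])

    # A small data load via a COPY statement, streamed from the client.
    with open('rows.csv', 'rb') as fs:
        cur.copy("COPY public.my_table FROM STDIN DELIMITER ','", fs)
    connection.commit()
```

Because it implements DB API 2.0, the open/execute/fetch pattern here should feel the same as any other Python database client you may have used.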
We also have our client for Golang. So this is called vertica-sql-go. And this is a very similar story, just in a different context, for a different programming language. So vertica-sql-go began as a collaboration with the Micro Focus SecOps group, who builds Micro Focus' security products, some of which use Vertica internally to provide some of those analytics. So you can use this to build your own apps in the Go programming language, but you can also use it via tools that are written in Go. So most notably, we have our Grafana integration, which we'll talk a little bit more about later, that leverages this new client to provide Grafana visualizations for Vertica data. And Go is another programming language rising in popularity, 'cause it offers an interesting balance of different programming design trade-offs. So it's got good performance, good concurrency, and memory safety. And we liked all those things and we're using it to power some internal monitoring stuff of our own. And here's an example of the code you can write with this client. So this is Go code that does a similar thing. It opens a connection, it runs a little test query, and then it iterates over those rows, processing them using Go data types. You get that native look and feel just like you do in Python, except this time in the Go language. And you can go get it the way you usually package things with Go, by running that command there to acquire this package. And it's important to note here that for these projects, we're really doing open source development. We're not just putting code out on our GitHub page. So if you go out there and look, you can see that you can ask questions, you can report bugs, you can submit pull requests yourselves, and you can collaborate directly with our engineering team and the other Vertica users out on our GitHub page. Because it's out on our GitHub page, it allows us to be a little bit faster with the way we ship and deliver functionality compared to the core Vertica release cycle. So in 2019, for example, as we were building features to prepare for the Python 3 migration, we shipped 11 different releases with 40 customer reported issues filed on GitHub. That was done over 78 different pull requests and with lots of community engagement as we did so. So lots of people are using this already, as our GitHub badge shows, with about 5,000 downloads a day of people using it in their software. And again, we want to make this easy, not just to use but also to contribute to and understand and collaborate with us on. So all these projects are built using the Apache 2.0 license. The master branch is always available and stable with the latest functionality. And you can always build it and test it the way we do, so that it's easy for you to understand how it works and to submit contributions or bug fixes or even features. We use automated testing, both locally and with pull requests. And for vertica-python, it's fully automated with Travis CI. So we're really excited about doing this and we're really excited about where it can go in the future, 'cause this offers some exciting opportunities for us to collaborate with you more directly than we have ever before. You can contribute improvements and help us guide the direction of these projects, but you can also work with each other to share knowledge and implementation details and various best practices. And so maybe you think, "Well, I don't use Python, I don't use Go, so maybe it doesn't matter to me." But I would argue it really does matter.
Because even if you don't use these tools and languages, there's lots of amazing Vertica developers out there who do. And these clients do act as low level building blocks for all kinds of different interesting tools, both in these Python and Go worlds, but also well beyond that, because these implementations and examples really generalize to lots of different use cases. And we're going to do a deeper dive now into some of these to understand exactly how that's the case and what you can do with these things. So let's take a deeper look at some of the details of what it takes to build one of these open source client libraries. So these database client interfaces, what are they exactly? Well, we all know SQL, but if you look at what SQL specifies, it really only talks about how to manipulate the data within the database. So once you're connected and in, you can run commands with SQL. But these database client interfaces address the rest of those needs. So what does the programmer need to do to actually process those SQL queries? These interfaces are specific to a particular language or a technology stack, but the use cases and the architectures and design patterns are largely the same between different languages. They all have a need to do some networking and connect and authenticate and create a session. They all need to be able to run queries and load some data and deal with problems and errors. And then they also have a lot of metadata and type mapping, because you want to use these clients the way you use those programming languages, which might be different than the way that Vertica's data types and Vertica's semantics work. So some of these client interfaces are truly standards. And they are robust enough in terms of what they design and call for to support a truly pluggable driver model, where you might write an application that codes directly against the standard interface, and you can then plug in a different database driver, like a JDBC driver, to have that application work with any database that has a JDBC driver. Most of these interfaces aren't as robust as JDBC or ODBC, but that's okay. 'Cause as good as a standard is, every database is unique for a reason, and so you can't really expose all of those unique properties of a database through these standard interfaces. So Vertica's unique in that it can scale to the petabytes and beyond, and you can run it anywhere in any environment, whether it's on-prem or on clouds. So surely there's something about Vertica that's unique, and we want to be able to take advantage of that fact in our solutions. So even though these standards might not cover everything, there's often a need, and common patterns arise to solve these problems in similar ways. When there isn't enough of a standard to define those common semantics that different databases might have in common, what you often see is that tools will invent plug-in layers or glue code to compensate, by defining an application-wide standard to cover some of these same semantics. Later on, we'll get into some of those details and show off what exactly that means.
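As a small illustration of that pluggable-interface idea, here is a toy Python function written only against the DB API 2.0 standard. The table names and connection details are hypothetical; the point is just that any conforming driver module can be swapped in:

```python
# A toy function coded against the DB API 2.0 standard rather than any one
# driver. Fine for illustration; don't build table names from untrusted input.
def row_count(db_module, connect_args, table):
    conn = db_module.connect(**connect_args)
    try:
        cur = conn.cursor()
        cur.execute("SELECT COUNT(*) FROM " + table)
        return cur.fetchone()[0]
    finally:
        conn.close()

# The same function works with sqlite3 from the standard library...
import sqlite3
print(row_count(sqlite3, {'database': ':memory:'}, 'sqlite_master'))

# ...or, hypothetically, with vertica_python by swapping in that module:
# import vertica_python
# print(row_count(vertica_python, conn_info, 'public.my_table'))
```

The plug-in layers and glue code mentioned above exist precisely because real tools need more than this lowest common denominator.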
So if you connect to a Vertica database, what's actually happening under the covers? You have an application, you have a need to run some queries, so what does that actually look like? Well, probably as you would imagine, your application is going to invoke some API calls in some client library or tool. This library takes those API calls and implements them, usually by issuing some networking protocol operations, communicating over the network to ask Vertica to do the heavy lifting required for that particular API call. And so these APIs usually do the same kinds of things, although some of the details might differ between these different interfaces. But you do things like establish a connection, run a query, iterate over your rows, manage your transactions, that sort of thing. Here's an example from vertica-python, which just goes into some of the details of what actually happens during the Connect API call. And you can see all these details in our GitHub implementation of this. There's actually a lot of moving parts in what happens during a connection. So let's walk through some of that and see what actually goes on. I might have my API call like this, where I say Connect and I give it a DNS name, which is my entire cluster. And I give you my connection details, my username and password. And I tell the Python Client to get me a session, give me a connection so I can start doing some work. Well, in order to implement this, what needs to happen? First, we need to do some TCP networking to establish our connection. So we need to understand what the request is, where you're going to connect to and why, by parsing the connection string. And Vertica being a distributed system, we want to provide high availability, so we might need to do some DNS look-ups to resolve that DNS name, which might be an entire cluster and not just a single machine, so that you don't have to change your connection string every time you add or remove nodes in the database. So we do some high availability and DNS lookup stuff. And then once we connect, we might do Load Balancing too, to balance the connections across the different initiator nodes in the cluster, or in a sub cluster, as needed. Once we land on the node we want to be at, we might do some TLS to secure our connections. And Vertica supports the industry standard TLS protocols, so this looks pretty familiar for everyone who's used TLS anywhere before. So you're going to do a certificate exchange, and the client might send the server a certificate too, and then you're going to verify that the server is who it says it is, so that you can know that you trust it. Once you've established that connection and secured it, then you can start actually beginning to request a session within Vertica. So you're going to send over your user information like, "Here's my username, here's the database I want to connect to." You might send some information about your application, like a session label, so that you can differentiate, in the database's monitoring queries, what the different connections are and what their purpose is. And then you might also send over some session settings to do things like auto commit, to change the state of your session for the duration of this connection, so that you don't have to remember to do that with every query that you have. Once you've asked Vertica for a session, before Vertica will give you one, it has to authenticate you. And Vertica has lots of different authentication mechanisms. So there's a negotiation that happens there to decide how to authenticate you. Vertica decides based on who you are and where you're coming from on the network. And then you'll do an auth-specific exchange, depending on what the auth mechanism calls for, until you are authenticated. Finally, Vertica trusts you and lets you in, so you're going to establish a session in Vertica, and you might do some bookkeeping on the client side just to know what happened. So you might log some information, you might record what the version of the database is, you might do some protocol feature negotiation. So if you connect to a version of the database that doesn't support all these protocols, you might decide to turn some functionality off, and that sort of thing. But finally, after all that, you can return from this API call and then your connection is good to go.
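In vertica-python, several of those steps, load balancing, backup nodes, TLS, session labels, and autocommit, surface as options you pass to connect(). The sketch below is illustrative: the option names reflect the project's documented settings, but check the README for your version, and the hosts and paths are placeholders:

```python
# A sketch of how parts of the connection walkthrough map to connect() options.
import ssl
import vertica_python

conn_info = {
    'host': 'vertica-cluster.example.com',   # may resolve to many nodes
    'port': 5433,
    'user': 'dbadmin',
    'password': '...',
    'database': 'VMart',

    # High availability and load balancing
    'connection_load_balance': True,          # let the server pick an initiator
    'backup_server_node': ['node02.example.com', 'node03.example.com'],

    # TLS: pass an ssl.SSLContext (or True) to secure and verify the server
    'ssl': ssl.create_default_context(cafile='/path/to/ca.pem'),

    # Session behavior
    'session_label': 'bdc-demo-app',          # shows up in monitoring views
    'autocommit': True,
}

with vertica_python.connect(**conn_info) as connection:
    pass  # DNS lookups, TLS, and authentication all happened inside connect()
```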
So that connection is just one example of many different APIs. And we're excited here because with vertica-python we're really opening up the Vertica client wire protocol for the first time. And so if you're a low level Vertica developer and you've used Postgres before, you might know that some of Vertica's client protocol is derived from Postgres. But they do differ in many significant ways, and this is the first time we've ever revealed those details about how it works and why. So not all Postgres protocol features work with Vertica, because Vertica doesn't support all the features that Postgres does. Postgres, for example, has a large object interface that allows you to stream very wide data values over, whereas Vertica doesn't really have very wide data values; you have long varchars, but that's about as wide as you can get. Similarly, the Vertica protocol supports lots of features not present in Postgres. So Load Balancing, for example, which we just went through an example of: Postgres is a single node system, so it doesn't really make sense for Postgres to have Load Balancing. But Load Balancing is really important for Vertica because it is a distributed system. Vertica-python serves as an open reference implementation of this protocol, with all kinds of new details and extension points that we haven't revealed before. So if you look at these boxes below, all these different things are new protocol features that we've implemented since August 2019, out in the open on our GitHub page for Python. Now, the vertica-sql-go implementation of these things is still in progress, but the core protocols are there for basic query operations. There's more to do there, but we'll get there soon. So this is really cool, 'cause not only do you now have a Python client implementation and a Go client implementation of this, but you can use this protocol reference to do lots of other things, too. The obvious thing you could do is build more clients for other languages. So if you have a need for a client in some other language that Vertica doesn't support yet, now you have everything available to solve that problem and to go about doing so if you need to. But beyond clients, it's also useful for other things. So you might use it for mocking and testing things. So rather than connecting to a real Vertica database, you can simulate some of that. You can also use it to do things like query routing and proxies. So Uber, for example, the blog linked here tells a great story of how they route different queries to different Vertica clusters by intercepting these protocol messages, parsing the queries in them and deciding which clusters to send them to. So a lot of these things are just ideas today, but now that you have the source code, there's no limit in sight to what you can do with this thing. And so we're very interested in hearing your ideas and requests, and we're happy to offer advice and collaborate on building some of these things together.
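To give a flavor of that query-routing idea, here is a toy sketch. It assumes the Postgres-style simple-query framing (a 'Q' type byte, a four-byte length, then a NUL-terminated SQL string) that Vertica's protocol inherits, and it ignores startup, authentication, and every other message type a real proxy would have to handle; the table and cluster names are made up:

```python
# Toy routing decision for a single captured frontend message. A real proxy,
# like the one described in Uber's write-up, handles the full protocol flow.
import struct

def route_query(message: bytes) -> str:
    """Pick a destination cluster for one wire-protocol message."""
    if message[0:1] != b'Q':                  # not a simple query; default route
        return 'primary-cluster'
    (length,) = struct.unpack('!I', message[1:5])
    sql = message[5:1 + length].rstrip(b'\x00').decode('utf-8')

    # Hypothetical policy: heavy analytics go to a reporting sub cluster.
    if 'fact_flight_data' in sql.lower():
        return 'reporting-cluster'
    return 'primary-cluster'

# Build a fake 'Q' message and route it.
sql = "SELECT COUNT(*) FROM fact_flight_data"
payload = sql.encode('utf-8') + b'\x00'
message = b'Q' + struct.pack('!I', 4 + len(payload)) + payload
assert route_query(message) == 'reporting-cluster'
```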
So let's take a look now at some of the things we've already built that do these things. So here's a picture of Vertica's Grafana connector, with some data powered from an example that we have in this blog link here. So this has an internet of things use case to it, where we have lots of different sensors recording flight data, feeding into Kafka, which then gets loaded into Vertica. And then finally, it gets visualized nicely here with Grafana. And Grafana's visualizations make it really easy to analyze the data with your eyes and see when something happens. So in these highlighted sections here, you notice a drop in some of the activity; that's probably a problem worth looking into. It might be a lot harder to see that just by staring at a large table yourself. So how does a picture like that get generated with a tool like Grafana? Well, Grafana specializes in visualizing time series data. And time can be really tricky for computers to do correctly. You've got time zones, daylight savings, leap seconds, negative infinity timestamps, please don't ever use those. As if those problems weren't hard enough, what makes it harder is that every system handles time slightly differently. So if you're querying some time data, how do we deal with these semantic differences as we cross these domain boundaries, from Vertica, to Grafana's back end, which is implemented in Go, to its front end, which is implemented with JavaScript? Well, you read this from the bottom up in terms of the processing. First, you select the timestamp, and Vertica's timestamp has to be converted to a Go time object. And we have to reconcile the differences that there might be as we translate it. So Go time has a different time zone specifier format, and it also supports nanosecond precision, while Vertica only supports microsecond precision. So that's not too big of a deal when you're querying data, because you just see some extra zeros in the fractional seconds. But on the way in, if we're loading data, we have to find a way to resolve those things. Once it's in the Go process, it has to be converted further to render in the JavaScript UI. So there, the Go time object has to be converted to a JavaScript AngularJS Date object. And there too, we have to reconcile those differences. So a lot of these differences might just be presentation, and not so much the actual data changing, but you might want to choose to render the date into a more human readable format, like we've done in this example here. Here's another picture. This is another picture of some time series data, and this one shows that you can actually write your own queries with Grafana to provide answers. So if you look closely here, you can see there are actually some functions that might not look too familiar to you if you know Vertica's functions. Vertica doesn't have a $__time function or a $__timeFilter function. So what's actually happening there? How does this actually provide an answer if it's not real Vertica syntax? Well, it's not sufficient to just know how to manipulate data; it's also really important that you know how to operate with metadata, so information about how the data works in the data source, Vertica in this case.
So Grafana needs to know how time works in detail for each data source, beyond doing that basic I/O that we just saw in the previous example. So it needs to know: how do you connect to the data source to get some time data? How do you know what time data types and functions there are and how they behave? How do you generate a query that references a time literal? And finally, once you've figured out how to do all that, how do you find the time in the database? How do you know which tables have time columns that might be worth rendering in this kind of UI? So Go's database standard doesn't actually really offer many metadata interfaces. Nevertheless, Grafana needs to know those answers. And so it has its own plugin layer that provides a standardizing layer, whereby every data source can implement the hints and metadata customization needed to have an extensible data source back end. So we have another open source project, the Vertica-Grafana data source, which is a plugin that uses Grafana's extension points, with JavaScript in the front end plugin and Go in the back end plugin, to provide Vertica connectivity inside Grafana. So the way this works is that the plugin framework defines those standardized functions, like $__time and $__timeFilter, and it's our plugin that's going to rewrite them in terms of Vertica syntax. So in this example, $__time gets rewritten to a Vertica cast, and $__timeFilter becomes a BETWEEN predicate.
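The real plugin does that rewriting in its Go back end; purely to illustrate the idea, a simplified version of the macro expansion could look like this in Python. The exact SQL the Vertica-Grafana plugin emits may differ, and the table and column names are made up:

```python
# Illustrative only: expand Grafana-style macros into plain Vertica SQL.
import re

def expand_macros(sql: str, time_from: str, time_to: str) -> str:
    # $__time(col) -> a cast aliased as "time"; the plugin's exact cast may differ.
    sql = re.sub(r"\$__time\((\w+)\)", r"CAST(\1 AS TIMESTAMP) AS time", sql)
    # $__timeFilter(col) -> a BETWEEN predicate over the dashboard's time range.
    sql = re.sub(r"\$__timeFilter\((\w+)\)",
                 rf"\1 BETWEEN '{time_from}' AND '{time_to}'", sql)
    return sql

query = ("SELECT $__time(ts), AVG(altitude) FROM flight_metrics "
         "WHERE $__timeFilter(ts) GROUP BY 1")
print(expand_macros(query, '2020-03-30 00:00:00', '2020-03-31 00:00:00'))
```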
So that's one example of how you can use Grafana, but also of how you might build any arbitrary visualization tool that works with data in Vertica. So let's now look at some other examples and reference architectures that we have out on our GitHub page. For some advanced integrations, there's clearly a need to go beyond these standards. So SQL and the surrounding standards, like JDBC and ODBC, were really critical in the early days of Vertica, because they really enabled a lot of generic database tools. And those will always continue to play a really important role, but the Big Data technology space moves a lot faster than these old database standards can keep up with. So there's all kinds of new advanced analytics and query pushdown logic that were never possible 10 or 20 years ago that Vertica can do natively. There's also all kinds of data-oriented application workflows doing things like streaming data, or Parallel Loading, or Machine Learning. And all of these things we need to build software with, but we don't really have standards to go by. So what do we do there? Well, open source implementations make for easier integrations and applications all over the place. So even if you're not using Grafana, for example, other tools have similar challenges that you need to overcome, and it helps to have an example there to show you how to do it. Take Machine Learning, for example. There have been many excellent Machine Learning tools that have arisen over the years to make data science and the task of Machine Learning a lot easier. And a lot of those have basic database connectivity, but they generally only treat the database as a source of data. So they do lots of data I/O to extract data from a database like Vertica for processing in some other engine. We all know that's not the most efficient way to do it. It's much better if you can leverage Vertica's scale and bring the processing to the data. So a lot of these tools don't take full advantage of Vertica, because there's not really a uniform way to go do so with these standards. So instead, we have a project called vertica-ml-python, and this serves as a reference architecture of how you can do scalable machine learning with Vertica. So this project establishes a familiar machine learning workflow that scales with Vertica. It feels similar to a scikit-learn project, except all the processing and aggregation and heavy lifting and data processing happens in Vertica. So this makes for a much more lightweight, scalable approach than you might otherwise be used to. So you can probably use vertica-ml-python yourself, but you can also see how it works. So if it doesn't meet all your needs, you can still see the code and customize it to build your own approach. We've also got lots of examples of our UDX framework. And so this is an older GitHub project. We've actually had this for a couple of years, but it is really useful and important, so I wanted to plug it here. Our User Defined eXtensions framework, or UDXs, allows you to extend the operators that Vertica executes when it does a database load or a database query. So with UDXs, you can write your own domain logic in C++, Java, Python, or R, and you can call it within the context of a SQL query. And Vertica brings your logic to that data, and makes it fast and scalable and fault tolerant and correct for you, so you don't have to worry about all those hard problems. So our UDX examples demonstrate how you can use our SDK to solve interesting problems. And some of these examples are complete, totally usable packages or libraries. So for example, we have a curl source that allows you to extract data from any curlable endpoint and load it into Vertica. We've got things like an ODBC connector that allows you to access data in an external database via an ODBC driver within the context of a Vertica query, and all kinds of parsers and string processors and things like that. We also have more exciting and interesting things, where you might not really think of Vertica being able to do that, like a heat map generator, which takes some XY coordinates and renders them on top of an image to show you the hotspots in it. So the image on the right was actually generated from one of our intern gaming sessions a few years back. So all these things are great examples that show you not just how you can solve problems, but also how you can use this SDK to solve neat things that maybe no one else has to solve, or maybe that are unique to your business and your needs. Another exciting benefit is with testing. So the test automation strategy that we have in vertica-python and these clients really generalizes well beyond the needs of a database client. Anyone that's ever built a Vertica integration or an application probably has a need to write some integration tests, and that can be hard to do with all the moving parts in a big data solution. But with our code being open source, you can see in vertica-python, in particular, how we've structured our tests to facilitate smooth testing that's fast, deterministic and easy to use. So we've automated the download process and the installation and deployment process of a Vertica Community Edition. And with a single click, you can run through the tests locally and as part of the PR workflow via Travis CI. We also do this for multiple different Python environments. So for all Python versions from 2.7 up to 3.8, for different Python interpreters, and for different Linux distros, we're running through all of them very quickly with ease, thanks to all this automation. So today, you can see how we do it in vertica-python; in the future, we might want to spin that out into its own stand-alone testbed starter project, so that if you're starting any new Vertica integration, this might be a good starting point for you to get going quickly.
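As a hedged sketch of the kind of integration test that automation enables, a pytest case against a local Vertica Community Edition might look like the following; the connection details are assumptions, and this is not code from the project's actual test suite:

```python
# A sketch of a small integration test against a local Vertica CE instance.
# Host, credentials, and database name are placeholders for your environment.
import pytest
import vertica_python

CONN_INFO = {'host': 'localhost', 'port': 5433, 'user': 'dbadmin',
             'password': '', 'database': 'VMart'}

@pytest.fixture
def cursor():
    with vertica_python.connect(**CONN_INFO) as conn:
        yield conn.cursor()

def test_roundtrip(cursor):
    cursor.execute(
        "CREATE LOCAL TEMP TABLE t (i INT, s VARCHAR(32)) ON COMMIT PRESERVE ROWS"
    )
    cursor.execute("INSERT INTO t VALUES (1, 'hello')")
    cursor.execute("SELECT i, s FROM t")
    row = cursor.fetchone()
    assert row[0] == 1 and row[1] == 'hello'
```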
So that brings us to some of the future work we want to do here in the open source space. Well, there's a lot of it. So in terms of the client stuff, for Python, we are marching towards our 1.0 release, which is when we aim to be protocol complete, to support all of Vertica's unique protocols, including COPY LOCAL and some new protocols invented to support complex types, which is a new feature in Vertica 10. We have some cursor enhancements to do things like better streaming and improved performance. Beyond that, we want to take it where you want to bring it, so send us your requests. On the Go client front, it's just about a year behind Python in terms of its protocol implementation, but the basic operations are there. We still have more work to do to implement things like load balancing, some of the advanced auths, and other things. But there too, we want to work with you and we want to focus on what's important to you, so that we can continue to grow and be more useful and more powerful over time. Finally, there's this question of, "Well, what about beyond database clients? What else might we want to do with open source?" If you're building a very deep or robust Vertica integration, you probably need to do a lot more exciting things than just run SQL queries and process the answers. Especially if you're an OEM or you're a vendor that resells Vertica packaged as a black box piece of a larger solution, you might have to manage the whole operational lifecycle of Vertica. There are even fewer standards for doing all these different things compared to the SQL clients. So we started with the SQL clients 'cause that's a well established pattern and there's lots of downstream work that it can enable. But there's also clearly a need for lots of other open source protocols, architectures and examples to show you how to do these things and to have real standards. So we talked a little bit about how you could do UDXs or testing or Machine Learning, but there's all sorts of other use cases too. That's why we're excited to announce here our awesome-vertica list, which is a new collection of open source resources available on our GitHub page. So if you haven't heard of this awesome manifesto before, I highly recommend you check out the GitHub page on the right. We're not unique here; there's lots of awesome projects for all kinds of different tools and systems out there. And it's a great way to establish a community and share different resources, whether they're open source projects, blogs, examples, references, community resources, and all that. And this list is itself an open source project, an open source wiki, and you can contribute to it by submitting a PR yourself. So we've seeded it with some of our favorite tools and projects out there, but there's plenty more out there and we hope to see it grow over time. So definitely check this out and help us make it better. So with that, I'm going to wrap up. I wanted to thank you all. Special thanks to Siting Ren and Roger Huebner, who are the project leads for the Python and Go clients respectively. And also, thanks to all the customers out there who've already been contributing stuff.
This has already been going on for a long time and we hope to keep it going and keep it growing with your help. So if you want to talk to us, you can find us at this email address here. But of course, you can also find us on the Vertica forums, or you could talk to us on GitHub too. And there you can find links to all the different projects I talked about today. And so with that, I think we're going to wrap up and now we're going to hand it off for some Q&A.
Dominic Preuss, Google | Google Cloud Next 2019
>> Announcer: Live from San Francisco, it's theCUBE. Covering Google Cloud Next '19. Brought to you by Google Cloud and its ecosystem partners. >> Welcome back to the Moscone Center in San Francisco everybody. This is theCUBE, the leader in live tech coverage. This is day two of our coverage of Google Cloud Next #GoogleNext19. I'm here with my co-host Stuart Miniman and I'm Dave Vellante, John Furrier is also here. Dominic Preuss is here, he's the Director of Product Management, Storage and Databases at Google. Dominic, good to see you. Thanks for coming on. >> Great, thanks to be here. >> Gosh, 15, 20 years ago there were like three databases and now there's like, I feel like there's 300. It's exploding, all this innovation. You guys made some announcements yesterday, which we're gonna get into, but let's start with, I mean, data, we were just talking at the open, is the critical part of any IT transformation, business value, it's at the heart of it. Your job is at the heart of it and it's important to Google. >> Yes. Yeah, you know, Google has a long history of building businesses based on data. We understand the importance of it, we understand how critical it is. And so, really, that ethos is carried over into Google Cloud Platform. We think about it very much as a data platform and we have a very strong responsibility to our customers to make sure that we provide the most secure, the most reliable, the most available data platform for their data. And it's a key part of any decision when a customer chooses a hyper cloud vendor. >> So summarize your strategy. You guys had some announcements yesterday really embracing open source. There's certainly been a lot of discussion in the software industry about other cloud service providers who were sort of bogarting open source and not giving back, et cetera, et cetera, et cetera. How would you characterize Google's strategy with regard to open source, data storage, data management, and how do you differentiate from other cloud service providers? >> Yeah, Google has always been the open cloud. We have a long history in our commitment to open source. Whether it be Kubernetes, TensorFlow, Angular, or Golang, pick any one of these that we've been contributing heavily back to open source. Google's entire history is built on the success of open source. So we believe very strongly that it's an important part of the success. We also believe that we can take a different approach to open source. We're in a very pivotal point in the open source industry, as these companies are understanding and deciding how to monetize in a hyper cloud world. So we think we can take a fundamentally different approach and be very collaborative and support the open source community without taking advantage or not giving back. >> So, somebody might say, okay, but Google's got its own operational databases, you got analytic databases, relational, non-relational. I guess Google Spanner kind of fits in between those. It was an amazing product. I remember when that first came out, it was making my eyes bleed reading the white paper on it, but awesome tech. You certainly own a lot of your own database technology and do a lot of innovation there. So, square that circle with regard to partnerships with open source vendors. >> Yeah, I think you alluded to it a little bit earlier, there are hundreds of database technologies out there today. And there's really been a proliferation of new technology, specifically databases, for very specific use cases.
Whether it be graph or time series, all these other things. As a hyper cloud vendor, we're gonna try to do the most common things that people need. We're gonna do managed MySQL, PostgreSQL, and SQL Server. But for other databases that people wanna run, we want to make sure that those solutions are first class opportunities on the platform. So we've engaged with seven of the top and leading open source companies to make sure that they can provide a managed service on Google Cloud Platform that is first class. What that means is that as a GCP customer I can choose a Google offered service or a third-party offered service and I'm gonna have the same, seamless, frictionless, integrated experience. So I'm gonna get unified billing, I'm gonna get one bill at the end of the day. I'm gonna have unified support, I'm gonna reach out to Google support and they're going to figure out what the problem is, without blaming the third-party or saying that isn't our problem. We take ownership of the issue and we'll go and figure out what's happening to make sure you get an answer. Then thirdly, a unified experience, so that the GCP customer can manage that experience inside a cloud console, just like they would their Google offered services. >> A fully-managed database as a service essentially. >> Yes, so of the seven vendors, a number of them are databases. But there's also Kafka, to offer managed Kafka, or any other solutions that are out there as well. >> All right, so we could spend the whole time talking about databases. I wanna spend a couple minutes talking about the other piece of your business, which is storage. >> Dominic: Absolutely. >> Dave and I have a long history in what we'd call traditional storage. And the dialog over the last few years has been that we're actually talking about data more than the storing of information. A few years back, I called cloud the silent killer of the old storage market. Because, you know, I'm not looking at buying a storage array or building something in the cloud. I use storage as one of the many services that I leverage. Can you just give us some of the latest updates as to what's new and interesting in your world? As well as when customers come to Google, where does storage fit in that overall discussion? >> I think that the amazing opportunity that we see for large enterprises right now is, today, a lot of the data that they have in their company is in silos. It's not properly documented, they don't necessarily know where it is or who owns it or the data lineage. When we pick all that data up across the enterprise and bring it into Google Cloud Platform, what's so great about it is, regardless of what storage solution you choose to put your data in, it's in a centralized place. It's all integrated, then you can really start to understand what data you have, how do I do connections across it? How do I try to drive value by correlating it? For us, we're trying to make sure that whatever data comes across, customers can choose whatever storage solution they want, whichever is most appropriate for their workload. Then once the data's in the platform, we help them take advantage of it. We are very proud of the fact that when you bring data into object storage, we have a single unified API. There's only one product to use. Whether you have really cold data or really fast data, you don't have to wait hours to get the data, it's all available within milliseconds. Now, what we're really excited about, that we announced today, is a new storage class.
So, in Google Cloud Storage, which is our object storage product, we're now gonna have a very cold, archival storage option, that's going to start at $0.12 per gigabyte, per month. We think that that's really going to change the game in terms of customers that are trying to retire their old tape backup systems or are really looking for the most cost efficient, long term storage option for their data. >> The other thing that we've heard a lot about this week is that hybrid and multi-cloud environment. Google laid out a lot of the partnerships. I think you had VMware up on stage. You had Cisco up on stage, I see Nutanix is here. How does that storage, the hybrid multi-cloud, fit together for your world. >> I think the way that we view hybrid is that every customer, at some point, is hybrid. Like, no one ever picks up all their data on day one and on day two, it's on the cloud. It's gonna be a journey of bringing that data across. So, it's always going to be hybrid for that period of time. So for us, it's making sure that all of our storage solutions, we support open standards. So if you're using an an S3 compliant storage solution on-premise, you can use Google Cloud Storage with our S3 compatible API. If you are doing block, we work with all the large vendors, whether be NetApp or EMC or any of the other vendors you're used to having on-premise, making sure we can support those. I'm personally very excited about the work that we've done with NetApp around NetApp cloud buying for Google Cloud Platform. If you're a NetApp shop and you've been leveraging that technology and you're really comfortable and really like it on-premise, we make it really easy to bring that data to the cloud and have the same exact experience. You get all the the wonderful features that NetApp offers you on-premise in a cloud native service where you're paying on a consumption based service. So, it really takes, kind of, the decision away for the customers. You like NetApp on-premise but you want cloud native features and pricing? Great, we'll give you NetApp in the cloud. It really makes it to be an easy transition. So, for us it's making sure that we're engaged and that we have a story with all the storage vendors that you used to using on-premise today. >> Let me ask you a question, about go back, to the very cold, ice cold storage. You said $0.12 per gigabyte per month, which is kinda in between your other two major competitors. What was your thinking on the pricing strategy there? >> Yeah, basically everything we do is based on customer demand. So after talking to a bunch of customers, understanding the workloads, understanding the cost structure that they need, we think that that's the right price to meet all of those needs and allow us to basically compete for all the deals. We think that that's a really great price-point for our customers. And it really unlocks all those workloads for the cloud. >> It's dirt cheap, it's easy to store and then it takes a while to get it back, right, that's the concept? >> No, it is not at all. We are very different than other storage vendors or other public cloud offerings. When you drop your data into our system, basically, the trade up that you're making is saying, I will give you a cheaper price in exchange for agreeing to leave the data in the platform, for a longer time. So, basically you're making a time-based commitment to us, at which point we're giving you a cheaper price. 
But, what's fundamentally different about Google Cloud Storage is that regardless of which storage class you use, everything is available within milliseconds. You don't have to wait hours or any amount of time to be able to get that data. It's all available to you. So, this is really important: if you have long-term archival data and then, let's say, you get a compliance request or regulatory request and you need to analyze all the data and get to all your data, you're not waiting hours to get access to that data. We're actually giving you access to that data within milliseconds, so that you can get the answers you need. >> And the quid pro quo is I commit to storing it there for some period of time, is that what you said? >> Correct. So, we have four storage classes. We have our Standard, our Nearline, our Coldline and this new Archival. Each of them has a lower price point, in exchange for a longer committed time that you'll leave the data in the product. >> That's cool. I think that adds real business value there. So, obviously, it's not sitting on tape somewhere. >> We have a number of solutions for how we store the data. For us, it's indifferent how we store the data. It's all about how long you're willing to tell us it'll be there, and that allows us to plan for those resources long term. >> That's a great story. Now, you also have these pay-as-you-go pricing tiers, can you talk about that a little bit? >> For which, for Google Cloud Storage? >> Dave: Yes. >> Yeah, everything is pay-as-you-go, and so basically you write data to us and there's a charge for the operations you do, and then you're charged for however long you leave the data in the system. So, if you're using our Standard class, you're just paying our standard price. You can either use Regional or Multi-Regional, depending on the disaster recovery and the durability and availability requirements that you have. Then you're just paying us for that for however long you leave the data in the system. Once you delete it, you stop paying. >> So it must be, I'm not sure what kind of customer discussions are going on in terms of storage optionality. It used to be just, okay, I got block and I got file, but now you've got all different kinds. You just mentioned several different tiers of performance. What's the customer conversation like, specifically in terms of optionality, and what are they asking you to deliver? >> I think within the storage space, there's really three things: there's object, block and file. So, on the block side we have our Persistent Disk product. Customers are asking for better price performance, more performance, more IOPS, more throughput. We're continuing to deliver a higher-performance block device for them and that's going very, very well. For those that need file, we have our first-party service, which is Cloud Filestore, which is our managed NFS. So if you need managed NFS, we can provide that for you at a really low price point. We also partner with, you mentioned Elastifile earlier, we partner with NetApp, we're partnering with EMC. So all those options are also available for file. Then on the object side, if you can accept the object API, it's not POSIX-compliant, it's a very different model. If your workloads can support that model, then we give you a bunch of options with the Object Model API.
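As a rough sketch of how those storage classes surface to a developer, the google-cloud-storage Python client lets you pick a class per bucket or per object. The bucket and file names below are placeholders, credentials are assumed to come from the environment, and note that the archival class discussed above later shipped under the name 'ARCHIVE':

```python
# A hedged sketch using the google-cloud-storage Python client.
from google.cloud import storage

client = storage.Client()

# Create a bucket whose default class is one of the colder tiers.
bucket = client.bucket('example-cold-backups')   # hypothetical name
bucket.storage_class = 'COLDLINE'                # 'STANDARD', 'NEARLINE', 'ARCHIVE', ...
bucket.location = 'US'
client.create_bucket(bucket)

# Upload an object, then rewrite it into a colder class later on.
blob = bucket.blob('backups/2019-04-10.tar.gz')
blob.upload_from_filename('/tmp/backup.tar.gz')
blob.update_storage_class('ARCHIVE')
```

Whatever class an object lands in, reads go through the same API, which is the single unified API point made above.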
So, data management is another hot topic and it means a lot of things to a lot of people. You hear the backup guys talking about data management. The database guys talk about data management. What is data management to Google, and what's your philosophy and strategy there? >> I think for us, again, I spend a lot of time making sure that the solutions are unified and consistent across the board. So, for us, the idea is that if you bring data into the platform, you're gonna get a consistent experience. So you're gonna have consistent backup options, you're gonna have consistent pricing models. Everything should be very similar across the various products. So, number one, we're just making sure that it's not confusing, by making everything very simple and very consistent. Then over time, we're providing additional features that help you manage that. I'm really excited about all the work we're doing on the security side. So, you heard the talk about access transparency and access approvals, right? So basically, you have a unified way to know whether or not anyone, either Google or a third party whose request has come in, has had to access your data for any reason. So we're giving you full transparency as to what's going on with your data. And that's across the data platform. That's not on a per-product basis. We can basically layer in all these amazing security features on top of your data. The way that we view our business is that we are stewards of your data. You've given us your data and asked us to take care of it, right? Don't lose it. Give it back to me when I want it and let me know when anything's happening to it. We take that very seriously, and we see all the things we're able to bring to bear on the security side to really help us be good stewards of that data. >> The other thing you said is I get those access logs in near real time, which is, again, nuanced but it's very important. Dominic, great story, really. I think clear thinking and you, obviously, delivered some value for the customers there. So thanks very much for coming on theCUBE and sharing that with us. >> Absolutely, happy to be here. >> All right, keep it right there everybody, we'll be back with our next guest right after this. You're watching theCUBE live from Google Cloud Next from Moscone. Dave Vellante, Stu Miniman, John Furrier. We'll be right back. (upbeat music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Stuart Miniman | PERSON | 0.99+ |
Dominic Preuss | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Dominic | PERSON | 0.99+ |
Google | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Each | QUANTITY | 0.99+ |
San Francisco | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
seven vendors | QUANTITY | 0.99+ |
Coldline | ORGANIZATION | 0.99+ |
MySQL | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
seven | QUANTITY | 0.98+ |
Kafka | TITLE | 0.98+ |
one product | QUANTITY | 0.98+ |
NetApp | TITLE | 0.98+ |
two major competitors | QUANTITY | 0.97+ |
PostgreS | TITLE | 0.97+ |
NetApp | ORGANIZATION | 0.97+ |
Google Cloud Next | TITLE | 0.97+ |
day two | QUANTITY | 0.97+ |
one bill | QUANTITY | 0.96+ |
S3 | TITLE | 0.96+ |
three things | QUANTITY | 0.96+ |
300 | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
single | QUANTITY | 0.96+ |
Cloud Filestore | TITLE | 0.95+ |
hundreds of database technologies | QUANTITY | 0.94+ |
three databases | QUANTITY | 0.94+ |
day one | QUANTITY | 0.94+ |
first class | QUANTITY | 0.94+ |
20 years ago | DATE | 0.94+ |
this week | DATE | 0.93+ |
SQL Server | TITLE | 0.93+ |
$0.12 per gigabyte | QUANTITY | 0.93+ |
Elastifile | ORGANIZATION | 0.92+ |
2019 | DATE | 0.91+ |
Google Cloud Platform | TITLE | 0.9+ |
Gosh | PERSON | 0.89+ |
Moscone Center | LOCATION | 0.87+ |
Google Cloud Storage | TITLE | 0.82+ |
Moscone | LOCATION | 0.8+ |
theCUBE | ORGANIZATION | 0.75+ |
15 | DATE | 0.73+ |
Object Model | OTHER | 0.73+ |
A few years back | DATE | 0.73+ |
Orr | ORGANIZATION | 0.68+ |
Google Spanner | TITLE | 0.66+ |
Amudha Nadesan, Applied Materials | Splunk .conf18
>> Announcer: Live from Orlando, Florida, it's theCUBE. Covering .conf18. Brought to you by Splunk. >> Hi everybody, welcome back to Orlando. You're watching theCUBE, the leader in live tech coverage. We go out to the events, we extract the signal from the noise. My name is Dave Vellante, I'm here with my co-host Stu Miniman. This is day one of .conf18, Splunk's big user conference. You know, we're talking a lot about AI at these conferences, talking a lot about data; one of the enablers is semiconductors, the power of semiconductors, and cheap storage have enabled people to ingest a lot of data. And when you look into the supply chain, beneath the semiconductors, there are companies who provide semiconductor equipment. One of those companies is Applied Materials, and Amudha Nadesan is here, he's a senior manager at Applied Materials, symbol AMAT. Welcome Amudha, thanks for coming on theCUBE. >> Yeah, thank you, thank you for inviting me. >> You're welcome. So as I say, there's a semiconductor boom going on right now, which is obviously a great tailwind for your business. You're on the data side, obviously. >> Right. >> Dave: Getting your hands dirty. Give us a sense of your role and we'll get into it. >> Yeah, so I'm a senior manager in the software group of Applied Materials, actually. So Applied's core business has always been the hardware; it is a semiconductor and display equipment manufacturer, so every new chip that gets manufactured, or any new display coming out, is manufactured using Applied tools, actually. We are the software group that kind of interfaces with the Applied tools, so we get all the data from the Applied tools and non-Applied tools, and we kind of do all the analytics using our software, actually. So, I'm kind of the technology group leader within the automation products group, so we are responsible for bringing in the new technologies into our products, actually. And with our products, now we are kind of trying to align with Industry 4.0 principles, so we are trying to bring in all the new technologies like mobility, virtualization, IoT, then predictive monitoring, predictive analytics; all these new technologies, we are trying to kind of bring into our products right now. >> So I know that, certainly, the tolerances in the semiconductor business are so tight, and given that you're manufacturing semiconductor equipment and providing software associated with that, is it your job to try to analyze the performance and the efficacy of the equipment and feed that back to your engineers and your customers in a collaborative mode? What's the outcome that your team is trying to drive? >> So, my team's main responsibility is to kind of maintain five-nines availability for all the data that is coming from the tools into our products, actually. Right, so our products need to be up and running all the time, actually. If our product stops, the production line will stop, actually, and if the production line stops, then there's going to be a big business impact, actually. So that's where we are kind of trying to leverage all these new technologies, so we can really kind of run our software with five-nines availability, actually. >> You mentioned three things: mobility, virtualization, prediction. There may be others. >> Right. >> So the mobility, presumably, is a productivity aspect. So people can work at home on the weekends, or wherever they are, teasing of course. Virtualization, getting more out of your infrastructure, that's an asset utilization play.
And prediction, that's using machine intelligence to predict failures, optimize the equipment; maybe you could describe what's behind each of those. >> Yeah, I'll kind of go one by one, actually. All of our products, they are like at least twenty, thirty years old, actually. They have all been thick clients, actually, running on desktops and laptops, actually. Right, so now we are kind of trying to bring in the user experience, where the end users who are using the UI for our products, they can get a good experience, and that can kind of improve the productivity. So that's what the mobility is. So we are kind of trying to adopt the latest technologies like Angular and HTML5 for our product UI, actually. And with respect to the virtualization, we have been kind of running our software on physical servers, actually, in an enterprise fashion, and that is kind of taking up a lot of cost, actually. So we are kind of getting into this virtualization world where we can kind of reduce the TCO of the assets, actually, that are running all this software. >> Help connect the dots with us as to how Splunk fits into your environment. >> Oh, okay, so we got into Splunk just two years back, actually. We have close to 25 to 30 software products that kind of completely automate that manufacturing line, actually. All these products, they generate so much log data, actually, on a daily basis. If you take a year, they kind of generate about 100 gigs of just log files, actually, and those log files have a lot of critical information within them, and when we didn't have Splunk two years back, what we would do is, whenever there was a problem in our customer production line, we would ask them to kind of FTP those logs, actually. And then we have to kind of manually go and scan through all those logs and identify the issue, actually. Sometimes, even to identify the issue, it takes about like a week, actually, right? And after we identify the issue, we have to come up with the resolution to kind of fix the problem, and then it takes months, sometimes. I worked on a problem even for six months to kind of bring a resolution to it, and the customers are very upset, actually. >> Yeah, it's interesting, go back to your earlier statements, you know, we've talked for years, decades, our whole careers, about how important uptime is, and then you talk about your people and there are a lot more efficient things they could be doing if they're not looking after and doing all these manual things. You've been there 22 years; with something like Splunk, how do you measure that, the success of the outcome of using a tool like that? >> Yeah, so right now we can see the success immediately, because we have implemented Splunk and we are kind of remotely monitoring our production lines. At least five customers, right now, we are remotely monitoring them. Every customer, they have downtime at least once or twice a year, actually, so when they have a downtime, if it's a small customer, they take a loss of about 10 K per hour, actually. And if it is a medium, then probably 100 K; if it's a large, then it's 1 million, actually, per hour. In my experience over the last 22 years, a customer has at least one to two downtimes a year, sometimes even more than that, actually.
So after we implemented Splunk, in the last two years, with one of the customers we are remotely monitoring, we never had a downtime, so that itself is a big success, actually, but we are not done with it yet, actually; we are continuing to innovate with Splunk on the log monitoring. >> Let me make sure I understood what you said. So, rough rules of thumb, these things vary, we always understand that, but you say with small customers, when they have downtime, you said it's $10,000 an hour, medium $100,000 an hour, a large customer's a million dollars, and probably up from there with huge companies. >> Yes, yeah, it really kind of depends. When I say a small customer, they have a smaller number of tools, actually, which means they have a smaller number of operators. So fewer people are impacted, actually, when the production line stops. But when you go for a medium size, they have more tools, more people are working with those tools, and when the line stops they have no work to do, which means it's a disruption, actually, in the production line. And if it's a large fab, there are even more operators actually working in the production line, so that's how we kind of calculate the loss, actually. >> When they have, right, the math is pretty simple to calculate, but when they have a downtime like that, do they try and make it up on the weekends? Or can they not do that because people have lives, or they are already actually running 24/7? >> It's already running 24 by seven. >> And you can't get more time in a day. >> Yeah, they can't make it up over the weekend, actually; it's already running 24 by seven, and when the production line stops, that means it's a revenue loss for them, and then also their operators are sitting idle, actually. >> Dave: These are companies with a fab, right? >> These are companies with a fab, actually. >> Which is a multi-billion dollar investment oftentimes, right? >> Yes, yes. Name any semiconductor company, like Intel or Samsung: they're all using Applied tools to run their manufacturing. >> And when they're down, it's right in the bottom line. >> Yes, that's right, and they all use our software to kind of completely automate their factory end to end, actually. >> Can you directly attribute the lack of downtime, the reduction in that downtime, to Splunk? >> That's right, actually, yeah. At least one of the customers we are remotely monitoring right now, those customers are monitored using Splunk. We are, right now, scaling up with more and more customers for the remote monitoring. >> The other thing you said is you're starting to innovate even more with Splunk; maybe you can elaborate a little on that. >> Yeah, we are trying to kind of, right now we are just using the basic machine learning algorithms that are available from Splunk for kind of doing the anomaly detection, our outlier detection, our trend analysis. So we are expecting to kind of introduce more and more machine learning algorithms that can accurately predict the servers going down, that can kind of give us more lead time to kind of proactively address the issues before the user can see an impact, actually. Currently, most of the time it is kind of more reactive: we see the issue and then we kind of react to it. We want to be more proactive, and that is where Splunk is playing a big role, actually. >> Your role is customer facing, is that right? Your software is customer facing? Or are you guys using this internally as well? >> We are using both internally.
Right now, it is customer facing, but our IT organization, after seeing the success with how we are kind of monitoring our customers, they are also kind of adopting it, and there are other business units now who are kind of receiving a lot of data from these tools, actually, like the sensor data from the tools; they are also kind of trying to use Splunk and see how they can kind of predict the issues in the tool more proactively or accurately. >> Splunk is not a new company. I'm just curious, and Applied Materials is obviously a huge company, you know, $35 billion market cap, why did it take you so long to find out about Splunk and adopt Splunk? Was it just organizational, was it that your processes are so delicate and hardened? I wonder if you could explain. >> Yeah, so that's a very good question, actually. Right, so only in the last two years have we started investing more on the R&D, especially on the software products, actually. Mostly the investment was on the hardware products, where they want to kind of improve the productivity, they want to kind of improve the testing methodology, all those things. Most of the investment was going to the hardware components, so they were not even looking at all these software innovations that were happening. So in the last two years, they're kind of investing more in the software groups, actually, which they want to kind of bring up, or kind of take to that Industry 4.0 revolution, actually, right? So that's where we started investing; we started looking at many technologies, and one of the first technologies to adopt was Splunk, actually. And then especially we kind of came up with this remote monitoring concept, where most of our customers, the small customers, I would say, they did not have their own IT organization, right, so whenever they had a downtime, they had to kind of literally log a call and they had to wait for us to kind of come in, fix their problem, and it took days, actually. And they took a big impact because of that. So then they said, we don't have our own IT organization, why don't you kind of take the IT responsibilities off us, keep making sure that software is kind of up and running all the time? So that's the time when we went to Splunk, and we got it, we implemented it, we tested it, and we are kind of seeing good success with it, actually. >> And do you guys buy this as a subscription, or is it a perpetual license? Or how do you guys do that? >> It is a perpetual license, yeah, we have it on-prem. That's another concern with our customers, because they want to make sure their IP does not go out, actually; they don't want to put anything on the cloud. This is true for every semiconductor company, they are not there on the cloud yet, actually. So that's why we have to host Splunk on-prem, and kind of transfer all the data from our customers through a secure FTP, bring it to our on-prem Splunk servers and do all the analytics, actually. >> We've heard Splunk and many other companies this year and for the last couple of years talking about AI and ML. Does that resonate with you? Is that the sort of functionality you think you'll be looking for, and how does that play into your environment? >> That's right, actually. So we are trying to kind of get into that.
We have, to a certain extent; we are kind of already into the machine learning algorithms, actually, but we kind of want to go deeper into that, actually. Currently our prediction, whatever we have built in house, actually, our prediction algorithms can predict with only 60% accuracy, actually. So that's the accuracy we could get, but we want to get somewhere in the 90% or 93% accuracy range, which means we have to do more on the accuracy part, actually, right; we have to get more accurate machine learning algorithms developed, actually. So that is where we are trying to kind of see if the platform can kind of provide more of these machine learning algorithms, which can predict the problem more accurately, actually. >> So that's data, the modeling, iterations, just time, right? You'll eventually get there. Amudha, thanks very much for coming to theCUBE, it was great to hear your story. Last question is, we hear this story of Splunk, I call it land and expand. >> Right. >> We have, you know, one use case, and then there are other use cases; is that your situation? You've only been a customer for a couple of years now, do you see using Splunk potentially in other areas? >> Yes, we are trying to kind of expand to other areas. Right now we started with remote monitoring, we are going to use it for IT, our IT is going to use it, and then we want to kind of go to the predictive analytics, actually; that means we want to kind of look at the tool data, like the data that is coming from the sensors on the tool, we want to kind of do the analytics and then make sure that we can predict the problems, we can predict the maintenance that we need to do, actually. So all those things we want to do, actually; that's the area we want to kind of expand into more, where we will really kind of add value to our customers, actually. >> Amudha Nadesan from Applied Materials, thanks so much for coming on theCUBE, appreciate your time. >> Yeah, thank you. >> Alright, keep it right there, everybody, we'll be back with our next guest. I'm Dave Vellante, he's Stu Miniman, we'll be right back; you're watching theCUBE from Splunk .conf18. (techno music)
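Dave remarks earlier that "the math is pretty simple to calculate." As a quick illustration of the downtime exposure implied by the per-hour figures and the one-to-two outages per year Amudha cites, here is that arithmetic in a few lines of Python; the outage length is an assumed placeholder, since the conversation does not give one.

```python
# Rough annual downtime exposure using the per-hour loss figures from the
# conversation. Outage count and outage length are assumptions for the example.
HOURLY_LOSS = {"small": 10_000, "medium": 100_000, "large": 1_000_000}

def annual_exposure(fab_size: str, outages_per_year: int, hours_per_outage: float) -> float:
    """Estimated revenue at risk per year for a given fab size."""
    return HOURLY_LOSS[fab_size] * outages_per_year * hours_per_outage

for size in HOURLY_LOSS:
    exposure = annual_exposure(size, outages_per_year=2, hours_per_outage=4)
    print(f"{size}: ${exposure:,.0f} per year at risk")
```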
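Amudha also describes using Splunk's built-in machine learning for anomaly detection, outlier detection, and trend analysis on log-derived metrics. The sketch below is not Splunk code and does not represent Splunk's algorithms; it is a plain-Python z-score example meant only to illustrate the kind of outlier flagging he is talking about, with made-up sample data and an arbitrarily chosen threshold.

```python
# Illustrative outlier flagging on a metric derived from logs, for example
# error counts per hour. Sample data and threshold are made up for the sketch.
from statistics import mean, stdev

def flag_outliers(values, z_threshold=2.5):
    """Return the indices of values whose z-score exceeds the threshold."""
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > z_threshold]

hourly_error_counts = [12, 9, 14, 11, 10, 13, 97, 12, 11]
print(flag_outliers(hourly_error_counts))  # flags index 6, the spike of 97
```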
SUMMARY :
Brought to you by Splunk. At Splunk .conf18 in Orlando, Dave Vellante and Stu Miniman talk with Amudha Nadesan, a senior manager in the software group at Applied Materials, whose team keeps the software that automates semiconductor production lines up and running. Before Splunk, troubleshooting meant having customers FTP roughly 100 gigs a year of log files and scanning them manually, with some fixes taking months; since adopting Splunk two years ago for remote monitoring of at least five customers, one monitored customer has had no downtime, which matters when an outage costs from about $10K to $1M per hour depending on fab size. The team now uses Splunk's basic machine learning for anomaly and outlier detection and wants to push prediction accuracy from around 60% toward 90-93%, while expanding Splunk into IT and predictive maintenance on tool sensor data.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Amudha | PERSON | 0.99+ |
Amudha Nadesan | PERSON | 0.99+ |
six months | QUANTITY | 0.99+ |
93% | QUANTITY | 0.99+ |
90% | QUANTITY | 0.99+ |
Applied Materials | ORGANIZATION | 0.99+ |
100 K | QUANTITY | 0.99+ |
1 million | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
$35 billion | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
22 years | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
Orlando | LOCATION | 0.99+ |
24 | QUANTITY | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
two years back | DATE | 0.99+ |
Applied | ORGANIZATION | 0.99+ |
this year | DATE | 0.98+ |
.conf18 | EVENT | 0.98+ |
One | QUANTITY | 0.98+ |
multi-billion dollar | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
a year | QUANTITY | 0.96+ |
about 100 gigs | QUANTITY | 0.96+ |
Angular | TITLE | 0.96+ |
$100,000 an hour | QUANTITY | 0.96+ |
$10,000 an hour | QUANTITY | 0.95+ |
60% | QUANTITY | 0.94+ |
about 10 K per hour | QUANTITY | 0.93+ |
seven | QUANTITY | 0.93+ |
twice a year | QUANTITY | 0.93+ |
first technologies | QUANTITY | 0.91+ |
30 software products | QUANTITY | 0.91+ |
last two years | DATE | 0.91+ |
last couple of years | DATE | 0.89+ |
Covering | EVENT | 0.88+ |
each | QUANTITY | 0.86+ |
last 22 years | DATE | 0.79+ |
thirty years old | QUANTITY | 0.79+ |
a day | QUANTITY | 0.76+ |
25 | QUANTITY | 0.75+ |
a million dollars | QUANTITY | 0.75+ |
least twenty | QUANTITY | 0.74+ |
couple years | QUANTITY | 0.71+ |
least five customers | QUANTITY | 0.7+ |
a week | QUANTITY | 0.69+ |
two down times | QUANTITY | 0.69+ |
three things | QUANTITY | 0.67+ |
STML | TITLE | 0.65+ |
close | QUANTITY | 0.61+ |
Robert Stumpf, NetApp | SAP SAPPHIRE NOW 2018
>> From Orlando, Florida, it's theCUBE, covering SAP SAPPHIRE NOW 2018. Brought to you by NetApp. >> Hey, welcome to theCUBE. I am Lisa Martin with Keith Townsend, and we are live in the NetApp booth at SAP SAPPHIRE 2018. We are joined by Robert Stumpf, Senior Director of IT, Enterprise Solutions Delivery. Welcome to theCUBE! >> Thank you, thank you. >> So we're here in the NetApp booth at SAPPHIRE NOW. As they said in the keynote this morning, they're expecting a million people to engage with SAP SAPPHIRE this week. >> Yes. >> I think I've heard rumblings there's about 20+ thousand people here in attendance. >> Yeah. >> Huge event, huge show, lots of announcements. Let's talk about NetApp and SAP as partners. Specifically in the context of the Next-Gen Data Center, bringing cloud-ready solutions to business applications. What are you guys doing there with SAP? >> Sure, I can talk a little bit about that. The NetApp solutions fit into the Next-Generation Data Center in a variety of different ways. We have All Flash FAS, which really is the core of our product base and is really the workhorse for all the hardcore applications; it gives you really strong performance in the storage area. Then we have Cloud Volumes for when you want to scale out to a hyperscaler, and you can use the Cloud Volumes capabilities there. And then when you look at our HCI component, it is capable of giving you a lot more of the container-based compute power, so we fit into a variety of different components there. >> So, Robert, we're at SAP. And SAP hasn't been traditionally known as a cloud-aware application. Tell us, from the NetApp perspective, what's changed with SAP over the years that now you can comfortably talk about SAP as a cloud-aware application? >> So SAP's moving a long way in that direction. You saw it this morning in the keynote that they were talking about C4, their customer-focused applications. That's really kind of putting a framework on top of all of the customer engagements, and making the customer the center of everything. So they're moving a lot in that direction. We at NetApp have implemented their Hybris platform, their Cloud for Customer application. We just went live with that last year, so we're on that journey with SAP as well. >> So, as we talk about that, what makes the application, or what makes applications in general, cloud-aware? >> Okay, when you look at making something cloud-aware, you want to really look at the architecture that you have underneath it. So you'll build something that has a lot more automation in it, that is a lot more scalable, where you don't have to intervene; the scalability is built into the framework that you're leveraging. In the case of our NetApp support site, which we just completely re-architected and went live with last month, we have built that on what's called a MEAN stack, so that's where you have the Mongo database on the back end, that's a NoSQL database, and then on top of that Angular and Node.js, which gives you a much more robust framework for you to be able to scale out your application. So with it being a website, your volume can go up and down, so you want to be able to scale the application without needing people to get involved in that scaling; it will just fire up new containers as needed as the volume increases, and it's a lot more robust an architecture. >> So if we look at Hybris and we look at NetApp products and solutions, that framework and architecture, can you paint a picture for us of what NetApp solutions and products are cloud-aware?
Sure, for the cloud-aware applications, really you need to look at the complete stack of the Next-Generation Data Center, which is really embodying the on-prem data center, your hyperscaler cloud data centers, and then a private cloud if you so wish to build one. So the Next-Generation Data Center takes advantage of All Flash FAS in your on-prem solution, so you've got your performance, high-performance scalability. Then your Cloud Volumes allow you to move your data between your on-prem out to the hyperscaler as you need to, and the HCI component gives you that container-based compute array that allows the applications to scale. Also, you can leverage StorageGRID, which is much more of an object-based data store, which is something that you'll use extensively in cloud-aware applications. >> So, thanks Keith. So one of the things that was announced this morning, you mentioned C/4HANA, where Bill McDermott was sort of expected to announce what SAP was going to be doing that's gonna help differentiate them. They want more share from Salesforce and Oracle. He made kind of some aloof references to that, but one of the things that he talked about was: companies need, in this day and age, speed obviously, but to move away from a 360-degree view of sales automation to an actual 360-degree view of the customer. I'd love to get your insight on NetApp and SAP as partners together. Are you seeing any particular industries leading here? We think of manufacturing, maybe automotive, oil and gas, but I'm just wondering, from NetApp's perspective, are you seeing any industries that are really leading-edge here in evolving to a Next-Gen Data Center that enables this 360-degree view? >> There's a variety of different industries that are doing that. If you take a look at applications like Netflix and Amazon Prime, those applications are architected to be scalable and to be much more robust, and they are much more focused on the customer. And you don't have outages, right? They don't take the system offline when they're doing an upgrade to their capabilities. When was the last time you heard of Netflix going offline for twelve hours to do an upgrade? So, these applications are built much more robustly around that, and that's one thing that we are looking to do at NetApp with the Hybris implementation that we did with SAP, and we're also upgrading our back office CRM system to their CRM on HANA on-prem, and we're gonna be taking advantage of the Hybris capabilities there to give that full picture of the customer. We'll be heavily engaged with SAP on their C4 journey and making sure that we are a part of that as well. >> So it's great that you brought up Netflix as an example that continues to operate an environment that has this huge back end automated with technology. SAP traditionally hasn't been considered a technology that you could upgrade on the fly. I've managed an SAP environment where we could only take twelve hours of downtime a year because it's mission critical; it's very difficult to get that time. >> Yes. >> How has the NetApp data fabric story played into making that a possibility in your own environment and customers' environments? >> Okay, we leverage a lot of the NetApp storage in our on-prem system. I'm in the exact same place, the same situation you were talking about. We have a lot of mission critical customers that are on our support application. I have to give 90 days' notice to take the system down for any longer than four hours at a time, so I'm in that very similar situation.
So we leverage a lot of the NetApp technologies to make sure that the applications are available when I'm doing the upgrades, and we can do rapid copies of the data that's in there, make sure it's all robust. Our failover databases and failover systems are set up that way, so that they take advantage of the snapshots that we get from the application, and we're working with SAP. The SAP Hybris application is actually built on top of NetApp storage, and we're working very closely with SAP to re-architect our applications to take advantage of the capabilities that NetApp storage brings to the equation. >> So all of this is coming into its own in this hybrid cloud model. NetApp's been around 26 years, right, a long time. But now, it's everything you see. You mentioned Netflix, and I don't know anybody on the planet that would survive if Netflix went down for an hour, let alone twelve. So speed, access to data, but this evolution of NetApp, I'm interested, and you know, now again in this hybrid cloud model: you guys made your name building network attached storage for on-prem data centers, and then there's the announcement with Google Cloud Platform just last week. Talk to us about some of the evolution of NetApp, from your perspective, from the storage perspective, into really facilitating this hybrid cloud model. >> Sure, we are really at the forefront of that, because at the end of the day, it's all about the data. Right, your application can run wherever you want, but wherever your data is is really the key. And the framework that we're putting in place is to make your data a lot more mobile. So if you want to keep the data on-premise, then you can keep it on-premise. If you want to move it out next to the hyperscaler, you can burst it out, you can use Cloud Volumes and migrate the data. So the NetApp picture, the story, is really in making your data much more mobile and moving it to the location of choice for any particular workload that you're looking for. >> So, we can't have a discussion in 2018 about data without talking about privacy and security. What's the relationship in ensuring that NetApp and SAP, as one, meet the requirements in GDPR? We have to talk about GDPR, we have to talk about security. How is NetApp securing data and ensuring that end users' and organizations' data stay private? >> That's a very good question, right? It's definitely a challenge that a lot of companies are struggling with, and the tools that NetApp provides with our storage systems are paramount; security is paramount, and that's something that we're very much focused on in making sure that your data is your data. The specific components of the data that you want to keep on-premise, which you want to keep much more secure, you can keep on the NetApp All Flash FAS storage systems, and then you protect it as if it's in your own kingdom. But then the data that's a little bit more lax on the security side, you can push that out onto the hyperscalers and use the NetApp Cloud Volumes to have it outside of your on-premise. You know, it's like your own firewall. >> So one of the basic things that ONTAP customers depend on in their private data centers is this ability to encrypt data on the fly. Now that we see ONTAP in the cloud, do we get that same basic capability to encrypt data on the fly, or encrypt data while it's in transit? How do I know my data is protected from an encryption perspective?
You get the same capabilities when you're using the on-cloud tools that we provide, so there's no real difference in that, and that's the beauty behind it. You're using the same storage management tools for your Cloud Volumes as you would be for your on-premise systems. >> I want to ask a question on competition. There's a lot of co-opetition that's going on just at SAPPHIRE alone. With what you talked about, how NetApp is leveraging Hybris to really kind of get towards that model of connecting supply chain with demand, getting that full view of customers, SAP partners with probably all of your competitors. So with what NetApp is doing internally to digitally transform, how do you see it giving NetApp that competitive edge against the other guys? >> Okay, the way that we look at our competitive edge at NetApp from an application standpoint is really focusing on keeping our core capabilities very, very vanilla. So in the implementation with Hybris, we were very much focused on not customizing the application. Because at the end of the day, you sell stuff, you build stuff, you manufacture it, and you support it. So those are the core capabilities, and we've kept that as vanilla as possible within the implementation. Where we differentiate, that's where we customize. So our application landscape is much more focused on customizing for the differentiating capabilities, and that's the component that's specific to NetApp and how we do business. And that's the way that we go about differentiating ourselves from our competitors. So we use the core capabilities of all the enterprise applications that we have, that we purchase, such as Hybris, and then we go build our custom solutions that are differentiated, such as our ASUP, the AutoSupport system, which has been embedded right from day one; that's a custom-built application, it's very proprietary, it's really the keys to the kingdom for our organization. And that's something that's very, very integral as part of the NetApp culture. >> So, let's talk about some lessons learned from that. One of the pain points for many SAP customers is they look at a capability like ECC on HANA, really want it, but they've customized their environment too much, so making that switch is extremely difficult for them. What have you learned as a team that says, you know what, the best way to stay in line with SAP and follow that roadmap, for mission critical applications that are both stable and differentiating, is to follow these basic policies from a hygiene perspective? >> Sure, we actually went through that last year with our project where we replaced our Sales Force Automation system and we implemented C4C Hybris. So the key to that is really getting the executive sponsorship bought in to making sure that you're adhering to the vanilla applications and not customizing. So we were very fortunate that we had Henri Richard and Bill Miller, our CIO. They were the executive sponsors of the project, and they were adamant that we would not customize the application, and we went through it; it took us six months to replace our CRM system, our front office CRM system. Very proud of that project. It was an incredibly painful journey to go through, but the benefits that we got out of the end of it are phenomenal, because we were in that situation where we had an overly customized SaaS application that was running our sales organization that really wasn't meeting the needs of the business.
Now we have a much more agile implementation that's on top of SAP's Hybris platform, and we're taking advantage of the new capabilities they introduce, rather than focusing on our own customizations. >> That's a great summary. I think you articulated very well one of the themes from Bill McDermott's keynote this morning: making things simple is not an easy thing to do, but it's critical. There are so many-- >> It's totally critical. >> business outcomes that come out of that, not just streamlining processes, improving sales and marketing and connecting them together, but really affecting revenue, profit, share, et cetera. So Robert, thanks so much for stopping by theCUBE and chatting with Keith and me today about what you guys are doing with SAP. >> Great, thank you, thank you for your time. >> We want to thank you. You're watching theCUBE: Lisa Martin with Keith Townsend from SAP SAPPHIRE 2018, thanks for watching! (light percussive music)
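Earlier in the conversation Robert lists StorageGRID as the object tier in the Next-Generation Data Center stack. StorageGRID presents an S3-compatible API, so a standard S3 client such as boto3 can write to it by pointing at the grid's endpoint; the endpoint URL, credentials, bucket, and key below are hypothetical placeholders rather than details from the interview, shown only as a sketch of what building against an object API looks like in practice.

```python
# Sketch: writing an object to an S3-compatible endpoint such as a StorageGRID
# tenant. Endpoint, credentials, bucket, and key are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid.example.internal",  # placeholder endpoint
    aws_access_key_id="TENANT_ACCESS_KEY",                 # placeholder
    aws_secret_access_key="TENANT_SECRET_KEY",             # placeholder
)

s3.put_object(
    Bucket="support-site-assets",          # placeholder bucket
    Key="uploads/2018/06/bundle.tgz",      # placeholder key
    Body=b"example payload",
)

# The same client code works unchanged against AWS S3 or another
# S3-compatible store, which is what keeps the object tier portable.
print(s3.list_objects_v2(Bucket="support-site-assets").get("KeyCount"))
```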
SUMMARY :
Brought to you by NetApp. Lisa Martin and Keith Townsend are live in the NetApp booth at SAP SAPPHIRE 2018 with Robert Stumpf, Senior Director of IT, Enterprise Solutions Delivery at NetApp. He explains how NetApp's Next-Generation Data Center portfolio (All Flash FAS on-prem, Cloud Volumes for the hyperscalers, HCI for container-based compute, and StorageGRID for object storage) underpins cloud-aware applications, including NetApp's own re-architected support site on a MEAN stack and its SAP Hybris implementation. He also covers data mobility, security and GDPR, encryption with ONTAP in the cloud, and the lesson from replacing NetApp's Sales Force Automation system with C4C Hybris in six months: keep core capabilities vanilla and customize only where you differentiate, such as the ASUP AutoSupport system.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Robert Stumpf | PERSON | 0.99+ |
Robert | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
Bill Miller | PERSON | 0.99+ |
Bill McDermott | PERSON | 0.99+ |
360-degree | QUANTITY | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
twelve hours | QUANTITY | 0.99+ |
twelve | QUANTITY | 0.99+ |
six months | QUANTITY | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
90-days | QUANTITY | 0.99+ |
Henri Richard | PERSON | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
NetApp | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
an hour | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
last week | DATE | 0.99+ |
SAP | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
GDPR | TITLE | 0.98+ |
ONTAP | TITLE | 0.98+ |
Salesforce | ORGANIZATION | 0.98+ |
last month | DATE | 0.98+ |
C4 | TITLE | 0.98+ |
HANA | TITLE | 0.98+ |
NetApp | TITLE | 0.98+ |
SAPPHIRE | ORGANIZATION | 0.97+ |
SAP SAPPHIRE | TITLE | 0.97+ |
this week | DATE | 0.97+ |
about 20+ thousand people | QUANTITY | 0.97+ |
One | QUANTITY | 0.96+ |
Amazon | ORGANIZATION | 0.96+ |
both | QUANTITY | 0.96+ |
a year | QUANTITY | 0.96+ |
around 26 years | QUANTITY | 0.95+ |
C/4HANA | TITLE | 0.94+ |
Hybris | ORGANIZATION | 0.94+ |
this morning | DATE | 0.94+ |
SAP SAPPHIRE 2018 | TITLE | 0.93+ |
Hybris | TITLE | 0.92+ |
NoSQL | TITLE | 0.92+ |
C4C Hybris | TITLE | 0.91+ |
SAP Hybris | TITLE | 0.88+ |
one thing | QUANTITY | 0.87+ |
NOW | DATE | 0.86+ |
SAPPHIRE | TITLE | 0.86+ |
Dirk Hohndel, VMware | KubeCon + CloudNativeCon 2018
>> Announcer.: From Copenhagen, Denmark, it's theCUBE. Covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the CloudNative Computing Foundation and its ecosystem partners. >> Hello everyone, and welcome back. This is theCUBE's exclusive coverage of KubeCon 2018 in Europe, part of the CNCF, Cloud Native Compute Foundation, part of the Linux Foundation. I'm John Furrier with my cohost Lauren Cooney. Our next guest is Dirk Hohndel, Vice President, Chief Open Source Officer for VMware. Great to see you. CUBE alumni, welcome back. >> Thank you, good to be here. >> So you had a keynote, smashing success today on stage, about open source, all five minutes of it, congratulations. (laughing) Take a minute to explain, I have some specific questions on VMWare, office of the CTO, how you guys are working on some really interesting things. But first, take a minute to explain, the VMware approach to open source that you're leading. What's the architecture of it, how is it organized, can you take a minute to explain-- >> Sure. >> The VMware? >> So we use open source components in literally every single one of our products, and we have a structure where each of the BUs is engaged in open source in the components that they're using, in projects that are related to the business, and they have a central organization that sits in the office of the CTO that I run, so the open source program office, which has much more of a focus of pure open source work. Focused on up stream, focus on the problems that the community sees much more than something that is product driven. I also own the whole compliance work that everyone needs to do to make sure that you follow the licenses and all that. But, fundamentally the balance between having the central organization that has maybe the center of expertise and has people who do open source and nothing but open source, and on the other hand bringing that expertise into the BU. Bring it closer to the products, and engaging across the company. We have more than 7,000 software engineers across the company and we want every single one of them to be mindful and understanding of how open source works, and how we are engaged in that space. >> And how many people, just some stats, can you share, by the numbers, how many people are on the teams, R&D, there's also in the CTO office. How many folks are on your team roughly speaking? >> So I have currently, I want to say, this is quick, 20 some odd people under me, but across the company it's a lot more. There are several hundred people who are, in their daily work engaged with open source all the time. >> That's great. >> So your team is centralized in the business units. Go ahead. >> No, that's great. I was going to say, what is it look like for people that want to contribute code that aren't on your team? Is there a process that's pretty easy to go through? Or can they just put it on GitHub? We would all like that but. >> Yeah so, we have an internal tool that we've developed they simply can request to contribute to an existing project and it goes through a very quick review and depending on the topic, this is typically a two day turnaround time, where they get approval from the BU VP and from me. And if you want to open source a project, so if you have something internal that you've done, that you want to bring out into the community, it's little more complicated with naming, and branding and what not. A lot more people need to nod basically, but it still takes usually a couple of weeks-- >> Yeah. 
>> And it goes through. But it's an automated process, it's driven by a PM out of my organization. >> That's great. >> And it tries to make it really, really easy. One of my big goals joining VMware was to remove friction out of this process, and to encourage people to engage with the projects that are out there, but also for us to bring software into the open that we've developed, for example internal tools, and make them useful for other people. >> Definitely, I think that's great. >> You mentioned open source models about people, can you elaborate on that because I think this is an important point, we were talking before we came on, about that role of people. >> Well, so open source is... People think of this as a software development methodology, and it is, but fundamentally it's a social phenomenon. It's this experiment of saying the way we do our work is based on relationships. It's based on trust. So I trust you that you've reviewed this code and I take that code that you've reviewed. I know that you are the expert in this area so if I make changes in this area I'll send them to you and ask for your review. It's all about relationships. And these relationships are between people, not between companies. So in so many ways, the role travels with the person, and not with the company. And we have seen this in many cases, where people move from company to company, but the work, their influence, the role comes with them. So it's very much empowering for the engineers. But it's also from a purely human perspective, an engagement where, it's not just about the code that you write, but it's about how you treat people. How you engage with them. This is why conferences like this are so wonderful. There are 4,000 people here, 4,500 people here, and you meet people whom, with whom you've been emailing for years. And this social aspect of this for an introvert like myself, is at the same time a little scary, but also it's super exciting because it is people who are driving this industry. >> John: The face to face connections really make a difference. >> I think it's the community. I mean the community always comes first, I think. I will say this, you build a community, you don't launch one, and I think that's absolutely critical. And I think, can you talk about some of the changes in mentality that you're working with across VMware right now with getting that community first sort of thing moving? >> Well, so, I mean, VMware is a very engineering centric organization. We are driven, we're founded by engineers and driven by engineers, I mean Pat Gelsinger our CEO was an engineer, and so the underlying ethos of contribution and of trying to fix problems and if you see something you go and fix it, that is something that has always been there at VMware, but what I've been trying to bring in to VMware is much more of an up-stream focus. An understanding of, it's not just important that you understand the technology well and you use it well, but also that you contribute back. And that you are seen as playing a big role in this industry. And if you look at the impact that VMware has had in the broader open source community, and how we have shifted our approach to being part of this over the last two years, I think it's been extremely successful. And you can see this with our footprint here, how many talks we have here, and how much presence we have here. I think there's 70 VMware employees at KubeCon this year. >> That's great. 
>> It's now cultural, it's a Tier One, I'll say Tier One role, not Tier Two when we were growing up in the industry, but part of the business software define, infrastructure, software is taking over the world as Mark Andress said is happening. Open source is there powering it. So I have to ask you the question, that would be on my mind if I'm thinking about going all in as a company, if I'm an enterprise. Hey, you know what, I like this approach. I'm going to go all in. Complete commitment. What's the best practice, what's your advice, because this is something people are talking about doing not just putting a toe in the water going all in and committing to an open source business model with their company. What's your advice for shepherding that process, cultural ethos. What's your take? >> It starts with language. It starts with how you talk about what you're doing. I hear a lot of people saying things like, "Oh, I consume open source," well it bugs me because you consume a commodity. You consume electricity, you don't care where it comes from it's just a plug in the wall. Whatever, right? Open source is always around, about the people. It's always about how do people work. How do they think about security, about releases, about maintenance? What's their work flow? And you can't just consume an open source component, you need to engage with them, you need to understand how their work affects your work. And so my recommendation is always, start with your own language. Start with the approach that you're taking when you're talking about all this. And then figure out, where are you using it, how are you using it, what are the changes that you've made to the components that you're using? How about contributing those changes back? It's a very simple first step to engage. And it's actually a step that makes total business sense because if you have your private branches, your private patches, the next time the upstream project goes through a new release, you need to port these changes, that costs money. So it's actually cheaper to simply contribute them back and have them maintained by the project. And you can use upstream, or you have a minimum set of small adjustments that don't make sense to return to the community. And this is really how you get your toe into the water. Because now you're not just a user of this, you're engaged, you're a contributor. >> You're operationalizing your business. >> Yes, you are, and then the next step is thinking about what of my internal tool sets that maybe are not my core product, but are the things that we build to build the product as part of our workflow. What of those could be used for the product community? So at VMware, for example, we built a software design system, it's called Clarity, and you can use it to create angular-based JavaScript UIs. So we use this for all of our products. We made this tool an open source tool and it's massively successful project, has weekly releases, has a ton of users, a very very active community. And it's one of those cases where you take something that isn't the core of your business, but you are earning your chops in the community. And take it one step at a time and broaden-- >> John: That's the trust relationship you're building? >> That is very much this trust relationship. And it's this track record that you're building of not only doing something, oh here's this old product and I'll open source it and then I walk away. So we call this dump and run right? 
You throw it over the wall, it's now open source and then you say, customer you're now on your own because it's now open source. >> It's abandoned no one's paying attention. >> Yeah that's a terrible model, but a very good model is one where you think about creating these relationships and creating a track record of being there every week, looking at the bug reports, looking at the issues, looking at the pull requests, and engaging with the people out there. And the value that this creates, the amount of value that you're getting from your outside contributor, very quickly outweighs the additional cost that it takes to get this IP clean and released and all that. >> And then there is documentation and documentation is a tremendous amount of heavy lifting on the inside of a company. But if you can spread it over an open source product that you have, it's great. And it's a really good way for people to start out in open source, I find. >> And you just said open source product, so this one of those things-- >> Project. >> Where, yeah. This is something that I think is where we come back to language being so important. I always talk to the folks internally about this distinction. What is the open source project? What is it, what the community does, what lives on GitHub or what lives in the public side of this? And then what is your product that is based on this project? And in your thinking always keep these two separate. Understand that everything that happens in the project is what is publicly available and what is done in conjunction with your community. >> John: With the team. >> Versus your product which focuses on how does the customer use this. Because open source projects, in and of themselves, are typically built by developers for developers. And the end user has actually different needs. And this is where the business model come in and that's kind of closing the question that you just asked, because the value that the company is providing this space is the understanding of the customer needs. And is the ability to take something that is creating enormous and impressive innovation, which is the community, and taking this to a place where then someone can use it in production. Where it's scales, where it's secure, where it has Day-One and Day-Two operationalization, where it has strong documentation. There is a support number you can call. All these things that a customer is-- >> John: Needs. >> Needs and that an open source project by themselves is unlikely to create. >> It's like putting money in the bank. You can't just take money out of the bank. You've got to deposit good will. The give-get is part of that project and you're saying make the product focus on the customer problems. >> Dirk: Absolutely. >> My question is are you talking kind of about a services wrapper that you put around it and maybe a couple of additional features? In part, or what are you actually kind of, just to get to the crux of it. >> So there are many different ways, many different business models around open source. For us, we are still an enterprise software company. So open source generally provides components of what we do. It may be the API that the customer is asking for. So today, Kubernetes is a set of APIs a lot of people want to use as their way to provide a container service for the orchestration, right? But what is the underlying infrastructure? How do you generate a persistent storage? A flexible networking infrastructure that can grow and shrink as your work loads grow and shrink? 
How do you manage your individual nodes? How do you deal with internal billings so that you can bill your data center time to your departments? And all these operational aspects are things that we're trying to solve with our products. But we offer to the customer an open source based API. So that's where our business value lies fundamentally. >> Lauren: Okay. >> Communities are a concept that's premised on create value before you capture it. And I think what you're saying is, if you have a project, you better bring something to the table, not just distract. It's a taker. >> Yes. >> If you're just taking all the time, it's not a good trust relationship building, that's what I hear you saying. >> And you will also not be successful because your customer needs, as your customers are coming to you and they're running into issues, you need to be able to address those issues. Which means you need to be productive part of that community. You need to have the in-depth understanding to then help them. >> I've seen people do things like they couldn't get a business model going so they say, "Oh we're just going to open source it, "and hope that a miracle happens." And it's not really that way. I mean, people do open source for the right reasons to bring code to the table, but you're saying nurturing that community project is a for all kind of thing. >> Fundamentally, I always think there are so many brilliant developers in these communities. And if you go into these communities with the assumption that you can learn something from the other developers, you can learn something from the other companies that are involved. And then you can contribute the areas where you are strong, where you have your core knowledge. And you wrap this into a product that provides value for your customers, everyone wins. Your customer wins, >> That a good way-- >> Your community wins, you win. >> So if you're out there thinking about it think about your core competency and what you want to open source, you got a good fit. Okay what's new for you? You diving, you're an avid scuba diver. We talked about that last time you were on theCUBE. What's new with you? >> I haven't been diving. Actually I drove up to Hootsbor to dive in 48 degrees Fahrenheit water, because I haven't been in the water for so long. My next trip is going to be Okinava which is a lot warmer than that. No, the work keeps me busy, so not as much scuba diving as I would want. But we've been very busy. We've been pushing a lot more contributions to a much larger set of projects. My team has been growing, so we've been actively hiring. And we're developing a second generation, internal set of processes to deal with all of these questions you asked about earlier, of how to make sure that you know where you contribute, how you contribute, which components you use. So we're revamping our internal processes around this. >> Lauren: That's great >> And it's keeping us very busy, but I have to say, especially, if you look at this conference here, the success is really very rewarding. We have so many more people actively engaged, and recognized in the community as key contributors. It's been a very very successful year since last we talked. >> It's awesome. Well thanks for your leadership at VMware. We love the KubeCon, we love the Linux Foundation, they've done amazing work. CNCF is just exploded with success and it's a result of, the trend is everyone's friend, which is cloud computing and software defined everything so, VMware. 
Thanks for coming out Dirk, appreciate it. Live coverage here in Copenhagen, Denmark. This is theCUBE, I'm John Furrier. Lauren Cooney co-hosting with me this week. And we'll be back with more, stay with us after this short break. (energetic music)
SUMMARY :
Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. At KubeCon + CloudNativeCon Europe 2018 in Copenhagen, John Furrier and Lauren Cooney talk with Dirk Hohndel, Vice President and Chief Open Source Officer at VMware. He describes how VMware organizes its open source work, with a central program office in the office of the CTO plus engineers engaged in the business units, and a lightweight internal process, typically a two-day turnaround, for approving contributions. He argues that open source is fundamentally about people, trust, and relationships; advises companies to start with their language, contribute private patches back upstream rather than carry them, and keep the project-versus-product distinction clear; and points to the Clarity design system and roughly 70 VMware employees at KubeCon as evidence of the company's shift over the last two years.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dirk | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Lauren | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Dirk Hohndel | PERSON | 0.99+ |
Lauren Cooney | PERSON | 0.99+ |
Mark Andress | PERSON | 0.99+ |
CloudNative Computing Foundation | ORGANIZATION | 0.99+ |
Linux Foundation | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Cloud Native Compute Foundation | ORGANIZATION | 0.99+ |
4,000 people | QUANTITY | 0.99+ |
4,500 people | QUANTITY | 0.99+ |
two day | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
48 degrees Fahrenheit | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
more than 7,000 software engineers | QUANTITY | 0.99+ |
70 | QUANTITY | 0.99+ |
Copenhagen, Denmark | LOCATION | 0.99+ |
today | DATE | 0.99+ |
second generation | QUANTITY | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
GitHub | ORGANIZATION | 0.98+ |
first step | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
KubeCon 2018 | EVENT | 0.98+ |
one | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Hoodsport | LOCATION | 0.98+ |
Okinawa | LOCATION | 0.97+ |
JavaScript | TITLE | 0.97+ |
this week | DATE | 0.97+ |
CloudNativeCon Europe 2018 | EVENT | 0.96+ |
theCUBE | ORGANIZATION | 0.95+ |
this year | DATE | 0.95+ |
Day | QUANTITY | 0.94+ |
one step | QUANTITY | 0.94+ |
CloudNativeCon 2018 | EVENT | 0.94+ |
One | QUANTITY | 0.93+ |
VMWare | ORGANIZATION | 0.93+ |
Two | QUANTITY | 0.91+ |
CUBE | ORGANIZATION | 0.91+ |
Kubernetes | TITLE | 0.91+ |
five minutes | QUANTITY | 0.89+ |
CTO | ORGANIZATION | 0.89+ |
hundred people | QUANTITY | 0.88+ |
Clarity | TITLE | 0.83+ |
20 some odd people | QUANTITY | 0.83+ |
last two years | DATE | 0.81+ |
single | QUANTITY | 0.76+ |
Tier One | OTHER | 0.7+ |
angular | TITLE | 0.66+ |
years | QUANTITY | 0.65+ |
Tier Two | OTHER | 0.64+ |
Vice President | PERSON | 0.63+ |
couple of weeks | QUANTITY | 0.58+ |
Action Item | March 30, 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item. (electronic music) Once again, we're broadcasting from theCUBE studios in beautiful Palo Alto. Here in the studio with me are George Gilbert and David Floyer. And remote, we have Neil Raden and Jim Kobielus. Welcome everybody. >> David: Thank you. >> So this is kind of an interesting topic that we're going to talk about this week. And it really is how are we going to find new ways to generate derivative use out of many of the applications, especially web-based applications, that have been built over the last 20 years. A basic premise of digital business is that the difference between business and digital business is the data and how you craft data as an asset. Well, as we all know, in any universal Turing machine, data is the basis for representing both the things that you're acting upon but also the algorithms, the software itself. Software is data, and the basic principles of how we capture software-oriented data assets, or software assets, and then turn them into derivative sources of value and then reapply them to new types of problems is going to become an increasingly important issue as we think about how the world of digital business is going to play out over the course of the next few years. Now, there are a lot of different domains where this might work but one in particular that's especially important is the web application world, where we've had a lot of application developers and a lot of tools be a little bit more focused on how we use web-based services to manipulate things and get software to do the things we want to do, and also it's a source of a lot of the data that's been streaming into big data applications. And so it's a natural place to think about how we're going to be able to create derivative use or derivative value out of crucial software assets. How are we going to capture those assets, turn them into something that has a different role for the business, performs different types of work, and then reapply them? So to start the conversation, Jim Kobielus. Why don't you take us through what some of these tools start to look like. >> Hello, Peter. Yes, so really what we're looking at here, in order to capture these assets, the web applications, we first have to generate those applications, and the bulk of that work of course is and remains manual. And in fact, there is a proliferation of web application development frameworks on the market and the range of them continues to grow. Everything from React to Angular to Ember and Node.js and so forth. So one of the core issues that we're seeing out there in the development world is... are there too many of these? Is there any prospect for simplification and consolidation and convergence on web application development frameworks to make the front-end choices for developers a bit easier and more straightforward in terms of the front-end development of JavaScript and HTML as well as the back-end development of the logic to handle the interactions; not only with the front-end on the UI side but also with the infrastructure web services and so forth. Once you, a professional programmer, have developed the applications, then and only then can we consider the derivative uses you're describing, such as incorporation or orchestration of web apps through robotic process automation and so forth. So the issue is, how can we simplify, or is there a trend toward simplification, or will there soon be a trend towards simplification of front-end manual development? 
And right now, I'm not seeing a whole lot of action in this direction of simplification on the front-end development. It's just a fact. >> So we're not seeing a lot of simplification and convergence on the actual frameworks for creating software or creating these types of applications. But we're starting to see some interesting trends for stuff that's already been created. How can we generate derivative use out of it? And also, per some of our augmented programming research, new ways of envisioning the role that artificial intelligence, machine learning, etc., can play in identifying patterns of utilization so that we are better able to target those types of things that could be applied to derivative use. Have I got that right, Jim? >> Yeah, exactly. AI within robotic process automation; anything that has already been built can be captured through natural language processing, through computer image recognition, OCR, and so forth. And in that way, it's an asset that can be repurposed in countless ways, and that's the beauty of RPA and where it's going. So the issue is then not so much capture of existing assets but how can we speed up and really automate the original development of all that UI logic? I think RPA is part of the solution but not the entire solution, meaning RPA provides visual front-end tools for the rest of us to orchestrate more of the front-end development of the application UI and interaction logic. >> And it's also popping up-- >> That's part of broader low-code-- >> Yeah, it's also popping up at a lot of the interviews that we're doing with CIOs about related types of things, but I want to scope this appropriately. So we're not talking about how we're going to take those transaction processing applications, David Floyer, and envelope them and containerize them and segment them and apply new software. That's not what we're talking about, nor are we talking about the machine-to-machine world. Robotic process automation really is a tool for creating robots out of human user interfaces that can scale the amount of work and recombine it in different ways. But we're not really talking about the two extremes, the hardcore IoT or the hardcore systems of record. Right? >> Absolutely. But one question I have for Jim and yourself is, the philosophy for most people developing these days is mobile first. The days of having an HTML layout on a screen have gone. If you aren't mobile first, that's going to be pretty well a disaster for any particular development. So Jim, how does RPA and how does your discussion fit in with mobile and all of the complexity that mobile brings? All of the alternative ways that you can do things with mobile. >> Yeah. Well David, of course, low-code tools, there are many. There are dozens out there. There are many of those that are geared towards primarily supporting fast automated development of mobile applications to run on a variety of devices and, you know, mobile UIs. That's part of the solution, as it were, but also in the standard web application development world, you know, there's these frameworks that I've described. Everything from React to Angular to Vue to Ember, everything else, are moving towards a concept, more than a concept, it's a framework or paradigm called progressive web apps. 
And what progressive web apps are all about, and that's really the mainstream of web application development now, is blurring the distinction between mobile and web and desktop applications, because you build applications, JavaScript applications, for browsers. The apps look and behave as if they were real-time, interactive, in-memory mobile apps. What that means is that they download fresh content throughout a browsing session, progressively. I'm putting that term in air quotes because that's where the progressive web app comes in. And they don't require the end-user to visit an app store or download software. They don't require any special capabilities in terms of synchronizing data from servers to run in memory natively inside of web-accessible containers that are local to the browser. They just feel mobile even though they, excuse me, they may be running on a standard desktop with narrowband connectivity and so forth. So they scream, and they scream in the context of a standard JavaScript Ajax browser session. >> So when we think about this, jeez Jim, it almost sounds like client-side Java, but I think we're talking about something, as you said, that evolves as the customer uses it, and there's a lot of techniques and approaches that we've been using to do some of those things. But George Gilbert, the reason I bring up the notion of client-side Java is because we've seen other initiatives over the years try to do this. Now, partly they failed because, David Floyer, they focused on too much and tried to standardize or presume that everything required a common approach, and we know that that's always going to fail. But what are some of the other things that we need to think about as we think about ways of creating derivative use out of software or digital assets? >> Okay, so. I come at it from two angles. And as Jim pointed out, there's been a Cambrian explosion of creativity and innovation, frankly, on client-side development and server-side development. But if you look at how we're going to recombine our application assets, we tried 20 years ago with EAI, but that was, it's sort of like MuleSoft but only for on-prem apps. And it didn't work because every app was bespoke essentially-- >> Well, it worked for point-to-point classes of applications. >> Yeah, but it required bespoke development for every-- >> Peter: Correct. >> Every instance, because the apps were so customized. >> Peter: And the interfaces were so customized. >> Yes. At the same time we were trying to build higher-level application development capabilities on desktop productivity tools with macros and then scripting languages, cross-application, and visual development, or using applications as visual development building blocks. Now, you put those two things together and you have the ability to work with user interfaces by building on, I'm sorry, to work with applications that have user interfaces, and you have the functionality that's in the richer enterprise applications, and now we have the technology to say let's program by example on essentially a concrete use case and a concrete workflow. And then you go back in and you progressively generalize it so it can handle more exception conditions and edge conditions. In other words, you start with... it's like you start with the concrete and you get progressively more abstract. >> Peter: You start with the work that the application performs. >> Yeah. >> And not knowledge of the application itself. >> Yes. 
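To make the progressive web app behavior Jim describes above a bit more concrete, here is a minimal sketch in TypeScript of the usual pattern: the page registers a service worker, and the worker caches the app shell so the UI appears instantly while fresh content keeps arriving progressively. The file names and the cached asset list are illustrative assumptions, not anything referenced in the discussion.

```typescript
// register-sw.ts -- runs in the page; registration is async and never blocks first paint.
export function registerServiceWorker(): void {
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker
      .register('/service-worker.js') // assumed file name, built from service-worker.ts below
      .then(reg => console.log('Service worker active, scope:', reg.scope))
      .catch(err => console.error('Service worker registration failed:', err));
  }
}

// service-worker.ts -- compiled to /service-worker.js; caches the app shell.
const CACHE = 'app-shell-v1';
const SHELL = ['/', '/index.html', '/main.js', '/styles.css']; // illustrative asset list

self.addEventListener('install', (event: any) => {
  // Pre-cache the shell so the app loads instantly, even on a flaky connection.
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(SHELL)));
});

self.addEventListener('fetch', (event: any) => {
  // Serve cached shell assets immediately; fall back to the network for fresh content.
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});
```

No app store visit and no install step: the browser picks up the worker on a later navigation, which is what gives these apps their native, in-memory feel on both mobile and desktop.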
But the key thing is, as you said, recombining assets, because we're sort of marrying the best of the EAI world with the best of the visual client-side development world. Where, as Jim points out, machine learning is making it easier for the tools to stay up to date as the user interfaces change across releases. This means that, I wouldn't say this is as easy as spreadsheet development, it's just not. >> It's not like building spreadsheet macros but it's more along those lines. >> Yeah, but it's not as low-level as just building raw JavaScript because, and Jim's great example of JavaScript client-side frameworks. Look at our Gmail inbox application that millions of people use. That just downloads a new version whenever they want to drop it and they're just shipping JavaScript over to us. But the key thing, and this is, Peter, your point about digital business. By combining user interfaces, we can bridge applications that were silos, then we can automate the work the humans were doing to bridge those silos, and then we can reconstitute workflows in much more efficient-- >> Around the digital assets, which is kind of how business ultimately evolves. And that's a crucial element of this whole thing. So let's change direction a little bit because we're talking about, as Jim said, we've been talking about the fact that there are all these frameworks out there. There may be some consolidation on the horizon, we're researching that right now. Although there's not a lot of evidence that it's happening, there clearly is an enormous number of digital assets that are in place inside these web-based applications, whether it be relative to mobile or something else. And we want to find derivative use of them, or we want to create derivative use out of them, and there are some new tools that allow us to do that in a relatively simple, straightforward way, like RPA, and there are certainly others. But that's not where this ends up. We know that this is increasingly going to be a target for AI, what we've been calling augmented programming, and the ability to use machine learning and related types of technologies to be able to reveal, make transparent, gain visibility into patterns within applications and within the use of data, and then have that become a crucial feature of the development process. And increasingly, even potentially, to start actually creating code automatically based on very clear guidance about what work needs to be performed. Jim, what's happening in that world right now? >> Oh, let's see. So basically, I think what's going to happen over time is that more of the development cycle for web applications will incorporate not just the derivative assets but the AI to be able to decompose existing UI elements and recombine them. Enable flexible and automated recombination in various ways, but also enable greater tuning of the UI in an automated fashion through A/B testing that's in line with the development cycle, based on metrics that AI is able to sift through, in terms of... different UI designs can be put out into production applications in real time and then really tested with different categories of users, and then the best-suited or best-fit design chosen based on things like reducing user abandonment rates and speeding up access to commonly required capabilities and so forth. The metrics can be rolled into the automation process to automatically select the best-fit UI design that had been developed through automated means. 
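As a toy illustration of the in-line experimentation Jim lays out, the TypeScript sketch below picks between two UI designs using the conversion metrics that have streamed back so far, while still showing alternatives to a small slice of traffic so the comparison stays live. The variant names, the ten percent exploration rate, and the single conversion metric are assumptions for illustration; a production system would feed far richer metrics into this selection step.

```typescript
// variant-selector.ts -- epsilon-greedy selection of a UI variant from in-line metrics.
interface VariantStats { shows: number; conversions: number; }

const stats: Record<string, VariantStats> = {
  'checkout-ui-a': { shows: 0, conversions: 0 },
  'checkout-ui-b': { shows: 0, conversions: 0 },
};

const EPSILON = 0.1; // fraction of traffic still used to explore alternative designs

export function chooseVariant(): string {
  const names = Object.keys(stats);
  if (Math.random() < EPSILON) {
    // Exploration: occasionally show a random design so its metrics stay fresh.
    return names[Math.floor(Math.random() * names.length)];
  }
  // Exploitation: show the design with the best observed conversion rate so far.
  return names.reduce((best, name) => (rate(name) > rate(best) ? name : best));
}

export function recordShow(name: string): void { stats[name].shows++; }
export function recordConversion(name: string): void { stats[name].conversions++; }

function rate(name: string): number {
  const s = stats[name];
  return s.shows === 0 ? 0 : s.conversions / s.shows;
}
```

The same loop is what lets the infrastructure, rather than the developer, end up promoting the best-fit design.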
In other words, this real-world experimentation of the UI has been going on for quite some time in many enterprises, and increasingly it involves data scientists who are managing the predictive models to sort of very much drive the whole promotion process of promoting the best-fit design to production status. I think this will accelerate. We'll take more of these in-line metrics on UI and then bring them, I believe, into more RPA-style environments, so the rest of us building out these front ends are automating more of our transactions, and many more of the UIs can take advantage of the fact that we'll let the infrastructure choose the best fit of the designs for us without us having to worry about doing A/B testing and all that stuff. The cloud will handle it. >> So it's a big vision. This notion of, even eventually through more concrete, standard, well-understood processes, applying some of these AI/ML technologies to being able to choose options for the developer and even automate some elements of those options based on policy and rules. Neil Raden, again, we've been looking at similar types of things for years. How's that worked in the past, and let's talk a bit about what needs to happen now to make sure that if it's going to work, it's going to work this time. >> Well, it really hasn't worked very well. And the reason it hasn't worked very well is because no one has figured out a representational framework to really capture all the important information about these objects. It's just too hard to find them. Everybody knows that when you develop software, 80% of it is grunt work. It's just junk. You know, it's taking out the trash and it's setting things up and whatever. And the real creative stuff is a very small part of it. So if you could alleviate the developer from having to do all that junk by just picking up pieces of code that have already been written and tested, that would be big. But the idea of this has been overwhelmed by the scale and the complexity. And people have tried to create libraries like JavaBeans and object-oriented programming and that sort of thing. They've tried to create catalogs of these things. They've used relational databases, doesn't work. My feeling, and I hate to use the word because it always puts people to sleep, is some kind of ontology that's deep enough and rich enough to really do this. >> Oh, hold on Neil, I'm feeling... (laughs) >> Yeah. Well, I mean, what good is it, I mean go to Git, right. You can find a thousand things but you don't know which one is really going to work for you because it's not rich enough, it doesn't have enough information. It needs to have quality metrics. It needs to have reviews by people who have used it, and whatever. So that's where I think we run into trouble. >> Yeah, I know. >> As far as robots, yeah? >> Go ahead. >> As far as robots writing code, you're going to have the same problem. >> No, well here's where I think it's different this time, and I want to throw it out to you guys and see if it's accurate, and we'll get to the action items. Here's where I think it's different. In the past, partly perhaps because it's where developers were most fascinated, we tried to create object-oriented databases and object-oriented representations of data, using object-oriented models as a way of thinking about it. And object-oriented code and object-oriented this, and a lot of it was relatively low in the stack. 
And we tried to create everything from scratch, and it turned out that whenever we did that, it was almost like CASE from many years ago. You create it in the tool and then you maintain it out of the tool and you lose all organization of how it worked. What we're talking about here, and the reason why I think this is different, I think Neil is absolutely right. It's because we're focusing our attention on the assets within an application that create the actual business value. What does the application do, and try to encapsulate those and render those as things that are reusable without necessarily doing an enormous amount of work on the back-end. Now, we have to be worried about the back-end. It's not going to do any good to do a whole bunch of RPA or related types of stuff on the front-end that kicks off an enormous number of transactions that goes after a little server that's 15 years old. That's historically only handled a few transactions a minute. So we have to be very careful about how we do this. But nonetheless, by focusing more attention on what is generating value in the business, namely the actions that the application delivers as opposed to knowledge of the application itself, namely how it does it, then I think that we're constraining the problem pretty dramatically, subject to the realities of what it means to actually be able to maintain and scale applications that may be asked to do more work. What do you guys think about that? >> Now Peter, let me say one more thing about this, about robots. I think you're all a lot more sanguine about AI and robots doing these kinds of things. I'm not. Let me read to you three pickup lines that a deep neural network developed after being trained to do pickup lines. You must be a tringle? 'Cause you're the only thing here. Hey baby, you're to be a key? Because I can bear your toot? Now, what kind of code would-- >> Well look, the problems... look, we go back 50 years to ELIZA and the whole notion of whatever it was. The interactive psychology. Look, let's be honest about this. Neil, you're making a great point. I don't know that any of us are more or less sanguine, and that probably is a good topic for a future action item. What are the practical limits of AI and how that's going to change over time. But let's be relatively simple here. The good news about applying AI inside IT problems is that you're starting with engineered systems, with engineered data forms, and engineered data types, and you're working with engineers, and a lot of that stuff is relatively well structured. Certainly more structured than the outside world and it starts with digital assets. That's why AI for IT operations management is more likely. That's why AI for application programming is more likely to work, as opposed to AI to do pickup lines, which is, as you said, semantically all over the place. There are very, very few people that are going to conform to a set of conventions for... Well, I want to move away from the concept of pickup lines and set conventions for other social interactions that are very, very complex. We don't look at a face and get excited or not in a way that corresponds to an obvious well-understood semantic problem. 
It simply has to be proven out in the applications or engagement, through people or not through people, with the real-world outcome, and then some outcomes, like the ones that Neil read off there in terms of those ridiculous pickup lines. Most of those kinds of automated solutions won't make a freaking bit of sense because you need humans with their brains. >> Yeah, you need human engagement. So coming back to this key point, the constraint that we're putting on this right now, and the reason why, certainly, perhaps I'm a little bit more ebullient than you might be, Neil. But I want to be careful about this because I also have some pretty strong feelings about what the limits of AI are, regardless of what Elon Musk says. That at the end of the day, we're talking about digital objects, not real objects, that are engineered, not, haven't evolved over a few billion years, to deliver certain outputs and data that's been tested and relatively well verified. As opposed to having an unlimited, at least from a human experience standpoint, potential set of outcomes. So in that small world, and certainly the infrastructure universe is part of that, and what we're saying is increasingly the application development universe is going to be part of that as part of the digital business transformation, I think it's fair to say that we're going to start seeing AI, machine learning and some of these other things being applied to that realm with some degree of success. But, something to watch for. All right, so let's do action item. David Floyer, why don't we start with you. Action item. >> In addressing this, I think that the key in terms of business focus is first of all mobile, you have to design things for mobile. So any use of any particular platform or particular set of tools has to lead to mobile being first. And mobiles are changing rapidly with the amount of data that's being generated on the mobile itself, around the mobile. So that's the first point I would make from a business perspective. And the second is that from a business perspective, one of the key things is that you can reduce cost. Automation must be a key element of this, and therefore designing things that will take out tasks and remove tasks, make things more efficient, is going to be an incredibly important part of this. >> And reduce errors. >> And reduce errors, absolutely. Probably most important is reduce errors. It's to take those out of the chain and where you can speed things up by removing human intervention and human tasks and raising what humans are doing to a higher level. >> Other things. George Gilbert, action item. >> Okay, so. Really quickly on David's point that we have many more application forms and expressions that we have to present, like mobile first. And going back to using RPA as an example. The UiPath product that we've been working with, the core of its capability is to be able to identify specific UI elements in a very complex presentation, whether it's on a web browser or whether it's on a native app on your desktop or whether it's mobile. I don't know how complete they are on mobile because I'm not sure if they did that first, but that core capability to identify, in a complex, essentially collection and hierarchy of UI elements, that's what makes it powerful. Now on the AI part, I don't think it's as easy as pointing it at one app and then another and say go make them talk. 
It's more like helping you on the parts where they might be a little ambiguous, like if pieces move around from release to release, things like that. So my action item is to say start prototyping with the RPA tools because they're probably robust enough to start integrating your enterprise apps. And the only big new wrinkle that's come out in the last several weeks that is now in everyone's consciousness is the MuleSoft acquisition by Salesforce, because that's going back to the EAI model. And we will see more app-to-app integration at the cloud level that's now possible. >> Neil Raden, action item. >> Well, you know, Mark Twain said, there's only two kinds of people in the world. The kind who think there are only two kinds of people in the world and the ones who know better. I'm going to deviate from that a little and say that there's really two kinds of software developers in the world. They're the true computer scientists who want to write great code. It's elegant, it's maintainable, it adheres to all the rules, it's creative. And then there's an army of people who are just trying to get something done. So the boss comes to you and says we've got to get a new website up apologizing for selling the data of 50 million of our customers and you need to do it in three days. Now, those are the kind of people who need access to things that can be reused. And I think there's a huge market for that, as well as all these other software development robots, so to speak. >> Jim Kobielus, action item. >> Yeah, for simplifying web application development, I think that developers need to distinguish between back-end and front-end frameworks. There's a lot of convergence around the back-end framework. Specifically Node.js. So you can basically decouple the decision in terms of front-end frameworks from that, and you need to, right upfront, make sure that you have a back-end that supports many front ends, because there are many front ends in the world. Secondly, the front ends themselves seem to be moving towards React and Angular and Vue as being the predominant ones. You'll find more programmers who are familiar with those. And then thirdly, as you move towards consolidation onto fewer frameworks on the front-end, move towards low-code tools that allow you, just with the push of a button, you know, visual development, being able to deploy the built-out UI to a full range of mobile devices and web applications. And to close my action item... I'll second what David said. Move toward a mobile-first development approach for web applications with a focus on progressive web applications that can run on mobiles and others. Where they give a mobile experience. With intermittent connectivity, with push notifications, with a real-time, in-memory, fast experience. Move towards a mobile-first development paradigm for all of your browser-facing applications, and that really is the simplification strategy you can and should pursue right now on the development side because web apps are so important, you need a strategy. >> Yeah, so mobile irrespective of the... irrespective of the underlying biology or what have you of the user. All right, so here's our action item. Our view on digital business is that a digital business uses data differently than a normal business. And a digital business transformation ultimately is about how do we increase our visibility into our data assets and find new ways of creating new types of value so that we can better compete in markets. 
Now, that includes data but it also includes application elements, which also are data. And we think increasingly enterprises must take a more planful and purposeful approach to identifying new ways of deriving additional streams of value out of application assets, especially web application assets. Now, this is a dream that's been put forward for a number of years, and sometimes it's worked better than others. But in today's world we see a number of technologies emerging that are likely, at least in this more constrained world, to present a significant new set of avenues for creating new types of digital value. Specifically, tools like RPA, robotic process automation, that are looking at the outcomes of an application and allow programmers to use a by-example approach to start identifying what are the UI elements, what those UI elements do, how they could be combined, so that they can be composed into new things and thereby provide a new application approach, a new application integration approach, which is not at the data and not at the code but more at the work that a human being would naturally do. These allow for greater scale and greater automation and a number of other benefits. The reality, though, is that you also have to be very cognizant as you do this, even though you can find these assets, find a new derivative form, and apply them very quickly to new potential business opportunities, that you have to know what's happening at the back-end as well. Whether it's how you go about creating the assets, with some of the front-end tooling, and being very cognizant of which front ends are going to be better, or not, at creating these more reusable assets. Or whether you're talking about relatively mundane things like how a database serializes access to data and will fall over because you've created an automated front-end that's just throwing a lot of transactions at it. The reality is there's always going to be complexity. We're not going to see all the problems being solved, but some of the new tools allow us to focus more attention on where the real business value is created by apps, find ways to reuse that, and apply it, and bring it into a digital business transformation approach. All right. Once again. George Gilbert, David Floyer, here in the studio. Neil Raden, Jim Kobielus, remote. You've been watching Wikibon Action Item. Until next time, thanks for joining us. (electronic music)
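One way to picture Jim's closing action item, a single back-end that many front ends can share, is a plain JSON-over-HTTP service: build it once in Node.js and React, Angular, Vue, or a native mobile shell can all consume it. The sketch below assumes Express and an invented /api/orders resource; neither comes from the discussion.

```typescript
// api-server.ts -- a minimal framework-agnostic JSON back-end (assumed Express setup).
import express from 'express';

const app = express();
app.use(express.json());

// Any front-end framework can call this endpoint; nothing here is tied to the UI layer.
app.get('/api/orders', (_req, res) => {
  res.json([
    { id: 1, status: 'shipped' },
    { id: 2, status: 'processing' },
  ]);
});

app.post('/api/orders', (req, res) => {
  // Echo the created order back; a real service would persist it first.
  res.status(201).json({ id: Date.now(), ...req.body });
});

app.listen(3000, () => console.log('API listening on http://localhost:3000'));
```

Because nothing in the service knows which framework renders the data, the front-end choice can keep churning without touching the back-end.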
SUMMARY :
Here in the studio with me are and get software to do the things we want to do and the range of them continues to grow. and convergence on the actual frameworks and that's the beauty RPA or where it's going. that can scale the amount of work and all of the complexity that mobile brings? but also in the standard web application development world. and we know that that's always going to fail. and innovation on frankly on client-side development classes of applications. and you have the ability to work with user interfaces that the application performs. But the key thing is, as you said, recombining assets but it's more along those lines. and they're just shipping JavaScript over to us. and the ability to use machine learning and many more of the UIs can't take advantage of the fact some of these AIML technologies to and rich enough to really do this. Oh, hold on Neil, I'm feeling... I mean go to Git, right. you're going to have the same problem. and the reason why I think this is different, Let me read to you have three pickup lines and how that's going to change over time. and that's not the, you can't encode and the reason why certainly, one of the key things is that you can reduce cost. and where you can speed things up George Gilbert, action item. the core of its capability is to So the boss comes to you and says and that really is the simplification strategy that are looking at the outcomes of an application
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Mark Twain | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
March 30, 2018 | DATE | 0.99+ |
80% | QUANTITY | 0.99+ |
50 million | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Node.js | TITLE | 0.99+ |
Java | TITLE | 0.99+ |
Salesforce | ORGANIZATION | 0.99+ |
two kinds | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
first point | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Angular | TITLE | 0.99+ |
JavaScript | TITLE | 0.99+ |
Elon Musk | PERSON | 0.99+ |
MuleSoft | ORGANIZATION | 0.99+ |
two angles | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Gmail | TITLE | 0.98+ |
millions of people | QUANTITY | 0.98+ |
two things | QUANTITY | 0.98+ |
two extremes | QUANTITY | 0.98+ |
three days | QUANTITY | 0.98+ |
dozens | QUANTITY | 0.98+ |
one question | QUANTITY | 0.98+ |
React | TITLE | 0.98+ |
one app | QUANTITY | 0.97+ |
Ember | TITLE | 0.97+ |
Vue | TITLE | 0.97+ |
first | QUANTITY | 0.96+ |
20 years ago | DATE | 0.96+ |
today | DATE | 0.96+ |
this week | DATE | 0.95+ |
Secondly | QUANTITY | 0.94+ |
Ajax | TITLE | 0.94+ |
JavaBeans | TITLE | 0.93+ |
RPA | TITLE | 0.91+ |
Wikibon | TITLE | 0.91+ |
thirdly | QUANTITY | 0.9+ |
theCUBE | ORGANIZATION | 0.88+ |
CASE | TITLE | 0.88+ |
Stephen Fluin, Google | Node Summit 2017
>> Hey, welcome back everybody. Jeff Frick with theCUBE. We're at Node Summit 2017, downtown San Francisco Mission Bay Conference Center, 800 people, a lot of developers, pretty much all developers talking about what's going on with Node, the Node community and some tangential things that are involved in Node, as well. We're excited to have our next guest on, he's Stephen Fluin, he's a developer advocate for Google, Stephen, welcome. >> Thank you so much for having me. >> Absolutely. First off, just kind of impressions of the show. You said you were here last year, the community's obviously very active, growing, I don't know that they're going to be able to come back to this space for very much longer. >> I know. >> What do you think? >> Probably not, I love how the community's continuing to grow and evolve, right? This technology is moving faster than almost any technology I've seen before. I call it a combinatorial explosion of complexity because there's always new tools coming out, new ways of thinking, and that's really rich and a great way to have a lot of innovation happening. >> Right, there was a great, one of the early ones this morning, the speaker said they had one Node app a year ago, and now they have 15 in production, 22 almost ready and 75 other internal projects, in one year! >> Yeah, it's definitely crazy. >> So why, I mean there's lots of things as to why Node's successful, but from your perspective, why is it growing so fast? >> I think it's fast because it's the first time that we've had a real extended eco-system where a lot of developers are coming together, bringing their own perspectives, and it's a very collaborative environment. Everyone's trying to help each other. >> So you just got off stage, you had your own session >> I did. >> But Angular on the Server. >> Yes. >> Even for the folks that missed it, kind of what was the main theme of your talk? >> Sure, sure, so I'm on the Angular Team, which is a client-side framework for building applications. We've really been focused a lot on really great web experiences for the client. How do we run code as close as possible to the browser so that you get these very rich, engaging applications. >> Right. >> But one of the things that we've been focused on and has been one of our design goals since the beginning is how do we write JavaScript and TypeScript in a way that you can run it on the client or the server? And so just last week we announced new support has landed in our CLI that makes this process easier so that you can run your applications on the server and then bootstrap a client-side application on top of that. >> Why is that important? >> It's important for a few different reasons. You want to run applications sometimes on the server, first, because there's a lot of computers that are processing the web and browsing the web across the internet >> Right. >> so there's search engines, there's things like Facebook and Twitter, which are scraping websites looking for metadata, looking for thumbnails and other sorts of content, but then also there's a human aspect where by rendering things on the server, you can actually have an increased perception of your load times, so things look like they're loading faster while you can still then, on top of that, deliver very rich, engaging client-side experiences with animations and transitions and all those sorts of things. >> That's interesting. 
Before we got started you had talked about thinking of the world in terms of the user experience at the end of the line versus thinking of it from the server. I thought you were going down kind of the server optimization, power, when you say think about the server, those types of things, but you're talking about a whole different set of reasons to think about the server >> Yeah, absolutely. >> and the way that that connects to the rest of the web. >> Yes, because there's a lot of consumers of content that we don't necessarily think about when we're building applications >> Right, right. >> we normally think about the human side of things but having an application, whether it's a single application or whatever, that is also well optimized for servers can be very helpful. >> Yeah, that's pretty >> Servers as the consumers. >> servers as the consumers, which I guess makes sense, right? Because Google's indexes and all the other ones are crawling servers >> Absolutely. >> they're not scraping web pages, hopefully, I assume, I assume we're past that stage. Alright, good, so what else is going on, in terms of the Angular community, that you're working on next? >> Sure, sure. I think we're really just focused on continuing to make things easier, smaller and faster to use, so those are kind of the three focus points we've got as we continue to invest and evolve in the platforms. So, how do we make it easier for new developers to come into the kind of Angular platform and take advantage of all we have to offer? How do we make smaller bundles so that the experience is faster for users? >> Right, right. >> And then how do we make all these things understandable and digestible for developers? >> It's like the bionic man never went away, right? It's still better, stronger, faster. >> Exactly. >> Alright, Steve, thanks for taking a few minutes out of your day and sharing your story with us. >> Thanks so much for having me. >> Absolutely, Stephen Fluin, from Google. I'm Jeff Frick, you're watching theCUBE. Thanks for watching, we'll catch you next time. Take care.
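For readers who want to see roughly what the server-side rendering Stephen describes looks like in code, here is a hedged sketch that uses Angular's platform-server package behind an Express host: the server renders the requested route to HTML, and the browser bundle then bootstraps on top of that markup. The AppServerModule import and the file paths are assumptions; the CLI support he mentions wires up the equivalent pieces for you.

```typescript
// server.ts -- a rough sketch of Angular server-side rendering; paths and module names are assumed.
import 'zone.js/dist/zone-node';
import { renderModule } from '@angular/platform-server';
import express from 'express';
import { readFileSync } from 'fs';
import { AppServerModule } from './app/app.server.module'; // assumed server module for the app

const indexHtml = readFileSync('dist/browser/index.html', 'utf8'); // template from the browser build
const app = express();

// Serve the compiled client-side bundles as static assets.
app.use(express.static('dist/browser', { index: false }));

app.get('*', async (req, res) => {
  // Render the requested URL to HTML on the server; the client app bootstraps on top of it.
  const html = await renderModule(AppServerModule, {
    document: indexHtml,
    url: req.url,
  });
  res.send(html);
});

app.listen(4000, () => console.log('Server-side rendering on http://localhost:4000'));
```

Crawlers and scrapers get fully formed HTML, and human visitors see meaningful content before the client bundle finishes loading, which is the perceived-speed benefit he calls out.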
SUMMARY :
the Node community and some tangental things the community's obviously very active, growing, Probably not, I love how the community's and it's a very collaborative environment. so that you get these very rich, engaging applications. so that you can run your applications on the server that are processing the web and browsing the web you can actually have an increased perception kind of the server optimization, power, and the way that the human side of things but having an application, in terms of the Angular community, so that the experience is faster for users? It's like the bionic men never went away, right? and sharing your story with us. Thanks for watching, we'll catch you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Steve | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Stephen Fluin | PERSON | 0.99+ |
Stephen | PERSON | 0.99+ |
last week | DATE | 0.99+ |
15 | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
22 | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Node | TITLE | 0.99+ |
800 people | QUANTITY | 0.99+ |
one year | QUANTITY | 0.99+ |
a year ago | DATE | 0.98+ |
first time | QUANTITY | 0.97+ |
First | QUANTITY | 0.97+ |
one | QUANTITY | 0.96+ |
ORGANIZATION | 0.95+ | |
single application | QUANTITY | 0.95+ |
Angular | ORGANIZATION | 0.94+ |
first | QUANTITY | 0.94+ |
Node Summit 2017 | EVENT | 0.94+ |
ORGANIZATION | 0.94+ | |
three focus points | QUANTITY | 0.93+ |
San Francisco Mission Bay Conference Center | LOCATION | 0.93+ |
this morning | DATE | 0.92+ |
75 other internal projects | QUANTITY | 0.91+ |
Angular | TITLE | 0.79+ |
theCUBE | ORGANIZATION | 0.75+ |
JavaScript | TITLE | 0.73+ |
lot of computers | QUANTITY | 0.72+ |
TypeScript | OTHER | 0.64+ |
Angular Team | ORGANIZATION | 0.61+ |
Node | ORGANIZATION | 0.53+ |
CLI | TITLE | 0.45+ |
Siddhartha Agarwal, Oracle Cloud Platform - Oracle OpenWorld - #oow16 - #theCUBE
>> Announcer: Live from San Francisco it's The Cube covering Oracle OpenWorld 2016 brought to you by Oracle. Now here's your host, John Furrier and Peter Burris. >> Hey welcome back everyone. We are live in San Francisco at Oracle OpenWorld 2016. This is SiliconANGLE, the key of our flagship program. We go out to the events, extract a signal from the noise. I'm John Furrier, Co-CEO of SiliconANGLE with Peter Burris, head of Research at SiliconANGLE as well as the General Manager of Wikibon Research, our next guest is Siddhartha Agarwal, Vice-President of Product Management and Strategy of Oracle Cloud Platform. Welcome back to the Cube, good to see you. >> Yes, hi John. Great to be here. >> So I've seen a lot of great stuff. The core messaging from the corporate headquarters Cloud Cloud Cloud, but there's so much stuff going on in Oracle on all the applications. We've had many great conversations around the different, kind of, how the price are all fitting into the cloud model. But Peter and I were talking yesterday in our wrap-up about, we're the developers. >> Siddhartha: Yeah. >> Now and someone made a joke, oh they're at JavaOne, which is great. A lot of them are at JavaOne, but there's a huge developer opportunity within the Oracle core ecosystem because Cloud is very developer friendly. Devops, agile, cloud-native environments really cater to, really, software developers. >> Yeah, absolutely and that's a big focus area for us because we want to get developers excited about the ability to build the next generation of applications on the Oracle Cloud. Cloud-native applications, microservices-based applications and having that environment be open with choice of programming languages, open in terms of choice of which databases they want, not just Oracle database. NoSQL, MySQL, other databases and then choice of the computeship that you're using. Containers, bare metal, virtual environments and an open standard. So it's giving a very open, modern easy platform for developers so that they'll build on our platform. >> You know, one of the things that we always talk about at events is when we talk to companies really trying to win the hearts and minds of developers. You always hear, we're going to win the developers. They're like an object, like you don't really win developers. Developers are very fickle but very loyal if you can align with what they're trying to do. >> Siddartha: Yeah. >> And they'll reject hardcore tactics of selling and lock-in so that's a concern. It's a psychology of the developers. They want cool but they want relevance and they want to align with their goals. How do you see that 'cause I think Oracle is a great ecosystem for a developer. How do you manage that psychology 'cause Oracle has traditionally been an enterprise software company, so software's great but... Amazon has a good lead on the developers right now. You know, look at the end of the day you have to get developers realizing that they can build excellent, fun creative applications to create differentiation for their organizations, right, and do it fast with cool technologies. So we're giving them, for example, not just the ability to build with Java EE but now they can build in Java SE with Tomcat, they can build with Node, they can build with PHP and soon they'll be able to do it with Ruby and Daikon. And we're giving that in a container-based platform where they don't necessarily have to manage the container. 
They get automatic scalability, they get backup, patching, all of that stuff taken care of for them. Also, you know, being able to build rich mobile applications, that's really important for them. So how they can build mobile applications using Ionic, Angular, whatever JavaScript framework they want, but on the back end they have to be able to connect these mobile apps to the enterprise. They have to get location-based insight into where the person is who's using the mobile app. They need to be able to get insight into how the mobile app's been used, and you've heard Larry talk about the Chatbot platform, right? How do you engage with customers in a different way through Facebook Messenger? So those are some of the new technologies that we're making very easily available, and then at the end of the day we're giving them choice of databases, so it's not just Oracle database that you get up and running in the Cloud and it's provisioned, managed, automated for you. But now you can ask for NoSQL databases. You can have Cassandra, MongoDB run on our IaaS, and MySQL. We just announced MySQL enterprise edition available as a service in the Public Cloud. >> Yeah, one of the things that developers love, you know, being an ex-developer myself in the old days, is, and we've talked to them... They're very loyal but they're very pragmatic and they're engineers, basically they're software engineers. They love tools, great tools that work, they want support, but they want distribution of the product that they create, they're creators, so distribution ultimately means monetization, but developers don't harp too much on money-making although they'd want to make money. They don't want to be abandoned on those three areas. They don't want to be disloyal. They want to be loyal, they want support and they want to have distribution. What does Oracle bring to the table to address those three things? >> Yeah, there are a few ways in which we're thinking of helping developers with distribution. For example, one is, developers are building applications where they are exposing their APIs, and they want to be able to monetize those APIs because they are exposing business processes and logic from their organization as APIs, so we're giving them the ability to have portals where they can expose their APIs and monetize the APIs. The other thing is we've also got the Oracle Cloud Marketplace where developers can put their stuff on Oracle Cloud Marketplace so others can be leveraging that content and they're getting paid for that. >> How does that work? Do they plug it into the PaaS layer? How does the marketplace fit in if I'm a developer? >> Sure, the marketplace is a catalog, right, and you can put your stuff on the catalog. Then when you want to drag and drop something, you drop it onto Oracle PaaS or onto Oracle IaaS. So you're taking the application that you've built and then you've got it to have something that-- >> John: So composing a solution on the fly for your customer? >> Well, yeah exactly, just pulling a pre-composed solution that a developer had built and being able to drop it onto the Oracle PaaS and IaaS platform. >> So the developer gets a customer and they get paid for that through the catalog? >> Yes, yes, yes, and it's also better for customers, right? They're getting all sorts of capability pre-built for them, available for them, ready for them. >> So one of the things that's come up, and we've heard it, it wasn't really amplified too much but we saw it and it got some play. 
In developer communities, the messaging on the containers and microservices, as you mentioned earlier. Huge deal right now. They love that ability to have the containerization. We even heard containers driving down into the IaaS area, so with the network virtualization stuff going on, so how is that going to help developers? What confidence will you share with developers that you guys are backing the container standards-- >> Siddhartha: Absolutely. >> Driving that, participating in that. >> Well I think there are a couple of things. First of all, containers are not that easy in terms of when you have to orchestrate the containers, you have to register these containers. Today the technology for containers to be managed, the orchestration technology, is things like Swarm, Kubernetes, Mesos, et cetera. They're changing very rapidly, and then in order to use these technologies, you have to have a scheduler and things like that. So there's a stack of three or four relatively recent technologies, changing at a relatively fast pace, and that creates a very unstable stack for someone who creates production-level stuff on them, right? The Docker container that they built actually runs on this slightly shaky stack. >> Like Kubernetes or whatnot. >> Yeah, yeah, and so what we've done is we're saying, look, we're giving you container as a service, so if you've already created Docker containers, you can now bring those containers as-is to the Oracle Public Cloud. You can take this application, these 20 containers, and then from that point on we've taken care of putting the containers out, scaling the containers up, registering the containers, managing the containers for you, so you're just able to use that environment as a developer. And if you want to use the IaaS, that's the IaaS. If you want to use the PaaS, then the PHP, Node, Java SE capability that I told you about is also containerized. You're just not exposed to Docker there. >> Actually, I know he's got a question, but I want to just point out Juan Loaiza, who was on Monday, he pointed out the JSON aspect of the database, which I thought was pretty compelling. From a developer's standpoint, JSON's really popular for managing APIs. So having that in the database is really kind of a good thing, so people should check out that interview. >> Very quickly, one of the historical norms for developers is you start with a data model and then you take various types of tools and you build code that operates against that basic data model. And Oracle obviously has, that's a big part of what your business has historically been. As you move forward, as we start looking at big data and the enormous investment that businesses are making in trying to understand how to utilize that technology, it's not going as well as a lot of folks might've thought it would, in part because the developer community hasn't fully engaged in how to generate value out of those basic stacks of technology. How is Oracle, who has obviously a leadership position in database and is now re-committing itself to some of these new big data technologies, how're you going to differentially, or do you anticipate differentially presenting that to developers so they can do more with big data-like technologies? >> There are a few things that we've done, wonderful question. First of all, just creating the Hadoop cluster, managing the Hadoop cluster, scaling out the Hadoop cluster requires a lot of effort. 
So we're giving you big data as a service where you don't have to worry about that underlying infrastructure. The next problem is how do you get data into the data lake, and the data is being generated at tremendous volume. You think about internet of things, you think about devices, et cetera. They're generating data at tremendous volume. We're giving you the ability to actually be able to use a streaming, Kafka, Spark-based service to be able to bring data in, or to use Oracle Data Integration to be able to stream data in from, let's say, something happening on the Oracle database into your big data hub. So it's giving you very easy ways to get your data into the data hub and being able to do that with HDFS, with Hive, whichever target system you want to use. Then on top of that data, the next challenge is what do you visualize, right? I mean, you've got all this data together but a very small percentage is actually giving you insight. So how do you look at this and find that needle in the haystack? So for that we've given you the ability to do analytics with the BI Cloud service to get insight into the data, where we're actually doing machine learning. And we're getting insight from the data and presenting the data sets that are the most relevant, the most insightful, by giving you some smart insights upfront and by giving you visualizations. So for example, you search for, in all these forms, what the users said as they entered the data. The best way to present that is by a tag cloud. So giving you visualizations that make sense, so you can do rich discovery and get rich insight from the BI Cloud service and the Data Visualization Cloud service. Lastly, if you have, let's say, five years of data on an air conditioner and the product manager's trying to get insight into that data, saying, hey, what should I fix so that that doesn't happen next time around. We're giving you the Big Data Discovery cloud service where you don't have to set up that data lab, you don't have to set up the models, et cetera. You could just say replicate two billion rows, we'll replicate it in the cloud for you within our data store and you can start getting insight from it. >> So how are developers going to start using these tools, 'cause it's clear that data scientists can use it, it's clear that people that have more of an analytics background can use it. How're developers going to start grabbing a lot of these capabilities, especially with machine learning and AI and some of the other things on the horizon? And how do you guys anticipate you're going to present this stuff to a developer community so that they can, again, start creating more value for the business? Is that something that's on the horizon? >> You know it's here, it's not on the horizon, it's here. We're helping developers, for example, build a microservice that wants to get data from a treadmill that one of the customers is running on, right? We're trying to get data from one of the customers on the treadmills. Well, the developer now creates a microservice where the data from the treadmill has been ingested into a data lake. We've made it very easy for them to ingest into the data lake, and then that microservice will be able to very easily access the data, expose only the portion of the data that's interesting. For example, the developer wants to create a very rich mobile app that presents the customer who's running with all the insight into the average daily calorie burn and what they're doing, et cetera. 
Now they can take that data, do analytics on it and very easily be able to present it in the mobile platform without having to work through all the plumbing of the data lake, of the ingestion, of the visualization, of the mobile piece, of the integration of the backend system. All of that is being provided so developers can really plug and play and have fun. >> Yeah, they want that fun. Building is the fun part, they want to have fun-- >> They want relevance, great tools and not have to worry about the infrastructure. >> John: They want distribution. They want their work to be showcased. >> Peter: That's what I mean about relevance, that's really about relevance. >> They want to work on the cool stuff and again-- >> And be relevant. >> Developers are starting to have what I call the nightclub effect. Coding is so much fun now, there's new stuff that comes out. They want to hack with the new code. They want to play with something that fits the form factor of either a device or whatnot. >> Yeah, and one other thing that we've done is, we've made the... All developers today are doing continuous delivery because they need to release code really fast, right. It's no longer about months, it's about days or hours that they have to release. So we're giving a complete continuous delivery framework where people can leverage Git for their code repository, they can use Maven for continuous integration, they can use Puppet and Chef for scripting. They can manage the backlog of their tasks. They can do code reviews, et cetera, all done in the cloud for them. >> So lifestyles, hospitality. Taking care of developers, that's what you got to do. >> Exactly, that's a great analogy. You know, all these things, they have to have these tools that they put together, and what we're doing is we're saying, you don't have to worry about putting together those tools, just use them. But if you have some, you can plug in. >> Well we, Wikibon and SiliconANGLE, believe that there's going to be a tsunami of enterprise developers with the consumerization of IT, now meaning the Cloud, that you're going to see enterprise development, just a boom in development. You're going to see a lot more activity. Now I know it's different in development, it's not just pure cloud-native, it's some Legacy, but it's going to be a boom, so we think you guys are very set up for that. Certainly with the products, so my final question for you Siddhartha is, what are your plans? I mean, sounds great. What're you going to do about it? Is there a venture happening? How're you guys going to develop this opportunity? What're you guys going to do? >> So the product sets are already there, but we're evolving those product sets at a significant pace. So first of all, you can go to cloud.oracle.com/tryit and try these cloud services and build the applications on it, that's there. We've got a portal called developer.oracle.com where you can get resources on, for example, I'm a JavaScript developer. What's everything that Oracle's doing to help JavaScript developers? I'm a MySQL developer. What's everyone doing to help with that? So they've got that. Then starting at the beginning of next year, we're going to roll out a set of workshops that happen in many cities around the world where we go work with developers, hands on, and get them inside an experience of how to build these rich, cloud-native, microservices-based applications. So those are some of the things, and then our advocacy program. We already have the ACE Program, the ACE Director Program. 
We already have the ACE Program, the ACE Director Program, and we're working with that program to really make it a very vibrant, energetic ecosystem that is helping build sample code and expert knowledge around how the Oracle environment can be used to build really cool microservices-based, cloud-native-- >> So you're investing, you're investing. >> Siddhartha: Oh, absolutely. >> Any big events, or just more little events? Any big events, any developer events you guys going to do? >> So we'll be doing these workshops, and we'll be sponsoring a bunch of non-Oracle developer events, and then we'll be launching a big developer event of our own. >> Great, so final question. What's in it for the developer? If I'm a developer, what's in it for me? Hey, I love Oracle, thanks for spending the money and investing in this. What's in it for me? Why, why should I give you a look? >> Because you can do it faster with higher quality. So that microservices application that I was talking about, if you went to any other cloud and tried to build that microservices-based application that got data from the treadmill into a data lake using IoT and the analytics integration with backend applications, it would've taken you a lot longer. You can get going in the language of your choice, using the database of your choice, using standards of your choice, and have no lock-in. You can take your data out, you can take your code out whenever you want. So do it faster with openness. >> Siddhartha, thanks for sharing that developer update. We were talking about it yesterday. Our prayers were answered. (laughing) You came on The Cube. We were like, where is the developer action? I mean, we see that at JavaOne, we love Java, certainly JavaScript is awesome, and a lot of good stuff going on. Thanks for sharing, and congratulations on the investments and on continuing to bring developer goodness out there. >> Thank you, John. >> This is The Cube, we're sharing that data with you, and we're going to bring more signal from the noise here after this short break. You're watching The Cube. (electronic beat)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Siddhartha Agarwal | PERSON | 0.99+ |
Siddhartha | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Cassandra | PERSON | 0.99+ |
SiliconANGLE | ORGANIZATION | 0.99+ |
20 containers | QUANTITY | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Java SE | TITLE | 0.99+ |
five years | QUANTITY | 0.99+ |
four | QUANTITY | 0.99+ |
Juan Loaiza | PERSON | 0.99+ |
Siddartha | PERSON | 0.99+ |
MySQL | TITLE | 0.99+ |
cloud.oracle.com/tryit | OTHER | 0.99+ |
Monday | DATE | 0.99+ |
Java EE | TITLE | 0.99+ |
Java | TITLE | 0.99+ |
yesterday | DATE | 0.99+ |
developer.oracle.com | OTHER | 0.99+ |
Wikibon Research | ORGANIZATION | 0.99+ |
three | QUANTITY | 0.99+ |
Larry | PERSON | 0.99+ |
JavaOne | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
JavaScript | TITLE | 0.99+ |
NoSQL | TITLE | 0.99+ |
Today | DATE | 0.98+ |
PHP | TITLE | 0.98+ |
JavaSE | TITLE | 0.98+ |
Chatbot | TITLE | 0.98+ |
JSON | TITLE | 0.97+ |
First | QUANTITY | 0.97+ |
Oracle OpenWorld 2016 | EVENT | 0.97+ |
Node | TITLE | 0.97+ |
IaaS | TITLE | 0.96+ |
Facebook Messenger | TITLE | 0.96+ |
two billing rows | QUANTITY | 0.96+ |
Git | TITLE | 0.96+ |
The Cube | TITLE | 0.95+ |
Wikibon | ORGANIZATION | 0.95+ |
three things | QUANTITY | 0.94+ |
three areas | QUANTITY | 0.93+ |
PaaS | TITLE | 0.92+ |
today | DATE | 0.9+ |
SiliconeANGLE | ORGANIZATION | 0.89+ |
MongoDB | TITLE | 0.87+ |
Puppet | TITLE | 0.86+ |
ACE Directive Program | TITLE | 0.85+ |
Ionic | TITLE | 0.84+ |
Oracle Cloud Platform | ORGANIZATION | 0.83+ |
Pat Casey | ServiceNow Knowledge15
>> Live from Las Vegas, Nevada, it's The Cube, covering Knowledge15, brought to you by ServiceNow. >> Okay, welcome back everyone. You are watching SiliconANGLE Wikibon's The Cube, our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, my co-host Dave Vellante with Wikibon. We're pleased to have Pat Casey, VP and General Manager of CreateNow platform development, early employee of ServiceNow, great perspective. We're going to get geeky here but also talk about some of the high-level stuff. Welcome back to The Cube. >> Thank you very much. >> So you've seen the evolution of ServiceNow from the early days to a public company, scaling, very cloud, I mean it's inside the tornado, to use that metaphor. It's been so successful. What do you feel, what are you feeling right now, and how much more work do you see on the horizon? >> Well, I think probably the first thing I feel is shocked; that's the honest answer. When this company was founded, we didn't have office space, so we borrowed office space in the basement of our VC and it had no windows, so we were in this little tomb of a room, and there were five people there and one table we got from Ikea. So to look out now and we've got nine thousand customers who paid money to attend an event about this, it's just, it's shocking. It's also humbling, and it's also, to be honest, it's scary. People are here because they are dependent on technology that we wrote, and one of the things that's just always been sunk into my head, and I believe this firmly, is I do not want to let anybody here who has put their faith in ServiceNow down. So in terms of where the work is, we've only just gotten started. I get up every day and I fundamentally want to make sure that this is the best product it can be, that our customers get the best. >> The basic question to me, that's the startup ethos, but you guys, you know, you're no startup, you're a big company, but you got some good things going on, you got some wind at your back, to use the sailing analogy. The market is exploding with innovation, so that's a challenge, but it also could be an upgrade opportunity. So what's your take on it? I mean, you got agile, you got native, we're hearing terms like microservices being kicked around in this native cloud app swirl, and you guys have a platform. Share with us your take on some of those buzzwords, on some of the big mega trends. >> I think, when this company was founded, this was actually founded as a platform company, which I think most people don't realize. But when Fred sat down to design this, his cocktail napkin design, and there was actually no cocktail napkin, but imagine there was, it was: we're going to run enterprise business apps in the cloud. That was the idea. The first few sales calls, though, selling a platform were kind of miserable, because we'd go to the customers and we'd say, hey, we're here to show you ServiceNow, and they'd say, well, what does it do, and we'd say, well, whatever you want it to do, and they'd kind of cock their head and say, what's your sales call, guys, you've got to talk to us. So we built out a suite of applications on top of the platform so we'd have something concrete to sell, and that's what the company sold for probably about eight years. It was our ITSM suite: incident management, problem management, change management. That's what most of our customer base uses. We're sort of pivoting back to focusing on the platform again, though, partly by building other apps, we've got HR, we've got facilities, we've got legal, we've got GRC, but it's also about trying to get people just onto the platform itself.
And in terms of really big mega trends, that is one of the mega trends we're seeing. It's that people are not building everything from scratch anymore; it's just not an efficient way to build things in the market anymore, and people are also moving to more and more specialized pieces of tooling. You don't start with a C compiler anymore, you start with a higher-level language. You start with Ruby on Rails, you start with J2EE if you're an enterprise developer. You pick a tool that's appropriate for the problem you want to solve, and ServiceNow is a great tool for solving a lot of enterprise business applications. >> Let's talk about developers, because one of the things that I hear all the time is, oh, I built this on Node, I got this in Angular, I got this in Java. There's a lot of different stacks kind of being built and cobbled together, and, you know, I guess I'll put them in a container, whatever they say these days. There's a lot of cool stuff happening on the developer front, open source is doing great. What are you guys looking at in terms of leverage, and, oh by the way, that enables non-programmers to do stuff that looks programmatic. So the innovation opportunity for CreateNow is huge. So what's going on with you guys on this front? >> We actually view the developer world as kind of being in three different groups. It's a Gartner term, but I think it's a good term: you've got low-code developers, and that's someone who can make a form, they can make a list, they can potentially do a little bit of light scripting. It's your kind of traditional system administrator archetype, and that's who we founded the company to address. That was the business idea: we could enable low-code developers, we could enable administrators to build really meaningful business apps, and that's really been the secret to our success. We're really good at it, because they're closer to the action and don't have to go out and, if you will, develop requirements. I think most people do their best work when they're scratching their own itch, so if you're close to the problem, you're like, man, I can solve this for myself, and we've been very empowering in letting administrators and low-code developers do that. But that's not the totality of people out there. There's also people who can't even do that; they are no-code developers. There's my mother: she can use Excel really well, but she can't write code, and my mom is a very bright woman, she's a healthcare consultant, but she's a no-code developer. But she can put a spreadsheet out there with column headings, she can make forms using our no-code tool, she can actually put a business service out on the web with approval workflows, notifications, dynamically. >> That's... somebody put out an HR app in one day when he started playing with Express. >> Absolutely. That's the trend, right? That is definitely one of the futures; what you see is this democratization of access to development tools. It used to be, when I started in this industry, you pretty much had to be an educated professional to build anything meaningful. That's no longer the case. You get kids today building great applications with real business value, real value, and that's the value of the modern era. The barrier to entry has just declined and declined and declined, because the tools have gotten so much better and so much more specialized. The combination of the two is just incredibly empowering. >> So, if we could talk about architecture, maybe, I don't know, inside baseball, or maybe plumbing, I don't know. You said in your keynote multi-tenant is the TV dinner of cloud vendor
deployments. What did you mean? Let's talk about multi-tenant versus multi-instance. >> Sure. So traditionally in the SaaS space there's really two different architectures people deploy. The most common is something called multi-tenant, and multi-tenant, if you imagine a big old apartment building, there's one big construct, it's one big database, some software on top of it, and each individual customer is a separate software construct. You're sharing hardware, you're sharing software, you're sharing memory; you're sharing an apartment in an apartment building. It's really sort of efficient for the vendor, it's certainly convenient for the vendor, because they've got one thing to manage. If you think about it, though, there's downsides, where if the water main breaks, the entire apartment building, or every customer in this case, doesn't have water. So the failure modes tend to be really extreme with multi-tenant environments, and you can't do things like let people paint their apartment any color they want to, or expand their apartment, or cook foods that are really smelly. You have to have apartment rules in place, and you see the same thing with multi-tenant architectures, where in order to make it work you have to restrict what people can do within your platform. You get licensing restrictions, you get technical restrictions, you get wrapped up in quotas; that's part and parcel for multi-tenancy. Now, ServiceNow is not multi-tenant, we're multi-instance, so every time a customer joins us they get a unique instance of ServiceNow that's just for them. It's your own house, and because of that we don't have to go in and tell you what you can do with your house. There's no HOA: you can paint it green, you can paint it pink, you can do whatever you want to, because it's yours, and that's the big freedom that we can give to the enterprise customer base, to big customers. And multi-tenancy does have its use case, I don't want to oversell it. If you're selling largely into kind of the SMB space, for example, it's a really good architecture, but up at the enterprise level it's really not; the multi-instance architecture we use is fundamentally, I think, superior.
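To put the apartment-building analogy in code, here is a purely illustrative contrast between the two models: a multi-tenant store keys every row and every query by a tenant_id inside one shared database, while a multi-instance store gives each customer its own database. The class names and SQLite files are invented for the example and are not ServiceNow's actual internals.

```python
# Illustrative contrast between multi-tenant and multi-instance data access.
# Names and database files are invented for the example.
import sqlite3


class MultiTenantStore:
    """One shared database; every row and every query carries a tenant_id."""

    def __init__(self, path="shared.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS incidents (tenant_id TEXT, title TEXT)"
        )

    def open_incident(self, tenant_id, title):
        # Forgetting the tenant_id filter anywhere leaks data across customers,
        # which is one reason multi-tenant platforms restrict what tenants can do.
        self.conn.execute(
            "INSERT INTO incidents (tenant_id, title) VALUES (?, ?)",
            (tenant_id, title),
        )
        self.conn.commit()

    def list_incidents(self, tenant_id):
        cur = self.conn.execute(
            "SELECT title FROM incidents WHERE tenant_id = ?", (tenant_id,)
        )
        return [row[0] for row in cur.fetchall()]


class MultiInstanceStore:
    """One dedicated database per customer; isolation comes from the topology."""

    def __init__(self):
        self.instances = {}  # customer -> dedicated connection ("its own house")

    def _conn_for(self, customer):
        if customer not in self.instances:
            conn = sqlite3.connect(f"{customer}.db")
            conn.execute("CREATE TABLE IF NOT EXISTS incidents (title TEXT)")
            self.instances[customer] = conn
        return self.instances[customer]

    def open_incident(self, customer, title):
        conn = self._conn_for(customer)
        conn.execute("INSERT INTO incidents (title) VALUES (?)", (title,))
        conn.commit()

    def list_incidents(self, customer):
        cur = self._conn_for(customer).execute("SELECT title FROM incidents")
        return [row[0] for row in cur.fetchall()]
```

The trade-off sketched here is the one described above: the second shape isolates customers from each other's failures and upgrades, at the price of automating many thousands of separate instances.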
>> Okay, so at what point did you make the decision to go to multi-instance? Obviously you were there early on. And why did you make that decision? >> I think it's not as clear-cut as it is in history; you always look back and say, well, we had this great design. We set out knowing we wanted to address the enterprise space, and we eventually figured out that in order to do this we couldn't do it with multi-tenancy, but we sort of talked ourselves into kind of our own little version of, I don't know if you watch South Park, but the underpants gnomes dilemma. If you remember that episode, Cartman and, I think, Butters decide they're going to stake out the underpants gnomes who sneak into your house and steal your underwear. They follow them, they watch them steal some underwear, and they follow them down to their underground lair and they accost them and say, why have you been stealing everybody's underwear? And so the gnomes take them to a small room and show them PowerPoints, and the PowerPoint has three parts: in part one the gnomes steal underpants, and in part three the gnomes profit, and then they skip back to part two and it's a big question mark. So we had the same problem: we knew we wanted to go with multi-instance, and we knew it was going to be great in the market, but we had no idea how to do it. So we probably spent about three years of engineering effort figuring out how to make a multi-instance architecture work well at scale, because doing it once is really easy. We have 18,000 instances in the platform right now; that's a lot. Things have to work with automation, they have to work cleanly, and they have to work all the time. >> So it wasn't a matter of convenience for you, just the opposite. >> Oh, absolutely, it was a terrible jam; it was a challenge we had to overcome. I think it was necessary for our target audience, and if you're listening to this and you're actually looking to start your own SaaS company, figure out who your SaaS audience is. If it's small business, if it's medium business, multi-tenancy may be absolutely the right answer. >> Okay, and the trade-off is cost efficiency? I mean, it's more expensive, right? >> So, not necessarily. I think there's this myth that, you know, it's more expensive. It's not convenient, you do have to do more engineering work, but in terms of what we actually spend on hardware and power and cooling in the data center, computers compute. If I have to buy a lot of servers and plug them into one database, or I have a lot of servers plugged into a lot of databases, it generally equates to roughly the same hardware cost, so it doesn't generally drive capex. What it does drive is you've got to put that engineering effort in; it's work up front. >> And you're not data intensive. You have a lot of data in ServiceNow, but if I remember my numbers right, it's about 5 petabytes of storage, so that's not... >> We are not, say, Netflix, you know, we are not Box, you know, we're not storage-centric, it's transactions. >> So it was optimized for transactions. >> Absolutely. >> But the implication that you've made is that many of the clouds that are out there are fine for SMB, maybe, if you're an SMB that is okay with that, but many are not suitable for the enterprise. >> Absolutely, and I think that's the big change we're seeing in the cloud space. To use a different analogy, a hundred years ago just under half of all the cars on the road were one model, the Ford Model T, say forty-eight percent. The best-selling car in 2014 was actually a truck, a Ford F-150, and it was two-point-three percent of the market. The day when one car could dominate the market like that has long since passed, but in the early days of the cloud there were only a few vendors, so they were trying to address as much of the market as they possibly could, so they built very general case solutions. Well, times have changed, people are getting much more specialized, so if you want to do surveys you probably use SurveyMonkey; they're really flipping good at surveys, and they're not claiming to do anything else. The same thing is true with the cloud platforms: the people who built general case platforms are generally getting kind of pushed a little aside by more specialized offerings that are addressing narrower market segments better. >> How important is this issue of multi-tenant versus multi-instance? You obviously feel it's important, I mean, you guys are talking about it. Now let me put you in a hypothetical situation, you may or may not want to answer. Let's say you're a CIO, you're a big Oracle customer, and most of you CIOs here, I guarantee you're using Oracle in some way, shape or form. Oracle's making a big push to the cloud, 12c, c for cloud, c for containers, I don't know, pick your poison, but Oracle's generally considered a pretty, you know, reliable company. >> Sure. >> Recovery is, you know, the name of the game for them, and, you know, they do a good job. Should I be concerned if they're going in a multi-tenant direction, or is Oracle sort of an outlier in the cloud?
>> You know, honestly, I'm not sure if they're an outlier, but I would say that if I were hired by Oracle to run their cloud, I would not do that, given their customer base. I do think there's a case where the early cloud companies, use Salesforce as an example, were multi-tenant. They're multi-tenant because it was convenient, they're multi-tenant because that was their target audience, and so they were pitching, hey, look, the cloud, and that message ultimately got tangled up with their deployment architecture. So it's stuck in people's heads that the cloud equals multi-tenant, and it really doesn't. At the SMB level, cloud multi-tenant is probably exactly what you want to do; departmentally focused, it's probably right. At the enterprise level, it's not the right design decision. >> Let's talk about what's new in the platform. Let's get into the platform, what's happening, give us the update, give us the highlight reel real quick, and then talk about what's exciting you about the next evolution of the platform. >> Sure, so a couple of different things. I'll talk a little about what we're doing for developers. Historically, I mentioned, I talked about low-code developers, talked about no-code developers; there are also professionals. I'm a professional developer, I did this for 20 years of my life. I lived in an IDE, I started writing code, I wrote C code, I wrote 370 assembler. I've done a lot of terrible, horrifying stuff back in my day. >> Terrible is probably... old school. >> There you go, that's a way to put it. It was really hard, you know, but I wasn't being shot at. But no, the trick to that, though, is that if you were a professional and you wanted to use ServiceNow, the tools were not familiar. There was no IDE or single place you go to see your whole app, so we built one. In the Geneva release, the product actually has an in-browser IDE. It has code search, it has editing, it has code management; you see your whole app in one place. It's great, and actually our teams use it to build itself. It's a little bit of a self-eating watermelon, but the team working on the IDE actually programs in the IDE, so they prefer that to programming in Eclipse, for example. We're biased, we like our IDE, but it's actually very valuable. That's for the developer side. There's also a new developer program: go to developers.servicenow.com and join the program. You don't need to be a customer, just have an email address. You can get a hold of a free instance, you can get access to technology, you can actually join the forums. As long as you use it, it's yours. It's really aimed at everybody: if you want to learn ServiceNow, go to the developer program and join it. There's no requirement other than a willingness to learn on your part. Technology-wise, though, to talk about something else, we live in a post-Edward Snowden world, and I don't really like Edward Snowden, because he made my work harder, but one of the things he's done is make the concept of data sovereignty and data privacy a foreground concern for a lot of people, especially outside the US. People don't want to put data in the cloud if there's fear that a US-based vendor or US-based firms can potentially see it.
And even setting aside the US, if it's just private information, they don't want to put it in the cloud if anybody can see it. One of the ways to solve that, and we're addressing this, is to allow the data to get encrypted before it comes to us. So we're putting an encryption proxy inside the customer's network, along with its keys, and data will pass through the proxy, certain fields get encrypted, and we see only ciphertext. We literally can't read it. >> So encryption's your solution there. >> It is absolutely our solution. >> And on the international side, do you go create a replica, have a cloud-based system there potentially, or can you store it in the US? >> Oh, it's stored in the US, because the data is ciphertext; we literally can't read it. And there are side effects there that are actually kind of cool, in that because we can't read it, you also can't use it in back-end workflows, so you've got to design your app around the encryption. But that is a hard guarantee: we don't have the keys, and it is not possible for ServiceNow to get your data back. >> And if the government subpoenas, you can't give it? >> We can't give it, really, no. They'd have to subpoena the company in question, who has the keys, and it's up to their legal department as to what they want to do with it.
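The encryption proxy just described lends itself to a short sketch: sensitive fields are encrypted inside the customer's network with a key that never leaves it, so the cloud service stores and returns only ciphertext. The example below uses the Python cryptography library's Fernet recipe purely as an illustration; it is not ServiceNow's actual proxy, and the field names and records are placeholders.

```python
# Sketch of client-side field encryption: the key stays on-premises,
# the cloud service only ever sees ciphertext for the protected fields.
from cryptography.fernet import Fernet

# Generated and stored inside the customer's network, never uploaded.
key = Fernet.generate_key()
cipher = Fernet(key)

SENSITIVE_FIELDS = {"patient_name", "ssn"}  # placeholder field list

def encrypt_record(record):
    """Replace sensitive field values with ciphertext before upload."""
    out = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            out[field] = cipher.encrypt(value.encode()).decode()
        else:
            out[field] = value
    return out

def decrypt_record(record):
    """Recover plaintext on the way back through the proxy."""
    out = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            out[field] = cipher.decrypt(value.encode()).decode()
        else:
            out[field] = value
    return out

# The cloud side stores and indexes only what it receives; for the protected
# fields that is opaque ciphertext it cannot read or use in back-end
# workflows, which matches the trade-off described above.
protected = encrypt_record({"patient_name": "Ada Lovelace", "priority": "high"})
restored = decrypt_record(protected)
```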
>> Okay, so can I ask you, kind of as we wrap up here, a lot of great stuff. Containers are all the rage, I think Docker just got another 95 million dollars, 95 million; they've raised so much funding over the years. Containers, though, promise interoperability, and I bring that up only as a way to tease out this notion of interoperability. How do you guys view that trend in the cloud? Is that something that's, you know, a change? It's been around for a while, sure, in programming, but Docker's got the traction, and then you're seeing security, things like Illumio, making a lot of hype. >> I think there's two different parts to that, you know. One is, there definitely is a push to keep applications from messing each other up and impacting each other in bad ways, either from a security standpoint or just from an architecture overload standpoint, and you see that in back-end technologies. Containers, Docker, is a good example of that; you know, VMware is a little more mature technology doing something very similar; then, you know, choose your virtualization layer. In the more application space where ServiceNow fits, we have the same problem, in that we don't want a ServiceNow application to impact a different ServiceNow application. So we actually invested very heavily, in Fuji, in something called scoping. It allows for applications to be managed individually, to be deployed individually, and to interact with each other only through defined APIs, and that means that you can actually deploy an application with a high degree of confidence it's not going to impact any of the other, for lack of a better word, innocent applications inside your system. It's a very big improvement, and it's one of the things that actually allowed us to do the ServiceNow Store. >> How about the open source evolution, if you will? You know, we always talk about this, but, you know, me being a computer science degree back in the 80s, we lived in the same generation where open source was new, second class, and now it's first class. And now you have, beyond that, now it's proven, it's working. Are there new business models you're seeing, kind of like pure Red Hat, and are you seeing, you know, open platforms, like data platforms? So what's the next evolution of open source, how are you guys going to tap into that, and what's the most relevant thing for the folks to be looking at? >> I think, first, we're very big users of open source, especially in our back end. I mean, we're CentOS, we're a little bit of Red Hat here and there, you know, F5s, we've got PXE, we've got Python, we've got Puppet, we've got lots of open source in the environment, and in the product as well. We're huge fans. We think it really has brought a lot of really good technology out; it's very accessible to the engineering community, so we use a lot of it, and we even contribute back to some of them, as the case may be. I think if you look at business models, I'll be honest, I have not seen a lot of open source companies do really well in the environment. They've built a lot of great technology, and I think it's been very empowering for the developer community, but even Red Hat has not really, you know, they're not huge, it's not a 20 billion dollar company, as the case may be. So I don't expect to see people flocking to the open source world to make money. I see people flocking to the open source world for the same reason engineers have always built cool stuff: it's that joy of creation, that power of building, of value creation and contribution. >> It's absolutely like a love of innovation, and it's not, I think, no one objects to money, and that's why they call it money, but the open source world, from what I've seen, it's not being driven by financials, it's being driven by engineers wanting to solve problems. It's kind of creativity, right? It's also a great way to play ball and get a job and show what you're worth. It's like, you know, just like playing ball in the yard, sandlot baseball, then you go pro, right? So it's a way for recruiting and also to meet people. >> Absolutely, and we're actually, as I said, we're big users and we love a lot of it; at Knowledge we use MySQL, community users as well. >> So, okay, we're probably going to get the hook here, but I want to give you the final word: the future. Give us your take on the preferred future, technology-wise, just the next five years, ten years. What's the world going to be like? >> I think five years out it's going to look fairly similar to how it does today. You're definitely going to see a push to drive the information you need to you without you having to go and look for it. You're already seeing this: you know, Twitter pops when something happens, data comes to you, you don't have to go in and hit refresh periodically. That's going to drive itself into more and more parts of the world. Your iPhone dings when something comes up; that's going to seep out away from the phone, away from specialty platforms like Twitter and other applications, and you're going to get more and more used to seeing things come to you, rather than you having to go out and look for information, information that's relevant. It's going to be kind of a service-oriented internet; it's going to kind of push stuff out to you. Ten years out, I suspect there'll be more dramatic changes. The big thing, actually, and this is a little bit of inside baseball, but operational architectures are getting much more standardized, so I do suspect that the amount of compute people can throw at problems is going to continue to go up astronomically. Right now, big data solutions are generally applicable to fairly narrow companies who can apply a lot of data to it, like a Netflix can afford to optimize recommendations for you. That compute's going to get cheaper and cheaper and more and more accessible, and you will see that sort of solution get applied to more and more specialized problems. So I think you're going to find that information is going to come to you, and it's going to be more and more germane to you.
>> Asynchronous. >> Definitely, absolutely. >> The value and the goodness of more and more cheap compute will create faster and faster personalization. >> Faster personalization, and it'll be real time. There's no need for you to poll on it; asynchronous, it'll come to you, and it'll be the information you need. >> Not near real-time, real-time. >> Self-driving cars don't do very well in near real-time, that's how I... >> Okay, thanks so much for sharing your time and insights here inside The Cube. >> My pleasure. >> Great to get the insight from the early days to what's going on now, appreciate it. This is The Cube, we're live in Las Vegas for three days for Knowledge15. I'm John Furrier with Dave Vellante, and we'll be right back with more Cube signal from the noise after this short break.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Pat Casey | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
18,000 instances | QUANTITY | 0.99+ |
Ikea | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
US | LOCATION | 0.99+ |
Excel | TITLE | 0.99+ |
f-150 | COMMERCIAL_ITEM | 0.99+ |
netflix | ORGANIZATION | 0.99+ |
ten years | QUANTITY | 0.99+ |
Edward Snowden | PERSON | 0.99+ |
three days | QUANTITY | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
five people | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
forty-eight percent | QUANTITY | 0.99+ |
nine thousand customers | QUANTITY | 0.99+ |
first classes | QUANTITY | 0.99+ |
PowerPoint | TITLE | 0.99+ |
Dave vellante | PERSON | 0.99+ |
95 million dollars | QUANTITY | 0.99+ |
Ruby on Rails | TITLE | 0.99+ |
Ford | ORGANIZATION | 0.99+ |
Fred | PERSON | 0.99+ |
second classes | QUANTITY | 0.99+ |
95 million | QUANTITY | 0.98+ |
Edward Snowden | PERSON | 0.98+ |
dave vellante | PERSON | 0.98+ |
20 billion dollar | QUANTITY | 0.98+ |
two different parts | QUANTITY | 0.98+ |
five years | QUANTITY | 0.98+ |
iPhone | COMMERCIAL_ITEM | 0.98+ |
about three years | QUANTITY | 0.98+ |
south park | TITLE | 0.98+ |
one car | QUANTITY | 0.98+ |
two | QUANTITY | 0.97+ |
Cartman | TITLE | 0.97+ |
three different groups | QUANTITY | 0.97+ |
one database | QUANTITY | 0.97+ |
one day | QUANTITY | 0.97+ |
one model | QUANTITY | 0.97+ |
one table | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
Las Vegas Nevada | LOCATION | 0.96+ |
about eight years | QUANTITY | 0.96+ |
john furrier | PERSON | 0.96+ |
two different architectures | QUANTITY | 0.96+ |
one thing | QUANTITY | 0.96+ |
a hundred years ago | DATE | 0.95+ |
java | TITLE | 0.95+ |
first | QUANTITY | 0.92+ |
u.s. | LOCATION | 0.92+ |
first thing | QUANTITY | 0.91+ |
today | DATE | 0.91+ |
each individual customer | QUANTITY | 0.91+ |
SAS | ORGANIZATION | 0.9+ |
two-point- | QUANTITY | 0.89+ |
about 5 petabytes | QUANTITY | 0.88+ |
SiliconANGLE | TITLE | 0.88+ |
Cuban | LOCATION | 0.88+ |
lumio | TITLE | 0.87+ |
agile | TITLE | 0.87+ |
three parts | QUANTITY | 0.87+ |
one big | QUANTITY | 0.86+ |
loko | ORGANIZATION | 0.85+ |
single | QUANTITY | 0.82+ |
ORGANIZATION | 0.81+ | |
a few vendors | QUANTITY | 0.81+ |
lot of data | QUANTITY | 0.81+ |
Wikibon Darden | ORGANIZATION | 0.81+ |
a lot of servers | QUANTITY | 0.79+ |
part two | QUANTITY | 0.78+ |
baseball | TITLE | 0.78+ |
ford | ORGANIZATION | 0.76+ |
people | QUANTITY | 0.75+ |
one place | QUANTITY | 0.75+ |
VP | PERSON | 0.74+ |
one big | QUANTITY | 0.74+ |
first few sales calls | QUANTITY | 0.74+ |
Gartner | ORGANIZATION | 0.74+ |