

Joe Nolte, Allegis Group & Torsten Grabs, Snowflake | Snowflake Summit 2022


 

>> Hey everyone, welcome back to theCUBE. Lisa Martin with Dave Vellante. We're here in Las Vegas with Snowflake at Snowflake Summit 22. This is the fourth annual; there's close to 10,000 people here. Lots going on. Customers, partners, analysts, press, media, everyone talking about all of this news. We've got a couple of guests joining us. We're gonna unpack Snowpark. Torsten Grabs, the director of product management at Snowflake, and Joe Nolte, AI and MDM architect at Allegis Group. Guys, welcome to the program.
>> Thank you so much for having us.
>> Isn't it great to be back in person?
>> It is.
>> Oh, wonderful. Yes, it is.
>> Indeed. Joe, talk to us a little bit about Allegis Group. What do you do? And then tell us a little bit about your role specifically.
>> Well, Allegis Group is a collection of OpCos, operating companies, that do staffing. We're one of the biggest staffing companies in North America. We have a presence in EMEA and in the APAC region. So we work to find people jobs, and we help get 'em staffed, and we help companies find people and we help individuals find people.
>> Incredibly important these days. Excuse me, incredibly important these days, it is.
>> It very much is, right.
>> Tell me a little bit about your role. You are the AI and MDM architect. You wear a lot of hats.
>> Okay. So I'm an architect and I support both of those verticals within the company. I have a set of engineers and data scientists that work with me on the AI side, and we build data science models and solutions that help support what the company wants to do, right? So we build it to make business processes faster and more streamlined. And we really see Snowpark and Python helping us to accelerate that and accelerate that delivery. So we're very excited about it.
>> Explain Snowpark for people. I mean, I look at it as this wonderful sandbox. You can bring your own developer tools in, but explain in your words what it is.
>> Yeah. So we got interested in Snowpark because increasingly the feedback was that not everybody wants to interact with Snowflake through SQL. There are other languages that they would prefer to use, including Java, Scala and, of course, Python. So that led to our work on Snowpark, where we're building an infrastructure that allows us to host other languages natively on the Snowflake compute platform. And what we just announced here is Snowpark for Python in public preview. So now you have the ability to natively run Python code on Snowflake and benefit from the thousands of packages and libraries that the open source community around Python has contributed over the years. And that's a huge benefit for data scientists, ML practitioners and data engineers, because those are the languages and packages that are popular with them. So yeah, we very much look forward to working with the likes of you and other data scientists and data engineers around the Python ecosystem.
>> Yeah. And Snowpark helps reduce the architectural footprint, and it makes the data pipelines a little easier and less complex. We had a pipeline that works on DMV data, and we converted that entire pipeline from Python running on a VM to running directly on Snowflake. We were able to eliminate code because you don't have to worry about multithreading, right? Because we can just set the warehouse size through a task. No more multithreading; throw that code away, don't need to do it anymore.
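(Editor's aside: to make the pattern Joe is describing a bit more concrete, here is a rough, hedged sketch of what a pipeline like that can look like once it moves onto Snowpark for Python: the transformation logic becomes a stored procedure that runs where the data lives, and a Snowflake task handles the scheduling that the old VM setup needed several components for. All table, stage, and warehouse names below are invented for illustration; this is not Allegis's actual code, and Snowpark for Python was in public preview at the time, so API details may have shifted. Joe's before-and-after comparison continues right after the sketch.)

```python
# Illustrative sketch only - names and logic are hypothetical, not Allegis's pipeline.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

# Connection details would normally come from a config file or secrets manager.
connection_parameters = {
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "role": "TRANSFORM_ROLE",
    "warehouse": "TRANSFORM_WH",
    "database": "ANALYTICS",
    "schema": "PUBLIC",
}
session = Session.builder.configs(connection_parameters).create()

def run_dmv_pipeline(session: Session) -> str:
    # The transformation runs inside Snowflake; no VM and no hand-rolled threading.
    raw = session.table("RAW.DMV_RECORDS")
    cleaned = (
        raw.filter(col("RECORD_STATUS") == "VALID")
           .select("RECORD_ID", "STATE", "EVENT_DATE", "EVENT_TYPE")
    )
    cleaned.write.save_as_table("ANALYTICS.DMV_CLEAN", mode="overwrite")
    return "pipeline complete"

# Register the function as a stored procedure that lives inside Snowflake.
session.sproc.register(
    func=run_dmv_pipeline,
    name="RUN_DMV_PIPELINE",
    packages=["snowflake-snowpark-python"],
    is_permanent=True,
    stage_location="@PIPELINE_STAGE",
    replace=True,
)

# Scheduling and sizing are handled by a task: pick a warehouse size
# instead of writing threading code, and let Snowflake run it on a schedule.
session.sql("""
    CREATE OR REPLACE TASK DMV_PIPELINE_TASK
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '60 MINUTE'
    AS CALL RUN_DMV_PIPELINE()
""").collect()
session.sql("ALTER TASK DMV_PIPELINE_TASK RESUME").collect()
```

With this shape, the "two components" Joe mentions are the stored procedure itself and whatever calls it, whether that is a task as above or an external ETL orchestrator issuing the CALL.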
We get the same results, but the architecture to run that pipeline gets immensely easier, because it's a stored procedure that's already there, and implementing the call to that stored procedure is very easy. The architecture that we use today uses six different components just to be able to run that Python code on a VM within our ecosystem, to make sure that it runs on time and is scheduled and all of that. But with Snowflake, with Snowpark and Snowflake Python, it's two components: the stored procedure and our ETL tool calling it.
>> Okay. So you've simplified that stack, and eliminated all the other stuff that you had to do that now Snowflake's doing, am I correct? That you're actually taking the application development stack and the analytics stack and bringing them together? Are they merging?
>> I don't know. I think in a way... I'm not real sure how I would answer that question, to be quite honest. I think with Streamlit, there's a little bit of application that's gonna be down there, so you could maybe start to say that. I'd have to see how that carries out and what we do and what we produce to really give you an answer to that. But yeah, maybe in a little bit.
>> Well, the reason I asked you is because we always talk about injecting data into apps, injecting machine intelligence and ML and AI into apps, but there are two separate stacks today, aren't they?
>> Certainly the two are getting closer.
>> With Python? It gets a little better? Explain that. Explain how.
>> Just like in the keynote the other day, right? When she showed her sample application, you can start to see that, 'cause you can do some data pipelining and data building and then throw that into a training module within Python, right down inside of Snowflake, and have it sitting there. Then you can use something like Streamlit to expose it to your users. We were talking about that the other day, about how you get ML and AI in front of people after you have it running. We have a model right now that is a predictive and prescriptive model of one of our top KPIs. And right now we can show it to everybody in the company, but it's through a Jupyter notebook. How do I deliver it? How do I get it in front of people so they can use it? Well, what we saw was Streamlit, right? It's a perfect match. And then we can compile it; it's right down there on Snowflake. And the time to delivery to production gets completely easier, because since it's already part of Snowflake, there's no architectural review. As long as the code passes code review, and it's not poorly written code and isn't using a library that's dangerous, it's a simple deployment to production. So because it's encapsulated inside of that Snowflake environment, we have approval to just use it however we see fit.
>> So that code delivery, that code review, has to occur irrespective of whatever you're running it on. Okay, so I get that. But it's a frictionless environment, you're saying. What would you have had to do prior to Snowflake that you don't have to do now?
>> Well, one, it's a longer review process to allow me to push the solution into production, because I have to explain to my InfoSec people... it's not...
>> Trusted.
>> Well, don't use that word. No. There are checks and balances in everything that we do.
>> It has to be verified.
>> And that's all. It's part of what I like to call the good bureaucracy, right? Those processes are in place to help all of us stay protected.
>> It's the checklist, yeah, that you gotta go through.
>> That's all it is. It's like flying on a plane. But that checklist gets smaller, and sometimes it's just one box now, with Python through Snowpark running down on the Snowflake platform. And that's the real advantage, because we can do things faster, we can do things easier. We're doing some mathematical data science right now and we're doing it through SQL, but Python will open that up much more easily and allow us to deliver faster and more accurate results. Not to mention, we're gonna try to bolt on the hybrid tables to that afterwards.
>> Oh, we have to talk about that. So, and I don't need an exact metric, but when you say faster, are we talking 10% faster, 20% faster, 50% faster?
>> It really depends on the solution.
>> Well, gimme a range of the worst case, best case.
>> I really don't have that. I wish I did. I wish I had that for you, but I really don't have it.
>> I mean, obviously it's meaningful. If it's meaningful, it has a business impact, it'll be...
>> I think what it will do is it will speed up our work inside of our iterations. So we can then, you know, look at the code sooner, evaluate it sooner, measure it sooner, measure it faster.
>> So is it fair to say that as a result, you can do more?
>> We'll be able to do more, and it will enable more of our people, because they're used to working in Python.
>> Can you talk a little bit about, from an enablement perspective... let's go up the stack to the folks at Allegis who are on the front lines, helping people get jobs. What are some of the benefits of having Snowpark for Python under the hood? How does it facilitate them being able to get access to data, to deliver what they need to their clients?
>> Well, I think what we would use Snowpark for Python for there is when we're building tools to let them know whether or not a user or a piece of talent is already within our system, things like that. That's how we would leverage it. But again, it's also new. We're still figuring out what solutions we would move to Python. We have some targeted: I have developers that are waiting for this, and they're in private preview now, playing around with it. They're ready to start using it, ready to start doing some analytical work on it, to get some of our analytical work out of GCP, because that's where it is right now. The data wasn't in Snowflake before, so the dashboards are up in GCP. But now that we've moved all of that data down into Snowflake, the team that did those analytical dashboards wants to use Python, because that's the way it's written right now. So it's an easier transformation, an easier migration off of GCP, and it gets us doing everything in Snowflake, which is what we want.
>> So you're saying you're doing the visualization in GCP, is that right?
>> It's just some dashboarding. That's all.
>> Not even visualization? You won't even give me that. Okay, okay. But...
>> 'Cause it's not visualization. It's just some dashboards of numbers and percentages and things like that. There's no graphics.
>> And it doesn't make sense to run that in Snowflake? In GCP, you could just move it into AWS, or...
>> No. What we'll be able to do now... all that data before was in GCP, and all that Python code was running in GCP. We've moved all that data out of GCP, and now it's in Snowflake, and now we're gonna work on taking those Python scripts that we thought we were gonna have to rewrite differently, because Python wasn't available. Now that Python's available, we have an easier way of getting those dashboards back out to our people.
>> Okay. But you're taking it out of GCP, putting it into Snowflake... where, anywhere?
>> Well, so we'll build those dashboards, and they'll actually be displayed through Tableau, which is our enterprise tool for that.
>> Yeah, sure. Okay. And then when you operationalize it, it'll go...
>> But the idea is it's an easier pathway for us to migrate our code, our existing code that's in Python, down into Snowflake and have it run against Snowflake, because all the data's there.
>> Because it's not going out and coming back in; it's all integrated.
>> We want our people working on the data in Snowflake. That's our data platform. That's where we want our analytics done. We don't want 'em done in other places. Over our data cloud journey, we've worked really hard to move all of the data we use out of existing systems on prem, and now we're attacking the data that's in GCP and making sure it's down. And it's not a lot of data, and we fixed it with one data pipeline that exposes all that data down in Snowflake now. And we're just migrating our code down to work against the Snowflake platform, which is what we want.
>> Why are you excited about hybrid tables? What's the potential?
>> Hybrid tables I'm excited about because some of the data science that we do inside of Snowflake produces a set of results, and they're recommendations. We have to get those recommendations back to our people, back into our talent management system, and there's just some delay; there's about an hour delay in delivering that data back to that team. Well, with hybrid tables, I can just write it to the hybrid table, and that hybrid table can be directly accessed from our talent management system by the recruiters and the hiring managers, so they can see those recommendations in near real time. And that's the value.
>> Yep. We've learned in recent years that access to real-time data is no longer a nice-to-have; it's a huge competitive differentiator for every industry, including yours. Guys, thank you for joining Dave and me on the program, talking about Snowpark for Python, what that announcement means, and how Allegis is leveraging the technology. We look forward to hearing what comes when it's GA.
>> Yeah, we're looking forward to it.
>> Nice. All right, guys, great. Thank you to our guests and Dave Vellante. I'm Lisa Martin. You're watching theCUBE's coverage of Snowflake Summit 22. Stick around, we'll be right back with our next guest.

Published Date : Jun 15 2022
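(Editor's addendum to the interview above: the Streamlit hand-off Joe describes, taking a model out of a Jupyter notebook and putting it in front of business users, tends to look roughly like the sketch below. This is a generic, hedged illustration, not Allegis's application: the model, feature names, and tables are invented, and it assumes Streamlit running wherever you currently host it, since Snowflake-native hosting was still on the roadmap at the time of this conversation.)

```python
# streamlit_kpi_app.py - hypothetical sketch of exposing a predictive KPI model.
# Run with: streamlit run streamlit_kpi_app.py
import pickle

import streamlit as st
from snowflake.snowpark import Session

@st.cache_resource
def get_session() -> Session:
    # st.secrets holds the Snowflake connection details in this sketch.
    return Session.builder.configs(dict(st.secrets["snowflake"])).create()

@st.cache_resource
def load_model():
    # Assume the trained model artifact was saved earlier, e.g. from the notebook.
    with open("kpi_model.pkl", "rb") as f:
        return pickle.load(f)

st.title("Weekly KPI forecast")

session = get_session()
model = load_model()

# Pull the latest features from Snowflake and score them with the model.
features = session.table("ANALYTICS.KPI_FEATURES").to_pandas()
features["PREDICTED_KPI"] = model.predict(
    features[["OPEN_REQS", "SUBMITTALS", "INTERVIEWS"]]
)

region = st.selectbox("Region", sorted(features["REGION"].unique()))
view = features[features["REGION"] == region]

st.metric("Predicted KPI (next week)", f"{view['PREDICTED_KPI'].mean():.1f}")
st.bar_chart(view.set_index("WEEK_START")["PREDICTED_KPI"])
```

Once something like this passes code review, "deployment" is just publishing the script, which is the shorter checklist Joe is pointing at.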



Data Power Panel V3


 

(upbeat music) >> The stampede to cloud and massive VC investments has led to the emergence of a new generation of object store based data lakes. And with them two important trends, actually three important trends. First, a new category that combines data lakes and data warehouses aka the lakehouse is emerged as a leading contender to be the data platform of the future. And this novelty touts the ability to address data engineering, data science, and data warehouse workloads on a single shared data platform. The other major trend we've seen is query engines and broader data fabric virtualization platforms have embraced NextGen data lakes as platforms for SQL centric business intelligence workloads, reducing, or somebody even claim eliminating the need for separate data warehouses. Pretty bold. However, cloud data warehouses have added complimentary technologies to bridge the gaps with lakehouses. And the third is many, if not most customers that are embracing the so-called data fabric or data mesh architectures. They're looking at data lakes as a fundamental component of their strategies, and they're trying to evolve them to be more capable, hence the interest in lakehouse, but at the same time, they don't want to, or can't abandon their data warehouse estate. As such we see a battle royale is brewing between cloud data warehouses and cloud lakehouses. Is it possible to do it all with one cloud center analytical data platform? Well, we're going to find out. My name is Dave Vellante and welcome to the data platform's power panel on theCUBE. Our next episode in a series where we gather some of the industry's top analysts to talk about one of our favorite topics, data. In today's session, we'll discuss trends, emerging options, and the trade offs of various approaches and we'll name names. Joining us today are Sanjeev Mohan, who's the principal at SanjMo, Tony Baers, principal at dbInsight. And Doug Henschen is the vice president and principal analyst at Constellation Research. Guys, welcome back to theCUBE. Great to see you again. >> Thank guys. Thank you. >> Thank you. >> So it's early June and we're gearing up with two major conferences, there's several database conferences, but two in particular that were very interested in, Snowflake Summit and Databricks Data and AI Summit. Doug let's start off with you and then Tony and Sanjeev, if you could kindly weigh in. Where did this all start, Doug? The notion of lakehouse. And let's talk about what exactly we mean by lakehouse. Go ahead. >> Yeah, well you nailed it in your intro. One platform to address BI data science, data engineering, fewer platforms, less cost, less complexity, very compelling. You can credit Databricks for coining the term lakehouse back in 2020, but it's really a much older idea. You can go back to Cloudera introducing their Impala database in 2012. That was a database on top of Hadoop. And indeed in that last decade, by the middle of that last decade, there were several SQL on Hadoop products, open standards like Apache Drill. And at the same time, the database vendors were trying to respond to this interest in machine learning and the data science. So they were adding SQL extensions, the likes Hudi and Vertical we're adding SQL extensions to support the data science. But then later in that decade with the shift to cloud and object storage, you saw the vendor shift to this whole cloud, and object storage idea. So you have in the database camp Snowflake introduce Snowpark to try to address the data science needs. 
They introduced that in 2020, and last year they announced support for Python. You also had Oracle and SAP jump on this lakehouse idea last year, supporting both the lake and warehouse from a single vendor, not necessarily quite a single platform. Google very recently also jumped on the bandwagon. And then you also mentioned the SQL engine camp, the Dremios, the Ahanas, the Starbursts, really doing two things: a fabric for distributed access to many data sources, but also very firmly planting that idea that you can just have the lake and we'll help you do the BI workloads on that. And then of course, the data lake camp, with the Databricks and Clouderas providing warehouse-style deployments on top of their lake platforms. >> Okay, thanks, Doug. I'd be remiss... those of you who know me know that I typically write my own intros. This time my colleagues fed me a lot of that material. So thank you. You guys make it easy. But Tony, give us your thoughts on this intro. >> Right. Well, I very much agree with both of you, which may not make for the most exciting television, in terms of that it has been an evolution, just like Doug said. I mean, for instance, just to give an example, when Teradata bought Aster Data, it was initially seen as a hardware platform play. In the end, it was basically all those Aster functions that made a lot of sort of big data analytics accessible to SQL. (clears throat) And so what I really see, just in a simpler or functional definition, is that the data lakehouse is really an attempt by the data lake folks to make the data lake friendlier territory to the SQL folks, and also to get into friendlier territory with all the data stewards, who are basically concerned about the sprawl and the lack of control and governance in the data lake. So it's really kind of a continuation of an ongoing trend. That being said, there's no action without counteraction. And of course, at the other end of the spectrum, we also see a lot of the data warehouses starting to add things like in-database machine learning. So they're certainly not surrendering without a fight. Again, as Doug was mentioning, this has been part of a continual blending of platforms that we've seen over the years, that we first saw in the Hadoop years with SQL on Hadoop and data warehouses starting to reach out to cloud storage, or should I say HDFS, and then with the cloud going cloud native and therefore trying to break the silos down even further. >> Now, thank you. And Sanjeev, data lakes, when we first heard about them, it was such a compelling name, and then we realized all the problems associated with them. So pick it up from there. What would you add to Doug and Tony? >> I would say these are excellent points that Doug and Tony have brought to light. The concept of lakehouse was going on, to your point, Dave, a long time ago, long before the term was coined. For example, Uber was trying to do a mix of Hadoop and Vertica, because what they really needed were transactional capabilities that Hadoop did not have. So they weren't calling it the lakehouse; they were using multiple technologies, but now they're able to collapse it into a single data store that we call the lakehouse. Data lakes are excellent at batch processing large volumes of data, but they don't have the real-time capabilities such as change data capture, doing inserts and updates. So this is why the lakehouse has become so important, because it gives us these transactional capabilities. >> Great. So I'm interested... the name is great, lakehouse.
The concept is powerful, but I get concerned that it's a lot of marketing hype behind it. So I want to examine that a bit deeper. How mature is the concept of lakehouse? Are there practical examples that really exist in the real world that are driving business results for practitioners? Tony, maybe you could kick that off. >> Well, put it this way. I think what's interesting is that both data lakes and data warehouse that each had to extend themselves. To believe the Databricks hype it's that this was just a natural extension of the data lake. In point of fact, Databricks had to go outside its core technology of Spark to make the lakehouse possible. And it's a very similar type of thing on the part with data warehouse folks, in terms of that they've had to go beyond SQL, In the case of Databricks. There have been a number of incremental improvements to Delta lake, to basically make the table format more performative, for instance. But the other thing, I think the most dramatic change in all that is in their SQL engine and they had to essentially pretty much abandon Spark SQL because it really, in off itself Spark SQL is essentially stop gap solution. And if they wanted to really address that crowd, they had to totally reinvent SQL or at least their SQL engine. And so Databricks SQL is not Spark SQL, it is not Spark, it's basically SQL that it's adapted to run in a Spark environment, but the underlying engine is C++, it's not scale or anything like that. So Databricks had to take a major detour outside of its core platform to do this. So to answer your question, this is not mature because these are all basically kind of, even though the idea of blending platforms has been going on for well over a decade, I would say that the current iteration is still fairly immature. And in the cloud, I could see a further evolution of this because if you think through cloud native architecture where you're essentially abstracting compute from data, there is no reason why, if let's say you are dealing with say, the same basically data targets say cloud storage, cloud object storage that you might not apportion the task to different compute engines. And so therefore you could have, for instance, let's say you're Google, you could have BigQuery, perform basically the types of the analytics, the SQL analytics that would be associated with the data warehouse and you could have BigQuery ML that does some in database machine learning, but at the same time for another part of the query, which might involve, let's say some deep learning, just for example, you might go out to let's say the serverless spark service or the data proc. And there's no reason why Google could not blend all those into a coherent offering that's basically all triggered through microservices. And I just gave Google as an example, if you could generalize that with all the other cloud or all the other third party vendors. So I think we're still very early in the game in terms of maturity of data lakehouses. >> Thanks, Tony. So Sanjeev, is this all hype? What are your thoughts? >> It's not hype, but completely agree. It's not mature yet. Lakehouses have still a lot of work to do, so what I'm now starting to see is that the world is dividing into two camps. On one hand, there are people who don't want to deal with the operational aspects of vast amounts of data. 
They are the ones who are going for BigQuery, Redshift, Snowflake, Synapse, and so on because they want the platform to handle all the data modeling, access control, performance enhancements, but these are trade off. If you go with these platforms, then you are giving up on vendor neutrality. On the other side are those who have engineering skills. They want the independence. In other words, they don't want vendor lock in. They want to transform their data into any number of use cases, especially data science, machine learning use case. What they want is agility via open file formats using any compute engine. So why do I say lakehouses are not mature? Well, cloud data warehouses they provide you an excellent user experience. That is the main reason why Snowflake took off. If you have thousands of cables, it takes minutes to get them started, uploaded into your warehouse and start experimentation. Table formats are far more resonating with the community than file formats. But once the cost goes up of cloud data warehouse, then the organization start exploring lakehouses. But the problem is lakehouses still need to do a lot of work on metadata. Apache Hive was a fantastic first attempt at it. Even today Apache Hive is still very strong, but it's all technical metadata and it has so many different restrictions. That's why we see Databricks is investing into something called Unity Catalog. Hopefully we'll hear more about Unity Catalog at the end of the month. But there's a second problem. I just want to mention, and that is lack of standards. All these open source vendors, they're running, what I call ego projects. You see on LinkedIn, they're constantly battling with each other, but end user doesn't care. End user wants a problem to be solved. They want to use Trino, Dremio, Spark from EMR, Databricks, Ahana, DaaS, Frink, Athena. But the problem is that we don't have common standards. >> Right. Thanks. So Doug, I worry sometimes. I mean, I look at the space, we've debated for years, best of breed versus the full suite. You see AWS with whatever, 12 different plus data stores and different APIs and primitives. You got Oracle putting everything into its database. It's actually done some interesting things with MySQL HeatWave, so maybe there's proof points there, but Snowflake really good at data warehouse, simplifying data warehouse. Databricks, really good at making lakehouses actually more functional. Can one platform do it all? >> Well in a word, I can't be best at breed at all things. I think the upshot of and cogen analysis from Sanjeev there, the database, the vendors coming out of the database tradition, they excel at the SQL. They're extending it into data science, but when it comes to unstructured data, data science, ML AI often a compromise, the data lake crowd, the Databricks and such. They've struggled to completely displace the data warehouse when it really gets to the tough SLAs, they acknowledge that there's still a role for the warehouse. Maybe you can size down the warehouse and offload some of the BI workloads and maybe and some of these SQL engines, good for ad hoc, minimize data movement. But really when you get to the deep service level, a requirement, the high concurrency, the high query workloads, you end up creating something that's warehouse like. >> Where do you guys think this market is headed? What's going to take hold? Which projects are going to fade away? You got some things in Apache projects like Hudi and Iceberg, where do they fit Sanjeev? 
Do you have any thoughts on that? >> So thank you, Dave. I feel that table formats are starting to mature. There is a lot of work that's being done. We will not have a single product or single platform; we'll have a mixture. So I see a lot of Apache Iceberg in the news. Apache Iceberg is really innovating; their focus is on the table format. But then Delta and Apache Hudi are doing a lot of deep engineering work. For example, how do you handle high concurrency when there are multiple writes going on? Do you version your Parquet files, or how do you do your upserts, basically? So different focus. At the end of the day, the end user will decide what is the right platform, but we are going to have multiple formats living with us for a long time. >> Doug, is Iceberg, in your view, something that's going to address some of those gaps in standards that Sanjeev was talking about earlier? >> Yeah, Delta Lake, Hudi, Iceberg, they all address this need for consistency and scalability. Delta Lake is open technically, open for access, but I don't hear about Delta Lake anywhere but Databricks; hearing a lot of buzz about Apache Iceberg. End users want an open performance standard. And most recently Google embraced Iceberg for its recent BigLake, their stab at supporting both lakes and warehouses on one conjoined platform. >> And Tony, of course, you remember the early days of the sort of big data movement. You had MapR, which was the most closed. You had Hortonworks, the most open. You had Cloudera in between. There was always this kind of contest as to who's the most open. Does that matter? Are we going to see a repeat of that here? >> I think it's spheres of influence, and Doug very much was kind of referring to this. I would call it kind of like the MongoDB syndrome, which is that you have... and I'm talking about MongoDB before they changed their license... an open source project, but very much associated with MongoDB, which basically pretty much controlled most of the contributions and made the decisions. And I think Databricks has the same ironclad hold on Delta Lake, but still the market pretty much associates Delta Lake with Databricks as an open source project. I mean, Iceberg is probably further advanced than Hudi in terms of mind share. And so what I see that breaking down to is essentially the Databricks open source versus the everything-else open source, the community open source. So I see a very similar type of breakdown repeating itself here. >> So by the way, Mongo has a conference next week; another data platform, so it's kind of not really relevant to this discussion... totally. But in a sense it is, because there's been a lot of discussion on earnings calls these last couple of weeks about consumption and who's exposed. Obviously people are concerned about Snowflake's consumption model. Mongo is maybe less exposed because Atlas is prominent in the portfolio, blah, blah, blah. But I wanted to bring up the little bit of controversy that we saw come out of the Snowflake earnings call, where the Evercore analyst asked Frank Slootman about discretionary spend. And Frank basically said, look, we're not discretionary, we are deeply operationalized. Whereas he kind of poo-pooed the lakehouse, or the data lake, et cetera, saying, oh yeah, data scientists will pull files out and play with them, that's really not our business. Do any of you have comments on that? Help us swing through that controversy. Who wants to take that one? >> Let's put it this way.
The SQL folks are from Venus and the data scientists are from Mars. So it really comes down to sort of that type of perception. The fact is that, traditionally with analytics, it was very SQL-oriented, and basically the quants were kind of off in their corner, where they were using SAS or where they were using Teradata. It's really a great leveler today, which is that, I mean, basically Python has become arguably one of the most popular programming languages, depending on what month you're looking at the Tiobe index. And of course, obviously SQL, as I tell the MongoDB folks, SQL is not going away. You have a large skills base out there. And so basically I see this breaking down to essentially, you're going to have each group that's going to have its own natural preferences for its home turf. And the fact that, let's say, the Python and Scala folks are using Databricks does not make them any less operational or mission critical than the SQL folks. >> Anybody else want to chime in on that one? >> Yeah, I totally agree with that. Python support in Snowflake is very nascent with all of Snowpark; all of the things outside of SQL, they're very much relying on partners to make things possible and make data science possible. And it's very early days. I think the bottom line, what we're going to see, is each of these camps is going to keep working on doing better at the thing that they don't do today, or they're new to, but they're not going to nail it. They're not going to be best of breed on both sides. So the SQL-centric companies and shops are going to do more data science on their database-centric platform. The data science driven companies might be doing more BI on their lakes with those vendors, and the companies that have highly distributed data, they're going to add fabrics, and maybe offload more of their BI onto those engines, like Dremio and Starburst. >> So I've asked you this before, but I'll ask you, Sanjeev, 'cause Snowflake and Databricks are such great examples, 'cause you have the data engineering crowd trying to go into data warehousing and you have the data warehousing guys trying to go into the lake territory. Snowflake has $5 billion on the balance sheet, and I've asked you before, I ask you again: doesn't there have to be a semantic layer between these two worlds? Does Snowflake go out and do M&A and maybe buy an AtScale or a Datameer? Or is that just sort of a bandaid? What are your thoughts on that, Sanjeev? >> I think the semantic layer is the metadata. The business metadata is extremely important. At the end of the day, the business folks would rather go to the business metadata than have to figure out, for example... let's say I want to update somebody's email address, and we have a lot of overhead with data residency laws and all that. I want my platform to give me the business metadata so I can write my business logic without having to worry about which database, which location. So having that semantic layer is extremely important. In fact, now we are taking it to the next level. Now we are saying that it's not just a semantic layer; it's all my KPIs, all my calculations. So how can I make those calculations independent of the compute engine, independent of the BI tool, and make them fungible? So more disaggregation of the stack, but it gives us more best of breed products that the customers have to worry about. >> So I want to ask you about the stack, the modern data stack, if you will.
And we always talk about injecting machine intelligence, AI into applications, making them more data driven. But when you look at the application development stack, it's separate, the database is tends to be separate from the data and analytics stack. Do those two worlds have to come together in the modern data world? And what does that look like organizationally? >> So organizationally even technically I think it is starting to happen. Microservices architecture was a first attempt to bring the application and the data world together, but they are fundamentally different things. For example, if an application crashes, that's horrible, but Kubernetes will self heal and it'll bring the application back up. But if a database crashes and corrupts your data, we have a huge problem. So that's why they have traditionally been two different stacks. They are starting to come together, especially with data ops, for instance, versioning of the way we write business logic. It used to be, a business logic was highly embedded into our database of choice, but now we are disaggregating that using GitHub, CICD the whole DevOps tool chain. So data is catching up to the way applications are. >> We also have databases, that trans analytical databases that's a little bit of what the story is with MongoDB next week with adding more analytical capabilities. But I think companies that talk about that are always careful to couch it as operational analytics, not the warehouse level workloads. So we're making progress, but I think there's always going to be, or there will long be a separate analytical data platform. >> Until data mesh takes over. (all laughing) Not opening a can of worms. >> Well, but wait, I know it's out of scope here, but wouldn't data mesh say, hey, do take your best of breed to Doug's earlier point. You can't be best of breed at everything, wouldn't data mesh advocate, data lakes do your data lake thing, data warehouse, do your data lake, then you're just a node on the mesh. (Tony laughs) Now you need separate data stores and you need separate teams. >> To my point. >> I think, I mean, put it this way. (laughs) Data mesh itself is a logical view of the world. The data mesh is not necessarily on the lake or on the warehouse. I think for me, the fear there is more in terms of, the silos of governance that could happen and the silo views of the world, how we redefine. And that's why and I want to go back to something what Sanjeev said, which is that it's going to be raising the importance of the semantic layer. Now does Snowflake that opens a couple of Pandora's boxes here, which is one, does Snowflake dare go into that space or do they risk basically alienating basically their partner ecosystem, which is a key part of their whole appeal, which is best of breed. They're kind of the same situation that Informatica was where in the early 2000s, when Informatica briefly flirted with analytic applications and realized that was not a good idea, need to redouble down on their core, which was data integration. The other thing though, that raises the importance of and this is where the best of breed comes in, is the data fabric. My contention is that and whether you use employee data mesh practice or not, if you do employee data mesh, you need data fabric. If you deploy data fabric, you don't necessarily need to practice data mesh. 
But data fabric at its core and admittedly it's a category that's still very poorly defined and evolving, but at its core, we're talking about a common meta data back plane, something that we used to talk about with master data management, this would be something that would be more what I would say basically, mutable, that would be more evolving, basically using, let's say, machine learning to kind of, so that we don't have to predefine rules or predefine what the world looks like. But so I think in the long run, what this really means is that whichever way we implement on whichever physical platform we implement, we need to all be speaking the same metadata language. And I think at the end of the day, regardless of whether it's a lake, warehouse or a lakehouse, we need common metadata. >> Doug, can I come back to something you pointed out? That those talking about bringing analytic and transaction databases together, you had talked about operationalizing those and the caution there. Educate me on MySQL HeatWave. I was surprised when Oracle put so much effort in that, and you may or may not be familiar with it, but a lot of folks have talked about that. Now it's got nowhere in the market, that no market share, but a lot of we've seen these benchmarks from Oracle. How real is that bringing together those two worlds and eliminating ETL? >> Yeah, I have to defer on that one. That's my colleague, Holger Mueller. He wrote the report on that. He's way deep on it and I'm not going to mock him. >> I wonder if that is something, how real that is or if it's just Oracle marketing, anybody have any thoughts on that? >> I'm pretty familiar with HeatWave. It's essentially Oracle doing what, I mean, there's kind of a parallel with what Google's doing with AlloyDB. It's an operational database that will have some embedded analytics. And it's also something which I expect to start seeing with MongoDB. And I think basically, Doug and Sanjeev were kind of referring to this before about basically kind of like the operational analytics, that are basically embedded within an operational database. The idea here is that the last thing you want to do with an operational database is slow it down. So you're not going to be doing very complex deep learning or anything like that, but you might be doing things like classification, you might be doing some predictives. In other words, we've just concluded a transaction with this customer, but was it less than what we were expecting? What does that mean in terms of, is this customer likely to turn? I think we're going to be seeing a lot of that. And I think that's what a lot of what MySQL HeatWave is all about. Whether Oracle has any presence in the market now it's still a pretty new announcement, but the other thing that kind of goes against Oracle, (laughs) that they had to battle against is that even though they own MySQL and run the open source project, everybody else, in terms of the actual commercial implementation it's associated with everybody else. And the popular perception has been that MySQL has been basically kind of like a sidelight for Oracle. And so it's on Oracles shoulders to prove that they're damn serious about it. >> There's no coincidence that MariaDB was launched the day that Oracle acquired Sun. Sanjeev, I wonder if we could come back to a topic that we discussed earlier, which is this notion of consumption, obviously Wall Street's very concerned about it. Snowflake dropped prices last week. 
I've always felt like, hey, the consumption model is the right model. I can dial it down in when I need to, of course, the street freaks out. What are your thoughts on just pricing, the consumption model? What's the right model for companies, for customers? >> Consumption model is here to stay. What I would like to see, and I think is an ideal situation and actually plays into the lakehouse concept is that, I have my data in some open format, maybe it's Parquet or CSV or JSON, Avro, and I can bring whatever engine is the best engine for my workloads, bring it on, pay for consumption, and then shut it down. And by the way, that could be Cloudera. We don't talk about Cloudera very much, but it could be one business unit wants to use Athena. Another business unit wants to use some other Trino let's say or Dremio. So every business unit is working on the same data set, see that's critical, but that data set is maybe in their VPC and they bring any compute engine, you pay for the use, shut it down. That then you're getting value and you're only paying for consumption. It's not like, I left a cluster running by mistake, so there have to be guardrails. The reason FinOps is so big is because it's very easy for me to run a Cartesian joint in the cloud and get a $10,000 bill. >> This looks like it's been a sort of a victim of its own success in some ways, they made it so easy to spin up single note instances, multi note instances. And back in the day when compute was scarce and costly, those database engines optimized every last bit so they could get as much workload as possible out of every instance. Today, it's really easy to spin up a new node, a new multi node cluster. So that freedom has meant many more nodes that aren't necessarily getting that utilization. So Snowflake has been doing a lot to add reporting, monitoring, dashboards around the utilization of all the nodes and multi node instances that have spun up. And meanwhile, we're seeing some of the traditional on-prem databases that are moving into the cloud, trying to offer that freedom. And I think they're going to have that same discovery that the cost surprises are going to follow as they make it easy to spin up new instances. >> Yeah, a lot of money went into this market over the last decade, separating compute from storage, moving to the cloud. I'm glad you mentioned Cloudera Sanjeev, 'cause they got it all started, the kind of big data movement. We don't talk about them that much. Sometimes I wonder if it's because when they merged Hortonworks and Cloudera, they dead ended both platforms, but then they did invest in a more modern platform. But what's the future of Cloudera? What are you seeing out there? >> Cloudera has a good product. I have to say the problem in our space is that there're way too many companies, there's way too much noise. We are expecting the end users to parse it out or we expecting analyst firms to boil it down. So I think marketing becomes a big problem. As far as technology is concerned, I think Cloudera did turn their selves around and Tony, I know you, you talked to them quite frequently. I think they have quite a comprehensive offering for a long time actually. They've created Kudu, so they got operational, they have Hadoop, they have an operational data warehouse, they're migrated to the cloud. They are in hybrid multi-cloud environment. Lot of cloud data warehouses are not hybrid. They're only in the cloud. >> Right. 
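(A brief aside on the guardrails Sanjeev mentions: in practice, consumption control ends up being part FinOps discipline and part platform-level limits. As one hedged illustration, Snowflake exposes resource monitors and warehouse auto-suspend settings that can be scripted, as sketched below; the names and quotas are invented, and other engines have their own equivalents.)

```python
# Hypothetical guardrail setup, run through a Snowpark session; the same SQL
# could be issued from any client. Resource monitors require ACCOUNTADMIN.
from snowflake.snowpark import Session

session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "role": "ACCOUNTADMIN",
}).create()

# Cap monthly credits and suspend the warehouse if the quota is hit.
session.sql("""
    CREATE OR REPLACE RESOURCE MONITOR ANALYTICS_MONITOR
      WITH CREDIT_QUOTA = 200
      TRIGGERS ON 80 PERCENT DO NOTIFY
               ON 100 PERCENT DO SUSPEND
""").collect()

# Attach the monitor and make idle compute shut itself down.
session.sql("""
    ALTER WAREHOUSE ANALYTICS_WH SET
      RESOURCE_MONITOR = ANALYTICS_MONITOR
      AUTO_SUSPEND = 60
      AUTO_RESUME = TRUE
""").collect()
```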
I think what Cloudera has done the most successful has been in the transition to the cloud and the fact that they're giving their customers more OnRamps to it, more hybrid OnRamps. So I give them a lot of credit there. They're also have been trying to position themselves as being the most price friendly in terms of that we will put more guardrails and governors on it. I mean, part of that could be spin. But on the other hand, they don't have the same vested interest in compute cycles as say, AWS would have with EMR. That being said, yes, Cloudera does it, I think its most powerful appeal so of that, it almost sounds in a way, I don't want to cast them as a legacy system. But the fact is they do have a huge landed legacy on-prem and still significant potential to land and expand that to the cloud. That being said, even though Cloudera is multifunction, I think it certainly has its strengths and weaknesses. And the fact this is that yes, Cloudera has an operational database or an operational data store with a kind of like the outgrowth of age base, but Cloudera is still based, primarily known for the deep analytics, the operational database nobody's going to buy Cloudera or Cloudera data platform strictly for the operational database. They may use it as an add-on, just in the same way that a lot of customers have used let's say Teradata basically to do some machine learning or let's say, Snowflake to parse through JSON. Again, it's not an indictment or anything like that, but the fact is obviously they do have their strengths and their weaknesses. I think their greatest opportunity is with their existing base because that base has a lot invested and vested. And the fact is they do have a hybrid path that a lot of the others lack. >> And of course being on the quarterly shock clock was not a good place to be under the microscope for Cloudera and now they at least can refactor the business accordingly. I'm glad you mentioned hybrid too. We saw Snowflake last month, did a deal with Dell whereby non-native Snowflake data could access on-prem object store from Dell. They announced a similar thing with pure storage. What do you guys make of that? Is that just... How significant will that be? Will customers actually do that? I think they're using either materialized views or extended tables. >> There are data rated and residency requirements. There are desires to have these platforms in your own data center. And finally they capitulated, I mean, Frank Klutman is famous for saying to be very focused and earlier, not many months ago, they called the going on-prem as a distraction, but clearly there's enough demand and certainly government contracts any company that has data residency requirements, it's a real need. So they finally addressed it. >> Yeah, I'll bet dollars to donuts, there was an EBC session and some big customer said, if you don't do this, we ain't doing business with you. And that was like, okay, we'll do it. >> So Dave, I have to say, earlier on you had brought this point, how Frank Klutman was poo-pooing data science workloads. On your show, about a year or so ago, he said, we are never going to on-prem. He burnt that bridge. (Tony laughs) That was on your show. >> I remember exactly the statement because it was interesting. He said, we're never going to do the halfway house. And I think what he meant is we're not going to bring the Snowflake architecture to run on-prem because it defeats the elasticity of the cloud. So this was kind of a capitulation in a way. 
But I think it still preserves his original intent sort of, I don't know. >> The point here is that every vendor will poo-poo whatever they don't have until they do have it. >> Yes. >> And then it'd be like, oh, we are all in, we've always been doing this. We have always supported this and now we are doing it better than others. >> Look, it was the same type of shock wave that we felt basically when AWS at the last moment at one of their reinvents, oh, by the way, we're going to introduce outposts. And the analyst group is typically pre briefed about a week or two ahead under NDA and that was not part of it. And when they dropped, they just casually dropped that in the analyst session. It's like, you could have heard the sound of lots of analysts changing their diapers at that point. >> (laughs) I remember that. And a props to Andy Jassy who once, many times actually told us, never say never when it comes to AWS. So guys, I know we got to run. We got some hard stops. Maybe you could each give us your final thoughts, Doug start us off and then-- >> Sure. Well, we've got the Snowflake Summit coming up. I'll be looking for customers that are really doing data science, that are really employing Python through Snowflake, through Snowpark. And then a couple weeks later, we've got Databricks with their Data and AI Summit in San Francisco. I'll be looking for customers that are really doing considerable BI workloads. Last year I did a market overview of this analytical data platform space, 14 vendors, eight of them claim to support lakehouse, both sides of the camp, Databricks customer had 32, their top customer that they could site was unnamed. It had 32 concurrent users doing 15,000 queries per hour. That's good but it's not up to the most demanding BI SQL workloads. And they acknowledged that and said, they need to keep working that. Snowflake asked for their biggest data science customer, they cited Kabura, 400 terabytes, 8,500 users, 400,000 data engineering jobs per day. I took the data engineering job to be probably SQL centric, ETL style transformation work. So I want to see the real use of the Python, how much Snowpark has grown as a way to support data science. >> Great. Tony. >> Actually of all things. And certainly, I'll also be looking for similar things in what Doug is saying, but I think sort of like, kind of out of left field, I'm interested to see what MongoDB is going to start to say about operational analytics, 'cause I mean, they're into this conquer the world strategy. We can be all things to all people. Okay, if that's the case, what's going to be a case with basically, putting in some inline analytics, what are you going to be doing with your query engine? So that's actually kind of an interesting thing we're looking for next week. >> Great. Sanjeev. >> So I'll be at MongoDB world, Snowflake and Databricks and very interested in seeing, but since Tony brought up MongoDB, I see that even the databases are shifting tremendously. They are addressing both the hashtag use case online, transactional and analytical. I'm also seeing that these databases started in, let's say in case of MySQL HeatWave, as relational or in MongoDB as document, but now they've added graph, they've added time series, they've added geospatial and they just keep adding more and more data structures and really making these databases multifunctional. So very interesting. >> It gets back to our discussion of best of breed, versus all in one. 
And it's likely Mongo's path or part of their strategy of course, is through developers. They're very developer focused. So we'll be looking for that. And guys, I'll be there as well. I'm hoping that we maybe have some extra time on theCUBE, so please stop by and we can maybe chat a little bit. Guys as always, fantastic. Thank you so much, Doug, Tony, Sanjeev, and let's do this again. >> It's been a pleasure. >> All right and thank you for watching. This is Dave Vellante for theCUBE and the excellent analyst. We'll see you next time. (upbeat music)
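(To ground the "transactional capabilities on a lake" thread that ran through this panel, the change data capture, upsert, and concurrent-write work Sanjeev attributes to Delta, Hudi, and Iceberg, here is a hedged sketch of what that looks like with one of those table formats. With Delta Lake, an upsert against files in object storage becomes a single MERGE; paths and columns below are invented, and Hudi and Iceberg offer their own equivalents.)

```python
# Illustrative upsert (MERGE) into a Delta Lake table with Spark; names are examples.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("lakehouse-upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Incoming change records, e.g. from a CDC feed landed as JSON.
updates = spark.read.json("s3://example-bucket/cdc/customers/")

target = DeltaTable.forPath(spark, "s3://example-bucket/lake/customers/")

# One MERGE handles inserts and updates with ACID guarantees on the lake.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```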

Published Date : Jun 2 2022


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DougPERSON

0.99+

Dave VellantePERSON

0.99+

DavePERSON

0.99+

TonyPERSON

0.99+

UberORGANIZATION

0.99+

FrankPERSON

0.99+

Frank KlutmanPERSON

0.99+

Tony BaersPERSON

0.99+

MarsLOCATION

0.99+

Doug HenschenPERSON

0.99+

2020DATE

0.99+

AWSORGANIZATION

0.99+

VenusLOCATION

0.99+

OracleORGANIZATION

0.99+

2012DATE

0.99+

DatabricksORGANIZATION

0.99+

DellORGANIZATION

0.99+

HortonworksORGANIZATION

0.99+

Holger MuellerPERSON

0.99+

Andy JassyPERSON

0.99+

last yearDATE

0.99+

$5 billionQUANTITY

0.99+

$10,000QUANTITY

0.99+

14 vendorsQUANTITY

0.99+

Last yearDATE

0.99+

last weekDATE

0.99+

San FranciscoLOCATION

0.99+

SanjMoORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

8,500 usersQUANTITY

0.99+

SanjeevPERSON

0.99+

InformaticaORGANIZATION

0.99+

32 concurrent usersQUANTITY

0.99+

twoQUANTITY

0.99+

Constellation ResearchORGANIZATION

0.99+

MongoORGANIZATION

0.99+

Sanjeev MohanPERSON

0.99+

AhanaORGANIZATION

0.99+

DaaSORGANIZATION

0.99+

EMRORGANIZATION

0.99+

32QUANTITY

0.99+

AtlasORGANIZATION

0.99+

DeltaORGANIZATION

0.99+

SnowflakeORGANIZATION

0.99+

PythonTITLE

0.99+

eachQUANTITY

0.99+

AthenaORGANIZATION

0.99+

next weekDATE

0.99+

Chris Wright, Red Hat | Red Hat Summit 2022


 

(bright upbeat music) >> We're back at the Red Hat Summit at the Seaport in Boston, theCUBE's coverage. This is day two. Dave Vellante and Paul Gillin. Chris Wright is here, the chief technology officer at Red Hat. Chris, welcome back to theCUBE. Good to see you. >> Yeah, likewise. Thanks for having me. >> You're very welcome. So, you were saying today in your keynote. We got a lot of ground to cover here, Chris. You were saying that, you know, software, Andreessen's software is eating the world. Software ate the world, is what you said. And now we have to think about AI. AI is eating the world. What does that mean? What's the implication for customers and developers? >> Well, a lot of implications. I mean, to start with, just acknowledging that software isn't this future dream. It is the reality of how businesses run today. It's an important part of understanding what you need to invest in to make yourself successful, essentially, as a software company, where all companies are building technology to differentiate themselves. Take that, all that discipline, everything we've learned in that context, bring in AI. So, we have a whole new set of skills to learn, tools to create and discipline processes to build around delivering data-driven value into the company, just the way we've built software value into companies. >> I'm going to cut right to the chase because I would say data is eating software. Data and AI, to me, are like, you know, kissing cousins. So here's what I want to ask you as a technologist. So we have the application development stack, if you will. And it's separate from the data and analytics stack. All we talk about is injecting AI into applications, making them data-driven. You just used that term. But they're totally two totally separate stacks, organizationally and technically. Are those worlds coming together? Do they have to come together in order for the AI vision to be real? >> Absolutely, so, totally agree with you on the data piece. It's inextricably linked to AI and analytics and all of the, kind of, machine learning that goes on in creating intelligence for applications. The application connection to a machine learning model is fundamental. So, you got to think about not just the software developer or the data scientist, but also there's a line of business in there that's saying, "Here's the business outcomes I'm looking for." It's that trifecta that has to come together to make advancements and really make change in the business. So, you know, some of the folks we had on stage today were talking about exactly that. Which is, how do you bring together those three different roles? And there's technology that can help bridge gaps. So, we look at what we call intelligent applications. Embed intelligence into the application. That means you surface a machine learning model with APIs to make it accessible into applications, so that developers can query a machine learning model. You need to do that with some discipline and rigor around, you know, what does it mean to develop this thing and life cycle it and integrate it into this bigger picture. >> So the technology is capable of coming together. You know, Amanda Purnell is coming on next. >> Oh, great. >> 'Cause she was talking about, you know, getting, you know, insights in the hands of nurses and they're not coders. >> That's right. >> But they need data. But I feel like it's, well, I feel very strongly that it's an organizational challenge, more so. I think you're confirming. It's not really a technical challenge. 
I can insert a column into the application development stack and bring TensorFlow in or AI or data, whatever it is. It's not a technical issue. Is that fair? >> Well, there are some technical challenges. So, for example, data scientists. Kind of a scarce kind of skillset within any business. So, how do you scale data scientists into the developer population? Which will be a large population within an organization. So, there's tools that we can use to bring those worlds together. So, you know, it's not just TensorFlow but it's the entire workflow and platform of how you share the data, the data training models and then just deploying models into a runtime production environment. That looks similar to software development processes but it's slightly different. So, that's where a common platform can help bridge the gaps between that developer world and the data science world. >> Where is Red Hat's position in this evolving AI stack? I mean, you're not into developing tool sets like TensorFlow, right? >> Yeah, that's right. If you think about a lot of what we do, it's aggregate content together, bring a distribution of tools, giving flexibility to the user. Whether that's a developer, a system administrator, or a data scientist. So our role here is, one, make sure we work with our hardware partners to create accelerated environments for AI. So, that's sort of an enablement thing. The other is bring together those disparate tools into a workflow and give a platform that enables data scientists to choose which, is it PyTorch, is it TensorFlow? What's the best tool for you? And assemble that tool into your workflow and then proceed training, doing inference, and, you know, tuning and lather, rinse, repeat. >> So, to make your platform then, as receptive as possible, right? You're not trying to pick winners in what languages to work with or what frameworks? >> Yeah, that's right. I mean, picking winners is difficult. The world changes so rapidly. So we make big bets on key areas and certainly TensorFlow would be a great example. A lot of community attraction there. But our goal isn't to say that's the one tool that everybody should use. It's just one of the many tools in your toolbox. >> There are risks of not pursuing this, from an organization's perspective. A customer, they kind of get complacent and, you know, they could get disrupted, but there's also an industry risk. If the industry can't deliver this capability, what are the implications if the industry doesn't step up? I believe the industry will, just 'cause it always does. But what about customer complacency? We certainly saw that a lot with digital transformation and COVID sort of forced us to march to digital. What should we be thinking about of the implications of not leaning in? >> Well, I think that the disruption piece is key because there's always that spectrum of businesses. Some are more leaning in, invested in the future. Some are more laggards and kind of wait and see. Those leaning in tend to be separating themselves, wheat from the chaff. So, that's an important way to look at it. Also, if you think about it, many data science experiments fail within businesses. I think part of that is not having the rigor and discipline around connecting, not just the tools and data scientists together, but also looking at what business outcomes are you trying to drive? If you don't bring those things together then it sort of can be too academic and the business doesn't see the value. And so there's also the question of transparency. 
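As an aside, the point made earlier about surfacing a machine learning model behind an API so that developers can simply query it can be illustrated with a minimal sketch. Everything specific here is hypothetical: the endpoint URL, the payload shape, and the feature names stand in for whatever inference service (e.g., a KServe- or Seldon-style model server) a data science team actually publishes; the transcript does not name one.

```python
# Minimal sketch: a developer querying a served ML model over a REST API.
# The endpoint, payload shape, and feature names are hypothetical placeholders
# for whatever model-serving service an organization actually exposes.
import requests

def predict(features: dict) -> dict:
    # POST the feature payload to the (hypothetical) inference endpoint.
    resp = requests.post(
        "https://models.example.internal/fraud-score/v1/predict",
        json={"instances": [features]},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Example call with made-up feature names.
    print(predict({"amount": 125.40, "merchant_id": "m-102", "hour_of_day": 23}))
```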
How do you understand why is a model predicting you should take a certain action or do a certain thing? As an industry, I think we need to focus on bringing tools together, bringing data together, and building better transparency into how models work. >> There's also a lot of activity around governance right now, AI governance. Particularly removing bias from ML models. Is that something that you are guiding your customers on? Or, how important do you feel this is at this point of AI's development? >> It's really important. I mean, the challenge is finding it and understanding, you know, we bring data that maybe already carrying a bias into a training process and building a model around that. How do you understand what the bias is in that model? There's a lot of open questions there and academic research to try to understand how you can ferret out, you know, essentially biased data and make it less biased or unbiased. Our role is really just bringing the toolset together so that you have the ability to do that as a business. So, we're not necessarily building the next machine learning algorithm or models or ways of building transparency into models, as much as building the platform and bringing the tools together that can give you that for your own organization. >> So, it brings up the question of architectures. I've been sort of a casual or even active observer of data architectures over the last, whatever, 15 years. They've been really centralized. Our data teams are highly specialized. You mentioned data scientists, but there's data engineers and there's data analysts and very hyper specialized roles that don't really scale that well. So there seems to be a move, talk about edge. We're going to talk about edge. The ultimate edge, which is space, very cool. But data is distributed by its very nature. We have this tendency to try to force it into this, you know, monolithic system. And I know that's a pejorative, but for good reason. So I feel like there's this push in organizations to enable scale, to decentralize data architectures. Okay, great. And put data in the hands of those business owners that you talked about earlier. The domain experts that have business context. Two things, two problems that brings up, is you need infrastructure that's self-service, in that instance. And you need, to your point, automated and computational governance. Those are real challenges. What do you see in terms of the trends to decentralize data architectures? Is it even feasible that everybody wants a single version of the truth, centralized data team, right? And they seem to be at odds. >> Yeah, well I think we're coming from a history informed by centralization. That's what we understand. That's what we kind of gravitate towards, but the reality, as you put it, the world's just distributed. So, what we can do is look at federation. So, it's not necessarily centralization but create connections between data sources which requires some policy and governance. Like, who gets access to what? And also think about those domain experts maybe being the primary source of surfacing a model that you don't necessarily have to know how it was trained or what the internals are. You're using it more to query it as a, you know, the domain expert produces this model, you're in a different part of the organization just leveraging some work that somebody else has done. Which is how we build software, reusable components in software. 
So, you know, I think building that mindset into data and the whole process of creating value from data is going to be a really critical part of how we roll forward. >> So, there are two things in your keynote. One, that I was kind of in awe of. You wanted to be an astronaut when you were a kid. You know, I mean, I watched the moon landing and I was like, "I'm never going up into space." So, I'm in awe of that. >> Oh, I got the space helmet picture and all that. >> That's awesome, really, you know, hat's off to you. The other one really pissed me off, which was that you're a better skier 'cause you got some device in your boot. >> Oh, it's amazing. >> And the reason it angered me is 'cause I feel like it's the mathematicians taking over baseball, you know. Now, you're saying, you're a better skier because of that. But those are two great edge examples and there's a billion of them, right? So, talk about your edge strategy. Kind of, your passion there, how you see that all evolving. >> Well, first of all, we see the edge as a fundamental part of the future of computing. So in that centralization, decentralization pendulum swing, we're definitely on the path towards distributed computing and that is edge and that's because of data. And also because of the compute capabilities that we have in hardware. Hardware gets more capable, lower power, can bring certain types of accelerators into the mix. And you really create this world where what's happening in a virtual context and what's happening in a physical context can come together through this distributed computing system. Our view is, that's hybrid. That's what we've been working on for years. Just the difference was maybe, originally it was focused on data center, cloud, multi-cloud and now we're just extending that view out to the edge and you need the same kind of consistency for development, for operations, in the edge that you do in that hybrid world. So that's really where we're placing our focus and then it gets into all the different use cases. And you know, really, that's the fun part. >> I'd like to shift gears a little bit 'cause another remarkable statistic you cited during your keynote was, it was a Forrester study that said 99% of all applications now have open source in them. What are the implications of that for those who are building applications? In terms of license compliance and more importantly, I think, confidence in the code that they're borrowing from open source projects. >> Well, I think, first and foremost, it says open source has won. We see that that was audited code bases which means there's mission critical code bases. We see that it's pervasive, it's absolutely everywhere. And that means developers are pulling dependencies into their applications based on all of the genius that's happening in open source communities. Which I think we should celebrate. Right after we're finished celebrating we got to look at what are the implications, right? And that shows up as, are there security vulnerabilities that become ubiquitous because we're using similar dependencies? What is your process for vetting code that you bring into your organization and push into production? You know that process for the code you author, what about your dependencies? And I think that's an important part of understanding and certainly there are some license implications. What are you required to do when you use that code? You've been given that code on a license from the open source community, are you compliant with that license? 
Some of those are reasonably well understood. Some of those are, you know, newer to the enterprise. So I think we have to look at this holistically and really help enterprises build safe application code that goes into production and runs their business. >> We saw Intel up in the keynotes today. We heard from Nvidia, both companies are coming on. We know you've done a lot of work with ARM over the years. I think Graviton was one of the announcements this week. So, love to see that. I want to run something by you as a technologist. The premise is, you know, we used to live in this CPU centric world. We marched to the cadence of Moore's Law and now we're seeing the combinatorial factors of CPU, GPU, NPU, accelerators and other supporting components. With IO and controllers and NICs all adding up. It seems like we're shifting from a processor centric world to a connect centric world on the hardware side. That first of all, do you buy that premise? And does hardware matter anymore with all the cloud? >> Hardware totally matters. I mean the cloud tried to convince us that hardware doesn't matter and it actually failed. And the reason I say that is because if you go to a cloud, you'll find 100s of different instance types that are all reflections of different types of assemblies of hardware. Faster IO, better storage, certain sizes of memory. All of that is a reflection of, applications need certain types of environments for acceleration, for performance, to do their job. Now I do think there's an element of, we're decomposing compute into all of these different sort of accelerators and the only way to bring that back together is connectivity through the network. But there's also SOCs when you get to the edge where you can integrate the entire system onto a pretty small device. I think the important part here is, we're leveraging hardware to do interesting work on behalf of applications that makes hardware exciting. And as an operating system geek, I couldn't be more thrilled, because that's what we do. We enable hardware, we get down into the bits and bytes and poke registers and bring things to life. There's a lot happening in the hardware world and applications can't always follow it directly. They need that level of indirection through a software abstraction and that's really what we're bringing to life here. >> We've seen now hardware specific AI, you know, AI chips and AI SOCs emerge. How do you make decisions about what you're going to support or do you try to support all of them? >> Well, we definitely have a breadth view of support and we're also just driven by customer demand. Where our customers are interested we work closely with our partners. We understand what their roadmaps are. We plan together ahead of time and we know where they're making investments and we work with our customers. What are the best chips that support their business needs and we focus there first but it ends up being a pretty broad list of hardware that we support. >> I could pick your brain for an hour. We didn't even get into super cloud, Chris. But, thanks so much for coming on theCUBE. It's great to have you. >> Absolutely, thanks for having me. >> All right. Thank you for watching. Keep it right there. Paul Gillin, Dave Vellante, theCUBE's live coverage of Red Hat Summit 2022 from Boston. We'll be right back. (mellow music)

Published Date : May 11 2022


Ashesh Badani, Red Hat | Red Hat Summit 2022


 

>> Welcome back to the Seaport in Boston, Massachusetts, with the city crazy with Bruins and Celtics talk, but we're here talking Red Hat, Linux, OpenShift, Ansible. And Ashesh Badani is here. He's the senior vice president and the head of products at Red Hat, fresh off the keynotes, had Amex up on the stage. Great to see you face to face. Amazing that we're here now after two years of the isolation economy. Welcome back. >> Thank you. Great to see you again as well, and you as well, Paul. >> Yeah. So, no shortage of announcements from Red Hat this week. Paul wrote a piece on SiliconANGLE.com. I got my yellow highlights, I've been through all the announcements. Which is your favorite baby? >> Hard for me to choose, hard for me to choose. I'll talk about RHEL 9, right? Well, nine's exciting, and in a weird way it's exciting because it's boring, right? Because it's consistent. Three years ago we committed to releasing a major RHEL every three years, right? So customers, partners, users can plan for it. So we released the latest version of RHEL. In between, we've been delivering releases every six months as well, minor releases, a lot of capabilities that are bundled in around security, automation, edge management. And then RHEL is also the foundation of the work we announced with GM, with the in-vehicle operating system. So, you know, that's extremely exciting news for us as well, and the collaboration that we're doing with them. And then a whole host of other announcements around, you know, cloud services, work around DevSecOps and so on. So yeah, a lot of news, a lot of announcements. I would say RHEL 9 and the work with GM probably, you know, come right up to the top. >> I wanted to get to one aspect of the RHEL 9 announcement, that is the role CentOS Stream plays in that development. Now, in December, I think it was, Red Hat discontinued development or support for CentOS and moved to CentOS Stream. I'm still not clear what the difference is between the two. Can you clarify that? >> I think we got into a situation, especially with many customers, many partners as well, that, you know, didn't sort of quite exactly get a sense of where CentOS was from a life cycle perspective. So was it upstream to RHEL? Was it downstream to RHEL? What's the life cycle for it itself as well? And then there became some sort of, you know, implied notions around what that looked like. And so what we decided was to say, well, we'll make a really clean break, and we'll say CentOS Stream is the upstream for enterprise Linux, from day one itself. Partners, you know, software partners, hardware partners can collaborate with us to develop RHEL and then take it all the way through the life cycle, right? So now it becomes a true upstream, a true place for development for us, and then RHEL essentially comes out as a series of releases based on the work that we do in a fast-moving CentOS environment. >> But wasn't CentOS essentially that upstream development environment to begin with? What's the difference between CentOS Stream? >> Yeah, it wasn't quite upstream, it was actually a little bit downstream. >> Yeah, it was kind of bi-directional. >> Yeah. And so then, you know, that sort of became an implied life cycle to it, when there really wasn't one, but it just became one because of some usage and adoption. And so now this really clarifies the relationship between the two. We've heard feedback, for example, from software partners, users saying, hey, what do I do for development, because I used, you know, CentOS in the past? We're like, yup, we have RHEL for developers available, we have RHEL for small teams available, we have RHEL available for non-profit organizations. And so we've made RHEL now available in various form factors for the needs that folks had, and were perhaps using CentOS for, because there was no such alternative for RHEL historically. So now it's this clarity. So that's really the key point there. >> So, language matters a lot in the technology business. We've seen it over the years, the industry coalesces around, you know, terminology. Whether it was the PC era, everything was PC this, PC that, the internet era, and certainly the cloud. We learned a lot of language from the likes of, you know, AWS: two-pizza teams and working backwards and things like that became commonplace. Hybrid and multi-cloud are kind of the parlance of the day. You guys use hybrid. You and I have talked about this. I feel like there's something new coming. I don't think my term of supercloud is necessarily the right terminology, but it signifies something different, and I feel like your announcements point to that. Within your hybrid umbrella, point being, so much talk about the edge, and we heard Paul Cormier talk about new hardware architectures, and you're seeing that at the edge, you know, what you're doing with the in-vehicle operating system. These are new. The cloud isn't just a bunch of remote services in the cloud anymore. It's on-prem, it's a cloud, it's cross-clouds, it's now going out to the edge. It's something new and different. I think hybrid is your sort of term for that, but it feels like it's transcending hybrid. What are your thoughts? >> You know, really, really great question, actually. Since you and I talked, Dave, I've been spending some time, you know, sort of noodling just over that, right? And you're right, right? There's probably some terminology, something, you know, that will get developed, you know, either by us or, you know, in collaboration with the industry, you know, where we sort of almost have the connection, almost like a metacloud, right, that we're sort of working our way towards. Because there's, if you will, you know, the cloud, right? So, you know, on premise, you know, virtualized, bare metal, by the way, you know, increasingly interesting and important. You know, we do a lot of work with Nvidia, folks want to run specific workloads there. We announced support for Arm, right, another now popular architecture, especially as we go out to the edge. So obviously there's private cloud, public cloud, then the edge becomes a continuum now, you know, on that process. We actually have a major shipping company, a cruise lines, that's talking about using OpenShift on cruise lines, right? So, you know, that's the edge, right? Last year we had Verizon talking about, you know, 5G and, you know, RAN and the next generation there. Then that's the edge. When we talk to retail, the storefront's the edge, right? You talk to a bank, you know, the bank environment's here. So everyone's got a different kind of definition of edge. We're working with them. And then when we, you know, announce this collaboration with GM, right, now the edge there becomes the automobile. So if you think of this as a continuum, right, you know, bare metal, private cloud, public cloud, take it out to the edge, now we're sort of almost, you know, living in a world of, you know, a little bit of abstractions, and making sure that we are focused on where data is being generated, and then how can we help ensure that we're providing a consistent experience regardless of, you know, where. >> Metacloud, because I can work in NFTs, I can work a little bit.
>> We're going to get through this whole thing without saying metaverse, I was hoping. I do want to ask you about the edge and the proliferation of hardware platforms. Paul Cormier mentioned this during the keynote today, hardware is becoming important. There's a lot of people building hardware, it's in development now for areas like intelligent devices and AI. How does this influence your development priorities? You have all these different platforms that you need to support. >> Yeah, so we think about that a lot, mostly because we have engagements with so many partners, hardware, right? So obviously there's more traditional partners, I'd say like the Dells and the HPEs that we work with, we've historically worked with them, also working with them in newer areas with regard to appliances that are being developed, and then the work that we do with partners like Nvidia, or new architectures like Arm. And so our perspective is this will be use case driven more than anything else, right? So there are certain environments, right, where you have Arm-based devices, other environments where you've got specific workloads that can take advantage of being built on GPUs, that we'll see increasingly being used, especially to address that problem, and then provide a solution towards that. So our belief has always been, look, we're going to give you a consistent platform, a consistent abstraction across all these, you know, pieces of hardware, and so you, Mr. and Miss Customer, make the best choice for yourself. >> A couple other areas we have to hit on. I want to talk about cloud services, we've got to talk about security, leave time to get there. But why the push to cloud services? What's driving that? >> It's actually customers, they're driving it, right? So we have customers consistently been asking us, saying, you know, love what you give us, right, want to make sure that's available to us when we consume in the cloud. So we've made RHEL available, for example, on demand, right? You can consume this directly via public cloud consoles. We are now making it available via marketplaces. Talked about Ansible available as a managed service on Azure, OpenShift, of course, available as a managed service in multiple clouds. All of this also is because, you know, we've got customers who've got these committed spends that they have, you know, with cloud providers. They want to make sure that the environments that they're using are also counting towards that. At the same time, give them flexibility, give them the choice, right? If in certain situations they want to run in the data center, great, we have that solution for them. Other cases they want to procure from the cloud and run it there, we're happy to support them there as well. >> Let's talk about security, because you have a lot of announcements, like security everywhere, and then some specific announcements as well. I always think about these days in the context of the SolarWinds supply chain hack: how would this have affected it? But tell us about what's going on in security, your philosophy there, and the announcements that you guys made. >> So our security announcements actually span our entire portfolio, right? And that's not an accident, right? That's by design, because, you know, we've really been thinking and emphasizing, you know, how we ensure that the security profile is raised for users, both from a malicious perspective and also helping with accidental issues, right? So both matter. So, one, huge amounts of open source software, you know, out in the world, and then estimates are, you know, one in ten, right, has some kind of security vulnerability in place. A massive amount of change in where software is being developed, right? So the rate of change, for example, in Kubernetes is dramatic, right? Much more than even Linux, right? Entire parts of Kubernetes get rewritten over a three-year period of time. So as you introduce all that, right, being able to think, for example, about, you know, what's known as shift-left security, or DevSecOps, right? How do we make sure we move security closer to where development is actually done? How do we ensure we give you a pattern? So, you know, we introduced a software supply chain pattern via OpenShift, delivers a complete stack of code that, you know, you can go off and run, that follows best practices, including, for example, for developers, you know, with GitOps and support on the pipelines front. A whole bunch of security capabilities in RHEL, a new image integrity measurement architecture which allows for a better ability to see, in a post-install environment, what the integrity of the packages is, signing technology being incorporated into OpenShift as well as Ansible. So it's a long, long list of capabilities and features, and then also more and more defaults that we're putting in place that make it easier, for example, for someone not to hurt themselves accidentally on the security front. >> I noticed that today's batch of announcements included support within OpenShift Pipelines for Sigstore, which is an open source project that was birthed actually at Red Hat, right? We haven't heard a whole lot about it. How important is Sigstore to, you know, your future product direction? >> Yeah, so look, I think of that, you know, as work that's being done out of our CTO's office, and obviously security is a big focus area for them. Sigstore's a great example of saying, look, how can we verify content that's in containers, make sure it's, you know, digitally signed, that it's appropriate to be deployed across a bunch of environments? But that thinking isn't maybe unique for us on the container side, mostly because we have, you know, two decades or more of thinking about that on the RHEL side. And fundamentally, containers are being built on Linux, right? So a lot of the lessons that we've learned, a lot of the expertise that we've built over the years in Linux, now we're starting to, you know, use that same expertise and apply it to containers. And my guess is, increasingly, we're going to see more of the need for that, you know, into the edge as well. >> I picked up on that too. Let me ask a follow-up question on Sigstore. So if I'm a developer and I use that capability, it ensures the provenance of that code. Is it immutable, the signature? And the reason I ask is because, again, I think of everything in the context of the SolarWinds, where they were putting code into the supply chain and then removing it to see what happened and see how people reacted, and it's just a really scary environment. >> Yeah, the hardest part, you know, in these environments is actually the behavior change. So, what's an example of that? Packages built, verified, you know, by Red Hat: when they went from Red Hat to the actual user, have we been able to make sure we verify the integrity of all of those when they were put into use? And unless we have behavior that, you know, makes sure that we do that, then we find ourselves in trouble.
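A simplified illustration of the integrity check being described: compare the digest of the artifact you actually received against the digest the publisher attested to. To be clear, this is not the actual Sigstore or RHEL image integrity measurement implementation, which also verifies cryptographic signatures and certificate chains; the file name and published digest below are hypothetical.

```python
# Simplified illustration of the integrity idea discussed above: does the
# artifact on disk match what the publisher said they shipped? Real supply
# chain tooling (Sigstore, IMA, etc.) also checks signatures, not just hashes.
import hashlib
from pathlib import Path

def sha256_digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, expected_digest: str) -> bool:
    # True only if the local artifact matches the published digest.
    return sha256_digest(path) == expected_digest

if __name__ == "__main__":
    # Hypothetical artifact and published digest.
    artifact = Path("package.tar.gz")
    published = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    ok = artifact.exists() and verify(artifact, published)
    print("integrity ok" if ok else "mismatch or missing")
```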
In the earliest days of OpenShift, we used to get knocked a lot by developers, because they said, hey, this platform's really hard to use. We investigated: hey, look, why is that happening? So, by default we didn't allow for root access, you know, and so someone's using, you know, the OpenShift platform, and they're like, oh my gosh, I can't use it, right? I'm so used to having root access. We're like, no, that's actually sealed by default, because that's not a good security best practice. Now, over a period of time, when we, you know, explained that enough times, now behavior changes, like, yeah, that makes sense now, right? So even just kind of, you know, there's behaviors. The more that we can do, for example, in, you know, the shift left, which is one of the reasons, by the way, why we bought StackRox a year ago, right, for declarative security, container-native security, so threat detection, network segmentation, watching intrusions, you know, malicious behavior, is something that now we can, you know, essentially make native into development itself. >> All right, escape key, talk futures a little bit. So I went downstairs to the, you know, ask-the-experts area, and there was this awesome demo, I don't know if you've seen it, it's like a design thinking booth with how you build an application. I think they were using the WHO, one of their apps during COVID, and it, you know, shows the granularity of the stack and the development pipeline and all the steps that have to take place. And it strikes me of something we've talked about. So you've got this application development stack, if you will, and the database is there to support that, and then over here you've got this analytics stack, and it's separate. And we always talk about injecting more AI into apps, more data into apps, but there's separate stacks. Do you see a day where those two stacks can come together? And if not, how do we inject more data and AI into apps? What are your thoughts on that? >> So, great, that's another area we've talked about, Dave, in the past, right? So we definitely agree with that, right? And what final shape it takes, you know, I think we've got some ideas around that. What we started doing is starting to pick specific areas where we can start saying, let's go and see what kind of usage we get from customers around it. So, for example, we have OpenShift Data Science, which is basically a way for us to talk about MLOps, right? And, you know, how can we have a platform that allows for different models that you can use, we can test and train data, different frameworks that you can then deploy in an environment of your choice, right? And we run that for you and assist you in making sure that you're able to take the next steps you want with your machine learning algorithms. There's work that we've introduced at Summit around database as a service, so essentially a cloud service that allows for an easy way for customers to access either MongoDB or Cockroach in a cloud-native fashion. And all of these things that we're sort of, you know, experimenting with is to be able to say, look, how do we sort of bring the worlds closer together, right, of database, of data, of analytics, with a core platform and a core stack? Because again, right, this will become part of, you know, one continuum that we're going to work with. >> I like your continuum, that's, I think, really instructive. It's not a technical barrier, is what I'm hearing, it's maybe organizational mindset. I should be able to insert a column into my application development pipeline and insert the data, I mean, Kafka, TensorFlow in there. There's no technical reason I can't do that. It's just we've created these sort of separate stovepipe organizations. >> A hundred percent, right? So they're different teams, right? You've got the platform team or the ops team, and there's a separate dev team, there's a separate data team, there's a separate storage team, and each of them will work, you know, slightly differently, independently, right? So the question then is, I mean, that's sort of how DevOps came along, then you're like, oh, wait a minute, don't forget security, and now we're at DevSecOps, right? So the more of that that we can kind of bring together, I think the more convergence that we'll see. >> When I think about the in-vehicle OS, I see that that is a great use case for real-time AI inferencing, streaming data. I wanted to ask you about that real quickly, because just before the conference began, we got an announcement about GM. Your partnership with GM, it seems like this came together very quickly. Why is it so important for Red Hat? This is a whole new category of application that you're going to be working on. >> Yeah, so we've been working with GM, not publicly, for a while now, and it was very clear that, look, you know, GM believes this is the future, right? You know, electric vehicles into autonomous driving. And we're very keen to say, we believe there are a lot of attributes that we've got in RHEL that we can bring to bear, in a different form factor, to assist with the different needs that exist in this industry. So, one, it's interesting for us because we believe that's a use case that, you know, we can add value to, but it's also the future of automotive, right? So the opportunity to be able to say, look, we can take open source technology, we can collaborate out with the community to fundamentally help transform that industry towards where it wants to go, you know, that's just the passion that we have, that, you know, is what wakes us up every morning. >> You're leaning into that. Yeah, thank you for coming on theCUBE. Really appreciate your time and your insights, and have a great rest of the event. >> Thank you for having me. >> Metacloud, it's a thing. >> It's a thing, right? >> It's kind of there. We're going to see it emerge over the next decade. All right, you're watching theCUBE's coverage of Red Hat Summit 2022 from Boston. Keep it right there, be right back.

Published Date : May 10 2022


Power Panel: Does Hardware Still Matter


 

(upbeat music) >> The ascendancy of cloud and SaaS has shed new light on how organizations think about, pay for, and value hardware. Once sought-after skills for practitioners with expertise in hardware troubleshooting, configuring ports, tuning storage arrays, and maximizing server utilization have been superseded by demand for cloud architects, DevOps pros, and developers with expertise in microservices, containers, application development, and the like. Even a company like Dell, the largest hardware company in enterprise tech, touts that it has more software engineers than those working in hardware. Begs the question, is hardware going the way of COBOL? Well, not likely. Software has to run on something, but the labor needed to deploy, troubleshoot, and manage hardware infrastructure is shifting. At the same time, we've seen the value flow also shifting in hardware. Once a world dominated by x86 processors, value is flowing to alternatives like Nvidia and Arm-based designs. Moreover, other componentry like NICs, accelerators, and storage controllers is becoming more advanced, integrated, and increasingly important. The question is, does it matter? And if so, why does it matter and to whom? What does it mean to customers, workloads, OEMs, and the broader society? Hello and welcome to this week's Wikibon theCUBE Insights powered by ETR. In this breaking analysis, we've organized a special power panel of industry analysts and experts to address the question, does hardware still matter? Allow me to introduce the panel. Bob O'Donnell is president and chief analyst at TECHnalysis Research. Zeus Kerravala is the founder and principal analyst at ZK Research. David Nicholson is a CTO and tech expert. Keith Townsend is CEO and founder of CTO Advisor. And Marc Staimer is the chief dragon slayer at Dragon Slayer Consulting and oftentimes a Wikibon contributor. Guys, welcome to theCUBE. Thanks so much for spending some time here. >> Good to be here. >> Thanks. >> Thanks for having us. >> Okay, before we get into it, I just want to bring up some data from ETR. This is a survey that ETR does every quarter. It's a survey of about 1200 to 1500 CIOs and IT buyers, and I'm showing a subset of the taxonomy here. It's an XY axis, and the vertical axis is something called net score. That's a measure of spending momentum. It's essentially the percentage of customers that are spending more on a particular area than those spending less. You subtract the lesses from the mores and you get a net score. The horizontal axis is pervasiveness in the data set. Sometimes they call it market share. It's not like IDC market share. It's just the percentage of activity in the data set as a percentage of the total. That red 40% line, anything over that is considered highly elevated. And for the past, I don't know, eight to 12 quarters, the big four have been AI and machine learning, containers, RPA, and cloud. And cloud of course is very impressive because not only is it elevated on the vertical axis, but, you know, it's very highly pervasive on the horizontal. So what I've done is highlighted in red that historical hardware sector. The server, the storage, the networking, and even PCs, despite the work from home, are depressed in relative terms. And of course, data center colocation services. Okay, so you're seeing obviously hardware is not... People don't have the spending momentum today that they used to.
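A minimal sketch of the net score arithmetic as it is described here, the share of respondents spending more on a sector minus the share spending less; the published ETR methodology breaks responses into more categories than this, and the sample counts below are invented for illustration, not ETR data.

```python
# Sketch of the net score arithmetic described above: the percentage of
# respondents spending more on a sector minus the percentage spending less.
# The sample counts are made up for illustration only.
def net_score(more: int, flat: int, less: int) -> float:
    total = more + flat + less
    return 100.0 * (more - less) / total

if __name__ == "__main__":
    # e.g., 300 of 1000 respondents spending more, 550 flat, 150 spending less
    print(f"net score: {net_score(300, 550, 150):.1f}%")  # -> 15.0%
```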
They've got other priorities, et cetera, but I want to start and go kind of around the horn with each of you, what is the number one trend that each of you sees in hardware and why does it matter? Bob O'Donnell, can you please start us off? >> Sure Dave, so look, I mean, hardware is incredibly important and one comment first I'll make on that slide is let's not forget that hardware, even though it may not be growing, the amount of money spent on hardware continues to be very, very high. It's just a little bit more stable. It's not as subject to big jumps as we see certainly in other software areas. But look, the important thing that's happening in hardware is the diversification of the types of chip architectures we're seeing and how and where they're being deployed, right? You refer to this in your opening. We've moved from a world of x86 CPUs from Intel and AMD to things like obviously GPUs, DPUs. We've got VPU for, you know, computer vision processing. We've got AI-dedicated accelerators, we've got all kinds of other network acceleration tools and AI-powered tools. There's an incredible diversification of these chip architectures and that's been happening for a while but now we're seeing them more widely deployed and it's being done that way because workloads are evolving. The kinds of workloads that we're seeing in some of these software areas require different types of compute engines than traditionally we've had. The other thing is (coughs), excuse me, the power requirements based on where geographically that compute happens is also evolving. This whole notion of the edge, which I'm sure we'll get into a little bit more detail later is driven by the fact that where the compute actually sits closer to in theory the edge and where edge devices are, depending on your definition, changes the power requirements. It changes the kind of connectivity that connects the applications to those edge devices and those applications. So all of those things are being impacted by this growing diversity in chip architectures. And that's a very long-term trend that I think we're going to continue to see play out through this decade and well into the 2030s as well. >> Excellent, great, great points. Thank you, Bob. Zeus up next, please. >> Yeah, and I think the other thing when you look at this chart to remember too is, you know, through the pandemic and the work from home period a lot of companies did put their office modernization projects on hold and you heard that echoed, you know, from really all the network manufacturers anyways. They always had projects underway to upgrade networks. They put 'em on hold. Now that people are starting to come back to the office, they're looking at that now. So we might see some change there, but Bob's right. The size of those market are quite a bit different. I think the other big trend here is the hardware companies, at least in the areas that I look at networking are understanding now that it's a combination of hardware and software and silicon that works together that creates that optimum type of performance and experience, right? So some things are best done in silicon. Some like data forwarding and things like that. Historically when you look at the way network devices were built, you did everything in hardware. You configured in hardware, they did all the data for you, and did all the management. And that's been decoupled now. So more and more of the control element has been placed in software. 
A lot of the high-performance things, encryption, and as I mentioned, data forwarding, packet analysis, stuff like that is still done in hardware, but not everything is done in hardware. And so it's a combination of the two. I think, for the people that work with the equipment as well, there's been more of a shift to understanding how to work with software. And this is a mistake I think the industry made for a while: we had everybody convinced they had to become a programmer. It's really more a software power user. Can you pull things out of software? Can you, through API calls and things like that? But I think the big frame here is, David, it's a combination of hardware and software working together that really makes a difference. And you know how much you invest in hardware versus software kind of depends on the performance requirements you have. And I'll talk about that later, but that's really the big shift that's happened here. It's the vendors that figured out how to optimize performance by leveraging the best of all of those. >> Excellent. You guys both brought up some really good themes that we can tap into. Dave Nicholson, please. >> Yeah, so just kind of picking up where Bob started off. Not only are we seeing the rise of a variety of CPU designs, but I think increasingly the connectivity that's involved from a hardware perspective, from a kind of a server or service design perspective, has become increasingly important. I think we'll get a chance to look at this in more depth a little bit later, but when you look at what happens on the motherboard, you know, we're not in so much a CPU-centric world anymore. Various application environments have various demands and you can meet them by using a variety of components. And it's extremely significant when you start looking down at the component level. It's really important that you optimize around those components. So I guess my summary would be, I think we are moving out of the CPU-centric hardware model into more of a connectivity-centric model. We can talk more about that later. >> Yeah, great. And thank you, David. And Keith Townsend, I'm really interested in your perspectives on this. I mean, for years you worked in a data center surrounded by hardware. Now that we have the software-defined data center, please chime in here. >> Well, you know, I'm going to dig deeper into that software-defined data center nature of what's happening with hardware. Hardware is meeting software; infrastructure as code is a thing. What does that code look like? We're still trying to figure that out, but serving up these capabilities that the previous analysts have brought up, how do I ensure that I can get the level of services needed for the applications that I need? Whether they're legacy, traditional data center workloads, AI/ML workloads, workloads at the edge. How do I codify that and consume that as a service? And hardware vendors are figuring this out. HPE, the big push into GreenLake as a service. Dell now with Apex, taking what we need, these bare bone components, moving it forward with DDR5, CXL, et cetera, and surfacing that as code or as services. This is a very tough problem as we transition from consuming a hardware-based configuration to this infrastructure as code paradigm shift. >> Yeah, programmable infrastructure, really attacking that sort of labor discussion that we were having earlier, okay. Last but not least, Marc Staimer, please. >> Thanks, Dave. My peers raised really good points.
I agree with most of them, but I'm going to disagree with the title of this session, which is, does hardware matter? It absolutely matters. You can't run software on the air. You can't run it in an ephemeral cloud, although there's the technical cloud and that's a different issue. The cloud is kind of changed everything. And from a market perspective in the 40 plus years I've been in this business, I've seen this perception that hardware has to go down in price every year. And part of that was driven by Moore's law. And we're coming to, let's say a lag or an end, depending on who you talk to Moore's law. So we're not doubling our transistors every 18 to 24 months in a chip and as a result of that, there's been a higher emphasis on software. From a market perception, there's no penalty. They don't put the same pressure on software from the market to reduce the cost every year that they do on hardware, which kind of bass ackwards when you think about it. Hardware costs are fixed. Software costs tend to be very low. It's kind of a weird thing that we do in the market. And what's changing is we're now starting to treat hardware like software from an OPEX versus CapEx perspective. So yes, hardware matters. And we'll talk about that more in length. >> You know, I want to follow up on that. And I wonder if you guys have a thought on this, Bob O'Donnell, you and I have talked about this a little bit. Marc, you just pointed out that Moore's laws could have waning. Pat Gelsinger recently at their investor meeting said that he promised that Moore's law is alive and well. And the point I made in breaking analysis was okay, great. You know, Pat said, doubling transistors every 18 to 24 months, let's say that Intel can do that. Even though we know it's waning somewhat. Look at the M1 Ultra from Apple (chuckles). In about 15 months increased transistor density on their package by 6X. So to your earlier point, Bob, we have this sort of these alternative processors that are really changing things. And to Dave Nicholson's point, there's a whole lot of supporting components as well. Do you have a comment on that, Bob? >> Yeah, I mean, it's a great point, Dave. And one thing to bear in mind as well, not only are we seeing a diversity of these different chip architectures and different types of components as a number of us have raised the other big point and I think it was Keith that mentioned it. CXL and interconnect on the chip itself is dramatically changing it. And a lot of the more interesting advances that are going to continue to drive Moore's law forward in terms of the way we think about performance, if perhaps not number of transistors per se, is the interconnects that become available. You're seeing the development of chiplets or tiles, people use different names, but the idea is you can have different components being put together eventually in sort of a Lego block style. And what that's also going to allow, not only is that going to give interesting performance possibilities 'cause of the faster interconnect. So you can share, have shared memory between things which for big workloads like AI, huge data sets can make a huge difference in terms of how you talk to memory over a network connection, for example, but not only that you're going to see more diversity in the types of solutions that can be built. So we're going to see even more choices in hardware from a silicon perspective because you'll be able to piece together different elements. 
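As a side note, the growth rates cited just above can be sanity-checked with some back-of-the-envelope arithmetic: the classic 2x-every-18-to-24-months cadence versus the 6x package-level density jump in roughly 15 months mentioned for the M1 Ultra. The figures are taken at face value from the conversation; this is illustrative arithmetic only, not a claim about any vendor's roadmap.

```python
# Back-of-the-envelope comparison of the growth rates mentioned above:
# 2x transistors every 18-24 months versus a cited 6x jump in ~15 months.
import math

def implied_doubling_months(factor: float, months: float) -> float:
    # How often you would have to double to achieve `factor` growth in `months`.
    return months / math.log2(factor)

if __name__ == "__main__":
    print(f"2x per 18-24 months over 15 months -> "
          f"{2 ** (15 / 24):.2f}x to {2 ** (15 / 18):.2f}x")
    print(f"6x in 15 months implies doubling every "
          f"{implied_doubling_months(6, 15):.1f} months")
```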
And oh, by the way, the other benefit of that is we've reached a point in chip architectures where not everything benefits from being smaller. We've been so focused and so obsessed when it comes to Moore's law, to the size of each individual transistor and yes, for certain architecture types, CPUs and GPUs in particular, that's absolutely true, but we've already hit the point where things like RF for 5g and wifi and other wireless technologies and a whole bunch of other things actually don't get any better with a smaller transistor size. They actually get worse. So the beauty of these chiplet architectures is you could actually combine different chip manufacturing sizes. You know you hear about four nanometer and five nanometer along with 14 nanometer on a single chip, each one optimized for its specific application yet together, they can give you the best of all worlds. And so we're just at the very beginning of that era, which I think is going to drive a ton of innovation. Again, gets back to my comment about different types of devices located geographically different places at the edge, in the data center, you know, in a private cloud versus a public cloud. All of those things are going to be impacted and there'll be a lot more options because of this silicon diversity and this interconnect diversity that we're just starting to see. >> Yeah, David. David Nicholson's got a graphic on that. They're going to show later. Before we do that, I want to introduce some data. I actually want to ask Keith to comment on this before we, you know, go on. This next slide is some data from ETR that shows the percent of customers that cited difficulty procuring hardware. And you can see the red is they had significant issues and it's most pronounced in laptops and networking hardware on the far right-hand side, but virtually all categories, firewalls, peripheral servers, storage are having moderately difficult procurement issues. That's the sort of pinkish or significant challenges. So Keith, I mean, what are you seeing with your customers in the hardware supply chains and bottlenecks? And you know we're seeing it with automobiles and appliances but so it goes beyond IT. The semiconductor, you know, challenges. What's been the impact on the buyer community and society and do you have any sense as to when it will subside? >> You know, I was just asked this question yesterday and I'm feeling the pain. People question, kind of a side project within the CTO advisor, we built a hybrid infrastructure, traditional IT data center that we're walking with the traditional customer and modernizing that data center. So it was, you know, kind of a snapshot of time in 2016, 2017, 10 gigabit, ARISTA switches, some older Dell's 730 XD switches, you know, speeds and feeds. And we said we would modern that with the latest Intel stack and connected to the public cloud and then the pandemic hit and we are experiencing a lot of the same challenges. I thought we'd easily migrate from 10 gig networking to 25 gig networking path that customers are going on. The 10 gig network switches that I bought used are now double the price because you can't get legacy 10 gig network switches because all of the manufacturers are focusing on the more profitable 25 gig for capacity, even the 25 gig switches. And we're focused on networking right now. It's hard to procure. We're talking about nine to 12 months or more lead time. So we're seeing customers adjust by adopting cloud. 
But if you remember early on in the pandemic, Microsoft Azure kind of gated customers that didn't have a capacity agreement. So customers are keeping an eye on that. There's a desire to abstract away from the underlying vendor to be able to control or provision your IT services in a way that we do with VMware VP or some other virtualization technology where it doesn't matter who can get me the hardware, they can just get me the hardware because it's critically impacting projects and timelines. >> So that's a great setup Zeus for you with Keith mentioned the earlier the software-defined data center with software-defined networking and cloud. Do you see a day where networking hardware is monetized and it's all about the software, or are we there already? >> No, we're not there already. And I don't see that really happening any time in the near future. I do think it's changed though. And just to be clear, I mean, when you look at that data, this is saying customers have had problems procuring the equipment, right? And there's not a network vendor out there. I've talked to Norman Rice at Extreme, and I've talked to the folks at Cisco and ARISTA about this. They all said they could have had blowout quarters had they had the inventory to ship. So it's not like customers aren't buying this anymore. Right? I do think though, when it comes to networking network has certainly changed some because there's a lot more controls as I mentioned before that you can do in software. And I think the customers need to start thinking about the types of hardware they buy and you know, where they're going to use it and, you know, what its purpose is. Because I've talked to customers that have tried to run software and commodity hardware and where the performance requirements are very high and it's bogged down, right? It just doesn't have the horsepower to run it. And, you know, even when you do that, you have to start thinking of the components you use. The NICs you buy. And I've talked to customers that have simply just gone through the process replacing a NIC card and a commodity box and had some performance problems and, you know, things like that. So if agility is more important than performance, then by all means try running software on commodity hardware. I think that works in some cases. If performance though is more important, that's when you need that kind of turnkey hardware system. And I've actually seen more and more customers reverting back to that model. In fact, when you talk to even some startups I think today about when they come to market, they're delivering things more on appliances because that's what customers want. And so there's this kind of app pivot this pendulum of agility and performance. And if performance absolutely matters, that's when you do need to buy these kind of turnkey, prebuilt hardware systems. If agility matters more, that's when you can go more to software, but the underlying hardware still does matter. So I think, you know, will we ever have a day where you can just run it on whatever hardware? Maybe but I'll long be retired by that point. So I don't care. >> Well, you bring up a good point Zeus. And I remember the early days of cloud, the narrative was, oh, the cloud vendors. They don't use EMC storage, they just run on commodity storage. And then of course, low and behold, you know, they've trot out James Hamilton to talk about all the custom hardware that they were building. And you saw Google and Microsoft follow suit. 
>> Well, (indistinct) been falling for this forever. Right? And I mean, all the way back to the turn of the century, we were calling for the commodity of hardware. And it's never really happened because you can still drive. As long as you can drive innovation into it, customers will always lean towards the innovation cycles 'cause they get more features faster and things. And so the vendors have done a good job of keeping that cycle up but it'll be a long time before. >> Yeah, and that's why you see companies like Pure Storage. A storage company has 69% gross margins. All right. I want to go jump ahead. We're going to bring up the slide four. I want to go back to something that Bob O'Donnell was talking about, the sort of supporting act. The diversity of silicon and we've marched to the cadence of Moore's law for decades. You know, we asked, you know, is Moore's law dead? We say it's moderating. Dave Nicholson. You want to talk about those supporting components. And you shared with us a slide that shift. You call it a shift from a processor-centric world to a connect-centric world. What do you mean by that? And let's bring up slide four and you can talk to that. >> Yeah, yeah. So first, I want to echo this sentiment that the question does hardware matter is sort of the answer is of course it matters. Maybe the real question should be, should you care about it? And the answer to that is it depends who you are. If you're an end user using an application on your mobile device, maybe you don't care how the architecture is put together. You just care that the service is delivered but as you back away from that and you get closer and closer to the source, someone needs to care about the hardware and it should matter. Why? Because essentially what hardware is doing is it's consuming electricity and dollars and the more efficiently you can configure hardware, the more bang you're going to get for your buck. So it's not only a quantitative question in terms of how much can you deliver? But it also ends up being a qualitative change as capabilities allow for things we couldn't do before, because we just didn't have the aggregate horsepower to do it. So this chart actually comes out of some performance tests that were done. So it happens to be Dell servers with Broadcom components. And the point here was to peel back, you know, peel off the top of the server and look at what's in that server, starting with, you know, the PCI interconnect. So PCIE gen three, gen four, moving forward. What are the effects on from an interconnect versus on performance application performance, translating into new orders per minute, processed per dollar, et cetera, et cetera? If you look at the advances in CPU architecture mapped against the advances in interconnect and storage subsystem performance, you can see that CPU architecture is sort of lagging behind in a way. And Bob mentioned this idea of tiling and all of the different ways to get around that. When we do performance testing, we can actually peg CPUs, just running the performance tests without any actual database environments working. So right now we're at this sort of imbalance point where you have to make sure you design things properly to get the most bang per kilowatt hour of power per dollar input. So the key thing here what this is highlighting is just as a very specific example, you take a card that's designed as a gen three PCIE device, and you plug it into a gen four slot. Now the card is the bottleneck. You plug a gen four card into a gen four slot. 
Now the gen four slot is the bottleneck. So we're constantly chasing these bottlenecks. Someone has to be focused on that from an architectural perspective, it's critically important. So there's no question that it matters. But of course, various people in this food chain won't care where it comes from. I guess a good analogy might be, where does our food come from? If I get a steak, it's a pink thing wrapped in plastic, right? Well, there are a lot of inputs that a lot of people have to care about to get that to me. Do I care about all of those things? No. Are they important? They're critically important. >> So, okay. So all I want to get to the, okay. So what does this all mean to customers? And so what I'm hearing from you is to balance a system it's becoming, you know, more complicated. And I kind of been waiting for this day for a long time, because as we all know the bottleneck was always the spinning disc, the last mechanical. So people who wrote software knew that when they were doing it right, the disc had to go and do stuff. And so they were doing other things in the software. And now with all these new interconnects and flash and things like you could do atomic rights. And so that opens up new software possibilities and combine that with alternative processes. But what's the so what on this to the customer and the application impact? Can anybody address that? >> Yeah, let me address that for a moment. I want to leverage some of the things that Bob said, Keith said, Zeus said, and David said, yeah. So I'm a bit of a contrarian in some of this. For example, on the chip side. As the chips get smaller, 14 nanometer, 10 nanometer, five nanometer, soon three nanometer, we talk about more cores, but the biggest problem on the chip is the interconnect from the chip 'cause the wires get smaller. People don't realize in 2004 the latency on those wires in the chips was 80 picoseconds. Today it's 1300 picoseconds. That's on the chip. This is why they're not getting faster. So we maybe getting a little bit slowing down in Moore's law. But even as we kind of conquer that you still have the interconnect problem and the interconnect problem goes beyond the chip. It goes within the system, composable architectures. It goes to the point where Keith made, ultimately you need a hybrid because what we're seeing, what I'm seeing and I'm talking to customers, the biggest issue they have is moving data. Whether it be in a chip, in a system, in a data center, between data centers, moving data is now the biggest gating item in performance. So if you want to move it from, let's say your transactional database to your machine learning, it's the bottleneck, it's moving the data. And so when you look at it from a distributed environment, now you've got to move the compute to the data. The only way to get around these bottlenecks today is to spend less time in trying to move the data and more time in taking the compute, the software, running on hardware closer to the data. Go ahead. >> So is this what you mean when Nicholson was talking about a shift from a processor centric world to a connectivity centric world? You're talking about moving the bits across all the different components, not having the processor you're saying is essentially becoming the bottleneck or the memory, I guess. >> Well, that's one of them and there's a lot of different bottlenecks, but it's the data movement itself. It's moving away from, wait, why do we need to move the data? 
Can we move the compute, the processing closer to the data? Because if we keep them separate, and this has been a trend now where people are moving processing away from the data, it's like the edge. I think it was Zeus or David. You were talking about the edge earlier. As you look at the edge, who defines the edge, right? Is the edge a closet or is it a sensor? If it's a sensor, how do you do AI at the edge? When you don't have enough power, you don't have enough compute. People are inventing chips to do that. To do all that at the edge, to do AI within the sensor, instead of moving the data to a data center or a cloud to do the processing. Because the lag in latency is always limited by speed of light. How fast can you move the electrons? And all this interconnecting, all the processing, and all the improvement we're seeing in the PCIe bus from three, to four, to five, to CXL, to a higher bandwidth on the network. And that's all great but none of that deals with the speed of light latency. And that's an-- Go ahead. >> You know Marc, no, I just want to, just because what you're referring to could be looked at at a macro level, which I think is what you're describing. You can also look at it at a more micro level from a systems design perspective, right? I'm going to be the resident knuckle dragging hardware guy on the panel today. But it's exactly right. Moving compute closer to data includes concepts like peripheral cards that have built-in intelligence, right? So again, in some of this testing that I'm referring to, we saw dramatic improvements when you basically took the horsepower, instead of using the CPU horsepower for things like IO. Now you have essentially offload engines in the form of storage controllers, RAID controllers, and of course Ethernet NICs, smart NICs. And so when you can have these sort of offload engines, and we've gone through these waves over time. People think, well, wait a minute, a RAID controller and NVMe? You know, flash storage devices. Does that make sense? It turns out it does. Why? Because you're actually at a micro level doing exactly what you're referring to. You're bringing compute closer to the data. Now, closer to the data meaning closer to the data storage subsystem. It doesn't solve the macro issue that you're referring to but it is important. Again, going back to this idea of system design optimization, always chasing the bottleneck, plugging the holes. Someone needs to do that in this value chain in order to get the best value for every kilowatt hour of power and every dollar. >> Yeah. >> Well this whole drive for performance has created some really interesting architectural designs, right? Like Nicholson said, the rise of the DPU, right? It brings more processing power into systems that already had a lot of processing power. There's also been some really interesting, you know, kind of innovation in the area of systems architecture too. If you look at the way Nvidia goes to market, their drive kit is a prebuilt piece of hardware, you know, optimized for self-driving cars, right? They partnered with Pure Storage and ARISTA to build that AI-ready infrastructure. I remember when I talked to Charlie Giancarlo, the CEO of Pure, about when the three companies rolled that out. He said, "Look, if you're going to do AI, "you need good storage. "You need fast storage, fast processor and fast network." And so for customers to be able to put that together themselves was very, very difficult. There's a lot of software that needs tuning as well. 
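A rough sketch of the arithmetic behind those two points, in Python. The per-lane throughput figures, lane count, dataset size, and network speed are illustrative assumptions rather than the test results referenced above:

```python
# Rough, illustrative numbers only -- not the benchmark results discussed above.
PCIE_GBPS_PER_LANE = {"gen3": 0.985, "gen4": 1.969}  # approx GB/s per lane after encoding overhead

def effective_link_gbps(card_gen: str, slot_gen: str, lanes: int = 16) -> float:
    """The link negotiates down to the slower side: a gen3 card in a gen4 slot
    runs at gen3 speed, which is the bottleneck-chasing point made above."""
    return min(PCIE_GBPS_PER_LANE[card_gen], PCIE_GBPS_PER_LANE[slot_gen]) * lanes

def transfer_hours(dataset_tb: float, link_gbps: float) -> float:
    """Naive time to move a dataset over a link, ignoring protocol overhead."""
    return dataset_tb * 1000.0 / link_gbps / 3600.0

if __name__ == "__main__":
    print(f"gen3 card in gen4 x16 slot: {effective_link_gbps('gen3', 'gen4'):.1f} GB/s")
    print(f"gen4 card in gen4 x16 slot: {effective_link_gbps('gen4', 'gen4'):.1f} GB/s")
    # Moving 100 TB over a ~25 GbE pipe (~3 GB/s) takes most of a working day,
    # which is why pushing the compute to the data is often the more practical option.
    print(f"100 TB over ~3 GB/s: {transfer_hours(100, 3):.1f} hours")
```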
So the three companies partnered together to create a fully integrated turnkey hardware system with a bunch of optimized software that runs on it. And so in that case, in some ways the hardware was leading the software innovation. And so, the variety of different architectures we have today around hardware has really exploded. And I think it's part of what Bob brought up at the beginning about the different chip designs. >> Yeah, Bob talked about that earlier. Bob, I mean, most AI today is modeling, you know, and a lot of that's done in the cloud and it looks from my standpoint anyway that the future is going to be a lot of AI inferencing at the edge. And that's a radically different architecture, Bob, isn't it? >> It is, it's a completely different architecture. And just to follow up on a couple points, excellent conversation guys. Dave talked about system architecture and really that's what this boils down to, right? But it's looking at architecture at every level. I was talking about the individual different components, the new interconnect methods. There's this new thing called UCIe, universal connection. I forget exactly what it stands for, but it's a mechanism for doing chiplet architectures, but then again, you have to take it up to the system level, 'cause it's all fine and good if you have this SOC that's tuned and optimized, but it has to talk to the rest of the system. And that's where you see other issues. And you've seen things like CXL and other interconnect standards, you know, and nobody likes to talk about interconnect 'cause it's really wonky and really technical and not that sexy, but at the end of the day it's incredibly important. Exactly to the other points that were being raised, like Marc raised, for example, about getting that compute closer to where the data is, and that's where again, a diversity of chip architectures helps. And exactly to your last comment there Dave, putting that ability in an edge device is really at the cutting edge of what we're seeing on semiconductor design and the ability to, for example, maybe it's an FPGA, maybe it's a dedicated AI chip. It's another kind of chip architecture that's being created to do that inferencing on the edge. Because again, the cost and the challenges of moving lots of data, whether it be from say a smartphone to a cloud-based application or whether it be from a private network to a cloud or any other kinds of permutations we can think of, really matter. And the other thing is we're tackling bigger problems. So architecturally, not even just architecturally within a system, but when we think about DPUs and the sort of east-west data center movement conversation that we hear Nvidia and others talk about, it's about combining multiple sets of these systems to function together more efficiently, again with even bigger sets of data. So it really is about tackling where the processing is needed, having the interconnect and the ability to get the data you need to the right place at the right time. And because those needs are diversifying, we're just going to continue to see an explosion of different choices and options, which is going to make hardware even more essential I would argue than it is today. And so I think what we're going to see not only does hardware matter, it's going to matter even more in the future than it does now. >> Great, yeah. Great discussion, guys. I want to bring Keith back into the conversation here. 
Keith, if your main expertise in tech is provisioning LUNs, you probably you want to look for another job. So maybe clearly hardware matters, but with software defined everything, do people with hardware expertise matter outside of for instance, component manufacturers or cloud companies? I mean, VMware certainly changed the dynamic in servers. Dell just spun off its most profitable asset and VMware. So it obviously thinks hardware can stand alone. How does an enterprise architect view the shift to software defined hyperscale cloud and how do you see the shifting demand for skills in enterprise IT? >> So I love the question and I'll take a different view of it. If you're a data analyst and your primary value add is that you do ETL transformation, talk to a CDO, a chief data officer over midsize bank a little bit ago. He said 80% of his data scientists' time is done on ETL. Super not value ad. He wants his data scientists to do data science work. Chances are if your only value is that you do LUN provisioning, then you probably don't have a job now. The technologies have gotten much more intelligent. As infrastructure pros, we want to give infrastructure pros the opportunities to shine and I think the software defined nature and the automation that we're seeing vendors undertake, whether it's Dell, HP, Lenovo take your pick that Pure Storage, NetApp that are doing the automation and the ML needed so that these practitioners don't spend 80% of their time doing LUN provisioning and focusing on their true expertise, which is ensuring that data is stored. Data is retrievable, data's protected, et cetera. I think the shift is to focus on that part of the job that you're ensuring no matter where the data's at, because as my data is spread across the enterprise hybrid different types, you know, Dave, you talk about the super cloud a lot. If my data is in the super cloud, protecting that data and securing that data becomes much more complicated when than when it was me just procuring or provisioning LUNs. So when you say, where should the shift be, or look be, you know, focusing on the real value, which is making sure that customers can access data, can recover data, can get data at performance levels that they need within the price point. They need to get at those datasets and where they need it. We talked a lot about where they need out. One last point about this interconnecting. I have this vision and I think we all do of composable infrastructure. This idea that scaled out does not solve every problem. The cloud can give me infinite scale out. Sometimes I just need a single OS with 64 terabytes of RAM and 204 GPUs or GPU instances that single OS does not exist today. And the opportunity is to create composable infrastructure so that we solve a lot of these problems that just simply don't scale out. >> You know, wow. So many interesting points there. I had just interviewed Zhamak Dehghani, who's the founder of Data Mesh last week. And she made a really interesting point. She said, "Think about, we have separate stacks. "We have an application stack and we have "a data pipeline stack and the transaction systems, "the transaction database, we extract data from that," to your point, "We ETL it in, you know, it takes forever. "And then we have this separate sort of data stack." If we're going to inject more intelligence and data and AI into applications, those two stacks, her contention is they have to come together. 
And when you think about, you know, super cloud bringing compute to data, that was what Hadoop was supposed to be. It ended up all sort of going into a central location, but it's almost a rhetorical question. I mean, it seems that that necessitates new thinking around hardware architectures as kind of everything becomes the edge. And the other point is, to your point, Keith, it's really hard to secure that. So when you think about offloads, right, you've heard the stats, you know, Nvidia talks about it, Broadcom talks about it, that, you know, 25 to 30% of the CPU cycles are wasted on doing things like storage offloads, or networking or security. It seems like maybe Zeus you have a comment on this. It seems like new architectures need to come together to support, you know, all of that stuff that Keith and I just discussed. >> Yeah, and by the way, I do want to get to, Keith, the question you just asked. Keith, it's the point I made at the beginning too about engineers do need to be more software-centric, right? They do need to have better software skills. In fact, I remember talking to Cisco about this last year when they surveyed their engineer base, only about a third of 'em had ever made an API call, which you know, that kind of shows this big skillset change, you know, that has to come. But on the point of architectures, I think the big change here is edge because it brings in distributed compute models. Historically, when you think about compute, even with multi-cloud, we never really had multi-cloud. We'd use multiple centralized clouds, but compute was always centralized, right? It was in a branch office, in a data center, in a cloud. With edge, what it creates is the rise of distributed computing where we'll have an application that actually accesses different resources at different edge locations. And I think Marc, you were talking about this, like the edge could be in your IoT device. It could be your campus edge. It could be cellular edge, it could be your car, right? And so we need to start thinkin' about how our applications interact with all those different parts of that edge ecosystem, you know, to create a single experience. A lot of consumer apps largely work that way. If you think of an app like Uber, right? It pulls in information from all kinds of different edge applications, edge services. And, you know, it creates a pretty cool experience. We're just starting to get to that point in the business world now. There's a lot of security implications and things like that, but I do think it drives more architectural decisions to be made about how I deploy what data where and where I do my processing, where I do my AI and things like that. It actually makes the world more complicated. In some ways we can do so much more with it, but I think it does drive us more towards turnkey systems, at least initially, in order to, you know, ensure performance and security. >> Right. Marc, I wanted to go to you. You had indicated to me that you wanted to chat about this a little bit. You've written quite a bit about the integration of hardware and software. You know, we've watched Oracle's move from, you know, buying Sun and then basically using that in a highly differentiated approach. Engineered systems. What's your take on all that? I know you also have some thoughts on the shift from CapEx to OPEX, chime in on that. 
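A quick sketch of what those offload percentages imply for application headroom. The 25 to 30% figures are the ones quoted in the conversation, and the calculation below is a simplification that assumes the offloaded work disappears entirely from the host CPU:

```python
def headroom_gain(offloadable_fraction: float) -> float:
    """Relative gain in CPU cycles available to applications if the
    offloadable work (storage, networking, security) moves to a DPU or smart NIC."""
    usable_before = 1.0 - offloadable_fraction
    return 1.0 / usable_before - 1.0

if __name__ == "__main__":
    for frac in (0.25, 0.30):
        # 25% offloaded -> ~33% more headroom; 30% offloaded -> ~43% more headroom
        print(f"{int(frac * 100)}% offloaded -> ~{headroom_gain(frac) * 100:.0f}% more application headroom")
```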
They can synergistically make them work together in ways that you can't do on a commodity basis, where you own the software and somebody else has the hardware. An example would be Oracle. As you talked about with their Exadata platform, they literally are leveraging microcode in the Intel chips, and now in AMD chips, and all the way down to Optane. They make basically AMD database servers work with Optane memory, PMM, in their storage systems, not NVMe SSD, PMM. I'm talking about the cards themselves. So there are advantages you can take advantage of if you own the stack, as you were putting out earlier, Dave, of both the software and the hardware. Okay, that's great. But on the other side of that, that tends to give you better performance, but it tends to cost a little more. On the commodity side it costs less but you get less performance. What Zeus had said earlier, it depends where you're running your application. How much performance do you need? What kind of performance do you need? One of the things about moving to the edge, and I'll get to the OPEX CapEx in a second. One of the issues about moving to the edge is what kind of processing do you need? If you're running on a CCTV camera on top of a traffic light, how much power do you have? How much cooling do you have that you can run this? And more importantly, do you have to take the data you're getting and move it somewhere else to get processed, and the information is sent back? I mean, there are companies out there like BrainChip that have developed AI chips that can run on the sensor without a CPU. Without any additional memory. So, I mean, there's innovation going on to deal with this question of data movement. There's companies out there like Tachyum that are combining GPUs, CPUs, and DPUs in a single chip. Think of it as super composable architecture. They're looking at being able to do more with less. On the OPEX and CapEx issue. >> Hold that thought, hold that thought on the OPEX CapEx, 'cause we're running out of time and maybe you can wrap on that. I just wanted to pick up on something you said about the integrated hardware software. I mean, other than the fact that, you know, Michael Dell unlocked whatever $40 billion for himself and Silver Lake, I was always a fan of a spin in with VMware to basically become the Oracle of hardware. Now I know it would've been a nightmare for the ecosystem and culturally, they probably would've had a VMware brain drain, but does anybody have any thoughts on that as a sort of a thought exercise? I was always a fan of that on paper. >> I got to eat a little crow. I did not like the Dell VMware acquisition for the industry in general. And I think it hurt the industry in general, HPE, Cisco walked away a little bit from that VMware relationship. But when I talked to customers, they loved it. You know, I got to be honest. They absolutely loved the integration. The VxRail, VxRack solution exploded. Nutanix became kind of an afterthought when it came to competing. So that spin in, when we talk about the ability to innovate and the ability to create solutions that you just simply can't create because you don't have the full stack. Dell was well positioned to do that with a potential spin in of VMware. >> Yeah, we're going to be-- Go ahead please. >> Yeah, in fact, I think you're right, Keith, it was terrible for the industry. Great for Dell. 
And I remember talking to Chad Sakac when he was running, you know, VCE, which became Rack and Rail, their ability to stay in lockstep with what VMware was doing. What was the number one workload running on hyperconverged forever? It was VMware. So their ability to remain in lockstep with VMware gave them a huge competitive advantage. And Dell came out of nowhere in, you know, the hyper-converged market and just started taking share because of that relationship. So, you know, this sort I guess it's, you know, from a Dell perspective I thought it gave them a pretty big advantage that they didn't really exploit across their other properties, right? Networking and service and things like they could have given the dominance that VMware had. From an industry perspective though, I do think it's better to have them be coupled. So. >> I agree. I mean, they could. I think they could have dominated in super cloud and maybe they would become the next Oracle where everybody hates 'em, but they kick ass. But guys. We got to wrap up here. And so what I'm going to ask you is I'm going to go and reverse the order this time, you know, big takeaways from this conversation today, which guys by the way, I can't thank you enough phenomenal insights, but big takeaways, any final thoughts, any research that you're working on that you want highlight or you know, what you look for in the future? Try to keep it brief. We'll go in reverse order. Maybe Marc, you could start us off please. >> Sure, on the research front, I'm working on a total cost of ownership of an integrated database analytics machine learning versus separate services. On the other aspect that I would wanted to chat about real quickly, OPEX versus CapEx, the cloud changed the market perception of hardware in the sense that you can use hardware or buy hardware like you do software. As you use it, pay for what you use in arrears. The good thing about that is you're only paying for what you use, period. You're not for what you don't use. I mean, it's compute time, everything else. The bad side about that is you have no predictability in your bill. It's elastic, but every user I've talked to says every month it's different. And from a budgeting perspective, it's very hard to set up your budget year to year and it's causing a lot of nightmares. So it's just something to be aware of. From a CapEx perspective, you have no more CapEx if you're using that kind of base system but you lose a certain amount of control as well. So ultimately that's some of the issues. But my biggest point, my biggest takeaway from this is the biggest issue right now that everybody I talk to in some shape or form it comes down to data movement whether it be ETLs that you talked about Keith or other aspects moving it between hybrid locations, moving it within a system, moving it within a chip. All those are key issues. >> Great, thank you. Okay, CTO advisor, give us your final thoughts. >> All right. Really, really great commentary. Again, I'm going to point back to us taking the walk that our customers are taking, which is trying to do this conversion of all primary data center to a hybrid of which I have this hard earned philosophy that enterprise IT is additive. When we add a service, we rarely subtract a service. So the landscape and service area what we support has to grow. So our research focuses on taking that walk. 
We are taking a monolithic application, decomposing that to containers, and putting that in a public cloud, and connecting that back to the private data center, and telling that story and walking that walk with our customers. This has been a super enlightening panel. >> Yeah, thank you. Real, real different world coming. David Nicholson, please. >> You know, it really hearkens back to the beginning of the conversation. You talked about momentum in the direction of cloud. I'm sort of spending my time under the hood, getting grease under my fingernails, focusing on where still the lion's share of spend will be in coming years, which is on-prem. And then of course, obviously data center infrastructure for cloud, but really diving under the covers and helping folks understand the ramifications of movement between generations of CPU architecture. I know we all know Sapphire Rapids pushed into the future. When's the next Intel release coming? Who knows? We think, you know, in 2023. There have been a lot of people standing by from a practitioner's standpoint asking, well, what do I do between now and then? Does it make sense to upgrade bits and pieces of hardware or go from a last generation to a current generation when we know the next generation is coming? And so I've been very, very focused on looking at how these connectivity components like RAID controllers and NICs. I know it's not as sexy as talking about cloud, but just how these components completely change the game and actually can justify movement from say a 14th-generation architecture to a 15th-generation architecture today, even though gen 16 is coming, let's say 12 months from now. So that's where I am. Keep my phone number in the Rolodex. I literally reference Rolodex intentionally because like I said, I'm in there under the hood and it's not as sexy. But yeah, so that's what I'm focused on, Dave. 
You know, just the choice of what we have and the way hardware and software works together is really going to, I think, change the world the way we do things. We're already seeing that, like I said, in the consumer world, right? There's so many things you can do from, you know, smart home perspective, you know, natural language processing, stuff like that. And it's starting to hit businesses now. So just wait and watch the next five years. >> Yeah, totally. The computing power at the edge is just going to be mind blowing. >> It's unbelievable what you can do at the edge. >> Yeah, yeah. Hey Z, I just want to say that we know you're not a propeller head and I for one would like to thank you for having your master's thesis hanging on the wall behind you 'cause we know that you studied basket weaving. >> I was actually a physics math major, so. >> Good man. Another math major. All right, Bob O'Donnell, you're going to bring us home. I mean, we've seen the importance of semiconductors and silicon in our everyday lives, but your last thoughts please. >> Sure and just to clarify, by the way I was a great books major and this was actually for my final paper. And so I was like philosophy and all that kind of stuff and literature but I still somehow got into tech. Look, it's been a great conversation and I want to pick up a little bit on a comment Zeus made, which is this it's the combination of the hardware and the software and coming together and the manner with which that needs to happen, I think is critically important. And the other thing is because of the diversity of the chip architectures and all those different pieces and elements, it's going to be how software tools evolve to adapt to that new world. So I look at things like what Intel's trying to do with oneAPI. You know, what Nvidia has done with CUDA. What other platform companies are trying to create tools that allow them to leverage the hardware, but also embrace the variety of hardware that is there. And so as those software development environments and software development tools evolve to take advantage of these new capabilities, that's going to open up a lot of interesting opportunities that can leverage all these new chip architectures. That can leverage all these new interconnects. That can leverage all these new system architectures and figure out ways to make that all happen, I think is going to be critically important. And then finally, I'll mention the research I'm actually currently working on is on private 5g and how companies are thinking about deploying private 5g and the potential for edge applications for that. So I'm doing a survey of several hundred us companies as we speak and really looking forward to getting that done in the next couple of weeks. >> Yeah, look forward to that. Guys, again, thank you so much. Outstanding conversation. Anybody going to be at Dell tech world in a couple of weeks? Bob's going to be there. Dave Nicholson. Well drinks on me and guys I really can't thank you enough for the insights and your participation today. Really appreciate it. Okay, and thank you for watching this special power panel episode of theCube Insights powered by ETR. Remember we publish each week on Siliconangle.com and wikibon.com. All these episodes they're available as podcasts. DM me or any of these guys. I'm at DVellante. You can email me at David.Vellante@siliconangle.com. Check out etr.ai for all the data. This is Dave Vellante. We'll see you next time. (upbeat music)

Published Date : Apr 25 2022

Analyst Power Panel: Future of Database Platforms


 

(upbeat music) >> Once a staid and boring business dominated by IBM, Oracle, and at the time newcomer Microsoft, along with a handful of wannabes, the database business has exploded in the past decade and has become a staple of financial excellence, customer experience, analytic advantage, competitive strategy, growth initiatives, visualizations, not to mention compliance, security, privacy and dozens of other important use cases and initiatives. And on the vendor's side of the house, we've seen the rapid ascendancy of cloud databases. Most notably from Snowflake, whose massive raises leading up to its IPO in late 2020 sparked a spate of interest and VC investment in the separation of compute and storage and all that elastic resource stuff in the cloud. The company joined AWS, Azure and Google to popularize cloud databases, which have become a linchpin of competitive strategies for technology suppliers. And if I get you to put your data in my database and in my cloud, and I keep innovating, I'm going to build a moat and achieve a hugely attractive lifetime customer value in a really amazing marginal economics dynamic that is going to fund my future. And I'll be able to sell other adjacent services, not just compute and storage, but machine learning and inference and training and all kinds of stuff, dozens of lucrative cloud offerings. Meanwhile, the database leader, Oracle has invested massive amounts of money to maintain its lead. It's building on its position as the king of mission critical workloads and making typical Oracle like claims against the competition. Most were recently just yesterday with another announcement around MySQL HeatWave. An extension of MySQL that is compatible with on-premises MySQLs and is setting new standards in price performance. We're seeing a dramatic divergence in strategies across the database spectrum. On the far left, we see Amazon with more than a dozen database offerings each with its own API and primitives. AWS is taking a right tool for the right job approach, often building on open source platforms and creating services that it offers to customers to solve very specific problems for developers. And on the other side of the line, we see Oracle, which is taking the Swiss Army Knife approach, converging database functionality, enabling analytic and transactional workloads to run in the same data store, eliminating the need to ETL, at the same time adding capabilities into its platform like automation and machine learning. Welcome to this database Power Panel. My name is Dave Vellante, and I'm so excited to bring together some of the most respected industry analyst in the community. Today we're going to assess what's happening in the market. We're going to dig into the competitive landscape and explore the future of database and database platforms and decode what it means to customers. Let me take a moment to welcome our guest analyst today. Matt Kimball is a vice president and principal analysts at Moor Insights and Strategy, Matt. He knows products, he knows industry, he's got real world IT expertise, and he's got all the angles 25 plus years of experience in all kinds of great background. Matt, welcome. Thanks very much for coming on theCUBE. Holgar Mueller, friend of theCUBE, vice president and principal analyst at Constellation Research in depth knowledge on applications, application development, knows developers. He's worked at SAP and Oracle. 
And then Bob Evans is Chief Content Officer and co-founder of the Acceleration Economy, founder and principle of Cloud Wars. Covers all kinds of industry topics and great insights. He's got awesome videos, these three minute hits. If you haven't seen 'em, checking them out, knows cloud companies, his Cloud Wars minutes are fantastic. And then of course, Marc Staimer is the founder of Dragon Slayer Research. A frequent contributor and guest analyst at Wikibon. He's got a wide ranging knowledge across IT products, knows technology really well, can go deep. And then of course, Ron Westfall, Senior Analyst and Director Research Director at Futurum Research, great all around product trends knowledge. Can take, you know, technical dives and really understands competitive angles, knows Redshift, Snowflake, and many others. Gents, thanks so much for taking the time to join us in theCube today. It's great to have you on, good to see you. >> Good to be here, thanks for having us. >> Thanks, Dave. >> All right, let's start with an around the horn and briefly, if each of you would describe, you know, anything I missed in your areas of expertise and then you answer the following question, how would you describe the state of the database, state of platform market today? Matt Kimball, please start. >> Oh, I hate going first, but that it's okay. How would I describe the world today? I would just in one sentence, I would say, I'm glad I'm not in IT anymore, right? So, you know, it is a complex and dangerous world out there. And I don't envy IT folks I'd have to support, you know, these modernization and transformation efforts that are going on within the enterprise. It used to be, you mentioned it, Dave, you would argue about IBM versus Oracle versus this newcomer in the database space called Microsoft. And don't forget Sybase back in the day, but you know, now it's not just, which SQL vendor am I going to go with? It's all of these different, divergent data types that have to be taken, they have to be merged together, synthesized. And somehow I have to do that cleanly and use this to drive strategic decisions for my business. That is not easy. So, you know, you have to look at it from the perspective of the business user. It's great for them because as a DevOps person, or as an analyst, I have so much flexibility and I have this thing called the cloud now where I can go get services immediately. As an IT person or a DBA, I am calling up prevention hotlines 24 hours a day, because I don't know how I'm going to be able to support the business. And as an Oracle or as an Oracle or a Microsoft or some of the cloud providers and cloud databases out there, I'm licking my chops because, you know, my market is expanding and expanding every day. >> Great, thank you for that, Matt. Holgar, how do you see the world these days? You always have a good perspective on things, share with us. >> Well, I think it's the best time to be in IT, I'm not sure what Matt is talking about. (laughing) It's easier than ever, right? The direction is going to cloud. Kubernetes has won, Google has the best AI for now, right? So things are easier than ever before. You made commitments for five plus years on hardware, networking and so on premise, and I got gray hair about worrying it was the wrong decision. No, just kidding. But you kind of both sides, just to be controversial, make it interesting, right. So yeah, no, I think the interesting thing specifically with databases, right? We have this big suite versus best of breed, right? 
Obviously innovation, like you mentioned with Snowflake and others happening in the cloud, the cloud vendors server, where to save of their databases. And then we have one of the few survivors of the old guard as Evans likes to call them is Oracle who's doing well, both their traditional database. And now, which is really interesting, remarkable from that because Oracle it was always the power of one, have one database, add more to it, make it what I call the universal database. And now this new HeatWave offering is coming and MySQL open source side. So they're getting the second (indistinct) right? So it's interesting that older players, traditional players who still are in the market are diversifying their offerings. Something we don't see so much from the traditional tools from Oracle on the Microsoft side or the IBM side these days. >> Great, thank you Holgar. Bob Evans, you've covered this business for a while. You've worked at, you know, a number of different outlets and companies and you cover the competition, how do you see things? >> Dave, you know, the other angle to look at this from is from the customer side, right? You got now CEOs who are any sort of business across all sorts of industries, and they understand that their future success is going to be dependent on their ability to become a digital company, to understand data, to use it the right way. So as you outline Dave, I think in your intro there, it is a fantastic time to be in the database business. And I think we've got a lot of new buyers and influencers coming in. They don't know all this history about IBM and Microsoft and Oracle and you know, whoever else. So I think they're going to take a long, hard look, Dave, at some of these results and who is able to help these companies not serve up the best technology, but who's going to be able to help their business move into the digital future. So it's a fascinating time now from every perspective. >> Great points, Bob. I mean, digital transformation has gone from buzzword to imperative. Mr. Staimer, how do you see things? >> I see things a little bit differently than my peers here in that I see the database market being segmented. There's all the different kinds of databases that people are looking at for different kinds of data, and then there is databases in the cloud. And so database as cloud service, I view very differently than databases because the traditional way of implementing a database is changing and it's changing rapidly. So one of the premises that you stated earlier on was that you viewed Oracle as a database company. I don't view Oracle as a database company anymore. I view Oracle as a cloud company that happens to have a significant expertise and specialty in databases, and they still sell database software in the traditional way, but ultimately they're a cloud company. So database cloud services from my point of view is a very distinct market from databases. >> Okay, well, you gave us some good meat on the bone to talk about that. Last but not least-- >> Dave did Marc, just say Oracle's a cloud company? >> Yeah. (laughing) Take away the database, it would be interesting to have that discussion, but let's let Ron jump in here. Ron, give us your take. >> That's a great segue. I think it's truly the era of the cloud database, that's something that's rising. And the key trends that come with it include for example, elastic scaling. That is the ability to scale on demand, to right size workloads according to customer requirements. 
And also I think it's going to increase the prioritization for high availability. That is the player who can provide the highest availability is going to have, I think, a great deal of success in this emerging market. And also I anticipate that there will be more consolidation across platforms in order to enable cost savings for customers, and that's something that's always going to be important. And I think we'll see more of that over the horizon. And then finally security, security will be more important than ever. We've seen a spike (indistinct), we certainly have seen geopolitical originated cybersecurity concerns. And as a result, I see database security becoming all the more important. >> Great, thank you. Okay, let me share some data with you guys. I'm going to throw this at you and see what you think. We have this awesome data partner called Enterprise Technology Research, ETR. They do these quarterly surveys and each period with dozens of industry segments, they track clients spending, customer spending. And this is the database, data warehouse sector okay so it's taxonomy, so it's not perfect, but it's a big kind of chunk. They essentially ask customers within a category and buy a specific vendor, you're spending more or less on the platform? And then they subtract the lesses from the mores and they derive a metric called net score. It's like NPS, it's a measure of spending velocity. It's more complicated and granular than that, but that's the basis and that's the vertical axis. The horizontal axis is what they call market share, it's not like IDC market share, it's just pervasiveness in the data set. And so there are a couple of things that stand out here and that we can use as reference point. The first is the momentum of Snowflake. They've been off the charts for many, many, for over two years now, anything above that dotted red line, that 40%, is considered by ETR to be highly elevated and Snowflake's even way above that. And I think it's probably not sustainable. We're going to see in the next April survey, next month from those guys, when it comes out. And then you see AWS and Microsoft, they're really pervasive on the horizontal axis and highly elevated, Google falls behind them. And then you got a number of well funded players. You got Cockroach Labs, Mongo, Redis, MariaDB, which of course is a fork on MySQL started almost as protest at Oracle when they acquired Sun and they got MySQL and you can see the number of others. Now Oracle who's the leading database player, despite what Marc Staimer says, we know, (laughs) and they're a cloud player (laughing) who happens to be a leading database player. They dominate in the mission critical space, we know that they're the king of that sector, but you can see here that they're kind of legacy, right? They've been around a long time, they get a big install base. So they don't have the spending momentum on the vertical axis. Now remember this is, just really this doesn't capture spending levels, so that understates Oracle but nonetheless. So it's not a complete picture like SAP for instance is not in here, no Hana. I think people are actually buying it, but it doesn't show up here, (laughs) but it does give an indication of momentum and presence. So Bob Evans, I'm going to start with you. You've commented on many of these companies, you know, what does this data tell you? 
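For readers who want to see how a net-score-style metric falls out of survey responses, here is a minimal sketch. The response categories and the sample panel below are made up for illustration; the actual methodology, as noted above, is more granular than this:

```python
from collections import Counter

def net_score(responses):
    """Toy version of a spending-velocity metric: the share of customers spending
    more minus the share spending less, expressed as a percentage."""
    counts = Counter(responses)
    total = sum(counts.values())
    more = counts.get("more", 0)   # roughly: adoption plus increased spend
    less = counts.get("less", 0)   # roughly: decreased spend plus replacement
    return 100.0 * (more - less) / total

if __name__ == "__main__":
    sample = ["more"] * 62 + ["flat"] * 30 + ["less"] * 8   # hypothetical vendor panel
    print(f"net score: {net_score(sample):.0f}")            # 62 - 8 = 54, above the 40% line
```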
>> Yeah, you know, Dave, I think all these compilations of things like that are interesting, and the folks at ETR do some good work, but I think as you said, it's a snapshot, sort of a two-dimensional thing of a rapidly changing, three-dimensional world. You know, the incidence at which some of these companies are mentioned versus the volume that happens. I think it's, you know, with Oracle, and I'm not going to declare my religious affiliation, either as cloud company or database company, you know, they're all of those things and more, and I think some of our old language of how we classify companies is just not relevant anymore. But I want to ask too something in here, the autonomous database from Oracle, nobody else has done that. So either Oracle is crazy, they've tried out a technology that nobody other than them is interested in, or they're onto something that nobody else can match. So to me, Dave, within Oracle, trying to identify how they're doing there, I would watch autonomous database growth too, because right, it's either going to be a big plan and it breaks through, or it's going to be caught behind. And the Snowflake phenomenon as you mentioned, that is a rare, rare bird who comes up and can grow 100% at a billion dollar revenue level like that. So now they've had a chance to come in, scare the crap out of everybody, rock the market with something totally new, the data cloud. Will the bigger companies be able to catch up and offer a compelling alternative, or is Snowflake going to continue to be this outlier? It's a fascinating time. >> Really, interesting points there. Holgar, I want to ask you, I mean, I've talked to, certainly I'm sure you guys have too, the founders of Snowflake that came out of Oracle and they actually, they don't apologize. They say, "Hey, we're not going to do all that complicated stuff that Oracle does, we were trying to keep it real simple." But at the same time, you know, they don't do sophisticated workload management. They don't do complex joins. They're kind of relying on the ecosystems. So when you look at the data like this and the various momentums, and we talked about the diverging strategies, what does this say to you? >> Well, it is a great point. And I think Snowflake is an example of how the cloud can turbocharge a well understood concept, in this case, the data warehouse, right? You move that and you put it on steroids, and you see, for some players who've been big in data warehouse, like Teradata, as an example, here in San Diego, what could have been for them in that part. The interesting thing, the problem though, is the cloud hides a lot of complexity too, which you can scale really well as you attract lots of customers to go there. And you don't have to build things like what Bob said, right? One of the fascinating things, right, nobody's answering Oracle on the autonomous database. I don't think it's that they cannot, they just have different priorities, or the database is not such a priority. I would dare to say that it's the same for IBM and Microsoft right now at the moment. And the cloud vendors, you just hide that right through scripts and through scale, because you support thousands of customers and you can deal with a little more complexity, right? It's not against them. Whereas if you have to run it yourself, very different story, right? You want to have the autonomous parts, you want to have the powerful tools to do things. 
And so Matt, I want to go to you, you've set up front, you know, it's just complicated if you're in IT, it's a complicated situation and you've been on the customer side. And if you're a buyer, it's obviously, it's like Holgar said, "Cloud's supposed to make this stuff easier, but the simpler it gets the more complicated gets." So where do you place your bets? Or I guess more importantly, how do you decide where to place your bets? >> Yeah, it's a good question. And to what Bob and Holgar said, you know, the around autonomous database, I think, you know, part of, as I, you know, play kind of armchair psychologist, if you will, corporate psychologists, I look at what Oracle is doing and, you know, databases where they've made their mark and it's kind of, that's their strong position, right? So it makes sense if you're making an entry into this cloud and you really want to kind of build momentum, you go with what you're good at, right? So that's kind of the strength of Oracle. Let's put a lot of focus on that. They do a lot more than database, don't get me wrong, but you know, I'm going to short my strength and then kind of pivot from there. With regards to, you know, what IT looks at and what I would look at you know as an IT director or somebody who is, you know, trying to consume services from these different cloud providers. First and foremost, I go with what I know, right? Let's not forget IT is a conservative group. And when we look at, you know, all the different permutations of database types out there, SQL, NoSQL, all the different types of NoSQL, those are largely being deployed by business users that are looking for agility or businesses that are looking for agility. You know, the reason why MongoDB is so popular is because of DevOps, right? It's a great platform to develop on and that's where it kind of gained its traction. But as an IT person, I want to go with what I know, where my muscle memory is, and that's my first position. And so as I evaluate different cloud service providers and cloud databases, I look for, you know, what I know and what I've invested in and where my muscle memory is. Is there enough there and do I have enough belief that that company or that service is going to be able to take me to, you know, where I see my organization in five years from a data management perspective, from a business perspective, are they going to be there? And if they are, then I'm a little bit more willing to make that investment, but it is, you know, if I'm kind of going in this blind or if I'm cloud native, you know, that's where the Snowflakes of the world become very attractive to me. >> Thank you. So Marc, I asked Andy Jackson in theCube one time, you have all these, you know, data stores and different APIs and primitives and you know, very granular, what's the strategy there? And he said, "Hey, that allows us as the market changes, it allows us to be more flexible. If we start building abstractions layers, it's harder for us." I think also it was not a good time to market advantage, but let me ask you, I described earlier on that spectrum from AWS to Oracle. We just saw yesterday, Oracle announced, I think the third major enhancement in like 15 months to MySQL HeatWave, what do you make of that announcement? How do you think it impacts the competitive landscape, particularly as it relates to, you know, converging transaction and analytics, eliminating ELT, I know you have some thoughts on this. 
>> So let me back up for a second and defend my cloud statement about Oracle for a moment. (laughing) AWS did a great job in developing the cloud market in general and everything in the cloud market. I mean, I give them lots of kudos on that. And a lot of what they did is they took open source software and they rent it to people who use their cloud. So I give 'em lots of credit, they dominate the market. Oracle was late to the cloud market. In fact, they actually poo-pooed it initially, if you look at some of Larry Ellison's statements, they said, "Oh, it's never going to take off." And then they did 180 turn, and they said, "Oh, we're going to embrace the cloud." And they really have, but when you're late to a market, you've got to be compelling. And this ties into the announcement yesterday, but let's deal with this compelling. To be compelling from a user point of view, you got to be twice as fast, offer twice as much functionality, at half the cost. That's generally what compelling is that you're going to capture market share from the leaders who established the market. It's very difficult to capture market share in a new market for yourself. And you're right. I mean, Bob was correct on this and Holgar and Matt in which you look at Oracle, and they did a great job of leveraging their database to move into this market, give 'em lots of kudos for that too. But yesterday they announced, as you said, the third innovation release and the pace is just amazing of what they're doing on these releases on HeatWave that ties together initially MySQL with an integrated builtin analytics engine, so a data warehouse built in. And then they added automation with autopilot, and now they've added machine learning to it, and it's all in the same service. It's not something you can buy and put on your premise unless you buy their cloud customers stuff. But generally it's a cloud offering, so it's compellingly better as far as the integration. You don't buy multiple services, you buy one and it's lower cost than any of the other services, but more importantly, it's faster, which again, give 'em credit for, they have more integration of a product. They can tie things together in a way that nobody else does. There's no additional services, ETL services like Glue and AWS. So from that perspective, they're getting better performance, fewer services, lower cost. Hmm, they're aiming at the compelling side again. So from a customer point of view it's compelling. Matt, you wanted to say something there. >> Yeah, I want to kind of, on what you just said there Marc, and this is something I've found really interesting, you know. The traditional way that you look at software and, you know, purchasing software and IT is, you look at either best of breed solutions and you have to work on the backend to integrate them all and make them all work well. And generally, you know, the big hit against the, you know, we have one integrated offering is that, you lose capability or you lose depth of features, right. And to what you were saying, you know, that's the thing I found interesting about what Oracle is doing is they're building in depth as they kind of, you know, build that service. It's not like you're losing a lot of capabilities, because you're going to one integrated service versus having to use A versus B versus C, and I love that idea. >> You're right. Yeah, not only you're not losing, but you're gaining functionality that you can't get by integrating a lot of these. 
I mean, I can take Snowflake and integrate it in with machine learning, but I also have to integrate in with a transactional database. So I've got to have connectors between all of this, which means I'm adding time. And what it comes down to at the end of the day is expertise, effort, time, and cost. And so what I see the difference from the Oracle announcements is they're aiming at reducing all of that by increasing performance as well. Correct me if I'm wrong on that but that's what I saw at the announcement yesterday. >> You know, Marc, one thing though Marc, it's funny you say that because I started out saying, you know, I'm glad I'm not 19 anymore. And the reason is because of exactly what you said, it's almost like there's a pseudo level of witchcraft that's required to support the modern data environment right in the enterprise. And I need simpler faster, better. That's what I need, you know, I am no longer wearing pocket protectors. I have turned from, you know, break, fix kind of person, to you know, business consultant. And I need that point and click simplicity, but I can't sacrifice, you know, a depth of features of functionality on the backend as I play that consultancy role. >> So, Ron, I want to bring in Ron, you know, it's funny. So Matt, you mentioned Mongo, I often and say, if Oracle mentions you, you're on the map. We saw them yesterday Ron, (laughing) they hammered RedShifts auto ML, they took swipes at Snowflake, a little bit of BigQuery. What were your thoughts on that? Do you agree with what these guys are saying in terms of HeatWaves capabilities? >> Yes, Dave, I think that's an excellent question. And fundamentally I do agree. And the question is why, and I think it's important to know that all of the Oracle data is backed by the fact that they're using benchmarks. For example, all of the ML and all of the TPC benchmarks, including all the scripts, all the configs and all the detail are posted on GitHub. So anybody can look at these results and they're fully transparent and replicate themselves. If you don't agree with this data, then by all means challenge it. And we have not really seen that in all of the new updates in HeatWave over the last 15 months. And as a result, when it comes to these, you know, fundamentals in looking at the competitive landscape, which I think gives validity to outcomes such as Oracle being able to deliver 4.8 times better price performance than Redshift. As well as for example, 14.4 better price performance than Snowflake, and also 12.9 better price performance than BigQuery. And so that is, you know, looking at the quantitative side of things. But again, I think, you know, to Marc's point and to Matt's point, there are also qualitative aspects that clearly differentiate the Oracle proposition, from my perspective. For example now the MySQL HeatWave ML capabilities are native, they're built in, and they also support things such as completion criteria. And as a result, that enables them to show that hey, when you're using Redshift ML for example, you're having to also use their SageMaker tool and it's running on a meter. And so, you know, nobody really wants to be running on a meter when, you know, executing these incredibly complex tasks. And likewise, when it comes to Snowflake, they have to use a third party capability. They don't have the built in, it's not native. So the user, to the point that he's having to spend more time and it increases complexity to use auto ML capabilities across the Snowflake platform. 
And also, I think it also applies to other important features such as data sampling, for example, with the HeatWave ML, it's intelligent sampling that's being implemented. Whereas in contrast, we're seeing Redshift using random sampling. And again, Snowflake, you're having to use a third party library in order to achieve the same capabilities. So I think the differentiation is crystal clear. I think it definitely is refreshing. It's showing that this is where true value can be assigned. And if you don't agree with it, by all means challenge the data. >> Yeah, I want to come to the benchmarks in a minute. By the way, you know, the gentleman who's the Oracle's architect, he did a great job on the call yesterday explaining what you have to do. I thought that was quite impressive. But Bob, I know you follow the financials pretty closely and on the earnings call earlier this month, Ellison said that, "We're going to see HeatWave on AWS." And the skeptic in me said, oh, they must not be getting people to come to OCI. And then they, you remember this chart they showed yesterday that showed the growth of HeatWave on OCI. But of course there was no data on there, it was just sort of, you know, lines up and to the right. So what do you guys think of that? (Marc laughs) Does it signal Bob, desperation by Oracle that they can't get traction on OCI, or is it just really a smart tame expansion move? What do you think? >> Yeah, Dave, that's a great question. You know, along the way there, and you know, just inside of that was something that said Ellison said on earnings call that spoke to a different sort of philosophy or mindset, almost Marc, where he said, "We're going to make this multicloud," right? With a lot of their other cloud stuff, if you wanted to use any of Oracle's cloud software, you had to use Oracle's infrastructure, OCI, there was no other way out of it. But this one, but I thought it was a classic Ellison line. He said, "Well, we're making this available on AWS. We're making this available, you know, on Snowflake because we're going after those users. And once they see what can be done here." So he's looking at it, I guess you could say, it's a concession to customers because they want multi-cloud. The other way to look at it, it's a hunting expedition and it's one of those uniquely I think Oracle ways. He said up front, right, he doesn't say, "Well, there's a big market, there's a lot for everybody, we just want on our slice." Said, "No, we are going after Amazon, we're going after Redshift, we're going after Aurora. We're going after these users of Snowflake and so on." And I think it's really fairly refreshing these days to hear somebody say that, because now if I'm a buyer, I can look at that and say, you know, to Marc's point, "Do they measure up, do they crack that threshold ceiling? Or is this just going to be more pain than a few dollars savings is worth?" But you look at those numbers that Ron pointed out and that we all saw in that chart. I've never seen Dave, anything like that. In a substantive market, a new player coming in here, and being able to establish differences that are four, seven, eight, 10, 12 times better than competition. And as new buyers look at that, they're going to say, "What the hell are we doing paying, you know, five times more to get a poor result? What's going on here?" So I think this is going to rattle people and force a harder, closer look at what these alternatives are. >> I wonder if the guy, thank you. 
Let's just skip ahead of the benchmarks guys, bring up the next slide, let's skip ahead a little bit here, which talks to the benchmarks and the benchmarking if we can. You know, David Floyer, the sort of semiretired, you know, Wikibon analyst said, "Dave, this is going to force Amazon and others, Snowflake," he said, "To rethink actually how they architect databases." And this is kind of a compilation of some of the data that they shared. They went after Redshift mostly, (laughs) but also, you know, as I say, Snowflake, BigQuery. And, like I said, you can always tell which companies are doing well, 'cause Oracle will come after you, but they're on the radar here. (laughing) Holgar should we take this stuff seriously? I mean, or is it, you know, a grain salt? What are your thoughts here? >> I think you have to take it seriously. I mean, that's a great question, great point on that. Because like Ron said, "If there's a flaw in a benchmark, we know this database traditionally, right?" If anybody came up that, everybody will be, "Oh, you put the wrong benchmark, it wasn't audited right, let us do it again," and so on. We don't see this happening, right? So kudos to Oracle to be aggressive, differentiated, and seem to having impeccable benchmarks. But what we really see, I think in my view is that the classic and we can talk about this in 100 years, right? Is the suite versus best of breed, right? And the key question of the suite, because the suite's always slower, right? No matter at which level of the stack, you have the suite, then the best of breed that will come up with something new, use a cloud, put the data warehouse on steroids and so on. The important thing is that you have to assess as a buyer what is the speed of my suite vendor. And that's what you guys mentioned before as well, right? Marc said that and so on, "Like, this is a third release in one year of the HeatWave team, right?" So everybody in the database open source Marc, and there's so many MySQL spinoffs to certain point is put on shine on the speed of (indistinct) team, putting out fundamental changes. And the beauty of that is right, is so inherent to the Oracle value proposition. Larry's vision of building the IBM of the 21st century, right from the Silicon, from the chip all the way across the seven stacks to the click of the user. And that what makes the database what Rob was saying, "Tied to the OCI infrastructure," because designed for that, it runs uniquely better for that, that's why we see the cross connect to Microsoft. HeatWave so it's different, right? Because HeatWave runs on cheap hardware, right? Which is the breadth and butter 886 scale of any cloud provider, right? So Oracle probably needs it to scale OCI in a different category, not the expensive side, but also allow us to do what we said before, the multicloud capability, which ultimately CIOs really want, because data gravity is real, you want to operate where that is. If you have a fast, innovative offering, which gives you more functionality and the R and D speed is really impressive for the space, puts away bad results, then it's a good bet to look at. >> Yeah, so you're saying, that we versus best of breed. I just want to sort of play back then Marc a comment. That suite versus best of breed, there's always been that trade off. If I understand you Holgar you're saying that somehow Oracle has magically cut through that trade off and they're giving you the best of both. >> It's the developing velocity, right? 
The provision of important features, which matter to buyers of the suite vendor, eclipses the best of breed vendor, then the best of breed vendor is in the hell of a potential job. >> Yeah, go ahead Marc. >> Yeah and I want to add on what Holgar just said there. I mean the worst job in the data center is data movement, moving the data sucks. I don't care who you are, nobody likes it. You never get any kudos for doing it well, and you always get the ah craps, when things go wrong. So it's in- >> In the data center Marc all the time across data centers, across cloud. That's where the bleeding comes. >> It's right, you get beat up all the time. So nobody likes to move data, ever. So what you're looking at with what they announce with HeatWave and what I love about HeatWave is it doesn't matter when you started with it, you get all the additional features they announce it's part of the service, all the time. But they don't have to move any of the data. You want to analyze the data that's in your transactional, MySQL database, it's there. You want to do machine learning models, it's there, there's no data movement. The data movement is the key thing, and they just eliminate that, in so many ways. And the other thing I wanted to talk about is on the benchmarks. As great as those benchmarks are, they're really conservative 'cause they're underestimating the cost of that data movement. The ETLs, the other services, everything's left out. It's just comparing HeatWave, MySQL cloud service with HeatWave versus Redshift, not Redshift and Aurora and Glue, Redshift and Redshift ML and SageMaker, it's just Redshift. >> Yeah, so what you're saying is what Oracle's doing is saying, "Okay, we're going to run MySQL HeatWave benchmarks on analytics against Redshift, and then we're going to run 'em in transaction against Aurora." >> Right. >> But if you really had to look at what you would have to do with the ETL, you'd have to buy two different data stores and all the infrastructure around that, and that goes away so. >> Due to the nature of the competition, they're running narrow best of breed benchmarks. There is no suite level benchmark (Dave laughs) because they created something new. >> Well that's you're the earlier point they're beating best of breed with a suite. So that's, I guess to Floyer's earlier point, "That's going to shake things up." But I want to come back to Bob Evans, 'cause I want to tap your Cloud Wars mojo before we wrap. And line up the horses, you got AWS, you got Microsoft, Google and Oracle. Now they all own their own cloud. Snowflake, Mongo, Couchbase, Redis, Cockroach by the way they're all doing very well. They run in the cloud as do many others. I think you guys all saw the Andreessen, you know, commentary from Sarah Wang and company, to talk about the cost of goods sold impact of cloud. So owning your own cloud has to be an advantage because other guys like Snowflake have to pay cloud vendors and negotiate down versus having the whole enchilada, Safra Catz's dream. Bob, how do you think this is going to impact the market long term? >> Well, Dave, that's a great question about, you know, how this is all going to play out. If I could mention three things, one, Frank Slootman has done a fantastic job with Snowflake. Really good company before he got there, but since he's been there, the growth mindset, the discipline, the rigor and the phenomenon of what Snowflake has done has forced all these bigger companies to really accelerate what they're doing. 
And again, it's an example of how this intense competition makes all the different cloud vendors better, and it provides enormous value to customers. The second thing I wanted to mention here was, look at the Adam Selipsky effect at AWS. He took over in the middle of May, and in Q2, Q3, Q4, AWS's growth rate accelerated. And in each of those three quarters, they grew faster than Microsoft's cloud, which has not happened in two or three years, so they're closing the gap on Microsoft. The third thing, Dave, in this, you know, incredibly intense competitive environment, look at Larry Ellison, right? He's got, you know, the product that for the last two or three years he has said is going to help determine the future of the company, autonomous database. You would think he's the last person in the world who's going to bring in, you know, in some ways another database to think about there, but he has put his whole effort and energy behind this. The investments Oracle's made, he's riding this horse really hard. So it's not just a technology achievement, but it's also an investment priority for Oracle going forward. And I think it's going to shape a lot of how they position themselves to this new breed of buyer with a new type of need and expectations from IT. So I just think the next two or three years are going to be fantastic for people who are lucky enough to get to do the sorts of things that we do. >> You know, it's a great point you made about AWS. Back in 2018 Q3, they were doing about 7.4 billion a quarter and they were growing in the mid forties. They dropped down to like 29% in Q4 2020, I'm looking at the data now. They popped back up last quarter, the last reported quarter, to 40%, that is 17.8 billion, so they more than doubled and they accelerated their growth rate. (laughs) So maybe that portends something for people who are concerned about Snowflake right now decelerating growth, you know, maybe that's going to be different. By the way, I think Snowflake has a different strategy, the whole data cloud thing, data sharing. They're not trying to necessarily take Oracle head on, which is going to make this next 10 years really interesting. All right, we got to go, last question. 30 seconds or less, what can we expect from the future of data platforms? Matt, please start. >> I have to go first again? You're killing me, Dave. (laughing) In the next few years, I think you're going to see the major players continue to meet customers where they are, right. Every organization, every environment is, you know, kind of bespoke, we use these words, bespoke and snowflake, pardon the pun, but they're all snowflakes, right. But you know, they're all opinionated and unique, and what's great as an IT person is, you know, there is a service for me regardless of where I am on my journey, my data management journey. With regards specifically to Oracle, I think you're going to see the company continue along this path of being all things to all people, if you will, or all organizations, without sacrificing, you know, the richness of features and without sacrificing who they are, right. Look, they are the data kings, right? I mean, they've been a database leader for an awful long time. I don't see that going away any time soon, and I love the innovative spirit they've brought in with HeatWave. >> All right, great, thank you. Okay, 30 seconds, Holgar, go. >> Yeah, I mean, the interesting thing that we see is really that trend to autonomous, as Oracle calls it, or self-driving software, right?
So the database will have to do more things than just store the data and support the DBA. It will have to provide insights, that's the whole upside, and it will have to support machine learning. We haven't really talked about that, how exciting it is, what kind of use cases we can get out of machine learning running in real time on data as it changes, right? Which is part of this announcement, right? So we'll see more of that self-driving nature in the database space. And because you said we can promote it, right, check out my report about the latest HeatWave release, which is posted on oracle.com.

Published Date : Mar 31 2022

Glyn Martin, BT Group | DevOps Virtual Forum


 

>> From around the globe, it's theCube, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom. Welcome to Broadcom's DevOps Virtual Forum. I'm Lisa Martin, and I'm joined by another Martin, very socially distanced from me, coming all the way from Birmingham, England: Glyn Martin, head of QA transformation at BT. Glyn, it's great to have you on the program. >> Thank you, Lisa. I'm looking forward to it. >> As we said before we went live, two Martins for the price of one in one segment, so this is going to be an interesting segment, guys. What we're going to do is Glyn's going to give us a really kind of deep, inside-out view of DevOps from an evolution perspective. So Glyn, let's start. Transformation is at the heart of what you do. It's obviously been a very transformative year. How have the events of this year affected the transformation that you are responsible for driving? >> Yeah, thank you, Lisa. So I mean, yeah, it has been a difficult year, and although working for BT, which is a global telecommunications company, is relatively resilient, I suppose, as an industry through COVID, it obviously still has been affected and has got its challenges, and if anything it's actually caused us to accelerate our transformation journey. You know, we had to do some great things during this time, around, you know, in the UK, for our emergency and health workers, giving them unlimited data, and for vulnerable people, supporting them, and that meant that we've had to deliver changes quickly. But what we want to be able to do is deliver those kinds of changes quickly, but sustainably, for everything that we do, not just because there's an emergency. So we were already on that kind of journey, but it's ever more important now that we are able to do that kind of work and do it more quickly, and that it works, because the implications of it not working could be terrible in terms of, you know, we've been supporting testing centers, new hospitals to treat COVID patients, so we need to get it right, and therefore the coverage of what we do, the quality of what we do, and how quickly we do it really has taken on a new scale in what was already a very competitive market within the telco industry in the UK. You know, what I would say is that we are under pressure to deliver more value, but we have cost challenges. We obviously have to deal with the fact that, you know, COVID-19 has hit most industries' revenues and profits. So we've got this kind of paradox between having less cost but having to deliver more value, quicker and, you know, to higher quality. So yeah, certainly the finances are on our minds, and that's why we need flexible models, cost models that allow us to grow, but we get that growth by showing that we're delivering value, especially in these times when there are financial challenges on companies. >> So one of the things that I wanted to ask you about, again looking at DevOps from the inside out and the evolution that you've seen, you talked about the speed of things really accelerating in the last nine months or so. When we think DevOps, we think speed. But one of the things I'd love to get your perspective on, and we've talked about it in a number of the segments that we've done for this event, is cultural change. What are some of the things that you've seen there as needing to, as you said, get things right but do them so quickly to support essential businesses, essential workers?
How have you seen that cultural shift? >> Yeah, I think, you know, before, test teams saw themselves as just one part of the software delivery cycle. Actually, now, our customers are expecting quality, and to deliver for our customers what they want, quality has to be ingrained throughout the life cycle. Obviously, you know, there are lots of buzzwords like shift left. How do you do shift-left testing? But for me, that's really about instilling quality and giving shared capabilities throughout the life cycle that drive automation and drive improvements. I always say that you're only as good as your lowest common denominator, and one thing that we were finding on our DevOps journey was that we would be trying to do certain things quicker and had automated builds and automated tests, but if we were taking weeks to create test scripts, or we were taking weeks to manually craft data, and even then, when we had taken so long to do it, the coverage was quite poor, and that led to lots of defects later in the life cycle or even in our production environment, we just couldn't afford to do that. And actually, you know, focusing on continuous testing over the last 9 to 12 months has really given us the ability to deliver quickly across the whole life cycle, and therefore actually go from doing a kind of semi-agile thing, where we did user stories and a few of the, you know, agile ceremonies, but we weren't really deploying any quicker into production, because our stakeholders were scared that we didn't have the same control that we had when we had more waterfall releases, and, you know, we didn't quite trust ourselves either. So we've done a lot of work on every aspect, every activity, rather than just looking at automated tests, you know, whether it's actually creating the tests in the first place, whether it's doing security testing earlier in the life cycle, performance testing earlier in the life cycle, etcetera. So yeah, it's been a real key thing, continuous testing, for us to drive DevOps. >> Talk to me a little bit about your team. What are some of the shifts in terms of expectations that you're experiencing, and how does your team interact with the internal folks from pipeline through life cycle? >> Yeah, we've done a lot of work on this. You know, there's a thing, I think people call it the customer experience gap. It reminds me of a Dilbert cartoon where, you know, we start with the requirements here and, you know, there's almost a Chinese whispers effect, and what we deliver is completely, completely different. So the testing team or the delivery team, you know, think they've done a great job, this is what it said in the acceptance criteria, but then our customers say, well, actually, that's not working, this isn't working, you know, and there's this kind of gap. We had a great launch this year of Agile Requirements Designer, one of the Broadcom tools, and for the first time since I can remember actually working within BT, I had customers saying to me, wow, you know, we want more of this, we want more projects to have Agile Requirements Designer on them, because it allowed us to actually work with the business collaboratively.
I mean, we talk about collaboration, but how do you actually, you know, do that have something that both the business on technical people can understand? And we've actually been working with the business using at our requirement. Designer Thio, you know, really look about what the requirements are. Tease out requirements to the hadn't even thought off and making sure that we've got high levels of test coverage. And so what we actually deliver at the end of it, not only have you been able Thio generate test more quickly, but we've got much higher test coverage and also can more smartly, you're using the kind of AI within the tour and with some of the other kind of pipeline tools actually deliver to choose the right tests on the bar, still actually doing a risk based testing approach. So that's been a great launched this year, but just the start of many kind of things that we're >>doing. But what I hear in that Glenn is a lot of positives that have come out of a very challenging situation. Uh, talk to me about it and I like that perspective. This is a very challenging time for everybody in the world, but it sounds like from a collaboration, perspective is you're right. We talk about that a lot critical with Dev Ops. But those challenges there you guys were able to overcome those pretty quickly. What other challenges did you face and figure out quickly enough to be able to pit it so fast? >>I mean, you talked about culture. I mean, you know, Bt is like most come countries companies. So, um, is very siloed. You know, we're still trying to work to become closer as a company. So I think there's a lot of challenges around. How do you integrate with other tools? How do you integrate with you know, the various different technologies and bt we have 58 different whitey stacks? That's not systems that stacks all of those stacks of can have, you know, hundreds of systems on we're trying to. We're gonna drive at the moment a simplified program where we're trying Thio, you know, reduce that number 2 14 stacks. And even then they'll be complexity behind the scenes that that we will be challenged. Maurin Mawr As we go forward, how do you actually hired that to our users on as an I T organization? How do we make ourselves Lena so that even when we you know, we've still got some of that legacy and we'll never fully get rid of it on that's the kind of trade off that we have to make. How do we actually deal with that and and hide that for my users a say and and and drive those programs so we can actually accelerate change. So we take, you know, reduce that kind of waste, and that kind of legacy costs out of our business. You know, the other thing is, well, beating. And I'm sure you know telecoms probably no difference to insurance or finance we've got You know, when you take the number of products that we do and then you combine them, the permutations are tens and hundreds of thousands of products. So we as a business to trying to simplify. We are trying Thio do that in a natural way and haven't trying to do agile in the proper way, you know, and really actually work it paste really deliver value. So I think what we're looking Maura, Maura, at the moment is actually, um is more value focus? Before we used to deliver changes, sometimes into production, someone had a great idea or it was a great idea nine months ago or 12 months ago. But actually, then we end up deploying it. And then we look at the the the users, you know, the usage of that product of that application or whatever it is on. 
It's not being used for six months, so we're getting much we haven't got, you know, because of the last 12 months, we certainly haven't got room for that kind of waste and you know, the for not really understanding the value of changes that we we are doing. So I think that's the most important thing at the moment is really taken that waste out. You know, there's lots of focus on things like flow management. What bits of the our process are actually taking too long, and we've We've started on that journey, but we've got a hell of a long way to go, you know, But that that involves looking every aspect off the kind of software delivery cycle. >>What are some? Because that that going from, what, 58 i t stocks down to 14 or whatever it's going to be go simplifying is sounds magical. Took everybody. It's a big challenge. What are some of the core technology capabilities that you see really as kind of essential for enabling that with this new way that you're working? >>Yeah. I mean, I think we've started on a continuous testing journey, and I think that's just the start. I mean, that's really, as I say, looking at every aspect off, you know, from a Q, a point of view. It's every aspect of what we dio. But it's also looking at, you know, we're starting to branch into more like a AI ops and, you know, really, the full life cycle on. But, you know, that's just a stepping stone onto, you know, I think oughta Nomics is the way forward, right? You know all of this kind of stuff that happens um, you know, monitoring, you know, monitoring systems, what's happening in production had to be feed that back. How do you get to a point where actually we think about a change on then suddenly it's in production safely. Or if it's not going to safety, it's automatically backing out. So, you know, it's a very, very long journey. But if we want Thio, you know, in a world where the pace is ever increasing the demands of the team and you know, with the pressures on at the moment where with we're being asked to do things, you know more efficiently Ondas leaving as possible. We need to be, you know, thinking about every part of the process. And how do we put the kind of stepping stones in players to lead us to a more automated kind of, you know, their future? >>Do you feel that that plant outcomes are starting to align with what's delivered? Given this massive shift that you're experiencing, >>I think it's starting to, and I think you know, Azzawi. Look at more of a value based approach on. Do you know a Zeiss? A princess was a kind of flight management. I think that's that will become ever evermore important. So I think it's starting to people. Certainly realized that, you know, people teams need to work together. You know, the kind of the cousin between business and ICT, especially as we go Teoh Mawr kind of sad space solutions, low cold solutions. You know there's not such a gap anymore. Actually, some of our business partners expects to be much more tech savvy. Eso I think you know, this is what we have to kind of appreciate. What is I ts role? How do we give the capabilities become more for centers of excellence rather than actually doing Mount amount of work And for May and from a testing point of view, you know, amount, amount of testing, actually, how do we automate that? How do we actually generate that instead of created? I think that's the kind of challenge going forward. >>What are some? 
As we look forward, what are some of the things that you would like to see implemented or deployed in the next, say, 6 to 12 months as we hopefully round a corner with this pandemic? >> Yeah, I think, you know, certainly for where we are as a company from a QA perspective, there are certain bits that we do well. You know, we've started creating continuous delivery and DevOps pipelines, but there are still manual aspects of that. So certainly for me, I've challenged my team with saying, how do we do an automated journey? So if I put a requirement in Jira, or wherever it is, I can then click a button and, you know, with either zero-touch or one-touch, put that into production and have confidence that it has been done safely and that it works, and know what happens if it doesn't work. So that's what our concentration is about over the next few months. But it's also about decision making, you know, how do we actually understand those value judgements? And I think there are lots of things, DevOps, AIOps, kind of all the aspects of business operations. I think it's about having the information in one place to make those kinds of decisions. How does it all tie together? As I say, even still with DevOps, we've still got elements within my company where we've got lots of different organizations doing similar kinds of things but still working in silos. So I think AIOps becomes more and more important as we go to the cloud, and that's what we need to do. You know, we're still very early on in our cloud journey, so we need to make sure the technologies work with cloud as well as our kind of legacy systems. But it's about bringing that all together and having a fully visible pipeline that everybody can see and make decisions against.
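As a loose sketch of the kind of zero-touch, promote-or-roll-back gate Glyn describes, the decision logic might look something like the following. This is illustrative only, not BT's or Broadcom's actual pipeline; the gate thresholds, field names, and health check are assumptions.

    # Illustrative zero-touch promotion with automatic rollback.
    # Thresholds, names, and the health check are hypothetical.
    def release(build, deploy, health_check, rollback, min_coverage=80.0):
        # Gate 1: only fully green builds go out with no human approval step
        if not (build["tests_passed"] and build["coverage_pct"] >= min_coverage):
            return "blocked: quality gate failed"
        deploy(build["version"])
        # Gate 2: if the deployed version is not healthy, back it out automatically
        if not health_check():
            rollback(build["version"])
            return "rolled back"
        return "promoted"

    if __name__ == "__main__":
        build = {"version": "1.4.2", "tests_passed": True, "coverage_pct": 93.0}
        print(release(
            build,
            deploy=lambda v: print(f"deploying {v}"),
            health_check=lambda: True,          # pretend the smoke tests passed
            rollback=lambda v: print(f"rolling back {v}"),
        ))
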
But I think by having those conversations and actually engaging with the business, um, you know, especially if the business hold the purse strings, which is you know, in some companies, including as they do there is that kind of, you know, almost by understanding their their pain points and then saying This is how we can solve your problem We've tended to be much more successful than trying Thio impose something and say We're here to technology that they don't quite understand doesn't really understand how it could have resonate with their problems. So I think that's the heart of it is really about, you know, getting looking at the data, looking at the processes, looking at where the kind of waste is on. Then actually then looking at the right solutions. And as I say, continuous testing is a massive for us. We've also got a good relationship with capitals looking at visual ai on. Actually, there's a common theme through that, and I mean, AI is becoming more and more prevalent, and I know yeah, sometimes what is A I and people have kind of the semantics of it. Is it true, ai or not? But yes, certainly, you know, AI and machine learning is becoming more and more prevalent in the way that we work, and it's allowing us to be much more effective, the quicker and what we do on being more accurate. You know, whether it's finding defects, running the right tests or, you know, being able to anticipate problems before they're happening in a production environment. >>Welcome. Thank you so much for giving us this sort of insight. Outlook at Dev Ops, sharing the successes that you're having taking those challenges, converting them toe opportunities and forgiving folks who might be in your shoes or maybe slightly behind advice. I'm sure they appreciate it. We appreciate your time. >>It's been an absolute pleasure, Really. Thank you for inviting me of Extremely enjoyed it. So thank you ever so much. >>Excellent. Me too. I've learned a lot for Glynn Martin and Lisa Martin. You're watching the Cube?

Published Date : Nov 20 2020

DevOps Virtual Forum 2020 | Broadcom


 

>>From around the globe. It's the queue with digital coverage of dev ops virtual forum brought to you by Broadcom. >>Hi, Lisa Martin here covering the Broadcom dev ops virtual forum. I'm very pleased to be joined today by a cube alumni, Jeffrey Hammond, the vice president and principal analyst serving CIO is at Forester. Jeffrey. Nice to talk with you today. >>Good morning. It's good to be here. Yeah. >>So a virtual forum, great opportunity to engage with our audiences so much has changed in the last it's an understatement, right? Or it's an overstated thing, but it's an obvious, so much has changed when we think of dev ops. One of the things that we think of is speed, you know, enabling organizations to be able to better serve customers or adapt to changing markets like we're in now, speaking of the need to adapt, talk to us about what you're seeing with respect to dev ops and agile in the age of COVID, what are things looking like? >>Yeah, I think that, um, for most organizations, we're in a, uh, a period of adjustment, uh, when we initially started, it was essentially a sprint, you know, you run as hard as you can for as fast as you can for as long as you can and you just kind of power through it. And, and that's actually what, um, the folks that get hub saw in may when they ran an analysis of how developers, uh, commit times and a level of work that they were committing and how they were working, uh, in the first couple of months of COVID was, was progressing. They found that developers, at least in the Pacific time zone were actually increasing their work volume, maybe because they didn't have two hour commutes or maybe because they were stuck away in their homes, but for whatever reason, they were doing more work. >>And it's almost like, you know, if you've ever run a marathon the first mile or two in the marathon, you feel great and you just want to run and you want to power through it and you want to go hard. And if you do that by the time you get to mile 18 or 19, you're going to be gassed. It's sucking for wind. Uh, and, and that's, I think where we're starting to hit. So as we start to, um, gear our development chops out for the reality that most of us won't be returning into an office until 2021 at the earliest and many organizations will, will be fundamentally changing, uh, their remote workforce, uh, policies. We have to make sure that the agile processes that we use and the dev ops processes and tools that we use to support these teams are essentially aligned to help developers run that marathon instead of just kind of power through. >>So, um, let me give you a couple of specifics for many organizations, they have been in an environment where they will, um, tolerate Rover remote work and what I would call remote work around the edges like developers can be remote, but product managers and, um, you know, essentially scrum masters and all the administrators that are running the, uh, uh, the SCM repositories and, and the dev ops pipelines are all in the office. And it's essentially centralized work. That's not, we are anymore. We're moving from remote workers at the edge to remote workers at the center of what we do. And so one of the implications of that is that, um, we have to think about all the activities that you need to do from a dev ops perspective or from an agile perspective, they have to be remote people. 
One of the things I found with some of the organizations I talked to early on was there were things that administrators had to do that required them to go into the office to reboot the SCM server as an example, or to make sure that the final approvals for production, uh, were made. >>And so the code could be moved into the production environment. And so it actually was a little bit difficult because they had to get specific approval from the HR organizations to actually be allowed to go into the office in some States. And so one of the, the results of that is that while we've traditionally said, you know, tools are important, but they're not as important as culture as structure as organization as process. I think we have to rethink that a little bit because to the extent that tools enable us to be more digitally organized and to hiring, you know, achieve higher levels of digitization in our processes and be able to support the idea of remote workers in the center. They're now on an equal footing with so many of the other levers, uh, that, that, um, uh, that organizations have at their disposal. Um, I'll give you another example for years. >>We've said that the key to success with agile at the team level is cross-functional co located teams that are working together physically co located. It's the easiest way to show agile success. We can't do that anymore. We can't be physically located at least for the foreseeable future. So, you know, how do you take the low hanging fruits of an agile transformation and apply it in, in, in, in the time of COVID? Well, I think what you have to do is that you have to look at what physical co-location has enabled in the past and understand that it's not so much the fact that we're together looking at each other across the table. It's the fact that we're able to get into a shared mindspace, uh, from, um, uh, from a measurement perspective, we can have shared purpose. We can engage in high bandwidth communications. It's the spiritual aspect of that physical co-location that is actually important. So one of the biggest things that organizations need to start to ask themselves is how do we achieve spiritual colocation with our agile teams? Because we don't have the, the ease of physical co-location available to us anymore? >>Well, the spiritual co-location is such an interesting kind of provocative phrase there, but something that probably was a challenge here, we are seven, eight months in for many organizations, as you say, going from, you know, physical workspaces, co-location being able to collaborate face to face to a, a light switch flip overnight. And this undefined period of time where all we were living with with was uncertainty, how does spiritual, what do you, when you talk about spiritual co-location in terms of collaboration and processes and technology help us unpack that, and how are you seeing organizations adopted? >>Yeah, it's, it's, um, it's a great question. And, and I think it goes to the very root of how organizations are trying to transform themselves to be more agile and to embrace dev ops. Um, if you go all the way back to the, to the original, uh, agile manifesto, you know, there were four principles that were espoused individuals and interactions over processes and tools. That's still important. Individuals and interactions are at the core of software development, processes and tools that support those individual and interact. 
Uh, those individuals in those interactions are more important than ever working software over comprehensive documentation. Working software is still more important, but when you are trying to onboard employees and they can't come into the office and they can't do the two day training session and kind of understand how things work and they can't just holler over the cube, uh, to ask a question, you may need to invest a little bit more in documentation to help that onboarding process be successful in a remote context, uh, customer collaboration over contract negotiation. >>Absolutely still important, but employee collaboration is equally as important if you want to be spiritually, spiritually co-located. And if you want to have a shared purpose and then, um, responding to change over following a plan. I think one of the things that's happened in a lot of organizations is we have focused so much of our dev ops effort around velocity getting faster. We need to run as fast as we can like that sprinter. Okay. You know, trying to just power through it as quickly as possible. But as we shift to, to the, to the marathon way of thinking, um, velocity is still important, but agility becomes even more important. So when you have to create an application in three weeks to do track and trace for your employees, agility is more important. Um, and then just flat out velocity. Um, and so changing some of the ways that we think about dev ops practices, um, is, is important to make sure that that agility is there for one thing, you have to defer decisions as far down the chain to the team level as possible. >>So those teams have to be empowered to make decisions because you can't have a program level meeting of six or seven teams and one large hall and say, here's the lay of the land. Here's what we're going to do here are our processes. And here are our guardrails. Those teams have to make decisions much more quickly that developers are actually developing code in smaller chunks of flow. They have to be able to take two hours here or 50 minutes there and do something useful. And so the tools that support us have to become tolerant of the reality of, of, of, of how we're working. So if they work in a way that it allows the team together to take as much autonomy as they can handle, um, to, uh, allow them to communicate in a way that, that, that delivers shared purpose and allows them to adapt and master new technologies, then they're in the zone in their spiritual, they'll get spiritually connected. I hope that makes sense. >>It does. I think we all could use some of that, but, you know, you talked about in the beginning and I've, I've talked to numerous companies during the pandemic on the cube about the productivity, or rather the number of hours of work has gone way up for many roles, you know, and, and, and times that they normally late at night on the weekends. So, but it's a cultural, it's a mind shift to your point about dev ops focused on velocity, sprints, sprints, sprints, and now we have to, so that cultural shift is not an easy one for developers. And even at this folks to flip so quickly, what have you seen in terms of the velocity at which businesses are able to get more of that balance between the velocity, the sprint and the agility? >>I think, I think at the core, this really comes down to management sensitivity. Um, when everybody was in the office, you could kind of see the mental health of development teams by, by watching how they work. 
You know, you call it management by walking around, right? We can't do that. Managers have to be more aware of what their teams are doing, because they're not going to see that developer doing a check-in at 9:00 PM on a Friday because that's what they had to do to meet the objectives. They're going to have to find new ways to measure engagement and also potential burnout. A friend of mine once had a great metric that he called the parking lot metric: how full was the parking lot at nine, and how full was it at five?
And that gives you an indication of how engaged your developers are. What's the digital equivalent of the parking lot metric in the time of COVID? It's commit stats, it's commit rates, it's the churn rate in our code. So we have this information; we may not be collecting it, but then the next question becomes, how do we use it? Do we use it to say, well, this team isn't delivering at the same level of productivity as another team, do we weaponize that data, or do we use it to identify impediments in the process? Why isn't a team working effectively? Is it because they have higher levels of family obligations and they've got kids at home? Is it because they're working with hardware technology, and it's not easy to get that hardware into their home office because it's in the lab at the corporate office? Or because they're trying to communicate halfway around the world,
and they're communicating with an office lab that is also shut down, and the bandwidth just doesn't enable high-bandwidth communications? So from a DevOps perspective, managers have to get much more sensitive to the exhaust that the DevOps tools are throwing off, and to how they're going to use it in a constructive way to prevent burnout. And if they're not already monitoring or measuring the level of developer engagement, they really need to start, whether that's surveys around developer satisfaction, more regular social events where developers can just get together, drink a beer, and talk about what's going on in the project, or monitoring who checks in and who doesn't. They have to work harder, I think, than they ever have before.
>>Well, you mentioned burnout, and that's something I think we've all faced in this time at varying levels, and it changes; there's a tension in the air regardless of where you are. There's a challenge, as you mentioned, with people having their kids as coworkers and fighting for bandwidth, because everyone is forced into this situation. I'd love to get your perspective on some businesses that have done this adaptation well. What can you share in terms of real-world examples that might inspire the audience?
>>Yeah, I'll start with Stack Overflow. They recently published a piece in the journal of the ACM around some of the things they had discovered. First of all, just a cultural philosophy: if one person is remote, everybody is remote, and you think that way from an executive level. Then, social spaces.
One of the things that they talk about doing is leaving a video conference room open at a team level all day long, and the team members, you know, we'll go on mute, you know, so that they don't have to, that they don't necessarily have to be there with somebody else listening to them. But if they have a question, they can just pop off mute really quickly and ask the question. And if anybody else knows the answer, it's kind of like being in that virtual pod. Uh, if you, uh, if you will, um, even here at Forrester, one of the things that we've done is we've invested in social ceremonies. >>We've actually moved our to our team meetings on, on my analyst team from, from once every two weeks to weekly. And we have built more time in for social Ajay socialization, just so we can see, uh, how, how, how we're doing. Um, I think Microsoft has really made some good, uh, information available in how they've managed things like the onboarding process. I think I'm Amanda silver over there mentioned that a couple of weeks ago when, uh, uh, a presentation they did that, uh, uh, Microsoft onboarded over 150,000 people since the start of COVID, if you don't have good remote onboarding processes, that's going to be a disaster. Now they're not all developers, but if you think about it, um, everything from how you do the interviewing process, uh, to how you get people, their badges, to how they get their equipment. Um, security is a, is another issue that they called out typically, uh, it security, um, the security of, of developers machines ends at, at, at the corporate desktop. >>But, you know, since we're increasingly using our own machines, our own hardware, um, security organizations kind of have to extend their security policies to cover, uh, employee devices, and that's caused them to scramble a little bit. Uh, so, so the examples are out there. It's not a lot of, like, we have to do everything completely differently, but it's a lot of subtle changes that, that have to be made. Um, I'll give you another example. Um, one of the things that, that we are seeing is that, um, more and more organizations to deal with the challenges around agility, with respect to delivering software, embracing low-code tools. In fact, uh, we see about 50% of firms are using low-code tools right now. We predict it's going to be 75% by the end of next year. So figuring out how your dev ops processes support an organization that might be using Mendix or OutSystems, or, you know, the power platform building the front end of an application, like a track and trace application really, really quickly, but then hooking it up to your backend infrastructure. Does that happen completely outside the dev ops investments that you're making and the agile processes that you're making, or do you adapt your organization? Um, our hybrid teams now teams that not just have professional developers, but also have business users that are doing some development with a low-code tool. Those are the kinds of things that we have to be, um, willing to, um, to entertain in order to shift the focus a little bit more toward the agility side, I think >>Lot of obstacles, but also a lot of opportunities for businesses to really learn, pay attention here, pivot and grow, and hopefully some good opportunities for the developers and the business folks to just get better at what they're doing and learning to embrace spiritual co-location Jeffrey, thank you so much for joining us on the program today. Very insightful conversation. >>My pleasure. 
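As a rough illustration of the "digital parking lot metric" Jeffrey describes, commit stats and churn as an engagement and burnout signal, here is a minimal sketch. It is not a tool discussed in the forum; it assumes a local git checkout, and the 90-day lookback and the 9-to-6 weekday window are arbitrary choices.

```python
# Illustrative sketch only: approximate the "digital parking lot metric" by
# looking at when commits land in a local git repository.
import subprocess
from collections import Counter
from datetime import datetime

def commit_timestamps(repo_path: str, since: str = "90 days ago"):
    """Return timezone-aware author timestamps for recent commits."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--pretty=%aI"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return [datetime.fromisoformat(ts) for ts in out]

def engagement_report(repo_path: str):
    stamps = commit_timestamps(repo_path)
    if not stamps:
        return {"commits": 0}
    # Commits on weekends or outside 09:00-18:00: a possible burnout signal,
    # not a verdict; the window is an assumption, not a rule.
    off_hours = sum(1 for t in stamps if t.weekday() >= 5 or not 9 <= t.hour < 18)
    by_week = Counter(f"{t.isocalendar()[0]}-W{t.isocalendar()[1]:02d}" for t in stamps)
    return {
        "commits": len(stamps),
        "off_hours_share": round(off_hours / len(stamps), 2),
        "busiest_week": by_week.most_common(1)[0],
    }

if __name__ == "__main__":
    print(engagement_report("."))
```

In the spirit of the conversation, a report like this is a prompt for a question ("is this team heading for burnout, and why?"), not a score to weaponize.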
>>It's an important thing. Just remember, if you're going to run that marathon, break it into twenty-six ten-minute runs, take a walk break in between each, and you'll find that you'll get there.
>>Digestible components, wise advice. Jeffrey Hammond, thank you so much for joining us. For Jeffrey, I'm Lisa Martin. You're watching Broadcom's DevOps Virtual Forum.
>>From around the globe, it's theCUBE, with digital coverage of the DevOps Virtual Forum, brought to you by Broadcom.
>>Continuing our conversations here at Broadcom's DevOps Virtual Forum, Lisa Martin here. Pleased to welcome back to the program Serge Lucio, the general manager of the Enterprise Software Division at Broadcom. Hey, Serge. Welcome.
>>Thank you. Good to be here.
>>So I know you were just participating in the BizOps Manifesto that happened recently. I just had the chance to talk with Jeffrey Hammond, and he unlocked this really interesting concept, so I wanted to get your thoughts on spiritual co-location as really a necessity for BizOps to succeed in this unusual time in which we're living. What are your thoughts on spiritual co-location in terms of cultural change versus adoption of technologies?
>>Yeah, it's quite interesting, right? When we think about the major impediments to a DevOps implementation, it's all about culture. Over the last 20 years we've been talking about silos, we've been talking about how these teams need to align. In many ways it's not so much about these teams aligning as about being in the same boat, pulling toward the same goal. It's really about fusing those teams around a common purpose, a common objective. So to me, this is really about changing the culture so that people start to look at OKRs as the key objective that drives the entire team. What it means in practice is that we need to change a lot of behaviors. It's not about hierarchy, it's not about roles; it's about who can do what and when, and driving a bias toward action. It also means that, especially in these times, it becomes very difficult to drive collaboration between these teams, and so I think there's a significant role that tools in particular can play in providing that context and feedback across teams, to get to that place of spiritual co-location.
>>Well, and talking about culture, we're so used to talking about DevOps with respect to velocity, all about speed. But this time everything changed so quickly, and going from physical spaces to everybody being remote really is different; you can't replicate it digitally. But there are collaboration tools that can be essential to help that cultural shift, right?
>>Yeah. So in 2020 we tend to talk about collaboration in a very mundane way: of course we can use Zoom, we can all get into the same room. But the point, I think, when Jeff says spiritual co-location, is really about whether we all share the same objective. Take, for instance, our pipeline.
When you talk about DevOps, we probably all start thinking about this continuous delivery pipeline that drives the automation and orchestration across the team. But beyond the pipeline itself, at the end of the day it's all about the mean time to feedback for these teams. If I'm a developer and I commit code, how long does it take for that code to be processed through the pipeline, and how quickly can I get feedback? If I'm a finance person who is funding a product or a project, what is my mean time to feedback?
And so when we think about the pipeline, what's been really inspiring to me in the last year or so is that there is much more adoption of the DORA metrics and far more focus on value stream management. To me, when we talk about collaboration, it's really a balance: how do you provide feedback to the different stakeholders across the life cycle in a very timely manner? That's what we need to get to with this notion of collaboration. It's not so much about people being in the same physical space. It's about, when I check in code, does the system automatically identify what I'm going to break? If I'm about to release an application, how can the system help me reduce my change failure rate, because it's able to predict that some issue was introduced in the application or work product? So I think there's a great role that technology and AI can play in providing that new level of collaboration.
>>So we'll get to AI in a second, but I'm curious, what are some of the metrics you think really matter right now, as organizations are still in some form of transformation to this new, almost 100% remote workforce?
>>So I'll just say first, I'm not a big fan of metrics on their own, and the reason is this: you can look at a change failure rate, a lead time, or a cycle time, and those are interesting metrics; the trend on a metric is absolutely critical. But what's more important is getting to the root cause: what is causing that metric to degrade or improve over time? And so I'm much more interested, and at Broadcom we are much more interested, in understanding the patterns that contribute to this. I'll give you a very mundane example. We know that cycle time is heavily influenced by organizational boundaries. We talk a lot about silos, but we've worked with many of our customers on value stream mapping, and oftentimes what you see is that the boundaries of your organization create a lot of idle time. So to me, it's less about the metrics. I think the DORA metrics are a pretty valid set of metrics, but what's far more important is to understand the anti-patterns, the things we can detect through the data that are actually affecting those metrics. And over the last 10 to 20 years, we've learned a lot about what those anti-patterns are within our large enterprise customers. There are plenty of them.
>>What are some of the things that you're seeing now with respect to patterns that have developed over the last seven to eight months?
>>So I think the two areas which are clearly evolving very quickly are, first, on the front end of the life cycle, where DevOps is more and more embracing value stream management and value stream mapping. What's interesting is that in many ways the product is becoming the new silo. The notion of a product is by itself very difficult to define, and people are starting to recognize that a value stream is not its own little island: in reality, when I define a product, that product oftentimes has dependencies on other products, and in fact you're looking at a network of value streams, if you will. So even there, there is clearly a new set of anti-patterns, where products are defined as a set of OKRs, they have interdependencies, and you end up with a new set of silos. The second area is the movement toward SRE, where I think there is a cultural clash. While the DevOps side is very much embracing this notion of OKRs, value stream mapping, and value stream management,
on the other end you have the IT operations teams, who still think in terms of business services. They think about configuration items, they think about infrastructure. So it's not uncommon to see teams where the operations side is still thinking about tens of thousands, hundreds of thousands of business services. There is this boundary where, even as SRE is being put in place and there's lots of thinking about what kind of metrics can be defined, going back to culture, I think there's a lot of cultural evolution still required for the operations teams.
>>And that's a hard thing. Cultural transformation in any industry, pandemic or not, is a challenging thing. You talked about AI and automation a few minutes ago. How do you think those technologies can be leveraged by DevOps leaders to influence their success and their ability to collaborate, maybe see eye to eye with the SREs?
>>Yeah, there are a couple of aspects. Even for myself, as the leader of a 1,500-person organization, there are a number of things I don't see on a daily basis. The technologies we have at our disposal today around AI are able to mine a lot of data and expose a lot of issues that, as leaders, we may not be aware of. Some of these are pretty easy to understand. We all think we're agile, and yet when you start to look at, for instance, the work in progress during the sprint, when you start to analyze the data, you can detect that maybe the teams are overcommitted, that there is too much work in progress.
You can start to identify interdependencies, from a technology or from a people point of view, which were hidden. You can start to see that maybe the change failure rate is degrading. So I believe there is a fundamental role to be played by the tools in exposing, again, these anti-patterns, making these things visible to the teams, and even making it possible to compare teams. One of the things that's amazing is that we now have access to tons of data, not just from a given customer, but across a large number of customers.
And so we can start to compare how all of these teams operate, and what's working and what's not working.
>>Thoughts on AI and automation as a facilitator of spiritual co-location?
>>Yeah, absolutely. The problem we all face is the unknown, right? Given the velocity, volume, and variety of the data, every day we don't necessarily appreciate the impact of our actions. AI can really act as a safety net that enables us to understand the impact of our actions. In many ways, the ability to be informed in a timely manner, to interact with people on the basis of data, and to collaborate on that data is a very powerful enabler in that respect. I've seen it countless times, for instance at the SRE boundary: being able to surface the quality attributes of an incoming release, exposing that to an operations person and an SRE, and enabling that collaborative dialogue through data is a very, very powerful tool.
>>Do you have any recommendations for how teams, the SRE folks and the DevOps folks, can use AI and automation in the right ways to be successful, rather than in ways that are going to be unproductive?
>>Yeah. To me, part of the question is that when we talk about data, there are different ways you can use it. You can do a lot of analytics, predictive analytics. There is a tendency to look at, let's say, a specific KPI, like an availability KPI or change failure rate, and to basically do a regression analysis and project what's going to happen in the future. To me, that's a bad approach, and the reason I fundamentally think so is that these are systems: the way we develop software is a non-linear kind of system. Software development is not linear in nature. So focusing on the metrics alone is probably the worst approach. On the other hand,
if you start to understand, at a more granular level, which things are contributing to this, you get somewhere. If you start to understand, for instance, that whenever you touch a specific part of the application it translates into production issues. I actually have a customer who identified that over 50% of their unplanned outages were related to specific components in their architecture, and whenever those components were changed, it resulted in unplanned outages. So if you can start to establish causality, cause and effect between data across the life cycle, I think that is the right way to use AI. And so for me, it's much more of a classification problem, what are the classes of problems that exist and affect things, as opposed to predictive analytics, which I don't think is as powerful.
>>So I mentioned at the beginning of our conversation that you just came off the BizOps Manifesto; you're one of the authors of that. I want to get your thoughts on DevOps and BizOps overlapping and complementing each other. From the BizOps perspective, what does it mean to the future of DevOps?
>>Yeah, so it's interesting, right? If you think about DevOps, there's no founding document. We can refer to The Phoenix Project, and there are a set of documents which have been written, but in many ways there's no clear definition of what DevOps is. If you go to the DevOps Institute today, you'll see that there are specific trainings, for instance on value stream management, on SRE. And so in many ways the problem we have as an industry is that there are separate sets of practices: agile, DevOps, SRE, value stream management, ITIL. We all basically talk about the same things, we all talk about essentially accelerating the mean time to feedback, and yet we don't have a common framework to talk about it. The other key thing is that we had to wait for Gene Kim's most recent book to really start to get into the business aspect,
and for value stream mapping to start to emerge, for us as an industry, and for IT, to start to think about what our connection with the business is, what our purpose is. Ultimately it's all about driving business outcomes. And so to me, BizOps is really about putting a lens on this critical element: that it's not business and IT, that we in fact need to fuse business and IT, and that IT needs to transform itself to recognize that it is a value generator, not a cost center. So the relationship, to me, is that BizOps provides this overall framework, if you will, that sets the context for why IT exists and for the core values and principles IT needs to embrace to change from a cost center to a value center. And then we need to start to use this as a way to unify some of the core practices, whether it's agile, DevOps, value stream mapping, or SRE. So over time my hope is that we start to optimize a lot of our practices, language, and cultural elements.
>>Last question, Serge, in the last few seconds we have here, on the relationship between BizOps and DevOps: as DevOps evolves, and thinking through some of your insights, what should our audience keep their eyes on in the next six to 12 months?
>>To me, the key challenge for the industry is this: we are seeing a very rapid shift from project to product, and what we don't want to do is recreate new silos, hard silos. That's one of the big changes I think we need to be really careful about, because ultimately it is about culture. It's not about how we segment the work, and it's only through culture that we can overcome silos. So back to Jeffrey's concept of spiritual co-location, I think it's really about that too: focusing on the business outcomes, aligning on and driving engagement across the teams, but not creating a new set of silos which, instead of being vertical, are going to be these horizontal product silos.
>>Great advice, Serge: looking at culture as a way of addressing and helping to reduce those challenges. We thank you so much for sharing your insights and your time at today's DevOps Virtual Forum.
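For readers who want to see the DORA-style numbers Serge and Lisa keep coming back to (lead time, deployment frequency, change failure rate, mean time to repair) in concrete terms, here is a minimal, illustrative calculation. The records, field layout, and sample values are invented for the example and are not any particular product's schema.

```python
# Illustrative only: DORA-style metrics from made-up deployment and incident
# records. Each deployment is (commit time, deploy time, caused_incident).
from datetime import datetime
from statistics import median

deployments = [
    (datetime(2020, 11, 2, 9), datetime(2020, 11, 3, 15), False),
    (datetime(2020, 11, 4, 10), datetime(2020, 11, 6, 11), True),
    (datetime(2020, 11, 9, 14), datetime(2020, 11, 10, 9), False),
]
incidents = [  # (opened, resolved)
    (datetime(2020, 11, 6, 12), datetime(2020, 11, 6, 16)),
]

window_days = 14  # observation window for deployment frequency
lead_times = [deploy - commit for commit, deploy, _ in deployments]

metrics = {
    "deploys_per_week": round(len(deployments) / (window_days / 7), 2),
    "median_lead_time_hours": median(lt.total_seconds() / 3600 for lt in lead_times),
    "change_failure_rate": sum(failed for *_, failed in deployments) / len(deployments),
    "mttr_hours": median((end - start).total_seconds() / 3600 for start, end in incidents),
}
print(metrics)
```

As Serge cautions, the values themselves matter less than their trend and the root cause behind a change, so numbers like these are a starting point for asking why, not a benchmark to chase.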
>>Thank you. Thanks for your time.
>>I'll be right back.
>>From around the globe, it's theCUBE, with digital coverage of the DevOps Virtual Forum, brought to you by Broadcom.
>>Welcome to Broadcom's DevOps Virtual Forum. I'm Lisa Martin, and I'm joined by another Martin, very socially distanced from me and coming all the way from Birmingham, England: Glynn Martin, the head of QA transformation at BT. Glynn, it's great to have you on the program.
>>Thank you, Lisa. I'm looking forward to it.
>>As we said before we went live, it's two Martins for the price of one in this segment. So this is going to be an interesting one: what we're going to do is have Glynn give us a really deep, inside-out view of DevOps from an evolution perspective. So Glynn, let's start. Transformation is at the heart of what you do, and it's obviously been a very transformative year. How have the events of this year affected the transformation that you are still responsible for driving?
>>Yeah, thank you, Lisa. It has been a difficult year. Although BT, as a global telecommunications company, is in a relatively resilient industry, we have still been affected through COVID and it has had its challenges. If anything, it's actually caused us to accelerate our transformation journey. We had to do some great things during this time, in the UK giving our emergency and health workers unlimited data and supporting vulnerable people, and that's meant we've had to deliver changes quickly. But what we want is to deliver those kinds of changes quickly and sustainably for everything we do, not just when there's an emergency. We were already on the journey to agile, but it's ever more important now that we're able to do that kind of work, and do it more quickly.
And it has to work, because the implications of it not working can be terrible: we've been supporting testing centers and new hospitals to treat COVID patients, so we need to get it right. The coverage of what we do, the quality of what we do, and how quickly we do it have taken on a new scale in what was already a very competitive telco market within the UK. What I would say is that we are under pressure to deliver more value, but we also have cost challenges. We have to deal with the fact that COVID-19 has hit most industries' revenues and profits. So we've got this paradox of having less cost but having to deliver more value, quicker, and to higher quality. The finances are certainly on our minds, and that's why we need flexible cost models that allow us to grow, where we earn that growth by showing we're delivering value, especially in times when companies face financial challenges.
>>So one of the things I want to ask you about, again looking at DevOps from the inside out and the evolution you've seen: you talked about the speed of things really accelerating in these last nine months or so. When we think DevOps, we think speed. But one of the things I'd love to get your perspective on, and something we've talked about in a number of the segments we've done for this event, is cultural change.
What are some of the things that you've seen as needing to change in order to get things right, as you said, but done so quickly to support essential businesses and essential workers? How have you seen that cultural shift?
>>Yeah. I think before, test teams found themselves at the tail end of the software delivery cycle, and now our customers are really expecting that quality. To deliver what our customers want, quality has to be ingrained throughout the life cycle. Obviously there are lots of buzzwords like shift left: how do we do shift-left testing? For me, that's really about instilling quality, and providing shared capabilities throughout the life cycle that drive automation and drive improvements. I always say that you're only as good as your lowest common denominator, and one thing we were finding on our DevOps journey was that we were trying to do certain things quickly, we had automated builds, automated tests, but if we were taking weeks to create test scripts, or weeks to manually craft data, and even then, having taken so long, the coverage was quite poor, that led to lots of defects later in the life cycle, or even in our production environment. We just couldn't afford that.
Actually, focusing on continuous testing over the last nine to 12 months has really given us the ability to deliver quickly across the whole life cycle, and therefore to move on from a kind of semi-agile approach, where we did the user stories and a few of the agile ceremonies but weren't really deploying any quicker into production, because our stakeholders were scared that we didn't have the same control we had with more waterfall releases. So we've done a lot of work on every aspect of every activity, especially from a testing point of view, rather than just looking at automated tests: whether it's actually creating the tests in the first place, doing security testing earlier in the life cycle, performance testing, et cetera. So yes, continuous testing has been a real key for us in driving DevOps.
>>Talk to me a little bit about your team. What are some of the shifts in expectations that you're experiencing, and how does your team interact with the internal folks, from pipeline through life cycle?
>>Yeah, we've done a lot of work on this. There's something people would probably call a customer experience gap, and it reminds me of a Dilbert cartoon: we start with the requirements here, there's almost a Chinese whispers effect, and what we deliver is completely different. So we, the testing team or the delivery teams, think we've done a great job, it's what it said in the acceptance criteria, but then our customers are saying, well, actually, that's not working, this isn't working, and there's this gap. We had a great launch this year of Agile Requirements Designer, one of the Broadcom tools, and that was the first time I can remember, in all my years working within BT, that I had customers saying to me, wow, we want more of this.
We want more projects to have Agile Requirements Designer on them, because it allowed us to actually work with the business collaboratively.
I mean, we talk about collaboration, but how do we actually, you know, do that and have something that both the business and technical people can understand. And we've actually been working with the business , using agile requirements designer to really look at what the requirements are, tease out requirements we hadn't even thought of and making sure that we've got high levels of test coverage. And what we actually deliver at the end of it, not only have we been able to generate tests more quickly, but we've got much higher test coverage and also can more smartly, using the kind of AI within the tool and then some of the other kinds of pipeline tools, actually deliver to choose the right tasks, and actually doing a risk based testing approach. So that's been a great launch this year, but just the start of many kinds of things that we're doing >>Well, what I hear in that, Glynn is a lot of positives that have come out of a very challenging situation. Talk to me about it. And I liked that perspective. This is a very challenging time for everybody in the world, but it sounds like from a collaboration perspective you're right, we talk about that a lot critical with devops. But those challenges there, you guys were able to overcome those pretty quickly. What other challenges did you face and figure out quickly enough to be able to pivot so fast? >>I mean, you talked about culture. You know, BT is like most companies  So it's very siloed. You know we're still trying to work to become closer as a company. So I think there's a lot of challenges around how would you integrate with other tools? How would you integrate with the various different technologies. And BT, we have 58 different IT stacks. That's not systems, that's stacks, all of those stacks can have hundreds of systems. And we're trying to, we've got a drive at the moment, a simplified program where we're trying to you know, reduce that number to 14 stacks. And even then there'll be complexity behind the scenes that we will be challenged more and more as we go forward. How do we actually highlight that to our users? And as an it organization, how do we make ourselves leaner, so that even when we've still got some of that legacy, and we'll never fully get rid of it and that's the kind of trade off that we have to make, how do we actually deal with that and hide that from our users and drive those programs, so we can, as I say, accelerate change,  reduce that kind of waste and that kind of legacy costs out of our business. You know, the other thing as well, I'm sure telecoms is probably no different to insurance or finance. When you take the number of products that we do, and then you combine them, the permutations are tens and hundreds of thousands of products. So we, as a business are trying to simplify, we are trying to do that in an agile way. >>And haven't tried to do agile in the proper way and really actually work at pace, really deliver value. So I think what we're looking more and more at the moment is actually  more value focused. Before we used to deliver changes sometimes into production. Someone had a great idea, or it was a great idea nine months ago or 12 months ago, but actually then we ended up deploying it and then we'd look at the users, the usage of that product or that application or whatever it is, and it's not being used for six months. So we haven't got, you know, the cost of the last 12 months. 
We certainly haven't gotten room for that kind of waste and, you know, for not really understanding the value of changes that we are doing. So I think that's the most important thing of the moment, it's really taking that waste out. You know, there's lots of focus on things like flow management, what bits of our process are actually taking too long. And we've started on that journey, but we've got a hell of a long way to go. But that involves looking at every aspect of the software delivery cycle. >> Going from, what 58 IT stacks down to 14 or whatever it's going to be, simplifying sounds magical to everybody. It's a big challenge. What are some of the core technology capabilities that you see really as kind of essential for enabling that with this new way that you're working? >>Yeah. I mean, I think we were started on a continuous testing journey, and I think that's just the start. I mean as I say, looking at every aspect of, you know, from a QA point of view is every aspect of what we do. And it's also looking at, you know, we've started to branch into more like AI, uh, AI ops and, you know, really the full life cycle. Um, and you know, that's just a stepping stone to, you know, I think autonomics is the way forward, right. You know, all of this kind of stuff that happens, um, you know, monitoring, uh, you know, watching the systems what's happening in production, how do we feed that back? How'd you get to a point where actually we think about change and then suddenly it's in production safely, or if it's not going to safety, it's automatically backing out. So, you know, it's a very, very long journey, but if we want to, you know, in a world where the pace is in ever-increasing and the demands for the team, and, you know, with the pressures on, at the moment where we're being asked to do things, uh, you know, more efficiently and as lean as possible, we need to be thinking about every part of the process and how we put the kind of stepping stones in place to lead us to a more automated kind of, um, you know, um, the future. >>Do you feel that that planned outcomes are starting to align with what's delivered, given this massive shift that you're experiencing? >>I think it's starting to, and I think, you know, as I say, as we look at more of a value based approach, um, and, um, you know, as I say, print, this was a kind of flow management. I think that that will become ever, uh, ever more important. So, um, I think it starting to people certainly realize that, you know, teams need to work together, you know, the kind of the cousin between business and it, especially as we go to more kind of SAS based solutions, low code solutions, you know, there's not such a gap anymore, actually, some of our business partners that expense to be much more tech savvy. Um, so I think, you know, this is what we have to kind of appreciate what is its role, how do we give the capabilities, um, become more of a centers of excellence rather than actually doing mounds amounts of work. And for me, and from a testing point of view, you know, mounds and mounds of testing, actually, how do we automate that? How do we actually generate that instead of, um, create it? I think that's the kind of challenge going forward. >>What are some, as we look forward, what are some of the things that you would like to see implemented or deployed in the next, say six to 12 months as we hopefully round a corner with this pandemic? 
>>Yeah, I think, um, you know, certainly for, for where we are as a company from a QA perspective, we are, um, you let's start in bits that we do well, you know, we've started creating, um, continuous delivery and DevOps pipelines. Um, there's still manual aspects of that. So, you know, certainly for me, I I've challenged my team with saying how do we do an automated journey? So if I put a requirement in JIRA or rally or wherever it is and why then click a button and, you know, with either zero touch for one such, then put that into production and have confidence that, that has been done safely and that it works and what happens if it doesn't work. So, you know, that's, that's the next, um, the next few months, that's what our concentration, um, is, is about. But it's also about decision-making, you know, how do you actually understand those value judgments? >>And I think there's lots of the things dev ops, AI ops, kind of that always ask aspects of business operations. I think it's about having the information in one place to make those kinds of decisions. How does it all try and tie it together? As I say, even still with kind of dev ops, we've still got elements within my company where we've got lots of different organizations doing some, doing similar kinds of things, but they're all kind of working in silos. So I think having AI ops as it comes more and more to the fore as we go to cloud, and that's what we need to, you know, we're still very early on in our cloud journey, you know, so we need to make sure the technologies work with cloud as well as you can have, um, legacy systems, but it's about bringing that all together and having a full, visible pipeline, um, that everybody can see and make decisions. >>You said the word confidence, which jumped out at me right away, because absolutely you've got to have be able to have confidence in what your team is delivering and how it's impacting the business and those customers. Last question then for you is how would you advise your peers in a similar situation to leverage technology automation, for example, dev ops, to be able to gain the confidence that they're making the right decisions for their business? >>I think the, the, the, the, the approach that we've taken actually is not started with technology. Um, we've actually taken a human centered design, uh, as a core principle of what we do, um, within the it part of BT. So by using human centered design, that means we talk to our customers, we understand their pain points, we map out their current processes. Um, and then when we mapped out what this process does, it also understand their aspirations as well, you know? Um, and where do they want to be in six months? You know, do they want it to be, um, more agile and, you know, or do they want to, you know, is, is this a part of their business that they want to do one better? We actually then looked at why that's not running well, and then see what, what solutions are out there. >>We've been lucky that, you know, with our partnership, with Broadcom within the payer line, lots of the tools and the PLA have directly answered some of the business's problems. But I think by having those conversations and actually engaging with the business, um, you know, especially if the business hold the purse strings, which in, in, uh, you know, in some companies include not as they do there is that kind of, you know, almost by understanding their, their pain points and then starting, this is how we can solve your problem. 
We've tended to be much more successful that way than by trying to impose something and saying, well, here's the technology, when they don't quite understand it and it doesn't really resonate with their problems. So I think that's the heart of it: really looking at the data, looking at the processes, looking at where the waste is,
and then looking at the right solutions. As I say, continuous testing is massive for us. We've also got a good relationship with Applitools, looking at visual AI. And there's a common theme through that: AI is becoming more and more prevalent. I know people sometimes debate the semantics of what AI is, whether it's true AI or not, but AI and machine learning are certainly becoming more prevalent in the way we work, and they're allowing us to be much more effective, quicker in what we do, and more accurate, whether that's finding defects, running the right tests, or being able to anticipate problems before they happen in a production environment.
>>Well, thank you so much for giving us this inside-out look at DevOps, sharing the successes you're having, taking those challenges and converting them to opportunities, and for giving folks who might be in your shoes, or maybe slightly behind, your advice. We appreciate your time.
>>Well, it's been an absolute pleasure, really. Thank you for inviting me. I have extremely enjoyed it, so thank you ever so much.
>>Excellent. Me too. I've learned a lot. For Glynn Martin, I'm Lisa Martin. You're watching theCUBE.
>>Driving revenue today means getting better, more valuable software features into the hands of your customers. If you don't do it quickly, your competitors will. But going faster without quality creates risks that can damage your brand, destroy customer loyalty, and cost millions to fix. DevOps from Broadcom is a complete solution for balancing speed and risk, allowing you to accelerate the flow of value while minimizing the risk and severity of critical issues. With Broadcom, quality becomes integrated across the entire DevOps pipeline, from planning to production. Actionable insights, including our unique readiness score, provide a 360-degree view of software quality, giving you visibility into potential issues before they become disasters. DevOps leaders can manage these risks with tools like canary deployments, tested on a small subset of users, or immediate rollbacks to limit the impact of defects for subsequent cycles. DevOps from Broadcom makes innovation and improvement easier, with integrated planning and continuous testing tools that accelerate the flow of value. Product requirements are used to automatically generate tests to ensure complete quality coverage, and tests are easily updated
as requirements change. Developers can perform unit testing without ever leaving their preferred environment, improving efficiency and productivity. For the ultimate in shift-left testing, the platform also integrates virtual services and test data on demand, eliminating two common roadblocks to fast and complete continuous testing. When software is ready for the CI/CD pipeline, only DevOps from Broadcom uses AI to prioritize the most critical and relevant tests, dramatically improving feedback speed with no decrease in quality, so your release is ready to go. Wherever you are in your DevOps journey,
Broadcom helps maximize innovation velocity while managing risk, so you can deploy ideas into production faster and release with more confidence.
>>From around the globe, it's theCUBE, with digital coverage of the DevOps Virtual Forum, brought to you by Broadcom.
>>Hi guys, welcome back. So we have discussed the current state and the near-future state of DevOps and how it's going to evolve, from three unique perspectives. In this last segment, we're going to open up the floor and see if we can come to a shared understanding of where DevOps needs to go in order to be successful next year. Our guests today you've seen before: Jeffrey Hammond is here, the VP and principal analyst serving CIOs at Forrester; we also have Serge Lucio, the GM of Broadcom's Enterprise Software Division; and Glynn Martin, the head of QA transformation at BT. Guys, welcome back. Great to have all three of you together.
>>Good to be here.
>>All right. We're all very socially distanced, as we've talked about before, but great to have this conversation. So let's start with one of the topics that we kicked off the forum with. Jeff, we're going to start with you: spiritual co-location. That's a really interesting topic that we've uncovered, but how much of the challenge is truly cultural, and what can we solve through technology? Jeff, we'll start with you, then Serge, then Glynn. Jeff, take it away.
>>Yeah, I think fundamentally you can have all the technology in the world, and if you don't make the right investments in the cultural practices in your development organization, you still won't be effective. Almost 10 years ago, I wrote a piece where I did a bunch of research around what made high-performance software delivery teams high performance. One of the things that came out of that was that these teams have a high level of autonomy, and that's one of the things you see coming out of the agile manifesto. Take that to today, where developers are on their own in their own offices. If you've got teams where the team itself has a high level of autonomy and they know how to work, they can make decisions, they can move forward. They're not waiting for management to tell them what to do.
And so what we have seen is that organizations that embraced autonomy, got their teams in the right place, and made sure their teams had the information they needed to make the right decisions have actually been able to operate pretty well, even as they've been remote. The challenge has turned out to be things like, how do we actually push the software that we've created into production, not, are we writing the right software? And that's why I think the term spiritual co-location is so important: even though we may be physically distant, we're on the same plane, we're connected by a shared purpose. You know, Serge and I worked together a long, long time ago, almost 15 or 16 years since we were at the same place, and yet I would say there's probably still a certain level of spiritual co-location between us because of the shared purposes we've had in the past and what we've seen in the industry. And that's a really powerful tool to build on.
So what role do tools play in that? To the extent that tools make information available to build shared purpose on, to the extent that they enable communication so that we can build that spiritual co-location, and to the extent that they reinforce the culture we want to put in place, they can be incredibly valuable, especially when we don't have the luxury of physical co-location. I hope that makes sense.
>>It does. I shouldn't have introduced this last segment by saying we're all socially distanced, because we're all spiritually co-located, and Serge, clearly you're still spiritually co-located with Jeff. Talk to me about your thoughts on spiritual co-location, the cultural impact, and how technology can move it forward.
>>Yeah, I'm going to sound very similar to Jeff in that respect. I think it starts with a shared purpose and with understanding how individuals and teams contribute to a business outcome: what is our shared goal or shared vision, what is it we're trying to achieve collectively, and how do we keep aligned to that? So it really starts with that. Now, the big challenge is that over the last 20 years, especially in large organizations, there's been specialization of roles and functions, and so we've all started to measure what we do on a daily basis using metrics which oftentimes are completely disconnected from the business outcome or purpose. We keep reverting back to, okay, what is my velocity, what is my cycle time?
What I think we really should be focused on as an industry is starting to provide a lens for these different stakeholders to look at what they're doing in the context of those business outcomes. Probably one of my favorite experiences was at a large financial institution, seeing development and operations, quote unquote, staring at the same data, data related to incoming changes, test execution results, code coverage, and potential liabilities, all linked together at the right level. When you start to put these things in context and represent them in a way that these different stakeholders can look at through their different lenses, they can start to communicate and understand how they jointly contribute to that common view or objective.
>>And Glynn, we talked a lot about transformation with you last time. What are your thoughts on spiritual co-location: the cultural part, the technology impact?
>>Yeah, I agree with Jeffrey that people and culture are the most important thing. That's why it's really important, when you're transforming, to have partners who have the same vision as you, who you can work with, who have the same end goal in mind, and I've certainly found that with our continuing relationship with Broadcom. What I would also say, though, is that although tools can accelerate what you're doing and can drive consistency,
we've seen within Simplify, which is BT's flagship transformation program, where we're trying to do what the name says and simplify the number of systems stacks and the number of products we have, that at the moment we've got different value streams within that program that have organizational silos, that are trying to reinvent the wheel, that are still doing things manually.
So in order to bring that consistency, we need the right tools, tools that are enterprise grade and can be flexible enough to work within BT, which has such complex and very different environments depending on what area of BT you're in, whether it's consumer, whether it's mobile, whether it's large global or government organizations. We found that we need tools that can drive that consistency but also flex to greenfield and brownfield kinds of technologies. So it's really important, for a number of different reasons, that you have the right partner to drive the right culture, who has the same vision, but who also has the tool sets to help you accelerate. The tools can't do it on their own, but they can help accelerate what it is you're trying to do.
A really good example of that is shift left, which is probably a bit of a buzz phrase in the testing world at the moment. I could talk about things like Continuous Delivery Director, one of the Broadcom tools; it has many different features, but very simply, on its own it gives us visibility of what the teams are doing. And once we have that visibility, we can talk to the teams about whether they could be doing better component testing, or whether they could be using some virtualized services here or there. That's not even the main purpose of Continuous Delivery Director, but it's one way the tools themselves can give greater visibility, so we can have much more intuitive and insightful conversations with other teams and reduce those organizational silos.
>>Thanks, Glynn. So to sum it up: autonomy, collaboration, and tools that facilitate that. Let's talk now about metrics, from your perspectives. What are the metrics that matter? Jeff?
>>I'm going to go right back to what Glynn said about data that provides visibility, which enables us to make decisions with shared purpose. And so business value has to be one of the first things we look at. How do we assess whether we have built something that is valuable? That could be sales revenue, it could be net promoter score; if you're not selling what you've built, it could even be the level of reuse within your organization, or other teams picking up the services you've created. One of the things I've begun to see organizations do is align value streams with customer journeys and then align teams with those value streams. That's one of the ways you get to a shared purpose, because we're all trying to deliver around that customer journey and the value that comes with it,
and we're all measured on that. Then there are flow metrics, which are really important: how long does it take us to get a new feature out, from the time we conceive it to the time we can run our first experiments with it? There are quality metrics, some of the classics, maybe things like defect density or mean time to respond.
Um, one of my favorites came from a company called Ultimate Software, where they looked at the ratio of defects found in production to defects found in pre-production, and their developers were in fact measured on that ratio. It told them that, guess what, quality is your job too, not just the test department's. The fourth area that I think is really important, uh, in the current situation that we're in, is the level of engagement in your development organization. >>We used to joke that we measured this with the parking lot metric: how full was the parking lot at nine, and how full was it at five o'clock? I can't do that anymore since we're not physically co-located, but what you can do is look at how folks are delivering. You can look at your metrics in your SCM environment. You can look at, uh, the relative rates of churn. You can look at things like, well, are our developers delivering over longer periods, earlier in the morning, later in the evening? Are they delivering, uh, you know, on the weekends as well? Are those signs that we might be heading toward burnout, because folks are still running at sprint levels instead of marathon levels? Uh, so all of those in combination, business value, flow, engagement and quality, I think form the backbone of any sort of metrics program. >>The second thing that I think you need to look at is what we are going to do with the data, and the philosophy behind the data is critical. Um, unfortunately I see organizations where they weaponize the data, and that's completely the wrong way to look at it. What you need to do is ask, how is this data helping us to identify the blockers, the things that aren't allowing us to provide the right context for people to do the right thing? And then what do we do to remove those blockers, uh, to make sure that we're giving these autonomous teams the context that they need to do their job, uh, in a way that creates the most value for the customers? >>Great advice, Jeff. Glenn, over to you: what are the metrics that matter to you, that really make a big impact? And also, how do you measure quality, kind of following on to the advice that Jeff provided? >>That's some great advice, actually. He talks about value, he talks about flow. Both of those things are very much on my mind at the moment. Um, I listened to a speaker, uh, called Mik Kersten a couple of months ago. He talked very much around how important flow management is and using that to remove waste, to understand, in terms of, you know, making software changes, what it is that's causing us to take longer than we need to. So where are those areas where it takes long? So I think that's a very important thing for us. It's even more basic than that at the moment; we're on a journey of moving from kind of waterfall to agile. Um, and the problem with moving from waterfall to agile is that with waterfall the business had a kind of comfort that, you know, everything was tested together and therefore it's safer.
So there's a bit about release confidence, um, and some of the metrics around that and how, how healthy those releases are, and actually saying, you know, we spend a lot of money, um, um, an investment setting up our teams, training our teams, are we actually seeing them deliver more quickly and are we actually seeing them deliver more value quickly? So yeah, those are the two main things for me at the moment, but I think it's also about, you know, generally bringing it all together, the dev ops, you know, we've got the kind of value ops AI ops, how do we actually bring that together to so we can make quick decisions and making sure that we are, um, delivering the biggest bang for our buck, absolutely biggest bang for the buck, surge, your thoughts. >>Yeah. So I think we all agree, right? It starts with business metrics, flow metrics. Um, these are kind of the most important metrics. And ultimately, I mean, one of the things that's very common across a highly functional teams is engagements, right? When, when you see a team that's highly functioning, that's agile, that practices DevOps every day, they are highly engaged. Um, that that's, that's definitely true. Now the, you know, back to, I think, uh, Jeff's point on weaponization of metrics. One of the key challenges we see is that, um, organizations traditionally have been kind of, uh, you know, setting up benchmarks, right? So what is a good cycle time? What is a good lead time? What is a good meantime to repair? The, the problem is that this is very contextual, right? It varies. It's going to vary quite a bit, depending on the nature of application and system. >>And so one of the things that we really need to evolve, um, as an industry is to understand that it's not so much about those flow metrics is about our, these four metrics ultimately contribute to the business metric to the business outcome. So that's one thing. The second aspect, I think that's oftentimes misunderstood is that, you know, when you have a bad cycle time or, or, or what you perceive as being a buy cycle time or better quality, the problem is oftentimes like all, do you go and explore why, right. What is the root cause of this? And I think one of the key challenges is that we tend to focus a lot of time on metrics and not on the eye type patterns, which are pretty common across the industry. Um, you know, if you look at, for instance, things like lead time, for instance, it's very common that, uh, organizational boundaries are going to be a key contributor to badly time. >>And so I think that there is, you know, the only the metrics there is, I think a lot of work that we need to do in terms of classifying, descend type patterns, um, you know, back to you, Jeff, I think you're one of the cool offers of waterscrumfall as a, as, as a key pattern, the industry or anti-spatter. Um, but waterscrumfall right is a key one, right? And you will detect that through kind of a defect arrival rates. That's where that looks like an S-curve. And so I think it's beyond kind of the, the metrics is what do you do with those metrics? >>Right? I'll tell you a search. One of the things that is really interesting to me in that space is I think those of us had been in industry for a long time. We know the anti-patterns cause we've seen them in our career maybe in multiple times. And one of the things that I think you could see tooling do is perhaps provide some notification of anti-patterns based on the telemetry that comes in. 
I think it would be a really interesting place to apply, uh, machine learning and reinforcement learning techniques. Um, so hopefully that's something that we'd see in the future with DevOps tools, because, you know, as a manager that, you know, may only be a 10 year veteran or 15 year veteran, you may be seeing these anti-patterns for the first time. And it would sure be nice to know what to do, uh, when they start to pop up. >>That's right, insight is always helpful. All right, guys, I would like to get your final thoughts on this: the one thing that you believe our audience really needs to be on the lookout for and to put on their agendas for the next 12 months. Jeff, we'll go back to you. >>I would say look for the opportunities that this disruption presents. And there are a couple that I see. First of all, uh, as we shift to remote-centric working, we're unlocking new pools of talent; it's possible to implement, uh, more geographic diversity. So look to that as part of your strategy. Number two, look for new types of tools. We've seen a lot of interest in usage of low-code tools to very quickly develop applications. That's potentially part of a mainstream strategy as we go into 2021. Finally, make sure that you embrace this idea that you are supporting creative workers, that agile and DevOps are the peanut butter and chocolate to support creative workers with algorithmic capabilities. >>Peanut butter and chocolate. Glen, where do we go from there? What's the one silver bullet that you think folks should be on the lookout for now? >>I certainly agree that, um, low code is, uh, big next year; we'll see much more low code. We'd already started moving towards more of a SaaS-based world, but low code also. Um, I think as well, for me, we've still got one foot in the kind of cloud camp. Um, you know, we'll be fully trying to explore what that means going into the next year and exploiting the capabilities of cloud. But I think the last thing for me is how do you really instill quality throughout the kind of, um, the life cycle, where, when I heard the word water-scrum-fall, it kind of made me shudder, because I know that's a problem. That's where we're at with some of our things at the moment, and we need to get beyond that. We need to be releasing, um, changes more frequently into production, and actually being a bit more brave and having the confidence to do more testing in production and go straight to production itself. So expect to see much more of that next year. Um, yeah. Thank you. I haven't got any food analogies, unfortunately. >>We all need some peanut butter and chocolate. All right, Serge, take us home. What's that nugget you think everyone needs to have on their agendas? >>That's interesting, right. So a couple of days ago we had kind of the latest State of DevOps report, right? And if you read through the report, it's all about velocity, it's all about speed. We still are perceiving DevOps as being all about speed. And so to me, the key advice is, in order to create kind of a spiritual co-location, in order to foster engagement, we have to go back to what it is we're trying to do collectively. We have to go back to tying everything to the business outcome. And so for me, it's absolutely imperative for organizations to start to plot their value streams, to understand how they're delivering value, and to align everything they do, from metrics to delivery to flow, back to those business outcomes.
And only with that, I think, are we going to be able to actually start to align all these roles across the organization and drive not just speed, but business outcomes. >>All about business outcomes. I think the three of you could write a book together, so I'll give you that as food for thought. Thank you all so much for joining me today. I think this was an incredibly valuable, fruitful conversation, and we appreciate all of you taking the time to spiritually co-locate with us today, guys. Thank you. >>Thank you, Lisa. >>Thank you. >>Thank you. For Jeff Hammond, Serge Lucio and Glynn Martin, I'm Lisa Martin. Thank you for watching the Broadcom DevOps Virtual Forum.
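To make the metrics the panel keeps coming back to a little more concrete, here is a small, hedged sketch in Python of two of them: the defect escape ratio Jeff credits to Ultimate Software, and the off-hours delivery signal he suggests as a stand-in for the parking lot metric. The data structures, sample values and the nine-to-five window are illustrative assumptions; real numbers would come from your own defect tracker and SCM history.

# Minimal sketch, assuming defect counts and commit timestamps are exported
# from your own tracker and SCM; nothing here maps to a specific vendor tool.
from datetime import datetime

def defect_escape_ratio(prod_defects, preprod_defects):
    """Defects found in production per defect found in pre-production."""
    if preprod_defects == 0:
        return float("inf")
    return prod_defects / preprod_defects

def off_hours_share(commit_times, start_hour=9, end_hour=17):
    """Share of commits landing on weekends or outside working hours."""
    if not commit_times:
        return 0.0
    off = sum(
        1 for t in commit_times
        if t.weekday() >= 5 or not (start_hour <= t.hour < end_hour)
    )
    return off / len(commit_times)

# Hypothetical inputs for one team over a sprint.
print(defect_escape_ratio(prod_defects=4, preprod_defects=40))      # 0.1
commits = [
    datetime(2020, 11, 2, 10, 30),   # Monday morning
    datetime(2020, 11, 3, 22, 15),   # Tuesday night
    datetime(2020, 11, 7, 14, 0),    # Saturday afternoon
    datetime(2020, 11, 9, 11, 45),   # Monday before noon
]
print(f"{off_hours_share(commits):.0%} of commits were off-hours")  # 50%

Neither number means much on its own; as Serge argues above, they only become useful once they are read against the business outcome the value stream is supposed to deliver.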

Published Date : Nov 18 2020

Breaking Analysis: VMworld 2019 Containers in Context


 

>> From the Silicon Angle Media Office, in Boston Massachusetts, it's theCUBE. Now, here's your host Dave Vellante. >> Hi everybody, welcome to this breaking analysis where we try to provide you some insights on theCUBE. My name is Dave Vellante. I'm here with Jim Kobielus who was up today, and Jim we were just off of the VMworld 2019. Big show, lot of energy, lot of announcements. I specifically want to focus on containers and the impact that containers are having on VMware, specifically the broader ecosystem and the industry at large. So, first of all, what was you take on VMworld 2019? >> Well, my take was that VMware is growing fast, and they're investing in the future, which is fairly clearly cloud and native computing on containers with Kubernetes and all that. But really that's the future and so, what VMware is doing is they're making significant bets that containers will rule the roost in cloud computing and application infrastructures going forward. But in fact virtual machines, VMs hypervisors are hotter than ever and that was well established last week by the fact that the core predominate announcement last week was a VMware Tanzu, which is not yet a production solution, but is in a limited preview, which is the new platform for coexistence of containers and vSphere. A container run time embedded in vSphere, so that customers can run containers in a highly-iso workloads, in a highly isolated VM environment. In other words, VMware is saying, we're saying to their customers, "You don't have to migrate away from VMs "until you're good and ready. "You can continue to run whatever containers "you build on vShpere, "but we more than encourage you to continue to run VMs "until you're good and ready "to migrate, if ever." >> All right. So, I want to come back and unpack that a little bit, but does your data, does your analysis, when you're talking to customers and the industry at large, is there any evidence from what you see that containers are hurting VMware's business? >> I don't get any sense that containers are hurting VMware's business. I get the strong sense that containers, they've just of course acquired Pivotal, a very additive to the revenue mix at VMware. And VMware, most of their announcements last week were in fact all around Kubernetes, and containers, and products that are very much for those customers who are going deep down the container road. >> So that was a setup question. >> You've got lots of products for them. >> So that was a setup question. So I have some data on this. >> Go ahead >> Right answer. So, I want to show you this. So, Alex, if you wouldn't mind bringing up that slide. And we shared this with you last week when we were prepping for VMworld. This is data from Enterprise Technology Research ETR, and they have a panel of 4500 end user customers that they go out and do spending surveys with them. So, what this shows is, this is container customers spending on VMware. So, you can see it goes back to early January. Now it's a little deceiving here. You see that big spike, but what it shows it that, A, that big spike is the number of shared customers. So, you really didn't have many customers back then that were doing both containers and VMware that ETR found. But as the N gets bigger, 186, 248, 257, 361, across those 461 customers, those are the shared customers in the green. And you can see that it's kind of a flat line. It's holding very well in the high 30's percent range, which is their sort of proprietary metric. 
So, there's absolutely no evidence, Jim, that containers, thus far anyway, are hurting VMware's business. Which of course was the narrative, containers are going to kill VMware, and there's no evidence of that. But then why would they acquire Pivotal? Are they concerned about the future, what's your-- >> Well, they're concerned about cross-selling their existing customer base, who are primarily on vSphere, on the hypervisors, cross-selling them on the new world of Kubernetes-based products for cloud computing, and so forth and so on. In other words it's all about how do they grow their revenue base? VMware's been around for more than 20 years now. They rule the roost on the hypervisors. Where do they go from here, in terms of their product mix? Well, Kubernetes and beyond that, things like serverless will clearly be in the range of the things that they could add on. Their customers could add on to their existing deploys. I mean, look at Pivotal. Pivotal has a really strong Kubernetes distribution, which of course VMware co-developed with them. Pivotal also has a strong functions-as-a-service backplane, the Pivotal Function Service, for serverless environments. So, this acquisition of Pivotal very much positions VMware to capitalize on those opportunities to sell those products when that market actually develops. But I see some evidence that virtual machines are going like gangbusters in terms of customer deployments. Last week on theCUBE at VMworld, Mark Lohmeyer, who's an SVP at VMware for one of their cloud business units, said that in the last year, for example, for customers who are using VMware Cloud on AWS, VMware grew the customer base by 400% last year, and grew the number of VMs running in VMware Cloud on AWS by 900%, which would imply that on average each customer more than doubled the number of VMs they're running on that particular cloud service. That means VMs are very much relevant now, and probably will be going forward. And why is that? That's a good question, we can debate that. >> Well, so the naysayers at VMworld in the audience were tweeting that, "Oh, I thought we started Pivotal. "We launched Pivotal so that we didn't have to run VMs on, "or run containers on VMs, "so we could run them on bare metal." Are people running containers on virtual machines? >> Well, they are, yes. In fact, there's a broad range of industry initiatives, not just Tanzu at VMware, to do just that. To run containers on VMs. I mean, there is the KubeVirt open source project over at CNCF, that's been going for a couple years now. But also, Google has gVisor, Intel has the Kata Containers initiative, I believe that there are a few others. Oh yeah, AWS with Firecracker, at last year's re:Invent. All this would strongly indicate that these large cloud and tech vendors wouldn't be investing heavily in the convergence of containers and VMs and hypervisors if there weren't strong demand from customers for hybrid environments where they're going to run both stacks, as it were, in parallel. Why? Well, one of the strong advantages of VMs is workload isolation at the hardware level, which is something that typically container runtimes don't offer. For example, the workload isolation seems to be one of the strong features that VMware's touting for Tanzu going forward. >> So, VMware is--the centerpiece of VMware's strategy is obviously multicloud, with Kubernetes as a linchpin to enable running applications on different platforms. Will, in your opinion, and of course VMware is hardcore enterprise, right?
Will VMware, two things, will they be able to attract the developers, number one. And number two, will those developers build on top of VMware's platform or are they going to look to their cloud? >> That's a very important question. Last week at VMworld, I didn't get a sense that VMware has a strong developer story. I think that's a really open issue going forward for them. Why would a developer turn to VMware as their core solution provider when they don't offer a strong workbench for building these hybridized VM/container/serverless applications that seem to be springing up all over? AWS and Microsoft and Google are much stronger in that area with their respective portfolios. >> So, I guess the obvious answer there is Pivotal is their answer to the developer quandary. >> Yes. >> And so, let's talk about that. So, Pivotal was struggling. I talked about it last week in my analysis; you saw the IPO price and then it dipped down, it never made it back up. Essentially the price that VMware paid the public shareholders for Pivotal was about half of its initial IPO price, so, okay. So, the stock was struggling, the company didn't have the kind of momentum that, I think, it wanted, so VMware picks it up. Can VMware fold in Pivotal, and use its go-to-market and its largesse to really prop up Pivotal and make it a leader? >> Well, possibly, because Cloud Foundry, Pivotal Cloud Foundry, could be the linchpin of VMware's emerging developer story, if they position it that way and really invest in the product in that regard. So yeah, in other words this could very much make VMware a go-to vendor for the developers who are building the new generation of applications that present serverless functional interfaces, but will have containers under the covers, but also have VMs under the covers providing strong workload isolation in a multi-tenant environment. That would be the promise. >> Now, a couple things. You mentioned Microsoft, of course Azure in the cloud, and Google. The ETR data that I dug into when I wanted to better understand multicloud: who's got the multicloud momentum? Well, guess who has the most multicloud momentum? It's the cloud guys. Now, AWS doesn't specifically say they participate in multicloud. Certainly their marketing suggests that multicloud is for somebody else, that really they want to have uni-cloud. Whereas Google and Azure are kind of embracing multicloud, and Kubernetes specifically. Now of course AWS has a Kubernetes offering, but I suspect it's not something that they want to promote hard in the marketplace because it makes it easier for people to get off of AWS. Your thoughts on multicloud generally, but specifically Kubernetes and containers as it relates to the big cloud providers. >> Yeah, well my thoughts on multicloud generally are that multicloud is the strategy of the second tier cloud vendors, obviously. If they can't dominate the entire space, at least they can maintain, provide a strong connective tissue for the clouds that actually are deployed in their customers' environments. So, in other words, the Ciscos of the world, the VMwares of the world, IBM. In other words, these are not among the top tier of the public cloud players, hence where do they go to remain relevant? Well, they provide the connective tissue, and they provide the virtualized networking backbones, and they provide the AIOps that enables end-to-end automated monitoring and management of the entire mesh.
The whole notion of a mesh architecture is something that grew up with IBM and Google for lots of reasons, especially due to the fact that they themselves, as vendors, didn't dominate the public cloud. >> Well, so I agree with you. The only issue I would take is I think Microsoft is a leader in public cloud, but because it has a big on-prem presence, it's in its best interest to push containers and Kubernetes and so forth. But you're right about the others. Cisco doesn't have a public cloud, VMware doesn't have a public cloud, IBM has a public cloud but it's really small market share, and Google is behind, so it's in those companies' best interest really to promote multicloud, to try to use it as a bulwark against AWS, who's obviously got awesome market momentum. The other thing that's interesting in the ETR data when I poke in there: it seems like there are more people looking at Google. Now maybe that's 'cause they have such strength in data and analytics, maybe it's 'cause they're looking for a hedge on AWS, but the spending data suggests that more and more people are kicking the tires, and more than kicking the tires, on Google. Who of course is obviously behind Kubernetes and that container movement, and open source. Your thoughts? >> Yeah, well, in many ways, you have to think that Google has developed the key pieces of the new stack for application development in the multicloud. Clearly they developed Kubernetes, it's open source, and also they developed TensorFlow and open-sourced it; it's the predominant AI workbench, essentially, for the new generation of AI-driven applications, which is everything. But also, if you look at it, Google developed Node JS for web applications and so forth. So really, Google now is the go-to vendor for the new generation of open source application development, and increasingly DevOps in a multicloud environment, running over Istio meshes and so forth. So, I think, look at one of the announcements last week at VMworld: VMware and NVIDIA, their announcement of their collaboration, their joint offering to enable AI workloads, training workloads, to run on GPUs in an optimal, high performance fashion within a distributed VMware cloud, end to end. So really, I think VMware recognizes that the new workloads in the multicloud are predominantly, increasingly, AI workloads. And as the market goes towards those kinds of workloads, VMware very much recognizes they need to have a strong developer play, and they do with NVIDIA in a sense. Very much so, because NVIDIA, with the RAPIDS framework and so forth, and NVIDIA being the predominant GPU vendor, very much is a very strategic partner for VMware as they're going forward, as they hope to line up the AI developers. But Google still is the vendor to beat as regards the AI developers of the world, in that regard, so-- >> So we're entering a world we sometimes call the post-virtual machine world. John Furrier is kind of tongue in cheek on a play on Web 2.0. He calls it Cloud 2.0, which is a world of multiple clouds. As I've said many times, I'm not sure multicloud is necessarily a coherent strategy yet as opposed to sort of a multi-vendor situation, Shadow IT, >> Yes. >> Lines of business, et cetera. But Jim, thanks very much-- >> Sure. >> For coming on and breaking down the container market, and VMworld 2019. It was great to see you. >> Likewise. >> All right, thank you for watching everybody. This is Dave Vellante with Jim Kobielus.
We'll see you next time on theCUBE. (upbeat music)

Published Date : Sep 3 2019

Kit Colbert & Krish Prasad, VMware | VMworld 2019


 

>> live from San Francisco, celebrating 10 years of high tech coverage. It's the Cube covering Veum, World 2019 brought to you by the M Wear and its ecosystem partners. >> Hello, Welcome back, everyone to the Cubes Live coverage of the Emerald 2019. I'm John Career with Lycos Day, Volante Dave. 10 years covering the Q Weird Mosconi and 2010 boy Lots changed, but >> it's still the >> platform that Palmer Ritz laid out. But the stuff filling in 10 years later. >> Okay, you call that software mainframe and Robin came in so I can't call Mainframe Way >> Have leaders from PM Wears Largest business unit. The Cloud Platform Business Kid Colbert to CTO and Christmas R S v P and General Manager Guys, Thanks for coming on The key. Appreciate. >> Yeah, that's for having us. The >> world's your business units smoking hot. It's very popular, like you run around doing meetings. Cloud platform is the software model that's 10 years later actually happening at scale. Congratulations. What's the What's the big news? What's the big conversation for you guys? >> Yeah, the biggest news this week is the announcement of project specific, and, um, it's about taking the platform a Jess, um, hundreds of thousands of customers on it and bringing together communities were just now very popular with the developers and that black form together so that operators, on the one hand, can just deal with the platform they love. And the developers can deal with the kubernetes layer that they love. >> It's interesting to watch because, you know, the whole end user computing stack that was laid out 10 years ago is actually happening now, Assassin see, sass business models. We all see the and half of them is on the success of Cloud. But interesting to see kubernetes, which we've been following since the report started. Open stack days. You saw that emerging. Everyone kind of saw that. And it really became a nice layer. And the industry just create as a de facto. Yeah, you guys were actually driving that more forward. So congratulations on that. >> That's sitting it >> natively in V sphere is interesting because you guys spend a ton of time. This is a core product for you guys. So you're bringing something native into V sphere? I'm sure there's a lot of debates internally how to do that, kid. What's that? What is the relevance workers. You guys have a lot of efficiencies and be severe, but bring in kubernetes is gonna give you some new things. What, >> So the thinking is really you know, it's Christmas mentioning. How do we take this proven platform? Move it forward. Customers have moved millions of work clothes on top of the sphere, operate them in production, the Prussian great capabilities, and so they'd be able to be very successful in that. And so the question is, how do we help them move forward in the kubernetes? You know, you mentioned Crew readies is still fairly young, the ecosystem around. It's still somewhat immature, still growing right, and it's a very different environment than what folks are used to who used the sphere. So there's a big challenge that customers have around managing multiple environments. All the training that's different, all the tools that are different so we can actually take their investments. They've already made into V sphere leverage and extend those into the kubernetes world that's really powerful. We'll help our customers take all these millions of workloads and move them forward. 
It's >> interesting because we were always speculating about being where I started Jerry Chan when he was on yesterday. He's been of'em where since early days, you know, but looking at VM where when they went to their you guys went back to your core When we be cloud air kind of win its way and then you deal them is on since the stock price has been going great, So great chair older takeover value there. But you got clarity around what cloud was. And as you look at the operator target audience, you guys have the operators and the devil and ops is critical. So you guys have been operating a lot of work, Liz and I think this is fascinating. So the role of containers is super relevant because you got V EMS and containers. So again, the debate continues. >> Well, I think >> Tainer is wrong. Where Bond, It's interesting conversation because kubernetes is orchestrating all that >> while the snarky treat tweet Oh day and you guys feel free to come. It was Oh, I thought we started launch pivotal. So we didn't have to run containers on virtual machines. Yeah, we know that people run containers on bare metal. They run containers and virtual machines, but >> yeah, It's a debate that that we hear pop up on the on the snarky Twitter feeds and so forth. We'll talk to customers about it. You know, this whole VM versus container debate, I think, really misses the point because it's not really about that. What it's about is how do I actually operate? These were close in production, right? This kind of this three pillows we talk about build, run, manage. Custer's want to accelerate that They won't do that with enterprise, great capabilities with security. And so that's where it really gets challenging. And I think you know, we've built this amazing ecosystem around desire to achieve that. And so that's what we're taking forward here. And, yes, the fact that we're using fertilization of the covers, that's an implementation detail. Almost. What's more, valuables? All the stuff above that the manageability, the operational capabilities. That's a real problem. It seems to >> me, to the business impact because, okay, people going to go to the cloud, they're gonna build cloud native acts. But you've got all these incumbent companies trying not to get disrupted to trying to find new opportunities, playing offense and defense at the same time, they need tooling to be able to do that. They don't want to take their e r p ap and stick it in the cloud, right? They want to modernize it. And you know you're not gonna build that overnight in the cloud anyway, so they need help. >> That's the the key move that we made here. If you if you think about it, customers don't have kubernetes experts right today and most of them in their journey to the mortar naps. They're saying, Hey, we need to set up two stacks. At least we are if we immerse stack that we love. And now communities are developers laws. So we have to stand up and they don't have any in house experts to do that right? And with this one move, we have actually collapsed it back to one stack. >> Yeah, I think it's a brilliant move. Actually, it's brilliant because the Dev ops ethos has proven everyone wants to be there, all right. And the question is, who's leading? Who is lagging? So ops has traditionally lagged. If you look at it from the developer standpoint, you guys have not been lagging on the we certainly have tons of'em virtualization been standardized. Its unifying. Yeah, the two worlds together, and it really as we've been calling it cloud two point. 
Oh, because if you look at what hybrid really is, it's cloud two point. Oh, yeah. Cloud one data was Dev Ops Storage and compute Amazon. You're born in the cloud. We we have no I t department 50 people. Why would we ever and developers are the operators? Yeah, so we shall. Enterprise scale. It's not that easy. So I love to get your thoughts on how you guys would frame the cloud two point. Oh, Visa vi. If cloud one does storage and compute and Amazon like scale, what is cloud to point out to you? >> Yeah, well, I think so. Let's talk about the cloud journey. I think that's what you're getting at here. So here's how it discuss it with customers. You are where you are today. You have your existing apse. A lot of them are monolithic. You're slow to update. Um, you know, so forthright. And then you have some of the cloud NATO nirvana over here. We're like everything's re architected. It's Micro Service's got all these containers off, so >> it doesn't run my business >> well, yeah, well, that's what I want to get to. I think the challenge, the challenge is it's a huge amount of effort to get there, right, All the training we're talking about, all the tooling and the all the changes there, and people tend to look at. This is a very binary thing, right that you're there. Here where you are, you're in the club, New Nirvana. People don't often talk about what's in the middle and the fact that it's a spectrum. And I think what we used to get a V M, where is like, let's meet customers where they are, You know, I think one of the big realizations we had, it's not. Everyone needs to get every single application on this far side over here. Some halfs, your pieces, whatever you know, it's fine to get them a little bit of the way there, and so one of the things that we saw with the M A coordinated us, for example, was that people there was a pent up demand to move to the public cloud. But it was challenging because to go from a visa environment on Prem to an eight of US native environment to change a bunch of things that tooling changes like the environment a little bit different, but with a mark, our native us, there's no modifications at all. You just little evey motion it. And some people have you motioning things like insanely fast now, without modifying the half you can't get you know something you have to suddenly better scalable. But you get other cloud benefits. You get things like, Oh, my infrastructure is dynamic. I can add host dynamically only pay for what I need. Aiken consume this as a service. And so we help moving. We have to move there. There were clothes a little bit in the middle of the spectrum there, and I think what we're doing with Project Pacific and could realise is the same thing. They start taking advantage of these great kubernetes capabilities for their existing APs without modification. So again, kind of moving them further in that middle spectrum and then, you know, for the absolute really make a difference to their business. They can put in the effort to get all the way over there, >> and we saw that some of the evidence of some challenges of that shiny new trend within the dupe ecosystem. Big data objects to army. Who doesn't love that concept, right? Yeah, map produced. But what happened was is that the infrastructure costs on the personnel human capital cost was so massive that and then cloud cloud came along and >> just go out. 
There is also the other point about just just just a bespoke tooling that >> technology, right, Then the disruptions to create, you know to that, then the investments that it takes. Two >> you had a skill and you had a skills gap in terms of people have been. So that brings us back to So how do you address that problem? Because most of the audience out here, not developers. Yeah. Yeah. Total has the developers connection. So >> this is one of the really cool things about Pacific that what we've done with Pacific when you look at it from an I T. Operations, one of you that person sees v sphere the tool they already know and use understand it. Well, when a developer looks at it, they see kubernetes. And so this is two different viewpoints. Got like, you know, the blind men around the elephant. But, um but the thing is is actually a singular thing in the back end, right? You know, they have these two different views. And so the cool thing about us, we can actually bring items and developers together that they can use their own language tools process. But there's a common thing that they're talking about. They have common visibility into that, and that's super, super powerful. And when you look at, it also is happening on the kubernetes side is fully visible in the V's here side. So all these tools that already work against the sphere suddenly light up and support kubernetes automatically. So again, without any work, we suddenly get so much more benefit. >> And the category Buster's, they're going on to that. You're changing your taking software approach that your guys No, you're taking it to the software developer world. It's kind of changing the game. One of things. I want to get your thoughts on Cloud to point out because, you know, if computing storage was cloud one dato, we're seeing networking and security and data becoming critical ingredients that are problems statement areas people are working on. Certainly networking you guys are in that. So as cloud chip one is gonna take into the fact that messy middle between, you know, I'm on here and then I want the Nirvana, as always, the origination story and the outcomes and stories. Always great. But the missing messy middle. As you were pointing out, it's hard. How do you guys? >> And if you look at the moves that we made in the Do You know about the big fusion acquisition that remained right, which happened, like a month ago, and it was about preparing the platform, our foray I animal or clothes? So really, what we're trying to do is really make sure that the history of platform is ready for the modern applications, right? I am along one side communities applications, you know, service oriented applications. All of them can land on the same platform and more and more. Whether it's the I am l or other application, they're being written on top of communities that structures code. Yeah, nothing like Jenna's well, so enable incriminating will help us land all the modern applications on top of the same platform that our customers are used to. So it's a huge kind of a inflection point in the industry from my >> wealthy earlier point, every CEO I talked to said, I want to get from point A to point B and I wanna spend a billion dollars to get there. I don't wanna have to hire some systems integrator and outsource to get any there. Show me how I get without, you know, destroying my >> business. How did we meet the customers where they're at, right? Like what? 
The problem with this, the kind of either or model you're here you're there is that there's a huge opportunity costs. And again, Well, if you will just need a little bit of goodness, they don't need the full crazy nirvana Goodness right? And so we enable them to get that very easily in automated way, right? If you'd just been any time re factoring or thinking through this app that takes months or even a year or more, and so you know that this the speed that we can unleash her The velocity for these customers is >> the benefit of that. Nirvana is always taken out of context because people look at the outcome over over generations and saying, Well, I want to be there but it all starts with a very variable basis in shadow. I used to call it, but don't go in the cloud and do something really small, simple. And then why? This is much more official. I like this stack or this approach. That's ultimately how it gets there. So I got to get I got to get that point for infrastructures code because this is what you're enabling. Envies, fearful when I see I want to get your reaction. This because the world used to be. And I ask Elsa on this years ago, and he kind of validated it. But because he's old school, Intel infrastructure dictated to the applications what it could do based on what it could do. Now it's flipped upside down with cloud platform platform and implies enabling something enabling platform. Whatever you call the APs are dictating for the infrastructure. I need this. That's infrastructure is code. That's kind of what you're saying is that >> I mean, look kubernetes broader pattern time. It said, Hey, I can declare what I want, right, and then the system will take care of it and made in that state. I decided state execution is what it brought to the table, and the container based abs, um, have already been working that way. What this announcement does with Project Pacific is that the BM applications that our customers built in the past they are going to be able to take advantage of the same pattern, just the infrastructure escort declarative and decide state execution That that's going to happen even for the old workload, said our customer service >> and they still do viens. I mean, they're scaled 1000 the way >> they operate the same pattern. I >> mean, Paul Morris doesn't get enough credit for the comedy made in 2010. He called it the hardened top. Do you really care what's underneath if it's working effectively? >> Well, I mean, I think you know the reality today is that even though containers that get all get a lot of coverage and attention, most were close to being provisioned. New workloads even are being provisioning v EMS, right? If you look at AWS, the public clouds, I mean, is the E c to our ah go compute engine. Those service's those VM so once they're getting heavily used. And so the way we look at it, if we want to support everything. And it's just going to give customers a bunch of tools in their tool box. And let's put on used the right tool for the right job. Right? That's what the mentality >> that's really clouds. You know, Chris, I want to get your you know, I want to nail you down on the definition of two point. Uh, what is your version? Come on. We keep dodging around, get it out. Come on. >> I think we touched on all aspects of it. Which one is the interesting, less court allowing the consumer of the cloud to be able to dictate the environment in which the applications will operate and the consumer is defining it or the developers to defining it. 
In this case, that, to me, is the biggest shift that we have gone through in the Colorado. Yeah, and we're just making our platform come to life to support >> that. We're taking the cube serving. We'll put all together, and we want the community to define it, not us. What does it explain? The honest what it means to be a project and has a project Get into it. An offering? >> I mean, so Project Pacific is vey sphere, right? I mean, this is a massive, rethinking re architecture of Easter. Like pretty much every major subsystem component within Visa has been updated with this effort. Um, what we're doing here is what we've technically announced is actually what we call a technical preview. So saying, Hey, this is technology we're working on. We think it's really interesting We want to share with the public, get the public's feedback, you know, figure out a way on the right direction or not. We're not making any commitment, releasing it or any time frames yet. Um, but so part of that needed a name, right? And so because it is easier, but it's a specific thing. We're doing the feast here, so that's where the project comes from. I think it also gives that, you know, this thing has been a huge effort internally, right? There's a lot of work that's gone into it. So you know, it has some heft and deserves a name Min itself. >> It's Dev Ops to pointed. Your reds bring in. You making your infrastructure truly enable program out from amble for perhaps a tsunami. >> The one thing I would say is we wouldn't announce it as a project if it was not coming soon. I mean, we still are in the process. Getting feedback will turn it on or not. But it it's not something that is way out. Then it's It is going to come. >> It's a clear direction. It's a statement of putting investment into his code and going on to course correct. Get some feedback at exactly. But it's pretty obvious you can go a lot of pain. Oh, yeah, isn't easy button for combat. He's >> easy on the >> future. I think it's a great move. Congratulations. We're big fans of kubernetes. So the guys last night having a little meeting Marriott thinking up the next battle plans for game plan for you guys. So, yeah, I >> thought this is just the tip of the iceberg. We had a lot of really, really cool stuff we're doing. >> We're gonna be following the cloud platform. Your progress? Certainly. Recovering. Cloud two point. Oh, looking at these new categories that are emerging again. The end state is Dev Ops Program ability. Apple cases, the Cube coverage, 10th year covering VM world. We're in the lobby of Mosconi in San Francisco. I'm John Favorite Day Volonte. Thanks for watching

Published Date : Aug 28 2019


Bridget Kromhout, Microsoft | KubeCon + CloudNativeCon EU 2019


 

(upbeat techno music) >> Live from Barcelona, Spain, it's theCUBE. Covering KubeCon CloudNativeCon Europe 2019. Brought to you by Red Hat, The Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back, this is The Cube's coverage of KubeCon CloudNativeCon 2019. I'm Stu Miniman with Corey Quinn as my cohost, even though he says kucon. And joining us on this segment, we're not going to debate how we pronounce certain things, but I will try to make sure that I get Bridget Kromhout correct. She is a Principal Cloud Advocate at Microsoft. Thank you for coming back to The Cube. >> Thank you for having me again. This is fun! >> First of all I do have to say, the bedazzled shirt is quite impressive. We always love the sartorial, ya know, view we get at a show like this because there are some really interesting shirts and there is one guy in a three-piece suit. But ya know-- >> There is, it's the high style, got to have that. >> Oh, absolutely. >> Bringing some class to the joint. >> Wearing a suit is my primary skill. (laughing) >> I will tell you that, yes, they sell this shirt on the Microsoft company store. And yes, it's only available in unisex fitted. Which is to say, much like Alice Goldfuss likes to put it, ladies is gender neutral. So, all of the gentlemen who say, but I have too much dad bod to wear that shirt! I say, well ya know, get your bedazzlers out. You too can make your own shirt. >> I say it's not dad bod, it's a father figure, but I digress. (laughing) >> Exactly! >> Alright, so Bridget, you're doing some speaking at the conference. You've been at this show a few times. Tell us, give us a bit of an overview of what you're doing here and your role at Microsoft these days. >> Absolutely. So, my talk is tomorrow, and I'm going to go with it's a vote of confidence that they put your talk on the last day at 2:00 P.M., instead of the, oh gosh, are they trying to bury it? But no, I have scheduled enough conferences myself that I know that you have to put some stuff on the last day that people want to go to, or they're just not going to come. And my talk is about, and I'm co-presenting with my colleague, Jessica Deen, and we're talking about Helm 3. Which is to say, I think a lot of times with these open-source shows people say, oh, why do you have to have a lot of information about the third major release of your project? Why? It's just an iterative release. It is, and yet there are enough significant differences that it's kind of valuable to talk about, at least the end user experience. >> Yeah, so it actually got an applause in the keynote, ya know. (Bridget laughing) There are certain shows where people are hootin' and hollerin' for every different compute instance that is released, and you look at it a little bit funny. But at the keynote there was a singular moment, which was the removal of Tiller, which Corey and I have been trying to get feedback from the community on as to what this all means. >> It seems, from my perspective, it seemed like a very strange thing. It's, we added this, yay! We added this other thing, yay! We're taking this thing and ripping it out and throwing it right into the garbage, and the crowd goes nuts. And my two thoughts are first, that probably doesn't feel great if that was the thing you spent a lot of time working on, but secondly, I'm not as steeped in the ecosystem as perhaps I should be and I don't really know what it does.
So, what does it do, and why is everyone super happy to consign it to the dustbin of history? >> Right, exactly. So, first of all, I think it's 100% impossible to be an expert on every single vertical in this ecosystem. I mean, look around, KubeCon has 7,000 plus people, about a zillion vendor booths. They're all doing something that sounds slightly overlapping and it's very confusing. So, on the Helm side, if people want to look, we can say there's a link in the show notes, but people can go read Helm.sh/blog. We have a seven part, I think, blog series about exactly what the history and the current release is about. But the TLDR, the too-long-didn't-follow-the-link, is that Helm 1 was pretty limited in scope. Helm 2 was certainly more ambitious, and it was born out of a collaboration between Google, actually, and a few other project contributors, and Microsoft. And the Tiller came in with the Google folks, and it really served a need at that specific time. It was a server-side component. And this was an era when role-based access control in Kubernetes was well nigh nonexistent. And so there were a lot of security components that you kind of had to bolt on after the fact. And once we got to, I think it was Kubernetes 1.7 or 1.8 maybe, the security model had matured enough that instead of it being great to have this extra component, it became burdensome to try to work around the extra component. And so I think that's actually a really good example of, it's like you were saying, people get excited about adding things. People sometimes don't get excited about removing things, but I think people are excited about the work that went into removing this particular component, because it ends up reducing the complexity in terms of the configuration for anyone who is using this system. >> It felt very spiritually aligned in some ways, with the announcement of OpenTelemetry, where you're taking two projects and combining them into one. >> Absolutely. >> Where it's, oh, thank goodness, one less thing that-- >> Yes! >> I have to think about or deal with. Instead of A or B I just mix them together and hopefully it's a chocolate and peanut butter moment. >> Delicious. >> One of the topics that's been pretty hot in this ecosystem for the last, I'd say, two years now has been service mesh, and talk about some complexity. And I talk to a guy and it's like, which one of these are you using? Oh, I'm using all three of them, and this is how I use them in my environment. So, there was an announcement spearheaded by Microsoft, the Service Mesh Interface. Give us the high level of what this is. >> So, first of all, the SMI acronym is hilarious to me because, I got to tell you, as a nerdy teenager I went to math camp in the summertime, as one did, and it was named SMI. It was like, Summer Mathematics Institute! And I'm like, awesome! Now we have a work project that's named that, happy memories of lots of nerdy math. That was also where I played with my first Unix system. But what's great about that particular project, and you're right that this is very much aligned with it, is you're an enterprise. You would very much like to do enterprise-y things, like being a bank or being an airline or being an insurance company, and you super don't want to look at the very confusing CNCF project map and go, I think we need something in that quadrant, and then set your ships for that direction and hopefully you'll get to what you need.
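For readers who want to see what the Tiller removal means day to day, here is a minimal sketch, not drawn from the interview itself: with Helm 3 the client talks to the Kubernetes API using your own kubeconfig and RBAC identity, and release state lives in the target namespace, so there is no `helm init` and no in-cluster Tiller to secure. The repository and chart names below are placeholders.

```python
# A sketch of the Helm 3 client-only flow; repo and chart names are placeholders.
import subprocess

def sh(*args: str) -> None:
    """Run a CLI command, echoing it first so the flow is visible."""
    print("$", " ".join(args))
    subprocess.run(args, check=True)

# Helm 3: no `helm init`, no Tiller deployment -- the client acts with your
# kubeconfig credentials and stores release state in the release's namespace.
sh("helm", "repo", "add", "examples", "https://charts.example.com")  # placeholder repo
sh("helm", "repo", "update")
sh("helm", "install", "demo", "examples/demo-app")   # Helm 3: release name is positional
sh("helm", "list")                                   # no server-side component involved
sh("helm", "uninstall", "demo")

# For contrast, Helm 2 needed a server-side component before any install:
#   helm init                          # deployed Tiller into the cluster
#   helm install --name demo examples/demo-app
```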
And especially when you said that, you mentioned that it basically standardizes it, such that whichever projects you want to use, whichever of the N, and we used to joke about the JavaScript framework of the week, but I'm pretty sure the service mesh project of the week has outstripped it in terms of, like, the speed of new projects being released all the time. And like, a lot of end user companies would very much like to start doing something and have it work, and if the adorable start-up that had all the stars on GitHub and the two contributors ends up, and I'm not even naming a specific one, I'm just saying there are many projects out there that are great technically and maybe they don't actually plan on supporting your LTS. And that's fine, but if we end up with this interface such that whatever service mesh, mesh, that's a hard word. Whatever service mesh technology you choose to use, you can be confident that you can move forward and not have a horrible disaster later. >> Right, and I think that's something that a lot of developers, when left to our own devices, and in my particular device, the devices are pretty crappy. Where it becomes a, I want to get this thing built, and up and running and working, and then when it finally works I do a happy dance. And no one wants to see that, I promise. It becomes a very different story when, okay, how do you maintain this? How do you responsibly keep this running? And it's, well, I just got it working, what do you mean maintain it? I'm done, my job is done, I'm going home now. It turns out that when you have a business that isn't being the most clever person in the room, you sort of need to have a longer term plan around that. >> Yeah, absolutely. >> And it's nice to see that level of maturation being absorbed into the ecosystem. >> I think the ecosystem may finally be ready for it. And this is, I feel like, it's easy for us to look at examples of the past, people kind of shake their heads at OpenStack as a cautionary tale of sprawl and whatnot. But this is a thriving, which means growing, which means changing, which means very busy ecosystem. But like you're pointing out, if your enterprises are going to adopt some of this technology, they look at it, and everyone here was, ya know, eating cupcakes or whatever for the Kubernetes fifth birthday, but to an enterprise, just 'cause that launched in 2014, June 2014, that sounds kind of new. >> Oh absolutely. >> Like, we're still, we're still running that mainframe that is still producing business value, and actually that's fine. I mean, I think this maybe is one of the great things about a company like Microsoft, is we are our customers. Like, we also respect the fact that if something works, you don't just yolo a new thing out into production to replace it, for what reason? What is the business value of replacing it? And I think that's why this kind of Unix philosophy of the very modular pieces of this ecosystem, and we were talking about Helm a little earlier, but there's also Draft, Brigade, etc. Like Porter, the CNAB spec implementation stuff, and this Cloud Native Application Bundles, that's a whole mouthful. >> Yes, well, no disrespect to your sparkly shirt, but chasing the shiny thing, and this is new and exciting, is not necessarily a great thing. >> Right? >> I heard some of the shiny squad that were on the show floor earlier, complaining a little bit about the keynotes, that there haven't been a whole lot of new service and feature announcements.
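As a hedged illustration of the "one interface, many meshes" idea described above, and not an official SMI example: SMI defines Kubernetes custom resources such as TrafficSplit, so the same manifest can drive whichever conforming mesh sits underneath. The apiVersion, weight convention, and service names here are assumptions to check against the SMI release your mesh implements.

```python
# Sketch: declare a weighted TrafficSplit through the SMI CRD rather than a
# mesh-specific API. The apiVersion and service names are assumptions.
from kubernetes import client, config

traffic_split = {
    "apiVersion": "split.smi-spec.io/v1alpha2",   # assumed SMI version
    "kind": "TrafficSplit",
    "metadata": {"name": "checkout-rollout", "namespace": "demo"},
    "spec": {
        "service": "checkout",                    # root service that clients call
        "backends": [
            {"service": "checkout-v1", "weight": 90},
            {"service": "checkout-v2", "weight": 10},
        ],
    },
}

config.load_kube_config()                         # or load_incluster_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="split.smi-spec.io",
    version="v1alpha2",
    namespace="demo",
    plural="trafficsplits",
    body=traffic_split,
)
```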
(Bridget laughing) And my opinion on that is feature, not bug. It turns out most of us have jobs that aren't keeping up with every new commit to an open-source project. >> I think what you were talking about before, this idea of, I'm the developer, I yolo'd this code out into production. It is definitely production grade as long as everything stays on the happy path and nothing unexpected happens. And I probably have error handling, and, yay! We had the launch party, we're drinkin' and eatin' and we're happy, and we don't really care that somebody is getting paged. And, it's probably burning down. And a lot of human misery is being poured into keeping it working. I like to think that, considering that we're paying attention to our enterprise customers and their needs, they're pretty interested in things that don't just work on day one, but they work on day two and hopefully day 200 and maybe day 2000. And like, that doesn't mean that you ship something once and you're like, okay, we don't have to change it for three years. It's like, no, you ship something, then you keep iterating on it, you keep bug fixing, and sure, you want features, but stability is a feature. And customer value is a feature. >> Well, Bridget, I'm glad you brought that up. Last thing I want to ask you, 'cause Microsoft's a great example, as you say, as a customer, if you're an Azure customer, I don't ask you what version of Azure you're running or whether you've done the latest security patch that's in there, because Microsoft takes care of you. Now, your customers that are pulled between their two worlds is, oh, wait, I might have gotten rid of patch Tuesdays, but I still have to worry and maintain that environment. How are they dealing with kind of that new world, and still have certain things that are going to stay the old way that they have been since the 90's or longer? >> I mean, obviously it's a very broad question and I can really only speak to the Kubernetes space, but I will say that the customers really appreciate, and this goes for all the cloud providers, when there is something like the dramatic CVE that we had in December, for example. It's like, oh, every Kubernetes cluster everywhere is horribly insecure! That's awesome! I guess your API gateway is also an API welcome mat for everyone who wants to do terrible things to your clusters. All of the vendors, Microsoft included, had their managed services patched very quickly, as did the other managed providers of the world. If you rolled your own, you are responsible for patching, maintaining, securing your own. And this is, I feel like, that's that tension. That's that continuum we always see our customers on. Like, they probably have a data center full of, ya know, vSphere, fear, and sadness, and they would very much like to have managed happiness. And that doesn't mean that they can easily pick up everything in the data center that they have a lease on and move it instantly. But we can work with them to make sure that, hey, say you want to run some Kubernetes stuff in your data center and you also want to have AKS. Hey, there's this open-source project that we instantiated, that we worked on with other organizations, called Virtual Kubelet. There was actually a talk happening about it I think in the last hour, so people can watch the video of that. But we have now offered, we now have Virtual Node, our product version of it, in GA. And I think this is kind of that continuum.
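To make the Virtual Kubelet / virtual node continuum a bit more concrete, here is a rough sketch of how a pod opts in to running on the virtual node rather than a regular agent node. The nodeSelector and toleration values are the ones commonly documented for AKS virtual nodes and may differ for other providers; treat them as assumptions.

```python
# Sketch: schedule a pod onto the Virtual Kubelet-backed node. Selector and
# toleration values are assumed from AKS virtual node documentation.
from kubernetes import client, config

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "burst-worker", "namespace": "demo"},
    "spec": {
        "containers": [{"name": "worker", "image": "nginx:1.17"}],  # placeholder image
        "nodeSelector": {
            "kubernetes.io/role": "agent",
            "beta.kubernetes.io/os": "linux",
            "type": "virtual-kubelet",          # steer the pod to the virtual node
        },
        "tolerations": [
            {"key": "virtual-kubelet.io/provider", "operator": "Exists"},
            {"key": "azure.com/aci", "effect": "NoSchedule"},
        ],
    },
}

config.load_kube_config()
client.CoreV1Api().create_namespaced_pod(namespace="demo", body=pod)
```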
It's like, yes, of course your early adopters want the open-source to play with. Your enterprises want it to be open-source so they can make sure that their security team is happy having reviewed it. But, like you're saying, they would very much like to consume a service so they can get to business value. Like, they don't necessarily want to take Kelsey's wonderful Kubernetes The Hard Way tutorial and put that in production. It's like, hmm, probably not, not because they can't, these are smart people, they absolutely could do that. But then they spent their innovation tokens, as the McKinley blog post puts it, the, it's like, choose boring technology. It's not wrong. It's not that boring is the goal, it's that you want the exciting to be in the area that is producing value for your organization. Like, that's where you want most of your effort to go. And so if you can use well vetted open-source that is cross industry standard, stuff like SMI, that is going to help you use everything that you chose, wisely or not so wisely, and integrate it, and hopefully not spend a lot of time redeveloping. If you redevelop the same applications you already had, it's like, I don't think anybody is getting leveled up to VP at the end of the quarter if you waste time. So, I think that is, like, one of the things that Microsoft is so excited about with this kind of open-source stuff, is that our customers can get to value faster, and everyone that we collaborate with in the other clouds and with all of these vendor partners you see on the show floor can keep the ecosystem moving forward. 'Cause I don't know about you, but I feel like for a while we were all building different things. I mean, like, instead of, for example, managed services for something like Kubernetes, a few jobs ago I was at a startup where we built our own custom container platform, as one did in 2014. And we assembled it out of all the LEGOs, and we built it out of, I think, Docker and Packer and Chef and AWS at the time, and a bunch of janky bash, because, like, if someone tells you there's no janky bash underneath your home grown platform, they are lying. >> It's always a lie, always a lie. >> They're lying. There's definitely bash in there, and they may or may not be checking exit codes. But like, we all were doing that for a while, and we were all building container orchestration systems because we didn't have a great industry standard. Awesome! We're here at KubeCon. Obviously Kubernetes is a great industry standard, but everybody that wants to chase the shiny is like, but, service meshes! I think I reviewed talks for KubeCon in Copenhagen, and it was like 50 or 60 almost identical service mesh talk proposals. And it's like, so that was last year, and now everyone is like, serverless! And it's like, you know, you still have servers. Like, you don't administer them, which is great, but you still have them. I think that that hype train is going to keep happening, and what we need to do is make sure that we keep it usable for what the customers are trying to accomplish. Does that make sense? >> Bridget, it does, and unfortunately, we're going to have to leave it there. Thank you so much for sharing everything with our audience here. For Corey, I'm Stu, we'll be back with more coverage. Thanks for watching The Cube. (upbeat techno music)

Published Date : May 22 2019


David C King, FogHorn Systems | CUBEConversation, November 2018


 

(uplifting orchestral music) >> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at the Palo Alto studios, having theCUBE Conversation, a little break in the action of the conference season before things heat up, before we kind of come to the close of 2018. It's been quite a year. But it's nice to be back in the studio. Things are a little bit less crazy, and we're excited to talk about one of the really hot topics right now, which is edge computing, fog computing, cloud computing. What do all these things mean, how do they all intersect, and we've got with us today David King. He's the CEO of FogHorn Systems. David, first off, welcome. >> Thank you, Jeff. >> So, FogHorn Systems, I guess by the fog, you guys are all about the fog, and for those that don't know, fog is kind of this intersection between cloud, and on prem, and... So first off, give us a little bit of the background of the company and then let's jump into what this fog thing is all about. >> Sure, actually, it all dovetails together. So yeah, you're right, FogHorn, the name itself, came from Cisco's invented term, called fog computing, from almost a decade ago, and it connoted this idea of computing at the edge, but didn't really have a lot of definition early on. And so, FogHorn was started actually by a Palo Alto Incubator, just nearby here, that had the idea that hey, we got to put some real meaning and some real meat on the bones here, with fog computing. And what we think FogHorn has become over the last three and a half years, since we took it out of the incubator, since I joined, was to put some real purpose, meaning, and value in that term. And so, it's more than just edge computing. Edge computing is a related term. In the industrial world, people would say, hey, I've had edge computing for three, 40, 50 years with my production line control and also my distributed control systems. I've got hard wired compute. I run, they call them, industrial PCs in the factory. That's edge compute. The IT roles come along and said, no, no, no, fog compute is a more advanced form of it. Well, the real purpose of fog computing and edge computing, in our view, in the modern world, is to apply what has traditionally been thought of as cloud computing functions, big, big data, but running in an industrial environment, or running on a machine. And so, we call it as really big data operating in the world's smallest footprint, okay, and the real point of this for industrial customers, which is our primary focus, industrial IoT, is to deliver as much analytic machine learning, deep learning AI capability on live-streaming sensor data, okay, and what that means is rather than persisting a lot of data either on prem, and then sending it to the cloud, or trying to stream all this to the cloud to make sense of terabytes or petabytes a day, per machine sometimes, right, think about a jet engine, a petabyte every flight. You want to do the compute as close to the source as possible, and if possible, on the live streaming data, not after you've persisted it on a big storage system. So that's the idea. >> So you touch on all kinds of stuff there. So we'll break it down. >> Unpack it, yeah. >> Unpack it. So first off, just kind of the OT/IT thing, and I think that's really important, and we talked before turning the cameras on about Dr. Tom from HP, he loves to make a big symbolic handshake of the operations technology, >> One of our partners. 
>> Right, and IT, and the marriage of these two things, where before, as you said, the OT guys, the guys that have been running factories, you know, they've been doing this for a long time, and now suddenly, the IT folks are butting in and want to get access to that data to provide more control. So, you know, as you see the marriage of those two things coming together, what are the biggest points of friction, and really, what's the biggest opportunity? >> Great set of questions. So, quite right, the OT folks are inherently suspicious of IT, right? I mean, if you don't know the history, 40 plus years ago, there was a fork in the road, where in factory operations, were they going to embrace things like ethernet, the internet, connected systems? In fact, they purposely air gapped an island of those systems 'cause they was all about machine control, real-time, for safety, productivity, and uptime of the machine. They don't want any, you can't use kind of standard ethernet, it has to be industrial ethernet, right? It has to have time bound and deterministic. It can't be a retry kind of a system, right? So different MAC layer for a reason, for example. What did the physical wiring look like? It's also different cabling, because you can't have cuts, jumps in the cable, right? So it's a different environment entirely that OT grew up in, and so, FogHorn is trying to really bring the value of what people are delivering for AI, essentially, into that environment in a way that's non-threatening to, it's supplemental to, and adds value in the OT world. So Dr. Tom is right, this idea of bringing IT and OT together is inherently challenging, because these were kind of fork in the road, island-ed in the networks, if you will, different systems, different nomenclature, different protocols, and so, there's a real education curve that IT companies are going through, and the idea of taking all this OT data that's already been produced in tremendous volumes already before you add new kinds of sensing, and sending it across a LAN which it's never talked to before, then across a WAN to go to a cloud, to get some insight doesn't make any sense, right? So you want to leverage the cloud, you want to leverage data centers, you want to leverage the LAN, you want to leverage 5G, you want to leverage all the new IT technologies, but you have to do it in a way that makes sense for it and adds value in the OT context. >> I'm just curious, you talked about the air gapping, the two systems, which means they are not connected, right? >> No, they're connected with a duct, they're connected to themselves, in the industrial-- >> Right, right, but before, the OT system was air gapped from the IT system, so thinking about security and those types of threats, now, if those things are connected, that security measure has gone away, so what is the excitement, adoption scare when now, suddenly, these things that were separate, especially in the age of breaches that we know happen all the time as you bring those things together? >> Well, in fact, there have been cyber breaches in the OT context. Think about Stuxnet, think about things that have happened, think about the utilities back keys that were found to have malwares implanted in them. And so, this idea of industrial IoT is very exciting, the ability to get real-time kind of game changing insights about your production. A huge amount of economic activity in the world could be dramatically improved. 
You can talk about trillions of dollars of value which the McKenzie, and BCG, and Bain talk about, right, by bringing kind of AI, ML into the plant environment. But the inherent problem is that by connecting the systems, you introduce security problems. You're talking about a huge amount of cost to move this data around, persist it then add value, and it's not real-time, right? So, it's not that cloud is not relevant, it's not that it's not used, it's that you want to do the compute where it makes sense, and for industrial, the more industrialized the environment, the more high frequency, high volume data, the closer to the system that you can do the compute, the better, and again, it's multi-layer of compute. You probably have something on the machine, something in the plant, and something in the cloud, right? But rather than send raw OT data to the cloud, you're going to send processed intelligent metadata insights that have already been derived at the edge, update what they call the fleet-wide digital twin, right? The digital twin for that whole fleet of assets should sit in the cloud, but the digital twin of the specific asset should probably be on the asset. >> So let's break that down a little bit. There's so much good stuff here. So, we talked about OT/IT and that marriage. Next, I just want to touch on cloud, 'cause a lot of people know cloud, it's very hot right now, and the ultimate promise of cloud, right, is you have infinite capacity >> Right, infinite compute. >> Available on demand, and you have infinite compute, and hopefully you have some big fat pipes to get your stuff in and out. But the OT challenge is, and as you said, the device challenge is very, very different. They've got proprietary operating systems, they've been running for a very, very long time. As you said, they put off boatloads, and boatloads, and boatloads of data that was never really designed to feed necessarily a machine learning algorithm, or an artificial intelligence algorithm when these things were designed. It wasn't really part of the equation. And we talk all the time about you know, do you move the compute to the data, you move the data to the compute, and really, what you're talking about in this fog computing world is kind of a hybrid, if you will, of trying to figure out which data you want to process locally, and then which data you have time, relevance, and other factors that just go ahead and pump it upstream. >> Right, that's a great way to describe it. Actually, we're trying to move as much of the compute as possible to the data. That's really the point of, that's why we say fog computing is a nebulous term about edge compute. It doesn't have any value until you actually decide what you're trying to do with it, and what we're trying to do is to take as much of the harder compute challenges, like analytics, machine learning, deep learning, AI, and bring it down to the source, as close to the source as you can, because you can essentially streamline or make more efficient every layer of the stack. Your models will get much better, right? You might have built them in the cloud initially, think about a deep learning model, but it may only be 60, 70% accurate. How do you do the improvement of the model to get it closer to perfect? I can't go send all the data up to keep trying to improve it. Well, typically, what happens is I down sample the data, I average it and I send it up, and I don't see any changes in the average data. Guess what? 
We should do is inference all the time and all the data, run it in our stack, and then send the metadata up, and then have the cloud look across all the assets of a similar type, and say, oh, the global fleet-wide model needs to be updated, and then to push it down. So, with Google just about a month ago, in Barcelona, at the IoT show, what we demonstrated was the world's first instance of AI for industrial, which is closed loop machine learning. We were taking a model, a TensorFlow model, trained in the cloud in the data center, brought into our stack and referring 100% inference-ing in all the live data, pushing the insights back up into Google Cloud, and then automatically updating the model without a human or data scientist having to look at it. Because essentially, it's ML on ML. And that to us, ML on ML is the foundation of AI for industrial. >> I just love that something comes up all the time, right? We used to make decisions based on the sampling of historical data after the fact. >> That's right, that's how we've all been doing it. >> Now, right, right now, the promise of streaming is you can make it based on all the data, >> All the time. >> All the time in real time. >> Permanently. >> This is a very different thing. So, but as you talked about, you know, running some complex models, and running ML, and retraining these things. You know, when you think of edge, you think of some little hockey puck that's out on the edge of a field, with limited power, limited connectivity, so you know, what's the reality of, how much power do you have at some of these more remote edges, or we always talk about the field of turbines, oil platforms, and how much power do you need, and how much compute that it actually starts to be meaningful in terms of the platform for the software? >> Right, there's definitely use cases, like you think about the smart meters, right, in the home. The older generation of those meters may have had very limited compute, right, like you know, talking about single megabyte of memory maybe, or less, right, kilobytes of memory. Very hard to run a stack on that kind of footprint. The latest generation of smart meters have about 250 megabytes of memory. A Raspberry Pi today is anywhere from a half a gig to a gig of memory, and we're fundamentally memory-bound, and obviously, CPU if it's trying to really fast compute, like vibration analysis, or acoustic, or video. But if you're just trying to take digital sensing data, like temperature, pressure, velocity, torque, we can take humidity, we can take all of that, believe it or not, run literally dozens and dozens of models, even train the models in something as small as a Raspberry Pi, or a low end x86. So our stack can run in any hardware, we're completely OS independent. It's a full up software layer. But the whole stack is about 100 megabytes of memory, with all the components, including Docker containerization, right, which compares to about 10 gigs of running a stream processing stack like Spark in the Cloud. So it's that order of magnitude of footprint reduction and speed of execution improvement. So as I said, world's smallest fastest compute engine. You need to do that if you're going to talk about, like a wind turbine, it's generating data, right, every millisecond, right. So you have high frequency data, like turbine pitch, and you have other conceptual data you're trying to bring in, like wind conditions, reference information about how the turbine is supposed to operate. 
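For illustration only, and not FogHorn's actual product API: the closed-loop pattern described above boils down to running a cloud-trained model against every live sample at the edge, acting locally on deviations, and sending back only compact metadata that tells the cloud whether the fleet-wide model needs retraining. The stand-in model, synthetic sensor stream, and publish() target below are assumptions.

```python
# Closed-loop edge inference sketch: score every sample locally, alert in real
# time, and summarize drift instead of streaming raw data upstream.
import json
import random

ALERT_THRESHOLD = 5.0     # per-sample deviation that triggers a local action
DRIFT_THRESHOLD = 2.0     # mean deviation suggesting the fleet model is stale
WINDOW = 1000             # samples summarized per metadata message

def predict(rpm: float, pitch: float) -> float:
    """Stand-in for the cloud-trained model (e.g. a TensorFlow graph)."""
    return 0.8 * rpm + 1.5 * pitch

def sensor_stream():
    """Stand-in for the live turbine feed (one reading per millisecond)."""
    while True:
        rpm, pitch = random.uniform(10, 20), random.uniform(0, 5)
        yield {"rpm": rpm, "pitch": pitch,
               "power": 0.8 * rpm + 1.5 * pitch + random.gauss(0, 1)}

def publish(topic: str, payload: dict) -> None:
    """Stand-in for a connector (MQTT, Kafka, a cloud IoT endpoint...)."""
    print(topic, json.dumps(payload))

residuals = []
for reading in sensor_stream():
    residual = abs(predict(reading["rpm"], reading["pitch"]) - reading["power"])
    if residual > ALERT_THRESHOLD:                    # act locally, in real time
        publish("edge/alerts", {"residual": residual, **reading})
    residuals.append(residual)
    if len(residuals) == WINDOW:                      # send a summary, not the raw stream
        drift = sum(residuals) / WINDOW
        publish("cloud/metadata", {"mean_residual": drift,
                                   "needs_retraining": drift > DRIFT_THRESHOLD})
        residuals.clear()
```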
You're bringing in a torrential amount of data to do this computation on the fly. And so, the challenge for a lot of the companies that have really started to move into the space, the cloud companies, like our partners, Google, and Amazon, and Microsoft, is they have great cloud capabilities for AI, ML. They're trying to move down to the edge by just transporting the whole stack to there. So in a plant environment, okay, that might work if you have massive data centers that can run it. Now I still got to stream all my assets, all the data from all of my assets to that central point. What we're trying to do is come out the opposite way, which is by having the world's smallest, fastest engine, we can run it in a small compute, very limited compute on the asset, or near the asset, or you can run this in a big compute and we can take on lots and lots of use cases for models simultaneously. >> I'm just curious on the small compute case, and again, you want all the data-- >> You want to inference another thing, right? >> Does it eventually go back, or is there a lot of cases where you can get the information you need off the stream and you don't necessarily have to save or send that upstream? >> So fundamentally today, in the OT world, the data usually gets, if the PLC, the production line controller, that has simple KPIs, if temperature goes to X or pressure goes to Y, do this. Those simple KPIs, if nothing is executed, it gets dumped into a local protocol server, and then about every 30, 60, 90 days, it gets written over. Nobody ever looks at it, right? That's why I say, 99% of the brown field data in OT has never really been-- >> Almost like a security-- >> Has never been mined for insight. Right, it just gets-- >> It runs, and runs, and runs, and every so often-- >> Exactly, and so, if you're doing inference-ing, and doing real time decision making, real time actual with our stack, what you would then persist is metadata insights, right? Here is an event, or here is an outcome, and oh, by the way, if you're doing deep learning or machine learning, and you're seeing deviation or drift from the model's prediction, you probably want to keep that and some of the raw data packets from that moment in time, and send that to the cloud or data center to say, oh, our fleet-wide model may not be accurate, or may be drifting, right? And so, what you want to do, again, different horses for different courses. Use our stack to do the lion's share of the heavy duty real time compute, produce metadata that you can send to either a data center or a cloud environment for further learning. >> Right, so your piece is really the gathering and the ML, and then if it needs to go back out for more heavy lifting, you'll send it back up, or do you have the cloud application as well that connects if you need? >> Yeah, so we build connectors to you know, Google Cloud Platform, Google IoT Core, to AWS S3, to Microsoft Azure, virtually any, Kafka, Hadoop. We can send the data wherever you want, either on plant, right back into the existing control systems, we can send it to OSIsoft PI, which is a great time series database that a lot of process industries use. You could of course send it to any public cloud or a Hadoop data lake private cloud. You can send the data wherever you want. Now, we also have, one of our components is a time series database. 
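A generic sketch of the connector idea mentioned above, not FogHorn's implementation: insights derived at the edge are buffered locally and drained upstream when the link is available, with the upstream call swappable between Kafka, S3, IoT Core, OSIsoft PI, and so on. The broker address and topic are assumptions.

```python
# Store-and-forward sketch; broker address, topic, and table layout are assumptions.
import json
import sqlite3
from kafka import KafkaProducer   # pip install kafka-python

db = sqlite3.connect("edge_buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")

producer = KafkaProducer(
    bootstrap_servers="broker.example.com:9092",             # assumed uplink endpoint
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def record_insight(insight: dict) -> None:
    """Called by the local analytics loop; never blocks on the WAN."""
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(insight),))
    db.commit()

def drain_outbox(topic: str = "plant.metadata") -> None:
    """Forward buffered insights upstream; delete rows only after the broker acks."""
    rows = db.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall()
    for _, payload in rows:
        producer.send(topic, json.loads(payload))
    producer.flush()                                          # wait for acknowledgements
    if rows:
        db.execute("DELETE FROM outbox WHERE id <= ?", (rows[-1][0],))
        db.commit()

record_insight({"asset": "turbine-17", "event": "bearing_temp_high", "score": 0.93})
drain_outbox()
```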
You can also persist it in memory in our stack, just for buffering, or if you have high value data that you want to take a measurement, a value from a previous calculation and bring it into another calculation during later, right, so, it's a very flexible system. >> Yeah, we were at OSIsoft PI World earlier this year. Some fascinating stories that came out of-- >> 30 year company. >> The building maintenance, and all kinds of stuff. So I'm just curious, some of the easy to understand applications that you've seen in the field, and maybe some of the ones that were a surprise on the OT side. I mean, obviously, preventative maintenance is always towards the top of the list. >> Yeah, I call it the layer cake, right? Especially when you get to remote assets that are either not monitored or lightly monitored. They call it drive-by monitoring. Somebody shows up and listens or looks at a valve or gauge and leaves. Condition-based monitoring, right? That is actually a big breakthrough for some, you know, think about fracking sites, or remote oil fields, or mining sites. The second layer is predictive maintenance, which the next generation is kind of predictive, prescriptive, even preventive maintenance, right? You're making predictions or you're helping to avoid downtime. The third layer, which is really where our stack is sort of unique today in delivering is asset performance optimization. How do I increase throughput, how do I reduce scrap, how do I improve worker safety, how do I get better processing of the data that my PLC can't give me, so I can actually improve the performance of the machine? Now, ultimately, what we're finding is a couple of things. One is, you can look at individual asset optimization, process optimization, but there's another layer. So often, we're deployed to two layers on premise. There's also the plant-wide optimization. We talked about wind farm before, off camera. So you've got the wind turbine. You can do a lot of things about turbine health, the blade pitch and condition of the blade, you can do things on the battery, all the systems on the turbine, but you also need a stack running, like ours, at that concentration point where there's 200 plus turbines that come together, 'cause the optimization of the whole farm, every turbine affects the other turbine, so a single turbine can't tell you speed, rotation, things that need to change, if you want to adjust the speed of one turbine, versus the one next to it. So there's also kind of a plant-wide optimization. Talking about time that's driving, there's going to be five layers of compute, right? You're going to have the, almost what I call the ECU level, the individual sub-system in the car that, the engine, how it's performing. You're going to have the gateway in the car to talk about things that are happening across systems in the car. You're going to have the peer to peer connection over 5G to talk about optimization right between vehicles. You're going to have the base station algorithms looking at a micro soil or macro soil within a geographic area, and of course, you'll have the ultimate cloud, 'cause you want to have the data on all the assets, right, but you don't want to send all that data to the cloud, you want to send the right metadata to the cloud. >> That's why there are big trucks full of compute now. 
>> By the way, you mentioned one thing that I should really touch on, which is, we've talked a lot about what I call traditional brown field automation and control type analytics and machine learning, and that's kind of where we started in discrete manufacturing a few years ago. What we found is that in that domain, and in oil and gas, and in mining, and in agriculture, transportation, in all those places, the most exciting new development this year is the movement towards video, 3D imaging and audio sensing, 'cause those sensors are now becoming very economical, and people have never thought about, well, if I put a camera and apply it to a certain application, what can I learn, what can I do that I never did before? And often, they even have cameras today, they haven't made use of any of the data. So there's a very large customer of ours who has literally video inspection data every product they produce everyday around the world, and this is in hundreds of plants. And that data never gets looked at, right, other than training operators like, hey, you missed the defects this day. The system, as you said, they just write over that data after 30 days. Well, guess what, you can apply deep learning tensor flow algorithms to build a convolutional neural network model and essentially do the human visioning, rather than an operator staring at a camera, or trying to look at training tapes. 30 days later, I'm doing inference-ing of the video image on the fly. >> So, do your systems close loop back to the control systems now, or is it more of a tuning mechanism for someone to go back and do it later? >> Great question, I just got asked that this morning by a large oil and gas super major that Intel just introduced us to. The short answer is, our stack can absolutely go right back into the control loop. In fact, one of our investors and partners, I should mention, our investors for series A was GE, Bosch, Yokogawa, Dell EMC, and our series debuted a year ago was Intel, Saudi Aramco, and Honeywell. So we have one foot in tech, one foot in industrial, and really, what we're really trying to bring is, you said, IT, OT together. The short answer is, you can do that, but typically in the industrial environment, there's a conservatism about, hey, I don't want to touch, you know, affect the machine until I've proven it out. So initially, people tend to start with alerting, so we send an automatic alert back into the control system to say, hey, the machine needs to be re-tuned. Very quickly, though, certainly for things that are not so time-sensitive, they will just have us, now, Yokogawa, one of our investors, I pointed out our investors, actually is putting us in PLCs. So rather than sending the data off the PLC to another gateway running our stack, like an x86 or ARM gateway, we're actually, those PLCs now have Raspberry Pi plus capabilities. A lot of them are-- >> To what types of mechanism? >> Well, right now, they're doing the IO and the control of the machine, but they have enough compute now that you can run us in a separate module, like the little brain sitting right next to the control room, and then do the AI on the fly, and there, you actually don't even need to send the data off the PLC. We just re-program the actuator. So that's where it's heading. It's eventually, and it could take years before people get comfortable doing this automatically, but what you'll see is that what AI represents in industrial is the self-healing machine, the self-improving process, and this is where it starts. 
>> Well, the other thing I think is so interesting is what are you optimizing for, and there is no right answer, right? It could be you're optimizing for, like you said, a machine. You could be optimizing for the field. You could be optimizing for maintenance, but if there is a spike in pricing, you may say, eh, we're not optimizing now for maintenance, we're actually optimizing for output, because we have this temporary condition and it's worth the trade-off. So I mean, there's so many ways that you can skin the cat when you have a lot more information and a lot more data. >> No, that's right, and I think what we typically like to do is start out with what's the business value, right? We don't want to go do a science project. Oh, I can make that machine work 50% better, but if it doesn't make any difference to your business operations, so what? So we always start the investigation with what is a high value business problem where you have sufficient data where applying this kind of AI and the edge concept will actually make a difference? And that's the kind of proof of concept we like to start with. >> So again, just to come full circle, what's the craziest thing an OT guy said, oh my goodness, you IT guys actually brought some value here that I didn't know. >> Well, I touched on video, right, so without going into the whole details of the story, one of our big investors, a very large oil and gas company, we said, look, you guys have done some great work with I call it software defined SCADA, which is a term, SCADA is the network environment for OT, right, and so, SCADA is what the PLCs and DCSes connect over these SCADA networks. That's the control automation role. And this investor said, look, you can come in, you've already shown us, that's why they invested, that you've gone into brown field SCADA environments, done deep mining of the existing data and shown value by reducing scrap and improving output, improving worker safety, all the great business outcomes for industrial. If you come into our operation, our plant people are going to say, no, you're not touching my PLC. You're not touching my SCADA network. So come in and do something that's non-invasive to that world, and so that's where we actually got started with video about 18 months ago. They said, hey, we've got all these video cameras, and we're not doing anything. We just have human operators writing down, oh, I had a bad event. It's a totally non-automated system. So we went in and did a video use case around, we call it, flare monitoring. You know, hundreds of stacks of burning of oil and gas in a production plant. 24 by seven team of operators just staring at it, writing down, oh, I think I had a bad flare. I mean, it's a very interesting old world process. So by automating that and giving them an AI dashboard essentially. Oh, I've got a permanent record of exactly how high the flare was, how smoky was it, what was the angle, and then you can then fuse that data back into plant data, what caused that, and also OSIsoft data, what was the gas composition? Was it in fact a safety violation? Was it in fact an environmental violation? So, by starting with video, and doing that use case, we've now got dozens of use cases all around video. Oh, I could put a camera on this. I could put a camera on a rig. I could've put a camera down the hole. I could put the camera on the pipeline, on a drone. There's just a million places that video can show up, or audio sensing, right, acoustic. 
So, video is great if you can see the event, like I'm flying over the pipe, I can see corrosion, right, but sometimes, like you know, a burner or an oven, I can't look inside the oven with a camera. There's no camera that could survive 600 degrees. So what do you do? Well, that's probably, you can do something like either vibration or acoustic. Like, inside the pipe, you got to go with sound. Outside the pipe, you go video. But these are the kind of things that people, traditionally, how did they inspect pipe? Drive by. >> Yes, fascinating story. Even again, I think at the end of the day, it's again, you can make real decisions based on all the data in real time, versus some of the data after the fact. All right, well, great conversation, and look forward to watching the continued success of FogHorn. >> Thank you very much. >> All right. >> Appreciate it. >> He's David King, I'm Jeff Frick, you're watching theCUBE. We're having a CUBE conversation at our Palo Alto studio. Thanks for watching, we'll see you next time. (uplifting symphonic music)
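A hedged sketch of the video pattern described in this conversation, not the customer's actual pipeline: frames are scored continuously at the edge, and only flagged frames plus a small event record are kept for the dashboard. The camera URL is a placeholder, and score_frame() is a stand-in for where a trained convolutional model would run.

```python
# Edge video monitoring sketch: continuous scoring, keep only flagged frames.
import json
import time
import cv2

def score_frame(frame) -> float:
    """Stand-in for CNN inference; returns an 'abnormal flare' score in [0, 1]."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(gray.mean()) / 255.0            # placeholder heuristic only

cap = cv2.VideoCapture("rtsp://camera.example.local/flare-stack-7")   # assumed feed
EVENT_THRESHOLD = 0.8

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    score = score_frame(frame)
    if score > EVENT_THRESHOLD:
        stamp = int(time.time())
        cv2.imwrite(f"flare_event_{stamp}.jpg", frame)        # keep the evidence frame
        print(json.dumps({"ts": stamp, "score": round(score, 3),
                          "camera": "flare-stack-7"}))        # metadata for the dashboard
cap.release()
```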

Published Date : Nov 16 2018


Jeffrey Snover, Microsoft | Microsoft Ignite 2018


 

(electronic music) >> Live from Orlando, Florida, it's theCUBE! Covering Microsoft Ignite. Brought to you by Cohesity, and theCUBE's ecosystem partners. >> Welcome back everyone to theCUBE's live coverage of Microsoft Ignite here in Orlando, Florida. I'm your host, Rebecca Knight, along with my cohost, Stu Miniman. We're joined by Jeffrey Snover. He is the technical fellow and chief architect for Azure Storage and Cloud Edge at Microsoft. Thanks so much for coming, for returning to theCUBE, I should say, Jeffrey, you're a CUBE alum. >> Yes, I enjoyed the last time. So can't wait to do it again this time. >> Well, we're excited to have you. So before the cameras were rolling, we were talking about PowerShell. You invented PowerShell. >> Yeah, I did. >> It was invented in the early 2000s, it took a few years to ship, as you said. But can you give our viewers an update of where we are? >> Yeah, you know, it's 2018, and it's never been a better time for PowerShell. You know, basically the initial mission is sort of complete. And the mission was to provide sort of general purpose scripting for Windows. But now we have a new mission. And that new mission is to manage anything, anywhere. So we've taken PowerShell, we've open sourced it. It's now running, we've ported it to macOS and Linux. There's a very large list of Linux distributions that we support it on, and it runs everywhere. And so, now, you can manage from anywhere. Your Windows box, your Linux box, your Mac box, even in the browser, you can manage, and then anything. You can manage Windows, you can manage Linux, you can manage macOS. So manage anything, anywhere. Any cloud, Azure, or AWS, or Google. Any hypervisor, Hyper-V or VMware, or any physical server. It's amazing. In fact, our launch partners, when we launched this, our launch partners were VMware, Google, AWS. Not Microsoft's traditional partners. >> That's great to hear. It was actually one of the critiques we had at the keynote this morning, was partnerships are critically important. But felt that Satya gave a little bit of a jab towards, the kind of, the Amazons out there. When we talk to customers, we know it's a heterogeneous, multi-cloud world. You know, you work all over the place, with your solutions that you had. There's not, like, Azure, Azure Stack, out to The Edge. The Edge, it is early, it's going to be very heterogeneous. So connect the dots for us a little. You know, we love having the technical fellows on, as to, you go from PowerShell, to now this diverse set of solutions that you work on today.
No customer wants to start with 10 racks. So we took the inspiration from them and re-implemented it. And now our systems can start with two servers. Our Azure Stack systems, well, so, then, what we decided was, hey, this is great technology. Let's take the great cloud-inspired infrastructure of Windows Server, and match it with the Azure services themselves. So we take Azure, put it on top of Windows Server, package it as an appliance experience, and we call that Azure Stack. And that's where I have been mostly focused for the last couple of years. >> Right, can you help us unpack a little bit. There's a lot of news today. >> Yes. >> You know, Windows 2019 was announced. I was real interested in the Data Box Edge solution, which I'm sure. >> Isn't that crazy? >> Yeah, really interesting. You're like, let's do some AI applications out at the Edge, and with the same kind of box that we can transport data. Because, I always say, you got to follow customers applications and data, and it's tough to move these things. You know, we've got physics that we still have to, you know, work on until some of these smart guys figure out how to break that. But, yeah, maybe give us a little context, as to news of the show, things your teams have been working on. >> Yeah, so the Data Box Edge, big, exciting stuff. Now, there's a couple scenarios for Data Box Edge. First is, first it's all kind of largely centered on storage and the Edge. So Storage, you've got a bunch of data in your enterprise, and you'd like it to be in Azure. One flavor of Data Box Edge is a disk. You call us up, we send you a disk, you fill up that disk, you send it back to us, it shows up in Azure. Next. >> A pretty big disk, though? >> Well, it can be a small disk. >> Oh, okay. >> Yeah, no, it can be a single SSD, okay. But then you can say, well, no, I need a bunch more. And so we send you a box, the box is over there. It's like 47 pounds, we send you this thing, it's about 100 terabytes of data. You fill that thing up, send it to us, and we upload it. Or a Data Box Heavy. Now this thing has a handle and wheels. I mean, literally, wheels, it's specially designed so that a forklift can pick this thing up, right? It's like, I don't know, like 400 pounds, it's crazy. And that's got about a petabyte worth of storage. Again, we ship it to you, you fill it up, ship it back to us. So that's one flavor, Data Box transport. Then there's Data Box Edge. Data Box Edge, you go to the website, say, I'd like a Data Box Edge, we send you a 1u server. You plug that in, you keep it plugged in, then you use it. How do you use it? You connect it to your Azure storage, and then all your Azure storage is available through here. And it's exposed through SMB. Later, we'll expose it through NFS and a Blob API. But, then, anything you write here is available immediately, it gets back to Azure, and, effectively, it looks like near-infinite storage. Just use it and it gets backed up, so it's amazing. Now, on that box, we're also adding the ability to say, hey, we got a bunch of compute there. You can run IoT Edge platforms. So you run the IoT Edge platform, you can run gateways, you can run Kubernetes clusters on this thing, you can run all sorts of IoT software. Including, we're integrating in brainwave technology. So, brainwave technology is, and, by the way, we'll want to talk about this a little bit, in a second. It is evidence of the largest transformation we'll see in our industry. And that is the re-integration of the industry. 
So, basically, what does that mean? In the past, the industry used to be, back when the big key players were Digital. Remember Digital, from DEC? We're all Massachusetts people. (Rebecca laughs) So, DEC was the number one employer in Massachusetts, gone. IBM dominant, much diminished, a whole bunch of people. They were dominant when the industry was vertically integrated. Vertically integrated meant all those companies designed their own silicon, they built their own boards, they built their own systems, they built their OS, they built the applications, they serviced them. Then there was the disintegration of the computer industry. Where, basically, we went horizontally integrated. You got your chips from Intel or Motorola. The operating system, you got from Sun or Microsoft. The applications you got from a number of different vendors. Okay, so we got horizontally integrated. What you're seeing, and what's so exciting, is a shift back to vertical integration. So Microsoft is designing its own hardware, right? We're designing our own chips. So we've designed a chip specially for AI, we call it a Brainwave chip, and that's available in the Data Box Edge. So, now, when you do this AI stuff, guess what? The processing is very different. And it can be very, very fast. So that's just one example of Microsoft's innovation in hardware. >> Wow, so, I mean. >> What do you do with that? >> One of the things that we keep hearing so much, at this conference, is that Microsoft products and services are helping individual employees tap into their own creativity, their ingenuity, and then, also, collaborate with colleagues. I'm curious about where you get your ideas, and how you actually put that into practice, as a technical fellow. >> Yeah. >> How do you think about the future, and envision these next generation technologies? >> Yeah, well, you know, it's one of those things, honestly, where your strength is your weakness, your weakness is your strength. So my weakness is, I can't deal with complexity, right. And, so, what I'm always doing is I'm taking a look at a very complex situation, and I'm saying, what's the heart of it, like, give me the heart of it. So my background's physics, right? And so, in physics, you're not doing, you're looking for the F = ma. And if you have that, when you find that, then you can apply it over, and over, and over again. So I'm always looking at what are the essential things here. And so that's this, well, you see a whole bunch of confusing things, like, what's up with this? What's with this? That idea, that there is this narrative about the reintegration of the computer industry. How very large vendors, be it Microsoft, or AWS, are, because we operate at such large scales, we are going to be vertically integrated. We're developing our own hardware, we do our own systems, et cetera. So, I'm always looking for the simple story, and then applying it. And, it turns out, I do it pretty accurately. And it turns out, it's pretty valuable. >> Alright, so that's a good set up to talk about Azure Stack. So, the value proposition we heard, of course, is, you know, start everything in the cloud first, you know, Microsoft does Azure, and then lets, you know, have some of those services in the same operating model in your data center, or in your hosting service provider environment. So, first of all, did I get that right? And, you know, give us the update on Azure Stack. I've been trying to talk to customers that are using it, talking to your partners. 
There is a lot of excitement around it. But, you know, proof points, early use cases, you know, where is this going to be pointing towards, where the future of the data center is? >> So, it's a great example. So what I figured out, when I thought about this, and kind of drilled in, like what's really, what really matters here? What I realized was that the gestalt of Azure Stack is different than everything we've done in the past. And it really is an appliance, okay? So, in the past, I just had a session the other day, and people were asking, well, when are you going to, when is Azure Stack going to have the latest version of the operating system? I said, no, no, no, no, no. Internals are internal, it's an appliance. Azure Stack is for people who want to use a cloud, not for people who want to build it. So you shouldn't be concerned about all the internals. You just plug it in, fill out some forms, and then you use it, just start using it. You don't care about the details of how it's all configured, you don't do the provisioning, we do all that for you. And so that's what we've done. And it turns out that that message resonates really well. Because, as you probably know, most private clouds fail. Most private clouds fail miserably. Why? And there's really two reasons. There's two flavors of failure. But one is they just never work. Now that's because, guess what, it's incredibly hard. There are so many moving pieces and, guess what, we learned that ourselves. The number of times we stepped on the rakes, and, like, how do you make all this work? There's a gazillion moving parts. So if any of you have a team that's failed at private cloud, they're not idiots. It's super, super, super hard. So that's one level of failure. But even those teams that got it working, they ultimately failed, as well, because of lack of usage. And the reason for that is, having done all that, they then built a snowflake cloud. And then when someone said, well, how do I use this? How do I add another NIC to a VM? The team that put it together were the only ones that could answer that. Nope, there was no ecosystem around it. So, with Azure Stack, the gestalt is, like, this is for people who want to use it, not for people who want to build it. So you just plug it in, you pick a vendor, and you pick a capacity. This vendor, four nodes, this vendor 12 or 16 nodes. And that's it. You come in, we ask you what your IP range is, how do we integrate with your identity? Within a day, it's up and running, and your users are using it, really using it. Like, that's craziness. And then, well what does it mean to use it? Like, oh, hey, how do I add a NIC to a VM? It's Azure, so how does Azure do it? I have an entire Azure ecosystem. There's documentation, there's training, there's videos, there's conferences. You can go and put out a job posting, I'd like to hire someone with Azure skills, and get someone, and then they're productive that day. Or, and here's the best part, you can put on your resume, I have Azure skills, and you knock on 10 doors, and nine of them are going to say, come talk to me. So, that was the heart of it. And, again, it goes back to your question of, like, the value, or what does a technical fellow do. It's to figure out what really matters. And then say, we're all in on that. There was a lot of skepticism, a lot of customers like, I must have my security agent on there. It's like, well, no, then you're not a good candidate. What do you mean? I say, well, look, we're not going to do this. 
And they say, well you'll never be able to sell to anyone in my industry. I said, no, you're wrong. They say, what do you mean, I'm wrong? I say, well, let me prove it to ya, do you own a SAN? They say, well, of course we own a SAN. I said, I know you own a SAN. Let me ask you this, a SAN is a general purpose server with a general purpose operating system. So do you put your security and management agents on there? And they said, no, we're not allowed to. I said, right, and that's the way Azure Stack is. It's a sealed appliance. We take care of that responsibility for you. And it's worked out very, very well. >> Alright, you got me thinking. One of the things we want to do is, we want to simplify the environment. That's been the problem we've had in IT, for a long time, is it's this heterogeneous mess. Every group did their own thing. I worry a multi-cloud world has gotten us into more silos. Because, I've got lots of SaaS providers, I've got multiple cloud providers, and, boy, maybe when I get to the Edge, every customer is going to have multiple Edge applications, and they're going to be different, so, you know. How do you simplify this, over time, for customers? Or do we? >> Here's the hard story, back to getting at the heart of it. Look, one of the benefits of having done this a while, is I've stepped on a lot of these rakes. You're looking at one of the biggest, earliest adopters of the cross-platform GUI frameworks. And, every time, there is this, oh, there's multiple platforms? People say, oh, that's a problem, I want a technology that allows me to bridge all of those things. And it sounds so attractive, and generates a lot of early things, and then it turned out, I was working with this cross-platform framework. I wrote it, and it worked on Macs and Windows. Except, I couldn't cut and paste. I couldn't print, I couldn't do anything. And so what happens is it's so attractive, blah, blah, blah. And then you find out, and when the platforms aren't very sophisticated, the gap between what these cross-platform things do, and the platform is not so much, so it's like, eh, it's better to do this. But, over time, the platform just grows and grows and grows. So the hard message is, people should pick. People should pick. Now, one of the benefits of Azure, as a great choice, is that, with the other guys, you are locked to a vendor. Right, there is exactly one provider of those APIs. With Azure, you can get an implementation of Azure from Microsoft, the Azure Public Cloud. Or you can get an implementation from one of our hardware vendors, running Azure Stack. They provide that to you. Or you can get it from a service provider. So, you don't have to get locked in, you buy into these APIs. You optimize around that, but then you can still choose your vendor. You know, hey, what's your price for this? What's your price for that, what can you give me? With the other guys, they're going to give you what they give you, and that's your deal. (Rebecca laughs) >> That's a good note to end on. Thank you so much, Jeffrey, for coming on theCUBE again. It was great talking to you. >> Oh, that was fast. (Rebecca laughs) Enjoyed it, this was great. >> Great. I'm Rebecca Knight, for Stu Miniman, stay tuned to theCUBE. We will have more from Microsoft Ignite in just a little bit. (electronic music)
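The Data Box Edge flow described above, where anything written to the local share lands back in Azure and is also exposed through a Blob API, is easiest to picture from the cloud side. Below is a minimal sketch, not from the interview, of reading and writing that same namespace with the current Python azure-storage-blob SDK; the connection string, container, and file names are placeholders.

```python
# Minimal sketch: once data written to a Data Box Edge share has synced back to
# the cloud, it is ordinary Azure Blob storage and can be used with the standard SDK.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")
container = service.get_container_client("edge-share")  # hypothetical container name

# Upload a local file; the same object would also be visible through the on-prem share.
with open("sensor-readings.csv", "rb") as data:
    container.upload_blob(name="2018/10/sensor-readings.csv", data=data, overwrite=True)

# List what has landed under the same prefix.
for blob in container.list_blobs(name_starts_with="2018/"):
    print(blob.name, blob.size)
```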

Published Date : Sep 24 2018


Kevin Zhang, Microsoft & Brad Berkey, Microsoft | SAP SAPPHIRE NOW 2018


 

>> From Orlando, Florida, it's theCube, covering SAP Sapphire Now 2018! Brought to you by NetApp. >> Welcome, you're watching theCube, On The Ground at SAP Sapphire Now. I'm your host, Keith Townsend. We're in steamy Orlando. Great convention center, the size of 16 American football fields. Got in about three thousand steps this morning, but you know what, I'm not here to talk about me. We're here talking about the relationship between Microsoft and NetApp. We have Brad Berkey, GM SAP Global at Microsoft and Kevin Zhang, Tech Solutions Pro, and this is a mouthful, SAP on Azure Intelligent Global and you're a black belt? >> Yes. >> Oh wow! >> Yes, I can kickbox. >> You can kick some SAP butt. >> Yes (laughs) oh no, yes, yes we do great solutions. >> So first off let's talk about the NetApp, Microsoft relationship as it pertains to SAP. What's the story behind NetApp and Microsoft? >> The great thing about NetApp and Microsoft is we both have the same vision, right. For us, it's about our responsibility to help our customers innovate. And NetApp is a key partner for us in our ability to help our customers innovate and provide solutions around SAP. >> So, let's talk about those solutions around SAP. One of the things that's getting pushed an awful lot is that SAP is now cloud ready. We can go to the cloud. We can go to these hyperscalers, such as Azure, or "As-zure," and swipe a credit card and get up and running with HANA. Tell us about that experience. How does that go exactly? >> Kevin? >> Oh yeah, so I don't know if you have heard. We just announced we released a 12 terabyte memory size virtual machine. Our HANA Large Instances can go up to 24 terabytes. So we ran the largest SAP workload in the world. There are so many customers, about 400 SAP on Azure customers. Personally I work with about 30 SAP on Azure customers and over 70 or 80 SAP HANA on Azure customers. So, it's very exciting and we see that the trend is picking up, the demand is picking up worldwide. >> Wow! Bill McDermott on stage yesterday gave the numbers around SAP HANA in general, 1800 customers. So Microsoft having 400 SAP HANA customers. >> Sure, just to be clear on that. So when we talk about customers that are sitting inside of Azure for their SAP Landscape, that's both traditional NetWeaver-based and HANA-based, and I think the number that you have is closer to 70 of that larger number. The real important thing that customers are seeing today is the... When people think of cloud, they think about cost reduction. I'm gonna save money because I'm gonna be renting equipment. The true value is in your ability to be nimble, to innovate, right? So imagine a customer puts their SAP Landscape inside of Azure and it's NetWeaver-based, say the older stuff. At any point along that journey, they can call us up and say, "I want the infrastructure for HANA." They can innovate at will. If they buy hardware that sits on-premise, that hardware's set to run that particular landscape, it's not set to run HANA. So there's some opportunities for the customer to innovate using Azure. It's not just cost savings, it's around efficiencies and the ability to innovate at will. >> So let's talk about hybrid cloud scenarios around that very concept. We had another NetApp partner on that talked about the scenario in which customers have this desire to innovate quickly. 
Traditionally, in a traditional enterprise, to your point, if I wanted to spin up a HANA workload, I'd have to procure hardware, I'd have to get my Basis team to lay down the NetWeaver stack along with HANA. It could be a couple of months before I'm up and running. Then I can innovate, do my innovation. How does Microsoft help shorten that cycle? >> I can speak to it. We actually have another partner here, as well, SUSE. HANA runs on SUSE and different flavors of Linux, and they're running on Azure. Today, we are able to deploy the entire SAP Landscape using automation scripts inside Azure. In 30 minutes, you have the entire SAP Landscape deployed, including the large M-series virtual machines for your HANA cluster. You also have the ASCS, the central services instances, and also the AFS cluster, as well as your application servers. All of those things running, with your automation, at cloud speed, in 30 minutes instead of three months. >> So one of the obvious advantages of cloud, in general, is this ability to get to agility. There's a concept that once I've innovated in the cloud, I know what the workload is, it's stable, it's not changing, that I bring that back in-house. Is that something that you're seeing, are people continuing to run these workloads steady state in the cloud as well? >> I think they're gonna run more so in steady state. We don't see them kind of moving it back. The idea in a traditional SAP Landscape is that everything is always on. >> Right. >> Right. Since the lights are always on, why not have my own equipment as opposed to just renting compute from a hyperscaler like Microsoft? The reality is, again, back to that notion of innovating. If I'm gonna roll out, let's say, S4 on top of HANA, so you think about Suite on HANA and then S4, I'm gonna set up all of these test environments, multiple test environments, versions of it as I roll out. I'm gonna be really big for a short period of time then I'm gonna roll it out and shrink back down. Also, when I do upgrades, you think about it like if you're doing payroll at the end of the month, I'm gonna be big for short periods of time. So we call that bursting, and it's that bursting that allows you to continually reduce costs. You wouldn't bring that back on-prem, where you can't burst, right? Makes sense? >> That makes sense. So let's talk about some of these business conversations that you've had with customers. What have been some of the primary drivers other than the obvious agility? What are some of the conversations where you look at the broader Microsoft portfolio of solutions that you're able to bring into customer conversations? >> Two things come to mind. One of which is when you think about enterprise-class security across all domains, right? So right now we provide Azure for Office 365. That's an Azure tenant. And we can give you advanced security for that. Imagine that I can provide that same security for your SAP system. I want to give you an example of the type of security solutions. We have an intelligent, IoT-based security model that sits inside of Azure that will predict hacks. They'll look at your environment and say, "you look just like a customer who has been hacked" or "you have the attributes of a customer who could get hacked," and they'll proactively come in and say you need to make these adjustments. That kind of stuff sits inside of the cloud in Azure. So it's not just... 
And again, I think the misnomer is it's just about cost savings, 'cause if it was just about cost savings, then at some point, your depreciation models for on-premise hardware, as long as you can stay and not change, so not changing, would save you a lot of money. So that's what I keep coming back to: it allows you to change without the burden of impact. >> Talking about change in the industry, we can't have a 7.5 billion dollar acquisition and not talk about it on theCube. We kind of eat this stuff up. You guys acquired GitHub. Let's talk about the relationship of developers, one of the things I haven't heard a lot, at least in conversations I've had on theCube so far this week, has been about the developer. Talk about the importance of the developer relationship and potential integrations with GitHub, if you can, and SAP. >> First, that is one of my favorite topics. I came from a development background. What we call enabling agility allows you to run continuous development and continuous integration, and GitHub has been an integral part of Microsoft's solutions already. We are probably the largest contributor on GitHub, ahead of Google and Facebook, if you rank based on the history. Open source has been cultural; after Satya took over as CEO, open source has been our winning grace, and we actually... The majority of our code and our deployment is in GitHub. In the SAP world, the ARM templates, the automation templates, JSON templates, and all the automation scripts are deployed in GitHub, and we share them with customers as a community. They actually use those scripts for their deployments and continuously improve the scripts for automation. >> So, continuous integration, continuous development is not a term that we hear a lot in the SAP world. As we're bringing these concepts from, I think, thought into reality with services such as GitHub to store DevOps scripts, automation scripts, what has been the business impact of being able to bring a continuous integration, continuous development practice to SAP, which is usually not known for that? >> I'll give you a good example. For example, when Brad Berkey mentioned earlier doing the SAP Landscape deployment, you have an N+1 deployment and you want to do a test environment, you want to do a sandbox to troubleshoot an incident. Today, with the scripted automation, you can spring up an entire system in three hours, four hours, including S4, including the whole system when you put in the BusinessObjects BI and the other things together. You can test this and then shut down the entire system and delete the resource group inside Azure. And when you need that system again, you re-spring it up as necessary. Also, we're working with SAP on something called Landscape Manager, which allows you to clone the system inside Azure. The scripts behind it are actually continuous integration and deployment type scripts that allow you to replicate system files, allow you to deploy another testing system or training system. It gives you a lot of modern deployment methodology, to give fast agility to the business. >> So Microsoft, the ultimate platform company, one of the things that designates a platform company is that your partners basically make more money than you off the platform. Windows is a great example of a platform. So you have a platform, Azure is definitely becoming known as a platform, and then we have NetApp, the data-driven company. 
Talk through the value of the NetApp data fabric, data-driven technology and platform as it pertains to the ability to have the same data operation strategy on-prem and in the Microsoft Cloud. >> Okay, I'll give you an example. A lot of our customers... Brad sells a lot of SAP on Azure to many customers. I've supported those customers. Many of them, because NetApp has super high speed snapshot management, data protection, data recovery and backup, and also the DR capability, those customers ask us, can we actually work with Microsoft in the cloud or use a similar technology? So they deploy NetApp ONTAP inside of Azure today. And we're able to support AFS file services, to file sync from on-prem to the cloud, from one Azure region to another region, leveraging ONTAP SnapMirror and all that technology as well. So that enables us to provide enterprise-level file sync, file protection, file recovery, and replication as well. >> So, you guys are pretty good. I'm trying to throw you curve balls but you're pretty much knocking 'em out of the park, so I'ma try to throw another curve ball. Bring the hybrid IT story in for me from a Microsoft perspective when it comes to Azure Stack. How does Azure Stack play a role in the overall vision, whether it's Edge, Core, or, like, stationed into the cloud, how does Azure Stack play a role in it? >> And Azure Stack, it's not for SAP. >> Yeah, okay. Azure Stack is a very important overall view, from Edge to the entire cloud. We have 50 regions globally. We have many data centers combined. The largest public cloud from a region perspective, but still there are areas, for example, like a cruise ship, like a defense department, that may actually require an Edge, on-prem type of technology stack. Azure Stack allows you to use the same interface, same view, to deploy the technology. When you actually connect it, you can synchronize your subscription. So it can allow you to have end-to-end access from your on-premise into the cloud. Microsoft has the perfect hybrid cloud strategy here, and it allows you to do not only the IaaS and PaaS but also the SaaS solutions for our customers. >> So, okay, let's bring the conversation back up a couple of levels and talk, Brad, what have been the conversations here? After the keynote this morning, talking about the intelligent business, the conversations yesterday with Bill McDermott with the super-high energy about SAP going into CRM, what have the conversations with customers been? >> We've had the privilege of a lot of customer meetings here. The great thing about SAP Sapphire is you got about 20,000 customer attendees here. They're the big ones, and at the C-Suite, so we get to have some great conversations. The customer conversations have been around the notion of the responsibility that Microsoft and SAP have to them. To the point where I was speaking with a customer earlier, he says, "You have an accountability to help me be innovative." That's a very important responsibility. A lot of that revolves around enterprise-class security. A lot of that revolves around uptime and latencies between those environments. "What's my performance attribute?" and "Are you going to be there with me forever?" Now when a customer chooses Azure or they choose SAP and they choose Azure, certainly, it's really a three-part partnership. The customer, Microsoft, and SAP as a partnership. 
If I had to add a fourth one to that, it would be the systems integrator, because in this case, Microsoft doesn't upgrade, migrate, move or install anything. So we rely on all the many partners that are here to do that set of work, everywhere from Accenture to Gemini to Brave New World. That was ABC, right? I got those out, right? All of those partners are very key to both Microsoft and SAP to ensure customer success. So a lot of the meetings that we've had here have been with those partners and those customers. >> Wow, to be a fly on the wall for those. I would love to go into more detail. We've run out of time. I'm getting the wrap sign, but I would love to have a conversation around support, integration, way more areas than we have time for. We'll have to get you on theCube again. You're now Cube veterans. From Orlando, this is Keith Townsend for theCube. Stay tuned, or stay in the YouTube feed, to find out more about what's going on at SAP Sapphire Now On The Ground. Talk to you soon. (lively music)
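Kevin's point about ARM and JSON templates shared on GitHub is what makes the 30-minute landscape claim concrete: the whole deployment is a template handed to Azure Resource Manager. Here is a minimal sketch of kicking one off with the Python azure-mgmt-resource SDK; the resource group, deployment name, template URL, and parameter are placeholders rather than Microsoft's or SAP's actual templates, and exact method names vary slightly across SDK versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import (
    Deployment, DeploymentProperties, TemplateLink,
)

# Authenticate and point at the target subscription (placeholder id).
credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

# Describe the deployment: template pulled straight from a Git repo,
# parameters supplied inline. Everything below is illustrative only.
properties = DeploymentProperties(
    mode="Incremental",
    template_link=TemplateLink(
        uri="https://raw.githubusercontent.com/<org>/<repo>/master/sap-landscape.json"
    ),
    parameters={"hanaVmSize": {"value": "Standard_M128s"}},
)

# Kick off the deployment and wait for it to finish.
poller = client.deployments.begin_create_or_update(
    "sap-landscape-rg",   # hypothetical resource group
    "s4hana-sandbox",     # hypothetical deployment name
    Deployment(properties=properties),
)
print(poller.result().properties.provisioning_state)
```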

Published Date : Jun 8 2018


Al Burgio, DigitalBits.io & Nithin Eapen, Arcadia Crypto Ventures | Blockchain Week NYC 2018


 

(techno music) >> Announcer: Live, from New York, it's theCUBE. Covering Blockchain Week. Now, here's John Furrier. (techno music) >> Hello and welcome back. this is the exclusive coverage from theCUBE. I'm John Furrier, the co-host. We're here in New York City for special on the ground coverage. We go out where all the action is. It's happening here in New York City for Blockchain Week, New York, #BlockchainWeekNY Of course, Consensus 2018 and a variety of other events, happening all over the place. We got D-Central having a big boat event here, tons of events from Hollywood. We got New York money, we got Hollywood money, we got nerd money, it's money everywhere, and of course great deals are happening, and I'm here with two friends who have done a deal. Al Burgio is a CEO of DigitalBits co-founder, and Nithin who's the partner at Arcadia Crypto Ventures. You guys we've, you know, we're like family now, and you're hiding secrets from me. You did a deal. Al, what's going on here? Some news. >> Yeah, well first John, thanks for having us. We always love coming on the show, and really enjoy spending time with you and so forth. We, you know previous conversations that we've had, we were not out there fundraising. But really had the opportunity to meet a lot of great people Nithin and his firm being definitely one of them. And as a result of that, really building this, say, following, these relationships within the venture community, more specifically the crypto venture community. When we were ready to actually go out and do, let's say a first round, for us it happened very quickly, and it was a result of being able to leverage those relationships that we had. For me, it was kind of remarkable to see that support come and happen so quickly. Normally venture, it's just a process. Many many months. >> John: Long road. >> Then a month to close. >> John: Kiss all the frogs. >> Yeah, here it's like, you know, people can do due diligence on the fly, You have an opportunity with events like this. >> John: They're smart. >> They're smart, and and there's an opportunity to really foster these relationships in this really tight-knit community. And, you know, Nithin and his firm being obviously one of those. And so when we were ready to go out and do our first round, it happened quickly, and I'd like to think that in a lot of ways, it happened amongst friends. >> Well, you're being humble. We've been covering you, you've been on theCUBE earlier, when you just started the idea, so it's fun to watch you have this idea come to fruition, but you're in a, you're hitting a TAM a Total Available Market that's pretty large. And that's one of the secrets, to have a TAM. Aggressive bold move, we'll how it turns out for you, but you know, you got to have the moonshot, you're going after the loyalty market, which is completely run by the syndicate, what do you want to call it, the mafia of loyalty. >> Yeah, well, I would say that in some cases, those that are supporting us see that as really just one use case. Because we built this general-purpose blockchain, one of the use cases and one of the first use cases that were out there to support, happens to be the loyalty space. >> John: Big. And it's massive, highly fragmented but massive market, and we can solve a lot of liquidity issues with our technology. But then it goes beyond that. So it's a big market at the start, and then that can scale even greater from there. and I think that's part of what, I mean obviously, I'm not going to speak for Nithin. 
>> Nithin, let me weigh in here, pass the mic over. Nithin talk about the deal, why these guys? I know you met 'em, you like Al, and the feedback I've heard from other folks is he's a classic entrepreneur and that obviously, the entrepreneur gets the deal, but obviously you don't just give money 'cause you like someone. What about this deal is it that you guys like? You guys been there early, you got some great people on your team, what about this deal is it that you like? >> Sure, for us, Al met pretty much most of, almost all the criteria that we had, okay. That we had when we go, the thesis before we go fund someone. We don't get so many deals like that. Usually we get you know, they made 50% of the criteria, we might still put money because you can't get the 100%. So one thing, Al as a founder, he's experienced, he has done it multiple times before, he sold companies. Tech guy, which is very key for us. A tech project is very key. Okay, second thing, he's built the whole thing. It's not like he's raising the money to go and build it. He built it, now he's raising money to go for go to market strategies, which makes sense. He's shown it, and we tested it out. So like, we were completely blown away. He has a team behind 'im. He's built a team on every side, on the marketing side, on PR, events. And the idea, this is a general blockchain, but he's addressing a very specific issue. It is a real problem. Loyalty points, or rewards points, or gift points. Or whatever you call them. It is segmented, it's fragmented, and this is a chance. And there might be many people who are trying to solve this problem, but I think Al has the greatest possibility, or probability, of becoming the winner. >> You and I have talked on theCUBE before, both of you guys are CUBE alumni, I know you both, so I'll ask you, 'cause I'll just remind everyone, we've talked about token economics. One of the things that's coming up here at the Consensus 2018 event in New York, onstage certainly, and some fireworks in one of the sessions, is like if you're not decentralized, why the hell are you doing a decentralized model? So one of the criterias is, the fit for the business model, has to fit the notion of a decentralized world, with the ability of tokens becoming an integral part. What about this deal makes that happen? Obviously, fragmentation, is that still decentralized? So, how are you sorting through the nuances of saying, okay, is it decentralized the market for him, and this deal? Or does it fit? >> See no, decentralize is one thing okay, in here, more than decentralized, I would say there was the platform, so that all the companies can come in, use this common platform, release it, and as a user you're getting a chance to atomically swap it if you don't like something. Most of the reward points or loyalty points go waste. Maybe the companies want it to go waste, I don't know if that is. >> It's a natural burn at equilibrium going on anyway right? Perfect fit! >> So that is the only, that was the only doubt that we had. Would companies want this, because do they want their customers' loyalty points going waste rather than swapping it for something else? That was the only question that we had. Well, that's a question that will get answered in the market. But otherwise we hadn't seen something like this before. >> What's your take of the show so far? We saw each other in the hallway as we were getting set up for theCUBE, for two days of coverage, in New York, for Blockchain Week, New York, what's your take? 
Obviously pretty packed. >> Oh my god, it's so packed, and it's great, the show is going on. It is bringing a lot of money in, it's bringing all the investors in, new money, old money, traditional money, nerd money as you said. >> It smells like money! >> Everybody's coming in. See the beauty about those things coming in is, you're going to get a lot of people from other fields that are going to come into this field to solve problems. 'Cause earlier, if there is no money coming in, you're going to have very smart people, or very intelligent people stick with physics or whichever was their field. Now, they're going to look into the space because they're getting paid. See that brings more people who are intelligent, and who can solve problems. That is very key for me. >> Al, I want to ask you as an entrepreneur, one of the things you usually have to struggle with, as any entrepreneur, is navigating the 3-D chess you got to play, whether it's competitive strategy, market movement, certainly the market's moving and shifting very quickly, but you've got growth, big tailwind for you. What's your takeaway? Because now you have new things coming on. Every day it seems like a new shoe is dropping. SEC's firing a warning on utility tokens, security tokens are still coming, are now coming online, but that looks very promising, and then ecosystems become super important. You guys just announced news this morning around the ecosystem. >> Yeah, tomorrow we have some. We had some news today, but we have more tomorrow. >> John: Well talk about the news. >> Yeah, so we have a multi-tiered go to market strategy. Obviously in the loyalty space, again I want to emphasize, it's just one use case, but it's a massive one. You have brands, the enterprise. And many of those enterprises or brands may operate their loyalty program internally, in terms of like back-office systems, in some cases they're outsourcing the app to a SaaS provider, some application provider, that's kind of hidden in the background. But let's just say like Hilton. I use Hilton, it's the location for the event, but Hilton, you have this user experience using this app, but maybe that technology, the SaaS application that's powering that, is actually not Hilton technology. And so let's just say, there's 30 million people in the Hilton program and there may be 30 million of them on the Marriott, coexisting on some SaaS application. And so that's another important category for us. SaaS providers and so forth, supporting that industry. And then last but not least, today, whether enterprise or SaaS company, in many cases they're not touching their own hardware, right? They're using the cloud. 
You're going to see news in the morning. It's late here on a Monday evening. So tomorrow 9:00 a.m, major cloud company, one of the major cloud companies, and there's more to follow, making an announcement that they've joined our ecosystem partner program, and supporting this open source technology in a number of different ways. Which we're really excited about. >> You see ecosystem as a strategic move for you. >> Absolutely, this is, for us, this is, it's all about helping the consumer, but it's not about one consumer at a time for us. It's very much an enterprise play. It's one enterprise at a time. And with each enterprise we basically add to the ecosystem millions if not tens of millions of consumers instantly. >> Nithin I want to ask you a question, because what he just brought up is interesting to me as well. As a new thing, it's not new, but it's new to the crypto world, new to the analog world, that's not in the tech field. Tech business, we all know about global system integrators, we know about ecosystems, we know the value of developer programs, and community, all those things, check, check, check. But now those things are coming to new markets. People have never seen an ecosystem play before. So it's kind of, not new, it's new for some people, it's a competitive advantage opportunity. >> True, it is. See the whole thing is so new, that you can't even define it at this point. It's very hard to define. It's like, see, as an example I would say, none of us thought that when the iPhone came, there would be a 60 billion dollar taxi sharing economy that comes out of it, right? Same thing. Blockchain comes, we just don't know. And it's very hard to predict. >> New brands are going to emerge, I mean if you look at every major inflection point, I point to a couple that I think are relevant, TCP/IP was created, internetworking. >> Yep. >> That essentially went after proprietary networks, like IBM, Digital, Stacks, but it didn't replace, it wasn't a new functionality, it was interoperability. >> Yes. >> The web, HTTP, created a whole new functionality. >> Yep. >> Out of that emerged new brands. >> Yeah. >> So I think this wave's coming is a, new brands are going to emerge. >> Here, what's the brand, I don't know what's going to emerge. There it was interoperability. >> John: Well, new players. >> It's here, it's more, the collaboration. The collaboration is so huge, it's the scale is so huge, in the sense you can collaborate across the world. You're cutting those borders, there are no borders that can hold you. Even though interoperability happened in internet, There were the Googles, and the Facebook, that still had those borders. >> Well, don't put it, Cisco came out of that, 3Com, and those generations, but the hyper-scalers came out of the web. >> Yep. >> So I'm saying, well I'm saying, I want to get your reaction to, is I think that is such a small scale relative to blockchain and crypto because it's global, it's every industry, it's not just tech it's just like everything. So there's got to be new brands. Startups going to come out of the woodwork, that's my point. >> It's not yet time for the brands to come in. See that's the whole thing. So let's put it this way, the internet was there from 1978, if you really look at it, ARPANET or DARPA, those things were there. Email was there, but it was by 1997, or by the time we all came to know Google it was 2001. 
There is that gap between the brand forming, because it has to permeate first, more people have to use it, like what is the user-- >> Everything was was a bubble, but everything happened. I got food delivered to my house today, right? It happened, people were saying that's a crazy idea. >> It's now it's going on, right. So it's the timing and they know the time for it to permeate so here, how many people are using Bitcoin, and to do what? Most of them are just speculating right? There's very few real use case of remittance or speculative trading, that's what's happening. See that's what I said. The other use cases, it has to permeate. And that comes with more user adoption. And the user adoption initially is going to come from the speculation. >> I think it's a good sign, honestly I think it's a tell sign, because I remember when the web was new, I was in coming out right and growing in the industry. People were poo poo, oh that's just for kids. The big company's said, we wouldn't, who the hell is going to use the World Wide Web? Enter the search engines. >> I remember that like it was yesterday. I forget that I'm not a kid anymore, and I had the opportunity to be an entrepreneur during that era. One of the things I want to add is that, we had, I think what Nithin is really pointing out, it started with the infrastructure, you had network engineers and ISPs, you know, and email. But what was the enterprise application here? What was that consumer application, and that followed right? So it started infrastructure, then it evolved. Once we saw these applications, enterprises started to go crazy. Whether it was the Ubers of the world surfacing, or enterprises reinventing themselves, that's kind of the next wave. >> Well, this is why I think you're a good opportunity. 'Cause I remember licking stamps and sending out envelopes to get people to come to a seminar, held at a hotel. That's how you did it in the old world. The web replaced that with direct response. >> But there's some, there's something else-- >> The mainframe ran faster than the web. You're replacing an old loyalty, that's like licking the stamps. It's not about comparing what you're doing to something else. >> There's also something that helps, that we're not acknowledging, that really helped take internet from 1.0 to 2.0, it's Linux. You know I remember websites were insanely expensive. It was Windows servers, it was Sun Solaris, all of this crazy, expensive, server systems, that you needed to have, so the barrier of entry was extremely high. Then Linux came along, and you still needed to have your own data center space, and so still high, but the licensing fees kind of went away. >> And now with containers and Kubernetes-- >> Exactly. >> I made a bet I was going to get Kubernetes in a crypto show. >> Anybody from a bedroom could start a company, right? You could do it with your pajamas still on. >> John: Well orchestration's easier. >> Absolutely. So this has started, this really, revolution. Now you have blockchain and you start to introduce enterprise-grade blockchain technologies, it's the next wave, you know, it's not VoIP, it's value over IP. >> Okay, I'm going to ask both you guys a final question, to end this segment here at the block event. I know you guys want to get back, and I'm taking you anyway from the schmoozing and networking and the fun out there, deejay. Predictions, next year this time, what are we going to be? What's the we're going to look like? What's going to evolve? 
I mean we had a conversation with Richard, who partnered with you guys at Arcadia Crypto Partners, saying the trading things interesting, the liquidity has changed. What's your take? I want you guys both to take a minute to make a prediction. Next year, what's different, who's out, who's in, what's happening, is it growing? >> So I, you know, I would say this, surprisingly, CTOs, I love CTOs, but many CTOs, I would say that well above 50% of CTOs, still can't spell blockchain. Really, and what I mean by that, really understand the transformational power what this is, in terms of how this is really web 3.0. This is going to change so many industries, create so much value for consumers, help businesses and so forth, and we're going to cross that 50% mark. >> Next year. >> With CTOs-- >> 50% of what? Be clear on-- >> Basically, we're going, in terms of the net, that blockchain's going to capture, and really enterprises and not just enterprises, service providers and so forth-- >> 50% of the mind share or 50% of the projects? >> Yeah no, I'm talking it's, people aren't going to be saying, oh, blockchain, isn't that Bitcoin? They're going to really understand, and they're going to understand that impact. And over the course of the next 12 months, we're going to see that. And it starts, obviously in many cases, with the CIO, CTO of many companies. There are definitely a lot of CIOs and CTOs on the forefront of innovation that get it, but what I'm saying is that more than 50% don't. >> So you're saying-- They're very busy in doing what they're doing today, and it hasn't hit them yet. >> To recap, you're saying by next year, 50% of CTOs or CTO equivalents, will have a clear understanding of what blockchain is-- >> Absolutely. >> And what it can do. >> Absolutely. >> Nithin, your prediction, next year, this time, what's different, what's new, what's the prediction? >> So, one of the key things that I think is going to happen is there's going to be a lot more training, and knowledge that's going to spread out, so that a lot more people understand, what blockchain is and what bitcoin is. Even now, as Al said, he was telling about CTOs, if the CTOs are, that's the state, that they can't spell blockchain, imagine where the real common man is. You've got people like Jamie Dimon coming on TV and saying he doesn't like Bitcoin, but he likes blockchain. I'm like, what the heck is he saying? That he likes a database? >> He was selling it short 100% (chuckles) >> Yeah, he likes a database. And then you have Warren Buffett coming over there-- >> Rat poison. >> And then this is rat poison. And like my question is, does any of his funds buy gold? Do they buy gold? He was telling that this is only worth as much as the next buy buying at a higher price. >> What's Warren Buffett's best tech investment? >> I don't know, I think he bought Apple, he started buying Apple now, right? When it's reached a thousand bucks? Or it reached a trillion dollars or close to that, or 750 billion? >> The Apple buy was 2006. If you were there, then you were good. >> Yeah, but-- >> So, your prediction? >> Market wise I don't know, what's going to happen? I'm expecting this, the crypto, the utility token, or the crypto market, to be at least a six trillion dollar business. But it'll happen next year? Definitely not. But I've been proven wrong, like I was expecting it to happen by 2025, but then it went to 750 billion by December. Well, it's not too far. 
You did get the prediction right, in the Bahamas at POLYCON18, about the drop around the tax consequences of the-- >> Right. >> People slinging trades around, not knowing the tax consequences. >> Right, right. We don't know because, who knows? Because what is going on over there is, the IRS is still saying it's property. That's what the last (slurs) is. The SEC is saying it is a security, and the CFTC was saying it's a commodity. So what tax do I pay? >> Okay, lightning round question, 'cause I want to, one more popped in my head. The global landscape, from an investor standpoint, the US, we know what's going on in the US, accredited, SEC is throwing, firing across, bullets across the bow of the boats, kind of holding people in line. What percentage of US big investors will be overseas by next year? >> Percentage of-- >> Having, meaning having deals being done, proxy deals being done outside the US, what percentage? >> It's still going to be low though. That is going to be low, because that, I don't think the US investor, means the large scale of those investors-- >> You don't think the big funds will co-locate outside the US? >> There will be some, but not enough. >> Put a number, a percentage. >> Percentage-wise I think it's still going to be less than 10%. >> Al, your prediction? >> In terms of investment? >> Investment, investors saying hey, I got money here, I want to put it out there. >> Outside of the United States? >> Share money, not move their whole fund, but do deals from a vehicle. >> Do deals outside. I think I agree with Nithin. >> Throwing darts at the board here. >> No, I'm going to clarify. There's definitely massive investment happening overseas. In some respects probably bigger than the United States. So that's not going away. If anything that's going to grow. But your question is, in terms of US entities, making investments abroad, overseas investments, versus just domestic? I think that trend doesn't necessarily change. You have the venture community, there are certain bigger venture funds that can have global operations 'cause at the end of the day, they need to have global operations, to be able to do that, and most venture funds aren't that massive, they don't have that infrastructure. So they're going to focus on their own backyard. So I don't necessarily think blockchain changes the venture mindset. It's just easier for them logistically to do due diligence in their own backyard and invest in those. >> Guys, always a pleasure. Great to see you. You guys are like friends with entourage here, great to get the update here at Blockchain Week. When we get to Silicon Valley week, we'll connect up again. I'm John Furrier, here in New York, theCUBE's continuing coverage of crypto, decentralized applications, and blockchain of course, we're all over it. You'll see us all over, all of the web, all the shows. Thanks for watching. (techno music)
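One technical idea worth unpacking from the DigitalBits conversation is the atomic swap of loyalty points that Nithin mentions. The standard construction is a hashed-timelock exchange: each side locks its points behind the hash of a secret, claiming one leg reveals the secret that unlocks the other, and if nobody claims before a deadline both sides are refunded. The toy, in-memory Python sketch below illustrates only that mechanism; it is a generic illustration, not DigitalBits' actual contract or token model, and all names and amounts are made up.

```python
import hashlib
import time

class HashedTimelock:
    """Toy escrow: 'amount' points claimable by 'recipient' with the secret, refundable after 'deadline'."""
    def __init__(self, sender, recipient, amount, secret_hash, deadline):
        self.sender, self.recipient = sender, recipient
        self.amount, self.secret_hash, self.deadline = amount, secret_hash, deadline
        self.settled = False

    def claim(self, secret: bytes) -> bool:
        # Revealing the correct preimage releases the points to the recipient.
        if not self.settled and hashlib.sha256(secret).hexdigest() == self.secret_hash:
            self.settled = True
            print(f"{self.recipient} claims {self.amount} points from {self.sender}")
            return True
        return False

    def refund(self, now: float) -> bool:
        # If nobody claims before the deadline, the sender gets the points back.
        if not self.settled and now > self.deadline:
            self.settled = True
            print(f"{self.amount} points refunded to {self.sender}")
            return True
        return False

# Hypothetical example: Alice swaps 500 airline miles for Bob's 2,000 hotel points.
secret = b"alice-only-knows-this"
h = hashlib.sha256(secret).hexdigest()
now = time.time()
leg1 = HashedTimelock("alice", "bob", 500, h, deadline=now + 7200)   # Alice locks first, longer timeout
leg2 = HashedTimelock("bob", "alice", 2000, h, deadline=now + 3600)  # Bob locks against the same hash

leg2.claim(secret)  # Alice claims Bob's points, revealing the secret...
leg1.claim(secret)  # ...which Bob (or anyone watching) can now use to claim Alice's.
```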

Published Date : May 17 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Richard | PERSON | 0.99+
Nithin | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
John | PERSON | 0.99+
2001 | DATE | 0.99+
IBM | ORGANIZATION | 0.99+
Al | PERSON | 0.99+
Warren Buffett | PERSON | 0.99+
Al Burgio | PERSON | 0.99+
John Furrier | PERSON | 0.99+
SEC | ORGANIZATION | 0.99+
Arcadia Crypto Ventures | ORGANIZATION | 0.99+
1997 | DATE | 0.99+
New York | LOCATION | 0.99+
1978 | DATE | 0.99+
two days | QUANTITY | 0.99+
IRS | ORGANIZATION | 0.99+
50% | QUANTITY | 0.99+
Bahamas | LOCATION | 0.99+
Hilton | ORGANIZATION | 0.99+
Apple | ORGANIZATION | 0.99+
next year | DATE | 0.99+
New York City | LOCATION | 0.99+
100% | QUANTITY | 0.99+
Jamie Dimon | PERSON | 0.99+
750 billion | QUANTITY | 0.99+
30 million | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
two friends | QUANTITY | 0.99+
2006 | DATE | 0.99+
Next year | DATE | 0.99+
Marriott | ORGANIZATION | 0.99+
Linux | TITLE | 0.99+
Arcadia Crypto Partners | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
United States | LOCATION | 0.99+
December | DATE | 0.99+
tomorrow 9:00 a.m | DATE | 0.99+
both | QUANTITY | 0.99+
US | LOCATION | 0.99+
Nithin Eapen | PERSON | 0.99+
DigitalBits | ORGANIZATION | 0.99+
today | DATE | 0.99+
first round | QUANTITY | 0.99+
one | QUANTITY | 0.99+
2025 | DATE | 0.99+
Monday evening | DATE | 0.99+
Facebook | ORGANIZATION | 0.99+
more than 50% | QUANTITY | 0.99+
yesterday | DATE | 0.99+
less than 10% | QUANTITY | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
DigitalBits.io | ORGANIZATION | 0.98+
60 billion dollar | QUANTITY | 0.98+
30 million people | QUANTITY | 0.98+
Blockchain Week | EVENT | 0.98+
first | QUANTITY | 0.98+

Hartej Sawhney, Pink Sky Capital & Hosho.io | Polycon 2018


 

>> Narrator: Live from Nassau in the Bahamas. It's The Cube! Covering PolyCon 18. Brought to you by PolyMath. >> Welcome back everyone, we're live here in the Bahamas with The Cube's exclusive coverage of PolyCon 18, I'm John Furrier with my co-host Dave Vellante, both co-founders of SiliconANGLE. We start our coverage of the crypto-currency ICO, blockchain, decentralized world internet that it is becoming. It's the beginning of our tour, 2018. Our next guest is Hartej Sawhney who's the advisor at Pink Sky Capital, but also the co-founder of Hosho.io. Welcome to The Cube. >> Thank you so much. >> Hey thanks for coming on. Thanks for coming on. >> Thanks guys. >> We had a great chat last night, and you do some real good work. You're one of the smartest guys in the business. Got a great reputation. A lot of good stuff going on. So, take a minute to talk about who you are, what you're working on, what you're doing, and the projects you're involved in. >> So first of all, thank you so much for having me, it's really exciting to see the progress of high-quality content being created in the space. So my name is Hartej Sawhney. We have a team based in Las Vegas. I've been based in Las Vegas for about five years. But I was born and raised in central New Jersey, in Princeton. And my co-founder is Yo Sup Quan. We started this company about seven months ago and my co-founder's background was he's the co-founder of Coin Sighter in Exchange out of New York, which exited to Kraken. After that he started Launch Key which exited to Iovation. And prior to this company, my previous company was Zuldi, Z-U-L-D-I .com where we had a mobile point of sale system specifically for high volume food and beverage companies and businesses. So we were focused on Fintech and mobile point of sale and payment processing. So both of us have a unique background in both Fintech and cyber-security and my co-founder Yo, he's a managing partner of a crypto hedge fund named Pink Sky Capital. And he was doing diligence for Pink Sky, and he realized that the quality of the smart contracts he was seeing for deals that he wanted to participate as an investor in, and I'm an advisor in that hedge fund, we both realized that essentially the quality of these smart contracts is extremely low. And that there was nobody in this space that we saw laser focused on just blockchain security. And all the solutions that would be entailed in there. And so we began focusing on just auditing smart contracts, doing a line-by-line code review of each smart contract that's written, conducting a GAS analysis, and conducting a static analysis, making sure that the smart contract does what the white paper says, and then putting a seal of approval on that smart contract to mitigate risk. So that the code has not been changed once we've done an analysis of it, that there's no security vulnerabilities in this code, and that we can mitigate the risks for exchanges and for investors that someone has done a thorough code analysis of this. That there's no chance that this is going to be hacked, that money won't be stolen, money won't be lost, and that there's no chance of a security vulnerability on this. And we put our company's name and reputation on this. >> And what was the problem that is the alternative to that? Was there just poorly written code? Was it updated code? Was it gas was too expensive? They were doing off-chain transactions. I mean what are some of the dynamics that lead you guys down this path? I mean this makes sense. 
You're kind of underwriting the code, or you're insuring it, or I don't know what you call it, but essentially verifying it. What was the problem? And what were some of the use cases of problems? >> I would say that the underlying problem today in this whole industry, of the blockchain space, is that the most commonly found blockchain is Ethereum. The language behind Ethereum is called Solidity. Solidity is a brand new software language, and very few people in the world are proficient programmers in Solidity. On top of that, Solidity is updated, as a language, on a weekly basis. So there are a very limited number of engineers in the world who are full-stack engineers, that have studied and understand Solidity, that have a security background, and have a QA mindset. Everything that I just said does exist on this Earth today, and if it does, there's a chance that that person has made too much money to want to get out of bed. Because Ethereum's price has gone up. So the quality of smart contracts that we're seeing being written by even development shops, the developers building them are actually not full-stack engineers, they're web developers who have learned the language Solidity, and so thus we believe that the quality of the code has been significantly low. We're finding lots of critical vulnerabilities. In fact, 100% of the time that Hosho has audited code for a smart contract, we have found at least a couple of vulnerabilities. Even as the second or the third auditor after other companies conduct an audit, we always find a vulnerability. >> And is it correct that Solidity is much easier to work with than, say, Bitcoin's scripting language, so you can do a lot more with it, so you're getting a lot more, I don't want to say rogue code, but maybe that's what it is. Is that right? Is that the nature of Ethereum? >> Compared to Bitcoin script, yes. But compared to JavaScript, no. Because Fortune 500 companies have rooms full of Java engineers, Java developers. And now the newer blockchains are being written in JavaScript, right? So you have IBM's Hyperledger program, you have EOS, you have ICX, Cardano, Stellar, Waves, Neo, there's so many new projects that are coming, and all of them are flexing about the same thing. Including Rootstock, RSK. RSK is a project where they're allowing smart contracts to be tied to the Bitcoin blockchain for the first time ever. Right, so Fortune 500 companies may take advantage of the fact that they have Java developers already, that already work for them, who could easily write to a new blockchain, and possibly these new blockchains are more enterprise grade and able to take more institutional capital. But only time will tell. And us as the auditor, we want to see more code from these newer blockchains, and we want to see more developers actually put in commits. Because what matters the most is where the developers are putting in commits, and right now the maximum number of developers are on the Ethereum blockchain. >> Is that, the numbers I mean. Just take a step there. So the Ethereum blockchain. Percentage of developers vis-a-vis other platforms, percentages-- >> By far the most is developed on Ethereum. >> And in terms of code, obviously the efficiencies that are not yet realized, 'cause there's not enough cycles of coding going on, it's evolution, right? >> Yes. >> Seems to be the problem, wouldn't you say? So a combination of full-stack developer requirements, >> Yes.
>> To people who aren't proficient in all levels of the stack. >> Yes. >> Just are inefficient in the coding. It's not a ding on the developers, it's just they're writing code and they miss something, right? Or maybe they're not sufficient in the language-- >> It's a new language. The functions are being updated on a weekly basis, so sometimes you copied and pasted a part of another contract, that came from a very sophisticated project, so they'll say to us, well we copied and pasted this portion from EOS, so it should be great. But what that's leading to is either A, they're using a function that's now outdated, or B, by copying and pasting someone else's code from their smart contract, this smart contract is no longer doing what you intended it to do. >> So now Hartej, how much of your capability is human versus machine? >> Yeah I was going to ask that. >> ML, AI type stuff? >> So we're increasingly becoming automated, but because of the over, there's so much demand in the space. And we've had so much demand to consistently conduct audits, it's tough to pull my engineers away from conducting an audit to work on the tooling to automate the audit, right? And so we are building a lot of proprietary tooling to speed up the process, to automate conducting a GAS analysis, where we make sure you're not clogging up the blockchain by using too much GAS. Static analysis, we're trying to automate that as fast as possible. But what's a bit more difficult to automate, at least right now, is when we have a qualified full-stack engineer read the white paper or the source of truth and make sure the smart contract actually does it, that is, it's a bit longer tail where you're leveraging machine learning and AI to make that fully automated. (talking over each other) >> But maybe is that, I'm sorry John. Is that the long term model or do you think you can actually, I mean there's people that say augmented intelligence is going to be a combination of humans and machines, what do you think? >> I think it's going to be a combination for a long time. Every single day that we audit code, our process gets faster and faster and faster because once we find a vulnerability, finding that same vulnerability next time will be faster and easier and faster and easier. And so as time goes on, we see it as, since the bundle of our work today is ICOs, token generation events, there are ERC 20 tokens on the Ethereum blockchain. And we don't know how long this party will last. Like maybe in a couple years or a couple months, we have a big twist in the ICO space that the numbers will drastically go down. The long tail of Hosho's business for us, is to keep track of people writing smart contracts, period. But we think they are going to become more functional smart contracts where the entire business is on a smart contract and they've cut out sophisticated middle men. Right and it may be less ICOs, and in those cases I mean, if you're a publicly traded company, and you're going from R&D phase where you wrote a smart contract and now actually going to deploy it, I think the publicly traded company's going to do three to five audits. They're going to do multiple audits and take security as a very major concern. And in the space today, security is not being discussed nearly as much as it should. We have the best hedge funds cutting checks into companies, before the smart contract is even written, let alone audited. 
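The automated checks Hartej describes, gas analysis, static analysis, flagging outdated or copy-pasted constructs, can be pictured with a deliberately naive sketch. This is illustrative only; the patterns below are assumptions chosen for the example, not Hosho's actual rule set, and real audit tooling goes far deeper than pattern matching:

```python
import re

# Naive illustrative scan for Solidity constructs that are deprecated or
# commonly flagged in audits. Real tooling does control-flow analysis, gas
# profiling, and spec checking; this only grep-matches a few known patterns.
SUSPECT_PATTERNS = {
    r"\bthrow\b": "deprecated 'throw'; use require()/revert() instead",
    r"\bsuicide\s*\(": "deprecated 'suicide'; renamed to selfdestruct()",
    r"\btx\.origin\b": "tx.origin used for auth; vulnerable to phishing contracts",
    r"\.call\.value\s*\(": "raw call.value(); check reentrancy and return value",
    r"\bblock\.timestamp\b": "timestamp dependence; miners can skew it slightly",
}

def scan_contract(source: str):
    """Return (line_number, message) pairs for lines matching a suspect pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in SUSPECT_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

if __name__ == "__main__":
    sample = "function payout() { if (msg.sender != owner) throw; }"
    for lineno, message in scan_contract(sample):
        print(f"line {lineno}: {message}")
```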
And so we're trying to partner with all the biggest hedge funds and tell the hedge funds to mandate that if you cut a check into a company that is going to do a token generation event, that they need to guarantee that they're going to at least value security, both in-house for the company and for the smart contract that's going to be written. >> How much do you charge for this? I mean just ballpark. Is it a range of purchase price, sales price? What's the average engagement go for, is it on a scope of work? Statement of work? Or is it license? I mean how does it work? >> So first it depends is it a penetration test of the website or the exchange? Penetration testing of exchanges are far more complex than just a website. Or if it's a smart contract audit, is it an ICO or is it a functional smart contract? In either case for the smart contract audit, we have to build a long set of custom tooling to attack each and every smart contract. So it's definitely very case-by-case. But a ballpark that we could maybe give is somewhere around the lines of 10 to 15 thousand dollars per 100 lines of functional code. And we ask for about three weeks of lead time for both a smart contract audit and a penetration test. And surprisingly in this space, some of the highest caliber companies and high caliber projects with the best teams, are coming to us far too late to get a security audit and a penetration test. So after months of fundraising and a private pre-sale and another pre-sale, and going and throwing parties and events and conferences to increase the excitement for participating in their token sale, what we think is the most important part, the security audit for a smart contract is left to the last week before your ICO. And a ridiculous number of companies are coming to us within seven days of the token sale, >> John: Scrambling. >> Scrambling, and we're saying but we've seen you at seven conferences, I think that we need to delay your ICO by two or three weeks. We can assure you that all of your investors will say thank you for valuing security, because this is irreversible. Once this goes live and the smart contract is deployed. >> Horse is out of the barn. >> It's irreversible. >> Right right. >> And once we seal the code, no one should touch it. >> It's always the case with security, it's bolted on at the last minute. >> It's like back road recovery too, oh we'll just back it up. It's an architectural decision we should have made that months ago. So question for you, the smart contract, because again I'm just getting my wires crossed, 'cause there's levels of smart contracts. So if we, hypothetical ICO or we're doing smart contracts for our audience that's going to come out soon. But see that's more transactional. There's security token sales, >> Yes. >> That are essentially, can be ERC 20 tokens, and that's not huge numbers. It could be big, but not massive. Not a lot transaction costs. That's a contract, right? That's a smart contract? >> People are writing smart contracts to conduct a token generational event, most commonly for an ERC 20 token, that's correct. >> Okay so that's the big, I call that the big enchilada. That's the big-- >> Right now that is the most important, the most common. >> Okay so as you go in the future, I can envision a day where in our community, people going to be doing smart contracts peer-to-peer. >> Sure. >> How does that work? Is that a boiler plate? Is is audited, then it's going to be audited every time? Do the smart contracts get smaller? 
I mean what's your vision on that? Because we are envisioning a day where people in our audience will say hey Hartej, let's do a white paper together, let's write it together, have a handshake, do a smart contract click, click. Lock it in. And charge a dollar a download, get a million downloads, we split it. >> I envision a day where you can have a more drag and drop smart contract and not need a technical developer to be a full-stack engineer to have to write your smart contract. Yes I totally envision that day. >> John: But that's not today. >> We are very far from that today. >> Dave, kill that project. >> We're so far, we're very far from that. We're light years far from that. >> Okay well look. If we can't eliminate the full-stack engineers, I'm okay with that. Can we eliminate the lawyers? At least minimize them. >> We can minimize them possibly, but we have five stacks of lawyers for our company, I don't see them going anywhere. We need lawyers all the time. >> I see that in the press sometimes, yeah it's going to get disrupted. I don't see it happening. Okay we were having a great conversation off-camera about what makes a good ICO. You see, you have a huge observation space. And you were very opinionated. A lot of companies are out there just floating a token because they're trying to raise money. And they could do the same thing with Ethereum or Bitcoin. >> That's correct. >> Your thoughts? >> My thoughts are that it's very important for companies who are sophisticated, I think, to start by giving away a little bit of equity in the business. And that if you want to be in the blockchain space, and you really firmly believe you have a model to have a token within a decentralized application, I would still start by finding quality investors in the space, in the world. They might be still in Silicon Valley. Silicon Valley didn't just disappear overnight now that the blockchain is out. I am all for the fact that Silicon Valley no longer has as much of a grip on tech because of their blockchain world. And they're not seeing as much deal flow, and there's not as much reliance on venture capitalists, that's exciting to me. But let's not forget the value, that top-tier VCs like Andreessen Horowitz and Vinod Khosla. and Fintech VCs like Commerce Ventures and Nyca Partners in New York, Propel VC, these are good Fintech VC arms that continue to time and time again add immense value to companies. >> And they have networks. They add value. >> They have strong-valued networks, but they're just not going to disappear. And those VCs, if they've invested into a company, took a board seat, fostered their growth, taught them what it means to actually be a real business that's growing at 7-15% week over week, maybe two years down the line, after they've given away a board seat to someone like Nyca Partners, I would be interested in understanding what your token economics look like. Now that you have a revenue generating business, how you've placed a token model into this already running business that makes 25 to 50 grand a month and you have a team of 10, self-sustaining themselves off of revenue. Much more intriguing of a conversation. What's happening today in the space is, hey my buddy Jim and Steve and I came up with an idea for this business. There's going to be a token, and we're starting a private pre-sale tomorrow. I'm going to give you 300% bonus and will you be my advisor? And they're going to start raising capital because of an idea. 
You know what we used to say in the Silicon Valley startup world, you can raise on just a PowerPoint. I think in the blockchain world, you could raise on just an idea? And then maybe a white paper? And the white paper is one page? And so you've raised a bunch of capital, you have a white paper. >> Now you got to build it. >> Now you got to build, you got to write a smart contract, you got to build it, you got to do it, and then everyone loses excitement and it goes back to our previous conversation the development talent. So, another thing not being discussed in the space is company employee retention, right? So if you have a growing number of ICOs, that have very large budgets because investors have found a way to sink millions of dollars into a company early, you've got $5 million in the hands of a company to start, well this company can afford to pay someone a very ridiculous salary to come join them to write the smart contract now. So they could offer an engineer 500 Eth a month to come join them for three months. So you have good engineers just bouncing from one ICO to the next and as soon as the ICO goes live, they quit. This is a problem to companies who are-- >> It's migration, out migration. >> How do you retain, even capital? >> Companies like Hosho, ShapeShift, companies that are selling picks and shovels of the industry, that want to be household names in the space, we have to really think about how we're going to retain our employees in the space. >> So the recruitment and bringing on the new generation, we were also talking off camera about Bill Tye and the younger generation and kind of riffing on the notion that, because there is a new set of mission-driven developers and builders, on the business side as well. Your thoughts and reaction to what you see and what you see that's good and what you see that we need more of? >> So the most powerful thing in the blockchain space that I think is so exciting is that you have a lot of people between the age of 25 and 35 that don't come from money, that didn't go to Stanford, didn't go to Y Combinator, they're probably not white, from-- >> John: Ivy League schools. >> Ivy League schools. I'm not trying to make it about race, but if you're a white male and went to Stanford and went to Y Combinator, chances of you raising VC money on sand hill are a lot higher, right? And you have a guy looking like me who didn't go to Stanford, doesn't come from money, running up and down sand hill, I have personally faced that battle and it wasn't easy. And we were based in Vegas and so being based in Vegas, I'd also have to deal with so why do you live in Vegas? When are you going to move to Silicon Valley? And if we invest in you, you're going to open an office in sand hill right? And now in the blockchain world, what's exciting is you have so many heavy-hitters running as founders, some of the most successful companies in the space, who don't come from money and a big prestigious background, but they're honest, they're hard-working, they're putting in 12 to 15 hours of work every single day, seven days a week. And to space, six weeks is like six years. And we all have a level of trust that goes back to times when we were all running struggling startups. And so our bond is, to me, even more significant than what must have been between Keith Rabois and Peter Thiel in the PayPal Mafia. 
We have our own mafias being formed of much stronger bonds of younger people who will be able to share much more significant deal flow so if the PayPal Mafia was able to join forces to punch out companies like eBay and Square, wait 'til companies in this space, we have young, heavy-hitters right now who are non-reliant on some of the more traditional older folks. Wait 'til you see what happens in the next couple years. >> Hartej, great conversation. And I want to get one more question in. We've seen Keiretsu Forum, mafias, teams more than ever as community becomes an integral part of vetting and by the way trust, you have unwritten rules. I mean baseball, Dave and I used to do sports analogies. >> Self-governance. >> Reggie Jackson talked about unwritten rules and it works. If you beam the batter, the other guy, your best star, your side's going to get beamed. That's an unwritten rule. These are what keeps things going, balanced through the course of a season. What are the unwritten rules in the Ethos right now? >> Honesty, transparency, and that's the key. We need self-governance. This is a very unregulated market. There's rules being broken by people who are ignorant to the rules. The most common rule I've seen being broken is by people who are not broker dealers, running around fundraising capital, they don't even know what an institutional advisor license is. They don't know what a Series 7 and a Series 63 is. I asked a guy just last night, he said I'm pooling capital, I'm syndicating, let me know if you want in on the deal. And I said when did you take your Series 7? He goes what's that? Get away from me. You're an American, you need to look up what US securities laws are and make sure that you're playing by the rules and if someone who doesn't know the rules has entered our inner circle of investors, of advisors, of people sharing deal flow, we have a good network of people that are closing the loop for companies, whether it's lawyers, investors, exchanges, security auditors, people who write smart contracts, dev shops, people who write white papers, PR marketing, people who do the road show, there's a full circle-- >> So people are actually doing work to put into the community, to know your neighbor if you will, know the deals that are going down, to identify potential trip wires that are being established by either bad actors or-- >> KYC, AML, this is a new space that's also attracting people that have a criminal background. Right? And that's just a harsh reality of the space. That in the United States if you have a felony on your record, maybe getting a job has become really difficult and you figured let's do an ICO, no one's going to check my record. That is a reality of the space. Another reality is the money that was invested into this entire ICO clean. Right, that's a massive issue for the US government right now. It's been less than 15 hours since the SEC has issued actually subpoenas to people on this exact topic, today. >> This is a great topic, we'd like to do more on. >> Dozens of them. >> We'd like to continue to keep in touch with you on The Cube. Obviously you're welcome anytime, loved your insight. Certainly we'd love to have you be an advisor on our mission, you're welcome anytime. >> For sure, let's talk about it. Come out to Las Vegas. Hosho's always happy to host you. >> John And Dave: We're there all the time. >> The Cube lives at the sands. >> It's our second home. >> Come by Hosho's office and let us know. Vegas is our home. 
We are hosting a conference in Vegas after DEFCON. So DEFCON is the biggest security conference in the world. You have the best black hats and white hats show up as security experts in Vegas and right on the tail end of it, Hosho's going to host a very exclusive invite-only conference. >> What's it called? Just Hosho Conference? >> Just Blockchain. It'll be called the just, it'll be by the Just Blockchain Group and Hosho's the main backer behind it. >> Well we appreciate your integrity and your sharing here on The Cube, and again you're paying it forward in the community, that's great. Ethos we love that. That's our mission here, paying it forward content. Here in the Bahamas. Live coverage here at PolyCon 18. We're talking about securitized token, a decentralized future for awesome things happening. I'm Jeff Furrier, Dave Vellante. We'll be back with more after this short break. (upbeat music)
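One idea worth pinning down from the conversation is the audit "seal": once code has been reviewed, nothing should change between the audit and deployment. A minimal sketch of that fingerprint-and-verify step, assuming the audited and deployed bytecode are already in hand as raw bytes (fetching deployed bytecode from a node is out of scope here):

```python
import hashlib

# Minimal sketch of the "seal" idea: record a fingerprint of the audited
# bytecode and refuse to bless anything that differs afterwards.
def seal(bytecode: bytes) -> str:
    return hashlib.sha256(bytecode).hexdigest()

def verify(deployed_bytecode: bytes, recorded_seal: str) -> bool:
    return seal(deployed_bytecode) == recorded_seal

audited = b"\x60\x80\x60\x40"  # stand-in for real compiled bytecode
recorded = seal(audited)

print("seal:", recorded)
print("unchanged:", verify(audited, recorded))            # True
print("tampered:", verify(audited + b"\x00", recorded))   # False
```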

Published Date : Mar 2 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Hartej Sawhney | PERSON | 0.99+
Reggie Jackson | PERSON | 0.99+
Jeff Furrier | PERSON | 0.99+
Pink Sky | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
Vegas | LOCATION | 0.99+
Bill Tye | PERSON | 0.99+
John | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Hosho | ORGANIZATION | 0.99+
Nyca Partners | ORGANIZATION | 0.99+
$5 million | QUANTITY | 0.99+
Silicon Valley | LOCATION | 0.99+
eBay | ORGANIZATION | 0.99+
12 | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
100% | QUANTITY | 0.99+
Jim | PERSON | 0.99+
two | QUANTITY | 0.99+
New York | LOCATION | 0.99+
Pink Sky Capital | ORGANIZATION | 0.99+
six years | QUANTITY | 0.99+
2018 | DATE | 0.99+
John Furrier | PERSON | 0.99+
Peter Thiel | PERSON | 0.99+
Princeton | LOCATION | 0.99+
Bahamas | LOCATION | 0.99+
three months | QUANTITY | 0.99+
25 | QUANTITY | 0.99+
six weeks | QUANTITY | 0.99+
300% | QUANTITY | 0.99+
Steve | PERSON | 0.99+
one page | QUANTITY | 0.99+
ShapeShift | ORGANIZATION | 0.99+
third auditor | QUANTITY | 0.99+
SEC | ORGANIZATION | 0.99+
three | QUANTITY | 0.99+
Square | ORGANIZATION | 0.99+
United States | LOCATION | 0.99+
seven days | QUANTITY | 0.99+
Hosho.io | ORGANIZATION | 0.99+
two years | QUANTITY | 0.99+
today | DATE | 0.99+
Commerce Ventures | ORGANIZATION | 0.99+
Keith Rabois | PERSON | 0.99+
35 | QUANTITY | 0.99+
10 | QUANTITY | 0.99+
three weeks | QUANTITY | 0.99+
Kraken | ORGANIZATION | 0.99+
five stacks | QUANTITY | 0.99+
PolyMath | ORGANIZATION | 0.99+
last week | DATE | 0.99+
DEFCON | EVENT | 0.99+
Zuldi | ORGANIZATION | 0.99+
15 hours | QUANTITY | 0.99+
less than 15 hours | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Earth | LOCATION | 0.99+
seven conferences | QUANTITY | 0.99+
Ivy League | ORGANIZATION | 0.99+
second home | QUANTITY | 0.98+
Java | TITLE | 0.98+
tomorrow | DATE | 0.98+
first time | QUANTITY | 0.98+
last night | DATE | 0.98+
five audits | QUANTITY | 0.98+
7-15% | QUANTITY | 0.98+
US | LOCATION | 0.98+

Sam Ramji, Google Cloud Platform | VMworld 2017


 

>> Welcome to our presentation here at VM World 2017. I'm John Furrier, co-host of The Cube, with Dave Vellante who's taking a lunch break. We are at VM World on the ground on the floor where we have Google's vice president of product management developer platforms Sam Ramji. Welcome to The Cube conversation. >> Great, thank you very much John. >> So you had a keynote this morning. You know, came up on stage, big announcement. Let's get right to it. That container as a service from Pivotal, VM Ware, and Google announced kind of a joint announcement. It was kind of weird. It wasn't a fully joint but it really came from Pivotal. Clarify what the announcement was. >> Sure, so what we announced is the result of a bunch of co-engineering that we've been doing in the open source with Pivotal around kubernetes running on bosh. So, if you've been paying attention to cloud foundry, you'd know that cloud foundry is the runtime layer and there's something called bosh sitting underneath it that does the cluster management and cluster operations. Pivotal is bringing that to commercial GA later this year. So what we announced with Pivotal and VMWare is that we're going to have cost incompatibility between Pivotal's kubernetes and Google's kubernetes. Google's kubernetes service is called Google Container Engine Pivotal's offering is called Pivotal Container Service. The big deal here is that PKS is going to be the standard way that you can get kubernetes from any of the Dell Group companies, whether that's VMWare, EMC. That gives us one consistent target for compatibility because one of the things that I pointed out in the keynote was inconsistency is the enemy in the data center. That's what makes operations difficult. >> And Kubo was announced at Cloud Foundry, Stu Miniman covered it, but that wasn't commercially available. That's the nuance, right? >> That's right, and that still is available in the open source. So what we've committed to is, we've said, every time that we update Google Container Engine, Pivotal Container Service is also going to update, so we have constant compatibility, that that's delivered on top of VMWare's infrastructure including NSX for networking and then the final twist is a big reason why people choose Google Cloud is because of our services. So Big Table, Big Query, a dynamically scaling data warehouse that we run an enormous amount of Google workloads on. Spanner, right. Which is why all of your data is consisted globally across Google's planet scaled data centers. And finally, all of our new machine learning and AI investments, those services will be delivered down to Pivotal Container Service, right, that's going to be there out of the box at launch and we'll keep adding to that catalog. >> It's just that Google Next was a lot of conversations, Oh Google's catching up to Amazon, Amazon's done a great job no doubt about it. We love Amazon. Andy Jassy was here as well. >> Super capable very competent engineering team. >> There's a lot of workloads in VMWare community that runs on AWS but it's not the only game in town. Jerry Chen, investor in Docker, friend of ours, we know, called this years ago. It's not going to be a one cloud winner take all game. Clearly. But there's the big three lining up, AWS, Microsoft, Google, you guys are doing great. So I got to ask you, what is the biggest misconception that people have about Google Cloud out in the market? 
'Cause a lot of enterprises are used to running ops, maybe not as much dev as there is ops, and dev ops comes in with cloud native, there's a lot of confusion, what is the thing that you'd like to clarify about Google that they may not know about? >> The single most important thing to clarify about Google Cloud is our strategy is open-hybrid cloud. We think that we are in an amazing place to run workloads, we also recognize that compute belongs everywhere. We think that the durable state of computing is more of a mosaic than a uni-directional arrow that says everything goes to cloud. We think you want to run your containers and your VM's in clouds. We think you want to run them in your data centers. We also think you want to move them around. So we've been diehard committed to building out the open-source projects, the protocols to let all of that information flow, and then providing services that can get anywhere. So open-hybrid cloud is the strategy, and that's what we've committed to with kubernetes, with tensorflow, with apache beam, with so much of the open-source that we've contributed to Linux and others, and then maintaining open standards compatibility for our services. >> Well, it's great to see you at Google because I know your history, great open source guy, you know open source, it's been really part of your life, and bringing that to Google's great, so congratulations. >> There's a reason for that though, it's pragmatic. This is not a crazy crusade. The value of open source is giving control to the customer. And I think that the most ethical way that you can build businesses and markets is based on customer choice. Giving them the ability to move to where they want. Reducing their costs of switching. If they stay with you, then you're really producing a value-added service. So I've spent time in the operator shoes, in the developer shoes, and in the vendor shoes. When I've spent time buying and running the software on my own, I really always valued and preferred things that would let me move my stuff around. I preferred open source. So that's really the method to the madness here. It's not about opening everything up insanely, giving everything away. It serves customers better and in the long run, the better you serve customers, you'll build a winning business. >> We're here on the ground floor at VMWorld 2017 in Las Vegas, where behind us is the VM Village. And obviously Sam was on stage with the big announcement with Pivotal VMWare. And this is kind of important now, we got to debate now, usually I'm not the contrarian in the group, I'm usually the guy who's like yeah, rah rah, entrepreneurial, optimistic, yeah we can do that! You know that future's here, go to the future! But I was kind of skeptical and I told VMWare and I saw Pat Gelsinger and Michael Dell in the hallways and I'm like, they thought this was going to be the big announcement, and it was their big announcement, but I was kind of like, guys, I mean, it's the long game, these guys in the VMWare community, their operations guys, their not going to connect the dots and there was kind of an applause but not a standing ovation that Google would've gotten at a Google Next conference where the geeks would've been like going crazy. What is the operational dynamic that you're seeing in this market that Google's looking at and bringing value to, so that's the question for you. 
>> This is what the big change in the industry is is going from only worrying about increasing application velocity to figuring out how to do that with reliability. So there's a whole community of operators that I think many of us have left behind as we've talked about clouds and cloud data. We've done a great job of appealing to developers, enabling them to be more productive, but with operators, we've kind of said, well, your mileage may vary or we don't have time for you, or you have to figure it out yourself. I think the next big phase in adoption of cloud native technology is to say, first of all, open-hybrid, run your stuff wherever you want. >> Well you've got to have experience running cloud. Now you bring that knowledge out here. >> And that's the next piece. How do we offer you the tools and the skills that you need as an operator to have that same consistency, those same guarantees you used to have, and move everything forward in the future? Because if you turn one audience, one community, into the bad people who are holding everything back, that's a losing proposition, you have to give everybody a path to win, right? Everybody wants to be the good guy. So I think, now we need to start paying really close attention to operators and be approachable, right? I would like to see GCP become the most approachable cloud. We're already well known as the most advanced cloud. But can we be the easiest to adopt as well, and that's our challenge, to get the experience. >> You got to get that touch, that these enterprise teams historically have had, but it's interesting I mean, the mosaic you'd mentioned requires some unification, right? You got to be likable. You got to be approachable. And that's where you guys are going, I know you guys are building out for that, but the question is, for you, because Google has a lot of experience, and I know from personal knowledge Google's depth of people and talent, not always the cleanest execution out to the market in terms of the front-facing white glove service that some of these other companies have done, but you guys are certainly strong. >> Well, I think this is where Diane Greene has been driving the transformation, I mean like, she breathes, eats, sleeps, dreams enterprise. So, being both a board member at Google and being the SVP of Google Cloud, she's really bringing the discipline to say, you know, white glove service is mandatory. We have a pretty substantial professional services organization and building out partnerships with Accenture, with PWC, with Deloitte, with everyone to make sure that these things are all serviceable and properly packaged all the way down to the end user. So, no doubt there's more, more room for us to improve, there's miles to go on the journey, but the focus and the drive to make sure that we're delivering the enterprise requirements, Dianne never lets us stop thinking about that. >> It's like math, right, the order of operations is super important, and there's a lot of stuff going on in the cloud right now that's complex. >> Yes. >> Ease of use is the number one thing that we're hearing, because one, it's a moving a train in general, right? But the cloud's growing, a lot of complexity, how do you guys view that? And the question I want to ask you is, we know what cloud looks like today. Amazon, they're doing great. Multi-horse race if you will. But in 2022, the expectations and what it looks like then is going to be completely different, if you just take the trajectory of what's happening. 
So cleaning up kubernetes, making that a manageable, all the self updates, makes a lot of sense, and I think that's the dots no one's connecting here, I get the long game, but what's the customer's view in your opinion as someone who's sitting back and with the Google perch looking out over the horizon, 2022, what's it like for the customer? >> That's an outstanding question. So I think, 2022, looking back, we've actually absorbed so much of this complexity that we can provide ease of use to every workload and to every segment. Backing into that, ease of use looks different, like, let's think about tooling, ease of use looks different to an electrician verus a carpenter versus a plumber. They're doing different jobs, they need different tools, so I think about those as different audiences and different workloads. So if you're trying to migrate virtual machines to a cloud, ease of use means a thing and it includes taking care of the networking layer, how do we make sure that our cloud network shows up like an on premises network, and you don't have to set up some weird VPC configuration, how can those just look like part of your LAN subject to your same security controls. That's a whole path of engineering for a particular division of the company. For a different division of the company focused on databases ease of use is wow, I've got this enormous database, I'm straining at the edges, how do I move that to the cloud? Well, what kind of database is it, right? Is it a SQL database? Is it a NoSQL database? So engineering that in, that's the key. The other thing that we have to do for ease of use is upscaling. So a lot of things that we talked about before are the need to drive IT efficiency through automation. But who's going to teach people how to do the automation especially while they're being held to a very high SLA standard for their own data center and held to a high standard for velocity movement to the cloud. This is where Google has invented a discipline called SRE or site reliability engineering, and it's basically the meta discipline around what many people call dev ops. We think that this is absolutely teachable, it's learnable, it's becoming a growing community. You can get O'Reilly books on the topics. So I think we have an accountability to the industry to go and teach every operator and every operating group, hey here's what SRE looks like, some of your folks might want to do this, because that will give you the lift to make all of these workloads much easier to manage 'cause it's not just about velocity, it's also about reliability. >> It's interesting, we've got about a minute left or so. I'm just going to get your thoughts on this because you've certainly seen it on the developer side, stack wars, whatever you want to call them, the my stack runs this tech, but last night I heard in the hallway here multiple times the general consensus of two stacks coming together, not just software stacks, hardware stacks, you're seeing things that have never run together or been tested together before. So the site reliability is a very interesting concept and developers get pissed off when stacks don't work, right? So this is a super kind of nuance in this new use case that are emerging because stuff's happened that's never been done before. >> Yeah, so this is where the common tutorials get really interesting, especially as we build out a planetary scale computer at Google. 
Right, we're no longer thinking about how does the GPU as part of your daughter board, we think about what about racks of GPU's as part of your datacenters using NVDIA K80's, what does it mean to have 180 teraflops of tensor processing capability in a cloud TPU. So getting container centric is crucial and making it really easy to attach to all of those devices by having open source drivers making sure they're all Linux compatible and developers can get to them is going to be part of the substrate to make sure that application developers can target those devices, operators can set a policy that say, yes, I want this to deploy preferentially to environments with a TPU or a GPU and that the whole system can just work and be operable. >> Great, Sam thanks so much for taking the time to stop by. One on one conversation with Sam Ramji who's a Google Cloud, he's a vice president of product management and developer platforms for Google. We'll see you at Google Next. Thanks for spending the time. I'm John Furrier, thanks for watching. >> Thank you John.

Published Date : Aug 29 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Sam Ramji | PERSON | 0.99+
Jerry Chen | PERSON | 0.99+
Dianne | PERSON | 0.99+
PWC | ORGANIZATION | 0.99+
Deloitte | ORGANIZATION | 0.99+
Andy Jassy | PERSON | 0.99+
Diane | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Accenture | ORGANIZATION | 0.99+
Sam | PERSON | 0.99+
Las Vegas | LOCATION | 0.99+
Stu Miniman | PERSON | 0.99+
Pivotal | ORGANIZATION | 0.99+
Michael Dell | PERSON | 0.99+
2022 | DATE | 0.99+
Pat Gelsinger | PERSON | 0.99+
EMC | ORGANIZATION | 0.99+
O'Reilly | ORGANIZATION | 0.99+
Linux | TITLE | 0.99+
Greene | PERSON | 0.99+
one | QUANTITY | 0.99+
single | QUANTITY | 0.98+
VM Ware | ORGANIZATION | 0.98+
180 teraflops | QUANTITY | 0.98+
VMWorld 2017 | EVENT | 0.98+
Dell Group | ORGANIZATION | 0.98+
both | QUANTITY | 0.97+
Cloud Foundry | ORGANIZATION | 0.97+
PKS | ORGANIZATION | 0.97+
two stacks | QUANTITY | 0.96+
VMworld | EVENT | 0.96+
The Cube | ORGANIZATION | 0.96+
VM World 2017 | EVENT | 0.96+
last night | DATE | 0.96+
Docker | ORGANIZATION | 0.95+
NVDIA | ORGANIZATION | 0.94+
K80 | COMMERCIAL_ITEM | 0.94+
One | QUANTITY | 0.94+
later this year | DATE | 0.94+
NoSQL | TITLE | 0.93+
Google Cloud | TITLE | 0.93+
NSX | ORGANIZATION | 0.92+
today | DATE | 0.91+
SQL | TITLE | 0.9+
one community | QUANTITY | 0.89+
one audience | QUANTITY | 0.88+
three | QUANTITY | 0.87+
a minute | QUANTITY | 0.84+
VMWare | ORGANIZATION | 0.82+
this morning | DATE | 0.79+
Cloud | TITLE | 0.79+

John Gossman, Microsoft Azure - DockerCon 2017 - #DockerCon - #theCUBE


 

>> Announcer: Live from Austin, Texas, It's theCUBE, covering DockerCon 2017. Brought to you by Docker and support from its ecosystem partners. >> Welcome back to theCUBE here in Austin, Texas at DockerCon 2017. I'm Stu Miniman with my cohost for the two days of live broadcast, Jim Kobielus. Happy to welcome back to the program, John Gossman, who is the lead architect with Microsoft Azure. Also part of the keynote this morning. John, had the pleasure of interviewing you two years ago. We went though the obligatory wait, Microsoft Open Source, Linux, and Windows and everything living together. It's like cats and dogs. But thanks so much for joining us again. >> Yeah well as I was saying, that's 14 years in cloud years. So it's been a lot of change in that time, but thanks for having me again. >> Yeah. Absolutely. You said it was three years that you've been working Microsoft and Docker together. 21 years in it, dog or cloud years, if you will. I think Docker is more whales and turtles, as opposed to the dogs. But enough about the cartoons and the animals. Why don't you give our audience just a synopsis of kind of the key messages you were trying to get across in the keynote this morning. >> Okay well the very simple message is that what we enabled this new technology, Hyper-V isolation for Linux containers, is the ability to run Linux containers just seamlessly on Windows using the normal Docker experience. It's just Docker run, BusyBox or Docker run, MySQL, or whatever it is, and it just works. And of course if you know a little more technical detail about containers, you realize that one of the reasons that the containers are the way there are is that all the containers on a box normally share a kernel. And so you can run a Canonical, Ubuntu on user space, on a Red Hat kernel or vice versa. But Windows and Linux kernels are too different. So if you want to run Windows container, it's not going to run easily on Linux and vice versa. And you can still get this effect, if you want it, by also using a virtual machine. But then you've got the management overhead of managing the virtual machine, managing the containers, all the complexity that that involves. You have to get a VHD or AMI or something like that, as well a container image and you lose a lot of that sort of experience. >> John, first of all, I have to say congratulations to Microsoft. When the announcement was made that Windows containers were going to be developed, I have to say that I and most of my peers were a little bit skeptical as to how fast that would work; the development cycle. Probably because we have lots of experience and it's always okay, we understand how many man years this usually takes, but you guys hit and were delivering, got through the Betas, so can you speak to us about where we are with Windows containers? And one of the things people want to kind of understand is, compared to like Linux containers, how do you expect the adoption of that now that it's generally available to roll out? Do I have to wait for the next server refresh, OS refresh, how do you expect your customers to adopt and embrace? >> Well we were able to get this to work so quickly because if you remember, Docker didn't actually invent containers. They took a bunch of kernel primitives that were in Linux and put a really great user experience on it. And I'm not taking anything away from Docker by doing that, because oftentimes in the technology industry, it's easy to make something that was complicated, powerful, but not easy to use. 
And Windows already had a lot of those kernel primitives, same sort of similar kind of kernel primitives built-in. They had to take out Java javax, I think when Windows 2000. And so it was kind of the same experience. We took the Docker engine, so we got the API, we were using the open source project, so we have complete compatibility. And then we just had to write a basically a new back-end, and that's why it was able to come up rather quickly. And now we're in a mode you know, Windows server updates things more incrementally, than we did in the past. So this will just keep on improving as time goes on. >> Okay, one of the other big announcements in the keynote this morning was LinuxKit. And it was open source project, we actually saw Solomon move it to open source during the keynote, when they laid out the ecosystems for it like IBM, HPE, INTEL and Microsoft. So what does that mean for Microsoft? You are now a provider of Linux? How are we supposed to look at this? >> Yeah. So we're working with all the Linux vendors. So if you saw our blog about the work we did today. We also have announcements from SUSE and Red Hat and Canonical, and the usual people. And one of the things I said in this box, I said look there's the new model is that you could choose both the Linux container that you want and the kernel that you want to run it on. And we're open to all sorts of things. But we have been working with Docker for a long time. On making sure that there was a great experience for running Docker for Linux on Windows. This thing called Docker for Windows. Which they developed. And we have been helping out. And that's basically an earlier generation of this same Linux technology. So it's just the next step on that journey. >> Microsoft's pretty well recognized to have a robust solution for a hybrid cloud. Cause of course you go your Azure stack, that you're putting on premises. There's Azure itself, it's really the cloud first methodology that you've been rolling through and you offer as a service. Containers really anywhere in your environment, baked in anywhere? How should we be thinking about this going forward? >> Yeah absolutely. I mean one of the points of containers in general, one of the attractive parts of containers is that they run everywhere. Including from your laptop, to the various clouds to bare metal, to virtualized environments. And so we have both things. We want Windows containers, where we're the vendor of the container. We want those to work everywhere. And we also, as the vendors of Azure and Azure Stack, and just server system center, and other older enterprise technologies. We want containers to work on all those things. So both directions. I mean, that's kind of the world we're in now, where everything works everywhere. >> Can you square you container strategy as reflected in your partnership with Docker, With your serverless computer strategy for Azure Functions? I'm trying to get a sense for Microsoft's overall approach to running containers as it relates to the Azure strategy. >> In some ways, you can think of this as a serverless functions mode as a step even further. You just deploy a hardware machine and install everything on it. Next thing, you'd have a virtual machine and you install everything. And then you put your code and all its affinities to the container. And with serverless with Azure Functions, it's like, well why do any of that? Just write a function. Now at the same time, we think there's lots of reasons. 
Under the covers, all of these PaaS systems, going all the way back (that's how Docker started), run a container underneath the covers. In the same way, it's not literally a Docker container, but down in Functions the same kind of thing provides that capability. And we're certainly thinking about how Docker containers can work in that serverless model in the future.

>> See, one of my core focus areas for Wikibon as an analyst is looking at developers going more deeply into deep learning and machine learning. To what extent is Microsoft already taking its core tools in that area, containerizing them, and enabling access to that functionality through serverless APIs and functions and so forth in Azure?

>> On the serverless stuff, I'm not on the serverless team, so I'm not really qualified to explain everything on their end. I do know that the CNTK team has a Docker container that they put the bits in, and there's the Azure Machine Learning team, who've been working on a lot of these sorts of technologies. I'm just not the right guy to answer that question.

>> As you talk to your customers, where does this fit into the whole discussion? Do containers just happen in the background? Is it helping them with some of their application modernization? Does it help Microsoft change the way we architect things? What's the practitioner, your ultimate end user, viewpoint on this?

>> Well, cloud adoption is at all points on the curve simultaneously, even inside individual companies. So everybody's in a kind of different place. The two models that I think people have really concentrated on are, on one end, IaaS, infrastructure as a service, where you just bring your existing applications, and on the other, PaaS, where you rewrite the application for a more modern, more cloud-centric architecture. And containers fit kind of squarely in the middle of that, in some respects. Because in many ways, and primarily, I see Docker containers as a better form of infrastructure. It is an easier, more portable way to get all your dependencies together and run them everywhere. So a lot of lift-and-shift work is in there, but once you're in containers, it is also easier to break the components apart and put them back together into a more microservice-oriented, cloud-native model.

>> I think that's a great point, because we've been having this discussion about, okay, there are applications that I'm rewriting, but then I've got this huge amount of applications that I need some way to bridge to the future, if you will. Because, I don't know, there's one analyst firm that calls it bimodal, but the customers we talk to in general don't segment everything they do: I have application-type infrastructure and I need to be able to live across multiple environments. Wrapping versus refactoring.

>> And they do both. But I always prefer, you know, some people come in and they talk about legacy, and they're developers. I'm a developer, right? Developers, we always want to rewrite everything. And there's a time and place for doing that. But the legacy applications are required for those applications to work. And if you don't need to refactor that thing, if you can get it into a container or a virtual machine or however, and get it into that more modern environment, and then work around it, re-architect it, it's a whole different set of approaches. It's a good conversation to have with a customer to understand.
I've seen people go both too slow, and I've seen people refactor their whole thing and then try to figure out how to get it to work again.

>> So Microsoft has a gigantic user base. What kind of things are you doing to help educate and help the people that had certifications or jobs where they were running Exchange to move towards this new kind of world, and cloud in general, and containers specifically, maybe?

>> Well, we have a ton of stuff. I'm not familiar with the certification programs myself, but we certainly have our developer evangelism team out training people. We've been trying to improve our documentation, and we have a bunch of guidance on cloud migration and things like that. There is a real challenge, and it's the same problem for our customers and anybody looking at cloud: re-educating people who have been working in a previous model. Which is another reason, again, where the lift-and-shift stuff is; you can make it more like it is on premises, or more like it is on your laptop. It makes that journey a little easier. But we're definitely in one of those periods where the industry is changing so fast that I personally have to spend a lot of time asking, what's going on? What happened today? Coming to the conference, I learn new things.

>> You bring up a huge challenge that we see. I kind of like that Docker has their two delivery models. They've got the Community Edition, CE, and the Enterprise Edition, EE. EE feels more like traditional software: it's packaged, it's on a regular release cycle. CE is, well, Solomon talked this morning about the edge pieces. Can I keep up with every six months, or can I have stuff flying at me? People inside of Docker can't keep up with the pace of change that much. What do you see? I mean, I think back to the major Windows operating system releases that we used to have, like the Intel tick-tock on releases. The pace of change is tough for everyone. How are you helping, you know, with your product development, and customers, you know, take advantage of things and try to keep up with this rapidly changing ecosystem?

>> This is a constant challenge with basically all software now. We can't afford to only ever ship things every three years, and at the same time there's stability. So with the major products like Windows, we have these stable branches, where things are pretty much the same going along, and then there's an active branch where things are coming down and the changes and the updates are coming. I'd say the one biggest difference, and you know I've been in this industry for a long time, so say between the '90s and now, is that so much of it actually runs off servers, where when something crashes, we get a crash dump and we can debug the thing. So going out into the field, we have much more capability in finding what's going on in the customer base than we did 20 years ago. But other than that, it's just a really hard challenge to both satisfy people that can't have anything change, and everything changing.

>> John, you've been watching this for a number of years. What do we still have left to do? When we come back to DockerCon next year, you know, we'll have more people, it'll be a bigger event, but what's the progression? What kind of things are you looking forward to the ecosystem, and yourself, and Docker knocking down and moving customers forward with?

>> The first year was kind of like, what is this thing?
Second year was, now the individual Docker container is there, how do you orchestrate them? And the next step is, how do we network these things? And there's an initiative now to standardize on storage, for storage systems and Docker containers. Monitoring. There are a lot of things that are still to do; we have a long way to go. On the other side, I think there's this other track, which we talked about today, which is that virtualization and containers are going to blur and blend, and I don't think that seven years from now we're going to be talking about containers or virtual machines. We're just going to be saying it's some unit of compute, and then there are knobs and tweaks: you want it a little more isolated, you want it a little less isolated, you trade off some performance for something else.

>> Business capability, in other words the enterprise architecture framework of business capabilities, will be paramount in terms of composing applications or microservices, from what I understand you saying.

>> Yeah, I think where we're really going to get to is a model where we get past the basics of storage and networking and start working up at the next level. So things like Helm, or the DC/OS Universe, or Swarm stacks, where you can describe more of an application; it just keeps moving up. And so I think in seven years we won't be talking so much about this; it'll be some other disruption, right? But we won't be talking about this virtualization layer as much as building apps again.

>> On visual composition of microservices, what is Microsoft doing? You said that you long ago entered Microsoft through the Visio acquisition. What's Microsoft doing to enable more visual composition across these functions, across orchestrated team-like environments going forward?

>> I think there is some work going on. It's not my area, again, on visual composition, despite the fact that I came from Visio. I kind of got away from that space.

>> Well, I'm betraying my age. I remember that period.

>> All right. Well, John, always a pleasure catching up with you, and thank you so much for joining us for this segment. Look forward to watching Microsoft going forward.

>> Thanks. Thank you for having me.

>> We'll be back with lots more coverage here from DockerCon 2017. You're watching theCUBE.

Published Date: Apr 19, 2017
