Yves Bergquist, USC | NAB Show 2017


 

>> Narrator: Live from Las Vegas, it's theCube, covering NAB 2017. Brought to you by HGST. >> Welcome back everybody, Jeff Frick here with theCube. We're at NAB 2017 with 100,000 of our closest friends, talking all about media, entertainment and technology. The theme this year is MET, 'cause the technology is so mixed in with everything else that you can't separate it anymore. And we're really excited to do a deep dive into kind of the customer, or not the customer, excuse me, the consumer side of this whole world with Yves Bergquist. He's the project director, Data and Analytics, at the Entertainment Technology Center at USC. So Yves, welcome. >> Thank you, thanks for having me. >> So when I was doing some research for your segment, it was really interesting to see that you're very much involved in trying to figure out what people like to watch, how they like to watch, and in getting a bunch of data, because now the choices for the consumers of media and entertainment are giant, like never before. >> Yeah. There's a very, very basic question that I think not a lot of people in media and entertainment can answer, and that's: why are people watching your stuff? They have sort of surface-level answers, but there are ways that the content out there that we watch resonates cognitively with us that are really important, that are very fundamental to how we consume media and entertainment. And even the decision making of why we decide to go watch a show on Netflix, or play a mobile game, or watch a YouTube video. Why do we make these specific choices? What drives those choices? All these questions don't have a lot of really good answers right now, and that's where we're focusing all of our work at ETC: to really understand people's drive to entertain themselves, or their decisions to entertain themselves, at a very deep level, and to really understand how various narrative structures in film and trailers and brands and advertising resonate with people at a cognitive level. >> So it's pretty interesting, it really goes with the whole big data theme and the AI theme. Because now you can capture, collect, and measure data, and consumption, in ways you couldn't ever do before. >> Yeah, that's a good point. So, you know, there are three things that are really impacting the media and entertainment industry, and every industry, really. Number one is the ability to think in systems, right? We used to think about problems in a very siloed manner, right, we'd think about a problem in isolation from other forces, like we'd look at the flu in isolation from the environment that we're in, things like that. There's another way to look at things, more holistically; it's called systems thinking. And the ability to think of audiences as a system, just like your body is a system inside a system, right, is really revolutionizing the way we're looking at entertainment and media. The second thing is the availability of data; there's just an enormous amount of data out there. A lot of it is unstructured, but the good thing about entertainment and media is that it drives passion and drives conversation. And anything that drives passion and conversation gets very rich in data. And the third thing that is impacting the industry is machine learning and AI, and the ability to really look at all of these data points across the system holistically, in a very intelligent, more semantic manner, and make sure that you're measuring the right things.
For a very, very long time the media and entertainment industry has been measuring the wrong things. It's now catching up very, very fast and making sure that it's measuring the right things. For example, how do we measure how specific narrative structures in film resonate with people cognitively in a way that translates into the box office? Is there a specific character journey that resonates better in an action movie with males versus females? How does that matter for how a story's being told? Where do you innovate in a script, right? An interesting point is that the entertainment industry is very unique in that it has two major problems. Number one, its clients, its customers, are absolute experts in the product. Because if you're 25 or 35, how many movies have you watched? Thousands of movies, right? So you're an expert in movies. >> Jeff: Certainly the ones you like. >> Exactly. If you're 25 you haven't bought hundreds or thousands of cars, right? But on the other hand, the supplier of the content doesn't know as much about the customer as the customer knows about the product. So you have two problems. You have a really, really highly expert client, but you don't know a lot about that client as a studio, right, or a network or a media company. So that's a very, very unique, distinct challenge that they're starting to get very smart and very advanced in thinking about. >> The other thing that I see in the movie industry, and I'm no expert by any stretch of the imagination, is that the compression pressure seems huge. The budgets have grown to be giant. And the number of available weekends for your release is small. And the competition for attention and eyeballs around those weekends just seems to carry a really high kind of risk-reward profile that's getting more and more extreme. And is that driving people more to kind of the known? Or is it just my perception that they're taking fewer risks on modifications from the script, or modifications of kind of the norm, especially around these big budgets? I mean, just the fact that you've got version 1, 2, 3, 4, 5, 6 of pick-your-favorite theme seems to be a trend that continues and gets even more extreme. I mean, Superman. How many Superman movies are there, or Spiderman?
If you don't innovate enough, you're going to turn people off. So we actually have some research looking at the mathematical definition of why we think certain things are interesting and certain things are not interesting so we can separate. These are the things you need in your movies, this is some aspects, if you go back to Deadpool, there's some aspects of Deadpool as a movie that are very traditional to the superhero genre. And a lot of other aspects that are very very innovative. So you have to innovate in certain areas and you have to no innovate areas. And that's a real challenge, and so that's why we're really applying our work to looking at narrative structure in storytelling at ETC is because that's where a lot of the revenue opportunities and the de-risking opportunities are. >> And it's interesting before we went live you were talking about thinking of storytelling and narrative as a little bit less art and a little bit more science in terms of of thinking at in terms of algorithms and algorithmically. Because there are patterns there, there is data there. So what does some of the data that you measure to get there? You mentioned earlier that in the past people were measuring the wrong thing. What are the right things to measure? What are some of the things you guys are measuring now? >> Yeah, so you know, it is still very much an art, right? It's making it, making art a little bit more optimal, and optimizing art is what we're doing, but it's, it will remain art for a very long time. I think for, and since we're at NAB, sort if in a broadcasting environment, I think a lot of the measurements and systems that have been in place for decades now are looking at demographics. And demographics, whether you're a male or female, Your age, your ethnicity, or your income, used to predict what you would watch. It doesn't do that anymore, and if you have kids, you know like me, you watch the same thing that they're watching, you're playing the same video games that they're playing. I think there's a new way to measure things more cognitively and semantically and neuroscience is starting to get into the issue of why do we think certain stories are more interesting or more appealing than others. Why do certain stories lead us to make actual decisions more than others? And so I think at a very very basic level you have to unpack this notion of why do people go see this movie? And it's a system, you know, that decision happens in a system where some of the system is demographics, demographics aren't going to go away they're still predictive to a certain extent. But it's also, you know, cast, it's also who has recommended this movie. And what are the systems of influence in driving certain people to see a movie? And all these things, and of course, what we're focusing on, which is storytelling and narrative structure and how that, sort of translates to making decisions to see this movie. A lot, you know, we're still in the infancy of measuring all of the system in a very scientific granular way, but we're making very very quick progress. And so even things like understanding the ecosystem of influence around why certain communities are influenced to go see certain movies by other communities and what happens there, right. So I'll give you an example, we did, we pulled months of data on Reddit about where supporters of Hillary Clinton and where supporters of Donald Trump would engage on that topic. 
Are they talking about it amongst each other, or are they really going out there and trying to convince other people to vote for Trump or to vote for Hillary Clinton? And we saw two radically different patterns. Pattern number one: the Clinton people would mostly engage with each other on Reddit. So that's cool, but it has very little value, because you're not being an ambassador. On the other hand, the Trump people were engaging far outside of the Trump subreddit and trying to convince people to join the movement, to donate, to vote for Trump. So we think there's a model there that can be ported to the entertainment industry: if your fan base is mostly engaging with each other, it has less value than if your fan base is really going out there and trying to get other people excited about your movie. And why do certain people get excited, and what arguments do your fans use out there to convince others to go see your movie? All these things we're looking at, and it's a brand new world now for media because of all of these data points. >> The systems conversation is so interesting, because it's not only the system, but the individual. But it's like you said, it's all these systems of influence today. Look at the Yahoo reviews, the Rotten Tomatoes reviews, Reddit, you know, as a system of influence. Who would have ever thought? >> Yeah, and we're going into a world, very quickly, where we're going to be able to understand entertainment and storytelling and narrative and its cognitive power almost on a neural-network basis: looking at what kinds of neural networks in our brains get fired when we are exposed to this type of character, or this type of storyline, or this type of narrative mechanics. And so this is a really exciting time.
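The in-group versus out-group engagement pattern described above can be made concrete with a small sketch. The snippet below computes, for each fan community, what share of its comments lands outside its home subreddit. The community names, subreddit mapping, and comment records are all hypothetical stand-ins for illustration, not ETC's actual data or pipeline.

```python
from collections import defaultdict

# Hypothetical comment records: (author_community, subreddit_posted_in).
# In the example above, the communities were Clinton and Trump supporters.
comments = [
    ("clinton", "hillaryclinton"), ("clinton", "hillaryclinton"),
    ("clinton", "hillaryclinton"), ("clinton", "politics"),
    ("trump", "the_donald"), ("trump", "politics"),
    ("trump", "news"), ("trump", "askreddit"),
]

# Home subreddit for each fan community (hypothetical mapping).
home = {"clinton": "hillaryclinton", "trump": "the_donald"}

totals = defaultdict(int)
outside = defaultdict(int)
for community, subreddit in comments:
    totals[community] += 1
    if subreddit != home[community]:
        outside[community] += 1  # engagement beyond the home community

for community in totals:
    ratio = outside[community] / totals[community]
    print(f"{community}: {ratio:.0%} of engagement is outside the home subreddit")
```

A fan base with a high outside share is acting as ambassadors for the title; tracked over time, that ratio becomes a leading indicator of word-of-mouth reach.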
>> The other thing that's interesting, and we talked a little bit about this before we turned the cameras on, is the trailers. Because that's kind of the story within the story. And depending on your objectives and the budget, you know, they can make any number of trailers, in very different ways, to approach or to target very specific audiences. I wonder if you can get into that a little bit. >> Yeah, so, you know, in the media and entertainment industry, if you think about how decisions have been made, it's amazing that the industry has made so much money, so I think it's a testament to the enormous creative talent that's involved. But, you know, especially for trailers, a lot of the decisions are made by looking at what's worked in the past, in a very sort of haphazard way. There really isn't a lot of data and analytics and science applied to: hey, what kind of trailer, what structure of trailer, do we need to put out there in each channel, for each target audience, to get them really excited about the movie? Because there are many different ways you can present a movie, right? We've all seen many different types of trailers for many different types of movies. Nobody's really analyzed, for example, the pace, right, the edit cuts, the structure of the edits in the trailer, and how that resonates with people. And now we have the ability to do that, because we can count views on YouTube, for example, or there will be some other way to measure how popular a trailer is. So what we're doing is measuring everything that we can measure about a trailer. Is it a complete story? What percentage of the trailer is the main character in? What percentage of the trailer is the influence character in? We're looking at cast. Does a trailer with Ben Affleck, you know, work better if Ben Affleck is in a lot of the trailer, or not a lot of the trailer? And what kinds of trailers work better for specific genres, specific target audiences, specific channels? So we're really unpacking that into a nice little spreadsheet and measuring all the things that we can measure. And the thing about this is, think about the amount of money that's involved in making these decisions: if you're a studio and you're spending three, four, five billion dollars a year in marketing expense, and my work can make that even 10 percent more efficient, that's like half a billion dollars in savings. >> That's a real number. >> That's enormous, right? So it's a really exciting time for media and entertainment, because there are all these things on the horizon to help them make better, more data-driven decisions. And to really free up creators, because if we can tell the people who tell the stories in film: we've boiled it down to a science, and we know that if you have these four or five things in your script, everywhere else you can innovate, go nuts. I think it's going to free up a lot of creative talent. We're going to see a lot more interesting movies out there. >> The other piece, I think, I mean obviously a trailer for a movie is one thing, but take that little genre of creative that's purely built to drive behavior, and that's a commercial. And I always joke with my kids, I watch a lot of sports, and there'll be a car ad, and I'm like, just think if you're the poor guy that gets the assignment to make another car ad. I mean, how many car ads have been made? And you've got to think creatively. But the data that you're talking about, in terms of the narrative, what types of shots, the cutting, based on the demographic that you're trying to go after for that specific ad, that must be tremendously valuable information. >> Yeah, it is really valuable. So you know, our philosophy is that everything is story. Your tie is a story, your haircut's a story, your cereal's a story, your car, everything. We make decisions based on the narratives that other people tell us, and that we tell ourselves, about how to represent the world. Simply because the universe out there, the reality out there, is too complex for our brains to represent as it is, we have to simplify it, compress it into a set of behavioral scripts; it's sort of an executive summary of reality. And that executive summary is a story. And so it's especially powerful in driving what we buy and how we consume things. And so I've built a platform that extracts very, very structured data from conversations about the narrative structure around a specific brand. You know, is it focused more on emotions? Is it focused more on ethics? Is it focused more on, sort of, the utility of the product? And we're trying to correlate that: what kind of narrative structure around your brand, what kind of story around your brand, drives more sales? And that's really, really interesting in understanding, again, that cognitive relationship between stories and how efficient they are in driving specific behavior. That is exactly what my research is about.
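As a rough illustration of "unpacking that into a nice little spreadsheet," here is a sketch of the kind of per-trailer feature table described above. Every field name and value is hypothetical; the point is simply that once trailer attributes are structured this way, they can be correlated against views or box office per genre, audience, and channel.

```python
import csv
from io import StringIO

# Hypothetical per-trailer features of the kind described above:
# pacing (cuts per minute), screen-time share for the main and
# influence characters, lead-actor screen share, genre, and channel.
trailers = [
    {"title": "Trailer A", "cuts_per_min": 38, "main_char_pct": 62,
     "influence_char_pct": 18, "lead_actor_pct": 55, "genre": "action",
     "channel": "youtube", "views_millions": 41.0},
    {"title": "Trailer B", "cuts_per_min": 22, "main_char_pct": 48,
     "influence_char_pct": 31, "lead_actor_pct": 40, "genre": "drama",
     "channel": "tv", "views_millions": 9.5},
]

# Write the "spreadsheet" (CSV) so the features can be correlated
# against performance measures like views or box office.
buf = StringIO()
writer = csv.DictWriter(buf, fieldnames=trailers[0].keys())
writer.writeheader()
writer.writerows(trailers)
print(buf.getvalue())
```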
>> Yves, we could go on all day, but unfortunately we are out of time. So thank you for spending a few minutes and dropping by. Fascinating conversation. Alright, he's Yves Bergquist from USC, where all the film stuff's happening. I'm Jeff Frick, you're watching theCube. We'll be back at NAB 2017 after this short break. Thanks for watching. (uptempo rock music)

Published Date : Apr 24 2017


HPE Spotlight Segment v2


 

>> From around the globe, it's theCube, with digital coverage of HPE Green Lake Day, made possible by Hewlett Packard Enterprise. >> Okay, we're now gonna dive right into some of the news and get into the Green Lake announcement details. And with me to do that is Keith White, the senior vice president and general manager for Green Lake Cloud Services at Hewlett Packard Enterprise. Keith, thanks for your time. Great to see you. >> Hey, thanks so much for having me. I'm really excited to be here. >> You're welcome. And so listen, before we get into the hard news, can you give us an update on just Green Lake and the business? How's it going? >> You bet. No, it's fantastic. And thanks, you know, for the opportunity again. And hey, I hope everyone's at home staying safe and healthy. It's been a great year for HPE Green Lake. There's a ton of momentum that we're seeing in the marketplace. We've booked over $4 billion of total contract value to date, and that's over 1,000 customers worldwide, in 50 different countries, across a variety of solutions and a variety of workloads. So really just tons of momentum. But it's not just about accelerating the current momentum. It's really about listening to our customers, staying ahead of their demands, delivering more value to them, and really executing on the HPE Green Lake promise. >> Great. Thanks for that and really great detail. Congratulations on the progress, but I know you're not done. So let's get to the news. What do people need to know? >> Awesome. Yeah, you know, there are three things that we want to share with you today. So first is all about high performance computing, and I'll go into some details on that. Second, we're actually delivering new industry workloads, which I think will be exciting for a lot of the major industries out there. And third, we're expanding our HPE capabilities just to make things easier and more effective. So first off, you know, we're excited to announce today the acceleration of mainstream adoption of high performance computing through HPE Green Lake. In essence, what we're really excited about is this unique opportunity to provide customers with the power of an agile, elastic, pay-per-use cloud experience together with HPE's market-leading HPC systems. So pretty soon any enterprise will be able to tackle their most demanding compute- and data-intensive workloads, power artificial intelligence and machine learning initiatives to provide better business insights and outcomes, and again get things like faster time to insight and accelerated innovation. So today's news is really gonna help speed up deployment of HPC projects by 75 percent and reduce TCO by up to 40 percent for customers. >> That's awesome. Excited to learn more about the HPC piece especially. So tell us, what's really different about the news today, from your perspective? >> That's a great question. The idea is to really help customers with their business outcomes: from building safer cars, to improving manufacturing lines with sustainable materials, to advancing discovery for drug treatments, especially in this time of COVID, or making critical millisecond decisions for the finance markets. So you'll see a lot of benefits and a lot of differentiation for customers in a variety of different scenarios and industries. >> Yeah, so I wonder if you could talk a little bit more about, specifically, you know, exactly what's new. Can you unpack some of that for us? >> You bet.
Well, what's key is that any enterprise will be able to run their modeling and simulation workloads fully managed, because we manage everything for them, pre-bundled. So we'll give folks small, medium, and large HPE HPC services to operate in any data center or in a colo location. These workloads are almost impossible to move to the public cloud, because the data is so large, or it needs to be close by for latency reasons. Oftentimes people have concerns about IP protection, or about their applications and how they run within that local environment. So if customers are betting their business on this insight and analytics, which many of them are, they need business-critical performance, experts to help them with implementation and migration, and resiliency. >> So is this a do-it-yourself model? In other words, you know, do the customers have to manage it on their own? Or how are you helping there? >> No, it's a great question. The fantastic thing about HPE Green Lake is that we manage it all for the customer. In essence, they don't have to worry about anything on the back end: we manage capacity, we manage performance, we manage updates, and all of those types of things. So we really make it super simple. And, you know, we're offering these bundled solutions featuring our HPE Apollo systems that are purpose-built for running things like modeling and simulation workloads. And again, because it's Green Lake, and because it's cloud services, this provides self-service, it provides automation. Customers can manage it however they want to: we can do it all for them, or they can do some on their own. It's really super easy, and it's really up to them how they want to manage that system. >> What about analytics? You know, a lot of people want to dig deeper into the data. How are you supporting that? >> Yeah, analytics is key. And one of the best things about this HPC implementation is that we provide an open platform, so customers have the ability to leverage whatever tools they want for analytics, and they can manage whatever systems they want to pull data from. So they really have a ton of flexibility. But the key is, because it's HPE Green Lake, and because these are HPE's market-leading HPC systems, they get the fastest systems, they get it all managed for them, and they only pay for what they use, so they don't need to write a huge check up front. And frankly, they get the best of all those worlds together, in order to come up with the things that matter to them: true business outcomes, true analytics, so that they can make the decisions they need to run their business. >> Yeah, that's awesome. You guys are clearly making some good progress here, and I think it really is a game changer for the types of customers that you described. I mean, particularly those folks that, like you said, think they can't move stuff into the cloud. They've got to stay on-prem, but they want that cloud experience. That's really exciting. We're gonna have you back in a few minutes to talk about the Green Lake cloud services and some of the new industry platforms that you see evolving. >> Awesome. Thanks so much. I look forward to it. >> Yeah, us too. Okay, right now we're gonna check out the conversation that I had earlier with Pete Ungaro and Addison Snell on HPC. Let's watch.
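As a back-of-the-envelope sketch of the consumption model Keith describes, the snippet below contrasts a fixed up-front HPC purchase with metered pay-per-use billing against pre-installed capacity. All prices and utilization figures are invented for illustration; actual Green Lake metering and rates are set per contract.

```python
# Hypothetical figures: a 3-year fixed purchase vs. metered pay-per-use
# with pre-installed capacity (the Green Lake model: the gear is on site,
# but the customer is billed only for what is consumed).
NODE_HOURS_INSTALLED = 100 * 24 * 365 * 3   # 100 nodes for 3 years
UPFRONT_COST = 6_000_000                     # fixed purchase (made up)
RATE_PER_NODE_HOUR = 2.50                    # metered rate (made up)

def pay_per_use_cost(avg_utilization: float) -> float:
    """Bill only the node-hours actually consumed."""
    used_hours = NODE_HOURS_INSTALLED * avg_utilization
    return used_hours * RATE_PER_NODE_HOUR

for utilization in (0.3, 0.6, 0.9):
    cost = pay_per_use_cost(utilization)
    print(f"utilization {utilization:.0%}: metered ${cost:,.0f} "
          f"vs fixed ${UPFRONT_COST:,}")
```

Under these made-up numbers, the metered model wins most clearly when average utilization is low or spiky, which is exactly the pilot-to-production growth pattern discussed in the conversation that follows.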
>> Welcome, everybody, to this spotlight session here at Green Lake Day. We're gonna dig into high performance computing. Let me first bring in Pete Ungaro, who's the GM for HPC and Mission Critical Solutions at Hewlett Packard Enterprise, and then we're gonna pivot to Addison Snell, who is the CEO of the research firm Intersect360. So, Pete, starting with you: welcome, and really a pleasure to have you here. I want to start off by asking you, what are the key trends that you see in the HPC and supercomputing space? And I'd really appreciate it if you could talk about how customer consumption patterns are changing. >> Yeah, I appreciate that, David, and thanks for having me. You know, I think the biggest thing that we're seeing is just the massive growth of data. As we get larger and larger data sets, we get larger and larger models, and we're finding more and more new ways to compute on that data. New algorithms, like AI, would be a great example of that. And as people are starting to see this, especially as they're going through digital transformations, you know, more and more people, I believe, can take advantage of HPC, but maybe don't know how and don't know how to get started. And so they're looking for how to get going in this environment. Many customers that are longtime HPC customers, you know, just consume it in their own data centers; they have that capability. But many don't, and so they're looking at: how can I do this? Do I need to build up that capability myself? Do I go to the cloud? What about my data and where it resides? So there's a lot that goes into thinking through how to start taking advantage of this new infrastructure. >> Excellent. I mean, we all know HPC workloads support research and discovery for some of the toughest and most complex problems, particularly those affecting society. So I'm interested in your thoughts on how you see Green Lake helping in these endeavors specifically. >> Yeah, one of the most exciting things about HPC is just the impact that it has, you know, everywhere: from building safer cars and airplanes, to looking at climate change, to, you know, finding new vaccines for things like COVID that we're all dealing with right now. So one of the biggest things is how we take advantage of it and use it to benefit society overall. And as we think about implementing HPC, you know, how do we get started, and then how do we grow and scale as we get more and more capability? Those are the biggest things that we're seeing on that front. >> Yes. Okay, so just about a year ago you guys launched the Green Lake initiative and the whole, you know, complete focus on as-a-service. So I'm curious as to how the new Green Lake services, the HPC services specifically, fit into HPE's overall high performance computing portfolio and strategy. >> Yeah, great question. You know, Green Lake is a new consumption model for us, so it's very exciting. We keep our entire HPC portfolio that we have today, but extend it with Green Lake and offer customers, you know, expanded consumption choices. So, you know, customers that are dealing with the growth of their data, or that are moving to digital transformation applications, can use Green Lake to easily scale up from workstations, to, you know, manage their system costs or operational costs; or, if they don't have staff to expand their environment, Green Lake provides all of that in a managed infrastructure for them.
So if they're going from, like, a pilot environment up into a production environment over time, Green Lake enables them to do that very simply and easily, without having to have all that internal infrastructure: people, compute, data centers, etcetera. Green Lake provides all that for them, so they can have a turnkey solution for HPC. >> So, a lot easier entry strategy. A key word that you used there was choice. So basically you're providing optionality; you're not necessarily forcing them into a particular model. Is that correct? >> Yeah, 100 percent, Dave. What we want to do is just expand the choices, so customers can acquire and use that technology to their advantage, whether they're large or small, whether they're, you know, a startup or a Fortune 500 company, whether they have their own data centers or they wanna, you know, use a colo facility, whether they have their own staff or not. We want to provide them the opportunity to take advantage of this leading-edge resource. >> Very interesting, Pete. I really appreciate the perspective that you guys bring to the market. I mean, it seems to me it's gonna really accelerate broader adoption of high performance computing, to the masses, really giving them an easier entry point. I want to bring in now Addison Snell to the discussion. He's the CEO, as I said, of Intersect360, which, in my view, is the world's leading market research company focused on HPC. Addison, you've been following the space for a while. You're an expert; you've seen a lot of changes over the years. What do you see as the critical aspect in the market, specifically as it relates to this as-a-service delivery that we were just discussing with Pete? And I wonder if you could work in the benefits, in terms of, in your view, how it's gonna affect HPC usage broadly. >> Yeah, good morning, David. Thanks very much for having me. Pete, it's great to see you again. So we've been tracking a lot of these utility computing models in high performance computing for years, particularly as most of the usage, by revenue, is actually by commercial endeavors, using high performance computing for their R&D and engineering projects and the like. And cloud computing has been a major portion of that, and has the highest growth rate in the market right now; we're seeing double-digit growth that accounted for about $1.4 billion of the high performance computing industry last year. But the bigger trend, which makes Green Lake really interesting, is that we saw an additional roughly billion dollars' worth of spending outside what was directly measured in the cloud portion of the market, in areas that we deemed to be cloud-like: as-a-service types of contracts that were still utility computing, but that might sit under a software-as-a-service portion of the budget, under software, or under some other managed-services type of contract, so the user didn't report it directly as cloud. But it was certainly influenced by utility computing, and I think that's gonna be a really dominant portion of the market going forward, when we look at growth rates and where the market's been evolving. >> That's interesting. I mean, basically you're saying the utility model is not brand new; we've seen that for years. Cloud was obviously a catalyst that gave it a boost. What is new, you're saying, is, and I'll say it this way,
the definition of cloud is expanding. People always say it's not a place, it's an experience, and I couldn't agree more. I'd love to get your independent perspective, both on what I just said, but also: how would you rate HPE's position in this market? >> Well, you're right, absolutely, that the definition of cloud is expanding, and that's a challenge when we run our surveys: we try to be pedantic, in a sense, and define exactly what we're talking about. And that's how we're able to measure both the direct usage of a typical public cloud, but also a more flexible notion of as-a-service. Now, you asked about HPE in particular, and that's extremely relevant, not only with Green Lake but with their broader presence in high performance computing. HPE is the number-one provider of systems for high performance computing worldwide, and that's largely based on the breadth of HPE's offerings, in addition to their performance in various segments. So they pick up a lot of the commercial market with their HPE Apollo Gen10 Plus systems, they hit a lot of big-memory configurations with Superdome Flex, and they scale up to some of the most powerful supercomputers in the world with the HPE Cray EX platforms that go into some of the leading national labs. Now, Green Lake gives them an opportunity to offer this kind of flexibility to customers: rather than committing all at once to a particular purchase, you can position those on a utility-computing basis and pay for them as a service, without committing to a particular public cloud. I think that's an interesting role for Green Lake to play in the market. >> Yeah, it's interesting. I mean, earlier this year we celebrated Exascale Day with support from HPE, and it really is all about a community and an ecosystem; there's a lot of camaraderie going on in the space that you guys are deep into. Addison, as we wrap, what should observers expect in the HPC market, in this space, over the next few years? >> Yeah, that's a great question, what to expect, because if 2020 has taught us anything, it's the hazards of forecasting where we think the market is going. When we put out a market forecast, we tend not to account for huge things like unexpected pandemics or wars. But it's relevant to the topic here because, as I said, we were already forecasting cloud and as-a-service models growing. Any time you get into uncertainty, where it becomes less easy to plan for where you want to be in two years, three years, five years, that environment speaks well to things that are cloud or as-a-service; they do very well, flexibly. And therefore, when we look at the market and plan out where we think it is in 2020 and 2021, anything that accelerates uncertainty is actually going to increase the need for something like Green Lake, or an as-a-service or cloud type of environment. So we're expecting those sorts of deployments to come in over and above where we had previously expected them in 2020 and 2021, because as-a-service deals well with uncertainty, and that's just the world we've been in recently. >> I think those are great comments and a really good framework. And we've seen this with the pandemic: the pace at which the technology industry in particular, and of course HPE specifically, have responded to support it speaks to your point about agility and flexibility being crucial.
And I'll go back to something earlier that Pete said around the data: the sooner we can get to the data to analyze things, whether it's compressing the time to a vaccine or pivoting our businesses, the better off we are. So I wanna thank Pete and Addison for your perspectives today. Really great stuff, guys. Thank you. >> Yeah, thank you. >> Alright, keep it right there for more great insights and content. You're watching Green Lake Day. Alright, great discussion on HPC. Now we're gonna get into some of the new industry examples, some of the case studies, and new platforms. Keith, HPE Green Lake is moving forward, that's clear. You're picking up momentum with customers, but can you give us some examples of platforms for industry use cases, and some specifics around that? >> You know, you bet. And actually you'll hear more details from Arwa Qadoura, who leads our Green Lake go-to-market efforts, in just a little bit. But specifically, I want to highlight some examples where we provide cloud services to help solve some of the most demanding workloads on the planet. So, first off, in financial services, for example, traditional banks are facing increased competition and evolving customer expectations; they need to transform so that they can reduce risk, manage cost, and provide a differentiated customer experience. We'll talk about a platform for Splunk that does just that. Second, health care institutions face a growing list of challenges, some due to the COVID-19 pandemic and others years in the making, like our aging population and the rise in chronic disease, which are really driving up demand and straining capital budgets. These global trends create a critical need for transformation, to improve the patient experience and their business outcomes. Another example is manufacturing. Manufacturers face many challenges in order to remain competitive, right: they need to be able to identify new revenue streams, run more efficiently from an operations standpoint, and scale their resources. So you'll hear more about how we're optimizing delivery for manufacturing with SAP HANA. I'll also highlight in a little more detail today's news about how we're delivering supercomputing through HPE Green Lake at scale. And finally, we have a robust ecosystem of partners to help enterprises easily deploy these solutions. For example, I think today you're gonna be talking to Skip Bacon from Splunk. >> Yeah, absolutely, we sure are. And some really great examples there, especially a couple of industries that stood out. I mean, financial services and health care are ripe for transformation, and maybe disruption, if they don't move fast enough. So Keith, we'll be coming back to you a little later today to wrap things up. So thank you. Now we're gonna take a look at how HPE is partnering with Splunk and how Green Lake complements data-rich workloads. Let's watch. >> Now we're going to dig deeper into a data-oriented workload, and how HPE Green Lake fits into this use case. And with me is Skip Bacon, vice president, product management at Splunk. Skip, good to see you. >> Good to see you as well, Dave. >> So let's talk a little bit about Splunk. I mean, you guys are a dominant player in security and analytics, and you know, it's funny, Skip, I used to comment that during the rise of big data, Splunk really never positioned themselves as this big data player, with all that hype. But you became kind of the leader in big data without really even, you know, promoting it.
It just happened overnight. And you're now rapidly moving toward a subscription model, and you're making some strategic moves on the M&A front. Give us your perspective on what's happening at the company and why customers are so passionate about your software. >> Sure, a great, great setup, Dave. Thanks. So, yeah, let's start with the data that's underneath big data, right? I think, as usual, the industry seizes on a term and never stops to think about what it really means. Sure, one big part of big data is your transactional stuff, right: the things that get generated by all of your Oracle and SAP apps, that reflect how the business actually occurred. But a much bigger part is all of your digital artifacts, all of the machine-generated data that tells you the whole story about what led up to the things that actually happened, right, within the systems, within the interactions within those systems. That's where Splunk is focused. And I think what the market as a whole is really validating is that that machine-generated data, those digital artifacts, are at least as important, if not more so, than the transactional artifacts to this whole digital transformation problem. They're critical to showing IT how to get better at developing and deploying and operating software, how to get better at securing these systems, and then how to take this real-time view of what the business looks like as it's executing in the software right now, hold that up to and inform the business, and close that feedback loop, right? So, what is it we want to do differently digitally in order to do differently and better on the transformation side of the house? So I think a lot of Splunk's general growth is proof of the value prop and the need here, for sure, as we're seeing play out specifically in the domains of IT operations, DevOps, and cybersecurity, right, as well as more broadly in closing that business loop. Splunk has been on a tear, growing our footprint overall with our customers and across many new customers. We've been on a tear with moving parts of that footprint to an as-a-service offering in Splunk Cloud. But a lot of that overall growth is really fueled by just making it simpler, quicker, faster, cheaper, and easier to operate Splunk at scale, because the data is certainly not slowing down, right? There's more and more and more of it every day, and more latent potential value locked up in it. So anything that we and our partners can do to improve the cost economics, to improve the agility, to improve the responsiveness of these systems is huge to that customer value prop, and that's where we get so excited about what's going on with Green Lake. >> Yeah, so that makes sense. I mean, a digital business is a data business, and that means putting data at the core, and Splunk is obviously a key part of that. So, as I said earlier, Splunk, you're a leader in this space. What's the deal with your HPE relationship? You touched on that. What should we know about your partnership, and what's the solution with HPE? What's the customer sweet spot? >> Yep, all good questions. So we've been working with HPE for quite a while on a number of different fronts. This Green Lake piece is the most interesting, and sort of the purest intersection of both of these threads, these vectors, if you will. So we've been working to take our core data platform, deployed via an enterprise operator for Kubernetes.
Stick that atop HPE Green Lake, which is really a Kubernetes-as-a-service platform, and go prove out performance, scalability, agility, flexibility, and cost economics, starting with some of Splunk's biggest customers. And we've proven, you know, a lot of those things in great measure. I think the opportunity, you know, the ability to vertically scale Splunk in containers atop beefy boxes, and to really streamline the automation, the orchestration, the operations, all of that, yields what, in the words of one of our mutual customers, is literally "a transformational platform for deploying and operating Splunk." So we're hard at work on the engineering side, hard at work on the architecture referencing, sizing, and, you know, capacity planning sides, and then increasingly really rolling up our sleeves and taking this stuff to market together. >> Yeah, I mean, we're seeing just the idea of cloud, the definition of cloud, expanding; hybrid brings in on-prem, and we talked about the edge. And we've seen Splunk rapidly transitioning its pricing model to a subscription, you know, platform, if you will. And of course, that's what Green Lake is all about. What makes Splunk a good fit for Green Lake, and vice versa? What does it mean for customers? >> Sure. So a couple of different parts, I think, make this a perfect marriage. With Splunk at its core, if you're using it well, you're using it in a very iterative, discovery-driven, follow-the-path-to-value kind of way. That makes it a little hard to plan the infrastructure and size these things, right? We really want customers to be focused on how to get more data in and how to get more value out, and if you're doing it well, those things are going to go up and up and up over time. You don't want to be constrained by sizing and capacity planning, by procurement cycles for infrastructure. So the Green Lake model, you know: customers have got already-deployed systems, already-deployed capacity, available on an as-a-service basis, very fast, very agile. If they need the next tranche of capacity to bring in that next data set or run that next set of analytics, right, it's available immediately as a service; it's not, hey, we've got to kick off the procurement cycle for a whole bunch more hardware boxes. So that flexibility, that agility, are key to the general pattern for using Splunk. And again, that ability to vertically scale, to stick multiple Splunk instances into containers and load more and more of those up on these physical boxes, right, gives you great cost economics. You know, Splunk has a voracious appetite for data and for doing analytics against that data; the less expensive we can make that processing, the better. And the ability to really fully sweat, you know, sweat the assets, fully utilize those assets, that kind of vertical scale is the other great element of the Green Lake solution. >> Yes. I mean, when you think about the value prop for customers with Splunk and HPE Green Lake, you get a lot of what you would expect from what we used to talk about in the early days of cloud: that flexibility. It takes away a lot of the sort of mundane capacity planning; you can shift resources, as you talked about, you know, and scale, in a number of use cases. So that's sort of another interesting angle, isn't it? >> Yeah, it's the classic tech story: faster, quicker, cheaper, easier, right? Just taken to whole new levels and whole new extremes with these technologies.
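A toy sketch of the vertical-scaling point: pack multiple containerized Splunk indexer instances onto one large physical node and see which resource binds first. The node and instance sizes below are invented for illustration; real sizing would follow Splunk's and HPE's reference architectures.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    cores: int
    memory_gb: int

# Hypothetical sizes: one "beefy" physical node and one containerized
# Splunk indexer instance (actual sizing comes from vendor reference docs).
node = Spec(cores=96, memory_gb=1024)
indexer = Spec(cores=16, memory_gb=128)

# Vertical scaling: how many indexer containers fit per node, bounded
# by whichever resource runs out first.
per_node = min(node.cores // indexer.cores,
               node.memory_gb // indexer.memory_gb)
core_util = per_node * indexer.cores / node.cores
mem_util = per_node * indexer.memory_gb / node.memory_gb

print(f"{per_node} indexer containers per node "
      f"({core_util:.0%} cores, {mem_util:.0%} memory)")
```

The tighter the packing, the more fully the assets are "sweated," which is where the cost-economics argument in the conversation comes from.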
>> What do you see as the differentiators with Splunk and HPE? Maybe what's different from sort of the way we used to do things, but also, you know, versus modern-day competition? >> Yeah, all good questions. So I think the general attributes of Splunk are differentiated, and Green Lake is differentiated; when you put them together, you get this classic one-plus-one-equals-three story. What I hear from a lot of our target customers, big enterprises and big public-sector customers, is that they can see the path to these benefits, and they understand in theory how these different technologies would work together, but they're concerned about their own skills and abilities to go build and run those. And the real beauty of Green Lake and Splunk is that this all comes sort of pre-designed, pre-integrated, right, pre-built; HPE is then there providing these running containers as a service. So it's taking a lot of the skills and the concerns off the customer's plate, right, allowing them to fast-track to, you know, cutting-edge technology without any of the risk, and then, most importantly, allowing customers to focus their very finite resources, their people, their time, their money, their cycles, on the things that are going to drive differentiated value back to the business. You know, let's face facts: buying and provisioning hardware is not a differentiating activity. Running containers successfully: not differentiating. Running the core of Splunk: not that differentiating. They can take all of those cycles and focus them instead on the simple mechanics of: how do we get more data in, run more analytics on it, and get more value out? Right? Then you're on the path to really delivering differentiated, you know, sustainable-competitive-advantage type stuff back to the business, back to that digital transformation effort. So taking the skills concerns out, taking the worries out, taking the concerns about new tech out, taking the procurement cycles out, improving scalability: again, quicker, faster, cheaper, better, for sure. >> It's kind of interesting when you look at how the parlance has evolved, from cloud, and then you had private cloud, and now we talk a lot about hybrid. But I'm interested in your thoughts on why Splunk and HPE Green Lake now. I mean, what's happening in the market that makes this the right place and the right time, so to speak? >> Yeah, again, I put cloud right up there with big data as one of those really overloaded terms; we keep redefining it as we go. One way to define it is as an experience, a set of outcomes that customers are looking for, right? What does any one of our mutual customers really want? Well, they want capabilities that are quick to get up and running, that are fast to get to value, that are aligned, price-wise, with how they deliver value to the business, and that they can quickly change, right, as the needs of the business and the operation shift. I think that's the outcome set that people are looking to. Certainly, in the early days of cloud, we thought it was synonymous with public cloud: hey, the way that you get those outcomes is you push things out to the public cloud providers. You know, what we saw is a lot of that motion in cases where there wasn't the best of alignment, right? You didn't get all those outcomes that you were hoping for; the cost savings weren't there. Or, again, these big enterprises, these big organizations, have a whole bunch of other workloads that aren't necessarily public-cloud amenable. But what they want is that same cloud experience.
And this is where you see the evolution into hybrid clouds and into private clouds. Yeah, any one of our customers is looking across the entirety of this landscape: things that are on-prem and probably gonna be on-prem forever; things that they're moving into private cloud environments; things that they're moving into, or growing or expanding or landing net-new in, public cloud. They want those same outcomes, those same characteristics, across all of that. That's a lot of Splunk's value prop as a provider, right? We can go monitor and help you operate and develop and secure exactly all of that, no matter where it's located. Splunk on Green Lake is all about that stack, you know, working in that very cloud-native way, even where it made sense for customers to deploy and operate their own software: even the software they're running over here themselves can behave like the modern, secure workloads that they put into their public cloud environments. >> Well, it's another key proof point that we're seeing throughout the day here: a software leader, you know, and HPE bringing together its ecosystem partners to actually deliver tangible value to customers. Skip, great to hear your perspective today. Really appreciate you coming on the program. >> My pleasure. And thanks so much for having us. Take care, stay well. >> Yeah, cheers, you too. Okay, keep it right there. We're gonna go back to Keith now and have him close out this segment of the program. You're watching HPE Green Lake Day on theCube. >> Alright, so we're seeing some great examples of how Green Lake is supporting a lot of different industries and a lot of different workloads. We just heard from Splunk, really a data-heavy workload that's part of the ecosystem, and we're seeing the progress: the HPC example, manufacturing, and we talked about healthcare and financial services, critical industries that are really driving towards the subscription model. So, Keith, thanks again for joining us. Is there anything else that we haven't hit that you feel our audience should know about? >> Yeah, you bet. You know, we didn't cover some of the new capabilities that are really providing customers with a holistic experience to address their most demanding workloads with HPE Green Lake. So first is our Green Lake managed security services. This provides customers with an enterprise-grade managed security solution that delivers lower costs and frees up a lot of their resources. The second is our HPE Advisory and Professional Services group. They help provide customers with the tools and resources to explore their needs for their digital transformation: think workshops and trials and proofs of concept, and all of the implementation. So you get the strategy piece, you get the advisory piece, and then you get the implementation piece that's required to help them get started really quickly. And then third would be our HPE Ezmeral software portfolio. This provides customers with the ability to modernize their apps and data, unify hybrid cloud and edge computing, and operationalize artificial intelligence, machine learning, and analytics. >> You know, I'm glad that you brought in the machine intelligence piece, the machine learning, because a lot of times that's the reason why people want to go to the cloud; at the same time, you bring in the security piece, one of the big reasons why people want to keep things on-prem. And of course, the use cases here.
We're talking about really bringing that cloud experience, that consumption model, on-prem. I think it's critical for companies, because they're expanding their notion of cloud computing, really extending into hybrid and the edge with a similar experience, or substantially the same experience. So I think folks are gonna look at today's news as real progress. We're pushing you guys on milestones and proof points towards this vision, and it's a critical juncture for organizations, especially those looking for comprehensive offerings to drive their digital transformations. Your thoughts, Keith? >> Yeah, you know, we know that as many as 70 percent of current and future apps and data are going to remain on-prem. They're gonna be in data centers, they're gonna be in colos, they're gonna be at the edge, and, you know, really for critical reasons. And so hybrid is key, as you mentioned a number of times. We wanna help customers transform their businesses and really drive business outcomes in this hybrid, multi-cloud world with HPE Green Lake and our targeted solutions. >> Excellent. Keith, thanks again for coming on the program. Really appreciate your time. >> Always. Thanks so much for having me, and take care, stay healthy, please. >> Alright, keep it right there. Everybody, you're watching HPE Green Lake Day on theCube.

Published Date : Dec 2 2020


Jill Rouleau, Brad Thornton & Adam Miller, Red Hat | AnsibleFest 2020


 

>> (soft upbeat music) >> Announcer: From around the globe, it's theCube, with digital coverage of Ansible Fest 2020, brought to you by RedHat. >> Hello, welcome to theCube's coverage of Ansible Fest 2020. We're not in person, we're virtual. I'm John Furrier, your host of theCube. We've got a great power panel here of RedHat engineers. We have Brad Thornton, Senior Principal Software Engineer for Ansible networking; Adam Miller, Senior Principal Software Engineer for security; and Jill Rouleau, who's a Senior Software Engineer for Ansible cloud. Thanks for joining me today. Appreciate it. Thanks for coming on. >> Thanks. >> Good to be here. >> We're not in person this year because of COVID, a lot going on, but still a lot of great news coming out of Ansible Fest this year. You guys launched a lot since last year. It's been awesome: launched the new platform, the automation platform, grown the certified collections community from five supported platforms to over 50, launched the automation services catalog. Brad, let's start with you. Why are customers successful with Ansible in networking? >> Why are customers successful with Ansible in networking? Well, let's take a step back to a bit of classic network engineering, right? Lots of CLI interaction with the terminal, a real opportunity for human error there, and managing thousands of devices from the CLI becomes very difficult. I think one of the reasons why Ansible has done well in the networking space, and why a lot of network engineers find it very easy to use, is because you can still interact the way you would at the CLI. But what we have the ability to do is pull information from the same CLI that you were using manually, show that as structured data, and then let you take that structured data and push it back to the configuration. So what you get when you're using Ansible is a way to programmatically interface and do configuration management across your entire fleet. It really brings consistency, stability, and speed to network configuration management. >> You know, one of the big hottest areas is, you know, I always ask the folks in the cloud what's next after cloud, and pretty much unanimously it's edge. And edge is super important around automation, Brad. What's your thoughts? As people start thinking about, okay, I need to have edge devices, how does automation play into that? Because networking and edge kind of go hand in hand there. So what's your thought on that? >> Yeah, for sure. It really depends on what infrastructure you have at the edge. You might be deploying servers at the edge, you may be administering IOT devices, and really how you're directing that traffic, either into edge compute or back to your data center. I think one of the places Ansible is going to be really critical is administering the network devices along that path from the edge, from IOT, back to the data center or to the cloud. >> Jill, when you have a cloud, what's your thoughts on that? Because when you think about cloud and multicloud, that's coming around the horizon, you're looking at kind of the operational model. We talked about this a lot last year around having cloud ops on premises and in the cloud. What should customers think about when they look at the engineering challenges and the development challenges around cloud? >> So cloud gets used for a lot of different things, right?
But if we step back, cloud just means any sort of distributed applications, whether it's on prem in your own data center, on the edge, or in a public hosted environment, and automation is critical for making those things work when you have these complex applications that are distributed across, whether it's a rack, a data center, or globally. You need a tool that can help you make sense of all of that. We can't manage things just with "Oh, everything is on one box" anymore. Cloud really just means that things have been exploded out and broken up into a bunch of different pieces, and there's now a lot more architectural complexity, no matter where you're running that. And so I think if you step back and look at it from that perspective, you can actually apply a lot of the same approaches and philosophies to these new challenges as they come up, without having to reinvent the wheel of how you think about these applications just because you're putting them in a new environment, like at the edge or in a public cloud or on a new private on-premise solution. >> It's interesting, you know, I've been really loving the cloud native action lately, especially with COVID; we're seeing a lot more modern apps come out of that. If I could follow up there, how do you guys look at tools like Terraform, and how does Ansible compare to that? Because you guys are very popular in the cloud configuration. You look at cloud native, Jill, your thoughts. >> Yeah. So Terraform and tools like that, things like CloudFormation, or Heat in the OpenStack world, they do really, really great at things like deploying your apps and setting up your stack and getting them out there, and they're really focused on that problem space, which is a hard problem space that they do a fantastic job with. Where Ansible tends to come in, and a tool like Ansible, is what do you do on day two with that application? How do you run an update? How do you manage it in the long term? Something like 60% of the workloads, or of cloud spend at least on AWS, is still just EC2 instances. What do you do with all of those EC2 instances once you've deployed them, once they're in a stack, whatever tool you're managing it with? Ansible is a phenomenal way of getting in there and saying, okay, I have these instances, I know about them, but maybe I just need to connect out and run an update, or add a package, or reconfigure a service that's running on there. And I think you can glue these things together and use Ansible with these other stack deployment based tools really, really effectively.
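To make that day-two pattern concrete, here's a minimal sketch, assuming the amazon.aws collection is installed and AWS credentials are configured; the tag, group names, and use of apt are hypothetical. The aws_ec2 inventory plugin discovers the already-deployed instances, and a short playbook then manages them:

```yaml
# inventory_aws_ec2.yml -- dynamic inventory via the amazon.aws.aws_ec2 plugin
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  # hypothetical tag used to select only the web tier
  tag:role: webserver
keyed_groups:
  # builds groups like role_webserver from the instance tags
  - key: tags.role
    prefix: role
```

```yaml
# update.yml -- day-two management of instances another tool deployed
- name: Patch EC2 instances discovered from the dynamic inventory
  hosts: role_webserver
  become: true
  tasks:
    - name: Apply all pending package updates (Debian/Ubuntu family assumed)
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true
```

Running something like `ansible-playbook -i inventory_aws_ec2.yml update.yml` would then handle the update step Jill describes, regardless of which tool originally deployed the stack.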
>> Real quick, just a quick follow-up on that: what's the big pain point for developers right now when they're looking at these tools? Because they see the path. What are some of the pain points that they're living right now that they're trying to overcome? >> I think one of the problems, kind of coincidentally, is we have so many tools. We're in kind of a tool explosion in the cloud space right now. You could piece together as many tools to manage your stack as you have components in your stack, and just making sense of what that landscape looks like right now, and figuring out what are the right tools for the job I'm trying to do, tools that can be flexible and that are not going to box me into having to spend half of my engineering time just managing my tools and making sense of all of that, is a significant effort and job on its own. >> Yes, too many, I may add. We would joke years ago in the big data surge about the tool train, or what we call the tool shed: after a while, you don't know what's in the back, what you're using every day. People get comfortable with the right tools, but the platform becomes a big part of that, thinking holistically as a system. And Adam, this comes back to security. There are more tools in the security space than ever before. Talking about tool challenges, security is the biggest tool shed. Everyone's got tools; they'd buy everything. But you've got to look at what a platform looks like, and developers just want to have the truth. And when you look at the configuration management piece of it, security is critical. What's your thoughts on the source of truth when it comes into play for these security appliances? >> So the source of truth piece is kind of an interesting one, because this is going to be very dependent on the organization: what type of brownfield environment they've developed, what type of things they rely on, and what types of data they store there. So we have the ability for various sources of truth to come in for your inventory source and the types of information you store with that. This could be tag information on a series of cloud instances or a series of resources. This could be something you store in a network management tool or a CMDB. This could even be something that you put into a privileged access management system, such as CyberArk or HashiCorp Vault. Because of Ansible's flexibility, and because of the way that everything is put together in a pluggable nature, we have the capability to actually bring in all of these components from anywhere in a brownfield environment, in a preexisting infrastructure, as well as new decisions that are being made for the enterprise as they move forward. And we can bring all that together and be that infrastructure glue, be that automation component that can tie all these disjoint, loosely coupled, or completely decoupled pieces together. And that's part of that security posture, remediation, various levels of introspection into your environment, these types of things as we go forward; that's what we're focusing on doing with this. >> What kind of data is stored in the source of truth? >> So what type of data? This could be credentials, or single-use credential access. This could be your inventory data for your systems, what target systems you're trying to reach. It could be various attributes of different systems, to be able to classify them and codify them in different ways. It could be configuration data. You know, we have the ability, with some of the work that Brad and his team are doing, to actually take unstructured data, make it structured, put it into whatever your chosen source of truth is, store it, and then utilize that to decompose it into different vendor-specific syntax representations, those types of things. So we have a lot of different capability there as well. >> Brad, you mentioned parsing; do you have a talk on parsing? Can you elaborate on that? And why should network operators care about it? >> Yeah, welcome to 2020. We're still parsing network configuration and operational state. This is an interesting one. If you had asked me years ago, did I think that we would be investing development time into parsing network configurations with Ansible, I would have said, "Well, I certainly hope not.
"I hope programmability of network devices and the vendors "really have their API's in order." But I think what we're seeing is network containers are still comfortable with the command line. They're still very familiar with the command line and when it comes time to do operational state assessment and health assessment of your network, engineers are comfortable going to the command line and running show commands. So really what we're trying to do in the parsing space is not author brand new parking and parsing engine ourselves, but really leverage a lot of the open source tools that are already out there bringing them into Ansible, so network engineers can now harvest the critical information from usher operational state commands on their network devices. And then once they've gotten to the structure data, things get really interesting because now you can do entrance criteria checks prior to doing configuration changes, right? So if you want to ensure a network device has a very particular operational state, all the BGP neighbors are, for example before pushing configuration changes, what we have the ability to do now is actually parse the command that you would have run from the command line. Use that within a decision tree in your Ansible playbook, and only move forward when the configuration changes. If the box is healthy. And then once the configuration changes are made at the end, you run those same health checks to ensure that you're in a speck can do a steady state and are production ready. So parsing is the mechanism. It's the data that you get from the parsing that's so critical. >> If I had to ask you real quick, just while it's on my mind. You know, people want to know about automation. It's top of mind use case. What are some of these things around automation and configuration parsing, whether it's parsing to other configuration manager, what are the big challenges around automation? Because it's the Holy grail. Everyone wants it now. What are the couches? where's the hotspots that needs to be jumped on and managed carefully? Or the easiest low hanging fruit? >> Well, there's really two pieces to it, right? There's the technology. And then there's the culture. And, and we talk really about a culture of automation, bringing the team with you as you move into automation, ensuring that everybody has the tools and they're familiar with how automation is going to work and how their day job is going to change because of automation. So I think once the organization embraces automation and the culture is in place. On the technology side, low hanging fruit automation can be as simple as just using Ansible to push the commands that you would have previously pushed to the device. And then as your organization matures, and you mature along this kind of path of network automation, you're dealing with larger pieces, larger sections of the configuration. And I think over time, network engineers will become data managers, right? Because they become less concerned about the network, the vendors specific configuration, and they're really managing the data that makes up the configuration. And I think once you hit that part, you've won at automation because you can move forward with Ansible resource modules. You're well positioned to do NETCONF for RESTCONF or... Right once you've kind of grown to that it's the data that we need to be concerned about and it could fit (indistinct) and the operational state management piece, you're going to go through a transformation on the networking side. 
>> So you mentioned-- >> And one thing to note there, if I may: I feel like a piece of this, too, is that you're able to actually bridge teams because of the capability of Ansible, the breadth of technologies that we've had integrations with, and our ability to actually bridge that gap between different technologies and different teams. Once you have that culture of automation, you can start to realize these DevOps and DevSecOps workflow styles that are top of everybody's mind these days. And that's something that I think is very powerful, and I like to try to preach it when I have the opportunity to talk to folks about what we can do, and the fact that we have so much capability and so many integrations across the entire industry. >> That's a great point. DevSecOps is totally a hot topic. When you have software and hardware, it becomes interesting; there's a variety of different equipment. On the security automation side, what kind of security appliances can you guys automate? >> As of today, we are able to do endpoint management systems, enterprise firewalls, security information and event management systems. We're able to do security orchestration, automation, and remediation systems, and privileged access management systems. We're doing some threat intelligence platforms. And we've recently added, I'm sorry, did I say intrusion detection? We have intrusion detection and prevention, and we recently added endpoint security management. >> Huge, huge value there, and I think everyone wants that. Jill, I've got to ask you about the cloud, because the modules came up. What use cases do you see the Ansible modules in for the public cloud? Because you've got a lot of cloud native folks in public cloud, you've got enterprises lifting and shifting, there's a hybrid and multicloud horizon here. What are some of the use cases where you see those Ansible modules fitting well with the public cloud? >> The modules that we have in public cloud can work across all of those things. In our public clouds, we have support for Amazon Web Services, Azure, GCP, and they all support your main services. You can spin up a Lambda, you can deploy ECS clusters, build AMIs, all of those things. And then once you get all of that up there, especially looking at AWS, which is where I spend the most time, you get all your EC2 instances up, and you can now pull that back down into Ansible, build an inventory from that, and seamlessly use Ansible to manage those instances, whether they're running Linux or Windows or whatever distro you might have them running. We can go straight from having deployed all of those services and resources to managing them, and go between your instances, your traditional operating system management, and your cloud services. And if you've got multiple clouds, or if you still have on prem, or if you need to, for some reason, add those remote cloud instances into some sort of on-prem hardware load balancer or security endpoint, we can go between all of those things and glue everything together fairly seamlessly. You can put all of that into Tower and have one view of your cloud and your hardware and your on-prem, and be able to move things between them. >> Just put some color commentary on what that means for the customer in terms of, is it pain reduction, time savings? How would you classify their value? >> I mean, both. Instead of having to go between a number of different tools and say, "Oh, well, for my on-prem I have to use this, but as soon as I shift over to a cloud I have to use these tools, and oh, I can't manage my Linux instances with this tool that only knows how to speak to the EC2 API," you can use one tool for all of these things. So, like we were saying, bring all of your different teams together, give them one tool and one view for managing everything end to end. I think that's pretty killer.
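Here's a hedged sketch of that provision-to-manage handoff, assuming the amazon.aws collection; the tag values, group name, and nginx service are made up for illustration. The first play queries the cloud API, and the second manages the operating system on those same instances in one run:

```yaml
# post_deploy_manage.yml -- from cloud API query to OS management in one run
- name: Discover instances that some other tool (or an earlier play) deployed
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Look up running instances tagged app=storefront (hypothetical tag)
      amazon.aws.ec2_instance_info:
        filters:
          "tag:app": storefront
          instance-state-name: running
      register: found

    - name: Add them to an in-memory inventory group
      ansible.builtin.add_host:
        name: "{{ item.public_ip_address }}"
        groups: storefront_hosts
      loop: "{{ found.instances }}"

- name: Now manage the OS on those same instances
  hosts: storefront_hosts
  become: true
  tasks:
    - name: Make sure the web service is up
      ansible.builtin.service:
        name: nginx
        state: started
```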
"But as soon as I shift over to a cloud, "I have to use these tools. "And, Oh, I can't manage my Linux instances with this tool "that only knows how to speak to, the EC2 to API." You can use one tool for all of these things. So like we were saying, bring all of your different teams together, give them one tool and one view for managing everything end to end. I think that's, that's pretty killer. >> All right. Now I get to the fun part. I want you guys to weigh in on the Kubernetes. Adam, if you can start with you, we'll start with you go in and tell us why is Kubernetes more important now? What does it mean? A lot of hype continues to be out there. What's the real meet around Kubernetes what's going on? >> I think the big thing is the modernization of the application development delivery. When you talk about Kubernetes and OpenShift and the capabilities we have there, and you talk about the architecture, you can build a lot of the tooling that you used to have to maintain, to be able to deliver sophisticated resilient architectures in your application stack, are now baked into the actual platform, so the container platform itself takes care of that for you and removes that complexity from your operations team, from your development team. And then they can actually start to use these primitives and kind of achieve what the cloud native compute foundation keeps calling cloud native applications and the ability to develop and do this in a way that you are able to take yourself out of some of the components you used to have to babysit a lot. And that becomes in also with the OpenShift operator framework that came out of originally Coral S, and if you go to operator hub, you're able to see these full lifecycle management stacks of infrastructure components that you don't... You no longer have to actually, maintain a large portion of what you start to do. And so the operator SDK itself, are actually developing these operators. Ansible is one of the automation capabilities. So there's currently three supported there's Ansible, there's one that you just have full access to the Golang API and then helm charts. So Ansible's specifically obviously being where we focus. We have our collection content for the... carries that core, and then also ReHat to OpenShift certified collection's coming out in, I think, a month or so. Don't hold me to the timeline. I'm shoving in trouble for that one, but we have those things going to come out. Those will be baked into the operator's decay that we fully supported by our customer base. And then we can actually start utilizing the Ansible expertise of your operations team to container native of the infrastructure components that you want to put into this new platform. And then Ansible itself is able to build that capability of automating the entire Kubernetes or OpenShift cluster in a way that allows you to go into a brownfield environment and automate your existing infrastructure, along with your more container native, futuristic next generation, net structure. >> Jill this brings up the question. Why don't you just use native public cloud resources versus Kubernetes and Ansible? What's the... What should people know about where you use that, those resources? >> Well, and it's kind of what Adam was saying with all of those brownfield deployments and to the same point, how many workloads are still running just in EC2 instances or VMs on the cloud. There's still a lot of tech out there that is not ready to be made fully cloud native or containerized or broken up. 
>> Jill, this brings up the question: why don't you just use native public cloud resources versus Kubernetes and Ansible? What should people know about where to use those resources? >> Well, it's kind of what Adam was saying, with all of those brownfield deployments, and to the same point, how many workloads are still running just in EC2 instances or VMs on the cloud. There's still a lot of tech out there that is not ready to be made fully cloud native or containerized or broken up. And with OpenShift, it's one more layer that lets you put everything into a kind of single environment, instead of having to break things up and say, "Oh, well, this application has to go here, and this application has to be in this environment." You can do that across a public cloud and use a little of this component and a little of that component. But if you can bring everything together in OpenShift and manage it all with the same tools on the same platform, it simplifies the landscape of: I need to care about all of these things, and look at all of these different things, and keep track of them, and are my tools all going to work together, and are my tools secure? Anytime you can simplify that part of your infrastructure, I think, is a big win. >> John: You know, I think about-- >> The one thing, if I may: Jill spoke to this, I think, in the way that an architectural, infrastructure person would, but I want to really quickly take the business analyst component of it, which is the hybrid component. If you're trying to address multiple footprints, both on prem and off prem, multiple public clouds, if you're running OpenShift across all of them, you have that single, consistent deployment and development footprint everywhere. So I don't disagree with anything they said; I just wanted to focus specifically on that piece, which is something that I find personally unique, as that was a problem for me in a past life. And that kind of speaks to me. >> Well, speaking of past lives-- >> Humor me as an infrastructure person, thank you. >> Yeah. >> Well, speaking of past lives, OpenStack. You look at Jill with OpenStack; we've been covering it on theCube since OpenStack was rolling out back in the day. But you can also have private cloud. There's a lot of private cloud out there. How do you talk about that? How do people understand using public cloud versus the private cloud aspect of Ansible? >> Yeah, and I think there is still a lot of private cloud out there, and I don't think that's a bad thing. I've kind of moved over onto the public cloud side of things, but there are still a lot of use cases that a lot of different industries and companies have that don't make sense for putting into public cloud. So you still have a lot of these on-prem OpenShift and on-prem OpenStack deployments that make a ton of sense and that are solving a bunch of problems for these folks. And I think they can all work together; we have Ansible that can support both of those. If you're a telco, you're not going to put your network function virtualization on us-east-1 in spot instances, right? When you call nine one one, you don't want that going through the public cloud. You want that to be on dedicated infrastructure that's reliable and well managed and engineered for that use case. So I think we're going to see a lot of ongoing OpenStack and on-prem OpenShift, especially with edge, enabling those types of use cases for a long time. And I think that's great. >> I totally agree with you. I think private cloud is not a bad thing at all, and those things are only going to accelerate, in my opinion. You look at the VM world, they talked about the telco cloud, and you mentioned edge: when 5G comes out, you're going to basically have private clouds everywhere, I guess, in my opinion. But anyway, speaking of VMware, could you talk about the Ansible VMware module real quick? >> Yeah, so we have a new collection that we'll be debuting at Ansible Fest this year for the VMware REST API. So the existing VMware modules that we have use the SOAP API for VMware, and they rely on an external Python library that VMware provides. But with vSphere 6.0, and especially in vSphere 6.5, VMware has stepped up with a REST API endpoint that we find is a lot more performant and offers a lot of options, so we built a new collection of VMware modules to take advantage of that. It's brand new, it's lighter weight, it's much faster, we'll get better performance out of it, and it has reduced external requirements, so you can install it and get started faster. And especially with vSphere 7 continuing to build on this REST API, we're going to see more and more interfaces being exposed that we can take advantage of. We plan to expand the collection as new interfaces are exposed in that API. It's compatible with all of the existing modules; you can go back and forth, use your existing playbooks, and start introducing these. But I think especially on the performance side, and especially as we get these larger clouds and more cloud deployments, edge clouds, where you have these private clouds in lots and lots of different places, the performance benefits of this new collection are going to be really, really powerful for a lot of folks.
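A minimal sketch against the new REST-based collection as Jill describes it; the module name vcenter_vm_info and its return shape are my best understanding of the vmware.vmware_rest collection and worth verifying against its documentation:

```yaml
# vm_report.yml -- query vCenter through the REST API endpoint
- name: List virtual machines over the vSphere REST API
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Fetch the VM list from vCenter
      vmware.vmware_rest.vcenter_vm_info:
      register: vms

    - name: Show the VM names we found
      ansible.builtin.debug:
        msg: "{{ vms.value | map(attribute='name') | list }}"
```

Connection details (vCenter hostname and credentials) are typically supplied via environment variables or module parameters; the lighter-weight win Jill mentions is that no external SOAP SDK needs to be installed for this path.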
>> Awesome. Brad, we didn't forget about you; we're going to bring you back in. Network automation has moved towards the resource modules. Why should people care about them? >> Yeah, resource modules, excuse me. Having been a network engineer for so long, I think some of the most exciting work over the past year and a half has gone into Ansible network resource modules. What the resource modules really do for you is they will reach out to network devices, they will pull back that vendor-native configuration, and the resource module actually does the parsing for you, so there's no manual parsing with the resource modules; we return structured data back to the user that represents the configuration. Going back to your question about the source of truth: you can take that structured data, maybe for your interface config, your OSPF config, your access-list config, and you can store that data in your source of truth. And moving forward, you really spend time as an engineer managing the data that makes up the configuration, and you can share that data across different platforms. If you were to look at a lot of the resource modules, the data model that they support is fairly consistent between vendors. As an example, I can pull OSPF configuration from one vendor and, with very small changes, push that OSPF configuration to a different vendor's platform. So really what we've tried to do with the resource modules is normalize the data model across vendors. It'll never be a hundred percent, because there's functionality that exists in one platform that doesn't exist in another and that's exposed through the configuration, but where we could, we have normalized the data model. So I think it's really introducing the concept of network configuration management through data management, and not through CLI commands anymore.
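To illustrate the data-first flow Brad describes, here's a short sketch with one of the network resource modules, assuming the cisco.ios collection and a network_cli connection; the host group is hypothetical:

```yaml
# interfaces_as_data.yml -- gather config as structured data, then re-apply it
- name: Treat interface configuration as data
  hosts: ios_routers
  gather_facts: false
  connection: ansible.netcommon.network_cli
  tasks:
    - name: Pull the running interface config back as structured data
      cisco.ios.ios_interfaces:
        state: gathered
      register: iface_data

    # At this point iface_data.gathered is structured data that could be
    # committed to Git as the source of truth.

    - name: Re-apply the same data model (a no-op unless something drifted)
      cisco.ios.ios_interfaces:
        config: "{{ iface_data.gathered }}"
        state: merged
```

Because the resource modules across vendors share much of this data model, the gathered data could, with small changes, also be fed to another platform's interfaces module, which is the cross-vendor normalization Brad points to.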
>> Yeah, that's a great point. It just expands the network automation vision. And one of the things that's interesting here in this panel is you're talking about cloud holistically: public, multicloud, private, hybrid, security, network automation as a platform, not just a tool; we're still going to have all kinds of tools out there. And then the importance of automating the edge. I mean, that's a networking game, Brad. I mean, it's a data problem, right? We all know about networking, moving packets from here to there, but automating the data is critical, and if you have bad data, if you have misinformation, it sounds like our current politics, but you know, bad information is bad automation. I mean, what's your thoughts? How do you share that concept with developers out there? What should they be thinking about in terms of the data quality? >> I think that's the next thing we have to tackle as network engineers. It's not, do I have access to the data? You can get the data now from resource modules, you can get the data from NETCONF, from RESTCONF, you can get it from OpenConfig, you can get it from parsing. The question really is, how do you ensure the integrity and the quality of the data that is making up your configurations, and the consistency of the data that you're using to look at operational state? And I think this is where the source of truth really becomes important. If you look at Git as a viable source of truth, you've got all the tools and the mechanisms within Git to use it as your source of truth for network configuration. So network engineers are actually becoming developers, in the sense that they're using a GitOps workflow to manage configuration moving forward. It's just really exciting to see that transformation happen. >> Great panel. Thanks for everyone coming on, I appreciate it. We'll just end this by saying, if you guys could just quickly summarize Ansible Fest 2020 virtual: what should people walk away with? What should your customers walk away with this year? What are the key points? Jill, we'll start with you. >> Hopefully folks will walk away with the idea that the Ansible community includes so many different folks from all over, solving lots of different, interesting problems, and that we can all come together and work together to solve those problems in a way that is much more effective than if we were all trying to solve them individually ourselves. By bringing those problems out into the open and working together, we get a lot done. >> Awesome, Brad? >> I'm going to go with collections, collections, collections. We introduced them last year; this year, they are real. Ansible 2.10, which just came out, is made up of collections. We've got certified collections on Automation Hub, we've got cloud collections, network collections. So they are here, they're the real thing, and I think it just gets better and deeper with more content moving forward. >> All right, Adam? >> Going last is difficult, especially following these two. They covered a lot of ground, and I don't really know that I have much to add beyond the fact that when you think about Ansible, don't think about it in a single context. It is a complete automation solution. The capability that we have is very extensible, it's very pluggable, which is a testament to the collections, and the solutions that we can come up with collectively, thanks to ourselves and everybody in the community, are almost infinite. A few years ago, one of the core engineers did a keynote speech using Ansible to automate Philips Hue light bulbs. This is what we're capable of: we can automate the Fortune 500 data centers and telco networks, and then we can also automate random IOT devices around your house. We have a lot of capability here, and what we can do with the platform is very unique and something special.
And it's very much thanks to the community, the team, the open source development way. I just, yeah-- >> (Indistinct) the open source of truth, being collaborative all is what it makes up and DevOps and Sec all happening together. Thanks for the insight. Appreciate the time. Thank you. >> Thank you. I'm John Furrier, you're watching theCube here for Ansible Fest, 2020 virtual. Thanks for watching. (soft upbeat music)

Published Date : Sep 29 2020


4-video test


 

>>Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with the total energy given by the expression shown at the bottom left of this slide. Here, the sigma variables take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground state problem is to find an assignment of binary spin values that achieves the lowest possible value of the total energy. And an instance of the Ising problem is specified by giving numerical values for the matrix J and the vector h.
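The slide formula itself is not legible in this transcript; reconstructed from the definitions just given, the standard form of the Ising energy and ground state problem is:

```latex
% Ising energy, reconstructed from the talk's definitions
% (\sigma_i: spins, J_{ij}: couplings, h_i: local fields)
H(\sigma) = -\sum_{i<j} J_{ij}\,\sigma_i \sigma_j - \sum_i h_i\,\sigma_i,
\qquad \sigma_i \in \{-1,+1\},
\qquad
\sigma^\star = \underset{\sigma \in \{-1,+1\}^N}{\arg\min}\, H(\sigma).
```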
Although the Ising model originates in physics, we understand the ground state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N for worst-case instances. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best-known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root exponential for N up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-gigahertz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with N equal to 19,289, requiring approximately two days of run time on a single core at 2.4 gigahertz. Now, if we simple-mindedly extrapolate the root exponential scaling from the study out to this scale, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the N equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has N equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 gigahertz. But the much larger so-called World TSP benchmark instance, with N equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for MAX-CUT and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results from MAX-CUT and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, and fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization problems such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms.
These machines, in general, are a novel class of information processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems, in contrast to both more traditional engineering approaches that build Ising machines using conventional electronics and more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or opto-electronic platforms to enable near-term construction of large-scale prototypes that leverage post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optical storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to the linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft, or perhaps mean-field, spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string, giving a proposed solution of the Ising ground state problem. This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory: namely, a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead to both improvements of the core CIM algorithm and a pre-processing rubric for rapidly assessing the CIM suitability of new instances.
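For readers who want a concrete handle on these dynamics, a commonly used noiseless mean-field model of the CIM amplitude evolution (a standard form from the literature, stated here as my paraphrase rather than a formula given in the talk) is:

```latex
% Noiseless mean-field CIM amplitude dynamics (literature-standard sketch;
% notation assumed): x_i is the in-phase amplitude of pulse i, p the
% normalized pump rate, \epsilon the feedback coupling strength.
\frac{dx_i}{dt} = \left(-1 + p - x_i^2\right) x_i + \epsilon \sum_{j} J_{ij}\, x_j .
```

Below threshold (p < 1) the amplitudes relax toward zero; above threshold each x_i settles toward one of the two branches near plus or minus the square root of (p - 1), and the sign pattern read out at the end of the pump ramp is the proposed spin configuration.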
Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to outcoupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, with gain equal to dissipation, the OPO undergoes a sort of lasing transition, and the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it essentially chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by a mutual injection of their outcoupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing, it will inject a perturbation into the other that may interfere either constructively or destructively with the field that it is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground state problem of a ferromagnetic or antiferromagnetic N equals 2 Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase. Clearly, we can imagine generalizing this story to larger N; however, the story doesn't stay as clean and simple for all larger problem instances. To find a more complicated example, we only need to go to N equals 4. For some choices of J_ij at N equals 4, the story remains simple, like the N equals 2 case. The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated N equals 4 instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but suboptimal minimum at large pump power.
The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors tend to become more common at larger N. For the N equals 20 instance shown in the lower plots, where the lower right plot is just a zoom into a region of the lower left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-N examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp. Of course, N equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able reliably to determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-N limit, we can also analyze fully quantum mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of N equals 10^4 to 10^5 to 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger N. Our initial approach to characterizing CIM behavior in the large-N regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, etc. At present, we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So in closing, I should acknowledge the people who did the hard work on the things that I've shown: my group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI research labs. And I should acknowledge funding support from the NSF via the Coherent Ising Machines Expedition in Computing, and also from NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much.
>>I'd like to thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi and I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators: how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. To acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics; some of the biggest examples are metamaterials, which are arrays of small resonators, and recently the field of topological photonics, which is trying to implement a lot of the topological behaviors of models in condensed matter physics in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model, which is a simple summation over the spins: spins can be either up or down, and the couplings are given by the J_ij. And the Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem is shown to be an NP-hard problem. So it's computationally important because it's representative of the NP problems, and NP problems are important because, first, they're hard on standard computers if you use a brute-force algorithm, and they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems and hopefully provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is is a resonator with nonlinearity in it: we pump these resonators and we generate a signal at half the frequency of the pump. One photon of the pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. And if you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of the important characteristics of them. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendulums. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, which is the length of the string; that modulation acts as the pump, and the resulting oscillation is the signal, at half the frequency of the pump. And I have two of them, to show you that they can acquire these phase states: they're still phase- and frequency-locked to the pump, but they can settle into either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin. So each OPO is going to represent a spin, which can be either zero or pi, or up or down.
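Stating that mapping explicitly (notation mine, not from the talk):

```latex
% Binary OPO phase states encode Ising spins:
\sigma_i =
\begin{cases}
+1, & \phi_i = 0 \\
-1, & \phi_i = \pi
\end{cases}
```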
And to implement the network of these resonators, we use a time-multiplexing scheme, and the idea is that we put pulses in the cavity. These pulses are separated by the repetition period that you put in, or T_R, and you can think about these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator one to two, two to three, and so on. If you look at the second delay, which is two times the repetition period, it couples one to three, and so on. And if you have N minus one delay lines, then you can have any potential couplings among these synthetic resonators. And if I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right time, then I can have a programmable all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is then having these OPOs, each of which can be either zero or pi, and I can arbitrarily connect them to each other. Then I start by programming this machine to a given Ising problem, by just setting the couplings and setting the controllers in each of those delay lines. So now I have a network which represents an Ising problem; then the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints, and the way it happens is that the Ising Hamiltonian maps to the linear loss of the network. And if I start adding gain by just putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. And we have been doing this over the past six or seven years, and I'm just going to quickly show you the transitions: especially what happened in the first implementation, which was using a free-space optical system; then the guided-wave implementation in 2016; and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make this distinction here that the first implementation was an all-optical interaction; we also had an n equals 16 implementation; and then we transitioned to this measurement-feedback idea, which I'll tell you quickly what it is. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using the measurement feedback, but I'm going to mostly focus on the all-optical networks: how we're using all-optical networks to go beyond simulation of the Ising Hamiltonian, both on the linear and the nonlinear side, and also how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine, and we implemented a small, n equals 4, MAX-CUT problem on the machine. So one problem for one experiment; we ran the machine 1000 times, we looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian.
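The delay-line coupling rule just described is easy to state in code. Here is a sketch of how a set of delays maps to a coupling matrix; the function name and the particular strengths are illustrative assumptions, not values from the experiment:

```python
import numpy as np

def coupling_from_delays(n_pulses, delay_strengths):
    # delay_strengths[d] = modulator-set amplitude on the delay line of d
    # round-trip periods; a delay of d*T_R couples pulse i-d into pulse i.
    J = np.zeros((n_pulses, n_pulses))
    for d, strength in delay_strengths.items():
        for i in range(d, n_pulses):
            J[i, i - d] = strength
    return J

# Shortest delay couples 1->2, 2->3, ...; the delay of 2*T_R couples 1->3, ...
print(coupling_from_delays(5, {1: 0.3, 2: -0.1}))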
Then the measurement-feedback idea was to replace those couplings and the controller with a simulator: we basically simulated all those coherent interactions on an FPGA, and we regenerated the coherent pulse with respect to all those measurements and injected it back into the cavity. The nonlinearity still remains, so it still is a nonlinear dynamical system, but the linear side is all simulated. So there are lots of questions about whether this system is preserving the important information or not, or whether it's going to behave better computation-wise, and that's still a lot of ongoing study. But nevertheless, the reason that this implementation was very interesting is that you don't need the N minus one delay lines, so you can just use one; then you can implement a large machine, and then you can run several thousands of problems on the machine, and then you can compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part, which is: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix multiplication scheme, and that's basically what gives you the Ising Hamiltonian modeling: the optical loss of this network corresponds to the Ising Hamiltonian. And if I just want to show you the example of the n equals 4 experiment, with all those phase states and the histogram that we saw, you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain; then you start bringing up the gain so that it hits the loss; then you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation, so you go either to the zero or the pi phase state. And the expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about; I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks.
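The threshold bifurcation just described can be seen in a one-line amplitude model. This sketch uses a standard normalized DOPO amplitude equation, assumed here for illustration rather than taken from the talk:

```python
# dx/dt = (p - 1) x - x^3: linear gain minus loss, plus gain saturation.
def dopo_steady_state(pump, x0=1e-3, dt=1e-2, steps=20000):
    x = x0
    for _ in range(steps):
        x += dt * ((pump - 1.0) * x - x**3)
    return x

for p in (0.5, 1.5):
    print(p, dopo_steady_state(p, x0=+1e-3), dopo_steady_state(p, x0=-1e-3))
# p = 0.5 (below threshold): both initial conditions decay to ~0.
# p = 1.5 (above threshold): +/- sqrt(p - 1), the two (0 / pi) phase states.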
So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. And the difference between looking at the topological behaviors and the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different than the Ising Hamiltonian, and one of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that you go from one spin on one side to the other side and you get one phase, and if you go back, you get a different phase. And the other thing is that we're not just interested in finding the ground state; we're actually now interested in looking at all sorts of states, and looking at the dynamics and the behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one-dimensional chain of these resonators, which corresponds to the so-called SSH model in the topological world. We get the similar energy-to-loss mapping, and now we can actually look at the band structure; this is an actual measurement that we get with this SSH model, and you see how well it actually follows the prediction and the theory. One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine, and that's something unique about this time-multiplexed implementation, so we can actually look at the dynamics. One example that we have looked at is that we can actually go through the transition from the topological to the trivial behavior of the network: you can then look at the edge states, and you can also see the trivial end states and the topological edge states actually showing up in this network. We have just recently implemented a two-dimensional network with the Harper-Hofstadter model; we don't have the results here, but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and dynamics, and we can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic nonclassical and quantum nonlinear behaviors in these networks. So I told you about the linear side mostly; let me just switch gears and talk about the nonlinear side of the network. The biggest thing that I talked about so far in the Ising machine is the phase transition at threshold: below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and then we get the phase states above threshold. And this is basically the mechanism of the computation in these OPOs, which is through this phase transition from below to above threshold. One of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, and that's basically corresponding to the intensity of the driving pump. So it's really hard to imagine that you can have this phase transition happen all in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network, which is, for example: if one OPO starts oscillating and its intensity goes really high, then it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is, can we look at other phase transitions, can we utilize them for computing, and also can we bring them to the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. And what is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case, the signal can acquire any of the phases on the circle, so it has a U(1) symmetry.
And if you go to the degenerate case, then that symmetry is broken, and you only have the zero and pi phase states. So now the question is, can we utilize this phase transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions we're also thinking about. And this phase transition is not just important for computing: it's also interesting for its sensing potential, and you can easily bring it below threshold and operate it in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, now we can see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs, and that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are actually very interesting to explore, both in the classical and quantum regimes. And I should also mention that you can think about the couplings being nonlinear couplings as well, and that's another behavior you can see, especially in the non-degenerate regime. So with that, I've basically told you about these OPO networks, how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. And of course the motivation is: if you look at electronics, and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements where we are now. With optics, we are probably very similar to 70 years ago, which is at the tabletop implementation stage, and the question is, how can we utilize nanophotonics? I'm going to just briefly show you the two directions that we're working on: one is based on lithium niobate, and the other is based on even smaller resonators. So the work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard and also Marty Fejer at Stanford, and we could show that you can do the periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in this nanophotonic, periodically poled lithium niobate. And now we're working on building OPOs based on that kind of nanophotonic thin-film lithium niobate, and these are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks; I also want to point out that the reason these nanophotonic platforms are actually exciting is not just because you can make large networks and make them compact in a small footprint: they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO, which is: can we have the quantum superposition of the zero and pi states that I talked about?
The nanophotonic lithium niobate platform provides some opportunities to actually get closer to that regime, because of the spatiotemporal confinement that you can get in these waveguides. So we're doing some theory on that; we're confident that the ratio of nonlinearity to losses that you can get with these platforms is actually much higher than what you can get with other existing platforms. And to go even smaller, we have been asking the question of what is the smallest possible OPO that you can make. You can think about really wavelength-scale-type resonators, adding the chi-two nonlinearity, and seeing how and when you can get the OPO to operate. And recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if we can build the OPOs, we know that there is a path for implementing OPO networks at such a nanoscale. We have looked at these calculations, and we tried to estimate the threshold of OPOs, say for a wavelength-scale resonator, and it turns out that it can actually be even lower than the type of bulk PPLN OPOs that we have been building in the past 50 years or so. So we're working on the experiments, and we're hoping that we can actually make larger and larger scale OPO networks. So let me summarize the talk: I told you about the OPO networks and our work that has been going on on Ising machines and the measurement feedback; I told you about the ongoing work on the all-optical implementations, both on the linear side and also on the nonlinear behaviors; and I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to thank you. >> I'm from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and also I would like to say that it has been very exciting to see the growth of this new PHI Lab. I'm happy to share with you today some of the recent works that have been done either by me or by colleagues in our group. The title of my talk is "A neuromorphic in silico simulator for the coherent Ising machine," and here is the outline: I would like to make the case that the simulation in digital electronics of the CIM can be useful for better understanding or improving its functional principles, by introducing some ideas from neural networks. This is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, and a projection of the performance that can be achieved using a very large-scale simulator in the third part, and finally I will talk about future plans. So first, let me start by comparing recently proposed Ising machines, using this table, which is adapted from a recent Nature Electronics paper from the Hewlett Packard people, and this comparison shows that there's always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation. In red here are the limitations of each of the proposed hardware. Interestingly, the FPGA-based systems, such as the Digital Annealer, the Toshiba bifurcation machine, or a recently proposed restricted Boltzmann machine FPGA by a group in Berkeley, offer a good compromise between speed and scalability.
And this is why, despite the unique advantages that some of these other hardware have, such as the coherent superposition in CIMs or the energy efficiency of memristors, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, nor that they are particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-in and fan-out, and the long propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems, rather than the physics of electrons and photons. To put the performance of these various hardware in perspective, we can look at the computation done by the brain: the brain computes using billions of neurons, using only 20 watts of power, and operates at what seem to be very low frequencies. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large-scale simulator in silico, in the bottom here, that can be used for investigating better organization principles of the CIM. In this talk, I will talk about three neuro-inspired principles: the asymmetry of connections, and neural dynamics that are often chaotic because of that asymmetry; the microstructure of connectivity, in that neural networks are not composed of the repetition of always the same types of neurons, but there is a local structure that is repeated (here is a schematic of the microcolumn in the cortex); and lastly, the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in silico simulation? First, about the two principles of asymmetry and microstructure. We know that the classical approximation of the coherent Ising machine is analogous to rate-based neural networks. In the case of the Ising machines, the classical approximation can be obtained using the truncated Wigner representation, for example, so the dynamics of the system can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part, that is, the degenerate optical parametric amplification, and the sum over J_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the computed coupling term. And this dynamics, in both cases of the CIM and neural networks, can be written as gradient descent of a potential function V, written here, and this potential function includes the Ising Hamiltonian.
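As a concreteness check on the gradient-descent statement, here is a sketch using an assumed normalized mean-field form of the CIM equations (not the speaker's exact normalization): the right-hand side equals minus the gradient of a potential containing the Ising term, provided the couplings are symmetric.

```python
import numpy as np

def potential(x, J, p, eps):
    # V = sum_i [-(p-1)/2 x_i^2 + x_i^4/4] - (eps/2) x^T J x
    return np.sum(-0.5 * (p - 1.0) * x**2 + 0.25 * x**4) - 0.5 * eps * x @ J @ x

def cim_rhs(x, J, p, eps):
    # dx/dt = (p - 1) x - x^3 + eps * J x
    return (p - 1.0) * x - x**3 + eps * J @ x

rng = np.random.default_rng(1)
n = 6
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                 # the potential exists only for symmetric J
np.fill_diagonal(J, 0.0)
x = rng.normal(size=n)

h, p, eps = 1e-6, 0.5, 0.1
grad = np.array([
    (potential(x + h * np.eye(n)[i], J, p, eps)
     - potential(x - h * np.eye(n)[i], J, p, eps)) / (2 * h)
    for i in range(n)
])
print(np.allclose(cim_rhs(x, J, p, eps), -grad, atol=1e-5))   # True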
So this is why it's natural to use this type of dynamics to solve the Ising problem, in which the Omega_ij are the Ising couplings and the h_i is the external field of the Ising Hamiltonian. Note that this potential function can only be defined if the Omega_ij are symmetric. The well-known problem of this approach is that the potential function V that we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process, but there is no theorem, unfortunately, that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce a microstructure into the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this microstructure introduces asymmetry into the system, which in turn induces chaotic dynamics: a chaotic search, rather than a gradient descent, for the ground state of the Ising Hamiltonian. Within this microstructure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitude of the x_i to become equal to a certain target amplitude a, and this is done by modulating the strength of the Ising couplings: as you see here, the error variable e_i multiplies the Ising coupling term in the dynamics of each DOPO. The whole dynamics is then described by these coupled equations. Because the e_i do not necessarily take the same value for the different i, this introduces asymmetry into the system, which in turn creates chaotic dynamics, which I show here for solving a certain size of SK problem, in which the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plots. You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics does not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using the modulation of the target amplitude. And so we have proposed in the past two different modulations of the target amplitude: the first one is a modulation that ensures that the entropy production rate of the system becomes positive, and this forbids the creation of any nontrivial attractors; but in this work I will talk about another modulation, a restricted modulation, which is given here, that works as well as the first modulation but is easier to implement on an FPGA.
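Here is a sketch of these coupled spin/error-variable equations; the parameter values, step size, and the random instance are illustrative assumptions, not the tuned values from the paper, and a simple explicit Euler step stands in for the actual integration scheme:

```python
import numpy as np

def cac_step(x, e, J, p, a, beta, dt):
    dx = (p - 1.0) * x - x**3 + e * (J @ x)   # couplings rescaled by e_i
    de = -beta * (x**2 - a) * e               # drives each x_i^2 toward target a
    return x + dt * dx, e + dt * de

rng = np.random.default_rng(2)
n = 16
J = rng.choice([-1.0, 1.0], size=(n, n)) / np.sqrt(n)
J = np.triu(J, 1)
J = J + J.T                                   # symmetric random instance
x = 1e-3 * rng.normal(size=n)
e = np.ones(n)
for _ in range(200000):                       # small dt for numerical stability
    x, e = cac_step(x, e, J, p=1.1, a=1.0, beta=0.05, dt=1e-3)

spins = np.sign(x)
print(spins, -0.5 * spins @ J @ spins)        # final spins and their Ising energy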
So these coupled equations, which represent the simulation of the coherent Ising machine with error correction, can be implemented especially efficiently on an FPGA, and here I show the time that it takes to simulate the system, and also, in red, the time that it takes to simulate the x_i term, the e_i term, the dot product, and the Ising Hamiltonian, for a system with 500 spins and error variables, equivalent to 500 DOPOs. In the FPGA, the nonlinear dynamics, which corresponds to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles at 300 MHz, which corresponds to about 0.04 microseconds. And this is to be compared to what can be achieved in the measurement-feedback CIM, in which, if we want to get 500 time-multiplexed DOPOs with a one-GHz repetition rate, we would require 0.5 microseconds to do this; so the simulation on FPGA can be at least as fast as a one-GHz repetition-rate pulsed-laser CIM. Then the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about 0.14 microseconds. So, for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product, with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on an FPGA by an adder tree, whose height scales logarithmically with the size of the system. But this is in the case where we have an infinite amount of resources on the FPGA; for dealing with larger problems of more than 100 spins, usually we need to decompose the matrix into smaller blocks, with a block size that is noted N_u here, and then the scaling becomes, for the nonlinear part, linear in N over N_u, and for the dot product, N squared over N_u. Typically, for a low-end FPGA, the block size of this matrix is about 100. So clearly we want to make N_u as large as possible, in order to maintain the log-N scaling of the number of clock cycles needed to compute the dot product, rather than the N-squared scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that having a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing the electrical components within the FPGA hierarchically, in the way shown here in this right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA, but this is just to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance when simulating Ising machines. So instead of getting into the details of the FPGA implementation, I would like to give some benchmark results for this simulator, which was used as a proof of concept for this idea, and which can be found in this arXiv paper here. Here I show results for solving SK problems:
fully connected, random plus/minus-one spin-glass problems. We use as a metric the number of matrix-vector products, since it's the bottleneck of the computation, needed to get the optimal solution of this SK problem with 99 percent success probability, against the problem size. Here, in red, is this proposed FPGA implementation, and in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems, and in green, for noisy mean-field annealing, which is an algorithm whose behavior is similar to the coherent Ising machine. And so clearly you see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches. So that's an interesting feature of the system, and next we can see what the real time-to-solution is to solve these SK instances. So in this next slide, I show the time-to-solution in seconds to find a ground state of SK instances with 99 percent success probability for different state-of-the-art hardware: in red is the FPGA implementation proposed in this paper, and the other curves represent, for example, breakout local search in orange and simulated annealing in purple. And you see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristor crossbars, which is very fast for small problem sizes, in blue here, but whose scaling is not good, and the same for the restricted Boltzmann machine implemented on FPGA proposed by the group in Berkeley recently, which is again very fast for small problem sizes but whose scaling is bad, so that it is worse than the proposed approach; so we can expect that for problem sizes larger than 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide: another confirmation that the scheme scales well is that we can find maximum-cut values for the G-set benchmark instances that are better than the ones previously found by any other algorithm, so they are the best known cut values to the best of our knowledge, as shown in this paper's table here. In particular, for instances 14 and 15 of this G-set, we can find better cut values than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm used to do this. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters: the tuning used here is very simple, it just depends on the degree of connectivity within each graph. And so these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems, but all types of graph Ising problems and MAX-CUT problems in general.
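Before the projections, here is a toy cost model of the adder-tree/block trade-off described earlier; the unit latencies and the exact formula are my own illustration of the stated scalings, not the paper's figures:

```python
import math

def matvec_cycles(n, n_u):
    # (N / N_u)^2 blocks, each reduced by an adder tree of depth ~ log2(N_u).
    blocks = math.ceil(n / n_u) ** 2
    tree_depth = math.ceil(math.log2(max(n_u, 2)))
    return blocks * tree_depth

for n in (500, 1000, 2000):
    print(n, {n_u: matvec_cycles(n, n_u) for n_u in (100, 500, 2000)})
# Larger N_u keeps the count near log(N); small N_u degrades toward N^2 / N_u.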
So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of this adder tree on a large FPGA, carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future, based on the implementation that we are currently working on. Here you see the projection for the time-to-solution, with 99 percent success probability, for solving SK problems with respect to the problem size, compared to different state-of-the-art Ising machines, in particular the digital annealer, which is shown by the green line without dots here. And we show two different hypotheses for these projections: either that the time-to-solution scales as an exponential of N, or that the time-to-solution scales as an exponential of the square root of N. It seems, according to the data, that the time-to-solution scales more as an exponential of the square root of N, and these projections show that we can probably solve SK problems of size 2000 spins, finding the real ground state of the problem with 99 percent success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So, among the future plans for this coherent Ising machine simulator, the first thing is that we would like to make the simulation closer to the real DOPO optical system; in particular, as a first step, to get closer to the measurement-feedback CIM. And to do this, what is simulatable on the FPGA is this quantum Gaussian model that is described in this paper, proposed by people in the NTT group. The idea of this model is that, instead of having the very simple ODEs that I have shown previously, it includes paired ODEs that take into account not only the mean of the in-phase component, but also its variance, so that we can take into account more quantum effects of the DOPO, such as the squeezing. And then we plan to make the simulator open access, for members to run their instances on the system. There will be a first version in September, which will be based on simple command-line access to the simulator, and in which we will have just the classical approximation of the system, with a noise term, binary weights, and a Zeeman term. Then we will propose a second version, which will extend the current Ising machine to a hierarchy of FPGAs, in which we will add the more refined models, such as the truncated Gaussian model I just talked about, and in which real-valued weights for the Ising problems and the Zeeman term will be supported. We will announce later when this is available; this work is ongoing. >> Our next speaker comes from the University of Notre Dame physics department. >> I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I'd also like to say that I look forward to collaborations with the PHI Lab, and with Yoshi and collaborators, on the topics of this workshop. So today I'll briefly talk about our attempt to understand the fundamental limits of analog continuous-time computing, at least from the point of view of Boolean satisfiability problem solving, using ordinary differential equations. But I think the issues that we raise on this occasion actually apply to other analog approaches as well, and to other problems as well.
I think everyone here knows what Boolean satisfiability problems are: you have N Boolean variables, you have M clauses, each a disjunction of literals, where a literal is a variable or its negation, and the goal is to find an assignment to the variables such that all clauses are true. This is a decision-type problem from the NP class, which means you can check in polynomial time the satisfiability of any assignment. And k-SAT is NP-complete for k equal to three or larger, which means that an efficient 3-SAT solver implies an efficient solver for all the problems in the NP class, because all the problems in the NP class can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to set packing, or to maximum independent set, which is set packing in graph-theoretic terms, or to the decision version of the Ising spin-glass problem. This is useful when you're comparing different approaches that work on different kinds of problems. When not all the clauses can be satisfied, you're looking at the optimization version of SAT, called MAX-SAT, and the goal here is to find an assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications: if we had an efficient SAT solver, or NP-complete problem solver, it would literally, positively influence thousands of problems and applications in industry and in science. I'm not going to read this, but this, of course, gives a strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. So instead of working with zeros and ones, we work with minus ones and plus ones, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it's plus one; if it contains the variable in negated form, it's minus one. And then we use this to formulate these products, called clause violation functions, one for every clause, which vary continuously between zero and one, and they're zero if and only if the clause itself is true. Then, in order to define the dynamics in this N-dimensional hypercube, where the search happens and where, if solutions exist, they are sitting in some of the corners of this hypercube, we define this energy potential, or landscape function, shown here, in a way that it is zero if and only if all the clause violation functions, all the K_m, are zero, that is, all the clauses are satisfied, keeping these auxiliary variables a_m always positive. And therefore, what you have here is a dynamics that is essentially a gradient descent on this potential energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum; however, what we do here is couple them with the dynamics: we couple them to the clause violation functions, as shown here.
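Here is a minimal sketch of this construction, in the spirit of published continuous-time SAT formulations of this type; the tiny instance, the step size, the explicit Euler integration, and the clipping to the hypercube are my own simplifications:

```python
import numpy as np

def clause_violation(c, s):
    # K_m = 2^-k_m * prod_i (1 - c_mi * s_i); zero iff clause m is satisfied
    k_m = np.count_nonzero(c, axis=1)
    return (2.0 ** -k_m) * np.prod(1.0 - c * s, axis=1)

def rhs(c, s, a):
    m_cl, n = c.shape
    f = 1.0 - c * s                       # per-literal factors
    k_m = np.count_nonzero(c, axis=1)
    Km = (2.0 ** -k_m) * np.prod(f, axis=1)
    grad = np.zeros(n)                    # gradient of V = sum_m a_m * K_m^2
    for m in range(m_cl):
        for i in range(n):
            if c[m, i] != 0:
                dK = (2.0 ** -k_m[m]) * (-c[m, i]) * np.prod(np.delete(f[m], i))
                grad[i] += 2.0 * a[m] * Km[m] * dK
    return -grad, a * Km                  # ds/dt = -grad V; da_m/dt = a_m * K_m

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
c = np.array([[ 1.0,  1.0,  0.0],
              [-1.0,  0.0,  1.0],
              [ 0.0, -1.0, -1.0]])
s, a, dt = np.full(3, 0.01), np.ones(3), 0.05
for _ in range(4000):
    ds, da = rhs(c, s, a)
    s = np.clip(s + dt * ds, -1.0, 1.0)
    a += dt * da
print(np.sign(s), clause_violation(c, np.sign(s)))  # assignment and residuals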
If you didn't have this a_m here, and had just the K_m, for example, you would essentially have positive feedback, an increasing variable, but in that case you would still get stuck: it is better than the constant version, but it would still get stuck. Only when you put in this a_m, which makes the dynamics in this variable exponential-like, does it keep searching until it finds a solution, and there is a reason for that, which I'm not going to talk about here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape. And this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, the number of trajectories in it decays exponentially quickly, and the decay rate is a characteristic invariant of the dynamics itself; in dynamical systems it's called the escape rate. The inverse of that is the time scale on which you find solutions with this dynamical system, and you can see here some sample trajectories that are chaotic, because it's nonlinear; but the chaos is transient, of course, because eventually the trajectory converges to the solution. Now, in terms of performance: what we show here, for a bunch of constraint densities, defined by M over N, the ratio between clauses and variables, for random 3-SAT problems, and as a function of N, is the wall-clock time that we monitor, and it behaves quite well, polynomially, until you actually reach the SAT/UNSAT transition, where the hardest problems are found. But what's more interesting is if you monitor the continuous time t: the performance in terms of the analog continuous time t, because that seems to be polynomial. And the way we show that is: we consider random 3-SAT for a fixed constraint density, to the right of the threshold, where it's really hard, and we monitor the fraction of problems that we have not been able to solve. We select thousands of problems at that constraint ratio, solve them with our algorithm, and monitor the fraction of problems that have not yet been solved by continuous time t. And this, as you see, decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law. So if you combine these two, you find that the time needed to solve all problems, except maybe a tiny fraction of them, scales polynomially with the problem size. So you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint satisfaction problems, such as exact cover, because you can always transform them into 3-SAT, as we discussed before, or Ramsey coloring, and on these problems even algorithms like survey propagation will fail.
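The escape-rate picture just described suggests a simple fitting procedure. A sketch, with synthetic exponential solve times standing in for real solver runs (the stand-in data is an assumption):

```python
import numpy as np

def escape_rate(solve_times, t_grid):
    # p(t) = fraction unsolved by continuous time t ~ exp(-kappa * t);
    # fit the log of the tail to extract the escape rate kappa.
    frac = np.array([(solve_times > t).mean() for t in t_grid])
    y = np.log(np.clip(frac, 1e-12, None))
    kappa = -np.polyfit(t_grid, y, 1)[0]
    return kappa

rng = np.random.default_rng(3)
times = rng.exponential(scale=4.0, size=5000)  # stand-in for measured times
print(escape_rate(times, np.linspace(0.5, 15.0, 30)))  # ~ 0.25 = 1 / scale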
But this doesn't mean that P equals NP, because of what you have here: first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes a physical wall-clock time, and that will be polynomial in scaling; but you have these other variables, the auxiliary variables, which grow in an exponential manner. So if they represent currents or voltages in your realization, there would be an exponential cost altogether. So this is some kind of trade-off between time and energy: I don't know how to generate time, but I do know how to generate energy, so energy could be used for it. But there are other issues as well, especially if you're trying to do this on a digital machine, and other problems appear in physical devices too, as we discuss later. If you implement this on GPUs, you can get an order of two magnitudes of speed-up, and you can also modify this to solve MAX-SAT problems quite efficiently: we are competitive with the best heuristic solvers, these being the winners of the 2016 MAX-SAT competition. So this definitely seems like a good approach, but there are of course limitations; interesting limitations, I would say, because they make you think about what it all means and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator (when you solve this on a digital machine, you're using some kind of integrator) and you use the same approach, but now measure the number of problems you haven't solved after a given number of discrete steps taken by the integrator, you find out that you have exponential discrete-time complexity, and of course this is a problem. And if you look closely at what happens: even though the integrator follows the analog mathematical trajectory, that's the red curve here, very closely, to something like the third or fourth decimal position, the step size fluctuates like crazy, so it really is as if the integration freezes out. And this is because of the phenomenon of stiffness, which I'll talk a little bit more about a bit later. It might look like an integration issue on digital machines, something that you could improve, and you can definitely improve it, but actually the issue is bigger than that; it's deeper than that, because on a digital machine there is no time-to-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there's no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. Now, one would be tempted to think maybe this wouldn't be an issue in an analog device, and to some extent that is true: analog devices can be orders of magnitude faster, but they also suffer from their own problems, because noise is going to affect those classes of solvers as well. Indeed, if you look at other systems, like Ising machines with measurement feedback, polariton condensate graphs, or oscillator networks, they all hinge on some kind of ability to control your variables with arbitrarily high precision: in oscillator networks you want to read out phases or frequencies; in the case of CIMs, you require identical pulses and pump, which are hard to keep, and they kind of fluctuate away from one another, shift away from one another, and only if you can control that can you control the performance. So one can actually ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage from 1978,
who showed, with a purely computer-science proof, that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you could solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you have limited precision. So the next question is: how does that affect the computation of these problems? This is what we're after. Loss of precision means information loss, or entropy production. So what we're really looking at is the relationship between the hardness of a problem and the cost of computing it. According to Schönhage, there's this left branch, which in principle could be polynomial time; but the question is whether or not this is achievable. Probably it is not achievable, and something more truthful is on the right-hand side: there's always going to be some information loss, some entropy generation, that could keep you away, possibly, from polynomial time. So this is what we would like to understand, and the source of this information loss, I will argue, is not just noise, as in any physical system, but is also of algorithmic nature, so the algorithm or approach is in question as well. But Schönhage's result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity: would our solver in principle be such a solver, since Schönhage is not proposing a solver with such properties? In principle, if you look mathematically, precisely, at what the solver does, would it have the right properties? I argue yes: I don't have a mathematical proof, but I have some arguments that that would be the case. And this is the case for our continuous-time SAT solver: if you could calculate its trajectory in a lossless way, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a slightly more difficult question, because time in ODEs can be rescaled however you want. So the right statement is that you actually have to measure the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system, not of its parametrization. And we did that: my student did that, first improving on the stiffness of the integration problem, using implicit solvers and some smart tricks, such that you actually stay closer to the actual trajectory. And using the same approach, looking at what fraction of problems you can solve, but now against the length of the trajectory, you find that it is polynomially scaling with the problem size, so we have polynomial-length complexity. That means that our solver is both a poly-length and, as it is defined, also a poly-time analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver. And the reason is, because of all this stiffness, every integrator has to truncate: digitizing truncates the equations, and what it has to do is keep the integration within the so-called stability region for that scheme, and you have to keep the product of the eigenvalues of the Jacobian and the step size within this region. If you use explicit methods, you want to stay within this region.
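A sketch of the explicit-method constraint just stated, on the standard scalar test equation dx/dt = lambda * x (a textbook illustration, not the speaker's example):

```python
import numpy as np

def max_stable_dt_explicit_euler(jacobian_eigenvalues):
    # Explicit Euler is stable only if |1 + lambda * dt| <= 1 for every
    # eigenvalue lambda, i.e. dt <= 2 / |lambda| for real negative lambda.
    lam = np.asarray(jacobian_eigenvalues, dtype=float)
    return np.min(2.0 / np.abs(lam[lam < 0.0]))

print(max_stable_dt_explicit_euler([-1.0, -10.0]))   # 0.2
print(max_stable_dt_explicit_euler([-1.0, -1.0e6]))  # 2e-6: stiffness freeze-out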
But what happens is that some of the eigenvalues grow fast for stiff problems, and then you're forced to reduce delta t so that the product stays in this bounded domain, which means that you're forced to take smaller and smaller time steps, so you're freezing out the integration, and what I showed you is that this is the case. Now you can move to implicit solvers; in this case the stability domain is actually on the outside. But what happens in this case is that some of the eigenvalues of the Jacobian, also for stiff systems, start to move to zero. As they're moving to zero, they're going to enter this instability region, so your solver is going to try to keep them out, so it's going to increase the delta t. But if you increase the delta t, you increase the truncation errors, so you get randomized in this large search space, so it's really not going to work out. Now, one can sort of introduce a theory, or a language, to discuss analog computational complexity using the language of dynamical systems theory. Basically, I don't have time to go into this, but for hard problems you have a chaotic saddle, a chaotic object, in the middle of the search space somewhere, and that dictates how the dynamics happens; the invariant properties of the dynamics, of course of that saddle, are what dictate the performance and many other things. So an important new measure that we find helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy, and basically what this does, in an intuitive way, is describe the rate at which the uncertainty contained in the insignificant digits of a trajectory flows towards the significant ones, as you lose information, because errors grow, or develop into larger errors, at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property: it's a property of the dynamical system itself, not of how you compute it, and it's really the intrinsic rate of accuracy loss of a dynamical system. As I said, in such a high-dimensional dynamical system you have positive and negative Lyapunov exponents, as many in total as the dimension of the space; the number of positive ones gives the number of unstable manifold dimensions, and the number of negative ones the stable manifold directions. And there's an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate: the sum of the positive Lyapunov exponents, minus kappa, which is the escape rate that I already talked about.
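For reference, before the argument continues, the relation just mentioned can be written compactly; this is the standard open-system form of the Pesin identity, stated here as given in the talk:

```latex
% Kolmogorov-Sinai (metric) entropy of an open chaotic system:
% sum of positive Lyapunov exponents minus the escape rate kappa.
h_{\mathrm{KS}} \;=\; \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa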
Now, one can actually prove simple theorems, like a back-of-the-envelope calculation. The idea here is that you know the rate, the largest rate, at which closely started trajectories separate from one another. So now you can say that that is fine, as long as my trajectory finds the solution before the trajectories separate too much. In that case, I can have the hope that if I start from some region of the phase space with several closely started trajectories, they often go into the same solution, and that's this upper bound, this limit, and it really shows that this has to be an exponentially small number; what matters is the N-dependence of the exponent right here, which combines the information-loss rate and the time-to-solution performance. So if this exponent here has a large N-dependence, for example a linear N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. So this is sort of the direction we're going in, and this formulation is applicable to all deterministic dynamical systems. And I think we can expand this further, because there is a way of getting the expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I don't have time to talk about; it's kind of like a program that you can try to pursue, and this is it. So the conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing: it can be more efficient than digital, by orders of magnitude, in solving NP-hard problems, because, first of all, many such systems evade the von Neumann bottleneck, there's parallelism involved, and you can also have a larger spectrum of continuous-time dynamical algorithms than discrete ones. But we also have to be mindful of what the possibilities and the limits are, and one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? And I think that's the exciting part, to derive these kinds of theorems.

Published Date : Sep 27 2020

SUMMARY :

bifurcated critical point that is the one that I forget to the lowest pump value a. the chi to non linearity and see how and when you can get the Opio know that the classical approximation of the car testing machine, which is the ground toe, than the state of the art algorithm and CP to do this which is a very common Kasich. right the inverse off that is the time scale in which you find solutions by first of all, many of the systems you like the phone line and bottleneck.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Exxon MobilORGANIZATION

0.99+

AndyPERSON

0.99+

Sean HagarPERSON

0.99+

Daniel WennbergPERSON

0.99+

ChrisPERSON

0.99+

USCORGANIZATION

0.99+

CaltechORGANIZATION

0.99+

2016DATE

0.99+

100 timesQUANTITY

0.99+

BerkeleyLOCATION

0.99+

Tatsuya NagamotoPERSON

0.99+

twoQUANTITY

0.99+

1978DATE

0.99+

FoxORGANIZATION

0.99+

six systemsQUANTITY

0.99+

HarvardORGANIZATION

0.99+

Al QaedaORGANIZATION

0.99+

SeptemberDATE

0.99+

second versionQUANTITY

0.99+

CIAORGANIZATION

0.99+

IndiaLOCATION

0.99+

300 yardsQUANTITY

0.99+

University of TokyoORGANIZATION

0.99+

todayDATE

0.99+

BurnsPERSON

0.99+

Atsushi YamamuraPERSON

0.99+

0.14%QUANTITY

0.99+

48 coreQUANTITY

0.99+

0.5 microsecondsQUANTITY

0.99+

NSFORGANIZATION

0.99+

15 yearsQUANTITY

0.99+

CBSORGANIZATION

0.99+

NTTORGANIZATION

0.99+

first implementationQUANTITY

0.99+

first experimentQUANTITY

0.99+

123QUANTITY

0.99+

Army Research OfficeORGANIZATION

0.99+

firstQUANTITY

0.99+

1,904,711QUANTITY

0.99+

oneQUANTITY

0.99+

sixQUANTITY

0.99+

first versionQUANTITY

0.99+

StevePERSON

0.99+

2000 spinsQUANTITY

0.99+

five researcherQUANTITY

0.99+

CreoleORGANIZATION

0.99+

three setQUANTITY

0.99+

second partQUANTITY

0.99+

third partQUANTITY

0.99+

Department of Applied PhysicsORGANIZATION

0.99+

10QUANTITY

0.99+

eachQUANTITY

0.99+

85,900QUANTITY

0.99+

OneQUANTITY

0.99+

one problemQUANTITY

0.99+

136 CPUQUANTITY

0.99+

ToshibaORGANIZATION

0.99+

ScottPERSON

0.99+

2.4 gigahertzQUANTITY

0.99+

1000 timesQUANTITY

0.99+

two timesQUANTITY

0.99+

two partsQUANTITY

0.99+

131QUANTITY

0.99+

14,233QUANTITY

0.99+

more than 100 spinsQUANTITY

0.99+

two possible phasesQUANTITY

0.99+

13,580QUANTITY

0.99+

5QUANTITY

0.99+

4QUANTITY

0.99+

one microsecondsQUANTITY

0.99+

first stepQUANTITY

0.99+

first partQUANTITY

0.99+

500 spinsQUANTITY

0.99+

two identical photonsQUANTITY

0.99+

3QUANTITY

0.99+

70 years agoDATE

0.99+

IraqLOCATION

0.99+

one experimentQUANTITY

0.99+

zeroQUANTITY

0.99+

Amir Safarini NiniPERSON

0.99+

SaddamPERSON

0.99+

Networks of Optical Parametric Oscillators


 

>>Good morning. Good afternoon. Good evening, everyone. I should thank Entity Research and the Oshie for putting together this program and also the opportunity to speak here. My name is Al Gore ism or Andy and I'm from Caltech. And today I'm going to tell you about the work that we have been doing on networks off optical parametric oscillators and how we have been using them for icing machines and how we're pushing them toward Cornum. Photonics should acknowledge my team at Caltech, which is now eight graduate students and five researcher and postdocs as well as collaborators from all over the world, including entity research and also the funding from different places, including entity. So this talk is primarily about networks of resonate er's and these networks are everywhere from nature. For instance, the brain, which is a network of oscillators all the way to optics and photonics and some of the biggest examples or meta materials, which is an array of small resonate er's. And we're recently the field of technological photonics, which is trying thio implement a lot of the technological behaviors of models in the condensed matter, physics in photonics. And if you want to extend it even further. Some of the implementations off quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of icing machines, which is based on the icing problem, which is based on the icing model, which is the simple summation over the spins and spins can be their upward down, and the couplings is given by the G I J. And the icing problem is, if you know J I J. What is the spin configuration that gives you the ground state? And this problem is shown to be an MP high problem. So it's computational e important because it's a representative of the MP problems on NPR. Problems are important because first, their heart in standard computers, if you use a brute force algorithm and they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems and hopefully it can provide some meaningful computational benefit compared to the standard digital computers. So I've been building these icing machines based on this building block, which is a degenerate optical parametric oscillator on what it is is resonator with non linearity in it and we pump these resonate er's and we generate the signal at half the frequency of the pump. One vote on a pump splits into two identical photons of signal, and they have some very interesting phase of frequency locking behaviors. And if you look at the phase locking behavior, you realize that you can actually have two possible face states as the escalation result of these Opio, which are off by pie, and that's one of the important characteristics of them. So I want to emphasize >>a little more on that, and I have this mechanical analogy which are basically two simple pendulum. But there are parametric oscillators because I'm going to modulate the parameter of them in this video, which is the length of the strength on by that modulation, which is that will make a pump. I'm gonna make a muscular. That'll make a signal, which is half the frequency of the pump. >>And I have two of them to show you that they can acquire these face states so they're still face their frequency lock to the pump. But it can also lead in either the zero pie face state on. The idea is to use this binary phase to represent the binary icing spin. 
So each OPO is going to represent a spin, which can be either zero or pi — up or down. To implement a network of these resonators, we use a time-multiplexing scheme. The idea is that we put pulses in the cavity, separated by the repetition period T_R, and you can think of these pulses in one resonator as temporally separated, synthetic resonators. If you want to couple these resonators to each other, you introduce delay lines, each of which is a multiple of T_R. The shortest delay couples resonator 1 to 2, 2 to 3, and so on; the second delay, which is two times the repetition period, couples 1 to 3, and so on. If you have N minus 1 delay lines, you can have any potential coupling among these synthetic resonators, and if I introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right times, then I have a programmable, all-to-all connected network in this time-multiplexed scheme — and the whole physical size of the system scales linearly with the number of pulses.

So the idea of the OPO-based Ising machine is to have these OPOs, each of which can be either zero or pi, and to arbitrarily connect them to each other. I start by programming the machine to a given Ising problem, just by setting the couplings with the controllers in each of those delay lines. Now I have a network which represents an Ising problem, and the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints. The way it happens is that the Ising Hamiltonian maps to the linear loss of the network, and if I start adding gain, by putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state.
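As an illustration of that programming step, here is a small sketch of my own — with hypothetical modulator waveforms — of how N minus 1 delay lines, each modulated in time, could assemble an arbitrary coupling matrix over the synthetic resonators:

    import numpy as np

    N = 8  # pulses circulating in the cavity = synthetic resonators
    rng = np.random.default_rng(0)
    # m[k][i]: amplitude the k*T_R delay line applies as pulse i passes,
    # i.e. the coupling injected from pulse i into pulse i+k
    # (mod N assumes a circulating pulse train — an assumption of this sketch).
    m = {k: rng.choice([-1.0, 0.0, 1.0], size=N) for k in range(1, N)}

    J = np.zeros((N, N))
    for k, wave in m.items():
        for i in range(N):
            J[(i + k) % N, i] = wave[i]

    # Any coupling pattern (up to symmetry conventions) is reachable,
    # while the hardware grows only linearly with N.
    print(J)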
We have been doing this over the past six or seven years, and I'm just going to quickly show you the transitions: what happened in the first implementation, which used a free-space optical system; then the guided-wave implementation in 2016; and then the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make the distinction here that the first implementations were all-optical interactions — we also had an N = 16 implementation — and then we transitioned to this measurement-feedback idea, which I'll describe quickly. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to mostly focus on the all-optical networks: how we're using them to go beyond simulating the Ising Hamiltonian, on both the linear and the nonlinear side, and also how we're working on miniaturization of these OPO networks.

So the first experiment, the four-OPO machine, was a free-space implementation — this is the actual picture of the machine — and we implemented a small N = 4 MAX-CUT problem on it: one problem for one experiment. We ran the machine 1000 times, we looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and controllers with a simulator: we basically simulated all those coherent interactions on an FPGA, computed the coherent feedback pulse from those measurements, and injected it back into the cavity, while the nonlinearity still remains in the cavity. So it is still a nonlinear dynamical system, but the linear side is all simulated. There are lots of questions about whether this system preserves the important information, and whether it behaves better computation-wise — that's still a lot of ongoing study. Nevertheless, the reason this implementation is very interesting is that you don't need the N minus 1 delay lines — you can use just one — so you can implement a large machine, run several thousands of problems on it, and compare the performance from a computational perspective.

So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix-multiplication scheme, and that's basically what gives you the Ising Hamiltonian mapping — the optical loss of this network corresponds to the Ising Hamiltonian. To show you with the N = 4 experiment: for all those phase states in the histogram we saw, you can actually calculate the loss of each state, because all the interferences in the beam splitters and the delay lines give different losses, and you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what it does is that it provides the gain. You start bringing up the gain until it hits the loss; then you go through gain saturation, the threshold, which gives you this phase bifurcation, so each OPO goes to either the zero or the pi phase state — and the expectation is that the network oscillates in the lowest possible loss state.
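A common way to see that threshold behavior numerically is the mean-field amplitude model used in the coherent-Ising-machine literature — a sketch under my own simplifying assumptions, not the exact equations of this experiment. Linear loss plus saturable gain drives each amplitude through the bifurcation, and the coupling term eps * J @ x plays the role of the programmable loss:

    import numpy as np

    def opo_network(J, pump=1.2, eps=0.1, dt=0.01, steps=20000, seed=1):
        # dx_i/dt = (p - 1 - x_i^2) x_i + eps * sum_j J_ij x_j
        # Below threshold the amplitudes stay near zero; above threshold each
        # x_i bifurcates to a +/- amplitude, i.e. the 0 / pi phase states.
        rng = np.random.default_rng(seed)
        x = 1e-3 * rng.standard_normal(J.shape[0])
        for _ in range(steps):
            x += dt * ((pump - 1.0 - x**2) * x + eps * (J @ x))
        return np.sign(x)  # read out the binary phases as Ising spins

    J = -np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]])  # the 4-spin ring from the N = 4 example
    print(opo_network(J))         # lands in an alternating ground state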
There are some challenges associated with this intensity-driven phase transition, which I'll briefly come back to; I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. But if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. The difference between looking at topological behaviors and the Ising machine is that, first of all, we're now looking at types of Hamiltonians that are a little different from the Ising Hamiltonian. One of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one site to another you acquire one phase, and if you go back you acquire a different phase. The other difference is that we're not just interested in finding the ground state; we're now interested in looking at all sorts of states, and at the dynamics and behaviors of all those states in the network.

So we started with the simplest implementation, of course, which is a one-dimensional chain of these resonators, corresponding to the so-called SSH model in the topological-physics literature. We get a similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you can see how well it follows the prediction of the theory. One of the interesting things about the time-multiplexed implementation is that you now have the flexibility of changing the network as you run the machine — that's something unique about this implementation — so we can actually look at the dynamics. One example we have looked at is the transition from the topological to the trivial behavior of the network: you can look at the edge states, and you can see both the trivial states and the topological edge states showing up in this network. We have also recently implemented a two-dimensional network with the Harper-Hofstadter model — I don't have the results here — but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics. And we can also think about adding nonlinearity, in both the classical and quantum regimes, which is going to give us a lot of exotic classical and quantum nonlinear behaviors in these networks.
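For reference, the SSH band structure that the measurement is compared against is easy to reproduce — a minimal sketch with hypothetical intracell and intercell couplings t1 and t2 (the textbook model, not the group's data):

    import numpy as np

    t1, t2 = 0.5, 1.0  # intracell / intercell hopping; t1 < t2 is the topological phase
    k = np.linspace(-np.pi, np.pi, 201)
    # SSH dispersion: E(k) = +/- |t1 + t2 * exp(i k)|
    E = np.abs(t1 + t2 * np.exp(1j * k))
    for kk, ee in zip(k[::50], E[::50]):
        print(f"k = {kk:+.2f}  E = +/-{ee:.3f}")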
So I told you mostly about the linear side; let me switch gears and talk about the nonlinear side of the network. The biggest thing I've talked about so far in the Ising machine is the phase transition at threshold. Below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and we get the phase states above threshold. That is basically the mechanism of computation in these OPOs: going through this phase transition from below to above threshold. One characteristic of this transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical, coherent states, corresponding to the intensity of the driving pump — so it's really hard to imagine having this phase transition happen entirely in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and its intensity goes really high, it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition.

So the question is: can we look at other phase transitions? Can we utilize them for computing, and can we bring them to the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain, which is the transition from the so-called degenerate regime — what I've mostly talked about — to the non-degenerate regime, and it happens just by tuning the phase of the cavity. What's interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by the quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from degenerate to non-degenerate, which also has distinct symmetry differences. The transition corresponds to a symmetry breaking: in the non-degenerate case the signal can acquire any phase on the circle, so it has a U(1) symmetry, and if you go to the degenerate case that symmetry is broken and you only have the zero and pi phase states. So now the question is: can we utilize this phase transition — which is a phase-driven, rather than intensity-driven, phase transition — for a similar computational scheme? That's one of the questions we're also thinking about. And this phase transition is not just important for computing; it's also interesting for its sensing potential, and you can easily bring it below threshold and operate in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, you can now see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get just by coupling two OPOs — a very abrupt phase transition compared to the single-OPO one. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore in both the classical and quantum regimes. I should also mention that the couplings themselves can be nonlinear couplings, and that's another behavior you can see, especially in the non-degenerate regime.

So with that, I've told you about these OPO networks: how we think about the linear scheme and the linear behaviors, and how we think about the rich nonlinear dynamics and behaviors in both the classical and quantum regimes. Now I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. Of course, the motivation is that if you look at electronics — what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements we have now — where we are with optics today is probably very similar to where electronics was 70 years ago: a tabletop implementation. The question is, how can we utilize nanophotonics? I'm going to briefly show you the two directions we're working on. One is based on lithium niobate, and the other is based on even smaller resonators. The work on nanophotonic lithium niobate started in collaboration with Marko Loncar at Harvard and also Marty Fejer at Stanford, and we could show that you can do the periodic poling in thin-film lithium niobate and get all sorts of very highly efficient nonlinear processes happening in these nanophotonic, periodically poled lithium niobate devices. Now we're working on building OPOs based on that kind of nanophotonic lithium niobate, and these are some examples of the devices we have been building in the past few months, which I'm not going to say more about — but the OPOs and the OPO networks are in the works. And that's not the only payoff: I want to point out that the reason these nanophotonic platforms are exciting is not just that you can make large networks and make them compact in a small footprint; they also provide some opportunities in terms of the operating regime.
One of those opportunities is making cat states in OPOs: can we have the quantum superposition of the zero and pi phase states that I talked about? Nanophotonic lithium niobate may provide opportunities to get closer to that regime, because of the spatio-temporal confinement you can get in these waveguides. We're doing some theory on that, and we're confident that the nonlinearity-to-loss ratios you can get with these platforms are much higher than with other existing platforms. To go even smaller, we have been asking the question of what the smallest possible OPO is that you can make: you can think about wavelength-scale resonators, add the chi(2) nonlinearity, and see how and when you can get the OPO to oscillate. Recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks — so if we can build OPOs there, we know there is a path for implementing OPO networks at such a nanoscale. We have looked at the calculations and tried to estimate the threshold of such OPOs, and it turns out it can actually be even lower than the bulk PPLN OPOs we have been building for the past 50 years or so. So we're working on the experiments, and we're hoping to make even larger and larger-scale OPO networks.

Let me summarize the talk. I told you about OPO networks and our work on Ising machines and measurement feedback; I told you about the ongoing work on the all-optical implementations, both on the linear side and on the nonlinear behaviors; and I told you a little bit about the efforts on miniaturization, going to the nanoscale. With that I would like to stop here, and thank you for your attention.

Published Date : Sep 21 2020



How The Trade Desk Reports Against Two 320-node Clusters Packed with Raw Data


 

>> Hi everybody, thank you for joining us today for the virtual Vertica Big Data Conference 2020. Today's breakout session is entitled "Vertica in Eon Mode at The Trade Desk." My name is Sue LeClair, director of marketing at Vertica, and I'll be your host for this webinar. Joining me is Ron Cormier, senior Vertica database engineer at The Trade Desk. Before we begin, I encourage you to submit questions or comments during the virtual session — you don't have to wait, just type your question or comment in the question box below the slides and click submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time; any questions we don't address, we'll do our best to answer offline. Alternatively, you can visit the Vertica forums to post your questions there after the session — our engineering team is planning to join the forums to keep the conversation going. Also, a quick reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. So let's get started — over to you, Ron.

>> Thanks, Sue. Before I get started, I'll just mention that my slide template was created before social distancing was a thing, so hopefully some of the images will harken us back to a time when we could actually all be in the same room. With that, before I get into the technology, I just want to cover my background real quick, because I think it's relevant to where we're coming from with Vertica Eon at The Trade Desk. I'll start by pointing out that prior to my time at The Trade Desk, I was a tech consultant at HP America, so I traveled the world working with Vertica customers, helping them configure, install, and tune their Vertica databases and get them working properly. I've seen the biggest and the smallest implementations and everything in between, and now I'm a principal database engineer at The Trade Desk. The reason I mention this is to let you know that I'm a practitioner — I'm working with the product every day, or most days — and this isn't marketing material, so hopefully the technical details in this presentation are helpful. I work with Vertica, of course, and it's most relevant to our ETL and reporting stack: we're taking data into Vertica and running reports for our customers.

We're in ad tech, so I do want to briefly describe what that means and how it affects our implementation. I'm not going to cover all the details of this slide, but basically The Trade Desk is a DSP — a demand-side platform — so we place ads on behalf of our customers: ad agencies and their customers, the advertisers, the brands themselves. The ads get placed onto websites and mobile applications — anywhere digital advertising happens. Publishers are sites like the ones we see here, espn.com, msn.com, and so on. Every time a user goes to one of these sites, or one of these digital places, an auction takes place, and what people are bidding on is the privilege of showing one or more ads to that user. This is really important because it helps fund the internet: ads can be annoying sometimes, but they're incredibly helpful in paying for much of our content. And this is happening in real time at very high volumes.
On the open internet there are anywhere from seven to thirteen million auctions happening every second, and of those, The Trade Desk bids on hundreds of thousands per second. Any time we bid, we have an event that ends up in Vertica — that's one of the main drivers of our data volume, and certainly other events make their way into Vertica as well — but that should give you a sense of the scale of the data, and of how it's driven by real people out in the world.

Let's dig a little deeper into the workload. We have the three V's in spades, like many people listening: massive volume, velocity, and variety. In terms of data sizes, I've got some stats here on the raw data we deal with on a daily basis: we ingest 85 terabytes of raw data per day, and once we get it into Vertica we do some transformations — we do matching, which is basically joins, and we do aggregation, group-bys, to reduce the data and clean it up so it's more efficient for our reporting layer to consume. That matching and aggregation produces about ten new terabytes of raw data per day. It all comes from the data that was ingested, but it's new data — so it's reduced quite a bit, yet still pretty high volume. We then run reports on that aggregated data on behalf of our customers: about 40,000 reports per day — actually that's an older number; it's probably closer to 50 or 55,000 reports per day at this point. I think it's a pretty common use case for Vertica customers; it's maybe a little different in the sense that most of the reports are batch reports — it's not a user sitting at a keyboard waiting for the result. Basically we have a workflow where we do the ingest, we do the transform, and then, once all the data is available for a day, we run the reports on that daily data on behalf of our customers, and we send the reports out via email or drop them in a shared location, and customers look at them at some later point in time.

Up until Eon, we did all this work on enterprise Vertica: at our peak we had four production enterprise clusters, each of which held two petabytes of raw data, and I'll give you some details on how those enterprise clusters were configured. But before I do that, I want to talk about the reporting workload specifically. The reporting workload is particularly lumpy, and what I mean by that is there's a bunch of work — a bunch of queries — that becomes available in a short period of time, right after the day's ingest and aggregation completes, and then the clusters are relatively quiet for the remaining portion of the day. That's not to say they're not doing anything as far as read workload goes — they certainly are — but there's much less activity after that big spike. What I'm showing here is our reporting queue, and the spike is when all those reports become available to be processed: we can't run a report until we've done the full ingest, matching, and aggregation for the day, and so right around 1:00 or 2:00
a.m. UTC every day is when we get this spike — we affectionately call it "the UTC hump" — and basically it's a huge number of queries that need to be processed as soon as possible. We have service levels that dictate what "as soon as possible" means, but I think the spike illustrates our use case pretty accurately, and as we'll see, it's really well suited for Vertica Eon.

So, we had the enterprise clusters I mentioned earlier, and just to give you some details on what they looked like: they were independent and mirrored, and what that means is all four clusters held the same data. We did this intentionally because we wanted to be able to run any report anywhere. We've got this big queue — a big number of reports that need to be run — and we started with one cluster; we found it couldn't keep up, so we added a second; the number of reports we needed to run in that short period of time kept going up, and so on, until we eventually ended up with four enterprise clusters. When I say they were mirrored, they all held the same data; they weren't, however, synchronized — they were independent. We would run the ETL pipeline, so to speak — the ingest, the matching, and the aggregation — on all the clusters in parallel. It wasn't as if each cluster proceeded to the next step in sync with the other clusters; they ran independently, so each cluster would eventually become consistent. This worked pretty well for us, but it created some imbalances, and there were some cost concerns that we'll dig into. Just to tell you about each of these clusters: they each had 50 nodes, with 72 logical CPU cores, half a terabyte of RAM, a bunch of RAID disk drives, and 2 petabytes of raw data, as I stated before. So: pretty big, beefy, physical nodes that we leased, sitting in our data center provider's data centers, and these were what we built our business on.

But there were a number of challenges that we ran into as we continued to build the business and add data and workload. The first one, which I'm sure many can relate to, is capacity planning. We had to think about the future and try to predict the amount of work that was going to need to be done, and how much hardware we were going to need to satisfy that demand — and that's just generally a hard thing to do. It's very difficult to predict the future, as we can probably all attest to given how much the world has changed even in the last month; it's very difficult to look six, twelve, eighteen months into the future and get it right. What we tended to do was make our plans and estimates very conservative, so we overbought in a lot of cases. Not only that, we had to plan for the peak — those hours in the early morning when we had all those reports to run — so we ended up buying a lot of hardware, sort of overbuying at times, and then, as the hardware aged,
our workload would gradually grow to match that capacity. That was one of the big challenges.

The next challenge is that we were running out of disk. We wanted to add data in two dimensions — the only dimensions anybody can think about: we wanted to add more columns to our big aggregates, and we wanted to keep our big aggregates for longer periods of time. So both horizontally and vertically we wanted to expand the data sets, but we were basically running out of disk. There was no more disk, and it's hard to add disk to Vertica in enterprise mode — not impossible, but certainly hard — and one cannot add disk without adding compute, because in enterprise mode the disk is all local to each of the nodes, for most people. You can do exotic things with SANs and other external arrays, but there are a number of other challenges with that. So in order to add disk we had to add compute, and that basically kept us out of balance: we were adding more compute than we needed for the amount of disk. Then there are the physical nodes themselves — getting them ordered, delivered, racked, and cabled, even before we can start Vertica — there are lead times there. And it's also a long commitment: since we leased the hardware, as I mentioned, we were committing to these physical servers for two or three years at a time, which can be a hard thing to do, but we wanted to keep our capex down.

We wanted to keep our aggregates for a long period of time, and we could have done crazier or more exotic things to help with this if we had to. In enterprise mode we could have started to daisy-chain clusters together, and that would have been a non-trivial engineering effort, because we would have needed to figure out how to re-shard the data across all the clusters, how to migrate data from one cluster to another, and how to run queries across clusters — if a data set spans two clusters, you'd have to aggregate within each cluster and then build something on top that combines the aggregated data from each of those clusters. Not impossible things, but certainly not easy things. Luckily for us, Vertica started talking to us about separation of compute and storage — and I know other customers were talking to Vertica too; people had these problems — and so Vertica in Eon mode came to the rescue.

What I want to do is talk about Eon mode really briefly, for those in the audience who aren't familiar. It's basically Vertica's answer to the separation of compute and storage: it allows one to scale compute and/or storage separately, and there are a number of advantages to doing that. Whereas in the old enterprise days, when you added compute you added storage and vice versa, now we can add one or the other, or both, as we see fit. Really briefly, here's how it works — this figure was taken directly from the Vertica documentation. It takes advantage of the cloud — in this case Amazon Web Services and the elasticity of the cloud — and basically you've got EC2 instances, elastic cloud compute servers, that access data in an S3 bucket: three EC2 nodes and a bucket, the blue objects in this diagram. And there are a couple of big
differences. One: the persistent storage of the data — where the data lives — is no longer on each of the nodes; the persistent store of the data is the S3 bucket. What that does is basically solve the first of our big problems, which was running out of disk: S3 has, for all intents and purposes, infinite storage, so we can keep much more data there. So the persistent data lives on S3. Now, what happens when a query runs? It runs on one of the three nodes you see here, and — we'll talk about the depot in a second — in a brand-new cluster that's just been spun up, the query will run on those EC2 nodes, but there will be no data on them, so the nodes will reach out to S3 and run the query on remote storage. The nodes are literally reaching out to the communal storage for the data and processing it entirely without using any data on the nodes themselves. That works pretty well — it's not as fast as if the data were local to the nodes — but what Vertica did is build a caching layer on each of the nodes, and that's what the depot represents. The depot is some amount of disk that is local to the EC2 node, and when a query runs against the remote S3 data, it queues that data up for download to the nodes; the data then resides in the depot, so subsequent queries can run on local storage instead of remote storage, and that speeds things up quite a bit. So the depot is basically a caching layer, and we'll talk about the details of how we size our depot.

The other thing I want to point out is that, since this is the cloud, another problem it helps us solve is the concurrency problem. You can imagine these three nodes as one sort of cluster; what we can do is spin up another three nodes and have them point at the same S3 communal storage bucket. Now we've got six nodes pointing to the same data, but we've isolated each set of three so that they act as if they were their own cluster — Vertica calls them subclusters. So we've got two subclusters, each of which has three nodes, and what this has essentially done is double the concurrency — doubled the number of queries that can run at any given time — because we've now got this new chunk of compute which can answer queries. That has given us the ability to add concurrency much faster. And I'll point out that, since it's the cloud and there are on-demand pricing models, we can see significant savings, because when a subcluster is not needed we can stop it and pay almost nothing for it. That's really important and really helpful, especially for our workload, which as I pointed out is so lumpy: in those hours of the day when it's relatively quiet, I can go and stop a bunch of subclusters, and I won't pay for them, so that yields nice cost savings. That's Eon in a nutshell — obviously the engineers and the documentation can give you a lot more information, and I'm happy to field questions later on as well.
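Conceptually, the depot behaves like a size-bounded least-recently-used cache in front of S3. Here is a tiny illustrative model — mine, not Vertica's actual eviction code, which also weighs things like pinning policies and file sizes:

    from collections import OrderedDict

    class DepotModel:
        """Toy LRU cache standing in for a node's depot."""
        def __init__(self, capacity_files):
            self.capacity = capacity_files
            self.cache = OrderedDict()  # file -> None, oldest first

        def read(self, storage_file):
            if storage_file in self.cache:
                self.cache.move_to_end(storage_file)  # depot hit: local read
                return "depot"
            self.cache[storage_file] = None           # miss: fetch from S3
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)        # evict least recently used
            return "s3"

    depot = DepotModel(capacity_files=2)
    print([depot.read(f) for f in ["a", "b", "a", "c", "b"]])
    # -> ['s3', 's3', 'depot', 's3', 's3']  ("b" was evicted to make room for "c")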
Now I want to talk about how we implemented Eon at The Trade Desk. I'll start at the top left. What we're representing here are our subclusters. There's subcluster 0, our ETL subcluster, and it is our primary subcluster. When you get into the world of Eon, there are primary subclusters and secondary subclusters, and it has to do with quorum: primary subclusters are the ones we always expect to be up and running, and they contribute to quorum — they decide whether there are enough nodes for the database to start up. This is where we run our ETL workload — the ingest, the matching, and the aggregation that I talked about earlier. These nodes are always up and running because our ETL pipeline is always on: we're an internet ad tech company, like I mentioned, so there's always data flowing into the system, and the matching and the aggregation happen 24/7. Those processes need to be super efficient, and that's reflected in our instance type. Each of our subclusters is sixty-four nodes — we'll talk about how we arrived at that number — and the instance type for the ETL subcluster, the primary subcluster, is i3.8xlarge. That's one of the instance types with quite a bit of NVMe storage attached — I think it's seven terabytes of NVMe — plus 32 cores and 244 gigs of RAM on each node. What that allows us to do is basically ensure that everything this subcluster does is always in the depot, and that makes sure it's always fast.

Then we get to the secondary subclusters. These are, as mentioned, secondary, so they can stop and start without affecting the cluster going up or down — they're sort of independent — and we've got four of what we call "read" subclusters. They're not read-only by definition — technically any subcluster can ingest and create data within the database, and that'll all get pushed to the S3 bucket — but logically, for us, they're read-only: most of the work they happen to do is read-only, which is nice, because if it's read-only it doesn't need to worry about commits. We let the primary ETL subcluster worry about committing data, so we don't have to have all the nodes in the database participating in transaction commits. So we've got four read subclusters and one ETL subcluster — a total of five subclusters, each running sixty-four nodes — which gives us a 320-node database all told. Not all those nodes are up at the same time, as I mentioned; often, for big chunks of the day, most of the read nodes are down, but they do all spin up during our busy time. For the read subclusters we use i3.4xlarge — again the i3 instance family, with NVMe storage; these nodes have, I think, three and a half terabytes of NVMe per node across two NVMe drives that we RAID-0 together, plus 16 cores and 122 gigs of RAM. These are smaller, you'll notice, but it works out well for us, because the read workload typically deals with much smaller data sets than the ingest or the aggregation workloads. We can run these workloads on smaller instances, save a little bit of money, and get more granularity in how many subclusters are stopped and started at any given time. The NVMe is ephemeral — the data on it isn't persisted when you stop and start a node; that's an important detail — but it's okay, because the depot does a pretty good job with its algorithm: it pulls in data that's recently used, and the eviction victim is the data that was least recently used — data used a long time ago that's probably not going to be used again soon.
So we've got five subclusters per cluster, and we've actually got two of these clusters: a 320-node cluster in US East and a 320-node cluster in US West, so we have high availability and region diversity. Like I said before, they're peers and they're independent, and they each run 128 shards. What shards are is basically similar to segmentation: you take the data set and divide it into chunks, and each subcluster can see the data set in its entirety. So each subcluster is dealing with 128 shards. We chose 128 because it gives us even distribution of the data on 64-node subclusters — 128 divides evenly by 64 — so there's no data skew, and we chose 128 to future-proof it, in case we wanted to double the size of any of the subclusters: we could double the number of nodes and still have no skew, with the data distributed evenly.

For disk, we've got a couple of RAID arrays. We've got an EBS-based array that the catalog uses — the catalog storage location — and I think we take four EBS volumes, RAID-0 them together, and come up with a 128-gigabyte drive. We wanted EBS for the catalog because we can stop and start nodes and that data will persist; it will come back when the node comes up, so we don't have to run a bunch of configuration when the node starts. Basically, the node starts, it automatically joins the cluster, and very shortly thereafter it starts processing work. So: catalog on EBS. The NVMe is another RAID-0; as I mentioned, this data is ephemeral, so when we stop and start, it goes away. Basically we take 512 gigabytes of the NVMe and give it to the data and temp storage location, and then we take whatever is remaining and give it to the depot. Since the ETL and the read subclusters are different instance types, the depot is sized differently, but otherwise it's the same across subclusters.

And it all adds up. What we have now is this: we stopped purging data for some of our big aggregates, we added a bunch more columns, and at this point we have 8 petabytes of raw data in each Eon cluster, which is about 4 times what we could hold in our enterprise clusters — and we can continue to add to it. Maybe we need to add compute, maybe we don't, but the amount of data that can be held there can obviously grow much more. We've also built an auto-scaling tool — a service that monitors the queue I showed you earlier, watches for those spikes, and when it sees a spike it goes and starts up instances for one or more of the subclusters. That's how we have compute capacity match the demand.
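Ron doesn't show the tool itself, so here is only a rough sketch of the idea — hypothetical names, queue source, and thresholds throughout — using boto3 to wake a stopped read subcluster when the report queue gets deep:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    QUEUE_DEPTH_TO_WAKE = 500                  # hypothetical threshold
    READ_SUBCLUSTER_TAG = "read-subcluster-1"  # hypothetical tag value

    def pending_report_count():
        # Stand-in: in reality this would poll whatever queue tracks reports.
        return 742

    def stopped_instance_ids(tag_value):
        pages = ec2.get_paginator("describe_instances").paginate(
            Filters=[{"Name": "tag:subcluster", "Values": [tag_value]},
                     {"Name": "instance-state-name", "Values": ["stopped"]}])
        return [i["InstanceId"] for p in pages
                for r in p["Reservations"] for i in r["Instances"]]

    if pending_report_count() > QUEUE_DEPTH_TO_WAKE:
        ids = stopped_instance_ids(READ_SUBCLUSTER_TAG)
        if ids:
            # Vertica nodes rejoin on boot since the catalog lives on EBS.
            ec2.start_instances(InstanceIds=ids)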
I'll also point out that we actually have one subcluster of specialized nodes — it's not strictly a customer-reports subcluster. We have this tool called Planner, which optimizes ad campaigns for our customers. We built it; it runs on Vertica, uses data in Vertica, and runs Vertica queries, and it was wildly successful — so we wanted to give it some dedicated compute. With Eon it was really easy to spin up a new subcluster and say: here you go, Planner team, do what you want — you can completely max out the resources on these nodes, and it won't affect any of the other operations we're doing: the ingest, the matching, the aggregation, or the reports. So Eon gave us a great deal of flexibility and agility, which is super helpful.

So the question is: has it been worth it? And for us the answer has been a resounding yes. We're doing things that we never could have done at reasonable cost before: we've got lots more data, we've got these specialized nodes, and we're much more agile. But how do you quantify that? Well, it's not quite as simple and straightforward as you might hope. We still have some enterprise clusters — we've got to retire the four that we had at peak, so we've still got two of those around — and we've got our two Eon clusters, but they're running different workloads and they're comprised of entirely different hardware. The number of nodes is different — 64-node versus 50-node subclusters are going to perform differently — and the workload itself is different: the aggregation is aggregating
more columns on Eon, because that's where we have disk available, and the queries themselves are different — we're running more data-intensive queries on Eon, because that's where the data is available. So, in a sense, Eon is doing the heavy lifting for our workload. In terms of query performance, it's still a little anecdotal, but when queries run on the Eon cluster with the data in the depot, the performance matches that of the enterprise cluster quite closely. When the data is not in the depot and Vertica has to go out to S3 to get it, performance degrades, as you might expect — though it depends on the query: things like counts are really fast, but if you need lots of the data — if you're materializing lots of columns — it can run slower. Not orders of magnitude slower, but certainly a multiple of the time.

In terms of cost, it's anecdotal too, but I'll give a little more quantification here. What I tried to do is multiply it out: what it would take to run the entire workload on enterprise, and what it would take to run the entire workload on Eon — all the data we have today, all the queries, everything — to try to get it apples to apples. For enterprise, the estimate is that we'd need approximately 18,000 CPU cores all together. That's a big number, and it doesn't even cover all the non-trivial engineering work that would be required — the things I referenced earlier, like sharding the data among multiple clusters, migrating the data from one cluster to another, the daisy-chain type stuff. So that's one data point. For Eon, to run the entire workload, the estimate is that we'd need about twenty thousand four hundred and eighty CPU cores — so more CPU cores than enterprise. However, about half of those — roughly ten thousand CPU cores — would only run for about six hours per day, and with the on-demand pricing and elasticity of the cloud, that is a huge advantage. So we are definitely moving as fast as we can to being all Eon; we have time left on our contracts for the enterprise clusters, so we're not able to get rid of them quite yet, but Eon is certainly the way of the future for us. I also want to point out that we've found Eon to be the most efficient MPP database on the market, and what that refers to is that for a given dollar of spend, we get the most out of Vertica compared to other cloud MPP database platforms. So our business is really happy with what we've been able to deliver with Eon.
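As a back-of-the-envelope check on why 20,480 cores can still be cheaper than 18,000, here is the core-hours arithmetic using Ron's round numbers — assuming, as he suggests, roughly half of the Eon cores run 24/7 and the other half run about 6 hours a day (real pricing also differs between reserved and on-demand instances):

    enterprise_cores = 18_000
    eon_cores = 20_480

    enterprise_core_hours = enterprise_cores * 24  # everything always on
    eon_core_hours = (eon_cores // 2) * 24 + (eon_cores // 2) * 6

    print(enterprise_core_hours)  # 432,000 core-hours per day
    print(eon_core_hours)         # 245,760 + 61,440 = 307,200 core-hours per day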
Eon has also given us the ability to begin a new use case, which is probably pretty familiar to folks on the call: it's UI-based, so we'll have a website that our customers can log into, and on that website they'll be able to run reports — queries — through the website, and have them run directly on a separate Eon subcluster. It's much more latency-sensitive and concurrency-sensitive. The workload that I've described up until this point has been pretty steady throughout the day: we get our spike, and then it goes back to normal for the rest of the day. This new workload will potentially be much more variable — we don't know exactly when our engineers are going to deliver some huge feature that's going to make a lot of people want to log into the website and check how their campaigns are doing. But Eon really helps us with this, because we can add capacity so easily: we can add compute and scale it up and down as needed, and that allows us to match the concurrency. Even though the concurrency is much more variable, we don't need a big, long lead time, so we're really excited about this.

On the last slide here, I just want to leave you with some things to think about if you're about to embark, or are getting started, on your journey with Vertica Eon. One of the things you'll have to think about is the node count and the shard count — they're kind of tightly coupled. The node count we determined by spinning up some instances in a single subcluster and finding an acceptable performance level, considering the current and future workload for the queries we had when we started. We went with 64: we certainly wanted to go above the 50 of our enterprise clusters, but we didn't want to go too big, because of course it costs money, and we like to do things in powers of two — so, 64 nodes. Then the shard count: sharding, again, is a kind of segmentation of the data, and we went with 128. The reason is so that we could have no skew — each node processes the same amount of data — and we wanted to future-proof it; doubling the node count is probably a nice general recommendation for the shard count. The instance type, and how much depot space you give it, are certainly things you'll want to consider: like I was saying, we went for the i3.4xlarge and i3.8xlarge because they offer good depot storage, which gives us really consistent, good performance — it's all in the depot. And I think we're going to use the r5 or r4 instance types for our UI cluster: the data there is much smaller and it's much less dependent on the depot, so we don't need the NVMe storage.

You're going to want a mix of reserved and on-demand instances if you're a 24/7 shop like we are. Our ETL subclusters are reserved instances, because we know we're going to run them 24 hours a day, 365 days a year — there's no advantage to having them be on-demand, since on-demand costs more than reserved — so we get cost savings on what we know will keep running. It's the read subclusters that are, for the most part, on-demand. One of our read subclusters in each cluster is actually on 24/7, because we keep it up for ad-hoc queries — analyst queries that we don't know exactly when they're going to hit — so analysts can keep working whenever they want to. In terms of the initial data load, the initial ingest: what we had to do — and how it still works today — is basically load all your data from scratch. There isn't great tooling just yet for populating the data or moving from enterprise to Eon, so what we did is export all the data in our enterprise cluster into Parquet files, put those out on S3, and then ingest them into our first Eon cluster. It's kind of a pain — we scripted out a bunch of stuff, obviously — but it worked. And the good news is that once you do that, the second Eon cluster is just an S3 bucket copy away, and there are utilities that can help with that.
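For a flavor of what that scripted migration might look like, here's a hedged sketch using the vertica-python client. The table names, bucket paths, and credential handling are hypothetical, and you'd want to verify the EXPORT TO PARQUET and COPY ... PARQUET options against your Vertica version's documentation:

    import vertica_python

    # Hypothetical connection settings for the source (enterprise) cluster.
    src = vertica_python.connect(host="enterprise-node1", port=5433,
                                 user="dbadmin", password="...", database="ttd")
    with src as conn:
        # Export one table's rows to Parquet files on S3 (paths are made up).
        conn.cursor().execute("""
            EXPORT TO PARQUET (directory = 's3://my-migration-bucket/agg_daily')
            AS SELECT * FROM reporting.agg_daily
        """)

    # Then, connected to the new Eon cluster, bulk-load the exported files.
    dst = vertica_python.connect(host="eon-etl-node1", port=5433,
                                 user="dbadmin", password="...", database="ttd")
    with dst as conn:
        conn.cursor().execute("""
            COPY reporting.agg_daily
            FROM 's3://my-migration-bucket/agg_daily/*.parquet' PARQUET
        """)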
You're also going to want to manage your fetches and evictions — by which I mean the data that's in the cache, in the depot. Like I talked about, we have our ETL cluster, which has the most recent data: the data that's just been ingested, and the most recent data that's been aggregated. We wouldn't want anybody logging into that ETL cluster and running queries on big aggregates going back one to three years, because that would invalidate the cache: the depot would start pulling in that historical data and evicting the recent data, which would slow down the ETL pipeline — and we didn't want that. So we need to make sure that users — whether they're service accounts or human users — are connecting to the right subcluster, and we manage that with separate endpoints and target groups to point them at the right place. It was definitely something to think about. Lastly, if you're like us and you're going to want to stop and start nodes, you're going to have to have a service that does that for you. We built a very simple tool that basically monitors the queue and stops and starts subclusters accordingly; we're hoping we can work with Vertica to have it be a little more driven by the cloud configuration itself — for us that's all Amazon, and we'd love it if it could hook into the AWS scaling machinery directly.

A couple of points to watch out for when you're working with Eon. The first is system-table queries on storage-layer metadata. The thing to be careful of is that the storage-layer metadata is replicated — there's a copy for each of the subclusters that are out there. So for each of our five subclusters, there's a copy of all the rows in the storage_containers system table, all the rows in the partitions system table, and so on. When you use these system tables to analyze how much data you have, or for any other analysis, make sure you filter your query by node name — for us, node names less than or equal to node 64, because each of our subclusters has 64 nodes, so we limit it to the 64-node ETL subcluster. If we didn't have this filter, we'd get 5x the values for counts and that sort of thing.
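A sketch of that kind of filtered system-table query, again via vertica-python. The node-name pattern is specific to this cluster's naming and is an assumption on my part; storage_containers is a real Vertica system table, but check the column list for your version:

    import vertica_python

    conn = vertica_python.connect(host="eon-etl-node1", port=5433,
                                  user="dbadmin", password="...", database="ttd")
    with conn:
        cur = conn.cursor()
        # Restrict to the 64 ETL nodes so replicated per-subcluster metadata
        # isn't counted five times (once per subcluster). Zero-padded node
        # names make the lexicographic comparison work.
        cur.execute("""
            SELECT schema_name, projection_name, SUM(used_bytes) AS bytes
            FROM v_monitor.storage_containers
            WHERE node_name <= 'v_ttd_node0064'
            GROUP BY 1, 2
            ORDER BY bytes DESC
            LIMIT 10
        """)
        for row in cur.fetchall():
            print(row)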
Thank you, Ron — that was a tremendous amount of information, thank you for sharing it with everyone. We have some questions that came in that I'd like to present to you, Ron, if you have a couple of minutes. Let's jump right in with the first one: loading 85 terabytes of data per day is a pretty significant amount — what format does that data come in, and what does the load process look like? Yeah, great question. The format is tab-separated files that are gzip compressed, and the reason for that is basically historical: we don't have many tabs in our data, and this is how the data gets compressed and moved off of our bidders, the things that generate most of this data. So it's TSV, gzip compressed. As for how we load it, I would say we have kind of a Cadillac loader, in a couple of different respects. One is that we've got this homegrown orchestration layer managing the logs — the data that gets loaded into Vertica. We accumulate data, then we take some files and push them out, distributing them across the ETL nodes in the cluster — so we're literally pushing the files to the nodes — and we then run a COPY statement to ingest the data into the database, and then we remove the files from the nodes themselves. So there's a little bit of extra data movement, which we may think about changing in the future as we move more and more to Eon. The really nice thing about this, especially for the Enterprise clusters, is that the COPY statements are really fast. A COPY statement uses memory, and like any other query its performance is really sensitive to the amount of available memory; since the data is local to the nodes — literally in the data directory that I referenced earlier — it can access that data from the NVMe stores, the COPY statement runs very fast, and then that memory is available to do something else. So we pay a little bit of cost in terms of latency and in terms of downloading the data to the nodes. As we move more and more to Eon we might start ingesting directly from S3 instead of copying to the nodes first — we'll see about that. But that's how we load the data. Interesting — works great, thanks, Ron. Another question: what was the biggest challenge you found when migrating from on-prem to AWS? Yeah, so a couple of things come to mind. The first was the backfill — the data load. It was kind of a pain, like I referenced in that last slide, only because we didn't have tools built to do this, so we had to script some stuff out. It wasn't overly complex, but it's just a lot of data to move — I mean, we were starting with two petabytes. We had to make sure there was no missed data, no gaps, while moving it off the Enterprise clusters. What we did is we exported it to local disk on the Enterprise clusters, pushed it up to S3, and then ingested it into Eon. So it's a lot of data to move around, and you have to take an outage at some point — stop loading data while you do that final catch-up phase — so that was a one-time challenge. The other thing — not so much something we're dealing with now, but it was a challenge — is that Eon is still a relatively new product for Vertica. One of the big advantages of Eon is that it allows us to stop and start nodes, and recently Vertica has gotten quite good at stopping and starting nodes. For a while there it took a really long time to bring a node back up, and it could be invasive, but we worked with the engineering team — with Yan Zi and others — to really reduce that, and now it's not really an issue that we think too much about.
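For readers who want to see the shape of that load path, here is a minimal sketch under stated assumptions — the vertica-python client, a hypothetical bids.events table, and a gzip-compressed TSV file that the orchestration layer has already pushed into a node's local data directory. It illustrates the pattern described above, not the presenter's production loader.

```python
# Ingest one locally staged, gzip-compressed TSV file with a single COPY.
import vertica_python

COPY_SQL = """
    COPY bids.events
    FROM '/vertica/data/incoming/events_0001.tsv.gz' GZIP
    DELIMITER E'\\t'
    REJECTED DATA '/vertica/data/rejects/events_0001.rej'
    DIRECT
"""

with vertica_python.connect(host="etl-node-01", port=5433,
                            user="dbadmin", password="...",
                            database="analytics") as connection:
    connection.cursor().execute(COPY_SQL)
    # The file can be removed once the COPY commits, as in the talk.
```

Because the file sits in the node's NVMe-backed data directory, the COPY reads it at local-disk speed and frees its memory as soon as the load finishes — the latency-for-throughput trade-off Ron describes.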
Hey, thanks. Towards the end of the presentation you had said that you've got 128 shards, but your subclusters are usually around 64 nodes, and you had talked about a ratio of two to one — why is that, and if you were to do it again, would you use 128 shards? Ah, good question. The reason is that we wanted to future-proof ourselves. Basically, we wanted to make sure that the number of shards was evenly divisible by the number of nodes. I could have done that with 64, or with 128, or any other multiple of 64, but we went with 128 to protect ourselves in the future, so that if we wanted to double the number of nodes in the ETL subcluster specifically, we could — we could double from 64 to 128 nodes, and then each node would have just one shard to deal with. So, no skew. The second part of the question: if I had to do it over again, I think I would have stuck with 128. We've been running this cluster for more than 18 months now, and we haven't needed to increase the number of nodes, so in that sense it's been a little bit of extra overhead having more shards, but it gives us the peace of mind that we can easily double and not have to worry about it. So I think two to one is a nice place to start, and you might even consider three to one or four to one if you're expecting really rapid growth — if you're just getting started with Eon and your business is small now but you expect it to grow significantly. Perfect — thank you, Ron. That's all the questions we have out there for today. If you do have others, please feel free to send them in and we will get back to you; we'll respond directly via email. And again, our engineers will be available on the Vertica forums, where you can continue the discussion with them. I want to thank Ron for the great presentation, and also the audience for your participation and questions. Please note that a replay of today's event and a copy of the slides will be available on demand shortly, and of course we invite you to share this information with your colleagues. Again, thank you — this concludes the webinar. Have a great day.
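A toy check of the shard-count arithmetic from that answer — plain Python, nothing Vertica-specific: a shard count that stays evenly divisible by the node count through the growth you expect means no node ever carries more shards than its neighbors.

```python
# Verify a shard count divides evenly across current and future node counts.
def shards_per_node(shards: int, nodes: int) -> int:
    if shards % nodes != 0:
        raise ValueError(f"{shards} shards over {nodes} nodes would skew")
    return shards // nodes

for nodes in (64, 128):  # today's 64-node subcluster, and a doubled one
    print(f"{nodes} nodes -> {shards_per_node(128, nodes)} shard(s) per node")
```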

Published Date : Mar 30 2020

SUMMARY :

stats on the raw data sizes, and a shard count evenly divisible by the node count so that we could have no skew

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Ron Cormier | PERSON | 0.99+
seven | QUANTITY | 0.99+
Ron | PERSON | 0.99+
two | QUANTITY | 0.99+
Vertica | ORGANIZATION | 0.99+
8 petabytes | QUANTITY | 0.99+
122 gigs | QUANTITY | 0.99+
85 terabytes | QUANTITY | 0.99+
Excel | TITLE | 0.99+
512 gigabytes | QUANTITY | 0.99+
128 gigabyte | QUANTITY | 0.99+
three nodes | QUANTITY | 0.99+
three years | QUANTITY | 0.99+
six nodes | QUANTITY | 0.99+
each cluster | QUANTITY | 0.99+
two petabytes | QUANTITY | 0.99+
240 | QUANTITY | 0.99+
2 petabytes | QUANTITY | 0.99+
16 cores | QUANTITY | 0.99+
espn.com | OTHER | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Yan Yan | ORGANIZATION | 0.99+
more than 18 months | QUANTITY | 0.99+
today | DATE | 0.99+
each cluster | QUANTITY | 0.99+
one | QUANTITY | 0.99+
one cluster | QUANTITY | 0.99+
each | QUANTITY | 0.99+
amazon | ORGANIZATION | 0.99+
32 cores | QUANTITY | 0.99+
ten thousand | QUANTITY | 0.98+
each sub cluster | QUANTITY | 0.98+
one cluster | QUANTITY | 0.98+
72 | QUANTITY | 0.98+
seven terabytes | QUANTITY | 0.98+
two dimensions | QUANTITY | 0.98+
Two | QUANTITY | 0.98+
5x | QUANTITY | 0.98+
first one | QUANTITY | 0.98+
first | QUANTITY | 0.98+
eon | ORGANIZATION | 0.98+
128 | QUANTITY | 0.98+
50 | QUANTITY | 0.98+
four gigs | QUANTITY | 0.98+
s3 | TITLE | 0.98+
three and a half terabytes | QUANTITY | 0.98+
this week | DATE | 0.98+
64 | QUANTITY | 0.98+
8x | QUANTITY | 0.97+
one chart | QUANTITY | 0.97+
about ten new terabytes | QUANTITY | 0.97+
one-time | QUANTITY | 0.97+
two instances | QUANTITY | 0.97+
Depot | ORGANIZATION | 0.97+
last month | DATE | 0.97+
five sub-clusters | QUANTITY | 0.97+
two clusters | QUANTITY | 0.97+
each node | QUANTITY | 0.97+
five sub clusters | QUANTITY | 0.96+

Keith Barto & Russell Fishman, NetApp | Cisco Live US 2018


 

>> Live from Orlando, Florida, it's theCUBE covering Cisco Live 2018. Brought to you by Cisco, NetApp, and theCUBE's ecosystem partners. >> Hey, welcome back, everyone. We're here live at theCUBE in Orlando, Florida for Cisco Live 2018. I'm John Furrier, the co-host of theCUBE with Stu Miniman. It's our third day of three days of wall-to-wall coverage. Our next two guests are from NetApp: Russell Fishman, Director of Product Management, and Keith Barto, Director of Product Management — both directors of product management. One was the former CEO of Immersive, now with NetApp for a few years. Guys, great to see you, thanks for coming on theCUBE. >> Thanks for having us. >> Thanks for having us, John. Thank you. >> We saw you guys in Barcelona, obviously. The NetApp story just keeps on getting better. Also, you have a core customer base. Cisco's undergoing transformation. You guys have been transforming ever since I started seeing NetApp arrive on the scene in the 90s. Every year there's always a new innovation. But now, more than ever, you're hearing even Cisco, a bellwether in the routing and networking business, moving from the old-way network — hey, there's a firewall, there's some devices in there — to a completely new, obviously cloud, made-in-the-modern-era world. Really, things are changing. So what's your reaction to that? Obviously, you guys are a part of that story. You have a relationship with Cisco. What's your reaction, and talk about your relationship with Cisco. >> So we obviously have a huge relationship with Cisco. And most folks will know about our FlexPods — I think that's probably the most famous way that we collaborate with these guys. We just came off the back of an amazing year: five straight quarters of double-digit, year-on-year growth, killing it in the market. Obviously, we have to brag a little bit, right, come on. >> It's theCUBE, come on! >> It's theCUBE, we gotta be a little bit excited about it. So we're really excited about that, and it really talks to the strength of the relationship, right? So there's a very strong relationship there, and it's been there with FlexPod for eight years, and there's been a lot of transformation, exactly to your point John, a lot of transformation during that time, a lot of focus on the clouds. So one of the questions I always get asked is, why is converged infrastructure still relevant in a cloud-first world? And it's not an obvious answer — now, clearly our customers think that it is, and so do our partners, but it is not obvious why that is. NetApp has gone through — you talked about transformation — NetApp has gone through this massive transformation, a huge focus on clouds; I mean, we have this cloud-first, cloud-native focus around our data management platforms. We talk about a concept called the data fabric — I don't know if you've heard of the data fabric before. >> Yeah. >> And the data fabric really talks to our vision for how enterprises want to manage that new digital currency that is data, across all the silos that they want to leverage, right? We've been able to bring some of that goodness into FlexPod, and that's why we're still relevant now. >> Yeah, so Keith, I think back to when converged infrastructure was billed as being about simplification: we were gonna take all these boxes and put it down to a box, and that was the new unit of measurement.
Well, Russell was just talking about how we've got multi-cloud. When I think of NetApp now — it's always been a software company, but now it's software in that multi-cloud world — help connect the dots for us as to management of converged infrastructure in that whole multi-cloud story. >> Yeah, we were very privileged to be acquired by NetApp last March, and at my company, Immersive, a lot of us came actually out of Cisco. So I was one of the original FlexPod architects from Cisco, and had the privilege of helping to build the network and the storage that we brought into FlexPod. And a lot of our customers and our resellers kept on saying, "How do we know we put it together properly? How are we following the best practices from the CVDs, from the NVAs, from the TRs?" And so we took those rules and those analytics and we put them into a platform, a SaaS-based platform, and we were able to analyze that data coming from our customers' FlexPods — from within their deployments, from within their multi-data centers — and bring it into our service, run those analytics, prove those best practices, show the deficiencies, and get our resellers out there to help our customers, 'cause FlexPod is a meet-in-the-channel play, and we relied heavily on our resellers to make it a success. >> What was the driver for that product? When you started that company and that happened, what was the main motivation behind that? Was it analytics, was it insight, what were some of the things that you guys were building in, was it operational data? >> The real reason was people kept on asking, "How do I know?" Because it's a reference architecture, not a product: "How do I know I did it right?" Because it's really important — we're gonna run our key business applications on this platform, right? My SAP, my Oracle, my SQL, my SharePoint, my Outlook. I gotta make sure this stuff is really gonna work properly and it's going to grow and scale with the business. So I need to make sure that those redundant links are there. I need to make sure that when I do a VMWare upgrade, or a Microsoft upgrade, the firmware is in alignment with the best practices in the interoperability matrix. So we wanted to make that as easy as possible, so that from a single dashboard you can see all of those things, you can diagnose it quickly, you can get those email alerts and notifications. And because you end up with disparate operations teams — the server team, the network team, the storage team, the hypervisor team — sometimes they don't always talk effectively with each other, and from one single dashboard we're now able to show everybody where things are today. And then, one of my favorites: when there is a problem, you call either Services or Support, and you say, "Hey, it's not working," and they say, "What did you change?" And you say, "I didn't change anything." We have that historical-- >> Finger pointing kicks in, it was his fault! >> Yeah, we have the historical snapshot and trending, so we can go back and look at where things were, do a comparison to where they are today, and it allows us to have a much faster mean time to resolution. >> And what do you guys call that product now within Cisco? What's it... >> It's now called Converged Systems Advisor in NetApp.
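As a rough illustration of the approach Keith describes — collect configuration facts from each FlexPod layer through the vendors' APIs, then evaluate best-practice rules against them — here is a toy sketch. It is not NetApp's Converged Systems Advisor code; the two rules and the configuration fields are hypothetical.

```python
# Toy rule-based audit: each Rule encodes one best practice as a predicate
# over a dict of collected configuration facts.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # True means the deployment is compliant

RULES = [
    Rule("redundant uplinks on each fabric interconnect",
         lambda cfg: cfg["uplinks_per_fi"] >= 2),
    Rule("firmware on the interoperability matrix",
         lambda cfg: cfg["ucs_firmware"] in cfg["supported_firmware"]),
]

def audit(config: dict) -> list[str]:
    """Return the names of every rule this deployment violates."""
    return [rule.name for rule in RULES if not rule.check(config)]

deployment = {
    "uplinks_per_fi": 1,
    "ucs_firmware": "4.0(1a)",
    "supported_firmware": {"4.0(2b)", "4.0(4c)"},
}
print(audit(deployment))  # both rules are flagged for this config
```

A real service would refresh the collected facts continuously and keep the historical snapshots Keith mentions, so "what did you change?" can be answered from data rather than memory.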
>> Awesome — so what's next for converged infrastructure? Obviously, there's cloud growth, and we're seeing the on-premise side: Wikibon has reported the true private cloud numbers, which basically say there's a lot of on-premise activity going on that's gonna look like cloud, it's gonna operate like cloud, so they need to have that. There's migration going on, but it's not a lift-and-shift to cloud; there's gonna be, obviously, the hybrid cloud and multi-cloud. So, cloud folks still buy hardware, too. You gotta still run stuff — networks aren't going away, storage isn't going away — so what's next for the converged infrastructure play with FlexPod? How do you guys manage that roadmap? >> So, we just announced some things, jointly with Cisco, coming into Cisco Live. One of those things we announced was something called Managed Private Cloud on FlexPod — or actually, no, FlexPod Managed Private Cloud, sorry, I switched it around. And FlexPod Managed Private Cloud really talks to exactly what you're talking about, John, which is... What we find is cloud has fundamentally changed customers' expectations of what they want on-prem. They recognize the need on-prem; we live in a hybrid world. Those of us that've been in the industry long enough, and have a couple of gray hairs, know that there are very few transitions that are really absolute in the business. A lot of people pronounce that it's gonna be this way or that way, and the reality is, it's something in between. And that's fine, because cloud is just another tool in the toolbox, and you don't want to hit every nail with the same hammer — you want to find the right tool for the right job. So what we've done is we've taken some of that cloud goodness, which really means not having to worry about the underlying infrastructure, all right? Worrying about the applications, being more application-focused, more business-value-focused, more line-of-business-focused. And being able to deliver that in a way that people can consume it on-premise. So it really feels like a FlexPod delivered like a cloud, but from a management, day-to-day perspective, you don't have to do it-- >> So, it's flexible. >> It's flexible-- >> FlexPod. >> But it's done for you, so it's your little piece of cloud, sitting on-prem, and you don't have to manage it or run it day-to-day. >> Let's talk about what you just said about the whole transformation. A lot of press and a lot of analysts say, "Oh, you've got to do this digital transformation." Customers will take a pragmatic approach, but you guys at NetApp have been talking for a long time — I've been following it — about non-disruptive operations. >> Yes. >> So what you see in the cloud is people wanting to take those first three steps, but they don't want to have to overhaul anything. Containers have proved to be a great resource there; Kubernetes is showing a great way to have lifecycle management on the app side of infrastructure. How do your customers, and Cisco customers, maintain that non-disruptive operational playbook? Because Cisco guys are gonna start changing, moving up the stack too-- >> Absolutely. >> Doesn't mean storage is gonna go away, but they don't want to disrupt anything. Your thoughts? >> And it doesn't mean any of it goes away — that's the funny thing. We talk about where we want to focus, but it's as much about not having to worry about the things that we had to worry about, that are just there, in the future, right?
So it's kind of like if you went back 200 years: going to get fresh water was a big hassle; now it isn't, it's delivered to you, right? I know it sounds like a crazy analogy, but the reality is that we shouldn't have to worry about the basics of on-premise private cloud — it should just be automatic, it should be simple to execute, simple to manage, simple to order, simple to deploy, and then you focus on the value. So that's what we've been really focused on. >> Keith, when I listen to my friends in the networking space, management's still a challenge. The punchline is usually, they hear single pane of glass, and they say that's spelled P-A-I-N. >> I've heard that one too. >> Talk a little bit about how your solutions tie into some of the broader tools out there. >> Well, we first looked at the compute layer, because of the extensibility of UCS Manager and the API integration — we're able to take advantage of that and pull that data out. And NX-OS, right? We're able to do the exact same thing there, and with the background that we had at Cisco, knowing those products really well, we were able to gather all the specific data we need to look at those best practices. And it's a complex architecture, but it's a very elegant architecture: because of the high availability, it can provide the performance and the non-disruptive operations that you were bringing up, John. We want to make sure that we're able to keep those things in line, so as we bring our next release of CSA out, we're going to be adding Enterprise Fibre Channel — the new MDS switches — and we're gonna be bringing our relationship with VMWare into our engine, to be able to ingest the VMWare configuration. We're also bringing back our partner-centric reseller portal, so when a customer is running Converged Systems Advisor, they can share it with their reseller, and the reseller's going to be able to provide managed services, support services, and professional services to expand, to repair, to augment those existing FlexPods in their customers' environments. So we're really excited to be able to bring that solution back to our resellers-- >> What's that going to do, what's the impact of that? Because I almost imagine that's going to enable them to want to be tightly integrated, but also get data from their customers. What do you guys see as the value for the partners to take advantage of that? >> Well, I just met with a partner at our booth, just a few moments ago, and walked them through the solution — they had never seen it before. It takes a reseller a week, or even multiple weeks, depending on the size of the FlexPod, to actually go through the configuration of the servers, the network, the storage, the hypervisors, and correlate that into a deliverable for their customer. We can do that in sub-10 minutes, sub-15 minutes. >> So faster time to the customer value. >> Faster time to customer value, faster time to resolution if there is a problem. And then again, they're running their key business applications on this platform — we've been doing it for eight years — and we want to continue to expand upon the value FlexPod can offer. >> But I wanted to add just a couple of things to what you were saying. We talked about FlexPod really being a channel play. That's important to us in product management, not so important to our customers. What it really means to our customers is they tend to have a very close relationship with their partners. Their partners are the ones that are really enabling FlexPod for them.
What we're doing with Converged Systems Advisor is we are creating such a close relationship at a technical level, a technology level, between the customer and the partner, that the partner's there to help them on a daily basis. Where there is a problem, it's almost like the telematics in your car, right? All the cars now, they're phoning back home, they're telling you where there's something wrong, you get this letter or an email: you need a service, you need... This is exactly what we're achieving with the Converged Systems Advisor-- >> When you call support, what don't you want to hear? What's your model number, what's your serial number, what's your contract ID? Wouldn't it be great if everybody's singing off the same sheet of music? >> Well, you bring up a great point there. There was so much discussion that converged infrastructure, or public cloud — those are gonna be really simple, and they're gonna be homogeneous, and they're all gonna be great. But yeah, you're smiling and laughing, because the reality is you're never gonna find two customers that have the same environment, no matter what you're talking about. >> No. >> So I need the tooling, I need the data and the analytics, to help get through that. I shouldn't have to spend half an hour on level one support. >> And that's all-- >> I shouldn't have to go through multiple forms at the same time. >> Yes, and you're right, Stu, that's always been the mantra for FlexPod since day one. We don't get to an 11 billion dollar install base unless you're doing something right, and the reason the word flex is in there — it's a dichotomy. Whenever you go into these sorts of discussions, do you make it really fixed? Which is almost like a straitjacket — but you know what you get, right? Or do you make it flexible? And the flexibility really addresses the business need as opposed to the technology need. So the product guys love it when it's fixed; the customers love it when it's flexible. >> Yeah, you're talking about, basically, changes... You want changes to be rolling with the... Technology rolling with the changes. >> Yes. >> Not be stuck in the straitjacket — or we'll also say tailor-made suit — but things change, you wanna... Fashion changes. So this is a real big issue. And talk about support: I think the ideal outcome is to not even call support — with analytics and push notifications and AI, you can almost see what DevNet's doing here, around how developers are getting involved with DevOps and network DevOps. Coders can come in and use the analytics, tightly integrated in, so that you get the notifications, or they know exactly your environment. How far along are you guys on that path? Because analytics play a big role — you've got the command center there, and Converged Systems Advisor implies advising, resolution, prescription — what's the vision? >> So Immersive was a Cisco solution partner at the very beginning, so we were a part of this group right behind us, and it was exciting to be a part of that, to attend Cisco Live and be a part of DevNet. And we expanded upon, as you mentioned, the API integrations of all these platforms, and when clustered Data ONTAP came out from NetApp, we did the exact same thing, right? So we got integrated with NetApp and were very easily able to bring all that data in. Now, massaging that data is the hard part, right?
Understanding what is noise and what is the real goodness — you have to find those best practices, look at the hard work that our teams have done around validated designs between Cisco and NetApp, and look at the best practices that come from those particular pieces of hardware. And then once that intelligence is built, correlating it in the cloud service is really where the magic happens. So our teams are back there talking with the network experts, the storage experts, the compute experts, the virtualization experts, and once we have that data, you can make decisions, right? You can start advising your resellers. So we bring up the rules dashboard, and then we do have alerting that we can send to ticketing systems — the Remedys, the ServiceNows-- >> It's interesting, I'd love to get the product perspective on this, and across the bigger picture, because the trend we're seeing, certainly on theCUBE, over the past few years, and most recently this year, is the move from device, hardware, to system. So the systems approach really becomes more of a holistic view, where you're looking at multiple things happening. >> Yes. >> It's not just, this is the box, here's where the rack is, command line interface. You guys are taking that same approach — can you just add some color on NetApp's vision of looking at it holistically, 'cause that's really where software shines. >> No, absolutely — we have a way of seeing FlexPod as what we call a converged system, for that exact reason. So what CSA is able to do is look at anything that happens within that converged system in the context of the overall system, and that really is the key, right? When you understand things in context, they mean so much more. Just think about when you listen to someone talk: a word taken out of context means nothing, right? So when we listen to that infrastructure, what it tells us is understood in context. And what it will ultimately do — and I think you were kind of hinting at this, John — the vision here is that there will be self-healing infrastructures, self-healing converged systems, just like the cloud, right? So we are continuously monitoring the configuration, the availability, and other aspects of your converged system, and we are able to take action to make sure it stays on the rails. >> We saw you guys at the RSA event — you guys had a small little party we went to, and we were riffing, having fun with some of the NetApp folks — and the big trend in cloud is serverless. So the joke was, is a storage-less solution coming? To your point, if you think about it, it's just storage somewhere. This is kind of a joke, but it's also kind of nuanced. This is elastic-- >> No, no! It's absolutely true. If you look at NetApp's strategy, if you look at our cloud strategy: we're the first third-party branded service that's part of the Azure core services — we're not in the marketplace, we're actually part of Azure core. It's NetApp Cloud Volumes for Azure, and a customer doesn't know what's going on behind the scenes, but let's be clear, we're talking about software-defined storage here, right? >> And cloud-ified, too, as well — talk about cloud operations. >> Yeah, still, at the end of the day, for us, our intellectual property is not really tied to hardware; we obviously use that as a way to get our intellectual property into the hands of our customers.
But we're not tied to a-- >> You guys made a good bet on cloud. I remember talking, before Kurian took over — you guys were kicking the tires on Amazon years ago. >> Yes, yes, yes, that's right. >> So it's not like a Johnny-come-lately to the cloud — you guys have been deep in the core. >> Absolutely. >> To end this segment, I wanted to get your thoughts, because you guys are here at Cisco Live: what should the audience that couldn't make it out here understand as the top story at Cisco Live, and what is your role with Cisco here? What's the big story — top line, high-order bit — of the NetApp-Cisco story? >> So I'll go first, and I'll let my friend here go second. We were really excited coming into Cisco Live, right? We had this pretty big announcement last week — there were a few different aspects to it, but I'll talk about two of them. A new focus between Cisco and NetApp on verticals around FlexPod, and what that really means is that we're focused on very specific verticals, including healthcare, but there'll be others that come down the line. We announced a new solution based on Epic EHR. We announced some lead customers, including Mercy Technology Services, which is part of the Mercy Hospital group. So that was super exciting — I think it just demonstrates that our focus is on the outcomes, as opposed to the actual infrastructure; the infrastructure is the way to deliver that. So we're very excited about that at Cisco Live. The second thing we announced was the Managed Private Cloud I mentioned — we actually announced it with four of our major joint partners: Dimension Data, ProAct, Microland, and, oh my Lord, ePlus, yes, of course. That was super exciting as well, and it captures the imagination — it's always very fun when you're standing at a booth and people say, "Oh, I've known FlexPod, I've seen you guys around," but there's always something new to talk about. >> The relevance is more than ever. >> Absolutely. >> Keith, what wave is NetApp riding right now? If you look at the Cisco action going on, what they're going through, what should people know about the big wave that you guys are taking advantage of right now? >> I think the big wave has absolutely gotta be what we're doing with the hyperscalers. We have by far taken the industry by storm when you think about what we've done with Microsoft, what we're doing with Google, you know — sorry? >> And Amazon. >> And Amazon, thank you. >> Small companies. >> Yeah, just small hyperscalers, right? It's amazing what we can do with Cloud ONTAP across those vendors, and when we look at what our customers have done with FlexPod, and their relationship with Cisco and NetApp, and our ability to work together to help customers get their data from their core data centers to cloud, and back, to their customers, and for us to be able to use analytics the way we do on FlexPod — I think there's a real opportunity-- >> And riding the scale wave too — scaling is huge. Everyone's talking about large scale; hyperscale is the largest scale you can see. >> Well, and our ability to control where the data lives, right? Because you want to be able to keep control of your data, and being able to use familiar tools — like what you're already using in your own data center and in your own converged infrastructures — to control that experience is gonna be very important.
>> Guys, thanks for coming in for the NetApp update, great news, great alignment with Cisco. It's a large-scale world, and certainly, the world is changing, storage is gonna be a critical part of it, server, storage, infrastructure, cloud operations on-premise, and in the cloud. TheCUBE, bringing you live coverage. I'm John Furrier, Stu Miniman, stay with us for more day three of three days of coverage here in Orlando, Florida, for Cisco Live, we'll be right back. (electronic music)

Published Date : Jun 13 2018

SUMMARY :

Brought to you by Cisco, NetApp, I'm John Furrier, the co-host of theCUBE with Stu Miniman. Thanks for having us, John. arrive on the scene in the 90s. We just came off the back of an amazing year, So one of the questions I always get asked, is that new digital currency that is data across all the silos Yeah, so Keith, I think back to when and had the privilege of helping to build the network, and it's going to grow in scale with the business. and do a comparison to where they are today, And what do you guys call that product now within Cisco? for the converged infrastructure play with FlexPod? They recognize the need on-prem, we live in a hybrid world. sitting on-prem, and you don't have to manage it Let's talk about what you just said about the whole So what you see in the cloud that's the funny thing, we talk about where we want and they said that's spelled P-A-I-N. some of the broader tools out there. and the background that we had at Cisco, What's that going to do, what's the impact of that, depending on the size of the FlexPod, to actually go through the value the FlexPod can offer. that the partner's there to help them on a daily basis. the same environment, no matter what you're talking about. I need the data and the analytics, to help get through that. I shouldn't have to go So the product guys love it when it's fixed, You want changes to be rolling with the... so that you get the notifications, and we expanded upon, as you mentioned, the API, is the move from device, hardware, to system. command line interface, you guys taking that same approach, of the overall system, and that really is the key, right? and the big trend in cloud is server-less. behind the scenes but let's be clear, And cloud-ified, too, as well, Yeah, still at the end of the day, for us, you guys were kicking the tires on Amazon years ago. you guys have been deep in the core. out here as the top story at Cisco Live, just demonstrates that our focus is on the outcomes, what we're doing with Google, you know, sorry? talking about hyperscale as that is the largest scale and being able to use familiar tools Guys, thanks for coming in for the NetApp update,

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Cisco | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
Russell Fishman | PERSON | 0.99+
Keith | PERSON | 0.99+
John | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Keith Barto | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Mercy Technology Services | ORGANIZATION | 0.99+
Barcelona | LOCATION | 0.99+
eight years | QUANTITY | 0.99+
Mercy Hospital | ORGANIZATION | 0.99+
three days | QUANTITY | 0.99+
Microland | ORGANIZATION | 0.99+
Orlando, Florida | LOCATION | 0.99+
NetApp | ORGANIZATION | 0.99+
Dimension Data | ORGANIZATION | 0.99+
ProAct | ORGANIZATION | 0.99+
Outlook | TITLE | 0.99+
One | QUANTITY | 0.99+
last week | DATE | 0.99+
VMWare | TITLE | 0.99+
four | QUANTITY | 0.99+
FlexPod | COMMERCIAL_ITEM | 0.99+
third day | QUANTITY | 0.99+
FlexPods | COMMERCIAL_ITEM | 0.99+
last March | DATE | 0.99+
Russell | PERSON | 0.99+
Kurian | PERSON | 0.99+
NetApp | TITLE | 0.99+
AGI | ORGANIZATION | 0.98+
two customers | QUANTITY | 0.98+
DevNet | ORGANIZATION | 0.98+
sub-15 minutes | QUANTITY | 0.98+
90s | DATE | 0.98+
a week | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Converged System Advisor | ORGANIZATION | 0.98+
theCUBE | ORGANIZATION | 0.98+
NVAs | ORGANIZATION | 0.97+
big wave | EVENT | 0.97+
sub-10 minutes | QUANTITY | 0.97+
half an hour | QUANTITY | 0.97+
second | QUANTITY | 0.97+
second thing | QUANTITY | 0.97+
Converged Systems Advisor | ORGANIZATION | 0.96+
first three steps | QUANTITY | 0.96+

Fred Krueger, WorkCoin | Blockchain Unbound 2018


 

(Latin music) >> Narrator: Live, from San Juan, Puerto Rico, it's theCUBE! Covering Blockchain Unbound. Brought to you by Blockchain Industries. (Latin music) >> Welcome back to our exclusive Puerto Rico coverage — this is theCUBE, for Blockchain Unbound: the future of blockchain, cryptocurrency, the decentralized web, the future of society, the world of work, et cetera, play — it's all happening right here, I'm reporting it, the global internet's coming together. My next guest is Fred Krueger, the founder and CEO of a new innovative approach called WorkCoin — the future of work is what he's tackling. Fred, great to see you! >> Thank you very much, John. >> So we saw each other in Palo Alto at the D10e at the Four Seasons, caught up — we're Facebook friends, we're LinkedIn friends — just a quick shout out to you: I saw you livestreaming Brock Pierce's keynote today, which I thought was phenomenal. >> Yeah, it was a great keynote. >> Great work. >> And it's Pi Day. >> It's Pi Day? >> And I'm a mathematician, so, it's my day! (Fred laughs) >> It's geek day. >> It's geek day. >> All those nerds are celebrating. So, Fred, before we get into WorkCoin, I just want to get your thoughts on the Brock Pierce keynote. I took a video of it, with my shaky camera, but I thought the content was great. You have it up on Facebook on your feed, I just shared it — what was your takeaway of his message? I thought it was unedited, obviously, no New York Times spin here, no-- >> Well, first of all, it's very authentic. I've known Brock 10 years, and I think those of us who have known Brock a long time know that he's changed. He became very rich, and he's giving away, and he really means the best. It's completely from the heart, and it's 100% real. >> Being in the media business, kind of by accident — and I'm not a media journalist by training, we're all about the data, we open our data, everyone knows we share the free content — I saw the New York Times article about him, and I just saw it twisted, okay? The social justice warriors out there just aren't getting the kind of social justice that he's actually trying to do. So, you've known him for 10 years. I see it as clear as day when it's unfiltered: you say, here's a guy who's eccentric, smart, rich now, paying it forward? >> Yep. >> I don't see anything wrong with that. >> Look, I think that the-- >> What is everyone missing? >> There's a little jealousy, let's be honest. People resent a little bit, and I think part of it's the cryptocurrency world's fault. When your symbol of success is the Lamborghini, it's sort of like, this is the most garish, success-driven, money-oriented crowd, and it reminds me a little bit of the domain name kind of people. But Brock's ironically not at all that, so, he's got a-- >> If you look at the ad tech world, and the domain name world — 'cause they're all kind of tied together — I won't say underbelly, but fast and loose would be kind of the way I would describe it. >> Initially, yes — ad tech, right? So if you look at ad tech back in, say, I don't know, 2003, 2004, it was like gunslingers, right? You wanted to buy some impressions, you'd go to a guy, the guy'd be like, "I got some choice impressions, bro." >> I'll sell ya a watch too while I'm at it. >> Yeah, exactly. (John laughs) That was the ad tech world, right? And that world was basically replaced by Google and Facebook, who now control 80% of the inventory, and it's pretty much, you go to a screen, it's all self-serve, and that's it.
I don't know if that's going to be the case in cryptocurrencies, but right now, initially, you sort of have this — it's a Wild West phenomenon. >> Any time you got alpha geeks, and a major infrastructure application developer shift happening, which is happening, you kind of look at these key inflection points — you need to kind of have a strong community self-policing policy. If you look at the original DNS days — 'cause you remember, I was there too — Jon Postel, rest in peace, godspeed, we all know what he did; Vint Cerf with TCP/IP; the core dudes, and gals, back then, they were tight! So any kind of new entrants that came in had to prove their worth. I won't say they were the most welcoming, 'cause they were nervous of people infecting the early formation — mostly they're guys, they're nerds. >> Right, so I think if you look back at domain names, back in the day — a lot of people don't know this, but Jon Postel actually kept the list of domain names in a text file, right? If you basically wanted a domain name, you called Jon up, and you said, "I'd like my name added to the DNS," and he could be like, "Okay, let me add it to the text file." Again, these things all start in a very sort of anarchic way, and now-- >> But they get commercial. >> It gets commercial, and it gets-- >> SAIC, Network Solutions, at various times — we all know the history — ICANN, controlled by the Department of Commerce up until a certain point in time-- >> Uh, 'til about four years ago, really. >> So, this is moving so fast. You're a student of the industry, and you're also doing a startup called WorkCoin. What is the formula for success, what is your strategy, what are you guys doing at WorkCoin? Take a minute to explain what you guys are doing: your team, your approach-- >> So let's start with the problem, right? If you look at freelancing right now, everybody knows that a lot of people freelance, and I don't think people understand how many people freelance. There are 57 million people in America who freelance. Close to 50% of us don't actually have jobs other than freelancing. And so, this is a slow-moving train, but it's basically moving in the direction of more freelancers, and we're going to cross the 50% mark-- >> And that's only going to get bigger, because of virtual work, the global workforce, no boundaries-- >> Right, and so it's a global phenomenon, right? Freelancing is just going up, and up, and up. Now, you would think in this world there would be something like Google, where you could sit there and type "patent attorney," and you could get 20 patent attorneys competing for your business, each one would have their price, and you could just click and hire a patent attorney, right? Is that the case? >> No. >> No, okay. >> I need a patent attorney. >> So, what if you have to hire a Telegram manager for your Telegram channel? Can you find those just by googling "telegram manager"? No. So basically-- >> The user expectation is different than what the infrastructure can deliver — that's what you're basically saying. >> No, what I'm saying is it should be that way; it is not that way. And the reason it's not that way is that, basically, there's no economics to do that with credit cards. If you're building a marketplace where these people kind of find each other, you need the economics to make sense. And when you're being charged 3.5% each way, plus you have to worry about chargebacks, buyer fraud, and everything else, you can't build a marketplace that's open and transparent.
It's just not possible. And I realized six months ago, that with crypto, you actually could. Not that it's going to be necessarily easy, but, technically, it is possible. There's zero marginal cost, once I'm taking in crypto, I'm paying out crypto, in a sort of open marketplace where I can actually see the person, so I could hire John Furrier, not John F., right? >> But why don't you go to LinkedIn, this is what someone might say. >> Well, if you go to LinkedIn, first of all, the person there might not be in the market, probably is not in the market for a specific service, right? You can go there, then you need to message them. And you just say, "Hey, your profile looks great, "I noticed you're a patent attorney, "you want to file this patent for me?" And then you have to negotiate, it's not a transactional mechanism, right? >> It's a lot of steps. >> It's not transactional, right? So it's not click, buy, fund, engage, it just doesn't work that way. It's just such a big elephant in the room problem, that everybody has these problems, nobody can find these good freelancers. What do you end up doing? You end up going to Facebook, and you go, "Hey, does anybody know any good patent attorneys?" That's what you do. >> That's a bounty. >> Well, it's kind of, yeah. >> It's kind of a social bounty. "Hey hive, hey friends, does anyone know anything?" >> It's social proof, right? Which is another thing that's very important, because, if John, if you were-- >> Hold on, take a minute to explain what social proof is for the folks. >> Social proof is just the simple concept that it's a recommendation coming from somebody that you know, and trust. So, for example, I may not be interested in your video services, John, but I know you, and I am in the business of a graphic designer, and you're like, "Fred, I know this amazing graphic designer, "and she's relatively cheap." Okay, well that's probably good enough for me to at least start looking at her work, and going the next step. On the other hand, if I'm just looking at 100 graphic designers, I do not know. >> It's customized contextual data, around a specific transaction from a trusted source. So you socially, are connected to, or related. >> It, sort of, think about this, it doesn't even have to be a source that you know, it could be just a source that you know of, right? So, to use the Brock example again, Brock's probably not going to be selling his services on my platform, but what if he recommends somebody, people like giving the gift of recommendation. So Brock knows a lot of people, may not be doing as well as him, right? And he's like, "Well, this guy could be a fantastic guy "to hire as social media manager," for example. Helping out a guy that needs a little bit of work. >> And endorsement's a major thing. >> It is giving something, right? You're giving your own brand, by saying, "I stand behind this person." >> Alright, so tell me about where you are with WorkCoin, honestly, people might not know your background, if you check him out on LinkedIn, Fred Krueger, mathematician, Stanford PhD, well-educated, from a centralized organization, like Stanford, has a good reputation, you're a math guy, is there math involved? Obviously, Blockchain's math related, you got crypto, how are you guys building this out, share a little bit of, if you can, show a little leg on the tech-- >> The tech is sort of simple. 
So basically, the way it is right now, it's built in Google Cloud, but we have an interface where you can fund the thing — so it's built, first of all, that's the first thing. We built it on web and mobile. And you can basically buy WorkCoins from the platform itself, using Ethereum, and also, we've integrated with Sensei, a different token. So we can integrate with different tokens, and you're using these tokens to fund your account, right? And then, once you have the tokens in your account, you can buy services with them, right? And then the service provider, the minute they finish delivery of the service to your expectation, they get the coin in their account, and then they can transfer that coin back into Ethereum, or Bitcoin, or whatever, to cash out.
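A toy sketch of the flow Fred just described — fund an account with tokens, lock them when a service is bought, release them to the provider on delivery. This illustrates the escrow pattern in plain Python; it is not WorkCoin's implementation, and every name and number here is hypothetical.

```python
# Minimal token-escrow ledger: fund -> buy (escrow) -> deliver (release).
class Marketplace:
    def __init__(self) -> None:
        self.balances: dict[str, int] = {}  # user -> spendable tokens
        self.escrow: dict[str, tuple[str, str, int]] = {}  # job -> (buyer, seller, price)

    def fund(self, user: str, tokens: int) -> None:
        # In the real platform this is where ETH (or another token) converts
        # into the marketplace coin; here it just credits a ledger entry.
        self.balances[user] = self.balances.get(user, 0) + tokens

    def buy(self, job: str, buyer: str, seller: str, price: int) -> None:
        if self.balances.get(buyer, 0) < price:
            raise ValueError("insufficient tokens")
        self.balances[buyer] -= price
        self.escrow[job] = (buyer, seller, price)  # locked until delivery

    def deliver(self, job: str) -> None:
        _, seller, price = self.escrow.pop(job)  # release to the provider
        self.balances[seller] = self.balances.get(seller, 0) + price

market = Marketplace()
market.fund("john", 100)
market.buy("patent-search", buyer="john", seller="fred", price=40)
market.deliver("patent-search")
print(market.balances)  # {'john': 60, 'fred': 40}
```

The zero-marginal-cost point in the conversation is what makes this loop viable: once funds are already in tokens, moving them between ledger entries carries none of the card-network fees or chargeback risk described above.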
>> Okay, so wait — now that the product's built, have the coins been issued? Are you guys doing an ICO? Are you raising money? >> So we're in the middle of an ICO-- >> Private? >> Private only, for now. So we've raised just under $4,000,000-- >> Great, congratulations. >> I have no idea if that's good or not-- >> Well, it's better than a zero (laughs). >> It's better than zero, right? It is better than zero, right? >> So there's interest, obviously. >> Yeah, so look, we've got a lot of interest in our product, and I think part of the interest is that it's very simple. A lot of people can go, "I think this thing makes sense." Now, does that mean we're going to be completely successful in taking over the world? I don't know. >> Well, I mean, you got some tailwinds at your back. One, the infrastructure in e-commerce, and the things that you're going after, are 20-year-old stacks. Number two, the business model, and the expectations of the users, are shifting radically, and there's no actual product that does it (laughs), so. >> So a lot of these ICOs, I think they're going to have technical problems actually building to the specification. 'Cause it's difficult when you're dealing with the Blockchain — first of all, you're building on a movable platform, right? I met some people just today who are building on Hashgraph. Now, that's great, but Hashgraph is like one day old, you know? So you're building on something that is one day old, and they've just announced their coin five minutes ago, you know. Again, that's great, but normally, as a developer myself, I'm used to building on things that are years old — I mean, even something that's three years old is new. >> The momentum going on, that someone might want to tout Hashgraph for, is 'cause it's got momentum-- >> It's got total momentum. >> They're betting on an ecosystem. But that brings up the other thing I want to get your thoughts on, because we've observed this at Polycon, we've been watching the industry landscape now, onto our 10th year: there's almost an ecosystem stake in the ground. The good news is, ecosystems are developing. You got entrepreneurs, you got projects, you got funding coming in, but it's going to be a fight for the ecosystem, because you can't have a zillion ecosystems — eventually they have to be-- >> Well, you know-- >> Or can you? >> Here's the problem: everybody's focused on the plumbing right now, right — the infrastructure? But what they should be focusing on is the app. And I have a question for you — I've asked this question to my advisors and investors, which are DNA Fund, and I say-- >> Let's see if I get it right, it's a test here on the spot, I love this, go. >> Okay, so here's the question: in your wallet right now, on your mobile phone, show me how many Blockchain apps you have right now. >> Uh, zero, on my phone? >> Okay, zero. >> Well, I have a burner phone for my other one, so (laughs). >> But on any phone, on any phone that you possess, how many Blockchain apps do you have on your phone? >> Wallet or apps? >> An app that you-- >> Zero. >> An app, other than a wallet — zero, right? Every single person I've asked at this conference has the same number: zero. Now, think about this, if you'd-- >> Actually, I have one. >> Uh, which one? >> It's called Cube Coin. >> Okay, there you go, Cube Coin. But here's the problem: if you went to a normal-- >> Can I get WorkCoin right now? >> Yeah, well, not right now, but I have it on my wallet. So for example, it's in TestFlight, but my point is, I have a fully functional thing where I can go buy services, use the coin, everything, in an app. I think this is one of the things-- >> So, hypothetically, if I had an application that was fully functional, with Blockchain, with cryptocurrency, with ERC-20 smart contracts, I would be ahead of the game? >> You would be ahead of the game. I mean, I think-- >> Great news, guys! >> And I think you absolutely are thinking the right thinking, because everybody's just looking at the plumbing, and, look, I love EOS, but it's sort of a new operating system, same as Hashgraph — you need apps to run on your thing-- >> First of all, I love chatting with you, you're super smart — folks out there, Fred is someone you should check out, you got great advisor potential. You're right on this, and I want to test something out with you — I've been thinking about this for a while. If you think about the OSI model, the OSI stack — for the younger kids, that was a key movement that generated the key standards in the stack for internetworking and physical devices. So, it was standardized from the bottom up. The top of the stack actually never standardized: it became the presentation and session layers, they differentiated, then eventually became the front end. If you look at what's happening now, the top of the stack is really what's standardizing — standardizing with business logic — while the bottom of the stack has many different versions of, say, Blockchain. So the question is: it might be a world that will never have a TCP/IP moment; it might be that the business app logic will dictate, through some sort of abstraction layer, down to programmable plumbing. You see this in cloud with DevOps. So the question is, do you see it that way? I'm thinking out loud here, but what I'm seeing in the trend is that people who make the business logic decisions first, and nail those, are far more successful swapping out and hedging on the plumbing.
Do I need 4,000 transactions per second? I would say rarely — most people are not sitting there going, "I need to do 4,000 transactions per second." >> If you need that, you've already crossed the finish line; you probably want a proprietary solution. >> Just to put things in perspective, Bitcoin does 300,000 transactions per day. >> Well, why does Ripple work? Ripple works because they nailed the business model. >> I'll tell you what I think of Ripple-- >> What's your take? >> Why Ripple works — I think, and I'm not the first person to say this, that the thing that works right now, the core application of all this stuff, is money, right? That's the core thing. Now, if you're talking about documents on the Blockchain — is that going to be useful? Perhaps. Real estate, say, on the Blockchain? Perhaps. Poetry on the Blockchain? Maybe. Love on the Blockchain? Why ban it, you know? >> Hey, there's CryptoKitties on the Blockchain — love is coming next. >> Love is coming next. But the core killer app — the killer app — is money. It's paying people. That is the killer app of the Blockchain right now, okay? So every single one of the things that's really successful is about paying people. So what is Bitcoin? Bitcoin is super great for taking money and moving it out of China and into the United States. Or out of Nigeria and into Switzerland, right? You want to take $100,000 out of Nigeria and move it to Switzerland? Bitcoin is your answer. Now, you want to move money from bank A to bank B? Ripple is your answer, right? (John laughs) If you want to move money from Medellin, Colombia — like in Narcos — Monero is probably your crypto of choice, you know? (John laughs) Truly anonymous business. And I think it's really about payment, right? And so I look at WorkCoin as: what is the killer thing you're doing here? You're paying people. You're paying people for work, so it's designed for that. That's so simple. >> The killer app is money. Miko Matsumura would say open source money — that's his narrative; love that vision. Okay, if money's the killer app, the rest is all kind of window dressing around trying to race to-- >> I think it's the killer — it's the initial killer app. I think we need to get to the point where enough of us start transacting with money, with digital money, and then after digital money there will be other killer apps, right? It's sort of like, if you look at the internet — and again, I'm repeating somebody else's argument-- >> It's Fred Krueger's hierarchy of needs — money-- >> Money starts, right? >> Money is the baseline. >> The initial thing — what was the first thing of the internet? I was on the internet before it was the internet. It was called the ARPANET, at Stanford, right? I don't know if you remember those days-- >> I do remember, yeah, I was in college. >> But the ARPANET — it was email, right? We had the first versions of email. And that was back in 1986. >> Email was the killer app for 15, 20 years. >> It was the killer app, right? And I think-- >> For 15 or 20 years. >> Absolutely — well before websites, you know? So I think we gotta solve money first. And I bless everybody who has got some other model, and maybe they're right — maybe notarization of documents on the internet is a-- >> There's going to be use cases for Blockchain — some obvious low-hanging fruit — but that's not revolutionary, that's not game-changing. What is game-changing is the promise of a new decentralized infrastructure.
>> Here's the great thing that's absolutely killer about what this whole world is — and this is why I'm very bullish: if you look at the internet of transmitting value from one node to another node, credit cards just do not do a very good job of that, right? You can't put a credit card inside a machine very well at all, right? It doesn't work! And for a very simple reason — why? Because you get those Amex fraud alerts. (John laughs) Now, if a machine is paying another machine, the second machine doesn't know how to interpret the first machine's Amex fraud alerts. So the machine has to pay in something that's immutable — I'm paying you a little bit of token. The classic example is the self-driving car that pays the gas pump, 'cause it's a gas-powered self-driving car; it pays to fill up, and the gas pump may have to pay its landlord in rent, and all of this is done with tokens, right? With credit cards, that does not work. So it has to be tokens. >> Well, what credit cards did for other transactions simplified things a little bit, and there's a whole 'nother wave coming that just makes it easier and reduces the steps.
Of course, nobody said that. >> And anyways, you could also have the ability to collaborate with someone quickly, and do a smart contract, you could do some commerce, and get paid. >> And get paid for it! >> Hey, hey! >> How about that, so I just see-- >> Move from the TA's grading-papers payroll, which is like peanuts-- >> And maybe make a little bit more doing something that's more relevant to my PhD. All I know is there's so many times where I've said, my math skills are getting rusty, and I was like, I really wish I could talk to somebody who knew something about this distribution, or could help me-- >> And instantly, magically have them-- >> And I can't even find them! Like, I have no idea, I have no idea how I would go and find people at Stanford Institute, I would have no idea. So if I could type Stanford, statistics, and find 20 people there, or USC Statistics, imagine that, right? That could change the world-- >> That lowers the barriers, friction barriers, to-- >> Everybody could be hiring graduate students. >> Well it's not just hiring, collaborating too. >> Collaborating, yeah. >> Everything. >> And any question that you have, you know? >> A doctor doing cancer research might want to find someone in China, or abroad, or in-- >> It's a worldwide thing, right? We have to get this platform so it's open, and so everybody kind of goes there, and it's like your identity on there, there's no real boundary to how big we can get. Once we get started, I'm sure this'll snowball. >> Fred, I really appreciate you taking the time-- >> Thanks a lot for your time. >> And I love your mission, and we support you, whatever you need, WorkCoin, we've got to find people out there to collaborate with, otherwise you're going to get pushed fake news and fake data; the best way to find it is through someone's profile on WorkCoin-- >> Thanks. >> Looking forward to seeing the product, I'm John Furrier, here in Puerto Rico for Blockchain Unbound, Restart Week, a lot of great things happening, Brock Pierce on the keynote this morning really talking about his new venture fund, Restart, which is going to be committed 100% to Puerto Rico, this is where the action will be, we will be following this exclusive story, continuing, we'll be back with more, thanks for watching. (soothing electronic music)
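A side note on the machine-to-machine payments Fred sketched earlier (the car paying the pump, the pump paying its landlord): the mechanics are easier to see in a toy ledger. The sketch below is purely illustrative; the wallet names, balances, and amounts are invented, and a real system would settle transfers on-chain through a token contract rather than an in-memory dictionary.

```python
# Toy in-memory token ledger illustrating machine-to-machine payments.
# Names and balances are hypothetical; a real system would settle these
# transfers on-chain via a token contract, not a Python dict.

class TokenLedger:
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, sender, receiver, amount):
        # Once applied, a payment is final: no chargebacks, no fraud alerts.
        if self.balances.get(sender, 0) < amount:
            raise ValueError(f"{sender} has insufficient tokens")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = TokenLedger({"car": 100, "gas_pump": 0, "landlord": 0})
ledger.transfer("car", "gas_pump", 40)       # the car pays to fill up
ledger.transfer("gas_pump", "landlord", 25)  # the pump pays its rent
print(ledger.balances)  # {'car': 60, 'gas_pump': 15, 'landlord': 25}
```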

Published Date : Mar 15 2018


Jay Littlepage, DigitalGlobe | AWS Public Sector Summit 2017


 

>> Announcer: Live from Washington, DC, it's theCube, covering AWS Public Sector Summit 2017, brought to you by Amazon Web Services and its partner ecosystem. >> Welcome inside the convention center here in Washington, DC. You're looking at many of the attendees of the AWS Public Sector Summit 2017. We're coming to you live from our nation's capital. Several thousand people on hand here for this three-day event; we're here for two days. John Walls, along with John Furrier. John, good to see you again, sir. >> Sir, thank you. >> We're joined by Jay Littlepage, who is the VP of Infrastructure and Operations at DigitalGlobe, and Jay, thank you for being with us at theCube. >> My pleasure. >> John W: Good to have you. First off, your company, high-resolution earth imagery, satellite stuff. Out-of-this-world business. >> Yep. >> Right, tell our viewers a little bit about what you do, I mean, the magnitude of, obviously, the environmental implications of that, or defense, safety, security, all those realms. >> Okay, well, stop me when I've said too much because I get pretty excited about this. We work for a very cool company. We've been taking earth imagery since 1999, when our first satellite went up in the sky. And, as we've increased our capabilities with our constellation, our latest satellite went up last November. We're flying, basically, a giant camera that we can fly like a drone. So, and when I say giant camera, it's about the size of a school bus, and the lens is about the size of the front of the school bus, and we can take imagery from 700 miles up in space and resolve a pixel about the size of a laptop. So, that gives us an incredible amount of capability, and the flying like a drone, besides just being really cool and geeky, means we can sling the lens from basically Kansas City to here in Washington in 15 seconds and take a shot. And so, when world events happen, when an earthquake happens, you know, they're generally not scheduled events, we don't have to have the satellite right above the point where there's something going on on the ground. We can take a shot from an angle of 1,000 miles away, and with compute power and good algorithms, we can basically resolve the picture of the earth, and it looks like we're right overhead, and we're getting imagery out immediately to first responders, to governmental agencies so they can respond very quickly to a disaster happening to save lives. >> So, obviously, the ramifications are endless, almost, right? >> Yes. >> All that data, I mean, you can't even imagine the amount, talk about storage. So, that's certainly a complexity, and then there's making it useful to all these different sectors. Without getting too simple, how do you manage that? >> Well, you know, it's a big trade-off because, ideally, if storage was free, all of our imagery in its highest consumable form would be available all the time to everybody. Each high-resolution image might be 35 gig by itself. So, you think about flying a constellation for that long, we've got 100 petabytes of imagery. That's too much, it's too expensive to have online all of the time. And so, we have to balance what's going to be relevant and useful to people versus cost. You know, a lot of the imagery goes through a cycle where it's interesting until it's not, and it starts to age off. The thing about the planet, though, is we never know what's going to happen, and when something that aged off is going to be relevant again.
And so, the balance for my team is really making sure we're hitting the sweet spot there. The imagery that is relevant is readily accessible, and the imagery that's not is in the cheapest form factor possible, which for us is compressed, and it's in some sort of archival storage, which for us, now that we've used the Snowmobile, is Glacier. >> Jay, I want to ask, to get your thoughts. I want you to talk about DigitalGlobe, but before that, some context. This weekend, I was hanging out with my friends in Santa Cruz and the kids were surfing. He's a big drone guy, he used to work for GoPro, and she used to buy the drones, and, hey, how's it going with the drones? It got kind of boring: here's a great photo I created, but after a while, it just became like Google Earth, and it got boring. Kind of pointed out that he wanted more, and we've got virtual reality, augmented reality, experiences coming to users. That puts imagery front and center: place imagery, the globe, pictures, places and things is what you guys do. So, that's not going away any time soon. So, talk about your business, what you guys do, some of the things that you do, your business model, how that's changing, and how Amazon, here in the public sector, is changing that. >> Well, that's a fantastic question, and our business is changing pretty rapidly. We have all that imagery, and it's beautiful imagery, but increasingly, there's so much of it, and so many of the use cases aren't about human eyeballs staring at pixels. They're about algorithms extracting information from the pixels. And, increasingly, from either the breadth of pixels, instead of just looking at a small area, you can look around it and see what's happening around it and use that as signal information, or you can go deep into an archive and see the same location on the planet over and over over years and see the changes that have happened over that time frame. So, increasingly, our market is about extracting information and extracting insights from the imagery, more so than it is the imagery itself. And so that's driving an analytics business for us, and it's also driving a services business for us, which is particularly important in the public sector, to actually use that for different purposes. >> You can imagine the creativity involved of developers out there watching or even thinking about using satellite imagery in conflux with other data. Remember the Web 2.0 craze earlier in the last decade: you saw mashups of APIs with Google Maps. Oh yeah, drop a little pin, and then mobile came. But now, you're seeing mashups go on with other data. And I've heard stats at Uber, for instance, that it remaps New York City every five days with all that GPS data of the cars, which are basically sensors. So, you can almost imagine the alchemy, the convergence of data. This is exciting for you, I can imagine. Why don't you share with us, anecdotally or statistically, what you're seeing, how this is playing out? >> Well, yes, some of our biggest commercial customers of our products now are location-based services. So, Uber's using our imagery because the size of the aperture of our lens means we have great resolution. And so, they've been consuming that and consuming our machine learning algorithms to basically understand where traffic is and where people are so that they can refine, on an ongoing basis, where the best pick-up and drop-off locations are. That really drives their business. Facebook's using the imagery to basically help build out the Internet.
You know, they want to move into places on the planet where the Internet doesn't exist. Well, in order to really understand that, they need to understand where to build, how to build, how many people are there, and you can actually extract all that from imagery by going in in detail and mapping roofs' shapes and roofs' sizes, and, from there, extracting pretty accurate estimates of how many people live in a particular area, and that's driving their project, which is ultimately going to drive access for... >> Intelligence in software as we look at imagery. I mean, here at Amazon, Rekognition's their big product for facial recognition, among other things. But this is what it's getting at, this notion of actually extracting that data. >> Well, you think about it. You know, once the data is available, once our imagery is available, then the sky's the limit. You know, we have a certain set of algorithms that we apply to help different industries, you know, to look at rooftops, to look at water extraction. After a hurricane, we can actually see how the coverage has changed. But, you look at a Facebook, and they're applying their own algorithms. We don't force our algorithms to be used. We provide the information, we try to provide the data. Companies can bring their own algorithms, and then it's all about what can you learn, and then, what can you do about it, and it's amazing. >> So, here's the question. With the whole polyglot conversation, multiple languages that people speak, translated into the tech industry, there are interdisciplinary forces in play: data science, coding, cognitive, machine learning. So, the question is, for you, okay, as this stuff comes together, do you speak DevOps? It's kind of a word, and we hear people say, is that Russian or is that English? DevOps is a cloud language mindset. And so, that brings up the question of, are you guys friendly to developers, because people want to have microservices. If I'm a developer, I'm like, hey, I want those maps. How do I get them, can I buy it as a service, are they loaded on Amazon, how do I engage with DigitalGlobe as a developer or a company? >> Well, think about what you just said and the customers I just talked about. They're not geospatial customers. You know, they're not staffed with people that are PhDs in extracting information. They're developers that are working for high-tech companies that have a problem they want to solve. >> They're already building mobile apps or doing some cool database work in here. >> So, we're providing the raw imagery and the algorithms to very tried and true systems where people can plug into workbenches and build artificial intelligence without necessarily being experts in that. And, as a case in point, my team is an IT team. You know, we've got a part of the organization that is all staffed with PhDs. They're the ones that are driving our global... >> John W: PhD as a service. (laughter) >> Well, kind of. I mean, if you think about it, they're driving the leading edge for these solutions to our customers. But I've got an IT team, and I've got this problem with all this data that we talked about earlier. Well, how am I actually going to manage that? I'm going to be pulling in all sorts of different sources of data, and I'm going to be applying machine learning using IT guys that aren't PhDs to actually do that, and I'm not going to send them to graduate school. They're going to be using standard APIs, and they're going to be applying fairly generic algorithms, and...
>> So, is that your model, is it just API, is there other... >> I think the real key is the API makes it accessible, but a machine learning algorithm is only as good as its training. So, the more it's used, the more it refines itself, the better our algorithm gets. And so, that is going to be the type of thing that the IT developer, the infrastructure engineer of the future, becomes, and already, basically, in the last couple of years, as we started this journey with AWS, 20% of my staff, same size staff, are software developers now. >> So, I'll take this to the government side. We talked a lot about commercial use. But on the government side, I'm thinking about FEMA, disaster response, maybe a corps of engineers, you know, bridge construction, road construction, coastline management. Are all those kind of applications that we see on the dot gov side? >> They are all things that you see that can be done on the dot gov side, but we're doing them all in the commercial environment, the US East region for AWS, and I think that's actually a really important distinction, and it's something that I think more and more of the government agencies are starting to see. We do a lot of work for one particular government agency and have for years. But 99 point something percent of our imagery is commercial unclassified, and it's available for the purposes that our customers use it for, but it's also available for all those other customers I've talked about. And more and more of what we do, we are doing in the completely open but secure commercial environment, because it's ubiquitous for our customers. Not all of our customers do that type of work. They don't need to comply with those rigid standards. It's generally where all AWS products that are released are released to, with the other environments lagging, and they probably don't want me saying that on TV, but I just did. And it's cheaper; you know, we're a commercial company that does public sector work. We have to make a profit, and the best way to do that is to put your environment in a place where, if you're going to repeat an operation, like pulling an image off Glacier and building it into something that is consumable by either a human or an algorithm and putting it back, if you're going to do something like that a million times, you want to do it really inexpensively. And so, that's where... (crosstalking) >> Lower prices, make things fast, that's Jeff Bezos' ethos: shipping products, like these books in the old days. Now, they're shipping code and making lower-latency systems. So, you guys are a big customer. What are the big implementation features that you have with AWS, and then, the second part of the question is, are you worried about lock-in? At some point, you're so big, the hours are going to be so massive, you're going to be paying so much cash, should you build your own, that's the big debate. Do you go private cloud, do you stay in the public? Thoughts on those two options? >> Well, we have both. Right now, we're running a 15-year-old system, which is where we create the imagery that comes off the satellites, and it goes into a tape archive. Last year, Reinvent... >> John F: Tape's supposed to be dead! >> Tape will die someday! It's going to die really soon, but, at the Reinvent conference last year, AWS rolled out a semi truck. Well, the real semi truck was in our parking lot getting loaded with all those tapes, and it's sad... >> John F: Did you actually use the semi?
>> We were the first customer ever, I believe, of the Snowmobile. And so, it takes a lot of time and effort to move 12,000 LTO-5 tapes loaded onto a semi and send it off. You know, that represents every image ever taken by DG in the history of our company, and it's now in AWS. So, to the second part of your question, we're pretty committed now. >> John F: Are you okay with that? >> Well, we're okay with that for a couple of reasons. One is, I'm not constraining the business. AWS is cheaper. It will be even cheaper for us as we learn how to pull all the levers and turn all the dials in this environment. But, you know, you think about that, we ran a particular job last year for a customer that consumed 750,000 compute hours in 22 days. We couldn't have done that in our data center. We would have said no. And so, I would... >> I know, you can't do it. >> We can't do it! Or, we can do it, come back, the answer will be here in six months. So, time is of the essence in situations like that, so we're comfortable with it for our business. We're also comfortable with it because, increasingly, that's where our customers already are. We are creating something in our current environment and shipping it to Amazon anyway. >> We're going to start a movie about you, with Jim Carrey, Yes Man. (laughter) You're going to say yes to everything now with Amazon. Okay, but this is a good point. Joking aside, this is interesting because we have this debate all the time: when is the cloud prohibitive? In this case, your business model is based on the fact that variable spend, turning your compute up and down, follows the cadence of the business. >> That's exactly right. You know, the thing that's really changed for the business with this model is, historically, IT has been a cost center, and moving into Amazon, I manage our storage, and I pay for our storage because it's a shared asset. It's something that is for the common good. The business units and different product managers in our business now have the dial for what they spend on the compute and everything else. So, if they want to go to market really rapidly, they can. If they want to spin it up rapidly, they can. If they want to turn it down, they can. And it's not a fixed investment. So, it allows a business flexibility that we've never had before. >> Jay, I know we're getting tight on time, but I do want to ask you one question, and I did not know that you were the first Snowmobile customer, so that's good trivia to have on theCube, and great to have you. So, while we've got you here, being the first customer of AWS Snowmobile when they rolled it out at Amazon Reinvent, we covered it on SiliconAngle. Why did you jump on that, and how has your experience been? Share some color on that whole process. >> Okay, it's been an iterative learning process for both us and for Amazon. We were sitting on all this imagery. We knew we wanted to get into AWS. We started using the Snowballs almost a year and a half ago. But moving 100 petabytes, 80 terabytes at a time, it's like using a spoon to move a haystack. So, when Amazon approached us, knowing the challenge we had about moving it all at once, I initially thought they were kidding, and then I realized it was Amazon, they don't kid about things like this, and so we jumped on pretty early and worked with them on this. >> John F: So, you've got blown away like, what? >> Just like. >> What's the catch? >> Really, a truck, really? Yeah, but really. So, it's as secure as it could possibly be.
We're taking out the Internet and all the different variables in that, including a lot of the cost in bandwidth, and basically parking it next to our data, and, you know, it's basically a big NFS file system, and we loaded data onto it, the constraint for us being, basically, that tape library with 10,000 miles of movement on the tape heads. We had to balance between loading the Snowmobile and basically responding to our regular customers. You know, we pull 4 million images a year off that tape library. And so, loading every single image we've ever created onto the Snowmobile at the same time was a technical challenge on our side more so than Amazon's side. So, we had to find that sweet spot and then just let it run. >> John F: Now, it's operational. >> So, the Snowmobile is gone. AWS has got it. They're ingesting it right now into the West Region, and we're looking forward to being able to just go wild with that data. >> We've got Snowmobiles, we've got semis, we have satellites, we have it all, right? >> We have it all, yeah. >> It's massive, obviously, but we're impressed with what you're doing with this. So, congratulations on that front, and thank you again for being with us. >> My pleasure, thanks for having me. >> You bet, we continue our coverage here from Washington, DC, live on theCube. SiliconAngle TV continues right after this. (theCube jingle)
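A technical footnote on the storage pattern Jay describes, with fresh imagery kept hot in S3 and aged imagery compressed into Glacier: on AWS this tiering is typically expressed as a lifecycle rule. Below is a minimal sketch using boto3; the bucket name, prefix, and 730-day threshold are illustrative assumptions, not DigitalGlobe's actual configuration.

```python
# Hypothetical lifecycle rule: transition imagery to Glacier as it ages.
# Bucket name, prefix, and day count are illustrative assumptions.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-imagery-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-imagery-to-glacier",
                "Filter": {"Prefix": "imagery/"},
                "Status": "Enabled",
                # After roughly two years, move full-resolution imagery
                # to Glacier; recent imagery stays readily accessible.
                "Transitions": [{"Days": 730, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```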

Published Date : Jun 13 2017


Joshua Kolden, Avalanche - NAB Show 2017 - #NABShow - #theCUBE


 

>> Announcer: Live from Las Vegas. It's theCube, covering NAB 2017. Brought to you by HGST. >> Hi, welcome back to theCube, we are live from NAB 2017. I'm Lisa Martin in Las Vegas, excited to be joined by the co-founder of Avalanche, Josh Kolden. Hey Josh, welcome to theCube. >> Thank you. >> So tell us a little bit about what Avalanche is. >> Well, Avalanche is a file navigator for filmmakers. The difference from something like Windows Explorer or the Apple Finder is that it allows you to work with files wherever they are, on different computers, in the cloud, on different units of production as they're moving around the world, without having to do all the low-level coordinating of that data. >> So in media we're talking about massive files. How is this different from Dropbox, Box, et cetera? >> So those tools actually try to synchronize your data. So, if you put a big media file in Dropbox, it'll try to copy the file not only to the cloud but also, of course, to any other computers you have your Dropbox running on. Avalanche can move it, but it doesn't necessarily move it. Instead, let's say you're an editor or studio and you want to see what's happening on set: you can see all the files as they're coming off of a camera and interact with them. Rename them, make notes, whatever has to happen, see the notes that are already applied to them. And when those files show up in editorial, on, say, a hard drive, that's when all that happens and gets synchronized locally. So it allows people to work in a very intuitive and natural production workflow, without actually trying to copy huge amounts of data across the net. >> In terms of the production life cycle, are we talking about pre-production, production, post-production, or the whole kit and caboodle? >> It's the whole thing, because what happens in production is you see teams of people kind of ad hoc join the production; they might have teams during pre-production that are there for a bit and teams that come on in post-production. So there's always this coordination problem of knowing who has what, you know, where is the camera? Post-production's looking for camera reports that only people that were on set know about. And this provides a mechanism to kind of have a continuity between all those different teams across the entire production pipeline. >> Continuity is key. Give us an example, you had mentioned, and this is really built for filmmakers. If something is filmed and the crew or the director decides, you know what, that would've been great if we'd actually shot that for VR. What's the process, or how simple or seamless is it, for them to go back in, pull something out, change it? >> Well, in those kinds of situations, I mean, production generally has a lot of planning involved. So you're going to know going in about those kinds of issues, if it's something as big as, we want to have extra footage for VR or whatever. But one thing that happens is, let's say, for example, there's a costume change where you've got a product, which is a suit or something, that needs to be placed in the scene for the financing, and then somebody spills something on it, but story-wise that works, so they're going to keep it in. People that are on the product teams later down the line might need to know these changes have occurred so they can either push back and say, no, we need to re-shoot that with a clean suit, or whatever that information might be. That back and forth.
So this makes that even possible at all. Before, it would just be making sure that somebody on production called that team and explained it to them. Now, with this, you can just put a quick note on any device and it will eventually be findable; you can just search it like Google, and find any information related to that suit, or that shot, or that production day. All kinds of different ways of searching for the stuff you're looking for. >> So facilitating a little bit of automation. You talk about the connectivity, but also it sounds like the visibility is there, much more holistic. >> Yeah, we call it discoverability, because right now a lot of this stuff isn't discoverable. Say you don't know where a database entry is; once you've lost that row number, there's no way to find out where that data comes from anymore, it's just completely disconnected. So we use a framework, it's open source underneath, called C4, the Cinema Content Creation Cloud, and that framework provides a mechanism they call indelible metadata, which binds attributes to media in a way that doesn't easily get lost. So downstream you can discover relationships you didn't expect to be there. You don't have to preplan all the relationships and build them in advance. >> So one of the things you and I were chatting about before we went live is how Silicon Valley approaches the cloud versus how Hollywood approaches it. Tell us a little bit more about your insights there, I thought it was very intriguing. >> Yeah, this is a really interesting thing, because not a lot of people realize it, because a lot of people are on both sides, Hollywood and Silicon Valley, using the same terminology. We're talking about the cloud, we're talking about files, we're talking about copying things. But there are subtle differences that get lost. And so what I've been working on a lot in the open source community, and in standards, is helping to communicate this new concept, that what we really need is, like, a web for media production. With the normal web that most Silicon Valley and cloud tools are built on, you're expecting to be able to transfer all your data each time. You go to the website, you get the webpage right then, you get all the images that it links to right then. But you don't want to do that when you're doing media production, 'cause that might represent terabytes of data for each shot. And you need to work relatively quickly. You might be doing renders or composites; these things might take many, many elements to layer together. You can't be requesting this data as you need it every single time. You want to kind of get it there and do all the processing you can possibly do all at once. So an architecture like that calls for a different kind of internet. An internet where your data moves less often. You get it to the cloud and you leave it there, and you do all your processing on it. Or it's in editorial, you do all your editing with it. The pieces that you need are in the right places, and you move them as little as possible. You move command-and-control and metadata between those locations, but the media itself needs to arrive either maybe by hard drive or get synced in advance; there are different ways of it moving, but it doesn't happen at the same time that the command and control is happening. So yeah, we are trying to communicate that difference. Hollywood is used to it happening because they have the data center in their building.
Silicon Valley's used to it happening because it's small data across the network. And that's where that disconnect is happening: they both think it's just a quick call, but it works for them because of the different architecture that they're building on top of. >> Different architectures, and different, I imagine, objectives. How are you helping to influence Silicon Valley coming together with Hollywood, and really them influencing each other? Whether it's Hollywood influencing the type of internet that's needed and why, or Silicon Valley influencing Hollywood to maybe get away from the on-prem data centers. Leverage hybrid as a destination, as a journey. Leverage the cloud for economies of scale. What's that influence like? >> Yeah, it's really fantastic, because I think it's a really good relationship between the kinds of skill sets that Silicon Valley companies bring to the table and the kinds of creative talent that Hollywood has. In fact, there's a lot that Hollywood production studios don't want to have to invest in. They don't want to have a data center. If they can have a secure, productive, as-you-need-it tool set, where they turn up the performance when they're in production and then turn it off when they're done, that's exactly what we do with camera equipment. We rent it for the production and we give it back. So in Hollywood we're used to that production model. So it's kind of teed up and ready to use all those services; it's just this kind of plumbing level that has been everybody's pain point. >> So from a collaboration perspective, are you facilitating, like, a big cloud provider meeting with one of the big studios and really collaborating to kind of cross-pollinate? >> Yeah, so I've been working with the Entertainment Technology Center at USC, yeah, they're funded by all the major studios, and have other members like Google and other big vendors for cloud and whatnot. And these groups are very interested in trying to collaborate with technology companies and figure out the best ways to work together. And I have a lot of experience with cloud and computer technology and Silicon Valley style services, and also with production. So I've been working extensively in trying to bridge that gap, in terms of the understanding, but also in terms of some fundamental tools like I was saying, the open source framework, C4. Kind of like how the web and HTML and all that stuff came about: nobody could go to that level of the internet and create that new economy of the internet until those foundations were in place. So that's what we've been pushing. >> Speaking of foundations, last question before we wrap here. Where are you in this kind of first use case example of the meeting of the minds? How close are you to really getting this facilitated to support what both sides need? >> We've actually been doing a number of production tasks over at ETC. We've shot several short films using these things. So all these things are actually in place and usable today. It's just a matter of getting people to start using them, be aware of them. They're all free and, you know, relatively easy to use for technical people, for Silicon Valley people. And then there's going to be another layer, and that's why we're talking a lot about it: that's going to be the software companies and the hardware companies supporting it. We're pushing it through standards. So it'll be showing up on everybody's radar soon.
And we'll see higher-level integrations, so the digital artists that don't know how to do that lower-level software stuff will just get it for free from the tools they use. And that's kind of what the Avalanche file manager does; it provides a lot of that cloud technology underneath, and you don't have to worry about it, it just looks like a file manager. >> Very exciting. Thanks so much, Josh, for sharing your insights and what you're working on. We look forward to seeing those things coming to the forefront very soon. >> Alright, thank you. >> Thanks for joining us on theCube, and we want to thank you for watching theCube. Again, I'm Lisa Martin, we are live at NAB 2017 in Las Vegas, but stick around, we will be right back.
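For readers curious about what C4's "indelible metadata" rests on: the framework derives an asset's identifier from the bytes of the asset itself, so the same file yields the same ID on set, in editorial, or in the cloud, and metadata keyed to that ID survives any copy or move. The sketch below imitates the idea with a plain SHA-512 digest; the published C4 ID standard is also SHA-512 based but specifies its own base-58 text encoding, which this simplified version does not reproduce.

```python
# Simplified content-derived identifier in the spirit of C4 (not the exact
# standard: real C4 IDs are SHA-512 based but use a defined base-58 encoding).
import hashlib

def content_id(data: bytes) -> str:
    # Identical bytes always produce the identical ID, wherever the file lives.
    return "c4-" + hashlib.sha512(data).hexdigest()[:32]  # truncated for readability

clip = b"\x00\x01stand-in-for-camera-file-bytes"   # hypothetical media bytes
asset_id = content_id(clip)

# Metadata keyed by content ID travels with the media, not with a row number.
notes = {asset_id: ["costume note: stain kept for story continuity"]}
print(asset_id, notes[asset_id])
```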

Published Date : Apr 26 2017


Michael Harabin, Pac-12 Networks | NAB Show 2017


 

>> Voiceover: Live from Las Vegas it's The Cube, covering NAB 2017, brought to you by HGST. (lively music) >> Good morning, welcome to The Cube, I'm Lisa Martin, and we are live at day three of the NAB Show in Las Vegas. Very excited to introduce you to our first guest this morning, Michael Harabin, the VP of Pac-12 Networks. Good morning Michael, welcome to The Cube. >> Good morning, how are you today? >> Very good, very energized. >> Oh good (laughter) >> Day three. So Michael, tell us about Pac-12 Networks, the content arm of the Pac-12 Conference. >> Sure, we have six regional sports networks in the western US, and then one national feed. We also have digital properties and some over-the-top services on Twitter and Facebook Live, so we're involved as we can be in all forms of distribution. We're located in San Francisco; the conference itself is over 100 years old, it was 100 last year. The networks launched four years ago, so this will be our fifth season coming up in August. We're very proud, very happy of our distribution, and our student athletes, and our partnership schools, and it's a great place. >> So you are the first and only sports media company that is owned by its 12 universities. >> That's right, so the SEC is partnered with ESPN, and the Big 10 Networks are partnered with Fox, so we're on our own, we stand on our own, and we do the best we can with what we have. >> Give us an idea of the genesis of the network. >> It started with the new commissioner, Larry Scott, on the Pac-12 side. He came in and had a vision for helping the Pac-12 realize what it could be, as opposed to... Being on the West Coast has its disadvantages: our audience size isn't that big, our games start when the East Coast is going to sleep sometimes, so he wanted to get rid of an East Coast bias that existed in collegiate sports, and really make the Pac-12 what it should be. We have the best geography, we have the best schools, we have land in... tech and entertainment, so we have a lot going for us, and I think he brought those things to the forefront and helped put the Pac-12 in a much stronger position than it had been. In the world of licensing content, we leapfrogged the rest of the conferences at the time in our deal with ESPN and Fox for our football and basketball games. With the games that weren't sold to Fox and ESPN, Commissioner Scott thought to create a media company that we would own and control, and that would distribute the rest of the collegiate athletic events that are controlled by the Pac-12. >> So you mentioned basketball, football, you do big events, but you also do small events. Give us an idea of what it's like to produce a big event in the fall, a big football event, versus some of the smaller Olympic sports like field hockey. >> Sure. We have our three seasons: fall, winter, and spring. Obviously winter has the mostly indoor sports, but in the fall we kick off big with our football season, and there's 12 or 13 weeks, and we have a championship game in early December, which is a big event. That's one of the reasons the Pac-10 went to the Pac-12; the NCAA says if you have 12 football teams, you can have a championship game. >> Okay. >> If you have less than 12, whoever has the best record is the winner, so we added two schools, and we have a champ game; those media rights were sold to Fox and ESPN, so it was a nice deal for us. So we start off with football; those are more traditional productions that everybody's used to.
Big 53-foot truck pulls up, we do our production complement with seven, eight or nine cameras depending on the game, depending on the market, depending on the week, the time of broadcast. We usually get, we choose our games after Fox and ESPN choose theirs, so sometimes we get good games, sometimes we don't. They're all good; they're all Pac-12 games, so they're all good. But those are very traditional productions that are done in very traditional methodologies that everyone would see. As we start getting into basketball, those too are typical productions, but the volume of basketball games is such that we found a new way to do those games a little bit less expensively than the others. >> So less resources? >> Yeah. And then of course the spring sports, where you're into baseball and softball, track and field. Track and field is a very expensive sport to produce because there's a lot going on at any one time. In that way, we've gotten away from video as a means of transmission and done IP transmission, which saved us a lot of money, and as we've got that IP path between our schools and ourselves, we've learned to do new things with it. We're doing content sharing back and forth, advanced production techniques, multiple camera paths that we normally wouldn't have on a production of that size. All of our shows, no matter where they are or what sport they are, are produced in 5.1 surround sound, so we think we lend a lot to the smaller sports that get smaller audiences, but we think we put a lot of production value into them to do the athletes and the sport justice. >> Talk to us about the underlying technologies that are necessary to support going from video to IP so that you can really open up the types of content and where it's distributed. >> Right, so one of the difficulties: we have around 100 venues in the 12 schools that we have to be able to broadcast from. It depends on the university; at Stanford, those soccer and lacrosse fields could be way out. They call the campus 'the farm' for a reason. There's a lot of acreage there to cover. And some of our venues aren't even on campus. UCLA football is at the Rose Bowl, USC is at the Coliseum, so we had to find a way to get away from video, which is just a single path and costs a lot. We needed more bidirectional service, we needed something that was secure and had really low latency, so that when we did our productions and we did the coaches' interviews afterwards, it's basically like a phone call. We also provided internet services to the production, because everybody needs internet connectivity. The Chyron people, whomever. The crew itself, just for checking in and their report times and things like that, and we also provide four-digit extension dialing for our in-house phone systems. It's a very efficient and cost-effective way for us to do our production out there, and provide this suite of services that, if I was just using a video circuit, I wouldn't have access to unless I paid extra for it. >> So presumably, creating a ton of content, how do you maintain all this content and be able to retrieve things, be able to livestream, have things on demand? What's the underlying archival storage strategy? >> So we produce 850 events throughout a year, and, just to give you an idea, I think the Big 10 and SEC are around 400, 450. We have a lot of volume going on, and we do a very good job, I think, of archiving that, logging those games, adding metadata, as much metadata as we possibly can to it.
Including repurposing the closed caption files; we attach that as data, we get articles, stills, whatever we can gather about that particular game, we add it as metadata, and then we archive that. We keep it on very fast, short-term storage in our building, on spinning disk, and after it ages, after about the second season, we push it into the Amazon cloud. It goes right into Glacier if it's that old, but immediately when we do a game, we push it up to S3 in Amazon, where we share and monetize our content at that point, and then from there it just goes to Glacier. So we have, we think, a very efficient workflow, it's highly automated, and we have a great media management department that does a terrific job with very few people, very scarce resources; they do what I think is one of the best jobs in the industry in terms of saving that content in an effort to monetize it in the future. So if you can find it and search through it and get clips from it, it's going to be that much more valuable for us. >> So one of the prevailing things that we've been hearing all week, and not just here, is the democratization of content. The audience, we're very much empowered, right? As viewers of anything we want: we're binge-watching, we're streaming, we're time-shifting, we're sharing it on social media. What is the process that Pac-12 Networks goes through to understand your audience as well as you can, to deliver them the experience that you think they want? >> We have the data that comes back from our TV Everywhere product; there are OTT platforms that we can gather up and sift through. We've undertaken a fan engagement project to work with our universities on the types of people who attend their football games or their sporting events, as a way of better understanding who our audience is and tailoring our programming to that. Understanding who they are, what their preferences are, will help us, I think, to fine-tune the kind of content we put in front of them. Everybody loves a winning team, and you have no problem filling seats or getting an audience when your team is winning, so we understand that; we just want to be better during those times where the team might not be undefeated, so we'd like to get people in there anyway. It's a challenge for us, it really is. >> What about this concept of original content? You're now producing original content. There are three shows? >> Yes. We have some anthology shows, The Drive and All Access, during football and basketball season, that give a behind-the-scenes look akin to the HBO shows that look at professional sports. We go behind the scenes, and the stories for some of our athletes and some of our teams are quite compelling, and it makes good television. That also gets supported by our shoulder programming for our live events: pre- and post-game SportsCenter-type shows that we do, and we try to do live halftimes that are topical for every one of our sports events that are played, so that's a lot of volume, a lot of churn that goes through a small studio in a small facility. We think it helps the live events look better; I mean, live events are what people are tuning in to watch. You can't fast-forward through a sporting event, which advertisers just love; you kind of have to consume it in the moment, unless you can keep yourself away from the internet or your phone for a few hours until you get a chance to watch the game. We think being in live sports is a really special place to be, because you can't fast-forward through it.
Any support that we give those live events, that's really what the other original content is geared to: to build interest in those teams and those events, and attract people to them. >> So you have this concept of TV Everywhere. Original content, traditional content, how is the cloud helping the Pac-12 Network to really collaborate across all the content, all of the connected fans, wherever they are? >> Sure. Just to make a distinction, we have TV Everywhere, which is the authenticated platforms that our cable providers use, and we have our own digital properties as well that still need to be authenticated, and then there are the over-the-top platforms like Facebook Live that get everything but the 850 events that go on the air. So behind the scenes, sideline reporters in the locker rooms, whatever else we can produce, pep rallies, anything that we think could be compelling content for Facebook Live, we do. On Twitter, we've licensed out the 851st event and beyond, so we do some very limited productions, but still quality, that get distributed on Twitter. So that's kind of the split. TV Everywhere is basically the high-end product, and then there are these kind of ancillary second-screen experiences, whatever you want to call them, that don't need to be authenticated, that anybody can pick up and watch. That's how we make that distinction. I'm sorry, what was the second part of that question? >> How does cloud help collaboration? >> So we were really early adopters of producing those streams ourselves, with Elemental Technologies, who is a wonderful vendor and partner of ours; they're now owned by AWS, I point over there, they're somewhere in the building. >> (laughs) >> We were a big early adopter of their technology. We've really tried to strive for a business partnership with our vendors, rather than just a check-writer, check-casher relationship, which doesn't do us well, we don't think. We developed this relationship with them, and they helped us deliver our mezzanine streams to Akamai and distribute from there, but we do that encoding in-house on their equipment. Eventually I think we'll move that to the cloud and get it all virtualized, but for right now we run their servers in our house, and they understand that we would like to get it out as quickly as we can at some point, but we're working on emptying our CER as fast as we can; I don't want any blinking lights in my CER if I can get there someday, but that's a dream. >> So last question, we just have about thirty seconds left. You're in San Francisco, >> Yep. >> With a really cool opportunity: sports, entertainment, technology. When you're looking for young talent who could potentially be swayed by the big Googles of the world and Facebook, what is really unique and cool about working with Pac-12 Networks? >> For us, it's a two-edged sword. We love being in San Francisco; it gives us access to young people, a new way of thinking, different technology companies that are more IP/IT centric than TV centric. So we think that gives us a real advantage. The other edge of the sword is that we lose a lot of network engineering especially, systems engineers, to the tech companies; they would prefer to work at Uber or LinkedIn, something like that. TV's kind of a dying tech; you have to jazz it up a little bit to gain their interest. >> But it's evolving based on what you're talking about-- >> It is.
It's very much that the skillset of an old-time TV engineer is becoming less and less important than network engineering or systems engineering skillsets; those are what we really look for. If somebody has a Cisco certification, he gets our, or she gets our, interest, rather than just 'I've worked in television for 20 years,' because we know which direction we're going in. >> One of the things that you've articulated as we wrap things up here is that every company in this day and age is a tech company, so we wish you the best of luck. You said you've been at this show for 30 years. >> 30 years. >> I can't even imagine all the things that you've seen. Michael Harabin, thank you so much for joining us on The Cube. >> Thank you very much, it was a pleasure being here. >> We want to thank you for watching, we are live from NAB in Las Vegas. I'm Lisa Martin, stick around, we'll be right back. (techno music)
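A sketch of the archive step Michael describes, pushing each game to S3 immediately with as much searchable metadata as possible. The bucket, key, and fields below are invented for illustration, not Pac-12's actual schema; S3 user-defined object metadata is also capped at about 2 KB, so a real workflow would keep the full captions, logs, and articles in a search index and store only pointers on the object.

```python
# Hypothetical upload of a finished game master plus searchable metadata.
# Bucket, key, and metadata fields are illustrative, not Pac-12's schema.
import boto3

s3 = boto3.client("s3")
with open("mbb_ucla_vs_usc.mxf", "rb") as master:   # placeholder file name
    s3.put_object(
        Bucket="example-pac12-archive",
        Key="basketball/2016-17/mbb_ucla_vs_usc.mxf",
        Body=master,
        # User-defined metadata is small (~2 KB cap), so store pointers here
        # and keep full captions, logs, and articles in a separate index.
        Metadata={
            "sport": "basketball",
            "season": "2016-17",
            "caption-file": "captions/mbb_ucla_vs_usc.scc",
        },
    )
```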

Published Date : Apr 26 2017

SUMMARY :

covering NAB 2017, brought to you by HGST. I'm Lisa Martin, and we are live at day three The content arm of the Pac-12 Conference. Sure, we have a six regional sports networks So you are the first and only sports media company and we do the best we can with what we have. We have the best geography, we have the best schools, in the fall; a big football event, versus some of the but in the fall we kickoff big with our football season, and we have a champ game; those media rights were sold paths that we normally wouldn't have on a production so that you can really open up the types of content Right, so one of the difficulties- we have around 100 to Glacier, so we have, we think, a very efficient workflow, So one of the prevailing things that we've been hearing We have the data that comes back from our TV Everywhere What about this concept of original content? SportsCenter-type shows that we do, and we try to do helping the Pac-12 Network to really collaborate across and beyond, so we do some very limited productions, So we were really early adopters of producing those that we would like to get it out as quickly as we can potentially be swayed by the big Googles of the world The other edge of the sword is that we lose a lot of But it's evolving based on what you're talking about-- One of the things that you articulate as we wrap I can't even imaging all the We want to thank you for watching, we are live from


Erik Weaver, HGST - NAB Show 2017 - #NABShow - #theCUBE


 

>> Narrator: It's The Cube. Covering NAB 2017. Brought to you by HGST. >> Hey welcome back everybody, Jeff Frick here with The Cube. We're at NAB 2017. It's not only 100,000; it's 102,000 people, according to the official press release, talking about media and entertainment and technology. That theme is actually MET, as the technology is so intimately tied to media and entertainment that you can't separate them out anymore. We're really excited for our next guest. He is right in the heart of it. He's in his happy place. He's leading the whole contingent here. It's Erik Weaver. He's the global director of media, entertainment, and market development for HGST. Erik, welcome. >> Thank you so much. Glad to be here today. >> So first impressions of the show. I'm sure you've been here a 1000 times. It's crazy. >> Yeah, no, it's really amazing. It's always a wonderful show. There's so many great people here really trying to get an understanding of what's coming up, what's going to solve their problems that they're facing right now. >> And the problems keep getting bigger because people want more. I mean, it's amazing, you walk around, the level of gear and equipment. Some of the green screen setups here, they look like professional studios. And now we've gone from HD to 4K to 8K to ultra HD. We've got 360 cameras. Little commercial ones by Samsung and professional grade ones. That's only going to increase the complexity of trying to manage all this stuff. >> Absolutely, it's really becoming a reality now that 4K and UHD are coming down the pipe. I think I heard some number that 56% of all sets will be that by 2020. And it's really great because you'll see the creative community starting to embrace HDR or UHD, because they have never seen it before, and until they go into the color suites and see the difference, they're absolutely blown away. So you're going to have a drive here. You're going to have a drive between the director saying, this is what I want, and this is my look, and the camera or the TV set saying, this is what we can produce in theaters and what we can produce. >> Right, we didn't even talk about VR or AI. >> And VR and AI absolutely are some of the hottest topics out there right now. Trying to comprehend. You're also seeing a big shift from 360 video to photogrammetry and computational photography and these things. Volumetric capture. And those things are really going to be taking over in the next couple years, and they are huge in understanding how they work for everyone. >> Okay, so you dropped a couple new vocabulary words. I have to have you dig in a little deeper. >> Alright, so volumetric. >> Photogrammetric first? >> Photogrammetry. Photogrammetry. So what photogrammetry is, is recreating a room with photographs by stitching them together. So for example, I worked on a piece called Wonder Buffalo, and in Wonder Buffalo we basically took 956 photographs of a room and then stitched them together at 50 megapixels each and created this whole new room environment. You combine that with what's called volumetric capture. So instead of 12-24 cameras pointing out, where you're stuck in a locked position, which is a traditional 360 video, you're now doing 36 cameras pointing in, and those 36 cameras are doing an almost-hologram. The big difference here is now, all of a sudden, you feed it into a gaming engine, like Unity, and you can walk around and explore the entire scene. So it's the closest you've ever seen to the Holodeck, from maybe Star Trek or something. >> Right. >> It's really quite an amazing experience.
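As a rough sketch of the stitching step Weaver describes, combining many overlapping photographs of a room into one composite, here is a minimal example using OpenCV's high-level stitcher. The input file pattern is hypothetical, and a real photogrammetry pipeline recovers 3D geometry rather than a flat panorama; this only illustrates the image-alignment idea.

    import glob
    import cv2  # OpenCV; pip install opencv-python

    # Hypothetical input: overlapping photos of the room, like the 956
    # shots mentioned above (a handful is enough to show the idea).
    images = [cv2.imread(p) for p in sorted(glob.glob("room_photos/*.jpg"))]

    # The stitcher finds matching features across frames, estimates each
    # camera's position, and warps the photos into a single composite.
    stitcher = cv2.Stitcher_create()
    status, composite = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("room_composite.jpg", composite)
    else:
        # The usual failure mode is too little overlap between neighbors.
        print("Stitching failed with status", status)

Feeding the reconstructed environment into a game engine such as Unity, as he describes, is what then lets a viewer walk around the scene.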
>> Now on the other side of the equation, on the simpler side, you know, you've got a lot of independent film makers who now have YouTube and Vimeo and all these distribution platforms, and you know, I'm a huge Casey Neistat fan. You know, he's got his little $2000 camera and he's out shooting and getting tremendous views, so the focus on audience and storytelling and sort of the democratization of distribution is another huge trend. >> Absolutely. Really big. What's fascinating about something like YouTube is YouTube wasn't possible a couple years ago. Something like the cloud made YouTube possible. If you historically look back, you'll see a parallel with electricity: until Niagara Falls was there, we didn't have the ability to have electricity in such volumes. And so one of the breakthrough cases might have been Alcoa, who produced aluminum. They were burning, tearing down whole forests to put together furnaces that could burn hot enough to make it. Once they had cost-effective electricity, they could do this. The same situation applies to someone like YouTube. They can scale at a level that we've never seen before and that was never possible. >> Right. >> So it opens up whole new opportunities of democratization of video. >> Right. >> Absolutely amazing new tools. >> And then obviously cloud, right? Cloud is changing the world. The big cloud providers like Amazon and Google and Microsoft, and a ton of second-tier service providers. But the knock on the cloud for big assets is the speed of light is too damn slow, you know; getting stuff up and down is a pain. And also, you know, that's where you really wanted a big machine with local horsepower. >> So. >> But now you've got rendering, all this huge stuff that needs massive scale that your little machine can't do anymore. >> So a big confusion a lot of people have in cloud is they think about taking their current data center and lifting and shifting it to the cloud. That doesn't work. You have to reimagine how the whole structure works. What do you put up there? Why do you put it up there? Are you using a proxy? Are you using some kind of hybrid workflow to maximize the benefit? Because if you're just dumping something up there and expecting to bounce it back and forth, you're right, speed of light and other things are going to kill you. >> Right. >> But there's other ways out there to leverage that. Principles such as IOA, Interconnection Oriented Architecture. So placing your storage or your centralized data lake at an Equinix or some kind of colo facility, where you can centrally leverage it, and then working off proxies. Most people don't know that when you're working in your color suite, almost all the time you're still working off proxies, because you cannot see all those bits, or we cannot get all the bits to the monitors >> Right, right. >> That we have. So learning how to create the proper workflow there is absolutely critical, and will save you a fortune if you know what you're doing. >> Right. >> Or go to the right people to show you how to do that properly. >> So it's really use the best attributes of both as much as you can. >> Yes, you have to figure out how to use the best attributes of both. >> So the other kind of knock on too much tech in this business is sometimes the storytelling gets lost. And I know because I have a personal pet peeve on a lot of these big huge cinematic explosions, that they could still have a story. >> Yes, yes.
>> So, you know, I think that having a narrative is still so important. Is that lost? Is that enhanced? How do you see that integrating with the tech? >> So, I think it's absolutely critical. I saw Spielberg speaking at USC a little while back, and he was like, story, story, story. Tech is simply there to empower the story. And if you lose sight of that, you're absolutely lost. It really is the truth. So for example, I have two shorts out right now, one's at Tribeca, one's at South by Southwest, but we focused on the story. Although it's an R&D research project, you have to have a story. >> Right, right. >> That's the only way to move this thing forward. And if you don't have that, everything else is lost. >> Right. Now the other great thing that's happened with cloud and cheaper storage and all these advanced infrastructure components is now you can keep everything. >> Yes. >> Data is no longer a liability that is expensive to hold and manage, where you've got to figure out what you're going to throw away because it's too expensive. Now people finally understand, it is an asset. So it opens up all types of opportunities to store it and do things with it. >> And you're seeing a lot of this shift from tape to object and other things like that, because they want to monetize this content. There are so many new mechanisms to monetize content, between Netflix and the other distributors, Amazon, and everyone else, that they are realizing this is not just an asset for the closet that you might someday use or sell in some broad agreement to some secondary station in Europe, or somewhere else. These are things that you can monetize on a regular basis. But that actually brings you the next problem: understanding what you have. >> Right, right. >> People get very confused. They assume that there is one film. There's not one film. There are about 120 versions of the films that are released. Between the versioning, such as for culturally sensitive areas like the Middle East, to different language titles, to different ad pieces or other inserted parts, there are a lot of different versions of a film. >> Right. >> And so people don't always understand that. >> And that's interesting, but the other knock on film or video traditionally, from a metadata point of view, and a search and consumption and discovery point of view, is: if I search for a picture and I find the one that I'm looking for, I immediately know that's the one that I want. But if I want to find something that's seven minutes into an hour-long video, how do I find it? How do I consume it? How do I share it? That's an age-old problem with this media type. >> So, part of the problem there is that we have not broken down metadata tagging in each of these pictures and these pieces. This is coming. I actually helped ABC build a tool that created X-Ray-like features, like Amazon has, for production sites, so they could scour and tag all these pieces and begin to say, this is an action scene with this character in it, at this point in the movie. That is coming, probably a year to a year and a half out. But all of those things will begin to evolve very very soon. >> Right. Certainly a great application for AI. >> Yeah, AI is absolutely hot as well, and this is what the studios are trying to get their hands on right now. >> Right. >> People like Netflix have really pioneered some of this work, and it originally was to understand how to find content, or what kind of content people like, so they could begin to produce content that was relatable to their audience.
They've now moved it into things like QC'ing, because they are the largest studio in the world at this point. Over 1000 hours. >> Are they the largest studio in the world? >> Netflix is the largest studio in the world right now. >> Wow, I didn't know that. >> So they're doing over 1000 hours, I think, a season, at this point. >> Amazing. >> But the studios are really trying to, are really doing a lot of work to get their hands on some of this, and so there's a lot of really great, high-level, private meetings going on that's bringing these industry leaders together. ETC is a wonderful place to see that. They talk about these innovations. >> So you're in the middle of it all. You've been doing this for a long time. What are some of your priorities for 2017, and what are some of the things that still just get you up in the morning right now, that you're excited about? >> So, absolutely, my priority is going to be cloud. Over the last year, 18 months, it's been a massive shift. Before, it was all no, no, no. And I actually heard this exact quote from somebody at one of the major studios. He said, "It used to be no, no, no, you better have a darn good reason, to now yes, yes, yes, you better have a darn good reason not to." >> Right, to say no. >> Number one, very hot, very on board. The next one, again, is VR/AR, understanding how VR/AR is going to begin to change our lives and produce things. I wasn't originally a big fan of that, I thought of it as kind of 3D, but then I went to USC's VR LA meeting, and there were over 600 students in this group, and every single school was represented. Medical, architectural, journalism. These students understand that this is going to touch everybody. I don't know if you ever really got into genuine good content. Someone like Nonny de la Peña does stuff that leans more towards the journalistic. For example, she did a piece in San Diego, and it's a very terrible rendering, but the audio is good, and you see a man being beaten by the police and people calling out saying, "Stop, stop, stop." And you've never felt it so emotionally in your life. This is like, bam. It hits you. >> The VR part of it, or just that she had great content? >> The VR part of it and the context. >> Okay. >> Of telling a story and what's going wrong with the story. This is going to affect us in a different way, and it might not just be clip pieces for TV shows, but it's going to be touching us in a lot of different ways. >> Right. Right. >> Very powerful stuff. >> We talk a lot about the AR. I think the AR piece from a commercial point of view is tremendous too. >> It's absolutely a bigger market. So what's really going to be biggest is mixed reality, or MR. MR is going to come in and it's going to fade you between the two things. So, that is really where it's going to meet in the middle. >> You distinctly called out the differentiation between VR and 360. >> Yes. >> How do you split those? >> So when you look at it, if you're looking at 360 video, that's a camera rig stuck in one particular location; it's got 12, 24, 36 cameras all pointing outward, and when you're watching that, you're stuck in a location. You're hostage, in more of a traditional film way, to where within that 360 scope they want you to be, from one spot. When you look at volumetric capture, volumetric capture is the opposite. It allows you to walk around, choose your own point of view, be wherever you want to be within that scene.
So that's where we're going to be going; it's going to be much more like the Holodeck from Star Trek. >> Right. >> Very amazing stuff. >> Alright, well Erik, thank you for taking a few minutes. Congrats. I'm sure you're going to be busy, busy, busy for the next three days, so, >> I know. >> So thank you for taking a few minutes with us on The Cube. >> No problem, thank you so much. >> Alright, he's Erik, I'm Jeff Frick. You're watching The Cube from NAB 2017, and we'll be back after this short break. Thanks for watching. (upbeat techno music)
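Weaver's point about X-Ray-style metadata, tagging who and what appears at each moment so that an hour-long video becomes searchable, can be illustrated with a small sketch. The tag structure, names, and timestamps here are hypothetical, not the actual tool he describes helping ABC build.

    from dataclasses import dataclass, field

    @dataclass
    class SceneTag:
        start_sec: float          # where the scene begins
        end_sec: float            # where it ends
        characters: list = field(default_factory=list)
        labels: list = field(default_factory=list)   # e.g. "action"

    # Hypothetical index for one title; a real system would generate
    # these tags with vision and speech models rather than by hand.
    index = [
        SceneTag(0, 95, ["Rivera"], ["dialogue"]),
        SceneTag(95, 260, ["Rivera", "Chen"], ["action"]),
        SceneTag(420, 515, ["Chen"], ["action", "chase"]),
    ]

    def find_scenes(index, character=None, label=None):
        """Return (start, end) spans matching a character and/or label."""
        return [(t.start_sec, t.end_sec) for t in index
                if (character is None or character in t.characters)
                and (label is None or label in t.labels)]

    # "An action scene with this character in it, at this point in the movie":
    print(find_scenes(index, character="Chen", label="action"))
    # -> [(95, 260), (420, 515)]

Once every version of a film carries an index like this, the seven-minutes-into-an-hour problem he raises becomes an ordinary query.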

Published Date : Apr 24 2017



Bradley Wong, Docker & Kiran Kamity, Cisco - DockerCon 2017 - #theCUBE - #DockerCon


 

>> Narrator: From Austin, Texas, it's theCUBE covering DockerCon 2017, brought to you by Docker and support from its ecosystem partners. (upbeat music) >> Hi, and we're back. I'm Stu Miniman, and this is SiliconANGLE's production of the Cube, here at DockerCon 2017, Austin, Texas. Happy to have on the program Kiran Kamity, who was CEO of ContainerX, which was acquired by Cisco. And you're currently the senior director and head of container products at Cisco. And also joining us is Brad Wong, who is the director of product management at Docker. Gentlemen, thank you so much for joining us. >> Brad: Thanks for having us. >> Kiran: Thank you, Stu. >> So Kiran, talk a little bit about ContainerX. You know, bring us back to why containers, you know, why you helped start a company with containers, and what it was like to be acquired by a big company like Cisco. >> Yeah, it was actually late 2014 when Pradeep and I, my co-founder from ContainerX, started brainstorming about, you know, what do we do in the space, and the fact that the space was growing, and my previous company called RingCube, which was sold to Citrix, where we had actually built a container between 2006 and 2010. So we wanted to build a management platform for containers, and in a way there was a little bit of an overlap with Docker Datacenter, but we were focusing mostly on the tenancy aspects of it. Bringing in concepts like VMware DRS into containers, et cetera. And we were acquired by Cisco about eight months ago now, and the transition in the last eight months has been fantastic. >> Great, and Brad, it's your first time on the cube, so give us your background. What brought you to Docker? >> Yeah, so actually before Docker I was, actually, a veteran of Cisco, interestingly enough. Many different ventures in Cisco; most recently I was actually part of the Insieme Networks team, focusing on software-defined networking and Application Centric Infrastructure. Obviously I saw a pretty big trend in the infrastructure space, that the future of infrastructure is being led by applications and developers. With that I actually got to start digging around with Docker quite a lot, found some good interest, and we started talking, and essentially that's how I ended up at Docker, to look at our partner ecosystem and how we can evolve that. Two years ago now, actually. >> I think two years ago Docker networking was a big discussion point. Cisco's been a partner there, but bring us up to speed, if you would, both of you, on where you're engaging, on the engineering side, customer side, and the breadth and depth of what you're doing. >> You're right, two years ago networking was in quite a different place. We kicked it off with acquiring a company back then called SocketPlane, which helped us really define-- >> Yeah and we know actually, ---- and ----, two alums, actually I know those guys; from the idea to starting the company to doing the acquisition was pretty quick for you and for them. >> Right, and we felt that we really needed to bring on board good, solid networking DNA into the company. We did that, and they helped us define what a successful model would be for networking, which is why they came up with things like the container networking model, and libnetwork, which then actually opened the door for our partners to start creating extensions to that, and be able to ride on top of that to offer more advanced networking technologies like Contiv, for example.
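As a minimal sketch of what riding on that plugin model looks like from the Docker side, here is an example using the docker-py client: create a network backed by a third-party driver and attach a container to it. The driver name "vendor/netplugin" and the subnet are placeholders for whatever plugin is actually installed, not Contiv's real plugin tag.

    import docker  # docker-py; pip install docker

    client = docker.from_env()

    # A network backed by a third-party driver that plugs into libnetwork.
    ipam = docker.types.IPAMConfig(
        pool_configs=[docker.types.IPAMPool(subnet="10.1.0.0/24")]
    )
    net = client.networks.create(
        "app-tier",
        driver="vendor/netplugin",  # placeholder driver name
        ipam=ipam,
    )

    # Containers joined to the network get their address from the driver,
    # which is how a plugin can give each container an externally
    # addressable IP, as discussed below.
    web = client.containers.run(
        "nginx:alpine", detach=True, network="app-tier", name="web-1"
    )
    print(client.networks.get("app-tier").attrs["Containers"])

The same create call with the default bridge driver works out of the box; swapping in a vendor driver is the extension point the container networking model defines.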
>> Contiv was actually an open source project that was started within Cisco, even before ContainerX was acquired. Right after the acquisition happened, that team got blended into our team, and we realized that there were some real crown jewels in Contiv that we wanted to productize. We've been working with Docker for the last six months now trying to productize that, and we went from alpha to beta to GA. Now Contiv is GA today, it was announced in a blog post today, and it's actually a 100% open-source networking product that Cisco TAC and Cisco advanced services have offered commercial support and services support for. It's actually a unique moment, because this is the first 100% open-source project that Cisco TAC has actually offered commercial support for, so it's a pretty interesting milestone, I think. >> I think also with that, we also have it available on Docker Store as well. It's actually the first Docker networking plug-in that's been certified as well. So we're also pretty happy to have that on there. >> Yeah. >> Anything else for the relationship we want to go into beyond those pieces? >> We also saw that there were a lot of other great synergies between the two companies as well. The first thing we wanted to do was to look at how we can also make it a lot better experience for joint customers to get Docker up and running, Docker Enterprise Edition up and running, on infrastructure, specifically on Cisco infrastructure, so Cisco UCS. So we also kicked off a series of activities to test and validate and document how Docker Enterprise Edition can run on Cisco UCS, Nexus platforms, et cetera. We went ahead with that, and a couple months later we brought out, jointly, two Cisco validated designs for Docker Enterprise Edition. One on Cisco UCS infrastructure alone, and the other one jointly with NetApp as well, with the FlexPod solution. So we're also very very happy with that as well. >> Great. Our community, I'm sure, knows the CVDs and what they are out there. UCS was originally designed to be the infrastructure for virtualized environments. Can you walk me through what significant differences there are, or anything changing, in the move to containers versus UCS for virtualized environments? >> The goal with that: UCS is essentially considered a premium server infrastructure for our customers. Not only can they run virtual environments today, but our goal is, as containers become mainstream, containers evolve to being a first-class citizen alongside VMs. We have to provide our customers with the solution that they need. And a turnkey solution from a Cisco standpoint is to take something like a Docker stack, or other stacks that our customers adopt, such as Kubernetes or other stacks as well, and offer them a turnkey kind of experience. So with Docker Data Center, what we have done is, the CVD that we've announced so far has Docker Data Center, and the recipe provides an easy way for customers to get started with UCS and Docker Data Center so that they get that turnkey experience. And with the MTA program that was announced today at the keynote, that allows Cisco and Docker to work even more closely together to have not just the products, but also provide services to ensure that customers can completely, sort of, get started very very easily with support from advanced services and things like that. >> Great, I'm wondering if you have any customer examples that you can talk through.
If you can't talk about a specific logo, maybe you can talk around it. Or if there are key verticals that you see that you're engaging first, or whatever you can share? >> We've been working on joint customer evals, actually a couple of them. Once again, I don't think we can point out the names yet. We haven't fully disclosed, or cleared it with their PRs. Definitely in the financials. Especially the online financials; a significant company that we've been working with jointly has actually adopted both Contiv, and is actually seeing quite a lot of value in being able to take Docker and also leverage the networking stack that Contiv provides. And be able to not just orchestrate networking policies for containers, but the other thing that they want to do is to have those same policies be able to run on cloud infrastructure, like AWS, for example. So they obviously see that Docker is a great platform to enable their portability between on-premises and also public cloud. But at the same time, be able to leverage these kinds of tools that make that transition, and make that move, a lot easier, so they don't have to re-think their security networking policies all over again. That's actually been a pretty good use case, I thought, of the joint work that we did together with Contiv. >> Some of the customers that we've been talking to, in fact, we have one customer, that I don't think I'm supposed to say the name of just yet, that has rolled out Contiv with the Docker runtime in five production data centers already. And these are the kind of customers that actually take to the advanced networking capabilities that Contiv offers, so that they can do comprehensive L2 networking, L3 networking. The monitoring tools that they currently use will be able to address the containers, because the L2, L3 networking capabilities allow each container to have an IP address that is externally addressable, so that the current monitoring tools that you use for VMs, et cetera, can completely stay relevant and be applicable in the container world. If you have an ACI fabric, that continues to work with containers. So those are some of the reasons why these customers seem to like it. >> Kiran, you're relatively new to Cisco, and you came from a software company. Many people still think of Cisco as a networking company. I've heard people be derogatory, like, "Oh, they made hardware-defined networking when they rolled out some of this stuff." Tell us about, you talk about an open source project that you guys are doing. I've talked to Lou Tucker a number of times. I know some of the software things you guys are doing. Give us your viewpoint on your new employer, and how they might be different than people think of as the Cisco that we've known for decades. >> Cisco, of course, has, you know, several billion dollars of revenue coming in from hardware and infrastructure. And networking and security have been the bread and butter for the company for many many years now. But as the world moves to Cloud-Native becoming a first-class citizen, the goal is really to provide complete solutions to our customers. And if you think of complete solutions, those solutions include things like networking, things like security, including analytics, and complete management platforms. At the same time, at the end of the day, the customers want to come to terms with the fact that this is a multi-cloud world. Customers have data centers on premises, or in hosted private cloud environments.
They have workloads that are running on public clouds. So with products like CloudCenter, our goal is to make sure that whatever applications they have can be orchestrated across these multiple clouds. We want to make sure that the pain points the customers have around deploying whole solutions include easy set-up of products on the infrastructure that they have, and that includes partnerships like UCS, or running on ACI or Nexus. We want to make sure that we give that turnkey experience to these customers. We want to make sure that those workloads can be moved across and run across these different clouds. That's where products like CloudCenter come in. We want to make sure that these customers have top-grade analytics, which is completely software. That's where the AppDynamics acquisition comes in. And we want to make sure that we provide that turnkey experience with support in terms of services, with our massive services organization, partners, et cetera. We view it as our job to provide our customers what they need in terms of the end solution that they're looking for. And so it's not just hardware; that's just a part of it. Software, services, et cetera, complement it. >> Alright, Brad, last question that I have for you: in the keynote yesterday, I couldn't count how many times the word ecosystem was used. I think it was loud and clear that everybody there, I think it was like, you know, Docker will not be successful unless its partners are successful, and kind of vice versa. When you look at kind of the product development piece of things, how does that resonate with you and the job that you're doing? >> We basically are seeing Docker become more and more of a platform, as evidenced by yesterday's keynote. Every platform, the only way that platform's going to be successful is if we have great options for our partners, like Cisco, to be able to integrate with us on multiple different levels, not just in one place. The networking plug-in is just one example. Many many other places as well. Yesterday we announced two new open source initiatives, LinuxKit and also the Moby project. You can imagine that there's probably lots of great places where partners like Cisco can actually play in there, not just in the services piece, but maybe also in things like IoT as well, which is also a fast-emerging place for us to be. And all the way up until day-two type of monitoring, type of environment as well, where we think there's a lot of great places where, once again, options like AppDynamics, Tetration analytics, can fit in quite nicely with, how do you take applications that have been migrated or modernized into containers, and start really tracking those using a common tool set. So we think there's really really good opportunities for our ecosystem partners to really innovate in those spaces, and to differentiate as well. >> Kiran, I want to give you the final word: take-aways that you want the users here, and those out watching the show, to know about, you know, Cisco and the Docker environment. >> I want to let everybody know that Cisco is not just hardware. Our goal is to provide turnkey, complete solutions and experiences to our customers.
And as they walk through this journey of embracing Cloud-Native workloads and containerized workloads, there are various parts of the problem, all the way from hardware, to running analytics, to networking, to security, and services help, and Cisco as a company is here to offer that help, and make sure that the customers can walk away with turnkey solutions and experiences. >> Kiran and Brad, thank you so much for joining us. We'll be back with more coverage here. Day two, DockerCon 2017, you're watching theCUBE.

Published Date : Apr 19 2017



Robert Herjavec & Atif Ghauri, Herjavec Group - Splunk .conf2016 - #splunkconf16 - #theCUBE


 

>> Live from the Walt Disney World Swan and Dolphin Resort in Orlando, Florida, it's theCUBE, covering Splunk .conf2016. Brought to you by Splunk. Now, here are your hosts John Furrier and John Walls. >> And welcome back here on theCUBE, the flagship broadcast of SiliconANGLE TV, where we extract the signal from the noise. We're live at conf2016 here in Orlando, Florida, on the show floor. A lot of activity, a lot of excitement, a lot of buzz, and a really good segment coming up for you here. Along with John Furrier, I'm John Walls, and we're joined by two gentlemen from the Herjavec Group, Robert Herjavec. Good to see you, sir. >> Greetings. Thank you for having us. >> The CEO, and Atif Ghauri is Senior VP at Herjavec. Good to see you, sir. >> Yes. >> First off, Robert, congratulations. Newly married, your defense was down for a change. Congratulations on that. (laughter) >> Oh thank you. It was wonderful. It was a great wedding, lots of fun, but casual, and just a big party. >> Yeah, it was. Looked like, pictures were great. (laughter) People obviously know you from Shark Tank. But the Herjavec Group has been really laser focused on cyber security for more than a decade now. Tell us a little bit about, if you would, maybe just paint the broad picture of the group, your focus, and why you drilled down on cyber. >> Yeah, I've been in the security business for about 30 years. I actually helped bring a product called CheckPoint to Canada: firewalls, URL filtering, and that kind of stuff. And we started this company 12 years ago, and our vision was to do managed services. That was our vision. No other customer's vision, but our vision. And we thought we'd do $5 million in sales in our first year, and we did $400,000. The market just wasn't there. SIEM technology, log aggregation, wasn't what it is today. I mean, I think at the time, it was enVision. What was it called? >> Yeah, enVision. >> enVision. And then RSA bought them. That was really the first go-to-market SIEM. Then you had ArcSight and Q1. So our initial business became around log aggregation, security, writing parsers. And then over time it grew. It took us five years to get to $6 million in sales, and we'll do about $170 million this year. We went from a Canadian company to really a global entity. We do a lot of business in the States, UK, Australia, everywhere. >> But you're certainly a celebrity. We love havin' you on theCUBE, our little Shark Tank in and of itself. But you're also an entrepreneur, right? And you know the business; you've been in software, you've been in the tech business, so you're a tech athlete, as we say. This world's changing right now. And I'm certain you get a lot of pitches as entertainment meets business. But the fact is that the entrepreneurial activity, certainly in the Bay Area and San Francisco, the Silicon Valley, where I live, and all around the world, is really active. Whether you call it programmer culture or just the fact that the cloud is allowing people to start companies, you're seeing a surge in entrepreneurship in the enterprise. (laughs) Which was, like, boring in the past, you know? You just mentioned CheckPoint in the old days, but now it's surging. Your thoughts on the entrepreneurial climate? >> I dunno if the enterprise entrepreneurship element is surging. By the way, I'm going to say intrepreneur, just the way I say it. Cuban always makes fun of me. (laughter) We don't say it like that in America! I'm like, screw off! (laughter) >> That's how you say it!
>> I want to say it the way I want to say it. >> Well, internal entrepreneurs, right? Is that what you mean by intrepreneurship? >> Well, no. I'm just, it's just the way I say it. >> It's a Canadian thing. >> But business to business enterprise, we've always been in the enterprise business. So we're seeing a lot of growth in that area; a lot of VC money's going into that area, because it's more, you know, you can measure that level of return, and you can go and get those customers. But on our show, we're a bubble. We don't do a lot of tech deals like we're talking, because it's boring TV. Tech people love tech; consumers love the benefit of tech. You know, no consumer opens up their iPhone and says, oh my gosh, I love the technology behind my iPhone. They just love their iPhone. And our show is really a consumer platform that is-- >> It's on cable TV, so it's got a big audience. So you got to hit the wide swath-- >> We're one of the highest-rated shows on network television. Eight years, three Emmys. You know, it's a big show now. And what we've all learned is, because Mark Cuban and I are tech guys, we used to look for stuff we know. We don't invest in stuff we know anymore. We invest in slippers, ugly Christmas sweaters, food products, because if you can tap into that consumer base, you're good to go. >> So bottom line, has it been fun for you? I mean, the show has been great. I mean, obviously the awards have been great. Has it been fun for you? What's it been like, what's the personal feeling on being on the Shark Tank? >> You know, filming is fun, and hanging out is fun, and it's fun to be a celebrity at first. Your head gets really big and you get really good tables at restaurants. There's no sporting venue-- >> People recognize you. >> Yeah. >> You get to be on theCUBE. (laughter) >> I get to be on theCUBE. >> Doesn't happen every day. >> You get to go everywhere. But after a while it gets pretty dry. But it really helps our brand. We compete, typically, against IBM, Verizon, and you know, the CEO of IBM, you're not going to see him selling his security. >> Well I know they're doin' a lot, spending a lot of cash on Watson, trying to get that to work, but that's a whole 'nother story. But let's get down and dirty on Splunk. You're here because you're doin' a talk. Give a quick take on what you're talking about. Why are you here at .conf for Splunk? >> Yeah, we're doing a talk on data transformation. The world today is about data. And the amount of data points and access points and the internet of things, it's just exponential growth. The stat I always love, and Atif's heard it 1000 times, is: there's roughly three billion people on the internet today, and there's roughly six billion or seven billion IP addresses. By 2020, according to the IPV committee, there'll be five, six billion people connected. And hundreds of trillions of IP addresses. >> And the IoT is going to add more surface area to security attacks. I mean, it used to be, the old days, in CheckPoint, the moat, the firewall, backdoor, frontdoor. >> The idea of the perimeter is gone now. There is no such thing as a perimeter anymore, because you can access everything. So a lot of work in that area. And all of that comes to data and log aggregation. And what we've seen for years is that the SIEM vendors wanted to provide more analytics. But if you really think about it, the ultimate analytics engine is Splunk. And Splunk now, with their ES module, is moving more into the security world and really taking away market share.
So we're very excited by it; we have a great relationship with the Splunk guys, we see nothing but future growth. >> And you're using Splunk and working with it with your customers? >> We do, we've been using Splunk for a while. We have a private cloud. >> Tell us a little bit about that. >> Yeah, so we eat our own dog food. So not only do we sell Splunk, but we also use it in-house. We've been usin' it for over five years, and it powers our analytics platform, which is a fancy way to say it reduces the noise from all the different clutter, from all the IoT, from all the different types of alerts that are comin' in. Companies need a way to filter through all that noise. We use Splunk to solve that problem for us internally, and then, of course, we sell it and we manage it for Global 2000 customers, Fortune 100 companies all over the world. >> Tell us about the role of data, 'cause data transformation has been a big buzzword; it's a holistic message around businesses digitizing and getting digital assets in front of their customers. We have a big research division that does all of this stuff. At the end of the day, you know, the digitization business means you're going to have to go digital all the way. And the role of data is not the old data warehousing days, where it's fenced away, pull it in; now you need data moving around, you need organic sharing of data, data's driving policies and new pattern recognition for security. How do you guys see that evolving? How do you talk to your customers? Because in a way, the old stuff can work if you use the data differently. We're seeing a pattern, like, hey, that's an algorithm I used 10 years ago. But now, with new data, that might be workable. What are some of the things that you're seeing now that customers are doing, that you talk to, that are leveraging data, like Splunk, in a new way? >> Well, that's really where Splunk adds so much value, because a friend of mine is the dean of USC. And he has a great saying: more data is not necessarily more information. And so, the mistake that we see customers making a lot is they're collecting the data, but they're not doing the right things with it. And that's really where Splunk and that level of granularity can add tremendous value, not just from logging, but from analytics and going upstream with it. >> Yeah, and also, to that point, it's just automation. There's too much data. >> That's a great point. >> And it's only going to get bigger, right, based on that stat Robert rattled off. Now, we need some machine learning analytics to move it further. And all points aside, machine learning isn't where it needs to be right now. Today in the market, it still has a long way to go. I would call it a work in progress. But however, it's the promise, because there's too much data, and to secure it, to automate behavior, is really what we're looking for. >> The example I see is that innovation keeps expanding the attack surface: growin' with mobility, growin' with cloud, increasing the surface area, IoT. But the supervised areas of the enterprise were the doors, right? Lock the doors. And the perimeter is now dead. So now you have an unsupervised environment and the enterprise at risk. Once the hackers get in, they're havin' their way. >> The internet is, like, a kindergarten playground where there are no rules and the teacher went home at lunch. (laughter) That is the internet. And kids are throwin' crap. >> And high school. I think it would be high school. Kindergarten through high school!
>> And you have different-aged kids in there. >> It's chaos, bedlam! >> Very well said. The internet is chaos, but by nature, that's what we want the internet to be. We don't want to control the chaos, because then we limit our ability to communicate, and that's really the promise of the internet. It's not the responsibility of the internet to police itself; it's the responsibility of each enterprise. >> So what new things are happening? We're seeing successes. Certainly, the companies we're reporting on that are being successful are the ones doing the reverse of what was once done, or, said differently, new ways of doing things. Throwin' out kind of the hybrid legacy approach to security, and seeing the new ways, new things, better cat-and-mouse games, better honeypots, intelligent fabrics. What do you guys recommend to your customers, and what do you see? In your talk, this digital transformation's definitely a real trend, and security is the catastrophic time bomb that's ticking for all customers. So that's, it dwarfs compliance, risk management, current... >> Well, I dunno if that's necessarily true, that it's a time bomb. You know, the number one driver for security, still, is compliance. We sell stuff people don't really want to buy. Nobody wakes up in the morning and says, yeah, I want to go spend another $5 million on security. They do it, frankly, because they have to. If none of their competitors were spending money on security, I don't think most enterprises would. I mean, whenever you have to do something because it's good to do, you have a limited uptake cycle. When you do something because there's a compliance reason to do it, or bad things happen to you, you're really going to do it. >> So you think there's consumer pressure, then, to have to do this, otherwise-- >> Interesting stat: the Wall Street Journal did a study and asked 1000 people on a street corner in New York if, for a hamburger, they would give away their social insurance number, their home number, and their name. 72% of people gave out that information freely. >> Better be a good hamburger. (laughs) >> Back to your point, though, I want to get a-- >> So I think consumers have an expectation of security, and how they police that is they simply go to somebody else. So if you're my retailer and you get breached, you know what I'm going to do? I'm going to go next door. But I think that the average consumer's expectation is, security's your responsibility, not mine. >> Okay, so on the B2B side, let's get at that. I wanted to push you on something I thought I kind of disagreed with. If compliance, I agree, compliance has been a big part of data governance and data management. >> Yeah, PCI has been the biggest driver in security in the last five years. >> No doubt. However, companies are now sharing data more with other companies. Financial institutions are sharing core data with other financial institutions, which kind of teases out the trend of, I'll give you some of my data to get some in return, to fight fraud, because fraud detection is a $1 trillion problem. So you start to see points of growth where, okay, people go outside their comfort zone on compliance to share data. So we're tryin' to rationalize that. Your thoughts? I mean, is that an indicator? Do you see that as a trend? Or, I mean, obviously locking down the data would be, you know. >> I think it's challenging. I mean, we were at the president's council on security last year at Stanford.
And you know, President Obama got up there, made some passionate speech about sharing data: for the goodness of all of us, we need to share more data and be more secure. I got to tell you, you heard that speech and you're like, yeah baby, I'm going to share my data, we're all going to work together. Right after him, Tim Cook got up there (laughter) and said, I will never share my data with anybody in the government! And you heard him, and you're like, I am never sharing my data with anybody. >> Well there's the tension there, right? >> Well, this is a natural-- >> Natural tension between government and enterprise. >> Well, I think there's also a natural tension between enterprises. There are competitive issues, competitor pressures. >> Apple certainly is a great case. They hoard their data. Well, this is the dilemma, right? You want to have good policy, but innovation comes from experimentation. So it's a balancing act between, what do you kind of do? How do you balance-- >> Yeah, it's a great time to be in our space. I mean, look at this floor. How many companies are here? Splunk is growing by 30%; the show itself, 30% per year. They're going to outgrow this venue next year, and they're going to go, probably, Vegas or somewhere. I think that's exciting. But these are all point products. The fastest-growing segment in the computer business is managed services, because the complexity in that world is overwhelming, and it's extremely fragmented. There's no interlinking. >> Talk about your business right now. What are you guys currently selling, how many employees do you have, what are the revenues like, what's the product mix? >> Yeah, so we are a global company. We have 10 offices worldwide and close to 300 employees. We're one of the fastest-growing companies in North America. We sell, our focus is, managed security services. We do consulting as well as incident response and remediation, but the day-to-day, we want your logs, we want to do monitoring, we want to help with-- >> So you guys come in and do deployments and integration, and then actually manage security for customers? >> We do the sexy of gettin' it in, and then we also do the unsexy of managing it day-to-day. >> Atif, nothing unsexy about our work. (laughter) >> It's all sexy, that's what theCUBE show's about. >> It's all sexy! >> That's why theCUBE's a household name. We have celebrities coming on now. Soon we'll be on cable. >> That's right! This will be a primetime show. (laughter) >> Before we know it! >> That's funny, I got approached by a network, I can't tell you who, a big network with a big producer, to do a cybersecurity show. And so, they approached me and they said, oh, we think it's going to be so hot. It's such a topical thing. So they spent a day with me and our team to watch what we do. There is no cybersecurity show! (laughter) They're like, do you guys do anything besides sit on the computer? >> You have a meeting and you look at the monitor. It's not much of a show. >> Does anybody have a gun?! (laughter) >> It's not great for network TV, I think. >> Build a wall. >> Someone has to die in the end. That has to be network TV. And yeah, but I mean, there's a problem. There's 1.4 million cyber jobs open right now. And that's not even including any data science statistics. So you know, so we're reporting that-- >> I'm sure it's the same thing in data science. >> Same problem.
How do you take a high skill that there's not enough talent for, hopefully, computer science education, all that stuff happens, and automate it? So, your point about automation. This is the number one problem. How do you guys advise clients, what the hell do they do? >> You know, automation's tough. We just had this meeting before we got on here, because in our managed service, it's people-driven. We want to automate it. But there's only a certain amount of automation you can do. You still need that human element. I mean, if you could automate it, somebody could buy a product and they'd be secure. >> Machine learning isn't where it's supposed to be. Every vendor aside, machine learning's not where it needs to be, but we're getting there. Having succinct automation helps solve the cybersecurity labor shortage problem, because the skill level that you hire at can go lower. So you reduce the learning curve of who you need to hire, and what they do. >> That's a great point. I think the unsupervised machine learning algorithms are going to become so much smarter with the Splunk data, because they are, that's a tough nut to crack, because you need to have some sort of knowledge around how to make that algorithm work. The data coming in from Splunk is so awesome, that turns that into an asset. So this is a moving train. This is the bigtime. Okay, let's step back for a second, I want to change gears. Robert, I want to get your thoughts, because you're here and you do a lot of, you know, picking the stocks, if you will, on Shark Tank, in the tech world, our boring tech world that we love, by the way. >> We love it too. >> How do you, as someone who's got a lot of experience in cycles of innovation, look at the changing digital transformation vendor landscape? Splunk, companies like Oracle tryin' to transform, Dell bought EMC, IBM's pivoting, Amazon is booming. How do you look at the new digital enterprise, and how do you look at that from, if you're a customer, an investor? Where's the growth stocks, where's the growth companies, what's the growth parameters, what's your thoughts? >> One of the reasons I got into tech, like a lot of our industry, was I had no money; my dad worked in a factory, my mom was a receptionist. And the old adage is, to make money, you need money. To get ahead, it's not what you know, it's who you know. I didn't know anybody. And the value of tech is tech transforms every three years. We follow these cycles where we eat our own young and we throw away stuff that doesn't add value. Tech is the great equalizer, 'cause if you don't add value, nobody cares. And you know, when I'm starting out as a guy with a small company, I love that! We're going to kick ass, we're going to add value. Now that we're a little bigger-- >> Well, when you're a young company you can eat someone's lunch, because if they're not paying attention, you can come in and-- >> For sure. It gets harder as you get bigger, because now we're the big guys that somebody in their basement's tryin' to take out. But you know, we see tremendous innovation in security. If you look back three years, who were the leaders in the SIEM space? ArcSight, Q1, Nitro to a lesser degree, and enVision. Today, does RSA have a strategy around a SIEM? They have NetWitness, you know, security analytics, which is kind of a SIEM. Q1 is in the throes of the IBM machine, somewhere in their gut, nobody knows. ArcSight, who buys ArcSight anymore? It's so complicated. Who's the leader? Splunk!
Obviously, you have good people on the management team. Product matters now, in tech, doesn't it? More than ever. Obviously, balance sheet. Okay, let's get back to the data transformation. So you know, data is so critical now, and again, it's moved from the data warehouse, which is still around, to real-time data having value as it moves into different applications. The question is, how do you value data? I mean, you can't put it on the balance sheet. I mean, people value factories. GE said, we have all this investment in machines and assets. They worry about someone getting their data and doing a judo move on them. So data is truly an asset that's flying out of their network. How do companies value data? Can it ever be on the balance sheet? How do you look at that? >> I don't think data, in and of itself, has any value. It's the effect of the data that has the value. And it's very singular, it's what somebody does with it. Whatever the data is worth to you, from a business perspective, it's worth fundamentally more to an outside bad party, because they can package that data and sell it to a competitor, a foreign government, all those kinds of places. So it's the collection of raw data and applying it to something that has meaning to a third party. >> So it's like thermodynamics, really. Until it's in motion, it's really not worth anything. I mean, that's what you're saying. Data's data until it's put to work. >> Right, I don't think you're ever going to see it on a balance sheet as a hard value, because it has to have a transformative value. You have to do something with it. It's the something. >> So pretend you're in Shark Tank and you're a data guy, and you say, boss, I need more budget to do security, I need more budget to expand our presence. And the guy says, sorry, I need to see some ROI on that data. Well, I just have a gut feeling that if we move the data around, it's going to be worth something. Oh, I pass. You can't justify the investment. I'm oversimplifying it, but that's kind of like a dialogue that we hear from customers. How do you get that-- >> What I always tell CIOs and CCOs is, it's challenging to get budget to do a good thing or the right thing. It's easier to get budget to do the necessary thing. And necessary is defined by the nature of your business. So if you make widgets and you want more budget to protect the widgets, no one cares. No one's sitting around going, oh, are my widgets safe? They are, to a certain degree, and they'll have limited budget for that. But if you go to them and say, you know what, we have a risk that if somebody can attack our widgets, we're going to be down for three days. And being down for three days or three hours has a dollar cost of $5 million. I need an extra $2.5 million to protect that from happening. As a business guy and a CEO, I understand that. >> That's great advice. >> And that's the biggest challenge, still, with security people: we're technical people. We're not used to talking to business guys. >> It's like house insurance, in a way. You invest this to recover that. >> It's a great analogy. You know, I used to race cars, and I had a life insurance premium for key man insurance. And my insurance agent comes along and says, you should buy a bigger policy. I'm like, I don't need a bigger policy. It's so much money, we're okay. And then he says to me, you know, if you die in a racecar, I'm not sure you're covered.
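The budget pitch above is standard risk arithmetic: expected loss is the cost of an incident times its likelihood, and spend is "necessary" when it costs less than the loss it prevents. A back-of-the-envelope sketch, using the round dollar figures from the conversation and an invented breach probability, might look like:

```python
# Back-of-the-envelope risk math behind "budget for the necessary thing".
# The dollar figures are from the conversation; the annual breach
# probability is a hypothetical assumption for illustration.
downtime_cost = 5_000_000        # cost of a three-day outage, in dollars
annual_breach_probability = 0.5  # assumed likelihood per year (hypothetical)
mitigation_cost = 2_500_000      # requested security spend, in dollars

expected_annual_loss = downtime_cost * annual_breach_probability
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Mitigation cost:      ${mitigation_cost:,.0f}")
print("Worth funding" if mitigation_cost < expected_annual_loss else "Hard sell")
```

Framed this way, the ask reads like the insurance analogy that follows: a known premium against a larger, uncertain loss.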
(laughter) But if you pay me another $10,000 a year in coverage, you're covered. Did I buy it? Absolutely. And it's the same analogy. >> That's very necessary. Personal question for you. Your dad had a factory, you mentioned earlier. If he had a factory today, in the modern era of IoT, and you were going to give him a digital transformation consulting project, how would you advise him? Because a lot of people are taking their analog business and digitizing it. Some already have sensors in there. So you see it in manufacturing, and certainly, the industrial aspect of IoT has been a big deal. How would you advise your dad building a factory today? >> Yeah, so I think there are two aspects to it. One is just, you know, everything we've been talking about, data transformation, data analytics, making things better, none of those things are possible unless you're actually collecting the data. Customers come to us and say, you know what, we don't want you to just manage our logs and tell us what's going on, we want higher-level value. And I'm like, I get that, but unless you're actually aggregating the logs, none of the upstream stuff matters. So the first thing is, you have to collect the data. Whether that's from sensors, old devices, mechanical devices, and so on. The second part of it is, the minute you open up your factory, open up the mechanical devices and attach them to a PC or anything that's network-based, you're open to risk. And so, we're seeing that now in utilities, we're seeing that with gas companies, oil companies. You know, up until a few years ago, you couldn't physically change the flow of a pipeline unless there was a physical connection, a mechanical on-off. It was very binary. Today, all those systems are connected to the internet. And it saves companies a lot of money, 'cause they can test them remotely. But they're also open to hackers. >> Bigtime. >> Well gentlemen, we appreciate the time. >> Thank you. >> And who says tech hasn't got a little pizazz, I mean-- (laughter) >> Come on, I was on Dancing with the Stars, that's a lot of pizazz! >> It's been great! >> You guys are exciting, but you are, no! >> Dancing with the Stars, of course! >> All right. >> Thank you very much. >> Well, thanks for bein' in theCUBE Tank, we appreciate that. >> Thank you. >> Don't call us, we'll call you. (laughter) Gentlemen, thank you very much. >> We're booked, maybe we can get you on next time. >> Okay, we're out. >> .conf2016, CUBE coverage continues live from Orlando. (electronic jingle)
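A footnote to the factory discussion: the "first, collect the data" step described above is, at its simplest, log aggregation. A minimal sketch, assuming a hypothetical syslog-style setup in which factory or IoT devices send UDP messages to a collector (the port and file path are invented for illustration), might look like:

```python
# Minimal sketch of log aggregation: a tiny UDP listener that timestamps
# incoming factory/IoT device messages and appends them to one file.
# The port and output path are hypothetical choices for illustration.
import socket
from datetime import datetime, timezone

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5514))  # unprivileged syslog-style port (assumption)

with open("aggregated.log", "a") as out:
    while True:
        data, (host, _port) = sock.recvfrom(4096)
        stamp = datetime.now(timezone.utc).isoformat()
        # One line per event: when it arrived, which device sent it, raw payload.
        out.write(f"{stamp} {host} {data.decode(errors='replace')}\n")
        out.flush()
```

Everything upstream, the analytics, the anomaly detection, the dashboards, presupposes this unglamorous step; it is also exactly the point where, as the conversation notes, a previously mechanical system becomes reachable over a network.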

Published Date : Sep 28 2016

SUMMARY :

Brought to you by Splunk. At Splunk .conf2016 in Orlando, theCUBE hosts John Furrier and John Walls sit down with Robert Herjavec, founder and CEO of Herjavec Group and a Shark on Shark Tank, and Atif Ghauri, Senior VP at Herjavec. The conversation covers the tension between government and enterprise over data sharing, Herjavec Group's managed security services business (10 offices worldwide and close to 300 employees), the 1.4 million open cybersecurity jobs, and the limits of automation and machine learning in closing that gap. The guests trace the churn in the SIEM market from ArcSight, Q1, Nitro, and enVision to Splunk; argue that data has no balance-sheet value until something is done with it; and close with advice on justifying security budgets in business terms and on the risks and rewards of connecting factories and industrial systems to the internet.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Tim Cook | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Verizon | ORGANIZATION | 0.99+
Mark Cuban | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Robert | PERSON | 0.99+
John Furrier | PERSON | 0.99+
three days | QUANTITY | 0.99+
Dell | ORGANIZATION | 0.99+
$400000 | QUANTITY | 0.99+
America | LOCATION | 0.99+
three hours | QUANTITY | 0.99+
$6 million | QUANTITY | 0.99+
USC | ORGANIZATION | 0.99+
John Walls | PERSON | 0.99+
Oracle | ORGANIZATION | 0.99+
Splunk | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
Apple | ORGANIZATION | 0.99+
Atif Ghauri | PERSON | 0.99+
Silicon Valley | LOCATION | 0.99+
New York | LOCATION | 0.99+
10 offices | QUANTITY | 0.99+
five years | QUANTITY | 0.99+
GE | ORGANIZATION | 0.99+
$5 million | QUANTITY | 0.99+
30% | QUANTITY | 0.99+
72% | QUANTITY | 0.99+
next year | DATE | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
$1 trillion | QUANTITY | 0.99+
Australia | LOCATION | 0.99+
EMC | ORGANIZATION | 0.99+
2020 | DATE | 0.99+
Herjavec Group | ORGANIZATION | 0.99+
two aspects | QUANTITY | 0.99+
Herjavec | ORGANIZATION | 0.99+
RSA | ORGANIZATION | 0.99+
Orlando, Florida | LOCATION | 0.99+
One | QUANTITY | 0.99+
UK | LOCATION | 0.99+
North America | LOCATION | 0.99+
last year | DATE | 0.99+
12 years ago | DATE | 0.99+
first | QUANTITY | 0.99+
Eight years | QUANTITY | 0.99+
ArcSight | ORGANIZATION | 0.99+
Vegas | LOCATION | 0.99+
1000 people | QUANTITY | 0.99+
three | QUANTITY | 0.99+
IPV Committee | ORGANIZATION | 0.99+
over five years | QUANTITY | 0.99+
Today | DATE | 0.98+
about 30 years | QUANTITY | 0.98+
Dancing with the Stars | TITLE | 0.98+
Orlando | LOCATION | 0.98+
SiliconANGLE TV | ORGANIZATION | 0.98+
1000 times | QUANTITY | 0.98+
each enterprise | QUANTITY | 0.98+
one | QUANTITY | 0.98+
five, six billion people | QUANTITY | 0.97+
Shark Tank | ORGANIZATION | 0.97+
10 years ago | DATE | 0.97+
Shark Tank | TITLE | 0.97+
today | DATE | 0.97+
Canada | LOCATION | 0.96+
three years | QUANTITY | 0.96+
Robert Herjavec | PERSON | 0.96+