Discussion about Walmart's Approach | Supercloud2
(upbeat electronic music) >> Okay, welcome back to Supercloud 2, live here in Palo Alto. I'm John Furrier with Dave Vellante. Again, all day wall-to-wall coverage. We just had a great interview with Walmart, and we've got the next interview coming up: you're going to hear from Bob Muglia and Tristan Handy, two experts, both experienced entrepreneurs and executives in technology. We're here to break down what just happened with Walmart, and what's coming up, with George Gilbert, former colleague, Wikibon analyst, Gartner analyst, and now independent investor and expert. George, great to see you, I know you're following this space. You've written about it; remember the first days when Dataverse came out, we were talking about them coming out of Berkeley? >> Dave: Snowflake. >> John: Snowflake. >> Dave: Snowflake, in the early days. >> We, collectively, have been chronicling the data movement since 2010. You were part of our team, and now you've got your nose to the grindstone, you're seeing the next wave. What's this all about? Walmart building their own super cloud, and we've got Bob Muglia talking about how this next wave of apps is coming. What are the super apps? What's the super cloud to you? >> Well, this keys off Dave's really interesting questions to Walmart, which were like, how are they building their supercloud? 'Cause it makes for a concrete example. But what was most interesting was his description of the Walmart WCNP, I forgot what it stood for. >> Dave: Walmart Cloud Native Platform. >> Walmart, okay. He was describing where the logic could run in these stateless containers, and maybe eventually serverless functions. But that's just it, and that's the paradigm of microservices, where the logic is in this stateless thing, where you can shoot it, or it fails, and you can spin up another one, and you've lost nothing. >> That was their triplet model. >> Yeah, in fact, and that was what they were trying to move to, where these things move fluidly between data centers. >> But there's a but, right? Which is they're all stateless apps in the cloud. >> George: Yeah. >> And all their stateful apps are on-prem, in VMs. >> Or the stateful parts of the apps are in VMs. >> Okay. >> And so if they really want to lift their super cloud layer off of these different providers' infrastructure, they're going to need a much more advanced software platform that manages data. And that goes to the -- >> Muglia and Handy, that you and I did, that's coming up next. So the big takeaway there, George, was, and I'll set it up and you can chime in, that a new breed of data apps is emerging on this highly decentralized infrastructure. And Tristan Handy of DBT Labs has a sort of a solution to begin the journey today, while Muglia is working on something that's way out there. Describe what you learned from it. >> Okay. So to talk about what the new data apps are, and then the platform to run them, I go back to what will probably be seen as one of the first data app examples, which was Uber, where you're describing entities in the real world: riders, drivers, routes, a city, like a city plan. These are all defined by data. And the data is described in a structure called a knowledge graph, for lack of a better term; no one's come up with a better one. But that means the stuff that Jack built, which was all stateless and sits above cloud vendors' infrastructure, needs an entirely different type of software that's much, much harder to build. And the way Bob described it is, you're going to need an entirely new data management infrastructure to handle this.
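To make the knowledge-graph structure George describes concrete, here is a minimal sketch of real-world entities and typed relationships stored as (subject, predicate, object) triples. All entity names and relationships here are invented for illustration; a production system would use a dedicated graph store rather than in-memory dictionaries.

```python
# A minimal, illustrative knowledge graph: real-world entities (riders,
# drivers, routes) and typed relationships stored as triples.
from collections import defaultdict

triples = [
    ("rider:alice", "requested", "trip:42"),
    ("driver:bob", "assigned_to", "trip:42"),
    ("trip:42", "follows_route", "route:7"),
    ("route:7", "within_city", "city:palo_alto"),
]

# Index triples by subject so the graph can be traversed entity by entity.
edges = defaultdict(list)
for subject, predicate, obj in triples:
    edges[subject].append((predicate, obj))

def neighbors(entity):
    """Return the typed edges leaving an entity."""
    return edges[entity]

print(neighbors("trip:42"))  # [('follows_route', 'route:7')]
```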
But, you know, we had this really colorful interview where it was like Rock 'Em Sock 'Em, but they weren't really that much in opposition to each other, because Tristan is going to define this layer, starting with business intelligence metrics, where you're defining things like bookings, billings, and revenue in business terms, not in SQL terms -- >> Well, business terms, if I can interrupt. He said the one thing we haven't figured out how to APIify is the KPIs that sit inside of a data warehouse, and that's essentially what he's doing. >> George: That's what he's doing, yes. >> Right. And so then you can now expose those KPIs that sit inside of a data warehouse, or a data lake, a data store, whatever, through APIs. >> George: And the difference -- >> So what does that do for you? >> Okay, so all of a sudden, instead of working in technical data terms, where you're dealing with tables and columns and rows, you're dealing instead with business entities, using the Uber example of drivers, riders, routes, you know, ETAs, prices. And DBT will be able to define those progressively in richer terms; today they're just doing things like bookings, billings, and revenue. But Bob's point was, today, DBT defines that stuff, but the data warehouse that actually runs it, you can't do it with relational technology -- >> Dave: Relational technology, caching architecture. >> SQL, you can't -- >> SQL caching architectures in memory, you can't do it. You've got to rethink down to the way the data lake is laid out on the disk or cache. Which, by the way, Thomas Hazel, who's speaking later, he's the chief scientist and founder at ChaosSearch, he says, "I've actually done this": basically leave it in an S3 bucket, and I'm going to query it, you know, with no caching. >> All right, so what I hear you saying then, tell me if I got this right, is there are some things that are inadequate in today's world, that aren't compatible with the Supercloud wave. >> Yeah. >> Specifically how you're using storage, and data, and state. >> Yes. >> And then the software that makes it run, is that what you're saying? >> George: Yeah. >> There's one other thing you mentioned to me. It's like, when you're using a CRM system, a human is inputting data. >> George: Nothing happens till the human does something. >> Right, nothing happens until that data entry occurs. What you're talking about is a world that self-forms, pulling data from the transaction system or the ERP system, and then builds a plan without human intervention. >> Yeah. Something in the real world happens, where the user says, "I want a ride." And then the software goes out and says, "Okay, we've got to match a driver to the rider, we've got to calculate how long it takes to get there, how long to deliver 'em." That's not driven by a form, other than the first person hitting a button and saying, "I want a ride." All the other stuff happens autonomously, driven by data and analytics. >> But my question was different, Dave, so I want to get specific, because this is where the startups are going to come in; this is the disruption. Snowflake is a data warehouse that's in the cloud, they call it a data cloud, they refactored it, they did it differently, and the success, we all know what it looks like.
These areas where it's inadequate for the future are areas that'll probably be either disrupted or refactored. What is that? >> That's what Muglia's contention is, that DBT can start adding that layer where you define these business entities. They're like mini digital twins: you can define them, but the data warehouse isn't strong enough to actually manage and run them. And Muglia is behind a company that is rethinking the database, really in a fundamental way that hasn't been done in 40 or 50 years. In his contention, it's the first real fundamental rethink of database technology since the rise of the relational database 50 years ago. >> And I think you'd admit it's a real Hail Mary, I mean it's quite a long shot, right? >> George: Yes. >> Huge potential. >> But they're pretty far along. >> Well, we've been talking on theCUBE for 12 years, and what, 10 years going to AWS re:Invent, Dave, about how no one database will rule the world; Amazon kind of showed that themselves. What's different? Is it that databases are changing, or that you can have multiple databases, or? >> It's a good question. And the reason we've had multiple different types of databases is that each one is specialized for a different type of workload. But what Muglia is behind is a new engine. You'll never get rid of the data warehouse, or the equivalent engine in, like, a Databricks lakehouse, but it's a new engine that manages the thing that describes all the data and holds it together, and that's the new application platform. >> George, we have one minute left, and I want to get a real quick thought. You're an investor, and we know your history, and for the folks watching, George has got a deep pedigree in data investing, and we can testify to that. If you're going to invest in a company right now, or if you're a customer and I've got to make a bet, what does success look like for me, what do I want walking through my door, and what do I want to send out? What companies do I want to look at? What kind of vendor do I want to evaluate? Which ones do I want to send home? >> Well, the first thing a customer really has to do when they're thinking about next gen applications, and all the people have told you guys this, is "we've got to get our data in order." Getting that data in order means building an integrated view of all your data landscape, which is data coming out of all your applications. It starts with the data model. So, today, you basically extract data from all your operational systems and put it in one giant, central place, like a warehouse or lakehouse. But eventually you want, whether you call it a fabric or a mesh, all the data that describes how everything hangs together, as in one big knowledge graph. There are different ways to implement that. And that's the most critical thing, 'cause that describes your Uber landscape, your Uber platform. >> That's going to power the digital transformation, which will power the business transformation, which powers the business model, which allows the builders to build -- >> Yes. >> Coders to code. That's the Supercloud application. >> Yeah. >> George, great stuff. The next interview you're going to see right here is Bob Muglia and Tristan Handy; they're going to unpack this new wave. Great segment, really worth unpacking and reading between the lines with George and Dave Vellante, and those two great guests. And then we'll come back here to the studio for more of the live coverage of Supercloud 2. Thanks for watching. (upbeat electronic music)
Breaking Analysis: Enterprise Technology Predictions 2023
(upbeat music beginning) >> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR, this is "Breaking Analysis" with Dave Vellante. >> Making predictions about the future of enterprise tech is more challenging if you strive to lay down forecasts that are measurable. In other words, if you make a prediction, you should be able to look back a year later and say, with some degree of certainty, whether the prediction came true or not, with evidence to back that up. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this breaking analysis, we aim to do just that, with predictions about the macro IT spending environment, cost optimization, security, lots to talk about there, generative AI, cloud, and of course supercloud, blockchain adoption, data platforms, including commentary on Databricks, Snowflake, and other key players, automation, events, and we may even have some bonus predictions around quantum computing, and perhaps some other areas. To make all this happen, we welcome back, for the third year in a row, my colleague and friend Eric Bradley from ETR. Eric, thanks for all you do for the community, and thanks for being part of this program again. >> I wouldn't miss it for the world. I always enjoy this one. Dave, good to see you. >> Yeah, so let me bring up this next slide and show you, actually come back to me if you would. I've got to show the audience this. These are the inbounds that we got from PR firms starting in October around predictions. They know we do prediction posts. And so they'll send literally thousands and thousands of predictions from hundreds of experts in the industry, technologists, consultants, et cetera. And if you bring up the slide I can show you sort of the pattern that developed here. 40% of these thousands of predictions were from cyber. You had AI and data; if you combine those, it's still not close to cyber. Cost optimization was a big thing. Of course, cloud, some on DevOps, and software. Digital transformation got, you know, some lip service, and SaaS. And then there was other, which is kind of around 2%. So quite remarkable, when you think about the focus on cyber, Eric. >> Yeah, there's two reasons why I think it makes sense, though. One, the cybersecurity companies have a lot of cash, so therefore the PR firms might be working a little bit harder for them than for some of their other clients. (laughs) And then secondly, as you know, for multiple years now, when we do our macro survey, we ask, "What's your number one spending priority?" And again, it's security. It just isn't going anywhere. It just stays at the top. So I'm actually not that surprised by that little pie chart there, but I was shocked that SaaS was only 5%. You know, going back 10 years ago, that would've been the only thing anyone was talking about. >> Yeah. So true. All right, let's get into it. First prediction, we always start with tech spending. Number one is tech spending increases between 4 and 5%. ETR currently has it at 4.6% coming into 2023. This has been a consistently downward trend all year; we started, you know, much, much higher, as we've been reporting. Bottom line is the Fed is still in control. They're going to ease up on tightening, is the expectation; they're going to shoot for a soft landing. But you know, my feeling is this slingshot economy is going to continue, and it's going to continue to confound, whether it's supply chains or spending.
The interesting thing about the ETR data, Eric, and I want you to comment on this, is that the largest companies are the most aggressive to cut. They're laying off; smaller firms are spending faster. They're actually growing at a much larger, faster rate, as are companies in EMEA. And that's a surprise; that's outpacing the US and APAC. Chime in on this, Eric. >> Yeah, I was surprised by all of that. First, on the higher-level spending, we are definitely seeing it coming down, but the interesting thing here is headlines are making it worse. A huge research shop recently said 0% growth. We're coming in at 4.6%. And just so everyone knows, this is not us guessing; we asked 1,525 IT decision-makers what their budget growth will be, and they came in at 4.6%. Now there's a huge disparity, as you mentioned. The Fortune 500 and Global 2000 are barely at 2% growth, but small, it's at 7%. So we're at a situation right now where the smaller companies are still playing a little bit of catch-up on digital transformation, and they're spending money. The largest companies, which have the most to lose from a recession, are being more trepidatious, obviously. So they're playing a "wait and see." And I hope we don't talk ourselves into a recession. Certainly the headlines and some of the research shops are helping it along. But another interesting comment here is, you know, energy and utilities used to be called a widow-and-orphan stock group, right? They are spending more than anyone: more than financials, insurance, more than retail, consumer. So right now it's being driven by mid, small, and energy and utilities. They're all spending like gangbusters, like nothing's happening. And it's the rest of everyone else that's being very cautious. >> Yeah, so very unpredictable right now. All right, let's go to number two. Cost optimization remains a major theme in 2023. We've been reporting on this, and we've shown a chart here: what's the primary method that your organization plans to use? You asked this question of those individuals that cited that they were going to reduce their spend and- >> Mhm. >> consolidating redundant vendors, you know, still leads the way; far behind, cloud optimization is second, but cloud continues to outpace legacy on-prem spending, no doubt. Somebody, the guy's name was Alexander Feiglstorfer from Storyblok, sent in a prediction that said "all in one becomes extinct." Now, generally I would say I disagree with that because, you know, as we know over the years, suites tend to win out over individual point products. But I think what's going to happen is all in one is going to remain the norm for these larger companies that are cutting back. They want to consolidate redundant vendors, and the smaller companies are going to stick with best of breed and be more aggressive and try to compete more effectively. What's your take on that? >> Yeah, I'm seeing much more consolidation in vendors, but also consolidation in functionality. We're seeing people building out new functionality, whether it's, we're going to talk about this later, so I don't want to steal too much of our thunder right now, but in data and security also, we're seeing a functionality creep. So I think there's further consolidation happening here. I think niche solutions are going to be less likely, and platform solutions are going to be more likely, in a spending environment where you want to reduce your vendors. You want to have one bill to pay, not 10.
Another thing on this slide, real quick if I can before I move on: we had a bunch of people write in, and some of the answer options that aren't on this graph did get cited a lot. Unfortunately, the obvious ones, reduction in staff, hiring freezes, and delaying hardware, were three of the top write-ins, and another one was offshore outsourcing. So in addition to what we're seeing here, there were a lot of write-in options, and I just thought it would be important to state that. But essentially, cost optimization is by far the highest one, and it's growing. So it's actually increased in our citations over the last year. >> And yeah, specifically consolidating redundant vendors. And I actually thank you for bringing that other point up, 'cause I had asked you, Eric, is there any evidence that repatriation is going on? And we don't see it in the numbers; there was, I think, very little or no mention of cloud repatriation, even though it might be happening in a smattering of cases. >> Not a single mention, not one single mention. I went through it for you. Yep. Not one write-in. >> All right, let's move on. Number three, security leads M&A in 2023. Now you might say, "Oh, well that's a layup," but let me set this up, Eric, because I didn't really do a great job with the slide. I hid what you've done, because you basically took, this is from the emerging technology survey with 1,181 responses from November, and what we did is we took Palo Alto and looked at the overlap in Palo Alto Networks accounts with these vendors that are showing on this chart. And Eric, I'm going to ask you to explain why we put a circle around OneTrust, but let me just set it up, and then have you comment on the slide and give us more detail. We're seeing private company valuations off, you know, 10 to 40%. We saw Snyk do a down round, but pretty good actually, only down 12%; we've seen much higher down rounds. Palo Alto Networks, we think, is going to get busy. Again, they're an inquisitive company; they've been sort of quiet lately, and we think CrowdStrike, Cisco, Microsoft, Zscaler, we're predicting all of those will make some acquisitions, and we're thinking that the targets are somewhere in this mess of security taxonomy. The other thing we're predicting: AI meets cyber big time in 2023, and we're probably going to see some acquisitions of those companies that are leaning into AI. We've seen some of that with Palo Alto. And then, you know, your comment to me, Eric, was "The RSA conference is going to be insane, hopping mad, crazy this April." (Eric laughing) But give us your take on this data, and why the red circle around OneTrust? Take us back to that slide if you would, Alex. >> Sure. There's a few things here. First, let me explain what we're looking at. Because we separate the public companies and the private companies into two separate surveys, this allows us the ability to cross-reference that data. So what we're doing here is, in our public survey, the TSIS, everyone who cited some spending with Palo Alto, meaning they're a Palo Alto customer, we then cross-reference that with the private tech companies: who else are they spending with? So what you're seeing here is an overlap. These companies that we have circled are doing the best in Palo Alto's accounts. Now, Palo Alto went and bought Twistlock a few years ago, which this data slide predicted, to be quite honest.
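As a rough illustration of the cross-referencing Eric describes, the sketch below computes what share of an anchor vendor's survey accounts also cite each private vendor. The account IDs and numbers are invented; the real analysis runs over ETR's two survey populations.

```python
# Hedged sketch of the overlap analysis: start from the set of respondents
# who cite spending with the anchor vendor, then measure what fraction of
# them also cite each private vendor. All data below is made up.
palo_alto_accounts = {"acct1", "acct2", "acct3", "acct4", "acct5"}

private_vendor_accounts = {
    "OneTrust": {"acct1", "acct2", "acct4"},
    "BeyondTrust": {"acct2", "acct5"},
}

for vendor, accounts in private_vendor_accounts.items():
    overlap = len(accounts & palo_alto_accounts) / len(palo_alto_accounts)
    print(f"{vendor}: {overlap:.0%} overlap with Palo Alto accounts")
```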
And so I don't know if they necessarily are going to go after Snyk; they already have something in that space. What they do need, however, is more in the authentication space. So I'm looking at OneTrust, with a 45% overlap in their overall net sentiment. That is a company that already exists in their accounts and could be very synergistic to them. BeyondTrust as well, authentication, identity. This is something that Palo needs to do to move more down that zero trust path. Now why did I pick Palo first? Because usually they're very inquisitive, and they've been a little quiet lately. Secondly, if you look at the backdrop in the markets, the IPO freeze isn't going to last forever. Sooner or later, the IPO markets are going to open up, and some of these private companies are going to tap into public equity. In the meantime, however, cash funding on the private side is drying up. If they need another round, they're not going to get it, and they're certainly not going to get it at the valuations they were getting. So we're seeing valuations maybe come down to where they're a touch more attractive, and Palo knows this isn't going to last forever. Cisco knows that; CrowdStrike, Zscaler, all these companies that are trying to make a push to become that vendor that you're consolidating in and around, they have a chance now, they have a window where they need to go make some acquisitions. And that's why I believe leading up to RSA, we're going to see some movement. I think it's going to be a really exciting time in security right now.
>> Awesome. Thank you. Great explanation. All right, let's go on to the next one. Number four relates to security, so let's stay there: zero trust moves from hype to reality in 2023. Now again, you might say, "Oh yeah, that's a layup." A lot of these inbounds that we got are very, you know, kind of self-serving, but we always try to put some meat on the bone. So the first thing we do is pull out some commentary from, Eric, your insights roundtable. We have a CISO from a global hospitality firm who says, "For me that's the highest priority." He's talking about zero trust because it's the best ROI, it's the most forward-looking, and it enables a lot of the business transformation activities that they want to do. CISOs tell me that they can actually drive transformation projects forward with zero trust, because they can accelerate them, because they don't have to go through the hurdle of making sure that it's secure. Second comment: zero trust closes that last mile where, once you're authenticated, they open up the resource to you in a zero trust way. That's from a CISO and managing director of a cyber risk services enterprise. Your thoughts on this? >> I could be here all day, so I'm going to try to be quick on this one. This is not a fluff piece on this one. There are a couple of other reasons this is happening. One, the board finally gets it. Zero trust at first was just a marketing hype term. Now the board understands it, and that's why CISOs are able to push it through. And what they finally did was redefine what it means. Zero trust simply means moving away from hardware security, moving towards software-defined security, with authentication as its base. The board finally gets that, and now they understand that this is necessary, and it's being moved forward. The other reason it's happening now is hybrid work is here to stay. We weren't really sure at first; large companies were still trying to push people back to the office, and it's going to happen, the pendulum will swing back, but hybrid work's not going anywhere. Basically, in our own data, we're seeing that 69% of companies expect remote and hybrid to be permanent, with only 30% permanently in office. Zero trust works for a hybrid environment. So all of that is the reason why this is happening right now. And going back to our previous prediction, this is why we're picking Palo, this is why we're picking Zscaler to make these acquisitions. Palo Alto needs to be better on the authentication side, and so does Zscaler. They're both fantastic on zero trust network access, but they need the authentication, software-defined aspect, and that's why we think this is going to happen. One last thing: in that CISO roundtable, I also had somebody say, "Listen, Zscaler is incredible. They're doing incredibly well pervading the enterprise, but their pricing's getting a little high," and they actually think Palo Alto is well-suited to start taking some of that share, if Palo can make one move. >> Yeah, Palo Alto's consolidation story is very strong. Here's my question and challenge. I'm always hardcore about, okay, you've got to have evidence. I want to look back at these things a year from now and say, "Did we get it right? Yes or no?" If we got it wrong, we'll tell you we got it wrong. So how are we going to measure this? I'd say a couple things, and you can chime in. One is just the number of vendors talking about it. But the marketing always leads the reality, so the second part of that is we've got to get evidence from the buying community. Can you help us with that? >> (laughs) Luckily, that's what I do. I have a data company that asks thousands of IT decision-makers what they're adopting and what they're increasing spend on, as well as what they're decreasing spend on and what they're replacing. So I have snapshots in time over the last 11 years where I can go ahead and compare and contrast whether this adoption is happening or not. So come back to me in 12 months and I'll let you know. >> Now, you know, I will.
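Picking up Eric's definition of zero trust as software-defined security with authentication at its base, here is a deliberately oversimplified sketch of the core idea: a request is denied unless identity, device posture, and entitlement checks all pass, with no implicit trust based on network location. The field names are invented; real products from the vendors discussed are far richer.

```python
# Illustrative zero-trust check: deny by default, grant access only when
# every identity and context check passes. Fields are hypothetical.
def authorize(request: dict) -> bool:
    checks = [
        request.get("identity_verified", False),   # strong authentication
        request.get("device_compliant", False),    # device posture check
        request.get("resource") in request.get("entitlements", ()),
    ]
    return all(checks)

print(authorize({
    "identity_verified": True,
    "device_compliant": True,
    "resource": "billing-db",
    "entitlements": ("billing-db",),
}))  # True; flip any field and access is denied
```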
Okay, let's bring up the next one. Number five: generative AI hits where the Metaverse missed. Of course everybody's talking about ChatGPT; we just gave our take on that last week in a breaking analysis with John Furrier and Sarbjeet Johal. We think 2023 does mark a pivot point as natural language processing really infiltrates enterprise tech, just as Amazon turned the data center into an API. We think going forward, you're going to be interacting with technology through natural language, through English commands or other, you know, foreign-language commands, and investors are lining up; all the VCs are getting excited about creating something competitive to ChatGPT. According to (indistinct), a hundred million dollars gets you a seat at the table, gets you into the game. (laughing) That's before you have to start doing promotion, but he thinks that's what it takes to actually create a clone or something equivalent. We've seen stuff from, you know, the head of Facebook's AI saying, "Oh, it's really not that sophisticated, ChatGPT, it's kind of like IBM Watson, it's great engineering, but you know, we've got more advanced technology." We know Google's working on some really interesting stuff. But here's the thing. ETR just launched the February survey. It's in the field now, and we circled OpenAI in this category; they weren't even in the survey, Eric, last quarter. So 52% of the ETR survey respondents indicated a positive sentiment toward OpenAI; I added up all the sort of different bars, and we could double-click on that. And then I got this inbound from Scott Stevenson of Deepgram. He said "AI is recession-proof." I don't know if that's the case, but it's a good quote. So bring this back up and take us through this. Explain this chart for us, if you would. >> First of all, I like Scott's quote better than the Facebook one. I think that's some sour grapes. Meta just spent an insane amount of money on the Metaverse, and that's a dud. Microsoft just spent money on OpenAI, and it is hot, undoubtedly hot. We've only been in the field with our current ETS survey for a week, so my caveat is it's preliminary data, but I don't care if it's preliminary data. (laughing) We're getting a sneak peek here at what is the number one net sentiment and mindshare leader in the entire machine-learning AI sector within a week. It's beating Data- >> 600. 600 responses in. >> It's beating Databricks. And we all know Databricks is a huge established enterprise company, not only in machine-learning AI, but it's in the top 10 in the entire survey. We have over 400 vendors in this survey, and it's number eight overall, already, in a week. This is not hype. This is real. And I could go on about the NLP stuff for a while. Not only are we seeing it in OpenAI and machine learning and AI, but we're seeing NLP in security. It's huge in email security; it's completely transforming that area. It's one of the reasons I thought Palo might take Abnormal out. They're doing such a great job with NLP on the email side, and also in the data prep tools. NLP is going to take out data prep tools. If we have time, I'll discuss that later. But yeah, to me this is a no-brainer, and we're already seeing it in the data. >> Yeah, John Furrier called the ChatGPT introduction a Netscape moment; he said it reminded him of when we all first saw Netscape Navigator and went, "Wow, it really could be transformative." All right, number six: the cloud expands to supercloud as edge computing accelerates, and CloudFlare is a big winner in 2023. We've reported obviously on cloud, multi-cloud, supercloud and CloudFlare, basically saying what multi-cloud should have been. We pulled this quote from Atif Khan, who is the founder and CTO of Alkira, thanks, one of the inbounds, thank you: "In 2023, highly distributed IT environments will become more the norm as organizations increasingly deploy hybrid cloud, multi-cloud and edge settings..." Eric, from one of your roundtables: "If my sources from edge computing are coming from the cloud, that means I have my workloads running in the cloud. There is no one better than CloudFlare." That's a senior director of IT architecture at a huge financial firm. And then your analysis shows CloudFlare really growing in pervasion, that sort of market presence in the dataset, dramatically, to near 20%, leading; I think you had told me that they're even ahead of Google Cloud in terms of momentum right now.
This is the number one leader in SaaSi, web access firewall, DDoS, bot protection, by your definition of supercloud, which we just did a couple of weeks ago, and I really enjoyed that by the way Dave, I think CloudFlare is the one that fits your definition best, because it's bringing all of these aspects together, and most importantly, it's cloud agnostic. It does not need to rely on Azure or AWS to do this. It has its own cloud. So I just think it's, when we look at your definition of supercloud, CloudFlare is the poster child. >> You know, what's interesting about that too, is a lot of people are poo-pooing CloudFlare, "Ah, it's, you know, really kind of not that sophisticated." "You don't have as many tools," but to your point, you're can have those tools in the cloud, Cloudflare's doing serverless on steroids, trying to keep things really simple, doing a phenomenal job at, you know, various locations around the world. And they're definitely one to watch. Somebody put them on my radar (laughing) a while ago and said, "Dave, you got to do a breaking analysis on CloudFlare." And so I want to thank that person. I can't really name them, 'cause they work inside of a giant hyperscaler. But- (Eric laughing) (Dave chuckling) >> Real quickly, if I can from a competitive perspective too, who else is there? They've already taken share from Akamai, and Fastly is their really only other direct comp, and they're not there. And these guys are in poll position and they're the only game in town right now. I just, I don't see it slowing down. >> I thought one of your comments from your roundtable I was reading, one of the folks said, you know, CloudFlare, if my workloads are in the cloud, they are, you know, dominant, they said not as strong with on-prem. And so Akamai is doing better there. I'm like, "Okay, where would you want to be?" (laughing) >> Yeah, which one of those two would you rather be? >> Right? Anyway, all right, let's move on. Number seven, blockchain continues to look for a home in the enterprise, but devs will slowly begin to adopt in 2023. You know, blockchains have got a lot of buzz, obviously crypto is, you know, the killer app for blockchain. Senior IT architect in financial services from your, one of your insight roundtables said quote, "For enterprises to adopt a new technology, "there have to be proven turnkey solutions. "My experience in talking with my peers are, "blockchain is still an open-source component "where you have to build around it." Now I want to thank Ravi Mayuram, who's the CTO of Couchbase sent in, you know, one of the predictions, he said, "DevOps will adopt blockchain, specifically Ethereum." And he referenced actually in his email to me, Solidity, which is the programming language for Ethereum, "will be in every DevOps pro's playbook, "mirroring the boom in machine-learning. "Newer programming languages like Solidity "will enter the toolkits of devs." His point there, you know, Solidity for those of you don't know, you know, Bitcoin is not programmable. Solidity, you know, came out and that was their whole shtick, and they've been improving that, and so forth. But it, Eric, it's true, it really hasn't found its home despite, you know, the potential for smart contracts. IBM's pushing it, VMware has had announcements, and others, really hasn't found its way in the enterprise yet. >> Yeah, and I got to be honest, I don't think it's going to, either. 
So when we did our top trends series, this was basically chosen as an anti-prediction, I would guess, that it just continues to not gain hold. And the reason why was that first comment, right? It's very much a niche solution that requires a ton of custom work around it. You can't just plug and play it. And at the end of the day, let's be very real about what this technology is: it's a database ledger, and we already have database ledgers in the enterprise. So why is it a priority to move to a different database ledger? It's going to be very niche cases. I like the CTO comment from Couchbase about it being adopted by DevOps. I agree with that, but it has to be DevOps in a very specific use case, and a very sophisticated use case in financial services, most likely. And that's not across the entire enterprise. So I just think it's still going to struggle to get its foothold for a little bit longer, if ever. >> Great, thanks. Okay, let's move on. Number eight: AWS, Databricks, Google, and Snowflake lead the data charge, with Microsoft keeping it simple. So let's unpack this a little bit. This is the shared accounts peer position. I pulled in data platforms for analytics, machine-learning and AI, and database, so I could grab all these vendors and see how they compare in those three sectors: analytics, machine-learning, and database. Snowflake and Databricks, you know, they're on a crash course, as you and I have talked about. They're battling to be the single source of truth in analytics. There's going to be a big focus, it's already started, and it's going to accelerate in 2023, on open formats. Iceberg, Python, you know, they're all the rage. We heard about Iceberg at Snowflake Summit last June. Not a lot of people had heard of it, but of course the Databricks crowd knows it well. A lot of other open source tooling too. There's a company called DBT Labs, which you're going to talk about in a minute. George Gilbert put them on our radar. We just had Tristan Handy, the CEO of DBT Labs, on at Supercloud last week. They are a new disruptor in data; they're essentially API-ifying, if you will, KPIs inside the data warehouse and dramatically simplifying that whole data pipeline. So really, you know, the ETL guys should be shaking in their boots with them. Coming back to the slide: Google really remains focused on BigQuery adoption. Customers have complained to me that they would like to use Snowflake with Google's AI tools, but they're being forced to go to BigQuery. I've got to ask Google about that. AWS continues to stitch together its bespoke data stores; it's gone down that "right tool for the right job" path. David Floyer two years ago said, "AWS absolutely is going to have to solve that problem." We saw them start to do it at re:Invent, bringing together zero-ETL between Aurora and Redshift, and really trying to simplify those worlds. There's going to be more of that. And then Microsoft, they're just making it cheap and easy to use their stuff, you know, despite some of the complaints that we hear in the community about things like Cosmos. But Eric, your take? >> Yeah, my concern here is that Snowflake and Databricks are fighting each other, and it's allowing AWS and Microsoft to kind of catch up to them, and I don't know if that's the right move for either of those two companies individually. Azure and AWS are building out functionality. Are they as good? No, they're not.
The other thing to remember too is that AWS and Azure get paid anyway, because both Databricks and Snowflake run on top of them. So (laughing) they're basically collecting their toll while these two fight it out with each other and they build out functionality. I think they need to stop focusing on each other a little bit and think about the overall strategy. Now for Databricks, we know they came out first as a machine-learning AI tool; they were known better for that spot, and now they're really trying to play catch-up on the data storage and compute spot. And inversely for Snowflake, they were killing it with the separation of compute from storage, and now they're trying to get into the ML/AI spot. I actually wouldn't be surprised to see them make some sort of acquisition. Frank Slootman has been a little bit quiet, in my opinion, there. The other thing to mention is your comment about DBT Labs. If we look at our emerging technology survey, the last survey when this came out, DBT Labs was the number one leader in that data integration space. I'm going to just pull it up real quickly. It looks like they had a 33% overall net sentiment to lead data analytics integration. So they are clearly growing; it's the fourth straight survey in which they've grown. The other name we're seeing there a little bit is Cribl, but DBT Labs is by far the number one player in this space.
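To illustrate the "API-ifying KPIs" idea discussed here, below is a minimal sketch of the semantic-layer pattern: a KPI is defined once in business terms and compiled into warehouse SQL on request. This is not dbt's actual API; the names and structure are assumptions made for illustration.

```python
# A rough sketch of a metrics layer: KPIs defined once as business terms,
# then compiled to SQL against the warehouse. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str              # business term, e.g. "revenue"
    expression: str        # how it is computed in the warehouse
    dimensions: tuple      # business dimensions it can be sliced by

METRICS = {
    "bookings": Metric("bookings", "SUM(order_total)", ("region", "month")),
    "revenue": Metric("revenue", "SUM(recognized_amount)", ("region", "month")),
}

def query(metric_name: str, by: str) -> str:
    """Compile a business-level request into warehouse SQL."""
    m = METRICS[metric_name]
    if by not in m.dimensions:
        raise ValueError(f"{metric_name} cannot be sliced by {by}")
    return f"SELECT {by}, {m.expression} AS {m.name} FROM facts GROUP BY {by}"

print(query("revenue", by="region"))
# SELECT region, SUM(recognized_amount) AS revenue FROM facts GROUP BY region
```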
>> All right. Okay, cool. Moving on, let's go to number nine: automation makes a resurgence in 2023. We're showing data again: the x axis is overlap, or presence in the dataset, and the vertical axis is shared net score. Net score is a measure of spending momentum. As always, you've seen UiPath and Microsoft Power Automate up and to the right; that red line, the 40% line, is generally considered elevated. UiPath is really separating, creating some distance from Automation Anywhere; in previous quarters they were much closer. Microsoft Power Automate came on the scene in a big way; they loom large with this "good enough" approach. I will say this: somebody sent me the results of a (indistinct) survey, which showed UiPath actually had more mentions than Power Automate, which was surprising, but I think that's not been the case in the ETR data set. We're definitely seeing a shift from back-office to front-office kinds of workloads. Having said that, software testing is emerging as a mainstream use case, we're seeing ML and AI become embedded in end-to-end automations, and low-code is serving the line of business. And so this, we think, is going to increasingly have appeal to organizations in the coming year who want to automate as much as possible. We've seen a lot of layoffs in tech, and you're going to have to fill the gaps with automation. That's a trend that's going to continue. >> Yep, agreed. First, that comment about Microsoft Power Automate having fewer citations than UiPath, that's shocking to me. I'm looking at my chart right here, where Microsoft Power Automate was cited by over 60% of our entire survey takers, and UiPath at around 38%. Now don't get me wrong, 38% pervasion's fantastic, but you know you're not going to beat an entrenched Microsoft. So I don't really know where that comment came from. UiPath, looking at it alone, is doing incredibly well. It had a huge rebound in its net score this last survey. It had dropped going through the back half of 2022, but we saw a big spike in the last one, so it's got a net score of over 55%. A lot of people are citing adoption and increasing spend. So that's really what you want to see for a name like this. The problem is just that Microsoft is doing its playbook. At the end of the day, if I'm going to do a POC, why am I going to pay more for UiPath, or even take on another separate bill, when we know everyone's consolidating vendors, if my license already includes Microsoft Power Automate? It might not be perfect, it might not be as good, but what I'm hearing all the time is it's good enough, and I really don't want another invoice. >> Right. So how do UiPath, you know, and Automation Anywhere compete with that? Well, the way they compete with it is they've got to have a better product, a product that's 10 times better. You know- >> Right. >> They're not going to compete based on lowest cost, Microsoft's got that locked up, or on ease, you know, Microsoft basically gives it away for free, and that's their playbook. So that's, you know, up to UiPath. UiPath brought on Rob Enslin, I've interviewed him, a very, very capable individual, who is now Co-CEO. So he's kind of bringing that adult supervision in and really tightening up the go-to-market. So, you know, we know this company has been a rocket ship, and so getting some control on that and really getting focused like a laser, you know, could mean good things ahead for that company. Okay. >> One of the problems, if I could real quick, Dave, is what the use cases are. When we first came out with RPA, everyone was super excited, like, "UiPath is going to be great for super powerful projects, use cases." That's not what RPA is being used for. As you mentioned, it's being used for mundane tasks, so it's not automating complex things, which I think UiPath was built for. So if you were going to get UiPath, and choose that over Microsoft, it's going to be 'cause you're doing it for a more powerful use case, where it is better. But the problem is that's not where the enterprise is using it. The enterprise is using this for base rote tasks, and, simply, Microsoft Power Automate can do that. >> Yeah, it's interesting. I've had people on theCube that are both Microsoft Power Automate customers and UiPath customers, and I've asked them, "Well, you know, how do you differentiate between the two?" And they've said to me, "Look, our users and personal productivity users, they like Power Automate, they can use it themselves, and you know, it doesn't take a lot of, you know, support on our end." The flip side is you could do that with UiPath, but like you said, there's more of a focus now on end-to-end enterprise automation and building out those capabilities. So it's increasingly a value play, and that's going to be obviously the challenge going forward. Okay, my last one, and then I think you've got some bonus ones. Number 10: hybrid events are the new category. Look, if I can get a thousand inbounds that are largely self-serving, I can do my own here, 'cause we're in the events business. (Eric chuckling) Here's the prediction, though, and this is a trend we're seeing: the number of physical events is going to dramatically increase. That might surprise people, but most of the big giant events are going to get smaller. The exception is AWS with re:Invent, and I think Snowflake's going to continue to grow.
So there are examples of physical events that are growing, but generally, most of the big ones are getting smaller, and there are going to be many more smaller, intimate regional events and road shows. These micro-events are going to be stitched together. Digital is becoming a first-class citizen, so people really have got to get their digital acts together, and brands are prioritizing earned media; they're beginning to build their own news networks, going direct to their customers. And so that's a trend we see, and we're right in the middle of it, Eric. You mentioned RSA; I think that's perhaps going to be one of those crazy ones that continues to grow. It's shrunk, and then it, you know, 'cause last year- >> Yeah, it did shrink. >> Right, it was the last one before the pandemic, and then they sort of made another run at it last year. It was smaller, but it was very vibrant, and I think this year's going to be huge. Mobile World Congress is another one; we're going to be there end of Feb. That's obviously a big, big show, but in general, the brands and the technology vendors, even Oracle, are going to scale down. I don't know about Salesforce. We'll see. You had a couple of bonus predictions, quantum and maybe some others? Bring us home. >> Yeah, sure. I've got a few more. I think we touched upon one, but I definitely think the data prep tools are facing extinction, unfortunately, you know, the Talends, Informatica, some of those names. The problem there is that the BI tools are kind of including data prep in them already. You know, an example of that is Tableau Prep Builder, and in addition, advanced NLP is being worked in as well. ThoughtSpot and Tellius both often cite that as their selling point, Tableau has Ask Data, Qlik has Insight Bot, so you don't have to really be intelligent on data prep anymore. A regular business user can just self-query, using either the search bar or even just speaking what it needs, and these tools are kind of doing the data prep for it. I don't think that's, you know, an out-in-left-field type of prediction, but the time is nigh. The other one I would also state is that I think knowledge graphs are going to break through this year. Neo4j in our survey is growing in pervasion and mindshare. So more and more people are citing it, AWS Neptune's getting its act together, and we're seeing that spending intentions are growing there. TigerGraph is also growing in our survey sample. I just think that the time is now for knowledge graphs to break through. And if I had to do one more, I'd say real-time streaming analytics moves downstream from the very, very rich big enterprises; more people are actually going to be moving towards real-time streaming, again, because the data prep tools and the data pipelines have gotten easier to use, and I think the ROI on real-time streaming is obviously there. So those are three that didn't make the cut, but I thought they deserved an honorable mention. >> Yeah, I'm glad you did. Several weeks ago, we did an analyst prediction roundtable, if you will, a cube session power panel with a number of data analysts, and, you know, real-time streaming was top of mind. So glad you brought that up. Eric, as always, thank you very much. I appreciate the time you put in beforehand. I know it's been crazy, because you guys are wrapping up, you know, the last quarter survey- >> Been a nuts three weeks for us. (laughing) >> Great job.
I love the fact that you're doing, you know, the ETS survey now; I think it's quarterly now, right? Is that right? >> Yep. >> Yep. So that's phenomenal. >> Four times a year. I'll be happy to jump on with you when we get that done. I know you were really impressed with that last time. >> It's unbelievable. There is so much data at ETR. Okay. Hey, that's a wrap. Thanks again. >> Take care, Dave. Good seeing you. >> All right, many thanks to our team here. Alex Myerson is on production and manages the podcast for us. Ken Schiffman as well is a critical component of our East Coast studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief at siliconangle.com; he does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts; wherever you listen, the podcast is doing great. Just search "Breaking Analysis podcast." Really appreciate you guys listening. I publish each week on wikibon.com and siliconangle.com, or you can email me directly if you want to get in touch, at david.vellante@siliconangle.com. That's how I got all these. I really appreciate it. I went through every single one with a yellow highlighter. It took some time, (laughing) but I appreciate it. You can DM me at @dvellante, or comment on our LinkedIn posts, and please check out etr.ai. Its data is amazing, the best survey data in the enterprise tech business. This is Dave Vellante for theCube Insights, powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis." (upbeat music beginning) (upbeat music ending)
Breaking Analysis: Even the Cloud Is Not Immune to the Seesaw Economy
>> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR, this is Breaking Analysis with Dave Vellante. >> Have you ever been driving on the highway and traffic suddenly slows way down, and then after a little while it picks up again, and you're cruising along and you're thinking, "Okay, hey, that was weird, but it's clear sailing now, off we go," only to find out in a bit that the traffic is building up ahead again, forcing you to pump the brakes as the traffic pattern ebbs and flows? Well, welcome to the seesaw economy. The Fed-induced fire that prompted an unprecedented rally in tech is being purposefully extinguished now by that same Fed, and virtually every sector of the tech industry is having to reset its expectations, including the cloud segment. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this breaking analysis we'll review the implications of the earnings announcements from the big three cloud players, Amazon, Microsoft, and Google, who announced this week. >> And we'll update you on our quarterly IaaS forecast and share the latest from ETR, with a focus on cloud computing. Now, before we get into the new data, we want to review something we shared with you on October 14th, just a couple weeks back. This is sort of a "we told you it was coming" slide. It's an XY graph that shows ETR's proprietary net score methodology on the vertical axis, that's a measure of spending momentum, spending velocity, and an overlap, or presence in the dataset, on the X axis; that's really a measure of pervasiveness in the survey. The table insert there shows Wikibon's Q2 estimates of IaaS revenue for the big four hyperscalers with their year-on-year growth rates. Now, we told you at the time that this was data from the July 2022 ETR survey, and ETR hadn't released its October survey results at that time. >> This was just a couple weeks ago. And while we couldn't share the specific data from the October survey, we were able to get a glimpse, and we depicted the slowdown that we saw in the October data with those dotted arrows, kind of down and to the right. We said at the time that we were seeing an across-the-board slowdown, even for the big three cloud vendors. Now, fast-forward to this past week, and we saw earnings releases from Alphabet, Microsoft, and, just last night, Amazon. Now you may be thinking, okay, big deal, the ETR survey data didn't really tell us anything we didn't already know. But judging from the negative reaction in the stock market to these earnings announcements, the degree of softness surprised a lot of investors. Now, at the time we didn't update our forecast; it doesn't make sense for us to do that when we're that close to earnings season.
Now, the second column there, after the vendor name, shows our previous estimates for Q3, and next to that we show our actuals. Same with the growth rates. And then we round out the chart with that lighter blue color that highlights the full-year estimates for revenue and growth.
>>So the key takeaways are that we shaved about $4 billion in revenue and roughly 300 basis points of growth off of our full-year estimates. AWS had a strong July but exited Q3 in the mid-20% growth rate year over year, so we're using that guidance, you know, for our Q4 estimates. Azure came in below our earlier estimates, but Google actually exceeded our expectations. Now, the compression in the numbers is, in our view, a function of the macro demand climate. We've made every attempt to adjust for constant currency, so FX should not be a factor in this data, but you can be sure that the currency effects are weighing on those companies' income statements. And so look, this is the fundamental dynamic of a cloud model, where you can dial down consumption when you need to and dial it up when you need to.
>>Now, you may be thinking that many big cloud customers have a committed level of spending in order to get better discounts, and that's true. But what's happening, we think, is they'll reallocate that spend toward, let's say, lower-cost storage tiers, or they may take advantage of better price-performance processors like Graviton, for example. That is a clear trend that we're seeing. And smaller companies that were perhaps paying by the drink, just on demand, are moving to reserved-instance models to lower their monthly bills. So instead of taking the easy way out and just spending more, companies are reallocating their reserved capacity toward lower-cost services. They're spending time and effort optimizing to get more for less, or get more for the same, is really how we should phrase it. Whereas during the pandemic, many companies were, you know, perhaps not as focused on doing that, because business was booming and they had to respond.
>>So they just, you know, spent more, dialed it up. So in general, as they say, customers are doing more with the same. Now let's look at the growth dynamic and spend some time on that; I think this is important. This data shows worldwide quarterly revenue growth rates back to Q1 2019 for the big four. So, a couple of interesting things. The data tells us that during the pandemic both AWS and Azure bucked the law of large numbers and actually accelerated growth. AWS especially saw progressively increasing growth rates throughout 2021, quarter after quarter. Now that trend, as you can see, has reversed in 2022 for AWS. We saw Azure come down a bit, but it's still in the low forties in terms of percentage growth, while Google actually saw an uptick in growth this last quarter for GCP, by our estimates, as GCP becomes an increasingly large portion of Google's overall cloud business.
>>Now, unfortunately, Google Cloud continues to lose north of $850 million per quarter, whereas AWS and Azure are profitable cloud businesses, even though Alibaba is suffering its woes from China. And we'll see how they come in when they report in mid-November. The overall hyperscale market grew at 32% in Q3 in terms of worldwide revenue. So the slowdown isn't due to repatriation or competition from on-prem vendors, in our view; it's a macro-related trend.
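On the reserved-instance point a couple of paragraphs up, a quick back-of-the-envelope sketch shows why steady workloads get moved off on-demand pricing. The hourly rates below are made-up placeholders, not actual cloud pricing:

    # Hypothetical hourly rates for the same instance type (not real pricing).
    on_demand_rate = 0.40   # $/hour, pay by the drink
    reserved_rate = 0.25    # $/hour effective, with a 1-year commitment

    hours_per_month = 730
    instances = 20

    on_demand_monthly = on_demand_rate * hours_per_month * instances
    reserved_monthly = reserved_rate * hours_per_month * instances

    # A steady workload saves roughly 38% here just by committing, which is
    # the kind of optimization customers reach for before spending more.
    print(f"on-demand: ${on_demand_monthly:,.0f}/mo  reserved: ${reserved_monthly:,.0f}/mo")
    print(f"savings: {1 - reserved_monthly / on_demand_monthly:.0%}")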
And cloud will continue to significantly outperform other sectors despite its massive size. You know, on the repatriation point, it just still doesn't show up in the data. The a16z article from Sarah Wang and Martin Casado claimed that repatriation was inevitable as a means to lower cost of goods sold for SaaS companies. And, you know, while that was thought-provoking, it hasn't shown up in the numbers. If you read the financial statements of both AWS and its partners like Snowflake, and you dig into the quarterly reports, you'll see little notes and comments about their ongoing negotiations to lower cloud costs for customers.
AWS, and no doubt execs at Azure and GCP, understand that the lifetime value of a customer is worth much more than near-term gross margin, and you can expect the cloud vendors to strike a balance between profitability (near-term profitability, anyway) and customer retention. Now, even though Google Cloud Platform saw accelerated growth, we need to put that in context for you. So GCP, by our estimate, has now crossed over the $3 billion per quarter mark (it actually did so last quarter), but its growth rate accelerated to 42% this quarter, and that's a good sign in our view. But let's do a quick little comparison with when AWS and Azure crossed the $3 billion mark and compare their growth rates at the time. If you go back to Q2 2016, as we're showing in this chart, that's around the time that AWS hit $3 billion per quarter, and at the same time it was growing at 58%.
Azure, by our estimates, crossed that mark in Q4 2018, and at that time it was growing at 67%. Again, compare that to Google's 42%. One would expect Google's growth rate to be higher than its competitors' at this point in the maturity of its cloud, which it's really not when you compare it to Azure. I mean, they're kind of comparable now, today, but go back, you know, to that $3 billion mark. Looking at history, you'd like to see its growth rate at this point of the maturity model at least over 50%, which we don't believe it is. And one other point on this topic: you know, my business friend Matt Baker from Dell often says it's not a zero-sum game, meaning plenty of opportunity exists to build value on top of hyperscalers.
And I would totally agree; it's not a dollar-for-dollar swap if you can continue to innovate. But history will show that the first company in makes the most money, number two can do really well, and number three tends to break even. Now, maybe cloud is different, because you have Microsoft's software estate and the power behind that driving its IaaS business, and Google ads are funding technology buildouts for Google and GCP. So, you know, we'll see how that plays out. But right now, by this one measurement, Google is four years behind Microsoft and six years behind AWS. Now, to the point that cloud will continue to outpace other markets, let's break this down a bit in spending terms and see why this claim holds water. This is data from ETR's latest October survey that shows the granularity of its Net Score, or spending velocity, metric.
The lime green is new adoptions, meaning they're adding the platform. The forest green is spending more, 6% or more. The gray bars are spending that's flat, plus or minus, you know, 5%. The pinkish colors represent spending less, down 6% or worse. And the bright red shows defections, or churn, of the platform.
You subtract the reds from the greens and you get what's called Net Score, which is that blue dot that you can see on each of the bars. So what you see in the table insert is that all three have Net Scores above 40%, which is a highly elevated measure. Microsoft's Net Score is above 60%, AWS is well into the fifties, and GCP is in the mid-forties. So, all good. Now, what's happening with all three is more customers are keeping their spending flat. So a higher percentage of customers are saying their spending is now flat than in previous quarters, and that's what's accounting for the compression.
But the churn of all three, even GCP, which we reported from last quarter's survey was five times the other two, is actually very low, in the single digits, so that might have been an anomaly. That's a very good sign in our view. You know, again, customers aren't repatriating in droves. It's just not a trend that we would bet on; maybe it makes for FUD or, you know, a good marketing headline, but it's just not a big deal. And you can't help but be impressed with both Microsoft's and AWS's performance in the survey. As we mentioned before, these companies aren't going to give up customers to try and preserve a little bit of gross margin. They'll do what it takes to keep people on their platforms, because they'll make up for it over time with added services and improved offerings.
Now, once these companies acquire a customer, they'll be very aggressive about keeping them. So customers, take note: you have negotiating leverage, so use it. Okay, let's look at another cut at the cloud market from the ETR dataset. Here's the two-dimensional view again; it's back, it's one of our favorites: Net Score, or spending momentum, on the vertical axis, plotted against presence in the dataset on the X axis. This is a view of ETR's cloud computing sector. You can see we put that magic 40% dotted red line in, and the table insert shows how the data are plotted, with Net Score against presence, i.e., the N in the survey. Notably, only the big three are above the 40% line of the names that we're showing here. There are others.
I mean, if you put Snowflake on there, it'd be higher than any of these names, but we'll dig into that name in a later Breaking Analysis episode. Now, this is just another way of quantifying the dominance of AWS and Azure, not only relative to Google but to the other cloud platforms out there. So we've taken the opportunity here to plot IBM and Oracle, which both own a public cloud. Their performance is largely a reflection of them migrating their install bases to their respective public clouds and/or hybrid clouds. And, you know, that's fine; they're in the game. That's a point that we've made a number of times: they made it to the cloud, not whole, but they at least have one. They simply don't have the business momentum of AWS and Azure, which is actually quite impressive, because AWS and Azure are now as large as or larger than IBM and Oracle.
And to show this type of continued growth, as Azure and AWS do at their size, is quite remarkable. And customers are starting to recognize the viability of on-prem, you know, hybrid clouds like HPE GreenLake and Dell's APEX. You may say, well, that's not cloud, but if the customer thinks it is, and is reporting in the survey that it is, we're gonna continue to report this view.
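To pin down the Net Score arithmetic described above, here is a minimal sketch of the calculation. The survey shares used in the example are hypothetical, not actual ETR data:

    # Hypothetical survey shares for one vendor (percent of respondents).
    # Net Score = (new adoptions + spending up) - (spending down + defections);
    # flat spenders count toward the base but not the score.
    def net_score(new_adoption, spend_up, flat, spend_down, defection):
        total = new_adoption + spend_up + flat + spend_down + defection
        assert abs(total - 100.0) < 1e-6, "shares should sum to 100%"
        return (new_adoption + spend_up) - (spend_down + defection)

    # Example: 10% new, 45% up, 35% flat, 7% down, 3% churn -> Net Score of 45
    print(net_score(10, 45, 35, 7, 3))

This also makes the compression mechanism visible: if customers migrate from the "spending up" bucket to the "flat" bucket, the Net Score drops even with zero churn.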
You know, I don't know what's happening with HPE. They had a big downtick this quarter, and I don't read too much into that, because their N is still pretty small, at 53. So big fluctuations are not uncommon with those types of smaller Ns, but it's over 50. We did notice a negative within Giant Public and Private, which is often a bellwether sector; Giant Public and Private means big public companies and large private companies, like a Mars, for example.
So, you know, it looks like for HPE it could be an outlier. We saw within the Fortune 1000 that HPE's cloud actually looked really good, and it had good spending momentum in that sector. When you dig into the industry data within the ETR dataset (obviously we're not showing that here), we'll continue to monitor that. Okay, so where does this leave us? Well, look, this is really a tactical story of currency and macro headwinds, as you can see. We've laid out some of the points on this slide. The action in the stock market today, which is Friday, after some of the soft earnings reports, is really robust. You know, we'll see how it ends up on the day. So maybe this is a sign that the worst is over, but we don't think so. The visibility from tech companies is murky right now, as most are guiding down, which indicates that their conservative outlook last quarter was still too optimistic.
But as it relates to cloud, that platform is not going anywhere anytime soon. Sure, there are potential disruptors on the horizon, especially at the edge, but we're still a long ways off from the possibility that a new economic model emerges from the edge to disrupt the cloud, and the opportunities in the cloud remain strong. I mean, what other path is there? Really, private cloud? It was kind of a bandaid until the on-prem guys could get their as-a-service models rolled out, which is just now happening. The hybrid thing is real, but it's, you know, defensive for the incumbents until they can get their supercloud investments going, supercloud implying capturing value above the hyperscaler CapEx. You know, call it what you want: multi-cloud (what multi-cloud should have been), the metacloud, the Ubercloud, whatever you like. But there are opportunities to play offense, and that's clearly happening in the cloud ecosystem with the likes of Snowflake, Mongo, HashiCorp.
Hammerspace is a startup in this area. Aviatrix, CrowdStrike, Zscaler, Okta, many, many more. And even the projects we see coming out of enterprise players like Dell, with Project Alpine, and what Pure Storage is doing, along with a number of the other backup vendors. So Q4 should be really interesting. But the real story is that the investments companies are making now to leverage the cloud for digital transformations will be paying off down the road. This is not 1999. We might have had some good ideas back then, and admittedly a lot of bad ones too, but you didn't have the infrastructure to service customers at a low enough cost like you do today. The cloud is that infrastructure, and so far it's been transformative, but it's likely the best is yet to come. Okay, let's call this a wrap.
Many thanks to Alex Myerson, who does production and manages the podcast. Also, Ken Schiffman is our newest addition to the Boston studio. Kristin Martin and Cheryl Knight helped get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at siliconangle.com, who does some wonderful editing for us. Thank you.
Remember, all these episodes are available as podcasts; wherever you listen, just search "Breaking Analysis podcast." I publish each week on wikibon.com and siliconangle.com. And you can email me at david.vellante@siliconangle.com, DM me at @dvellante, or comment on my LinkedIn posts. And please do check out etr.ai; they've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.
SUMMARY :
From the Cube Studios in Palo Alto in Boston, bringing you data driven insights from Have you ever been driving on the highway and traffic suddenly slows way down and then after In the survey, the table, you see that table insert there that Now, at the time we didn't update our forecast, it doesn't make sense for us And now that all the big three ha with all the big four with the exception of Alibaba have announced So we're using that guidance, you know, for our Q4 estimates. Whereas during the pandemic, many companies were, you know, they perhaps were not as focused So they just, you know, spend more dial it up. So the slowdown isn't due to the repatriation or And you can expect the cloud And one other point on this topic, you know, my business friend Matt Baker from Dell often says it's not a And I would totally agree it's not a dollar for dollar swap if you can continue to So what you see in the table insert is that all three have net scores But the churn of all three, even gcp, which we reported, you know, And the data set, that's the x axis net score on the, That's a point that we've made, you know, a number of times they're able to make it through the cloud, the viability of on-prem hi, you know, hybrid clouds like HPE GreenLake and Dell's So it, you know, it looks like for HPE it could be an outlier. off from, from the possibility that a new economic model emerges from the edge to And even the projects we see coming out of enterprise And you can email me at David dot valante@siliconangle.com or DM me at Dante
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Alex Myerson | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Alphabet | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Rob Hof | PERSON | 0.99+ |
Cheryl Knight | PERSON | 0.99+ |
Matt Baker | PERSON | 0.99+ |
October 14th | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
October | DATE | 0.99+ |
$3 billion | QUANTITY | 0.99+ |
Sarah Wang | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
42% | QUANTITY | 0.99+ |
32% | QUANTITY | 0.99+ |
Friday | DATE | 0.99+ |
1999 | DATE | 0.99+ |
40% | QUANTITY | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
5% | QUANTITY | 0.99+ |
six years | QUANTITY | 0.99+ |
3 billion | QUANTITY | 0.99+ |
2022 | DATE | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
last quarter | DATE | 0.99+ |
67% | QUANTITY | 0.99+ |
Martin Casado | PERSON | 0.99+ |
Kristin Martin | PERSON | 0.99+ |
Aviatrix | ORGANIZATION | 0.99+ |
July | DATE | 0.99+ |
CrowdStrike | ORGANIZATION | 0.99+ |
58% | QUANTITY | 0.99+ |
four years | QUANTITY | 0.99+ |
Okta | ORGANIZATION | 0.99+ |
second column | QUANTITY | 0.99+ |
Zscaler | ORGANIZATION | 0.99+ |
2021 | DATE | 0.99+ |
last quarter | DATE | 0.99+ |
each week | QUANTITY | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Project Alpine | ORGANIZATION | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
mid forties | DATE | 0.99+ |
HashiCorp | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
mid-November | DATE | 0.99+ |
today | DATE | 0.99+ |
each | QUANTITY | 0.99+ |
Azure | ORGANIZATION | 0.99+ |
about $4 billion | QUANTITY | 0.98+ |
Purnima Padmanabhan | VMware Explore 2022
>>Welcome back everyone to theCUBE's live coverage here in San Francisco for VMware Explore. I'm John Furrier with Dave Vellante: three days of wall-to-wall coverage, two sets, live events. We've got Purnima Padmanabhan, senior vice president and general manager of cloud management at VMware. I got it right. Thanks for coming on theCUBE. >>You got it right. Good to >>Be here. We're all smiles, 'cause we were talking about your history. You once worked at Loudcloud, and we were reminiscing about how cloud was before cloud was even cloud. Exactly. And how hard it was. >>And >>It's still hard. Complexity is a big deal. And one of the segments we want to talk to you about is the announcement around Aria. You see cloud management as a big part of this direction to multi-cloud, yes, to tame the complexity. And you know, we were quoting Andy Grove on theCUBE: let chaos reign, and then rein in the chaos. Exactly. Okay, a very famous quote in tech, and the theme here is cloud chaos. Yes. And so we're starting to see signs of reining in that chaos, of solving complexity. And every major inflection point has this moment where, yes, it gets so hard, and then it kicks up to the right and grows, and the complexity gets solved. So we feel like we're in that moment. >>I couldn't agree more. And in fact, the way I say it, our tagline, is that we make the complexity of managing cloud invisible so that you can focus on building your business apps. And you're right about the inflection point. Every time a new technology hits, you have some point of adoption, and then it becomes insanely successful. And that's when the complexity hits; then you go and tame the complexity till the next technology hits, right? That's what happens. It happened with virtualization, then it happened with cloud, then with containerization, and now the next one will hit. And so with Aria we said we have to fundamentally change the problem, right? We are constantly running a race of taming this complexity. So, very excited about this announcement, which we're doing with Aria. We said: imagine if I could have a view of my environment and all the dependencies. I don't need to know everything, just the environment and its dependencies. Then I can now start solving problems and answering questions that I was unable to before. And newer technologies can keep coming and piling on, but I'll always be able to answer that. >>Help our audience understand Aria, a great name, and what's new. It's not just vRealize with a new name; what's new specifically? >>Yeah. >>Explain it, because >>There have been some snarky comments, but it's a product, it's not a rebrand of something >>Else. Right. It's not; explain that. >>Yeah. So let me start off with why we started Aria. We said, okay, managing native public cloud environments and cloud native applications is a different ballgame: more ephemeral workloads, very large scale, highly fragmented data. So we looked at that problem ground-up and said we need a management solution that solves that problem, focused on native public cloud and cloud native apps. And the core to solving that problem was this: you can't just solve it for one cloud, and you can't solve it for one discipline. When I say discipline: when you think about management, what do you manage? You're managing to optimize cost. You're managing to optimize performance.
You're managing to optimize your security, and you're managing to speed up delivery. That is it. And so we said we'll take a new look at this management. What we have done with Aria is introduce a brand new platform, which we call Aria Hub, powered by Aria Graph, which allows you to address these management challenges by creating a map of your environment, a near-real-time map of your environment. And then, once we know what an application looks like and how it maps to the infrastructure, we can go and query other subsystems to tell you: what is the cost of an application? What is the performance of an application? Creating a common understanding. >>So this is a new architecture. >>I just wanted to get that out there. It's federated. >>A new graph database. >>Yes. It's a new architecture, federated: a platform that not only gives you a map of your environment but federates into other sources to pull that data together. Right now, one of the data sources that it federates into is of course also vRealize. Yeah, CloudHealth, >>You plug in >>Cloud observability. You plug everything into it. Yeah. And as part of the announcement, we didn't just announce a platform. We also announced a set of cross-cutting solutions, because we said, okay, what is the power of the platform? The big thing is it removes the swivel-chair management. It allows you to answer questions you couldn't answer before. And so >>Swivel chair meaning going from one app to another app, logging in, >>Exactly: credentials and credentials. And you don't have a common understanding of an app across those. So now you hire people who do integration buses, right? All kinds of cloud. So, the three new end-to-end solutions we are announcing along with the platform, these are brand new. One is something called Aria Guardrails. When I have development environments today, for example (I do development on public cloud as well as private cloud), I have thousands of accounts, each one with its own security rules, each one with its own policies. After I initially deploy the account, it becomes a nightmare to manage that. So what Aria Guardrails allows you to do is set up these multi-cloud environments with the right policies. And not only is it about one-time provisioning, but it is maintaining them on >>An ongoing basis. And those credentials are also a risk, 'cause you have a password on the dark web that's exposed on one cloud, and you've got to change it. And there's so many things going on with security, which brings me to the point: you know, we're gonna see Tom later on security. We heard earlier, why wasn't security in the keynote? Oh, it's table stakes; that's what Z said. But we're like, okay, I get that. So let's just say that security is table stakes. There's a big trend towards security as a state of something at a given time, and CISOs and CSOs are moving to defensible, yes, meaning being defensible all the time, as an ongoing thing, which is not just running a pen test once a week. Like continuous testing, real testing, not simulation. To be secure. So it's not about being secure; it's about having security, and defensibility is the action now. >>Can you explain how does that fit into this? Because this seems to be in the wheelhouse of management.
You want to bake in security. This is the shift-left of security that we talk about. When you're building an application and you are deploying code in your test, you wanna say: hey, what is the security posture? Is it secure? Is it meeting my guardrails? Then, when you deploy it, from an operations perspective it is also a security concern; it's not just a security team's concern now. So: is my configuration right? Is my configuration secure? Is it drifting? It's never a snapshot in time. You constantly have to look at it: is it drifting? And that is exactly what we are doing with Aria. >>So that's part of the solution you're talking about, the guardrails, being >>Able to maintain the secure configuration. Right. Now, as I said, there's always a security discipline that is owned by security teams, but you also want operations teams and development teams to enforce security in their respective practices. And that's what Aria allows you to do. >>So the question on multi-cloud comes in, okay? This is all good. By the way, we love that shift-left; again, very developer. And I would argue, actually, we do argue on theCUBE, that DevOps is the development environment for cloud native. So the IT operations once called ops is now in dev, just saying. And then DataOps and SecOps are now the new IT, because that's where the hard problems are. So how do you look at the data side of it, as well as security, in your view of multi-cloud? Because, you know, with hybrid cloud I can see the steady state between on-premises and cloud, if it's operating cloud-like. But now you're starting to look at spanning clouds. Yes. Not fully spanning workloads, that's not there yet, but certainly people have multiple clouds. Yeah. And data seems to be the first thing spanning, not necessarily the app itself. How do you guys view that multi-cloud aspect of what you're managing? >>I think there are different angles to it, right? You can look at it from the data angle, and you look at how protected the data is. For us, when you look at the management discipline, it is all from the perspective of configurations. Okay: if I have configured my environment correctly, then you should not be able to do something that destroys or exposes the data, right? So: getting the configuration right when you're developing, getting the configuration right when you're provisioning the app, and then getting the configuration right even in day-two and ongoing operations; that is what we bring to the table. And to some extent, that Aria visibility I was talking about, the Aria Graph near-real-time view of the configuration state and its dependencies, is very critical. So now I can ask questions: is there a misconfiguration? By the way, the answer is yes. >>That is a lot, by the way, too, right? Yeah. >>Which exposes me. And then you can say: hey, is there user activity associated with that misconfigured object? Now suddenly you go to a red alert. So not only is something misconfigured, but there is user activity associated with the misconfigured data. >>This is where AI sings beautifully, because once you have the configuration baseline done, yes, it's like securing the S3 bucket: it has to be like brushing your teeth. It's gotta be a habit. Exactly. You just don't even think about it; you just don't leave an S3 bucket open.
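Since configuration baselines and drift carry much of that answer, here is a minimal sketch of what a guardrail-style drift check boils down to: desired policy versus observed configuration. All the keys and values are invented for illustration; no real guardrail product or API is being modeled:

    # Desired-state policy for an account (hypothetical keys and values).
    policy = {"s3_public_access": "blocked", "encryption_at_rest": "enabled"}

    # Observed configuration pulled from the environment (also hypothetical).
    observed = {"s3_public_access": "allowed", "encryption_at_rest": "enabled"}

    # Drift = any observed value that no longer matches the policy.
    drift = {k: (policy[k], observed.get(k)) for k in policy if observed.get(k) != policy[k]}
    for key, (want, got) in drift.items():
        print(f"DRIFT {key}: want={want} got={got}")  # feed this into a red alert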
>>It's gotta be simplified, because we're asking the devs now to be security pros, correct? Secure the runtime, secure the paths, you know, secure the containers. And so they need help. This is not what they wake up in the morning passionate about, right? >>But that is where the guardrails come in. Totally. A developer: why should they care? They should just say, look, I'm developing for the credit card industry, I need a PCI-compliant environment. And then let us take care of defining that environment, deploying that environment, and managing that environment on an ongoing basis. They should be building code. Yeah, right. But there is a change also, which is that in the past these were like two different islands and two different views. With Aria Graph, we also have created this unified API that a developer could query, or an ops person could query, to create a common understanding of the environment. So you're not looking at, you know, the elephant, one holding the trunk and the other one the tail; you're looking at it in a common way. >>Can you talk about the collaboration between the Tanzu and Aria portfolios? Because obviously the VMware customers are investing in Tanzu. A lot of stuff's coming outta the oven. Dave heard some stuff from Chris Wolf, and he's gonna come on tomorrow. And Raghu was hinting at some other stuff that's not yet public. But, you know, these things are happening. >>Things happening, lots of >>Things, you know. Announcements happened years ago, last year; now some fruit's coming off the tree. This is a hot product, Aria. It makes a lot of sense for the customers. Where's the cloud native stuff connecting in? Give us the overview: what's the connection? >>There are lots and lots of connections. So you have a beautiful Kubernetes environment and a cloud native platform. You have accelerated app development. Now you're building more apps: more microservices-based apps, more fragmented data, more information. So think of Aria as an envelope around all of this. Wherever you are, whether you are building an application, deploying an application, managing an application, or retiring an application, through that life cycle we can bring that management. So what we are doing with Tanzu, with the developer platform, is hooking in management with a common perspective earlier in the life cycle. I don't have to wait for it to go to production to start asking: is it secure? Is it configured? How is it performing? What is my cost trade-off? As a developer, say I've decided to fix a latency issue: I'm gonna add a new region, or I'm gonna scale out a particular tier. Do I know how much it'll cost me? Can I give you that right at your fingertips, potentially even within the development platform and within the IDE? That's the power, right? So bringing Aria... >>Not a lot of heavy lifting on the developer. So it's pretty much almost like a query to a database, or >>A simple API that they can just query as part of their development process. Yeah. So by bringing Aria and Tanzu together, really Aria enveloping Tanzu, you're able to bring that power to the >>Developer. I just always smile, because I remember we have a group called the Cloudarati, the early OG cloud folks. >>Cloudarati. >>The early days of cloud, when we were talking about infrastructure as code. Yes, way back when. And finally it's actually happening.
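On the unified API point just above: to picture what querying such a map could look like, here is a tiny, purely hypothetical sketch of a federated lookup over an application-to-infrastructure graph. None of these names reflect VMware's actual Aria Graph API; this is only the shape of the idea:

    # Hypothetical in-memory stand-in for a federated dependency graph.
    # Each app maps to infrastructure nodes; cost and security data come
    # from separate (federated) sources keyed by the same node identity.
    app_to_nodes = {"checkout": ["vm-101", "k8s-pod-7", "s3-bkt-3"]}
    cost_source = {"vm-101": 212.0, "k8s-pod-7": 54.5, "s3-bkt-3": 9.9}  # $/month
    findings_source = {"s3-bkt-3": ["public-read enabled"]}

    def describe(app):
        nodes = app_to_nodes[app]
        cost = sum(cost_source.get(n, 0.0) for n in nodes)
        findings = {n: findings_source[n] for n in nodes if n in findings_source}
        return {"app": app, "monthly_cost": cost, "security_findings": findings}

    # One query yields one common understanding of the app across disciplines.
    print(describe("checkout"))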
So what you're describing is infrastructure as code, because now there's more complexity happening under the hood, and, you know, services are being turned on and off automatically. Exactly. And sometimes you might not even know what's going on. Exactly. If you have guardrails... >>But you have to discover the state, know something has turned on, understand the implication, and then synthesize it down to the insight for the user. >>You know, a lot of people have been complaining about older companies, like the Splunks of the world, who have great logging technology for gen-one cloud; but now this new logging becomes a problem. Can you talk about how you guys are handling that? Give confidence, or, yeah, explain that everything's gonna be logged properly. >>So really, look, there are three disciplines that we have in management. Ultimately there are thousands of names, but it boils down to this: you're managing the cost, you're managing the security, and you're managing the performance of your applications. That is it, right? What we found is that when you think of these disciplines as siloed solutions, you can't ask a simple question like: what is my cost-performance trade-off? You can't ask a simple question like: hey, I'm improving performance, what is the implication for security? And that's when you start building complex solutions that say, okay, let me collect logs from here, let me collect this from there, then let me correlate and normalize an application definition and tell you something, and then put it in a spreadsheet, finally, for manual work. Exactly. So one of the pillars is about managing performance. >>We have very powerful capabilities today in our portfolio: Tanzu Observability, which is part of the Aria portfolio; vRealize Log Insight, which is part of the Aria portfolio; Network Insight; and Operations. When you have a common language, we understand each other. Similarly, with Aria Graph and Aria Hub, we are creating this common language. Once we create a common language, all the various observability and log solutions have a meaning; they have relevance. And so we are able to take the noise from all these systems and synthesize it down to what we call business insights. And that's one of the big announcements as part of Aria: take data, which we have lots of, and convert it to information. >>Give us the bumper sticker on why VMware. >>Well, I'll tell you: when you talk about the various public clouds, each public cloud has its native solutions. I've got Control Tower, I've got CloudWatch, CloudTrail, different solutions. And some of the hyperscalers are also expanding their solutions to other clouds. I think VMware, from a multi-cloud perspective, is in a wonderfully neutral position. Not only do we have a wealth of technology and assets that we can bring to the game, but we can also do it evenly across all clouds. So look at something like cost. Do you trust one of the hyperscalers to tell you what the cost comparison is between them and another hyperscaler? That is where the VMware value comes in. >>People just try to figure out the cost of one cloud. Exactly, exactly. And people make money doing that; it's a job. >>No, definitely. Even in a single cloud, what is the cost? >>There's a cloud economist out there, and we know who he is: Corey Quinn, a friend of theCUBE. He does it for a living.
So help people figure out their bill. Exactly. Just on one cloud. >>Exactly, on one cloud. So we are in a unique position, with the right sets of technologies and experiences, to bring that solution to bear across multi-cloud. Right. Great. >>What's your vision, real quick, one minute left. What's your vision for the group? What are you investing in? What are your goals? What are you trying to do with the products you're gonna roll out? What's the plan? >>Really, again, the biggest one is the tagline I talked about, right? I'm telling customers: managing stuff is boring. Don't waste your time on it. Let us take care of it, right? So, make the cloud complexity invisible so that you can focus on building your applications. And everything that we do in the business unit is targeted towards that one goal. It is not about doing more features, more capabilities. It's: are you solving customers' questions? And we start from the question down. >>Well, thank you for spending your valuable time here in theCUBE, explaining the new news. Appreciate it. All right, get lunch in. After the short break, stay with us for more from theCUBE, live here in San Francisco for VMware Explore '22. I'm John, that's Dave. >>Thank you.
SUMMARY :
Thanks for coming on the queue. You got it right. Cause we were talking about your history. And one of the segments we want to talk And that's when the complexity hits, then you go and Your Heka what's new from, you know, it's not just V V realize with a new name what's what's No. Well, core to solving that problem was you can't just solve it for one cloud or you can't I just wanted to get that out there. that not only gives you a map of your environment, but it federates into other sources to pull And as part of the announcement, So what aria guardrails allows you to do is set up these multi-cloud And that CSOs and CSOs are going to Because this seems to like be in this wheelhouse of management. And that is exactly what we are doing also with aria. And that's what Ari allows you to do. I can see the steady state between, you know, on premises and cloud, if it's operating cloudlike but So getting the configuration right. That is a lot by the way, too, right? And then you can say, Hey, is there user activity associated It's like securing the S3 bucket, which is like a knee has to be a like brushing your teeth. secure the paths, you know, secure the containers. look, I'm developing for the credit card industry. That's not yet public, but you know, this things happening, Things, you know, you know, announcements happened years ago last year. So you have a beautiful Kubernetes environment and a cloud Not a lot of heavy lifting on the develop. So by bringing aria and Tansu and really aria en developing Tansu right. AATI the early OG And sometimes you might not even know what's going on. But you have to discover the state, know something has turned on, understand the implication and Can you talk about how you guys are handling that? So what we found is when you think And so we are able to take the noise from all these systems and trust one of the hyperscalers to tell you that what is the cost comparison between them and I think people just try to hear what the cost of one cloud. What is the cost? Corey Corey, a friend of the cube. and the right sets of technologies and experiences to bring that solution to bear across multicloud. What are you investing in? So make the cloud complexity invisible so that you can focus on building your applications Be thank you for spending your valuable time here in the cube, explaining the new news.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris Wolf | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Andy Grove | PERSON | 0.99+ |
Purnima Padmanabhan | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Corey Quinn | PERSON | 0.99+ |
AATI | ORGANIZATION | 0.99+ |
One minute | QUANTITY | 0.99+ |
Two sets | QUANTITY | 0.99+ |
Raghu | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
two different islands | QUANTITY | 0.99+ |
John | PERSON | 0.98+ |
one cloud | QUANTITY | 0.98+ |
two days | QUANTITY | 0.98+ |
three disciplines | QUANTITY | 0.98+ |
Metabo | PERSON | 0.98+ |
One | QUANTITY | 0.98+ |
one app | QUANTITY | 0.98+ |
years ago | DATE | 0.98+ |
Tanzu | ORGANIZATION | 0.97+ |
once a week | QUANTITY | 0.97+ |
single cloud | QUANTITY | 0.97+ |
each one | QUANTITY | 0.97+ |
tomorrow | DATE | 0.97+ |
Tom | PERSON | 0.96+ |
each | QUANTITY | 0.96+ |
one goal | QUANTITY | 0.96+ |
two different views | QUANTITY | 0.96+ |
S3 | COMMERCIAL_ITEM | 0.96+ |
Ari | PERSON | 0.95+ |
Dave Vellante | PERSON | 0.95+ |
last year | DATE | 0.94+ |
three days | QUANTITY | 0.94+ |
thousands of names | QUANTITY | 0.94+ |
first thing | QUANTITY | 0.94+ |
today | DATE | 0.93+ |
Aria | ORGANIZATION | 0.92+ |
thousands of accounts | QUANTITY | 0.92+ |
one time | QUANTITY | 0.92+ |
day two | QUANTITY | 0.92+ |
one discipline | QUANTITY | 0.9+ |
VMware Explorer | ORGANIZATION | 0.9+ |
gen one | QUANTITY | 0.88+ |
Wal Walker | EVENT | 0.88+ |
Aria | TITLE | 0.81+ |
22 | OTHER | 0.76+ |
Tanzu | TITLE | 0.75+ |
three new end | QUANTITY | 0.73+ |
Ari | ORGANIZATION | 0.73+ |
VMware Explore | TITLE | 0.69+ |
Splunks | ORGANIZATION | 0.65+ |
Aria Graph | TITLE | 0.64+ |
PERA | ORGANIZATION | 0.58+ |
Z | PERSON | 0.57+ |
segments | QUANTITY | 0.51+ |
2022 | DATE | 0.49+ |
Explorer | TITLE | 0.49+ |
Kubernetes | TITLE | 0.46+ |
Aria | PERSON | 0.45+ |
Wal | EVENT | 0.45+ |
4-video test
>>Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins; the total energy is given by the expression shown at the bottom left of this slide. Here, the sigma variables take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N, for worst-case instances. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances. And it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where costs may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem (TSP), for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root exponential for N up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-GHz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means.
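The slide expression itself isn't captured in the transcript; for reference, the standard Ising energy consistent with the description above (couplings J_ij, local fields h_i, binary spins) is, up to sign convention,

    E(\sigma) = -\sum_{i<j} J_{ij}\,\sigma_i\,\sigma_j \;-\; \sum_i h_i\,\sigma_i, \qquad \sigma_i \in \{-1,+1\},

and the ground state problem is to find the assignment of the sigma variables that minimizes E for the given J and h.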
Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for the instance with N equal to 19,289, requiring approximately two days of run time on a single 2.4-GHz core. Now, if we simple-mindedly extrapolate the root exponential scaling from the study fit up to N equal to 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the N equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has N equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single 2.4-GHz core. But the much larger so-called World TSP benchmark instance, with N equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization: we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for MAX-CUT and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results from MAX-CUT and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance, with lower cost, on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized, special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms.
These machines, in general, are a novel class of information processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems, in contrast both to more traditional engineering approaches that build Ising machines using conventional electronics and to more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that leverage post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to the linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft, or perhaps mean-field, spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the sync-pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string, giving a proposed solution of the Ising ground state problem. This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory: namely, a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and we hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described.
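To make the pump-ramp picture concrete before going on, here is a minimal classical mean-field caricature of these dynamics in Python: soft-spin amplitudes with cubic gain saturation plus an Ising-coupling feedback term, with the pump ramped through threshold and the signs of the final amplitudes read out as spins. It deliberately ignores noise, quantum effects, and the measurement-feedback machinery, so treat it as a sketch of the idea rather than a model of the actual hardware:

    import numpy as np

    def cim_sketch(J, steps=4000, dt=0.01, eps=0.1, seed=0):
        # Crude mean-field caricature of the CIM pump ramp:
        #   dx_i/dt = (p - 1 - x_i^2) * x_i + eps * sum_j J_ij x_j
        # x starts near vacuum; p is ramped from 0 through threshold.
        rng = np.random.default_rng(seed)
        x = 1e-3 * rng.standard_normal(J.shape[0])
        for t in range(steps):
            p = 2.0 * t / steps                  # pump ramp from 0 to 2
            x += dt * ((p - 1.0 - x**2) * x + eps * (J @ x))
        return np.sign(x)                        # binary phase readout -> spins

    # Ferromagnetic pair (J12 = +1): the two spins should lock in phase.
    J = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    print(cim_sketch(J))  # e.g. [ 1.  1.] or [-1. -1.], depending on the seed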
We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition; the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by a mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing, it will inject a perturbation into the other that may interfere either constructively or destructively with the field that the other is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground state problem of a ferromagnetic or antiferromagnetic N equals 2 Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase. Clearly, we can imagine generalizing this story to larger N; however, the story doesn't stay as clean and simple for all larger problem instances. And to find a more complicated example, we only need to go to N equals 4. For some choices of J_ij at N equals 4, the story remains simple, like the N equals 2 case. The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated N equals 4 instance, in which the first bifurcated critical point (that is, the one that bifurcates at the lowest pump value) flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but suboptimal minimum at large pump power.
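The two-OPO phase selection at the start of this passage can be made concrete by linearizing the two signal amplitudes around zero; the normalization here is chosen for illustration and is not the talk's own notation:

    \frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} p - 1 & \alpha \\ \alpha & p - 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix},

with pump gain p and mutual injection \alpha. The eigenmodes x_1 \pm x_2 experience net gain p - 1 \pm \alpha, so for \alpha > 0 the in-phase (ferromagnetic) combination reaches threshold first, at p = 1 - \alpha, while for \alpha < 0 the anti-phase combination wins, reproducing the lowered collective thresholds described above.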
Returning to the frustrated N equals 4 example: the global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors appear to become more common at larger N. For the N equals 20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-N examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp. Of course, N equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able reliably to determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-N limit, we can also analyze fully quantum mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of N equals 10^4 to 10^5 to 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger N. Our initial approach to characterizing CIM behavior in the large-N regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, etcetera. At present, we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So, in closing, I should acknowledge the people who did the hard work on the things that I've shown: my group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI research labs. And I should acknowledge funding support from the NSF through the Coherent Ising Machines Expedition in Computing, and also from NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much.
>> Mhm. >> I'd like to thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi, I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators: how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. I want to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere: from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics. Some of the biggest examples are metamaterials, which are arrays of small resonators, and more recently the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed-matter models in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: a simple summation over spins, where each spin can be either up or down and the couplings are given by the J_ij. And the Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem is known to be NP-hard, so it's computationally important because it's representative of the NP problems, and NP problems are important because, first, they're hard on standard computers if you use a brute-force algorithm, and second, they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems and hopefully provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is, is a resonator with nonlinearity in it: we pump these resonators, and we generate a signal at half the frequency of the pump; one photon of the pump splits into two identical photons of signal. These oscillators have some very interesting phase- and frequency-locking behaviors, and if you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendulums; but they are parametric oscillators, because I'm going to modulate a parameter of them in this video, namely the length of the string. That modulation acts as the pump, and it makes them oscillate at a signal which is half the frequency of the pump. And I have two of them, to show you that they can acquire these phase states: they're still phase- and frequency-locked to the pump, but they can settle into either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, or up or down.
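(For reference, the Ising energy being described takes the standard form below, with sign conventions varying across the literature:)

```latex
H(\boldsymbol{\sigma}) \;=\; -\sum_{i<j} J_{ij}\,\sigma_i\,\sigma_j ,
\qquad \sigma_i \in \{-1, +1\}
```

For small instances, the ground state can be verified by exhaustive enumeration; a minimal sketch, where the random J is a placeholder and not an instance from the talk:

```python
import itertools
import numpy as np

# Brute-force Ising ground state, E(s) = -1/2 * s^T J s; feasible only for
# small n (2^n states), which is how small-instance results from a physical
# Ising machine can be checked independently.
rng = np.random.default_rng(1)
n = 4
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1) + np.triu(J, 1).T        # symmetric, zero diagonal

best_energy, best_state = np.inf, None
for bits in itertools.product((-1.0, 1.0), repeat=n):
    s = np.array(bits)
    energy = -0.5 * s @ J @ s
    if energy < best_energy:
        best_energy, best_state = energy, s
print("ground state:", best_state, "energy:", best_energy)
```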
And to implement the network of these resonators, we use the time-multiplexing scheme. The idea is that we put pulses in the cavity, and these pulses are separated by the repetition period, T_R; you can think about these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R: the shortest delay couples resonator one to two, two to three, and so on; the second delay, which is two times the repetition period, couples one to three, and so on. If you have n-minus-one delay lines, then you can have any potential couplings among these synthetic resonators, and if I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right times, then I can have a programmable, all-to-all connected network in this time-multiplexing scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is to have these OPOs, each of which can be either zero or pi, and to be able to arbitrarily connect them to each other. I start by programming this machine to a given Ising problem, by just setting the couplings with the controllers in each of those delay lines. So now I have a network which represents an Ising problem, and the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints. The way it happens is that the Ising Hamiltonian maps to the linear loss of the network, and if I start adding gain, by putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this over the past six or seven years, and I'm just going to quickly show you the transitions: what happened in the first implementation, which was using a free-space optical system; then the guided-wave implementation in 2016; and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make the distinction here that the first implementation was an all-optical interaction; we also had an n-equals-16 implementation; and then we transitioned to this measurement-feedback idea, which I'll quickly tell you about. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to focus mostly on the all-optical networks: how we're using them to go beyond simulation of the Ising Hamiltonian, on both the linear and nonlinear sides, and also how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine. We implemented a small, n-equals-four MAX-CUT problem on the machine, so one problem for one experiment, and we ran the machine 1000 times; we looked at the states, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and the controller with a simulator: we basically simulate all those coherent interactions on an FPGA, and we modulate the coherent pulses according to all those measurements.
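(A toy round-trip picture of this measurement-feedback loop; the gain, saturation, noise, and feedback coefficients below are illustrative assumptions, not the experimental values.)

```python
import numpy as np

# Toy measurement-feedback Ising machine: the cavity supplies parametric
# gain and saturation; the coupling term J @ x is computed digitally from
# noisy homodyne measurements and injected back each round trip.
rng = np.random.default_rng(2)
n = 8
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1) + np.triu(J, 1).T

x = np.zeros(n)                                    # in-phase amplitudes
for rt in range(600):
    p = 1.5 * rt / 600                             # pump ramped up over round trips
    measured = x + 0.01 * rng.standard_normal(n)   # homodyne measurement
    feedback = 0.02 * (J @ measured)               # coupling computed digitally
    x = x + 0.1 * (p - 1.0 - x**2) * x + feedback  # gain/saturation + injection
spins = np.sign(x).astype(int)
print("spins:", spins, "Ising energy:", -0.5 * spins @ J @ spins)
```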
In this measurement-feedback scheme, the computed feedback is then injected back into the cavity, and only the nonlinearity remains optical; so it is still a nonlinear dynamical system, but the linear side is all simulated. There are lots of questions about whether this system preserves the important information or not, and whether it behaves better computationally; that's still a lot of ongoing study. Nevertheless, the reason this implementation is very interesting is that you don't need the n-minus-one delay lines; you can use just one. Then you can implement a large machine, run several thousands of problems on it, and compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think of this as a simple matrix-multiplication scheme, and that's basically what gives you the Ising-Hamiltonian modeling; the optical loss of this network corresponds to the Ising Hamiltonian. To show you the example of the n-equals-four experiment, with all those phase states and the histogram that we saw: you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines give different states different losses, and then you see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain. You then start bringing up the gain so that it hits the loss; you go through the gain saturation, or the threshold, which gives you this phase bifurcation, so you go either to the zero or to the pi phase state, and the expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So, if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. The difference between looking at topological behaviors and at the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian, and one of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one spin on one side to the other side you pick up one phase, and if you go back you pick up a different phase. The other thing is that we're not just interested in finding the ground state; we're now interested in looking at all sorts of states, and at the dynamics and behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one-dimensional chain of these resonators; this corresponds to the so-called SSH model in the topological world. We get the similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you can see how well it follows the prediction and the theory.
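(For reference, a minimal sketch of the SSH spectrum being mapped onto network loss: a finite chain with alternating couplings, where the topological choice produces two near-zero edge modes; the parameter values are illustrative.)

```python
import numpy as np

# Finite SSH chain with alternating hoppings t1 (intra-cell) and t2
# (inter-cell). For t1 < t2 the chain is topological and hosts two
# near-zero-energy edge modes; for t1 > t2 it is trivial. In the OPO
# network, "energy" plays the role of round-trip loss.
def ssh_spectrum(t1, t2, cells=20):
    n = 2 * cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = t1 if i % 2 == 0 else t2
    return np.sort(np.linalg.eigvalsh(H))

for t1, t2 in [(0.5, 1.0), (1.0, 0.5)]:        # topological vs trivial
    ev = ssh_spectrum(t1, t2)
    edge_modes = int(np.sum(np.abs(ev) < 0.05))
    print(f"t1={t1}, t2={t2}: near-zero modes = {edge_modes}")
```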
One of the interesting things about the time-multiplexing implementation is that you now have the flexibility of changing the network as you are running the machine, and that's something unique about this time-multiplexed implementation, so we can actually look at the dynamics. One example we have looked at is going through the transition from the topological to the trivial behavior of the network. You can then look at the edge states, and you can see the trivial end states and the topological edge states actually showing up in this network. We have also just recently implemented a two-dimensional network with the Harper-Hofstadter model; I don't have the results here, but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics. We can also think about adding nonlinearity, in both the classical and quantum regimes, which is going to give us a lot of exotic nonclassical and quantum nonlinear behaviors in these networks. So I've told you mostly about the linear side; let me switch gears and talk about the nonlinear side of the network. The biggest thing I've talked about so far in the Ising machine is the phase transition at threshold: below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and then we get the phase states above threshold. This is basically the mechanism of the computation in these OPOs: the phase transition from below to above threshold. One of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, corresponding to the intensity of the driving pump. So it's really hard to imagine that you can have this phase transition happen entirely in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and its intensity goes really high, it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is: can we look at other phase transitions, can we utilize them for computing, and can we bring them to the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I've mostly talked about, to the non-degenerate regime; it happens by just tuning the phase of the cavity. What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I discussed. In the non-degenerate regime, however, the phase is mostly dominated by quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see the transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case, the signal can acquire any of the phases on the circle, so it has a U(1) symmetry.
Okay, and if you go to the degenerate case, that symmetry is broken, and you only have the zero and pi phase states. So now the question is: can we utilize this phase transition, which is a phase-driven phase transition, for a similar computational scheme? That's one of the questions that we're also thinking about, and this phase transition is not just important for computing: it's also interesting for its sensing potential, and you can easily bring it below threshold and operate in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, we can now see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs, and it's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore, in both the classical and quantum regimes. I should also mention that you can make the couplings nonlinear couplings as well, and that's another behavior you can see, especially in the non-degenerate regime. So with that, I have basically told you about these OPO networks: how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, in both the classical and quantum regimes. Now I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. Of course, the motivation is that if you look at electronics, and what we had 60 or 70 years ago with vacuum tubes, we transitioned from relatively small-scale computers, on the order of thousands of nonlinear elements, to the billions of nonlinear elements we have now; optics today is probably very similar to 70 years ago, a tabletop implementation, and the question is how we can utilize nanophotonics. I'm going to briefly show you the two directions we're working on: one is based on lithium niobate, and the other is based on even smaller resonators. The work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard, and also Marty Fejer at Stanford, and we could show that you can do periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in these nanophotonic, periodically poled lithium niobate devices. Now we're working on building OPOs based on that kind of nanophotonic thin-film lithium niobate, and these are some examples of the devices we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks: I also want to point out that the reason these nanophotonic platforms are exciting is not just that you can make large networks and make them compact, in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO: can we have the quantum superposition of the zero and pi states that I talked about? The nanophotonic lithium niobate
platform provides some opportunities to actually get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. We're doing some theory on that, and we're confident that the ratio of nonlinearity to losses that you can get with these platforms is actually much higher than what you can get with the existing platforms. And to go even smaller, we have been asking the question of what the smallest possible OPO is that you can make: you can think about really wavelength-scale resonators, add the chi-two nonlinearity, and see how and when you can get the OPO to operate. Recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks; so if you can build the OPOs, we know that there is a path for implementing OPO networks on such a nanoscale. We have looked at the calculations and tried to estimate the threshold of such OPOs, say for a wavelength-scale resonator, and it turns out that it can actually be even lower than the type of bulk periodically poled lithium niobate OPOs that we have been building in the past 50 years or so. So we're working on the experiments, and we're hoping that we can make larger and larger scale OPO networks. Let me summarize the talk: I told you about the OPO networks and our ongoing work on Ising machines and measurement feedback; I told you about the ongoing work on all-optical implementations, both on the linear side and on the nonlinear behaviors; and I also told you a little bit about the efforts on miniaturization, going to the nanoscale. So with that, I would like to thank you.

>> Hi everyone, this is Timothee Leleu from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI lab. I'm happy to share with you today some of the recent works that have been done, either by me or by collaborators. The title of my talk is "A neuromorphic in silico simulator for the coherent Ising machine," and here is the outline: I would like to make the case that simulation, in digital electronics, of the CIM can be useful for better understanding or improving its functioning principles, by introducing some ideas from neural networks; this is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation, in the second part, and projections of the performance that can be achieved using a very large-scale simulator, in the third part, and finally I'll talk about future plans. So first, let me start by comparing recently proposed Ising machines, using this table, which is adapted from a recent Nature Electronics paper. This comparison shows that there is always a trade-off between energy efficiency, speed, and scalability, depending on the physical implementation; in red here are the limitations of each of these hardware platforms. Interestingly, the FPGA-based systems, such as digital annealers, the Toshiba simulated bifurcation machine, or a recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley, offer a good compromise between speed and scalability.
And this is why, despite the unique advantages that some of these other hardware platforms have, such as the quantum superposition in flux qubits or the energy efficiency of memristors, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, nor that they are particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-in and fan-out, and long-distance propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems: the physics of the interconnections rather than of the components. To put the performance of these various hardware platforms in perspective, we can look at the computation done by the brain: the brain computes using billions of neurons, using only about 20 watts of power, and operates at a very low frequency, theoretically speaking. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and of the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large CIM simulator in silicon, shown at the bottom here, that can be used for testing better organizational principles for the CIM. In this talk, I will discuss three neuro-inspired principles: first, asymmetry of connections, with neural dynamics that are often chaotic because of that asymmetry in the connectivity; second, mesoscale structure: neural networks are not composed of the repetition of always the same types of neurons, but there is a local structure that is repeated, and here is the schematic of the microcolumn in the cortex; and lastly, the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines, and their in silico simulation? First, about the two principles of asymmetry and mesoscale structure. We know that the classical approximation of the coherent Ising machine is analogous to rate-based neural networks; in the case of the CIM, this classical approximation can be obtained using, for example, the truncated Wigner approximation. The dynamics of both of these systems can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the degenerate optical parametric amplification, and the sum over J_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, followed by injection of the computed coupling term. These dynamics, in both the CIM and neural-network cases, can be written as gradient descent on a potential function V, written here, and this potential function includes the Ising Hamiltonian.
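(In a commonly used classical form, reconstructed here from the standard CIM literature and not necessarily the exact notation on the slide, the dynamics and its potential read:)

```latex
\frac{dx_i}{dt} \;=\; (-1 + p - x_i^2)\,x_i \;+\; \sum_j J_{ij}\,x_j \;=\; -\frac{\partial V}{\partial x_i},
\qquad
V(\mathbf{x}) \;=\; \sum_i \left( \frac{(1-p)\,x_i^2}{2} + \frac{x_i^4}{4} \right) \;-\; \frac{1}{2}\sum_{i,j} J_{ij}\,x_i x_j ,
```

where the last term is the Ising Hamiltonian evaluated on the analog amplitudes, and the gradient form only exists when the J_ij are symmetric.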
So this is why it's natural to use this type of dynamics to solve the Ising problem, in which the omega_ij play the role of the Ising couplings and h is the external field of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem with this approach is that the potential function V we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process; but unfortunately there is no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce a mesoscale structure in the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this mesoscale structure introduces asymmetry in the system, which in turn induces chaotic dynamics: a chaotic search, rather than a relaxation process, for the ground state of the Ising Hamiltonian. Within this mesoscale structure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitude of the spins to become equal to a certain target amplitude a, and this is done by modulating the strength of the Ising couplings: you see the error variable e_i multiplying the Ising coupling here in the dynamics of each DOPO. The whole dynamics is then described by these coupled equations, and because the e_i do not necessarily take the same value for the different i, this introduces asymmetry in the system, which in turn creates chaotic dynamics, which I show here for solving a certain size of SK problem; the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plots. You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics does not get stuck in any of them; moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using the modulation of the target amplitude. We have proposed in the past two different modulations of the target amplitude: the first one is a modulation that ensures that the entropy production rate of the system becomes positive, which forbids the creation of any nontrivial attractors; but in this work I will talk about another, restricted modulation, which is given here, that works as well as the first modulation but is easier to implement on an FPGA. These coupled equations, which represent the simulation of the coherent Ising machine with error correction, can be implemented especially efficiently on an FPGA. Here I show the time that it takes to simulate the system: in red you see the time that it takes to simulate the x_i term, the e_i term, the dot product, and the Ising Hamiltonian, for a system with 500 spins and 500 error variables, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics corresponding to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles at 300 megahertz, which corresponds to about 0.1 microseconds.
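(A minimal numerical sketch of the amplitude/error-variable dynamics described above; the coupling matrix, parameter values, and integration scheme are illustrative assumptions rather than the paper's exact settings.)

```python
import numpy as np

# Error-correction CIM dynamics (sketch):
#   dx_i/dt = (-1 + p - x_i^2) * x_i + e_i * sum_j J_ij x_j
#   de_i/dt = -beta * e_i * (x_i^2 - a)
# Each e_i modulates the coupling so that |x_i| is forced toward the target
# amplitude sqrt(a); the spread of the e_i makes the effective couplings
# asymmetric, turning the relaxation into a chaotic search.
rng = np.random.default_rng(3)
n = 16
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1) + np.triu(J, 1).T

p, a, beta, dt = 1.1, 1.0, 0.3, 0.01
x = 0.01 * rng.standard_normal(n)
e = np.ones(n)
best = np.inf
for _ in range(40000):
    dx = (-1.0 + p - x**2) * x + e * (J @ x) / np.sqrt(n)
    de = -beta * e * (x**2 - a)
    x, e = x + dt * dx, e + dt * de
    s = np.sign(x)
    best = min(best, -0.5 * s @ J @ s)
print("best Ising energy visited:", best)
```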
That FPGA update time is to be compared to what can be achieved in the measurement-feedback CIM: if we want 500 time-multiplexed DOPOs with a one-gigahertz repetition-rate pulsed laser in the optical cavity, then we would require 0.5 microseconds to do the same, so the simulation on the FPGA can be at least as fast as a one-gigahertz repetition-rate pulsed-laser CIM. Then, the dot product that appears in these differential equations can be computed in 43 clock cycles, that's to say about 0.14 microseconds. So for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product, with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear part could be done in constant time, and the matrix-vector product could be done in a time that scales as the logarithm of n, because computing the dot product involves summing all the terms in the product, which is done on the FPGA by an adder tree, whose height scales logarithmically with the size of the system. But that is only if we had an infinite amount of resources on the FPGA; for dealing with larger problems, of more than about 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that I denote U here, and then the scaling becomes linear in n over U for the nonlinear parts, and (n over U) squared for the dot products. Typically, for a low-end FPGA, the block size of this matrix is about 100. So clearly we want to make U as large as possible, in order to maintain the logarithmic scaling of the number of clock cycles needed to compute the product, rather than the squared scaling that occurs when we decompose the matrix into smaller blocks. But the difficulty with having these larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution, to get higher performance for a simulator of the coherent Ising machine, is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing the electrical components hierarchically within the FPGA, as shown in this right panel here, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA, but this should give you an idea of why the hierarchical organization of the system becomes extremely important for getting good performance when simulating Ising machines. So instead of getting into the details of the FPGA implementation, I would like to give some benchmark results from this simulator, which was used as a proof of concept for this idea, and which can be found in this arXiv paper.
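(To make the block-decomposition scaling above concrete, a toy cycle-count model; the formula and numbers are purely illustrative assumptions, not the paper's actual resource model.)

```python
import math

# Toy clock-cycle model for the matrix-vector bottleneck: with block size U,
# the N x N product needs ~ceil(N/U)^2 sequential block operations, each
# feeding an adder tree of depth ~log2(U).
def dot_product_cycles(N, U):
    blocks = math.ceil(N / U) ** 2
    tree_depth = math.ceil(math.log2(U))
    return blocks + tree_depth     # pipelined blocks + final tree latency

for N in (100, 500, 2000):
    print(N, {U: dot_product_cycles(N, U) for U in (100, 500)})
```

The larger block size wins quickly as N grows, which is the motivation for maximizing the adder-tree size described above.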
Here I show results for solving SK problems: fully connected spin glasses with randomly drawn plus-one/minus-one couplings. We use as a metric the number of matrix-vector products, since that is the bottleneck of the computation, needed to reach the optimal solution of these SK problems with 99 percent success probability, plotted against the problem size. In red here is the proposed FPGA implementation; in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems; and in green is noisy mean-field annealing, whose behavior is similar to the coherent Ising machine. Clearly you see that the number of matrix-vector products necessary to solve these problems scales with a better exponent than these other approaches, so that's an interesting feature of the system. Next, we can look at the real time-to-solution for these SK instances: here is the time-to-solution in seconds to find a ground state of SK instances with 99 percent probability, for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent, for example, breakout local search, in orange, and simulated annealing, in purple. You see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to the problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristor crossbars, in blue here, which is very fast for small problem sizes but whose scaling is not good, and the same for the restricted Boltzmann machine implemented on an FPGA, proposed by a group in Berkeley recently, which again is very fast for small problem sizes but whose scaling is bad, so that it becomes worse than the proposed approach; we can expect that for problem sizes larger than about 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide, for another confirmation that the scheme scales well: we can find maximum-cut values on the G-set benchmarks better than those previously found by any other algorithms, so they are the best-known cut values to the best of our knowledge, as shown in this table here. In particular, for instances 14 and 15 of the G-set, we can find better cut values than previously known, and we can find these cut values about 100 times faster than the state-of-the-art algorithm used to obtain them. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters: the tuning used here is very simple, and it just depends on the degree of connectivity within each graph. So these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems, but also all types of graph Ising problems, such as MAX-CUT problems.
So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of the adder tree on a large FPGA, by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future, based on the implementation that we are currently working on. Here you see projections for the time-to-solution, with 99 percent success probability, for solving SK problems with respect to the problem size, compared to different published Ising machines, in particular the digital annealer, shown by the green line without dots here. We show two different hypotheses for these projections: either that the time-to-solution scales as an exponential of n, or that it scales as an exponential of the square root of n; it seems, according to the data, that the time-to-solution scales more like an exponential of the square root of n. These projections show that we could probably solve SK problems of size 2000 spins, finding the real ground state with 99 percent success probability, in about 10 seconds, which would be much faster than all the other proposed approaches. So, the future plans for this coherent-Ising-machine simulator: the first thing is that we would like to make the simulation closer to the real DOPO optical system, and in particular, as a first step, to get closer to the measurement-feedback CIM. For this, what is simulatable on the FPGA is the quantum Gaussian model that is described in this paper, proposed by people in the NTT group; the idea of this model is that instead of having the very simple ODEs I've shown previously, it includes pairs of ODEs that take into account not only the mean of the in-phase component but also its variance, so that we can take into account more of the quantum effects of the DOPO, such as the squeezing. Then we plan to make the simulator open access, for the members to run their instances on the system. There will be a first version in September, which will be based on simple command-line access to the simulator, and which will have just the classical approximation of the system, with a noise term, binary weights, and a Zeeman term. Then we will propose a second version that extends the current Ising machine to a hierarchy of FPGAs, in which we will add the more refined models, such as the quantum Gaussian model I just talked about, and support real-valued weights for the Ising problems, as well as the Zeeman term. We will announce when this is available, and we are working hard on it.

>> Hello, I come from the University of Notre Dame, from the physics department, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I'd also like to say that I look forward to collaborations with the PHI lab, and with Yoshi and collaborators, on the topics of this workshop. So today I'll briefly talk about our attempt to understand the fundamental limits of analog continuous-time computing, at least from the point of view of Boolean satisfiability problem solving, using ordinary differential equations. But I think the issues that we raise on this occasion actually apply to other analog approaches as well, and to other problems as well.
I think everyone here knows what Boolean satisfiability problems are: you have Boolean variables and M clauses, each a disjunction of literals, where a literal is a variable or its negation, and the goal is to find an assignment to the variables such that all clauses are true. This is a decision-type problem from the NP class, which means you can check in polynomial time the satisfiability of any assignment, and k-SAT is NP-complete for k of three or larger, which means an efficient 3-SAT solver implies an efficient solver for all the problems in the NP class, because all the problems in the NP class can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to set packing, or to maximum independent set, which is set packing in graph-theoretic terms, or to the decision version of the Ising ground-state problem. This is useful when you're comparing different approaches that work on different kinds of problems. When not all the clauses can be satisfied, you're looking at the optimization version of SAT, called MAX-SAT, and the goal there is to find the assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications: if we had an efficient SAT solver, or NP-complete-problem solver, it would literally, positively influence thousands of problems and applications in industry and in science. I'm not going to read this list, but it of course gives strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. Instead of working with zeros and ones, we work with minus one and plus one, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it's plus one; if it contains the variable in negated form, it's minus one. We then use this to formulate products, called clause-violation functions, one for every clause, which vary continuously between zero and one, and which are zero if and only if the clause itself is true. Then, in order to define the dynamics, we define this energy potential, or landscape function, shown here, over the n-dimensional hypercube where the search happens; if solutions exist, they're sitting in some of the corners of this hypercube. The landscape is defined in such a way that it is zero if and only if all the clause terms K_m are zero, that is, all the clauses are satisfied, keeping the auxiliary variables a_m always positive. And then what we have here is a dynamics that is essentially a gradient descent on this potential energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum; however, what we do is couple their dynamics to the clause-violation functions, as shown here. If you didn't have the a_m here, just the K's, you would essentially have positive feedback, an increasing variable, but in that case you would still get stuck.
That is better than the constant version, but it would still get stuck; only when you put in this a_m, which makes the dynamics in this variable exponential, only then does it keep searching until it finds a solution. There is a reason for that which I'm not going to talk about here, but essentially it boils down to performing a gradient descent on a globally time-varying landscape, and this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that this is a hyperbolic dynamical system, which means that if you take any domain of the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is a characteristic invariant of the dynamics itself; in dynamical systems it's called the escape rate, and its inverse is the time scale on which this dynamical system finds solutions. You can see here some sample trajectories that are chaotic, because the system is nonlinear, but it is transiently chaotic: the trajectories give up their chaos, of course, because eventually they converge to the solution. Now, in terms of performance: what we show here, for a bunch of constraint densities, defined by M over n, the ratio between clauses and variables, for random 3-SAT problems, is the wall-clock time as a function of n, and it behaves polynomially until you actually reach the SAT/UNSAT transition, where the hardest problems are found. But what's more interesting is if you monitor the continuous time t, the performance in terms of the analog continuous time t, because that seems to be polynomial. And the way we show that is: we consider random 3-SAT at a fixed constraint density, to the right of the threshold, where it's really hard, and we monitor the fraction of problems that we have not been able to solve. We select thousands of problems at that constraint ratio, solve them with our algorithm, and monitor the fraction of problems that have not yet been solved by continuous time t. As you see, this decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law, in the system size. So if you combine these two, you find that the time needed to solve all problems, except maybe a vanishing fraction of them, scales polynomially with the problem size: you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint-satisfaction problems, such as exact cover, because you can always transform them into 3-SAT, as we discussed before, or Ramsey coloring, and on these problems even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because, first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes physical wall-clock time, and that would indeed scale polynomially; but you have the other variables, the auxiliary variables, which grow in an exponential manner, so if they represent currents or voltages in your realization, there would be an exponential cost altogether.
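(A compact numerical sketch of this continuous-time SAT dynamics, in the form I believe the published equations take; the instance, step size, and iteration budget are illustrative assumptions.)

```python
import numpy as np

# Continuous-time SAT dynamics (sketch). Clause matrix c[m, i] in {-1, 0, +1};
# clause-violation function K_m = 2**-3 * prod_i (1 - c[m, i] * s_i) for
# three-literal clauses, zero iff clause m is true. The landscape is
# V = sum_m a_m * K_m**2; the s_i descend V while the auxiliary a_m grow
# exponentially on violated clauses, deforming the landscape in time.
rng = np.random.default_rng(4)
n, m = 20, 60                                  # variables, clauses
c = np.zeros((m, n))
for row in range(m):
    idx = rng.choice(n, size=3, replace=False)
    c[row, idx] = rng.choice([-1.0, 1.0], size=3)

def all_clauses_true(s):
    lit = c * np.sign(s)                       # +1 entries are true literals
    return all((row[row != 0] > 0).any() for row in lit)

s = rng.uniform(-0.5, 0.5, size=n)
a = np.ones(m)
dt = 0.05
for step in range(1, 50001):
    g = 1.0 - c * s                            # factor equals 1 where c[m, i] == 0
    K = np.prod(g, axis=1) / 8.0
    # dV/ds_i = -sum_m 2 * a_m * K_m^2 * c[m, i] / g[m, i]
    grad = -((2.0 * a * K**2)[:, None] * c / g).sum(axis=0)
    s = np.clip(s - dt * grad, -0.99, 0.99)    # gradient descent inside the cube
    a = np.minimum(a * (1.0 + dt * K), 1e12)   # capped here only as a numerical guard
    if step % 50 == 0 and all_clauses_true(s):
        break
print("steps taken:", step, "satisfied:", all_clauses_true(s))
```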
So the exponential growth of those auxiliary variables amounts to some kind of trade-off between time and energy: I don't know how to generate time, but I do know how to generate energy, so one could use energy for it. But there are other issues as well, especially if you're trying to do this on a digital machine, and other problems appear in physical devices too, as we'll discuss later. If you implement this on a GPU, you can get a couple of orders of magnitude of speedup, and you can also modify this approach to solve MAX-SAT problems quite efficiently: we are competitive with the best heuristic solvers on the problems of the 2016 MAX-SAT competition. So this definitely seems like a good approach, but there are of course interesting limitations; I would say interesting, because they kind of make you think about what it all means, and about how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps done by the Runge-Kutta integrator when you solve this on a digital machine (you're using some kind of integrator, and you use the same approach, but now you measure the number of problems you haven't solved within a given number of discrete steps taken by the integrator), you find that you have exponential discrete-time complexity, and of course this is a problem. And if you look closely at what happens: even though the analog mathematical trajectory is the curve shown here, if you monitor what happens in discrete time, the integrator's step size becomes very small, down in something like the third or fourth decimal place, and it fluctuates like crazy. So it really is as if the integration freezes out, and this is because of the phenomenon of stiffness, which I'll talk a little more about a bit later. You know, it might look like an integration issue on digital machines, something you could improve, and you definitely can improve it, but actually the issue is bigger than that; it's deeper than that, because on a digital machine there is no time-to-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there is no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. One would be tempted to think that maybe this wouldn't be an issue in an analog device, and to some extent that's true: analog devices can be orders of magnitude faster, but they also suffer from their own problems, because this affects those classes of solvers as well. Indeed, if you look at other systems, like the measurement-feedback Ising machines or oscillator networks, they all hinge on some kind of ability to control your variables to arbitrarily high precision: in oscillator networks you want to read out phases or frequencies precisely, and in the case of CIMs you require identical pulses, which are hard to keep identical; they fluctuate and shift away from one another, and if you could control that, of course, you could control the performance. So one can actually ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schonhage from 1978,
who showed, in a purely computer-science proof, that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you can solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you also have finite precision, so the next question is: how does that affect the computation of these problems? That is what we're after. Loss of precision means information loss, or entropy production, so what you're really looking at is the relationship between the hardness of a problem and the cost of computing it. According to Schonhage, there's this left branch, which in principle could be polynomial time, but the question is whether or not that is achievable with something more realistic; on the right-hand side, there's always going to be some information loss, some entropy production, that could keep you away from polynomial time. So this is what we'd like to understand, and I will argue that the source of this information loss is not just noise, present in any physical system, but is also of algorithmic nature. Now, Schonhage's result is purely theoretical; no actual solver is proposed there. So we can ask, just theoretically, out of curiosity: if we look mathematically and precisely at what our solver does, would it have the right properties in principle? And I argue yes; I don't have a mathematical proof, but I have some arguments that that would be the case: for our SAT solver, if you could calculate its trajectory losslessly, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a bit more delicate, because time in ODEs can be rescaled however you want; so what Bournez and colleagues say is that you actually have to measure the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system and not of its parameterization. And we did that: my student did it first, improving on the stiffness of the integration, using implicit solvers and some smart tricks, such that you actually stay closer to the true trajectory. Using the same approach, monitoring what fraction of problems you can solve, but now against the length of the trajectory, you find that the length is polynomially scaling with the problem size: we have polynomial-length complexity. That means that our solver is both polynomial-length and, as it was defined, also polynomial-time as an analog solver. But looked at as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver, and the reason is all this stiffness. Every integrator has to truncate (digitizing truncates the equations), and what it has to do is keep the integration within the so-called stability region of the scheme: you have to keep the product of the step size with the eigenvalues of the Jacobian inside this region. If you use explicit methods, you want to stay within this region.
But what happens is that for stiff problems some of the eigenvalues grow fast, and then you're forced to reduce the step size delta-t so that the product stays in this bounded domain, which means you're forced to take smaller and smaller time steps: you're freezing out the integration, and I showed you that that is the case. Now, you can move to implicit solvers, which is a trick where the stable domain is actually on the outside. But what happens in that case is that some of the eigenvalues of the Jacobian, again for stiff systems, start to move toward zero; as they move toward zero, they're going to enter the instability region, so your solver is going to try to keep them out by increasing the delta-t. But if you increase the delta-t, you increase the truncation errors, so you get randomized in this large search space, and it's really not going to work out. Now, one can introduce a theory, or a language, to discuss computational complexity using the language of dynamical systems theory. I don't have time to go into this, but basically, for hard problems you have a chaotic saddle, a chaotic set, somewhere in the middle of the search space, and that dictates how the dynamics happens; the invariant properties of the dynamics, and of that saddle, are what dictate performance, among many other things. So a new, important measure that we find helpful in describing analog complexity is the so-called Kolmogorov, or metric, entropy. Intuitively, what this describes is the rate at which the uncertainty contained in the insignificant digits of a trajectory flows towards the significant ones: you lose information as errors grow, developing into larger errors at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property: it's a property of the set of trajectories, not of how you compute them, and it's really the intrinsic rate of accuracy loss of a dynamical system. As I said, in such a high-dimensional system you have both positive and negative Lyapunov exponents, as many in total as the dimension of the space: the number of unstable manifold dimensions, and likewise the stable manifold directions. And there's an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, the escape rate that I already talked about. Now, one can actually prove simple theorems, back-of-the-envelope calculations. The idea here is that you know the rate at which closely started trajectories separate from one another, so you can say: that is fine, as long as my trajectory finds the solution before the trajectories separate too quickly. In that case, I can have the hope that if I start several closely spaced trajectories from some region of the phase space, they will all go into the same solution, and that gives this upper bound, this limit, and it really shows that it has to be an exponentially small number. What matters is the n-dependence of the exponent right here, which combines the information-loss rate and the solution-time performance.
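(For reference, the relation being invoked here is, in the Kantz-Grassberger/Pesin-type form I believe is meant for transiently chaotic open systems,

```latex
h_{\mathrm{KS}} \;=\; \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa ,
```

that is, the metric entropy, the information-production rate, equals the rate at which trajectories separate along the unstable directions, reduced by the escape rate kappa at which trajectories leave the chaotic saddle.)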
Coming back to that bound: if the exponent there has a strong n-dependence, or even a linear n-dependence, then you really have to start trajectories exponentially close to one another in order to end up at the same solution. So this is the sort of direction you're pushed in, and this formulation is applicable to all deterministic dynamical systems. And I think we can expand this further, because there is a way of getting an expression for the escape rate in terms of n, the number of variables, from cycle expansions, which I don't have time to talk about; it's a kind of program that one can try to pursue, and this is it. So, the conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing: it can be more efficient, by orders of magnitude, than digital computing in solving NP-hard problems, because, first of all, many of these systems avoid the von Neumann bottleneck, there is parallelism involved, and you can also have a larger spectrum of continuous-time dynamical algorithms than of discrete ones. But we also have to be mindful of what the possibilities and what the limits are, and one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? I think that's the exciting part, to derive these limits.
Neuromorphic in Silico Simulator For the Coherent Ising Machine
>>Hi everyone. I am a research fellow at the University of Tokyo. Before I start, I would like to thank the organizers and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI Lab. I'm happy to share with you today some of the recent works that have been done either by me or by collaborators of the group. As indicated by the title, my talk is about a neuromorphic in silico simulator for the coherent Ising machine, and here is the outline. I would like to make the case that the simulation, in digital electronics, of the CIM can be useful for better understanding or improving its function principles, by introducing some ideas from neural networks. This is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, and projections of the performance that can be achieved using a very large-scale simulator in the third part, and finally I will talk about future plans. So first, let me start by comparing recently proposed Ising machines using this table, which is adapted from a recent Nature Electronics paper. This comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation. In red here are the limitations of each of the hardware approaches and, interestingly, the FPGA-based systems, such as the Fujitsu Digital Annealer, the Toshiba bifurcation machine, or the recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley, offer a good compromise between speed and scalability. This is why, despite the unique advantages that some of the other hardware have, such as quantum superposition in flux qubits or the energy efficiency of memristors, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, nor are they particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-in and fan-outs, and the long propagation delay of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems rather than the physics of electrons and photons. To put the performance of these various hardware in perspective, we can look at the computational abilities of the brain: the brain computes using billions of neurons, using only about 20 watts of power, and it operates at theoretically very slow frequencies. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and of the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large-scale simulator in silicon, in the bottom here, that can be used for suggesting better organization principles for the CIM. In this talk, I will discuss three neuro-inspired principles: the asymmetry of connections, and the neural dynamics that are often chaotic because of this asymmetry; the local micro-structure of connectivity; and its hierarchical organization.
Neural networks are not composed of the repetition of always the same type of neuron; there is a local structure that is repeated, and here is a schematic of the micro-column in the cortex. And lastly, the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines? This is what the in silico simulation is for. So, first, about the two principles of asymmetry and chaotic search. We know the classical approximation of the coherent Ising machine, which is analogous to rate-based neural networks. In the case of the Ising machines, this classical approximation can be obtained using the truncated Wigner approximation, for example, so the dynamics of both of these systems can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represents the in-phase component of one DOPO, the function f represents the nonlinear optical part, the degenerate optical parametric amplification, and the sum of the J_ij x_j terms represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the coupling term. These dynamics, in both cases of the CIM and neural networks, can be written as gradient descent of a potential function V, which is written here (a reconstruction of these equations is given after this paragraph), and this potential function includes the Ising Hamiltonian. This is why it's natural to use this type of dynamics to solve Ising problems, in which the J_ij are the Ising couplings and the h_i are the external fields of the Ising Hamiltonian. Note that this potential function can only be defined if the J_ij are symmetric. The well-known problem of this approach is that the potential function V that we obtain is very non-convex at low temperature, and so one strategy is to gradually deform this landscape using an annealing process. But there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce a local structure in the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correcting variable. The addition of this local structure introduces asymmetry in the system, which in turn induces chaotic dynamics: a chaotic search, rather than a relaxation process, for finding the ground state of the Ising Hamiltonian. Within this structure, the role of the error variable e_i is to control the amplitude of the analog spins, to force the amplitude of the x_i to become equal to a certain target amplitude a. This is done by modulating the strength of the Ising coupling: you can see that the error variable e_i multiplies the Ising coupling term here in the dynamics of the DOPO. The whole dynamics is then described by these coupled equations, and because the e_i do not necessarily take the same value for different i, this introduces an asymmetry in the system, which in turn creates chaotic dynamics, which I'm showing here for solving a certain problem size of an SK problem, in which the x_i are shown here, the e_i here, and the value of the Ising energy is shown in the bottom plots.
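The equations being pointed at are not legible in the recording. The following is a plausible reconstruction based on the published amplitude-control CIM model (the error-variable dynamics of Leleu et al.); the exact coefficients and normalizations here are assumptions, not a transcription of the slide:

```latex
% Classical (truncated-Wigner) approximation of the CIM, a gradient flow:
\dot{x}_i = (p - 1)\,x_i - x_i^{3} + \sum_{j} J_{ij}\, x_j
          = -\frac{\partial V}{\partial x_i},
\qquad
V(x) = \sum_i \left( \frac{x_i^{4}}{4} + \frac{(1-p)\,x_i^{2}}{2} \right)
     - \frac{1}{2}\sum_{i,j} J_{ij}\, x_i x_j .

% Error-variable extension: e_i modulates the coupling and pushes x_i^2
% toward the target amplitude a; the system is no longer a gradient flow.
\dot{x}_i = (p - 1)\,x_i - x_i^{3} + e_i \sum_{j} J_{ij}\, x_j,
\qquad
\dot{e}_i = -\beta\, e_i \left( x_i^{2} - a \right).
```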
And you see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the ground state. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics does not get stuck in any of them. Moreover, other types of attractors can eventually appear, such as limit-cycle or chaotic attractors, but they can also be destabilized using a modulation of the target amplitude. So we have proposed in the past two different modulations of the target amplitude: the first one is a modulation that ensures that the entropy production rate of the system becomes positive, and this forbids the creation of any non-trivial attractors. But in this work I will talk about another, heuristic, modulation, which is given here, that works as well as the first modulation but is easier to implement on the FPGA.
So these coupled equations, which represent the classical simulation of the coherent Ising machine with error correction, can be implemented especially efficiently on an FPGA, and here I show the time that it takes to simulate this system. In red you see the time that it takes to simulate the x_i term, the e_i term, the dot product, and the Ising energy, for a system with 500 analog spins, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics, which corresponds to the degenerate optical parametric amplification (the OPA) of the CIM, can be computed in only 13 clock cycles at 300 MHz, which corresponds to about 0.1 microseconds. This is to be compared with what can be achieved in the measurement-feedback CIM, in which, if we want to compute 500 time-multiplexed DOPOs with a 1 GHz repetition rate of the optical pulses, we would require 0.5 microseconds to do this. So the simulation on the FPGA can be at least as fast as a 1-GHz-repetition-rate measurement-feedback CIM. Then, the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about 0.14 microseconds. So for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), scaling as the logarithm of N, because computing the dot product involves summing all the terms in the product, which is done on the FPGA by an adder tree whose height scales logarithmically with the size of the system. But that holds only with an infinite amount of resources on the FPGA; for larger problems, of more than 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that I denote u here, and then the scaling becomes linear in N over u for the nonlinear parts, and quadratic in N over u for the dot products. Typically, for a low-end FPGA chip, the block size of this matrix is about 100.
So clearly we want to make u as large as possible, in order to maintain this scaling in log N for the number of clock cycles needed to compute the dot product, rather than the N-squared scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces large fan-in and fan-outs and long-distance data paths within the FPGA. So the solution, to get higher performance for a simulator of the coherent Ising machine, is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing the circuit components hierarchically within the FPGA, which is shown here in this right panel, in order to minimize the fan-in and fan-outs of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA, but this should give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance for a simulator of an Ising machine.
So, instead of getting into the details of the FPGA implementation, I would like to give a few benchmark results for this simulator, which was used as a proof of concept for this idea, and which can be found in this arXiv paper. Here I show results for solving SK problems, fully connected, random plus-or-minus-one spin-glass problems, and we use as a metric the number of matrix-vector products, since it's the bottleneck of the computation, needed to get the optimal solution of the SK problem with 99% success probability, against the problem size. In red here is the proposed FPGA implementation, in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems, and in green is noisy mean-field annealing, whose behavior is similar to the classical approximation of the coherent Ising machine. And you see that the scaling of the number of matrix-vector products necessary to solve this problem has a better exponent than these other approaches, so that's an interesting feature of the system. Next, we can see what the real time-to-solution is to solve these SK instances. On this axis, the time-to-solution in seconds to find a ground state of the SK instances with 99% success probability is shown for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. You see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the other state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristors, which is very fast for small problem sizes, in blue here, but whose scaling is not good; and the same is true for the restricted Boltzmann machine implemented on FPGA proposed by the group in Berkeley recently: again, very fast for small problem sizes, but the scaling is bad, worse than the proposed approach, so we can expect that for problem sizes larger than, let's say, 1000 spins, the proposed approach would be the faster one.
Let me jump to this other slide, and another confirmation that the scheme scales well: we can find maximum-cut values on benchmark sets, the G-sets, better than the cut values that have been previously found by any other algorithms, so they are the best known cut values, to the best of our knowledge, which is shown in this table of the paper. In particular, for instances 14 and 15 of this G-set, we can find better cuts than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm used to do this. Note that getting these good results on the G-sets does not require any particularly hard tuning of the parameters: the tuning used here is very simple, and it just depends on the degree of connectivity within each graph. These good results on the G-sets indicate that the proposed approach would be good not only at solving SK problems, but at all types of graph Ising problems, such as the max-cut problems encountered in many applications.
So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of this adder tree on a large FPGA, carefully routing the circuit components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation that we are currently working on. Here you see the projection for the time-to-solution, with 99% success probability, for solving these SK problems with respect to problem size, compared to different published Ising machines, in particular the Digital Annealer, shown by the green line here. We show two different hypotheses for these projections: either that the time-to-solution scales as an exponential of N, or that it scales as an exponential of the square root of N. It seems, according to the data, that the time-to-solution scales more like an exponential of the square root of N, and we can also show with this projection that we could probably solve SK problems of size 2000 spins, finding the real ground state with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches.
So, about the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. To do this, what is simulatable on the FPGA is the quantum Gaussian model that is described in this paper and proposed by people in the NTT group. The idea of this model is that, instead of having the very simple ODEs that I have shown previously, it includes paired ODEs that take into account not only the mean of the in-phase components but also their variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. We then plan to make the simulator open access for the members, to run their instances on the system. There will be a first version in September that will be just based on simple command-line access to the simulator, and which will have just the classical approximation of the system, with binary weights. But then we will propose a second version that will extend the current Ising machine to a rack of eight FPGAs, in which we will add the more refined models, the truncated Wigner and the quantum Gaussian model that I just talked about, and which will support real-valued weights for the Ising problems. We will announce it later, when this is available, and Farah is working hard to get the first version available sometime in September. Thank you all, and we'll be happy to answer any questions that you have.
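Before moving on, a concrete illustration of the chaotic amplitude-control search described in this talk. This is a minimal, self-contained simulation of the error-variable dynamics sketched earlier, on a small random ±1 instance; the parameter values, the constant target amplitude, and the plain Euler integration are illustrative assumptions (the talk's implementation also modulates the target amplitude, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1)
J = J + J.T                             # symmetric +/-1 couplings, zero diagonal

p, a, beta, dt = 0.9, 1.0, 0.3, 0.01    # illustrative parameters, not the talk's
x = 1e-3 * rng.standard_normal(n)       # analog spin amplitudes (DOPO in-phase)
e = np.ones(n)                          # error-correction variables

best = np.inf
for _ in range(20000):
    x += dt * ((p - 1.0) * x - x**3 + e * (J @ x))
    e += dt * (-beta * e * (x**2 - a))
    s = np.sign(x)
    best = min(best, -0.5 * s @ J @ s)  # Ising energy of the rounded spins

print("best Ising energy found:", best)
```

On a machine this small, the chaotic search typically reaches the ground state within a few thousand steps; the point of the FPGA work above is making the `J @ x` step fast when n is in the thousands.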
Breaking Analysis: Cloud Remains Strong but not Immune to COVID
From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. While cloud computing is generally seen as a bright spot in tech spending, the sector is not immune from the effects of COVID-19. Look, it's better to be cloud than not cloud, no question, but recent survey data shows that the V-shaped recovery in the stock market looks much more like a square-root sign for IT spending in 2020, and even the cloud is going to be negatively impacted, albeit much less so than many other sectors. Hello everyone, and welcome to this week's Wikibon Cube Insights, powered by ETR. I'm Dave Vellante, and in this Breaking Analysis we want to update you on our latest data and thinking around the cloud computing market, with an emphasis on infrastructure as a service. We'll also update our latest quarterly estimates of the big three and show you our typical trailing-12-month view of revenue. Let's start with the macro picture. The reality is that the latest ETR survey, of nearly 1,200 respondents, shows that in the vast majority of companies, COVID is hitting IT budgets. Notably, 59% of respondents have frozen hiring; that's up from 26% in the last survey, which was taken in March and April, at the height of the U.S. lockdown. 24% have laid off employees, up from 4%. 41% froze new IT deployments, nearly double the percentage from the last survey. Now, on the plus side, there are some shops, 23%, that are accelerating IT deployments, and that's up significantly from last quarter. As we've reported, that's coming from the work-from-home and COVID-tailwind segments, and cloud computing is obviously one of those. But these spending shifts are not enough to offset the overall outlook for 2020, and likely that's going to continue into 2021,
because the big cloud players, especially AWS and Azure, are so large that they're exposed to industries that have been hit hard by the pandemic. As such, we see pockets of spending deceleration even at these companies. Now, the other piece of data that has our attention is the hybrid and multi-cloud market: it's beginning to show some spending momentum. This is particularly notable within VMware and Red Hat accounts, and we've even seen a bit of momentum for Oracle, which we'll talk about in a moment. Now, before we dig into the numbers, let's hear the sentiment from some of the customers. What we're showing here are some of the verbatim comments from ETR customers. One of the things I love about this survey is that it includes quantitative and qualitative data that I can sort by industry, so I've pulled up a few examples that underscore some of the broad-based pain that companies are facing. Education: minimum 15% cut across the organization. Energy and utilities: we cut projects 10 to 15% across the board. Financials: we've been asked to cut 20% out of our budget. Government: hiring freeze, larger constraints on spending. Healthcare and pharma: much more scrutiny from upper management. Industrials, materials, and manufacturing: slowing down, as not all projects can be done remotely. IT and telco: headcount and projects on hold and pushed into 2021. Retail and consumer: budget cuts, we lost three months of cash flow. Services and consulting: all discretionary projects are frozen. Now, these comments predominantly come from large companies that are big spenders. In fairness, there are plenty of positives in the anecdotes, but I have to say, squinting through the hundreds and hundreds of comments, this pretty much sums up the sentiment. This is especially true in the all-important U.S. market, where we heard on Cisco's earnings call this week that the theme is uncertainty related to the pandemic, and this is hitting IT budgets. Now, cloud spending remains at elevated levels, but there's definitely pressure. What we're showing here is the Net Score for the big three cloud players, Microsoft, Amazon, and Google, in the three surveys of 2020. Net Score is ETR's measure of spending momentum: each quarter, ETR asks buyers whether they are spending more or less on a particular platform, and Net Score essentially subtracts the lesses from the mores (a simple sketch of the calculation follows this paragraph). It's a bit more complicated than that, but that is really the essence. And you can see the deceleration in all three big cloud platforms. It's important to point out that these are at elevated levels, and they represent strength, but there's clear pressure and headwinds on spending. Even in the cloud, no sector is immune. There are pockets, like video conferencing and security, that are winning, but even in these sectors it's bifurcated: it's often a story of a firm that is well positioned to gain share, like, say, a Zoom, or an Okta, a CrowdStrike, a Zscaler, or a SailPoint, that we've highlighted in previous Breaking Analysis segments. Now, this slide shows data from the ETR survey; the pies compare the spring survey to the summer, asking buyers: will COVID impact your IT budgets in 2020?
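A quick aside on the metric just defined: a minimal sketch of the Net Score arithmetic, assuming simple more/flat/less answers. ETR's actual methodology has more response categories than this, so treat it as an approximation:

```python
def net_score(responses):
    """Percentage of buyers spending more minus percentage spending less."""
    n = len(responses)
    pct_more = sum(r == "more" for r in responses) / n
    pct_less = sum(r == "less" for r in responses) / n
    return 100.0 * (pct_more - pct_less)

print(net_score(["more", "more", "flat", "less", "more"]))  # -> 40.0
```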
In the latest COVID survey, 78% say yes; that's up from 63%. The bar chart below that answers your next obvious question, which is: how will your budget be impacted? And you can see the distribution: 22% say no change, but the red bars, the declines, are much bigger than the green bars, and that's why we continue to forecast IT spending declines of five to eight percent in 2020. We think this is even going to spill into the first half of 2021; who knows, we'll see if it goes beyond. Now let's put the cloud in context. Despite my dire outlook, we have to remember that it's all relative. This chart shows one of our favorite views: it plots Net Score, or spending momentum, on the vertical axis against Market Share on the horizontal axis. Market Share is a measure of pervasiveness in the survey, and it calculates the penetration of the sector as a percent of the overall survey. So this view tells us the degree of spending momentum on the vertical axis, where cloud is elevated relative to the other sectors we're showing here, and the penetration of cloud in the data set on the horizontal axis. Cloud shows spending momentum and high penetration relative to other priorities in IT. Note there are dozens of other sectors, but we've cherry-picked a few here for context. To wit: other than containers, AI, and RPA, cloud is outpacing all sectors shown in Net Score, and only analytics, BI, and big data is more pervasive. So cloud is very strong, no doubt; cloud is the place to be. But the pandemic has created spending friction even in cloud, as we showed earlier with the decline of the Net Scores for the big three, again, still holding at elevated levels. What this chart shows is the sectors of infrastructure as a service that show increasing Net Scores relative to the last survey, and you can see there are only five areas that show a positive increase in Net Score, out of dozens and dozens. Reading the bars left to right, you see VMware Cloud on AWS with a very impressive Net Score of 66%, up 700 basis points since the last survey. Next you see Red Hat OpenShift, with a 44% Net Score, up 600 basis points, and then VMware Cloud, which comprises VMware Cloud Foundation and other hybrid and multi-cloud services from VMware; it shows a Net Score of 42%, up 400 basis points. After that is Red Hat OpenStack, yes, OpenStack, with a 40% Net Score, up 1,200 basis points since the last survey. Red Hat sells and supports its OpenStack distro. Prior to the IBM acquisition, Red Hat would frequently cite OpenStack as a growth business on its earnings calls, and this data confirms that there's actually some momentum there. As an example, Red Hat is selling into the telco sector, to service providers that want to stand up a private cloud. Why? Well, the big cloud players may not have a local presence, and there may be a data sovereignty requirement in that country; that's just one example. And then finally on the chart we have Oracle. For sure there's some SaaS in there, and Oracle's Net Score is really not inspiring at 12%, but it's up from the last survey. So these are the only five areas showing Net Score expansion from the last survey, which speaks to the impacts of COVID that we discussed earlier. Now let's take a look at a more granular set of data, cloud services and how they stack up. What we show here are the top ten cloud services measured by Net Score, or spending momentum.
This is for the July survey of respondents. The first point is that these are solid Net Scores. So while I'm a bit of a Davey Downer today, these are very strong relative to most other parts of the technology stack; most companies would kill to have this type of momentum. You see Azure Functions and the Azure platform lead the pack, but look at VMware Cloud on AWS: we've seen this popping up, showing strong in recent surveys, and it's gaining presence and momentum in the data set. Then there's AWS Lambda, functions or serverless, which remains strong, as you can see, as does Google Functions. And there's AWS, the AWS overall number, and even though it's a bit off in Net Score terms from previous quarters, as we'll discuss in a moment, this is a 40-billion-dollar business with Net Scores that remain elevated. Remember, Net Scores can't grow to the moon; they're going to fluctuate, and the larger the base, the harder it is to maintain a high Net Score, so this is very impressive for AWS. Google Cloud Platform is next, and frankly, I'd like to see stronger Net Scores from Google. GCP is around an eighth of the size of AWS, yet AWS still maintains a notably higher Net Score in each survey; Google continues to struggle with selling into the enterprise. Now look at the last three in the chart. Cloud purists like AWS might say that these hybrid or multi-cloud services aren't real cloud, but to me, this is a customer survey: if the customer says they're cloud, I'm going to go with that. Forgetting about the semantics, the point is, we've been talking about hybrid and multi-cloud for a while, and we see VMware and Red Hat with OpenShift, two companies that we've predicted are in a strong position to compete for hybrid and multi, showing up on customers' spending radar. I should also mention that Microsoft is also a leader, if not the leader, in hybrid multi-cloud, because it has a massive public cloud presence and numerous relevant services, particularly in the hybrid space, but they don't show up as discrete services in the ETR taxonomy; they are in the numbers for sure, probably just peanut-butter-spread over a number of categories. Now let's put this into context. Here's our old friend the XY graph, one of our favorites. This time we show specific named vendors in cloud: the Y axis is Net Score, or spending velocity, and the X axis is Market Share, or pervasiveness. As usual, we see AWS and Azure separating from the pack. This is such a huge market that it's really not a winner-takes-all space, maybe not even winner-takes-most. And as you can see with the players we've highlighted in the hybrid multi-cloud zone (Google is kind of on that bubble), any player here with a Net Score above 40%, in the green in the upper right-hand corner, is doing well: Red Hat, VMware Cloud, Google. And look at VMware Cloud on AWS: this service is getting a lot of traction, and it better, given the effort that both companies have put behind it. AWS has created a special bare-metal instance to run this service on its cloud; VMware talks about AWS as its preferred partner. This has been a winner for both companies: AWS gets access to half a million VMware customers, and VMware gets a really solid cloud play. Where this goes in the future is going to be interesting to watch; when this service was announced several years ago, it didn't take long for AWS to also announce its VMware migration services. But for now it's a win-win for the
companies, and a win for the customers. Now, for context, we've included both Oracle and IBM cloud services, and you can see where they stand relative to the rest. They're not setting the world on fire, but hey, as I've said many times, they at least are in the cloud game, and importantly, both companies are in a good position to migrate their customers' mission-critical workloads to their own respective clouds. All right, I want to wrap by looking at the big three's performance this quarter. As has been our custom, we like to share our estimates of how the big three U.S. cloud players stack up from a revenue standpoint. This chart shows our IaaS and PaaS revenue estimates for AWS, Azure, and Google Cloud Platform. The data shows 2018 and 2019 growth and the first two quarters of 2020, with a trailing-12-month view, and here are the key points. As always, remember that AWS reports clean numbers; for the others, we have to squint through 10-Ks and 10-Qs and triangulate with survey data to come up with a reasonable apples-to-apples estimate. First point: AWS is now a $40 billion business, and combined, the big three now account for nearly $70 billion in IaaS and PaaS revenue. That's more than a sizable chunk of the data center business. Second, this hasn't all been incremental growth to the IT market; there's been a share shift going on, and that share shift is going from on-prem into the cloud. The third point is that growth is strong but, not surprisingly, the bigger you get, the slower the growth rate. In 2018, AWS revenue was 2.7 times greater than that of Microsoft; for the first time, however, AWS revenue has dropped below 2x that of Microsoft. Said another way, Microsoft's IaaS revenue is now about 57% of AWS's revenue. Google's growth rate, at its size, appears to be lagging where AWS's and Azure's growth was at earlier points in their respective journeys. For example, when AWS put up nearly $8 billion in revenue in 2015, it grew over 70% that year; Azure, as you can see, at $16 billion in 2019, grew at 65%. Google grew 72% last quarter and 59% this quarter, so it's no slouch, but at its size, with its resources, we'd like to see Google pick up the pace, and you may have to wait until post-COVID for that. But despite the COVID headwinds in the overall IT market, there's no question that this is a cloud world, and we just happen to live in it. [Music]
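As a quick sanity check on the ratios quoted in that last segment, here is the arithmetic, illustrative only, using the estimates stated above:

```python
aws_ttm = 40.0       # $B, AWS trailing-12-month IaaS/PaaS estimate from the text
azure_ratio = 0.57   # Azure stated as ~57% of AWS's IaaS revenue

print(f"Implied Azure TTM: ~${aws_ttm * azure_ratio:.1f}B")  # ~$22.8B
print(f"Implied AWS/Azure ratio: {1 / azure_ratio:.2f}x")    # ~1.75x, below 2x
```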
Day 2 Livestream | Enabling Real AI with Dell
>>From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >>Hey, welcome back everybody, Jeff Frick here with theCUBE. We're doing a special presentation today, really talking about AI and making AI real, with two companies that are right in the heart of it: Dell EMC as well as Intel. So we're excited to have a couple of Cube alumni back on the program; haven't seen them in a little while. First off, from Intel, Lisa Spelman. She is the corporate VP and GM for the Xeon and Memory Group. Great to see you, Lisa. >>Good to see you again, too. >>And we've got Ravi Pendekanti. He is the SVP of server product management, also from Dell Technologies. Ravi, great to see you as well. >>Good to see you as well, of course. >>Yes. So let's jump into it. Yesterday, Ravi, you guys announced a bunch of new AI-based solutions; I wonder if you can take us through that.
>>It's really it's really wild, right? Cause in process, right, you always move through your next point of failure. So, you know, having all these kind of accelerants and the ways that you can carve off parts of the workload part of the intelligence that you can optimize betters is so important as you said Lisa and also Rocket and the solution side. Nobody wants General Ai just for ai sake. It's a nice word. Interesting science experiment. But it's really in the applied. A world is. We're starting to see the value in the application of this stuff, and I wonder you have a customer. You want to highlight Absalon, tell us a little bit about their journey and what you guys did with them. >>Great, sure. I mean, if you didn't start looking at Epsilon there in the market in the marketing business, and one of the crucial things for them is to ensure that they're able to provide the right data. Based on that analysis, there run on? What is it that the customer is looking for? And they can't wait for a period of time, but they need to be doing that in the near real time basis, and that's what excellent does. And what really blew my mind was the fact that they actually service are send out close to 100 billion messages. Again, it's 100 billion messages a year. And so you can imagine the amount of data that they're analyzing, which is in petabytes of data, and they need to do real time. And that's all possible because of the kind of analytics we have driven into the power It silver's, you know, using the latest of the Intel Intel Xeon processor couple with some of the technologies from the BGS side, which again I love them to go back in and analyze this data and service to the customers very rapidly. >>You know, it's funny. I think Mark Tech is kind of an under appreciated ah world of ai and, you know, in machine to machine execution, right, That's the amount of transactions go through when you load a webpage on your site that actually ideas who you are you know, puts puts a marketplace together, sells time on that or a spot on that ad and then lets people in is a really sophisticated, as you said in massive amounts of data going through the interesting stuff. If it's done right, it's magic. And if it's done, not right, then people get pissed off. You gotta have. You gotta have use our tools. >>You got it. I mean, this is where I talked about, you know, it can be garbage in garbage out if you don't really act on the right data. Right. So that is where I think it becomes important. But also, if you don't do it in a timely fashion, but you don't service up the right content at the right time. You miss the opportunity to go ahead and grab attention, >>right? Right. Lisa kind of back to you. Um, you know, there's all kinds of open source stuff that's happening also in the in the AI and machine learning world. So we hear things about tense or flow and and all these different libraries. How are you guys, you know, kind of embracing that world as you look at ai and kind of the development. We've been at it for a while. You guys are involved in everything from autonomous vehicles to the Mar Tech. Is we discussed? How are you making sure that these things were using all the available resources to optimize the solutions? >>Yeah, I think you and Robbie we're just hitting on some of those examples of how many ways people have figured out how to apply AI now. So maybe at first it was really driven by just image recognition and image tagging. 
But now you see so much work being driven in recommendation engines and an object detection for much more industrial use cases, not just consumer enjoyment and also those things you mentioned and hit on where the personalization is a really fine line you walk between. How do you make an experience feel good? Personalized versus creepy personalized is a real challenge and opportunity across so many industries. And so open source like you mentioned, is a great place for that foundation because it gives people the tools to build upon. And I think our strategy is really a stack strategy that starts first with delivering the best hardware for artificial intelligence and again the other is the foundation for that. But we also have, you know, Milat type processing for out of the Edge. And then we have all the way through to very custom specific accelerators into the data center, then on top about the optimized software, which is going into each of those frameworks and doing the work so that the framework recognizes the specific acceleration we built into the CPU. Whether that steel boost or recognizes the capabilities that sit in that accelerator silicon, and then once we've done that software layer and this is where we have the opportunity for a lot of partnership is the ecosystem and the solutions work that Robbie started off by talking about. So Ai isn't, um, it's not easy for everyone. It has a lot of value, but it takes work to extract that value. And so partnerships within the ecosystem to make sure that I see these are taking those optimization is building them in and fundamentally can deliver to customers. Reliable solution is the last leg of that of that strategy, but it really is one of the most important because without it you get a lot of really good benchmark results but not a lot of good, happy customer, >>right? I'm just curious, Lee says, because you kind of sit in the catbird seat. You guys at the core, you know, kind of under all the layers running data centers run these workloads. How >>do you see >>kind of the evolution of machine learning and ai from kind of the early days, where with science projects and and really smart people on mahogany row versus now people are talking about trying to get it to, like a citizen developer, but really a citizen data science and, you know, in exposing in the power of AI to business leaders or business executioners. Analysts, if you will, so they can apply it to their day to day world in their day to day life. How do you see that kind of evolving? Because you not only in it early, but you get to see some of the stuff coming down the road in design, find wins and reference architectures. How should people think about this evolution? >>It really is one of those things where if you step back from the fundamentals of AI, they've actually been around for 50 or more years. It's just that the changes in the amount of computing capability that's available, the network capacity that's available and the fundamental efficiency that I t and infrastructure managers and get out of their cloud architectures as allowed for this pervasiveness to evolve. And I think that's been the big tipping point that pushed people over this fear. 
Of course, I went through the same thing that cloud did where you had maybe every business leader or CEO saying Hey, get me a cloud and I'll figure out what for later give me some AI will get a week and make it work, But we're through those initial use pieces and starting to see a business value derived from from those deployments. And I think some of the most exciting areas are in the medical services field and just the amount, especially if you think of the environment we're in right now. The amount of efficiency and in some cases, reduction in human contact that you could require for diagnostics and just customer tracking and ability, ability to follow their entire patient History is really powerful and represents the next wave and care and how we scale our limited resource of doctors nurses technician. And the point we're making of what's coming next is where you start to see even more mass personalization and recommendations in that way that feel very not spooky to people but actually comforting. And they take value from them because it allows them to immediately act. Robbie reference to the speed at which you have to utilize the data. When people get immediately act more efficiently. They're generally happier with the service. So we see so much opportunity and we're continuing to address across, you know, again that hardware, software and solution stack so we can stay a step ahead of our customers, >>Right? That's great, Ravi. I want to give you the final word because you guys have to put the solutions together, it actually delivering to the customer. So not only, you know the hardware and the software, but any other kind of ecosystem components that you have to bring together. So I wonder if you can talk about that approach and how you know it's it's really the solution. At the end of the day, not specs, not speeds and feeds. That's not really what people care about. It's really a good solution. >>Yeah, three like Jeff, because end of the day I mean, it's like this. Most of us probably use the A team to retry money, but we really don't know what really sits behind 80 and my point being that you really care at that particular point in time to be able to put a radio do machine and get your dollar bills out, for example. Likewise, when you start looking at what the customer really needs to know, what Lisa hit upon is actually right. I mean what they're looking for. And you said this on the whole solution side house. To our our mantra to this is very simple. We want to make sure that we use the right basic building blocks, ensuring that we bring the right solutions using three things the right products which essentially means that we need to use the right partners to get the right processes in GPU Xen. But then >>we get >>to the next level by ensuring that we can actually do things we can either provide no ready solutions are validated reference architectures being that you have the sausage making process that you now don't need to have the customer go through, right? In a way. We have done the cooking and we provide a recipe book and you just go through the ingredient process of peering does and then off your off right to go get your solution done. And finally, the final stages there might be helped that customers still need in terms of services. That's something else Dell technology provides. And the whole idea is that customers want to go out and have them help deploying the solutions. We can also do that we're services. So that's probably the way we approach our data. 
The way we approach, you know, providing the building blocks are using the right technologies from our partners, then making sure that we have the right solutions that our customers can look at. And finally, they need deployment. Help weaken due their services. >>Well, Robbie, Lisa, thanks for taking a few minutes. That was a great tee up, Rob, because I think we're gonna go to a customer a couple of customer interviews enjoying that nice meal that you prepared with that combination of hardware, software, services and support. So thank you for your time and a great to catch up. All right, let's go and run the tape. Hi, Jeff. I wanted to talk about two examples of collaboration that we have with the partners that have yielded Ah, really examples of ah put through HPC and AI activities. So the first example that I wanted to cover is within your AHMAD team up in Canada with that team. We collaborated with Intel on a tuning of algorithm and code in order to accelerate the mapping of the human brain. So we have a cluster down here in Texas called Zenith based on Z on and obtain memory on. And we were able to that customer with the three of us are friends and Intel the norm, our team on the Dell HPC on data innovation, injuring team to go and accelerate the mapping of the human brain. So imagine patients playing video games or doing all sorts of activities that help understand how the brain sends the signal in order to trigger a response of the nervous system. And it's not only good, good way to map the human brain, but think about what you can get with that type of information in order to help cure Alzheimer's or dementia down the road. So this is really something I'm passionate about. Is using technology to help all of us on all of those that are suffering from those really tough diseases? Yeah, yeah, way >>boil. I'm a project manager for the project, and the idea is actually to scan six participants really intensively in both the memory scanner and the G scanner and see if we can use human brain data to get closer to something called Generalized Intelligence. What we have in the AI world, the systems that are mathematically computational, built often they do one task really, really well, but they struggle with other tasks. Really good example. This is video games. Artificial neural nets can often outperform humans and video games, but they don't really play in a natural way. Artificial neural net. Playing Mario Brothers The way that it beats the system is by actually kind of gliding its way through as quickly as possible. And it doesn't like collect pennies. For example, if you play Mary Brothers as a child, you know that collecting those coins is part of your game. And so the idea is to get artificial neural nets to behave more like humans. So like we have Transfer of knowledge is just something that humans do really, really well and very naturally. It doesn't take 50,000 examples for a child to know the difference between a dog and a hot dog when you eat when you play with. But an artificial neural net can often take massive computational power and many examples before it understands >>that video games are awesome, because when you do video game, you're doing a vision task instant. You're also doing a >>lot of planning and strategy thinking, but >>you're also taking decisions you several times a second, and we record that we try to see. Can we from brain activity predict >>what people were doing? We can break almost 90% accuracy with this type of architecture. 
>> She was the lead postdoc on this collaboration with Dell and Intel. She's working on a model called graph convolutional neural nets. >> We have been using, like, two computing systems, to compare how the performance was evolving. >> The lab relies on both servers that we have internally here, so I have a GPU server, but what we really rely on is Compute Canada, and Compute Canada is just not powerful enough to be able to run the models that she was trying to run, so it would take her days, weeks, it would crash, we would have to wait in line. Dell was visiting, and I was invited into the meeting very kindly, and they told us that they had started working with a new type of hardware to train our neural nets. Dell is using traditional CPUs, pairing them with a new type of memory developed by Intel, which they have also, with their new CPU architectures, really optimized to do deep learning. So all of that sounded great, because we had this problem: we ran out of memory. >> The innovation lab having access to experts to help answer questions immediately, that's not something you get everywhere. >> We were able to train the architecture within 20 minutes, but before, to do the same thing on the GPU, we needed to wait almost three hours for each run. Similarly, we were able to train the graph convolutional neural net. Dell has been really great, because anytime we need more memory, we send an email, and Dell says, yeah, sure, no problem, we'll extend it; how much memory do you need? It's been really simple from our end, and I think it's really great to be at the edge of science and technology. We're not just doing the same old; we're pushing the boundaries. Often we don't know where we're going to be in six months. In the big data world, computing power makes a big difference. >> The second example I'd like to cover is the one that we call the data accelerator. That's a partnership that we have with the University of Cambridge in England. There we partnered with Intel and Cambridge, and we built what was, at the time, the number one IO500 storage solution. And it's pretty amazing, because it was built on standard building blocks: PowerEdge servers, Intel Xeon processors, and NVMe drives from our partners at Intel. And what we did is pair this system with very, very smart and elaborate software code that gives ultra-fast performance for customers who are looking for a fast front-end scratch tier for their HPC storage solutions. We're also very mindful that this innovation is great for others to leverage, so the software code will soon be available on GitHub. And, as I said, this was number one on the IO500 when it was initially released. >> Within Cambridge, we've always had a focus on opening up our technologies to UK industry, where we can encourage UK companies to take advantage of advanced research computing technologies. We have many customers in the fields of automotive, oil and gas, and life sciences who find our systems really help them accelerate their product development process. I'm Dr. Paul Calleja, the director of research computing at Cambridge University. We are a research computing cloud provider, but the emphasis is on the consulting and the processes around how to exploit that technology, rather than the bare results. Our value is in how we help businesses use advanced computing resources, rather than the provision of those resources.
We see increasingly more and more data being produced across a wide range of verticals: life sciences, astronomy, manufacturing. So the data accelerator was created as a component within our data center compute environment. Data processing is becoming a more and more central element within research computing. We're getting very large data sets; traditional spinning-disk file systems can't keep up, and we find applications being slowed down because they're starved of data. So the data accelerator was born to take advantage of new solid-state storage devices. We tried to work out how we could have a staging mechanism: keeping your data on spinning disk when it's not required, and pre-staging it on fast NVMe storage devices so they can feed the applications at the rate required for maximum performance. So we have the highest AI capability available anywhere in the UK, where we match AI compute performance with very high storage performance, because for AI, high-performance storage is a key element to get the performance up. Currently, the data accelerator is the fastest HPC storage system in the world; we are able to obtain 500 gigabytes a second read/write, with IOPS up in the 20 million range. We provide advanced computing technologies that allow some of the brightest minds in the world to really push scientific and medical research. We enable some of the greatest academics in the world to make tomorrow's discoveries. >> All right, welcome back, Jeff Frick here, and we're excited for this next segment. We're joined by Jeremy Raider. He is the GM of digital transformation and scale solutions for Intel Corporation. Jeremy, great to see you. >> Hey, thanks for having me. >> I love the flowers in the backyard. I thought maybe you ran over to the Japanese Garden or the Rose Garden, right? Two very beautiful places to visit in Portland. >> Yeah, you know, you only get them for a couple, ah, couple weeks here, so we got the timing just right. >> Excellent. All right, so let's jump into it. This conversation really is all about making AI real. And you guys are working with Dell, and not only Dell, right? There's the hardware and software, but also a lot of these smaller solution providers. So what are some of the key attributes that it takes to make AI real for your customers out there? >> Yeah, so, you know, it's a complex space. So when you can bring the best of the Intel portfolio, which is expanding a lot, you know, it's not just the CPU anymore; you're getting into memory technologies, network technologies, and, a little less known, how many resources we have focused on the software side of things, optimizing frameworks and these key ingredients and libraries that you can stitch into that portfolio to really get more performance and value out of your machine learning and deep learning space. And so what we've really done here with Dell is start to bring a bunch of that portfolio together with Dell's capabilities, and then bring in that AI ISV partner, that software vendor, where we can really stitch things together and bring the most value out of that broad portfolio, ultimately reducing the complexity of what it takes to deploy an AI capability. So a lot going on there, bringing kind of the three-legged stool of the software vendor, the hardware vendor, and Dell into the mix, and you get a really strong outcome. >> Right.
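A minimal sketch of the framework-level CPU tuning Jeremy is alluding to, assuming a stock TensorFlow install on a Xeon host; the thread counts are illustrative placeholders, not recommendations:

```python
import tensorflow as tf

# Pin TensorFlow's thread pools to match the host's core layout.
# Real deployments profile first; 16/2 here is purely illustrative.
tf.config.threading.set_intra_op_parallelism_threads(16)  # threads within one op
tf.config.threading.set_inter_op_parallelism_threads(2)   # ops run concurrently

# A trivial model, just to exercise the configured CPU runtime.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```

Library-level work of the kind he describes, optimized kernels sitting underneath calls like these, is what turns the same model code into less training time on the same silicon.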
So before we get to the solutions piece, let's stick a little bit into the Intel world. And I don't know if a lot of people are aware that, obviously, you guys make CPUs, and you've been making great CPUs forever, but there's a whole lot more stuff that you've added, you know, kind of around the core CPU, in terms of actual libraries and ways to really optimize the Xeon processors to operate in an AI world. I wonder if you can take us a little bit below the surface on how that works. What are some examples of things you can do to get more from your standard Intel processors for AI-specific applications and workloads? >> Yeah, well, you know, there's a ton of software optimization that goes into this. Having the great CPU is definitely step one, but ultimately you want to get down into the libraries, like TensorFlow; we have data analytics acceleration libraries. That really allows you to get kind of under the covers a little bit and look at how we get the most out of the kinds of capabilities that are ultimately used in machine learning and deep learning, and then bring that forward and enable it with our software vendors, so that they can take advantage of those acceleration components and ultimately, you know, get to less training time, or it could be a cost factor. Those are the kinds of capabilities we want to expose to software vendors through these kinds of partnerships. >> Okay, ah, and that's terrific. And I do think that's a big part of the story that a lot of people are probably not as aware of: that there are a lot of these optimization opportunities that you guys have been leveraging for a while. So, shifting gears a little bit, right, AI and machine learning is all about the data. And in doing a little research for this, I found actually you on stage talking about some company that had, like, 315 petabytes of data, 140,000 sources of that data, and, I think, probably not a great quote, six months access time to actually get at it and work with it. And the company you were referencing was Intel. So you guys know a lot about data, managing data, everything from your manufacturing, and obviously supporting a global organization for IT, and, ah, a lot of complexity and secrets and good stuff. So, you know, what have you guys leveraged as Intel in the way you work with data and getting a good data pipeline, that's enabling you to kind of put that into these other solutions that you're providing to the customers? >> Right. Well, you know, it's absolutely a journey, and it doesn't happen overnight; that's what we've seen at Intel, and we see it with many of our customers that are on the same journey that we've been on. And so, you know, this idea of building that pipeline, it really starts with what kind of problems you're trying to solve. What are the big issues that are holding you back as a company? Where do you see that competitive advantage that you're trying to get to? And then ultimately, how do you build the structure to enable the right kind of pipeline for that data? Because that's what machine learning and deep learning is: that data journey.
So really, a lot of focus around, you know, how we can understand those business challenges, bring forward those kinds of capabilities along the way, through to where we structure our entire company around those assets, and then ultimately some of the partnerships that we're going to be talking about: these companies that are out there to help us really squeeze the most out of that data as quickly as possible, because otherwise it goes stale real fast, sits on the shelf, and you're not getting that value out of it, right? So, yeah, we've been on the journey. It's, ah, it's a long journey, but ultimately we can take a lot of those learnings and apply them to our silicon technology and the software optimizations that we're doing, and ultimately to how we talk to our enterprise customers about how they can overcome some of the same challenges that we did. >> Well, let's talk about some of those challenges specifically, because, you know, I think part of the challenge, the thing that kind of knocked big data, if you will, and Hadoop, if you will, kind of off the rails a little bit, was there's a whole lot that goes into it besides just doing the analysis. There's a lot of data practice, data collection, data organization, a whole bunch of things that have to happen before you can actually start to do the sexy stuff of AI. So, you know, what are some of those challenges? How are you helping people get over kind of these baby steps before they can really get into the deep end of the pool? >> Yeah, well, you know, one is you have to have the resources. So, you know, do you even have the resources, and if you can acquire those resources, can you keep them interested in the kind of work that you're doing? So that's a big challenge, and actually we'll talk about how that fits into some of the partnerships that we've been establishing in the ecosystem. It's also that you get stuck in this POC do-loop, right? You finally get those resources, and they start to get access to that data that we talked about. They start to play out some scenarios, they theorize a little bit, maybe they show you some really interesting value, but it never seems to make its way into full production mode. And I think that is a challenge that has faced so many enterprises that are stuck in that loop. And so that's where we look at who's out there in the ecosystem that can help more readily move through that whole process of the evaluation that proves the ROI, the POC, and ultimately move that capability into production mode as quickly as possible. That, you know, that to me is one of those fundamental aspects: if you're stuck in the POC, nothing's happening from this; this is not helping your company. We want to move things more quickly. >> Right, right. And let's just talk about some of these companies that you guys are working with, that you've got some reference architectures with: DataRobot, Grid Dynamics, H2O just down the road. So a lot of the companies we've worked with with theCUBE. And I think, you know, another part that's interesting, and again we can learn from kind of the old days of big data, is kind of generalized AI versus solution-specific AI. And I think, you know, where there's a real opportunity is not AI for AI's sake, but really, it's got to be applied to a specific solution, a specific problem, so that you have, you know, better chatbots, a better customer service experience, you know, better something.
So when you were working with these folks and trying to design solutions, what were some of the opportunities that you saw to work with some of these folks, to now have an applied application slash solution, versus just kind of AI for AI's sake? >> Yeah, I mean, that could be anything from fraud detection in financial services, or even taking a step back and looking more horizontally, like back to that data challenge. If you're stuck at "I built a fantastic data lake, but I haven't been able to pull anything back out of it," who are some of the companies that are out there that can help overcome some of those big data challenges, and ultimately get you to where, you know, you don't have a data scientist spending 60% of their time on data acquisition and pre-processing? That's not where we want them, right? We want them on building out that next theory, we want them on looking at the next business challenge, we want them on selecting the right models, but ultimately they have to do that as quickly as possible, so that they can move that capability forward into the next phase. So really, it's about that connection of looking at those problems or challenges across the whole pipeline. And these companies, like DataRobot and H2O and the others, are all addressing specific challenges in the end-to-end. That's why they've kind of bubbled up as the ones that we want to continue to collaborate with, because they can help enterprises overcome those issues more readily. >> Great. Well, Jeremy, thanks for taking a few minutes and giving us the Intel side of the story. It's a great company; it's been around forever. I worked there many, many moons ago; that's, ah, that's a story for another time, but really appreciate it, and we'll leave it there. >> All right, so, super, thanks a lot. >> So he's Jeremy, I'm Jeff Frick. So now it's time to go ahead and jump into the crowd chat. It's crowdchat.net/makeaireal. We'll see you in the chat, and thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Jeremy | PERSON | 0.99+ |
Lisa Spelman | PERSON | 0.99+ |
Canada | LOCATION | 0.99+ |
Texas | LOCATION | 0.99+ |
Robbie | PERSON | 0.99+ |
Lee | PERSON | 0.99+ |
Portland | LOCATION | 0.99+ |
Xeon Group | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Ravi | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
UK | LOCATION | 0.99+ |
60% | QUANTITY | 0.99+ |
Jeremy Raider | PERSON | 0.99+ |
Ravi Pinter | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
20 million | QUANTITY | 0.99+ |
Mar Tech | ORGANIZATION | 0.99+ |
50,000 examples | QUANTITY | 0.99+ |
Rob | PERSON | 0.99+ |
Mario Brothers | TITLE | 0.99+ |
six months | QUANTITY | 0.99+ |
Antigua | LOCATION | 0.99+ |
University of Cambridge | ORGANIZATION | 0.99+ |
Jersey | LOCATION | 0.99+ |
140,000 sources | QUANTITY | 0.99+ |
six participants | QUANTITY | 0.99+ |
315 petabytes | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
two companies | QUANTITY | 0.99+ |
500 gigabytes | QUANTITY | 0.99+ |
AHMAD | ORGANIZATION | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
each | QUANTITY | 0.99+ |
Cube Studios | ORGANIZATION | 0.99+ |
first example | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
Memory Group | ORGANIZATION | 0.99+ |
two examples | QUANTITY | 0.99+ |
Cambridge University | ORGANIZATION | 0.98+ |
Rose Garden | LOCATION | 0.98+ |
today | DATE | 0.98+ |
both servers | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
Boston | LOCATION | 0.98+ |
Intel Corporation | ORGANIZATION | 0.98+ |
Khalidiya | PERSON | 0.98+ |
second example | QUANTITY | 0.98+ |
one task | QUANTITY | 0.98+ |
80 | QUANTITY | 0.98+ |
intel | ORGANIZATION | 0.97+ |
Epsilon | ORGANIZATION | 0.97+ |
Rocket | PERSON | 0.97+ |
both | QUANTITY | 0.97+ |
Cube | ORGANIZATION | 0.96+ |
Ron Cormier, The Trade Desk | Virtual Vertica BDC 2020
>> David: It's theCUBE, covering the virtual Vertica Big Data Conference 2020, brought to you by Vertica. Hello everybody, welcome to this special digital presentation of theCUBE. We're tracking the Vertica virtual Big Data Conference; this is theCUBE's, I think, fifth year doing the BDC. We've been to every Big Data Conference that they've held, and we're really excited to be helping with the digital component here in these interesting times. Ron Cormier is here, principal database engineer at The Trade Desk. Ron, great to see you. Thanks for coming on. >> Hi, David, my pleasure, good to see you as well. >> So we're talking a little bit about your background. You're basically a Vertica and database guru, but tell us about your role at The Trade Desk, and then I want to get into a little bit about what The Trade Desk does. >> Sure, so I'm a principal database engineer at The Trade Desk. The Trade Desk was one of my customers when I was working at HP, as a member of the Vertica team, and I joined The Trade Desk in early 2016. And since then, I've been working on building out their Vertica capabilities and expanding the data warehouse footprint in an ever-growing database technology, data volume environment. >> And The Trade Desk is an ad tech firm, and you are specializing in real-time ad serving and pricing. And I guess real time, you know, people talk about real time a lot; we define real time as before you lose the customer. Maybe you can talk a little bit about, you know, The Trade Desk and the business, and maybe how you define real time. >> Totally. So, to give everybody kind of a frame of reference: anytime you pull up your phone or your laptop and you go to a website or you use some app and you see an ad, what's happening behind the scenes is an auction is taking place, and people are bidding on the privilege to show you an ad. And across the open Internet, this happens seven to 13 million times per second. And so the ads, the whole auction dynamic, and the display of the ad need to happen really fast. So that's about as real time as it gets outside of high-frequency trading, as far as I'm aware. So The Trade Desk participates in those auctions; we bid on behalf of our customers, which are ad agencies, and the agencies represent brands, so the agencies are the Mad Men companies of the world, and they have brands under their guidance, and so they give us budget to spend, to place the ads and to display them. So we bid on hundreds of thousands of auctions per second, and once we make those bids, anytime we do make a bid, some data flows into our data platform, which is powered by Vertica. So we're getting hundreds of thousands of events per second. We have other events that flow into Vertica as well. And we clean them up, we aggregate them, and then we run reports on the data. And we run about 40,000 reports per day on behalf of our customers. The reports aren't as real time as I was talking about earlier; they're more batch oriented. Our customers like to see big chunks of time, like a whole day or a whole week or a whole month, on a single report. So we wait for that time period to complete, and then we run the reports on the results. >> So you have one of the largest commercial infrastructures in the Big Data sphere. Paint a picture for us. I understand you've got a couple of, like, 320-node clusters; we're talking about petabytes of data. But describe what your environment looks like. >> Sure, so like I said, we've been very good customers for a while.
And we started out with a bunch of enterprise clusters. So Enterprise Mode is the traditional Vertica deployment, where the compute and the storage are tightly coupled, all RAID arrays on the servers. And we had four of those, and we were doing okay, but our volumes are ever increasing; we wanted to store more data, and we wanted to run more reports in a shorter period of time, so we had to keep pushing. And so we had these four clusters, and then we started talking with Vertica about Eon mode, and that's Vertica's separation of compute and storage, where the compute and the storage can be scaled independently: we can add storage without adding compute, or vice versa, or we can add both. So that was something that we were very interested in, for a couple of reasons. One, our enterprise clusters were running out of disk, and adding disk is expensive. In Enterprise Mode, it's kind of a pain: you've got to add compute at the same time, so you kind of end up in an unbalanced place. So in Eon mode, that problem gets a lot better. We can add disk, infinite disk, because it's backed by S3. And we can add compute really easily; to scale the number of things that we run in parallel, the concurrency, we just add a subcluster. So they are in US East and US West of Amazon, so reasonably diverse. And the real benefit is that we can stop nodes when we don't need them. Our workload is fairly lumpy, I call it. Like, after the day completes, we do the final ingest and the final aggregation; we're ingesting and aggregating all day, but the final hour or so needs to be completed. And then once that's done, the number of reports that we need to run spikes up, it goes really high. And we run those reports: we spin up a bunch of extra compute on the fly, run those reports, and then spin it down. And we don't have to pay for that for the rest of the day. So Eon has been a nice boon for us, for both of those reasons. >> I'd love to explore Eon a little bit more. I mean, it's relatively new; I think in 2018 Vertica announced Eon mode, so it's only been out there a couple of years. So I'm curious, for the folks that haven't moved to Eon mode, which presumably they want to for the same reasons that you mentioned, why buy storage in big chunks if you don't have to, what were some of the challenges that you faced in going to Eon mode? What kind of things did you have to prepare for? Were there any out-of-scope expectations? Can you share that experience with us? >> Sure, so we were an early adopter. We participated in the beta program. I mean, I think it's fair to say we actually drove the requirements in a lot of ways, because we approached Vertica early on. So the challenges were what you'd expect any early adopter to be going through, the sort of getting things working as expected. I mean, there's a number of cases I could touch upon. Like, we found an inefficiency in the way that it accesses the data on S3; it was accessing the data too frequently, which ended up just being expensive. So our S3 bill went up pretty significantly for a couple of months. So that was a challenge, but we worked through it. Another area where Vertica recently made huge strides was the ability to stop and start nodes: to start them very quickly, and when they start, to not interfere with any running queries. So when we want to spin up a bunch of compute, there was a point in time when it would break certain queries that were already running.
So that was a challenge. But again, the Vertica team has been quite responsive to solving these issues, and now that's behind us. In terms of those who are looking to get started, there's a number of things to think about. Off the top of my head, there's sort of new configuration items that you'll want to think about, like instance type. So certainly Amazon has a variety of instances, and it's important to consider one of Vertica's architectural advantages in these areas: Vertica has this caching layer on the instances themselves. And what that does is, if we can keep the data in cache, what we've found is that the performance is basically the same as the performance of Enterprise Mode. So having a good-sized cache when needed really matters. We went with the i3 instance types, which have a lot of local NVMe storage, so we can cache data and get good performance. That's one thing to think about. The number of nodes, the instance type; certainly the number of shards is a sort of technical item that needs to be considered. It's how the data gets distributed. It's sort of a layer on top of the segmentation that some Vertica engineers will be familiar with. And probably, I mean, one of the big things that one needs to consider is how to get data into the database. So if you have an existing database, there's no sort of nice tool yet to suck all the data into an Eon database. I think they're working on that, but we're past that point; we got there. We had to export all our data out of the enterprise cluster, dump it out to S3, and then have the Eon cluster suck that data in. >> So, awesome advice. Thank you for sharing that with the community. So, but at the end of the day, it sounds like you had some learning to do, some tweaking to do, and obviously had to figure out how to get the data in. At the end of the day, was it worth it? What was the business impact? >> Yeah, it definitely was worth it for us. I mean, right now we have four times the data in our Eon cluster that we have in our enterprise clusters. We still run some enterprise clusters; we started with four at the peak, now we're down to two. So we have the two Eon clusters. So it's been, I think our business would say it's been a huge win. Like, we're doing things that we really never could have done before; accessing all that data on enterprise would have been really difficult. It would have required non-trivial engineering to do things like daisy-chaining clusters together, and then figuring out how to aggregate data across clusters, which would, again, be non-trivial. So we have all the data we want, we can continue to grow data, we're running reports on seasonality, so our customers can compare their campaigns last year versus this year, which is something we just haven't been able to do in the past. We've expanded that. So we grew the data vertically; we've expanded the data horizontally as well. So we're adding columns to our aggregates, we are enriching the data much more than we have in the past. So while we still have enterprise kicking around, I'd say our Eon clusters are doing the majority of the heavy lifting. >> And the cloud was part of the enablement here, particularly with scale, is that right? And are you running certain... >> Definitely. >> And you are running on prem as well, or are you in a hybrid mode? Or is it all AWS? >> Great question, so yeah. When I've been speaking about enterprise, I've been referring to on prem.
So we have physical machines in data centers. So yeah, we are running a hybrid now. And, I mean, it's really hard to get, like, an apples-to-apples direct comparison of enterprise on prem versus Eon in the cloud. One thing that I touched upon in my presentation is, if I try to get apples to apples, and I think about how I would run the entire workload on enterprise or on Eon, I tried to think about how many CPU cores we would need to do that. And basically, it would be about the same number of cores, I think, for enterprise on prem versus Eon in the cloud. However, the Eon nodes, half of those cores, only need to be running about six hours out of the day. So the other 18 hours, I can shut them down and not be paying for them, mostly. >> Interesting, okay. And so, I've got to ask you, I mean, notwithstanding the fact that you've got a lot invested in Vertica, and a lot of experience there, there are a lot of, you know, emerging cloud databases. Did you look at others? I mean, you know a lot about databases, not just Vertica; you're a database guru in many areas, you know, traditional RDBMS as well as MPP and the new cloud databases. What is it about Vertica that works for you in this specific sweet spot that you've chosen? What's really the difference there? >> Yeah, so I think the key difference is the maturity. I am familiar with a number of other database platforms in the cloud and otherwise, column stores specifically, that don't have the maturity that we're used to and that we need at our scale. So being able to specify alternate projections, different sort orders on my data, is huge. And there are other platforms where we don't have that capability. Vertica is, of course, the original column store, and they've had time to build up a lead in terms of their maturity and features, and I think that other column stores, cloud or otherwise, are playing a little bit of catch-up in that regard. Of course, Vertica is playing catch-up on the cloud side. But if I had to pick whether I wanted to write a column store from scratch, or a distributed cloud file system from scratch, I'd probably think it would be easier to write the cloud file system. The column store is where the real smarts are. >> Interesting. Let's talk a little bit about some of the challenges you have in reporting. You have a very dynamic nature of reporting; like I said, your clients want a time series, they don't just want a snapshot of a slice. But at the same time, your reporting is probably pretty lumpy, a very dynamic, you know, demand curve. So first of all, is that accurate? Can you describe that sort of dynamism, and how are you handling it? >> Yep, that's exactly right. It is lumpy, and that's the exact word that I use. So, like, at the end of the UTC day, when UTC midnight rolls around, that's when we do the final ingest and the final aggregate, and then the queue of reports that need to run spikes. So the majority of those 40,000 reports that we run per day are run in the four to six hours after that spike. And so that's when we need to have all the compute come online, and that's what helps us answer all those queries as fast as possible. And that's a big reason why Eon is an advantage for us, because the rest of the day we don't necessarily need all that compute, and we can shut it down and not pay for it.
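A minimal sketch of the daily spin-up/spin-down pattern Ron describes, assuming hypothetical EC2 instance IDs for the nodes of a reporting subcluster; the Vertica-side subcluster operations are omitted, and boto3 is used only for the instance lifecycle:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical instance IDs backing an Eon-mode reporting subcluster.
REPORT_NODES = ["i-0aa11bb22cc33dd44", "i-0ee55ff66aa77bb88"]

def start_report_compute():
    """Bring extra compute online just before the post-midnight report spike."""
    ec2.start_instances(InstanceIds=REPORT_NODES)
    ec2.get_waiter("instance_running").wait(InstanceIds=REPORT_NODES)

def stop_report_compute():
    """Shut the nodes down for the rest of the day; the S3-backed storage
    persists, so roughly 18 of every 24 hours of compute cost is avoided."""
    ec2.stop_instances(InstanceIds=REPORT_NODES)
```

Scheduling these two calls around UTC midnight captures the cost profile he outlines: about the same core count as on prem, but paid for only about six hours a day.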
>> So Ron, I wonder if you could share with us, just sort of to wrap here, where you want to take this. You're obviously very close to Vertica; are you driving them hard on Eon mode? You mentioned before that the ability to load data into Eon mode would have been nice for you, but I guess you're kind of over that hump. But what are the kinds of things, if Colin Mahony is here in the room, what are you telling him that you want the engineering team at Vertica to work on that would make your life better? >> I think the things that need the most attention sort of near term are just smoothing out some of the edges, making the cloud aspects of it a little bit more seamless. So our goal is to be able to start instances and have them join the cluster in less than five minutes. We're not quite there yet. If you look at some of the other cloud database platforms, they're beating that, so I know the team is working on it. Some of the other things are around control. Like I mentioned, while we like the control in the column store, we also want control on the cloud side of things, in terms of being able to dedicate clusters: we can pin workloads against a specific subcluster and take advantage of the cache that's over there. We can say, okay, this resource pool... I mean, the subcluster is a relatively new concept for Vertica, so being able to have control of many things at the subcluster level: resource pools, configuration parameters, and so on. >> Yeah, so, I mean, I personally have always been impressed with Vertica and their ability to sort of ride the wave and adopt new trends. I mean, they do have a robust stack; it's been around 10-plus years. They embraced Hadoop, they're embracing machine learning, and we've been talking about the cloud. So I actually have a lot of confidence in them, especially when you compare them to other sort of mid-last-decade MPP column stores that came out; you know, Vertica is one of the few remaining, certainly as an independent brand. So I think that speaks to the team there and the engineering culture. But you give the final word, just final thoughts on your role, the company, Vertica, wherever you want to take it. >> Yeah, no, I mean, we're really appreciative, and we value the partners that we have, and so I think it's been a win-win. Like, I know that we have some data that got pulled into their test suite. So I think it's been a win-win for both sides, and it'll be a win for other Vertica customers and prospects, knowing that they're working with some of the highest volume, velocity, and variety data that (mumbles). >> Well, Ron, thanks for coming on. I wish we could have met face to face at the Encore in Boston. I think next year we'll be able to do that, but I appreciate that technology allows us to have these remote conversations. Stay safe, all the best to you and your family. And thanks again. >> My pleasure, David, good speaking with you. >> And thank you for watching, everybody. This is theCUBE's coverage of the Vertica virtual Big Data Conference. I'm Dave Vellante. We'll be right back right after this short break. (soft music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ron | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
Ron Cormier | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
40,000 reports | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
18 hours | QUANTITY | 0.99+ |
fifth year | QUANTITY | 0.99+ |
US | LOCATION | 0.99+ |
Dave volante | PERSON | 0.99+ |
next year | DATE | 0.99+ |
seven | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
2018 | DATE | 0.99+ |
less than five minutes | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
10 plus years | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
four | QUANTITY | 0.99+ |
early 2016 | DATE | 0.98+ |
apples | ORGANIZATION | 0.98+ |
two young clusters | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
both sides | QUANTITY | 0.98+ |
about six hours | QUANTITY | 0.98+ |
Cubes | ORGANIZATION | 0.98+ |
six hours | QUANTITY | 0.98+ |
US East | LOCATION | 0.98+ |
Hp | ORGANIZATION | 0.98+ |
Eon | ORGANIZATION | 0.96+ |
S3 | TITLE | 0.95+ |
13 million times per second | QUANTITY | 0.94+ |
half | QUANTITY | 0.94+ |
prime | COMMERCIAL_ITEM | 0.94+ |
four times | QUANTITY | 0.92+ |
hundreds of thousands of auctions | QUANTITY | 0.92+ |
mid last decade | DATE | 0.89+ |
one thing | QUANTITY | 0.88+ |
One thing | QUANTITY | 0.87+ |
single report | QUANTITY | 0.85+ |
couple reasons | QUANTITY | 0.84+ |
four clusters | QUANTITY | 0.83+ |
first graph | QUANTITY | 0.81+ |
Vertica | TITLE | 0.81+ |
hundreds of thousands of events per second | QUANTITY | 0.8+ |
about 40,000 reports per day | QUANTITY | 0.78+ |
Vertica Big Data conference 2020 | EVENT | 0.77+ |
320 node | QUANTITY | 0.74+ |
a whole week | QUANTITY | 0.72+ |
Vertica virtual Big Data | EVENT | 0.7+ |
Manish Gupta, ShiftLeft | CUBEConversation, March 2019
(upbeat instrumental music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in our Palo Alto studios for a CUBE Conversation. It's just a couple of days until RSA kicks off, a huge security conference, I think the biggest security conference in the industry. And we've got a security expert here in the house, and we're excited to have him stop by. It's Manish Gupta, the Founder and CEO of ShiftLeft. Manish, great to see you. >> Yeah, great to see you too, thank you. >> Welcome. So you must be really busy getting everything buttoned up for next week. >> Oh yeah, absolutely ready to go. >> All right, so for the people that aren't familiar with ShiftLeft, give us kind of the basic overview. >> Yeah, of course. So ShiftLeft is about a two-and-a-half-year-old company. We started with the problem of, you know, software's driving innovation all around us, right? I mean, we see it in autonomous cars, IoT, and increasingly SaaS software in the cloud. And for all of this software, we need to figure out how we're going to protect it. So it's a big problem, and we've been working on it for about two and a half years now. We raised our Series A, and most recently, in the last two weeks, we announced our Series B of 20 million. >> Congratulations. >> Amazing team, yeah! >> So, you've been in the security space for a long time. >> Correct. >> And RSA's a giant conference. I don't know what the numbers will be this year; I'm sure it'll be north of 40 thousand people. Moscone North, South, and West will be full. Every hotel is full. But it kind of begs a question, like, haven't we got some of this security thing figured out? It's just never-ending kind of startup opportunities, as there's new ways to approach this kind of fundamental problem, which is: how do we keep the bad guys out? How do we keep them from doing bad things while the attack surface expands exponentially? And we hear every day that people are getting breached and breached and breached. So the whole ecosystem, and kind of approach, has completely changed over the time that you've been involved in this business. >> Indeed. As you said, I've been in cybersecurity for a long time, I like to say the last 15 years. In the first part of my career, I was focused on detecting viruses. Then it became worms. Then most recently, at FireEye, we were detecting advanced malware, nation-state attacks, like APT1s and APT3s. But it was then that it sort of dawned on me that, look, about 80% of security money gets spent on detecting bad stuff, right? And that's reactive. Essentially what that means is we are letting the bad guy shoot first, and then we are trying to figure out, okay, what are we going to do now. >> We're waiting like 150 days, right, down from 230 days, before we even-- >> Exactly. >> know that he's shootin' at us. >> That's right. Now couple that with, as you said, the attack surface is ever increasing, right? Because we're using software in every which way, which means all of this stuff needs to be protected. And so that's why we wanted to start with a fresh perspective, which is to say: let's not worry about attacks, because that is not in our control. That's in the bad guys' control. What can we control? Our software.
And so, that is why what we do at ShiftLeft is to understand the software very quickly, extract its attack surface in minutes, and then allow you to fix whatever you want to, whatever you can, during the time frame you have available. And here comes the next innovation, which is: if you don't fix anything, which is almost always the case, we will protect the application in production. Now the key is, we protect the application in production against its own vulnerabilities. So we never, ever react to threats. We don't care. >> So you have, like, a wrapper around the known vulnerabilities within the code. Is that a good description? >> Yes, absolutely, that's a good way of thinking about it. You know, let's say a million lines of code, and we find 10 vulnerabilities in it. So it's only in 10 specific places in the application. Now, we also know what vulnerabilities exist on line 100 and line 200 and so on. And with that knowledge, we can very precisely protect each vulnerability. >> It's a really interesting approach. You know, one of the things I find fascinating with security is it's kind of like insurance. >> Mm hmm. >> In theory, you could spend 110% of all your revenue budget >> Correct. >> on security, but you can't, so you have to make trade-off decisions. You have to make business value decisions, and you have to prioritize. So this is a really different approach, in that you're offering an option either to fix the known, and/or just to protect the known, so that there's some variability in the degree of investment that the customer wants to make. >> You summed it up well, Jeff. I think the fundamental challenge with security has been that. You know, 15 years ago we asked our customers to buy antivirus. Then we asked them to buy intrusion detection. Then we asked them to buy nation-state malware protection. Now we're asking them to buy machine-learning-based mechanisms to detect more threats, right? And so the funnel narrows like this, but it never goes down to zero. And so tomorrow some other approach will come up to detect the 0.1% of the malware, and guess what? The CISOs really don't have a choice, right? Because they have to protect their organization, so they have to buy that tool also. Now, in this entire process, you never get better, right? Notice that you never get better; all you're doing is just reacting. And because a virus from 15 years ago theoretically could still come and attack you, you can't throw away that tool either. And so that is precisely why I'm so passionate about the work we're doing at ShiftLeft: it brings, for the first time in security, a sort of continuous improvement. Find the vulnerabilities, fix them. But if you can't fix them, we will protect you.
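A toy sketch of the "wrapper around known vulnerabilities" idea; this is purely illustrative and not ShiftLeft's actual mechanism. The guard attaches only to the specific function the analysis flagged, instead of filtering all traffic for every known attack signature; the function name and the naive injection check are hypothetical:

```python
import functools
import re

# Hypothetical finding: lookup_user() concatenates its argument into SQL.
SQLI_PATTERN = re.compile(r"('|--|;)")

def guard_sql_injection(func):
    """Wrap one flagged function with a check for its one known weakness."""
    @functools.wraps(func)
    def wrapper(user_input, *args, **kwargs):
        if SQLI_PATTERN.search(user_input):
            raise ValueError(f"blocked exploit attempt against {func.__name__}")
        return func(user_input, *args, **kwargs)
    return wrapper

@guard_sql_injection
def lookup_user(name):
    # The vulnerable code itself stays unchanged until a fix ships.
    return f"SELECT * FROM users WHERE name = '{name}'"

print(lookup_user("alice"))       # passes through
# lookup_user("x' OR 1=1 --")     # would raise ValueError
```

The point of the pattern is the one Manish makes: the protection is keyed to the application's own vulnerabilities, not to a feed of external threat signatures.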
>> Now, what about another kind of big shift in the way software is delivered, which is that everything is an API to someone else's software? And oftentimes there's many, many components being pulled in from many, many places that contribute to, but aren't, software that I control personally. >> Correct. >> So how do you guys deal with those types of challenges? >> Great question, great question. You know, the popular saying is, we are becoming an API economy. >> Right, right. >> And what we exchange on our APIs is increasingly a lot of data. And you're right: if you think about historical approaches, we would now have to break open the API on the network to find out what it contains. And for various reasons, that's super hard to do; lots of operational inefficiencies, excuse me. And this is again where the ShiftLeft approach is rather unique, because we go down to the very foundation. It's hard work, right, but we go down to the very foundation: what is the source code of the API? So we will understand, okay, this is what you should be putting into the API, right? But then I see that a variable containing personally identifiable information is being put into that API. I can now tell you, before this becomes a problem that'll embarrass you in the newspapers; we will tell you, hey, look, you are writing PII to a third-party API without encryption, right? So you get to fix the problem at the very root, where it starts. >> So, but can you wrap the known vulnerability in a partner piece of software? >> Absolutely we can. >> As it interfaces with my software? >> Correct. So there are two aspects to it, right? The first is, what are you putting into that API? That is completely in your control. >> Right. >> Right, and we don't really need to understand the API for that matter. So that is one particular use case where we can absolutely protect you. The second is when the API, when integrated into your application, makes your application vulnerable. So I'll give you an example. This happened to one of our customers, a 3,500-person technology company based here in Santa Clara. They were using a third-party API, a very popular one. That third-party API in turn was using the Jackson databind library, just an open source library. Now, as a company, when we decide to use that API, we don't really worry about, we don't have visibility into, what all it is inheriting. >> Downstream. >> Exactly. >> And how many things feed into that one particular one. >> That's right. And so this is the supply chain of software, right? Multiple components are now being brought together very quickly to create the functionality that you want to deliver to your users, to your customers. But at this pace of execution, we need tools like ShiftLeft to tell us, hey, what are we inheriting? And whatever we are inheriting, how is that impacting the security of our application?
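A minimal sketch of the PII-to-third-party-API check described above, recast as a runtime audit rather than ShiftLeft's actual source-level data-flow analysis; the field names and the encryption flag are hypothetical:

```python
PII_FIELDS = {"ssn", "email", "date_of_birth", "full_name"}

def audit_outbound_payload(payload: dict, connection_encrypted: bool) -> None:
    """Flag PII leaving the application toward a third-party API."""
    leaked = PII_FIELDS & set(payload)
    if leaked and not connection_encrypted:
        raise RuntimeError(
            f"PII fields {sorted(leaked)} written to third-party API without encryption"
        )

try:
    audit_outbound_payload({"email": "a@example.com", "plan": "pro"},
                           connection_encrypted=False)
except RuntimeError as err:
    print(err)  # PII fields ['email'] written to third-party API without encryption
```

Doing the equivalent at the source-code level, as described, catches the problem before the code ever runs, which is the "fix it at the root" point.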
>> Right, right. Pretty interesting stuff. You've got another component of something that's really important today, that wasn't necessarily when you started this adventure, and that's the open source play. >> Yes. >> So as I understand it, you guys started really from more of an open source play, and then ShiftLeft grew out of kind of commercializing that open source project. I wonder if you can explain a little bit more. >> Yeah, I would love to. So the foundation of what we do is a technology called the Code Property Graph. This is an invention of our chief scientist, Dr. Fabian Yamaguchi, one of the foremost authorities in the world in the area of understanding code. And as part of his PhD thesis, he came up with this technology and decided to open source a tool called Joern. >> Joern. J-O-E-R-N. >> That's right. >> Not easy to figure out, Joern, yes. (laughing) >> Exactly. And it's actually his friend's name, so that's how he named it. >> Ah, is that right? >> So he open sourced it, and several organizations around the world have since used it to find very hard-to-find vulnerabilities. So as an example, there is an IEEE paper where this technology was used by Fabian to find 18 zero-day vulnerabilities in the mainline Linux code. So, arguably one of the most complex pieces of code on the planet, 15 million lines of code, arguably one of the most analyzed pieces of code on the planet, and as recently as 2015, he finds 18 zero-days, and no false positives. Every single vulnerability has been acknowledged and fixed by the Linux community. That's the power. And so we use that as the foundation. That part is open source, but since then we've done a lot of incremental work on enhancing it to make it enterprise-ready, and that is a product we offer; we call it Ocular. My best analogy is that it's just like Google Maps for your source code. >> Yeah, I think it's a good analogy, and he goes through that in one of his videos, kind of explaining the mapping of the different layers of visibility into how you should look at software code. >> Indeed. >> Yeah, all right, well, before we let you go, you've got some exciting things happening next week, beyond just the regular activities at RSA. You guys have been invited to participate in a special activity. I wonder if you can share a little bit and give a plug, and maybe we can send some fans up; I dunno if there's going to be audience participation in the judging. >> Yes. >> Go ahead and let us know what you're doing. >> Thank you for giving me that opportunity. Yeah, super, super excited. We've been selected as one of the top 10 finalists for the RSA Innovation Sandbox. As you mentioned in your opening, RSA is the biggest security trade show in the world, and this has become the most seminal way of highlighting innovative work being done in the security industry. So I get three minutes to pitch ShiftLeft in front of an audience of about 1,500 to 2,000 people. Really looking forward to that. >> Well, I dunno if you can squeeze it all into only three minutes (laughing), but I'm sure you'll be able to nail it. >> I will try. >> All right, well, Manish, thanks for taking a few minutes of your day, and I'm sure we'll see you in San Francisco next week. >> Thank you very much, thank you. >> All right, he's Manish, I'm Jeff. You're watching theCUBE. We're having a CUBE Conversation in our Palo Alto studios. Thanks for watchin', and we'll see ya next time. (upbeat music)
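A toy sketch of the code property graph idea behind Joern and Ocular, assuming nothing about their actual query APIs: abstract-syntax, control-flow, and data-flow relationships are merged into one graph, so a vulnerability hunt becomes a path query from an attacker-controlled source to a dangerous sink. Here networkx stands in for the real graph engine, and all node names are hypothetical:

```python
import networkx as nx

# One graph, multiple edge kinds: AST structure, control flow, data flow.
cpg = nx.MultiDiGraph()
cpg.add_edge("read_request", "user_input", kind="DATA_FLOW")
cpg.add_edge("user_input", "build_query", kind="DATA_FLOW")
cpg.add_edge("build_query", "exec_sql", kind="DATA_FLOW")
cpg.add_edge("main", "read_request", kind="CONTROL_FLOW")

def tainted_paths(graph, source, sink):
    """Data-flow paths from an attacker-controlled source to a sink."""
    flow = nx.MultiDiGraph(
        [(u, v) for u, v, d in graph.edges(data=True) if d["kind"] == "DATA_FLOW"]
    )
    return list(nx.all_simple_paths(flow, source, sink))

print(tainted_paths(cpg, "read_request", "exec_sql"))
# [['read_request', 'user_input', 'build_query', 'exec_sql']]
```

Any path returned is a candidate finding; the "Google Maps" analogy fits because the same graph supports many such routes and layers of inspection.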
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Fabian Yamaguchi | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Santa Clara | LOCATION | 0.99+ |
10 vulnerabilities | QUANTITY | 0.99+ |
110% | QUANTITY | 0.99+ |
Fabian | PERSON | 0.99+ |
March 2019 | DATE | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Manish Gupta | PERSON | 0.99+ |
Manish | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
0.1% | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
150 days | QUANTITY | 0.99+ |
3,500 person | QUANTITY | 0.99+ |
230 days | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
10 specific instances | QUANTITY | 0.99+ |
next week | DATE | 0.99+ |
each vulnerability | QUANTITY | 0.99+ |
ShipLeft | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
first time | QUANTITY | 0.99+ |
FireEye | ORGANIZATION | 0.98+ |
zero | QUANTITY | 0.98+ |
two aspects | QUANTITY | 0.98+ |
Linux | TITLE | 0.98+ |
20 Million | QUANTITY | 0.98+ |
three minutes | QUANTITY | 0.98+ |
15 years ago | DATE | 0.97+ |
RSA | ORGANIZATION | 0.97+ |
second | QUANTITY | 0.97+ |
IEEE | ORGANIZATION | 0.97+ |
about two and a half years | QUANTITY | 0.97+ |
about 1,500 | QUANTITY | 0.97+ |
Joern | TITLE | 0.96+ |
about 80% | QUANTITY | 0.96+ |
Google Maps | TITLE | 0.95+ |
Silicon Valley, | LOCATION | 0.95+ |
Series A | OTHER | 0.95+ |
18 zero-day | QUANTITY | 0.94+ |
2,000 people | QUANTITY | 0.94+ |
CUBE Conversation | EVENT | 0.93+ |
ShiftLeft | ORGANIZATION | 0.93+ |
Moscone North | LOCATION | 0.92+ |
15 million lines of code | QUANTITY | 0.91+ |
last two weeks | DATE | 0.91+ |
two and a half year | QUANTITY | 0.91+ |
Series B | OTHER | 0.9+ |
ShiftLeft | TITLE | 0.86+ |
a million lines | QUANTITY | 0.86+ |
40 thousand people | QUANTITY | 0.85+ |
single vulnerability | QUANTITY | 0.84+ |
Palo Alto, California | LOCATION | 0.82+ |
today | DATE | 0.81+ |
line 200 | OTHER | 0.81+ |
last 15 years | DATE | 0.79+ |
RSA | EVENT | 0.77+ |
Joern | PERSON | 0.77+ |
line 100 | OTHER | 0.76+ |
CUBEConversation | EVENT | 0.74+ |
Linux | ORGANIZATION | 0.71+ |
18 zero-days | QUANTITY | 0.7+ |
theCUBE | ORGANIZATION | 0.7+ |
top 10 finalists | QUANTITY | 0.69+ |
analyzed pieces | QUANTITY | 0.69+ |
one of his videos | QUANTITY | 0.67+ |
Jackson | ORGANIZATION | 0.66+ |
South | LOCATION | 0.63+ |
pieces of code | QUANTITY | 0.6+ |
APT1s | OTHER | 0.58+ |
use | QUANTITY | 0.58+ |
Soma Somasundaram, Infor | Inforum DC 2018
>> Live from Washington DC, it's theCUBE, covering Inforum DC 2018, brought to you by Infor. >> Well, good morning. Welcome back here on theCUBE. We are live in Washington DC, at Inforum 2018. You can tell, Infor's just over the shoulder here. We're on top of the show floor, looking down, and there's a lot of buzz, a lot of activity out there. Good to be a part of that excitement here in DC. I'm John Walls, along with Dave Vellante, and we're joined by, he said, "Just call me Soma," Soma Somasundaram, who's the CTO at Infor. Soma, good job on the keynote stage this morning. Thanks for joining, appreciate that. >> Yeah, and yesterday. >> Yup, yup, thanks. >> So, talk about a couple of new products, one launched, one in beta. Why don't you go ahead and tell our audience a little bit about that, about what you're bringing to the marketplace now? >> Yeah, so, you know, as I mentioned in today's keynote, we're all about product innovation, and we're engineers. Charles is an engineer, I'm an engineer, and we're constantly driving new innovation. So, some of the innovation: fundamentally, we want to build what I would call a shared services platform that all of our cloud suites can utilize. There's no need for each of the applications to go reinvent the wheel to build a middleware, or a data lake, or an API layer, so we built a shared services platform, which is what we call Infor OS. As part of Infor OS, we continue to release new things. You heard today, we released something called Infor Go. As the name might suggest, the idea is that you, as an employee in one of the customer organizations, can easily go to the app store, download something called Infor Go, and it automatically is configured for your role. If, let's say, you're a salesperson, it gives you access to CRM data, to curate your pipeline; it gives you access to employee data, because you're an employee of the organization; it gives you the ability to file expense reports, because you're a traveler. You get the idea. So, in a role, you don't want to be dealing with 20 different apps. It's just one thing. You just go in, one sign-on, you get access to everything you need. That's one announcement we made. That's on the technology side. And on the functional side, you know, we launched a new CRM this morning, and the idea there again is that we're in the CRM business not to build a horizontal CRM. Our idea is, anything you build, anything you do, must be industry-specific, right? When you are selling and servicing an excavator, and you are a dealer of moving equipment, you want to know what kind of configuration is installed, what kind of accessories I can sell to this farmer, what kind of terrain they're operating on. That is industry-specific. So to us, that is important. That's what we're doing with CRM. We built it on our own technology platform, obviously, multi-tenant, running in the cloud, but the main differentiation is industry, right? So that's something we announced. We've been building a next-generation HCM suite, which we talked about a lot yesterday. The final piece of that is payroll, which is important. So that's payroll, which just went into beta this morning. It's all built on the exact same platform, with Infor OS, multi-tenant, and it's highly extensible, so that completes our HCM suite on a unified platform. Those were the announcements we made today. >> So I wanted to talk a little bit about the platform.
So last year, after Inforum 17, I wrote a blog post, and I put up the strategy and technology stack, and I kind of missed the OS underneath. So we'll come back and maybe course-correct that. But one of the problems with enterprise software, especially suites, is there are a lot of cul-de-sacs. You go down a road, and then you hit a dead end, and then you have to come all the way back, and if you want some other function, you have to go down and come all the way back, and it's a very frustrating user experience. So, I'm inferring that what you guys have done is try to address that and other problems with a platform approach. So a platform, in my view, beats products. So maybe talk about platform and what that means to you guys, and then I would love to get into the sort of conceptual and actual stack. >> Yeah, so, it is what should be common sense, in my opinion, that if you buy an HCM suite from a provider of software, you buy ERP from that same provider, you buy a travel and expense application from the same provider, you would think that they all have the same user experience, that they're integrated out of the box, that they all seamlessly work together, with single sign-on. That would be a normal expectation as a customer, I would think, but unfortunately, the market's not going that way, right? Everybody's got their own; even within one company, you have multiple products, and they don't work together well. Our idea is that if you buy an industry cloud suite, you must feel like it came from Infor: it all should have one single user experience, it all should work together as an integrated suite, it should all be sharing data for analytics, and so on and so forth. So that is the whole idea behind building this Infor OS. So, Infor OS has got several services underneath, starting with, you know, user experience, which is developed by Hook & Loop. So we have all of the controls, whether it's a dropdown box, or a grid control, or a date picker, and they all behave exactly the same way. Whether you're in CRM, or HCM, or inside a purchasing application, they all work the same, right? So, starting with that, then you go-- >> So if I can interrupt, the Infor OS has the core services that you need, that the software needs to access for any function that you're building, correct? >> Exactly, yeah, yeah. >> Okay, please. >> So it's user experience, then you have integration. We have one integration layer called ION, and ION supports both an API layer, because if you want to build a mobile app, you need APIs into the software, so we built a lot of APIs into our applications. Those are exposed through a single gateway. There's one way to get into Infor applications, through this API layer. We built that as part of Infor OS. We also built Coleman, which we announced last year. Coleman depends on two things: one, a lot of access to data, so I can crunch and do machine learning, and a lot of access to APIs. So what if you could create a requisition by talking to a device, versus having to open up a form, right? To do that, you need APIs. If you can order Domino's pizza from home, using Alexa, why can't you do that at work? So we built this framework for those kinds of things. So it's got APIs, it's got Coleman, it's got the data lake, so all of this data is in one place and you can build analytics. We have Birst, which sits on top of the data lake, and I can go on. So that's really what we're doing with Infor OS, and it's very, very important. It's not like your Intel Inside kind of thing.
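To make "one way in through this API layer" concrete, here is a hedged sketch of what a REST call through an ION-style gateway could look like from a client. The host, tenant identifier, suite path, and token below are hypothetical placeholders, not documented Infor endpoints; the point is simply that every suite sits behind one gateway and one authentication scheme.

```python
# Hypothetical call through an ION-style API gateway.
# Host, tenant, resource path, and token are placeholders,
# not real Infor endpoints.
import requests

GATEWAY = "https://ionapi.example.com"  # hypothetical gateway host
TENANT = "ACME_PRD"                     # hypothetical tenant identifier
TOKEN = "eyJhbGciOi..."                 # OAuth2 bearer token, obtained separately

resp = requests.get(
    f"{GATEWAY}/{TENANT}/CRM/api/contacts",  # hypothetical suite/resource path
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"limit": 25},
    timeout=30,
)
resp.raise_for_status()
for contact in resp.json().get("items", []):
    print(contact.get("name"), contact.get("company"))
```

The design payoff Soma is describing is that a mobile app, a Coleman skill, and a partner integration would all come through this same front door, so security and monitoring live in one place.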
Without Infor OS, Infor apps don't work. >> So, if I can, if you bear with me, just to conceptualize the stack: the OS is at the bottom layer, and then you've got your micro-vertical functions as sort of the next layer, and then the cloud, which is really AWS, is the cloud infrastructure; then you've got GT Nexus, essentially the network commerce platform, so all those data and supply chain connection points that you have access to; Birst, the analytics, which was an acquisition last year; and then the Coleman AI completes the stack. My question is, as it relates to, for instance, Birst, that was an acquisition. So, you have to bring that in and do some engineering work to make it fit into the stack, is that right, or is it just kind of bolted on? >> No, you know, everything has to be done with a conscious approach to design, right, so it just doesn't happen by itself. So, Birst is a fantastic, world-class analytics platform, right? They as a company built a world-class platform that allows for departmental analytics, so if you're working in sales or working in marketing, you can go bring your own data, you can do analytics. It's great at that. At the same time, it's great at enterprise analytics, where you have all of this data in one place, you harmonize the data and do that. As a platform, it's a fantastic platform, but we're about delivering content on top of that platform, so we need to bring the network data, like you said, we need to bring the industry data, we need to bring the employee data from HCM. Bringing it all together and exposing that using Birst as the visualization layer is how we are exposing it. So to that extent, Birst was connected into the data lake, and it sits on top of the data lake and leverages that data. We built a semantic layer, which reflects the model of data that we have in the data lake, and we have the single sign-on, so it actually surfaces within Ming.le, within the homepage of a purchasing manager or whoever, and that works; that's what we did. >> So you essentially re-platformed it. So of course, part of the due diligence is how challenging it's going to be to do that, how fast you can get that to market, but this is complicated. It requires significant engineering resources on Infor's part. We talked about this a little bit at the analyst meeting last year, the industry analyst session. Couple things: one is the integration and exploitation of the AWS cloud, and all the services there, the data pipelines, and the services there, but also modern software development. You know, microservices, and containers, and all of that good stuff. Can you talk about those two dimensions and any other points that you'd like to emphasize in terms of the things that Infor developers are doing to create this modern platform? >> Yeah, so, first of all, you know, we are all about applications, right, so we're not building databases, we're not building our own data centers, we're not building our own operating systems. We're a business software application company. Our belief is that if you try to verticalize and try to innovate on every single layer of what you do, it stifles innovation. Why not embrace the industry's innovation, right? Can we out-spend AWS, in terms of building a cloud infrastructure? I don't think so. >> No way. >> No one can. And so, it's important to focus on what you do best, and leverage innovation that's coming from outside the four walls of Infor, to embrace that to deliver what the customer requires.
So, what we really did is we took the AWS services, and we encapsulated them into our application, so when the application does disaster recovery, it's actually AWS services, right? When we call Elasticsearch, we're using AWS services there. We use DynamoDB for graphing the data in the data lake. Much like Facebook works on Open Graph, trying to find people who are connected to each other, data inside the data lake is connected, right? A sales order is connected to a salesperson. It's connected to a customer. A customer is connected to returns, and so on and so forth, so we've done those kinds of things. So, we've built a layer above the web services of AWS to actually create hooks into the application that leverage them, and we built our application itself in sort of a microservices architecture. Granular APIs is a better way to describe how we did it, so that those granular APIs can be used in a digital project to create your own mobile app. It's the same APIs that are used in Coleman, for our digital assistant, or chatbots. All of those things require clear thought in terms of design, how you expose the functionality, and how you expose data, and that's what we did. >> Yeah, so, as a developer, in an engineering organization, having access to those primitives, those granular APIs, gives you what, greater flexibility? If the market turns, you can turn more quickly. I mean, it's more complicated, right, but it gives you finer-grained control. Is that fair? >> Absolutely the case, yeah, and by the way, we know that the world is heterogeneous, right? I would love for a customer organization to just use Infor for everything, nothing else, right? But that's probably not realistic. So we built this to be able to work in a heterogeneous environment. So creating APIs and having this loosely-coupled architecture allows for that to happen. Ultimately, the customer has a choice. We obviously have to work to earn their business, but if they have other things outside of Infor that they're running in their ecosystem, you need to be able to embrace that. So this architecture actually allows for that. >> So it's the architecture, but if you're saying, if I'm a customer, and I want to run in the Google cloud, or Azure, technically, at least in theory, you can support that, but do you actually do that today, or is that sort of roadmap stuff? >> Technically, you could do that, right, but we obviously leveraged a lot of AWS services in our stack. What I meant by heterogeneity is that if you run a non-Infor application, right, like, say, Salesforce for CRM, I would love for the customer to use Infor CRM, 'cause we think we are very competitive, but if they are running Salesforce, and they don't want to replace that, we need to be able to work in that environment, where it's running in a different cloud, it's running on a different architecture. So, we built Infor OS and the layer to be able to deal with those kinds of hybrid deployments. >> Technically, what's the enabler there? Is it just sort of an API-based framework, or... >> It is an API-based framework. It's also got federated security built into it. The middleware understands, ION understands, that data could come from a non-Infor system. You know, if you go to the United Nations, everybody there has a headset to translate what anyone is saying; if everyone spoke English, well, the world would be wonderful. >> But they spoke English yesterday.
(John laughs) >> I got one more, I got one more geeky question. Anytime I get the head of engineering, you know, the CTO-- >> You love this. >> We love to get into it. The audience eats the stuff up. >> Yes. >> And we love the business talk too. But I've heard a lot about multi-tenant architecture. My friends at ServiceNow make a big deal about multi-instance, saying, oh, and I don't know if it's a 'can't fix it, so call it a feature' kind of thing, or if there's really, you know, additional value there, but the claim is it's more secure. Multi-tenant, I think, conceptually, is certainly more cost-effective. What's your take on multi-tenant? Why is it important? Maybe discuss the security levels that you guys engineer in, your comments. >> Yeah, yeah, if you have something, you can call it a feature, like you said, but our belief is that multi-tenant architecture allows for faster innovation and easier updates to the customer, to keep them current. You know, think about having thousands of individual instances that you have to update on a weekly basis, because we will get to a weekly update; we are currently doing a monthly update, and we'll get to a weekly update. That requires an unnatural act of automation to be able to update all of them. I mean, you could argue which is really more pure, but multi-tenant architecture for us is one single application server farm that is able to work for different tenants, understanding their configuration and their business process, and operate the way they want it to be operated, but it is running in one single farm that we can update as frequently as we need, without obviously causing disruption, so that, I think, is a good design scenario. Having said that, we actually isolate the data of a tenant, right, because you could have a scenario where all tenants' data is in one database. We don't do that. We actually insulate tenants so that data is not permeable. You can't go across tenants. So, we think that this is an elegant way to architect and keep it agile, and we can bring innovation faster to the customer. >> So when you go from monthly to weekly, to daily, to hourly, to minutely, every customer comes with you, whereas in the multi-instance world, you actually have to plan for it. You've got to plan the migration. You're maybe N minus one, or maybe even N minus two, if that's supported, and it's more disruptive. >> That's correct. >> Okay, and then you've got to engineer, you know, the security, and other factors. Thank you for that explanation. >> So, I always like to get back to, at the end of the day, you know, what are folks doing with what you're providing them, right? So, in kind of like your new services world, your new product world, what are some of the more, I guess, unique ways in which your customers are putting these great tools that you have to work for them, that you would like to use as kind of the poster child of success, to say, you know, we're providing this new value and these new enhancements, and that give you the chance to take it to others and use them as examples? >> Yeah, so, fundamentally, I'll be remiss if I don't start with the industry, right? So, it may not be very sexy, but ultimately, if I'm in the food and beverage industry, I really need to have a piece of software that understands that, right? Like, for example, if you're an ice cream plant, you pay by part of a carton, you don't pay for the gallons of milk you get, right, so, does the software understand that?
Right, if it doesn't, then you have to work around it, right? So, it may not sound sexy, but that's important to us, right? So, customers deploying without customization is very, very important for us. That's why we call it last-mile functionality. But if you flip to the technology side of things, I think that we're just scratching the surface in terms of what users want to do with Coleman. The Coleman digital assistant, for example, like I said earlier about placing a requisition by talking to a device. I think our idea is that every single employee of our customer organization should be using technology. With typical ERP, as it was deployed 20 years ago, only power users used it, right? Other people wrote on a piece of paper and sent it around. >> Same thing with decision support. There were like, three guys, two guys in the company who knew it. You had to go ask them to build a cube for you. >> Exactly. That doesn't scale, exactly. And we're living in a very diverse, global sort of setup. It doesn't work if you have three people who understand how to do BI, you know, two people who can create workflows, and I always like to use the example of this website called ifttt.com. I don't know if you've tried this or not. It literally stands for if this, then that. If I can go and describe something: if this happens, then do that. Why can't we do that in enterprise software, right? Why is it that you have to go knock on the door of IT to do it? So our idea is to bring that level of innovation, so we can innovate, our partners can innovate, customers can innovate, and we don't step on each other. >> I've got to ask you about a topic that we've heard a lot about this week, which is robotic process automation, and you guys have essentially intimated, or at least I've inferred, that you've got quite a bit of capability in that regard. We're talking about software robots here, essentially, to replace humans doing mundane tasks, or maybe augment humans. What is the capability that you have with RPA? Is it something that you're shipping today? And I have some follow-up questions, if I may. >> Yeah, so, we built ION when we started building this years ago. We built it with the notion of building it on a data-rich architecture, right? What I mean by that is, when something happens, an event happens in an application, a sales order is taken, or it's updated, give me a full copy of that document, one that anyone can understand, right? That is the foundation of what you need to be able to externalize things like RPA. So we have access to the document as things happen. That's point number one. Point number two is that we built the Coleman AI platform, which we talked about earlier today. That actually leverages that workflow; there are points in the workflow where AI-based services can hook in. So, where do human beings need to intervene? I'll give an easy example. There are people reporting to you, as there are to me, and we get expense reports that people submit. First of all, I don't even look at them, Michelle looks at them, and do you think she opens and actually looks at how much somebody spent for dinner? No, you just push the button and approve. Why are we doing that, right? Why can't a robot figure out if there's something that looks not quite right, and flag it, versus having to do this mundane work? So why can't Coleman do that?
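In miniature, the robot approver Soma is describing could be as simple as the sketch below: auto-approve expenses that look like an employee's history, flag the outliers for a human. This is a hypothetical illustration, not Infor's Coleman code; the z-score rule and the numbers are made up for the example.

```python
# Hypothetical "robot approver": auto-approve routine expense amounts,
# flag statistical outliers for human review. Not Infor's implementation.
from statistics import mean, stdev

def review(history, new_amount, z_cutoff=3.0):
    """Return 'approve' or 'flag' for a new expense amount."""
    if len(history) < 5:              # too little history: always ask a human
        return "flag"
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return "approve" if new_amount == mu else "flag"
    z = abs(new_amount - mu) / sigma  # how unusual is this amount?
    return "flag" if z > z_cutoff else "approve"

dinners = [42.0, 38.5, 51.0, 47.25, 40.0, 44.1]
print(review(dinners, 45.00))   # approve: in line with history
print(review(dinners, 420.00))  # flag: an order of magnitude off
```

The same if-this-then-that shape covers the alerting case Soma mentions: describe a condition over events, and have the platform run the action without a ticket to IT.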
That's the way we've done it, and it's because we have a workflow engine, we have the API architecture, we have an AI platform; it's easy to wire these things together, and having the data externalized allows us to do that. >> So, in looking at the RPA market, there are several companies out there, and a lot of software companies, many of which are very, very complicated. You can't get your hands on the software. There is some, or maybe one in particular, that's easy: you download it, and it's low code or even no code. So I would imagine, I'm envisioning, some kind of studio for a user like myself, who is not technical, who can use it, and then maybe some kind of orchestrator, to actually effect what I want to get done. Is that something that you're shipping today, or how do I do it, as a user, and is it low code or no code? >> If you are trying to figure out what model to build and deploy, then obviously, you need a data scientist, okay? So, for that part of it, we have a platform that is available for the data scientist, to be able to go look at the data, curate the data set, and try different algorithms to figure out which one works, which is the right one, and then deploy that. And when you say deploy, it automatically creates an API and allows for use anywhere. From an end user standpoint, like I said with this ifttt.com, you should be able to go in and set up your own alerts: if you see X, Y, Z happen, let me know, or if you see X, Y, Z happen, do this. So that part of the capability exists in the platform, right? So, you can't completely replace data science and everything with the real end user doing it, but if you package the services in such a way that an end user can actually pick and choose and deploy, that can be done today. >> Your expense report approval example, and there are many, many others, are great; thank you for that. >> Soma, thank you for the time too. We appreciate that. Thanks for dropping in, and again, great job on the keynote stage, and we wish you success down the road here. >> Thanks a lot, appreciate it. >> I don't think you need it, though, I think you've got your act together really well. >> And your hands full. >> Yes, you do. A lot going on. All right, back with more here. We're live in Washington DC. You're watching theCUBE.
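Earlier in the conversation Soma mentioned using DynamoDB to graph the connected data in the data lake: a sales order connected to a salesperson, a customer connected to returns. A common way to store that kind of graph in DynamoDB is an adjacency list, one item per edge. The sketch below is a hedged illustration; the table name, key schema, and entity identifiers are hypothetical, not Infor's actual design.

```python
# Hedged sketch: business entities as a DynamoDB adjacency list.
# Table name, key schema, and identifiers are hypothetical.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("business-graph")  # partition key: entity, sort key: edge

# One item per edge: the sales order points at its related entities.
table.put_item(Item={"entity": "ORDER#1001", "edge": "SOLD_BY#EMP#42"})
table.put_item(Item={"entity": "ORDER#1001", "edge": "FOR#CUST#ACME"})
table.put_item(Item={"entity": "CUST#ACME", "edge": "RETURNED#ORDER#0977"})

# "What is connected to this sales order?" becomes one key lookup.
edges = table.query(KeyConditionExpression=Key("entity").eq("ORDER#1001"))
for item in edges["Items"]:
    print(item["edge"])
```

The appeal of the pattern is the one Soma gives with Open Graph: once the relationships are first-class data, traversals like customer, to orders, to returns are cheap, and anything from analytics to Coleman can walk them.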
SUMMARY :
John Walls and Dave Vellante host theCUBE live at Inforum 2018 in Washington DC with Soma Somasundaram, CTO of Infor. Soma walks through the week's announcements: Infor Go, a single, role-configured mobile entry point into the suites; a new industry-specific, multi-tenant CRM; and payroll going into beta, completing the HCM suite on a unified platform. He explains Infor OS, the shared services layer spanning user experience, the ION integration and API gateway, the Coleman AI platform, and the data lake with Birst on top; why Infor builds on AWS services rather than its own infrastructure; how multi-tenancy with isolated tenant data enables ever more frequent updates; and how workflow, APIs, and AI combine for RPA scenarios such as automatically reviewing expense reports.
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Charles | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Soma | PERSON | 0.99+ |
Michelle | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
two guys | QUANTITY | 0.99+ |
Washington DC | LOCATION | 0.99+ |
three guys | QUANTITY | 0.99+ |
Soma Somasundaram | PERSON | 0.99+ |
DC | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
two people | QUANTITY | 0.99+ |
three people | QUANTITY | 0.99+ |
Infor OS | TITLE | 0.99+ |
yesterday | DATE | 0.99+ |
John | PERSON | 0.99+ |
Alexa | TITLE | 0.99+ |
Infor Go | TITLE | 0.99+ |
20 different apps | QUANTITY | 0.99+ |
Salesforce | TITLE | 0.99+ |
One | QUANTITY | 0.99+ |
one company | QUANTITY | 0.99+ |
Infor | ORGANIZATION | 0.99+ |
one thing | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
two things | QUANTITY | 0.98+ |
ION | ORGANIZATION | 0.98+ |
thousands | QUANTITY | 0.98+ |
20 years ago | DATE | 0.98+ |
English | OTHER | 0.98+ |
this week | DATE | 0.98+ |
one way | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
First | QUANTITY | 0.97+ |
one place | QUANTITY | 0.97+ |
each | QUANTITY | 0.96+ |
this morning | DATE | 0.96+ |
single | QUANTITY | 0.96+ |
Infor | TITLE | 0.96+ |
DynamoDB | TITLE | 0.96+ |
Birst | ORGANIZATION | 0.95+ |
Coleman | PERSON | 0.95+ |
one single farm | QUANTITY | 0.95+ |
one database | QUANTITY | 0.94+ |
both | QUANTITY | 0.94+ |
one single application | QUANTITY | 0.93+ |
ifttt.com | OTHER | 0.91+ |
Inforum 2018 | EVENT | 0.91+ |
Coleman | TITLE | 0.91+ |
Intel | ORGANIZATION | 0.9+ |
Point number two | QUANTITY | 0.9+ |
ION | TITLE | 0.9+ |
Ross Smith IV & Greg Taylor, Microsoft | Microsoft Ignite 2018
>> Live, from Orlando, Florida. It's theCube covering Microsoft Ignite, brought to you by Cohesity, and theCube's ecosystem partners. >> Welcome back everyone, to theCube's live coverage of Microsoft Ignite. I'm your host, Rebecca Knight, along with my cohost, Stu Miniman. We have two guests for this segment: we have Ross Smith, Principal Program Manager at Microsoft, and Greg Taylor, who is the Director of Product Marketing at Microsoft. Thank you so much for joining us! >> Thanks for having us. >> So, I want to start off by talking about messaging. You are both legends in the Microsoft messaging world, sorry to be obsequious here. >> That just means we're old. >> You've been around a while, it's not your first rodeo. >> No, no. >> So, talk a little bit about what's new, what enhancements you're doing for the Enterprise; it is the most used app. >> Yeah. >> So we're launching Exchange Server 2019 this year. It's another version of on-premises Exchange, and it's incredible. We had 2000 people registered for the session, and we had 1000 in the room. There's still some love for on-prem Exchange, no doubt, so that's been a big thing we're talking about at Ignite this year. For those customers, and I'll be honest, it's very much a release aimed at large Enterprise customers who want to keep some Exchange on-prem. We strongly believe that small and medium businesses should be in the cloud, so we've focused on the kind of features that really large Enterprises want to get from Exchange. >> Yeah, and then from an app perspective, we've been heavily invested in Outlook for iOS and Android, to bring a unique and valuable experience to both consumers and commercial users, using both Office 365 and Exchange on-premises. So we now have a hundred million users using Outlook Mobile today, it's been a great experience, and we continue to evolve the app on a weekly basis now. >> Can you talk a little bit about the evolution of the app and what kinds of features and enhancements you're adding for both consumers and the Enterprise? >> Right, yeah. So the app originally began as a consumer acquisition, which we've since rebranded as Outlook, and we've been heavily focused on bringing in the Enterprise features that our users know and love. Office 365 Groups is a great example of an experience that we built into the app that no other native mail client or third-party mail client can deliver today. We've delivered other Enterprise security-specific features, like Azure Active Directory conditional access, so customers can lock down which mobile apps can access the service and prevent any other client from doing so. And then, of course, there's Intune app protection policies, which allow us, and customers, to ensure the corporate data is protected while excluding the personal data, so that we can ensure there are no data leakage scenarios going on. >> I wonder if we can step back for a second. I think about messaging, it's very diverse. I remember back in the '90s, I was helping companies get access to this whole "internet thing" and LANs, and setting up, and oh, we're going to go from faxes and memos to emails, which shows how old I am in this business, too. But today, with our mobile devices, a lot of what we're doing with companies, whether they have their own data centers or are in the cloud, there are usually lots of different ways we communicate. My joke is, the best way to communicate with someone is probably the one they prefer and hopefully aren't buried in. >> Yes.
Because we all have the Slacks and all those other things out there. How do you view the messaging game, how do Exchange and Outlook and those fit into the overall portfolio and interact with everything else? >> From the Exchange side, email is dead. I've heard email is dead for I don't know how many years, and well, email is still one of the primary communication methods we all use and rely upon. And so Exchange was one of the applications that kind of coined the mission-critical application moniker, right? 22 years ago, 20 years ago, Exchange was one of the mission-critical apps. But we actually kind of think of Exchange now as almost a service, a commodity, like the power. And it's kind of interesting, we have the front and the back end of things, right? I'm thinking about the messaging infrastructure at the back, and Ross is now working on the client side. Most people see the client features and think of them as Outlook and client features, but a lot of them are Exchange features which are servicing the client. It's been a real kind of evolution. We've got to a point where nobody really cares about the back end, unless it's not there, and then that's a problem, but most of it is servicing the client. >> And so what we see is that the transition from typical on-premises infrastructure to the cloud service generally begins with email into the Office 365 stack, and that starts lighting up additional features. And then from a mobility perspective, we're seeing that that begins the on-ramp into mobile, because, like Greg mentioned, we've had email capability on mobile devices integrated into Exchange for 17 years now, so it's a very ubiquitous thing to have on a mobile device, and it's just a natural progression to use email on a mobile device. And then, as customers begin to move to Office 365, they start lighting up additional features, like Teams integration or Skype for Business or any of the other Office apps. And then they just light up naturally. And then through all of our protection mechanisms we're able to ensure that that entire experience is secure from an IT perspective, and protected. >> Just speaking of the evolution of messaging in and of itself, what do you see, as people who've been in the industry a long time, what do you see as next, I mean, where do we go from here? Email, they say, is dead, we know it's not dead, but what is the next generation of features and enhancements that you see customers really needing, and that you're working on at Microsoft? >> Alright, I think that Exchange was really interesting from an Office 365 perspective, as Exchange isn't really just a messaging engine anymore; it's a data store that, through things like Graph and all the other applications, is giving businesses a whole new way of looking at the data, and so we're pulling data from all the different places. Exchange is becoming almost a plumbing kind of infrastructure piece, but it's a key data source, and I think the data is still there, the communication is still there, but I think much of the future development is in the client-side apps and how people interact with the data, and the back-end just becomes the infrastructure, right? >> Actually, you bring up a great point. A premise that my Head of Research at Wikibon had is about Microsoft's position in AI today: with Office 365 and the messaging that you have, there's so much data there if you wanted it. What are people worrying about?
How can a company understand that? How can Microsoft help businesses in general? There's a touchpoint that even an infrastructure-as-a-service provider wouldn't have, but you really get to the endpoint and the end users in productivity, and that's a huge opportunity for Microsoft in the future, as long as you're not messing with our data; you're not like, you know, some of the other messaging people out there, where you're like, wait, why am I getting ads for that stuff, or, I think I talked about that stuff. >> And that's a great point, Stu, because going back to Outlook Mobile as an example, right, we're heavily investing in AI-driven capabilities in that app; zero-touch search, for instance. You can go right in the app, tap one button, and you see your favorite contacts, you get your Discover information from the Office Graph, your next itinerary and travel information, and we're lighting up that functionality across the board throughout the app. Location-rich data, using Cortana time-to-leave services, so that you can get to a meeting at the right time, as opposed to the typical, oh, it reminded me 15 minutes ahead and I've got to hop 45 minutes down to the other end of, where are we, West? The West building, right? So we're building all that functionality into clients like Outlook Mobile and the rest of the stack to help drive that type of capability. >> And all of that data's in the back end, right? You said email is this repository of incredible business information, and so the question is how you leverage that: how do you take what's in there and surface it in a way that makes sense to the users? It's a fascinating time at the moment, where the data's there, and we've just got to know how to use it in the right way. And I agree, using it in the right way and not using it to sell stuff, that's absolutely our approach to it, so, super important. >> And do you work closely with clients to come up with this new kind of functionality? One of the biggest challenges that so many technology companies face is staying on the cutting edge of these ideas and innovation, so how closely are you working with customers to dream up new functionality? >> Yeah, we're working with customers all the time. We do it through a variety of different channels. We have UserVoice, which allows customers and end users to directly interface and provide their ideas. We have private preview programs, where we target customers for specific new feature sets. TAP programs, like we're doing with Exchange 2019, as well as future releases within Office 365 that enable that type of experience. >> Exchange, I think, historically, has always been very customer-focused, very community-focused. We have a great bunch of MVPs, and the TAP program, the Technology Adoption Program, is a bunch of customers that deploy our pre-production code in production for us, so we've got some real big customers who are running versions of Exchange that the world hasn't seen. >> One of the themes we heard in Satya's keynote yesterday is business productivity, and we know one of the biggest challenges out there is, you get this new stuff, and you're like, well, I'm going to pretty much just try to use it the way I've always been doing it, and some of us have been using email for decades and decades, and I look at my own usage and wow, I'm probably a bit out of date. If I could just wipe my brain and say, 'okay, here's this cool new tool' that could do all this stuff, we wouldn't even call it email, we'd call it something different.
I know you guys do things like the Channel 9 broadcast, and I'm sure there's lots of things on the website; how do you help customers learn to use the new stuff and get rid of some of the old habits that they had in using these technologies? And can you get everybody to stop 'reply to all' in the big group? That would be super helpful. >> Work on that please. >> That's interesting, we're building it into the apps, to be honest. We're doing a lot of work, whenever we release new features, to light up an experience within the app that guides the user on how to use that new functionality, to help them understand what they can do with the app, as well as simplifying the overall app structure. You look at some of our apps, and they've become very bloated in terms of all the widgets you have available and knobs to control them, and we're trying to simplify that stack. We're refreshing with Outlook 2019 and Office ProPlus. We're refreshing the user interface on desktop, and we're doing the same on Mac. We've done it in Outlook for iOS, and we're redoing OWA as well, and Office 365, all to enhance and simplify the experience, and, as well, provide a consistent experience across all the endpoints, which will help. >> If the question here is, how do we wean people off email, how do we get them off email? >> Just their old habits and patterns. >> And you know, it's kind of funny, but it still works. I remember having a conversation with somebody once, it was a presentation we did once, and it was a team who did more of a social kind of thing, and their view was, they put a picture of the Queen of England up on a slide and said, 'Email is old, like the Queen of England.' And my response was, well, so are fire and the wheel, but they seem to be hanging around pretty well so far. So I think there are certain things for which email is still king, but it's evolving and changing. I think we're still waiting for the real killer app that replaces email. >> It's not Yammer. >> It's not what? (laughter) >> It's not Yammer. >> I'm not going on camera saying that. The way I prefer to think of it is, it doesn't really matter what the client is or how you all interact with it, if we can all use an app that suits our own style of working, right? My inbox is a zero inbox. I'm a zero-inbox kind of guy, right? If I can work like that and interact with people who want to work on a different client, I'm happy. >> Not to go on about the Yammer piece, but you made me think a little bit about acquisitions. Big acquisitions, like LinkedIn and GitHub; messaging ties into both of those quite a bit. Any visibility you can give? I know there's some integrations there, but how does that look? >> So we're launching LinkedIn integration with Outlook for iOS and Android as we speak. That's something we'll be rolling out shortly, and within the people or contact card, you can quickly see information from their LinkedIn data set, as well as the ability for us to push data from Office 365 into LinkedIn, so that LinkedIn users can also see relevant information about who that person's interacting with from a calendar type of perspective. >> Great. Well, Ross and Greg, thank you so much for coming on the show, it was >> Thanks for having us. really a pleasure having you. >> Yeah, it was great. >> I'm Rebecca Knight, for Stu Miniman; we will have more of theCUBE's live coverage from the Orange County Civic Center, Microsoft Ignite, in just a little bit.
(electronic music)
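Ross's point that Exchange has become a data store reached "through things like Graph" is easy to see from the developer side. The sketch below calls the real, documented Microsoft Graph endpoint for the signed-in user's mail; the access token is a placeholder you would obtain through Azure AD with a Mail.Read scope, and the snippet is only a minimal illustration, not production code.

```python
# Minimal Microsoft Graph call: read the signed-in user's latest messages.
# /v1.0/me/messages is the documented endpoint; the token is a placeholder.
import requests

TOKEN = "eyJhbGciOi..."  # Azure AD OAuth2 access token with Mail.Read scope

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$top": 5, "$select": "subject,from,receivedDateTime"},
    timeout=30,
)
resp.raise_for_status()
for msg in resp.json().get("value", []):
    print(msg["receivedDateTime"], msg["subject"])
```

Features like zero-touch search and the LinkedIn contact card draw on this kind of data access: the mailbox stops being an Outlook-only silo and becomes queryable data.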
SUMMARY :
Rebecca Knight and Stu Miniman host theCUBE live at Microsoft Ignite in Orlando with Ross Smith, Principal Program Manager, and Greg Taylor, Director of Product Marketing, at Microsoft. They discuss Exchange Server 2019, a release aimed at large Enterprise customers who want to keep some Exchange on-premises, and Outlook Mobile, now at a hundred million users, with Enterprise capabilities like Office 365 Groups, Azure Active Directory conditional access, and Intune app protection policies. The conversation covers Exchange's evolution from mission-critical application to commodity back end and data store surfaced through Graph, the AI-driven features being built on that data, customer feedback channels such as UserVoice and the Technology Adoption Program, and the LinkedIn integration rolling out in Outlook for iOS and Android.
ENTITIES
Entity | Category | Confidence |
---|---|---|
Rebecca Knight | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Greg Taylor | PERSON | 0.99+ |
Ross Smith | PERSON | 0.99+ |
Ross | PERSON | 0.99+ |
Greg | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Outlook | TITLE | 0.99+ |
17 years | QUANTITY | 0.99+ |
Office 365 | TITLE | 0.99+ |
two guests | QUANTITY | 0.99+ |
Cortana | TITLE | 0.99+ |
1000 | QUANTITY | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
45 minutes | QUANTITY | 0.99+ |
Exchange 2019 | TITLE | 0.99+ |
Outlook 2019 | TITLE | 0.99+ |
Channel 9 | ORGANIZATION | 0.99+ |
Office Pro Plus | TITLE | 0.99+ |
2000 people | QUANTITY | 0.99+ |
Exchange | TITLE | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
Android | TITLE | 0.99+ |
this year | DATE | 0.98+ |
20 years ago | DATE | 0.98+ |
yesterday | DATE | 0.98+ |
Orange County Civic Center | LOCATION | 0.98+ |
both | QUANTITY | 0.98+ |
OWA | TITLE | 0.98+ |
22 years ago | DATE | 0.98+ |
Github | ORGANIZATION | 0.98+ |
Office | TITLE | 0.98+ |
Exchange Server 2019 | TITLE | 0.98+ |
decades | QUANTITY | 0.98+ |
Outlook Mobile | TITLE | 0.97+ |
iOS | TITLE | 0.97+ |
Cohesity | ORGANIZATION | 0.97+ |
zero inbox | QUANTITY | 0.97+ |
Skype | ORGANIZATION | 0.97+ |
Satya | PERSON | 0.96+ |
one | QUANTITY | 0.96+ |
One | QUANTITY | 0.95+ |
today | DATE | 0.95+ |
hundred million users | QUANTITY | 0.94+ |
Stu | PERSON | 0.92+ |
Technology Adoption Program | OTHER | 0.91+ |
Ross | ORGANIZATION | 0.91+ |
theCube | ORGANIZATION | 0.91+ |
Aman Naimat, Demandbase, Chapter 2 | George Gilbert at HQ
>> And we're back, this is George Gilbert from Wikibon, and I'm here with Aman Naimat at Demandbase, the pioneers in the next generation of AI-driven CRM. So Aman, let's continue where we left off. We're talking about natural language processing, and I think most people are familiar with it more from the B to C technology, where the big internet providers have accumulated a lot of voice data and have learned how to process it and convert it into text. So tell us how B to B NLP is different, to use a lot of acronyms; in other words, how you're using it to build up a map of relationships between businesses. >> Right, yeah, we call it the demand graph. So it's an interesting question, because firstly, it turns out that, while very different, B to B language is also quite boring. It doesn't evolve as fast as consumer concepts, and so it makes the problem much more approachable from a language understanding point of view. So natural language processing, or natural language understanding, is all about how machines can understand and store and take action on language. While we were working on this four or five years ago, and that's my background as well, it turned out the problem was simpler. Human language is very rich, and natural language processing that converts voice to text is trivial compared to understanding the meaning of things and words, which is much more difficult, or even the sense of a word; apparently in English each word has six meanings, right? We call them word senses. So the problem was only simpler because B to B language doesn't tend to evolve as fast as regular language, because terms stick in an industry. The challenge with B to B, and why it was different, is that each industry or sub-industry has a very specific language and jargon and acronyms. So to really understand that industry, you need to come from that industry. If you go back to the CRM example of what happened 10, 20 years ago, you would have a salesperson that would come from that industry if you wanted to sell into it. And that still happens in some traditional companies, right? So the idea was to be able to replicate the knowledge that they would have as if they came from that industry. It's the language, the vocabularies, and then ultimately having a way of storing and taking action on it. It's very analogous to what Google had done with the Knowledge Graph. >> Alright, so two questions, I guess. First is, it sounds almost like a translation problem, in the sense that you have some base language primitives, like partner, supplier, competitor, customer, but the language in each industry is different, and so you have to map those down to those sort of primitives. So tell us the process. You don't have on staff people who translate from every industry. >> I mean, that was the old approach, writing logical rules or expressions for language, which used conventional good old-fashioned AI. >> You mean this was the rules-based knowledge engineering? >> That's right. And that clearly did not succeed, because it is impossible to do it. >> The old quip, which was, one researcher said, "Every time I fired a rules engineer, my accuracy score would go up." (chuckles) >> That's right, and now the problem is that language is evolving, and the context is so different. Even pharmaceutical companies in the US or in the Bay Area would use different language than pharma in Europe or in Switzerland. And so it's just impossible to quantify the variations. >> George: To do it manually.
>> To do it manually, it's impossible. It's certainly not possible for a small startup. And we did try having it be human-generated: in the early days we used to have crowdsource workers validate the machine. But it turned out that they couldn't do it either, because they didn't understand the pharmaceutical language either, right? So in the end, the only way to do that was to have some sort of model and some seed data to be able to validate it, or to hire experts and have small samples of data to validate. So going back to the graph, right, it turns out that where we have seen sophisticated AI work towards complex problems, for example predicting your next connection on LinkedIn, or your next friend, or what ads you should see on Facebook, they have used network-based data: social graph data, or in the case of Google, the Knowledge Graph of how things are connected. And somehow machine learning and AI systems based on network data tend to be more powerful and more intuitive than other types of models. >> So OK, when you say model, help us with an example of how you're representing a business and who it's connected to and its place in the world. >> So the demand graph is basically, for us as Demandbase: who are our customers, who are their partners, who are their suppliers, who are their competitors. And we utilize that network of companies in the same manner that we have a network of friends on LinkedIn or Facebook. And it turns out that businesses are extremely social in nature. In fact, we found out that the connections between companies have more signal, and are more predictive of acquisition or of predicting the next customer, than even the Facebook social graph. So it's much easier to utilize the business graph, the B to B business graph, to predict the next customer than to, say, predict your next friend on Facebook. >> OK, so that's a perfect analogy. So tell us about the raw material you churn through on the web, and then how you learn what that terminology might be. You've boot-strapped a little bit, now you have all this data, and you have to make sense out of new terms, and then you build this graph of who this business is related to. >> That's right, and the hardest part is to be able to handle rumors and to be able to handle jokes, like, "Isn't it time for Microsoft to just buy Salesforce?" Question mark, smiley face. You know, so it's a challenging problem. But we were lucky that business language and the business press are definitely more boring than, you know, people talking about movies. >> George: Or Reddit. >> Or Reddit, right. So the way we work is we process the entire business internet, or the entire internet. Initially we used to crawl it ourselves, but we soon realized that Common Crawl, which is an open source foundation that has crawled the internet and published at least a large chunk of it, really enabled us to stop the crawling. And we read the entire internet, and ultimately we're interested in businesses, 'cause that's the world we're in: business, B to B marketing and B to B sales. We look wherever there's a company mentioned, or a business person or business title mentioned, and ignore everything else, 'cause if it doesn't have a company, a business person, or a business product, we don't care. So we read the entire internet, and when, say, Amazon is mentioned in a document, we then have to figure out: is it Amazon the company, or is it Amazon the river? So that's problem number one. We call it the entity linking problem.
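A toy version of that Amazon-the-company-or-the-river decision: score each candidate sense by how much the words around the mention overlap with words typical of that sense. Production entity linking uses far richer features and knowledge bases; the sense profiles below are invented purely for illustration.

```python
# Toy entity linking: pick the sense whose context profile best
# overlaps the words around the mention. Profiles are invented.
SENSES = {
    "Amazon (company)": {"cloud", "aws", "retail", "revenue", "marketplace"},
    "Amazon (river)":   {"river", "rainforest", "brazil", "basin", "tributary"},
}

def link(mention_context: str) -> str:
    words = set(mention_context.lower().split())
    scores = {sense: len(words & profile) for sense, profile in SENSES.items()}
    return max(scores, key=scores.get)

print(link("Amazon reported strong cloud revenue from AWS this quarter"))
print(link("The Amazon basin drains much of the Brazil rainforest"))
```

A real system would also lean on the business graph itself, since knowing which companies and people co-occur with a mention is often the strongest disambiguation signal.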
And then we try to understand and piece together the various expressions of relationships between companies as expressed in text. It could be a press release, it could be a competitive analysis, it could be an announcement of a new product. It could be a supply chain relationship. It could be a rumor. And then it also turns out the internet's very noisy, so we look at corroboration across multiple disparate sources-- >> George: Interesting, to decide-- >> Is it true? >> George: To signal is it real. >> Right, yeah, 'cause there's a lot of fake news out there. (George laughs) So we look at corroboration and the sources to be able to infer whether we can have confidence in this. >> I can imagine this could be applied to-- >> A lot of other problems. >> Political issues. So OK, you've got all these sources; give us some specific examples of feeds, of sources, and then help us understand, 'cause I don't think we've heard a lot about the notion of boot-strapping, and it sounds like you're generalizing, which is not something that those of us with a surface-level familiarity with machine learning are used to. >> I think there was a lot of research, like, not to credit Google too much, but... Boot-strapping methods were used by Sergey, I think he was the first person, and then he gave up 'cause they founded Google and moved on. And since then, in 2003, 2004, there was a lot of research around this topic. You know, it's in the genre of unsupervised machine learning models. And in the real world, because there's less labeled data, we tend to find that to be an extremely effective method to learn language, and obviously now with deep learning, unsupervised methods are being utilized more. But the idea is really to, and this was around five years ago when we started building this graph, and I obviously don't know how the Google Knowledge Graph is built, but I can assume it's a similar technique; we don't tend to talk about how commercial products work that much. But the idea is basically to generalize models, or learn from a small seed. So let's say I put in a seed like Nike and Adidas, and say they compete, right? And then you look at the entire internet and look at all the expressions of how Nike and Adidas appear together in language. It could be, you know, "I think Nike shoes are better than Adidas." >> Ah, so it's not just that you find an opinion that one is better than the other, but you find all the expressions that indicate they're distinct and in competition. >> That's right. But we also find cases where somebody's saying, "I bought Nike and Adidas," or, "Nike and Adidas shoes are sold here." So we have to be smart enough to discern when it's something else and not competition. >> OK, so you've told us how this graph gets built out. So the suppliers, the partners, the customers, the competitors, now you've got this foundation-- >> And people and products as well. >> OK, people, products. You've got this really rich foundation. Now you build an application on top of it. Tell us about CRM with that foundation. >> Yeah, I mean, we have the demand graph, in which we also tie in things around basic data that you could find from firmographics, and intent data that we've also built. But it also turns out that with the knowledge graph itself, our initial intuition was that we'd just expose it to end users and they'd be able to figure it out. But it was just too complicated.
It really needed another level of machinery and AI on top to take advantage of the graph, and to be able to build prescriptive actions, or to solve a business problem. A problem could be: I'm an IOT startup, I'm looking for manufacturing companies who will buy my product. Or it could be: I am a venture capital firm, I want to understand what other venture capital firms are investing in. Or: hey, I'm Tesla, and I'm looking for a new supplier for the new Tesla screen. You know, things of that nature. So then we apply and build specific models, more machine learning, or layers of machine learning, to then solve specific business problems, like the reinforcement learning to understand the next best action. >> And are these models associated with one of your customers? >> No, they're general-purpose; they're packaged applications. >> OK, tell us more. So what was the base level technology that you started with, in terms of being able to manage a customer conversation, a marketing conversation, and then how did that get richer over time? >> Yeah, we take our proprietary data sets that we've accumulated and manufactured over the years, and then co-mingle them with customer data, which we keep private, 'cause they own the data. And the technology is generic, but you're right, the model being generated by the machine is specific to every customer. So obviously the next best action model for a pharmaceutical company is based on which doctors are being visited, whether a person is an oncologist, or what they're researching online. And that model is very different than a model for Demandbase, for example, or Salesforce. >> Is it that the algorithm's different, or it's trained on different data? >> It's trained on different data. It's the same code; I mean, we only have 20, 30 data scientists, so we're obviously not going to build custom code for... So the idea is it's the same model, the same meta model, trained on different data. So public data, but also customers' private data. >> And how much does the customer, let's say your customer's Tesla, how much of it is them running some of their data through this boot-strapping process, versus how much of it is that your model is set up, and once you've boot-strapped it, it automatically starts learning from the interactions with Tesla itself, from all the different partners and customers? >> Right, I think, you know, we have found most startups are just learning over small data sets, which are customer-centric. What we have found is that real magic happens when you take private data and combine it with large amounts of public data. So at Demandbase, we have massive amounts of public and proprietary data. And then we plug in, and we have to tell it that our client is Tesla, so it understands the localized graph and knows the Tesla ecosystem, and that's based on public data sets and our proprietary data. Then we also bring in your private slice whenever possible. >> George: Private...? >> Slice of data. So we have code that can plug into your web site and then start understanding interactions that your customers are having. And then based on that, we're able to train our models. As much as possible, we try to automate the data capture process, in essence using a sensor or a pixel on your web site, and then we take that private stream of data, include it in our graph, and merge it in, and that's where we find... Our data by itself is not as powerful as our data mixed with your private data.
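"Same code, same meta model, trained on different data" has a direct analogue in everyday machine learning tooling: one pipeline definition, fit separately for each customer on that customer's slice of data. The sketch below uses scikit-learn with made-up features and labels purely to show the shape of the idea; it is not Demandbase's actual pipeline.

```python
# One model definition, many tenant-specific fits. All data is made up.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def make_meta_model():
    # The shared "meta model": identical code for every customer.
    return make_pipeline(StandardScaler(), LogisticRegression())

tenants = {
    # features: [site visits, content reads]; label: became a customer
    "pharma_co": ([[3, 9], [1, 0], [5, 7], [0, 1]], [1, 0, 1, 0]),
    "tesla":     ([[8, 2], [0, 1], [6, 3], [1, 0]], [1, 0, 1, 0]),
}

models = {}
for name, (X, y) in tenants.items():
    models[name] = make_meta_model().fit(X, y)  # private, per-tenant training

print(models["pharma_co"].predict([[4, 8]]))    # score a new account
```

The per-tenant model never leaves its slice, which matches the privacy point Aman keeps returning to: the customer's data trains the customer's model.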
>> So I guess one way to think about it would be, there's a skeletal graph, and that may be sounding too minimalistic, there's a graph. But let's say you take Tesla as the example, you tell them what data you need from them, and that trains the meta models, and then it fleshes out the graph of the Tesla ecosystem. >> Right, whatever data we couldn't get or infer from the outside. And we have a lot of proprietary data, where we see online traffic, business traffic, what people are reading, who's interested in what, for hundreds of millions of people. We have developed that technology. So we know a lot without actually getting people's private slice. But you know, whenever possible, we want the maximum impact. >> So... >> It's actually simple, and let's divorce the word graph for a second. It's really about, let's say that I know you, right, and there's some information you can tell me about you. But imagine if I google your name, and I read every document about you, every video you have produced, every blog you have written, then I have the best of both kinds of knowledge, right, your private data from maybe your social graph on Facebook, and then your public data. And then if I knew, you know... If I partnered with Forbes and they told me you logged in and read something on Forbes, then they'll give me that data, so now I really have a deep understanding of what you're interested in, who you are, what's your language, you know, what are you interested in. It's that, sort of simplified, but similar, at a much larger scale. >> Alright, let's take a pause at this point and then we'll come back with part three. >> Excellent.
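The "skeletal graph fleshed out by a private slice" idea can be sketched the same way. This is a toy illustration under assumed names ("Acme Displays" is invented, and the Tesla-Panasonic edge just stands in for public supplier data), not the actual Demandbase graph:

```python
import networkx as nx

# Skeletal graph built from public and proprietary sources.
graph = nx.DiGraph()
graph.add_edge("Tesla", "Panasonic", relation="supplier")

# Private slice: interactions captured by a pixel on the client's site.
private_events = [
    {"company": "Acme Displays", "action": "read_product_page"},
    {"company": "Acme Displays", "action": "requested_quote"},
]

# Merge: each private signal strengthens an edge in the localized graph.
for event in private_events:
    prior = graph.get_edge_data("Tesla", event["company"], {})
    graph.add_edge("Tesla", event["company"],
                   relation="prospective_supplier",
                   weight=prior.get("weight", 0) + 1)

print(graph["Tesla"]["Acme Displays"])
# {'relation': 'prospective_supplier', 'weight': 2}
```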
ENTITIES
Entity | Category | Confidence |
---|---|---|
Switzerland | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
George Gilbert | PERSON | 0.99+ |
US | LOCATION | 0.99+ |
2003 | DATE | 0.99+ |
George | PERSON | 0.99+ |
Sergei | PERSON | 0.99+ |
Bay Area | LOCATION | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
Adidas | ORGANIZATION | 0.99+ |
Nike | ORGANIZATION | 0.99+ |
two questions | QUANTITY | 0.99+ |
six meanings | QUANTITY | 0.99+ |
2004 | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Forbes | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
Demandbase | ORGANIZATION | 0.99+ |
each word | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Aman | ORGANIZATION | 0.99+ |
each industry | QUANTITY | 0.99+ |
four | DATE | 0.98+ |
Aman Naimat | PERSON | 0.98+ |
Wikibon | ORGANIZATION | 0.95+ |
hundreds of millions of people | QUANTITY | 0.95+ |
English | OTHER | 0.94+ |
10, 20 years ago | DATE | 0.94+ |
first person | QUANTITY | 0.94+ |
one way | QUANTITY | 0.94+ |
Aman Naimat | ORGANIZATION | 0.94+ |
five years ago | DATE | 0.93+ |
20, 30 data scientists | QUANTITY | 0.88+ |
Salesforce | ORGANIZATION | 0.88+ |
firstly | QUANTITY | 0.86+ |
one researcher | QUANTITY | 0.83+ |
around five years ago | DATE | 0.82+ |
one | QUANTITY | 0.73+ |
a second | QUANTITY | 0.71+ |
Salesforce | TITLE | 0.67+ |
Chapter 2 | OTHER | 0.64+ |
Knowledge Graph | TITLE | 0.63+ |
part three | QUANTITY | 0.56+ |
Nenshad Bardoliwalla & Pranav Rastogi | BigData NYC 2017
>> Announcer: Live from Midtown Manhattan it's theCUBE. Covering Big Data New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. >> OK, welcome back everyone, we're here in New York City, it's theCUBE's exclusive coverage of Big Data NYC, in conjunction with Strata Data going on right around the corner. It's our third day talking to all the influencers, CEOs, entrepreneurs, people making it happen in the Big Data world. I'm John Furrier, co-host of theCUBE, with my co-host here Jim Kobielus who is the Lead Analyst at Wikibon Big Data. Nenshad Bardoliwalla. >> Bar-do-li-walla. >> Bardo. >> Nenshad Bardoliwalla. >> That guy. >> Okay, done. Of Paxata, Co-Founder & Chief Product Officer, it's a tongue twister, third day, being from Jersey, it's hard with our accent, but thanks for being patient with me. >> Happy to be here. >> Pranav Rastogi, Product Manager, Microsoft Azure. Guys, welcome back to theCUBE, good to see you. I apologize for that, third day blues here. So Paxata, we had your partner on, Prakash. >> Prakash. >> Prakash. Really a success story, you guys have done really well since launching, fun to watch you go from launch to success. Obviously your relationship with Microsoft super important. Talk about the relationship because I think this is really where people can start connecting the dots. >> Sure, maybe I'll start and I'll be happy to get Pranav's point of view as well. Obviously Microsoft is one of the leading brands in the world and there are many aspects of the way that Microsoft has thought about their product development journey that have really been critical to the way that we have thought about Paxata as well. If you look at the number one tool that's used by analysts the world over it's Microsoft Excel. Right, there isn't even anything that's a close second. And if you look at the evolution of what Microsoft has done in many layers of the stack, whether it's the end user computing paradigm that Excel provides to the world. Whether it's all of their recent innovation in both hybrid cloud technologies as well as the big data technologies that Pranav is part of managing. We just see a very strong synergy between trying to combine the usage by business consumers of being able to take advantage of these big data technologies in a hybrid cloud environment. So there's a very natural resonance between the 2 companies. We're very privileged to have Microsoft Ventures as an investor in Paxata and so the opportunity for us to work with one of the great brands of all time in our industry was really a privilege for us. Yeah, and that's the corporate side, so that wasn't actually part of it. So it's a different part of Microsoft which is great. You have also business opportunity with them. >> Nenshad : We do. >> Obviously data science problem that we're seeing is that they need to get the data faster. All that prep work seems to be the big issue. >> It does and maybe we can get Pranav's point of view from the Microsoft angle. >> Yeah so to sort of continue what Nenshad was saying, you know data prep in general is sort of a key core competency which is problematic for lots of users, especially around the knowledge that you need to have in terms of the different tools you can use. Folks who are very proficient will do ETL or data preparation-like scenarios using one of the computing engines like Hive or Spark.
That's good, but there's this big audience out there who like an Excel-like interface, which is easy to use, a very visually rich graphical interface where you can drag and drop and click through. And the idea behind all of this is how quickly can I get insights from my data. Because in a big data space, it's volume, variety and velocity. So data is coming at a very fast rate. It's changing, it's growing. And if you spend lots of time just doing data prep you're losing the value of data, or the value of data would change over time. So what we're trying to do with Paxata on HDInsight is enable these users to use Paxata, get insights from data faster by solving key problems of doing data prep. >> So data democracy is a term that we've been kicking around, you guys have been talking about as well. What does it actually mean? Because we've been teasing it out the first two days here at theCUBE and BigData NYC. It's clear the community aspect of data is growing, almost on a similar path as you're seeing with open source software. That genie's out of the bottle. Open source software, tier one, it won, it's only growing exponentially. That same paradigm is moving into the data world where the collaboration is super important, in this data democracy, what does that actually mean and how does that relate to you guys? >> So the perspective we have starts with something that one of our customers said, which is that there is no democracy without certain degrees of governance. We all live in a democracy. And yet we still have rules that we have to abide by. There are still policies that society needs to follow in order for us to be successful citizens. So when a lot of folks hear the term democracy they really think of the wild wild west, you know. And a lot of the analytic work in the enterprise does have that flavor to it, right, people download stuff to their desktop, they do a little bit of massaging of the data. They email that to their friend, their friend then makes some changes and next thing you know we have what some folks affectionately call spreadmart hell. But if you really want to democratize the technology you have to wrap not only the user experience, like Pranav described, into something that's consumable by a very large number of folks in the enterprise. You have to wrap that with the governance and collaboration capabilities so that multiple people can work off the same data set. So that you can apply the permissions: who is allowed to share with each other, and under what circumstances are they allowed to share. Under what circumstances are you allowed to promote data from one environment to another? It may be okay for someone like me to work in a sandbox but I cannot push that to a database or HDFS or Azure Blob storage unless I actually have the right permissions to do so. So I think what you're seeing is that, in general, technology always goes on this trend towards democratization. Whether it's the phone, whether it's the television, whether it's the personal computer, and the same thing is happening with data technologies and certainly companies like -- >> Well, Pranav, we're talking about this when you were on theCUBE yesterday. And I want to get your thoughts on this. The old way to solve the governance problem was to put data in silos. That was easy, I'll just put it in a silo and take care of it, and access control was different.
But now the value of the data is about cross-pollinating and making it freely available, horizontally scalable, so that it can be used. But at the same time you need to have a new governance paradigm. So, you've got to democratize the data by making it available, addressable and usable for apps. At the same time there are also concerns about how you make sure it doesn't get in the wrong hands and so on and so forth. >> Yeah, and what's also very common regarding open source projects in the cloud is how do you ensure that the user who accesses this open source project or runs it has the right credentials and is authorized. So, the benefit that you sort of get in the cloud is there's a centralized authentication system. There's Azure Active Directory, so you know most enterprises would have Active Directory users. Who are then authorized to access maybe this cluster, or maybe this workload, and they can run this job, and that further goes down to the data layer as well. Where we have access policies which then describe what user can access what files and what folders. So if you think about the end-to-end scenario, there is authentication and authorization happening, and for the entire system, what user can access what data. And part of what Paxata brings into the picture is how do you visualize this governance flow as data is coming from various sources, how do you make sure that the person who should have access to data does have access, and the one who doesn't cannot access data. >> Is that the problem with data prep, is it just that piece of it? What is the big problem with data prep, I mean, that seems to be, everyone keeps coming back to the same problem. What is causing all this data prep? >> People not buying Paxata, it's very simple. >> That's a good one. Check out Paxata, they're going to solve your problems, go. But seriously, there seems to be the same hole people keep digging themselves into. They gather their stuff, then next thing they're in the same hole, they've got to prepare all this stuff. >> I think the previous paradigms for doing data preparation tie exactly to the data democracy themes that we're talking about here. If you only have a very siloed group of people in the organization with very deep technical skills but don't have the business context for what they're actually trying to accomplish, you have this impedance mismatch in the organization between the people who know what they want and the people who have the tools to do it. So what we've tried to do, and again you know taking a page out of the way that Microsoft has approached solving these problems, you know both in the past and in the present. Is to say look, we can actually take the tools that once were only in the hands of the, you know, shamans who know how to utter the right incantations, and instead move that into the common folk who actually. >> The users. >> The users themselves, who know what they want to do with the data. Who understand what those data elements mean. So if you were to ask the Paxata point of view, why have we had these data prep problems? Because we've separated the people who had the tools from the people who knew what they wanted to do with it. >> So it sounds to me, correct me if this is the wrong term, that what you offer in your partnership is basically a broad curational environment for knowledge workers.
You know, to sift and sort and annotate shared data, with the lineage of the data preserved in essentially a system of record that can follow the data throughout its natural life. Is that a fair characterization? >> Pranav: I would think so yeah. >> You mention, Pranav, the whole issue of how one visualizes or should visualize this entire chain of custody, as it were, for the data, is there any special visualization paradigm that you guys offer? Now Microsoft, you've made a fairly significant investment in graph technology throughout your portfolio. I was at Build back in May and Satya and the others just went to town on all things to do with Microsoft Graph, will that technology be somehow, at some point, now or in the future, reflected in this overall capability that you've established here with your partner here Paxata? >> I am not sure. So far, I think what you've talked about is some Graph capabilities introduced from the Microsoft Graph, that's sort of one extreme. The other side of Graph exists today: as a developer you can do some Graph-based queries. So you can go to Cosmos DB, which has a Gremlin API for Graph-based query, so I don't know how. >> I'll get right to the question. What are the Paxata benefits with HDInsight? How does that, just quickly, explain for the audience. What is that solution, what are the benefits? >> So the solution is you get a one-click install of Paxata on HDInsight, and the benefit, for a user persona who's not, sort of, used to big data or Hadoop, is they can use a very familiar GUI-based experience to get their insights from data faster without having any knowledge of how Spark works or Hadoop works. >> And what does the Microsoft relationship bring to the table for Paxata? >> So I think it's a couple of things. One is Azure is clearly growing at an extremely fast pace. And a lot of the enterprise customers that we work with are moving many of their workloads to Azure and these cloud-based environments. Especially for us, the unique value proposition of a partner who truly understands the hybrid nature of the world. The idea that everything is going to move to the cloud or everything is going to stay on premise is too simplistic. Microsoft understood that from day one. That data would be in any and all of those different places. And they've provided enabling technologies for vendors like us. >> I'll just say it, maybe you're too coy to say it, but the bottom line is you have an Excel-like interface. They have Office 365, their users are going to instantly love that interface because it's an easy to use interface, an Excel-like, it's not an Excel interface per se. >> Similar. >> Metaphor, graphical user interface. >> Yes it is. >> It's clean and it's targeted at the analyst role or user. >> That's right. >> That's going to resonate in their install base. >> And combined with a lot of these new capabilities that Microsoft is rolling out from a big data perspective. So HDInsight has a very rich portfolio of runtime engines and capabilities. They're introducing new data storage layers, whether it's ADLS or Azure Blob storage, so it's really a nice way of us working together to extract and unlock a lot of the value that Microsoft offers. >> So, here's the tough question for you, open source projects. I see Microsoft, comments were hell froze over because Linux is now part of their DNA, which was a comment I saw at the event this week in Orlando, but they're really getting behind open source. From Open Compute, it's just clearly new DNA.
They're into it. How are you guys working together in open source and what's the impact to developers, because now it's not only one cloud, there's other clouds out there, so data's going to be an important part of it. So open source, together, you guys working together on that, and what's the role for the data? >> From an open source perspective, Microsoft plays a big role in embracing open source technologies and making sure that they run reliably in the cloud. And part of the value prop that we provide in Azure HDInsight is making sure that you can run these open source big data workloads reliably in the cloud. So you can run open source like Apache Spark, Hive, Storm, Kafka, R Server. And the hard part about running open source technology in the cloud is how do you fine tune it, and how do you configure it, how do you run it reliably. And that's sort of what we bring in from a cloud perspective. And we also contribute back to the community based on sort of what we've learned by running these workloads in the cloud. And we believe, you know, in the broader ecosystem customers will sort of have a mixture of these combinations in their solutions. They'll be using some of the Microsoft solutions, some open source solutions, some solutions from the ecosystem, that's how we see our customers' solutions sort of being built today. >> What's the big advantage you guys have at Paxata? What's the key differentiator for why someone should work with you guys? Is it the automation? What's the key secret sauce to you guys? >> I think it's a couple of dimensions. One is I think we have come the closest in the industry to getting a user experience that matches the Excel target user. A lot of folks are attempting to do the same but the feedback we consistently get is that when the Excel user uses our solution they just, they get it. >> Was there a design criteria, was that from the beginning how you were going to do this? >> From day one. >> So you engineer everything to make it as simple as Excel. >> We want people to use our system, they shouldn't be coding, they shouldn't be writing scripts. They just need to be able. >> Good Excel, you just do good macros though. >> That's right. >> So simple things like that right. >> But the second is being able to interact with the data at scale. There are a lot of solutions out there that make the mistake, in our opinion, of sampling very tiny amounts of data and then asking you to draw inferences and then publish that to batch jobs. Our whole approach is to smash the batch paradigm and actually bring as much into the interactive world as possible. So end users can actually point and click on 100 million rows of data, instead of the million that you would get in Excel, and get an instantaneous response. Versus designing a job in a batch paradigm and then pushing it through the batch. >> So it's interactive data profiling over vast corpuses of data in the cloud. >> Nenshad: Correct. >> Nenshad Bardoliwalla, thanks for coming on theCUBE, appreciate it, congratulations on Paxata and Microsoft Azure, great to have you. Good job on everything you do with Azure. I want to give you guys props, with seeing the growth in the market, and the investment's been going well, congratulations. Thanks for sharing, keep the coverage here in BigData NYC, more coming after this short break.
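To ground the data prep discussion above, here is a sketch of the kind of transformation a proficient user would hand-write against Spark on HDInsight, and that a GUI tool like Paxata generates behind the scenes. The file paths and column names are invented for illustration; only the PySpark calls themselves are standard.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("data-prep-sketch").getOrCreate()

# Raw landing data in Azure Blob storage (wasb is HDInsight's blob scheme).
raw = spark.read.option("header", True).csv("wasb:///data/orders_raw.csv")

prepped = (
    raw
    # Standardize messy business names before joining.
    .withColumn("customer", F.trim(F.lower(F.col("customer"))))
    # Fix types that arrive as strings.
    .withColumn("amount", F.col("amount").cast("double"))
    # Drop rows that can't be used downstream.
    .dropna(subset=["customer", "amount"])
    .dropDuplicates(["order_id"])
)

# Publish the cleaned set for BI and analytics users.
prepped.write.mode("overwrite").parquet("wasb:///data/orders_clean")
```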
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim Kobielus | PERSON | 0.99+ |
Jersey | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Excel | TITLE | 0.99+ |
2 companies | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
New York City | LOCATION | 0.99+ |
Orlando | LOCATION | 0.99+ |
Nenshad | PERSON | 0.99+ |
Bardo | PERSON | 0.99+ |
Nenshad Bardoliwalla | PERSON | 0.99+ |
third day | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Office 365 | TITLE | 0.99+ |
yesterday | DATE | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.99+ |
100 million rows | QUANTITY | 0.99+ |
BigData | ORGANIZATION | 0.99+ |
Paxata | ORGANIZATION | 0.99+ |
Microsoft Ventures | ORGANIZATION | 0.99+ |
Pranav Rastogi | PERSON | 0.99+ |
first two days | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
million | QUANTITY | 0.98+ |
second | QUANTITY | 0.98+ |
Midtown Manhattan | LOCATION | 0.98+ |
Spark | TITLE | 0.98+ |
this week | DATE | 0.98+ |
first | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
one click | QUANTITY | 0.97+ |
Prakash | PERSON | 0.97+ |
Azure | TITLE | 0.97+ |
May | DATE | 0.97+ |
Wikibon Big Data | ORGANIZATION | 0.96+ |
Hadoop | TITLE | 0.96+ |
Hive | TITLE | 0.94+ |
today | DATE | 0.94+ |
Strata Data | ORGANIZATION | 0.94+ |
Pranav | PERSON | 0.93+ |
NYC | LOCATION | 0.93+ |
one cloud | QUANTITY | 0.93+ |
2017 | DATE | 0.92+ |
Apache | ORGANIZATION | 0.9+ |
Paxata | TITLE | 0.9+ |
Graph | TITLE | 0.89+ |
Pranav | ORGANIZATION | 0.88+ |
Katrina Gosek & Alistair Galbraith - Oracle Modern Customer Experience #ModernCX - #theCUBE
>> Host: Live from Las Vegas. It's The Cube! Covering Oracle Modern Customer Experience 2017. (electronic music) Brought to you by Oracle. >> Okay, welcome back everyone, we're here live at the Mandalay Bay for Oracle's Modern CX Show, Modern Customer Experience, this is the Cube, I'm John Furrier. My co-host, Peter Burris, two days of wall-to-wall coverage. Day two, my next guest is Katrina Gosek, Senior Director Commerce Product Strategy, (mumbles) Oracle OpenWorld a few years ago, and Alistair Galbraith, Sr. Director of CX, Customer Experience Innovation Lab with Oracle. Welcome to The Cube, great to see you. >> Thank you. >> Thanks, welcome. >> So commerce is part of the story, it's not just marketing, there's transactions involved, there's R & D, there's a lot of technology. The show here is the common theme of just modernizing the customer experience, which is good, because it's the outcomes. But commerce is one of them. Give us the update, what's hot for you guys this week? >> Yeah, I think what's different this year, from any other year in the past, is the pace of innovation is changing, because I think there's so much disruption in the commerce space, and particularly in retail and also B to B commerce. There's lots of new expectations from customers. I know we've been saying that for years, right? But I think the technologies now, that can enable some new experiences, have rapidly changed. Now it's completely fathomable to leverage AI to drive more high-end personalization or to leverage internet of things, to embed commerce more into everyday experience. >> John: Where's the innovation in retail? 'Cause retail's not a stranger to data. They've had data models going back, but certainly digital changes things, they're at the edge of the networks, so it's a little bit of internet of things meets consumer data, the data's huge if you can get the identity of the person. That seems to be the key conversation: how do you guys enable that to take advantage of the sea of data that you're providing from the data cloud, third party and first party data? >> Well I think there's a lot of fun approaches. Oracle has a technology called the Oracle ID Graph, which starts to merge a lot of identities across channels, so where customers are using data cloud, that can inform those micro interactions as they move between channels, and I think one of the trends we've been seeing this year that we're talking about as My Channel, is that customers no longer really complete one interaction or one transaction in one place. They might start on mobile, move to voice, move into a physical store, and we're trying to track that customer in all of those places, so a lot of our focus, as you see data cloud move into AI, is enabling brands to move this data around more easily without needing to know everything about the customer themselves. >> John: Well that's the key for the experience of the customer, because they don't want to have to answer the same questions again if they're on a chat bot, and they've already been at a transaction. Knowing what someone's doing at any given time is good contextual data. >> Alistair: Yep. >> Well it's funny you say that, because when we talk to customers or end consumers, they're not thinking, "I need more artificial intelligence, "I need more data around my experience, "I need internet of things", they're thinking, "I want convenience, I want this to be fast and quick, "I want you to know me as a brand, "I don't want to have to re-enter everything.
"If I'm talking to a customer service agent, "versus someone in the store, versus interacting online". So data's a huge part of that, the challenge is how do you make it consistent? >> John: Katrina has a great point: it's not the technology, it's about what they're trying to do. >> Katrina: Yeah, exactly, very much. >> Well the experience comes back to, in many respects, convenience, and, "I want you to sustain "the state of where I am in my journey for me". >> Katrina: Correct, yeah. >> Or at least not blow my state up. So it's interesting, the journey used to be a role or a context thing, and now we're adding physical location to it, as well as device. So go back to this notion of new experiences. 'Cause it's got to be more than, you can look at something on your phone and then transact on your phone. What are some of the new experiences on the horizon? 'Cause that is a lot to do with where you guys think digital technology's going to go. >> I think some of those experiences are micro-interactions, so that could be people are using voice shopping, but not for the entire purchase, just a re-order this thing, what's the status of this thing? And brands are also using the data that they're gathering to tweak and adjust those interactions. So we're seeing data coming from real world devices and IOT changing the expectation of the customer, as they, maybe, we showed some stories where people are re-ordering products using voice, and then when they shift between these channels, that micro piece of data is really changing that interaction. The other challenge we're seeing is the consistency of the interaction, you said yourself, not only it's the complexity of "what did I do?", but if I do something here and I do something here, I should get the same experience both times. >> So we're talking mostly at this point about the B to C, the consumer world. In many respects, some of the most interesting experiences, we can envisage in the B to B world, where a community of sellers is selling to a community of buyers, and the state that's really important is how does that buying community interact with each other? As they discover things and share information. So how do you see this notion of new experiences starting to manifest itself in the B to B world? >> Katrina: Yeah, it's interesting you say that, because I often, we work with both B to C and B to B clients, and I actually think B to B has always been more focused on personalization, because they do have so much information about their customers, contract data, a lot of information about the buyer, the companies, they've always done kind of online custom personalized catalogs. So I think there's a lot that B to C can learn from B to B about how to leverage that data to personalize experiences. >> John: And vice-versa too, it's interesting, to that point, the B to C is a leading indicator on the experience side, but B to B's got the blocking and tackling down, if they have the data. 'Cause having the data, you get the goods. Okay, so here's the question for you: with the consumers going to digital, you're seeing massive, we were reporting yesterday, here on The Cube and also on siliconhill.com, as well as Adage, not that we didn't predict this, but ad spend now on digital has surpassed TV for the first time. 
Which is an indicator, but the ad tech world's changing, because how people are engaging with the customer is changing, so the question is, what technology is going to help transition those ad dollars from banner ads and older formats to something more compelling and using data? 'Cause you can imagine retail being less about click, buy, and more about sharing data. So the spend's only going to grow on advertising, on reaching consumers. That conversion, that experience, is going to have to move from direct response clicking to more experience, what tech is out there? >> Well, I think the biggest challenge has always been tracking and personalizing for a unique interaction. Just the sheer volume of data that's coming in, it's just too hard to consume. So I think the blend of AI with the ability to tweak, adjust, look at multi-variate tests, and change the interaction as it goes, that's going to really massively affect the journeys for retailers, and I think the big benefit as brands move to the cloud, the cost of innovation, the cost of trying something and failing, is so much less, and the pace of innovation is so much faster, I think we're seeing people try new things with the data they've got. Find out what works and what doesn't. >> Here's a question for you guys. We were talking to Jess Cahill when this came up yesterday as well, Peter brought this up as part of the big data action going on with the AI and whatnot. Batch to real time is a shift, and it's clear here at the show: the batch is there, but it's the older model; real time data in motion, consumers in motion, are out there, so the real time is now the key. Can you comment on that? >> I think it goes back to what Alistair was saying earlier about those micro-moments. I think transacting in new and unexpected places, ways, I think that's the key, and that's actually a huge challenge for our customers, because you have to be able to use that data in real time, because that customer is standing there with their phone, or in front of Alexa, or a speaker. >> John: It's an opportunity. >> It's a huge opportunity, and I think those opportunities are everywhere now. In a couple of years it'll be the refrigerator, if you're re-ordering groceries, leveraging the screen, so I think that's going to be the challenge, but I think we've got time to help our customers figure out how to leverage that in real time. I think staying nimble and agile is going to be key, and failing fast, and I guess a more positive way to say this--
And I think where orchestrating and joining these micro-moments together, it's really where we're focusing a lot of our investment at the moment. >> One of the big things that's happening in the industry today is we're starting to develop techniques, and approaches, methods, for conceptualizing how a real thing is turned into a digital representation. IBM calls and not to mention them, or GE, perhaps more of a customer ... (group laughs) Yeah, I just did. >> That's all right. >> This notion of a digital twin. Commerce succeeds, where online electronic commerce succeeds as we are more successful at representing goods and services digitally. What's the relationship between IOT and some of these techniques for manifesting things digitally? And commerce, because commerce can expand its portfolio, things it can cover, as more of these things can be successfully digitally represented? >> I think that's key, and that's actually one of the predictions that we talked about in our keynote is how do you represent new ways of representing the physical store, the physical space with customers, so for me, I think something that probably Back to the Future or Judy Jetson, like a few years ago, augmented reality, or virtual reality, I think now we're going to see that more. We're starting to see it more with furniture sales, for example, you're on your iPad at home, and you can put the couch you've chosen in the space, right there with you, and see if it fits, but you're in your home, you don't have to go to the furniture store, and kind of guess with your tape measure whether the couch fits or not. And I think that's applicable in B to B as well, as 3D CAD drawings, you can kind of see them in VR, or AR. >> Amazon just announced Look, yesterday, which is the selfie tool that allows you to see what you're wearing. >> I think we're going to see a lot more of it in the coming years. >> Well, in many respects, it also, going back to this, we asked the question earlier about B to B, B to C, and the ability to represent that community. We're going to start seeing more of a household approach, as to just a consumer approach, and I think you just mentioned a great one. When we are successfully, or when we are willing to start capturing more data about our physical house or what's going on inside, so that we can make more informed decisions, with others, about how we want to do things, has an enormous impact on the quality of the experience, and where people are going to go to make their purchases. >> Alistair: Definitely, and I think that as we try and merge those experiences between B to B and B to C, what we know about someone as a consumer also directly affects their buying decisions, as a B to B employee buying for their brand. And that just increases the sheer volume of data that people are trying to manage and test and orchestrate. I think we're seeing a shift not only in people being prepared to surrender some degree of privacy for a increased experience, but we're also seeing people trusting in that virtual experience being a reality when they buy. So people have a much higher trust level in AR, if I visualize a couch and then buy it, I've got a degree of faith that when it turns up, it'll be like the one I looked at. And I think that increased trust is really making virtual experiences, digital commerce, so much easier. 
>> I think that's an interesting point, we had CMO of Time Warner on yesterday, Kristen O'Hara, and she was, we asked her, "Oh yeah, these transformations", big use case, she's on stage, but I asked her, "How was it like the old way? "What would you do before Oracle?", she goes, "Well, there was no old way", they never did. The point is, she said, the point was we became a direct to consumer company, so B to B and B to C are completely merging. So now the B to B's have to be a B to C, inherently because of the direct connect to the consumer. Not saying that their business model's changing, just that's the way the consumer is impacting. >> Peter: Or is it data connection to a consumer? >> A data connection, and where there's gesture data, or interaction data coming in, so this makes, the B to Bs now have to bolt on more stuff, like loyalty, you mentioned loyalty, things of that nature. >> Yeah, if you're a B to B company, you're selling to other businesses, but who are the people on the other business? There are people who shop every day in consumer applications, so their expectations are, "I'm going to have a great personalized experience, "I'm going to be able to leverage the same tools "that I see in my consumer shopping experiences "for my B to B experience, why would it be different?" So I think that's something that B to B is really learning from B to C as well. >> True, but although there seems to be something of a counter-veiling trend, but an increasing number of people are now working at home. So in many respects, where we're going to, is we're talking about experience, not just being online. One of my little heroes, when I was actually trying to do development, a million years ago, was Christopher Alexander. The Timeless Way of Building, which was one of the basic texts that people use for a lot of this customer experience stuff, and the observation that he made was, you talk about spaces, you talk about people moving into spaces to do things in context. And increasingly, the spaces that we have to worry about are not just what's on the screen, but the physical space that people move in, and operate in, an the idea is, I'm going somewhere to do something, and I'm bringing physical space with me. So all of these, the ability to represent space, time and interests and wants and needs, are going to have an enormous impact on experience. Wouldn't you agree? >> Massively, and I think the challenge using that same approach is that people are co-existing in multiple spaces concurrently. They no longer do one thing at the same time. >> Peter: They may be in the same physical place, but have two different contexts associated with it. Like working my home office, I'm both a father, as well as an employee. >> Alistair: Yes. >> And those two sometimes conflict. (Katrina laughs) >> Yeah, absolutely, and you're a consumer and an employee, and as a father, you're potentially affecting the decisions that the rest of your household is making, as well as the decisions that your business is making, all in slightly different ways. But those two experiences with the B to B and B to C, overlap one another. >> Peter: In fact, switching contexts from consumer to father is one of the primary reasons why I lose where I am in the journey. So these are very powerful, and the ability to have the data and then go to your customers, and say, "We will be able to provide that end to end for you, "so that you can provide a consistent "and coherent experience for your customers" is really crucial. 
Is that kind of where you're taking us? >> Yeah, I mean we've always commerce isn't kind of a standalone little thing, it really connects and glues together so many other types of experiences, so it connects to marketing, it connects to service, you need all of that, to be able to make the experience work. So we're really focused on making sure that it's easy to connect those applications together, that its easy to manage them behind the scenes, and that it appears seamless to the customer on the front end. >> One other thought that I have is, and in many respects, increasingly, because we're going to be able to represent more things digitally, which means we'll be able to move more stuff through commerce platforms. This is where the CX is going to meet the customer road, is in the commerce platforms. Do you guys agree with that? You're going to measure things all over the place, but I'm just curious-- >> John: It's their products, yeah. >> What do you think? Is it going to be increasingly the basis for honest CX? >> Well we're already seeing it become the basis, so I wouldn't say it's a future thing, I think it's been a reality for quite some time, where commerce is the hub that kind of connects, in retail, the store to marketing experiences. >> John: It's bonafide data is what it is too. >> Yeah. >> That's good data. >> Katrina: It holds so much product information, transaction information, customer information, and it just connects and leverages. I don't know if you would agree? >> Alistair: I would agree completely, and I think you look at the fact that most companies ultimately are selling a product, so that's commerce, and I think the transition is that rather than going into the commerce site or the commerce space, you see a lot of brands over the last 12 months have got rid of their store.brand.com thing and just merged their commerce experience into everything else, you're always selling. And we've customers deploy commerce without the cart, but as a product and communication marketing model, to get this tracking data moving around. >> We were talking about Jack earlier, yesterday, Berkowitz, who was talking data, we were talking about data, good data, dirty data, clean data, and data quality in general. >> Katrina: It's a tough problem. >> In context to value, and he said a quote, he said, "Good data makes things happen, "great data makes amazing things happen". And to your point, retail, commerce data, you can't, it's undisputed, it's a transaction. It's a capture in time, and that can be used in context to help other data sets become more robust. >> Well, in many respects it's the most important first person data that you have in your business. >> Katrina: Yeah, and I think from an Oracle perspective, what we're doing with the adaptive intelligent applications for commerce, and for the other applications as well, and particular for commerce is combing that first hand information you have about your products and your customers as an online business, but then the immense amount of data that the data cloud has behind the scenes that augments and allows you to automatically personalize, when a customer comes to your storefront, because they're coming already with all the context that they have elsewhere out in the world, and you can combine that with your own data, and I think really enhance the experience. >> John: Yeah it's funny, we were joking yesterday, Oracle went to bed a software company, woke up a data company. >> Katrina: Yeah (laughs). 
>> So the data cloud is pretty impressive, what's happened there and what that's doing. >> Katrina: It's amazing, it's a huge differentiator for us. >> Huge differentiator. Okay, final word, I'd like both you guys to just quickly comment to end this segment, awesome segment on commerce and data, which we love. But your reaction to the show, what's the bottom line, what's exciting you this week? Share with the folks, each of you, a quick soundbite of what's happening here and the impact people should know about. >> Sure, from a commerce perspective, this is the first year where we've got a 50/50 split in our customer base, so we're seeing a lot of our on-premise customers move to cloud, which is great, and we're really growing our commerce cloud customer base. I'm very excited about that. >> And you're trying to get 100% now, it's never going to be a hundred. >> Katrina: (laughs) Yeah, we need to work with customers and what's right for them, but yeah, it's very exciting right now. >> Alistair, your take? >> I think for me, it's just the sheer pace of innovation, we're seeing brands go from on-premise stores that would take 12, 15, 18 months to add new features, make changes, to small nimble brands rolling out incredible innovative features in 12, 18 week time frames, and we're seeing more people having more discussions around the art of the possible. >> John: All right, Katrina, Alistair, great comment, great insight, great conversation about data and commerce, of course cloud, it's the marketing clouds, all cloud world, it's commerce cloud, it's data cloud, it's just the cloud (laughs). I'm John Furrier, Peter Burris, more live coverage here from Las Vegas, Oracle Modern CX after this short break. (electronic music) >> Host: Robert--
ENTITIES
Entity | Category | Confidence |
---|---|---|
Kristen O'Hara | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Alistair | PERSON | 0.99+ |
Katrina Gosek | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Jess Cahill | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
12 | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Katrina | PERSON | 0.99+ |
Jack | PERSON | 0.99+ |
Alistair Galbraith | PERSON | 0.99+ |
iPad | COMMERCIAL_ITEM | 0.99+ |
100% | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Time Warner | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Christopher Alexander | PERSON | 0.99+ |
GE | ORGANIZATION | 0.99+ |
Mandalay Bay | LOCATION | 0.99+ |
Berkowitz | PERSON | 0.99+ |
15 | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
Robert | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
CX | ORGANIZATION | 0.99+ |
18 months | QUANTITY | 0.99+ |
store.brand.com | OTHER | 0.99+ |
One | QUANTITY | 0.99+ |
two days | QUANTITY | 0.99+ |
one place | QUANTITY | 0.98+ |
one thing | QUANTITY | 0.98+ |
50/50 | QUANTITY | 0.98+ |
Judy Jetson | PERSON | 0.98+ |
this year | DATE | 0.97+ |
this week | DATE | 0.97+ |
Day two | QUANTITY | 0.97+ |
one transaction | QUANTITY | 0.97+ |
both times | QUANTITY | 0.97+ |
a million years ago | DATE | 0.96+ |
each | QUANTITY | 0.96+ |
first year | QUANTITY | 0.96+ |
two experiences | QUANTITY | 0.96+ |
Roland Smart | PERSON | 0.95+ |
Alexa | TITLE | 0.94+ |