Breaking Analysis: Cloudflare’s Supercloud…What Multi Cloud Could Have Been


 

From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante.

Over the past decade, Cloudflare has built a global network that has the potential to become the fourth US-based hyperscale-class cloud. In our view, the company is building a durable revenue model with hooks into many important markets, from the more mature DDoS protection space to growth sectors such as zero trust, a serverless platform for application development, and an increasing number of services such as database, object storage, and other network services. In essence, Cloudflare can be thought of as a giant distributed supercomputer that can connect multiple clouds and act as a highly efficient scheduling engine at scale. Its disruptive DNA is increasingly attracting both novel startups and established global firms looking for reliable, secure, high-performance, low-latency, and more cost-effective alternatives to AWS and legacy infrastructure solutions.

Hello, and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis we initiate our deeper coverage of Cloudflare. We'll briefly explain our take on the company and its unique business model, then share some peer comparisons with both a financial snapshot and some fresh ETR survey data. Finally, we'll share some examples of how we think Cloudflare could be a disruptive force with a supercloud-like offering that, in many respects, is what multi-cloud should have been.

Cloudflare has been on our peripheral radar. Ben Thompson and many others have written about its disruptive business model, and recently a Breaking Analysis follower, who will remain anonymous, emailed some excellent insights on Cloudflare that prompted us to initiate more detailed coverage.

Let's first look at how Cloudflare sees the world in terms of its view of a modern stack. This graphic from Cloudflare shows a simple three-layer stack comprising storage and compute at the lower level, an application layer, and the network. The key message is basically that the big four hyperscalers have replaced the on-prem leaders, apps have been SaaS-ified, and that mess of network and security services in the upper left can now be handled entirely by Cloudflare, with the stack rented via opex rather than requiring heavy capex investment. Okay, it's a somewhat simplified view — those companies on the left are not standing still, and we're going to come back to that — but Cloudflare has done something quite amazing.

It's been a while since we've invoked Russ Hanneman of Silicon Valley fame on Breaking Analysis, but remember when he was in one of his first meetings, if not the first, with Richard Hendricks, the whiz kid on the show? Hanneman said something like, "If you had a blank check and you could build anything in the world, what would it be?" And Richard's answer was basically a new internet. That led to Pied Piper, a peer-to-peer network powered by decentralized devices and iPhones, with an amazing compression algorithm that enabled high-speed data movement and low-latency access across the network.

Well, in a way, that's what Cloudflare has built. Its founding premise reimagined how the internet should be built, with a consistent set of server infrastructure where each server has lots of cores, lots of DRAM, lots of cache, fast SSDs, and plenty of network connectivity and bandwidth. The picture makes it look like a bunch of dots and points of presence on a map, which of course it is, but there's a software layer that enables Cloudflare to efficiently allocate resources across this global network. The company claims its network utilization is in the 70 percent range, and it has used its build-out to enter the technology space from the bottom up — offering, for example, free tiers of services to users, with multiple entry points on different services, and then selling more services to a customer over time, which of course drives up its average contract value and its lifetime value. At the same time, the company continues to innovate and add new services at a very rapid, cloud-like pace.

You can think of Cloudflare's initial market entry as a lightweight Cisco as a service. The company's CFO actually uses that term, which really must tick off Cisco, which of course has a massive portfolio and a dominant market position. Because it owns the network, Cloudflare's marginal cost of adding new services is very small and trends toward zero, so it gets software-like economics at scale despite all the infrastructure it's building out, and it doesn't have to constantly face an increasing infrastructure tax. Snowflake, for example, doesn't own its own network infrastructure; as it grows, it relies on AWS, Azure, and GCP. While that gives Snowflake obvious advantages — it doesn't have to build out its own network — it also requires the company to constantly pay that tax and negotiate with the hyperscalers for better rental rates.

As previously mentioned, Cloudflare claims its utilization is very high, probably higher than the hyperscalers, who can spin up servers and charge for underutilized customer capacity. Cloudflare also has excellent network traffic data that it can use to its advantage with its analytics.

The company has been rapidly innovating beyond its original core business, adding, as I said before, serverless and zero trust offerings. It has announced a database it calls D1 — that's pretty creative — and an object store called R2, which is S3 minus one, both alphabetically and numerically, i.e., minus the egress costs. No egress fees is its big claim to fame, and the company has made a lot of marketing noise about that. And of course they've promised a D2 database, which of course gives you R2-D2. They've also launched a developer platform.

Cloudflare can be thought of, first of all, as a modern CDN. It has a simpler security model — that's how it competes, for example, with Zscaler — and it also brings VPN, SD-WAN, and DDoS protection services that are part of the network and less expensive than AWS. That's the essence of its go-to-market, messaging, and value proposition, and it's positioning itself as a neutral network that can connect across multiple clouds.

To be clear, unlike AWS in particular, Cloudflare is not well suited to lift and shift your traditional apps — you're not going to run SAP HANA on Cloudflare's platform. Rather, the company started by making websites more secure and faster, and it flew under the radar much in the same way that Clayton Christensen described the disruption in the steel industry, where new entrants picked off the low-margin rebar business and then moved up the stack. We've used that analogy in the semiconductor business with Arm and even China. Cloudflare is running a similar playbook in the cloud and the network.

In the early part of the last decade, as AWS's ascendancy was becoming clear, many of us started thinking about how and where firms could compete and add value. You could take an industry focus; you could do things like data sharing, which Snowflake eventually popularized; you could build on top of clouds, as Snowflake and others are doing; you could build private clouds and connect to hybrid clouds. But not many had the wherewithal, or the chutzpah, to build out a global network that could serve as a connecting platform for cloud services. Cloudflare has traction in the market, and as it adds new services like zero trust, object storage, and database, its TAM continues to grow.

Here's a quick snapshot of Cloudflare's financials relative to Zscaler, which is both a competitor and a customer; Fastly, a smaller CDN; and Akamai, a more mature CDN-slash-edge platform. Cloudflare and Fastly both reported earnings this past week. Cloudflare surpassed a billion-dollar revenue run rate, but it gave tepid guidance and the stock got absolutely crushed today, which is Friday. But the company's business model is sound: it's growing close to 50 percent annually, it has SaaS-like gross margins in the mid-to-high 70s, it's got a very strong balance sheet, and it trades at a 13x revenue run-rate multiple. In fact, its financial snapshot is quite close to that of Zscaler, which is interesting, because Zscaler doesn't own its own network — it's a pure-play software company. Fastly is much smaller and growing more slowly than Cloudflare, hence its lower multiple, while Akamai, as you can see, is a more mature company but has a nice business.

On its earnings call this week, Cloudflare announced that its head of sales was stepping down and that the company has brought in a new leader to take the firm to five billion dollars in sales. I actually think its current sales leader felt like, "Hey, my work is done here — bring on somebody else to take it to the next level." The company is promising to be free-cash-flow positive by the end of the year and is working hard toward its long-term financial model, with gross margin targets in the mid-70s and 20 percent non-GAAP operating margins. Very solid — not completely off the charts, but very good. To our knowledge it has not committed to a long-term growth rate, but at that sort of operating profit level you'd like to see growth consistently at least in the 20 percent range, so it could at least be a "rule of 40" company, or perhaps even higher if it's going to continue to command a premium valuation.

Okay, let's take a look at the ETR data. ETR is very positive on Cloudflare and has recently published a report on the company. Like many companies, Cloudflare is seeing an across-the-board slowdown in spending velocity. We've reported on this quite extensively, using the ETR data to quantify the degree of that slowdown. In the data set we see many customers shifting their spend to flat — plus or minus single digits, say two to three percent, or even zero — and in the market we're seeing a shift from paid to free tiers. Remember, Cloudflare offers a lot of free services, so you're seeing some customers turn off the paid tier for a while and go with the freebie. But we're also seeing some larger customers in the data — the Fortune 1000 specifically — actually spending more, which was confirmed on Cloudflare's earnings call: the company said everything across the board was softer, but it also indicated that some of its larger customers are growing faster than its smaller customers, and that churn is very, very low.

Here's a two-dimensional graphic — we like to share this view a lot. It's got Net Score, or spending momentum, on the vertical axis, and Overlap, or pervasiveness in the survey, on the horizontal axis. This cut isolates three segments in ETR's taxonomy that Cloudflare plays in: cloud, security, and networking. The table inserted in the upper left shows the raw data that informs the position of each company's dot, with Net Score and the Ns listed in the rightmost columns, and the red dotted line indicates a highly elevated Net Score. Finally, in the bottom right we've posted the color breakdown of Cloudflare's Net Score: the lime green is new adoptions; the forest green is "we're spending more," meaning up six percent or more; the gray is flat, plus or minus five percent — and you can see that gray area is the majority of customers; the pink is "we're spending less," in other words down six percent or worse; and the bright red is churn, which at one percent is minimal — a very good indicator for Cloudflare. To get ETR's proprietary Net Score — and they've done this for many, many quarters, so we have that time-series data — you subtract the reds from the greens. Cloudflare is at 39, just under that magic red line.

Note that Cloudflare and Zscaler are right on top of each other. Cisco has a dominant position on the x-axis that Cloudflare and others are eyeing. AWS is also dominant, but note that its Net Score is well above the red dotted line — it's incredible. Palo Alto Networks is also very impressive: it's got both a strong presence on the horizontal axis and a Net Score that's pretty comparable to Cloudflare and Zscaler, two much smaller companies. Akamai is actually well positioned for a reasonably mature company, and you can see Fastly, AT&T, Juniper, and F5 have far less spending momentum on their platforms than Cloudflare does, though at least they're in positive Net Score territory. What's going to be really interesting is whether Cloudflare can continue to hold this momentum — or even accelerate it, as we've seen with some other clouds — as it scales its network and keeps adding more and more services.

Cloudflare has a couple of potential strategic vectors that we want to talk about, and it'll be interesting to see how they play out. One path is to compete more directly as a cloud player offering secure access edge services, like firewall as a service; zero trust services, like data loss prevention, email security from its Area 1 acquisition, and other zero trust offerings; and network services like routing, network connectivity, and load balancing — this is the sweet spot of the company — among many others. Then add in things like object storage and database services and more edge services; in the future it might offer telecom-like services, such as network switching for offices. So that's one route, and Cloudflare is clearly on that path: more services, more cohorts, innovating and growing the company, bringing in more revenue, increasing ACVs and lifetime value, and keeping retention high.

The other vector is what we're just going to refer to as supercloud — Cloudflare as an enabler of cross-cloud infrastructure. This is new value relative to the former vector. The title of this episode is "What Multi-Cloud Should Have Been," meaning Cloudflare could be the control plane providing a consistent experience across clouds, one that is fast and secure at global scale.

To give you insight into this, let's look at some comments made by Matthew Prince, the CEO and co-founder of Cloudflare. Cloudflare put its R2 object store into public beta this past May, and I believe it's storing around a petabyte of data today — I think that's what they said on their call. Here's what Prince said about that, quote: "We are talking to very large companies about moving more and more of their stored objects to where we can store that with R2. And one of the benefits is not only can we help them save money on the egress fees, but it allows them to then use those objects across any of the different cloud platforms they're using. So by being that neutral third party, we can let people adopt a little bit of Amazon, a little bit of Microsoft, a little bit of Google, a little bit of SaaS vendors, and share that data across all those different places."

What's interesting about this in the supercloud context is that it suggests customers could take the best of each cloud to power their digital businesses: I might like AWS Redshift for my analytic database, or I love Google's machine learning and Microsoft's collaboration tools, and I'd like a consistent way to connect those resources. But of course Prince is strongly hinting — and has made many public statements — that AWS's egress fees are a blocker to that vision.

At a recent investor event, Prince added some color to this concept. He talked about one metric of success being how much R2 capacity was consumed and how much they sold, but perhaps a more interesting benchmark is highlighted by this statement: a completely different measure of success for R2, he said, is "Andy Jassy says, 'I'm sick and tired of these guys [meaning Cloudflare] taking our objects away — we're dropping our egress fees to zero.' I would be so excited, because we've then unlocked the ability to be the network that interconnects the cloud together." Now of course it would be Adam Selipsky saying that today — or maybe Andy Jassy, still watching over AWS — and we think it's highly unlikely to happen anytime soon. But in theory it gets us closer to the supercloud value proposition.

To further drive that point home — and we're paraphrasing a bit here — Prince said something to the effect of: "Customers need one consistent control plane across clouds, and we are the neutral network that can be consistent no matter which cloud you're using." Interesting, right? Prince sees a world that's similar to, if not nearly identical to, the concepts the Cube community has been putting forth around supercloud.

Now, this vision is a ways off — let's be real. Prince even suggested that his initial vision of an application running across multiple clouds — that's like supercloud nirvana — isn't what customers are doing today; that's really hard to do, and perhaps it will never happen. But there's little doubt that Cloudflare could be, and is, positioning itself as that cross-cloud control plane. It has the network economics and the business-model levers to pull, and it's got an edge up on the competition at the edge, pun intended. Cloudflare is the definition of edge, and its distributed, decentralized platform is much better suited for edge workloads than the giant data centers set up to try to handle that today. Yes, the hyperscalers are building out their edge networks — things like Outposts, local zones, and so on.

Cloudflare is increasingly competitive with the hyperscalers and the traditional stacks it depositioned on the earlier slide we showed. But the likes of AWS, Dell, HPE, Cisco, and the others are not sitting on their hands. They have huge customer install bases, and they are definitely a moving target: they're investing, and they're building out their own superclouds with really robust stacks as well. Let's face it — it's going to take a decade or more for enterprises to adopt a new developer platform or a new database cloud. Plus, Cloudflare's capabilities, compared to the incumbent stacks and the hyperscalers, are much less robust in these areas. Even in storage, despite all the great conversation and buzz that R2 generated, take a specialist like Wasabi: they're more mature, they're more functional, and they're way cheaper, even than Cloudflare. So it's not a fait accompli that Cloudflare is going to win in those markets.

But we love the disruption, and if Cloudflare wants to be the fourth US-based hyperscaler — or join the big four as the fifth, if we put Alibaba in the mix — it's got a lot of work to do in the ecosystem. By its own admission it has as much to learn, and that, by the way, is part of the value it sees in its Area 1 acquisition, the email security company it bought. But even in that case, much of the emphasis has been on reseller channels. Compare that to the AWS ecosystem, which is not only a channel play but as much an innovation flywheel filling gaps, where companies like Snowflake thrive side by side with AWS's own data stores. As well, all the on-prem stacks are building hybrid connections to AWS and other clouds as a means of providing consistent experiences across clouds. Indeed, many of them see what they call cross-cloud services — or what we call supercloud, hypercloud, mega-cloud, whatever you want to call it; we use supercloud — and they are really eyeing that opportunity. Very few companies, frankly, are not going after that space.

But we'll close with this: Cloudflare is one of those companies that's in a position to wake up each morning and ask, "Who can we disrupt today?" And very few companies are in a position to disrupt the hyperscalers to the degree that Cloudflare is. That, my friends, is going to be fascinating to watch unfold.

All right, let's call it a wrap. I want to thank Alex Myerson, who's on production and manages the podcast, as well as Ken Schiffman, our newest addition to the Boston studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hof is our editor-in-chief over at SiliconANGLE — thank you all. Remember, all these episodes are available as podcasts wherever you listen; just search "Breaking Analysis podcast." I publish each week on wikibon.com and siliconangle.com. You can email me at david.vellante@siliconangle.com, DM me @dvellante, or comment on my LinkedIn posts. And please do check out etr.ai — they've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks very much for watching, and we'll see you next time on Breaking Analysis.
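The two bits of arithmetic in the episode — ETR's Net Score (greens minus reds) and the "rule of 40" (growth plus operating margin) — can be sketched in a few lines. Note the survey-breakdown percentages below are invented for illustration; only the resulting Net Score of 39 and the one percent churn correspond to figures mentioned in the episode.

```python
def net_score(new, more, flat, less, churn):
    """ETR Net Score as described above: subtract the 'reds' (spending less +
    churn) from the 'greens' (new adoptions + spending more); flat is ignored."""
    # The five survey buckets should account for all respondents.
    assert abs(new + more + flat + less + churn - 100) < 1e-9, "shares must total 100%"
    return (new + more) - (less + churn)

def rule_of_40(growth_pct, op_margin_pct):
    """A company 'makes' the rule of 40 if revenue growth plus operating
    margin is at least 40 percent."""
    return growth_pct + op_margin_pct >= 40

# Hypothetical breakdown: 15% new, 30% spending more, 49% flat, 5% less, 1% churn
print(net_score(15, 30, 49, 5, 1))  # -> 39, just under the "magic" red line

# ~50% growth plus the 20% non-GAAP operating margin target comfortably clears 40
print(rule_of_40(50, 20))  # -> True
```

This is just the back-of-the-envelope math the analysis relies on, not ETR's actual methodology code.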

Published Date : Nov 5 2022



Horizon3.ai Signal | Horizon3.ai Partner Program Expands Internationally


 

hello I'm John Furrier with thecube and welcome to this special presentation of the cube and Horizon 3.ai they're announcing a global partner first approach expanding their successful pen testing product Net Zero you're going to hear from leading experts in their staff their CEO positioning themselves for a successful Channel distribution expansion internationally in Europe Middle East Africa and Asia Pacific in this Cube special presentation you'll hear about the expansion the expanse partner program giving Partners a unique opportunity to offer Net Zero to their customers Innovation and Pen testing is going International with Horizon 3.ai enjoy the program [Music] welcome back everyone to the cube and Horizon 3.ai special presentation I'm John Furrier host of thecube we're here with Jennifer Lee head of Channel sales at Horizon 3.ai Jennifer welcome to the cube thanks for coming on great well thank you for having me so big news around Horizon 3.aa driving Channel first commitment you guys are expanding the channel partner program to include all kinds of new rewards incentives training programs help educate you know Partners really drive more recurring Revenue certainly cloud and Cloud scale has done that you got a great product that fits into that kind of Channel model great Services you can wrap around it good stuff so let's get into it what are you guys doing what are what are you guys doing with this news why is this so important yeah for sure so um yeah we like you said we recently expanded our Channel partner program um the driving force behind it was really just um to align our like you said our Channel first commitment um and creating awareness around the importance of our partner ecosystems um so that's it's really how we go to market is is through the channel and a great International Focus I've talked with the CEO so you know about the solution and he broke down all the action on why it's important on the product side but why now on the go to market 
change what's the what's the why behind this big this news on the channel yeah for sure so um we are doing this now really to align our business strategy which is built on the concept of enabling our partners to create a high value high margin business on top of our platform and so um we offer a solution called node zero it provides autonomous pen testing as a service and it allows organizations to continuously verify their security posture um so we our company vision we have this tagline that states that our pen testing enables organizations to see themselves Through The Eyes of an attacker and um we use the like the attacker's perspective to identify exploitable weaknesses and vulnerabilities so we created this partner program from a perspective of the partner so the partner's perspective and we've built It Through The Eyes of our partner right so we're prioritizing really what the partner is looking for and uh will ensure like Mutual success for us yeah the partners always want to get in front of the customers and bring new stuff to them pen tests have traditionally been really expensive uh and so bringing it down in one to a service level that's one affordable and has flexibility to it allows a lot of capability so I imagine people getting excited by it so I have to ask you about the program What specifically are you guys doing can you share any details around what it means for the partners what they get what's in it for them can you just break down some of the mechanics and mechanisms or or details yeah yep um you know we're really looking to create business alignment um and like I said establish Mutual success with our partners so we've got two um two key elements that we were really focused on um that we bring to the partners so the opportunity the profit margin expansion is one of them and um a way for our partners to really differentiate themselves and stay relevant in the market so um we've restructured our discount model really um you know highlighting 
profitability and maximizing profitability and uh this includes our deal registration we've we've created deal registration program we've increased discount for partners who take part in our partner certification uh trainings and we've we have some other partner incentives uh that we we've created that that's going to help out there we've we put this all so we've recently Gone live with our partner portal um it's a Consolidated experience for our partners where they can access our our sales tools and we really view our partners as an extension of our sales and Technical teams and so we've extended all of our our training material that we use internally we've made it available to our partners through our partner portal um we've um I'm trying I'm thinking now back what else is in that partner portal here we've got our partner certification information so all the content that's delivered during that training can be found in the portal we've got deal registration uh um co-branded marketing materials pipeline management and so um this this portal gives our partners a One-Stop place to to go to find all that information um and then just really quickly on the second part of that that I mentioned is our technology really is um really disruptive to the market so you know like you said autonomous pen testing it's um it's still it's well it's still still relatively new topic uh for security practitioners and um it's proven to be really disruptive so um that on top of um just well recently we found an article that um that mentioned by markets and markets that reports that the global pen testing markets really expanding and so it's expected to grow to like 2.7 billion um by 2027. 
so the Market's there right the Market's expanding it's growing and so for our partners it's just really allows them to grow their revenue um across their customer base expand their customer base and offering this High profit margin while you know getting in early to Market on this just disruptive technology big Market a lot of opportunities to make some money people love to put more margin on on those deals especially when you can bring a great solution that everyone knows is hard to do so I think that's going to provide a lot of value is there is there a type of partner that you guys see emerging or you aligning with you mentioned the alignment with the partners I can see how that the training and the incentives are all there sounds like it's all going well is there a type of partner that's resonating the most or is there categories of partners that can take advantage of this yeah absolutely so we work with all different kinds of Partners we work with our traditional resale Partners um we've worked we're working with systems integrators we have a really strong MSP mssp program um we've got Consulting partners and the Consulting Partners especially with the ones that offer pen test services so we they use us as a as we act as a force multiplier just really offering them profit margin expansion um opportunity there we've got some technology partner partners that we really work with for co-cell opportunities and then we've got our Cloud Partners um you'd mentioned that earlier and so we are in AWS Marketplace so our ccpo partners we're part of the ISP accelerate program um so we we're doing a lot there with our Cloud partners and um of course we uh we go to market with uh distribution Partners as well gotta love the opportunity for more margin expansion every kind of partner wants to put more gross profit on their deals is there a certification involved I have to ask is there like do you get do people get certified or is it just you get trained is it self-paced 
training is it in person how are you guys doing the whole training certification thing because is that is that a requirement yeah absolutely so we do offer a certification program and um it's been very popular this includes a a seller's portion and an operator portion and and so um this is at no cost to our partners and um we operate both virtually it's it's law it's virtually but live it's not self-paced and we also have in person um you know sessions as well and we also can customize these to any partners that have a large group of people and we can just we can do one in person or virtual just specifically for that partner well any kind of incentive opportunities and marketing opportunities everyone loves to get the uh get the deals just kind of rolling in leads from what we can see if our early reporting this looks like a hot product price wise service level wise what incentive do you guys thinking about and and Joint marketing you mentioned co-sell earlier in pipeline so I was kind of kind of honing in on that piece sure and yes and then to follow along with our partner certification program we do incentivize our partners there if they have a certain number certified their discount increases so that's part of it we have our deal registration program that increases discount as well um and then we do have some um some partner incentives that are wrapped around meeting setting and um moving moving opportunities along to uh proof of value gotta love the education driving value I have to ask you so you've been around the industry you've seen the channel relationships out there you're seeing companies old school new school you know uh Horizon 3.ai is kind of like that new school very cloud specific a lot of Leverage with we mentioned AWS and all the clouds um why is the company so hot right now why did you join them and what's why are people attracted to this company what's the what's the attraction what's the vibe what do you what do you see and what what do you use 
what did you see in this company? Well, like I said, it's very disruptive, it's really in high demand right now, and it's new to market—a newer technology. We can collaborate with a manual pen tester, we can allow our customers to run their pen tests with no specialty teams, and, like I said, our partners can actually build profitable businesses: they can use our product to increase their services revenue and build their business model around our services. What's interesting about the pen test thing is that it's very expensive and time-consuming, and the people who do them are very talented people who could be working on really bigger things for customers. So bringing this into the channel—if you look at the price delta between a pen test and what you guys are offering, that's a huge margin gap between the street price of today's pen test and what you offer. When you show people that, do they say it's too good to be true? What are some of the things people say when you show them that? Do they scratch their heads, like, come on, what's the catch here? Right—so the cost savings is huge for us. And then also, like I said, we work as a force multiplier with a pen testing company that offers the services: they can do their annual manual pen tests that may be required around compliance regulations, and then we can act as the continuous verification of their security, which they can run weekly. So it's just an addition to what they're offering already, and an expansion. So Jennifer, thanks for coming on theCUBE—really appreciate you coming on and sharing the insights on the channel. What's next? What can we expect from the
channel group? What are you thinking? What's going on? Right, so we're really looking to expand our channel footprint, very strategically. We've got some big plans for Horizon 3.ai. Awesome—well, thanks for coming on, really appreciate it. You're watching theCUBE, the leader in high-tech enterprise coverage. [Music] Hello and welcome to theCUBE's special presentation with Horizon 3.ai, with Rainer Richter, vice president of EMEA—Europe, Middle East, and Africa—and Asia-Pacific, APAC, for Horizon 3. Welcome to this special CUBE presentation. Thanks for joining us. Thank you for the invitation. So, Horizon 3.ai driving global expansion—big international news with a partner-first approach. You guys are expanding internationally, so let's get into it. You're driving this new expanded partner program to new heights. Tell us about it: what are you seeing in the momentum, why the expansion, what's all the news about? Well, I would say internationally we have a similar situation to the US. There is a global shortage of well-educated penetration testers on the one hand, and on the other hand we have a rising demand for network and infrastructure security, and with our approach of autonomous penetration testing I believe we are totally on top of the game—especially as we are now starting with an international instance. That means, for example, if a customer in Europe is using our service Node Zero, he will be connected to a Node Zero instance located inside the European Union, and therefore he doesn't have to worry about the conflict between the European GDPR regulations and the US CLOUD Act. So there we have a really good package for our partners, so they can provide differentiators to their customers. You know, we've had great conversations here on theCUBE with the CEO and founder of the company around the leverage of the cloud and how successful that's
been for the company, and honestly I can just connect the dots here, but I'd like you to weigh in more on how that translates into the go-to-market, because you've got great cloud scale with the security product, and you're having success with great leverage there—I've seen a lot of success there. What's the momentum on the channel partner program internationally? Why is it so important to you? Is it just the regional segmentation, is it the economics—why the momentum? Well, there are multiple issues. First of all, there is rising demand in penetration testing, and don't forget that internationally we have a much higher number—a higher percentage—of SMB and mid-market customers. These customers typically—most of them—didn't even have a pen test done once a year, because for them pen testing was just too expensive. Now, with our offering together with our partners, we can provide different ways customers can get autonomous pen testing done more than once a year, at even lower cost than they had with a traditional manual pen test. And that is because we have our Consulting Plus package, which is typically for pen testers: they can go out and do much faster, much quicker pen tests at many customers, one after the other, so they can do more pen tests at a lower, more attractive price. On the other side, there are others—even the same ones—who are providing Node Zero as an MSSP service, so they can go after SMB customers saying, okay, you only have a couple of hundred IP addresses, no worries, we have the perfect package for you. And then you have, let's say, the mid-market—thousands of employees and more—and they might even have an annual subscription, very traditional. But for all of them it's the same: the customer or the service provider doesn't need a piece of hardware; they only need to install a small Docker container, and that's it. And that makes it so smooth to
go in and say, okay, Mr. Customer, we just put this virtual attacker into your network, and that's it—all the rest is done, and within three clicks they can act like a pen tester with 20 years of experience. And that's going to be very channel-friendly and partner-friendly, I can almost imagine. So I have to ask you—and thank you for calling out that breakdown and segmentation, that was very helpful for me to understand—but I want to follow up, if you don't mind: what type of partners are you seeing the most traction with, and why? Well, I would say at the beginning you typically have the innovators, the early adopters—typically boutique-sized partners. They start because they are always looking for innovation, so those are the ones who start at the beginning. We have a wide range of partners, mostly even managed by the owner of the company, so they immediately understand: okay, there is the value, and they can change their offering. They're changing their offering in terms of penetration testing because they can do more pen tests, and they can then add other ones. Or we have those who offered pen test services but did not have their own pen testers, so they had to go out on the open market and source pen testing experts to get the pen test done at a particular customer. Now, with Node Zero, they're totally independent: they can go out and say, okay, Mr. Customer, here's the service, that's it, we turn it on, and within an hour you're up and running. Totally—and those pen tests are usually expensive and hard to do; now it's right in line with the sales delivery. Pretty interesting for a partner. Absolutely. But on the other hand, we are not killing the pen testers' business. With Node Zero we're providing something like the foundational work—the ongoing penetration testing of the infrastructure, the operating
system—and the pen testers themselves can concentrate in the future on things like application pen testing, for example—those services we're not touching. So we're not killing the pen tester market; we're just taking away the ongoing, let's say, foundational work—call it that way. Yeah, and that was one of my questions I was going to ask. There's a lot of interest in this autonomous pen testing—one, because it's expensive to do, and because the skills required are in demand and expensive—so you kind of cover the entry level and the blockers that are there. I've seen people say to me, this pen test becomes a blocker for getting things done. So there's been a lot of interest in the autonomous pen testing and for organizations to have that posture—and it's an ongoing issue, too, because now you have that continuous element. So can you explain that particular benefit for an organization of continuously verifying its security posture? Yes, certainly. Typically, you have to do your patches, you have to bring in new versions of operating systems, of different services, of various components, and those are always bringing new vulnerabilities. The difference here is that with Node Zero we are telling the customer, or the partner package, which are the executable vulnerabilities—because previously they might have had a vulnerability scanner, and this vulnerability scanner brought up hundreds or even thousands of CVEs but didn't say anything about which of them are really executable. Then you need an expert digging into one CVE after the other, finding out whether it is really executable, yes or no—and that is where you need highly paid experts, of whom we have a shortage. So with Node Zero, now we can say: okay, we tell you exactly which ones you should work on, because those are the ones that are executable. We rank them according to the risk level—how
easily they can be used. And then the good thing is, in contrast to the traditional penetration test, they don't have to wait a year for the next pen test to find out whether the fix was effective; they just run the next scan and see: yes, closed, the vulnerability is gone. The time is really valuable, and if you're doing any DevOps or cloud-native work, you're always pushing new things, so ongoing pen testing is actually a benefit just in general, as a kind of hygiene. Really interesting solution—and really bringing that global scale is going to be a new coverage area for us, for sure. I have to ask you, if you don't mind answering: what particular region are you focused on, or plan to target, for this next phase of growth? Well, at this moment we are concentrating on the countries inside the European Union plus the United Kingdom. Logically, I'm based in the Frankfurt area, which means we cover more or less the countries just around—the DACH region of Germany, Switzerland, and Austria, plus the Netherlands—but we also already have partners in the Nordics, in Finland and Sweden, and we have partners in the UK, and it's rapidly growing. For example, we are now starting some activities in Singapore and also in the Middle East area. Very importantly, depending on the way business is done, we currently try to concentrate on those countries where we can have at least English as an accepted business language. Great. Is there any particular region you're having the most success with right now? It sounds like the European Union is kind of the first wave. Yes, that's definitely the first wave, and now we're also getting the European instance up and running. It's clearly our commitment to the market, saying: okay, we know there are certain dedicated requirements, and we take care of this,
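The triage idea described here—surface only the executable findings from a noisy CVE list and rank them by how easily they can be used—can be sketched in a few lines. Everything below is illustrative: the field names, CVE identifiers, ease categories, and weights are hypothetical placeholders, not Node Zero's actual data model or scoring.

```python
# Hypothetical scanner output: many CVEs, but only some are actually
# exploitable ("executable") in this specific environment.
EASE_WEIGHT = {"trivial": 3.0, "moderate": 2.0, "hard": 1.0}  # made-up weights

def triage(findings):
    """Keep only exploitable findings, ranked by CVSS score scaled by
    how easily the weakness can be used (highest effective risk first)."""
    exploitable = [f for f in findings if f["exploitable"]]
    return sorted(exploitable,
                  key=lambda f: f["cvss"] * EASE_WEIGHT[f["ease"]],
                  reverse=True)

findings = [
    {"cve": "CVE-2021-0001", "cvss": 9.8, "exploitable": False, "ease": "hard"},
    {"cve": "CVE-2021-0002", "cvss": 6.5, "exploitable": True,  "ease": "trivial"},
    {"cve": "CVE-2021-0003", "cvss": 8.1, "exploitable": True,  "ease": "moderate"},
]

ranked = triage(findings)
# The 9.8-scored CVE drops out entirely: it is not exploitable here,
# while a "mere" 6.5 that is trivially usable rises to the top.
print([f["cve"] for f in ranked])
```

Re-running the same triage after a patch is the "next scan" the speaker mentions: a fixed finding simply stops appearing in the ranked list, so fix effectiveness is verified immediately rather than at next year's manual test.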
and we're just launching it. We're building up this instance in the AWS service center here in Frankfurt, also with some dedicated hardware in a data center in Frankfurt, where—with DE-CIX, by the way—we have the highest internet interconnection bandwidth on the planet, so we have very short latency to wherever you are on the globe. That's a great call-out and benefit, too. I was going to ask: what are some of the benefits your partners are seeing in EMEA and Asia-Pacific? Well, I would say the benefit is clearly that they can talk with customers and offer them penetration testing that they didn't even think about before, because penetration testing done the traditional way was simply too expensive for them, too complex; the preparation time was too long, and they didn't even have the capacity to support an external pen tester. Now, with this service, they can go in and say: Mr. Customer, we can do a test with you in a couple of minutes—once we have installed the Docker container, within 10 minutes we have the pen test started. That's it, and then we just wait. And I would say we are seeing so many aha moments now, because on the partner side, when they see Node Zero working for the first time, it's like: wow, that is great. Then they go out to customers and show it—typically, at the beginning, mostly to the friendly customers—and it's like: wow, that's great, I need that. And I would say the feedback from the partners is that this is a service where I do not have to evangelize the customer. Everybody understands penetration testing; I don't have to describe what it is—the customer understands immediately: yes, penetration testing, good, I know I should do it, but it's too complex, too expensive. Now, with Node Zero—for example as an MSSP service provided by one of our partners—it's
getting easy. Yeah, it's a great benefit there. I've got to say, I'm a huge fan of what you guys are doing. I like this continuous automation—that's a major benefit to anyone doing DevOps or any kind of modern application development; this is just a godsend for them. This is really good. And like you said, the pen testers who were doing it were kind of coming down from their expertise to do things that should have been automated; now they get to focus on the bigger-ticket items. That's a really big point. So we free them—we free the pen testers for the higher-level elements of the penetration testing segment, and that is typically the application testing, which is currently far away from being automated. Yeah, and that's where the most critical workloads are, and I think this is the nice balance. Congratulations on the international expansion of the program, and thanks for coming on this special presentation—I really appreciate it. Thank you, you're welcome. Okay, this is theCUBE special presentation. Check out pen test automation and the international expansion at Horizon3.ai—a really innovative solution. In our next segment, Chris Hill, sector head for strategic accounts, will discuss the power of Horizon 3.ai and Splunk in action. You're watching theCUBE, the leader in high-tech enterprise coverage. [Music] Welcome back, everyone, to theCUBE and Horizon 3.ai special presentation. I'm John Furrier, host of theCUBE. We're with Chris Hill, sector head for strategic accounts and federal at Horizon 3.ai, a great innovative company. Chris, great to see you—thanks for coming on theCUBE. Yeah, like I said, great to meet you, John—long-time listener, first-time caller, so excited to be here with you guys. Yeah, we were talking before camera: you had Splunk back in 2013, and I think 2012 was our first splunk.com, and boy, talk about being in the right place at the right time. Now we're at another inflection point, and Splunk continues to be
relevant, continuing to have that data driving security and that interplay. And your CEO, a former CTO at Splunk as well, has been on theCUBE before at Horizon—really innovative product you guys have. But you know, don't wait for a breach to find out if you're logging the right data—that's the topic of this thread. Splunk is very much part of this new international expansion announcement with you guys. Tell us: what are some of the challenges you see where this is relevant for Splunk and Horizon 3.ai as you expand Node Zero internationally? Yeah, well, so my role within Splunk was working with our most strategic accounts. I look back to 2013 and think about the sales process—like working with our small customers—and it was still very siloed back then: I was selling to an IT team that was using this for IT operations, and we generally would even say, yeah, although we do security, we weren't really designed for it, we're a log-management tool. And I'm sure you remember back then, John, we were sort of stepping into the security space—and in the public-sector domain I was in, security was 70% of what we did. When I look back at the transformation I was witnessing in that digital transformation—when I look at 2019 to today—you look at how the IT teams and the security teams have been forced to break down those barriers where they used to be siloed away and would not communicate. The security guys would be like, oh, this is my box, IT, you're not allowed in. Today you can't get away with that, and I think the value that we bring—and of course Splunk has been a huge leader in that space and continues to innovate across the board—but I think what we're seeing in the space, and I was talking with Patrick Coughlin, the SVP of security markets, about this, is that what we've
been able to do with Splunk is build a purpose-built solution that allows Splunk to eat more data. Splunk itself, as you know, is an ingest engine, right? The great reason people bought it was you could build these really fast dashboards and grab intelligence out of it—but without data it doesn't do anything, right? So how do you drive and bring more data in, and, most importantly from a customer perspective, how do you bring the right data in? And so if you think about what Node Zero and what we're doing at Horizon 3 is: sure, we do pen testing, but because we're an autonomous pen testing tool, we do it continuously. So this whole thought of—oh crud, my customers: oh yeah, we've got a pen test coming up, it's going to be six weeks, and everyone's going to sit on their hands, call me back in two months, Chris, we'll talk to you then—right, not a real efficient way to test your environment. And shoot, we saw that with Uber this week, right? That's a case where we could have helped. Oh, right—could you explain the Uber thing, because it was a contractor? Just give a quick highlight of what happened, so you can connect the dots. Yeah, no problem. So it was, I think, one of those games where they would try and test an environment, and what the pen tester did was keep calling the MFA guys, saying, I need to reset my password, we need to reset my password—and eventually the customer-service guy said, okay, I'm resetting it. Once he had reset and bypassed the multi-factor authentication, he was then able to get in and gain access—I think not the whole domain, but a partial part of that network. He then pivoted over to what I would assume was a VMware or some virtual machine that had notes with all of the credentials for logging into various domains, and so within minutes they had access. And that's the
sort of stuff that we do. You know, a lot of these tools—you think about the cacophony of tools out there in a zero-trust architecture, right? I'm going to get a Zscaler, I'm going to have Okta, I have a Splunk, I've got SolarWinds in there—I don't mean to name names—we have CrowdStrike or SentinelOne in there. It's just a cacophony of things that don't work together; they weren't designed to work together. And so we have seen so many times in our business, through our customer support and just working with customers when we do their pen tests, that there will be 5,000 servers out there, three are misconfigured, and those three misconfigurations will create the open door—because remember, the hacker only needs to be right once; the defender needs to be right all the time. And that's the challenge, and that's what I'm really passionate about with what we're doing here at Horizon 3. I see this digital transformation, migration, and security going on, and we're at the tip of the spear. It's why I joined Snehal on this journey, and I'm just super excited about where the path is going and super excited about the relationship with Splunk. I'll get into more details on some of the specifics of that, but—well, you're nailing it. I mean, we've been doing a lot of things on supercloud and this next-gen environment—we're calling it next gen. You're really seeing DevOps—obviously DevSecOps has already won—the IT role has moved to the developer; shift-left is an indicator of that, one of many examples: higher-velocity code, software supply chain. You hear these things; that means it is now in the developers' hands, replaced by the new ops, data-ops teams, and security, where there's a lot of horizontal thinking. To your point about access, there's no more perimeter, and the hacker only has to be right once—one time, you know, to get in there; once you're in, you can hang out, move around, move laterally. Big problem. Okay, so we
get that. Now, the challenge for these teams as they transition organizationally is: how do they figure out what to do? Okay, this is the next step. They already have Splunk, so now they're kind of in transition while protecting for a hundred-percent ratio of success. So how would you look at that and describe the challenges? What do the teams face with their data, and what's next—what action do they take? So let's use some vernacular that folks will know. If I think about DevSecOps, right, we both know what that means: I'm going to build security into the app. It normally also talks about SecDevOps, right: how am I building security around the perimeter of what's going on inside my ecosystem, and what is it doing? And so if you think about what we're able to do with somebody like Splunk: we can pen test the entire environment from soup to nuts, right? I'm going to test the endpoints through to the infrastructure; I'm going to look for misconfigurations; I'm going to look for exposed credentials—I'm going to look for anything I can in the environment, and again, I'm going to do it at light speed. And what we're doing for that SecDevOps space is: did you detect that we were in your environment? Did we alert Splunk or the SIEM that there's someone in the environment moving around laterally? More importantly, did they log us in their environment, and when they detected that log, did that log trigger—did they alert on us? And then finally—most importantly for every CISO out there—did they stop us? And so that's how we do this, and I think, speaking with Snehal before, we've come up with this—we call it find, fix, verify. So what we do is we go in and act as the attacker, right? We act in a production environment—so we're a passive attacker, but we will go in uncredentialed, with no agents, but we have
an assumed-breach model, which means we're going to put a Docker container in your environment and then fingerprint the environment: we're going to go out and do an asset survey. Now, that's something Splunk doesn't do super well—so can Splunk see all the assets, do the same assets marry up? We're going to log all that data and then load it into the Splunk SIEM or the Splunk logging tools, just to have it in the enterprise, right? That's an immediate value-add they've got. And then we've got the fix: once we've completed our pen test, we are going to generate a report—and we can talk about these a little later—but the reports will show an executive summary, the assets that we found (which would be your asset-discovery aspect), and a fix report. The fix report, I think, is probably the most important one: it will go down and identify what we did, how we did it, and then how to fix it. Then the pen tester or the organization should fix those things, go back, run another test, and validate—like a change-detection environment—to see: hey, did those fixes take place? And, you know, Snehal, when he was the CTO of JSOC, shared with me a number of times: man, there would be 15 more items on next week's punch sheet that we didn't know about. And it has to do with how they were prioritizing the CVEs and whatnot, because they would take all CVEs as either critical or non-critical, and we are able to create context in that environment that feeds better information into Splunk and whatnot. That brings up the efficiency for Splunk—specifically, for the teams out there. By the way, the burnout thing is real. I mean, this whole: I just finished my list and I've got 15 more, or whatever—the list just keeps growing. How does Node Zero specifically help Splunk teams be more efficient? That's the question I want to get at,
because this seems like a very scalable way for Splunk customers and service teams to be more efficient. So the question is: how does Node Zero help make Splunk service teams, specifically, more efficient? So, today, in our early interactions with customers we're building, we've seen five things, and I'll start with identifying the blind spots, right—kind of what I just talked about with you: did we detect, did we log, did we alert, did they stop Node Zero? And so, to put that in more layman's, third-grade terms—if I were going to beat a fifth grader at this game—we can be the sparring partner for a Splunk Enterprise customer, a Splunk Essentials customer, someone using Splunk SOAR, or even just an enterprise Splunk customer that may be a small shop with three people and just wants to know: where am I exposed? So by creating and generating these reports, and then having the API that actually generates the dashboard, they can take all of these events that we've logged and log them in. And then where that comes in is number two: how do we prioritize those logs, right? How do we create visibility into the logs that have critical impact? And again, as I mentioned earlier, not all CVEs are high-impact, and also not all are low, right? So if you daisy-chain a bunch of low CVEs together, boom—I've got a mission-critical CVE that needs to be fixed now, such as a credential moving to an NT box that's got a text file with a bunch of passwords on it. That would be very bad. And then third would be verifying that you have all of the hosts. So one of the things Splunk is not particularly great at—and they'll admit it themselves—is asset discovery: dude, what assets do we see, and what are they logging from? And then, for every event they're able to identify, one of the cool things we can do is actually create this low-code/no-code environment, so, you know, Splunk customers
can use Splunk SOAR to actually triage events and prioritize how they're routed within it, to optimize the SOC team's time to triage any given event—obviously reducing MTTR. And then finally, I think one of the neatest things you'll see us develop is our ability to build glass tables. So behind me you'll see one of our triage events and how we build a Lockheed Martin kill chain on it with a glass table, which is very familiar to the community. We're going to have the ability, in the not-too-distant future, to allow people to search and observe on those IOCs—and if people aren't familiar with the term, an IOC is an indicator of compromise—so that's a vector that we want to drill into. And of course, who's better at drilling into the data than Splunk? Yeah, this is an awesome synergy there. I mean, I can see a Splunk customer going: man, this just gives me so much more capability, actionability, and also real understanding. And I think this is what I want to dig into, if you don't mind: understanding that critical impact. Okay, it's kind of where I see this coming. We've got the data, data ingest—now data's data, but the question is what not to log, where things are misconfigured. These are critical questions. So can you talk about what it means to understand critical impact? Yeah, so I think, going back to the things I just spoke about: a lot of those CVEs where you'll see low, low, low, and then you daisy-chain them together and suddenly it's like, oh, this is high now. But then there's your other impact—if you're a Splunk customer, and I had several of them, I had one customer with terabytes of McAfee data being brought in, and it was like, all right, there's a lot of other data you probably also want to bring in, but they could only afford—or wanted—to do certain data sets, and they didn't know how to prioritize or filter those data sets. And so we provide that opportunity to say:
hey, these are the critical ones to bring in, and these are the ones you don't necessarily need to bring in, because a low CVE in this case really does mean low—like an iLO server, or the print server where your admin credentials are sitting, on something like a printer. There will be credentials on that, which is something a hacker might go in to look at. So although the CVE on it is low, if you daisy-chain it with something that's able to get into that, you might say, ah, that's high—and we would then potentially re-rank it, using our AI logic, to say that's a moderate, put it on the scale, and prioritize those. Versus all of these scanners that are just going to give you a bunch of CVEs and say good luck. And translating that—if I can, and tell me if I'm wrong—that kind of speaks to that whole lateral-movement challenge, right? The print server is a great example: looks stupid, low-end, who's going to want to deal with the print server? Oh, but it's connected to a critical system; there's a path. Is that kind of what you're getting at? Yeah—I use "daisy chain"; I think that came from the community—but it's just lateral movement. It's exactly what they're doing: those low-level, low-criticality lateral movements are where the hackers are getting in, right? And that's the beautiful thing about the Uber example: who would have thought? I've got my multi-factor authentication going, and a human made a mistake. We can't expect humans not to make mistakes; we're fallible, right? The reality is, once they were in the environment, they could have protected themselves by running enough pen tests to know they had certain exposed credentials that would have stopped the breach—and they had not done that in their environment. And I'm not poking at them. Yeah, but it's an interesting trend, though. I mean, it's obvious: sometimes those low-end items are also not protected well, so it's easy to get at from a hacker's standpoint,
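The print-server example—several individually "low" findings daisy-chaining into a critical attack path—can be illustrated with a toy escalation rule. The scoring below is purely hypothetical, invented for illustration, and not Horizon 3.ai's actual ranking logic; the finding names and targets are made up.

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def path_priority(chain):
    """Priority of an attack path: a chain of findings that reaches a
    crown-jewel asset is escalated past its individual severities."""
    if not chain:
        return "none"
    worst = max(chain, key=lambda f: SEVERITY_RANK[f["severity"]])
    # Toy rule: two or more chained findings ending at domain admin
    # form a complete path, so the whole chain becomes critical.
    if len(chain) >= 2 and chain[-1]["target"] == "domain-admin":
        return "critical"
    return worst["severity"]

# Each hop is "low" on its own, but together they hand over the domain.
chain = [
    {"finding": "default creds on print server",  "severity": "low", "target": "print-server"},
    {"finding": "plaintext passwords in text file", "severity": "low", "target": "file-share"},
    {"finding": "reused credential accepted by DC", "severity": "low", "target": "domain-admin"},
]
print(path_priority(chain))      # whole path: escalated
print(path_priority(chain[:1]))  # single low finding in isolation: stays low
```

This is the gap the speaker describes between a scanner's per-CVE view and a path-aware view: ranked individually, nothing in the chain would make the punch list, yet the path as a whole is the one that matters.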
but also the people in charge of them can be phished easily, or spear-phished, because they're not paying attention—because they don't have to; no one ever told them, hey, be careful. Yeah, for the community I came from, John, that's exactly how they would do it. They would meet you at an international event, introduce themselves as a graduate student—these were nation-state actors—and say, would you mind reviewing my thesis on such-and-such? I was at Adobe at the time I was working on this. Instead of having the PDF sent, the target opened the PDF, and whatever that payload was launches. And I don't know if you remember, back in the 2008 time frame there were a lot of issues around IP being stolen from the United States by nation-states, and that's exactly how they did it. Or LinkedIn: hey—I joke—we want to hire you, double the salary. Oh, I'm going to click on that for sure. Yeah, right, exactly. The one thing I would say to you is: when we look at it—because I think we did 10,000 pen tests last year, and it's probably over that now—we have this top-10 list of ways we find people coming into the environment, and the funniest thing is that only one of them is a CVE-related vulnerability. It's like two percent of the attacks are occurring through CVEs, yet there's all that attention spent on that, and very little attention spent on this pen-testing side, which is this continuous threat-monitoring space, this vulnerability space, where I think we play such an important role—and I'm so excited to be a part of the tip of the spear on this one. Yeah, I'm old enough to know the movie Sneakers, which I loved—you know, watching that movie, professional hackers are testing, testing, always testing the environment. I love this. I've got to ask you, as we kind of wrap up here, Chris, if you don't mind: the
benefits to professional services from this alliance. Big news: Splunk and you guys work well together, we see that clearly. What other benefits do professional services teams see from the Splunk and Horizon3.ai alliance? I think for both of our sets of partners, and many of them are already the same partner, first off, the licensing model is probably one of the key areas where we really excel. If you're an end user, you can buy for the enterprise by the number of IP addresses you're using, but if you're a partner working with this, there are other routes: we'll license to MSPs, with a business model for what that looks like for MSPs. The unique thing that we do here is the Consulting Plus license. The Consulting Plus license allows somebody small to mid-sized, and some very large Fortune 100 consulting firms use this too, to buy into a license where they can have unlimited access to as many IPs as they want, but can only run one test at a time. As you can imagine, when we're going and hacking passwords and checking hashes and decrypting hashes, that can take a while, but for the right customer it's a perfect tool. So I'm excited about our ability to go to market with our partners, so that we understand not just how to sell to, or sell through, but how to sell with them as a good vendor partner. I think that's one thing we've done a really good job of building and bringing to the market. Yeah, and Splunk has had great success with how they've enabled partners and professional services. Absolutely. The services that layer on top of Splunk are multi-fold, with tons of great benefits, so you guys vector right into that and ride that wave without friction. And the cool thing is one of our reports, which could be
totally customized with someone else's logo, gets generated automatically. I used to work at another organization, it wasn't Splunk, but we did pen testing for customers, and my pen testers would come on site, do the engagement, and leave, and then another release later someone would say, oh shoot, we've got another sector that was breached, and they'd call you back four weeks later. By August our entire pen testing team would be sold out, and it would be, well, maybe in March, and they'd say, no, no, I've got a breach now. And when they did go in, they'd go through, do the pen test, hand over a PDF with a pat on the back, and say, there's where your problems are, you need to fix them. The reality is that what we generate, completely autonomously, with no human interaction, is every permutation of anything we found, plus the fix for those permutations, and once you've fixed everything, you just go back and run another pen test. For what people pay for one pen test, they can have a tool that does it on every Patch Tuesday, and then on Wednesday, and you triage throughout the week: green, yellow, red. I want to see the colors. Show me green; green is good, right, not red. And what CIO doesn't want that dashboard? It's exactly that, and we can help deliver it. I'm really excited about helping drive this with the Splunk team because they get that. They understand that it's the green-yellow-red dashboard, and how do we help customers find more green so that the other guys are in the red. Yeah, and get into the data and do the right thing, be efficient with how you use the data, know what to look at; there are so many things to pay attention to. The combination of both, and then the go-to-market strategy: really brilliant. Congratulations, Chris. Thanks for coming on and sharing this news, with the detail around the Splunk
action around the alliance. Thanks for sharing, John; my pleasure. Thanks, and I look forward to seeing you soon. All right, great, we'll follow up and do another segment on DevOps and IT and security teams as the new ops, and supercloud, and a bunch of other stuff, so thanks for coming on. In our next segment, the CEO of Horizon3.ai will break down all the new news for us here on theCUBE. You're watching theCUBE, the leader in high-tech enterprise coverage. [Music] Yeah, the partner program for us has been fantastic. I think prior to that, most organizations, most firms, most MSSPs might not have a bench at all for penetration testing; maybe they subcontract the work out, or maybe they do it themselves, but trying to staff that kind of position can be incredibly difficult. For us this was a differentiator, a new partnership that allowed us not only to perform services for our customers but to provide a product with which they can do it themselves. So we work with our customers in a variety of ways. Some of them want more routine testing and perform it themselves, but we're also a certified service provider of Horizon3, able to perform penetration tests, help review the data, provide color, provide analysis for our customers in a broader sense: not necessarily the black-and-white elements of what's critical, what's high, what's medium, what's low, and what you need to fix, but whether there are systemic issues. This has allowed us to onboard new customers, and it has allowed us to migrate some penetration testing services to us from competitors in the marketplace. But ultimately this is happening because the product and the outcome are special; they're unique and they're effective. Our customers like what they're seeing, and they like the routineness of it. Many of them, again, like doing this themselves, being able to pen test parts of their
networks on their own. And then there are the new use cases, right? I'm a large organization, I have eight to ten acquisitions per year; wouldn't it be great to have a tool that can perform a penetration test, both internal and external, of an acquisition before we integrate the two companies and maybe bring on some risk? It's a very effective partnership, one that has really taken our engineers and our account executives by storm. This is a partnership that's been very valuable to us. [Music] A key part of the value and business model at Horizon3 is enabling partners to leverage NodeZero to make more revenue for themselves. Our goal is that sixty percent of our revenue this year will be originated by partners, and that 95 percent of our revenue next year will be originated by partners, so a key to that strategy is making us an integral part of your business models as a partner. A key quote from one of our partners is that we enable every one of their business units to generate revenue. So let's talk about that in a little more detail. First, if you have a pen test consulting business, take Deloitte as an example: what was six weeks of human labor at Deloitte per pen test has been cut down to four days of labor, using NodeZero to conduct reconnaissance, find all the juicy, interesting areas of the enterprise that are exploitable, and assess the entire organization, with all of those details then served up to the human to look at, understand, and determine where to probe deeper. What you see in that pen test consulting business is that NodeZero becomes a force multiplier: those consulting teams are able to cover way more accounts, and way more IPs within those accounts, with the same or fewer consultants, and that directly leads to profit margin expansion for the pen testing business itself, because NodeZero is a force multiplier. The second business model is the MSSP. As an MSSP, you're already making
money providing defensive cybersecurity operations for a large volume of customers, so what MSSPs do is license NodeZero and use us as an upsell to their MSSP business, to start delivering continuous red teaming, continuous verification, or purple teaming as a service. In that business model they've got an additional line of revenue, where they can increase the spend of their existing customers by bolting on NodeZero as a purple-team-as-a-service offering. The third business model, or customer type, is the IT services provider. As an IT services provider, you make money installing and configuring security products like Splunk or CrowdStrike or Humio; you also make money reselling those products; and you make money generating follow-on services to continue to harden your customer environments. So what those IT service providers do is use us to verify that they've installed Splunk correctly, prove to their customer that Splunk, or CrowdStrike, was installed correctly using our results, and then use our results to drive follow-on services and revenue. And finally there's the value-added reseller, which is just a straight-up reseller. Because of how fast our sales cycles are, these VARs are typically able to go from cold email to deal close in six to eight weeks. At Horizon3, a single sales engineer is able to run 30 to 50 POCs concurrently, because our POCs are very lightweight and don't require any on-prem customization or heavy pre-sales and post-sales activity, so as a result we're able to have a small number of sellers driving a lot of revenue and volume for us. The same thing applies to VARs: there isn't a lot of effort required to sell the product or prove its value, so VARs are able to sell a lot more of the Horizon3 NodeZero product without having to build up a huge specialist sales organization. So what I'm going to do is talk through scenario three here, the IT service provider,
and just how powerful NodeZero can be in driving additional revenue. Think of it this way: for every one dollar of NodeZero license purchased by the IT service provider, it will generate ten dollars of additional revenue for that partner. In this example, Kidney Group uses NodeZero to verify that they have installed and deployed Splunk correctly. Kidney Group is a Splunk partner; they sell IT services to install, configure, deploy, and maintain Splunk, and as they deploy Splunk they're going to use NodeZero to attack the environment and make sure the right logs, alerts, and monitoring are being handled within the Splunk deployment. It's a way of doing QA, verifying that Splunk has been configured correctly, and that's going to be used internally by Kidney Group to prove the quality of the services they've just delivered. Then they're going to show, and leave behind, that NodeZero report with their client, and that creates a resell opportunity for Kidney Group to resell NodeZero to the client, because the client is seeing the reports and the results and saying, wow, this is pretty amazing. Those reports can be co-branded: a pen testing report branded with Kidney Group that says "powered by Horizon3" under it. From there, Kidney Group can take the fix actions report that's automatically generated with every pen test through NodeZero and use it as the starting point for a statement of work to sell follow-on services to fix all of the problems NodeZero identified: fixing LLMNR misconfigurations, fixing or patching VMware, updating credential policies, and so on. So NodeZero has found a bunch of problems, the client often lacks the capacity to fix them, and Kidney Group can use that lack of capacity as an opportunity to sell follow-on services. And finally, based on the findings from NodeZero, Kidney Group can look at that report
and say to the customer: you know, if you bought CrowdStrike, you'd be able to prevent NodeZero from attacking and succeeding the way it did; or if you bought Humio, or Palo Alto Networks, or some privileged access management solution, because of what NodeZero was able to do with credential harvesting and attacks. As a result, Kidney Group is able to resell other security products within their portfolio, CrowdStrike Falcon, Humio, Palo Alto Networks, Demisto, Phantom, and so on, based on the gaps identified by NodeZero in that pen test. What that creates is another feedback loop, where Kidney Group will then use NodeZero to verify that the CrowdStrike product has actually been installed and configured correctly, and this becomes the cycle: use NodeZero to verify a deployment, use that verification to drive a bunch of follow-on services and resell opportunities, which then drives further usage of the product. Now, the way we license is a usage-based licensing model, so the partner grows their NodeZero Consulting Plus license as they grow their business. For example, if you're Kidney Group, in week one you're going to use NodeZero to verify your Splunk install; in week two, if you have a pen testing business, you're going to use NodeZero as a force multiplier for your pen testing client opportunity; and if you have an MSSP business, in week three you're going to use NodeZero to execute a purple team MSSP offering for your clients. And not necessarily at Kidney Group's scale: if you're a Deloitte or AT&T, these larger companies with multiple lines of business, or if you're Optiv, for instance, all you have to do is buy one Consulting Plus license, and you'll be able to run as many pen tests as you want, sequentially. So you can buy a single license and use that one license to meet your week-one client commitments, and then
meet your week two, and then your week three, and as you grow your business you start to run multiple pen tests concurrently. So if in week one you've got to verify a Splunk install, run a pen test, and execute a purple team opportunity, you simply expand the number of Consulting Plus licenses from one license to three. As you systematically grow your business, you grow your NodeZero capacity with it, giving you predictable COGS, predictable margins, and, once again, a 10x additional revenue opportunity for that investment in the NodeZero Consulting Plus license. My name is Snehal; I'm the co-founder and CEO here at Horizon3. I'm going to talk to you today about why it's important to look at your enterprise through the eyes of an attacker. The challenge I had when I was a CIO in banking, the CTO at Splunk, and serving within the Department of Defense is that I had no idea whether I was secure until the bad guys showed up. Am I logging the right data? Am I fixing the right vulnerabilities? Are the security tools I've paid millions of dollars for actually working together to defend me? The answer is: I don't know. Does my team actually know how to respond to a breach in the middle of an incident? I don't know; I've got to wait for the bad guys to show up. So the challenge I had was: how do we proactively verify our security posture? I tried a variety of techniques. The first was vulnerability scanners, and the challenge with vulnerability scanners is that being vulnerable doesn't mean you're exploitable. I might have a hundred thousand findings from my scanner, of which maybe five or ten can actually be exploited in my environment. The other big problem with scanners is that they can't chain weaknesses together from machine to machine. If you've got a thousand machines in your environment, or more, what a vulnerability scanner will do is tell you that you have a problem on machine one and, separately, a
problem on machine two. What it can't tell you is that an attacker could use a low from machine one plus a low from machine two to equal a critical in your environment. What attackers do in their tactics is chain together misconfigurations, dangerous product defaults, harvested credentials, and exploitable vulnerabilities into attack paths across different machines. To address those attack paths across different machines, I tried layering in consulting-based pen testing, and the issue there is that when you've got thousands of hosts, or hundreds of thousands of hosts, in your environment, human-based pen testing simply doesn't scale to test an infrastructure of that size. Moreover, when they actually do execute a pen test and you get the report, oftentimes you lack the expertise within your team to quickly retest and verify that you've actually fixed the problem, so you end up with pen test reports that are incomplete snapshots, quickly going stale. To mitigate that problem, I tried using breach and attack simulation tools, and the struggle with those tools is that, one, I had to install credentialed agents everywhere; two, I had to write my own custom attack scripts, which I didn't have much talent for, and which I also had to maintain as my environment changed; and three, those types of tools were not safe to run against production systems, which were the majority of my attack surface. So that's why we went off to start Horizon3.
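The "low plus low equals critical" chaining that scanners miss can be pictured as a path search over a host graph. This is only a toy sketch: the hosts, weaknesses, and severities below are invented for illustration, and real attack-path discovery of the kind described here is far more involved than a breadth-first search.

```python
from collections import deque

# Toy host graph: each edge is a weakness an attacker can traverse.
# Every individual finding is "low", but together they reach the
# domain controller -- which is what makes the chain critical.
edges = {
    "internet":     [("print-server", "low: exposed web admin page")],
    "print-server": [("file-server", "low: admin credential stored on device")],
    "file-server":  [("domain-controller", "low: credential reuse")],
}

def attack_path(start, target):
    """Breadth-first search for a chain of weaknesses from start to target."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        host, path = queue.popleft()
        if host == target:
            return path
        for nxt, weakness in edges.get(host, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [weakness]))
    return None  # no route: the findings really are isolated lows

path = attack_path("internet", "domain-controller")
print(" -> ".join(path))
```

A scanner reports each of these three findings in isolation as low severity; the path query is what reveals that, chained, they hand over the domain controller.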
Tony and I met when we were in Special Operations together, and the challenge we wanted to solve was how to do infrastructure security testing at scale, by putting the power of a 20-year pen testing veteran into the hands of an IT admin or a network engineer in just three clicks. The whole idea is that we enable these fixers, the blue team, to run NodeZero, our pen testing product, to quickly find problems in their environment. That blue team will then go off and fix the issues that were found, and then quickly rerun the attack to verify that they fixed the problem. And the whole idea is to deliver this without requiring custom scripts to be developed, without requiring credentialed agents to be installed, and without requiring external third-party consulting or professional services: self-service pen testing to quickly drive find, fix, verify. There are three primary use cases our customers use us for. The first is the SOC manager who uses us to verify that their security tools are actually effective: to verify that they're logging the right data in Splunk or in their SIEM; to verify that their managed security services provider can quickly detect and respond to an attack, and to hold that provider accountable for their SLAs; to measure and verify that the SOC knows how to quickly detect and respond; or to verify that the variety of tools in the stack, and most organizations have 130-plus cybersecurity tools, none of which are designed to work together, are actually working together. The second primary use case is proactively hardening and verifying your systems. This is where the IT admin or network engineer runs self-service pen tests to verify that their Cisco environment is installed, hardened, and configured correctly, or that their credential policies are set up right, or that their vCenter or WebSphere or Kubernetes environments are actually designed to be secure. What this allows the IT
admins and network engineers to do is shift from running one or two pen tests a year to 30, 40, or more pen tests a month, and you can actually wire those pen tests into your DevOps process, or into your detection engineering and change management processes, to automatically trigger a pen test every time there's a change in your environment. The third primary use case is for those organizations lucky enough to have their own internal red team: they use NodeZero to do reconnaissance and exploitation at scale, and then use the output as a starting point for the humans to step in and focus on the really hard, juicy stuff that gets them on stage at DEF CON. So those are the three primary use cases, and what we'll do now is zoom into the find-fix-verify loop, because what I've found in my experience is that find-fix-verify is the future operating model for cybersecurity organizations. What I mean is: in the find step, using continuous pen testing, what you want to enable is on-demand, self-service pen tests. You want those pen tests to find attack paths at scale, spanning your on-prem infrastructure, your cloud infrastructure, and your perimeter, because attackers don't stay in one place: they will find ways to chain together a perimeter breach and a credential from your on-prem environment to gain access to your cloud, or some other permutation. The third part of continuous pen testing is that attackers don't focus on critical vulnerabilities anymore; they know we've built vulnerability management programs to reduce those vulnerabilities. Attackers have adapted: what they do is chain together misconfigurations in your infrastructure, software, and applications with dangerous product defaults, with exploitable vulnerabilities, and with credentials collected through a mix of techniques, at scale. Once you've found those problems, the next question is what to do about them. You want to prioritize fixing problems that are actually exploitable in your environment, that
truly matter, meaning they will lead to domain compromise, or domain user compromise, or access to your sensitive data. The second thing you want to fix is making sure you understand what risk your crown-jewels data is exposed to. Where is your crown-jewels data? Is it in the cloud? Is it on-prem? Has it been copied to a share drive you weren't aware of? If a domain user were compromised, could they access that crown-jewels data? You want to use the attacker's perspective to secure the critical data you have in your infrastructure. And finally, as you fix these problems, you want to quickly remediate and retest to confirm you've actually fixed the issue, and this find-fix-verify cycle becomes the accelerator that drives purple team culture. The third part is verify, and what you want in the verify step is to confirm that your security tools, processes, and people can effectively detect and respond to a breach. You want to integrate that into your detection engineering processes, so that you know you're catching the right security rules and that you've deployed the right configurations. You also want to make sure your environment adheres to best practices around systems hardening and cyber resilience, and finally you want to be able to prove your security posture over time to your board, to your leadership, and to your regulators. So what I'll do now is zoom into each of these three steps. When we zoom into find, here's the first example using NodeZero and autonomous pen testing. What an attacker will do is find a way to break through the perimeter. In this example, it's very easy to misconfigure Kubernetes so as to allow an attacker to gain remote code execution in your on-prem Kubernetes environment and break through the perimeter, and from there the attacker conducts network reconnaissance and finds ways to gain code execution on other machines in the environment, and as they get code execution they start to
dump credentials, collect a bunch of NTLM hashes, crack those hashes using open-source and dark-web-available data as part of those attacks, and then reuse those credentials to log in and laterally maneuver throughout the environment. As they laterally maneuver, they can reuse those credentials and apply credential-spraying techniques and so on, to compromise your business email or to log in as admin in your cloud. This is a very common attack, and rarely is a CVE actually needed to execute it; often it's just a misconfiguration in Kubernetes, a bad credential or password policy, combined with bad practices of credential reuse across the organization. Here's another example of an internal pen test, and this is from an actual customer. They had 5,000 hosts in their environment, they had EDR and UBA tools installed, and they initiated an internal pen test on a single machine. From that single initial access point, NodeZero enumerated the network, conducted reconnaissance, and found that five thousand hosts were accessible. What NodeZero does under the covers is organize all of that reconnaissance data into a knowledge graph that we call the cyber terrain map, and that cyber terrain map becomes the key data structure we use to efficiently maneuver, attack, and compromise your environment. So NodeZero will try to find ways to get code execution, reuse credentials, and so on. In this customer example, they had Fortinet installed as their EDR, but NodeZero was still able to get code execution on a Windows machine. From there, it successfully dumped credentials, including sensitive credentials from the LSASS process on the Windows box, and then reused those credentials to log in as domain admin on the network. Once an attacker becomes domain admin, they have the keys to the kingdom; they can do anything they want. So what happened here? Well, it turns out Fortinet was misconfigured on three out of 5,000 machines, bad automation, and the customer had
no idea this had happened. They would have had to wait for an attacker to show up to realize it was misconfigured. The second thing is: well, why didn't Fortinet stop the credential pivot and the lateral movement? It turned out the customer hadn't bought the right modules or turned on the right services within that particular product, and we see this not only with Fortinet but with Trend Micro and all the other defensive tools; it's very easy to miss a checkbox in the configuration that would do things like prevent credential dumping. The next story I'll tell you is: attackers don't have to hack in, they log in. In another infrastructure pen test, a typical technique attackers take is man-in-the-middle attacks that collect hashes. In this case, an attacker leverages a tool and technique called Responder to collect NTLM hashes being passed around the network, and there are a variety of reasons why these hashes get passed around; it's a pretty common misconfiguration. As the attacker collects those hashes, they start to apply techniques to crack them: they'll pass the hash, and from there they'll use open-source intelligence, common password structures and patterns, and other techniques to try to crack those hashes into clear-text passwords. Here, NodeZero automatically collected hashes, automatically passed the hashes and cracked those credentials, and then started to take the domain user IDs and passwords it had collected and try to access different services and systems in the enterprise. In this case, NodeZero successfully gained access to the Office 365 email environment, because three employees didn't have MFA configured. So now NodeZero has placement and access in the business email system, which sets up the conditions for fraud, lateral phishing, and other techniques. But what's especially insightful here is that 80 percent of the hashes collected in this
pen test were cracked in 15 minutes or less. Eighty percent. Twenty-six percent of the user accounts had a password that followed a pretty obvious pattern: first initial, last initial, and four random digits. The other interesting thing is that 10 percent of service accounts had their user ID the same as their password: vmware-admin/vmware-admin, websphere-admin/websphere-admin, and so on and so forth. So attackers don't have to hack in; they just log in with credentials they've collected. The next story is becoming AWS admin. In this example, once again an internal pen test, NodeZero gets initial access and discovers that 2,000 hosts are network-reachable from that environment. It fingerprints and organizes all of that data into a cyber terrain map, and from there it fingerprints that HP iLO, the Integrated Lights-Out service, is running on a subset of hosts. HP iLO is a service that is often not instrumented or observed by security teams, nor is it easy to patch; attackers know this and immediately go after those types of services. In this case, that iLO service was exploitable, and we were able to get code execution on it. iLO stores all the user IDs and passwords in clear text in a particular set of processes, so once we gained code execution we were able to dump all of the credentials, and from there laterally maneuver to log in to the Windows box next door as admin. On that admin box, we gained access to the share drives, and we found a credentials file saved on a share drive. It turned out that file held the AWS admin credentials, giving us full admin authority over their AWS accounts. Not a single security alert was triggered in this attack, because the customer wasn't observing the iLO service, and every step thereafter was a valid login in the environment. So what do you do? Step one, patch the server. Step two, delete the credentials file from the share drive. And step three, get better instrumentation on privileged access users and
logins. The final story I'll tell is a typical pattern we see across the board that combines the various techniques I've described. An attacker will use open-source intelligence to find all of the employees who work at your company; from there, they'll look up those employees in dark-web breach databases and other sources of information, and use that as a starting point to password-spray and compromise a domain user. All it takes is one employee who reused a breached password for their corporate email, or a single employee with a weak, easily guessable password. All it takes is one. Once the attacker gains domain user access, in most shops the domain user is also the local admin on their laptop, and once you're local admin you can dump SAM and get local admin NTLM hashes; you can reuse those credentials to become local admin on neighboring machines, and attackers will rinse and repeat. Eventually they reach a point where they can dump LSASS, by unhooking the antivirus, defeating the EDR, or finding a misconfigured EDR as we discussed earlier, and compromise the domain. What's consistent is that the fundamentals are broken at these shops: they have poor password policies; they don't have least-privilege access implemented; Active Directory groups are too permissive, with domain admin or domain user also being the local admin; AV or EDR solutions are misconfigured or easily unhooked; and so on. What we found across 10,000 pen tests is that user behavior analytics tools never caught us in that lateral movement, in part because those tools require pristine logging data in order to work, and also because it's very difficult to establish a baseline of normal versus abnormal credential login behavior. Another interesting insight is that several marquee brand-name MSSPs were defending our customers' environments, and it took them seven hours to detect and
respond to the pen test. Seven hours. The pen test was over in less than two hours, so what you had was an egregious violation of the service level agreements that MSSP had in place, and the customer was able to use us to get a service credit and drive accountability of their SOC and of their provider. The third interesting thing is that in one case it took us seven minutes to become domain admin in a bank. That bank had every Gucci security tool you could buy, yet in 7 minutes and 19 seconds NodeZero started as an unauthenticated member of the network and was able to escalate privileges, through chaining misconfigurations, lateral movement, and so on, to become domain admin. If it's seven minutes today, we should assume it will be less than a minute a year or two from now, making it very difficult for humans to detect and respond to that kind of blitzkrieg attack. So that's the find. It's not just about finding problems, though; the bulk of the effort should be what to do about them, the fix and the verify. As you find those problems, back to Kubernetes as an example, we will show you the path: here is the kill chain we took to compromise that environment. We'll show you the impact: here is the proof of exploitation we were able to use to compromise it, and there's the actual command we executed, so you could copy and paste that command and compromise that kubelet yourself if you want. Then the impact: we got code execution, and we'll show you, here is the impact, this is a critical, here's why, it enabled a perimeter breach, here are the affected applications. We'll tell you the specific IPs where you've got the problem, how it maps to the MITRE ATT&CK framework, and then exactly how to fix it. We'll also show you what this problem enabled, so you can accurately prioritize why it is, or isn't, important. The next part is accurate prioritization. The hardest part of my job as a CIO was deciding what not
to fix. Take SMB signing not required as an example: by default that CVSS score is a one out of 10, and this misconfiguration is not a CVE, it's a misconfig. But it enabled an attacker to gain access to 19 credentials, including one domain admin and two local admins, and access to a ton of data. Because of that context, this is really a 10 out of 10, and you'd better fix it as soon as possible. However, of the seven occurrences we found, it's only a critical in three out of the seven, and these are the three specific machines, and we'll tell you the exact way to fix it, and you'd better fix those as soon as possible. For the four machines over here, the issue didn't allow us to do anything of consequence, so because the hardest part is deciding what not to fix, you can justifiably choose not to fix those four issues right now, add them to your backlog, and surge your team to fix these three as quickly as possible. And once you fix these three, you don't have to re-run the entire pen test: you can select those three and, with one click, verify by running a very narrowly scoped pen test that tests only that specific issue. What that creates is a much faster cycle of finding and fixing problems. The other part of fixing is verifying that you don't have sensitive data at risk. Once we become a domain user, we use those domain user credentials to try to gain access to databases, file shares, S3 buckets, git repos, and so on, and help you understand what sensitive data you have at risk. In this example, a green checkbox means we logged in as a valid domain user and were able to get read-write access to the database. We show how many records we could have accessed; we don't actually look at the values in the database, but we'll show you the schema so you can quickly determine that PII data was at risk. We'll do that for your file shares and other sources of data, so now you can accurately articulate the data you have at risk and prioritize cleaning that data up,
especially data that would lead to a fine or a big news story. So that's the find, that's the fix; now we're going to talk about the verify. The key part of verify is embracing and integrating with detection engineering practices. Think about your layers of security tools: you've got lots of tools in place, on average 130 tools at any given customer, but those tools were not designed to work together. So when you run a pen test, what you want to ask is: did you detect us, did you log us, did you alert on us, did you stop us? From there, look at the techniques that are commonly used to actually compromise an environment. If you look at the top 10 techniques we use (and there are far more than these 10, but these are the most often executed), nine out of ten have nothing to do with CVEs. It's misconfigurations, dangerous product defaults, bad credential policies, and how we chain those together to become a domain admin or compromise a host. Every single attacker command we executed is provided to you as an attack activity log, so you can see every command we ran, the timestamp it was executed, the host it executed on, and how it maps to MITRE ATT&CK tactics. Our customers will have these attacker logs on one screen, then go look in Splunk or Exabeam or SentinelOne or CrowdStrike and ask: did you detect us, did you log us, did you alert on us, or not? To make that even easier, take this example: hey Splunk, what logs did you see at this time on the VMware host? Because that's when node zero was able to dump credentials, and that allows you to identify and fix your logging blind spots. To make that easier we've got app integration: this is an actual Splunk app in the Splunk App Store, and inside the Splunk console itself you can fire up the Horizon 3 node zero app. All of the pen test results are there, so you can see all of
the results in one place without jumping out of the tool. As I skip forward, what we'll show you is: here's a pen test, here are the critical issues we identified, for that weak-default issue here are the exact commands we executed, and then we automatically query Splunk for all events between these times on that endpoint that relate to this attack. So now, within the Splunk environment itself, you can quickly figure out whether you're missing logs or appropriately catching this issue, and that becomes incredibly important in the detection engineering cycle I mentioned earlier. So how do our customers end up using us? They shift from running one pen test a year to 30 or 40 pen tests a month, oftentimes wiring us into their deployment automation to run pen tests automatically. As they run more pen tests they find more issues, but eventually they hit an inflection point where they're able to rapidly clean up their environment, and that inflection point comes because the red and the blue teams start working together in a purple-team culture, proactively hardening their environment. Our customers also run us from different perspectives. They'll first run an RFC 1918 scope to see what an attacker who gained initial access in a widely accessible part of the network could do. Then they'll run us within a specific network segment: from within that segment, could the attacker break out and gain access to another segment? Then they'll run us from their work-from-home environment: could they traverse the VPN and do something damaging, and once they're in, could they traverse the VPN and get into my cloud? Then they'll break in from the outside. All of these perspectives are available to you in Horizon 3 and node zero as a single SKU, and you can run as many pen tests as you want. If you run a phishing campaign and find that
an intern in the finance department had the worst phishing behavior, you can then inject their credentials and show the end-to-end story of how an attacker phished, gained the credentials of an intern, and used them to gain access to sensitive financial data. So our customers end up running multiple attacks from multiple perspectives and looking at those results over time. I'll leave you with two things. One is: what is the AI in Horizon3.ai? Those knowledge graphs are the heart and soul of everything we do, and we use machine learning, reinforcement learning techniques, Markov decision models, and so on to efficiently maneuver through and analyze the paths in those really large graphs. We also use context-based scoring to prioritize weaknesses, and we're able to drive collective intelligence across all of the operations, so the more pen tests we run, the smarter we get, all based on our knowledge graph analytics infrastructure. Finally, I'll leave you with my decision criteria when I was a buyer for my security testing strategy. What I cared about was coverage: I wanted to be able to assess my on-prem, cloud, perimeter, and work-from-home environments, and be safe to run in production. I wanted to do that as often as I wanted, running pen tests in hours or days, not weeks or months, so I could accelerate that find-fix-verify loop. I wanted my IT admins and network engineers with limited offensive experience to be able to run a pen test in a few clicks through a self-service experience, without having to install agents or write custom scripts. And finally, I didn't want to get nickel-and-dimed on having to buy different types of attack modules: I wanted a single annual subscription that allowed me to run any type of attack as often as I wanted, so I could look at my trends and directions over time. I hope you found this talk valuable; we're easy to find,
and I look forward to seeing you use the product and letting our results do the talking. When you look at the way node zero's pen testing algorithms work, we dynamically select how to compromise an environment based on what we've discovered, and the goal is to become a domain admin, compromise a host, compromise domain users, find ways to encrypt data, steal sensitive data, and so on. But when you look at the top 10 techniques we ended up using to compromise environments, the first nine have nothing to do with CVEs, and that's the reality: CVEs are a vector, yes, but less than two percent of CVEs are actually used in a compromise. Oftentimes it's some sort of credential collection, credential cracking, or credential pivoting, using that to become an admin, and then compromising environments from that point on. I'll leave this up for you to read through, and you'll have the slides available, but I found it very insightful that organizations, ourselves included when I was at GE, invested heavily in just standard vulnerability management programs. When I was at DOD, our CVE posture was all DISA cared about asking us about. But attackers have adapted to not rely on CVEs to get in, because they know organizations are actively looking at and patching those CVEs; instead, they're chaining together credentials from one place with misconfigurations and dangerous product defaults in another to take over an environment. A concrete example: by default, vCenter backups are not encrypted, so if an attacker finds vCenter, they'll find the backup location, and there are specific vCenter MDB files with the admin credentials embedded in the binaries. As an attacker, you can find the right MDB file, parse the binary, and now you've got the admin credentials for the vCenter environment and can start to log in as admin. There's a bad habit by Signal officers and Signal
practitioners in the Army and elsewhere, where the VM notes section of a virtual image has the password for the VM. Those VM notes are not stored encrypted, and attackers know this: they're able to find the unencrypted VMs, read the notes section, pull out the passwords for those images, and then reuse those credentials across the board. So I'll pause here, Patrick; I'd love to get some commentary on these techniques and other things you've seen, and in the last, say, 10 to 15 minutes we'll roll through a bit more on what to do about it. Yeah, no, I love it. I think this is pretty exhaustive. What I like about what you've done here is that we've seen double-digit increases in the number of organizations reporting actual breaches year over year for the last three years, and in the zeitgeist we often peg that on ransomware, which of course is incredibly important and very top of mind. But what I like about what you have here is that we're reminding the audience that the attack surface area, and the vectors that matter, have to be thought about more comprehensively than just ransomware scenarios. Yeah, right on. So let's build on this. When you think about your defense in depth, you've got multiple security controls that you've purchased and integrated, and you've got redundancy if a control fails, but the reality is that these security tools aren't designed to work together. So when you run a pen test, what you want to ask yourself is: did you detect node zero, did you log node zero, did you alert on node zero, and did you stop node zero? And when you think about how to do that, every single attacker command executed by node zero is available in an attacker log, so you can see, at the bottom here, the vCenter exploit at that time on that IP and how it aligns to MITRE ATT&CK. What you want to be
able to do is figure out whether your security tools caught this or not, and that becomes very important in using the attacker's perspective to improve your defensive security controls. The way we've tried to make this easier (I still bleed green in many ways from my Splunk background) is that our customers will look at the attacker logs on one screen, and look at what Splunk saw or missed on another screen, and use that to figure out their logging blind spots. Where that becomes really interesting is that we've built out an integration into Splunk: there's a Splunk app you can download off Splunkbase, and you'll get all of the pen test results right there in the Splunk console. From that console you can see all the pen tests that were run and the issues that were found; you can look at a particular pen test, all of the weaknesses identified for it, and how they categorize out. For each of those weaknesses, you can click on any one that's critical, and (this is where the punch line comes in, so I'll pause the video here) for that weakness, these are the commands that were executed on these endpoints at this time, and then we'll actually query Splunk for that IP address, or events containing that IP, and these are the source types that surfaced any sort of activity. What we try to do is help you, as quickly and efficiently as possible, identify the logging blind spots in your Splunk environment based on the attacker's perspective. As this video plays through, Patrick, I'd love to get your thoughts, having seen so many Splunk deployments and the effectiveness of those deployments, on how this is going to help elevate the effectiveness of all of your
Splunk customers. Yeah, I'm super excited about this. I think these kinds of purpose-built integrations really move the needle for our customers. At the end of the day, when I think about the power of Splunk, I think about a product I was first introduced to 12 years ago that was an on-prem piece of software, and at the time it was sold on perpetual and term licenses, but what made it special was that it could eat data at a speed nothing else I'd ever seen could. You can ingest massively scalable amounts of data; it did cool things like schema-on-read, which facilitated that; there was this language called SPL that you could nerd out about; and you went to a conference once a year and talked about all the cool things you were Splunking. But now, as we think about the next phase of our growth, we live in a heterogeneous environment where our customers have so many different tools and data sources that are ever expanding, and as you look at the role of the CISO, it's mind-blowing to me the number of sources, services, and apps that have come into the CISO's span of influence in the last three years. We're seeing things like infrastructure service-level visibility and application performance monitoring, stuff that just never made sense for the security team to have visibility into, at least not at the size and scale we're demanding today. That's different, and this is why it's so important that we have these joint purpose-built integrations that provide more prescription to our customers about how they walk that journey towards maturity: what does zero to one look like, what does one to two look like? Whereas 10 years ago customers were happy with platforms, today they want integration, they want solutions, and they want to drive outcomes, and I think this is a great example of how together we are stepping up to the
evolving nature of the market, the ever-evolving nature of the threat landscape, and, I would say, the maturing needs of the customer in that environment. Yeah, for sure. I think, especially as we all anticipate budget pressure over the next 18 months due to the economy and elsewhere, while security budgets are not going to get cut, they're not going to grow as fast, and there's a lot more pressure on organizations to extract more value from their existing investments, as well as more impact from their existing teams. So security effectiveness, fierce prioritization, and automation become, I think, the three key themes of security over the next 18 months. What I'll do very quickly is run through a few other use cases. Every host we identified in the pen test we're able to score: this host allowed us to do something significant, therefore it's really critical and you should be increasing your logging here; these hosts down here we couldn't really do anything with as an attacker, so if you do have to make trade-offs, you can reduce your logging resolution at the lower end in order to increase logging resolution at the upper end. So you've got that level of justification for where to increase or adjust your logging resolution. Another example: every host we've discovered as an attacker we expose, and you can export that to make sure every host we found is being ingested from a Splunk standpoint. A big issue I had as a CIO and user of Splunk and other tools was that I had no idea if there were rogue Raspberry Pis on the network, or if a new box was installed and whether Splunk was installed on it or not. Now you can quickly correlate what hosts we saw and reconcile that with what you're logging. Finally, or second to last on the Splunk integration side: for every single problem we've found, we give
multiple options for how to fix it. This becomes a great way to prioritize which fix actions to automate in your SOAR platform, and what we want to get to eventually is automatically triggering SOAR actions to fix well-known problems, like automatically invalidating poor passwords and credentials, amongst a whole bunch of other things we could do. And finally, if there's a well-known kill chain or attack path: one of the things I really wish I could have done when I was a Splunk customer was take a kill chain that actually shows a path to domain admin that I'm sincerely worried about, and use it as a glass table over which I could layer possible indicators of compromise. Now you've got a great starting point for glass tables and IOCs for actual kill chains that we know are exploitable in your environment, and that becomes some super cool integration we've got on the roadmap between us and the Splunk security side of the house. So, Patrick, before my last slide on the wartime security mindset, I'd love to get your comments, assuming there are no other questions. No, I love it. I think this glass-table approach to visualizing these workflows, and then using things like SOAR and orchestration and automation to operationalize them, is exactly where we see all of our customers going, and getting away from what I think is an over-engineered approach to SOAR, where it has to be super technical-heavy with Python programmers, and getting more to this visual view of workflow creation that really demystifies the power of automation and also democratizes it. You don't have to have those programming languages on your resume in order to start moving the needle on workflow creation, policy enforcement, and ultimately driving automation
coverage across more and more of the workflows that your team is seeing. Yeah, I think that between us being able to visualize the actual kill chain or attack path, and the SOAR market going towards no-code/low-code, configurable SOAR versus coded SOAR, that's going to be a real game changer in giving security teams a force multiplier. So what I'll leave you with is this: a peacetime mindset of security is no longer sustainable. We really have to get out of checking the box and then waiting for the bad guys to show up to verify whether security tools are working or not. And the reason we've got to do that quickly is that over a thousand companies have withdrawn from the Russian economy over the past nine months due to the war in Ukraine, and you should expect every one of them to be punished by the Russians for leaving, punished from a cyber standpoint. This is no longer about the financial extortion of ransomware; this is about punishing and destroying companies, and you can punish any one of these companies by going after them directly or by going after their suppliers and their distributors. Suddenly your attack surface is no longer just your own enterprise: it's how you bring your goods to market and how you get your goods created, because while I may not be able to disrupt your ability to harvest fruit, if I can get those trucks stuck at the border, I can increase spoilage and have the same effect. And what we should expect to see is this idea of cyber-enabled economic warfare, where if we issue a sanction like banning the Russians from traveling, there is a cyber-enabled counterpunch, which is to corrupt and destroy the American Airlines database. That's below the threshold of war; it's not going to trigger the 82nd Airborne to be mobilized, but it achieves the right effect. Ban the sale of luxury goods: disrupt the supply chain and create shortages. Ban Russian oil
and gas: attack refineries to cause a 10x spike in gas prices three days before the election. This is the future, and therefore I think we have to shift towards a wartime mindset: don't trust your security posture, verify it; see yourself through the eyes of the attacker; build that incident response muscle memory; and drive better collaboration between the red and the blue teams, your suppliers and distributors, and the information sharing organizations you have in place. What was really valuable for me as a Splunk customer was that when a router crashes, at that moment you don't know if it's due to an IT administration problem or an attacker, and what you want is different people asking different questions of the same data: an integrated triage process that applies an IT lens and a security lens to the problem, and from there figures out whether it's an IT workflow to execute or a security incident to respond to. You want all of that as an integrated team, integrated process, and integrated technology stack, and this is something I cared very deeply about as both a Splunk customer and a Splunk CTO, something I see time and time again across the board. So, Patrick, I'll leave you with the last word and the final three minutes; I don't see any open questions, so please take us home. Oh man, and to think we spent hours and hours prepping for this together; those last 40 seconds of your talk track are probably one of the things I'm most passionate about in this industry right now. I think NIST has done some really interesting work around building cyber-resilient organizations that has really helped the industry see that incidents can come from adverse conditions, stress or performance taxation in the infrastructure, service, or app layer, and they can come from malicious compromises: insider threats, external threat actors. And the more that we look
at this from the perspective of a broader cyber resilience mission, in a wartime mindset, I think we're going to be much better off. And to your point about operationally minded ISACs: information sharing and intelligence sharing become so important in these wartime situations, and we know not all ISACs are created equal, but we're also seeing a lot more ad hoc information sharing groups popping up. So look, I think you framed it really well. I love the concept of a wartime mindset, and I like the idea of applying a cyber resilience lens: if you add one more layer on top of that bottom-right cake, the IT lens and the security lens roll up to this concept of cyber resilience, and I think NIST has done some great work there for us. Yeah, you're spot on, and that is going to be, I think, the next terrain that you're going to see vendors go after, but one that I think Splunk is best positioned to win. Okay, that's a wrap for this special Cube presentation. You heard all about the global expansion of Horizon3.ai's partner program, where their partners have a unique opportunity to take advantage of their node zero product: international go-to-market expansion, North America channel partnerships, and overall relationships with companies like Splunk to make things more comprehensive in this disruptive cybersecurity world we live in. Hope you enjoyed this program. All the videos are available on thecube.net, and check out Horizon3.ai for their pen test automation and the way they continuously test the environment that you're in. Great innovative product, and I hope you enjoyed the program. Again, I'm John Furrier, host of theCube. Thanks for watching.
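The detect/log/alert check described throughout this segment — comparing attacker command timestamps against what the SIEM actually recorded — can be sketched in a few lines. The data shapes, field names, and 60-second window here are illustrative assumptions for the sketch, not the Horizon3 attack log format or a Splunk API.

```python
from datetime import datetime, timedelta

def find_logging_blind_spots(attacker_log, siem_events, window_seconds=60):
    """Return attacker actions with no SIEM event on the same host
    within +/- window_seconds of the command's execution time."""
    window = timedelta(seconds=window_seconds)
    blind_spots = []
    for action in attacker_log:
        t, host = action["time"], action["host"]
        covered = any(
            ev["host"] == host and abs(ev["time"] - t) <= window
            for ev in siem_events
        )
        if not covered:
            blind_spots.append(action)
    return blind_spots

# Illustrative data: the credential dump on the VMware host was logged,
# but the SMB read on the file share produced no SIEM event at all.
attacker_log = [
    {"time": datetime(2022, 9, 1, 10, 0, 0), "host": "vmware-01", "cmd": "dump-creds"},
    {"time": datetime(2022, 9, 1, 10, 30, 0), "host": "fileshare-02", "cmd": "smb-read"},
]
siem_events = [
    {"time": datetime(2022, 9, 1, 10, 0, 20), "host": "vmware-01", "event": "lsass-access"},
]
gaps = find_logging_blind_spots(attacker_log, siem_events)
print([g["host"] for g in gaps])  # → ['fileshare-02']
```

In practice the right-hand side would be a query against the SIEM for each host and time range rather than an in-memory list, but the reconciliation logic is the same: any attacker action with no matching event is a logging blind spot to fix.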

Published Date : Sep 28 2022

Greg Muscarella, SUSE | Kubecon + Cloudnativecon Europe 2022


 

>>theCube presents KubeCon + CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm your host Keith Townsend, alongside my co-host Enrico Signoretti, senior editor. I'm sorry, senior IT analyst at <inaudible>. Enrico, welcome to the program. >>Thank you very much. And thank you for having me. It's exciting. >>So, high-level thoughts on KubeCon, first time in person again in a couple of years? >>Well, this is amazing for several reasons, and one of the reasons is that I had the chance to meet with people like you again. I mean, we met several times over the internet, over zoom calls; I started to hate these zoom calls. <laugh> They're really impersonal in the end. Like last night, we were together, a group of friends, industry folks: it's just amazing. And apart from that, the event is really cool. There are a lot of in-person interviews, real people doing real stuff, not just personal calls where you don't even know if they're telling the truth. When you can look in their eyes and see what they're doing, I think that makes a difference. >>So, speaking about real people and meeting people for the first time, new jobs, new roles: Greg Muscarella, enterprise container management and general manager at SUSE. Welcome to the show; welcome back, Cube alum. >>Thank you very much. It's awesome to be here, and it's awesome to be back in person. I completely agree with you: there's a certain fidelity to the conversation and a certain ability to get to know people a lot more, so it's absolutely fantastic to be here. >>So Greg, tell us about your new role and what SUSE has going on at KubeCon. >>Sure. So I joined SUSE about three months ago to lead the Rancher business unit, right?
So our container management pieces. And, you know, it's a fantastic time, because if you look at the transition from virtual machines to containers and on to microservices, right alongside the transition from on-prem to cloud, this is a very exciting time to be in this industry, and Rancher has been setting the stage. And again, going back to being here: Rancher's all about the community, right? This is a very open, independent, community-driven product and project, so this is kind of like being back with our people and being able to reconnect here. Doing it digital is great, but being here changes the game for us. We feed off that community, we feed off the energy. And again, going back to the space and what's happening in it, it's a great time to be in this space. You guys have seen the transitions; we've seen just massive adoption of containers and Kubernetes overall, and Rancher's been right there with some amazing companies doing really interesting things that I'd never thought of before. I'm still learning, but it's been great so far. >>Yeah. And you know, when we talk about Kubernetes strategy today, we're talking about very broad strategies. Not just the data center or the cloud, with maybe smaller organizations adopting Kubernetes in the cloud, but large organizations thinking wider, and more and more about the edge. So what's your opinion on this expansion of Kubernetes towards the edge? >>So I think you're exactly right, and that's actually what a lot of the meetings I've been having here are about: some of these interesting use cases. There are ones that are easy to understand in the telco space, right?
Especially with the adoption of 5G, you have all these base stations and new towers, and they have not only the core radio functions or network functions they're trying to run there, but other applications that want to run on that same environment. I spoke recently with some of our good friends at a major automotive manufacturer, doing things in their factories that can't take the latency of being somewhere else. They have robots on the factory floor, and the latency they would experience if they tried to run things in the cloud meant the robot would've moved 10 centimeters.
So it's, it starts off with the think of it as simple, but it's, it's not simple. It's the provisioning piece. How do we just get it installed and running right then to what you just asked the management piece of it, everything from your firmware to your operating system, to the, the cluster, uh, the Kubernetes cluster, that's running on that. And then the workloads on top of that. So with rancher, uh, and with the rest of SUSE, we're actually tacking all those parts of the problems from bare metal on up. Uh, and so we have lots of ways for deploying that operating system. We have operating systems that are, uh, optimized for the edge, very secure and ephemeral container images that you can build on top of. And then we have rancher itself, which is not only managing your ES cluster, but can actually start to manage the operating system components, uh, as well as the workload components. >>So all from your single interface, um, we mentioned policy and security. So we, yeah, we'll probably talk about it more, um, uh, in a little bit, but, but new vector, right? So we acquired a company called new vector, just open sourced, uh, that here in January, that ability to run that level of, of security software everywhere again, is really important. Right? So again, whether I'm running it on, whatever my favorite public cloud providers, uh, managed Kubernetes is, or out at the edge, you still have to have security, you know, in there. And, and you want some consistency across that. If you have to have a different platform for each of your environments, that's just upping the complexity and the opportunity for error. So we really like to eliminate that and simplify our operators and developers' lives as much as possible. >>Yeah. 
From this point of view, are you implying that even you, you are matching, you know, self, uh, let's say managed clusters at the, at the very edge now with, with, you know, added security, because these are the two big problems lately, you know, so having something that is autonomous somehow easier to manage, especially if you are deploying hundreds of these that's micro clusters. And on the other hand, you need to know a policy based security that is strong enough to be sure again, if you have these huge robots moving too close to you, because somebody act the, the, the class that is managing them, that is, could be a huge problem. So are you, you know, approaching this kind of problems? I mean, is it, uh, the technology that you are acquired, you know, ready to, to do this? >>Yeah. I, I mean, it, it really is. I mean, there's still a lot of innovation happening. Don't, don't get me wrong. We're gonna see a lot of, a lot more, not just from, from SA and ranch here, but from the community, right. There's a lot happening there, but we've come a long way and we solved a lot of problems. Uh, if I think about, you know, how do you have this distributed environment? Uh, well, some of it comes down to not just, you know, all the different environments, but it's also the applications, you know, with microservices, you have very dynamic environment now just with your application space as well. So when we think about security, we really have to evolve from a fairly static policy where like, you might even be able to set an IP address and a port and some configuration on that. >>It's like, well, your workload's now dynamically moving. So not only do you have to have that security capability, like the ability to like, look at a process or look at a network connection and stop it, you have to have that, uh, manageability, right? You can't expect an operator or someone to like go in and manually configure a YAML file, right? Because things are changing too fast. 
It needs to be that combination of convenient, easy to manage with full function and ability to protect your, your, uh, your resources. And I think that's really one of the key things that new vector really brings is because we have so much intelligence about what's going on there. Like the configuration is pretty high level, and then it just runs, right? So it's used to this dynamic environment. It can actually protect your workloads wherever it's going from pod to pod. Uh, and it's that, that combination, again, that manageability with that high functionality, um, that, that is what's making it so popular. And what brings that security to those edge locations or cloud locations or your data center. >>So one of the challenges you're kind of, uh, touching on is this abstraction on, upon abstraction. When I, I ran my data center, I could put, uh, say this IP address, can't talk to this IP address on this port. Then I got next generation firewalls where I could actually do, uh, some analysis. Where are you seeing the ball moving to when it comes to customers, thinking about all these layers of abstraction IP address doesn't mean anything anymore in cloud native it's yes, I need one, but I'm not, I'm not protecting based on IP address. How are customers approaching security from the name space perspective? >>Well, so it's, you're absolutely right. In fact, even when you go to IPV six, like, I don't even recognize IP addresses anymore. <laugh> yeah. >>That doesn't mean anything like, oh, just a bunch of, yeah. Those are numbers, alpha Ric >>And colons. Right. You know, it's like, I don't even know anymore. Right. So, um, yeah, so it's, it comes back to that, moving from a static, you know, it's the pets versus cattle thing. Right? So this static thing that I can sort of know and, and love and touch and kind of protect to this almost living, breathing thing, which is moving all around, it's a swarm of, you know, pods moving all over the place. 
And so, uh, it, it is, I mean, that's what Kubernetes has done for the workload side of it is like, how do you get away from, from that, that pet to a declarative approach to, you know, identifying your workload and the components of that workload and what it should be doing. And so if we go on the security side some more like, yeah, it's actually not even namespace namespace. >>Isn't good enough if we wanna get, if we wanna get to zero trust, it's like, just cuz you're running in my namespace doesn't mean I trust you. Right. So, and that's one of the really cool things about new vectors because of the, you know, we're looking at protocol level stuff within the network. So it's pod to pod, every single connection we can look at and it's at the protocol layer. So if you say you're on my SQL database and I have a mye request going into it, I can confirm that that's actually a mye protocol being spoken and it's well formed. Right. And I know that this endpoint, you know, which is a, uh, container image or a pod name or some, or a label, even if it's in the same name, space is allowed to talk to and use this protocol to this other pod that's running in my same name space. >>Right. So I can either allow or deny. And if I can, I can look into the content that request and make sure it's well formed. So I'll give you an example is, um, do you guys remember the log four J challenges from not too long ago, right. It was a huge deal. So if I'm doing something that's IP and port based and name space based, so what are my protections? What are my options for something that's got logged four J embedded in like, I either run the risk of it running or I shut it down. Those are my options. Like those neither one of those are very good. So we can do, because again, we're at the protocol layer. It's like, ah, I can identify any log for J protocol. I can look at whether it's well formed, you know, or if it's malicious and it's malicious, I can block it. 
If it's well formed, I can let it go through. So I can actually look at those, those, um, those vulnerabilities. I don't have to take my service down. I can run and still be protected. And so that, that extra level, that ability to kind of peek into things and also go pod to pod, you know, not just same space level is one of the key differences. So I talk about the evolution or how we're evolving with, um, with the security. Like we've grown a lot, we've got a lot more coming. >>So let's talk about that a lot more coming what's in the pipeline for SUSE. >>Well, probably before I get to that, we just announced new vector five. So maybe I can catch us up on what was released last week. Uh, and then we can talk a little bit about going, going forward. So new vector five, introduce something called um, well, several things, but one of the things I can talk in more detail about is something called zero drift. So I've been talking about the network security, but we also have run time security, right? So any, any container that's running within your environment has processes that are running that container. What we can do is actually comes back to that manageability and configuration. We can look at the root level of trust of any process that's running. And as long as it has an inheritance, we can let that process run without any extra configuration. If it doesn't have a root level of trust, like it didn't spawn from whatever the, a knit, um, function was in that container. We're not gonna let it run. Uh, so the, the configuration that you have to put in there is, is a lot simpler. Um, so that's something that's in, in new vector five, um, the web application firewall. So this layer seven security inspection has gotten a lot more granular now. So it's that pod Topo security, um, both for ingress egress and internal on the cluster. Right. >>So before we get to what's in the pipeline, one question around new vector, how is that consumed and deployed? 
>>How is new vector consumed, >>Deployed? And yeah, >>Yeah, yeah. So, uh, again with new vector five and, and also rancher 2 65, which just were released, there's actually some nice integration between them. So if I'm a rancher customer and I'm using 2 65, I can actually deploy that new vector with a couple clicks of the button in our, uh, in our marketplace. And we're actually tied into our role-based access control. So an administrator who has that has the rights can just click they're now in a new vector interface and they can start setting those policies and deploying those things out very easily. Of course, if you aren't using, uh, rancher, you're using some other, uh, container management platform, new vector still works. Awesome. You can deploy it there still in a few clicks. Um, you're just gonna get into, you have to log into your new vector, uh, interface and, and use it from there. >>So that's how it's deployed. It's, it's very, it's very simple to use. Um, I think what's actually really exciting about that too, is we've opensourced it? Um, so it's available for anyone to go download and try, and I would encourage people to give it a go. Uh, and I think there's some compelling reasons to do that now. Right? So we have pause security policies, you know, depreciated and going away, um, pretty soon in, in Kubernetes. And so there's a few things you might look at to make sure you're still able to run a secure environment within Kubernetes. So I think it's a great time to look at what's coming next, uh, for your security within your Kubernetes. >>So Paul, we appreciate chief stopping by from ity of Spain, from Spain, I'm Keith Townsend, along with en Rico Sinte. Thank you. And you're watching the, the leader in high tech coverage.

Published Date : May 19 2022


Greg Muscarella, SUSE | KubeCon + CloudNativeCon Europe 2022


 

>>theCUBE presents KubeCon + CloudNativeCon Europe 2022, brought to you by the Cloud Native Computing Foundation.

>>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm your host, Keith Townsend, alongside a new host: Enrico Signoretti, senior editor, I'm sorry, senior IT analyst at GigaOm. Enrico, welcome to the program. >>Thank you very much, and thank you for having me. It's exciting. >>So, high-level thoughts on KubeCon, first time in person again in a couple of years? >>Well, this is amazing for several reasons, and one of them is that I had the chance to meet with people like you again. We've met several times over the internet, over Zoom calls, and I started to hate those Zoom calls <laugh> because in the end they're very impersonal. Last night we were together, a group of friends and industry folks; it's just amazing. And apart from that, the event is really cool. There are a lot of in-person interviews: real people doing real stuff, not just impersonal calls where you don't even know if they're telling the truth. When you can look in their eyes and see what they're doing, I think that makes a difference. >>Speaking of real people, meeting people for the first time, new jobs, new roles: Greg Muscarella, general manager of enterprise container management at SUSE. Welcome to the show, and welcome back, CUBE alum. >>Thank you very much. It's awesome to be here, and it's awesome to be back in person. I completely agree with you: there's a certain fidelity to the conversation, and a certain ability to get to know people a lot more. So it's absolutely fantastic to be here. >>So Greg, tell us about your new role and what SUSE has going on at KubeCon. >>Sure. So I joined SUSE about three months ago to lead the Rancher business unit, right?
So that's our container management piece, and it's a fantastic time. Because if you look at the transition from virtual machines to containers and on to microservices, right alongside the transition from on-prem to cloud, this is a very exciting time to be in this industry, and Rancher has been setting the stage. And again, going back to being here: Rancher is all about the community, right? It's a very open, independent, community-driven product and project, so this is kind of like being back with our people, being able to reconnect. Doing it digitally is great, but being here changes the game for us; we feed off that community, we feed off the energy. And again, going back to the space and what's happening in it, it's a great time to be here. You've seen the transitions; we've seen just massive adoption of containers and Kubernetes overall, and Rancher has been right there with some amazing companies doing really interesting things that I'd never thought of before. So I'm still learning, but it's been great so far. >>Yeah. And when we talk about Kubernetes strategy today, we're talking about very broad strategies. Not just the data center or the cloud, with maybe smaller organizations adopting Kubernetes in the cloud, but large organizations thinking bigger and, more and more, about the edge. So what's your opinion on this expansion of Kubernetes toward the edge? >>I think you're exactly right, and that's actually what a lot of the meetings I've been having here are about: some of these interesting use cases. Whether it be the ones that are easy to understand in the telco space, right?
Especially with the adoption of 5G: you have all these base stations and new towers, and they have not only the core radio functions or network functions they're trying to run there, but other applications that want to run in that same environment. I spoke recently with some good friends at a major automotive manufacturer doing things in their factories that can't take the latency of being somewhere else. They have robots on the factory floor, and the latency they would experience if they tried to run things in the cloud meant the robot would have moved 10 centimeters by the time the signal got back. That may not seem like a lot to you, but if you're an employee there, a big 2,000-pound robot being 10 centimeters closer to you may not be what you really want. There's also a tremendous amount of activity happening on the retail side. It's amazing how people are deploying containers in retail outlets: fast food, say, predicting how many French fries you need to have going at this time of day with this sort of weather, so you can make sure those queues keep moving. It's really exciting and interesting to look at all the different applications. So yes, on the edge for sure, in the public cloud for sure, and in the data center. And what we're finding is that people want a common platform across all of those as well, for the management piece, but also for security and for policies around these things. It really is going everywhere. >>So talk to me about how we manage that. As we think about pushing stuff out of the data center and out of the cloud, closer to the edge, security and lifecycle management become top-of-mind challenges. How are Rancher and SUSE addressing that? >>Yeah, I think you're again spot on.
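The factory-robot point is simple arithmetic: the distance drifted is speed times round-trip latency. Here is a small sketch; the 1 m/s arm speed and the latency figures are illustrative assumptions, only the 10-centimeter result comes from the interview.

```python
# How far does a moving robot travel while a control signal makes a round
# trip to a remote control loop? distance = speed * round-trip time.

def drift_during_round_trip(speed_m_per_s: float, rtt_ms: float) -> float:
    """Distance in centimeters covered before the reply arrives."""
    # m/s * ms -> cm: speed * (rtt/1000) s * 100 cm/m == speed * rtt / 10
    return speed_m_per_s * rtt_ms / 10.0

# A 1 m/s robot arm with a 100 ms round trip to the cloud drifts 10 cm,
# the figure quoted above; a 1 ms on-floor edge cluster cuts that to 0.1 cm.
print(drift_during_round_trip(1.0, 100.0))  # 10.0
print(drift_during_round_trip(1.0, 1.0))    # 0.1
```

The point is not the exact numbers but that the latency budget, rather than bandwidth, is what forces this class of workload onto the edge.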
It starts off with, think of it as simple, but it's not simple: the provisioning piece. How do we just get it installed and running? Then, to what you just asked, the management piece of it: everything from your firmware to your operating system, to the Kubernetes cluster running on that, and then the workloads on top of that. So with Rancher, and with the rest of SUSE, we're actually tackling all those parts of the problem from bare metal on up. We have lots of ways of deploying that operating system, and we have operating systems that are optimized for the edge, very secure, with ephemeral container images you can build on top of. And then we have Rancher itself, which is not only managing your Kubernetes cluster but can actually start to manage the operating system components as well as the workload components.

All from your single interface. We mentioned policy and security, and we'll probably talk about it more in a little bit, but NeuVector, right? We acquired a company called NeuVector and just open sourced it here in January. That ability to run that level of security software everywhere, again, is really important. Whether I'm running it on whichever of my favorite public cloud providers' managed Kubernetes, or out at the edge, you still have to have security in there, and you want some consistency across that. If you have to have a different platform for each of your environments, that's just upping the complexity and the opportunity for error. So we really like to eliminate that and simplify our operators' and developers' lives as much as possible. >>Yeah.
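The "one platform for every environment" idea is essentially holding many sites to a single desired spec per layer (OS, cluster, agent). A minimal sketch of that idea follows; all site names and versions are invented for illustration, and this is not Rancher's actual data model.

```python
# Toy model of a single management plane: one desired spec per layer,
# checked against many sites. Names and versions are made up.

DESIRED = {"os": "edge-os-5.3", "kubernetes": "v1.24.1", "agent": "2.6.5"}

sites = {
    "factory-floor-7": {"os": "edge-os-5.3", "kubernetes": "v1.23.8", "agent": "2.6.5"},
    "retail-store-12": {"os": "edge-os-5.3", "kubernetes": "v1.24.1", "agent": "2.6.5"},
}

def out_of_spec_layers(site_state: dict) -> list:
    """Layers whose installed version differs from the one desired spec."""
    return [layer for layer, want in DESIRED.items() if site_state.get(layer) != want]

# One loop, one spec: the factory site needs a cluster upgrade, the retail
# site is already compliant.
for name, state in sites.items():
    print(name, out_of_spec_layers(state))
```

Driving every layer from one interface is what keeps hundreds of micro clusters from each drifting into a bespoke snowflake.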
From this point of view, are you implying that you are now matching, let's say, self-managed clusters at the very edge with added security? Because these are the two big problems lately: on one hand, having something autonomous that is somehow easier to manage, especially if you are deploying hundreds of these micro clusters; and on the other hand, you need policy-based security that is strong enough, because if you have these huge robots moving too close to you because somebody hacked the cluster that is managing them, that could be a huge problem. So are you approaching these kinds of problems? Is the technology you acquired ready to do this? >>Yeah, it really is. I mean, there's still a lot of innovation happening, don't get me wrong; we're going to see a lot more, not just from SUSE and Rancher but from the community. There's a lot happening there, but we've come a long way and we've solved a lot of problems. If I think about how you have this distributed environment, some of it comes down to not just all the different environments, but also the applications: with microservices, you have a very dynamic environment now just within your application space as well. So when we think about security, we really have to evolve from a fairly static policy, where you might be able to set an IP address and a port and some configuration on that, because, well, your workload is now dynamically moving. >>So not only do you have to have that security capability, the ability to look at a process or a network connection and stop it; you have to have the manageability. You can't expect an operator to go in and manually configure a YAML file, because things are changing too fast.
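The static-versus-dynamic point can be made concrete with a toy comparison. All names and addresses here are invented, and this is an illustration of the idea rather than NeuVector's engine: an IP:port rule silently stops matching the moment the scheduler moves a pod, while a rule keyed on workload labels and protocol keeps working.

```python
# Contrast a static IP:port firewall rule with a label-based policy.
# When a pod is rescheduled its IP changes; the static rule stops
# matching, while the label rule still applies. Names are invented.

static_rule = {"src_ip": "10.0.4.17", "dst_ip": "10.0.9.3", "port": 3306}
label_rule = {"src_label": "app=orders", "dst_label": "app=mysql", "proto": "mysql"}

def static_allows(conn: dict) -> bool:
    return (conn["src_ip"] == static_rule["src_ip"]
            and conn["dst_ip"] == static_rule["dst_ip"]
            and conn["port"] == static_rule["port"])

def label_allows(conn: dict) -> bool:
    return (conn["src_label"] == label_rule["src_label"]
            and conn["dst_label"] == label_rule["dst_label"]
            and conn["proto"] == label_rule["proto"])

before = {"src_ip": "10.0.4.17", "dst_ip": "10.0.9.3", "port": 3306,
          "src_label": "app=orders", "dst_label": "app=mysql", "proto": "mysql"}
# Same two workloads after the orders pod lands on another node:
after = dict(before, src_ip="10.0.7.42")

print(static_allows(before), label_allows(before))  # True True
print(static_allows(after), label_allows(after))    # False True
```

A declarative, label-keyed policy is what survives the "swarm of pods" without anyone hand-editing rules as things move.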
It needs to be that combination of convenient and easy to manage with full function and the ability to protect your resources. And I think that's really one of the key things NeuVector brings: because we have so much intelligence about what's going on there, the configuration is pretty high level, and then it just runs. It's made for this dynamic environment; it can actually protect your workloads wherever they go, pod to pod. And it's that combination, that manageability with that high functionality, that is making it so popular, and that brings that security to those edge locations or cloud locations or your data center. >>Mm-hmm. So one of the challenges you're touching on is this abstraction upon abstraction. When I ran my data center, I could say this IP address can't talk to that IP address on this port. Then I got next-generation firewalls where I could actually do some analysis. Where are you seeing the ball move when it comes to customers thinking about all these layers of abstraction? An IP address doesn't mean anything anymore in cloud native; yes, I need one, but I'm not protecting based on IP address. How are customers approaching security from the namespace perspective? >>Well, you're absolutely right. In fact, when you go to IPv6, I don't even recognize IP addresses anymore. <laugh> >>Yeah, they don't mean anything; they're just a bunch of numbers, alphanumerics, and colons. >>Right, I don't even know anymore. So it comes back to moving away from the static world; it's the pets-versus-cattle thing. You go from this static thing that I can sort of know and love and touch and protect, to this almost living, breathing thing that is moving all around: a swarm of pods all over the place. That's what Kubernetes has done for the workload side of it: how do you get away from that pet to a declarative approach to identifying your workload, the components of that workload, and what it should be doing? And if we go further on the security side, actually even namespace isn't good enough. If we want to get to zero trust, it's: just because you're running in my namespace doesn't mean I trust you. And that's one of the really cool things about NeuVector, because we're looking at protocol-level stuff within the network. It's pod to pod; we can look at every single connection, and at the protocol layer. So if you say you're a MySQL database and I have a MySQL request going into it, I can confirm that it's actually the MySQL protocol being spoken and that it's well formed. And I know that this endpoint, which is a container image or a pod name or a label, even if it's in the same namespace, is allowed to talk to this other pod running in my namespace, and to use this protocol. >>Right. So I can allow or deny, and if I allow, I can look into the content of that request and make sure it's well formed. I'll give you an example: do you guys remember the Log4j challenges from not too long ago? It was a huge deal. If I'm doing something that's IP- and port-based and namespace-based, what are my protections? What are my options for something that's got Log4j embedded in it? I either run the risk of it running, or I shut it down. Neither of those is very good. What we can do, because again we're at the protocol layer, is identify any Log4j traffic and look at whether it's well formed or malicious, and if it's malicious, I can block it.
If it's well formed, I can let it go through. So I can actually watch for those vulnerabilities without taking my service down; I can keep running and still be protected. That extra level, the ability to peek into things and to go pod to pod rather than just namespace level, is one of the key differences. So when I talk about the evolution, how we're evolving the security: we've grown a lot, and we've got a lot more coming. >>So let's talk about that "a lot more coming." What's in the pipeline for SUSE? >>Well, before I get to that, we just announced NeuVector 5, so maybe I can catch us up on what was released last week, and then we can talk a little bit about going forward. NeuVector 5 introduced several things, but one I can talk about in more detail is something called zero drift. I've been talking about network security, but we also have runtime security: any container running in your environment has processes running in that container. What we can do, and this comes back to that manageability and configuration, is look at the root level of trust of any process that's running. As long as it has that inheritance, we can let the process run without any extra configuration. If it doesn't have a root level of trust, meaning it didn't spawn from whatever the init process was in that container, we're not going to let it run. So the configuration you have to put in there is a lot simpler. That's in NeuVector 5, along with the web application firewall: that layer-7 security inspection has gotten a lot more granular, with pod-to-pod security for ingress, egress, and internal traffic on the cluster. Right. >>So before we get to what's in the pipeline, one question around NeuVector: how is it consumed and deployed?
>>How is NeuVector consumed and deployed? >>Yeah. So again, with NeuVector 5 and also Rancher 2.6.5, which were just released, there's actually some nice integration between them. If I'm a Rancher customer and I'm using 2.6.5, I can deploy NeuVector with a couple of clicks of a button in our marketplace, and we're actually tied into our role-based access control. So an administrator who has the rights can just click; they're now in a NeuVector interface, and they can start setting those policies and deploying things out very easily. Of course, if you aren't using Rancher and you're using some other container management platform, NeuVector still works; you can deploy it there in a few clicks as well. You'll just have to log into your NeuVector interface and use it from there.

So that's how it's deployed, and it's very simple to use. I think what's actually really exciting, too, is that we've open sourced it. It's available for anyone to go download and try, and I would encourage people to give it a go. And I think there are some compelling reasons to do that now: pod security policies are deprecated and going away pretty soon in Kubernetes, so there are a few things you might look at to make sure you're still able to run a secure environment within Kubernetes. I think it's a great time to look at what's coming next for your security within Kubernetes.

>>So, Greg, we appreciate you stopping by. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti. Thank you, and you're watching theCUBE, the leader in high tech coverage.
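The "zero drift" idea described in the interview, that a process may run only if it was spawned, directly or indirectly, by the container's init process, can be sketched in a few lines. This is an illustrative toy that walks a parent-PID table, not NeuVector's actual enforcement mechanism, which observes real kernel process events.

```python
# Toy "zero drift" check: a process is allowed only if its ancestry chain
# reaches PID 1 (the container's init process). Anything injected into a
# running container has no such lineage and is denied with no extra config.

def descends_from_init(pid: int, parent_of: dict) -> bool:
    """True if pid's parent chain reaches PID 1 without breaking or looping."""
    seen = set()
    while pid != 1:
        if pid in seen or pid not in parent_of:
            return False  # orphan or cycle: not spawned by init
        seen.add(pid)
        pid = parent_of[pid]
    return True

# PID 1 forked 20 (the app), which forked 33 (a worker). PID 99 was
# injected (say, via an exec into the container) and has no lineage to init.
parents = {20: 1, 33: 20, 99: 404}
print(descends_from_init(33, parents))  # True
print(descends_from_init(99, parents))  # False
```

Because the rule is "inherits from init or not," the operator writes almost no configuration, which is exactly the manageability point made above.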

Published Date : May 18 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Keith Townson | PERSON | 0.99+
SUSE | ORGANIZATION | 0.99+
Greg Muscarella | PERSON | 0.99+
Paul | PERSON | 0.99+
10 centimeters | QUANTITY | 0.99+
Keith Townsend | PERSON | 0.99+
January | DATE | 0.99+
Greg Moscarella | PERSON | 0.99+
last week | DATE | 0.99+
Spain | LOCATION | 0.99+
Greg | PERSON | 0.99+
2000 pound | QUANTITY | 0.99+
one question | QUANTITY | 0.98+
Kubernetes | TITLE | 0.98+
one | QUANTITY | 0.98+
both | QUANTITY | 0.98+
Valencia Spain | LOCATION | 0.97+
today | DATE | 0.97+
Kubecon | ORGANIZATION | 0.97+
first time | QUANTITY | 0.95+
single interface | QUANTITY | 0.95+
two big problems | QUANTITY | 0.95+
each | QUANTITY | 0.94+
Coon | ORGANIZATION | 0.94+
ingress | ORGANIZATION | 0.94+
zero | QUANTITY | 0.9+
three months ago | DATE | 0.9+
Cloudnativecon | ORGANIZATION | 0.88+
22 | EVENT | 0.86+
SUSE | TITLE | 0.86+
five | TITLE | 0.85+
I P six | OTHER | 0.84+
Europe | LOCATION | 0.81+
giong Enrique | PERSON | 0.81+
log four | OTHER | 0.8+
2 65 | COMMERCIAL_ITEM | 0.79+
2022 | DATE | 0.78+
vector five | TITLE | 0.77+
couple years | QUANTITY | 0.75+
rancher | ORGANIZATION | 0.73+
French | OTHER | 0.73+
cloud native computing | ORGANIZATION | 0.73+
Kubernetes | ORGANIZATION | 0.72+
last night | DATE | 0.71+
single connection | QUANTITY | 0.71+
one of the reasons | QUANTITY | 0.69+
Rico | ORGANIZATION | 0.68+
Rico Sinte | PERSON | 0.67+
SA | ORGANIZATION | 0.66+
about | DATE | 0.66+
layer seven | OTHER | 0.65+
vector | OTHER | 0.64+
5g | QUANTITY | 0.64+
65 | COMMERCIAL_ITEM | 0.62+
cloud native con | ORGANIZATION | 0.55+
telco | ORGANIZATION | 0.55+
2 | TITLE | 0.54+
SA | LOCATION | 0.53+
egress | ORGANIZATION | 0.52+
hundreds | QUANTITY | 0.51+
CU con | EVENT | 0.46+
KU con. | ORGANIZATION | 0.44+
vector | COMMERCIAL_ITEM | 0.39+
20 | EVENT | 0.31+

Micah Coletti & Venkat Ramakrishnan | KubeCon + CloudNativeCon NA 2021


 

>> Welcome back to Los Angeles. TheCUBE is live. I can't say that enough: theCUBE is live. We're at KubeCon + CloudNativeCon '21. We've been here all day yesterday, and today and tomorrow, talking with lots of guests, really uncovering what's going on in the world of Kubernetes. Lisa Martin here with Dave Nicholson. We've got some folks. Next we're going to be talking about a customer use case, which is always one of my favorite things to talk about. Please welcome Micah Coletti, the principal platform engineer at CHG Healthcare, and Venkat Ramakrishnan, VP of products from Portworx by Pure Storage. Guys, welcome to the program. >> Thank you. >> Happy to be here. >> Yeah. So Micah, first of all, let's go ahead and start with you. Give the audience an overview of CHG Healthcare. >> Yeah. So CHG Healthcare, we're a staffing company; we do locum tenens. So our clients are doctors and hospitals: we help staff hospitals with temporary doctors, or even permanent placement. So we deal with a lot of doctors, a lot of nursing, and we're a combination of multiple companies; CHG is the parent. And yeah, we're known in the industry as one of the leaders in this field, providing hospitals with high-quality doctors and nurses. You know, our customer service is, like, number one, and one of the things our CEO is really focused on is, now, how do we make that more digital? How do we provide that same level of quality of service, but a digital experience that's just as rich? >> I can imagine there was a massive need for that in the last 18 months alone. >> COVID definitely raised that awareness for us, and the importance of that digital experience, and that we need to be out there in the digital market. >> Absolutely. So you're a customer of Portworx by Pure Storage; we're going to get into that. But then, Venkat, talk to us about what's going on. The acquisition of Portworx by Pure Storage was about a year ago.
Talk to us, as VP of products, about what's going on. >> Yeah, I mean, you know, first of all, I could not say enough how much of a great fit it is for Portworx to be part of Pure Storage. Pure itself is a very fast-moving, large startup that's a dominant leader in the flash and data center space, and, you know, Pure recognizes the fact that Kubernetes is the new operating system of the cloud; it's kind of virtualizing the cloud itself. And there's a big, burgeoning need for data management in Kubernetes, and for how you can orchestrate workloads between your on-prem data centers and the cloud and back. So Portworx fits right into Pure's complete vision of data management for our customers, and it's been phenomenal. Our business has grown as part of being part of Pure, and, you know, we're looking at launching some new products as well. It's all exciting times. >> So you must have been pretty delighted to be acquired as a startup by, essentially, a startup, because although Pure has reached significant milestones in the storage business and is a leader in flash storage, that startup mindset is absolutely unique. That's not the same as being acquired by a company that's been around for a hundred years seeking to revitalize itself. >> Absolutely. >> Can you talk a little bit about that aspect? >> Yeah. So I think, you know, Pure's culture is highly innovation-driven, and it's a very open, flat culture, right? I mean, everybody in Pure is accessible; you can easily have a conversation with folks, and everybody has this learning mindset. And Portworx is, and has always been, the same way. Right?
So when you put these teams together, we can create wonders. I mean, right after the acquisition, just within a few months, we announced an integrated solution where Portworx orchestrates volumes and file shares in Pure flash products and then delivers it as an integrated solution for our customers. And Pure has a phenomenal cloud-based monitoring and management system called Pure1 that we integrated well into. Now we're bringing the power of all of the observability that Pure's customers are used to, to all of the Portworx customers, and I've been super happy, you know, delivering that capability to our customers, and our customers are delighted. Now they can have a complete view all the way from the Kubernetes app to the flash, and I don't think any one company on the planet can even claim they can do that. >> I think it's fair to acknowledge that Pure1 was observability before observability was a word that everyone used regularly. >> Yep. >> Sounds very interesting. >> Micah, talk to us about, obviously, you are a customer; CHG is a customer of Portworx, now Portworx by Pure Storage. Talk to us about the use case. What was the compelling event, from a storage perspective, that led you to Portworx in the first place? >> So our CEO basically came to us with the vision: we need to have a digital presence, and this was even before COVID. So they brought me on board, and my manager and I basically had this task: how are we going to get out into the cloud? How are we going to make that happen? And we chose to follow very much a cloud-native strategy, and the platform of choice, I mean, it just made sense, was Kubernetes. And so when we were looking at Kubernetes, we were starting to figure out how we were doing it. We knew that data was going to be a big factor, you know, being a data provider. We're very much focused on event-driven; we're really pushing to an event-driven architecture.
So we leverage Kafka on top of Kubernetes, but at the time we were actually leveraging Kafka with MSK out in AWS, and that was just a huge cost to us. So I came on board, I had experience with Portworx at a prior company before that, and I basically said, we need to figure out a great storage overlay. And the only way to do that is we've got to have high-performance storage, it's got to be secure, and we've got to be able to back up and recover that storage. And Portworx was the right match. That allowed us to have a very smooth transition off of MSK onto Kubernetes, saving us a significant amount of money per month, and to just leverage our already existing hardware, our existing compute and memory, and move right to Portworx. >> Leveraging your existing investments. >> Exactly. >> Which is key. >> Very key, very key. >> So how common are the challenges that, when you guys came together with CHG, how common are the challenges? >> That's actually a great question. You know, I'll tell you, the challenges that Micah and his team are running into are what we see a lot in the industry, where people pay a ton of money, you know, to other vendors, or, in some cases, use some cloud-native services, but they want to have control over the data. They want to control the cost, they want higher performance, and, you know, there are also governance and regulatory things that they need to control better. So they want to bring these services in and have more control over them. Right? Now, we work very well with all of our partners, including the cloud providers, as well as, you know, the on-prem and server vendors and everybody, but different customers have different kinds of needs, and Portworx gives them that flexibility.
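A minimal sketch of the kind of storage underpinning Micah describes for Kafka brokers on Kubernetes, using Portworx's documented in-tree provisioner. The class and claim names, the size, and the parameter choices here are assumptions for illustration, not CHG's actual configuration:

```yaml
# Illustrative StorageClass for Kafka broker volumes backed by Portworx.
# "repl" controls how many replicas of each volume Portworx keeps across
# nodes, which is what makes pod rescheduling and cluster migration smooth.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-kafka             # hypothetical name
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
  io_profile: "db"           # assumption: tuned for write-heavy brokers
---
# A broker's claim simply references the class above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-broker-0-data  # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: px-kafka
  resources:
    requests:
      storage: 100Gi
```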
If you are a customer who wants, you know, a lot of control over your applications, the performance, the latency, and wants to control costs very well and leverage your existing investments, Portworx can deliver that for you in your data center. Now, you can integrate that with Pure flash and you get a complete solution. Or you want to run it in the cloud, and you still want to leverage the agility and scale of the cloud: Portworx delivers a solution for you there as well. So it not only protects their investment, it future-proofs their architecture; you get to future-proof your architecture completely. So if you want to tier to the cloud or burst to the cloud, you have a great solution that you can continue to leverage. >> Micah, when you hear future-proof, and I'm a marketer, so I always go, I love to know what it means to different people: what does that mean to you in your environment? >> My environment. So future-proof means, like, one of the things we've been addressing lately that's just a real big challenge, and I'm sure it's a challenge in the industry, especially with Kubernetes, is upgrading our clusters: the ability to actually keep a consistent pace with how fast Kubernetes is growing. You know, we leverage EKS, so it's like 1.21 or 1.22 now, and that effort to upgrade a cluster can be a daunting one. With Portworx, we were actually able to get to where we could spin up a brand-new cluster, and with Portworx shift all our applications, services, and data, migrate completely over (Portworx handles all of that for us), and stand up that new cluster in less than a day.
And that effort, I mean, it would take us a week, two weeks to do. So it's not even just the man-hours and time spent, but the reliability of being able to do that, and the cost. You know, instead of standing up a new cluster and configuring it and doing all that and spending all that time, we moved to what we call a blue-green cut-over strategy, and Portworx is an essential piece of that. >> So, Venkat, is it fair to say that there are a variety of ways that people approach Portworx from a value perspective? I know that one area you are particularly good in is backups in this environment, but then you get data management, and there's a third kind of vector there. What is the third vector? >> It's all of the data services. >> Data services. >> Yeah. Like, for example, database as a service on any Kubernetes cluster, be it in your cloud or your on-prem data centers. >> Which databases are you talking about? >> I mean, we're talking about anything from Redis, Kafka, Postgres, MySQL, Consul; we're supporting them. We just announced something called the Portworx Data Services offering that essentially delivers all these databases as a service on any Kubernetes cluster that a customer can point to, and lets them get the automated management of the database from day one to day three, the entire lifecycle, through the regular Kubernetes kubectl experience, through APIs and SDKs, and a nice, slick UI with role-based access control and all of that, so that they can completely control their data and their applications through it. And, you know, that's the third vector of Portworx's offerings. >> Micah, a question for you. So Portworx has been a part of Pure Storage. You've known it for several years, since before you were at CHG; you brought it to CHG. You now know it a year into being acquired by a fast-paced startup.
Talk to me about the relationship and some of the benefits that you're getting with Portworx as a part of Pure Storage. >> Well, I mean, you know, when I heard about the acquisition, my first thing was, I was a little bit concerned: is that relationship going to change? When we were looking at adopting Portworx, one thing I would tell my management is that Portworx is not just a vendor that wants to throw a solution at you and provide some capability. They're a partner. They want to partner with you on your success in this whole cloud-native journey, to provide this rich digital experience not only for our platform engineering team and our dev teams, but also to really accelerate the development of our services, so we can provide that digital portal for our end users. And that didn't change. If anything, the acquisition accelerated it; that relationship did not change. You know, I came to Venkat with an issue we were dealing with, and he immediately got someone on a phone call with me. So that has not changed. So it's really exciting to see that, now that they've been acquired, they still are very much invested in the success of their customers and making sure we're successful. You know, I was worried I was going to have to go through a whole different support process and it was going to go into a black hole. Didn't happen. They are still very much involved with their customers. >> It sounds, Venkat, similar to what you talked about with the cultural alignment. I've known Pure for a long time, and they're very customer-centric; it sounds like one of the areas in which there was a very strong alignment with Portworx. >> Absolutely. Portworx has always taken pride in being a customer-first company. Our founders are heavily customer-focused. You know, they have always aligned our Portworx business to our customers' needs.
Now, Pure is a company that's maniacally focused on customers, right? I mean, that's all Pure's founders and everybody there care about. And so, you know, bringing these companies together and being part of the Pure team, I see how synergistic it is. And, you know, that has enabled us to serve our customers' customers even better than before. >> So I'm curious about the two of you personally, in terms of your histories. I'm going to assume that you didn't both just bounce out of high school into the world of Kubernetes, right? So, like Lisa and me, you're spanning the generations between the world of, say, virtualization based on x86 architecture, where you don't have microservices, you have a full-blown operating system that you're working with. Kind of talk about, you know, Micah, with you first, what that's been like, navigating that change. We were in the midst of that. Do you have advice for others that are navigating that change? >> Don't be afraid of it. You know, I call it moving from pets, where we still have cats and dogs, they have a name, the VMs, whether they're physical boxes or VMs, to where it's more like, as they say, cattle. You know, it's like we don't own the OS, and don't be afraid of that, because change is really good. The ability for me to not have to worry about patching an operating system is huge, you know, where I can rely on someone like EKS for the version, and if a CVE comes out, they let me know and I go and use their tools to upgrade. So I don't have to literally worry about owning that OS, and containers are the same thing. You know, it's all about being fault-tolerant, right?
And being able to embrace change, where, you know, you can actually roll out a new version of a container, a base image, with a lot of ease, without having to go and patch a bunch of servers. I mean, patch night was hell, and sorry if I can say that, but it was a nightmare, you know. But this whole world has just been a game changer with that. >> So, Venkat, from your perspective, you were coming at it going into a startup, looking at the landscape and the future and seeing opportunity. What's that been like for you? I guess the question for you is more, something Lisa and I talk about, this concept of peak Kubernetes: where are we in the wave? Is this just the beginning? Are we in the thick of it? >> I think I would say we're transitioning from the early-adopter to the early-majority phase, in the whole crossing-the-chasm analogy, right? So I would say we're still in the early stages of this big wave that's going to transform how infrastructure is built, and how apps are built, managed, and run in production. I think some of the key pieces are falling into place and maturing. There are some other pieces, like observability and security, and, you know, edge use cases, that are going to get a lot more mature, and you'll see that the cloud as we know it today, and the apps as we know them today, are going to be radically different. And, you know, if you're not building your apps and your business on this modern platform, on this modern infrastructure, you're going to be left behind. You know, my wife's birthday was a couple of days ago. I was telling the story to a couple of friends: I used another flower-delivery website, and they missed delivering the flowers on the same day, right? So they told me all kinds of excuses.
Then I just went and looked up, you know, DoorDash, which delivers, you know, your food, but there's also flower delivery on DoorDash, so I DoorDashed flowers to her, and I could track the flower delivery all the way. She did not need them, but my kids love the chocolates, though. Right. So, you know, the case in point is that you cannot be building a modern business without leveraging the modern tool chain, and how the business is delivered is going to be changing dramatically. And if you don't deliver those kinds of customer experiences, you're not going to be successful in business. And Kubernetes is the fundamental technology that enables this; containers are a fundamental piece of technology that enables building new businesses and, you know, modernizing existing businesses. And with 5G, there are going to be new innovations unleashed, and again, Kubernetes and containers enable us to leverage those. And so we're still scratching the surface on this. It's big now; it's going to be much, much bigger as we go into the next couple of years. >> Speaking of scratching the surface, Micah, take us out in the last 30 seconds or so with where CHG Healthcare is on its digital transformation. How is Portworx facilitating that? >> So we're right in the thick of it. I mean, we still have what we call the legacy; we're working on getting those moved over. But we're really moving forward to provide that rich experience, especially with event-driven platforms like Kafka and Kubernetes, and partnering with Portworx is one of the key things for us with that, and AWS along with that. And I remember I heard a talk, and I can't remember her name, but she talked about how Kubernetes is sort of like the 56K modem, right? You're hearing it and seeing it, but it's got to get to the point where it's just there.
It's just the high-speed internet, and Kelsey Hightower, that's great. But yeah, I really liked that, because it's true, you know, and that's where we are. We're all in that transition; we're still early, it's still the 56K days. So you still want to hear the noise, you still want to do kubectl, you want to learn it the hard way and do all that fun stuff. But eventually it's going to be where it's just there, and it's running everything, like 5G. I mean, stripped down, doing MicroK8s, things like that. You know, we're going to see it in a lot of other areas, on the periphery, and it's really going to accelerate the industry in compute and memory and storage. >> Yeah, a lot of acceleration. Guys, thank you. This has been a really interesting session; I always love digging into customer use cases, and how CHG is really driving its evolution with Portworx. Venkat, thanks for sharing with us what's going on with Portworx a year after the acquisition. It sounds like all good stuff. >> Thank you. Thanks for having us. >> Pleasure. All right. For Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from Los Angeles. This is our coverage of KubeCon + CloudNativeCon '21.
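The blue-green cluster cut-over strategy Micah describes above (stand up a new cluster, migrate apps and data with Portworx, verify, then flip traffic) can be sketched as a toy sequence. The `Cluster` class and helper functions are illustrative stand-ins, not a real Kubernetes or Portworx API:

```python
# Toy sketch of a blue-green cluster cut-over: stand up the "green" cluster
# on the new Kubernetes version, migrate workloads and their data, verify,
# and only then point traffic at it.

class Cluster:
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.apps = {}          # app name -> the data it serves

    def deploy(self, app, data):
        self.apps[app] = data

def migrate(blue, green, apps):
    """Copy each app and its data from the old (blue) cluster to the new
    (green) one; in practice Portworx would move the volumes underneath."""
    for app in apps:
        green.deploy(app, blue.apps[app])

def cut_over(router, green):
    """Flip traffic to the green cluster only after migration is verified."""
    router["active"] = green.name
    return router

blue = Cluster("blue", "1.21")
green = Cluster("green", "1.22")
blue.deploy("kafka", data="events-snapshot")
blue.deploy("api", data="state")

migrate(blue, green, apps=["kafka", "api"])
assert green.apps == blue.apps        # verify before flipping traffic
router = cut_over({"active": "blue"}, green)
print(router["active"])  # green
```

The key property the sketch captures is that the old cluster stays untouched and serving until the verification step passes, which is what makes the cut-over low-risk compared to upgrading in place.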

Published Date : Oct 29 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Nicholson | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Micah | PERSON | 0.99+
Micah Coletti | PERSON | 0.99+
Lisa | PERSON | 0.99+
two weeks | QUANTITY | 0.99+
Portworx | ORGANIZATION | 0.99+
CHG | ORGANIZATION | 0.99+
Los Angeles | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
Los Angeles | LOCATION | 0.99+
a week | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
Venkat | PERSON | 0.99+
less than a day | QUANTITY | 0.99+
Venkat Ramakrishnan | PERSON | 0.99+
today | DATE | 0.99+
two | QUANTITY | 0.99+
CHG healthcare | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
yesterday | DATE | 0.99+
one | QUANTITY | 0.99+
Kubernetes | TITLE | 0.98+
Pure Storage | ORGANIZATION | 0.98+
first thing | QUANTITY | 0.98+
first company | QUANTITY | 0.98+
both | QUANTITY | 0.98+
Kafka | TITLE | 0.98+
CloudNativeCon | EVENT | 0.98+
Venkat | ORGANIZATION | 0.97+
Kate | PERSON | 0.97+
KubeCon | EVENT | 0.97+
pure | ORGANIZATION | 0.97+
a year | QUANTITY | 0.97+
50 | QUANTITY | 0.96+
day three | QUANTITY | 0.95+
third vector | QUANTITY | 0.95+
couple | QUANTITY | 0.93+
a hundred years | QUANTITY | 0.93+
portworx | ORGANIZATION | 0.93+
MSK | ORGANIZATION | 0.92+
COVID | ORGANIZATION | 0.92+
day one | QUANTITY | 0.91+

Micah Coletti & Venkat Ramakrishnan | KubeCon + CloudNativeCon NA 2021


 

>>Mhm Welcome back to Los Angeles. The Cubans live, I can't say that enough. The Cubans live. We're at cu con cloud Native Con 21. We've been here all day yesterday and today and tomorrow talking with lots of gas. Really uncovering what's going on in the world of kubernetes, lisa martin here with Dave Nicholson. We've got some folks. Next we're gonna be talking about a customer use case, which is always one of my favorite things to talk about. Please welcome Michael Coletti, the principal platform engineer at CHG Healthcare and then cat from a christian VP of products from port works by pure storage. Guys, welcome to the program, Thank you. Happy to be here. Yeah. So Michael, first of all, let's go ahead and start with you, give the audience an overview of CHG healthcare. >>Yeah, so CHG Healthcare were a staffing company so we sure like a locum pen and so our clients are doctors and hospitals, so we help staff hospitals with temporary doctors or even permanent placing. So we deal with a lot of doctors, a lot of nursing and we're were a combination of multiple companies to see if she is the parents. So and uh yeah, we're known in the industry is one of the leaders in this, this field and providing uh hospitals with high quality uh doctors and nurses and uh you know, our customer services like number one and one of these are Ceos really focused on is now how do we make that more digital, how we provide that same level of quality of service, but a digital experience as rich for >>I can imagine there was a massive need for that in the last 18 months alone. >>Covid definitely really raised that awareness out for us and the importance of that digital experience and that we need to be out there in the digital market. >>Absolutely. So your customer report works by pure storage, we're gonna get into that. But then can talk to us about what's going on. The acquisition of port works by peer storage was about a year ago I talked to us about your VP of product, what's going on? 
>>Yeah, I mean, you know, first of all, I think I could not say how much of a great fit for a port works to be part of your storage. It's uh uh Pure itself is a very fast moving large start up that's a dominant leader in a flash and data center space. And you know, pure recognizes the fact that Cuban it is is the new operating system of the cloud is now how you know, it's kind of virtualizing the cloud itself and there is a, you know, a big burgeoning need for data management in communities and how you can kind of orchestrate work lords between your on prem data centers in the cloud and back. So port books fits right into the story as complete vision of data management for our customers and uh spend phenomenal or business has grown as part of being part of Pure and uh you know, we're looking at uh launching some new products as well and it's all exciting times. >>So you must have been pretty delighted to be acquired as a startup by essentially a startup because because although pure has reached significant milestones in the storage business and is a leader in flash storage still, that, that startup mindset is there, that's unique, that's not, that's not the same as being acquired by a company that's been around for 100 years seeking to revitalize >>itself. Can >>you talk a little bit about that >>aspect? So I think it will uh, Purest culture is highly innovation driven and it's a very open flat culture. Right? I mean everybody impure is accessible, it can easily have a conversation with folks and everybody has his learning mindset and Port works is and has always been in the same way. Right? So when you put these teams together, if we can create wonders, I mean we, right after that position, just within a few months we announced an integrated solution that Port works orchestrates volumes and she file shares in Pure flash products and then delivers as an integrated solution for our customers. 
And Pure has a phenomenal uh, cloud based monitoring and management system called Pure one that we integrated well into. Now we're bringing the power of all of the observe ability that Purest customers are used to for all of the partners customers and having super happy, you know, delivering that capability to our customers and our customers are delighted now they can have a complete view all the way from community is an >>app to the >>flash and I don't think any one company on the planet can even climb, they can do that. >>I think, I think it's fair to acknowledge that pure one was observe ability before observe ability was a word. Exactly one used regularly. So that's very interesting. >>I could talk to us about obviously you are a customer CHD as a customer of court works now Port works by peer storage. Talk to us about the use case, what what was the compelling? It was their compelling event and from a storage perspective that that led you to Port works in the >>first so we be, they began this our Ceo basically in the vision, we we need to have a digital presence, we need and hazards and this was even before Covid, so they brought me on board and my my manager read uh glass or he we basically had this task to how are we going to get out into the cloud, how we're going to make that happen And we we chose to follow very much cloud native strategy and the platform of choice. I mean it just made sense with kubernetes and so when we were looking at kubernetes, we're starting to figure out how we're doing, we knew that data is going to be a big factor, you know, um being to provide data, we're very much focused on an event driven, were really pushing to event driven architecture. So we leverage Kafka on top of kubernetes, but at the time we were actually leveraging Kafka with M S K down out in a W S and that was just a huge cost to us. 
So I came on board, I had experienced with poor works prior company before that and I basically said we need to figure out a great storage away overlay. And the only way to do is we gotta have high performance storage, we've got to have secure, we gotta be able to back up and recover that storage and the poor works was the right match and that allowed us to have a very smooth transition off of M S K onto kubernetes, saving us, it's a significant amount of money per month and just leverage that already existing hardware that are existing, compute memory and just in the and move right to port works, >>leveraging your existing investments. >>Exactly which is key. Very, very key. So, >>so been kept, how common are the challenges that when you guys came together with the HD, how common are the challenges? It's actually, >>that's a great question, you know, this is, I'll tell you the challenges that Michael and his team are running into is what we see a lot in the, in the industry where people pay a ton of money, you know, to, you know, to to other vendors or especially in some cases use some cloud native services, but they want to have control over the data. They want to control the cost and they want higher performance and they want to have, you know, there's also governance and regulatory things that they need to control better. So they want to kind of bring these services and have more control over them. Right? 
So we work very well with all of our partners, including the cloud providers as well as, you know, several other vendors — but different customers have different kinds of needs, and Portworx gives them the flexibility. If you are a customer who wants, you know, a lot of control over your applications, the performance, the latency, and wants to control costs very well while leveraging existing investments, Portworx can deliver that for you in your data center. Right now you can integrate it with Pure flash and you get a complete solution. Or you want to run it in the cloud and still leverage the agility and scale of the cloud — Portworx delivers a solution for you there as well. So it not only protects their investment, it future-proofs their architecture — you get future-proofing of your architecture completely. So if you want to tier to the cloud or burst to the cloud, you have a great solution that you can continue to leverage. >>When I hear "future-proof" — and I'm a marketer, so I always love to know what it means to different people — what does that mean to you, in your environment? >>My environment? So future-proof means — like, one of the things we've been addressing lately, and it's just a real big challenge, and I'm sure it's a challenge across the industry, especially with Kubernetes — is upgrading our clusters: the ability to actually keep a consistent pace with how fast Kubernetes is moving. We leverage EKS, so it's at like 1.21 or 1.22 now. That effort to upgrade a cluster can be a daunting one. With Portworx, we were actually able to get to where we could spin up a brand new cluster and, with Portworx, shift all our application services — the data migrated completely over, Portworx handles all of that for us — and stand up that new cluster in less than a day.
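The blue-green cluster cutover Michael goes on to describe boils down to: stand up the new ("green") cluster, migrate apps and data with Portworx, verify health, then flip traffic. A minimal, hypothetical sketch of that decision logic follows — the health-check keys are invented stand-ins; a real pipeline would gate on Portworx migration status and Kubernetes readiness probes, and shift traffic via DNS or a load balancer:

```python
# Hypothetical blue-green cutover gate: traffic only moves to the candidate
# ("green") cluster once every required check passes; otherwise the active
# ("blue") cluster keeps serving and nothing is torn down.

def cut_over(active: str, candidate: str, health: dict) -> str:
    """Return which cluster should receive traffic after the checks run."""
    required = ("apps_ready", "data_migrated")
    if all(health.get(check, False) for check in required):
        return candidate  # green is healthy: shift traffic to it
    return active         # a check failed or is missing: stay on blue

# Example: data migration still in flight, so traffic stays on blue.
decision = cut_over("blue", "green", {"apps_ready": True, "data_migrated": False})
```

The design choice here is that the blue cluster is never destroyed before the green one proves itself, which is what makes the upgrade reliable rather than just fast.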
And that effort would otherwise take us a week, two weeks to do — so it's not even the man-hours, the time spent there, but just the reliability of being able to do that, and the cost. You know, instead of standing up a new cluster and configuring it and spending all that time, we moved to what we call a blue-green cutover strategy, and Portworx is an essential piece of that. >>So is it fair to say that there are a variety of ways that people approach Portworx from a value perspective? I know that one area you are particularly good in is backups in this environment, but then you get data management, and there's a third kind of vector there. What is the third vector? >>Yeah, it's all of the data services — data services like, for example, database as a service on any Kubernetes cluster, be it in your cloud or your on-prem data centers. >>What kind of databases are you talking about? >>Anything from Redis, Kafka, Postgres, MySQL, Consul — we just announced something called Portworx Data Services, an offering that essentially delivers all these databases as a service on any Kubernetes cluster that a customer can point to, and then they kind of get the automated management of the database from day one to day three, the entire lifecycle. You know, through a regular Kubernetes kubectl experience, through APIs and SDKs, and a nice slick UI with role-based access control and all of that, so they can completely control their data and their applications through it. And, you know, that's the third vector — Portworx Data Services. >>Michael, a question for you. So Portworx has been a part of Pure Storage — you've known Portworx, obviously, for several years, from before you were at CHG; you brought it to CHG, and you now know it a year into the acquisition.
Talk to me about the relationship and some of the benefits that you're getting with Portworx as a part of Pure Storage. >>Well, I mean, one of the things — you know, when I heard about the acquisition, my first thing was, I was a little bit concerned: is that relationship going to change? When we were looking at adopting Portworx, one thing I would tell my management is that Portworx is not just a vendor that wants to throw a solution at you and provide some capability — they're a partner. They want to partner with you and your success in your journey, in this whole cloud-native journey, to provide this rich digital experience for not only our platform engineering team and our dev teams, but also to really accelerate the development of our services so we can provide that digital portal for our end users. And that didn't change. If anything, it accelerated — that relationship did not change. You know, I came to Venkat with an issue we were dealing with; he immediately got someone on a phone call with me. So that has not changed. It's really exciting to see that, now that they've been acquired, they still are very much invested in the success of their customers and making sure we're successful. You know, I was worried I was going to have to go through a whole different support process and it was going to go into a black hole — didn't happen. They still are very much involved with their customers. >>And that sounds kind of similar to what you talked about with the cultural alignment. I've known Pure for a long time, and they're very customer-centric. Sounds like one of the areas in which there was a very strong alignment with Portworx. >>Absolutely. Portworx has always taken pride in being a customer-first company. Our founders are heavily customer-focused. You know, they have always aligned the Portworx business to our customers' needs.
Uh, Pure is a company that's maniacally focused on customers, right? I mean, that's all, you know, Pure's founder Coz and everybody care about, and so, you know, bringing these companies together and being part of the Pure team — I kind of see how synergistic it is. And, you know, that has enabled us to serve our customers even better than before. >>So, I'm curious about the two of you personally, in terms of your histories. I'm going to assume that you didn't both just bounce out of high school into the world of Kubernetes, right? So, like Lisa and I, you're spanning the generations between the world of, say, virtualization based on x86 architecture — where you have a full-blown operating system that you're working with — and virtualization where you can have microservices. Michael, with you first: talk about what that's been like, navigating that change. We were in the midst of that. Do you have advice for others that are navigating that change? >>Don't be afraid of it. You know, a lot of people — I call it, we're moving from a world where we're naming things — we still have cats and dogs, they have a name, the VMs, whether they're physical boxes or VMs — to where it's more like cattle, you know, we don't own the OS. Don't be afraid of that, because change is really good. You know, the ability for me to not have to worry about patching an operating system is huge, where I can rely on someone like EKS for the OS and the version, and allow them — if a CVE comes out, they let me know, and I go and use their tools to upgrade. So I don't have to literally worry about owning that OS, and containers are the same thing. You know, it's all about being fault-tolerant, right?
And being able to embrace change, where you can actually roll out a new version of a container, a base image, without having to go and patch a bunch of servers. I mean, patch night was hell — I'm sorry if I can say that — it was a nightmare, you know. But this whole world has just been a game changer. >>So, Venkat, from your perspective — you were coming at it going into a startup, looking at the landscape and the future and seeing opportunity — what's that been like for you? I guess the question for you is more something Lisa and I talk about: this concept of peak Kubernetes. Where are we in the wave? Is this just the beginning? Are we in the thick of it? >>Yeah, I would say we're kind of transitioning from the early-adopters to the early-majority phase in the whole, you know, crossing-the-chasm analogy. So I would say we're still in the early stages of this big wave that's going to transform how infrastructure is built and how apps are built, managed, and run in production. I think some of the key pieces are falling into place and maturing; there are some other pieces, like observability and security, and kind of edge use cases, that are going to get a lot more mature. And you'll see that the cloud as we know it today and the apps as we know them today are going to be radically different, and, you know, if you're not building your apps and your business on this modern platform, on this modern infrastructure, you're going to be left behind. Um, you know, my wife's birthday was a couple of days ago. I was telling this story to a couple of friends: I used one of the flower-delivery websites, and they missed delivering the flowers on the same day, right?
So when they told me all kinds of excuses, I just went and looked up, you know, DoorDash — which delivers, you know, your food, but there's also flower delivery in DoorDash — and I DoorDashed flowers to her, and I could track the flowers all the way. She did not eat them, okay — but my kids loved the chocolates, though. So, you know, the case in point is that you cannot be building a modern business without leveraging the modern toolchain, and how the business is going to be delivered — that is going to change dramatically. And if you don't deliver that kind of customer experience, you're not going to be successful in business. And Kubernetes is the fundamental technology that enables these containers; it's a fundamental piece of technology that enables building new businesses and, you know, modernizing existing businesses. And with 5G there are going to be new innovations that get unleashed, and again, Kubernetes and containers enable us to leverage those. So we're still scratching the surface on this. It's big now; it's going to be much, much bigger as we go into the next couple of years. >>Speaking of scratching the surface, Michael, take us out in the last 30 seconds or so with where CHG Healthcare is on its digital transformation. How is Portworx facilitating that? >>So we're right in the thick of it. I mean, we still have what we call the legacy — we're working on migrating those. But we're really moving forward to provide that rich experience, especially with event-driven platforms like Kafka and Kubernetes, and partnering with Portworx is one of the key things for us with that, along with AWS. And I remember I heard a talk — I can't remember by whom — where he talked about how Kubernetes is just sort of like a 56K
modem — you're hearing it, seeing it, but it's got to get to the point where it's just there, like the high-speed internet. >>Kelsey Hightower. >>That's who. Great. >>Yeah, and I really like that, because that's true, you know, and that's where we are on that transition — we're still early, it's still that 56K. So you still want to hear the noise, you still want to do kubectl, you want to learn it the hard way and do all that fun stuff, but eventually it's going to be where it's just there, and it's running everything — like 5G, stripped-down Kubernetes doing MicroK8s, things like that, you know. We're going to see it in a lot of other areas, and it's just going to proliferate and really accelerate the industry, and compute and memory and storage. >>Yeah, a lot of acceleration. Guys, thank you — this has been a really interesting session. I always love digging into customer use cases: how CHG is really driving its evolution with Portworx. Venkat, thanks for sharing with us what's going on with Portworx a year after the acquisition — it sounds like all good stuff. >>Thank you. Thanks for having us. It's been fun. >>Our pleasure. Alright, for Dave Nicholson, I'm Lisa Martin. You're watching theCube, live from Los Angeles. This is our coverage of KubeCon + CloudNativeCon '21.

Published Date : Oct 15 2021

Wayne Duso | AWS Storage Day 2021


 

(Upbeat intro music) >> Thanks guys. Hi everybody. Welcome back to The Spheres. My name is Dave Vellante and you're watching theCube's continuous coverage of AWS Storage Day. I'm really excited to bring on Wayne Duso. Wayne is the vice-president of AWS Storage, Edge and Data Governance Services. Wayne, two Boston boys got to come to Seattle to see each other, you know. Good to see you, man. >> Good to see you too. >> I mean, I'm not really from Boston. The guys from East Boston give me crap for saying that. [Wayne laughs] That's my city, right? You're a city guy too. >> It's my city as well — I'm from Charlestown, so right across the water. >> Charlestown is actually legit Boston, you know. I grew up in a town outside, but that's my city. So, a fellow sports fan. So hey, great keynote today. We're going to unpack the keynote and really try to dig into it a little bit. You know, the last 18 months have been pretty bizarre — who could have predicted this? We were just talking to Mai-Lan about, you know, some of the permanent changes, and even now it's like, day to day you're trying to figure out, okay, what's next — you know, our business, your business. But clearly this has been an interesting time, to say the least, and a tailwind for the Cloud. But let's face it: how are customers responding? How are they changing their strategies as a result? >> Yeah. Well, first off, let me say it's good to see you. It's been years since we've been in chairs across from one another. >> Yeah. A couple of years ago in Boston. >> A couple of years ago in Boston. I'm glad to see you're doing well. >> Yeah. Thanks. You too. >> You look great. (Wayne laughs) >> We get the Sox going. >> We'll be all set. >> Dave, you know, the last 18 months have been challenging. There's been a lot of change, but it's also been inspiring. What we've seen is our customers engaging the agility of the Cloud and appreciating the cost benefits of the Cloud.
You know, during this time we've had to be there for our partners, our clients, our customers, and our people — whether it's work from home, or whether it's expanding capability because a company like Zoom is surging and they need more capability. Our cloud capabilities have allowed them to function, grow, and thrive in these challenging times. It's really a privilege that we have the services and the capability to enable people to execute and operate as normally as they possibly can in something that's never happened before in our lifetimes. It's unprecedented. It's a privilege. >> Yeah. I mean, I agree. You think about it — there's a lot of negative narrative in the press about big tech, and the reality is big tech, and small tech, has stepped up big time. Really think about it, Wayne: where would we be without tech? And I know it sounds bizarre, but we're kind of lucky this pandemic occurred when it did, because had it occurred, you know, ten years ago, it would have been a lot tougher. I mean, who knows the state of vaccines, but certainly from a tech standpoint, the Cloud has been a savior. You've mentioned Zoom — productivity continues. So that's been pretty key. I want to ask you: in your keynote, you talked about two paths to move to the Cloud. Vector one was go — kind of lift and shift, if I got it right. And then vector two was modernize first and then go. First of all, did I get that right? >> Super close, and — >> So help me course-correct. And what do those two paths mean for customers? How should we think about that? >> Yeah. So we want to make sure that customers can appreciate the value of the Cloud as quickly as they need to.
And so there's two paths, and with the new launches — we'll talk about them in a minute — like our FSx for NetApp ONTAP, it allows customers to quickly move from like to like. So they can move from on-prem, and what they're using in terms of the storage services, the processes they use to administer and manage the data, straight onto AWS, without any conversion, without any change to their application. No change to anything. So storage administrators can be really confident that they can move, and application administrators know it will work as well, if not better, in the Cloud. So: moving onto AWS quickly to value — that's one path. Now, once they move onto AWS, some customers will choose to modernize. They will modernize by containerizing their applications, or they will modernize by moving to serverless using Lambda, right? So that gives them the opportunity, at the pace they want — as quickly or as cautiously as they need — to modernize their application, because they're already executing, already operating, already getting value. Now within that context, they can continue that modernization process by integrating with even more capabilities, whether it's ML capabilities or IoT capabilities, depending on their needs. So it's really about speed, agility, the ability to innovate, and then the ability to get that flywheel going with cost optimization — feed those savings back into betterment for their customers. >> So how do the launches that you guys have made today, and even previously, map into those two paths? >> Yeah, they map very well. >> How so? Help us understand that. >> So let's just run down through some of the launches today — >> Great. >> — and we can map those to the two paths. So, like we talked about, FSx for NetApp ONTAP — or we just like to say FSx for ONTAP, because it's so much easier to say. [Dave laughs] >> So FSx for ONTAP is a clear case of move.
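The "move" path Wayne describes — standing up a like-for-like ONTAP file system in AWS — maps onto a single FSx API call. A hedged sketch of the request an administrator might build for boto3's `fsx.create_file_system` follows; the subnet IDs, capacity, and throughput values are placeholders for illustration, not figures from the keynote:

```python
# Hypothetical sketch of the "move" path: build the kwargs for creating a
# Multi-AZ FSx for NetApp ONTAP file system, so on-prem ONTAP data and admin
# procedures carry over unchanged. All IDs and sizes are placeholders.

def ontap_file_system_request(subnet_ids, capacity_gib=1024, throughput_mbps=512):
    """Build create_file_system kwargs for a Multi-AZ FSx for ONTAP deployment."""
    return {
        "FileSystemType": "ONTAP",
        "StorageCapacity": capacity_gib,         # SSD capacity, in GiB
        "SubnetIds": subnet_ids,                 # two subnets for Multi-AZ
        "OntapConfiguration": {
            "DeploymentType": "MULTI_AZ_1",
            "ThroughputCapacity": throughput_mbps,
            "PreferredSubnetId": subnet_ids[0],  # where the active file server runs
        },
    }

request = ontap_file_system_request(["subnet-aaa", "subnet-bbb"])
# In real use: boto3.client("fsx").create_file_system(**request)
```

The request is built as plain data so the migration tooling can review or log it before anything is provisioned; the actual call is a one-liner against the FSx client.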
>> Right. >> EBS io2 Block Express for SAN — a clear case of move. It allows customers to quickly move their SAN workloads to AWS. With the launch of the EBS direct APIs supporting 64-terabyte volumes, you can now snapshot your 64-terabyte volumes on-prem to already be in AWS, and you can restore them to an EBS io2 Block Express volume — allowing you to quickly move an ERP application or an Oracle application, some enterprise application that requires the speed, the durability, and the capability of EBS, super quickly. So those are good examples of that. In terms of the modernization path, our launch of AWS Transfer Family managed workflows is a good example of that. Managed workflows have been around forever. >> Dave: Yeah. >> And customers rely on those workflows to run their business, but they really want to be able to take advantage of cloud capabilities. They want to be able to, for instance, apply ML to those workflows, because it really kind of makes sense — their workflows are people-related, and you can apply artificial intelligence to them. >> Right. >> This is an example of a service that allows them to modify those workflows, to modernize them, and to build additional value into them. >> Well, I like that example. I've got a couple of follow-up questions, if I may. Sticking on machine learning and machine intelligence for a minute — that to me is a big one, because, as I was talking to Mai-Lan about, it's not just sticking storage in a bucket anymore, right? You're invoking other services: machine intelligence, machine learning, maybe database services, whatever it is — you know, streaming services. And it's a service — there it is, it's not a real complicated integration. So that to me is big. I want to ask you about the block side of things. >> Wayne: Sure. >> You built, in your day, a lot of boxes. >> Wayne: I've built a lot of boxes. >> And you know the SAN space really well. >> Yeah.
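The SAN "move" flow Wayne describes — snapshot an on-prem volume into EBS via the direct APIs, then restore it to an io2 Block Express volume — can be sketched as a request builder for boto3's `ec2.create_volume`. The snapshot ID, availability zone, and IOPS figure below are placeholders; the 64 TiB ceiling is the volume size mentioned in the keynote:

```python
# Hedged sketch: restore a snapshot (already copied into EBS) to an io2
# Block Express volume. IDs and IOPS values are illustrative placeholders.

MAX_IO2_BLOCK_EXPRESS_GIB = 64 * 1024  # 64 TiB volume ceiling mentioned above

def restore_volume_request(snapshot_id, az, size_gib, iops):
    """Build create_volume kwargs for an io2 volume restored from a snapshot."""
    if size_gib > MAX_IO2_BLOCK_EXPRESS_GIB:
        raise ValueError("io2 Block Express volumes top out at 64 TiB")
    return {
        "SnapshotId": snapshot_id,       # the on-prem data, landed in EBS
        "AvailabilityZone": az,
        "VolumeType": "io2",
        "Size": size_gib,                # in GiB
        "Iops": iops,                    # provisioned IOPS for the workload
    }

req = restore_volume_request("snap-0123456789abcdef0", "us-east-1a", 64 * 1024, 64000)
# In real use: boto3.client("ec2").create_volume(**req)
```

Validating the size against the volume ceiling up front mirrors what the EC2 API would reject anyway, but catches it before any migration window is spent.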
>> And, you know, a lot of people — probably more than I do — storage admins that say, "you're not touching my SAN," right? And they just build a brick wall around it. Okay, and eventually it ages out. And I think, you know, that whole cumbersome model is understood, but nonetheless their workloads and their apps are running on it. How do you see that movement — and they're the toughest ones to move: the Oracle, the SAP, the really mission-critical Microsoft apps, the database apps, the hardcore stuff. How do you see that moving into the Cloud? Give us a sense as to what customers are telling you. >> Storage administrators have a hard job. >> Dave: Yeah. >> And trying to navigate how they move from on-prem to in-cloud is challenging. So we listen to the storage administrators, even when they tell us no — we want to understand why no. And when you look at EBS io2 Block Express, this is in part our initial response to moving their SAN into the Cloud super easily. Right? Because what do they need? They need performance. They need durability. They need availability. They need the services to be able to snap and to be able to replicate their — their storage. They need to know that they can move their applications without having to redo all they know, to re-plan all they work on each and every day. They want to be able to move quickly and confidently. EBS io2 Block Express is the beginning of that. They can move confidently to SAN in the Cloud using EBS. >> Well, so why do they say no? Is it just the inherent fear — like a lawyer would say, don't do that — or is it a technical issue? Is it a cultural issue? What are you seeing there? >> It's a cultural issue. It's a mindset issue. But it's a responsibility. I mean, these folks are responsible for one of the most important assets that you have. The most important asset for any company is its people. The second most important asset is data.
These folks are responsible for a very important asset. And if they don't get it right — if they don't get security right, they don't get performance right, they don't get durability right, they don't get availability right — it's on them. So it's on us to make sure they're okay. >> Do you see it as similar to the security discussion? Because early on — I was just talking to Sandy Carter about this, and we were saying, you remember the CIA deal? Right? I remember talking to financial services people who said, "we'll never put any data in the Cloud." Okay, they've got to be one of your biggest industries, if not your biggest customer base, today. But there was fear, and the CIA deal changed that. They're like, wow, the CIA is going to the Cloud — and they're really security-conscious. That was an example of maybe public sector informing commercial. Do you see it as similar? I mean, there are obviously differences, but is it a sort of similar dynamic? >> I do. I do. You know, all of these "ilities," right — whether it's durability, availability, security — we'll put "ility" at the end of it somehow — all of these are not jargon words. They mean something to each persona, to each customer. So we have to make sure that we address each of them. Like security: we've been addressing the security concern since the beginning of AWS, because security is job number one, and operational excellence job number two. So a lot of what we're talking about here is operational excellence — durability, availability, and the like are all operational concerns, and we have to make sure we deliver against those for our customers. >> I get it. I mean, the storage admin's job is thankless, but at the same time, you know, if your main expertise is managing LUNs, your growth path is limited. So they want to transform. They want to modernize their own careers. >> I love that. >> It's true, right? I mean it's- >> Yeah. Yeah.
So, you know, if you're a storage administrator today, understanding the storage portfolio that AWS delivers will enable you — empower you — to be a cloud storage administrator. So you have no worry, because — let's take FSx for ONTAP. You will take the skills that you've developed and honed over the years and directly apply them to the workloads that you bring to the Cloud, using the same CLIs, the same APIs, the same consoles, the same capabilities. >> Plus, you mentioned you guys announced — you talked about AWS Backup services today, announced some stuff there. I see security, governance, backup, identity access management — these are all adjacencies. So if you're a cloud storage administrator, you now are going to expand your scope of operations. You're not going to be a security whiz overnight by any means, but you're now part of that rubric, and you're going to participate in that opportunity and learn some things and advance your career. I want to ask you, before we run out of time: you talked about agility and cost optimization, and it's kind of the yin and the yang of Cloud, if you will. But how are these seemingly conflicting forces in sync, in your view? >> Like many things in life, right? [Wayne laughs] >> We're going to get a little spiritual. >> We might get a little philosophical here. [Dave laughs] >> You know, we've talked about two paths, and part of the two paths is enabling you to move quickly and be agile in how you move to the Cloud. Once you are on the Cloud, we have the ability, through all of the service integrations that we have, and your ability to see exactly what's happening at every moment, to then cost-optimize, to modernize, to improve on the applications and workloads and data sets that you've brought.
So this becomes a flywheel: cost optimization allows you to reinvest, be more agile, more innovative, which again returns value to your business and value to your customers. It's a flywheel effect. >> Yeah. It's kind of that gain-sharing, right? >> It is. >> And, you know, it's harder to do that in an on-prem world, where everything is kind of — okay, it's working, now boom, make it static. Oh, I want to bring in this capability, or this AI — and then there's an integration challenge >> That's true. >> going on. Not that there aren't differences in APIs, but that to me is the opportunity to build on top of it. Again, talking to Mai-Lan — I remember Andy Jassy saying, hey, we purposefully have created our services at a really atomic level so that we can get down to the primitives and change as the market changes. To me, that's an opportunity for builders to create abstraction layers on top of that. Amazon has kind of resisted that over the years, but almost on purpose. There's some of that now going on — specialization and maybe certain industry solutions — but in general your philosophy is to maintain that agility at the really granular level. >> It is. You know, we go back a long way, and as you said, I've built a lot of boxes, and I'm proud of a lot of the boxes I've built. But a box is still a box, right? You have constraints. And when you innovate and build on the Cloud — when you move to the Cloud — you do not have those constraints, right? You have the agility: you can stand up a file system in three seconds, you can grow it and shrink it whenever you want, and you can delete it — get rid of it whenever you want — back it up and then delete it. You don't have to worry about your infrastructure. You don't have to worry about, is it going to be there in three months? It will be there in three seconds.
So the agility of each of these services, the unique elements of all of these services, allow you to capitalize on their value, use what you need and stop using it when you don't, and you don't have the same capabilities when you use more traditional products. >> So when you're designing a box, how is your mindset different than when you're designing a service? >> Well, you have physical constraints. You have to worry about the physical resources on that device for the life of that device, which is years. Think about what changes in three or five years. Think about the last two years alone and what's changed. Can you imagine having been constrained by only having boxes available to you during these last two years, versus having the Cloud and being able to expand or contract based on your business needs? That would be really tough, right? And it has been tough. And that's why we've seen customers from every industry accelerate their use of the Cloud during these last two years. >> So I get that. So what's your mindset when you're building storage services and data services? >> So, each of the services that we have, in object, block, file, movement services, data services, each of them provides very specific customer value, and each is deeply integrated with the rest of AWS, so that when you need object services, you start using them. The integrations come along with you. If you're using traditional block, we talked about EBS io2 Block Express. When you're using file, just with the example alone today of ONTAP, you know, you get to use what you need when you need it, in the way that you're used to using it, without any concerns. >> (Dave mumbles) So your mindset is, how do I exploit all these other services? You're like the chef, and these are ingredients that you can tap, giving a path to your customers to explore over time. >> Yeah. Traditionally, for instance, if you were to have a filer, you would run multiple applications on that filer you're worried about.
Because you should worry, as a storage administrator, about whether each of those applications will have the right amount of resources to run at peak. When you're on the Cloud, each of those applications will just spin up, in seconds, their own file system. And those file systems can grow and shrink however they need to. And you don't have to worry about one application interfering with the other application. It's not your concern anymore. And it's not really that fun to do anyway. It's kind of the hard work that nobody, you know, really wants to reward you for. So you can take your time and apply it to generating, you know, value for your business. >> That's great. Thank you for that. Okay. I'll give you the last word. Give us the bumper sticker on AWS Storage Day. Exciting day. The third AWS Storage Day. You guys keep getting bigger, raising the bar. >> And we're happy to keep doing it with you. >> Awesome. >> So thank you for flying out from Boston to see me. >> Pleasure, as they say. >> So, you know, this is a great opportunity for us to talk to customers, to thank them. It's a privilege to build what we build for customers. You know, our customers are leaders in their organizations and their businesses for their customers. And what we want to do is help them continue to be leaders, and help them continue to build and deliver. We're here for them. >> Wayne, it's great to see you again. Thanks so much. >> Thanks. >> Maybe see you back at home. >> All right. Go Sox. All right. Yeah, go Sox. [Wayne Laughs] All right. Thank you for watching everybody. Back to Jenna Canal and Darko in the studio. It's Dave Vellante. You're watching theCube. [Outro Music]

Published Date : Sep 2 2021


Krishna Kottapalli and Sumant Rao, Abacus Insights | AWS Startup Showcase


 

(upbeat music) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase, the Next Big Thing in AI, Security & Life Sciences. Today we're joined by Abacus Insights for the Life Sciences track. I'm your host, Natalie Ehrlich. Now we're going to be speaking about creating an innovation-enabling data environment to accelerate your healthcare analytics journey, and we're joined by our guests Krishna Kottapalli, chief commercial officer, as well as Sumant Rao, chief product officer, both working at Abacus Insights. Thank you very much for joining us. >> Thank you for having us. >> Well, let's kick off with our theme. Krishna, how can we create innovation-enabling data environments in order to facilitate healthcare analytics? >> Yeah, so I think if you think about this, there is a lot of data proliferating inside the healthcare system, whether it's through internal sources, external sources, devices, patient monitoring platforms, and so on, and all of these essentially carry useful data and intelligence, right, and essentially the users are looking to get insights out of it to solve problems. We're also seeing that the journey our clients are going through is actually a transformation journey, right? So they are thinking about how do we seamlessly interact with our stakeholders, their stakeholders being members and providers, so that they don't get frustrated and feel like they're interacting with multiple parts of the health plan. Right? Typically, when you call the health plan you feel like you're calling five different departments, so they want to have a seamless experience. And finally, the data in the ecosystem, with patients, payers, and providers being able to operate and interact, carries intelligence.
So what we think about is how do we take all of this and help our clients, you know, digitize their path forward and enable them to do meaningful analytics. >> Well, Sumant, when you think about your customers, what are the key benefits that Abacus is providing? >> So that's a good question. Primarily speaking, we approach this as, you know, a framework that drives innovation and enables data and analytics. I mean, that's really what we're trying to do here. What Abacus does, though, is slightly different in how we think about this. We firmly believe that data analytics is not a linear journey. I mean, you cannot say, oh, I'll build my data foundation first and then, you know, have the data, and then they shall come. That's not how it works. So for us, the way Abacus approaches this is we focus really heavily on the data foundation part of it first, but along the way, a big part of our value statement is we engage and make sure we are driving business value throughout this piece. So the general message is, you know, make sure innovation for the sake of innovation is not how you're approaching this; think about your business users, get them engaged, and make small, milestone-driven progress along the way. So generally speaking, we're not trying to be just a platform that moves bits and bytes of information. The way we think about this is, you know, we'll help you along this journey; there are steps that happen that take you there. And because of that, the message to most of our customers is: you focus on your core competence. You know your business, you have nuances in the data, you have nuances in the needs that your customers have; you focus on that. The scale that Abacus brings, because this is what we do day in, day out, is more along the area of re-usability. So if our customers have data assets, how do we reuse some of that?
How does Abacus re-use the fact that, because of what we do, we actually have data assets that, you know, let us bring data to life quickly? So, general guidelines, right: first, don't innovate for the sake of innovating. I mean, that's not going to get you far. Respect the process; this is not a linear path, there's always value happening throughout the process, and, you know, Abacus will work closely with you to make sure you recognize that value. The second part is that within your organization, you have assets. There are major data assets, there's IP, there are things that can be leveraged, and Abacus will do that. And because we are a platform, what we focus on is configurability. I mean, a lot of us on the Abacus team come from the healthcare space, we have got big payer DNA, we get this, and what we also know is data rules change. I mean, you know, it's really hard when you build a system that's tightly built and you cannot change and you cannot adapt as data rules change, so we've made that part of it easier. We understand data governance, so we work closely with our payers' data governance teams to make sure that part of it happens. And I think the last part of this, which is really important in the context of this conversation, is: all of this is good stuff. I mean, you've got a massive data foundation, you've got, you know, healthcare expertise flowing in, you've got partnerships with data governance teams; all that is great. But if you don't have best-in-class infrastructure supporting all of that, then you will really have issues, Natalie. I mean, that's just the way it works, and this is why, you know, we're built on the AWS stack, which kind of helps us and also helps our clients along their cloud journey.
So it's kind of an interesting set of events. You know, again, I'm going to repeat this because it's important: don't innovate for the sake of innovating, re-use your assets and leverage your existing IP, make things configurable because data changes, and then leverage best-in-class infrastructure. So Abacus's strategy progresses across those four dimensions. >> And I mean, that's an excellent point about healthcare data being really nuanced, and, you know, Krishna, I would love to get your insights on what you see as the biggest opportunities in healthcare analytics now. >> Yeah, so the biggest opportunities, we think about them in two dimensions, right? One is really around sort of the analytics use cases, and the second is around the operational use cases. So if you think about a payer, they're trying to solve both, and because of the way we think about data, which is close to near real time, we are able to essentially serve up our clients, you know, helping them solve both of their use cases. So think of this: when you're a patient, you go to, you know, a CVS to do something, and then you go to your doctor's office to do something, right, to be able to take a test. If all of these are known to your payer's care management team, if you will, in close to near real time, then they know, right, where you've been, what you can do, how to be able to sort of intervene, and so on and so forth. So from a next-best-action standpoint, we see a lot of operational use cases emerging, thanks to the cloud as well as to infrastructure which can operate in near real time. So those are the sort of operational use cases, if you will.
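The near-real-time "next best action" pattern described above can be sketched in a few lines. This is an illustrative sketch only; the member IDs, event sources, and the outreach rule itself are hypothetical inventions, not Abacus's actual logic:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MemberTimeline:
    """Rolling record of care events for one health-plan member (hypothetical model)."""
    member_id: str
    events: List[Tuple[str, str]] = field(default_factory=list)

    def add(self, source: str, detail: str) -> None:
        self.events.append((source, detail))

def next_best_action(timeline: MemberTimeline) -> str:
    """Toy rule: once both a pharmacy event and a lab event are seen for the
    same member, flag them for care-management outreach."""
    sources = {source for source, _ in timeline.events}
    if {"pharmacy", "lab"} <= sources:
        return "schedule care-manager outreach"
    return "no action"

# A patient picks up a prescription, then takes a test at the doctor's office;
# with both signals visible in near real time, the care team can intervene.
timeline = MemberTimeline("member-001")
timeline.add("pharmacy", "prescription pickup at CVS")
timeline.add("lab", "test at doctor's office")
print(next_best_action(timeline))  # schedule care-manager outreach
```

The point of the sketch is only that each new event can re-evaluate the rule immediately, rather than in a nightly batch.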
When you think about the analytics, right, all payers struggle with this, which is that you have limited dollars to intervene with, you know, a large population. So every piece of data that you have about your patient, about the specific provider, and so on is able to actually give you analytics to intervene or engage, if you will, with the patient in a very one-to-one manner. And what we find is, at the end of the day, if the member or the patient is not engaged in the healthcare value chain, if you will, then your dollars go to waste, and we feel that essentially both of these types of use cases can be served really well with a unified data platform as well as with upstack analytics. >> And now Sumant, I'd love to hear from you. You know, you're really involved with the product; how do you see the competitive landscape? How do you make sure that your product is the best out there?
I mean, you know, for Abacus to bring a data source to life in less than 45 days, it's pretty straightforward. And it's you're talking on an average 6 or 12 months across the rest. Because we get this, we've got a library of rules, we understand how to bring this piece, so we start pulling away from the competitors, if you may. More along the enrichment vector, because that's where we think, getting high quality rules, getting these re-used, all of this is part of it, but then we bring another level of enrichment where we have, you know, we use public data sets, we use a reference data sets, we tie this, we fill in the blanks in the data. All of this is the end state, let's make the data shovel-ready for analytics. So we do all of that along the way, so now applying our expertise, cleansing data, making sure it's the gaps are all filled out and getting this ready and then comes the next part where we tie this data out. Cause it's one thing to bring in multiple sources quickly at scale high speed and all that good stuff, which is hard work, but you know, it's, it's expected now at the same time how do you put all that together in a meaningful manner with which we can actually, you know, land it and keep it ready? So that's two parts. So first is, the platform, the nuts and bolts, the pipes, all that is good stuff, the second is the enrichment. The third side, which is really where we start differentiating is distribution. We have a philosophy that, you know, really the mission of the whole company was to get data available. To solve use cases like the one Krishna just talked about. So rather than make this a massive change management program that takes five years to implement, and really scares your end users away, our philosophy is like let's have incremental use case all on the way, but let's talk to the users, let them interact with data as easy as they can. 
So we've built our partnerships on our distribution hub, which makes it easy. An example: if you have someone in the marketing team who really wants to analyze a particular population to reach out to them, and all they know is Tableau, that's great. It should be as simple as saying, look, what's the sliver of data you need to get your job done, and how do you want to interact? So our distribution hub is really the part where users come in and interact with the data; "we will meet you where you are" is the underlying principle, and that's how it operates. So I think on the first level, the platform, yeah, it's a crowded space and everyone's fighting for that piece; the second part of it is enrichment, where we really start pulling away using our expertise; and then at the end of it you've got the distribution part, where you just want to make it available to users. And, you know, a lot of work has gone into getting this done, but that's how we work. >> And if I could add a couple more things, Natalie: the other thing is security, right? The reason that healthcare players had not gone to the cloud until about three, four years back is the whole concern about security, so we have invested a ton of resources and money to make sure that our platform is run in the most secure manner, giving confidence to our clients, and it's an expensive process, right; even though you're on AWS you have to have your own certification, and that gives us a huge differentiator. And last but not least is how we actually approach the whole data management deployment process, which is that our clients think about us in two dimensions: total cost of ownership, at typically 50 to 60% of what it would cost internally, and secondly, time to value, right; you can't have an infinitely long deployment cycle. So we think about those two and actually put our skin in the game and tie our success to total cost of ownership and time to value.
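The enrichment step Sumant describes, filling in the blanks from reference data sets, can be sketched roughly as below. The record shapes, field names, and reference table are hypothetical; the point is only that gaps get filled from a reference source without ever overwriting values the source system already supplied:

```python
# Hypothetical claim records with gaps, plus a reference data set keyed by provider ID.
claims = [
    {"claim_id": "c1", "provider_id": "p100", "provider_name": None, "amount": 120.0},
    {"claim_id": "c2", "provider_id": "p200", "provider_name": "Dr. Lee", "amount": 80.0},
]
provider_reference = {
    "p100": {"provider_name": "Dr. Smith", "specialty": "cardiology"},
    "p200": {"provider_name": "Dr. Lee (ref)", "specialty": "oncology"},
}

def enrich(record: dict, reference: dict) -> dict:
    """Fill missing fields from the reference set; never overwrite source data."""
    out = dict(record)
    for key, value in reference.get(record["provider_id"], {}).items():
        if out.get(key) in (None, ""):  # only fill genuine blanks
            out[key] = value
    return out

enriched = [enrich(c, provider_reference) for c in claims]
# c1 gains a provider name and specialty from the reference set;
# c2 keeps its original provider name and only gains the missing specialty.
```

A production pipeline would layer rule libraries, data-quality checks, and governance on top of this, but the fill-the-blanks contract is the same.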
>> Well, just really quick, in one or two sentences, I would love to get your insight on Abacus's defining contribution to the future of cloud scale. >> Go ahead, Sumant. >> So as I see it, part of it is we've got some of our clients who are payers, and we've taken them along their cloud journey, trusting one of their key assets, which is data, and letting us drive it. And this is really driven by domain expertise, a good understanding of data governance, and a great understanding of security. I mean, combining all of this, we've actually got our clients sitting and operating on, you know, pretty significant cloud infrastructure, successfully, day in, day out. So I think we've done our part as far as, you know, helping folks along that journey. >> Yeah, and just to close it out, I would say it is speed, right; it is speed to deployment, and you don't have to wait. You know, we have set up the infrastructure, set up the cloud, and the ability to get things up and running is literally, we think about it in weeks, not months. >> Terrific. Well, thank you both very much for your insights, fantastic to have you on the show, really fascinating to hear about how Abacus is leveraging healthcare data expertise on its platform to drive robust analytics. And of course, here we were joined by Abacus Insights' Krishna Kottapalli, the chief commercial officer, as well as Sumant Rao, the chief product officer. Thank you again very much for your insights on this program and this session of the AWS Startup Showcase. (upbeat music)

Published Date : Jun 24 2021


Phil Bullinger, Infinidat & Lee Caswell, VMware | CUBE Conversation, March 2021


 

>>10 years ago, a group of industry storage veterans formed a company called Infinidat. The DNA of the company was steeped in the heritage of its founder, Moshe Yanai, who had a reputation for relentlessly innovating on three main areas: the highest performance, rock solid availability, and the lowest possible cost. Now, these elements have historically represented the superpower triumvirate of a successful storage platform. As Infinidat evolved, it landed on a fourth vector that has been a key differentiator in its value proposition, and that is petabyte scale. Hello everyone, and welcome to this CUBE conversation. My name is Dave Vellante, and I'm pleased to welcome in two longtime friends of theCUBE: Phil Bullinger, newly minted CEO of Infinidat, and of course Lee Caswell, VMware's VP of Marketing for the cloud platform business unit. Gents, welcome. >>Great to be here. Always good to see you guys. >>Phil, so you're joining at the 10 year anniversary mark. Congratulations on the appointment. What attracted you to the company?
>>And as I got to know the company and the board, and, you know, some of the leaders, and learned about the momentum and the business, it was just a very, very compelling opportunity for me. And I'll have to say just, you know, 60 days into the job. Everything I hoped for is here, not only a warm welcome to the company, but an exciting opportunity with respect to where Infinidat is at today with the growth of the business. The company has achieved a level of consistent growth through 2020, cashflow positive, EBITDA positive. And now it's a matter of scaling, scaling the business and it's something that I have had success with several times in my career and really, really enjoying the opportunity here at Infinidat to do that. >>That's great. Thanks for that. Now, of course, Lee, VMware was founded nearly a quarter century ago and carved out a major piece of the enterprise pie and predominantly that's been on prem, but the data center's evolving the cloud is evolving, and this universe is expanding. How do you see the future of that on-prem data center? >>No, I think Satya recently said, right, that, that we've reached max consolidation almost right. You pointed that out earlier. I thought that was really interesting, right. You know, we believe in the distributed hybrid cloud and you know, the reasons for that actually turn out to be storage led in there and in, in the real thinking about it, because we're going to have distributed environments and, you know, one of the things that we're doing with Infinidat here today, right, is we're showing how customers can invest intelligently and responsibly on prem and have bridges in across the hybrid cloud. We do that through something called the VMware Cloud Foundation. That's a full stack offering that, uh, an interesting here, right? It started off with a HCI element, but it's expanded into storage and storage at scale, you know, because storage is going to exist... 
We have very powerful storage value propositions, and you're seeing customers go and deploy both. We're really excited about seeing Infinidat lean into the VMware Cloud Foundation and vVols actually as a way to match the pace of change in today's application world. >>These trends, I mean, building bridges is what we called it. And so that takes a lot of hard work, especially when you're doing from on-prem into hybrid, across clouds, eventually the edge, you know, that's a, that's a non-trivial task. How do you see this playing out in market trends? >>Yeah. You know, we're, we're in the middle of this every day as, as you know, Dave, uh, and certainly Lee, uh, data center architectures ebb and flow from centralized to decentralized, but clearly data locality, I think, is driving a lot of the growth of the distributed data center architecture, the edge data centers, but core is still very significant for, for most enterprise. Uh, and it's, it's, it has, it has a lot to do with the fact that most enterprises want to own their own cloud. You know, when a Fortune 15 or a Fortune 50 or Fortune 100 customer, when they talk about their cloud, they don't want to talk about, you know, the AWS cloud or the GCP cloud or the Azure cloud. They want to talk about their cloud. And almost always, these are hybrid architectures with a large on-prem or colo footprint. >>Uh, the reason for that number of reasons, right? Data sovereignty is a big deal, uh, among the highest priorities for enterprise today. The control of the security, the, the ability to recover quickly from ransomware attacks, et cetera. These, these are the things that are just fundamentally important, uh, to the business continuity and enterprise risk management plan for these companies. But I think one thing that has changed the on prem data center is the fact that it's the core operating characteristics have to take on kind of that public cloud characteristic. It has to be a transparent, seamless scalability. 
I think the days of CIOs, you know, even tolerating people showing up in their data centers with disk trays under their arms to add capacity are over. Um, they want to seamlessly add capacity. They want nonstop operation; a hundred percent uptime is the bar.
I, of course, I always made a point to stop by not just for the food. I mean, I was able to meet some customers and I've talked to many dozens over the years, Phil, and I can echo that sentiment, but, you know, why is the VMware ecosystem so important to Infinidat? And I guess the question there is, is, is petabyte scale that really that prominent in the VMware customer base? >>It's a, it's a very, very important point. VMware is the longest standing Alliance partner of Infinidat. It goes back to really, almost the foundation of the company, certainly starting with the release one, the very first commercial release of Infinidat VMware and a very tight integration with the VMware was a core part of that. Uh, we, we have a capability. We call the Host PowerTools, which drives a consistent best practices implementation around our, our VMware, uh, integration and, and how it's actually used in the data center. And we built on that through the years through just a deep level of integration. And, um, our customers typically are, are at scale petabyte scale or average deployment as a petabyte and up, um, and over 90% of our customers use VMware. So you would say, I, I think I can safely say we're we serve the VMware environment for some of VMware's largest enterprise footprints, uh, in the market. >>I know it's like children, you got, you love all your partners, but is there anything about Infinidat that, that stands out to you a particular area where, where they shine that from your perspective? >>Yeah, I think so. You know, the, the best partnerships, one are ones that are customer driven. It turns out right. And the idea that we have joint customers at large scale and listen storage is a tough business to get, right, right. It takes time to go and mature to harden a code base. Right. And particularly when you're talking about petabyte scale, right now, you've basically got customers buying in for the largest systems. 
And what we're seeing overall is customers trying to do more things with fewer component elements, which makes sense, right? And so the scale here is important, because it's not just scale in terms of capacity, it's scale in terms of performance as well. And so as you see customers trying to expand the number of different types of applications, this is one of the things we're seeing: new applications, which could be container-based, Kubernetes-orchestrated, and our Tanzu portfolio helps with that. If you see what we're doing with Nvidia, for example, we announced some AI work this week with vSphere. So what you're starting to see is the changing nature of applications, and the fast pace of applications is really helping customers say, "I want to go and find solutions that can meet the majority of my needs." And that's one of the things that we're seeing, particularly with the vVols integration at a scale that we just haven't seen before. Infinidat has set the bar and is really setting a new record for that. >> Yeah, let me comment on that a little bit, Dave. We've been a core part of the VMware Cloud Solutions Lab, which is a very exciting, engaging investment that VMware has made, and a lot of people in the industry have contributed to it. In the VMware Cloud Solutions Lab, we recently demonstrated over 200,000 vVols on a single Infinidat frame. And I think that not only edges up the bar, it completely redefines what scale means when you're talking about a vVols implementation. >> So, not to geek out here, but vVols are kind of a game changer, because instead of admins having to manually allocate storage to performance tiers.
An array that is VASA certified (VASA is the vStorage APIs for Storage Awareness) lets you, with vVols, dynamically provision storage that matches, the way I put it, device attributes to the data and the application requirements of the VM. So Phil, it seems like so much in VMware land hearkens back to the way mainframes used to solve problems, in a modern way, and vVols is a real breakthrough in that regard in terms of storage. So how do you guys see it? I presume you're vVols certified, based on what you just said about the lab. >> Yeah, we recently announced our vVols release. We were not first to market with vVols, but from the start of the engineering project we wanted to do it the way we think: we think at scale in everything we do, and our customers were very prescriptive about the kind of scale and performance and availability that they wanted to experience with vVols. And we're now seeing quite a bit of customer interest and traction with it. As I said, we've redefined the bar for vVols scalability. We now support a thousand storage containers on a single array, and I think most of our competition is at one, or maybe 10 or 13, something like that. So our customers, again at scale, said: if you're going to do vVols, we want it at scale. We want it to embody the characteristics of your platform. We really like vVols because it helps separate the roles and responsibilities between the VI administrator and the storage system administrator.
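The provisioning model Dave sketches, device attributes matched against per-VM requirements rather than admins pre-carving performance tiers, can be illustrated conceptually. This is a toy sketch, not VMware's actual SPBM or VASA API; every class, field, and container name below is a hypothetical stand-in.

```python
# Conceptual sketch of vVols-style policy-based placement: each VM carries
# a storage policy, each storage container advertises capabilities, and a
# compliant match is found automatically. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """Requirements a VM's storage must satisfy (hypothetical attributes)."""
    min_iops: int
    capacity_gb: int
    encrypted: bool

@dataclass
class StorageContainer:
    """Capabilities an array advertises for one container (hypothetical)."""
    name: str
    max_iops: int
    free_gb: int
    encryption: bool

def place(vm_policy: StoragePolicy, containers: list[StorageContainer]) -> str:
    """Return the first container whose capabilities satisfy the policy;
    raise if no container is compliant."""
    for c in containers:
        if (c.max_iops >= vm_policy.min_iops
                and c.free_gb >= vm_policy.capacity_gb
                and (not vm_policy.encrypted or c.encryption)):
            return c.name
    raise LookupError("no compliant storage container")

containers = [
    StorageContainer("bronze", max_iops=5_000, free_gb=50_000, encryption=False),
    StorageContainer("gold", max_iops=200_000, free_gb=500_000, encryption=True),
]
print(place(StoragePolicy(min_iops=50_000, capacity_gb=2_000, encrypted=True),
            containers))
```

The separation of roles Phil describes falls out of this model: the VI administrator works in terms of policies, while the storage administrator controls what the containers actually advertise.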
If you're going to put the majority of your most critical bits on Infinidat in your data center, you're going to want to have control over how that resource is used. But the vVols implementation and the tools that we provide with that deep level of integration give the VI administrator all of the flexibility they need to manage applications. And vVols, of course, gives the VI administrator native use of our snapshot technology. So it makes it incredibly easy for them to administer the platform without having to worry about the physical infrastructure, while the people responsible for the physical infrastructure still have control over that resource. It's a game changer as far as we're concerned. >> Yeah, storage has come a long way, hasn't it, Lee? I'm wondering if you could add some color here. That's interesting: you had a hand in the growth of vSAN, a very successful product, but Infinidat was chosen for that higher-end application, and it seems like vVols are a key innovation in that regard. How's the vVols uptake going from your perspective? >> Yeah, I think we're in the second phase of vVols adoption. The first phase was: technically interesting, intriguing, but adoption was relatively low, I think because up until five years ago applications weren't actually changing that fast. I mean, think about it: ERP systems, CRM systems, you weren't changing those at the pace of what we're doing today. Now what's happening is every business is a software business. When you interact with your healthcare provider right now, it's about the apps. Can you get your schedules online? Can you email your doctors? Can you get your labs?
The pace of new application development is such that we have some data showing there will be more apps developed in the next five years than in the past 40 years of computing combined. >> And so when you think about that, what's changed now is that trying to manage all of it from the storage hardware side was actually getting in the way. You want to organize around the fastest beat rate in your infrastructure, and today that's the application. So what vVols helps you do is it allows the vSphere administrator, who's managing VMs and looking at the apps and their changing pace, to basically select storage attributes, including QoS, capacity, and IOPS, from the vCenter console, and then be able to rectify things and manage them right from the console, next to the apps. That provides a really integrated way of working. So when you have a close interaction, or integration, like what Infinidat has provided, you've got the ability to move faster. And consolidation is one of the themes you've heard from time to time from VMware: we're consolidating the management so that the vSphere administrator can now manage more things. Traditional VMs, yes; VMs across HCI, sure; plus now storage, and into the hybrid cloud, and into containers. It's that consolidated management which is getting us speed and basically a consumer-like experience for infrastructure deployments. >> Yeah. Now Phil mentioned the solutions lab, and we've got a huge ecosystem. Several years ago you launched this via VMware; I think the VMware Cloud Solutions Lab is the official name. Explain what it does for collaboration and joint solutions development, and then Phil, I want you to go into more detail about what your participation is. But Lee, why don't you explain it? >> Yeah, we don't take just any products into the lab.
What we take is things that really expand that innovation frontier, and that's what we saw with Infinidat: expanding the frontier on large capacity for many different mixed workloads, and a commitment to bring in not just vVols support, along with all the things we do for normal interaction with vSphere. Bringing vVols in was certainly important in showing how we operate at scale. And then, importantly, we expanded VCF, VMware Cloud Foundation, to include storage systems, for a customer, for example, who has both storage arrays and HCI and is looking for how to use them together. And that's an individual choice at a customer level. We think this is strategically important now as we expand a multi-cloud experience that's different from the hyperscalers'. Hyperscalers are coming in with maybe two kinds of issues. One is that it's a single cloud, and the other is that there's a potential competitive aspect, for some, around the underlying hyperscaler business model. So what VMware uniquely is doing is extending a common control plane across storage systems and HCI, and doing that in a way that gives customers choice. And we love that the Cloud Solutions Lab is really designed to make that a reality for customers and strip out perceived and real risk. >> Yeah, to Lee's point, it's not like there are dozens and dozens of logos on the slide for the lab. I think there's like 10 or 12 from what I saw, and Infinidat is one of them. Maybe you could talk a little bit more about your participation in the program and what it does for customers. >> Yeah, absolutely. And I would agree. We like the lab because it's not just supposed to be one of everything, eye candy; it's a purpose-built lab to do real things.
And we like it because we can really explore some of the most contemporary workloads in that environment, as well as solutions to what I consider some of the most contemporary industry problems. We're participating in a couple of ways. I believe we're the only petabyte-scale storage solution in the Cloud Solutions Lab at VMware. One of the projects we're working on with VMware is their machine learning platform; that was one of the first Cloud Solutions Lab projects we worked on at Infinidat. And we're also a core part of what VMware is driving with its Data for Good initiative. This was inspired by the idea that tech can be used as a force for good in the world, and right now it's focused on the technology needs of nonprofits. So we're working closely in the Cloud Solutions Lab with the VMware Cloud Foundation layers, as well as their Tanzu and Kubernetes environments, learning a lot and proving a lot. It's also a great way to demonstrate the capabilities of our platform. >> Yeah. Just the other day I was on the VMware analyst meeting, virtually of course, and Zane and Sanjay and a number of other execs were giving the update. And just to emphasize what we've been talking about here, this expansion of the on-prem cloud experience: the data, especially our survey data (we have a partner, ETR, that does great surveys on a regular quarterly basis), shows VMware Cloud on AWS doing great, for sure, but the VMware Cloud Foundation, the on-prem cloud, the hybrid cloud, is really exploding and resonating with customers. And that's a good example of the sort of equilibrium that we're seeing between the public and private coming together. >> Well, the VMware Cloud Foundation right now has over a thousand customers, but importantly over 400 of the Global 2000, the largest customers.
And that's actually where the Venn diagram between the work that VMware Cloud Foundation is doing and Infinidat, at this large scale, has an interesting crossover. And listen, for customers to take on a new storage system, we always know that it's a high bar. They have to see some really unique value: how is this going to help? And today that value is "I want to spend less time looking down at the storage and more time looking up at the apps." That's how we're working together, and how vVols fits into that. With the VMware Cloud Foundation, that hybrid cloud offering really gives customers future-proofing and the degrees of freedom they're most likely to exercise. >> Right. Well, let's close with kind of a glimpse of the future. What do you see as the future of the data center specifically, and also your collaborations? Lee, why don't you start? >> I think what we hoped would be true is turning out to be true. If you've looked at what's happening in the public cloud, and I'm talking about the public cloud here, not everything is migrating to it. The public cloud offers some really interesting, unique value, and VMware is doing really interesting things like DR as a service, so we're helping customers tap into that. At the same time, we're seeing that the on-prem investment is not stalling at all, because of data sovereignty, because of bandwidth limitations, and because of the economics of what it means to rent versus buy. And so partnering with leaders in storage is a core part of our strategy going forward. We're looking forward to doing more with Infinidat as we see VCF evolve and as we see new applications, including container-based applications, running on our platform. Lots of futures, right.
As the pace of application change doesn't slow down. >> So what do you see for the next 10 years for Infinidat? >> Well, I appreciated your introduction, because it speaks to the core characteristics of Infinidat, and for a company like us, at our juncture of evolution, it's important to know exactly who you are. We are clearly focused on that on-prem, hybrid data center environment. We want to be the storage tier that companies use to build their clouds. And the partnership with VMware, we talked about the Venn diagram, I think it just could not be more complementary. So we're certainly going to continue to focus on VMware as our largest and most consequential alliance partner for our business going forward. I'm excited about the data center landscape going forward. I think it's going to continue to ebb and flow; we'll see growth in distributed architectures, at the edge, and in the core data center. I think the old days, where customers would buy a storage system for a single application environment, are over. It's all about consolidating multiple apps and thousands of users on a single platform, and to do that you have to be really good at a lot of things that we are very good at. Our strategy going forward is to evolve as media evolves, but never stray far from what has made Infinidat unique and special and highly differentiated in the marketplace. The work that VMware is doing in Kubernetes is very exciting, and we're starting to see that really pick up in our business as well. So as we think about not only staying relevant but keeping very contemporary with application workloads: we have a very small number of customers that still do some bare metal, but predominantly, as I said, 90% or above is VMware infrastructure.
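The Kubernetes workloads Phil turns to next typically reach array storage through a vendor's CSI driver, surfaced to users as a StorageClass. Here is a hedged sketch of what that Kubernetes-side object looks like; the provisioner name and parameters are placeholders for illustration, not the actual Infinidat CSI driver's values.

```python
# Build a minimal Kubernetes StorageClass manifest as a plain dict.
# The driver name "csi.example-array.com" and the parameters are
# hypothetical stand-ins for a real vendor's CSI driver values.

import json

def storage_class(name: str, provisioner: str, params: dict) -> dict:
    """Assemble a StorageClass manifest (standard storage.k8s.io/v1 fields)."""
    return {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": name},
        "provisioner": provisioner,       # the CSI driver's registered name
        "parameters": params,             # driver-specific options
        "reclaimPolicy": "Delete",
        "allowVolumeExpansion": True,
    }

sc = storage_class(
    name="array-block",
    provisioner="csi.example-array.com",  # placeholder driver name
    params={"pool": "pool-a", "fstype": "ext4"},
)
print(json.dumps(sc, indent=2))
```

With a class like this in place, a PersistentVolumeClaim that names it gets a volume provisioned on the array automatically, which is the "consolidated management" experience Lee described, extended to containers.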
But we also see Kubernetes: our CSI driver works well with the VMware suite above it, so we see that complementary relationship extending forward as the application environment evolves. >> Great, thank you. You know, many years ago, when I attended my first VMworld, half the conversations with the practitioners there were complaints about storage: how it was so complicated, and how you needed guys in lab coats to solve problems. VMware really has done a great job publishing the APIs and encouraging the ecosystem. So if you're a practitioner interested in how vVols, Infinidat, and VMware are raising the bar at petabyte scale, there are some good blogs out there; check out the Virtual Blocks blog for more information. Guys, thanks so much, great to have you in the program. Really appreciate it. >> Thanks so much. >> Thank you for watching this Cube Conversation. This is Dave Vellante; we'll see you next time.

Published Date : Mar 30 2021



Phil Bullinger, INFINIDAT & Lee Caswell, VMware


 

(upbeat music) >> 10 years ago, a group of industry storage veterans formed a company called INFINIDAT. The DNA of the company was steeped in the heritage of its founder, Moshe Yanai, who had a reputation for relentlessly innovating in three main areas: the highest performance, rock-solid availability, and the lowest possible cost. These elements have historically represented the superpower triumvirate of a successful storage platform. As INFINIDAT evolved, it landed on a fourth vector that has been a key differentiator in its value proposition, and that is petabyte scale. Hello everyone, and welcome to this Cube Conversation. My name is Dave Vellante, and I'm pleased to welcome in two long-time friends of theCUBE: Phil Bullinger, newly minted CEO of INFINIDAT, and of course Lee Caswell, VMware's VP of marketing for the cloud platform business unit. Gents, welcome. >> Thank you so much. Yeah, great to be here, Dave. >> Yeah, great to be here, Dave. Thanks. >> Always good to see you guys. Phil, so you're joining at the 10-year anniversary mark; congratulations on the appointment. What attracted you to the company? >> Yeah, that's a great question, Dave. I've spent a long time of my career in enterprise storage and enjoyed many of the opportunities through a number of companies. Last fall, when I became aware of the INFINIDAT opportunity, it immediately captured my attention, frankly because of my respect for the product. Through several opportunities I've had with enterprise customers in selling cycles of different products, if they happened to be customers of INFINIDAT they were not bashful about talking about their satisfaction with the product and their level of delight with it. And so I think from the sidelines I have always had a lot of respect for the INFINIDAT platform, the implementation, and the product quality and reliability it's kind of legendary for.
And so when the opportunity came along, it really captured my interest, and of course behind a great product is almost always a great team. As I got to know the company, the board, and some of the leaders, and learned about the momentum in the business, it was just a very compelling opportunity for me. And I'll have to say, just 60 days into the job, everything I hoped for is here: not only a warm welcome to the company, but an exciting opportunity with respect to where INFINIDAT is at today. The company achieved consistent growth through 2020 and is cash flow positive, and now it's a matter of scaling the business, something I have had success with several times in my career, and I'm really enjoying the opportunity here at INFINIDAT to do that. >> That's great, thanks for that. Now, of course, Lee, VMware was founded nearly a quarter century ago and carved out a major piece of the enterprise pie, and predominantly that's been on prem. But the data center is evolving, the cloud is evolving, and this universe is expanding. How do you see the future of that on-prem data center? >> I think Satya recently said, right, that we've reached max consolidation. You pointed that out earlier; I thought that was really interesting. We believe in the distributed hybrid cloud, and the reasons for that actually turn out to be storage-led. And one of the things that we're doing with INFINIDAT here today is showing how customers can invest intelligently and responsibly on prem and have bridges across the hybrid cloud. We do that through something called the VMware Cloud Foundation. That's a full-stack offering that, interestingly, started off with an HCI element but has expanded into storage, and storage at scale.
Because storage is going to exist, we have very powerful storage value propositions, and you're seeing customers deploy both. We're really excited about seeing INFINIDAT lean into the VMware Cloud Foundation, and vVols is actually a way to match the pace of change in today's application world. >> Yes, so Phil, you see these trends; building bridges is what we called it. And that takes a lot of hard work, especially when you're doing it from on prem into hybrid, across clouds, eventually the edge; that's a non-trivial task. How do you see this playing out in market trends? >> We're in the middle of this every day, and as you know, Dave, and certainly Lee, data center architecture has ebbed and flowed from centralized to decentralized. But clearly data locality, I think, is driving a lot of the growth of the distributed data center architecture, the edge data centers, while core is still very significant for most enterprises. And it has a lot to do with the fact that most enterprises want to own their own cloud. When a Fortune 15 or a Fortune 50 or a Fortune 100 customer talks about their cloud, they don't want to talk about the AWS cloud or the GCP cloud or the Azure cloud. They want to talk about their cloud, and almost always these are hybrid architectures with a large on-prem or colo footprint. There are a number of reasons for that. Data sovereignty is a big deal, among the highest priorities for enterprises today: the control, the security, the ability to recover quickly from ransomware attacks, et cetera. These are the things that are just fundamentally important to the business continuity and enterprise risk management plans of these companies. But I think one thing that has changed is that the on-prem data center's core operating characteristics have to take on the characteristics of the public cloud: it has to be transparent, seamless scalability.
I think the days of CIOs even tolerating people showing up in their data centers with disk trays under their arms to add capacity are over. They want to seamlessly add capacity, they want nonstop operation; a hundred percent uptime is the bar now. Massive consolidation is clearly the play for TCO and efficiency. They don't want any compromises between scale and availability and performance. The very characteristics that you talked about upfront, Dave, that make INFINIDAT unique are, I think, fundamentally the characteristics that enterprises are looking for when they build their cloud on prem. I think our architecture also really does provide a set-it-and-forget-it kind of experience. When we install a new INFINIDAT frame in an enterprise data center, our intention is that we're not going to come back. We don't intend to come back to fiddle with the bits or tweak the configuration as applications and multi-tenant users are added. And then of course, flexible economic models. Everybody takes this for granted, but you really do have to be completely flexible between the two rails, the CapEx rail and the OpEx rail, and every step in between. And importantly, when an enterprise customer needs to add capacity, they don't want a sales conversation. They just want to have it, right there, already running in their data center. And that's the experience that we provide. >> Yeah, you guys are aligned in that vision: that layer that abstracts the complexity from the underlying infrastructure, wherever it lives, cloud, on prem, et cetera. Let's talk about the VMware and INFINIDAT relationship. I mean, every year at VMworld, up until last year, thank you COVID, INFINIDAT would host this awesome dinner; you'd have its top customers there, very nice Vegas steak restaurant. I of course always made a point to stop by, not just for the food.
I mean, I was able to meet some customers, and I've talked to many dozens over the years, Phil, and I can echo that sentiment. Why is the VMware ecosystem so important to INFINIDAT? And I guess the question there is: is petabyte scale really that prominent in the VMware customer base? >> It's a very, very important point. VMware is the longest standing alliance partner of INFINIDAT. It goes back to almost the foundation of the company, certainly starting with release one, the very first commercial release of INFINIDAT, where a very tight integration with VMware was a core part of it. We have a capability we call the Host PowerTools, which drives a consistent best-practices implementation around our VMware integration and how it's actually used in the data center. And we've built on that through the years with just a deep level of integration. Our customers typically are at petabyte scale, our average deployment is a petabyte and up, and over 90% of our customers use VMware. I think I can safely say we serve the VMware environment for some of VMware's largest enterprise footprints in the market. >> So Lee, it's like children, you love all your partners, but is there anything about INFINIDAT that stands out to you, a particular area where they shine from your perspective? >> Yeah, I think so. The best partnerships are ones that are customer driven, it turns out, and we have joint customers at large scale. I must say storage is a tough business to get right: it takes time to mature, to harden a code base, particularly when you talk about petabyte scale. Right now you've basically got customers buying in for the largest systems. And what we're seeing overall is customers trying to do more things with fewer component elements, which makes sense, right? And so the scale here is important, because it's not just scale in terms of capacity, it's scale in terms of performance as well.
And so as you see customers trying to expand the number of different types of applications, this is one of the things we're seeing: new applications, which could be container-based, Kubernetes-orchestrated, and our Tanzu portfolio helps with that. If you see what we're doing with Nvidia, for example, we announced some AI work this week with vSphere. So what you're starting to see is the changing nature of applications, and the fast pace of applications is really helping customers say, listen, I want to go and find solutions that can meet the majority of my needs. And that's one of the things that we're seeing, particularly with the vVols integration at a scale that we just haven't seen before. INFINIDAT is setting the bar and really setting a new record for that. >> Yeah, let me comment on that a little bit, Dave. We've been a core part of the VMware Cloud Solutions Lab, which is a very exciting, engaging investment that VMware has made, and a lot of people in the industry have contributed to it. In the VMware Cloud Solutions Lab, we recently demonstrated over 200,000 vVols on a single INFINIDAT frame. And I think that not only edges up the bar, it completely redefines what scale means when you're talking about a vVols implementation. >> So let's talk about both those things. Not to geek out here, but vVols are kind of a game changer, because instead of admins having to manually allocate storage to performance tiers, with an array that is VASA certified (VASA is the vStorage APIs for Storage Awareness), vVols let you dynamically provision storage that matches, the way I say it, device attributes to the data and the application requirements of the VM. So Phil, it seems like so much in VMware land harkens back to the way mainframes used to solve problems, in a modern way, right? And vVols is a real breakthrough in that regard in terms of simplifying storage.
So how do you guys see it? I presume you're vVols certified, based on what you just said about the lab. >> Yeah, we recently announced our vVols release. We were not first to market with vVols, but from the start of the engineering project we wanted to do it the way we think: we think at scale in everything we do, and our customers were very prescriptive about the kind of scale and performance and availability that they wanted to experience with vVols. And we're now seeing quite a bit of customer interest and traction with it. As I said, we redefined the bar for vVols scalability. We now support a thousand storage containers on a single array, and I think most of our competition is at one, or maybe 10 or 13, something like that. So our customers, again at scale, said: if you're going to do vVols, we want it at scale. We want it to embody the characteristics of your platform. We really like vVols because it helps separate the roles and responsibilities between the VI administrator and the storage system administrator. If you're going to put the majority of your most critical bits on INFINIDAT in your data center, you're going to want to have control over how that resource is used. But the vVols implementation and the tools that we provide with that deep level of integration give the VI administrator all of the flexibility they need to manage applications, and vVols of course gives the VI administrator native use of our snapshot technology. So it makes it incredibly easy for them to administer the platform without having to worry about the physical infrastructure, while the people responsible for the physical infrastructure still have control over that resource. It's a game changer as far as we're concerned. >> Yeah, storage has come a long way, hasn't it, Lee?
If you could add some color here. Lee, it's interesting, you had a hand in the growth of vSAN, a very successful product, but this customer chose INFINIDAT for that higher-end application. It seems like VVols are a key innovation in that regard. How's the VVol uptake going from your perspective? >> Yeah, I think we're in the second phase of VVol adoption, right? The first phase was, hey, it's technically interesting, intriguing, but adoption was relatively low, I think because up until five years ago applications weren't actually changing that fast. I mean, think about it, right? The applications, ERP systems, CRM systems, you weren't changing those at the pace of what we're doing today. Now what's happening is every business is a software business. When you interact with your healthcare provider right now, it's about the apps. Like, can you go and get your schedules online? Can you email your doctors, right? Can you go and get your labs, right? On the pace of new application development, we have some data showing that there will be more apps developed in the next five years than in the past 40 years of computing combined. And so when you think about that, what's changed now is that trying to manage all that from the storage hardware side was just actually getting in the way. You want to organize around the fastest beat rate in your infrastructure, and today that's the application. So what VVols helps you do is it allows the vSphere administrator, who's managing VMs and looking at the apps and their changing pace, to basically select storage attributes, including QoS, capacity, and IOPS, and do that from the vCenter console, and then be able to rectify things and manage them, right? From the console, right next to the apps. And that provides a really integrated way.
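The workflow Lee describes, a vSphere administrator declaring storage attributes (QoS, capacity, IOPS) and the array being matched to them per VM, can be sketched in miniature. This is a hedged, toy illustration in plain Python, not VMware's actual SPBM or VASA API; the attribute names and the matching rule are assumptions made for the sake of the example.

```python
# Toy sketch of storage policy-based matching: a VM's storage policy
# declares required attributes, and we pick the first storage container
# whose advertised capabilities satisfy every one of them. Attribute
# names are illustrative, not VMware's real SPBM vocabulary.

def find_container(policy, containers):
    """Return the name of the first container meeting the policy, else None."""
    for c in containers:
        if (c["free_gb"] >= policy["capacity_gb"]
                and c["max_iops"] >= policy["iops"]
                and c["latency_ms"] <= policy["max_latency_ms"]):
            return c["name"]
    return None  # no compliant container: provisioning would fail

containers = [
    {"name": "bronze", "free_gb": 5000, "max_iops": 10_000, "latency_ms": 10},
    {"name": "gold", "free_gb": 200_000, "max_iops": 500_000, "latency_ms": 1},
]

# An OLTP VM asking for high IOPS and low latency lands on "gold".
oltp_policy = {"capacity_gb": 500, "iops": 100_000, "max_latency_ms": 2}
print(find_container(oltp_policy, containers))  # gold
```

The point of the VVol model is that this kind of matching happens per VM (per virtual disk, in fact) rather than per pre-carved LUN, which is what removes the manual tiering work discussed above.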
So when you have a close interaction like what we're talking about today, or integration like what INFINIDAT has provided, now you've got this ability to have a faster-moving activity. And consolidation is one of the themes you've heard from time to time from VMware; we're consolidating the management so that the vSphere administrator can now go and manage more things. Traditional VMs, yes; VMs across HCI, sure; but now plus storage, and into the hybrid cloud, and into containers. It's that consolidated management which is getting us speed and basically a consumer-like experience for infrastructure deployments. >> Yeah. Now Phil mentioned the solutions lab. We've got a huge ecosystem. Several years ago you launched this, the VMware Cloud Solutions Lab, I think, is the official name. Explain what it does for collaboration and joint solutions development. And then Phil, I want you to go into more detail about what your participation has been, but Lee, why don't you explain it? >> Yeah. We don't take just any products, because, listen, there's a mix in there. What we take is things that really expand that innovation frontier. And that's what we saw with INFINIDAT: expanding the frontier on large capacity for many, many different mixed workloads, and a commitment, right? To go and bring in not just VVol support, of course all the things we do for just normal interaction with vSphere, but bringing VVols in was certainly important in showing how we operate at scale. And then importantly, as we expanded vSphere and Cloud Foundation to include storage systems. Take a customer, for example, right? Who has storage and HCI, right? And is looking for how to go and use them. And that's an individual choice at a customer level. We think this is strategically important now as we expand a multi-cloud experience that's different from the hyperscalers, right? Hyperscalers are coming in with two kinds of issues, maybe, right? So one is it's single cloud.
And the other one is there's a potential competitive aspect from some, right? Around the ongoing underlying business and the hyperscaler business model. And so what VMware uniquely is doing is extending a common control plane across storage systems and HCI, and doing that in a way that basically gives customers choice. And we love that the cloud lab is really designed to go and make that a reality for customers and strip out perceived and real risk. >> Yeah. Phil, to Lee's point, it's not dozens and dozens and dozens of logos on the slide for the lab. I think there's like 10 or 12 from what I saw, and INFINIDAT is one of them. Maybe you could talk a little bit more about your participation in the program and what it does for customers. >> Yeah, absolutely. And I would agree. We like the lab because it's not just one of everything; it's a purpose-built lab to do real things. And we like it because we can really explore some of the most contemporary workloads in that environment, as well as solutions to what I'd consider some of the most contemporary industry problems. We're participating in a couple of ways. I believe we're the only petabyte-scale storage solution in the cloud solutions lab at VMware. One of the projects we're working on with VMware is their machine learning platform. That's one of the first cloud solutions lab projects that we worked on with VMware. And we're also a core part of what VMware is driving with what we call the Data for Good initiative. This was inspired by the idea that tech can be used as a force for good in the world, and right now it's focused on the technology needs of nonprofits. And so we're closely working in the cloud solutions lab with the VMware Cloud Foundation layers as well as the Tanzu and Kubernetes environments, and learning a lot and proving a lot. And it's also a great way to demonstrate the capabilities of our platform. >> Yeah.
So Lee, just the other day I was at the VMware analyst meeting, virtually of course, and Zane and Sanjay and a number of other execs were giving the update. And just to sort of emphasize what we've been talking about here, this expansion of on-prem, the cloud experience, the data, especially from our survey data (we have a partner in ETR, they do great surveys on a quarterly basis): VMware Cloud on AWS is doing great, for sure, but the VMware Cloud Foundation, the on-prem cloud, the hybrid cloud, is really exploding and resonating with customers. And that's a good example of this sort of equilibrium that we're seeing between the public and private coming together. >> Well, VMware Cloud Foundation is right now at over a thousand customers, but importantly over 400 of the Global 2000, right? It's the largest customers. And that's actually where the Venn diagram between the work that VMware Cloud Foundation is doing and INFINIDAT overlaps, right? This large scale is actually the interesting crossover, right? And listen, for customers to go and take on a new storage system, we always know that it's a high bar, right? So they have to see some really unique value, like how is this going to help, right? And today that value is: I want to spend less time looking down at the storage and more time looking up at the apps. That's how we're working together, right? And how VVols fits into that with the VMware Cloud Foundation, it's that hybrid cloud offering that really gives customers that future-proofing, right? And the degrees of freedom they're most likely to exercise. >> Right. Well, let's close with kind of a glimpse of the future. What do you two see as the future of the data center specifically, and also your collaboration? Lee, why don't you start? >> So I think what we hope to be true is turning out to be true.
So, if you've looked at what's happening in the cloud, not everything is migrating to the cloud, but the public cloud, for example, and I'm talking about the public cloud there, offers some really interesting, unique value. And VMware is doing really interesting things like DR as a service and other things, right? So we're helping customers tap into that. At the same time, right? We're seeing that the on-prem investment is not stalling at all, because of data sovereignty, because of bandwidth limitations, right? And because of really the economics of what it means to rent versus buy. And so partnering with leaders in storage, right? Is a core part of our strategy going forward. And we're looking forward to doing more, right? With INFINIDAT, as we see VCF evolve, as we see new applications, including container-based applications, running on our platform. Lots of futures, right? As the pace of application change doesn't slow down. >> So Phil, what do you see for the next 10 years for INFINIDAT? >> Yeah, well, I appreciated your introduction, because it does speak to sort of the core characteristics of INFINIDAT. And I think for a company like us, at our juncture of evolution, it's important to know exactly who you are. And we clearly are focused on that on-prem hybrid data center environment. We want to be the storage tier that companies use to build their clouds. The partnership with VMware, we talked about the Venn diagram, I think it just could not be more complementary. And so we're certainly going to continue to focus on VMware as our largest and most consequential alliance partner for our business going forward. I'm excited about the data center landscape going forward. I think it's going to continue to ebb and flow. We'll see growth in distributed architectures, we'll see growth at the edge.
In the core data center, I think the old days where customers would buy a storage system for an application environment, those days are over. It's all about consolidating multiple apps and thousands of users on a single platform. And to do that you have to be really good at a lot of things that we are very good at. Our strategy going forward is to evolve as media evolves, but never stray far from what has made INFINIDAT unique and special and highly differentiated in the marketplace. I think the work that VMware is doing in Kubernetes is very exciting. We're starting to see that really pick up in our business as well. So as we think about not only staying relevant but keeping very contemporary with application workloads, we have some very small number of customers that still do some bare metal, but predominantly, as I said, 90% or above is a VMware infrastructure. But we also see Kubernetes; our CSI driver works well with the VMware suite above it. So that complementary relationship we see extending forward as the application environment evolves. >> That's great. Thank you. Many years ago, when I attended my first VMworld, you'd talk to the practitioners that were there, and half the conversations were them complaining about storage, and how it was so complicated, and you needed guys in lab coats to solve problems. And VMware really has done a great job publishing the APIs and encouraging the ecosystem. And so if you're a practitioner and you're interested in how VVols and INFINIDAT and VMware are kind of raising the bar at petabyte scale, there are some good blogs out there. Check out the Virtual Blocks blog for more information. Guys, thanks so much. Great to have you on the program. Really appreciate it. >> Thanks so much, Dave. >> All right. Thank you for watching this CUBE conversation. Dave Vellante, we'll see you next time. (upbeat music)

Published Date: Mar 10 2021



Marc Staimer, Dragon Slayer Consulting & David Floyer, Wikibon | December 2020


 

>> Announcer: From theCUBE studios in Palo Alto, in Boston, connecting with thought leaders all around the world. This is theCUBE conversation. >> Hi everyone, this is Dave Vellante, and welcome to this CUBE conversation, where we're going to dig into the area of cloud databases. Gartner just published a series of research in this space, and it's really a growing market, rapidly growing, with a lot of new players, obviously including the big three cloud players. And with me are two long-time industry analysts. Marc Staimer is the founder, president, and key principal at Dragon Slayer Consulting. And he's joined by David Floyer, the CTO of Wikibon. Gentlemen, great to see you. Thanks for coming on theCUBE. >> Good to be here. >> Great to see you too, Dave. >> Marc, coming from the great Northwest, I think first time on theCUBE, so it's really great to have you. So let me set this up. As I said, you know, Gartner published these, you know, three giant tomes. These are, you know, publicly available documents on the web. I know you guys have been through them, you know, several hours of reading. And so, night... (Dave chuckles) Good nighttime reading. The three documents: they identify critical capabilities for cloud database management systems, and the first one we're going to talk about is operational use cases. So we're talking about, you know, transaction-oriented workloads, ERP, financials. The second one was analytical use cases, sort of an emerging space, really the data warehouse space and the like. And, of course, the third is the famous Gartner Magic Quadrant, which we're going to talk about. So, Marc, let me start with you. You've dug into this research. Just at a high level, you know, what did you take away from it? >> Generally, if you look at all the players in the space, they all have some basic good capabilities.
What I mean by that is, ultimately, when you have a transactional or an analytical database in the cloud, the goal is not to have to manage the database. Now they have different levels of how much you have to manage, or what you have to manage. But ultimately, they all manage the basic administrative, or the pedantic, tasks that DBAs have to do: the patching, the tuning, the upgrading. All of that is done by the service provider. So that's the number one thing they all aim at. From that point on, every database has different capabilities, and some will automate a whole bunch more than others, and they'll have different primary focuses. So it comes down to what you're looking for, or what you need. And ultimately, what I've learned from end users is that what they think they need upfront is not what they end up needing as they implement. >> David, anything you'd add to that, based on your reading of the Gartner work? >> Yes. It's a thorough piece of work. It's taking on a huge number of different types of uses and sizes of companies. And I think those are two parameters which really change how companies would look at it. If you're a Fortune 500 or Fortune 2000 type company, you're going to need a broader range of features, and you will need to deal with size and complexity in a much greater sense, and probably higher levels of availability, and reliability, and recoverability. On the workload side, as well as the two transactional and analytic workloads, I think there's an emerging type of workload which is going to be very important for future applications, where you want to combine transactional with analytic in real time, in order to automate business processes at a higher level, to make the business processes synchronous as opposed to asynchronous. And that degree of granularity, I think, is missed in a broader view of these companies and what they offer.
It's in my view trying, in some ways, not to compare like with like from a customer point of view. >> So the very nuance you talked about, let's get into it; maybe that'll become clear to the audience. So like I said, these are very detailed research notes. There were several, I'll say, analyst cooks in the kitchen, including Henry Cook, whom I don't know, and four other contributing analysts, two of whom are CUBE alums, Don Feinberg and Merv Adrian, both really, you know, awesome researchers. And Rick Greenwald, along with Adam Ronthal. And these are public documents; you can go on the web and search for these. So I wonder if we could just look at some of the data. Guys, bring up slide one here. And so we'll first look at the operational side, and they broke it into four use cases: the traditional transaction use cases, augmented transaction processing, stream/event processing, and operational intelligence. And so we're going to show you, there's a lot of data here. What Gartner did is they essentially evaluated critical capabilities, think of features and functions, and gave them a weighting and then a rating. It was a weighting-and-rating methodology. The rating was on a scale of one to five, and then they weighted the importance of the features based on their assessment and on talking to the many customers they talk to. So you can see here on the first chart, we're showing both the traditional transactions and the augmented transactions, and, you know, the first thing that jumps out at you guys is that, you know, Oracle with Autonomous is off the charts, far ahead of anybody else on this. And actually guys, if you just bring up slide number two, we'll take a look at the stream/event processing and operational intelligence use cases. And you can see, again, you know, Oracle has a big lead.
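The weighting-and-rating arithmetic described above is simple to make concrete. The capability names, weights, and ratings below are invented purely for illustration; they are not Gartner's actual criteria or figures.

```python
# Toy version of a critical-capabilities score: each capability gets a
# weight (the weights sum to 1.0), each vendor gets a 1-5 rating per
# capability, and a product's score is the weighted sum of its ratings.
# All numbers here are made up for the example.

weights = {"availability": 0.4, "performance": 0.35, "automation": 0.25}

ratings = {
    "vendor_a": {"availability": 5, "performance": 4, "automation": 4},
    "vendor_b": {"availability": 3, "performance": 4, "automation": 5},
}

def weighted_score(vendor):
    """Weighted sum of a vendor's 1-5 ratings across all capabilities."""
    return sum(weights[c] * ratings[vendor][c] for c in weights)

for v in ratings:
    print(v, round(weighted_score(v), 2))
# vendor_a scores 4.4 and vendor_b 3.85: the weighting, not the raw
# ratings alone, decides the ordering for a given use case.
```

This is also why the same vendors reorder across Gartner's four operational use cases: each use case applies a different set of weights to the same underlying capability ratings.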
And I don't want to necessarily go through every vendor here, but guys, if you don't mind going back to the first slide, 'cause I think this is really, you know, the core of transaction processing. So let's look at this: you've got Oracle, you've got SAP HANA, you know, right there. Interestingly, Amazon Web Services with Aurora, you know, IBM Db2, which, you know, goes back to the good old days, you know, down the list. But so, let me again start with Marc. So why is that? I mean, I guess this is no surprise: Oracle still owns Mission-Critical for the database space. They earned that years ago, won it, you know, over the likes of Db2 and, you know, Informix and Sybase, and, you know, they emerged as number one there. But what do you make of this data, Marc? >> If you look at this data in a vacuum, you're looking at specific functionality. I think you need to look at all the slides in total. And the reason I bring that up is because I agree with what David said earlier, in that the use case that's becoming more prevalent is the integration of transaction and analytics. And more importantly, it's not just your traditional data warehouse, but it's AI analytics, it's big data analytics. Users are finding that they need more than just simple reporting. They need more in-depth analytics so that they can get more actionable insights into their data, where they can react in real time. And so if you look at it just as a transaction system, that's great. If you look at it just as a data warehouse, that's great, or analytics, that's fine, if you have a very narrow use case, yes. But I think today what we're looking at is... It's not so narrow. It's sort of like, if you bought a streaming device and it only streams Netflix, and then you need to get another streaming device 'cause you want to watch Amazon Prime. You're not going to do that; you want one that does all of it, and that's kind of what's missing from this data.
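The converged transaction-plus-analytics pattern Marc is describing (one engine taking the operational writes and answering the analytic question over the same live data) can be shown in miniature with SQLite. This is only a conceptual sketch of the pattern, not a claim about how any of the vendors' engines are built.

```python
import sqlite3

# One engine, two workload types: transactional inserts feed the same
# table that the analytic aggregate reads, with no ETL hop in between.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# "Transactional" side: record orders as they arrive.
with db:  # implicit transaction, committed on normal exit
    db.executemany("INSERT INTO orders (amount) VALUES (?)",
                   [(10.0,), (25.5,), (4.5,)])

# "Analytic" side: an up-to-the-moment aggregate over the live data.
total, count = db.execute(
    "SELECT SUM(amount), COUNT(*) FROM orders").fetchone()
print(total, count)  # 40.0 3
```

In the split-system world Marc contrasts this with, the aggregate would run on a separate warehouse fed by a pipeline, so the answer lags the writes; in the converged case it reflects the last committed transaction.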
So I agree that the data is good, but I don't think it's looking at it in a total, encompassing manner. >> Well, so before we get off the horses-on-the-track thing, 'cause I love to do that... (Dave chuckles) Let's talk about that. So Marc, you're putting forth the... You guys seem to agree on the premise that the database that can do more than just one thing is of appeal to customers. I suppose that certainly makes sense from a cost standpoint. But, you know, guys, feel free to flip back and forth between slides one and two. But you can see SAP HANA, and I'm not sure what cloud that's running on, it's probably running on a combination of clouds, but, you know, scoring very strongly. I thought, you know, Aurora, you know, given AWS says it's one of the fastest-growing services in history, and they've got it ahead of Db2 just on functionality, which is pretty impressive. I love Google Spanner, you know, love what they're trying to accomplish there. You know, you go down to Microsoft, they're kind of the... They're always the good-enough database, and that's how they succeed, et cetera, et cetera. But David, it sounds like you agree with Marc. I would think, though, Amazon kind of doesn't agree, 'cause they're like horses for courses. >> I agree. >> Yeah, yeah. >> So I wonder if you could comment on that. >> Well, I want to comment on two vectors. The first vector is the size of customer, you know, a mid-sized customer versus a Global 2000 or Global 500 customer. For the smaller customer, that's the heart of AWS, and they are taking their applications and putting pretty well everything into their cloud, the one cloud, and Aurora is a good choice. But when you start to get to the requirements, as you do in larger companies, of very high levels of availability, the functionality is not there. You're not comparing apples with apples; it's two very different things.
So from a tier one functionality point of view, IBM Db2 and Oracle have far greater capability for recovery, and all the features that they've built in over the years. >> Because of their... You mean 'cause of the maturity, right? Maturity and... >> Because of their... Because of their focus on transaction and recovery, et cetera. >> So SAP though, HANA, I mean, that's, you know... (David talks indistinctly) And then... >> Yeah, yeah. >> And then I wanted your comments on that, either of you or both of you. I mean, SAP, I think, has a stated goal of basically getting its customers off Oracle by 2024; there's always this urinary Olympics >> Yes, yes. >> between the two companies. Larry has said that ain't going to happen. You know, Amazon, we know, still runs on Oracle. It's very hard to migrate Mission-Critical; David, you and I know this well, Marc, you as well. So, you know, people often say, well, everybody wants to get off Oracle, it's too expensive, blah, blah, blah. But we've talked to a lot of Oracle customers, and they're very happy with the reliability, availability, recoverability feature set. I mean, the core of Oracle seems pretty stable. >> Yes. >> But I wonder if you guys could comment on that; maybe Marc, you go first. >> Sure. I've recently done some in-depth comparisons of Oracle and Aurora, and all their other RDS services, and Snowflake and Google and a variety of them. And ultimately what surprised me is, you made a statement that it costs too much. It actually comes in at half of Aurora in most cases. And it comes in at less than half of Snowflake in most cases, which surprised me. But no matter how you configure it, ultimately, based on a couple of things, each vendor is focused on different aspects of what they do. Let's say Snowflake, for example: they're on the analytical side, they don't do any transaction processing. But... >> Yeah, so if I can... Sorry to interrupt. Guys, if you could bring up the next slide, that would be great.
So that would be slide three, because now we get into the analytical piece, Marc, that you're talking about; that's Snowflake's specialty. So please carry on. >> Yeah, and what they're focused on is sharing data among customers. So if, for example, you're an automobile manufacturer and you've got a huge supply chain, you can share the data, without copying the data, with any of your suppliers that are on Snowflake. Now, can you do that with the other data warehouses? Yes, you can, but for Snowflake that's the focal point; that's where they're aiming it. Whereas, let's say, the focal point for Oracle is going to be performance. So their performance affects cost, 'cause the higher the performance, the less you're paying on the compute part of the payment scale, because you're paying per second for the CPUs that you're using. Same thing on Snowflake, but the performance is higher, therefore you use less. I mean, there's a whole bunch of things that come into this, but at the end of the day, what I've found is Oracle tends to be a lot less expensive than the prevailing wisdom suggests. >> So let's talk value for a second, because you said something, that, yeah, the other databases can do that, what Snowflake is doing there. But my understanding of what Snowflake is doing is they've built this global data mesh across multiple clouds. So not only are they compatible with Google or AWS or Azure, but essentially you sign up for Snowflake and then you can share data with anybody else in the Snowflake cloud; that, I think, is unique. And I know, >> Marc: Yes. >> Redshift, for instance, just announced, you know, Redshift data sharing, and I believe it's just within, you know, clusters within a customer, as opposed to across an ecosystem. And I think that's where the network effect is pretty compelling for Snowflake. So independent of costs, you and I can debate about costs, and, you know, there's the transparency issue, because with AWS you don't know what the bill is going to be at the end of the month. And that's the same thing with Snowflake, but I find that... And by the way, guys, you can flip through slides three and four, because we've got... Let me just take a quick break: slide three has the data warehouse and logical data warehouse use cases, and then the next slide, four, has the data science, deep learning, and operational intelligence use cases. And you can see, you know, Teradata... Teradata came up in the mid-1980s and dominated in that space. Oracle does very well there. You can see Snowflake pop up, SAP with its data warehouse, Amazon with Redshift. You know, Google with BigQuery gets a lot of high marks from people. You know, Cloudera is in there, you know, so you see some of those names. But so, Marc and David, to me, that's a different strategy. They're not trying to be just a better data warehouse, an easier data warehouse. They're trying to create, Snowflake that is, an incremental opportunity, as opposed to necessarily going after, for example, Oracle. David, your thoughts. >> Yeah, I absolutely agree. I mean, ease of use is a primary benefit for Snowflake. It enables you to do stuff very easily. It enables you to take data in without ETL, without any of the complexity. It enables you to share a number of resources across many different users, and be able to bring in what that particular user, or part of the company, wants. So in terms of where they're focusing, they've got tremendous ease of use, a tremendous focus on what the customer wants. And you pointed out yourself the restrictions there are on doing that both within Oracle and AWS. So yes, they have really focused very, very hard on that. Again, for the future, they are bringing in a lot of additional functions. They're bringing in Python into it, not Python, JSON into the database.
They can extend the database itself; whether they go the whole hog and put in transactions as well, that's probably something they may be thinking about, but not at the moment. >> Well, but they, you know, they obviously have to have TAM expansion designs, because, Marc, I mean, you know, if they just get 100% of the data warehouse market, they're probably at a third of their stock market valuation. So they had better have, you know, a roadmap and plans to extend there. But I want to come back, Marc, to this notion of, you know, the right tool for the right job, or, you know, best of breed for a specific job, you know, horses for courses, versus this kind of notion of all-in-one. I mean, they're two different ends of the spectrum. You're seeing, you know, Oracle obviously very successful based on these ratings and based on, you know, their track record. And Amazon, I think I lost count of the number of data stores (Dave chuckles) with Redshift and Aurora and Dynamo, and, you know, on and on and on. (Marc talks indistinctly) So they clearly want to have those, you know, primitives, you know, different APIs for each access pattern; completely different philosophies, it's like Democrats or Republicans. Marc, your thoughts as to who ultimately wins in the marketplace. >> Well, it's hard to say who is ultimately going to win, but if I look at Amazon, Amazon is an à la carte type of system. If you need time series, you go with their time series database. If you need a data warehouse, you go with Redshift. If you need transactions, you go with one of the RDS databases. If you need JSON, you go with a different database. Everything is a different, unique database. Moving data between these databases is far from simple. If you need to do analytics on one database from another, you're going to use other services that cost money. So yeah, each one will do what they say it's going to do, but it's going to end up costing you a lot of money when you do any kind of integration.
And you're going to add complexity, and you're going to have errors. There's all sorts of issues there. So if you need more than one, it's probably not your best route to go, but if you need just one, it's fine. And on Snowflake, you raised the issue that they're going to have to add transactions; they're going to have to rewrite their database. They have no indexes whatsoever in Snowflake. I mean, part of the simplicity that David talked about is because they had to cut corners, which makes sense. If you're focused on the data warehouse, you cut out the indexes, great, you don't need them. But if you're going to do transactions, you kind of need them. So you're going to have to do some more work there. So... >> Well... So, you know, I don't know. I have a different take on that, guys. I'm not sure if Snowflake will add transactions. I think maybe, you know, their hope is that the market that they're creating is big enough. I mean, I have a different view of this, in that I think the data architecture is going to change over the next 10 years, as opposed to having a monolithic system where everything goes through that big data platform, the data warehouse and the data lake. I actually see what Snowflake is trying to do, and, you know, I'm sure others will join them, is to put data in the hands of product builders, data product builders or data service builders. I think they're betting that that market is incremental, and maybe they don't try to take on... I think it would maybe be a mistake to try to take on Oracle. Oracle is just too strong. I wonder, David, if you could comment. So it's interesting to see how strongly Gartner rated Oracle in cloud database, 'cause you don't... I mean, okay, Oracle has got OCI, but, you know, you think cloud, you think Amazon, Microsoft and Google. But if I have a transaction database running on Oracle, it's very risky to move that, right? And so we've seen that, it's interesting.
Amazon's a big customer of Oracle, Salesforce is a big customer of Oracle. You know, Larry is very outspoken about those companies. SAP customers are many; most are using Oracle. You know, it's not likely that they're going anywhere. My question to you, David, is first of all, why do they want to go to the cloud? And if they do go to the cloud, is it logical that the least risky approach is to stay with Oracle, if you're an Oracle customer, or Db2, if you're an IBM customer, and then move those other workloads that can move, whether it's more data warehouse oriented or incremental transaction work that could be done in Aurora? >> I think the first point, why should Oracle go to the cloud? Why has it gone to the cloud? And if there is a... >> More so... More so why would customers of Oracle... >> Why would customers want to... >> That's really the question. >> Well, Oracle have got Oracle Cloud@Customer, and that is a very powerful way of doing it, where exactly the same Oracle system is running on premises or in the cloud. You can have it where you want, you can have them joined together. That's unique. That's unique in the marketplace. So that gives them a very special place in large customers that have data in many different places. The second point is that moving data is very expensive. Marc was making that point earlier on. Moving data from one place to another place between two different databases is a very expensive architecture. Having the data in one place where you don't have to move it, where you can go directly to it, gives you enormous capabilities for a single database, a single database type. And I'm sure that from an analytic point of view, that's where Snowflake is going, to a large single database. But where Oracle is going is where you combine both the transactional and the other one.
And as you say, the cost of migration of databases is incredibly high, especially transaction databases, especially large, complex transaction databases. >> So... >> And it takes a long time. So at least a two-year... And it took five years for Amazon to actually succeed in getting a lot of their stuff over. And in those five years they could have been doing an awful lot more with the people that they used to bring it over. So it was a marketing decision as opposed to a rational business decision. >> It's the holy grail of the vendors; they all want your data in their database. That's why Amazon puts so much effort into it. Oracle is, you know, obviously in a very strong position. It's got growth in its new stuff. The problem Oracle has, like many of the legacy vendors, is that the size of the install base is so large and it's shrinking. The legacy stuff is shrinking, and the new stuff is growing very, very fast, but it's not large enough yet to offset that. You see that in all the earnings. So very positive news on, you know, the cloud database, and they've just got to work through that transition. Let's bring up slide number five, because Marc, this is to me the most interesting. So we've just shown all this detailed analysis from Gartner, and then you look at the Magic Quadrant for cloud databases. And, you know, despite Amazon being behind, you know, Oracle, or Teradata, or whomever in every one of these ratings, they're up to the right. Now, of course, Gartner will caveat this and say it doesn't necessarily mean you're the best, but of course everybody wants to be in the upper right. We all know that, but it doesn't necessarily mean that you should go buy that database; I agree with what Gartner is saying. But look at Amazon, Microsoft and Google, they're like one, two and three. And then of course, you've got Oracle up there and then, you know, the others.
So I found that very curious; it's like there was a dissonance between the hardcore ratings and then the positions in the Magic Quadrant. Why do you think that is, Marc? >> You know, it didn't surprise me in the least, because of the way that Gartner does its Magic Quadrants. How high up you go on the vertical is very much tied to the amount of revenue you get in the specific category for which they're doing the Magic Quadrant. It doesn't have anything to do with the revenue from anywhere else, just that specific quadrant, that specific type of market. So when I look at it, Oracle's revenue, still a big chunk of it comes from on-prem, not in the cloud. So you're looking just at the cloud revenue. Now, on the right side, moving to the right of the quadrant, that's based on functionality, capabilities, the resilience, other things other than revenue. So visionary says, hey, how far are you on the visionary side? Now, how they weight that again comes down to Gartner's experts and how they want to weight it and what makes more sense to them. But from my point of view, the right side is as important as the vertical side, 'cause the vertical side doesn't measure the growth rate either. And if we look at these, some of these are growing much faster than the others. For example, Snowflake is growing incredibly fast, and that doesn't reflect in these numbers, from my perspective. >> Dave: I agree. >> Oracle is growing incredibly fast in the cloud. As David pointed out earlier, it's not just in their cloud where they're growing, but it's Cloud@Customer, which is basically an extension of their cloud. I don't know if that's included in these numbers or not on the revenue side. So there's... There're a number of factors... >> Should it be, in your opinion, Marc? Would you include that in your definition of cloud? >> Yeah. >> The things that are hybrid and on-prem, would that count as cloud... >> Yes. >> Well especially... Well, again, it depends on the hybrid.
For example, if you have your own license, on your own hardware, but it connects to the cloud, no, I wouldn't include that. If you have a subscription license and subscription hardware that you don't own, but it's owned by the cloud provider, and it connects with the cloud as well, that I would. >> Interesting. Well, you know, to your point about growth, you're right. I mean, it's probably looking at, you know, revenues looking, you know, backwards, and for guys like Snowflake it will be double by the next one of these. It's also interesting to me on the horizontal axis to see Cloudera and Databricks further to the right than Snowflake, because that's kind of the data lake crowd. >> It is. >> And then of course, you've got, you know, the other... I mean, database used to be boring, so... (David laughs) It's such a hot market space here. (Marc talks indistinctly) David, your final thoughts on all this stuff. What does the customer take away here? What should I... What should my cloud database management strategy be? >> Well, I was positive about Oracle; let's take some of the negatives of Oracle. First of all, they don't make it very easy to run on other platforms. So they have put in terms and conditions which make it very difficult to run on AWS, for example; you get double counts on the licenses, et cetera. So they haven't played well... >> Those are negotiable, by the way. You bring it up as the customer. You can negotiate that one. >> They can be, yes. If you're big enough, they are negotiable. But Oracle certainly hasn't made it easy to work with other plat... other clouds. What they did very... >> How about Microsoft? >> Well, no, that is exactly what I was going to say. Oracle, with adjacent workloads, has been working very well with Microsoft, and you can then use Microsoft Azure with an Oracle database adjacent in the same data center, integrated very nicely indeed.
And I think Oracle has got to do that with AWS, and it's got to do that with Google as well. It's got to provide a service for people to run things where they want to run them, not just on the Oracle cloud. If they did that, that would, in my opinion, be a very strong move and would make the capabilities available in many more places. >> Right. Awesome. Hey Marc, thanks so much for coming on theCUBE. Thank you, David, as well, and thanks to Gartner for doing all this great research and making it public on the web. If you just search "critical capabilities for cloud database management systems for operational use cases," that's a mouthful, and then do the same for analytical use cases, and then the Magic Quadrant for cloud database management systems, that's the third doc, you'll get about two hours of reading. I learned a lot, and I learned a lot here too. I appreciate the context, guys. Thanks so much. >> My pleasure. All right, thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll see you next time. (upbeat music)

Published Date : Dec 18 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
David Floyer | PERSON | 0.99+
Rick Greenwald | PERSON | 0.99+
Dave | PERSON | 0.99+
Marc Staimer | PERSON | 0.99+
Marc | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Adam Ronthal | PERSON | 0.99+
Don Feinberg | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Larry | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
December 2020 | DATE | 0.99+
IBM | ORGANIZATION | 0.99+
Henry Cook | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
two | QUANTITY | 0.99+
five years | QUANTITY | 0.99+
Gartner | ORGANIZATION | 0.99+
Merv Adrian | PERSON | 0.99+
100% | QUANTITY | 0.99+
second point | QUANTITY | 0.99+

Cultivating a Data Fluent Culture | Beyond.2020 Digital


 

>> Hello, everyone, and welcome to the Cultivating a Data Fluent Culture track. My name is Paula Johnson. I'm ThoughtSpot's head of community, and I am so excited to be your host here at Beyond. One of my favorite things about Beyond is connecting with everyone and just feeling that buzz and energy from you all. So please don't be shy, and engage in the chat. I'll be there shortly. We all know that when it comes to being fluent in a language, it's all about how you take data, make sense of it, and turn it into action. We've seen that in the hands of employees. Once they have access to this information, they are more engaged in their role. They're more productive, and most importantly, they're making better decisions. I think all of us want a little bit more of that, don't we? In today's track, you'll hear from expert partners and our customers on best practices that you can start applying to build that data fluent culture in your organization, which we're seeing is powering the digital transformation across all industries. We'll also discuss the role that the analyst of the future plays when it comes to this cultural shift, and how important diversity in data is for helping us prevent bias at scale. To start us off, our first session of the day is Cultivating a Data Fluent Culture: The Essence and Essentials. Our first speaker is the CEO and founder of the Data Lodge, Valerie Logan. Valerie, thank you for joining us today. I'm passing it over to you now. >> Excellent. Thank you so much. It's so great to be here with the ThoughtSpot family, and there is nothing I would love to talk about more than data literacy and data fluency. And I just want to take a second and acknowledge, I love how ThoughtSpot refers to this as data fluency, because I really see data literacy and fluency at, you know, either end of the same spectrum.
And to mark that, to commemorate that, I have decorated the Scrabble board for today's occasion, with fluency and literacy intersecting right at the center of the board. So with that, let's go ahead and get started talking about how you cultivate a data fluent culture. So in today's session, I am thrilled to be able to talk through a few dynamics around what's going on in the market around this area, who the pioneers are and what they are doing to drive a data fluent culture, and what you can do about it. What are the best practices that you can apply to start this momentum? And it's really a movement. So how do you want to play a part in this movement? So, the market and the myths. Um, you know, it's 2020. We have had what I would call an unexpected awakening for the topic of data literacy and fluency. So let's just take a little trip down memory lane. So over the last few years, data literacy and data fluency have been emerging as part of the chief data officer agenda. Analytics leaders have been looking at data culture, um, and the upskilling of the workforce as a key cornerstone to how you create a modern data and analytics strategy. But often this has been viewed as kind of just training, or visualization, or, um, a lot of focus on the upskilling side of data literacy. So there have been some great developments over the past few years. I was leading research at Gartner on this topic, and there's other work around assessments and training resources. But if I'm really honest, a lot of this has been somewhat viewed as academic and maybe a bit abstract. Enter the year 2020, where data literacy just got real, and it really can no longer be ignored. And the COVID pandemic has made this personal for all of us, not only in our work roles but in our personal lives, with our friends and families trying to make critical life decisions. So what I'd ask you to do is just to appreciate that this topic is no longer just a work thing.
It is personal, and I think that's one of the ways you start to really crack the culture code: how do you make this relevant to everyone in their personal lives? And unfortunately, COVID did that, and it has brought it to the forefront. But the challenge is, how do analytics leaders balance the need to upskill the workforce and the culture with all of these competing needs around modernizing the platform and, um, driving trusted data and data governance? So that's what we'll be exploring: how to do this in parallel. So the very first thing that we need to do is start with a definition, and I'd like to share with you how I frame data literacy for any industry across the globe. Which is, first of all, to appreciate that data literacy as a foundational capability has really been elevated now as an equivalent to people, process and technology. And, you know, if you've been around a while, you know that classic trinity of people, process and technology. It's the way that we have thought about how you change an organization. But with the digitization of our work, our lives, our society, you know, anything from how we consume information to how we serve customers... Um, you know, we're walking sensors with our smartphones. Our worlds are digital now, and so data has been elevated as an equivalent vector to people, process and technology. And this is really why the role of the chief data officer and the analytics leader has been elevated to a C-suite role. And it's also why data literacy and fluency is a workforce competency, not just for the specialists. So, you know, I'm an old math major, a quant, so I've always kind of appreciated the role of data, but now it's prevalent to all, right, in work and in life. So this is a mindset shift. And in addition to the mindset shift, let's look at what really makes up the elements of what it means to be data literate.
So I like to call it the ability to read, write and communicate with data in context, in both work and life, and it has two pieces. It has a vocabulary, and the vocabulary includes three basic sets of terms. So it includes data terms, obviously: data sources, data attributes, data quality. There are analysis methods and concepts and terms; you know, it could be anything from a bar chart to an advanced machine learning algorithm. And then there are the value drivers, right? The business acumen, what problems we're solving. So if you really break it down, it's those three sets of terms that make up the vocabulary. But it's not just the terms. It's also what we do with those terms: the skills. And the skills, I like to refer to those with the acronym T-E-A: How do you Think, how do you Engage with others, and how do you Act, or apply, with data constructively? So hopefully that gives you a good basis for how we think about data literacy. And of course, the stronger you get in data literacy, the more it drives you towards higher degrees of data fluency. So I like to say we need to make this personal. And when we think about the different roles that we have in life and the different backgrounds that we bring, we think about the diversity and the inclusion of all people and all backgrounds. Diversity, to me, in addition to diversity of our gender identification and diversity of our racial backgrounds and histories, is also what our work experience and our life experience is. So one of the things I really like to do is to use this quote when talking about data literacy, which is: we don't see things as they are, we see them as we are. So what we do is we create permission to say, you know what, it's okay that maybe you have some fear about this topic, or you may have some vulnerability around using, um, you know, interactive dashboards. Um, you know, it's all about how we each come to this topic and how we support each other.
So what I'd like to do is just describe how we do that, and the way that I like to teach that is this idea that we foster data literacy by acknowledging that you learn this language through embracing it, like learning a second language. So just take a second and think about, you know, what languages you speak, right? And maybe it's one, maybe it's two, often there's, you know, multiple. But you can embrace data literacy and fluency like it's a language, and somehow that creates permission for people to just say, you know, it's okay that I don't necessarily speak this language, but I can try. So, the way that we like to break this down, I call this ISL, Information as a Second Language, built off of the ESL construct of English as a Second Language, and it starts with that basic vocabulary, right? Every language has a vocabulary, and what I mentioned earlier in the definition is this idea that there are three basic sets of terms: value, information and analysis. And everybody, when they're learning things like this, has a little mnemonic, right? So this is called the VIA model, and you can take this and you can apply it to any use case. And you can welcome others into the conversation and say, you know, I really understand the V and the I, but I'm not a quant, I don't understand the A. So even just having this basic little triangle called the VIA model starts to create a frame for a shared conversation. But it's not just the vocabulary. It's also about the dialects. So if you are in a hospital, you talk about patient outcomes. If you are in insurance, you talk about underwriting and claims-related outcomes. So the beauty of this language is there is a core construct for a vocabulary, but then it gets contextualized. And the beauty of that is, even if you're a classic business person and you don't think you're a data and analytics person, you bring something to the party.
You bring something to this language, which is that you understand the value drivers. So hopefully that's a good basis for you. But it's not just the language. It's also the constructs: how do you think, how do you interact, and how do you add value? So here's a little double-click of the T-E-A acronym to show you. It's: are you aware of context? So when you're watching the news, which can be interesting these days, are you actually stepping back, taking pause, and saying, I wonder what the source of that is? I wonder what the assumptions are? Or when you're interacting with others, what is your ability to tell a data story, right? Do you have comfort and confidence interacting with others? And then, on the applying side, at the end of the day this is all about helping people make decisions. So when you're making a decision, are you being conscientious of the ethics, right, the ethics or the potential bias in what you're looking at and what you're potentially doing? So I hope this provides you a nice frame. If you take nothing else away, take away the VIA model as a way to think about a use case and application of data, and that there are different dialects. So when you're interacting with somebody, think of what dialect they are speaking. And then there are these three basic skill sets that we're helping the workforce to upskill on. But the last thing is, um, you know, there are different levels of proficiency, and this is the point of literacy versus fluency. Depending on your role, not everyone needs to speak data at the same level. So what we're trying to do is get everyone at least to a shared level of conversational data, right? A basic level of foundational literacy. But based on your role, you will develop different degrees of fluency. The last point of treating this as a language is the idea that we don't just learn language through training. We learn language through interaction and experience. So I would encourage you.
Just think about what all the different ways are that you can learn a language, and apply those to your relationship with data. Hopefully that makes sense. Um, there are a few myths out there around this topic of data literacy, and I just want to do a little myth busting real quickly, just so you can be on the lookout for these. So first of all, data literacy is not just about training. Training and assessments are certainly a cornerstone. However, when you think about developing a language, yeah, you can use a Rosetta Stone or one of those techniques, but that only gets you so far. It's the conversations you have. It's immersion. So keep in mind, it's not just about training; there are many ways to develop language. Secondly, data literacy is not just about internal structured data and statistics. There are so many different types of data sets, audio, video, text, um, and so many different methods for synthesizing that content. So keep in mind, this isn't just about classic data and methods. The third is, visualization and storytelling are such a beautiful way to bring data literacy to life, but it's not only about visualization and storytelling, right? So there are different techniques, there are different methods, and we'll talk in a minute about how ThoughtSpot is embedding a lot of the data literacy capabilities into the environment. So it's not just about visualization and storytelling, and it's certainly not about making everybody a junior data scientist. The key is to identify, you know, if you are a call center representative, if you are an operations manager, if you are the CEO, what is the appropriate profile of literacy and fluency for you? The last point, and hopefully you get this by now, is this is not just a work skill. And I think this is one of the best, um, services that we can provide to our employees: when you train an employee and help them upskill their data fluency,
you're actually upskilling the household and their friends and their family, because you're teaching them, and then they can continue to teach. So at the end of the day, when we talk about the needs and drivers, like, where's the return, and what are the main objectives of, you know, having the C-suite embrace data literacy as a program? There are primarily four key themes that come up, that I hear all the time and that I work with clients on. Number one is, this is how you help accelerate the shift to a data-informed, insight-driven culture. Or I actually like how ThoughtSpot refers to signals, right? So it's not even just insights, it's how do you distill all this noise and respond to the signals, but do that collectively and culturally. Secondly, this is about unlocking what I call radical collaboration. While these terms are often viewed as, oh, we need to upskill the full population, this is as much about unlocking how data scientists, data engineers and business analysts collaborate. There is work to be done there, an opportunity there. The third is, yes, we need to do this in the context of upskilling for digital dexterity. What I mean by that is, data literacy and fluency sit in the context of a whole series of other upskilling objectives. So becoming more agile, understanding process automation, understanding, um, the broader abilities, you know, AI and Internet of Things sensors, right? So this is part of a portfolio of upskilling. But at the end of the day, it comes down to comfort and confidence. If people are not comfortable with decision making in their role, at their level, in those moments that matter, you won't get the kind of engagement. So this is also about fostering comfort and confidence. The last thing is, you know, you have so much data and analytics talent in your organization, and what we want to do is maximize that talent.
We really want to reduce dependency on reports and "hey, can you put that together for me," and really enable not just self-service but democratized access, creating that freedom of access but also freed-up capacity. So if you're looking to build the case for a program, these are the primary four drivers. You can identify clear ROI, and I refer to ROI two ways: return on investment, and also risk of ignoring. So you've got to be careful; if you ignore these, they're going to come back to haunt you later. So hopefully this helps you build the case. So let's take a look at what a data literacy program is. It's one thing to say, yeah, that sounds good, but how do you collectively and systemically start to enable this culture change? So, in pioneering data literacy programs, I like to call a data literacy program a commitment. Okay, this is an intentional commitment to upskill the workforce and the culture, and there are really three pieces to that. The first is, it has to be scoped to say we are about enabling the full potential of all associates. And some of my clients are extending that beyond the virtual walls of their organization. I'm working with a U.S. federal agency, and they're talking about data literacy for citizens, right, extending it outside the wall. So it's really about all your constituents and associates. Secondly, it is about fostering shared language and the modern data literacy abilities. The third is putting a real focus on the moments that matter. With any kind of heavy change program, there's always a risk that it can get very vague. So here are some examples of the moments that you're really trying to identify, the moments that matter. We do that through three things; I'll just paint those real quick. One is engagement: how do you engage with the leaders, how do you develop community, and how do you drive communications? Secondly, we do that through development.
We do that through language development, explicit self-paced learning, and then of course broader professional development and training. The third area is enablement. This one is often overlooked in any kind of data literacy program, and this is where ThoughtSpot is driving innovation left and right. This is about augmentation of the experience. If we expect data literacy and data fluency to be developed only through training, and not by augmenting the experience in the environment, we will miss a huge opportunity. So ThoughtSpot's announcement yesterday with Search Assist is a beautiful example of how we are augmenting guided data literacy, right, to support an end user in asking data-rich questions and not expect them to have to know all the forms and features. It's no different than how a GPS does not tell you latitude and longitude; a GPS tells you turn left, turn right. So the ability to augment that the way that ThoughtSpot does is so powerful. And one of my clients calls it data literacy by design: how are we designing that into the environment? And at the end of the day, the last and fourth lever of how you drive a program is, you've got to have someone orchestrating this change. There is an art and a science to data literacy program development. So, a couple of examples of pioneers. One pioneer, Nationwide Building Society, um, incredible work on how they are leveraging ThoughtSpot, in particular to have conversations with data. They are creating frictionless voyages with data, and they're using the SpotIQ tool to recommend personalized insights, right? This is an example of that enablement that I was just explaining. Second example, Red Hat. They like to describe this as going farther, faster than with a small group of experts. They also refer to it as supporting data conversations, again with that idea of language. So what's the difference between pioneers and procrastinators?
Because what I'm seeing in the market right now is we've got these frontline pioneers who are driving these programs, but then there's kind of a DIY, do-it-yourself mentality going on. So I just wanted to share what I'm observing as this contrast. Procrastinators are kind of thinking, I have no idea where to even start with this, whereas pioneers are saying, you know what, this is absolutely central, let's figure it out. Procrastinators are saying, you know what, this probably isn't the right time for this program, other things are more important. And pioneers are like, you know what, we don't have an option; fast forward a year from now, do we really think this is going to organically change? This is pervasive to everything we do. Procrastinators are saying, I don't even know who to put in charge of this. And pioneers are saying, this needs a lead, this needs someone focusing on it, and a network of influencers. And then finally, procrastinators are generally going, you know, we're just going to wing this, we'll just stand up an academy, we'll put some courses together. And pioneers are saying, you know what, we need to work smart. We need to launch, we need to leverage, and we need to scale. So I hope that this has inspired you that, you know, there really are "many ways to go forward," as FDR said, "and only one way of standing still." So not taking an action is a choice, and, you know, it does have impact. So a couple of quick things to wrap up. One is, how do you get started with a data literacy program? I recommend seven steps. Who's your sponsor, and who is the lead? Craft your case for change; make it explicit, develop that narrative. Craft a blueprint that's scalable but that has an initial plan, where data literacy is part of, not separate. Run some pilot workshops. These can be so fun, and you can tackle the fear and vulnerability concern by really going after, like, how?
How do we speak data across different, diverse parts of the team? These are so fun. And what I find is when I teach people how to run a workshop like this, they absolutely want to repeat it, and they get demand for more and more workshops. Launch pragmatically, right? We don't have any time or energy for big, expansive programs. Identify some quick wins, ignite the grassroots movement, low cost. There are many ways to do that. Engage the influencers, right? Ignite this bottom-up movement and find ways to welcome all to the party. And then finally, you've gotta think about scale, right? Over time, this is a partnership with learning and development, a partnership with HR. This becomes the fabric of how do you onboard people. How do you sustain people? How do you develop? So the last thing I wanted to just caution you on is there's a few kind of big mistakes in this area. One is you have to be clear on what you're solving for, right? What does this really mean? What does it look like? What are the needs and drivers? Where is this being done well today? Be very clear on what you're solving for. Secondly, language matters, right? If that has not been clear, language is the common thread and it is the basis for literacy and fluency. Third, going it alone. If you try to tackle this and try to wing it, Google-searching data literacy, you will spend your time and energy, which is as precious a currency as your money, on efforts that, um, take more time. And there is a lot to be leveraged through various partnerships and leverage of your vendor providers like ThoughtSpot. Last thing. A quick story. Um, over 100 years ago, Ford Motor Company: think about who the worker population was in the plants. They were immigrants coming from all different countries, having different native languages. What was happening in the environment in the plants is they were experiencing significant safety issues and efficiency issues. The root issue was lack of a shared language. 
I truly believe that we're at the same moment, where we're lacking a shared language around data. So what Ford did was they created the Ford English School and they started to nurture that shared language. And I believe that that's exactly what we're doing now, right? So I couldn't leave this picture, though, and not acknowledge: not a lot of diversity in that room. So I know we would have more diversity now if we brought everyone together. But I just hope that this story resonates with you as the power of language as a foundation for growing literacy and fluency. >> Thanks for joining us. We're actually gonna be jumping into the next section, so grab a quick water break, but don't wander too far. You definitely do not want to miss the second session of today. We're going to be exploring how to scale the impact and how to become a change agent in your organization and become that analyst of the future. So season

Published Date : Dec 10 2020



Hardik Modi, NETSCOUT | CUBEConversations September 2020


 

>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, I'm Stu Miniman, and this is a special CUBE Conversation coming to us from our Boston area studio. We know that so much has changed in 2020 with the global pandemic on, with people working from home, staying safe is super important, and that especially is true when it comes to the threats that are facing us. So really happy to welcome to the program Hardik Modi, we're going to be talking about the NETSCOUT threat intelligence report for the first half of 2020. Hardik's the AVP of engineering for threat and mitigation products. Hardik, thanks so much for joining us. >> Thanks Stu, it's great to be here. Thanks for having me. >> Alright, so first set this up. This is, NETSCOUT does these threat reports on a pretty regular cadence, I have to think that the first half of 2020, we'll dig into this a little bit, is a little different because I know everybody when they had their plans at the beginning of 2020, by the time we got to March, we kind of shredded them and started over or made some serious adjustments. So why don't you introduce us to this? And then we'll talk specifically about the first half 2020 results. >> Right, thanks, Stu. So I'm here to speak about the fifth NETSCOUT threat intelligence report. So this is something that we do every six months in my team, in particular, the NETSCOUT threat intelligence organization, we maintain visibility across the internet and in particular threat activity across the internet, and very specifically with a strength in DDoS activity. And so, you know, there's a lot of data that we have collected. There's a lot of analysis that we conduct on a regular basis. And then every six months, we try to roll this up into a report that gives you a view into everything that's happened across the landscape. So this is our report for the first half of the year. 
So through June 2020, and yes, you know, as we came into March 2020, everything changed. And in particular, when, you know, the pandemic kind of set upon us, you know, countries, entire continents went into lockdown and we intuited that this would have an impact on the threat landscape. And you know, this is even as we've been reporting through it, this is our first real roll-up and look at really everything that happened and everything that changed in the first half of 2020. >> Yeah. It absolutely had such a huge impact. You know, my background, Hardik, is in networking. You think about how much over the last decade we've built out, you know, those corporate networks, all the Wi-Fi environments, all the security put there, and all of a sudden, well, we had some people remote, now everybody is remote. And you know, that has a ripple on corporate IT as well as, you know, those of us at home that have to do the home IT piece there. So why don't you give us a look inside the report? What are some of the main takeaways that the report had this time? >> No, so you're right, the network became everything for us and the network became how we, how our students attended school, right? And how we did our shopping, you know, how we did certainly finance and most definitely how for a lot of us how we did work, and suddenly the network, which, you know, certainly was a driver for productivity, and just business worldwide suddenly became that much more central. And so, we tend to look at the network, both sort of at the enterprise level, but then also a lot of what we get to see is at the service provider level. So what's happening on the big networks worldwide, and that's what we've rolled up into this report. So a few things that I want to kind of highlight from the report, the first thing is there were a lot of DDoS attacks. So we recorded through our visibility, 4.83 million DDoS attacks in the first six months of the year. That's almost 30,000 attacks a day. 
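A quick back-of-the-envelope check on those volume figures. This is a hypothetical sketch, not NETSCOUT's methodology: the 182-day window and the implied 2019 baseline are inferred from the numbers quoted in the interview, not taken from the report itself.

```python
# Sanity check on the H1 2020 DDoS volume figures quoted above.
# H1 2020 spans January 1 through June 30 of a leap year: 182 days.
total_attacks = 4_830_000
days_in_h1 = 31 + 29 + 31 + 30 + 31 + 30  # 182

per_day = total_attacks / days_in_h1
print(f"{per_day:,.0f} attacks per day")  # ~26,500, i.e. "almost 30,000"

# A 15% year-on-year rise for the half implies this H1 2019 baseline:
h1_2019 = total_attacks / 1.15
print(f"Implied H1 2019 volume: {h1_2019:,.0f} attacks")  # 4,200,000
```

Even the rounded daily rate makes the point the conversation goes on to draw: there are nowhere near that many visible outages a day, yet the onslaught is constant.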
And you know, it's not like we hear about 30,000 outages every day. Certainly aren't 30,000 outages every day, but you know, this is an ongoing onslaught, for anybody who exists on the internet, and this didn't abate at all through the first half of the year. If you kind of go like, just look at the numbers, it went up 15% for the same period year on year. But then as you enter into March, and in particular, the date when the WHO sort of announced the global pandemic, that's essentially the start that we marked. From that day onwards, the rise in attacks year on year for the same period, you know, a year ago was 25%. So that really, just in sheer numbers a lot changed. And then, you know, as we go a level deeper, and we look at like the nature of these attacks. You know, a lot of that actually has evolved considerably, over the past few years. And then in particular, like we're able to highlight a few stats in the first half of the year, and certainly like a lot of the drivers for this, the technical drivers are understood. And then there's just the human drivers for this, right? And we understand that a lot more people are at home. A lot more people are reliant on the internet and, you know, just sad to say, but you know, certainly also a lot more people aren't as engaged with school, with work, with society at large. And these tend to have knock on effects across large, a lot of things that we do in life, but also in like cyber crime and in particular, like in the DDoS space. >> Maybe if you could for our audience, I think they're in general familiar with DDoS, it's typically when, you know, sites get overwhelmed with traffic, different from, say, everybody working at home needing to be a little bit more cautious about phishing attacks. You're getting, you know, links and tax links in email, "Super important thing, please check this," please don't click those links. 
Does this impact, you know, those workers at home or is it, you know, all the corporate IT and all the traffic going through those that there's ways that they can stop, halt that, or, you know, interfere, get sensitive data? >> That's a really good point. And in large parts, I mean, and like with a lot of other kind of cyber crime activity, this is primarily felt inside the enterprise. And so the, as far as like, you know, companies are concerned and people who are using VPN and other kinds of remote access to get to critical resources, the key challenge here is the denial of availability. And so, okay. So you're right. Let's take a step back. DDoS, distributed denial of service. This is typically when like a large plurality of devices are used to direct traffic towards a device on the internet. And we typically think of this as a site. And so maybe, your favorite newspaper went down because of a DDoS attack, or you couldn't get to your bank or your retail, you know, e-commerce as a result of the DDoS attack, but this plays out in many different ways, including the inability for people to access work, just because their VPN concentrators have been DDoSed. I think, you know, just coming back to the split between people who work for a company and the company themselves, ultimately it's a shared responsibility, there's some amount of best practices that employees can follow. I mean, a lot of this enforcement and, you know, primarily ensuring that your services are running to expectation, as always, there's going to be the responsibility of the enterprise and something that enterprise security typically will want to cater for. >> All right. And how are these attacks characterized? You said it was up significantly 15% for the half year, overall, 25% overall, anything that differentiates big attacks, small attacks? Do we know how many of them actually freeze a site or pause how much activity is going on? 
>> Right, so what I will say is that within just those numbers, and we're simply just counting attacks, right? Even within those numbers, a key aspect that has changed is the rise in what we call multi-vector attacks. And so these are attacks in which they're, you go back maybe five years, certainly like going back further, typically a DDoS attack would involve a single technique that was being used to cause damage. And then over time, as many techniques were developed and new vulnerable services are discovered on the internet, what we find is that there's, you know, occasionally there would be a combination of these vectors, as we call them, being used against the target. And so a big thing that has changed within the last two years is what we think of as the rise in multi-vector attacks. And what we are seeing is that attacks that involve even 15 separate vectors are up considerably, over 1000% compared to the same time last year, and correspondingly attacks that involve a single vector are down in a really big way. And so we're just seeing a shift in the general, the techniques that are used within these attacks, and, you know, that has been considerable over certainly, you know, the same time 2019. But if you go back two years, even, it would seem like a complete sea change. >> What other key things, key learnings did you have from the survey this year that you can share? >> Yeah, so one thing I want to highlight that, you know, we kind of, and I think it's been implicit in some of your questions, certainly in many conversations that I have, like, what is the cost of these attacks? You know, what is ultimately the impact of these attacks on society? And one of the ways in which we tend to think of the impact is in simply like outages, like an e-commerce site that does a certain amount of business every day, you know, they can easily recognize that "All right, if I'm off for a day, for two days, for seven days, here's the impact to my business." 
So that tends to be understood at the individual enterprise level. Another cost that often is well recognized is the cost of mitigating attacks. And so now there's, whether it's the service provider, the enterprise themselves, other forms of business or other entities who will invest in mitigation techniques and capacity, those costs tend to kind of rack up. What we have done, and thanks to our kind of really unique visibility into service provider networks worldwide. What we've been able to do is extract essentially the, what we call the DDoS attack coefficient. And this is, think of it as like, here's how much DDoS attack traffic is going on worldwide or across any set of networks at any given time. So if you had zero DDoS in the world, that number will be zero, but it most definitely is not. You know, there's, we have represented numbers for different parts of the world. This can be many, many, many gigabits per second, many terabits per second. And essentially there's a, even just a transit cost for carrying this traffic from one point to another. And that is actually like the, you know, what we call the DDoS attack coefficient. And that cost is something that I want to highlight is being borne by everyone. So this ultimately is what shows up in your internet bills, whether you're a residential subscriber, whether you're using your phone and paying for internet through your phone, or you're an enterprise, and now you have network connections for your service providers, because ultimately this is a cost that we're bearing as a society. This is the first time that we've actually conducted research into this phenomenon. And I'm proud to say that we've captured this split across multiple geographies of the world. >> Yeah. It's been a big challenge these days. The internet is a big place, there's worry about fragmentation of the internet. 
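The "DDoS attack coefficient" described above — a sustained load of attack traffic that someone has to pay to carry — can be illustrated with a toy calculation. The function name, the 800 Gbps load, and the $0.50-per-Mbps transit price below are all invented for illustration; none of them come from the NETSCOUT report.

```python
def transit_cost_per_month(attack_gbps: float, usd_per_mbps_month: float) -> float:
    """Cost of carrying a sustained attack-traffic load for one month.

    Both the load and the price are illustrative inputs, not report data.
    """
    attack_mbps = attack_gbps * 1_000  # 1 Gbps = 1,000 Mbps
    return attack_mbps * usd_per_mbps_month

# Suppose one region carries a sustained 800 Gbps of attack traffic and
# transit runs $0.50 per Mbps per month (made-up numbers):
cost = transit_cost_per_month(800, 0.50)
print(f"${cost:,.0f} per month")  # $400,000 per month
```

Summing the same calculation across regions is all the "coefficient" framing requires: whatever the sustained load, the carrying cost ultimately lands in everyone's internet bills.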
There's worry about some of the countries out there, as well as some of the large, multinational global companies out there, really are walling off their piece of the internet. Hardik, one thing I'm curious about, we talked about the impact of work from home and having a more distributed workforce. One of the other big mega trends we've been seeing even before 2020 is the growth of edge computing. You talk about the trillions of IoT devices that will be out there. Does DDoS play into this? You know, I just, the scenario runs through my mind. "Okay, great. We've got all these vehicles running that have some telemetry," all of a sudden, if they can't get their telemetry, that's a big problem. >> Yeah. So this is both the, this is the devices themselves and the, basically the impact that you could see from an attack on them. But more often what we see on the internet in the here and now is actually the use of these devices to attack other more established entities on the internet. So then, so for us now, for many years, we've been talking about the use of IoT devices in attacks, and simply the fact that so many devices are being deployed that are physically, they're vulnerable from the get-go, insecure at birth, essentially, and then deployed across the internet. You know, even if they were secure to start, they often don't have update mechanisms. And now, they, over a period of time, new vulnerabilities are discovered in those devices and they're used to attack other devices. So in this report, we have talked about a particular family of malware called Mirai, and Mirai has been around since 2016, been used in many high profile attacks. And over time there have been a number of variations to Mirai. And, you know, we absolutely keep track of the growth in these variations and the kinds of devices where they attack. Sorry, that they compromise, and then use to attack other targets. 
We've also kind of gone into another malware family that has been talked about a bit called Lucifer, and Lucifer was another, I think originally more Microsoft Windows, so you're going to see it more on your classic kind of client and server kind of computing device. But over time, we've seen, we have reported on Linux variants of Lucifer that not only can be installed on Linux devices, but also have DDoS capabilities. So we're tracking like the emergence of new botnets. Still, Stu, going straight back to your question. They are, this is where IoT, you know, even for all the promise that it holds for us as society, you know, if we don't get this right, there's a lot of pain in our future just coming from the use of these devices in attacks. >> Well, I thought it was bad enough that we had an order of magnitude more surface area to defend against now, I hadn't really thought about the fact that all of these devices might be turned into an attack vector back on what we're doing. Alright, Hardik. So you need to give us some, the ray of hope here. We've got all of these threats out here. How's the industry doing overall defending against this, what more can be done to stop these threats? What are some of the actions people, and especially enterprise techs should be doing? >> Yeah, so I absolutely start with just awareness. This is why we publish the report. This is why we have resources like NETSCOUT Cyber Threat Horizon that provides continuous visibility into attack activity worldwide. So it absolutely just starts with that. We're actually, this is not necessarily a subject of the report because it's happened in the second half of the year, but there have been a wave of high profile attacks associated with extortion attempts, over the past month. And, these attacks aren't necessarily complex, like the techniques being used aren't novel. 
I think in many ways, these are the things that we would have considered maybe run of the mill, at least for us on the research side and the people who live this kind of stuff, but, they have been successful, and a number of companies right now, a number of entities worldwide right now are kind of rethinking what they're doing in particular DDoS protection. And for us, you know, our observation is that this happens every few years, where every few years, there's essentially a reminder that DDoS is a threat domain. DDoS typically will involve an intelligent adversary on the other side, somebody who wants to cause you harm. To defend against it, there are plenty of well known kind of techniques and methodology, but that is something that enterprises, all of us, governments, service providers, those of us on the research side have to kind of stay on top of, keep reminding ourselves of those best practices and use them. And, you know, I'll say that again, for me, the ray of hope is that we haven't seen a new vector in the first six months of the year, even as we've seen a combination of other known vectors. And so for these, just from that perspective, there's these attacks we should be able to defend against. So that's essentially where I leave this, in terms of the hope for the future. >> Alright, Hardik, what final tips do you have? How do people get the report itself and how do they keep up? Where do you point everyone to? >> Yes, so the report itself is going to be, is live on the 29th of September 2020. It will be available at NETSCOUT.com/threatreport. I'll also point you to another resource, Cyber Threat Horizon, that gives you more continuous visibility into attack activity, and that's NETSCOUT.com/horizon. And so these are the key resources that I leave you with, again, this is, there's plenty to be hopeful about. As I said, there hasn't been a new vector that we've uncovered in the first six months of the year, as opposed to seven vectors in the year 2019. 
So, that is something that certainly gives me hope. And, for the things that we've talked about in the report, we know how to defend against them. So, this is something that I think with action, we'll be able to live through just fine. >> Well, Hardik, thanks so much for sharing the data, sharing the insight, pleasure catching up with you. >> Okay. Likewise, Stu, thank you. >> All right, and be sure to check out theCUBE.net for all of the videos we have, including many of the upcoming events. I'm Stu Miniman and thank you for watching theCUBE. (calm music)

Published Date : Sep 30 2020



Hardik Modi, NETSCOUT | CUBEConversations


 

>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, I'm Stu Miniman, and this is a special CUBE Conversation coming to us from our Boston area studio. We know that so much has changed in 2020 with the global pandemic on, with people working from home, staying safe is super important, and that especially is true when it comes to the threats that are facing us. So really happy to welcome to the program Hardik Modi, we're going to be talking about the NETSCOUT threat intelligence report for the first half of 2020. Hardik's the AVP of engineering for threat and mitigation products. Hardik, thanks so much for joining us. >> Thanks Stu, it's great to be here. Thanks for having me. >> Alright, so first set this up. This is NETSCOUT does these threat reports and on a pretty regular cadence, I have to think that the first half of 2020, we'll dig into this a little bit, is a little different because I know everybody when they had their plans at the beginning of 2020, by the time we got to March, we kind of shredded them and started over or made some serious adjustments. So why don't you introduce us to this? And then we'll talk specifically about the first half 2020 results. >> Right, thanks, Stu. So I'm here to speak about the fifth NETSCOUT threat intelligence report. So this is something that we do every six months in my team, in particular, the NETSCOUT threat intelligence organization, we maintain visibility across the internet and in particular threat activity across the internet, and very specifically with a strengthened DDoS activity. And so, you know, there's a lot of data that we have collected. There's a lot of analysis that we conduct on a regular basis. And then every six months, we try to roll this up into a report that gives you a view into everything that's happened across the landscape. So this is our report for the first half of the year. 
So through June 2020, and yes, you know, as we came into March 2020, everything changed. And in particular, when, you know, the pandemic kind of set upon us, you know, countries, entire continents went into lockdown and we intuited that this would have an impact on the threat landscape. And you know, this is even as we've been reporting through it, this is our first drill of roll up and look at really everything that happened and everything that changed in the first half of 2020. >> Yeah. It absolutely had such a huge impact. You know, my background, Hardik, is in networking. You think about how much over the last decade we've built out, you know, those corporate networks, all the Wi-Fi environments, all the security put there, and all of a sudden, well, we had some people remote, now everybody is remote. And you know, that has a ripple on corporate IT as well as, you know, those of us at home that have to do the home IT piece there. So why don't you give us a look inside the report? What are some of the main takeaways that the report had this time? >> No, so you're right, the network became everything for us and the network became how we, how our students attended school, right? And how we did our shopping, you know, how we did certainly finance and most definitely how for a lot of us how we did work, and suddenly the network, which, you know, certainly was a driver for productivity, and just business worldwide suddenly became that much more central. And so, we tend to look at the network, both sort of at the enterprise level, but then also a lot of what we get to see is at the service provider level. So what's happening on the big networks worldwide, and that's what we've rolled up into this report. So a few things that I want to kind of highlight from the report, the first thing is there were a lot of DDoS attacks. So we recorded through our visibility, 4.83 million DDoS attacks in the first six months of the year. That's almost 30,000 attacks a day. 
And you know, it's not like we hear about 30,000 outages every day. Certainly aren't 30,000 outages every day, but you know, this is an ongoing onslaught, for anybody who exists on the internet, and this didn't update at all through the first half of the year. If you kind of go like, just look at the numbers, it went up 15% for the same period year on year. But then as you enter into March, and in particular, the date when the WHO sort of announced the global pandemic, that's essentially the start that we marked. From that day onwards, the rise in attacks year on year for the same period, you know, a year ago was 25%. So that really, just in sheer numbers a lot changed. And then, you know, as we go a level deeper, and we look at like the nature of these attacks. You know, a lot of that actually has evolved considerably, over the past few years. And then in particular, like we're able to highlight a few stats in the first half of the year, and certainly like a lot of the drivers for this, the technical drivers are understood. And then there's just the human drivers for this, right? And we understand that a lot more people are at home. A lot more people are reliant on the internet and, you know, just sad to say, but you know, certainly also a lot more people aren't as engaged with school, with work, with society at large. And these tend to have knock on effects across large, a lot of things that we do in life, but also in like cyber crime and in particular, like in the DDoS space. >> Maybe if you could for our audience, I think they're in general familiar with DDoS, it's typically when, you know, sites get overwhelmed with traffic, different from say, everybody working at home is it'd be a little bit more cautious about phishing attacks. You're getting, you know, links and tax links in email, "Super important thing, please check this," please don't click those links. 
Does this impact, you know, those workers at home or is it, you know, all the corporate IT and all the traffic going through those that there's ways that they can stop, halt that, or, you know, interfere, get sensitive data? >> That's a really good point. And in large parts, I mean, and like with a lot of other kind of cyber crime activity, this is primarily felt inside the enterprise. And so the, as far as like, you know, companies are concerned and people who are using VPN and other kinds of remote access to get to critical resources, the key challenge here is the denial of availability. And so, okay. So you're right. Let's take a step back. DDoS, distributed denial of service. This is typically when like a large polarity of devices are used to direct traffic towards a device on the internet. And we typically think of this as a site. And so maybe, your favorite newspaper went down because of a DDoS attack, or you couldn't get to your bank or your retail, you know, e-commerce as a result of the DDoS attack, but this plays out in many different ways, including the inability for people to access work, just because their VPN concentrators have been DDOSed. I think, you know, just coming back to the split between people who work for a company and the company themselves, ultimately it's a shared responsibility, there's some amount of best practices that employees can follow. I mean, a lot of this enforcement and, you know, primarily ensuring that your services are running to expectation, as always, there's going to be the responsibility of the enterprise and something that enterprise security typically will want to cater for. >> All right. And how are these attacks characterized? You said it was up significantly 15% for the half year, overall, 25% overall, anything that differentiates big attacks, small attacks? Do we know how many of them actually freeze a site or pause how much activity is going on? 
>> Right. So within just those numbers — and we're simply counting attacks — a key aspect that has changed is the rise in what we call multi-vector attacks. Go back maybe five years, certainly further, and a DDoS attack would typically involve a single technique being used to cause damage. Over time, as more techniques were developed and new vulnerable services were discovered on the internet, occasionally a combination of these vectors, as we call them, would be used against a target. The big change within the last two years is the rise in multi-vector attacks: attacks that involve as many as 15 separate vectors are up considerably — over 1000% compared to the same time last year — and correspondingly, attacks that involve a single vector are down in a really big way. So we're seeing a shift in the techniques used within these attacks, and that shift has been considerable even against the same period of 2019; if you go back two years, it would seem like a complete sea change. >> What other key learnings did you have from the survey this year that you can share? >> Yeah, one thing I want to highlight — and I think it's been implicit in some of your questions, certainly in many conversations I have — is: what is the cost of these attacks? What is ultimately their impact on society? One of the ways we tend to think of the impact is simply in outages. An e-commerce site that does a certain amount of business every day can easily recognize, "All right, if I'm off for a day, for two days, for seven days, here's the impact to my business."
So that tends to be understood at the individual enterprise level. Another cost that is often well recognized is the cost of mitigating attacks: service providers, enterprises, and other entities invest in mitigation techniques and capacity, and those costs rack up. What we have done — thanks to our really unique visibility into service provider networks worldwide — is extract what we call the DDoS attack coefficient. Think of it as: here's how much DDoS attack traffic is flowing worldwide, or across any given set of networks, at any given time. If there were zero DDoS in the world, that number would be zero — but it most definitely is not. We have presented numbers for different parts of the world; this can be many gigabits per second, even terabits per second. And there is a transit cost just for carrying this traffic from one point to another — that is what the DDoS attack coefficient captures. The point I want to highlight is that this cost is borne by everyone. It ultimately shows up in your internet bills — whether you're a residential subscriber, whether you're paying for internet through your phone, or whether you're an enterprise paying for network connections from your service providers — because this is a cost we're bearing as a society. This is the first time we've actually conducted research into this phenomenon, and I'm proud to say we've captured the split across multiple geographies of the world. >> Yeah. It's been a big challenge these days. The internet is a big place; there's worry about fragmentation of the internet.
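The attack-coefficient idea described here — aggregate concurrent attack traffic across a set of networks, then price it at a transit rate — can be sketched roughly as follows. The network names, traffic figures, and per-Gbps price below are all hypothetical illustrations; NETSCOUT's actual methodology is not spelled out in this conversation.

```python
# Hypothetical per-network observations of concurrent DDoS attack
# traffic, in gigabits per second (illustrative numbers only).
observations_gbps = {
    "AS-alpha": 120.0,
    "AS-beta": 45.5,
    "AS-gamma": 310.2,
}

def ddos_attack_coefficient(obs_gbps):
    """Aggregate attack traffic across networks at a point in time.
    With zero DDoS worldwide this would be zero, as noted above."""
    return sum(obs_gbps.values())

def transit_cost_usd(gbps, usd_per_gbps_month):
    """Rough monthly cost of merely carrying the attack traffic,
    at an assumed flat per-Gbps transit price (a simplification)."""
    return gbps * usd_per_gbps_month

coeff = ddos_attack_coefficient(observations_gbps)   # about 475.7 Gbps
cost = transit_cost_usd(coeff, usd_per_gbps_month=0.5)
```

The point of the sketch is only the shape of the calculation: everyone on the path pays a share of `cost` whether or not they are the attack's target.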
There's worry about some of the countries out there, as well as some of the large multinational global companies, really walling off their piece of the internet. Hardik, one thing I'm curious about: we talked about the impact of work from home and a more distributed workforce. One of the other big mega-trends we've been seeing, even before 2020, is the growth of edge computing — the trillions of IoT devices that will be out there. Does DDoS play into this? The scenario that runs through my mind is: "Okay, great, we've got all these vehicles sending telemetry" — and all of a sudden, if they can't get their telemetry, that's a big problem. >> Yeah. So there are both the devices themselves and the impact you could see from an attack on them. But more often, what we see on the internet in the here and now is the use of these devices to attack other, more established entities on the internet. For many years now we've been talking about the use of IoT devices in attacks — simply the fact that so many devices are being deployed that are vulnerable from the get-go, insecure at birth, essentially, and then deployed across the internet. Even if they were secure to start, they often don't have update mechanisms, so over time new vulnerabilities are discovered in those devices, and they're used to attack other devices. In this report we talk about a particular family of malware called Mirai. Mirai has been around since 2016 and has been used in many high-profile attacks, and over time there have been a number of variations on Mirai. We absolutely keep track of the growth in these variations and the kinds of devices that they compromise and then use to attack other targets.
We've also gone into another malware family that has been talked about a bit, called Lucifer. Lucifer originally targeted Microsoft Windows, so you would see it more on your classic client and server computing devices, but we have reported on Linux variants of Lucifer that not only can be installed on Linux devices but also have DDoS capabilities. So we're tracking the emergence of new botnets. Coming straight back to your question, Stu: this is where IoT, for all the promise that it holds for us as a society — if we don't get this right, there's a lot of pain in our future coming just from the use of these devices in attacks. >> Well, I thought it was bad enough that we had an order of magnitude more surface area to defend; I hadn't really thought about the fact that all of these devices might be turned into an attack vector aimed back at what we're doing. All right, Hardik, so you need to give us the ray of hope here. We've got all of these threats out there — how's the industry doing overall defending against them, and what more can be done to stop them? What are some of the actions people, and especially enterprise IT, should be taking? >> Yeah, so I absolutely start with just awareness. This is why we publish the report. This is why we have resources like NETSCOUT Cyber Threat Horizon, which provides continuous visibility into attack activity worldwide. It absolutely starts with that. Now — and this is not a subject of the report, because it happened in the second half of the year — there has been a wave of high-profile attacks associated with extortion attempts over the past month. These attacks aren't necessarily complex; the techniques being used aren't novel.
In many ways these are things we would have considered run of the mill, at least for those of us on the research side who live this kind of stuff. But they have been successful, and a number of companies and other entities worldwide right now are rethinking what they're doing, in particular around DDoS protection. Our observation is that this happens every few years: every few years there's essentially a reminder that DDoS is a threat domain, one that typically involves an intelligent adversary on the other side — somebody who wants to cause you harm. To defend against it there are plenty of well-known techniques and methodologies, but enterprises, governments, service providers, and those of us on the research side have to stay on top of them, keep reminding ourselves of those best practices, and use them. And I'll say it again: for me, the ray of hope is that we haven't seen a new vector in the first six months of the year, even as we've seen combinations of other known vectors. So from that perspective, these are attacks we should be able to defend against. That's essentially where I'd leave it, in terms of hope for the future. >> All right, Hardik, what final tips do you have? How do people get the report itself, and how do they keep up — where do you point everyone to? >> Yes, so the report goes live on the 29th of September 2020. It will be available at NETSCOUT.com/threatreport. I'll also point you to another resource, Cyber Threat Horizon, which gives you more continuous visibility into attack activity; that's NETSCOUT.com/horizon. Those are the key resources I'll leave you with. Again, there's plenty to be hopeful about: as I said, there hasn't been a new vector that we've uncovered in the first six months of the year, as opposed to seven new vectors in 2019.
So that is something that certainly gives me hope. And for the things we've talked about in the report, we know how to defend against them. So this is something that, with action, we'll be able to live through just fine. >> Well, Hardik, thanks so much for sharing the data and the insight — a pleasure catching up with you. >> Okay. Likewise, Stu, thank you. >> All right, and be sure to check out theCUBE.net for all of the videos we have, including many of the upcoming events. I'm Stu Miniman, and thank you for watching theCUBE. (calm music)

Published Date : Sep 29 2020



4-video test


 

>>Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. Let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins; the total energy is given by the expression shown at the bottom left of this slide. Here the spin variables take binary values, the matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of the total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and the vector h. Although the Ising model originates in physics, the ground-state problem corresponds to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins, for worst-case instances at each N. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions.
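The Ising setup just described can be made concrete with a minimal sketch. The energy formula below uses one common sign convention (the slide's exact expression isn't reproduced in the transcript), and the brute-force search is only feasible for tiny N — which is precisely why heuristics and Ising machines are interesting.

```python
import itertools
import numpy as np

def ising_energy(sigma, J, h):
    """Total Ising energy E = -1/2 * sigma^T J sigma - h . sigma
    for spins sigma in {-1, +1}^N (sign conventions vary by author)."""
    sigma = np.asarray(sigma)
    return -0.5 * sigma @ J @ sigma - h @ sigma

def brute_force_ground_state(J, h):
    """Enumerate all 2^N spin assignments and keep the lowest energy.
    An instance is fully specified by the matrix J and vector h."""
    n = len(h)
    best = None
    for bits in itertools.product([-1, 1], repeat=n):
        e = ising_energy(bits, J, h)
        if best is None or e < best[1]:
            best = (bits, e)
    return best

# Tiny ferromagnetic example: two spins with J12 = +1 and no field.
J = np.array([[0.0, 1.0], [1.0, 0.0]])
h = np.zeros(2)
spins, energy = brute_force_ground_state(J, h)
# Aligned spins minimize the energy here: E = -1 for (+1,+1) or (-1,-1).
```

The exponential cost of the enumeration (2^N states) is exactly the worst-case scaling the talk refers to.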
Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally obtain very good — but not guaranteed optimum — solutions and run much faster than algorithms designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root-exponential for N up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-GHz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library — for example, a solution within 0.14% of a known lower bound has been discovered for an instance with N equal to 19,289, requiring approximately two days of run time on a single core at 2.4 GHz.
Now, if we simple-mindedly extrapolate the root-exponential scaling from the study — which was fit up to N approximately 4,500 — we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the N equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances at much lower cost. At the extreme end, the largest TSP ever solved exactly has N equal to 85,900 — an instance derived from 1980s VLSI design — and it required 136 CPU-years of computation, normalized to a single core at 2.4 GHz. But the far larger so-called World TSP benchmark instance, with N equals 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization: we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for MAX-CUT and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results on MAX-CUT and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur a high cost to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance.
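The extrapolation argument can be illustrated numerically. The constants `a` and `b` below are made up for illustration — they are not the fit from the study — but they show how steeply a root-exponential model compounds between N ≈ 4,500 and N ≈ 13,584.

```python
import math

def root_exponential_runtime(n, a, b):
    """Root-exponential runtime model T(N) = a * exp(b * sqrt(N)).
    The constants are illustrative, not the study's actual fit."""
    return a * math.exp(b * math.sqrt(n))

a, b = 1e-3, 0.2          # arbitrary units and slope, for illustration only
t_small = root_exponential_runtime(4500, a, b)
t_large = root_exponential_runtime(13584, a, b)
# The ratio shows how the same model that is tractable at N = 4,500
# predicts run times orders of magnitude longer at N = 13,584.
ratio = t_large / t_small
```

Even though root-exponential growth is far gentler than the worst-case pure exponential, the ratio here still spans several orders of magnitude, which is consistent with "days" turning into "a year" in the talk's back-of-envelope extrapolation.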
Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, and fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So, against that backdrop, I'd like to use my remaining time to introduce our work on the analysis of coherent Ising machine architectures and associated optimization algorithms. These machines, in general, are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems. In contrast to both more traditional engineering approaches that build Ising machines using conventional electronics and more radical proposals that would require large-scale quantum entanglement, the emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or opto-electronic platforms to enable near-term construction of large-scale prototypes that leverage unconventional information dynamics. The general structure of current CIM systems is shown in the figure on the right.
The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to the linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft or perhaps mean-field spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the sync-pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string giving a proposed solution of the Ising ground-state problem.
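A common classical caricature of the pump-ramp dynamics just described is a set of mean-field amplitude equations. The sketch below is that simplified model only — not the measurement-feedback hardware, and with arbitrary parameter values — but it reproduces the qualitative story: soft continuous amplitudes below threshold, binary readout after the ramp.

```python
import numpy as np

def simulate_cim(J, steps=4000, dt=0.01, eps=0.1, p_final=2.0, seed=0):
    """Toy classical mean-field model of a CIM pump ramp:
    dx_i/dt = (p(t) - 1 - x_i**2) * x_i + eps * sum_j J_ij * x_j.
    This is a simplified stand-in for the measurement-feedback machine,
    not the full quantum or experimental dynamics."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-3 * rng.standard_normal(n)    # near-vacuum initial amplitudes
    for t in range(steps):
        p = p_final * t / steps          # gradually increased pump power
        x += dt * ((p - 1 - x**2) * x + eps * (J @ x))
    return np.sign(x)                    # read out binary spin values

# Ferromagnetic ring of 4 spins: the ground state is all spins aligned,
# and the aligned collective mode has the lowest oscillation threshold.
J = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
spins = simulate_cim(J)
```

For this easy, non-frustrated instance the aligned mode reaches threshold first and dominates, so the readout is all +1 or all -1; hard instances are precisely those where the first mode to oscillate does not flow to the global minimum, as the talk goes on to discuss.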
This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory — namely, a study of bifurcations, the evolution of critical points, and the topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and we hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition; the steady states of the OPO above this threshold are essentially coherent states.
There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this transition, it essentially chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing it will inject a perturbation into the other that may interfere either constructively or destructively with the field that the other is trying to generate by its own lasing process. As a result, it can easily be shown that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case — but only for the collective oscillation mode in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configuration in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or anti-ferromagnetic two-spin Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase.
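The two-OPO threshold argument can be checked with a toy linear stability calculation. Everything below is an assumption-laden caricature (loss rate normalized to 1, purely linear analysis), not the full OPO model, but it shows why the sign of alpha selects which collective mode oscillates first.

```python
import numpy as np

def collective_thresholds(alpha):
    """Linearized two-OPO model: amplitude growth matrix (p - 1) * I + alpha * C,
    where C encodes the mutual injection between the two OPOs.
    A collective mode reaches oscillation threshold when its net gain
    vanishes: p = 1 - alpha * lam, for each eigenvalue lam of C.
    Loss is normalized to 1 in these units."""
    C = np.array([[0.0, 1.0], [1.0, 0.0]])
    lams, _ = np.linalg.eigh(C)   # ascending: lam = -1 (opposite phases), +1 (same phase)
    return lams, 1.0 - alpha * lams

lams, th = collective_thresholds(0.2)   # alpha > 0: ferromagnetic-like coupling
# th is approximately [1.2, 0.8]: the same-phase mode (lam = +1) turns on
# at the lower pump power, mirroring the lowered threshold described above.
```

Flipping the sign of alpha swaps the ordering, so the opposite-phase (anti-ferromagnetic) configuration oscillates first instead — exactly the two-spin ground-state readout the talk describes.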
Clearly, we can imagine generalizing this story to larger N; however, the story doesn't stay this clean and simple for all larger problem instances. To find a more complicated example, we only need to go to N equals 4. For some choices of J at N equals 4, the story remains simple, like the N equals 2 case: the figure on the upper left of this slide shows the energy of various critical points for a non-frustrated N equals 4 instance, in which the first-bifurcated critical point — that is, the one that bifurcates at the lowest pump value — flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first-bifurcated critical point flows to a very good but sub-optimal minimum at large pump power; the global minimum is actually reached via a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors seem to become more common at larger N, as for the N equals 20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot. It can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-N examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, seeking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp.
Of course, N equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able to determine their global minima reliably and to see how those minima relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-N limit we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of N equals 10^4 to 10^5 to 10^6, so our ultimate objective in theoretical analysis really has to be to say something about CIM dynamics in the regime of much larger N. Our initial approach to characterizing CIM behavior in the large-N regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, etcetera. At present, we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So, in closing, I should acknowledge the people who did the hard work on the things I've shown.
My group — including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura — has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with the Yamamoto group over at NTT PHI research labs. I should also acknowledge funding support from the NSF through the Coherent Ising Machines Expedition in Computing, and from NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it — thanks very much. >>I'd like to thank NTT Research for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi and I'm from Caltech, and today I'm going to tell you about the work we have been doing on networks of optical parametric oscillators: how we have been using them as Ising machines, and how we're pushing them toward quantum photonics. Let me acknowledge my team at Caltech — which is now eight graduate students and five researchers and postdocs — as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature — for instance the brain, which is a network of oscillators — all the way to optics and photonics. Some of the biggest examples are metamaterials, which are arrays of small resonators, and more recently the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed-matter models in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators.
So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model — the simple summation over the spins, where spins can be either up or down and the couplings are given by the J_ij. The Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem has been shown to be NP-hard. It's computationally important because it's representative of the NP problems, and NP problems are important because, first, they're hard for standard computers if you use a brute-force algorithm, and second, they're everywhere on the application side. That's why there is demand for making a machine that can target these problems and hopefully provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is, is a resonator with nonlinearity in it: we pump these resonators and we generate a signal at half the frequency of the pump. One photon of the pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. If you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendula. They are parametric oscillators because I'm going to modulate a parameter of them in this video — the length of the string — and by that modulation, which acts as the pump, I'm going to make them oscillate, generating a signal at half the frequency of the pump.
And I have two of them, to show you that they can acquire these phase states: they are still phase- and frequency-locked to the pump, but each can land in either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO represents a spin, which can be either zero or pi, up or down. To implement the network of these resonators, we use a time-multiplexing scheme. The idea is that we put pulses in the cavity, separated by the repetition period T_R, and you can think of these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you introduce delay lines, each of which is a multiple of T_R. The shortest delay couples resonator one to two, two to three, and so on; the second delay, which is two times the repetition period, couples one to three, and so on. If you have N-1 delay lines, then you can realize any potential coupling among these N synthetic resonators. And if I introduce modulators in those delay lines, so that I can control the strength and the phase of each coupling at the right time, then I have a programmable all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is to have these OPOs, each of which can be either zero or pi, and to connect them to each other arbitrarily. I start by programming the machine to a given Ising problem, just by setting the couplings through the controllers in each of those delay lines. Now I have a network that represents an Ising problem, and the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints.
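The delay-line construction has a simple combinatorial core, which can be sketched as follows (a sketch of mine for intuition, assuming uniform coupling strength per delay line; the real machine sets strength and phase per pulse with modulators):

```python
import numpy as np

def delay_line_couplings(n_pulses, delays):
    """Coupling matrix of a time-multiplexed pulse network.

    `delays` maps a delay length k (in units of the repetition period
    T_R) to a coupling strength; a delay of k*T_R connects pulse i to
    pulse i + k, which is why N - 1 distinct delay lines are enough for
    all-to-all coupling of N pulses.
    """
    J = np.zeros((n_pulses, n_pulses))
    for k, strength in delays.items():
        for i in range(n_pulses - k):
            J[i, i + k] = J[i + k, i] = strength
    return J

# A single delay line of length 1 gives a nearest-neighbour chain of 4 pulses.
J = delay_line_couplings(4, {1: 1.0})
```

Adding entries for delays 2 and 3 to the dictionary would fill in the remaining off-diagonals and make the 4-pulse network fully connected.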
And the way that happens is that the Ising Hamiltonian maps to the linear loss of the network, and if I start adding gain by putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this for the past six or seven years, and I'm just going to quickly show you the transitions: what happened in the first implementation, which used a free-space optical system; then the guided-wave implementation in 2016; and then the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make the distinction here that the first implementation used all-optical interactions; we also had an N = 16 all-optical implementation; and then we transitioned to the measurement-feedback idea, which I'll describe quickly. There is still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to focus mostly on the all-optical networks: how we're using them to go beyond simulation of the Ising Hamiltonian, on both the linear and the nonlinear side, and also how we're working on miniaturization of these OPO networks. The first experiment, the four-OPO machine, was a free-space implementation; this is an actual picture of the machine, and we implemented a small N = 4 MAX-CUT problem on it. So, one problem for one experiment: we ran the machine 1000 times, looked at the states, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and the controllers with a simulator: we basically simulate all the coherent interactions on an FPGA, replicate the coherent pulse with respect to all those measurements, and inject it back into the cavity, while the nonlinearity still remains.
So it is still a nonlinear dynamical system, but the linear side is all simulated. There are lots of questions about whether this system preserves the important information, and whether it behaves better computationally; that is still the subject of a lot of ongoing studies. Nevertheless, the reason this implementation is very interesting is that you don't need the N-1 delay lines, just one, so you can implement a large machine, run several thousands of problems on it, and then compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think of this as a simple matrix-multiplication scheme, and that is basically what gives you the Ising Hamiltonian mapping; the optical loss of this network corresponds to the Ising Hamiltonian. To show you with the N = 4 experiment: for all the phase states in the histogram that we measured, you can actually calculate the loss of each state, because all the interferences in the beam splitters and the delay lines give you different losses, and you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way to think about what it does is that it provides gain; you start bringing up the gain until it hits the loss, and then you go through gain saturation at the threshold, which gives you the phase bifurcation: each OPO goes to either the zero or the pi phase state. And the expectation is that the network oscillates in the lowest possible loss state.
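The gain-versus-loss bifurcation just described can be caricatured with the standard classical amplitude equations for a DOPO network. This is a sketch of mine under that common mean-field approximation, with illustrative coefficients, not the speakers' actual model or code:

```python
import numpy as np

def simulate_dopo_network(J, pump, steps=2000, dt=0.01, seed=0):
    """Euler integration of a classical DOPO-network caricature:
        dx_i/dt = (pump - 1) * x_i - x_i**3 + sum_j J[i][j] * x_j
    Below threshold the amplitudes decay; above threshold each x_i
    saturates near +a or -a, and the sign pattern plays the role of
    the Ising spin configuration selected by the lowest-loss mode."""
    rng = np.random.default_rng(seed)
    J = np.asarray(J, dtype=float)
    x = 0.01 * rng.standard_normal(len(J))   # small random seed field
    for _ in range(steps):
        x += dt * ((pump - 1.0) * x - x**3 + J @ x)
    return x

# Two ferromagnetically coupled DOPOs: the aligned (lowest-loss) mode
# has the highest net gain, so both amplitudes pick the same sign.
x = simulate_dopo_network([[0.0, 0.5], [0.5, 0.0]], pump=1.5)
```

The aligned mode sees gain 0.5 + 0.5 while the anti-aligned mode sees zero net gain, so the ferromagnetic ground state wins the oscillation race, which is the whole minimum-loss computation principle in two lines of algebra.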
There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. If you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. The difference between looking at topological behaviors and at the Ising machine is that, first of all, we're now looking at types of Hamiltonians that are a little different from the Ising Hamiltonian; one of the biggest differences is that many of these topological Hamiltonians require breaking time-reversal symmetry, meaning that going from one site to another gives you one phase, and going back gives you a different phase. The other difference is that we're not just interested in finding the ground state: we're now interested in all sorts of states, and in the dynamics and behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one-dimensional chain of these resonators, corresponding to the so-called SSH model in the topological literature. We get the same energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you can see how well it follows the prediction of the theory. One of the interesting things about the time-multiplexing implementation is that you have the flexibility of changing the network as you are running the machine; that is something unique about this implementation, so we can actually look at the dynamics. One example we have looked at is going through the transition from the topological to the trivial phase of the network.
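The SSH band structure that the measurement is compared against is textbook material and easy to reproduce; here is a short sketch (my own, using the standard two-hopping-amplitude bulk dispersion, not the speakers' analysis code):

```python
import numpy as np

def ssh_bands(t1, t2, n_k=201):
    """Bulk bands of the SSH chain, E(k) = +/- |t1 + t2 * exp(i k)|.
    The gap 2*|t1 - t2| closes at k = pi when t1 == t2; the t2 > t1
    phase is the topological one, with protected edge states in a
    finite chain."""
    k = np.linspace(-np.pi, np.pi, n_k)
    e = np.abs(t1 + t2 * np.exp(1j * k))
    return k, np.vstack([-e, e])

k, bands = ssh_bands(t1=0.5, t2=1.0)
gap = bands[1].min() - bands[0].max()   # expected: 2 * |t1 - t2| = 1.0
```

Sweeping t1 through t2 in this sketch is the in-silico analogue of the topological-to-trivial transition mentioned above, which the time-multiplexed network can perform while running.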
You can then look at the edge states, and you can see both the trivial end states and the topological edge states actually showing up in this network. We have also just recently implemented a two-dimensional network with the Harper-Hofstadter model; I don't have those results here, but another important characteristic of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and the dynamics, and we can also think about adding nonlinearity, in both the classical and quantum regimes, which is going to give us a lot of exotic classical and quantum nonlinear behaviors in these networks. So I've told you mostly about the linear side; let me just switch gears and talk about the nonlinear side of the network. The biggest thing I've talked about so far in the Ising machine is the phase transition at threshold: below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and we get the phase states above threshold. This is basically the mechanism of the computation in these OPOs: the phase transition from below to above threshold. One of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical, coherent states, corresponding to the intensity of the driving pump. So it's really hard to imagine going above threshold, or having this phase transition happen, entirely in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and its intensity goes really high, it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition.
So the question is: can we look at other phase transitions, can we utilize them for computing, and can we bring them into the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain: the transition from the so-called degenerate regime, which is what I've mostly talked about, to the non-degenerate regime, which happens just by tuning the phase of the cavity. What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by quantum diffusion, which is limited by the so-called Schawlow-Townes limit, and you can see the transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case, the signal can acquire any phase on the circle, so it has a U(1) symmetry; if you go to the degenerate case, that symmetry is broken and you only have the zero and pi phase states. So now the question is: can we utilize this transition, which is a phase-driven phase transition, for a similar computational scheme? That's one of the questions we're also thinking about. And this phase transition is not just important for computing; it's also interesting for its sensing potential, and you can easily bring it below threshold and operate in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, you can now see all sorts of more complicated and more interesting phase transitions in the spectral domain.
One of them is a first-order phase transition, which you get just by coupling two OPOs, and it is a very abrupt phase transition compared to the single-OPO case. If you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore in both the classical and quantum regimes. I should also mention that the couplings themselves can be nonlinear couplings, and that is another behavior you can see, especially in the non-degenerate regime. So with that, I've basically told you about these OPO networks: how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. The motivation, of course: if you look at electronics and what we had 60 or 70 years ago with vacuum tubes, we transitioned from relatively small-scale computers with on the order of thousands of nonlinear elements to the billions of nonlinear elements we have now. Where we are with optics is probably very similar to electronics 70 years ago: tabletop implementations. And the question is, how can we utilize nanophotonics? I'm going to briefly show you the two directions we're working on: one is based on thin-film lithium niobate, and the other is based on even smaller resonators. The work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard and Marty Fejer at Stanford, and we could show that you can do periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in these nanophotonic periodically poled lithium niobate waveguides. And now we're working on building OPOs based on that kind of thin-film lithium niobate photonics.
These are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only payoff: I want to point out that the reason these nanophotonic platforms are actually exciting is not just that you can make large networks compactly, in a small footprint; they also provide some opportunities in terms of the operating regime. One of them is about making cat states in an OPO: can we have the quantum superposition of the zero and pi phase states that I talked about? Nanophotonic thin-film lithium niobate provides some opportunities to get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. We're doing some theory on that, and we're confident that the ratio of nonlinearity to loss that you can get with these platforms is actually much higher than with other, existing platforms. To go even smaller, we have been asking the question of what the smallest possible OPO is that you can make. You can think about truly wavelength-scale resonators, add the chi-two nonlinearity, and see how and when you can get the OPO to operate. Recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if we can build such OPOs, we know there is a path for implementing OPO networks at the nanoscale. We have looked at the calculations and tried to estimate the threshold of such an OPO, say for a wavelength-scale resonator, and it turns out it can actually be even lower than that of the bulk PPLN OPOs that we have been building for the past 50 years or so.
So we're working on the experiments, and we're hoping that we can make larger- and larger-scale OPO networks. Let me summarize the talk: I told you about OPO networks and our work on Ising machines and measurement feedback; I told you about the ongoing work on all-optical implementations, both on the linear side and on the nonlinear behaviors; and I also told you a little bit about the efforts on miniaturization, going down to the nanoscale. With that, I would like to thank you. >> I am from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI lab. I'm happy to share with you today some of the recent work that has been done either by me or by colleagues in Professor Aihara's group. The title of my talk is "A neuromorphic in-silico simulator for the coherent Ising machine," and here is the outline. I would like to make the case that simulation of the CIM in digital electronics can be useful for better understanding or improving its function principles, by introducing some ideas from neural networks; this is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation, in the second part, and projections of the performance that can be achieved using a very large-scale simulator, in the third part; finally I will talk about future plans. First, let me start by comparing recently proposed Ising machines using this table, adapted from a recent Nature Electronics paper, and this comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation.
In red here are the limitations of each type of hardware. Interestingly, the FPGA-based systems, such as the digital annealer, Toshiba's simulated bifurcation machine, or the restricted Boltzmann machine on FPGA recently proposed by a group in Berkeley, offer a good compromise between speed and scalability. This is why, despite the unique advantages that some of the other hardware have, such as the coherent superposition in optics or the energy efficiency of memristors, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, nor that they are particularly energy-efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-ins and fan-outs, and the long-distance propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems, and not only from that of the underlying electronics or photonics. To put the performance of these various hardware in perspective, we can look at the computation performed by the brain: the brain computes using billions of neurons and only about 20 watts of power, and it operates at very slow frequencies. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and of the future collaboration, is to try to alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here,
by designing a large-scale simulator in silicon, in the bottom panel here, that can be used for testing better organization principles for the CIM. In this talk I will discuss three neuro-inspired principles. First, the asymmetry of connections: neural dynamics are often chaotic because of that asymmetry. Second, the micro-structure of connectivity: neural networks are not composed of repetitions of always the same type of element; there is a local structure that is repeated, as in this schematic of a micro-column in the cortex. And lastly, the hierarchical organization of connectivity: connectivity is organized as a tree structure in the brain, as you see in this representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in-silico simulation? First, about the two principles of asymmetry and micro-structure. We know the classical approximation of the coherent Ising machine, which is analogous to rate-based neural networks; in the case of the CIM, this classical approximation can be obtained using the truncated Wigner approach, for example, so that the dynamics of the system can be described by the following ordinary differential equations, in which the x_i represent the in-phase component of one DOPO, the f term represents the degenerate optical parametric amplification, and the sum of J_ij x_j terms represents the coupling, which is done in the case of the measurement-feedback CIM using homodyne detection and an FPGA, followed by injection of the computed feedback. In both cases, CIM and neural networks, these dynamics can be written as gradient descent on a potential function V, written here, and this potential function includes the Ising Hamiltonian.
So this is why it is natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and h is the external field of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem with this approach is that the potential V we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process; but there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. This is why we propose to introduce a micro-structure into the system, in which one analog spin, one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this micro-structure introduces an asymmetry into the system, which in turn induces chaotic dynamics: a chaotic search, rather than a relaxation process, for the ground state of the Ising Hamiltonian. Within this micro-structure, the role of the error variable is to control the amplitude of the analog spin, forcing the amplitude to become equal to a certain target amplitude a. This is done by modulating the strength of the Ising couplings: the error variable e_i multiplies the Ising coupling term in the dynamics of each DOPO. The whole dynamics is then described by these coupled equations, and because the e_i do not necessarily take the same value for different i, this introduces an asymmetry into the system, which in turn creates the chaotic dynamics that I show here for solving a certain size of SK problem, in which the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plot.
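The coupled amplitude-plus-error-variable dynamics just described can be sketched numerically. This is my own minimal reading of the scheme, with illustrative coefficients (the published models use carefully tuned pump schedules and target-amplitude modulations), reading out, as these machines do, the best sign configuration visited along the trajectory:

```python
import numpy as np

def cac_cim(J, pump=1.5, beta=0.2, target=0.5, steps=6000, dt=0.01, seed=1):
    """Sketch of error-corrected CIM dynamics:
        dx_i/dt = (pump - 1) x_i - x_i**3 + e_i * sum_j J[i][j] x_j
        de_i/dt = -beta * (x_i**2 - target) * e_i
    The e_i push every |x_i|^2 toward the common target amplitude,
    removing the amplitude-inhomogeneity bias; since e_i differ across
    sites the dynamics become asymmetric and search chaotically.
    Returns the lowest-Ising-energy sign configuration visited."""
    rng = np.random.default_rng(seed)
    J = np.asarray(J, dtype=float)
    x = 0.01 * rng.standard_normal(len(J))
    e = np.ones(len(J))
    best_spins, best_energy = None, np.inf
    for _ in range(steps):
        x += dt * ((pump - 1.0) * x - x**3 + e * (J @ x))
        e += dt * (-beta * (x**2 - target) * e)
        s = np.sign(x)
        energy = -0.5 * s @ J @ s
        if energy < best_energy:
            best_spins, best_energy = s.copy(), energy
    return best_spins, best_energy

# Frustrated antiferromagnetic triangle: the ground states are the
# mixed-sign configurations, with Ising energy -1 (all-aligned costs +3).
J = np.array([[0., -1., -1.], [-1., 0., -1.], [-1., -1., 0.]])
spins, energy = cac_cim(J)
```

On this frustrated instance a plain gradient flow can linger in any basin, whereas the error variables keep reshaping the landscape until a ground-state sign pattern is visited.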
You can see this chaotic search visiting various local minima of the Ising Hamiltonian and eventually finding the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics do not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using the modulation of the target amplitude. We have proposed in the past two different modulations of the target amplitude. The first one is a modulation that ensures the rate of expansion of the dynamics remains positive, which forbids the creation of any nontrivial attractors; but in this work I will talk about another, simplified modulation, given here, that works as well as the first one but is easier to implement on an FPGA. These coupled equations, which describe the simulation of the coherent Ising machine with error correction, can be implemented especially efficiently on an FPGA. Here I show the time it takes to simulate the system, and in red you see the time it takes to simulate the x_i term, the e_i term, the dot product, and the Ising Hamiltonian, for a system with 500 spins and 500 error variables, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics, which correspond to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles at 300 MHz, which corresponds to about 0.1 microseconds. This is to be compared with what can be achieved in the measurement-feedback CIM, in which, if we want to update 500 time-multiplexed DOPOs at a 1 GHz repetition rate, we would require 0.5 microseconds; so the simulation on FPGA can be at least as fast as a 1 GHz repetition-rate measurement-feedback CIM.
Then the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, on the order of a microsecond. So for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product, with respect to problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on the FPGA by an adder tree, whose height scales logarithmically with the size of the system. But that is the case only if we had an infinite amount of resources on the FPGA; for larger problems, of more than a few hundred spins, we usually need to decompose the matrix into smaller blocks, with a block size that I denote U here, and then the scaling becomes linear in N/U for the nonlinear part, and (N/U)^2 for the products. Typically, for a low-end FPGA chip, the block size of this matrix is about 100. So clearly we want to make U as large as possible, in order to maintain the O(log U) scaling of the number of clock cycles needed to compute the product, rather than the (N/U)^2 factor that occurs when we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces large fan-ins and fan-outs and long-distance data paths within the FPGA.
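The cycle-count trade-off described here can be captured in a toy cost model. This is purely my illustrative arithmetic, not the paper's timing model; it only reproduces the stated scaling shapes, an adder tree of depth ceil(log2 U) applied once per matrix block:

```python
import math

def matvec_cycles(n, block):
    """Toy cycle count for an n x n matrix-vector product on an FPGA:
    the matrix is split into ceil(n/block)**2 blocks, and each block's
    dot products are summed by an adder tree of depth ceil(log2(block)).
    With block == n the cost stays logarithmic; small blocks pay the
    quadratic (n/block)**2 factor."""
    n_blocks = math.ceil(n / block) ** 2
    return n_blocks * math.ceil(math.log2(block))

full_tree = matvec_cycles(512, 512)   # 1 block  * depth 9 = 9
blocked = matvec_cycles(512, 64)      # 64 blocks * depth 6 = 384
```

The two example calls show why maximizing the tree (block) size matters: a full-width tree finishes in 9 toy cycles where a 64-wide decomposition needs 384.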
So the solution for getting higher performance from a simulator of the coherent Ising machine is to get rid of this bottleneck on the dot product by increasing the size of the adder tree, and this can be done by organizing the electrical components within the FPGA hierarchically, as shown here in the right panel, in order to minimize the fan-in and fan-out of the system and the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA, but this should give you an idea of why the hierarchical organization of the system becomes extremely important for getting good performance when simulating Ising machines. Instead of the details of the FPGA implementation, I would like to give a few benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper. Here I show results for solving SK problems: fully connected, random, plus-or-minus-one spin-glass problems. As a metric we use the number of matrix-vector products, since that is the bottleneck of the computation, needed to reach the optimal solution of the SK problem with 99% success probability, plotted against the problem size. In red here is the proposed FPGA implementation; in blue is the number of matrix-vector products necessary for the CIM without error correction to solve these SK problems; and in green, noisy mean-field annealing, whose behavior is similar to the coherent Ising machine. Clearly you see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches.
So that's an interesting feature of the system. Next we can look at the real time-to-solution for these SK instances. On this axis is the time-to-solution in seconds to find the ground state of SK instances with 99% success probability, for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper; the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. You see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristor crossbars, in blue here, which is very fast for small problem sizes but whose scaling is not good, and the same for the restricted Boltzmann machine implemented on FPGA, proposed by a group in Berkeley recently, which again is very fast for small problem sizes but whose scaling is bad, so that it becomes worse than the proposed approach. We can therefore expect that for problem sizes larger than 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide, and another confirmation that the scheme scales well: we can find maximum-cut values for the benchmark G-set problems that are better than those previously found by any other algorithm, so they are the best known cut values, to the best of our knowledge.
This is shown in the table in this paper; in particular, for instances 14 and 15 of the G-set, we can find better cut values than previously known, and we can find these cut values about 100 times faster than the state-of-the-art algorithms used to obtain them. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters: the tuning used here is very simple, depending only on the degree of connectivity within each graph. And these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems but also all types of graph Ising problems, such as the max-cut problems encountered in many applications. Given that the performance of the design depends on the height of the adder tree, we can try to maximize the height of this tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future, based on the implementation we are currently working on. Here you see projections of the time-to-solution with 99% success probability for solving SK problems, with respect to problem size, compared to different published Ising machines, in particular the digital annealer, shown by the green line without dots. We show two different hypotheses for these projections: either the time-to-solution scales as an exponential of N, or as an exponential of the square root of N.
So it seems, according to the data, that the time to solution scales more like an exponential of the square root of N. Also, although we cannot be sure of this, these projections show that we could probably solve SK problems of size 2000 spins, finding the real ground state of the problem with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So, some of the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. And to do this, what is simulatable on the FPGA is the quantum Gaussian model that is described in this paper and proposed by people in the NTT group. The idea of this model is that, instead of the very simple ODEs I have shown previously, it includes paired ODEs that take into account not only the mean of the in-phase and quadrature components but also their variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. And then we plan to make the simulator open access for the members to run their instances on the system. There will be a first version in September that will be just based on simple command-line access to the simulator, and which will have just a classical approximation of the system with binary weights. But then we will propose a second version that will extend the current Ising machine to a rack of FPGAs, in which we will add the more refined models, such as the truncated Gaussian model I just talked about, support for real-valued weights for the Ising problems, and support for measurement feedback.
So we will announce later when this is available. >>I come from the University of Notre Dame, physics department, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I'd also like to say that I look forward to collaborations with the PHI Lab and Yoshi and collaborators on the topics of this workshop. So today I'll briefly talk about our attempt to understand the fundamental limits of analog continuous-time computing, at least from the point of view of Boolean satisfiability problem solving, using ordinary differential equations. But I think the issues that we raise on this occasion actually apply to other analog approaches as well, and to other problems as well. I think everyone here knows what Boolean satisfiability problems are: you have Boolean variables, you have M clauses, each a disjunction of literals, where a literal is a variable or its negation, and the goal is to find an assignment to the variables such that all clauses are true. This is a decision-type problem from the NP class, which means you can check in polynomial time the satisfiability of any assignment. And 3-SAT is NP-complete, as is k-SAT with k equal to three or larger, which means an efficient 3-SAT solver implies an efficient solver for all the problems in the NP class, because all the problems in the NP class can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to set packing, or to maximum independent set, which is set packing in graph-theoretic terms, or to the decision version of the Ising spin-glass problem. This is useful when you're comparing different approaches working on different kinds of problems. When not all the clauses can be satisfied, you're looking at the optimization version of SAT, called MAX-SAT.
And the goal here is to find an assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications: if we had an efficient SAT solver, or NP-complete-problem solver, it would literally, positively influence thousands of problems and applications in industry and in science. I'm not going to read this list, but it of course gives a strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. So instead of working with zeros and ones, we work with minus one and plus one, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it is plus one; if it contains its negation, it is minus one. And then we use this to formulate these products, called clause violation functions, one for every clause, which vary continuously between zero and one, and which are zero if and only if the clause itself is true. Then, in order to define a dynamics in this N-dimensional hypercube, where the search happens and where, if solutions exist, they are sitting in some of the corners of this hypercube, we define this energy potential, or landscape function, shown here, in such a way that it is zero if and only if all the clause violation functions K_m are zero, that is, all the clauses are satisfied, keeping these auxiliary variables a_m always positive. And therefore what you have here is a dynamics that is essentially a gradient descent on this potential energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum.
However, what we do here is couple it with a dynamics for the auxiliary variables, driven by the clause violation functions, as shown here. And if you didn't have this a_m here, just the K_m, for example, you would still have positive feedback and an increasing variable, but in that case you would still get stuck; it behaves better than the constant version, but it still gets stuck. Only when you put in this a_m, which makes the dynamics in this variable exponential-like, only then does it keep searching until it finds a solution. And there is a reason for that which I'm not going to talk about here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape, and this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is a characteristic invariant of the dynamics itself; in dynamical systems it is called the escape rate. The inverse of that is the timescale on which you find solutions by this dynamical system. And you can see here some sample trajectories that are chaotic, because the system is nonlinear, but it is transiently chaotic, of course, because eventually all of them converge to the solution. Now, in terms of performance: what we show here, for a bunch of constraint densities, defined by M over N, the ratio between clauses and variables, for random 3-SAT problems, as a function of N, is the wall-clock time that we monitor, and it behaves quite well, it behaves polynomially, until you actually reach the SAT-UNSAT transition, where the hardest problems are found.
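The construction described above, a clause matrix, clause violation functions K_m, and exponentially growing auxiliary variables a_m, can be sketched in code roughly as follows. This is a toy Euler-integration sketch under assumed parameter choices, not the speaker's implementation.

```python
import numpy as np

def solve_3sat(C, n, dt=0.05, max_steps=20000, seed=0):
    """Continuous-time dynamical-systems SAT sketch: analog spins s_i in
    [-1, 1], clause matrix C[m, i] in {-1, 0, +1}, auxiliary weights a_m."""
    rng = np.random.default_rng(seed)
    s = rng.uniform(-0.1, 0.1, n)
    a = np.ones(len(C))          # a_m > 0, grown while clause m is violated
    for _ in range(max_steps):
        lit = 1.0 - C * s        # factors (1 - c_mi * s_i); ==1 where c_mi == 0
        K = 2.0 ** -np.count_nonzero(C, axis=1) * np.prod(lit, axis=1)
        if np.all(K < 1e-3):     # every clause is (nearly) satisfied
            break
        # K_mi: the clause-m product with the i-th factor removed (where c_mi != 0)
        Kmi = np.where(C != 0, K[:, None] / np.where(lit == 0.0, 1.0, lit), 0.0)
        s = np.clip(s + dt * np.sum(2 * a[:, None] * C * Kmi * K[:, None], axis=0), -1, 1)
        a += dt * a * K          # exponential growth on violated clauses
    assignment = s > 0
    ok = all(any((assignment[i] if c > 0 else not assignment[i])
                 for i, c in enumerate(row) if c != 0) for row in C)
    return assignment if ok else None

# toy instance: (x0 v x1 v ~x2) & (~x0 v x1 v x2) & (x0 v ~x1 v ~x2)
C = np.array([[1, 1, -1], [-1, 1, 1], [1, -1, -1]])
print(solve_3sat(C, 3) is not None)
```

With the a_m held constant this gradient flow stalls in local minima; the exponential growth term `a += dt * a * K` is what keeps the search moving, mirroring the argument in the talk.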
But what's more interesting is if you monitor the performance in terms of the analog continuous time t, because that seems to be polynomial. And the way we show that is we consider random 3-SAT for a fixed constraint density, to the right of the threshold, where the problems are really hard, and we monitor the fraction of problems that have not yet been solved: we select thousands of problems at that constraint ratio, we solve them with our algorithm, and we monitor the fraction of problems that have not yet been solved by continuous time t. And this, as you see, decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law, in the system size. So if you combine these two, you find that the time needed to solve all problems, except maybe a vanishing fraction of them, scales polynomially with the problem size. So you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint-satisfaction problems, such as exact cover, because you can always transform them into 3-SAT as we discussed before, or Ramsey coloring, and on these problems even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because, first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes a physical wall-clock time, and that would be polynomially scaling; but you have the other variables, the auxiliary variables, which instead grow in an exponential manner. So if they represent currents or voltages in your realization, then it would be an exponential cost. But this is some kind of trade-off between time and energy.
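The argument here combines two measurements: exponential decay of the unsolved fraction in continuous time, and a decay rate that falls off only as a power law in N. A tiny numeric sketch, with assumed illustrative constants c and alpha, makes the resulting polynomial scaling concrete:

```python
import math

# measured shape (from the talk): unsolved fraction ~ exp(-r(N) * t),
# with a power-law rate; assumed illustrative form: r(N) = c / N**alpha
def analog_time(N, eps=1e-2, c=1.0, alpha=1.5):
    """Continuous time t(N) leaving at most a fraction eps unsolved:
    exp(-r(N) * t) = eps  =>  t = ln(1/eps) * N**alpha / c."""
    return math.log(1 / eps) * N ** alpha / c

for N in (50, 100, 200):
    print(N, round(analog_time(N), 1))
# doubling N multiplies t by 2**1.5 ~ 2.83: polynomial, not exponential
```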
Now, I don't know how to generate time, but I know how to generate energy, so one could use that for it. But there are other issues as well, especially if you're trying to do this on a digital machine, and problems appear on physical devices as well, as we discuss later. So if you implement this on a GPU, you can then get an order of magnitude speedup, and you can also modify this to solve MAX-SAT problems quite efficiently: we are competitive with the best heuristic solvers on the hardest problems of the 2016 MAX-SAT competition. So this definitely seems like a good approach, but there are of course interesting limitations; I would say interesting, because they kind of make you think about what it means, and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator when you solve this on a digital machine (you're using some kind of integrator, and you use the same approach, but now you measure the number of problems you haven't solved by a given number of discrete steps taken by the integrator), you find that you have exponential discrete-time complexity, and of course this is a problem. And if you look closely at what happens: even though the analog mathematical trajectory, that's the red curve here, is smooth, if you monitor what happens in discrete time, the integrator's accuracy fluctuates very little, this is like the third or fourth decimal position, but its step size fluctuates like crazy. So it really is like the integrator freezes out, and this is because of the phenomenon of stiffness that I'll talk a little bit more about a little bit later. It might look like an integration issue on digital machines that you could improve, and you could definitely improve it, but the issue is actually bigger than that.
It's deeper than that, because on a digital machine there is no time-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there is no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. One would be tempted to think that maybe this wouldn't be an issue in an analog device, and to some extent that's true; analog devices can be orders of magnitude faster, but they also suffer from their own problems, because they're not going to be exact solvers either. So, indeed, if you look at other systems, like the measurement-feedback Ising machine, or the talks on oscillator networks, they all hinge on some kind of ability to control your variables with arbitrarily high precision. In oscillator networks you want to read out phases across frequencies; in the case of CIMs you require identical pulses, which are hard to keep identical, and they kind of fluctuate away from one another, shift away from one another, and if you could control that, of course, then you could control the performance. So one can actually ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schonhage from 1978, a purely computer-science proof, which says that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you could solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you have finite precision. So the next question is: how does that affect the computation of these problems? This is what we're after. Loss of precision means information loss, or entropy production.
So what you're really looking at is the relationship between the hardness of a problem and the cost of computing it. And according to Schonhage's result, there is this left branch, which in principle could be polynomial time; but the question is whether or not this is achievable. It is not achievable with finite precision; what is more truthful is on the right-hand side: there is always going to be some information loss, some entropy production, that could keep you away from polynomial time. So this is what we would like to understand, and this information loss, the source of it, is, I will argue, not just noise, which is present in any physical system, but is also of algorithmic nature. Schonhage's result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity: could there in principle be such solvers, given that he is not proposing a solver with such properties? If you look mathematically and precisely at what a solver does, could it have the right properties? And I argue yes: I don't have a mathematical proof, but I have some arguments that that would be the case. And this is the case for our SAT solver: if you could calculate its trajectory losslessly, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a bit more difficult question, because time in ODEs can be rescaled however you want. So what this means is that you actually have to measure the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system, not of its parametrization. And we did that: my student did that, first improving on the stiffness of the integration, using implicit solvers and some smart tricks, such that you actually stay closer to the actual trajectory, and using the same approach, monitoring what fraction of problems you can solve,
now as a function of the length of the trajectory. You find that it is polynomially scaling with the problem size: we have polynomial-length complexity. That means that our solver is both poly-length and, as it is defined, also poly-time as an analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver, and the reason is again this stiffness. Every integrator has to truncate; digitizing truncates the equations, and what it has to do is keep the integration within the so-called stability region for that scheme: you have to keep the product of the eigenvalues of the Jacobian and the step size delta-t within this region. If you use explicit methods, you want to stay within this region; but what happens is that some of the eigenvalues grow fast for stiff problems, and then you're forced to reduce delta-t so that the product stays in this bounded domain, which means that now you're forced to take smaller and smaller time steps, so you're freezing out the integration, and what I showed you is that that's the case. Now you can move to implicit solvers, which is a trick; in this case the domain to avoid is actually on the outside. But what happens in this case is that some of the eigenvalues of the Jacobian, also for stiff systems, start to move to zero, and as they're moving to zero they're going to enter this instability region, so your solver is going to try to keep them out, so it's going to increase delta-t. But if you increase delta-t, you increase the truncation errors, so you get randomized in the large search space, so it's really not going to work out. Now, one can sort of introduce a theory, or a language, to discuss computational complexity using the language of dynamical systems theory. I don't have time to go into this, but basically, for hard problems you have a chaotic saddle
in the middle of the search space somewhere, and that dictates how the dynamics happens; the invariant properties of the dynamics, of that saddle, are what dictate performance and many other things. So a new, important measure that we find helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy. Basically, what this does, in an intuitive way, is describe the rate at which the uncertainty contained in the insignificant digits of a trajectory flows towards the significant ones, as you lose information, because errors grow, or are developed into larger errors, at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property: it's a property of the dynamics itself, not of how you compute it, and it's really the interesting rate of accuracy loss of a dynamical system. As I said, in such a high-dimensional system you have positive and negative Lyapunov exponents, as many in total as the dimension of the space, and the number of positive ones gives the dimension of the unstable manifold, and the number of negative ones the dimension of the stable manifold. And there's an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, which is the escape rate that I already talked about. Now one can actually prove simple theorems, like back-of-the-envelope calculations. The idea here is that you know the rate at which closely started trajectories separate from one another, so you can say that that is fine as long as my trajectory finds the solution before the trajectories separate too quickly.
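As I understand the relation being invoked, the Pesin-type equality for such open, transiently chaotic systems can be written as follows. This is a sketch of the standard escape-rate form, not a formula shown verbatim in the talk:

```latex
% Escape-rate form of the Pesin relation (sketch): the metric
% (Kolmogorov--Sinai) entropy on the chaotic saddle equals the sum of
% positive Lyapunov exponents minus the escape rate.
h_{\mathrm{KS}} \;=\; \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa
% \kappa is the escape rate; 1/\kappa sets the solution-finding timescale.
```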
In that case, I can have the hope that if I start from some region of the phase space, with several closely started trajectories, they kind of go into the same solution often, and that's this upper bound, this limit; and it really shows that it has to be an exponentially small number. What it depends on is the N-dependence of the exponent right here, which combines the information-loss rate and the solution-time performance. So if this exponent has a large N-dependence, or even a linear N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. So this is sort of the direction that we're going in, and this formulation is applicable to all deterministic dynamical systems. And I think we can expand this further, because there is a way of getting the expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I don't have time to talk about; it's kind of a program that one can try to pursue. So, the conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. It can be more efficient, by orders of magnitude, than digital computing in solving NP-hard problems, because, first of all, many of these systems avoid the von Neumann bottleneck, there is parallelism involved, and you can also have a larger spectrum of continuous-time dynamical algorithms than discrete ones. But we also have to be mindful of what the limits are, and one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? And I think that's the exciting part, to derive these theorems.

Published Date : Sep 27 2020



Neuromorphic in Silico Simulator For the Coherent Ising Machine


 

>>Hi everyone. I'm a fellow from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI Lab. And I'm happy to share with you today some of the recent works that have been done either by me or by collaborators. The title of my talk is: a neuromorphic in-silico simulator for the coherent Ising machine. And here is the outline: I would like to make the case that the simulation in digital electronics of the CIM can be useful for better understanding or improving its function principles, by introducing some ideas from neural networks. This is what I will discuss in the first part; then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, and projections of the performance that can be achieved using a very large-scale simulator in the third part; and finally I will talk about future plans. So first, let me start by comparing recently proposed Ising machines using this table, which is adapted from a recent Nature Electronics paper. And this comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation. So in red here are the limitations of each of the solvers' hardware. Interestingly, the FPGA-based systems, such as the Toshiba simulated bifurcation machine, or a recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley, offer a good compromise between speed and scalability. And this is why, despite the unique advantages that some of the other hardware have, such as the coherent superposition in flux qubits or the energy efficiency of in-memory systems, FPGAs are still an attractive platform for building large-scale Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, and they are not particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-in and fan-out, and the long propagation of information within the system. In this respect, the FPGAs are interesting from the perspective of the physics of complex systems, rather than the physics of transistors or photons. So, to put the performance of these various hardware platforms in perspective, we can look at the computation done by the brain: the brain computes using billions of neurons, using only 20 watts of power, and operates at a comparatively very slow frequency. And so these impressive characteristics motivate us to try to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and of the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large-scale simulator in silicon, shown in the bottom here, that can be used for suggesting better organization principles for the CIM. In this talk I will discuss three neuro-inspired principles: the asymmetry of connections, chaotic neural dynamics, and the hierarchical organization of connectivity. On the local structure: neural networks are not composed of the repetition of always the same types of neurons; there is a local structure that is repeated, and here is a schematic of the microcolumn in the cortex. And lastly, the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain. So here you see a representation of the hierarchical organization of the monkey cerebral cortex.
So here you see a representation of the Iraqi and organization of the monkey cerebral cortex. So how can these principles we used to improve the performance of the icing machines? And it's in sequence stimulation. So, first about the two of principles of the estimate Trian Rico structure. We know that the classical approximation of the Cortes in machine, which is a growing toe the rate based on your networks. So in the case of the icing machines, uh, the okay, Scott approximation can be obtained using the trump active in your position, for example, so the times of both of the system they are, they can be described by the following ordinary differential equations on in which, in case of see, I am the X, I represent the in phase component of one GOP Oh, Theo F represents the monitor optical parts, the district optical parametric amplification and some of the good I JoJo extra represent the coupling, which is done in the case of the measure of feedback cooking cm using oh, more than detection and refugee A then injection off the cooking time and eso this dynamics in both cases of CME in your networks, they can be written as the grand set of a potential function V, and this written here, and this potential functionally includes the rising Maccagnan. So this is why it's natural to use this type of, uh, dynamics to solve the icing problem in which the Omega I J or the Eyes in coping and the H is the extension of the rising and attorney in India and expect so. >>Not that this potential function can only be defined if the Omega I j. R. A. Symmetric. So the well known problem of >>this approach is that this potential function V that we obtain is very non convicts at low temperature, and also one strategy is to gradually deformed this landscape, using so many in process. But there is no theorem. Unfortunately, that granted convergence to the global minimum of there's even 20 and using this approach. 
And so this is why we propose to introduce a microstructure in the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correction variable. And the addition of this local structure introduces an asymmetry in the system, which in turn induces chaotic dynamics: a chaotic search, rather than a relaxation process, for the ground state of the Ising Hamiltonian. Within this microstructure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitude of the spins to become equal to a certain target amplitude a. And this is done by modulating the strength of the Ising coupling: you see the error variable e_i multiply the Ising coupling term here in the dynamics of the DOPO. The whole dynamics is then described by these coupled equations, and because the e_i do not necessarily take the same value for different i, this introduces an asymmetry in the system, which in turn creates chaotic dynamics, which I'm showing here for solving a certain problem size of SK problem, in which the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plot. And you see this chaotic search that visits various local minima of the Hamiltonian and eventually finds the ground state. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics does not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using a modulation of the target amplitude.
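The coupled spin and error-variable dynamics described here can be sketched in a few lines. This is a toy Euler-integration sketch with assumed parameter values p, a, and beta, not the actual FPGA implementation:

```python
import numpy as np

def cim_cac(J, p=0.5, a=1.0, beta=0.3, dt=0.05, steps=4000, seed=1):
    """Sketch of the classical CIM dynamics with error-variable amplitude
    control: each analog spin x_i is paired with an error variable e_i that
    drives x_i**2 toward the target amplitude a, destabilizing local minima."""
    rng = np.random.default_rng(seed)
    n = len(J)
    x = rng.uniform(-0.1, 0.1, n)   # in-phase DOPO components (analog spins)
    e = np.ones(n)                  # error (amplitude-control) variables
    best, best_E = None, np.inf
    for _ in range(steps):
        x += dt * ((-1 + p - x ** 2) * x + e * (J @ x))
        e += dt * (-beta * e * (x ** 2 - a))    # amplitude feedback
        E = -0.5 * np.sign(x) @ J @ np.sign(x)  # Ising energy of sign(x)
        if E < best_E:
            best_E, best = E, np.sign(x).copy()
    return best, best_E

# toy 4-spin antiferromagnetic ring: the ground states alternate +1/-1
J = np.zeros((4, 4))
for i in range(4):
    J[i, (i + 1) % 4] = J[(i + 1) % 4, i] = -1.0
s, E = cim_cac(J)
print(E)  # -4.0 at an alternating configuration
```

With all e_i frozen at one, this reduces to the symmetric gradient dynamics above; letting each e_i evolve independently is what breaks the symmetry and produces the chaotic search.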
And so we have proposed in the past two different modulations of the target amplitude: the first one is a modulation that ensures that the entropy production rate of the system becomes positive, and this forbids the creation of any nontrivial attractors. But in this work I will talk about another, heuristic modulation, which is given here, that works as well as this first modulation but is easier to implement on FPGA.
And if we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log n), because computing the dot product involves summing all the terms in the products, which is done on the FPGA by an adder tree, whose height scales logarithmically with the size of the system. But this is only the case if we had an infinite amount of resources on the FPGA; for dealing with larger problems of more than 100 spins, usually we need to decompose the matrix into smaller blocks, with a block size that I note U here. And then the scaling becomes, for the nonlinear parts, linear in n over U, and for the dot products, (n over U) squared. Typically, for a low-end FPGA chip, the block size of this matrix is about 100. So clearly we want to make U as large as possible, in order to maintain this scaling in log n for the number of clock cycles needed to compute the dot product, rather than the n-squared scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing the computing components hierarchically within the FPGA, in a way which is shown here in this right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. So I'm not going into the details of how this is implemented on the FPGA.
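The clock-cycle scaling just argued can be sketched with a simple cost model. The constants are schematic, but the asymptotics, log2(n) for a full-width adder tree versus (n/U)^2 for a blocked one, follow the argument in the text.

```python
from math import ceil, log2

def matvec_cycles_full_tree(n):
    # With enough FPGA resources, each dot product reduces through an
    # n-input adder tree of depth ~ log2(n), fully pipelined.
    return ceil(log2(n))

def matvec_cycles_blocked(n, u):
    # With an adder tree only u inputs wide, the n x n matrix is processed
    # as (n/u) x (n/u) sequential blocks: roughly (n/u)^2 cycles.
    b = ceil(n / u)
    return b * b

# Compare the two regimes for a block size u ~ 100, as quoted in the talk.
for n in (500, 1000, 2000):
    print(n, matvec_cycles_full_tree(n), matvec_cycles_blocked(n, 100))
```

This makes the motivation explicit: at n = 1000 the blocked design already pays about 100 block-passes where a full tree would need only ~10 pipelined levels, which is why enlarging the adder tree is worth the routing difficulty.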
But just to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance for the simulator of the coherent Ising machine: instead of getting into the details of the FPGA implementation, I would like to give a few benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper. Here I show results for solving SK problems, fully connected, randomly chosen plus-or-minus-one spin-glass problems, and we use as a metric the number of matrix-vector products, since that is the bottleneck of the computation, needed to get the optimal solution of the SK problem with 99 percent success probability, plotted against the problem size. In red here is the proposed FPGA implementation; in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems; and in green here is noisy mean-field annealing, whose behavior is similar to the coherent Ising machine.
You see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches, so that's an interesting feature of the system. Next we can see what the real time to solution is: in this last slide, the time to solution in seconds to find the ground state of SK instances with 99 percent success probability is shown for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. So you see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the other state-of-the-art approaches. Moreover, the relatively good scaling of the time to solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield network implemented on memristors, in blue here, which is very fast for small problem sizes but whose scaling is not good, and the same for the restricted Boltzmann machine implemented on FPGA proposed by a group in Brooklyn recently, which again is very fast for small problem sizes but whose scaling is bad, so that it is worse than the proposed approach. We can thus expect that for problem sizes larger than, let's say, 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide: another confirmation that the scheme scales well is that we can find maximum-cut values on the G-set benchmark that are better than the cut values that had previously been found by any other algorithms. So they are the best known cut values, to the best of our knowledge, which is shown in this paper's table here; in particular, for instances 14 and 15 of this G-set, we can find better cut values than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm on CPU for doing this. Getting these good results on the G-set does not require any particularly hard tuning of the parameters: the tuning used here is very simple, it just depends on the degree of connectivity within each graph. And so these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems, but at all types of graph Ising problems, such as MaxCut problems.
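For reference, the maximum-cut value reached by a spin configuration, the quantity being compared on the G-set benchmarks above, can be computed directly from the coupling matrix. This is a generic sketch of that metric, not the benchmark code itself.

```python
import numpy as np

def cut_value(W, s):
    # Total weight of edges crossing the partition defined by spins s in {-1,+1}:
    # an edge (i, j) is cut exactly when s_i * s_j = -1.
    return float(np.sum(np.triu(W, 1) * (1.0 - np.outer(s, s)) / 2.0))

# 4-cycle with unit weights: alternating spins cut all four edges.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(cut_value(W, np.array([1, -1, 1, -1])))
```

Maximizing this cut value is equivalent, up to a constant, to minimizing the Ising energy with couplings J = -W, which is how an Ising solver's output is scored on MaxCut benchmarks.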
So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of the adder tree on a large FPGA, carefully routing the logical components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation that we are currently working on. So here you see projections for the time to solution with 99 percent success probability for solving these SK problems, with respect to the problem size, compared to different state-of-the-art Ising machines, in particular the digital annealer of Fujitsu, which is shown in green here, the green line without dots. And we show two different hypotheses for these projections: either that the time to solution scales as an exponential of n, or that the time to solution scales as an exponential of the square root of n. It seems, according to the data, that the time to solution scales more as an exponential of the square root of n, although we cannot be sure. And these projections show that we probably can solve SK problems of size 2000 spins, finding the real ground state of the problem with 99 percent success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So, one of the future plans for this coherent Ising machine simulator: the first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM system. And to do this, what is simulatable on the FPGA is the quantum Gaussian model that is described in this paper and proposed by people in the NTT group.
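The two scaling hypotheses just mentioned, exp(n) versus exp(sqrt(n)), can be discriminated by comparing least-squares fits of log time-to-solution against n and against sqrt(n). The data below are synthetic and purely illustrative; they are not the measurements from the talk.

```python
import numpy as np

def better_fit(ns, log_ts):
    # Fit log(T) ~ c*x + b for x = n and for x = sqrt(n);
    # the hypothesis with the smaller squared residual wins.
    ns, log_ts = np.asarray(ns, float), np.asarray(log_ts, float)
    def resid(x):
        A = np.vstack([x, np.ones_like(x)]).T
        _, res, *_ = np.linalg.lstsq(A, log_ts, rcond=None)
        return float(res[0]) if res.size else 0.0
    return "exp(sqrt(n))" if resid(np.sqrt(ns)) < resid(ns) else "exp(n)"

# Synthetic data generated with square-root scaling, for illustration only.
ns = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
log_ts = 0.5 * np.sqrt(ns) + 1.0
print(better_fit(ns, log_ts))
```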
And so the idea of this model is that instead of having the very simple ODEs I have shown previously, it includes paired ODEs that take into account not only the mean of the in-phase component, but also the variances, so that we can take into account more quantum effects of the DOPO, such as squeezing. And then we plan to make the simulator open access for the members, to run their instances on the system. There will be a first version in September, which will be just based on simple command-line access to the simulator and in which we will have just a classical approximation of the system, with no noise term, binary weights, and no measurement term. But then we will propose a second version that would extend the current Ising machine to a rack of eight FPGAs, in which we will add the more refined models, the truncated Wigner and the quantum Gaussian model that I just talked about, and which will support real-valued weights for the Ising problems and support the measurement. So we will announce later when this is available, and Farah is working hard to get the first version available sometime in September. Thank you all, and we'll be happy to answer any questions that you have.

Published Date : Sep 24 2020



Coherent Nonlinear Dynamics and Combinatorial Optimization


 

Hi, I'm Hideo Mabuchi from Stanford University. This is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we are taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments or spins, with total energy given by the expression shown at the bottom left of the slide. Here the sigma variables are meant to take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground state problem is defined as an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by given numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins n, for worst-case instances at each n. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions.
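The energy expression and ground-state problem just described can be captured in a few lines. The sign convention below is one common choice, assumed here rather than taken from the slide, and the exhaustive search makes the exponential cost of exact solution explicit.

```python
import numpy as np
from itertools import product

def ising_energy(J, h, s):
    # H = -(1/2) s^T J s - h . s with s_i in {-1, +1}; for symmetric J with
    # zero diagonal this equals -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i.
    s = np.asarray(s, float)
    return float(-0.5 * s @ J @ s - h @ s)

def brute_force_ground_state(J, h):
    # Exhaustive search over all 2^n assignments; feasible only for small n,
    # which is exactly why heuristics and Ising machines are of interest.
    best_e, best_s = None, None
    for bits in product([-1.0, 1.0], repeat=len(h)):
        e = ising_energy(J, h, np.array(bits))
        if best_e is None or e < best_e:
            best_e, best_s = e, bits
    return best_e, best_s

# 3-spin ferromagnet with no field: ground states are all-up / all-down, H = -3.
J = np.ones((3, 3)) - np.eye(3)
h = np.zeros(3)
e, s = brute_force_ground_state(J, h)
print(e, s)
```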
Usually we're more interested in just getting the best solution we can within an affordable cost, where costs may be measured in terms of time, service fees, and/or energy required for computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally find very good but not guaranteed-optimum solutions, and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median runtimes, across a library of problem instances, that scaled as a very steep root exponential, for n up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with n ranging from 131 to 744,710. Instances from this library with n between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of runtime on a 48-core two-gigahertz cluster; all instances with n greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.014% of a known lower bound having been discovered for an instance with n equal to 19,289, requiring approximately two days of runtime on a single core at 2.4 gigahertz.
Now, if we simple-mindedly extrapolate that root-exponential scaling out to larger n, we might expect that an exact solver would require something more like a year of runtime on the 48-core cluster used for the n equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower costs. At the extreme end, the largest TSP ever solved exactly has n equal to 85,900. This is an instance derived from 1980s VLSI design, and this required 136 CPU-years of computation, normalized to a single core at 2.4 gigahertz. But the roughly 20-fold larger, so-called World TSP benchmark instance, with n equals 1,904,711, has been solved approximately, with an optimality gap bounded below 0.0474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for MaxCut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results for MaxCut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms in the practice of solving hard optimization problems. There thus arises the critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high cost to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance.
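The year-of-runtime extrapolation above follows from the root-exponential model. The sketch below makes the arithmetic explicit; the constant c is chosen purely for illustration and is not fit to the benchmark data mentioned in the text.

```python
from math import exp, sqrt

def extrapolate(T0, n0, n, c=0.12):
    # Hypothetical root-exponential runtime model:
    #   T(n) = T0 * exp(c * (sqrt(n) - sqrt(n0)))
    # calibrated so that a ~1-day instance at n0 lands near a year at n.
    return T0 * exp(c * (sqrt(n) - sqrt(n0)))

# If a median instance at n = 4,500 took ~1 day, the same scaling law would
# put n = 13,584 at roughly a year of runtime on the same hardware.
print(extrapolate(1.0, 4500, 13584))
```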
Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This is certainly pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower costs on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs, but also about highly customized, special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So, against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms. Ising machines in general are a novel class of information processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems. In contrast to both more traditional engineering approaches that build Ising machines using conventional electronics, and more radical proposals that would require large-scale quantum entanglement, the emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that leverage Ising information dynamics.
The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft or, perhaps, mean-field spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the sync-pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string, giving a proposed solution of the Ising ground state problem. This method of solving Ising problems seems quite different from a conventional algorithm that runs entirely on a digital computer.
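A heavily simplified classical caricature of this pump-ramp algorithm, soft spins with linear gain, cubic saturation, and linear Ising feedback, can be sketched as follows. The parameter values and the Euler integration are illustrative assumptions for this sketch, not the experimental system.

```python
import numpy as np

def cim_pump_ramp(J, steps=4000, dt=0.01, eps=0.1, p_max=2.0, seed=1):
    # Soft-spin caricature of the CIM: each amplitude x_i sees linear gain
    # (p - 1), cubic saturation, and measurement-feedback Ising coupling.
    rng = np.random.default_rng(seed)
    x = 0.001 * rng.standard_normal(J.shape[0])
    for t in range(steps):
        p = p_max * t / steps          # pump ramped from 0 through threshold
        x += dt * ((p - 1.0) * x - x**3 + eps * (J @ x))
    return np.sign(x)                  # read out the binary spin string

# Antiferromagnetic pair: the out-of-phase collective mode has the higher
# effective gain, so the read-out spins should come out opposite.
J = np.array([[0.0, -1.0], [-1.0, 0.0]])
s = cim_pump_ramp(J)
print(s)
```

Below threshold the coupling biases which collective mode grows fastest; the cubic term then freezes the amplitudes into a binary configuration, mirroring the quantization role played by the parametric amplifier in the real system.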
As a crucial aspect of the computation is performed physically, by the analog, continuous, coherent nonlinear dynamics of the optical degrees of freedom, in our efforts to analyze CIM performance we have therefore turned to dynamical systems theory, namely a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of given instances. To provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of the slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition, and the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information.
If we consider two uncoupled OPOs, as shown in the upper right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by a mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing, it will inject a perturbation into the other that may interfere either constructively or destructively with the field that it is trying to generate via its own lasing process. As a result, one can easily show that for alpha positive there's an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground state problem of the ferromagnetic or antiferromagnetic n equals two Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase. Clearly, we can imagine generalizing the story to larger n. However, the story doesn't stay as clean and simple for all larger problem instances, and to find a more complicated example, we only need to go to n equals four. For some choices of J_ij for n equals four, the story remains simple, like the n equals two case.
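The threshold-lowering argument for two coupled OPOs can be checked with a small linear-stability computation. The linearized model below is an assumed caricature (each collective mode with coupling eigenvalue lambda reaches threshold at p = 1 - lambda), consistent with the qualitative story above rather than taken from the talk.

```python
import numpy as np

def first_lasing_mode(alpha):
    # Linearized two-OPO model: dx/dt = (p - 1) x + alpha * C x, with C the
    # mutual-injection matrix. The collective mode with coupling eigenvalue
    # lam reaches threshold at p = 1 - lam, so the largest lam lases first.
    C = np.array([[0.0, 1.0], [1.0, 0.0]])
    lams, vecs = np.linalg.eigh(alpha * C)   # eigenvalues in ascending order
    v = vecs[:, -1]                          # mode with the largest eigenvalue
    return 1.0 - lams[-1], float(np.sign(v[0] * v[1]))  # threshold, rel. phase

print(first_lasing_mode(+0.2))   # ferromagnetic coupling: in-phase mode first
print(first_lasing_mode(-0.2))   # antiferromagnetic: out-of-phase mode first
```

Both signs of alpha lower the winning mode's threshold below 1 by the same amount, but the relative phase of the first-lasing mode flips with the sign of alpha, which is exactly the one-bit ground-state readout described above.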
The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated n equals four instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but suboptimal minimum at large pump power. The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behavior seems to become more common at larger n, as for the n equals 20 instance shown in the lower plots, where the lower right plot is just a zoom into a region of the lower left plot. It can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-n examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp. Of course, n equals 20 is still too small to be of interest for practical optimization applications.
But the advantage of beginning with the study of small instances is that we're able to reliably determine their global minima and to see how they relate to the adiabatic trajectory of the origin and the basic CIM algorithm. In the small-n limit, we can also analyze fully quantum mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of n equals 10 to the four, 10 to the five, 10 to the six, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger n. Our initial approach to characterizing CIM behavior in the large-n regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, et cetera. At present we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to explain differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So in closing, I should acknowledge the people who did the hard work on these things that I've shown: my group, including graduate students Edwin Ng, Daniel Wennberg, Ryotatsu Yanagimoto, and Atsushi Yamamura, who have been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini.
All of us are within the department of applied physics at Stanford University, also in collaboration with Yoshihisa Yamamoto over at NTT-PHI research labs. And I should acknowledge funding support from the NSF, by the Coherent Ising Machines Expedition in Computing, also from NTT-PHI research labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much.

Published Date : Sep 21 2020



Photonic Accelerators for Machine Intelligence


 

>>Hi, I'm Dirk Englund, an associate professor of electrical engineering and computer science at MIT. It's been fantastic to be part of this team that Professor Yamamoto put together for the NTT PHI program, and it's a great pleasure to report our update from the first year. I will talk to you today about our recent work on photonic accelerators for machine intelligence. You can already get a flavor of the kind of work I'll be presenting from the photonic integrated circuit shown here, which serves as a photonic matrix processor that we are developing to try to break some of the bottlenecks we encounter in machine learning inference tasks, in particular tasks like vision, games, control, or language processing. This work is jointly led with Dr. Ryan Hamerly, a scientist at NTT Research, and he will have a poster in this conference that you should check out. I should also say that there are postdoc positions available; just take a look at the announcements of the QP lab at MIT. So if you look at these machine learning applications and look under the hood, you see that a common feature is that they use artificial neural networks, or ANNs, where you have an input layer of, let's say, n neurons and values that is connected to the first layer of, let's say, also n neurons; connecting the first to the second layer, if you represent it by a matrix, requires an n-by-n matrix that has of order n-squared free parameters. Now, in traditional machine learning inference, you would have to grab these n-squared values from memory, and every time you do that it costs quite a lot of energy. Maybe you can batch, but it is still quite costly in energy. Moreover, each of the input values has to be multiplied by that matrix, and if you multiply an n-by-one vector by an n-by-n matrix, you have to do of order n-squared multiplications.
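The cost the speaker describes can be made concrete with a minimal NumPy sketch (my own illustration, not code from the talk): a single dense layer stores n-squared weights that must all be fetched from memory, and applying it to an n-vector takes of order n-squared multiply-accumulates.

```python
import numpy as np

n = 1024                          # neurons per layer (illustrative size)
rng = np.random.default_rng(0)

W = rng.standard_normal((n, n))   # n^2 free parameters, all read from memory
x = rng.standard_normal(n)        # input activation vector

y = W @ x                         # one layer: of order n^2 multiply-accumulates
print(y.shape)                    # (1024,)
```

Each layer therefore pays both an O(n^2) memory-access cost and an O(n^2) arithmetic cost on a digital machine, which is the bottleneck the photonic approach targets.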
Okay, now, on a digital computer you therefore have to do of order n-squared operations and memory accesses, which can be quite costly. But the proposition is that on a photonic integrated circuit, perhaps we could do that matrix-vector multiplication directly on the PIC itself, by encoding the inputs as optical fields and sending them through a programmable interferometer, so that the outputs would be the product of the matrix multiplied by the input vector. And that is actually the experiment we did, demonstrating that this is in principle possible, back in 2017, in a collaboration with Professor Marin Soljacic. Now, if we look a little more closely at the device shown here, it consists of a silicon layer that is patterned into waveguides. We do this with a foundry; this was fabricated through the OpSIS foundry, and many thanks to our collaborators who helped make that possible. This layer guides light, and pairs of these waveguides are brought together to make two-by-two transformations, Mach-Zehnder interferometers as they are called: two input waveguides coming in and two output waveguides going out. And by having two phase settings here, theta and phi, we can control any arbitrary SU(2) rotation. Now, if I want N modes coming in and N modes coming out, that can be represented by an SU(N) unitary transformation, and that's what this kind of chip allows you to do. That's the key ingredient that really launched us in my group. I should at this point acknowledge the people who have made this possible, in particular Liane Bernstein and Alex Sludds, as well as Ryan Hamerly once more, also our other collaborators, including Professor Marin Soljacic, and of course the funding, in particular now the NTT Research funding. So why optics? Optics has failed many times before in building computers. But why is this different?
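That two-by-two building block can be sketched numerically. This is my own illustration using one common textbook convention for the phase placement (real devices vary): two 50:50 beamsplitters around an internal phase theta, followed by an external phase phi, give an arbitrary SU(2)-type rotation that is unitary by construction.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 Mach-Zehnder transfer matrix: beamsplitter, internal phase
    theta, beamsplitter, external phase phi (illustrative convention)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beamsplitter
    inner = np.diag([np.exp(1j * theta), 1.0])       # internal phase shifter
    outer = np.diag([np.exp(1j * phi), 1.0])         # external phase shifter
    return bs @ inner @ bs @ outer

U = mzi(0.3, 1.1)
print(np.allclose(U.conj().T @ U, np.eye(2)))        # True: U is unitary
```

A triangular or rectangular mesh of such 2x2 blocks can then realize an arbitrary N-by-N unitary, which is the chip-level capability the talk describes.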
And I think the difference is that we're not trying to build an entirely new computer out of optics; we're selective in how we apply optics. We should use optics for what it's good at, and that's probably not the nonlinearity, and not memory. Communication and fan-out are great in optics, and, as we just said, linear algebra you can do in optics fantastically well. So you should make use of these things and then combine them judiciously with electronic processing, to see if you can get an advantage in the entire system. And so, before I move on: based on the 2017 paper, two startups were created, Lightelligence and Lightmatter; two students from my group, among them Nick Harris, co-founded Lightmatter. And after about two years they've been able to create their first device, the first large-scale matrix processor. This device, called Mars, has 64 input modes, 64 output modes, and full programmability under the hood. Because they're integrating waveguides directly with CMOS electronics, they were able to deal with all the wiring complexity, all the feedback and so forth, and this device is now able to process a 64-by-64 unitary matrix on the fly. As for parameters, it has three watts total power consumption, and its latency, how long it takes for a matrix to be multiplied by a vector, is less than a nanosecond. And because this device works well over a pretty large bandwidth, 20 gigahertz, you can put in many channels that individually run at one gigahertz, so you can have tens of these SU(64) rotations happening simultaneously. The sort of back-of-the-envelope physics gives you, per multiply-accumulate, just tens of femtojoules at the moment. So that's very, very competitive. That's awesome.
Okay, so you see the plan, and potentially the breakthroughs that are enabled by photonics here. And actually, more recently, one thing that made this possible is very cool: their phase shifters have no hold power, whereas our phase shifters used thermo-optic modulation. These use nanoscale mechanical modulators that have no hold power, so once you program a unitary you can just hold it there, with no energy consumption added over time. So photonics really is on the rise in computing. But once again, you have to be careful in how you compare against electronics, to find where there is a gain to be had. What I've talked about so far is weight-stationary photonic processing. Electronics has that too, but it doesn't have the benefits of the coherence of the optical fields transiting through the matrix, nor the bandwidth. So that is, I think, a really exciting direction, and these companies are off building these chips; we'll see in the next couple of months how well this works. A different direction is to have an output-stationary matrix-vector multiplication, and for this I want to point to the paper we wrote with Ryan, Emily, and the other team members, which projects the activation functions together with the weight terms onto a detector array, through the interference of the activation function and the weight term, by homodyne detection. If you think about homodyne detection, it actually automatically produces the multiplication: the interference term between two optical fields gives you the multiplication between them, and that is what this is making use of. I want to talk a little bit more about that approach.
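The "multiplication from interference" idea can be checked with a toy calculation (my own sketch of balanced detection, not the paper's full model): subtracting the two photocurrents at the outputs of a 50:50 beamsplitter cancels the individual intensities and isolates the product of the two input amplitudes.

```python
import numpy as np

# Real field amplitudes: an activation value and a weight value.
a, b = 0.7, -1.3

# 50:50 beamsplitter outputs are (a+b)/sqrt(2) and (a-b)/sqrt(2).
# Each detector measures an intensity; the balanced (subtracted)
# photocurrent keeps only the interference term.
i_plus = abs((a + b) / np.sqrt(2)) ** 2
i_minus = abs((a - b) / np.sqrt(2)) ** 2

print(i_plus - i_minus)   # equals 2*a*b: the product appears in the current
```

Algebraically, (a+b)^2/2 - (a-b)^2/2 = 2ab, so the detector pair performs the multiply "for free" as part of measurement.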
So we actually did a careful analysis in the PRX paper that was cited in the last
And we benchmark that on the amnesty data set. So that was a theoretical work that looked at the scaling limits and show that there's great, great hope to to really gain tremendously in the energy per bit, but also in the overall latency and throughput. But you shouldn't celebrate too early. You have to really do a careful system level study comparing, uh, electronic approaches, which oftentimes happened analogous approach to the optical approaches. And we did that in the first major step in this digital optical neural network. Uh, study here, which was done together with the PNG who is an electron ICS designer who actually works on, uh, tronics based on c'mon specifically made for machine on an acceleration. And Professor Joel, member of M I t. Who is also a fellow at video And what we studied there in particular, is what if we just replaced on Lee the communication part with optics, Okay. And we looked at, you know, getting the same equivalent error rates that you would have with electronic computer. And that showed that that way should have a benefit for large neural networks, because large neural networks will require lots of communication that eventually do not fit on a single Elektronik trip anymore. At that point, you have to go longer distances, and that's where the optical connections start to win out. So for details, I would like to point to that system level study. But we're now applying more sophisticated studies like this, uh, like that simulate full system simulation to our other optical networks to really see where the benefits that we might have, where we can exploit thes now. Lastly, I want to just say What if we had known nominee Garrity's that >>were actually reversible. There were quantum coherent, in fact, and we looked at that. So supposed to have the same architectural layout. 
But rather than having saturable absorption or photodetection and an electronic nonlinearity, which is what we've done so far, you have an all-optical nonlinearity, based, for example, on a Kerr medium. So suppose we had a strong enough Kerr medium that the output from one of these transformations can pass through it, pick up an intensity-dependent phase shift, and then pass into the next layer. What we did in this case is say: suppose you have multiple layers of these Mach-Zehnder interferometer meshes, just like the ones we had before, and you want to train this to do something. Suppose the training objective is, for example, quantum optical state compression: you have a quantum optical state, and you'd like to see how much you can compress it while keeping the same quantum information in it. We trained the network to discover an efficient algorithm for that. We also trained it with reinforcement learning for black-box quantum simulation, and, perhaps particularly interesting, for one-way quantum repeaters. We said: if we have a communication network with these quantum optical neural networks stationed some distance apart, and you come in with an optically encoded pulse that encodes a qubit into many individual photons, how do I repair that multi-photon state and send the corrected optical state out the other side? This is a one-way error-correcting scheme. We didn't know how to build it, but we posed it as a challenge to the neural network, and in simulation we trained the network how to apply the weights in the matrix transformations to perform it, answering an actual open challenge in the field of optical quantum networks. So that gives us motivation to try to build these kinds of nonlinearities, and we've done a fair amount of work there.
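A toy version of that Kerr-style activation (my own sketch; the parameter phi_nl is illustrative, not a device value): each field amplitude picks up a phase proportional to its own intensity, so amplitudes are preserved while phases rotate nonlinearly between the linear interferometer layers.

```python
import numpy as np

def kerr_activation(field, phi_nl=1.0):
    """All-optical nonlinearity sketch: intensity-dependent phase shift
    (Kerr effect). phi_nl is the nonlinear phase per unit intensity."""
    return field * np.exp(1j * phi_nl * np.abs(field) ** 2)

E = np.array([0.5 + 0j, 1.0 + 0j])   # complex field amplitudes entering the layer
out = kerr_activation(E)
print(np.abs(out))                   # amplitudes unchanged; only phases rotate
```

Because the transformation is a pure phase, it is unitary and hence reversible, which is what makes it usable in the quantum-coherent networks described above.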
On this you can see references five through seven. I've talked about these programmable photonics already, for the benchmark analysis and some of the other related work; please see Ryan's poster, where, as I mentioned, we have ongoing work in benchmarking optical computing as part of the NTT program with our collaborators. And I think the main thing I want to say at the end is that the exciting thing, really, is that the physics tells us there are many orders of magnitude of efficiency gains to be had, if we can develop the technology to realize them. I was being conservative here with three orders of magnitude; it could be six orders of magnitude for the larger neural networks that we may want to use in the future. So the physics tells us there is a tremendous gap between where we are and where we could be, and that, I think, makes this tremendously exciting and makes the NTT PHI project so very timely. So with that, thank you for your attention, and I'll be happy to talk about any of these topics.

Published Date : Sep 21 2020


Adrian and Adam Keynote


 

>>Welcome, everyone. Good morning and good evening to all of you around the world. I am so excited to welcome you to Launchpad, our annual conference for customers, for partners, and for our own colleagues here at Mirantis. This is meant to be a forum for learning, for sharing, for discovery; one of openness. We're incredibly excited to have you here with us. I want to take a few minutes this morning to open the conference and share with you, first and foremost, where we're going as a company: what our vision is. Then I also want to share an update on what we have been up to for the past year, especially with two important acquisitions, Docker Enterprise and then Kontena and Lens, and what some of the latest developments at Mirantis are. And then I'll close with an exciting announcement that we have today, which we hope is going to be interesting and valuable for all of you. But let me start with our mission. What are we here to do? It's very simple: we want to help you ship code faster. This is something that we're very excited about, something that we have achieved for many of you around the world, and we just want to double down on it. We feel this is a mission that's very much worthwhile, relevant, and important to you. Now, how do we do that? How do we help you ship code faster? There are three things we believe in. We believe that in this world of cloud, choice is incredibly important. We all know that developers want to use the latest tools. We all know that cloud technology is evolving very quickly and new innovations appear very, very quickly, and we want to make them available to you. So choice is very important. At the same time, consuming choice can be difficult, so our mission is to make choice simple for you, to give developers and operators simplicity. And then, finally, underpinning everything that we do is security.
These are the three big things that we invest in and believe in: choice, simplicity, and security. And the foundational technology we're betting on to make that happen for you is Kubernetes. Many of you, many of our customers, use Kubernetes from Mirantis today, and use it at scale, and this is something we want to double down on. The fundamental benefit, the key promise we want to deliver for you, is speed, and we feel this is very relevant, important, and valuable in the world we are in today. So you might also be interested in what our priorities have been since we acquired Docker Enterprise. What has happened over the past year at Mirantis? There are three very important things we focused on as a company. The first one is customer success. When we acquired Docker Enterprise, the first thing we did was listen to you, connect with the most important customers, and find out: what was your sentiment? What did you like? What were you concerned about? What needed to improve? How could we create more value and a better experience for you? So customer success has been at the top of our list of priorities ever since. And here is what we've heard, what you've told us. You've told us that you very much appreciated the technology, that you got a lot of value out of it, but that at the same time there are some things we can do better. Specifically, you wanted better SLAs and a better support experience. You also wanted more clarity on the roadmap. And you wanted deeper alignment and a deeper relationship between your needs and requirements and our technical development, the key people in our development organization, our most important engineers. Those three things were very, very important to you, and they were very important to us. We've taken that to heart, and over the past 12 months we believe that, as a team, we have dramatically improved the customer support experience.
We introduced new SLAs with ProdCare. We've rolled out a roadmap to many, many of our customers. We've taken your requirements into consideration, and we've built better and deeper relationships with so many of you. And the evidence that we've actually made progress is a significant increase in the workloads and usage on our platforms. I was so fortunate that we were able to build better and stronger relationships and take you to the next level of growth, for companies like Visa, Societe Generale, Nationwide, Bosch, AXA XL, GlaxoSmithKline, Standard and Poor's, Apple, AT&T. So many, many of you, many of our customers around the world, have over the past 12 months experienced better support, strong SLAs, a deeper relationship, and a lot more clarity on our roadmap and our vision forward. The second very big priority for us over the last year has been product innovation. This is something we are very excited about, that we've invested most of our resources in, and we've delivered some strong proof points. Docker Enterprise 3.1 was the first release we shipped as Mirantis, as the unified company. It had some big innovative features: Windows support, AI and machine learning use cases, and a significant number of improvements in stability and scalability. Earlier this year, we were very excited to acquire Lens and the Kontena team. Lens is by far the most popular Kubernetes IDE in the world today, and every day 600 new users start using Lens to manage their Kubernetes clusters, to deploy applications on top of Kubernetes, and to dramatically simplify the Kubernetes experience for operators and developers alike. That is a very big step forward for us as a company.
And then, finally, this week at this conference we are announcing our latest product, which we believe is a huge step forward for Docker Enterprise and which we call Docker Enterprise Container Cloud; you will hear a lot more about it during this conference. The third vector of development, the third priority for us as a company over the past year, was to become more and more developer-centric. As we've seen over the past 10 years, developers really move the world forward. They create innovation; they create new software. And while our platform is often managed, run, and maybe even purchased by IT architects, operators, and IT departments, the actual end users are developers. We made it our mission as a company to become closer and closer to developers, to better understand their needs, and to make our technology as easy and fast to consume as possible for them. So as a company we're becoming more and more developer-centric, and the two core products which fit together extremely well to make that happen are Lens, which is targeted squarely at a new breed of Kubernetes developers, sitting on the desktop and managing Kubernetes environments and the applications on top, on any cloud platform, anywhere; and Docker Enterprise Container Cloud, a new and radically innovative container platform which we're bringing to market this week. So with this as background, what is the fundamental problem we solve for you, our customers? What do we feel are the pain points we can help you resolve? We see two very, very big trends in the world today, which you are experiencing. On one side, we see the power of cloud emerging, with more features, more innovation, more capabilities coming to market every day. But with those new features and innovations, there is also an exponential growth in cloud complexity, and that complexity is becoming increasingly difficult to navigate for developers and operators alike.
And at the same time, we see the pace of change continuing to accelerate, both in the economy and in technology. So when you put these two things together, on one hand you have more and more complexity; on the other hand, faster and faster change. This makes for a very, very daunting task for enterprises, developers, and operators to actually keep up and move with speed. And this is exactly the central problem we want to solve for you: we want to empower you to move with speed in the middle of rising complexity and change, and to do it successfully and with confidence. So with that in mind, we are announcing this week at Launchpad a big new concept to take the company forward, and to take you with us, to create value for you. We call it Your Cloud Everywhere, and it empowers you to ship code faster. Docker Enterprise Container Cloud is a linchpin of Your Cloud Everywhere. It's a radical new container platform which gives you, our customers, a consistent experience on public clouds and private clouds alike; which enables you to ship code faster on any infrastructure, anywhere, with a cohesive cloud fabric that meets your security standards; which offers a choice of private and public clouds; and which offers a simple, extremely easy and powerful experience for developers. All of this is underpinned by Kubernetes as the foundational technology we're betting on going forward to help you achieve your goals. At the same time, Lens fits very, very well into the Your Cloud Everywhere concept, and it's a second very strong linchpin to take us forward, because it creates the developer experience: it supports developers directly on their desktop, enabling them to manage Kubernetes workloads and to test, develop, and run Kubernetes applications on any infrastructure, anywhere. So Docker Enterprise Container Cloud and Lens complement each other perfectly.
So I'm very, very excited to share this with you today and to open the conference for you. And with this, I want to turn it over to my colleague Adam Parco, who runs product development at Mirantis, to share a lot more detail about Docker Enterprise Container Cloud: why we're excited about it, why we feel it is a radical step forward for you, and why we feel it can add so much value to your developers and operators who want to embrace the latest Kubernetes and container technology on any platform, anywhere. I look forward to connecting with you during the conference, and I wish you all the best. Bye-bye. >>Thanks, Adrian. My name is Adam Parco, and I am vice president of engineering and product development at Mirantis. I'm extremely excited to be here today and to present to you Docker Enterprise Container Cloud. Docker Enterprise Container Cloud is a major leap forward: it turbocharges our platform. It is your cloud, everywhere. It has been completely designed and built around helping you to ship code faster. The world is moving incredibly quickly; we have seen unpredictable and rapid changes. It is the goal of Docker Enterprise Container Cloud to help navigate this insanity by focusing on speed and efficiency. To do this requires three major pillars: choice, simplicity, and security. The less time between a line of code being written and that line of code running in production, the better. When you decrease that cycle time, developers are more productive, efficient, and happy; the code is higher quality and contains fewer defects; and when bugs are found, they are fixed quicker and more easily. In turn, your customers get more value sooner and more often. Increasing speed and improving developer efficiency is paramount. To do this, you need to be able to cycle through coding, running, testing, releasing, and monitoring, all without friction. We enable that by offering containers as a service through a consistent, cloud-like experience.
Developers can log into Docker Enterprise Container Cloud and, through self-service, create a cluster. No IT tickets, no infrastructure-specific expertise required. Need a place to run a workload? Simply create one; nothing is quicker than that. The clusters are presented consistently no matter where they're created. Integrate your pipelines and start deploying secure images everywhere, instantly. You can't have cloud speed if you get bogged down in managing, so we offer fully automated lifecycle management. Let's jump into the details of how we achieve cloud speed. The first is cloud choice. Developers, operators, admins, users: they all want, in fact mandate, choice. Choice is extremely important to efficiency, speed, and, ultimately, the value created. You have cloud choice throughout the full stack. Choice allows developers and operators to use the tooling and services they are most familiar and efficient with, or simply to integrate with any existing tools and services already in use, and move on. Docker Enterprise Container Cloud isn't constrictive; it's open and flexible. The next important choice we offer is in orchestration. We hear time and time again from our customers that they love Swarm, that it's simply enough for the majority of their applications, and that it just works; they have the skills and knowledge to use it effectively, and they don't need to be, or find, Kubernetes experts to get immediate value. So we will absolutely continue to offer this choice in orchestration, and our existing customers can rest assured their workloads will continue to run great, as always. On the other hand, we can't ignore the popularity, the growth, the enthusiasm, and the community ecosystem that has exploded around Kubernetes, so we will also be including a fully conforming, tested, and certified Kubernetes. Going down the stack, you can't have choice or speed without your choice of operating system. This ties back to developer efficiency.
We want developers to be able to leverage their operating system of choice. We're initially supporting full-stack lifecycle management for Ubuntu, with other operating systems, like Red Hat, to follow shortly. Lastly, all the way down at the bottom of the stack, is your choice of infrastructure. Choice in infrastructure is in our DNA; we have always promoted no lock-in and the flexibility to run where needed. Initially we're supporting OpenStack, AWS, and full lifecycle management of bare metal, and we also have a roadmap for VMware and other public cloud providers. We know there's no single solution for the unique and complex requirements our customers have. This is why we're doubling down on being the most open platform: we want you to truly make this your cloud. If done wrong, all this choice at speed could become extremely complex. This is where cloud simplification comes in. We offer a simple and consistent as-a-service cloud experience, from installation to day-two ops. Clusters are created using a single pane of glass no matter where they live, giving a simple and consistent interface. Clusters can be created on bare metal, in private data centers, and, of course, on public cloud. Applications will always have specific operating requirements, for example data protection, security, cost efficiency, edge, or leveraging specific services on public infrastructure. Being able to create a cluster on the infrastructure that makes the most sense, while maintaining a consistent experience, is incredibly powerful for developers and operators: it helps developers move quickly by letting them use the infrastructure and services of their choice, and operators by putting the available compute to its most efficient use. Now that users are self-creating clusters, we need centralized management to support this increase in scale. Docker Enterprise Container Cloud provides a single pane of glass for observability and management of all your clusters.
We have day-two ops covered, to keep things simple and you moving fast. From this single pane of glass, you can manage the full stack lifecycle of your clusters from the infra up, including Docker Enterprise, as well as the fully automated deployment and management of all components deployed through it. What I'm most excited about is Docker Enterprise Container Cloud as a service. What do I mean by as a service? Docker Enterprise Container Cloud is fully self-managed and continuously delivered. It is always up to date, always security patched, always available, with new features and capabilities pushed often and directly to you: a truly as-a-service experience, anywhere you want it run. Security is of utmost importance to Mirantis and our customers. Security can't be an afterthought, and it can't be added later. With Docker Enterprise Container Cloud, we're maintaining our leadership in security. We're doing this by leveraging the proven security in Docker Enterprise. Docker Enterprise has the best and the most complete security certifications and compliance, such as STIG, OSCAL and FIPS 140-2. These security certifications allow us to run in the world's most secure locations. We are proud and honored to have some of the most security-conscious customers in the world, from industries like insurance, finance and health care, as well as public, federal and government agencies. With Docker Enterprise Container Cloud, we put security as our top concern, but importantly, we do it with speed. You can't move fast with security in the way, so to solve this we've added what we're calling invisible security: security enabled by default and configured for you as part of the platform. Docker Enterprise Container Cloud is multi-tenant with granular RBAC throughout, in conjunction with Docker Enterprise, Docker Trusted Registry and Docker Content Trust. 
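The content-trust piece of that pipeline can be made concrete. As a rough sketch (the registry and image name below are made up; what is real is the standard Docker CLI behavior where setting `DOCKER_CONTENT_TRUST=1` makes `docker pull` refuse images whose tags lack valid signatures):

```python
import os

def trusted_pull_command(image):
    """Build the command and environment for a signature-enforcing pull.

    With DOCKER_CONTENT_TRUST=1 in the environment, the Docker CLI will
    only pull images carrying valid content-trust signatures; unsigned
    images fail the pull instead of silently entering the cluster.
    """
    env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
    cmd = ["docker", "pull", image]
    return cmd, env

# A caller would then run it, e.g.:
#   cmd, env = trusted_pull_command("registry.example.com/team/app:1.4")
#   subprocess.run(cmd, env=env, check=True)
cmd, env = trusted_pull_command("registry.example.com/team/app:1.4")
print(cmd, env["DOCKER_CONTENT_TRUST"])
```

Gating every pull this way is one simple building block of the end-to-end supply chain described in the keynote; the registry-side signing and RBAC layers sit on top of it.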
We have a complete end-to-end secured software supply chain: only run the images that have gone through the appropriate channels and that you have authorized to run, on the most secure container engine in the industry. Lastly, I want to quickly touch on scale. Today, cluster sprawl is a very real thing. There are test clusters, staging clusters and, of course, production clusters. There are also different availability zones, different business units and so on. There are clusters everywhere. These clusters are also running all over the place. We have customers running Docker Enterprise on premise; they're embracing public cloud, and not just one cloud; they might also have some bare metal. So cloud sprawl is also a very real thing. All these clusters on all these clouds are a maintenance and observability nightmare. This is a huge friction point to scaling. Docker Enterprise Container Cloud solves these issues and lets you scale quicker and more easily. A little recap of what's new. We've added multi-cluster management: deploy and attach all your clusters wherever they are. Multi-cloud, including public, private and bare metal: deploy your clusters to any infra. Self-service cluster creation: no more IT tickets to get resources. Incredible speed. Automated full stack lifecycle management, including Docker Enterprise Container Cloud itself, as a service, from the infra up. Centralized observability, with a single pane of glass for your clusters, their health, and your apps. And most importantly, to our existing Docker Enterprise customers: you can, of course, add your existing DE clusters to Docker Enterprise Container Cloud and start leveraging the many benefits it offers immediately. So that's it. Thank you so much for attending today's keynote. This was very much just a high-level introduction to our exciting release. There is so much more to learn about and try out. I hope you are as excited as I am to get started today with Docker Enterprise Container Cloud. 
Please attend the tutorial tracks. Up next is Miska, with the world's most popular Kubernetes IDE, Lens. Thanks again, and I hope you enjoy the rest of our conference.

Published Date : Sep 15 2020

Bill Pearson, Intel | CUBE Conversation, August 2020


 

>> Narrator: From theCUBE studios in Palo Alto in Boston, connecting with our leaders all around the world. This is theCUBE conversation. >> Welcome back everybody. Jeff Frick here with theCUBE, we are in our Palo Alto studios today. We're still getting through COVID; thankfully media was a necessary industry, so we've been able to come in and keep a small COVID crew, but we can still reach out to the community, and through the magic of the internet and cameras on laptops, we can reach out and touch base with our friends. So we're excited to have somebody who's talking about and working on kind of the next big edge, the next big cutting thing going on in technology. And that's the internet of things. You've heard about it, the industrial Internet of Things, there's a lot of different words for it. But the foundation of it is this company, it's Intel. We're happy to have joining us Bill Pearson. He is the Vice President of Internet of Things, often said IoT, for Intel. Bill, great to see you. >> Same, Jeff. Nice to be here. >> Yeah, absolutely. So I was just teasing, getting ready for this interview, doing a little homework, and I saw you talking about Internet of Things in a 2015 interview, actually referencing a 2014 interview. So you've been at this for a while. So before we jump into where we are today, I wonder if you can share, you know, kind of a little bit of a perspective of what's happened over the last five or six years. >> I mean, I think data has really grown at a tremendous pace, which has changed the perception of what IoT is going to do for us. And the other thing that's been really interesting is the rise of AI. And of course we need it to be able to make sense of all that data. So, you know, one thing that's different is today we're really focused on how do we take that data that is being produced at this rapid rate and really make sense of it so that people can get better business outcomes from that. >> Right, right. 
But the thing that's so interesting on the things part of the Internet of Things, even though people are things too, is that the scale and the pace of data that's coming off kind of machine-generated activity versus people-generated is orders of magnitude higher in terms of the frequency, the variety, and all kind of your classic big data meme. So that's a very different challenge than, you know, kind of the growth of data that we had before and the types of data, 'cause it's really gone kind of exponential across every single vector. >> Absolutely. It has. I mean, we've seen estimates that data is going to increase by about five times as much as it is today, over just the next couple of years. So it's exponential, as you said. >> Right. The other thing that's happened is Cloud. And so, you know, kind of breaking the mold of the old model where all the compute was either in your minicomputer or data center or mainframe or on your laptop. Now, you know, with Cloud and instant connectivity, you know, it opens up a lot of different opportunities. So now we're coming to the edge and Internet of Things. So when you look at kind of edge and Internet of Things kind of now folding into this ecosystem, you know, what are some of the tremendous benefits that we can get by leveraging those things that we couldn't with kind of the old infrastructure and our old way of kind of gathering and storing and acting on data? >> Yeah. So one of the things we're doing today with the edge is really bringing the compute much closer to where all the data is being generated. So these sensors and devices are generating tons and tons of data, and for a variety of reasons, we can't send it somewhere else to get processed. You know, there may be latency requirements for that control loop that you're running in your factory, or there's bandwidth constraints that you have, or there's just security or privacy reasons to keep it onsite. 
And so you've got to process a lot of this data onsite, and some estimates say maybe half of the data is going to remain onsite here. And when you look at that, you know, that's where you need compute. And so the edge is all about taking compute, bringing it to where the data is, and then being able to use the intelligence, the AI and analytics, to make sense of that data and take actions in real time. >> Right, right. But it's a complicated situation, right? 'Cause depending on where that edge is, what the device is, does it have power? Does it not have power? Does it have good connectivity? Does it not have good connectivity? Does it even have the ability to run those types of algorithms, or does it have to send it to some interim step, even if it doesn't have, you know, kind of the ability to send it all the way back to the Cloud or all the way back to the data center for latency. So as you kind of slice and dice all these pieces of the chain, where do you see the great opportunity for Intel? Where's a good kind of sweet spot where you can start to bring in some compute horsepower, and you can start to bring in some algorithmic processing, and actually do things between just the itty-bitty sensor at the itty-bitty end of the chain versus the data center that's way, way upstream and far, far away. >> Yeah. Our business is really high performance compute, and it's this idea of taking all of these workloads and bringing them in to this high performance compute, to be able to run multiple software-defined workloads on single boxes, to be able to then process and analyze and store all that data that's being created at the edge, and do it in a high performance way. And whether that's a retail smart shelf, for example, where we can do realtime inventory on that shelf as things are coming and going, or whether it's a factory and somebody's doing, you know, real time defect detection of something moving across their textile line. 
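One concrete reason so much data stays onsite is that the edge node can act on the full stream locally and forward only the interesting part. A minimal sketch of that pattern (the field names and threshold are invented for illustration, not taken from any Intel product):

```python
def filter_for_upstream(readings, limit):
    """Process sensor readings locally; forward only out-of-range ones.

    The full stream is handled at the edge (low latency for the local
    control loop, no bandwidth cost); only anomalies travel upstream
    to the cloud or data center for deeper analysis.
    """
    forwarded = []
    for r in readings:
        if abs(r["value"]) > limit:   # local logic reacts to everything...
            forwarded.append(r)       # ...but only this subset leaves the site
    return forwarded

readings = [{"id": 1, "value": 0.2}, {"id": 2, "value": 9.7}, {"id": 3, "value": -0.1}]
print(filter_for_upstream(readings, limit=1.0))  # only reading 2 goes upstream
```

The same shape applies whether the local check is a simple threshold, as here, or an inference call on an accelerator.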
So all of that comes down to being able to have the compute horsepower to make sense of the data and do something with it. >> Right, right. So you wouldn't necessarily, like in your shelf example, that the compute might be done there at the local store or some aggregation point, beyond just that actual, you know, kind of sensor that's underneath that one box of Tide, if you will. >> Absolutely. Yeah, you could have that on-prem, a big box that does multiple shelves, for example. >> Okay, great. So there's a great example, and you guys have the software development kit, you have a lot of resources for developers, and one of the case studies that I just wanted to highlight before we jump into the dev side was, I think Audi was the customer. And it really illustrates a point that we talked about a lot in kind of the big data meme, which is, you know, people used to take action on a sample of data after the fact. And I think in this case here, we're talking about running 1,000 cars a day through this factory, they're doing so many welds, 5 million welds a day, and they would pull one at the end of the day, sample a couple welds, and did we have a good day or not? Versus what they're doing now with your technology is actually testing each and every weld as it's being welded, based on data that's coming off the welding machine, and they're inspecting every single weld. So I just love that; you've been at this for a long time. When you talk to customers about what is possible from a business point of view, when you go from after the fact with a sample of data to in real time with all the data, how that completely changes your view and ability to react to your business. >> Yeah. I mean, it makes people be able to make better decisions in real time. 
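The Audi shift Jeff describes, from sampling a couple of welds at end of day to checking every weld as it happens, is essentially a streaming check. A toy sketch of the idea (the nominal value, tolerance and field layout are invented for illustration; the real system runs far richer models on the welding-machine data):

```python
def inspect_welds(weld_stream, nominal, tolerance):
    """Check every weld in real time instead of sampling a few at day's end."""
    defects = []
    for weld_id, measured in weld_stream:
        if abs(measured - nominal) > tolerance:
            # Flag immediately, while the car is still on the line,
            # rather than discovering a bad day after the fact.
            defects.append(weld_id)
    return defects

# Five welds from the line; nominal reading 1.0, tolerance 0.1
stream = [(101, 1.02), (102, 0.85), (103, 1.01), (104, 1.00), (105, 1.19)]
print(inspect_welds(stream, nominal=1.0, tolerance=0.1))  # [102, 105]
```

The business change comes entirely from *when* this check runs: per weld as it is made, not on a sample afterwards.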
You know, as you've got cameras on things like textile manufacturers or footwear manufacturers, or even these realtime inventory examples you mentioned, people are going to be able to make, and can make, decisions in real time about how to stock that shelf, what to order, what to pull off the line, am I getting a good product or not? And this has really changed. As you said, we don't have to go back and sample anymore. You can tell right now, as that part is passing through your manufacturing line, or as that item is sitting on your shelf, what's happening to it. It's really incredible. >> So let's talk about developers. So you've got a lot of resources available for developers, and everyone knows Intel, obviously, historically in PCs and data centers. And you would do what they call design wins back when I was there, many moons ago, right? You'd try to get a design win, and then, you know, they're going to put your microprocessors and a bunch of other components in a device. When you're trying to work with kind of cutting-edge developers in kind of new fields and new areas, this feels like a much more direct touch to the actual people building the applications than the people that are really just designing the systems of which Intel becomes a core part. I wonder if you could talk about, you know, the role of developers, and really Intel's outreach to developers, and how you're trying to help them, you know, kind of move forward in this new crazy world. >> Yeah, developers are essential to our business. They're essential to IoT. Developers, as you said, create the applications that are going to really make the business possible. And so we know the value of developers and want to make sure that they have the tools and resources that they need to use our products most effectively. 
We've done some things around the OpenVINO toolkit as an example, to really try and simplify and democratize AI applications so that more developers can take advantage of this and, you know, take the ambitions that they have to do something really interesting for their business, and then go put it into action. And the whole, you know, our whole purpose is making sure we can actually accomplish that. >> Right. So let's talk about OpenVINO. It's an interesting topic. So I actually found out what OpenVINO means: Open Visual Inference and Neural network Optimization toolkit. So it's a lot about computer vision. So I will, you know, and computer vision is an interesting early AI application that I think a lot of people are familiar with through Google Photos or other things, where, you know, suddenly they're putting together little highlight movies for you, or they're pulling together all the photos of a particular person or a particular place. So the computer vision is pretty interesting. Inference is a special subset of AI. So I wonder, you know, you guys are way behind OpenVINO. Where do you see the opportunities in visualization? What are some of the instances that you're seeing with the developers out there doing innovative things around computer vision? >> Yeah, there's a whole variety of use cases with computer vision. You know, one that we talked about earlier here was looking at defect detection. There's a company that we work with that has a 360 degree view; they use cameras all around their manufacturing line. And from there, they know what a good part looks like, and using inference and OpenVINO, they can tell when a bad part goes through or there's a defect in their line, and they can go and pull that and make corrections as needed. We've also seen, you know, use cases like smart shopping, where there's point-of-sale fraud detection. We call it, you know, is the item being scanned the same as the item that is actually going through the line. 
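The point-of-sale fraud check reduces to comparing what the barcode says with what a camera model believes it saw. A hedged sketch of that decision rule (the SKU names, the confidence threshold, and the assumption that a classifier, e.g. one deployed through OpenVINO, supplies the prediction are all illustrative, not a real product's logic):

```python
def scan_mismatch(scanned_sku, predicted_sku, confidence, min_confidence=0.8):
    """Flag a checkout item when vision confidently disagrees with the barcode.

    `predicted_sku` and `confidence` would come from an image classifier
    watching the lane; below the minimum confidence we trust the barcode
    and stay silent to avoid false alarms.
    """
    return confidence >= min_confidence and scanned_sku != predicted_sku

print(scan_mismatch("potatoes-2lb", "vodka-750ml", 0.95))  # True: likely fraud
print(scan_mismatch("potatoes-2lb", "vodka-750ml", 0.40))  # False: model unsure
print(scan_mismatch("vodka-750ml", "vodka-750ml", 0.99))   # False: labels agree
```

The interesting engineering lives in the classifier; the business rule on top of it can stay this small.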
And so we can be much smarter about understanding retail. One example that I saw was a customer who was trying to detect if it was vodka or potatoes that was being scanned in an automated checkout system. And again, using cameras and OpenVINO, they can tell the difference. >> We haven't talked about computer testing yet; we're still sticking with computer vision and the natural language processing. I know one of the areas you're interested in, and it's going to only increase in importance, is education. Especially with what's going on, I keep waiting for someone to start rolling out some national, you know, best practice education courses for kindergartners and third graders and sixth graders. And you know, all these poor teachers that are learning to teach on the fly from home. You guys are doing a lot of work in education. I wonder if you can share, I think you're doing some work with Udacity. What are you doing? Where do you see the opportunity to apply some of this AI and IoT in education? >> Yeah, we launched the Nanodegree with Udacity, and it's all about OpenVINO and Edge AI, and the idea is, again, get more developers educated on this technology. Take a leader like Udacity, partner with them to make the coursework available, and get more developers understanding, using and building things using Edge AI. And so we partnered with them as part of their million developer goal. We're trying to get as many developers as possible through that. >> Okay. And I would be remiss if we talked about IoT and I didn't throw 5G into the conversation. So 5G is a really big deal. I know Intel has put a ton of resources behind it and has been talking about it for a long, long time. You know, I think the huge value in 5G is a lot around IoT, as opposed to my handset going faster, which is funny, given that they're actually releasing 5G handsets out there. 
But when you look at 5G combined with the other capabilities in IoT, again, how do you see 5G being this kind of step function in the ability to do real time analysis and make real time business decisions? >> Well, I think it brings more connectivity, certainly, and bandwidth, and reduces latency. But the cool thing about it is when you look at the applications of it, you know, we talked about factories. A lot of those factories may want to have private 5G networks that are running inside that factory, running all the machines or robots or things in there. And so, you know, it brings capabilities that actually make a difference in the world of IoT and the things that developers are trying to build. >> That's great. So before I let you go, you've been at this for a while. You've been at Intel for a while. You've seen a lot of big sweeping changes kind of come through the industry, you know. As you sit back with a little bit of perspective, and it's funny, even IoT, like you said, you've been talking about it for five years, and 5G, we've been waiting for it, but the waves keep coming, right? That's kind of the fun of being in this business. As you sit there where you are today, you know, kind of looking forward the next couple of years, couple of four or five years, you know, what has just surprised you beyond compare, and what are you still kind of surprised that it's still a little bit lagging, that you would have expected to see a little bit more progress at this point. >> You know, to me the incredible thing about the computing industry is just the insatiable demand that the world has for compute. It seems like we always come up with, our customers always come up with, more and more uses for this compute power. You know, as we've talked about data and the exponential growth of data, and now we need to process and analyze and store that data. 
It's impressive to see developers just constantly thinking about new ways to apply their craft and, you know, new ways to use all that available computing power. And, you know, I'm delighted, 'cause I've been at this for a while, as you said, and I just see this continuing to go as far as the eye can see. >> Yeah, yeah. I think you're right. There's no shortage of opportunity. I mean, the data explosion is kind of funny. The data has always been there, we just weren't keeping track of it before. And the other thing, as I look at your Internet of Things kind of toolkit, you guys have such a broad portfolio now, where a lot of times people think of Intel pretty much as a CPU company, but as you mentioned, you've got FPGAs and VPUs and vision solutions to stretch applications. Intel has really done a good job in terms of broadening the portfolio to go after, you know, kind of this disparate, or kind of sharding, if you will, of all these different types of computer applications that have very different demands in terms of power and bandwidth and crunching utilization to technical (indistinct). >> Yeah. Absolutely. The various computer architectures are really just to help our customers with their needs, whether it's high performance or low power, or a mixture of both. Being able to use all of those heterogeneous architectures with a tool like OpenVINO, so you can program once, write once, and then run your application across any of those architectures, helps simplify the life of our developers, but also gives them the compute performance the way that they need it. >> Alright Bill, well, keep at it. Thank you for all your hard work. And hopefully it won't be five years before we're checking in to see how far this IoT thing has gone. >> Hopefully not, thanks Jeff. >> Alright Bill. Thanks a lot. He's Bill, I'm Jeff. You're watching theCUBE. Thanks for watching, we'll see you next time. (upbeat music)

Published Date : Sep 1 2020

Raj Verma, MemSQL | CUBEConversation, July 2020


 

>> Narrator: From theCUBE's studios in Palo Alto, in Boston, connecting with thought leaders all around the world, this is theCUBE conversation. >> Welcome to this CUBE conversation. I'm Lisa Martin. And today joining me is CUBE alumni, the co-CEO of MemSQL, Raj Verma. Raj, welcome back to theCUBE. >> Thank you Lisa. It's great to be back, and it's so good to see you. >> Likewise. So since we last saw each other, a lot of changes going on everywhere. You're now the co-CEO of MemSQL. The CEO's role is changing dramatically this year, and in the last few months. Talk to us about some of those changes. >> Yeah. Where do I even start? I was just listening to something or watching something, and it said, in leadership one thing that they never tell you is, you don't find the event, the event finds you. And you know, it was four and a half, almost five months ago, we were at RSA, and if someone had said to me then that we'd be quarantined for five months following that, and most likely seven months, I probably wouldn't have believed them. And if I did, I would have gone and started crying. It's been sort of a lot of change for us. One thing that's for sure, as the CEO, is I probably made more compelling decisions in the last four months than I probably made in the year prior to that. So there are a lot of decisions, important decisions, that are being made now. I think the thing that's impressed me the most about just the human race per se in the last four and a half, five months is the resilience, the adaptability, of just the community and the race at large. There is a lot of goodness that we've seen happen. I think there is a greater appreciation for the life that we sort of had. And I think when everything does one day come back to normal, we will be a lot more appreciative and nicer, just as individuals. Now as CEO, I think the first order of duty for me was to embrace our employees and my colleagues. 
It's a trying set of circumstances for them: worrying about their health, the health of their aged parents, their families' well-being, whether they have a job or not, and how the economic environment would pan out. So I think it was just... My number one priority at the start, and it continues to be till today, were our colleagues and the employees of MemSQL. And the first few decisions that we made were 100% employee centric. One of the big ones was taking the pledge of no retrenchments or workforce reductions for 90 days to begin with. And we've continued that; we haven't really reduced any employee headcount at all. The second was to in turn go embrace our customers and deliver to the promise that we had in normal times, and help them get back to as much of a normalcy as they could. And the third was to do whatever we could to use our technology, our efforts, our resources, to help society at large. Whether it was the track-and-trace projects that we did for a large telco, a telco in the Middle East, a telco in Asia, we've put our resources there. Or just using our platform to heighten public awareness around Juneteenth and other sort of social issues. Because I think in times of almost societal isolation, using your platform and being a voice for what you stand for is more important than ever before. And those were really the three things that stand out, apart from just normal decisions, normal decisions that you make to make sure that you are well-capitalized, that you have enough cash to run your business, that all the fundamentals of the business are sound. So yeah. >> So lots of decisions on a massively accelerated scale, more than in the last 10 years, but big strategic decisions made in a quick time period, for employees, for customers, for how do we use our platform. What is the key that you need in order to make those decisions as strategically as you can like that? 
>> Yeah, you've got to lens it through what is the why of your organization. Our why is very simple: we want to be the platform of decision making, or what we call the platform of now, where we can marry historical information with the real time operational data being streamed into your organization, and be able to deliver up the reports and insights that you need for quick decision making in your organization. So, delivering up the now. Internally, when presented with options to make decisions, the lens that I've used is: what's in the best interest of our employees, what's in the best interest of our customers, and what is in the best interest of our investors and stakeholders. And if you apply that lens, the decisions aren't actually that difficult. You will never have 100% of the data that you need to make a decision, so lensing it through your priorities becomes extremely, extremely important. Having said that, the other aspect is that having data now to make decisions is more important than ever before, because you do not have the sort of physical cues to depend on, or clues to depend on. I'm still finding it hard to read the digital clues on Zoom or Google Hangouts or Teams, or what have you. So you just have to have decision making that's very steeped in data, marrying it with what is it that you stand for as an organization. And the third vector that we've put to this is very simple. We as an organization stand for authenticity. We like to simplify rather than complicate. And we need to demonstrate courage over comfort. And those are the other vectors that we use to make the majority of our strategic decisions. >> So if data... For years you've heard this all over the tech circuit, Raj: data's the new gold, data's the new oil. Now you're saying it's even more important than ever in this unprecedented time. How does MemSQL help customers get access to as much data as they can to make really fast strategic decisions? 
To not just survive in this mode, but thrive? >> Yeah, I think two questions there: what is the data, and what is the value of data? And you're absolutely right, the value of data now is more than ever before. And also the amount of data that is now being produced is more than ever before. So it's actually a pretty, pretty nontrivial issue to solve. And I think the first thought is that you can't solve the problems of tomorrow with the technology of yesterday. You cannot solve the problems of tomorrow using a technology that was built for a different era, a technology built 45 years ago or 25 years ago. And you know, some of the tenets of the technology are still steeped in, let's just call it, heritage. So first and foremost, the realization that the problems of tomorrow need the tools of now, and the talent of now, and the management of now, and the leadership of now to solve them, is paramount. What we do as a technology company, and a lot of companies in our genre called hard tech, is exactly that. It's hard tech. It takes a lot of talent, it takes a lot of time, resources, money, clarity of thought to build something which will solve the problems of today and of tomorrow. And today the challenge we actually have is that the real time nature of decision making, of interactions, of experience, of security, of compliance, is more exaggerated than ever before. And how you marry real time information with historical information in the cheapest, easiest to deploy, flexible architecture is of paramount importance. And that's exactly what we do, Lisa. We give you a database that is arguably the fastest in the world from a query speed standpoint, that scales more than any other database in our genre, that has data governance by virtue of us being SQL, that's hybrid multi-cloud so it doesn't lock you in, and that's among the easiest to use. 
So, I don't know what the future would bring, Lisa, but one thing I can assure you is, there are five things which wouldn't change, which is developers would prefer faster over slow, cheaper over expensive, flexible over rigid, ease of use over complexity of use, and a secure, safe platform versus the alternative. And if you have those five tenets, I think you'd be pretty well-versed with solving the problems of today and tomorrow. >> You mentioned real time a minute ago, and that's, I think right now during the COVID-19 crisis, there's nothing that highlights the urgency with which we need information real time. It's not going to help us if it's 24 or 48 hours old. How does MemSQL deliver real time insights to customers, whether it's a telecommunications company looking to do contact tracing or a bank? >> Yeah. So let me start with a couple of examples, a very large telephone provider, telecommunication provider in the States, uses us for metric telemetry. So how many calls did Lisa make, how many texts did she send, what time? With, of course, the privacy attached to it. When did she experience a call drop, what's the coverage at her home, is the sort of a mobile tower close to her place going to go down, and what would be the inconvenience? All of that. So copious amounts of data required to really deliver a customer experience. And it's a hard enough problem because the amount of data as you can imagine is extremely, extremely, extremely large. But when COVID-19 struck, the data became that much more important, because now it was a tool that you could use as a company to be able to describe or follow cohorts of subscribers in hotbeds like New York at that time. And see which states they were actually, let's call it "fleeing" to or moving to. And to be able to do that in near real time was not good enough, because you had to actually do it in real time. To be able to track where the PPEs were in near real time was not good enough, it had to be real time.
And to track where the ventilators were in near real time wasn't good enough. You just needed to do that in real time. And I think that probably is one of the biggest examples of real time that we have in the recent past, and something that we are most proud of. How did we do that? We built this hard tech based on first principles. We didn't try and put lipstick on a pig, we didn't try and re-architect a 45-year-old technology or a 15-year-old technology. We just said that if we actually had a plain sheet of paper, what would we do? And we said, the need of the future is going to be fast over slow, as I said, you know, cheap over expensive, flexible over rigid, safe over the other alternatives, and ease of use. And that's what we've built. And the world will see the amount of difference that we make to organizations and more importantly to society, which is very near and dear to my heart. And yeah, that's what I'm extremely proud of, and optimistic about. >> Talk to me about some of the customer conversations that you're having now. I've known you for many years. You're a very charismatic speaker. As you were saying a few minutes ago, it's hard to read body language on Zoom and video conferences. How are those customer conversations going, and how have they changed? >> A lot has changed. I think there are a couple of aspects that you touch upon. One is just getting used to your digital work day. Initially we thought it was two weeks and it's great. You don't have to commute and all the rest of it. And then you started to realize, and the other thing was, everyone was available. There's no one who was traveling. There were no birthday parties. There's no picking up a kid from baseball or school or swimming or whatever. So everyone was available. And we were like, "Wow, this is great, no commute, everyone's available. Let's start meeting and interacting." And then you realize after a while that this digital workday is extremely, extremely exhausting.
And if you weren't deliberate about it, it can fill your entire day, and you don't get much done. So one of the things that I've started to do is, I don't get on a digital call, unless of course it's a customer or something extremely, extremely important, till 11:00 AM. That's my thinking time, it's just, you know, eight to 11 is untouched, and I call the people I want to call rather than my calendar describing what my priorities should be. And it's the same thing for our customers as well, in a slightly different way. They are trying to decide and come to terms with not only what today means to them, but what the realities of today mean for tomorrow. I'll give you an example of a very, very large bank in the United States, a rich consumer bank, which essentially believed that customer relationships mattered the most and customer relationship managers were the most important role for them. They are thinking about moving to bots. So the fact that you would be interacting with bots when you reach your bank is going to be a reality. There are no ifs and buts about it. A very, very large company providing financial services, is now trying to see, how do you make the digital platforms more responsive? How do you make analytics faster, more responsive and collaborative? Those are really the focus of C-suite attention, rather than which building do we name after our company and add towers to it. Or what coffee machine should we buy for the organization, or should we have a whiskey bar or a wine bar in our office? Just the mundaneness of those decisions is going away. And now the focus is how do we not only survive, to your point, Lisa, but thrive in the digital collaboration economy. And it's going to be about responsiveness. It's going to be about speed. And it's going to be about security and compliance.
>> At the end of the day, kind of wrapping things up here, COVID or not, the customer experience is critical, right, it's the lifeblood of what your organization delivers. The success of your customers, and their ability to make major business impact, is what speaks to MemSQL's capabilities. A customer experience I know is always near and dear to your heart. And it sounds like that's something that you have modified for the situation, that MemSQL really focused on, not just the customer experience, but your employee experience as well. >> That's exactly it. And I think if you do right by the employees, they'll do right by the customers. And I would any day, any day put the employee-first lens to any decision that we make. And that's paid off for us in spades. We've got a family environment, I genuinely, genuinely care about every single employee of MemSQL and their families. And we've communicated that often, we have listened, we have learned, these are unprecedented times. There isn't a manual to go through the COVID-19 work environment. And I think the realization that we just don't know what tomorrow would bring, it's actually very liberating, because it just frees you from rinsing and repeating, and further feeding your prejudices and biases, to getting up every day and saying, "Let me learn as much as I can about the current environment, current realities, lens it through our priorities, and make the best decision that we can." And if you're wrong, accept it and correct it. Nothing too intellectual, but it's in the simplicity that sometimes you find a lot of solace. >> Yeah. Simplicity in these times would be great. I think you're ... I like how you talked about the opportunities. There's a lot of positive COVID catalysts that are coming from this. And we want to thank you for sharing some time with us today, talking about the changing role of the C-suite, and the opportunities that it brings. Raj, it's been great to have you on theCUBE. >> As always, Lisa.
It's a pleasure. Thank you. >> For the co-CEO of MemSQL, Raj Verma, I'm Lisa Martin. You're watching theCUBE conversation.

Published Date : Jul 31 2020



Priyanka Sharma, CNCF | CUBE Conversation, June 2020


 

>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, I'm Stu Miniman, and welcome to this CUBE Conversation. I'm coming to you from our Boston area studio. I'm happy to welcome to the program someone we've known for many years, but a first time on the program. Priyanka Sharma, thank you so much for joining us. >> Hi, Stu. Thank you so much for having me. >> All right, and Priyanka, let's not bury the lead or anything. The reason we're talking to you is the news. You've got a new job, but in an area that you know really well. So we've known you through the cloud native communities for a number of years. We see you at the shows. We see you online. So happy to share with our community you are now the general manager of the CNCF, so congratulations so much on the job. >> Thank you so much. I am so honored to have this opportunity, and I can't wait to work even more closely with the cloud native community than I have already. I mean, as you said, I've been involved for a long time. I actually just saw on my LinkedIn today that 2016 was when my conversation within the CNCF started. I was then working on the OpenTracing Project, which was the third project to join the foundation, and CNCF had started in 2015, so it was all very new. We were in conversations, and it was just such an exciting time, and that just kept getting bigger and bigger, and then with GitLab I served, I actually still serve, until the 31st, on the board. And now this, so I'm very, very excited. >> Yeah, well right. So you're a board member of the CNCF, but Priyanka, if you go back even further, we look at how did CNCF start. It was all around Kubernetes. Where did Kubernetes come from? It came from Google, and when I dug back far enough into your CV I found Google on there, too. 
So maybe just give us a little bit of your career arc, and what you're involved with, for people that don't know you from all these communities and events. >> Sure, absolutely. So my career started at Google in Mountain View, and I was on the business side of things. I worked with AdSense products, and around that same time I had a bit of the entrepreneurial bite, so the bug bit me, and I first joined a startup that was acquired by GoDaddy later on, and then I went off on my own. That was a very interesting time for me, because that was when I truly learned about the power of opensource. One of the products that me and my co-founder were building was an opensource time tracker, and I just saw the momentum on these communities, and that's when the dev tools love started. And then I got involved with Heavybit Industries, which is an accelerator for dev tools. There I met so many companies that were either in the cloud space, or just general other kinds of dev tools, advised a few, ended up joining LightStep, where the founders, them and a few community members, were the creators of the OpenTracing standard. Got heavily, heavily involved in that project, jumped into cloud native with that, was a project contributor, organizer, educator, documentarian, all kinds of things, right, for two-plus years, and then GitLab with the board membership, and that's how I saw, actually, the governance side. Until then it had all been the community, the education, that aspect, and then I understood how Chris and Dan had built this amazing foundation that's done so much from the governance perspective. So it's been a long journey, and it all feels like it's been building towards this awesome new direction.
Congratulations to you, and right, CNCF, in their press release I see Dan talked about you've been a speaker, you've been a governing board member, you participate in this, and you're going to help with that next phase, and you teased out a little bit, there's a lot of constituencies in the CNCF. There's a large user participation. We always love talking at KubeCon about the people not only just using the technology, but contributing back, the role of opensource, the large vendor ecosystem, a lot there. So give us your thought as to kind of where the CNCF is today, and where it needs to continue and go in the future. >> Absolutely. So in my opinion the CNCF is a breakout organization. I mean, we're approaching 600 members, of which 142 are end users. So with that number the CNCF is actually the largest, has the largest end user community of all opensource foundations. So tremendous progress has been made, especially from those days back in 2016 when we were the third project being considered. So leaps and bounds, so impressive. And I think... If you think about what's the end user storyline right now, so the CNCF did a survey last year, and so 84% of the people surveyed were using containers in production, and 78% were using Kubernetes in production. Amazing numbers, especially since both are up by about 15, 20% year over year. So this move towards devops, towards cloud native, towards Kubernetes is happening and happening really strong. The project has truly established itself. Kubernetes has won, in my opinion, and that's really good. I think now when it comes to the second wave, it is my perspective that the end user communities and the... Just the momentum that we have right now, we need to build and grow it. We need deeper developer engagement, because if you think about it, there's not just one graduated project in CNCF. There are 10. So Kubernetes being one of them, but there's Prometheus, there's Envoy, Jaeger, et cetera, et cetera. 
So we have amazing technologies that are all gaining adoption. Being graduated means that they have passed security audits, they have diverse contributors, they have safe, good governance, so as an end user you can feel very secure adopting them, and so we have so much to do to expand on the knowledge of those projects. We have so much to make software just better every day, so that's my one vector in my opinion. The second vector, I would say it has been more opportunistic. As you know, we are all living in a very unprecedented time with a global pandemic. Many of us are sheltering in place. Many are... Generally, life is changed. You are in media. You know this much better than me, I'm sure, that the number of, the amount of digital consumption has just skyrocketed. People are reading that many more articles. I'm watching that many more memes and jokes online, right? And what that means is that more and more companies are reaching that crazy web scale that started this whole cloud native and devops space in the first place, with Google and Netflix being D-to-C companies just building out what eventually became cloud native, SRE, that kind of stuff. So in general, online consumption's higher, so more and more companies need to be cloud native to support that kind of traffic. Secondly, even for folks that are not creating content, just a lot of the workflows have to move online. More people will do online banking. More people will do ecommerce. It's just that the shift is happening, and for that we, as the foundation, need to be ready to support the end users with education, enablement, certifications, training programs, just to get them across that chasm into a new, even more online-focused reality.
You actually had involvement in a virtual event, the Cloud Native Summit recently. For KubeCon-- >> Yes. >> The European show is announced virtual. We know that there's still some uncertainty when it comes to the North America show. Supposed to be in my backyard here in Boston, so we'd love for it to happen. If it happens-- >> Of course. >> If not, we'll be there virtually or not. Give us a little bit your experience with the Cloud Native Summit, and what's your thinking today? We understand, as you said, a lot of uncertainty as to what goes on. Absolutely, even when physical events come back in the future, we expect this hybrid model to be with us for a long time. >> I definitely hear that. Completely agree that everything is uncertain and things have changed very rapidly for our world, particularly when it comes to events. We're lucky at the CNCF to be working with the LF Events team, which is just best in class, and we are working very hard every day, them, doing a lot of the lion's share of the work of building the best experience we can for KubeCon, CloudNativeCon EU, which, as you said, went virtual. I'm really looking forward to it because what I learned from the Cloud Native Summit Online, which was the event you mentioned that I had hosted in April, is that people are hungry to just engage, to see each other, to communicate however they can in this current time. Today I don't think the technology's at a point where physical events can be overshadowed by virtual, so there's still something very special about seeing someone face-to-face, having a coffee, and having that banter, conversations. But at the same time there are some benefits to online. So as an example, with the Cloud Native Summit, really, it was just me and a few community folks who were sad we didn't get to go to Amsterdam, so we're like, "Let's just get together in a group, "have some fun, talk to some maintainers," that kind of thing. I expected a few hundred, max. 
Thousands of people showed up, and that was just mind blowing because I was like, "Wait, what?" (chuckling) But it was so awesome because not only were there a lot of people, there were people from just about every part of the globe. So normally you have US, Europe, that kind of focus, and there's the Asia-PAC events that cater to that, but here in that one event where, by the way, we were talking to each other in realtime, there were folks from Asia-PAC, there were folks from Americas, EU, also the African continent, so geo meant nothing anymore. And that was very awesome. People from these different parts of the world were talking, engaging, learning, all at the same time, and I think with over 20,000 people expected at KubeCon EU, with it being virtual, we'll see something similar, and I think that's a big opportunity for us going forward. >> Yeah, no, absolutely. There are some new opportunities, some new challenges. I think back to way back in January I got to attend the GitLab event, and you look at GitLab, a fully remote company, but talking about the benefits of still getting together and doing things online. You think of the developer communities, they're used to working remote and working across different timezones, but there is that need to be able to get together and collaborate, and so we've got some opportunities, we've got some challenges when remote, so I guess, yeah, Priyanka. Give me the final word, things you want to look forward to, things we should be expecting from you and the CNCF team going forward. I guess I'll mention for our audience, I guess, Dan Kohn staying part of Linux Foundation, doing some healthcare things, will still stay a little involved, and Chris Aniszczyk, who's the CTO, still the CTO. I just saw him. Did a great panel for DockerCon with Kelsey Hightower, Michelle Noorali, and Sean Connelly, and all people we know that-- >> Right. >> Often are speaking at KubeCon, too. So many of the faces staying the same. 
I'm not expecting a big change, but what should we expect going forward? >> That's absolutely correct, Stu. No big changes. My first big priority as I join is, I mean, as you know, coming with the community background, with all this work that we've put into education and learning from each other, my number one goal is going to be to listen and learn in a very diverse set of personas that are part of this whole community. I mean, there's the board, there is the technical oversight committee, there is the project maintainers, there's the contributors, there are the end users, potential developers who could be contributors. There's just so many different types of people all united in our interest and desire to learn more about cloud native. So my number one priority is going to listen and learn, and as I get more and more up to speed I'm very lucky that Chris Aniszczyk, who has built this with Dan, is staying on and is going to be advising me, guiding me, and working with me. Dan as well is actually going to be around to help advise me and also work on some key initiatives, in addition to his big, new thing with public health and the Linux Foundation. You never expect anything average with Dan, so it's going to be amazing. He's done so much for this foundation and brought it to this point, which in my mind, I mean, it's stupendous the amount of work that's happened. It's so cool. So I'm really looking forward to building on this amazing foundation created by Dan and Chris under Jim. I think that what they have done by not only providing a neutral IP zone where people can contribute and use projects safely, they've also created an ecosystem where there is events, there is educational activity, projects can get documentation support, VR support. It's a very holistic view, and that's something, in my opinion, new, at least in the way it's done. 
So I just want to build upon that, and I think the end user communities will keep growing, will keep educating, will keep working together, and this is a team effort that we are all in together. >> Well, Priyanka, congratulations again. We know your community background and strong community at the CNCF. Looking forward to seeing that both in the virtual events in the near term and back when we have physical events again in the future, so thanks so much for joining us. >> Thank you for having me. >> All right. Be sure to check out thecube.net. You'll see all the previous events we've done with the CNCF, as well as, as mentioned, we will be helping keep cloud native connected at KubeCon, CloudNativeCon Europe, the virtual event in August, as well as the North American event later in the year. I'm Stu Miniman, and thank you for watching theCUBE. (smooth music)

Published Date : Jun 1 2020



Jen Doyle, 1Strategy & Ricardo Madan, TEKsystems | AWS re:Invent 2019


 

>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back to Vegas. It's theCUBE, live from AWS re:Invent '19. Lisa Martin here with John Walls, and John, we've been hanging out with about 65,000 folks, or so, >> just our best friends. We talked about this just a little bit ago, but I'm really impressed again with this kind of continued energy and focus, and you know it's gonna go well beyond the show. But three days of back-to-back-to-back great presentations, great programming, obviously the show floor still jam packed, a really good show. Hats off, AWS. >> Absolutely right. The energy has not wavered one bit. And oftentimes, by day three, that's a challenge. There's so much excitement >> not out here, >> not in Vegas. John and I are pleased to welcome a couple of guests to theCUBE. To my left, we've got Jen Doyle, the VP of operations from 1Strategy, and Ricardo Madan, VP of technology products and services from TEKsystems. I got that right, you gave me carte blanche on how to pronounce that, by the way. So guys, 1Strategy and TEKsystems. Jen, let's start with you: give our audience an understanding of 1Strategy, what you guys are, the way you deliver. >> Yeah, so we are an AWS born-in-the-cloud, dedicated partner of Amazon Web Services, a premier consulting partner who focuses exclusively on delivering to our customers high-quality AWS expertise across industries. So because we're exclusively on AWS, it's across industries and pretty agnostic to customer size and scale. So we have that unique capability to really dive deep on being the experts on AWS when our customers are the experts of their own business. >> And TEKsystems? >> So TEKsystems Global Services, we are a full-stack technology consulting professional services GSI, global system integrator.
We really pay attention to that term full stack, because we cover every facet of the software systems operational life cycle. But increasingly, in the last couple of years, what has been the heart and soul of our ecosystem of competencies and practices and capabilities has been cloud, and even more so has been AWS, which is one of the reasons that we're super excited about coming together with 1Strategy. >> Cloud, obviously, it's not... It's not a thing. It's the thing, right? So we kind of moved past that. When people come to you, your clients come to you, and they understand this cloud experience, especially if they're native cloud, right, not a legacy, not bringing stuff over, but they're gonna want to launch, what's kind of the checklist, the preliminary look that you do to assess what their needs are, what they're like, what their opportunities are, and kind of how you get them to start thinking about exactly what they want to get done? Because I assume it's a big shoulder shrug and a lot of questions about where do we go from here. So how do you get them, I guess, oriented toward that conversation, that discussion? >> Yeah, so a good place to start is just to really understand their business, right? It's no longer just an IT-side-of-the-house kind of discussion, it's a whole business. So our first step is really to dive deep and understand their business goals, their culture and what their actual end goal is going to be. And so we have a really great program that we partner with AWS on, called the AWS Well-Architected Review program, which we were really fortunate to be one of the top initial partners selected for the beta program a few years ago, and then a launch partner for them when they went public last year, to really dive deep and be able to figure out exactly what are they doing?
What do they want to be doing, and how to get there, both on scale, vertically and horizontally, how to cost save, and how to really make sure that when they're doing it, they're doing it in a secure fashion. >> And where are those conversations happening? Are they happening at the C-suite level, or is it really, as Andy Jassy was talking about Tuesday, these types of transformations have to come from the executive, senior level? Are you having these conversations with the heads of business? >> We've really been seeing that kind of transformation, and it's been phenomenal, where that change in culture is no longer just the IT side of the house, it is senior leadership. Like Andy Jassy said, it's now a holistic business approach where you need that alignment from the senior leadership down, and that inclusivity, and in a lot of our conversations, you're getting everybody really buying into the AWS cloud initiatives that are going on. And keep me honest, I know on your side as well, TEK is experiencing a lot of that same thing. >> Indeed, and the way to kind of, I guess, divide and conquer the vectors from where we lean in to handle those conversations and prioritize the needs, and even deal with the different audiences, Lisa, like you're talking about, because, like, enterprise IT owners and business owners, ultimately they care about making the business better, but they're approaching it from different lenses. In AWS language, there is a methodology and a mindset called working backwards, and it really is the process of beginning with those goals, those business goals that Jen talked about, and framing them up just super tight, before we talk about how many lines of code or how many servers are gonna be provisioned. We don't want to even get into that. So we've got that really good, flowing understanding of the quantified needs, and how to really kind of celebrate what that is, and then work backwards from there.
And because it's such an all-encompassing conversation, especially with enterprises that are nascent to the cloud, that have only dipped their toe in the water, kind of like what Andy was talking about during his keynote a couple days ago, our specific methodology under working backwards breaks it up into two pieces: one is called Think Big, and one is called Act Now. Act Now is usually for the folks where it's like the technology solution. They're fluent enough, they're lucid enough in what their business is going to get out of cloud, and out of a migration, and out of native development, all that good stuff, so we can go right in surgically: hey, how do we just make you better, based on our combined expertise and our experience? Think Big is a little bit more involved, kind of where the question was going, because you're thinking about OCM, organizational change management. How does that culture really instantiate itself to move fast and be agile and think in a lean way, and repurpose lots of skills and lots of roles that kind of go extinct after a while? So how do we take all this great talent in an organization and upskill them and next-gen them to really operate inside of this new cloud ecosystem? >> So you're talking about really, organizationally, this leadership wholesale change, or shift, if you will, taking ownership of it from the very top. How do you characterize what that mindset looks like today, as opposed to maybe four or five years ago? It used to be so easy to just throw it over to the IT guys and developers, and we're going to focus on our marketing and our sales. Now the C-suite is, right, much more present in these kinds of discussions. You have to have that. Do you know how to drive that kind of fundamental change? >> For sure.
I think a lot of it has to do with the accessibility that AWS cloud is really bringing to the industry, where it now integrates so easily into your entire business. The C-level, as you say, down to the interns can have that same accessibility using the toolbox that AWS provides, to really jump in hands-first and start making things right away. You could be spinning up instances within seconds. It's so simple for people at all levels of knowledge. It's not just the 20-year IT veterans who can be the only ones to understand what's going on anymore. >> What are some of the barriers that AWS and cloud have removed? Five, ten years ago, customers were concerned with A, B, C; now those barriers have been mitigated, and maybe there are new barriers. But what about the evolution that you've seen AWS really sort of fuel? >> So that way, we can even think back to some of what, John, you were talking about. The kind of erstwhile mindset was a very big-iron one. You didn't really look at technology and IT as anything more than a utility. Now it's a competitive advantage. That's why you have this whole concept of being a digital native, and digital transformation, all these big words that get so much air time. But that's really been an acceptance and an adoption that technology has gotten to the point where moving quicker, better, faster is a function of elevating CX, customer experience, and enhancing it, and using technology to really make organizations move quicker, move faster, adopt new features into whatever their product is, whether it's online or whether it's packaged, whatever. And so I think those barriers that AWS has really kind of bubbled up to the surface and then sifted off have been around that integration into the business. And that's been a transformation that no other company has really enabled outside of AWS for years. Think about Gartner and Forrester and IDC.
They would talk about how the number one objective is to be aligned with the business, but always in a subservient role. Now it's more of a foot-forward, leadership role that you see inside of these organizations. >> It used to be that was all on the IT guys. >> Yeah, that was on the IT guys, right? I mean, they owned the whole thing. If you look forward then, when you sit down with whomever and you're trying to walk them through their process and evaluate what their needs are and so on and so forth, what's the biggest hurdle you've got to get over to get somebody to say, you've got to be totally present? Your IT offering should be cloud, or hybrid, multi, whatever you might be, but you've got to be cloud. What's the big challenge there to really get somebody jumping in the deep end? >> Honestly, I would really say it's the culture change right now. It's been such a huge digital transformation, you can't deny that, but the culture transformation that's going along with it has really been phenomenal. A lot of people who are at the point of starting their cloud journey are starting to realize they have to change the way that they look at everything. As you mentioned several times, it's not just the technical side anymore, it is the business side, and that's the big culture shift, getting over that. There's a lot of technical debt in there, with all the on-prem in different areas that people have invested in. And honestly, right now, the day of lift and shift is kind of going away. All of the new cloud benefits, like serverless and containers, are really going to be revolutionary, but that education and enablement really needs to be more prevalent in everybody's vocabulary. And not just the IT guy who can tell you about it; it needs to be the C-level, the enablers, the stakeholders in the middle, that really understand what's going on.
>> So could you talk to us about 1Strategy and TEKsystems coming together? Tell us a little bit about what you're doing together, and how you might be an aide, an enabler, of that cultural transformation that is absolutely linchpin. >> So there's that enabler and that accelerator to kind of drive that change — and not to overuse the word accelerator, but that's just one vector that we can talk about a little bit, and it's really what we're encouraging our customers to look at, because they've got a broad choice of system integrators of all sizes like us. But if you're not coming to the table with real depth of expertise, depth of expertise that can help mute a lot of the complexity that we were alluding to... Because even though we've got so many benefits and so much growth happening inside the AWS world, there are 175 services today. There have been 2,500 feature updates and releases across that portfolio just this year alone. There are 5 to 10 new announcements today, and then outside of the AWS stack, you've got hundreds and hundreds of other members of the DevOps tool chain that get bolted into that. So the way that we're kind of getting customers to overcome some of that reticence is by muting a lot of that, simplifying it, and coming to the table with real accelerators, where we've invested collectively hundreds of thousands of lines of code that we've built and put together for AWS: proprietary tools for better adoption, whether it's database freedom and getting kick-started off of your legacy database environments and into the purpose-built platforms inside of AWS, or microservices libraries and frameworks that we built for customers to help them start to decompose some of those big, expensive, high-technical-debt applications that Jen was talking about into microservices, to containerize them, to make them run faster in the cloud.
So that's where we're leaning in from, not just with the expertise and the combined resume of hundreds of awesome engagements where we've moved customers to the cloud, and hundreds and hundreds of terabytes that we've moved, but doing it in a way where the customer knows that they've got a real leader here with them, side by side in the journey. And it doesn't happen in one or two conversations. I mean, this goes on across many different settings and demos and Think Big sessions, like we were talking about. It takes some time. >> Yeah, I think the combined family of TEKsystems and 1Strategy will really be phenomenal for our customers. 48% of the market right now is using AWS cloud, and to keep up with that scale of innovation and growth, businesses need AWS experts, and that's who we are. It's in our name: we have one focus, one strategy, and that's AWS. We are built on the same agile, lean leadership principles that AWS has, and with the several competencies that we have, such as our Data and Analytics, Machine Learning, DevOps, and Migration competencies, we have a proven track record of not only being the AWS experts, but being able to be agile and grow with that same speed that AWS does, to keep up with the training of our teams on that expertise. And I think with TEKsystems' global footprint and ability to find this amazing talent, combined with our skill set, we will be able to create a larger geographical footprint to deliver to our customers in a way that will not only show our ability to deliver what they're doing, but exceed their expectations. >> I imagine the amount of engagement that you're going to have after an event like this. Three days — you mentioned the 175 services that AWS has delivered, the volume of announcements; it's incredibly challenging to keep up with that. Plus, there are 2,500 sessions; customers can't go to that many.
So I imagine there's going to be a lot of leaning on 1Strategy and TEKsystems to say, help us deconstruct and digest all the opportunities here. So you guys are, I'm sure, going to be very busy after this event. But we thank you for joining John and me today and telling us what you guys are doing, individually and collectively together. We appreciate it. >> Thank you so much. >> Our pleasure. >> For John Walls, we're out. Vegas, baby, this has been theCube. This is the end of our third day of continuous coverage of lots of stuff going on at AWS re:Invent. John, it's been a blast hosting a few segments with you. >> As always. >> Nice job. See you next time. >> Thanks for having me. >> All right, I will see you next time. Thanks for watching.

Published Date : Dec 6 2019



Joe Partlow, ReliaQuest | Splunk .conf19


 

>> Live from Las Vegas, it's theCube, covering Splunk .conf19, brought to you by Splunk. >> Okay, welcome back everyone. This is theCube's live coverage in Las Vegas for Splunk's .conf user conference. Ten years is their anniversary; it's theCube's seventh year. I'm John Furrier, your host, with a great guest here: Joe Partlow, CTO of ReliaQuest, recently on the heels of buying Threatcare and Marcus Carey and team. Congratulations, and thanks for coming on. >> Yeah, it's been a fun month. >> So obviously security, we love it. Let's take a minute to talk about what you guys do; talk about what your company does, and then I've got some questions for you. >> Yeah. So obviously, with the increasing cyber threats, security customers have a lot of tools. It's easy to get overwhelmed; it really causes a lot of confusion. So what we're trying to do is, we have a platform called GreyMatter, which is how we deliver security model management — that means bringing together people, process, and technology in a way that makes it easy to make sense of all the noise. There are a lot of features in there that help monitor health, incident response, hunting, any kind of feature that you would need from a security standpoint. >> So you guys are a managed service, you said? >> Yeah, a little different than a traditional MSSP. We work very closely with the customers; we work in their environment, working side by side with them, in their tools, and we're really maturing and getting better visibility into their environment, going beyond the traditional MSSP. >> Right, that's where you guys are: MSSP >> on steroids. A little bit different. >> All right, well, you guys have some things going on. You've got a partnership with Splunk for the .conf SOC. >> Oh yeah. >> Talk about that, what's set up out here, and what's it showing? >> Yeah, that's been a great experience.
We work very closely with the Splunk team. We monitor Splunk corporate, working with their security team. So when .conf came around, it was kind of a natural progression: Joel and team on their side said, hey, how do we build up the team and do a little bit extra, and is there any way that we can help secure .conf? It was really cool. I give credit to both teams, standing up a new Splunk install, getting everything stood up in the last few weeks, making sure that everybody at the pavilion, and the conference in general, is protected, and that we're watching for any kind of threat. So it's been great working with the Splunk team. >> Is it normal procedure for the bad guys to want to target the security conferences? Is it more of a graffiti kind of mentality, a do-it-for-fun thing, malicious endpoints that they want to stand up out here? >> Oh yeah, there's a little bit of a, you know, let's do it for fun and mess with the conference a little bit, so we want to make sure that that doesn't happen. >> So is my endpoint protected here — my endpoints, my phone and my laptop? >> Not user-specific, but any of the conference-provided demo stations. >> Okay, so the infrastructure, the equipment, not me personally. You are not monitoring my personal devices. Okay, I gave up my privacy years ago. This is an interesting thing to talk about, working with Splunk, because I hear all the time — and again, we're looking at this from an industry-wide perspective — I hear, we've got a SOC, they've got a SOC. These SOCs, security operations centers, are popping up everywhere. What is the state of the art for that now? Is it best practice to have one mega, monster SOC, or is it distributed, decentralized? What's the current thinking around how to deploy a security operations center, or centers?
Yeah, we certainly go with a decentralized model; we need to follow the sun. So we've got operations centers here in Vegas, Tampa, and Dublin, really making sure that we've got full coverage. But it is working very closely with the Splunk SOC. They've got a phenomenal team, and we work with them side by side. Obviously we are providing a lot of the tier-one, tier-two heavy lift, and then we escalate to the Splunk team; they're obviously going to know Splunk corporate better than we will. So we work very closely, hand in hand. >> So you guys acquired Threatcare, and Marcus Carey is now in the office of the CTO, which you're running. How is that going to shape ReliaQuest and your business? >> Yeah, the acquisition has been extremely exciting for us. After meeting Marcus — I've known of Marcus, he's a very positive influence in the community — but having worked with him, the vision for Threatcare and the vision for ReliaQuest really closely aligned. Where we want to take the future of security testing, testing controls, making sure upstream controls are working — where Threatcare wanted to go with that was very much aligned with ours, so it made sense to partner up. So we're very excited about that, and I think we will roll that into our GreyMatter platform as another capability. >> GreyMatter — love the name, by the way. I mean, first of all, the security companies have the best names: Mission Control, GreyMatter, you know, Red Canary, canary in the coal mine. All good stuff, all fun. But you guys work hard, so I know the prize has got to be good. I've got to ask you around the product vision, around the customers and how they're looking at security, because it's all fun and games until someone's hacking their business or trashing it, or there's ransomware going on. Data protection has become a big part of it.
What are customers telling you right now in terms of their fears and aspirations? What do they need? What's on the agenda for customers right now? >> Yeah, I think the two biggest fears, and the problems that we're trying to address, are, one, just a lack of visibility. Customers have so many things on their network, a lot of mergers and acquisitions, so unfortunately, a lot of times the security team is the last one to know when something pops up. So anything that we can do to increase visibility — and a lot of times we work very closely with the Splunk deployment that they have out there — to make sure that happens. And then the other thing, I think, is most people want to get more proactive. SIEM and logging by nature is very reactive, so we try to get out in front of those threats a little bit more. Anything that we can do to get more proactive is certainly going to be top of mind for them. >> Well, the machine learning toolkit is getting a lot of buzz here at the show; that's a really big deal. The other thing that I want to get your reaction to is this concept of diverse data. That's my word, not Splunk's, but the idea that bringing in more data sets actually helps machine learning — that's pretty much known by data geeks. But making data addressable matters just as much, because data is doing a lot of the automation, taking out that heavy lift, and also providing heavy-lifting capabilities to set data up so you can look at stuff. So data is pretty critical: data addressability, data diversity. You've got to have the data, and it's got to be addressable in real time, through tools like Data Fabric Search and other things. What's your reaction to that, and thoughts around it? >> No, I agree 100%. Obviously, most enterprise customers have a diverse set of data.
So trying to search across those data sets and normalize that data is a huge task. But to get the visibility that we need, we really need to be able to search these multiple data sets and bring them in to make sense of them, whether you're doing threat hunting or responding to alerts, or you need it from a compliance standpoint. Being able to deal with those diverse data sets is a key issue. >> You know, the other thing I want to get your thoughts on, one that we've been kind of commenting on — I've kind of taken a position on this from an opinion standpoint, and it's kind of obvious, but it's not necessarily accepted — my point is, with the data volume going up so massively, that tips the scales in the advantage of the adversaries. Ransomware's a great example of it: you see ransomware now hitting towns and cities. These ransomware attacks are just one little vector, but with the data volume, data is the surface area, not just devices. So how does the data piece of it play into the adversarial advantage? Do you think that makes them stronger, with more surface area? >> Yeah, definitely. And that's something where we're leaning on machine learning a lot. If you really want to make sense of that data, a lot of times you want to baseline that environment and just find out what's normal in the environment and what's not normal. And once you find that out, then we can start saying, all right, is this malicious or not? You know, some things — maybe PowerShell or something — in one environment are a huge red flag that, hey, we've been compromised, and in another one it's, hey, that's just a good administrator automating his job. So it's making sense of that, and then also just the sheer volume of data that we see customers dealing with.
So being able to see across that and make sure that you can at scale SyFy that data and find actionable event. You guys, I was just talking with a friend that I've known from the cloud, world, cloud native world. We're talking about dev ops versus the security operations and those worlds are coming together. There are more operational things than developer things, but yet CSOs that we talked to are fully investing in developer teams. So it's not so much dev ops dogma, if you will. But we gotta do dev ops, right? You know, see the CIC D pipeline. Okay, I get that. But developers play a critical role in this feature security architecture, but at the end of the day, it's still operations. So this is the new dev ops or sec ops or whatever it's called these days. >>What's your, how, how do customers solve this problem? Because it is operational, whether it's industrial IOT or IOT or cloud native microservices to on premise security practices with end points. I mean, I, the thing we see that, that kind of gets those teams the most success is making sure they're working with those teams. So having security siloed off by itself. Um, I think we've kind of proven in the past that doesn't work right? So get them involved with their development teams, get them involved with their net ops or, or, you know, sec ops teams, making sure they're working together so that security teams can be an enabler. Uh, they don't want to be the, uh, the team that says no to everything. Um, but at the end of the day, you know, most companies are not in the business of security. They're in the business of making widgets or selling widgets or whatever it is. >>So making sure that the security, yeah, yeah, that's an app issue. Exactly. Making sure that they're kind of involved in that life cycle so that, not that they can, you know, define what that needs to be, but at least be aware of, Hey, this is something we need to watch out for or get visibility into and, and keep the process moving. 
All right. Let's talk about Splunk. Let's set up their role in the enterprise. I'll see enterprise suite 6.0 is a shipping general availability. How are you guys deploying and optimizing Splunk for customers? What are some of the killer use cases that's there and new ones emerging? Yeah, we've, we provide, you know, really kind of three core areas. First one customers, you're one is obviously making sure that the platform is healthy. So a lot of times we'll go into a, a customer that, uh, you know, maybe they, they, there's one team has turned over or they rapidly expanded and, and in a quickly, you kind of overwhelming the system that's there. >>So making sure that the, the architecture is correct, maintained, patched, upgraded, and they're, they're really taking advantage of the power of Splunk. Uh, from an engineering standpoint. Uh, also another key area is building content. So as we were discussing earlier, making sure that we've got the visibility and all that data coming in, we've got to make sure that, okay, are we pursuing that data correctly? Are we creating the appropriate alerts and dashboards and reports and we can see what's going on. Um, and then the last piece is actually taking, you know, see you taking action on that. So, uh, from an incident response standpoint, watching those alerts and watching that content flyer and making sure that we're escalating and working with the customer security team, they'd love to get your thoughts. Final question on the, um, first of all, great, great insight. They'll, I love that. >>As customers who have personal Splunk, we buy our data is number one third party app for blogs work an app, work app workloads, and in cloud as well as more clients than you have rely more on cloud. AWS for instance, they have security hub, they're deploying some of this to lean on cloud providers, hyperscale cloud providers for security, but that doesn't diminish the roles flung place. 
So there's a lot of people that are debating, well, the cloud is going to eat Splunk's lunch. And so I don't think that's the case. I want to get your thoughts of it because they're symbionic. Oh yeah. So what's your thoughts on the relationship to the cloud providers, to the Splunk customer who's also going to potentially moves to the cloud and have a hybrid cloud environment? Yeah, and now I would agree there's, you know, there are going to exist side by side for a long time. >>Uh, most environments that we see are hybrid environments. While most organizations do have a cloud first initiative, there's still a lot of on premise stuff. So Splunk is still going to be a, a key cornerstone of just getting that data. Where I do see is maybe a, you know, in those platforms, um, kind of stretching the reach of Splunk of, Hey, let's, let's filter and parse this stuff maybe closer to the source and make sure that we're getting the actionable things into our Splunk ES dashboards and things like that so that we can really make sure that we're getting the good stuff. And maybe, you know, the stuff that's not actionable, we're, we've up in our AWS environment. Um, and that's, that's a lot of the technology that Splunk's coming out with. It's able to search those other environments is going to be really key I think for that where you don't have to kind of use up all your licensing and bring that non-actionable data in, but you still able to search across. >>But that doesn't sound like core Splunk services more. That's more of an operational choice there. Less of a core thing. You mentioned that you think splints to sit side by side for the clouds. What, what gives you that insight? What's, what's, uh, what's telling you that that's gonna happen? What's the, yeah, you still need the core functionality of Splunk running with spark provides is a, you know, it's a great way to bring data and it parses it, uh, extremely well. 
Um, having those, uh, you know, correlate in correlation engines and searches. Um, that's, that's very nice to have that prepackaged doing that from scratch. Uh, you can certainly, there's other tools that can bring data in, but that's a heavy riff to try to recreate the wheel so to speak. We're here with Joe Parlo, CTO, really a quest, a pardon with Splunk setting up this dotcom SOC for the exhibits and all the infrastructure. >>Um, final question, what's the coolest thing going on at dotcom this year? What's, what should customers or geeks look at that's cool and relevant that you think should be top line? Top couple of things. Yeah, I, I, uh, one of the things I like the most out of the keynote was, uh, the whole, uh, Porsche use case with that. The AR augmentation on my pet bear was really, really cool. Um, and then obviously the new features are coming out with, with VFS and some of another pricing model. So definitely exciting time to be a partner of Splunk. Alright, Joe, thanks for them. John furrier here with the cube live in Las Vegas day two of three days of coverage.com. Their 10th year anniversary, our seventh year covering the Silicon angle, the cube. I'm Sean furrier. Thanks for watching. We'll be right back.

Published Date : Oct 23 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jennifer | PERSON | 0.99+
Joe | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Joe Partlow | PERSON | 0.99+
Steven Gatoff | PERSON | 0.99+
Steven | PERSON | 0.99+
Joel | PERSON | 0.99+
John Farah | PERSON | 0.99+
Jeff | PERSON | 0.99+
Wal-Mart | ORGANIZATION | 0.99+
AT&T | ORGANIZATION | 0.99+
Las Vegas | LOCATION | 0.99+
100% | QUANTITY | 0.99+
Joe Parlo | PERSON | 0.99+
Splunk | ORGANIZATION | 0.99+
Vegas | LOCATION | 0.99+
PagerDuty | ORGANIZATION | 0.99+
Silicon Valley | LOCATION | 0.99+
San Francisco | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
Uber | ORGANIZATION | 0.99+
Dublin | LOCATION | 0.99+
Marcus | PERSON | 0.99+
seventh year | QUANTITY | 0.99+
One more question | QUANTITY | 0.99+
Porsche | ORGANIZATION | 0.99+
Tampa | LOCATION | 0.99+
one team | QUANTITY | 0.99+
First | QUANTITY | 0.99+
Lioncrest | ORGANIZATION | 0.99+
10 years | QUANTITY | 0.99+
Rapid7 | ORGANIZATION | 0.98+
Sean furrier | PERSON | 0.98+
one | QUANTITY | 0.98+
first initiative | QUANTITY | 0.98+
yesterday | DATE | 0.98+
second year | QUANTITY | 0.98+
three | QUANTITY | 0.98+
both teams | QUANTITY | 0.98+
first | QUANTITY | 0.98+
CTO | ORGANIZATION | 0.97+
January | DATE | 0.97+
Q4, 2017 | DATE | 0.97+
three days | QUANTITY | 0.96+
Europe | LOCATION | 0.96+
two biggest fears | QUANTITY | 0.96+
PagerDuty Summit 2017 | EVENT | 0.95+
this year | DATE | 0.95+
200+ different applications | QUANTITY | 0.95+
John furrier | PERSON | 0.95+
ReliaQuest | ORGANIZATION | 0.93+
Carrie | PERSON | 0.91+
one thing | QUANTITY | 0.91+
10th year anniversary | QUANTITY | 0.91+
AQuESTT | ORGANIZATION | 0.9+
PowerShell | TITLE | 0.89+
years | DATE | 0.89+
CTO | PERSON | 0.88+
tier one | QUANTITY | 0.88+
TAM | ORGANIZATION | 0.88+

John Fanelli, NVIDIA & Kevin Gray, Dell EMC | VMworld 2019


 

(lively music) >> Narrator: Live, from San Francisco, celebrating 10 years of high tech coverage, it's theCUBE, covering VMworld 2019! Brought to you by VMware and its ecosystem partners. >> Okay, welcome back to theCUBE's live coverage at VMworld 2019. We're in San Francisco, in the Moscone North Lobby. I'm John Furrier, with my co-host Stu Miniman, here covering all the action of VMworld, two sets for theCUBE, our tenth year, Stu. Keeping it going. Two great guests: John Fanelli, CUBE Alumni, Vice President of Product, Virtual GPUs at NVIDIA, and Kevin Gray, Director of Product Marketing, Dell EMC. Thanks for coming back on. Good to see you. >> Awesome. >> Good to see you guys, too. >> NVIDIA, big news, we saw your CEO up on the keynote videoing in. Two big announcements, and you've got some Windows stats to talk about. Let's talk about the news first, get the news out of the way. >> Sure, at this show, NVIDIA announced our new product called NVIDIA Virtual Compute Server. So for the very first time anywhere, we're able to virtualize artificial intelligence, deep learning, machine learning, and data analytics. Of course, we did that in conjunction with our partner, VMware. This runs on top of vSphere, and in conjunction with our partner Dell, Virtual Compute Server runs on Dell VxRail as well. >> What's the impact going to be for that? What does that mean for customers? >> For customers, it's really going to be the on-ramp for enterprise AI. A lot of customers, let's say they have a team of maybe eight data scientists doing data analytics; if they want to move to GPU today, they have to buy eight GPUs. However, with our new solution, maybe they start with two GPUs and put four users on a GPU. Then as their models get bigger and their data gets bigger, they move to one user per GPU. Then ultimately, because we support multiple GPUs now as part of this, they move to a VM that has maybe four GPUs in it.
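The fractional-GPU scaling path Fanelli lays out (four users sharing a GPU, then one GPU per user, then multi-GPU VMs) works out as simple arithmetic. Here is a rough sketch using the eight-person team and the ratios from the interview; the helper function is a hypothetical illustration, not part of any NVIDIA tooling.

```python
# Sketch of the scaling path described above, using the interview's own
# ratios. Illustrative only, not NVIDIA sizing guidance.

import math

def gpus_needed(users, gpus_per_user):
    """gpus_per_user may be fractional, e.g. 0.25 means four users share one GPU."""
    return math.ceil(users * gpus_per_user)

team = 8  # the eight data scientists from the example
print(gpus_needed(team, 0.25))  # shared vGPU stage: 2 GPUs
print(gpus_needed(team, 1))     # dedicated stage: 8 GPUs
print(gpus_needed(team, 4))     # multi-GPU VM stage: 32 GPUs
```

The first stage matches the quote exactly: two GPUs carry the whole team when four users share each GPU.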
We allow the enterprise to start to move to AI and deep learning, in particular machine learning for data analytics, very easily. >> GPUs are in high demand. My son always wants the next NVIDIA card, and he told me to get some GPUs from you when you came on, to ask the NVIDIA guy to get some for his gaming rig. Kidding aside, in the enterprise now, GPUs are really important for data crunching; this has really been a great use case. Talk about how that's changed, how people think about it, and how it's impacted the traditional enterprise. >> From a data analytics perspective, the data scientists will ingest data, run some machine learning on it, and create an inference model that they run to drive predictive business decisions. What we've done is GPU-accelerate the key libraries, technologies like PyTorch and XGBoost, to use a GPU. The first announcement is about how they can now use Virtual Compute Server to do that. The second announcement builds on that workflow: as I mentioned, they'll start small, then do bigger models, and eventually they want to train at scale, so they want to move to the cloud where they can have hundreds or thousands of GPUs. So NVIDIA and VMware are bringing Virtual Compute Server to VMware Cloud running on AWS with our T4 GPUs. Now I can scale virtually, starting with fractional GPU to single GPU to multi GPU, and push a button with HCX to move it directly into the AWS T4 accelerated cloud. >> That's the roadmap: you can get in, get the work done, scale up, that's the benefit. Availability, timing, when is all of this going to hit? >> Virtual Compute Server is available on Friday, the 29th. We're looking at mid next year for the full suite of VMware Cloud on top of AWS T4. >> Kevin, you guys are the supplier here at Dell EMC. What's the positioning there with you guys?
>> We're working very closely with NVIDIA in general on all of their efforts around both AI and VDI. We work together quite a bit, most recently on the VDI front. We look to drive things like qualifying the devices for both VDI and analytics applications. >> Kevin, bring us up to date, 'cause it's funny, we were talking about this being our 10th year here at the show. I remember sitting across Howard Street here in 2010, with Dell, and HP, and IBM all claiming who had the lowest dollar per desktop in VDI. It's a way different discussion here in 2019. >> Absolutely. Go ahead. >> One of the things that we've learned with NVIDIA is that it's really about the user experience. It's funny, we're at a transition point now from Windows 7 to Windows 10. The last transition was Windows XP to Windows 7. What we did then is we took Windows 7, tore everything out of it we possibly could, made it look like XP, and shoved it out. Ten years later, that doesn't work. Everyone's got their iPhones, their iOS devices, their Android devices. Microsoft's done a great job on Windows 10 being immersive. Now we're focused on user experience. In a VDI environment, as you move to Windows 10, you may not be aware of this, but from Windows 7 to Windows 10 it uses 50% more CPU, and you don't even get that great of a user experience. You pop a GPU in there, and you're good. Most of our customers together are working on a five-year life cycle. That means over the next five years, they're going to get 10 updates of Windows 10, and something like 60 updates of their Office applications. So they want to future-proof now, by putting the GPUs in to guarantee a great user experience. >> On the performance side too, obviously. And auto updates, this is the push-notification world we live in. This has to be built in from day one.
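The sizing impact of that 50% figure can be sketched with back-of-the-envelope math. All capacity numbers below are hypothetical; only the 1.5x CPU ratio comes from the interview, and the GPU-offload assumption is purely illustrative.

```python
# Rough consolidation math behind the Windows 7 -> Windows 10 point above:
# each Windows 10 desktop needs ~50% more CPU, so a host of fixed capacity
# holds fewer users unless a GPU offloads part of that load.
# All capacity figures are hypothetical, not Dell or NVIDIA sizing data.

host_cpu_capacity = 120.0                 # arbitrary CPU units per host
win7_per_user = 1.0                       # baseline desktop load
win10_per_user = win7_per_user * 1.5      # "50% more CPU" from the interview

users_win7 = int(host_cpu_capacity // win7_per_user)
users_win10 = int(host_cpu_capacity // win10_per_user)

# Hypothetical GPU offload: assume the vGPU absorbs the extra 0.5 units,
# bringing per-user CPU load back to the Windows 7 baseline.
users_win10_gpu = int(host_cpu_capacity // (win10_per_user - 0.5))

print(users_win7, users_win10, users_win10_gpu)  # 120 80 120
```

Under these assumed numbers, a Windows 10 migration without GPUs cuts per-host density by a third, which is the consolidation pressure the speakers are pointing at.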
>> Absolutely, and if you look at what Dell's doing, we really built this into both our VxRails and our VxBlocks. GPUs are just now part of it. We do these fully qualified stacks specifically for VDI environments as well. We're working a lot with the n-vector tools from NVIDIA, which make sure we're-- >> VDI finally made it! >> qualifying user experience. >> All these years. >> Yes, yes. In fact, we have this user experience tool called n-vector which, without getting super technical for the audience, allows you to look at the user experience based on frame rate, latency, and image quality. We put this tool together, but Dell has really been taking the lead on testing it and promoting it to users to really drive cost-effectiveness. It still is about the dollar per desktop, but it's the dollar per dazzling desktop. (laughing) >> Kevin, I hear the frame rate in there, and I've got all the remote workers, and you're saying, how do I make sure that's not the gaming platform they're using, because I know how important that is. >> Absolutely. There are a ton of customers out there. We look at folks like Guillevin as the example of a company that's worked with us and NVIDIA to truly drive the types of applications that are essential to VDI: power workers running applications like Autodesk, where that user experience and the ability to support multiple users really matter. If you look at Pat, he talked a little bit about any cloud, any application, any device. In VDI, that's really what it's about, allowing those workers to come together. >> I think the thing that the two of you mentioned, and Stu, you pointed out brilliantly, is that VDI is not just an IT thing anymore. It really is the expectation now. My rig, if I'm a gamer, or a young person, the younger kids, if you're under 25 and you don't have a kick-ass rig, (laughs) that's what they call it. Multiple monitors, that's the expectation, again, mobility.
Work experience, workspace. >> Exactly, along those same lines, by the way. >> This is the whole category. It's not just VDI as this thing over here that used to be talked about as an IT thing. >> It's about the workflow, how I get my job done. We used to use words like "business worker" and "knowledge worker." Now it's just: I'm a worker. Everybody today uses their phone, which is mobile. They use their computer at home, they use their computer at work. They're all running with dual monitors, sometimes dual 4K monitors. That really benefits from having a GPU as well. I know we're on TV, so hopefully some of you are watching VDI on a GPU-accelerated desktop. Things like Skype, WebEx, Zoom, Microsoft Teams, all the collaboration tools, they all benefit from our joint solution with the GPU. >> These new subsystems like GPUs become so critical. They're not just a subsystem, they're a main part, because the offload is now part of the new operating environment. >> We optimized together jointly using the n-vector tool. We optimized the server and operating environment, so that if you run with a GPU, you can right-size your CPU in terms of cores, speed, etc., and get the best user experience in the most cost-effective way. >> Also, the gaming world helps bring in the new kind of cool visualization that's going to move into the workflow of workers. You start to see this immersive experience; VR and AR are obviously around the corner. It's only going to get more complex, with more need for GPUs. >> Yes, in fact, we're seeing more requirements for AR and VR from business than we actually are for gaming. Don't you want to go into your auto showroom at your house and feel the fine Corinthian leather? >> We've got to upgrade our CUBE game, get more GPU-focused and get some ray tracing in there. >> Kevin, I know I've seen things from the Dell family on leveraging VR in the enterprise space. >> Oh, absolutely.
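The n-vector metrics that keep coming up in this exchange (frame rate, latency, image quality) have to be combined into something comparable before you can right-size hardware against them. The interview does not give the tool's actual formula, so the weighted blend below is a purely hypothetical illustration of the idea; the weights and the 60 fps / 200 ms anchor points are assumptions.

```python
# Hypothetical user-experience score blending the three n-vector metrics
# named above. Not NVIDIA's actual scoring method.

def ux_score(fps, latency_ms, image_quality):
    """Return a 0-100 score; image_quality is assumed normalized to 0-1."""
    fps_part = min(fps / 60.0, 1.0)                    # treat 60 fps as ideal
    latency_part = max(0.0, 1.0 - latency_ms / 200.0)  # treat 200 ms as unusable
    quality_part = max(0.0, min(image_quality, 1.0))
    return round(100 * (0.4 * fps_part + 0.3 * latency_part + 0.3 * quality_part))

good = ux_score(60, 20, 0.95)    # e.g. a smooth GPU-backed desktop
poor = ux_score(15, 150, 0.70)   # e.g. an overloaded CPU-only desktop
print(good, poor)
```

A real tool would presumably calibrate such weights against measured user studies rather than pick them by hand, but even this sketch shows how "dollar per dazzling desktop" becomes a measurable comparison.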
If you look at a lot of the things we're doing with some of the telcos around 5G, they're very interested in VR and AR. Those are areas that'll continue to use things like GPUs to help accelerate those types of applications. It really does come down to having scalable infrastructure that's easy to manage and easy to operate. That's where I think the partnership with NVIDIA really comes together. >> Deep learning and all this stuff around data. Michael Dell always comes on theCUBE and talks about it. He sees data as the biggest opportunity and challenge. Whatever applications are coming in, you've got to be able to pound into that data. That's where AI has really shone; machine learning is doing the heavy lifting on a lot of things that were previously manual. >> Exactly. The one thing that's really great about GPU-accelerated data analytics is we can take a job that used to take days and bring it down to hours. Obviously, doing something faster is great, but if I take a job that used to take a week and I can do it in one day, that means I have four more days to do other things. It's almost like hiring people for free, because I get four extra work days. The other thing that's really interesting about our joint solution is you can leverage that same virtual GPU technology to do VDI by day and compute by night. When your users aren't at work, you migrate them off, you spin up the VMs that are doing your data analytics using our RAPIDS technology, and you're able to get that platform running 24 by seven. >> Productivity gains just from the infrastructure. For the user too, up and down, the productivity gains are significant. So I'll get three monitors now. I'm going to get one of those Alienware curved monitors. >> Just the difference, we have a suite here at the show, and you can see such a difference when you insert the GPUs into the platform. It just makes all the difference.
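The "VDI by day, compute by night" pattern Fanelli describes is, at its core, a time-based scheduling policy for a shared GPU pool. Here is a minimal sketch; the business-hours boundaries are hypothetical and not part of any NVIDIA or VMware product.

```python
# Sketch of the day/night workload switch described above: decide which
# workload class a shared GPU pool serves at a given hour.
# Schedule boundaries are invented for illustration.

def workload_for_hour(hour):
    """Return the workload class for a GPU pool at a given hour (0-23)."""
    business_hours = range(7, 19)  # 07:00-18:59, an assumed office day
    return "vdi-desktops" if hour in business_hours else "data-analytics"

schedule = {h: workload_for_hour(h) for h in range(24)}
print(schedule[10])  # vdi-desktops
print(schedule[23])  # data-analytics
```

In practice the switch would also drain desktop sessions and spin analytics VMs up and down, but the policy itself is this simple: the same GPUs serve two workload classes on a clock.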
>> John, I've got to ask you a personal question. How many times have people asked you for a GPU? You must get that all the time? >> We do. I have an NVIDIA backpack, and when I walk around, there are a lot of people that only know NVIDIA for games. So random people will always ask for one. >> I've got two sons and two daughters, and they just nerd out on the GPUs. >> I think he's trying to get me to commit on camera to giving him a GPU. (laughing) I think I'm in trouble here. >> Yeah, they get the latest and greatest. Any new stuff, they're going to be happy to be the first on the block to get the GPU. It's certainly had an impact on the infrastructure side: the components, the operating environment, Windows 10. Any other data you guys have to share that you think is notable around how all this is coming together, from a user experience standpoint around Windows and VDI? >> I think one piece of data, again, going back to your first comment about cost per desktop: we're seeing a lot of migration to Windows 10. Customers are buying our joint solution from Dell, which includes our hardware and software, for that five-year life cycle, and we actually put a program in place to really drive down the cost. It's literally like $3 per month to have a GPU-accelerated virtual desktop. It's really great value for the customers, besides the great productivity. >> If you look at doing some of these workloads on premises, some of the costs can come down. We had a recent study around the VxBlock as an example. We showed that running GPUs and VDI can be as much as 45% less expensive on a VxBlock at scale. When you talk about the whole hybrid cloud, multi-cloud strategy, there are pluses and minuses to both. Certainly, if we look at the ability to start small and scale out, whether you're going HCI or CI, I think there's a VDI solution there that can really drive the economics. >> The intense workloads. Are there any industries that are key for you guys in terms of verticals?
Absolutely. We're definitely looking at a lot of the CAD/CAM industries. We just did a certification on our platforms with Dassault's CATIA system. That's an area we'll continue to explore as we move forward. >> I think on the workstation side of things, it's all the standard verticals: automotive, manufacturing. Architecture is interesting; it's one of those industries with kind of an SMB profile. They have lots of offices, but they have enterprise requirements for all the hard work that they do. Then with VDI, we're very strong in financial services as well as healthcare. In fact, if you haven't seen it, you should come by; we have a Bloomberg demo for financial services about the impact for traders, with a virtualized GPU desktop. >> The speed is critical for them. Final question. Take-aways from the show this year, VMworld 2019. Stu, we've got 10 years to look back on, but guys, what are the take-aways you're going to take back from this week? >> I think there's still a lot of interest and enthusiasm. Surprisingly, there are still a lot of customers that haven't finished their migration to Windows 10, and they're coming to us saying, oh my gosh, I only have until January, what can you do to help me? (laughing) >> Get some GPUs. Thoughts from the show? >> The multi-cloud world continues to evolve, and the continued partnerships that emerge as part of this are pretty amazing, in how that's changing things like virtual GPUs and accelerators. That experience people have come to expect from the cloud is, for me, a take-away. >> John Fanelli, NVIDIA, thanks for coming on. Congratulations on all the success. Kevin, Dell EMC, thanks for coming on. >> Thank you. >> Thanks for the insights. Here on theCUBE, VMworld 2019. John Furrier, Stu Miniman, stay with us for more live coverage after this short break. (lively music)

Published Date : Aug 28 2019

SUMMARY :

Brought to you by VMware and its ecosystem partners. John Fanelli of NVIDIA and Kevin Gray of Dell EMC join theCUBE at VMworld 2019 to discuss NVIDIA Virtual Compute Server, which virtualizes AI, machine learning, and data analytics workloads on vSphere and Dell VxRail, with fractional, single, and multi-GPU scaling. They cover bringing Virtual Compute Server to VMware Cloud on AWS with T4 GPUs, the Windows 10 migration's impact on VDI (roughly 50% more CPU than Windows 7), measuring user experience with the n-vector tool via frame rate, latency, and image quality, running VDI by day and compute by night with RAPIDS, and the economics of GPU-accelerated desktops at roughly $3 per month, up to 45% less expensive on VxBlock at scale.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
$3 | QUANTITY | 0.99+
Michael Dell | PERSON | 0.99+
Dell | ORGANIZATION | 0.99+
NVIDIA | ORGANIZATION | 0.99+
IBM | ORGANIZATION | 0.99+
2019 | DATE | 0.99+
Stu Miniman | PERSON | 0.99+
John Fanelli | PERSON | 0.99+
John | PERSON | 0.99+
John Frer | PERSON | 0.99+
Kevin | PERSON | 0.99+
HP | ORGANIZATION | 0.99+
2010 | DATE | 0.99+
San Francisco | LOCATION | 0.99+
10 years | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
five-year | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+
60 updates | QUANTITY | 0.99+
Kevin Gray | PERSON | 0.99+
two daughters | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
45% | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Windows 7 | TITLE | 0.99+
VMware | ORGANIZATION | 0.99+
Windows 10 | TITLE | 0.99+
one day | QUANTITY | 0.99+
Skype | ORGANIZATION | 0.99+
Howard Street | LOCATION | 0.99+
mid next year | DATE | 0.99+
AWS | ORGANIZATION | 0.99+
iPhones | COMMERCIAL_ITEM | 0.99+
tenth year | QUANTITY | 0.99+
two GPUs | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Windows XP | TITLE | 0.99+
four users | QUANTITY | 0.99+
Dell EMC | ORGANIZATION | 0.99+
second announcement | QUANTITY | 0.99+
one | QUANTITY | 0.99+
a week | QUANTITY | 0.99+
first | QUANTITY | 0.99+
10th year | QUANTITY | 0.98+
one piece | QUANTITY | 0.98+
one user | QUANTITY | 0.98+
Windows | TITLE | 0.98+
this year | DATE | 0.98+
Pat | PERSON | 0.98+
Dassault | ORGANIZATION | 0.98+
this week | DATE | 0.98+
thousands | QUANTITY | 0.98+
eight data scientists | QUANTITY | 0.98+
first announcement | QUANTITY | 0.98+
XP | TITLE | 0.98+
10 years later | DATE | 0.98+
Stu | PERSON | 0.98+
first time | QUANTITY | 0.98+