Darren Anstee, NETSCOUT | CUBEConversation, November 2019


 

>> Announcer: From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante.

>> Hello everyone, and welcome to this CUBE Conversation. Today we're going to dig into the challenges of defending against distributed denial of service, or DDoS, attacks. We're going to look at what DDoS attacks are, why they occur, and how defense techniques have evolved over time. And with me to discuss these issues is Darren Anstee. He's the CTO of Security at NETSCOUT. Darren, good to see you again. Can you tell me about your role? You're CTO of Security, so you've got CTOs specific to the different areas of your business?

>> Yeah, so I work within the broader CTO office at NETSCOUT, and we really act as a bridge between customers, engineering teams, our product management, and the broader market. We're all about making sure that our strategy aligns with that of our customers, that we're delivering what they need, when they need it. And we're really about thought leadership: looking at the unique technologies and capabilities that NETSCOUT has, and how we can pull those things together to deliver new value propositions, new capabilities, that can move our customers' businesses forward, and obviously take us with them.

>> Great, so let's get into it. I mean, everybody hears of DDoS attacks, but specifically, what are they? Why do they occur? What's the motivation behind the bad guys hitting us?

>> So a distributed denial of service attack is simply when an attacker is looking to consume some or all of the resources that are assigned to a network service or application, so that a genuine user can't get through: so that you can't get to that website, so that your network is full of traffic, so that your firewall is no longer forwarding packets. That's fundamentally what a DDoS attack is all about. In terms of the motivations behind them, they are many and varied. There's a very wide range of motivations behind the DDoS activity that we see going on out there today: everything from cybercrime, where people are holding people to ransom ("I will take your website down unless you pay me X Bitcoin"), through ideological disputes, through to nation-state attacks. And then of course you get things like students in higher educational establishments targeting online coursework submission and testing systems, because they simply don't want to do the work. Fundamentally, the issue you have around the motivations today is that it's so easy for anyone to get access to fairly sophisticated attack capabilities that anyone can launch an attack for pretty much any reason, and that means that pretty much anyone can be targeted.

>> Okay, so you've got to be ready. So are there different types of attacks, I guess? It used to be denial of service, now it's distributed denial of service. What are the different types of attacks?

>> So the three main categories of distributed denial of service attack are what we call volumetric attacks, state exhaustion attacks, and application-layer attacks. And you can think of them around the different aspects of an organization's infrastructure that get targeted. Volumetric attacks are all about saturating internet connectivity, filling up the pipe, as it were. State exhaustion attacks are all about exhausting the state tables in specific pieces of infrastructure: if you think about load balancers and firewalls, they maintain state on the traffic that they're forwarding, and if you can fill those tables up, they stop doing their job, and you can't get through them. And then you have the application-layer attacks, which, as the name would suggest, are simply an attacker targeting a service at the application layer: for example, flooding a website with requests for a download, something like that, so that a genuine user can't get through.

>> Presumably some of those attacks, for the infiltrators, are easier, have a lower bar, than others. Is that right? Or are they pretty much all at the same level of sophistication?

>> In terms of the attacks themselves, there are big differences in sophistication. In terms of launching the attack, it's really easy now. A lot of the attack tools that are out there today are fully weaponized, so you click a button and it launches multiple attack vectors at a target. Some of them will even rotate those attack vectors, to make it harder for you to deal with the attack. And then you have the DDoS-for-hire services that will do all of this for you, effectively as a managed service. So there's a whole economy around this stuff.

>> So, a common challenge in security: very low barriers to entry. How have these attacks changed over time?

>> So DDoS is nothing new. It's been around for over 20 years, and it has changed significantly over that time period, as you would expect with anything in technology. If you go back 20 years, a DDoS attack of a couple of gigabits per second would be considered very, very large. Last year, we obviously saw DDoS attacks break the terabit barrier, so that's an awful lot of traffic. If we look in a more focused way at what's changed over the last 18 months, I think there are a couple of things worth highlighting. Firstly, we've seen the numbers of what we would consider to be mid-sized attacks grow very quickly over the last 12 months. Mid-sized, to us, is between 100 and 400 gigabits per second, so we're still talking about very significant traffic volumes that can do a lot of damage, saturating the internet connectivity of pretty much any enterprise out there. Between 2018 and 2019, looking at the two first halves respectively, you're looking at about 776 percent growth. So there are literally thousands of these attacks going on out there now in that 100 to 400 gig band, and that's changing the way that network operators are thinking about dealing with them. The second thing that's changed is the complexity of attacks. I've already mentioned this a little bit, but there are now a lot of attack tools out there that completely automate the rotation of attack vectors during an attack, changing the way the attack works periodically, every few minutes or every few seconds. And they do that because it makes it harder to mitigate, and it makes it more likely that they'll succeed in their goal. And then the third thing that I suppose has changed is simply the breadth of devices and protocols that are being used to launch attacks. We all remember 2016, when Dyn was attacked, and we started hearing about IoT and Mirai and things like that, the CCTV and DVR devices that were being used there. Since then, a much broader range of device types is being targeted, compromised, subsumed into botnets, and used to generate DDoS attacks. And we're also seeing a much wider range of protocols used within those DDoS attacks. There's a technique called reflection amplification, which has been behind many of the largest DDoS attacks over the last 15 years or so. Traditionally it used a fairly narrow band of protocols; over the last year or so, we've seen attackers researching and then weaponizing a new range of protocols, expanding their capability, getting around existing defenses. So there's a lot changing out there.

>> So, talking about mitigation: how do you mitigate? How do you defend against these attacks?

>> That's changing, actually. If you look at the way the service provider world used to deal with DDoS, predominantly what you would find is they would invest in intelligent DDoS mitigation systems, such as the Arbor TMS, and they'd deploy those solutions into their primary peering locations, potentially into centralized data centers. And then, when they detected an attack using our Sightline platform, they would identify where it was coming in, identify the target of the attack, and divert the traffic across their network to those TMS locations, inspect the traffic, clean away the bad, forward on the good: protect the customer, protect the infrastructure, protect the service. What's happening now is that the shape of service provider networks is changing. If we look at the way content used to be distributed in service providers, they'd pull it in centrally and push it out to their customers. If we look at the way that value-added service infrastructure used to be deployed, it was very similar: deployed centrally, then serving the customer. All of that is starting to push out to the edge now. Content is coming in at many more locations, nearer to where it's delivered; value-added service infrastructure is being pushed into virtual network functions at the edge of the network. And that means that operators are not engineering the core of their networks in the same way, and they don't want to move DDoS attack traffic across their network just so that they can inspect and discard it. They want to be doing things right at the edge. And they want to be doing things at the edge that combine the capabilities of the router and switch infrastructure, which they've already invested in, with the intelligent DDoS mitigation capabilities of something like an Arbor TMS. They're looking for solutions that really orchestrate those combinations of mitigation mechanisms, to deal with attacks as efficiently and effectively as possible. And that's very much where we're going with the Sightline with Sentinel products.

>> Okay, and we're going to get into that. You mentioned service providers; do enterprises approach this the same way, and what's different?

>> Some enterprises approach it in exactly the same way: your larger-scale enterprises that have networks that look a bit like those of service providers. They're very much looking to use their router and switch infrastructure, very much looking for a fully automated, orchestrated attack response that leverages all capabilities within a given network, with full reporting, all of those kinds of things. For other enterprises, hybrid DDoS defense has always been seen as the best practice, which is really this combination of a service provider or cloud-based service, to deal with the high-volume attacks that would simply saturate connectivity, with an on-prem or virtually on-prem capability that has a much more focused view of that enterprise's traffic, that can look at what's going on around the applications, potentially decrypt traffic for those applications, so that you can find those more stealthy, more sophisticated attacks and deal with them very proactively.

>> A lot of times, companies don't want to collaborate because they're competitors, but security is somewhat different. Are you finding that service providers, or maybe even large organizations, like in financial services, are collaborating and sharing information?

>> They're starting to. With the scale of DDoS now, especially in terms of the size of the attacks and the frequency of the attacks, we are starting to see, I suppose, two areas where there's collaboration. Firstly, you're seeing groups of organizations who are looking to offer services in a unified way to a customer outside of their normal reach. So, service provider A has reach in region A, service provider B in region B, C in region C; they're looking to offer a unified service to a customer that has offices in all of those regions, so they need to collaborate in order to offer that unified service. That's one driver for collaboration. Another one is where you see large service providers who have multiple satellite operating companies. Think of some of the big brands that are out there in the service provider world: they have networks in lots of parts of the world, and then they have other networks that join those networks together, and they would very much like to share information within that group. The challenge has always been, well, there are really two challenges to sharing information to deal with DDoS. Firstly, there's a trust challenge: if I'm going to tell you about a DDoS attack, are you simply going to start doing something with that information that might potentially drop traffic for a customer, that might impact your network in some way? That's one challenge. The second challenge is in visibility: if I tell you about something, how do you tell me what you actually did? How do I find out what actually happened? How do I tell my customer, whom I might be defending, what happened overall? So one of the things that we're doing in Sightline with Sentinel is building in a new smart signaling mechanism, where our customers will be able to cooperate with each other. They'll be able to share information safely between one another, and they'll be able to get feedback from one another on what actually happened: what traffic was forwarded, what traffic was dropped.

>> That's critical, because, as you mentioned, the first challenge is the balance of business disruption versus protecting, and the second is, "Hey, something's going wrong, but I don't really know what it is," which isn't very helpful. Let's get more into the Arbor platform and talk about how you guys are helping solve this problem.

>> Okay. So the Arbor Sightline platform has been the market-leading DDoS detection and mitigation solution for network operators for well over a decade. Obviously, we were acquired by NETSCOUT back in 2015, and what we've really been looking at is how we can integrate the two sets of technologies to deliver a real step change in capability to the market. That's really what we're doing with the Sightline with Sentinel product, which integrates NETSCOUT and Arbor technology. Arbor has traditionally provided our Sightline customers with visibility of what's happening across their networks at layers 3 and 4, so very much a network focus. NETSCOUT has Smart Data technology, which is effectively about acquiring packet data in pretty much any environment, whether we're talking physical, virtual, container, public or private cloud, and turning those packets into metadata, into what we call smart data. What we're doing in Sightline with Sentinel is combining packet and flow data together. You can think of it as kind of like colorizing a black-and-white photo. If you think about the picture we used to have in Sightline as being black and white, when we add this smart data, suddenly we've colorized it: when you look at that picture, you can see more, you can engage with it more, you understand more about what was going on. We're moving our visibility from the network layer up to the service layer, and that will allow our customers to optimize the way that they deliver content across their networks. It will allow them to understand what kinds of services their customers are accessing across their network, so that they can optimize their value-added service portfolios and drive additional revenue. They'll be able to detect a broader range of threats, things like botnet monitoring, that kind of thing. And they'll also be able to report on distributed denial of service attacks in a very different way. If you look at the way much of the reporting that happens out there today is designed, it's very much network layer: how many bits were forwarded, how many packets were dropped. When you're trying to explain to an end customer the value of the service that you offer, that's a bit vague. What they want to know is: how did my service perform, and how was my service protected? By bringing in that service-layer visibility, we can do that. And that whole smarter-visibility angle will drive a new intelligent automation engine, which will really look at any attack and then provide a fully automated, orchestrated attack response, using all of the capabilities within a given network, and even outside a given network using the smart signaling mechanism, whilst delivering a full suite of reporting on what's going on. So you're relying on the solution to deal with the attack for you, to some degree, but you're also being told exactly what's happening, why it's happening, and where it's happening.

>> And your secret sauce is the way in which you handle the metadata, what you call smart data. Is that right?

>> Our secret sauce is really in a couple of different areas. With Sightline with Sentinel, the smart data is really a key one. I think the other key one is our experience in the DDoS space. We understand how our customers are looking to use their router and switch infrastructure, we understand the nature of the attacks that are going on out there, and we have a unique set of visibility into the attack landscape through the NETSCOUT ATLAS platform. When you combine all of those things together, we can look at a given network and understand: for this attack, at this second, this is the best way of dealing with it, using these different mechanisms. If the attack changes, we alter our strategy. Building that intelligent automation needs that smarter visibility, so all of those different bits of our secret sauce really come together in Sentinel.

>> So is that really your differentiator from your key competitors: you've got the experience, you've obviously got the tech? Anything else you'd add to that?

>> I think the other thing that we've got is our people. We've got a lot of research capability in the DDoS space, so we are delivering a lot of intelligence into our products as well now; it's not just about what you detect locally anymore. If we look at the way the attack landscape is changing, I mentioned that attackers are researching and weaponizing new protocols. We're learning about that as it happens, by looking at our honeypots, by looking at our sinkholes, by looking at our ATLAS data, and we're pushing that information down into Sightline with Sentinel as well, so that our customers are best prepared to deal with what's facing them.

>> When you talk to customers, can you summarize for our audience the key business challenges? You talked about some of the technical ones; there may be some others you can mention, but try to get to that business impact.

>> Yeah, so on the business side of it, there are a few different things. A lot of it comes down to operational cost and complexity, and also, obviously, the cost of deploying infrastructure. Both of those things are changing, because of the way that networks are changing and business models are changing. On the operational side, everyone is looking for their solutions to be more intelligent and more automated, but they don't want them simply to be a black box. If it's a black box, it either works or it doesn't, and if it doesn't, you've got big problems, especially if you've got service-level agreements and things tied to services. So intelligent automation, to reduce operational overhead, is key, and we're very focused on that. The second thing is around deployment of capability into networks. I mentioned that the traditional DDoS mitigation strategy was to deploy intelligent DDoS mitigation capability into key peering locations and centralized data centers. As we push things out towards the edge, our customers are looking for those capabilities to be deployed more flexibly. They're looking for them to be deployed on common off-the-shelf hardware, and they're looking for different kinds of software licensing models, which, again, is something we've already addressed, to allow our customers to move in that direction. And then the third thing, I think, is really half opportunity and half business challenge: when you look at service providers today, they're very, very focused on how they can generate additional revenue. They're looking at how they can take a service that maybe they've offered in the past to their top hundred customers, and offer it to their top thousand or five thousand customers. Part of that is intelligent automation, part of that is getting the visibility, but part of that, again, is partnering with an organization like NETSCOUT that can really help them do that. So it's part challenge, part opportunity, but that's again something we're very focused on.

>> I want to come back and double down on the point about automation. It seems to me one of the unique things about security is this huge skills gap, and people complain about that all the time. In a lot of infrastructure businesses, automation means you can take people and put them on different, more strategic tasks, and I'm sure that's true in security too. But because of that skills gap, automation is really the only way to solve these problems, right? You can't just keep throwing people at the problem, because you don't have the skilled people, and you can't take that brute-force approach. Does that make sense to you?

>> It's scale and speed when it comes to distributed denial of service. Given that the attack vectors are changing very rapidly now, because the tools support that, you've got two choices as an operator: you either have somebody focused on watching what the attack is doing and changing your mitigation strategy dynamically, or you invest in a solution that has more intelligent automation, more intelligent analytics, and better visibility of what's going on. And that's Sightline with Sentinel, fundamentally. The other key thing is the scale aspect: if you're looking to drive value-added services to a broader addressable market, you can't really do that by simply hiring more and more people, because the services don't cost in. That's where the intelligent automation comes in. It's about scaling the capability that operators already have, and most of them have a lot of very clever, very good people in the security space. It's about scaling the capability they already have, to drive that additional revenue, to drive the additional value.

>> So if I had to boil it down, the business case is obviously lower cost, as mentioned, scale, more effective mitigation, which lowers your risk, and then for the service providers it's monetization as well.

>> Yeah, and the more effective mitigation is a key one as well: leveraging that router and switch infrastructure to deal with the bulk of the attack, so that you can then use the intelligent DDoS mitigation capability, the Arbor TMS, to deal with the more sophisticated components, combining those two things together.

>> All right, we'll give you the final word, Darren: takeaways, and any key point you want to drive home.

>> Yeah, I mean, Sightline has been a market-leading product for a number of years now, and what we're really doing at NETSCOUT is investing in that. We're pulling together the different technologies we have available within the business, to deliver a real step change in capability to our customer base, so that they can have a fully automated and orchestrated attack response capability that allows them to defend themselves better, and allows them to drive a new range of value-added services.

>> Well, Darren, thanks for coming on. You guys are doing great work. Really appreciate your insights.

>> Thanks, Dave. You're welcome.

>> And thank you for watching, everybody. This is Dave Vellante. We'll see you next time.
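Darren's point about reflection amplification can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only, not part of the interview: it uses approximate, publicly documented bandwidth amplification factors (in the range reported by US-CERT alert TA14-017A and later advisories; real-world values vary widely) to estimate how much attack traffic reaches a victim for a given amount of spoofed attacker bandwidth.

```python
# Rough estimate of reflection/amplification attack volume.
# Bandwidth amplification factors (BAFs) below are approximate public
# figures, used here only for illustration.
BAF = {
    "DNS": 28,           # open resolvers, large responses (roughly 28-54x)
    "NTP": 556,          # monlist responses (roughly 556x)
    "CLDAP": 56,         # connectionless LDAP (roughly 56-70x)
    "memcached": 10000,  # exposed UDP memcached (10,000x and up)
}

def amplified_gbps(attacker_gbps: float, protocol: str) -> float:
    """Estimated traffic volume at the victim, given spoofed-source bandwidth."""
    return attacker_gbps * BAF[protocol]

if __name__ == "__main__":
    for proto in BAF:
        print(f"1 Gb/s via {proto:>9}: ~{amplified_gbps(1.0, proto):,.0f} Gb/s at the victim")
```

Under these assumed factors, even a modest 1 Gb/s of spoofed queries bounced off exposed memcached servers lands in the multi-terabit range, which is consistent with the terabit-barrier attacks Darren mentions.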

Published Date : Nov 14 2019



Taylor Carol, GameChanger Charity & ZOTT | AWS Public Sector Summit 2018


 

>> (upbeat electronic music) >> Live, from Washington D.C., it's theCUBE. Covering AWS Public Sector Summit 2018. Brought to you by Amazon Web Services and its ecosystem partners. (upbeat techno music) >> Welcome back to the nation's capital, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante and I'm here with Stu Miniman. This is day two of the AWS Public Sector Summit. Taylor Carol is here. He's the co-founder of the GameChanger charity and ZOTT. Taylor, welcome to theCUBE. Thanks for coming on. >> Thank you, glad to be here. >> Keynote yesterday got rave reviews. Let me just set this up. So, ZOTT is a content platform that creates virtual experiences for children, giving them an outlet for creativity, intellectual engagement, and a lot more. We're going to talk about that. And then GameChanger is the non-profit, and it's the majority shareholder of the for-profit organization. So, that's an interesting business model. >> Thank you. >> Explain, please. >> Absolutely. We started GameChanger roughly twelve years ago, when I, at 11, was diagnosed terminal with a rare form of cancer, given roughly two weeks left to live. Thankfully, a long two weeks; I'm totally healthy now. But-- >> Congratulations, that's awesome. >> Hey, thank you so much. >> Good to have you with us. >> Glad to be here. But from those five years I spent in hospital, combined with the 20,000 hospital rooms my dad and I have visited on behalf of GameChanger charity, we saw how much need there was in the patient care space and the patient engagement space. And those insights led us to first found GameChanger charity, now a nearly 12-year-old 501(c)(3), an international non-profit. It started as an endeavor in our garage; this year, we've taken in over 20 million dollars in donations, 93 cents on every dollar going to the cause. And GameChanger really focuses in on leveraging gaming, technology, and innovation to support patients' rights to play, learn and socialize.
And we do that through virtual reality, through augmented reality, through custom gaming solutions, through character-based scholarships to support post-hospital dreams, and then with GameChanger days, where we go in and bring in bundles of toys for the patients and a catered meal for staff, to sit down and talk with them and learn about the bespoke gaming and tech solutions we can make to support each individual hospital's needs. So that's GameChanger. And then from that insight, from all that time in the hospital, something we really saw was that the existing patient engagement, how patients watch TV or get clinical health content, was so broken. It's one TV mounted on the wall with 20 channels of basic cable. We saw it could be so much better. So, we made ZOTT, which is a device-agnostic, cloud-based content distribution system. So now, through ZOTT, at participating hospitals, any patient, any family member can get their own content, their own experiences, from any device, a laptop, a tablet, a phone, everywhere in the hospital. So, linear TV, gaming, clinical health content, even custom live-streams exclusively for the patients. And ZOTT is owned in its entirety by GameChanger charity. >> That's awesome. >> So anything good that happens to ZOTT goes back to support the GameChanger cause. >> So, completely changing the experience for the patient, from first-hand experience. What have been some of the outcomes, anecdotally, or I don't know if you have any kind of measurements? You're changing the world, but if you could share with us how, and any examples, that would be great. >> Thank you for saying that. One of the most profound things we've seen at GameChanger charity and at ZOTT is how deleterious boredom is for the patient experience. Understandably: individuals are locked in a boring, white room for a day, a week, a month, years at times. >> Craving visitors, anything. >> Any form of interaction or social engagement.
And you know, something we've seen is that boredom often magnifies pain and anxiety, isolation, overuse of pain medication. And understanding that issue, that pain, something we've been able to do is incorporate custom VR rigs, custom VR experiences, for distraction therapy. So that's where we'll go in, meet with patients, and bring the care providers VR sets, so when a patient is getting ready for a surgery, they can put on a VR rig and try a tranquil experience. And we've seen pain scores go down by as much as six points on a 10-point pain scale as a result of such distraction therapy. >> That's fantastic. >> Yeah. >> Thank you. >> It's fascinating; really powerful, the discussion we had in the keynote. So, making this happen, there's some technology behind this. Maybe walk us through a little bit: what's the connection with the cloud discussion? >> Absolutely, absolutely. Something we've seen in growing from a garage endeavor to now an international organization that supports 11 countries, with 20 million dollars in revenue this year, is the importance of scalability: being able, on one hand, to help as many patients as possible, while still focusing on the individual and never losing sight of the fact that each patient we work with is an individual life, and truly a family, impacted by acute or prolonged illness. So, what the cloud has really allowed us to do is to magnify our efforts and take it from, say, five hospitals to now over 100. And one example of that would be in how we use AWS's Sumerian. That is a cloud-based VR experience, and rather than needing to download really content-heavy VR experiences on, say, a gaming computer in order to facilitate these experiences, now care providers can interact with them through the cloud. And going beyond that, they can actually customize VR experiences for the needs of each patient. So, let's say there's a patient who needs to get a tour through their new hospital ward.
Thanks to GameChanger creating templates on Amazon Sumerian, these care specialists can now go in and customize the script that the AR or VR host will speak, to include the patient's name, or to say, "I know this is a big change from California," or from Colorado, or wherever they hail from. Really making that otherwise generic hospital integration experience feel so bespoke, so personalized to the individual. >> And if I remember right, one of the things you can do is actually get them engaged with their care. Like, here's the surgery, we're going to take you inside what's going to be done. And I've heard studies of this: if you understand what's going to be done and can focus on it, kind of the power of understanding and thinking on it can actually improve the results that you get out of it. >> You are so right. That has been one of the most profound things for me personally. When I was sick, I was in the hospital for five years, and for roughly six months of those five years I was in an isolation unit, where the only people that could come in were my doctor and my nurse, in a hazmat suit. And during that time, I was scared. I was an 11-year-old boy; I didn't understand what was happening. And I felt an utter loss of agency, an utter loss of empowerment, regarding my illness and, more importantly, my healing. So, what we're able to do now with Sumerian: we created a collaborative learning experience between CS Mott Children's Hospital in Ann Arbor, Michigan, and Children's Hospital Colorado in Denver. So, experts 1200 miles apart were able to collaborate in real time, through the cloud, through Amazon Sumerian, to make a VR experience where patients about to receive aortic valve replacements could actually go through human hearts in virtual reality and simulate the surgery they would soon be receiving, leading to this huge spike in empowerment and identity and ownership over their healing. >> That's amazing.
I mean, I remember, I've only had surgery once, I've been really lucky, >> Yeah. >> But when the surgeon explained to me how it worked, it just opened up my mind and made me so much more comfortable when I understood that. Being able to visualize that has to be a complete game changer. Taylor, what does the hospital have to do? Take us through their infrastructure needs, or how do hospitals get on-boarded? >> That's a fantastic question. An anecdote, or a saying that we always hold near and dear to our hearts at GameChanger and at ZOTT, is that when you know one hospital, you know one hospital. (laughter) And we mean that in the sense that every hospital is its own behemoth, its own ecosystem that has spent the past one, five, ten, 50 years building what is now an incredibly outdated technology stack. So, purely from the patient engagement side, let's say looking at ZOTT, traditional engagement, just to get that TV on the wall and to get the cable going and the basic clinical health information, there's a satellite on the roof, there are server racks in the basement, there's a TV with a computer mounted on the back, there's a laptop in the waiting room. Everything is just so cumbersome, so outdated. And what we've been able to do is take this really thin-client-based cloud approach where we're able to create a bespoke cloud solution that totally bypasses all of that heavy technology stack. Equally, because Amazon and AWS services are so modifiable, and you can really pick and choose what you need from the suite, we've been able to go in and, instead of having the hospital change to us, we've been able to modify to the hospital, to fit into their ecosystem rather than bring in a bulldozer and try and change everything that they have. >> Awesome. So you're utilizing their existing infrastructure, bringing in a lightweight cloud and thin-client infrastructure, and you're up and running. >> Absolutely.
A metric that we have to speak to the groundbreaking nature of what we're able to do now: typical patient engagement systems can take up to 18 months to install, cost millions of dollars, and be incredibly cumbersome and expensive in terms of the hours it takes to maintain the hardware. ZOTT, our technology, when we bring it in, goes live in hospitals in as little as 15 minutes. >> And not millions and millions of dollars? >> (laughs) Exponentially less. >> Okay, so the hospital has to buy into it, but they really don't have to bring in any new infrastructure. You guys kind of turn-key that for them. So you really need a champion inside the hospital, and a go. >> Absolutely, absolutely. A mindfulness we really maintain is that each hospital decision maker's priority is to safeguard the individual patient and their families. We understand that there's sensitivity, there are a lot of security requirements. And one of the beauties of working with AWS, as you all know, is that AWS is HIPAA compliant. And in working with AWS, we've been able to add an extra degree of security and safeguarding for any information we collect, any experience we work on with the hospitals, so that everyone is safe, and all decision makers feel like their needs and requirements are being satisfied and safeguarded. >> So does that mean the kids can't play Fortnite? >> Fortnite (laughs). Neither Fortnite nor PUBG (laughs). >> Well, because if they're playing Fortnite, you'd never get 'em home. >> (laughs) >> Same with PUBG.
>> One thing that is pretty fun is, through ZOTT and GameChanger and all of our relationships with the big game developers around the world, we may not have PUBG, but we do have Steam integration, and through our game developers we have over a million dollars worth of Steam codes, continually replenished, so patients and their siblings can download a 20, 30, 40, 50 dollar game, keep it on their laptop or their tablet, and take it with them when they leave, as a gift for their strength while they were in the hospital. >> Amazing. Taylor, thanks so much for the contribution you're making to the children and to the world. Really a phenomenal story. Appreciate you coming on theCUBE. >> Thank you both so much for letting us be here and sharing our story. >> You're very welcome. All right, keep it right there, buddy. We'll be back with our next guest. You're watching theCUBE from AWS Public Sector Summit. Stay right there. (upbeat electronic music)

Published Date : Jun 21 2018



James Kobielus, Wikibon | The Skinny on Machine Intelligence


 

>> Announcer: From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. >> In the early days of big data and Hadoop, the focus was really on operational efficiency, where ROI was largely centered on reduction of investment. Fast forward 10 years and you're seeing a plethora of activity around machine learning, and deep learning, and artificial intelligence, and deeper business integration as a function of machine intelligence. Welcome to this Cube conversation, The Skinny on Machine Intelligence. I'm Dave Vellante and I'm excited to have Jim Kobielus here, up from the District area. Jim, great to see you. Thanks for coming into the office today. >> Thanks a lot, Dave, yes, great to be here in beautiful Marlboro, Massachusetts. >> Yes, so you know Jim, when you think about all the buzzwords in this big data business, I have to ask you, is this just sort of same wine, new bottle when we talk about all this AI and machine intelligence stuff? >> It's actually new wine. But of course there are various bottles and they have different vintages, and much of that wine is still quite tasty, and let me just break it out for you, the skinny on machine intelligence. AI as a buzzword and as a set of practices really goes back, of course, to the early post-World War II era; as we know, Alan Turing and the Imitation Game and so forth. There were other developers, theorists, and academics in the '40s and the '50s and '60s who pioneered in this field. So we don't want to give Alan Turing too much credit, but he was clearly a mathematician who laid down the theoretical framework for much of what we now call Artificial Intelligence. But when you look at Artificial Intelligence as an ever-evolving set of practices, where it began was in an area that focused on deterministic rules, rule-driven expert systems, and that was really the state of the art of AI for a long, long time.
And so you had expert systems in a variety of areas that became useful or used in business, and science, and government and so forth. Cut ahead to the turn of the millennium; we are now in the 21st century, and what's different, the new wine, is big data: larger and larger data sets that can reveal great insights, patterns, correlations that might be highly useful if you have the right statistical modeling tools and approaches to be able to surface up these patterns in an automated or semi-automated fashion. So one of the core areas is what we now call machine learning, which really is using statistical models to infer correlations, anomalies, trends, and so forth in the data itself, and the core approach for machine learning is something called artificial neural networks, which essentially model, at a very high level, how the nervous system is made up, with neurons connected by synapses, and so forth. The analog in statistical modeling is called a perceptron. The whole theoretical framework of perceptrons actually got started in the 1950s with the first flush of AI, but didn't become a practical reality until after the turn of this millennium, really after the turn of this particular decade, 2010, when we started to see not only very large big data sets emerge, and new approaches for managing it all, like Hadoop, come to the fore, but also artificial neural nets get more sophisticated in terms of their capabilities. And a new approach for doing machine learning, artificial neural networks with deeper layers of perceptrons, neurons, called deep learning, has come to the fore. With deep learning, you have new algorithms like convolutional neural networks, recurrent neural networks, generative adversarial networks. These are different ways of surfacing up higher level abstractions in the data, for example for face recognition and object recognition, voice recognition and so forth.
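The perceptron Jim describes can be sketched in a few lines of code. This is an illustrative toy, not anything from Wikibon or a production framework: a single artificial neuron, a weighted sum of inputs passed through a step activation, trained on invented AND-gate data with the classic perceptron learning rule. Real deep learning stacks many such units into layers.

```python
# A single perceptron: weighted sum of inputs plus a bias, passed through
# a step activation. Data, learning rate, and epoch count are invented.

def predict(weights, bias, inputs):
    """Step activation over a weighted sum: fire (1) or not (0)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward each labeled example."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            error = label - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy labeled data: the logical AND function, which is linearly separable.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```

The training loop here is also a miniature of the supervised learning Jim returns to later: labeled data drives iterative weight adjustment until the model's outputs match the labels.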
These all depend on this new state of the art for machine learning called deep learning. So what we have now, in the year 2017, is quite a mania for all things AI. Much of it is focused on deep learning, and much of it is focused on tools that your average data scientist or your average developer increasingly can use and get very productive with, to build these models and train and test them, and deploy them into working applications. Going forward, things like autonomous vehicles would be impossible without this. >> Right, and we'll get to some of that. But so you're saying that machine learning is essentially math that infers patterns from data. And the math, is it new math, or math that's been around for a while? >> Yeah, and inferring patterns from data has been done for a long time with software, and we have some established approaches that in many ways predate the current vogue for neural networks. We have support vector machines, and decision trees, and Bayesian logic. These are different statistical approaches for inferring patterns, correlations in the data. They haven't gone away, they're a big part of the overall AI space, but it's a growing area that I've only skimmed the surface of. >> And they've been around for many many years, like SVM for example. Okay, now describe further, add some color to deep learning. You sort of painted a picture of these deep layers of machine learning algorithms, a network with some depth to it, but help us better understand the difference between machine learning and deep learning, and then ultimately AI. >> Yeah, well, with machine learning generally, you know, inferring patterns from data as I said, artificial neural networks, of which the deep learning networks are one subset, can be two or more layers of perceptrons or neurons; they have relationships to each other in terms of their activation, according to various mathematical functions.
So when you look at an artificial neural network, it basically does very complex math equations through a combination of what they call scalar functions, like multiplication and so forth, plus non-linear functions, like cosine and tangent, all of that math playing together in these deep structures that are triggered by data, data input that's processed according to activation functions that set and reset the weights among all the various neural processing elements, and that ultimately output something, the insight or the intelligence that you're looking for: a yes or no, is this a face or not a face, that these incoming bits are presenting. Or it might present output in terms of categories: what category of face is this, a man, a woman, a child, or whatever. What I'm getting at is that deep learning is more layers of these neural processing elements that are specialized for various functions, to be able to abstract higher level phenomena from the data. It's not just, "Is this a face," but if it's a scene recognition deep learning network, it might recognize that this is a face that corresponds to a person named Dave, who also happens to be the father in the particular family scene, and by the way this is a family scene that this deep learning network is able to ascertain. What I'm getting at is those are the higher level abstractions that deep learning algorithms of various sorts are built to identify in an automated way. >> Okay, and these in your view all fit under the umbrella of artificial intelligence, or is that sort of an uber field that we should be thinking of? >> Yeah, artificial intelligence, as the broad envelope, essentially refers to any number of approaches that help machines to think like humans, essentially. When you say, "Think like humans," what does that mean actually?
To do predictions like humans, to look for anomalies or outliers like a human might, you know, separate figure from ground for example in a scene, to identify the correlations or trends in a given scene. Like I said, to do categorization or classification based on what they're seeing in a given frame or what they're hearing in a given speech sample. All these cognitive processes just skim the surface of what AI is all about: automating them to a great degree. And when I say cognitive, I'm also referring to affective, like emotion detection, another set of processes that goes on in our heads or our hearts, that AI based on deep learning and so forth is able to do. Different types of artificial neural networks are specialized for particular functions, and they can only perform these functions if, A, they've been built and optimized for those functions, and B, they have been trained with actual data from the phenomenon of interest. Training the algorithms with actual data to determine how effective the algorithms are is the key linchpin of the process, 'cause without training the algorithms you don't know if an algorithm is effective for its intended purpose. So at Wikibon, what we're doing, in the whole development process, the DevOps cycle, for all things AI, is showing that training the models through a process called supervised learning is absolutely an essential component of ascertaining the quality of the network that you've built. >> So that's the calibration and the iteration to increase the accuracy, and like I say, the quality of the outcome. Okay, what are some of the practical applications that you're seeing for AI, and ML, and DL? >> Well, chatbots, you know, voice recognition in general, Siri and Alexa, and so forth. Without machine learning, without deep learning to do speech recognition, those can't work, right? Pretty much in every field now, for example, IT service management tools of all sorts.
When you have a large network that's logging data at the server level, at the application level and so forth, those data logs are too large and too complex and changing too fast for humans to be able to identify the patterns related to issues and faults and incidents. So AI, machine learning, deep learning is being used to fathom those anomalies and so forth in an automated fashion, to be able to alert a human to take action, like an IT administrator, or to be able to trigger a response workflow, either human or automated. So AI within IT service management, a hot hot topic, and we're seeing a lot of vendors incorporate that capability into their tools. Like I said, in the broad world we live in, in terms of face recognition and Facebook, the fact is when I load a new picture of myself or my family, or even with some friends or brothers in it, Facebook knows lickety-split whether it's my brother Tom or it's my wife or whoever, because of face recognition, which obviously depends, well, it's not obvious to everybody, depends on deep learning algorithms running inside Facebook's big data infrastructure. They're able to immediately know this. We see this all around us now, speech recognition, face recognition, and we just take it for granted that it's done, but it's done through the magic of AI. >> I want to get to the development angle scenario that you specialize in. Part of the reason why you came to Wikibon is to really focus on that whole application development angle. But before we get there, I want to follow the data for a bit, 'cause you mentioned that was really the catalyst for the resurgence in AI, and last week at the Wikibon research meeting we talked about this three-tiered model: edge, an edge piece, and then something in the middle which is this aggregation point for all this edge data, and then cloud, which is where I guess all the deep modeling occurs. So sort of a three-tier model for the data flow. >> Jim: Yes.
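The log-anomaly use case described above can be sketched in a deliberately simplified form. This is a hedged illustration, not any vendor's actual method: it flags readings that deviate sharply from the baseline using a z-score, with invented data and an arbitrary 3-sigma threshold; real ITSM products use far richer learned models.

```python
import statistics

# Flag log metrics that sit far from the baseline. The readings and the
# 3-sigma threshold are invented for this sketch.

def find_anomalies(readings, threshold=3.0):
    """Return (index, value) pairs more than `threshold` std-devs from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [(i, x) for i, x in enumerate(readings)
            if abs(x - mean) > threshold * stdev]

# Simulated requests-per-second log with one spike.
rps = [101, 99, 102, 98, 100, 101, 97, 100, 450, 99, 102, 100]
print(find_anomalies(rps))  # [(8, 450)]
```

In practice the flagged index would feed exactly the alerting or automated response workflow described above, rather than a print statement.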
>> So I wonder if you could comment on that in the context of AI: it means more data, more opportunities for machine learning and digital twins, and all this other cool stuff that's going on. But I'm really interested in how that is going to affect the application development and the programming model. John Furrier has a phrase; he says that, "Data is the new development kit." Well, if you've got all this data that's distributed all over the place, that changes the application development model, at least you think it does. So I wonder if you could comment on that edge explosion, the data explosion as a result, and what it means for application development. >> Right, so more and more deep learning algorithms are being pushed to edge devices, by which I mean smartphones and smart appliances like the ones that incorporate Alexa and so forth. And so what we're talking about is the algorithms themselves being put into CPUs and FPGAs and ASICs and GPUs. All that stuff's getting embedded in everything that we're using; more and more devices have the ability either to be autonomous in terms of making decisions independent of us, or simply to serve as augmentation vehicles for whatever we happen to be doing, thanks to the power of deep learning at the client. Okay, so when deep learning algorithms are embedded in, say, an internet of things edge device, what the deep learning algorithms are doing is, A, ingesting the data through the sensors of that device, and B, making inferences, deep learning algorithmic-driven inferences, based on that data. It might be speech recognition, face recognition, environmental sensing, being able to sense geospatially where you are and whether you're in a hospitable climate or whatever. And then the inferences might drive what we call actuation.
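One way to picture inference driving actuation across the tiers is a toy decision chain. Every name, threshold, and reading here is invented for illustration: the edge resolves clear-cut cases locally, with no round trip, and escalates ambiguous ones to a hub that sees wider context; real systems would run trained deep learning models at each tier.

```python
# Toy tiered inference: fast local decisions at the edge, escalation to a
# zone-level hub when the edge is unsure. All values are illustrative.

def edge_infer(obstacle_score):
    """Low-latency decision on the device itself; None means escalate."""
    if obstacle_score > 0.9:
        return "brake"      # clear obstacle: act immediately
    if obstacle_score < 0.1:
        return "proceed"    # clearly nothing ahead
    return None             # ambiguous: defer to the hub's wider context

def hub_infer(zone_congestion):
    """Zone-level decision using context no single device has."""
    return "slow" if zone_congestion > 0.7 else "proceed"

def decide(obstacle_score, zone_congestion):
    """Edge first; fall back to the hub only when the edge is unsure."""
    local = edge_infer(obstacle_score)
    return local if local is not None else hub_infer(zone_congestion)

print(decide(0.95, 0.2))  # brake -- resolved entirely at the edge
print(decide(0.50, 0.8))  # slow  -- escalated to the hub
```

A cloud tier would sit one level further out, retraining the models that produce these scores and coordinating whole fleets.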
Now in the autonomous vehicle scenario, the autonomous vehicle is equipped with all manner of sensors, LiDAR and sonar and GPS and so forth, and it's taking readings all the time. It's doing inferences, either autonomously or in conjunction with inferences that are being made through deep learning and machine learning algorithms executing in those intermediary hubs like you described, or back in the cloud, or in a combination of all of that. But ultimately, the results of all those analytics, all those deep learning models, feed what we call the actuation of the car itself. Should it stop, should it put on the brakes 'cause it's about to hit a wall, should it turn right, should it turn left, should it slow down because it happens to have entered a new speed zone, or whatever. All of the decisions, the actions that the edge device takes, and a car would be an edge device in this scenario, are being driven by ever more complex algorithms that are trained by data. Now, let's stay with the autonomous vehicle, because that's an extreme case of a very powerful edge device. To train an autonomous vehicle you need, of course, lots and lots of data that's acquired from, A, possibly a prototype that you, a Google or a Tesla or whoever you might be, have deployed into the field, or that your customers are using; and B, proving grounds, like the one out by my stomping ground in Ann Arbor, a proving ground for the auto industry for self-driving vehicles, gaining enough real training data based on the operation of these vehicles in various simulated scenarios, and so forth. This data is used to build and iterate and refine the algorithms, the deep learning models, that are doing the various operations of not only the vehicles in isolation but the vehicles operating as a fleet within an entire end-to-end transportation system.
So what I'm getting at is, if you look at that three-tier model, the edge device is the car, running under its own algorithms; the middle tier, the hub, might be a hub that's controlling a particular zone within a traffic system; in my neck of the woods it might be a hub that's controlling congestion management among self-driving vehicles in eastern Fairfax County, Virginia. And then the cloud itself might be managing an entire fleet of vehicles; let's say you might have an entire fleet of vehicles under the control of, say, an Uber, or whatever is managing its own cars from a cloud-based center. So when you look at that tiering model, analytics, deep learning analytics, is being performed, and increasingly will be, through this tiered model, and not just for self-driving vehicles, because the edge device needs to make decisions based on local data, the hub needs to make decisions based on a wider view of data across a wider range of edge entities, and the cloud itself has responsibility or visibility for making deep-learning-driven determinations for some larger swath. And the cloud might be managing both the deep-learning-driven edge devices, as well as monitoring other related systems that the self-driving network needs to coordinate with, like the government, or whatever, or police. >> So envisioning that three-tier model then, how does the programming paradigm change and evolve as a result of that?
>> Yeah, the programming paradigm: the modeling itself, the building and the training and the iterating of the models, generally will stay centralized. Meaning, to do all these functions, to do modeling and training and iteration of these models, you need teams of data scientists and other developers who are adept at statistical modeling, who are adept at acquiring the training data and at labeling it, labeling is an important function there, and who are adept at basically developing and deploying one model after another in an iterative fashion through DevOps, through a standard release pipeline with version controls and governance built in. And that really needs to be a centralized function; it's also very compute and data intensive, so you need storage resources, large clouds full of high performance computing, and so forth, to be able to handle these functions over and over. Now the edge devices themselves will feed in just the data that goes into the centralized platform where the training and the modeling is done. So what we're going to see is more and more centralized modeling and training, with decentralized execution of the actual inferences driven by those models. That's the way it works in this distributed environment. >> It's the Holy Grail. All right, Jim, we're out of time, but thanks very much for helping us unpack and giving us the skinny on machine learning. >> Jim: It's a fat stack. >> Great to have you in the office, and to be continued. Thanks again. >> Jim: Sure. >> All right, thanks for watching, everybody. This is Dave Vellante with Jim Kobielus, and you're watching theCUBE at the Marlboro offices. See ya next time. (upbeat music)

Published Date : Oct 18 2017

