Breaking Analysis: What Black Hat '22 tells us about securing the Supercloud
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> Black Hat '22 was held in Las Vegas last week, the same time as theCUBE Supercloud event. Unlike AWS re:Inforce, where words are carefully chosen to put a positive spin on security, Black Hat exposes all the warts of cyber and openly discusses its hard truths. It's a conference that's attended by technical experts who proudly share some of the vulnerabilities they've discovered, and, of course, by numerous vendors marketing their products and services. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this "Breaking Analysis", we summarize what we learned from discussions with several people who attended Black Hat and our analysis from reviewing dozens of keynotes, articles, sessions, and data from a recent Black Hat Attendee Survey conducted by Black Hat and Informa, and we'll end with a discussion of what it all means for the challenges around securing the supercloud. Now, I personally did not attend, but as I said at the top, we reviewed a lot of content from the event, which is renowned for its hundreds of sessions, breakouts, and strong technical content that is, as they say, unvarnished. Chris Krebs, the former director of the US Cybersecurity and Infrastructure Security Agency, CISA, gave the keynote, and he spoke about the increasing complexity of tech stacks and the ripple effects that has on organizational risk. Risk was a big theme at the event. Where re:Inforce tends to emphasize, again, the positive state of cybersecurity, it could be said that Black Hat, as the name implies, focuses on the other end of the spectrum. Now, there was a lot of talk, as always, about the expanded threat surface, you hear that at any event that's focused on cybersecurity, and tons of emphasis on supply chain risk as a relatively new threat that's come to the top of CISOs' minds. Now, there was also plenty of discussion about hybrid work and how remote work has dramatically increased business risk. According to data from Intel 471's Mark Arena, the previously mentioned Black Hat Attendee Survey showed that compromised credentials were the number one source of risk, followed by infrastructure vulnerabilities and supply chain risks, so a couple of surveys here that we're citing, and we'll come back to that in a moment. At an MIT cybersecurity conference earlier last decade, theCUBE had a hypothetical conversation with former Boston Globe war correspondent Charles Sennott about the future of war and the role of cyber. We had similar discussions with Dr. Robert Gates on theCUBE at a ServiceNow event in 2016. At Black Hat, these discussions went well beyond the theoretical, with actual data from the war in Ukraine. It's clear that modern wars are and will be supported by cyber, but the takeaways are that they will be highly situational, targeted, and unpredictable, because in combat scenarios, anything can happen. People aren't necessarily at their keyboards. Now, the role of AI was certainly discussed, as it is at every conference, and particularly cyber conferences.
You know, it was somewhat dissed as overhyped, not surprisingly, but while AI is not a panacea for cyber exposure, automation and machine intelligence can definitely augment what appear to be, and have been, stressed-out security teams by recommending actions and taking other helpful types of data and presenting it in a curated form that can streamline the job of the SecOps team. Now, most cyber defenses are still going to be based on tried-and-true monitoring and telemetry data and log analysis and curating known signatures and analyzing consolidated data, but increasingly, AI will help with the unknowns, i.e. zero-day threats and threat actor behaviors after infiltration. Now, finally, while much lip service was given to collaboration and public-private partnerships, especially after Stuxnet was revealed early last decade, the real truth is that threat intelligence in the private sector is still evolving. In particular, the industry, mid-decade, really tried to commercially exploit proprietary intelligence and, you know, do things like private reporting and monetize that, but attitudes toward collaboration are trending in a positive direction, was one of the outcomes that we heard at Black Hat. Public-private partnerships are being mandated by government, and there seems to be a willingness to work together to fight an increasingly capable adversary. These things are definitely on the rise. Now, without this type of collaboration, securing the supercloud is going to become much more challenging and confined to narrow solutions, and we're going to talk about that a little later in the segment. Okay, let's look at some of the attendee survey data from Black Hat. Just under 200 really serious security pros took the survey, so not enough to slice and dice by hair color, eye color, height, weight, and favorite movie genre, but enough to extract high-level takeaways. You know, these strongly agree or disagree survey responses can sometimes give vanilla outputs, but let's look for the ones where very few respondents strongly agree or disagree with a statement, or those where respondents overwhelmingly strongly agree or somewhat agree. So it's clear from this that the respondents believe the following: one, your credentials are out there and available to criminals, and very few people thought that was avoidable; second, remote work is here to stay; and third, nobody was willing to really jinx their firms and say that they strongly disagree that they'll have to respond to a major cybersecurity incident within the next 12 months. Now, as we've reported extensively, COVID has permanently changed the cybersecurity landscape and the CISO's priorities and playbook. Check out this data that queries respondents on the pandemic's impact on cybersecurity: new requirements to secure remote workers, more cloud, more threats from remote systems and remote users, and a shift away from perimeter defenses that are no longer as effective, e.g. firewall appliances. Note, however, the fifth response that's down there highlighted in green. It shows a meaningful drop in the percentage of remote workers that are disregarding corporate security policy, still too many, but 10 percentage points down from the 2021 survey. Now, as we've said many times, bad user behavior will trump good security technology virtually every time. Consistent with the commentary from Mark Arena's Intel 471 threat report, phishing for credentials is the number one concern cited in the Black Hat Attendee Survey.
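As an aside on method, here is a minimal sketch in Python of the kind of Likert-scale screening described above, flagging the lopsided statements worth reading into. The statements and response counts are hypothetical stand-ins, not the actual Black Hat Attendee Survey data.

```python
# A minimal sketch of screening Likert-scale survey items for lopsided responses.
# The response counts below are hypothetical, not the actual survey data.
from typing import Dict

def agree_share(counts: Dict[str, int]) -> float:
    """Share of respondents at the 'agree' end of the scale."""
    total = sum(counts.values())
    agree = counts.get("strongly_agree", 0) + counts.get("somewhat_agree", 0)
    return agree / total if total else 0.0

# Hypothetical counts for roughly 200 respondents per statement.
statements = {
    "Credentials are available to criminals": {
        "strongly_agree": 120, "somewhat_agree": 60,
        "somewhat_disagree": 15, "strongly_disagree": 5},
    "Remote work is here to stay": {
        "strongly_agree": 140, "somewhat_agree": 50,
        "somewhat_disagree": 8, "strongly_disagree": 2},
}

# Flag the statements where sentiment is overwhelmingly one-sided.
for text, counts in statements.items():
    share = agree_share(counts)
    if share > 0.85 or share < 0.15:
        print(f"{share:.0%} agreement: {text}")
```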
Phishing is a people and process problem more than a technology issue. Yes, using multifactor authentication, changing passwords, you know, using unique passwords, using password managers, et cetera, they're all great things, but if it's too hard for users to implement these things, they won't do it, they'll remain exposed, and their organizations will remain exposed. Number two in the graphic, sophisticated attacks that could expose vulnerabilities in the security infrastructure, again, consistent with the Intel 471 data, and three, supply chain risks, again, consistent with Mark Arena's commentary. Ask most CISOs their number one problem, and they'll tell you, "It's a lack of talent." That'll be at the top of their list. So it's no surprise that 63% of survey respondents believe they don't have the security staff necessary to defend against cyber threats. This speaks to the rise of managed security service providers that we've talked about previously on "Breaking Analysis". We've seen estimates that less than 50% of organizations in the US have a SOC, and we see those firms as ripe for MSSP support, as well as larger firms augmenting staff with managed service providers. Now, after re:Inforce, we put forth this conceptual model that discussed how the cloud was becoming the first line of defense for CISOs, and DevOps was being asked to do more, things like securing the runtime, the containers, the platform, et cetera, and audit was kind of that last line of defense. So a couple things we picked up from Black Hat which are consistent with this shift, and some that are somewhat new. First, getting visibility across the expanded threat surface was a big theme at Black Hat. The expanded threat surface makes it even harder to identify risk. It's one thing to know that there's a vulnerability somewhere. It's another thing to determine the severity of the risk by understanding how easy or difficult it is to exploit that vulnerability and how to prioritize action around it. Vulnerability management is increasingly complex for CISOs as the security landscape gets more complicated. So what's happening is the SOC, if there even is one at the organization, is becoming federated. No longer can there be one ivory tower that's the magic god room of data and threat detection and analysis. Rather, the SOC is becoming distributed, following the data, and as we just mentioned, the SOC is being augmented by the cloud provider and the managed service providers, the MSSPs. So there's a lot of critical security data that is decentralized, and this will necessitate a new cyber data model where data can be synchronized and shared across a federation of SOCs, if you will, or mini SOCs, or SOC capabilities that live in and/or are embedded in an organization's ecosystem. Now, to this point about cloud being the first line of defense, let's turn to a story from ETR that came out of our colleague Eric Bradley's insight in a one-on-one he did with a senior IR person at a manufacturing firm. In a piece that ETR published called "Saved by Zscaler", check out this comment. Quote, "As the last layer, we are filtering all the outgoing internet traffic through Zscaler. And when an attacker is already on your network, and they're trying to communicate with the outside to exchange encryption keys, Zscaler is already blocking the traffic. It happened to us. It happened and we were saved by Zscaler." So that's pretty cool.
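The federated SOC model described a moment ago implies a shared data model that lets independently operated SOCs exchange findings. Here is a minimal sketch of what such a normalized event record might look like; the schema and field names are illustrative assumptions, not a published standard.

```python
# A minimal sketch of a shared event model for a federation of SOCs.
# The schema and field names are illustrative assumptions, not a standard.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class SecurityEvent:
    source_soc: str   # which SOC (cloud provider, MSSP, on-prem) observed it
    event_type: str   # e.g. "credential_compromise", "supply_chain"
    severity: int     # 1 (low) .. 5 (critical)
    asset: str        # affected system or identity
    observed_at: str  # ISO-8601 timestamp in UTC

def normalize(raw: dict, source_soc: str) -> SecurityEvent:
    """Map a SOC-local record into the shared model so peers can consume it."""
    return SecurityEvent(
        source_soc=source_soc,
        event_type=raw["type"],
        severity=int(raw.get("sev", 3)),
        asset=raw["asset"],
        observed_at=raw.get("ts", datetime.now(timezone.utc).isoformat()),
    )

# Example: an MSSP-observed event serialized for sharing across the federation.
event = normalize({"type": "credential_compromise", "sev": 4, "asset": "vpn-gw-01"},
                  source_soc="mssp-east")
print(json.dumps(asdict(event), indent=2))
```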
So not only is the cloud the first line of defense, as we sort of depicted in that previous graphic; the Zscaler story is an example where it's also the last line of defense. Now, let's end on what this all means for securing the supercloud. At our Supercloud 22 event last week in our Palo Alto CUBE Studios, we had a session on this very topic: securing the supercloud. Security, in our view, is going to be one of the most important and difficult challenges for the idea of supercloud to become real. We reviewed in last week's "Breaking Analysis" a detailed discussion with Snowflake co-founder and president of products, Benoit Dageville, about how his company approaches security in its data cloud, what we call a superdata cloud. Snowflake doesn't use the term supercloud. They use the term data cloud. But what if you don't have the focus, the engineering depth, and the bankroll that Snowflake has? Does that mean superclouds will only be developed by those companies with deep pockets and enormous resources? Well, that's certainly possible, but on the securing-the-supercloud panel, we had three technical experts: Gee Rittenhouse of Skyhigh Security; Piyush Sharma, the founder of Accurics, which sold to Tenable; and Tony Kueh, the former Head of Product at VMware. Now, John Furrier asked each of them, "What is missing? What's it going to take to secure the supercloud? What has to happen?" Here's what they said. Play the clip. >> This is the final question. We have one minute left. I wish we had more time. This is a great panel. We'll bring you guys back for sure after the event. What one thing needs to happen to unify, or get through the other side of, this fragmentation and the challenges for supercloud? Because remember, the enterprise equation is solve complexity with more complexity. Well, that's not what the market wants. They want simplicity. They want SaaS. They want ease of use. They want infrastructure as code. What has to happen? What do you think, each of you? >> So I can start, and extending the previous conversation, I think we need a consortium. We need a framework that defines that if you really want to operate on supercloud, these are the 10 things that you must follow. It doesn't matter whether you take AWS, Azure, or GCP, or you have all of them, and you will have the on-prem also, which means that it has to follow a pattern, and that pattern is what is required for supercloud, in my opinion. Otherwise, security is going everywhere. It's like they have to fix everything, find everything, and so on and so forth. It's not going to be possible. So they need a framework. They need a consortium, and this consortium needs to be, I think, led by the cloud providers, because they're the ones who have these foundational infrastructure elements, and the security vendors should contribute by providing more of the severe detections or severe findings. So that, in my opinion, should be the model. >> Great, well, thank you. Gee? >> Yeah, I would think it's more along the lines of a business model. We've seen in cloud that scale matters, and once you're big, you get bigger. We haven't seen that coalesce around either a vendor, a business model, or whatnot, to bring all of this and connect it all together yet. So that value proposition in the industry, I think, is missing, but there's elements of it already available. >> I think there needs to be a mindset. If you look, again, history is repeating itself. The internet sort of came together around a set of IETF RFC standards.
Everybody embraced and extended it, right? But still, there was, at least, a baseline, and I think at that time, the largest and most innovative vendors understood that they couldn't do it by themselves, right? And so I think what we need is a mindset where these big guys, like Google, let's take an example. They're not going to win it all, but they can have a substantial share. So how do they collaborate with the ecosystem around a set of standards so that they can bring their differentiation and then embrace everybody together? >> Okay, so Gee's point about a business model being missing is, you know, broadly true, but perhaps Snowflake serves as a business model where they've just gone out and done it, setting, or trying to set, a de facto standard by which data can be shared and monetized. They're certainly setting that standard, and mandating that standard, within the Snowflake ecosystem with its proprietary framework. You know, perhaps that is one answer, but Tony lays out a scenario where there's a collaboration mindset around a set of standards with an ecosystem. You know, intriguing is this idea of a consortium or a framework that Piyush was talking about, and that speaks to the collaboration, or lack thereof, that we spoke of earlier, and his and Tony's proposal that the cloud providers should lead, with the security vendor ecosystem playing a supporting role, is pretty compelling. But can you see AWS and Azure and Google in a kumbaya moment getting together to make that happen? It seems unlikely, but maybe a better partnership between the US government and big tech could be a starting point. Okay, that's it for today. I want to thank the many people who attended Black Hat, reported on it, wrote about it, gave talks, did videos, and some that spoke to me that had attended the event: Becky Bracken, who is the EIC at Dark Reading (they do a phenomenal job), and the entire team at Dark Reading, the news desk there; Mark Arena, whom I mentioned; Garrett O'Hara; Nash Borges; Kelly Jackson, sorry, Kelly Jackson Higgins; Roya Gordon; Robert Lipovsky; Chris Krebs; and many others. Thanks for the great, great commentary and the content that you put out there. And thanks to Alex Myerson, who's on production, and Alex manages the podcasts for us. Ken Schiffman is also in our Marlborough studio as well, outside of Boston. Kristen Martin and Cheryl Knight, they help get the word out on social media and in our newsletters, and Rob Hoff is our Editor-in-Chief at SiliconANGLE and does some great editing and helps with the titles of "Breaking Analysis" quite often. Remember, these episodes are all available as podcasts wherever you listen; just search for "Breaking Analysis Podcast". I publish each one on wikibon.com and siliconangle.com, and you can email me at david.vellante@siliconangle.com, DM me @dvellante, or comment on my LinkedIn posts, and please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis". (upbeat music)
Eric Herzog & Sam Werner, IBM | CUBEconversation
(upbeat music) >> Hello everyone, and welcome to this "CUBE Conversation." My name is Dave Vellante, and you know, containers, they used to be stateless and ephemeral, but they're maturing very rapidly. As cloud native workloads become more functional and go mainstream, persisting and protecting the data that lives inside of containers is becoming more important to organizations. Enterprise capabilities such as high availability, reliability, scalability, and other features are now more fundamental and important, and containers are the linchpin of hybrid cloud, cross-cloud, and edge strategies. Now, fusing these capabilities together across these regions in an abstraction layer that hides the underlying complexity of the infrastructure is where the entire enterprise technology industry is headed. But how do you do that without making endless copies of data and managing versions, not to mention the complexities and costs of doing so? And with me to talk about how IBM thinks about and is solving these challenges are Eric Herzog, who's the Chief Marketing Officer and VP of Global Storage Channels for the IBM Storage Division, and Sam Werner, who is the vice president of offering management and the business line executive for IBM Storage. Guys, great to see you again. Wish we were face to face, but thanks for coming on "theCUBE." >> Great to be here. >> Thanks Dave, as always. >> All right guys, you heard my little spiel there about the problem statement. Eric, maybe you could start us off. I mean, is it on point? >> Yeah, absolutely. What we see is containers are going mainstream. I frame it very similarly to what happened with virtualization, right? It got brought in by the dev team, the test team, the applications team, and then eventually, of course, it became the mainstay. Containers are going through exactly that right now. Brought in by the dev ops people, the software teams. And now it's becoming, again, persistent, real use, with clients that want to deploy a million of them, just the way they historically have deployed a million virtual machines. Now they want a million containers, or 2 million. So now it's going mainstream, and the feature functions that you need once you take it out of the test, sort of play-with stage, to the real production phase, really change the ball game on the features you need, the quality of what you get, and the types of things you need the underlying storage, and the data services that go with that storage, to do in a fully container world. >> So Sam, how'd we get here? I mean, containers have been around forever. You look inside Linux, right? But then they did, as Eric said, go mainstream. But it started out kind of little, experimental; as I said, they're ephemeral, you didn't really need to persist them, but it's changed very quickly. Maybe you could talk to that evolution and how we got here. >> I mean, well, look, this is all about agility, right? It's about enterprises trying to accelerate their innovation. They started off by using virtual machines to try to accelerate access to IT for developers, and developers are constantly out running ahead. They've got to go faster, and they have to deliver new applications. Business lines need to figure out new ways to engage with their customers. Especially now, with the past year we had, it even further accelerated this need to engage with customers in new ways. So it's about being agile. Containers promise or provide a lot of the capabilities you need to be agile.
What enterprises are discovering, a lot of these initiatives are starting within the business lines, and they're building these applications or making these architectural decisions, building dev ops environments on containers. And what they're finding is they're not bringing the infrastructure teams along with them. And they're running into challenges that are inhibiting their ability to achieve the agility they want, because their storage needs aren't keeping up. So this is a big challenge that enterprises face. They want to use containers to build a more agile environment to do things like dev ops, but they need to bring the infrastructure teams along. And that's what we're focused on now. How do you make that agile infrastructure to support these new container worlds? >> Got it, so Eric, you guys made an announcement to directly address these issues. It's kind of a fire hose of innovation. Maybe you could take us through it, and then we can unpack that a little bit. >> Sure, so what we did is on April 27th, we announced IBM Spectrum Fusion. This is a fully container-native, software-defined storage technology that integrates a number of proven, battle-hardened technologies that IBM has been deploying in the enterprise for many years. That includes a global scalable file system that can span edge, core, and cloud seamlessly, with a single copy of the data. So no more data silos, and no more 12 copies of the data, which of course drive up CapEx and OpEx. Spectrum Fusion reduces that and makes it easier to manage. Cuts the cost from a CapEx perspective and cuts the cost from an OpEx perspective. By being fully container native, it's ready to go for the container-centric world and can span all types of areas. So what we've done is create a storage foundation, which is what you need at the bottom. So things like the single global namespace, single accessibility, we have local caching. So with your edge, core, cloud, regardless of where the data is, you think the data's right with you, even if it physically is not. So that allows people to work on it. We have file locking and other technologies to ensure that the data is always good. And then of course we've imbued it with the HA and disaster recovery, the backup and restore technology, which we've had for years and have now made fully container native. So Spectrum Fusion basically takes several elements of IBM's existing portfolio, has made them container native, and brought them together into a single piece of software. And we'll provide that both as a software-defined storage technology early in 2022, and our first pass will be as a hyperconverged appliance, which will be available next quarter, in Q3 of 2021. That of course means it'll come with compute, it'll come with storage, it'll come with a rack even, it'll come with networking. And because we can preload everything for the end users or for our business partners, it will also include Kubernetes, Red Hat OpenShift, and Red Hat's virtualization technology, all in one simple package, all ease of use, and a single management GUI to manage everything, both the software side and the physical infrastructure that's part of the hyperconverged system-level technology. >> So maybe you can help us understand the architecture and maybe the prevailing ways in which people approach container storage. What's the stack look like? And how have you guys approached it? >> Yeah, that's a great question. Really, there's three layers that we look at when we talk about container-native storage.
It starts with the storage foundation, which is the layer that actually lays the data out onto media and does it in an efficient way and makes that data available where it's needed. So that's the core of it. And the quality of your storage services above that depends on the quality of the foundation that you start with. Then you go up to the storage services layer. This is where you bring in capabilities like HA and DR. People take this for granted, I think, as they move to containers. We're talking about moving mission-critical applications now into a container and hybrid cloud world. How do you actually achieve the same levels of high availability you did in the past? If you look at what large enterprises do, they run three-site, four-site replication of their data with HyperSwap, and they can ensure high availability. How do you bring that into a Kubernetes environment? Are you ready to do that? We talk about how only 20% of applications have really moved into a hybrid cloud world. The thing that's inhibiting the other 80% is these types of challenges, okay? So the storage services include HA, DR, data protection, data governance, data discovery. You talked about making multiple copies of data creating complexity; it also creates risk and security exposures. If you have multiple copies of data, if you needed data to be available in the cloud, you're making a copy there. How do you keep track of that? How do you destroy the copy when you're done with it? How do you keep track of governance and GDPR, right? So if I have to delete data about a person, how do I delete it everywhere? So there's a lot of these different challenges. These are the storage services. So we talk about a storage services layer. So, layer one, data foundation; layer two, storage services; and then there needs to be connection into the application runtime. There has to be application awareness to do things like high availability and application-consistent backup and recovery. So then you have to create the connection. And so in our case, we're focused on OpenShift, right? When we talk about Kubernetes, how do you create the knowledge between layer two, the storage services, and layer three, the application services? >> And so this is your three-layer cake. And then as far as the policies that I want to inject, you've got an API out and entries in, I can use whatever policy engine I want. How does that work? >> So we're creating consistent sets of APIs to bring those storage services up into the application runtime. We in IBM have things like IBM Cloud Satellite, which brings the IBM public cloud experience to your data center and gives you a hybrid cloud, or into other public cloud environments, giving you one hybrid cloud management experience. We'll integrate there, giving you that consistent set of storage services within an IBM Cloud Satellite. We're also working with Red Hat on their Advanced Cluster Management, also known as RHACM, to create multi-cluster management of your Kubernetes environment and give that consistent experience. Again, one common set of APIs. >> So the appliance comes first? Is that a no? Okay, so is that just time to market, or is there a sort of enduring demand for appliances? Some customers, you know, they want that. Maybe you could explain that strategy. >> Yeah, so first let me take it back a second. Look at our existing portfolio. Our award-winning products are both software-defined and system-based. So, for example, Spectrum Virtualize comes on our FlashSystem.
Spectrum Scale comes on our Elastic Storage System. And we've had this model where we provide the exact same software, both on an array or as a standalone piece of software. This is unique in the storage industry. When you look at our competitors, when they've got something that's embedded in their array, their array manager, if you will, that's not what they'll try to sell you as software-defined storage. And of course, many of them don't offer software-defined storage in any way, shape, or form. So we've done both. So with Spectrum Fusion, we'll have a hyperconverged configuration, which will be available in Q3. We'll have a software-defined configuration, which will be available at the very beginning of 2022. So we wanted to get out of this the market feedback from our clients and feedback from our business partners. By doing a container-native HCI technology, we're way ahead. We're going to where the puck is. We're throwing the ball ahead of the wide receiver. If you're a soccer fan, we're making sure that the mid guy got it to the forward ahead of time so you could kick the goal right in. That's what we're doing. Other technologies lead with virtualization, which is great, but virtualization is kind of old hat, right? VMware and other virtualization layers have been around for 20 years now. Containers is where the world is going. And by the way, we'll support everything. We still have customers in certain worlds that are using bare metal, guess what? We work fine with that. We work fine with virtual, as we have a tight integration with both Hyper-V and VMware. So some customers will still do that. And containers is the new wave. So with Spectrum Fusion, we are riding the wave, not fighting the wave, and that way we can meet all the needs, right? Bare metal, virtual environments, and container environments, in a way that is all based on the end users' applications, workloads, and use cases. What goes where, and IBM Storage can provide all of it. So we'll give them two methods of consumption by early next year. And we started with hyperconverged first because, A, we felt we had a lead, truly a lead. Other people are leading with virtualization. We're leading with OpenShift and containers. We're the first full container-native, OpenShift ground-up based hyperconverged of anyone in the industry, versus somebody who's done VMware or some other virtualization layer and then sort of glommed on containers as an afterthought. We're going to where the market is moving, not to where the market has been. >> So just to follow up on that. You kind of, you've got the sort of Switzerland DNA. And it's not just OpenShift and Red Hat and the open source ethos. I mean, it goes all the way back to SAN Volume Controller back in the day, where you could virtualize anybody's storage. How is that carrying through to this announcement? >> So Spectrum Fusion is doing the same thing. Spectrum Fusion, which has many key elements brought in from our history with Spectrum Scale, supports not just IBM storage; for example, EMC Isilon NFS. Fusion will support Spectrum Scale. Fusion will support our Elastic Storage System. Fusion will support NetApp filers as well. Fusion will support IBM Cloud Object Storage, both as software-defined storage or as an array technology, and Amazon S3 object stores, and any other object storage vendor who's compliant with S3. All of those can be part of the global namespace, scalable file system. We can bring in, for example, object data without making a duplicate copy.
The normal way to do that is you make a duplicate copy. So you had a copy in the object store; you make a copy to bring that into the file system. Well, guess what, we don't have to do that. So again, cutting CapEx and OpEx, and ease of management. But just as we do with our FlashSystem products and our Spectrum Virtualize and the SAN Volume Controller, we support over 550 storage arrays that are not ours, that are our competitors'. With Spectrum Fusion, we've done the same thing: Fusion, Scale, the IBM ESS, IBM Cloud Object Storage, Amazon S3 object stores, as well as other S3-compliant object stores, EMC Isilon NFS, and NFS from NetApp. And by the way, we can do the discovery model as well, not just integration in the system. So we've made sure that we really do protect existing investments. And we try to eliminate, particularly with the discovery capability, you've got AI or analytics software connecting with the API into the discovery technology. You don't have to traverse and try to find things, because the discovery will create real-time metadata cataloging and indexing, not just of our storage, but the other storage I mentioned, which is the competition's. So talk about making it easier to use, particularly for people who are heterogeneous in their storage environment, which is pretty much the bulk of the global Fortune 1500, for sure. And so we're allowing them to use multiple vendors but derive real value with Spectrum Fusion, and get all the capabilities of Spectrum Fusion and all the advantages of the enterprise data services, not just for our own products, but for the other products as well that aren't ours. >> So Sam, we understand the downside of copies, but then, so you're not doing multiple copies. How do you deal with latency? What's the secret sauce here? Is it the file system? Is there other magic in here? >> Yeah, that's a great question. And I'll build a little bit off of what Eric said. But look, one of the really great and unique things about Spectrum Scale is its ability to consume any storage. And we can actually allow you to bring in data sets from where they are. They could have originated in object storage; we'll cache it into the file system. It can be on any block storage. It can literally be on any storage you can imagine, as long as you can integrate a file system with it. And as you know, most applications run on top of a file system, so it naturally fits into your application stack. Spectrum Scale uniquely is a globally parallel file system. There are not very many of them in the world, and there's none that can achieve what Spectrum Scale can do. We have customers running in the exabytes of data, and the performance improves with scale. So you can actually deploy Spectrum Scale on-prem, build out an environment of it, consuming whatever storage you have. Then you can go into AWS or IBM Cloud or Azure, deploy an instance of it, and it will now extend your file system into that cloud. Or you can deploy it at the edge, and it'll extend your file system to that edge. This gives you the exact same set of files and visibility, and we'll cache in only what's needed. Normally, you would have to make a copy of data into the other environment, and then you'd have to deal with that copy later. Let's say you were doing a cloud-bursting use case. Let's look at that as an example, to make this real. You're running an application on-prem. You want to spin up more compute in the cloud for your AI. Normally, you'd have to make a copy of the data. You'd run your AI.
Then you'd have to figure out what to do with that data. Do you copy some of it back? Do we sync them? Do you delete it? What do you do? With Spectrum Scale, we'll just automatically cache in whatever you need. It'll run there, and when you decide to spin it down, your copy is still on-prem. You know, no data is lost. We can actually deal with all of those scenarios for you. And then if you look at what's happening at the edge, a lot of, say, video surveillance, data pouring in, looking at the manufacturing floor, looking for defects. You can run AI right at the edge, make it available in the cloud, make that data available in your data center. Again, one file system going across all. And that's something unique in our data foundation, built on Spectrum Scale. >> So there's some metadata magic in there as well, and that intelligence based on location. And okay, so you're smart enough to know where the data lives. What's the sweet spot for this, Eric? Are there any particular use cases or industries that we should be focused on, or is it across the board? >> Sure, so first let's talk about the industries. We see certain industries going to containers more quickly than other industries. So first is financial services. We see it happening there. Manufacturing; Sam already talked about AI-based manufacturing platforms. We actually have a couple of clients right now doing autonomous driving software with us on containers, even before Spectrum Fusion, with Spectrum Scale. We see the public sector, of course, and healthcare, and in healthcare, don't just think delivery. At IBM, that includes the research guys, so the genomic companies, the biotech companies, the drug companies are all included in that. And then of course, retail, both on-prem and off-prem. So those are sort of the industries. Then we see, from an application workload perspective, basically AI, analytics, and big data applications or workloads are the key things that Spectrum Fusion helps you with, because of its file system. It's high performance. And those applications are tending to spread across core, edge, and cloud. So those applications are spreading out. They're becoming broader than just running in the data center. And by the way, if they want to run it just in the data center, that's fine. Or, perfect example, we had a giant global auto manufacturer. They've got factories all over. And if you think there aren't compute resources in every factory, there are, because those factories, I just saw an article, actually, those factories cost about a billion dollars to build, a billion. So they've got their own IT, and now it's connected to their core data center as well. So that's a perfect example of that enterprise edge where Spectrum Fusion would be an ideal solution, whether they did it as software-defined only or, of course, when you've got a billion-dollar factory, just to make it, let alone produce the autos or whatever you're producing. Silicon, for example, those fabs all cost a billion. That's where the enterprise edge fits in very well with Spectrum Fusion. >> So for those industries, what's driving the adoption of containers? Is it just that they want to modernize? Is it because they're doing some of those workloads that you mentioned, or is it the edge? Like, you mentioned manufacturing; I could see that potentially being an edge driver. >> Well, it's a little bit of all of those, Dave. For example, virtualization came out, and virtualization offered advantages over bare metal, okay? Now containerization has come out, and containerization is offering advantages over virtualization.
The good thing at IBM is we know we can support all three. And we know, again, in the global Fortune 2000, 1500, they're probably going to run all three, based on the application, workload, or use case. And our storage is really good at bare metal, very good at virtualization environments, and now, with Spectrum Fusion, our container-native is outstanding for container-based environments. So we see that these big companies will probably have all three, and IBM Storage is one of the few vendors, if not the only vendor, that can adroitly support all three of those various workload types. So that's why we see this as a huge advantage. And again, the market is going to containers. I'm a native Californian. You don't fight the wave, you ride the wave, and the wave is containers, and we're riding that wave. >> If you don't ride the wave, you become driftwood, as Pat Gelsinger would say. >> And that is true, another native Californian. I'm a whole boss. >> So okay, so, I wonder, Sam, I sort of hinted up front in my little narrative there, but the way we see this is you've got on-prem, hybrid, you've got public clouds, cross-cloud, moving to the edge. OpenShift, as I said, is the linchpin to enabling some of those. And what we see is this layer that abstracts the complexity, hides the underlying complexity of the infrastructure so that it becomes kind of an implementation detail. Eric talked about skating to where the puck is going, or whatever sports analogy you want to use. Is that where the puck is headed? >> Yeah, I mean, look, the bottom line is you have to remove the complexity for the developers. Again, the name of the game here is all about agility. You asked why these industries are implementing containers. It's about accelerating their innovation and their services for their customers. It's about leveraging AI to gain better insights about their customers, delivering what they want, and improving their experience. So if it's all about agility, developers don't want to wait around for infrastructure. You need to automate it as much as possible. So it's about building infrastructure that's automated, which requires consistent APIs, and it requires abstracting out the complexity of things like HA and DR. You don't want every application owner to have to figure out how to implement that. You want to make those storage services available and easy for a developer to implement and integrate into what they're doing. You want to ensure security across everything you do, as you bring more and more of your data, of your information about your customers, into these container worlds. You've got to have security rock solid. You can't leave any exposures there, and you can't afford downtime. There are increasing threats from things like ransomware. You don't see it in the news every day, but it happens every single day. So how do you make sure you can recover when an event happens to you? So yes, you need to build an abstracted layer of storage services, and you need to make it simply available to the developers in these dev ops environments. And that's what we're doing with Spectrum Fusion. We're taking an, I think, extremely unique and one-of-a-kind storage foundation with Spectrum Scale that gives you a single namespace globally, and we're building onto it an incredible set of storage services, making it extremely simple to deploy enterprise-class container applications. >> So what's the bottom line business impact? I mean, how does this change things?
I mean, Sam, you, I think, articulated it very well. It's all about serving the developers, versus, you know, a storage admin provisioning a LUN. So how does this change my organization, my business? What's the impact there? >> I'd mention one other point that we talk about at IBM a lot, which is the AI ladder. It's about how do you take all of this information you have and be able to use it to build new insights, to give your company an advantage. An incumbent in an industry shouldn't be able to be disrupted if they're able to leverage all the data they have about the industry and their customers. But in order to do that, you have to be able to get to a single source of data and be able to build it into the fabric of your business operations, so that all decisions you're making in your company, all services you deliver to your customers, are built on that data foundation and information. And the only way to do that, and infuse it into your culture, is to make this stuff real time. And the only way to do that is to build out a containerized application environment that has access to real-time data. The ultimate outcome, sorry, I know you asked for business results, is that you will, in real time, understand your clients, understand your industry, and deliver the best possible services. And the absolute business outcome is you will continue to gain market share in your environment and grow revenue. I mean, that's the outcome every business wants. >> Yeah, it's all about speed. Everybody was forced into digital transformation last year. It was sort of rushed into and compressed, and now they get some time to do it right. And so modernizing apps, containers, dev ops, developer-led sort of initiatives are really key to modernization. All right, Eric, we're out of time, but give us the bottom-line summary. Actually, we didn't talk about the 3200 yet. Maybe you could give us a little insight on that before we close. >> Sure, so in addition to what we're doing with Fusion, we also introduced a new Elastic Storage System, the 3200, and it's all flash. It gets 80 gigs a second sustained at the node level, and we can cluster them infinitely. So, for example, if I've got 10 of them, I'm delivering 800 gigabytes a second sustained. And of course, AI, big data, and analytic workloads are extremely, extremely sensitive to bandwidth and/or data transfer rate. That's what they need to deliver their application base properly. It comes with Spectrum Scale built in, so you get the advantage of Spectrum Scale. We talked a lot about Spectrum Scale because it is, if you will, one of the three fathers of Spectrum Fusion. So it's ideal with its highly parallel file system. It's used all over in high-performance computing and supercomputing, in drug research, in healthcare, in finance. Probably about 80% of the world's largest banks use Spectrum Scale already for AI, big data, and analytics. So the new 3200 is an all-flash version, twice as fast as the older version, with all the benefits of Spectrum Scale, including the ability to seamlessly integrate into existing Spectrum Scale or ESS deployments. And when Fusion comes out, you'll be able to have Fusion, and you could also add a 3200 to it if you want to, because of the capability of our global namespace and our single file system across edge, core, and cloud. So that's the 3200 in a nutshell, Dave. >> All right, give us the bottom line, Eric. And we've got to go, what's the bumper sticker?
>> Yeah, the bumper sticker is, you've got to ride the wave of containers, and IBM Storage is the company that can take you there, so that you win the big surfing contest and get the big prize. >> Eric and Sam, thanks so much, guys. It's great to see you, and I miss you guys. Hopefully we'll get together soon. So get your jabs, and we'll have a beer. >> All right. >> All right, thanks, Dave. >> Nice talking to you. >> All right, thank you for watching, everybody. This is Dave Vellante for "theCUBE." We'll see you next time. (upbeat music)
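To ground Sam's point about consistent APIs connecting the storage services layer to the application runtime, here is a minimal sketch of the developer-facing side of that abstraction on Kubernetes/OpenShift, using the official Kubernetes Python client. The StorageClass name below is a hypothetical placeholder, not a documented IBM class name; in practice it would map to whatever backend the platform team exposes.

```python
# A sketch of how an application requests storage through the standard
# Kubernetes API: the class name, not the application, decides which backend
# provisions the volume. Requires the official 'kubernetes' Python client
# and a reachable cluster; the StorageClass name is a hypothetical placeholder.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="analytics-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],    # shared access across pods
        storage_class_name="global-file",  # hypothetical class backed by a global file system
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC created; pods mount it by name, unaware of the backing array or cloud.")
```

The point of the pattern is that the application asks for capacity and capabilities by class name, while the provisioning backend, whether an array, software-defined storage, or a cloud service, stays an implementation detail, which is the abstraction the conversation describes.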
Empowerment Through Inclusion | Beyond.2020 Digital
>> Yeah, yeah.
This is what we might call the techno dystopian version of the story and this is what Hollywood loves to sell us in the form of movies like The Matrix or Terminator. The other version on your right is the techno utopian story that technologies automation. The robots as a shorthand, are going to save humanity. They're gonna make everything more efficient, more equitable. And in this case, on the surface, he seemed like opposing narratives right there, telling us different stories. At least they have different endpoints. But when you pull back the screen and look a little bit more closely, you see that they share an underlying logic that technology is in the driver's seat and that human beings that social society can just respond to what's happening. But we don't really have a say in what technologies air designed and so to move beyond techno determinism the notion that technology is in the driver's seat. We have to put the human agents and agencies back into the story, the protagonists, and think carefully about what the human desires worldviews, values, assumptions are that animate the production of technology. And so we have to put the humans behind the screen back into view. And so that's a very first step and when we do that, we see, as was already mentioned, that it's a very homogeneous group right now in terms of who gets the power and the resource is to produce the digital and physical infrastructure that everyone else has to live with. And so, as a first step, we need to think about how to create more participation of those who are working behind the scenes to design technology now to dig a little more a deeper into this, I want to offer a kind of low tech example before we get to the more hi tech ones. So what you see in front of you here is a simple park bench public bench. It's located in Berkeley, California, which is where I went to graduate school and on this particular visit I was living in Boston, and so I was back in California. It was February. It was freezing where I was coming from, and so I wanted to take a few minutes in between meetings to just lay out in the sun and soak in some vitamin D, and I quickly realized, actually, I couldn't lay down on this bench because of the way it had been designed with these arm rests at intermittent intervals. And so here I thought. Okay, the the armrest have, ah functional reason why they're there. I mean, you could literally rest your elbows there or, um, you know, it can create a little bit of privacy of someone sitting there that you don't know. When I was nine months pregnant, it could help me get up and down or for the elderly, the same thing. So it has a lot of functional reasons, but I also thought about the fact that it prevents people who are homeless from sleeping on the bench. And this is the Bay area that we were talking about where, in fact, the tech boom has gone hand in hand with a housing crisis. Those things have grown in tandem. So innovation has grown within equity because we haven't thought carefully about how to address the social context in which technology grows and blossoms. And so I thought, Okay, this crisis is growing in this area, and so perhaps this is a deliberate attempt to make sure that people don't sleep on the benches by the way that they're designed and where the where they're implemented and So this is what we might call structural inequity. By the way something is designed. It has certain effects that exclude or harm different people. 
And so it may not necessarily be the intent, but that's the effect. And I did a little digging, and I found that, in fact, it's a global phenomenon, this thing that architects call hostile architecture. I found single-occupancy benches in Helsinki, so only one booty at a time, no laying down there. I found caged benches in France. And in this particular town, what's interesting here is that the mayor put these benches out in this little shopping plaza, and within 24 hours the people in the town rallied together and had them removed. So we see here that just because we have discriminatory design in our public space doesn't mean we have to live with it. We can actually work together to ensure that our public space reflects our better values. But I think my favorite example of all is the meter bench. In this case, this bench is designed with spikes in it, and to get the spikes to retreat into the bench, you have to feed the meter, you have to put some coins in, and I think it buys you about 15 or 20 minutes. Then the spikes come back up. And so you'll be happy to know that in this case, this was designed by a German artist to get people to think critically about issues of design, not just the design of physical space but the design of all kinds of things, public policies. And so we can think about how our public life in general is metered, that it serves those that can pay the price, and others are excluded or harmed, whether we're talking about education or health care. And the meter bench also presents something interesting for those of us who care about technology: it creates a technical fix for a social problem. In fact, it started out as art, but some municipalities in different parts of the world have actually adopted this in their public spaces, in their parks, in order to deter so-called loiterers from using that space. And so, by a technical fix, we mean something that creates a short-term effect, right? It gets people who may want to sleep on it out of sight. They're unable to use it, but it doesn't address the underlying problems that create the need to sleep outside in the first place. And so, in addition to techno-determinism, we have to think critically about technical fixes that don't address the underlying issues that technology is meant to solve. And so this is part of a broader issue of discriminatory design, and we can apply the bench metaphor to all kinds of things that we work with or that we create. And the question we really have to continuously ask ourselves is: what values are we building in to the physical and digital infrastructures around us? What are the spikes that we may unwittingly put into place? Or perhaps we didn't create the spikes. Perhaps we started a new job or a new position, and someone hands us something. This is the way things have always been done. So we inherit the spiked bench. What is our responsibility when we notice that it's creating these kinds of harms or exclusions, or technical fixes that are bypassing the underlying problem? What is our responsibility? All of this came to a head in the context of financial technologies. I don't know how many of you remember these high-profile cases of tech insiders and CEOs who applied for the Apple Card and, in one case, a husband and wife applied and the husband received a much higher limit, almost 20 times the limit of his wife, even though they shared bank accounts and lived in a common law state.
And so the question was not only the fact that the husband was receiving a much better interest rate and a higher limit, but also that there was no mechanism for the individuals involved to dispute what was happening. They didn't even know what the factors were on which they were being judged that were creating this form of discrimination. So in terms of financial technologies, it's not simply the outcome that's the issue, or that could be discriminatory, but the process that black-boxes all of the decision making, that makes it so that consumers and the general public have no way to question it, no way to understand how they're being judged adversely. And so it's the process, not only the product, that we have to care a lot about. And so the case of the Apple Card is part of a much broader phenomenon of racist and sexist robots. This is how the headlines framed it a few years ago, and I was so interested in this framing, because there was a first wave of stories that seemed to be shocked at the prospect that technology is not neutral. Then there was a second wave of stories that seemed less surprised: well, of course, technology inherits its creators' biases. And now I think we've entered a phase of attempts to override and address the default settings of so-called racist and sexist robots, for better or worse. And here robots is just a kind of shorthand for the way people are talking about automation and emerging technologies more broadly. And so as I was encountering these headlines, I was thinking about how these are not problems simply brought on by machine learning or AI. They're not all brand new, and so I wanted to contribute to the conversation a kind of larger context and a longer history for us to think carefully about the social dimensions of technology. And so I developed a concept called the New Jim Code, which plays on the phrase Jim Crow, which is the way that the regime of white supremacy and inequality in this country was defined in a previous era, and I wanted us to think about how that legacy continues to haunt the present, how we might be coding bias into emerging technologies, and the danger being that we imagine those technologies to be objective. And so this gives us a language to be able to name this phenomenon so that we can address it and change it. Under this larger umbrella of the New Jim Code are four distinct ways that this phenomenon takes shape, from the more obvious engineered inequity, those are the kinds of tech-mediated inequalities that we can generally see coming, they're kind of obvious, but then as we go down the line we see it becomes harder to detect. It's happening in our own backyards, it's happening around us, and we don't really have a view into the black box, and so it becomes more insidious. And so in the remaining couple minutes, I'm just going to give you a taste of the last three of these, and then move toward a conclusion so that we can start chatting. So when it comes to default discrimination, this is the way that social inequalities become embedded in emerging technologies because the designers of these technologies aren't thinking carefully about history and sociology. A great example of this came to the headlines last fall, when it was found that a widely used healthcare algorithm affecting millions of patients was discriminating against black patients. And so what's especially important to note here is that this healthcare algorithm does not explicitly take note of race.
That is to say, it is race neutral. By using cost to predict healthcare needs, this digital triaging system unwittingly reproduces health disparities, because, on average, black people have incurred fewer costs for a variety of reasons, including structural inequality. So in my review of this study by Obermeyer and colleagues, I want to draw attention to how indifference to social reality can be even more harmful than malicious intent. It doesn't have to be the intent of the designers to create this effect, and so we have to look carefully at how indifference is operating and how race neutrality can be a deadly force. When we move on to the next iteration of the New Jim Code, coded exposure, there's a tension, because on the one hand, you see this image where the darker-skinned individual is not being detected by the facial recognition system, right, on the camera or on the computer. And so coded exposure names this tension between wanting to be seen, included, and recognized, whether it's in facial recognition or in recommendation systems or in tailored advertising, and the opposite of that: the tension is with when you're over-included, when you're surveilled, when you're too centered. And so we should note that it's not simply being left out that's the problem, but being included in harmful ways. And so I want us to think carefully about the rhetoric of inclusion and understand that inclusion is not simply an endpoint. It's a process, and it is possible to include people in harmful processes. And so we want to ensure that the process is not harmful for it to really be effective. The last iteration of the New Jim Code, and that means the most insidious, let's say, is technologies that are touted as helping us address bias, so they're not simply including people, but they're actively working to address bias. And so in this case, there are a lot of different companies that are using AI to create hiring software and hiring algorithms, including this one, HireVue. And the idea is that there's a lot that AI can keep track of that human beings might miss, and so the software can make data-driven talent decisions. After all, the problem of employment discrimination is widespread and well documented. So the logic goes: wouldn't this be even more reason to outsource decisions to AI? Well, let's think about this carefully. And this is the logic of the idea of techno-benevolence: trying to do good without fully reckoning with how technology can reproduce inequalities. So some colleagues of mine at Princeton tested a natural language processing algorithm, looking to see whether it exhibited the same tendencies that psychologists have documented among humans. And what they found was that, in fact, the algorithm was associating black names with negative words and white names with pleasant-sounding words. And so this particular audit builds on a classic study done around 2003, before all of the emerging technologies were on the scene, where two University of Chicago economists sent out thousands of resumes to employers in Boston and Chicago, and all they did was change the names on those resumes. All of the other work history and education were the same, and then they waited to see who would get called back. And the fictional applicants with white-sounding names received 50% more callbacks than the black applicants. So if you're presented with that study, you might be tempted to say, well, let's let technology handle it, since humans are so biased. But my colleagues here in computer science found that this natural language processing algorithm actually reproduced those same associations with black and white names.
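To make that kind of audit concrete, here is a minimal sketch of a word-embedding association test of the sort the Princeton study reports. The names, word lists, and vector values below are toy numbers invented for illustration, not the study's data; a real audit would load pretrained embeddings such as GloVe or word2vec and average over many names and attribute words.

```python
import numpy as np

# Toy stand-ins for learned word embeddings; all numbers are made up.
vectors = {
    "emily":    np.array([0.9, 0.1]),
    "jamal":    np.array([0.1, 0.9]),
    "pleasant": np.array([0.8, 0.2]),
    "negative": np.array([0.2, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: how closely two word vectors point the same way.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The audit compares how strongly each name associates with pleasant
# versus negative words; a systematic gap across many names is the
# bias signal such studies report.
for name in ("emily", "jamal"):
    gap = cosine(vectors[name], vectors["pleasant"]) - cosine(vectors[name], vectors["negative"])
    print(f"{name}: pleasant-minus-negative association = {gap:+.3f}")
```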
So, too, with gender-coded words and names, as Amazon learned a couple of years ago when its own hiring algorithm was found discriminating against women. Nevertheless, it should be clear by now why technical fixes that claim to bypass human biases are so desirable. If only there were a way to slay centuries of racist and sexist demons with a social justice bot. Beyond desirable, more like magical. Magical for employers, perhaps, looking to streamline the grueling work of recruitment, but a curse for many job seekers. As this headline puts it, your next interview could be with a racist bot, bringing us back to that problem space we started with just a few minutes ago. So it's worth noting that job seekers are already developing ways to subvert the system by trading answers to employers' tests and creating fake applications as informal audits of their own. In terms of a more collective response, there's a federation of European trade unions called UNI Global that's developed a charter of digital rights for workers that touches on automated and AI-based decisions to be included in bargaining agreements. And so this is one of many efforts to change the ecosystem, to change the context in which technology is being deployed, to ensure more protections and more rights for everyday people. In the US, there's the algorithmic accountability bill that's been presented, and it's one effort to create some more protections around this ubiquity of automated decisions, and I think we should all be calling for more public accountability when it comes to the widespread use of automated decisions. Another development that keeps me somewhat hopeful is that tech workers themselves are increasingly speaking out against the most egregious forms of corporate collusion with state-sanctioned racism. And to get a taste of that, I encourage you to check out the hashtag #TechWontBuildIt, among other statements that they have made, walking out and petitioning their companies. As one group said: as the people who build the technologies that Microsoft profits from, we refuse to be complicit. In terms of education, which is my own ground zero, it's a place where we can grow a more historically and socially literate approach to tech design. And this is just one resource that you all can download, developed by some wonderful colleagues at the Data and Society Research Institute in New York, and the goal of this intervention is threefold: to develop an intellectual understanding of how structural racism operates in algorithms, social media platforms, and technologies not yet developed; an emotional intelligence concerning how to resolve racially stressful situations within organizations; and a commitment to take action to reduce harms to communities of color. And so as a final way to think about why these things are so important, I want to offer a couple of last provocations. The first is for us to think anew about what actually is deep learning when it comes to computation. I want to suggest that computational depth, when it comes to AI systems, without historical or social depth, is actually superficial learning. And so we need to have a much more interdisciplinary, integrated approach to knowledge production and to observing and understanding patterns, one that doesn't simply rely on one discipline in order to map reality.
The last provocation is this. If, as I suggested at the start, inequity is woven into the very fabric of our society, it's built into the design of our policies, our physical infrastructures, and now even our digital infrastructures, that means that each twist, coil, and code is a chance for us to weave new patterns, practices, and politics. The vastness of the problems that we're up against will be their undoing once we accept that we're pattern makers. So what does that look like? It looks like refusing color blindness as an antidote to tech-mediated discrimination; rather than refusing to see difference, let's take stock of how the training data and the models that we're creating have these built-in decisions from the past that have often been discriminatory. It means actually thinking about the underside of inclusion, which can be targeting, and asking how we can create a more participatory rather than predatory form of inclusion. And ultimately, it also means owning our own power in these systems so that we can change the patterns of the past. If we inherit a spiked bench, that doesn't mean that we need to continue using it. We can work together to design more just and equitable technologies. So with that, I look forward to our conversation. >>Thank you, Ruha. That was, well, I expected it to be amazing, as I have been devouring your book in the last few weeks, so I knew it would be impactful. I know we will never think about park benches again, or how they're art. And you laid down the gauntlet. Oh, my goodness, that tech won't build it. Well, I would say if the ThoughtSpot team has any say, we absolutely will build it, and we'll continue to educate ourselves. So you made a few points, that it doesn't matter if it was intentional or not, so unintentional has as big an impact. How do we address that? Does it just start with awareness building, or how do we address that? >>Yeah, so it's important, I mean, it's important to have good intentions. And so, by saying that intentions are not the end-all be-all, it doesn't mean that we're throwing intentions out, but it is saying that there are so many things that happen in the world that happen unwittingly, without someone sitting down to make it good or bad. And so this goes on both ends. The analogy that I often use is: if I'm parked outside and I see someone breaking into my car, I don't run out there and say, now, do you feel in your heart that you're a thief? Do you intend to be a thief? I don't go and grill their identity or their intention to harm me; I look at the effect of their actions. And so in terms of the teams that we work on, I think one of the things that we can do, again, is to have a range of perspectives around the table that can think ahead, like chess, about how things might play out. But also, once we've created something and it's entered into the world, we need to have regular audits and check-ins to see when it's going off track. Just because we intended to do good when we set it out, when it goes sideways we need mechanisms, formal mechanisms that are actually built into the process, that can get it back on track or even remove it entirely if we find harm. And we see that with different products, right, that get recalled. And so we need that to be formalized, rather than putting the burden on the people that are using these things to have to raise the awareness, or have to come to us, like with the Apple Card, right, to say this thing is not fair.
Why don't we have that built into the process to begin with? >>Yeah, so a couple things. My dad used to say the road to hell is paved with good intentions. >>Yes. And in fact, in the book, I say the road to hell is paved with technical fixes. So me and your dad are on the same page. >>And I love your point about bringing different perspectives. And I often say this is why diversity is not just about business benefits. It's your best recipe for identifying the early biases in the data sets and in the way we build things. And yet it's such a thorny problem to address, bringing new people into tech. So in the absence of that, what do we do? Is it outside review boards? Or do you think regulation is the best bet, as you mentioned a few? >>Yeah, we really need a combination of things. I mean, on the one hand, we need something like a do-no-harm ethos, like what we see in medicine, so that it becomes part of the fabric and the culture of organizations, so that those values, the social values, have equal or more weight than the other kinds of economic imperatives. Right, so we have to have a reckoning in house, but we can't leave it to people who are designing, and who have a vested interest in getting things to market, to regulate themselves. We also need independent accountability. So we need a combination of this. And going back to your point about the diversity on teams, one really cautionary example comes to mind from last fall, when Google's new Pixel 4 phone was about to come out, and it had a kind of facial recognition component to it, so that you could open the phone. And they had been following the research that shows that facial recognition systems don't work as well on darker-skinned individuals, right? And so they wanted to get a head start; they wanted to prevent that, right? So they had good intentions. They didn't want their phone to block darker-skinned users from using it. And so what they did was, they were trying to diversify their training data so that the system would work better, and they hired contract workers, and they told these contract workers to engage black people, tell them to use the phone, play with some kind of app, take a selfie, so that their faces would populate the training set. But they did not tell the people what their faces were going to be used for; they withheld some information. They didn't tell them it was being used for the facial recognition system, and the contract workers went to the media and said, something's not right. Why are we being told to withhold information? And in fact, they were told, going back to the park bench example, to give people who are homeless $5 gift cards to play with the phone and get their images in this training set. And so this all came to light, and Google withdrew this research and this process, because it was so in line with a long history of using marginalized, the most vulnerable, people and populations to make technologies better, when those technologies are likely going to harm them in terms of surveillance and other things. And so I bring this up here to go back to our question of how the composition of teams might help address this. I think often about who was in that room making that decision about creating this process with the contract workers and the selfies and so on.
Perhaps it was a racially homogeneous group where people weren't really sensitive to how this could be experienced or seen. But maybe it was a racially diverse group, and perhaps, when it comes to the history of harm in science and technology, maybe they didn't have that disciplinary knowledge. And so it could also be a function of what people knew in the room, how they could play that chess game in their head and think about how this is going to play out: it's not going to play out very well. And the last thing is that maybe there was disciplinary diversity, maybe there was racial and ethnic diversity, but maybe the workplace culture made it so those people didn't feel like they could speak up, right? So you could have all the diversity in the world, but if you don't create a context in which people who have those insights feel like they can speak up and be respected and heard, then you're basically sitting on a reservoir of resources and you're not tapping into it to do right by your company. And so it's one of those cautionary tales, I think, that we can all learn from, to try to create an environment where we can elicit those insights from our team and our coworkers. >>Your point about the culture, this is really inclusion, very different from just diversity in thought. So I'd like to end on a hopeful note, a prescriptive note. You have some of the most influential data and analytics leaders and experts attending virtually here. So if you imagine the right way to use data, and housing is a great example, mortgage lending has not been equitable for African Americans in particular, what does the future hold when we've gotten better at this, more aware of this? >>Thank you for that question. And so, you know, there are a few things that come to mind for me. One, I think the mortgage environment is really the perfect sort of context in which to think through both the problem and where the solutions may lie. One of the most powerful ways I see data being used by different organizations and groups is to shine a light on past and ongoing inequities. And so oftentimes, when people see the bias, let's say when it came to the hiring algorithm or the language audit, when they see the names associated with negative or positive words, that tends to have a bigger impact, because they think, well, wow, the technology is reflecting these biases; it really must be true. Never mind that people might have been raising the issues in other ways before. But I think one of the most powerful ways we can use data and technology is as a mirror onto existing forms of inequality that can then motivate us to try to address those things. The caution is that, once we come to grips with the problem, the solution is not simply going to be a technical solution, and so we have to understand both the promise of data and the limits of data. So when it comes to, let's say, a hiring algorithm that now is trained to look for diversity as opposed to homogeneity, say I get hired through one of those algorithms in a new workplace. I can get through the door and be hired, but if nothing else about that workplace has changed, and on a day-to-day basis I'm still experiencing microaggressions, I'm still experiencing all kinds of issues, then that technology just gave me access to a harmful environment, you see? And so this is the idea that we can't simply expect the technology to solve all of our problems.
We have to do the hard work. And so I would encourage everyone listening to both accept the promise of these tools but, really crucially, to understand that the real kinds of changes that we need to make are going to be messy. They're not going to be quick fixes. If you think about how long it took our society to create the kinds of inequities that we now live with, we should expect to do our part, do the work, and pass the baton. We're not going to magically, like fairy dust, create a wonderful algorithm that's going to help us bypass these issues. It can expose them, but then it's up to us to actually do the hard work of changing our social relations, changing the culture of not just our workplaces but our schools, our healthcare systems, our neighborhoods, so that they reflect our better values. >>Ruha, so beautifully said. I think all of us are willing to do the hard work. And I like your point about using it as a mirror, and at ThoughtSpot we like to say a fact-driven world is a better world. It can give us that transparency. So on behalf of everyone, thank you so much for your passion, for your hard work, and for talking to us. >>Thank you, Cindy. Thank you so much for inviting me. Paola, back to you. >>Thank you, Cindy and Ruha, for this fascinating exploration of our society and technology. We're just about ready to move on to our final session of the day, so make sure to tune in for this customer case study session with executives from Sienna and Accenture on driving digital transformation with search and AI.
Asa Kalavade, Amazon Web Services | AWS Storage Day 2019
(upbeat music) >> Hi, everybody, we're back. This is Dave Vellante with theCUBE. We're here talking storage at Amazon in Boston. Asa Kalavade's here, she's the general manager for Hybrid and Data Transfer services. >> Let me give you a perspective of how these services come together. We have DataSync, Storage Gateway, and Transfer as a set of Hybrid and Data Transfer services. The problem that we're trying to address for customers is how to connect their on-premises infrastructure to the cloud. And we have customers at different stages of their journey to the cloud. Some are just starting out to use the cloud, some are migrating, and others have migrated, but they still need access to the cloud from on-prem. So the broad charter for these services is to enable customers to use AWS Storage from on-premises. So for example, Storage Gateway today is used by customers to get unlimited access to cloud storage from on-premises. And they can do that with low latency, so they can run their on-prem workloads but still leverage storage in the cloud. In addition to that, we have DataSync, which we launched at re:Invent last year, in 2018. And DataSync essentially is designed to help customers move a lot of their on-premises storage to the cloud, and back and forth, for workloads that involve replication, migration, or ongoing data transfers. So together, Gateway and DataSync help solve the access and transfer problem for customers. >> Let's double down on the benefits. You started the segment just sort of describing the problem that you're solving, connecting on-prem to cloud, sort of helping create these hybrid environments. So that's really the other benefit for customers, really simplifying that sort of hybrid approach, giving them high performance and confidence that it actually worked. Maybe talk a little bit more about that. >> So with DataSync, we see two broad use cases. There is a class of customers that have adopted DataSync for migration. So we have customers like Autodesk who've migrated hundreds of terabytes from their on-premises storage to AWS. And that has allowed them to shut down their data center, or retire their existing storage, because they're on their journey to the cloud. The other class of use cases is customers that have ongoing data that they need to move to the cloud for a workload. So it could be data from video cameras, or gene sequencers, that they need to move to a data pipeline in the cloud, where they can do further processing and, in some cases, bring the results back. So that's the second, continuous data transfer use case that DataSync allows customers to address.
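For a sense of what that transfer workflow might look like in practice, here is a minimal sketch using the AWS SDK for Python. Every hostname, ARN, and role name below is a hypothetical placeholder, and a real setup would first deploy a DataSync agent on-premises before these calls would succeed.

```python
import boto3

# Sketch: copy data from an on-premises NFS share to an S3 bucket,
# the migration / ongoing-transfer pattern described above.
ds = boto3.client("datasync", region_name="us-east-1")

source = ds.create_location_nfs(
    ServerHostname="nfs.example.internal",
    Subdirectory="/export/research",
    OnPremConfig={"AgentArns": [
        "arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"
    ]},
)

dest = ds.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-archive-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

task = ds.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=dest["LocationArn"],
    Name="onprem-to-s3-migration",
)

# Each execution performs one incremental transfer; scheduling repeated
# executions covers the continuous-transfer use case mentioned above.
ds.start_task_execution(TaskArn=task["TaskArn"])
```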
>> You're also talking today about a high availability version of Storage Gateway. What's behind that? >> Storage Gateway today is used by customers to get access to data in the cloud from on-premises. So if we continue this migration story that I mentioned with DataSync, now you have a customer that has moved a large amount of data to the cloud. They can now access that same data from on-premises for latency reasons, or if they need to distribute data across organizations, and so on. So that's where the Gateway comes into play. Today we have tens of thousands of customers that are using Gateway to do their backups, do archiving, or in some cases use it as a target to replace their on-premises storage with cloud-backed storage. So a lot of these customers are running business critical applications today. But then some of our customers have told us they want to run additional workloads that are uninterruptible, so they cannot tolerate downtime. So with that requirement in mind, we are launching this new capability around high availability. And we're quite excited, because that's allowing us to support even more workloads on the Gateway. This announcement will allow customers to have a highly available Gateway in a VMware environment. With that, their workloads can continue running even if one of the Gateways goes down, whether they have a hardware failure, a networking event, or a software error such as the file shares becoming unavailable. The Gateway automatically restarts, so the workloads remain uninterrupted. >> So talk a little bit more about how it works, just in terms of anything customers have to do, any prerequisites they have. How does it all fit? >> Customers can essentially use this in their VMware HA environment today. So they would deploy their Gateway much like they do today. They can download the Gateway from the AWS console. If they have an existing Gateway, the software gets updated so they can take advantage of the high availability feature as well. The Gateway integrates into the VMware HA environment. It builds in a number of health checks, so we keep monitoring for application uptime, network uptime, and so on. And if there is an event, the health check gets communicated back to VMware, and the Gateway gets restarted, in most typical cases, within under 60 seconds. >> So customers that are VMware customers can take advantage of this, and to them it's very non-disruptive, it sounds like. That's one of the benefits. But maybe talk about some of the other benefits. >> We see a large number of our on-premises customers, especially in enterprise environments, use VMware today. And they're using VMware HA for a number of their other applications. So we wanted to plug into that environment so the Gateway is highly available as well. So all their applications just work in that same framework. And then along with high availability, we're also introducing two additional capabilities. One is real-time reports and visibility into the Gateway's resource consumption. So customers can now see embedded CloudWatch graphs on how their storage is being consumed, what's their cache utilization, what's the network utilization. And then the administrators can use that to, in fairly real time, adapt the resources that they've allocated to the Gateway. So with that, as their workloads change, they can continue to adapt their Gateway resources, so they're getting the maximum performance out of the Gateway. >> So if they see a performance problem, and it's a high priority, they can put more resources on it-- >> They can attach more storage to it, or move it to a higher-resourced VM, and they can continue to get the performance they need. Previously they could still do that, but they had to run manual checks. Now this is all automated, and they can get this in a single pane of control. They can use the AWS console today, like they do for their in-cloud workloads, to look at the performance of their on-premises Gateways as well. So it's one pane of control. They can get CloudWatch health reports on their infrastructure on-prem.
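Those resource-consumption reports are backed by CloudWatch metrics, so the same numbers shown in the console graphs can also be pulled programmatically. A minimal sketch, assuming a hypothetical gateway ID and using the CachePercentUsed metric as the cache-utilization signal:

```python
import boto3
from datetime import datetime, timedelta

# Pull the last hour of cache utilization for one gateway. The gateway
# ID below is a made-up placeholder.
cw = boto3.client("cloudwatch", region_name="us-east-1")

resp = cw.get_metric_statistics(
    Namespace="AWS/StorageGateway",
    MetricName="CachePercentUsed",
    Dimensions=[{"Name": "GatewayId", "Value": "sgw-12A3456B"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                # 5-minute buckets
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}% cache used")
```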
>> And of course it's cloud, so I can assume this is a service, I pay for it when I use it, I don't have to install any infrastructure, right? >> So the Gateway is, again, consumption based, much like all AWS services. You download the Gateway, and it doesn't cost you anything. We charge one cent per gigabyte of data transferred through the Gateway, capped at $125 a month. And you just pay for whatever storage is consumed by the Gateway. >> When you talk to senior execs like Andy Jassy, he always says, "We focus on the customers." And sometimes people roll their eyes, but it's true. This is a hybrid world. Years ago, you didn't really hear much talk about hybrid. You talk to your customers and they say, "Hey, we want to connect our on-prem to the public cloud." You're bringing services to do that. Asa, thanks so much for coming to theCUBE. Appreciate it. >> Thank you, thanks for your time. >> You're welcome. And thank you for watching, everybody. This is Dave Vellante with theCUBE. We'll be back right after this short break. (upbeat music)
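As a quick sanity check on the pricing quoted in that exchange, the charge model is simple to compute: one cent per gigabyte transferred, capped at $125 a month, which means the cap kicks in past 12,500 GB (about 12.5 TB) of monthly transfer.

```python
# Back-of-the-envelope check on the quoted pricing: $0.01 per GB
# transferred through the gateway, capped at $125 per month.
PRICE_PER_GB = 0.01
MONTHLY_CAP = 125.00

def monthly_gateway_charge(gb_transferred: float) -> float:
    return min(gb_transferred * PRICE_PER_GB, MONTHLY_CAP)

print(monthly_gateway_charge(500))     # 5.0   -> $5 for 500 GB
print(monthly_gateway_charge(20_000))  # 125.0 -> cap reached past 12,500 GB
```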
Laura Stevens, American Heart Association | AWS re:Invent
>> Narrator: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2017, presented by AWS, Intel, and our ecosystem of partners. >> Hey, welcome back everyone, this is theCUBE's exclusive live coverage here in Las Vegas for AWS Amazon Web Services re:Invent 2017. I'm John Furrier with Keith Townsend. Our next guest is Laura Stevens, data scientist at the American Heart Association, an AWS customer. Welcome to theCUBE. >> Hi, it's nice to be here. >> So, the new architecture, we're seeing all this great stuff, but one of the things that they mention is data is the killer app. That's my word, Werner didn't say that, but essentially saying that. You guys are doing some good work with AWS and precision medicine, what's the story? How does this all work, what are you working with them on? >> Yeah, so the American Heart Association was founded in 1924, and it is the oldest and largest voluntary organization dedicated to curing heart disease and stroke, and I think in the past few years what the American Heart Association has realized is that the potential of technology and data can really help us create innovative ways and really launch precision medicine in a fashion that hasn't been possible before. >> What are you guys doing with AWS? What's the solution? >> Yeah, so the AHA has strategically partnered with Amazon Web Services to basically use technology as a way to power precision medicine, and so when I say precision medicine, I mean identifying individual treatments based on one's genetics, their environmental factors, their life factors, that then results in prevention and treatment that's catered to you as an individual, rather than the kind of one-size-fits-all approach that is currently happening. >> So more tailored? >> Yeah, specifically tailored to you as an individual. >> What do I do, get a genome sequence? I walk in, they throw high performance computing at it, sequence my genome, maybe edit some genes while they're at it, I mean, what's going on? There are some cutting edge conversations out there, we see it in some of the academic areas. CRISPR, that was me just throwing that in for fun. But data has to be there. What kind of data do you guys look at? Is it personal data, and how big is the data? Give us a sense of some of the data science work that you're doing. >> Yeah, so the American Heart Association has launched the Institute for Precision Cardiovascular Medicine, and as a result, with Amazon, they created the precision medicine platform, which is a data marketplace that houses and provides analytic tools that enable high performance computing and data sharing for all sorts of different types of data, whether it be personal data, clinical trial data, pharmaceutical data, other data that's collected in different industries, hospital data, so a variety of data. >> So Laura, there's a lot of, I think, FUD out there around the ability to store data in a cloud, but there are also some valid concerns. A lot of individual researchers, I would imagine, don't have the skillset to properly protect data. What is the Heart Association doing with the framework to help your customers protect data?
>> Yeah, so the security of data, the security of the individual, and the privacy of the individual is at the heart of the AHA, and it's their number one concern, and making anything that they provide meet that number one priority. And the way that we do that, in partnering with AWS, is that with this cloud environment we've been able to create a sort of walled garden behind your data, so that it's not accessible to people who don't have access to the data. And it's also HIPAA compliant; it meets the utmost security standards of health care today. >> So I want to make sure we're clear on this: the Heart Association doesn't collect data themselves. Are you guys creating a platform for your members to leverage this technology? >> So, I would say maybe both, actually. The American Heart Association does have data that it is associated with, with its volunteers and the hospitals that it's associated with. And then, on top of that, we've actually just launched My Research Legacy, which allows individuals of the community who want to share their data, whether you're healthy or sick, either one, to share their data and help in aiding to cure heart disease and stroke. And then, on top of that, we are committed to strategically partnering with anybody who's involved and wants to share their data and make their data accessible. >> So I can share my data? >> Yes, you can share your data. >> Wow, so what type of tools do you guys use against that data set, and what are some of the outcomes? >> Yeah, so I think the foundation is the cloud, and that's where the data is stored and housed, and then from there we have a variety of different tools that enable researchers to custom build data sets that answer the specific research questions they have. And so some of those tools range from common tools that are already in use today on your personal computer, such as Python or R Bioconductor, to more high performance computing tools, such as Hail or any kind of S3 environment, or Amazon services. And then, on top of that, I think what is so awesome about the platform is that it's very dynamic, so a tool that's needed for high performance computing, or a tool that's needed even just on a smaller data set, can easily be installed and made available to researchers, so that they can use it for their research. >> So kind of data as a service. I would love to know about the community itself. How are you guys sharing the results of, oh, this process worked great for this type of analysis, amongst your members? >> Yeah, so I think there are kind of two different targets in that sense that you can think of: there are the researchers who come to the platform, and then there's actually the patient itself, and ultimately the AHA's goal is to use the data and the research for patient-centered care. So with the researchers specifically, we have a variety of tutorials available, so that researchers can, one, learn how to perform high performance computing analysis, and see what other people have done.
We have a forum where researchers can log on and access other researchers and talk to them about different analyses, and then additionally we have My Research Legacy, which is patient centered: this is what's been found, and this is what we can give back to you as the patient about your specific individualized treatment. >> What do you do on a daily basis? Take us through your job. Are you writing code, are you slinging APIs around? What are some of the things that you're doing? >> I think I might say all of the above. I think right now my main effort is focused on, one, conducting research using the platform, so I do use the platform to answer my own research questions, and those we have presented at different conferences; for example, at the American Heart Association's, we had a talk here about the precision medicine platform. And then two, I'm focused on strategically making the precision medicine platform better by adding more data to the platform, improving the way that data is harmonized in the platform, and improving the amount of data that we have, and its diversity and variety. >> Alright, we'll help you with that, so let's help you get some people recruited. So what do they have to do to volunteer, to volunteer their data? Because I think this is one of those things where, you know, people do want to help. So how do they onboard? Do they use the website, is it easy, one click? Do they have to wear an iWatch, I mean, what's the deal? What do I got to do? >> So I would encourage researchers and scientists and anybody who is data centric to go to precision.heart.org, and they can just sign up for an account, they can contact us through that; there are plenty of different ways to get in touch with us and plenty of ways to help. >> Precision.heart.org. >> Yup, precision.heart.org. >> Stu: Register now. >> Register now, click. >> Powered by AWS. >> Yup. >> Alright, so I gotta ask you, as an AWS customer, okay, take your customer hat off, put your citizen's hat on: what does Amazon mean to you? I mean, how do you describe it to people who don't use it? >> Okay, yeah. So I think the AHA's ultimate mission, right, is to provide individualized treatment and cures for cardiovascular disease and stroke. Amazon is a way to enable that and make that actually happen, so that we can mine extremely large data sets and identify those individualized patterns. It allows us to store data in a fashion where we can provide a marketplace where there are extremely large amounts of data, extremely diverse amounts of data, and data that can be processed effectively, so that it can be directly used for research. >> What's your favorite tool or product or service within Amazon? >> That's a good question. I think, I mean, the cloud and S3 buckets are definitely, in a sense, my favorites, because there's so much that can be stored right there. Athena I think is also pretty awesome, and then the EMR clusters with Spark. >> The list is too long. >> My jam. >> It is. (laughs) >> So, one of the interesting things that I love is, a lot of my friends are in non-profits. Fundraising is a big, big challenge; grants are, again, a big challenge. Have you guys seen any new opportunities as a result of the research coming out of the AHA and AWS in the cloud?
>> Yeah, so I think one of the coolest things about the AHA is that they have this Institute for Precision Cardiovascular Medicine, and through the strategic partnership between the AHA and AWS, even just this year we've launched 13 new grants, where the AHA backs the research and AWS provides credits, so that people can come to the cloud and use the tools available on a grant-funded basis. >> So tell me a little bit more about that program. Anybody specifically that you can point to who's used these credits from AWS to do some cool research? >> Yeah, definitely. So I think specifically we have one grantee right now that is really focused on identifying outcomes across multiple clinical trials. So currently clinical trials take 20 years, and there's a large variety of them. I don't know if any of you are familiar with the Framingham heart study, the Dallas heart study, the Jackson heart study. Trying to determine how those trials compare, and what outcomes and research insights we can generate across multiple data sets, is something that's been challenging, due to not being able to access all of those different data sets together, and then, two, trying to find ways to actually compare them. And so with the precision medicine platform, we have a grantee at the University of Colorado-Denver who has been able to find those synchronicities across data sets and has actually created kind of a framework that can then be implemented in the precision medicine platform. >> Well, I just registered; it takes really two seconds to register, that's cool. Thanks so much for pointing out precision.heart.org. Final question: you said EMR's your jam. (laughing) Why, why is it? Why do you like it so much? Is it fast, is it easy to use? >> I think the speed is one of the things. When it comes to using genetic data and multiple biological levels of data, whether it be your genetics, your lifestyle, your environmental factors, it just ends up being extremely large amounts of data, and to be able to implement things like serverless AI, artificial intelligence, and machine learning on that data set is time consuming. And having the power of an EMR cluster that is scalable makes that so much faster, so that we can answer our research questions faster, identify those insights, and get them out into the world. >> Gotta love the new services they're launching, too. It just builds on top of it, doesn't it? >> Yes. >> Yeah, soon everyone's gonna be jamming on AWS, in our opinion. Thanks so much for coming on, appreciate the stories and commentary. >> Yeah. >> Precision.heart.org. You want to volunteer, if you're a researcher or a user, or want to share your data; they've got a lot of data science mojo going on over there, so check it out. It's theCUBE bringing a lot of data here, tons of data from the show, three days of wall to wall coverage. We'll be back with more live coverage after this short break. (upbeat music)
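The EMR-with-Spark workflow she describes, comparing outcomes across harmonized studies, suggests what such an analysis might look like in practice. Below is a minimal PySpark sketch of that idea; the S3 path, study names, and column names are hypothetical placeholders, not the platform's actual schema.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cross-cohort-outcomes").getOrCreate()

# Imagine each study (Framingham, Dallas, Jackson, ...) harmonized to one
# shared schema: study, participant_id, sbp (systolic BP), cv_event flag.
cohorts = spark.read.parquet("s3://example-precision-medicine/harmonized/")

# Compare cohort sizes, mean blood pressure, and event rates across studies.
summary = (
    cohorts
    .groupBy("study")
    .agg(
        F.count("participant_id").alias("n"),
        F.avg("sbp").alias("mean_sbp"),
        F.avg(F.col("cv_event").cast("double")).alias("event_rate"),
    )
)
summary.show()
```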
Day 1 Wrap Up - DockerCon 2017 - #theCUBE - #DockerCon
>> Narrator: Live from Austin, Texas, it's theCUBE, covering DockerCon 2017. Brought to you by Docker and support from its ecosystem partners. >> Hi, and welcome back to theCUBE, SiliconANGLE Media's production of DockerCon 2017. I'm Stu Miniman, and joining me for the wrap today I have Jim Kobielus, who's been my co-host for the whole day, part of the Wikibon team. Jim, it's been a long day. Your first full day on theCUBE, though you've been on many times. >> It's been invigorating, I've learned so much. This is an awesomely substantial show. It's been wonderful. We've had so many great guests, oh my gosh. Ben Golub and everybody who came before. Amazing material. >> Stu: And my other guest for the wrap up is John Troyer, who's been on the program many times. He's sometimes a guest host of the program, and he's Chief Reckoner at TechReckoning. John, thanks for joining us. >> Hey, thanks so much for having me, Stu. >> Alright, so, you know, we had some really good guests. It's easy for me at the end of the day, when my energy's flagging, to have Ben Golub, the CEO of the company, on to talk about where Docker's gone, and Jerry Chen, who always brings energy, part of the V mafia like yourself, John, so really interesting stuff. I want to step back, let's talk about the keynote. So I guess John, I'll start with you. Something we've been talking about the last year or so is this Docker, Docker, Docker hype. I felt like a little bit of air was let out of the hype over the last year with the Docker Data Center and Docker Swarm type activity; some of the ecosystem was a little frustrated with the direction that Docker the company was going, compared to where they wanted the open source part to go. Lot of open source, lot of developer talk today. What's your take on the announcements, the ecosystem, open source? There's so many things, but let's get us started. >> Sure. Well, I didn't quite know what to expect, Stu. We hear about Docker going more enterprise; they just made a big enterprise announcement, so I thought we might come in here and hear 45 minutes on digital transformation, the standard enterprise keynote that you get at every other conference. And we did not get that this morning. >> I've seen Michael Dell give that keynote in this building. (laughs) So, totally. >> At least we didn't get that here; we've all heard that elsewhere. >> Well, at every conference for the last five years, I think. Ten years. So we talked about the ecosystem, that was the first message this morning. It was about growth of the ecosystem, about growth of the partnerships, growth of the projects, and so that was definitely playing to their strengths, and then they went straight to the code. This was a developer-centered keynote; they did live demos with real code. And so they were really playing to the audience here, which I think is still predominately developers. So they were signaling that, hey, they weren't going all enterprise. Now, the announcements were also interesting, but I think the signal from the keynote was that we are still here, we're all about developer experience, we're about making things simple. >> Yeah, I don't think there are too many shows where you'd start off with, here's how you can build really large containers more easily with this multi-stage build, and all this Docker stuff. It's not the suits, it's not the big customers. Having said that, does that mean you won't go to tomorrow's keynote, because Ben said it's going to be all the enterprise stuff tomorrow? >> I live for the enterprise stuff.
Having said that, does that mean you won't go to tomorrow's keynote? Because Ben said it's going to be all the enterprise stuff tomorrow. >> I live for the enterprise stuff. I'm really excited about tomorrow. So hopefully not too much digital transformation. But I think what Docker has announced over the last month, not even talking about what happened today — the Docker packaging, Docker Enterprise Edition versus consumer edition, and then, not consumer, Community Edition, sorry, and then the tiers of Docker Enterprise Edition — I think is really kind of brilliant. Docker is at a real turning point in its evolution right now. There was a lot of confusion around what is Docker the project, what is Docker the engine, what is Docker the company, and I think with this kind of packaging, and then with the announcements today, they've cleared up a whole lot of confusion in the ecosystem. >> Yeah, coming in, I heard from a lot of people who were really excited that containerd got open sourced — there's a sketch of what driving it directly looks like below. All three of us went to the Kubernetes event last night over at the Google Fiber space a couple of blocks from here, and the reaction was: oh, cool, I get all the open source Docker pieces, the stuff I need, but not all that upper-level stuff and the advanced things that Docker is building on top, so there are open source pieces. That goes into the Moby project. Docker's committing, doubling down on a lot of this: we're going to take all these pieces, we're going to work on them, the community's going to build them, and people can take that composable view to assemble their own solutions. Docker will do the packaging and monetization on top, but the partner ecosystem can do different things with the same parts. So what's your take? Let's start with the Moby project, some of these open source moves, the whole ecosystem. Positive? You think it's good? >> Yes, very much so. The maturation of the container ecosystem shows up in these announcements, one theme of which is customization — customize containers to the finest degree. They've got that capability now with Moby, exactly. It's all about containers everywhere. Containerization of applications is now the dominant theme in the developer community across all segments. So I think Docker has done the right thing, which is doubling down on developers, doubling down on the message and the tooling, both for customization of containers and for portability, with the LinuxKit announcement and so forth. Containerization, microservices, and so forth across all segments. One of the areas I focus on is artificial intelligence and deep learning. Containerization is coming to that in a big way as well; a lot of it is to drive things like autonomous vehicles and drones. But we're going to see containerization come to every other segment of data science, deep learning, and machine learning. It's not just the people at this show; other developer communities are coming to containerization in a big way. And Docker is becoming a premier development tool for them. Or will be.
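On that containerd point: what got spun out is the core runtime that pulls images and runs containers, and it can be driven without the Docker engine at all. The sketch below uses ctr, containerd's bundled bare-bones client; the exact flags shown track later containerd releases, so treat this as illustrative rather than 2017-vintage syntax:

    # Pull an image and run a container against containerd directly,
    # bypassing the Docker engine, via its bundled ctr client.
    sudo ctr images pull docker.io/library/alpine:latest
    sudo ctr run --rm docker.io/library/alpine:latest demo \
        echo "hello from containerd"

That separation is what lets orchestrators and other vendors embed the same runtime Docker uses without taking the whole Docker product.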
>> So, Jim, Stu, I think even more tactically, there was this confusion about Docker the engine, Docker the container runtime, Docker the container specification. Now, pulling that out with containerd, and now with LinuxKit — you always had the thing where Red Hat would say, well, we have OpenShift, it's like Docker, or it has a piece of Docker, or it can work with Docker; you had Cloud Foundry, it's like Docker, or has Docker, or can work with Docker. And so everybody had to do this dance of saying, well, we use some of the technology there. Now it's a very clean split with very distinct branding: we use LinuxKit, we use containerd, we use the Moby framework. And that actually helps, because, look, the death of commercial success is confusion. If a buyer does not understand how to get what you're selling, they're never going to buy anything. >> Yeah, I think we've seen the end of Docker's, well, "batteries included but removable," which caused some confusion in the marketplace. People were like, well, but it's not easy; that's what's in the box, and I want to be able to choose the pieces up front. We talked with Brian Gracely earlier today about what the opinionated platform is, because there are certain solutions. Microsoft wants to build what they want, and they have lots of options, but when they want to build an upper-level service, they have the pieces underneath that they care about. It's not like, oh, okay, wait, I have to install this, then I have to uninstall that — that was Linux all the time: oop, I'm recompiling, I'm recompiling, I have to add things in and remove them. No, no, no. I want it in the box, in the kernel, and then I can choose and activate what I need. >> My guess, my prediction, is that next year at DockerCon, Docker will double down on experience, developer experience. There's not enough of it here yet. I think that will be a core theme for them going forward, to continue to deepen their mindshare in that community. >> I'll take that and double it. One of the factors that brought VMware to prominence was its operator experience and its simplicity. VMware HA, high availability, was one checkbox. VMware Distributed Resource Scheduler, which moved virtual machines around — one checkbox, right? And so with Docker's focus on developer usability and developer experience, with today's announcement of LinuxKit, that could actually be a huge, huge deal — if, in the future, the application development pipeline greatly depends on building a "just enough operating system," as we used to say back in the VMware days with Jerry Chen. >> Stu: Yeah, good ol' JeOS. >> Yeah, if that becomes the defining characteristic of building cloud native apps — and it is, right? The Dockerfile is the defining document of our time. If that's the case, and now they've taken it into the Linux distribution world, which could have repercussions for the whole ecosystem, that could be Docker's magic checkbox: the level of developer experience for rolling out a custom stack has just been raised. And LinuxKit is not new to the world; they just open sourced it today. It's what they're using to ship Docker for AWS and Docker for Google Cloud, so it's already in production today — there's a sketch of what one of those recipes looks like below. I'm super impressed. >> And I think there was potential that it could have caused more confusion or upset in the ecosystem. But we interviewed Red Hat and Canonical today, and I'm not saying they jumped up and down and embraced it, but it was, okay, that's fine. There's always got to be that co-opetition. I mean, Jim, you came most recently from IBM, the company I most associate with the word co-opetition. So there are always swim lanes: there's where you partner together, and there's where you sometimes bump heads on strategy. >> Yeah.
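Since LinuxKit came up so often here, the shape of it is worth showing: a bootable, immutable Linux image is assembled from a declarative YAML recipe in which nearly every component, including system services, is itself a container image. A minimal sketch, with illustrative component tags — the real recipes in the moby/linuxkit repository pin exact versions:

    # Write a minimal hypothetical LinuxKit recipe: a kernel, an init
    # layer, and one containerized service, assembled into a boot image.
    cat > minimal.yml <<'EOF'
    kernel:
      image: linuxkit/kernel:4.9.x
      cmdline: "console=tty0"
    init:
      - linuxkit/init:latest
      - linuxkit/runc:latest
    services:
      - name: web
        image: nginx:alpine
        capabilities:
          - CAP_NET_BIND_SERVICE
    EOF

    # The linuxkit CLI builds bootable artifacts (kernel+initrd, ISO,
    # VM images) from the recipe; at launch the tool was invoked as
    # 'moby build'.
    linuxkit build minimal.yml

The "one checkbox" analogy holds up: you declare the stack you want, and the tooling assembles the just-enough OS underneath it.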
>> And I don't think people should be too alarmed. At a technical level, right, there's stuff that runs in containers and there's stuff that runs underneath containers. There's still a role for Ubuntu, there's still a role for Red Hat, and there's still a role for CoreOS and Rancher. I don't have enough of a crystal ball to say what we'll be talking about next year; it could actually have a fairly large ripple effect going out through the ecosystem. >> John, you've also dug in with a couple of vendors here — what about the storage space? It's one we've been digging into a bit. The general consensus is that we still have a ways to go on maturity, and it's the furthest behind. Big surprise — just like VMware; we spent over a decade working through that. What's your take on storage? Any other comments on the broad ecosystem, on what needs to be worked on and improved over time? >> I think storage is the next area that needs to be worked on; that's the next piece we see as still a little fragmented. I've heard from many vendors here at the show, and even from Docker itself, that the surprising thing is that containers are not just for cloud native apps. A lot of the enterprise journey — and I imagine we're going to hear about that in tomorrow's keynote — starts with containerizing your big legacy apps. >> Yeah, it's funny. I made a comment at the Google cloud event in San Francisco a month ago: hey, when did lift and shift suddenly become sexy? (laughs) Of course there's nuance to that, and we've had a few interviews, Jim, where we've talked about it: look, there are initiatives where we want to do the cool app modernization, but in the meantime, it is not a bimodal world. We're not going to leave our old stuff behind and just have Larry the engineer keep an eye on it while it idles. The whole world needs to move forward, and containers are part of the bridge to the future, if you will. >> Yeah, but how do you containerize the legacy app — the mainframe app, for example? It's got a petabyte of data in its storage. You've got to work through the data, the deep data issues there, you know. >> Yeah, you can run Docker on a mainframe; I've done interviews on that, and you work with those people, Jim. It's one of those, oh wait, okay, right — there are pieces that get updated and people whose roles change. John, you and I have talked about the early days of VMware. It was: let me take that horrible ten-year-old application running on Windows NT, which is going end of life and whose hardware is about to die, shove it into a VM, and leave it there for another five or ten years. And it was like, please don't do that. >> Sometimes the real world intrudes. Part of this problem gets smoothed over or confused because we're talking about both on-prem apps and public cloud apps, and, going back to storage, the storage issues are a little different, right? Especially in the public cloud, you've got issues of data locality, you've got issues of latency, even performance, and so you see a number of vendors approaching it. It's very easy to connect a container to some sort of persistent volume. It is very hard to give it something that performs, is backed up, and, you know, is going to be there. The storage industry has spent decades on those problems. I don't think we're there yet in terms of the generic container that floats either in the public cloud or on prem.
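John's distinction is worth pinning down: wiring a container to a persistent volume really is the easy part, as the sketch below shows (the image tag and names are illustrative); everything operational — backup, latency, locality — lives in whatever driver and infrastructure sit behind that volume:

    # Create a named volume and mount it into a database container so
    # the data outlives any individual container.
    docker volume create --driver local pgdata
    docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:9.6

    # Destroy and recreate the container; the volume, and the data,
    # persist independently of it.
    docker rm -f db
    docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:9.6

Swap "--driver local" for a vendor's volume plugin and the same commands ride on enterprise storage, which is exactly where the maturity questions raised here kick in.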
>> And then there's handling the hybrid cloud — hybrid data clouds, of which there are a myriad, with public and private zones inside a distributed data architecture with varying degrees of velocity and variety. Managing all that data in containerized environments, with rich orchestration among them, replication and streaming and so forth. >> You can do it, but it's cutting edge right now. >> Yeah, it's cutting edge. >> So, John, the last question I have to ask you is something near and dear to your heart: careers. There are a lot of people here, people I used to see in the VMware community, who are learning all the cool new stuff. Anything you see Docker doing on evangelism, on programs, influencer-program-type things? Are you seeing anything in the education or career space you can share? >> Sure. Docker is very rich in community; it's been the engine of their growth. They've long had a huge user group program, they have a campus program, they have a mentorship program, and they also have the Docker Captains. The Docker Captains started, oh, a year, a year and a half ago; it's an advocacy program, I think there are 70 of them now, and Docker works very closely with them. They come from all across the ecosystem, which is interesting — everybody from Dell EMC to many other companies. So that's pretty cool. It feels a lot like the early days of VMware: these people have day jobs, but they spend their nights and weekends hacking on Docker, and Docker takes advantage of that in the best sort of way. They give them opportunities, they give them platforms to speak and to help others, and I see that in full force here — they have a track at the show, so Docker is leaning heavily on its community. I even saw one person here, Stu, from a mainline storage company, who said, you know what, my company's not here, but I am, because I have to learn how to do this. I think the people who are here have a good next phase of their careers. >> That's smart. A community advocacy program of that sort is even more important than an event like this in terms of deepening developer loyalty to a provider and its growing stack. >> John: Docker the company is very small. There's a very large community and a very small company. >> Stu: Three hundred and some odd people. >> They have to leverage those resources. >> John: Exactly. >> Well, Jim, thanks for all your help co-hosting today. John, really appreciate you coming in, especially with some of that community and ecosystem expertise that you bring. By the way, John's going to be co-hosting OpenStack Summit with me — another one where we'll have (mumbles) where that ecosystem community is and where it's going — in a couple of weeks, in my home state of Massachusetts, in Boston. So be sure to tune in tomorrow; we've got a full day of coverage. The first guest is going to be Solomon Hykes, coming off the day two keynote, and we're going to talk a little more about enterprise. We've got a full lineup of guests, so be sure to check out siliconangle.tv for everything. So, for Jim Kobielus, John Troyer, and myself, Stu Miniman, thank you for watching day one of theCUBE's coverage of DockerCon 2017. (upbeat music)