Wayne Duso, AWS & Iyad Tarazi, Federated Wireless | MWC Barcelona 2023
(light music) >> Announcer: TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to the Fira in Barcelona. Dave Vellante with Dave Nicholson. Lisa Martin's been here all week. John Furrier is in our Palo Alto studio, banging out all the news. Don't forget to check out siliconangle.com, thecube.net. This is day four, our last segment, winding down. MWC23, super excited to be here. Wayne Duso, friend of theCUBE, VP of engineering for products at AWS, is here with Iyad Tarazi, who's the CEO of Federated Wireless. Gents, welcome. >> Good to be here. >> Nice to see you. >> I'm so stoked, Wayne, that we connected before the show. We texted, I'm like, "You're going to be there. I'm going to be there. You got to come on theCUBE." So thank you so much for making time, and thank you for bringing a customer partner, Federated Wireless. Everybody knows AWS. Iyad, tell us about Federated Wireless. >> We're a software and services company out of Arlington, Virginia, right outside of Washington, DC, and we're really focused on this new technology called Shared Spectrum and private wireless for 5G. Think of it as enterprises consuming 5G the way they used to consume WiFi. >> Is that unrestricted spectrum, or? >> It is managed, organized, interference free, all through cloud platforms. That's how we got to know AWS. We went and got maybe about 300 products from AWS to make it work. Quite sophisticated, highly available, and pristine spectrum worth billions of dollars, but available for people like you and I that want to build enterprises, that want to make things work. Also carriers, cable companies, everybody else that needs it. It's really a new revolution for everyone. >> And that's how it got introduced to AWS. Was that through public sector, or just the coincidence that you're in DC? >> No, I, well, yes. 
The center of gravity in the world for spectrum is literally Arlington. You have the DOD spectrum people, you have spectrum people from the National Science Foundation, DARPA, and then you have the commercial sector, and you have the FCC just an Uber ride away. So we went and found the scientists that are doing all this work, four or five of them, Virginia Tech has an office there too, for spectrum research for the Navy. Come together, let's have a party and make a new model. >> So I asked this, I'm super excited to have you on theCUBE. I sat through the keynotes on Monday. I saw Satya Nadella was in there, Thomas Kurian, but there was no AWS. I'm like, where's AWS? AWS is everywhere. I mean, you guys are all over the show. I'm like, "Hey, where's the number one cloud?" So you guys have made a bunch of announcements at the show. Everybody's talking about the cloud. What's going on for you guys? >> So we are everywhere, and you know, we've been coming to this show for years. But this is really a year that we can demonstrate that what we've been doing for the IT enterprise, IT people, for 17 years, we're now bringing to telcos, you know? For years, 17 years to be exact, we've been bringing the cloud value proposition, whether it's, you know, cost efficiencies or innovation or scale, reliability, security and so on, to these enterprise IT folks. Now we're doing the same thing for telcos. And so whether they want to build in region, in a local zone, metro area, on-prem with an Outpost, at the edge with Snow Family, or with our IoT devices, no matter where they want to start, if they start in the cloud and they want to move to the edge, or they start at the edge and they want to bring the cloud value proposition, we're demonstrating all of that is happening this week. And very much so, we're also demonstrating that we're bringing the same type of ecosystem that we've built for enterprise IT. 
We're bringing that type of ecosystem to the telco companies, with CSPs, with the ISV vendors. We've seen plenty of announcements this week. You know, so on and so forth. >> So what's different, is it, the names are different? Is it really that simple, that you're just basically taking the cloud model into telco and saying, "Hey, why do all this undifferentiated heavy lifting when we can do it for you? Don't worry about all the plumbing." Is it really that simple? I mean, that straightforward. >> Well, simple is probably not what I'd say, but we can make it straightforward. >> Conceptually. >> Conceptually, yes. Conceptually it is the same. Because if you think about, firstly, we'll just take 5G for a moment, right? The 5G folks, if you look at the architecture for 5G, it was designed to run on a cloud architecture. It was designed to be a set of services that you could partition and run in different places, whether it's in the region or at the edge. So in many ways it is sort of that simple. And let me give you an example. Two things, the first one is we announced integrated private wireless on AWS, which allows enterprise customers to come to a portal and look at the industry solutions. They're not worried about their network, they're worried about solving a problem, right? And they can come to that portal, they can find a solution, they can find a service provider that will help them with that solution. And what they end up with is a fully validated offering that AWS telco SAs have actually put through its paces to make sure this is a real thing. And whether they get it from a telco, and quite frankly in that space, it's SIs such as Federated that actually help our customers deploy those in private environments. So that's an example. 
And then added to that, we had a second announcement, which was AWS Telco Network Builder, which allows telcos to plan, deploy, and operate, at scale, telco network capabilities on the cloud. Think about it this way- >> As a managed service? >> As a managed service. So think about it this way. In the same way that enterprise IT has been deploying, you know, infrastructure as code for years, Telco Network Builder allows the telco folks to deploy telco networks and their capabilities as code. So it's not simple, but it is pretty straightforward. We're making it more straightforward as we go. >> Jump in Dave, by the way. He can geek out if you want. >> Yeah, no, no, no, that's good, that's good, that's good. But actually, I'm going to ask an AWS question, but I'm going to ask Iyad the AWS question. So when I hear the word cloud from Wayne, cloud, AWS, typically in people's minds that denotes off-premises. Out there, AWS data center. In the telecom space, yes, of course, in the private 5G space, we're talking about a little bit of a different dynamic than in the public 5G space, in terms of the physical infrastructure. But regardless, at the edge, there are things that need to be physically at the edge. Do you feel that AWS is sufficiently, have they removed the H word, hybrid, from the list of bad words you're not allowed to say? 'Cause there was a point in time- >> Yeah, of course. >> Where AWS felt that their growth- >> They'll even say multicloud today, (indistinct). >> No, no, no, no, no. But there was a period of time where, rightfully so, AWS felt that the growth trajectory would be supported solely by net new things off premises. Now though, in this space, it seems like that hybrid model is critical. Do you see AWS being open to the hybrid nature of things? >> Yeah, they're, absolutely. I mean, just to explain from- we're a services company and a solutions company. So we put together solutions at the edge, a smart campus, smart agriculture, a deployment. 
One of our biggest deployments is a million-square-foot warehouse automation project with the Marine Corps. >> That's bigger than the Fira. >> Oh yeah, it's bigger, definitely bigger than, you know, a small section of here. It's actually three massive warehouses. So yes, that is the edge. What the cloud is about is that massive amount of efficiency that has happened by concentrating applications in data centers. And that is programmability, that is APIs, that is solutions, that is applications that can run on it, where people know how to do it. And so all that efficiency now is being ported in a box called the edge. What AWS is doing for us is bringing all the business and technical solutions they had into the edge. Some of the data may be sent back and forth, but that's actually a smaller piece of the value for us. By being able to bring an AWS package at the edge, we're bringing IoT applications, we're bringing high speed cameras, we're able to integrate with the 5G public network. We're able to bring in identity and devices, we're able to bring in solutions for students, embedded laptops. All of these things you can do much, much faster and cheaper if you are able to tap into the 4,000, 5,000 partners and all the applications and all the development and all the models that the AWS team did. By being able to bring that efficiency to the edge, why reinvent that? And then along with that, there are partners that help do integration. There is development done to make it hardened, to make the data more secure, more isolated. All of these things will contribute to an edge that truly is a carbon copy of the data center. >> So Wayne, it's AWS, regardless of where the compute, networking and storage physically live, it's AWS. Do you think that the term cloud will sort of drift away from usage? Because, look, it's all IT, in this case it's AWS and Federated IT working together. What's your, it's sort of an obscure question about cloud, because cloud is so integrated. 
>> You got this thing about cloud, it's just IT. >> I got a thing about cloud too, because- >> You and Larry Ellison. >> Because it's no, no, no, I'm, yeah, well actually there's- >> There's a lot of IT that's not cloud, just say that, okay. >> Now, a lot of IT that isn't cloud, but I would say- >> But I'll (indistinct) cloud is an IT tool, and you see AWS obviously with the Snow fill-in-the-blank line of products and Outpost-type stuff. Fair to say that it doesn't matter where it is, it could be AWS if it's on the edge, right? >> Well, you know, everybody wants to define the cloud as what it may have been when it started. But if you look at what it was when it started and what it is today, it is different. But the ability to bring the experience, the AWS experience, the services, the operational experience and all the things that Iyad has been talking about, from the region all the way to, you know, the IoT device, if you would, that entire continuum. And it doesn't matter where you start. Like if you start in region and you need to bring your value to other places because your customers are asking you to do so, we're enabling that experience where you need to bring it. If you started at the edge, but you want to build cloud value, you know, whether it's again, cost efficiency, scalability, AI, ML or analytics, into those capabilities, you can start at the edge with the same APIs, with the same services, the same capabilities, and you can build that value in right from the get go. You don't build this bifurcation or many separations and try to figure out how do I glue them together? There is no gluing together. So if you think of cloud as being elastic, scalable, flexible, where you can drive innovation, it's the same exact model on the continuum. And you can start at either end, it's up to you as a customer. >> And I think, the key to me is the ecosystem. 
I mean, if you can do for this industry what you've done for the technology- enterprise technology business from an ecosystem standpoint, you know, everybody talks about flywheel, but that gives you like the massive flywheel. I don't know what the ratio is, but it used to be that for every dollar spent on a VMware license, $15 was spent in the ecosystem. I've never heard similar ratios in the AWS ecosystem, but I go to re:Invent and I'm like, there's some dollars being- >> That's a massive ecosystem. >> (indistinct). >> And another thing I'll add is Jose Maria Alvarez, who's the chairman of Telefonica, said there's three pillars of the future-ready telco: low latency, programmable networks, and he said cloud and edge. So they're recognizing cloud and edge. You know, low latency means you've got to put the compute and the data, and the programmable infrastructure was invented by Amazon. So what's the strategy around the telco edge? >> So, you know, at the end, those are all great points. And in fact, the programmability of the network was a big theme in the show. It was a huge theme. And if you think about the cloud, what is the cloud? It's a set of APIs against a set of resources that you use in whatever way is appropriate for what you're trying to accomplish. The network, the telco network, becomes a resource. And it could be described as a resource. I talked about, you know, network as code, right? It's the same as infrastructure as code, it's telco infrastructure as code. And that code, that infrastructure, is programmable. So this is really, really important. And how you build the ecosystem around that is no different than how we built the ecosystem around traditional IT abstractions. In fact, we feel that really the ecosystem is the killer app for 5G. You know, the killer app for 4G was data, of sorts, right? We started using data beyond simple SMS messages. So what's the killer app for 5G? 
It's building this ecosystem, which includes the CSPs, the ISVs, all of the partners that we bring to the table that can drive greater value. It's not just about cost efficiency. You know, you can't save your way to success, right? At some point you need to generate greater value for your customers, which gives you better business outcomes, 'cause you can monetize them, right? The ecosystem is going to allow everybody to monetize 5G. >> 5G is like the dot connector of all that. And then developers come in on top and create new capabilities. >> And how different is that than, you know, the original smartphones? >> Yeah, you're right. So what do you guys think of ChatGPT? (indistinct) to Amazon? Amazon turned the data center into an API. It's like we're envisioning this world, and I want to ask you technologists, like, where it's turning resources into human language interfaces. You know, when you see that, do you play with ChatGPT at all, or I know you guys got your own. >> So I won't speak directly to ChatGPT. >> No, don't speak from- >> But if you think about- >> Generative AI. >> Yeah, generative AI is important. And we are, and we have been for years, in this space. Now, you've been talking to AWS for a long time, and we often don't talk about things we don't have yet. We don't talk about things that we haven't brought to market yet. And so, you know, you'll often hear us talk about something, you know, a year from now where others may have been talking about it three years earlier, right? We will be talking about this space when we feel it's appropriate for our customers and our partners. >> You have talked about it a little bit; Adam Selipsky went on an interview with myself and John Furrier in October and said, you watch, you know, large language models are going to be enormous, and I know you guys have some stuff that you're working on there. >> It's, I'll say it's exciting. >> Yeah, I mean- >> Well, proof point is, Siri is an idiot compared to Alexa. 
(group laughs) So I trust one entity to come up with something smart. >> I have conversations with Alexa and Siri, and I won't judge either one. >> You don't need to, you could be objective on that one. I definitely have a preference. >> Are the problems you guys are solving in this space, you know, what's unique about 'em? What are they, can we, sort of, take some examples here (indistinct). >> Sure, the main theme is that the enterprise is taking control. They want to have their own networks. They want to focus on specific applications, and they want to build them with a skeleton crew. The one IT person in a warehouse wants to be able to do it all. So what's unique about them is that there's now a lot of automation and robotics, especially in warehousing environments and agriculture. There simply aren't enough people in these industries, and that requires precision. And so you need all that integration to make it work. People also want to build these networks as they want to control it. They want to figure out how do we actually pick this team and migrate it. Maybe just do the front of the house first. Maybe it's a security team that monitors the building; maybe later on upgrade things that are used to open doors and close doors and collect maintenance data. So that ability to pick what you want to do from a new process is really important. And then you're also seeing a lot of public-private network interconnection. That's probably the undercurrent of this show that hasn't been talked about. When people say private networks, they're also talking about something called neutral host, which means I'm going to build my own network, but I want it to work; my Verizon (indistinct) needs to work. There's been so much progress, it's not done yet. So much progress about this bring-my-own-network concept, and then make sure that I'm now interoperating with the public network, but it's my domain. I can create air gaps, I can create whatever security and policy around it. 
That is probably the power of 5G. Now take all of these tiny networks, big networks, put them all in one ecosystem. Call it the Amazon marketplace, call it the Amazon ecosystem; that's 5G. It's going to be a tremendous future. >> What does the future look like? We just determined we're going to be orchestrating the network through human language, okay? (group laughs) But seriously, what's your vision for the future here? >> You know, both connectivity and cloud are on a continuum. They've been on a continuum forever. They're going to continue to be on a continuum. That being said, those continuums are coming together, right? They're coming together to bring greater value to a greater set of customers, and frankly all of us. So, you know, the future is now. Like, you know, this conference is the future, and if you look at what's going on, it's about the acceleration of the future, right? What we announced this week is really the acceleration of listening to customers for the last handful of years. And we're going to continue to do that. We're going to continue to bring greater value in the form of solutions. And that's what I want to pick up on from the prior question. It's not about the network, it's not about the cloud; it's about the solutions that we can provide the customers where they are, right? And if they're on their mobile phone or they're on their factory floor, you know, they're looking to accelerate their business. They're looking to accelerate their value. They're looking to create greater safety for their employees. That's what we can do with these technologies. So in fact, when we came out with, you know, our announcement for integrated private wireless, right? It really was about industry solutions. It really isn't about, you know, the cloud or the network. It's about how you can leverage those technologies, that continuum, to deliver you value. 
>> You know, it's interesting you say that, 'cause again, when we were interviewing Adam Selipsky, everybody, you know, all the journalists and analysts wanted to know, how's Adam Selipsky going to be different from Andy Jassy, what's he going to do to change Amazon? And he said, listen, the real answer is Amazon has changed. If Andy Jassy were here, we'd be doing, you know, pretty much all the same things. Your point about 17 years ago, the cloud was S3, right, and EC2. Now it's got to evolve to be solutions. 'Cause if all you're selling is the bespoke services, then, you know, the future is not as bright as the past has been. And so I think it's key to look for what are those outcomes or solutions that customers require and how you're going to meet 'em. And there's a lot of challenges. >> You continue to build value on the value that you've brought, and you don't lose sight of why that value is important. You carry that value proposition up the stack, but what you're delivering, as you said, becomes maybe bigger or different. >> And you are getting more solution oriented. I mean, you're not hardcore solutions yet, but we're seeing more and more of that. And that seems to be a trend. We've even seen it in the database world, making things easier, connecting things. Not really an abstraction layer, which is sort of antithetical to your philosophy, but it creates a similar outcome in terms of simplicity. Yeah, you're smiling 'cause you guys always have a different angle, you know? >> Yeah, we've had this conversation. >> That's right. Jassy used to say it's okay to be misunderstood. >> That's right. For a long time. >> Yeah, right. Guys, thanks so much for coming to theCUBE. I'm so glad we could make this happen. >> It's always good. Thank you. >> Thank you so much. >> All right, Dave Nicholson, for Lisa Martin, Dave Vellante, John Furrier in the Palo Alto studio. We're here at the Fira, wrapping up MWC23. 
Keep it right there, thanks for watching. (upbeat music)
Tammy Whyman, Telco & Kurt Schaubach, Federated Wireless | MWC Barcelona 2023
>> Announcer: TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) (background indistinct chatter) >> Good morning from Barcelona, everyone. It's theCUBE live at MWC23, day three of our four days of coverage. Lisa Martin here with Dave Nicholson. Dave, we have had some great conversations. Can't believe it's day three already. Anything sticking out at you from a thematic perspective that really caught your eye the last couple of days? >> I guess I go back to kind of our experience with sort of the generalized world of information technology and a lot of the parallels between what's been happening in other parts of the economy and what's happening in the telecom space now. So it helps me understand some of the complexity when I tie it back to things that I'm aware of. >> A lot of complexity, but a big ecosystem that's growing. We're going to be talking more about the ecosystem next and what they're doing to really enable customers, CSPs, to deliver services. We've got two guests here. Tammy Whyman joins us, the global head of Partners, Telco at AWS, and Kurt Schaubach, CTO of Federated Wireless. Welcome to theCUBE, guys. >> Thank you. >> Thank you. >> Great to have you here, day three. Lots of announcements, lots of news at MWC. But Tammy, there's been a lot of announcements from partners with AWS this week. Talk to us a little bit more about, first of all, the partner program, and then let's unpack some of those announcements. One of them is with Federated Wireless. >> Sure. Yeah. So AWS created the partner program 10 years ago, when they really started to understand the value of bringing together the ecosystem. So, I think we're starting to see how this is becoming a reality. So now, 100,000 partners later, 150 countries, 70% of those partners are outside of the US. So truly the global nature, and partners being ISVs, GSIs. 
And then in the telco space, we're actually looking at how we help CSPs become partners of AWS and bring in new revenue streams. So that's how we started having the discussions around Federated Wireless. >> Talk a little bit about Federated Wireless, Kurt. Give the audience an overview of what you guys are doing, and then maybe give us some commentary on the partnership. >> Sure. So we're a shared spectrum and private wireless company, and we actually started working with AWS about five years ago to take this model that we developed to perfect the use of shared spectrum to enable enterprise communications, and bring the power of 5G to the enterprise, to bring it to all of the AWS customers and partners. So through that, now we're one of the partner network participants. We're working very closely with the AWS team on bringing this really unique form of connectivity to all sorts of different enterprise use cases, from solving manufacturing and warehouse logistics issues to providing connectivity to mines, enhancing the experience for students on a university campus. So it's a really exciting partnership. Everything that we deliver on an end-to-end basis, from design and deployment to bringing the infrastructure on-prem, all runs on AWS. (background indistinct chatter) >> So a lot of the conversations that we've had sort of start with this concept of the radio access network, and frankly, in at least the public domain, cellular sites. And so all of a sudden it's sort of grounded in this physical reality of these towers with these boxes of equipment on the tower, at the base of the tower, connected to other things. How does AWS and Federated Wireless, where do you fit in that model, in terms of equipment at the base of a tower versus having that be off-premises in some way or another? Kind of give us more of a flavor for the kind of physical reality of what you guys are doing? >> Yeah, I'll start. 
>> I'll hand it over to the real expert but from an AWS perspective, what we're finding is really I don't know if it's even a convergence or kind of a delaying of the network. So customers are, they don't care if they're on Wi-Fi if they're on public spectrum, if they're on private spectrum, what they want are networks that are able to talk to each other and to provide the right connectivity at the right time and with the right pricing model. So by moving to the cloud that allows us that flexibility to be able to offer the quality of service and to be able to bring in a larger ecosystem of partners as with the networks are almost disaggregated. >> So does the AWS strategy focus solely on things that are happening in, say, AWS locations or AWS data centers? Or is AWS also getting into the arena of what I would refer to as an Outpost in an AWS parlance where physical equipment that's running a stack might actually also be located physically where the communications towers are? What does that mix look like in terms of your strategy? >> Yeah, certainly as customers are looking at hybrid cloud environments, we started looking at how we can use Outpost as part of the network. So, we've got some great use cases where we're taking Outpost into the edge of operators networks, and really starting to have radio in the cloud. We've launched with Dish earlier, and now we're starting to see some other announcements that we've made with Nokia about having ran in the cloud as well. So using Outpost, that's one of our key strategies. It creates, again, a lot of flexibility for the hybrid cloud environment and brings a lot of that compute power to the edge of the network. >> Let's talk about some of the announcements. Tammy was reading that AWS is expanding, its telecom and 5g, private 5G network support. You've also unveiled the AWS Telco Network Builder service. Talk about that, who that's targeted for. What does an operator do with AWS on this? 
Or maybe you guys can talk about that together. >> Sure. Would you like to start? I can talk. All right. So for the Network Builder, I would say the persona that it's aimed at would be the network engineer within the CSPs. And there was a bit of a difficulty when you want to design a telco network on AWS versus the way that the network engineers would traditionally design. So I'm going to call them protocols, but I can imagine them saying, "I really want to build this on the cloud, but they're making me move away from my typical way of designing a network and move it into a cloud world." So what we did was really kind of create this template, saying, "You can build the network as you always do, and we are going to put the magic behind it to translate it into a cloud world." So just really facilitating and taking some of the friction out of the building of the network. >> What was the catalyst for that? I think Dish and Swisscom you've been working with, but talk about the catalyst for doing that and how it's facilitating change, because part of that's change management, with how network engineers actually function and how they work. >> Absolutely, yeah. We listen to customers, and we're trying to understand what are those friction points. What would make it easier? And that was one that we heard consistently. So we wanted to apply a bit of our experience and the way that we're able to use data, translate that using code, so that you're building a network in your traditional way, and then it kind of spits out the formula to build the network in the cloud. >> Got it. Kurt, I saw that there was just an announcement that Federated Wireless made with JBG Smith. Talk to us more about that. What will Federated help them to create, and how are you all working together? >> Sure. So JBG Smith is the exclusive redeveloper of an area just on the other side of the Potomac from Washington, DC, called National Landing. 
It's about half the size of Manhattan, so it's an enormous area that's getting redeveloped. It's the home of Amazon's new HQ2 location. And JBG Smith is investing, in addition to the commercial real estate, in digital placemaking: a place where people live, work, play, and connect. Part of that is bringing an enhanced level of connectivity to people's homes, their residences, and the enterprise, and private wireless is a key component of that. So when we talk about private wireless, what we're doing with AWS is giving an enterprise the freedom to operate a network independent of a mobile network operator. That means everything from the RAN to the core to the applications that run on this network is within the domain of the enterprise, merging 5G and edge compute and driving new business outcomes. That's really the most important thing. We can talk a lot about 5G here at MWC, but what the enterprise really cares about is new business outcomes: how do they become more efficient? That's really what private wireless helps enable. >> So help us connect the dots. When we talk about private wireless, we've definitely been in learning mode here. Well, I'll speak for myself, going around and looking at some of the exhibits and seeing how things work. And I know that I wasn't necessarily 100% clear on this connection between a 5G private wireless network today and where Wi-Fi still comes into play. So if I am a new resident in this area, happily living near the amazing new presence of AWS on the East Coast, and I want to use my mobile device, how am I connected into that private wireless network? What does that look like as a practical matter? >> So that example that you've just referred to is really something that we enable through neutral host. In fact, what we're able to do through this private network is also create carrier connectivity: basically create a pipe, almost, for the carriers to be able to reach a consumer device like that.
A lot of private wireless is also driving business outcomes with enterprises. For example, work that we're doing with Cal Poly out in California is to enable a new 5G innovation platform. This is driving all sorts of new 5G research and innovation with the university, and new applications around IoT. They need the ability to do that indoors and outdoors, in a way that's free from the domain of connectivity to a mobile network operator, and having the freedom and flexibility to do that, merging that with edge compute. Those are some really important components. We're also doing a lot of work in things like warehouses. Think of a warehouse as a very complex RF environment. You want to bring in robotics, you want to bring in better inventory management, and Wi-Fi just isn't an effective means of providing really reliable indoor coverage. You need more secure networks, you need lower latency and the ability to move more data around, again merging new applications with edge compute, and that's where private wireless really shines. >> So this is where we do the shout-out to my daughter Rachel Nicholson, who is currently a junior at Cal Poly San Luis Obispo. Rachel, get plenty of sleep and get your homework done. >> Lisa: She better be studying. >> I held up my mobile device, and I should have said, full disclosure, we have spotty cellular service where I live, so I think of this as a Wi-Fi connected device, in fact. So maybe I confused the issue, at least. >> Tammy, talk to us a little bit about the architecture, from an AWS perspective, that is enabling JBG Smith and Cal Poly. Is this an edge architecture? Give us a little bit more of an understanding of what that actually technically looks like. >> All right, I would love to pass this one over to Kurt. >> Okay. >> So, I'm sorry, just in terms of? >> Wanting to understand the AWS architecture. This is an edge-based architecture, hosted on what? On AWS Snow? Application storage?
Give us a picture of what that looks like. >> Right. So, I mean, the beauty of this is the simplicity of it. We're able to bring an AWS Snowball or Snowcone edge appliance that runs a packet core. We're able to run workloads on that locally, some applications, but we also obviously have the ability to bring that out to the public cloud. So depending on what the user application is, we look at anything from the AWS Snow family to Outposts, and develop templates or solutions depending on what the customer workloads demand. But the innovation that's happened, especially around the packet core, and how we can make that so compact and able to run on such a capable appliance, is really powerful. >> Yeah, and I will add that I think with the diversification of the different connectivity modules that we have, a lot of them have been developed because of the needs of the telco industry: the adaptation of Outposts to run at the edge, the Snow family. So the telco industry is really leading a lot of the developments that AWS takes to market, in the end, because of the nature of having to have networks that are able to disconnect, ruggedized environments, the latency, the numerous use cases that our telco customers are facing to take to their end customers. It really allows us to adapt and bring the right network to the right place and the right environment. And even for the same customer, they may have different satellite offices or remote sites with different connectivity needs. >> Right. So it sounds like that collaboration between AWS and telco is quite strong, and symbiotic, it sounds like. >> Tammy: Absolutely. >> So we talked about a number of the announcements; in our final minutes, I want to talk about integrated private wireless, which was just announced last week. What is that? Who are the users going to be? And I understand T-Mobile is involved there. >> Yes. Yeah.
So this is a program that we launched based on what we're seeing as kind of a convergence of the ecosystem of private wireless. We wanted to be able to create a program offering spectrum that is regulated as well, and we wanted to offer that in more of a multi-country environment. So we launched with T-Mobile, Telefónica, KDDI and a number of others, as a start, to begin bringing the regulated spectrum into the picture, as well as other ISVs who are going to be bringing unique use cases, so that when you look at, well, we've got the connectivity into this environment, the mine or the port, what are those use cases? So, ISVs who are providing maybe asset tracking or some of the health and safety applications, and we bring them in as part of the program. And I think an important piece is the actual discoverability of this, because when you think about it, if you're a buyer on the other side, where do I start? So we created a portal with this group of ISVs and partners, so that one could come and build out their needs, and then start picking through, and the ecosystem would be recommended to them. So it's really a way to discover, and to also procure, a private wireless network much more easily than could be done in the past. >> That's a great service. >> And we're learning a lot from the market. What we're doing together in our partnership, through a lot of these ruggedized, remote-location deployments, mines, clearing underbrush in forest areas to prevent forest fires, there's a tremendous number of applications for private wireless that the conventional carrier networks just aren't prioritized to serve, and you need a different level of connectivity. Privacy is a big concern as well, and data security: keeping data on premises, which is another big application that we're able to drive through these edge compute platforms. >> Awesome.
Guys, thank you so much for joining us on the program, talking about what AWS and Federated are doing together and how you're really helping to evolve the telco landscape and make life ultimately easier for all the Nicholsons to connect over Wi-Fi or private 5G. >> Keep us in touch. And from two Californians: you had us when you said clear the brush, prevent fires. >> You did. Thanks, guys, it was a pleasure having you on the program. >> Thank you. >> Thank you. >> Our pleasure. For our guests and for Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from our third day of coverage of MWC23. Stick around, Dave and I will be right back with our next guest. (upbeat music)
Daniel Heacock, Etix & Adam Haines, Federated Sample - AWS Re:Invent 2013 - #awsreinvent #theCUBE
Hi everybody, we are live at AWS re:Invent in Las Vegas. I'm Jeff Kelly with Wikibon.org. You're watching theCUBE, SiliconANGLE's premier live broadcast. We go out to the technology events and, as John Furrier likes to say, extract the signal from the noise. So being here at the AWS show, we're going to talk to a lot of AWS customers, in this case about what they're doing around analytics, data warehousing, and data integration. For this segment I'm joined by two customers: Daniel Heacock, senior business systems analyst with Etix, and Adam Haines, who's a data architect with Federated Sample. Welcome, guys. Thanks for joining us on theCUBE. Your first time, so we promise we'll make this as painless as possible. So you guys have a couple things in common. We were talking beforehand: some of the workflows are similar, you're both using Amazon Web Services' Redshift platform for data warehousing, you're using Attunity for some of the data integration to bring that in from your operational, transactional databases, and you're using a BI tool on top to tease out some of the insights from that data. But why don't we get started. Daniel, we'll start with you. Tell us a little bit about Etix, what you guys do, and then we'll get into the use cases and talk about AWS and Attunity and some of the other technologies you use. Sure, yeah. The company I work for is Etix. We are a primary-market ticketing company in the entertainment industry. We provide box office solutions to venues and venue owners, all types of events: casinos, fairs, festivals, pretty much you name it, and we sell tickets in that industry. We provide a software solution that enables those venue owners to engage their customers and sell tickets. So kind of a competitor to something like Ticketmaster, the behemoth in the industry? Definitely, Ticketmaster would be the behemoth in the industry, and we consider ourselves a smaller, sexier
version that's more friendly to the customer. Customer friendly, more agile. Absolutely. So Adam, tell us a little bit about Federated Sample. Sure, Federated Sample is a technology company in the market research industry, and what we aim to do is add an exchange layer between buyers and sellers. We facilitate the transaction: when a buyer, a company like Coke, says, "Hey, we need to do a survey," we will negotiate pricing and route our respondents to their surveys, and try to make that a more seamless process so they don't have to go out and find the respondents themselves; everything's online. Got it. So let's talk a little bit about AWS. Obviously we're here at re:Invent, a big show, 9,000 people here. You guys, you know, talk about agile, talk about cloud enabling innovation. Adam, I'm going to start with you. What brought you to AWS? Are you using Redshift? I think you mentioned you're all in the cloud. Just give us your impressions of the show and AWS, and what that's meant for your business. Right, the show's been great so far. We were originally on-premise entirely, at a data center out in California, and it just didn't meet our rapid growth. We're a smaller company, a startup, so we couldn't handle the growth. We needed something more elastic, more agile, so we ended up moving our entire infrastructure into Amazon Web Services. Then we found that we had a need to actually perform analytics on that data, and that's when we started the transition to Redshift. So the idea being you're moving data from your transactional system, which is also on AWS, into Redshift, using Attunity's CloudBeam solution for that. Talk a little bit about that, and how it's differentiated from some of the other integration methods you could have chosen. Right, so we started with a more conventional integration method, a homegrown solution, to move our data from our production SQL Server into Redshift, and it worked, but it was not
optimal. It didn't have all the bells and whistles, and it was prone to bad management, meaning not many people could configure it or knew how to use it. Then we saw CloudBeam from Attunity, and they offered a native solution using SQL Server replication that could tie into our native SQL Server and then push that data directly into CloudBeam at a very fast rate. So moving that data from SQL Server is essentially real-time replication. Yes. So that's moving the data into Redshift, so that your analysts, when they're doing the reporting or doing some real ad hoc queries, can be confident they've got the most up-to-date data from your transactional system. Right, yeah, nearly real-time. And just to put it in perspective, the reports that we were running on our other system were taking, you know, 10-15 minutes to run; in Redshift we're running those same reports in minutes, one or two minutes. Right, and if you're running those reports so quickly... people sometimes forget, when you're talking about real-time or interactive queries and reporting, it's only as good as the timeliness of the data you've got in that database, because if you're trying to make some real-time decisions and you've got a lag, depending on the workload and your use case, even 15 minutes to an hour of lag might really impact your ability to make those decisions. So Daniel, talk a little bit about your use case. Is it a similar cloud-to-cloud architecture, or, unlike Adam, are you moving from on-premise? So we're actually working with an on-premise data center; it's an Oracle database. We basically ran into two limitations, one regarding our current reporting infrastructure and the other our business intelligence capabilities. As an analyst, I've been tasked with creating internal feedback loops within our organization, as far as
delivering certain types of KPIs and metrics to inform our different teams: our operations teams, our marketing teams. That has been one of the BI aims that we've been able to achieve because of the replication and Redshift. The other is actually making our reporting more comprehensive. Now that we're using Redshift, we're able to run reports that we were previously not able to run on our on-premise transactional database. So really we're just embracing the power of Redshift, and it's enabling us in a lot of different ways. Yeah, we're hearing a lot about Redshift at the show. Amazon says it's the fastest-growing service AWS has had from a revenue perspective in its six-, seven-year history, so clearly there's a lot of power in that platform. It removes a lot of the concerns around having to manage that infrastructure, obviously, but the performance... when people have their own data centers, their own databases, tuning those for the type of performance you're looking for can be a challenge. Is that one of the drivers of your move to Redshift? Oh, for sure, the performance. I'm trying to think of a good example of a metric to compare, but it's basically enabled us to develop products that would not have been possible otherwise. The ability to crunch data, like you said, in a specific time frame is very important for reporting purposes, and if you're not able to meet a certain time frame, then a certain type of report is just not going to be useful. So it's opening the door for new types of products within our organization. Well, let's dig into that a little bit, the different data types we're talking about. At Etix you're talking about customer transactions, profiles of different types of customers. Tell us about some of the data sources that
you're moving from your transactional system, which I think is an Oracle database, to Redshift, and what are some of those analytic workloads? What kind of insights are you looking for? Sure. So we're in the business of selling tickets, or I should say we're in the business of helping our customers sell tickets, and so we're always trying to figure out ways to improve their marketing efforts. Marketing segmentation is one of the huge ones. Appending data from large data services in order to get customer demographic information is something that's easy to do in Redshift, and so we're able to use that information, transaction information, customer information, to better engage our fans. And likewise, Adam, could you maybe walk us through a use case? What types of data are you looking at that you're moving into Redshift with Attunity, and what kind of analytics are you doing on top of that? What kind of insights are you gathering? Right, so our data is a little bit different than ticketing, but what we ultimately capture is a respondent's answers to questions. We try to find the value in a particular set of answers so we can determine the quality of the supply that's sent from suppliers. So if they say that a person meets a certain demographic, we can actually verify that that person meets that demographic, and then we can help them improve the supply that they push down to that respondent, and everybody makes more money because the completion rates go up. So overall, it's business analysis on that type of information, so that we can help our customers and help ourselves. I wonder if we could talk a little bit about the BI layer on top as well. I think you're both using Jaspersoft, but beyond that, one of the topics we've been covering on theCUBE and on Wikibon is this whole analytics-for-all
movement. We've been hearing about self-service business intelligence for 20-plus years from some of the more incumbent vendors, like Business Objects and Cognos and others, but really, if you look at a typical enterprise, business intelligence usage or adoption kind of stalls out at eighteen, twenty percent. Talk about how you've seen this industry evolve a little bit, maybe talk about Jaspersoft specifically, but what are some of the things that you think have to happen, or some of the types of tools that are needed, to really make business intelligence more consumable for analysts and business users, people who are not necessarily trained in statistics, who aren't data scientists? Adam, want to start? Yes, so one of the things that we're doing with Jaspersoft is trying to figure out: we have APIs and we have traditional client-server applications, and which ones do our customers want to use the most? Because we're trying to push everybody towards an API orientation. So we're putting that data into Redshift with Jaspersoft and kind of flipping that data, looking at it year-to-date or over a period of time, to see where all of our money is coming from, where it's being driven from. And our business users are now empowered with Jaspersoft to do that themselves. They don't rely on us to pull data for them; they can just tie right into Jaspersoft, grab the data they need for whatever period of time they want, and look at it in a nice, pretty chart. Is that a similar experience you're having at Etix? Definitely, and one of the things I should emphasize about our use of Jaspersoft, and really any BI tool you choose to use on the Amazon platform, is just the ability to launch it almost immediately and be able to play with data within five to ten minutes of launching it. Yeah, it's pretty amazing how quickly things can go from just a thought into action. Well, that's a good point, because when you think about
not just business intelligence but the whole data warehousing world, the traditional method is that the business user or business unit goes to IT and says, "Here are the requirements, the metrics we want on these reports." IT then goes away and builds it, comes back six months later, twelve months later: "Here you go, here's the report." And the next thing you know, the business doesn't remember what they asked for, and it isn't necessarily going to serve their needs anymore. It's not a particularly useful model, and Amazon really helps you shorten that time frame significantly, it sounds like, between what you can do with Redshift and some of their other database products, and whatever BI tool you choose to use. Is that kind of how you see this evolving? Oh, definitely, and the kind of plug-and-play workflow is pretty amazing. It's given us the flexibility in our organization to be able to say, "Well, we can use this tool for now," and there's a chance we may decide there's something different in the future that we want to use and plug in in its place. We're confident that that product will be there whenever the need is there. Right, well, that's the other thing: you can start to use a tool, and if it doesn't meet your need, you can stop using it and move to another tool. I think that puts vendors like Jaspersoft and others on their toes; they've got to continually innovate and make their product useful, otherwise they know that AWS customers can simply press a button to stop using it and press another button to start using another tool. So I think it's good in that sense. But when you talk about cloud, and especially around data, you get questions around privacy and data ownership. Who owns the data if it's in Amazon's cloud? It's your data, but it's in their data centers. How do you feel about
that, Adam? Is there any concern around either privacy or data ownership when it comes to using the cloud? I mean, you guys are all in on the cloud. Right, yeah. So we've isolated a lot of our data into virtual private clouds, and with that segment of the network we feel much more comfortable putting our data in a public space, because we do feel like it's secure enough for our type of data. That was one of the major concerns up front, but after talking with Amazon and going through the whole process of migrating, we feel way more comfortable with it. Can you expand on that a little? So you've got a private instance, essentially, in Amazon's infrastructure? Right, we have a private subnet, a segmented piece of their network that's just for us. You can't access it publicly, only within our VPN client or within our infrastructure itself. We're segmented away from everybody else. Interesting. So they offer that type of service when there's more of a privacy or security concern. Definitely. And of course a lot depends on the type of data, how sensitive that data is. Personally identifiable data obviously is going to be more sensitive than general market data that anyone could potentially access. Daniel, talk about your concerns around that. Did you have concerns? It's definitely more of a governance, people-and-process question than a technology question... well, definitely a technology question too, to a certain extent. As a transaction-based business, we are obviously very concerned with security, and our CTO is very adamant about that, so that was one of the first issues we addressed when we decided to go this route. Obviously AWS has taken all the precautions; we have a very similar setup to what Adam is describing as far as our security. We are very confident that it is a very robust solution. So looking forward, how do you see your use of
both the cloud and analytics evolving? One of the things we've been covering a lot is that as use cases get more complex, you've got to orchestrate more data flows and move data to more places. You mentioned you're using Attunity to do some of that replication from your transactional database into Redshift. What are some of the other potential data integration challenges you see yourselves facing as you get more complex deployments, more data, maybe start using more services on Amazon? How do you look to tackle those integration challenges? That's a good question. One of the things we're trying to do inside our organization is bring data from all the different sources we have together. We use Salesforce for our sales team, we collect information from MailChimp and from our digital marketing agency, and we'd like to tie that information together. That's something we're working on, and Attunity has been a great help there. Their product development, as far as their capabilities of bringing in information from other sources, is growing, so we're confident that the demand is there and that the product will develop as we move forward. Well, it's interesting that we've got two gentlemen up here, one with an on-premise-to-cloud deployment and one all in the cloud. So clearly Attunity can bridge both of those: on-premise to cloud, but also working within the cloud environment. Adam, if you could talk a little bit about how you see this evolving as you get more complex, maybe bring in more systems. Are you looking to bring in more data sources, maybe even third-party, outside data sources? Right. Presently we do have a Mongo database, so we have other sources that
we're using now. There's talk of even trying to put that in DynamoDB, which is a native Amazon offering, and that ties directly into Redshift, so we could load that data directly into Redshift, using that key pair or however we want to use that type of data, as a data mart. But one of the things that we're trying to work out right now is just distribution, and, you know, being agile, elasticity. We work those issues with our growing database. Our database grows rather large each month, so working on scalability is our primary focus, but also other data sources: we look into other database technologies that we can leverage in addition to SQL Server to help distribute that load. So we've got time for just one more question. I always like to ask, when we get customers and users on, if you can give some advice to other practitioners who are watching. If you could give one piece of advice to somebody who might be in your position, who's looking at maybe an on-premise data warehouse, or who's just trying to figure out a way to make better use of their data, what would that one thing be? Would it be a technology piece of advice, maybe look at something like Redshift or solutions like Attunity, or maybe more of a cultural question around the use of data and making data-driven decisions? I'll put you on the spot. Okay, I would say don't try to do it yourself when the experts have done it for you. I couldn't put it any more simply than that. Very succinct, but very powerful. For me, my biggest takeaway would be just Redshift. I was kind of apprehensive to use it at first, I was so used to other technologies, but we can do so much with Redshift now at, you know, half the cost. So that works, it's pretty compelling. All right, fantastic. Adam Haines, Daniel Heacock, thank you so much for joining us on theCUBE. Appreciate it. We'll be right back with
our next guests. We're live here at AWS re:Invent in Las Vegas. You're watching theCUBE.
HelloFresh v2
>> Hello, and we're here at theCUBE Startup Showcase, made possible by AWS. Thanks so much for joining us today. You know, when Zhamak Dehghani was formulating her ideas around data mesh, she wasn't the only one thinking about decentralized data architecture. HelloFresh was going into hyper-growth mode and realized that, in order to support its scale, it needed to rethink how it thought about data. Like many companies that started in the early part of last decade, HelloFresh relied on a monolithic data architecture, and the internal team had concerns about its ability to support continued innovation at high velocity. The company's data team began to think about the future and work backwards from a target architecture which possessed many principles of so-called data mesh, even though they didn't use that term. Specifically, the company is a strong example of an early but practical pioneer of data mesh. Now, there are many practitioners and stakeholders involved in evolving the company's data architecture, many of whom are listed here on this slide. Two are highlighted in red and are joining us today. We're really excited to welcome into theCUBE Clemens, the Global Senior Director for Data at HelloFresh, and Christoph, who's also a Global Senior Director of Data there, of course. Folks, welcome. Thanks so much for making some time today and sharing your story. >> Thank you very much, Dave. >> All right, let's start with HelloFresh. You guys are number one in the world in your field. You deliver hundreds of millions of meals each year to many, many millions of people around the globe. You're scaling. Christoph, tell us a little bit more about your company and its vision. >> Yeah, should I start, or Clemens? Maybe take over the first piece, because Clemens has actually had a longer trajectory at HelloFresh. >> Yeah, go ahead, Clemens.
I mean, yes, approximately six years ago I joined HelloFresh, and I didn't think the startup I was joining would eventually IPO. And just two years later, HelloFresh went public, and approximately three years and 10 months after HelloFresh was listed on the German stock exchange — which was just last week — HelloFresh was included in the DAX, Germany's leading stock market index, and that, to my mind, is a great, great milestone, and I'm really looking forward, and I'm very excited for the future of HelloFresh and all our data. The vision that we have is to become the world's leading food solution group, and there are a lot of attractive opportunities. So recently we did launch and expand in Norway — this was in July — and earlier this year we launched the U.S. brand Green Chef in the U.K. as well. We're committed to launching continuously in different geographies in the coming years, and we have a strong pipeline ahead of us, with the acquisition of ready-to-eat companies like Factor in the U.S. and the planned acquisition of Youfoodz in Australia. We're diversifying our offering, now reaching even more and more untapped customer segments and increasing our total addressable market. So by offering customers a growing range of different alternatives to shop for food and consume meals, we are charging towards this vision and this goal to become the world's leading integrated food solutions group. >> Love it. You guys are on a rocket ship, you're really transforming the industry, and as you expand your TAM, it brings us to data as a core part of that strategy. So maybe you guys could talk a little bit about your journey as a company, specifically as it relates to your data journey. You began as a startup, you had a basic architecture like everyone, you made extensive use of spreadsheets, you built a Hadoop-based system that started to grow, and when the company IPO'd, you really started to explode.
So maybe describe that journey from a data perspective. >> Yes. So HelloFresh, by 2015 approximately, had evolved a kind of classical, centralized data management setup. We grew very organically over the years, and there were a lot of very smart people around the globe really building the company and building our infrastructure. This also means that there were a small number of internal and external data sources, and a centralized BI team with a number of people producing different reports, dashboards, and products for our executives, for example, or for our different operations teams, to check on the company's performance. And knowledge was transferred just via talking to each other, face-to-face conversations, and the people in the data warehouse team were considered the data wizards, or the ETL wizards. Very classical challenges. And those ETL wizards carried a kind of silent knowledge of data management, right? So a central data warehouse team then was responsible for different types of verticals, different domains, different geographies, and all this setup gave us, in the beginning, the flexibility to grow fast as a company in 2015. >> Christoph, anything you might add to that? >> Yes — not much to add to that one, but as Clemens said, right, this was a setup that actually worked for us quite a while. And then in 2017, when HelloFresh went public, the company also grew rapidly, and just to give you an idea how that looked: the tech department itself actually increased from about 40 people to almost 300 engineers, and in the same way the business units, as Clemens has described, also grew sustainably.
So we continued to launch HelloFresh in new countries, launching brands like EveryPlate, and we also acquired other brands, like Factor. And with that growth, also from a data perspective, the number of data requests that the central team was getting became more and more, and also more and more complex. So for the team that meant they had a fairly high mental load: they had to achieve a very deep understanding of the business, and they also suffered a lot from context switching, back and forth, essentially having to prioritize requests across our physical product, our digital product, from the marketing perspective, and also from the central reporting teams. And in a nutshell, this was very hard for these people, and it also led to a situation where, let's say, the solution that we had became not really optimal. So in a nutshell, the central function became a bottleneck and a slowdown for all the innovation in the company. >> It's a classic case, isn't it? I mean, Clemens, you see the central team become a bottleneck, and so the lines of business — the marketing team, the sales teams — say, okay, we're going to take things into our own hands, and then of course IT and the technical team is called in later to clean up the mess. Maybe I'm overstating it, but that's a common situation, isn't it? >> Yeah, this is exactly what happened. So we had a bottleneck, we had the central teams, and there was always a little tension. Analytics teams in those business domains — like marketing, supply chain, finance, HR, and so on — then started really building their own data solutions. At some point you have to get the ball rolling, right, and then continue the trajectory, which means that the data pipelines didn't meet the engineering standards, and there was an increased need for maintenance and support from the central teams.
Hence, over time, the knowledge about those pipelines, and how to maintain a particular infrastructure, for example, left the company, such that most of those data assets and datasets turned into a huge debt, with decreasing data quality, a lack of trust, decreasing transparency. And this was an increasing challenge, where the majority of time was spent in meeting rooms to align on data quality, for example. >> Yeah, and the point you were making, Christoph, about context switching — and this is a point that Zhamak makes quite often — is we've contextualized our operational systems, like our sales systems, our marketing systems, but not our data systems. So you're asking the data team: okay, be an expert in sales, be an expert in marketing, be an expert in logistics, be an expert in supply chain. And it's start, stop, start, stop — it's a paper-cut environment, and it's just not as productive. But on the flip side of that is, when you think about a centralized organization, you think, hey, this is going to be a very efficient way, a cross-functional team, to support the organization. But it's not necessarily the highest-velocity, most effective organizational structure. >> Yeah, so I agree with that. Up to a certain scale, a centralized function has a lot of advantages, right? That's clear for everyone who would go to some kind of expert team. However, if you see that you actually would like to accelerate, and specifically in this hyper-growth, right, you want to actually have autonomy in certain teams, and move the teams — or let's say the data — to the experts in those teams. And this, as you have mentioned, increases mental load, and you can either internally start splitting your team into different kinds of sub-teams focusing on different areas —
however, that is then again just adding another piece where collaboration needs to happen across team boundaries. So why not bridge that gap immediately and actually move these teams, end-to-end, into the functions themselves? So maybe just to continue with what Clemens was saying — and this is actually where Clemens's journey and my journey started to become one joint journey. Clemens was coming from one of these teams that built their own solutions. I was basically heading the platform team, called Data Warehouse in those days, and in 2019, when the situation became more and more serious, I would say, more and more people recognized that this model doesn't really scale. In 2019, basically the leadership of the company came together and identified data as a key strategic asset. And what we mean by that: if we leverage data in a proper way, it gives us a unique competitive advantage, which could help us to support and actually fully automate our decision-making process across the entire value chain. So what we're trying to do now, or what we should be aiming for, is that HelloFresh is able to build data products that have a purpose. We're moving away from the idea that data is just a byproduct of our products — we have a purpose why we would like to collect this data; there's a clear business need behind it. And because it's so important for the company as a business, we also want to provide it as a trustworthy asset to the rest of the organization. We aim for the best customer experience, but at least in a way that users can easily discover, understand, and securely access high-quality data. >> Yeah. So, Clemens, when you see Zhamak's writing, you see she has the four pillars and the principles, and as practitioners you look at that and say, okay, hey, that's pretty good thinking — and now we have to apply it, and that's where the devil meets the details.
So it's the four, you know: decentralized data ownership; data as a product, which we'll talk about a little bit; self-serve, which you guys have spent a lot of time on; and, Clemens, your wheelhouse, which is governance, and a federated governance model. And it's almost like, if you achieve the first two, then you have to solve for the second two — it almost creates new challenges. But maybe you could talk about that a little bit as it relates to HelloFresh. >> Yes. So Christoph mentioned the challenge we identified beforehand: how can we actually decentralize and actually empower our different colleagues? We realized that it was more an organizational or a cultural change — and this is something that was also mentioned, I think Thoughtworks mentioned it in one of the white papers: it's more of an organizational or cultural impact. And we kicked off a phased reorganization — we're currently still in the middle of it — but we kicked off different phases of organizational restructuring, of reorganization, to try to unlock this data at scale. And the idea was really moving away from ever-growing, complex matrix organizations or matrix setups, and splitting between two different things. One is value creation — so basically, when people ask the question, what can we actually do, what shall we do, this is value creation — and the other is how, which is capability building. And both are equal in authority. This actually creates a high urge for collaboration, and this collaboration breaks up the different silos that were built. And of course, this also includes different staffing needs for those teams: staffing with more, let's say, data scientists or data engineers, data professionals, into those business domains, and hence also more capability building. Okay — >> Go ahead, sorry. >> So, back to Zhamak Dehghani.
The idea also then crossed over when she published her papers in May 2019, and we thought, well, the four pillars that she described — decentralized data ownership, a data-as-a-product mindset, self-serve infrastructure and, as you mentioned, federated computational governance — suited our thinking at that point in time very much, to reorganize the different teams. And this then leads to not only an organizational restructure, but also a completely new approach to how we need to manage and share data. >> Got it. Okay, so your business is exploding, your data team would have to become domain experts in too many areas, constantly context switching, as we said, and people started to take things into their own hands. So again, as we said, a classic story — but you didn't let it get out of control, and that's important. So we actually have a picture of kind of where you're going today, and it's evolved into this. Pat, if you could bring up the picture with the elephant — here we go. So I'll talk a little bit about the architecture. It doesn't show here the spreadsheet era, but Christoph, maybe you can talk about that. It does show the Hadoop monolith, which exists today — I think that's in a managed hosting service — and you preserved that piece of it. But if I understand it correctly, everything is evolving to the cloud; I think you're running a lot of this, or all of it, in AWS. Everybody's got their own data sources, you've got a data hub, which I think is enabled by a master catalog for discovery, and all this underlying technical infrastructure that is really not the focus of this conversation today. But the key here, if I understand it correctly, is that these domains are autonomous, and not only did this require technical thinking, but really a supportive organizational mindset, which we're going to talk about today.
But Christoph, maybe you could address, you know, at a high level, some of the architectural evolution that you guys went through. >> Yeah, sure. Maybe it's also a good summary of the entire history. So, as you have mentioned, we started in the very beginning with a monolith on the operational plane, right? Actually, it wasn't just one monolith — there was one for the back end and one for the front end — and our analytical plane was essentially a couple of spreadsheets. And I think there's nothing wrong with spreadsheets, right? They allow you to store information, they allow you to transform data, they allow you to share this information, they allow you to visualize this data — but all of that without actually separating concerns, right? Everything in one tool. And this means that it's obviously not scalable: you reach the point where this kind of data management setup in one tool reaches its limits. So what we then did is, we created our data lake, as you've seen here on the slide. And at the very beginning this actually reflected very much our operational plane. On top of that we used Impala as a data warehouse, but there was not really a distinction between what's our data warehouse and what's our data lake — Impala was used as the engine to create both the warehouse and the data lake construct itself. And this organic growth actually led to a situation, as I think is clear now, where we had centralized models for all the domains that weren't really following Kimball modeling standards. There was no uniformity; we actually built in-house ways of building materialized views that we used for the presentation layer; there was a lot of duplication of effort; and in the end, essentially, there were missing feedback loops, which would have helped us to improve what we had built.
So in the end, in a nutshell, as we have said, there was a lack of trust, and that was basically the starting point for us to understand: okay, how can we move away from this? And there are a lot of different things you can discuss, apart from this organizational structure that we have set up — okay, we have these three or four pillars from data mesh — however, there's also the next question around how we implement it, talking about the actual architecture, right? What are the implications on that level? And I think that is something we are currently still working through. >> Got it. Okay, so I wonder if we could switch gears a little bit and talk about the organizational and cultural challenges that you faced. What were those conversations like? Let's dig into that a little bit — I want to get into governance as well. >> The conversations on the cultural change — I mean, yes, we went through hyper-growth over the last years, and obviously there were a lot of new joiners, a lot of different, very, very smart people joining the company, which then meant that collaboration got a bit more difficult. Of course, there were, at times, changes; you have different artifacts that were created and documentation that was flying around. So we had to build the company from scratch, right? Of course, this then always resulted in this tension which I described before. But the most important part here is that data has always been a very important factor at HelloFresh, and we collected more of this data and continued to use data to improve the different key areas of our business. Even through organizational struggles — the central organizational struggles — data somehow always helped us to go through this kind of change.
In the end, those decentralized teams in our local geographies started with solutions that served the business, which was very, very important — otherwise we wouldn't be at the place where we are today — but they did violate best practices and standards. And I always use a sports analogy, Dave. Like any sport, there are different rules and regulations that need to be followed. These rules are defined by, call it, the sports association, and this is what you can think of as the data governance and compliance team. Now we add the players, who need to follow those rules and abide by them — this is what we then call data management. Now we have the different players and professionals; they need to be trained and to understand the strategy and the rules before they can play — and this is what I then call data literacy. So we realized that we need to focus on helping our teams develop those capabilities and teach the standards for how work is being done, to truly drive functional excellence in the different domains. And one of the missions of our data literacy program, for example, is to really empower every employee at HelloFresh — everyone — to make the right data-informed decisions, by providing data education that scales. And this can be different things, including data capabilities, with learning paths, for example, right? So help them to create and deploy data products, connecting data producers and data consumers, and create a common sense and more understanding of each other's dependencies, which is important for, for example, SLAs, SLOs, data contracts, etcetera. People get more of a sense of ownership and responsibility. Of course, we have to define what ownership means and what responsibility means, but we're teaching this to our colleagues via individual learning paths and helping them upskill to use
also the shared infrastructure and those self-service applications. And overall, to summarize, we're still in this process of learning — we are still learning as well; learning never stops at HelloFresh — but we are really trying to make it as much fun as possible. And in the end, we all know user behavior is changed through positive experience. So instead of having massive training programs, or endless courses and workshops, leaving our new joiners and colleagues confused and overwhelmed, we're applying gamification, right? So, different levels of certification, where our colleagues have access points, and they can earn badges along the way, which then simplifies the process of learning and the engagement of the users. And this is what we see in surveys, for example, where our employees like this gamification approach a lot, and are even competing to collect those learning-path badges to become number one on the leaderboard. >> I love the gamification. We've seen it work so well in so many different industries, not the least of which is crypto. So you've identified some of the process gaps that you saw, and you didn't just gloss over them — sometimes I say, pave the cow path. You didn't try to force, in other words, the new architecture into the legacy processes. You really had to rethink your approach to data management. So what did that entail? >> To rethink the way of data management — 100%. So if I take the example of the Industrial Revolution, or a classical supply chain revolution: just imagine that you have been riding a horse, for example, your whole life, and suddenly you can operate a car, or you suddenly receive a completely new way of transporting assets from A to B. So we needed to establish a new set of cross-functional business processes to run faster, drive faster, more robustly, and deliver data products which can be trusted and used by downstream processes and systems.
Hence we had a set of new standards and new procedures that would fall into the internal data governance and compliance sector — with "internal" I'm always referring to the data operations — around new things like the data catalog: how to identify ownership, how to change ownership, how to certify data assets, everything around classical software development, which we now apply to data. This is a similar new way of thinking, right? Deployment, versioning, QA, all the different things — ingestion policies, processing procedures — all the things that software development has been doing, we now do with data as well. And in simple terms, it's a whole redesign of the supply chain of our data, with new procedures and new processes for its creation, its management, and its consumption.
Okay, it's it's a solution to a customer problem that delivers ideally maximum value to the business. And yes, it leverages the power of data and we have a couple of examples but it had a fresh year, the historical and classical ones around dashboards for example, to monitor or error rates but also more sophisticated ways for example to incorporate machine learning algorithms in our recipe recommendations. However, I think the important aspects of the data product is a there is an owner, right? There's someone accountable for making sure that the product that we are providing is actually served and is maintained and there are, there is someone who is making sure that this actually keeps the value of that problem thing combined with the idea of the proper documentation, like a product description, right that people understand how to use their bodies is about and related to that peace is the idea of it is a purpose. Right? You need to understand or ask ourselves, Okay, why does this thing exist does it provide the value that you think it does. That leads into a good understanding about the life cycle of the data product and life cycle what we mean? Okay from the beginning from the creation you need to have a good understanding, we need to collect feedback, we need to learn about that. We need to rework and actually finally also to think about okay benefits time to decommission piece. 
So overall, I think the core of the data product is product thinking 101, right? The starting point needs to be the problem, and not the solution — and this is essentially what was missing, and what brought us to this kind of data spaghetti that we had built up. Essentially, we built certain data assets, developed in isolation, and continuously patched the solution just to fulfill the requests that we got, without a real understanding of the stakeholder needs. And the interesting piece is that this results in duplication of work, and that is not just frustrating, and probably not the most efficient way for the company to work — but also, if I build the same data assets, with slightly different assumptions, across the company in multiple teams, that leads to data inconsistency. And imagine the following: from a management perspective, you're asking a specific question, and you get, essentially, from a couple of different teams, different kinds of graphs, different kinds of data and numbers, and in the end you do not know which ones to trust. So there's actually much more ambiguity, and you do not know: is what I'm observing noise, or is there actually the signal that I'm looking for? And the same goes if I'm running an A/B test, right? I have a new feature, and I would like to understand what the business impact of this feature has been. I run that against a specific source. In an unfortunate scenario, your production system is actually running on a different source, and you see different numbers — what you've seen in the A/B test is actually not what you then see in production. The typical thing then is, you're asking some analytics team to do a deep dive to understand where the discrepancies are coming from. The worst-case scenario: again, they use yet a different kind of source.
So in the end, it's a pretty frustrating scenario, and that's actually a waste of the time of the people who have to identify the root cause of this divergence. So, in a nutshell, the highest degree of consistency is actually achieved when people are just reusing data assets. And, as in the talk that we have given, we started trying to establish this approach for A/B testing. So we have a team that is just providing — or kind of owns — the target metric associated with the business teams, and they're providing that as a product, also to other services, including the A/B testing team. The A/B testing team can use this information; it defines an interface, saying, okay, I'm joining this information with the metadata of an experiment, and in the end, after the assignment, after this data collection phase, they can easily add a graph to the dashboard, just grouping by the experiment variant. And we have seen that in other companies as well, so it's not just a nice dream that we have, right? I have actually worked in other companies where we worked on search, and we established a complete KPI pipeline that was computing all this information, and this information was hosted by the team and was used for everything: A/B tests, deep dives, and regular reporting. So, just one more second on the important piece — why I'm coming back to that is that it requires that we are treating this data as a product, right? If you want multiple people using the things that I am owning and building, we have to provide it as a trustworthy asset, and in a way that it's easy for people to discover and actually work with. >> Yeah. And coming back to that — so this is, to me, why I get so excited about data mesh, because I really do think it's the right direction for organizations. When people hear "data product," they say, well, what does that mean?
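The A/B-testing flow Christoph describes — one team owning a target metric as a product, and the experimentation team joining it with experiment-assignment metadata and grouping by variant — can be sketched roughly like this. The data, user IDs, and variant names are all invented for illustration:

```python
from collections import defaultdict

# The metric-owning team publishes one number per customer
# (e.g. orders placed), as a reusable data product.
target_metric = {"u1": 3, "u2": 1, "u3": 4, "u4": 2}

# The A/B-testing team owns the experiment-assignment metadata.
assignments = {"u1": "control", "u2": "control",
               "u3": "treatment", "u4": "treatment"}

def metric_by_variant(metric: dict, assignment: dict) -> dict:
    """Join the shared metric with experiment assignments,
    then average per variant — the 'group by' step described above."""
    totals, counts = defaultdict(float), defaultdict(int)
    for user, variant in assignment.items():
        if user in metric:          # inner join on the user key
            totals[variant] += metric[user]
            counts[variant] += 1
    return {v: totals[v] / counts[v] for v in totals}

print(metric_by_variant(target_metric, assignments))
# {'control': 2.0, 'treatment': 3.0}
```

Because both the test and the production dashboard read the same owned metric, the discrepancy scenario described above — test and production computed from different sources — cannot arise.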
But then, when you start to sort of define it as you did — it's using data to add value. That could be cutting costs, that could be generating revenue, it could be that you're actually directly creating a product that you monetize, so it's sort of in the eyes of the beholder. But I think the other point we've made — you made it earlier on, too — is, again, context. So when you have a centralized data team, and you have all these P&L managers, a lot of times they'll question the data, because they don't own it. They're like, wait a minute — if it doesn't agree with their agenda, they'll attack the data. But if they own the data, then they're responsible for defending it, and that is a mindset change that's really important. And I'm curious how you got to that ownership. Was it top-down, with somebody providing leadership? Was it more organic, bottom-up? Was it a sort of a combination? How did you decide who owned what — in other words, how did you get the business to take ownership of the data, and what does owning the data actually mean? >> That's a very good question, Dave. I think this is one of the pieces where we have a lot of learnings, and basically, if you ask me how we could start, I think that would be the first piece: we really need to think about how it should be approached. Let's start with ownership. It means, somehow, that the team has a responsibility to host and serve the data, up to minimum acceptable standards, with minimum dependencies upstream and downstream. The interesting piece, looking backwards at what happened, is that the process we had to go through is not actually transferring ownership from the central team to the distributed teams,
In most cases it is actually to establish ownership. I make this difference because saying we have to transfer ownership would erroneously suggest that the data set was owned before. The platform team, yes, they had the capability to make the changes on data pipelines, but actually the analytics teams were always the ones who had the business understanding and the use cases, but no one actually owned it; it was just implicitly expected. So we had to go through this very lengthy process of establishing ownership. In the beginning we did that very naively. We started with: here's a document with all the data assets, who is probably the nearest neighbor who can actually take care of that, and then we moved it over. But the problem here is that all these things are kind of technical debt, right? It's not really properly documented, pretty unstable, it was built in a very inconsistent way over the years, and the people who built these things have already left the company. So it's actually not a nice thing that you want to receive, and people build up a certain resistance, even if they have actually bought into this idea of domain ownership. So if you ask me about these learnings: what needs to happen first is that the company needs to really understand what their core business concepts are. They need to have this mapping from, these are the core business concepts that we have, these are the domain teams who are owning these concepts, and then actually link that to the assets and iterate, with both understanding how we can evolve the data assets and build new things in the domain, but also how we can address the reduction of technical debt and stabilize what we have already. >>Thank you for that, Christoph. So I want to turn in a direction here and talk about governance, and I know that's an area that you're passionate about.
I pulled this slide from your deck, which I kind of messed up a little bit, sorry for that. But by the way, we're going to publish a link to the full video that you guys did, so we'll share that with folks. It's one of the most challenging aspects of data mesh: if you're going to decentralize, you quickly realize this could be the Wild West, as we talked about, all over again. So how are you approaching governance? There's a lot of items on this slide that underscore the complexity, whether it's privacy, compliance, etcetera. So how did you approach this? >>Yeah, it's about connecting those dots, right? So the aim of the data governance program is the autonomy of every team while still ensuring that everybody has the right interoperability. When we want to move from the Wild West, riding horses, to a civilised way of transport, you can take the example of modern street traffic: all participants can manoeuvre independently, and as long as they follow the same rules and standards, everybody can remain compatible with each other and understand and learn from each other, so we can avoid car crashes. So when I go from country to country, I do understand what the street infrastructure means, how to drive my car, and I can also read the traffic lights and the different signals. Likewise, as a business at Hello Fresh, we do operate autonomously and consequently need to follow those external and internal rules and standards set forth by the jurisdictions in which we operate. So in order to prevent a car crash, we need to at least ensure compliance with regulations, to account for society's and our customers' increasing concern with data protection and privacy.
So teaching and advocating this, evangelizing this to everyone in the company, was a key communication strategy. And of course, I mentioned data privacy and external factors; the same goes for internal regulations and processes, to help our colleagues adapt to this very new environment. So when I mentioned before the new way of thinking, the new way of dealing with and managing data, this of course implies that we need new processes and regulations for our colleagues as well. In a nutshell, then, this means that data governance provides a framework for managing our people, processes, technology and culture around our data. And those components must come together in order to have this effective program, providing at least a common denominator, which is especially critical for shared datasets, which we have across our different geographies, managed and shared on shared infrastructure and applications, and then consumed by centralized processes, for example master data and all the metrics and KPIs, which are also used for central steering. It's a big change, Dave, right? And our ultimate goal is to have this non-invasive, federated, automated and computational governance, and for that we can't just talk about it. We actually have to go deep, work use case by use case and PoC by PoC, and generate learnings with the different teams. And this would be a classical approach of identifying the target structure, the target status, matching it with the current status, identifying it together with the business teams, with the different domains, and having a risk assessment, for example, to increase transparency, because a lot of teams might not even know what kind of situation they might be in.
And this is where this training and this piece of data literacy comes into place, where we go in and train, based on the findings, based on the most valuable use cases, and based on that, help our teams to make this change, to increase their capability just a little bit more, with some hand-holding, but a lot of guidance. >>Can I add something quickly, Dave, if you allow me? I mean, there's a lot to the governance piece, but I think this is important. If you're talking about documentation, for example, yes, we can go from team to team and tell these people how you have to document your data in the data catalog, or that you have to establish data contracts, and so on and so forth. But if you would like to build data products at scale, following actual governance, we need to think about automation, right? We need to think about a lot of things that we can learn from engineering. And that starts with simple things: if we would like to build up trust in our data products, and actually want to apply the same rigor and the best practices that we know from engineering, there are things that we can do, and we should probably think about what we can copy. One example might be service level agreements, service level objectives, service level indicators, right? On an engineering level, if we're providing services, they represent the promises we make to our customers or consumers; the objectives are the internal targets that help us to keep those promises, and these are the way we track how we are doing. And this is just one example of where the federated computational governance comes into play, right? In an ideal world, we should not just talk about data as a product, but also data product as code. That is to say, as much as possible, right?
Give the engineers the tools that they are familiar with, and actually not ask the product managers, for example, to document their data assets in the data catalog, but make it part of the configuration. Have this as CI/CD, a continuous delivery pipeline, as we typically see in other engineering tasks, tools and services. We say, okay, there is configuration; we can think about PII, we can think about data quality monitoring, we can think about the ingestion, the data catalog, and so on and so forth. Ideally, the data product becomes a certain template that can be deployed, and is actually verified or rejected at build time, before we actually deploy it to production. >>Yeah, so it's like DevOps for data products. So I'm envisioning almost a three-phase approach to governance, and it sounds like you're in the early phases, call it phase zero, where there's learning, there's literacy, there's training and education, there's kind of self-governance and then some oversight, a lot of manual stuff going on, and then you put process builders in place at this phase, and then you codify it, and then you can automate it. Is that fair? >>Yeah, I would rather think about automation as early as possible on the way. And yes, there need to be certain rules, but then actually start use case by use case: is there any small piece that we can already automate? If possible, roll that out, and then actually extend it step by step. >>Is there a role, though, that adjudicates that? Is there a central chief data officer who is responsible for making sure people are complying, or how do you handle that? >>I mean, from a platform perspective, yes, we have a centralized team to implement certain pieces that we're saying are important and that we would actually like to implement. However, that team is actually working very closely with the governance department.
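The "data product as code" idea described here, a product descriptor that lives next to the pipeline code and is verified or rejected at build time, could be sketched roughly as follows. This is only an illustrative sketch: the field names (owner, SLOs, PII flags) are assumptions for the example, not Hello Fresh's actual configuration schema.

```python
# Hypothetical sketch of a build-time check for a "data product as code"
# descriptor. In a CI/CD pipeline this would run before deployment and fail
# the build if the descriptor violates governance policy. All field names
# here are illustrative assumptions, not an actual production schema.

REQUIRED_FIELDS = {"name", "owner", "description", "slos", "columns"}

def validate_data_product(spec):
    """Return a list of policy violations; an empty list means deployable."""
    errors = []
    missing = REQUIRED_FIELDS - spec.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    # Governance policy: every column must declare whether it contains PII.
    for col in spec.get("columns", []):
        if "pii" not in col:
            errors.append(f"column {col.get('name')!r} lacks a pii flag")
    # Promise to consumers: at least a freshness SLO must be declared.
    if "max_staleness_hours" not in spec.get("slos", {}):
        errors.append("no freshness SLO (max_staleness_hours) declared")
    return errors

spec = {
    "name": "experiment_metrics",
    "owner": "analytics-domain-team",
    "description": "Daily per-variant business metrics for A/B tests.",
    "slos": {"max_staleness_hours": 24, "min_completeness": 0.99},
    "columns": [
        {"name": "experiment_id", "pii": False},
        {"name": "variant", "pii": False},
        {"name": "conversion_rate", "pii": False},
    ],
}

print(validate_data_product(spec))  # prints [] : this descriptor would pass CI
```

The same kind of check extends naturally to the other concerns mentioned, such as data quality monitoring, catalog registration and ingestion configuration, so that a product is rejected at build time rather than discovered broken in production.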
So it's Clements' piece to understand and define the policies that need to be implemented. >>So, Clements, essentially it's your responsibility to make sure that the policy is being followed, and then, as you were saying, Christoph, trying to compress the time to automation as fast as possible. >>A hundred percent. >>What needs to be really clear is that it's always a split effort, right? So you can't just do one thing or the other; everything really goes hand in hand, because for the right automation, for the right engineering tooling, we need to have the transparency first. I mean, the policies need to be codified, so we kind of need to operate on the same level, with the right understanding. So there are actually two things that are important: one is policies and guidelines, but not only that, because equally important is to align with the end users and tech teams and engineering, and really bridge between the business value, the business teams and the engineering teams. >>Got it. So, just a couple more questions, because we've got to wrap. I want to talk a little bit about the business outcome. I know it's hard to quantify, and I'll talk about that in a moment, but major learnings: we've got some of the challenges that you cited, I'll just put them up here. We don't have to go into detail on this, but I just wanted to share them with folks. But my question, I mean, this is the advice-for-your-peers question: if you had to do it differently, if you had a do-over, or a Mulligan, as we like to say for you golfers, what would you do differently? >>Yeah, I mean, let me start with the transformational challenge: understanding that it also carries a high load of cultural change. I think this is important: a proper communication strategy needs to be put into place, and people really need to be supported, right?
So it's not that we go in and say, well, we have to change towards data mesh. Naturally, it's in human nature, you know, we're kind of resistant to change, right? Change is uncomfortable. So we need to take that away by training and by communicating. Christoph, do you want to add something to that? >>Definitely. I think the point that I have also made before, right: we need to acknowledge that data mesh is an architecture of scale. It's something which is necessary for huge companies who are building data products at scale. I mean, Dave, you mentioned it, right: there are a lot of advantages to having a centralized team, but at some point it may make sense to actually decentralize. And at this point, if you think about data mesh, you have to recognize that you're not building something on a green field. And I think there's a big learning, which is also reflected here on the slide: don't underestimate your baggage. Typically, you come to a point where the old model doesn't work anymore, and at Hello Fresh, right, we lost our trust in our data, and actually we have seen certain risks that were slowing down our innovation, so this was triggering the need to actually change something. This transition implies that you typically have a lot of technical debt accumulated over the years, and I think what we have learned is that potentially we decentralized some assets too early, not actually taking into account the maturity of the teams to which we distributed them, and now we are actually in the phase of correcting pieces of that, right? But I think if you start from scratch, you have to understand: okay, are my teams actually ready for taking on these new capabilities? And you have to make sure that, before decentralization, you build up these capabilities in the teams. And, as Clements has mentioned, right, make sure that you take the people on your journey.
I think these are the pieces. Also, here it comes with this knowledge gap, right? So we need to think about hiring and literacy, and the technical debt I just talked about. And I think the last piece that I would add now, which is not here on the slide deck, is also, from our perspective, that we started on the analytical layer, because that's kind of where things were exploding, right? This is the thing where people feel the pain. But through a lot of the efforts we have started, to actually modernize the current state towards data products, towards data mesh, we've understood that it always comes down, basically, to a proper shape of our operational plane. And I think what needs to happen is, we got through a lot of pains, but the learning here is that there needs to really be a commitment from the company; it needs to happen, and to act on it. >>I think that last point you made is so critical, because I hear a lot from the vendor community about how they're going to make analytics better, and that's not unimportant. But through data product thinking, decentralized data organizations really have to operationalize in order to scale. So these decisions around data architecture and organization, they're fundamental and lasting. It's not necessarily about an individual project or ROI; there are going to be projects and sub-projects within this architecture. But the architectural decision itself is organizational, it's cultural: what's the best approach to support your business at scale? It really speaks to what you are, who you are as a company, how you operate, and getting that right, as we've seen in the success of data-driven companies, yields tremendous results. So I'll ask each of you to give us your final thoughts, and then we'll wrap. >>Maybe quickly, please. Yeah, maybe just jumping on this piece that you have mentioned, right: the target architecture.
If we talk about these pieces, right, people often have this picture in mind, like, okay, there are different kinds of stages: we have sources, we have an ingestion layer, we have a storage, transformation and presentation layer, and then we're basically putting a lot of technology on top of that; that's kind of our target architecture. However, I think what we really need to make sure of is that we have these different kinds of views, right? We need to understand what are actually the capabilities that we need for our new goals, how it looks and feels from the different personas' and experience point of view, and then, finally, that should actually lead to the target architecture from a technical perspective. Maybe just to give an outlook on what we're planning to do, how we want to move that forward: based on our strategy, we would like to increase the data maturity as a whole across the entire company, and this is kind of a framework around the business strategy, and it breaks down into four pillars as well. People, meaning the data culture, data literacy, data organizational structure, and so on. Then we're talking about governance, as Clements has actually mentioned: compliance, governance, data management, and so on. We talk about technology, and I think we could talk for hours about that one: it's around the data platform, the data science platform. And then, finally, it's also about enablement through data, meaning we need to think about data quality, data accessibility, data science and data monetization. >>Great. Thank you, Christoph. Clements, bring us home; give us your final thoughts.
>>I can just agree with Christoph that it's important to understand what kind of maturity people have, to understand what the maturity level is, where the company, the people, the organization are, and really understand what kind of change applies to those four pillars, for example, and what needs to be taken on first. And this is not very clear from the very beginning. Of course, it's kind of like greenfield: you come up with must-wins, you come up with things that you really want to do, out of theory and out of different white papers. Only if you really start conducting the first initiatives do you understand: okay, where do we have to start, and where did I miss out on one of those four different pillars, people, process, technology and governance, right? And then do that kind of integration. Going step by step, small steps by small steps, not boiling the ocean, that's where you're capable and ready to identify the gaps, and see where either you can fill the gaps, or where you have to increase maturity first and train people, or improve your tech stack. >>You know, Hello Fresh is an excellent example of a company that is innovating. It was not born in Silicon Valley, which I love. It's a global company. And I've got to ask you guys, it seems like this is an amazing place to work. You guys hiring? >>Yes, >>definitely. We do. As Clements rightly said, that was one of these aspects of distributing, and actually we are hiring as an entire company, specifically for data. I think there are a lot of open roles, seriously. Please visit our page, from data engineering to data product management, and Clements has a lot of roles that he can speak about. But yes. >>Guys, thanks so much for sharing with theCUBE audience. You're pioneers, and we look forward to collaborations in the future, to track progress, and really want to thank you for your time. >>Thank you very much. >>Thank you very much,
Dave. >>Thank you for watching theCUBE's Startup Showcase, made possible by AWS. This is Dave Vellante. We'll see you next time. >>Yeah.
Ben Amor, Palantir, and Sam Michael, NCATS | AWS PS Partner Awards 2021
>>Mhm. Hello and welcome to theCUBE's coverage of the AWS, Amazon Web Services, Global Public Sector Partner Awards program. I'm John Furrier, your host of theCUBE. Here we're going to talk about the best COVID solution, with two great guests: Ben Amor, healthcare and life sciences lead at Palantir. Ben, welcome to theCUBE. And Sam Michael, director of automation and compound management at NCATS, the National Center for Advancing Translational Sciences, part of the NIH, the National Institutes of Health. Gentlemen, thank you for coming on, and congratulations on the best COVID solution. >>Thank you so much, John. >>So I've got to ask you: the best solution is, when can I get the vaccine, how fast, and how long is it going to last? But I really appreciate you guys coming on. >>I hope you're vaccinated. I would say, John, that's outside of our hands. I would say, if you've not got vaccinated, go get vaccinated right now, have someone stab you in the arm, you know, do not wait, and go for it. That's not on us. >>But you've got that opportunity; we have that done. I've got to get on a plane and jump through all kinds of hoops. We need a better solution anyway. You guys have a great technical one, so I want to dig in. All seriousness aside, you guys have put together a killer solution that really requires a lot of data. Can we step back and talk about that first? What was the solution that won the award? Take a quick second and set the table for what we're talking about; then we'll start with you. >>So the National COVID Cohort Collaborative is a secure data enclave putting together the EHR records from more than 60 different academic medical centers across the country, and making them available to researchers to, you know, ask many and varied questions to try and understand this disease better. >>I see. And take us through the challenges here. What was going on? What was the hard problem?
Everyone had a situation with COVID where people broke through, and cloud drove a lot of it; Amazon is part of the awards. But you guys are solving something specific. What was the problem statement that you guys were going after? What happened? >>I think the problem statement is essentially that the nation has the electronic health records, but they're very fragmented, right? As Ben highlighted, there are multiple systems around the country, thousands of folks that have EHRs, but there is no way, from a research perspective, to actually have access in any unified location. And so really, what we were looking at is: how can we essentially provide a centralized location to study electronic health records, but in a federated sense, because we recognize that the data exists in other locations? And so we had to figure out, for a vast quantity of data, how can we get data from those 60 sites, 60 plus, that Ben is referencing, from their respective locations and into one central repository, but also in a common format? Because that was another huge aspect of the technical challenge: there are multiple formats for electronic health records, there are different standards, there are different versions. And how do you actually get all of this data harmonized into something which is usable, again, for research? >>There are just so many things jumping into my head right now; I want to unpack them one at a time. When COVID hit, the scramble and the imperative for getting answers quickly were huge. So it's a data problem at a massive scale, with public health impact. Again, we were talking before we came on camera: public health records are dirty, they're not clean, a lot of things are weird. I mean, just a massive amount of weird problems. How did you guys pull it together? Take me through how this gets done. What happened? Take us through the steps. You just got together and said, let's do this? How does it all happen?
>>Yeah, it's a great question. And so, John, I would say, part of this started actually several years ago. I explain this when people talk about N3C. NCATS has actually established, we support, a program which is called the Clinical and Translational Science Awards program; it's the largest single grant program in all of NIH, and it constitutes the bulk of the NCATS budget. So these are extramural grants which go all over the country, and we wanted this group to essentially have a common research environment. So we try to create what we call secure scientific collaborative platforms. Another example of this is what we call the Rare Diseases Clinical Research Network, which again is a consortium of 20 different sites around the nation. And so really, we started working on this several years ago: if we want to build an environment that's collaborative for researchers around the country, around the world, the natural place to do that is really with a cloud-first strategy. And we recognized this at NCATS. We're about 600 people now, but if you look at the size of our actual research community, with our grantees, we're in the thousands. And so the perspective that we took several years ago was that we have to really take a step back, and if we want a comprehensive and cohesive package or solution, to treat this as really a mid-sized business, you know. And so that means we have to treat this as a cloud-based enterprise. And so NCATS, several years ago, had really gone on this strategy to bring in different commercial partners, of which one is Palantir. It actually started with our intramural research program, and obviously very heavy cloud use with AWS. We use Google Workspace; we essentially use different cloud tools to enable our collaborative researchers. The next step is, we also had a project: if we want to have an environment, we have to have access.
And this is something that we took early steps on in the years prior: there is no good building an environment if people can't get in the front door. So we invested heavily in creating an application, our federated authentication system; we call it Unified NCATS Auth for short. And this is an open-source, in-house project that we built at NCATS, and we wanted to actually use it for all sorts of implementations, acting as the front door to this collaborative environment being one of them. And then, with this interest in electronic health records that had existed prior to the COVID pandemic, we had done some prior work, via a mixture of internal investments and grants with collaborative partners, to really look at what it would take to harmonize this data at scale. And so, like you mentioned, COVID hit, and it hit really hard. Everyone was scrambling for answers, and I think we had a bit of these pieces in play. And that's, I think, when we turned to Ben and the team at Palantir, and we said: we have these components, we have these pieces. What we really need is something independent that we can stand up quickly to address some of these problems, one of the biggest being that data ingestion and harmonization step. And so I can let Ben really speak to that one. >>Yeah, Ben, let's get into that, because you're solving a lot of collaboration problems, not just the technical problem, but ingestion and harmonization. Ingestion most people can understand: that's the data warehousing, getting it into the database, you know what that means. Take us through harmonization, because, not to put a little bit of shade on this, but most people think about, you know, these kinds of research nonprofits as slow moving: standing stuff up takes time, and by the time you stand it up, you'd think things are over. This was agile. So take us through what made it agile, because that's not normal.
I mean, that's not what you see normally. It's like, hey, we'll see you next year when we stand that up at the data center. >>Yeah, I mean, so as Sam described, this question of data interoperability is a really essential problem for working with this kind of data. And, you know, we have data coming from more than 60 different sites, and one of the reasons we were able to move quickly was that, rather than saying, well, you have to provide the data in a certain format, a certain standard, N3C was able to say: actually, just give us the data how you have it, in whatever format is easiest for you, and we will take care of that process of transforming it into a single standard data model, converting all of the medical vocabularies, doing all of the data quality assessment that's needed to ensure the data is actually ready for research. And that was very much a collaborative endeavor. It was run out of a team based at Johns Hopkins University, but in collaboration with a broad range of researchers who were all adding their expertise, and what we were able to do was provide the technical infrastructure for taking the transformation pipelines being developed, the actual logic and the code, and developing very robust, centralized templates for them that could be deployed just like software is deployed: with change management, upgrades and downgrades, version control, and change logs, so that we can roll that out across a large number of sites in a very robust way, very quickly. So that's one aspect of it. And then there were a bunch of really interesting challenges along the way that, again, a very broad collaborative team of researchers worked on, and an example of that would be unit harmonization and inference. So, really simple things: when a lab result arrives, we talked about data quality, you'd expect it to have a unit, right?
Like if you're reporting somebody's weight, you probably want to know if it's in kilograms or pounds. But we found that a very significant proportion of the time the unit was actually missing in the EHR record, and unless you can actually get that back, the record becomes useless. So an approach was developed: because we had data across 60 or more different sites, you have a large number of lab tests that do have the correct units, and you can look at the data distributions and decide how likely it is that this missing unit is actually kilograms or pounds, and save a huge portion of these labs. So that's just an example of something that has enabled research to happen that would not otherwise have been possible. >>Just not to dig in and rat-hole on that one point, but what time savings do you think that creates? I can imagine it's on the data cleaning side. That's just a massive time savings: okay, based on the data sampling, this is kilograms or pounds. >>Exactly. There are more than 3.5 billion lab records in this database now, so if you were trying to do this manually, I mean, it would take thousands of years. You know, it just wouldn't happen, it would >>be a black hole in the dataset, essentially, because there's no way it would get done. Okay. Sam, take me through this from a research standpoint: this normalization, harmonization process. What does that enable for the research, and who decides what's the standard format? Because, again, in my mind I'm thinking how hard this is. What was decided? What standards were settled on? What's the impact on researchers? >>Now, that's a great question. A couple of things I'll say, and Ben has touched on this: the other real core piece of N3C is the community, right?
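The unit-inference approach Ben describes can be sketched roughly as follows. The sample values and the simple z-score heuristic here are illustrative assumptions, not the actual N3C pipeline, which is considerably more sophisticated:

```python
from statistics import mean, stdev

def infer_missing_unit(value, known_samples):
    """Guess the most likely unit for a lab value whose unit is missing,
    by comparing it against distributions of the same lab reported with
    known units at other sites."""
    best_unit, best_score = None, float("-inf")
    for unit, samples in known_samples.items():
        mu, sigma = mean(samples), stdev(samples)
        # Negative squared z-score: higher (closer to zero) means the
        # value fits this unit's distribution better.
        score = -((value - mu) / sigma) ** 2
        if score > best_score:
            best_unit, best_score = unit, score
    return best_unit

# Illustrative adult body weights reported *with* units at other sites.
known = {
    "kg": [60, 70, 75, 80, 90, 85, 72],
    "lb": [130, 155, 165, 175, 200, 185, 160],
}
print(infer_missing_unit(72, known))   # fits the kg distribution better
print(infer_missing_unit(170, known))  # fits the lb distribution better
```

At the scale mentioned here, billions of lab records, a vectorized implementation over per-site distributions would replace this loop, but the idea is the same: borrow the distributions from the sites that did report units.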
You know, and so I think there are a couple of things you mentioned with this, John. The way we executed this was very nimble, it was very agile, and there's something to be said on that piece from a procurement perspective: the government had many COVID authorities that were granted to make very fast decisions to get things procured quickly. And we were able to turn this around with our acquisition shop, where we would otherwise, like you said, be dead in the water, waiting a year to go through a normal acquisition process, which can take time. But that's only one half. The other half, and really, you're touching on this and Ben is touching on this, is the research. We have this entire research community numbering in the thousands, from a volunteer perspective. I think it's really fascinating. This is really a great example to me of this public-private partnership between the companies we use, but also the academic participants that actually make up the community. Again, the amount of time they have dedicated to this is just incredible. So really, what's also been established with this is core governance. And so, you know, from a systems perspective, the Palantir environment, the N3C environment, belongs to the government, but the N3C, the entire program, I would say, belongs to the community. We have co-governance on this. So who decides? Really, it's a mixture between the folks at NCATS, but not just NCATS: folks at NIH proper, but also folks at other government agencies, and also the academic communities. These mixed governance teams actually set the stage for all of this. And again, you know, who's going to decide the standard? We decide: we're going to do this in OMOP 5.3.1, that's the standard we're going to utilize.
And then once the data is there, this is what gets exciting: then they have the different domain teams, where they can ask different research questions depending upon what is of scientific interest to them. And so really, we viewed this from the government's perspective as: how do we build, again, the secure platform where we can enable the research, but we don't really want to dictate the research. I mean, the one criterion we did put in is that your research has to be COVID-focused, because this was very clearly in response to COVID. So you have to have a COVID focus, and then we have data use agreements, data use requests. You know, we have entire governance committees that decide: is this research in scope? But we don't want to dictate the research types that the domain teams are bringing to the table. >>And I think about the National Institutes of Health: their mission is to serve the public health. And I think this is a great example of when you enable data to be surfaced and available, you can really allow people to be empowered, and, not to use the cliche, be citizen analysts. But in a way, this is what the community is doing. You're doing research and allowing people, from volunteers to academics to students, to just be part of it. That is citizen analysis. You've got citizen journalism, you've got citizen research, you've got a lot of democratization happening here. Was that part of it, or was it a result of this? >>Uh, it's both. It's a great question. I think it's both, and it's really by design, because again, we want to enable, and there are a couple of things that we really champion at NCATS, and I think NIH is going in this direction too: we believe firmly in open science, we believe firmly in open standards, and in how we can actually enable these standards to promote this open science, because it's actually nontrivial.
We've had, you know, the citizen scientists, and actually that was a tricky problem from a governance perspective. We had the case where we actually had students that wanted access to the environment. Well, we had to have someone vouch for them, because they have to have an institution that they come in with. But we've actually crossed some of those bridges to get students and researchers into this environment, very much by design, but also in the spirit which was enabled by the community. So again, I think they go hand in hand. >>I'm all for open science; it's a huge wave, I'm a big fan. I think that's got a lot of headroom, because look at open source and what that's done to software, to the software industry. It's amazing. And I think your federated idea comes in here, and Ben, if you can just talk through the federated piece. Because I think that might enable and remove some of the structural blockers that might be out there, in terms of: oh, you've got to be affiliated with this or that, or a friend's got to invite you. But then you've got privacy and access, and this federated ID is not an easy thing; it's easy to say. How do you tie that together? Because you want to enable a frictionless ability to come in and contribute, and at the same time you want to have some policies around who's in and who's not. >>Yes, totally. I mean, so Sam sort of already described the UNA system, which is the authentication system that NCATS has developed. And obviously, from our perspective, we integrate with that using all of the standard kinds of authentication protocols, and it's very easy to integrate that into the Foundry platform and make it so that we can authenticate people correctly. But then if you go beyond authentication, you also then need to have the access controls in place to say: yes, I know who this person is, but now what should they actually be able to see?
And I think one of the really great things N3C has done is to be very rigorous about that. They have their governance rules that say you should be using the data for a certain purpose; you must go through a procedure so that the access committee approves that purpose. And then we need to make sure that you're actually doing the work that you said you were going to do. And so before you can get your data back out of the system, your results out, you actually have to prove that those results are in line with the original stated purpose. And the infrastructure around that, having the access controls and the governance processes all working together in a seamless way, so that it doesn't, as you say, increase the friction on the researcher, and they can get access to the data for that appropriate purpose, that was a big component of what we've been building out with N3C. Absolutely. >>And really, John, this is in line with what NIH is doing with the Researcher Auth Service; they call it RAS. And I think these are standards that we believe in, that we're starting to follow, and we work with them closely. Multi-factor authentication, because of the point Ben is making, and you raised as well: one, you need to authenticate, okay, you are who you say you are, and we're recognizing that. And then there's the authorization piece: what are you authorized to see? What do you have authorization to do? And they go hand in hand, and again, these are nontrivial problems. And especially, you know, a lot of what we're doing is direct integrations with our partners. We use InCommon for federated access, and we're also even using login.gov. You know, again, because we need to make sure that people have a means, and login.gov is essentially a fallback, right?
If they don't have an organization in InCommon or federated access, they can generate a login.gov account, but they're still beholden to the multi-factor authentication step, and then they still have to get the same authorizations. Because we really do believe seamless access to these environments is absolutely critical, knowing who our users are, but again, not making it restrictive and not making it this friction-filled process. That's very, that's very >>different. I mean, you think about nontrivial, I totally agree with you. And if you think about it like a classic enterprise IT problem, like bring-your-own-device to work, that's basically what the whole world does these days. So you're thinking about access: you don't know who's coming in, you don't know where they're coming in from, and when the churn is so high, you don't know. I mean, all of this is happening, right? So you have to be prepared to provision and provide resources at a very lightweight access edge. >>That's right. And that's why it gets back to what we mentioned: we were taking a step back and thinking about this problem, and N3C became the use case. This is an enterprise IT problem. Right? We have users from around the world that want to access this environment, and again, we tried to hit a really difficult mark, which is secure but collaborative. That's not easy, you know? But again, the only place this environment could take place is in a cloud-based environment, right? Let's be real. Ten years ago? Forget it. Again, maybe it would have been difficult, but now it's just incredible how much things have advanced, such that these real virtual research organizations can start to exist, and they become real partnerships. >>Well, that's a great point.
I want to highlight and call that out, because I've done a lot of these interviews with awards programs over the years, certainly in public sector and open source over many, many years. One of the things open source allows is code reuse, and also, when you get into these situations where, okay, you have a crisis, COVID, other things happen, it's the same with nonprofits: they lose their funding and all the code disappears. Same with COVID: when it's over, you don't want to lose the momentum. So this whole idea of reuse, of re-platforming and re-factoring if you will, these are two concepts the cloud enables. Sam, I'd love to get your thoughts on this, because it doesn't go away when COVID's over. Research still continues. So this whole idea of re-platforming and then re-factoring is very much a new concept, versus the old days of: okay, project's over, move on to the next one. >>No, you're absolutely right. And I think what first drove us, taking a step back at NCATS, was: how do we ensure that sustainability? Because my background is actually engineering, so I think about how you want to build things to last. And what you just described, John, is that funding peaks, it goes up and then it wanes away, and what you're left with, essentially, is nothing. Okay, you did this investment in a body of work, and it goes away. And really, I think what we're building are these sustainable platforms that will actually grow and evolve based upon the research needs over time. And I think that was really a huge investment that NCATS has made, and NIH is going in a very similar direction. There's a substantial investment made in these really impressive environments. How do we make sure they're sustainable for the long term?
You know, again, we just went through this with COVID, but what's going to come next? What are the research questions that we'll need to answer? But also, open source is an incredibly important piece of this, and I think Ben can speak to this in a second: all the harmonization work, all that effort, essentially this massive, complex ETL process, is in the N3C GitHub. So we believe completely in the open source model. A little bit of a flavor on it, too, though, because, again, back to the sustainability, John: I believe there's room for this marriage between commercial platforms and open source software, and we need both. We at NCATS are strong proponents of both, but especially with sustainability, and especially, I think, with enterprise IT, you have to have professional-grade products. That was part of, I would say, an experiment we ran at NCATS. Our thought was: we can fund academic groups, and we can have them do open source projects, and you'll get some decent results. But the nature of it, and the nature of these environments, has become so complex that the experiment we're running is to provide commercial-grade tools for the academic community and the researchers, let them use them, and see how they can be enabled to actually focus on research questions. And I think N3C is one we've been very successful with, with that model, while still really adhering to the open source spirit and principles. >>It's an amazing story, congratulations. You know what? That's so awesome, because that's the future, and I think you're onto something huge. Great point. Ben, do you want to chime in on this whole sustainability question? Because the public-private partnership idea is now the new model; the innovation formula is about being open and collaborative. What are your thoughts? >>Absolutely. And I mean, we at Palantir have been huge proponents of reproducibility and openness in analyses and in science.
And so everything done within the Foundry platform is done in open source languages like Python and R and SQL, and is exposed via open APIs and through Git repositories. So, as Sam says, we've pushed all of that ETL code that was developed within the platform out to the NCATS GitHub, and the analysis code itself, being written in those various languages, can also easily be pulled out and made available for other researchers in the future. And I think what we've also seen is that within the data enclave there's been an enormous amount of reuse across the different research projects. And so having that security in place, making it secure so that people can share with each other securely, and be very clear that although I'm sharing this, it's still within the range of the government's requirements, has meant that the research has really been accelerated, because people have been able to build on and stand on the shoulders of what earlier projects have done. >>Okay, Ben, great stuff. A thousand researchers, open source code, and GitHub. Where do I sign up? I want to get involved. This is amazing. It sounds like a great party. >>We'll send you a link. If you do a search on N3C, you'll come up with a website hosted by the academic side that shows you all the information on how you can actually connect, and John, you're welcome to come in, by all means. >>Billions of rows of data being solved, great tech being worked on. Again, this is a great example of large scale; the modern era of solving problems is here, and it's out in the open: open science. Sam, congratulations on your great success. Ben, award winners, you guys are doing a great job. Great story. Thanks for sharing it here with us on theCUBE. Appreciate it. >>Thank you, John. >>Thanks for having us. >>Okay, this has been the Global Public Sector Partner Awards, best COVID solution: Palantir and NCATS. Great solution, great story. I'm John Furrier with theCUBE. Thanks for watching.
Shiv Gupta, U of Digital | Quantcast Industry Summit 2021
(upbeat electronic music) >> Welcome back to the Quantcast Industry Summit on the demise of third-party cookies, The Cookie Conundrum: A Recipe for Success. I'm John Furrier, host of theCUBE. The changing landscape of advertising is here, and Shiv Gupta, founder of U of Digital, is joining us. Shiv, thanks for coming on this segment. I really appreciate it. I know you're busy. You've got two young kids, as well as providing education to the digital industry. You've got some kids to take care of, and to train them too. So, welcome to the cube conversation here as part of the program. >> Yeah, thanks for having me. Excited to be here. >> So, the whole changing landscape of advertising really centers around the open web versus walled garden mindset and the big power players. We know the big three, four tech players dominate the marketplace, so we're clearly at a major inflection point. And you know, we've seen this movie before: web, then the mobile revolution, which was basically a re-platforming of capabilities. But now we're in an era of refactoring the industry, not replatforming; a complete changing over of the value proposition. So, a lot is at stake here as this open web, this open internet, this global internet, evolves. What's your take on this? There are industry proposals out there that speak to this specific cookie issue. What does it mean, and what proposals are out there?
But the one, the bird-themed proposal that they've chosen to move forward with is called FLOC, which stands for Federated Learning of Cohorts. And, essentially what it all boils down to is Google is moving forward with cohort level learning and understanding of users in the future after third-party cookies. Unlike what we've been accustomed to in this space, which is a user level understanding of people and what they're doing online for targeting and tracking purposes. And so, that's on one side of the equation. It's what Google is doing with FLOC and Privacy Sandbox. Now, on the other side is, you know, things like unified ID 2.0 or the work that ID5 is doing around building new identity frameworks for the entire space that actually can still get down to the user level. Right? And so again, Unified ID 2.0 comes to mind because it's the one that's probably gotten the most adoption in the space. It's an open source framework. So the idea is that it's free and pretty much publicly available to anybody that wants to use it. And Unified ID 2.0 again is user level. So, it's basically taking data that's authenticated data from users across various websites that are logging in and taking those authenticated users to create some kind of identity map. And so, if you think about those two work streams, right? You've got the walled gardens and or, you know, Google with FLOC on one side. And then you've got Unified ID 2.0 and other ID frameworks for the open internet on the other side. You've got these two very different type of approaches to identity in the future. Again, on the Google side it's cohort level, it's going to be built into Chrome. The idea is that you can pretty much do a lot of the things that we do with advertising today but now you're just doing them at a group level so that you're protecting privacy. 
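The cohort idea can be illustrated with a toy sketch: map a user's browsing history to one of a fixed number of buckets, so the ad system sees only the bucket, never the raw history. This is a simplified stand-in, not Google's actual algorithm; FLOC used SimHash-based clustering so that similar, not just identical, histories land in the same cohort.

```python
import hashlib

def cohort_id(browsing_domains, num_cohorts=1024):
    # Canonicalize the history (order-independent), hash it, and bucket it.
    # Ad systems would see only the bucket number, never the raw history.
    canonical = "|".join(sorted(set(browsing_domains)))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_cohorts

# Users with the same history share a cohort, regardless of ordering.
a = cohort_id(["news.example", "sports.example"])
b = cohort_id(["sports.example", "news.example"])
print(a == b)  # True
```

The privacy argument is that `num_cohorts` is kept small relative to the user population, so thousands of users share each bucket and no individual can be singled out from the cohort signal alone.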
Whereas, on the other side, with the open internet, you're still getting down to the user level, and that's pretty powerful, but the issue there is scale, right? We know that a lot of people are not logged in on lots of websites. I think the stat that I saw was under 5% of all website traffic is authenticated. So, really, if you simplify things and boil it all down, you have these two very differing approaches. >>So we have a publishing business. We'd love to have people authenticate and get that closed-loop journalism thing going on. But, if businesses want to get to this level too, they can have concerns. So, I guess my question is, what's the trade-off? Because you have power in Google and the huge data set that they command. They command a lot of leverage with that. And again, centralized. And you've got open. But it seems to me that the world is moving more towards decentralization, not centralization. Do you agree with that? And does that have any impact on this? Because you want to harness the data, so it rewards the people with the most data, in this case the powerful. But the world's going decentralized, where there needs to be a new way for data to be accessed and leveraged by anyone. >>Yeah, John, it's a great point, and I think we're at kind of a crossroads, right, to answer that question. You know, I think what we're hearing a lot right now in the space from publishers, like yourself, is that there's an interesting opportunity right now for them, right? To actually have some more control and say about the future of their own business. If you think about the last, let's say 10, 15, 20 years in advertising in digital, right? Programmatic has really become kind of the primary mechanism for revenue for a lot of these publishers, and so programmatic is a super important part of their business.
But, with everything that's happening here with identity now, a lot of these publishers are kind of taking a look in the mirror and thinking about, "Okay, we have an interesting opportunity here to make a decision." And, the decision, the trade off to your question is, Do we continue? Right? Do we put up the login wall? The registration wall, right? Collect that data. And then what do we do with that data? Right? So it's kind of a two-fold process here. Two-step process that they have to make a decision on. First of all, do we hamper the user experience by putting up a registration wall? Will we lose consumers if we do that? Do we create some friction in the process that's not necessary. And if we do, right? We're taking a hit already potentially, to what end? Right? And, I think that's the really interesting question, is to what end? But, what we're starting to see is publishers are saying you know what? Programmatic revenue is super important to us. And so, you know, path one might be: Hey, let's give them this data. Right? Let's give them the authenticated information, the data that we collect. Because if we do, we can continue on with the path that our business has been on. Right? Which is generating this awesome kind of programmatic revenue. Now, alternatively we're starting to see some publishers say hold up. If we say no, if we say: "Hey, we're going to authenticate but we're not going to share the data." Right? Some of the publishers actually view programmatic as almost like the programmatic industrial complex, right? That's almost taken a piece of their business in the last 10, 15, 20 years. Whereas, back in the day, they were selling directly and making all the revenue for themselves, right? And so, some of these publishers are starting to say: You know what? We're not going to play nice with FLOC and Unified ID. And we're going to kind of take some of this back. And what that means in the short term for them, is maybe sacrificing programmatic revenue. 
But their bet is long-term: maybe some of that money will come back to them direct. Now, that'll probably only be the premium pubs, right? The ones that really feel like they have that leverage and that runway to do something like that. And even so, you know, I'm of the opinion that if certain publishers kind of peel away and do that, that's probably not great for the bigger picture, even though it might be good for their business. But, you know, let's see what happens. To each business their own. >>Yeah. I think the trade-off of monetization and user experience has always been there. Now, more than ever, people want truth. They want trust. And I think the trust factor is huge. And if you're a publisher, you want to have your audience be instrumental. And I think the big players have sucked the audience away from the publishers for years, and that's well-documented. People talk about that all the time. I guess the question it really comes down to is: what alternatives are out there for cookies, and which ones do you think will be more successful? Because I think the consensus, at least from my reporting and my view, is that the world agrees: let's make it open. Which one's going to be better? >>Yeah, that's a great question, John. So as I mentioned, right, we have two kinds of work streams here. We've got the walled garden work stream being led by Google and their work around FLOC, and then we've got the open internet, right? Let's say Unified ID 2.0 kind of represents that. I personally don't believe that there is a right answer or an end game here. I don't think that one of them wins over the other, frankly. I think that, first of all, you have those two frameworks, and neither of them is perfect. They're both flawed in their own ways. There are pros and cons to both of them. And so what we're starting to see now is other companies coming in and building on top of both of them as kind of a hybrid solution, right?
So they're saying, hey, we use, you know, an open ID framework in this way to get down to the user level and use that authenticated data. And that's important, but we don't have all the scale. So now we go to a Google and we go to FLoC to kind of fill the scale. Oh, and hey, by the way, we have some of our own special sauce, right? We have some of our own data. We have some of our own partnerships. We're going to bring that in and layer it on top, right? And so, really, where I think things are headed is the right answer, frankly, is not one or the other. It's a little mishmash of both, with a little extra, you know, something on top. I think that's what we're starting to see out of a lot of companies in the space. And I think that's, frankly, where we're headed. >> What do you think the industry will evolve to, in your opinion? Because I think this is going to be... you can't ignore the big guys on this. Obviously the programmatic you mentioned, also the data's there. But what do you think the market will evolve to with this conundrum? >> So, I think, John, where we're headed, you know, I think right now we're having this existential crisis, right? About identity in this industry. Because our world is being turned upside down. All the mechanisms that we've used for years and years are being thrown out the window, and we're being told, "Hey, we're going to have new mechanisms," right? So cookies are going away. Device IDs are going away. And now we've got to come up with new things. And so the world is being turned upside down, and everything that you read about in the trades, and, you know, we're here talking about it, right? Everyone's always talking about identity, right? Now, where do I think this is going? If I was to look into my crystal ball, you know, this is how I would kind of play this out. If you think about identity today, right? Forget about all the changes. Just think about it now, and maybe a few years before today.
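Returning to the hybrid approach Shiv sketched a moment ago (authenticated IDs where they exist, cohort-level signals for scale, and the publisher's own data layered on top): it can be pictured as a simple resolution waterfall. Everything in this sketch, from the function name to the field names and the fallback order, is illustrative rather than any vendor's actual API.

```python
def resolve_identity(user):
    """Illustrative ID-resolution waterfall: try the highest-fidelity
    signal first, then fall back to broader signals for scale."""
    # 1. Authenticated, user-level ID (e.g., from a login/registration wall)
    if user.get("email_hash"):
        return {"level": "user", "id": user["email_hash"], "source": "authenticated"}
    # 2. Cohort-level signal (a FLoC-style interest group) for reach
    if user.get("cohort_id"):
        return {"level": "cohort", "id": user["cohort_id"], "source": "browser_cohort"}
    # 3. Publisher first-party data as the "special sauce" layered on top
    if user.get("first_party_segment"):
        return {"level": "segment", "id": user["first_party_segment"], "source": "first_party"}
    return {"level": "anonymous", "id": None, "source": None}

# A logged-in reader resolves at user level; an unknown one falls back.
print(resolve_identity({"email_hash": "a1b2c3"})["level"])  # user
print(resolve_identity({"cohort_id": "c-492"})["level"])    # cohort
```

The point of the waterfall shape is exactly the "mishmash" Shiv describes: no single rung has both fidelity and scale, so real implementations chain them.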
Identity, for marketers, in my opinion, has been a little bit of a checkbox activity, right? It's been: hey, okay, you know, ad tech company or media company, do you have an identity solution? Okay, tell me a little bit more about it. Okay, sounds good. Now can we move on and talk about my business, and how you're going to drive meaningful outcomes, or whatever, for my business? And I believe the reason for that is because identity is a little abstract, right? It's not something that you can actually get meaningful validation against. It's just something that, you know? Yes, you have it. Okay, great. Let's move on, type of thing, right? And so that's kind of where we've been. Now, all of a sudden, the cookies are going away. The device IDs are going away. And so the world is turning upside down. We're in this crisis of: how are we going to keep doing what we were doing for the last 10 years in the future? So everyone's talking about it, and we're trying to re-engineer the mechanisms. Now, if I was to look into the crystal ball, right? Two, three years from now, where I think we're headed is: not much is going to change. And what I mean by that, John, is I think that marketers will still go to companies and say, "Do you have an ID solution? Okay, tell me more about it. Okay, let me understand a little bit better. Okay, you do it this way. Sounds good." Now, the ways in which companies do it will be different. Right now it's FLoC and Unified ID and this and that, right? The ways, the mechanisms, will be a little bit different. But the end state, right? The actual way in which we operate as an industry, and the view of the landscape, in my opinion, will be very similar, right? Because marketers will still view it as: tell me you have an ID solution, make me feel good about it, help me check the box, and let's move on and talk about my business and how you're going to solve for my needs. So I think that's where we're going.
That is not, by any means, to discount this existential moment that we're in. This is a really important moment, where we do have to talk about and figure out what we're going to do in the future. My viewpoint is that the future will actually not look all that different than the present. >> And then I'll say the user base is the audience. Their data behind it helps create new experiences; machine learning and AI are going to create those. And if you have the data, you're either sharing it or using it. That's what we're finding. Shiv Gupta, great insights, dropping some nice gems here. Founder of U of Digital and also adjunct professor of programmatic advertising at the Leavey School of Business at Santa Clara University. Professor, thank you for coming on and dropping the gems and insight here. Thank you. >> Thanks a lot for having me, John. Really appreciate it. >> Thanks for watching The Cookie Conundrum. This is theCUBE. I'm John Furrier, your host. Thanks for watching. (uplifting electronic music)
John Roese, Dell Technologies & Chris Wolf, VMware | theCUBE on Cloud 2021
>> From around the globe, it's theCUBE, presenting theCUBE on Cloud, brought to you by SiliconANGLE. Welcome back to the live segment of theCUBE on Cloud. I'm Dave, along with my co-host, John Furrier. John Roese is here. He's the Global CTO of Dell Technologies. John, great to see you as always. Really appreciate it. >> Absolutely. Good to be here. >> Hey, so we're gonna talk edge. You know, the edge, it's estimated to be a multi trillion dollar opportunity, but it's highly fragmented and very complex. I mean, it comprises everything from autonomous vehicles and windmills, even retail stores, outer space. And so it brings in a lot of really gnarly technical issues that we want to pick your brain on. Let me start with just: what, to you, is edge? How do you think about it? >> Yeah, I mean, I've been saying for a while that edge is when you reconstitute IT back out in the real world. You know, for 10 years we've been sucking IT out of the real world, taking it out of factories; you know, nobody has an email server under their desk anymore. And that was because we could put it in data centers and public clouds, and, you know, that's been a good journey. And then we realized, wait a minute, all the data actually was being created out in the real world, and a lot of the actions that have to come from that data have to happen in real time in the real world. And so we realized we actually had to reconstitute an IT capacity out near where the data is created, consumed and utilized. And, you know, that turns out to be smart cities, smart factories; you know, we're dealing with military apparatus, you're saying how do you put, you know, edges into warfighting theaters or first responder environments. It's really anywhere that data exists that needs to be processed and understood and acted on that isn't in a data center. So it's kind of one of these things: defining edge, it's easier to define what it isn't. It's anywhere that you're going to have
IT capacity that isn't aggregated into a public or private cloud data center. That seems to be the answer. >> So follow, follow that, follow the data. And so you've got this big issue, of course, which is latency. People saying, well, some applications or some use cases, like autonomous vehicles, you have to make the decision locally. Others you can send back. And, you know, is there some kind of magic algorithm the technical people use to figure out, you know, the right approach? >> Yeah, the good news is math still works, and we spent a lot of time thinking about why you build an edge. You know, not all things belong at the edge. Let's just get that out of the way. And so we started thinking about what does belong at the edge, and it turns out there's four things you need. You know, if you have real time responsiveness in the full closed loop of processing data, you might want to put it in an edge. But then you have to define real time, and real time varies. You know, real time might be one millisecond. It might be 30 milliseconds. It might be 50 milliseconds. It turns out that if it's 50 milliseconds, you probably could do that in a co-located data center pretty far away from those devices. One millisecond, you better be doing it on the device itself. And so the latency around real time processing matters. And, you know, the other reasons, interestingly enough, to do edge actually don't have to do with real time processing. They have to do with: there's so much data being created at the edge that if you just blow it all the way across the Internet, you'll overwhelm the Internet. We need to pre-process and post-process data and control the flow across the world. The third one is the IT/OT boundary that we all know. That was the IoT thing that we were dealing with for a long time.
And the fourth, which is the fascinating one, is it's actually a place where you might want to inject your security boundaries, because security tends to be a huge problem in connected things, because they're kind of dumb and kind of simple and kind of exposed. And if you protect them on the other end of the Internet, the surface area of protection is enormous. So there's a big shift to basically move security functions to the edge. I think Gartner made up a term for it, called SASE, you know, the secure access service edge. But these are the four big ones. We've actually tested that for probably about a year with customers, and it turns out that, you know, it seems to hold: if it's one of those four things, you might want to think about an edge; if it isn't, it probably doesn't belong in it. >> John, I want to get your thoughts on that point. The security thing's huge. We talked about that last time at Dell Tech World when we did an interview with theCUBE. But now look at what's happened over the past few months. We've been having a lot of investigative reporting here at SiliconANGLE on the notion of misinformation, not just fake news. Everyone talks about that with the election, but misinformation as a vulnerability, because you have now edge devices that need to be secured. But I can send misinformation to devices. So, you know, fake news could be fake data. Say, "Hey, Tesla, drive off the road," or, you know, do this on the other thing. So you gotta have the vulnerabilities looked at, and it could be everything. Data is one of them. Latency. Security. Is there a chip on the device? Could you share your vision on how you see that being handled? 'Cause it's a huge problem. >> Yeah, this is a big deal, because, you know, what you're describing is the fact that if data is everything, the flow of data ultimately turns into the flow of information, then knowledge and wisdom and action.
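Before going deeper on security, it's worth noting that the four placement tests Roese just enumerated (real-time latency, data-volume reduction, the IT/OT boundary, and a security boundary) could be captured in a toy placement check like the one below. The thresholds and field names are assumptions for illustration, not AWS, Dell, or Gartner guidance.

```python
def belongs_at_edge(workload):
    """Return (decision, reasons) for a workload, per the four edge criteria."""
    reasons = []
    # 1. Real-time closed loop: ~1 ms means the device itself; tens of ms can
    #    live in a nearby edge or colo site; anything slower can go to a region.
    if workload.get("latency_ms", float("inf")) <= 50:
        reasons.append("real_time")
    # 2. Too much raw data to ship across the Internet: pre/post-process locally.
    if workload.get("raw_mbps", 0) > workload.get("uplink_mbps", 0):
        reasons.append("data_reduction")
    # 3. Sits on the IT/OT boundary (sensors, industrial control).
    if workload.get("ot_boundary"):
        reasons.append("it_ot")
    # 4. Acts as a security boundary for simple, exposed devices (the SASE case).
    if workload.get("security_boundary"):
        reasons.append("security")
    return (len(reasons) > 0, reasons)

ok, why = belongs_at_edge({"latency_ms": 5, "raw_mbps": 900, "uplink_mbps": 100})
print(ok, why)  # True ['real_time', 'data_reduction']
```

If none of the four reasons fire, the workload probably belongs in a data center or public cloud, which is exactly the "math still works" filter described above.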
And if you pollute the data, if you compromise it at the most rudimentary levels by, I don't know, putting bad data into a sensor, or tricking the sensor, which lots of people can do, or simulating a sensor, you can actually distort things like AI algorithms. You can introduce bias into them, and then that's a real problem. The solution to it isn't making the sensors smarter. There's this weird catch-22: when you sensorize the world, you know, you have a finite amount of power and budget, and making sensors fatter and more complex is actually the wrong direction. So edges have materialized, from that security dimension, as an interesting augment to those connected things. And so imagine a world where, you know, your sensor is creating data, and maybe you have hundreds or thousands of sensors that are flowing into an edge compute layer, and the edge compute layer isn't just aggregating it. It's putting context on it. It's metadata that it's adding to the system, saying, "Hey, that particular stream of telemetry came from this device, and I'm watching that device, and I can score it and understand whether it's been compromised, or whether it's trustworthy, or whether it's a risky device." And that all flows into the metadata world, the overall understanding of not just the data itself, but: where did it come from? Is it likely to be trustworthy? Should you score it higher or lower in your neural net, to basically manipulate your algorithm? These kinds of things are really sophisticated and powerful tools to protect against this kind of injection of false information at the sensor. But you could never do that at a sensor. You have to do it in a place that has more compute capacity and is more able to kind of enrich the data and enhance it. So that's why we think edges are important in that fourth characteristic: they aren't the security system of the sensor itself.
But they're the way to make sure that there's integrity in the sensorized world before it reaches the Internet, before it reaches the cloud data centers. >> So access to that metadata is critical, and it's gonna be near real time, if not real time, right? >> Yeah, absolutely. And, you know, the important thing is, well, I'll tell you this: if you haven't figured this out by looking at cybersecurity issues, compromising the authoritative metadata is a really good compromise. If you could get that, you can manipulate things at a scale you've never imagined. Well, in this case, the metadata is actually authoritatively controlled by the edge node; the edge node is processing it, determining whether or not it is trustworthy. Those edge nodes are not $5 parts. They're servers. They're higher end systems. And you can inject a lot more sophisticated security technology, and you can have hardware root of trust. You can have, you know, more advanced PKI in it. You can have AI engines watching the behavior of it. And again, you'd never do that in a sensor. But if you do it at the first step into the overall data pipeline, which is really where the edge is materializing, you can do much more sophisticated things to the data. But you can also protect that thing at a level that you'd never be able to do to protect a smart lightbulb, a thermostat in your house. >> Uh, yes. So give us the playbook on how you see the evolution of this market. I see these are key foundational things: a distributed network, and, you know, IoT trends into industrial IoT and vice versa. As software becomes critical, what is the programming model to build the modern applications? You guys talked to Michael Dell about this on theCUBE, your company as well as everyone else. It's software defined everything these days, right? So what is the software framework? How do people code on this?
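The edge-node enrichment Roese describes above, attaching provenance and a trust score to each telemetry stream rather than trusting the sensor itself, might look like this minimal sketch. The registry fields and scoring weights here are invented for illustration.

```python
def enrich_telemetry(reading, device_registry):
    """Attach provenance metadata and a trust score to a raw sensor reading,
    the kind of enrichment an edge node (not the sensor) can afford to do."""
    device = device_registry.get(reading["device_id"], {})
    score = 1.0
    if not device:                          # unknown device: heavily distrust
        score = 0.1
    else:
        if not device.get("attested"):      # no hardware root of trust
            score -= 0.4
        if device.get("anomalies", 0) > 0:  # behavioral watcher flagged it
            score -= 0.3
    return {
        **reading,
        "meta": {"trust": round(max(score, 0.0), 2),
                 "attested": device.get("attested", False)},
    }

registry = {"s-1": {"attested": True, "anomalies": 0}}
out = enrich_telemetry({"device_id": "s-1", "temp_c": 21.5}, registry)
print(out["meta"]["trust"])  # 1.0
```

Downstream models can then weight inputs by `meta.trust`, which is how a low trust score would "score it lower in your neural net" instead of letting spoofed readings bias the algorithm.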
What's the application-aware viewpoint on this? >> Yeah, this is, unfortunately, a very complex area that's got a lot of dimensions to it. Let me walk you through a couple of them in terms of what the software framework for the edge is. The first is that we have to separate edge platforms from the actual edge workload. Today, too many of the edge dialogues are about this amorphous blob of code running on an appliance. We call that an edge, and the reality is that thing is actually doing two things. It's a platform of compute out in the real world, and it's some kind of extension of the cloud data pipeline, of the cloud operating model, instantiated as software, probably containerized code, sitting on that edge platform. Our first principle about the software world is we have to separate those two things. You do not build your edge platform co-mingled with the thing that runs on it. That's like building your app into the OS. That's just dumb. User space, kernel: you keep those two things separate. We have to start to enforce that discipline in the software model at the edge. That's the first principle. The second is we have to recognize that edges are probably best implemented in ways that don't require a lot of human intervention. You know, humans are bad when it comes to really complex distributed systems. And so what we're finding is that most of the code being pushed into production benefits from using things like Kubernetes or container orchestration, or even functional frameworks like, you know, the serverless, FaaS-type models, because those low-code architectures generally are interfaced with via APIs, through CI/CD pipelines, without a lot of human touch on them. And it turns out that, you know, those actually work reasonably well, because the edges, when you look at them in production, the code actually doesn't change very often. They kind of do singular things relatively well over a period of time.
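The low-touch, API-driven operating model described above, where a CI/CD pipeline rather than a human pushes changes, is essentially a reconcile loop over declared state. Here is a minimal sketch of one reconcile pass; the names and data structures are invented for illustration, not any specific orchestrator's API.

```python
def reconcile(desired, actual):
    """One pass of a GitOps-style reconcile loop: diff the declared state
    against the observed state and emit actions, with no human in the loop."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("deploy", name, spec["image"]))       # missing workload
        elif actual[name] != spec["image"]:
            actions.append(("update", name, spec["image"]))       # version drift
    for name in actual:
        if name not in desired:
            actions.append(("remove", name, actual[name]))        # no longer declared
    return actions

desired = {"inference": {"image": "app:v2"}, "collector": {"image": "col:v1"}}
actual = {"inference": "app:v1", "legacy": "old:v9"}
print(reconcile(desired, actual))
# [('update', 'inference', 'app:v2'), ('deploy', 'collector', 'col:v1'), ('remove', 'legacy', 'old:v9')]
```

Run continuously against thousands of edge sites, a loop like this is what lets the code "do singular things relatively well over a period of time" without anyone logging into a box.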
And if you can make that a fully automated function, by basically taking all of the human intervention away from it, and if you can program it through low-code interfaces or through automated interfaces, you take a lot of the risk out of the human intervention piece of this type of environment. We all know that, you know, most of the errors and conditions that break things are not because the technology fails; it's because a human being touches it. So in the software paradigm, we're big fans of more modern software paradigms that have a lot less touch from human beings and a lot more automation being applied to the edge. The last thing I'll leave you with, though, is we do have a problem with some of the edge software architectures today, because what happened early in the IoT world is people invented kind of new edge software platforms. And we were involved in these, you know, EdgeX Foundry, MobiledgeX, Akraino, and those were very important, because they gave you a set of functions and capabilities at the edge that you kind of needed in the early days. Our long-term vision, though, for edge software is that it really needs to be the same code base that we're using in data centers and public clouds. It needs to be the same cloud stack, the same orchestration level, the same automation level, because what you're really doing at the edge is not something bespoke. You're taking a piece of your data pipeline and you're pushing it to the edge, and the other pieces are living in private data centers and public clouds, and you'd like them all to operate under the same framework. So we're big believers in, like, pushing Kubernetes orchestration all the way to the edge, pushing the same FaaS layer all the way to the edge. Don't create a bespoke world of the edge; make it an extension of the multi-cloud software framework. >> Even though the underlying hardware might change: the microprocessor, the GPU, whatever it is.
Uh... >> By the way, that's a really good reason to use these modern frameworks, because at the edge it's compute where it's not always an x86 underneath it. Programming down at the OS level in traditional languages has an awful lot of hardware dependencies. We need to separate that, because we're gonna have a lot of Arm. We're gonna have a lot of accelerators, a lot of DPUs, a lot of other stuff out there. And so the software has to be modern and able to support heterogeneous compute, which a lot of these new frameworks do quite well, John. >> Thanks. Thanks so much for coming on. Really appreciate you spending some time with us. You're always a great guest. Really appreciate it. >> Gonna be great stuff, a technical edge ongoing room. Dave, this is gonna be a great topic. It's a clubhouse room for us, a technical edge section every time. Really, thanks again, John. John Roese. Okay, so now we're gonna move to the second part of our technical edge discussion. Chris Wolf is here. He leads the advanced architecture group at VMware, and that really means, so, Chris looks at, I think, three years out as kind of his time horizon. And so, you know, advanced architecture. So really excited to have you here. Chris, can you hear us? >> Okay. Uh, can... >> Great. Great to see you again. >> Great to see you. Thanks for coming on. Really appreciate it. >> So we're talking about the edge. You're talking about the things that you see. We set it up as a multi trillion dollar opportunity. It's defined all over the place. We joke, it could be a windmill. You know, it could be a retail store. It could be something in outer space. It's, you know, whatever is defined: a factory, a military installation, etcetera. How do you look at the edge, and how do you think about the technical evolution? >> Yeah, I think it was interesting listening to John, and I would say we're very well aligned there.
You know, we also would see the edge as really the place where data is created, processed and consumed. And I think what's interesting here is that you have a number of challenges, in that edges are different. So, like John was talking about Kubernetes, there are multiple different Kubernetes open source projects that are trying to address these different edge use cases, whether it's K3s or KubeEdge or OpenYurt or SuperEdge. And I mean, the list goes on and on, and the reason that you see this conflict of projects is multiple reasons. You have a platform that's not really designed to support edge computing (Kubernetes is designed for data center infrastructure), first; and then you have these different environments, where you have some edge sites that have connectivity to the cloud, and you have some edge sites that simply don't, whether it's an oil rig or a cruise ship. You have all these different use cases. So what we're seeing is you can't just say, "This is our edge platform, go consume it," because it won't work. You actually have to have multiple flavors of your edge platform, and decide, you know, what you should target first from a market perspective. >> I
Uh, talk about Cooper days is that next layer, but then also what is going to be the programming model for modern applications? Okay, with the edge being obviously a key part of it. What's your take on that vision? Because that's a complex area certain a lot of a lot of software to be written, still to come, some stuff that need to be written today as well. So what's your view on How do you programs on the edge? >>Yeah, it's a It's a great question, John and I would say, with Cove it We have seen some examples of organizations that have been successful when they had already built an edge for the expectation of change. So when you have a truly software to find edge, you can make some of these rapid pivots quite quickly, you know. Example was Vanderbilt University had to put 1000 hospital beds in a parking garage, and they needed dynamic network and security to be able to accommodate that. You know, we had a lab testing company that had to roll out 400 testing sites in a matter of weeks. So when you can start tohave first and foremost, think about the edge as being our edge. Agility is being defined as you know, what is the speed of software? How quickly can I push updates? How quickly can I transform my application posture or my security posture in lieu of these types of events is super important. Now, if then if we walk that back, you know, to your point on open source, you know, we see open source is really, uh you know, the key enabler for driving edge innovation and driving in I S V ecosystem around that edge Innovation. You know, we mentioned kubernetes, but there's other really important projects that we're already seeing strong traction in the edge. You know, projects such as edge X foundry is seeing significant growth in China. That is, the core ejects foundry was about giving you ah, pass for some of your I o T aps and services. Another one that's quite interesting is the open source faith project in the Linux Foundation. 
And fate is really addressing a melody edge through a Federated M L model, which we think is the going to be the long term dominant model for localized machine learning training as we continue to see massive scale out to these edge sites, >>right? So I wonder if you could You could pick up on that. I mean, in in thinking about ai influencing at the edge. Um, how do you see that? That evolving? Uh, maybe You know what, Z? Maybe you could We could double click on the architecture that you guys see. Uh, progressing. >>Yeah, Yeah. Right now we're doing some really good work. A zai mentioned with the Fate project. We're one of the key contributors to the project. Today. We see that you need to expand the breath of contributors to these types of projects. For starters, uh, some of these, what we've seen is sometimes the early momentum starts in China because there is a lot of innovation associated with the edge there, and now it starts to be pulled a bit further West. So when you look at Federated Learning, we do believe that the emergence of five g I's not doesn't really help you to centralized data. It really creates the more opportunity to create, put more data and more places. So that's, you know, that's the first challenge that you have. But then when you look at Federated learning in general, I'd say there's two challenges that we still have to overcome organizations that have very sophisticated data. Science practices are really well versed here, and I'd say they're at the forefront of some of these innovations. But that's 1% of enterprises today. We have to start looking at about solutions for the 99% of enterprises. And I'd say even VM Ware partners such as Microsoft Azure Cognitive Services as an example. They've been addressing ML for the 99%. I say That's a That's a positive development. When you look in the open source community, it's one thing to build a platform, right? Look, we love to talk about platforms. That's the easy part. 
But it's the APS that run on that platform in the services that run on that platform that drive adoption. So the work that we're incubating in the VM, or CTO office is not just about building platforms, but it's about building the applications that are needed by say that 99% of enterprises to drive that adoption. >>So if you if you carry that through that, I infer from that Chris that the developers are ultimately gonna kind of win the edge or define the edge Um, How do you see that From their >>perspective? Yeah, >>I think its way. I like to look at this. I like to call a pragmatic Dev ops where the winning formula is actually giving the developer the core services that they need using the native tools and the native AP eyes that they prefer and that is predominantly open source. It would some cloud services as they start to come to the edge as well. But then, beyond that, there's no reason that I t operations can't have the tools that they prefer to use. A swell. So we see this coming together of two worlds where I t operations has to think even for differently about edge computing, where it's not enough to assume that I t has full control of all of these different devices and sensors and things that exists at the edge. It doesn't happen. Often times it's the lines of business that air directly. Deploying these types of infrastructure solutions or application services is a better phrase and connecting them to the networks at the edge. So what does this mean From a nightie operations perspective? We need tohave, dynamic discovery capabilities and more policy and automation that can allow the developers to have the velocity they want but still have that consistency of security, agility, networking and all of the other hard stuff that somebody has to solve. And you can have the best of both worlds here. 
>>So if Amazon turned the data center into an A P I and then the traditional, you know, vendors sort of caught up or catching up and trying to do in the same premise is the edge one big happy I Is it coming from the cloud? Is it coming from the on Prem World? How do you see that evolving? >>Yes, that's the question and races on. Yeah, but it doesn't. It doesn't have to be exclusive in one way or another. The VM Ware perspective is that, you know, we can have a consistent platform for open source, a consistent platform for cloud services. And I think the key here is this. If you look at the partnerships we've been driving, you know, we've on boarded Amazon rds onto our platform. We announced the tech preview of Azure Arc sequel database as a service on our platform as well. In addition, toe everything we're doing with open source. So the way that we're looking at this is you don't wanna make a bet on an edge appliance with one cloud provider. Because what happens if you have a business partner that says I am a line to Google or on the line to AWS? So I want to use this open source. Our philosophy is to virtualized the edge so that software can dictate, you know, organizations velocity at the end of the day. >>Yeah. So, Chris, you come on, you're you're an analyst at Gartner. You know us. Everything is a zero sum game, but it's but But life is not like that, right? I mean, there's so much of an incremental opportunity, especially at the edge. I mean, the numbers are mind boggling when when you look at it, >>I I agree wholeheartedly. And I think you're seeing a maturity in the vendor landscape to where we know we can't solve all the problems ourselves and nobody can. So we have to partner, and we have to to your earlier point on a P. I s. We have to build external interfaces in tow, our platforms to make it very easy for customers have choice around ice vendors, partners and so on. 
>> So, Chris, I gotta ask you, since you run the advanced technology group and are in charge of what's going on there: will there be a shift in focus to more chips at the edge, with Pat Gelsinger going over to Intel? Good to see, uh, chips, so to speak. All kidding aside, you know, Pat leaving is big news around VMware. I saw some of your tweets, and you laid out a nice tribute to Pat. But that's going to be cool at Intel; maybe there's more advanced stuff there. >> Yeah, I think, for people, Pat's staying on the VMware board, and to me, really, think about it: I mean, Pat was part of the team that brought us the x86, right? And to come back to Intel as the CEO, it's really the perfect bookend to his career. So we're really sad to see him go. Can't blame him, of course; it's a nice chapter for Pat, so I totally understand that. And prior to Pat going to Intel, we announced major partnerships with NVIDIA last year, and we've been doing a lot of work with Arm. So, to us again, we see all of this as opportunity, and a lot of the advanced development projects we're running right now in the CTO office are about expanding that ecosystem in terms of how vendors can participate, whether you're running an application on Arm, whether it's running on x86, or whatever it's running on, and what comes next, including a variety of hardware accelerators. >> So is that really irrelevant to you? I mean, you heard John Roese talk about that. Because it's all containerized, is it truly irrelevant what processor is underneath, and what the underlying hardware architectures are? >> No, it's not. You know, it's funny, right? Because we always want to say these things like, well, it's just a commodity, but it's not.
You'd then be asking the hardware vendors to pack up their balls and go home, because there's just nothing left to do. And we're seeing actually quite the opposite, where there's this emergence and variety of so many hardware accelerators. So even from an innovation perspective for us, we're looking at ways to increase the velocity by which organizations can take advantage of these different specialized hardware components, because that's going to continue to be a race. But the real key is to make it seamless, so that an application can take advantage of these benefits without having to go out and buy all of this different hardware on a per-application basis. >> But if you do make bets, you can optimize for that architecture, true or not? I mean, our estimate is that, you know, the number of wafers coming out of Arm-based platforms is 10x that of x86. And so it appears that, you know, from a cost standpoint, there are some real hard decisions to make. Or maybe they're easy decisions, I don't know. But you have to make bets, do you not, as a technologist, and try to optimize for one of those architectures, even though you have to hedge those bets? >> Yeah, we do. It really boils down to use cases and seeing, you know, what do you need for a particular use case? Like, you know, you mentioned Arm; there's a lot of Arm out at the edge and on smaller form-factor devices, not so much in the traditional enterprise data center today. So our bets, and a lot of the focus there, have been on those types of devices. And again, it's really about timing, right? The customer demand versus when we need to make a particular move from an innovation perspective. >> It's my final question for you as we wrap up our day here at the great CUBE on Cloud event. What are the most important stories in the cloud tech world, edge and/or cloud?
Which ones do you think people should be paying attention to, that will matter most over the next few years? >> Wow, that's a huge question. How much time do we have? Not enough. >> Architectural things they've got to focus on. A lot of people looking at this COVID situation are saying, I've got to come out with a growth strategy, and the obvious, clear thing to see is cloud. >> Yeah, let me break it down this way. I think the most important thing that people have to focus on is deciding, when they build architectures, what is the reliance on cloud services, native cloud services, so far more proprietary services, versus open source technologies such as Kubernetes and the ISV ecosystem around Kubernetes? You know, one is an investment in flexibility and control, of your management and of your intellectual property, right? Where maybe I'm building this application in the cloud today, but tomorrow I have to run it out at the edge, or I do an acquisition that I just wasn't expecting, or I simply don't know. We sure hope that COVID doesn't come around again, or something like it, right, as we get past this and navigate this today. But architecting for the expectation of change is really important, and so is having flexibility around your intellectual property, including the flexibility to be able to deploy and run on different clouds, especially as you build up your different partnerships. That's really key. So building a discipline to say, you know what, this is database as a service; it's never going to define who I am as a business; it's something I have to do as an IT organization, and I'm consuming that from the cloud. Whereas this part of the application stack defines who I am as a business; my app dev team is building this with Kubernetes, and I'm going to maintain more flexibility around that intellectual property. The strategic discipline to operate this way, among many enterprise customers, just hasn't gotten there yet.
But I think that's going to be a key inflection point, as we start to see these hybrid architectures continue to mature. >> Hey, Chris, great stuff, man. Really appreciate you coming on theCUBE and participating in CUBE on Cloud. Thank you for your perspectives. >> Great. Thank you very much. Always a pleasure to see you. >> Thank you, everybody, for watching. This ends the CUBE on Cloud day, with Dave Vellante and John Furrier. All these sessions are going to be available on demand, and all the write-ups will hit siliconangle.com, so check that out. We'll have links to the site up there, and we really appreciate you attending our first virtual editorial event. This is Dave Vellante, for John Furrier and the entire CUBE and CUBE on Cloud team. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Michael Dell | PERSON | 0.99+ |
30 milliseconds | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
$5 | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
John Ferrier | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Chris Wolf | PERSON | 0.99+ |
Pat | PERSON | 0.99+ |
one millisecond | QUANTITY | 0.99+ |
Jon Rose | PERSON | 0.99+ |
50 milliseconds | QUANTITY | 0.99+ |
Jon | PERSON | 0.99+ |
John Rose | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
99% | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Silicon Angle | ORGANIZATION | 0.99+ |
two challenges | QUANTITY | 0.99+ |
Dave | PERSON | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
1000 hospital beds | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
John Furry | PERSON | 0.99+ |
Linux Foundation | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
tomorrow | DATE | 0.99+ |
John Roese | PERSON | 0.99+ |
fourth | QUANTITY | 0.99+ |
One millisecond | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Today | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
first challenge | QUANTITY | 0.99+ |
Volonte | PERSON | 0.99+ |
second theme | QUANTITY | 0.99+ |
1% | QUANTITY | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
Kamal | PERSON | 0.99+ |
first | QUANTITY | 0.99+ |
400 testing sites | QUANTITY | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Del Tech World | ORGANIZATION | 0.98+ |
third one | QUANTITY | 0.98+ |
first principle | QUANTITY | 0.98+ |
Vanderbilt University | ORGANIZATION | 0.98+ |
86 | QUANTITY | 0.98+ |
first step | QUANTITY | 0.98+ |
VM Ware | ORGANIZATION | 0.97+ |
second part | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
both worlds | QUANTITY | 0.97+ |
10 | QUANTITY | 0.97+ |
about a year | QUANTITY | 0.96+ |
Cuba | LOCATION | 0.95+ |
Gardner | PERSON | 0.95+ |
three years | QUANTITY | 0.94+ |
Joey | PERSON | 0.94+ |
Cuban Cloud Day | EVENT | 0.93+ |
Lars Toomre, Brass Rat Capital | MIT CDOIQ 2019
>> From Cambridge, Massachusetts, it's theCUBE, covering the MIT Chief Data Officer and Information Quality Symposium 2019. Brought to you by SiliconANGLE Media. >> Welcome back to MIT, everybody. This is theCUBE, the leader in live coverage. My name is Dave Vellante. I'm here with my co-host, Paul Gillin, on day two of our coverage of the MIT CDOIQ conference. A lot of acronyms: MIT, of course, the great institution, but the Chief Data Officer and Information Quality event is in its 13th annual year. Lars Toomre is here; he's the managing partner of Brass Rat Capital. Cool name, Lars. Welcome to theCUBE. >> Thank you very much. Glad to be here. >> I have to start with the name. Brass Rat Capital, what's that? >> Well, the brass
They was taking the departments of the various agencies of the United States government and trying to roll up all the expenses into one kind of expense. This is where we spent our money and who got the money and doing that. That's what they were trying to do. >> Big picture type of thing. >> Yeah, big picture type thing. But unfortunately, it didn't work, okay? Because they forgot to include this odd word called mentalities. So the same departments meant the same thing. Data problem. They have a really big data problem. They still have it. So they're to G et o reports out criticizing how was done, and the government's gonna try and correct it. Then in earlier this year, there was another open government date act which said in it was signed by Trump. Now, this time you had, like, maybe 25 negative votes, but essentially otherwise passed Congress completely. I was called the Open as all capital O >> P E >> n Government Data act. Okay, and that's not been implemented yet. But there's live talking around this conference today in various Chief date officers are talking about this requirement that every single non intelligence defense, you know, vital protection of the people type stuff all the like, um, interior, treasury, transportation, those type of systems. If you produce a report these days, which is machine, I mean human readable. You must now in two years or three years. I forget the exact invitation date. Have it also be machine readable. Now, some people think machine riddle mil means like pdf formats, but no, >> In fact, what the government did is it >> said it must be machine readable. So you must be able to get into the reports, and you have to be able to extract out the information and attach it to the tree of knowledge. Okay, so we're all of sudden having context like they're currently machine readable, Quote unquote, easy reports. But you can get into those SEC reports. 
You pull out the net income information, and it says it's net income, but you don't know what it attaches to on the tree of knowledge. So, um, we are helping the government enable machine-readable type reporting, so that we can do machine to machine without people being involved. >> Would you say the tree of knowledge, you're talking about the context? >> The semantic tree of knowledge, so that, you know, we all come from one concept. Like, a human is an example of a living thing. So it all goes back, and as you get farther and farther out the tree, there's more distance, or semantic distance, but you can attach it back to a concept, so you can attach context to the various data. >> Is this essentially metadata? >> That's what people call it. But if I would go over to CSAIL here at MIT, they would turn around and call it the tree of knowledge, or semantic data. Okay, it's referred to as semantic data. So you are passing not only the data itself, but the context that
But, um, we own g was asked to consult with them on looking at the 15 30 act and saying, How would we improve quote unquote, given our technical, you know, not doing policy. We just don't have the technical aspects of the act. How would we want to see it improved? So one of the things we have advised is that for the first time in the United States codes history, they're gonna include interesting term called ontology. You know what intelligence? Well, everyone gets scared by the word. And when I read run into people, they say, Are you a doctor? I said, no, no, no. I'm just a date. A guy. Um, but an intolerant tea is like a taxonomy, but it had order has important, and an ontology allows you to do it is ah, kinda, you know, giving some context of linking something to something else. And so you're able Thio give Maur information with an intolerant that you're able to you with a tax on it. >> Okay, so it's a taxonomy on steroids? >> Yes, exactly what? More flexible, >> Yes, but it's critically important for artificial intelligence machine warning because if I can give them until ology of sort of how it goes up and down the semantics, I can turn around, do a I and machine learning problems on the >> order of 100 >> 1000 even 10,000 times faster. And it has context. It has contacts in just having a little bit of context speeds up these problems so dramatically so and it is that what enables the machine to machine? New notion? No, the machine to machine is coming in with son called SP R M just standard business report model. It's a OMG sophistication of way of allowing the computers or machines, as we call them these days to get into a standard business report. Okay, so let's say you're ah drug company. You have thio certify you >> drugged you manufactured in India, get United States safely. Okay, you have various >> reporting requirements on the way. You've got to give extra easy the FDA et cetera that will always be a standard format. The SEC has a different format. 
FERC has a different format. Okay, so what SBRM does is allow one to describe, in an ontology, what's in the report, and then it also allows one to attach an ontology to the cells in the report. So if you look at an SEC 10-Q or 10-K report, you can attach a US GAAP taxonomy, or ontology, to it and say, okay, annual net income, that's part of the income statement; you should never see that in a balance-sheet type item, you know, as an example. Okay? Or, for the first time, by having that context, you can solve the problem of machine-readable reports being filed that are wrong. Believe it or not, there were about 50 cases in the last 10 years where SEC reports were filed in which the assets don't equal total liabilities plus shareholders' equity; you know, they just didn't add up. So, you know, double-entry accounting doesn't work. Okay, so you could have the machines go and check at scale: hey, we've got a problem here, and you don't have to get humans involved. Holland and Australia are two leaders ahead of the United States in this area, and they've seen dramatic pickups. I mean, Holland's reporting something on the order of a 90% pickup, and Australia's reporting a 60% pickup. >> When we say pickup, are you talking about pickup of errors? >> No, efficiency, productivity. >> Okay. >> You're taking people out of the whole cycle. It's dramatic. >> Okay, now, what is the OMG's role in all of this? Explain the OMG. >> Object Management Group. I'm not speaking on behalf of them; it's a membership-run organization. >> You're a member of it. >> I'm a member of it, but I don't represent OMG. The membership has to collectively vote that this is what we think, okay, so I can't speak for them. But I have a pretty significant role with them: I run, on behalf of OMG, something called the Federated Enterprise Risk Management Group.
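The at-scale filing check just described, that assets must equal total liabilities plus shareholders' equity, can be sketched as follows. The sample filings below are invented, not real SEC data, and the field names are illustrative rather than actual taxonomy tags.

```python
def check_balance_sheet(report, tolerance=0.01):
    """Return a list of problems found in one machine-readable filing.

    With figures tagged by concept, software can verify the accounting
    identity without a human reading the report."""
    problems = []
    gap = report["TotalAssets"] - (report["TotalLiabilities"]
                                   + report["ShareholdersEquity"])
    if abs(gap) > tolerance:
        problems.append(f"accounting identity off by {gap:.2f}")
    return problems

# Two invented filings: one consistent, one that doesn't add up.
filings = {
    "good-filer": {"TotalAssets": 500.0, "TotalLiabilities": 300.0,
                   "ShareholdersEquity": 200.0},
    "bad-filer": {"TotalAssets": 500.0, "TotalLiabilities": 300.0,
                  "ShareholdersEquity": 150.0},
}

for name, report in filings.items():
    print(name, check_balance_sheet(report))
```

Run across every filing, a check like this is the mechanical version of the "machines go and check at scale" point: humans only look at the filings that come back with problems.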
That's the group which is focusing on risk management for large entities, like the federal government's Veterans Affairs, or the Department of Defense upstairs. I think speaking right now is the chief data officer for Transportation. Okay, that's a large organization, and they're instructed by OMB, at the, um, chief financial officer level, that the number one thing to do for the government is to get an effective enterprise risk management model going in the government agencies. And so they come to OMG, just like NIST does, or just like DARPA does from the defense or intelligence side, saying, we need to have standards in this area, so that not only can we talk to you effectively, but we can talk with our industry partners effectively, on space programs, or on retail, on medical programs, on finance programs. And so at OMG there are two significant financial programs, or standards, that exist. One is called FIGI, the Financial Instrument Global Identifier, which is a way of identifying a swap, a way of identifying a security. It does not have to be used just for equities, but it is worldwide. You can identify that, you know, IBM stock did trade in Tokyo, so it's a different identifier, with different, you know, deliverables, than the one trading in New York. Okay, so those are called FIGI identifiers. Then there are attributes associated with that security, or that beast being identified, which generally come out of FIBO, which is the Financial Industry Business Ontology. So, you know, it says, for a corporate bond, it has coupon, maturity, semi-annual payment, bullet, you know, as an example. So that gives you all the information that you would need to go through to do the calculation, assuming you have a calculation routine to do it. Then you need to turn around and set up your, well, call it your environment, you know, where forward yield curves are, and with mortgage-backed securities, or any puttable or callable
bond, you'll sort of probabilistically run the numbers many times and come up with an effective duration. Um, and then you do your various analytics, you know, aggregating the portfolio and looking at shortfalls versus your funding, or however you're doing risk management, and then finally you do reporting, which is where the standardized business reporting model comes in. So those are kind of the five parts of doing a full enterprise risk model. >> So what does this mean? First, well, who does this impact, and what does it mean for organizations? >> Well, it's going to change the world for basically everyone, because it's like doing a huge software upgrade, a conversion from version one to version two point oh. And you know how software upgrades are: everyone hates them, and it hurts, because everyone's going to have to now start using the same standard ontology. And, of course, that standard ontology, no one completely agrees with it, but the regulators have agreed to it. And the ultimate controlling authority in this thing is going to be FSOC, which is the Dodd-Frank-mandated response to never having another TARP. The Secretary of the Treasury heads it. It's, ah, I forget, the Federal Systemic Oversight Committee or something like that. All eight regulators report into it, and OFR stands as the adviser to FSOC for all the analytics. What these laws are doing is giving OFR more power to turn around and look at how we're going to define data across the agencies, so we can come up with consistent analytics, and we can therefore hopefully, one day, take, like, Goldman Sachs's prepayment model on mortgages and apply it to Citibank's portfolio, so we can look at consistency of analytics as well. >> Does it only apply to regulated businesses? >> It's going to apply to regulated financial businesses. Okay, so it's going to capture all your mutual funds, it's going to capture all your investment advisers, it's going to capture
most of your insurance companies through the medical care side, it's going to capture all your commercial banks, and it's going to capture most of your community banks. Okay, not all of them, because some of them are so small that they're not regulated on a federal basis. The one regulator which is being skipped at this point is the National Association of Insurance Commissioners, but they're apparently coming along as well, in independent federal legislation. Remember, they're regulated on the state level, not on the federal level, but they've kind of realized where the ball is going. >> And, well, does this make life better, or simply more complex? >> It's going to make life horrible at first, but we're going to take out incredible efficiency gains, probably after the first time you get it done. Okay, the problem of getting it done is going to be everyone agreeing that we use the same definitions of the same data. >> Who gets the efficiency gains? The regulators, the companies, or both? >> All, everyone. Can you imagine that, you know, a Goldman Sachs earnings report comes out, and you're an analyst looking at it. How do I know if Goldman is good or bad? You have your own equity model. You just give the model to the semantic worksheet, and it'll turn around and say, oh, those numbers are all good, this is what was expected, did it beat or didn't it? You could do that. There are examples of companies here in the United States where they used to do, um, competitive analysis. Okay? They would be taking somewhere on the order of 600 to 700 man-hours to do the competitive analysis. By having it available electronically, they cut those 600 hours down to five to do a competitive analysis. Okay, that's an example of the type of productivity you're going to see, both on the investment side when you're doing analysis, but also on the regulatory side. Can you now imagine? You get a regulatory report and say, oh, they're way out of whack.
I can tell you there's fraud going on here, because their numbers are too far out in XYZ; you know, you had to fudge numbers, type of thing. >> And so the securities analyst can spend more of his or her time looking forward, doing forecasts and analysis, rather than having to look back and reconcile all this, right? >> Right. And you know, you hear it through this conference, for instance: something like 80 to 85% of the time of analysts is spent getting the data ready. >> You hear the same thing with data scientists. >> Right. And so to the extent that we can help define the data, we're going to speed things up dramatically. But then, what's really interesting to me, being an MIT engineer, is that we have great possibilities in AI. I mean, really great possibilities. Right now, most of the AI models are pattern matching; like, you know, this idea of using facial recognition technology, that's just really doing patterns. You can do wonderful predictive analytics with AI, but we just need to give a lot of the AI models the context, so they can run more quickly. Okay, so we're going to see a world, and this is going to sound funny, but we're going to see a world where we talk about semantic analytics. Okay? Semantic analytics means I'm getting all the inputs for the analysis with context attached to each one of the variables, and what comes out of it will be variable results, but you also have the semantics with them. So in the future, the not too distant future, and we're already doing it in some of the national labs, you're doing pipelines, where one model goes to the next model, goes to the next model, goes to the next model. So you're going to have software pipelines, and believe it or not, you can get them running out of an Excel spreadsheet, you know, a modern, enhanced Excel spreadsheet, and that's where the future is going to be. So, really, if you're going to be really good in this business, you're going to have to be able to use your brain.
You have to understand what the data means. You're going to have to figure out what your modeling really means. What happens if, you know, normally for a lot of this stuff we do bell curves; okay, well, that doesn't have to be the only distribution. You could do fat tails, and if you did fat-tail distributions versus a bell curve, you get much different results. Now, which one's better? I don't know, but, you know, that's just an example of another cut at the data. >> So now talk more about the tech behind this. You've mentioned AI; what about math, machine learning, deep learning? Add some color to that. >> Well, the tech behind it is, believe it or not, some relatively old tech. There is a technology called RDF, which has kind of been around for a long time. It's a kind of, ah, machine-readable, machine-code type of thing, with fairly simplistic definitions, lots of angle brackets and all this stuff. There is a higher-level abstraction, I think put into a standard in, like, 2004 or 2005, called OWL, two point oh, and it does, at a higher level, a lot of the same stuff that RDF does. Okay, you can also create, um, believe it or not, your own special ways of communicating an ontology just using XML. Okay? So, uh, XBRL is an enhanced version of XML, okay? And so some of these older technologies, quote-unquote old, 20 years old, are essentially going to be driving a lot of this stuff. So, you know CORBA, right? CORBA is standardized at OMG, you know, on the communications and transport side. Do you realize that basically every single device in the world has a CORBA standard in it? Okay? Yeah, an OMG standard is in all your smartphones and all your computers, and that's how they communicate. It turns out that a lot of this old stuff, quote-unquote, is so rigidly well defined, so well done, that you can build modern stuff that takes us to Mars based on these old standards. >> All right, we've got to go.
But I've got to give you the award for the most acronyms: H.R. 1530, FIGI, OMG, SBRM, FSOC, TARP, OFR, OWL, XML, XBRL, CORBA. >> Which of course I do know. >> That was well done. Lars, thanks so much for coming on. It was great to have you. All right, keep it right there, everybody. We'll be back with our next guest from MIT CDOIQ right after this short message. Thank you.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Paul Gill | PERSON | 0.99+ |
Obama | PERSON | 0.99+ |
Trump | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Lars | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
2017 | DATE | 0.99+ |
David | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
Goldman | ORGANIZATION | 0.99+ |
Issa | PERSON | 0.99+ |
Federated Enterprise Risk Management Group | ORGANIZATION | 0.99+ |
80 | QUANTITY | 0.99+ |
600 hours | QUANTITY | 0.99+ |
Financial Transparency Act | TITLE | 0.99+ |
Congress | ORGANIZATION | 0.99+ |
60% | QUANTITY | 0.99+ |
Maxine Waters Committee | ORGANIZATION | 0.99+ |
Silicon Angle Media | ORGANIZATION | 0.99+ |
Tokyo | LOCATION | 0.99+ |
90% | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
Maria | PERSON | 0.99+ |
600 | QUANTITY | 0.99+ |
National Association Insurance Commissioners | ORGANIZATION | 0.99+ |
Brass Rat Capital | ORGANIZATION | 0.99+ |
California | LOCATION | 0.99+ |
Citibank | ORGANIZATION | 0.99+ |
Goldman Sachs | ORGANIZATION | 0.99+ |
Excel | TITLE | 0.99+ |
FERC | ORGANIZATION | 0.99+ |
Lars Toomre | PERSON | 0.99+ |
15 30 | TITLE | 0.99+ |
2005 | DATE | 0.99+ |
two leaders | QUANTITY | 0.99+ |
Cambridge, Massachusetts | LOCATION | 0.99+ |
SEC | ORGANIZATION | 0.99+ |
Australia | LOCATION | 0.99+ |
three years | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
7 | QUANTITY | 0.99+ |
NIST | ORGANIZATION | 0.99+ |
Open Data Act of 2014 | TITLE | 0.99+ |
25 negative votes | QUANTITY | 0.99+ |
85% | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
50 | QUANTITY | 0.99+ |
two years | QUANTITY | 0.99+ |
Sarah | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
Veterans Affairs | ORGANIZATION | 0.99+ |
five parts | QUANTITY | 0.99+ |
both | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
Republican | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
two weeks ago | DATE | 0.98+ |
one concept | QUANTITY | 0.98+ |
DARPA | ORGANIZATION | 0.98+ |
10,000 times | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
New York | LOCATION | 0.98+ |
Alex | PERSON | 0.98+ |
United States government | ORGANIZATION | 0.98+ |
Vader | PERSON | 0.98+ |
one day | QUANTITY | 0.98+ |
about 50 cases | QUANTITY | 0.98+ |
Treasury | ORGANIZATION | 0.97+ |
government Affairs Committee | ORGANIZATION | 0.97+ |
Mars | LOCATION | 0.97+ |
Object Management Group | ORGANIZATION | 0.97+ |
Government Data act | TITLE | 0.96+ |
earlier this year | DATE | 0.96+ |
OMG | ORGANIZATION | 0.96+ |
Teff | PERSON | 0.96+ |
100 | QUANTITY | 0.96+ |
six years | QUANTITY | 0.96+ |
Beaver | PERSON | 0.95+ |
two significant financial programs | QUANTITY | 0.94+ |
two point | QUANTITY | 0.94+ |
third generation | QUANTITY | 0.94+ |
Michael Allison & Derek Williams, State of Louisiana | Nutanix .NEXT 2018
>> Announcer: Live from New Orleans, Louisiana. It's theCUBE, covering .NEXT conference 2018, brought to you by Nutanix. >> Welcome back, we're here in New Orleans in the state of Louisiana, and to help Keith Townsend and myself, Stu Miniman, wrap up, we're glad to have one more customer. We have the great state of Louisiana here with us: we have Michael Allison, who's the Chief Technology Officer. We also have Derek Williams, who's the Director of Data Center Operations. Gentlemen, thanks so much for joining us. >> Thank you. >> Thanks for having us. >> All right, so I think we all know what the state of Louisiana is, hopefully most people can find it on a map, it's a nice easy shape to remember for my kids and the like. But, Michael, why don't we start with you? Talk to us first about kind of the purview of your group, your organization, and some of the kind of biggest challenges you've been facing in recent times. >> Sure, we are part of the Office of Technology Services, which is a consolidated IT organization for the state of Louisiana. We were organized about four years ago. Actually, four years ago this July. And that brought the 16 Federated IT groups into one large organization. And we have the purview of the executive branch, which includes those typical agencies like Children and Family Services, Motor Vehicles, Public Safety, Health and Hospitals, Labor, etc. >> And Derek, you've got the data center operations, so give us a little bit of a scope. We heard how many organizations are in there, but what do you all have to get your arms around? >> Sure, so we had, you know, there's often a joke we make that if they've ever made it, we own one of each. So we had a little bit of every type of technology. So what we've really been getting our arms around is trying to standardize technologies, get a standard stack going, an enterprise-level thing.
And really what we're trying to do is become a service provider to those customers, where we have standard lines of service and set enterprise-level platforms that we migrate everybody onto. >> So do you actually have your own data centers? Your own hosting facilities? What does the real estate look like? >> Absolutely. The state has two primary data centers that we utilize, and then we also use a number of cloud services as well as some third-party providers for offsite services. >> So obviously, just like every other state in the union, you guys have plenty of money. >> Always. >> Way too many employees and just no challenges. Let's talk about what the challenges are. You know, coming together, bringing that many organizations together, there's challenges right off the bat. What are some of the challenges as you guys look to provide services to the great people of Louisiana? >> Well, as Derek kind of alluded to, the technology debt is deep. We have services that are aging, about 40 years old, that are our tier one services. And they were built in silos many, many years ago. So being able to do the application rationalization, being able to identify those services, and then when we actually shift to the cultural side, actually bringing 16 different IT organizations into one, having all those individuals now work together instead of apart, and not in silos. That was probably one of the biggest challenges that we had over the last few years: really breaking down those cultural barriers and really coming together as one organization. >> Yeah, I totally agree with that. The cultural aspect has been the biggest piece for us. Really getting in there and saying, you know, a lot of small and medium-size IT shops could get away without necessarily having the proper governance structures in place, and a lot of people wore a lot of hats.
So now we're about 800 strong in the Office of Technology Services, and that means people are very aligned to what they do operationally. And so that's been a big shift, and that cultural shift has really been where we've had to focus to make that align properly to the business needs. >> Mike, what was the reason that led you down the path towards Nutanix? Maybe set us up with a little bit of the problem statement? We heard about the heterogeneous nature and standardization, which seems to fit into a theme we've heard lots of times with Nutanix. But was there a specific use case, or what led you towards that path? >> Well, about four years ago the Department of Health and Hospitals really had a case where they needed to modernize their Medicaid services, eligibility and enrollment. CMS really challenged them to build an infrastructure that was in line with their MIDAS standards. It was modular, COTS, configuration over customization. The federal government no longer wants to build monolithic systems that don't integrate and are just big silos. So what we did was we gravitated to that project. We went to CMS and said, hey, why don't we take what you're asking us to build and build it in a way that we can expand throughout the enterprise, to not only affect the Department of Health but also Children and Family Services, and be able to expand it to the Department of Corrections, etc. That was our use case, and having an anchor tenant with the Department of Health, which has a partner in CMS, really became the linchpin in this journey. That was our first real big win. >> Okay, how did you first hear about Nutanix? Was there a bake-off you went through? >> Yes, very similar. The RFP process took a year or so, and we were actually going down the road of procuring some Vblocks, and right before the Christmas vacations our Deputy CIO says, hey, why don't you go look to see if there are other solutions out there?
He challenged Derek, myself, and some others to really expand the horizons. Say, if we're going to kind of do this greenfield, what else is out there? And right before he got on his Christmas cruise he dropped that in our lap, and about a month later we were going down the Dell Nutanix route. And to be honest it was very contentious, and it actually took a call from Michael Dell, who I sent to voicemail twice before I realized who it was. But you know, those are the kind of decisions, and the buy-in from Dell executives, that really allowed us to comfortably make this decision and move forward. >> So technology doesn't exactly move fast in any government, because, you know, people, process, technology, and especially in the government, people and process. As you guys have deployed Nutanix throughout your environment, what are some of the wins and what are some of the challenges? >> That's a funny point, because we talk about this a lot. The fact that our choice was really between something like Vblock, which was an established player that had been around for a long time, and something a little more bleeding edge. And part of the hesitancy to move to something like Nutanix was the idea that, hey, we have a lot of restricted data: CJIS, HIPAA, all those kinds of things across the board, IRS 1075 comes into play, and there was hesitancy to move to something new. But one of the things we recognized was that we are not as agile as the private sector. The procurement process, all the things that we have to do, put us a little further out. So it did come into play that when we look at that timeline, the stuff that's bleeding edge now, by the time we have it out there in production, is probably going to be mainstream. So we had to hedge our bets a little. And you know, we really had to do our homework.
Nutanix was, you know, kind of head and shoulders above a lot of what we looked at, and I had resistance to it at first, so credit to the Deputy CIO, he made the right call. We came around on it, and it's been awesome ever since. You know, one of the driving things for us too was getting out there and really looking at the business case and talking to the customers. One of the huge things we kept hearing over and over was the HA aspect of it. You know, we need the high availability, we need the high availability. The other interesting thing that we have from the cost perspective is we are a cost-recovery agency now that we're consolidated. So what you use, you get charged for; you get a bill every month just like from a commercial provider. You know, use this many servers, this much storage, you get that invoice for it. So we needed a way that we could have an environment that scaled at kind of a linear cost, that we could just kind of add these nodes to without having to go buy a new environment and have this huge kind of CapEx expenditure. And so at the end of the day it lived up to the hype, and we went with Nutanix and we haven't regretted it. >> How are the vendors doing overall, helping you move to that OpEx model you mentioned? We'd love to hear what you're doing with cloud overall. Nutanix is talking about it. Dell's obviously talking about that. How are the vendors doing in general? And we'd love to hear specifically about Dell Nutanix. >> We've had the luxury of having exceptionally good business partners. The example I'd like to give is, about four months into this project we realized that we were treating Nutanix as a traditional three-tier architecture. We were sending a lot of traffic north-south. When we did the analysis, we asked the question, it was a little cattywampus, how do we straighten this out? And so we posed a question on a Tuesday about how do we fix this, how do we drive the network back into the fabric? By Thursday we were on a phone call with VMware.
By the following Monday we had two engineers on site with a local partner, an NSX Ninja. And we spent the next two months with different iterations of how to re-engineer the solution and really look at the full software-defined data center, not just software-defined storage and compute. It is really, how do we then evolve this entire solution, building upon Nutanix and then layering on top of that the VMware solutions that kind of took us to that next level. >> Yeah, and I think the key term in there is business partner. You know, it sounds a little corny to say, but we don't look at them as just vendors anymore. When we choose a technology or direction or an architecture, that is the direction we go for the entire state for that consolidated IT model. So we don't just need a vendor. We need someone that has a vested interest in seeing us succeed with the technology, and that's what we've gotten out of Nutanix, out of Dell, and they've been willing to, you know, if there's an issue, they put the experts on site. It's not just, we'll get some people on a call; they're going to be there next week, we're going to work with you guys and make it work. And it's been absolutely key in making this whole thing go. >> And as a CTO, one of the challenges that we have is, as Derek has executed his cloud vision, how do we take that and use it as an enabler, an accelerant to how we look at our service design, service architecture? How do we cloud-optimize this? So as we're talking about CI/CD and all these buzzwords that are out there, how can we use this infrastructure to be the platform that drives that from kind of a grassroots, foundation-up approach, whereas sometimes it's more of a top-down approach; we're taking somewhat of the opposite. And now we're in that position where we can answer the question of, now what, what do we do with it now?
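The re-engineering Michael describes, driving the network back into the fabric so VM-to-VM traffic stops hairpinning through the physical core, comes down to distinguishing east-west flows (inside the SDDC) from north-south flows (entering or leaving the data center). A minimal sketch of that classification; the subnet and the sample flows are invented for illustration, not Louisiana's actual addressing:

```python
# Hedged sketch: classify flows as east-west (VM to VM inside the SDDC
# fabric) vs north-south (in or out of the data center). With distributed
# routing in the fabric, east-west flows no longer traverse the core.
# The subnet and flow addresses below are illustrative assumptions.
import ipaddress

FABRIC_NETS = {"10.10.0.0/16"}  # subnets assumed to live in the SDDC fabric

def classify(src, dst, nets=FABRIC_NETS):
    """Return 'east-west' if both endpoints sit inside the fabric."""
    def inside(ip):
        return any(ipaddress.ip_address(ip) in ipaddress.ip_network(n)
                   for n in nets)
    return "east-west" if inside(src) and inside(dst) else "north-south"

flows = [
    ("10.10.1.5", "10.10.2.9"),    # app VM -> DB VM: stays in the fabric
    ("10.10.1.5", "203.0.113.7"),  # app VM -> external user
]
for src, dst in flows:
    print(src, "->", dst, classify(src, dst))
```

The more of the total traffic the first category represents, the more a three-tier design penalizes you and the more a fabric-resident (distributed) data path pays off.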
>> So it sounds like you guys are a mixed VMware and Nutanix software, Dell hardware shop. You've built the software-defined data center foundation, something that we've looked at for the past 10 years in IT to try and achieve, which is a precursor, or the foundation, to cloud. Nutanix has made a lot of cloud announcements. How do Nutanix's cloud announcements and your partnership with Dell match with what you guys plan when it comes to cloud? >> That's a perfect lead-in for us. So you're absolutely right. We have had an active thought in our head that we need to move toward SDDC; the software-defined data center is what we wanted to be at. Now that we've achieved it, the next step for us is to say, hey, whether it's an AWS or whomever, an Azure-type thing, they are essentially an SDDC as well. How do we move workloads seamlessly up and down in a secure fashion? So the way we architected things in our SDDC, we have a lot of customers. We can't have lateral movement. So everything's microsegmentation across the board. What we've been pursuing is a way to move VM workloads essentially seamlessly up to the cloud and back down, and have those microsegmentation rules follow, whether it goes up or back down. That's kind of the zen state for us. It's been an interesting conference for us, because we've seen some competitors to that model. Some of the things Nutanix is rolling out, we're going to have to go back and take a very serious look at on that roadmap to see how it plays out. But suddenly, with multicloud, if we can get to that state, we don't care what cloud it's in. We don't have to learn separate stacks for different providers. That is a huge gap for us right now. We have a highly available environment between two data centers where we run two setups, active-active, that are load balanced. So the piece we're missing now is really an offsite DR that has that complete integration.
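The "zen state" Derek describes, microsegmentation rules that follow a VM whether it runs on-premises or in a public cloud, amounts to keeping the policy as environment-neutral data and rendering it for whichever SDDC currently hosts the workload, rather than hand-maintaining per-environment firewall configuration. A minimal sketch; the tags, rule fields, and the two render targets are invented for illustration (no real NSX or AWS API calls):

```python
# Hedged sketch: one source-of-truth microsegmentation policy, rendered
# for either the on-prem SDDC or a cloud SDDC so the rules travel with
# the workload. All tag names and rule shapes are illustrative.
POLICY = [
    # allow the web tier to reach the app tier on 8443; deny the rest
    {"src_tag": "medicaid-web", "dst_tag": "medicaid-app",
     "port": 8443, "action": "allow"},
    {"src_tag": "any", "dst_tag": "medicaid-app",
     "port": "any", "action": "deny"},
]

def render(policy, target):
    """Render the same rules for the environment hosting the workload."""
    rendered = []
    for rule in policy:
        if target == "on_prem":   # distributed-firewall style rule
            rendered.append(f"dfw {rule['action']} {rule['src_tag']} -> "
                            f"{rule['dst_tag']}:{rule['port']}")
        elif target == "cloud":   # security-group style rule
            rendered.append(f"sg {rule['dst_tag']} {rule['action']} from "
                            f"{rule['src_tag']} port {rule['port']}")
    return rendered

# The same policy travels with the workload up to the cloud or back down:
print(render(POLICY, "on_prem"))
print(render(POLICY, "cloud"))
```

Because the policy is data, moving a workload "up or back down" means re-rendering the rules, not rewriting them, which is what makes the no-lateral-movement guarantee portable across clouds.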
So the idea that we could see a hurricane out in the gulf, 36, 48 hours away, and know that we might be having some issues, being able to shift workloads up to the cloud, that's perfect for us. And you know, then cost comes into play. All that kind of stuff, where we might have savings, economies of scale, all plays in perfectly for us. So we are super excited about where that's going, and some of the technologies coming up are going to be things we're going to be evaluating very carefully over the next year. >> At the end of the day it's all about our constituents. We have to take data and turn it into information that they can consume at the pace that they want to, whether it be traditional compute on a desktop, or mobile, or anywhere in between. It's our job to make sure that these services are available and usable when they need them, especially in the time of a disaster, or just in day-to-day life. So that's the challenge that we have when delivering services to our citizens and constituents. >> All right, well, Mike and Derek, really appreciate you sharing with us the journey you've been on, and how you're helping the citizens here in the great state of Louisiana. For Keith Townsend, I'm Stu Miniman. Thanks so much for watching our program. It's been a great two days here. Be sure to check out theCUBE.net for all of our programming. Thanks to Nutanix and the whole crew here, and thank you for watching theCUBE. >> Thank you.
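Derek's hurricane scenario, spotting a storm in the gulf 36 to 48 hours out and shifting workloads to the cloud ahead of it, is essentially a pre-positioning decision rule: once the forecast lead time drops below a threshold, evacuate workloads in order of criticality. A minimal sketch of that rule; the threshold, tier scheme, and workload names are invented for illustration:

```python
# Hedged sketch of a DR pre-positioning rule: when forecast lead time
# drops below a threshold, plan to shift workloads to the cloud SDDC,
# most critical (lowest tier number) first. All values are illustrative.
SHIFT_THRESHOLD_HOURS = 48  # start moving once landfall is within 48h

workloads = [
    {"name": "internal-reporting", "tier": 3},
    {"name": "medicaid-eligibility", "tier": 1},
    {"name": "motor-vehicles-portal", "tier": 2},
]

def migration_plan(hours_to_landfall, workloads):
    """Return the workloads to shift to the cloud, most critical first."""
    if hours_to_landfall > SHIFT_THRESHOLD_HOURS:
        return []  # too early; keep running in the primary data centers
    return sorted(workloads, key=lambda w: w["tier"])

for w in migration_plan(36, workloads):
    print("shift to cloud:", w["name"])
```

With microsegmentation policy expressed portably, a plan like this can be executed without re-authoring security rules at the destination, which is what makes the 36-to-48-hour window usable.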
SUMMARY :
brought to you by Nutanix. We have the great state of Louisiana here with us, And we have the purview of the executive branch, but what do you all have to get your arms around? Sure, so we had, you know, there's often a joke and then we also use a number of cloud services So obviously just like every other state in the union, What are some of the challenges as you guys that we had over the last few years and kind of that cultural shift has really been and build it in a way that we can expand and we were actually going down the road of The procurement process, all the things that we have to do, How are the vendors doing overall, By the following Monday we had two engineers on site or an architecture, that is the direction we go And as a CTO one of the challenges that we have is, So sounds like you guys are a mixed VMWare, So the idea that we could see a hurricane out in the golf, So that's the challenge that we have Thanks Nutanix and the whole crew here,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mike | PERSON | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Derek Williams | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Michael Allison | PERSON | 0.99+ |
Derek | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Office of Technology Services | ORGANIZATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Louisiana | LOCATION | 0.99+ |
New Orleans | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Department of Health | ORGANIZATION | 0.99+ |
Thursday | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
two days | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
two engineers | QUANTITY | 0.99+ |
New Orleans, Louisiana | LOCATION | 0.99+ |
Department of Corrections | ORGANIZATION | 0.99+ |
36, 48 hours | QUANTITY | 0.99+ |
two data centers | QUANTITY | 0.99+ |
Tuesday | DATE | 0.99+ |
next week | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
two setups | QUANTITY | 0.99+ |
Michael Dell | PERSON | 0.99+ |
two primary data centers | QUANTITY | 0.99+ |
twice | QUANTITY | 0.98+ |
a year | QUANTITY | 0.98+ |
VMWare | TITLE | 0.98+ |
next year | DATE | 0.97+ |
Children of Family Services | ORGANIZATION | 0.97+ |
three-tier | QUANTITY | 0.97+ |
Christmas | EVENT | 0.96+ |
about a month later | DATE | 0.96+ |
Gentleman | PERSON | 0.96+ |
about four months | QUANTITY | 0.96+ |
Children and Family Services | ORGANIZATION | 0.96+ |
16 Federated IT groups | QUANTITY | 0.95+ |
about 40 years old | QUANTITY | 0.95+ |
NSX Ninja | ORGANIZATION | 0.94+ |
one organization | QUANTITY | 0.94+ |
four years ago | DATE | 0.94+ |
16 different IT organizations | QUANTITY | 0.94+ |
Department of Health and Hospitals | ORGANIZATION | 0.93+ |
2018 | DATE | 0.93+ |
about four years ago | DATE | 0.91+ |
One | QUANTITY | 0.9+ |
four years ago this July | DATE | 0.89+ |
one large organization | QUANTITY | 0.89+ |
each | QUANTITY | 0.87+ |
Data Center | ORGANIZATION | 0.87+ |
many years ago | DATE | 0.85+ |
Public Safety | ORGANIZATION | 0.85+ |
VBlock | TITLE | 0.85+ |
Nutanix | COMMERCIAL_ITEM | 0.79+ |
HIPAA | TITLE | 0.79+ |
about 800 strong | QUANTITY | 0.78+ |