

Vertica Big Data Conference Keynote


 

>> Joy: Welcome to the Virtual Big Data Conference. Vertica is so excited to host this event. I'm Joy King, and I'll be your host for today's Big Data Conference keynote session. It's my honor and my genuine pleasure to lead Vertica's product and go-to-market strategy. And I'm so lucky to have a passionate and committed team who turned our Vertica BDC event into a virtual event in a very short amount of time. I want to thank the thousands of people, and yes, that's our true number, who have registered to attend this virtual event. We were determined to balance your health, safety and peace of mind with the excitement of the Vertica BDC. This is a very unique event, because as I hope you all know, we focus on engineering and architecture, best-practice sharing, and customer stories that will educate and inspire everyone. I also want to thank our top sponsors for the virtual BDC, Arrow and Pure Storage. Our partnerships are so important to us and to everyone in the audience, because together we get things done faster and better. Now for today's keynote, you'll hear from three very important and energizing speakers. First, Colin Mahony, our SVP and General Manager for Vertica, will talk about the market trends that Vertica is betting on to win for our customers, and he'll share the exciting news about our Vertica 10 announcement and how it will benefit our customers. Then you'll hear from Amy Fowler, VP of Strategy and Solutions for FlashBlade at Pure Storage. Our partnership with Pure Storage is truly unique in the industry, because together, modern infrastructure from Pure powers modern analytics from Vertica. And then you'll hear from John Yovanovich, Director of IT at AT&T, who will tell you about the Pure Vertica Symphony that plays live every day at AT&T. Here we go. Colin, over to you. >> Colin: Well, thanks a lot, Joy. And I want to echo Joy's thanks to our sponsors and to so many of you who have helped make this happen. This is not an easy time for anyone. We were certainly looking forward to getting together in person in Boston for the Vertica Big Data Conference and Winning with Data, but I think all of you and our team have done a great job scrambling and putting together a terrific virtual event, so I really appreciate your time. I also want to remind people that we will make both the slides and the full recording available afterward, so for any of those who weren't able to join live, it will still be available. Well, things have been pretty exciting here, and in the analytics space in general, and certainly for Vertica, there's a lot happening. There are a lot of problems to solve, a lot of opportunities to make things better, and a lot of data that can really make every business stronger, more efficient, and frankly, more differentiated. For Vertica, though, we know that focusing on the challenges that we can directly address with our platform and our people, where we can actually make the biggest difference, is where we ought to be putting our energy and our resources. I think one of the things that has made Vertica so strong over the years is our ability to focus on those areas where we can make a great difference. So as we look at the market and where we play, there are really three market trends, some recent and some not so recent but certainly picking up, that have become critical for every industry that wants to win big with data. We've heard this loud and clear from our customers and from the analysts that cover the market.
If I were to summarize these three areas, this really is the core focus for us right now. We know that there's massive data growth, and if we can unify the data silos so that people can really take advantage of that data, we can make a huge difference. We know that public clouds offer tremendous advantages, but we also know that balance and flexibility are critical. And we all need the benefits that machine learning, and everything up to end-to-end data science, can bring to every single use case, but only if it can really be operationalized at scale, accurately and in real time. And the power of Vertica is, of course, how we're able to bring so many of these things together. Let me talk a little bit more about some of these trends. So one of the first industry trends that we've all been following, probably now for over the last decade, is Hadoop, and specifically HDFS. So many companies have invested time, money, and more importantly, people in leveraging the opportunity that HDFS brought to the market. HDFS is really part of a much broader storage disruption that we'll talk a little bit more about. But HDFS itself was really designed for petabytes of data, leveraging low-cost commodity hardware and the ability to capture a wide variety of data formats from a wide variety of data sources and applications. And I think what people really wanted was to store that data before having to define exactly what structures it should go into. So over the last decade or so, the focus for most organizations has been figuring out how to capture, store, and frankly, manage that data. And as a platform to do that, I think Hadoop was pretty good. It certainly changed the way that a lot of enterprises think about their data and where it's locked up. In parallel with Hadoop, particularly over the last five years, cloud object storage has also given every organization another option for collecting, storing and managing even more data. That has led to a huge growth in data storage, obviously, up on public clouds like Amazon with S3, Google Cloud Storage and Azure Blob Storage, just to name a few. And when you consider regional and local object storage offered by cloud vendors all over the world, the explosion of data leveraging this type of object storage is very real. As I mentioned, it's just part of this broader storage disruption that's been going on. But with all this growth in the data, and all these new places to put it, every organization we talk to is facing even more challenges now around data silos. Sure, the data silos are certainly getting bigger, and hopefully they're getting cheaper per bit. But as I said, the focus has really been on collecting, storing and managing the data. Between the new data lakes and the many different cloud object stores, combined with all sorts of data types and the complexity of managing all this, getting to that business value has been very limited. This actually takes me to big bet number one for Team Vertica, which is to unify the data. Our goal, and some of the announcements we have made today, plus roadmap announcements I'll share with you throughout this presentation, is to ensure that all the time, money and effort that has gone into storing that data turns into business value. So how are we going to do that?
With a unified analytics platform that analyzes the data wherever it is: HDFS, cloud object storage, external tables in any format (ORC, Parquet, JSON), and of course, our own native ROS Vertica format. Analyze the data in the right place, in the right format, using a single unified tool. This is something that Vertica has always been committed to, and as you'll see in some of our announcements today, we're just doubling down on that commitment. Let's talk a little bit more about the public cloud. This is certainly the second trend; it's the second wave, maybe, of data disruption, along with object storage. And there are a lot of advantages when it comes to the public cloud. There's no question that the public clouds give rapid access to compute and storage, with the added benefit of eliminating the data center maintenance that so many companies want to get out of. But maybe the biggest advantage that I see is the architectural innovation. The public clouds have introduced so many methodologies around how to provision quickly, separating compute and storage and really dialing in the exact needs on demand as you change workloads. When public clouds began, it made a lot of sense for the cloud providers and their customers to charge and pay for compute and storage in the ratio that each use case demanded. And I think you're seeing that trend proliferate all over the place, not just up in the public cloud. That architecture itself is really becoming the next-generation architecture for on-premise data centers as well. But there are a lot of concerns; I think we're all aware of them. Many times, for different workloads, there are higher costs, especially for some of the workloads being run through analytics, which tend to run all the time. Just like the silo challenges that companies are facing with HDFS, data lakes and cloud storage, the public clouds have similar types of silo challenges as well. Initially, there was a belief that they were cheaper than data centers, and when you added in all the costs, it looked that way. And again, for certain elastic workloads, that is the case; I just don't think it's true across the board, even to the point where a lot of the cloud vendors aren't just charging lower costs anymore. We hear from a lot of customers that they don't really want to tether themselves to any one cloud because of some of those uncertainties. Of course, security and privacy are a concern. We hear a lot of concerns with regard to cloud and even some SaaS vendors around shared data catalogs across all their customers and not enough separation. Security concerns are out there; you can read about them, and I'm not going to jump on that bandwagon, but we hear about them. And then, of course, one of the things we hear the most from our customers is that each cloud stack is starting to feel a lot more locked in than the traditional data warehouse appliance. And as everybody knows, the industry has been running away from appliances as fast as it can, so they're not eager to get locked into another, quote-unquote, virtual appliance, if you will, up in the cloud. They really want to make sure they have flexibility in which clouds they're going to today, tomorrow and in the future. And frankly, we hear from a lot of our customers that they're very interested in eventually mixing and matching compute from one cloud with, say, storage from another cloud, which I think is something that we'll hear a lot more about.
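To make that first point concrete, here is a minimal sketch of what analyzing data in place looks like in Vertica SQL. The bucket path, table and column names are hypothetical, chosen only for illustration: the external table reads Parquet files where they already sit, and the query is the same SQL you would run against a native table.

```sql
-- Define an external table over Parquet files already sitting in S3;
-- the data stays in place and is scanned at query time.
CREATE EXTERNAL TABLE web_events (
    event_time TIMESTAMP,
    user_id    INT,
    url        VARCHAR(2048)
) AS COPY FROM 's3://my-datalake/web_events/*.parquet' PARQUET;

-- Query it like any native table.
SELECT url, COUNT(*) AS hits
FROM web_events
GROUP BY url
ORDER BY hits DESC
LIMIT 10;
```

The same pattern applies to ORC or JSON files on HDFS or other object stores; only the path and the format keyword change.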
And so for us, that's why we've got our big bet number two: we love the cloud. We love the public cloud. We love private clouds, on-premise and other hosting providers. But our passion and commitment is for Vertica to be able to run in any of the clouds that our customers choose, and to make it portable across those clouds. We have supported on-premises and all public clouds for years. And today, we have announced even more support for Vertica in Eon Mode, the deployment option that leverages the separation of compute from storage, with even more deployment choices, which I'm going to touch on more as we go. So, super excited about our big bet number two. And finally, as I mentioned, for all the hype that there is around machine learning, I actually think that most importantly, this third trend that Team Vertica is determined to address is the need to bring business-critical analytics, machine learning and data science projects into production. For so many years, there just wasn't enough data available to justify the investment in machine learning. Also, processing power was expensive, and storage was prohibitively expensive, so training, scoring and evaluating all the different models to unlock the full power of predictive analytics was tough. Today, you have those massive data volumes, and you have the relatively cheap processing power and storage to make that dream a reality. And if you think about it, with all the data that's available to every company, the real need is to operationalize the speed and the scale of machine learning so that these organizations can actually take advantage of it where they need to. We've seen this for years with Vertica; going back to the early days, some of the most advanced gaming companies were incorporating this, with live data, directly into their gaming experiences. Well, every organization wants to do that now, and accuracy and real-time action are key to separating the leaders from the rest of the pack in every industry when it comes to machine learning. But if you look at a lot of these projects, the reality is that there's a ton of buzz, a ton of hype spanning every acronym you can imagine, but most companies are struggling, due to separate teams, different tools, silos, and the limitations that many platforms are facing: forcing down-sampling to get a small subset of the data to try to create a model that then doesn't apply, compromising accuracy, and making it virtually impossible to replicate models and understand decisions. And if there's one thing that we've learned when it comes to data, it is the value of prescriptive data at the atomic level, being able to show "N of 1," as we refer to it, meaning individually tailored data. No matter what it is, healthcare, entertainment experiences like gaming, or other domains, being able to get at the granular data and make these decisions, make that scoring work, applies to machine learning just as much as it applies to giving somebody a next-best offer. The opportunity has never been greater, and the need is to integrate this end-to-end workflow and support the right tools without compromising on accuracy. Think of it as no down-sampling: using all the data really is key to machine learning success. Which should be no surprise, then, why the third big bet from Vertica is one that we've actually been working on for years. And we're so proud to be where we are today, helping the data disruptors across the world operationalize machine learning.
This big bet has the potential to truly unlock the power of machine learning. And today, we're announcing some very important new capabilities specifically focused on unifying the work being done by the data science community, with their preferred tools and platforms, with the volume of data and performance at scale available in Vertica. Our strategy has been very consistent over the last several years. As I said in the beginning, we haven't deviated from our strategy. Of course, there are always things that we add; most of the time it's customer-driven, based on what our customers are asking us to do. But I think we've also done a great job of not trying to be all things to all people, especially as these hype cycles flare up around us. We absolutely love participating in these different areas without getting completely distracted. There's a variety of query tools and data warehouses and analytics platforms in the market; we all know that. There are tools and platforms offered by the public cloud vendors, and by other vendors that support one or two specific clouds. There are appliance vendors, whom I was referring to earlier, who can deliver packaged data warehouse offerings for private data centers. And there's a ton of popular machine learning tools, languages and other kits. But Vertica is the only advanced analytics platform that can do all of this, that can bring it together. We can analyze the data wherever it is: in HDFS, in S3 object storage, or in Vertica itself. Natively, we support multiple clouds and on-premise deployments. And maybe most importantly, we offer that choice of deployment modes to allow our customers to choose the architecture that works for them right now, while still giving them the option to change, move and evolve over time. And Vertica is the only analytics database with end-to-end machine learning that can truly operationalize ML at scale. I know it's a mouthful, but it is not easy to do all these things. It is one of the things that highly differentiates Vertica from the rest of the pack. It is also why our customers, all of you, continue to bet on us and see the value that we are delivering and will continue to deliver. Here are a couple of examples of customers who are powered by Vertica. It's the scale of data; it's the millisecond response times. Performance and scale have always been a huge part of what we are about, but not the only thing: there's the functionality, all the capabilities that we add to the platform, the ease of use, the flexibility, obviously the deployment options. But look at some of the numbers under these customers on this slide. I've shared a lot of different stories about these customers, which, by the way, still amaze me every time I talk to one and get the updates; you can see the power and the difference that Vertica is making. Equally important, a lot of these customers are the epitome of being able to deploy Vertica in a lot of different environments. Many of the customers on this slide are not using Vertica just on-premise or just in the cloud; they're using it in a hybrid way, and in multiple different clouds. And again, we've been with them on that journey throughout, which is what has made this product, and frankly, our roadmap and our vision, exactly what they are. It's been quite a journey, and that journey continues now with the Vertica 10 release. Vertica 10 is obviously a massive release for us.
If you look back, you can see that we're building on the native columnar architecture that started a long time ago, obviously, with the C-Store paper. We built it to leverage commodity hardware, because it was an architecture that was never tightly integrated with any specific underlying infrastructure. I still remember hearing the initial pitch from Mike Stonebraker about the vision of Vertica as a software-only solution and the importance of separating the company from hardware innovation. At the time, Mike basically said to me, "There's so much R&D and innovation that's going to happen in hardware, we shouldn't bake hardware into our solution. We should do it in software, and we'll be able to take advantage of that hardware." And that is exactly what has happened. One of the most recent hardware innovations that we embraced is certainly the separation of compute and storage. As I said previously, the public cloud providers offered this next-generation architecture really to ensure that they could provide customers exactly what they needed, more compute or more storage, and charge for each respectively. The separation of compute from storage is a major milestone in data center architectures. If you think about it, it's really not only a public cloud innovation, though; it fundamentally redefines the next-generation data architecture for on-premise, and for pretty much every way people are thinking about computing today. And that goes for software, too. Object storage is an example of a cost-effective means of storing data. And even more importantly, separating compute from storage for analytic workloads has a lot of advantages, including the opportunity to manage much more dynamic, flexible workloads and, more importantly, to truly isolate those workloads from one another. And by the way, once you have something that can truly isolate workloads, then you can have conversations around autonomic computing, around setting up some compute resources on the data that won't affect any of the other workloads, to do some things on their own, maybe some self-analytics by the system, etc. These are things many of you know we've already been exploring with our own system data in the product. It was May 2018, believe it or not (it seems like a long time ago), when we first announced Eon Mode, and I want to make something very clear about Eon Mode: it's a mode, a deployment option, for Vertica customers. And I think this is another huge benefit that we don't talk about enough. Unlike a lot of vendors in the market who will nickel-and-dime you and charge you for every single add-on, you name it, you get this with the Vertica product. If you continue to pay support and maintenance, this comes with the upgrade; it comes as part of the new release. So any customer who owns or buys Vertica has the ability to set up either Enterprise Mode or Eon Mode, which is a question I know comes up sometimes. Our first announcement of Eon was obviously for AWS customers, including The Trade Desk and AT&T, most of whom will be speaking here later at the Virtual Big Data Conference. They saw a huge opportunity. Eon Mode not only allowed Vertica to scale elastically, with exactly the compute and storage that was needed, but it also dramatically simplified database operations, including things like workload balancing, node recovery, compute provisioning, etc.
So one of the most popular functions is the ability to isolate workloads and really allocate those resources without negatively affecting others. And even though traditional data warehouses, including Vertica in Enterprise Mode, have long been able to do workload isolation in various ways, it's never been as strong as in Eon Mode. Well, it certainly didn't take long for our customers to see that value across the board with Eon Mode, and not just up in the cloud. In partnership with one of our most valued partners, and a platinum sponsor here whom Joy mentioned at the beginning, we announced Vertica Eon Mode for Pure Storage FlashBlade in September 2019. And again, just to be clear, this is not a new product; it's one Vertica, with yet more deployment options. With Pure Storage, Vertica in Eon Mode is not limited in any way by variable cloud network latency. The performance is actually amazing when you take the benefits of separating compute from storage and run it with a Pure environment on-premise. Vertica in Eon Mode has a super-smart cache layer that we call the depot; it's a big part of our secret sauce around Eon Mode. Combined with the power and performance of Pure's FlashBlade, Vertica became the industry's first advanced analytics platform that actually separates compute and storage for on-premises data centers, something a lot of our customers are already benefiting from, and we're super excited about it. But as I said, this is a journey; we don't stop, and we're not going to stop. Our customers need the flexibility of multiple public clouds. So today with Vertica 10, we're super proud and excited to announce support for Vertica in Eon Mode on Google Cloud. This gives our customers the ability to use their Vertica licenses on Amazon AWS, on-premise with Pure Storage, and on Google Cloud. Now, we were talking about HDFS, and a lot of our customers who have invested quite a bit in HDFS, especially as a place to store data, have been pushing us to support Eon Mode with HDFS. So as part of Vertica 10, we are also announcing support for Vertica in Eon Mode using HDFS as the communal storage. Vertica's own ROS-format data can be stored in HDFS, and the full functionality of Vertica (complete analytics, geospatial, pattern matching, time series, machine learning, everything we have in there) can be applied to this data. On the same HDFS nodes, Vertica can also analyze data in ORC or Parquet format using external tables, and we can execute joins between the ROS data and the external tables, which powers a much more comprehensive view. So again, it's that flexibility to support our customers wherever they need us to support them, on whatever platform they have. Vertica 10 gives us a lot more ways to deploy Eon Mode in various environments for our customers. It allows them to take advantage of Vertica in Eon Mode, and the power it brings with that separation and that workload isolation, on whichever platform they are most comfortable with. Now, there's a lot that has come in Vertica 10, and I'm definitely not going to be able to cover everything, but we also introduced complex types, as an example. Complex data types fit very well into Eon and this separation as well. They significantly reduce the data pipeline and the cost of moving data, and they bring much better support for unstructured data, which a lot of our customers mix with structured data, of course, while leveraging the columnar execution that Vertica provides.
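As a sketch of what the Eon-on-HDFS announcement enables (the table names, columns and paths here are hypothetical): a native ROS table and an external table over ORC files can live on the same HDFS cluster and be joined in a single query.

```sql
-- Native table: stored in ROS format in HDFS communal storage under Eon Mode.
CREATE TABLE customers (
    customer_id INT,
    segment     VARCHAR(32)
);

-- External table: ORC files on the same HDFS cluster, read in place.
CREATE EXTERNAL TABLE clickstream (
    customer_id INT,
    page        VARCHAR(2048),
    ts          TIMESTAMP
) AS COPY FROM 'hdfs:///data/clickstream/*.orc' ORC;

-- One query spans both storage formats.
SELECT c.segment, COUNT(*) AS views
FROM clickstream k
JOIN customers c ON k.customer_id = c.customer_id
GROUP BY c.segment;
```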
So you get complex data types in Vertica now, a lot more data, stronger performance. It goes great with the broader Eon Mode announcements we made. Let's talk a little bit more about machine learning. We've actually been doing work in and around machine learning, with various regressions and a whole bunch of other algorithms, for several years. We saw the huge advantage that MPP offered, not just as a SQL engine and a database, but for ML as well. And it didn't take long to realize that there's a lot more to operationalizing machine learning than just the algorithms: it's data preparation, it's model training, it's the scoring, the shaping, the evaluation. That is so much of what machine learning, and frankly, data science, is about. Everybody always wants to jump to the sexy algorithms, but we handle those surrounding tasks very, very well, and that makes Vertica a terrific platform for this. Still, a lot of work in data science and machine learning is done in other tools; as I mentioned, there are just so many tools out there, and we want people to be able to take advantage of all of that. We never believed we were going to be the best algorithm company or come up with the best models for people to use. So with Vertica 10, we support PMML: we can now import and export PMML models. It's a huge step for us in operationalizing machine learning projects for our customers, allowing models to be built outside of Vertica, imported in, and then applied to the full scale of data with all the performance you would expect from Vertica. We are also integrating more tightly with Python. As many of you know, we've been doing a lot of open source projects with the community, driven by many of our customers, like Uber. And now we've also integrated with TensorFlow, allowing data scientists to build models in their preferred language and take advantage of TensorFlow, but again, to store and deploy those models at scale with Vertica. I think both of these announcements are proof of our big bet number three, and really our commitment to supporting innovation throughout the community by operationalizing ML with the accuracy, performance and scale of Vertica for our customers. Again, there are a lot of steps in the machine learning workflow; you can see some of them on the slide, and it's definitely not linear, either. We see it as a circle: companies that do it well just continue to learn, continue to re-score, continue to redeploy, and they want to operationalize all of that within a single platform that can take advantage of all those capabilities. And that is the platform, with a very robust ecosystem, that Vertica has always been committed to as an organization and will continue to be. This graphic, many of you have seen it evolve over the years; frankly, if we put everything and everyone on here, it wouldn't fit on a slide. But it will absolutely continue to evolve and grow as we support our customers where they need the support most. So again, being able to deploy everywhere, being able to take advantage of Vertica, not just as a business analyst or business user, but as a data scientist or an operational or BI person: we want Vertica to be leveraged and used by the broader organization. So I think it's fair to say, and I encourage everybody to learn more about Vertica 10, because I'm just highlighting some of the bigger aspects of it. But we talked about those three market trends.
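A rough sketch of that PMML import-and-score flow in Vertica SQL follows. The file path, model name and feature columns are hypothetical, and exact parameters may differ by version; the idea is that a model trained elsewhere is imported once, then scored in-database against the full dataset with no down-sampling.

```sql
-- Import a PMML model that was trained outside of Vertica.
SELECT IMPORT_MODELS('/models/churn_logreg.pmml'
                     USING PARAMETERS category='PMML');

-- Score every row in place, at full scale.
SELECT customer_id,
       PREDICT_PMML(age, tenure, monthly_spend
                    USING PARAMETERS model_name='churn_logreg') AS churn_score
FROM customer_features;
```

The TensorFlow integration follows the same shape: a frozen, exported model is imported (category='TENSORFLOW') and applied with a corresponding predict function, so training stays in the data scientist's tooling while scoring runs next to the data.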
The need to unify the silos, the need for hybrid and multiple-cloud deployment options, the need to operationalize business-critical machine learning projects: Vertica 10 has absolutely delivered on those. But again, we are not going to stop. It is our job not to, and this is how Team Vertica thrives. I always joke that the next release is the best release, and of course, even after Vertica 10, that is still true, although Vertica 10 is pretty awesome. From the first line of code, we've always been focused on performance and scale, and like in any really strong data platform, the optimizer and the execution engine are the two core pieces of that. Beyond Vertica 10, one of the big things we're already working on is a next-generation execution engine, and we're already seeing incredible early performance from it. This is just one example of how important it is for an organization like Vertica to constantly go back and re-innovate. Every single release, we do the sit-ups and crunches on our performance and scale: how do we improve? There are so many parts of the core server, and so many parts of our broader ecosystem. We are constantly looking at how we can go back to all the code lines we have and make them better in the current environment. And that's not an easy thing to do while you're also expanding into new environments to take advantage of the different deployments, which is a great segue to this slide. Today, we're obviously already available with Eon Mode on Amazon AWS, on Pure, and actually on MinIO as well. As I talked about, in Vertica 10 we're adding Google and HDFS, and coming next, obviously, Microsoft Azure and Alibaba Cloud. So being able to expand into more of these environments is really important for the Vertica team and how we go forward. And it's not just about running in these clouds; we want it to be a SaaS-like experience in all these clouds. We want you to be able to deploy Vertica in 15 minutes or less on these clouds. You can also consume Vertica in a lot of different ways on these clouds; as an example, on Amazon, Vertica by the Hour. So for us, it's not just about running; it's about taking advantage of the ecosystems that all these cloud providers offer and really optimizing the Vertica experience as part of them: optimization around automation, around self-service capabilities, extending our management console. We now have products like the Vertica Advisor Tool, which our Customer Success team has created to actually use our own smarts in Vertica: customers give us their data, and we help them tune their environments automatically. You can imagine that we're taking that to the next level in a lot of different endeavors around how Vertica as a product can be smarter, because we all know that simplicity is key: there just aren't enough people in the world who are good at managing data and taking it to the next level. And of course, there are other things that we all hear about, whether it's Kubernetes and containerization; you can imagine that works very well with Eon Mode and the separation of compute and storage. But innovation happens everywhere. We innovate around our community documentation. Many of you have taken advantage of the Vertica Academy, and the numbers there are through the roof in terms of the number of people coming in and certifying on it.
So there's a lot within the core products, and a lot of activity and action beyond the core products that we're taking advantage of. And let's not forget why we're here, right? It's easy to talk about a data platform; it's easy to jump into all the functionality, the analytics, the flexibility, how we can offer it. But at the end of the day, somebody, a person, has to take advantage of this data; she's got to be able to take this data and use this information to make a critical business decision. And that doesn't happen unless we explore lots of different and, frankly, new ways to get that predictive analytics UI and interface, beyond just the standard BI tools, in front of her at the right time. There's a lot of activity going on in this organization right now (I'll tease you with that) about how we can do that and deliver it for our customers. We're in a great position to see exactly how this data is consumed and used, and to start from this core platform that we have and go out from there. Look, I know the plan wasn't to do this as a virtual BDC, but I really appreciate you tuning in, and I really appreciate your support. If there's any silver lining to us maybe not being able to do this in person, it's the fact that the reach has actually gone significantly higher than what we would have been able to do in person in Boston. We're certainly looking forward to doing a Big Data Conference in the future. But if I could leave you with anything, know this: since that first release of Vertica, and our very first customers, we have been very consistent. We respect all the innovation around us, whether it's open source or not. We understand the market trends. We embrace those new ideas and technologies, and for us, true north, the most important thing, is: what does our customer need to do? What problem are they trying to solve? And how do we use the advantages that we have without disrupting our customers, knowing that you depend on us to deliver that unified analytics strategy and that performance at scale, not only today, but tomorrow and for years to come? We've added a lot of great features to Vertica, and I think we've said no to a lot of things, frankly, that we just knew we wouldn't be the best company to deliver. When we say we're going to do things, we do them. Vertica 10 is a perfect example of so many of the things we have heard loud and clear from you, our customers, and we have delivered. I am incredibly proud of this team across the board. I think the culture of Vertica, a customer-first culture, jumping in to help our customers win no matter what, is also something that sets us massively apart. I hear horror stories about support experiences with other organizations, and people always seem to be amazed at Team Vertica's willingness to jump in, their aptitude for certain technical capabilities, and their understanding of the business. I think sometimes we take that for granted, but that is the team we have as Team Vertica. We are incredibly excited about Vertica 10, and I think you're going to love the Virtual Big Data Conference this year. I encourage you to tune in. Maybe one other benefit: I know some people were worried about not being able to see different sessions because they were going to overlap with each other; well, now, even if you can't do it live, you'll be able to watch those sessions on demand. Please enjoy the Vertica Big Data Conference here in 2020.
Please, you and your families and your co-workers, be safe during these times. I know we will get through it, and analytics is probably going to help with a lot of that; we already know it is helping in many different ways. So believe in the data, believe in data's ability to change the world for the better, and thank you for your time. And with that, I am delighted to now introduce Micro Focus CEO Stephen Murdoch to the Vertica Big Data Virtual Conference. Thank you, Stephen. >> Stephen: Hi, everyone, my name is Stephen Murdoch. I have the pleasure and privilege of being the Chief Executive Officer here at Micro Focus. Please let me add my welcome to the Big Data Conference, and also my thanks for your support as we've had to pivot to this being a virtual rather than a physical conference. It's amazing how quickly we all reset to a new normal; I certainly didn't expect to be addressing you from my study. Vertica is an incredibly important part of the Micro Focus family. It is key to our goal of enabling and helping customers become much more data-driven across all of their IT operations. Vertica 10 is a huge step forward, we believe. It allows for multi-cloud innovation and genuinely hybrid deployments, lets enterprises begin to leverage machine learning properly, and opens the opportunity to unify currently siloed lakes of information. We operate in a very noisy, very competitive market, and there are people in that market who can do some of those things. The reason we are so excited about Vertica is that we genuinely believe we are the best at doing all of those things. And that's why we've announced publicly, and are executing internally, incremental investment into Vertica. That investment is targeted at accelerating the roadmaps that already exist and getting that innovation into your hands faster. This idea of speed is key. It's not a question of if companies have to become data-driven organizations; it's a question of when. So that speed now is really important, and that's why we believe the Big Data Conference gives you a great opportunity to accelerate your own plans. You will have the opportunity to talk to some of our best architects, some of the best development brains that we have. But more importantly, you'll also get to hear from some of our phenomenal customers. You'll hear from Uber, from The Trade Desk, from Philips, and from AT&T, as well as many, many others. And just hearing how those customers are using the power of Vertica to accelerate their own plans is, I think, the highlight. I encourage you to use this opportunity to the full. Let me close by again saying thank you. We genuinely hope that you get as much from this virtual conference as you could have from a physical conference, and we look forward to your engagement and to hearing your feedback. With that, thank you very much. >> Joy: Thank you so much, Stephen, for joining us for the Vertica Big Data Conference. Your support and enthusiasm for Vertica are so clear, and they make a big difference. Now, I'm delighted to introduce Amy Fowler, the VP of Strategy and Solutions for FlashBlade at Pure Storage, who is one of our BDC Platinum Sponsors and one of our most valued partners. It was a proud moment for me when we announced Vertica in Eon Mode for Pure Storage FlashBlade, and we became the first analytics data warehouse that separates compute from storage for on-premise data centers. Thank you so much, Amy, for joining us. Let's get started.
>> Amy: Well, thank you, Joy, so much for having us, and thank you all for joining us today, virtually, as we may all be. So, as we just heard from Colin Mahony, there are some really interesting trends happening right now in the big data analytics market: the end of the Hadoop hype cycle, the new cloud reality, and the opportunity to help the many data science and machine learning projects move from labs to production. So let's talk about these trends in the context of infrastructure, and in particular, look at why a modern storage platform is relevant as organizations take on the challenges and opportunities associated with these trends. The Hadoop hype cycle left a lot of data in HDFS data lakes, or reservoirs, or swamps, depending upon the level of data hygiene, but without the ability to get the value that was promised from Hadoop as a platform rather than as a distributed file store. And when we combine that data with the massive volume of data in cloud object storage, we find ourselves with a lot of data and a lot of silos, but without a way to unify that data and find value in it. Now, when you look at the infrastructure data lakes are traditionally built on, it is often direct-attached storage, or DAS. The approach that Hadoop took when it entered the market was primarily bound by the limits of networking and storage technologies: one-gigabit Ethernet and slow spinning disk. But today, those barriers do not exist, and all-flash storage has fundamentally transformed how data is accessed, managed and leveraged. The need for local data storage for significant volumes of data has been largely mitigated by the performance increases afforded by all-flash. At the same time, organizations can achieve superior economies of scale with the separation of compute and storage, because compute and storage don't always scale in lockstep. Would you want to add an engine to the train every time you add another boxcar? Probably not. From a Pure Storage perspective, FlashBlade is uniquely architected to allow customers to achieve better resource utilization for compute and storage, while at the same time reducing the complexity that has arisen from the siloed nature of the original big data solutions. The second, and equally important, recent trend we see is something I'll call cloud reality. The public clouds made a lot of promises, and some of those promises were delivered. But cloud economics, especially usage-based pricing and elastic scaling without the control that many companies need to manage the financial impact, are causing a lot of issues. In addition, the risk of vendor lock-in, from data egress charges to integrated software stacks that can't be moved or deployed on-premise, is causing a lot of organizations to back off the all-in cloud strategy and move toward hybrid deployments. Which is kind of funny, in a way, because it wasn't that long ago that there was a lot of talk about no more data centers. For example, one large retailer, whom I won't name, but I'll admit they're my favorite, told us several years ago that they were completely done with on-prem storage infrastructure, because they were going 100% to the cloud. But they just deployed FlashBlade for their data pipelines, because they need predictable performance at scale, and the all-cloud TCO just didn't add up. Now, that being said, while there are certainly challenges with the public cloud, it has also brought some things to the table that we see most organizations wanting.
First of all, in a lot of cases, applications have been built to leverage object storage platforms like S3, so they need that object protocol, but they may also need it to be fast. And "fast object" may have been an oxymoron only a few years ago; this is an area of the market where Pure and FlashBlade have really taken a leadership position. Second, regardless of where the data is physically stored, organizations want the best elements of a cloud experience, and for us, that means two main things. Number one is simplicity and ease of use: if you need a bunch of storage experts to run the system, that should be considered a bug. The other big one is the consumption model: the ability to pay for what you need when you need it, and to seamlessly grow your environment over time, totally non-disruptively. This is actually pretty huge, and something that a lot of vendors try to solve for with finance programs. But no finance program can address the pain of a forklift upgrade when you need to move to next-gen hardware. To scale non-disruptively over long periods of time, five to ten years plus, crucial architectural decisions need to be made at the outset. Plus, you need the ability to pay as you use it, and we offer something for FlashBlade called Pure as a Service, which delivers exactly that. The third cloud characteristic that many organizations want is the option for hybrid, even if that is just a DR site in the cloud; in our case, that means supporting replication to S3 at AWS. And the final trend, which to me represents the biggest opportunity for all of us, is the need to help the many data science and machine learning projects move from labs to production. This means bringing all the machine learning functions and model training to the data, rather than moving samples or segments of data to separate platforms. As we all know, machine learning needs a ton of data for accuracy, and there is just too much data to retrieve from the cloud for every training job. At the same time, predictive analytics without accuracy is not going to deliver the business advantage that everyone is seeking. You can visualize data analytics, as it is traditionally deployed, as being on a continuum, with the thing we've been doing the longest, data warehousing, on one end, and AI on the other end. But the way this manifests in most environments is a series of silos that get built up, so data is duplicated across all kinds of bespoke analytics and AI environments and infrastructure. This creates an expensive and complex environment. Historically, there was no other way to do it, because some level of performance is always table stakes, and each of these parts of the data pipeline has a different workload profile. A single platform that could deliver the multidimensional performance that such a diverse set of applications requires didn't exist three years ago, and that's why the application vendors pointed you toward bespoke things like the DAS environments we talked about earlier. The fact that better options exist today is why we're seeing those vendors move toward supporting this disaggregation of compute and storage. And when it comes to a platform that is a better option, one with a modern architecture that can address the diverse performance requirements of this continuum and allow organizations to bring the model to the data instead of creating separate silos, that's exactly what FlashBlade is built for: small files, large files, high throughput, low latency, and scale to petabytes in a single namespace.
And delivering that, importantly, in a single namespace, is what we're focused on for our customers. At Pure, we talk about it in the context of the modern data experience, because at the end of the day, that's what it's really all about: the experience for your teams and your organization. And together, Pure Storage and Vertica have delivered that experience to a wide range of customers: from a SaaS analytics company, which uses Vertica on FlashBlade to authenticate the quality of digital media in real time, to a multinational car company, which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous cars, to a healthcare organization, which uses Vertica on FlashBlade to enable healthcare providers to make real-time decisions that impact lives. And I'm sure you're all looking forward to hearing from John Yovanovich from AT&T, to hear how he's been doing this with Vertica and FlashBlade as well; he's coming up soon. We have been really excited to build this partnership with Vertica, and we're proud to provide the only on-premise storage platform validated with Vertica Eon Mode, and to deliver this modern data experience to our customers together. Thank you all so much for joining us today. >> Joy: Amy, thank you so much for your time and your insights. Modern infrastructure is key to modern analytics, especially as organizations leverage next-generation data center architectures and object storage for their on-premise data centers. Now, I'm delighted to introduce our last speaker in our Vertica Big Data Conference keynote, John Yovanovich, Director of IT for AT&T. Vertica is so proud to serve AT&T, and especially proud of the harmonious impact we are having in partnership with Pure Storage. John, welcome to the Virtual Vertica BDC. >> John: Thank you, Joy. It's a pleasure to be here, and I'm excited to go through this presentation today, and in a unique fashion, because as I was thinking through how I wanted to present the partnership we have formed between Pure Storage, Vertica and AT&T, I wanted to emphasize how well we all work together and how these three components have really driven home my desire for a harmonious (to use your word) relationship. So I'm going to move forward here. The theme of today's presentation is the Pure Vertica Symphony, live at AT&T, and if anybody is a Westworld fan, you can appreciate the sheet music on the right-hand side. What I'm going to highlight here, in a musical fashion, is how we at AT&T leverage these technologies to save money, to deliver a more efficient platform, and to make our customers happier overall. So as we look back, as early as maybe just a few years ago here at AT&T, I realized that we had many musicians to help the company. Or maybe you might want to call them data scientists or data analysts; for the theme, we'll stay with musicians. None of them were singing or playing from the same hymn book or sheet music. And so what we had was many organizations chasing a similar dream, but not exactly the same dream. The best way to describe that, and I think this might resonate with a lot of people in your organizations: how many organizations are chasing a customer-360 view in your company? Well, I can tell you that I have at least four in my company, and I'm sure there are many that I don't know of. That is our problem, because what we see is a repetitive sourcing of data. We see a repetitive copying of data.
And there's just so much money being spent. This is where I asked Pure Storage and Vertica to help me solve that problem with their technologies. What I also noticed was that there was no coordination between these departments. In fact, if you look here, nobody really wants to play with finance. Sales, marketing and care? Sure, they all copied each other's data, but they didn't actually communicate with each other as they were copying it, so the data became replicated and out of sync. This is a challenge throughout, not just my company, but all companies across the world. And that is: the more we replicate the data, the more problems we have in chasing, or conquering, the goal of a single version of the truth. In fact, I joke that at AT&T we have actually adopted a multiple-versions-of-the-truth theory, which is not where we want to be, but it is where we are. But we are conquering that with the synergies between Pure Storage and Vertica. This is what it left us with, and this is where we were challenged: each one of our siloed business units had their own storage, their own dedicated storage, and some of them had more money than others, so they bought more storage. Some of them anticipated storing more data than they really did; others are running out of space but can't add any more, because their budgets aren't being replenished. So if you look at it from this side view here, we have a limited amount of compute, fixed compute, dedicated to each one of these silos, and that's because of the desire to own your own. And the other part is that you are limited, or you're wasting space, depending on where you are in the organization. So the synergies aren't just about the data, but also about the compute and the storage, and I wanted to tackle that challenge as well. So I was tackling the data, I was tackling the storage, and I was tackling the compute, all at the same time. My ask across the company was: can we all just please play together? And to do that, I knew that I wasn't going to tackle this by getting everybody in the same room and getting them to agree that we needed one account table, because they would argue about whose account table is the best account table. But I knew that if I brought the account tables together, they would soon see that they had so much redundancy that I could start retiring data sources. I also knew that if I brought all the compute together, they would all be happy, but I didn't want them stepping on each other. In fact, that's one of the things all the business units really enjoy: the silo of having their own compute and, more or less, being able to control their own destiny. Well, Vertica's subclustering allows just that, and this is exactly what I was hoping for, and I'm glad they came through. And finally, how did I solve the problem of the single account table? Well, you don't need dedicated storage when you can separate compute and storage, as Vertica in Eon Mode does, and we store the data on FlashBlades, which you see on the left- and right-hand sides of our container, which I can describe in a moment. So what we have here is a container full of compute, with all the Vertica nodes sitting in the middle, and two loader subclusters, as we'll call them, sitting on the sides, dedicated to just putting data onto the FlashBlades, which sit on both ends of the container.
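A rough sketch of how that loader-versus-query isolation can be expressed, with hypothetical subcluster and group names (the subclusters themselves are created with Vertica's admin tooling, and syntax details may vary by version): inspect the subclusters, then pin the ETL clients' subnet to the loader subcluster so loads never compete with query traffic.

```sql
-- See which nodes belong to which subcluster.
SELECT subcluster_name, node_name, is_primary
FROM v_catalog.subclusters;

-- Route connections from the ETL subnet to the loader subcluster only.
CREATE LOAD BALANCE GROUP loaders WITH SUBCLUSTER loader_east
    FILTER '0.0.0.0/0';
CREATE ROUTING RULE etl_traffic ROUTE '10.20.0.0/16' TO loaders;
```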
Now, today I have two dedicated storage racks ("dedicated" might not be the right word), one on the left and one on the right, and I treat them as separate storage racks. They could be one, but I created them separately for disaster recovery purposes, in case a rack were to go down. That being said, there's no reason I won't add a couple more here in the future, so I can have, say, a five-to-ten-petabyte storage setup, and I'll have my DR in another container, because the DR shouldn't be in this same container. So: I got them all together, I leveraged subclustering, and I leveraged the separation of compute and storage. I was able to convince many of my clients that they didn't need their own account table, that they were better off having one. I reduced latency, and I reduced our data quality issues, a.k.a. ticketing. I was also able to leverage elasticity within this cluster. As you can see, there are racks and racks of compute. We set up what we'll call the fixed capacity that each of the business units needed, and then I'm able to ramp up and release the compute that's necessary for each one of my clients, based on their workloads, throughout the day. So while some of the compute is, more or less, dedicated, the rest is free for anybody to use. In essence, what I have is a concert hall with a lot of seats available: if I want to run a 10-chair symphony or an 80-chair symphony, I'm able to do that. And all the while, I can do the same with my loader nodes: I can expand my loader nodes to have their own symphony all to themselves and not compete with any of the other workloads of the other clusters. What does that change for our organization? Well, it really changes the way our database administrators do their jobs. This has been a big transformation for them; they have actually become data conductors. Maybe you might even call them composers, which is interesting, because what I've asked them to do is morph into less technology and more workload analysis. And in doing so, we're able to write auto-detect scripts that watch the queues and the workloads, so that we can ramp up and trim down the cluster and subclusters as necessary. It has been an exciting transformation for our DBAs, whom I may now need to classify as something like DCAs. I don't know, I'll have to work with HR on that, but I think it's an exciting future for their careers. And if we bring it all together, our clusters start looking like this, where everything is moving harmoniously, we have lots of seats open for extra musicians, and we are able to emulate a cloud experience on-prem. And so, I want you to sit back and enjoy the Pure Vertica Symphony, live at AT&T. (soft music) >> Joy: Thank you so much, John, for an informative and very creative look at the benefits that AT&T is getting from its Pure Vertica symphony. I do really like the idea of engaging HR to change the title to Data Conductor; that's fantastic. I've always believed that music brings people together, and now it's clear that analytics at AT&T is part of that musical advantage. So, now it's time for a short break, and we'll be back for our breakout sessions, beginning at 12 pm Eastern Daylight Time.
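As an illustration of the kind of signal those auto-detect scripts can watch (thresholds and the interpretation are made up for this sketch; the system table is from Vertica's monitoring schema): a query against the queue of waiting requests tells the script when a subcluster needs more seats.

```sql
-- Queries currently waiting for resources, per resource pool.
-- A sustained backlog is the cue to ramp the matching subcluster up;
-- an empty queue is the cue to trim it back down.
SELECT pool_name, COUNT(*) AS queued_queries
FROM v_monitor.resource_queues
GROUP BY pool_name
HAVING COUNT(*) > 5;  -- hypothetical threshold
```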
We have some really exciting sessions planned for later today, and then again, as you can see, on Wednesday. Now, because all of you are already logged in and listening to this keynote, you already know the steps to continue participating in the sessions listed here and on the previous slide. In addition, everyone received an email yesterday and today, and you'll get another one tomorrow, outlining the simple steps to register, log in and choose your sessions. If you have any questions, check out the emails or go to www.vertica.com/bdc2020 for the logistics information. There are a lot of choices, and that's always a good thing. Don't worry if you want to attend more than one session, or can't listen to the live sessions due to your time zone: all the sessions, including the Q&A sections, will be available on demand, and everyone will have access to the recordings, as well as even more pre-recorded sessions that we'll post to the BDC website. Now, I do want to leave you with two other important sites. First, our Vertica Academy. Vertica Academy is available to everyone, with a variety of very technical, self-paced, on-demand training, virtual instructor-led workshops, and Vertica Essentials certification. And it's all free, because we believe that Vertica expertise helps everyone accelerate their Vertica projects and the advantage those projects deliver. And if you have questions or want to engage with our Vertica engineering team now, we're waiting for you on the Vertica forum; we'll answer any questions or discuss any ideas that you might have. Thank you again for joining the Vertica Big Data Conference keynote session. Enjoy the rest of the BDC, because there's a lot more to come.

Published Date : Mar 30 2020


Sumit Gupta & Steven Eliuk, IBM | IBM CDO Summit Spring 2018


 

(music playing) >> Narrator: Live from downtown San Francisco, it's the Cube, covering the IBM Chief Data Officer Strategy Summit 2018. Brought to you by IBM. >> Welcome back to San Francisco everybody, we're at the Parc 55 in Union Square. My name is Dave Vellante, and you're watching the Cube, the leader in live tech coverage, and this is our exclusive coverage of IBM's Chief Data Officer Strategy Summit. They hold these both in San Francisco and in Boston. It's an intimate event, about 150 Chief Data Officers really absorbing what IBM has done internally, and IBM transferring that knowledge to its clients. Steven Eliuk is here. He is one of those internal practitioners at IBM. He's the Vice President of Deep Learning in the Global Chief Data Office at IBM. We just heard from him and some of his strategies and use cases. He's joined by Sumit Gupta, a Cube alum, who is the Vice President of Machine Learning and Deep Learning within IBM's cognitive systems group. Sumit. >> Thank you. >> Good to see you, welcome back Steven, let's get into it. So, I was paying close attention when Bob Picciano took over the cognitive systems group. I said, "Hmm, that's interesting." He's more recently a software guy, though of course I know he's got some hardware expertise. But here was someone deep into software and machine learning, and deep learning, and AI, and cognitive systems, coming into a systems organization. So you guys specifically set out to develop solutions to solve problems like the ones Steven's trying to solve. Right, explain that. >> Yeah, so I think there's a revolution going on in the market, the computing market, where we have all these new machine learning and deep learning technologies that are having meaningful impact, or the promise of meaningful impact. But these new technologies are actually, I would say, significantly complex, and they require very complex and high-performance computing systems. You know, I think Bob, and I think in particular IBM, saw the opportunity and realized that we really need to architect a new class of infrastructure, both software and hardware, to address what data scientists like Steven are trying to do in this space, right? The open source software that's out there, TensorFlow, Caffe, Torch, these things are truly game changing. But they also require GPU accelerators. They also require multiple systems. In fact, interestingly enough, you know, some of the supercomputers that we've been building for the scientific computing world, those same technologies are now coming into the AI world and the enterprise. >> So, the infrastructure for AI, if I can use that term, it's got to be flexible. Steven, we were sort of talking about that, elastic versus, I'm even extending it to, plastic. As Sumit just said, it's got to have that modern tooling, and you've got to accommodate alternative processor capabilities. And so that forms what you've used, Steven, to create new business capabilities within IBM. I wanted to, we didn't touch upon this before, but we touched upon your data strategy before, so tie it back to the line of business. You essentially are, I presume, a liaison between the line of business and the Chief Data Officer's office. How did that all work out and shake out? Did you define the business outcomes, the requirements, how did you go about that? >> Well, actually, surprisingly, we have very few new use cases that we're generating internally from my organization.
Because there are so many to pick from already throughout the organization, right? There are all these business units coming to us and saying, "Hey, now the data is in the data lake, and now we know there's more data, now we want to do this. How do we do it?" You know, so that's where we come in, that's where we start touching and massaging and enabling them. And those are the main efforts that we have. We do have some derivative works that have come out, that have become new offerings that you'll see here. But mostly we already have so many use cases from those business units that we're really trying to heighten and bring extra value to those domains first. >> So, a lot of organizations, it sounds like IBM was similar, created the data lake; you know, things like Hadoop made it lower cost to just put stuff in the data lake. But then it's like, "Okay, now what?" >> Steven: Yeah. >> So is that right? So you've got the data, this bog of data, and you're trying to make more sense out of it, get more value out of it? >> Steven: Absolutely. >> That's what they were pushing you to do? >> Yeah, absolutely. And with that, with more data, you need more computational power. And actually, Sumit and I go pretty far back, and I can tell you from my previous roles, I highlighted to him many years ago some of the deficiencies in the current architecture, in x86, etc., and I said, "If you hit these points, I will buy these products." And what they went back and did is address all of the issues that I had. Like, there are certain issues... >> That's when you were, sorry to interrupt, that's when you were a customer, right? >> Steven: That's when I was... >> An external customer. >> Outside, yes. I'm still an internal customer, so I've always been a customer, I guess, in that role, right? >> Yep, yep. >> But I need to get data to the computational device as quickly as possible. And with certain older-generation technologies, like PCIe Gen3, and certain issues around x86, I couldn't get that data there fast enough for high-fidelity imaging for autonomous vehicles, for, you know, high-fidelity image analysis. But with certain technologies in Power, we have NVLink directly to the CPU, and we also have PCIe Gen4, right? So these are big enablers for me, so that I can really keep the utilization of those very expensive compute devices higher, because they're not starved for data. >> And you've also put a lot of emphasis on IO, right? I mean, that's... >> Yeah, you know, if I may break it down, right, there are actually, I would say, three different pieces to the puzzle here, right? The highest level, from Steven's team's perspective or any data scientist's perspective, is that they need to just do their data science and not worry about the infrastructure, right? They actually don't want to know that there's an infrastructure. They want to say, "launch job," right? That's the level of grand clarity we want, right? In the background, they want our schedulers, our software, our hardware to just seamlessly use either one system or scale to 100 systems, right? To use one GPU or to use 1,000 GPUs, right? So that's where our offerings come in, right. We went and built this offering called PowerAI, and PowerAI is essentially open source software, like TensorFlow, like Caffe, like Torch, with performance and capabilities added to make it much easier to use. So for example, we have extremely terrific scheduling software that manages jobs, called Spectrum Conductor for Spark.
So as the name suggests, it uses Apache Spark. But again, the data scientist doesn't know that. They say, "launch job," and the software goes and scales that job across tens of servers or hundreds of servers. The IT team can determine how many servers they're going to allocate to data scientists. They can have all kinds of user management, data management, and model management software. We take the open source software and we package it. You know, surprisingly, uh, most people don't realize this, but open source software like TensorFlow has primarily been built on (mumbles), and most of our enterprise clients, including Steven, are on Red Hat. So we engineered Red Hat to be able to manage TensorFlow. And you know, I chose those words carefully; there was a little bit of engineering both on Red Hat and on TensorFlow to make that whole thing work together. It sounds trivial; it took several months, and it's a huge value proposition for enterprise clients. And then the last piece, I think, that Steven was referencing too, is that we're also trying to make AI more accessible for non data scientists, or I would say even data engineers. So we, for example, have a software called PowerAI Vision. This takes images and videos and automatically creates a trained deep learning model from them, right. So we analyze the images; you, of course, have to tell us, for these hundred images, what the most important things are. For example, you've identified: here are people, here are cars, here are traffic signs. If you give us some of that labeled data, we automatically do the work that a data scientist would have done and create this pre-trained AI model for you. This really enables rapid prototyping for a lot of clients who either kind of struggle to have data scientists or don't want to have data scientists. >> So just to summarize that, the three pieces: making it simpler for the data scientists, who just run the job; the backend piece, which is the schedulers, the hardware, and the software doing their thing; and then making that data science capability more accessible. >> Right, right, right. >> Those are the three layers. >> So you know, I'll re-say it in my words, maybe. >> Yeah, please. >> Ease of use, right; hardware and software optimized for performance and capability; and point-and-click AI, right, AI for non data scientists. Those are the three levels that I think of when I'm engaging with data scientists and clients. >> And essentially it's embedded AI, right? I've been making the point today that a lot of the AI is going to be purchased from companies like IBM, and I'm just going to apply it. I'm not going to try to go build my own AI, right? I mean, is that... >> No, absolutely. >> Is that the right way to think about it as a practitioner? >> I think we talked about it a little bit on the panel earlier, but if we can leverage these pre-built models and just apply a little bit of training data, it makes it so much easier for the organizations, and so much cheaper. They don't have to invest in a crazy amount of infrastructure or all the labeling of data; they don't have to do that. So I think it's definitely steering that way. It's going to take a little bit of time; we have some of them there. But as we iterate, we are going to get more and more of these, you know, commodity-type models that people can utilize. >> I'll give you an example: we have a software at IBM called Intelligent Video Analytics.
It's very good at taking any surveillance data and, for example, recognizing anomalies, or, you know, noticing if people aren't supposed to be in a zone. Uh, and we had a client who wanted to do worker safety compliance. So they want to make sure workers are wearing their safety jackets and their helmets when they're on a construction site. So we used surveillance data and created a new AI model using PowerAI Vision. We were then able to plug into this IVA, Intelligent Video Analytics, software. So they have the nice GUI-based software for the dashboards and the alerts, yet we were able to do incremental training on their specific use case, which, by the way, was with their specific, you know, equipment and jackets and stuff like that, and create a new AI model very quickly, for them to be able to apply and make sure their workers are actually compliant with all of the safety requirements they have on the construction site. >> Hmm, interesting. So it's almost like a new form of CAPTCHA: identify "all the pictures with bridges," right; that's the kind of thing you're capable of doing with these video analytics. >> That's exactly right. Clients will have all kinds of uses. I was talking to a client who's a major car manufacturer, and he was saying it would be great if he could identify the make and model of the cars people are driving into his dealership, because he bets he can draw a correlation between what they drive in with and what they're going to drive out of, right. Marketing insights, right. And, uh, so there are a lot of things that people want to do which would really be bespoke to their use cases, building on top of existing AI models that we have already. >> And you mentioned x86 before. And not to start a food fight, but um... >> Steven: And we use both internally too, right. >> So let's talk about that a little bit. I mean, where do you use x86, and where do you use IBM Cognitive and Power Systems? >> I have a mix of both. >> Why, how do you decide? >> There are certain workloads I will delegate over to Power, just because, you know, they're data-starved on the other architecture and we notice computation is being impacted by it. Um, but because we deal with so many different organizations, certain organizations optimize for x86 and some of them optimize for Power, and I can't pick, I have to have everything. Just like I mentioned earlier, I also have to support cloud and on-prem; I can't pick just to be on-prem, right, so. >> I imagine the big cloud providers are in the same boat, and I know some are your customers. You're betting on data, you're betting on digital, and it's a good bet. >> Steven: Yeah, 100 percent. >> We're betting on data and AI, right. So I think with data, you've got to do something with the data, right? And analytics and AI are what people are doing with that data. We have an advantage both at the hardware level and at the software level in these two, I would say, workloads or segments, which are data and AI, right. And we fundamentally have invested in the processor architecture to improve the performance and capabilities, right. You can run much larger AI models on a Power system than you can on an x86 system, right; that's one advantage. You can train an AI model four times faster on a Power system than you can on an Intel-based system. So the clients who have a lot of data, who care about how fast their training runs, are the ones who are committing to Power Systems today. >> Mmm hmm. >> Latency requirements, things like that, really, really big deal.
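The incremental-training workflow described above, where an existing vision model is retrained on a small labeled set of site-specific images, is what PowerAI Vision packages behind a point-and-click interface. The PyTorch sketch below is a generic stand-in for that idea, not IBM's implementation; the dataset path and the three class folders are assumptions.

```python
# Minimal transfer-learning sketch: reuse a pretrained backbone and
# retrain only the classifier head on a small labeled dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: site_images/{compliant,no_helmet,no_jacket}/*.jpg
data = datasets.ImageFolder(
    "site_images",
    transform=transforms.Compose([transforms.Resize((224, 224)),
                                  transforms.ToTensor()]),
)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet50(pretrained=True)      # pretrained backbone
for p in model.parameters():
    p.requires_grad = False                   # freeze what it already knows
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):                        # small labeled set, few epochs
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```

Only the new head trains, which is why a hundred labeled images and a short run can yield a usable site-specific model.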
>> So what that means for you as a practitioner is you can do more with less, or is it... I mean... >> I can definitely do more with less, but the real value is that I'm able to get an outcome quicker. Everyone says, "Okay, you can just roll out more GPUs, more GPUs, run more experiments, run more experiments." No, no, that's not actually it. I want to reduce the time for an experiment, get it done as quickly as possible so I get that insight. Because then I can cancel a bunch of those jobs that are already running, since I already have the insight and know that the model isn't going anywhere. All right, so it's very important to get the time down. Jeff Dean said it a few years ago; he uses the same slide often. But, you know, when things take months, well, that's what happened basically from the '80s up until, you know, 2010: we didn't have the computation, and we didn't have the data. Once we were able to get that experimentation time down, we were able to iterate very, very quickly on this. >> And throwing GPUs at the problem doesn't solve it, because it's too much complexity, or? >> It helps the problem, there's no question. But when my GPU utilization goes from 95% down to 60%, you know, I'm getting only a two-thirds return on investment there. It's a really, really big deal, yeah. >> Sumit: I mean, the key here, I think, Steven, and I'll draw it out again, is this time to insight, because time to insight actually is time to dollars, right. People are using AI either to make more money, right, by providing better products to their customers and giving better recommendations, or they're saving on their operational costs, right; they're improving their efficiencies. Maybe they're routing their trucks in the right way, they're routing their inventory to the right place, they're reducing the amount of inventory that they need. So in all cases you can actually correlate AI to a revenue outcome or a dollar outcome. So the faster you can do that... you know, I tell most people that I engage with, the hardware and software they get from us pays for itself very quickly, because they make that much more money, or they save that much more money, using Power Systems. >> We even see this internally. I've heard stories and all that, and Sumit kind of commented on this, but there are actually salespeople that take this software and hardware out and are able to get an outcome in certain situations where they just take the client's data, and they're salespeople, not data scientists; they train it, it's so simple to use, then they present the client with the outcomes the next day, and the client is just blown away. This isn't just a one-time occurrence; salespeople are actually using this, right. So it's getting to the point where it's so simple to use, you're able to get those outcomes, and we're even seeing, you know, deals close quicker. >> Yeah, that's powerful. And Sumit, to your point, the business case is actually really easy to make. You can say, "Okay, this initiative that you're driving, what's your forecast for how much revenue?" Now let's make an assumption for how much faster we're going to be able to deliver it. And if I can show them a one-day turnaround on a corpus of data, okay, let's say two months times whatever, my time to break even. I can run the business case very easily and communicate it to the CFO or whomever, the line-of-business head, so... >> That's right.
I mean, I was just at a retailer, a grocery store, a local grocery store in the Bay Area recently, and he was telling me how in California we've passed legislation that does not allow free plastic bags anymore; you have to pay for them. So people are bringing their own bags. But that's actually increased theft for them, because people bring their own bag, put stuff in it, and walk out. And he wanted to have an analytic system that can detect if someone puts something in a bag and then does not buy it at checkout. So in many ways, they want to use the existing camera systems they have, but automatically be able to detect fraudulent behavior or, you know, anomalies. And it's actually quite easy to do with a lot of the software we have around PowerAI Vision, around video analytics from IBM, right. And that's what we were talking about, right? Take existing trained AI models for vision and enhance them for your specific use case and the scenarios you're looking for. >> Excellent. Guys, we've got to go. Thanks Steven, thanks Sumit for coming back on, and appreciate the insights. >> Thank you. >> Glad to be here. >> You're welcome. All right, keep it right there, buddy, we'll be back with our next guest. You're watching "The Cube" at IBM's CDO Strategy Summit from San Francisco. We'll be right back. (music playing)
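Steven's utilization remark above is worth making concrete: dropping GPU utilization from 95% to 60% raises the effective price of every useful GPU-hour, and 0.60 / 0.95 ≈ 0.63 is where his two-thirds return-on-investment figure comes from. The hourly rate below is an assumption used only to show the arithmetic.

```python
# Effective cost per *useful* GPU-hour at two utilization levels.
# The dollar rate is illustrative, not a quoted price.
rate = 3.00                              # assumed cost of one GPU-hour
for util in (0.95, 0.60):
    print(f"{util:.0%} utilization -> ${rate / util:.2f} per useful hour")
# 95% -> $3.16, 60% -> $5.00: the same work costs ~1.6x more per unit.
```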

Published Date : May 1 2018


Kevin Baillie, Atomic Fiction


 

>> Narrator: Live from Las Vegas, it's the CUBE, covering NAB 2017, brought to you by HGST. >> Welcome back to the CUBE, live in Las Vegas at the NAB show. We're having a great day so far. Very excited to introduce you to my next guest, Kevin Baillie, cofounder and VFX supervisor at Atomic Fiction and the CEO of Conductor Technologies. Never a boring day for you with those two titles, I can imagine. >> No, I like to joke that I like to make sure that I always have the most exciting job in the world, so I had to pick three to make sure that I never have a down moment spoil that. >> Wow, I am impressed. So you just spoke at the virtual NAB conference last month on visual effects in the cloud, power, and control. Something that I found very interesting was that six years ago, you were kind of on an island going, "I have a hunch about cloud." Tell us, what was that hunch, why did you have it, and what has it generated so far? >> Yeah, yeah, that's a great question. The hunch was less of, "Hey, cloud looks like a great opportunity." It was more knowing what wasn't working in the industry as it was at that time. There were all kinds of companies having financial troubles or having a hard time delivering projects, tons of bankruptcies, and just really sad stories everywhere. And we looked at the market and said, "There's a ton of work here; this doesn't make sense." Some of the best entertainment is being made right now, and it all relies on visual effects. What's wrong? And the further we broke down the problem, the more we realized that fixed infrastructure within a market that naturally ebbs and flows, there just wasn't a match there. So, through that problem, we looked for solutions, and cloud was a very obvious one at that point. So we just made the jump. >> And tell us about Atomic Fiction versus Conductor Technologies. Chicken, egg, which one came first? And how are they collaborating together? >> Atomic Fiction came first, almost seven years ago at this point. And we looked for any kind of way to use cloud. We started using AWS directly; we then used a tool called Zync. And as we grew, we found that the needs of the company were changing so radically that nothing out there could actually keep up with our pace of growth. We had all this customized pipeline that we couldn't find a way to get into the cloud. So we built our own, and that was called Conductor. And after, I think we were working on Game of Thrones and The Walk and had just started on Deadpool, we realized it was working so well that we decided to spin it off as its own company and make a go of turning it into a product that could help everybody in the same way the cloud had helped Atomic Fiction. >> Fantastic. One of my favorite movies is The Walk. I was looking at your website, and you think as the viewer, "How did they film this?" You know, in this day and age, so much is CGI. Talk to us about what realtime cloud rendering is. How does it enable a movie like The Walk or Deadpool to have that awe-inspiring, jaw-dropping reaction from the audience? >> Well, I think a large portion of bringing that jaw-dropping reaction to the audience, and that level of realism, is being able to run productions in the way that they want to be run.
And what I mean by that is, let's take a movie like The Walk, where you have to recreate 1974 New York and the Twin Towers, and all these different lighting scenarios. That means we have to build every building, every rain gutter, every hotdog stand in the street, down to exacting detail, and that just takes a lot of time. So we spent a ton of time, probably the first three quarters of the schedule, just building the city. And we couldn't render anything at that point. It wasn't until the very end of the show that we were able to say, "All right, now New York is there; let's just put it on the screen." But that takes millions of hours of computing to get done. The Walk, for example, used 9.1 million processor hours of rendering. That's over a thousand years on a single processor. So if we hadn't had the cloud, we would have had to be like, "Oh, what can we render first so we don't bottleneck at the end of the schedule?" and really try to bend production into the box of fixed infrastructure that we have. But with the cloud, we don't have to do that. We can go as big as we want to at the very end of the show and get it done, if that's what makes sense for the show. And because that's what makes sense for the show, the creative just ends up being that much better. The same was true for Deadpool; the same is true for Star Trek. With these movies, you want to craft love into the beginning part, so the stuff you generate at the end is as beautiful as it can be. >> So is cloud really freeing production to operate in the way that it needs to operate? >> Yeah, yeah, exactly. Because the traditional model is, a visual effects company builds a data center and stuffs it full of computers. In the best case, with like three weeks' lead time, you can rent a bunch of racks of computers, shove them in a closet somewhere, and get your project done. It ends up being expensive and painful, and you need a big team to man all that stuff. Whereas with cloud, we can say, "Hey, I need a thousand computers three minutes from now," and boom, a thousand computers spin up out of nowhere. And the great thing that we've done with Conductor as well is we've gone and negotiated per-minute software licensing with Autodesk and the Foundry and Isotropix and Chaos Group, all these big software vendors in the industry. So not only can you get compute by the minute, you can also get all the software that you need by the minute, right. So you can have three thousand nodes running Autodesk's Arnold, but you run it for 42 minutes, and you only pay for 42 minutes of three thousand licenses of Arnold, right. So it's really transformative from a flexibility standpoint. >> And the cost model really flips it on its head. >> And by the way, the artists get the result back faster. Because you can scale up so big and get the result back to them so quickly, without any cost penalty, they see the fruits of their labor while the ideas are still fresh in their heads, which is a huge intangible benefit that has real economic benefits. >> Absolutely, one of the themes that we've heard today is that speed is key. It's absolutely critical to whatever is going to happen, or when a vision changes direction on a shoot. And without the power of the cloud to facilitate something on a dime, there are delays, which all add up to economic impact.
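The numbers Kevin gives above are easy to sanity-check: 9.1 million processor-hours is a little over a thousand years on a single processor, and per-minute metering means a 3,000-node, 42-minute burst is billed as exactly 126,000 node-minutes of compute plus licenses. The dollar rates in the sketch are assumptions for illustration only.

```python
# Sanity-checking the render figures quoted above.
hours = 9_100_000                          # processor-hours on The Walk
print(f"{hours / (24 * 365):,.0f} years on one processor")   # ~1,039

nodes, minutes = 3_000, 42                 # burst size from the transcript
compute_rate, license_rate = 0.002, 0.001  # assumed $/node-minute rates
cost = nodes * minutes * (compute_rate + license_rate)
print(f"${cost:,.2f} for the whole 42-minute render")        # $378.00
```

The point of the model is the second line: you pay for 126,000 node-minutes and nothing more, instead of owning 3,000 nodes that sit idle the rest of the month.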
>> Yeah, and you know, on one of our earliest projects rendered in the cloud, Flight, the Robert Zemeckis movie with Denzel Washington, that exact thing happened. At the very end, Zemeckis realized that he needed an extra set of like a hundred visual effects shots. And if it hadn't been for the cloud, we would have had to say, "No, sorry, we can't do these. You have to find somebody else to do them." But because of the ability of the cloud to accommodate that last-minute creative epiphany, we were able to actually do the work. So it really is truly transformative, and it allowed us to bring in, you know, hundreds of thousands of dollars of extra revenue that we wouldn't have been able to otherwise. >> Absolutely. In terms of some of the public cloud providers, tell us who you're working with on that end. >> Yeah, so we're working with Google right now, using Google Compute Engine on the back end. And we're actually moving forward with Microsoft and Azure, adding it as an option later in the year. So hopefully at the end of the year we'll be able to support all the large cloud providers, and be able to say, "Hey, Studio X, we know you have an affinity for Google right now, but on the next project maybe you need a very specific GPU type." Or there's a company in China that needs to do some work, and Google isn't there; now Azure is your thing, right. So I think that the world of cloud providers competing against one another is going to be really beneficial for everyone in our industry, for sure. And we want to be there to facilitate a little bit of, choose whoever's best, right. >> Right, giving you the ability to really be agnostic on the back end. >> Yeah, that's exactly right. >> So as we look at these massive resources that studios are generating, creating such interactive films, what are some of the precautions that you see, and how can you help them mitigate risk while leveraging the power of cloud? >> Well, one of the benefits of cloud is you only have to pay for what you use, just like electricity, right. One of the downsides of cloud is you have to pay for what you use, right. So if you're not careful about the render you put in the cloud, or the simulation you put in the cloud, or how long you keep data in the cloud, things can get really expensive really quickly. So one of the things we did, and this is actually why we spun Conductor off as its own company, and why we raised our Series A round of funding back in December to build the team out, because a lot of this stuff is really complicated, is a big post-funding effort on analytics: being able to use data to help people run production better. So you know, in the very beginning, we have cost limits, where you can say, "On this shot, I don't want to spend more than a thousand dollars," or, "I never want this artist to be able to spend more than fifteen hundred bucks a day." But in the future, I think there are some cloud buzz-wordy things that actually come into real play here, where we can use machine learning to detect when things are taking too long and alert people. We can tell people how much a render is going to cost before they even submit it, maybe. We can use computer vision to check for bad things happening in the middle of a render before a human ever has a chance to lay eyes on it. So there are all kinds of things we can do with data to help mitigate some of the downsides of cloud, and hopefully only leave people with great insights to help them run production better. >> That's fantastic. One of the things that really interests me is the machine learning and the artificial intelligence: being able to look at, whether it's a broadcast outlet or a film studio, and evaluate the value and additional revenue streams that can come, but also, in your case, maybe even leveraging AI and machine learning to make certain processes faster, thereby lowering costs. >> Yeah, we can actually make proactive suggestions based on, like, you know, thousands or millions of data points and say, "Hey, if you tweak this value on your shading rate here, you're going to end up with a great visual and not spend any more time, or actually spend less." So things like that, and then also working together with production management systems. The guys at Autodesk have a product called Shotgun that deals with schedules and artist assignments, and they have all the schedule information; we have all the infrastructure information. If we correlate those two data sets together, we'll be able to proactively tell somebody when we think a shot is running behind schedule, or when a shot needs more optimization. And I mean, there are all kinds of things we can do, purely using data and a trained machine learning model, to help people run their entire business better, not just an individual shot. >> Right. Well, six years ago, when you had this hunch, you said there were some skeptics around. One, you must feel pretty validated by now, but are you kind of one of the go-to guys, the go-to companies, for this is how to do it properly, these are all of the advantages, economic advantages, etc., that we can provide? >> Yeah, I think there were definitely people that told me I was absolutely crazy when I first got started. Some of them are actually using Conductor now, so that's kind of good. >> That must feel good, right? >> Yeah, it's a good validation point, and they had a lot of reasons for thinking that we were insane, 'cause we kind of were. But we just sort of believed deep down that it was going to work. So yeah, I mean, now I think we're in a great position to help people. And for me, and you know, this is always a thing that I sometimes get a hard time for, but I'm so passionate about this industry moving into the cloud that I'm just as happy to talk to somebody about how to do it maybe on their own, if they're trying to do it on a small scale, or about what our competitors might be doing. Really, through that, we've found a space where we don't really have any competitors yet and we're breaking new ground, really servicing the sort of medium and enterprise-scale customers and the kind of flexibility, scale, and security that they need. So it's sort of interesting: in a way, this sort of selfless excitement about cloud has helped us find a market that we can really and truly add insane value to. >> Wow, that is fascinating. Well, your passion for it is evident. Thank you so much, Kevin, for joining us on the CUBE. >> Yeah, thank you so much. >> Have a great time at the rest of the show, and we'll see you on the CUBE sometime soon. >> I always do, thank you again. >> Excellent, we want to thank you for watching. Again, we are live at NAB Las Vegas. Stick around. We will be right back.
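The cost limits Kevin mentions, such as never spending more than a thousand dollars on a shot, come down to a pre-submit estimate compared against a cap. A minimal sketch of that idea, with the per-frame render time and the node-minute rate assumed rather than taken from Conductor:

```python
# Hypothetical pre-submit cost guard in the spirit of per-shot limits.
def estimate_cost(frames: int, minutes_per_frame: float,
                  rate_per_node_minute: float = 0.003) -> float:
    """Rough render-cost estimate; inputs would come from past telemetry."""
    return frames * minutes_per_frame * rate_per_node_minute

def submit_render(frames: int, minutes_per_frame: float,
                  shot_limit: float = 1000.0) -> None:
    cost = estimate_cost(frames, minutes_per_frame)
    if cost > shot_limit:
        raise RuntimeError(
            f"estimated ${cost:,.2f} exceeds the ${shot_limit:,.0f} shot limit")
    print(f"submitting render, estimated ${cost:,.2f}")

submit_render(frames=240, minutes_per_frame=90)  # $64.80 at the assumed rate
```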

Published Date : Apr 24 2017
