Michael McCarthy and Jurgen Grech, Gamesys | AnsibleFest 2020
>> Announcer: From around the globe, it's The Cube. With digital coverage of Ansible Fest 2020 brought to you by Red Hat. >> Hello, welcome back to The Cube's coverage of Ansible Fest 2020. This is The Cube. Cube Virtual. I'm your host, John Furrier with The Cube and Silicon Angle. Two great guests here. Two engineers and architects. Michael McCarthy who is a architect at Delivery Engineering, who's giving a talk with Gamesys and Jurgen Grech who's a technical architect for the platform engineering team at Gamesys. Gentlemen, welcome to The Cube, thanks for coming on. >> Hello. >> Nice to see you. >> Coming in from London, coming in from Malta, you guys are doing a lot of engineering. You're a customer of Ansible, want to get into some of the cool things you're doing obviously Kubernetes automation, platform engineering, this is what everyone's working on right now that's going to be positioned for the future. Before we get started though, tell me a little bit about what Gamesys does and you guys' role. Michael, we'll start with you. >> Sure, so we're a gaming operator, we run multiple bingo-led and casino-led gaming websites, some of them are B2B, some are B2C. I think we've been doing it now for probably 14 or 15 years at least. I've been there for 12 and a half of those. So we essentially run gaming websites where people come and play their favorite games. >> And what's your role there? What do you do? >> So I'm in the operation side of things, I used to be a developer for 12 or so years. We make sure that everything's kind of up and running, we keep the systems running. My team in particular focuses on the speed of delivery for developers so we're constantly looking at, how long has it taken to get things in front of the customers, can we make it faster, can we make it easier, can we put cool stuff out there quicker? So it's a kind of platformy type role that I do, and I enjoy it a lot, so it's good. >> Jurgen you're platform engineering that sounds deep. >> Yes. >> Which is your role? (laughing) >> Well, I've been with Gamesys also for eight and a half years now. I hold the position of technical architect at the moment within this platform engineering group which is mostly tasked with all things ops related. I am responsible for designing, implementing and validating strategies for continuous deployment, whilst always ensuring high availability on both production and pre-production systems. I'm also responsible for the design and implementation of automated dynamic environment to support the needs of the development teams and also collaborating with other architects, especially those on the development floors in order to optimize the deployment and operational strategies for both existing and new types of services alike. >> Awesome, thanks for sharing that. Good, good context. Well, I mean, you don't have to be a rocket scientist to figure out that when you talk about gaming it's uptime and a high availability is critical. You know, having people, being the login you got to have the right data strategies, it can't be down, right. (laughs) It's a critical app. People are not going to enjoy it if they're not at, so I can see how scale's huge. Can you guys talk about how Ansible fits in because automation's been the theme here, you guys have been having a journey with automation. What's been your automation solution with Ansible? >> I'll go Michael. >> Yeah sure. 
>> So, basically back in July 2014, we started to look at Ansible to replace those commonly used, day-to-day Bash scripts, which our ops team used to execute and which could lead to some human error. That was our main original goal of using Ansible at the time. At the time, our infrastructure looked considerably different. Definitely much, much smaller than the current private cloud footprint. And as I said, as early adopters within the operations team it was imperative for us to automate as much as possible those repetitive tasks, which involved the execution of various scripts and were prone to human error. Since then however, our Ansible usage evolved quickly. Since 2014, we went through two major infrastructure overhauls and automation using Ansible was always at the heart of each of those overhauls. In fact, our latest private cloud, which is based on OpenStack, is completely built from the ground up using Ansible code. So this includes the provisioning of virtual machines, our entire networking stack, so switches, routers, firewalls, the SDN which OpenStack is built up on, our internal DNS system. Basically all you need to have a fully functional private cloud. At Gamesys we also have some workloads running in two different public clouds. And even in this case, we are running Ansible code to set up all the required infrastructure components. Again, since we were fairly new adopters of this technology at the time, we wrote all of that Ansible code using the original modules available back then. Now this has evolved considerably and, with the enhancements of dedicated modules for each public cloud, we've made the code much cleaner, more readable and improved. >> You made some great progress. Michael, you want to weigh in on this? Any thoughts on? >> Yeah, I think it's kind of, I mean, adding to what Jurgen said, I think it's kind of everywhere. So, you know, you mentioned high availability, you mentioned kind of uptime. You know, imagine the people that operate the infra, the people who get called out and they're working 24/7. A lot of the things that they would do, the kind of run books they would use to, you know, restart something, they're Ansible as well. So it's the deployment scripts, it's the kind of scripts that keep things running, it's the stuff that spins up the environments, as Jurgen said. I've noticed a lot on the development side where, you know, we look at continuous delivery, people are running their own build servers. A lot of the scripting that people do, which you'd imagine might be done with, say, Bash, I think I've seen a lot of Ansible being used there amongst developers. Yeah, it's got an easy learning curve. It's all of those modules. A lot of the scripting around CD I think is Ansible. It plays quite nicely, you know, the URI module and file modules, and yeah, I think it's kind of everywhere. It's quite pervasive. >> Once again, as I said, easy to get something going. Good, it's awesome. >> Yeah. Automation, you've had great success. So it's been a big theme of Ansible Fest 2020, automation, collections, et cetera. But the question I have for you guys as customers is, how large of an IT estate were you looking to automate and where were the most imperative places to automate first? >> The most imperative items we wanted to automate first, as I said, were those operational day-to-day tasks handled by our network operations team. Our estate is massive. 
So we are running our infrastructure across five different data centers around the world, thousands of virtual machines, hundreds of network components. So we, we deal with customers all around the world. So our point of presence is spread out around the world as well. And you can't really handle such kind of size without some sort of automation. And Ansible fit the bill perfectly, in my opinion. >> And so your goal is to automate the entire landscape. Are you there now? Where are you on that progress? >> I would say we're at a very advanced stage in that process. Since 2014 we've made huge strides. All of our most recent private cloud setups as I said, have been built from the ground up using Ansible. And I would say a good 90% plus of our operational tasks are handled using some kind of Ansible playbook. >> Yeah, that makes total sense. Michael you brought up the, you start early in people's, it spreads. Those are my words, but you were saying that. What kind of systems do people tend to start with at Ansible? And what's, where's that first sticky moment where it lands and expands and which teams jump on it first? Is it the developers? Is it more the IT? Take us through some of the how this all gets started and how it spreads. >> I think in the, the first time I remember using it was probably I think 2014, 2015. And it was what Jurgen mentioned. I was on the Dev side and we wanted a way to have consistency in how we deployed. We wanted to be able to deploy the exact same way, you know into earlier environments, into Dev environments as we did in staging and production. And, you know, someone kind of found Ansible and then someone in operations kind of saw it and they were happy with it and they felt comfortable using the, kind of getting up to speed. And I think it was hard to know where it really started first, but you sort of looked around and every team, every team kind of had it. So, you know, who actually started I'm not sure, but it's all over the place. >> He did. (laughs) >> Yeah. I think, you know, where people start with it first it probably depends if you're on the ops or the dev side, I think on the dev side you know, we're encouraging people to own their own deployment playbooks you know, you're responsible for the deployment of your system to production. Obviously you've got the network operations the not group sort of doing it for you, but you know, your first exposure is probably going to be writing a playbook to deploy your app or maybe it's around some build tooling, spinning up your own build environment but that's something you'll be doing. I know with Ansible and it's especially around this point of stuff because everything's in git, there's that collaboration which I never saw, obviously I saw people chatting over kind of slack in teams but in terms of being able to sort of raise PR's having developers raise PR's, having operations comment on them the same the other way around, that's been a massive change which I think has come from using Ansible. >> The collaboration piece is huge. And I think it's one of those things early on out of all the Ansible friends that I know that use it and customers and in the company product was just good. It just word of mouth, spreads it around and be like, this is workable, saves a lot of time and it's a pain point remover. Also enables some things to happen with now automation, but now it's mature. Right? So Jurgen I got to ask you in the maturation of all this automation you're talking about scale, you mentioned it. 
OpenStack, you guys got the private clouds, people use it for public cloud, I now see Red Hat has an angle on that. But when you think about the current modern state of the art today, you can't go anywhere without talking about Kubernetes. >> Yup. >> Kubernetes has really emerged on the scene to manage these clusters but yet it's just getting started. You have a lot of experience with Ansible and Kubernetes. Can you share your journey with Kubernetes and Ansible, and what's your reaction to that? >> Yes, so back in June 2016 Gamesys was developing a new gaming platform which was to stand on Kubernetes. Kubernetes at the time was fairly new to many at an enterprise level, with only a handful of production systems online. So we were tasked to assess how we were going to bring Kubernetes into production. So first we identified the requirements to set up a production-grade cluster and, given our experience with Ansible, we embarked on a journey to automate the installation process. Again, using Ansible this would ensure that all the required installation and configuration parameters, as Michael mentioned, are committed, and the code is shared with all the respective development teams for ease of collaboration and feedback. And we decided to logically divide our code into two. We said we're going to have installation code in order to provide Kubernetes as a service. So this basically installs Docker onto every worker node, it installs kubelet, all the control plane components of Kubernetes, installs CoreDNS, the container storage interface, and a full-blown cluster monitoring stack. Then we also had our configuration code, which basically sets up namespaces, labels nodes for specific uses, sets certain security policies according to the cluster use case, and creates all the required role-based access configurations. This need to split the code in two came about really with the growing adoption of Kubernetes, because at the inception stage we only had the one team which had a requirement to use Kubernetes. However, with various teams getting on board, each required their own flavor with their particular unique configurations. This is of course managed quite easily through the use of different Ansible inventories. And it's all integrated now within Ansible Tower with different unique job templates to install and configure the Kubernetes clusters. We started, as I said, with just one pre-production or staging cluster in 2016. Today we manage 42 different Kubernetes clusters, including six which are in production. >> What problems-- >> So, as I mentioned earlier-- >> I got to ask you 'cause Kubernetes, certainly when it came out, I mean, I was a big fanboy of that. I was promoting Kubernetes from the beginning. I saw it as a really great opportunity to bring things together with containers. It turns out that developers love it for that reason. So getting your hands on it is great, but as you moved it into practice, what problems did it solve for you? >> So using Ansible definitely solved the problem of ensuring that all of our 42 clusters across all the different data centers are running the same configuration. So they're running the same version, they're running the same security policies, they're running the same namespaces according to their type. Each team has a similar deployment token. And it's very, very convenient to roll out changes and upgrades, especially when all of our code has been integrated with Ansible Tower, through a simple user interface click. 
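To make the split between installation code and configuration code concrete, here is a minimal sketch of what the configuration side can look like using the Ansible k8s module (today shipped in the kubernetes.core collection). The team name, namespace layout and service account below are assumptions for illustration, not Gamesys's actual code.

```yaml
# Illustrative only -- the team, labels and service account are assumptions.
# The pattern: declare per-team namespaces and RBAC, then drive it from
# different inventories or Tower job templates per cluster.
- name: Configure a cluster for a development team
  hosts: localhost
  gather_facts: false
  vars:
    team: bingo-platform          # hypothetical team name
  tasks:
    - name: Create the team namespace
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: "{{ team }}"
            labels:
              environment: staging

    - name: Grant the team's deployer service account edit rights in its namespace
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: rbac.authorization.k8s.io/v1
          kind: RoleBinding
          metadata:
            name: "{{ team }}-edit"
            namespace: "{{ team }}"
          roleRef:
            apiGroup: rbac.authorization.k8s.io
            kind: ClusterRole
            name: edit
          subjects:
            - kind: ServiceAccount
              name: "{{ team }}-deployer"
              namespace: "{{ team }}"
```

Driving plays like this from different Ansible inventories and Tower job templates is what allows each team's cluster to get its own flavor while the underlying code stays shared.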
>> How's Ansible Tower working for you? Is that going well? Ansible Tower? >> Eh, I would say so, yes. Most of our code now is integrated with Ansible Tower. It's allowed us to also share some of the tasks with a wider group of people. Within Peg we are the guardians of the production environments really. However, we share the responsibility of staging environments with the respective development teams, who primarily those environments. So as such, through the use of Ansible Tower we've managed to also securely and consistently share the same way how they can install and upgrade these clusters themselves without our involvement. >> Thank you. Michael you're giving, oh sorry go ahead. Go ahead Jurgen. >> Sorry is no no. >> Michael, you're giving a presentation breakout session at Ansible Fest. Can you give us a sneak peek >> Yup. >> Of what you're going to talk about? >> Yeah sure. So we, I said we've been using Tower for a long time. We've been using it since 2015 I think. Think we've probably made some mistakes along the way, I guess, or we've learned a lot of stuff from how we started then to now. So what it does is it follows this sort of timeline of how we started, why there was this big move to making an effort to put all of our deployment playbooks in Ansible. Why you would go to Tower over and above Ansible itself. It talks about our early interactions with quite an old version of Tower and now version two, things that we struggled with, then we saw version three came out there was loads and loads of really good stuff in version three. And it's really about kind of how we've used the new features, how it's worked out for us. It's kind of about what Gamesys have done with Tower but I think it's probably applicable to everyone and anyone that uses Tower I think will, they'll probably come across the same things, how do I scale it for multiple teams? How do I give teams the ownership to kind of own their own playbooks? How do I automate Tower itself? It talks about that. Sort of check pointing every few years about where we'd got to and what was going well and what was going less well. So, and a bit of a look forward to, what's going to come next with Tower. So we're constantly keeping up to date and we've got kind of roadmap for where we want to go. >> What's interesting about you guys is you think about look at OpenStack and then how Cloud came on the scene and Private Cloud has emerged with hybrid and obviously public, you guys are right on the wave of all this large scale stuff and your gaming app really kind of highlights that. And you've been through the paces with Ansible. So I guess my question, and you've got a lot of scar tissue and you got success to show for it too, a lot of great stuff. What advice would you give people who are now getting on the new wave, the bigger wave that's coming which is more users, more scale, more features more automation, microservices are coming around the corner. As long as I get more scale. What advice would you give someone who's coming on board with Ansible for the first time? >> I think there was, you were talking before about Kubernetes and it was so where we were, I think we'd got into containers kind of relatively early. And we were deploying Docker and we had some pretty big, kind of scary playbooks and they managed low balances and deployed Docker containers. And it was always interesting thinking how is this all going to change when Kubernetes comes along? And I think that's been really smooth. 
I think there's a really nice Ansible module that's just called k8s. And I think it's really simple actually, it simplified a lot of the playbooks. And I think that the technologies can coexist quite happily. I don't think you have to feel like Kubernetes is going to change all of the investment you've made into Ansible. Even if you go down the route of Kubernetes operators, you can write them in Ansible. So I still think it's a very relevant tool even with Kubernetes being so kind of prevalent. >> Jurgen, what's your thoughts on folks getting in now, who want to jump in and take advantage of the automation, all the cool stuff with Ansible? What advice would you give them? >> Yes, I would definitely recommend that they look at their infrastructure setups as they would look at their code. So break it down into small manageable components, start small, build your roles, and make sure to build your roles properly for each of those small components. And then definitely look at Ansible Tower as a way to visualize and control the execution of your code. Make sure you're running it with the proper security policies and the proper credentials, so that it does not, of course, break anything which is at the production level. >> Michael McCarthy, Jurgen Grech, two great engineers at Gamesys. Congratulations on your success, and I'd love to unpack the infrastructure and the scale you have, and certainly automation, a great success path. And it's going to get easier. I mean, that's what everyone's saying, it's going to get easier. Thanks for coming on. I appreciate the conversation. Thank you very much. >> Thank you, welcome. >> Thank you, take care. Bye bye. >> I'm John Furrier with The Cube here in Palo Alto, California. We're virtual, The Cube Virtual for Ansible Fest 2020 virtual. Thank you for watching. (upbeat music)
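Jurgen's closing advice, to treat infrastructure setups the way you treat code and break them into small, properly built roles, translates into a playbook structure along these lines. This is a minimal sketch; the role names are hypothetical and only show the shape, not Gamesys's actual repository.

```yaml
# A minimal sketch of composing small, single-purpose roles.
# Role names are hypothetical, purely to illustrate the structure.
- name: Build a platform worker node from reusable roles
  hosts: platform_workers
  become: true
  roles:
    - role: base_hardening        # OS baseline and security policy
    - role: container_runtime     # Docker / containerd installation
    - role: kubelet               # join the node to the cluster
    - role: monitoring_agent      # node-level metrics and alerting
```

Each role stays small enough to review and test on its own, and Tower job templates can then combine them per environment with the appropriate credentials, which is the control point recommended above.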
Christian Emery, CarMax | Red Hat AnsibleFest
>> Hello and welcome to the session featuring CarMax, driving efficiency and innovation with Ansible. I'm your host Christian Emery. I've been at CarMax for over 18 years in several roles ranging from operations to engineering. And in my current role, I'm responsible for CarMax's private cloud and continuous integration, continuous delivery pipelines. Now, my journey with automation started many years ago when I was a Unix and a Linux admin. Day after day, there was always that routine of manual tasks and processes like backups and routine maintenance. Each tasks had a lot of value to the business, but also required consistency, reliability and completion, and demanded quality for system stability. However, it was really boring to carry out the same thing every day. And personally I had a hunger to do more, bring greater value to the business, and need to realize greater satisfaction through my contributions in my career. And this is where automation came into my life. But before we jump into the presentation, I do want to share a little bit about CarMax. For those who may not know, CarMax has been a unique force in the used car industry since 1993. Through innovation and integrity, we've revolutionized the way people buy and sell used cars. We pride ourselves on the experience we provide our customers and our associates to make it possible. And by changing the way we assist our customers, we've also changed the journey of our associates, providing careers in exciting collaborative work environments. In today's presentation, I'm going to cover the early chapters of the CarMax Ansible story. Topics discussed will highlight business need, why we selected Ansible, rapid adoption and our results. And throughout the presentation, I'm also going to share a lot of thoughts and lessons learned to help you with your automation journey. And while listening to the story, I'd like to challenge you to think about your own business needs, technology challenges, and how your team organizes or organization improves approaches automation. Now in our first year, I was challenged to achieve 5,000 hours in efficiency using Ansible. That was a really intimidating number. But we met the challenge and exceeded it. And since then, we've continued to expand our automation through incremental improvements in everyday work to tackling larger operational challenges like regular changes to the environment, routine upgrades and improved infrastructure delivery. Additionally, we expanded automation adoption across multiple teams. We increased our user and contributor base by over five times. And some of that growth was through organic cross team collaboration. However, the greatest growth we had seen was through hackathons, innovation days where we're able to actively collaborate with other teams using Ansible to solve a business problem. And across all those users, we crossed over 15,000 hours of efficiency gain. And I use that term efficiency gained as a measurement to show not only just labor savings, but also tell the story behind other work we accomplished. And keep in mind, this is work that we wouldn't have been able to achieve without automation. And through that user base and hours of efficiency realized, we implemented over 150,000 successful changes. So how do we get there? Earlier I told you about my personal interest in automation and how I've carried that into my current role. And as a leader, I challenge my team to standardize processes and automate as much as possible. 
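The everyday work and routine upgrades described here are usually the easiest place to see what one of these playbooks looks like. The sketch below is a hedged illustration only; the host group, service names and backup path are assumptions, not CarMax's environment.

```yaml
# Illustrative only -- hostnames, service and paths are assumptions.
# The sort of routine maintenance task that otherwise lives in a manual runbook.
- name: Routine maintenance on Linux application servers
  hosts: linux_app_servers
  become: true
  serial: "25%"                   # roll through the fleet in batches
  tasks:
    - name: Apply security updates
      ansible.builtin.yum:
        name: "*"
        security: true
        state: latest

    - name: Ensure the monitoring agent is running
      ansible.builtin.service:
        name: node_exporter
        state: started
        enabled: true

    - name: Check for last night's backup completion marker
      ansible.builtin.stat:
        path: /var/backups/latest.ok
      register: backup_marker

    - name: Fail the run if the backup marker is missing
      ansible.builtin.assert:
        that: backup_marker.stat.exists
        fail_msg: "Backup marker not found, investigate before continuing"
```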
We started initially with really repetitive tasks, much like a game of whack-a-mole, but more importantly, through our experimentation, we quickly found we could get better and more consistent results. We soon applied the same approach to our automation for even greater success. But before Ansible, we started to run into issues where team members were taking a more siloed approach to the work. And in an early retrospective, we came to realize that there is a need for a bigger picture mindset. And from that point on, we agreed to standards to increase quality in our code. However, we still occasionally ran into quality issues. Some of these challenges were from homegrown technology, lack of integration and general infrastructure. Now, this is all compounded by the fact that we were using different scripting and programming languages, and not everyone on the team was familiar with Python when compared to say Bash or PowerShell. And while our homegrown solutions made a difference, we thought there could be better ways to meet that demand from the business to do more, better and faster. But like most things in technology, there's always a different tool and approach to get something done. However, some of these other tools required agents on servers making a deployment, a major effort on its own. And additionally, the learning curve was steeper for systems admins and engineers that don't have as much development experience. But this is where Ansible came into the picture. It was easy to use with human readable code. It was an agentless solution allowing us to get started without as much ramp up time. We also liked the fact that it was built on an open standard and a growing user community with an increasing engagement base from partner in vendor integrators. Even better, it had an API we could use to integrate our other platforms as needed. Most recently with the introduction of Ansible collections, we can use community content with greater focus on our automation while worrying less about building new tools. Now, once we select an Ansible as our automation platform, we took a three part approach to implementation and building a foundation for its use. And as I discuss each of these areas, I just like you to consider how to best prepare your teams or organizations for using Ansible. And while planning the transformation, be sure to identify any sort of constraints, roadblocks, and how you plan to measure those results. People, arguably people are the most important part of the equation. You can have all the processes and ways to measure return, but at the end of the day, you need your teams to make that work happen. Start by asking yourself, how well does the team handle change? Are there resource challenges with aligning people and work? Do the people have the right level of knowledge? Do they need training? And how do you start with one team to quickly begin or expand automation? Processes, documentation, standards. Processes are those great ingredients for success in any technology organization. How well are your existing processes documented? Are there any sort of defined standards methods to approaching work? What about your environments? How well does your organization handle executing processes or changes? And lastly, technology. We always need to show results for our investments and technology can help us show that math. Does your organization use metrics and measurements to track progress and results? How do you define or measure success for a project? 
How should return on investment be measured or quantified? Like I mentioned before, I can't stress it enough, your people, your teams are the most important part of implementing Ansible. They'll be responsible for implementing and developing, maintaining the platform as well as following standards to execute that transformation. And to be successful, they need to have tools, environments, and knowledge. But one of the great things about Ansible is its comparatively easy learning curve. Ansible playbooks are written in a human readable markup language. And I found that most systems admins and engineers are able to pick up Ansible relatively quickly. And for our adoption, some folks were able to pick it up and begin development, while others were a little bit more comfortable and confident with just a little bit of training. Now, Ansible also democratizes technology, freeing up admins and engineers from traditional OS defined silos. Additionally, Ansible playbooks can be consumed by teams without explicit knowledge of the systems or the underlying technology. That's only if a playbook is well written and returns consistent results each time. For us, we first used Ansible to improve our delivery and reduce repeatable manual tasks. Then we turned our attention to shifting left self-service and we're now focused on enabling developers by getting out of the way. These improvements afforded our teams more time to deliver new capabilities to the business. But another benefit to that is teams were able to devote more time to learning and experimenting. When teams first started automating, there's always that impulse or need to go after that biggest win. I would always caution folks to start simple, find small wins to build that experience. These incremental gains are going to feel small, but they quickly add up over time. And as you're going to see, the work should always be done in those smaller increments to return value faster while allowing the ability to quickly make corrections or change course all together. Now, another huge benefit of using that smaller code increment is reuse. These smaller building blocks can and will be used time and time again, reducing future development efforts. And as we quickly learned, one of the best places to start with automation are documented processes. Each step in a process is already documented, it's a huge opportunity to convert it to code and step through those manual processes. And at CarMax, one of the first places we started out was our server checklist process. The process was really thorough, had over a hundred steps to validate systems, make sure they have the right configuration security and specs for each build. And while that process really gave us good consistent results, it was time consuming. It was also prone to human error. But once we automated each of those steps in validations, we were able to turn our focus to the next bottleneck in the process to speed up delivery. And this is why it's always important to strive for quality through consistent predictable results. Automation is just another tool to help make that vision a reality. And when working with teams, it's also important to understand development best practices, keep it simple, and always use version control with code. Better yet, if you're from an ops background, I'd say partner with your development teams to help with this part of the journey. 
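A build-validation checklist like the one described converts almost line for line into playbook assertions. The sketch below is illustrative only; the specific checks and threshold values are assumptions, not CarMax's actual checklist.

```yaml
# Illustrative only -- the checks and expected values are assumptions.
# Each manual checklist item becomes an assertion that fails the build if unmet.
- name: Validate a newly built server against the build standard
  hosts: new_servers
  become: true
  tasks:
    - name: Check baseline sizing and OS version from gathered facts
      ansible.builtin.assert:
        that:
          - ansible_memtotal_mb >= 8192
          - ansible_processor_vcpus >= 4
          - ansible_distribution_major_version == "8"
        fail_msg: "Server does not meet the baseline build specification"

    - name: Collect service state
      ansible.builtin.service_facts:

    - name: Assert required services are running
      ansible.builtin.assert:
        that:
          - ansible_facts.services['chronyd.service'].state == 'running'
          - ansible_facts.services['firewalld.service'].state == 'running'
        fail_msg: "A required service is not running on this build"
```

Once every step is an assertion, the checklist runs the same way on every server, and the next bottleneck in the delivery process becomes visible.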
And lastly, when it comes to integrations between platforms and systems, use a modular design, be flexible because technology changes, and over time, so are your integrations. And when it comes to Ansible or just automation in general, there's always that need for efficiency, consistency, reliability, and flexible integrations. And to make this become a reality, you really need to take both a low tech and a high tech approach. If you recall earlier, I mentioned starting with documented processes. That low tech road involves using process mapping value stream analysis tools where you lay out processes end to end to determine the amount of time it takes to execute a process. These processes can be mapped out using whiteboard, sticky notes or by software tools. And from there, more importantly, you can visualize the process bottlenecks and the areas of improvement should be pretty visible. So for CarMax, what we did was we mapped out our infrastructure delivery. We found it was a huge opportunity. But it was also an area we were more comfortable automating given our deep knowledge of the process. So years ago, when we started the process, our time to deliver virtual environments was about two days. Fast forward to now, we can consistently deliver the same infrastructure in just minutes. And in turn, we reuse portions of that process and code for OS refreshes, virtual machine rehydration, system recovery and hypervisor upgrades, just to name a few. And by freeing up team members to do more knowledge work and spend less time on operations, we're able to pivot more resources on the team to align with the business on strategic initiatives. Team members also had more time to do training, research and development for new capabilities, and other areas for future innovation. Now, Ansible gave us a tool where we need to think more like a DevOps organization. And admittedly, a lot of what I've talked about so far has been very operation centric, but systems engineers were all of a sudden writing a testing code, building tools, delivering infrastructure via code, pipelines and API integrations. And as a result, we instantly had to build and strengthen the collaborative relationship between traditional development and operations teams, we had to break down those silos. But the developers appreciate it because they can focus on developing code and not necessarily worry about environments being ready in time or configured correctly. Conversely, operations teams can be focused more on improvements, new capabilities, and spending less time on firefighting. But regardless of the outcomes, you need data to tell that story. And these data elements can start with the hard numbers from reduced cycle times when we were mapping out processes, you can use delivery and SLA metrics. Those were some easy go to numbers. But also consider how you tell that efficiency story. And remember, ROI isn't always about money or the time savings. So as an example, metrics we used included the number of teams using the platform, active contributors, workflows, processes run, and efficiency gain calculations. And as we evolve our journey, the metrics may change along with that story that we need to tell. So to recap, at CarMax, we put people first and you should too. Think about the resources and knowledge your teams are going to need to be successful. And like I said earlier, remember to start small, reuse code as much as possible. 
This is going to help teams realize faster return on their efforts and start that snowball effect where gains quickly compound over time. Have a vision and decide on targeted outcomes for your team or organization. Then build ROI metrics to help tell that story. But a big part of innovation is experimenting and learning from mistakes. So take a chance, try something new. And in closing, I'd like to thank you for your time. I sincerely hope our results and lessons learned will help you on your automation journey wherever it takes you.
Wikibon | Action Item, Feb 2018
>> Hi I'm Peter Burris, welcome to Action Item. (electronic music) There's an enormous net new array of software technologies that are available to businesses and enterprises to tend to some new classes of problems and that means that there's an explosion in the number of problems that people perceive as could be applied, or could be solved, with software approaches. The whole world of how we're going to automate things differently in artificial intelligence and any number of other software technologies, are all being brought to bear on problems in ways that we never envisioned or never thought possible. That leads ultimately to a comparable explosion in the number of approaches to how we're going to solve some of these problems. That means new tooling, new models, new any number of other structures, conventions, and artifacts that are going to have to be factored by IT organizations and professionals in the technology industry as they conceive and put forward plans and approaches to solving some of these problems. Now, George that leads to a question. Are we going to see an ongoing ever-expanding array of approaches or are we going to see some new kind of steady-state that kind of starts to simplify what happens, or how enterprises conceive of the role of software and solving problems. >> Well, we've had... probably four decades of packaged applications being installed and defining really the systems of record, which first handled the ordered cash process and then layered around that. Once we had more CRM capabilities we had the sort of the opportunity to lead capability added in there. But systems of record fundamentally are backward looking, they're tracking about the performance of the business. The opportunity-- >> Peter: Recording what has happened? >> Yes, recording what has happened. The opportunity we have is now to combine what the big Internet companies pioneered, with systems of engagement. Where you had machine learning anticipating and influencing interactions. You can now combine those sorts of analytics with systems of record to inform and automate decisions in the form of transactions. And the question is now, how are we going to do this? Is there some way to simplify or, not completely standardized, but can we make it so that we have at least some conventions and design patterns for how to do that? >> And David, we've been working on this problem for quite some time but the notion of convergence has been extent in the hardware and the services, or in the systems business for quite some time. Take us through what convergence means and how it is going to set up new ways of thinking about software. >> So there's a hardware convergence and it's useful to define a few terms. There's converged systems, those are systems which have some management software that have been brought into it and then on top of that they have traditional SANs and networks. There's hyper-converged systems, which started off in the cloud systems and now have come to enterprise as well. And those bring software networking, software storage, software-- >> Software defined, so it's a virtualizing of those converged systems. >> David: Absolutely, and in the future is going to bring also automated operational stuff as well, AI in the operational side. And then there's full stack conversions. Where we start to put in the software, the application software, to begin with the database side of things and then the application itself on top of the database. 
And finally these, what you are talking about, the systems of intelligence. Where we can combine both the systems of record, the systems of engagement, and the real-time analytics as a complete stack. >> Peter: Let's talk about this for a second because ultimately what I think you're saying is, that we've got hardware convergence in the form of converged infrastructure, hyper-converged in the forms of virtualization of that, new ways of thinking about how the stack comes together, and new ways of thinking about application components. But what seems to be the common thread, through all of this, is data. >> David: Yes. >> So it's basically what we're seeing is a convergence or a rethinking of how software elements revolve around the data, is that kind of the centerpiece of this? >> David: That's the centerpiece of it and we had very serious constraints about accessing data. Those will improve with flash but there's still a lot of room for improvement. And the architecture that we are saying is going to come forward, which really helps this a lot, is the unit grid architecture. Where we offload the networking and the storage from the processor. This is already happening in the hyper scale clouds, they're putting a lot of effort into doing this. But we're at the same time allowing any processor to access any data in a much more fluid way and we can grow that to thousands of processes. Now that type of architecture gives us the ability to converge the traditional systems of record, and there are a lot of them obviously, and the systems of engagement and the the real-time analytics for the first time. >> But the focal point of that convergence is not the licensing of the software, the focal point is convergence around the data. >> The data. >> But that has some pretty significant implications when we think about how software has always been sold, how organizations to run software have been structured, the way that funding is set up within businesses. So George, what does it mean to talk about converging software around data from a practical standpoint over the next few years? >> Okay, so let me take that and interpret that as converging the software around data in the context of adding intelligence to our existing application portfolio and then the new applications that follow on. And basically, when we want to inject an intelligence enough to inform and anticipate and inform interactions or inform or automate transactions, we have a bunch of steps that need to get done. Where we're ingesting essentially contextual or ambient information. Often this is information about a user or the business process. And this data, it's got to go through a pipeline where there's both a Design Time and a Run Time. In addition to ingesting it, you have to sort of enrich it and make it ready for analysis. Then the analysis has essentially picking out of all that data and calculating the features that you plug into a machine learning model. And then that, produces essentially an inference based on all that data, that says well this is the probable value and it sounds like, sounds like it's in the weeds but the point is it's actually a standardized set of steps. Then the question is, do you put that all together in one product across that whole pipeline? Can one piece of infrastructure software manage that ? Or do you have a bunch of pieces each handing off to the next? And-- >> Peter: But let me stop you so because I want to make sure that we kind of follow this thread. 
So we've argued that hardware convergence and the ability to scale the role the data plays or how data is used, is happening and that opens up new opportunities to think about data. Now what we've got is we are centering a lot of the software convergence around the use of data through copies and other types of mechanisms for handling snapshots and whatnot and things like uni grid. What you're, let's start with this. It sounds like what you're saying is we need to think of new classes of investments in technologies that are specifically set up to handling the processing of data in a more distributed application way, right? If I got that right, that's kind of what we mean by pipelines? >> George: Yes. >> Okay, so once we do that, once we establish those conventions, once we establish organizationally institutionally how that's going to work. Now we take the next step of saying, are we going to default to a single set of products or are we going to do best to breed and what kind of convergence are we going to see there? >> And there's no-- >> First of all, have I got that right? >> Yes, but there's no right answer. And I think there's a bunch of variables that we have to play with that depend on who the customer is. For instance, the very largest and most sophisticated tech companies are more comfortable taking multiple pieces each that's very specialized and putting them together in a pipeline. >> Facebook, Yahoo, Google-- >> George: LinkedIn. >> Got it. >> George: Those guys. And the knobs that they're playing with, that everyone's playing with, are three, basically on the software side. There's your latency budget, which is how much time do you have to produce an answer. So that drives the transaction or the interaction. And it's not, that itself is not just a single answer because... It's not, the goal isn't to get it as short as possible. The goal is to get as much information into the analysis within the budgeted latency. >> Peter: So it's packing the latency budget with data? >> George: Yes, because the more data that goes into making the inference, the better the inference. >> Got it. >> The example that someone used actually on Fareed Zakaria GPS, one show about it was, if he had 300 attributes describing a person he could know more about that person then that person did (laughs) in terms of inferring other attributes. So the the point is, once you've got your latency budget, the other two knobs that you can play with are development complexity and admin complexity. And the idea is on development complexity, there's a bunch of abstractions that you have to deal with. If it's all one product you're going to have one data model, one address and namespace convention, one programming model, one way of persisting data, a whole bunch of things. That's simplicity. And that makes it more accessible to mainstream organizations. Similarly there's a bunch of, let me just add that, there's probably two or three times as many constructs that admins would have to deal with. So again, if you're dealing with one product, it's a huge burden off the admin and we know they struggled with Hadoop. >> So convergence, decisions about how to enact convergence is going to be partly or strongly influenced by those three issues. Latency budget, development complexity or simplicity, and administrative, David-- >> I'd like to add one more to that, and that is location of data. Because you want to be able to, you want to be able to look at the data that is most relevant to solving that particular problem. 
Now, today a lot of the data is inside the enterprise. There's a lot of data outside that as well, and you will still want to, in the best possible way, combine that data one way or another. >> But isn't that a variable on the latency budget? >> David: Well there's, I would think it's very useful to split the latency budget, which is to do with inference mainly, and development with the machine learning. So there is a development cycle with machine learning that is much longer. That is days, could be weeks, could be months. >> And that would still be done in batch. >> It is or will be done, wait a second. It will be done in batch, it is done in batch. You need to test it and then deliver it as an inference engine to the applications that you're talking about. Now that's going to be very close together, that inference, then the rest of it has to be all physically very close together. But the data itself is spread out and you want to have mechanisms that can combine that data, move applications to that data, bring those together in the best possible way. That is still a batch process. That can run where the data is, in the cloud or locally, wherever it is. >> George: And I think you brought up a great point, which I would tend to include in latency budget because... no matter what kind of answers you're looking for, some of the attributes are going to be precomputed and those could be-- >> David: Absolutely. >> External data. >> David: Yes. >> And you're not going to calculate everything in real time, there's just-- >> You can't. >> Yes you can't. >> But is the practical reality that the convergence, so again, the argument: we've got all these new problems, all kinds of new people that are claiming that they know how to solve the problems, each of them choosing different classes of tools to solve the problem, an explosion across the board in the approaches, which can lead to enormous downstream integration and complexity costs. You've used the example of Cloudera, for example. Some of the distro companies who claim that 50 plus percent of their development budget is dedicated to just integrating these pieces. That's a non-starter for a lot of enterprises. Are we fundamentally saying that the degree of complexity, or the degree of simplicity and convergence that's possible in software, is tied to the degree of convergence in the data? >> You're honing in on something really important, give me-- >> Peter: Thank you! (laughs) >> George: Give an example of the convergence of data that you're talking about. >> Peter: I'll let David do it because I think he's going to jump on it. >> David: Yes, so let me take an example. If you have a small business, there's no way that you want to invest yourself in any of the normal levels of machine learning and applications like that. You want to outsource that. So big software companies are going to do that for you and they're going to do it especially for the specific business processes which are unique to them, which give them digital differentiation of some sort or another. So for all of those types of things, software will come in from vendors, from SAP or son of SAP, which will help you solve those problems. And there will be data brokers which are collecting the data, putting it together, helping you with that. That seems to me the way things are going. In the same way that there's a lot of inference engines which will be out at the IoT level. Those will have very rapid analytics given to them. 
Again, not by yourself but by companies that specialize in facial recognition or specialize in making warehouse-- >> Wait a minute, are you saying that my customers aren't special, that require special facial recognition? (laughs) So I agree with David but I want to come back to this notion because-- >> David: The point I was getting at is, there's going to be lots and lots of room for software to be developed, to help in specific cases. >> Peter: And large markets to sell that software into. >> Very large markets. >> Whether it's a software, but increasingly also with services. But I want to come back to this notion of convergence because we talked about hardware convergence and we're starting to talk about the practical limits on software convergence. But somewhere in between I would argue, and I think you guys would agree, that really the catalyst for, or the thing that's going to determine the rate of change and the degree of convergence is going to be how we deal with data. Now you've done a lot of research on this, I'm going to put something out there and you tell me if I'm wrong. But at the end of the day, when we start thinking about uni grid, when we start thinking about some of these new technologies, and the ability to have single copies or single sources of data, multiple copies, in many respects what we're talking about is the virtualization of data without loss. >> David: Yes. >> Not loss of the characters, the fidelity of the data, or the state of the data. I got that right? >> Knowing the state of the data. >> Peter: Or knowing state of the data. >> If you take a snapshot, that's a point in time, you know what that point of time is, and you can do a lot of analytics for example on, and you want to do them on a certain time of day or whatever-- >> Peter: So is it wrong to say that we're seeing, we've moved through the virtualization of hardware and we're now in a hyper scale or hyper-converged, which is very powerful stuff. We're seeing this explosion in the amount of software that's being you know, the way we approach problems and whatnot. But that a forcing function, something that's going to both constrain how converged that can be, but also force or catalyze some convergence, is the idea that we're moving into an era where we can start to think about virtualized data through some of these distributed file systems-- >> David: That's right, and the metadata that goes with it. The most important thing about the data is, and it's increasing much more rapidly than data itself, is the metadata around it. But I want to just, make one point on this, all data isn't useful. There's a huge amount of data that we capture that we're just going to have to throw away. The idea that we can look at every piece of data for every decision is patently false. There's a lovely example of this in... fluid mechanics. >> Peter: Fluid dynamics. >> David: Fluid dynamics, if you're trying to, if you're trying to have simulation at a very very low level, the amount of-- >> Peter: High fidelity. >> High fidelity, you run out of capacity very very very quickly indeed. So you have to make trade-offs about everything and all of that data that you're doing in that simulation, you're not going to keep that. All the data from IOT, you can't keep that. >> Peter: And that's not just a statement about the performance or the power or the capabilities of the hardware, there's some physical realities-- >> David: Absolutely, yes. >> That are going to limit what you can do with the simulation. 
But, and we've talked. We've talked about this in other action items, There is this notion of options on data value, where the value of today's data is maybe-- >> David: Is much higher. >> Peter: Well it's higher from at a time standpoint for the problems that we understand and are trying to solve now but there may be future problems where we still want to ensure that we have some degree of data where we can be better at attending those future problems. But I want to come back to this point because in all honesty, I haven't heard anybody else talking about this and maybe's because I'm not listening. But this notion of again, your research that the notion of virtualized data inside these new architectures being a catalyst for a simplification of a lot of the sharing subsystem. >> David: It's essentially sharing of data. So instead of having the traditional way of doing it within a data center, which is I have my systems of record, I make a copy, it gets delivered to the data warehouse, for example. That's the way that's being done. That is too slow, moving data is incredibly slow. So another way of doing it is to share that data, make a virtual copy of it, and technologies allowing you to do that because the access density has gone up by thousands of times-- >> Peter: Because? >> Because. (laughs) Because of flash, because of new technologies at that level, >> Peter: High performance interfaces, high performance networks. >> David: All of that stuff is now allowing things, which just couldn't be even conceived. However, there is still a constraint there. It may be a thousand times bigger but there is still an absolute constraint to the amount of data that you can actually process. >> And that constraint is provided by latency. >> Latency. >> Peter: Speed of light. >> Speed of light and speed of the processes themselves. >> George: Let me add something that may help explain the sort of the virtualization of data and how it ties into the convergence or non convergence of the software around it. Which is, when we're building these analytic pipelines, essentially we've disassembled what used to be a DBMS. And so out of that we've got a storage engine, we've got query optimizers, we've got data manipulation languages which have grown into full-blown analytic languages, data definition language. Now the system catalog used to be just, a way to virtualize all the tables in the database and tell you where all the stuff was, and the indexes and things like that. Now, what we're seeing is since data is now spread out over so many places and products, we're seeing an emergence of a new of catalog. Whether that's from Elation or Dremio or on AWS, it's the Glue catalog, and I think there's something equivalent coming on Asure. But the point is, we're beginning, those are beginning to get useful enough to be the entry point for analytic products and maybe eventually even for transactional products to update, or at least to analyze the data in these pipelines that we're putting together out of these components of what was a disassembled database. Now, we could be-- >> I would make a difference there there between the development of analytics and again, the real-time use of those analytics within systems of intelligence. >> George: Yeah but when you're using them-- >> David: There's a different, problems they have to solve. >> George: But there's a Design Time and a Run Time, there's actually four pipelines for the sort of analytic pipeline itself. 
There's Design Time and Run Time, and then for the inference engine and the modeling that goes behind it, there's also a Design Time and Run Time. But I guess where I'm not disagreeing is that you could have one converged product to manage the Run Time analytic pipeline. I'm just saying that the pieces that you assemble could come from one vendor. >> Yeah but I think David's point, I think it's accurate and this has been since the beginning of time. (laughs) Certainly predated UNIVAC. That at the end of the day, read/write ratios and the characteristics of the data are going to have an enormous impact on the choices that you make. And high write-to-read ratios almost dictate the degree of convergence, and we used to call that SMP, or you know, scale-up database managers. And for those types of applications, with those types of workloads, it's not necessarily obvious that that's going to change. Now we can still find ways to relax that but you're talking about, George, the new characteristics >> Injecting the analytics. >> Injecting the analytics where we're doing more reading as opposed to writing. We may still be writing into an application that has these characteristics-- >> That's a small amount of data. >> But a significant portion of the new function is associated with these new pipelines. >> Right. And it's actually... what data you create is generally derived data. So you're not stepping on something that's already there. >> All right, so let me get some action items here. David, I want to start with you. What's the action item? >> David: So for me, about convergence, there are two levels of convergence. First of all, converge as much as possible and give the work to the vendor, would be my action item. The more that you can go full stack, the more that you can get the software services from a single point, single throat to choke, single hand to shake, the more you can outsource your problems to them. >> Peter: And that has a speed implication, time to value. >> Time to value, and you don't have to do undifferentiated work. So that's the first level of convergence, and then the second level of convergence is to look hard at how you can bring additional value to your existing systems of record by putting in automation or real-time analytics. Which leads to automation; that is the second one, for me, where the money is. Automation, reduction in the number of things that people have to do. >> Peter: George, action item. >> So my action item is that you have to evaluate, you the customer have to evaluate sort of your skills as much as your existing application portfolio. And if more of your greenfield apps can start in the cloud and you're not religious about open source but you're more religious about the admin burden and development burden and your latency budget, then start focusing on the services that the cloud vendors originally created that were standalone, but they are increasingly integrating because the customers are leading them there. And then for those customers who, you know, have decades and decades of infrastructure and applications on prem and need a pathway to the cloud, some of the vendors formerly known as Hadoop vendors, but for that matter any on-prem software vendor, is providing customers a way to run workloads in a hybrid environment or to migrate data across platforms. >> All right, so let me give this a final action item here. Thank you David Floyer, George Gilbert. Neil Raden and Jim Kobielus and the rest of the Wikibon team are with customers today.
We talked today about convergence at the software level. What we've observed over the course of the last few years is an expanding array of software technologies, specifically AI, big data, machine learning, etc., that are allowing enterprises to think differently about the types of problems that they can solve with technology. That's leading to an explosion in the number of problems that folks are looking at, in the number of individuals participating in making those decisions and thinking those issues through, and very importantly, an explosion in the number of vendors with piecemeal solutions about what they regard as their best approach to doing things. However, that is going to impose a significant burden that could have enormous implications for years, and so the question is, will we see a degree of convergence in the approach to doing software, in the form of pipelines and applications and whatnot, driven by a combination of: what the hardware is capable of doing, what the skills make possible, and very importantly, the natural attributes of the data. And we think that there will be. There will always be tension in the model if you try to invent new software, but one of the factors that's going to bring it all back to a degree of simplicity will be a combination of what the hardware can do, what people can do, and what the data can do. And so we believe, pretty strongly, that ultimately the issues surrounding data, whether it be latency or location, as well as the development complexity and administrative complexity, are going to be a range of factors that are going to dictate ultimately how some of these solutions start to converge and simplify within enterprises. As we look forward, our expectation is that we're going to see an enormous net new investment over the next few years in pipelines, because pipelines are a first-level set of investments on how we're going to handle data within the enterprise. And they'll look like, in certain respects, how a DBMS used to look, but just in a disaggregated way; conceptually and administratively, and then from a product selection and service selection standpoint, the expectation is that they themselves have to come together so the developers can have a consistent view of the data that's going to run inside the enterprise. Want to thank David Floyer, want to thank George Gilbert. Once again, this has been Wikibon Action Item and we look forward to seeing you on our next Action Item. (electronic music)
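As a rough illustration of the catalog layer George describes above, where Alation, Dremio, or the Glue catalog on AWS becomes the entry point into data spread across many stores, the sketch below simply walks the AWS Glue Data Catalog and prints each table's schema and storage location. It is a minimal example under stated assumptions: boto3 is installed, AWS credentials are configured, pagination is ignored, and the database name "clickstream" is hypothetical rather than anything mentioned in the conversation.

```python
# Minimal sketch: walking the AWS Glue Data Catalog, treated here as the
# "system catalog" for a disassembled database. Assumes boto3 and configured
# AWS credentials; the database name "clickstream" is a made-up example.
import boto3

glue = boto3.client("glue")

# Enumerate the databases registered in the catalog.
for db in glue.get_databases()["DatabaseList"]:
    print("database:", db["Name"])

# Inspect the tables (schema plus storage location) for one hypothetical database.
resp = glue.get_tables(DatabaseName="clickstream")
for table in resp["TableList"]:
    cols = [c["Name"] for c in table["StorageDescriptor"]["Columns"]]
    print(table["Name"], "->", table["StorageDescriptor"]["Location"], cols)
```

In a real pipeline, these same catalog entries are what an engine such as Spark or Presto would bind to at query time.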
Karsten Ronner, Swarm64 | Super Computing 2017
>> Announcer: On Denver, Colorado, it's theCUBE, covering SuperComputing '17, brought to you by Intel. >> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're in Denver, Colorado at this SuperComputing conference 2017. I think there's 12,000 people. Our first time being here is pretty amazing. A lot of academics, a lot of conversations about space and genomes and you know, heavy-lifting computing stuff. It's fun to be here, and we're really excited. Our next guest, Karsten Ronner. He's the CEO of Swarm64. So Karsten, great to see you. >> Yeah, thank you very much for this opportunity. >> Absolutely. So for people that aren't familiar with Swarm64, give us kind of the quick eye-level. >> Yeah. Well, in a nutshell, Swarm64 is accelerating relational databases, and we allow them to ingest data so much faster, 50 times faster than a relational database. And we can also then query that data 10, 20 times faster than relational database. And that is very important for many new applications in IoT and in netbanking and in finance, and so on. >> So you're in a good space. So beyond just general or better performance, faster, faster, faster, you know, we're seeing all these movements now in real-time analytics and real-time applications, which is only going to get crazier with IoT and Internet of Things. So how do you do this? Where do you do this? What are some of the examples you could share with us? >> Yeah, so all our solution is a combination of a software wrapper that attaches our solution to existing databases. And inside, there's an FPGA from Intel, the Arria 10. And we are combining both, such that they actually plug into standard interfaces of existing databases, like in PostgreSQL, Foreign Data Wrappers, the storage engine in MySQL, and MariaDB and so on. And with that mechanism, we ensure that the database, the application doesn't see us. For the application, there's just fast database but we're invisible and also the functionality of the database remains what it was. That's the net of what we're doing. >> So that's so important because we talked a little bit about offline, you said you had a banking customer that said they have every database that's ever been created. They've been buying them all along so they've got embedded systems, you can't just rip and replace. You have to work with existing infrastructure. At the same time, they want to go faster. >> Yeah, absolutely right. Absolutely right. And there's a huge code base, which has been verified, which has been debugged, and in banking, it's also about compliance so you can't just rip out your old code base and do something new, because again, you would have to go through compliance. Therefore, customers really, really, really want their existing databases faster. >> Right. Now the other interesting part, and we've talked to some of the other Intel execs, is kind of this combination hybrid of the Hardware Software Solution in the FPGA, and you're really opening up an ecosystem for people to build more software-based solutions that leverage that combination of the hardware software power. Where do you see that kind of evolving? How's that going to help your company? >> Yeah. We are a little bit unique in that we are hiding that FPGA from the user, and we're not exposing it. Many people, actually, many applications expose it to the user, but apart from that, we are benefiting a lot from what Intel is doing. 
Intel is providing the entire environment, including virtualization, all those things that help us then to be able to get into Cloud service providers or into proprietary virtualized environments and things like that. So it is really a very close cooperation with Intel that helps us and enables us to do what we're doing. >> Okay. And I'm curious because you spend a lot of time with customers, you said a lot of legacy customers. So as they see the challenges of this new real-time environment, what are some of their concerns, what are some of the things that they're excited that they can do now with real-time, versus batch and data lakes? And I think it's always funny, right? We used to make decisions based on stuff that happened in the past. And now we're really querying with the desire to take action on stuff that's happening now; it's a fundamentally different way to address a problem. >> Yeah, absolutely. And a very, very key element of our solution is that we can not only insert these very, very large amounts of data that also other solutions can do, massively parallel solutions, streaming solutions, you know them all. They can do that too. However, the difference is that we can make that data available within less than 10 microseconds. >> Jeff: 10 microseconds? >> So a dataset arrives and, within less than 10 microseconds, that dataset is part of the next query, and that is a game changer. That allows you to do control-loop processing of data in machine-to-machine environments, for autonomous applications and all those solutions where you just can't wait. If your car is driving down the street, you better know what has happened, right? And you can react to it. As an example, it could be a robot in a plant or things like that, where you really want to react immediately. >> I'm curious as to the kind of value unlocking that that provides to those old applications that were working with what they think is an old database. Now, you said, you know, you're accelerating it. To the application, it looks just the same as it looked before. How does that change the performance of those applications? I would imagine there's a whole other layer of value unlocking in those entrenched applications with this vast data. >> Yeah. That is actually true, and on a business level, the applications enable customers to do things they were not capable of doing before. Look for example in finance. If you can analyze the market data much quicker, if you can analyze past trades much quicker, then obviously you're generating value for the firm because you can react to market trends more accurately, you can mirror them in a tighter fashion, and if you can do that, then you can reduce the margin of error with which you're estimating what's happening, and all of that is money. It's really pure money in the bank account of the customer, so to speak. >> Right. And the other big trend we talked about, besides faster, is you know, sampling versus not sampling. In the old days, we sampled old data and made decisions. Now we don't want to sample, we want all of the data, we want to make decisions on all the data, so again that's opening up another level of application performance because it's all the data, not a sample. >> For sure. Because before, you were aggregating. When you aggregate, you reduce the amount of information available. Now, of course, when you have the full set of information available, your decision-making is just so much smarter. And that's what we're enabling.
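Karsten's point about plugging into PostgreSQL's Foreign Data Wrapper interface, so the application keeps querying an ordinary-looking table while the wrapper decides how the data is actually stored and scanned, can be sketched roughly as follows. The example uses the stock postgres_fdw extension purely to show the shape of the wiring; Swarm64's own wrapper and its options are not shown (the transcript does not name them), and the connection settings, server name, and table definition are invented for illustration.

```python
# Rough sketch of wiring a foreign data wrapper into PostgreSQL, the interface
# Karsten says Swarm64 plugs into. The stock postgres_fdw extension stands in
# for an accelerator's own wrapper; all names and credentials are made up.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=app password=secret host=localhost")
conn.autocommit = True

ddl = """
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER IF NOT EXISTS fast_store
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'accel-host', dbname 'appdb', port '5432');

CREATE USER MAPPING IF NOT EXISTS FOR app
    SERVER fast_store
    OPTIONS (user 'app', password 'secret');

-- The application keeps querying "trades" as an ordinary table; the wrapper
-- decides where and how the rows are actually stored and scanned.
CREATE FOREIGN TABLE IF NOT EXISTS trades (
    trade_id   bigint,
    symbol     text,
    price      numeric,
    traded_at  timestamptz
) SERVER fast_store;
"""

with conn.cursor() as cur:
    cur.execute(ddl)
    cur.execute("SELECT symbol, avg(price) FROM trades GROUP BY symbol LIMIT 5;")
    for row in cur.fetchall():
        print(row)
```

The value of the interface is exactly what Karsten describes: the application's SQL does not change when a faster engine sits behind the foreign table.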
>> And it's funny because in finance, you mentioned a couple of times, they've been doing that forever, right. The value of a few units of time, however small, is tremendous, but now we're seeing it in other industries as well that realize the value of real-time, aggregated, streaming data versus a sampling of old data. Really opens up new types of opportunities. >> Absolutely, yes, yes. Yeah, finance, as I mentioned, is an example, but then also IoT, machine-to-machine communication, everything which is real-time: logging, data logging, security and network monitoring. If you want to really understand what's flowing through your network, is there anything malicious, is there any actor on my network that should not be there? And you want to react so quickly that you can prevent that bad actor from doing anything to your data, this is where we come in. >> Right. And security's so big, right? It's everywhere. Especially with IoT and machine learning. >> Absolutely. >> All right, Karsten, I'm going to put you on the spot. So it's November 2017, hard to believe. As you look forward to 2018, what are some of your priorities? If we're standing here next year, at SuperComputing 2018, what are we going to be talking about? >> Okay, what we're going to talk about really is that right now we're accelerating single-server solutions, and we are working very, very hard on massively parallel systems, while retaining the real-time components. So we will not only accelerate a single server; by then, allowing horizontal scaling, we will bring a completely new level of analytics performance to customers. So that's what I'm happy to talk to you about next year. >> All right, we'll see you next year, I think it's in Texas. >> Wonderful, yeah, great. >> So thanks for stopping by. >> Thank you. >> He's Karsten, I'm Jeff. You're watching TheCUBE, from SuperComputing 2017. Thanks for watching.
Jagane Sundar & Pranav Rastogi | Big Data NYC 2017
>> Announcer: Live from Midtown Manhattan, it's theCUBE, covering Big Data, New York City, 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. >> Okay, welcome back, everyone. Live in Manhattan, this is theCUBE's coverage of our fifth year doing Big Data, NYC; eighth year covering Hadoop World, which is now evolved into Strata Data which is right around the corner. We're doing that in conjunction with that event. This is, again, where we have the thought leaders, we have the experts, we have the entrepreneurs and CEOs come in, of course. The who's who in tech. And my next two guests, is Jagane Sundar, CUBE alumni, who was on yesterday. CTO of WANdisco, one of the hottest companies, most valuable companies in the space for their unique IP, and not a lot of people know what they're doing. So congratulations on that. But you're here with one of your partners, a company I've heard of, called Microsoft, also doing extremely well with Azure Cloud. We've got Pranav Rastogi, who's the program manager of Microsoft Cloud Azure. You guys have an event going on as well at Microsoft Ignite which has been creating a lot of buzz this year again. As usual, they have a good show, but this year the Cloud certainly has taken front and center. Welcome to theCUBE, and good to see you again. >> Thank you. >> Thank you. >> Alright, so talk about the partnership. You guys, Jagane deals with all the Cloud guys. You're here with Microsoft. What's going on with Microsoft? Obviously they've been, if you look at the stock price. From 20-something to a complete changeover of the leadership of Satya Nadella. The company has mobilized. The Cloud has got traction, putting a dent in the universe. Certainly, Amazon feels a little bit of pain there. But, in general, a lot more work to do. What are you guys doing together? Share the relationship. >> So, we just announced a product that's a one-click deployment in the Microsoft Azure Cloud, off WANdisco's Fusion Replication technology. So, if you got some data assets, Hadoop or Cloud object stores on-premise and you want to create a hybrid or a Cloud environment with Azure and Picture, ours is the only way of doing Active/Active. >> Active/Active. And there is some stuff out there that's looking like Active/Active. DataPlane by Hortonworks. But it's fully not Active/Active. We talked a little bit about that yesterday. >> Jagane: Yes. >> Microsoft, you guys, what's interesting about these guys besides the Active/Active? It's a unique thing. It's an ingredient for you guys. >> Yes, the interesting thing for us is, the biggest problem that we think customers have for big data perspective is, if you look at the landscape of the ecosystem in terms of open source projects that are available it's very hard to a: figure out How do I use this software?, b: How do I install it? And, so what we have done is created an experience in Azure HDInsight where you can discover these applications, within the context of your cluster and you can install these applications by one-click install. Which installs the application, configures it, and then you're good to go. We think that this is going to sort of increase the productivity of users trying to get sense out of big data. The key challenges we think customers have today is setting up some sort of hybrid environment between how do you connect your on premise data to move it to the Cloud, and there are different use cases that you can have you can move parts of the data and you can do experiment easily in the Cloud. 
So what we've done is, we've enabled WANdisco as an application on our HDInsight application platform, where customers can install it using a single-click deploy, connect it with the data that's sitting on-prem, and use the Active/Active feature to have both these environments running simultaneously and in sync. >> So one benefit is the one-click thing, that's on your side, right? You guys are enabling that. So, okay, I get that. That's totally cool. We'll get to that in a second. I want to kind of drill down on that. But, what's the benefit to the customers that you guys are having? So, I'm a customer, I one-click, I want some WANdisco Active/Active. Why am I doing it? What does the Cloud change? How does your Cloud change from that experience? >> One example that you can think about that is going to change is, in an on-premise environment you have a cluster running, but you're kind of limited on what you can do with the cluster, because you've already set up the number of nodes and the workloads you're running are fairly finite. But what's happening in reality and today is, lots of users, especially in the machine learning space, and AI space, and the analytic space, are using a lot of open source libraries and technologies and they're using it on top of Hadoop, and they're using it on top of Spark. However, experimenting with these technologies is hard on-prem because it's a locked environment. So we believe, with the Cloud, especially with this offering of WANdisco and HDInsight, once you move the data you can start spinning up clusters, you can start installing more open source libraries, experiment, and you can shut down the clusters when you're done. So it's going to increase your efficiency, it's going to allow you to experiment faster, and it's going to reduce cost as well, because you don't have to have the cluster running all the time, and once you are done with your experimentation, then you can decide which way you want to go. So, it's going to remove the-- >> Jagane, what's your experience with Azure? A lot of people have been, some people have been critical, and rightfully so. You guys are moving as fast as you can. You can only go as fast as you can, but the success of the Cloud has been phenomenal. You guys have done a great job with the Cloud. Got to give you props on that. Your customers are benefiting, or Microsoft's customers are benefiting. How's the relationship? Are you getting more customers through these guys? Are you bringing customers from on-prem to Cloud? How's the customer flow going? >> Almost all of our customers who have on-prem instances of Hadoop are considering Cloud in one form or the other. Different Clouds have different strengths, as they've found-- >> Interviewer: And different technologies. >> Indeed. And Azure's strengths appear to be the HDInsight piece of it and as Pranav just mentioned, the cool thing is, you can replicate into the Cloud, start up a 50 node Spark cluster today to run a query, that may return results to you really fast. Now, remember this is data that you can write to both in the Cloud and on-premise. It's kept consistent by our technology. Or tomorrow you may find that somebody tells you Hive with the new Tez enhancements is faster; sure, spin up a hundred node Hive cluster in the Cloud, HDInsight supports that really well. You're getting consistent data and your queries will respond much faster than your on-premise. >> We've had Oliver Chu on, before with Hortonworks obviously they're partnering there.
HDInsight's been getting a lot of traction lately. Where's that going? We've seen some good buzz on that. Good people talking about it. What's the latest update on your end? >> HDInsight is doing really good. The customers love the ease of creating a cluster using just a few clicks and the benefits that customers get; clusters are optimized for certain scenarios. So if you're doing data science, you can create a Spark cluster, install open source libraries. We have Microsoft R Server running on Spark, which is a unique offering to Microsoft, which lots of customers have appreciated. You also have streaming scenarios that you can do using open source technologies, like we have Apache Kafka running on a stack, which is becoming very popular from an ingestion perspective. Folks have been-- >> Has the Kubernetes craze come down to your group yet? Has it trickled down? It seems to be going crazy. You hired an amazing person from Google, Brendan Burns, we've interviewed before. He's part of the original Kubernetes spec; he now works for Microsoft. What's the buzz on the Kubernetes container world there? >> In general, Microsoft Azure has seen great benefits out of it. We are seeing lots of traction in that space. From my role in particular, I focus more on the HDInsight big data space, which is kind of outside of what we do with Kubernetes work. >> And your relationship is going strong with WANdisco? >> Pranav: Yes. >> Right. >> We just launched this offering just about yesterday is what we announced, and we're looking forward to getting customers on to the stack. >> That's awesome. What's your take on the industry right now? Obviously, the partnerships are becoming clearer as people can see there's (mumbles). You're starting to see the notion of infrastructure and services changing. More and more people want services, and then you got the classic infrastructure which looks like it's going to be hybrid. That's pretty clear, we see that. Services versus infrastructure, how should customers think about how they architect their environments? So they can take advantage of the Active/Active and also have a robust, clean, not a lot of re-skilling going on, but more of a good organization from a personnel standpoint, but yet get to a hybrid architecture? >> So, it depends, the Cloud gives you lots of options to meet the customers where they are. Different customers have different kinds of requirements. Customers who have specialized some of their applications will probably want to go more of an infrastructure route, but customers also love to have some of the PaaS benefits where, you know, I have a service running where I don't have to worry about the infrastructure, how does patching happen, how do OS updates happen, how does maintenance happen. They want to sort of rely on the Microsoft Azure Cloud provider to take care of it. So that they can focus on their application-specific logic, or business-specific logic, or analytical workloads, and worry about optimizing those parts of the application because that is their core-- >> It's been great. I want to get your thoughts real quick. Share some color. What's going on inside Microsoft? Obviously, open source has become a really big part of the culture, even just at Ignite. More Linux news is coming. You guys have been involved in Linux. Obviously, open source with Azure, ton of stuff, I know is built in the Microsoft Cloud on open source. You're contributing now to Kubernetes, as I mentioned earlier. Seems to be a good cultural shift at Microsoft.
What's the vibe on the open source internally at Microsoft? Can you share, just some anecdotal insight into what's the vibe like inside, around open source? >> The vibe has increased quite a lot around open source. You rightly mentioned, just recently we've announced a SQL server on Linux as well, at the Ignite conference. You can also deploy a SQL server on a docker container, which is quite revolutionary if you think about how forward we have come. Open source is so pervasive it's almost used in a lot of these projects. Microsoft employees are contributing back to open source projects in terms of, bug fixes, feature requests, or documentation updates. It's a very, very active community and by and large I think customers are benefiting a lot, because there are so many folks working together on open source projects and making them successful and especially around the Azure stack, we also ensure that you can run these open source workloads lively in the Cloud. From an enterprise perspective, you get the best of both worlds. You get the latest innovations happening in open source, plus the reliability of the managed platform that Azure provides at an enterprise scale. >> So again, obviously Microsoft partnership is huge, all the Clouds as well. Where do you want to take the relationship with Microsoft? What happens next? You guys are just going to continue to do business, you're like expecting the one-click's nice, I have some questions on that. What happens next? >> So, I see our partnership becoming deeper. We see the value that HDInsight brings to the ecosystem and all of that value is captured by the data. At the end of the day, if you have stale data, if you have data that you can't rely on the applications are useless. So we see ourselves getting more and more deeply embedded in the system. We see of ourselves as an essential part of the data strategy for Azure. >> Yeah, we see continuous integration as a development concept, continuous analytics as a term, that's being kicked around. We were talking yesterday about, here in theCUBE, real time, I want some data real time and IT goes back, "Here it is, it's real time!" No, but the data's three weeks old. I mean, real time (laughs) is a word that doesn't mean I got to see it really fast, low latency response. Well, that's not the data I want. I meant the data in real time, not you giving me a real time query. So again, this brings up a mind shift in terms of the new way to do business in the Cloud and hybrid. It's changing the game. As customers scratch their heads and try to figure out how to make their organizations more DevOps oriented, what do you guys see for advice for those managers, who are really getting behind it, really want to make change, who kind of have to herd the cats a little bit, and maybe break out security and put it in it's own group? Or you come and say, okay IT guys we're going to change into our operating model, even on-prem, we'll use some burst in to the Cloud, Azure's got 365 on there, lot of coolness developing. What's the advice for the mindset of the change agents out there that are going to do the transformation? >> My advice would be, if you've done the same thing by hand over two times, it's time you automated it, but-- >> Interviewer: Two times?! >> Two times. >> No three rule? Three strikes you're out? >> You're saying two, contrarian. >> That's a careful statement. Because, if you try automating something that you've never actually tried by hand, that's a disaster as well. 
A couple times, so you know how it's supposed to work. >> Interviewer: Get a good groove on it. >> Right, then you optimize, you automate, and then you turn the knobs. So, you try a hundred node cluster, maybe that's going to be faster. Maybe after a certain point, you don't get any improvements, so you know how to-- >> So take some baby steps, and one easy way to do it is to automate something that you've done. >> Jagane: Yes, exactly. >> That's almost risk-free, relatively speaking. Thoughts, advice to change agents out there. This is your industry hat on. You can take your Microsoft hat off. >> Baby steps. So you start small, you get familiar with the environment and your toolsets are provided so that you get a consistent experience on what you were doing on-prem and sort of in a hybrid space. And the whole idea is as you get more comfortable the benefits of the Cloud far outweigh any sort of cultural changes that need to happen-- >> Guys, thanks for coming on theCUBE, really appreciate it. Thoughts on the Big Data NYC this week? What do you think? >> I think it's a conference that has a lot of Cloud hanging over it and people are scratching their heads. Including vendors, customers, everybody scratching their head, but there is a lot of Cloud in this conference, although this is not a Cloud conference. >> Yeah, they're trying to make it an AI conference. A lot of AI watching certainly we're seeing that everywhere. But again, nothing wrong hyping up AI. It's good for society. It really is cool, but still, that's talking about baby steps, AI is still not there. It seems like, AI from when I got my CS degree in the 80's, not a lot innovation, well machine learning is getting better, but, a lot more way to go on AI. Don't you think? >> Yes, you know a few of the announcements we've made in this week is all about making it easier for developers to get started with AI and machine learning and our whole hope is with these investments that we've done and Azure machine learning improvements and the companion app and the workbench, allows you to get started very easily with AI and machine learning models and you can apply and build these models, do a CICD process and deploy these models and be more effective in the space. >> Yeah and also the tooling market has kind of gotten out of control. We were just joking the other day, that there's this tool shed mindset where everything is in the tool shed and people bought a hammer and turned it into a lawnmower. So it's like, you got to be careful which tools you have. Think about a platform. Think holistically, but if you take the baby steps and implement it, certainly it's there. My personal opinion, I think the Cloud is the equalizer. Cloud can bring compute power that changes what a tool was built for. Even, go back six years, the tools that were out there even six years ago are completely changed by the impact of unlimited, potentially unlimited capacity horsepower. So, okay that resets a little bit. You agree? >> I do. I totally agree. >> Who wins, who loses on the reset? >> The Cloud is an equalizer, but there is a mindset shift that goes with that those who can adapt to the mindset shift, will win. Those who can not and are still clinging to their old practices will have a hard time. >> Yeah, it's exciting. If you're still reinventing Hadoop from 2011 then, probably not good shape right now. >> Jagane: Not a good place to be. >> Using Hadoop is great for Bash, but you can't make that be a lawnmower. That's my opinion. 
Okay, thanks for coming on. I appreciate it (laughs) You're smiling, you got something that you, no? >> Pranav: (laughs) Thank you so much for that comment. >> Yeah, tool sheds are out there, be careful. Guys do your job. Congratulations on your partnership, appreciate it. This is theCUBE, live in New York. More after this short break. We'll be right back.
Elaine Yeung, Holberton School | Open Source Summit 2017
(upbeat music) >> Narrator: Live from Los Angeles it's The Cube covering Open Source Summit North America 2017. Brought to you by the Linux Foundation and Red Hat. >> Welcome back, everyone. Live in Los Angeles for The Cube's exclusive coverage of the Open Source Summit North America. I'm John Furrier, your host, with my co-host, Stu Miniman. Our next guest is Elaine Yeung, @egsy on Twitter, check her out. Student at Holberton School? >> At Holberton School. >> Holberton School. >> And that's in San Francisco? >> I'm like repping the school right here. (laughs) >> Looking good. You look great, so. Open Source is a new generation. It's going to go from 64 million libraries to 400 million by 2026. New developers are coming in. It's a whole new vibe. >> Elaine: Right. >> What's your take on this, looking at this industry right now? Looking at all this old, the old guard, the new guard's coming in, a lot of cool things happening. Apple's new ARKit was announced today. You saw VR and AR booming, multimedia. >> Elaine: Got that newer home button. Right, like I-- >> It's just killer stuff happening. >> Stu: (laughs) >> I mean, one of the reasons why I wanted to go into tech, and this is why I, like, when I told them that I applied to Holberton School, was that I really think at whatever next social revolution we have, technology is going to be somehow integral to it. It's probably not even, like, an existing technology right now. And, as someone who's just, like, social justice-minded, I wanted to be able to contribute in that way, so. >> John: Yeah. >> And develop a skillset that way. >> Well, we saw the keynote, Christine Corbett Moran, was talking really hardcore about code driving culture. This is happening. >> Elaine: Right. So this is not, like, you know, maybe going to happen, we're starting to see it. We're starting to see the culture being shaped by code. And notions of ruling classes and elites potentially becoming democratized 100% because now software, the guys and gals doing it are acting on it and they have a mindset-- >> Elaine: Right. >> That comes from a community. So this is an interesting dynamic. As you look at that, do you think that's closer to reality? Where in your mind's eye do you see it? 'Cause you're in the front lines. You're young, a student, you're immersed in that, in all the action. I wish I was in your position and all these great AI libraries. You got TensorFlow from Google, you have all this goodness-- >> Elaine: Right. >> Kind of coming in, I mean-- >> So you're, so let me make sure I am hearing your question right. So, you're asking, like, how do I feel about the democratization of, like, educ-- >> John: Yeah, yeah. Do you feel it? Are you there? Is it happening faster? >> Well, I mean, things are happening faster. I mean, I didn't have any idea of, like, how to use a terminal before January. I didn't know, like, I didn't know my way around Linux or GitHub, or how to push a commit, (laughs) until I started at Holberton School, so. In that sense, I'm actually experiencing this democratization of-- >> John: Yeah. >> Of education. The whole, like, reason I'm able to go to this school is because they actually invest in the students first, and we don't have to pay tuition when we enroll. It's only after we are hired or actually, until we have a job, and then we do an income-share agreement. So, like, it's really-- >> John: That's cool.
>> It's really cool to have, like, a school where they're basically saying, like, "We trust in the education that we're going to give you "so strongly that you're not going to pay up front. >> John: Yeah. >> "Because we know you're going to get a solid job and "you'll pay us at that point-- >> John: Takes a lot of pressure off, too. >> Yeah. >> John: 'Cause then you don't have to worry about that overhang. >> Exactly! I wrote about that in my essay as well. Yeah, just, like because who wants to, like, worry about student debt, like, while you're studying? So, now I can fully focus on learning C, learning Python (laughs) (mumbles) and stuff. >> Alright, what's the coolest thing that you've done, that's cool, that you've gotten, like, motivated on 'cause you're getting your hands dirty, you get the addiction. >> Stu: (laughs) >> Take us through the day in the life of like, "Wow, this is a killer." >> Elaine: I don't know. Normally, (laughs) I'm just kind of a cool person, so I feel like everything I-- no, no. (laughs) >> John: That's a good, that's the best answer we heard. >> (laughs) Okay, so we had a battle, a rap battle, at my school of programming languages. And so, I wrote a rap about Bash scripts and (laughs) that is somewhere on the internet. And, I'm pretty sure that's, like, one of the coolest things. And actually, coming out here, one of my school leaders, Sylvain, he told me, he was like, "You should actually put that, "like, pretty, like, front and center on your "like, LinkedIn." Or whatever, my profile. And what was cool, was when I meet Linus yesterday, someone who had seen my rap was there and it's almost like it was, like, set up because he was like, "Oh, are you the one "that was rapping Bash?" And, I was like, "Well, why yes, that was me." (laughs) >> John: (laughs) >> And then Linus said it was like, what did he say? He was like, "Oh, that's like Weird Al level." Like, just the fact that I would make up a rap about Bash Scripts. (laughs) >> John: That's so cool. So, is that on your Twitter handle? Can we find that on your Twitter handle? >> Yes, you can. I will-- >> Okay, E-G-S-Y. >> Yes. >> So, Elaine, you won an award to be able to come to this show. What's your take been on the show so far? What was exciting about you? And, what's your experience been so far? >> To come to the Summit. >> Stu: Yeah. >> Well, so, when I was in education as a dean, we did a lot of backwards planning. And so, I think for me, like, that's just sort of (claps hands). I was looking into the future, and I knew that in October I would need to, like, start looking for an internship. And so, one of my hopes coming out here was that I would be able to expand my network. And so, like that has been already, like that has happened like more than I even expected in terms of being able to meet new people, come out here and just, like, learn new things, but also just like hear from all these, everyone's experience in the industry. Everyone's been just super awesome (laughs) and super positive here. >> Yeah. We usually find, especially at the Open Source shows, almost everyone's hiring. You know, there's huge demand for software developers. Maybe tell us a little bit about Holberton school, you know, and how they're helping, you know, ramp people up and be ready for kind of this world? >> Yeah. So, it's a two-year higher education alternative, and it is nine months of programming. 
So, we do, and that's split up into three months low-level, so we actually did C, where we, you know, programmed our own shell, we programmed printf. Then after that we followed with high-level. So we studied Python, and now we're in our SysAdmin track. So we're finishing out the last three months. And, like, throughout it there's been a little bit, like, intermixing. Like, we did binary trees a couple weeks ago, and so that was back in C. And so, I love it when they're, like, throwing, like, C at us when we've been doing Python for a couple weeks, and I'm like, "Dammit, I have to put semicolons (laughs) >> John: (laughs) >> "And start compiling. "Why do we have to compile this?" Oh, anyway, so, offtrack. Okay, so after those nine months, and then it's a six month internship, and after that it's nine months of specialization. And so there's different spec-- you can specialize in high-level, low-level, they'll work with you in whatever you, whatever the student, their interests are in. And you can do that either as a full-time student or do it part-time. Which most of the students that are in the first batch that started in January 2016, they're, most of them are, like, still working, are still working, and then they're doing their nine month specialization as, like, part-time students. >> Final question for you, Elaine. Share your personal thoughts on, as you're immersed in the coding and learning, you see the community, you meet some great people here, network expanding, what are you excited about going forward? As you look out there, as you finish it up and getting involved, what's exciting to you in the world ahead of you? What do you think you're going to jump into? What's popping out and revealing itself to you? >> I think coming to the conference and hearing Jim speak about just how diversity is important and also hearing from multiple speakers and sessions about the importance of collaboration and contributions, I just feel like Linux and Open Source, this whole movement is just a really, it's a step in the right direction, I believe. And it's just, I think the recognition that by being diverse we are going to be stronger for it, that is super exciting to me. >> John: Yeah. >> Yeah, and I just hope to be able to-- >> John: Yeah (mumbles) >> I mean, I know I'm going to be able to add to that soon. (laughs) >> Well, you certainly are. Thanks for coming on The Cube. Congratulations on your success. Thanks for coming, appreciate it. >> Elaine: Thank you, thank you. >> And this is The Cube coverage, live in LA, for Open Source Summit North America. I'm John Furrier, Stu Miniman. More live coverage after this short break. (upbeat music)
Josh Stella, Fugue | AWS Summit 2017
>> Announcer: Live from Manhattan, it's theCUBE. Covering AWS Summit, New York City 2017. Brought to you by Amazon Web Services. >> And we are live here at the Javits Center, continuing on theCUBE, our coverage of AWS Summit 2017, here in Midtown. Starting to wind down, tail end of the day, but still a lot of excitement here on the show floor behind us, as there has been all day long. Joining us now along with Stu Miniman, I'm John Walls, is Josh Stella, who is the CEO and Co-Founder of Fugue, a Washington DC and Frederick, Maryland based company. Josh, thanks for being with us. >> Gentlemen, thanks for having me on theCUBE. >> You bet, first time, I think, right? >> Nope, second time. >> Oh, sorry, second time. >> Yeah. >> Alright, so a CUBE vet. >> A CUBE vet, there you go. >> Alright, so for our folks, viewers at home who might not be too familiar with Fugue. >> Josh: Sure. >> Tell us a little bit about what you do, and I'm always curious about the origin of the name. Where'd that, you know, where that came from. >> Sure thing, sure. So what Fugue is, is an infrastructure automation system for the Cloud. So, it builds everything you need on the Cloud. It constantly monitors and operates it. It corrects it if anything goes wrong, and it gives you a full view of everything in your infrastructure. We like to say you go fast. That's why you're going to Cloud, is to be able to go fast. You need to be able to see everything and get it right. Fugue gives you all of those capabilities at a different level than anything else out there. The name actually comes from music. From a form of musical composition called a fugue. And there might be some folks in the audience who remember Hofstadter's book Gödel, Escher, Bach. That was actually where the idea came from. That and there aren't many English words left that are real words and I didn't want to make something up. >> So, you could get the website for it, so it was good to go? >> Yeah, we used fugue.co so that was part of it, sure. >> It worked out for you, then. >> It worked out, yeah. >> Well, for a guy I know who's big into astronomy, I guess Cloud would be, that seems to make sense, right? That you'd be tied into that. Just in general, Cloud migration now. What we're seeing with, this massive paradigm shift, right? >> Yes. >> That's occurring right now. What's in your mind, the biggest driver, you know, of that? Why are people now seriously on the uptake? >> Sure, so when I was at AWS, most of the growth that we saw was sort of, bottom-up. We would go into a new customer and they'd say, we didn't think we were on Cloud. And then we looked and there are 130 Cloud accounts, on AWS, scattered throughout the organization. That was kind of the first motion of Cloud adoption. We're really now in the second wave, and this wave is strategic. It's where CIOs, CEOs and CTOs are saying this is the right way to go. They do security well, it's more cost-effective. More than anything, it allows us to move fast, iterate, be disruptive ourselves. Instead of letting the other guys, who are moving fast on Cloud, disrupt us. So these are the big drivers. What Fugue does is it allows your Cloud desk, and almost any of these organizations that are in this, sort of, phase two motion. It's not all bottom up. They're starting to say, how do we really want to get our hands around this?
And so, what Fugue allows you to do is let your developers go even faster than they could without it, but where things like policy as code and infrastructure as code are just baked in up front. So, your developers can go really quickly, iterate, and the system will actually tell them when they're doing something that isn't allowed by, for example, a regulatory regime or a compliance requirement. And, once you've built those things, Fugue makes sure they're always running properly. So, it's a really powerful technology for migration. >> Josh, I'm wondering if you could take us into that dynamic you just talked about, because the stuff where the developers were just playing with it, we definitely saw it, you know. My joke, when I went to an audience, was like, there's two types of customers out there. Those that know they're using AWS and those that don't realize that they are using AWS. >> Josh: Yeah, exactly. >> But, when you switch to the top-down, it's, how do you get buy-in? How do you get, you know, that developer and the operator, you know, all on the same page? And, even you say today, most companies say, I have a Cloud strategy, but everybody's strategy is different and there's still, kind of, the ink's drying and as, you know, most people say, strategy means it's good for today, maybe not two years from now. >> Josh: Yeah. >> But, what are you seeing in the customer base, as some of those organizational dynamics, strategy dynamics? >> Sure, so, what we're seeing are, people are confused I think, still, about where this whole thing's going. There's a lot of clarity about where it's been, what it can do for you now. That's coming into a clear focus. But, we're in this moment of, not just moment, decade of huge change in computing. And we're still probably less than halfway through this sea change. So, I'd say the strategy, what we advise people, is the strategy has to be really thinking more about the future, that is unknown, as much as the present, that's known. And that's a difficult thing to do. Our approach to that has been, and then, how do you unify the, kind of, the intentions of the executives and the developers? Well, with developers you have to give them great tools. You have to give them things they want to use. You can't impose, kind of, these old enterprise-y systems on them. They will find ways around it. So, with Fugue, we wrote this very elegant functional programming language where the developers have far more power to do infrastructure as code than with anything else. It's a very beautiful, elegant language. Lots of developer tooling around that. We're just coming out, within the next couple of weeks here, with an open beta of a visualization system. So, as you're writing your infrastructure as code, you automatically can see a diagram of everything that will be deployed. So, developers really like those aspects of Fugue. We speak their language. I'm the CEO, I've been a developer for 30 years. From the other side of the equation though, the executive level, the leadership of the organization, they need assurance that what's being built is going to be correct, is going to be within the bounds of what's allowed by the organization, and can adapt to change as it comes down the pike. So, and this gets back to strategy. So, we have the kind of, everything being built with virtual machines and attached disks. And now, you know, containers are really a huge trend, a really great trend, but it's not the end. You have things like Lambda.
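To make the "infrastructure as code, policy as code" idea Josh describes a bit more concrete, here is a minimal sketch in Python rather than in Fugue's own language, whose syntax isn't shown in the conversation. Every name in it (`Bucket`, `check_policy`, the rules themselves) is hypothetical; the point is only that the same declaration a developer writes can be checked against compliance rules before anything is built.

```python
from dataclasses import dataclass
from typing import List

# A declarative description of one piece of infrastructure (hypothetical type).
@dataclass
class Bucket:
    name: str
    encrypted: bool
    public: bool

# "Policy as code": a rule set expressed as an ordinary function that returns
# every violation it finds, so tooling can report them all at once.
def check_policy(buckets: List[Bucket]) -> List[str]:
    violations = []
    for b in buckets:
        if not b.encrypted:
            violations.append(f"{b.name}: storage must be encrypted at rest")
        if b.public:
            violations.append(f"{b.name}: public access is not allowed")
    return violations

if __name__ == "__main__":
    # The developer's declared infrastructure, evaluated before deployment.
    desired = [
        Bucket(name="audit-logs", encrypted=True, public=False),
        Bucket(name="static-assets", encrypted=False, public=True),
    ]
    problems = check_policy(desired)
    if problems:
        # A non-zero exit here is what would turn a non-compliant declaration
        # into a failed build rather than deployed infrastructure.
        raise SystemExit("\n".join(problems))
    print("declaration is compliant; safe to build")
```

Wired into a CI system the way Fugue's financial services customer does later in the conversation, a check like this runs on every pull request: a compliant declaration produces infrastructure, a non-compliant one produces a build failure.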
You have things like machine learning as a service. And the application boundaries around all of those things, the ones that are there now, and where it's going in the future. And so Fugue is very much architected to grow with that. >> Yeah, absolutely. I'm curious what you're seeing from customers. It used to be, I think back to, you know, virtualization. It was, you know, IT was a cost center and how do we squeeze money out. Then it was, how can IT respond to the business? And now, you know, the leading edge customers, it's how's IT driving business? I think about machine learning, you know, IoT, a lot of the customers we've talked to that are using serverless, it's, you know, I can be more profitable from day one. I can react much faster. What are the dynamics you're seeing? Kind of the role of IT in, you know, the business? >> Yes, thanks, that's a great question. So, you know, software's eating the world, and the Cloud is software, if you do it right. The use of the Cloud is software. And so, we're definitely seeing that. Where it used to be that IT was this big fixed cost center, and you were trying to just get more efficiency out of it. You know, maybe extend your recap cycles if you could get away with it, kind of. Now, it's really a disruptive offensive capability. How am I going to build the next thing that expands my market share? That goes after, other people are trying to be disruptive. So, you have to be able to go really, really fast in order to do that, yeah. >> So, one of the announcements today was the AWS migration hub. And it sounds great, I've got all of these migrations out there and it's going to help put them together, but it reminds me of, kind of, we have the manager of managers. Because, there's so many services out there, you know, public Cloud, you know, it used to be like, oh, Cloud's going to simplify everything. It's like, no, Cloud is not simplifying anything. We always have, kind of, the complexity. How do you help with that? How are customers grappling with the speed of change and the complexity? >> Josh: Sure. >> As it is now? >> So, through automation and code. And that's the whole way through the stack. People used to think about software just being the application. Then more recently, I'd say in the last 18 months, people have really figured out that actually, no, the configuration of the system, the infrastructure, if you will, although even that's a bit anachronistic, has to be code, and so does security. Everything needs to be turned into code so that the build process is minutes, not days or hours. So, we have a customer in financial services, for example, that uses Fugue to build their entire CI/CD pipeline and then integrate itself with it, so that all of their infrastructure and security policies are completely automated whenever a developer does a pull request. So, if they do a pull request, out comes an infrastructure. If that infrastructure did not meet policy, it's a build fail. So, the way you adapt to all this complexity is through automation. And it's going to get worse, not better, as these services proliferate, and as the application boundaries are drawn around wider and wider classes of services. >> Yeah, and that's, I guess, what I wanted to ask about. Is that, if I come into the Cloud and I have X workload, you know, and then all of a sudden, here comes this and here comes that. Now I can do this, now I have new capabilities. And it's growing and growing. My managing becomes a whole different animal now, right? >> Josh: Yes. >> How do I control that?
How do I keep a handle on that and not get overwhelmed by the ability to do more, and then people within my own company wanting to do more? >> Yeah, so what you're getting at there, I think, is that people go into this thinking the day one problem is the hard one. It's not. >> John: Mine's going to be when it becomes exponentially larger. >> Yeah, and the day two and onward problem is the hard one. Now I've built this thing. Is it right anymore? >> John: Right. >> Is it doing what it's supposed to do? Who owns it? >> Right, so all these things are what Fugue was built to address. We don't just build stuff on Cloud. We monitor it every 30 seconds, and if anything gets out of specification, we fix it. So the effect of this is, as you're building and building and building, if Fugue is happy, your infrastructure is correct. So you no longer have to worry about what's out there, it is operating as intended at the infrastructural layer. So, I think that you're exactly right. You get to these large scales and you realize, wow, I have to automate everything. Typically inside of enterprises, they're kind of hand rolling a bunch of point solutions and bags of Python and Bash scripts to try to do it. It's a really hard problem. >> So Josh, it's been a year since you came out of stealth, you know, what's been exciting? What's been challenging? What do you expect to see by the time we catch up with you a year from now? >> Yeah, sure, so what's been exciting is the amount of real traction and interest we're getting out of, like, financial services, government and health care, those kinds of markets. I'd say, it's also been exciting to get the kind of feedback that we have from our early customers, which is, they really become evangelists for us, and that feels great when you give people a technology that they don't just use but they love. That's very exciting. A year from now, you're going to see a lot from us. Over the next six to nine months, in terms of product releases, we're going to be putting something out at re:Invent, I can't get too much into it. That really changes some of the dynamics around things like being able to adopt Cloud. So, a lot of exciting stuff's coming up. >> It sounds like you've got a pretty interesting runway ahead of you. And you certainly have your hands full. But I think you've got a pretty good handle on it. So, congratulations on a very good year. >> Thank you. >> And we wish you all the best success down the road as well. >> Great, thanks for your time. >> You bet, Josh, thank you. Josh Stella from Fugue joining us here on theCUBE. Back with more from the Javits Center, we're at Midtown Manhattan at AWS Summit 2017.
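Josh's "we monitor it every 30 seconds and if anything gets out of specification we fix it" is, at heart, a reconciliation loop over declared infrastructure. The sketch below, in Python with a toy in-memory provider standing in for real Cloud APIs, is only an illustration of that pattern under those assumptions, not a description of how Fugue itself is implemented.

```python
import time
from typing import Dict

# Toy in-memory "cloud" so the sketch runs on its own; in a real system the
# two functions below would call the provider's APIs instead.
_cloud: Dict[str, dict] = {}

def read_actual_state() -> Dict[str, dict]:
    """Return the infrastructure that currently exists."""
    return {name: dict(spec) for name, spec in _cloud.items()}

def apply(name: str, spec: dict) -> None:
    """Create or update one resource so it matches its declared spec."""
    print(f"repairing drift on {name!r}")
    _cloud[name] = dict(spec)

def reconcile_once(desired: Dict[str, dict]) -> None:
    """One pass: compare desired against actual state and repair any drift."""
    actual = read_actual_state()
    for name, spec in desired.items():
        if actual.get(name) != spec:
            apply(name, spec)

def reconcile_forever(desired: Dict[str, dict], interval_seconds: int = 30) -> None:
    """Keep the estate within specification, checking on a fixed interval."""
    while True:
        reconcile_once(desired)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    desired = {"load-balancer": {"listeners": 2, "tls": True}}
    reconcile_once(desired)                   # first pass builds it
    _cloud["load-balancer"]["tls"] = False    # someone changes it by hand
    reconcile_once(desired)                   # next pass puts it back
```

The day-two point in the exchange above falls out of this shape: once the declaration is the source of truth, "is it right anymore, who owns it?" reduces to whether the reconciler reports the estate as in specification.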
SUMMARY :
Josh Stella, CEO and Co-Founder of Fugue, joins John Walls and Stu Miniman at the Javits Center for AWS Summit 2017. Fugue is an infrastructure automation system for the Cloud: it builds everything you need, monitors it every 30 seconds, corrects anything that drifts out of specification, and gives a full view of the infrastructure, with infrastructure as code and policy as code baked in up front. Stella describes the second, strategic, top-down wave of Cloud adoption; the need to give developers tools they want to use, including Fugue's functional language and an upcoming visualization beta; and a financial services customer whose CI/CD pipeline fails the build whenever a pull request produces infrastructure that does not meet policy. He argues the hard problem is not day one but day two and onward, that automation and code are the only way to cope with proliferating services, and that Fugue has product releases coming over the next six to nine months, including an announcement at re:Invent.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nicola | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Josh | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Jeremy Burton | PERSON | 0.99+ |
Paul Gillon | PERSON | 0.99+ |
GM | ORGANIZATION | 0.99+ |
Bob Stefanski | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Dave McDonnell | PERSON | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
James Kobielus | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
Paul O'Farrell | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
BMW | ORGANIZATION | 0.99+ |
Ford | ORGANIZATION | 0.99+ |
David Siegel | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Sandy | PERSON | 0.99+ |
Nicola Acutt | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
David Lantz | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
three | QUANTITY | 0.99+ |
Lisa | PERSON | 0.99+ |
Lithuania | LOCATION | 0.99+ |
Michigan | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
General Motors | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
America | LOCATION | 0.99+ |
Charlie | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Pat Gelsing | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Bobby | PERSON | 0.99+ |
London | LOCATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Dante | PERSON | 0.99+ |
Switzerland | LOCATION | 0.99+ |
six-week | QUANTITY | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Seattle | LOCATION | 0.99+ |
Bob | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
100 | QUANTITY | 0.99+ |
Michael Dell | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
Sandy Carter | PERSON | 0.99+ |