Wilfred Justin, AWS WWPS | AWS re:Invent 2020 Public Sector Day
>> Announcer: From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020. Special coverage sponsored by AWS Worldwide Public Sector. >> Hello and welcome to theCUBE Virtual, our coverage of AWS re:Invent 2020, with special coverage of the public sector experience. This is the day when we go through all the great conversations around public sector in the context of re:Invent. Great guest, Wilfred Justin, head of AWS AI and machine learning enablement and partnerships with AWS. Wilfred, thanks for joining us. >> Thanks, John. Thanks for having me on. I'm pretty excited to be part of this CUBE interview. >> Well, I wish we could be in person, but with the pandemic we've got to do it remote. But I want to get into some of the things you're working on. The AI/ML Rapid Adoption Assistance initiative is a big story. What is it? Describe what it is. >> So we launched this artificial intelligence slash machine learning rapid adoption assistance for all public sector partners who are part of the APN network in September 2020. And we launched this in response to the president's executive order called the American AI Initiative. So what the rapid adoption assistance provides is a direct, scalable and automated mechanism for all the public sector partners to reach out to AWS experts within our team for assistance in building and deploying machine learning workloads on behalf of the agencies. So all the partners who are part of this rapid adoption assistance will go through a journey with AWS, with my team, and they will go through three different phases. The first phase will be the envisioning phase, the second phase would be the enablement phase, and the third would be the build phase. In the envisioning phase we will dive deep into the use case, the problem that they're trying to solve. This is where we will talk about the algorithms and frameworks, and we will solidify the architecture and validate the architecture. Following that will be an enablement phase, where we engage with the partners and train their technical team, meaning that it will be a hands-on, hands-on-keyboard kind of approach where we train them on the machine learning stack. And the third phase would be the build phase, where the partners leverage the knowledge that they have gained through the enablement and envisioning phases, and they start building and rolling out workloads on behalf of the agencies. So we will stay with them throughout the journey, and we will remove any kind of blockers, be it technical or business. So that's a quick overview of the AI/ML rapid adoption assistance program. >> It's funny, talking to Swami over the years and watching the AI/ML portfolio every year at re:Invent, Dr. Matt Wood is always doing something new. This year is no exception, even more machine learning and AI in the news, and this rapid adoption assistance initiative sounds like it's an accelerant. So I get all that, but I want to ask you, what problem does it solve for the customer, or for Amazon? Is it because there's demand, there's too much demand, people want to go faster? What problem does this initiative, this rapid adoption of AI and machine learning initiative, solve? >> So as you know, John, artificial intelligence and related technologies like deep learning and machine learning can literally transform the way agencies operate. They can enable them to provide better services, quicker services and more secure services to the citizens of this country. 
And that's the reason the president released an executive order called the American AI Initiative, and it drives all the government agencies, specifically federal agencies, to promote artificial intelligence to protect and improve the security and economy of the nation. So if you think about it, the best way to achieve the goal is to enable the partners to build workloads on behalf of agencies, because when it comes to public sector, most of the workloads are delivered by partners. So the problem that we face, based on our interaction with the partners, is that though the partners have been building a lot of applications with AWS for more than a decade, when it comes to artificial intelligence they have very limited resources when it comes to deep learning and machine learning, right, like speech recognition, cognitive computing, natural language processing. So we wanted to address exactly that, and that's the problem we're trying to solve by launching this rapid adoption assistance, which is nothing but a direct mechanism for partners to reach AWS experts to help them build those kinds of solutions for the government. >> You know, it's interesting, because AI and machine learning is a secret sauce for workloads, especially modern workloads. You mentioned agencies and also public sector. Certainly with the pandemic there's been a ton of focus on moving faster, right? So getting those apps out quickly, AI drives a lot of that, so I totally get it. I think it's an accelerant, a great program, it just makes a lot of sense. And I know you guys have been going in vertical by vertical and kind of having SageMaker and all these other tools be specialized within those verticals. So it makes a ton of sense, I get it, and it is a great, great initiative and solves the problem. The question I have is, who gets access to this, right? Is it just the agencies you mentioned? Is it all public sector? Could you just clarify who can apply to this program? >> Yes, it is a partner-focused program. So it's for all the existing partners; though it is going to affect the end agencies, we're trying to help the agencies through the partners. So all the existing APN partners who are part of the PSP program, we call it the public sector partner program, can apply for this rapid adoption assistance. You have been following AWS and AWS partners, John, and a lot of partners have different kinds of expertise, and they show that by achieving a lot of competencies, right? It could be technical competencies like big data, storage and security, or it could be domain-specific competencies like public safety, education and government competencies. But for applying to this program, the partners don't need to have any kind of competency. All they have to have is to be part of the Amazon Partner Network, and they have to be part of the public sector partner program. That is number one. Second, it is open to all partners, meaning that it is open to both technology partners as well as consulting partners. Number three, applying is pretty simple, John, right? You can quickly search for AI/ML rapid adoption assistance, and a page will pop up on the APN network. The partners have to go and fill in pretty basic information about the workload, the problem that they're trying to solve, the machine learning services that they're planning to use, and a couple of other pieces of information, like contact information, and then our team reaches out to the partner and helps them with the journey. >> So really,
no other requirements or prerequisites, just be part of the partner program? >> Absolutely. It is meant for partners, and all you have to do is be part of the APN network, and you have to be a public sector partner. >> Public sector partner, makes sense. I mean, how are you going to handle the demand? I'm sure it's going to be a tsunami of interest, because, I mean, why wouldn't someone take advantage of this? >> Yep. It is open to all kinds of partners, but there are some kinds of prerequisites, right? So that's what I'm trying to explain. It is open to all partners, but since it is open to existing partners, we kind of expect the partners to understand the best practices of deploying machine learning workloads, or for that case, any kind of workload, which should be scalable, secure and resilient. So we're not going to touch on that. >> Well, I want to ask you, what's the response been on this launch? Because, you know, to me it just makes common sense. Why wouldn't someone take advantage of it, whether you're a partner with domain expertise or in a vertical? It just makes a lot of sense. You get access to the experts. >> The response has been great. As I said, once you apply, the journey takes six weeks, but we just launched it probably close to two months back, in the second week of September. It is almost two months, and we have more than 15 partners as part of this program, and I can name a couple of partners. For example, we worked with Deloitte, and we will be working on a number of workloads for the end agencies through Deloitte. And there are a number of other partners making significant progress using this rapid adoption assistance, including partners like Attain, Ardent MC and Infinitive. So to answer your question, the response has been great so far. >> So I've got to ask, you know, one of the things I talk to Teresa Carlson about all the time, and Sandy Carter, is, you know, trying to get the accelerant, whether it's FedRAMP and getting certifications. I mean, you guys have done a great job of getting partners on board. Is there any kind of paperwork? What's the process? What should a partner expect to take advantage of this? I'm sure there will be interest beyond just the launch. What's involved? Is it web-based, is it a form? Are there a lot of hoops to jump through? Explain what the process is. >> Very interesting question, and it probably is a very important question from a partner perspective, right? So since it is offered for APN partners, they should have already gone through the APN terms and conditions. They should already have a customer agreement, or advanced partners might have an enterprise agreement. So for utilizing this, for leveraging this rapid adoption assistance program, absolutely, there's no paperwork involved. All they have to do is log into the web form, fill out the basic information, it comes to us, and we take it from there. So there are no hard requirements, as long as you're part of the APN network and as long as you're part of the PSP program. >> Wilfred, great insight, congratulations on a great program. I think it's going to be a smash hit. Who wouldn't want to take advantage of it? I know you guys have a lot of goodness there with Amazon Cloud higher-level services, with AI and machine learning people can bring to the table. 
I know from a cybersecurity standpoint to just education, the range of workloads is going to be phenomenal, obviously military as well. So totally cool, love it, congratulations. My final question is one about the partner. So say I'm a partner, I like this, I jump in, easy to get in. Walk me through what happens. I mean, I sign some paperwork, you check the boxes, I get involved, do I get, like, a rep? Do I do things? What happens to me? Walk me down the path of execution. What's the expectation of what will happen? >> I'll explain that in two parts, John, right? One is from a partner journey perspective, and then from an AWS perspective, what we expect out of partners, right? So from an experience perspective, as long as they fill out the web form with the basic information about the project that they're trying to work on, it comes to us. The workflow is automated, all the information is captured, and the information comes to my team. We get back to the partners within three days, but the journey itself can take from six to eight weeks because, as I mentioned, during the envisioning phase we try to map the problem to the solution. The enablement phase, the second phase, is where it can take anywhere from two to three weeks because, as I mentioned, we focus on the three layers of the machine learning stack. Certain kinds of partners might be interested in SageMaker because they might want to build a custom machine learning model, but some of the partners want to augment their existing applications using ASR or NLP or NLU, so we can focus on the high-level services, or we can train them on SageMaker. So it can take anywhere between two to three weeks or three to four weeks. And finally, the build phase varies from partner to partner and with the complexity of the workload. At that point we're still involved with the partner, but the partner will be taking the lead, and we will be with them to remove any kind of blockers, be they technical or business. >> Well, I just want to say, the word enablement in your title kind of speaks volumes. This isn't about enabling customers. >> It is all about enabling the end customers through partners, so we focus on enabling partners. They could be big system integrators like Lockheed or Raytheon or Deloitte, or it could be nimble, small partners, or it could be a technology partner building an entire PaaS or SaaS service on behalf of the government agencies, or one that could help the government agencies in different verticals. So we just enable the end agencies through the partners, and the focus of this program is all about partner enablement. >> Wilfred, head of AWS AI and machine learning enablement and partnerships, part of public sector with AWS. This is our special coverage. Wilfred, thanks for coming on and being a CUBE Virtual guest. I wish we could be in person, but this year it's remote. This is theCUBE Virtual, I'm John Furrier, host of theCUBE. Thanks for watching. >> Thanks a lot, John.
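The high-level services and SageMaker paths Wilfred describes map to ordinary SDK calls. As a minimal, hypothetical sketch only — not part of the program's materials, and with placeholder text, region and credentials — this is roughly what invoking a managed NLP service looks like from Python with boto3:

```python
# Hypothetical sketch: calling a high-level AWS AI service (Amazon Comprehend) with boto3.
# Assumes AWS credentials and permissions are already configured; region and text are placeholders.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

text = "Citizens reported faster processing times after the agency modernized its intake system."

# Managed NLP calls -- no model training or infrastructure required.
entities = comprehend.detect_entities(Text=text, LanguageCode="en")
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")

print([e["Text"] for e in entities["Entities"]])   # extracted entity strings
print(sentiment["Sentiment"])                      # POSITIVE / NEGATIVE / NEUTRAL / MIXED
```

A SageMaker engagement would instead involve training and hosting a custom model, which is where the hands-on enablement phase described above comes in.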
SUMMARY :
John Furrier talks with Wilfred Justin, head of AWS AI/ML enablement and partnerships for AWS Worldwide Public Sector, about the AI/ML Rapid Adoption Assistance program launched for APN public sector partners in September 2020 in response to the American AI Initiative executive order. Partners move through envisioning, enablement and build phases with AWS experts, with no paperwork beyond existing APN and public sector partner program membership. The full journey runs roughly six to eight weeks, and in its first two months more than 15 partners, including Deloitte, have engaged.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lockheed | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
September 2020 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Teresa Carlson | PERSON | 0.99+ |
Raytheon | ORGANIZATION | 0.99+ |
Justin | PERSON | 0.99+ |
Wilfred Justin | PERSON | 0.99+ |
six weeks | QUANTITY | 0.99+ |
2 | QUANTITY | 0.99+ |
3 | QUANTITY | 0.99+ |
two parts | QUANTITY | 0.99+ |
Matt Wood | PERSON | 0.99+ |
Sandy Carter | PERSON | 0.99+ |
Amazon Partner Network | ORGANIZATION | 0.99+ |
4 weeks | QUANTITY | 0.99+ |
second phase | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
3 weeks | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
6 | QUANTITY | 0.99+ |
Delight | ORGANIZATION | 0.99+ |
more than a decade | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
8 weeks | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
third phase | QUANTITY | 0.98+ |
more than 15 partners | QUANTITY | 0.98+ |
first face | QUANTITY | 0.98+ |
a year | QUANTITY | 0.97+ |
Swami | PERSON | 0.97+ |
Phil | PERSON | 0.97+ |
Second | QUANTITY | 0.96+ |
This year | DATE | 0.96+ |
September 2nd week of September | DATE | 0.95+ |
three layers | QUANTITY | 0.94+ |
three different faces | QUANTITY | 0.94+ |
Indy | ORGANIZATION | 0.94+ |
pandemic | EVENT | 0.93+ |
Two months | DATE | 0.92+ |
We Are | ORGANIZATION | 0.92+ |
almost two months | QUANTITY | 0.91+ |
AWS Worldwide | ORGANIZATION | 0.9+ |
NLP | ORGANIZATION | 0.89+ |
A W | ORGANIZATION | 0.87+ |
one | QUANTITY | 0.86+ |
couple of partners | QUANTITY | 0.85+ |
Number three | QUANTITY | 0.82+ |
AP | ORGANIZATION | 0.82+ |
Mawr | ORGANIZATION | 0.8+ |
AWS Wilfred | ORGANIZATION | 0.79+ |
Invent 2020 Public Sector Day | EVENT | 0.75+ |
public sector partner program | OTHER | 0.71+ |
Both technology | QUANTITY | 0.7+ |
couple | QUANTITY | 0.69+ |
Amazon Cloud | ORGANIZATION | 0.67+ |
S. R | ORGANIZATION | 0.66+ |
Cube | COMMERCIAL_ITEM | 0.65+ |
American Initiative | TITLE | 0.63+ |
Onda | ORGANIZATION | 0.63+ |
Rapid Adoption Assistance Initiative | OTHER | 0.61+ |
American Year Initiative | OTHER | 0.61+ |
Glaucus | ORGANIZATION | 0.59+ |
18 network | QUANTITY | 0.58+ |
aws reinvent 2020 | TITLE | 0.58+ |
SAS | ORGANIZATION | 0.58+ |
infinitive | TITLE | 0.57+ |
reinvent 2020 | TITLE | 0.49+ |
WWPS | TITLE | 0.45+ |
dykan | OTHER | 0.39+ |
The Next-Generation Data Underlying Architecture
>> Paige: Hello, everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled Vertica Next-Generation Architecture. I'm Paige Roberts, Open Source Relations Manager at Vertica, and I'll be your host for this session. Joining me is Vertica Chief Architect, Chuck Bear. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box that's below the slides and click submit. So as you think of it, go ahead and type it in. There'll be a Q&A session at the end of the presentation, where we'll answer as many questions as we're able to during the time. Any questions that we don't get a chance to address, we'll do our best to answer offline. Or alternatively, you can visit the Vertica forums to post your questions there after the session. Our engineering team is planning to join the forums and keep the conversation going, so it's just sort of like the developer's lounge would be at a live conference. It gives you a chance to talk to our engineering team. Also, as a reminder, you can maximize your screen by clicking the double-arrow button in the lower right corner of the slides. And before you ask: yes, this virtual session is being recorded, and it will be available to view on demand this week. We'll send you a notification as soon as it's ready. Okay, now let's get started. Over to you, Chuck. >> Chuck: Thanks for the introduction, Paige. Vertica's vision is to help customers get value from structured data. This vision is simple: it doesn't matter what vertical the customer is in, they're all analytics companies. It doesn't matter what the customer's environment is, as data is generated everywhere. We also can't do this alone; we know that you need other tools and people to build a complete solution. You know our database is key to delivering on the vision, because we need a database that scales. When you start a new database company, you aren't going to win against 30-year-old products on features. But from day one we had something else: an architecture built for analytics performance. This architecture was inspired by the C-store project, combining the best design ideas from academics and industry veterans like Dr. Mike Stonebraker. Our storage is optimized for performance, and we use many computers in parallel. After over 10 years of refinements against various customer workloads, much of the design held up, and serendipitously, the fact that we don't do in-place updates set Vertica up for success in the cloud as well. These days there are other tools that embody some of these design ideas, but we have other strengths that are more important than the storage format: we're the only good analytics database that runs both on premise and in the cloud, giving customers the option to migrate their workloads to the most convenient and economical environment, and a full data management solution, not just a query tool. Unlike some other choices, ours comes with integration with the SQL ecosystem and full professional support. We organize our product roadmap into four key pillars, plus the cross-cutting concerns of open integration and performance and scale. We have big plans to strengthen Vertica, while staying true to our core. 
This presentation is primarily about the separation pillar, and performance and scale. I'll cover our plans for Eon, our data management architecture, smart analytic clusters, our fifth-generation query executer, and our data storage layer. Let's start with how Vertica manages data. One of the central design points for Vertica was shared nothing, a design that didn't utilize dedicated hardware shared-disk technology. This quote here is how Mike put it politely, but around the Vertica office it was "shared disk only over Mike's dead body." And we did get some early field experience with shared disk; customers, well, in fact, will run on anything if you let them. There were misconfigurations that required certified experts, obscure bugs, and so on. Another thing about the shared-nothing design for commodity hardware, though, and this was in the papers, is that all the data management features like fault tolerance, backup and elasticity have to be done in software. And no matter how much you do, procuring, configuring and maintaining the machines with disks is harder. The software configuration process to add more servers may be simple, but capacity planning, racking and stacking is not. The original allure of shared storage returned, but this time the complexity and economics are different. It's cheaper: you can provision storage with a few clicks and only pay for what you need. It expands, contracts, and leaves the maintenance of the storage to a team that is good at it. But there's a key difference: it's an object store, and object stores don't support the APIs and access patterns used by most database software. So another Vertica visionary, Ben, set out to exploit Vertica's storage organization, which turns out to be a natural fit for modern cloud shared storage. Because Vertica data files are written once and not updated, they match the object storage model perfectly. And so today we have Eon. Eon uses shared storage to hold Vertica data, with local disk depots that act as caches, ensuring that we can get the performance that our customers have come to expect. Essentially, Eon and Enterprise behave similarly, but we have the benefit of flexible storage. Today Eon has the features our customers expect; it's been developed and tuned for years. We have successful customers such as Redpharma, and if you'd like to know more about how Eon has helped them succeed in the Amazon cloud, I highly suggest reading their case study, which you can find on vertica.com. Eon provides high availability and flexible scaling; sometimes on-premise customers with local disks get a little jealous of how recovery and sub-clusters work in Eon, though Eon also operates on premise, particularly on Pure Storage. But Enterprise also has strengths, the most obvious being that you don't need any sort of shared storage to run it. So naturally, our vision is to converge the two modes back into a single Vertica: a Vertica that runs any combination of local disks and shared storage, with full flexibility and portability. This is easy to say, but over the next releases, here's what we'll do. First, we realize that the query executer, optimizer and client drivers and so on are already the same; just the transaction handling and data management is different. But there's already more going on: we have peer-to-peer depot operations and other internode transfers. And Enterprise also has a network, so we could just get files from remote nodes over that network, essentially mimicking the behavior and benefits of shared storage with a layer of software. 
The only difference at the end of it will be which storage holds the master copy. In Enterprise, the nodes can't drop the files, because they're the master copy, whereas in Eon they can be evicted because it's just the cache; the master is in shared storage. And in keeping with Vertica's current support for multiple storage locations, we can intermix these approaches at the table level. Getting there is a journey, and we've already taken the first steps. One of the interesting design ideas of the C-store paper is the idea that redundant copies don't have to have the same physical organization: different copies can be optimized for different queries, sorted in different ways. Of course, Mike also said to keep the recovery system simple, because it's hard to debug; whenever the recovery system is being used, it's always in a high-pressure situation. This turns out to be a contradiction, and the latter idea was better. Node-down performance suffers if you don't keep the storage the same, recovery is harder if you have to reorganize data in the process, and even query optimization is more complicated. So over the past couple of releases, we got rid of non-identical buddies. But the storage files can still diverge at the file level, because tuple mover operations aren't synchronized: the same record can end up in different files on different nodes. The next step in our journey is to make sure both copies are identical. This will help with backup and restore as well, because the second copy doesn't need to be backed up, or if it is backed up, it appears identical to the deduplication that is likely present in backup systems. Simultaneously, we're improving the Vertica networking service to support this new access pattern. In conjunction with identical storage files, we will converge to a recovery system where recovering nodes can process queries immediately, by retrieving the data they need over the network from the redundant copies, as they do in Eon today, with even higher performance. The final step, then, is to unify the catalog and transaction model. Related concepts such as segment and shard, local catalog and shard catalog, will be coalesced, as they really represented the same concepts all along, just in different modes. In the catalog, we'll make slight changes to the definition of a projection, which represents the physical storage organization. The new definition simplifies segmentation and introduces valuable granularities of sharding to support evolution over time, and offers a straightforward migration path for both Eon and Enterprise. 
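The depot described above is essentially a read-through cache in front of a shared object store. As a rough conceptual sketch only — this is not Vertica code, and the interfaces here are invented for illustration — the access pattern looks something like this:

```python
# Conceptual sketch of the Eon "depot" idea: a bounded local cache in front of shared storage.
# Not Vertica code; the object_store interface is an assumed stand-in (a dict or an S3-client wrapper).
from collections import OrderedDict

class Depot:
    def __init__(self, object_store, capacity):
        self.object_store = object_store          # master copies live here (shared storage)
        self.capacity = capacity                  # how many files fit on local disk
        self.cache = OrderedDict()                # file name -> bytes, kept in LRU order

    def read(self, file_name):
        if file_name in self.cache:               # depot hit: serve at local-disk speed
            self.cache.move_to_end(file_name)
            return self.cache[file_name]
        data = self.object_store[file_name]       # depot miss: fetch from shared storage
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)        # evict; safe, because the master copy is in shared storage
        self.cache[file_name] = data
        return data
```

The property the talk relies on is in the last comment: eviction is always safe because the depot never holds the only copy, which is exactly the difference between Eon and Enterprise storage described above.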
If your exploratory analytics team needs direct access to the source data, they need a lot of machines and not the same number all the time, and you don't 100% trust the kind of queries and user-defined functions they might be using, sub-clusters are the solution. While there's much more extensive information available in our other presentations, I'd like to point out the highlights of our latest sub-cluster best practices. We suggest having a primary sub-cluster; this is the one that runs all the time if you're loading data around the clock. It should be sized for the ETL workloads, and it also determines the natural shard count. Additional read-oriented secondary sub-clusters can be added for real-time dashboards, reports and analytics. That way, sub-clusters can be added or deprovisioned without disruption to other users. The sub-cluster features of Vertica 9.3 are working well for customers. Yesterday, The Trade Desk presented their use case for Vertica, over 300,000, in 5 sub-clusters running in the cloud; if you missed the presentation, check out the replay. But we have plans beyond sub-clusters: we're extending sub-clusters to real clusters. For the Vertica savvy, this means the clusters won't share the same spread ring network. This will provide further isolation, allowing clusters to control their own independent data sets, while replicating all or part of the data from other clusters using a publish-subscribe mechanism. Synchronizing data between clusters is a feature customers want, for business reasons they understand best themselves. This vision affects our designs for ancillary aspects: how we should assign resource pools, security policies and balance client connections. We will be simplifying our data segmentation strategy, so that when data that originated in different clusters meet, they'll still get fully optimized joins, even if those clusters weren't provisioned with the same number of nodes per shard. Having a broad vision for data management is a key component to Vertica's success, but we also take pride in our execution strategy. When you start a new database from scratch, as we did 15 years ago, you won't compete on features. Our key competitive points were speed and scale of analytics: we set a target of 100x better query performance than traditional databases, with fast loads. Our storage architecture provides a solid foundation on which to build toward these goals. Every query starts with data retrieval, so we keep data sorted, organized by column and compressed, using adaptive encoding, to keep the data retrieval time and IO to the bare minimum theoretically required. We also keep the data close to where it will be processed, and use clusters of machines to increase throughput. We have partition pruning, a robust optimizer, and we evaluate and actively use segmentation as part of the physical database design to keep records close to the other relevant records. So that's the solid foundation, but we also need optimal execution strategies and tactics. One execution strategy we built a long time ago, but which is still a source of pride, is how we process expressions. Databases and other systems with general-purpose expression evaluators write a compound expression into a tree. Here I'm using A plus one times B as an example. During execution, the CPU traverses the tree and computes sub-parts to form the whole. Tree traversal often takes more compute cycles than the actual work to be done, and expression evaluation is a very common operation, so it's something worth optimizing. 
One instinct that engineers have is to use what we call just-in-time, or JIT, compilation, which means generating code for the CPU that does exactly what the specific expression demands. This replaces the tree of boxes with a custom-made box for the query. This approach has complexity and bugs, but it can be made to work. It has other drawbacks, though: it adds a lot to query setup time, especially for short queries, and it pretty much eliminates the ability of mere mortals to develop user-defined functions. If you go back to the problem we're trying to solve, the source of the overhead is the tree traversal. If you increase the batch of records processed in each traversal step, this overhead is amortized until it becomes negligible. It's a perfect match for a columnar storage engine. This also sets the CPU up for efficiency: CPUs are particularly good at following the same small sequence of instructions in a tight loop. In some cases, the CPU may even be able to vectorize, and apply the same processing to multiple records with the same instruction. This approach is easy to implement and debug, user-defined functions are possible, and it's generally aligned with the other complexities of implementing and improving a large system. More importantly, the performance, both in terms of query setup and record throughput, is dramatically improved. You'll hear me say that we look at research and industry for inspiration; in this case, our findings are in line with the academic findings. If you'd like to read papers, I recommend "Everything You Always Wanted to Know About Compiled and Vectorized Queries But Were Afraid to Ask" — and yes, we did have this idea before we read that paper. However, not every decision we made in the Vertica executer stood the test of time as well as the expression evaluator. For example, sorting and grouping aren't as susceptible to vectorization, because sort decisions interrupt the flow. We have used JIT compiling for that for years, since around Vertica 4.0.1, and it provides modest speedups, but we know we can do even better. So now we've embarked on a new design for the execution engine, which I call EE5, because it's our fifth. It's really designed especially for the cloud. Now, I know what you're thinking: you're thinking I just put up a slide with an old engine, a new engine, and a sleek plane headed up into the clouds. But this isn't just marketing hype. Here's what I mean when I say we've learned lessons over the years, and that we're redesigning the executer for the cloud. And of course, you'll see that the new design works well on premises as well; these changes are just more important for the cloud. Starting with the network layer: in the cloud, we can't count on all nodes being connected to the same switch, and multicast doesn't work like it does in a customer data center, so as I mentioned earlier, we're redesigning the network transfer layer for the cloud. Storage in the cloud is different, and I'm not referring here to the storage of persistent data, but to the storage of temporary data used only once during the course of query execution. Our new pattern is designed to take into account the strengths and weaknesses of cloud object storage, where we can't easily do appends. Moving on to memory: many of our access patterns are reasonably effective on bare-metal machines, but aren't the best choice on cloud hypervisors that have overheads, page faults or big gaps. Here again, we found we can improve performance a bit on dedicated hardware, and even more in the cloud. 
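To make the expression-evaluation point above concrete, here's an illustrative sketch — not Vertica's executer — of A plus one times B, read here as (A + 1) * B, evaluated once per row through an expression tree versus once per operator over a whole batch of column values:

```python
# Illustrative sketch, not Vertica code: row-at-a-time tree traversal vs. batched evaluation
# of (A + 1) * B. The batched form amortizes traversal overhead and gives the CPU a tight loop.

def eval_node(node, row):
    kind = node[0]
    if kind == "col":
        return row[node[1]]
    if kind == "const":
        return node[1]
    left = eval_node(node[1], row)
    right = eval_node(node[2], row)
    return left + right if kind == "add" else left * right

expr = ("mul", ("add", ("col", "A"), ("const", 1)), ("col", "B"))

def eval_per_row(rows):
    # The tree is walked for every record, so dispatch overhead dominates the arithmetic.
    return [eval_node(expr, row) for row in rows]

def eval_batched(col_a, col_b):
    # One pass per operator over a whole column block: the traversal cost is paid once per batch.
    tmp = [a + 1 for a in col_a]
    return [t * b for t, b in zip(tmp, col_b)]

rows = [{"A": 1, "B": 10}, {"A": 2, "B": 20}]
assert eval_per_row(rows) == eval_batched([1, 2], [10, 20]) == [20, 60]
```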
Finally, and this is true in all environments, core counts have gone up, and not all of our algorithms take full advantage. There's a lot of ground to cover here, but I think sorting is the perfect example to illustrate these points. I mentioned that we use JIT in sorting; we're getting rid of JIT in favor of a data format that can be sorted efficiently, independent of what the data types are. We've drawn on the best, most modern technology from academia and industry, we've done our own analysis and testing, and you know what we chose? We chose parallel merge sort. Anyone want to take a guess when merge sort was invented? It was invented in 1948, or at least documented that way, in the computing context. If you've heard me talk before, you know that I'm fascinated by how all the things I worked with as an engineer were invented before I was born. And in Vertica, we don't use the newest technologies, we use the best ones. What is novel about Vertica is the way we've combined the best ideas together into a cohesive package. So all kidding about the 1940s aside, our redesign is actually state of the art. How do we know the sort routine is state of the art? It turns out there's a pretty credible benchmark at the appropriately named, historic sortbenchmark.org. Anyone with resources looking for fame for their product or academic paper can try to set the record. The record was last set in 2016 with Tencent Sort: 100 terabytes in 99 seconds. Setting the record is hard; you have to come up with hundreds of machines on a dedicated high-speed switching fabric. There's a lot to a distributed sort, but they all have core sorting algorithms. The authors of the paper conveniently broke out the time spent in their sort: 67 out of 99 seconds went to node-local sorting. If we break this out, divided by the two CPUs in each of 512 nodes, we find that each CPU sorted almost a gig and a half per second. This is for what's called an indy sort, which, like an Indy race car, isn't general purpose: it only handles fixed 100-byte records with a 10-byte key. If the record length can vary, then it's called a daytona sort; the Tencent daytona sort is a little slower, at just over a gigabyte per second per CPU. Now, for Vertica, we have wide variability in record sizes and more interesting data types, but still, there's no harm in setting ourselves a benchmark comparable to the world record. On my 2017-era AMD desktop CPU, the Vertica EE5 sort sorts at about two and a half gigabytes per second. Obviously, this test isn't apples to apples, because they used their own OpenPOWER chip, but the number of DRAM channels is the same, so it's close enough to tell us we've hit on the right approach. And it performs this way on premise and in the cloud, and we can adapt it to cloud temp space. So what's our roadmap for integrating EE5 into the product? I compare replacing the query executer in the database to replacing the crankshaft and other parts of the engine of a car while it's being driven. We've actually done it before, between Vertica three and a half and five, and then we never really stopped changing it; now we'll do it again. The first part we're replacing is the algorithm called storage merge, which combines sorted data from disk. The first pieces of this are in Vertica in the upcoming 10.0 patch, which will use EE5 for resegmented storage merge, and then we'll convert sorting and grouping later on. 
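Merge sort itself takes only a few lines to state; the 1948-era core is the easy part, and the engineering effort goes into the parallel, cache-aware, spill-tolerant machinery around it. A textbook sketch, for reference only:

```python
# Textbook merge sort, shown only to anchor the discussion; the hard part in a database
# is the parallel, memory- and spill-aware implementation wrapped around this core idea.
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted runs
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

assert merge_sort([3, 1, 2, 5, 4]) == [1, 2, 3, 4, 5]
```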
Here are the performance results so far. In cases where the Vertica executer is doing well today, simple environments with simple data patterns, such as this simple analytic query, there's a lot of speedup. When we ship the resegmentation code, which didn't quite make the freeze, we expect another bump, and longer term, when we move grouping into the storage merge operations, we'll get to where we think we ought to be, given the theoretical minimum work the CPUs need to do. Now, if we look at a case where the current executer isn't doing as well, we see there's a much stronger benefit to the code shipping in Vertica 10. In fact, I turned the bar chart sideways to try to help you see the difference better. This case will also benefit from the improvements coming in 10.x point releases and beyond. There's a lot more happening to the Vertica query executer; that was just a taste. But now I'd like to switch to the roadmap for our data storage layer, and I'll start with a story about how our storage access layer evolved. If you go back to the academic ideas in the C-store paper that persuaded investors to fund Vertica, the read-optimized store was the part that had substantiation in the form of performance data. Much of the paper was speculative, but we tried to follow it anyway. That paper talked about the WS and the RS, the write store and the read store, how they work together for transaction processing, and how there was a supernova. In all honesty, Vertica engineers couldn't figure out from the paper what to do next; in case you want to try, we asked the authors what they would like, and we never got enough clarification to build it that way. But here's what we built instead. We built the ROS, the read-optimized store, introduced in an early major revision. It's sorted, ordered, columnar and compressed, and it follows the table partitioning. That worked even better than the RS as described in the paper. We also built the WOS, the write-optimized store; we built four versions of this over the years, actually, but this was the best one. It's not a set of interrelated B-trees, it's just an append-only, insertion-order, in-memory area: no compression, no B-trees, no partitioning. There is, however, a tuple mover, which does what we call moveout: move the data from WOS to ROS, sorting and compressing it. Let's take a moment to compare how they behave. When you load data directly to the ROS, there's a data parsing operation, then we finish the sorting, and then compress and write out the columnar data files to stable storage. The next query that comes through executes against the ROS, and it runs as it should, because the ROS is read-optimized. Let's repeat the exercise for the WOS: the load operation returns before the sorting and compressing, and before the data is written to persistent storage. Now it's possible for a query to come along, and the query could be responsible for sorting the WOS data in addition to its other processing. The effect on the query isn't predictable until the tuple mover comes along and writes the data to the ROS. Over the years, we've done a lot of comparisons between ROS and WOS. ROS has always been better for sustained load throughput: it achieves much higher records per second without pushing back against the client, and has since Vertica 4, when we developed the first usable mergeout algorithm. ROS has always been better for predictable query performance, and the ROS has never had the same management complexity and limitations as the WOS: you don't have to pick a memory size and figure out which transactions get to use the pool. 
The non-persistent nature of the WOS always caused headaches when there were unexpected cluster shutdowns. We also looked at field usage data, and we found that few customers were using the WOS a lot, especially among those that studied the issue carefully. So we set out on a mission to improve the ROS to the point where it was always better than both the WOS and the ROS of the past. And now it's true: the ROS is better than the WOS, and than the ROS of a couple of years ago. We implemented storage bundling, better catalog object storage and better tuple mover mergeouts. And now, after extensive QA and customer testing, we've succeeded, and in Vertica 10, we've removed the WOS. Let's talk for a moment about simplicity. One of the best things Mike Stonebraker said is "no knobs." Anyone want to guess how many knobs we got rid of when we took the WOS out of the product? 22. There were five knobs to control whether data went to the WOS or the ROS, six controlling the WOS itself, six more to set policies for the tuple mover moveout, and so on. In my honest opinion, it still wasn't enough control to achieve success in a multi-tenant environment. The big reason to get rid of the WOS is simplicity: make the lives of DBAs and users better. We have a long way to go, but we're doing it. On my desk, I keep a jar with a knob in it for each knob in Vertica. When developers add a knob to the product, they have to add a knob to the jar; when they remove a knob, they get to choose one to take out. We have a lot of work to do, but I'm thrilled to report that in 15 years, 10 is the first release where the number of knobs ticked downward. Getting back to the WOS, I've saved the most important reason to get rid of it for last: we're getting rid of it so we can deliver our vision of the future to our customers. Remember how I said that in Eon and sub-clusters we got all these benefits from shared storage? Guess what can't live in shared storage: the WOS. Remember how a big part of the future was keeping the copies identical to the primary copy? Independent actions of the WOS were at the root of the divergence between copies of the data. You have to admit it when you're wrong. The WOS was in the original design and held up as a selling point for a time, but we held onto the idea of a separate ROS and WOS for too long. In Vertica 10, we can finally bid it good riddance. I've covered a lot of ground, so let's put all the pieces together. I've talked a lot about our vision and how we're achieving it, but we also still pay attention to tactical details. We've been fine-tuning our memory management model to enhance performance. That involves revisiting tens of thousands of lines of code, much like painting the inside of a large building with small paintbrushes. We're getting results, as shown in the chart: in Vertica 9, concurrent monitoring queries use memory from the global catalog pool, and in Vertica 10, they don't. This is only one example of an important detail we're improving. We've also reworked the monitoring tables and the network messages behind them into two parts, and we've increased the data we're collecting and analyzing in our quality assurance processes. We're improving on everything. As the story goes, I still have my grandfather's axe; of course, my father had to replace the handle, and I had to replace the head. Along the same lines, we still have Mike Stonebraker's Vertica, but we did replace the query optimizer twice, and the database designer and storage layer four times each. 
The query executer is on a new design as well. I charted out how our code has changed over the years, and I found that we don't have much from a long time ago. I did some digging, and you know what we have left from 2007? We have the original curly braces, and a little bit of the percent code for handling dates and times. To deliver on our mission to help customers get value from their structured data, with high performance at scale, and in diverse deployment environments, we have a sound architecture roadmap, the best execution strategy, and solid tactics. On the architectural front, we're converging Eon and Enterprise, and we're extending smart analytic clusters. In query processing, we're redesigning the execution engine for the cloud, as I've told you. There's a lot more than just the fast engine, though. If you want to learn about our new support for complex data types, improvements to the query optimizer statistics, or extensions to live aggregate projections and flattened tables, you should check out some of the other engineering talks at the Big Data Conference. We continue to stay on top of the details, from low-level CPU and memory, to monitoring and management, developing tighter feedback cycles between development, QA and customers. And don't forget to check out the rest of the pillars of our roadmap. We have new, easier ways to get started with Vertica in the cloud, and engineers have been hard at work on machine learning and security. It's easier than ever to use Vertica with third-party products, as the variety of tools integrations continues to increase. Finally, the most important thing we can do is to help people get value from structured data, and to help people learn more about Vertica. So hopefully I left plenty of time for Q&A at the end of this presentation. I hope to hear your questions soon.
SUMMARY :
Chuck Bear, Vertica's chief architect, walks through the next-generation Vertica architecture: converging Eon and Enterprise modes so any mix of local and shared storage works in a single Vertica, extending sub-clusters for workload isolation, a redesigned EE5 query executer built for the cloud with batched expression evaluation and a state-of-the-art parallel merge sort, and the removal of the WOS in Vertica 10 in favor of a much-improved ROS.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mike | PERSON | 0.99+ |
Mike Stonebreaker | PERSON | 0.99+ |
2007 | DATE | 0.99+ |
Chuck Bear | PERSON | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
2016 | DATE | 0.99+ |
Paige Roberts | PERSON | 0.99+ |
Chuck | PERSON | 0.99+ |
second copy | QUANTITY | 0.99+ |
99 seconds | QUANTITY | 0.99+ |
67 | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
1948 | DATE | 0.99+ |
Ben | PERSON | 0.99+ |
two modes | QUANTITY | 0.99+ |
Redpharma | ORGANIZATION | 0.99+ |
first time | QUANTITY | 0.99+ |
first steps | QUANTITY | 0.99+ |
Paige | PERSON | 0.99+ |
two parts | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
five knobs | QUANTITY | 0.99+ |
100 terabytes | QUANTITY | 0.99+ |
both copies | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
each knob | QUANTITY | 0.99+ |
WS | ORGANIZATION | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Eon | ORGANIZATION | 0.99+ |
1940s | DATE | 0.99+ |
today | DATE | 0.99+ |
One point | QUANTITY | 0.99+ |
first part | QUANTITY | 0.99+ |
fifth level | QUANTITY | 0.99+ |
each | QUANTITY | 0.99+ |
yesterday | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
Six | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
512 nodes | QUANTITY | 0.98+ |
ROS | TITLE | 0.98+ |
over 10 years | QUANTITY | 0.98+ |
Yesterday | DATE | 0.98+ |
15 years ago | DATE | 0.98+ |
twice | QUANTITY | 0.98+ |
sortbenchmark.org | OTHER | 0.98+ |
first release | QUANTITY | 0.98+ |
two CPUs | QUANTITY | 0.97+ |
Vertica 10 | TITLE | 0.97+ |
100 x | QUANTITY | 0.97+ |
WOS | TITLE | 0.97+ |
vertica.com | OTHER | 0.97+ |
10 byte | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
5 sub clusters | QUANTITY | 0.97+ |
two | QUANTITY | 0.97+ |
one example | QUANTITY | 0.97+ |
over 300,000 | QUANTITY | 0.96+ |
Dr. | PERSON | 0.96+ |
One | QUANTITY | 0.96+ |
tens of thousands of satellite | QUANTITY | 0.96+ |
EE5 | COMMERCIAL_ITEM | 0.96+ |
fifth generation | QUANTITY | 0.96+ |
Kyle Ruddy, VMware | VTUG Winter Warmer 2018
>> Announcer: From Gillette Stadium in Foxborough, Massachusetts, it's theCube! Covering VTUG Winter Warmer 2018. Presented by SiliconeANGLE. (energetic music) >> Hi, I'm Stu Miniman and this is theCube's coverage of the VTUG Winter Warmer 2018, the 12th year of this user group, fifth year we've had theCube here. I happen to have on the program a first-time guest, Kyle Ruddy, who's a Senior Technical Marketing Engineer with VMware, knows a thing or two about virtualization. >> Maybe a couple of things. >> Stu: Thanks for joining us, Kyle. >> Oh, thank you for having me. I'm happy to be here. >> All right, so Kyle, I know you were sitting at home in Florida and saying, "What I'd like to do is come up in the 20s. "It kind of feels like single digits." Why did you leave the warmth of the south to come up here to the frigid New England? >> (chuckles) Yeah, well, it was a great opportunity. I've never been to one of the VTUGs before, so they gave me a chance to talk about something that I'm extremely passionate about which is API usage. Once I got the invite, no-brainer, made the trip. >> Awesome! So definitely, Jonathan Frappier who we asked to be on the program but he said Kyle's going to be way better. (Kyle chuckles) Speak better, you got the better beard. (Kyle laughs) I think we're just going to give Frappier a bunch of grief since he didn't agree to come on. Give us first a little bit about your background, how long you been VMware, what kind of roles have you had there? >> Yeah, absolutely! So I've probably been in IT for over 15 years, a long-time customer. I did that for about 10 to 12 years of the IT span doing everything from help desk working my way up to being on the engineer side. I really fell in love with automation during that time period and then made the jump to the vendor side. I've been at VMware for about two years now where I focus on creating content and being at events like these to talk about our automation strategy for vSphere. >> Before you joined VMware, were you a vExpert? Have you presented at VMUGs? >> Yes, yes, so I've been a vExpert. I think I'm going on seven years now. I've helped run the Indianapolis VMUG for five to six years. I've presented VMUGs all over the country. >> Yeah, one of the things we always emphasize, especially at groups like this, is get involved, participate, it can do great things for your career. >> Yes, absolutely! I certainly wouldn't be here without that kind of input and guidance. >> Indy VMUG's a great one, a real large one here, even though I hear this one here has tended to be a little bit bigger, but a good rivalry going on there. I want to talk about the keynote you talked about, automation and APIs. It's not kind of the virtualization 101, so what excites you so much about it? And let's get in a little bit, talk about what you discussed there. >> Yeah, absolutely! We were talking about using Ansible with the vSphere 6.5 RESTful APIs. That's something that's new, brand new, to vSphere 6.5, and really just being able to, when those were released, allow our users and our customers to make use of those APIs in however way that they wanted to. If you look back at some of our prior APIs and our SDKs, you were a little more constrained. They were SOAP-based so there was a lot of overhead that came with those. There was a large learning curve that also came along with those. So by switching to REST, it's a whole lot more user friendly. You can use it with tools like Ansible which that was just something that Jon knew quite well. 
I thought that was a perfect opportunity for me to finally do a presentation with Jon. It went quite well. I think the audience learned quite a bit. We even kind of relayed to the audience that this isn't something that's just for vSphere. Ansible is something you can use with anything. >> For somebody out there watching this, how do they get started? What's kind of some of the learning curve that they need to do? What skillsets are they going to build on versus what they need to learn for new? >> Sure. A lot of the ways to really get started with these things, I've created a ton of blog posts that are out there on the VMware {code} blog. The first one is just getting started with the RESTful APIs that we've provided. There's a program that's called Postman, we give a couple of collections that you can automatically import and start using that. Ansible has some really good documentation on getting started with Ansible and whichever environment you're choosing to work or use it with. So they've got a Getting Started with vSphere, they've got a Getting Started with different operating systems as well. Those are really good tools to get started and get that integrated into your normal working environment. Obviously, we're building on automation here. We're building on... At least when I was in admin, I got involved in automation because there was a way for me to automate and get rid of those tasks, those menial tasks that I didn't really enjoy doing. So I could automate that, push that off, and get back to something that I cared about that I enjoyed. >> Yeah, great point there 'cause, yeah, some people, they're a little bit nervous, "Oh, wait, are these tools going to take away my job?" And to repeat what you were just saying, "No, no." There's the stuff that you don't really love doing and that you probably have to do a bunch. Those are the things that are probably, maybe the easiest to be able to move to the automation. How much do people look at this and be like, "Wait, no, once I start automating it, "then I kind of need to care, and feed, and maintain that, "versus just buying something off the shelf "or using some service that I can do." Any feedback on that? >> Well, it's more of a... It's a passion thing. If it's something that you're really get ingrained in, you really enjoy, then you're going to want to care and feed that because it's going to grow. It's going to expand into other areas of your environment. It's going to expand into other technologies that are within your environment. So of course, you can buy something. You could get somebody from... There are professional services organizations involved, so you don't have to do the menial tasks of updating that. Say if you go from one version to a next version, you don't have to deal with that. But if you're passionate about it, you enjoy doing that, and that's where I was. >> The other thing I picked up on is you said some of these things are new only in 6.5. One of the challenges we've always had out there is, "Oh, wait, I need to upgrade. "When can I do it? "What challenges I'm going to have?" What's the upgrade experience like now and anything else that you'd want to point out that said, "Hey, it's time to plan for that upgrade "and here are some of the things that are going to help you"? >> We actually have an End of Availability and End of Support coming up for vSphere 5.5. That's going to be coming up in here later this year in September-October timeframe. 
So you're not going to be able to open up a support request for that. This is a perfect time to start planning that upgrade to get up to at least 6.0, if not 6.5. And the other thing to keep in mind is that we've announced deprecation for the Windows version of vSphere. Moving forward past our next numbered release, that's going to be all vCenter Server Appliance from that point forward. Now we also have a really great tool that's called the VCSA Migration tool that you can use to help you migrate from Windows to the Appliance. Super simple, very straightforward, gives you a migration assistant to even point out some of those places where you might miss if you did it on your own. So that's a really great tool and really helps to remove that pain out of that process. >> Yeah, it's good, you've got a mix of a little bit of the stick, you got to get off! (Kyle chuckles) I know a lot of people still running 5.5 out there as well as there's the carrot out there. All the good stuff that's going to get you going. All right, hey, Kyle, last thing I want to ask is 2018. Boy, there's a lot of change going on in the industry. One, how do you keep up with everything, and two, what's exciting you about what's happening in the industry right now? >> As far as what excites me right now, Python. That's been something that's been coming up a lot more with the folks that I'm talking to. Even today, just at lunch, I was talking to somebody and they were bringing up Python. I'm like, "Wow!" This is something that keeps coming up more and more often. I'm using a lot more of my time, even my personal time, to start looking at that. And so when you start hearing the passion of people who are using some of these new technologies, that's when I start getting interested because I'm like, "Hey, if you're that interested, "and you're that passionate about it, "I should be too." So that's kind of what drives me to keep learning and to keep up with all of the latest and greatest things that are out there. Plus when you have events like this, you can go talk to some of the sponsors. You can talk and see what they're doing, how to make use of their product, and some of their automation frameworks, and with what programming languages. That kind of comes back to Python on that one because a lot more companies are releasing their automation tools for use with Python. >> Yeah, and you answered the second part of my question probably without even thinking about it. The passion, the excitement, talking to your peers, coming to events like this. All right, Kyle Ruddy, really appreciate you joining us here. We'll be back with more coverage here from the VTUG Winter Warmer 2018. I'm Stu Miniman. You're watching theCube. (energetic music)
Robson Grieve, New Relic Inc. | CUBE Conversations Jan 2018
(fast-paced instrumental music) >> Hello everyone, welcome to the special CUBE conversation, here at theCUBE Studio in Palo Alto. I'm John Furrier, Co-founder of SiliconANGLE Media and host of theCUBE for our special CMO signal series we're launching. Really talkin' to the top thought leaders in marketing, in the industry, really pushing the envelope on a lot of experimentation. And Robson Grieve, Chief Marketing Officer of New Relic, is here. Welcome to this CUBE conversation. >> Thank you, excited to be with you. >> So, New Relic is a very progressive company. You have a founder who's very dynamic, writes code, takes sabbaticals, creates product, he's a musician, is prolific. That kind of sets the tone for your company, and you guys are also state of the art DevOps company. >> Robson: Yes. >> So, pressure's on to be a progressive marketer, you guys are doing that. >> Yeah, I think some of the great things about that DevOps culture are process wise it allows us to experiment with different ways of working. And we've obviously talked a little bit about Agile and the way a different way of thinking about how you actually do the work can change the way you output the kind of things you're willing to make, the way the teams work together. And the degree to which you can integrate marketing and sales, really, around shorter time frames, faster cycle times. And so, we have a great culture around that. We also have a really great culture around experimentation. I think that's one of the biggest things that Lou talks a lot about is, let's try things, let's look for experiments, let's see where we can find something unexpected that could be a big success, and let's not be afraid for something to go wrong. If you can do that, then you have way higher odds of finding the Geo TenX. >> And you guys are also in the analytics, you also look at the signal, so you're very data driven, and I'll give you a prop for that, give you a plug. (Robson laughing) New Relic, a very data driven company. But today we're seeing a Seed Changer, a revolution in the tech industry. Seeing signals like cryptocurrency, blockchain, everyone's goin' crazy for this. They see disruption in that. You've got AI and a bunch of other things, so, and you got the Cloud computing revolution, so all of this is causing a lot of horizontally, scalable change, which is breaking down the silos of existing systems. >> Yeah. >> But, you can't just throw systems away. You have systems in marketing. So, how are you dealing with that dynamic, because we're seeing people going, hey, I just can't throw away my systems, but I got to really be innovative and agile to the real-time nature of the internet now, while having all those analytics available. >> Yeah. >> How do you tackle that, that issue? >> Yeah, there's a couple ways to think about analytics. Number one is, what do you need to know in real-time to make sure things are working and that your systems are up and running and operating effectively? And that runs through everything from upfront in web experiences and trial experiences, that kind of thing. Through to how our leads and customers progressing through a funnel, as they get passed around the various parts of a company. But then the second approach we take to data is, after all that's happened, how can we look backwards on it and what patterns emerge when you look at it over the scale of longer period of time. And so, that's the approach today. 
You're right, you can't just everything and throw it out and start over again, 'cause some startups stop by with a really cool idea. But, you have to be aggressive about experimentation. I think that's the, back to that big idea that we talked about experimentation. We are trying out a lot of different things all the time. Looking for things that could be really successful. Of course, Intercom is one that we started to experiment with a little bit for in product communications and we've expanded over time as we found it more and more useful. And, so that's not, we haven't taken and just ripped something else out of it, made some giant bet on something brand new. We've tried it, we've gotten to know it, and then we found ways to apply that. We're doing that with a number of different technologies right now. >> Yeah, you're in a very powerful position, you're Chief Marketing Officer, which has to look over a lot of things now, and certainly with IT and Cloud. You're essentially in the middle of the fabric of the organization. Plus, people are knockin' on your door to sell you stuff. >> Yeah. (laughing) >> So, what is-- >> That's happened. (laughing) >> It happens all the time, he's got the big budget. >> What are they saying to you? Who's knockin' on your door, right now? Who's peppering you. Who's tryin' to get on your calendar? Who's bombarding you? Where are you saying, Hey, I'm done with that, or Hey, I'm lookin' for more of that. How do you deal with that tension, 'cause I'm sure it must be heavy. >> Yeah, I think there is definitely a lot of optionality in the market, for sure. I think there's a new wave of martech vendors. Many of whom are sitting right in between sales ops and marketing ops. That's a layer we're really interested in. Systems that can help us better understand the behavior of sale's reps, and how they're using things that we're making, and then systems that you can better understand, indications of prospect intent. >> So, funnel and pipeline, or those kinds of things? >> Yeah, we think about it more from the context of authentic engagement. And so, we don't want to apply too much of a-- >> Structure to it. >> Structure, a sales structure to it. We want to try to follow the customer's intent through the process, 'cause the best prospect is someone who is authentically engaged in trying to find a solution to their problem. And so, if we can avail ourselves to people in a thoughtful, and creative, and authentic way, when they need us, when they're trying to solve that problem, then I think that they can become much more successful prospects. >> I love your angle on agile marketing. I think that's table steaks, not that you got to behave that way, and I'd love to get your thoughts, I'll get your thoughts later on the management style and how you make that happen. But, you mentioned engagement, this is now the new Holy Grail. There's a lot of data behind it, and it could be hidden data, it could be data decentralized all over the place. This is the hottest topic. How do you view engagement as a CMO, and the impact to the organization? What are you lookin' for, what's the key premise for your thesis of getting engagement? >> It's really the number one, two, and three topic we're talking about right now, and we think about it on the content side. How do we get ourselves really producing a constant stream of content that has value to people? That either helps them solve a problem right now, or helps them think about an architectural issue in a different way. 
We're trying to invest more and more technical resources in people who can produce things that are relevant to all the different kinds of users that we have. DevOps people, SREs, our traditional developer customers. We want to go deep and be super relevant at a content level for them. But then once they start to spend time with us, we want to then have a progressive way to pull them deeper and deeper into our community. And so, the things that we can do, something's in digital for that, but then often there's a pop off line, and we do a lot of workshops, a lot of education. >> Face-to-face? >> Face-to-face, where we're in communities, we look at a map at the start of the year and say, where do we have big user communities, and then we drop events into those places where we take our educators and our product experts and get customers to share with each other. And that becomes a really great platform to put them together and have them help each other, as well as learn more about what our product does. >> So, it sounds like you're blending digital with face-to-face? >> Robson: Yeah, absolutely. >> That's a key part of your strategy? >> Key part is to make sure that we're getting time and attention from the people who are making decisions, and what technologies they're going to buy, but also that we're really investing time in the people who are using it in their everyday lives to do their job better. That's a really-- >> Give some examples of outcomes that you've seen successful from that force. That's a really unique, well unique is pretty obvious if you think about it, but some people think digital is the Holy Grail, let's go digital, let's lower cost. But, face-to-face can be expensive, but you're blending it. What's the formula and what are some of the successes that you've seen as a result. >> Yeah, we tend to try to create events that are good for a very specific audience. So, if you think about a targeting formula that you would use in digital that will make digital really efficient, that same idea works really well for an event. So, if you got a user community that's really good at doing one thing with your product and you feel like if they knew a few more things that they could get better. Then we help them really advance to the next level, and so we run certification programs, where we'll pull together a group of confident users and help them get to the next level. And things like that allow us to make a really targeted event that allows us to reach out to a group and move them to a higher level of competency. To have competency focus is a big deal. Can we help you get better at your job? But then communities, is the other big one. Can we help you connect with people who are doing the same things? Solving the same kinds of problems and are interested in the same topics as you are. >> It sounds like the discovery path of the user, the journey, your potential. >> Yeah, it's important to us for sure. >> And content sounds like it's important too. >> It helps with your engagement. How you dealing with the content? Is that all on your properties? How about off property measurement? How do you get engagement for off property? >> Yeah, we're experimenting a lot in that area, of off property. I think we've had tons of success inside our own website and our blogs, and those kinds of-- >> You guys do pop out a lot of content, so it's content rich. 
>> Yes, we definitely have a lot, we hopefully, our attitude is, we want to turn our company inside out, so we want to take all of our experts-- >> Explain that, that's important topic, so, you guys are opening up what? >> We have got customer support people, we have technical sales, and technical support engineers, we've got marketing people who are thought leaders in Cloud and other architecture topics. We really want to take all the expertise that they've got and we want to share it with our community. >> John: How do you do that, through forums, through their Twitter handles? >> Through all of the above, really. Through their Twitter handles, through content that they write and produce through videos, through a podcast series that we run. We're really trying to expand as much as possible, but then inside our user help community, anytime somebody solves a problem for one customer, we want to add it to that-- >> Sounds like open-source, software. (laughing) >> From a knowledge perspective, that's really an important idea for us. >> Yeah, that's awesome. You worry about the risk. I like the idea of just opening it up. You're creating building blocks of knowledge, like code. It's almost like an open-source software, but no, it's open knowledge. >> We think if we can help people get really successful at the work they're trying to do, that it's going to do great things for us as a brand. >> What's the rules of the road, because obviously you might have some hay makers out there. Some employee goes rogue, or you guys just trusting everyone, just go out and just do it. >> Well, it's constant effort to distribute publishing rights and allow people to take more and more ownership of it, and to maintain some editorial controls, because I think quality is a big thing. It's probably a bigger concern for us then somebody going rogue. At some level, if that happens to you, you can't stop it. >> So, is this a new initiative or is it progression? >> It's been ongoing for awhile. It's progression of an effort we started probably 18 months ago, and it's a wonderful way for an engineering team, and a product management team, and a marketing team to get together around a really unified mission as well. So our content project is just one of those things that I think really pulls us together inside the company in a really fun way as well. >> It's interesting, you seeing more and more what social peers want to talk to each other and not the marketing guy, and say, Hey, get the Kool-Aid, I like the product, I want to talk to someone to solve my problem. >> Want to have a real conversation about it, and I think that's our job, is to not think of it as marketing, but to think of it as just facilitating a real conversation about how our product works for somebody. >> I'd love to talk about leadership as the Chief Marketer for New Relic in the culture that you're in, which is very cool to be in on the front-end, in the front lines doing cool things. What do you do? How do you manage yourself, how do you manage your time? What do you do, how do you organize the troops, how do you motivate them? What's your management style for this marketing in the modern era? >> I think, number one, we're trying to create an organization that is full of opportunities for people, so it's something that we've done. I've been there for about two and a half years, and we've really looked hard for people who have tons of potential and finding great things to work on. On new projects, and then let them try out ideas that they've got. 
So, if they can own an idea, give it a shot, and even if it doesn't work, they'll learn a bunch from the process of trying. >> What are the craziest ideas you've heard from some of your staff? (laughing) >> Oh boy, you know a lot of them involve video. There's always a great idea for a video that's risky. And we've made-- >> So the Burger King one with Net Neutrality going around the web is the funniest video I've seen all week. >> Robson: Yeah, yeah. >> Could be risky, could be also a double-edged sword, right? >> Yeah, video is one of those places where you have to check yourself a little bit, 'cause it could be a great idea, and so sometimes you have to actually make it and look at it, and say, would we publish this or not? And, yeah, so that's definitely the place to be. >> So common sense is kind of like your. >> Yeah, you start with common sense, for sure. And, I think we want to be a part of it being culturally responsible in Silicon Valley right now, is really making sure that we're attentive to making sure that we're putting in the right kind of workplace environment for people. And so, our content and the way we go to market has to reflect that as well, so there's a bunch of filters that you put on it, but you have to take risks and try to make things, and if they work great, and if they don't then the cost of that is less than. The cost of failure is so low in some of these things, so you just have to try. >> Well, you know, we're into video here at theCUBE. I have to ask you, do you see video more and more in the marketing mix and if so, how does that compare to old methods? We've seen the media business change and journalism, certainly on the analyst community. Who reads white papers? Maybe the do, maybe they don't. Or, how do they engage? What content formally do you see as state of the art engagement? Is is video, is it a mix, how do you view that? >> It's a mix, really. I think video's really powerful. And it can be great to tree topics and short form in a really powerful way. I think we can stretch it out a little bit in terms of how to and teaching and education also. But, there are times when other things like a white paper are still relevant. >> Yeah, they got to do their homework and get ready for the big test. >> Yes. >> How to install. (laughing) >> Exactly, yeah. >> Okay, big surprises for you in the industry, if you could look back and talk to yourself a few years ago and say, Wow, I didn't know that was going to happen, or I kind of knew this was going to be a trend we would be on. Where is the tailwinds, where's the headwinds in the industry as a marketer to be innovated, to be on the cutting edge, to deliver the value you need to do for your customers and for the company? >> Yeah, I think there's a bunch of great tailwinds organizationally and in the approach to work. And you talked about Agile. I think it's been a great thing to see people jump in and try to work in a different way. That's created tons of scale for a department like ours, where we're tryin' to go to more countries, and more places constantly. Having a better way to work, where we waste less effort, where we find problems and fix things way faster, has given us the chance to build leverage. And I think that's just that integration of engineering, attitudes, with marketing processes has been a, is an awesome thing. 
Everybody in our marketing department, or at least a lot of people have read the DevOps handbook, and we've got a lot of readers, so the devotes of that thought process that don't suit an engineering jobs. >> DevOps, Ethos, I think is going to be looked at as one of those things, that's a moment in history that has changed so much. I was just at Sundance Film Festival, and DevOps, Ethos is going to filmmaking. >> Robson: Yeah. >> And artistry with a craft and how that waterfall for the Elite Studios is opening up an amateur market in the Indy, so their Agile filmmakers and artists now doing cool stuff. So, it's going to happen. And of course, we love the infrastructures code. We'll talk about that all day long We love DevOps. (Robson laughing) So I got to ask you the marketing question. It will be a theme of my program of the CMO is, if I say marketing is code, infrastructure is code, enabled a lot of automation, some abstracted a way horizontally scaled, and new opportunities, created a lot of leverage, a lot of value, infrastructures code, created the Cloud. Is there a marketing as code Ethos, and what would that look like? If I would say, apply DevOps to marketing. If you could look at that, and you could say, magic wand. Give me some DevOps marketing, marketing as code. What would you have automated in a way that would be available to you? What would the APIs look like? What's your vision for that? >> What about the APIs, that's a good question. >> John: I don't think they exist yet, but we're fantasizing about it. (laughing) >> Yeah, I think the things that tend to slow marketing departments down really are old school, things like approvals. And how hard it is to get humans to agree on things that should be really easy. So, if the first thing you-- >> Provisioning an order. (laughing) >> The first thing you could do is just automate that system of agreeing that something's ready to go and send it out that I think you'd create so much efficiency in side marketing departments all over the world. Now that involves having a really great, and API's a great thought in that, because the expectations have to get matched up of what's being communicated on both sides, so we can have a channel on which to agree on something. That to me is-- >> Analytics are probably huge too. You want to have instant analytics. I don't care which database it came from. >> Yes, exactly. And that's the sense of DevOps and can use. But then you got some feedback on, did it work, was it the right thing to do, should we do more of it, should we fix it in some specific way? Yeah, I think that's-- >> I think that's an interesting angle, and the face-to-face thing that I find really interesting, because what you're doing is creating that face-to-face resource, that value is so intimate, and it's the best engagement data you can get is face-to-face. >> Yeah, I think it also allows us to build relationships to the point where we are getting invited into slack channels to help companies in real-time sometimes. I think there's a real-- >> So humanizing the company and the employees is critical. >> Yeah. >> You can't just be digital. >> Yes, it's a big deal. >> Awesome. Robson, thank so much for coming on theCUBE. The special CMO series. Is there a DevOps, can we automate away, what's going to automate, where's the value going to be in marketing? Super exciting, again, martech. Some are sayin' it's changing rapidly with the Cloud, AI, and all these awesome new technologies. 
What's going to change, that's what we're going to be exploring here on the CMO CUBE conversation. I'm John Furrier, thanks for watching. (upbeat instrumental music)
Leslie Berlin, Stanford University | CUBE Conversation Nov 2017
(hopeful futuristic music) >> Hey welcome back everybody, Jeff Frick here with theCUBE. We are really excited to have this cube conversation here in the Palo Alto studio with a real close friend of theCUBE, and repeat alumni, Leslie Berlin. I want to get her official title; she's the historian for the Silicon Valley archive at Stanford. Last time we talked to Leslie, she had just come out with a book about Robert Noyce, and the man behind the microchip. If you haven't seen that, go check it out. But now she's got a new book, it's called "Troublemakers," which is a really appropriate title. And it's really about kind of the next phase of Silicon Valley growth, and it's hitting bookstores. I'm sure you can buy it wherever you can buy any other book, and we're excited to have you on Leslie, great to see you again. >> So good to see you Jeff. >> Absolutely, so the last book you wrote was really just about Noyce, and obviously, Intel, very specific in, you know, the silicon in Silicon Valley obviously. >> Right yeah. >> This is a much, kind of broader history with again just great characters. I mean, it's a tech history book, but it's really a character novel; I love it. >> Well thanks, yeah; I mean, I really wanted to find people. They had to meet a few criteria. They had to be interesting, they had to be important, they had to be, in my book, a little unknown; and most important, they had to be super-duper interesting. >> Jeff Frick: Yeah. >> And what I love about this generation is I look at Noyce's generation of innovators, who sort of working in the... Are getting their start in the 60s. And they really kind of set the tone for the valley in a lot of ways, but the valley at that point was still just all about chips. And then you have this new generation show up in the 70s, and they come up with the personal computer, they come up with video games. They sort of launch the venture capital industry in the way we know it now. Biotech, the internet gets started via the ARPANET, and they kind of set the tone for where we are today around the world in this modern, sort of tech infused, life that we live. >> Right, right, and it's interesting to me, because there's so many things that kind of define what Silicon Valley is. And of course, people are trying to replicate it all over the place, all over the world. But really, a lot of those kind of attributes were started by this class of entrepreneurs. Like just venture capital, the whole concept of having kind of a high risk, high return, small carve out from an institution, to put in a tech venture with basically a PowerPoint and some faith was a brand new concept back in the day. >> Leslie Berlin: Yeah, and no PowerPoint even. >> Well that's right, no PowerPoint, which is probably a good thing. >> You're right, because we're talking about the 1970s. I mean, what's so, really was very surprising to me about this book, and really important for understanding early venture capital, is that now a lot of venture capitalists are professional investors. But these venture capitalists pretty much to a man, and they were all men at that point, they were all operating guys, all of them. They worked at Fairchild, they worked at Intel, they worked at HP; and that was really part of the value that they brought to these propositions was they had money, yes, but they also had done this before. >> Jeff Frick: Right. >> And that was really, really important. 
>> Right, another concept that kind of comes out, and I think we've seen it time and time again is kind of this partnership of kind of the crazy super enthusiastic visionary that maybe is hard to work with and drives everybody nuts, and then always kind of has the other person, again, generally a guy in this time still a lot, who's kind of the doer. And it was really the Bushnell-Alcorn story around Atari that really brought that home where you had this guy way out front of the curve but you have to have the person behind who's actually building the vision in real material. >> Yeah, I mean I think something that's really important to understand, and this is something that I was really trying to bring out in the book, is that we usually only have room in our stories for one person in the spotlight when innovation is a team sport. And so, the kind of relationship that you're talking about with Nolan Bushnell, who started Atari, and Al Alcorn who was the first engineer there, it's a great example of that. And Nolan is exactly this very out there person, big curly hair, talkative, outgoing guy. After Atari he starts Chuck E. Cheese, which kind of tells you everything you need to know about someone who's dreaming up Chuck E. Cheese, super creative, super out there, super fun oriented. And you have working with him, Al Alcorn, who's a very straight laced for the time, by which I mean, he tried LSD but only once. (cumulative laughing) Engineer, and I think that what's important to understand is how much they needed each other, because the stories are so often only about the exuberant out front guy. To understand that those are just dreams, they are not reality without these other people. And how important, I mean, Al Alcorn told me look, "I couldn't have done this without Nolan, "kind of constantly pushing me." >> Right, right. >> And then in the Apple example, you actually see a third really important person, which to me was possibly the most exciting part of everything I discovered, which was the importance of the guy named Mike Markkula. Because in Jobs you had the visionary, and in Woz you had the engineer, but the two of them together, they had an idea, they had a great product, the Apple II, but they didn't have a company. And when Mike Markkula shows up at the garage, you know, Steve Jobs is 21 years old. >> Jeff Frick: Right. >> He has had 17 months of business experience in his life, and it's all his attack for Atari, actually. And so how that company became a business is due to Mike Markkula, this very quiet guy, very, very ambitious guy. He talked them up from a thousand stock options at Intel to 20,000 stock options at Intel when he got there, just before the IPO, which is how he could then turn around and help finance >> Jeff Frick: Right. >> The birth of Apple. And he pulled into Apple all of the chip people that he had worked with, and that is really what turned Apple into a company. So you had the visionary, you had the tech guy, you also needed a business person. >> But it's funny though because in that story of his visit to the garage he's specifically taken by the engineering elegance of the board >> Leslie Berlin: Right. >> That Woz put together, which I thought was really neat. So yeah, he's a successful business man. Yes he was bringing a lot of kind of business acumen value to the opportunity, but what struck him, and he specifically talks about what chips he used, how he planned for the power supply. 
Just very elegant engineering stuff that touched him, and he could recognize that they were so far ahead of the curve. And I think that's such another interesting point is that things that we so take for granted like mice, and UI, and UX. I mean the Atari example, for them to even think of actually building it that would operate with a television was just, I mean you might as well go to Venus, forget Mars, I mean that was such a crazy idea. >> Yeah, I mean I think Al ran to Walgreens or something like that and just sort of picked out the closest t.v. to figure out how he could build what turned out to be Pong, the first super successful video game. And I mean, if you look also at another story I tell is about Xerox Park; and specifically about a guy named Bob Taylor, who, I know I keep saying, "Oh this might be my favorite part." But Bob Taylor is another incredible story. This is the guy who convinced DARPA to start, it was then called ARPA, to start the ARPANET, which became the internet in a lot of ways. And then he goes on and he starts the computer sciences lab at Xerox Park. And that is the lab that Steve Jobs comes to in 1979, and for the first time sees a GUI, sees a mouse, sees Windows. And this is... The history behind that, and these people all working together, these very sophisticated Ph.D. engineers were all working together under the guidance of Bob Taylor, a Texan with a drawl and a Master's Degree in Psychology. So what it takes to lead, I think, is a really interesting question that gets raised in this book. >> So another great personality, Sandra Kurtzig. >> Yeah. >> I had to look to see if she's still alive. She's still alive. >> Leslie Berlin: Yeah. >> I'd love to get her in some time, we'll have to arrange for that next time, but her story is pretty fascinating, because she's a woman, and we still have big women issues in the tech industry, and this is years ago, but she was aggressive, she was a fantastic sales person, and she could code. And what was really interesting is she started her own software company. The whole concept of software kind of separated from hardware was completely alien. She couldn't even convince the HP guys to let her have access to a machine to write basically an NRP system that would add a ton of value to these big, expensive machines that they were selling. >> Yeah, you know what's interesting, she was able to get access to the machine. And HP, this is not a well known part of HP's history, is how important it was in helping launch little bitty companies in the valley. It was a wonderful sort of... Benefited all these small companies. But she had to go and read to them the definition of what an OEM was to make an argument that I am adding value to your machines by putting software on it. And software was such an unknown concept. A, people who heard she was selling software thought she was selling lingerie. And B, Larry Ellison tells a hilarious story of going to talk to venture capitalists about... When he's trying to start Oracle, he had co-founders, which I'm not sure everybody knows. And he and his co-founders were going to try to start Oracle, and these venture capitalists would, he said, not only throw him out of the office for such a crazy idea, but their secretaries would double check that he hadn't stolen the copy of Business Week off the table because what kind of nut job are we talking to here? >> Software. >> Yeah, where as now, I mean when you think about it, this is software valley. >> Right, right, it's software, even, world. 
There's so many great stories, again, "Troublemakers" just go out and get it wherever you buy a book. The whole recombinant DNA story and the birth of Genentech, A, is interesting, but I think the more kind of unique twist was the guy at Stanford, who really took it upon himself to take the commercialization of academic, generated, basic research to a whole 'nother level that had never been done. I guess it was like a sleepy little something in Manhattan they would send some paper to, but this guy took it to a whole 'nother level. >> Oh yeah, I mean before Niels showed up, Niels Reimers, he I believe that Stanford had made something like $3,000 off of the IP from its professors and students in the previous decades, and Niels said "There had to be a better way to do this." And he's the person who decided, we ought to be able to patent recombinant DNA. And one of the stories that's very, very interesting is what a cultural shift that required, whereas engineers had always thought in terms of, "How can this be practical?" For biologists this was seen as really an unpleasant thing to be doing, don't think about that we're about basic research. So in addition to having to convince all sorts of government agencies and the University of California system, which co-patented this, to make it possible, just almost on a paperwork level... >> Right. >> He had to convince the scientists themselves. And it was not a foregone conclusion, and a lot of people think that what kept the two named co-inventors of recombinant DNA, Stan Cohen and Herb Boyer, from winning the Nobel Prize is that they were seen as having benefited from the work of others, but having claimed all the credit, which is not, A, isn't fair, and B, both of those men had worried about that from the very beginning and kept saying, "We need to make sure that this includes everyone." >> Right. >> But that's not just the origins of the biotech industry in the valley, the entire landscape of how universities get their ideas to the public was transformed, and that whole story, there are these ideas that used to be in university labs, used to be locked up in the DOD, like you know, the ARPANET. And this is the time when those ideas start making their way out in a significant way. >> But it's this elegant dance, because it's basic research, and they want it to benefit all, but then you commercialize it, right? And then it's benefiting the few. But if you don't commercialize it and it doesn't get out, you really don't benefit very many. So they really had to walk this fine line to kind of serve both masters. >> Absolutely, and I mean it was even more complicated than that, because researchers didn't have to pay for it, it was... The thing that's amazing to me is that we look back at these people and say, "Oh these are trailblazers." And when I talked to them, because something that was really exciting about this book was that I got to talk to every one of the primary characters, you talk to them, and they say, "I was just putting one foot in front of the other." It's only when you sort of look behind them years later that you see, "Oh my God, they forged a completely new trail." But here it was just, "No I need to get to here, "and now I need to get to here." And that's what helped them get through. That's why I start the book with the quote from Raiders of the Lost Ark where Sallah asks Indy, you know basically, how are you going to stop, "Stop that car." And he says, "How are you going to do it Indy?" 
And Indy says, "I don't know "I'm making it up as I go along." And that really could almost be a theme in a lot of cases here that they knew where they needed to get to, and they just had to make it up to get there. >> Yeah, and there's a whole 'nother tranche on the Genentech story; they couldn't get all of the financing, so they actually used outsourcing, you know, so that whole kind of approach to business, which was really new and innovative. But we're running out of time, and I wanted to follow up on the last comment that you made. As a historian, you know, you are so fortunate or smart to pick your field that you can talk to the individual. So, I think you said, you've been doing interviews for five or six years for this book, it's 100 pages of notes in the back, don't miss the notes. >> But also don't think the book's too long. >> No, it's a good book, it's an easy read. But as you reflect on these individuals and these personalities, so there's obviously the stories you spent a lot of time writing about, but I'm wondering if there's some things that you see over and over again that just impress you. Is there a pattern, or is it just, as you said, just people working hard, putting one step in front of the other, and taking those risks that in hindsight are so big? >> I would say, I would point to a few things. I'd point to audacity; there really is a certain kind of adventurousness, at an almost unimaginable level, and persistence. I would also point to a third feature at that time that I think was really important, which was for a purpose that was creative. You know, I mean there was the notion, I think the metaphor of pioneering is much more what they were doing then what we would necessarily... Today we would call it disruption, and I think there's a difference there. And their vision was creative, I think of them as rebels with a cause. >> Right, right; is disruption the right... Is disruption, is that the right way that we should be thinking about it today or are just kind of backfilling the disruption after the fact that it happens do you think? >> I don't know, I mean I've given this a lot of thought, because I actually think, well, you know, the valley at this point, two-thirds of the people who are working in the tech industry in the valley were born outside of this country right now, actually 76 percent of the women. >> Jeff Frick: 76 percent? Wow. >> 76 percent of the women, I think it's age 25 to 44 working in tech were born outside of the United States. Okay, so the pioneering metaphor, that's just not the right metaphor anymore. The disruptive metaphor has a lot of the same concepts, but it has, it sounds to me more like blowing things up, and doesn't really thing so far as to, "Okay, what comes next?" >> Jeff Frick: Right, right. >> And I think we have to be sure that we continue to do that. >> Right, well because clearly, I mean, the Facebooks are the classic example where, you know, when he built that thing at Harvard, it was not to build a new platform that was going to have the power to disrupt global elections. You're trying to get dates, right? I mean, it was pretty simple. >> Right. >> Simple concept and yet, as you said, by putting one foot in front of the other as things roll out, he gets smart people, they see opportunities and take advantage of it, it becomes a much different thing, as has Google, as has Amazon. >> That's the way it goes, that's exactly... I mean, and you look back at the chip industry. 
These guys just didn't want to work for a boss they didn't like, and they wanted to build a transistor. And 20 years later a huge portion of the U.S. economy rests on the decisions they're making and the choices. And so I think this has been a continuous story in Silicon Valley. People start with a cool, small idea and it just grows so fast among them and around them with other people contributing, some people they wish didn't contribute, okay then what comes next? >> Jeff Frick: Right, right. >> That's what we figure out now. >> All right, audacity, creativity and persistence. Did I get it? >> And a goal. >> And a goal, and a goal. Pong, I mean was a great goal. (cumulative laughing) All right, so Leslie, thanks for taking a few minutes. Congratulations on the book; go out, get the book, you will not be disappointed. And of course, the Bob Noyce book is awesome as well, so... >> Thanks. >> Thanks for taking a few minutes and congratulations. >> Thank you so much Jeff. >> All right this is Leslie Berlin, I'm Jeff Frick, you're watching theCUBE. See you next time, thanks for watching. (electronic music)
Inhi Cho Suh, IBM - IBM Information on Demand 2013 - #IBMIoD #theCUBE
okay we're back live here inside the cube rounding out day one of exclusive coverage of IBM information on demand I'm John further the founder SiliconANGLE enjoy my co-host Davey lonte we're here in heat you saw who's the vice president I said that speaks that you know I think you always get promoted you've been on the cube so many times you doing so well it's all your reason tatian was so amazing I always liked SVP the cute good things happen that's exactly why i be MVP is a big deal unlike some of the starters where everyone gets EVP all these other titles but welcome back thank you so the storytelling has been phenomenal here although murs a little bit critical some of the presentations earlier from gardner but the stories higher your IBM just from last year take us through what's changed from iod last year to this year the story has gotten tighter yes comprehensive give us the quick okay quick view um okay here's the point of view here's the point of view first you got to invest in a platform which we've all talked about and i will tell you it's not just us saying it i would say other vendors are now copying what we're saying cuz if you went to strata yes which you were there we were there probably heard some of the messages that's right why everybody wants to be a platform okay one two elevated risk uncertainty governance I think privacy privacy security risk this is what people are talking about they want to invest in a more why because you know what the decisions matter they want to make bigger beds they want to do more things around customer experience they want to improve products they want to improve pricing the third area is really a cultural statement like applying analytics in the organization because the people and the skills I would say the culture conversation is happening a lot more this year than it was a year ago not just at IOD but in the industry so I think what you're seeing here at IOD is actually a reflection of what the conversations are happening so our organizations culturally ready for this I mean you guys are going to say yes and everybody comes on says oh yes we're seeing it all over the place but are they really ready it depends I think some are some are absolutely ready some are not and probably the best examples are and it really depends on the industry so I'll give you a few examples so in the government area I think people see the power of applying things like real-time contextual insight leveraging stream computing why because national security matters a lot of fraudulent activity because that's measurable you can drive revenue or savings healthcare people know that a lot of decision-making is being made without a comprehensive view of the analytics and the data now the other area that's interesting is most people like to talk about text analytics unstructured data a lot of social media data but the bulk of the data that's actually being used currently in terms of big data analytics is really transactional data why because that's what's maintained in most operational systems where health systems so you're going to see a lot more data warehouse augmentation use cases leverage you can do on the front end or the back end you're going to see kind of more in terms of comprehensive view of the customer right augmenting like an existing customer loyalty or segmentation data with additional let's say activity data that they're interacting with and that was the usta kind of demo showing social data cell phone metadata is that considered transactional you know it 
is well call me to record right CDR call detail records well the real time is important to you mentioned the US open just for folks out there was a demo on stage when you guys open data yeah at all the trend sentiment data the social data but that's people's thoughts right so you can see what people are doing now that's big yeah you know what's amazing about that just one second which is what we were doing was we were predicting it based on the past but then we were modifying it based on real time activity and conversation so let's say something hot happened and all of a sudden it was interesting when Brian told me this he was like oh yeah Serena's average Twitter score was like 2,200 twit tweets a day and then if some activity were to happen let's say I don't know she didn't he wrote she had got into a romance or let's say she decided to launch a new product then all of a sudden you'd see an accused spike rate in activity social activity that would then predict how they wanted to operate that environment that's amazing and you know we you know we love daily seen our our crowd spots be finder we have the new crowd chat one and this idea of connecting consumers is loose data it's ephemeral data it's transient data but it's now capture will so people can have a have fun into tennis tournament and then it's over they go back home to work you still have that metadata we do that's very kind of its transient and ephemeral that's value so you know Merv was saying also that your groups doing a lot of value creation let's talk about that for a second business outcomes what do you what's the top conversation when you walk into a customer that says hey you know here's point a point B B's my outcome mm-hmm one of those conversations like I mean what are they what are some of the outcomes you just talked to use case you tell customers but like what did some of the exact you know what I'll tell you one use case so and this was actually in the healthcare hotel you won healthcare use case in one financial services use case both conversations happened actually in the last two weeks so in the healthcare use case there's already let's say a model that's happening for this particular hospital now they have a workflow process typically in a workflow process you you're applying capabilities where you've modeled out your steps right you do a before be before see and you automate this leveraging BPM type capabilities in a data context you don't actually start necessarily with knowing what the workflow is you kind of let the data determine what the workflow should be so in the this was in an ICU arena historically if you wanted to decide who was the healthiest of the patients in the ICU because you had another trauma coming in there was a workflow that said you had to go check the nurses the patient's profile and say who gets kicked out of what bed or moved because they're most likely to be in a healthy state that's a predefined workflow but if you're applying streams for example all the sudden you could have real-time visibility without necessarily a nurse calling a doctor who that calls the local staff who then calls the cleaning crew rate you could actually have a dashboard that says with eighty percent confidence beds2 and ate those patients because of the following conditions could be the ones that you are proactive in and saying oh you know what not only can they be released but we have this degree of confidence around them being because of the days that it's coming obvious information that changes then 
Obviously, as information changes, then potentially the way you're setting your rules and policies around your workflow changes too. Another example, which was really a government use case, is government security. In security scenarios and national security, you never quite know exactly what people intend to do, other than that they're intending something bad, right, and they're intentionally trying not to be found. So, human trafficking: it's an ugly topic, but I want to bring it up for a second here. What you're doing is you're actually looking at data compositions and different patterns and resolving entities, and based on that, that will dictate potentially a whole new flow, or a treatment, or remediation, or activity, which is not the predefined workflow. You're letting the data all of a sudden connect to other data points, and then you're arriving at the insight to take an action that is completely different.

>>I want to go back to the ICU, of course, to the healthcare example. Where are we today? Is that something that's actually being implemented, or is it sort of a proof of concept?

>>Well, it's actually being done in a couple of different hospitals, one of which is in Canada, and then we're also leveraging Streams in the Emory University intensive care unit, with Timothy Buckman, who you had on earlier.

>>Oh yeah, the ICU of the future, right, absolutely brilliant. The trafficking example brings up, actually, the underbelly of the world and society, but on the data side, Jeff Jonas has been on theCUBE, as you know, many times, and he talks about his puzzle pieces, the way the data is traveling on a network, a network that's distributed. Essentially that's network computing, I mean state management, so look at network management and you can look at patterns, right? So that's an interesting example, and that begs the next question: what is the craziest, most interesting use case you've seen?

>>Oh my gosh, okay, now I have to think about that.

>>One you can talk about, that creates business value or societal value.

>>You are putting me on the spot. The craziest one?

>>It could be G-rated, you know.

>>You know what, three weeks ago I participated in a fraud summit that TIAA-CREF hosted, where it was all investigators, like they were doing crime investigation. More than sixty percent of the guys in the room carried weapons, because they were security intelligence, they were police, they were DAs. I was not packing, anyway. So 60-plus percent were those, and then only about thirty percent in the room were what I would consider the data scientists, the people trying to decide which claims are true or false and so forth. There were at least three or four use cases in that discussion that came out that were unbelievable. One is in the fraud area in particular, and in crime: they're layering the data. What does layering the data mean? They're taking location-based data for a geographic region and they're putting crime data on top of it, right, historical data like drug rings, and even datasets in Miami-Dade County. The DA told me that rather than looking at the people that are doing the drugs, they realized people that had possession of a drug typically purchased within a certain location, and they had these abandoned properties, and they were able to identify entire rings based on that.
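A rough sketch of that layering idea: put possession-incident locations on top of abandoned-property locations and flag properties with an unusual amount of nearby activity. Everything here, the coordinates, the radius, the hit threshold, is invented for illustration; this is not the Miami-Dade system described in the interview.

```python
# Minimal sketch of "layering" datasets: possession incidents laid on top of
# abandoned-property locations, flagging properties with clustered nearby activity.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def flag_properties(properties, incidents, radius_km=0.5, min_hits=3):
    """Return properties with at least `min_hits` incidents inside `radius_km`."""
    flagged = []
    for prop in properties:
        hits = sum(
            1 for inc in incidents
            if haversine_km(prop["lat"], prop["lon"], inc["lat"], inc["lon"]) <= radius_km
        )
        if hits >= min_hits:
            flagged.append((prop["address"], hits))
    return flagged

if __name__ == "__main__":
    # Hypothetical sample data, purely for illustration.
    abandoned = [{"address": "112 Elm St", "lat": 25.774, "lon": -80.193}]
    possession = [
        {"lat": 25.7742, "lon": -80.1931},
        {"lat": 25.7745, "lon": -80.1928},
        {"lat": 25.7738, "lon": -80.1935},
    ]
    for address, hits in flag_properties(abandoned, possession):
        print(f"{address}: {hits} possession incidents within 500 m")
```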
Another one, and this is also semi-drug-related, is in the energy and utility space. In the middle part of the United States there were houses in nice urban areas that were completely torn apart on the interior and built into marijuana houses, and of course they're utilizing high levels of gas and electricity in order to maintain the water, the fertilization, everything else. Well, what happens is it drives peaks in the way that the energy usage looks on a given day's pattern, so based on that the utility is able to detect that inappropriate activities are happening, and whether it's a single opportunistic activity, whether it's someone doing laundry or irrigating the area, or something else.

>>You know what's interesting about electricity too: if someone's using electricity but no one's using any of the gas, you're home but no one's cooking, something's a little off.

>>But it was fascinating, I mean really fascinating. There were several other crime scenarios. I actually did not know the US Postal Service is the longest-running federal institution that actually tracks things like mail fraud, and one of the use cases, I'm sure Jeff has talked about it here on theCUBE, is probably the MoneyGram use case. The stories were unreal, because I was spending time with forensic scientists as well as forensic investigators, and that's a completely different world.
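The utility pattern described a moment ago reduces to a simple rule of thumb: electricity draw that never dips overnight, combined with essentially no gas consumption. The readings and thresholds below are made-up assumptions, a sketch rather than an actual utility's detection logic.

```python
# Minimal sketch of the utility-usage pattern: high, around-the-clock electricity
# draw with essentially no gas consumption. Readings and thresholds are invented.
from statistics import mean

def looks_suspicious(hourly_kwh, daily_gas_therms, elec_threshold=3.0, gas_threshold=0.2):
    """Flag a household whose electricity never dips at night and whose gas stays flat."""
    overnight = hourly_kwh[0:6]                              # midnight to 6 a.m.
    flat_profile = min(hourly_kwh) > 0.8 * mean(hourly_kwh)  # no off-peak dip
    heavy_overnight = mean(overnight) > elec_threshold
    no_gas = daily_gas_therms < gas_threshold                # nobody cooking or heating water
    return flat_profile and heavy_overnight and no_gas

if __name__ == "__main__":
    grow_house = {"kwh": [4.1] * 24, "gas": 0.0}
    family_home = {"kwh": [0.3] * 6 + [1.5] * 18, "gas": 1.4}
    for name, h in [("grow_house", grow_house), ("family_home", family_home)]:
        print(name, "flagged" if looks_suspicious(h["kwh"], h["gas"]) else "ok")
```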
>>We're getting the wrap signal, so just a few minutes left. That speaks to the need for a platform to handle all this diversity: the security, the risk, the governance, everything. You've got to go because you're the star of the analyst meeting, so one final question, one of the best yet; we've got drugs in there, we've got guns, trafficking, tobacco. All right, the knowledge worker. Final question, I know you've got to go: with these big data applications, the guys in the mailroom, the guys who work for the post office, are now able to actually do this kind of high-level, basically data-science work, if you will, or be an analyst. So I want you to share with the folks your vision of the definition of the knowledge worker, an overused word that's been kicked around since the PC generation, but now with handheld devices, with real-time analytics, with streaming, with all this stuff happening at the edge, how is it going to change the knowledge worker, the person in the trenches? It could be the person in the cubicle, the person on the go, the mobile salesperson, anyone.

>>You know, some people feel threatened when they hear that you're going to apply data and analytics everywhere, because it implies that you're automating things, but that's actually not the value. The real value is the insight, so that you can double down on the decisions you want to make. If you're more confident, you're going to take bigger bets, right? And decision-making historically has been, I think, reserved for a very elite few, and what we're talking about now is a democratization of that insight, and with that comes a lot of empowerment, a lot of empowerment for everyone, and you don't have to be a data scientist to be able to make decisions and inform decisions. If anything, and actually Tim Buckman and I had a good conversation about this, as a professional, you know what, if I were a physician, I'd want to work at the hospital that has the advanced capabilities. Why? Because it allows me as a professional physician to be able to do what I was trained to do, not to have to detect and pay attention to all these alarms going off. I want to work at the institutions and organizations that are investing appropriately, because it pushes the caliber of the work I get to do, so I think it just changes the dynamics for everyone.

>>Tim was like a high-priced logistics manager. You want to work with people, you want to work with leaders, and now we're in a modern era, this new wave is upon us, who care and want to improve, and this is about continuing to improve. Dave and I always talk about the open source world, how those principles are going mainstream to every aspect of business: collaboration, openness, transparency, not control.

>>Absolutely, absolutely.

>>Inhi, thanks so much for coming on theCUBE. I know you're busy, and we appreciate your time. We are here live in theCUBE, getting all the signal from the noise, and some good commentary at the end of day one. We have one more guest right up next, so stay tuned. We'll be right back. This is theCUBE.