End-to-End Security in Vertica
>> Paige: Hello everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled End-to-End Security in Vertica. I'm Paige Roberts, Open Source Relations Manager at Vertica, and I'll be your host for this session. Joining me are Vertica software engineers Fenic Fawkes and Chris Morris. Before we begin, I encourage you to submit your questions or comments during the virtual session. You don't have to wait until the end; just type your question or comment in the question box below the slide as it occurs to you and click submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer offline. You can also visit the Vertica forums to post your questions there after the session. Our team is planning to join the forums to keep the conversation going, so it'll be just like being at a conference and talking to the engineers after the presentation. Also, a reminder that you can maximize your screen by clicking the double-arrow button in the lower-right corner of the slide. And before you ask: yes, this whole session is being recorded, and it will be available to view on demand this week. We'll send you a notification as soon as it's ready. I think we're ready to get started. Over to you, Fen.

>> Fenic: Hi, welcome everyone. My name is Fen, my pronouns are fae/faer, and Chris, whose pronouns are he/him, will be presenting the second half. To get started, let's go over the goals of this presentation. First off, no two deployments are the same, so we can't give you the one right way to secure Vertica. How your deployment is set up is a factor, but the biggest one is your threat model. If you don't know what a threat model is, take an example: we're all working from home because of the coronavirus, and that introduces certain new risks. Our source code is on our laptops at home, that kind of thing. But our threat model isn't really that people will read our code over our shoulders and copy it; it's that a laptop could be lost or stolen. So we've encrypted our hard disks, and that kind of thing, to make sure no one can get at the data. So what we're going to give you are building blocks, and you can pick and choose the pieces you need to secure your Vertica deployment. We hope this gives you a good foundation for how to secure Vertica.

Now, what we're going to talk about: we'll start off with encryption, which is how to secure your data from attackers. Then authentication, which is how you log in; identity, which is who are you; authorization, which is, now that we know who you are, what can you do; delegation, which is how Vertica talks to other systems; and then auditing and monitoring.

So, how do you protect your data in transit? Vertica makes a lot of network connections, and these are the important ones: clients talk to the Vertica cluster, the cluster talks to itself, it can talk to other Vertica clusters, and it can make connections to a bunch of external services. First off, let's talk about client-server TLS. This is how you secure data between Vertica and clients, and it prevents an attacker from sniffing network traffic and, say, picking out sensitive data. Clients have a way to configure how strictly the server certificate is authenticated.
It's called the client SSLMode, and we'll talk about it more in a bit. On the server side, authentication methods can reject non-TLS connections outright, which is a pretty cool feature.

Okay, so Vertica also makes a lot of network connections within itself. If Vertica is running behind a strict firewall and you have really good network security, both physical and software, then it's probably not super important to encrypt all traffic between nodes. But if you're on a public cloud, you can set up AWS's firewall rules to prevent connections, and if there's a vulnerability in that, your data is completely exposed. So it's a good idea to set up internode encryption in less secure situations.

Next, import/export is a good way to move data between clusters. For instance, say you have an on-premises cluster and you're looking to move to AWS. Import/export is a great way to move your data from your on-prem cluster to AWS, but that means the data is going over the open internet, and that's another case where an attacker could sniff network traffic and pull out credit card numbers, or whatever else sensitive you have stored in Vertica. So it's a good idea to secure data in transit in that case as well.

And then we also connect to a lot of external services. Kafka, Hadoop, and S3 are three of them, and Voltage SecureData, which we'll talk about more in a second, is another. Because each service deals with authentication differently, how you configure your authentication to them differs, so see our docs.

I'd also like to talk a little about where we're going next. Our main goal at this point is making Vertica easier to use. Our first objective was security: making sure everything could be secured. So we built relatively low-level building blocks. Now that we've done that, we can identify common use cases and automate them, and that's where our attention is going.

Okay, so we've talked about how to secure your data over the network, but what about when it's on disk? There are several different encryption approaches, and each depends on your use case. RAID controllers and disk encryption are mostly for on-prem clusters; they protect against media theft and are invisible to Vertica. S3 and GCP encryption are the rough equivalent in the cloud, and they're also invisible to Vertica. And then there's field-level encryption, which we accomplish using Voltage SecureData, a format-preserving encryption technology.

So how does Voltage work? It encrypts values into ciphertext that keeps the same format. For instance, you can see a date of birth encrypted to something that looks like a date of birth but is not, in fact, the real value. You can do cool things like encrypt only the first 12 digits of a credit card number, leaving the last four in the clear so users can still validate them. The benefits of format-preserving encryption are that it doesn't increase database size and you don't need to alter your schema or anything. And because it preserves referential integrity, you can do analytics without decrypting the data. So again, there's a little diagram of how you could work Voltage into your use case, and you can even combine it with Vertica's row and column access policies, which Chris will talk about a bit later, for even more customized access control, depending on your use case and your Voltage integration. We are enhancing our Voltage integration in several ways in 10.0, and if you're interested in Voltage, you can go see their virtual BDC talk.
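To make the field-level piece concrete, here is a minimal sketch of what the format-preserving encryption calls look like in SQL. The function names come from Vertica's Voltage SecureData integration library as we understand it; the schemas, columns, and the format parameter value are hypothetical, and a Voltage endpoint and identity are assumed to be configured already.

```sql
-- Encrypt a sensitive column on the way in; the ciphertext keeps the
-- SSN format, so the target column's type and width are unchanged.
SELECT VoltageSecureProtect(ssn USING PARAMETERS format='ssn')
  FROM staging.personnel;

-- Principals allowed to see raw values decrypt on the way out.
SELECT VoltageSecureAccess(ssn_protected USING PARAMETERS format='ssn')
  FROM secure.personnel;
```

Because equal plaintexts encrypt to equal ciphertexts, joins and group-bys on the protected column still work without decrypting anything, which is what makes analytics over encrypted data possible.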
And then, talking about the roadmap a little, we're working on in-database encryption at rest. What this means is a Vertica solution to encryption at rest that doesn't depend on the platform you're running on. Encryption at rest is hard. (laughs) Encrypting, say, 10 petabytes of data is a lot of work. And once again, the theme of this talk: everyone has a different key management strategy and a different threat model, so we're working on designing a solution that fits everyone. If you're interested, we'd love to hear from you; contact us on the Vertica forums.

All right, next up we're going to talk a little bit about access control. First off: how do I prove who I am? How do I log in? Vertica has several authentication methods, and which one is best depends on your deployment size and use case. Again, the theme of this talk is that what you should use depends on your use case. You can order authentication methods by priority and scope them by origin. For instance, you can allow connections only from within your internal network, or you can enforce TLS on connections from external networks but relax that for connections from your internal network. That kind of thing.

So, we have a bunch of built-in authentication methods, all password-based. User profiles let you set password complexity requirements, and you can even reject non-TLS connections, say, or reject certain kinds of connections. Built-in methods should really only be used by small deployments, because if you're a larger deployment you probably already have an LDAP server where you manage users. Rather than duplicating the users and passwords that already live in LDAP, you should use LDAP authentication. Vertica still has to keep track of users, but each user can then authenticate through LDAP, so Vertica doesn't store the password at all. The client gives Vertica a username and password, and Vertica asks the LDAP server whether they're correct. The benefits of this are manifold: if, say, you delete a user from LDAP, you don't need to remember to also delete their Vertica credentials. They simply won't be able to log in anymore, because they're not in LDAP anymore.

If you like LDAP but you want something a little more secure, Kerberos is a good idea. Similar to LDAP, Vertica doesn't keep track of who's allowed to log in; it just keeps track of Kerberos principals, and Vertica never even touches the user's password. Users log in to Kerberos, and then they pass Vertica a ticket that says "I can log in." It is more complex to set up, so if you're just getting started with security, LDAP is probably the better option. But Kerberos is, again, a little more secure.

If you're looking for something that works well for applications, certificate auth is probably what you want. Rather than hardcoding a password, or storing a password in a script you use to run an application, you can use a certificate instead. If you ever need to change it, you just replace the certificate on disk, and the next time the application starts it picks that up and logs in.

And then multi-factor auth is a feature request we've gotten in the past. It's not built into Vertica, but you can do it using Kerberos. Security is a whole-application concern, and fitting MFA into your workflow is all about fitting it in at the right layer; we believe that layer is above Vertica. If you're interested in more about how MFA works and how to set it up, we wrote a blog on how to do it.
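Before we hand off, here's a minimal sketch of the LDAP setup just described. The statement shapes are Vertica DDL, but the method name, network range, server hostname, base DN, and user are all hypothetical; see the docs for the full parameter list.

```sql
-- Accept connections from anywhere, but require TLS for this method;
-- narrower CIDR ranges are how you scope a method by origin.
CREATE AUTHENTICATION ldap_auth METHOD 'ldap' HOST TLS '0.0.0.0/0';

-- Point the method at the LDAP server. Vertica binds as the
-- connecting user, so it never stores the password itself.
ALTER AUTHENTICATION ldap_auth SET
    host = 'ldap://ldap.example.com',
    basedn = 'dc=example,dc=com';

-- Grant the method to a user, or to PUBLIC for everyone.
GRANT AUTHENTICATION ldap_auth TO alice;
```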
And now, over to Chris, for more on identity and authorization.

>> Chris: Thanks, Fen. Hi everyone, I'm Chris. So, we're a Vertica user and we've connected to Vertica, but once we're in the database, who are we? What are we? In Vertica, the answer to that question is principals: users and roles, which are like groups in other systems. Since roles can be enabled and disabled at will, and multiple roles can be active at once, they're a flexible way to use only the privileges you need in the moment. For example, here you've got Alice, who has DBADMIN as a role, and those are some elevated privileges. She probably doesn't want them active all the time, so she can set the role to add it to her identity set only when she needs it. All of this information is stored in the catalog, which is basically Vertica's metadata storage.

How do we manage these principals? Well, it depends on your use case, right? If you're a small organization, or maybe only some people or services need Vertica access, the solution is just to manage them with Vertica. You can see some commands here that will let you do that. But what if we're a big organization and we want Vertica to reflect what's in our centralized user management system? It's a similar motivating use case to LDAP authentication, right? We want to avoid duplication hassles; we just want to centralize our management. In that case, we can use Vertica's LDAPLink feature. With LDAPLink, principals are mirrored from LDAP: they're synced in a configurable fashion from LDAP into Vertica's catalog. It manages creating and dropping users and roles for you, and then mapping the users to the roles. Once that's done, you can do any Vertica-specific configuration on the Vertica side. It's important to note that principals created in Vertica this way support multiple forms of authentication, not just LDAP. This is a separate feature from LDAP authentication, and if you created a user via LDAPLink, you could have them use a different form of authentication, Kerberos for example. Up to you.

Now of course this kind of system is pretty mission-critical, right? You want to make sure you get the right roles and the right users and the right mappings into Vertica, so you probably want to test it. And for that, we've got new and improved dry-run functionality as of 9.3.1. What this feature offers is new metafunctions that let you test various parameters without touching your real LDAPLink configuration. You can mess around with parameters and the configuration as much as you want, and you can be sure all of that is strictly isolated from the live system. Everything's separated. And when you use it, you get some really nice output through a Data Collector table; you can see some example output here. It runs the same logic as the real LDAPLink and provides detailed information about what would happen. Check the documentation for specifics.
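To tie the identity pieces together, here's a minimal sketch of the role workflow from the start of this section; the role, schema, table, and user names are hypothetical.

```sql
-- Create a role carrying elevated privileges and grant it to Alice.
CREATE ROLE elevated_ops;
GRANT USAGE ON SCHEMA sales TO elevated_ops;
GRANT DELETE ON sales.orders TO elevated_ops;
GRANT elevated_ops TO alice;

-- Alice adds the role to her identity set only when she needs it...
SET ROLE elevated_ops;

-- ...and drops back to her default identity set when she's done.
SET ROLE NONE;
```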
All right, so we've connected to the database and we know who we are, but now, what can we do? For any given action, you want to control who can do it, right? So what's the question you have to ask? Sometimes the question is just: who are you? A simple yes-or-no question. For example, if I want to upgrade a user, the question I have to ask is: am I the superuser? If I'm the superuser, I can do it; if I'm not, I can't. But sometimes the action is more complex, and the question you have to ask is more complex: does the principal have the required privileges?

If you're familiar with SQL privileges, there are things like SELECT and INSERT, and Vertica has a few of its own. The key thing here is that an action can require specific, and maybe even multiple, privileges on multiple objects. For example, when selecting from a table, you need USAGE on the schema and SELECT on the table. And there are some other examples here.

So where do these privileges come from? Well, if an action requires a privilege, these are the only places privileges can come from. The first source is implicit privileges, which come from owning the object or from special roles, which we'll talk about in a sec. Then there are explicit privileges: basically the SQL-standard GRANT system. You can grant privileges to users or roles, and optionally those users and roles can grant them further downstream. Discretionary access control. Explicit privileges come from the user and the active roles, so the whole identity set. And then we've got Vertica-specific inherited privileges, which come from the schema, and we'll talk about those in a sec as well.

So, these are the special roles in Vertica. First, DBADMIN. This isn't the dbadmin user, it's a role, and it has specific elevated privileges. You can check the documentation for the exact list, but it's less than the superuser. PSEUDOSUPERUSER can do anything the real superuser can do, and you can grant this role to whomever you choose. DBDUSER, also a role, can run Database Designer functions. SYSMONITOR gives you some elevated auditing permissions, and we'll talk about that later as well. And finally, PUBLIC is a role that everyone has all the time, so anything you want to allow for everyone, attach to PUBLIC.

Now imagine this scenario: I've got a really big schema with lots of relations, and those relations might be changing all the time. But for each principal that uses this schema, I want the privileges on all the tables and views there to be roughly the same. Even though tables and views come and go, an analyst, for example, might need full access to all of them, no matter how many there are or what they are at any given time. To manage this, the first approach is to remember to run grants every time a new table or view is created; and not just me, but everyone using this schema. Not only is that a pain, it's hard to enforce. The second approach is to use schema-inherited privileges. In Vertica, schema grants can include relational privileges, for example SELECT or INSERT, which normally don't mean anything for a schema, but do for a table. If a relation is marked as inheriting, then the schema's grants to a principal, for example salespeople, also apply to the relation. You can see on the diagram how USAGE and, technically, SELECT are granted on the schema, and how SELECT also applies to the inheriting table sales.foo. So now, instead of lots of GRANT statements run by multiple object owners, we only have to run one ALTER SCHEMA statement and three GRANT statements, and from then on, any time you grant or revoke privileges on the schema to or from a principal, all your new tables and views get them automatically. It's dynamically calculated.
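Here's a sketch of that setup: one ALTER SCHEMA statement and three GRANT statements. The schema, role, and choice of privileges are hypothetical, and the exact syntax can vary by version.

```sql
-- Relations created in the schema will inherit schema-level grants.
ALTER SCHEMA sales DEFAULT INCLUDE SCHEMA PRIVILEGES;

-- These grants now cover every inheriting table and view in sales,
-- present and future; revoking propagates the same way.
GRANT USAGE  ON SCHEMA sales TO salespeople;
GRANT SELECT ON SCHEMA sales TO salespeople;
GRANT INSERT ON SCHEMA sales TO salespeople;
```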
Now of course, part of setting this up securely is knowing what's happened and what's going on. To monitor the privileges, there are three system tables you want to look at. The first is grants, which shows you privileges that are active for you: that is, your user and your active roles, and theirs, and so on down the chain. Grants shows you the explicit privileges, and inherited_privileges shows you the inherited ones. Then there's one more, inheriting_objects, which shows all tables and views that inherit privileges; that's useful not so much for seeing privileges themselves as for managing inherited privileges in general. And finally, how do you see all privileges from all these sources together, in one go? There's a metafunction added in 9.3.1, GET_PRIVILEGES_DESCRIPTION, which, given an object, sums up all the privileges the current user has on that object. I'll refer you to the documentation for usage and supported types.

Now, the problem with SELECT: it lets you see everything or nothing. You can either read the table or you can't. But what if you want some principals to see a subset, or a transformed version, of the data? For example, I have a table with personnel data, and different principals, as you can see here, need different access levels to sensitive information: social security numbers. One thing I could do is make a view for each principal. But I could also use access policies, which can do this without introducing any new objects or dependencies. That centralizes your restriction logic and makes it easier to manage.

So what do access policies do? Well, we've got row and column access policies. Row access policies hide rows, and column access policies transform column data, depending on who's doing the SELECTing. So the data is transformed, as we saw on the previous slide, to look as intended for each principal. Now, access policies only let you modify data if they let you see the raw data. The implication is that when you're crafting access policies, you should only use them to refine access for principals that need read-only access. That is, if you want a principal to be able to modify the data, the access policies you craft should let the raw data through for that principal. So in our previous example, the loader service should be able to see every row, and it should see untransformed data in every column. As long as that's true, it can continue to load into this table. All of this is, of course, monitorable through a system table, in this case access_policy. Check the docs for more information on how to implement these.
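As a sketch of the personnel example: the table, column, and role names are hypothetical. Note how the loader role is passed raw data in both policies, so it can keep modifying the table.

```sql
-- Column policy: HR and the loader see raw SSNs, analysts see only
-- the last four digits, and everyone else sees NULL.
CREATE ACCESS POLICY ON personnel FOR COLUMN ssn
CASE
    WHEN ENABLED_ROLE('hr') OR ENABLED_ROLE('loader') THEN ssn
    WHEN ENABLED_ROLE('analyst') THEN 'XXX-XX-' || RIGHT(ssn, 4)
    ELSE NULL
END
ENABLE;

-- Row policy: hide executive rows from everyone but HR and the loader
-- (department is a hypothetical column on this table).
CREATE ACCESS POLICY ON personnel FOR ROWS
WHERE ENABLED_ROLE('hr')
   OR ENABLED_ROLE('loader')
   OR department <> 'executive'
ENABLE;
```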
All right, that's it for access control. Now on to delegation and impersonation. So what's the question here? Well, the question is: who is Vertica? That might seem like a silly question, but here's what I mean by that. When Vertica is connecting to a downstream service, for example cloud storage, how should Vertica identify itself? Most of the time, we do the permissions check ourselves and then connect as Vertica, like in this diagram here. But sometimes we can do better: instead of connecting as Vertica, we connect with some kind of upstream user identity. When we do that, we let the service decide who can do what, so Vertica isn't the only line of defense. And in addition to the defense-in-depth benefit, there are also benefits for auditing, because the external system can see who is really doing something. It's no longer just Vertica showing up in that external service's logs; it's somebody like Alice or Bob trying to do something.

One system where this comes into play is Voltage SecureData, so let's look at a couple of use cases. In the first one, I'm just encrypting for compliance or anti-theft reasons. In that case, I'll just use one global identity to encrypt or decrypt with Voltage. But imagine another use case: I want to control which users can decrypt which data. Now I'm using Voltage for access control, so in this case we want to delegate. The solution is, on the Voltage side, to give Voltage users access to the appropriate identities, where these identities control encryption for sets of data; a Voltage user can access multiple identities, like groups. Then on the Vertica side, a Vertica user can set their Voltage username and password in a session, and Vertica will talk to Voltage as that Voltage user. In the diagram here, you can see an example of how this is leveraged so that Alice can decrypt something but Bob cannot.

Another place the delegation paradigm shows up is with storage. Vertica can store and interact with data on non-local file systems, for example HDFS or S3. Sometimes Vertica is storing Vertica-managed data there; in Eon Mode, for instance, you might store your projections in communal storage in S3. But sometimes Vertica is interacting with external data. That usually maps to a user storage location on the Vertica side, and on the external storage side it might be something like Parquet files on Hadoop. In that case, it's not really Vertica's data, and we don't want to give Vertica more power than it needs, so let's request the data on behalf of whoever needs it.

Let's say I'm an analyst and I want to copy from, or export to, Parquet, using my own bucket. It's not Vertica's bucket, it's my data, but I want Vertica to manipulate data in it. The first option I have is to give Vertica as a whole access to the bucket. That's problematic, because in that case Vertica becomes kind of an AWS god: it can see any bucket, and any Vertica user might push or pull data to or from it any time Vertica wants. That's not good for the principles of least access and zero trust, and we can do better. So, second option: use an access key ID and secret key pair for an AWS IAM principal, if you're familiar, that does have access to the bucket. I might use my own credentials as the analyst, or I might use credentials for an AWS role that has even fewer privileges than I do, a restricted subset of my privileges. Then I set that in Vertica at the session level, and Vertica will use those credentials for the COPY and EXPORT commands. It gives you more isolation. Something that's in the works is support for keyless delegation using assumable IAM roles: similar benefits to option two here, but without having to manage keys at the user level.
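As a sketch of option two, this is roughly what session-scoped S3 credentials look like. The parameter names reflect recent Vertica versions as we understand them, and the keys, bucket, and table are placeholders; treat this as an illustration, not exact syntax for your version.

```sql
-- Credentials live only in this session; Vertica uses them for S3
-- access instead of any cluster-wide identity.
ALTER SESSION SET AWSAuth = 'AKIA...EXAMPLE:wJal...EXAMPLEKEY';
ALTER SESSION SET AWSRegion = 'us-east-1';

-- The load now runs with the analyst's (or a restricted IAM
-- principal's) access, not Vertica's.
COPY analytics.events FROM 's3://analyst-bucket/events/*' PARQUET;
```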
We can do basically the same thing with Hadoop and HDFS, with three different methods. The first option is Kerberos delegation; I think it's the most secure. If access control is your primary concern here, this will give you the tightest access control: you can really determine which Vertica users can talk to which HDFS locations. The downside is that it requires the most configuration outside of Vertica, in Kerberos and HDFS. Then, you've got secure impersonation. If you've got a highly trusted Vertica user base, or at least some trusted subset of it, and your primary concern isn't users doing the wrong thing but auditing on the HDFS side, you can use this option. The diagram here gives you a visual overview of how that works, but I'll refer you to the docs for details. And then finally, option three is bringing your own delegation token. It's similar to what we do with AWS: we set something at the session level, so it's very flexible and the user can do it on an ad hoc basis, but it is manual. So that's the third option.

Now on to auditing and monitoring. Of course, we want to know what's happening in our database. That's important in general, and important for incident response in particular. Your first stop to answer this question should be system tables. They're a collection of information about events, system state, performance, et cetera. They're SELECT-only tables, but they work in queries as usual; the data is just loaded differently. There are generally two types. There are metadata tables, which reflect persistent information stored in the catalog, for example users or schemata. And there are monitoring tables, which reflect more transient information, like events and system resources. Here you can see example output from the resource_pools system table: despite looking like system statistics, these are actually configurable parameters. If you're interested in resource pools, a way to manage resource allocation for users and other principals, again, check out the docs.

Then of course there's the follow-up question: who can see all of this? Some system information is sensitive, and we should only show it to those who need it. Principle of least privilege, right? The superuser can see everything, of course, but what about non-superusers? How do we give access to people who might need additional information about the system without giving them too much power? One option is SYSMONITOR, the special role I mentioned before. This role can always read system tables, but not change things the way a superuser could. Just reading. Another option is the RESTRICT and RELEASE metafunctions. Those grant and revoke access to a certain preset of system tables, to and from the PUBLIC role. The downside of those approaches is that they're inflexible: all or nothing, for a specific preset of tables, with no per-table configuration.

So if you're willing to do a little more setup, I'd recommend using your own grants and roles. System tables support GRANT and REVOKE statements just like any regular relations, and in that case I wouldn't even bother with SYSMONITOR or the metafunctions. To do this, just grant whatever privileges you see fit to roles that you create, then grant those roles to the users you want, and revoke access to the system tables of your choice from PUBLIC. If you need even finer-grained access than this, you can create views on top of system tables. For example, you can create a view over the users system table that only shows the current user's own information, using a built-in function as part of the view definition. You can then grant that view to PUBLIC, so each user in Vertica can see their own information without ever having access to the users system table as a whole, just the view.
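And a sketch of that last pattern; the view name is hypothetical, and CURRENT_USER is the built-in function doing the filtering.

```sql
-- Each user sees only their own row; the underlying system table
-- itself stays revoked from PUBLIC.
CREATE VIEW public.my_user_info AS
    SELECT * FROM v_catalog.users
    WHERE user_name = CURRENT_USER();

GRANT SELECT ON public.my_user_info TO PUBLIC;
```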
Now, if you're a superuser, or if you have direct access to the nodes in the cluster, file system, OS, et cetera, then you have more ways to see events. Vertica supports various methods of logging. You can see a few of them here; they generally live outside of a running Vertica, and you'd interact with them in a different way, with the exception of active_events, which is a system table. We've also got the Data Collector, which sorts events by subject ("component" is what it's called in the documentation) and extends the logging and system-table functionality by recording these events and information to rotating files. For example, AnalyzeStatistics is a function your users might call, and as a database administrator you might want to monitor its use, so you can use the Data Collector component for AnalyzeStatistics. The files the Data Collector creates can also be exported into a monitoring database; one example of that is Management Console's Extended Monitoring, so check out their virtual BDC talk, the one on the Management Console.

And that's it for the key points of security in Vertica. Many of these slides could spawn a talk of their own, so we encourage you to check out our blog, the documentation, and the forum for further investigation and collaboration. Hopefully the information we provided today will inform your choices in securing your deployment of Vertica. Thanks for your time today. That concludes our presentation. Now, we're ready for Q&A.
Sandra Rivera, Intel Corporation - Mobile World Congress 2017 - #MWC17 - #theCUBE
>> Announcer: Live from Silicon Valley, it's theCUBE! Covering Mobile World Congress 2017. Brought to you by Intel.

>> Okay, welcome back everyone. We're here live in Palo Alto for special Mobile World Congress 2017 coverage. Mobile World Congress is happening in Barcelona, Spain, and we are covering it here in Palo Alto, covering all the action as day two of Mobile World Congress winds down. We have reporters and analysts in the field in Barcelona calling in: we have Peter Jarich coming up soon on a call-in, and analyst Scott Raynovich called in earlier. We have reports: go to SiliconANGLE.com for all the action. Go to Cube365.net/MWC17; that is our new Cube365 software, a digital hub that aggregates all the top stories, all the signal from the noise at Mobile World Congress, and that site is sponsored by Intel. I want to thank Intel for allowing us to do 30 great interviews here in studio, as well as a variety of great content that we're getting in from phone-ins and friends on the ground in Barcelona, to get you all the top stories. And of course we'll bring you commentary and analysis here inside theCUBE.

I had a chance to talk with Intel at 1:30 this morning California time, early Tuesday morning here and Tuesday daytime in Barcelona. I spoke with Sandra Rivera, Corporate Vice President and General Manager of Intel's Network Platforms Group. She is in charge of the Intel technology group that delivers the end-to-end transformation, really getting 5G ready; part of the Intel brain trust, a leader, and really taking the world by storm. 5G is obviously the top story, and underneath the hood of 5G is the network transformation. I had a chance to ask some very pointed questions, like "Is 5G ready for prime time?" and "What's it going to take to change the game, to bring a new business model to power all the new use cases like autonomous vehicles, smart cities, a new kind of media and entertainment landscape, as well as smart homes and smart businesses?" So, let's hear what Sandra had to say. Here's my interview from this morning in Barcelona.

>> Sandra: Well, I would certainly say it's revolution, not evolution. If you look at all the previous generations of radio technology, 2G, 3G, 4G, they were largely driven by connecting people to other people. Of course there was the voice era with 2G and 3G, and then came the app revolution and us connecting with our loved ones over social media and all of the new capabilities that we found on the Internet. 4G then became about more capacity and coverage and faster upload and download speeds, with, again, all the social media and video and media processing. But 5G is fundamentally different, because it really brings together the computing and communications paradigms. It is truly that convergence of computing and communication. So, in addition to the billions of people that we've been connecting across all the other generations of radio technology, we are now connecting tens of billions of things in the era of 5G. And a lot of what we're seeing here on the ground is just some of those use cases starting to emerge: once you really converge computing and communications, what is possible? What is possible to do?

>> John: The big conversation we were having yesterday on theCUBE was the confluence between consumer technology and enterprise technology from a business model standpoint.
We hear the word "digital transformation"; that's the business model story for pretty much the global business landscape, but really there's a lot going on under the hood around what you guys are calling network transformation. Your CEO was talking to Fortune before the show started about this end-to-end architecture.

>> Sandra: Yes. So when we talk about end-to-end, we do talk about every point of either accessing or delivering information, whether between people or between things. It's from the jump-on point, if you will, on the network, the access layer, and of course all the new radio technologies, up to the edge of the network, where a lot of the decision points and the data analytics live and exist; up to the core of the network, which again is the workhorse where things are routed and traffic is steered according to the different types of traffic you're trying to get from source to endpoint; and then of course back into the data center and the cloud, which is where most of the content is originated, stored, or served up. So when we talk about end-to-end, we do talk about every point in that continuum, and the need to have programmable, intelligent computing and communications capability, which is very, very different from what we've had historically in network infrastructure.

Network transformation is all about embracing server-based technologies and the volume-economics benefits they bring, virtualization technology and the fact that you can pool assets and use them across many different users and use cases, and of course cloud as both a technology and a business model: the idea that you can lease an asset, afford to lease almost unlimited compute capability, and then release it when you're done. That end-to-end view, and that transformation of the underlying infrastructure, is really what we talk about when we talk about network transformation. And because 5G requires that programmable computing capability all across that continuum, in particular closer and closer to the endpoints, whether they're autonomous cars, drones, robots, or of course the things we're quite familiar with in terms of tablets and laptops and smartphones, that is really what we're now enabling under the umbrella of network transformation, and 5G is accelerating it.

>> John: And for the folks watching and listening, we had a great interview with Lynn Comp, who did a drill-down on NFV and some of the cool tech behind that. On the business model, kind of the landscape question: you mentioned drones, certainly hot. People can look at drones, and they see the autonomous vehicles. This is an environment where these new applications and use cases are emerging. So there always seems to be a challenge, and we had an expert discussion this morning on theCUBE here in Palo Alto, around the trade-off between bandwidth and true mobility, and sometimes there are trade-offs. And not one technology or partner will win it all, and you guys are a big part of that. What is Intel's view on the kinds of robust, diverse technologies that are needed to balance the many use cases and, at the same time, create an open ecosystem that fosters this new future growth, which seems to be a big wave we haven't seen since the iPhone in 2007? This is a real game changer. How do you view this multitude of technologies and this diverse ecosystem, and how do you foster it?
>> Sandra: As Intel, we are a technology innovator and a technology leader, and of course that clock never stands still, right? So you need to innovate (laughs) on the technology front and bring out new capabilities, and in particular, as the computing and communications worlds come together, we know that we need to integrate more of the network and wireless IP into the standard roadmap of processors and capabilities that we bring to the market, both in hardware and software ingredients. But as we do that, we are trying to protect the software investment that developers make in bringing new and emerging applications to market.

So while we have, of course, huge CPU assets within Intel, we also have FPGA assets for use cases that involve changing algorithms: whether they're security algorithms that are deployed differently in different parts of the world and different countries, or artificial intelligence, which is again an emerging field with new algorithms and new computational requirements, or the radio side, where the 5G wireless standards are going to be taking root and solidifying over the next several years and will continue to evolve. You want to have that programmability, so the FPGA assets come into play. And then we leverage that even further with some of the ASIC competency that we have, where you really do the work in a hardened piece of silicon: the ability to run very, very fast calculations, many times over, and to do it in as efficient a way as possible, both from a cost and a power perspective.

But all of that underlying hardware and silicon architecture choice really needs to be served up to a broad ecosystem through a software framework that is consistent and deterministic, where you have a very robust toolchain, which is really what Intel invests in. We invest in robust and comprehensive software tools and frameworks so that we can tap into the very broadest application developer ecosystem that exists in the world. That's how we see it: the capabilities we bring to market tap into our technology innovation in silicon and software ingredients, but then tap into something we believe in deeply, which is a broad ecosystem. The more market participation you have, the faster the innovation curve you can drive.

>> John: "A rising tide floats all boats." I love that saying, and I think that seems to be the case here. Sandra, I want to get your thoughts on the business model for telcos and the industry. People know Mobile World Congress is the big show, but it's also where everyone who's anyone in the business goes; it's a lot of business conversations. I'm sure you're backed up with meeting after meeting, because you've got a lot of customers there. Take us through some of the hallway conversations you're having, or specific business conversations in your customer meetings. What's the buzz in the hallways, and what specific conversations are you having with customers around commercializing, not just accelerating, the business models that are going to emerge from these new use cases?
>> Sandra: Yeah, well, you know, actually that's a great question, because I've been coming to Mobile World Congress for many, many years, and a lot of the network transformation discussions, and a lot of the discussions even around NFV and SDN in years past, have been rooted in the desire to achieve a lower cost point, a lower total cost of operation, when you move away from fixed-function, purpose-built hardware that you can't reprogram or reprovision to do anything other than what it was originally designed to do, even though the asset utilization on that investment was very low: 20%, maybe 30% at best. So it was this desire to move to, again, volume economics and server-based technology and the benefits of virtualization and pooling. It started as a cost-optimization type of conversation, but it has moved in the last year, certainly with 5G, into much more of "Well, how do we innovate services faster? How do we bring new capabilities to market? And how do we really help grow the top line, not just manage our costs?"

And I think that's what you're seeing at this event this year: the excitement around virtual reality and augmented reality; the excitement around the smart home and all the capabilities you'll have in your appliances and in the infrastructure of your own home and how you run your household; all of the innovations we've got in smart cities, so smart lighting, smart water systems, smart meters, and smart parking, another set of use cases we're enabling here. And of course, no trade show where you're talking about new use cases and new experiences is complete without an autonomous car, so we have a beautiful BMW 7 Series autonomous vehicle that we're showcasing here. But again, this is part of what we're enabling in terms of new use cases when you bring virtually unlimited compute to the edge of the network, with all-new radio technologies to address the bandwidth and the latency-sensitive, ultra-reliable capability that you need for an autonomous car.

So what you're seeing is these smart cities and virtual reality and autonomous driving and smart homes, and how all of the underlying technologies make them possible. And from a business perspective, all those new services are clearly what the communications service providers are trying to deliver to the market, and trying to deliver in a way that embraces cloud business models while also working with all of the enterprises and their traditional businesses, whether it's the automotive industry, industrial automation, or even all of the appliances that go into your home. All these traditional businesses are really disrupting themselves to embrace technology and to bring many more capabilities that, again, have never been possible before.

>> John: Yeah, the car really brings the data center to the edge in full light for the consumer. It's a moving data center: it needs to talk to a base station, needs to talk to the network. And really, this is the new normal. You see Alexa in the home and the voice activation, all the coolness going on there. And a lot of folks have criticized the telcos in the past for being very good at turning on subscribers and billing them as their core competency. But now with IoT, provisioning is happening so fast and so dynamically that literally anything with a SIM card is now on the network. This kind of changes the notion of a subscriber.
So, moving from that billing model to operating in this new world of thousands of things and people on the network, it's not as clean as it was in the old days. Are the telcos on this? Do they get this concept? I mean, this changes the requirements for the network to be more dynamic and to manage these technologies.

>> Sandra: It's a fundamental transformation that they're going through, rooted in an urgent business problem that they have: the more data that is created and consumed, the more they have to build out capacity, but they have to do that in an affordable way. And they can't do it when they're provisioning new services and capabilities in hardware, particularly hardware that only does what it was originally intended to do. They're now moving to a model that is software-defined, where you are able to innovate and provision and deploy at the speed of software, not anchored in hardware. But they really are absolutely welcoming that opportunity, again, to bring new services and capabilities to market, when they can create a network infrastructure that becomes a platform for innovation, where they can attract developers to imagine new use cases and applications and capabilities that they themselves may not have the DNA to build. But they have such unique assets: they have spectrum, they have contextual information about network bandwidth and conditions, they have customer profile information, they have a billing relationship.

>> John: They need security, too.

>> Sandra: They have security and reliability. And all of those assets, if they can tap into them and serve them up as, again, a platform upon which innovation can happen, then that's really their endgame. So while, to your point, they may have been criticized as being slow-moving, we really do see them fully embracing this idea that, in order to grow their top lines and to innovate faster on services, this fundamentally different architectural model of computing and communication converging on server-based and cloud-based technologies is the wave of the future. And you know, 5G just puts a nice bow on it, right? Because it makes everything move faster, given all these new use cases we're looking to enable.

>> Producer: Hey John, you only have one question left, so make it the key money question if you want.

>> John: Great, my final question. Sandra, my final question is: what's the bumper sticker this year for Mobile World Congress? If you had to put a bumper sticker on the car, what would it say this year to encapsulate Mobile World Congress?

>> Sandra: So for me, it's "5G Starts Today." Because in order to be ready for all those drones and robots and autonomous cars, and all of those immersive experiences in your living room, you really have to transform the network infrastructure today. And that composability of the network infrastructure, the ability to capture a slice of the network and optimize it in real time for your use case, all of that requires programmable, scalable, flexible computing that is secure and reliable, and that embraces cloud architectures and cloud business models. And so that is happening today, to get ready for 2018, 2019, 2020, when you'll see many more of those endpoints, those end devices, and those use cases come to be realized. You need to get started today. So 5G is absolutely on its way, and we're very, very excited to be a key enabler of that vision.

>> John: Sandra Rivera, thanks so much.
Corporate Vice President/General Manager of the Network Platforms Group at Intel, really bringing the end-to-end technology that enables communications service providers to take their networks to the next level, getting ready for 5G and bringing performance to the edge of the network. Thanks for taking the time on theCUBE, calling in from Barcelona; really appreciate it. Have a great day.

>> Sandra: Thanks, John, you too! (pulsing music)