
Search Results for siliconangle:

SiliconANGLE News | Red Hat Collaborates with Nvidia, Samsung and Arm on Efficient, Open Networks


 

(upbeat music) >> Hello, everyone; I'm John Furrier with SiliconANGLE News and host of theCUBE, and welcome to our SiliconANGLE News MWC news update in Barcelona, where MWC is the premier event for the cloud and telecommunications industry. In the news here is Red Hat, announcing a collaboration with NVIDIA, Samsung and Arm on efficient, open networks. Red Hat announced updates across various fields including advanced 5G telecommunications cloud, industrial edge, artificial intelligence, radio access networks (RAN), and efficiency. Red Hat's enterprise Kubernetes platform, OpenShift, has added support for NVIDIA's converged accelerators and Aerial SDK, facilitating RAN deployments on industry-standard servers across hybrid and multicloud platforms. This composable infrastructure enables telecom firms to support heavier compute demands for edge computing, AI, private 5G, and more, and it also helps network operators adopt open architectures, allowing them to choose non-proprietary components from multiple suppliers. In addition to the NVIDIA collaboration, Red Hat is working with Samsung to offer a new vRAN solution for service providers to better manage their open RAN networks. They're also working with UK chip designer Arm to create new networking solutions for energy efficiency. Red Hat's open source, Kubernetes-based Efficient Power Level Exporter project, or Kepler, has been donated to the Cloud Native Computing Foundation, allowing enterprises to better understand their cloud-native workloads and power consumption. Kepler can also help in the development of sustainable software by creating less power-hungry applications. Again, Red Hat continuing to back open source and open RAN, and contributing an open source project to the CNCF, continuing to create innovation for developers. And, of course, Red Hat knows a lot about operating systems, and telco could be the next frontier. That's SiliconANGLE News. I'm John Furrier; thanks for watching. (monotone music)
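Kepler works as a Prometheus-style exporter, publishing per-workload energy counters that operators can query like any other metric. The sketch below shows one way that data might be inspected from Python; the Prometheus address and the `kepler_container_joules_total` metric name are assumptions for illustration and should be checked against the Kepler documentation.

```python
# Sketch: query Kepler's per-container energy counters from Prometheus.
# Assumptions (not from the announcement): Prometheus is reachable at
# PROM_URL and Kepler exposes a counter named kepler_container_joules_total.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # assumed address
QUERY = 'sum by (container_namespace) (rate(kepler_container_joules_total[5m]))'

def watts_by_namespace():
    """Return approximate average power draw (watts) per namespace."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    # rate() over a joules counter yields joules per second, i.e. watts
    return {r["metric"].get("container_namespace", "unknown"): float(r["value"][1])
            for r in results}

if __name__ == "__main__":
    for ns, watts in sorted(watts_by_namespace().items(), key=lambda kv: -kv[1]):
        print(f"{ns:30s} {watts:8.2f} W")
```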

Published Date : Feb 28 2023


SiliconANGLE News | VMware Entices Telcos with Expanded 5G and Open RAN Portfolio


 

(electronic music) >> Hello, I'm John Furrier with SiliconANGLE News and host of theCUBE, and welcome to our news update for MWC in Barcelona, the premier event for the cloud and telecommunications industry. In the news today, VMware has lots of announcements, expanding its line of products for communication service providers with an expanded open RAN portfolio. VMware unveiled a service management and orchestration framework for simplifying and automating radio access networks and their applications. RANs have traditionally been proprietary because of their need for low latency and speed, and the O-RAN Alliance has championed open standards that would expand the number of players in the RAN ecosystem. According to Sanjay Uppal, senior vice president and general manager of the Service Provider and Edge Business Unit at VMware, VMware is at the forefront of getting deployed in telcos, both in the RAN as well as the core, and VMware hopes it can extend its leadership from the enterprise data center and SD-WAN and be the de facto standard in the RAN. VMware is also announcing a technical preview that'll allow communications service providers to run disaggregated and virtualized RAN functions directly on bare metal servers using VMware Tanzu. Project Hui is the initiative aimed at telecom providers that need flexibility in how they deploy edge devices. The VMware Telco Cloud Platform is also being improved to deliver carrier-grade intelligent networking and lateral security features such as distributed firewall and intrusion detection and prevention, along with support for energy-efficient use cases for 4G and 5G core load balancing. For enterprise customers, VMware is delivering new and enhanced remote worker device connectivity and intelligent wireless capabilities to its SD-WAN and Secure Access Service Edge, or SASE, products. It's also expanding its collaboration with Intel aimed at delivering new edge applications based on 5G connectivity that will support SD-WAN use cases involving mobile and internet of things devices. Again, VMware expanding its portfolio in the news; VMware is not stopping. Of course, theCUBE's coverage of VMware Explore will be coming up this year in 2023. Don't miss that. But at MWC, Dave Vellante, Lisa Martin and the entire CUBE team are there for four days of live coverage. Of course, all the news and reporting is on SiliconANGLE.com. For all the action, go there. And of course theCUBE.net is where the broadcast is in Barcelona. This is theCUBE News. Thanks for watching.

Published Date : Feb 28 2023


SiliconANGLE News | Google Targets Cloud-Native Network Transformation


 

(intense music) >> Hello, I'm John Furrier with "SiliconANGLE News" and the host of theCUBE here in Palo Alto, with coverage of MWC 2023. theCUBE is onsite in Barcelona, four days of wall-to-wall coverage. Here is a news update from MWC, and in the news here is Google. Google Cloud targets cloud-native network transformation for all the carriers, the cloud service providers and the communication service providers. They announced three new products to help communications service providers, also known as CSPs, build, deploy and operate hybrid cloud-native networks, as well as collect and manage network data. The new products, when combined with Unified Cloud, enable CSPs to improve customer experience, artificial intelligence, and data analytics. This is a big move, because 70% of communication service providers are expected to adopt cloud-native network functions by the end of this year, making it a big, big wave. One of the key features of Google's products is telecom network automation. This cloud service accelerates CSPs' network and edge deployments through the use of Kubernetes-based cloud-native automation tools. It's managed by a cloud version of Nephio, an open source project that Google founded in 2022. Another key product announcement from Google is the Telecom Data Fabric, a tool that helps CSPs generate insights. That's the data-driven piece, to target and optimize their network performance and reliability; it works by simplifying data collection, normalization and correlation through an adaptive framework. This is kind of where AI shines. Finally, Google has Telecom Subscriber Insights, a powerful AI tool that enables CSPs to extract insights from existing data sources in a privacy-safe environment. Let's see if this is better than Bing search, we'll see. But CSPs are moving to the cloud across all channels. This is a really important trend, as cloud-native scale, AI, data, configuration and automation all come to the edge of the network. That's an update from "SiliconANGLE News". Check out the coverage on siliconangle.com. Of course, thecube.net, four days, Dave Vellante and Lisa Martin are there. I'm here in Palo Alto. Thanks for watching. (slow music) (upbeat music)

Published Date : Feb 28 2023


SiliconANGLE News | Google Showcases Updates for Android and Wearable Technology at MWC


 

(Introductory music) >> Hello everyone, welcome to theCUBE's coverage of Mobile World Congress (MWC) and also SiliconANGLE's news coverage. Welcome to SiliconANGLE's news update for MWC. I'm John Furrier, host of theCUBE and reporter with SiliconANGLE News. Today, Google is showcasing new updates for Android and wearables at MWC, kind of going after that Apple-like functionality. Google has announced some new updates for Android and wearables at MWC in Barcelona. The new features are aimed at enhancing user productivity, connectivity and overall enjoyment across various devices, for Chromebooks and all their Android devices. This is their answer to being Apple-like. New features include updates to Google Keep, audio enhancements, instant pairing of Chromebooks and headphones, new emojis, smartphone features, more wallet options, and greater accessibility options. These features are designed to bridge the gap between different devices that people use together often, such as watches and phones, or laptops and headphones. Fast Pair, another feature, allows new Bluetooth headphones to be connected to a Chromebook with just one tap. If the headphones are already set up with an Android phone, the Chromebook will automatically connect to them with no additional setup. And finally, Google Keep, the note-taking app, is taking notes for you; very cool. New features include widgets for Android screens, making it easier for users to make to-do lists from their mobile devices and smartwatches. So that's the big news there, and it's really about Apple-like functionality. They have also added things to Google Meet, such as new backgrounds and filters, kind of a Zoom clone. So here you've got Google adding stuff to Android and to their wallet. They are really stepping up their game, and they want to be more mobile; at a telecom conference like this you can see them upping their game to try to compete with Apple. And that's the update from Google on Android and Chromebook. Stay tuned for more coverage. Check out SiliconANGLE.com for our special report on Mobile World Congress in Barcelona. Got theCUBE team - Dave Vellante, Lisa Martin, the whole gang is there for four days of live coverage. Check that out on theCUBE.net (closing music)

Published Date : Feb 28 2023


SiliconANGLE News | GSMA Debuts API Toolkit as AWS and Microsoft Roll Out New Carrier Offerings


 

(suspenseful music) >> Welcome back everyone, this is the SiliconANGLE news report, news flash, news update. I'm John Furrier, host of theCUBE, SiliconANGLE founder and editor. Got our team at Mobile World Congress, MWC. But here's a news flash: the GSMA debuted an API toolkit as AWS and Microsoft roll out their offerings to make the cloud part of the telco world. The GSMA, the association which runs this program and is the most important organization in telecommunications, unveiled the GSMA Open Gateway. This is a toolkit designed for creating applications that integrate with multiple carrier networks. The technology debuted at MWC23, the largest trade show in the telco area. This Open Gateway allows carriers to support APIs created with the technology that'll interoperate with each other. That means interoperability, and cloud is coming to the telecommunication carriers. That's your cell phone, that's wireless. This allows developers to move applications from one carrier to another without needing to port their code. This is a huge game-changer. This is big news, and, of course, Microsoft and AWS are pounding stories out there as well. It has been adopted by 21 carriers worldwide, and it's created using an open-source API toolkit called CAMARA. And Amazon's AWS is jumping on the cloud bandwagon with this and driving it hard into telco. And that's the big story, and, of course, more action is happening; theCUBE is onsite for four days in Barcelona for MWC23 and keeping the news flowing. Check out SiliconANGLE.com, you'll see all the news there, and, of course, theCUBE.net for the livestream. I'm John Furrier, that's the news brief. (atmospheric music)
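The portability claim, moving an application from one carrier to another without porting code, comes down to every participating carrier exposing the same API shape. The sketch below illustrates the idea with a hypothetical CAMARA-style quality-on-demand call; the base URLs, endpoint path, payload fields, and profile name are illustrative assumptions, not the published CAMARA or Open Gateway specification.

```python
# Sketch: what "write once, run on any carrier" could look like with a
# CAMARA-style Quality-on-Demand API. The base URLs, path and payload fields
# below are illustrative assumptions, not the published CAMARA spec.
import requests

CARRIER_GATEWAYS = {
    "carrier-a": "https://api.carrier-a.example/qod/v0",   # assumed base URL
    "carrier-b": "https://api.carrier-b.example/qod/v0",   # assumed base URL
}

def request_low_latency_session(carrier: str, device_ip: str, token: str) -> dict:
    """Ask a carrier's open gateway for a low-latency QoS session.

    Because every participating carrier exposes the same API shape,
    switching carriers is just a different base URL and credential.
    """
    payload = {
        "device": {"ipv4Address": device_ip},
        "qosProfile": "LOW_LATENCY",   # illustrative profile name
        "duration": 3600,              # seconds
    }
    resp = requests.post(
        f"{CARRIER_GATEWAYS[carrier]}/sessions",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Same application code, different carrier -- no porting required:
# session = request_low_latency_session("carrier-b", "203.0.113.7", token)
```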

Published Date : Feb 28 2023


SiliconANGLE News | Intel Accelerates 5G Network Virtualization


 

(energetic music) >> Welcome to the SiliconANGLE News update. Mobile World Congress theCUBE coverage is live on the floor for four days. I'm John Furrier, in the studio here; Dave Vellante and Lisa Martin are onsite. Intel is in the news: Intel accelerates 5G network virtualization with a radio access network boost for Xeon processors. Intel, well known for power and computing, today announced it has integrated virtual radio access network capability into its latest fourth-gen Intel Xeon system on a chip. This move will help network operators gear up their efforts to deliver cloud-native features for next-generation 5G core and edge networks. This announcement came today at MWC, formerly known as Mobile World Congress. In Barcelona, Intel is taking the latest step in its mission to virtualize the world's networks, including core, Open RAN and edge. Network virtualization is the key capability for communication service providers as they migrate from fixed-function hardware to programmable, software-defined platforms. This provides greater agility and greater cost efficiency. According to Intel, the demand for agile, high-performance, scalable networks requires the adoption of fully virtualized, software-based platforms that run on general-purpose processors. Intel believes that network operators need to accelerate network virtualization to get the most out of these new architectures, and that's where it can make its mark. With Intel vRAN Boost, it delivers twice the capability and capacity gains over its previous generation of silicon within the same power envelope, with 20% power savings that result from integrated acceleration. In addition, Intel announced new infrastructure power manager for 5G core reference software that's designed to work with vRAN Boost. Intel also showcased its new Intel converged edge media platform designed to deliver multiple video services from a shared multi-tenant architecture. The platform leverages cloud-native scalability to respond to shifting demands. Lastly, Intel announced a range of Agilex 7 field-programmable gate arrays and eASIC N5X structured application-specific integrated circuits designed for individual cloud, communications, and embedded applications. Intel is targeting power consumption, which is energy, and more horsepower for chips, which is going to power the industrial internet edge. That's going to be cloud native. Big news happening at Mobile World Congress. theCUBE is there. Go to siliconangle.com for all the news and the special report, and the live feed is on theCUBE.net. (energetic music)

Published Date : Feb 28 2023


SiliconANGLE News | Dell Partners with Telecom and Infrastructure Players to Accelerate Adoption


 

(energetic instrumental music) >> Hey, everyone. Welcome to SiliconANGLE CUBE News here from Mobile World Congress. This is a Mobile World Congress news update. Dell is in the news here, partnering with leading infrastructure companies; Dell Technologies is really setting up an ecosystem. Here, Dell, with leading telecom and infrastructure players, is accelerating open network adoption, announcing that it's launching Dell's Open Telecom Ecosystem community, a community of multiple telecom partners and communication service providers aimed at becoming a unifying force in the telecom industry. This announcement comes just days after Dell introduced a host of new hardware platforms designed to help telecom providers build cloud-native open radio access network, or RAN, architectures, using proprietary components and sub-components from various suppliers. Dell's Open Telecom Ecosystem community has already partnered with Nokia, Qualcomm, Amdocs and Juniper Networks to create new offerings aimed at accelerating open RAN price performance for communication service providers. This includes creating a new virtual RAN offering using Open Telecom Ecosystem Labs as the center for testing and validation, building next-generation 5G virtualized distributed units, and deploying an automated, validated 5G SA network with various partners across the ecosystem. Dell's promising that this is just the beginning of the collaboration with the telecom industry as it seeks to accelerate the adoption of 5G networking technologies and solve key industry challenges. More action is on the ground; go to thecube.net, theCUBE is broadcasting live for four days with Dave Vellante and Lisa Martin. I'm in the studios in Palo Alto bringing you the news. Lots of action happening, of course. Go to siliconangle.com to catch all the breaking news. We have a special report. We've already got 10-plus stories flowing, and probably another 10 today. Day two is tomorrow as MWC continues to power more news coverage for the edge and cloud-native technologies. (pensive ambient music)

Published Date : Feb 28 2023


SiliconANGLE News | Beyond the Buzz: A deep dive into the impact of AI


 

(upbeat music) >> Hello, everyone, welcome to theCUBE. I'm John Furrier, the host of theCUBE in Palo Alto, California. Also it's SiliconANGLE News. Got two great guests here to talk about AI, the impact on the future of the internet, the applications, the people. Amr Awadallah, the founder and CEO, and Ed Albanese, of Vectara, a new startup that emerged out of the original Cloudera, I would say, 'cause Amr's known, famous for the Cloudera founding, which was really the beginning of the big data movement. And now as AI goes mainstream, there's so much to talk about, so much to go on. And plus the new company is one of the, now what I call the wave, this next big wave, I call it the fifth wave in the industry. You know, you had PCs, you had the internet, you had mobile. This generative AI thing is real. And you're starting to see startups come out in droves. Amr obviously was founder of Cloudera, Big Data, and now Vectara. And Ed Albanese, you guys have a new company. Welcome to the show. >> Thank you. It's great to be here. >> So great to see you. Now the story is theCUBE started in the Cloudera office. Thanks to you, and your friendly entrepreneurship views that you have. We got to know each other over the years. But Cloudera had Hadoop, which was the beginning of what I call the big data wave, which then became what we now call data lakes, data oceans, and data infrastructure that's developed from that. It's almost interesting to look back 12 plus years, and see that what AI is doing now, right now, is opening up the eyes to the mainstream, and the applications are almost mind-blowing. You know, Satya Nadella called it the Mosaic Moment, didn't say Netscape, he built Netscape (laughing) but called it the Mosaic Moment. You're seeing companies and startups, kind of the alpha geeks running here, because this is the new frontier, and there's real meat on the bone, in terms of like things to do. Why? Why is this happening now? What is the confluence of the forces happening, that are making this happen? >> Yeah, I mean if you go back to the Cloudera days, with big data, and so on, that was more about data processing. Like how can we process data, so we can extract numbers from it, and do reporting, and maybe take some actions, like this is a fraud transaction, or this is not. And in the meanwhile, many of the researchers working in the neural network, and deep neural network space, were trying to focus on data understanding, like how can I understand the data, and learn from it, so I can take actual actions, based on the data directly, just like a human does. And we were only good at doing that at the level of somebody who was five years old, or seven years old, all the way until about 2013. And starting in 2013, which is only 10 years ago, a number of key innovations started taking place, and each one added on. It was not one major innovation that just took place. It was a couple of really incremental ones, but they added on top of each other, in a very exponentially additive way, that led to, by the end of 2019, we now have models, deep neural network models, that can read and understand human text just like we do. Right? And they can reason about it, and argue with you, and explain it to you. And I think that's what is unlocking this whole new wave of innovation that we're seeing right now. So data understanding would be the essence of it. >> So it's not a Big Bang kind of theory, it's been evolving over time, and I think that the tipping point has been the advancements and other things. 
I mean look at cloud computing, and look how fast it just crept up on AWS. I mean AWS you back three, five years ago, I was talking to Swami yesterday, and their big news about AI, expanding the Hugging Face's relationship with AWS. And just three, five years ago, there wasn't a model training models out there. But as compute comes out, and you got more horsepower,, these large language models, these foundational models, they're flexible, they're not monolithic silos, they're interacting. There's a whole new, almost fusion of data happening. Do you see that? I mean is that part of this? >> Of course, of course. I mean this wave is building on all the previous waves. We wouldn't be at this point if we did not have hardware that can scale, in a very efficient way. We wouldn't be at this point, if we don't have data that we're collecting about everything we do, that we're able to process in this way. So this, this movement, this motion, this phase we're in, absolutely builds on the shoulders of all the previous phases. For some of the observers from the outside, when they see chatGPT for the first time, for them was like, "Oh my god, this just happened overnight." Like it didn't happen overnight. (laughing) GPT itself, like GPT3, which is what chatGPT is based on, was released a year ahead of chatGPT, and many of us were seeing the power it can provide, and what it can do. I don't know if Ed agrees with that. >> Yeah, Ed? >> I do. Although I would acknowledge that the possibilities now, because of what we've hit from a maturity standpoint, have just opened up in an incredible way, that just wasn't tenable even three years ago. And that's what makes it, it's true that it developed incrementally, in the same way that, you know, the possibilities of a mobile handheld device, you know, in 2006 were there, but when the iPhone came out, the possibilities just exploded. And that's the moment we're in. >> Well, I've had many conversations over the past couple months around this area with chatGPT. John Markoff told me the other day, that he calls it, "The five dollar toy," because it's not that big of a deal, in context to what AI's doing behind the scenes, and all the work that's done on ethics, that's happened over the years, but it has woken up the mainstream, so everyone immediately jumps to ethics. "Does it work? "It's not factual," And everyone who's inside the industry is like, "This is amazing." 'Cause you have two schools of thought there. One's like, people that think this is now the beginning of next gen, this is now we're here, this ain't your grandfather's chatbot, okay?" With NLP, it's got reasoning, it's got other things. >> I'm in that camp for sure. >> Yeah. Well I mean, everyone who knows what's going on is in that camp. And as the naysayers start to get through this, and they go, "Wow, it's not just plagiarizing homework, "it's helping me be better. "Like it could rewrite my memo, "bring the lead to the top." It's so the format of the user interface is interesting, but it's still a data-driven app. >> Absolutely. >> So where does it go from here? 'Cause I'm not even calling this the first ending. This is like pregame, in my opinion. What do you guys see this going, in terms of scratching the surface to what happens next? >> I mean, I'll start with, I just don't see how an application is going to look the same in the next three years. Who's going to want to input data manually, in a form field? 
Who is going to want, or expect, to have to put in some text in a search box, and then read through 15 different possibilities, and try to figure out which one of them actually most closely resembles the question they asked? You know, I don't see that happening. Who's going to start with an absolute blank sheet of paper, and expect no help? That is not how an application will work in the next three years, and it's going to fundamentally change how people interact and spend time with opening any element on their mobile phone, or on their computer, to get something done. >> Yes. I agree with that. Like every single application, over the next five years, will be rewritten, to fit within this model. So imagine an HR application, I don't want to name companies, but imagine an HR application, and you go into application and you clicking on buttons, because you want to take two weeks of vacation, and menus, and clicking here and there, reasons and managers, versus just telling the system, "I'm taking two weeks of vacation, going to Las Vegas," book it, done. >> Yeah. >> And the system just does it for you. If you weren't completing in your input, in your description, for what you want, then the system asks you back, "Did you mean this? "Did you mean that? "Were you trying to also do this as well?" >> Yeah. >> "What was the reason?" And that will fit it for you, and just do it for you. So I think the user interface that we have with apps, is going to change to be very similar to the user interface that we have with each other. And that's why all these apps will need to evolve. >> I know we don't have a lot of time, 'cause you guys are very busy, but I want to definitely have multiple segments with you guys, on this topic, because there's so much to talk about. There's a lot of parallels going on here. I was talking again with Swami who runs all the AI database at AWS, and I asked him, I go, "This feels a lot like the original AWS. "You don't have to provision a data center." A lot of this heavy lifting on the back end, is these large language models, with these foundational models. So the bottleneck in the past, was the energy, and cost to actually do it. Now you're seeing it being stood up faster. So there's definitely going to be a tsunami of apps. I would see that clearly. What is it? We don't know yet. But also people who are going to leverage the fact that I can get started building value. So I see a startup boom coming, and I see an application tsunami of refactoring things. >> Yes. >> So the replatforming is already kind of happening. >> Yes, >> OpenAI, chatGPT, whatever. So that's going to be a developer environment. I mean if Amazon turns this into an API, or a Microsoft, what you guys are doing. >> We're turning it into API as well. That's part of what we're doing as well, yes. >> This is why this is exciting. Amr, you've lived the big data dream, and and we used to talk, if you didn't have a big data problem, if you weren't full of data, you weren't really getting it. Now people have all the data, and they got to stand this up. >> Yeah. >> So the analogy is again, the mobile, I like the mobile movement, and using mobile as an analogy, most companies were not building for a mobile environment, right? They were just building for the web, and legacy way of doing apps. And as soon as the user expectations shifted, that my expectation now, I need to be able to do my job on this small screen, on the mobile device with a touchscreen. 
Everybody had to invest in re-architecting, and re-implementing every single app, to fit within that model, and that model of interaction. And we are seeing the exact same thing happen now. And one of the core things we're focused on at Vectara, is how to simplify that for organizations, because a lot of them are overwhelmed by large language models, and ML. >> They don't have the staff. >> Yeah, yeah, yeah. They're understaffed, they don't have the skills. >> But they got developers, they've got DevOps, right? >> Yes. >> So they have the DevSecOps going on. >> Exactly, yes. >> So our goal is to simplify it enough for them that they can start leveraging this technology effectively, within their applications. >> Ed, you're the COO of the company, obviously a startup. You guys are growing. You got great backup, and good team. You've also done a lot of business development, and technical business development in this area. If you look at the landscape right now, and I agree the apps are coming, every company I talk to, that has that jet chatGPT of, you know, epiphany, "Oh my God, look how cool this is. "Like magic." Like okay, it's code, settle down. >> Mm hmm. >> But everyone I talk to is using it in a very horizontal way. I talk to a very senior person, very tech alpha geek, very senior person in the industry, technically. they're using it for log data, they're using it for configuration of routers. And in other areas, they're using it for, every vertical has a use case. So this is horizontally scalable from a use case standpoint. When you hear horizontally scalable, first thing I chose in my mind is cloud, right? >> Mm hmm. >> So cloud, and scalability that way. And the data is very specialized. So now you have this vertical specialization, horizontally scalable, everyone will be refactoring. What do you see, and what are you seeing from customers, that you talk to, and prospects? >> Yeah, I mean put yourself in the shoes of an application developer, who is actually trying to make their application a bit more like magic. And to have that soon-to-be, honestly, expected experience. They've got to think about things like performance, and how efficiently that they can actually execute a query, or a question. They've got to think about cost. Generative isn't cheap, like the inference of it. And so you've got to be thoughtful about how and when you take advantage of it, you can't use it as a, you know, everything looks like a nail, and I've got a hammer, and I'm going to hit everything with it, because that will be wasteful. Developers also need to think about how they're going to take advantage of, but not lose their own data. So there has to be some controls around what they feed into the large language model, if anything. Like, should they fine tune a large language model with their own data? Can they keep it logically separated, but still take advantage of the powers of a large language model? And they've also got to take advantage, and be aware of the fact that when data is generated, that it is a different class of data. It might not fully be their own. >> Yeah. >> And it may not even be fully verified. And so when the logical cycle starts, of someone making a request, the relationship between that request, and the output, those things have to be stored safely, logically, and identified as such. >> Yeah. >> And taken advantage of in an ongoing fashion. 
So these are mega problems, each one of them independently, that, you know, you can think of it as middleware companies need to take advantage of, and think about, to help the next wave of application development be logical, sensible, and effective. It's not just calling some raw API on the cloud, like openAI, and then just, you know, you get your answer and you're done, because that is a very brute force approach. >> Well also I will point, first of all, I agree with your statement about the apps experience, that's going to be expected, form filling. Great point. The interesting about chatGPT. >> Sorry, it's not just form filling, it's any action you would like to take. >> Yeah. >> Instead of clicking, and dragging, and dropping, and doing it on a menu, or on a touch screen, you just say it, and it's and it happens perfectly. >> Yeah. It's a different interface. And that's why I love that UIUX experiences, that's the people falling out of their chair moment with chatGPT, right? But a lot of the things with chatGPT, if you feed it right, it works great. If you feed it wrong and it goes off the rails, it goes off the rails big. >> Yes, yes. >> So the the Bing catastrophes. >> Yeah. >> And that's an example of garbage in, garbage out, classic old school kind of comp-side phrase that we all use. >> Yep. >> Yes. >> This is about data in injection, right? It reminds me the old SQL days, if you had to, if you can sling some SQL, you were a magician, you know, to get the right answer, it's pretty much there. So you got to feed the AI. >> You do, Some people call this, the early word to describe this as prompt engineering. You know, old school, you know, search, or, you know, engagement with data would be, I'm going to, I have a question or I have a query. New school is, I have, I have to issue it a prompt, because I'm trying to get, you know, an action or a reaction, from the system. And the active engineering, there are a lot of different ways you could do it, all the way from, you know, raw, just I'm going to send you whatever I'm thinking. >> Yeah. >> And you get the unintended outcomes, to more constrained, where I'm going to just use my own data, and I'm going to constrain the initial inputs, the data I already know that's first party, and I trust, to, you know, hyper constrain, where the application is actually, it's looking for certain elements to respond to. >> It's interesting Amr, this is why I love this, because one we are in the media, we're recording this video now, we'll stream it. But we got all your linguistics, we're talking. >> Yes. >> This is data. >> Yep. >> So the data quality becomes now the new intellectual property, because, if you have that prompt source data, it makes data or content, in our case, the original content, intellectual property. >> Absolutely. >> Because that's the value. And that's where you see chatGPT fall down, is because they're trying to scroll the web, and people think it's search. It's not necessarily search, it's giving you something that you wanted. It is a lot of that, I remember in Cloudera, you said, "Ask the right questions." Remember that phrase you guys had, that slogan? >> Mm hmm. And that's prompt engineering. So that's exactly, that's the reinvention of "Ask the right question," is prompt engineering is, if you don't give these models the question in the right way, and very few people know how to frame it in the right way with the right context, then you will get garbage out. Right? That is the garbage in, garbage out. 
But if you specify the question correctly, and you provide with it the metadata that constrain what that question is going to be acted upon or answered upon, then you'll get much better answers. And that's exactly what we solved Vectara. >> Okay. So before we get into the last couple minutes we have left, I want to make sure we get a plug in for the opportunity, and the profile of Vectara, your new company. Can you guys both share with me what you think the current situation is? So for the folks who are now having those moments of, "Ah, AI's bullshit," or, "It's not real, it's a lot of stuff," from, "Oh my god, this is magic," to, "Okay, this is the future." >> Yes. >> What would you say to that person, if you're at a cocktail party, or in the elevator say, "Calm down, this is the first inning." How do you explain the dynamics going on right now, to someone who's either in the industry, but not in the ropes? How would you explain like, what this wave's about? How would you describe it, and how would you prepare them for how to change their life around this? >> Yeah, so I'll go first and then I'll let Ed go. Efficiency, efficiency is the description. So we figured that a way to be a lot more efficient, a way where you can write a lot more emails, create way more content, create way more presentations. Developers can develop 10 times faster than they normally would. And that is very similar to what happened during the Industrial Revolution. I always like to look at examples from the past, to read what will happen now, and what will happen in the future. So during the Industrial Revolution, it was about efficiency with our hands, right? So I had to make a piece of cloth, like this piece of cloth for this shirt I'm wearing. Our ancestors, they had to spend month taking the cotton, making it into threads, taking the threads, making them into pieces of cloth, and then cutting it. And now a machine makes it just like that, right? And the ancestors now turned from the people that do the thing, to manage the machines that do the thing. And I think the same thing is going to happen now, is our efficiency will be multiplied extremely, as human beings, and we'll be able to do a lot more. And many of us will be able to do things they couldn't do before. So another great example I always like to use is the example of Google Maps, and GPS. Very few of us knew how to drive a car from one location to another, and read a map, and get there correctly. But once that efficiency of an AI, by the way, behind these things is very, very complex AI, that figures out how to do that for us. All of us now became amazing navigators that can go from any point to any point. So that's kind of how I look at the future. >> And that's a great real example of impact. Ed, your take on how you would talk to a friend, or colleague, or anyone who asks like, "How do I make sense of the current situation? "Is it real? "What's in it for me, and what do I do?" I mean every company's rethinking their business right now, around this. What would you say to them? >> You know, I usually like to show, rather than describe. And so, you know, the other day I just got access, I've been using an application for a long time, called Notion, and it's super popular. There's like 30 or 40 million users. And the new version of Notion came out, which has AI embedded within it. And it's AI that allows you primarily to create. 
So if you could break down the world of AI into find and create, for a minute, just kind of logically separate those two things, find is certainly going to be massively impacted in our experiences as consumers on, you know, Google and Bing, and I can't believe I just said the word Bing in the same sentence as Google, but that's what's happening now (all laughing), because it's a good example of change. >> Yes. >> But also inside the business. But on the crate side, you know, Notion is a wiki product, where you try to, you know, note down things that you are thinking about, or you want to share and memorialize. But sometimes you do need help to get it down fast. And just in the first day of using this new product, like my experience has really fundamentally changed. And I think that anybody who would, you know, anybody say for example, that is using an existing app, I would show them, open up the app. Now imagine the possibility of getting a starting point right off the bat, in five seconds of, instead of having to whole cloth draft this thing, imagine getting a starting point then you can modify and edit, or just dispose of and retry again. And that's the potential for me. I can't imagine a scenario where, in a few years from now, I'm going to be satisfied if I don't have a little bit of help, in the same way that I don't manually spell check every email that I send. I automatically spell check it. I love when I'm getting type ahead support inside of Google, or anything. Doesn't mean I always take it, or when texting. >> That's efficiency too. I mean the cloud was about developers getting stuff up quick. >> Exactly. >> All that heavy lifting is there for you, so you don't have to do it. >> Right? >> And you get to the value faster. >> Exactly. I mean, if history taught us one thing, it's, you have to always embrace efficiency, and if you don't fast enough, you will fall behind. Again, looking at the industrial revolution, the companies that embraced the industrial revolution, they became the leaders in the world, and the ones who did not, they all like. >> Well the AI thing that we got to watch out for, is watching how it goes off the rails. If it doesn't have the right prompt engineering, or data architecture, infrastructure. >> Yes. >> It's a big part. So this comes back down to your startup, real quick, I know we got a couple minutes left. Talk about the company, the motivation, and we'll do a deeper dive on on the company. But what's the motivation? What are you targeting for the market, business model? The tech, let's go. >> Actually, I would like Ed to go first. Go ahead. >> Sure, I mean, we're a developer-first, API-first platform. So the product is oriented around allowing developers who may not be superstars, in being able to either leverage, or choose, or select their own large language models for appropriate use cases. But they that want to be able to instantly add the power of large language models into their application set. We started with search, because we think it's going to be one of the first places that people try to take advantage of large language models, to help find information within an application context. And we've built our own large language models, focused on making it very efficient, and elegant, to find information more quickly. So what a developer can do is, within minutes, go up, register for an account, and get access to a set of APIs, that allow them to send data, to be converted into a format that's easy to understand for large language models, vectors. 
And then secondarily, they can issue queries, ask questions. And they can ask them very, the questions that can be asked, are very natural language questions. So we're talking about long form sentences, you know, drill down types of questions, and they can get answers that either come back in depending upon the form factor of the user interface, in list form, or summarized form, where summarized equals the opportunity to kind of see a condensed, singular answer. >> All right. I have a. >> Oh okay, go ahead, you go. >> I was just going to say, I'm going to be a customer for you, because I want, my dream was to have a hologram of theCUBE host, me and Dave, and have questions be generated in the metaverse. So you know. (all laughing) >> There'll be no longer any guests here. They'll all be talking to you guys. >> Give a couple bullets, I'll spit out 10 good questions. Publish a story. This brings the automation, I'm sorry to interrupt you. >> No, no. No, no, I was just going to follow on on the same. So another way to look at exactly what Ed described is, we want to offer you chatGPT for your own data, right? So imagine taking all of the recordings of all of the interviews you have done, and having all of the content of that being ingested by a system, where you can now have a conversation with your own data and say, "Oh, last time when I met Amr, "which video games did we talk about? "Which movie or book did we use as an analogy "for how we should be embracing data science, "and big data, which is moneyball," I know you use moneyball all the time. And you start having that conversation. So, now the data doesn't become a passive asset that you just have in your organization. No. It's an active participant that's sitting with you, on the table, helping you make decisions. >> One of my favorite things to do with customers, is to go to their site or application, and show them me using it. So for example, one of the customers I talked to was one of the biggest property management companies in the world, that lets people go and rent homes, and houses, and things like that. And you know, I went and I showed them me searching through reviews, looking for information, and trying different words, and trying to find out like, you know, is this place quiet? Is it comfortable? And then I put all the same data into our platform, and I showed them the world of difference you can have when you start asking that question wholeheartedly, and getting real information that doesn't have anything to do with the words you asked, but is really focused on the meaning. You know, when I asked like, "Is it quiet?" You know, answers would come back like, "The wind whispered through the trees peacefully," and you know, it's like nothing to do with quiet in the literal word sense, but in the meaning sense, everything to do with it. And that that was magical even for them, to see that. >> Well you guys are the front end of this big wave. Congratulations on the startup, Amr. I know you guys got great pedigree in big data, and you've got a great team, and congratulations. Vectara is the name of the company, check 'em out. Again, the startup boom is coming. This will be one of the major waves, generative AI is here. I think we'll look back, and it will be pointed out as a major inflection point in the industry. >> Absolutely. >> There's not a lot of hype behind that. People are are seeing it, experts are. So it's going to be fun, thanks for watching. >> Thanks John. (soft music)
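Ed's description of the developer flow, register for an account, send data to be converted into vectors, then ask natural-language questions and get either a result list or a condensed answer, can be sketched as a thin client like the one below. The host, endpoint paths, field names, and header are hypothetical placeholders for illustration, not Vectara's documented API.

```python
# Sketch of the index-then-ask flow described above. The host, endpoint paths,
# field names, and header are hypothetical placeholders, not Vectara's actual API.
import requests

API_BASE = "https://api.example-semantic-search.com"   # placeholder host
API_KEY = "YOUR_API_KEY"                                # placeholder credential

def index_document(corpus_id: str, doc_id: str, text: str) -> None:
    """Send raw text so the service can convert it into vectors."""
    requests.post(
        f"{API_BASE}/v1/index",
        json={"corpusId": corpus_id, "documentId": doc_id, "text": text},
        headers={"x-api-key": API_KEY},
        timeout=30,
    ).raise_for_status()

def ask(corpus_id: str, question: str, summarize: bool = True) -> dict:
    """Ask a natural-language question; optionally request a condensed answer."""
    resp = requests.post(
        f"{API_BASE}/v1/query",
        json={"corpusId": corpus_id, "query": question, "summarize": summarize},
        headers={"x-api-key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# e.g. index every interview transcript, then:
# answer = ask("thecube-interviews", "Which movie did Amr use as a data science analogy?")
```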

Published Date : Feb 23 2023


SiliconANGLE News | AWS Responds to OpenAI with Hugging Face Expanded Partnership


 

(upbeat music) >> Hello everyone. Welcome to Silicon Angle news breaking story here. Amazon Web Services, expanding their relationship with Hugging Face, breaking news here on Silicon Angle. I'm John Furrier, Silicon Angle reporter, founder and also co-host of theCUBE. And I have with me Swami from Amazon Web Services, vice president of database analytics machine learning with AWS. Swami, great to have you on for this breaking news segment on AWS's big news. Thanks for coming on, taking the time. >> Hey John, pleasure to be here. >> We've had many conversations on theCUBE over the years. We've watched Amazon really move fast into the large data modeling. You SageMaker became a very smashing success. Obviously you've been on this for a while, now with Chat GPT, open AI, a lot of buzz going mainstream, takes it from behind the curtain, inside the ropes, if you will, in the industry to a mainstream. And so this is a big moment I think in the industry. I want to get your perspective because your news with Hugging Face, I think is a is another tell sign that we're about to tip over into a new accelerated growth around making AI now application aware application centric, more programmable, more API access. What's the big news about with AWS Hugging Face, you know, what's going on with this announcement? >> Yeah, first of all, they're very excited to announce our expanded collaboration with Hugging Face because with this partnership, our goal, as you all know, I mean Hugging Face I consider them like the GitHub for machine learning. And with this partnership, Hugging Face and AWS will be able to democratize AI for a broad range of developers, not just specific deep AI startups. And now with this we can accelerate the training, fine tuning, and deployment of these large language models and vision models from Hugging Face in the cloud. So, and the broader context, when you step back and see what customer problem we are trying to solve with this announcement, essentially if you see these foundational models are used to now create like a huge number of applications, suggest like tech summarization, question answering, or search image generation, creative, other things. And these are all stuff we are seeing in the likes of these Chat GPT style applications. But there is a broad range of enterprise use cases that we don't even talk about. And it's because these kind of transformative generative AI capabilities and models are not available to, I mean, millions of developers. And because either training these elements from scratch can be very expensive or time consuming and need deep expertise, or more importantly, they don't need these generic models. They need them to be fine tuned for the specific use cases. And one of the biggest complaints we hear is that these models, when they try to use it for real production use cases, they are incredibly expensive to train and incredibly expensive to run inference on, to use it at a production scale, so And unlike search, web search style applications where the margins can be really huge, here in production use cases and enterprises, you want efficiency at scale. That's where a Hugging Face and AWS share our mission. And by integrating with Trainium and Inferentia, we're able to handle the cost efficient training and inference at scale. I'll deep dive on it and by training teaming up on the SageMaker front now the time it takes to build these models and fine tune them as also coming down. So that's what makes this partnership very unique as well. 
So I'm very excited. >> I want to get into the, to the time savings and the cost savings as well on the on the training and inference. It's a huge issue. But before we get into that, just how long have you guys been working with Hugging Face? I know this is a previous relationship. This is an expansion of that relationship. Can you comment on the what's different about what's happened before and then now? >> Yeah, so Hugging Face, we have had an great relationship in the past few years as well where they have actually made their models available to run on AWS in a fashion, even inspect their Bloom project was something many of our customers even used. Bloom Project for context is their open source project, which builds a GPT three style model. And now with this expanded collaboration, now Hugging Face selected AWS for that next generation of this generative AI model, building on their highly successful Bloom project as well. And the nice thing is now by direct integration with Trainium and Inferentia, where you get cost savings in a really significant way. Now for instance, tier 1 can provide up to 50% cost to train savings, and Inferentia can deliver up to 60% better costs and Forex more higher throughput. Now these models, especially as they train that next generation generated AI model, it is going to be not only more accessible to all the developers who use it in open. So it'll be a lot cheaper as well. And that's what makes this moment really exciting because yeah, we can't democratize AI unless we make it broadly accessible and cost efficient, and easy to program and use as well. >> Okay, thanks Swami. We really appreciate. Swami's a Cube alumni, but also vice President, database analyst machine learning web services breaking down the Hugging Face announcement. Obviously the relationship he called it the GitHub of machine learning. This is the beginning of what we will see, a continuing competitive battle with Microsoft. Microsoft launching OpenAI. Amazon's been doing it for years. They got Alexa, they know what they're doing. It's going to be very interesting to see how this all plays out. You're watching Silicon Angle News, breaking here. I'm John Furrier, host of the Cube. Thanks for watching. (ethereal music)

Published Date : Feb 23 2023

SUMMARY :

John Furrier breaks down AWS's expanded collaboration with Hugging Face with Swami Sivasubramanian, AWS vice president of database, analytics and machine learning. They discuss how SageMaker, Trainium and Inferentia aim to cut the time and cost of training and running large language models, and what the partnership means for democratizing generative AI for developers.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Amazon Web Services | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Swami | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
millions | QUANTITY | 0.99+
GitHub | ORGANIZATION | 0.98+
Alexa | TITLE | 0.98+
Inferentia | ORGANIZATION | 0.97+
Silicon Angle | ORGANIZATION | 0.97+
Trainium | ORGANIZATION | 0.97+
Hugging Face | ORGANIZATION | 0.96+
one | QUANTITY | 0.95+
up to 60% | QUANTITY | 0.95+
up to 50% | QUANTITY | 0.95+
Cube | ORGANIZATION | 0.94+
Hugging Face | TITLE | 0.94+
Chat GPT | TITLE | 0.86+
Bloom | PERSON | 0.84+
OpenAI | TITLE | 0.83+
theCUBE | ORGANIZATION | 0.77+
Chat GPT | TITLE | 0.76+
1 | OTHER | 0.75+
Silicon Angle News | TITLE | 0.74+
Face | TITLE | 0.73+
Bloom | TITLE | 0.72+
developers | QUANTITY | 0.7+
Trainium | TITLE | 0.7+
Silicon Angle | ORGANIZATION | 0.64+
past few years | DATE | 0.63+
Bloom | ORGANIZATION | 0.56+
SiliconANGLE News | TITLE | 0.55+
SageMaker | TITLE | 0.53+
tier | QUANTITY | 0.52+
Hugging | ORGANIZATION | 0.49+
Silicon | ORGANIZATION | 0.48+
Angle | LOCATION | 0.47+

SiliconANGLE News | Swami Sivasubramanian Extended Version


 

(bright upbeat music) >> Hello, everyone. Welcome to SiliconANGLE News breaking story here. Amazon Web Services expanding their relationship with Hugging Face, breaking news here on SiliconANGLE. I'm John Furrier, SiliconANGLE reporter, founder, and also co-host of theCUBE. And I have with me, Swami, from Amazon Web Services, vice president of database, analytics, machine learning with AWS. Swami, great to have you on for this breaking news segment on AWS's big news. Thanks for coming on and taking the time. >> Hey, John, pleasure to be here. >> You know- >> Looking forward to it. >> We've had many conversations on theCUBE over the years, we've watched Amazon really move fast into the large data modeling, SageMaker became a very smashing success, obviously you've been on this for a while. Now with ChatGPT, OpenAI, a lot of buzz going mainstream, takes it from behind the curtain inside the ropes, if you will, in the industry to a mainstream. And so this is a big moment, I think, in the industry, I want to get your perspective, because your news with Hugging Face, I think is another tell sign that we're about to tip over into a new accelerated growth around making AI now application aware, application centric, more programmable, more API access. What's the big news about, with AWS Hugging Face, you know, what's going on with this announcement? >> Yeah. First of all, we're very excited to announce our expanded collaboration with Hugging Face, because with this partnership, our goal, as you all know, I mean, Hugging Face, I consider them like the GitHub for machine learning. And with this partnership, Hugging Face and AWS, we'll be able to democratize AI for a broad range of developers, not just specific deep AI startups. And now with this, we can accelerate the training, fine tuning and deployment of these large language models, and vision models from Hugging Face in the cloud. And the broader context, when you step back and see what customer problem we are trying to solve with this announcement, essentially if you see these foundational models, are used to now create like a huge number of applications, such as text summarization, question answering, or search, image generation, creative, other things. And these are all stuff we are seeing in the likes of these ChatGPT style applications. But there is a broad range of enterprise use cases that we don't even talk about. And it's because these kinds of transformative, generative AI capabilities and models are not available to, I mean, millions of developers. And because either training these models from scratch can be very expensive or time consuming and need deep expertise, or more importantly, they don't need these generic models, they need them to be fine tuned for the specific use cases. And one of the biggest complaints we hear is that these models, when they try to use it for real production use cases, they are incredibly expensive to train and incredibly expensive to run inference on, to use it at a production scale. So, and unlike web search style applications, where the margins can be really huge, here in production use cases and enterprises, you want efficiency at scale. That's where Hugging Face and AWS share our mission. And by integrating with Trainium and Inferentia, we're able to handle the cost efficient training and inference at scale, I'll deep dive on it. And by teaming up on the SageMaker front, now the time it takes to build these models and fine tune them is also coming down.
So that's what makes this partnership very unique as well. So I'm very excited. >> I want to get into the time savings and the cost savings as well on the training and inference, it's a huge issue, but before we get into that, just how long have you guys been working with Hugging Face? I know there's a previous relationship, this is an expansion of that relationship, can you comment on what's different about what's happened before and then now? >> Yeah. So, Hugging Face, we have had a great relationship in the past few years as well, where they have actually made their models available to run on AWS, in a fashion. In fact, their Bloom Project was something many of our customers even used. Bloom Project, for context, is their open source project which builds a GPT-3 style model. And now with this expanded collaboration, now Hugging Face selected AWS for that next generation of generative AI model, building on their highly successful Bloom Project as well. And the nice thing is, now, by direct integration with Trainium and Inferentia, where you get cost savings in a really significant way, now, for instance, Trn1 can provide up to 50% cost to train savings, and Inferentia can deliver up to 60% better costs, and four x higher throughput than (indistinct). Now, these models, especially as they train that next generation of generative AI models, it is going to be, not only more accessible to all the developers, who use it in the open, so it'll be a lot cheaper as well. And that's what makes this moment really exciting, because we can't democratize AI unless we make it broadly accessible and cost efficient and easy to program and use as well. >> Yeah. >> So very exciting. >> I'll get into the SageMaker and CodeWhisperer angle in a second, but you hit on some good points there. One, accessibility, which is, I call the democratization, which is getting this in the hands of developers, and/or AI to develop, we'll get into that in a second. So, access to coding and Git reasoning is a whole nother wave. But the three things I know you've been working on, I want to put in the buckets here and comment, one, I know you've, over the years, been working on saving time to train, that's a big point, you mentioned some of those stats, also cost, 'cause now cost is an equation on, you know, bundling whether you're uncoupling with hardware and software, that's a big issue. Where do I find the GPUs? Where's the horsepower cost? And then also sustainability. You've mentioned that in the past, is there a sustainability angle here? Can you talk about those three things, time, cost, and sustainability? >> Certainly. So if you look at it from the AWS perspective, we have been supporting customers doing machine learning for the past years. Just for broader context, Amazon has been doing ML the past two decades, right from the early days of ML powered recommendation to actually also supporting all kinds of generative AI applications. If you look at even generative AI application within Amazon, Amazon search, when you go search for a product and so forth, we have a team called MFi within Amazon search that helps bring these large language models into creating highly accurate search results. And these are created with models, really large models with tens of billions of parameters, scales to thousands of training jobs every month and trained on a large amount of hardware.
And this is an example of a really good large language foundation model application running at production scale, and also, of course, Alexa, which uses a large generative model as well. And they actually even had a research paper that showed that they do better in accuracy than other systems like GPT-3 and whatnot. So, and we also touched on things like CodeWhisperer, which uses generative AI to improve developer productivity, but in a responsible manner, because some of the studies show 40% of this generated code had serious security flaws in it. This is where we didn't just do generative AI, we combined it with automated reasoning capabilities, which is a very, very useful technique to identify these issues and couple them so that it produces highly secure code as well. Now, all these learnings taught us a few things, and which is what you put in these three buckets. And yeah, like more than 100,000 customers using ML and AI services, including leading startups in the generative AI space, like Stability AI, AI21 Labs, or Hugging Face, or even Alexa, for that matter. They care about, I put them in three dimensions, one is around cost, which we touched on with Trainium and Inferentia, where we actually, the Trainium, provide up to 50% better cost savings, but the other aspect is, Trainium is a lot more power efficient as well compared to traditional ones. And Inferentia is also better in terms of throughput, when it comes to what it is capable of. Like it is able to deliver up to three x higher compute performance and four x higher throughput, compared to its previous generation, and it is extremely cost efficient and power efficient as well. >> Well. >> Now, the second element that really is important is that developers deeply value the time it takes to build these models, and they don't want to build models from scratch. And this is where SageMaker, which is, even according to Kaggle users, the number one enterprise ML platform, comes in. What it did to traditional machine learning, where tens of thousands of customers use SageMaker today, including the ones I mentioned, is that what used to take like months to build these models has dropped down to now a matter of days, if not less. Now, in generative AI, the cost of building these models, if you look at the landscape, the model parameter size had jumped by more than a thousand x in the past three years, a thousand x. And that means the training is like a really big distributed systems problem. How do you actually scale these model training? How do you actually ensure that you utilize these efficiently? Because these machines are very expensive, let alone they consume a lot of power. So, this is where SageMaker's capability to build, automatically train, tune, and deploy models really comes in, especially with this distributed training infrastructure, and those are some of the reasons why some of the leading generative AI startups are actually leveraging it, because they do not want a giant infrastructure team, which is constantly tuning and fine tuning, and keeping these clusters alive. >> It sounds a lot like what startups are doing with the cloud early days, no data center, you move to the cloud. So, this is the trend we're seeing, right? You guys are making it easier for developers with Hugging Face, I get that.
I love that GitHub for machine learning, large language models are complex and expensive to build, but not anymore, you got Trainium and Inferentia, developers can get faster time to value, but then you got the transformers, data sets, token libraries, all that optimized for generative AI. This is a perfect storm for startups. Jon Turow, a former AWS person, who used to work, I think for you, is now a VC at Madrona Venture, he and I were talking about the generative AI landscape, it's exploding with startups. Every alpha entrepreneur out there is seeing this as the next frontier, that's the 20-mile stare, the next 10 years is going to be huge. What is the big thing that's happened? 'Cause some people were saying, the founder of Yquem said, "Oh, the startups won't be real, because they don't all have AI experience." John Markoff, former New York Times writer, told me that, AI, there's so much work done, this is going to explode, accelerate really fast, because it's almost like it's been waiting for this moment. What's your reaction? >> I actually think there is going to be an explosion of startups, not because they need to be AI startups, but now finally AI is really accessible or going to be accessible, so that they can create remarkable applications, either for enterprises or for disrupting actually how customer service is being done or how creative tools are being built. And I mean, this is going to change in many ways. When we think about generative AI, we always like to think of how it generates like school homework or arts or music or whatnot, but when you look at it on the practical side, generative AI is being actually used across various industries. I'll give an example of like Autodesk. Autodesk is a customer who runs on AWS and SageMaker. They already have an offering that enables generative design, where designers can generate many structural designs for products, whereby you give a specific set of constraints and they actually can generate a structure accordingly. And we see similar kind of trend across various industries, where it can be around creative media editing or various others. I have the strong sense that literally, in the next few years, just like now, conventional machine learning is embedded in every application, every mobile app that we see, it is pervasive, and we don't even think twice about it, same way, like almost all apps are built on cloud. Generative AI is going to be part of every startup, and they are going to create remarkable experiences without needing actually, these deep generative AI scientists. But you won't get that until you actually make these models accessible. And I also don't think one model is going to rule the world, then you want these developers to have access to broad range of models. Just like, go back to the early days of deep learning. Everybody thought it is going to be one framework that will rule the world, and it has been changing, from Caffe to TensorFlow to PyTorch to various other things. And I have a suspicion, we had to enable developers where they are, so. >> You know, Dave Vellante and I have been riffing on this concept called super cloud, and a lot of people have co-opted to be multicloud, but we really were getting at this whole next layer on top of say, AWS. You guys are the most comprehensive cloud, you guys are a super cloud, and even Adam and I are talking about ISVs evolving to ecosystem partners. I mean, your top customers have ecosystems building on top of it. This feels like a whole nother AWS.
How are you guys leveraging the history of AWS, which by the way, had the same trajectory, startups came in, they didn't want to provision a data center, the heavy lifting, all the things that have made Amazon successful culturally. And day one thinking is, provide the heavy lifting, undifferentiated heavy lifting, and make it faster for developers to program code. AI's got the same thing. How are you guys taking this to the next level, because now, this is an opportunity for the competition to change the game and take it over? This is, I'm sure, a conversation, you guys have a lot of things going on in AWS that makes you unique. What's the internal and external positioning around how you take it to the next level? >> I mean, so I agree with you that generative AI has a very, very strong potential in terms of what it can enable in terms of next generation application. But this is where Amazon's experience and expertise in putting these foundation models to work internally really has helped us quite a bit. If you look at it, like amazon.com search is like a very, very important application in terms of what is the customer impact on number of customers who use that application openly, and the amount of dollar impact it does for an organization. And we have been doing it silently for a while now. And the same thing is true for like Alexa too, which not only uses it for natural language understanding, but also leverages it for creating stories and various other examples. And now, our approach to it from AWS is we actually look at it in terms of the same three tiers like we did in machine learning, because when you look at generative AI, we genuinely see three sets of customers. One is, like really deep technical expert practitioner startups. These are the startups that are creating the next generation models like the likes of Stability AI or Hugging Face with Bloom or AI21. And they generally want to build their own models, and they want the best price performance of their infrastructure for training and inference. That's where our investments in silicon and hardware and networking innovations, where Trainium and Inferentia really play a big role. And we can really do that, and that is one. The second middle tier is where I do think developers don't want to spend time building their own models, let alone, they actually want the model to be useful to that data. They don't need their models to create like high school homeworks or various other things. What they generally want is, hey, I had this data from my enterprises that I want to fine tune and make it really work only for this, and make it work remarkable, can be for text summarization, to generate a report, or it can be for better Q&A, and so forth. This is where we are. Our investments in the middle tier with SageMaker, and our partnership with Hugging Face and AI21 and Cohere are all going to be very meaningful. And you'll see us investing, I mean, you already talked about CodeWhisperer, which is in open preview, but we are also partnering with a whole lot of top ISVs, and you'll see more on this front to enable the next wave of generative AI apps too, because this is an area where we do think a lot of innovation is yet to be done. It's like day one for us in this space, and we want to enable that huge ecosystem to flourish.
>> You know, one of the things Dave Vellante and I were talking about in our first podcast we just did on Friday, we're going to do weekly, is we highlighted the AI ChatGPT example as a horizontal use case, because everyone loves it, people are using it in all their different verticals, and horizontal scalable cloud plays perfectly into it. So I have to ask you, as you look at what AWS is going to bring to the table, a lot's changed over the past 13 years with AWS, a lot more services are available, how should someone rebuild or re-platform and refactor their application or business with AI, with AWS? What are some of the tools that you see and recommend? Is it Serverless, is it SageMaker, CodeWhisperer? What do you think's going to shine brightly within the AWS stack, if you will, or service list, that's going to be part of this? As you mentioned, CodeWhisperer and SageMaker, what else should people be looking at as they start tinkering and getting all these benefits, and scale up their apps? >> You know, if we were a startup, first, I would really work backwards from the customer problem I try to solve, and pick and choose where I don't need to deal with the undifferentiated heavy lifting. And that's where the answer is going to change. If you look at it then, the answer is not going to be like a one size fits all, so you need a very strong... I mean, granted, on the compute front, if you can actually go completely serverless, I will always recommend it, instead of running compute for running your apps, because it takes care of all the undifferentiated heavy lifting. But on the data front, that's where we provide a whole variety of databases, right from like relational data, or non-relational, or Dynamo, and so forth. And of course, we also have a deep analytical stack, where data directly flows from our relational databases into data lakes and data warehouses. And you can get value along with partnership with various analytical providers. The area where I do think fundamentally things are changing on what people can do is like, with CodeWhisperer, I was literally trying to actually program a code on sending a message through Twilio, and I was going to pull up to read the documentation, and in my IDE, I was actually saying like, let's try sending a message to Twilio, or let's actually update a Route 53 record. All I had to do was type in just a comment, and it actually started generating the subroutine. And it is going to be a huge time saver, if I were a developer. And the goal is for us not to actually do it just for AWS developers, and not to just generate the code, but make sure the code is actually highly secure and follows the best practices. So, it's not always about machine learning, it's augmenting with automated reasoning as well. And generative AI is going to be changing, not just how people write code, but also how it actually gets built and used as well. You'll see a lot more stuff coming on this front. >> Swami, thank you for your time. I know you're super busy. Thank you for sharing on the news and giving commentary. Again, I think this is an AWS moment and industry moment, heavy lifting, accelerated value, agility. AIOps is going to be probably redefined here. Thanks for sharing your commentary. And we'll see you next time, I'm looking forward to doing more follow up on this. It's going to be a big wave. Thanks. >> Okay. Thanks again, John, always a pleasure. >> Okay. This is SiliconANGLE's breaking news commentary.
I'm John Furrier with SiliconANGLE News, as well as host of theCUBE. Swami, who's a leader in AWS, has been on theCUBE multiple times. We've been tracking the growth of how Amazon's journey has just been exploding past five years, in particular, past three. You heard the numbers, great performance, great reviews. This is a watershed moment, I think, for the industry, and it's going to be a lot of fun for the next 10 years. Thanks for watching. (bright music)
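Sivasubramanian's Twilio anecdote gives a concrete picture of the comment-to-code workflow he describes. The snippet below is a hand-written illustration of the kind of subroutine such a comment might produce; it is not actual CodeWhisperer output, and the environment variable names and phone number are placeholders.

```python
# send a text message through Twilio   <- the sort of comment a developer might type;
# the function below is an illustrative completion, not real CodeWhisperer output.
import os
from twilio.rest import Client

def send_sms(body: str, to_number: str) -> str:
    """Send an SMS via Twilio and return the message SID."""
    # Credentials come from the environment rather than being hard-coded.
    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
    message = client.messages.create(
        body=body,
        from_=os.environ["TWILIO_FROM_NUMBER"],  # a Twilio-provisioned number
        to=to_number,
    )
    return message.sid

if __name__ == "__main__":
    print(send_sms("Hello from the demo", "+15555550123"))
```

The point he makes about security and best practices shows up even in a toy like this: reading credentials from the environment instead of pasting them into the file is the kind of detail an assistant paired with automated reasoning is meant to get right.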

Published Date : Feb 22 2023

SUMMARY :

In this extended conversation, John Furrier and Swami Sivasubramanian dig deeper into the expanded AWS–Hugging Face collaboration: the time and cost savings from Trainium, Inferentia and SageMaker, the coming explosion of generative AI startups, AWS's three-tier view of generative AI customers, and how tools like CodeWhisperer and automated reasoning change how developers build applications.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Swami | PERSON | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Jon Turow | PERSON | 0.99+
John Markoff | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
40% | QUANTITY | 0.99+
Autodesk | ORGANIZATION | 0.99+
50% | QUANTITY | 0.99+
Madrona Venture | ORGANIZATION | 0.99+
20 mile | QUANTITY | 0.99+
Hugging Face | ORGANIZATION | 0.99+
Friday | DATE | 0.99+
second element | QUANTITY | 0.99+
more than 100,000 customers | QUANTITY | 0.99+
AI21 | ORGANIZATION | 0.99+
tens of thousands | QUANTITY | 0.99+
first podcast | QUANTITY | 0.99+
three tiers | QUANTITY | 0.98+
SiliconANGLE | ORGANIZATION | 0.98+
twice | QUANTITY | 0.98+
Bloom Project | TITLE | 0.98+
one | QUANTITY | 0.98+
SageMaker | ORGANIZATION | 0.98+
Hugging Face | TITLE | 0.98+
Alexa | TITLE | 0.98+
first | QUANTITY | 0.98+
GitHub | ORGANIZATION | 0.98+
one model | QUANTITY | 0.98+
up to 50% | QUANTITY | 0.97+
ChatGPT | TITLE | 0.97+
First | QUANTITY | 0.97+
more than thousand X | QUANTITY | 0.97+
amazon.com | ORGANIZATION | 0.96+
tens of billions | QUANTITY | 0.96+
One | QUANTITY | 0.96+
up to 60% | QUANTITY | 0.96+
one framework | QUANTITY | 0.96+
Yquem | ORGANIZATION | 0.94+
three things | QUANTITY | 0.94+
Inferentia | ORGANIZATION | 0.94+
CodeWhisperer | TITLE | 0.93+
four | QUANTITY | 0.92+
three sets | QUANTITY | 0.92+
three | QUANTITY | 0.92+
Twilio | ORGANIZATION | 0.92+

SiliconANGLE Report: Reporters Notebook with Adrian Cockcroft | AWS re:Invent 2022


 

(soft techno upbeat music) >> Hi there. Welcome back to Las Vegas. This is Dave Vellante with Paul Gillin. re:Invent day one and a half. We started last night, Monday, theCUBE after dark. Now we're going wall to wall. Today. Today was of course the big keynote, Adam Selipsky, kind of the baton now handing, you know, last year when he did his keynote, he was very new. He was sort of still getting his feet wet and finding his groove. Settling in a little bit more this year, learning a lot more, getting deeper into the tech, but of course, sharing the love with other leaders like Peter DeSantis. Tomorrow's going to be Swami in the keynote. Adrian Cockcroft is here. Former AWS, former Netflix cloud architect, currently an analyst. You got your own firm now. You're out there. Great to see you again. Thanks for coming on theCUBE. >> Yeah, thanks. >> We heard you at Supercloud, you gave some really good insights there back in August. So now as an outsider, you come in obviously, you got to be impressed with the size and the ecosystem and the energy. Of course. What were your thoughts on, you know, what you've seen so far, today's keynotes, last night Peter DeSantis, what stood out to you? >> Yeah, I think it's great to be back at re:Invent again. We're kind of pretty much back to where we were before the pandemic sort of shut it down. This is a little, it's almost as big as the largest one that we had before. And everyone's turned up. It just feels like we're back. So that's really good to see. And it's a slightly different style. I think there was more sort of video production things happening. I think in this keynote, more storytelling. I'm not sure it really all stitched together very well. Right. Some of the stories like, how does that follow that? So there were a few things there and some of there were spelling mistakes on the slides, you know, that ELT instead of ETL and they spelled ZFS wrong and something. So it just seemed like there was, I'm not quite sure, just maybe a few things were sort of rushed at the last minute. >> Not really AWS-like, was it? It kind of reminds me of the Patriots, Paul, you know, Bill Belichick's teams are fumbling all over the place. >> That's right. That's right. >> Part of it may be, I mean, the sort of the market. They have a leader in marketing right now but they're going to have a CMO. So that's sort of maybe a lack of a single threaded leader for this thing. Everything's being shared around a bit more. So maybe, I mean, it's all fixable and it's minor. This is minor stuff. I'm just sort of looking at it and going there's a few things that looked like they were not quite as good as they could have been in the way it was put together. Right? >> But I mean, you're taking a, you know, a year of not doing re:Invent. Yeah. Being isolated. You know, we've certainly seen it with theCUBE. It's like, okay, it's not like riding a bike. You know, things that, you know, you got to kind of relearn the muscle memories. It's more like golf than bicycle riding. >> Well, I've done AWS keynotes myself. And they are pretty much scrambled. It looks nice, but there's a lot of scrambling leading up to when it actually goes. Right? And sometimes you can, you sometimes see a little kind of the edges of that, and sometimes it's much more polished. But you know, overall it's pretty good. I think Peter DeSantis' keynote yesterday had a lot of really good meat there. There was some nice presentations, and some great announcements there.
And today I was, I thought I was a little disappointed with some of the, I thought they could have been more. I think the way Andy Jassy did it, he crammed more announcements into his keynote, and Adam seems to be taking sort of a bit more of a measured approach. There were a few things he picked up on and then I'm expecting more to be spread throughout the rest of the day. >> This was more poetic. Right? He took the universe as the analogy for data, the ocean for security. Right? The Antarctic was sort of. >> Yeah. It looked pretty, >> yeah. >> But I'm not sure that was like, we're not here really to watch nature videos. >> As analysts and journalists, you're like, come on. >> Yeah, >> Give it the meat. >> That was kind of the thing, yeah. >> It has always been, the AWS has always been, re:Invent has always been a shock and awe approach. 100, 150 announcements. And they're really, that kind of pressure seems to be off them now. Their position at the top of the market seems to be unshakeable. There's no clear competition that's creeping up behind them. So how does that affect the messaging you think that AWS brings to market when it doesn't really have to prove that it's a leader anymore? It can go after maybe more of the niche markets or fix the stuff that's a little broken, more fine tuning than grandiose statements. >> I think so. AWS for a long time was so far out that they basically said, "We don't think about the competition, we listen to the customers." And that was always the statement that works as long as you're always in the lead, right? Because you are introducing the new idea to the customer. Nobody else got there first. So that was the case. But in a few areas they aren't leading. Right? You could argue in machine learning, not necessarily leading in sustainability. They're not leading and they don't want to talk about some of these areas and-- >> Database. I mean arguably, >> They're pretty strong there, but the areas when you are behind, it's like they kind of know how to play offense. But when you're playing defense, it's a different set of game. You're playing a different game and it's hard to be good at both. I think and I'm not sure that they're really used to following somebody into a market and making a success of that. So there's something, it's a little harder. Do you see what I mean? >> I'll get your opinion on this. So when I say database, David Floyer, two years ago, predicted AWS is going to have to converge somehow. They have no choice. And they sort of touched on that today, right? Eliminating ETL, that's one thing. But Aurora to Redshift. >> Yeah. >> You know, end to end. I'm not sure it's totally, they're fully end to end. >> That's a really good, that is an excellent piece of work, because there's a lot of work that it eliminates. There are clear pain points, but then you've got sort of the competing thing, it's like the MongoDB, and it's like, it's just one database, keeps it simple. >> Snowflake, >> Or you've got Snowflake, maybe you've got all these 20 different things you're trying to integrate at AWS, but it's kind of like you have a bag of Lego bricks. It's my favorite analogy, right? You want a toy for Christmas, you want a toy Formula One racing car since that seems to be the theme, right? >> Okay. Do you want the fully built model that you can play with right now? Or do you want the Lego version that you have to spend three days building? Right? And AWS is the Lego Technic thing.
You have to spend some time building it, but once you've built it, you can evolve it, and you'll still be playing, those are still good bricks, years later. Whereas that prebuilt toy is probably broken and gathering dust, right? So there's something about having an evolvable architecture which is harder to get into, but more durable in the long term. And so AWS tends to play the long game in many ways. And that's one of the elements that they do that and that's good, but it makes it hard to consume for enterprise buyers that are used to getting it with a bow on top. And here's the solution. You know? >> And Paul, that was always Andy Jassy's answer to when we would ask him, you know, all these primitives, you're going to make it simpler. You see, the primitives give us the advantage to turn on a dime in the marketplace. And that's true. >> Yeah. So you're saying, you know, you take all these things together and you wrap it up, and you put a Snowflake on top, and now you've got a simple thing, or a Mongo or Mongo Atlas or whatever. So you've got these layered platforms now which are making it simpler to consume, but now you're kind of, you know, you're all stuck in that ecosystem, you know, so it's like what layer of abstractions do you want to tie yourself to, right? >> Databricks coming at it from more of an open source approach. But it's similar. >> We're seeing Amazon direct more into vertical markets. They spotlighted what Goldman Sachs is doing on their platform. They've got a variety of platforms that are supposedly targeted custom built for vertical markets. How successful do you see that play being? Is this something that the customers you think are looking for, a fully integrated Amazon solution? >> I think so. There's usually, if you look at, you know, the MongoDB or DataStax, or the other sort of, or Elastic, you know, they've got the specific solution with the people that really are developing the core technology, and there's an open source equivalent version that AWS is running, and it's usually maybe they've got a price advantage or it's, you know, there's some data integration in there or it's somehow easier to integrate, but it's not stopping those companies from growing. And what it's doing is it's endorsing that platform. So if you look at the collection of databases that have been around over the last few years, now you've got basically Elastic, Mongo and Cassandra, you know, DataStax, as being endorsed by the cloud vendors. These are winners. They're going to be around for a very long time. You can build yourself on that architecture. But what happened to Couchbase and, you know, a few of the other ones, you know, they don't really fit. Like how are you going to make it if you are now becoming an also-ran, because you didn't get cloned by the cloud vendor? So the customers are going, is that a safe place to be, right? >> But isn't it, don't they want to encourage those partners though in the name of building the marketplace ecosystem? >> Yeah. >> This is huge. >> But certainly the platform, yeah, the platform encourages people to do more. And there's always room around the edge. But the mainstream customers that really like spending the good money are looking for something that's got a long term life to it. Right? They're looking for a long commitment to that technology and that it's going to be invested in and grow. And the fact that the cloud providers are adopting and particularly AWS is adopting some of these technologies means that is a very long term commitment.
You can base, you know, you can bet your future architecture on that for a decade probably. >> So they have to pick winners. >> Yeah. So it's sort of picking winners. And then if you're the open source company that's now got AWS turning up, you have to then leverage it and use that as a way to grow the market. And I think Mongo have done an excellent job of that. I mean, they're top level sponsors of re:Invent, and they're out there messaging that and doing a good job of showing people how to layer on top of AWS and make it a win-win for both sides. >> So ever since we've been in the business, you hear the narrative hardware's going to die. It's just, you know, it's commodity and there's some truth to that. But hardware's actually driving good gross margins for the Ciscos of the world. Storage companies have always made good margins. Servers maybe not so much, 'cause Intel sucked all the margin out of it. But let's face it, AWS makes most of its money. We know on compute, it's got 25 plus percent operating margins depending on the seasonality there. What do you think happens long term to the infrastructure layer discussion? Okay, commodity cloud, you know, we talk about super cloud. Do you think that AWS, and the other cloud vendors, that infrastructure, IaaS, gets commoditized and they have to go up market, or you see that continuing? I mean, history would say that there's still good margins in hardware. What are your thoughts on that? >> It's not commoditizing, it's becoming more specific. We've got all these accelerators and custom chips now, and this is something, this almost goes back. I mean, I was with Sun Microsystems 20, 30 years ago and we developed our own chips, and HP developed their own chips, and SGI, MIPS, right? We were like, the architectures were all squabbling over who had the best processor chips and it took years to get chips that worked. Now if you make a chip and it doesn't work immediately, you screwed up somewhere, right? It's become the technology of building these immensely complicated powerful chips that has become commoditized. So the cost of building a custom chip is now getting to the point where Apple and Amazon, your Apple laptop has got full custom chips, your phone, your iPhone, whatever, and you're getting Google making custom chips and we've got Nvidia now getting into CPUs as well as GPUs. So we're seeing that the ability to build a custom chip is becoming something that everyone is leveraging. And the cost of doing that is coming down to where startups are doing it. So we're going to see many, many more, much more innovation I think, and this is like Intel and AMD are, you know, they've got the compatibility legacy, but the most powerful, most interesting new things I think are going to be custom. And we're seeing that with Graviton3, in particular the 3E that was announced last night with like 30, 40%, whatever it was, more performance for HPC workloads. And that's, you know, the HPC market is going to have to deal with cloud. I mean they are starting to, and I was at Supercomputing a few weeks ago and they are tiptoeing around the edge of cloud, but those supercomputers are water cooled. They are monsters. I mean you go around Supercomputing, there are plumbing vendors on the booths. >> Of course. Yeah. >> Right? And they're highly concentrated systems, and that's really the only difference, is like, is it water cooled or air cooled? The rest of the technology stack is pretty much off the shelf stuff with a few tweaks of software.
>> Your point about, you know, the chips and what AWS is doing. The Annapurna acquisition. >> Yeah. >> They're on a dramatically different curve now. I think it comes down to, again, David Floyer's premise, really comes down to volume. The Arm wafer volumes are 10x those of x86, volume always wins. And the economics of semis. >> That kind of got us there. But now there's also RISC-V coming along, if you, in terms of licensing, is becoming one of the bottlenecks. Like if the cost of building a chip is really low, then it comes down to licensing costs and do you want to pay the Arm license? And RISC-V is an open source instruction set which some people are starting to use for things. So your disk controller may have a RISC-V in it, for example, nowadays, those kinds of things. So I think that's kind of the dynamic that's playing out. There's a lot of innovation in hardware to come in the next few years. There's a thing called CXL, Compute Express Link, which is going to be really interesting. I think that's probably two years out, before we start seeing it for real. But it lets you glue together an entire rack in a very flexible way. So just, and that's the entire industry coming together around a single standard, the whole industry except for Amazon, in fact, just about. >> Well, but maybe I think eventually they'll get there. Don't they use system on a chip, CXL? >> I have no idea, I have no knowledge about whether they're going to do anything with CXL. >> Presuming, I'm not trying to tap anything confidential. It just makes sense that they would do a system on chip. It makes sense that they would do something like CXL. Why not adopt the standard, if it's going to be at the same cost? >> Yeah. And so that was one of the things on the chip computing side. The other thing is the low latency networking with the Elastic Fabric Adapter, EFA, and the extensions to that that were announced last night. They doubled the throughput. So you get twice the capacity on the Nitro chip. And then the other thing was this, this is a bit technical, but this scalable datagram protocol that they've got, which basically says, if I want to send a message, a packet, from one machine to another machine, instead of sending it over one wire, I send it over 16 wires in parallel. And I will just flood the network with all the packets and they can arrive in any order. This isn't how it's done normally. TCP is in order, the packets come in order they're supposed to, but this is fully flooding them around with its own fast retry and then they get reassembled at the other end. So they're not just using this now for HPC workloads. They've turned it on for TCP, for just, without any change to your application. If you are trying to move a large piece of data between two machines, and you're just pushing it down a network, a single connection, it takes it from five gigabits per second to 25 gigabits per second. A five x speed up, with a protocol tweak that's run by the Nitro, this is super interesting. >> Probably want that for all that AI/ML stuff that's going on. >> Well, the AI/ML stuff is leveraging it underneath, but this is for everybody. Like you're just copying data around, right? And you're limited, "Hey, this is going to get there five times faster, pushing a big enough chunk of data around." So this is turning on gradually as the Nitro five comes out, and you have to enable it at the instance level. But it's a super interesting announcement from last night. >> So the bottom line bumper sticker on commoditization is what?
>> I don't think so. I mean, what's the APIs? You're Arm compatible, you're Intel x86 compatible, or maybe RISC-V compatible one day, in the cloud. And those are the APIs, right? That's the commodity level. And the software is now, the software ecosystem is super portable across those, as we're seeing with Apple moving from Intel, it's really not an issue, right? The software and the tooling is all there to do that. But underneath that, we're going to see an arms race between the top providers as they all try and develop faster chips for doing more specific things. We've got Trainium for training, that instance, they announced it last year with 800 gigabits going out of a single instance, 800 gigabits or no, but this year they doubled it. Yeah. So 1.6 terabits out of a single machine, right? That's insane, right? But what you're doing is you're putting together hundreds or thousands of those to solve the big machine learning training problems. These super, these enormous clusters that are being formed for doing these massive problems. And there is a market now for these incredibly large supercomputer clusters built for doing AI. That's all bandwidth limited. >> And you think about the timeframe from design to tape out. >> Yeah. >> Is just getting compressed. It's relative. >> It is. >> Six is going the other way >> The tooling is all there. Yeah. >> Fantastic. Adrian, always a pleasure to have you on. Thanks so much. >> Yeah. >> Really appreciate it. >> Yeah, thank you. >> Thank you, Paul. >> Cheers. All right. Keep it right there everybody. Don't forget, go to thecube.net, you'll see all these videos. Go to siliconangle.com, we've got features with Adam Selipsky, we got my breaking analysis, we have another feature with MongoDB's Dev Ittycheria, Ali Ghodsi, as well as Frank Slootman tomorrow. So check that out. Keep it right there. You're watching theCUBE, the leader in enterprise and emerging tech, right back. (soft techno upbeat music)
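Cockcroft's sketch of the scalable datagram idea above — stripe sequence-numbered packets across many paths, let them arrive in any order, reassemble at the receiver — can be illustrated with a toy simulation. This is a conceptual, single-process sketch only, not how EFA, SRD or the Nitro card actually implements it.

```python
# Toy illustration of multipath striping and out-of-order reassembly.
import random

def stripe(payload: bytes, num_paths: int = 16, chunk_size: int = 4):
    """Split a payload into sequence-numbered chunks assigned round-robin to paths."""
    chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
    return [(seq, seq % num_paths, chunk) for seq, chunk in enumerate(chunks)]

def deliver_out_of_order(packets):
    """Simulate the network delivering the packets in arbitrary order."""
    shuffled = packets[:]
    random.shuffle(shuffled)
    return shuffled

def reassemble(packets) -> bytes:
    """Rebuild the payload by sequence number, ignoring arrival order."""
    return b"".join(chunk for seq, _path, chunk in sorted(packets, key=lambda p: p[0]))

if __name__ == "__main__":
    data = b"spray packets across sixteen paths and reorder them at the receiver"
    packets = stripe(data)
    assert reassemble(deliver_out_of_order(packets)) == data
    print(f"{len(packets)} chunks reassembled correctly across 16 simulated paths")
```

The real protocol adds its own fast retry and congestion handling in the Nitro hardware; the sketch only shows why arrival order stops mattering once every chunk carries a sequence number.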

Published Date : Nov 30 2022

SUMMARY :

Dave Vellante and Paul Gillin sit down with analyst Adrian Cockcroft at AWS re:Invent 2022 for a reporters' notebook on the keynotes: where AWS leads and where it is playing defense, database convergence from Aurora to Redshift, the Lego-bricks versus prebuilt-platform trade-off for enterprise buyers, how cloud vendors endorse open source databases, and why custom silicon such as Graviton3, Trainium, Nitro and EFA means infrastructure is getting more specialized rather than commoditized.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Adam Selipsky | PERSON | 0.99+
David Floyd | PERSON | 0.99+
Peter DeSantis | PERSON | 0.99+
Paul | PERSON | 0.99+
Ali Ghodsi | PERSON | 0.99+
Adrian Cockcroft | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Frank Sluman | PERSON | 0.99+
Paul Gillon | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Apple | ORGANIZATION | 0.99+
Andy Chassy | PERSON | 0.99+
Las Vegas | LOCATION | 0.99+
Adam | PERSON | 0.99+
Dev Ittycheria | PERSON | 0.99+
Andy Jesse | PERSON | 0.99+
Dave Villante | PERSON | 0.99+
August | DATE | 0.99+
two machines | QUANTITY | 0.99+
Bill Belichick | PERSON | 0.99+
10 | QUANTITY | 0.99+
Cisco | ORGANIZATION | 0.99+
today | DATE | 0.99+
last year | DATE | 0.99+
1.6 terabytes | QUANTITY | 0.99+
AMD | ORGANIZATION | 0.99+
Goldman Sachs | ORGANIZATION | 0.99+
hundreds | QUANTITY | 0.99+
one machine | QUANTITY | 0.99+
three days | QUANTITY | 0.99+
Adrian | PERSON | 0.99+
800 gigabits | QUANTITY | 0.99+
Today | DATE | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
David Foyer | PERSON | 0.99+
two years | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
this year | DATE | 0.99+
Snowflake | TITLE | 0.99+
Nvidia | ORGANIZATION | 0.99+
five times | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Netflix | ORGANIZATION | 0.99+
thecube.net | OTHER | 0.99+
Intel | ORGANIZATION | 0.99+
five | QUANTITY | 0.99+
both sides | QUANTITY | 0.99+
Mongo | ORGANIZATION | 0.99+
Christmas | EVENT | 0.99+
last night | DATE | 0.99+
HP | ORGANIZATION | 0.98+
25 plus percent | QUANTITY | 0.98+
thousands | QUANTITY | 0.98+
20,30 years ago | DATE | 0.98+
pandemic | EVENT | 0.98+
both | QUANTITY | 0.98+
two years ago | DATE | 0.98+
twice | QUANTITY | 0.98+
tomorrow | DATE | 0.98+
X 86 | COMMERCIAL_ITEM | 0.98+
Antarctic | LOCATION | 0.98+
Patriots | ORGANIZATION | 0.98+
siliconangle.com | OTHER | 0.97+
Alec Furrier, SiliconANGLE Media, Inc. | Blockchain Unbound 2018


 

>> Narrator: Live from San Juan, Puerto Rico, it's theCUBE, covering Blockchain Unbound. Brought to you by Blockchain Industries. (upbeat music) >> Hey, welcome back everybody, we're live in Puerto Rico for the cryptocurrency, global blockchain, decentralized internet Cube coverage in Puerto Rico, part of Blockchain Unbound. I'm John Furrier, host of theCUBE here, also co-founder of SiliconANGLE Media Inc. And, we're here with a first Cube ever, father/son Cube segment where we're going to kind of break down a summary of the show but mainly get the take from a 22 year old. Here with me is my son Alec Furrier who's been doing the schedule and greeting all the guests. Alec has been also demoing our platform that we haven't formally announced — not that we have to — but it's out there: theCUBE platform, all the back-end data, because it really is getting everyone here excited. So, Alec, welcome to theCUBE. >> Thanks, great to be on, finally, after all these years (John chuckles) to be on, it's an honor. >> Well, thanks for all the hard work you did on the schedule but you're a young gun, you're 22 years old. This is an exciting crypto world for your generation. What's your reaction to the commentary you've heard, the stories you've heard, what's the young perspective on cryptocurrency, blockchain, what's the view? >> Totally, it's a totally crazy culture, right? So, there's a very big influx of young talent and talented minds at that, right? And, this is really changing the revolution landscape. It's accelerating the tech. These ideas are being freely shared whereas before there were bottlenecks in the collaboration aspect of the technological field, right? >> You're a gamer, I know that, so you're the young ecosystem. You don't care about data lakes and data centers and cloud computing. How does your generation look at this as an opportunity? What's exciting about it? What's the perspective? >> Well, there's multiple perspectives. The main two I say, there's multiple perspectives. Main two, is one, there's a shit ton of ways to make money. And you know, is there a scam? Is there a risk for my business? You know, blockchain is involved. And there's a little bit of that mumbo jumbo going along. But then, there's also the other side that are really into it and really applying the tech and know that this is the best way to collaborate with peers. >> What's the coolest thing you've seen? >> The coolest thing I've seen is probably Hashgraph, which is actually not on the blockchain and a competitor of the blockchain. And that's actually increasing speeds and pretty much making the tech, the back-end infrastructure, better. >> So, you dropped out of UCSB, you're going to maybe go back to school but you're also working as a product manager for our crypto project for SiliconANGLE Media, theCUBE, Cube Network. You were giving demos. What is, what are we doing? How would you explain what we're doing? And, what was some of the reactions to the demo that you were giving? >> All great reactions so far. People are very excited about what we're building, which is a reputation centrality metric. And, what this does, is allows us to track what users are talking about, and where they're talking about it. And actually, rank their reputation leaderboard rankings by topic, by frequency, by impact and reverb across the entire network. And that allows us to create appropriate connections between two people who have different social, cultural and professional topics that they talk about.
And allow them to create more value for the entire platform, for the community and, more importantly, themselves. >> What is, what does that mean, what problem are we solving? >> So, we're solving the Facebook ad-word problem of the old generation, which is you as a user do not own your data. Right? >> Yep. >> So now, what we have is this user base struggling to find the monetary value in their social media platforms. But now, we are actually offering a way for them to reverse the paradigm and get paid for interacting with others, creating with others and contributing to the community through all of their social media outlets. >> What was the biggest thing that people reacted to at the demos, the variety of tools we showed them? What was the number one, couple of things that they reacted to, what jumped out at you? >> So, I would say what jumped out is how blown away these people are. They really are, you know, elevated in their mindset when they think about these concepts. Because it expands their mind and when they realize that I can go and expand someone else's mind, and their mind will essentially contribute to the entire community. And everyone's going to grow from one initial idea. >> What are you working on, the project? Please share with the folks, what've you been working on, what specific things that you do and you're managing. What's unique about the technology? Share some color commentary on the project. >> Yeah so right now we have a couple of projects going, and, for now, I'll just talk about the platform side of things which is the more futuristic vision. Specifically, we're creating trending communities so we could actually auto generate stories based on Twitter API data, right? And also, our own platform has even more complex metrics which we'll be rewarding people for, so people will get rewards for using our platform more than Twitter. But we could still have native content versus in-network content being weighed differently. And so, what we're doing is routing metrics of weighted value with a contextual layer on top through natural language processing and machine learning. >> So, are some people saying "Oh, you're like Steem?" How do you respond to that? >> We're not like Steem. Steem is extremely powerhousey and it's momentum and it doesn't actually do topic weighing. Right, so, and we also value attention of the crowd, so what we're working on is, what do people influence with their reputation? Whereas Steem, it's like, where do people contribute? How much do they contribute? And so, what we want to do is, we say hey, you know, if I get upvotes on Reddit, that should be weighed in the network somewhere else, right? Instead of having an overall karma, we should have one integrated karmic aspect of a topicality, so that if my karma, I'm using karma as an analogy 'cause Reddit has the upvotes karma, downvotes karma. >> So what about blockchain, why are we... So, how would you explain to someone, okay, you're theCUBE, what is the blockchain? What does crypto mean for us? >> So, blockchain, we're using it to add a layer of trust and security to our network. So we want transparency within our network and that means we have to have a ledger for every single engagement, interaction, like we tweet on the network, right? >> And the crypto, the token, does what? >> The crypto token will pretty much be able to be cashed out through Ethereum, right, ERC20, but it would also have a weighted role in our two sided marketplace, bounty, ask, buy.
And that'll be the main medium of where people identify and exchange their reputation. >> How would you describe our platform to a user out there if they say, what do you like, or what are you disrupting, what aren't you like, what are you guys doing, what are you disrupting? And why would I want to use your platform? >> Yeah, so I think we're disrupting, you know, multiple companies, right? And, the one I really associate with is a professional Steemit meets Brave Browser, BAT token versus Steem, right? So, BAT is attention only and attention is valuable. I'm here with you, you have a 20 minute interview with me. That's your attention, that's valuable, but it's much more valuable than someone else who isn't interviewing, let's just say, someone who is less fortunate. But, that's also a real time aspect. So there's a time variable, there's a network variable and there's a topicality variable, you know, the social graph, you got the interest graph, and then the value graph on top. >> So Alec, so if you had to describe what we do in one sentence, what would it be? Putting you on the spot. >> In one sentence, I would say we would call it a decentralized media platform with rewards for the user base, based on reputation. >> Alright, my son Alec Furrier is also involved in our crypto project, part of theCUBE network coming soon, house of theCUBE is here, the crypto conference, and what better way to align with the crypto community than demoing our token enabled platform. Congratulations to you, Narendra, Kent, Jeff and the team doing a great job with theCUBE network. Cube alumni are all going to get coins, right? Not yet decided but great work Alec, thanks for sharing. It's theCUBE here, Puerto Rico. I'm John Furrier, my son Alec. Thanks for watching. (upbeat music)
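The reputation centrality metric Alec describes — ranking users per topic by frequency of contribution and by impact — can be pictured as a simple weighted score. The weights, field names and data shape below are assumptions for illustration only, not the platform's actual algorithm.

```python
# Illustrative per-topic reputation leaderboard built from engagement frequency and impact.
# Weights and structure are hypothetical, chosen only to show the idea.
from collections import defaultdict

def reputation_by_topic(engagements, w_freq: float = 0.4, w_impact: float = 0.6):
    """engagements: iterable of dicts like {"user": str, "topic": str, "impact": float}."""
    impact = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(lambda: defaultdict(int))
    for e in engagements:
        counts[e["topic"]][e["user"]] += 1
        impact[e["topic"]][e["user"]] += e["impact"]
    leaderboards = {}
    for topic, users in impact.items():
        ranked = sorted(
            ((w_freq * counts[topic][u] + w_impact * users[u], u) for u in users),
            reverse=True,
        )
        leaderboards[topic] = [(user, round(score, 2)) for score, user in ranked]
    return leaderboards

if __name__ == "__main__":
    sample = [
        {"user": "alice", "topic": "blockchain", "impact": 3.0},
        {"user": "bob", "topic": "blockchain", "impact": 1.5},
        {"user": "alice", "topic": "blockchain", "impact": 2.0},
    ]
    print(reputation_by_topic(sample))
```

Putting every engagement on a ledger, as Alec describes, is what would make the inputs to a score like this auditable by the community.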

Published Date : Mar 17 2018

SUMMARY :

John Furrier sits down with his son Alec Furrier at Blockchain Unbound in Puerto Rico for a first-ever father/son CUBE segment. They cover the young generation's view of crypto culture, Hashgraph, and theCUBE's token-enabled platform project: a reputation centrality metric that ranks users by topic, frequency and impact, backed by a crypto token, a blockchain ledger and a two-sided marketplace.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Alec | PERSON | 0.99+
Alex Furrier | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Puerto Rico | LOCATION | 0.99+
two people | QUANTITY | 0.99+
SiliconANGLE Media, Inc. | ORGANIZATION | 0.99+
Cube Network | ORGANIZATION | 0.99+
theCUBE | ORGANIZATION | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
20 minute | QUANTITY | 0.99+
SiliconANGLE Media Inc. | ORGANIZATION | 0.99+
one sentence | QUANTITY | 0.99+
John | PERSON | 0.99+
Steam | ORGANIZATION | 0.99+
Facebook | ORGANIZATION | 0.99+
Alec Furrier | PERSON | 0.99+
UCSB | ORGANIZATION | 0.99+
Jeff | PERSON | 0.99+
Narendra | PERSON | 0.98+
22 year old | QUANTITY | 0.98+
Reddit | ORGANIZATION | 0.98+
first | QUANTITY | 0.98+
two sided | QUANTITY | 0.97+
Kent | PERSON | 0.97+
ERC20 | OTHER | 0.97+
one | QUANTITY | 0.96+
Twitter | ORGANIZATION | 0.95+
two | QUANTITY | 0.95+
Alec Furrier | PERSON | 0.93+
Cube | ORGANIZATION | 0.9+
2018 | DATE | 0.9+
San Juan, Puerto Rico | LOCATION | 0.88+
Steamit | ORGANIZATION | 0.87+
22 years old | QUANTITY | 0.86+
Brave Browser | ORGANIZATION | 0.86+
Blockchain Unbound Brought | TITLE | 0.83+
theCUBE | TITLE | 0.82+
single engagement | QUANTITY | 0.8+
one initial idea | QUANTITY | 0.8+
Ethereum | OTHER | 0.75+
Blockchain Industries | ORGANIZATION | 0.73+
Cube | TITLE | 0.67+
couple of projects | QUANTITY | 0.62+
Hashgraph | ORGANIZATION | 0.52+
BAT | ORGANIZATION | 0.41+
Cube | COMMERCIAL_ITEM | 0.35+

Greg Theriault, SiliconANGLE | Focus On Customers Jan 2018


 

>> [Narrator] From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hi everybody, Dave Vellante here coming at you from our East Coast studios in Marlborough, MA, just outside of Boston. What I wanted to do is give you a little recap of 2017 and what's happening, and give you an update on SiliconANGLE Media. So as many of you know, SiliconANGLE Media Inc. comprises three brands. TheCUBE, which, as most of you know, we sometimes call the ESPN of tech; it's our live and on-demand video broadcasting element. And of course we have the research arm, which is Wikibon and Wikibon.com. And then, SiliconANGLE is our news site. And so I want to just, as I said, recap what went down in 2017, some of the things you may not know about. >> Last February, February 1st actually, we opened the new studio in Palo Alto, California. It's at 989 Commercial St., you should check it out. It's sort of near the Mountain View line, but it's in Palo Alto; it's a great location, we have a large studio there. And throughout the year, in 2017, we held events, we had launches, but most importantly John Furrier, my business partner, is really running editorial content programs out of that studio. >> So every Thursday Furrier has high-level key guests come in, CEOs, VCs, and customers, and they just riff on what's going on in the industry and what's happening. It's been an absolutely awesome resource for us, and I really encourage you guys to go check it out. We did 135 show days last year. TheCUBE is run by our general manager, Jeff Frick, and 135 show days means we broadcast live on 135 days at events last year, which is just incredible. >> It was our first year we ever did anything in China. We did the Alibaba conference, the cloud show there; that was very exciting. We did a number of shows in Europe, and of course all the big shows in the United States as well. >> We launched three websites last year. TheCUBE.net is the latest one. You know, a lot of times we talk about data-driven media. If you go to theCube.net and check it out, you'll see something called theCUBE Alumni database. And theCUBE Alumni database contains virtually everybody who's ever been on theCUBE. So you can search CIOs, CEOs, developers, bloggers, analysts, all the folks that have been on theCUBE you can see, and they've got a profile page on each one of those, so we're collecting all that data. SiliconANGLE.com, we launched the new website. >> SiliconANGLE is run by Rob Hof, who is our Editor-in-Chief. Rob was the Silicon Valley bureau chief for Businessweek for the better part of a decade, so we're really proud to have Rob on. He's been on for the last couple of years and is just doing a great job with that site. >> And then Wikibon.com is run by Peter Burris; he's our Chief Research Officer. He's been with us now for the better part of two years, and he's got that team cranking on all kinds of research in cloud and AI and data orientation, the edge, and infrastructure for emerging applications like AI. >> One of the areas we're most excited about that we launched in 2017 was a new capability called Clipper. So we have this tool called Video Clipper. As you know, John Furrier and I, when we met, had this vision for data-driven media and innovation, and we launched this tool we call Video Clipper, developed by Kent Libbey, one of our newer executives that we brought in last year on the product side, and his team.
>> What Video Clipper does is we transcribe every video now that we do, we'll transcribe this video, and then we synchronize the transcript with the video and we're able to then search video, highlight a text, a paragraph let's say, push a button and boom we've got a clip and that clip is ready to be shared throughout various social media platforms like Twitter, and LinkedIn, and Facebook and the like So very, very excited about that tool you're going to be hearing more about that We don't sell it as a separate tool, we integrate it as part of our offerings and got some new offerings that we're bringing to customers in 2018. >> One of the other really exciting things in 2017 we brought in a new chief revenue officer his name is Greg Theriault, I'm going to introduce you to him today Greg Theriault is with me here in studio, Greg, it's great see you, thanks for spending some time with us. >> [Greg] Thank you, Dave, thanks for the opportunity I've never been more excited. Let me tell you a little bit about myself I live in Concord, MA right around the studio here and I came from the IT industry. I've been there for a long time. I used to be at a small systems integrator, kind of the size of SiliconANGLE Media, building client servers, computing, got certified in Novell, and then I jumped into sales. I worked most recently at Forester Research and was there for almost 18 years, two decades, building the sales capabilities, always wrapped around the customers, but I am thrilled to be here today >> [Dave] So, Novell, when our network goes down can you help us fix that? >> It was about 20 years ago but, you know the history with Novell >> Yeah, another Utah company that somehow didn't make it, but for a while they were a little monopoly. So you've been in the business now for a couple of decades maybe, you know, think about what has happened over the last 20 years, what kind of changes have you seen? Share with us your perspectives. >> I've never seen so much disruption from client server, to social computing, to AI, now it's digital disruption in everything and you hear about this all the time in the news that companies are becoming software companies look around the corner, GE is now GE digital, they're trying to reinvent themselves, very, very exciting times. AI machine learning, autonomous computing, and then right around the corner there's block chain I mean that's the big buzz these days Also there's the autonomous vehicles, and let em give you a quick story About two years ago my son was born and I was fortunate enough to have a breakfast with the CEO of Tesla, and I asked him "Hey, he was born, what's going to happen in 16 years?" and JB said to me quite candidly, he said "if your son is driving a car that's not autonomous it won't be safe and he won't need a license" So, things are happening at an epic speed I don't know I these prediction will be true but it is Telsa >> [Dave] Won't need a license, you know it's funny, I mean, I don't know how you feel about it but when I turned 16 it was one of the most exciting days of a young person's life. You wonder what the social implications of that is if you don't need a license, I don't know maybe they can start driving at 14 or 13, you know whatever but you know what I'm saying? >> [Greg] Yeah That was a really exciting time we couldn't wait to get our permits and "Dad can I drive you to the dump?" Right? It's like... 
Self-driving cars and self-driving refrigerators, I mean, it's moving fast, it's at an epic speed right now. >> Well, everything, and again, you take that business, it's all about the data. As I said in my intro, we always talk about data-driven media, we got so much data. You talk about digital transformation; philosophy is digital meets data. >> Right. >> And you talked about GE; you're seeing all these companies now getting disrupted because digital allows people to move so fast. It allows companies like Apple to get into financial services, and you're seeing Amazon become a content company, and it's really all around the data, isn't it? >> [Greg] Absolutely. >> So, I wonder if you could share with our audience, SiliconANGLE Media, small company, you came from a much larger firm, a big brand, Forrester, your former company. What attracted you to SiliconANGLE Media? >> I think it was the fact that I jumped on an airplane and went out to Palo Alto and met with your general managers. I think the innovation and the speed, the speed around it, is in your DNA, and then you took social computing, combined it with real computing power. And then I saw the Video Clipper tool. It's the fastest application I've ever seen to clip video, and that innovation, the speed, really attracted me to the company, to build really powerful content. >> [Dave] Yeah, it's been quite a ride since I met John Furrier in 2010. You know, John at the time said, "Dave, whatever we do we have to innovate. We have to continue to invest in R&D." And those R&D experiments, they don't always pay off, but when one hits, like the Clipper tool, it can be a home run, so we're very excited about that. Share with us your philosophy, what can we expect from Greg Theriault? >> [Greg] Sure, I appreciate that. Well, I'm happy to be here. I actually blogged on LinkedIn over the weekend about my transition here, and I think it starts off with my family; my son and my wife, they helped me, they grounded me. But my philosophy on business is to really be customer focused, to hire the right people, train and coach, and build a different mindset, which I call the growth mindset. The sales rep of the future is being disrupted right now just like every other function. And that is absolutely pivotal. I think the buyers changed, Dave, faster in two years than the past 100 years. The buyer is in control; you have to build systems, processes and technologies wrapped around how do you help the customer be successful at driving growth, and that's the biggest shift going on right now. I mean, sales right now, again, is being disrupted, so social selling and things like that, I want to bring that kind of discipline and processes to SiliconANGLE Media. >> [Dave] Well, what about social selling? A lot of people will, when social media really started to come into play, a lot of people say, "Well, we sell to IT people, and IT people, they don't have time to go on Twitter, they don't do Facebook." What's your perspective, has that changed, you know, and what about that? >> It's changed faster than I could ever believe. Buyers buy differently, but they also need to see the different presence in social: that's Twitter, that's LinkedIn, and that's also, you have to be on the phone, you have to be in front of customers, but it absolutely is pivotal that the new, let's call it a digital rep, needs to understand the tools to listen. Listen to the customer first and foremost, and it's a new channel, but it's a channel here for a long time. 
Again, it's disrupting sales at an epic pace. >> [Dave] So what are your priorities, looking out, say, near term, mid-term, long term? >> [Greg] To wrap my hand around the customer base, you have to innovate with them, with the team we build. And also to build the collaborative culture. I'm really into culture and the ability to kind of gamify the culture, grow the business, accelerate the business, and also develop the team that we build. I mean, the aspirations of where they want to be in a couple of years will help build the business, and that's a global business as well. >> Well, of course, a lot of the action in the tech business is out in Silicon Valley, and you and I are based here on the East Coast. What can we expect in terms of your presence in Silicon Valley? >> I'll be on a plane a lot, and I don't mind that at all. I mean, it's a flat country right now. So I'll be on a plane, but also the heat is in Boston, New York, Chicago, but the Valley is where it's at, so I'm going to be jumping on a plane in two weeks to meet with the team, I can't wait. >> [Dave] Well, we're excited, Greg, to have an executive of your caliber join our team. >> [Greg] Thank you, appreciate that. >> Congratulations, and we look forward to many, many years of productive growth and adding value for our clients with you. >> [Greg] Likewise, thank you. >> Alright, you're welcome. Thanks for watching everybody, this is Dave Vellante with Greg Theriault, we'll see you next time.
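The Video Clipper workflow Dave describes in this interview, transcribe the video, synchronize the transcript with it, highlight a passage, and cut a shareable clip, can be sketched roughly as follows. This is an editor's illustration only; the word-level alignment format and the helper function are assumptions, not theCUBE's actual tooling.

```python
# Sketch: once a transcript is time-aligned to its video, a highlighted span of
# text maps directly to clip timestamps. Data shapes here are hypothetical.

# Hypothetical word-level alignment: (word, start_seconds, end_seconds)
alignment = [
    ("welcome", 0.0, 0.4), ("to", 0.4, 0.5), ("thecube", 0.5, 1.1),
    ("we", 1.1, 1.2), ("launched", 1.2, 1.6), ("clipper", 1.6, 2.2),
]

def clip_bounds(alignment, start_word, end_word, pad=0.25):
    """Map a highlighted word range back to video timestamps for clipping."""
    start = max(0.0, alignment[start_word][1] - pad)
    end = alignment[end_word][2] + pad
    return start, end

print(clip_bounds(alignment, 3, 5))  # clip covering "we launched clipper"
```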

Published Date : Jan 11 2018

SUMMARY :

[Narrator] From the SiliconANGLE media office the things you may not know about. It's at 989 Commercial ST, you should check it out. and I really encourage you guys to go check it out. and of course all the big shows in the United States as well all the folks that have been on theCUBE you can see He's been on for the last couple of years and data orientation, the edge, and One of the areas we're most excited about that we and then we synchronize the transcript with the video Greg Theriault, I'm going to introduce you to him today and I came from the IT industry. over the last 20 years, what kind of changes have you seen? and let em give you a quick story I mean, I don't know how you feel about it but and "Dad can I drive you to the dump?" What attracted you to SiliconANGLE Media? and that innovation, the speed really attracted me You know, John at the time, said the buyer is in control, you have to build systems, also you have to be on the phone, you have to be in front and also develop the team that we build. executive of your callabor join our team. with Greg Theriault, we'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Peter Burris | PERSON | 0.99+
Greg Theriault | PERSON | 0.99+
Europe | LOCATION | 0.99+
JB | PERSON | 0.99+
Greg | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Boston | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
2017 | DATE | 0.99+
John | PERSON | 0.99+
Apple | ORGANIZATION | 0.99+
2018 | DATE | 0.99+
2010 | DATE | 0.99+
Rob Hof | PERSON | 0.99+
Rob | PERSON | 0.99+
Jan 2018 | DATE | 0.99+
Tesla | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
Silicon Valley | LOCATION | 0.99+
Dave | PERSON | 0.99+
Utah | LOCATION | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
Novell | ORGANIZATION | 0.99+
China | LOCATION | 0.99+
GE | ORGANIZATION | 0.99+
SiliconANGLE Media INC | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
SiliconANGLE | ORGANIZATION | 0.99+
last year | DATE | 0.99+
LinkedIn | ORGANIZATION | 0.99+
135 days | QUANTITY | 0.99+
Wikibon.com | ORGANIZATION | 0.99+
Palo Alto, California | LOCATION | 0.99+
Telsa | ORGANIZATION | 0.99+
Wikibon | ORGANIZATION | 0.99+
Marlborough, MA | LOCATION | 0.99+
United States | LOCATION | 0.99+
Forester Research | ORGANIZATION | 0.99+
Forester | ORGANIZATION | 0.99+
two decades | QUANTITY | 0.99+
989 Commercial ST | LOCATION | 0.99+
Facebook | ORGANIZATION | 0.99+
135 show days | QUANTITY | 0.98+
2 years | QUANTITY | 0.98+
today | DATE | 0.98+
ESPN | ORGANIZATION | 0.98+
Twitter | ORGANIZATION | 0.98+
two weeks | QUANTITY | 0.98+
16 | QUANTITY | 0.98+
TheCUBE | ORGANIZATION | 0.97+
first year | QUANTITY | 0.97+
Video Clipper | TITLE | 0.97+
Kent Libbey | PERSON | 0.97+
two years | QUANTITY | 0.97+
Furrier | PERSON | 0.97+

Dominique Bastos, Persistent Systems | International Women's Day 2023


 

(gentle upbeat music) >> Hello, everyone, welcome to theCUBE's coverage of International Women's Day. I'm John Furrier host here in Palo Alto, California. theCUBE's second year covering International Women's Day. It's been a great celebration of all the smart leaders in the world who are making a difference from all kinds of backgrounds, from technology to business and everything in between. Today we've got a great guest, Dominique Bastos, who's the senior Vice President of Cloud at Persistent Systems, formerly with AWS. That's where we first met at re:Invent. Dominique, great to have you on the program here for International Women's Day. Thanks for coming on. >> Thank you John, for having me back on theCUBE. This is an honor, especially given the theme. >> Well, I'm excited to have you on, I consider you one of those typecast personas where you've kind of done a lot of things. You're powerful, you've got great business acumen you're technical, and we're in a world where, you know the world's coming completely digital and 50% of the world is women, 51%, some say. So you got mostly male dominated industry and you have a dual engineering background and that's super impressive as well. Again, technical world, male dominated you're in there in the mix. What inspires you to get these engineering degrees? >> I think even it was more so shifted towards males. When I had the inspiration to go to engineering school I was accused as a young girl of being a tomboy and fiddling around with all my brother's toys versus focusing on my dolls and other kind of stereotypical toys that you would give a girl. I really had a curiosity for building, a curiosity for just breaking things apart and putting them back together. I was very lucky in that my I guess you call it primary school, maybe middle school, had a program for, it was like electronics, that was the class electronics. So building circuit boards and things like that. And I really enjoyed that aspect of building. I think it was more actually going into engineering school. Picking that as a discipline was a little bit, my mom's reaction to when I announced that I wanted to do engineering which was, "No, that's for boys." >> Really. >> And that really, you know, I think she, it came from a good place in trying to protect me from what she has experienced herself in terms of how women are received in those spaces. So I kind of shrugged it off and thought "Okay, well I'm definitely now going to do this." >> (laughs) If I was told not to, you're going to do it. >> I was told not to, that's all I needed to hear. And also, I think my passion was to design cars and I figured if I enroll in an industrial engineering program I could focus on ergonomic design and ultimately, you know have a career doing something that I'm passionate about. So yeah, so my inspiration was kind of a little bit of don't do this, a lot of curiosity. I'm also a very analytical person. I've been, and I don't know what the science is around left right brain to be honest, but been told that I'm a very much a logical person versus a feeler. So I don't know if that's good or bad. >> Straight shooter. What were your engineering degrees if you don't mind sharing? >> So I did industrial engineering and so I did a dual degree, industrial engineering and robotics. At the time it was like a manufacturing robotics program. It was very, very cool because we got to, I mean now looking back, the evolution of robotics is just insane. But you, you know, programmed a robotic arm to pick things up. 
I actually crashed the Civil Engineering School's Concrete Canoe Building Competition where you literally have to design a concrete canoe and do all the load testing and the strength testing of the materials and basically then, you know you go against other universities to race the canoe in a body of water. We did that at, in Alabama and in Georgia. So I was lucky to experience that two times. It was a lot of fun. >> But you knew, so you knew, deep down, you were technical you had a nerd vibe you were geeking out on math, tech, robotics. What happened next? I mean, what were some of the challenges you faced? How did you progress forward? Did you have any blockers and roadblocks in front of you and how did you handle those? >> Yeah, I mean I had, I had a very eye-opening experience with, in my freshman year of engineering school. I kind of went in gung-ho with zero hesitation, all the confidence in the world, 'cause I was always a very big nerd academically, I hate admitting this but myself and somebody else got most intellectual, voted by the students in high school. It's like, you don't want to be voted most intellectual when you're in high school. >> Now it's a big deal. (laughs) >> Yeah, you want to be voted like popular or anything like that? No, I was a nerd, but in engineering school, it's a, it was very humbling. That whole confidence that I had. I experienced prof, ooh, I don't want to name the school. Everybody can google it though, but, so anyway so I had experience with some professors that actually looked at me and said, "You're in the wrong program. This is difficult." I, and I think I've shared this before in other forums where, you know, my thermodynamic teacher basically told me "Cheerleading's down the hall," and it it was a very shocking thing to hear because it really made me wonder like, what am I up against here? Is this what it's going to be like going forward? And I decided not to pay attention to that. I think at the moment when you hear something like that you just, you absorb it and you also don't know how to react. And I decided immediately to just walk right past him and sit down front center in the class. In my head I was cursing him, of course, 'cause I mean, let's be real. And I was like, I'm going to show this bleep bleep. And proceeded to basically set the curve class crushed it and was back to be the teacher's assistant. So I think that was one. >> But you became his teacher assistant after, or another one? >> Yeah, I gave him a mini speech. I said, do not do this. You, you could, you could have broken me and if you would've done this to somebody who wasn't as steadfast in her goals or whatever, I was really focused like I'm doing this, I would've backed out potentially and said, you know this isn't something I want to experience on the daily. So I think that was actually a good experience because it gave me an opportunity to understand what I was up against but also double down in how I was going to deal with it. >> Nice to slay the misogynistic teachers who typecast people. Now you had a very technical career but also you had a great career at AWS on the business side you've handled 'em all of the big accounts, I won't say the names, but like we're talking about monster accounts, sales and now basically it's not really selling, you're managing a big account, it's like a big business. It's a business development thing. Technical to business transition, how do you handle that? Was that something you were natural for? 
Obviously you, you stared down the naysayers out of the gate in college and then in business, did that continue and how did you drive through that? >> So I think even when I was coming out of university I knew that I wanted to have a balance between the engineering program and business. A lot of my colleagues went on to do their PEs so continue to get their masters basically in engineering or their PhDs in engineering. I didn't really have an interest for that. I did international business and finance as my MBA because I wanted to explore the ability of taking what I had learned in engineering school and applying it to building businesses. I mean, at the time I didn't have it in my head that I would want to do startups but I definitely knew that I wanted to get a feel for what are they learning in business school that I missed out in engineering school. So I think that helped me when I transitioned, well when I applied, I was asked to come apply at AWS and I kind of went, no I'm going to, the DNA is going to be rejected. >> You thought, you thought you'd be rejected from AWS. >> I thought I'd be, yeah, because I have very much a startup founder kind of disruptive personality. And to me, when I first saw AWS at the stage early 2016 I saw it as a corporation. Even though from a techie standpoint, I was like, these people are insane. This is amazing what they're building. But I didn't know what the cultural vibe would feel like. I had been with GE at the beginning of my career for almost three years. So I kind of equated AWS Amazon to GE given the size because in between, I had done startups. So when I went to AWS I think initially, and I do have to kind of shout out, you know Todd Weatherby basically was the worldwide leader for ProServe and it was being built, he built it and I went into ProServe to help from that standpoint. >> John: ProServe, Professional services >> Professional services, right. To help these big enterprise customers. And specifically my first customer was an amazing experience in taking, basically the company revolves around strategic selling, right? It's not like you take a salesperson with a conventional schooling that salespeople would have and plug them into AWS in 2016. It was very much a consultative strategic approach. And for me, having a technical background and loving to solve problems for customers, working with the team, I would say, it was a dream team that I joined. And also the ability to come to the table with a technical background, knowing how to interact with senior executives to help them envision where they want to go, and then to bring a team along with you to make that happen. I mean, that was like magical for me. I loved that experience. >> So you like the culture, I mean, Andy Jassy, I've interviewed many times, always talked about builders and been a builder mentality. You mentioned that earlier at the top of this interview you've always building things, curious and you mentioned potentially your confidence might have been shaken. So you, you had the confidence. So being a builder, you know, being curious and having confidence seems to be what your superpower is. A lot of people talk about the confidence angle. How important is that and how important is that for encouraging more women to get into tech? Because I still hear that all the time. Not that they don't have confidence, but there's so many signals that potentially could shake confidence in industry >> Yeah, that's actually a really good point that you're making. 
A lot of signals that women get could shake their confidence and that needs to be, I mean, it's easy to say that it should be innate. I mean that's kind of like textbook, "Oh it has to come from within." Of course it does. But also, you know, we need to understand that in a population where 50% of the population is women but only 7% of the positions in tech, and I don't know the most current number in tech leadership, is women, and probably a smaller percentage in the C-suite. When you're looking at a woman who's wanting to go up the trajectory in a tech company and then there's a subconscious understanding that there's a limit to how far you'll go, your confidence, you know, in even subconsciously gets shaken a little bit because despite your best efforts, you're already seeing the cap. I would say that we need to coach girls to speak confidently to navigate conflict versus running away from it, to own your own success and be secure in what you bring to the table. And then I think a very important thing is to celebrate each other and the wins that we see for women in tech, in the industry. >> That's awesome. What's, the, in your opinion, the, you look at that, the challenges for this next generation women, and women in general, what are some of the challenges for them and that they need to overcome today? I mean, obviously the world's changed for the better. Still not there. I mean the numbers one in four women, Rachel Thornton came on, former CMO of AWS, she's at MessageBird now. They had a study where only one in four women go to the executive board level. And so there's still, still numbers are bad and then the numbers still got to get up, up big time. That's, and the industry's working on that, but it's changed. But today, what are some of the challenges for this current generation and the next generation of women and how can we and the industry meet, we being us, women in the industry, be strong role models for them? >> Well, I think the challenge is one of how many women are there in the pipeline and what are we doing to retain them and how are we offering up the opportunities to fill. As you know, as Rachel said and I haven't had an opportunity to see her, in how are we giving them this opportunity to take up those seats in the C-suite right, in these leadership roles. And I think this is a little bit exacerbated with the pandemic in that, you know when everything shut down when people were going back to deal with family and work at the same time, for better or for worse the brunt of it fell on probably, you know the maternal type caregiver within the family unit. You know, I've been, I raised my daughter alone and for me, even without the pandemic it was a struggle constantly to balance the risk that I was willing to take to show up for those positions versus investing even more of that time raising a child, right? Nevermind the unconscious bias or cultural kind of expectations that you get from the male counterparts where there's zero understanding of what a mom might go through at home to then show up to a meeting, you know fully fresh and ready to kind of spit out some wisdom. It's like, you know, your kid just freaking lost their whatever and you know, they, so you have to sort a bunch of things out. I think the challenge that women are still facing and will we have to keep working at it is making sure that there's a good pipeline. A good amount of young ladies of people taking interest in tech. 
And then as they're, you know, going through the funnel at stages in their career, we're providing the mentoring we're, there's representation, right? To what they're aspiring to. We're celebrating their interest in the field, right? And, and I think also we're doing things to retain them, because again, the pandemic affected everybody. I think women specifically and I don't know the statistics but I was reading something about this were the ones to tend to kind of pull it back and say well now I need to be home with, you know you name how many kids and pets and the aging parents, people that got sick to take on that position. In addition to the career aspirations that they might have. We need to make it easier basically. >> I think that's a great call out and I appreciate you bringing that up about family and being a single mom. And by the way, you're savage warrior to doing that. It's amazing. You got to, I know you have a daughter in computer science at Stanford, I want to get to that in a second. But that empathy and I mentioned Rachel Thornton, who's the CMO MessageBird and former CMO of AWS. Her thing right now to your point is mentoring and sponsorship is very key. And her company and the video that's on the site here people should look at that and reference that. They talk a lot about that empathy of people's situation whether it's a single mom, family life, men and women but mainly women because they're the ones who people aren't having a lot of empathy for in that situation, as you called it out. This is huge. And I think remote work has opened up this whole aperture of everyone has to have a view into how people are coming to the table at work. So, you know, props are bringing that up, and I recommend everyone look at check out Rachel Thornton. So how do you balance that, that home life and talk about your daughter's journey because sounds like she's nerding out at Stanford 'cause you know Stanford's called Nerd Nation, that's their motto, so you must be proud. >> I am so proud, I'm so proud. And I will say, I have to admit, because I did encounter so many obstacles and so many hurdles in my journey, it's almost like I forgot that I should set that aside and not worry about my daughter. My hope for her was for her to kind of be artistic and a painter or go into something more lighthearted and fun because I just wanted to think, I guess my mom had the same idea, right? She, always been very driven. She, I want to say that I got very lucky that she picked me to be her mom. Biologically I'm her mom, but I told her she was like a little star that fell from the sky and I, and ended up with me. I think for me, balancing being a single mom and a career where I'm leading and mentoring and making big decisions that affect people's lives as well. You have to take the best of everything you get from each of those roles. And I think that the best way is play to your strengths, right? So having been kind of a nerd and very organized person and all about, you know, systems for effectiveness, I mean, industrial engineering, parenting for me was, I'm going to make it sound super annoying and horrible, but (laughs) >> It's funny, you know, Dave Vellante and I when we started SiliconANGLE and theCUBE years ago, one of the things we were all like sports lovers. So we liked sports and we are like we looked at the people in tech as tech athletes and except there's no men and women teams, it's one team. It's all one thing. 
So, you know, I consider you a tech athlete you're hard charging strong and professional and smart and beautiful and brilliant, all those good things. >> Thank you. >> Now this game is changing and okay, and you've done startups, and you've done corporate jobs, now you're in a new role. What's the current tech landscape from a, you know I won't say athletic per standpoint but as people who are smart. You have all kinds of different skill sets. You have the startup warriors, you have the folks who like to be in the middle of the corporate world grow up through corporate, climb the corporate ladder. You have investors, you have, you know, creatives. What have you enjoyed most and where do you see all the action? >> I mean, I think what I've enjoyed the most has been being able to bring all of the things that I feel I'm strong at and bring it together to apply that to whatever the problem is at hand, right? So kind of like, you know if you look at a renaissance man who can kind of pop in anywhere and, oh, he's good at, you know sports and he's good at reading and, or she's good at this or, take all of those strengths and somehow bring them together to deal with the issue at hand, versus breaking up your mindset into this is textbook what I learned and this is how business should be done and I'm going to draw these hard lines between personal life and work life, or between how you do selling and how you do engineering. So I think my, the thing that I loved, really loved about AWS was a lot of leaders saw something in me that I potentially didn't see, which was, yeah you might be great at running that big account but we need help over here doing go to market for a new product launch and boom, there you go. Now I'm in a different org helping solve that problem and getting something launched. And I think if you don't box yourself in to I'm only good at this, or, you know put a label on yourself as being the rockstar in that. It leaves room for opportunities to present themselves but also it leaves room within your own mind to see yourself as somebody capable of doing anything. Right, I don't know if I answered the question accurately. >> No, that's good, no, that's awesome. I love the sharing, Yeah, great, great share there. Question is, what do you see, what do you currently during now you're building a business of Persistent for the cloud, obviously AWS and Persistent's a leader global system integrator around the world, thousands and thousands of customers from what we know and been reporting on theCUBE, what's next for you? Where do you see yourself going? Obviously you're going to knock this out of the park. Where do you see yourself as you kind of look at the continuing journey of your mission, personal, professional what's on your mind? Where do you see yourself going next? >> Well, I think, you know, again, going back to not boxing yourself in. This role is an amazing one where I have an opportunity to take all the pieces of my career in tech and apply them to building a business within a business. And that involves all the goodness of coaching and mentoring and strategizing. And I'm loving it. I'm loving the opportunity to work with such great leaders. Persistent itself is very, very good at providing opportunities, very diverse opportunities. We just had a huge Semicolon; Hackathon. Some of the winners were females. The turnout was amazing in the CTO's office. We have very strong women leading the charge for innovation. 
I think to answer your question about the future and where I may see myself going next, I think now that my job, well they say the job is never done. But now that Chloe's kind of settled into Stanford and kind of doing her own thing, I have always had a passion to continue leading in a way that brings me to, into the fold a lot more. So maybe, you know, maybe in a VC firm partner mode or another, you know CEO role in a startup, or my own startup. I mean, I never, I don't know right now I'm super happy but you never know, you know where your drive might go. And I also want to be able to very deliberately be in a role where I can continue to mentor and support up and coming women in tech. >> Well, you got the smarts but you got really the building mentality, the curiosity and the confidence really sets you up nicely. Dominique great story, great inspiration. You're a role model for many women, young girls out there and women in tech and in celebration. It's a great day and thank you for sharing that story and all the good nuggets there. Appreciate you coming on theCUBE, and it's been my pleasure. Thanks for coming on. >> Thank you, John. Thank you so much for having me. >> Okay, theCUBE's coverage of International Women's Day. I'm John Furrier, host of theCUBE here in Palo Alto getting all the content, check out the other interviews some amazing stories, lessons learned, and some, you know some funny stories and some serious stories. So have some fun and enjoy the rest of the videos here for International Women's Days, thanks for watching. (gentle inspirational music)

Published Date : Mar 9 2023

SUMMARY :

Dominique, great to have you on Thank you John, for and 50% of the world is I guess you call it primary And that really, you know, (laughs) If I was told not design and ultimately, you know if you don't mind sharing? and do all the load testing the challenges you faced? I kind of went in gung-ho Now it's a big deal. and you also don't know how to react. and if you would've done this to somebody Was that something you were natural for? and applying it to building businesses. You thought, you thought and I do have to kind And also the ability to come to the table Because I still hear that all the time. and that needs to be, I mean, That's, and the industry's to be home with, you know and I appreciate you bringing that up and all about, you know, It's funny, you know, and where do you see all the action? And I think if you don't box yourself in I love the sharing, Yeah, I think to answer your and all the good nuggets there. Thank you so much for having me. learned, and some, you know

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Rachel Thornton | PERSON | 0.99+
Rachel | PERSON | 0.99+
Todd Weatherby | PERSON | 0.99+
Georgia | LOCATION | 0.99+
GE | ORGANIZATION | 0.99+
Dominique Bastos | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Alabama | LOCATION | 0.99+
Dave Vellante | PERSON | 0.99+
Andy Jassy | PERSON | 0.99+
2016 | DATE | 0.99+
John Furrier | PERSON | 0.99+
Dominique | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
50% | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
Chloe | PERSON | 0.99+
two times | QUANTITY | 0.99+
International Women's Days | EVENT | 0.99+
International Women's Day | EVENT | 0.99+
51% | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Palo Alto, California | LOCATION | 0.99+
Persistent | ORGANIZATION | 0.99+
ProServe | ORGANIZATION | 0.99+
Stanford | ORGANIZATION | 0.99+
Persistent Systems | ORGANIZATION | 0.99+
MessageBird | ORGANIZATION | 0.99+
second year | QUANTITY | 0.99+
7% | QUANTITY | 0.99+
early 2016 | DATE | 0.98+
one team | QUANTITY | 0.98+
first | QUANTITY | 0.98+
theCUBE | ORGANIZATION | 0.98+
single | QUANTITY | 0.98+
Civil Engineering School | ORGANIZATION | 0.98+
four women | QUANTITY | 0.98+
today | DATE | 0.97+
Today | DATE | 0.97+
each | QUANTITY | 0.97+
pandemic | EVENT | 0.97+
first customer | QUANTITY | 0.97+
International Women's Day 2023 | EVENT | 0.95+
single mom | QUANTITY | 0.95+
Amazon | ORGANIZATION | 0.94+
Cloud | ORGANIZATION | 0.88+
one thing | QUANTITY | 0.87+
almost three years | QUANTITY | 0.87+
zero understanding | QUANTITY | 0.86+
Concrete Canoe Building Competition | EVENT | 0.86+
Nerd Nation | ORGANIZATION | 0.84+
zero | QUANTITY | 0.84+
second | QUANTITY | 0.8+
CTO | ORGANIZATION | 0.76+
SiliconANGLE | ORGANIZATION | 0.74+

Jay Marshall, Neural Magic | AWS Startup Showcase S3E1


 

(upbeat music) >> Hello, everyone, and welcome to theCUBE's presentation of the "AWS Startup Showcase." This is season three, episode one. The focus of this episode is AI/ML: Top Startups Building Foundational Models, Infrastructure, and AI. It's great topics, super-relevant, and it's part of our ongoing coverage of startups in the AWS ecosystem. I'm your host, John Furrier, with theCUBE. Today, we're excited to be joined by Jay Marshall, VP of Business Development at Neural Magic. Jay, thanks for coming on theCUBE. >> Hey, John, thanks so much. Thanks for having us. >> We had a great CUBE conversation with you guys. This is very much about the company focuses. It's a feature presentation for the "Startup Showcase," and the machine learning at scale is the topic, but in general, it's more, (laughs) and we should call it "Machine Learning and AI: How to Get Started," because everybody is retooling their business. Companies that aren't retooling their business right now with AI first will be out of business, in my opinion. You're seeing massive shift. This is really truly the beginning of the next-gen machine learning AI trend. It's really seeing ChatGPT. Everyone sees that. That went mainstream. But this is just the beginning. This is scratching the surface of this next-generation AI with machine learning powering it, and with all the goodness of cloud, cloud scale, and how horizontally scalable it is. The resources are there. You got the Edge. Everything's perfect for AI 'cause data infrastructure's exploding in value. AI is just the applications. This is a super topic, so what do you guys see in this general area of opportunities right now in the headlines? And I'm sure you guys' phone must be ringing off the hook, metaphorically speaking, or emails and meetings and Zooms. What's going on over there at Neural Magic? >> No, absolutely, and you pretty much nailed most of it. I think that, you know, my background, we've seen for the last 20-plus years. Even just getting enterprise applications kind of built and delivered at scale, obviously, amazing things with AWS and the cloud to help accelerate that. And we just kind of figured out in the last five or so years how to do that productively and efficiently, kind of from an operations perspective. Got development and operations teams. We even came up with DevOps, right? But now, we kind of have this new kind of persona and new workload that developers have to talk to, and then it has to be deployed on those ITOps solutions. And so you pretty much nailed it. Folks are saying, "Well, how do I do this?" These big, generational models or foundational models, as we're calling them, they're great, but enterprises want to do that with their data, on their infrastructure, at scale, at the edge. So for us, yeah, we're helping enterprises accelerate that through optimizing models and then delivering them at scale in a more cost-effective fashion. >> Yeah, and I think one of the things, the benefits of OpenAI we saw, was not only is it open source, then you got also other models that are more proprietary, is that it shows the world that this is really happening, right? It's a whole nother level, and there's also new landscape kind of maps coming out. You got the generative AI, and you got the foundational models, large LLMs. Where do you guys fit into the landscape? Because you guys are in the middle of this. How do you talk to customers when they say, "I'm going down this road. I need help. I'm going to stand this up." 
This new AI infrastructure and applications, where do you guys fit in the landscape? >> Right, and really, the answer is both. I think today, when it comes to a lot of what for some folks would still be considered kind of cutting edge around computer vision and natural language processing, a lot of our optimization tools and our runtime are based around most of the common computer vision and natural language processing models. So your YOLOs, your BERTs, you know, your DistilBERTs and what have you, so we work to help optimize those, again, which have gotten great performance and great value for customers trying to get those into production. But when you get into the LLMs, and you mentioned some of the open source components there, our research teams have kind of been right in the trenches with those. So kind of the GPT open source equivalent being OPT, being able to actually take, you know, a multi-hundred-billion-parameter model and sparsify that or optimize that down, shaving away a ton of parameters, and being able to run it on smaller infrastructure. So I think the evolution here, you know, all this stuff came out in the last six months in terms of being turned loose into the wild, but we're staying in the trenches with folks so that we can help optimize those as well and not require, again, the heavy compute, the heavy cost, the heavy power consumption as those models evolve as well. So we're staying right in with everybody while they're being built, but trying to get folks into production today with things that help with business value today. >> Jay, I really appreciate you coming on theCUBE, and before we came on camera, you said you were just on a customer call. I know you've got a lot of activity. What specific things are you helping enterprises solve? What kind of problems? Take us through the spectrum from the beginning, people jumping in the deep end of the pool, some people kind of coming in, starting out slow. What's the scale? Can you scope the kind of use cases and problems that are emerging that people are calling you for? >> Absolutely, so I think if I break it down to kind of, like, your startup, or I maybe call 'em AI native to kind of steal from cloud native years ago, that group, it's pretty much, you know, part and parcel for how that group already runs. So if you have a data science team and an ML engineering team, you're building models, you're training models, you're deploying models. You're seeing firsthand the expense of starting to try to do that at scale. So it's really just a pure operational efficiency play. They kind of speak natively to our tools, which we're doing in the open source. So it's really helping, again, with the optimization of the models they've built, and then, again, giving them an alternative to expensive proprietary hardware accelerators to have to run them. Now, on the enterprise side, it varies, right? You have some kind of AI native folks there that already have these teams, but you also have kind of, like, AI curious, right? Like, they want to do it, but they don't really know where to start, and so for them, we actually have an open source toolkit that can help you get into this optimization, and then again, that runtime, that inferencing runtime, purpose-built for CPUs. It allows you to not have to worry, again, about do I have a hardware accelerator available? How do I integrate that into my application stack? 
If I don't already know how to build this into my infrastructure, does my ITOps teams, do they know how to do this, and what does that runway look like? How do I cost for this? How do I plan for this? When it's just x86 compute, we've been doing that for a while, right? So it obviously still requires more, but at least it's a little bit more predictable. >> It's funny you mentioned AI native. You know, born in the cloud was a phrase that was out there. Now, you have startups that are born in AI companies. So I think you have this kind of cloud kind of vibe going on. You have lift and shift was a big discussion. Then you had cloud native, kind of in the cloud, kind of making it all work. Is there a existing set of things? People will throw on this hat, and then what's the difference between AI native and kind of providing it to existing stuff? 'Cause we're a lot of people take some of these tools and apply it to either existing stuff almost, and it's not really a lift and shift, but it's kind of like bolting on AI to something else, and then starting with AI first or native AI. >> Absolutely. It's a- >> How would you- >> It's a great question. I think that probably, where I'd probably pull back to kind of allow kind of retail-type scenarios where, you know, for five, seven, nine years or more even, a lot of these folks already have data science teams, you know? I mean, they've been doing this for quite some time. The difference is the introduction of these neural networks and deep learning, right? Those kinds of models are just a little bit of a paradigm shift. So, you know, I obviously was trying to be fun with the term AI native, but I think it's more folks that kind of came up in that neural network world, so it's a little bit more second nature, whereas I think for maybe some traditional data scientists starting to get into neural networks, you have the complexity there and the training overhead, and a lot of the aspects of getting a model finely tuned and hyperparameterization and all of these aspects of it. It just adds a layer of complexity that they're just not as used to dealing with. And so our goal is to help make that easy, and then of course, make it easier to run anywhere that you have just kind of standard infrastructure. >> Well, the other point I'd bring out, and I'd love to get your reaction to, is not only is that a neural network team, people who have been focused on that, but also, if you look at some of the DataOps lately, AIOps markets, a lot of data engineering, a lot of scale, folks who have been kind of, like, in that data tsunami cloud world are seeing, they kind of been in this, right? They're, like, been experiencing that. >> No doubt. I think it's funny the data lake concept, right? And you got data oceans now. Like, the metaphors just keep growing on us, but where it is valuable in terms of trying to shift the mindset, I've always kind of been a fan of some of the naming shift. I know with AWS, they always talk about purpose-built databases. And I always liked that because, you know, you don't have one database that can do everything. Even ones that say they can, like, you still have to do implementation detail differences. So sitting back and saying, "What is my use case, and then which database will I use it for?" I think it's kind of similar here. 
And when you're building those data teams, if you don't have folks that are doing data engineering, kind of that data harvesting, free processing, you got to do all that before a model's even going to care about it. So yeah, it's definitely a central piece of this as well, and again, whether or not you're going to be AI negative as you're making your way to kind of, you know, on that journey, you know, data's definitely a huge component of it. >> Yeah, you would have loved our Supercloud event we had. Talk about naming and, you know, around data meshes was talked about a lot. You're starting to see the control plane layers of data. I think that was the beginning of what I saw as that data infrastructure shift, to be horizontally scalable. So I have to ask you, with Neural Magic, when your customers and the people that are prospects for you guys, they're probably asking a lot of questions because I think the general thing that we see is, "How do I get started? Which GPU do I use?" I mean, there's a lot of things that are kind of, I won't say technical or targeted towards people who are living in that world, but, like, as the mainstream enterprises come in, they're going to need a playbook. What do you guys see, what do you guys offer your clients when they come in, and what do you recommend? >> Absolutely, and I think where we hook in specifically tends to be on the training side. So again, I've built a model. Now, I want to really optimize that model. And then on the runtime side when you want to deploy it, you know, we run that optimized model. And so that's where we're able to provide. We even have a labs offering in terms of being able to pair up our engineering teams with a customer's engineering teams, and we can actually help with most of that pipeline. So even if it is something where you have a dataset and you want some help in picking a model, you want some help training it, you want some help deploying that, we can actually help there as well. You know, there's also a great partner ecosystem out there, like a lot of folks even in the "Startup Showcase" here, that extend beyond into kind of your earlier comment around data engineering or downstream ITOps or the all-up MLOps umbrella. So we can absolutely engage with our labs, and then, of course, you know, again, partners, which are always kind of key to this. So you are spot on. I think what's happened with the kind of this, they talk about a hockey stick. This is almost like a flat wall now with the rate of innovation right now in this space. And so we do have a lot of folks wanting to go straight from curious to native. And so that's definitely where the partner ecosystem comes in so hard 'cause there just isn't anybody or any teams out there that, I literally do from, "Here's my blank database, and I want an API that does all the stuff," right? Like, that's a big chunk, but we can definitely help with the model to delivery piece. >> Well, you guys are obviously a featured company in this space. Talk about the expertise. A lot of companies are like, I won't say faking it till they make it. You can't really fake security. You can't really fake AI, right? So there's going to be a learning curve. They'll be a few startups who'll come out of the gate early. You guys are one of 'em. Talk about what you guys have as expertise as a company, why you're successful, and what problems do you solve for customers? >> No, appreciate that. Yeah, we actually, we love to tell the story of our founder, Nir Shavit. 
So he's a 20-year professor at MIT. Actually, he was doing a lot of work on kind of multicore processing before there were even physical multicores, and actually even did a stint in computational neurobiology in the 2010s, and the impetus for this whole technology, has a great talk on YouTube about it, where he talks about the fact that his work there, he kind of realized that the way neural networks encode and how they're executed by kind of ramming data layer by layer through these kind of HPC-style platforms, actually was not analogous to how the human brain actually works. So we're on one side, we're building neural networks, and we're trying to emulate neurons. We're not really executing them that way. So our team, which one of the co-founders, also an ex-MIT, that was kind of the birth of why can't we leverage this super-performance CPU platform, which has those really fat, fast caches attached to each core, and actually start to find a way to break that model down in a way that I can execute things in parallel, not having to do them sequentially? So it is a lot of amazing, like, talks and stuff that show kind of the magic, if you will, a part of the pun of Neural Magic, but that's kind of the foundational layer of all the engineering that we do here. And in terms of how we're able to bring it to reality for customers, I'll give one customer quote where it's a large retailer, and it's a people-counting application. So a very common application. And that customer's actually been able to show literally double the amount of cameras being run with the same amount of compute. So for a one-to-one perspective, two-to-one, business leaders usually like that math, right? So we're able to show pure cost savings, but even performance-wise, you know, we have some of the common models like your ResNets and your YOLOs, where we can actually even perform better than hardware-accelerated solutions. So we're trying to do, I need to just dumb it down to better, faster, cheaper, but from a commodity perspective, that's where we're accelerating. >> That's not a bad business model. Make things easier to use, faster, and reduce the steps it takes to do stuff. So, you know, that's always going to be a good market. Now, you guys have DeepSparse, which we've talked about on our CUBE conversation prior to this interview, delivers ML models through the software so the hardware allows for a decoupling, right? >> Yep. >> Which is going to drive probably a cost advantage. Also, it's also probably from a deployment standpoint it must be easier. Can you share the benefits? Is it a cost side? Is it more of a deployment? What are the benefits of the DeepSparse when you guys decouple the software from the hardware on the ML models? >> No you actually, you hit 'em both 'cause that really is primarily the value. Because ultimately, again, we're so early. And I came from this world in a prior life where I'm doing Java development, WebSphere, WebLogic, Tomcat open source, right? When we were trying to do innovation, we had innovation buckets, 'cause everybody wanted to be on the web and have their app and a browser, right? We got all the money we needed to build something and show, hey, look at the thing on the web, right? But when you had to get in production, that was the challenge. So to what you're speaking to here, in this situation, we're able to show we're just a Python package. 
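As a hedged illustration of the "just a Python package" point Jay makes above, running an optimized model on the DeepSparse runtime looks roughly like the sketch below. The model path is a placeholder, and exact task names and SparseZoo model stubs should be checked against Neural Magic's documentation; this is an editor's sketch, not an official quickstart.

```python
# pip install deepsparse  (CPU-only runtime; no GPU or special drivers assumed)
from deepsparse import Pipeline

# Placeholder: a sparsified ONNX model exported from training, or a SparseZoo
# stub taken from Neural Magic's docs.
sentiment = Pipeline.create(
    task="sentiment-analysis",
    model_path="path/to/sparsified-model.onnx",
)

print(sentiment(["The demo ran twice as many camera streams on the same CPUs."]))
```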
So whether you just install it on the operating system itself, or we also have a containerized version you can drop on any container orchestration platform, so ECS or EKS on AWS. And so you get all the auto-scaling features. So when you think about that kind of a world where you have everything from real-time inferencing to kind of after hours batch processing inferencing, the fact that you can auto scale that hardware up and down and it's CPU based, so you're paying by the minute instead of maybe paying by the hour at a lower cost shelf, it does everything from pure cost to, again, I can have my standard IT team say, "Hey, here's the Kubernetes in the container," and it just runs on the infrastructure we're already managing. So yeah, operational, cost and again, and many times even performance. (audio warbles) CPUs if I want to. >> Yeah, so that's easier on the deployment too. And you don't have this kind of, you know, blank check kind of situation where you don't know what's on the backend on the cost side. >> Exactly. >> And you control the actual hardware and you can manage that supply chain. >> And keep in mind, exactly. Because the other thing that sometimes gets lost in the conversation, depending on where a customer is, some of these workloads, like, you know, you and I remember a world where even like the roundtrip to the cloud and back was a problem for folks, right? We're used to extremely low latency. And some of these workloads absolutely also adhere to that. But there's some workloads where the latency isn't as important. And we actually even provide the tuning. Now, if we're giving you five milliseconds of latency and you don't need that, you can tune that back. So less CPU, lower cost. Now, throughput and other things come into play. But that's the kind of configurability and flexibility we give for operations. >> All right, so why should I call you if I'm a customer or prospect Neural Magic, what problem do I have or when do I know I need you guys? When do I call you in and what does my environment look like? When do I know? What are some of the signals that would tell me that I need Neural Magic? >> No, absolutely. So I think in general, any neural network, you know, the process I mentioned before called sparcification, it's, you know, an optimization process that we specialize in. Any neural network, you know, can be sparcified. So I think if it's a deep-learning neural network type model. If you're trying to get AI into production, you have cost concerns even performance-wise. I certainly hate to be too generic and say, "Hey, we'll talk to everybody." But really in this world right now, if it's a neural network, it's something where you're trying to get into production, you know, we are definitely offering, you know, kind of an at-scale performant deployable solution for deep learning models. >> So neural network you would define as what? Just devices that are connected that need to know about each other? What's the state-of-the-art current definition of neural network for customers that may think they have a neural network or might not know they have a neural network architecture? What is that definition for neural network? >> That's a great question. So basically, machine learning models that fall under this kind of category, you hear about transformers a lot, or I mentioned about YOLO, the YOLO family of computer vision models, or natural language processing models like BERT. 
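To make the deployment story above a bit more concrete, here is a minimal sketch of what "just a Python package" can look like in practice. It assumes the DeepSparse Pipeline API roughly as Neural Magic documented it around the time of this interview; the task name and the SparseZoo model stub below are illustrative placeholders, not exact references.

```python
# Hedged sketch: running a sparsified BERT-style classifier on CPU with the
# DeepSparse runtime. Install with: pip install deepsparse
from deepsparse import Pipeline

# Pipeline.create wires task-specific pre/post-processing around an optimized
# ONNX model; model_path can be a local file or a SparseZoo stub.
classifier = Pipeline.create(
    task="text-classification",                    # exact task name may vary by release
    model_path="zoo:nlp/text_classification/...",  # placeholder stub, not a real one
)

print(classifier(["The keynote landed really well with developers."]))
```

The same script runs unchanged whether it is installed directly on a host or baked into a container image and dropped onto ECS or EKS, which is what lets the orchestration layer handle the scale-up and scale-down described here.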
If you have a data science team or even developers, some even regular, I used to call myself a nine to five developer 'cause I worked in the enterprise, right? So like, hey, we found a new open source framework, you know, I used to use Spring back in the day and I had to go figure it out. There's developers that are pulling these models down and they're figuring out how to get 'em into production, okay? So I think all of those kinds of situations, you know, if it's a machine learning model of the deep learning variety that's, you know, really specifically where we shine. >> Okay, so let me pretend I'm a customer for a minute. I have all these videos, like all these transcripts, I have all these people that we've interviewed, CUBE alumnis, and I say to my team, "Let's AI-ify, sparcify theCUBE." >> Yep. >> What do I do? I mean, do I just like, my developers got to get involved and they're going to be like, "Well, how do I upload it to the cloud? Do I use a GPU?" So there's a thought process. And I think a lot of companies are going through that example of let's get on this AI, how can it help our business? >> Absolutely. >> What does that progression look like? Take me through that example. I mean, I made up theCUBE example up, but we do have a lot of data. We have large data models and we have people and connect to the internet and so we kind of seem like there's a neural network. I think every company might have a neural network in place. >> Well, and I was going to say, I think in general, you all probably do represent even the standard enterprise more than most. 'Cause even the enterprise is going to have a ton of video content, a ton of text content. So I think it's a great example. So I think that that kind of sea or I'll even go ahead and use that term data lake again, of data that you have, you're probably going to want to be setting up kind of machine learning pipelines that are going to be doing all of the pre-processing from kind of the raw data to kind of prepare it into the format that say a YOLO would actually use or let's say BERT for natural language processing. So you have all these transcripts, right? So we would do a pre-processing path where we would create that into the file format that BERT, the machine learning model would know how to train off of. So that's kind of all the pre-processing steps. And then for training itself, we actually enable what's called sparse transfer learning. So that's transfer learning is a very popular method of doing training with existing models. So we would be able to retrain that BERT model with your transcript data that we have now done the pre-processing with to get it into the proper format. And now we have a BERT natural language processing model that's been trained on your data. And now we can deploy that onto DeepSparse runtime so that now you can ask that model whatever questions, or I should say pass, you're not going to ask it those kinds of questions ChatGPT, although we can do that too. But you're going to pass text through the BERT model and it's going to give you answers back. It could be things like sentiment analysis or text classification. You just call the model, and now when you pass text through it, you get the answers better, faster or cheaper. I'll use that reference again. >> Okay, we can create a CUBE bot to give us questions on the fly from the the AI bot, you know, from our previous guests. >> Well, and I will tell you using that as an example. 
So I had mentioned OPT before, kind of the open source version of ChatGPT. So, you know, typically that requires multiple GPUs to run. So our research team, I may have mentioned earlier, we've been able to sparcify that over 50% already and run it on only a single GPU. And so in that situation, you could train OPT with that corpus of data and do exactly what you say. Actually we could use Alexa, we could use Alexa to actually respond back with voice. How about that? We'll do an API call and we'll actually have an interactive Alexa-enabled bot. >> Okay, we're going to be a customer, let's put it on the list. But this is a great example of what you guys call software delivered AI, a topic we chatted about on theCUBE conversation. This really means this is a developer opportunity. This really is the convergence of the data growth, the restructuring, how data is going to be horizontally scalable, meets developers. So this is an AI developer model going on right now, which is kind of unique. >> It is, John, I will tell you what's interesting. And again, folks don't always think of it this way, you know, the AI magical goodness is now getting pushed in the middle where the developers and IT are operating. And so it again, that paradigm, although for some folks seem obvious, again, if you've been around for 20 years, that whole all that plumbing is a thing, right? And so what we basically help with is when you deploy the DeepSparse runtime, we have a very rich API footprint. And so the developers can call the API, ITOps can run it, or to your point, it's developer friendly enough that you could actually deploy our off-the-shelf models. We have something called the SparseZoo where we actually publish pre-optimized or pre-sparcified models. And so developers could literally grab those right off the shelf with the training they've already had and just put 'em right into their applications and deploy them as containers. So yeah, we enable that for sure as well. >> It's interesting, DevOps was infrastructure as code and we had a last season, a series on data as code, which we kind of coined. This is data as code. This is a whole nother level of opportunity where developers just want to have programmable data and apps with AI. This is a whole new- >> Absolutely. >> Well, absolutely great, great stuff. Our news team at SiliconANGLE and theCUBE said you guys had a little bit of a launch announcement you wanted to make here on the "AWS Startup Showcase." So Jay, you have something that you want to launch here? >> Yes, and thank you John for teeing me up. So I'm going to try to put this in like, you know, the vein of like an AWS, like main stage keynote launch, okay? So we're going to try this out. So, you know, a lot of our product has obviously been built on top of x86. I've been sharing that the past 15 minutes or so. And with that, you know, we're seeing a lot of acceleration for folks wanting to run on commodity infrastructure. But we've had customers and prospects and partners tell us that, you know, ARM and all of its kind of variance are very compelling, both cost performance-wise and also obviously with Edge. And wanted to know if there was anything we could do from a runtime perspective with ARM. And so we got the work and, you know, it's a hard problem to solve 'cause the instructions set for ARM is very different than the instruction set for x86, and our deep tensor column technology has to be able to work with that lower level instruction spec. 
But working really hard, the engineering team's been at it and we are happy to announce here at the "AWS Startup Showcase," that DeepSparse inference now has, or inference runtime now has support for AWS Graviton instances. So it's no longer just x86, it is also ARM and that obviously also opens up the door to Edge and further out the stack so that optimize once run anywhere, we're not going to open up. So it is an early access. So if you go to neuralmagic.com/graviton, you can sign up for early access, but we're excited to now get into the ARM side of the fence as well on top of Graviton. >> That's awesome. Our news team is going to jump on that news. We'll get it right up. We get a little scoop here on the "Startup Showcase." Jay Marshall, great job. That really highlights the flexibility that you guys have when you decouple the software from the hardware. And again, we're seeing open source driving a lot more in AI ops now with with machine learning and AI. So to me, that makes a lot of sense. And congratulations on that announcement. Final minute or so we have left, give a summary of what you guys are all about. Put a plug in for the company, what you guys are looking to do. I'm sure you're probably hiring like crazy. Take the last few minutes to give a plug for the company and give a summary. >> No, I appreciate that so much. So yeah, joining us out neuralmagic.com, you know, part of what we didn't spend a lot of time here, our optimization tools, we are doing all of that in the open source. It's called SparseML and I mentioned SparseZoo briefly. So we really want the data scientists community and ML engineering community to join us out there. And again, the DeepSparse runtime, it's actually free to use for trial purposes and for personal use. So you can actually run all this on your own laptop or on an AWS instance of your choice. We are now live in the AWS marketplace. So push button, deploy, come try us out and reach out to us on neuralmagic.com. And again, sign up for the Graviton early access. >> All right, Jay Marshall, Vice President of Business Development Neural Magic here, talking about performant, cost effective machine learning at scale. This is season three, episode one, focusing on foundational models as far as building data infrastructure and AI, AI native. I'm John Furrier with theCUBE. Thanks for watching. (bright upbeat music)
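To ground the "sparsify theCUBE" walkthrough above, here is a rough sketch of the first step Jay describes: turning raw transcript text into the tokenized format a BERT-style model is fine-tuned on. It uses the Hugging Face tokenizer API; the snippets and labels are made up, and the later sparse transfer learning and DeepSparse deployment steps (which would use Neural Magic's SparseML recipes and runtime) are not shown.

```python
# Hedged sketch of transcript pre-processing for BERT fine-tuning.
# Install with: pip install transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Hypothetical labeled transcript snippets for a text-classification task.
examples = [
    ("We can show literally double the cameras on the same compute.", "positive"),
    ("Getting models into production was always the hard part.", "negative"),
]
texts = [text for text, _ in examples]
labels = [label for _, label in examples]

# Tokenize into input_ids / attention_mask ready for a fine-tuning loop.
encodings = tokenizer(texts, truncation=True, padding=True, max_length=128)
print(len(encodings["input_ids"]), "examples tokenized; labels:", labels)
```

From here, the tokenized dataset would feed the sparse transfer learning step, and the resulting optimized model would be served by the DeepSparse runtime as sketched earlier.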

Published Date : Mar 9 2023

ENTITIES

Entity | Category | Confidence
Jay | PERSON | 0.99+
Jay Marshall | PERSON | 0.99+
John Furrier | PERSON | 0.99+
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
five | QUANTITY | 0.99+
Nir Shavit | PERSON | 0.99+
20-year | QUANTITY | 0.99+
Alexa | TITLE | 0.99+
2010s | DATE | 0.99+
seven | QUANTITY | 0.99+
Python | TITLE | 0.99+
MIT | ORGANIZATION | 0.99+
each core | QUANTITY | 0.99+
Neural Magic | ORGANIZATION | 0.99+
Java | TITLE | 0.99+
YouTube | ORGANIZATION | 0.99+
Today | DATE | 0.99+
nine years | QUANTITY | 0.98+
both | QUANTITY | 0.98+
BERT | TITLE | 0.98+
theCUBE | ORGANIZATION | 0.98+
ChatGPT | TITLE | 0.98+
20 years | QUANTITY | 0.98+
over 50% | QUANTITY | 0.97+
second nature | QUANTITY | 0.96+
today | DATE | 0.96+
ARM | ORGANIZATION | 0.96+
one | QUANTITY | 0.95+
DeepSparse | TITLE | 0.94+
neuralmagic.com/graviton | OTHER | 0.94+
SiliconANGLE | ORGANIZATION | 0.94+
WebSphere | TITLE | 0.94+
nine | QUANTITY | 0.94+
first | QUANTITY | 0.93+
Startup Showcase | EVENT | 0.93+
five milliseconds | QUANTITY | 0.92+
AWS Startup Showcase | EVENT | 0.91+
two | QUANTITY | 0.9+
YOLO | ORGANIZATION | 0.89+
CUBE | ORGANIZATION | 0.88+
OPT | TITLE | 0.88+
last six months | DATE | 0.88+
season three | QUANTITY | 0.86+
double | QUANTITY | 0.86+
one customer | QUANTITY | 0.86+
Supercloud | EVENT | 0.86+
one side | QUANTITY | 0.85+
Vice | PERSON | 0.85+
x86 | OTHER | 0.83+
AI/ML: Top Startups Building Foundational Models | TITLE | 0.82+
ECS | TITLE | 0.81+
$100 billion | QUANTITY | 0.81+
DevOps | TITLE | 0.81+
WebLogic | TITLE | 0.8+
EKS | TITLE | 0.8+
a minute | QUANTITY | 0.8+
neuralmagic.com | OTHER | 0.79+

Phil Kippen, Snowflake, Dave Whittington, AT&T & Roddy Tranum, AT&T | MWC Barcelona 2023


 

(gentle music) >> Narrator: "TheCUBE's" live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> Hello everybody, welcome back to day four of "theCUBE's" coverage of MWC '23. We're here live at the Fira in Barcelona. Wall-to-wall coverage, John Furrier is in our Palo Alto studio, banging out all the news. Really, the whole week we've been talking about the disaggregation of the telco network, the new opportunities in telco. We're really excited to have AT&T and Snowflake here. Dave Whittington is the AVP, at the Chief Data Office at AT&T. Roddy Tranum is the Assistant Vice President, for Channel Performance Data and Tools at AT&T. And Phil Kippen, the Global Head Of Industry-Telecom at Snowflake, Snowflake's new telecom business. Snowflake just announced earnings last night. Typical Scarpelli, they beat earnings, very conservative guidance, stocks down today, but we like Snowflake long term, they're on that path to 10 billion. Guys, welcome to "theCUBE." Thanks so much >> Phil: Thank you. >> for coming on. >> Dave and Roddy: Thanks Dave. >> Dave, let's start with you. The data culture inside of telco, We've had this, we've been talking all week about this monolithic system. Super reliable. You guys did a great job during the pandemic. Everything shifting to landlines. We didn't even notice, you guys didn't miss a beat. Saved us. But the data culture's changing inside telco. Explain that. >> Well, absolutely. So, first of all IoT and edge processing is bringing forth new and exciting opportunities all the time. So, we're bridging the world between a lot of the OSS stuff that we can do with edge processing. But bringing that back, and now we're talking about working, and I would say traditionally, we talk data warehouse. Data warehouse and big data are now becoming a single mesh, all right? And the use cases and the way you can use those, especially I'm taking that edge data and bringing it back over, now I'm running AI and ML models on it, and I'm pushing back to the edge, and I'm combining that with my relational data. So that mesh there is making all the difference. We're getting new use cases that we can do with that. And it's just, and the volume of data is immense. >> Now, I love ChatGPT, but I'm hoping your data models are more accurate than ChatGPT. I never know. Sometimes it's really good, sometimes it's really bad. But enterprise, you got to be clean with your AI, don't you? >> Not only you have to be clean, you have to monitor it for bias and be ethical about it. We're really good about that. First of all with AT&T, our brand is Platinum. We take care of that. So, we may not be as cutting-edge risk takers as others, but when we go to market with an AI or an ML or a product, it's solid. >> Well hey, as telcos go, you guys are leaning into the Cloud. So I mean, that's a good starting point. Roddy, explain your role. You got an interesting title, Channel Performance Data and Tools, what's that all about? >> So literally anything with our consumer, retail, concenters' channels, all of our channels, from a data perspective and metrics perspective, what it takes to run reps, agents, all the way to leadership levels, scorecards, how you rank in the business, how you're driving the business, from sales, service, customer experience, all that data infrastructure with our great partners on the CDO side, as well as Snowflake, that comes from my team. 
>> And that's traditionally been done in a, I don't mean the pejorative, but we're talking about legacy, monolithic, sort of data warehouse technologies. >> Absolutely. >> We have a love-hate relationship with them. It's what we had. It's what we used, right? And now that's evolving. And you guys are leaning into the Cloud. >> Dramatic evolution. And what Snowflake's enabled for us is impeccable. We've talked about having, people have dreamed of one data warehouse for the longest time and everything in one system. Really, this is the only way that becomes a reality. The more you get in Snowflake, we can have golden source data, and instead of duplicating that 50 times across AT&T, it's in one place, we just share it, everybody leverages it, and now it's not duplicated, and the process efficiency is just incredible. >> But it really hinges on that separation of storage and compute. And we talk about the monolithic warehouse, and one of the nightmares I've lived with, is having a monolithic warehouse. And let's just go with some of my primary, traditional customers, sales, marketing and finance. They are leveraging BSS OSS data all the time. For me to coordinate a deployment, I have to make sure that each one of these units can take an outage, if it's going to be a long deployment. With the separation of storage, compute, they own their own compute cluster. So I can move faster for these people. 'Cause if finance, I can implement his code without impacting finance or marketing. This brings in CI/CD to more reality. It brings us faster to market with more features. So if he wants to implement a new comp plan for the field reps, or we're reacting to the marketplace, where one of our competitors has done something, we can do that in days, versus waiting weeks or months. >> And we've reported on this a lot. This is the brilliance of Snowflake's founders, that whole separation >> Yep. >> from compute and data. I like Dave, that you're starting with sort of the business flexibility, 'cause there's a cost element of this too. You can dial down, you can turn off compute, and then of course the whole world said, "Hey, that's a good idea." And a VC started throwing money at Amazon, but Redshift said, "Oh, we can do that too, sort of, can't turn off the compute." But I want to ask you Phil, so, >> Sure. >> it looks from my vantage point, like you're taking your Data Cloud message which was originally separate compute from storage simplification, now data sharing, automated governance, security, ultimately the marketplace. >> Phil: Right. >> Taking that same model, break down the silos into telecom, right? It's that same, >> Mm-hmm. >> sorry to use the term playbook, Frank Slootman tells me he doesn't use playbooks, but he's not a pattern matcher, but he's a situational CEO, he says. But the situation in telco calls for that type of strategy. So explain what you guys are doing in telco. >> I think there's, so, what we're launching, we launched last week, and it really was three components, right? So we had our platform as you mentioned, >> Dave: Mm-hmm. >> and that platform is being utilized by a number of different companies today. We also are adding, for telecom very specifically, we're adding capabilities in marketplace, so that service providers can not only use some of the data and apps that are in marketplace, but as well service providers can go and sell applications or sell data that they had built. And then as well, we're adding our ecosystem, it's telecom-specific. 
So, we're bringing partners in, technology partners, and consulting and services partners, that are very much focused on telecoms and what they do internally, but also helping them monetize new services. >> Okay, so it's not just sort of generic Snowflake into telco? You have specific value there. >> We're purposing the platform specifically for- >> Are you a telco guy? >> I am. You are, okay. >> Total telco guy absolutely. >> So there you go. You see that Snowflake is actually an interesting organizational structure, 'cause you're going after verticals, which is kind of rare for a company of your sort of inventory, I'll say, >> Absolutely. >> I don't mean that as a negative. (Dave laughs) So Dave, take us through the data journey at AT&T. It's a long history. You don't have to go back to the 1800s, but- (Dave laughs) >> Thank you for pointing out, we're a 149-year-old company. So, Jesse James was one of the original customers, (Dave laughs) and we have no longer got his data. So, I'll go back. I've been 17 years singular AT&T, and I've watched it through the whole journey of, where the monolithics were growing, when the consolidation of small, wireless carriers, and we went through that boom. And then we've gone through mergers and acquisitions. But, Hadoop came out, and it was going to solve all world hunger. And we had all the aspects of, we're going to monetize and do AI and ML, and some of the things we learned with Hadoop was, we had this monolithic warehouse, we had this file-based-structured Hadoop, but we really didn't know how to bring this all together. And we were bringing items over to the relational, and we were taking the relational and bringing it over to the warehouse, and trying to, and it was a struggle. Let's just go there. And I don't think we were the only company to struggle with that, but we learned a lot. And so now as tech is finally emerging, with the cloud, companies like Snowflake, and others that can handle that, where we can create, we were discussing earlier, but it becomes more of a conducive mesh that's interoperable. So now we're able to simplify that environment. And the cloud is a big thing on that. 'Cause you could not do this on-prem with on-prem technologies. It would be just too cost prohibitive, and too heavy of lifting, going back and forth, and managing the data. The simplicity the cloud brings with a smaller set of tools, and I'll say in the data space specifically, really allows us, maybe not a single instance of data for all use cases, but a greatly reduced ecosystem. And when you simplify your ecosystem, you simplify speed to market and data management. >> So I'm going to ask you, I know it's kind of internal organizational plumbing, but it'll inform my next question. So, Dave, you're with the Chief Data Office, and Roddy, you're kind of, you all serve in the business, but you're really serving the, you're closer to those guys, they're banging on your door for- >> Absolutely. I try to keep the 130,000 users who may or may not have issues sometimes with our data and metrics, away from Dave. And he just gets a call from me. >> And he only calls when he has a problem. He's never wished me happy birthday. (Dave and Phil laugh) >> So the reason I asked that is because, you describe Dave, some of the Hadoop days, and again love-hate with that, but we had hyper-specialized roles. We still do. You've got data engineers, data scientists, data analysts, and you've got this sort of this pipeline, and it had to be this sequential pipeline. 
I know Snowflake and others have come to simplify that. My question to you is, how is that those roles, how are those roles changing? How is data getting closer to the business? Everybody talks about democratizing business. Are you doing that? What's a real use example? >> From our perspective, those roles, a lot of those roles on my team for years, because we're all about efficiency, >> Dave: Mm-hmm. >> we cut across those areas, and always have cut across those areas. So now we're into a space where things have been simplified, data processes and copying, we've gone from 40 data processes down to five steps now. We've gone from five steps to one step. We've gone from days, now take hours, hours to minutes, minutes to seconds. Literally we're seeing that time in and time out with Snowflake. So these resources that have spent all their time on data engineering and moving data around, are now freed up more on what they have skills for and always have, the data analytics area of the business, and driving the business forward, and new metrics and new analysis. That's some of the great operational value that we've seen here. As this simplification happens, it frees up brain power. >> So, you're pumping data from the OSS, the BSS, the OKRs everywhere >> Everywhere. >> into Snowflake? >> Scheduling systems, you name it. If you can think of what drives our retail and centers and online, all that data, scheduling system, chat data, call center data, call detail data, all of that enters into this common infrastructure to manage the business on a day in and day out basis. >> How are the roles and the skill sets changing? 'Cause you're doing a lot less ETL, you're doing a lot less moving of data around. There were guys that were probably really good at that. I used to joke in the, when I was in the storage world, like if your job is bandaging lungs, you need to look for a new job, right? So, and they did and people move on. So, are you able to sort of redeploy those assets, and those people, those human resources? >> These folks are highly skilled. And we were talking about earlier, SQL hasn't gone away. Relational databases are not going away. And that's one thing that's made this migration excellent, they're just transitioning their skills. Experts in legacy systems are now rapidly becoming experts on the Snowflake side. And it has not been that hard a transition. There are certainly nuances, things that don't operate as well in the cloud environment that we have to learn and optimize. But we're making that transition. >> Dave: So just, >> Please. >> within the Chief Data Office we have a couple of missions, and Roddy is a great partner and an example of how it works. We try to bring the data for democratization, so that we have one interface, now hopefully know we just have a logical connection back to these Snowflake instances that we connect. But we're providing that governance and cleansing, and if there's a business rule at the enterprise level, we provide it. But the goal at CDO is to make sure that business units like Roddy or marketing or finance, that they can come to a platform that's reliable, robust, and self-service. I don't want to be in his way. So I feel like I'm providing a sub-level of platform, that he can come to and anybody can come to, and utilize, that they're not having to go back and undo what's in Salesforce, or ServiceNow, or in our billers. So, I'm sort of that layer. And then making sure that that ecosystem is robust enough for him to use. 
>> And that self-service infrastructure is predominantly through the Azure Cloud, correct? >> Dave: Absolutely. >> And you work on other clouds, but it's predominantly through Azure? >> We're predominantly in Azure, yeah. >> Dave: That's the first-party citizen? >> Yeah. >> Okay, I like to think in terms sometimes of data products, and I know you've mentioned upfront, you're Gold standard or Platinum standard, you're very careful about personal information. >> Dave: Yeah. >> So you're not trying to sell, I'm an AT&T customer, you're not trying to sell my data, and make money off of my data. So the value prop and the business case for Snowflake is it's simpler. You do things faster, you're in the cloud, lower cost, et cetera. But I presume you're also in the business, AT&T, of making offers and creating packages for customers. I look at those as data products, 'cause it's not a, I mean, yeah, there's a physical phone, but there's data products behind it. So- >> It ultimately is, but not everybody always sees it that way. Data reporting often can be an afterthought. And we're making it more on the forefront now. >> Yeah, so I like to think in terms of data products, I mean even if the financial services business, it's a data business. So, if we can think about that sort of metaphor, do you see yourselves as data product builders? Do you have that, do you think about building products in that regard? >> Within the Chief Data Office, we have a data product team, >> Mm-hmm. >> and by the way, I wouldn't be disingenuous if I said, oh, we're very mature in this, but no, it's where we're going, and it's somewhat of a journey, but I've got a peer, and their whole job is to go from, especially as we migrate from cloud, if Roddy or some other group was using tables three, four and five and joining them together, it's like, "Well look, this is an offer for data product, so let's combine these and put it up in the cloud, and here's the offer data set product, or here's the opportunity data product," and it's a journey. We're on the way, but we have dedicated staff and time to do this. >> I think one of the hardest parts about that is the organizational aspects of it. Like who owns the data now, right? It used to be owned by the techies, and increasingly the business lines want to have access, you're providing self-service. So there's a discussion about, "Okay, what is a data product? Who's responsible for that data product? Is it in my P&L or your P&L? Somebody's got to sign up for that number." So, it sounds like those discussions are taking place. >> They are. And, we feel like we're more the, and CDO at least, we feel more, we're like the guardians, and the shepherds, but not the owners. I mean, we have a role in it all, but he owns his metrics. >> Yeah, and even from our perspective, we see ourselves as an enabler of making whatever AT&T wants to make happen in terms of the key products and officers' trade-in offers, trade-in programs, all that requires this data infrastructure, and managing reps and agents, and what they do from a channel performance perspective. We still ourselves see ourselves as key enablers of that. And we've got to be flexible, and respond quickly to the business. >> I always had empathy for the data engineer, and he or she had to service all these different lines of business with no business context. >> Yeah. >> Like the business knows good data from bad data, and then they just pound that poor individual, and they're like, "Okay, I'm doing my best. 
It's just ones and zeros to me." So, it sounds like that's, you're on that path. >> Yeah absolutely, and I think, we do have refined, getting more and more refined owners of, since Snowflake enables these golden source data, everybody sees me and my organization, channel performance data, go to Roddy's team, we have a great team, and we go to Dave in terms of making it all happen from a data infrastructure perspective. So we, do have a lot more refined, "This is where you go for the golden source, this is where it is, this is who owns it. If you want to launch this product and services, and you want to manage reps with it, that's the place you-" >> It's a strong story. So Chief Data Office doesn't own the data per se, but it's your responsibility to provide the self-service infrastructure, and make sure it's governed properly, and in as automated way as possible. >> Well, yeah, absolutely. And let me tell you more, everybody talks about single version of the truth, one instance of the data, but there's context to that, that we are taking, trying to take advantage of that as we do data products is, what's the use case here? So we may have an entity of Roddy as a prospective customer, and we may have a entity of Roddy as a customer, high-value customer over here, which may have a different set of mix of data and all, but as a data product, we can then create those for those specific use cases. Still point to the same data, but build it in different constructs. One for marketing, one for sales, one for finance. By the way, that's where your data engineers are struggling. >> Yeah, yeah, of course. So how do I serve all these folks, and really have the context-common story in telco, >> Absolutely. >> or are these guys ahead of the curve a little bit? Or where would you put them? >> I think they're definitely moving a lot faster than the industry is generally. I think the enabling technologies, like for instance, having that single copy of data that everybody sees, a single pane of glass, right, that's definitely something that everybody wants to get to. Not many people are there. I think, what AT&T's doing, is most definitely a little bit further ahead than the industry generally. And I think the successes that are coming out of that, and the learning experiences are starting to generate momentum within AT&T. So I think, it's not just about the product, and having a product now that gives you a single copy of data. It's about the experiences, right? And now, how the teams are getting trained, domains like network engineering for instance. They typically haven't been a part of data discussions, because they've got a lot of data, but they're focused on the infrastructure. >> Mm. >> So, by going ahead and deploying this platform, for platform's purpose, right, and the business value, that's one thing, but also to start bringing, getting that experience, and bringing new experience in to help other groups that traditionally hadn't been data-centric, that's also a huge step ahead, right? So you need to enable those groups. >> A big complaint of course we hear at MWC from carriers is, "The over-the-top guys are killing us. They're riding on our networks, et cetera, et cetera. They have all the data, they have all the client relationships." Do you see your client relationships changing as a result of sort of your data culture evolving? >> Yes, I'm not sure I can- >> It's a loaded question, I know. 
>> Yeah, and then I, so, we want to start embedding as much into our network on the proprietary value that we have, so we can start getting into that OTT play, us as any other carrier, we have distinct advantages of what we can do at the edge, and we just need to start exploiting those. But you know, 'cause whether it's location or whatnot, so we got to eat into that. Historically, the network is where we make our money in, and we stack the services on top of it. It used to be *69. >> Dave: Yeah. >> If anybody remembers that. >> Dave: Yeah, of course. (Dave laughs) >> But you know, it was stacked on top of our network. Then we stack another product on top of it. It'll be in the edge where we start providing distinct values to other partners as we- >> I mean, it's a great business that you're in. I mean, if they're really good at connectivity. >> Dave: Yeah. >> And so, it sounds like it's still to be determined >> Dave: Yeah. >> where you can go with this. You have to be super careful with private and for personal information. >> Dave: Yep. >> Yeah, but the opportunities are enormous. >> There's a lot. >> Yeah, particularly at the edge, looking at, private networks are just an amazing opportunity. Factories and name it, hospital, remote hospitals, remote locations. I mean- >> Dave: Connected cars. >> Connected cars are really interesting, right? I mean, if you start communicating car to car, and actually drive that, (Dave laughs) I mean that's, now we're getting to visit Xen Fault Tolerance people. This is it. >> Dave: That's not, let's hold the traffic. >> Doesn't scare me as much as we actually learn. (all laugh) >> So how's the show been for you guys? >> Dave: Awesome. >> What're your big takeaways from- >> Tremendous experience. I mean, someone who doesn't go outside the United States much, I'm a homebody. The whole experience, the whole trip, city, Mobile World Congress, the technologies that are out here, it's been a blast. >> Anything, top two things you learned, advice you'd give to others, your colleagues out in general? >> In general, we talked a lot about technologies today, and we talked a lot about data, but I'm going to tell you what, the accelerator that you cannot change, is the relationship that we have. So when the tech and the business can work together toward a common goal, and it's a partnership, you get things done. So, I don't know how many CDOs or CIOs or CEOs are out there, but this connection is what accelerates and makes it work. >> And that is our audience Dave. I mean, it's all about that alignment. So guys, I really appreciate you coming in and sharing your story in "theCUBE." Great stuff. >> Thank you. >> Thanks a lot. >> All right, thanks everybody. Thank you for watching. I'll be right back with Dave Nicholson. Day four SiliconANGLE's coverage of MWC '23. You're watching "theCUBE." (gentle music)
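One way to picture the storage/compute separation Dave Whittington credits for faster deployments is the sketch below: each business unit gets its own virtual warehouse over the same shared, governed tables, so a code change for one team never takes another team's compute offline. The connection parameters, warehouse, database and table names are illustrative only, not AT&T's actual environment.

```python
# Hedged sketch: per-team virtual warehouses over shared storage in Snowflake.
# Install with: pip install snowflake-connector-python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",       # placeholder
    user="my_user",             # placeholder
    password="***",             # placeholder
    role="CDO_PLATFORM_ADMIN",  # placeholder role name
)
cur = conn.cursor()

# Separate compute clusters per consumer of the same governed data.
for team in ("FINANCE", "MARKETING", "CHANNEL_PERFORMANCE"):
    cur.execute(
        f"CREATE WAREHOUSE IF NOT EXISTS {team}_WH "
        "WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE"
    )

# Each team points its sessions at its own warehouse; the underlying storage
# (the golden-source tables) is shared rather than copied 50 times.
cur.execute("USE WAREHOUSE CHANNEL_PERFORMANCE_WH")
cur.execute("SELECT COUNT(*) FROM SHARED_DB.GOLD.CHANNEL_METRICS")  # illustrative table
print(cur.fetchone())
```

The same pattern is what makes the CI/CD point above work: suspending, resizing, or redeploying one team's warehouse has no effect on the others, because the data is shared, not duplicated.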

Published Date : Mar 2 2023

ENTITIES

Entity | Category | Confidence
Dave | PERSON | 0.99+
Dave Whittington | PERSON | 0.99+
Frank Slootman | PERSON | 0.99+
Roddy | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Phil | PERSON | 0.99+
Phil Kippen | PERSON | 0.99+
AT&T | ORGANIZATION | 0.99+
Jesse James | PERSON | 0.99+
AT&T. | ORGANIZATION | 0.99+
five steps | QUANTITY | 0.99+
Dave Nicholson | PERSON | 0.99+
John Furrier | PERSON | 0.99+
50 times | QUANTITY | 0.99+
Snowflake | ORGANIZATION | 0.99+
Roddy Tranum | PERSON | 0.99+
10 billion | QUANTITY | 0.99+
one step | QUANTITY | 0.99+
17 years | QUANTITY | 0.99+
130,000 users | QUANTITY | 0.99+
United States | LOCATION | 0.99+
1800s | DATE | 0.99+
last week | DATE | 0.99+
Barcelona | LOCATION | 0.99+
Palo Alto | LOCATION | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
last night | DATE | 0.99+
MWC '23 | EVENT | 0.98+
telco | ORGANIZATION | 0.98+
one system | QUANTITY | 0.98+
one | QUANTITY | 0.98+
40 data processes | QUANTITY | 0.98+
today | DATE | 0.98+
one place | QUANTITY | 0.97+
P&L | ORGANIZATION | 0.97+
telcos | ORGANIZATION | 0.97+
CDO | ORGANIZATION | 0.97+
149-year-old | QUANTITY | 0.97+
five | QUANTITY | 0.97+
single | QUANTITY | 0.96+
three components | QUANTITY | 0.96+
One | QUANTITY | 0.96+

John Kreisa, Couchbase | MWC Barcelona 2023


 

>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music intro) (logo background tingles) >> Hi everybody, welcome back to day three of MWC23, my name is Dave Vellante and we're here live at the Theater of Barcelona, Lisa Martin, David Nicholson, John Furrier's in our studio in Palo Alto. Lot of buzz at the show, the Mobile World Daily Today, front page, Netflix chief hits back in fair share row, Greg Peters, the co-CEO of Netflix, talking about how, "Hey, you guys want to tax us, the telcos want to tax us, well, maybe you should help us pay for some of the content. Your margins are higher, you have a monopoly, you know, we're delivering all this value, you're bundling Netflix in, from a lot of ISPs so hold on, you know, pump the brakes on that tax," so that's the big news. Lockheed Martin, FOSS issues, AI guidelines, says, "AI's not going to take over your job anytime soon." Although I would say, your job's going to be AI-powered for the next five years. We're going to talk about data, we've been talking about the disaggregation of the telco stack, part of that stack is a data layer. John Kreisa is here, the CMO of Couchbase, John, you know, we've talked about all week, the disaggregation of the telco stacks, they got, you know, Silicon and operating systems that are, you know, real time OS, highly reliable, you know, compute infrastructure all the way up through a telemetry stack, et cetera. And that's a proprietary block that's really exploding, it's like the big bang, like we saw in the enterprise 20 years ago and we haven't had much discussion about that data layer, sort of that horizontal data layer, that's the market you play in. You know, Couchbase obviously has a lot of telco customers- >> John: That's right. >> We've seen, you know, Snowflake and others launch telco businesses. What are you seeing when you talk to customers at the show? What are they doing with that data layer? >> Yeah, so they're building applications to drive and power unique experiences for their users, but of course, it all starts with where the data is. So they're building mobile applications where they're stretching it out to the edge and you have to move the data to the edge, you have to have that capability to deliver that highly interactive experience to their customers or for their own internal use cases out to that edge, so seeing a lot of that with Couchbase and with our customers in telco. >> So what do the telcos want to do with data? I mean, they've got the telemetry data- >> John: Yeah. >> Now they frequently complain about the over-the-top providers that have used that data, again like Netflix, to identify customer demand for content and they're mopping that up in a big way, you know, certainly Amazon and shopping Google and ads, you know, they're all using that network. But what do the telcos do today and what do they want to do in the future? They're all talking about monetization, how do they monetize that data? >> Yeah, well, by taking that data, there's insight to be had, right? So by usage patterns and what's happening, just as you said, so they can deliver a better experience. It's all about getting that edge, if you will, on their competition and so taking that data, using it in a smart way, gives them that edge to deliver a better service and then grow their business. 
>> We're seeing a lot of action at the edge and, you know, the edge can be a Home Depot or a Lowe's store, but it also could be the far edge, could be a, you know, an oil drilling, an oil rig, it could be a racetrack, you know, certainly hospitals and certain, you know, situations. So let's think about that edge, where there's maybe not a lot of connectivity, there might be private networks going in, in the future- >> John: That's right. >> Private 5G networks. What's the data flow look like there? Do you guys have any customers doing those types of use cases? >> Yeah, absolutely. >> And what are they doing with the data? >> Yeah, absolutely, we've got customers all across, so telco and transportation, all kinds of service delivery and healthcare, for example, we've got customers who are delivering healthcare out at the edge where they have a remote location, they're able to deliver healthcare, but as you said, there's not always connectivity, so they need to have the applications, need to continue to run and then sync back once they have that connectivity. So it's really having the ability to deliver a service, reliably and then know that that will be synced back to some central server when they have connectivity- >> So the processing might occur where the data- >> Compute at the edge. >> How do you sync back? What is that technology? >> Yeah, so there's, so within, so Couchbase and Couchbase's case, we have an autonomous sync capability that brings it back to the cloud once they get back to whether it's a private network that they want to run over, or if they're doing it over a public, you know, wifi network, once it determines that there's connectivity and, it can be peer-to-peer sync, so different edge apps communicating with each other and then ultimately communicating back to a central server. >> I mean, the other theme here, of course, I call it the software-defined telco, right? But you got to have, you got to run on something, got to have hardware. So you see companies like AWS putting Outposts, out to the edge, Outposts, you know, doesn't really run a lot of database to mind, I mean, it runs RDS, you know, maybe they're going to eventually work with companies like... I mean, you're a partner of AWS- >> John: We are. >> Right? So do you see that kind of cloud infrastructure that's moving to the edge? Do you see that as an opportunity for companies like Couchbase? >> Yeah, we do. We see customers wanting to push more and more of that compute out to the edge and so partnering with AWS gives us that opportunity and we are certified on Outpost and- >> Oh, you are? >> We are, yeah. >> Okay. >> Absolutely. >> When did that, go down? >> That was last year, but probably early last year- >> So I can run Couchbase at the edge, on Outpost? >> Yeah, that's right. >> I mean, you know, Outpost adoption has been slow, we've reported on that, but are you seeing any traction there? Are you seeing any nibbles? >> Starting to see some interest, yeah, absolutely. And again, it has to be for the right use case, but again, for service delivery, things like healthcare and in transportation, you know, they're starting to see where they want to have that compute, be very close to where the actions happen. >> And you can run on, in the data center, right? >> That's right. >> You can run in the cloud, you know, you see HPE with GreenLake, you see Dell with Apex, that's essentially their Outposts. >> Yeah. >> They're saying, "Hey, we're going to take our whole infrastructure and make it as a service." 
>> Yeah, yeah. >> Right? And so you can participate in those environments- >> We do. >> And then so you've got now, you know, we call it supercloud, you've got the on-prem, you've got the, you can run in the public cloud, you can run at the edge and you want that consistent experience- >> That's right. >> You know, from a data layer- >> That's right. >> So is that really the strategy for a data company is taking or should be taking, that horizontal layer across all those use cases? >> You do need to think holistically about it, because you need to be able to deliver as a, you know, as a provider, wherever the customer wants to be able to consume that application. So you do have to think about any of the public clouds or private networks and all the way to the edge. >> What's different John, about the telco business versus the traditional enterprise? >> Well, I mean, there's scale, I mean, one thing they're dealing with, particularly for end user-facing apps, you're dealing at a very very high scale and the expectation that you're going to deliver a very interactive experience. So I'd say one thing in particular that we are focusing on, is making sure we deliver that highly interactive experience but it's the scale of the number of users and customers that they have, and the expectation that your application's always going to work. >> Speaking of applications, I mean, it seems like that's where the innovation is going to come from. We saw yesterday, GSMA announced, I think eight APIs telco APIs, you know, we were talking on theCUBE, one of the analysts was like, "Eight, that's nothing," you know, "What do these guys know about developers?" But you know, as Daniel Royston said, "Eight's better than zero." >> Right? >> So okay, so we're starting there, but the point being, it's all about the apps, that's where the innovation's going to come from- >> That's right. >> So what are you seeing there, in terms of building on top of the data app? >> Right, well you have to provide, I mean, have to provide the APIs and the access because it is really, the rubber meets the road, with the developers and giving them the ability to create those really rich applications where they want and create the experiences and innovate and change the way that they're giving those experiences. >> Yeah, so what's your relationship with developers at Couchbase? >> John: Yeah. >> I mean, talk about that a little bit- >> Yeah, yeah, so we have a great relationship with developers, something we've been investing more and more in, in terms of things like developer relations teams and community, Couchbase started in open source, continue to be based on open source projects and of course, those are very developer centric. So we provide all the consistent APIs for developers to create those applications, whether it's something on Couchbase Lite, which is our kind of edge-based database, or how they can sync that data back and we actually automate a lot of that syncing which is a very difficult developer task which lends them to one of the developer- >> What I'm trying to figure out is, what's the telco developer look like? Is that a developer that comes from the enterprise and somebody comes from the blockchain world, or AI or, you know, there really doesn't seem to be a lot of developer talk here, but there's a huge opportunity. >> Yeah, yeah. 
>> And, you know, I feel like, the telcos kind of remind me of, you know, a traditional legacy company trying to get into the developer world, you know, even Oracle, okay, they bought Sun, they got Java, so I guess they have developers, but you know, IBM for years tried with Bluemix, they had to end up buying Red Hat, really, and that gave them the developer community. >> Yep. >> EMC used to have a thing called EMC Code, which was a, you know, good effort, but eh. And then, you know, VMware always trying to do that, but, so as you move up the stack obviously, you have greater developer affinity. Where do you think the telco developer's going to come from? How's that going to evolve? >> Yeah, it's interesting, and I think they're... To kind of get to your first question, I think they're fairly traditional enterprise developers and when we break that down, we look at it in terms of what the developer persona is, are they a front-end developer? Like they're writing that front-end app, they don't care so much about the infrastructure behind or are they a full stack developer and they're really involved in the entire application development lifecycle? Or are they living at the backend and they're really wanting to just focus in on that data layer? So we lend towards all of those different personas and we think about them in terms of the APIs that we create, so that's really what the developers are for telcos is, there's a combination of those front-end and full stack developers and so for them to continue to innovate they need to appeal to those developers and that's technology, like Couchbase, is what helps them do that. >> Yeah and you think about the Apples, you know, the app store model or Apple sort of says, "Okay, here's a developer kit, go create." >> John: Yeah. >> "And then if it's successful, you're going to be successful and we're going to take a vig," okay, good model. >> John: Yeah. >> I think I'm hearing, and maybe I misunderstood this, but I think it was the CEO or chairman of Ericsson on the day one keynotes, was saying, "We are going to monetize the, essentially the telemetry data, you know, through APIs, we're going to charge for that," you know, maybe that's not the best approach, I don't know, I think there's got to be some innovation on top. >> John: Yeah. >> Now maybe some of these greenfield telcos are going to do like, you take like a dish networks, what they're doing, they're really trying to drive development layers. So I think it's like this wild west open, you know, community that's got to be formed and right now it's very unclear to me, do you have any insights there? >> I think it is more, like you said, Wild West, I think there's no emerging standard per se for across those different company types and sort of different pieces of the industry. So consequently, it does need to form some more standards in order to really help it grow and I think you're right, you have to have the right APIs and the right access in order to properly monetize, you have to attract those developers or you're not going to be able to monetize properly. >> Do you think that if, in thinking about your business and you know, you've always sold to telcos, but now it's like there's this transformation going on in telcos, will that become an increasingly larger piece of your business or maybe even a more important piece of your business? Or it's kind of be steady state because it's such a slow moving industry? 
>> No, it is a big and increasing piece of our business, I think telcos like other enterprises, want to continue to innovate and so they look to, you know, technologies like, Couchbase document database that allows them to have more flexibility and deliver the speed that they need to deliver those kinds of applications. So we see a lot of migration off of traditional legacy infrastructure in order to build that new age interface and new age experience that they want to deliver. >> A lot of buzz in Silicon Valley about open AI and Chat GPT- >> Yeah. >> You know, what's your take on all that? >> Yeah, we're looking at it, I think it's exciting technology, I think there's a lot of applications that are kind of, a little, sort of innovate traditional interfaces, so for example, you can train Chat GPT to create code, sample code for Couchbase, right? You can go and get it to give you that sample app which gets you a headstart or you can actually get it to do a better job of, you know, sorting through your documentation, like Chat GPT can do a better job of helping you get access. So it improves the experience overall for developers, so we're excited about, you know, what the prospect of that is. >> So you're playing around with it, like everybody is- >> Yeah. >> And potentially- >> Looking at use cases- >> Ways tO integrate, yeah. >> Hundred percent. >> So are we. John, thanks for coming on theCUBE. Always great to see you, my friend. >> Great, thanks very much. >> All right, you're welcome. All right, keep it right there, theCUBE will be back live from Barcelona at the theater. SiliconANGLE's continuous coverage of MWC23. Go to siliconangle.com for all the news, theCUBE.net is where all the videos are, keep it right there. (cheerful upbeat music outro)
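For readers who want a mental model of the "autonomous sync" pattern John Kreisa describes, here is a purely conceptual sketch of offline-first behavior at the edge: the application always writes locally, and buffered documents are pushed to a central server whenever connectivity is available. This is not the Couchbase Lite or sync API, just the shape of the pattern, with made-up names and endpoints.

```python
# Conceptual sketch of an offline-first edge write path with opportunistic sync.
import json
import sqlite3
import time
import urllib.request

LOCAL_DB = sqlite3.connect("edge_buffer.db")
LOCAL_DB.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, doc TEXT)")

CENTRAL_URL = "https://central.example.com/sync"  # placeholder endpoint

def record_event(doc: dict) -> None:
    """Write locally first; the application never blocks on the network."""
    LOCAL_DB.execute("INSERT INTO outbox (doc) VALUES (?)", (json.dumps(doc),))
    LOCAL_DB.commit()

def try_sync() -> None:
    """Push buffered documents upstream when a connection is available."""
    rows = LOCAL_DB.execute("SELECT id, doc FROM outbox ORDER BY id").fetchall()
    for row_id, doc in rows:
        try:
            req = urllib.request.Request(
                CENTRAL_URL, data=doc.encode(), headers={"Content-Type": "application/json"}
            )
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            return  # still offline; keep the document buffered and retry later
        LOCAL_DB.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        LOCAL_DB.commit()

record_event({"site": "remote-clinic-01", "reading": 98.6, "ts": time.time()})
try_sync()
```

In a real deployment the local store, conflict handling, and peer-to-peer replication would come from the database itself; the point here is only that the write path never depends on the network being up.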

Published Date : Mar 1 2023

SUMMARY :

that drive human progress. that's the market you play in. We've seen, you know, and you have to move the data to the edge, you know, certainly Amazon that edge, if you will, it could be a racetrack, you know, Do you guys have any customers the applications, need to over a public, you know, out to the edge, Outposts, you know, of that compute out to the edge in transportation, you know, You can run in the cloud, you know, and make it as a service." to deliver as a, you know, and the expectation that But you know, as Daniel Royston said, and change the way that they're continue to be based on open or AI or, you know, there developer world, you know, And then, you know, VMware and so for them to continue to innovate about the Apples, you know, and we're going to take data, you know, through APIs, are going to do like, you and the right access in and so they look to, you know, so we're excited about, you know, yeah. Always great to see you, Go to siliconangle.com for all the news,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

JohnPERSON

0.99+

Greg PetersPERSON

0.99+

Daniel RoystonPERSON

0.99+

Lisa MartinPERSON

0.99+

AWSORGANIZATION

0.99+

EricssonORGANIZATION

0.99+

David NicholsonPERSON

0.99+

Palo AltoLOCATION

0.99+

John KreisaPERSON

0.99+

IBMORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

NetflixORGANIZATION

0.99+

last yearDATE

0.99+

Silicon ValleyLOCATION

0.99+

GSMAORGANIZATION

0.99+

JavaTITLE

0.99+

LoweORGANIZATION

0.99+

first questionQUANTITY

0.99+

Lockheed MartinORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

OracleORGANIZATION

0.99+

telcosORGANIZATION

0.99+

Dell TechnologiesORGANIZATION

0.99+

DellORGANIZATION

0.99+

yesterdayDATE

0.99+

EightQUANTITY

0.99+

oneQUANTITY

0.99+

Chat GPTTITLE

0.99+

Hundred percentQUANTITY

0.99+

AppleORGANIZATION

0.99+

telcoORGANIZATION

0.98+

CouchbaseORGANIZATION

0.98+

John FurrierPERSON

0.98+

siliconangle.comOTHER

0.98+

ApexORGANIZATION

0.98+

Home DepotORGANIZATION

0.98+

early last yearDATE

0.98+

BarcelonaLOCATION

0.98+

20 years agoDATE

0.98+

MWC23EVENT

0.97+

BluemixORGANIZATION

0.96+

SunORGANIZATION

0.96+

SiliconANGLEORGANIZATION

0.96+

theCUBEORGANIZATION

0.95+

GreenLakeORGANIZATION

0.94+

ApplesORGANIZATION

0.94+

SnowflakeORGANIZATION

0.93+

OutpostORGANIZATION

0.93+

VMwareORGANIZATION

0.93+

zeroQUANTITY

0.93+

EMCORGANIZATION

0.91+

day threeQUANTITY

0.9+

todayDATE

0.89+

Mobile World Daily TodayTITLE

0.88+

Wild WestORGANIZATION

0.88+

theCUBE.netOTHER

0.87+

app storeTITLE

0.86+

one thingQUANTITY

0.86+

EMC CodeTITLE

0.86+

CouchbaseTITLE

0.85+

Greg Manganello, Fujitsu & Ryan McMeniman, Dell Technologies | MWC Barcelona 2023


 

>> Announcer: TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (pleasant music) >> We're back. This is Dave Vellante for our live coverage of MWC '23 SiliconANGLE's wall to wall, four-day coverage. We're here with Greg Manganello, who's from Fuijitsu. He's the global head of network services business unit at the company. And Ryan McMeniman is the director of product management for the open telecom ecosystem. We've been talking about that all week, how this ecosystem has opened up. Ryan's with Dell Technologies. Gents, welcome to theCUBE. >> Thank you, Dave. >> Thank you. >> Good to be here. >> Greg, thanks for coming on. Let's hear Fuijitsu's story. We haven't heard much at this event from Fuijitsu. I'm sure you got a big presence, but welcome to theCUBE. Tell us your angle. >> Thanks very much. So Fuijitsu, we're big O-RAN advocates, open radio access network advocates. We're one of the leading founders of that open standard. We're also members of the Open RAN Policy Coalition. I'm a board member there. We're kind of all in on OpenRAN. The reason is it gives operators choices and much more vendor diversity and therefore a lot of innovation when they build out their 5G networks. >> And so as an entry point for Dell as well, I mean obviously you guys make a lot of hay with servers and storage and other sort of hardware, but O-RAN is just this disruptive change to this industry, but it's also compute intensive. So from Dell's perspective, what are the challenges of getting customers to the carriers to adopt O-RAN? How do you de-risk it for them? >> Right, I mean O-RAN really needs to be seen as a choice, right? And that choice comes with building out an ecosystem of partners, right? Working with people like Fuijitsu and others helps us build systems that the carriers can rely upon. Otherwise, it looks like another science experiment, a sandbox, and it's really anything but that. >> So what specifically are you guys doing together? Are you doing integrations, reference architectures engineered systems, all of the above? >> Yeah, so I think it's a little bit of all of the above. So we've announced our cooperation, so the engineering teams are linked, and that we're combining our both sweet spots together from Fuijitsu's virtual CU/DU, and our OpenRAN radios, and Dell's platforms and integration capabilities. And together we're offering a pre-integrated bundle to operators to reduce that risk and kind of help overcome some of the startup obstacles by shrinking the integration cost. >> So you've got Greenfield customers, that's pretty straightforward, white sheet of paper, go, go disrupt. And then there's traditional carriers, got 4G and 5G networks, and sort of hybrid if you will, and this integration there. Where do you see the action now? I presume it's Greenfield today, but isn't it inevitable that the traditional carriers have to go open? >> It is, a couple of different ways that they need to go and they want to go might be power consumption, it might be the cloudification of their network. They're going to have different reasons for doing it. And I think we have to make sure that when we work on collaborations like we do with Fuijitsu, we have to look at all of those vectors. What is it that somebody maybe here in Europe is dealing with high gas prices, high energy prices, in the U.S. or wherever it's expansion. They're going to be different justifications for it. 
>> Yeah, so power must be an increasing component of the operating expense, with energy costs up, and it's a power hungry environment. So how does OpenRAN solve that problem? >> So that's a great question. So by working together we can really optimize the configurations. So on the Fuijitsu side, our radios are multi-band and highly compact and super energy efficient so that the TCO for the carrier is much, much lower. And then we've also announced on the rApp side power savings, energy savings applications, which are really sophisticated AI enabled apps that can switch off the radio based upon traffic prediction models and we can save the operator 30% on their energy bill. That's a big number. >> And that intelligence that lives in the, does it live in the RIC, is it in the brain? >> In the app right above the RIC, absolutely. >> Okay, so it's a purpose-built app to deal with that. >> It's multi-vendor app, it can sit on anybody's O-RAN system. And one of the beauties of O-RAN is there is that open architecture, so that even if Dell and Fuijitsu only sell part of the, or none of the system, an app can be selected from any vendor including Fuijitsu. So that's one of the benefits of whoever's got the best idea, the best cost performance, the best energy performance, customers can really be enabled to make the choice and continue to make choices, not just way back at RFP time, but throughout their life cycle they can keep making choices. And so that's really meaning that, hey, if we miss the buying cycle then we're closed out for 5 or 10 years. No, it's constantly being reevaluated, and that's really exciting, the whole ecosystem. But what we really want to do is make sure we partner together with key partners, Dell and Fuijitsu, such that the customer, when they do select us they see a bundle, not just every person for themselves. It de-risks it. And we get a lot of that integration headache out of the way before we launch it. >> I think that's what's different. We've been talking about how we've kind of seen this move before, in the nineties we saw the move from the mainframe vertical stack to the horizontal stack. We talked about that, but there are real differences because back then you had, I don't know, five components of the stack and there was no integration, and even converged infrastructure was kind of bolts that brought that together. And then over time it's become engineered systems. When you talk to customers, Ryan, is the conversation today mostly TCO? Is it how to get the reliability and quality of service of traditional stacks? Where's the conversation today? >> Yeah, it's the flip side of choice, which is how do you make sure you have that reliability and that security to ensure that the full stack isn't just integrated, but it lives through that whole life cycle management. What are, if you're bringing in another piece, an rApp or an xApp, how do you actually make sure that it works together as a group? Because if you don't have that kind of assurance how can you actually guarantee that O-RAN in and of itself is going to perform better than a traditional RAN system? So overcoming that barrier requires partnerships and integration activity. That is an investment on the parts of our companies, but also the operators need to look back at us and say, yeah, that work has been done, and I trust as trusted advisors for the operators that that's been done. And then we can go validate it. >> Help our audience understand it. 
At what point in time do you feel that from a TCO perspective there'll be parity, or in my opinion it doesn't even have to be equal. It has to be close enough. And I don't know what that close enough is because the other benefits of openness, the innovation, so there's that piece of it as the cost piece and then there is the reliability. And I would say the same thing. It's got to be, well, maybe good enough is not good enough in this world, but maybe it is for some use cases. So really my question is around adoption and what are those factors that are going to affect adoption and when can we expect them to be? >> It's a good question, Dave, and what I would say is that the closed RAN vendors are making incremental improvements. And if you think in a snapshot there might be one answer, but if you think in kind of a flow model, a river over time, our O-RAN like-minded people are on a monster innovation curve. I mean the slope of the curve is huge. So in the OpenRAN policy coalition, 60 like-minded companies working together going north, and we're saying that let's bring all the innovation together, so you can say TCO, reliability, but we're bringing the innovation curve of software and integration curve from silicon and integration from system vendors all together to really out-innovate everybody else by working together. So that's the-- >> I like that curve analogy, Greg 'cause okay, you got the ogive or S curve, and you're saying that O-RAN is entering or maybe even before the steep part of the S curve, so you're going to go hyperbolic, whereas the traditional vendors are maybe trying to squeeze a little bit more out of the lemon. >> 1, 2%, and we're making 30% or more quantum leaps at a time every innovation. So what we tell customers is you can measure right now, but if you just do the time-based competition model, as an organization, as a group of us, we're going to be ahead. >> Is it a Moore's law innovation curve or is it actually faster because you've got the combinatorial factors of silicon, certain telco technologies, other integration software. Is it actually steeper than maybe historical Moore's law? >> I think it's steeper. I don't know Ryan's opinion, but I think it's steeper because Moore's law, well-known in silicon, and it's reaching five nanometers and more and more innovations. But now we're talking about AI software and machine learning as well as the system and device vendors. So when all that's combined, what is that? So that's why I think we're at an O-RAN conference today. I'm not sure we're at MWC. >> Well, it's true. It's funny they changed the name from Mobile World Congress and that was never really meant to be a consumer show, but these things change that, right? And so I think it's appropriate MWC because we're seeing really deep enterprise technology now enter, so that's your sweet spot, isn't it? >> It really is. But I think in some ways it's the path to that price performance parity, which we saw in IT a long time ago, making its way into telecom is there, but it doesn't work unless everybody is on board. And that involves players like this and even smaller companies and innovative startups, which we really haven't seen in this space for some time. And we've been having them at the Dell booth all week long. And there's really interesting stuff like Greg said, AI, ML, optimization and efficiency, which is exciting. And that's where O-RAN can also benefit the Industry. >> And as I say, there are other differences to your advantage. 
You've got engineered systems or you've been through that in enterprise IT, kind of learned how to do that. But you've also got the cloud, public cloud for experimentation, so you can fail cheaply, and you got AI, right, which is, really didn't have AI in the nineties. You had it, but nobody used it. And now you're like, everybody's using ChatGPT. >> Right, but now what's exciting, and the other thing that Ryan and we are working on together is linking our labs together because it's not about the first time system integration and connecting the hoses together, and okay, there it worked, but it's about the ongoing life cycle management of all the updates and upgrades. And by using Dell's OTEL Lab and Fuijitsu's MITC lab and linking them together, now we really have a way of giving operators confidence that as we bring out the new innovations it's battle tested by two organizations. And so two logos coming together and saying, we've looked at it from our different angles and then this is battle tested. There's a lot of value there. >> I think the labs are key. >> But it's interesting, the point there is by tying labs together, there's an acknowledged skills gap as we move into this O-RAN world that operators are looking to us and probably Fuijitsu saying, help our team understand how to thrive in this new environment because we're going from closed systems to open systems where they actually again, have more choice and more ability to be flexible. >> Yeah, if you could take away that plumbing, even though they're good plumbers. All right guys, we got to go. Thanks so much for coming on theCUBE. >> Thank you much. >> It's great to have you. >> Appreciate it, Dave. >> Okay, keep it right there. Dave Vellante, Lisa Martin, and Dave Nicholson will be back from the Fira in Barcelona on theCUBE. Keep it right there. (pleasant music)
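To make the energy-saving rApp idea described earlier in this segment a little more concrete, here is a deliberately simplified sketch of the kind of traffic-prediction logic involved: predict near-term load per cell and put lightly loaded radios to sleep until demand returns. Everything in it is hypothetical, including the moving-average "prediction", the thresholds, and the cell objects; a production rApp would run against the non-real-time RIC, use a trained traffic model, and act through vendor APIs rather than this toy loop.

```python
# Deliberately simplified sketch of traffic-prediction-based radio energy saving,
# in the spirit of the rApp discussed above. All names, thresholds, and the
# control interface are hypothetical; this is an illustration, not a vendor API.
from collections import deque
from statistics import mean


class Cell:
    def __init__(self, cell_id: str, history_len: int = 12):
        self.cell_id = cell_id
        self.load_history = deque(maxlen=history_len)  # recent load samples, 0.0-1.0
        self.sleeping = False

    def record_load(self, load: float) -> None:
        self.load_history.append(load)

    def predicted_load(self) -> float:
        # Toy "prediction": moving average of recent samples.
        return mean(self.load_history) if self.load_history else 1.0


def apply_energy_policy(cells, sleep_threshold=0.15, wake_threshold=0.30):
    """Put lightly loaded cells to sleep and wake them when demand returns."""
    actions = []
    for cell in cells:
        load = cell.predicted_load()
        if not cell.sleeping and load < sleep_threshold:
            cell.sleeping = True
            actions.append((cell.cell_id, "sleep"))
        elif cell.sleeping and load > wake_threshold:
            cell.sleeping = False
            actions.append((cell.cell_id, "wake"))
    return actions


# Example: overnight traffic dips on one cell, so the policy powers it down.
cells = [Cell("cell-A"), Cell("cell-B")]
for sample in [0.2, 0.1, 0.08, 0.05, 0.05, 0.04]:
    cells[0].record_load(sample)
for sample in [0.7, 0.8, 0.75, 0.8, 0.85, 0.9]:
    cells[1].record_load(sample)

print(apply_energy_policy(cells))  # [('cell-A', 'sleep')]
```

The sleep and wake thresholds are deliberately different so a cell does not flap between states on small load changes; the claimed 30% savings in the conversation would come from how often real traffic allows radios to stay in the low-power state.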

Published Date : Feb 28 2023

SUMMARY :

that drive human progress. And Ryan McMeniman is the I'm sure you got a big presence, We're also members of the and other sort of hardware, the carriers can rely upon. and that we're combining our that the traditional it might be the cloudification of the operating expense, so that the TCO for the In the app right above app to deal with that. Dell and Fuijitsu, such that the customer, in the nineties we saw the move but also the operators of it as the cost piece that the closed RAN vendors or maybe even before the and we're making 30% or more quantum leaps combinatorial factors of silicon, and it's reaching five nanometers and that was never really And that involves players like this and you got AI, right, and connecting the hoses together, and more ability to be flexible. Yeah, if you could Martin, and Dave Nicholson

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Greg ManganelloPERSON

0.99+

Lisa MartinPERSON

0.99+

DavePERSON

0.99+

Dave ZeigenfussPERSON

0.99+

Dave VellantePERSON

0.99+

Ryan McMenimanPERSON

0.99+

BJ GardnerPERSON

0.99+

BJPERSON

0.99+

Dave NicholsonPERSON

0.99+

February of 2019DATE

0.99+

GregPERSON

0.99+

November of 2018DATE

0.99+

David ZeigenfussPERSON

0.99+

EuropeLOCATION

0.99+

FactionORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

PhiladelphiaLOCATION

0.99+

AtlantaLOCATION

0.99+

New JerseyLOCATION

0.99+

AWSORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

FuijitsuORGANIZATION

0.99+

September 17thDATE

0.99+

RyanPERSON

0.99+

5QUANTITY

0.99+

two floorsQUANTITY

0.99+

30%QUANTITY

0.99+

DellORGANIZATION

0.99+

DavidPERSON

0.99+

Stu MinimanPERSON

0.99+

Dell TechnologiesORGANIZATION

0.99+

one floorQUANTITY

0.99+

JuneDATE

0.99+

two guestsQUANTITY

0.99+

10 yearsQUANTITY

0.99+

summer of 2019DATE

0.99+

U.S.LOCATION

0.99+

FirstQUANTITY

0.99+

OTEL LabORGANIZATION

0.99+

two organizationsQUANTITY

0.99+

Pennsylvania LumbermensORGANIZATION

0.99+

125 yearsQUANTITY

0.99+

Atlanta, GeorgiaLOCATION

0.99+

one answerQUANTITY

0.99+

BarcelonaLOCATION

0.99+

Open RAN Policy CoalitionORGANIZATION

0.99+

two logosQUANTITY

0.99+

2007-2008DATE

0.99+

two foldQUANTITY

0.99+

four-dayQUANTITY

0.99+

100QUANTITY

0.99+

Vanesa Diaz, LuxQuanta & Dr Antonio Acin, ICFO | MWC Barcelona 2023


 

(upbeat music) >> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies: creating technologies that drive human progress. (upbeat music) >> Welcome back to the Fira in Barcelona. You're watching theCUBE's Coverage day two of MWC 23. Check out SiliconANGLE.com for all the news, John Furrier in our Palo Alto studio, breaking that down. But we're here live Dave Vellante, Dave Nicholson and Lisa Martin. We're really excited. We're going to talk qubits. Vanessa Diaz is here. She's CEO of LuxQuanta And Antonio Acin is a professor of ICFO. Folks, welcome to theCUBE. We're going to talk quantum. Really excited about that. >> Vanessa: Thank you guys. >> What does quantum have to do with the network? Tell us. >> Right, so we are actually leaving the second quantum revolution. So the first one actually happened quite a few years ago. It enabled very much the communications that we have today. So in this second quantum revolution, if in the first one we learn about some very basic properties of quantum physics now our scientific community is able to actually work with the systems and ask them to do things. So quantum technologies mean right now, three main pillars, no areas of exploration. The first one is quantum computing. Everybody knows about that. Antonio knows a lot about that too so he can explain further. And it's about computers that now can do wonder. So the ability of of these computers to compute is amazing. So they'll be able to do amazing things. The other pillar is quantum communications but in fact it's slightly older than quantum computer, nobody knows that. And we are the ones that are coming to actually counteract the superpowers of quantum computers. And last but not least quantum sensing, that's the the application of again, quantum physics to measure things that were impossible to measure in with such level of quality, of precision than before. So that's very much where we are right now. >> Okay, so I think I missed the first wave of quantum computing Because, okay, but my, our understanding is ones and zeros, they can be both and the qubits aren't that stable, et cetera. But where are we today, Antonio in terms of actually being able to apply quantum computing? I'm inferring from what Vanessa said that we've actually already applied it but has it been more educational or is there actual work going on with quantum? >> Well, at the moment, I mean, typical question is like whether we have a quantum computer or not. I think we do have some quantum computers, some machines that are able to deal with these quantum bits. But of course, this first generation of quantum computers, they have noise, they're imperfect, they don't have many qubits. So we have to understand what we can do with these quantum computers today. Okay, this is science, but also technology working together to solve relevant problems. So at this moment is not clear what we can do with present quantum computers but we also know what we can do with a perfect quantum computer without noise with many quantum bits, with many qubits. And for instance, then we can solve problems that are out of reach for our classical computers. So the typical example is the problem of factorization that is very connected to what Vanessa does in her company. So we have identified problems that can be solved more efficiently with a quantum computer, with a very good quantum computer. People are working to have this very good quantum computer. 
At the moment, we have some imperfect quantum computers, we have to understand what we can do with these imperfect machines. >> Okay. So for the first wave was, okay, we have it working for a little while so we see the potential. Okay, and we have enough evidence almost like a little experiment. And now it's apply it to actually do some real work. >> Yeah, so now there is interest by companies so because they see a potential there. So they are investing and they're working together with scientists. We have to identify use cases, problems of relevance for all of us. And then once you identify a problem where a quantum computer can help you, try to solve it with existing machines and see if you can get an advantage. So now the community is really obsessed with getting a quantum advantage. So we really hope that we will get a quantum advantage. This, we know we will get it. We eventually have a very good quantum computer. But we want to have it now. And we're working on that. We have some results, there were I would say a bit academic situation in which a quantum advantage was proven. But to be honest with you on a really practical problem, this has not happened yet. But I believe the day that this happens and I mean it will be really a game changing. >> So you mentioned the word efficiency and you talked about the quantum advantage. Is the quantum advantage a qualitative advantage in that it is fundamentally different? Or is it simply a question of greater efficiency, so therefore a quantitative advantage? The example in the world we're used to, think about a card system where you're writing information on a card and putting it into a filing cabinet and then you want to retrieve it. Well, the information's all there, you can retrieve it. Computer system accelerates that process. It's not doing something that is fundamentally different unless you accept that the speed with which these things can be done gives it a separate quality. So how would you characterize that quantum versus non quantum? Is it just so much horse power changes the game or is it fundamentally different? >> Okay, so from a fundamental perspective, quantum physics is qualitatively different from classical physics. I mean, this year the Nobel Prize was given to three experimentalists who made experiments that proved that quantum physics is qualitatively different from classical physics. This is established, I mean, there have been experiments proving that. Now when we discuss about quantum computation, it's more a quantitative difference. So we have problems that you can solve, in principle you can solve with the classical computers but maybe the amount of time you need to solve them is we are talking about centuries and not with your laptop even with a classic super computer, these machines that are huge, where you have a building full of computers there are some problems for which computers take centuries to solve them. So you can say that it's quantitative, but in practice you may even say that it's impossible in practice and it will remain impossible. And now these problems become feasible with a quantum computer. So it's quantitative but almost qualitative I would say. >> Before we get into the problems, 'cause I want to understand some of those examples, but Vanessa, so your role at LuxQuanta is you're applying quantum in the communication sector for security purposes, correct? >> Vanessa: Correct. 
>> Because everybody talks about how quantum's going to ruin our lives in terms of taking all our passwords and figuring everything out. But can quantum help us defend against quantum, and is that what you do? >> That's what we do. So one of the things that Antonio's explaining is that a quantum computer will be able to solve, in a reasonable amount of time, something that today is impossible to solve unless you leave a laptop or supercomputer working for years. So one of those things is cryptography. So at the end, when you send a message and you want to preserve its confidentiality, what you do is you scramble it following certain rules, which means you're using some kind of key, and therefore you can send it through a public network, which is the case for every communication that we have, we go through the internet, and then the receiver is going to be able to reassemble it because they have that private key and nobody else does. So that private key is actually made of computational problems or mathematical problems that are very, very hard. We're talking about 40 years' time for a supercomputer today to be able to hack it. However, we do not have the guarantee that there isn't already a very smart mind that potentially has the capacity also of a quantum computer, even with enough, not millions but maybe just a few qubits, it's enough to actually hack this cryptography. And there is also the fear that somebody could actually be waiting for quantum computing to finally reach this amazing capacity by harvesting now, which means capturing all this confidential information and storing it. So when they have the power to unlock it and hack it, they see what's behind. So we are talking about information as delicate as governmental data, citizens' information related to health, for example, you name it. So what we do is we build a key to encrypt the information, but it's not relying on a mathematical problem, it's relying on the laws of quantum physics. So I'm going to have a channel that I'm going to pump photons through, particles of light. And that quantum channel, because of the laws of physics, is going to allow us to detect somebody trying to sneak in and see the key that I'm establishing. If that happens, I will not create a key. If it's clean and nobody was there, I'll give you a super key that nobody today or in the future, regardless of their computational power, will be able to hack. >> So it's like super zero trust. >> Super zero trust. >> Okay, so quantum can solve really challenging mathematical problems. If you had a quantum computer could you be a Bitcoin billionaire?
I'm not saying that we know how to do these things with a quantum computer. But if we understand how this machine that has been proven more powerful in some context can be adapted to some other context. I mean having a much better computer machine is an advantage. >> When? When are we going to have, you said we don't really have it today, we want it today. Are we five years away, 10 years away? Who's working on this? >> There are already quantum computers are there. It's just that the capacity that they have of right now is the order of a few hundred qubits. So people are, there are already companies harvesting, they're actually the companies that make these computers they're already putting them. People can access to them through the cloud and they can actually run certain algorithms that have been tailor made or translated to the language of a quantum computer to see how that performs there. So some people are already working with them. There is billions of investment across the world being put on different flavors of technologies that can reach to that quantum supremacy that we are talking about. The question though that you're asking is Q day it sounds like doomsday, you know, Q day. So depending on who you talk to, they will give you a different estimation. So some people say, well, 2030 for example but perhaps we could even think that it could be a more aggressive date, maybe 2027. So it is yet to be the final, let's say not that hard deadline but I think that the risk, that it can actually bring is big enough for us to pay attention to this and start preparing for it. So the end times of cryptography that's what quantum is doing is we have a system here that can actually prevent all your communications from being hacked. So if you think also about Q day and you go all the way back. So whatever tools you need to protect yourself from it, you need to deploy them, you need to see how they fit in your organization, evaluate the benefits, learn about it. So that, how close in time does that bring us? Because I believe that the time to start thinking about this is now. >> And it's likely it'll be some type of hybrid that will get us there, hybrid between existing applications. 'Cause you have to rewrite or write new applications and that's going to take some time. But it sounds like you feel like this decade we will see Q day. What probability would you give that? Is it better than 50/50? By 2030 we'll see Q day. >> But I'm optimistic by nature. So yes, I think it's much higher than 50. >> Like how much higher? >> 80, I would say yes. I'm pretty confident. I mean, but what I want to say also usually when I think there is a message here so you have your laptop, okay, in the past I had a Spectrum This is very small computer, it was more or less the same size but this machine is much more powerful. Why? Because we put information on smaller scales. So we always put information in smaller and smaller scale. This is why here you have for the same size, you have much more information because you put on smaller scales. So if you go small and small and small, you'll find the quantum word. So this is unavoidable. So our information devices are going to meet the quantum world and they're going to exploit it. I'm fully convinced about this, maybe not for the quantum computer we're imagining now but they will find it and they will use quantum effects. And also for cryptography, for me, this is unavoidable. >> And you brought the point there are several companies working on that. 
I mean, I can get quantum computers on in the cloud and Amazon and other suppliers. IBM of course is. >> The underlying technology, there are competing versions of how you actually create these qubits. pins of electrons and all sorts of different things. Does it need to be super cooled or not? >> Vanessa: There we go. >> At a fundamental stage we'd be getting ground. But what is, what does ChatGPT look like when it can leverage the quantum realm? >> Well, okay. >> I Mean are we all out of jobs at that point? Should we all just be planning for? >> No. >> Not you. >> I think all of us real estate in Portugal, should we all be looking? >> No, actually, I mean in machine learning there are some hopes about quantum competition because usually you have to deal with lots of data. And we know that in quantum physics you have a concept that is called superposition. So we, there are some hopes not in concrete yet but we have some hopes that these superpositions may allow you to explore this big data in a more efficient way. One has to if this can be confirmed. But one of the hopes creating this lots of qubits in this superpositions that you will have better artificial intelligence machines but, okay, this is quite science fiction what I'm saying now. >> At this point and when you say superposition, that's in contrast to the ones and zeros that we're used to. So when someone says it could be a one or zero or a one and a zero, that's referencing the concept of superposition. And so if this is great for encryption, doesn't that necessarily mean that bad actors can leverage it in a way that is now unhackable? >> I mean our technologies, again it's impossible to hack because it is the laws of physics what are allowing me to detect an intruder. So that's the beauty of it. It's not something that you're going to have to replace in the future because there will be a triple quantum computer, it is not going to affect us in any way but definitely the more capacity, computational capacity that we see out there in quantum computers in particular but in any other technologies in general, I mean, when we were coming to talk to you guys, Antonio and I, he was the one saying we do not know whether somebody has reached some relevant computational power already with the technologies that we have. And they've been able to hack already current cryptography and then they're not telling us. So it's a bit of, the message is a little bit like a paranoid message, but if you think about security that the amount of millions that means for a private institution know when there is a data breach, we see it every day. And also the amount of information that is relevant for the wellbeing of a country. Can you really put a reasonable amount of paranoid to that? Because I believe that it's worth exploring whatever tool is going to prevent you from putting any of those piece of information at risk. >> Super interesting topic guys. I know you're got to run. Thanks for stopping by theCUBE, it was great to have you on. >> Thank you guys. >> All right, so this is the SiliconANGLE theCUBE's coverage of Mobile World Congress, MWC now 23. We're live at the Fira Check out silicon SiliconANGLE.com and theCUBE.net for all the videos. Be right back, right after this short break. (relaxing music)

Published Date : Feb 28 2023

SUMMARY :

that drive human progress. for all the news, to do with the network? if in the first one we learn and the qubits aren't So we have to understand what we can do Okay, and we have enough evidence almost But to be honest with you So how would you characterize So we have problems that you can solve, and is that what you do? that I'm going to pump photons If you had a quantum computer that gives you machine learning, big data. you said we don't really have It's just that the capacity that they have of hybrid that will get us there, So yes, I think it's much higher than 50. So if you go small and small and small, And you brought the point of how you actually create these qubits. But what is, what does ChatGPT look like that these superpositions may allow you and when you say superposition, that the amount of millions that means it was great to have you on. for all the videos.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

VanessaPERSON

0.99+

Lisa MartinPERSON

0.99+

Vanessa DiazPERSON

0.99+

Dave NicholsonPERSON

0.99+

John FurrierPERSON

0.99+

AntonioPERSON

0.99+

IBMORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

PortugalLOCATION

0.99+

five yearsQUANTITY

0.99+

LuxQuantaORGANIZATION

0.99+

10 yearsQUANTITY

0.99+

Vanesa DiazPERSON

0.99+

three experimentalistsQUANTITY

0.99+

todayDATE

0.99+

Antonio AcinPERSON

0.99+

Palo AltoLOCATION

0.99+

2027DATE

0.99+

first oneQUANTITY

0.99+

2030DATE

0.99+

BarcelonaLOCATION

0.99+

zeroQUANTITY

0.98+

bothQUANTITY

0.98+

three main pillarsQUANTITY

0.98+

oneQUANTITY

0.98+

Dell TechnologiesORGANIZATION

0.97+

this yearDATE

0.97+

Nobel PrizeTITLE

0.97+

Mobile World CongressEVENT

0.97+

first generationQUANTITY

0.97+

MWC 23EVENT

0.96+

millionsQUANTITY

0.96+

SiliconANGLEORGANIZATION

0.95+

second quantum revolutionQUANTITY

0.95+

few years agoDATE

0.95+

80QUANTITY

0.94+

billions of investmentQUANTITY

0.92+

theCUBEORGANIZATION

0.92+

centuriesQUANTITY

0.91+

SiliconANGLE.comOTHER

0.9+

about 40 yearsQUANTITY

0.89+

DrPERSON

0.88+

super zeroOTHER

0.86+

50/50QUANTITY

0.84+

first waveEVENT

0.84+

day twoQUANTITY

0.83+

zerosQUANTITY

0.82+

yearsQUANTITY

0.81+

ICFOORGANIZATION

0.8+

this decadeDATE

0.77+

few hundred qubitsQUANTITY

0.72+

FiraLOCATION

0.69+

23DATE

0.64+

MWCEVENT

0.62+

higherQUANTITY

0.62+

50QUANTITY

0.61+

FiraEVENT

0.55+

tripleQUANTITY

0.55+

zeroOTHER

0.54+

OneQUANTITY

0.53+

theCUBE.netOTHER

0.53+

qubitsQUANTITY

0.51+

Breaking Analysis: MWC 2023 goes beyond consumer & deep into enterprise tech


 

>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> While never really meant to be a consumer tech event, the rapid ascendancy of smartphones sucked much of the air out of Mobile World Congress over the years, now MWC. And while the device manufacturers continue to have a major presence at the show, the maturity of intelligent devices, longer life cycles, and the disaggregation of the network stack, have put enterprise technologies front and center in the telco business. Semiconductor manufacturers, network equipment players, infrastructure companies, cloud vendors, software providers, and a spate of startups are eyeing the trillion dollar plus communications industry as one of the next big things to watch this decade. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we bring you part two of our ongoing coverage of MWC '23, with some new data on enterprise players specifically in large telco environments, a brief glimpse at some of the pre-announcement news and corresponding themes ahead of MWC, and some of the key announcement areas we'll be watching at the show on theCUBE. Now, last week we shared some ETR data that showed how traditional enterprise tech players were performing, specifically within the telecoms vertical. Here's a new look at that data from ETR, which isolates the same companies, but cuts the data for what ETR calls large telco. The N in this cut is 196, down from 288 last week when we included all company sizes in the dataset. Now remember the two dimensions here, on the y-axis is net score, or spending momentum, and on the x-axis is pervasiveness in the data set. The table insert in the upper left informs how the dots and companies are plotted, and that red dotted line, the horizontal line at 40%, that indicates a highly elevated net score. Now while the data are not dramatically different in terms of relative positioning, there are a couple of changes at the margin. So just going down the list and focusing on net score. Azure is comparable, but slightly lower in this sector in the large telco than it was overall. Google Cloud comes in at number two, and basically swapped places with AWS, which drops slightly in the large telco relative to overall telco. Snowflake is also slightly down by one percentage point, but maintains its position. Remember Snowflake, overall, its net score is much, much higher when measuring across all verticals. Snowflake comes down in telco, and relative to overall, a little bit down in large telco, but it's making some moves to attack this market that we'll talk about in a moment. Next are Red Hat OpenStack and Databricks. About the same in large tech telco as they were an overall telco. Then there's Dell next that has a big presence at MWC and is getting serious about driving 16G adoption, and new servers, and edge servers, and other partnerships. Cisco and Red Hat OpenShift basically swapped spots when moving from all telco to large telco, as Cisco drops and Red Hat bumps up a bit. And VMware dropped about four percentage points in large telco. Accenture moved up dramatically, about nine percentage points in big telco, large telco relative to all telco. HPE dropped a couple of percentage points. Oracle stayed about the same. And IBM surprisingly dropped by about five points. 
So look, I understand not a ton of change in terms of spending momentum in the large sector versus telco overall, but some deltas. The bottom line for enterprise players is one, they're just getting started in this new disruption journey that they're on as the stack disaggregates. Two, all these players have experience in delivering horizontal solutions, but now working with partners and identifying big problems to be solved, and three, many of these companies are generally not the fastest moving firms relative to smaller disruptive disruptors. Now, cloud has been an exception in fairness. But the good news for the legacy infrastructure and IT companies is that the telco transformation and the 5G buildout is going to take years. So it's moving at a pace that is very favorable to many of these companies. Okay, so looking at just some of the pre-announcement highlights that have hit the wire this week, I want to give you a glimpse of the diversity of innovation that is occurring in the telecommunication space. You got semiconductor manufacturers, device makers, network equipment players, carriers, cloud vendors, enterprise tech companies, software companies, startups. Now we've included, you'll see in this list, we've included OpeRAN, that logo, because there's so much buzz around the topic and we're going to come back to that. But suffice it to say, there's no way we can cover all the announcements from the 2000 plus exhibitors at the show. So we're going to cherry pick here and make a few call outs. Hewlett Packard Enterprise announced an acquisition of an Italian private cellular network company called AthoNet. Zeus Kerravala wrote about it on SiliconANGLE if you want more details. Now interestingly, HPE has a partnership with Solana, which also does private 5G. But according to Zeus, Solona is more of an out-of-the-box solution, whereas AthoNet is designed for the core and requires more integration. And as you'll see in a moment, there's going to be a lot of talk at the show about private network. There's going to be a lot of news there from other competitors, and we're going to be watching that closely. And while many are concerned about the P5G, private 5G, encroaching on wifi, Kerravala doesn't see it that way. Rather, he feels that these private networks are really designed for more industrial, and you know mission critical environments, like factories, and warehouses that are run by robots, et cetera. 'Cause these can justify the increased expense of private networks. Whereas wifi remains a very low cost and flexible option for, you know, whatever offices and homes. Now, over to Dell. Dell announced its intent to go hard after opening up the telco network with the announcement that in the second half of this year it's going to begin shipping its infrastructure blocks for Red Hat. Remember it's like kind of the converged infrastructure for telco with a more open ecosystem and sort of more flexible, you know, more mature engineered system. Dell has also announced a range of PowerEdge servers for a variety of use cases. A big wide line bringing forth its 16G portfolio and aiming squarely at the telco space. Dell also announced, here we go, a private wireless offering with airspan, and Expedo, and a solution with AthoNet, the company HPE announced it was purchasing. So I guess Dell and HPE are now partnering up in the private wireless space, and yes, hell is freezing over folks. We'll see where that relationship goes in the mid- to long-term. 
Dell also announced new lab and certification capabilities, which we said last week was going to be critical for the further adoption of open ecosystem technology. So props to Dell for, you know, putting real emphasis and investment in that. AWS also made a number of announcements in this space including private wireless solutions and associated managed services. AWS named Deutsche Telekom, Orange, T-Mobile, Telefonica, and some others as partners. And AWS announced the stepped up partnership, specifically with T-Mobile, to bring AWS services to T-Mobile's network portfolio. Snowflake, back to Snowflake, announced its telecom data cloud. Remember we showed the data earlier, it's Snowflake not as strong in the telco sector, but they're continuing to move toward this go-to market alignment within key industries, realigning their go-to market by vertical. It also announced that AT&T, and a number of other partners, are collaborating to break down data silos specifically in telco. Look, essentially, this is Snowflake taking its core value prop to the telco vertical and forming key partnerships that resonate in the space. So think simplification, breaking down silos, data sharing, eventually data monetization. Samsung previewed its future capability to allow smartphones to access satellite services, something Apple has previously done. AMD, Intel, Marvell, Qualcomm, are all in the act, all the semiconductor players. Qualcomm for example, announced along with Telefonica, and Erickson, a 5G millimeter network that will be showcased in Spain at the event this coming week using Qualcomm Snapdragon chipset platform, based on none other than Arm technology. Of course, Arm we said is going to dominate the edge, and is is clearly doing so. It's got the volume advantage over, you know, traditional Intel, you know, X86 architectures. And it's no surprise that Microsoft is touting its open AI relationship. You're going to hear a lot of AI talk at this conference as is AI is now, you know, is the now topic. All right, we could go on and on and on. There's just so much going on at Mobile World Congress or MWC, that we just wanted to give you a glimpse of some of the highlights that we've been watching. Which brings us to the key topics and issues that we'll be exploring at MWC next week. We touched on some of this last week. A big topic of conversation will of course be, you know, 5G. Is it ever going to become real? Is it, is anybody ever going to make money at 5G? There's so much excitement around and anticipation around 5G. It has not lived up to the hype, but that's because the rollout, as we've previous reported, is going to take years. And part of that rollout is going to rely on the disaggregation of the hardened telco stack, as we reported last week and in previous Breaking Analysis episodes. OpenRAN is a big component of that evolution. You know, as our RAN intelligent controllers, RICs, which essentially the brain of OpenRAN, if you will. Now as we build out 5G networks at massive scale and accommodate unprecedented volumes of data and apply compute-hungry AI to all this data, the issue of energy efficiency is going to be front and center. It has to be. Not only is it a, you know, hot political issue, the reality is that improving power efficiency is compulsory or the whole vision of telco's future is going to come crashing down. So chip manufacturers, equipment makers, cloud providers, everybody is going to be doubling down and clicking on this topic. Let's talk about AI. 
AI as we said, it is the hot topic right now, but it is happening not only in consumer, with things like ChatGPT. And think about the theme of this Breaking Analysis in the enterprise, AI in the enterprise cannot be ChatGPT. It cannot be error prone the way ChatGPT is. It has to be clean, reliable, governed, accurate. It's got to be ethical. It's got to be trusted. Okay, we're going to have Zeus Kerravala on the show next week and definitely want to get his take on private networks and how they're going to impact wifi. You know, will private networks cannibalize wifi? If not, why not? He wrote about this again on SiliconANGLE if you want more details, and we're going to unpack that on theCUBE this week. And finally, as always we'll be following the data flows to understand where and how telcos, cloud players, startups, software companies, disruptors, legacy companies, end customers, how are they going to make money from new data opportunities? 'Cause we often say in theCUBE, don't ever bet against data. All right, that's a wrap for today. Remember theCUBE is going to be on location at MWC 2023 next week. We got a great set. We're in the walkway in between halls four and five, right in Congress Square, stand CS-60. Look for us, we got a full schedule. If you got a great story or you have news, stop by. We're going to try to get you on the program. I'll be there with Lisa Martin, co-hosting, David Nicholson as well, and the entire CUBE crew, so don't forget to come by and see us. I want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman, as well, in our Boston studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE.com. He does some great editing. Thank you. All right, remember all these episodes they are available as podcasts wherever you listen. All you got to do is search Breaking Analysis podcasts. I publish each week on Wikibon.com and SiliconANGLE.com. All the video content is available on demand at theCUBE.net, or you can email me directly if you want to get in touch David.Vellante@SiliconANGLE.com or DM me @DVellante, or comment on our LinkedIn posts. And please do check out ETR.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Mobile World Congress '23, MWC '23, or next time on Breaking Analysis. (bright music)
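For readers who want to visualize the methodology described at the top of this segment, net score on the vertical axis, pervasiveness on the horizontal, and a dotted line marking the 40% "highly elevated" level, a chart of that shape can be sketched in a few lines of matplotlib. The vendor names and values below are placeholders for illustration only; they are not ETR's actual survey figures.

```python
# Sketch of a net-score vs. pervasiveness chart in the style described above.
# The numbers are illustrative placeholders, not ETR's actual survey data.
import matplotlib.pyplot as plt

# (pervasiveness in the sample %, net score %) -- hypothetical values
vendors = {
    "Vendor A": (30, 55),
    "Vendor B": (22, 48),
    "Vendor C": (12, 38),
    "Vendor D": (25, 18),
}

fig, ax = plt.subplots()
for name, (pervasiveness, net_score) in vendors.items():
    ax.scatter(pervasiveness, net_score)
    ax.annotate(name, (pervasiveness, net_score),
                textcoords="offset points", xytext=(5, 5))

# Dotted red line at 40%: the "highly elevated net score" marker.
ax.axhline(y=40, color="red", linestyle=":")
ax.set_xlabel("Pervasiveness in the data set (%)")
ax.set_ylabel("Net score (spending momentum, %)")
ax.set_title("Illustrative net score vs. pervasiveness (placeholder data)")
plt.show()
```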

Published Date : Feb 25 2023

SUMMARY :

bringing you data-driven in the mid- to long-term.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David NicholsonPERSON

0.99+

Lisa MartinPERSON

0.99+

Alex MyersonPERSON

0.99+

OrangeORGANIZATION

0.99+

QualcommORGANIZATION

0.99+

HPEORGANIZATION

0.99+

TelefonicaORGANIZATION

0.99+

Kristen MartinPERSON

0.99+

AWSORGANIZATION

0.99+

Dave VellantePERSON

0.99+

AMDORGANIZATION

0.99+

SpainLOCATION

0.99+

T-MobileORGANIZATION

0.99+

Ken SchiffmanPERSON

0.99+

Deutsche TelekomORGANIZATION

0.99+

Hewlett Packard EnterpriseORGANIZATION

0.99+

IBMORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

Cheryl KnightPERSON

0.99+

MarvellORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

SamsungORGANIZATION

0.99+

AppleORGANIZATION

0.99+

AT&TORGANIZATION

0.99+

DellORGANIZATION

0.99+

IntelORGANIZATION

0.99+

Rob HofPERSON

0.99+

Palo AltoLOCATION

0.99+

OracleORGANIZATION

0.99+

40%QUANTITY

0.99+

last weekDATE

0.99+

AthoNetORGANIZATION

0.99+

EricksonORGANIZATION

0.99+

Congress SquareLOCATION

0.99+

AccentureORGANIZATION

0.99+

next weekDATE

0.99+

Mobile World CongressEVENT

0.99+

SolanaORGANIZATION

0.99+

BostonLOCATION

0.99+

two dimensionsQUANTITY

0.99+

ETRORGANIZATION

0.99+

MWC '23EVENT

0.99+

MWCEVENT

0.99+

288QUANTITY

0.98+

todayDATE

0.98+

this weekDATE

0.98+

SolonaORGANIZATION

0.98+

David.Vellante@SiliconANGLE.comOTHER

0.98+

telcoORGANIZATION

0.98+

TwoQUANTITY

0.98+

each weekQUANTITY

0.97+

Zeus KerravalaPERSON

0.97+

MWC 2023EVENT

0.97+

about five pointsQUANTITY

0.97+

theCUBE.netOTHER

0.97+

Red HatORGANIZATION

0.97+

SnowflakeTITLE

0.96+

oneQUANTITY

0.96+

DatabricksORGANIZATION

0.96+

threeQUANTITY

0.96+

theCUBE StudiosORGANIZATION

0.96+

Is Data Mesh the Killer App for Supercloud | Supercloud2


 

(gentle bright music) >> Okay, welcome back to our "Supercloud 2" event live coverage here at stage performance in Palo Alto syndicating around the world. I'm John Furrier with Dave Vellante. We've got exclusive news and a scoop here for SiliconANGLE and theCUBE. Zhamak Dehghani, creator of data mesh has formed a new company called NextData.com NextData, she's a cube alumni and contributor to our Supercloud initiative, as well as our coverage and breaking analysis with Dave Vellante on data, the killer app for Supercloud. Zhamak, great to see you. Thank you for coming into the studio and congratulations on your newly formed venture and continued success on the data mesh. >> Thank you so much. It's great to be here. Great to see you in person. >> Dave: Yeah, finally. >> John: Wonderful. Your contributions to the data conversation has been well-documented certainly by us and others in the industry. Data mesh taking the world by storm. Some people are debating it, throwing, you know, cold water on it. Some are, I think, it's the next big thing. Tell us about the data mesh super data apps that are emerging out of cloud. >> I mean, data mesh, as you said, it's, you know, the pain point that it surfaced were universal. Everybody said, "Oh, why didn't I think of that?" You know, it was just an obvious next step and people are approaching it, implementing it. I guess the last few years, I've been involved in many of those implementations, and I guess Supercloud is somewhat a prerequisite for it because it's data mesh and building applications using data mesh is about sharing data responsibly across boundaries. And those boundaries include boundaries, organizational boundaries cloud technology boundaries and trust boundaries. >> I want to bring that up because your venture, NextData which is new, just formed. Tell us about that. What wave is that riding? What specifically are you targeting? What's the pain point? >> Zhamak: Absolutely, yes. So next data is the result of, I suppose, the pains that I suffered from implementing a database for many of the organizations. Basically, a lot of organizations that I've worked with, they want decentralized data. So they really embrace this idea of decentralized ownership of the data, but yet they want interconnectivity through standard APIs, yet they want discoverability and governance. So they want to have policies implemented, they want to govern that data, they want to be able to discover that data and yet they want to decentralize it. And we do that with a developer experience that is easy and native to a generalist developer. So we try to find, I guess, the common denominator that solves those problems and enables that developer experience for data sharing. >> John: Since you just announced the news, what's been the reaction? >> Zhamak: I just announced the news right now, so what's the reaction? >> John: But people in the industry that know you, you did a lot of work in the area. What have been some of the feedback on the new venture in terms of the approach, the customers, problem? >> Yeah, so we've been in stealth modes, so we haven't publicly talked about it, but folks that have been close to us in fact have reached out. We already have implementations of our pilot platform with early customers, which is super exciting. And we're going to have multiple of those. Of course, we're a tiny, tiny company. We can have many of those where we are going to have multiple pilots, implementations of our platform in real world. 
We're real global large scale organizations that have real world problems. So we're not going to build our platform in vacuum. And that's what's happening right now. >> Zhamak: When I think about your role at ThoughtWorks, you had a very wide observation space with a number of clients helping them implement data mesh and other things as well prior to your data mesh initiative. But when I look at data mesh, at least the ones that I've seen, they're very narrow. I think of JPMC, I think of HelloFresh. They're generally obviously not surprising. They don't include the big vision of inclusivity across clouds across different data stores. But it seems like people are having to go through some gymnastics to get to, you know, the organizational reality of decentralizing data, and at least pushing data ownership to the line of business. How are you approaching or are you approaching, solving that problem? Are you taking a narrow slice? What can you tell us about Next Data? >> Zhamak: Sure, yeah, absolutely. Gymnastics, the cute word to describe what the organizations have to go through. And one of those problems is that, you know, the data, as you know, resides on different platforms. It's owned by different people, it's processed by pipelines that who owns them. So there's this very disparate and disconnected set of technologies that were very useful for when we thought about data and processing as a centralized problem. But when you think about data as a decentralized problem, the cost of integration of these technologies in a cohesive developer experience is what's missing. And we want to focus on that cohesive end-to-end developer experience to share data responsibly in this autonomous units, we call them data products, I guess in data mesh, right? That constitutes computation, that governs that data policies, discoverability. So I guess, I heard this expression in the last talks that you can have your cake and eat it too. So we want people have their cakes, which is, you know, data in different places, decentralization and eat it too, which is interconnected access to it. So we start with standardizing and codifying this idea of a data product container that encapsulates data computation, APIs to get to it in a technology agnostic way, in an open way. And then, sit on top and use existing existing tech, you know, Snowflake, Databricks, whatever exists, you know, the millions of dollars of investments that companies have made, sit on top of those but create this cohesive, integrated experience where data product is a first class primitive. And that's really key here, that the language, and the modeling that we use is really native to data mesh is that I will make a data product, I'm sharing a data product, and that encapsulates on providing metadata about this. I'm providing computation that's constantly changing the data. I'm providing the API for that. So we're trying to kind of codify and create a new developer experience based on that. And developer, both from provider side and user side connected to peer-to-peer data sharing with data product as a primitive first class concept. >> Okay, so the idea would be developers would build applications leveraging those data products which are discoverable and governed. Now, today you see some companies, you know, take a snowflake for example. >> Zhamak: Yeah. >> Attempting to do that within their own little walled garden. They even, at one point, used the term, "Mesh." I dunno if they pull back on that. 
And then they sort of became aware of some of your work. But a lot of the things that they're doing within their little insulated environment, you know, support that, that, you know, governance, they're building out an ecosystem. What's different in your vision? >> Exactly. So we realize that, you know, and this is a reality, like you go to organizations, they have a snowflake and half of the organization happily operates on Snowflake. And on the other half, oh, we are on, you know, bare infrastructure on AWS, or we are on Databricks. This is the realities, you know, this Supercloud that's written up here. It's about working across boundaries of technology. So we try to embrace that. And even for our own technology with the way we're building it, we say, "Okay, nobody's going to use next data mesh operating system. People will have different platforms." So you have to build with openness in mind, and in case of Snowflake, I think, you know, they have I'm sure very happy customers as long as customers can be on Snowflake. But once you cross that boundary of platforms then that becomes a problem. And we try to keep that in mind in our solution. >> So, it's worth reviewing that basically, the concept of data mesh is that, whether you're a data lake or a data warehouse, an S3 bucket, an Oracle database as well, they should be inclusive inside of the data. >> We did a session with AWS on the startup showcase, data as code. And remember, I wrote a blog post in 2007 called, "Data's the new developer kit." Back then, they used to call 'em developer kits, if you remember. And that we said at that time, whoever can code data >> Zhamak: Yes. >> Will have a competitive advantage. >> Aren't there machines going to be doing that? Didn't we just hear that? >> Well we have, and you know, Hey Siri, hey Cube. Find me that best video for data mesh. There it is. I mean, this is the point, like what's happening is that, now, data has to be addressable >> Zhamak: Yes. >> For machines and for coding. >> Zhamak: Yes. >> Because as you need to call the data. So the question is, how do you manage the complexity of big things as promiscuous as possible, making it available as well as then governing it because it's a trade off. The more you make open >> Zhamak: Definitely. >> The better the machine learning. >> Zhamak: Yes. >> But yet, the governance issue, so this is the, you need an OS to handle this maybe. >> Yes, well, we call our mental model for our platform is an OS operating system. Operating systems, you know, have shown us how you can kind of abstract what's complex and take care of, you know, a lot of complexities, but yet provide an open and, you know, dynamic enough interface. So we think about it that way. We try to solve the problem of policies live with the data. An enforcement of the policies happens at the most granular level which is, in this concept, the data product. And that would happen whether you read, write, or access a data product. But we can never imagine what are these policies could be. So our thinking is, okay, we should have a open policy framework that can allow organizations write their own policy drivers, and policy definitions, and encode it and encapsulated in this data product container. But I'm not going to fool myself to say that, you know, that's going to solve the problem that you just described. 
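To make the data product container Zhamak describes a little more concrete, here is a minimal sketch of the idea: data, an access API, and policies packaged together, with enforcement happening on every read. It is purely illustrative; every class, function, and team name in it is hypothetical, and it is not NextData's actual interface.

```python
# Illustrative sketch only: all names here are hypothetical, not NextData's API.
# It mirrors the idea of a "data product container" that bundles data, an access
# API, and policies that are enforced on every read.
from dataclasses import dataclass, field
from typing import Callable

# A policy is a function that inspects the caller and the request,
# and raises if the access should be denied.
Policy = Callable[[str, dict], None]


def pii_masking_policy(caller: str, request: dict) -> None:
    """Deny access to the 'email' column unless the caller is explicitly allowed."""
    if "email" in request.get("columns", []) and caller != "fraud-team":
        raise PermissionError(f"{caller} may not read the 'email' column")


@dataclass
class DataProduct:
    name: str
    domain: str
    records: list[dict]                       # the data itself, kept trivially simple
    policies: list[Policy] = field(default_factory=list)

    def read(self, caller: str, columns: list[str]) -> list[dict]:
        """Output port: every read passes through the product's own policies."""
        request = {"columns": columns}
        for policy in self.policies:
            policy(caller, request)           # enforcement lives next to the data
        return [{c: row.get(c) for c in columns} for row in self.records]


# Usage: the e-commerce domain team publishes 'orders' as a data product.
orders = DataProduct(
    name="orders",
    domain="e-commerce",
    records=[{"order_id": 1, "email": "a@example.com", "total": 42.0}],
    policies=[pii_masking_policy],
)
print(orders.read(caller="analytics-team", columns=["order_id", "total"]))
```

The only point of the sketch is that policy enforcement sits inside the product's own read path, next to the data, rather than in a separate, centralized security layer.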
I think we are in this, I don't know, if I look into my crystal ball, what I think might happen is that right now, the primitives that we work with to train machine-learning model are still bits and bites in data. They're fields, rows, columns, right? And that creates quite a large surface area, an attack area for, you know, for privacy of the data. So perhaps, one of the trends that we might see is this evolution of data APIs to become more and more computational aware to bring the compute to the data to reduce that surface area so you can really leave the control of the data to the sovereign owners of that data, right? So that data product. So I think the evolution of our data APIs perhaps will become more and more computational. So you describe what you want, and the data owner decides, you know, how to manage the- >> John: That's interesting, Dave, 'cause it's almost like we just talked about ChatGPT in the last segment with you, who's a machine learning, could really been around the industry. It's almost as if you're starting to see reason come into the data, reasoning. It's like you starting to see not just metadata, using the data to reason so that you don't have to expose the raw data. It's almost like a, I won't say curation layer, but an intelligence layer. >> Zhamak: Exactly. >> Can you share your vision on that 'cause that seems to be where the dots are connecting. >> Zhamak: Yes, this is perhaps further into the future because just from where we stand, we have to create still that bridge of familiarity between that future and present. So we are still in that bridge-making mode, however, by just the basic notion of saying, "I'm going to put an API in front of my data, and that API today might be as primitive as a level of indirection as in you tell me what you want, tell me who you are, let me go process that, all the policies and lineage, and insert all of this intelligence that need to happen. And then I will, today, I will still give you a file. But by just defining that API and standardizing it, now we have this amazing extension point that we can say, "Well, the next revision of this API, you not just tell me who you are, but you actually tell me what intelligence you're after. What's a logic that I need to go and now compute on your API?" And you can kind of evolve that, right? Now you have a point of evolution to this very futuristic, I guess, future where you just describe the question that you're asking from the chat. >> Well, this is the Supercloud, Dave. >> I have a question from a fan, I got to get it in. It's George Gilbert. And so, his question is, you're blowing away the way we synchronize data from operational systems to the data stack to applications. So the concern that he has, and he wants your feedback on this, "Is the data product app devs get exposed to more complexity with respect to moving data between data products or maybe it's attributes between data products, how do you respond to that? How do you see, is that a problem or is that something that is overstated, or do you have an answer for that?" >> Zhamak: Absolutely. So I think there's a sweet spot in getting data developers, data product developers closer to the app, but yet not burdening them with the complexity of the application and application logic, and yet reducing their cognitive load by localizing what they need to know about which is that domain where they're operating within. Because what's happening right now? 
what's happening right now is that data engineers, a ton of empathy for them for their high threshold of pain that they can, you know, deal with, they have been centralized, they've put into the data team, and they have been given this unbelievable task of make meaning out of data, put semantic over it, curates it, cleans it, and so on. So what we are saying is that get those folks embedded into the domain closer to the application developers, these are still separately moving units. Your app and your data products are independent but yet tightly closed with each other, tightly coupled with each other based on the context of the domain, so reduce cognitive load by localizing what they need to know about to the domain, get them closer to the application but yet have them them separate from app because app provides a very different service. Transactional data for my e-commerce transaction, data product provides a very different service, longitudinal data for the, you know, variety of this intelligent analysis that I can do on the data. But yet, it's all within the domain of e-commerce or sales or whatnot. >> So a lot of decoupling and coupling create that cohesiveness. >> Zhamak: Absolutely. >> Architecture. So I have to ask you, this is an interesting question 'cause it came up on theCUBE all last year. Back on the old server, data center days and cloud, SRE, Google coined the term, "Site Reliability Engineer" for someone to look over the hundreds of thousands of servers. We asked a question to data engineering community who have been suffering, by the way, agree. Is there an SRE-like role for data? Because in a way, data engineering, that platform engineer, they are like the SRE for data. In other words, managing the large scale to enable automation and cell service. What's your thoughts and reaction to that? >> Zhamak: Yes, exactly. So, maybe we go through that history of how SRE came to be. So we had the first DevOps movement which was, remove the wall between dev and ops and bring them together. So you have one cross-functional units of the organization that's responsible for, you build it you run it, right? So then there is no, I'm going to just shoot my application over the wall for somebody else to manage it. So we did that, and then we said, "Okay, as we decentralized and had this many microservices running around, we had to create a layer that abstracted a lot of the complexity around running now a lot or monitoring, observing and running a lot while giving autonomy to this cross-functional team." And that's where the SRE, a new generation of engineers came to exist. So I think if I just look- >> Hence Borg, hence Kubernetes. >> Hence, hence, exactly. Hence chaos engineering, hence embracing the complexity and messiness, right? And putting engineering discipline to embrace that and yet give a cohesive and high integrity experience of those systems. So I think, if we look at that evolution, perhaps something like that is happening by bringing data and apps closer and make them these domain-oriented data product teams or domain oriented cross-functional teams, full stop, and still have a very advanced maybe at the platform infrastructure level kind of operational team that they're not busy doing two jobs which is taking care of domains and the infrastructure, but they're building infrastructure that is embracing that complexity, interconnectivity of this data process. >> John: So you see similarities. 
>> Absolutely, but I feel like we're probably in a more early days of that movement. >> So it's a data DevOps kind of thing happening where scales happening. It's good things are happening yet. Eh, a little bit fast and loose with some complexities to clean up. >> Yes, yes. This is a different restructure. As you said we, you know, the job of this industry as a whole on architects is decompose, recompose, decompose, recomposing a new way, and now we're like decomposing centralized team, recomposing them as domains and- >> John: So is data mesh the killer app for Supercloud? >> You had to do this for me. >> Dave: Sorry, I couldn't- (John and Dave laughing) >> Zhamak: What do you want me to say, Dave? >> John: Yes. >> Zhamak: Yes of course. >> I mean Supercloud, I think it's, really the terminology's Supercloud, Opencloud. But I think, in spirits of it, this embracing of diversity and giving autonomy for people to make decisions for what's right for them and not yet lock them in. I think just embracing that is baked into how data mesh assume the world would work. >> John: Well thank you so much for coming on Supercloud too, really appreciate it. Data has driven this conversation. Your success of data mesh has really opened up the conversation and exposed the slow moving data industry. >> Dave: Been a great catalyst. (John laughs) >> John: That's now going well. We can move faster, so thanks for coming on. >> Thank you for hosting me. It was wonderful. >> Okay, Supercloud 2 live here in Palo Alto. Our stage performance, I'm John Furrier with Dave Vellante. We're back with more after this short break, Stay with us all day for Supercloud 2. (gentle bright music)

Published Date : Feb 17 2023

Ben Hirschberg, Armo Ltd | CloudNativeSecurityCon 23


 

(upbeat music) >> Hello everyone, welcome back to theCUBE's coverage of Cloud Native SecurityCon North America 2023. Obviously, CUBE's coverage with our CUBE Center Report. We're not there on the ground, but we have folks and our CUBE Alumni there. We have entrepreneurs there. Of course, we want to be there in person, but we're remote. We've got Ben Hirschberg, CTO and Co-Founder of Armo, a cloud native security startup, well positioned in this industry. He's there in Seattle. Ben, thank you for coming on and sharing what's going on with theCUBE. >> Yeah, it's great to be here, John. >> So we had written you guys up on SiliconANGLE. Congratulations on your momentum and traction. But let's first get into what's going on there on the ground. What are some of the key trends? What's the most important story being told there? What is the vibe? What's the most important story right now? >> So I think, I would like to start here with, I think, the most important thing, which is that I think the event is very successful. The Cloud Native Security Day usually was part of KubeCon in the previous years, and now it became a conference of its own, and really kudos to all the organizers who brought this up in, actually, a short time. And it wasn't really clear how many people would turn up, but at the end, we see a really nice turnout and really great talks and keynotes around here. I think that one of the biggest trends, which hasn't started in this conference, but we've already been talking about it for a while, is supply chain. Supply chain security. I think it's, right now, the biggest trend in the talks, in the keynotes. And I think that we start to see companies, big companies, who are adapting themselves in this direction. There is a clear industry need. There is a clear problem, and I think that the cloud native security teams are coming up with tooling around it. I think for right now we see more tools than adoption, but the adoption is always following the tooling. And I think it already proves itself. So we had just a very interesting talk this morning about the OpenSSL vulnerability, which was, I think, around Halloween, which came out and everyone thought that it's going to be a critical issue for the whole cloud native and internet infrastructure, and at the end it turned out to be a lesser problem. But the reason why I think it was understood to be a lesser problem real soon was because people started to use (indistinct) to store software composition information in the environment, so security teams could look up in their systems, okay, where they're using OpenSSL, which version they are using. It became real clear really soon that this version is not adopted by a wide array of software out there, so the attack surface is relatively small, and I think it already proved itself, this direction everyone is talking about.
Is it clear? What's the difference between the two shows? What are you hearing? >> So I think that, you know, there is a good question. Okay, where is Cloud Native Computing Foundation came from? Obviously everyone knows that it was somewhat coupled with the adoption of Kubernetes. It was a clear understanding in the industry that there are different efforts where the industry needs to come together without looking be very vendor-specific and try to sort out a lot of issues in order to enable adoption and bring great value and I think that the main difference here between KubeCon and the Cloud Native Security Conference is really the focus, and not just on Kubernetes, but the whole ecosystem behind that. The way we are delivering software, the way we are monitoring software, and all where Kubernetes is only just, you know, maybe the biggest clog in the system, but, you know, just one of the others and it gives great overview of what you have in the whole ecosystem. >> Yeah, I think it's a good call. I would add that what I'm hearing too is that security is so critical to the business model of every company. It's so mainstream. The hackers have a great business model. They make money, their costs are lower than the revenue. So the business of hacking in breaches, ransomware all over the place is so successful that they're playing offense, everyone's playing defense, so it's about time we can get focus to really be faster and more nimble and agile on solving some of these security challenges in open source. So I think that to me is a great focus and so I give total props to the CNC. I call it the event operating system. You got the security group over here decoupled from the main kernel, but they work together. Good call and so this brings back up to some of the things that are going on so I have to ask you, as your startup as a CTO, you guys have the Kubescape platform, how do you guys fit into the landscape and what's different from your tools for Kubernetes environments versus what's out there? >> So I think that our journey is really interesting in the solution space because I think that our mode really tries to understand where security can meet the actual adoption because as you just said, somehow we have to sort out together how security is going to be automated and integrated in its best way. So Kubescape project started as a Kubernetes security posture tool. Just, you know, when people are really early in their adoption of Kubernetes systems, they want to understand whether the installation is is secure, whether the basic configurations are look okay, and giving them instant feedback on that, both in live systems and in the CICD, this is where Kubescape came from. We started as an open source project because we are big believers of open source, of the power of open source security, and I can, you know I think maybe this is my first interview when I can say that Kubescape was accepted to be a CNCF Sandbox project so Armo was actually donating the project to the CNCF, I think, which is a huge milestone and a great way to further the adoption of Kubernetes security and from now on we want to see where the users in Armo and Kubescape project want to see where the users are going, their Kubernetes security journey and help them to automatize, help them to to implement security more fast in the way the developers are using it working. >> Okay, if you don't mind, I want to just get clarification. 
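The instant feedback in CI that Ben describes can be approximated with a small gate like the sketch below. It is only illustrative: the kubescape CLI flags and the JSON field names are assumptions from memory and may differ between versions, so check `kubescape scan --help` and the report your version emits before relying on them.

```python
# A sketch of gating a CI job on a Kubescape scan. Flags and JSON layout are
# assumptions; verify them against the Kubescape version you actually run.
import json
import os
import subprocess
import sys

RESULTS = "results.json"


def run_kubescape_scan() -> int:
    """Run a posture scan and return the kubescape exit code."""
    cmd = [
        "kubescape", "scan",          # scan the current cluster context / manifests
        "--format", "json",           # assumed flag: machine-readable output
        "--output", RESULTS,          # assumed flag: write the report to a file
    ]
    return subprocess.run(cmd, check=False).returncode


def count_failed_controls() -> int:
    """Count failed controls in the report; the field names are assumptions."""
    with open(RESULTS) as f:
        report = json.load(f)
    controls = report.get("summaryDetails", {}).get("controls", {})
    return sum(1 for c in controls.values() if c.get("status") == "failed")


if __name__ == "__main__":
    exit_code = run_kubescape_scan()
    failed = count_failed_controls() if os.path.exists(RESULTS) else -1
    print(f"kubescape exit code: {exit_code}, failed controls: {failed}")
    # Fail the pipeline on any failure so the feedback lands on the pull request.
    sys.exit(0 if exit_code == 0 and failed == 0 else 1)
```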
What's the difference between the Armo platform and Kubescape, because you have the Kubescape Sandbox project and the Armo platform. Could you talk about the differences and interaction? >> Sure, Kubescape is an open source project and the Armo platform is actually a managed platform which runs Kubescape in the cloud for you, because Kubescape has several parts. One part is running inside the Kubernetes cluster and in the CICD processes of the user, and there is another part, which we call the backend, where the results are stored and can be analyzed further. So the Armo platform gives you a managed way to run the backend, but I can tell you that the backend will also be available within a month or two for everyone to install on their premises as well, because again, we are an open source company and we want to enable users. So the difference is that the Armo platform is a managed platform behind Kubescape. >> How does Kubescape differ from closed, proprietary-source solutions? >> So I can tell you that there are closed proprietary solutions which are very good security solutions, but I think that the main difference, if I had to pick beyond the very specific technicalities, is the worldview. The way we see it, our user is not the CISO. Our user is not necessarily the security team. From our perspective, the user is the DevOps and the developers who are working on the Kubernetes cluster day to day, and we want to enable them to improve their security. So actually our approach is more developer-friendly, if I would need to define it very shortly. >> What does this risk calculation score you guys have in Kubescape? That's come up and we cover that in our story. Can you explain to the folks how that fits in? Is it Kubescape, is it the platform, and what's the benefit, what's the purpose? >> So the risk calculation is actually a score we are giving to clusters in order for the users to understand where they are standing in the general population, how they are faring against a perfectly hardened cluster. It is based on the number of different tests we are making. And I don't want to go into, you know, the very specifics of the mathematical functions, but in general it takes into account how many functions, how many security tests are failing inside your cluster, how many nodes you are having, how many workloads you are having, and it creates this number which enables you to understand where you are standing in the global, in the world. >> What's the customer value that you guys are pitching? What's the pitch for the Armo platform? When you go and talk to a customer, are they like, "We need you." Do they come to you? Is it word of mouth? You guys have a strategy? What's the pitch? What's so appealing to the customers? Why are they enthusiastic about you guys? >> So John, I can tell you, maybe it's not so easy to say the words, but I'm nearly 20 years in the industry, and I've been always around cyber and the defense industry, and I can tell you that I never had this journey before where I could say that the customers are coming to us and not we are pitching to customers. Simply because people want it, this is a very easy tool, very, very easy to use, very understandable, and it really helps the engineers to improve their security posture. And they're coming to us and they're saying, "Well, awesome, okay, how can we use it? Do you have a graphical interface?"
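Ben deliberately skips the mathematics behind the risk calculation score, so the sketch below is not Kubescape's actual formula. It only illustrates the idea he describes: turning failing controls, workloads, and nodes into a single comparable number.

```python
# Illustrative only: NOT Kubescape's real formula. It shows how "how many
# controls fail, across how many workloads and nodes" can become one score.
def risk_score(failed_controls: int, total_controls: int,
               workloads: int, nodes: int) -> float:
    """Higher means riskier; 0 means every control passed."""
    if total_controls == 0:
        return 0.0
    failure_ratio = failed_controls / total_controls       # share of failing tests
    size_factor = 1 + (workloads + nodes) / 100             # bigger cluster, bigger blast radius
    return round(min(100.0, 100 * failure_ratio * size_factor), 1)


# Example: 18 of 120 controls failing on a 50-workload, 10-node cluster.
print(risk_score(failed_controls=18, total_controls=120, workloads=50, nodes=10))
```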
And we are pointing them to the Armo platform and they are falling in love and coming to us even more, and we can tell you that we have a big number of active users behind the platform itself. >> You know, one of the things that comes up every time at KubeCon and Cloud NativeCon when we're there, and we'll be in Amsterdam, so folks watching, you know, we'll see onsite, developer productivity is like the number one thing everyone talks about, and security is so important. It's become by default a blocker or anchor or a drag on productivity. This is big, the things that you're mentioning: easy to use, engineering supporting it, developer adoption. You know, we've always said on theCUBE, developers will be the de facto standards bodies by their choices, 'cause developers make all the decisions. So if I can go faster and I can have security kind of programmed in, I'm not shifting left, it's just I'm just having security kind of in there. That's the dream state. Is that what you guys are trying to do here? Because that's the nirvana, everyone wants to do that. >> Yeah, I think your definition is like perfect, because really, for a very long time we had this world where we decoupled security teams from developers, and sometimes even from engineering at all, and I think for multiple reasons, we are now seeing a big convergence. Security teams are becoming part of the engineering and the engineering is becoming part of the security, and as you're saying, okay, the day-to-day world of developers is becoming very tangled up, in the good way, with security. So think about it: today, one of my developers at Armo is creating a pull request. His code is already scanned by security scanners to test for different security problems. Already, you know, he gets feedback the first time he's sharing his code, and if there is an issue, he already can solve it, and this is just solving issues much faster, much cheaper. And also, you asked me about, you know, the vibe in the conference, and we know no one can deny the current economic vibe we have, and this also relates to security teams, and security teams have to be much more efficient. And one of the things that everyone is talking about is, okay, we need more automation, we need more, better tooling, and I think we are really fitting into this. >> Yeah, and I talked to venture capitalists yesterday and today, an angel investor. Best time for a startup is right now, and again, open source is driving a lot of value. Ben, it's been great to have you on and sharing with us what's going on on the ground there, as well as talking about some of the traction you have. Just final question, how old's the company? How much funding do you have? Where are you guys located? Put a plug in for the company. You guys looking to hire? Tell us about the company. Where are you guys located? How much capital do you have? >> So, okay, the company's here for three years. We've passed a round last March with Tiger and Hyperwise capitals. We are located, most of the company's located today in Israel, in Tel Aviv, but we have a great team also in Ukraine, and also great guys in Europe, and right now also Craig Box joined us as an open source VP, and he's right now located in New Zealand, so we are a really global team, which I think really helps us to strengthen ourselves. >> Yeah, and I think this is the entrepreneurial equation for the future. It's really great to see that global. We heard that in Priyanka Sharma's keynote.
It's a global culture, global community. >> Right. >> And so really, really props you guys. Congratulations on Armo and thanks for coming on theCUBE and sharing insights and expertise and also what's happening on the ground. Appreciate it, Ben, thanks for coming on. >> Thank you, John. >> Okay, cheers. Okay, this is CUB coverage here of the Cloud Native SecurityCon in North America 2023. I'm John Furrier for Lisa Martin, Dave Vellante. We're back with more of wrap up of the event after this short break. (gentle upbeat music)

Published Date : Feb 3 2023

Yves Sandfort, Comdivision Group | CloudNativeSecurityCon 23


 

(rousing music) >> Hello everyone. Welcome back to "theCUBE's" day one coverage of Cloud Native Security Con 23. This is going to be an exciting panel. I've got three great guests. I'm Lisa Martin, you know our esteemed analysts, John Furrier, and Dave Vellante well. And we're excited to welcome to "theCUBE" for the first time, Yves Sandfort, the CEO of Comdivision Group, who's coming to us from Germany. As you know, Cloud Native Security Con is a global event. Everyone welcome Yves, great to have you in particular. Welcome to "theCUBE." >> Great to be here. >> Thank you for inviting me. >> Yves, tell us a little bit, before we dig into really wanting to understand your perspectives on the event and get Dave and John's feedback as well, tell us a little bit about you. >> So yeah, talking about me, or talking about Comdivision real quick. We are in the business for over 27 years already. We started as a SaaS company, then became more like an architecture and, and Cloud Native company over the last few years. But what's interesting is, and I think that's, that's, that's really interesting when we look at our industry. It hasn't really, the requirements haven't really changed over the years. It's still security. We still have to figure out how we deal with security. We still have to figure out how we deal with compliance and everything else. And I think therefore, it's more and more important that we take these items more seriously. Also, based on the fact that when we look at it, how development and other things happen nowadays, it's, it's, everybody says it's like open source. It's great because everybody can look into the code. We, I think the last few years have shown us enough example that that's not necessarily solving all the issues, but it's also code and development has changed rapidly when we look at the Cloud Native approach, where it's far more about gluing the pieces together, versus the development pieces. When I was actually doing software development 25 years ago, and had to basically build my code because I didn't have that much internet access for it. So it has evolved, but even back then we had to deal with security and everything. >> Right. The focus on security is, is incredibly important, and the focus keeps growing as you mentioned. This is, guys, and I want to get your perspectives on this. We're going to start with John. This is the first time Cloud Native Security Con is its own event being extracted from, and amplified from KubeCon. John, I want to understand from your perspective, break down the event, what you see, what you've heard, and Cloud Native Security in general. What does this mean to companies? What does it mean to customers? Is this a reality? >> Well, I think that's the topic we want to discuss, and I think Yves background, you see the VMware certification, I love that. Because what VMware did with virtualization, was abstract that from server virtualization, kind of really changed the game on things, and you start to see Cloud Native kind of go that next level of how companies will be operating their business, not just digital transformation, as digital transformation goes to completion, it's total business transformation where IT is everywhere. And so you're starting to see the trends where, "Okay, that's happening." Now you're starting to see, that's Cloud Native Con, or KubeCon, AWS re:Invent, or whatever show, or whatever way you want to look at it. 
But in, in the past decade, past five years, security has always been front and center as almost a separate thing, and, in and of itself, but the same thing. So you're starting to see the breakout of security conversations around how to make things work. So a lot of operational conversations around what used to be DevOps makes infrastructure as code, and that was great, that fueled that. Then DevSecOps came. So the Cloud Native next level is more application development at scale, developers driving the standards with developer first thinking, shifting left, I get all that. But down in the lower ends of the stack, you got real operational issues. DNS we've heard in the keynote, we heard about the kernel, the Linux kernel. Things that need to be managed and taken care of at a security level. These are like, seem like in the weeds, but you're starting to see that happen. And the other thing that I think's real about Cloud Native Security Con that's going to be interesting to watch, is Amazon has pretty much canceled all their re:Invent-like shows except for two; re:Invent, which is their annual conference, and re:Inforce, which is dedicated to security. So Cloud Native, Linux, the Linux Foundation is now breaking out Cloud Native Con and KubeCon, and now Cloud Native Security Con. They can't call it KubeCon because it's not Kubernetes, but it's like security focused. I think this is the beginning of starting to see this new developer driving, developers driving the standards, and it has its implications for what used to be called IT ops, and that's like the VMwares of the world. You saw all the stuff that was not developer focused, but more ops, becoming much more in the application. So I think, I think it's real. The question is where does it go? How fast does it develop? So to me, I think it's a real trend, and it's worthy of a breakout, but it's not yet clear where the landing zone is for people to start doing it, how they get started, what are the best practices. Machine learning's going to be a big part of this. So to me it's totally cool, but I'm not yet seeing the beachhead. So that's kind of my take. >> Dave, our inventor and host of Breaking Analysis, what's your take? >> So when you, I think when you zoom out, there's some, there's a big macro change that's been going on. I think when you look back, let's say 10, 12 years ago, the need for speed far trumped the security aspect, the governance, the data privacy. It was like, "Yeah, the risks, they're not that great compared to our opportunity." That has completely changed because the risks are now so much higher. And so what's happening, I think there's a, there's a major effort amongst CIOs and CISOs to try to make security not a blocker, because it used to be, it still is. "Okay, I got this great initiative." Eh, give it to the SecOps pros, and let them take it for a while before we can go to market. And so a huge challenge now is to simplify, automate, AI comes in, the whole supply chain security, so the companies are not facing so much friction. And that is non-trivial. I don't think we're anywhere close there, but I think the goal is, within the next several years, we're going to be in a position that security, we heard today, wasn't designed into the initial internet protocols. It was bolted on.
And so increasingly, the fundamental architecture of the internet, the Cloud, et cetera, is, is seeing designed in security, and, and that is an imperative, or else business is going to come to a grinding halt. >> Right. It's no longer, the bolt no longer works. Yves, what's your perspective on Cloud Native Security, where it stands today? What's in it for customers, whether we're talking about banks, or hospitals, or retailers, what do you think? >> I think when we, when we look at security in the, in the modern world, is we need to as, as Dave mentioned, we need to rethink how we apply it. Very often, security in the past has been always bolted on in the end. If we continue to do that, it'll become more and more difficult, because as companies evolve, and as companies want to bring products and software to market in a much faster and faster way, it's getting more and more difficult if we bolt on the security process at the end. It's like, developers build something and then someone checks security. That's not going to work any longer. Especially if we also consider now the changes in the industry. We had Stack Overflow over the last 10 years. If I would've had Stack Overflow 15, 20, what, 25 years ago when I was a developer, it would've changed a hell lot. Looking at it now, and looking at it what we had in the last few weeks, it's like where nearly all of my team members say is like finally I don't need any script kiddies anymore because I can't go to (indistinct) who writes the code for me. Which is on one end great, because it enables us to solve certain problems in a much higher pace. But the challenge with that is, if the people who just copy and past that code, don't understand the implications of that code, we have a much higher risk continuously. And what people thought was, is challenging with Stack Overflow. Imagine that something in one of these AI engines, is actually going ballistic, and it creates holes in nearly every one of these applications. And trust me, there will be enough developers who are going to use these tools to develop codes, the same as students in university are going to take this to write their essays and everything else. And so it's really important that every developer team basically has a security person within their team, and not a security at the end. So we build something, we check it, go through QA, and then it goes to security. Security needs to be at the forefront. And I think that's where we see Cloud Native Security Con, where we see AWS. I saw it during re:Invent already where they said is like, we have reinforced next year. I think this becomes more and more of a topic, and I think companies, as much as it is become a norm that you have a firewall and everything else, it needs to become a norm that when you are doing software development, and every development team needs to have a security person on that needs to be trained. >> I love that chat comment Dave, 'cause you and I were talking about this. And I think that is going to be the issue. Do we need security chat for the chat bot? And there's like a, like a recursive model there. The biases are built in. I think, and I think our interview with the Palo Alto Network's co-founder, Dave, when he talked about zero trust as a structured way to start things, but he was referencing that with Cloud, there's a chance to rethink or do a do-over in security. So, I think this is kind of to me, where this is all going. 
And I think you asked Pat Gelsinger what, year 2013, 2014, can, is security a do-over? I think we're in that do-over time. >> He said yes. >> He said yes. (laughing) He was right. But yeah, eight years later... But this is, how do you, zero trust gives you some structure, but how do you organize and redo security? Because to me, I think that's what's happening here. >> And John, you heard: Zuk at Palo Alto Networks said, "Yeah, the, the words security and architecture, they don't go together historically." And so it is a total, total retake. >> Well is that because there's too many tools out there and- >> Yeah. For sure. >> Yeah, well, first of all, a lot of hardware. And then yeah, a lot of tools. You even see IIoT and Industry 4.0, you see IoT security coming up as another stovepipe, and that's not the right approach. And, and so- >> Well let me, let me ask you a question, Dave, and Yves, if you don't mind. 'Cause I was just riffing on this yesterday about this. In the ML space, you're seeing the ML models, you're seeing proprietary models versus open source. Is security going to go down this path of proprietary security methods versus open source? Because that's interesting, because the CNCF is run by the Linux Foundation. So you can almost maybe see a model where there's more proprietary security methods than open source. Or is it, is that a non-issue? >> I would, I would, let me, if I, if I jump in here first. I think the last, especially last five or 10 years have clearly shown the, the whole, and, and I invested early on, at the end of the '90s, in several open source startups in the Bay area. So, I'm well behind the whole open source idea, and, and mid (indistinct) and others back then several times. But the point is, I think what we have seen is open source is not, in general, more secure or less secure, because code is too complex nowadays. You have millions of lines of code, and it's not that either one way or the other is going to solve it. The way I think we are going to look at it is more, what's the route to market, because only because something is open source doesn't necessarily mean it's going to be available for everyone. And the same for proprietary source from that perspective, even though everybody mixes licensing and payments and all that all the time, but it doesn't necessarily have anything to do with it. But I think as we are going through it, and when we also look at the industry, the security industry over the last 10 plus years has been primarily hardware focused. And a lot of these vendors have done a good business out of selling hardware boxes, putting software on top of it. Whereas in reality, those were still x86 standard boxes in the end. So it was not that we had specific security ASICs or anything like that in there anymore. And so overall, the question of the market is going to change. And as we are looking into Cloud Native, think about someone like an AWS, do you really envision them to have a hardware box of every supplier in their data center, and that in every availability zone in every region? Same for Microsoft, same for Google, etc.? So we need to have new ways on how we can apply security. And that applies both on the backend services, but also on the front end side. >> And if I, and if I could chime in, I think the, the good, I think the answer is, is, is no and yes. And what I mean by that is, if you take antivirus and known malware, I mean pretty much anybody today can, can solve that problem; it's the unknown malware.
So I think the yes part of the answer is yes, it's, it's going to be proprietary, but in the sense that we're going to use open source tooling, and then apply that in a proprietary way with, with specific algorithms and unique architectures that are going to solve problems. For example, XDR with, with unknown malware. So, and that's the, that's the hard part. As somebody said, I think this morning at the keynote, it's, it's all the stuff that, that the SecOps team couldn't find. That's the really hard part. >> (laughs) Well, the question will be, is the new IP the ability to feed ChatGPT some magical spelled insertion query string that does the job? That's unique, that might be the new IP, the question to ask. >> Well, that's what the hackers are going to do. And I, they're on offense. (John laughs) And the offense knows what play is coming. So, they're going to start. >> So guys, let's take this conversation up a level. I want to get your perspectives on what's in this for me as a customer? We know security is a board level conversation. We talk about this all the time. We also know that there is, based on, I think, David, it was the conversations that you and I had with Palo Alto Networks at Ignite in December, there's a, there's a lack of alignment between the executives and the board from a security perspective. When we talk about Cloud Native Security, we all talked about the value in that, what's in it for customers? I want to get your perspectives on should this be a board level conversation, and if so, how do you advise organizations, whether it is a hospital, or a bank, or an organization that is really affected by things like ransomware? How should they be thinking about this from an organizational perspective? >> Well, I'll start first, because we had this conversation during our Supercloud event last month, and this comes up a lot. And this is, the CEO board level. Yes, it is a board level conversation for security, as is application development, in terms of transforming their business to be competitive, not to be on the wrong side of history with this wave coming. So I think that's more of a management issue. But the issue is, they tell their people, "Go do it." And they're like, 'cause they get sold on the idea of, "Hey, won't you transform your business, and everything's going to be data driven, and machine learning's going to power your apps, get new customers, be profitable." "Oh, sign me up for that." When you have to implement this, it's really hard. And I think the core issue is, where are companies in their life cycle of the ability to execute and architect this thing properly. As Dave said, Nir Zuk said, "You can't have architecture and security, you need platforms." So, I think the re-platforming and the re-factoring of business is a big factor, and that's got to get down into the, the organizational shifts and the people to do it. So are there skills? Do I do a managed service? How do I architect it? Are there more services? Are there developers doing applications that are going to be more agile? So, this is not an easy thing. And to move a business from IT operations that is proven, to be positioned for this enablement, is just really difficult. And it's expensive. And if you screw it up, you could be, could be on the wrong side of things. So, to me, that's the big issue is, you sell the dream and then you got to implement it. And that's really difficult. >> Yves, give us your perspective on, based on John's comments, how do organizations shift so dramatically?
There's a cultural element there as well, but there's also organizations that are, have competitors in the rear view mirror, and there's no time to waste. What are your thoughts on that? >> I think that's exactly the point. It's like, as an organization, you need to make the decision between the time, the risk, and all the other elements we have in this game. Because you can try to achieve 100% security, but that's exactly the same as trying to, to protect gold or anything else 100%. It's most likely not going to be sensible from a risk perspective anyway. And that's the same from a corporate perspective. When you look at building new internet services, or IoT services, or any kind of new shopping experience or whatever else, you need to balance out between the risks and the advantages out of it. And you also need to be accepting that you potentially make mistakes on the way, but then it's more important than ever that you are able to quickly fix any mistakes, and to adjust to anything that's happening in the market. Because as we are building all these new Cloud Native applications, and building up all these skill sets, one of the big scenarios is we are far more dependent on individual building blocks. These building blocks come out of open source communities, which have a much different way. When we look back in software development, back then we had application servers from Oracle, WebLogic, whatsoever; they had release cycles of every three to six months. Now we have to deal with open source, where sometimes release cycles are on a four-week schedule, in between security patches. So you need to be much faster in adopting that, checking that, implementing that, getting things to work. So there is a security stretch from that perspective. There is a speed stretch on the other side that companies have to deal with, and on the other side it's always a measurement between the risk and the security you can afford. Because reality is, you will not be 100% protected no matter what you do. So, you need to balance out what you as an organization can actually build on. But I think, coming back also to the point, it's on the board level nowadays. It's like nearly every discussion we have with companies nowadays as they move into the Cloud, especially also here in Europe where for the last five years, it was always, it's like, "It's data privacy." Data privacy is no longer, I mean, yes, for certain people, it's still the point, but for many more people it's like, "How protected is my data?" "What do we do in case of a ransomware attack?" "What do we do in case of a denial of service?" All of these things become more vulnerable, where in the past you were discussing these things with a banking page, or, or like a stock exchange. They were, it's like, "What the hell is going to happen if we have a denial of service?" Now all of a sudden, this affects nearly everyone in their storefronts and everything else, because everything is depending on it.
Now it's this whole house, cultural team sport. I know it's sort of a, a cliche, but it, it's true. Everybody actually is beginning to care about security because the risks are now so high, and it's going to affect not only the bottom line of the company, the bottom line of the business, their job, it's, it's, it's virtually everywhere. It's a huge cultural shift that we're seeing. >> And that's a big challenge for organizations in any industry. And Yves, you talked about ransomware service. Every industry across the globe is vulnerable to this. But how can, maybe John, we'll start with you. How can Cloud Native Security help organizations if they're able to embrace it, operationally, culturally, dial down some of the vulnerabilities that just seem to keep growing? >> Well, I mean that's the big question. The breaches are, are critical. The governances also could be a way that anchors down growth. So I think the balance between the governance compliance piece of it is key, but making the developers faster and more productive is the key to me. And I think having the security paradigm where they're not blockers, as Dave said, is critical. So I love the whole shift left, but now that we have more data focused initiatives around how that, you can use data to understand the security issues, I think data and security are together, and I think there's a going to be a data operating system model emerging, where data and security will be almost one thing. And that will be set up by the security teams, and the data teams together. And that will feed guardrails into the developer environment. So the developer should feel no pain at all in doing this. So I think the best practice will end up being what we're seeing with supply chain, security, with making sure code's verified. And you're going to see the container, security side completely address has been, and KubeCon, we just, I asked Scott Johnson, the CEO of Docker, and I asked him directly, "Are you guys all tight on container security?" He said, yes, but other people are suggesting that's not true. There's a lot of issues with the container security. So, there's all kinds of areas where there's holes. So Cloud Native is cool on one hand, and very relevant, but if it's not shored up, it's going to be a problem. But I, so I think that's where the action will be, at the developer pipeline, in the containers, and the data. So, that will be very relevant, and if companies nail that, they'll be faster, they'll have better apps, and that'll be the differentiator. And again, if they don't on this next wave, they're going to be driftwood. >> Dave, how do they prevent becoming driftwood? >> Well, I think Cloud has had a huge impact. And a Cloud's by no means a panacea, but let's face it, it's dramatically improved a lot of companies security posture. Now there's still that shared responsibility. Even though an S3 bucket is encrypted, it's still your responsibility to make sure that it doesn't get decrypted by somebody who has access to it. So there are things like that, but to Yve's earlier point, that can be, that's done through software now, it's done through best practices. Those best practices can be shared. So the way you, you don't become driftwood, is you start to, you step back, rethink that security architecture as we were talking about earlier, take advantage of the Cloud, take advantage of Cloud Native, and all the, the rapid pace of innovation that's occurring there, and you don't use, it's called before, The audit is the last line of defense. 
That's no longer a check box item. "Oh yeah, we're in compliance." It's, this is a business imperative, and because we're going to reduce our expected loss and reduce our business risk. That's part of the business case today. >> Yeah. >> It's a huge, critically important part of the business case. Yves, question for you. If you're in an elevator with a CEO, a CFO, and a CISO, and they're talking about security and Cloud Native Security, what's your value proposition to them on a, on a say a 32nd elevator ride? >> Difficult story. I think at the moment, the most important part is, we need to get people to work together, and we need to train people to work more much better together. I think that's the overall most important part for all of these solutions, because in the end, security is always a person issue. If, we can have the best tools in the industry, as long as we don't get all of these teams to work together, then we have a problem. If the security team is always seen as the end of the solution to fix everything, that's not going to work because they always are the bad guys in the game. And so we need to bring the teams together. And once we have the teams work together, I think we have a far better track on, on maintaining security. >> John and Dave, I want to get your perspectives on what Yves just said. In all the experience that the two of you have as industry analysts here on "theCUBE," Wikibon, Siliconangle Media. How do you advise organizations to get those teams together? As Eve said, that alignment is critical, but John, we'll start with you, then Dave go to you. What's your advice for organizations that need to align those teams and really don't have a lot of time to wait to do it? >> (chuckling) That's a great question. I think, I think that's everyone pays hundreds of thousands of millions of dollars to get that advice from these consultants, organizations out there doing the transformations. But I think it comes down to personnel and commitment. I think if there's a C-level commitment to the effort, you'll see the institutional structure change. So you can see really getting behind it with their, with their wallet and their, and their support of either getting more personnel to support and assist, or manage services, or giving the power to the teams to execute and doing it in a way that, that's, that's well known and best practices. Start small, build out the pilots, build the platform, and then start getting it right. And I think that's the key. Not the magic wand, the old model of rolling out stuff in, in six month cycles. It's really, get the proof points, double down and change the culture, but also execute and have real metrics. And changing the architecture, like having more penetration tests as a service. Doing pen tests is like a joke now. So that doesn't make any sense. You got to have that built in almost every day, and every minute. So, these kinds of new techniques have to be implemented and have to be tried. So that's why these communities are growing. That's why I like what open source has been doing, and I like the open source as the place to have these conversations, because that's where the action will be for new stuff. And I think people will implement open source like they did before, but with different ways, better testing, better supply chain on the software side, verifying code. So, I see open source actually getting a tailwind from this, not a headwind. 
So, I'm bullish on the open source piece here on all levels, machine learning- >> Lisa, my answer is intramural sports. And it's 'cause I think it's cultural. And what I mean by that is you take your best and brightest security pros, and this is what, frankly, a lot of CISOs do, an example is Lena Smart at MongoDB. Take your best and brightest security pros, make them captains of the intramural teams, and pair them up with pods of individuals across the organization, which is most people, who don't know anything about security, and put them together, so that the folks that understand security can realize how little people know and what the worst practices out there are, and, in the reverse, how they can cross-pollinate. And they do that on a regular basis, I know at Mongo and other companies. And that kind of cultural assimilation is a starting point for how you get security awareness up, to your question around making it a team sport. >> Absolutely critical. Yves, I want to kind of wrap things with you. We've got a couple of minutes left. When you're really looking at the Cloud Native community, the growth of it, we talked about earlier in the program, Cloud Native Security Con being now extracted and elevated out of KubeCon, what are your thoughts on the groundswell that this community is generating around Cloud Native Security, the benefits that organizations will achieve from it? >> I think overall, when we have these security conferences, or these security arms a bit spread out and separated out of the main conference, it helps to a certain degree, because especially in the security space, when you look at other conferences like Black Hat or white hat conferences and things like that in the past, although they were not focused on Cloud Native, a lot of these security folks didn't feel well taken care of in any of the other conferences, because it was always, "They are always blocking us, they're always making problems for us," and all these kinds of things. Now that we really take the Cloud Native piece and the security piece together, or like AWS does it with re:Inforce, I think we will see more and more that people understand that security is a permanent topic we need to cover, but we need to bring different people together, because security also has compliance and a lot of other components in there. So we will see at these conferences moving forward also a different audience. It's not going to be only the Cloud Native developers. And if I see some of these security audiences, I can't really imagine them at KubeCon, because there is too much else going on. And you couldn't really see much of that at re:Invent, because re:Invent by itself has become a complete monster of a conference. It covers too many topics. And so having this very, very important security piece separated also gives the opportunity, I think, that we can bring in the security people, but also have the type of board level discussions, potentially, between the leaders of the industry, to also discuss how we can evolve, how we can make things better, and how we can actually, yeah, evolve our industry for it. Because let's face it, that threat is not going to go away. It's a business.
And one of the last security conferences I was on, on the ransomware part, it was one of the topics someone said is like, "Look, currently on average, it takes a hacker group roughly around they said 15 to 20 K to break into a company, and they on average make 100K. It's a business, let's face it. And it's a business we don't like. And ethically, it's no discussion that this is not good, but that's something which is happening. People are making money with it. And as long as that's going to go on, and we have enough countries where these people can hide, it's going to stay and survive. And so, with that being said, it's important for us to really build an industry around this. But I also think it's good that we have separate conferences. In the past we had more the RSA conference, which tried to cover all of these areas. But that is not really fitting Cloud Native and everything else. So I think it's good that we have these new opportunities, the Cloud Native one, but also what AWS brings up for someone. >> Yves, you just nailed it. It just comes down to simple math. It's a fraction. Revenue over cost. And if you could increase the hacker's cost, increase the denominator, their ROI will go down. And that is the game. >> Great point, Dave. What I'm hearing guys, and we can talk about technology for days and days. I know all of you. But there's, there's a big component that, that the elevation of Cloud Native Security, on its own as standalone is critical, as is the people component. You guys all talked about that. We talked about the cultural change necessary for that. Hopefully what we're seeing with Cloud Native Security Con 23, this first event is going to give us more insight over the next couple of days, and the next months or so, as to how this elevation, and how the people can come together to really help organizations from a math perspective as, as Dave talked about, really dial down the risks there, understand more of the vulnerabilities so that ransomware as a service is not as lucrative as it is today. Guys, so much appreciate your time, really breaking down Cloud Native Security, the value in it from different perspectives, and what your thoughts are on where it's going. Thanks so much for your time. >> All right. Thanks. >> Thanks, Lisa. >> Thank you. >> Thanks, Yves. >> All right. For my guests, I'm Lisa Martin. You're watching theCUBE's day one coverage of Cloud Native Security Con 23. Thanks for watching. (rousing music)
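Dave's revenue-over-cost fraction can be written down with the rough numbers Yves quoted. These are the panel's anecdotal figures, not measured data, but they show why raising the attacker's cost is the lever that matters.

```python
# Toy attacker-economics math using the panel's rough figures.
def attacker_roi(payout: float, cost: float) -> float:
    return payout / cost

payout = 100_000          # average take per successful intrusion, as quoted
for cost in (15_000, 20_000, 50_000, 100_000):
    print(f"cost {cost:>7,}: ROI = {attacker_roi(payout, cost):.1f}x")
# As defenses push the cost of breaking in toward the payout, the ROI falls
# toward 1x and the "business" stops being worth running.
```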

Published Date : Feb 2 2023

SUMMARY :

Lisa Martin, John Furrier, Dave Vellante and Yves Sandfort of Comdivision Group discuss day one of Cloud Native Security Con 23: why security has become a cultural team sport, how shift-left practices, supply chain verification and data-driven guardrails change the developer pipeline, why breaking the event out of KubeCon helps bring security, compliance and business leaders together, and how raising the attacker's cost undermines the economics of ransomware as a service.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave | PERSON | 0.99+
John | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Eve | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Nick Zuk | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Pat Gelsinger | PERSON | 0.99+
Zuk | PERSON | 0.99+
John Furrier | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
David | PERSON | 0.99+
Yves | PERSON | 0.99+
Yves Sandfort | PERSON | 0.99+
Germany | LOCATION | 0.99+
100% | QUANTITY | 0.99+
Palo Alto Network | ORGANIZATION | 0.99+
Europe | LOCATION | 0.99+
Lisa | PERSON | 0.99+
Scott Johnson | PERSON | 0.99+
15 | QUANTITY | 0.99+
Mongo | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
Lena Smart | PERSON | 0.99+
2014 | DATE | 0.99+
Linux Foundation | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
Comdivision Group | ORGANIZATION | 0.99+
December | DATE | 0.99+
four week | QUANTITY | 0.99+
Docker | ORGANIZATION | 0.99+
Palo Alto Networks | ORGANIZATION | 0.99+
Web Logic | ORGANIZATION | 0.99+
Cloud Native Security Con | EVENT | 0.99+
Siliconangle Media | ORGANIZATION | 0.99+
Wikibon | ORGANIZATION | 0.99+
DevSecOps | TITLE | 0.99+
next year | DATE | 0.99+
Palo Alto Network | ORGANIZATION | 0.99+
eight years later | DATE | 0.99+
last month | DATE | 0.99+
Cloud Native Security Con 23 | EVENT | 0.99+
KubeCon | EVENT | 0.99+
20 K | QUANTITY | 0.98+
six months | QUANTITY | 0.98+
both | QUANTITY | 0.98+
VMware | ORGANIZATION | 0.98+
today | DATE | 0.98+
one | QUANTITY | 0.98+
32nd elevator | QUANTITY | 0.98+
DevOps | TITLE | 0.98+
over 27 years | QUANTITY | 0.98+
Yve | PERSON | 0.98+
Cloud Native | TITLE | 0.98+
2013 | DATE | 0.98+
first | QUANTITY | 0.98+
MongoDB | ORGANIZATION | 0.97+
Re:Inforce | EVENT | 0.97+
25 years ago | DATE | 0.97+

Is Data Mesh the Next Killer App for Supercloud?


 

(upbeat music) >> Welcome back to our Supercloud 2 event, live coverage here of the stage performance in Palo Alto, syndicating around the world. I'm John Furrier with Dave Vellante. We've got exclusive news and a scoop here for SiliconANGLE and theCUBE. Zhamak Dehghani, creator of data mesh, has formed a new company called Nextdata.com, Nextdata. She's a cube alumni and contributor to our supercloud initiative, as well as our coverage and Breaking Analysis with Dave Vellante on data, the killer app for supercloud. Zhamak, great to see you. Thank you for coming into the studio, and congratulations on your newly formed venture and continued success on the data mesh. >> Thank you so much. It's great to be here. Great to see you in person. >> Dave: Yeah, finally. >> Wonderful. Your contributions to the data conversation have been well documented, certainly by us and others in the industry. Data mesh is taking the world by storm. Some people are debating it, throwing cold water on it. Some are thinking it's the next big thing. Tell us about the data mesh, super data apps that are emerging out of cloud. >> I mean, data mesh, as you said, the pain points that it surfaced were universal. Everybody said, "Oh, why didn't I think of that?" It was just an obvious next step and people are approaching it, implementing it. I guess the last few years I've been involved in many of those implementations, and I guess supercloud is somewhat a prerequisite for it, because data mesh, and building applications using data mesh, is about sharing data responsibly across boundaries. And those boundaries include organizational boundaries, cloud technology boundaries, and trust boundaries. >> I want to bring that up, because your venture, Nextdata, is new, just formed. Tell us about that. What wave is it riding? What specifically are you targeting? What's the pain point? >> Absolutely. Yes, so Nextdata is the result of, I suppose, the pains that I suffered from implementing data mesh for many of the organizations. Basically, a lot of organizations that I've worked with, they want decentralized data. So they really embrace this idea of decentralized ownership of the data, but yet they want interconnectivity through standard APIs, yet they want discoverability and governance. So they want to have policies implemented, they want to govern that data, they want to be able to discover that data, and yet they want to decentralize it. And we do that with a developer experience that is easy and native to a generalist developer. So we try to find, I guess, the common denominator that solves those problems and enables that developer experience for data sharing. >> Since you just announced the news, what's been the reaction? >> I just announced the news right now, so what's the reaction? >> But people in the industry know you did a lot of work in the area. What has been some of the feedback on the new venture in terms of the approach, the customers, the problem? >> Yeah, so we've been in stealth mode, so we haven't publicly talked about it, but folks that have been close to us know that, in fact, we already have implementations of our pilot platform with early customers, which is super exciting. And we're going to have multiple of those. Of course, we're a tiny, tiny company. We can't have many of those, but we are going to have multiple pilot implementations of our platform in the real world, with real global large-scale organizations that have real-world problems. So we're not going to build our platform in a vacuum.
And that's what's happening right now. >> Zhamak, when I think about your role at ThoughtWorks, you had a very wide observation space with a number of clients, helping them implement data mesh and other things as well prior to your data mesh initiative. But when I look at data mesh, at least the ones that I've seen, they're very narrow. I think of JPMC, I think of HelloFresh. They're generally, obviously not surprising, they don't include the big vision of inclusivity across clouds, across different data storage. But it seems like people are having to go through some gymnastics to get to the organizational reality of decentralizing data and at least pushing data ownership to the line of business. How are you approaching, or are you approaching solving that problem? Are you taking a narrow slice? What can you tell us about Nextdata? >> Yeah, absolutely. Gymnastics, the cute word to describe what the organizations have to go through. And one of those problems is that the data as you know resides on different platforms, it's owned by different people, is processed by pipelines that who knows who owns them. So there's this very disparate and disconnected set of technologies that were very useful for when we thought about data and processing as a centralized problem. But when you think about data as a decentralized problem the cost of integration of these technologies in a cohesive developer experience is what's missing. And we want to focus on that cohesive end-to-end developer experience to share data responsibly in these autonomous units. We call them data products, I guess in data mesh. That constitutes computation. That governs that data policies, discoverability. So I guess, I heard this expression in the last talks that you can have your cake and eat it too. So we want people have their cakes, which is data in different places, decentralization, and eat it too, which is interconnected access to it. So we start with standardizing and codifying this idea of a data product container that encapsulates data computation APIs to get to it in a technology agnostic way, in an open way. And then sit on top and use existing tech, Snowflake, Databricks, whatever exists, the millions of dollars of investments that companies have made, sit on top of those but create this cohesive, integrated experience where data product is a first class primitive. And that's really key here. The language and the modeling that we use is really native to data mesh, which is that I'm building a data product I'm sharing a data product, and that encapsulates I'm providing metadata about this. I'm providing computation that's constantly changing the data. I'm providing the API for that. So we we're trying to kind of codify and create a new developer experience based on that. And developer, both from provider side and user side, connected to peer-to-peer data sharing with data product as a primitive first class concept. >> So the idea would be developers would build applications leveraging those data products, which are discoverable and governed. Now today you see some companies, take a Snowflake for example, attempting to do that within their own little walled garden. They even at one point used the term mesh. I don't know if they pull back on that. And then they became aware of some of your work. But a lot of the things that they're doing within their little insulated environment support that governance, they're building out an ecosystem. What's different in your vision? >> Exactly. 
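To make the data product container Zhamak describes a little more concrete, here is a small, hypothetical sketch of such a primitive. The field names and the serve method are invented for illustration and are not Nextdata's actual interface.

```python
# Hypothetical sketch of a "data product" as a first-class primitive:
# the data, the computation that shapes it, its metadata, and its policies
# travel together behind one standard interface.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class DataProduct:
    name: str                                         # e.g. "sales.orders"
    owner_domain: str                                 # team that owns the semantics
    metadata: dict = field(default_factory=dict)      # schema, lineage, SLOs...
    policies: list = field(default_factory=list)      # callables evaluated on access
    transform: Callable[[Any], Any] = lambda raw: raw # computation that maintains the data

    def serve(self, raw_source: Any, caller: dict) -> Any:
        """Apply every attached policy, then return the governed output."""
        for policy in self.policies:
            policy(caller)        # raises if the caller is not allowed
        return self.transform(raw_source)
```

The point of the sketch is only that discovery metadata, policy, and computation are packaged with the data itself rather than bolted on later by a central team.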
So we realized that, and this is a reality, like you go to organizations, they have a Snowflake and half of the organization happily operates on Snowflake. And on the other half, "oh, we are on Bare infrastructure on AWS or we are on Databricks." This is the reality. This supercloud that's written up here, it's about working across boundaries of technology. So we try to embrace that. And even for our own technology with the way we're building it, we say, "Okay, nobody's going to use Nextdata, data mesh operating system. People will have different platforms." So you have to build with openness in mind and in case of Snowflake, I think, they have very, I'm sure very happy customers as long as customers can be on Snowflake. But once you cross that boundary of platforms then that becomes a problem. And we try to keep that in mind in our solution. >> So it's worth reviewing that basically the concept of data mesh is that whether you're a data lake or a data warehouse, an S3 bucket, an Oracle database as well, they should be inclusive inside of the data. >> We did a session with AWS on the startup showcase, data as code. And remember I wrote a blog post in 2007 called "Data as the New Developer Kit" back then we used to call them developer kits if you remember. And that we said at that time, whoever can code data will have a competitive advantage. >> Aren't the machines going to be doing that? Didn't we just hear that? >> Well, we have. Hey, Siri. Hey, Cube, find me that best video for data mesh. There it is. But this is the point, like what's happening is that now data has to be addressable. for machines and for coding because as you need to call the data. So the question is how do you manage the complexity of big things as promiscuous as possible, making it available, as well as then governing it? Because it's a trade off. The more you make open, the better the machine learning. But yet the governance issue, so this is the, you need an OS to handle this maybe. >> Yes. So yes, well we call, our mental model for our platform is an OS operating system. Operating systems have shown us how you can abstract what's complex and take care of a lot of complexities, but yet provide an open and dynamic enough interface. So we think about it that way. Just, we try to solve the problem of policies live with the data, an enforcement of the policies happens at the most granular level, which is in this concept of the data product. And that would happen whether you read, write or access a data product. But we can never imagine what are these policies could be. So our thinking is we should have a policy, open policy framework that can allow organizations write their own policy drivers and policy definitions and encode it and encapsulated in this data product container. But I'm not going to fool myself to say that, that's going to solve the problem that you just described. I think we are in this, I don't know, if I look into my crystal ball, what I think might happen is that right now the primitives that we work with to train machine learning model are still bits and bytes and data. They're fields, rows, columns and that creates quite a large surface area and attack area for privacy of the data. So perhaps one of the trends that we might see is this evolution of data APIs to become more and more computational aware to bring the compute to the data to reduce that surface area. So you can really leave the control of the data to the sovereign owners of that data. So that data product. 
So I think that evolution of our data APIs perhaps will become more and more computational. So you describe what you want and the data owner decides how to manage. >> That's interesting, Dave, 'cause it's almost like we just talked about ChatGPT in the last segment we had with you. It was a machine learning have been around the industry. It's almost as if you're starting to see reason come into, the data reasoning is like starting to see not just metadata. Using the data to reason so that you don't have to expose the raw data. So almost like a, I won't say curation layer, but an intelligence layer. >> Zhamak: Exactly. >> Can you share your vision on that? 'Cause that seems to be where the dots are connecting. >> Yes, perhaps further into the future because just from where we stand, we have to create still that bridge of familiarity between that future and present. So we are still in that bridge making mode. However, by just the basic notion of saying, "I'm going to put an API in front of my data." And that API today might be as primitive as a level of indirection, as in you tell me what you want, tell me who you are, let me go process that, all the policies and lineage and insert all of this intelligence that need to happen. And then today, I will still give you a file. But by just defining that API and standardizing it now we have this amazing extension point that we can say, "Well, the next revision of this API, you not just tell me who you are, but you actually tell me what intelligence you're after. What's a logic that I need to go and now compute on your API?" And you can evolve that. Now you have a point of evolution to this very futuristic, I guess, future where you just described the question that you're asking from the ChatGPT. >> Well, this is the supercloud, go ahead, Dave. >> I have a question from a fan, I got to get it in. It's George Gilbert. And so his question is, you're blowing away the way we synchronize data from operational systems to the data stack to applications. So the concern that he has and he wants your feedback on this, is the data product app devs get exposed to more complexity with respect to moving data between data products or maybe it's attributes between data products? How do you respond to that? How do you see? Is that a problem? Is that something that is overstated or do you have an answer for that? >> Absolutely. So I think there's a sweet spot in getting data developers, data product developers closer to the app, but yet not overburdening them with the complexity of the application and application logic and yet reducing their cognitive load by localizing what they need to know about, which is that domain where they're operating within. Because what's happening right now? What's happening right now is that data engineers with, a ton of empathy for them for their high threshold of pain that they can deal with, they have been centralized, they've put into the data team, and they have been given this unbelievable task of make meaning out of data, put semantic over it, curate it, cleans it, and so on. So what we are saying is that get those folks embedded into the domain closer to the application developers. These are still separately moving units. Your app and your data products are independent, but yet tightly closed with each other, tightly coupled with each other based on the context of the domain. 
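A hedged illustration of policies living with the data and being enforced at the most granular level, at access time: the policy names, caller fields, and allowed purposes below are invented for the example, and a real open policy framework would plug in engine-specific policy drivers rather than plain Python functions.

```python
# Illustrative only: enforce a data product's policies at read time,
# based on who is asking and what they are asking for.
class PolicyViolation(Exception):
    pass

def region_policy(caller: dict) -> None:
    # e.g. this product's data may only be processed in one region.
    if caller.get("region") != "eu-west-1":
        raise PolicyViolation("data must be processed in-region")

def purpose_policy(caller: dict) -> None:
    if caller.get("purpose") not in {"analytics", "fraud-detection"}:
        raise PolicyViolation(f"purpose {caller.get('purpose')!r} not permitted")

def read_data_product(rows: list, caller: dict,
                      policies=(region_policy, purpose_policy)) -> list:
    for policy in policies:
        policy(caller)   # enforcement travels with the product, not the platform
    return rows

# read_data_product(rows, {"region": "eu-west-1", "purpose": "analytics"})  -> rows
# read_data_product(rows, {"region": "us-east-1", "purpose": "marketing"})  -> PolicyViolation
```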
So reduce cognitive load by localizing what they need to know about to the domain, get them closer to the application, but yet have them separate from app because app provides a very different service. Transactional data for my e-commerce transaction. Data product provides a very different service. Longitudinal data for the variety of this intelligent analysis that I can do on the data. But yet it's all within the domain of e-commerce or sales or whatnot. >> It's a lot of decoupling and coupling create that cohesiveness architecture. So I have to ask you, this is an interesting question 'cause it came up on theCUBE all last year. Back on the old server data center days and cloud, SRE, Google coined the term, site reliability engineer, for someone to look over the hundreds of thousands of servers. We asked the question to data engineering community who have been suffering, by the way, I agree. Is there an SRE like role for data? Because in a way data engineering, that platform engineer, they are like the SRE for data. In other words managing the large scale to enable automation and cell service. What's your thoughts and reaction to that? >> Yes, exactly. So maybe we go through that history of how SRE came to be. So we had the first DevOps movement, which was remove the wall between dev and ops and bring them together. So you have one unit of one cross-functional units of the organization that's responsible for you build it, you run it. So then there is no, I'm going to just shoot my application over the wall for somebody else to manage it. So we did that and then we said, okay, there is a ton, as we decentralized and had these many microservices running around, we had to create a layer that abstracted a lot of the complexity around running now a lot or monitoring, observing, and running a lot while giving autonomy to this cross-functional team. And that's where the SRE, a new generation of engineers came to exist. So I think if I just look at. >> Hence, Kubernetes. >> Hence, hence, exactly. Hence, chaos engineering. Hence, embracing the complexity and messiness. And putting engineering discipline to embrace that and yet give a cohesive and high integrity experience of those systems. So I think if we look at that evolution, perhaps something like that is happening by bringing data and apps closer and make them these domain-oriented data product teams or domain-oriented cross-functional teams full stop and still have a very advanced maybe at the platform level, infrastructure level operational team that they're not busy doing two jobs, which is taking care of domains and the infrastructure, but they're building infrastructure that is embracing that complexity, interconnectivity of this data process. >> So you see similarities? >> I see, absolutely. But I feel like we're probably in a more early days of that movement. >> So it's a data DevOps kind of thing happening where scales happening. It's good things are happening, yet a little bit fast and loose with some complexities to clean up. >> Yes. This is a different restructure. As you said, the job of this industry as a whole, an architect, is decompose recompose, decompose recompose in new way and now we're like decomposing centralized team, recomposing them as domains. >> So is data mesh the killer app for supercloud? >> You had to do this to me. >> Sorry, I couldn't resist. >> I know. Of course you want me to say this. >> Yes. >> Yes, of course. 
I mean, supercloud, I think it's really, the terminology supercloud, open cloud, but I think in spirits of it this embracing of diversity and giving autonomy for people to make decisions for what's right for them and not yet lock them in. I think just embracing that is baked into how data mesh assume the world would work. >> Well, thank you so much for coming on Supercloud 2. We really appreciate it. Data has driven this conversation. Your success of data mesh has really opened up the conversation and exposed the slow moving data industry. >> Dave: Been a great catalyst. >> That's now going well. We can move faster. So thanks for coming on. >> Thank you for hosting me. It was wonderful. >> Supercloud 2 live here in Palo Alto, our stage performance. I'm John Furrier with Dave Vellante. We'll back with more after this short break. Stay with us all day for Supercloud 2. (upbeat music)

Published Date : Jan 25 2023

SUMMARY :

John Furrier and Dave Vellante talk with Zhamak Dehghani, creator of data mesh, at Supercloud 2, where she announces her new company Nextdata, describes the data product container that packages data, computation, APIs and policies behind a standard interface, and argues that data mesh's decentralized, governed approach to sharing data across organizational, technology and trust boundaries makes it a natural fit for supercloud.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Dave | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
2007 | DATE | 0.99+
George Gilbert | PERSON | 0.99+
Zhamak Dehghani | PERSON | 0.99+
Nextdata | ORGANIZATION | 0.99+
Zhamak | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
Google | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
one | QUANTITY | 0.99+
Nextdata.com | ORGANIZATION | 0.99+
two jobs | QUANTITY | 0.99+
JPMC | ORGANIZATION | 0.99+
today | DATE | 0.99+
HelloFresh | ORGANIZATION | 0.99+
ThoughtWorks | ORGANIZATION | 0.99+
last year | DATE | 0.99+
Supercloud 2 | EVENT | 0.99+
Oracle | ORGANIZATION | 0.98+
first | QUANTITY | 0.98+
Siri | TITLE | 0.98+
Cube | PERSON | 0.98+
Databricks | ORGANIZATION | 0.98+
Snowflake | ORGANIZATION | 0.97+
Supercloud | ORGANIZATION | 0.97+
both | QUANTITY | 0.97+
one unit | QUANTITY | 0.97+
Snowflake | TITLE | 0.96+
SRE | TITLE | 0.95+
millions of dollars | QUANTITY | 0.94+
first class | QUANTITY | 0.94+
hundreds of thousands of servers | QUANTITY | 0.92+
supercloud | ORGANIZATION | 0.92+
one point | QUANTITY | 0.92+
Supercloud 2 | TITLE | 0.89+
ChatGPT | ORGANIZATION | 0.81+
half | QUANTITY | 0.81+
Data Mesh the Next Killer App | TITLE | 0.78+
supercloud | TITLE | 0.75+
a ton | QUANTITY | 0.73+
Supercloud 2 | ORGANIZATION | 0.72+
SiliconANGLE | ORGANIZATION | 0.7+
DevOps | TITLE | 0.66+
Snowflake | EVENT | 0.59+
S3 | TITLE | 0.54+
last | DATE | 0.54+
supercloud | EVENT | 0.48+
Kubernetes | TITLE | 0.47+

Breaking Analysis: ChatGPT Won't Give OpenAI First Mover Advantage


 

>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> OpenAI The company, and ChatGPT have taken the world by storm. Microsoft reportedly is investing an additional 10 billion dollars into the company. But in our view, while the hype around ChatGPT is justified, we don't believe OpenAI will lock up the market with its first mover advantage. Rather, we believe that success in this market will be directly proportional to the quality and quantity of data that a technology company has at its disposal, and the compute power that it could deploy to run its system. Hello and welcome to this week's Wikibon CUBE insights, powered by ETR. In this Breaking Analysis, we unpack the excitement around ChatGPT, and debate the premise that the company's early entry into the space may not confer winner take all advantage to OpenAI. And to do so, we welcome CUBE collaborator, alum, Sarbjeet Johal, (chuckles) and John Furrier, co-host of the Cube. Great to see you Sarbjeet, John. Really appreciate you guys coming to the program. >> Great to be on. >> Okay, so what is ChatGPT? Well, actually we asked ChatGPT, what is ChatGPT? So here's what it said. ChatGPT is a state-of-the-art language model developed by OpenAI that can generate human-like text. It could be fine tuned for a variety of language tasks, such as conversation, summarization, and language translation. So I asked it, give it to me in 50 words or less. How did it do? Anything to add? >> Yeah, think it did good. It's large language model, like previous models, but it started applying the transformers sort of mechanism to focus on what prompt you have given it to itself. And then also the what answer it gave you in the first, sort of, one sentence or two sentences, and then introspect on itself, like what I have already said to you. And so just work on that. So it it's self sort of focus if you will. It does, the transformers help the large language models to do that. >> So to your point, it's a large language model, and GPT stands for generative pre-trained transformer. >> And if you put the definition back up there again, if you put it back up on the screen, let's see it back up. Okay, it actually missed the large, word large. So one of the problems with ChatGPT, it's not always accurate. It's actually a large language model, and it says state of the art language model. And if you look at Google, Google has dominated AI for many times and they're well known as being the best at this. And apparently Google has their own large language model, LLM, in play and have been holding it back to release because of backlash on the accuracy. Like just in that example you showed is a great point. They got almost right, but they missed the key word. >> You know what's funny about that John, is I had previously asked it in my prompt to give me it in less than a hundred words, and it was too long, I said I was too long for Breaking Analysis, and there it went into the fact that it's a large language model. So it largely, it gave me a really different answer the, for both times. So, but it's still pretty amazing for those of you who haven't played with it yet. And one of the best examples that I saw was Ben Charrington from This Week In ML AI podcast. And I stumbled on this thanks to Brian Gracely, who was listening to one of his Cloudcasts. 
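For readers who want the transformer mechanism Sarbjeet is gesturing at in concrete form, here is a toy scaled dot-product self-attention sketch in NumPy. It is only the core idea, not GPT; real models stack many such layers with learned weights across billions of parameters.

```python
# Toy scaled dot-product self-attention: each token's output is a weighted
# mix of every token in the prompt, with weights derived from the prompt itself.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how much each token attends to each other token
    return softmax(scores) @ V

rng = np.random.default_rng(0)
tokens, d = 5, 8                               # a 5-token "prompt", 8-dim embeddings
X = rng.normal(size=(tokens, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (5, 8): one contextualized vector per token
```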
Basically what Ben did is he took, he prompted ChatGPT to interview ChatGPT, and he simply gave the system the prompts, and then he ran the questions and answers into this avatar builder and sped it up 2X so it didn't sound like a machine. And voila, it was amazing. So John is ChatGPT going to take over as a cube host? >> Well, I was thinking, we get the questions in advance sometimes from PR people. We should actually just plug it in ChatGPT, add it to our notes, and saying, "Is this good enough for you? Let's ask the real question." So I think, you know, I think there's a lot of heavy lifting that gets done. I think the ChatGPT is a phenomenal revolution. I think it highlights the use case. Like that example we showed earlier. It gets most of it right. So it's directionally correct and it feels like it's an answer, but it's not a hundred percent accurate. And I think that's where people are seeing value in it. Writing marketing, copy, brainstorming, guest list, gift list for somebody. Write me some lyrics to a song. Give me a thesis about healthcare policy in the United States. It'll do a bang up job, and then you got to go in and you can massage it. So we're going to do three quarters of the work. That's why plagiarism and schools are kind of freaking out. And that's why Microsoft put 10 billion in, because why wouldn't this be a feature of Word, or the OS to help it do stuff on behalf of the user. So linguistically it's a beautiful thing. You can input a string and get a good answer. It's not a search result. >> And we're going to get your take on on Microsoft and, but it kind of levels the playing- but ChatGPT writes better than I do, Sarbjeet, and I know you have some good examples too. You mentioned the Reed Hastings example. >> Yeah, I was listening to Reed Hastings fireside chat with ChatGPT, and the answers were coming as sort of voice, in the voice format. And it was amazing what, he was having very sort of philosophy kind of talk with the ChatGPT, the longer sentences, like he was going on, like, just like we are talking, he was talking for like almost two minutes and then ChatGPT was answering. It was not one sentence question, and then a lot of answers from ChatGPT and yeah, you're right. I, this is our ability. I've been thinking deep about this since yesterday, we talked about, like, we want to do this segment. The data is fed into the data model. It can be the current data as well, but I think that, like, models like ChatGPT, other companies will have those too. They can, they're democratizing the intelligence, but they're not creating intelligence yet, definitely yet I can say that. They will give you all the finite answers. Like, okay, how do you do this for loop in Java, versus, you know, C sharp, and as a programmer you can do that, in, but they can't tell you that, how to write a new algorithm or write a new search algorithm for you. They cannot create a secretive code for you to- >> Not yet. >> Have competitive advantage. >> Not yet, not yet. >> but you- >> Can Google do that today? >> No one really can. The reasoning side of the data is, we talked about at our Supercloud event, with Zhamak Dehghani who's was CEO of, now of Nextdata. This next wave of data intelligence is going to come from entrepreneurs that are probably cross discipline, computer science and some other discipline. But they're going to be new things, for example, data, metadata, and data. It's hard to do reasoning like a human being, so that needs more data to train itself. 
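The input-a-string, get-an-answer workflow John describes above is what the hosted API exposes to developers. A minimal sketch follows, calling OpenAI's chat completions endpoint over plain HTTP; the model name and prompt are placeholders, and you would need your own API key.

```python
# Sketch: send a prompt string, get generated text back. Requires an API key
# in the OPENAI_API_KEY environment variable and the `requests` package.
import os
import requests

def ask(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# e.g. ask("Write a 50-word summary of data mesh for a CIO audience.")
```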
So I think the first gen of this training module for the large language model they have is a corpus of text. A lot of that is blog posts, but the facts are wrong and sometimes out of context, because that contextual reasoning takes time, it takes intelligence. So machines need to become intelligent, and so therefore they need to be trained. So you're going to start to see, I think, a lot of acceleration on training the data sets. And again, it's only as good as the data you can get. And again, proprietary data sets will be a huge winner. Anyone who's got a large corpus of content, proprietary content like theCUBE or SiliconANGLE as a publisher, will benefit from this. Large FinTech companies, anyone with large proprietary data will probably be a big winner on this generative AI wave, because it will just eat that up and turn that back into something better. So I think there's going to be a lot of interesting things to look at here. And certainly productivity's going to be off the charts, and the internet is going to get swarmed with vanilla content. So if you're in the content business, and you're an original content producer of any kind, you're going to be not vanilla, so you're going to be better. So I think there's so much at play, Dave (indistinct). >> I think the playing field has been raised, so we- >> Raised and leveled? >> Yeah, and leveled to a certain extent. So it's not like only a few people, as consumers of AI, will have an advantage and others cannot have that advantage. It will be democratized. That's, I'm sure about that. But if you take the example of the calculator, when the calculator came in, a lot of people were like, "Oh, people can't do math anymore because the calculator is there," right? So it's a similar sort of moment, just like a calculator for the next level. But, again- >> I see it more like open source, Sarbjeet, because like if you think about what ChatGPT's doing, you do a query and it comes from somewhere; the value of a post from ChatGPT is just a reuse of AI. The original content will still come from a human. So if I lay out a paragraph from ChatGPT that did some heavy lifting on some facts, I check the facts, it saves me about maybe- >> Yeah, it's productive. >> An hour of writing, and then I write a killer two, three sentences of, like, sharp original thinking or critical analysis. I then took that body of work, open source content, and then laid something on top of it. >> And Sarbjeet's example is a good one, because like with the calculator, kids don't do math as well anymore, the slide rule, remember we had slide rules as kids, remember we first started using Waze, you know, we were this minority and you had an advantage over other drivers. Now Waze is like, you know, social traffic, you know, navigation, everybody had it, you know- >> All the back roads are crowded. >> They're crowded. (group laughs) Exactly. All right, let's move on. What about this notion that futurist Roy Amara put forth, really Amara's Law, that we're showing here; the law is, you know, "We tend to overestimate the effect of technology in the short run and underestimate it in the long run." Is that the case, do you think, with ChatGPT? What do you think Sarbjeet? >> I think that's true actually. There's a lot of, >> We don't debate this. >> There's a lot of awe, like when people see the results from ChatGPT, they say what, what the heck? Like, it can do this?
But then if you use it more and more and more, and I ask the set of similar question, not the same question, and it gives you like same answer. It's like reading from the same bucket of text in, the interior read (indistinct) where the ChatGPT, you will see that in some couple of segments. It's very, it sounds so boring that the ChatGPT is coming out the same two sentences every time. So it is kind of good, but it's not as good as people think it is right now. But we will have, go through this, you know, hype sort of cycle and get realistic with it. And then in the long term, I think it's a great thing in the short term, it's not something which will (indistinct) >> What's your counter point? You're saying it's not. >> I, no I think the question was, it's hyped up in the short term and not it's underestimated long term. That's what I think what he said, quote. >> Yes, yeah. That's what he said. >> Okay, I think that's wrong with this, because this is a unique, ChatGPT is a unique kind of impact and it's very generational. People have been comparing it, I have been comparing to the internet, like the web, web browser Mosaic and Netscape, right, Navigator. I mean, I clearly still remember the days seeing Navigator for the first time, wow. And there weren't not many sites you could go to, everyone typed in, you know, cars.com, you know. >> That (indistinct) wasn't that overestimated, the overhyped at the beginning and underestimated. >> No, it was, it was underestimated long run, people thought. >> But that Amara's law. >> That's what is. >> No, they said overestimated? >> Overestimated near term underestimated- overhyped near term, underestimated long term. I got, right I mean? >> Well, I, yeah okay, so I would then agree, okay then- >> We were off the charts about the internet in the early days, and it actually exceeded our expectations. >> Well there were people who were, like, poo-pooing it early on. So when the browser came out, people were like, "Oh, the web's a toy for kids." I mean, in 1995 the web was a joke, right? So '96, you had online populations growing, so you had structural changes going on around the browser, internet population. And then that replaced other things, direct mail, other business activities that were once analog then went to the web, kind of read only as you, as we always talk about. So I think that's a moment where the hype long term, the smart money, and the smart industry experts all get the long term. And in this case, there's more poo-pooing in the short term. "Ah, it's not a big deal, it's just AI." I've heard many people poo-pooing ChatGPT, and a lot of smart people saying, "No this is next gen, this is different and it's only going to get better." So I think people are estimating a big long game on this one. >> So you're saying it's bifurcated. There's those who say- >> Yes. >> Okay, all right, let's get to the heart of the premise, and possibly the debate for today's episode. Will OpenAI's early entry into the market confer sustainable competitive advantage for the company. And if you look at the history of tech, the technology industry, it's kind of littered with first mover failures. Altair, IBM, Tandy, Commodore, they and Apple even, they were really early in the PC game. They took a backseat to Dell who came in the scene years later with a better business model. Netscape, you were just talking about, was all the rage in Silicon Valley, with the first browser, drove up all the housing prices out here. 
AltaVista was the first search engine to really, you know, index full text. >> Owned by Dell, I mean DEC. >> Owned by Digital. >> Yeah, Digital Equipment >> Compaq bought it. And of course as an aside, Digital, they wanted to showcase their hardware, right? Their super computer stuff. And then so Friendster and MySpace, they came before Facebook. The iPhone certainly wasn't the first mobile device. So lots of failed examples, but there are some recent successes like AWS and cloud. >> You could say smartphone. So I mean. >> Well I know, and you can, we can parse this so we'll debate it. Now Twitter, you could argue, had first mover advantage. You kind of gave me that one John. Bitcoin and crypto clearly had first mover advantage, and sustaining that. Guys, will OpenAI make it to the list on the right with ChatGPT, what do you think? >> I think categorically as a company, it probably won't, but as a category, I think what they're doing will, so OpenAI as a company, they get funding, there's power dynamics involved. Microsoft put a billion dollars in early on, then they just pony it up. Now they're reporting 10 billion more. So, like, if the browsers, Microsoft had competitive advantage over Netscape, and used monopoly power, and convicted by the Department of Justice for killing Netscape with their monopoly, Netscape should have had won that battle, but Microsoft killed it. In this case, Microsoft's not killing it, they're buying into it. So I think the embrace extend Microsoft power here makes OpenAI vulnerable for that one vendor solution. So the AI as a company might not make the list, but the category of what this is, large language model AI, is probably will be on the right hand side. >> Okay, we're going to come back to the government intervention and maybe do some comparisons, but what are your thoughts on this premise here? That, it will basically set- put forth the premise that it, that ChatGPT, its early entry into the market will not confer competitive advantage to >> For OpenAI. >> To Open- Yeah, do you agree with that? >> I agree with that actually. It, because Google has been at it, and they have been holding back, as John said because of the scrutiny from the Fed, right, so- >> And privacy too. >> And the privacy and the accuracy as well. But I think Sam Altman and the company on those guys, right? They have put this in a hasty way out there, you know, because it makes mistakes, and there are a lot of questions around the, sort of, where the content is coming from. You saw that as your example, it just stole the content, and without your permission, you know? >> Yeah. So as quick this aside- >> And it codes on people's behalf and the, those codes are wrong. So there's a lot of, sort of, false information it's putting out there. So it's a very vulnerable thing to do what Sam Altman- >> So even though it'll get better, others will compete. >> So look, just side note, a term which Reid Hoffman used a little bit. Like he said, it's experimental launch, like, you know, it's- >> It's pretty damn good. >> It is clever because according to Sam- >> It's more than clever. It's good. >> It's awesome, if you haven't used it. I mean you write- you read what it writes and you go, "This thing writes so well, it writes so much better than you." >> The human emotion drives that too. I think that's a big thing. But- >> I Want to add one more- >> Make your last point. >> Last one. Okay. So, but he's still holding back. He's conducting quite a few interviews. 
If you want to get the gist of it, there's an interview with StrictlyVC interview from yesterday with Sam Altman. Listen to that one it's an eye opening what they want- where they want to take it. But my last one I want to make it on this point is that Satya Nadella yesterday did an interview with Wall Street Journal. I think he was doing- >> You were not impressed. >> I was not impressed because he was pushing it too much. So Sam Altman's holding back so there's less backlash. >> Got 10 billion reasons to push. >> I think he's almost- >> Microsoft just laid off 10000 people. Hey ChatGPT, find me a job. You know like. (group laughs) >> He's overselling it to an extent that I think it will backfire on Microsoft. And he's over promising a lot of stuff right now, I think. I don't know why he's very jittery about all these things. And he did the same thing during Ignite as well. So he said, "Oh, this AI will write code for you and this and that." Like you called him out- >> The hyperbole- >> During your- >> from Satya Nadella, he's got a lot of hyperbole. (group talks over each other) >> All right, Let's, go ahead. >> Well, can I weigh in on the whole- >> Yeah, sure. >> Microsoft thing on whether OpenAI, here's the take on this. I think it's more like the browser moment to me, because I could relate to that experience with ChatG, personally, emotionally, when I saw that, and I remember vividly- >> You mean that aha moment (indistinct). >> Like this is obviously the future. Anything else in the old world is dead, website's going to be everywhere. It was just instant dot connection for me. And a lot of other smart people who saw this. Lot of people by the way, didn't see it. Someone said the web's a toy. At the company I was worked for at the time, Hewlett Packard, they like, they could have been in, they had invented HTML, and so like all this stuff was, like, they just passed, the web was just being passed over. But at that time, the browser got better, more websites came on board. So the structural advantage there was online web usage was growing, online user population. So that was growing exponentially with the rise of the Netscape browser. So OpenAI could stay on the right side of your list as durable, if they leverage the category that they're creating, can get the scale. And if they can get the scale, just like Twitter, that failed so many times that they still hung around. So it was a product that was always successful, right? So I mean, it should have- >> You're right, it was terrible, we kept coming back. >> The fail whale, but it still grew. So OpenAI has that moment. They could do it if Microsoft doesn't meddle too much with too much power as a vendor. They could be the Netscape Navigator, without the anti-competitive behavior of somebody else. So to me, they have the pole position. So they have an opportunity. So if not, if they don't execute, then there's opportunity. There's not a lot of barriers to entry, vis-a-vis say the CapEx of say a cloud company like AWS. You can't replicate that, Many have tried, but I think you can replicate OpenAI. >> And we're going to talk about that. Okay, so real quick, I want to bring in some ETR data. This isn't an ETR heavy segment, only because this so new, you know, they haven't coverage yet, but they do cover AI. So basically what we're seeing here is a slide on the vertical axis's net score, which is a measure of spending momentum, and in the horizontal axis's is presence in the dataset. Think of it as, like, market presence. 
And in the insert right there, you can see how the dots are plotted, the two columns. And so, but the key point here that we want to make, there's a bunch of companies on the left, is he like, you know, DataRobot and C3 AI and some others, but the big whales, Google, AWS, Microsoft, are really dominant in this market. So that's really the key takeaway that, can we- >> I notice IBM is way low. >> Yeah, IBM's low, and actually bring that back up and you, but then you see Oracle who actually is injecting. So I guess that's the other point is, you're not necessarily going to go buy AI, and you know, build your own AI, you're going to, it's going to be there and, it, Salesforce is going to embed it into its platform, the SaaS companies, and you're going to purchase AI. You're not necessarily going to build it. But some companies obviously are. >> I mean to quote IBM's general manager Rob Thomas, "You can't have AI with IA." information architecture and David Flynn- >> You can't Have AI without IA >> without, you can't have AI without IA. You can't have, if you have an Information Architecture, you then can power AI. Yesterday David Flynn, with Hammersmith, was on our Supercloud. He was pointing out that the relationship of storage, where you store things, also impacts the data and stressablity, and Zhamak from Nextdata, she was pointing out that same thing. So the data problem factors into all this too, Dave. >> So you got the big cloud and internet giants, they're all poised to go after this opportunity. Microsoft is investing up to 10 billion. Google's code red, which was, you know, the headline in the New York Times. Of course Apple is there and several alternatives in the market today. Guys like Chinchilla, Bloom, and there's a company Jasper and several others, and then Lena Khan looms large and the government's around the world, EU, US, China, all taking notice before the market really is coalesced around a single player. You know, John, you mentioned Netscape, they kind of really, the US government was way late to that game. It was kind of game over. And Netscape, I remember Barksdale was like, "Eh, we're going to be selling software in the enterprise anyway." and then, pshew, the company just dissipated. So, but it looks like the US government, especially with Lena Khan, they're changing the definition of antitrust and what the cause is to go after people, and they're really much more aggressive. It's only what, two years ago that (indistinct). >> Yeah, the problem I have with the federal oversight is this, they're always like late to the game, and they're slow to catch up. So in other words, they're working on stuff that should have been solved a year and a half, two years ago around some of the social networks hiding behind some of the rules around open web back in the days, and I think- >> But they're like 15 years late to that. >> Yeah, and now they got this new thing on top of it. So like, I just worry about them getting their fingers. >> But there's only two years, you know, OpenAI. >> No, but the thing (indistinct). >> No, they're still fighting other battles. But the problem with government is that they're going to label Big Tech as like a evil thing like Pharma, it's like smoke- >> You know Lena Khan wants to kill Big Tech, there's no question. >> So I think Big Tech is getting a very seriously bad rap. And I think anything that the government does that shades darkness on tech, is politically motivated in most cases. 
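For readers picturing the chart being described, here is a rough sketch of how a net-score versus market-presence scatter is typically drawn. The vendor names come from the discussion, but the coordinates below are made up for illustration and are not ETR data.

```python
# Illustrative only: plot spending momentum (net score) against market
# presence for a few AI vendors. The numbers are placeholders, not ETR data.
import matplotlib.pyplot as plt

vendors = {
    # name: (presence in dataset, net score) -- made-up illustrative values
    "Microsoft": (0.60, 0.55),
    "AWS": (0.55, 0.50),
    "Google": (0.45, 0.48),
    "Oracle": (0.20, 0.30),
    "IBM": (0.25, 0.12),
    "DataRobot": (0.05, 0.35),
    "C3 AI": (0.04, 0.28),
}

fig, ax = plt.subplots()
for name, (x, y) in vendors.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(4, 4))
ax.axhline(0.40, linestyle="--", linewidth=0.8)   # often treated as an elevated net-score line
ax.set_xlabel("Presence in dataset (pervasiveness)")
ax.set_ylabel("Net score (spending momentum)")
ax.set_title("AI vendors: momentum vs. presence (illustrative)")
plt.show()
```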
You can almost look at everything, and my 80 20 rule is in play here. 80% of the government activity around tech is bullshit, it's politically motivated, and the 20% is probably relevant, but off the mark and not organized. >> Well market forces have always been the determining factor of success. The governments, you know, have been pretty much failed. I mean you look at IBM's antitrust, that, what did that do? The market ultimately beat them. You look at Microsoft back in the day, right? Windows 95 was peaking, the government came in. But you know, like you said, they missed the web, right, and >> so they were hanging on- >> There's nobody in government >> to Windows. >> that actually knows- >> And so, you, I think you're right. It's market forces that are going to determine this. But Sarbjeet, what do you make of Microsoft's big bet here, you weren't impressed with with Nadella. How do you think, where are they going to apply it? Is this going to be a Hail Mary for Bing, or is it going to be applied elsewhere? What do you think. >> They are saying that they will, sort of, weave this into their products, office products, productivity and also to write code as well, developer productivity as well. That's a big play for them. But coming back to your antitrust sort of comments, right? I believe the, your comment was like, oh, fed was late 10 years or 15 years earlier, but now they're two years. But things are moving very fast now as compared to they used to move. >> So two years is like 10 Years. >> Yeah, two years is like 10 years. Just want to make that point. (Dave laughs) This thing is going like wildfire. Any new tech which comes in that I think they're going against distribution channels. Lina Khan has commented time and again that the marketplace model is that she wants to have some grip on. Cloud marketplaces are a kind of monopolistic kind of way. >> I don't, I don't see this, I don't see a Chat AI. >> You told me it's not Bing, you had an interesting comment. >> No, no. First of all, this is great from Microsoft. If you're Microsoft- >> Why? >> Because Microsoft doesn't have the AI chops that Google has, right? Google is got so much core competency on how they run their search, how they run their backends, their cloud, even though they don't get a lot of cloud market share in the enterprise, they got a kick ass cloud cause they needed one. >> Totally. >> They've invented SRE. I mean Google's development and engineering chops are off the scales, right? Amazon's got some good chops, but Google's got like 10 times more chops than AWS in my opinion. Cloud's a whole different story. Microsoft gets AI, they get a playbook, they get a product they can render into, the not only Bing, productivity software, helping people write papers, PowerPoint, also don't forget the cloud AI can super help. We had this conversation on our Supercloud event, where AI's going to do a lot of the heavy lifting around understanding observability and managing service meshes, to managing microservices, to turning on and off applications, and or maybe writing code in real time. So there's a plethora of use cases for Microsoft to deploy this. combined with their R and D budgets, they can then turbocharge more research, build on it. So I think this gives them a car in the game, Google may have pole position with AI, but this puts Microsoft right in the game, and they already have a lot of stuff going on. But this just, I mean everything gets lifted up. Security, cloud, productivity suite, everything. 
>> What's under the hood at Google, and why aren't they talking about it? I mean, they've got to be freaked out about this. No? Or do they have kind of a magic bullet? >> I think they have the chops, definitely. Magic bullet, I don't know where they are as compared to the GPT-3 or GPT-4 models. But if you look at the online sort of activity and the videos put out there from Google folks, Google technology folks, those are the accounts you should look at if you're looking; they've talked about all the techniques that ChatGPT has used, they've been talking about them for a while as well. So it's not like it's a secret thing that you cannot replicate. As you said earlier, like in the beginning of this segment, anybody who has more data and the capacity to process that data, and Google has both, I think they will win this. >> Obviously living in Palo Alto where the Google founders are, and Google's headquarters next town over, we have- >> We're so close to them. We have inside information on some of the thinking, and that hasn't been reported by any outlet yet. And that is, from what I'm hearing from my sources, Google has it, they don't want to release it for many reasons. One is it might screw up their search monopoly; two, they're worried about the accuracy, 'cause Google will get sued. 'Cause a lot of people are jamming on this ChatGPT as, "Oh, it does everything for me," when it's clearly not a hundred percent accurate all the time. >> So Lina Khan is looming, and so Google's like, be careful. >> Yeah, so Google's just like, this is the third, could be a third rail. >> But the first thing you said is a concern. >> Well, no. >> The disruptive (indistinct) >> What they will do is do a Waymo kind of thing, where they spin out a separate company. >> They're doing that. >> The discussions are happening, they're going to spin out the separate company and put it over there, and say, "This is AI, we've got search over there, don't touch that search, 'cause that's where all the revenue is." (chuckles) >> So, okay, so that's how they deal with the Clay Christensen dilemma. What's the business model here? I mean, it's not advertising, right? Is it to charge you for a query? How do you make money at this? >> It's a good question. I mean, my thinking is, first of all, it's cool to type stuff in and see a paper get written, or write a blog post, or gimme a marketing slogan for this or that, or write some code. I think the API side of the business will be critical. And I think Howie Xu, I know you're going to reference some of his comments yesterday on Supercloud, I think this brings a whole 'nother user interface into technology consumption. I think the business model, not yet clear, but it will probably be some sort of either API and developer environment or just a straight-up free consumer product, with some sort of freemium backend thing for business. >> And he was saying too, natural language is the way in which you're going to interact with these systems. >> I think it's APIs, it's APIs, APIs, APIs, because these people who are cooking up these models, it takes a lot of compute power to train these and for inference as well. Somebody did the analysis on how many cents a Google search costs Google, and how many cents a ChatGPT query costs. It's, you know, 100x or something on that. You can take a look at that. >> 100x on which side? >> You're saying two orders of magnitude more expensive for ChatGPT- >> Much more, yeah. >> Than for Google.
>> It's very expensive. >> So Google's got the data, they've got the infrastructure, and they've got, you're saying they've got the cost (indistinct) >> No, actually it's a simple query as well, but they are trying to put together the answers, and they're going through a lot more data versus data that's already indexed, you know. >> Let me clarify, you're saying that Google's version of ChatGPT is more efficient? >> No, I'm saying Google search results. >> Ah, search results. >> What we're used to today, but cheaper. >> But does that confer advantage to Google's large language (indistinct)? >> It will, because there were deep science (indistinct). >> Google, I don't think Google search is doing a large language model on their search, it's keyword search. You know, what's the weather in Santa Cruz? Or what's the weather going to be? Or, you know, how do I find this? Now they have done a smart job of doing some things with those queries, autocomplete, redirect navigation. But it's not entity-based. It's not like, "Hey, what's Dave Vellante thinking this week in Breaking Analysis?" ChatGPT might get that, because it'll get your Breaking Analysis, it'll synthesize it. There'll be some, maybe some clips. It'll be like, you know, I mean. >> Well, I've got to tell you, I asked ChatGPT to, like, I said, I'm going to enter a transcript of a discussion I had with Nir Zuk, the CTO of Palo Alto Networks, and I want you to write a 750-word blog. I never input the transcript. It wrote a 750-word blog. It attributed quotes to him, and it just pulled a bunch of stuff and said, okay, here it is. It talked about Supercloud, it defined Supercloud. >> It's made, it makes you- >> Wow. But it was a big lie. It was fraudulent, but still, it blew me away. >> Again, vanilla content and inaccurate content. So we are going to see a surge of misinformation on steroids, but I call it the vanilla content. Wow, that's just so boring, (indistinct). >> There's so many dangers. >> Make your point, 'cause we're almost out of time. >> Okay, so the consumption, like how do you consume this thing? As humans, we are consuming it and we are, like, getting nicely, surprisingly shocked, you know, wow, that's cool. It's going to increase productivity and all that stuff, right? And on the danger side as well, the bad actors can take hold of it and create fake content, and we have the fake sort of intelligence, if you go out there. So that's one thing. The second thing is, we as humans are consuming this as language. Like, we read it, we listen to it, whatever format we consume it in, but the ultimate usage of that will be when the machines can take that output from the likes of ChatGPT, and do actions based on that. The robots can work, the robot can paint your house, we were talking about, right? Right now we can't do that. >> Data apps. >> So the data has to be ingested by the machines. It has to be digestible by the machines. And the machines cannot digest unorganized data right now; we will get better on the ingestion side as well. So we are getting better. >> Data, reasoning, insights, and action. >> I like that, paint my house. >> So, okay- >> By the way, that means drones will come in, spray painting your house. >> Hey, it wasn't too long ago that robots couldn't climb stairs, as I like to point out. Okay, and of course it's no surprise the venture capitalists are lining up to eat at the trough, as I'd like to say.
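[Editor's note: to ground the "API side of the business" point and Dave's transcript-to-blog experiment above, here is a minimal sketch of the kind of call a developer might have made against a hosted completion endpoint in early 2023. The endpoint and model name reflect OpenAI's public completions API of that period; the API key, transcript text, and prompt wording are placeholders for illustration, not anything actually used on the show.]

```python
# Minimal sketch: turning an interview transcript into a draft blog post by calling
# a hosted large language model completion endpoint (OpenAI /v1/completions, early 2023).
# Assumptions: OPENAI_API_KEY is set in the environment; "text-davinci-003" is the model;
# the transcript string below is a stand-in for real interview text.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # hypothetical key supplied by the caller

transcript = """Dave: Tell me about Supercloud.
Guest: It's an abstraction layer that spans multiple clouds..."""  # placeholder text

prompt = (
    "Write a 750-word blog post summarizing the following interview transcript. "
    "Only use quotes that actually appear in the transcript.\n\n" + transcript
)

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "text-davinci-003",  # completion-style model available in early 2023
        "prompt": prompt,
        "max_tokens": 1100,           # roughly 750 words of output
        "temperature": 0.4,           # lower temperature to reduce invented detail
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"].strip())
```

Note that even an explicit instruction such as "only use quotes that actually appear in the transcript" does not guarantee fidelity; as Dave describes, the model can still attribute invented quotes, which is exactly the accuracy risk the panel is flagging.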
Let's hear, you'd referenced this earlier, John, let's hear what AI expert Howie Xu said at the Supercloud event about what it takes to clone ChatGPT. Please, play the clip. >> So one of the VCs actually asked me the other day, right? "Hey, how much money do I need to spend, invest, to get, you know, another shot at the OpenAI sort of level?" You know, I did a (indistinct) >> Line up. >> A hundred million dollars is the order of magnitude that I came up with, right? You know, not a billion, not 10 million, right? So a hundred- >> Guys, a hundred million dollars, that's an astoundingly low figure. What do you make of it? >> I was in an interview with, I was interviewing, I think he said a hundred million or so, but in the hundreds of millions, not a billion, right? >> You were trying to get him up, you were like, "Hundreds of millions." >> Well, I think, I- >> He's like, eh, not 10, not a billion. >> Well, first of all, Howie Xu's an expert in machine learning. He's at Zscaler, he's a machine learning AI guy. But he comes from VMware, he's got a technology pedigree that's really off the chart. Great friend of theCUBE and kind of like a CUBE analyst for us. And he's smart. He's right. I think the barriers to entry from a dollar standpoint are lower than, say, the CapEx required to compete with AWS. Clearly lower than the CapEx spending to build all the tech to run a cloud. >> And you don't need a huge sales force. >> And in some cases apps too, it's the same thing. But I think it's not that hard. >> But am I right about that? You don't need a huge sales force either. It's, what, you know- >> If the product's good, it will sell, this is a new era. The better mousetrap will win. This is the new economics in software, right? So- >> Because you look at the amount of money Lacework, and Snyk, Snowflake, Databricks, look at the amount of money they've raised. I mean, it's like a billion dollars or more before they get to IPO. 'Cause they need promotion, they need go-to-market. You don't need (indistinct) >> OpenAI's been working on this for five-plus years; it wasn't born yesterday. Took a lot of years to get going. And Sam is depositioning all the success, because he's trying to manage expectations, to your point, Sarbjeet, earlier. It's like, yeah, he's trying to say, "Whoa, whoa, settle down everybody, (Dave laughs) it's not that great," because he doesn't want to fall into that, you know, hero-and-then-get-taken-down thing, so. >> It may take 100 million or 150 or 200 million to train the model. But for the inference machine, it will take a lot more, I believe. >> Give it, so imagine- >> Because- >> Go ahead, sorry. >> Go ahead. Because it consumes a lot more compute cycles and a certain level of storage and everything, right, which they already have. So I think the compute is different. To train the model is a different cost. But to run the business is different, because I think 100 million can go into just fighting the Fed. >> Well, there's a flywheel too. >> Oh, that's (indistinct) >> (indistinct) >> We are running the business, right? >> It's an interesting number, but you've also got to put some context around it. So here, a hundred million, you spend it, you get there, but you've got to factor in the fact that the way companies win these days is critical mass scale, hitting a flywheel. If they can keep that flywheel of the value that they've got going on and get better, you can almost imagine a marketplace where, hey, we have proprietary data, we're SiliconANGLE and theCUBE.
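[Editor's note: the dollar figures being debated here can be sanity-checked with a standard rule of thumb from the scaling-law literature: training compute is roughly 6 × parameters × training tokens in FLOPs, and generating one token costs roughly 2 × parameters FLOPs. The sketch below applies those approximations to a GPT-3-scale model with assumed accelerator throughput and rental prices; every input is an assumption chosen for illustration, not a figure from the panel or from OpenAI.]

```python
# Back-of-envelope sketch of training and per-query inference cost for a GPT-3-scale
# model, using the common approximations: train FLOPs ~ 6*N*D, FLOPs per generated
# token ~ 2*N. All inputs below are assumptions for illustration only.

params = 175e9          # assumed model size (GPT-3 scale)
train_tokens = 300e9    # assumed number of training tokens
gpu_flops = 150e12      # assumed sustained throughput of one accelerator (FLOP/s)
gpu_hour_cost = 2.0     # assumed cloud rental price per accelerator-hour (USD)

# --- one training run ---
train_flops = 6 * params * train_tokens
gpu_hours = train_flops / (gpu_flops * 3600)
train_cost = gpu_hours * gpu_hour_cost
print(f"single training run: ~{gpu_hours:,.0f} GPU-hours, ~${train_cost/1e6:.1f}M of raw GPU time")

# --- one chat-style response ---
response_tokens = 500   # assumed length of a generated answer
infer_flops = 2 * params * response_tokens
seconds_of_gpu = infer_flops / gpu_flops
per_query = seconds_of_gpu / 3600 * gpu_hour_cost
print(f"one response: ~{per_query*100:.2f} cents of raw GPU time (before serving overhead)")
```

Under these assumptions a single training run works out to low-single-digit millions of dollars of raw GPU time, and one response to a small fraction of a cent of raw compute. The reason practitioners still quote all-in figures like a hundred million dollars, and worry more about inference than training, is everything this sketch leaves out: repeated and failed experimental runs, data acquisition and cleaning, reinforcement learning from human feedback, research salaries, and serving hundreds of millions of users at real-world utilization, which multiplies the per-query number many times over. That is consistent with the panel's point that running the business costs far more than one training run.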
We have proprietary content, CUBE videos, transcripts. Well, wouldn't it be great if someone in a marketplace could sell a module for us, right? We buy that, Amazon's thing and things like that. So if they can get a marketplace going where you can apply it to data sets that may be proprietary, you can start to see this become bigger. And so I think the key barrier to entry is going to be success. I'll give you an example, Reddit. Reddit is successful and it's hard to copy, not because of the software. >> They built the moat. >> Because you can take Reddit's open source software and try to compete. >> They built the moat with their community. >> Their community, their scale, their user expectation. Twitter, we referenced earlier, that thing should have gone under in the first two years, but there was such a great emotional product. People would tolerate the fail whale. And then, you know, well, that was a whole 'nother thing. >> Then a plane landed in (John laughs) the Hudson and it was over. >> I think verticals, a lot of verticals will build applications using these models, like for lawyers, for doctors, for scientists, for content creators, for- >> So you'll have many hundreds of millions of dollars of investments that are going to be seeping out. All right, we've got to wrap. If you had to put odds on it that OpenAI is going to be the leader, maybe not a winner-take-all leader, but, like, you look at Amazon and cloud, they're not winner take all, these aren't necessarily winner-take-all markets. It's not necessarily a zero-sum game, but let's call it winner take most. What odds would you give that OpenAI 10 years from now will be in that position? >> If I'm 0 to 10 kind of thing? >> Yeah, it's like a horse race, 3 to 1, 2 to 1, even money, 10 to 1, 50 to 1. >> Maybe 2 to 1. >> 2 to 1, that's pretty low odds. That's basically saying they're the favorite, they're the front runner. Would you agree with that? >> I'd say 4 to 1. >> Yeah, I was going to say I'm like a 5 to 1, 7 to 1 type of person, 'cause I'm a skeptic, with, you know, so much competition, but- >> I think they're definitely the leader. I mean, you've got to say, I mean. >> Oh, there's no question. There's no question about it. >> The question is, can they execute? >> They're not Friendster, is what you're saying. >> They're not Friendster, and they're more like Twitter and Reddit where they have momentum. If they can execute on the product side, and if they don't stumble on that, they will continue to have the lead. >> If they, say, stay neutral, as Sam has been saying, that, hey, Microsoft is one of our partners; if you look at their company model, how they have structured the company, then they're going to pay back the investors, Microsoft being the biggest one, up to a certain point, like by a certain number of years, they're going to pay back from all the money they make, and after that, they're going to give the money back to the public, to, I don't know who they give it to, like a non-profit or something. (indistinct) >> Okay, the odds are dropping. (group talks over each other) That's a good point, though. >> Actually, they might have done that to fend off the criticism of this. But it's really interesting to see the model they have adopted.
>> The wild card in all this, my last word on this, is that if there's a developer shift in how developers and data can come together again, we have conferences around the future of data, Supercloud and meshes versus, you know, how the data world, coding with data, how that evolves will also dictate, 'cause a wild card could be a shift in the landscape around how developers are using either machine learning or AI-like techniques to code into their apps, so. >> That's fantastic insight. I can't thank you enough for your time, on the heels of Supercloud 2, really appreciate it. All right, thanks to John and Sarbjeet for the outstanding conversation today. Special thanks to the Palo Alto studio team. My goodness, Anderson, this great backdrop. You guys got it all out here, I'm jealous. And Noah, really appreciate it. Chuck, Andrew Frick and Cameron, Andrew Frick switching, Cameron on the video lake, great job. And Alex Myerson, he's on production, manages the podcast for us, Ken Schiffman as well. Kristen Martin and Cheryl Knight help get the word out on social media and our newsletters. Rob Hof is our editor-in-chief over at SiliconANGLE, does some great editing, thanks to all. Remember, all these episodes are available as podcasts. All you've got to do is search Breaking Analysis podcast, wherever you listen. Published each week on wikibon.com and siliconangle.com. Want to get in touch? Email me directly, david.vellante@siliconangle.com or DM me at dvellante, or comment on our LinkedIn post. And by all means, check out etr.ai. They've got really great survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching. We'll see you next time on Breaking Analysis. (electronic music)

Published Date : Jan 20 2023

SUMMARY :

Dave Vellante, John Furrier, and Sarbjeet Johal dig into the ChatGPT moment: why the tool has taken the world by storm, whether it is another browser- or iPhone-style inflection point or an overhyped demo, and what the ETR spending data says about Google, AWS, and Microsoft dominating the machine learning and AI market while most enterprises consume AI embedded in platforms rather than build it. The panel weighs Microsoft's multibillion-dollar OpenAI bet against Google's unreleased capabilities and its search-monopoly dilemma, argues the business model will likely be API- and freemium-centric, and compares the per-query cost of a large language model to conventional search. They debate Howie Xu's estimate that roughly a hundred million dollars buys another shot at an OpenAI-class model, note that inference, go-to-market, and regulatory costs loom larger over time, and close by handicapping the odds that OpenAI keeps its lead.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JohnPERSON

0.99+

SarbjeetPERSON

0.99+

Brian GracelyPERSON

0.99+

Lina KhanPERSON

0.99+

Dave VellantePERSON

0.99+

IBMORGANIZATION

0.99+

Reid HoffmanPERSON

0.99+

Alex MyersonPERSON

0.99+

Lena KhanPERSON

0.99+

Sam AltmanPERSON

0.99+

AppleORGANIZATION

0.99+

AWSORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

Rob ThomasPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Ken SchiffmanPERSON

0.99+

GoogleORGANIZATION

0.99+

David FlynnPERSON

0.99+

SamPERSON

0.99+

NoahPERSON

0.99+

Ray AmaraPERSON

0.99+

10 billionQUANTITY

0.99+

150QUANTITY

0.99+

Rob HofPERSON

0.99+

ChuckPERSON

0.99+

Palo AltoLOCATION

0.99+

Howie XuPERSON

0.99+

AndersonPERSON

0.99+

Cheryl KnightPERSON

0.99+

John FurrierPERSON

0.99+

Hewlett PackardORGANIZATION

0.99+

Santa CruzLOCATION

0.99+

1995DATE

0.99+

Lina KahnPERSON

0.99+

Zhamak DehghaniPERSON

0.99+

50 wordsQUANTITY

0.99+

Hundreds of millionsQUANTITY

0.99+

CompaqORGANIZATION

0.99+

10QUANTITY

0.99+

Kristen MartinPERSON

0.99+

two sentencesQUANTITY

0.99+

DavePERSON

0.99+

hundreds of millionsQUANTITY

0.99+

Satya NadellaPERSON

0.99+

CameronPERSON

0.99+

100 millionQUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

one sentenceQUANTITY

0.99+

10 millionQUANTITY

0.99+

yesterdayDATE

0.99+

Clay ChristensenPERSON

0.99+

Sarbjeet JohalPERSON

0.99+

NetscapeORGANIZATION

0.99+