Video length: 39:47
Today, organizations deploy more AI/ML workloads on AWS than on any other cloud platform. The cloud has removed many of the challenges associated with scalability, and it’s never been easier or more cost-effective to build custom, intelligent data models. In this session, learn how the C3 Platform leverages the full power of Intel Xeon Scalable processors on AWS to rapidly train, deploy, and operationalize AI/ML and big data applications like C3 Inventory Optimization and C3 Predictive Maintenance. In addition, a customer shares how these solutions helped achieve demonstrable value. This session is brought to you by AWS Partner Intel.
Hi, good morning, everybody, and thank you for being here on this fine Monday morning in Las Vegas. I hope you all had a great Thanksgiving. We're going to go ahead and get started with brief introductions. I am Vinay, with Intel's Artificial Intelligence Products Group, where I run the customer enabling team. With me is Deb. Hi, I'm Deb Banerjee from C3. I focus on areas of supply chain, particularly applying AI and machine learning techniques to solve supply chain use cases. Thanks, Deb. So today we'll first run through our legal notice, and the agenda is pretty simple: we'll start off with the work Intel has been doing with AWS for over a decade, we'll talk briefly about what Intel is doing in the area of AI, and then I'll turn it over to Deb to talk about a use case where C3 worked with a customer, together with AWS, on Intel infrastructure.

Okay, so Intel has a decade-plus collaboration with AWS, hardware and software, from the edge to the cloud. We share many common values with AWS. On the left side, we want to drive digital transformation, we have a shared customer passion, and we really want to drive getting high performance while lowering the costs of your infrastructure. On the other side are the joint priorities: we are working together on AI and ML, HPC and analytics, and all the way over to the IoT and edge computing area. And for the cloud, AWS is such an important partner for us that we build customized Intel Xeon Scalable processors to AWS specifications that aren't available on premises. These special CPUs power EC2 and fully managed services to deliver the best TCO for customers, and this is actually part of the use case that Deb will walk through today.

So here's the ML stack. I'm sure you're all familiar with the AWS machine learning stack, and Intel has worked closely with AWS in many parts of it. We've jointly
optimized the performance of SageMaker for large-scale deployment. We've also worked on the Deep Learning AMI that's now available for use on the C5 instance, and we have continuing collaboration on many AI framework optimizations on Intel hardware, plus of course the work we've done with AWS on Lambda, DeepLens, and Snowball Edge. On the right side of the screen you see AWS DeepLens; this is something we worked on to really showcase the "train in the cloud, infer at the edge" type of capability. We've also recently worked together to optimize TensorFlow on EC2 CPU instances, especially the C5 instances powered by the latest Intel Skylake-based Xeon Scalable processors. The work was done in close collaboration with the Artificial Intelligence Products Group, the group I work in at Intel, and resulted in more than a 7x improvement in CPU performance on some benchmarks over stock TensorFlow. We look forward to continuing the collaboration on innovative new technologies to drive AI forward and provide the best experience for customers.

Okay, so I often get asked by customers: hey, Intel, you're a hardware company, a silicon company, what exactly do you do in AI? Most people know that we build silicon that's used by our hardware partners, but that's about it. As you all know, though, AI is much more than the hardware itself. So what we do is provide software tools that enable you to extract the highest performance and capabilities out of the underlying hardware. And then of course there needs to be the ecosystem, the community that we work with, and that includes companies like C3 IoT, to take these tools and hardware and apply AI across vertical and horizontal markets. Over the next few slides I'll talk about each of these areas, starting with the hardware. As you can see, we have silicon technologies available all the way from the endpoint to the data center.
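The TensorFlow CPU optimizations described above are typically activated through the MKL-DNN build of TensorFlow plus a few threading environment variables. Here is a minimal sketch; the core count (36) is a placeholder for whatever instance you actually run on, and the values shown are common starting points rather than an official Intel recipe:

```python
import os

def mkl_env_settings(physical_cores: int) -> dict:
    """Common starting-point threading knobs for MKL-DNN builds of
    TensorFlow on Xeon-based instances (tune for your own workload)."""
    return {
        "KMP_BLOCKTIME": "1",  # ms a thread spins before sleeping
        "KMP_AFFINITY": "granularity=fine,verbose,compact,1,0",
        "OMP_NUM_THREADS": str(physical_cores),
    }

# Apply the settings before TensorFlow is imported.
settings = mkl_env_settings(physical_cores=36)  # e.g. a large C5 instance
os.environ.update(settings)

# In TensorFlow 1.x (current at the time of this talk) you would pair
# these with matching session options, e.g.:
#   tf.ConfigProto(intra_op_parallelism_threads=36,
#                  inter_op_parallelism_threads=2)
```

As the speakers note later in the Q&A, the Deep Learning AMI on C5 largely handles this for you; manual tuning mostly matters for custom builds.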
Again, I get asked by customers why we have so many different pieces of silicon, and the answer is simply that customers demanded it. AI is being built into more and more products, and we have different pieces of technology for the different capabilities customers are looking for. For instance, at the endpoint, IoT sensors are being used in security, home, retail, industrial, and many other verticals, and low power is crucial. So for inference applications like drones we have the Intel Movidius vision processing units; in these types of situations you have a technology that can give you inference for less than a watt of power. Then there are self-driving vehicles, where Intel Mobileye technology is used. And you're also starting to see inference happening on laptops, which is where the Intel Core processor family can be used. In the edge area, customers are for the most part using Xeon CPUs, but there are areas there as well where, for instance, they might be dealing with latency-bound workloads where something like the Arria 10 FPGA is useful, and you can also have solutions based on Movidius. Finally, there's the data center, which is where most customers today are doing deep learning inference and training, on Xeons, and for intensive deep learning environments this is where we plan to introduce the Intel Nervana Neural Network Processor, coming in 2019.

So we talked about the hardware in the previous slide; in this slide I wanted to talk about the overall Intel software portfolio that sits on top of it. The hardware I went over in the previous slide is at the very bottom. On top of that are the various foundation elements, the different libraries, the Math Kernel Library and so on. Think of these as the lower-level primitives we offer for you to extract that performance from the underlying technology. And then
above that we have frameworks. I talked about TensorFlow: this is where we've created an optimized version of TensorFlow that takes advantage of the underlying hardware through those lower-level libraries. And above that we have toolkits. OpenVINO is a toolkit that lets you take a model that's trained on one kind of hardware, possibly in the cloud, and then apply it somewhere else, either on a different type of hardware or on something like the edge. The good news is that with companies like C3, and platforms like SageMaker, a lot of this is hidden from your point of view; from your perspective, it simply just works. Another request we get is really around solutions: how do I build solutions based on all this? We provide white papers at builders.intel.com/ai, so feel free to go there and learn more.

Okay, I'm just going to go over this next slide very briefly. At the bottom is what we call the time to solution: the overall span from when you first ask yourself whether you can apply AI to a problem, to where you end up. The next part is really around this idea of building, deploying, and scaling, and what we want to show in this slide is that it's more than just training or inference: it's really about how you get your source data, how you do the development, and then what your plan is for scaling and deploying the inference. An example of this is what C3 will cover. But the point I want to bring across is that if you look specifically at the training part, a lot of people talk about training and say that training is very compute-intensive, and so on. It's true, but really
it's a small part of the overall cycle of building, deploying, and scaling. This is an example, and I'm not going to spend too much time on it, of what it takes to do this overall type of deployment on-premises. Fortunately, with platforms like AWS and C3 you don't have to worry about that, because it's all done for you in the cloud.

Okay, so the last part of our stack: I talked about the hardware and the tools, and finally there's the community and the ecosystem. We announced the Intel AI Builders partner program in May of this year, and since then we've signed up over a hundred partners; actually, the current count is over a hundred and sixty, ranging across a number of different domains, from different verticals to different horizontals. As you see on the lower right, under AI PaaS, we've got C3 featured there, and this is where we really look to work with our partners to help you, the customers, use Intel technologies.

So here's an example that C3 ran recently. What they've done is migrate a certain workload from the C3 instance on AWS, not to be confused with C3 the company, to the C5 instance. What they saw in moving from C3 to C5 was a performance improvement of 27 percent, while at the same time the total cost of ownership went down 41 percent. I think this is a key point, because it shows that as you scale out, maybe you did the POC on the C3 instance, but as you scale out on C5 you're actually getting better performance and a lower total cost. That's great news for customers as they deal with how to scale up.

Okay, at this point I'd like to turn it over to Deb. Thank you, Vinay. Hello, everyone. I'm going to break this part of the presentation into three parts. In the first
part, I'm going to provide an overview of how we can apply AI and machine learning in the area of supply chain. In the second part, I'm going to drill into a specific customer case study, and in the third part I'm going to share some performance improvement results, like the ones Vinay shared, using the latest generation of Intel processors.

So let's start with the first part, an overview of how we can apply AI and machine learning to supply chain use cases. What you see here is a value chain with suppliers at one end, going forward through manufacturers, logistics, and distribution, all the way to customers. Companies can use the C3 platform to build applications tailored to each stage in the value chain, using the underlying data that today might be sitting in multiple disparate source systems. Some examples of AI/ML-based applications that can be built are supply network optimization, inventory optimization, demand forecasting, and all the way to aftermarket insights. These are just some examples of applications that can be built to address specific customer needs and help drive business performance improvement.

Today we are going to deep-dive into one specific application, inventory optimization, but first let's try to understand some of the common pain points that practitioners in the supply chain space face in their day-to-day jobs. For executives, it's very difficult to get a real-time, end-to-end view of the supply chain, the reason being that the data resides in multiple disparate systems, and it's very costly and time-consuming to bring all of that together to provide that real-time view. Next, without the right analytical tools, it's very difficult for executives to contain or optimize inventory costs while at the same time meeting service levels, which means making sure that the
right product is available in the right quantity, at the right time, at the right location, in order to meet customer demand. Next, analysts typically work with spreadsheets, a ton of spreadsheets, so their analysis is generally slow and unreliable, and because the data in the spreadsheets was pulled from a source system at some point in the past, they're working off stale data most of the time. And finally, solutions developed on spreadsheets generally can't be scaled across the enterprise to millions of products across hundreds to thousands of locations globally.

Now let's take a look at how we at C3 address these problems. C3 Inventory Optimization is an application built on the C3 platform that uses AI and machine learning techniques to help optimize inventory levels: reducing inventory costs while at the same time making sure you meet service levels, that is, making sure your products are available to meet customer demand. Based on the projects we have done so far, we have seen reductions of inventory levels by as much as 30 percent or higher, while customers still achieve service levels above 99 percent. In addition, our machine learning models are able to predict supplier delays with more than 80 percent accuracy, which is a key aspect of managing the uncertainties that lead to inventory pileup. So what does that mean for a very large company with more than a billion dollars in inventory? It means that company will be able to reduce the working capital currently locked up in inventory by as much as 300 million dollars. On top of that, there are around a hundred million dollars or more of savings achievable in logistics costs, simply because you now need to order less material and fewer materials need to be moved around. So that's a tremendous benefit for a very large
enterprise. Now, how are we able to do that? Inventory optimization is not a new problem in today's world, but our points of differentiation are around four capabilities. First, the ability to use AI and machine learning algorithms, as against the standard rule-based systems that are in place today. Second, the ability to do optimization in real time, as against the current cadence of doing it maybe quarterly, or at best monthly, at many companies. Third, the ability to scale across millions of products at hundreds to thousands of locations globally. And finally, the ability to ingest new data types, such as weather data or data related to supply network congestion, which leads to better prediction of future uncertainties and thus more optimal inventory levels.

This slide shows a quick comparison along those four capabilities. The column on your left shows how typical legacy solutions, most of them built more than a decade ago, operate, versus the new approach we have at C3, on your right. When it comes to data integration, the ability to integrate multiple disparate sources or new data types is a big challenge with legacy systems; however, since C3 Inventory Optimization is built on the C3 platform, it's very easy to ingest new data types, and I mentioned some examples like weather and supply network congestion. Second, there's the use of machine learning algorithms, as against the very simplistic rule-based systems that traditional solutions use. Third, the ability to do this in real time, as against doing inventory optimization maybe on a quarterly or at best monthly basis. And finally, the ability to scale, where we use the C3 platform to massively scale across millions of items at thousands of locations globally.

So that was the background on applying AI and machine learning in the area of supply chain. Next, I'm going to get into a specific customer
use case, a case study where the customer realized tangible benefits using this application. This case study is for a very large discrete manufacturer with global presence and hundreds of locations worldwide. They make complex machines, and each machine can have tens of thousands of individual parts that go into making it. Now, there are two kinds of uncertainty they typically have to deal with. There's uncertainty on the demand side, because these machines are highly configurable and customizable, so you do not know what customer demand is going to be across all those options and configurations; that's a challenge by itself. Secondly, there's delay from suppliers: many suppliers are overseas, and they get delayed in providing the materials required for making these complex machines. As a result, there is an ongoing challenge in figuring out the right level of inventory to maintain for individual parts at each manufacturing location.

They tried to solve this problem using some of the existing solutions out there, but none of those solutions were able to do inventory optimization dynamically and at scale. So when they approached us, they chose C3 Inventory Optimization for a trial covering one product line, comprising several thousand parts, at one facility. As we went through the trial and developed and tuned the optimization algorithm, we were able to demonstrate an inventory savings opportunity on the order of 28 to 52 percent, depending on the service level they want to operate at, and that translated to somewhere between 100 and 200 million dollars of economic value for the company. Seeing that proof through the trial, they decided to scale the application up across multiple manufacturing facilities globally.

Now let me share with you what the timeline of the trial was.
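Before the trial timeline, a quick aside on the shape of the underlying calculation. The talk does not disclose C3's actual algorithm; as a purely illustrative sketch, the textbook safety-stock formula combines demand variability with supplier lead-time variability, which is exactly the pair of uncertainties described above (all numbers below are invented):

```python
import math

def safety_stock(z: float, mean_demand: float, sd_demand: float,
                 mean_lead_time: float, sd_lead_time: float) -> float:
    """Textbook safety-stock formula: z is the service-level factor
    (z ~ 1.65 for roughly a 95% service level)."""
    return z * math.sqrt(mean_lead_time * sd_demand ** 2
                         + mean_demand ** 2 * sd_lead_time ** 2)

# Illustrative daily re-run: as new data shrinks the estimated
# lead-time variability, the recommended buffer drops.
before = safety_stock(z=1.65, mean_demand=4.0, sd_demand=2.0,
                      mean_lead_time=30.0, sd_lead_time=8.0)
after = safety_stock(z=1.65, mean_demand=4.0, sd_demand=2.0,
                     mean_lead_time=30.0, sd_lead_time=5.0)
assert after < before  # less supplier uncertainty -> less buffer stock
```

Re-running a calculation of this kind daily with fresh estimates is what lets the recommended level move over time, as in the 120-to-90 example discussed later in the session.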
This is very typical of the trials we do at C3. A trial typically comprises three phases. The first phase is about data integration, which means working very closely with the customer's subject matter experts, understanding the semantics of the data, identifying the data sources and data types that are required, and loading all of the data from multiple disparate sources into the C3 platform. Once that's done, the next phase is about developing and tuning the machine learning algorithm to optimize inventory; typically, the output of the algorithm would be something like a recommendation for the user to reduce safety stock for a particular part at a given facility. And finally, we configure a user interface for end users to act on the recommendations coming out of the machine learning algorithm.

This is a view of the C3 Inventory Optimization application in the broader context of the C3 platform. What you see on your left are the different data types required by the algorithm, some examples being demand forecasts, purchase orders, material movements, inventory snapshots, supplier lead times, and so on. In the middle is the layer of the C3 platform where the machine learning and optimization algorithms run and generate recommendations, and those recommendations are surfaced through the Inventory Optimization application. Besides the application, which is the user interface for the end user, data scientists can interact with the data on the platform through Jupyter/IPython notebooks, business analysts can develop models using the visual tool called C3 Ex Machina, and at the same time they can review business intelligence reports coming out of C3 Intelligence.

So now let me share some results at a more granular, part level, so that you can see where the value comes from. This is a view of one
specific part at a factory. The blue line indicates the daily inventory levels, historically, over a three-year period. Now compare that with the green lines, the very small, tiny green lines there, which represent the average consumption, the actual demand, of the part. Just this view by itself tells you there's an enormous opportunity to optimize inventory for this part, since the daily demand is so small and the inventory levels being maintained are so high. The red lines are arrivals of that material from the supplier. This view by itself was an aha moment for the customer, because using their current tools there was no way for them to get a view like this.

Okay, so the way this works on the C3 platform is that we optimize inventory on a daily basis using the most recent data available. Using the different data types I mentioned earlier, demand forecasts, purchase orders, inventory snapshots, material movements, the machine learning algorithm computes an optimal safety stock, which in turn drives optimal inventory levels. Here, as an example, based on all that information, the safety stock the optimization algorithm came up with is 120. As new data comes in, the algorithm runs again, and if there is a need it updates the optimal safety stock value; in this case, for this part, the optimal safety stock came down to 90. So this is a very dynamic way of optimizing your inventory, as against doing it once a quarter, or at best once a month, which is typically what customers do today.

Okay, so here are some results of the optimization. In the top graph, the green line represents the actual daily inventory levels over that three-year period, and the bottom graph shows the comparison between the green line and the blue line, which represents the optimal inventory level coming out of the C3 inventory optimization algorithm. The difference between the green and the blue represents
the inventory savings opportunity. The savings opportunity I referred to earlier, the 28 percent number, is an aggregate of this difference between the green and the blue lines across all parts in scope. And if you plot the distribution of the average consumption, the actual demand, of a part versus the actual inventory, the distributions prior to optimization, in the top part of the graph, are very spread out, which is not desirable; after optimization, the distributions are much closer together, so the inventory level has come down much closer to the rate at which consumption happens. This is a view of the user interface; I encourage all of you to stop by our booth and get a demo of the application.

Now, going back to the point Vinay made earlier: we tested a complex algorithm like C3 Inventory Optimization, optimizing several parts across multiple locations, on the older instances, like the M4 in this example, and compared that with the latest Skylake-based M5 instances, and we saw improvement both in terms of performance, where the number is 21 percent, and in terms of TCO reduction, which is 42 percent. A similar exercise on the R5 instances showed a performance improvement of 50 percent and a TCO reduction of 49 percent. So the key point is that as customers scale up on these latest-generation machines based on Skylake processors, they're going to see more and more benefit coming out of an application running on this stack.

Okay, that is the end of my part of the presentation. I encourage all of you to visit the C3 booth, and there's also a session tomorrow at 12:15 where we're going to talk about how customers can build AI applications in one-tenth the time and at one-tenth the cost. With that, I'll turn it over to Vinay. Thank you, Deb. Deb, I think you can join me on stage as we get into the Q&A. So, hopefully
you learned something from this and enjoyed the presentation. What we really wanted to show is how C3 and Intel, working together, can really get you a benefit on some of the workloads and use cases you might see. We have a number of sessions during the course of the week, one this afternoon, one tomorrow, as well as one on Wednesday, and in addition, please visit the Intel booth. We have some exciting announcements coming up at Monday Night Live and at the Andy Jassy keynote on Wednesday, so please stay tuned for that. So we're done with the presentation around ten minutes early, which leaves us time for Q&A. If you have any questions, please step up to the mic.

Sure. You want to talk about C3 first? Yeah, sure. As part of that specific project I talked about, it was three to four people from C3, and we interacted very closely with the customer's resources; that was the nature of the engagement. And for Intel? Yes, so the way we worked, there were two parts to Intel's engagement. One was the Intel team working with AWS to make sure that all the optimizations, if you remember the slide I showed with all the different tools, were incorporated as part of the C5 instance; that was one part of the effort. The second part was working with C3 on the specific workloads and services they were using on AWS, and making sure those were taking advantage of all the optimizations we've done. So that's roughly how we worked through this type of scenario: there's a general part as well as a specific part.

One question: as you mentioned, you're going into unknown territory; there's a lot of data ingestion, and the data is spread all over the place
when you go into manufacturing. You said in your program charter that you did the work of getting the information on the data in four weeks. For example, if I come with different manufacturing facilities, and the key question is going from one manufacturing facility to the other, how do you go about getting the information on the data? Do you have a fixed template for figuring out how fast you can do the ingestion? Sure, so there are two parts to it; let me address the template piece first. Yes, we do have what we call canonicals, which specify what data types, and which individual fields or attributes, are required to feed the optimization algorithm, and that is pretty much set for the most part. Then we work with the customer to help them understand this canonical template, and generally it's the IT team at the customer who figures out where to pull the data from. The data can be extracted on a one-off basis, or we can use connectors; for example, if the data is in SAP, we use SAP connectors to get the data onto the C3 platform. So there are different options available to achieve that. Does that help? That's a good overview; I may connect with you offline. Thank you.

Thank you. Could you quantify the size of the data that you were ingesting? And I'm assuming it was streaming versus batch, or anything? In this example, this was batch data. The customer was providing data for those different data types, like demand forecasts, purchase orders, inventory snapshots, and material movements, typically once a day. The platform also has the ability to handle more real-time data, but in this particular use case we were getting the data about once a day. Could you talk a bit about how much data you're
actually ingesting? In this particular use case, as we go through multiple factories, one way to think about it is that the scale is going to grow as more factories come on board, but so far we are talking about somewhere over 50 GB every day. Okay. And can you talk a little bit about some of the AWS technologies you used in the data pipeline? So the C3 platform sits on top of the AWS stack, and it basically uses multiple AWS services. I don't have a slide here with the specifics; I can connect with you later on. There were certainly services like the relational database; Postgres and Cassandra would be the two databases we use, if I remember right, and in total it's around twenty services. Yeah, I saw the list. Thank you.

Any other questions? My name is Steve; quick question for you. You said that in going from C3 to C5 you were getting about a fifty percent increase in performance. Did you use tools to help optimize those workloads when you migrated from C3 to C5 to get that increase, or was it just a natural increase from scaling up? A lot of this is really something you don't have to worry too much about: as long as you use the Deep Learning AMI, the Amazon Machine Image, on C5, the optimizations are in there, and it will be continuously updated as we continue to optimize. So as long as you use the Deep Learning AMI on C5, you should get all of these optimizations. Okay, good, thank you.

No more questions? Well, thank you very much for your time, and enjoy the rest of re:Invent. Thank you.
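A footnote on the canonical templates mentioned in the Q&A: a minimal sketch of what validating an incoming record against such a template could look like. The data types and field names below are invented for illustration; they are not C3's actual canonicals:

```python
# Hypothetical canonical template: required fields per data type.
CANONICALS = {
    "purchase_order": {"po_id", "part_id", "supplier_id",
                       "order_date", "promised_date", "quantity"},
    "inventory_snapshot": {"part_id", "facility_id",
                           "snapshot_date", "on_hand_qty"},
}

def missing_fields(data_type: str, record: dict) -> set:
    """Return the required canonical fields absent from an incoming record."""
    return CANONICALS[data_type] - record.keys()

# Example: a purchase-order extract missing one required field.
record = {"po_id": "PO-1", "part_id": "P-42", "supplier_id": "S-7",
          "order_date": "2018-11-01", "quantity": 10}
gap = missing_fields("purchase_order", record)
# gap == {"promised_date"}: the source extract must also supply this field
```

A real integration layer would add type checks and unit normalization on top of this presence check, but the required-fields contract is the part the data-integration phase revolves around.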