Customer-Aware Supply-Chain Operations
As an e-healthcare company, PharmEasy would like to prioritize all of our customers’ orders and deliver them on time. Our customers count on us for many of their critical medications, and the company does not want to breach its medicine-availability or delivery-time promises. However, unexpected surges in orders (e.g., during COVID-19) and constraints on workforce or medicine availability are a reality, and they result in a small percentage of breaches.
The current logic of order fulfillment at warehouses and delivery logistics is based on FIFO, i.e., first-in-first-out with respect to order confirmation times. However, when fulfillment capacity is constrained, we would like better control over which orders get fulfilled and which are most likely to breach the committed delivery times. For example, we would always want to ensure a better delivery experience for certain customer cohorts, such as:
- Frequently ordering or loyalty program customers
- New customers ordering for the first time
- High-value orders with certain chronic or acute medications
- Detractors or customers who have been inconvenienced in the past
- Dormant customers reordering after a long period
- Other ‘priority’ cohorts from time to time
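At its simplest, prioritizing such cohorts can be sketched as a weighted score per order. This is a hypothetical illustration only: the cohort flags, weights, and function names below are assumptions, not PharmEasy's production logic.

```python
# Hypothetical cohort weights; the real model and its weights are not public.
COHORT_WEIGHTS = {
    "loyalty_member": 3.0,
    "first_order": 2.5,
    "chronic_medication": 2.0,
    "past_detractor": 1.5,
    "dormant_reactivated": 1.0,
}

def priority_score(order_flags: dict) -> float:
    """Sum the weights of every priority cohort the order belongs to."""
    return sum(w for cohort, w in COHORT_WEIGHTS.items() if order_flags.get(cohort))

orders = [
    {"id": "A", "loyalty_member": True, "chronic_medication": True},
    {"id": "B", "first_order": True},
    {"id": "C"},  # no priority cohort
]
# sorted() is stable, so orders with equal scores keep their FIFO position.
ranked = sorted(orders, key=priority_score, reverse=True)
print([o["id"] for o in ranked])  # → ['A', 'B', 'C']
```

Because the sort is stable, orders outside any priority cohort still fall back to plain FIFO among themselves.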
In a conventional setting, dynamically ranking the order processing queue is difficult to implement because customer, product, and order-history context data is not typically available within the Order Management Service of the warehouse and delivery logistics systems. While such context is readily available in the Enterprise Data Warehouse or the Data Lake, there is typically a lag between the source and the Data Lake with respect to order state change signals.

We solved this problem using Isima’s bi(OS), which lets us ingest order state change signals in real time and enrich them with a variety of customer and product contexts. It also allows us to query this data within seconds and feed it into a pre-built machine learning model that recomputes order priority across all currently queued orders at the warehouse or in delivery logistics. We were able to ingest the required signals and context data, build the order-prioritization logic, integrate loosely with our tech systems, and go live with an experiment within a few weeks.

We discovered that having reliable real-time data streams from multiple tech systems, queryable within seconds, allows us to build agile order-prioritization logic without many of the conventional constraints. It also eliminates the need to repeatedly modify or integrate with critical systems such as the Order Management Service each time the model evolves or a new feature is added. We describe this project in more detail below, beginning with the data onboarding process.
For this project, as well as other potential use cases, relevant data sources were identified to push data into Isima’s bi(OS) and make it available for extraction. To ingest data in real time, the source systems must be able to stream data as it is updated. In an RDBMS like MySQL, this is achieved by enabling binlogs, i.e., Change Data Capture. Data can also be streamed directly from the applications or services that generate it, using the Isima SDK. However, to avoid data loss during any downtime or maintenance (client- or host-side), we set up a queuing system (e.g., Kafka) between the data sources (RDBMS, Data Warehouse, etc.) and bi(OS). During the PoC, the queuing system had a 36-hour outage. Once it recovered, we were pleasantly surprised to see bi(OS) handle the surge of backlogged data without any issues. More details on the onboarding process are given below.
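The role of that intermediate queue can be sketched with a minimal buffering forwarder. This is a pure-Python stand-in for illustration; the real deployment used Kafka and the Isima SDK, whose APIs are not shown here.

```python
from collections import deque

class BufferedForwarder:
    """Buffers events while the downstream sink is unavailable, then
    drains the backlog once it recovers (the role Kafka played here)."""

    def __init__(self, sink):
        self.sink = sink          # callable that raises while the sink is down
        self.backlog = deque()

    def send(self, event):
        self.backlog.append(event)
        self.flush()

    def flush(self):
        while self.backlog:
            try:
                self.sink(self.backlog[0])
            except ConnectionError:
                return            # sink down: keep the backlog, retry later
            self.backlog.popleft()

# Simulate an outage followed by recovery.
delivered, down = [], [True]
def sink(event):
    if down[0]:
        raise ConnectionError("sink unavailable")
    delivered.append(event)

fw = BufferedForwarder(sink)
for i in range(3):
    fw.send(i)        # buffered during the outage, nothing is lost
down[0] = False
fw.flush()            # backlog drains in order on recovery
print(delivered)      # → [0, 1, 2]
```

The same pattern is what let the 36-hour backlog replay cleanly once the queue recovered.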
Ontology Design and Ingestion
This is the most important step in making the various signals and context data available for use. Bringing together all the relevant tables across systems helps identify the best way to define the table schemas. The dimension tables are stored as contexts and need to be designed first, as they enable preprocessing and postprocessing of incoming data. We recommend enriching the transactional data, stored as ‘signals’, by joining it with the relevant ‘contexts’ in real time during ingestion. This reduces computation during repeated data extractions later. Any new fields added to the source systems can also be easily integrated into the existing signals. We then declaratively defined the aggregations and features for the model, and bi(OS) calculated these features in real time, which made model building and experimentation easy. We created about 10 features for the model on bi(OS) in less than a week.
- Bootstrapping – Using real-time data warrants backfilling of historical data (e.g., for the past 20–30 days). Critical historical data can also be modeled and ingested as contexts, since it does not change frequently. Context data needs to be bootstrapped before ingesting signal data so that signals can be enriched during ingestion.
- Stream Pipeline – A real-time stream must be set up to receive order state change signals and determine the current pending order queue at the warehouse and logistics stages. The pipeline also needs to be resilient, handling errors and alerting users if there are any issues with the data or the pipeline.
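Conceptually, the enrichment step is a key-based join of each incoming signal against the bootstrapped context tables. The sketch below is illustrative: the field names are assumptions, and in bi(OS) this join is declared on the signal schema rather than hand-coded.

```python
# Bootstrapped context tables (dimension data), keyed by their IDs.
customer_ctx = {"c1": {"segment": "loyalty", "lifetime_orders": 42}}
product_ctx = {"p9": {"category": "chronic", "is_rx": True}}

def enrich(signal: dict) -> dict:
    """Attach customer and product context to an order-state-change signal."""
    enriched = dict(signal)
    enriched.update(customer_ctx.get(signal["customer_id"], {}))
    enriched.update(product_ctx.get(signal["product_id"], {}))
    return enriched

event = {"order_id": "o1", "customer_id": "c1", "product_id": "p9",
         "state": "CONFIRMED"}
print(enrich(event)["segment"])  # → loyalty
```

Doing this once at ingestion means later extractions read fully denormalized rows instead of re-running the join on every query.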
Data Quality Checks
Once the data is bootstrapped and real-time ingestion is enabled, it is important to run data quality checks. Aggregate metrics such as sums and counts, as well as a sample set of records, should be verified between the source and the bi(OS) platform to ensure there is no loss across the systems involved.
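These checks reduce to comparing aggregates and a sampled record set between source and destination. A minimal sketch, with invented table and field names:

```python
import random

def quality_check(source_rows, dest_rows, amount_field="order_value", sample=5):
    """Compare row counts, a sum aggregate, and a random record sample
    between the source system and the destination platform."""
    checks = {
        "count_match": len(source_rows) == len(dest_rows),
        "sum_match": sum(r[amount_field] for r in source_rows)
                     == sum(r[amount_field] for r in dest_rows),
    }
    dest_index = {r["order_id"]: r for r in dest_rows}
    sampled = random.sample(source_rows, min(sample, len(source_rows)))
    checks["sample_match"] = all(dest_index.get(r["order_id"]) == r
                                 for r in sampled)
    return checks

rows = [{"order_id": i, "order_value": 100 + i} for i in range(10)]
print(quality_check(rows, list(rows)))  # all three checks pass
```

In practice the aggregates would be computed by queries on each side rather than in application memory, but the comparison logic is the same.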
Building the model
For the purpose of building and using the order prioritization model, only the required subset of data is extracted to minimize memory usage and model processing time. A single analytical dataset with all the required features was created on which a scoring function was applied to rank orders currently queued at the warehouse or with the logistics network. This output was re-ingested into bi(OS) as a synthetic signal to measure the impact of the model.
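The ranking step can be sketched as applying a scoring function to the currently queued orders and emitting the result as a new signal. The score and field names below are hypothetical; the actual model is not public.

```python
from datetime import datetime, timezone

def rank_queue(queued_orders, score_fn):
    """Score every pending order and emit the ranking as a synthetic
    signal (order_id, priority_rank, timestamp) for impact measurement."""
    ranked = sorted(queued_orders, key=score_fn, reverse=True)
    ts = datetime.now(timezone.utc).isoformat()
    return [{"order_id": o["order_id"], "priority_rank": i + 1, "scored_at": ts}
            for i, o in enumerate(ranked)]

# A toy scoring function standing in for the real model's output.
score = lambda o: o["customer_value"] + (5 if o["is_chronic"] else 0)
queue = [
    {"order_id": "o1", "customer_value": 1, "is_chronic": False},
    {"order_id": "o2", "customer_value": 2, "is_chronic": True},
]
print([r["order_id"] for r in rank_queue(queue, score)])  # → ['o2', 'o1']
```

Re-ingesting this output as a signal is what makes the later FIFO-vs-model comparison possible without touching the fulfillment systems.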
Data logging and backups
With each iteration, it is important to log the extracted and scored data in order to calculate the impact of the scoring logic and to check if business objectives were achieved at a later point in time. This dataset can be ingested back into the platform as a signal and/or stored in traditional databases or file systems like S3 or NFS.
After the model was completed, and prior to the actual go-live at the warehouse, the output of the model was stored for several days but not shared with the operations team. This logged output was used to:
- Determine if the process was running as expected and within the promised turnaround time
- Shadow test or simulate if it met the committed business objectives under different scenarios
- Make necessary tweaks to fine-tune the model and reiterate
Once the model was tested and finalized, the output was shared with the operations team per a predetermined contract specifying the format and frequency of the order priority ranks.
Sharing the Recommendation and Taking Action
To begin with, the final ranked list of queued orders (the ‘recommendation’ file) is shared with the operations-team stakeholders, who manually apply the priorities in the respective systems. As a next step, the recommendation files will be integrated into the fulfillment system seamlessly, avoiding manual intervention or confusion at the warehouse or in the logistics network.
Continuous logging of these recommendation files is important for reporting adherence and calculating the impact on key goal metrics such as reduced cancellations, higher customer satisfaction, and repeat rates. The data logged from each iteration was used to reconstruct what the warehouse queue would have looked like had it followed the prioritized order instead of the actual queue, and to check whether the most important orders would have been processed within the committed time. This comparison of the old “FIFO” fulfillment logic against the model’s recommendation was run over several days to estimate the long-term benefits of extending the experiment.
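That offline comparison can be sketched as replaying a day’s orders through both queue disciplines under the same capacity and counting promise breaches. This is a toy simulation; the capacity, slot, and SLA values are invented.

```python
def simulate(orders, capacity_per_slot, use_priority):
    """Process `capacity_per_slot` orders per time slot; count how many
    orders finish after their promised slot (an SLA breach)."""
    queue = (sorted(orders, key=lambda o: -o["priority"]) if use_priority
             else list(orders))              # FIFO: confirmation order
    breaches = 0
    for slot, start in enumerate(range(0, len(queue), capacity_per_slot)):
        for o in queue[start:start + capacity_per_slot]:
            if slot + 1 > o["sla_slots"]:    # fulfilled after the promise
                breaches += 1
    return breaches

# Four orders: (priority, promised slots until breach).
orders = [{"priority": p, "sla_slots": s}
          for p, s in [(1, 4), (5, 1), (2, 4), (4, 2)]]
print(simulate(orders, 1, False), simulate(orders, 1, True))  # → 2 0
```

With one order processed per slot, FIFO breaches the two urgent orders while the prioritized queue breaches none, which is the kind of difference the logged data was used to measure.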
Initial results showed that instances of delivery-time promise breach occurring at the warehouse reduced by 10-15% on most days. Projections showed that the company could save orders valued up to 1% of GMV from breaching promised delivery times.
- Infra Setup: The ingestor is a stateless process running on an EC2 machine; it still needs to be made resilient.
- Availability of a Real-Time Data Source: To use the system, source systems must be able to stream data in real time. For example, a PostgreSQL instance without change-log streaming enabled (the equivalent of MySQL’s binlogs) will not be able to stream data in real time; in such cases, the data needs to be ingested directly from the application or microservices.
Our bi(OS) Wishlist
- Bootstrapping from a CSV: It would be ideal for bi(OS) to provide a CSV ingestor to bootstrap all the context data from databases. This would have saved us another week, given the size of our customer data and the number of contexts. Isima claims to have this in their new release; we will test it for newer use cases and update the blog.
- bi(OS), being a real-time data platform, delivers time-series-based BI effectively. It would be useful to also support traditional dashboards that analyze data across multiple dimensions, which need not include time.
With the basic model now in place, enhancements are planned to include other key features such as the customer’s prior order-fulfillment experiences. The model will also be integrated with the Order Management System, which can coordinate seamlessly with multiple other operations systems and avoid manual intervention on the ground. Other similar use cases that could be attempted in our eCommerce industry include:
- Prediction of Promise Delivery Date based on current capacity and backlog
- Monitoring real-time Payment Success rates
- Prioritization of stock inwards based on current item inventory levels
- Personalized coupons based on item inventory state and customer history
Our Final Thoughts
We were skeptical that bi(OS) could perform as promised and expected the PoC to fail or take a very long time. But the COVID-19 lockdowns created an urgent situation that needed a quick fix. Everyone across PharmEasy was pleasantly surprised to see a solution delivered so quickly. We asked for many enhancements, and most of the critical ones were delivered. Further, bi(OS) was also able to handle data surges from our upstream systems easily. Our next project will be to predict the delivery promise time more accurately for the customer, using real-time warehouse and logistics-network capacities and backlog.