Let's say that your firm's QA (and Operations) team has determined that one of the major reasons for suboptimal customer service levels and stock-outs is unpredictable fluctuation in lead time. "Lead time", in this example, is the length of time it takes to replenish your stock from the moment you initiate a requisition: everything from preparing the Purchase Order through physical receipt of the goods at your warehouse and posting them into available inventory. Naturally, when you place a PO with a supplier, the quantity is determined in part by the amount you expect to sell within that lead time. If the goods arrive three weeks late and you do not carry enough safety stock to compensate, you will likely run out of stock.
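To make that arithmetic concrete, here is a minimal sketch in Python of lead-time demand and the exposure a late shipment creates. Every figure here (the demand rate, the lead time, the safety stock, the length of the delay) is an assumption invented for illustration; substitute your own numbers.

```python
# Hypothetical figures for illustration only; substitute your own.
daily_demand = 50        # units sold per day (assumed)
lead_time_days = 21      # quoted lead time in days (assumed)
safety_stock = 300       # buffer held against variability (assumed)

# The PO quantity must at least cover expected sales during the lead time.
lead_time_demand = daily_demand * lead_time_days   # 1,050 units

# If the shipment slips three weeks, the extra demand lands on safety stock.
delay_days = 21
extra_demand = daily_demand * delay_days           # 1,050 units
shortfall = extra_demand - safety_stock            # stock-out exposure

print(f"Lead-time demand: {lead_time_demand} units")
print(f"Shortfall from a {delay_days}-day delay: {shortfall} units")
```

With these assumed numbers, a three-week slip generates 1,050 units of unplanned demand against only 300 units of buffer: exactly the stock-out scenario described above.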
Now, your QA/Operations team might have come to this conclusion by using a variety of tools, such as the "Fishbone" (or Ishikawa) diagram, or even Pareto Analysis. But now we want to dig deeper into the problem of lead time variability.
Take a statistically significant sample of inbound shipments that are arriving "late" (that is, beyond the lead times that have been negotiated in your procurement contracts). Determine what issue, or combination of issues, has led to each shipment arriving late. Such issues might include:
- supplier/vendor out of stock
- equipment breakdown in supplier's manufacturing process
- shipments delayed at point of consolidation at port of exit (for overseas suppliers)
- unreliable trucking company
- goods stuck in customs due to paperwork problems
- delays in rail yards
- and so on
Count the frequency of each issue within the sampling.
Rank the issues top-to-bottom.
Apply Pareto's Law (the 80/20 rule) and assign A, B, and C classifications.
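Once the issues are tallied, the ranking and classification steps are mechanical. Here is a minimal Python sketch of all three steps; the issue labels and their counts are invented for illustration, and the 80%/95% cumulative cut-offs for the A/B/C classes are a common convention, not a fixed rule.

```python
from collections import Counter

# Hypothetical sample: the delay issue recorded for each late shipment.
# Replace with the issues logged in your own sampling of late arrivals.
late_shipment_issues = [
    "customs paperwork", "unreliable trucking", "customs paperwork",
    "supplier out of stock", "customs paperwork", "unreliable trucking",
    "rail yard delay", "customs paperwork", "supplier equipment breakdown",
    "unreliable trucking", "customs paperwork", "port consolidation delay",
]

# Step 1: count the frequency of each issue within the sample.
counts = Counter(late_shipment_issues)

# Step 2: rank the issues top-to-bottom by frequency.
ranked = counts.most_common()

# Step 3: apply Pareto's Law -- walk the cumulative share of late
# shipments and assign A/B/C classes at the 80% and 95% cut-offs.
total = sum(counts.values())
cumulative = 0
print(f"{'Issue':<32}{'Count':>6}{'Cum %':>8}  Class")
for issue, count in ranked:
    cumulative += count
    share = cumulative / total
    cls = "A" if share <= 0.80 else ("B" if share <= 0.95 else "C")
    print(f"{issue:<32}{count:>6}{share:>8.0%}  {cls}")
```

Reading the output top-to-bottom gives your team the ranked list directly: the A items are the few issues to attack first.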
This will allow your QA team to focus on the few issues (the A items) that contribute the bulk, roughly 80%, of your lead time variability problem, and fix those first. This is the low-hanging fruit.
It can be done!
Cheers