
TOTAL QUALITY TOOLS APPLIED TO IMPROVE THE LOGISTICS SERVICE

In several articles published in previous issues (see, for example, the April 1999, January 2000 and July 2000 editions), we have shown the importance of measuring the quality of the service provided by the logistics system, with the aim of identifying the attributes in which our performance leaves something to be desired and those in which we are meeting or even exceeding customer expectations.

Evidently, this type of information is necessary but not sufficient. It is necessary to act on the attributes in which we are not meeting customer expectations and, at the same time, to have a system for monitoring future performance, because nothing guarantees that a service considered satisfactory at the time of measurement will remain so over time.

Variability is an intrinsic characteristic of any service and, in particular, of the logistics service. Each successive performance may be slightly different, due to a series of variables that are controllable and non-controllable by the service provider. If these variations are minimal, they will certainly not compromise the quality of the service. For example, if 99.5% of all delivery trucks to stores leave the Distribution Center within two minutes of the scheduled departure time, this is a variation that is unlikely to worry whoever is responsible for deliveries. If, however, this process fails to present such satisfactory indicators, action must be taken to diagnose, control and improve it.

There are a number of effective statistical methods for improving logistics services by identifying and reducing existing failures. The objective of this article is precisely to present the main statistical methods that can be applied to improve the logistics service and that, in the context of quality programs, came to be known as Total Quality Tools. Their use must be understood in the context of activity cycles, of the management of uncertainties and of the gaps between the quality of service provided by the supplier and that perceived by the customer. The next paragraphs address each of these three concepts.

CYCLE OF ACTIVITIES, UNCERTAINTIES AND SERVICE GAPS

A cycle of activities is the basic unit of analysis and control of logistics processes. It involves not only the activities or tasks necessary to serve the customer, but also all the decision-making and information exchange processes between the Marketing, Operations and Logistics departments within the company, as well as with its customers, suppliers and service providers. Through activity cycles, a company can evaluate any logistics process in terms of its efficiency (productivity in the use of resources) and its effectiveness (meeting goals).

Among the various activity cycles that exist in a company, the physical distribution activity cycle stands out. It covers the activities that range from the placement of the order by the customer with the sales department to its delivery to the customer by logistics. In most companies, this cycle is composed of the following activities:

  1. Transmission of the order to the supplier, whether via telephone, fax, EDI or the Internet
  2. Order processing: checking stock availability, assessing delivery times and checking customer credit
  3. Loading, transport and delivery to the customer.
[Figure 1 – The physical distribution activity cycle]

It should be noted that this generic cycle, illustrated in Figure 1, is applicable to companies in various segments of the economy, whether in the industrial sector or not. For example, a pizzeria making home deliveries or a bank delivering checkbooks to its customers' homes experiences a cycle similar to the one described above.

In practice, the various activities of the physical distribution cycle are subject to uncertainties, whether due to the level of reliability of the operation or due to quality problems in the tasks performed. In other words, transmission, processing and loading times can vary greatly around their average or standard, culminating in a very marked variation in the total physical distribution cycle time. Figure 2 illustrates this phenomenon.

[Figure 2 – Variability in the activities of the physical distribution cycle]
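By way of illustration, the short sketch below simulates the total physical distribution cycle time as the sum of the three activity times. The average times and standard deviations are purely hypothetical and are not taken from the article; the point is only to show how even moderate variability in each activity compounds into a wide spread in the total cycle.

```python
import random
import statistics

# Purely hypothetical average times and standard deviations (in hours)
# for the three activities of the physical distribution cycle.
ACTIVITIES = {
    "order transmission": (4, 2),      # (mean, standard deviation)
    "order processing": (12, 6),
    "loading, transport and delivery": (24, 10),
}

def simulate_cycle() -> float:
    """Draw one total cycle time as the sum of the three activity times."""
    total = 0.0
    for mean, sd in ACTIVITIES.values():
        total += max(0.0, random.gauss(mean, sd))  # a time cannot be negative
    return total

cycles = [simulate_cycle() for _ in range(10_000)]
print(f"average total cycle : {statistics.mean(cycles):.1f} h")
print(f"standard deviation  : {statistics.stdev(cycles):.1f} h")
print(f"range (min to max)  : {min(cycles):.1f} h to {max(cycles):.1f} h")
```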

In this scenario of extreme variability, it is easy to see that the management of uncertainty is one of the most important challenges for the logistics manager. Both delays and anticipations in the execution of each activity in the cycle, relative to its standard, must be monitored in order to avoid deterioration in the quality of the service provided. The action of logistics management must be guided by two objectives:

  • ensuring service consistency by reducing process variability;
  • reducing the duration of the activity cycle to the minimum possible by improving processes.

The activities in the cycle just described contain a set of service attributes that are more or less valued by customers. It is the performance in these attributes that determines a better or worse evaluation of the quality of the service by the customer. From a physical distribution perspective, for example, some of the service attributes most valued by customers are on-time delivery, the percentage of complete orders and the level of product availability. It is based on the customer's evaluation, therefore, that a company identifies service gaps; it then works on the respective attributes, directing its efforts towards the corresponding activities. It is at this point that the total quality tools demonstrate their usefulness.

TOTAL QUALITY TOOLS

As illustrated in Figure 3, the total quality tools applied to the improvement of the logistics service are normally divided into two categories, according to:

  • their degree of sophistication: Basic Tools or Advanced Tools;
  • their nature of analysis: Process Analysis or Statistical Analysis.
[Figure 3 – Classification of the total quality tools]

Generally, the improvement process starts with basic process analysis tools such as Brainstorming and the Cause & Effect Diagram. Then, the quality of the service provided is quantified using basic statistical analysis tools such as histograms and ABC (Pareto) analysis. Finally, advanced process analysis and statistical analysis tools are used. The next paragraphs describe these tools.

Brainstorming

Also known as a “storm of ideas”, Brainstorming is a very useful tool in the elaboration of Cause & Effect Diagrams, because it makes it possible to quickly generate a large number of ideas about the main problems (effects) of poor logistics service quality and their associated causes.

Generally, a Brainstorming session is conducted in groups of 5 to 10 people, with the help of a flip chart on which the ideas suggested by the participants are written down. It is important to point out that creativity must not be inhibited in these sessions, that is, under no circumstances should one member of the group criticize an idea raised by another.

Cause & Effect Diagram

This basic process analysis tool aims to schematically illustrate (see Figure 4) the relationship between potential causes and the effect (problem) existing in a service. It is also known as a Fishbone Diagram, due to its shape, or as an Ishikawa Diagram, in tribute to Kaoru Ishikawa, one of the great thinkers of total quality in the 20th century.

[Figure 4 – Cause & Effect Diagram]

As mentioned, potential causes are raised in Brainstorming sessions. Normally, when the physical distribution activity cycle is analyzed, four main groups of causes are covered: hardware, software, peopleware and external environment. Figure 5 presents a non-exhaustive list of these causes.

• Hardware
  – Machines
  – Equipment
  – Materials
  – Installations
• Peopleware
  – Human Resources
• Software
  – Methods
  – Policies
  – Procedures
  – Performance evaluation systems
• External Environment
  – Customers
  – Suppliers
  – Logistics service providers (carriers, warehouse operators)

Figure 5 – Main Cause Categories

Once the potential causes of the service deficiency have been raised, the execution and service times of the related activities must be measured and quantified. This makes it possible to validate the causes raised, measuring their variability and characterizing whether or not customer service is under control. Generally, companies can obtain these times from two main sources:

  • Corporate databases: companies integrated by corporate systems such as SAP, BPCS and Oracle usually have the start and end times of each activity in their records.
  • Internal audits: these aim to map and detail the activities of the physical distribution cycle, accompanied by time measurements.

After measuring the various times of the activities of the physical distribution cycle, statistical analysis tools are applied in order to quantify the variability and characterize whether or not the process is under control.

Histograms

The histogram is a graph obtained from the frequency distribution of a given event. If, for example, the event considered is the total customer service time, the histogram tells us how many times, in the collected sample, this time fell between 0 and 24 hours, between 24 and 48 hours, and so on. This makes it possible to evaluate how the number of occurrences of a given event varies with its intensity. Figure 6 shows an example of a histogram, referring to the length of stay of vehicles for loading at a factory.

[Figure 6 – Histogram of the length of stay of vehicles for loading at a factory]

A strong indication that the loading activity is out of control can be clearly perceived through two observations:

  • The histogram shows a high amplitude, i.e. a large difference between the highest and the lowest time observed.
  • There is an abrupt drop to the right of the graph, indicating a discontinuity in the collected sample.
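A histogram of this kind can be built with a few lines of code. The minimal sketch below groups a hypothetical sample of total customer service times into 24-hour classes and prints, for each class, the number of occurrences and the cumulative percentage (which already anticipates the ABC reading discussed next); none of the values comes from the article.

```python
from collections import Counter

# Hypothetical sample of total customer service times, in hours.
service_times = [18, 22, 35, 41, 47, 52, 26, 30, 55, 61, 12, 70,
                 44, 38, 29, 90, 23, 33, 48, 65]

BIN_WIDTH = 24  # group the times into classes of 24 hours: 0-24 h, 24-48 h, ...

# Count how many observations fall into each class.
bins = Counter((t // BIN_WIDTH) * BIN_WIDTH for t in service_times)

total = len(service_times)
cumulative = 0
for lower in sorted(bins):
    count = bins[lower]
    cumulative += count
    bar = "#" * count
    print(f"{lower:3d}-{lower + BIN_WIDTH:3d} h | {bar:<10} "
          f"{count:2d} occurrences ({100 * cumulative / total:5.1f}% cumulative)")
```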

ABC Analysis (Pareto)

This analysis has as its starting point the causes raised in Brainstorming, and its construction process is similar to that of the histogram. The difference is that, instead of evaluating the frequency distribution of the main effect, the ABC analysis identifies how the causes that contribute to this main effect are distributed.

With this it is possible to evaluate:

  • the small number of causes responsible for the greatest number of occurrences of deterioration in the quality of service (few, but vital);
  • the large number of causes responsible for the smallest number of occurrences of the problem (many, but trivial).

ABC analysis can also be applied to the interpretation of histograms. By calculating the cumulative frequency distribution of a given event, it is possible, for example, to answer questions such as:

  • What percentage of customers are served within 24 hours?
  • How many customers are served in more than 72 hours?

In several companies, such as Federal Express, the control and specification of service policies in express delivery programs are based on analyses of this nature.
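A minimal sketch of an ABC (Pareto) classification is shown below. The causes of late deliveries and their counts are invented for the example, and the 80% / 95% cut-offs for the A, B and C classes are a common convention rather than something prescribed in the article; the point is simply to show the few vital causes emerging at the top of the cumulative distribution.

```python
# Hypothetical counts of late deliveries attributed to each cause raised
# in a Brainstorming session.
cause_counts = {
    "stockout at the Distribution Center": 45,
    "delay in credit approval": 30,
    "order entry error": 12,
    "vehicle breakdown": 8,
    "wrong delivery address": 5,
    "traffic restrictions": 3,
}

total = sum(cause_counts.values())
cumulative = 0.0
print(f"{'cause':<38} {'count':>5} {'cum. %':>7}  class")
for cause, count in sorted(cause_counts.items(), key=lambda item: item[1], reverse=True):
    cumulative += 100 * count / total
    # Usual (and arbitrary) cut-offs: class A up to 80%, B up to 95%, C beyond.
    abc = "A" if cumulative <= 80 else "B" if cumulative <= 95 else "C"
    print(f"{cause:<38} {count:>5} {cumulative:>6.1f}%  {abc}")
```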

Process Flow Diagrams

This is an advanced process analysis tool, as it outlines the sequence of activities and decisions in an activity cycle. In addition to making it possible to visualize where the causes raised in the Cause & Effect Diagrams are located, this tool has several applications:

  • it facilitates understanding of the process;
  • it helps identify opportunities for improvement, that is, bottlenecks and redundancies that do not add value to the customer;
  • it assists in the development, description and documentation of improvements.

Figure 7 illustrates the symbology normally used to represent the various stages of a process, and Figure 8 exemplifies the vehicle loading process at an industrial unit. When analyzing process flow diagrams, the decision maker must direct attention to activities such as waiting, checking and internal movements, which are usually of little value to the end customer and only add cost to the operation.

[Figures 7 and 8 – Process flow symbology and the vehicle loading process at an industrial unit]
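One simple way to quantify what a flow diagram reveals is sketched below: each step of a hypothetical loading process is labelled as an operation or as waiting, checking or internal movement, and the share of the cycle spent on non-value-adding steps is computed. The steps and times are illustrative and are not taken from Figure 8.

```python
# Hypothetical mapping of a vehicle loading process: (step, type, minutes).
# Waiting, checking and internal movements usually add cost but little value.
loading_process = [
    ("truck waits in the yard", "waiting", 55),
    ("documents checked at the gate", "checking", 10),
    ("truck moves to the dock", "movement", 15),
    ("pallets picked and staged", "operation", 40),
    ("truck loaded", "operation", 35),
    ("load re-checked and sealed", "checking", 20),
]

NON_VALUE_TYPES = {"waiting", "checking", "movement"}

total = sum(minutes for _, _, minutes in loading_process)
non_value = sum(minutes for _, kind, minutes in loading_process if kind in NON_VALUE_TYPES)

print(f"total loading cycle      : {total} min")
print(f"non-value-adding portion : {non_value} min ({100 * non_value / total:.0f}%)")
```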

Control Chart

An advanced statistical analysis tool, though not a complex one, the control chart monitors the degree of variability of an activity, helping to identify trends that indicate whether or not it is under control. By calculating three parameters, LC (Central Line of Control), LSC (Upper Control Limit) and LIC (Lower Control Limit), a control chart is defined that enables the continuous monitoring of an activity over time. Figure 9 presents the basic aspect of a control chart, considering as an example the variable “time to load a vehicle”. A process or activity is said to be in control when no measurements are above the LSC or below the LIC. The main steps for building a control chart, with the determination of the LC, LSC and LIC parameters, are presented below by way of example.

[Figure 9 – Basic aspect of a control chart for the vehicle loading time]

To illustrate the construction of the chart, assume that over 10 days a sample (n) of 5 loading times was collected each day to study the variability of this process. The collected sample is shown in Figure 10.

[Figure 10 – Loading time samples collected over the 10 days]

Step 1: Determine the Central Line of Control (LC)

If the current process is assumed to be in control, the Central Line of Control is the average of the average loading times verified on each of the 10 days of operation. In this example, as shown in Figure 11, LC = 5:25 hours. If the process is considered to be out of control, this value may not be a good starting point; in that case, if a target has previously been set for the variable in question, that is the value to use. For example, if the company is targeting an average loading time of 5 hours, then LC = 5:00 should be the value employed.

[Figure 11 – Calculation of the Central Line of Control (LC)]

Step 2: Determine the Average Amplitude (R)

The Average Amplitude (R) is the average of the differences between the maximum and minimum loading times verified on each of the 10 days. In this example, as illustrated in Figure 12, R = 5:30 hours.

[Figure 12 – Calculation of the Average Amplitude (R)]

Step 3: Estimate the Standard Deviation (R/d)

The standard deviation is an estimate of the variability of the activity, which can be obtained from the Average Amplitude and the coefficient d, which depends on the size of the sample collected each day. This coefficient is easily found in statistical process control books, and Figure 13 reproduces a table with values of d for sample sizes ranging from 2 to 10 elements. In our case, since the sample size (n) is equal to 5, d = 2.326.

Thus:

R/d = 5:30 / 2.326 = 2:22 hours

[Figure 13 – Values of the coefficient d for sample sizes from 2 to 10 elements]

Step 4: Determine Upper and Lower Control Limits for 3 Deviations

With these parameters calculated, one need only apply the two formulas below for calculating the LSC and LIC.

LSC = LC + 3 × (R/d) / √n = 5:25 + 3 × 2:22 / √5 ≈ 8:35 hours
LIC = LC − 3 × (R/d) / √n = 5:25 − 3 × 2:22 / √5 ≈ 2:14 hours

where n = 5 is the number of loading times collected each day.

We therefore have that, if the average loading time on any given day is between 2:14 and 8:35 hours, the activity is under control.
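Steps 1 to 4 can be reproduced with a few lines of code, as in the sketch below. The ten daily samples are hypothetical, since the individual values of Figure 10 are not given in the text, so the resulting limits differ from the 2:14 to 8:35 interval above; the logic, however, is the same: LC from the daily averages, R from the daily ranges, the standard deviation as R/d, and the limits at three deviations of the daily average.

```python
from statistics import mean

# Hypothetical loading times, in minutes, for n = 5 loads on each of 10 days.
# (The individual values of Figure 10 are not reproduced in the text.)
daily_samples = [
    [180, 310, 420, 250, 365],
    [500, 290, 615, 340, 220],
    [275, 330, 410, 505, 360],
    [640, 280, 355, 300, 430],
    [620, 710, 680, 555, 590],
    [240, 365, 300, 415, 280],
    [310, 450, 385, 270, 500],
    [295, 340, 605, 380, 255],
    [410, 330, 290, 475, 350],
    [260, 395, 305, 440, 370],
]

n = 5
D = 2.326  # coefficient d for samples of 5 elements (Figure 13)

daily_means = [mean(day) for day in daily_samples]
daily_ranges = [max(day) - min(day) for day in daily_samples]

lc = mean(daily_means)            # Step 1: Central Line of Control
r_bar = mean(daily_ranges)        # Step 2: Average Amplitude
sigma = r_bar / D                 # Step 3: estimated standard deviation (R/d)
lsc = lc + 3 * sigma / n ** 0.5   # Step 4: Upper Control Limit
lic = lc - 3 * sigma / n ** 0.5   #         Lower Control Limit

def fmt(minutes: float) -> str:
    """Format a duration in minutes as h:mm."""
    return f"{int(minutes // 60)}:{int(minutes % 60):02d} h"

print(f"LC = {fmt(lc)}  R = {fmt(r_bar)}  R/d = {fmt(sigma)}")
print(f"LSC = {fmt(lsc)}  LIC = {fmt(lic)}")

# Flag the days whose average loading time falls outside the control limits.
for day, avg in enumerate(daily_means, start=1):
    if not lic <= avg <= lsc:
        print(f"Day {day}: average of {fmt(avg)} is outside the control limits")
```

With this particular invented sample, only the average of the fifth day falls outside the computed limits, mirroring the situation illustrated in Figure 14.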

Figure 14 reproduces Figure 9, now with the LC, LSC and LIC parameters represented. We can observe that something out of control happened on the 5th day of measurements, once again assuming that the process was initially considered in control. We invite the reader to redraw this chart considering LC = 5:00 hours, the desired value for the average loading time.

From now on, the chart is used to verify whether or not the process remains under control. The control limits are established so that, if at any time the measured times fall outside them, something is wrong.

[Figure 14 – Control chart with the LC, LSC and LIC parameters]

An important issue is the following: the reader should note that, with this chart, the process will be considered under control as long as the average loading time is between 2:14 h and 8:35 h. This may seem like too wide a tolerance, suggesting that the starting parameters already reflected an out-of-control process. If the person responsible for the control acts to reduce the variability of the process, this will directly reduce the value of R, which is an important factor in the calculation of the LSC and LIC. Thus, over time, as variability is reduced, the value of R will fall and the LSC and LIC will move closer to the LC, making the control less tolerant. If, also over time, loading starts to be done in less time, the person responsible for the control can change the LC value and start working with a new control chart.

It is also relevant to note that not every control chart works with both an upper and a lower limit; everything depends on the variable under study. In the example given, a loading time greater than the LSC could mean an abnormality, something unwanted, while a value lower than the LIC may mean nothing. Like any control tool, the chart exists to help those responsible for the control and must be used properly in order to avoid distortions.

Finally, it should be said that a control chart is a neutral tool. It serves to identify and describe a situation in a very objective way, and it should not be treated as a formula for deciding “who to blame” for a problem. Its purpose is to show everyone working on the process how it is developing and to inform them quickly of any anomalies. This creates an alert awareness in the group and an interest in solving the problem, whether it was caused by equipment failure, human error or some factor external to the system. It also sensitizes the company's management to provide all the assistance necessary to keep the process under control.

Authors: Kleber Figueiredo and Peter Wanke

https://ilos.com.br

Doctor in Business Administration from IESE Business School, Universidad de Navarra, Barcelona, Spain, and Master from COPPEAD/UFRJ. Degree in Mathematics from UFRGS. Professor in the Operations and Technology area at COPPEAD between 1979 and 1994. He was Deputy Director of the institution between 1988 and 1992, coordinator of several classes of the Executive MBA and professor at all levels of courses offered: Masters, Doctorate and Executive Training. Since 1990 he has been a visiting professor at the Instituto de Empresa in Madrid, and between October 1994 and April 1996 he held the position of Director of the Operations and Logistics Area full-time. In 1998 he returned to COPPEAD and, since then, has been Head of the Operations, Logistics and Technology Area and Coordinator of the first 10 classes of the Logistics MBA. He was Deputy Director of Executive Education between March 2005 and February 2008. He is currently a professor at the AMIL Chair in Health Services Management and Coordinator of the Center for Studies in Health Services Management. His areas of research interest are Service Operations and Logistics Services Strategy. He is one of the authors of the books “Business Logistics – the Brazilian perspective” and “Logistics and Management of the Supply Chain”. He is also the author of numerous teaching cases and articles published in technical and academic journals in Brazil and abroad.
