Semester Final Exam Notes
A Comprehensive Guide to Quality Management
Chapter 1: Introduction to Quality
1.1 Perspectives on Quality
Quality is a multifaceted concept. It's not just about making a "good" product; it's a strategic approach that impacts an entire organization. Different stakeholders view quality through different lenses:
- Fitness for Use (Juran): This is a customer-focused definition. It asserts that quality is determined by the customer's experience. This concept is further divided into:
- Quality of Design: The intended grade, features, and specifications of a product. A luxury car and an economy car have different qualities of design.
- Quality of Conformance: How well the actual product meets the specifications laid out in its design. A product with high conformance has few defects.
- Conformance to Requirements (Crosby): This is a producer-focused, unambiguous definition. It states that a product has quality if it meets the specifications, and lacks quality if it doesn't. This promotes a "zero defects" culture where the goal is to do things right the first time, every time.
- Other Perspectives: There's also the Transcendent view (quality is universally recognizable but hard to define), the Product-based view (quality is a precise and measurable variable), and the Value-based view (quality in terms of cost and price).
1.2 The Quality Gurus and their Philosophies
The modern quality movement was shaped by several key thinkers whose philosophies are still influential today:
- W. Edwards Deming: Often considered the father of modern quality control. He introduced Statistical Process Control (SPC) to Japan after WWII. His core philosophy, encapsulated in his 14 Points for Management, is that management is responsible for creating a system that allows for quality. He believed most problems are systemic, not due to individual workers. He also championed the Plan-Do-Check-Act (PDCA) cycle for continuous improvement.
- Joseph M. Juran: Juran defined quality as "fitness for use" and emphasized the financial aspects of quality. He proposed the Quality Trilogy:
- Quality Planning: Designing processes that are capable of meeting goals.
- Quality Control: Monitoring and adjusting processes to maintain control.
- Quality Improvement: A breakthrough approach to take processes to unprecedented levels of performance.
- Philip B. Crosby: Crosby is famous for the concepts of "Quality is Free" and "Zero Defects." He argued that the costs of preventing defects (training, process improvement) are far less than the costs of correcting them (rework, scrap, warranty claims). He defined quality simply as "conformance to requirements."
1.3 The Evolution: QC, QA, and TQM
These terms represent an evolution in how organizations approach quality, moving from detection to prevention and finally to a holistic, strategic philosophy.
Concept | Definition & Focus | Core Idea |
---|---|---|
Quality Control (QC) | A system of operational techniques to measure product characteristics, compare them with specifications, and take action if there's a difference. It's reactive and focused on the product. | Detection |
Quality Assurance (QA) | A planned system of activities to provide adequate confidence that a product or service will satisfy given requirements. It's proactive and focused on the process. | Prevention |
Total Quality Management (TQM) | A management philosophy that integrates all organizational functions to focus on meeting customer needs and organizational objectives. It is a cultural shift towards continuous improvement, involving everyone from the CEO to the frontline worker. | Continuous Improvement & Customer Focus |
1.4 Quality Circle: A Tool for Employee Empowerment
A Quality Circle is a participative management technique. It is a small group of employees (typically 6-10) from the same work area who meet voluntarily and regularly to identify, analyze, and solve work-related problems. Its success hinges on several factors:
- Voluntary Participation: Members must choose to be involved; forced participation is ineffective.
- Management Support: Leadership must provide resources (time, meeting space, training) and, most importantly, genuinely consider the circle's proposals.
- Training: Members must be trained in problem-solving tools like the 7 QC Tools (Pareto charts, fishbone diagrams, etc.) to analyze problems systematically.
- Focus: Circles should focus on problems within their own area of control, which empowers them to make tangible improvements.
Chapter 2: Statistical Process Control (SPC)
SPC is the application of statistical methods to the monitoring and control of a process to ensure that it operates at its full potential to produce conforming product. The foundational concept is that variation exists in every process, and understanding this variation is the key to improving quality.
2.1 Understanding Variation: The Voice of the Process
Dr. Walter Shewhart, the father of SPC, identified two distinct types of variation that affect every process:
Feature | Chance (Common) Causes | Assignable (Special) Causes |
---|---|---|
Definition | The inherent, natural variability of a process that is in a state of statistical control. It is the result of the combined effect of many small, uncontrollable causes. It is the "noise" or random variation within the system. | Variations that are not inherent to the process. They are caused by specific, identifiable events that cause a noticeable change in the process output. They are a "signal" that something has changed. |
Examples | Slight variations in raw material properties, minor fluctuations in ambient temperature or humidity, normal machine vibration and wear. | A tool breaking, a machine setting being incorrect, a new and untrained operator, a bad batch of raw material from a supplier. |
Action Required | To reduce common cause variation, the entire system must be fundamentally improved (e.g., investing in new equipment, redesigning the process). This is a management responsibility. | Assignable causes should be detected, investigated, and eliminated by the process operators and engineers to bring the process back into a state of control. |
A process is said to be in a state of statistical control when only common causes of variation are present. This does not necessarily mean the process is producing good parts, but it does mean that its output is stable, consistent, and predictable. The primary tool of SPC, the control chart, is used to distinguish between these two causes of variation and to guide the appropriate action.
Chapter 3: Cost of Quality (COQ)
COQ, sometimes called the Price of Nonconformance (PONC), is a powerful methodology that translates quality problems into the language of senior management: money. It helps to highlight the financial impact of poor quality and justify investments in quality improvement initiatives.
The Four Categories of Quality Costs
The total Cost of Quality is the sum of two major components: the Cost of Good Quality (Prevention and Appraisal) and the Cost of Poor Quality (Internal and External Failures).
- Prevention Costs: Costs of all activities specifically designed to prevent defects from occurring in the first place. This is the most desirable area for investment.
- Examples: Quality planning, new product design reviews, process capability studies, statistical process control (SPC) implementation, employee quality training, and supplier evaluation programs.
- Appraisal Costs: Costs associated with measuring, evaluating, or auditing products, components, and purchased materials to ensure conformance to quality standards.
- Examples: Incoming material inspection, in-process inspection, final product testing, calibration of test and measurement equipment, and quality audits.
- Internal Failure Costs: Costs incurred when defects are found *before* the product is delivered to the customer. These are the costs of failing to do it right the first time.
- Examples: Scrap (material and labor costs for non-repairable products), rework and repair costs, re-inspection of reworked items, failure analysis to determine the cause of failure, and downgrading products to a lower price point.
- External Failure Costs: Costs incurred when defects are found *after* the product reaches the customer. These are the most dangerous and often the most expensive costs.
- Examples: Warranty charges, handling and resolving customer complaints, product recalls and field repairs, returned material processing, and, most importantly, the hidden cost of loss of customer goodwill and future sales.
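To make the roll-up concrete, here is a minimal Python sketch that sums the four categories into a total COQ and expresses it as a fraction of sales; every figure (the individual costs and the sales number) is hypothetical and only illustrates the arithmetic.

```python
# Minimal Cost of Quality roll-up. All figures are hypothetical examples.
costs = {
    "prevention": 40_000,        # training, SPC implementation, design reviews
    "appraisal": 60_000,         # inspection, testing, calibration
    "internal_failure": 120_000, # scrap, rework, re-inspection
    "external_failure": 180_000, # warranty, complaints, recalls
}

annual_sales = 4_000_000  # hypothetical

cost_of_good_quality = costs["prevention"] + costs["appraisal"]
cost_of_poor_quality = costs["internal_failure"] + costs["external_failure"]
total_coq = cost_of_good_quality + cost_of_poor_quality

print(f"Cost of good quality : {cost_of_good_quality:,}")
print(f"Cost of poor quality : {cost_of_poor_quality:,}")
print(f"Total COQ            : {total_coq:,} "
      f"({total_coq / annual_sales:.1%} of sales)")
```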
3.1 The Optimum Cost Model: Classical vs. Modern View
The goal of tracking COQ is to reduce the total cost. The classical model suggested a trade-off, implying there was an "optimal" level of defects. However, the modern TQM philosophy, championed by Crosby, argues that the true optimum is found by investing heavily in prevention to drive failure costs to zero. The cost of prevention is almost always a fraction of the costs of failure, especially external failures which can damage a company's reputation.
Chapter 4: ISO 9000 Quality System
ISO 9000 is a family of international standards for quality management systems (QMS). It is not a product standard, but a process standard. It provides a framework for how an organization should manage its processes to consistently meet customer and regulatory requirements and to enhance customer satisfaction.
4.1 The ISO 9001:2015 Standard and its Principles
ISO 9001 is the only standard in the family against which an organization can be externally audited and certified. It is built on a foundation of seven Quality Management Principles:
- Customer Focus: The primary objective is to meet and exceed customer expectations.
- Leadership: Top management must demonstrate commitment, create a quality policy, and provide direction.
- Engagement of People: Competent, empowered, and engaged people at all levels are essential for creating and delivering value.
- Process Approach: Managing activities as interrelated processes that function as a coherent system.
- Improvement: Successful organizations have an ongoing focus on continual improvement.
- Evidence-based Decision Making: Decisions should be based on the analysis and evaluation of data and information.
- Relationship Management: Managing relationships with interested parties, such as suppliers, is crucial for sustained success.
4.2 Key Concepts: Process Approach and Risk-Based Thinking
The ISO 9001:2015 standard heavily emphasizes two key concepts:
- The Process Approach: This involves managing an organization as a system of interconnected processes. The standard's structure itself aligns with the PDCA (Plan-Do-Check-Act) cycle, ensuring a dynamic approach to quality management.
- Risk-Based Thinking: This requires an organization to proactively identify, consider, and control the risks and opportunities throughout its processes. It replaces what used to be a separate clause on "preventive action" and makes risk management an integral part of the entire QMS.
4.3 Benefits of ISO 9000 Certification
- Improved Customer Satisfaction: By consistently meeting customer requirements.
- Increased Efficiency and Reduced Costs: Through better process control and a focus on continual improvement.
- Enhanced Market Access: Certification is often a prerequisite for doing business in many industries and global markets.
- Better Internal Communication and Morale: Through clearly defined roles, responsibilities, and processes.
Chapter 5: Seven Tools of Quality Control
These are fundamental graphical techniques, also known as the "7 QC Tools," used for analyzing and solving quality problems. They are powerful because they are simple and visual, making them accessible to everyone in the organization, from engineers to shop-floor operators.
- Check Sheet: A simple, structured form for collecting and tallying data in real-time. It is often the first step in problem-solving and provides the raw data for other tools like histograms and Pareto charts.
- Histogram: A bar chart showing the frequency distribution of variable data (e.g., measurements). It provides a quick visual summary of a process's variation, centering, and shape (e.g., normal, bimodal, skewed).
- Pareto Chart: A bar chart, based on the 80/20 rule, that organizes causes (e.g., types of defects) by their frequency in descending order, with a cumulative percentage line. It's a key tool for prioritizing problems, helping a team to focus its efforts on the "vital few" causes that have the greatest impact.
- Cause-and-Effect Diagram (Fishbone/Ishikawa): A structured brainstorming tool used to explore all potential causes of a problem (the "effect"). Causes are systematically grouped into categories like the 6Ms (Manpower, Machines, Methods, Materials, Measurement, Environment) to ensure a thorough and organized analysis.
- Scatter Diagram: A graph that plots pairs of numerical data (one variable on each axis) to investigate the relationship (correlation) between them. The pattern of points can suggest a positive, negative, or no correlation, which can help confirm or deny suspected cause-and-effect relationships.
- Flow Chart: A diagram that visually represents the sequence of steps, decisions, and activities in a process. It is essential for understanding how a process currently works ("as-is") and for designing an improved process ("to-be").
- Control Chart: The core tool of SPC. It is a time-series graph with a center line and statistically calculated upper and lower control limits. It is used to monitor a process over time and distinguish between common cause and special cause variation, signaling when intervention is needed.
Chapter 6: Process Capability Study
A process capability study is a set of tools used to assess how well a process, which must first be in a state of statistical control, is able to meet a set of specifications defined by the customer. It answers the question: "Is our process good enough?"
6.1 Voice of the Customer vs. Voice of the Process
- Specification Limits (USL, LSL): These are set by the customer or engineer. They define the acceptable range for a product characteristic. They are the "Voice of the Customer".
- Natural Process Limits (UNPL, LNPL): These represent the actual variation of the stable process (typically +/- 3 standard deviations from the mean). They are the "Voice of the Process".
Process capability is the quantitative comparison of these two "voices".
6.2 Capability Indices: Cp and Cpk
These are statistical measures of process capability.
- Cp (Potential Capability): Compares the allowable spread (specification width) to the actual spread (process width). It tells you if the process is *capable* of fitting within the specifications, but does not consider if it is centered.
Cp = (USL - LSL) / (6σ)
- Cpk (Actual Capability): This index measures how well the process is actually performing because it accounts for both the spread and the centering of the process relative to the specification limits. It represents the capability on the worse side of the process mean.
Cpk = min[ (USL - μ) / (3σ) , (μ - LSL) / (3σ) ]
Interpreting Cp and Cpk
- If Cp = Cpk, the process is perfectly centered between the specification limits.
- If Cp is high, but Cpk is low, the process has low variation but is running off-center. This is often an easy problem to fix (e.g., adjust a machine setting).
- A value of Cpk < 1.0 means the process is not capable of meeting specifications (i.e., it is producing defects).
- A value of Cpk ≥ 1.33 is often considered a minimum requirement for a capable process in many industries.
- A Six Sigma process has a short-term Cp of 2.0; even allowing for a 1.5 sigma shift in the mean (a long-term Cpk of 1.5), it produces fewer than 3.4 defects per million opportunities.
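As a small illustration of the Cp and Cpk formulas above, here is a minimal Python sketch; the specification limits, process mean, and standard deviation are hypothetical values chosen to show a process with low variation that runs off-center (high Cp, low Cpk).

```python
def process_capability(usl: float, lsl: float, mean: float, sigma: float):
    """Compute Cp and Cpk from spec limits and the process mean/std dev."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))
    return cp, cpk

# Hypothetical example: spec 10.0 +/- 0.3, process running off-center at 10.2
cp, cpk = process_capability(usl=10.3, lsl=9.7, mean=10.2, sigma=0.05)
print(f"Cp  = {cp:.2f}")   # 2.00 -> low variation, potentially capable
print(f"Cpk = {cpk:.2f}")  # 0.67 -> off-center, actually not capable
```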
Chapter 7: Process of Continuous Improvement
Continuous improvement, or Kaizen, is a core principle of TQM. It is the philosophy of making small, incremental improvements on an ongoing basis. This section covers some of the key tools and methodologies used to foster this culture.
7.1 The PDCA Cycle
The PDCA (Plan-Do-Check-Act) or Deming Cycle is the engine of continuous improvement. It's a four-step iterative method for improving processes and products:
- Plan: Identify a problem or opportunity for improvement and develop a hypothesis or plan for a change.
- Do: Implement the change on a small scale (a pilot test).
- Check: Observe and analyze the results of the pilot test to see if the change had the desired effect.
- Act: If the change was successful, implement it on a wider scale and standardize the new process. If not, analyze what went wrong and begin the cycle again with a new plan.
7.2 Kanban System
Kanban, Japanese for "visual signal" or "card," is a scheduling system for lean and just-in-time (JIT) manufacturing. It is a "pull" system, meaning that production is triggered by customer demand rather than a forecast. It uses visual cues (like cards, bins, or electronic signals) to signal when more parts are needed, which helps to reduce inventory and overproduction.
7.3 Brainstorming and Gantt Charts
- Brainstorming: A technique used by groups to generate a large number of ideas for the solution of a problem. The key rule is to defer judgment to encourage creativity and participation. It is often used as the first step in creating a Cause-and-Effect Diagram.
- Gantt Chart: A project management tool that illustrates a project schedule. It is a bar chart that shows the start and finish dates of the various tasks of a project, helping to plan, coordinate, and track specific improvement projects.
Chapter 8: Packaging
Packaging plays a critical quality role by protecting a product from damage during handling, storage, and transportation. Packaging quality involves selecting the right materials and verifying their performance through testing.
8.1 Packaging Materials
The choice of material depends on the product, shipping method, and environmental conditions.
- Paper and Paperboard: Widely used for its low cost and recyclability. Corrugated boxes are common for shipping containers.
- Plastics: Offer versatility, moisture resistance, and strength. Used for films, foams, and rigid containers.
- Wood: Used for creating durable crates and pallets for heavy-duty shipping.
- Metal: Provides excellent protection but at a higher cost and weight. Used for drums and cans.
8.2 Load Testing Procedures
To ensure a package design is effective, it must undergo rigorous testing that simulates the hazards of the distribution environment.
- Vibration Test: Simulates the shocks and vibrations experienced during transport on a truck or train, which can cause component fatigue and abrasion damage.
- Drop Test: Simulates the manual and mechanical handling shocks a package may receive. The package is dropped from various heights onto its corners, edges, and faces.
- Impact Test: Simulates horizontal shocks, such as those that occur during railcar coupling or sudden stops of a truck.
- Compression Test: Determines the maximum load a container can withstand, which is critical for understanding how high packages can be stacked in a warehouse.
Chapter 9: Control Charts for Variables
Variable control charts are used for measurable data, such as length, weight, or temperature. They are powerful because they provide detailed information about both the central tendency and the variability of a process. The most common types are X-bar & R charts.
9.1 X-bar and R Chart (X̄-R)
This is a pair of charts used together to monitor a process when data is collected in rational subgroups of size 2 to 10.
- The X-bar (X̄) chart: Monitors the average or central tendency of the process. It is sensitive to shifts in the process mean.
- The R chart: Monitors the range or variability within subgroups. It is sensitive to changes in process variation.
The R chart must be in control before the X-bar chart can be properly interpreted. If the R chart is out of control, the control limits on the X-bar chart are meaningless.
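A minimal Python sketch of how the X̄ and R chart limits are typically computed from subgroup data. It uses the standard tabulated control-chart constants for a subgroup size of n = 5 (A2 ≈ 0.577, D3 = 0, D4 ≈ 2.114), which are not listed in these notes, and hypothetical measurement data.

```python
# X-bar and R chart limits for subgroups of size n = 5.
# Standard Shewhart constants for n = 5: A2 = 0.577, D3 = 0, D4 = 2.114.
A2, D3, D4 = 0.577, 0.0, 2.114

# Hypothetical subgroups (5 measurements each), e.g. shaft diameters in mm
subgroups = [
    [10.02, 9.98, 10.01, 10.00, 9.99],
    [10.03, 10.00, 9.97, 10.01, 10.02],
    [9.99, 10.01, 10.00, 9.98, 10.02],
]

xbars = [sum(s) / len(s) for s in subgroups]    # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges

xbarbar = sum(xbars) / len(xbars)               # grand average (X-bar chart CL)
rbar = sum(ranges) / len(ranges)                # average range (R chart CL)

print(f"X-bar chart: CL={xbarbar:.4f}, "
      f"UCL={xbarbar + A2 * rbar:.4f}, LCL={xbarbar - A2 * rbar:.4f}")
print(f"R chart:     CL={rbar:.4f}, "
      f"UCL={D4 * rbar:.4f}, LCL={D3 * rbar:.4f}")
```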
9.2 Other Variable Charts
- X-bar and S Chart (X̄-S): Used when the subgroup size is large (n > 10). The S chart plots the standard deviation of the subgroup and is a more efficient estimator of variability than the range.
- Individuals and Moving Range Chart (I-MR): Used when data is not collected in subgroups (n=1), such as for automated testing where every unit is measured. The I chart plots the individual values, and the MR chart plots the moving range between consecutive observations.
Chapter 10: Control Charts for Attributes
Attribute control charts are used for count data, where items are classified as either conforming or nonconforming (go/no-go). They are generally less sensitive than variable charts but are often easier to implement.
10.1 Charts for Nonconforming Units (Defectives)
- P-Chart (Proportion): Used to monitor the proportion of defective items in a sample. It is used when the sample size (n) varies.
- NP-Chart (Number): Used to monitor the number of defective items in a sample. It requires a constant sample size (n). It is often preferred over the p-chart because counts are easier for operators to interpret than proportions.
10.2 Charts for Nonconformities (Defects)
- C-Chart (Count): Used to monitor the number of defects in a sample of constant size (e.g., the number of blemishes on a car door).
- U-Chart (Rate): Used to monitor the number of defects per unit. It is used when the sample size can vary (e.g., the number of defects per square meter of fabric).
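For the c-chart case, the 3-sigma limits are conventionally c̄ ± 3√c̄, with the lower limit floored at zero. A minimal Python sketch with hypothetical defect counts:

```python
import math

# Hypothetical defect counts per inspected unit (constant inspection unit size)
defect_counts = [4, 2, 5, 3, 6, 1, 4, 3, 2, 5]

c_bar = sum(defect_counts) / len(defect_counts)   # center line
ucl = c_bar + 3 * math.sqrt(c_bar)
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))      # counts cannot be negative

print(f"c-chart: CL={c_bar:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
for i, c in enumerate(defect_counts, start=1):
    if not (lcl <= c <= ucl):
        print(f"Sample {i} ({c} defects) is out of control")
```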
Chapter 11: Acceptance Sampling
Acceptance sampling is a method used to make a decision about accepting or rejecting a large batch (or lot) of material based on the inspection of a smaller sample taken from that lot. It does not control or improve quality, but rather serves as an auditing tool to prevent a large number of defective items from entering a process or reaching a customer.
11.1 Key Terms and Concepts
- Acceptable Quality Level (AQL): The maximum percent defective that, for purposes of sampling inspection, can be considered satisfactory as a process average. It's the level of quality the producer aims for.
- Lot Tolerance Percent Defective (LTPD): The poorest level of quality (highest percent defective) the consumer is willing to tolerate in an individual lot. The consumer wants lots at or worse than this level to be rejected.
- Producer's Risk (α): The risk of having a good lot (at or better than AQL) rejected by the sampling plan. Typically around 5%.
- Consumer's Risk (β): The risk of accepting a bad lot (at or worse than LTPD). Typically around 10%.
11.2 The Operating Characteristic (OC) Curve
An OC curve is a graph that shows the performance of a sampling plan. It plots the probability of accepting a lot versus the actual percent defective in the lot. A steep OC curve is desirable, as it better discriminates between good and bad lots.
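A minimal Python sketch of how points on an OC curve can be tabulated for a single sampling plan, assuming the lot is large enough that a binomial model for the sample is reasonable; the plan parameters (n = 80, acceptance number c = 2) are hypothetical.

```python
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Probability of accepting a lot with true fraction defective p,
    under a single sampling plan (sample size n, acceptance number c)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan: n = 80, c = 2; tabulate a few points of the OC curve
for p in (0.005, 0.01, 0.02, 0.05, 0.08):
    print(f"p = {p:.3f}  ->  P(accept) = {prob_accept(80, 2, p):.3f}")
```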
11.3 Types of Sampling Plans
- Single Sampling Plan: One sample is taken from the lot. The decision to accept or reject is based solely on this one sample.
- Double Sampling Plan: A smaller initial sample is taken. A decision is made to accept, reject, or take a second sample based on the results.
- Multiple Sampling Plan: An extension of double sampling, where more than two samples may be required to reach a decision.
Chapter 13: Reliability
Reliability is the probability that a product, component, or system will perform its intended function for a specified period of time under a given set of operating conditions. It is a key dimension of quality that focuses on performance over time.
13.1 The Bathtub Curve
The "bathtub curve" is a graphical representation of the failure rate of a product over its lifespan. It consists of three phases:
- Infant Mortality (Decreasing Failure Rate): Early failures caused by manufacturing defects or substandard components.
- Useful Life (Constant Failure Rate): Random failures occur, but the failure rate is low and stable. Reliability engineering aims to extend this period.
- Wear-out (Increasing Failure Rate): The failure rate increases as the product ages and components begin to fatigue and wear out.
13.2 Key Metrics: MTBF and MTTF
- Mean Time Between Failures (MTBF): Used for repairable systems. It is the average operating time between successive failures.
MTBF = Total Operating Time / Number of Failures
- Mean Time To Failure (MTTF): Used for non-repairable components. It is the average time a component is expected to last.
13.3 System Reliability
The reliability of a system depends on the reliability of its components and how they are configured.
- Series System: All components must function for the system to function. The system reliability is the product of the individual component reliabilities.
Rs = R1 * R2 * ... * Rn
A series system is always less reliable than its least reliable component.
- Parallel System: The system functions as long as at least one of its components functions. This configuration introduces redundancy.
Rs = 1 - (1 - R1) * (1 - R2) * ... * (1 - Rn)
A parallel system is always more reliable than its most reliable component.
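A minimal Python sketch of the series and parallel reliability formulas above, using hypothetical component reliabilities:

```python
from math import prod

def series_reliability(reliabilities):
    """All components must work: Rs = R1 * R2 * ... * Rn."""
    return prod(reliabilities)

def parallel_reliability(reliabilities):
    """At least one component must work: Rs = 1 - (1-R1)(1-R2)...(1-Rn)."""
    return 1 - prod(1 - r for r in reliabilities)

# Hypothetical components with individual reliabilities of 0.95, 0.90, 0.99
components = [0.95, 0.90, 0.99]
print(f"Series:   Rs = {series_reliability(components):.4f}")    # ~0.8465
print(f"Parallel: Rs = {parallel_reliability(components):.5f}")  # ~0.99995
```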
Chapter 14: Failure Mode and Effects Analysis (FMEA)
FMEA is a systematic, proactive method for identifying and preventing potential problems in product designs (Design FMEA) or manufacturing processes (Process FMEA) before they can occur. It is a structured risk assessment tool.
14.1 The FMEA Process
An FMEA is conducted by a cross-functional team and follows these general steps:
- Identify Potential Failure Modes: For each process step or design component, brainstorm how it could potentially fail to meet its requirements.
- Identify Potential Effects: For each failure mode, determine the consequences or effects on the customer.
- Identify Potential Causes: For each failure mode, determine the root causes that could lead to it.
- Rate Severity, Occurrence, and Detection (SOD):
- Severity (S): How serious is the effect on the customer? (1=Not severe, 10=Catastrophic)
- Occurrence (O): How likely is the cause to occur? (1=Very unlikely, 10=Almost certain)
- Detection (D): How likely are you to detect the cause or failure mode before it reaches the customer? (1=Very likely to detect, 10=Very unlikely to detect)
- Calculate the Risk Priority Number (RPN):
RPN = S × O × D
The RPN is used to prioritize risks for corrective action.
- Develop and Implement Action Plans: The team focuses on taking action to reduce the highest RPNs, primarily by improving detection or, ideally, by preventing the cause from occurring.
- Recalculate the RPN: After actions are taken, the RPN is recalculated to confirm the risk has been reduced.
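A minimal Python sketch of the RPN calculation and prioritization step; the failure modes and their S/O/D ratings are hypothetical.

```python
# Hypothetical Process FMEA entries: (failure mode, severity, occurrence, detection)
fmea_rows = [
    ("Wrong fastener torque", 7, 4, 6),
    ("Missing O-ring",        9, 2, 3),
    ("Label misprint",        3, 5, 2),
]

# RPN = S x O x D; sort descending so the highest-risk items are addressed first
ranked = sorted(
    ((mode, s * o * d) for mode, s, o, d in fmea_rows),
    key=lambda row: row[1],
    reverse=True,
)

for mode, rpn in ranked:
    print(f"RPN {rpn:>3}  {mode}")
```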
Chapter 15: Kaizen (Continuous Improvement)
Kaizen is a Japanese philosophy that means "change for the better" or "continuous improvement." It is a long-term approach that systematically seeks to achieve small, incremental changes in processes in order to improve efficiency and quality. It involves everyone in the organization, from top management to the shop floor.
15.1 The 5S Methodology: A Foundation for Kaizen
5S is a workplace organization method that uses five Japanese words. It is often the starting point for Kaizen because it creates a clean, organized, and efficient workplace where problems are more visible.
- Seiri (Sort): Go through all items in a workspace and remove everything that is not necessary.
- Seiton (Set in Order): Arrange necessary items in a neat and logical way so they are easy to find and use. "A place for everything, and everything in its place."
- Seiso (Shine): Clean the workspace and equipment. This often helps to identify problems like leaks or cracks.
- Seiketsu (Standardize): Create standards and procedures to maintain the first three S's.
- Shitsuke (Sustain): Make 5S a habit and part of the daily work culture to ensure long-term success.
Chapter 16: Poka-Yoke (Mistake-Proofing)
Poka-Yoke is a Japanese term that means "mistake-proofing" or "error-proofing." It is any mechanism in a process that helps an operator avoid (yokeru) mistakes (poka). Its purpose is to eliminate defects by preventing, correcting, or drawing attention to human errors as they occur.
16.1 Types of Poka-Yoke
- Control (Shutdown) Type: This is the strongest form. It physically stops the process when an error occurs, making it impossible to create a defect. For example, a fixture that prevents a part from being loaded incorrectly.
- Warning (Attention) Type: This type signals that an error has been made, typically with a buzzer or a light, but does not stop the process. It relies on the operator to take corrective action.
16.2 Examples of Poka-Yoke
- A USB cable that can only be plugged in one way.
- A car that will not start unless the clutch is pressed.
- Microwave ovens that do not operate while the door is open.
Chapter 17: Measurement System Analysis (MSA)
MSA is an experimental and mathematical method of determining the amount of variation that exists within a measurement system. It is a critical first step before analyzing process data, because if your measurement system is not reliable, you cannot trust your data.
17.1 Sources of Measurement Variation
The total observed variation in a process is a combination of the actual process variation and the measurement system variation. MSA focuses on quantifying the measurement system component, which is often called Gage R&R.
- Repeatability (Equipment Variation): The variation observed when the *same operator* measures the *same part* multiple times with the *same gage*. It is the inherent variation of the gage itself.
- Reproducibility (Appraiser Variation): The variation observed when *different operators* measure the *same part* with the *same gage*. It is the variation due to differences between operators.
17.2 Gage R&R Study and Acceptance Criteria
A Gage R&R study is a designed experiment to quantify these sources of variation. The total Gage R&R is compared to the total process variation. A common guideline is:
- Under 10%: The measurement system is acceptable.
- 10% to 30%: The system may be acceptable based on the importance of the application and the cost of improvement.
- Over 30%: The measurement system is unacceptable and needs to be improved.
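A minimal Python sketch of how a %GRR figure is checked against the guideline above, assuming the repeatability, reproducibility, and total standard deviations have already been estimated from a study; the root-sum-of-squares combination of the two gage components follows the usual study-variation approach, and the numbers are hypothetical.

```python
import math

def grr_percent(sd_repeatability: float, sd_reproducibility: float,
                sd_total: float) -> float:
    """%GRR as measurement-system std dev over total observed std dev."""
    sd_grr = math.sqrt(sd_repeatability**2 + sd_reproducibility**2)
    return 100 * sd_grr / sd_total

def classify(grr: float) -> str:
    if grr < 10:
        return "acceptable"
    if grr <= 30:
        return "marginal - may be acceptable depending on application"
    return "unacceptable - improve the measurement system"

# Hypothetical standard deviations from a completed Gage R&R study
grr = grr_percent(sd_repeatability=0.012, sd_reproducibility=0.009,
                  sd_total=0.080)
print(f"%GRR = {grr:.1f}% -> {classify(grr)}")
```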
Chapter 21: Advanced Concepts
21.1 Taguchi Loss Function
Developed by Dr. Genichi Taguchi, the Taguchi Loss Function is a concept that challenges the traditional "goalpost" view of quality (i.e., a product is good if it's within specifications). Taguchi argued that any deviation from the target value, even if it is within the specification limits, results in a "loss" to society (e.g., lower performance, shorter life, customer dissatisfaction).
The loss function is typically a quadratic equation: L(x) = k(x - T)², where L(x) is the loss, x is the actual value, T is the target value, and k is a cost constant. This means that the loss increases quadratically as the characteristic deviates from its target. This philosophy emphasizes the importance of reducing variation and consistently hitting the target value, not just staying within the specs.
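A minimal Python sketch of the quadratic loss function; the target, the cost constant k, and the measured values are hypothetical.

```python
def taguchi_loss(x: float, target: float, k: float) -> float:
    """Quadratic loss: L(x) = k * (x - target)**2."""
    return k * (x - target) ** 2

# Hypothetical: target = 10.0 mm, k = 500 (currency units per mm^2 of deviation)
for x in (10.0, 10.1, 10.2, 10.3):
    print(f"x = {x:.1f}  ->  loss = {taguchi_loss(x, target=10.0, k=500):.2f}")
```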
21.2 Quality Function Deployment (QFD)
QFD is a structured approach for translating customer requirements (the "Voice of the Customer") into design and production requirements at each stage of product development. The primary tool of QFD is the House of Quality, a matrix that connects customer wants with the technical "hows" of fulfilling them. It is a powerful tool for cross-functional product planning.
Previous Year Questions & Answers
1. Define "Quality is Fitness for use" with Example.
Definition: The concept "Quality is Fitness for Use," coined by quality pioneer Joseph Juran, is a customer-centric definition of quality. It means that a product or service possesses quality if it successfully serves the purpose for which the customer intends to use it, meeting their needs and expectations. This definition encompasses not just freedom from defects, but also the product's design, features, reliability, and serviceability.
Example: Consider two smartphones. Phone A is a high-end model with a professional camera and a high price. Phone B is a budget model with a basic camera. If a professional photographer buys Phone A and it takes stunning, high-resolution photos, it is "fit for use." If a student buys Phone B primarily for calls, texts, and basic apps, and it performs these functions reliably, it is also "fit for use." However, if Phone A's battery drains in two hours, or Phone B's screen is unresponsive, neither would be fit for use, because they fail to perform their intended function for the customer.
2. Briefly Illustrate a cycle of PDCA for improving your academic performance.
The PDCA (Plan-Do-Check-Act) cycle is a systematic approach to continuous improvement. Here’s how it can be applied to academic performance:
- PLAN:
- Identify the Problem: "My grade in Quality Control is lower than I want."
- Analyze the Root Cause: Use a simple Cause-and-Effect analysis. Perhaps the cause isn't "not studying enough," but more specifically, "I don't understand the statistical formulas."
- Develop a Plan (a testable hypothesis): "I will spend one hour every Tuesday and Thursday specifically practicing the statistical problems from the textbook for the next three weeks."
- DO: Implement the plan. Execute the study sessions as planned, without deviation, for the three-week period. Keep a log of the problems you solve and any difficulties you encounter.
- CHECK: After three weeks, assess the results. Take a practice exam or review your performance on a recent quiz. Compare your score and understanding now to your level before you started the plan. Did your scores improve? Do you feel more confident with the formulas?
- ACT:
- If the plan worked: Standardize the improvement. Make this focused study session a permanent part of your weekly routine for the rest of the semester.
- If the plan didn't work: Analyze why. Perhaps the issue was a lack of understanding of the theory behind the formulas. In that case, abandon the original plan, and start a new PDCA cycle with a new plan: "I will meet with a study group to review the theory before attempting the problems."
3. Discuss about the requirement for the successful implementation of a quality circle.
For a Quality Circle to be successful and not just a short-lived fad, several key requirements must be met:
- Management Support and Commitment: This is the most critical factor. Management must provide not just verbal support, but also tangible resources like meeting time during work hours, a proper meeting place, and access to necessary information. Most importantly, management must create a system to seriously review and, where appropriate, implement the circle's suggestions.
- Voluntary Participation: Members must feel that they are joining the circle by their own choice. Forcing employees to participate breeds resentment and stifles creativity.
- Training: Circle members and their leader must be properly trained in group dynamics, communication skills, and, crucially, in the systematic problem-solving tools (the 7 QC Tools). Without these tools, meetings can devolve into unstructured complaint sessions.
- Clear Objectives and Focus: The circle should be empowered to work on problems within its own work area. This gives them a sense of ownership and ensures they are tackling issues they understand best. The focus should be on improving their own processes, not on criticizing other departments.
- Recognition: The efforts and achievements of the Quality Circle must be formally and publicly recognized by management. This provides motivation and reinforces the value of their contributions to the organization.
4. Draw an Ishikawa diagram for the problem of "degrading quality in a chain fast food shop."
The "effect" or problem is "Degrading Food Quality." The main causes, or "bones" of the fish, are categorized using the 6Ms framework:
- Manpower (People):
- Lack of proper training on recipes and procedures.
- High employee turnover leading to inexperienced staff.
- Low morale or lack of motivation.
- Poor personal hygiene.
- Method (Process):
- Inconsistent or unclear recipes.
- Incorrect cooking times or temperatures being used.
- Food left sitting out too long under heat lamps.
- First-In-First-Out (FIFO) not being followed for ingredients.
- Machine (Equipment):
- Faulty ovens, fryers, or grills with incorrect temperature calibration.
- Poor or infrequent maintenance of equipment.
- Outdated or inefficient equipment.
- Material (Ingredients):
- Switching to a lower-quality, cheaper supplier for ingredients.
- Improper storage of ingredients (e.g., wrong temperature).
- Inconsistent portioning of ingredients.
- Measurement:
- Thermostats and timers are not calibrated correctly.
- Portion control tools (scoops, scales) are inaccurate or not used.
- No quality checks or taste tests are performed.
- Environment:
- A rushed, high-pressure work environment leading to mistakes.
- Poor lighting or cramped workspace.
- Inadequate cleaning procedures.
8. Using a P-chart with 3σ limits, check whether the process is in control or out of control, and explain the situation.
Here is the detailed step-by-step analysis to determine if the process is in statistical control using a P-chart.
1. Given Data:
- Number of samples (k) = 22
- Sample size (n) = 60 lights (This is constant for all samples)
- Total number of failures (Σnp) = 3+2+3+2+3+2+5+1+8+2+1+4+2+3+6+1+2+3+2+1+1+2 = 59
2. Calculate the Center Line (p-bar):
The center line (p̄) represents the average proportion of defective items across all samples.
p̄ = (Total Failures) / (Total Items Inspected) = 59 / (22 * 60) = 59 / 1320 ≈ 0.0447
3. Calculate the 3-Sigma Control Limits (UCL and LCL):
The formulas for the control limits for a p-chart are:
UCLp = p̄ + 3 * sqrt(p̄ * (1 - p̄) / n)
LCLp = p̄ - 3 * sqrt(p̄ * (1 - p̄) / n)
Calculation:
UCLp = 0.0447 + 3 * sqrt(0.0447 * (1 - 0.0447) / 60)
UCLp = 0.0447 + 3 * sqrt(0.0427 / 60)
UCLp = 0.0447 + 3 * sqrt(0.000712)
UCLp = 0.0447 + 3 * 0.0267
UCLp = 0.0447 + 0.0800 ≈ 0.1247
LCLp = 0.0447 - 0.0800 ≈ -0.0353
Since the calculated Lower Control Limit (LCL) is negative, it is set to 0, as a proportion cannot be negative.
So, the control limits are: UCL = 0.1247 and LCL = 0.
4. Calculate the Proportion (p) for Each Sample and Plot:
Now, we calculate the proportion defective (p = number of failures / 60) for each of the 22 samples and check if any fall outside our control limits.
- Sample 1: p = 3/60 = 0.050 (In control)
- ... (all samples are checked) ...
- Sample 9: p = 8/60 = 0.133 (OUT OF CONTROL)
- ... (all other samples are checked) ...
- Sample 15: p = 6/60 = 0.100 (In control)
5. Conclusion and Explanation:
The process is OUT of statistical control. The reason is that the data point for Sample 9, with a proportion defective of 0.133, falls above the Upper Control Limit (UCL) of 0.1247.
This point represents a "signal" of an assignable (or special) cause of variation. It indicates that something unusual and detrimental happened during the production period when Sample 9 was taken. The situation requires immediate investigation by the quality team to identify the root cause of this spike in failures (e.g., a bad batch of filaments, a problem with a sealing machine, an untrained operator) and take corrective action to prevent it from recurring. The process cannot be considered stable or predictable until this special cause is eliminated.
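The same calculation can be reproduced with a short Python sketch using the 22 failure counts and the sample size given in the problem:

```python
import math

failures = [3, 2, 3, 2, 3, 2, 5, 1, 8, 2, 1, 4, 2, 3, 6, 1, 2, 3, 2, 1, 1, 2]
n = 60  # constant sample size

p_bar = sum(failures) / (len(failures) * n)     # center line
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma_p
lcl = max(0.0, p_bar - 3 * sigma_p)             # a proportion cannot be negative

print(f"p-bar = {p_bar:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")
for i, count in enumerate(failures, start=1):
    p = count / n
    if not (lcl <= p <= ucl):
        print(f"Sample {i}: p = {p:.3f} is OUT OF CONTROL")
```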