Case Study

Eliminating False Scrap Through Vision System Recalibration

Summary

In high-volume mass production, even a few percent of scrap can translate into millions in losses. At one production line, a vision inspection system that checked parts after sintering rejected a persistent 5% of output, causing significant cost and frustration. Despite the system’s reputation for reliability, the scrap rate never improved and seemed to be “built in.”

Through systematic data analysis, creative hypothesis testing, and validation against golden standards, the issue was uncovered as a false fail caused by calibration drift. Fixing it restored the line to 0% vision-related scrap and saved millions annually.


The Problem

    • After sintering, parts were automatically checked by a vision system that measured their position.

    • Roughly 5% of parts were flagged as defective — every single day.

    • Since sintering is irreversible, these flagged parts could not be reworked and went straight to scrap.

    • Over time, this became a stable but costly “baseline” loss for the product.

    Closer analysis of the data showed something unusual:

    • All failed parts were marginal, sitting just beyond the upper specification limit (USL).

    • The entire Y-position population was shifted upward compared to results from an identical production line running the same product.

    This raised a critical question: were these true defects, or was the vision system itself at fault?
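The two data signals above (fails clustered just past the USL, and a whole-population shift relative to the sister line) can be checked with a few lines of analysis. A minimal sketch, with entirely hypothetical readings, spec limit, and margin:

```python
import statistics

USL = 10.0     # hypothetical upper spec limit for Y-position (units arbitrary)
MARGIN = 0.05  # "marginal" = within this distance above the USL

# Hypothetical Y-position readings from the failing line and its sister line
failing_line = [9.92, 9.97, 10.01, 10.03, 9.95, 10.02, 9.98, 10.04]
sister_line  = [9.88, 9.90, 9.93, 9.91, 9.89, 9.94, 9.92, 9.90]

# Are all fails sitting just beyond the USL?
fails = [y for y in failing_line if y > USL]
marginal = [y for y in fails if y <= USL + MARGIN]

# Is the whole population shifted relative to the sister line?
shift = statistics.mean(failing_line) - statistics.mean(sister_line)

print(f"fails: {len(fails)}, all marginal: {len(marginal) == len(fails)}")
print(f"mean shift vs sister line: {shift:+.3f}")
```

With this toy data, every fail lies within the margin above the USL and the failing line's mean sits above the sister line's, which is the pattern that pointed toward a systematic bias rather than random defects.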

The Investigation

To understand the problem, I began by separating machine conditions from product behavior:

  1. Product data review: I analyzed thermal profiles across different runs. The trend confirmed that parts consistently failed to reach a stable target temperature.

  2. Machine sensor analysis: I compared the internal heating block sensor readings with the product’s actual measured temperature. A mismatch indicated poor thermal transfer.

  3. Hypothesis testing: Initial adjustments of heating profiles and setpoints were attempted, but no sustainable improvement was seen.
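The sensor comparison in step 2 amounts to checking the offset between the heating-block sensor and the product's measured temperature. A minimal sketch, with hypothetical readings and an assumed acceptance limit:

```python
# Hypothetical paired readings: heating-block sensor vs. product temperature (degrees C)
block_sensor = [212.0, 211.8, 212.1, 211.9]
product_temp = [203.5, 203.1, 203.8, 203.4]

MAX_DELTA = 5.0  # assumed acceptable block-to-product offset

# Mean offset between what the machine thinks and what the product sees
deltas = [b - p for b, p in zip(block_sensor, product_temp)]
mean_delta = sum(deltas) / len(deltas)

verdict = "OK" if mean_delta <= MAX_DELTA else "poor thermal transfer"
print(f"mean block-to-product delta: {mean_delta:.1f} C ({verdict})")
```

In this toy data the offset exceeds the assumed limit, flagging the thermal-transfer mismatch described above.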

At this point, the issue remained hidden. Standard design-of-experiments (DOE) runs and parameter shifts did not reveal the underlying cause.

The Turning Point

    • Challenge the assumption: Instead of accepting the 5% scrap as genuine, I asked: what if these are false fails?

    • Bypass test: We decided to let a small batch of failed parts continue downstream. The risk was controlled — any real defect would show up later in the process.

      • Result: every part passed further checks with no issues.

      • This was the first strong sign that the failures were not real.

    • Comparative analysis:

      • I compared measurement distributions from the failing line with those from a “healthy” sister line.

      • The failing line showed a clear upward shift in the Y-position distribution.

      • This suggested a systematic bias, not random defects.

    • Golden tool validation:

      • To be certain, we used a golden tool (reference standard) and measured it both with the vision system and in the lab.

      • The lab confirmed the correct position.

      • The vision system showed the same drift seen in production — confirming it was the measurement system that had shifted, not the product.
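The golden-tool check reduces to estimating the vision system's bias against the lab reference. A minimal sketch, with hypothetical reference value, bias limit, and readings:

```python
LAB_REFERENCE = 9.900  # golden tool's Y-position as certified by the lab (hypothetical)
BIAS_LIMIT = 0.010     # assumed acceptable measurement bias

# Repeated vision-system measurements of the same golden tool
vision_readings = [9.982, 9.979, 9.984, 9.980, 9.981]

# Bias = average vision reading minus the known true position
bias = sum(vision_readings) / len(vision_readings) - LAB_REFERENCE
drifted = abs(bias) > BIAS_LIMIT

status = "DRIFT: recalibrate" if drifted else "within limit"
print(f"vision bias vs lab: {bias:+.3f} -> {status}")
```

Because the golden tool's true position is known, any disagreement is attributable to the measurement system itself, which is what isolated the drift from the product.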

The Breakthrough

With hard data proving the issue was in the vision system, the solution became clear: recalibration.

  • The vision machine was recalibrated against the golden tool standard.

  • Immediately, the population shift disappeared.

  • Subsequent runs aligned perfectly with expected measurement values.
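The effect of recalibration can be illustrated as removing the measured bias from the reported positions. A minimal sketch, with a hypothetical bias value and readings:

```python
bias = 0.081  # measurement bias found via the golden tool (hypothetical value)
USL = 10.0    # hypothetical upper spec limit for Y-position

# Positions as reported before recalibration
raw_readings = [10.01, 10.03, 9.98, 10.02]

# After recalibration, the systematic offset is gone
corrected = [round(y - bias, 3) for y in raw_readings]

print("rejects before:", sum(y > USL for y in raw_readings))  # 3
print("rejects after: ", sum(y > USL for y in corrected))     # 0
```

Once the offset is removed, the formerly marginal parts fall back inside the spec limit, which matches the drop to zero vision-related rejects reported below.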

The Result

  • Vision-related rejects dropped from a steady 5% → 0%.
  • Scrap costs were eliminated for this failure mode.
  • Overall yield on the line increased by 5 percentage points, directly translating into multi-million savings per year.
  • The recalibration restored confidence in the automated inspection system among engineers and operators, who had previously considered the losses “normal.”

Lessons Learned

  • Don’t accept chronic scrap as inevitable — always validate if the measurement system is truly accurate.

  • Compare across lines: Sister lines running the same product can be invaluable benchmarks.

  • Golden tool checks are essential: Even trusted vision systems can drift over time and create false failures.

  • Small shifts in data patterns can hide large costs: What looked like minor marginal failures was in fact a systematic metrology error.

Conclusion

This case demonstrated the value of questioning assumptions and validating data at every level. What appeared to be a permanent 5% yield loss was actually a false scrap problem caused by measurement drift. By recalibrating the system, the factory not only eliminated waste but also saved millions while strengthening trust in automated inspection.

Facing a critical Yield / Quality issue?

I specialize in rapid root cause analysis and execution support. 

Let’s bring your line back on track.