Computer Vision Defect Detection For an Electronics Manufacturer

How switching from a rule-based machine vision system to an advanced computer vision solution for early-stage defect detection on the production line helped an electronics manufacturer reach 97.4% accuracy in identifying flaws and save $2.4 million annually through a drop in warranty claims.

Business challenge

Our client is a Saudi-based electronics manufacturer producing laptops for the MEA region in partnership with one of the global tech leaders. Years earlier, we had created digital twins of their factories, so when new challenges surfaced, the client turned to us again. 

The company had long since moved from fully manual defect detection on the production line to automated, non-contact product quality inspection. However, the automated optical inspection (AOI) at the core of their conveyor video surveillance system was too demanding, rigid, and costly, imposing numerous limitations:

  • Overdependence on consistent imaging. The system couldn’t function efficiently unless lighting and camera positioning were stable.
  • Inflexibility. AOI operated on strict “if-else” rules, making the system unable to perform continuous trend analysis and adjust dynamically to enhance both speed and accuracy of defect detection. 
  • Slow adaptability. Every time the component configuration, size, or color changed, the system required recalibration. 
  • Low scalability. One AOI system couldn’t simply be replicated across multiple factories: each production line would require extensive manual parameter tuning. 
  • Costly maintenance. Traditional machine vision systems like AOI require highly specialized experts for routine inspections, regular calibrations, software updates, and hardware repairs. 

The client needed a more flexible, scalable, and budget-friendly form of computer vision at the core of their conveyor video surveillance system.

Solution

Our development team ensured the smooth shift from basic AOI to advanced computer vision.

  1. Shaping up the tech stack

We decided to rely on a pre-trained object detection and image classification model. Then, we had to choose between single- and two-stage detectors to perform object detection. The scales tipped in favor of a single-stage algorithm due to several advantages:

  • Faster inference
  • Lower computational demand
  • Easier maintenance

Having experience with various single-stage detectors, we settled on YOLO over CenterNet, SSD, RetinaNet, and FCOS, as it offers superior inference speed, ensuring truly real-time performance. Although YOLOv10 was already available, we chose YOLOv8 as the most tested and stable version at the time.

  2. Optimizing the training dataset

The client’s previous pre-computer vision system relied on a three-tier defect classification framework: “definitely defective,” “borderline,” and “definitely non-defective”. Rather than discarding the existing dataset, we capitalized on it by strengthening defect coverage and enforcing consistent, high-quality labeling.

Augmenting data to address the class imbalance issue

The original dataset was limited in size, with a maximum of 200 images per defect category. In addition, different defect types weren’t represented equally. For example, issues such as missing components and incorrect placement appeared far more frequently than subtler issues like misalignment, solder defects, open circuits, or lifted leads.

To address both the small dataset size and class imbalance, we applied data augmentation techniques:

  • Random photo cropping and flipping
  • Controlled blurring and noise injection
  • Tweaking lighting
  • Changing the background

That way, we multiplied the dataset tenfold while ensuring balanced representation across all defect categories.
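The augmentations above can be sketched in a few lines of plain Python; a real pipeline would apply equivalent transforms via a library such as Albumentations or torchvision, and the function names and parameters here are illustrative, not the project's actual configuration:

```python
import random

# Toy grayscale "image" as a list of pixel rows (values 0-255).
# Each helper mirrors one of the augmentation techniques listed above.

def hflip(image):
    """Mirror the image horizontally (random flipping)."""
    return [list(reversed(row)) for row in image]

def add_noise(image, amplitude=10, seed=0):
    """Inject uniform pixel noise, clamped to the valid 0-255 range."""
    rng = random.Random(seed)
    return [
        [min(255, max(0, px + rng.randint(-amplitude, amplitude))) for px in row]
        for row in image
    ]

def adjust_brightness(image, delta):
    """Shift all pixels by delta to mimic lighting changes."""
    return [[min(255, max(0, px + delta)) for px in row] for row in image]

image = [[10, 20, 30], [40, 50, 60]]
augmented = [hflip(image), add_noise(image), adjust_brightness(image, 25)]
```

Applying several such transforms per source photo is what lets a small dataset grow roughly tenfold while keeping every variant a plausible production-line image.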

Improving labeling consistency and reliability

Given the diversity of subtle defects, high-quality labeling was a must-have. Our AI engineers focused on refining the labeling process to: 

  • Reduce annotation noise
  • Improve consistency across classes
  • Enable more statistically reliable quality assessments

In addition, we ensured the dataset included images both with and without the target object, improving the model’s ability to distinguish true defects from background artifacts and, thus, reducing the false positive rate.

[Image: close-ups of green circuit boards with components and solder points, and a metallic panel with blue-outlined regions marking inspection areas]

More consistent labeling enhances the reliability of bounding boxes, teaching the model to distinguish every little part and defect as a separate entity, rather than group several objects within a single bounding box.


  3. Running model transfer learning on the enhanced dataset

Object detection and classification models like YOLO come pre-trained on millions of general images. However, to accurately address the client’s specific needs, the model had to undergo additional training on the updated dataset.

Instinctools’ AI engineers took an efficient approach to this challenge, using transfer learning as an ML training technique to improve the model’s performance:

  1. Started with a pretrained YOLO model
  2. Used PyTorch to remove the default classifier head
  3. Attached a new classifier head and fine-tuned it on the enhanced dataset

After that, YOLO was able to instantly detect and classify all the specific defects relevant to the client’s production line.
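The three steps above can be sketched in miniature. This is a deliberately simplified, dependency-free illustration of the idea (keep the pretrained backbone, swap in a freshly initialised head for the client-specific classes, freeze everything but the head); the layer names, defect classes, and weight shapes are assumptions, not YOLO's actual internals:

```python
import random

# Illustrative defect classes; the client's real taxonomy is not public.
DEFECT_CLASSES = ["missing_component", "misalignment", "solder_bridge", "lifted_lead"]

def build_transfer_model(pretrained_backbone, classes, seed=0):
    """Reuse the backbone as-is, attach a new randomly initialised head,
    and mark only the head as trainable (the backbone stays frozen)."""
    rng = random.Random(seed)
    head = {c: [rng.uniform(-0.01, 0.01) for _ in range(4)] for c in classes}
    return {
        "backbone": pretrained_backbone,                  # pretrained, reused
        "head": head,                                     # replaced for new classes
        "trainable": {"backbone": False, "head": True},   # fine-tune head only
    }

# Stand-in for weights learned on a large general-purpose dataset.
pretrained = {"stem": [0.2, -0.1], "stage1": [0.05, 0.3]}
model = build_transfer_model(pretrained, DEFECT_CLASSES)
```

In practice, the same pattern is executed with PyTorch modules: load the pretrained checkpoint, replace the detection head so its output dimension matches the new class count, and fine-tune on the task-specific dataset.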

What is transfer learning in computer vision?
 
Transfer learning is a machine learning technique that capitalizes on a pre-trained model’s general knowledge (the ability to detect different defects on various surfaces) instead of training the model from scratch.
 
With transfer learning, you only retrain the model on your task-specific dataset (examples of external and internal defects of the client’s laptops), enabling the model to adapt its existing knowledge to your reality. This method significantly reduces training time and computational resource consumption.
[Image: three-step visualization of defect detection on a circuit board — a red “Defect Detected” label, a red bounding box around the component, and a red overlay marking the defect’s location]

  4. Calibrating defect inspection thresholds 

Detecting a defect is only part of the process. Next, our dedicated team helped minimize the number of items wrongly discarded at the production stage through:

  • Raising the confidence threshold from 0.5 to 0.85 to prevent the model from flagging low-certainty cases as definite defects 
  • Setting up detailed defect severity scoring to minimize false positives without letting critical defects slip through
  • Adjusting Non-Maximum Suppression (NMS) parameters so that the model always keeps one clean, highest-confidence box per object instead of overlapping bounding boxes
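The confidence cut-off and NMS logic can be sketched in plain Python. YOLO frameworks implement both internally; the box format (x1, y1, x2, y2), the helper names, and the sample detections below are illustrative, while the 0.85 confidence threshold mirrors the value quoted above:

```python
def iou(a, b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def filter_detections(dets, conf_thresh=0.85, iou_thresh=0.5):
    """Drop low-confidence detections, then keep one box per object (NMS)."""
    dets = sorted((d for d in dets if d["conf"] >= conf_thresh),
                  key=lambda d: d["conf"], reverse=True)
    kept = []
    for d in dets:
        # Keep a box only if it doesn't heavily overlap an already-kept one.
        if all(iou(d["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(d)
    return kept

detections = [
    {"box": (0, 0, 10, 10), "conf": 0.95},    # kept: highest-confidence box
    {"box": (1, 1, 11, 11), "conf": 0.90},    # suppressed: overlaps the first
    {"box": (50, 50, 60, 60), "conf": 0.60},  # dropped: below the 0.85 cut-off
]
kept = filter_detections(detections)  # → only the 0.95 box survives
```

Raising the confidence threshold trades a few missed borderline detections for far fewer false alarms, which is exactly the lever used to cut wrongly discarded items.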

  5. Going the extra mile for near-100% defect detection accuracy 

After transfer learning and threshold calibration, the model’s accuracy reached 86.5%, already more than 15 percentage points higher than with the previous AOI system. Still, our team aimed for as close to 100% accuracy as possible.

Instinctools’ AI experts enhanced YOLO-based defect detection capabilities by applying:

[Diagram: YOLO network architecture with four stages — Input (640×640×3), Backbone (Focus, CBL, CSP, SPP), Neck (CSP, upsampling), and Prediction heads with multi-scale outputs (80×80×255, 40×40×255, 20×20×255)]
  • Depthwise Separable Convolution (DSConv) to accelerate the model’s inference speed
  • The Cross-Stage Partial Network (C3 module) to combine low-level detailed information with high-level semantic data, enhancing the model’s adaptability to target scale variations and improving detection accuracy
  • The Bidirectional Feature Pyramid Network (BiFPN) to enable the model to identify fine features of small targets, improving its recognition capability
  • The DySample upsampling operator to minimize detail loss and boost accuracy for small targets
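The parameter savings behind the DSConv choice can be shown with a quick count: a depthwise separable convolution factorizes a standard convolution into a per-channel (depthwise) pass and a 1×1 (pointwise) pass. The layer sizes below are illustrative, not the model’s actual dimensions:

```python
def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution mapping c_in -> c_out channels."""
    return c_in * c_out * k * k

def dsconv_params(c_in, c_out, k):
    """Weights in the depthwise-separable equivalent of the same layer."""
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution to mix channels
    return depthwise + pointwise

c_in, c_out, k = 128, 256, 3
saving = 1 - dsconv_params(c_in, c_out, k) / standard_conv_params(c_in, c_out, k)
# For this layer, DSConv needs 33,920 weights vs. 294,912 — a ~88% reduction,
# which is where the inference speed-up comes from.
```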

With these enhancements, the new CV defect detection system consistently hits the 97.4% accuracy benchmark.

  6. Integrating the CV mechanism into the client’s manufacturing execution system (MES)

The computer vision solution was installed at one of the client’s facilities. There, it underwent further model training based on collected metadata and outputs generated by the system during operation. Those adjustments helped align the system with real-world production conditions and replaced AOI at each critical juncture:

  • Post-solder paste application. The system verifies if paste volumes are adequate and properly aligned.
  • Post-component placement. The solution validates whether each component is present, oriented correctly, and positioned within acceptable tolerances.
  • Post-reflow. Software runs a final check for defects such as tombstoning, bridging, and cold solder joints.

The result is an all-encompassing, multi-class defect classification, with every defect categorized by type and severity.
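A multi-class output of this kind can be pictured as a record per detection, combining the inspection stage, defect type, and a severity bucket that drives the discard-or-rework decision. The type names and severity rules below are assumptions for illustration, not the client’s actual taxonomy:

```python
# Hypothetical mapping from defect type to severity; unknown types
# default to "minor" so nothing goes unclassified.
SEVERITY_RULES = {
    "open_circuit": "critical",
    "bridging": "critical",
    "tombstoning": "major",
    "cold_solder_joint": "major",
    "misalignment": "minor",
}

def classify_defect(stage, defect_type, confidence):
    """Build one classification record for a detected defect."""
    return {
        "stage": stage,            # e.g. "post-reflow"
        "type": defect_type,
        "severity": SEVERITY_RULES.get(defect_type, "minor"),
        "confidence": confidence,
    }

record = classify_defect("post-reflow", "bridging", 0.91)
# → {'stage': 'post-reflow', 'type': 'bridging', 'severity': 'critical', 'confidence': 0.91}
```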
[Flowchart: video input is processed frame by frame, objects are detected, defects are located and flagged, and results are checked before the process ends]

As the solution features cloud connectivity, our team can remotely perform model updates, configuration changes, and CV algorithm optimizations. 

Before

  • A rigid and costly machine vision system that has to be configured for each production line
  • Real-time defect detection is only possible under perfectly stable lighting 
  • 70% accuracy of defect detection
  • Items falsely discarded as defective

After

  • A flexible and highly scalable computer vision system that can be reused across production lines
  • Real-time defect detection under any lighting conditions
  • 97.4% accuracy of defect detection
  • < 0.5% discarded items

Business value

Operational impact:

  • + 27.4% accuracy of production line inspections 
  • + 24% in production throughput due to immediate defect detection
  • < 2% false positive rate
  • < 0.5% discarded items 
  • – 67% defect-related warranty claims

Financial impact:

  • – $1.2 million in waste-related costs
  • + $2.4 million annually due to the drop in warranty claims 
  • – 26% in quality control labor costs

Client’s testimonial

Every collaboration with *instinctools takes our factories to the next level. The new computer vision system for defect detection reimagined our quality control at production lines. The drop in discarded items below 0.5% alone fully justified the investment, but a year post-implementation, we also saw it translate into a multimillion-dollar reduction in warranty claims, exceeding our boldest expectations.

Lead Quality Engineer,
Electronics Manufacturer

Multiplier effect

Stronger quality-control systems on a production line lead to fewer defective products reaching customers and ruining their experience. With the stakes as high as one in three customers* leaving a brand after a single bad experience involving defective products, playing it safe becomes essential for business survival.

Computer vision can be the key to achieving higher-quality products and improved customer satisfaction. 

*According to PwC


Do you have a similar project idea?

Anna Vasilevskaya
Account Executive

Get in touch

Drop us a line about your project at contact@instinctools.com or via the contact form below, and we will contact you soon.