Fabric Defect Detection: OpenCV and Deep Learning for Textile QC


Hardware Used

Line scan cameras, LED lighting, Industrial PC, Conveyor systems

Software Stack

Python, OpenCV, YOLO, TensorFlow, PyTorch

Use Cases

Textile manufacturing, Fabric inspection, Weaving QC, Knitting QC

Introduction

Textile manufacturing produces fabric at speeds that make thorough manual inspection nearly impossible. Even skilled inspectors miss 15-25% of defects due to fatigue, line speed, and the subtlety of many defect types.

Automated visual inspection using computer vision catches more defects, maintains consistency, and enables real-time quality control. This guide covers detection methods from simple OpenCV techniques to production-grade deep learning systems.


Common Fabric Defects

Weaving Defects

Defect       | Description                | Visual Appearance
Broken warp  | Missing vertical thread    | Vertical line/gap
Broken weft  | Missing horizontal thread  | Horizontal line/gap
Missing pick | Skipped weft insertion     | Thin horizontal stripe
Float        | Thread over wrong threads  | Raised/loose thread
Reed mark    | Uneven beat-up             | Vertical stripe pattern
Temple mark  | Edge tension issue         | Distortion at edges

Knitting Defects

Defect         | Description               | Visual Appearance
Dropped stitch | Missing loop              | Hole or ladder
Snag           | Pulled thread             | Loop on surface
Barre          | Stripe pattern            | Horizontal bands
Needle line    | Faulty needle             | Vertical line
Hole           | Multiple dropped stitches | Visible gap

Surface Defects

Defect              | Description             | Detection Challenge
Stain               | Discoloration           | Color analysis
Oil spot            | Machinery contamination | Texture change
Dirt/foreign matter | Contamination           | Color/texture
Crease              | Fold mark               | Shadow/texture
Pilling             | Surface balls           | Texture analysis

Imaging Setup

Camera Options

Line Scan Camera (Recommended for production)

  • Captures one line at a time
  • Very high resolution (4K-16K pixels wide)
  • Perfect for continuous web inspection
  • Requires precise synchronization with fabric speed

Area Scan Camera (Good for prototyping)

  • Standard camera capturing frames
  • Easier to set up
  • May miss defects between frames at high speeds
  • Good for learning and development
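The "missed defects between frames" caveat can be quantified: given the web speed and the camera's field of view along the web, the minimum frame rate that leaves no uninspected gap falls out directly. A minimal sketch (the function name and the 20% default overlap are our own assumptions):

```python
def min_frame_rate(fabric_speed_mpm, fov_height_mm, overlap_frac=0.2):
    """Minimum area-scan frame rate so consecutive frames overlap.

    fabric_speed_mpm: web speed in metres per minute
    fov_height_mm:    field-of-view height along the web, in mm
    overlap_frac:     fraction of each frame overlapping the next
    """
    speed_mmps = fabric_speed_mpm * 1000 / 60
    # Fresh fabric entering the view between consecutive frames
    advance_mm = fov_height_mm * (1 - overlap_frac)
    return speed_mmps / advance_mm

# e.g. a 30 m/min web with a 200 mm field of view needs ~3.1 fps
```

Even modest webs quickly outrun cheap cameras once overlap is accounted for, which is why line scan wins in production.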

Resolution Requirements

Rule of Thumb: Minimum 3-5 pixels per smallest defect dimension.

Defect Size | Required Resolution (1m width)
1mm         | 3,000-5,000 pixels/m
0.5mm       | 6,000-10,000 pixels/m
0.25mm      | 12,000-20,000 pixels/m
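The rule of thumb behind this table is simple to encode; a small sketch (names are illustrative):

```python
def required_resolution(defect_size_mm, width_m=1.0, px_per_defect=(3, 5)):
    """Camera pixels across the web needed to resolve a defect,
    using the 3-5 pixels-per-defect rule of thumb.

    Returns a (minimum, comfortable) pair of pixel counts.
    """
    width_mm = width_m * 1000
    return tuple(round(n * width_mm / defect_size_mm) for n in px_per_defect)

# required_resolution(1.0)  -> (3000, 5000) pixels across a 1 m width
```

Running it for a 0.25 mm defect over a 1.6 m web shows why 8K-16K line scan cameras are the norm, not a luxury.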

Lighting Approaches

1. Transmitted Light (Backlighting)

   [Camera]
      ↓
═══════════════  ← Fabric
      ↑
   [Light]
  • Best for hole detection
  • Shows thread density variations
  • Requires translucent fabric

2. Front Lighting

   [Light] [Camera] [Light]
      ↘      ↓      ↙
═══════════════════════════  ← Fabric
  • Shows surface defects
  • Color and stain detection
  • Works with any fabric

3. Angled/Grazing Light

                [Camera]
                   ↓
[Light]→  ═══════════════════
  • Highlights texture defects
  • Shows weaving irregularities
  • Good for pilling detection

Traditional CV Approach

Texture Analysis with Gabor Filters

Gabor filters are excellent for detecting structural defects in repetitive textures.

import cv2
import numpy as np

def create_gabor_filters(num_orientations=8, wavelengths=[4, 8, 16]):
    """
    Create bank of Gabor filters for texture analysis.
    """
    filters = []
    for wavelength in wavelengths:
        for i in range(num_orientations):
            theta = i * np.pi / num_orientations
            kernel = cv2.getGaborKernel(
                ksize=(31, 31),
                sigma=4.0,
                theta=theta,
                lambd=wavelength,
                gamma=0.5,
                psi=0
            )
            filters.append(kernel)
    return filters

def apply_gabor_filters(image, filters):
    """
    Apply Gabor filter bank and return responses.
    """
    responses = []
    for kernel in filters:
        filtered = cv2.filter2D(image, cv2.CV_64F, kernel)
        responses.append(filtered)
    return responses

def detect_texture_defects(image, filters, threshold_factor=2.5):
    """
    Detect defects as deviations from normal texture response.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if len(image.shape) == 3 else image

    # Get Gabor responses
    responses = apply_gabor_filters(gray, filters)

    # Combine responses
    combined = np.zeros_like(gray, dtype=np.float64)
    for response in responses:
        combined += np.abs(response)

    # Normalize
    combined = combined / len(responses)

    # Find anomalies (deviations from mean)
    mean_response = np.mean(combined)
    std_response = np.std(combined)

    # Threshold for defects
    defect_mask = np.abs(combined - mean_response) > (threshold_factor * std_response)

    return defect_mask.astype(np.uint8) * 255

Fourier Analysis for Periodic Patterns

Woven fabric has a regular, periodic structure; defects disrupt this pattern, which makes them stand out in the frequency domain.

def detect_periodic_defects(image, sensitivity=0.3):
    """
    Detect defects using Fourier analysis.
    Works well for woven fabrics with regular patterns.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if len(image.shape) == 3 else image
    gray = gray.astype(np.float32)

    # Compute FFT
    f = np.fft.fft2(gray)
    fshift = np.fft.fftshift(f)

    # Get magnitude spectrum
    magnitude = np.abs(fshift)

    # Find dominant frequencies (fabric pattern)
    # These appear as bright spots in frequency domain
    mean_mag = np.mean(magnitude)
    pattern_mask = magnitude > (mean_mag * 10)

    # Remove pattern frequencies (keep only anomalies)
    fshift_filtered = fshift.copy()
    fshift_filtered[pattern_mask] = 0

    # Inverse FFT
    f_ishift = np.fft.ifftshift(fshift_filtered)
    img_back = np.fft.ifft2(f_ishift)
    img_back = np.abs(img_back)

    # Threshold to find defects
    threshold = np.mean(img_back) + sensitivity * np.std(img_back)
    defect_mask = img_back > threshold

    # Clean up
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    defect_mask = cv2.morphologyEx(
        defect_mask.astype(np.uint8) * 255,
        cv2.MORPH_CLOSE, kernel
    )

    return defect_mask

Hole and Gap Detection

For detecting holes and missing threads:

def detect_holes(image, min_area=50):
    """
    Detect holes using thresholding and contour analysis.
    Works best with backlighting.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if len(image.shape) == 3 else image

    # Adaptive threshold
    thresh = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, 11, 2
    )

    # Morphological operations
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)

    # Find contours
    contours, _ = cv2.findContours(
        thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )

    holes = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if area > min_area:
            x, y, w, h = cv2.boundingRect(contour)
            holes.append({
                'bbox': (x, y, w, h),
                'area': area,
                'contour': contour,
                'type': 'hole'
            })

    return holes

Stain and Color Defect Detection

def detect_stains(image, reference_color=None, threshold=30):
    """
    Detect stains as color deviations from expected fabric color.
    """
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    if reference_color is None:
        # Use median as reference
        reference_color = np.median(hsv, axis=(0, 1))

    # Per-channel absolute difference. OpenCV hue wraps at 180,
    # so take the circular distance for the H channel.
    diff = np.abs(hsv.astype(np.float32) - reference_color)
    diff[:, :, 0] = np.minimum(diff[:, :, 0], 180 - diff[:, :, 0])

    # Weight channels (H is most important for stains)
    weights = [2.0, 1.0, 0.5]  # H, S, V
    weighted_diff = sum(diff[:, :, i] * weights[i] for i in range(3))

    # Threshold
    stain_mask = weighted_diff > threshold

    # Clean up
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    stain_mask = cv2.morphologyEx(
        stain_mask.astype(np.uint8) * 255,
        cv2.MORPH_CLOSE, kernel
    )
    stain_mask = cv2.morphologyEx(stain_mask, cv2.MORPH_OPEN, kernel)

    return stain_mask

Deep Learning Approach

YOLO for Defect Detection

Train YOLO to detect and classify fabric defects.

from ultralytics import YOLO

# Dataset configuration
"""
# fabric_defects.yaml
path: /data/fabric_defects
train: images/train
val: images/val

names:
  0: hole
  1: stain
  2: broken_thread
  3: crease
  4: oil_spot
  5: foreign_matter
  6: pilling
  7: snag
"""

# Train model
model = YOLO('yolov8m.pt')
results = model.train(
    data='fabric_defects.yaml',
    epochs=200,
    imgsz=640,
    batch=16,
    augment=True,
    # Fabric-specific augmentations
    degrees=5,       # Slight rotation
    translate=0.1,
    scale=0.2,
    fliplr=0.5,
    flipud=0.0,      # Usually not helpful for fabric
    mosaic=0.5
)
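One practical wrinkle: line-scan frames are far wider than the 640 px input the model is trained at, so production pipelines typically tile each frame and map detections back to web coordinates. A minimal tiling sketch (pure NumPy; the tile size and overlap are assumptions chosen to match the `imgsz=640` training setting above):

```python
import numpy as np

def tile_frame(frame, tile=640, overlap=64):
    """Split a wide fabric frame into overlapping square tiles.

    Returns (x, y, patch) triples so detections found in a patch can
    be offset back into web coordinates. Edge tiles are shifted
    inward so every tile is exactly tile x tile pixels (assuming the
    frame is at least that large in both dimensions).
    """
    h, w = frame.shape[:2]
    step = tile - overlap
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            y0 = min(y, max(h - tile, 0))
            x0 = min(x, max(w - tile, 0))
            tiles.append((x0, y0, frame[y0:y0 + tile, x0:x0 + tile]))
    return tiles
```

The overlap guarantees a defect straddling a tile boundary appears whole in at least one patch; duplicate detections in the overlap band can be merged afterwards with non-maximum suppression.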

Classification Network

For pass/fail classification:

import torch
import torch.nn as nn
from torchvision import models, transforms

class FabricClassifier(nn.Module):
    def __init__(self, num_classes=8, pretrained=True):
        super().__init__()

        # Use EfficientNet as backbone
        self.backbone = models.efficientnet_b0(
            weights='DEFAULT' if pretrained else None
        )

        # Replace classifier
        in_features = self.backbone.classifier[1].in_features
        self.backbone.classifier = nn.Sequential(
            nn.Dropout(0.3),
            nn.Linear(in_features, 256),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(256, num_classes)
        )

    def forward(self, x):
        return self.backbone(x)

# Training loop
def train_classifier(model, train_loader, val_loader, epochs=50):
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = model.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=epochs
    )

    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)

            optimizer.zero_grad()
            outputs = model(images)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

        scheduler.step()

        # Validation
        model.eval()
        correct = 0
        total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                images, labels = images.to(device), labels.to(device)
                outputs = model(images)
                _, predicted = torch.max(outputs, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()

        print(f'Epoch {epoch+1}: Val Accuracy = {100*correct/total:.2f}%')
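Because defect classes are usually imbalanced, the single accuracy figure printed above can hide a class the model never gets right. Tracking per-class accuracy alongside it is cheap; a small helper (a sketch, the name is our own):

```python
from collections import defaultdict

def per_class_accuracy(pairs):
    """Per-class accuracy from an iterable of (predicted, label) pairs.

    Overall accuracy masks weak classes when defect counts are
    imbalanced, so count each class separately.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label in pairs:
        total[label] += 1
        if pred == label:
            correct[label] += 1
    return {cls: correct[cls] / total[cls] for cls in total}
```

Collect `(predicted, label)` pairs inside the validation loop and a rare class sitting at 40% accuracy becomes visible even when the headline number reads 95%.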

Production System Architecture

Inline Inspection System

                    ┌───────────────┐
                    │  Line Scan    │
                    │    Camera     │
                    └───────┬───────┘
                            │
   ┌────────────────────────▼────────────────────────┐
───│═══════════════════════════════════════════════════│───
   │                   Fabric Web                      │
───│═══════════════════════════════════════════════════│───
   └────────────────────────┬────────────────────────┘
                            │
                    ┌───────▼───────┐
                    │  LED Light    │
                    │    Bar        │
                    └───────────────┘

                    ┌───────────────┐
                    │  Industrial   │
                    │     PC        │
                    │  (Analysis)   │
                    └───────┬───────┘
                            │
            ┌───────────────┼───────────────┐
            ▼               ▼               ▼
       ┌─────────┐    ┌─────────┐    ┌─────────┐
       │ Display │    │  PLC    │    │Database │
       │ (Alert) │    │(Control)│    │(Logging)│
       └─────────┘    └─────────┘    └─────────┘

Speed Synchronization

def calculate_scan_rate(fabric_speed_mpm, camera_resolution_px, width_mm):
    """
    Calculate required camera scan rate.

    fabric_speed_mpm: Fabric speed in meters per minute
    camera_resolution_px: Camera width in pixels
    width_mm: Inspection width in mm
    """
    # Pixels per mm
    px_per_mm = camera_resolution_px / width_mm

    # Speed in mm per second
    speed_mmps = (fabric_speed_mpm * 1000) / 60

    # Required scan rate (lines per second)
    scan_rate = speed_mmps * px_per_mm

    return scan_rate

# Example:
# 50 m/min fabric speed
# 8192 pixel camera
# 1600mm inspection width
# Required: ~4267 lines/second
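The same relationship can be inverted to vet a candidate camera: given its maximum line rate from the datasheet, what is the fastest web it can keep up with? (Sketch; the function name is our own.)

```python
def max_fabric_speed(max_line_rate_hz, camera_resolution_px, width_mm):
    """Fastest fabric speed (m/min) a camera's maximum line rate
    supports, inverting the scan-rate calculation.
    """
    px_per_mm = camera_resolution_px / width_mm
    speed_mmps = max_line_rate_hz / px_per_mm  # mm of fabric per second
    return speed_mmps * 60 / 1000              # convert to m/min

# An 8192 px camera over a 1600 mm web at 4267 lines/s -> ~50 m/min
```

Leave headroom: running a camera at its rated maximum line rate shortens exposure time, which in turn demands brighter lighting.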

Datasets for Training

Public Datasets

Dataset       | Defect Types                  | Size         | Link
AITEX         | Holes, stains, broken threads | 245 images   | AITEX Database
TILDA         | Multiple woven defects        | 3,200 images | Research access
Kaggle Fabric | Various                       | Varies       | Search Kaggle

Building Your Own Dataset

Recommended approach:

  1. Collect from production line
  2. Include multiple fabric types
  3. Balance defect classes
  4. Include edge cases
  5. Validate with domain experts

Minimum quantities:

  • 200-500 images per defect class
  • 500+ “good” fabric images
  • Multiple lighting conditions
  • Different fabric patterns

Performance Benchmarks

Typical Detection Rates

Method                 | Accuracy | Speed     | Training Data
Traditional CV (tuned) | 85-92%   | Real-time | None
YOLO (trained)         | 92-98%   | 30+ fps   | 1000+ images
Classification CNN     | 90-96%   | 100+ fps  | 500+ images

Production Metrics

Metric              | Target
Detection rate      | >98%
False positive rate | <2%
Throughput          | Match line speed
Latency             | <100ms
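The two headline rates map onto standard confusion-matrix quantities, counted per inspected region; a small helper makes the bookkeeping explicit (a sketch, names our own):

```python
def inspection_rates(tp, fp, fn, tn):
    """Detection rate and false positive rate from inspection counts.

    tp: defects correctly flagged     fn: defects missed
    fp: good regions falsely flagged  tn: good regions passed
    """
    detection_rate = tp / (tp + fn)        # recall over true defects
    false_positive_rate = fp / (fp + tn)   # alarms over good fabric
    return detection_rate, false_positive_rate

# 98 caught, 2 missed, 1 false alarm among 100 good regions
# -> detection 98%, false positives 1%
```

Note that false positive rate is measured against good fabric, so on a web that is overwhelmingly defect-free even a 2% rate can mean a large absolute number of false alarms per roll.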

Hardware Recommendations

Development Setup (~£500)

Component              | Price
Industrial USB3 Camera | £180
LED Light Bar          | £50
Sample fabrics         | £50
PC with GPU            | £220

Production Setup (~£5,000-15,000)

Component                       | Price Range
Line scan camera (8K)           | £2,000-5,000
Camera Link/CoaXPress interface | £500-1,500
LED line lights                 | £500-1,500
Industrial PC                   | £1,500-3,000
Mounting, enclosures            | £500-1,500
Integration, software           | £1,000-3,000

Getting Started

  1. Collect sample images from target fabric types
  2. Start with OpenCV methods to understand defect characteristics
  3. Label training data using Roboflow or CVAT
  4. Train YOLO model on your specific defects
  5. Validate on held-out test set
  6. Deploy with appropriate hardware
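Step 5's held-out test set should be carved off once, before any training or tuning begins; a minimal split sketch (the fractions and seed are arbitrary choices):

```python
import random

def split_dataset(paths, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle image paths once with a fixed seed, then split into
    train/val/test. The test slice is set aside before training and
    never used for model selection.
    """
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    n_test = int(len(paths) * test_frac)
    n_val = int(len(paths) * val_frac)
    return (paths[n_test + n_val:],        # train
            paths[n_test:n_test + n_val],  # val
            paths[:n_test])                # test
```

For fabric data it is worth splitting by roll or production batch rather than by individual image, so near-duplicate patches from the same roll cannot leak between train and test.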


James Lions

AI & Computer Vision enthusiast exploring the future of automated defect detection