Automated generation of labeled synthetic training data for machine learning based segmentation of 3D-woven composites
     Topic(s): Special Sessions

    Co-authors:

     Johan FRIEMANN (SWEDEN), Lars PILGAARD MIKKELSEN (DENMARK), Carolyn ODDY (SWEDEN), Martin FAGERSTRÖM (SWEDEN) 

    Abstract:
    3D-woven carbon fiber reinforced polymer (CFRP) composites have recently garnered great interest due to their potentially high specific mechanical properties. Compared with the more common 2D-woven, or laminated, composites, they show improved out-of-plane stiffness and strength. Moreover, the through-layer reinforcement offers protection against delamination. To utilize these materials efficiently in industry, accurate computational models are required. Typically, the modeling workflow starts with the analysis of representative volume elements (RVEs). Using X-ray computed tomography (XRCT) to generate RVE geometries is a popular approach. However, segmenting the CT images and transforming them into computational meshes can be challenging.

    Owing to the low contrast-to-noise ratio in scans of CFRP, classical segmentation algorithms, such as filtering or thresholding, often fail. As an alternative, machine learning based methods show great promise. One pressing issue with machine learning methods is their need for large amounts of labeled training data. High-quality scans take several hours (or require access to synchrotron facilities), and the labeling must often be performed painstakingly by hand. It is therefore of interest to investigate the prospects of transfer learning, where the segmentation algorithm is trained on synthetic data but applied to real data.
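    The failure mode of classical thresholding at low contrast-to-noise ratio can be illustrated with a minimal sketch. The phase gray values and noise level below are illustrative assumptions, not measured CFRP values; the threshold is computed with the classical Otsu criterion implemented directly in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-phase slice: fiber regions (mean 0.6) in matrix (mean 0.5),
# with heavy Gaussian noise giving a low contrast-to-noise ratio.
labels = (rng.random((128, 128)) < 0.4).astype(np.uint8)  # 1 = fiber, 0 = matrix
image = np.where(labels == 1, 0.6, 0.5) + rng.normal(0.0, 0.15, labels.shape)

def otsu_threshold(img, bins=256):
    """Classical Otsu method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    w0 = np.cumsum(w)                      # cumulative probability of class 0
    w1 = 1.0 - w0                          # probability of class 1
    cum_mean = np.cumsum(w * centers)
    mu0 = cum_mean / np.clip(w0, 1e-12, None)
    mu1 = (cum_mean[-1] - cum_mean) / np.clip(w1, 1e-12, None)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

t = otsu_threshold(image)
pred = (image > t).astype(np.uint8)
accuracy = (pred == labels).mean()
```

    Because the two intensity distributions overlap almost completely, the resulting per-pixel accuracy stays far below what a usable segmentation requires, which is exactly the situation that motivates learned segmentation.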

    Ali et al. [1] successfully demonstrated that synthetic XRCT images can be used to train a deep convolutional neural network to perform semantic segmentation of a composite's phases. However, they directly converted the generated 3D models to grayscale and added noise as a post-processing step. This method does not account for physical effects or for the typical artifacts introduced during tomographic reconstruction, which reduces the realism of the images. An alternative is to simulate the XRCT process and reconstruct the images in the same way as real scans. This yields training sets that enable algorithms to handle the types of artifacts encountered in real scans.
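    The naive post-processing route can be sketched as follows. The label encoding, gray values, and noise level are illustrative assumptions, not the values used by Ali et al.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical voxel label volume: 0 = matrix, 1 = warp yarn, 2 = weft yarn.
labels = rng.integers(0, 3, size=(32, 32, 32), dtype=np.uint8)

# Naive synthesis: map each phase directly to a nominal gray value...
gray_values = np.array([0.45, 0.60, 0.62])  # assumed phase contrast
volume = gray_values[labels]

# ...then add Gaussian noise and clip to [0, 1] as "post-processing".
volume = np.clip(volume + rng.normal(0.0, 0.05, volume.shape), 0.0, 1.0)
```

    Note what this skips: there is no polychromatic beam, no scatter, and no filtered back-projection, so ring, streak, and beam-hardening artifacts never appear in the resulting training data.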

    The aim of this work is to present a novel pipeline for generating synthetic XRCT images of 3D-woven composites, each paired with an automatically generated segmentation label. The pipeline enables the automated generation of representative training data for machine learning based segmentation algorithms. Furthermore, since the images are physically based (including reconstruction artifacts), the pipeline also allows a scan setup to be evaluated before any real CT scan is performed, potentially saving time and resources.
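    The physically based simulation step rests on the Beer-Lambert attenuation law with photon-counting noise, followed by the log transform applied before reconstruction. The sketch below is a generic 2D parallel-beam illustration of that forward model, not the authors' actual implementation; the attenuation values, photon count, and geometry are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(2)

# Hypothetical attenuation map (1/mm) for one slice: matrix plus a yarn region.
mu = np.full((64, 64), 0.05)
mu[20:44, 10:54] = 0.08  # yarn-like region with higher attenuation

def project(mu_slice, angle_deg, voxel_mm=0.01):
    """Parallel-beam line integrals: rotate the slice, then sum along rays."""
    rotated = rotate(mu_slice, angle_deg, reshape=False, order=1)
    return rotated.sum(axis=0) * voxel_mm

# Beer-Lambert forward model, I = I0 * exp(-integral of mu), with Poisson noise.
i0 = 5e4  # assumed incident photon count per detector pixel
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = np.stack([project(mu, a) for a in angles])
counts = rng.poisson(i0 * np.exp(-sinogram))
measured = -np.log(np.clip(counts, 1, None) / i0)  # log-transformed projections
```

    Feeding such noisy projections through the same reconstruction algorithm as real scans reproduces the reconstruction artifacts that a purely image-space noise model cannot, and varying `i0` or the number of angles lets a scan setup be assessed before any beamtime is spent.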

    Acknowledgment:
    This study was funded by EU Horizon MSCA 2021 DN Reliance: REaL-tIme characterization of ANisotropic Carbon-based tEchnological fibres, films and composites.