AI-accelerated appearance prediction for color 3D printing

May 28, 2021

The Computer Graphics Group (CGG) at Matfyz has published an article presenting a new method for improving full-color 3D printing technology. The new technique yields 3D-printed output whose appearance matches the user's input far more accurately than what current commercial software achieves. The method leverages machine learning to improve the runtime and practicality of an algorithm previously published by the same authors.

This work [1] is a collaboration with researchers from MPI Saarbrücken, USI Lugano, the Keldysh Institute in Moscow, IST Austria, and University College London (UCL), and it grew out of an EU Innovative Training Network (ITN) project called DISTRO. The work is presented at the virtual Eurographics 2021 conference, and the article appears in the May issue of the Computer Graphics Forum journal under Open Access.

Background

In full-color 3D printing, the printhead does not melt plastic; instead, it jets tiny droplets of liquid resin and hardens them instantly using UV light. As with any additive manufacturing technique, this process is repeated layer by layer to form a three-dimensional object (picture in [4]).

These material-jetting 3D printers are today used mostly in industrial applications such as prototyping, cultural heritage preservation, and medical prosthetics. Recently, Mixed Dimensions started offering a print service for custom game figures, and the animation studio LAIKA uses this 3D printing technology to animate the facial expressions of its characters in stop-motion movies.

Similar to 2D printing, a wide range of color shades is produced by placing multiple base materials (CMYK+W) next to each other. The materials consist of semi-transparent resins [2] that allow light to travel slightly below the surface. This translucency enables precise subtractive mixing of color shades through different ratios of absorbing base materials.
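As a rough intuition for this subtractive mixing, each base material can be modeled as attenuating light according to the Beer-Lambert law, with a mixture attenuating by the ratio-weighted average absorption. The sketch below is a minimal illustration with invented absorption values; it is not the measured material model from [2].

```python
import numpy as np

# Hypothetical per-channel (RGB) absorption coefficients for the base inks.
# Illustrative values only, not measured printer data from [2].
ABSORPTION = {
    "cyan":    np.array([1.4, 0.2, 0.1]),   # absorbs mostly red
    "magenta": np.array([0.2, 1.4, 0.2]),   # absorbs mostly green
    "yellow":  np.array([0.1, 0.2, 1.4]),   # absorbs mostly blue
    "black":   np.array([2.0, 2.0, 2.0]),
    "white":   np.array([0.05, 0.05, 0.05]),
}

def mixed_reflectance(ratios, path_length=1.0):
    """Approximate reflected RGB for a mixture of base materials.

    Uses Beer-Lambert attenuation exp(-k * d), where the effective
    absorption k is the ratio-weighted average of the base materials,
    i.e. the finely dithered droplets behave like a homogeneous mixture.
    """
    total = sum(ratios.values())
    k = sum(f * ABSORPTION[m] for m, f in ratios.items()) / total
    return np.exp(-k * path_length)

# Diluting cyan with white yields a lighter cyan than the pure material.
print(mixed_reflectance({"cyan": 1.0}))
print(mixed_reflectance({"cyan": 0.5, "white": 0.5}))
```

In this toy model, adding white raises the reflectance in every channel, which mirrors how the printer controls lightness with the white base material.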

Despite this flexibility in color reproduction, the color bleeding also imposes a physical limit on the reproduction of fine detail: texture details are blurred, and hard edges lose contrast because light scatters laterally underneath the surface. Adding to this problem, the blur is three-dimensional, meaning that it also affects colors on opposing sides of thin object parts.
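The effect can be mimicked with a volumetric low-pass filter. The snippet below uses an isotropic 3D Gaussian as a crude stand-in for the real, material-dependent scattering kernel; this is not the paper's simulation, only an illustration of why edges soften and why thin parts bleed through.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy voxel volume: a thin slab carrying a sharp two-color texture.
volume = np.zeros((32, 32, 4, 3))          # (x, y, z, RGB)
volume[:16, :, 0] = [1.0, 0.0, 0.0]        # red half of the top layer
volume[16:, :, 0] = [0.0, 0.0, 1.0]        # blue half of the top layer

# Stand-in for subsurface scattering: an isotropic 3D Gaussian blur
# (sigma 0 on the last axis so color channels do not mix with each other).
blurred = gaussian_filter(volume, sigma=(2.0, 2.0, 1.0, 0.0))

# The sharp red/blue edge loses contrast, and color also leaks through
# the slab's thickness (z axis), illustrating the 3D nature of the blur.
print("edge contrast before:", np.abs(volume[15, 0, 0] - volume[16, 0, 0]).sum())
print("edge contrast after: ", np.abs(blurred[15, 0, 0] - blurred[16, 0, 0]).sum())
```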

The team’s previous work [5, 3] showed that it is possible to recover the sharpness and contrast by carefully optimizing the material placement. Using a virtual simulation of the printed appearance, one can iteratively find an arrangement of materials that reproduces the input object most faithfully.
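Conceptually, such an optimization loop alternates between predicting the printed appearance and nudging the materials against the remaining error. The sketch below is a simplified illustration of this idea; the function names and the naive fixed-step update are placeholders, not the exact algorithm of [5, 3].

```python
import numpy as np

def compensate(target, predict_appearance, project, n_iters=50, step=0.5):
    """Iterative scattering compensation, sketched after the idea in [5, 3].

    target:             desired surface colors (e.g. an RGB voxel array)
    predict_appearance: forward model mapping a material volume to the
                        appearance it would produce (a Monte Carlo
                        simulation in [3], the neural predictor in [1])
    project:            maps a continuous volume back to printable materials
    """
    materials = target.copy()        # start from the naive color-to-material map
    for _ in range(n_iters):
        predicted = predict_appearance(materials)
        error = target - predicted   # where the print would look wrong
        # Push the materials against the residual error and re-project
        # onto the set of printable assignments.
        materials = project(materials + step * error)
    return materials

# Toy usage with a global color bleed as the "printer" and clipping as
# the projection; both are hypothetical stand-ins for the real models.
target = np.random.rand(8, 8, 3)
bleed = lambda v: 0.7 * v + 0.3 * v.mean(axis=(0, 1))
result = compensate(target, bleed, lambda v: np.clip(v, 0.0, 1.0))
```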

In the new article [1], the authors propose a technique for this virtual simulation that is up to 300 times faster than previous methods, while requiring only a single GPU instead of an entire compute cluster. Trained on millions of examples, a neural network efficiently predicts how light scatters underneath the surface and how a given surface point is influenced by the materials around it.
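The idea can be sketched as follows: a network receives the material assignments in a small voxel neighborhood and predicts the color observed at its central surface point. The architecture, patch size, and material count below are illustrative placeholders, not the actual network from [1].

```python
import torch
import torch.nn as nn

class LocalScatteringPredictor(nn.Module):
    """Toy stand-in for a neural scattering predictor: maps one-hot
    material occupancy in a local voxel patch to the RGB color observed
    at the patch center. Layer sizes are illustrative only."""

    def __init__(self, n_materials=6, patch=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_materials * patch**3, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, 3),               # predicted RGB at the center
            nn.Sigmoid(),
        )

    def forward(self, patch_one_hot):        # (batch, n_materials, p, p, p)
        return self.net(patch_one_hot)

# Training pairs (patch, color) would come from a slow but accurate light
# transport simulation, which the network then replaces at prediction
# time; that substitution is what makes a single GPU sufficient.
model = LocalScatteringPredictor()
dummy = torch.rand(4, 6, 9, 9, 9)
print(model(dummy).shape)                    # torch.Size([4, 3])
```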

Using the new simulation, the preparation time for a colored 3D model is reduced from tens of hours to several minutes, which finally makes the technique practical to use. With such an iterative pipeline, one can achieve higher-fidelity surface appearance on today's 3D printer hardware than traditional print-preparation software can offer.

The authors’ paper page provides links to the open-source implementation, including the training dataset and pre-trained networks, a 30-second teaser video, and an 18-minute conference presentation.

Contact

Tobias Rittig
PhD student
Computer Graphics Group, Charles University
tobias@cgg.mff.cuni.cz
+420 95155 4203 (office)

References

[1] T. Rittig et al., “Neural Acceleration of Scattering-Aware Color 3D Printing,” Computer Graphics Forum, 2021, doi: 10.1111/cgf.142626.

[2] O. Elek et al., “Robust and practical measurement of volume transport parameters in solid photo-polymer materials for 3D printing,” Opt. Express, vol. 29, no. 5, pp. 7568–7588, Mar. 2021, doi: 10.1364/OE.406095.

[3] D. Sumin et al., “Geometry-Aware Scattering Compensation for 3D Printing,” ACM Trans. Graph., vol. 38, no. 4, pp. 111:1–111:14, Jul. 2019, doi: 10.1145/3306346.3322992.

[4] S. Ritter, Formnext AM Field Guide: Discover the World of Additive Manufacturing: A Practical Guide to the Exciting World of Generative Manufacturing. 2019.

[5] O. Elek et al., “Scattering-aware Texture Reproduction for 3D Printing,” ACM Trans. Graph., vol. 36, no. 6, pp. 241:1–241:15, Nov. 2017, doi: 10.1145/3130800.3130890.
