Abstract
Keywords: 3D metrology; optical coherence tomography; image processing; surface morphology fitting; functional sensing
1. Introduction
2. Experimental Setup
2.1. SD-OCT System

2.2. OCT Volume
3. Curved Slicing Method
3.1. Foreground Voxels Extraction

3.2. Structural Morphology Detection
Once the foreground voxels have been identified through binarization, their locations are extracted to determine the object's boundary. Curve fitting is performed on these 3D voxel coordinates to best fit the boundary layers with a surface equation (polynomial, spline, or others). If the surface is rough, periodic functions or even deep learning methods may be used to model a more complicated surface. In our case, a polynomial surface equation suffices:
f(x, y) = Σ_{a,b} C_{a,b} x^a y^b, with the polynomial degree k = 3 (due to the nature of our sample) used throughout this paper. To fit the curve, we select N representative voxels and find the coefficients C_{a,b} that minimize Σ_{i=1}^{N} [f(x_i, y_i) − z_i]². We use Tikhonov regularization [22] to prevent overfitting, introducing a fixed regularization parameter α = 0.1 in the minimization of ||y − Xw||₂² + α||w||₂², as implemented in the Scikit-learn library [23]. This step obtains a surface that best represents the object's structural morphology while suppressing shot noise and rejecting reflection artifacts. We then describe the fitted surface as
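The regularized fit above can be sketched in plain NumPy by solving the Tikhonov-regularized normal equations directly; the monomial basis (a + b ≤ 3), the synthetic data, and the coordinate normalization (added purely for numerical conditioning) are illustrative assumptions rather than details from the paper, which relies on Scikit-learn's implementation:

```python
import numpy as np

def fit_surface(x, y, z, degree=3, alpha=0.1):
    # Design matrix X: one column per monomial x^a * y^b with a + b <= degree.
    terms = [(a, b) for a in range(degree + 1) for b in range(degree + 1 - a)]
    X = np.column_stack([(x**a) * (y**b) for a, b in terms])
    # Tikhonov-regularized least squares: minimize ||z - Xw||^2 + alpha * ||w||^2.
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ z)
    return w, terms

def eval_surface(w, terms, x, y):
    return sum(c * (x**a) * (y**b) for c, (a, b) in zip(w, terms))

# Synthetic boundary voxels; coordinates normalized to [0, 1] for conditioning.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 1023, 2000) / 1023.0
ys = rng.uniform(0, 255, 2000) / 255.0
z = 100 + 20 * xs - 12 * ys + 5 * xs * ys + rng.normal(0, 0.5, 2000)

w, terms = fit_surface(xs, ys, z)
rmse = np.sqrt(np.mean((eval_surface(w, terms, xs, ys) - z) ** 2))
```

With a small α the data term dominates, so the fitted surface tracks the boundary while the penalty keeps the coefficients from blowing up on noisy voxels.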
F_m(Δz) = f_m(x, y) + Δz, with unique label m representing individual fitted instances and Δz shifting the surface up or down along the depth axis. We can then choose m = 1 to represent the initial fitted curve (the green line in Figure 2b) as F_1(0). The full 3D representation of the surface F_1(0) illustrated in Figure 2e can be drawn by iterating x and y over x_i ∈ [0, 1023] and y_i ∈ [0, 255].
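Rendering F_1(0) is then a grid evaluation of the fitted surface over the lateral indices; the surface model below is a hypothetical plane standing in for the fitted polynomial f(x, y):

```python
import numpy as np

# Hypothetical surface model standing in for the fitted polynomial f(x, y).
f = lambda x, y: 100.0 + 0.02 * x - 0.05 * y

# Evaluate F_1(0): z = f(x, y) + dz with dz = 0 over the full lateral grid.
xs, ys = np.meshgrid(np.arange(1024), np.arange(256), indexing="ij")
F1 = f(xs, ys)  # depth map, one z value per (x, y) voxel column
```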
3.3. Target Layer Selection
To automatically select a specific boundary layer, we can apply the weighting technique used for outlier rejection [24] to obtain the boundary score S(Δz) = Σ_{i=1}^{N} 1 / (1 + |f(x_i, y_i) + Δz − z_i|).
The key idea is that as the surface shifts by Δz along the depth axis z, a decreasing total distance to the points near the surface shrinks the denominator terms, which consequently increases the sum of the inverse distances. The addition of 1 prevents each term from overflowing and inherently acts as a normalizing factor. From Figure 2c, when the green surface crosses each boundary, the boundary score (blue graph) rises due to the decreasing distances. Moreover, the magnitude of this score can be used as an approximation for inlier point selection. For example, to select the second (bottom) layer for further analysis, we first move the green surface down the depth axis to F_1(Δz ≈ 11); the points that represent the second boundary are then the closest to the surface, which increases the boundary score (totaling around 30% of the volume's foreground points). We finally cut the remaining 70% of the points, classifying them as outliers, and repeat the curve-fitting process. This second curve-fitting pass inherently eliminates noise from other layers due to the outlier rejection.
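A minimal sketch of this layer-selection sweep, assuming the boundary score is the sum of inverse distances 1/(1 + d_i) between each foreground voxel and the shifted surface; the synthetic two-layer data and the inlier threshold are illustrative:

```python
import numpy as np

def boundary_score(f_vals, z_vals, dz):
    # Sum of inverse distances; the +1 keeps each term bounded as d -> 0.
    return np.sum(1.0 / (1.0 + np.abs(f_vals + dz - z_vals)))

rng = np.random.default_rng(1)
f_vals = np.full(1000, 50.0)               # fitted surface depth under each voxel
z_top = 50.0 + rng.normal(0.0, 0.3, 600)   # voxels on the first boundary
z_bot = 61.0 + rng.normal(0.0, 0.3, 400)   # second boundary ~11 voxels deeper
z_vals = np.concatenate([z_top, z_bot])

# Sweep dz and locate the score peak (here the first layer dominates at dz = 0).
shifts = np.arange(-5, 20)
scores = [boundary_score(f_vals, z_vals, dz) for dz in shifts]
best = int(shifts[np.argmax(scores)])

# Select the second layer: shift to dz ~ 11 and keep the closest points as inliers.
inliers = np.abs(f_vals + 11.0 - z_vals) < 2.0
```

The inlier set would then feed the second regularized curve fit, which no longer sees points from the other layer.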
4. Results and Discussion
4.1. Multi-Layer Surface Extraction and Visualization

4.2. Image Processing Integrations

The measurement process starts with the raw signal in Figure 4b. A Gaussian curve fit is then applied to this signal, and a cutoff point (or signal-width representative [26]) at 1/e² of the Gaussian fit amplitude is used for layer counting. To translate voxel lengths into real-world lengths, we define physical-domain coordinates X = M_x x, Y = M_y y, and Z = M_z z, where M_x, M_y, and M_z are 4.8828, 19.5312, and 4.2068 μm/voxel, respectively. Since the surface is tilted, the real thickness is proportional to the projected length on the gradient plane. The vector that defines the gradient plane at point (x_0, y_0) is given by
with its corresponding unit vector r̂ = r/|r|. The layer thickness L is calculated by projecting the layer count l_z (in voxels) along basis vector e_z onto the vector r̂, yielding
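A sketch of this tilt correction, under the assumption that r is the physical-space surface normal built from the voxel-space gradient (f_x, f_y) of the fitted surface; the paper's exact definition of r may differ in detail:

```python
import numpy as np

Mx, My, Mz = 4.8828, 19.5312, 4.2068  # um/voxel scaling factors

def ink_thickness(fx, fy, l_z):
    """fx, fy: voxel-space partials of f at (x0, y0); l_z: layer count in voxels."""
    # Assumed physical-space normal of the surface z = f(x, y).
    r = np.array([-(Mz / Mx) * fx, -(Mz / My) * fy, 1.0])
    r_hat = r / np.linalg.norm(r)
    # Project the axial segment l_z * Mz * e_z onto the unit normal.
    e_z = np.array([0.0, 0.0, 1.0])
    return l_z * Mz * abs(e_z @ r_hat)

flat = ink_thickness(0.0, 0.0, 10)    # untilted surface: exactly 10 * Mz
tilted = ink_thickness(0.5, 0.1, 10)  # tilt shortens the perpendicular thickness
```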
4.3. Automated Workflow Robustness Analysis
1. Obtain the OCT volume.
2. Perform curved slicing to obtain the surface equation and its corresponding boundary (according to the flowchart in Figure 2f).
3. For each slice Δz ∈ [−20, 19], perform dimensionality reduction:
   3.1. Apply a 2D FFT to obtain the 2D frequency domain.
   3.2. Reduce the dimension by applying maxpool to the frequency-domain image.
   3.3. Extract locations and intensities in the frequency domain.
   3.4. Sort the information by intensity (as shown in Figure 5a, where each line represents a slice Δz).
Figure 5. Automated ink thickness measurement results. (a) Intensity distribution sorted in descending order; each line represents a slice Δz, and the color of each line represents the ratio P2/P100 of that slice. (b) Ratio for each Δz, where the maximum ratio corresponds to candidate slice Δz_m. The top inset shows dominant and recessive frequency-domain data points of Δz_m, where the points are aligned according to the inked pattern. The bottom inset shows frequency-domain data points of a non-candidate slice, where points are distributed randomly (with higher spatial frequency indicating possible noise). (c) Signal processing results of each case. (d) Box plot of measurement results, with standard deviations of 5.1802 μm across labels and 3.4322 μm across orientations.
4. Find the candidate slice Δz_m with the largest P2/P100 ratio, where P2 is the second-largest intensity (likely the sideband, shown in Figure 5a as the middle dashed line) and P100 is the 100th-largest intensity (likely noise, shown in Figure 5a as the right-most dashed line). The ratios of all slices are plotted separately in Figure 5b for visualization purposes.
5. In the candidate slice Δz_m, divide the frequency-domain data points into two sets: the 23 largest intensities as dominant locations and the rest as recessive locations (shown in the top inset of Figure 5b).
6. For each Δz:
   6.1. The largest intensity near the dominant locations of Δz_m is considered the sideband.
   6.2. The largest intensity near the recessive locations of Δz_m is considered noise.
   6.3. The signal is taken as the sideband minus the noise (the blue dashed line in Figure 5c).
7. Perform an initial Gaussian fit to the signal.
8. Clip the signal on the right side (at 1/e² of the Gaussian) for curve-fitting stability (the blue solid line in Figure 5c).
9. Perform a second Gaussian fit to the clipped signal (the orange line in Figure 5c).
10. Calculate the ink thickness according to (5).
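Steps 3–4 of the workflow can be sketched as follows; the pooling window, image size, and synthetic slices are illustrative assumptions, with only the P2/P100 ratio taken from the text:

```python
import numpy as np

def maxpool2d(img, k=4):
    # Reduce dimension by taking the max over k x k blocks.
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

def slice_ratio(slice_img):
    # 2D FFT -> maxpool -> sort intensities descending -> P2 / P100.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(slice_img)))
    inten = np.sort(maxpool2d(mag).ravel())[::-1]
    return inten[1] / inten[99]

# Synthetic en face slices: one with a periodic inked pattern, one pure noise.
rng = np.random.default_rng(2)
cols = np.arange(256)
pattern = np.sin(2 * np.pi * cols / 16)[None, :] + 0.1 * rng.normal(size=(64, 256))
noise = 0.1 * rng.normal(size=(64, 256))

ratios = [slice_ratio(s) for s in (pattern, noise)]  # patterned slice scores higher
```

The candidate slice Δz_m is then simply the argmax of this ratio over all Δz, since only the slice intersecting the inked layer carries a strong sideband above the noise floor.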
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
The following abbreviations are used in this manuscript:
| OCT | Optical coherence tomography. |
| SD-OCT | Spectral-domain optical coherence tomography. |
Appendix A


References
- Dhalla, A.H.; Nankivil, D.; Bustamante, T.; Kuo, A.; Izatt, J.A. Simultaneous swept source optical coherence tomography of the anterior segment and retina using coherence revival. Opt. Lett. 2012, 37, 1883–1885. [Google Scholar] [CrossRef] [PubMed]
- Ngo, L.; Cha, J.; Han, J.H. Deep Neural Network Regression for Automated Retinal Layer Segmentation in Optical Coherence Tomography Images. IEEE Trans. Image Process. 2020, 29, 303–312. [Google Scholar] [CrossRef] [PubMed]
- Han, S.B.; Liu, Y.C.; Noriega, K.M.; Mehta, J.S. Applications of Anterior Segment Optical Coherence Tomography in Cornea and Ocular Surface Diseases. J. Ophthalmol. 2016, 2016, 4971572. [Google Scholar] [CrossRef] [PubMed]
- Luong, M.N.; Shimada, Y.; Araki, K.; Yoshiyama, M.; Tagami, J.; Sadr, A. Diagnosis of Occlusal Caries with Dynamic Slicing of 3D Optical Coherence Tomography Images. Sensors 2020, 20, 1659. [Google Scholar] [CrossRef]
- Wu, X.; Gao, W.; He, Y.; Liu, H. Quantitative measurement of subsurface damage with self-referenced spectral domain optical coherence tomography. Opt. Mater. Express 2017, 7, 3919–3933. [Google Scholar] [CrossRef]
- Marques, M.J.; Green, R.; King, R.; Clement, S.; Hallett, P.; Podoleanu, A. Sub-surface characterisation of latest-generation identification documents using optical coherence tomography. Sci. Justice 2021, 61, 119–129. [Google Scholar] [CrossRef]
- Zhang, N.; Jiang, P.; Wang, W.; Wang, C.; Xie, L.; Li, Z.; Huang, W.; Shi, G.; Wang, L.; Yan, Y.; et al. Initial Study for the Determination of the Sequence of Intersecting Lines between Gel Pens and Seals by Optical Coherence Tomography. J. Forensic Sci. 2020, 65, 2071–2079. [Google Scholar] [CrossRef]
- Zhang, N.; Wang, C.; Sun, Z.; Mei, H.; Huang, W.; Xu, L.; Xie, L.; Guo, J.; Yan, Y.; Li, Z.; et al. Characterization of automotive paint by optical coherence tomography. Forensic Sci. Int. 2016, 266, 239–244. [Google Scholar] [CrossRef]
- He, B.; Shi, Y.; Sun, Z.; Li, X.; Hu, X.; Wang, L.; Xie, L.; Yan, Y.; Li, Z.; Li, Z.; et al. Rapid, autonomous and ultra-large-area detection of latent fingerprints using object-driven optical coherence tomography. Opt. Express 2024, 32, 31090. [Google Scholar] [CrossRef]
- Zhang, N.; Wang, C.; Sun, Z.; Li, Z.; Xie, L.; Yan, Y.; Xu, L.; Guo, J.; Huang, W.; Li, Z.; et al. Detection of latent fingerprint hidden beneath adhesive tape by optical coherence tomography. Forensic Sci. Int. 2018, 287, 81–87. [Google Scholar] [CrossRef]
- Strąkowski, M.R.; Strąkowska, P.; Pluciński, J. Latent fingerprint imaging by spectroscopic optical coherence tomography. Opt. Lasers Eng. 2023, 167, 107622. [Google Scholar] [CrossRef]
- Margolis, R.; Spaide, R.F. A Pilot Study of Enhanced Depth Imaging Optical Coherence Tomography of the Choroid in Normal Eyes. Am. J. Ophthalmol. 2009, 147, 811–815. [Google Scholar] [CrossRef]
- Li, A.; Cheng, J.; Yow, A.P.; Wall, C.; Wong, D.W.K.; Tey, H.L.; Liu, J. Epidermal segmentation in high-definition optical coherence tomography. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 3045–3048. [Google Scholar] [CrossRef]
- Weissman, J.; Hancewicz, T.; Kaplan, P. Optical coherence tomography of skin for measurement of epidermal thickness by shapelet-based image analysis. Opt. Express 2004, 12, 5760. [Google Scholar] [CrossRef] [PubMed]
- Kashani, A.H.; Chen, C.L.; Gahm, J.K.; Zheng, F.; Richter, G.M.; Rosenfeld, P.J.; Shi, Y.; Wang, R.K. Optical coherence tomography angiography: A comprehensive review of current methods and clinical applications. Prog. Retin. Eye Res. 2017, 60, 66–100. [Google Scholar] [CrossRef] [PubMed]
- Yu, X.; Xiong, Q.; Luo, Y.; Wang, N.; Wang, L.; Tey, H.L.; Liu, L. Contrast Enhanced Subsurface Fingerprint Detection Using High-Speed Optical Coherence Tomography. IEEE Photonics Technol. Lett. 2017, 29, 70–73. [Google Scholar] [CrossRef]
- Auksorius, E.; Borycki, D.; Stremplewski, P.; Liżewski, K.; Tomczewski, S.; Niedźwiedziuk, P.; Sikorski, B.L.; Wojtkowski, M. In vivo imaging of the human cornea with high-speed and high-resolution Fourier-domain full-field optical coherence tomography. Biomed. Opt. Express 2020, 11, 2849. [Google Scholar] [CrossRef]
- Jones, C.K.; Li, B.; Wu, J.H.; Nakaguchi, T.; Xuan, P.; Liu, T.Y.A. Comparative analysis of alignment algorithms for macular optical coherence tomography imaging. Int. J. Retin. Vitr. 2023, 9, 60. [Google Scholar] [CrossRef]
- Ruggeri, M.; Giuffrida, F.P.; Vy Truong, N.L.; Parel, J.M.; Cabot, F.; Shousha, M.A.; Manns, F.; Ho, A. Wide-field self-referenced optical coherence tomography imaging of the corneal microlayers. Opt. Lett. 2025, 50, 1204. [Google Scholar] [CrossRef]
- Garvin, M.; Abramoff, M.; Wu, X.; Russell, S.; Burns, T.; Sonka, M. Automated 3-D Intraretinal Layer Segmentation of Macular Spectral-Domain Optical Coherence Tomography Images. IEEE Trans. Med Imaging 2009, 28, 1436–1447. [Google Scholar] [CrossRef]
- Van der Jeught, S.; Buytaert, J.A.N.; Bradu, A.; Podoleanu, A.G.; Dirckx, J.J.J. Real-time correction of geometric distortion artefacts in large-volume optical coherence tomography. Meas. Sci. Technol. 2013, 24, 057001. [Google Scholar] [CrossRef]
- Rifkin, R.M.; Lippert, R.A. Notes on Regularized Least Squares; Technical report; Massachusetts Institute of Technology: Cambridge, MA, USA, 2007. [Google Scholar]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
- Farin, G.E.; Hansford, D. Mathematical Principles for Scientific Computing and Visualization; A K Peters: Wellesley, MA, USA, 2008. [Google Scholar]
- Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed]
- Dickson, L.D. Characteristics of a Propagating Gaussian Beam. Appl. Opt. 1970, 9, 1854. [Google Scholar] [CrossRef] [PubMed]
- Fabritius, T.; Saarela, J.; Myllyla, R. Determination of the refractive index of paper with clearing agents. Proc. SPIE 2006, 6053, 60530X. [Google Scholar] [CrossRef]
- Paluszny, M.; Ríos, D. Retrieving 3D medical data along fitted curved slices and their display. BMC Med Inform. Decis. Mak. 2020, 20, 23. [Google Scholar] [CrossRef]
Share and Cite
Li, M.; Loahavilai, P.; Liu, Y.; Li, X.; Li, Y.; Sun, L. Adaptive Curved Slicing for En Face Imaging in Optical Coherence Tomography. Sensors 2025, 25, 4329. https://doi.org/10.3390/s25144329