Journal of the Korean Wood Science and Technology
The Korean Society of Wood Science & Technology
Original Article

Wood Classification of Japanese Fagaceae using Partial Sample Area and Convolutional Neural Networks

Taufik FATHURAHMAN2,3, P. H. GUNAWAN2, Esa PRAKASA3, Junji SUGIYAMA4
2School of Computing, Telkom University, Bandung, Indonesia
3Computer Vision Research Group, Research Center for Informatics, Indonesian Institute of Sciences, Bandung, Indonesia
4Division of Forestry and Biomaterials Science, Faculty / Graduate School of Agriculture, Kyoto University, Kitashirakawa-Oiwakecho, Sakyo-ku, Kyoto 606-8502, Japan
Corresponding author: Esa PRAKASA (e-mail: esa.prakasa@lipi.go.id, ORCID: 0000-0003-4685-6309)

© The Korean Society of Wood Science & Technology.

Received: Aug 25, 2020; Accepted: Aug 06, 2021

Published Online: Sep 25, 2021

Abstract

Wood identification is regularly performed by observing the wood anatomy, such as colour, texture, fibre direction, and other characteristics. The manual process, however, can be time consuming, especially when identification must be performed in high volume. Considering this condition, a convolutional neural network (CNN)-based program is applied to improve the image classification results. The research focuses on the accuracy and efficiency of the algorithm in dealing with dataset limitations. To this end, a sample selection process is proposed, in which only a small portion of the existing image is taken; the selected portion can still be expected to represent the whole image, thereby maintaining and improving the generalisation capability of the CNN method in the classification stage. The experiments yielded an average F1 score of up to 93.4% across the CNN architectures (VGG16, ResNet50, MobileNet, DenseNet121, and Xception based) for the medium sample area size (200 × 200 pixels), while the DenseNet121-based architecture was found to best maintain the generalisation of its model across all sample area sizes (100, 200, and 300 pixels). The experimental results show that the proposed algorithm can be an accurate and reliable solution.

Keywords: wood; microscopic image; sample selection; classification; convolutional neural network

1. INTRODUCTION

Wood is the most dominant forest product for commercial use in various industries, serving as a raw material and supporting daily human life in building materials, furniture, craft arts, and many other applications. In wood identification, visual inspection of various wood tissues is a common method (Hwang et al., 2020). Wood species are very diverse, yet each has unique characteristics that distinguish it from the others. Therefore, an accurate and reliable wood classification system based on image classification methods needs to be developed.

Several previous works related to microscopic analysis of wood surfaces can be mentioned as follows. Savero et al. (2020) investigated the wood characteristics of 8-year-old superior teak from Muna Island by observing macroscopic and microscopic anatomical characteristics; the study showed differences in the characteristics of the higher wood portion, wood texture, growth-ring width, and wood specific gravity, which was categorised as Strength Class III (Savero et al., 2020). Furthermore, Jeon et al. (2020) researched the anatomical characteristics of Quercus mongolica attacked by oak wilt disease, comparing the anatomical structure of infected wood, deadwood, and healthy wood; the largest difference found between wood affected by oak wilt and healthy wood was the tyloses ratio (Jeon et al., 2020). Finally, Jeon et al. (2018) conducted a study on the characteristics of Korean commercial bamboo species (Phyllostachys pubescens, Phyllostachys nigra, and Phyllostachys bambusoides), reporting crystal properties, vascular bundles, fibre length, vessel diameter, and parenchyma, as well as the length and width of the radial and tangential sections (Jeon et al., 2018).

Salma et al. (2018) proposed a wood identification algorithm combining the Daubechies wavelet (DW) and local binary pattern (LBP) methods as the pattern extractor (Salma et al., 2018); the extracted patterns were then classified using a support vector machine (SVM) classifier. Sugiarto et al. (2017) developed a wood identification algorithm based on the cooperation between the histogram of oriented gradients (HOG) and an SVM classifier (Sugiarto et al., 2017). Meanwhile, Kobayashi et al. (2019) developed a method for statistically extracting the anatomical features of Fagaceae (Kobayashi et al., 2019); this approach could help reveal new aspects of wood anatomy that might be difficult to capture in conventional observation. The next reference is reported by Hadiwidjaja et al. (2019), who implemented the LBP and Hough transform methods to improve the extraction of wood features.

Classification has become one of the main topics attracting much attention in recent years, and many methods have been developed for classification purposes. Nowadays, the convolutional neural network (CNN) has emerged as a powerful visual model with outstanding performance in various visual recognition and classification problems, as presented, for instance, in the papers of Yu et al. (2017) and Levi and Hassner (2015). Yu et al. (2017) developed an efficient CNN architecture to boost the discriminative capability of hyperspectral image classification. Levi and Hassner (2015) proposed a simple convolutional network architecture that can be used even when the amount of learning data is limited; their method was evaluated on the Adience benchmark for age and gender estimation and successfully outperformed other contemporary methods. The Adience dataset consists of face images acquired under common imaging conditions; since the photos were taken in unconstrained situations, variations in object appearance, pose, and environmental light are present. The CNN method can provide excellent classification results, but a challenge remains in the number of data samples needed to train the classifier well (Yu et al., 2017; Levi and Hassner, 2015; Maggiori et al., 2016; Marmanis et al., 2015).

Several previous studies serve as references for using CNNs in classification and identification, including the works of Kwon et al. (2019), Kwon et al. (2017), and Yang et al. (2019). In the first paper, Kwon et al. (2019) used an ensemble of the LeNet2, LeNet3, and MiniVGGNet4 models to classify Korean softwood, with the F1 score reaching 0.98. In the second paper, Kwon et al. (2017) developed an automatic wood species identification system utilising CNN models such as LeNet, MiniVGGNet, and their variants; the research showed sufficiently fast and accurate results, with a 99.3% accuracy score achieved by the LeNet3 architecture for five Korean softwood species. In the last paper, Yang et al. (2019) used an ensemble of two different convolutional neural network models, LeNet3 and NIRNet; the ensemble methods were applied to lumber species with an average F1 score of 95.31%.

In this wood classification research, we attempted to identify microscopic wood images. Following the successful examples in the papers discussed earlier, the CNN method is a promising approach to microscopic wood image classification. However, as mentioned in the studies above, obtaining a good CNN classifier requires an adequate quantity of data in the training process. To overcome the small size of the available wood dataset, a sample selection process is proposed before the microscopic wood images enter the classification stage using the CNN method. In the sample selection process, the wood image is cropped into several sections of a specific size. We assume that even though only certain image segments are used, each segment retains characteristics that can distinguish one species from another, such as vessel size, vessel density, colour, and transverse wood fibre.

The remaining part of the paper is organised as follows. Section 2 presents the dataset used in the research, the sample selection process, the proposed CNN architectures, and the proposed classification algorithm. Section 3 presents all experimental results, starting from the sample selection, training, and testing results. The last section, Section 4, provides the conclusion and future work.

2. MATERIALS and METHODS

2.1. Wood Dataset

A wood species is commonly recognised by examining a small piece of wood as a sample. In this Japanese Fagaceae wood classification research, we observed the microscopic features of the wood. The commonly observed microscopic features are vessels, parenchyma, rays, and others (Schoch et al., 2004; Prislan et al., 2014). Observations are made by collecting micro cores and examining them under a light microscope (Prislan et al., 2014).

This research used a wood dataset obtained from the Research Institute of Sustainable Humanosphere (RISH), Kyoto University, Japan. The dataset consisted of microscopic images of nine species of Fagaceae woods. Table 1 presents the list of wood species.

Table 1. Nine species of Japanese Fagaceae. For each species, the academic name, Japanese common name, number of individual woods, and number of images are given, followed by the image data IDs with the Kyoto University IDs (KYOw numbers) in parentheses.

Castanea crenata (Kuri): 10 individual woods, 20 images. Image IDs (KYOw): PICA08E (10246), PICA09E (10293), PICA09F (10294), PICA0A0 (10997), PICA0B1 (10997), PICA0B2 (11596), PICA0B3 (12944), PICA0C4 (13755), PICA0C5 (13760), PICA0C6 (13830)

Fagus crenata (Buna): 10 individual woods, 12 images. Image IDs (KYOw): PICA0D6 (458), PICA0D7 (972), PICA0E8 (1116), PICA0E9 (1294), PICA0EA (8305), PICA0FB (458), PICA0FC (972), PICA0FD (1116), PICA0FE (1294), PICA10E (8305)

Fagus japonica (Inubuna): 9 individual woods, 12 images. Image IDs (KYOw): PICA10F (354), PICA110 (1613), PICA121 (5368), PICA122 (8306), PICA123 (8308), PICA133 (13836), PICA134 (13955), PICA145 (17510), PICA146 (18594)

Quercus acuta (Akagashi): 10 individual woods, 19 images. Image IDs (KYOw): PICA147 (342), PICA158 (1615), PICA159 (2867), PICA15A (2957), PICA16A (4920), PICA16B (8312), PICA16C (9277), PICA17D (13837), PICA17E (14477), PICA18F (14679)

Quercus acutissima (Kunugi): 10 individual woods, 30 images. Image IDs (KYOw): PICA190 (60), PICA191 (1120), PICA1A1 (1617), PICA1A2 (5540), PICA1A3 (5668), PICA1B4 (8314), PICA1B5 (8315), PICA1B6 (12092), PICA1C6 (15352), PICA1C7 (17763)

Quercus crispula (Mizunara): 10 individual woods, 18 images. Image IDs (KYOw): PICA1C8 (62), PICA1D9 (411), PICA1DA (462), PICA1EB (2963), PICA1EC (8203), PICA1ED (8320), PICA1EE (10297), PICA1FE (11421), PICA1FF (13841)

Quercus gilva (Ichiigashi): 10 individual woods, 26 images. Image IDs (KYOw): PICA210 (2958), PICA211 (4973), PICA212 (5535), PICA223 (5663), PICA224 (8317), PICA225 (9279), PICA235 (11593), PICA236 (13839), PICA237 (14829), PICA248 (18722)

Quercus glauca (Arakashi): 10 individual woods, 15 images. Image IDs (KYOw): PICA249 (5536), PICA259 (5664), PICA25A (9280), PICA25B (12846), PICA26C (12847), PICA26D (12942), PICA26E (13743), PICA27F (15619), PICA280 (17789), PICA290 (18774)

Quercus variabilis (Abemaki): 7 individual woods, 13 images. Image IDs (KYOw): PICA291 (1620), PICA292 (6576), PICA2A3 (8332), PICA2A4 (8333), PICA2A5 (10328), PICA2B6 (14129), PICA2B7 (17782)

Each image in the dataset was stored as a TIFF file with dimensions of 4140 × 3096 pixels. The existing dataset was divided into three groups: training data, validation data, and test data. The test set consisted of 27 images (three from each species), the validation set of 18 images (two from each species), and the training set of 120 images (the remainder).
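As an illustration, this per-species split can be reproduced with a few lines of Python. The directory layout (one folder per species containing the TIFF files) and the use of random shuffling are assumptions made for the sketch, not details given in the paper.

```python
import random
from collections import defaultdict
from pathlib import Path

# Group image paths by species; the folder structure is hypothetical.
by_species = defaultdict(list)
for path in Path("fagaceae_dataset").glob("*/*.tif"):
    by_species[path.parent.name].append(path)

test_set, val_set, train_set = [], [], []
for species, images in sorted(by_species.items()):
    random.shuffle(images)
    test_set.extend(images[:3])   # 3 test images per species  -> 27 total
    val_set.extend(images[3:5])   # 2 validation images        -> 18 total
    train_set.extend(images[5:])  # remaining images for training -> 120 total
```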

2.2. Selection of Featuring Sub-images

The proposed method uses a small segment of the image as input. This procedure was defined considering the size of the original images and the limited dataset of 165 images. Each image has dimensions of 4140 × 3096 pixels and a file size of approximately 38 MB. Applying an image processing algorithm to images of such a large size is costly in computer resources and computational time. Moreover, as noted in the previous section, a small dataset can result in overfitting and low generalisation capability of the CNN model.

The process of taking specific segments of the input image comprised three categories based on the sample area size. The first category, small, cropped the image into 100 × 100-pixel areas and produced 1230 sample areas per image. The second category, medium, cropped the image into 200 × 200-pixel areas and produced 300 sample areas per image. The last category, large, cropped the image into 300 × 300-pixel areas and produced 130 sample areas per image.
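The reported counts are consistent with a non-overlapping grid crop of the 4140 × 3096-pixel images (for example, 4140/100 and 3096/100 rounded down give 41 × 30 = 1230 small areas). The following sketch reproduces these numbers under that assumption; whether the crops overlap is not stated in the paper.

```python
import numpy as np

def select_samples(image: np.ndarray, size: int) -> list:
    """Crop an image into non-overlapping square sample areas of side
    `size` pixels; leftover margins narrower than `size` are discarded."""
    height, width = image.shape[:2]
    return [image[top:top + size, left:left + size]
            for top in range(0, height - size + 1, size)
            for left in range(0, width - size + 1, size)]

# For a 4140 x 3096-pixel image this yields 41 x 30 = 1230 small
# (100 px), 20 x 15 = 300 medium (200 px), and 13 x 10 = 130 large
# (300 px) sample areas, matching the counts reported above.
```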

Fig. 1 depicts the more specific and detailed features provided by the selected sample areas. Training the CNN model on such specific and detailed features resulted in a model with excellent generalisation capabilities despite the limited dataset.

Fig. 1. The illustration of the sample selection process on the image.
2.3. Implementation of Convolutional Neural Networks

CNN is a neural network architecture used for prediction when the input observations are images, which is the case in a wide range of neural network applications (Seth, 2019). In principle, a CNN mimics the visual cortex. It builds on the Neocognitron proposed by Fukushima (1980), which gradually evolved into what is now called the convolutional neural network (Géron, 2019). A CNN is very similar to an ordinary neural network and still consists of neurons with weights that can be learned from data; each neuron receives several inputs and performs a dot product. In general, a CNN has three main layers: the convolution layer, the pooling layer, and the fully connected layer (Sewak et al., 2018).

Fig. 2 shows that a typical CNN architecture stacks several convolutional layers (each followed by an activation) and a pooling layer, then further convolutional layers, another pooling layer, and so on, until the final layer produces the predictions (for instance, a softmax layer outputting the estimated class probabilities) (Géron, 2019).

Fig. 2. Convolutional neural network (CNN) architecture.
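For illustration only, a minimal Keras model showing the three main layer types named above might look as follows. This is a generic textbook example, not the architecture used in this paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

# A minimal CNN: convolution, pooling, and fully connected layers.
model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu",
                  input_shape=(200, 200, 3)),  # convolution layer
    layers.MaxPooling2D(),                     # pooling layer
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(9, activation="softmax"),     # fully connected layer
])
```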

In practice nowadays, very few people train an entire convolutional network from scratch, for several reasons: training a CNN model requires a large amount of data, there are many parameters (weights) to train, and overfitting must be avoided. Transfer learning is a solution to this problem. Five CNN architectures were designed by applying transfer learning to fit the sample-selected dataset.

Transfer learning is a machine learning technique that reuses a model trained and developed for one task on a second task. It refers to the situation whereby what has been learned in one setting is exploited to improve optimisation in another setting (Hussain et al., 2019). Popular pre-trained models for transfer learning include VGG-16, MobileNet, ResNet-50, and DenseNet.

Fig. 3 illustrates the architecture of each model. Each base architecture was cut at a particular network depth: VGG16 (Simonyan and Zisserman, 2014) at the 19th layer, ResNet50 (He et al., 2016) at the 39th layer, MobileNet (Howard et al., 2017) at the 23rd layer, DenseNet121 (Huang et al., 2017) at the 51st layer, and Xception (Chollet, 2017) at the 42nd layer; all layers after the specified layer were removed. A new fully connected layer was then added to fit the nine wood species/classes. The use of a shallow network depth aimed to prevent the network from losing the essential features of the wood: the deeper the network, the more specialised the extracted features become, which risks losing the already sufficiently detailed features of the sample-selected image.

Fig. 3. Five CNN architectures: VGG16 (a), ResNet50 (b), MobileNet (c), DenseNet121 (d), and Xception (e) based architectures. For each architecture, the first column is the layer number, and the second and third columns are the layer type and the number of blocks, respectively. Colour differences indicate differences in architectural blocks: light blue blocks indicate input layers, green blocks indicate fully connected layers, and light orange, grey, gold, yellow, and orange blocks indicate feature extractor layers.
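A sketch of this truncation step, assuming a Keras-style workflow: the GlobalAveragePooling2D head and the mapping of Fig. 3's layer numbers onto Keras layer indices are our assumptions, not details stated in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(base: tf.keras.Model, cut_layer: int,
                     n_classes: int = 9) -> tf.keras.Model:
    """Keep the pretrained base up to `cut_layer` (inclusive) and attach
    a new fully connected head for the nine Fagaceae species."""
    trunk = models.Model(base.input, base.layers[cut_layer].output)
    x = layers.GlobalAveragePooling2D()(trunk.output)  # pooling head (assumed)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(trunk.input, outputs)

# Example: a VGG16-based model for medium (200 x 200-pixel) sample areas.
# Mapping Fig. 3's "19th layer" onto a Keras layer index is an assumption.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(200, 200, 3))
model = build_classifier(base, cut_layer=18)
```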
2.4. Proposed Algorithm

Algorithm development consisted of two main steps: the training process (Fig. 4), followed by the testing process (Fig. 5).

Fig. 4. The training process in getting the CNN model.

Fig. 5. The testing process in getting the classification result.

The training process began with the sample selection process described in subsection 2.2. Each architecture from subsection 2.3 was then trained on each sample selection size category (small, 100 × 100 pixels; medium, 200 × 200 pixels; and large, 300 × 300 pixels). Each architecture therefore produced three models, giving a total of 15 models across the five architectures, as enumerated in the sketch below.
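A sketch of how the 15 runs (5 architectures × 3 sample-area sizes) could be enumerated; here each base keeps its full convolutional trunk with a generic nine-class head, and the Fig. 3 truncation depths are omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

ARCHITECTURES = {
    "VGG16": tf.keras.applications.VGG16,
    "ResNet50": tf.keras.applications.ResNet50,
    "MobileNet": tf.keras.applications.MobileNet,
    "DenseNet121": tf.keras.applications.DenseNet121,
    "Xception": tf.keras.applications.Xception,
}
SAMPLE_SIZES = [100, 200, 300]  # small, medium, large crops

runs = {}
for name, Base in ARCHITECTURES.items():
    for size in SAMPLE_SIZES:
        base = Base(include_top=False, weights="imagenet",
                    input_shape=(size, size, 3))
        x = layers.GlobalAveragePooling2D()(base.output)
        out = layers.Dense(9, activation="softmax")(x)  # nine species
        runs[(name, size)] = models.Model(base.input, out)
```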

In the testing process, augmented data were added to the test sets. The data augmentation aimed to add variation and make the testing process describe a more general situation. The augmentation was carried out by rotating the images, increasing the total number of test images to 54. The test sets then entered the sample selection stage; here, however, not all sample areas of each image were used. The optimal number of sample areas was sought, one that could produce high accuracy while still considering computational cost. For example, for one image and one sample area size (100, 200, or 300 pixels), five sample areas might be taken. All sample area images were then classified by each CNN model, and from all prediction results, the mode was taken as the final prediction for the whole (real) image. Fig. 5 illustrates this process.
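A sketch of this majority-vote step, assuming a Keras-style model: `select_samples` is the cropping sketch from subsection 2.2, and taking the first `n_samples` crops and the 1/255 intensity scaling are assumptions.

```python
from collections import Counter

import numpy as np

def classify_whole_image(model, image: np.ndarray, size: int,
                         n_samples: int = 5) -> int:
    """Testing step of Fig. 5: classify `n_samples` sample areas of one
    image and return the mode of the predictions as the image's label."""
    areas = select_samples(image, size)[:n_samples]   # crop as in 2.2
    batch = np.stack(areas).astype("float32") / 255.0  # scaling assumed
    predictions = np.argmax(model.predict(batch, verbose=0), axis=1)
    return Counter(predictions.tolist()).most_common(1)[0][0]
```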

3. RESULTS and DISCUSSION

3.1. Sample Selection Process Analysis

In the proposed algorithm, as described in subsection 2.4, every wood image enters the sample selection process. The sample selection process produced three categories of sample areas (small, medium, and large), as illustrated in Fig. 6.

Fig. 6. Sample selection results for Quercus variabilis (small, medium, and large sample areas).

The training process used all sample areas from each original image (1230 sample areas for the small size, 300 for the medium size, and 130 for the large size). For the testing process, meanwhile, we sought the number of sample areas that best balanced accuracy and computational cost. Fig. 7 presents ten trials using different numbers of sample areas (1, 3, 5, 7, 9, 11, 13, 15, 21, and 25).

Fig. 7. The correlation between the number of sample areas used in testing and the resulting accuracy.

The large sample area size and the VGG16 architecture were used in this process, and the results of this analysis were used in the subsequent processes. The optimal result was achieved when the number of sample areas per test image was 5. The next step was to evaluate the 15 CNN models using the number of sample areas defined in this section.
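The trials of Fig. 7 could be scripted as below; `model` and `test_set` (image, label pairs) are placeholders, and `classify_whole_image` is the sketch from subsection 2.4.

```python
# Accuracy of the large-area VGG16 model as the number of sample
# areas per test image varies, mirroring the ten trials of Fig. 7.
for n in [1, 3, 5, 7, 9, 11, 13, 15, 21, 25]:
    hits = sum(
        classify_whole_image(model, image, size=300, n_samples=n) == label
        for image, label in test_set)
    print(f"{n:2d} sample areas: accuracy = {hits / len(test_set):.3f}")
```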

3.2. Training Process Analysis

The training process for all CNN models used the same parameters to produce valid benchmarks. The training experiments set the following parameters: 30 epochs, a learning rate of 0.0001, and the Adam optimiser. If the training accuracy did not increase within 10 epochs, the training process was terminated without waiting for the maximum number of epochs. Furthermore, if the accuracy did not increase within 5 epochs, the learning rate was reduced by multiplying it by a factor of 0.6; lowering the learning rate smooths the weight updates, helping accuracy to increase. A total of 15 training processes were completed. Fig. 8 displays the training history of the VGG16 architecture with a sample area size of 200 pixels.
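Expressed as Keras code, this configuration might look as follows; `model`, `train_ds`, and `val_ds` are placeholders, and the categorical cross-entropy loss is an assumption.

```python
import tensorflow as tf

# Adam at 1e-4, at most 30 epochs, early stopping after 10 epochs
# without improvement, and a 0.6x learning-rate reduction after 5
# stagnant epochs, both monitoring training accuracy as stated above.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",  # loss function assumed
              metrics=["accuracy"])
history = model.fit(
    train_ds, validation_data=val_ds, epochs=30,
    callbacks=[
        tf.keras.callbacks.EarlyStopping(monitor="accuracy", patience=10),
        tf.keras.callbacks.ReduceLROnPlateau(monitor="accuracy",
                                             factor=0.6, patience=5),
    ])
```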

Fig. 8. The accuracy and loss progressions plotted for the CNN model trained with the partial sample area.

Fig. 8 shows that the designed architecture fit the wood dataset well: with each epoch, the accuracy continued to increase and the loss continued to decrease. The scores for validation and training were not far apart, showing that the model did not overfit. Furthermore, to test the performance and generalisation of the models, the testing process was carried out on the prepared wood testing dataset.

3.3. Testing Process Analysis

The testing process was done by evaluating each CNN model on the prepared testing dataset. The results of the testing process are presented in detail in Table 2.

Table 2. Performance evaluation of the CNN models when applying the proposed algorithm

Sample area  Metric       VGG16   ResNet50  MobileNet  DenseNet121  Xception
Small        Precision    92%     93%       87%        95%          94%
             Recall       93%     94%       87%        94%          93%
             F1           92%     95%       86%        94%          92%
Medium       Precision    94%     96%       89%        98%          97%
             Recall       93%     94%       87%        98%          96%
             F1           92%     94%       87%        98%          96%
Large        Precision    95%     91%       87%        98%          98%
             Recall       94%     89%       85%        98%          98%
             F1           94%     89%       84%        98%          98%
Weight size               60 MB   4.5 MB    1.3 MB     5.9 MB       30 MB

Based on the test results, the scores generally increased with the size of the input sample area. With a sample area size of 100 pixels, the average F1 scores of VGG16, ResNet50, MobileNet, DenseNet121, and Xception were 92%, 95%, 86%, 94%, and 92%, respectively. With a sample area size of 200 pixels, the average F1 scores of MobileNet, DenseNet121, and Xception increased to 87%, 98%, and 96%, respectively, while VGG16 remained at 92% and ResNet50 decreased to 94%. Finally, with a sample area size of 300 pixels, the average F1 scores of VGG16 and Xception increased to 94% and 98%, those of ResNet50 and MobileNet decreased to 89% and 84%, and that of DenseNet121 remained unchanged at 98%.

The test results in Table 2 reveal a correlation between the size of the sample area and the capability of the resulting CNN model. The size of the sample area is related to the number of features contained in the image: as shown in Fig. 6, a smaller sample area can make surface features of the wood disappear owing to the cutting process. The medium sample area appeared to be the optimal size for maintaining the features contained in the wood images, creating samples with the most detailed and specific wood features of the three categories; accordingly, the F1 scores of the five CNN architectures at the medium size average (92 + 94 + 87 + 98 + 96)/5 = 93.4%. Although the large sample area contained more features, they were less specific than those of the medium sample area; thus, the large sample areas did not produce a more general CNN model than the medium ones.

Furthermore, Table 2 shows that the DenseNet121-based model was the most general CNN architecture across all sample area sizes, producing F1 scores of 94% or higher for every size (100, 200, and 300 pixels), and is considered the best CNN model for Japanese Fagaceae wood identification. Meanwhile, the MobileNet-based architecture provided the lowest accuracy; however, it is still worth considering, as its weight file (1.3 MB) is the lightest. Although its accuracy is not as high as that of the other architectures, it is the most portable option.

To demonstrate how the proposed algorithm overcomes the problems of generalisation ability, dataset limitation, and accuracy, several comparisons were made with CNN models that did not go through the sample selection process, i.e., that used the original images directly. As a sample of the training processes carried out, Fig. 9 displays the training history of the VGG16 architecture.

Fig. 9. The accuracy and loss progressions plotted for the CNN model trained with the whole image area.

The training process used the same architectures and parameters as described in subsection 2.4, but excluded the sample selection process. It resulted in a validation accuracy of 94%, with the validation loss continuing to approach 0. During the training stage, therefore, the partial sample area does not bring a significant improvement over the whole image area. The testing stage, however, gave different results: as presented in Table 3, most of the CNN architectures scored at markedly lower levels.

Table 3. Performance evaluation of the CNN models without applying the proposed algorithm

Metric     VGG16   ResNet50  MobileNet  DenseNet121  Xception
Precision  86%     88%       51%        71%          83%
Recall     81%     81%       63%        67%          70%
F1         80%     81%       54%        62%          70%

The unfavourable testing results in Table 3 indicate that these models were overfitted and had low generalisability. This comparison shows the advantages and benefits of using the sample selection process: with the same limited dataset, the sample selection process could overcome the overfitting problem and the low generalisability of the CNN model.

4. CONCLUSION

CNN has become a popular deep learning model widely used for image and visual analysis, but it requires a large dataset to learn a robust classifier. In this wood classification research, the microscopic images taken from the laboratory were limited in number and large in size, around 38 MB each with dimensions of 4140 × 3096 pixels. Therefore, a well-designed algorithm was proposed that could handle the dataset limitations and identify wood species accurately and efficiently. The principle of the proposed algorithm is to apply a sample selection process before the dataset enters the classification stage using the VGG16-, ResNet50-, MobileNet-, DenseNet121-, and Xception-based architectures.

The experimental results showed that the sample selection process can produce CNN models with good generalisation capabilities, and that making the sample areas more specific and detailed yields a powerful CNN model. The medium sample area category (200 × 200 pixels) appeared to be the optimal sample area, and the DenseNet121 architecture was the optimal architecture when used with medium sample areas.

The availability of more microscopic images is recommended in the future to improve the classification accuracy of the proposed algorithm. The results of this wood classification research are expected to be widely applicable; given the small size of the CNN model, it can potentially be deployed on mobile devices.

DATA AVAILABILITY STATEMENT

The microscopic images used in the experiment are available from Kyoto University Research Information Repository (https://repository.kulib.kyoto-u.ac.jp/dspace/handle/2433/250016).

ACKNOWLEDGMENT

The authors would like to thank the Indonesian Institute of Sciences (Lembaga Ilmu Pengetahuan Indonesia) and the Research Institute of Sustainable Humanosphere (RISH), Kyoto University, Japan for the assistance and provision of the datasets supporting this wood classification research.

REFERENCES

1. Chollet F. Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017. p. 1251-1258.

2. Fukushima K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics. 1980;36:193-202.

3. Géron A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. 2nd ed. O'Reilly Media; 2019.

4. Hadiwidjaja M.L., Gunawan P.H., Prakasa E., Rianto Y., Sugiarto B., Wardoyo R., Damaryati R., Sugiyarto K., Dewi L.M., Astutiputri V.F. Developing wood identification system by local binary pattern and Hough transform method. Journal of Physics: Conference Series. 2019;1192(1):012053.

5. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. p. 770-778.

6. Howard A.G., Zhu M., Chen B., Kalenichenko D., Wang W., Weyand T., Andreetto M., Adam H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. 2017.

7. Huang G., Liu Z., Van Der Maaten L., Weinberger K.Q. Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017. p. 4700-4708.

8. Hussain M., Bird J.J., Faria D.R. A study on CNN transfer learning for image classification. In: Lotfi A., Bouchachia H., Gegov A., Langensiepen C., McGinnity M., editors. Advances in Computational Intelligence Systems. 2018. p. 191-202.

9. Hwang S.-W., Tazuru S., Sugiyama J. Wood identification of historical architecture in Korea by synchrotron X-ray microtomography-based three-dimensional microstructural imaging. Journal of the Korean Wood Science and Technology. 2020;48(3):283-290.

10. Jeon W.S., Kim Y.K., Lee J.A., Kim A.R., Darsan B., Chung W.Y., Kim N.H. Anatomical characteristics of three Korean bamboo species. Journal of the Korean Wood Science and Technology. 2018;46(1):29-37.

11. Jeon W.S., Lee H.M., Park J.H. Comparison of anatomical characteristics for wood damaged by oak wilt and sound wood from Quercus mongolica. Journal of the Korean Wood Science and Technology. 2020;48(6):807-819.

12. Kobayashi K., Kegasa T., Hwang S.W., Sugiyama J. Anatomical features of Fagaceae wood statistically extracted by computer vision approaches: Some relationships with evolution. PLoS One. 2019;14(8):e0220762.

13. Kwon O., Lee H.G., Lee M.R., Jang S., Yang S.Y., Park S.Y., Yeo H. Automatic wood species identification of Korean softwood based on convolutional neural networks. Journal of the Korean Wood Science and Technology. 2017;45(6):797-808.

14. Kwon O., Lee H.G., Yang S.Y., Kim H., Park S.Y., Choi I.G., Yeo H. Performance enhancement of automatic wood classification of Korean softwood by ensembles of convolutional neural networks. Journal of the Korean Wood Science and Technology. 2019;47(3):265-276.

15. Levi G., Hassner T. Age and gender classification using convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2015. p. 34-42.

16. Maggiori E., Tarabalka Y., Charpiat G., Alliez P. Convolutional neural networks for large-scale remote-sensing image classification. IEEE Transactions on Geoscience and Remote Sensing. 2016;55(2):645-657.

17. Marmanis D., Datcu M., Esch T., Stilla U. Deep learning earth observation classification using ImageNet pretrained networks. IEEE Geoscience and Remote Sensing Letters. 2015;13(1):105-109.

18. Prislan P., Gričar J., Čufar K. Wood Sample Preparation for Microscopic Analysis. University of Ljubljana, Department of Wood Science and Technology; 2014.

19. Salma S., Gunawan P., Prakasa E., Sugiarto B., Wardoyo R., Rianto Y., Dewi L.M. Wood identification on microscopic image with Daubechies wavelet method and local binary pattern. In: 2018 International Conference on Computer, Control, Informatics and its Applications (IC3INA). 2018. p. 23-27.

20. Savero A.M., Wahyudi I., Rahayu I.S., Yunianti A.D., Ishiguri F. Investigating the anatomical and physical-mechanical properties of the 8-year-old superior teakwood planted in Muna Island, Indonesia. Journal of the Korean Wood Science and Technology. 2020;48(5):618-630.

21. Schoch W., Heller I., Schweingruber F.H., Kienast F. Wood Anatomy of Central European Species. Swiss Federal Institute for Forest, Snow and Landscape Research; 2004.

22. Seth W. Deep Learning from Scratch. O'Reilly Media; 2019.

23. Sewak M., Karim M.R., Pujari P. Practical Convolutional Neural Networks: Implement Advanced Deep Learning Models Using Python. Packt Publishing Ltd; 2018.

24. Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014.

25. Sugiarto B., Prakasa E., Wardoyo R., Damayanti R., Dewi L.M., Pardede H.F., Rianto Y. Wood identification based on histogram of oriented gradient (HOG) feature and support vector machine (SVM) classifier. In: 2017 2nd International Conferences on Information Technology, Information Systems and Electrical Engineering (ICITISEE). 2017. p. 337-341.

26. Yang S.Y., Lee H.G., Park Y., Chung H., Kim H., Park S.Y., Yeo H. Wood species classification utilizing ensembles of convolutional neural networks established by near-infrared spectra and images acquired from Korean softwood lumber. Journal of the Korean Wood Science and Technology. 2019;47(4):385-392.

27. Yu S., Jia S., Xu C. Convolutional neural networks for hyperspectral image classification. Neurocomputing. 2017;219:88-98.