idely applied in image classification and object detection [21,23,24]. At present, deep belief networks (DBN) [25], stacked autoencoders (SAE) [26], convolutional neural networks (CNN) [27], and other models have been applied to HI classification, and the CNN clearly outperforms the other models in classification and target detection tasks [28]. Consequently, the CNN model has been widely employed in PWD studies in recent years. In one study, two advanced object detection models, You Only Look Once version 3 (YOLOv3) and Faster Region-based Convolutional Neural Network (Faster R-CNN), were applied to the early diagnosis of PWD infection, obtaining good results and providing an efficient and rapid strategy for early PWD diagnosis [19]. In another study, Yu et al. [20] employed Faster R-CNN and YOLOv4 to identify pine trees in the early stage of PWD infection, showing that early detection of PWD can be improved by taking broadleaved trees into account. Qin et al. [31] proposed a new framework, the spatial-context-attention network (SCANet), to recognize PWD-infected pine trees from UAV images; the study achieved an overall accuracy (OA) of 79% and offered an important strategy for monitoring and managing PWD. Tao et al. [32] applied two CNN models (AlexNet and GoogLeNet), together with a traditional template matching (TM) method, to predict the distribution of dead pine trees caused by PWD, showing that the detection accuracy of the CNN-based approaches was much higher than that of the conventional TM technique. The above studies are all based on the two-dimensional CNN (2D-CNN). A 2D-CNN [27] can capture the spatial information in the raw images, but it cannot properly extract spectral information.
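As a rough illustration of why feeding hundreds of spectral bands into a 2-D convolution is costly, the sketch below compares the weight count of a single 2-D convolutional layer when the bands are treated as input channels. The layer sizes (64 output channels, 3 x 3 kernels, 200 bands) are illustrative assumptions, not values taken from the cited studies.

```python
# Hedged sketch: parameter count of one 2-D conv layer when spectral bands
# are supplied as input channels. Illustrative sizes only.

def conv2d_params(in_channels: int, out_channels: int, k: int) -> int:
    """One k x k kernel per (input, output) channel pair, plus one bias
    per output channel."""
    return out_channels * (in_channels * k * k + 1)

rgb_params = conv2d_params(3, 64, 3)    # RGB image: 3 input channels
hsi_params = conv2d_params(200, 64, 3)  # hypothetical 200-band hyperspectral cube

print(rgb_params)  # 1792
print(hsi_params)  # 115264, roughly 64x more weights to train
```

The spectral dimension multiplies the number of trainable kernel weights, which is the over-fitting and computational-cost problem described in the text.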
When a 2D-CNN is applied to HI classification, 2-D convolution must be performed on the original data of all bands; the convolution operation becomes very complex because every band needs its own group of convolution kernels to be trained. Different from images with only RGB bands, the hyperspectral data fed into the network usually contain hundreds of spectral dimensions, which requires many convolution kernels. This may cause over-fitting of the model and greatly increase the computational cost. To solve this problem, the three-dimensional CNN (3D-CNN) has been introduced to HI classification [33–35]. A 3D-CNN uses 3-D convolution, operating in three dimensions simultaneously, to directly extract the spectral and spatial information of hyperspectral images. The 3-D convolution kernel extracts 3-D features, of which two dimensions are spatial and the third is spectral. Since the HRS image is a 3-D cube, a 3D-CNN can extract spatial and spectral information at the same time, which makes it a more suitable model for HI classification. For instance, Mäyrä et al. [21] collected hyperspectral and LiDAR data (the LiDAR data provided a canopy height model, which was used to match ground reference data to the aerial imagery) and employed a 3D-CNN model for individual tree species classification from hyperspectral data, showing that 3D-CNNs were effective in distinguishing coniferous species from one another and also achieved high accuracy in classifying aspen. In another study, Zhang et al. [24] used hyperspectral images and proposed a 3D-1D convolutional neural network model for tree species classification, turning the captured high-level semantic conce…
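The 3-D convolution described above can be sketched in a few lines of NumPy: a single kernel slides along the two spatial axes and the spectral axis at once, so one kernel jointly summarizes a spectral-spatial neighborhood. This is a minimal illustration (valid padding, stride 1, one kernel, random data), not an implementation from any of the cited papers; the cube and kernel sizes are assumptions.

```python
import numpy as np

def conv3d_valid(cube: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive single-kernel 3-D convolution (valid padding, stride 1) over a
    hyperspectral cube of shape (bands, height, width)."""
    B, H, W = cube.shape
    kb, kh, kw = kernel.shape
    out = np.zeros((B - kb + 1, H - kh + 1, W - kw + 1))
    for b in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # The kernel covers kb bands and a kh x kw spatial window at once.
                out[b, i, j] = np.sum(cube[b:b + kb, i:i + kh, j:j + kw] * kernel)
    return out

cube = np.random.rand(30, 9, 9)   # assumed: 30 bands, 9 x 9 pixel patch
kernel = np.random.rand(7, 3, 3)  # assumed: 7 bands deep, 3 x 3 spatial
feat = conv3d_valid(cube, kernel)
print(feat.shape)  # (24, 7, 7): both the spectral and spatial axes are convolved
```

The output shape shows the key difference from 2-D convolution: the spectral axis is convolved rather than being flattened into input channels.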