1. Feature Extraction Technology
Feature extraction is a core method in counter-drone identification: by analyzing the distinctive signatures a drone presents to different sensors, a system can identify it accurately. In radar detection, micro-Doppler features provide critical information for identifying drones. China’s "Tianqiong" system analyzes the micro-Doppler features in radar echoes, allowing it to distinguish fixed-wing from multi-rotor drones.
When radar waves hit a rotary-wing drone, the high-speed rotation of the rotors causes unique micro-Doppler frequency shifts in the radar echo. These shifts are closely related to factors such as the number of rotors, rotor speed, and flight posture. Fixed-wing drones, due to their flight principles and structural characteristics, produce significantly different micro-Doppler features in radar echoes compared to rotary-wing drones.
By extracting and analyzing these subtle but distinctive features, the "Tianqiong" system can accurately identify drones in complex radar signals, providing essential data for countermeasures.
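The rotor-modulation effect described above can be sketched in a few lines. This is an illustrative toy, not the "Tianqiong" system's actual algorithm: blade rotation phase-modulates the echo, spreading energy into micro-Doppler sidebands, while a fixed-wing body return stays close to a single Doppler tone. The sample rate, body Doppler, rotor rate, and modulation depths below are all assumed values chosen for a clean demonstration.

```python
import cmath
import math

FS = 512          # samples per second; 1 s of data gives 1 Hz DFT bins
F_BODY = 100.0    # body Doppler shift, Hz (assumed)
F_ROTOR = 50.0    # blade rotation rate, Hz (assumed)

def simulate_echo(mod_depth):
    """Complex radar echo with sinusoidal micro-Doppler phase modulation."""
    return [cmath.exp(1j * (2 * math.pi * F_BODY * n / FS
                            + mod_depth * math.sin(2 * math.pi * F_ROTOR * n / FS)))
            for n in range(FS)]

def doppler_spread(signal):
    """Count DFT bins holding significant energy (naive O(N^2) DFT)."""
    n = len(signal)
    mags = []
    for k in range(n):
        x = sum(s * cmath.exp(-2j * math.pi * k * m / n)
                for m, s in enumerate(signal))
        mags.append(abs(x))
    peak = max(mags)
    return sum(1 for m in mags if m > 0.05 * peak)

def classify(spread):
    # Wide sideband structure -> rotating blades present.
    return "rotary-wing" if spread > 3 else "fixed-wing"

rotary = doppler_spread(simulate_echo(mod_depth=5.0))  # strong blade modulation
fixed = doppler_spread(simulate_echo(mod_depth=0.0))   # body return only
print(classify(rotary), classify(fixed))
```

The modulated echo spreads into Bessel-function sidebands at multiples of the rotor rate, so the rotary-wing case occupies many bins while the unmodulated return occupies one; counting occupied bins is the simplest possible stand-in for real micro-Doppler feature extraction.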
In optoelectronic detection, building a comprehensive and accurate feature database is key to efficient identification. France’s Thales is building a database containing the optical features of 2,000 drone types; in practical use, recognition takes less than 0.5 seconds. Thales collects large volumes of image data across many drone models and brands in both visible and infrared bands, extracting features such as shape, size ratio, surface texture, color, and thermal radiation distribution. In visible-light images, body shape, wing structure, and landing gear are important cues for recognition.
In infrared images, the heat patterns of drone engines, batteries, and other components provide distinctive thermal features. By comparing real-time drone imagery against the feature database, Thales’ system can identify a drone’s model in a very short time, significantly improving response speed and recognition accuracy.
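Database matching of this kind reduces, at its simplest, to nearest-neighbor lookup over feature vectors. The sketch below is illustrative only (not Thales' actual system), and the drone names and feature values are invented for demonstration.

```python
import math

# Each database entry: (shape elongation, wing-to-body size ratio,
# mean infrared intensity), all normalized to [0, 1]. Invented values.
FEATURE_DB = {
    "quadcopter-A": (0.30, 0.10, 0.70),
    "fixed-wing-B": (0.85, 0.60, 0.40),
    "hexacopter-C": (0.35, 0.12, 0.90),
}

def identify(features, db):
    """Return the database entry closest in Euclidean distance."""
    return min(db, key=lambda name: math.dist(features, db[name]))

# A live detection with features close to the quadcopter entry.
print(identify((0.33, 0.11, 0.72), FEATURE_DB))  # -> quadcopter-A
```

A production system would use far richer descriptors and an indexed search structure, but the compare-against-database step is the same in principle.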
2. Machine Learning Technology
Machine learning has shown tremendous advantages in counter-drone identification. In particular, convolutional neural networks (CNNs) and transfer learning have significantly improved recognition accuracy and efficiency. Northrop Grumman’s AutoCue system uses the ResNet-50 model, achieving 98.7% recognition accuracy for small drones. CNNs are deep learning models designed for image data: by stacking convolutional, pooling, and fully connected layers, they automatically extract representative features from images.
In counter-drone identification technology, the AutoCue system uses large datasets of labeled drone images to train the ResNet-50 model. During training, the model continually adjusts its parameters, learning the characteristic patterns of different drone models in images, such as shape contours, texture details, and color distribution. When new drone images are input into the system, the ResNet-50 model quickly and accurately extracts key features from the images, comparing them with the learned feature patterns to achieve high-precision identification of small drones.
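The convolution and pooling layers mentioned above can be shown in miniature. This is a toy sketch of the building blocks only; a real network like ResNet-50 stacks dozens of such layers with learned kernels, whereas the 3x3 kernel here is a fixed hand-written edge detector, not trained weights.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as CNNs actually compute)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest local response."""
    return [[max(fmap[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# 6x6 "image": a bright vertical stripe, a crude stand-in for a drone arm.
img = [[1.0 if 2 <= j <= 3 else 0.0 for j in range(6)] for _ in range(6)]
# Vertical-edge kernel: responds where intensity changes left to right.
edge_kernel = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]

features = max_pool(conv2d(img, edge_kernel))
print(features)  # [[3.0, -3.0], [3.0, -3.0]]
```

The pooled map records a strong positive response at the stripe's left edge and a negative one at its right edge: the "shape contour" features the text describes, extracted automatically by convolution rather than hand-coded rules.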
Transfer learning addresses the challenge of insufficient training data in drone recognition. China’s Aerospace Science and Industry Corporation (CASIC) has successfully applied the technique, transferring models trained on ground-target recognition to drone recognition and cutting training-data requirements by 80%. The basic principle of transfer learning is to use knowledge and features learned in one domain (the source domain) to accelerate learning in a related domain (the target domain). Because drones share some features with ground targets, such as shape contours and textures, knowledge learned in ground-target recognition carries over to drone recognition tasks. Fine-tuning the transferred model with a small amount of drone image data lets it adapt quickly to the new task, greatly reducing data collection and annotation while maintaining high recognition performance and improving the efficiency and feasibility of deploying counter-drone identification.
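The freeze-and-fine-tune pattern behind transfer learning can be sketched as follows. This is a hypothetical miniature, not CASIC's method: the frozen projection matrix stands in for convolutional layers pretrained on ground-target images, and only a lightweight classifier head (per-class centroids) is fitted on a handful of labeled drone samples. All weights, class names, and data are invented for illustration.

```python
import math

# Frozen "pretrained" projection: raw measurements -> 2-D feature space.
# Stands in for source-domain (ground-target) layers; never updated.
W_FROZEN = [[0.8, -0.2, 0.1],
            [0.1, 0.9, -0.3]]

def extract(x):
    """Apply the frozen source-domain feature extractor."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_FROZEN]

def fine_tune(samples):
    """Fit only the head: one centroid per class in frozen feature space."""
    centroids = {}
    for label, xs in samples.items():
        feats = [extract(x) for x in xs]
        centroids[label] = [sum(f[d] for f in feats) / len(feats)
                            for d in range(len(feats[0]))]
    return centroids

def classify(centroids, x):
    """Assign the class whose centroid is nearest to the extracted features."""
    f = extract(x)
    return min(centroids, key=lambda c: math.dist(f, centroids[c]))

# Only three labeled samples per drone class: the small fine-tuning set
# that replaces large-scale data collection.
few_shot = {
    "multi-rotor": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0], [1.1, 0.1, 0.2]],
    "fixed-wing":  [[0.1, 1.0, 0.9], [0.2, 0.9, 1.0], [0.0, 1.1, 0.8]],
}
head = fine_tune(few_shot)
print(classify(head, [1.05, 0.15, 0.1]))  # -> multi-rotor
```

Because the feature extractor already encodes useful structure from the source domain, only the small head needs target-domain data, which is exactly how freezing pretrained layers reduces annotation requirements.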