INTELLIGENT TECHNIQUE OF TYPHOON VORTEX DETECTION BASED ON OBJECT DETECTION WITH DEEP LEARNING OF SATELLITE IMAGE
Abstract: This study is based on the typhoon best track dataset released by the China Meteorological Administration for 2005—2020 and the satellite cloud imagery of Himawari-8 and the FY (Fengyun) series. First, the original satellite data are converted into FULLDISK grayscale images as the image source for typhoon vortex detection, a new VOC (Visual Object Classes) annotation specification is formulated, and a labeled sample dataset is constructed. The SSD (Single Shot MultiBox Detector) model, a classic object detection model with fast running speed and high recognition accuracy, is then adopted as the base model for typhoon vortex detection. To address the particular difficulties of vortex detection, especially the difficulty of detecting weak vortices, an iterative SSD object detection model is proposed, which markedly improves vortex detection and positioning accuracy. Through object detection, intelligent feature analysis, extraction, recognition, and localization are performed on the satellite images, vortices are automatically detected and positioned, and an intelligent typhoon vortex detection technique is finally established. Test results show that the correct recognition rate is 40%~80% for typhoon vortices weaker than severe tropical storm intensity, above 90% for vortices at severe tropical storm intensity or stronger, and close to 100% for vortices at typhoon intensity or above. This technique provides technical support for future operational real-time, fine-scale monitoring of typhoons with high spatio-temporal resolution satellite imagery.
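As a rough illustration of the sample labeling step described above, the sketch below writes a single VOC-style annotation for one typhoon vortex bounding box on a FULLDISK grayscale image. The file names, image size, and box coordinates are hypothetical placeholders; the exact fields of the paper's annotation specification are not reproduced here.

```python
# Minimal sketch (assumption): one Pascal-VOC-style annotation for a typhoon
# vortex box on a FULLDISK grayscale satellite image. File names, image size,
# and coordinates below are hypothetical examples, not values from the paper.
import xml.etree.ElementTree as ET

def write_voc_annotation(xml_path, image_name, width, height, box,
                         label="typhoon_vortex"):
    """Write a single-object VOC annotation; box = (xmin, ymin, xmax, ymax) in pixels."""
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = image_name
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "1"          # grayscale image
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = label
    bnd = ET.SubElement(obj, "bndbox")
    for tag, value in zip(("xmin", "ymin", "xmax", "ymax"), box):
        ET.SubElement(bnd, tag).text = str(value)
    ET.ElementTree(ann).write(xml_path)

# Hypothetical example: one vortex labeled on a 2750 x 2750 FULLDISK image.
write_voc_annotation("20180915_0600.xml", "20180915_0600.png",
                     2750, 2750, (1480, 920, 1760, 1210))
```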
Key words: object detection, satellite image, typhoon vortex, intelligent detection
Figure 1 Generation of default boxes in the SSD model
a. An image with ground truth (GT) boxes, i.e., the true object locations: the red box marks the dog class and the blue box marks the cat class; b. an 8×8 feature map, on which several default boxes (dashed boxes) with different aspect ratios are drawn at each point; the blue dashed default box is a positive sample matched to the blue cat GT box in a, and the black dashed default boxes are negative samples that match no GT box; c. a 4×4 feature map with default boxes of four aspect ratios; the red dashed default box is a positive sample matched to the red dog GT box in a, and the black dashed default boxes are negative samples. Each matched default box is described by a set of offsets Δ(cx, cy, w, h) relative to the source default box, where cx, cy, w, and h denote the changes in the center coordinates x and y, the width, and the height, respectively.
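To make the matching and offset encoding in Figure 1 concrete, the sketch below illustrates the two steps in simplified form: a default box is matched to a GT box by intersection-over-union (IoU), and the Δ(cx, cy, w, h) offsets are encoded in the SSD style. The boxes and the 0.5 IoU threshold are illustrative assumptions (0.5 is the commonly used SSD default), and the variance scaling used in full SSD implementations is omitted.

```python
# Simplified sketch of SSD-style default-box matching and offset encoding.
# Boxes are (xmin, ymin, xmax, ymax); all values below are illustrative only.
import math

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def encode_offsets(default_box, gt_box):
    """Return Δ(cx, cy, w, h): shift of the GT box relative to the default box."""
    def to_cxcywh(box):
        w, h = box[2] - box[0], box[3] - box[1]
        return box[0] + w / 2, box[1] + h / 2, w, h
    dcx, dcy, dw, dh = to_cxcywh(default_box)
    gcx, gcy, gw, gh = to_cxcywh(gt_box)
    # Center shifts are scaled by the default box size; sizes are log-scaled.
    return ((gcx - dcx) / dw, (gcy - dcy) / dh,
            math.log(gw / dw), math.log(gh / dh))

gt = (40, 40, 120, 130)        # hypothetical GT box
default = (50, 45, 110, 125)   # hypothetical default box on a feature map
# A default box whose IoU with a GT box exceeds the threshold is a positive
# sample and gets regression targets; otherwise it is treated as negative.
if iou(default, gt) > 0.5:
    print("positive sample, offsets:", encode_offsets(default, gt))
```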
Table 1 Correct recognition rate of typhoon vortices under different confidence thresholds (tested on 2005—2018 data)

| Confidence threshold | Total GT boxes in dataset | Detected boxes | Correctly detected boxes | Correct detection ratio (precision) | Share of dataset boxes correctly detected (recall) |
|---|---|---|---|---|---|
| 0.2 | 5,306 | 10,263 | 5,174 | 50.4% | 97.5% |
| 0.3 | 5,306 | 6,479 | 4,831 | 74.6% | 91.0% |
| 0.4 | 5,306 | 5,321 | 4,425 | 83.2% | 83.4% |
| 0.5 | 5,306 | 4,516 | 3,994 | 88.4% | 75.3% |
| 0.6 | 5,306 | 3,782 | 3,437 | 90.9% | 64.7% |
| 0.7 | 5,306 | 2,968 | 2,746 | 92.5% | 51.7% |
| 0.8 | 5,306 | 2,041 | 1,932 | 94.7% | 36.4% |
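The last two columns of Table 1 correspond to the usual precision and recall definitions (correct detections divided by all detections, and correct detections divided by all labeled boxes). As a consistency check, the short sketch below recomputes them from the counts in the 0.4-threshold row.

```python
# Recompute precision and recall from the Table 1 counts at threshold 0.4.
total_gt_boxes = 5306   # total labeled vortex boxes in the test set
detected_boxes = 5321   # boxes reported by the model at this threshold
correct_boxes = 4425    # detections matching a labeled vortex

precision = correct_boxes / detected_boxes   # "correct detection ratio" column
recall = correct_boxes / total_gt_boxes      # "recall" column
print(f"precision = {precision:.1%}, recall = {recall:.1%}")
# -> precision = 83.2%, recall = 83.4%, matching Table 1
```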