A Deep Learning Network via Spatial-Based Super-resolution Reconstruction for Cell Counting and Segmentation

Cell counting and segmentation are critical tasks in biology and medicine. Traditional methods for cell counting are labor-intensive, time-consuming, and prone to human error. Recently, deep learning-based cell counting has become a trend, including point-based methods, such as cell detection and cell density prediction, and non-point-based methods, such as direct regression of the cell number. However, point-based counting relies heavily on well-annotated datasets, which are scarce and difficult to obtain, while non-point-based counting is less interpretable. Here, the task of cell counting is approached by dividing it into two subtasks: cell number prediction and cell distribution prediction. To accomplish this, a deep learning network for spatial-based super-resolution reconstruction (SSRNet) is proposed that predicts the cell count and segments the cell distribution contour. To train the model effectively, an optimized multitask loss function (OM loss) is proposed that coordinates the training of the two tasks. Within SSRNet, a spatial-based super-resolution fast upsampling module (SSR-upsampling) is proposed for feature-map enhancement and one-step upsampling; it can enlarge the deep feature map 32 times without blurring, yielding fine-grained detail and fast processing. SSRNet also uses an optimized encoder network: compared with the classic U-Net, its running memory read/write consumption is only 1/10 and its total number of multiply-add operations is only 1/20 of U-Net's. Compared with traditional progressive upsampling, SSR-upsampling completes the upsampling of the entire decoder stage in one step, reducing network complexity while achieving better performance. Because the method performs non-point-based counting, it does not require exact position annotations for each cell during training. Experiments demonstrate state-of-the-art performance on cell counting and segmentation tasks. The code is publicly available on GitHub (https://github.com/Roin626/SSRnet).
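The abstract describes a one-step 32x upsampling module (SSR-upsampling) and a multitask loss (OM loss) that coordinates count regression and distribution segmentation, but it gives no implementation details. The minimal PyTorch sketch below illustrates how such a one-step 32x spatial upsampling and a combined two-task loss could look; the class name SSRUpsampling, the pixel-shuffle mechanism, the loss terms, and the task weights are illustrative assumptions, not the published architecture or the paper's OM loss definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SSRUpsampling(nn.Module):
    """Sketch of a one-step spatial super-resolution upsampling block.

    The abstract states that SSR-upsampling enlarges the deep feature map
    32 times in a single step; a sub-pixel (pixel-shuffle) layer is one
    common way to realize such a jump and is used here only as an
    illustrative assumption.
    """

    def __init__(self, in_channels: int, out_channels: int, scale: int = 32):
        super().__init__()
        # Project to scale^2 * out_channels so PixelShuffle can rearrange
        # channel blocks into a (scale x scale) larger spatial grid.
        self.proj = nn.Conv2d(in_channels, out_channels * scale * scale, kernel_size=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.proj(x))


def multitask_loss(pred_count, true_count, pred_mask, true_mask,
                   w_count: float = 1.0, w_seg: float = 1.0):
    """Hypothetical combined loss for the two subtasks (cell-number
    regression + distribution segmentation); the terms and weights are
    assumptions, not the paper's OM loss."""
    count_loss = F.l1_loss(pred_count, true_count)                        # cell-number regression
    seg_loss = F.binary_cross_entropy_with_logits(pred_mask, true_mask)   # distribution contour mask
    return w_count * count_loss + w_seg * seg_loss


if __name__ == "__main__":
    feat = torch.randn(1, 256, 8, 8)        # deep encoder feature map
    up = SSRUpsampling(256, 1, scale=32)
    mask_logits = up(feat)                  # -> (1, 1, 256, 256): 32x larger in one step
    print(mask_logits.shape)
```

The one-step design replaces a chain of progressive 2x decoder upsamplings with a single rearrangement of channel information into space, which is consistent with the abstract's claim of reduced network complexity.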

Location
Deutsche Nationalbibliothek Frankfurt am Main
Extent
Online resource
Language
English

Bibliographic citation
A Deep Learning Network via Spatial-Based Super-resolution Reconstruction for Cell Counting and Segmentation. Advanced Intelligent Systems, 01.08.2023, 16 pages.

Creator
Deng, Lijia
Zhou, Qinghua
Wang, Shuihua
Zhang, Yudong

DOI
10.1002/aisy.202300185
URN
urn:nbn:de:101:1-2023080215051084509379
Rights
Open Access; access to the object is unrestricted.
Last update
14.08.2025, 11:00 AM CEST

Data provider

This object is provided by:
Deutsche Nationalbibliothek. If you have any questions about the object, please contact the data provider.
