
Tr. SPIIRAN, 2019, Volume 18, Issue 6, Pages 1381–1406 (Mi trspy1085)

This article is cited in 4 papers

Digital Information Telecommunication Technologies

Semantic text segmentation from synthetic images of full-text documents

L. Bureš, I. Gruber, P. Neduchal, M. Hlaváč, M. Hrúz

University of West Bohemia

Abstract: A modular algorithm for generating images of full-text documents is presented. These images can be used to train, test, and evaluate models for Optical Character Recognition (OCR).
Because the algorithm is modular, individual parts can be changed and tweaked to generate the desired images. A method for obtaining paper background images from already digitized documents is described. For this purpose, a novel approach based on a Variational AutoEncoder (VAE) was used to train a generative model. This model enables on-the-fly generation of background images similar to the training ones.
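A VAE-based generative model of the kind described above rests on the standard reparameterization trick and a KL term pulling the latent posterior toward a unit Gaussian. The following is a minimal NumPy sketch of just those two steps; the latent dimension and the toy statistics are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

# Toy latent statistics for one encoded background patch (illustrative only).
mu = np.zeros(8)
log_var = np.zeros(8)            # sigma = 1 for every latent dimension
z = reparameterize(mu, log_var, rng)
kl = kl_divergence(mu, log_var)  # 0 when q(z|x) already equals the prior
```

At generation time, sampling z directly from N(0, I) and passing it through the trained decoder yields new background images resembling the training set.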
The text-printing module uses large text corpora, a font, and suitable positional and brightness noise on individual characters to obtain believable, natural-looking aged documents.
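The positional and brightness character noise can be sketched as small Gaussian perturbations applied per glyph. The function below is an illustrative assumption (the name, sigma values, and clipping bounds are not taken from the paper):

```python
import numpy as np

def jitter_glyphs(x_positions, baseline_y, rng, pos_sigma=0.7, bright_sigma=0.05):
    """Perturb glyph positions and per-glyph ink brightness to mimic aged print.

    Returns (x, y, brightness) arrays, one entry per character.
    """
    n = len(x_positions)
    # Positional noise: each glyph drifts slightly off the ideal grid/baseline.
    x = np.asarray(x_positions, dtype=float) + rng.normal(0.0, pos_sigma, n)
    y = np.full(n, float(baseline_y)) + rng.normal(0.0, pos_sigma, n)
    # Brightness noise: ink intensity varies per glyph, clipped to a sane range.
    b = np.clip(1.0 + rng.normal(0.0, bright_sigma, n), 0.5, 1.0)
    return x, y, b

rng = np.random.default_rng(1)
x, y, b = jitter_glyphs([0.0, 12.0, 24.0], baseline_y=50.0, rng=rng)
```

Each glyph would then be rasterized at (x, y) with its brightness factor, producing the slightly irregular look of aged printed text.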
Several page layouts are supported. The system generates a detailed, structured annotation of each synthesized image. Tesseract OCR is used to compare real-world images with the generated ones. The recognition rates are very similar, which indicates that the synthetic images look realistic; moreover, the errors made by the OCR system are very similar in both cases. A fully convolutional encoder-decoder neural network architecture for semantic segmentation of individual characters was trained on the generated images. With this architecture, a recognition accuracy of 99.28% is reached on a test set of synthetic documents.
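At inference time, per-pixel semantic segmentation of characters reduces to an argmax over class logits, and a per-pixel accuracy is simply the fraction of matching pixels. A toy NumPy sketch (the map size and class count here are hypothetical, not the paper's):

```python
import numpy as np

def segment(logits):
    """Per-pixel class prediction from an (H, W, C) logit map."""
    return logits.argmax(axis=-1)

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == target).mean())

# Toy example: a 2x2 map with 3 classes (0 = background, 1 and 2 = characters).
logits = np.array([[[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]],
                   [[0.0, 0.0, 5.0], [5.0, 0.0, 0.0]]])
target = np.array([[0, 1], [2, 0]])

pred = segment(logits)
acc = pixel_accuracy(pred, target)  # → 1.0
```

The encoder-decoder network supplies the logit map; everything after it is this argmax-and-compare step.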

Keywords: generation of synthetic images, semantic text segmentation, variational autoencoder, VAE, optical character recognition, OCR, aged-looking text generation.

UDC: 004.9

Received: 24.09.2019

Language: English

DOI: 10.15622/sp.2019.18.6.1381-1406



© Steklov Math. Inst. of RAS, 2024