Abstracts

Benedek Nagy, Hussain Mohamed: Edge Detection on a Triangular Grid

Abstract: Edge detection is a vital step in image processing, pattern recognition and computer vision. While edge detection has been tackled on the traditional rectangular grid and on the hexagonal grid, there is a lack of studies on the triangular grid. In this paper, we fill this gap by adapting various classical edge detection techniques and algorithms to the triangular grid. We start with gradient-based techniques: filters on the triangular grid corresponding to the Prewitt and Sobel filters on the square grid. We also discuss second-derivative operators, such as the Laplacian filter, as well as low-pass filters, such as Gaussian smoothing, allowing us to derive the Canny and Laplacian of Gaussian filters. We adapt these techniques keeping in mind the properties, topology and neighborhood structure of the triangular grid, with various modifications to the filters for improved results. Some filters give more accurate edges, while others are more robust to noise, and some even provide information on the direction of the edges. According to our results, the Laplacian filter gives the best results on denoised images, while the independent approach for the Prewitt and Canny filters gives good results on noisy images on the triangular grid.
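For reference, the classical square-grid Prewitt gradient (the baseline that the paper adapts to the triangular grid) can be sketched as follows; this is a minimal NumPy illustration only, and the triangular-grid kernels themselves are defined in the paper and not reproduced here.

```python
import numpy as np

# Classical Prewitt kernels on the square grid (the baseline adapted in
# the paper; the triangular-grid kernels differ).
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T


def convolve2d(image, kernel):
    """Naive 'same'-size 2D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    flipped = kernel[::-1, ::-1]
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out


def prewitt_edges(image):
    """Gradient magnitude and edge direction from the two Prewitt responses."""
    gx = convolve2d(image, PREWITT_X)
    gy = convolve2d(image, PREWITT_Y)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)   # directional information of the edges
    return magnitude, direction
```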

Benedek Nagy, Tibor Lukic: Binary Tomography on the Cairo Pattern

Abstract: In this paper, we investigate a tomography reconstruction approach for binary images on the Cairo pattern. In discrete tomography the underlying grid plays an essential role, as the projection data and the structure of the grid together provide the information that can be used for reconstruction. The Cairo pattern is one of the best-known pentagonal tessellations of the plane, and it has a nice symmetric structure for orthogonal projections, giving the information in a somewhat different way than similar-direction projections on the traditional square grid. We present how a gradient-based reconstruction method can be used to reconstruct images based on pentagonal pixels of the Cairo pattern. A short experimental evaluation is also presented.

Ede Troll, Benedek Nagy: Characterizing Translations on Octagonal-Square Grids, including the Khalimsky Grid

Abstract: Translations are transformations that are expected to be bijective and, moreover, to preserve connectedness, shape, perimeter and area. In digital geometry, as we show here, this is not always the case; we may need special conditions on the translation vector to obtain translations with these desired properties. In this paper, we use the Khalimsky grid, i.e., the semi-regular grid induced by the Khalimsky topology. This grid, also known as the truncated quadrille or truncated square tiling, is built up of regular octagons and squares with equal side length. We also generalize to the case when the side lengths vary, but the neighborhood structure and, thus, the topology of the grid remain the same.

Zoltán Domokos: The Injectivity of Affine Mappings Restricted to Convex Sets

Abstract: We study the problem of deciding the injectivity of an affine mapping restricted to sets taken from a finite family of convex sets. We propose necessary and sufficient conditions for injectivity. As the presented method depends heavily on how the intersections of the sets under study can be detected, we propose a relaxation for sets that can be bounded by open d-balls of arbitrary dimensionality.

Peter Gritzmann: Diagrams, clustering, and coresets – and the representation of polycrystals 

Abstract: We report on recent results for anisotropic diagrams and their relation to constrained clustering, with a view towards representing and analyzing polycrystalline materials and their dynamics. In particular, weight-constrained anisotropic clustering allows us to compute diagram representations of polycrystals from data on the volume, center and moments of the grains, which are available through tomographic measurements. We also develop new coreset techniques, interesting in their own right, which are utilized to significantly accelerate the computations. This effect is demonstrated on 3D real-world data sets.
(The talk is based on recent joint work with A. Alpers, M. Fiedler, F. Klemm.) 

Gilles Bertrand: Morse Sequences on Stacks and Flooding Sequences

Abstract: This paper builds upon the framework of Morse sequences, a simple and effective approach to discrete Morse theory. A Morse sequence on a simplicial complex consists of a sequence of nested subcomplexes generated by expansions and fillings—two operations originally introduced by Whitehead. Expansions preserve homotopy, while fillings introduce critical simplexes that capture essential topological features.

We extend the notion of Morse sequences to stacks, which are monotonic functions defined on simplicial complexes, and define Morse sequences on stacks as those whose expansions preserve the homotopy of all sublevel sets. This extension leads to a generalization of the fundamental collapse theorem to weighted simplicial complexes. Within this framework, we focus on a refined class of sequences called flooding sequences, which exhibit an ordering behavior similar to that of classical watershed algorithms. Although not every Morse sequence on a stack is a flooding sequence, we show that the gradient vector field associated with any Morse sequence can be recovered through a flooding sequence.

Finally, we present algorithmic schemes for computing flooding sequences using cosimplicial complexes, and we give several properties of minimal and maximal sequences.

Noel Nagy, Kálmán Palágyi: Fully Parallel 3D Curve-Thinning on (18,12) Pictures of the FCC Grid

Abstract: Curve-thinning is an iterative object reduction technique to extract centerlines from binary/segmented objects. Parallel thinning algorithms are composed of parallel reductions, and the fully parallel approach uses the same parallel reduction in each thinning phase. This paper presents the first fully parallel 3D curve-thinning algorithm acting on the unconventional face-centered cubic (FCC) grid. Our algorithm combines a sufficient condition for topology-preserving parallel reductions acting on (18,12) pictures of the FCC grid with curve-endpoint preservation, hence its topological correctness is guaranteed for all possible objects. In order to reduce the number of unwanted side branches in the produced centerlines, a novel iteration-level endpoint re-checking process is proposed.

Gábor Németh: Similarity of 2D Thinning Algorithms and their k-Attempt Versions

Abstract: Thinning algorithms are composed of successive iterative object reduction steps, which are repeated until stability is reached. In each iteration step, a set of points in the outermost layer of the current object is investigated for possible deletion. In the concept of k-attempt thinning, any object point is investigated for deletion at most k times (k = 1, 2, . . . ). If a point “survives” the deletion attempts k times, then it is considered an element of the produced centerline and is not investigated anymore. This concept yields a faster thinning approach; however, the resulting “skeleton” may not be the same as the one we would get without any limitation on the number of deletion attempts. In this work we compare a set of existing thinning algorithms without any restriction on deletion attempts and their k-attempt versions, in which a point may be considered for deletion at most k times. Our aim with this is twofold. On the one hand, we would like to investigate what the drawback of the k-attempt version of a thinning algorithm is in terms of similarity to the original one. On the other hand, if the k-attempt version of a thinning algorithm produces the same result as the original one on a set of objects, then it is worth proving the k-attempt property in further works.
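A minimal sketch of the k-attempt control strategy described above (not any specific published algorithm): the deletion rule deletable(points, p) is a hypothetical placeholder for the rule of a concrete thinning algorithm, and the simple sequential loop ignores the iteration-level structure of the compared algorithms.

```python
def k_attempt_thinning(points, deletable, k):
    """points: set of object points; deletable(points, p) -> bool is the
    deletion rule of some concrete thinning algorithm (placeholder here)."""
    attempts = {p: 0 for p in points}    # failed deletion attempts per point
    frozen = set()                       # points accepted as skeletal
    changed = True
    while changed:
        changed = False
        candidates = [p for p in points if p not in frozen and attempts[p] < k]
        for p in candidates:
            if deletable(points, p):
                points.discard(p)        # the point is deleted
                changed = True
            else:
                attempts[p] += 1
                if attempts[p] == k:     # survived k attempts: keep forever
                    frozen.add(p)
    return points
```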

Péter Kardos: An Order-Independent 2D Parallel 4-Subiteration Thinning Algorithm

Abstract: Thinning is a commonly used technique based on iterative object reduction to obtain skeletal shape features. For parallel subiteration-based thinning algorithms, an iteration step is composed of several subiterations, each corresponding to a specific deletion direction. The results produced by such algorithms usually depend on the order of these directions. This work presents an order-independent 4-subiteration thinning algorithm, which generates the same centerlines for an arbitrary order of the directions and preserves the topology of 2D objects.

Dacian Goina: Fast Image Resizing Through Bijective Transformations: Applications in Classification Tasks

Abstract: Although Convolutional Neural Networks have provided effective solutions for computer vision tasks across various domains in recent years, the high training times of the models remain a crucial concern. A straightforward way to decrease the computational costs is to use smaller images during training. However, existing image resizing approaches are either computationally intensive or lossy. This paper introduces the usage of the Cantor pairing function for image resizing through feature reduction. The pairing function is a fast bijective transformation that maps a pair of two integers into a single value. Combining two pixels into one leads to a decrease in the image size, while the bijective property ensures a lossless transformation. The experiments conducted on the MNIST datasets demonstrate that the proposed approach reduces resizing time by a factor of at least six, decreases training times significantly, and achieves performance similar to baseline methods.
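The Cantor pairing function itself is standard; a minimal sketch of the pairing and its inverse is given below (how the paired values are arranged into the resized image is described in the paper).

```python
import math

def cantor_pair(a, b):
    """Cantor pairing: maps a pair of non-negative integers (e.g. two
    pixel values) bijectively to a single non-negative integer."""
    return (a + b) * (a + b + 1) // 2 + b

def cantor_unpair(z):
    """Inverse of the Cantor pairing function (lossless recovery)."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    b = z - t
    a = w - b
    return a, b

# Combining two pixels into one value halves the number of features;
# bijectivity means both original pixels can be recovered exactly.
assert cantor_unpair(cantor_pair(17, 200)) == (17, 200)
```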

Judit Szűcs, Tibor Gaál, Krisztián Németh, Tibor Guzsvinecz: Deep Learning-Based Domain-Specific Object Detection in a 3D Virtual Tour

Abstract: This paper presents a domain-specific object detection pipeline developed for a virtualized campus environment, leveraging the YOLOv8 deep learning architecture. The model was fine-tuned to recognize four object classes (chairs, benches, cars, and bicycles) frequently appearing in the digital twin of the University Centre. We demonstrate that by combining targeted data collection, detailed annotation, and iterative training on a GPU-accelerated platform, a highly accurate detector can be developed using a moderate dataset. Through extensive evaluation on image sequences derived from a realistic digital twin, the proposed method is shown to outperform a general-purpose pretrained baseline for the intended object classes. This work highlights the effectiveness of specialization in object detection tasks and underscores the practical advantages of integrating computer vision with spatial digital twin frameworks.
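For illustration, fine-tuning a pretrained YOLOv8 model with the Ultralytics Python API typically looks like the sketch below; the dataset configuration file and image name here are hypothetical placeholders, and the authors' exact training setup is the one described in the paper.

```python
from ultralytics import YOLO

# Hypothetical fine-tuning setup: 'campus.yaml' is an assumed dataset
# configuration listing the four classes (chair, bench, car, bicycle).
model = YOLO("yolov8n.pt")                       # pretrained YOLOv8 checkpoint
model.train(data="campus.yaml", epochs=100, imgsz=640, device=0)
metrics = model.val()                            # evaluate on the validation split
results = model.predict("tour_frame_0001.jpg", conf=0.25)  # detect in a tour frame
```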

Florian Delconte, Jui-Ting Lu, Isabelle Debled-Rennesson, Hoai-Diem-Phuc Ngo, Bertrand Kerautret: A 3D Compact Shape Descriptor based on LIP Signature

Abstract: Motivated by the performance of the shape descriptor based on the Largest Intersection and Projection (LIP) signature, we propose a three-dimensional extension linked to a concrete application. By projecting 3D shapes onto their principal planes and analyzing the resulting profiles, we extract compact and interpretable geometric features. Descriptors obtained from these features are then used for 3D object classification. The proposed method is evaluated on both the ModelNet benchmark and a real-world dataset of tree bark defects. Results show that the descriptor effectively captures shape variations while enabling a significant reduction in feature dimensionality.
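The projection step onto the principal planes can be sketched with a generic PCA-style construction, as below; this is a minimal NumPy illustration under the assumption that the principal planes are spanned by pairs of principal axes, while the LIP-based profile analysis itself is described in the paper.

```python
import numpy as np

def principal_plane_projections(points):
    """Project a 3D point cloud onto its three principal planes.

    points: (N, 3) array. Returns three (N, 2) arrays, one per plane
    spanned by a pair of principal axes.
    """
    centered = points - points.mean(axis=0)
    # Principal axes via SVD of the centered point cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt.T              # coordinates in the principal basis
    return coords[:, [0, 1]], coords[:, [0, 2]], coords[:, [1, 2]]
```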

Péter Balázs, Péter Bodnár, Sara Brunetti: Rotational and Rotation-Free Multidirectional Q-Concavity Shape Descriptors

Abstract: In this paper we extend a scalar Q-convexity shape descriptor in two ways, towards rotational and rotation-free multidirectional descriptors. Since the original descriptor is defined with respect to the horizontal and vertical directions, in the first extension we increase its ability to represent a shape by repeatedly rotating the shape and computing the descriptor. However, since rotation is generally not a safe transformation in the 2D digital space, as a second approach we develop an alternative that works directly with an arbitrary pair of orthogonal lattice directions: the shape remains fixed and a certain number of pairs of directions are employed. We provide the algorithmic details of this approach and its computational complexity. For both approaches, the Q-concavity values measured along different directions can be gathered into a multidirectional vector descriptor (signature). We conducted experiments on the MPEG-7 dataset with different sets of lattice directions. We conclude that even though the rotational signature is faster to compute, the rotation-free signature can ensure better classification accuracy. Depending on whether computational time or accuracy is more important in the given application, one of the two methods can be preferred to the other, and hence both of them seem to provide a viable option.

Lajos Hajdu: Applications of algebra and number theory in digital image processing

Abstract: In the talk we present applications of algebra and number theory to two problems in digital image processing.

The first application is related to discrete tomography, where the basic problem is to recover a function defined on a finite subset A of \mathbb Z^2 from its line sums measured along a finite set of directions. We sketch an algebraic framework for the problem, which enables us to describe the structure of the ghosts, i.e. functions defined on A having zero line sums in the given directions. We show how this knowledge makes it possible to describe the solutions with integer values, and how it contributes to the description of binary solutions (i.e. of functions having only values 0 and 1).
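For concreteness, the line sums in question can be written in the standard form below (our paraphrase of the usual setting, with the direction given by a primitive integer vector (p,q)).

```latex
% Line sum of f : A -> \mathbb{R} along the lattice line through (x_0, y_0)
% with primitive direction (p, q):
\[
  L_{(p,q)}(x_0, y_0) \;=\;
  \sum_{\substack{t \in \mathbb{Z} \\ (x_0 + tp,\; y_0 + tq) \in A}}
  f(x_0 + tp,\, y_0 + tq).
\]
% A ghost for the given set of directions is a function g on A all of whose
% line sums, in every prescribed direction, are zero.
```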

The second application is motivated by a problem from medical image processing, and is related to detecting periodicity in images. We model this problem in the following way: find the grid which fits “best” a given finite set of points in \mathbb R^n. We show how lattice theory (geometry of numbers) comes into play, and use tools from Diophantine approximation to handle the problem. We also show how to use the LLL algorithm to construct “well” approximating grids efficiently in concrete cases.

Henning Fernau, Jennifer Rose, Thamburaj Robinson, Gnanaraj Thomas: Boustrophedon Pushdown Automata for Two-dimensional Picture Languages

Abstract: In the area of picture languages, many mechanisms have been studied to describe language classes, often inspired by what has been done before for string languages. Following this tradition, we are now introducing a pushdown automaton extension of two variants of finite automata working on arrays that have been defined before, namely boustrophedon finite automata (BFA) and returning finite automata (RFA), leading to BPDA and RPDA. While BFA and RFA characterize the same class of array languages, BPDA and RPDA turn out to describe incomparable classes of array languages. We also prove that BPDA and RPDA can be viewed as a proper generalization of context-free matrix languages, which is interesting, as it is known that (apart from a transposition) BFA and RFA characterize the regular matrix languages.

Gaëlle Largeteau-Skapin, Eric Andres, Lidija Comic, Rita Zrour: A Discrete Bijective Reflection with Near-Equivalence to Shear Rotation based Reflection

Abstract: We introduce an improved version of the discrete bijective reflection proposed in \cite{Andres19} and we show that this new reflection is almost always identical to the reflection based on the quasi shear rotation for integer centers \cite{andres1996quasi}. The difference between the two reflections is restricted to a discrete set of angles of the form \arctan(k+\frac{1}{2}), where k\in\mathbb{N}. Our \emph{improved pivot reflection} (IPR) is easier to compute than the quasi shear rotation based reflection and therefore well suited for the computation of nD rotations.

Hasmik Sahakyan, Levon Aslanyan: Partial Order and Chain Decomposition of the Set of k-Subsets

Abstract: Let P(n,k) denote the set of all k-tuples with strictly increasing elements from the set [n]={1,2,⋯,n}, 1≤k≤n. To each (a_1,⋯,a_k)∈P(n,k), we assign a binary sequence (β_1,⋯,β_n), where β_i=1 if and only if i∈{a_1,⋯,a_k}. This mapping yields the set of vertices in the k-th layer of the binary cube B^n. On the other hand, an arbitrary subset M of the k-th layer of B^n, with |M|=r, r≤\binom{n}{k}, can be viewed as the incidence matrix of a simple k-uniform hypergraph with n vertices and r hyperedges. The hypergraph degree sequence problem is known to be NP-complete for simple k-uniform hypergraphs, starting from k=3. This motivates our study of P(n,3). We investigate properties of the Hasse diagram of the partially ordered set (P(n,k),≼), where ≼ is a component-wise partial order on P(n,k). An algorithm is presented that constructs a set of non-intersecting chains covering all elements of P(n,3). The number of these chains equals the width of (P(n,3),≼), and they can be used for the identification of monotone functions defined on P(n,3), analogous to Hansel’s chains for the identification of monotone Boolean functions defined on B^n. One motivation for studying these functions is that the degree sequences of the corresponding hypergraphs can be used to generate all degree sequences of uniform hypergraphs. Moreover, monotone Boolean functions defined on B^n play a special role in generating the degree sequences of simple hypergraphs. We consider the case of 3-uniform hypergraphs, the corresponding monotone Boolean functions, and provide complexity estimates for their identification.
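The encoding of k-tuples as binary sequences used above is the standard characteristic-vector mapping; a minimal sketch is given below (the chain-construction algorithm itself is the one presented in the paper).

```python
from itertools import combinations

def to_binary_sequence(a, n):
    """Map a strictly increasing k-tuple a = (a_1, ..., a_k) over [n]
    to its characteristic 0/1 sequence (beta_1, ..., beta_n)."""
    members = set(a)
    return tuple(1 if i in members else 0 for i in range(1, n + 1))

# All elements of P(5, 3) and their images in the 3rd layer of B^5.
n, k = 5, 3
for a in combinations(range(1, n + 1), k):
    print(a, to_binary_sequence(a, n))
```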