M. Bekos, B. Niedermann, and M. Nöllenburg.

External labeling is frequently used for annotating features in graphical displays and visualizations, such as technical illustrations, anatomical drawings, or maps, with textual information. Such a labeling connects features within an illustration by thin leader lines with their labels, which are placed in the empty space surrounding the image. Over the last twenty years, a large body of literature in diverse areas of computer science has been published that investigates many different aspects, models, and algorithms for automatically placing external labels for a given set of features. This state-of-the-art report introduces a first unified taxonomy for categorizing the different results in the literature and then presents a comprehensive survey of the state of the art, a sketch of the most relevant algorithmic techniques for external labeling algorithms, as well as a list of open research challenges in this multidisciplinary research field.

B. Niedermann, and J.-H. Haunert.

Zooming is a basic operation that many digital maps support for exploring them interactively. Especially for maps on small-screen devices this is a helpful operation to uncover the user’s region of interest possibly hidden by labels, e.g., points of interest represented by icons. However, by scaling the map larger the user loses the context. As a consequence, the user might need to repeatedly zoom in and out to explore the map step by step. We present an approach that reduces the necessity of zooming by providing the user with the possibility of displacing the labels of a circular focus region. To that end, we utilize techniques from focus+context maps, implementing the displacement of the labels by fish-eye projections. The visual association between labels and their point features is established by connecting lines aggregated to bundles. Our approach particularly guarantees that labels move smoothly when the user continuously shifts the focus region, which reduces distracting flickering effects while exploring the map by panning the map view. Further, when the user stops moving the focus region, mathematical programming is applied to optimize positions of the displaced labels. In an evaluation on real-world data and synthetically generated data we show that our approach substantially increases the legibility of both the focus region and the displaced labels.
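The radial displacement described above can be illustrated with a classic graphical fisheye distortion (Sarkar–Brown style). This is a minimal sketch under assumptions of my own, not the paper's projection: the function, its parameters, and the magnification formula are illustrative.

```python
import math

def fisheye(px, py, cx, cy, radius, d=3.0):
    """Displace point (px, py) radially away from focus (cx, cy).

    Points inside the focus circle are pushed outward by a
    Sarkar-Brown-style distortion with magnification factor d;
    points outside the circle are left unchanged.  (Hypothetical
    sketch; the paper's actual projection may differ.)
    """
    dx, dy = px - cx, py - cy
    r = math.hypot(dx, dy)
    if r >= radius or r == 0.0:
        return px, py
    # normalized distance in [0, 1) and its magnified counterpart
    t = r / radius
    t_new = (d + 1) * t / (d * t + 1)
    scale = t_new / t
    return cx + dx * scale, cy + dy * scale
```

Because the distortion is continuous in the focus center, labels displaced this way move smoothly as the focus region is shifted, which matches the flicker-free behavior the abstract emphasizes.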

B. Niedermann, I. Rutter, and M. Wolf.

Orthogonal drawings, i.e., embeddings of graphs into grids, are a classic topic in Graph Drawing. Often the goal is to find a drawing that minimizes the number of bends on the edges. A key ingredient for bend minimization algorithms is the existence of an orthogonal representation, which makes it possible to describe such drawings purely combinatorially by only listing the angles between the edges around each vertex and the directions of bends on the edges, but neglecting any kind of geometric information such as vertex coordinates or edge lengths. Barth et al. have established the existence of an analogous ortho-radial representation for ortho-radial drawings, which are embeddings into an ortho-radial grid, whose gridlines are concentric circles around the origin and straight-line spokes emanating from the origin but excluding the origin itself. While any orthogonal representation admits an orthogonal drawing, it is the circularity of the ortho-radial grid that makes the problem of characterizing valid ortho-radial representations all the more complex and interesting. Barth et al. prove such a characterization. However, the proof is existential and does not provide an efficient algorithm for testing whether a given ortho-radial representation is valid, let alone actually obtaining a drawing from an ortho-radial representation. In this paper we give quadratic-time algorithms for both of these tasks. They are based on a suitably constrained left-first DFS in planar graphs and several new insights on ortho-radial representations. Our validity check requires quadratic time, and a naive application of it would yield a quartic algorithm for constructing a drawing from a valid ortho-radial representation.

L. Barth, A. Gemsa, B. Niedermann, and M. Nöllenburg.

External labeling deals with annotating features in images with labels that are placed outside of the image and are connected by curves (so-called leaders) to the corresponding features. While external labeling has been extensively investigated from the perspective of automation, research on its readability has been neglected. In this article, we present the first formal user study on the readability of leader types in boundary labeling, a special variant of external labeling that considers rectangular image contours. We consider the four most studied leader types (straight, L-shaped, diagonal, and S-shaped) with respect to their performance, that is, whether and how fast a viewer can assign a feature to its label and vice versa. We give a detailed analysis of the results regarding the readability of the four models and discuss their aesthetic qualities based on the users’ preference judgments and interviews. As a consequence of our experiment, we can generally recommend L-shaped leaders as the best compromise between measured task performance and subjective preference ratings, while straight and diagonal leaders received mixed ratings in the two measures. S-shaped leaders are generally not recommended from a practical point of view.

A. Bonerath, J.-H. Haunert, and B. Niedermann.

The interactive exploration of data requires data structures that can be repeatedly queried to obtain simple visualizations of parts of the data. In this paper we consider the scenario that the data is a set of points each associated with a time stamp and that the result of each query is visualized by an α-shape, which generalizes the concept of convex hulls. Instead of computing each shape independently, we suggest and analyze a simple data structure that aggregates the α-shapes of all possible queries. Once the data structure is built, it particularly allows us to query single α-shapes without retrieving the actual (possibly large) point set and thus to rapidly produce small previews of the queried data.

B. Niedermann, and J.-H. Haunert.

For conducting navigation tasks in public transportation systems, schematic maps (e.g., metro maps) are the first choice, as they reduce the amount of information to a minimum. On the other hand, for navigation tasks in street networks, classical geographic maps are preferable, as they depict the area as accurately as possible. In this work, we create synergies between both types of maps by laying the metro map over the street map. We call those maps anchored metro maps, as we visually attach the metro stations of the metro map to their counterparts on the street map. Currently, we are developing algorithms optimizing the matching between both maps. In future research we plan to use this approach to show that the combination of schematic and geographical maps leads to an improvement for certain navigation tasks.

H.-Y. Wu, B. Niedermann, S. Takahashi, and M. Nöllenburg.

Schematic maps are in daily use to show the connectivity of subway systems and to help travellers plan their journeys effectively. This study surveys up-to-date algorithmic approaches in order to give an overview of the state of the art in schematic network mapping. The study investigates the hypothesis that the choice of algorithmic approach is often guided by the requirements of the mapping application. For example, an algorithm that computes globally optimal solutions for schematic maps is capable of producing results for printing, while it is not suitable for computing instant layouts due to its long running time. Our analysis and discussion, therefore, focus on the computational complexity of the problem formulation and the running times of the schematic map algorithms, including algorithmic network layout techniques and station labeling techniques. The correlation between problem complexity and running time is then visually depicted using scatter plot diagrams. Moreover, since metro maps are common metaphors for data visualization, we also investigate online tools and application domains using metro map representations for analytics purposes, and finally summarize the potential future opportunities for schematic maps.

Y. Dehbi, A. Henn, G. Gröger, V. Stroh, and L. Plümer.

3D city models in Level-of-Detail 2 (LoD2) are nowadays inevitable for many applications such as solar radiation calculation and energy demand estimation. City-wide models are required which can solely be acquired by fully automatic approaches. In this paper we propose a novel method for the 3D-reconstruction of LoD2 buildings with structured roofs and dormers from LIDAR data. We apply a hybrid strategy which combines the strengths of top-down and bottom-up methods. The main contribution is the introduction of an active sampling strategy which applies a cascade of filters focusing on promising samples in an early stage and avoiding the pitfalls of RANSAC based approaches. Such filters are based on prior knowledge represented by (non-parametric) density distributions. Samples are pairs of surflets, i.e. 3D points together with normal vectors derived from a plane approximation of their neighborhood. Surflet pairs imply immediately important roof parameters such as azimuth, inclination and ridge height, as well as parameters for internal precision and consistency, giving a good base for assessment and ranking. Ranking of samples leads to a small number of promising hypotheses. Model selection is based on predictions for example of ridge positions which can easily be falsified based on the given observations. Our approach does not require building footprints as prerequisite. They are derived in a preprocessing step using machine learning methods, in particular Support Vector Machines (SVM).

F. Biljecki, and Y. Dehbi.

LoD2 models include roof shapes and thus provide added value over their LoD1 counterparts for some applications such as estimating the solar potential of rooftops. However, because of laborious acquisition workflows they are more difficult to obtain than LoD1 models and are thus less prevalent in practice. This paper explores whether the type of the roof of a building can be inferred from semantic LoD1 data, potentially leading to their free upgrade to LoD2, in a broader context of a workflow for their generation without aerial campaigns. Inferring rooftop information has also other uses: data quality and verification of existing data, supporting roof reconstruction, and enriching LoD0/LoD1 data with the attribute of the roof type. We tested a RandomForest classifier that analyses attributes of buildings predicting the type of the roof. Experiments carried out on the 3D city model of Hamburg using 12 attributes achieve an accuracy of 85% in identifying the roof type from sparse data using a multiclass classification. The performance of binary classification hits the roof: 92% accuracy in predicting whether a roof is flat or not. It turns out that the two most useful variables are footprint area and building height (i.e. LoD1 models without any semantics, or LoD0 with such information), and using only them also yields relatively accurate results.

Y. Dehbi, S. Koppers, and L. Plümer.

3D building models including roofs are a key prerequisite in many fields of applications such as the estimation of solar suitability of rooftops. The accurate reconstruction of roofs with dormers is sometimes challenging. Without careful separation of the dormer points from the points on the roof surface, the estimation of the roof areas is distorted in a most characteristic way, which then lets the dormer points appear as white noise. The characteristic distortion of the density distribution of the defects by dormers in comparison to the expected normal distribution is the starting point of our method. We propose a hierarchical method which improves roof reconstruction from LiDAR point clouds in a model-based manner, separating dormer points from roof points using classification methods. The key idea is to exploit probability density functions (PDFs) to reveal roof properties and design skilful features for a supervised learning method using support vector machines (SVMs). Properties of the PDFs of measures such as residuals of model-based estimated roof models are used among others. A clustering step leads to a semantic segmentation of the point cloud enabling subsequent reconstruction. The approach is tested based on real data as well as simulated point clouds. The latter allow for experiments for various roof and dormer types with different parameters using an implemented simulation toolbox which generates virtual buildings and synthetic point clouds.

Y. Dehbi, L. Lucks, J. Behmann, L. Klingbeil, and L. Plümer.

Accurate and robust positioning of vehicles in urban environments is of high importance for many applications (e.g. autonomous driving or mobile mapping). In the case of mobile mapping systems, a simultaneous mapping of the environment using laser scanning and an accurate positioning using GNSS is targeted. This requirement is often not guaranteed in shadowed cities where GNSS signals are usually disturbed, weak or even unavailable. Both the generated point clouds and the derived trajectory are consequently imprecise. We propose a novel approach which incorporates prior knowledge, i.e. a 3D building model of the environment, and improves the point cloud and the trajectory. The key idea is to benefit from the complementarity of both GNSS and 3D building models. The point cloud is matched to the city model using a point-to-plane ICP. An informed sampling of appropriate matching points is enabled by a pre-classification step. Support vector machines (SVMs) are used to discriminate between facade and remaining points. Local inconsistencies are tackled by a segment-wise partitioning of the point cloud where an interpolation guarantees a seamless transition between the segments. The full processing chain is implemented from the detection of facades in the point clouds, the matching between them and the building models and the update of the trajectory estimate. The general applicability of the implemented method is demonstrated on an inner city data set recorded with a mobile mapping system.

A. Förster, J. Behley, J. Behmann, and R. Roscher.

With a limited amount of arable land, increasing demand for food induced by growth in population can only be met with more effective crop production and more resistant plants. Since crop plants are exposed to many different stress factors, it is relevant to investigate those factors as well as their behavior and reactions. One of the most severe stress factors are diseases, resulting in a high loss of cultivated plants. Our main objective is the forecasting of the spread of disease symptoms on barley plants using a Cycle-Consistent Generative Adversarial Network. Our contributions are: (1) we provide a daily forecast for one week to advance research for better planning of plant protection measures, and (2) in contrast to most approaches which use only RGB images, we learn a model with hyperspectral images, providing an information-rich result. In our experiments, we analyze healthy barley leaves and leaves which were inoculated by powdery mildew. Images of the leaves were acquired daily with a hyperspectral microscope, from day 3 to day 14 after inoculation. We provide two methods for evaluating the predicted time series with respect to the reference time series.

A. Bonerath, B. Niedermann, and J.-H. Haunert.

The interactive exploration of data requires data structures that can be repeatedly queried to obtain simple visualizations of parts of the data. We consider the scenario that the data is a set of points each associated with a time stamp and that the result of each query is visualized by an α-shape, which generalizes the concept of convex hulls. Instead of computing each shape independently, we suggest and analyze a simple data structure that aggregates the α-shapes of all possible queries. Once the data structure is built, it particularly allows us to query single α-shapes without retrieving the actual (possibly large) point set and thus to rapidly produce small previews of the queried data. We discuss the data structure for the original α-shapes as well as for a schematized version of α-shapes, which further simplifies the visualization. We evaluate the data structure on real-world data. The experiments indicate linear memory consumption with respect to the number of points, which makes the data structure applicable in practice, although the size is quadratic for a theoretic worst case example.
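The query interface can be illustrated with a toy sketch. Note the simplifying assumptions: the convex hull (which the α-shape generalizes) stands in for the α-shape, and each query is answered naively from the timestamp-sorted points instead of from the paper's aggregated data structure; all names are illustrative.

```python
from bisect import bisect_left, bisect_right

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order.
    Stands in here for the alpha-shape, which generalizes it."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the last two hull points and p make a clockwise turn
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

class TimeWindowShapes:
    """Answers time-window queries with a shape of the matching points.
    (The paper aggregates the shapes of all possible windows; this
    naive version recomputes per query and shows only the interface.)"""
    def __init__(self, timed_points):          # [(t, (x, y)), ...]
        self.data = sorted(timed_points)
        self.times = [t for t, _ in self.data]
    def query(self, t0, t1):
        lo = bisect_left(self.times, t0)
        hi = bisect_right(self.times, t1)
        return convex_hull([p for _, p in self.data[lo:hi]])
```

A query such as `TimeWindowShapes(pts).query(2010, 2015)` then returns only the boundary vertices, a small preview that avoids shipping the full point set to the client.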

S. Gedicke, B. Niedermann, and J.-H. Haunert.

Annotating small-screen maps with additional content such as labels for points of interest is a highly challenging problem that requires new algorithmic solutions. A common labeling approach is to select a maximum-size subset of all labels such that no two labels constitute a graphical conflict and to display only the selected labels in the map. A disadvantage of this approach is that a user often has to zoom in and out repeatedly to access all points of interest in a certain region. Since this can be very cumbersome, we suggest an alternative approach that allows the scale of the map to be kept fixed. Our approach is to distribute all labels on multiple pages through which the user can navigate, for example, by swiping the pages from right to left. We in particular optimize the assignment of the labels to pages such that no page contains two conflicting labels, more important labels appear on the first pages, and sparsely labeled pages are avoided. Algorithmically, we reduce this problem to a weighted and constrained graph coloring problem based on a graph representing conflicts between labels such that an optimal coloring of the graph corresponds to a multi-page labeling. We propose a simple greedy heuristic that is fast enough to be deployed in web-applications. We evaluate the quality of the obtained labelings by comparing them with optimal solutions, which we obtain by means of integer linear programming formulations. In our evaluation on real-world data we particularly show that the proposed heuristic achieves near-optimal solutions with respect to the chosen objective function and that it substantially improves the legibility of the labels in comparison to the simple strategy of assigning the labels to pages solely based on the labels' weights.
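The reduction to conflict-graph coloring can be sketched with a simple greedy heuristic in the spirit described above. This is a toy illustration, not the paper's algorithm: it only keeps conflicting labels apart and favors early pages for heavy labels, ignoring the sparse-page penalty of the full objective.

```python
def multipage_labeling(labels, conflicts):
    """Greedily assign labels to pages (colors) so that no page holds
    two conflicting labels and heavier labels land on early pages.

    labels:    {name: weight}, higher weight = more important
    conflicts: set of frozensets {a, b} of graphically conflicting labels
    Returns a list of pages, each a set of label names.
    """
    pages = []
    # place labels in order of decreasing importance
    for name in sorted(labels, key=labels.get, reverse=True):
        for page in pages:
            if all(frozenset((name, other)) not in conflicts
                   for other in page):
                page.add(name)
                break
        else:
            pages.append({name})   # open a new page (new color)
    return pages
```

For example, with conflicts A–B and C–D and weights A > B > C > D, the heuristic fills page 1 with {A, C} and page 2 with {B, D}, so every point of interest is reachable by swiping without changing the map scale.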

J. Oehrlein, B. Niedermann, and J.-H. Haunert.

Green spaces in urban areas offer great possibilities of recreation, provided that they are easily accessible. Therefore, an ideal city should offer large green spaces close to where its residents live. Although there are several measures for the assessment of urban green spaces, the existing measures usually focus either on the total size of all green spaces or on their accessibility. Hence, in this paper, we present a new methodology for assessing green-space provision and accessibility in an integrated way. The core of our methodology is an algorithm based on linear programming that computes an optimal assignment between residential areas and green spaces. In a basic setting, it assigns green spaces of a prescribed size exclusively to each resident, such that an objective function that, in particular, considers the average distance between residents and assigned green spaces is optimized. We contribute a detailed presentation on how to engineer an assignment-based method, such that it yields plausible results (e.g., by considering distances in the road network) and becomes efficient enough for the analysis of large metropolitan areas (e.g., we were able to process an instance of Berlin with about 130,000 polygons representing green spaces, 18,000 polygons representing residential areas, and 6 million road segments). Furthermore, we show that the optimal assignments resulting from our method enable a subsequent analysis that reveals both interesting global properties of a city as well as spatial patterns. For example, our method allows us to identify neighborhoods with a shortage of green spaces, which will help spatial planners in their decision-making.
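The core optimization is an assignment problem. As a minimal sketch of the idea (my own toy formulation, not the paper's engineered LP, which additionally handles capacities, prescribed green-space sizes, road-network distances and city-scale instances), one can solve a tiny exclusive-assignment instance by exhaustive search:

```python
from itertools import permutations

def best_assignment(dist):
    """Exhaustively find the assignment of residents (rows) to green
    spaces (columns) that minimizes total distance, with each green
    space assigned to at most one resident.  Brute force is only
    viable for tiny instances; it illustrates the objective the
    linear program optimizes at scale.
    """
    n = len(dist)
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(dist[0])), n):
        cost = sum(dist[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best), best_cost
```

Replacing the straight-line distances in `dist` with shortest-path distances in the road network, as the paper advocates, changes only the input matrix, not the optimization.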

E. Lehndorff, A. Rodionov, C. Stremtan, W. Brand, L. Plümer, P. Rottmann, B. Spiering, and W. Amelung.

The mean residence time of organic matter in soil ranges from days to millennia. Although chemical and physical processes are known to control the storage of organic matter in soil, the role of spatial arrangements in soil remains largely unknown due to the lack of in-situ techniques operating on the micro-scale. We asked whether deterministic patterns in element arrangement occur in large and small soil microaggregates (250-53 and 53-20 µm), and how these contribute to the storage of organic matter. To do so, we studied surface soils with increasing clay contents (sandy to loamy Luvisols, Germany) and subjected 60 individual aggregates to elemental mapping by electron probe micro analysis (EPMA), which recorded C, N, P, Al, Fe, Ca, K, Cl, and Si contents at micrometer scale resolution. We further developed the first laser-ablation isotope ratio mass spectrometry technique in soil science detecting δ13C composition on the micro-scale (LA-IRMS) and employed this to trace micro-gradients in undisturbed soil and soil microaggregate samples. We found a pronounced heterogeneity in aggregate structure and composition, which was not reproducible for different microaggregates from the same soil fraction, and which was largely independent of clay content in soil. However, neighborhood analyses revealed close spatial correlations between organic matter debris (C:N approx. 100:10) and microbial organic matter (C:N approx. 10:1), indicating a spatial relationship between source and consumer. There was no systematic relationship between soil minerals and organic matter, suggesting that well-established macroscale correlations between pedogenic oxides and clay with soil organic matter storage do not apply to soil microaggregates.
From first applications of LA-IRMS to soil we also found a high δ13C variability of up to 9 per mill along spatial gradients of less than 300 µm, suggesting the appearance of very small hotspots of isotope enrichment and organic matter turnover by metabolic processes.