Anna Arutyunova, Anne Driemel, Jan-Henrik Haunert, Herman J. Haverkort, Jürgen Kusche, Elmar Langetepe, Philip Mayer, Petra Mutzel, and Heiko Röglin. Minimum-error triangulations for sea surface reconstruction. In Proc. 38th International Symposium on Computational Geometry (SoCG 2022), volume 224 of LIPIcs, pages 7:1-7:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
Anna Brauer, Ville Mäkinen, Axel Forsch, Juha Oksanen, and Jan-Henrik Haunert. My home is my secret: concealing sensitive locations by context-aware trajectory truncation. International Journal of Geographical Information Science, 36(12):2496-2524, 2022.
Ever since location-based services and mobile applications collecting data gathered through Global Navigation Satellite System (GNSS) positioning have become popular, concerns about location privacy have been expressed. Research has shown that human trajectory repositories containing sequences of observed locations ordered in time constitute a rich source for analyzing movement patterns, but they can also reveal sensitive personal information, such as a person’s home address. In this paper, we present a mechanism that protects visits to sensitive locations by suppressing revealing parts of trajectories. Our attack model acknowledges that the course of a trajectory, combined with spatial context information, can facilitate privacy breaches even if sensitive locations have been concealed. Thus, we introduce the concept of k-site-unidentifiability, a specialization of k-anonymity, under which a sensitive location cannot be singled out from a group of at least k sites that the trajectory could have visited. In an experimental study, we show that our method is utility-preserving and protects sensitive locations reliably even in sparsely built environments. As it can process each trajectory independently, individuals may also use our mechanism to enhance their privacy before publishing their trajectories.
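The core idea of k-site-unidentifiability can be illustrated with a minimal sketch (the function name and the fixed-radius neighbourhood test are hypothetical simplifications; the published mechanism is context-aware and considerably more refined): trailing points of a trajectory are suppressed until the neighbourhood of the remaining endpoint contains at least k candidate sites.

```python
import math

def truncate_for_k_sites(trajectory, sites, k, radius):
    """Suppress trailing points of a trajectory until the neighbourhood of
    its new endpoint contains at least k candidate sites, so the truly
    visited site cannot be singled out.  Illustrative sketch only."""
    def sites_near(p):
        return sum(1 for s in sites if math.dist(p, s) <= radius)

    pts = list(trajectory)
    while pts and sites_near(pts[-1]) < k:
        pts.pop()  # drop the revealing endpoint
    return pts
```

A trajectory ending in a dense area needs no truncation, while one ending near an isolated site is cut back until enough candidate sites remain plausible.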
Sven Gedicke, Annika Bonerath, Benjamin Niedermann, and Jan-Henrik Haunert. Zoomless Maps: External Labeling Methods for the Interactive Exploration of Dense Point Sets at a Fixed Map Scale. IEEE Transactions on Visualization and Computer Graphics, 27(2):1247-1256, 2021. https://youtu.be/IuMhk8jp54c
Visualizing spatial data on small-screen devices such as smartphones and smartwatches poses new challenges in computational cartography. The current interfaces for map exploration require their users to zoom in and out frequently. While zooming and panning are suitable tools for choosing the map extent corresponding to an area of interest, they are not as suitable for resolving the graphical clutter caused by a high feature density, since zooming in to a large map scale leads to a loss of context. Therefore, in this paper, we present new external labeling methods that allow a user to navigate through dense sets of points of interest while keeping the current map extent fixed. We provide a unified model, in which labels are placed at the boundary of the map and visually associated with the corresponding features via connecting lines, which are called leaders. Since the screen space is limited, labeling all features at the same time is impractical. Therefore, at any time, we label a subset of the features. We offer interaction techniques to change the current selection of features systematically and, thus, give the user access to all features. We distinguish three methods, which allow the user either to slide the labels along the bottom side of the map or to browse the labels based on pages or stacks. We present a generic algorithmic framework that lets us express the different variants of interaction techniques as optimization problems in a unified way. We propose both exact algorithms and fast, simple heuristics that solve the optimization problems taking into account different criteria such as the ranking of the labels, the total leader length, and the distance between leaders. In experiments on real-world data we evaluate these algorithms and discuss the three variants with respect to their strengths and weaknesses, demonstrating the flexibility of the presented algorithmic framework.
A. Bonerath, B. Niedermann, J. Diederich, Y. Orgeig, J. Oehrlein, and J.-H. Haunert. A time-windowed data structure for spatial density maps. In Proc. 28th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL '20), pages 15-24. 2020.
The visualization of spatio-temporal data helps researchers understand global processes such as animal migration. In particular, interactively restricting the data to different time windows reveals new insights into the short-term and long-term changes of the research data. Inspired by this use case, we consider the visualization of point data annotated with time stamps. We pick up classical, grid-based density maps as the underlying visualization technique and enhance them with an efficient data structure for arbitrarily specified time-window queries. The running time of the queries is logarithmic in the total number of points and linear in the number of actually colored cells. In experiments on real-world data we show that the data structure answers time-window queries within milliseconds, which supports the interactive exploration of large point sets. Further, the data structure can be used to visualize additional decision problems, e.g., it can answer whether the sum or maximum of additional weights given with the points exceeds a certain threshold. We have defined the data structure to be general enough to also support multiple thresholds expressed by different colors.
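A minimal sketch of the underlying idea, assuming a plain grid with per-cell sorted time stamps (the class name is invented, and unlike the published structure this toy version iterates over all non-empty cells rather than only the actually colored ones):

```python
from bisect import bisect_left, bisect_right
from collections import defaultdict

class TimeWindowDensityMap:
    """Toy time-windowed density map: per grid cell, keep the sorted time
    stamps of its points; a window query then counts points per cell with
    two binary searches each."""

    def __init__(self, points, cell_size):
        self.cells = defaultdict(list)
        # sorting by time stamp keeps every per-cell list sorted
        for x, y, t in sorted(points, key=lambda p: p[2]):
            self.cells[(int(x // cell_size), int(y // cell_size))].append(t)

    def query(self, t1, t2, threshold=1):
        """Return the cells whose point count in [t1, t2] meets threshold."""
        result = {}
        for cell, ts in self.cells.items():
            n = bisect_right(ts, t2) - bisect_left(ts, t1)
            if n >= threshold:
                result[cell] = n
        return result
```

The `threshold` parameter mirrors the decision-problem variant: a cell is only colored if its count (or, in the weighted variant, sum of weights) reaches the threshold.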
A. Bonerath, B. Niedermann, and J.-H. Haunert. Retrieving alpha-shapes and schematic polygonal approximations for sets of points within queried temporal ranges. In Proc. 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL '19), pages 249-258. 2019. trailer: https://youtu.be/mlnUDhbMSfQ
The interactive exploration of data requires data structures that can be repeatedly queried to obtain simple visualizations of parts of the data. We consider the scenario that the data is a set of points each associated with a time stamp and that the result of each query is visualized by an α-shape, which generalizes the concept of convex hulls. Instead of computing each shape independently, we suggest and analyze a simple data structure that aggregates the α-shapes of all possible queries. Once the data structure is built, it allows us, in particular, to query single α-shapes without retrieving the actual (possibly large) point set and thus to rapidly produce small previews of the queried data. We discuss the data structure for the original α-shapes as well as for a schematized version of α-shapes, which further simplifies the visualization. We evaluate the data structure on real-world data. The experiments indicate linear memory consumption with respect to the number of points, which makes the data structure applicable in practice, although the size is quadratic for a theoretical worst-case example.
Drawing network maps automatically comprises two challenging steps, namely laying out the map and placing non-overlapping labels. In this paper we tackle the problem of labeling an already existing network map considering the application of metro maps. We present a flexible and versatile labeling model. Despite the model's simplicity, we prove that labeling even a single line of the network is NP-complete. For a restricted variant of that model, we introduce an efficient algorithm that optimally labels a single line. Based on that algorithm, we present a general and sophisticated workflow for multiple metro lines, which is experimentally evaluated on real-world metro maps.
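A toy version of the single-line case can convey the flavour (a hypothetical simplification: each label occupies an interval along a horizontal line and may be placed above or below it, with same-side labels forbidden to overlap; the paper's model and its optimal algorithm are more general):

```python
def place_labels(intervals):
    """Greedy sketch for labeling a single horizontal metro line.  Labels
    are scanned in order of right endpoint and put on the feasible side
    (above/below) that was freed last (best fit).  Returns a dict mapping
    label index -> 'above' or 'below' for the labels that were placed."""
    last_end = {'above': float('-inf'), 'below': float('-inf')}
    placement = {}
    for i, (a, b) in sorted(enumerate(intervals), key=lambda e: e[1][1]):
        feasible = [s for s in ('above', 'below') if last_end[s] <= a]
        if feasible:
            side = max(feasible, key=lambda s: last_end[s])
            placement[i] = side
            last_end[side] = b
    return placement
```

Three mutually overlapping labels, for instance, can still all be placed by alternating the sides.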
We present a new algorithm that enlarges a focus region in a given network map without removing non-focus (i.e., context) network parts from the map or changing the map's size. In cartography, this problem is usually tackled with fish-eye projections, which, however, introduce severe distortion. Our new algorithm minimizes distortion and, with respect to this objective, produces results of similar quality compared to an existing algorithm. In contrast to the existing algorithm, the new algorithm achieves real-time performance that allows its application in interactive systems. We target applications where a user sets a focus by brushing parts of the network or the focus region is defined as the neighborhood of a moving user. A crucial feature of the algorithm is its capability of avoiding unwanted edge crossings. In essence, we first solve a least-squares optimization problem without the constraints that prevent edge crossings. The solution we find is then used to identify a small set of constraints needed for a crossing-free solution and, beyond this, allows us to start an animation enlarging the focus region before the final, crossing-free solution is found. Moreover, memorizing the non-crossing constraints from an initial run of the algorithm allows us to achieve a better runtime on any further run, assuming that the focus region does not move too much between two consecutive runs. As we show with experiments on real-world data, this enables response times well below 1 second.
M. Fink, J.-H. Haunert, A. Schulz, J. Spoerhase, and A. Wolff. Algorithms for labeling focus regions. IEEE Transactions on Visualization and Computer Graphics, 18(12):2583-2592, 2012.
In this paper, we investigate the problem of labeling point sites in focus regions of maps or diagrams. This problem occurs, for example, when the user of a mapping service wants to see the names of restaurants or other POIs in a crowded downtown area but keep the overview over a larger area. Our approach is to place the labels at the boundary of the focus region and connect each site with its label by a linear connection, which is called a leader. In this way, we move labels from the focus region to the less valuable context region surrounding it. In order to make the leader layout well readable, we present algorithms that rule out crossings between leaders and optimize other characteristics such as total leader length and distance between labels.

This yields a new variant of the boundary labeling problem, which has been studied in the literature. Unlike in traditional boundary labeling, where leaders are usually schematized polylines, we focus on leaders that are either straight-line segments or Bézier curves. Further, we present algorithms that, given the sites, find a position of the focus region that optimizes the above characteristics.

We also consider a variant of the problem where we have more sites than space for labels. In this situation, we assume that the sites are prioritized by the user. Alternatively, we take a new facility-location perspective which yields a clustering of the sites. We label one representative of each cluster. If the user wishes, we apply our approach to the sites within a cluster, giving details on demand.
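The connection between leader length and crossings can be illustrated with a brute-force sketch for tiny instances (the function name and the port model are assumptions; the paper gives efficient algorithms): a classical observation is that an assignment of sites to boundary ports with minimum total straight-line leader length contains no crossing pair, since swapping the ports of two crossing leaders would shorten the total length.

```python
from itertools import permutations
import math

def min_length_assignment(sites, ports):
    """Brute-force boundary labeling for tiny instances: assign each site
    to one label port on the map boundary so that the total straight-line
    leader length is minimum.  The returned tuple maps site index i to
    port index perm[i]."""
    best, best_perm = float('inf'), None
    for perm in permutations(range(len(ports))):
        total = sum(math.dist(sites[i], ports[p]) for i, p in enumerate(perm))
        if total < best:
            best, best_perm = total, perm
    return best_perm, best
```

For two sites and two ports on the right boundary, the crossing assignment is strictly longer, so the minimum is the crossing-free one.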
Mobile users of maps typically need detailed information about their surroundings plus some context information about remote places. In order to avoid that the map partly gets too dense, cartographers have designed mapping functions that enlarge a user-defined focus region; such functions are sometimes called fish-eye projections. The extra map space occupied by the enlarged focus region is compensated by distorting other parts of the map. We argue that, in a map showing a network of roads relevant to the user, distortion should preferably take place in those areas where the network is sparse. Therefore, we do not apply a predefined mapping function. Instead, we consider the road network as a graph whose edges are the road segments. We compute a new spatial mapping with a graph-based optimization approach, minimizing the square sum of distortions at edges. Our optimization method is based on a convex quadratic program (CQP); CQPs can be solved in polynomial time. Important requirements on the output map are expressed as linear inequalities. In particular, we show how to forbid edge crossings. We have implemented our method in a prototype tool. For instances of different sizes, our method generated output maps that were far less distorted than those generated with a predefined fish-eye projection. Future work is needed to automate the selection of roads relevant to the user. Furthermore, we aim at fast heuristics for application in real-time systems.
J.-H. Haunert and A. Wolff. Area aggregation in map generalisation by mixed-integer programming. International Journal of Geographical Information Science, 24(12):1871-1897, 2010.
Topographic databases normally contain areas of different land cover classes, commonly defining a planar partition, that is, gaps and overlaps are not allowed. When reducing the scale of such a database, some areas become too small for representation and need to be aggregated. This unintentionally but unavoidably results in changes of classes. In this article we present an optimisation method for the aggregation problem. This method aims to minimise changes of classes and to create compact shapes, subject to hard constraints ensuring aggregates of sufficient size for the target scale. To quantify class changes we apply a semantic distance measure. We give a graph theoretical problem formulation and prove that the problem is NP-hard, meaning that we cannot hope to find an efficient algorithm. Instead, we present a solution by mixed-integer programming that can be used to optimally solve small instances with existing optimisation software. In order to process large datasets, we introduce specialised heuristics that allow certain variables to be eliminated in advance and a problem instance to be decomposed into independent sub-instances. We tested our method for a dataset of the official German topographic database ATKIS with input scale 1:50,000 and output scale 1:250,000. For small instances, we compare results of this approach with optimal solutions that were obtained without heuristics. We compare results for large instances with those of an existing iterative algorithm and an alternative optimisation approach by simulated annealing. These tests allow us to conclude that, with the defined heuristics, our optimisation method yields high-quality results for large datasets in modest time.
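The flavour of the aggregation problem can be conveyed by a greedy sketch (hypothetical; the article's method solves the problem optimally by mixed-integer programming and uses more elaborate heuristics for large instances): merge undersized regions into semantically similar neighbours until every region is large enough for the target scale.

```python
def aggregate_areas(areas, adjacency, similarity, min_size):
    """Greedy area-aggregation sketch.  `areas` maps region id ->
    (land-cover class, size); `adjacency` maps id -> set of neighbouring
    ids; `similarity(c1, c2)` grows with the semantic closeness of two
    classes.  Repeatedly merges the smallest undersized region into its
    most similar neighbour, keeping the neighbour's class."""
    areas = {r: tuple(v) for r, v in areas.items()}
    adj = {r: set(n) for r, n in adjacency.items()}
    while True:
        small = [r for r in areas if areas[r][1] < min_size and adj[r]]
        if not small:
            break
        r = min(small, key=lambda x: areas[x][1])          # smallest offender
        n = max(adj[r], key=lambda x: similarity(areas[r][0], areas[x][0]))
        areas[n] = (areas[n][0], areas[n][1] + areas[r][1])  # keep n's class
        adj[n] |= adj[r]
        adj[n] -= {r, n}
        for m in adj[r] - {n}:                             # rewire neighbours
            adj[m].discard(r)
            adj[m].add(n)
        del areas[r], adj[r]
    return areas
```

Merging into the most similar class mirrors the objective of minimising class change; the hard minimum-size constraint is what the loop enforces.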
Skeletonization of polygons is a technique that is often applied to problems in cartography and geographic information science. In particular, it is needed for generalization tasks such as the collapse of small or narrow areas that are negligible at a certain scale. Different skeleton operators can be used for such tasks. One of them is the straight skeleton, which was rediscovered by computer scientists several years ago after decades of neglect. Its full range of practicability and its benefits for cartographic applications have not been revealed yet. Based on the straight skeleton, an area collapse that preserves topological constraints, as well as a partial area collapse, can be performed. We present an automatic method for the derivation of road centerlines from a cadastral dataset that exploits special characteristics of the straight skeleton.