2024
Peter Rottmann, Anne Driemel, Herman Haverkort, Heiko Röglin, and Jan-Henrik Haunert. Bicriteria shapes: hierarchical grouping and aggregation of polygons with an efficient graph-cut approach. ACM Transactions on Spatial Algorithms and Systems, December 2024. Just accepted.
An important task of pattern recognition and map generalization is to partition a set of disjoint polygons into groups and to aggregate the polygons within each group to a representative output polygon. We introduce a new method for this task called bicriteria shapes. Following a classical approach, we define the output polygons by merging the input polygons with a set of triangles that we select from a conforming Delaunay triangulation of the input polygons’ exterior. The innovation is that we control the selection of triangles with a bicriteria optimization model that is efficiently solved via graph cuts. In particular, we minimize a weighted sum that combines the total area of the output polygons and their total perimeter. In a basic problem, we ask for a single solution that is optimal for a preset parameter value. In a second problem, we ask for a set containing an optimal solution for every possible value of the parameter. We discuss how this set can be approximated with few solutions and show that it is hierarchically nested. Hence, the output is a hierarchical clustering that corresponds to multiple levels of detail. An evaluation with building footprints as input and a comparison with α-shapes that are based on the same underlying triangulation conclude the article. An advantage of bicriteria shapes compared to α-shapes is that the sequence of solutions for decreasing values of the parameter is monotone with respect to the total perimeter of the output polygons, resulting in a monotonically decreasing visual complexity.
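To make the objective concrete, here is a minimal sketch of how a candidate solution could be scored, assuming shapely for the geometry; the selection of triangles itself, which the paper solves via graph cuts, is not reproduced, and the alpha/(1 - alpha) weighting is an assumption of this sketch.

```python
# Illustrative only: score a candidate triangle selection under the
# weighted-sum objective (total area vs. total perimeter).
from shapely.geometry import Polygon
from shapely.ops import unary_union

def bicriteria_cost(input_polygons, selected_triangles, alpha):
    """Weighted sum of total area and total perimeter of the output
    polygons obtained by merging the inputs with the selected triangles."""
    merged = unary_union(input_polygons + selected_triangles)
    parts = merged.geoms if merged.geom_type == "MultiPolygon" else [merged]
    total_area = sum(p.area for p in parts)
    total_perimeter = sum(p.length for p in parts)
    return alpha * total_area + (1 - alpha) * total_perimeter

# Two unit squares and a candidate bridging triangle between them:
a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
b = Polygon([(2, 0), (3, 0), (3, 1), (2, 1)])
t = Polygon([(1, 0), (2, 0), (1, 1)])
print(bicriteria_cost([a, b], [t], alpha=0.5))
```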
Axel Forsch, Ruben Kemna, Elmar Langetepe, and Jan-Henrik Haunert. Polyline morphing for animated schematic maps. Journal of Geovisualization and Spatial Analysis, 8, 2024.
Thematic maps allow for the visual analysis of spatial data. When comparing two map states, preserving the mental map of a user facilitates the comparison. One way to achieve this is to use animated transitions between the states. This work presents an algorithm for computing such animations, called morphs, between schematized map objects, a technique particularly pertinent in urban mobility scenarios where schematic maps improve map legibility. In schematic maps, abstraction is used to reduce the visual complexity while still conveying information on a selected phenomenon. Our method ensures that the morph has four favorable properties: (1) it is self-intersection-free, (2) it maintains the schematization of the input features, (3) it is self-contained, and (4) every segment moves at its own constant velocity. We present an efficient algorithm to compute vertex traces and the timing of the morph. We evaluate our approach on isochrones visualizing travel times and on different layouts of schematic transit networks. The results show that the additional constraints we induce on the morphing only have a minor influence on the optimization target while they reduce the complexity of the animation.
M. Hartig, S. Seifert, J.-H. Haunert, and S. Hüttel. Improving geographical accuracy of agricultural data: a probabilistic spatial downscaling approach. In Schriften der Gesellschaft für Wirtschafts- und Sozialwissenschaften des Landbaues e.V., volume 59, pages 325-327. 2024.
Analyzing agricultural production requires detailed information about agroclimatic conditions at the farm's locations. The widely used Farm Accountancy Data Network (FADN) misses such information and provides farm locations only at NUTS-2 level. To identify farm locations more precisely, we extend the Bayesian probabilistic spatial downscaling approach of KEMPEN et al. (2011) utilizing only open access data, state-of-the-art regression techniques, and crop modeling. Furthermore, we include new data sources to improve the overall prediction accuracy. Preliminary results indicate that an improved spatial downscaling of FADN farm-level data from NUTS-2 to NUTS-3 is possible using publicly available data sources. Our extension introduces an open-access approach to link FADN farm observations to weather and climate variables.
Julius Knechtel, Peter Rottmann, Jan-Henrik Haunert, and Youness Dehbi. Semantic floorplan segmentation using self-constructing graph networks. Automation in Construction, 166:105649, 2024.
This article presents an approach for the automatic semantic segmentation of floorplan images, predicting room boundaries (walls, doors, windows) and semantic labels of room types. A multi-task network was designed to represent and learn inherent dependencies by combining a Convolutional Neural Network to generate suitable features with a Graph Convolutional Network (GCN) to capture long-range dependencies. In particular, a Self-Constructing Graph module is applied to automatically induce an input graph for the GCN. Experiments on different datasets demonstrate the superiority and effectiveness of the multi-task network compared to state-of-the-art methods. The accurate results not only allow for subsequent vectorization of the existing floorplans but also for automatic inference of layout graphs including connectivity and adjacency relations. The latter could serve as basis to automatically sample layout graphs for architectural planning and design, predict missing links for unobserved parts for as-built building models and learn important latent topological and architectonic patterns.
Reza Arabsheibani, Jan-Henrik Haunert, Stephan Winter, and Martin Tomko. Strategic allocation of landmarks to reduce uncertainty in indoor navigation. Computers, Environment and Urban Systems, 114:102198, 2024.
Indoor navigation systems often rely on verbal, turn-based route instructions. These can, at times, be ambiguous at complex decision points with multiple paths intersecting under angles that are not well distinguished by the turn grammar used. Landmarks can be included in turn instructions to reduce this ambiguity. Here, we propose an approach to optimize landmark allocation to improve the clarity of route instructions. This study assumes that landmark locations are constrained to a pre-determined set of slots. We select a minimum-size subset of the set of all slots and allocate it with landmarks, such that the navigation ambiguity is resolved. Our methodology leverages computational geometric analysis, graph algorithms, and optimization formulations to strategically incorporate landmarks into indoor route instructions. We propose a method to optimize landmark allocation in indoor navigation guidance systems, improving the clarity of route instructions at complex decision points that are inadequately served by turn-based instructions alone.
Alexander Naumann, Annika Bonerath, and Jan-Henrik Haunert. Many-to-many polygon matching à la Jaccard. In Proc. European Symposium on Algorithms (ESA'24), pages 90:1-90:15. 2024.
Integration of spatial data is a major field of research. An important task of data integration is finding correspondences between entities. Here, we focus on combining building footprint data from cadastre and from volunteered geographic information, in particular OpenStreetMap. Previous research on this topic has led to exact 1:1 matching approaches and heuristic m:n matching approaches, most of which are lacking a mathematical problem definition. We introduce a model for many-to-many polygon matching based on the well-established Jaccard index. This is a natural extension to the existing 1:1 matching approaches. We show that the problem is NP-complete and a naive approach via integer programming fails easily. By analyzing the structure of the problem in detail, we can reduce the number of variables significantly. This approach yields an optimal m:n matching even for large real-world instances with appropriate running time. In particular, for the set of all building footprints of the city of Bonn (119,300 / 97,284 polygons) it yielded an optimal solution in approximately 1 hour.
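The Jaccard index at the heart of the model is straightforward to evaluate for a single candidate m:n match; a minimal sketch with shapely (illustrative only, the paper's integer program for finding an optimal matching is not reproduced, and the footprints are invented):

```python
# Jaccard index of a candidate m:n match between two groups of footprints.
from shapely.geometry import Polygon
from shapely.ops import unary_union

def jaccard(polys_a, polys_b):
    """Area of intersection over area of union of the two polygon groups."""
    a, b = unary_union(polys_a), unary_union(polys_b)
    union_area = a.union(b).area
    return a.intersection(b).area / union_area if union_area > 0 else 0.0

cadastre = [Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])]
osm = [Polygon([(0.1, 0), (2, 0), (2, 2), (0.1, 2)])]
print(f"Jaccard index: {jaccard(cadastre, osm):.3f}")
```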
Anna-Maria Bolte, Benjamin Niedermann, Thomas Kistemann, Jan-Henrik Haunert, Youness Dehbi, and Theo Kötter. The green window view index: automated multi-source visibility analysis for a multi-scale assessment of green window views. Landscape Ecology, 39, 2024.
Xinyuan Yan, Peter Rodgers, Peter Rottmann, Daniel Archambault, Jan-Henrik Haunert, and Bei Wang. EulerMerge: simplifying Euler diagrams through set merges. In Diagrams 2024: 14th International Conference on the Theory and Application of Diagrams, volume 14981 of Lecture Notes in Computer Science, pages 190-206. Springer, 2024.
Euler diagrams are an intuitive and popular method to visualize set-based data. In an Euler diagram, each set is represented as a closed curve, and set intersections are shown by curve overlaps. However, Euler diagrams are not visually scalable and automatic layout techniques struggle to display real-world data sets in a comprehensible way. Prior state-of-the-art approaches can embed Euler diagrams by splitting a closed curve into multiple curves so that a set is represented by multiple disconnected enclosed areas. In addition, these methods typically result in multiple curve segments being drawn concurrently. Both of these features significantly impede understanding. In this paper, we present a new and scalable method for embedding Euler diagrams using set merges. Our approach simplifies the underlying data to ensure that each set is represented by a single, connected enclosed area and that the diagram is drawn without curve concurrency, leading to wellformed and understandable Euler diagrams.
Sven Gedicke, Shiyaza Risvi, and Jan-Henrik Haunert. Report on the FAIRagro Workshop on Data Quality for Data Analytics in Agrosystem Science (DQ4DA). Technical Report, University of Bonn, 2024.
Julius Knechtel, Weilian Li, Yannick Orgeig, Jan-Henrik Haunert, and Youness Dehbi. Immersive Virtual Reality to Verify the As-built State of Electric Line Networks in Buildings. In Thomas H. Kolbe, Andreas Donaubauer, and Christof Beil, editors. Recent Advances in 3D Geoinformation Science (Proceedings of the 18th 3D GeoInfo Conference 2023, Munich), pages 129-143. Springer Nature Switzerland, 2024.
Immersive virtual reality (IVR) allows viewing abstract concepts and entities in a three-dimensional (3D) visuospatial environment. In this paper, we innovatively introduced IVR technology into the verification of the as-built state of electric line networks in buildings. On the one hand, using a reasoning-based estimation of electric networks as a starting point, we demonstrated the usability of IVR technology for verifying installed utilities in buildings. On the other hand, we established the communication between the Reasoner and the practitioner and also simulated the verification action of electric line networks in buildings in the real world. The principal findings of this work pave the way for a subsequent and systematic evaluation of the different reasoning strategies for estimating and generating the as-built state of building utilities.
Farzane Mohseni, AmirHossein Ahrari, Jan-Henrik Haunert, and Carsten Montzka. The synergies of SMAP enhanced and MODIS products in a random forest regression for estimating 1 km soil moisture over Africa using Google Earth Engine. Big Earth Data, 8(1):33-57, 2024.
Axel Forsch, Stefan Funke, Jan-Henrik Haunert, and Sabine Storandt. Efficient mining of volunteered trajectory datasets. In Dirk Burghardt, Elena Demidova, and Daniel A. Keim, editors. Volunteered Geographic Information: Interpretation, Visualization and Social Context, pages 43-77. Springer Nature Switzerland, 2024.
With the ubiquity of mobile devices that are capable of tracking positions (be it via GPS or Wi-Fi/mobile network localization), there is a continuous stream of location data being generated every second. These location measurements are typically not considered individually but rather as sequences, each of which reflects the movement of one person or vehicle, which we call a trajectory. This chapter presents new algorithmic approaches to process and visualize trajectories both in the network-constrained and the unconstrained case.
Dorian Baltzer, Jan-Henrik Haunert, and Axel Forsch. Visualizing the influence of new public transport infrastructure on travel times. KN - Journal of Cartography and Geographic Information, 74:107-119, 2024.
Due to global climate change and increasing traffic volumes in cities, a shift from individual to public and multimodal transport is aspired. Travel time is one of the most important aspects for many people when choosing their mode of transportation. This leads to the requirement that changes in travel times have to be considered when planning new public transport infrastructure. This research paper presents and compares different techniques for visualizing the impact of new lines in existing public transport networks on travel times. The general approach of simulating timetable data and calculating intermodal travel times considering public transport and walking is applied to two current infrastructure projects in the city of Bonn and the surrounding region. The created maps generally aim to visualize the spread in travel times between existing and extended transportation networks discretized by different spatial units such as rectangles or postal code areas. In comparison to other common methods which typically require two maps for two different scenarios (e.g. in case of isochrones), our approach gives the opportunity to combine all relevant information within one map. It is also shown how to apply bivariate choropleth maps for displaying travel times and how to visualize improvements in the accessibility of multiple target points of interest at once.
Weilian Li, Lukas Arzoumanidis, Jannik Matijevic, Daham Mohammed Mustafa, Peter Rottmann, Jan-Henrik Haunert, and Youness Dehbi. Safety assessment of cycling routes in urban environments. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLVIII-4/W10-2024:125-130, 2024.
As one of the most well-known forms of active mobility, cycling is an environmentally friendly way to explore a destination, becoming increasingly prevalent worldwide. However, safety is the most severe issue facing cyclists and the essential factor for people who are hesitant to use bikes. In this context, we propose a safety assessment method for cycling routes in urban environments. This work presents how to automatically generate a safe cycling map integrating volunteered geographic information with GIS analysis. We start by extracting the indicators that can possibly contribute to cycling safety from OpenStreetMap. Then, we set up a multivariate linear regression equation to determine the importance of the indicators. Afterward, we generate a weighted graph model that represents cycling safety. Taking the City of Bonn, Germany, as an example, we implement our approach for this region. The experiment results show that Bonn is a safe city for cycling, confirming the claim that Bonn is a bike-friendly city. Our findings can also be used to plan the safest route for cycling, thereby enhancing the overall biking experience in Bonn.
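As a rough illustration of the pipeline (indicator weighting followed by routing on a safety-weighted graph), consider the following sketch; the indicator matrix, the safety-to-cost conversion, and all data are invented here, and the paper's OpenStreetMap extraction is not reproduced.

```python
# Illustrative sketch: least-squares indicator weights, then a safest
# route on a graph whose edge costs decrease with estimated safety.
import numpy as np
import networkx as nx

# Rows: road segments; columns: safety indicators (e.g. cycle lane, lighting).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([0.9, 0.6, 1.0, 0.2])          # reference safety scores
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # indicator importances

G = nx.Graph()
for u, v, indicators in [("a", "b", X[0]), ("b", "c", X[1]), ("a", "c", X[3])]:
    # Higher estimated safety -> lower routing cost (simple inversion).
    G.add_edge(u, v, cost=1.0 / (1e-6 + float(indicators @ w)))

print(nx.shortest_path(G, "a", "c", weight="cost"))  # avoids the unsafe edge
```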
Peter Rottmann, Peter Rodgers, Xinyuan Yan, Daniel Archambault, Bei Wang, and Jan-Henrik Haunert. Generating euler diagrams through combinatorial optimization. Computer Graphics Forum, 43(3), 2024.
Can a given set system be drawn as an Euler diagram? We present the first method that correctly decides this question for arbitrary set systems if the Euler diagram is required to represent each set with a single connected region. If the answer is yes, our method constructs an Euler diagram. If the answer is no, our method yields an Euler diagram for a simplified version of the set system, where a minimum number of set elements have been removed. Further, we integrate known wellformedness criteria for Euler diagrams as additional optimization objectives into our method. Our focus lies on the computation of a planar graph that is embedded in the plane to serve as the dual graph of the Euler diagram. Since even a basic version of this problem is known to be NP-hard, we choose an approach based on integer linear programming (ILP), which allows us to compute optimal solutions with existing mathematical solvers. For this, we draw upon previous research on computing planar supports of hypergraphs and adapt existing ILP building blocks for contiguity-constrained spatial unit allocation and the maximum planar subgraph problem. To generate Euler diagrams for large set systems, for which the proposed simplification through element removal becomes indispensable, we also present an efficient heuristic. We report on experiments with data from MovieDB and Twitter. Over all examples, including 850 non-trivial instances, our exact optimization method failed only for one set system to find a solution without removing a set element. However, with the removal of only a few set elements, the Euler diagrams can be substantially improved with respect to our wellformedness criteria.
Dorian Baltzer, Alexander Naumann, Stephan Rosenberg, and Jan-Henrik Haunert. Automatically generating virtual tours based on dense sets of 360° imagery. Abstracts of the ICA, 7:8, 2024.
2023
Axel Forsch and Jan-Henrik Haunert. Metrochrones: schematic isochrones for schematic metro maps. The Cartographic Journal, 60:383-401, 2023.
Anna Arutyunova, Anne Driemel, Jan-Henrik Haunert, Herman Haverkort, Jürgen Kusche, Elmar Langetepe, Philip Mayer, Petra Mutzel, and Heiko Röglin. Minimum-error triangulations for sea surface reconstruction. Journal of Computational Geometry, 14(2):108-171, 2023.
We apply state-of-the-art computational geometry methods to the problem of reconstructing a time-varying sea surface from tide gauge records. Our work builds on a recent article by Nitzke et al. (Computers & Geosciences, 157:104920, 2021) who have suggested to learn a triangulation $D$ of a given set of tide gauge stations. The objective is to minimize the misfit of the piecewise linear surface induced by $D$ to a reference surface that has been acquired with satellite altimetry. The authors restricted their search to $k$-order Delaunay ($k$-OD) triangulations and used an integer linear program in order to solve the resulting optimization problem. In geometric terms, the input to our problem consists of two sets of points in $\mathbb{R}^2$ with elevations: a set $S$ that is to be triangulated, and a set $R$ of reference points. Intuitively, we define the error of a triangulation as the average vertical distance of a point in $R$ to the triangulated surface that is obtained by interpolating elevations of $S$ linearly in each triangle. Our goal is to find the triangulation of $S$ that has minimum error with respect to $R$. In our work, we prove that the minimum-error triangulation problem is NP-hard and cannot be approximated within any multiplicative factor in polynomial time unless $P = NP$. At the same time we show that the problem instances that occur in our application (considering sea level data from several hundreds of tide gauge stations worldwide) can be solved relatively fast using dynamic programming when restricted to $k$-OD triangulations for $k \le 7$. The fast runtime is a result of a set of fixed edges called the $k$-OD fixed-edge graph. Instances for which the number of connected components of the $k$-OD fixed-edge graph is small can be solved within a few seconds.
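The error measure from the problem definition is easy to state in code; the sketch below evaluates it for a plain Delaunay triangulation with scipy (the paper instead searches over k-OD triangulations, which is not reproduced here, and all coordinates are made up).

```python
# Average vertical distance of reference points R to the piecewise linear
# surface interpolating the elevations of S, per the problem definition.
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

S_xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
S_z = np.array([0.0, 1.0, 1.0, 0.0])            # elevations at S
R_xy = np.array([[0.5, 0.5], [0.25, 0.25]])     # reference points
R_z = np.array([0.4, 0.5])                      # reference elevations

tri = Delaunay(S_xy)                            # one fixed triangulation
surface = LinearNDInterpolator(tri, S_z)        # linear in each triangle
error = np.mean(np.abs(surface(R_xy) - R_z))
print(f"average vertical error: {error:.3f}")
```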
Sophie Duong, Peter Rottmann, Jan-Henrik Haunert, and Petra Mutzel. Clustering building footprint polygons based on graph similarity measures. In Proceedings of the 1st ACM SIGSPATIAL International Workshop on Advances in Urban-AI (UrbanAI '23), pages 22-31. Association for Computing Machinery, 2023.
Footprints of buildings can provide cues about architectural styles and functional types. Learning such latent thematic information from geometry is relevant for various applications, such as urban planning and map generalization. A common task in this context is to cluster a set of building footprints based on their shape characteristics. In this paper, we present a novel method for this task which is based on concepts of graph similarity. We use a graph similarity measure that combines ideas from the Weisfeiler-Lehman-method and optimal transport theory. For the final clustering, we use spectral clustering. To obtain a meaningful graph representation, we propose two algorithms that transform the medial axis of a building footprint into a skeleton graph. We tested our algorithm on a data set from Boston and performed a small user study, where we also compared the results to an existing feature-based clustering method. The study gives a first hint that the results of our algorithm are in line with human similarity perception. Future work is needed to improve the stability of the proposed similarity measure and to confirm our findings with more extensive experiments.
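The final step is standard enough to sketch: given the pairwise similarities (the Weisfeiler-Lehman/optimal-transport measure is assumed to be precomputed here, with invented values), spectral clustering can be run off the shelf, e.g. with scikit-learn.

```python
# Spectral clustering on a precomputed affinity (similarity) matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

affinity = np.array([       # pairwise footprint similarities (made up)
    [1.0, 0.9, 0.1, 0.2],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.2, 0.1, 0.8, 1.0],
])
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)
print(labels)               # two groups of similar footprints, e.g. [0 0 1 1]
```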
Julius Knechtel, Jan Behmann, Jan-Henrik Haunert, and Youness Dehbi. Suitability assessment of different sensors to detect hidden installations for as-built BIM. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, X-1/W1-2023:263-270, 2023.
Knowledge on the utilities hidden in the wall, e.g., electric lines or water pipes, is indispensable for work safety and valuable for planning. Since most of the existing building stock originates from the pre-digital era, no models as understood for Building Information Modeling (BIM) exist. To generate these models often labor-intensive procedures are necessary; however, recent research has dealt with the efficient generation and verification of a building’s electric network. In this context, a reliable measurement method is a necessity. In this paper we test different measurement techniques, such as point-wise measurements with hand-held devices or area-based techniques utilizing thermal imaging. For this purpose, we designed and built a simulation environment that allows various parameters to be manipulated under controlled conditions. In this scenario the low-cost handheld devices show promising results, with a precision between 92% and 100% and a recall between 89% and 100%. The expensive thermal imaging camera is also able to detect electric lines and pipes if there is enough power on the line or if the temperature of the water in the pipe and the environment’s temperature are sufficiently different. Nevertheless, while point-wise measurements can directly yield results, the thermal camera requires post-processing in specific analysis software. The results reinforce the idea of using reasoning methods in both the do-it-yourself and commercial sector, to rapidly gather information about hidden installations in a building without prior technical knowledge. This paves the way for, e.g., exploring the possibilities of an implementation and presentation in augmented reality (AR).
Annika Bonerath, Yannick Orgeig, Jan-Henrik Haunert, and Youness Dehbi. Integrating optimization-based spatial unit allocation into a multi-agent model for the simulation of urban growth. In Proceedings of the 6th ACM SIGSPATIAL International Workshop on GeoSpatial Simulation, pages 19-22. 2023.
Lukas Arzoumanidis, Axel Forsch, Jan-Henrik Haunert, and Youness Dehbi. Catchment cell visualization for multi-modal public transportation networks. In Proc. 1st ACM SIGSPATIAL Workshop on Sustainable Mobility. 2023.
Sven Gedicke, Martin Tomko, Stephan Winter, and Jan-Henrik Haunert. Selecting Landmarks for Wayfinding Assistance Based on Advance Visibility. In Proc. 31st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL'23), pages 1-10. 2023.
A. Bonerath, Y. Dong, and J.-H. Haunert. An efficient data structure providing maps of the frequency of public transit service within user-specified time windows. Advances in Cartography and GIScience of the ICA, 4:1, 2023. Special issue of 31st International Cartographic Conference (ICC'23)
Weilian Li, Jan-Henrik Haunert, Julius Knechtel, Jun Zhu, Qing Zhu, and Youness Dehbi. Social Media Insights on Public Perception and Sentiment During and After Disasters: The European Floods in 2021 as a Case Study. Transactions in GIS, 27(6):1766-1793, 2023.
Detecting and collecting public opinion via social media can provide near real-time information to decision-makers, which plays a vital role in urban disaster management and sustainable development. However, there has been little work focusing on identifying the perception and the sentiment polarity expressed by users during and after disasters, particularly regional flood events. In this article, we comprehensively analyze tweets related to the “European floods in 2021” over time, topic, and sentiment, forming a complete workflow from data processing, topic modeling, sentiment analysis, and topic and sentiment prediction. The aim is to address the following research questions: (1) What are the public perception and main concerns during and after floods? (2) How does the public sentiment change during and after floods? Results indicate that there is a significant correlation between a flood's trend and the heat of corresponding tweets. The three topics that receive the most public concern are: (1) climate change and global warming; (2) praying for the victims; and (3) disaster situations and information. Negative sentiments are predominant during the floods and will continue for some time. We tested five different classifiers, of which TextCNN-attention turned out to deliver the best predictions in topic and sentiment prediction and performed well for sparse flood tweets; it can be used to predict the topic and sentiment polarity of a single tweet in real-time during flood events. Our findings can help disaster agencies to better understand the dynamics of social networks and develop stronger situational awareness towards a disaster, which can contribute to scientifically justified decision-making in urban risk management and also meet the challenges associated with the global Sustainable Development Goal 11 (SDG 11) on Sustainable Cities and Communities.
Lukas Arzoumanidis, Julius Knechtel, Jan-Henrik Haunert, and Youness Dehbi. Self-Constructing Graph Convolutional Networks for Semantic Segmentation of Historical Maps. Abstracts of the ICA, 6:11, 2023.
Sven Gedicke, Lukas Arzoumanidis, and Jan-Henrik Haunert. Automating the External Placement of Symbols for Point Features in Situation Maps for Emergency Response. Cartography and Geographic Information Science, 50(4):385-402, 2023.
Sven Gedicke, Lukas Arzoumanidis, and Jan-Henrik Haunert. Ein Algorithmus zur automatischen Platzierung taktischer Zeichen in der digitalen Lageskizze. Zeitschrift für Forschung und Technik im Brandschutz vfdb, 72(2):59-65, 2023.
Axel Forsch, Johannes Oehrlein, Benjamin Niedermann, and Jan-Henrik Haunert. Inferring routing preferences from user-generated trajectories using a compression criterion. Journal of Spatial Information Science, 26(5):99-124, 2023.
The optimal path between two vertices in a graph depends on the optimization objective, which is often defined as a weighted sum of multiple criteria. When integrating two criteria, their relative importance is expressed with a balance factor α. We present a new approach for inferring α from trajectories. The core of our approach is a compression algorithm that requires a graph G representing a transportation network, two edge costs modeling routing criteria, and a path P in G representing the trajectory. It yields a minimum subsequence S of the sequence of vertices of P and a balance factor α, such that the path P can be fully reconstructed from S, G, its edge costs, and α. By minimizing the size of S over α, we learn the balance factor that corresponds best to the user's routing preferences. In an evaluation with crowd-sourced cycling trajectories, we weigh the usage of official signposted cycle routes against other routes. More than 50% of the trajectories can be segmented into five optimal sub-paths or less. Almost 40% of the trajectories indicate that the cyclist is willing to take a detour of 50% over the geodesic shortest path to use an official cycle path.
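The weighted-sum objective makes it simple to test whether a given trajectory path is optimal for a fixed α; a minimal sketch with networkx (the compression algorithm that minimizes |S| over α is not reproduced, and the toy edge costs are invented):

```python
# Check optimality of a path under the combined cost alpha*c1 + (1-alpha)*c2.
import networkx as nx

G = nx.Graph()
# (u, v, c1, c2): e.g. length vs. a penalty for leaving official cycle routes
for u, v, c1, c2 in [(0, 1, 1.0, 0.0), (1, 2, 1.0, 0.0), (0, 2, 1.5, 1.0)]:
    G.add_edge(u, v, c1=c1, c2=c2)

def is_optimal(G, path, alpha):
    combined = lambda u, v, d: alpha * d["c1"] + (1 - alpha) * d["c2"]
    best = nx.shortest_path_length(G, path[0], path[-1], weight=combined)
    cost = sum(combined(u, v, G[u][v]) for u, v in zip(path, path[1:]))
    return abs(cost - best) < 1e-9

print(is_optimal(G, [0, 1, 2], alpha=0.5))  # detour is optimal for this alpha
```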
Sven Gedicke and Jan-Henrik Haunert. An Empirical Study on Interfaces for Presenting Large Sets of Point Features in Mobile Maps. The Cartographic Journal, 60(1):25-42, 2023.
Xenia Specka, Daniel Martini, Claus Weiland, Daniel Arend, Senthold Asseng, Franziska Boehm, Til Feike, Juliane Fluck, David Gackstetter, Aida Gonzales-Mellado, Thomas Hartmann, Jan-Henrik Haunert, Florian Hoedt, Carsten Hoffmann, Patrick König, Matthias Lange, Stephan Lesch, Birte Lindstädt, Gunnar Lischeid, Markus Möller, Uwe Rascher, Jochen Christoph Reif, Markus Schmalzl, Matthias Senft, Ulrike Stahl, Nikolai Svoboda, Björn Usadel, Heidi Webber, and Frank Ewert. FAIRagro: Ein Konsortium in der Nationalen Forschungsdateninfrastruktur (NFDI) für Forschungsdaten in der Agrosystemforschung. Informatik Spektrum, 2023.
Annika Bonerath, Jan-Henrik Haunert, Joseph S.B. Mitchell, and Benjamin Niedermann. Shortcut hulls: vertex-restricted outer simplifications of polygons. Computational Geometry: Theory and Applications, 101983, 2023.
Let P be a polygon and C a set of shortcuts, where each shortcut is a directed straight-line segment connecting two vertices of P. A shortcut hull of P is another polygon that encloses P and whose oriented boundary is composed of elements from C. We require P and the output shortcut hull to be weakly simple polygons, which we define as a generalization of simple polygons. Shortcut hulls find their application in cartography, where a common task is to compute simplified representations of area features. We aim at a shortcut hull that has a small area and a small perimeter. Our optimization objective is to minimize a convex combination of these two criteria. If no holes in the shortcut hull are allowed, the problem admits a straightforward solution via computation of shortest paths. For the more challenging case in which the shortcut hull may contain holes, we present a polynomial-time algorithm that is based on computing a constrained, weighted triangulation of the input polygon's exterior. We use this problem as a starting point for investigating further variants, e.g., restricting the number of edges or bends. We demonstrate that shortcut hulls can be used for the schematization of polygons.
2022
Youness Dehbi, Julius Knechtel, Benjamin Niedermann, and Jan-Henrik Haunert. Incremental constraint-based reasoning for estimating as-built electric line routing in buildings. Automation in Construction, 143:104571, 2022.
This article addresses the augmentation of existing building models by a-priori not observable structures such as electric installations. The aim is to unambiguously determine an electric network in an incremental manner with a minimum number of local measurements, e.g. using wire detectors, by suggesting the next measurement. Different reasoning strategies, e.g. utilizing graph-theoretical algorithms, have been presented and tested based on a hypothesis which is generated using Mixed Integer Linear Programming (MILP) while incorporating standards regarding the installation of electric wiring and findings from previous measurements. The presented method has been successfully applied to simulated and real-world buildings; it saves up to 80% of the necessary measurements compared to an exhaustive verification of the whole existing electric network and paves the way for efficiently extending existing models, e.g. GIS models, with information on hidden utilities. This opens up new opportunities to model further infrastructures, e.g. water pipes, in future research.
W. Li, J. Zhu, J.-H. Haunert, L. Fu, Q. Zhu, and Y. Dehbi. Three-dimensional virtual representation for the whole process of dam-break floods from a geospatial storytelling perspective. International Journal of Digital Earth, 15:1637-1656, 2022.
The objective of disaster scenes is to share location-based risk information with a large audience in an effective and intuitive way. However, current studies on three-dimensional (3D) representation for dam-break floods have the following limitations: (1) they are lacking a reasonable logic to organize the whole process of dam-break floods, (2) they present information in a way that cannot be easily understood by laypersons. Geospatial storytelling helps to create exciting experiences and to explain complex relationships of geospatial phenomena. This article proposes a three-dimensional virtual representation method for the whole process of dam-break floods from a geospatial storytelling perspective. The creation of a storyline and a storytelling-oriented representation of dam-break floods are discussed in detail. Finally, a prototype system based on WebGL is developed to conduct an experiment analysis. The results of the experiment show that the proposed method can effectively support 3D representation of the spatiotemporal process of dam-break floods. Furthermore, the statistical results indicate that the storytelling is useful for assisting participants in understanding the occurrence and development of dam-break floods, and is applicable to the popularization of disaster science for the general public.
Anna Arutyunova, Anne Driemel, Jan-Henrik Haunert, Herman J. Haverkort, Jürgen Kusche, Elmar Langetepe, Philip Mayer, Petra Mutzel, and Heiko Röglin. Minimum-error triangulations for sea surface reconstruction. In Proc. 38th International Symposium on Computational Geometry (SoCG 2022), volume 224 of LIPIcs, pages 7:1-7:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
Peter Rottmann, Jan-Henrik Haunert, and Youness Dehbi. Automatic building footprint extraction from 3D laserscans. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, X-4/W2-2022:233-240, 2022.
Building footprints are a prerequisite for many tasks such as urban mapping and planning. Such structures are mostly derived using airborne laser scanning which reveals rather roof structures than the underlying hidden footprint boundary. This paper introduces an approach to extract a 2D building boundary from a 3D point cloud stemming from either terrestrial scanning or via close-range sensing using a mobile platform, e.g. a drone. To this end, a pipeline of methods including non-parametric kernel density estimation (KDE) of an underlying probability density function, a solution of the Travelling Salesperson Problem (TSP), outlier elimination and line segmentation are presented to extract the underlying building footprint. KDE turns out to be suitable to automatically determine a horizontal cut in the point cloud. An ordering of the resulting points in this cut using a shortest possible tour based on TSP allows for the application of existing line segmentation algorithms, otherwise dedicated to indoor segmentation. Outliers in the resulting segments are removed using DBSCAN. The segments are then generalized leading to the final footprint geometry. We applied our approach on real-world examples and achieved an IoU between 0.930 and 0.998 assessed by ground truth footprints from both authoritative and VGI data.
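The first step of the pipeline, choosing the horizontal cut, can be illustrated with a one-dimensional KDE over the point heights; the synthetic data and the choice of the density mode as cut height are assumptions of this sketch, and the TSP ordering, line segmentation, and DBSCAN steps are omitted.

```python
# Pick a horizontal cut through a building point cloud via a KDE over z.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic heights: dense wall returns near z = 1.5 m plus scattered noise.
z = np.concatenate([rng.normal(1.5, 0.05, 500), rng.uniform(0.0, 3.0, 100)])

kde = gaussian_kde(z)
grid = np.linspace(z.min(), z.max(), 200)
cut_height = grid[np.argmax(kde(grid))]    # densest height band
slab = z[np.abs(z - cut_height) < 0.1]     # points near the cut
print(f"cut at z = {cut_height:.2f} m, {slab.size} points selected")
```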
Peter Rottmann, Markus Wallinger, Annika Bonerath, Sven Gedicke, Martin Nöllenburg, and Jan-Henrik Haunert. MosaicSets: Embedding Set Systems into Grid Graphs. In Abstracts of the 1st Workshop on Computational Cartography, 2022.
Peter Rottmann, Markus Wallinger, Annika Bonerath, Sven Gedicke, Martin Nöllenburg, and Jan-Henrik Haunert. MosaicSets: embedding set systems into grid graphs. IEEE Transactions on Visualization and Computer Graphics, 29(1):875-885, 2022.
Visualizing sets of elements and their relations is an important research area in information visualization. In this paper, we present MosaicSets: a novel approach to create Euler-like diagrams from non-spatial set systems such that each element occupies one cell of a regular hexagonal or square grid. The main challenge is to find an assignment of the elements to the grid cells such that each set constitutes a contiguous region. As use case, we consider the research groups of a university faculty as elements, and the departments and joint research projects as sets. We aim at finding a suitable mapping between the research groups and the grid cells such that the department structure forms a base map layout. Our objectives are to optimize both the compactness of the entirety of all cells and of each set by itself. We show that computing the mapping is NP-hard. However, using integer linear programming we can solve real-world instances optimally within a few seconds. Moreover, we propose a relaxation of the contiguity requirement to visualize otherwise non-embeddable set systems. We present and discuss different rendering styles for the set overlays. Based on a case study with real-world data, our evaluation comprises quantitative measures as well as expert interviews.
Annika Bonerath, Lukas Temerowski, Sven Gedicke, and Jan-Henrik Haunert. Exploring Spatio-Temporal Event Data on a Smart Watch. Abstracts of the ICA, 5:96, September 2022.
Victor Korir, Axel Forsch, Youness Dehbi, and Jan-Henrik Haunert. Visualizing the modal split in public transportation networks. Abstracts of the ICA, 5:89, September 2022.
Anna Brauer, Ville Mäkinen, Axel Forsch, Juha Oksanen, and Jan-Henrik Haunert. My home is my secret: concealing sensitive locations by context-aware trajectory truncation. International Journal of Geographical Information Science, 36(12):2496-2524, 2022.
Ever since location-based services and mobile applications collecting data gathered through Global Navigation Satellite System (GNSS) positioning have become popular, concerns about location privacy have been expressed. Research has shown that human trajectory repositories containing sequences of observed locations ordered in time constitute a rich source for analyzing movement patterns, but they can also reveal sensitive personal information, such as a person’s home address. In this paper, we present a mechanism that protects visits to sensitive locations by suppressing revealing parts of trajectories. Our attack model acknowledges that the course of a trajectory, combined with spatial context information, can facilitate privacy breaches even if sensitive locations have been concealed. Thus, we introduce the concept of k-site-unidentifiability, a specialization of k-anonymity, under which a sensitive location cannot be singled out from a group of at least k sites that the trajectory could have visited. In an experimental study, we show that our method is utility-preserving and protects sensitive locations reliably even in sparsely built environments. As it can process each trajectory independently, individuals may also use our mechanism to enhance their privacy before publishing their trajectories.
Axel Forsch, Friederike Amann, and Jan-Henrik Haunert. Visualizing the off-screen evolution of trajectories. KN - Journal of Cartography and Geographic Information, 72(3):201-212, May 2022.
In the context of volunteered geographic information, large sets of trajectories of humans and animals are collected. Analyzing these trajectories visually is often complicated due to limited display sizes. For instance, when a user chooses a large map scale to inspect the details of a trajectory, only a small part of the trajectory is visible in the map. Therefore, in this article, we present an approach for visualizing the off-screen evolution of trajectories, i.e., their continuation outside of the displayed map. We propose visual cues in the form of glyphs that are displayed at the map's boundary and that consist of one or multiple disk sectors of varying size and opening angle. These glyphs indicate the direction and variability of direction of a trajectory's continuation outside the map frame. We present an algorithm for computing the glyphs efficiently and evaluate them in a user study. The results show that the glyphs are intuitive to understand even without explanation. We further present suggestions for improving the glyph design based on the results.
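The two quantities a glyph encodes, direction and directional variability of the off-screen continuation, can be estimated with basic circular statistics; this sketch is only a plausible reading of those quantities, not the paper's glyph construction, and the sample trajectory is invented.

```python
# Circular mean and variance of segment headings of an off-screen polyline.
import numpy as np

def heading_stats(points):
    """Mean direction (radians) and circular variance in [0, 1]."""
    d = np.diff(np.asarray(points, dtype=float), axis=0)
    angles = np.arctan2(d[:, 1], d[:, 0])
    mean_vec = np.exp(1j * angles).mean()
    return np.angle(mean_vec), 1.0 - np.abs(mean_vec)

offscreen = [(0, 0), (1, 0.2), (2, 0.1), (3, 0.4)]
direction, variability = heading_stats(offscreen)
print(f"direction {np.degrees(direction):.1f} deg, variability {variability:.2f}")
```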
Axel Forsch, Ruben Kemna, Elmar Langetepe, and Jan-Henrik Haunert. Morphing of schematized polygons for animated travel-time maps. In 3rd Schematic Mapping Workshop. 2022. Poster abstract. Available at: https://www.ruhr-uni-bochum.de/schematicmapping/papers/smw-fklh.pdf
Travel-time maps are an important tool for analyzing the efficacy of public transportation systems. These maps display one or multiple isochrones. An isochrone is a line of equal travel time, which means that reaching any point on it from a given starting point requires the same amount of time. As timetables are usually very irregular over the course of a week, travel-time maps are highly variant with respect to the starting time. Thus, for an extensive analysis of the network, this variance has to be visualized appropriately. One way of doing this is by using a digital map with animated transitions between different isochrones, so-called morphs. In this work, we present an algorithm for computing such morphs between isochrones, particularly targeting schematized travel-time maps as they are often used for visualization in the context of public transportation. In our current research, we are optimizing the morphing between two given polygonal lines. In future research, we plan to extend the approach to finding the correct polygon correspondences in the given isochrones.
Julius Knechtel, Lasse Klingbeil, Jan-Henrik Haunert, and Youness Dehbi. Optimal position and path planning for stop-and-go laserscanning for the acquisition of 3D building models. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-4-2022:129-136, 2022.
Terrestrial laser scanning has become more and more popular in recent years. The corresponding planning of the standpoint network is a crucial issue influencing the overhead and the resulting point cloud. Fully static approaches are both cost- and time-intensive, whereas fully kinematic approaches cannot produce the same data quality. Stop-and-go scanning, which combines the strengths of both strategies, represents a good alternative solution. In the scanning process, the standpoint planning is by now mostly a manual process based on expert knowledge and relying on the surveyor's experience. This paper provides a method based on Mixed Integer Linear Programming (MILP) ensuring an optimal placement of scanner standpoints considering all scanner-related constraints (e.g. incidence angle), a full coverage of the scenery, a sufficient overlap for the subsequent registration and an optimal route planning solving a Traveling Salesperson Problem (TSP). This enables the fully automatic application of autonomous systems for providing a complete model while performing a stop-and-go laser scanning, e.g. with the Spot robot from Boston Dynamics. Our pre-computed solution, i.e. standpoints and trajectory, has been evaluated surveying a real-world environment and successfully compared with a precise LoD2 building model of the underlying scene. The performed ICP-based registration issued from our fully automatic pipeline turns out to be a very good and safe alternative to the otherwise laborious target-based registration.
2021
Alina Nitzke, Benjamin Niedermann, Luciana Fenoglio-Marc, Jürgen Kusche, and Jan-Henrik Haunert. Reconstructing the time-variable sea surface from tide gauge records using optimal data-dependent triangulations. Computers & Geosciences, 157:104920, 2021.
Reconstructions of sea level prior to the satellite altimeter era are usually derived from tide gauge records; however, most algorithms for this assume that modes of sea level variability are stationary which is not true over several decades. Here we suggest a method that is based on optimized data-dependent triangulations of the network of gauge stations. Data-dependent triangulations are triangulations of point sets that rely not only on 2D point positions but also on additional data (here: sea surface anomalies). In particular, min-error criteria have been suggested to construct triangulations that approximate a given surface. In this article, we show how data-dependent triangulations with min-error criteria can be used to reconstruct 2D maps of the sea surface anomaly over a longer time period, assuming that anomalies are continuously monitored at a sparse set of stations and, in addition, observations of a control surface are provided over a shorter time period. At the heart of our method is the idea to learn a min-error triangulation based on the control data that is available, and to use the learned triangulation subsequently to compute piece-wise linear surface models for epochs in which only observations from monitoring stations are given. Moreover, we combine our approach of min-error triangulation with k-order Delaunay triangulation to stabilize the triangles geometrically. We show that this approach is in particular advantageous for the reconstruction of the sea surface by combining tide gauge measurements (which are sparse in space but cover a long period back in time) with data of modern satellite altimetry (which have a high spatial resolution but cover only the last decades). We show how to learn a min-error triangulation and a min-error k-order Delaunay triangulation using an exact algorithm based on integer linear programming. We compare our reconstructions against the Delaunay triangulation which had been proposed earlier for sea-surface modeling and find superior quality. With real data for the North Sea we show that the min-error triangulation outperforms the Delaunay method substantially for reconstructions back in time up to 18 years, and the k-order Delaunay min-error triangulation even up to 21 years for k=2. With a running time of less than one second our approach would be applicable to areas with far greater extent than the North Sea.
Youness Dehbi, Johannes Leonhardt, Johannes Oehrlein, and Jan-Henrik Haunert. Optimal scan planning with enforced network connectivity for the acquisition of three-dimensional indoor models. ISPRS Journal of Photogrammetry and Remote Sensing, 180:103-116, 2021.
The positioning of laser scanners for indoor surveying is still a time and cost expensive process. This article proposes an optimization approach for computing an admissible sensor placement with the minimal number of sensor view point positions. The approach facilitates both wall and floor surveying based on a floorplan of the study object. Optimal solutions are calculated by solving an Integer Linear Program that respects manufacturer specifications incorporating constraints such as full coverage. To enable a subsequent co-registration of the scans, a flow-based constraint formulation ensuring the connectivity of the selected positions in an appropriately defined geometric intersection graph is introduced. The method has been evaluated on real-world objects and compared to heuristic methods that have frequently been used for related problems. Our solutions outperform heuristic approaches regarding both running time and the number of TLS stations.
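The core of such a model is a set-cover ILP; the sketch below shows only that core with PuLP, using made-up candidate viewpoints and coverage sets, and omits the paper's flow-based connectivity constraints and manufacturer-specific visibility checks.

```python
# Minimum number of viewpoints covering all wall targets (set-cover ILP).
import pulp

candidates = ["p1", "p2", "p3"]
targets = ["w1", "w2", "w3", "w4"]
covers = {"p1": {"w1", "w2"}, "p2": {"w2", "w3"}, "p3": {"w3", "w4"}}

prob = pulp.LpProblem("scan_planning", pulp.LpMinimize)
x = pulp.LpVariable.dicts("use", candidates, cat="Binary")
prob += pulp.lpSum(x[c] for c in candidates)              # few stations
for t in targets:                                         # full coverage
    prob += pulp.lpSum(x[c] for c in candidates if t in covers[c]) >= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([c for c in candidates if x[c].value() == 1])       # e.g. ['p1', 'p3']
```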
Annika Bonerath, Jan-Henrik Haunert, Joseph S. B. Mitchell, and Benjamin Niedermann. Shortcut hulls: vertex-restricted outer simplifications of polygons. In Meng He, and Don Sheehy, editors. Proceedings of the 33rd Canadian Conference on Computational Geometry, pages 12-23. 2021.
Let P be a crossing-free polygon and C a set of shortcuts, where each shortcut is a directed straight-line segment connecting two vertices of P. A shortcut hull of P is another crossing-free polygon that encloses P and whose oriented boundary is composed of elements from C. Shortcut hulls find their application in geo-related problems such as the simplification of contour lines. We aim at a shortcut hull that linearly balances the enclosed area and perimeter. If no holes in the shortcut hull are allowed, the problem admits a straightforward solution via shortest paths. For the more challenging case that the shortcut hull may contain holes, we present a polynomial-time algorithm that is based on computing a constrained, weighted triangulation of the input polygon’s exterior. We use this problem as a starting point for investigating further variants, e.g., restricting the number of edges or bends. We demonstrate that shortcut hulls can be used for drawing the rough extent of point sets as well as for the schematization of polygons.
Jan-Henrik Haunert. Interaktive Karten für große Datensätze - eine algorithmische und kartographische Herausforderung. In Geoinformationssysteme 2021 - Beiträge zur 8. Münchner GI-Runde, pages 63-64. 2021.
Anna Arutyunova, Anne Driemel, Jan-Henrik Haunert, Herman Haverkort, Petra Mutzel, and Heiko Röglin. Minimum-error triangulation is NP-hard. In Proceedings of the 37th European Workshop on Computational Geometry (EuroCG'21). 2021.
Given two sets of points in the plane, $P$ and $R$, with a real-valued function defined on $P \cup R$, we define for any triangulation defined on $P$ the vertical error at the points of $R$ and wish to find a triangulation that minimizes this error. We show that this problem is NP-hard to approximate within any approximation factor.
Jakob Geiger, Sabine Cornelsen, Jan-Henrik Haunert, Philipp Kindermann, Tamara Mchedlidze, Martin Nöllenburg, Yoshio Okamoto, and Alexander Wolff. ClusterSets: Optimizing planar clusters in categorical point data. Computer Graphics Forum, 2021. EuroVis 2021 special issue, accepted for publication
In geographic data analysis, one is often given point data of different categories (such as facilities of a university categorized by department). Drawing upon recent research on set visualization, we want to visualize category membership by connecting points of the same category with visual links. Existing approaches that follow this path usually insist on connecting all members of a category, which may lead to many crossings and visual clutter. We propose an approach that avoids crossings between connections of different categories completely. Instead of connecting all data points of the same category, we subdivide categories into smaller, local clusters where needed. We do a case study comparing the legibility of drawings produced by our approach and those by existing approaches. In our problem formulation, we are additionally given a graph $G$ on the data points whose edges express some sort of proximity. Our aim is to find a subgraph $G'$ of $G$ with the following properties: (i) edges connect only data points of the same category, (ii) no two edges cross, and (iii) the number of connected components (clusters) is minimized. We then visualize the clusters in $G'$. For arbitrary graphs, the resulting optimization problem, Cluster Minimization, is NP-hard (even to approximate). Therefore, we introduce two heuristics. We do an extensive benchmark test on real-world data. Comparisons with exact solutions indicate that our heuristics do astonishingly well for certain relative-neighborhood graphs.
Sven Gedicke, Johannes Oehrlein, and Jan-Henrik Haunert. Aggregating land-use polygons considering line features as separating map elements. Cartography and Geographic Information Science, 48(2):124-139, 2021.
Map generalization is the process of deriving small-scale target maps from a large-scale source map or database while preserving valuable information. In this paper we focus on topographic data, in particular areas of different land-use classes and line features representing the road network. When reducing the map scale, some areas need to be merged to larger composite regions. This process is known as area aggregation. Given a planar partition of areas, one usually aims to build geometrically compact regions of sufficient size while keeping class changes small. Since line features (e.g. roads) are perceived as separating elements in a map, we suggest integrating them into the process of area aggregation. Our aim is that boundaries of regions coincide with line features in such a way that strokes (i.e. chains of line features with small angles of deflection) are not broken into short sections. Complementing the criteria of compact regions and preserving land-use information, we consider this aim as a third criterion. Regarding all three criteria, we formalize an optimization problem and solve it with a heuristic approach using simulated annealing. Our evaluation is based on experiments with different parameter settings. In particular, we compare results of a baseline method that considers two criteria, namely compactness and class changes, with results of our new method that additionally considers our stroke-based criterion. Our results show that this third criterion can be substantially improved while keeping the quality with respect to the original two criteria on a similar level.
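The search strategy itself is generic simulated annealing; a skeleton of the loop is sketched below, with the cost function and move operator left as placeholders for the paper's three criteria and its area-reassignment moves.

```python
# Generic simulated-annealing skeleton (cost and move are placeholders).
import math
import random

def simulated_annealing(initial, cost, move, t0=1.0, cooling=0.999,
                        iterations=10000, seed=0):
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    t = t0
    for _ in range(iterations):
        candidate = move(current, rng)
        delta = cost(candidate) - current_cost
        # Always accept improvements; accept worse moves with prob. e^(-delta/t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
        t *= cooling
    return current

# Toy usage: minimize x^2 over the reals.
print(simulated_annealing(10.0, cost=lambda s: s * s,
                          move=lambda s, r: s + r.uniform(-1, 1)))
```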
Sujoy Bhore, Jan-Henrik Haunert, Fabian Klute, Guangping Li, and Martin Nöllenburg. Balanced independent and dominating sets on colored interval graphs. In Proc. 47th International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM 2021), pages 89-103. Springer International Publishing, 2021.
We study two new versions of independent and dominating set problems on vertex-colored interval graphs, namely f-Balanced Independent Set (f-BIS) and f-Balanced Dominating Set (f-BDS). Let $G=(V,E)$ be an interval graph with a color assignment function $\gamma: V \rightarrow \{1,\ldots,k\}$ that maps all vertices in $G$ onto $k$ colors. A subset of vertices $S \subseteq V$ is called f-balanced if $S$ contains $f$ vertices from each color class. In the f-BIS and f-BDS problems, the objective is to compute an independent set or a dominating set that is f-balanced. We show that both problems are NP-complete even on proper interval graphs. For the BIS problem on interval graphs, we design two FPT algorithms, one parameterized by $(f, k)$ and the other by the vertex cover number of $G$. Moreover, for an optimization variant of BIS on interval graphs, we present a polynomial time approximation scheme (PTAS) and an $O(n \log n)$-time 2-approximation algorithm.
Sven Gedicke, Annika Bonerath, Benjamin Niedermann, and Jan-Henrik Haunert. Zoomless Maps: External Labeling Methods for the Interactive Exploration of Dense Point Sets at a Fixed Map Scale. IEEE Transactions on Visualization and Computer Graphics, 27(2):1247-1256, 2021. https://youtu.be/IuMhk8jp54c
| |
Visualizing spatial data on small-screen devices such as smartphones and smartwatches poses new challenges in computational cartography. The current interfaces for map exploration require their users to zoom in and out frequently. Indeed, zooming and panning are tools suitable for choosing the map extent corresponding to an area of interest. They are not as suitable, however, for resolving the graphical clutter caused by a high feature density since zooming in to a large map scale leads to a loss of context. Therefore, in this paper, we present new external labeling methods that allow a user to navigate through dense sets of points of interest while keeping the current map extent fixed. We provide a unified model, in which labels are placed at the boundary of the map and visually associated with the corresponding features via connecting lines, which are called leaders. Since the screen space is limited, labeling all features at the same time is impractical. Therefore, at any time, we label a subset of the features. We offer interaction techniques to change the current selection of features systematically and, thus, give the user access to all features. We distinguish three methods, which allow the user either to slide the labels along the bottom side of the map or to browse the labels based on pages or stacks. We present a generic algorithmic framework that provides us with the possibility of expressing the different variants of interaction techniques as optimization problems in a unified way. We propose both exact algorithms and fast and simple heuristics that solve the optimization problems taking into account different criteria such as the ranking of the labels, the total leader length as well as the distance between leaders. In experiments on real-world data we evaluate these algorithms and discuss the three variants with respect to their strengths and weaknesses, demonstrating the flexibility of the presented algorithmic framework. @article{gbnh-zm-21, | |
J.-H. Haunert, D. Schmidt, and M. Schmidt. Anonymization via clustering of locations in road networks (short paper). In Proc. 11th International Conference on Geographic Information Science (GIScience '21). 2021.
| |
Data related to households or addresses needs to be published in an aggregated form to obfuscate sensitive information about individuals. Usually, the data is aggregated to the level of existing administrative zones, but these often do not correspond to formal models of privacy or a desired level of anonymity. Therefore, automatic privacy-preserving spatial clustering methods are needed. To address this need, we present algorithms to partition a given set of locations into $k$-anonymous clusters, meaning that each cluster contains at least $k$ locations. We assume that the locations are given as a set $T \subseteq V$ of terminals in a weighted graph $G = (V, E)$ representing a road network. Our approach is to compute a forest in $G$, i.e., a set of trees, each of which corresponds to a cluster. We ensure the $k$-anonymity of the clusters by constraining the trees to span at least $k$ terminals each (plus an arbitrary number of non-terminal nodes called Steiner nodes). By minimizing the total edge weight of the forest, we ensure that the clusters reflect the proximity among the locations. Although the problem is NP-hard, we were able to solve instances of several hundreds of terminals using integer linear programming. Moreover, we present an efficient approximation algorithm and show that it can be used to process large and fine-grained data sets. @inproceedings{haunert2021anonymization, | |
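To convey the flavor of the clustering (this is not the paper's algorithm and carries no approximation guarantee), the sketch below builds a minimum spanning tree on the terminals' metric closure and then drops the heaviest edges whose removal leaves at least k terminals on both sides; networkx is assumed, and Steiner nodes are ignored for brevity.

    import networkx as nx

    def k_anonymous_clusters(road_graph, terminals, k):
        # metric closure restricted to the terminals
        closure = nx.Graph()
        for t in terminals:
            lengths = nx.single_source_dijkstra_path_length(road_graph, t)
            for u in terminals:
                if u != t:
                    closure.add_edge(t, u, weight=lengths[u])
        forest = nx.minimum_spanning_tree(closure)
        # drop heavy edges as long as both sides keep >= k terminals
        for a, b, data in sorted(forest.edges(data=True),
                                 key=lambda e: e[2]['weight'], reverse=True):
            forest.remove_edge(a, b)
            if (len(nx.node_connected_component(forest, a)) < k
                    or len(nx.node_connected_component(forest, b)) < k):
                forest.add_edge(a, b, **data)  # undo: k-anonymity violated
        return list(nx.connected_components(forest))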
Peter Rottmann, Anne Driemel, Herman Haverkort, Heiko Röglin, and Jan-Henrik Haunert. Bicriteria aggregation of polygons via graph cuts. In Krzysztof Janowicz, and Judith A. Verstegen, editors, volume 208 of Leibniz International Proceedings in Informatics (LIPIcs). 11th International Conference on Geographic Information Science (GIScience 2021) - Part II, pages 6:1-6:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.
| |
We present a new method for the task of detecting groups of polygons in a given geographic data set and computing a representative polygon for each group. This task is relevant in map generalization where the aim is to derive a less detailed map from a given map. Following a classical approach, we define the output polygons by merging the input polygons with a set of triangles that we select from a constrained Delaunay triangulation of the input polygons' exterior. The innovation of our method is to compute the selection of triangles by solving a bicriteria optimization problem. While on the one hand we aim at minimizing the total area of the output polygons, we aim on the other hand at minimizing their total perimeter. We combine these two objectives in a weighted sum and study two computational problems that naturally arise. In the first problem, the parameter that balances the two objectives is fixed and the aim is to compute a single optimal solution. In the second problem, the aim is to compute a set containing an optimal solution for every possible value of the parameter. We present efficient algorithms for these problems based on computing a minimum cut in an appropriately defined graph. Moreover, we show how the result set of the second problem can be approximated with few solutions. In an experimental evaluation, we finally show that the method is able to derive settlement areas from building footprints that are similar to reference solutions. @inproceedings{rottmann2021bicriteria, | |
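The paper's exact graph construction is not reproduced here, but the standard way to encode such an area-plus-perimeter objective as a minimum s-t cut can be sketched as follows; the triangulation adjacencies (areas, shared edge lengths, boundary lengths toward the input polygons and toward the outer face) are assumed to be precomputed, and networkx computes the cut. Triangles ending up on the source side are merged into the output.

    import networkx as nx

    def select_triangles(triangles, lam):
        # triangles: dict id -> {'area': float,
        #     'input_len': float,   # boundary shared with input polygons
        #     'hull_len': float,    # boundary on the outer face
        #     'nbrs': {other_id: shared_edge_length}}   (symmetric)
        g = nx.DiGraph()
        for t, d in triangles.items():
            # pay the shared input boundary if t stays outside ...
            g.add_edge('src', t, capacity=d['input_len'])
            # ... and lam * area plus the outer boundary if t is merged
            g.add_edge(t, 'snk', capacity=lam * d['area'] + d['hull_len'])
            for u, length in d['nbrs'].items():
                g.add_edge(t, u, capacity=length)  # paid when labels differ
        _, (source_side, _) = nx.minimum_cut(g, 'src', 'snk')
        return source_side - {'src'}

Because all capacities are non-negative and the pairwise terms are submodular, the minimum cut yields a globally optimal triangle selection for the fixed balance parameter lam.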
Timon Behr, Thomas C. van Dijk, Axel Forsch, Jan-Henrik Haunert, and Sabine Storandt. Map matching for semi-restricted trajectories. In Krzysztof Janowicz, and Judith A. Verstegen, editors, volume 208 of Leibniz International Proceedings in Informatics (LIPIcs). Proc. 11th International Conference on Geographic Information Science (GIScience '21) - Part II, pages 12:1-12:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.
| |
We consider the problem of matching trajectories to a road map, giving particular consideration to trajectories that do not exclusively follow the underlying network. Such trajectories arise, for example, when a person walks through the inner part of a city, crossing market squares or parking lots. We call such trajectories semi-restricted. Sensible map matching of semi-restricted trajectories requires the ability to differentiate between restricted and unrestricted movement. We develop in this paper an approach that efficiently and reliably computes concise representations of such trajectories that maintain their semantic characteristics. Our approach utilizes OpenStreetMap data to not only extract the network but also areas that allow for free movement (as e.g. parks) as well as obstacles (as e.g. buildings). We discuss in detail how to incorporate this information in the map matching process, and demonstrate the applicability of our method in an experimental evaluation on real pedestrian and bicycle trajectories. @inproceedings{behr2021matching, | |
Axel Forsch, Youness Dehbi, Benjamin Niedermann, Johannes Oehrlein, Peter Rottmann, and Jan-Henrik Haunert. Multimodal travel-time maps with formally correct and schematic isochrones. Transactions in GIS, 25(6):3233-3256, 2021.
| |
The automatic generation of travel-time maps is a prerequisite for many fields of application such as tourist assistance and spatial decision support systems. The task is to determine outlines of zones that are reachable from a user's location in a given amount of time. In this work, we focus on travel-time maps with a formally guaranteed separation property in the sense that a zone exactly contains the part of the road network that is reachable within a pre-defined time from a given starting point and start time. In contrast to other automated methods, our approach generates schematized travel-time maps that reduce the visual complexity by representing each zone by an octilinear polygon. We aim at octilinear polygons with a small number of bends to further optimize the legibility of the map. The reachable parts of the road network are determined by the integration of timetable information for different modes of public transportation and pedestrian walkways based on a multi-modal time-expanded network. Moreover, the generated travel-time maps visualize multiple travel times using a map overlay of different time zones. In experiments on real-world data we compare our schematic visualizations to travel-time maps created with other visualization techniques. @article{forsch2021isochrones, | |
S. Gedicke, A. Jabrayilov, B. Niedermann, P. Mutzel, and J.-H. Haunert. Point feature label placement for multi-page maps on small-screen devices. Computers & Graphics, 100:66-80, 2021.
| |
Map applications on mobile devices such as smartphones and smartwatches have become ubiquitous. When visualizing spatial data on such small-screen devices, one major challenge is annotating the data with labels (e.g., small icons). The restricted space requires new visualization techniques as established strategies, such as maximizing the number of placed labels, easily lead to the omission of information. We propose an approach that distributes all labels contained in a temporarily fixed map section on multiple pages. Applying interaction techniques for navigating through the pages, a user can access all information both without any overlapping labels and without the need for zooming. We propose a method with two phases: a pre-processing phase and a query phase. We use an optimization approach to pre-compute labelings on the level of a whole city and provide the on-demand querying of individual labelings at a more local level. Our approach provides a consistent label-page assignment, meaning that labels do not appear and disappear when the user pans the map. Further, our model provides quick access to important information and a spatially balanced distribution of labels on pages. In experiments on real-world data we analyze different parameter settings and show that our model yields high-quality labelings. @article{gjnmh-zm-21, |
2020
A. Bonerath, B. Niedermann, J. Diederich, Y. Orgeig, J. Oehrlein, and J.-H. Haunert. A time-windowed data structure for spatial density maps. In Proc. 28th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL '20), pages 15-24. 2020.
| |
The visualization of spatio-temporal data helps researchers understand global processes such as animal migration. In particular, interactively restricting the data to different time windows reveals new insights into the short-term and long-term changes of the research data. Inspired by this use case, we consider the visualization of point data annotated with time stamps. We pick up classical, grid-based density maps as the underlying visualization technique and enhance them with an efficient data structure for arbitrarily specified time-window queries. The running time of the queries is logarithmic in the total number of points and linear in the number of actually colored cells. In experiments on real-world data we show that the data structure answers time-window queries within milliseconds, which supports the interactive exploration of large point sets. Further, the data structure can be used to answer additional decision problems, e.g., whether the sum or maximum of additional weights given with the points exceeds a certain threshold. We have defined the data structure general enough to also support multiple thresholds expressed by different colors. @inproceedings{bhn-twdssdm-20, | |
Sujoy Bhore, Jan-Henrik Haunert, Fabian Klute, Guangping Li, and Martin Nöllenburg. Balanced independent and dominating sets on colored interval graphs. In Proc. 36th European Workshop on Computational Geometry (EuroCG 2020), pages 66:1-66:6. 2020.
| |
We study two new versions of independent and dominating set problems on vertex-colored interval graphs, namely $f$-Balanced Independent Set ($f$-BIS) and $f$-Balanced Dominating Set ($f$-BDS). Let $G=(V,E)$ be a vertex-colored interval graph with a $k$-coloring $\gamma\colon V \rightarrow \{1,\ldots,k\}$ for some $k \in \mathbb{N}$. A subset of vertices $S \subseteq V$ is called $f$-balanced if $S$ contains $f$ vertices from each color class. In the $f$-BIS and $f$-BDS problems, the objective is to compute an independent set or a dominating set that is $f$-balanced. We show that both problems are NP-complete even on proper interval graphs. For the BIS problem on interval graphs, we design two FPT algorithms, one parameterized by $(f,k)$ and the other by the vertex cover number of $G$. Moreover, we present a 2-approximation algorithm for a slight variation of BIS on proper interval graphs. @inproceedings{BhoreEtAl2020, | |
Dongliang Peng, Alexander Wolff, and Jan-Henrik Haunert. Finding optimal sequences for area aggregation: A* vs. integer linear programming. ACM Transactions on Spatial Algorithms and Systems, 7(1):4:1-4:40, 2020.
| |
To provide users with maps of different scales and to allow them to zoom in and out without losing context, automatic methods for map generalization are needed. We approach this problem for land-cover maps. Given two land-cover maps at two different scales, we want to find a sequence of small incremental changes that gradually transforms one map into the other. We assume that the two input maps consist of polygons each of which belongs to a given land-cover type. Every polygon on the smaller-scale map is the union of a set of adjacent polygons on the larger-scale map. In each step of the computed sequence, the smallest area is merged with one of its neighbors. We do not select that neighbor according to a prescribed rule but compute the whole sequence of pairwise merges at once, based on global optimization. We have proved that this problem is NP-hard. We formalize this optimization problem as that of finding a shortest path in a (very large) graph. We present the A$^*$ algorithm and integer linear programming to solve this optimization problem. To avoid long computing times, we allow the two methods to return non-optimal results. In addition, we present a greedy algorithm as a benchmark. We tested the three methods with a dataset of the official German topographic database ATKIS. Our main result is that A$^*$ finds optimal aggregation sequences for more instances than the other two methods within a given time frame. @article{Peng2020, | |
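For readers who want to connect the abstract to code, here is the textbook A* skeleton (not the paper's specialized search): in the aggregation setting, a search node would be a partially aggregated map, successors would enumerate the admissible merges of the smallest area, and h would be an admissible lower bound on the remaining merge cost. All names are illustrative.

    import heapq
    from itertools import count

    def a_star(start, is_goal, successors, h):
        # successors(n) yields (neighbor, step_cost); h must never
        # overestimate the true remaining cost, or optimality is lost.
        tie = count()  # breaks f-ties without comparing states
        open_heap = [(h(start), 0.0, next(tie), start)]
        best_g = {start: 0.0}
        parent = {start: None}
        while open_heap:
            f, g, _, node = heapq.heappop(open_heap)
            if g > best_g.get(node, float('inf')):
                continue  # stale heap entry; a better path was found
            if is_goal(node):
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return list(reversed(path)), g
            for neighbor, cost in successors(node):
                new_g = g + cost
                if new_g < best_g.get(neighbor, float('inf')):
                    best_g[neighbor] = new_g
                    parent[neighbor] = node
                    heapq.heappush(open_heap,
                                   (new_g + h(neighbor), new_g,
                                    next(tie), neighbor))
        return None, float('inf')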
Matan Mor, Johannes Oehrlein, Jan-Henrik Haunert, and Sagi Dalyot. Whom to follow? A comparison of walking routes computed based on social media photos from different types of contributors. In volume 1. Proc. 23rd AGILE Conference on Geographic Information Science (AGILE 2020), pages 16. 2020.
| |
Since many tourists share the photos they take on social media channels, large collections of tourist attraction photos are easily accessible online. Recent research has dealt with identifying popular places from these photos, as well as computing city tourism routes based on these photo collections. Although current approaches show great potential, many tourism attractions suffer from being overrun by tourists, not least because many tourists are aware of only a few tourism hot spots that are trending. In the worst case, automatic city route recommendations based on social media photos will intensify this issue and disappoint tourists who seek individual experiences. In the best case, however, if individual preferences are appropriately incorporated into the route planning algorithm, more personalized route recommendations will be achieved. In this paper, we suggest distinguishing two different types of photo contributors, namely: first-time visitors who are usually tourists who "follow the crowd" (e.g., to visit the top tourist attractions), and repeated visitors who are usually locals who "don’t follow the crowd" (e.g., to visit photogenic yet less well-known places). This categorization allows the user to decide how to trade the one objective off against the other. We present a novel method based on a classification of photographers into locals and tourists, and show how to incorporate this information into an algorithmic routing framework based on the Orienteering Problem approach. In detailed experiments we analyze how choosing the parameter that models the trade-off between both objectives influences the optimal route found by the algorithm, designed to serve the user’s travel objective and preferences in terms of visited attraction types. @inproceedings{Mor2020, | |
Hansjörg Kutterer, Hans Neuner, Helmut Mayer, Jan-Henrik Haunert, and Alexandra Weitkamp. Forschungsvorhaben. In Klaus Kummer, Theo Kötter, Hansjörg Kutterer, and Stefan Ostrau, editors. Das deutsche Vermessungs- und Geoinformationswesen 2020, pages 1067-1102. Wichmann Verlag, Heidelberg, Germany, 2020.
| |
@incollection{Kutterer2020, | |
A. Bonerath, J.-H. Haunert, and B. Niedermann. Tight rectilinear hulls of simple polygons. In Proceedings of the 36th European Workshop on Computational Geometry (EuroCG'20). 2020. trailer: https://youtu.be/GZTS5fs0FG4, presentation: https://youtu.be/B392e75WK2U
| |
A polygon is called C-oriented if the orientations of all its edges stem from a pre-defined set C. The schematization of a polygon is then a C-oriented polygon that describes and simplifies the shape of the input polygon with respect to given hard and soft constraints. We study the case that the C-oriented polygon needs to contain the input polygon such that it is tight in the sense that it cannot be shrunk without starting to overlap with the input polygon; we call this a tight C-hull of the polygon. We restrict the tight C-hull to be a simple polygon. We aim at a tight C-hull that optimally balances the number of bends, the total edge length and the enclosed area. For the case that both polygons are rectilinear, we present a dynamic-programming approach that yields such a tight hull in polynomial time. For arbitrary simple polygons we can use the same approach to obtain approximate tight rectilinear hulls. @inproceedings{bhn-trhsp-20, | |
A. Bonerath, J.-H. Haunert, and B. Niedermann. Dynamic aggregation of geo-objects for the interactive exploration of research data. In volume 29. Publikationen der Deutschen Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation e.V., DGPF Jahrestagung 2020, Stuttgart, Germany, pages 488-496. 2020.
| |
In the context of research data management, the interactive exploration of data is a useful tool to make data findable and reusable. For exploring spatio-temporal data, we developed in previously published work a data structure that provides the user with simple visualizations of the data for time window queries. We considered the data to be points in space each associated with a time stamp and we visualized it using α-shapes, which generalize convex hulls. In general, time windowed data structures support time window queries for geometric shapes or more general problems from computational geometry such as counting and intersection problems. With this paper, we contribute a review of our previously published method in the context of time windowed data structures, which is a relatively new concept of computational geometry. In particular, we highlight the relevance of our work for a growing domain of research. @inproceedings{bhn-dagoierd20, | |
A. Corbin, B. Niedermann, A. Nothnagel, R. Haas, and J.-H. Haunert. Combinatorial optimization applied to VLBI scheduling. Journal of Geodesy, 94(19):1-22, 2020.
| |
Due to the advent of powerful solvers, today linear programming has seen many applications in production and routing. In this publication, we present mixed-integer linear programming as applied to scheduling geodetic very-long-baseline interferometry (VLBI) observations. The approach uses combinatorial optimization and formulates the scheduling task as a mixed-integer linear program. Within this new method, the schedule is considered as an entity containing all possible observations of an observing session at the same time, leading to a global optimum. In our example, the optimum is found by maximizing the sky coverage score. The sky coverage score is computed by a hierarchical partitioning of the local sky above each telescope into a number of cells. Each cell including at least one observation adds a certain gain to the score. The method is computationally expensive and this publication may be ahead of its time for large networks and large numbers of VLBI observations. However, considering that developments of solvers for combinatorial optimization are progressing rapidly and that computers increase in performance, the usefulness of this approach may come up again in some distant future. Nevertheless, readers may be prompted to look into these optimization methods already today seeing that they are available also in the geodetic literature. The validity of the concept and the applicability of the logic are demonstrated by evaluating test schedules for five 1-h, single-baseline Intensive VLBI sessions. Compared to schedules that were produced with the scheduling software sked, the number of observations per session is increased on average by three observations and the simulated precision of UT1-UTC is improved in four out of five cases (6 µs average improvement in quadrature). Moreover, a simplified and thus much faster version of the mixed-integer linear program has been developed for modern VLBI Global Observing System telescopes. @article{cnnhh-coas20, |
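A toy version of the sky-coverage objective can be written down in a few lines of PuLP (the authors' actual program is far richer, with visibility, slewing, and timing constraints; the toolchain and all names here are illustrative assumptions): binary variables choose observations, a cell contributes its gain once at least one chosen observation falls into it, and a budget caps the schedule.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    def schedule(observations, cells, gain, max_obs):
        # observations: ids; cells: dict cell -> set of observation ids
        # covering it; gain: dict cell -> score once the cell is hit
        prob = LpProblem('vlbi_sky_coverage', LpMaximize)
        x = {o: LpVariable(f'x_{o}', cat='Binary') for o in observations}
        y = {c: LpVariable(f'y_{c}', cat='Binary') for c in cells}
        prob += lpSum(gain[c] * y[c] for c in cells)    # sky coverage score
        for c, obs in cells.items():
            prob += y[c] <= lpSum(x[o] for o in obs)    # cell scores only if hit
        prob += lpSum(x.values()) <= max_obs            # session budget
        prob.solve()
        return [o for o in observations if x[o].value() == 1]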
2019
B. Niedermann, and J.-H. Haunert. Focus+context map labeling with optimized clutter reduction. International Journal of Cartography, 5(2-3):158-177, 2019. Special issue of 29th International Cartographic Conference (ICC'19)
| |
Zooming is a basic operation that many digital maps support for exploring them interactively. Especially for maps on small-screen devices this is a helpful operation to uncover the user's region of interest possibly hidden by labels, e.g., points of interest represented by icons. However, by scaling the map larger the user loses the context. As a consequence, the user might need to repeatedly zoom in and out to explore the map step by step. We present an approach that reduces the necessity of zooming by providing the user with the possibility of displacing the labels of a circular focus region. To that end, we utilize techniques from focus+context maps implementing the displacement of the labels by fish-eye projections. The visual association between labels and their point features is established by connecting lines aggregated to bundles. Our approach particularly guarantees that labels move smoothly when the user continuously shifts the focus region, which reduces distracting flickering effects while exploring the map by panning the map view. Further, when the user stops moving the focus region, mathematical programming is applied to optimize positions of the displaced labels. In an evaluation on real-world data and synthetically generated data we show that our approach substantially increases the legibility of both the focus region and the displaced labels. @article{niedermann2019, | |
A. Bonerath, J.-H. Haunert, and B. Niedermann. Computing alpha-shapes for temporal range queries on point sets. In Proceedings of the 35th European Workshop on Computational Geometry (EuroCG'19). 2019. Preprint.
| |
The interactive exploration of data requires data structures that can be repeatedly queried to obtain simple visualizations of parts of the data. In this paper we consider the scenario that the data is a set of points each associated with a time stamp and that the result of each query is visualized by an α-shape, which generalizes the concept of convex hulls. Instead of computing each shape independently, we suggest and analyze a simple data structure that aggregates the α-shapes of all possible queries. Once the data structure is built, it particularly allows us to query single α-shapes without retrieving the actual (possibly large) point set and thus to rapidly produce small previews of the queried data. @inproceedings{bhn-castrqps-19, | |
B. Niedermann, and J.-H. Haunert. Anchored metro maps: combining schematic maps with geographic maps for multi-modal navigation. In Schematic Mapping Workshop 2019. 2019. Poster abstract.
| |
For conducting navigation tasks in public transportation systems, schematic maps (e.g., metro maps) are the first choice, as they reduce the amount of information to a minimum. On the other hand, for navigation tasks in street networks, classical geographic maps are preferable, as they depict the area as accurately as possible. In this work, we create synergies between both types of maps by laying the metro map over the street map. We call those maps anchored metro maps, as we visually attach the metro stations of the metro map to their counterparts on the street map. Currently, we are developing algorithms optimizing the matching between both maps. In future research we plan to use this approach to show that the combination of schematic and geographical maps leads to an improvement for certain navigation tasks. @inproceedings{nh-amm-19, | |
A. Bonerath, B. Niedermann, and J.-H. Haunert. Retrieving alpha-shapes and schematic polygonal approximations for sets of points within queried temporal ranges. In Proc. 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL '19), pages 249-258. 2019. trailer: https://youtu.be/mlnUDhbMSfQ
| |
The interactive exploration of data requires data structures that can be repeatedly queried to obtain simple visualizations of parts of the data. We consider the scenario that the data is a set of points each associated with a time stamp and that the result of each query is visualized by an α-shape, which generalizes the concept of convex hulls. Instead of computing each shape independently, we suggest and analyze a simple data structure that aggregates the α-shapes of all possible queries. Once the data structure is built, it particularly allows us to query single α-shapes without retrieving the actual (possibly large) point set and thus to rapidly produce small previews of the queried data. We discuss the data structure for the original α-shapes as well as for a schematized version of α-shapes, which further simplifies the visualization. We evaluate the data structure on real-world data. The experiments indicate linear memory consumption with respect to the number of points, which makes the data structure applicable in practice, although the size is quadratic for a theoretical worst-case example. @inproceedings{bhn-rasspa-19, | |
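Independently of the time-windowed aggregation, the α-shape of a static point set is easy to sketch from a Delaunay triangulation: keep exactly the triangles whose circumradius is at most 1/α; the unpaired edges of the kept triangles form the shape's boundary. A minimal version with scipy follows (illustrative only, not the paper's data structure).

    import numpy as np
    from scipy.spatial import Delaunay

    def alpha_shape_triangles(points, alpha):
        # points: (n, 2) array; returns the triangles of the alpha-complex
        tri = Delaunay(points)
        kept = []
        for simplex in tri.simplices:
            pa, pb, pc = points[simplex]
            a = np.linalg.norm(pb - pc)   # side lengths
            b = np.linalg.norm(pa - pc)
            c = np.linalg.norm(pa - pb)
            area = 0.5 * abs((pb[0] - pa[0]) * (pc[1] - pa[1])
                             - (pb[1] - pa[1]) * (pc[0] - pa[0]))
            # circumradius R = abc / (4 * area); keep if R <= 1/alpha
            if area > 0 and a * b * c / (4.0 * area) <= 1.0 / alpha:
                kept.append(simplex)
        return np.array(kept)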
S. Gedicke, B. Niedermann, and J.-H. Haunert. Multi-page Labeling of Small-screen Maps with a Graph-coloring Approach. In LBS 2019: 15th International Conference on Location Based Services, November 11-13, 2019, Vienna, AT. 2019.
| |
Annotating small-screen maps with additional content such as labels for points of interest is a highly challenging problem that requires new algorithmic solutions. A common labeling approach is to select a maximum-size subset of all labels such that no two labels constitute a graphical conflict and to display only the selected labels in the map. A disadvantage of this approach is that a user often has to zoom in and out repeatedly to access all points of interest in a certain region. Since this can be very cumbersome, we suggest an alternative approach that allows the scale of the map to be kept fixed. Our approach is to distribute all labels on multiple pages through which the user can navigate, for example, by swiping the pages from right to left. We in particular optimize the assignment of the labels to pages such that no page contains two conflicting labels, more important labels appear on the first pages, and sparsely labeled pages are avoided. Algorithmically, we reduce this problem to a weighted and constrained graph coloring problem based on a graph representing conflicts between labels such that an optimal coloring of the graph corresponds to a multi-page labeling. We propose a simple greedy heuristic that is fast enough to be deployed in web-applications. We evaluate the quality of the obtained labelings by comparing them with optimal solutions, which we obtain by means of integer linear programming formulations. In our evaluation on real-world data we particularly show that the proposed heuristic achieves near-optimal solutions with respect to the chosen objective function and that it substantially improves the legibility of the labels in comparison to the simple strategy of assigning the labels to pages solely based on the labels' weights. @inproceedings{gedicke2019a, | |
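The greedy idea amounts to first-fit coloring by decreasing weight, sketched below under the assumption that the pairwise conflicts are precomputed (the paper's model additionally avoids sparsely labeled pages; names are illustrative).

    def assign_pages(labels, weight, conflicts):
        # labels: ids; weight: dict id -> importance;
        # conflicts: dict id -> set of ids it graphically overlaps
        pages = []  # each page is a set of mutually conflict-free labels
        for lab in sorted(labels, key=weight.get, reverse=True):
            for page in pages:
                if not page & conflicts.get(lab, set()):
                    page.add(lab)  # first page without a conflict
                    break
            else:
                pages.append({lab})  # open a new page
        return pages

Processing labels by decreasing weight makes the important labels tend to land on the early pages, mirroring the stated objective.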
J. Oehrlein, B. Niedermann, and J.-H. Haunert. Analyzing the supply and detecting spatial patterns of urban green spaces via optimization. Journal of Photogrammetry, Remote Sensing and Geoinformation Science (PFG), 87(4):137-158, 2019.
| |
Green spaces in urban areas offer great possibilities of recreation, provided that they are easily accessible. Therefore, an ideal city should offer large green spaces close to where its residents live. Although there are several measures for the assessment of urban green spaces, the existing measures usually focus either on the total size of all green spaces or on their accessibility. Hence, in this paper, we present a new methodology for assessing green-space provision and accessibility in an integrated way. The core of our methodology is an algorithm based on linear programming that computes an optimal assignment between residential areas and green spaces. In a basic setting, it assigns green spaces of a prescribed size exclusively to each resident, such that an objective function that, in particular, considers the average distance between residents and assigned green spaces is optimized. We contribute a detailed presentation on how to engineer an assignment-based method, such that it yields plausible results (e.g., by considering distances in the road network) and becomes efficient enough for the analysis of large metropolitan areas (e.g., we were able to process an instance of Berlin with about 130,000 polygons representing green spaces, 18,000 polygons representing residential areas, and 6 million road segments). Furthermore, we show that the optimal assignments resulting from our method enable a subsequent analysis that reveals both interesting global properties of a city as well as spatial patterns. For example, our method allows us to identify neighborhoods with a shortage of green spaces, which will help spatial planners in their decision-making. @article{oehrlein2019-pfg, |
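The LP core, an unbalanced transportation problem, can be sketched with scipy (a simplified illustration: real runs use network distances and the paper's richer objective; all names are assumptions).

    import numpy as np
    from scipy.optimize import linprog

    def assign_green_space(distances, demand, capacity):
        # distances: (m, n) matrix between m residential areas and n green
        # spaces; demand[i]: m^2 of green space prescribed for area i;
        # capacity[j]: m^2 available in green space j
        m, n = distances.shape
        c = distances.ravel()              # minimize distance-weighted flow
        a_eq = np.zeros((m, m * n))        # each area gets exactly its demand
        for i in range(m):
            a_eq[i, i * n:(i + 1) * n] = 1.0
        a_ub = np.zeros((n, m * n))        # no green space is over-assigned
        for j in range(n):
            a_ub[j, j::n] = 1.0
        res = linprog(c, A_ub=a_ub, b_ub=capacity, A_eq=a_eq, b_eq=demand,
                      bounds=(0, None))
        return res.x.reshape(m, n)         # assignment in m^2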
2018
B. Niedermann, and J.-H. Haunert. An algorithmic framework for labeling network maps. Algorithmica, 80(5):1493-1533, 2018.
| |
Drawing network maps automatically comprises two challenging steps, namely laying out the map and placing non-overlapping labels. In this paper we tackle the problem of labeling an already existing network map considering the application of metro maps. We present a flexible and versatile labeling model. Despite its simplicity, we prove that it is NP-complete to label a single line of the network. For a restricted variant of that model, we introduce an efficient algorithm that optimally labels a single line. Based on that algorithm, we present a general and sophisticated workflow for multiple metro lines, which is experimentally evaluated on real-world metro maps. @article{HaunertN15a, | |
B. Niedermann, J. Oehrlein, S. Lautenbach, and J.-H. Haunert. A network flow model for the analysis of green spaces in urban areas. In volume 114 of Leibniz International Proceedings in Informatics (LIPIcs). Proc. 10th International Conference on Geographic Information Science (GIScience '18), pages 13:1-13:16. 2018.
| |
Green spaces in urban areas offer great possibilities of recreation, provided that they are easily accessible. Therefore, an ideal city should offer large green spaces close to where its residents live. Although there are several measures for the assessment of urban green spaces, the existing measures usually focus either on the total size of green spaces or on their accessibility. Hence, in this paper, we present a new methodology for assessing green-space provision and accessibility in an integrated way. The core of our methodology is an algorithm based on linear programming that computes an optimal assignment between residential areas and green spaces. In a basic setting, it assigns a green space of a prescribed size exclusively to each resident such that the average distance between residents and assigned green spaces is minimized. We contribute a detailed presentation on how to engineer an assignment-based method such that it yields reasonable results (e.g., by considering distances in the road network) and becomes efficient enough for the analysis of large metropolitan areas (e.g., we were able to process an instance of Berlin with about 130000 polygons representing green spaces, 18000 polygons representing residential areas, and 6 million road segments). Furthermore, we show that the optimal assignments resulting from our method enable a subsequent analysis that reveals both interesting global properties of a city as well as spatial patterns. For example, our method allows us to identify neighborhoods with a shortage of green spaces, which will help spatial planners in their decision making. @inproceedings{NiedermannEtAl2018, | |
J. Oehrlein, A. Förster, D. Schunck, Y. Dehbi, R. Roscher, and J.-H. Haunert. Inferring routing preferences of bicyclists from sparse sets of trajectories. In volume IV-4/W7 of ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Proc. 3rd International Conference on Smart Data and Smart Cities, pages 107-114. 2018.
| |
Understanding the criteria that bicyclists apply when they choose their routes is crucial for planning new bicycle paths or recommending routes to bicyclists. This is becoming more and more important as city councils are becoming increasingly aware of limitations of the transport infrastructure and problems related to automobile traffic. Since different groups of cyclists have different preferences, however, searching for a single set of criteria is prone to failure. Therefore, in this paper, we present a new approach to classify trajectories recorded and shared by bicyclists into different groups and, for each group, to identify favored and unfavored road types. Based on these results we show how to assign weights to the edges of a graph representing the road network such that minimum-weight paths in the graph, which can be computed with standard shortest-path algorithms, correspond to adequate routes. Our method combines known algorithms for machine learning and the analysis of trajectories in an innovative way and, thereby, constitutes a new comprehensive solution for the problem of deriving routing preferences from initially unclassified trajectories. An important property of our method is that it yields reasonable results even if the given set of trajectories is sparse in the sense that it does not cover all segments of the cycle network. @inproceedings{OehrleinEtAl2018, | |
Y. Dehbi, N. Gojayeva, A. R. Pickert, J.-H. Haunert, and L. Plümer. Room shapes and functional uses predicted from sparse data. In volume IV-4:33-40 of ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Proc. ISPRS Technical Commission IV Symposium. 2018.
| |
Many researchers have used expensive 3D laser scanning techniques to derive indoor models. Few papers describe the derivation of indoor models based on sparse data such as footprints. They assume that floorplans and rooms are rather rectangular and that information on functional use is given. This paper addresses the automatic learning of a classifier which predicts the functional use of housing rooms. The classification is based on features which are widely available such as room areas and orientation. These features are extracted from an extensive database of annotated rooms. A Bayesian classifier is applied which delivers probabilities of competing class hypotheses. In a second step, functional uses are used to predict the shape of the rooms in a further classification. @inproceedings{DehbiEtAl2018, |
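A Bayesian classifier on such features can be sketched with scikit-learn's Gaussian naive Bayes; the feature columns and class labels below are invented for illustration and are not the paper's training data.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    # illustrative training data: [area_m2, aspect_ratio] per annotated room
    x_train = np.array([[18.0, 1.2], [4.5, 2.5], [12.0, 1.1], [3.0, 3.0]])
    y_train = np.array(['living', 'corridor', 'bedroom', 'corridor'])

    clf = GaussianNB().fit(x_train, y_train)

    # probabilities of competing class hypotheses for an unseen room
    probs = clf.predict_proba(np.array([[10.0, 1.3]]))
    for label, p in zip(clf.classes_, probs[0]):
        print(f'{label}: {p:.2f}')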
2017
J. Sultan, G. Ben-Haim, and J.-H. Haunert. Extracting spatial patterns in bicycle routes from crowdsourced data. Transactions in GIS, 21(6):1321-1340, 2017.
| |
Much is done nowadays to provide cyclists with safe and sustainable road infrastructure. Its development requires the investigation of road usage and interactions between traffic commuters. This article is focused on exploiting crowdsourced user-generated data, namely GPS trajectories collected by cyclists and road network infrastructure generated by citizens, to extract and analyze spatial patterns and road-type use of cyclists in urban environments. Since user-generated data shows data deficiencies, we introduce tailored spatial data-handling processes for which several algorithms are developed and implemented. These include data filtering and segmentation, map-matching and spatial arrangement of GPS trajectories with the road network. A spatial analysis and a characterization of road-type use are then carried out to investigate and identify specific spatial patterns of cycle routes. The proposed analysis was applied to the cities of Amsterdam (The Netherlands) and Osnabrück (Germany), proving its feasibility and reliability in mining road-type use and extracting pattern information and preferences. This information can help users who wish to explore friendlier and more interesting cycle patterns, based on collective usage, as well as city planners and transportation experts wishing to pinpoint areas most in need of further development and planning. @article{SultanEtAl17, | |
J.-H. Haunert, and A. Wolff. Beyond maximum independent set: an extended integer linear program for point feature labeling. ISPRS International Journal of Geo-Information, 6(11), 2017.
| |
Map labeling is a classical problem of cartography that has frequently been approached by combinatorial optimization. Given a set of features in a map and for each feature a set of label candidates, a common problem is to select an independent set of labels (that is, a labeling without label-label intersections) that contains as many labels as possible and at most one label for each feature. To obtain solutions of high cartographic quality, the labels can be weighted and one can maximize the total weight (rather than the number) of the selected labels. We argue, however, that when maximizing the weight of the labeling, the influences of labels on other labels are insufficiently addressed. Furthermore, in a maximum-weight labeling, the labels tend to be densely packed and thus the map background can be occluded too much. We propose extensions of an existing model to overcome these limitations. Since even without our extensions the problem is NP-hard, we cannot hope for an efficient exact algorithm for the problem. Therefore, we present a formalization of our model as an integer linear program (ILP). This allows us to compute optimal solutions in reasonable time, which we demonstrate both for randomly generated point sets and an existing data set of cities. Moreover, a relaxation of our ILP allows for a simple and efficient heuristic, which yielded near-optimal solutions for our instances. @article{HaunertW17, | |
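The classical base model that the paper extends can be stated as a small ILP; the sketch below uses PuLP (an assumed toolchain) and omits the paper's extension terms for label-label influence and background occlusion.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    def label_ilp(candidates, weight, feature_of, conflicts):
        # candidates: label-candidate ids; weight: id -> score;
        # feature_of: id -> point feature; conflicts: pairs that overlap
        prob = LpProblem('point_labeling', LpMaximize)
        x = {c: LpVariable(f'x_{c}', cat='Binary') for c in candidates}
        prob += lpSum(weight[c] * x[c] for c in candidates)
        for a, b in conflicts:
            prob += x[a] + x[b] <= 1                  # no two overlapping labels
        by_feature = {}
        for c in candidates:
            by_feature.setdefault(feature_of[c], []).append(c)
        for cands in by_feature.values():
            prob += lpSum(x[c] for c in cands) <= 1   # one label per feature
        prob.solve()
        return [c for c in candidates if x[c].value() == 1]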
J. Oehrlein, and J.-H. Haunert. A cutting-plane method for contiguity-constrained spatial aggregation. Journal of Spatial Information Science, 15(1):89-120, 2017.
| |
Aggregating areas into larger regions is a common problem in spatial planning, geographic information science, and cartography. The aim can be to group administrative areal units into electoral districts or sales territories, in which case the problem is known as districting. In other cases, area aggregation is seen as a generalization or visualization task, which aims to reveal spatial patterns in geographic data. Despite these different motivations, the heart of the problem is the same: given a planar partition, one wants to aggregate several elements of this partition to regions. These often must have or exceed a particular size, be homogeneous with respect to some attribute, contiguous, and geometrically compact. Even simple problem variants are known to be NP-hard, meaning that there is no reasonable hope for an efficient exact algorithm. Nevertheless, the problem has been attacked with heuristic and exact methods. In this article we present a new exact method for area aggregation and compare it with a state-of-the-art method for the same problem. Our method results in a substantial decrease of the running time and, in particular, allowed us to solve certain instances that the existing method could not solve within five days. Both our new method and the existing method use integer linear programming, which allows existing problem solvers to be applied. Unlike the existing method, however, our method employs a cutting-plane method, which is an advanced constraint-handling approach. We discuss this approach in detail and present its application to the aggregation of areas in choropleth maps. @article{DBLP:journals/josis/OehrleinH17, | |
J. Oehrlein, B. Niedermann, and J.-H. Haunert. Inferring the parametric weight of a bicriteria routing model from trajectories. In Proc. 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS '17), pages 59:1-59:4. 2017.
| |
Finding a shortest path between two nodes in a graph is a well-studied problem whose applicability in practice crucially relies on the choice of the applied cost function. Especially, for the key application of vehicle routing the cost function may consist of more than one optimization criterion (e.g., distance, travel time, etc.). Finding a good balance between these criteria is a challenging and essential task. We present an approach that learns that balance from existing GPS-tracks. The core of our approach is to find a balance factor alpha for a given set of GPS-tracks such that the tracks can be decomposed into a minimum number of optimal paths with respect to alpha. In an experimental evaluation on real-world GPS-tracks of bicyclists we show that our approach yields an appropriate balance factor in a reasonable amount of time. @inproceedings{Oehrlein:2017:IPW:3139958.3140033, | |
Y. Dehbi, J.-H. Haunert, and L. Plümer. Stochastic and geometric reasoning for indoor building models with electric installations - bridging the gap between GIS and BIM. In volume IV-4/W5 of ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences. Proc. 12th 3D Geoinfo Conference, pages 33-39. 2017.
| |
3D city and building models according to CityGML encode the geometry, represent the structure and model semantically relevant building parts such as doors, windows and balconies. Building information models support the building design, construction and the facility management. In contrast to CityGML, they also include objects which cannot be observed from the outside. Three-dimensional indoor models constitute a missing link between both worlds. Their derivation, however, is expensive. The automatic semantic interpretation of 3D point clouds of indoor environments is a methodically demanding task. The data acquisition is costly and difficult. Laser scanners and image-based methods require access to every room. Based on an approach which does not require an additional geometry acquisition of building indoors, we propose a method for filling the gap between 3D building models and building information models. Based on sparse observations such as the building footprint and room areas, 3D indoor models are generated using combinatorial and stochastic reasoning. The derived models are extended by structures that are not observable a priori, such as electric installations. Gaussian mixtures, linear and bi-linear constraints are used to represent the background knowledge and structural regularities. The derivation of hypothesised models is performed by stochastic reasoning using graphical models, Gauss-Markov models and MAP-estimators. @inproceedings{isprs-annals-IV-4-W5-33-2017, | |
J. Oehrlein, T. C. van Dijk, and J.-H. Haunert. Gleichwertige Ziele in dynamischen Navigationskarten. In volume 26. DGPF Tagungsband, pages 138-146. 2017.
| |
Generating clear and legible maps requires methods of automatic generalization. For display on navigation maps, for example, the objects of a transportation network are selected according to their importance for the network. If one restricts attention to navigation from a fixed starting point, many objects lose their relevance for the map, and omitting them makes the map more legible. Van Dijk et al. (2016) exploit this fact in an algorithm for location-based generalization of road networks, which, depending on a fixed chosen location, makes a selection by aggregating destinations recognized as equivalent. Lifting the fixation of the location opens up new possibilities for mobile devices. This contribution addresses the problems that this dynamic setting introduces into the algorithm and offers first solution approaches. @inproceedings{OvDH17, | |
D. Peng, A. Wolff, and J.-H. Haunert. Using the A* algorithm to find optimal sequences for area aggregation. In Michael P. Peterson, editor. Advances in Cartography and GIScience - Selections from the International Cartographic Conference 2017, pages 389-404. Springer International Publishing, 2017.
| |
Given two land-cover maps of different scales, we wish to find a sequence of small incremental changes that gradually transforms one map into the other. We assume that the two input maps consist of polygons that constitute a planar subdivision and belong to different land-cover classes. Every polygon in the small-scale map is the union of a set of polygons in the large-scale map. In each step of the sequence that we compute, the smallest area in the current map is merged with one of its neighbors. We do not select that neighbor according to a prescribed rule but define the whole sequence of pairwise merges at once, based on global optimization. An important requirement for such a method is a formalization of the problem in terms of optimization objectives and constraints, which we present together with a solution that is based on the so-called A* algorithm. This algorithm allows us to limit the exploration of the search space such that we can compute solutions of high quality in reasonable time. We tested the method with a dataset of the official German topographic database ATKIS and discuss our results. @inproceedings{10.1007/978-3-319-57336-6_27, |
2016
T. C. van Dijk, J.-H. Haunert, and J. Oehrlein. Location-dependent generalization of road networks based on equivalent destinations. Computer Graphics Forum, 35(3):451-460, 2016.
| |
Suppose a user located at a certain vertex in a road network wants to plan a route using a wayfinding map. The user's exact destination may be irrelevant for planning most of the route, because many destinations will be equivalent in the sense that they allow the user to choose almost the same paths. We propose a method to find such groups of destinations automatically and to contract the resulting clusters in a detailed map to achieve a simplified visualization. We model the problem as a clustering problem in rooted, edge-weighted trees. Two vertices are allowed to be in the same cluster if and only if they share at least a given fraction of their path to the root. We analyze some properties of these clusterings and give a linear-time algorithm to compute the minimum-cardinality clustering. This algorithm may have various other applications in network visualization and graph drawing, but in this paper we apply it specifically to focus-and-context map generalization. When contracting shortest-path trees in a geographic network, the computed clustering additionally provides a constant-factor bound on the detour that results from routing using the generalized network instead of the full network. This is a desirable property for wayfinding maps. @article{vanDijkEtAl2016, | |
J.-H. Haunert, and A. Wolff. Räumliche Analyse durch kombinatorische Optimierung. In W. Freeden, and R. Rummel, editors. Handbuch der Geodäsie: 5 Bände, pages 1-39. Springer Berlin Heidelberg, 2016.
| |
In this contribution, we use a few selected examples from spatial analysis to illustrate fundamental design techniques for algorithms and tools of combinatorial optimization. In addition, we want to convey a minimum of theoretical background. We thereby hope to enable readers to solve spatial problems with methods of computer science deliberately and thus successfully. We consider it particularly important to model new problems carefully in mathematical terms and to check one's model with exact algorithms, at least on small instances, before turning to fast heuristics in order to solve large instances. @incollection{Haunert2016, | |
J.-H. Haunert, and W. Meulemans. Partitioning polygons via graph augmentation. In volume 9927 of Lecture Notes in Computer Science. Proc. 9th International Conference on Geographic Information Science (GIScience 2016), pages 18-33. Springer-Verlag, 2016.
| |
@inproceedings{HaunertMeulemans2016, | |
D. Peng, A. Wolff, and J.-H. Haunert. Continuous generalization of administrative boundaries based on compatible triangulations. In Lecture Notes in Geoinformation and Cartography. Proc. 19th AGILE International Conference on Geographic Information Science, pages 399-415. Springer-Verlag, 2016.
| |
@inproceedings{PengEtAl2016, | |
J.-H. Haunert, and A. Wolff. Beyond maximum independent set: an extended model for point-feature label placement. In volume XLI-B2 of International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Proc. XXIII ISPRS Congress, pages 109-114. 2016.
| |
@inproceedings{haunertwolff2016, |
2015
A. Gemsa, J.-H. Haunert, and M. Nöllenburg. Multirow boundary-labeling algorithms for panorama images. ACM Transactions on Spatial Algorithms and Systems, 1(1):1-30, 2015.
| |
Boundary labeling deals with placing annotations for objects in an image on the boundary of that image. This problem occurs frequently in situations in which placing labels directly in the image is impossible or produces too much visual clutter. Examples are annotating maps, photos, or technical/medical illustrations. Previous algorithmic results for boundary labeling consider a single layer of labels along some or all sides of a rectangular image. If, however, the number of labels is large or the labels are too long, multiple layers of labels are needed. In this article, we study boundary labeling for panorama images, where n points in a rectangle R are to be annotated by disjoint unit-height rectangular labels placed above R in K different rows (or layers). Each point is connected to its label by a vertical leader that does not intersect any other label. We present polynomial time algorithms based on dynamic programming that either minimize the number of rows to place all n labels or maximize the number (or total weight) of labels that can be placed in K rows for a given integer K. For weighted labels, the problem is shown to be (weakly) NP-hard; in this case, we give a pseudo-polynomial algorithm to maximize the weight of the selected labels. We have implemented our algorithms; the experimental results show that solutions for realistically sized instances are computed instantaneously. We have also investigated two-sided panorama labeling, for which the labels may be placed above or below the panorama image. In this model, all of the aforementioned problems are NP-hard. For solving them, we propose mixed-integer linear program formulations. @article{GemsaHN15, | |
J.-H. Haunert, and B. Niedermann. An algorithmic framework for labeling network maps. In Proc. 21st International Conference on Computing and Combinatorics (COCOON), pages 689-700. 2015.
| |
Drawing network maps automatically comprises two challenging steps, namely laying out the map and placing non-overlapping labels. In this paper we tackle the problem of labeling an already existing network map considering the application of metro maps. We present a flexible and versatile labeling model. Despite its simplicity, we prove that it is NP-complete to label a single line of the network. For a restricted variant of that model, we introduce an efficient algorithm that optimally labels a single line. Based on that algorithm, we present a general and sophisticated workflow for multiple metro lines, which is experimentally evaluated on real-world metro maps. @inproceedings{HaunertN15, | |
N. Schwartges, B. Morgan, J.-H. Haunert, and A. Wolff. Labeling streets along a route in interactive 3D maps using billboards. In Proc. 18th AGILE International Conference on Geographic Information Science, pages 269-287. 2015.
| |
@inproceedings{SchwartgesEtAl2015, | |
J. Sultan, G. Ben-Haim, J.-H. Haunert, and S. Dalyot. User-generated data for analyzing road-type use of cyclists. In Proc. Joint Workshop of FIG Commission 3 & Commission 7 on Crowdsourcing of Land Information. 2015. FIG Article of the Month - December 2015
| |
@inproceedings{label5321, |
2014
M. Sester, J. Jokar Arsanjani, R. Klammer, D. Burghardt, and J.-H. Haunert. Integrating and generalising volunteered geographic information. In D. Burghardt, C. Duchene, and W. Mackaness, editors, Lecture Notes in Geoinformation and Cartography. Abstracting Geographic Information in a Data Rich World. Springer-Verlag, Berlin, Germany, 2014.
| |
The availability of spatial data on the web has greatly increased through the availability of user-generated community data and geosensor networks. The integration of such multi-source data is providing promising opportunities, as integrated information is richer than can be found in only one data source, but also poses new challenges due to the heterogeneity of the data, the differences in quality, and the tag-based semantic modelling. The chapter describes approaches for the integration of official and informal sources, and discusses the impact of integrating user-generated data on automated generalisation and visualisation. @inproceedings{SesterEtAl2014, | |
T. C. van Dijk, and J.-H. Haunert. Interactive focus maps using least-squares optimization. International Journal of Geographical Information Science, 28(10), 2014.
| |
We present a new algorithm that enlarges a focus region in a given network map without removing non-focus (i.e., context) network parts from the map or changing the map's size. In cartography, this problem is usually tackled with fish-eye projections, which, however, introduce severe distortion. Our new algorithm minimizes distortion and, with respect to this objective, produces results of similar quality compared to an existing algorithm. In contrast to the existing algorithm, the new algorithm achieves real-time performance that allows its application in interactive systems. We target applications where a user sets a focus by brushing parts of the network or the focus region is defined as the neighborhood of a moving user. A crucial feature of the algorithm is its capability of avoiding unwanted edge crossings. Basically, we first solve a least-squares optimization problem without constraints for avoiding edge crossings. The solution we find is then used to identify a small set of constraints needed for a crossing-free solution and, beyond this, allows us to start an animation enlarging the focus region before the final, crossing-free solution is found. Moreover, memorizing the non-crossing constraints from an initial run of the algorithm allows us to achieve a better runtime on any further run - assuming that the focus region does not move too much between two consecutive runs. As we show with experiments on real-world data, this enables response times well below 1 second. @article{VanDijkHaunert, | |
T. C. van Dijk, A. van Goethem, J.-H. Haunert, W. Meulemans, and B. Speckmann. An automated method for circular-arc metro maps. In Abstracts of the Schematic Mapping Workshop 2014. 2014.
| |
@inproceedings{vanDijkEtAl2014, | |
M. Chimani, T. C. van Dijk, and J.-H. Haunert. How to eat a graph: computing selection sequences for the continuous generalization of road networks. In Proc. 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS '14), pages 243-252. 2014.
| |
@inproceedings{ChimaniEtAl2014, | |
T. C. van Dijk, A. van Goethem, J.-H. Haunert, W. Meulemans, and B. Speckmann. Map schematization with circular arcs. In volume 8728 of Lecture Notes in Computer Science. Proc. 8th International Conference on Geographic Information Science (GIScience'14), pages 1-17. Springer-Verlag, Berlin, Germany, 2014.
| |
@inproceedings{DijkEtAl2014, | |
J.-H. Haunert, and T. Hermes. Labeling circular focus regions based on a tractable case of maximum weight independent set of rectangles. In Proc. 2nd ACM SIGSPATIAL Workshop on MapInteraction. 2014.
| |
@inproceedings{HaunertHermes2014, | |
N. Schwartges, J.-H. Haunert, D. Zwiebler, and A. Wolff. Point labeling with sliding labels in interactive maps. In Lecture Notes in Geoinformation and Cartography. Proc. 17th AGILE International Conference on Geographic Information Science, pages 295-310. Springer-Verlag, Berlin, Germany, 2014.
| |
@inproceedings{SchwartgesEtAl2014, | |
N. Schwartges, A. Wolff, and J.-H. Haunert. Labeling streets in interactive maps using embedded labels. In Proc. 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS '14), pages 517-520. 2014.
| |
@inproceedings{SchwartgesEtAl2014b, |
2013
M. Fink, J.-H. Haunert, J. Spoerhase, and A. Wolff. Selecting the aspect ratio of a scatter plot based on its Delaunay triangulation. In Proc. 29th European Workshop on Computational Geometry (EuroCG'13). 2013.
| |
Scatter plots are diagrams that visualize sets of points in two dimensions. They allow users to detect correlations and clusters in the data. Whether a user can accomplish these tasks highly depends on the aspect ratio selected for the plot, i.e., the ratio between the horizontal and the vertical extent of the diagram. We argue that an aspect ratio is good if the Delaunay triangulation of the scatter plot has some nice geometric property, e.g., a large minimum angle or a small total edge length. In order to find an optimum aspect ratio according to a given criterion, we present an algorithm that efficiently maintains the Delaunay triangulation of the point set when traversing all aspect ratios. @inproceedings{FinkEtAl2013, | |
M. Fink, J.-H. Haunert, J. Spoerhase, and A. Wolff. Selecting the aspect ratio of a scatter plot based on its Delaunay triangulation. IEEE Transactions on Visualization and Computer Graphics, 19(12):2326-2335, 2013.
| |
Scatter plots are diagrams that visualize two-dimensional data as sets of points in the plane. They allow users to detect correlations and clusters in the data. Whether or not a user can accomplish these tasks highly depends on the aspect ratio selected for the plot, i.e., the ratio between the horizontal and the vertical extent of the diagram. We argue that an aspect ratio is good if the Delaunay triangulation of the scatter plot at this aspect ratio has some nice geometric property, e.g., a large minimum angle or a small total edge length. More precisely, we consider the following optimization problem. Given a set Q of points in the plane, find a scale factor s such that scaling the x-coordinates of the points in Q by s and the y-coordinates by 1/s yields a point set P(s) that optimizes a property of the Delaunay triangulation of P(s), over all choices of s. We present an algorithm that solves this problem efficiently and demonstrate its usefulness on real-world instances. Moreover, we discuss an empirical test in which we asked 64 participants to choose the aspect ratios of 18 scatter plots. We tested six different quality measures that our algorithm can optimize. In conclusion, minimizing the total edge length and minimizing what we call the 'uncompactness' of the triangles of the Delaunay triangulation yielded the aspect ratios that were most similar to those chosen by the participants in the test. @article{FinkEtAl2013b, | |
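As a rough illustration of the optimization problem (not of the paper's efficient triangulation-maintenance algorithm), one can simply rebuild the Delaunay triangulation for a grid of candidate scale factors and keep the factor that minimizes, say, the total edge length. A brute-force sketch under that assumption:

```python
# Brute-force sketch: scale x by s and y by 1/s, rebuild the Delaunay
# triangulation, and keep the s minimising the total edge length. The
# paper instead maintains the triangulation efficiently over all s.
import numpy as np
from scipy.spatial import Delaunay

def total_edge_length(points):
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    return sum(np.linalg.norm(points[a] - points[b]) for a, b in edges)

def best_aspect_ratio(points, candidates=np.geomspace(0.2, 5.0, 200)):
    best_s, best_cost = None, float("inf")
    for s in candidates:
        cost = total_edge_length(points * np.array([s, 1.0 / s]))
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s

rng = np.random.default_rng(0)
pts = rng.random((50, 2)) * np.array([4.0, 1.0])  # elongated point cloud
print(best_aspect_ratio(pts))  # a scale factor that "squares up" the plot
```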
J.-H. Haunert. An algorithmic approach to geographic information science. Universität Würzburg. Fakultät für Mathematik und Informatik, 2013. Habilitationsschrift (kumulativ).
| |
@phdthesis{Haunert2013b, | |
D. Peng, J.-H. Haunert, A. Wolff, and C. Hurter. Morphing polylines based on least-squares adjustment. In Proc. 16th ICA Generalisation Workshop. 2013.
| |
@inproceedings{Peng2013, | |
Nadine Schwartges, Dennis Allerkamp, J.-H. Haunert, and A. Wolff. Optimizing active ranges for point selection in dynamic maps. In Proc. 16th ICA Generalisation Workshop. 2013.
| |
@inproceedings{Schwartges2013, | |
T. C. van Dijk, A. van Goethem, J.-H. Haunert, W. Meulemans, and B. Speckmann. Accentuating focus maps via partial schematization. In Proc. 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS '13), pages 418-421. 2013.
| |
We present an algorithm for schematized focus maps. Focus maps integrate a highly detailed, enlarged focus region continuously in a given base map. Recent methods integrate both with such low distortion that the focus region becomes hard to identify. We combine focus maps with partial schematization to display distortion of the context and to emphasize the focus region. Schematization visually conveys geographical inaccuracy, while not increasing map complexity. We extend the focus-map algorithm to incorporate geometric proximity relationships and show how to combine focus maps with schematization in order to cater to different use cases. @inproceedings{vanDijkEtAl2013, | |
T. C. van Dijk, and J.-H. Haunert. A probabilistic model for road selection in mobile maps. In Proc. 12th International Symposium on Web and Wireless Geographical Information Systems (W2GIS'13), pages 214-222. 2013.
| |
Mobile devices provide an interesting context for map drawing. This paper presents a novel road-selection algorithm based on PageRank, the algorithm famously used by Google to rank web pages by importance. Underlying the PageRank calculation is a probabilistic model of user behavior. We provide suitable generalizations of this model to road networks. Our implementation of the proposed algorithm handles a sizable map in approximately a tenth of a second on a desktop PC. Therefore, our methods should be feasible on modern mobile devices. @inproceedings{VanDijkHaunert2013, | |
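The core idea translates into a few lines: treat road segments as graph nodes, let a hypothetical random driver move between adjacent segments, and rank segments by their stationary visit probability. The power-iteration sketch below uses an invented toy network and uniform transition probabilities; the paper's user-behavior model assigns these probabilities more carefully.

```python
# A plain power-iteration PageRank over a toy road-segment graph; the
# segment names and adjacencies below are invented for illustration.
def pagerank(adj, damping=0.85, iters=100):
    nodes = list(adj)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / len(nodes) for v in nodes}
        for v, out in adj.items():
            if not out:  # dangling segment: spread its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / len(nodes)
            else:
                for w in out:
                    new[w] += damping * rank[v] / len(out)
        rank = new
    return rank

roads = {  # segment -> segments reachable from it
    "main_st": ["station_rd", "park_ave"],
    "station_rd": ["main_st"],
    "park_ave": ["main_st", "mill_ln"],
    "mill_ln": ["park_ave"],
}
scores = pagerank(roads)
print(sorted(scores, key=scores.get, reverse=True)[:3])  # top segments
```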
T. C. van Dijk, K. Fleszar, J.-H. Haunert, and J. Spoerhase. Road segment selection with strokes and stability. In Proc. 1st ACM SIGSPATIAL Workshop on MapInteraction. 2013.
| |
In order to visualize a road network without producing visual clutter, a subset of all road segments needs to be selected. Many algorithms for road segment selection are based on a relevance score for edges in a network (for example betweenness centrality) and proceed by taking a greedy selection based on these weights. This can give unsatisfactory results. In order to improve readability, we introduce a stroke-based constraint and provide an efficient dynamic program that makes an optimal selection given this constraint. Next, we consider the computation of animated road selections from changing edge weights (for example a focus area that follows a moving user). Handling each time step of the animation individually can lead to distracting flickering effects. Here we introduce an optimization objective to achieve a more stable selection and provide a polynomial-time algorithm for solving it. While separately solvable in polynomial time, we show that the combination of the stroke constraints and stability optimization is NP-hard. @inproceedings{vanDijkHaunert2013b, |
2012
M. Fink, J.-H. Haunert, T. Mchedlidze, J. Spoerhase, and A. Wolff. Drawing graphs with vertices at specified positions and crossings at large angles. In volume 7157 of Lecture Notes in Computer Science. Proc. Workshop on Algorithms and Computation (WALCOM'12), pages 186-197. Springer-Verlag, Berlin, Germany, 2012.
| |
In point-set-embeddability (PSE) problems one is given not just a graph that is to be drawn, but also a set of points in the plane that specify where the vertices of the graph can be placed. The problem class was introduced by Gritzmann et al. [3] twenty years ago. In their work and most other works on PSE problems, however, planarity of the output drawing was an essential requirement. Recent experiments on the readability of drawings [4] showed that polyline drawings with angles at edge crossings close to 90 degrees and a small number of bends per edge are just as readable as planar drawings. Motivated by these findings, Didimo et al. [2] recently introduced RAC drawings where pairs of crossing edges must form a right angle and, more generally, xAC drawings (for x in (0, 90]) where the crossing angle must be at least x. As usual, edges may not overlap and may not go through vertices. We investigate the intersection of PSE and RAC/xAC. @inproceedings{Fink2012, | |
M. Fink, J.-H. Haunert, A. Schulz, J. Spoerhase, and A. Wolff. Algorithms for labeling focus regions. IEEE Transactions on Visualization and Computer Graphics, 18(12):2583-2592, 2012.
| |
In this paper, we investigate the problem of labeling point sites in focus regions of maps or diagrams. This problem occurs, for example, when the user of a mapping service wants to see the names of restaurants or other POIs in a crowded downtown area but keep the overview over a larger area. Our approach is to place the labels at the boundary of the focus region and connect each site with its label by a linear connection, which is called a leader. In this way, we move labels from the focus region to the less valuable context region surrounding it. In order to make the leader layout well readable, we present algorithms that rule out crossings between leaders and optimize other characteristics such as total leader length and distance between labels. This yields a new variant of the boundary labeling problem, which has been studied in the literature. Other than in traditional boundary labeling, where leaders are usually schematized polylines, we focus on leaders that are either straight-line segments or Bézier curves. Further, we present algorithms that, given the sites, find a position of the focus region that optimizes the above characteristics. We also consider a variant of the problem where we have more sites than space for labels. In this situation, we assume that the sites are prioritized by the user. Alternatively, we take a new facility-location perspective which yields a clustering of the sites. We label one representative of each cluster. If the user wishes, we apply our approach to the sites within a cluster, giving details on demand. @article{FinkEtAl2012, | |
J.-H. Haunert. A symmetry detector for map generalization and urban-space analysis. ISPRS Journal of Photogrammetry and Remote Sensing, 74:66-77, 2012.
| |
This article presents an algorithmic approach to the problem of finding symmetries in building footprints, which is motivated by map generalization tasks such as symmetry-preserving building simplification and symmetry-aware grouping and aggregation. Moreover, symmetries in building footprints may be used for landmark selection and building classification. The presented method builds on existing methods for symmetry detection in vector data that use algorithms for string matching. It detects both mirror symmetries and repetitions of geometric structures. In addition to the existing vector-based methods, the new method finds partial symmetries in polygons while allowing for small geometric errors and, based on a least-squares approach, computes optimally adjusted mirror axes and assesses their quality. Finally, the problem of grouping symmetry relations is addressed with an algorithm that finds mirror axes that are almost collinear. The presented approach was tested on a large building dataset of the metropolitan Boston area and its results were compared with results that were manually generated in an empirical test. The symmetry relations that the participants of the test considered most important were found by the algorithm. Future work will deal with the integration of information on symmetry relations into algorithms for map generalization. @article{Haunert2012, | |
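The string-matching reduction at the heart of this line of work can be sketched briefly: encode the polygon boundary as a cyclic sequence alternating edge lengths and turn angles; the polygon has a mirror symmetry exactly when the reversed sequence is a cyclic rotation of the original. The toy below does exact matching on rounded values and uses absolute turn angles (which can misreport some non-convex shapes), so it omits the paper's key contributions: partial symmetries, tolerance for geometric errors, and least-squares axis adjustment.

```python
# Toy mirror-symmetry test via the classic string-matching reduction.
import math

def encode(polygon, digits=6):
    # cyclic sequence alternating edge lengths and absolute turn angles
    n, seq = len(polygon), []
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        cx, cy = polygon[(i + 2) % n]
        seq.append(round(math.hypot(bx - ax, by - ay), digits))
        turn = math.atan2(cy - by, cx - bx) - math.atan2(by - ay, bx - ax)
        turn = math.atan2(math.sin(turn), math.cos(turn))  # wrap to (-pi, pi]
        seq.append(round(abs(turn), digits))
    return seq

def has_mirror_symmetry(polygon):
    seq = encode(polygon)
    rev = list(reversed(seq))
    # mirror symmetry <=> the reversal is a cyclic rotation of the sequence
    return any(rev == seq[i:] + seq[:i] for i in range(len(seq)))

print(has_mirror_symmetry([(0, 0), (2, 0), (2, 1), (0, 1)]))  # True
print(has_mirror_symmetry([(0, 0), (3, 0), (2, 2), (0, 1)]))  # False
```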
J.-H. Haunert, and B. Budig. An algorithm for map matching given incomplete road data. In Proc. 20th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS '12), pages 510-513. 2012.
| |
We consider the problem of matching a GPS trajectory with a road data set in which some roads are missing. To solve this problem, we extend a map-matching algorithm by Newson and Krumm (Proc. ACM GIS 2009, pp. 336-343) that is based on a hidden Markov model and a discrete set of candidate matches for each point of the trajectory. We introduce an additional off-road candidate for each point of the trajectory. The output path becomes determined by selecting one candidate for each point of the trajectory and connecting the selected candidates via shortest paths, which preferably lie in the road network but, if off-road candidates become selected, may also include off-road sections. We discuss experiments with GPS tracks of pedestrians. @inproceedings{HaunertBudig2012, |
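To make the candidate-based model concrete, here is a generic log-score Viterbi over per-point candidate sets, where each set simply contains an extra off-road state. All names and scores are invented placeholders; in the paper, emission and transition probabilities follow Newson and Krumm's HMM formulation, derived from GPS error and routing distances.

```python
# Generic Viterbi over candidate sets that include an "off" state; the
# toy candidates and scoring functions below are invented placeholders.
def viterbi(candidates, emission, transition):
    prev = {c: emission(0, c) for c in candidates[0]}
    back = []
    for t in range(1, len(candidates)):
        cur, choice = {}, {}
        for c in candidates[t]:
            best = max(prev, key=lambda p: prev[p] + transition(p, c))
            cur[c] = prev[best] + transition(best, c) + emission(t, c)
            choice[c] = best
        back.append(choice)
        prev = cur
    state = max(prev, key=prev.get)
    path = [state]
    for choice in reversed(back):  # reconstruct the best path backwards
        state = choice[state]
        path.append(state)
    return list(reversed(path))

cands = [["road_A", "off"], ["road_A", "road_B", "off"], ["road_B", "off"]]
emit = lambda t, c: -1.0 if c == "off" else 0.0  # off-road is penalised
trans = lambda p, c: 0.0 if p == c else -0.5     # changing state costs
print(viterbi(cands, emit, trans))  # -> ['road_A', 'road_A', 'road_B']
```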
2011
A. Gemsa, J.-H. Haunert, and M. Nöllenburg. Boundary-labeling algorithms for panorama images. In Proc. 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS '11), pages 289-298. 2011.
| |
Boundary labeling deals with placing annotations for objects in an image on the boundary of that image. This problem occurs frequently in situations where placing labels directly in the image is impossible or produces too much visual clutter. Previous algorithmic results for boundary labeling consider a single layer of labels along some or all sides of a rectangular image. If, however, the number of labels is large or labels are too long, multiple layers of labels are needed. In this paper we study boundary labeling for panorama images, where n points in a rectangle R are to be annotated by disjoint unit-height rectangular labels placed above R in k different rows (or layers). Each point is connected to its label by a vertical leader that does not intersect any other label. We present polynomial-time algorithms based on dynamic programming that either minimize the number of rows to place all n labels, or maximize the number (or total weight) of labels that can be placed in k rows for a given integer k. For weighted labels, the problem is shown to be (weakly) NP-hard, and we give a pseudo-polynomial algorithm to maximize the weight of the selected labels. We have implemented our algorithms; the experimental results show that solutions for realistically-sized instances are computed instantaneously. @inproceedings{GemsaEtAl2011, | |
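If the leader-crossing constraints are ignored, the "minimize the number of rows" question relaxes to classic interval partitioning: sweep the unit-height labels from left to right and put each one into the first row whose last label it does not overlap. The sketch below shows only this relaxation; the paper's dynamic programs additionally keep the vertical leaders crossing-free, which the greedy does not capture.

```python
# Interval-partitioning relaxation of row minimisation for panorama
# labels; the x-intervals in the example are invented.
def min_rows(labels):
    # labels: list of (left, right) x-intervals, e.g. centred on sites
    rows = []  # rows[i] = right end of the last label placed in row i
    for left, right in sorted(labels):
        for i, end in enumerate(rows):
            if left >= end:  # fits behind the last label in row i
                rows[i] = right
                break
        else:
            rows.append(right)  # open a new row
    return len(rows)

print(min_rows([(0, 2), (1, 3), (2.5, 4), (3, 5)]))  # -> 2
```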
J.-H. Haunert. Detecting symmetries in building footprints by string matching. In S. Geertman, W. Reinhardt, and F. Toppen, editors, Lecture Notes in Geoinformation and Cartography. Advancing Geoinformation Science for a Changing World - Proc. 14th AGILE International Conference on Geographic Information Science, pages 319-336. Springer-Verlag, Berlin, Germany, 2011.
| |
This paper presents an algorithmic approach to the problem of finding symmetries in building footprints. The problem is motivated by map generalization tasks, for example, symmetry-preserving building simplification and symmetry-aware grouping and aggregation. Moreover, symmetries in building footprints may be used for landmark selection and building classification. The presented method builds on existing methods for symmetry detection in polygons that use algorithms for string matching. It detects both axial symmetries and repetitions of geometric structures. In addition to the existing string-matching approaches to symmetry detection, we consider the problem of finding partial symmetries in polygons while allowing for small geometric errors. Moreover, we discuss how to find optimally adjusted mirror axes and to assess the quality of a detected mirror axis using a least-squares approach. The presented approach was tested on a large building data set of the metropolitan Boston area. The dominant symmetry relations were found. Future work is needed to aggregate the obtained symmetry relations, for example, by finding sets of mirror axes that are almost collinear. Another open problem is the integration of information on symmetry relations into algorithms for map generalization. @inproceedings{Haunert2011, | |
J.-H. Haunert. Map generalisation - quality benchmarks through optimisation. GIM International, 25(1):31-33, 2011.
| |
Decades of research on map generalisation have resulted in an abundance of heuristic algorithms; evaluating their performance is vital for choosing the most suitable one for a given application. Proper evaluation methods are, however, missing. The author proposes an approach based on optimisation methods adopted from the field of operations research. @article{Haunert2011b, | |
J.-H. Haunert, and L. Sering. Drawing road networks with focus regions. IEEE Transactions on Visualization and Computer Graphics, 17(12):2555-2562, 2011.
| |
Mobile users of maps typically need detailed information about their surroundings plus some context information about remote places. In order to prevent parts of the map from becoming too dense, cartographers have designed mapping functions that enlarge a user-defined focus region; such functions are sometimes called fish-eye projections. The extra map space occupied by the enlarged focus region is compensated by distorting other parts of the map. We argue that, in a map showing a network of roads relevant to the user, distortion should preferably take place in those areas where the network is sparse. Therefore, we do not apply a predefined mapping function. Instead, we consider the road network as a graph whose edges are the road segments. We compute a new spatial mapping with a graph-based optimization approach, minimizing the square sum of distortions at edges. Our optimization method is based on a convex quadratic program (CQP); CQPs can be solved in polynomial time. Important requirements on the output map are expressed as linear inequalities. In particular, we show how to forbid edge crossings. We have implemented our method in a prototype tool. For instances of different sizes, our method generated output maps that were far less distorted than those generated with a predefined fish-eye projection. Future work is needed to automate the selection of roads relevant to the user. Furthermore, we aim at fast heuristics for application in real-time systems. @article{haunertSering2011, |
2010
J.-H. Haunert, and A. Wolff. Optimal and topologically safe simplification of building footprints. In Proc. 18th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS '10), pages 192-201. 2010.
| |
We present an optimization approach to simplify sets of building footprints represented as polygons. We simplify each polygonal ring by selecting a subsequence of its original edges; the vertices of the simplified ring are defined by intersections of consecutive (and possibly extended) edges in the selected sequence. Our aim is to minimize the number of all output edges subject to a user-defined error tolerance. Since we earlier showed that the problem is NP-hard when requiring non-intersecting simple polygons as output, we cannot hope for an efficient, exact algorithm. Therefore, we present an efficient algorithm for a relaxed problem and an integer program (IP) that allows us to solve the original problem with existing software. Our IP is large, since it has O(m^6) constraints, where m is the number of input edges. In order to keep the running time small, we first consider a subset of only O(m) constraints. The choice of the constraints ensures some basic properties of the solution. Constraints that were neglected are added during optimization whenever they become violated by a new solution encountered. Using this approach we simplified a set of 144 buildings with a total of 2056 edges in 4.1 seconds on a standard desktop PC; the simplified building set contained 762 edges. During optimization, the number of constraints increased by a mere 13 %. We also show how to apply cartographic quality measures in our method and discuss their effects on examples. @inproceedings{haunertwolff2010, | |
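The optimisation pattern (minimise the number of output edges subject to an error tolerance) can be illustrated on the simpler, classic vertex-subsequence variant of polyline simplification in the style of Imai and Iri, where admissible shortcuts form a DAG and a breadth-first search yields a fewest-edges solution. This is a much weaker stand-in for the paper's edge-subsequence model with topological safety, but it shows the same min-edges-under-tolerance optimisation:

```python
# Fewest-edges simplification of an open polyline: a shortcut (i, j) is
# admissible if all skipped vertices lie within eps of segment i-j, and
# BFS over admissible shortcuts minimises the number of output edges.
import math
from collections import deque

def point_segment_dist(p, a, b):
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(polyline, eps):
    n = len(polyline)
    ok = lambda i, j: all(point_segment_dist(polyline[k], polyline[i],
                          polyline[j]) <= eps for k in range(i + 1, j))
    parent = {0: None}
    queue = deque([0])
    while queue:  # BFS = fewest shortcut edges from vertex 0 to n-1
        i = queue.popleft()
        for j in range(i + 1, n):
            if j not in parent and ok(i, j):
                parent[j] = i
                queue.append(j)
    path, v = [], n - 1
    while v is not None:
        path.append(polyline[v])
        v = parent[v]
    return list(reversed(path))

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 0), (4, 2)]
print(simplify(line, eps=0.2))  # -> [(0, 0), (3, 0), (4, 2)]
```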
J.-H. Haunert, and A. Wolff. Area aggregation in map generalisation by mixed-integer programming. International Journal of Geographical Information Science, 24(12):1871-1897, 2010.
| |
Topographic databases normally contain areas of different land cover classes, commonly defining a planar partition, that is, gaps and overlaps are not allowed. When reducing the scale of such a database, some areas become too small for representation and need to be aggregated. This unintentionally but unavoidably results in changes of classes. In this article we present an optimisation method for the aggregation problem. This method aims to minimise changes of classes and to create compact shapes, subject to hard constraints ensuring aggregates of sufficient size for the target scale. To quantify class changes we apply a semantic distance measure. We give a graph theoretical problem formulation and prove that the problem is NP-hard, meaning that we cannot hope to find an efficient algorithm. Instead, we present a solution by mixed-integer programming that can be used to optimally solve small instances with existing optimisation software. In order to process large datasets, we introduce specialised heuristics that allow certain variables to be eliminated in advance and a problem instance to be decomposed into independent sub-instances. We tested our method for a dataset of the official German topographic database ATKIS with input scale 1:50,000 and output scale 1:250,000. For small instances, we compare results of this approach with optimal solutions that were obtained without heuristics. We compare results for large instances with those of an existing iterative algorithm and an alternative optimisation approach by simulated annealing. These tests allow us to conclude that, with the defined heuristics, our optimisation method yields high-quality results for large datasets in modest time. @article{haunertwolff2010b, |
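Two ingredients of this optimisation model can be made concrete in a few lines: the area-weighted semantic cost of class changes, and the feasibility test that every contiguous region of one class reaches the size threshold. The classes, distances, adjacency and threshold below are invented; the paper minimises this cost with mixed-integer programming rather than merely evaluating fixed assignments.

```python
# Toy evaluation of the aggregation objective: class change costs area
# times a semantic distance, and each contiguous equal-class region must
# reach the size threshold. All example data are invented.
DIST = {("forest", "grass"): 1.0, ("grass", "forest"): 1.0,
        ("forest", "lake"): 5.0, ("lake", "forest"): 5.0,
        ("grass", "lake"): 4.0, ("lake", "grass"): 4.0}

def change_cost(areas, old_classes, new_classes):
    return sum(size * DIST.get((old_classes[a], new_classes[a]), 0.0)
               for a, size in areas.items()
               if old_classes[a] != new_classes[a])

def feasible(areas, new_classes, adjacency, threshold):
    seen = set()
    for start in areas:
        if start in seen:
            continue
        region, stack = set(), [start]  # flood-fill the equal-class region
        while stack:
            a = stack.pop()
            if a in region:
                continue
            region.add(a)
            stack.extend(b for b in adjacency[a]
                         if new_classes[b] == new_classes[a])
        seen |= region
        if sum(areas[a] for a in region) < threshold:
            return False
    return True

areas = {"a": 3.0, "b": 1.0, "c": 4.0}
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
old = {"a": "forest", "b": "lake", "c": "forest"}
new = {"a": "forest", "b": "forest", "c": "forest"}
print(change_cost(areas, old, new), feasible(areas, new, adj, 5.0))
```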
2009
J.-H. Haunert. Aggregation in map generalization by combinatorial optimization. Veröffentlichungen der Deutschen Geodätischen Kommission, Reihe C. Leibniz Universität Hannover, Germany, 2009. Dissertation.
| |
This thesis presents a new method for aggregating area objects in topographic databases. Aggregation is an important sub-problem of automatic generalization, which in cartography and geoinformatics means to derive representations of higher abstraction and smaller scale from given datasets. The developed method aims to create results of maximum quality and to ensure standards like minimal dimensions that are given with database specifications. Basically, two quality criteria are considered relevant: (1) Large-scale objects are to be represented by small-scale objects of semantically similar classes - this aim is formalized by defining a semantic distance between classes. (2) Areas in the smaller scale need to have a geometrically simple, compact shape. For the first time, this task is consequently formulated as an optimization problem and solved by applying combinatorial optimization techniques. In a simplified form, the problem can be stated as follows: Given a planar map with areas of different classes and a minimal allowed area size theta for the target scale, combine the areas into contiguous regions and define a new class for each region, such that each region has size at least theta and the area covered by objects whose classes change is minimized. It will be proven that this simplified problem is already NP-hard. Therefore, it is unlikely that an exact polynomial-time algorithm exists for its solution. This motivates an approach to the problem that is based on mixed-integer linear programming and heuristics. The proposed methods allow semantic class distances and different compactness measures to be considered. These criteria are incorporated into the cost function that is minimized. Furthermore, it is possible to define size thresholds for different classes, in order to preserve objects of high priority. Due to the high complexity of the problem, the exact solution cannot be obtained for large problem instances. Nevertheless, for small instances, the approach allows the results of heuristics to be compared with optimal solutions. This enables the performance of heuristic methods to be assessed with respect to the quality of the result. The development of the presented heuristics is motivated by two aims: Firstly, large problem instances, for example, national topographic databases, need to be generalized. Secondly, local updates need to be propagated into a previously derived small-scale map without processing the complete dataset again. In addition to heuristics that eliminate certain variables in the mixed-integer programs, a heuristic is proposed that decomposes a problem instance into smaller, independent instances. With this approach both aims are achieved. Furthermore, a generalization workflow is presented that, in addition to the aggregation method, comprises methods for area collapse and line simplification. In order to prove the applicability of the proposed method in practice, it was applied to a dataset of the official German topographic database ATKIS. A dataset of the ATKIS DLM 50 was processed, which contains similar amounts of detail as the digital topographic map at scale 1:50 000 (DTK 50). The minimal allowed size theta was defined according to existing specifications of the ATKIS DLM 250. A dataset of 22 km times 22 km with 5537 areas (a map sheet of the DTK 50) was aggregated in 82 minutes. Compared to an existing iterative approach (the so-called region-growing), the new method resulted in 20% less class change and 2% less cost for non-compact shapes. 
The developed method has several advantages, in particular, it offers results of high semantic accuracy. Furthermore, the proposed optimization method is deterministic and allows (hard) constraints to be handled. These are advantages compared to existing optimization methods for map generalization, which are often based on iterative meta-heuristics like simulated annealing. How to adapt the developed approach to other generalization tasks - in particular to building simplification - is discussed as a question for future research. @phdthesis{haunert2008f, | |
J.-H. Haunert, and C. Brenner. Vehicle localization by matching triangulated point patterns. In Proc. 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS '09), pages 344-351. 2009.
| |
We consider the problem of localizing a moving vehicle based on landmarks that were detected with a vehicle-mounted sensor. Landmarks are represented as points; correspondences of these points with the ones in a reference database are searched based on their geometric configurations. More specifically, we triangulate the landmark points and we match the obtained triangles with triangles in a reference database according to their geometric similarity. We maximize the number of triangle matches while considering the topological relations between different triangles, for example, if two triangles share an edge then the corresponding reference triangles must share an edge. Our method exploits that the observed points typically form a certain configuration: They appear at a limited distance from the vehicle's trajectory, thus the typical point pattern has a large extent in the driving direction and a relatively small lateral extent. This characteristic allows us to triangulate the observed point set such that we obtain a triangle strip (a sequence of triangles) in which each two consecutive triangles share one edge and each triangle connects three points that are relatively close to each other, that is, the triangle strip appropriately defines a neighborhood relationship for the landmarks. The adjacency graph of the triangles becomes a path; this allows for an efficient solution of our matching problem by dynamic programming. We present results of our method with data acquired with a mobile laser scanning system. The landmarks are objects of cylindric shape, for example, poles of traffic signs, which can be easily detected with the employed sensor. We tested the method with respect to its running time and its robustness when imposing different types of errors on the data. In particular, we tested the effect of non-rigid distortions of the observed point set, which are typically encountered during dead reckoning. Our matching approach copes well with such errors since it is based on local similarity measures of triangles, that is, we do not assume that a global non-rigid transformation between the observed point set and the reference point set exists. @inproceedings{haunert2009c, | |
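Because the adjacency graph of the triangle strip is a path, the matching can be organised as a sequence-alignment dynamic program. The LCS-style sketch below, with a crude side-length similarity test, is a simplified stand-in: the paper's program additionally enforces the shared-edge topology between consecutive matches, which this sketch omits.

```python
# LCS-style alignment of an observed triangle strip against reference
# triangles, each given as a triple of side lengths (invented values).
def similar(t1, t2, tol=0.1):
    return all(abs(a - b) <= tol * max(a, b)
               for a, b in zip(sorted(t1), sorted(t2)))

def max_matches(observed, reference):
    n, m = len(observed), len(reference)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            if similar(observed[i - 1], reference[j - 1]):
                dp[i][j] = max(dp[i][j], dp[i - 1][j - 1] + 1)
    return dp[n][m]

obs = [(3.0, 4.0, 5.0), (2.0, 2.0, 3.0)]
ref = [(3.1, 3.9, 5.0), (6.0, 8.0, 9.0), (2.0, 2.1, 3.0)]
print(max_matches(obs, ref))  # -> 2
```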
J.-H. Haunert, A. Dilo, and P. van Oosterom. Constrained set-up of the tGAP structure for progressive vector data transfer. Computers & Geosciences, 35(11):2191-2203, 2009.
| |
A promising approach to submit a vector map from a server to a mobile client is to send a coarse representation first, which then is incrementally refined. We consider the problem of defining a sequence of such increments for areas of different land-cover classes in a planar partition. In order to submit well-generalised datasets, we propose a method of two stages: First, we create a generalised representation from a detailed dataset, using an optimisation approach that satisfies certain cartographic constraints. Second, we define a sequence of basic merge and simplification operations that transforms the most detailed dataset gradually into the generalised dataset. The obtained sequence of gradual transformations is stored without geometrical redundancy in a structure that builds on the previously developed tGAP (topological Generalised Area Partitioning) structure. This structure and the algorithm for intermediate levels of detail (LoD) have been implemented in an object-relational database and tested for land-cover data from the official German topographic dataset ATKIS at scale 1:50 000 to the target scale 1:250 000. Results of these tests allow us to conclude that the data at lowest LoD and at intermediate LoDs is well generalised. Applying specialised heuristics, the optimisation method copes with large datasets; the tGAP structure allows users to efficiently query and retrieve a dataset at a specified LoD. Data are sent progressively from the server to the client: First a coarse representation is sent, which is refined until the requested LoD is reached. @article{haunert2009d, | |
B. Kieler, W. Huang, J.-H. Haunert, and J. Jiang. Matching river datasets of different scales. In M. Sester, L. Bernard, and V. Paelke, editors, Lecture Notes in Geoinformation and Cartography. Advances in GIScience: Proc. 12th AGILE International Conference on Geographic Information Science, pages 56-63. Springer-Verlag, Berlin, Germany, 2009.
| |
In order to ease the propagation of updates between geographic datasets of different scales and to support multi-scale analyses, different datasets need to be matched, that is, objects that represent the same entity in the physical world need to be identified. We propose a method for matching datasets of river systems that were acquired at different scales. This task is related to the problem of matching networks of lines, for example road networks. However, we also take into account that rivers may be represented by polygons. The geometric dimension of a river object may depend, for example, on the width of the river and the scale. Our method comprises three steps. First, in order to cope with geometries of different dimensions, we collapse river polygons to centerlines by applying a skeletonization algorithm. We show how to preserve the topology of the river system in this step, which is an important requirement for the subsequent matching steps. Secondly, we perform a pre-matching of the arcs and nodes of the line network generated in the first step, that is, we detect candidate matches and define their quality. Thirdly, we perform the final matching by selecting a consistent set of good candidate matches. We tested our method for two Chinese river datasets of the same areal extent, which were acquired at scales 1:50 000 and 1:250 000. The evaluation of our results allows us to conclude that our method seldom yields incorrect matches. The number of correct matches that are missed by our method is quite small. @inproceedings{kieler2009, | |
B. Kieler, J.-H. Haunert, and M. Sester. Deriving scale-transition matrices from map samples for simulated annealing-based aggregation. Annals of GIS, 15(2):107-116, 2009.
| |
The goal of the research described in this article is to derive parameters necessary for the automatic generalization of land-cover maps. A digital land-cover map is often given as a partition of the plane into areas of different classes. Generalizing such maps is usually done by aggregating small areas into larger regions. This can be modelled using cost functions in an optimization process, where a major objective is to minimize the class changes. Thus, an important input parameter for the aggregation is the information about possible aggregation partners of individual object classes. This can be coded in terms of a transition matrix listing costs that are charged for changing a unit area from one class into another one. In our case we consider the problem of determining the transition matrix based on two data sets of different scales. We propose three options to solve the problem: (1) the conventional way where an expert defines manually the transition matrix, (2) to derive the transition matrix from an analysis of an overlay of both data sets, and (3) an automatic way where the optimization is iterated while adapting the transition matrix until the difference of the intersection areas between both data sets before and after the generalization is minimized. As underlying aggregation procedure we use an approach based on global combinatorial optimization. We tested our approach for two German topographic data sets of different origin, which are given in the same areal extent and were acquired at scales 1:1000 and 1:25,000, respectively. The evaluation of our results allows us to conclude that our method is promising for the derivation of transition matrices from map samples. In the discussion we describe the advantages and disadvantages and show options for future work. @article{kieler2009b, |
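Option (2), deriving the matrix from an overlay of both datasets, reduces to counting intersection areas per class pair and converting relative frequencies into costs. The negative-log conversion and the class names below are assumptions made for illustration (rare transitions become expensive); the paper's option (3) additionally adapts the matrix inside the optimisation loop.

```python
# Derive a transition-cost matrix from invented overlay statistics:
# accumulate intersection areas per (source, target) class pair and
# turn relative frequencies into costs via a negative logarithm.
import math
from collections import defaultdict

def transition_matrix(overlay):
    # overlay: list of (source_class, target_class, intersection_area)
    area = defaultdict(float)
    row_sum = defaultdict(float)
    for src, dst, a in overlay:
        area[(src, dst)] += a
        row_sum[src] += a
    return {(src, dst): -math.log(a / row_sum[src])
            for (src, dst), a in area.items()}

pairs = [("forest", "forest", 90.0), ("forest", "grass", 10.0),
         ("grass", "grass", 70.0), ("grass", "forest", 30.0)]
for k, cost in sorted(transition_matrix(pairs).items()):
    print(k, round(cost, 3))
```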
2008
J.-H. Haunert, and M. Sester. Area collapse and road centerlines based on straight skeletons. GeoInformatica, 12(2):169-191, 2008.
| |
Skeletonization of polygons is a technique that is often applied to problems of cartography and geographic information science. In particular, it is needed for generalization tasks such as the collapse of small or narrow areas, which are negligible for a certain scale. Different skeleton operators can be used for such tasks. One of them is the straight skeleton, which was rediscovered by computer scientists several years ago after decades of neglect. Its full range of practicability and its benefits for cartographic applications have not yet been revealed. Based on the straight skeleton, an area collapse that preserves topological constraints as well as a partial area collapse can be performed. An automatic method for the derivation of road centerlines from a cadastral dataset, which uses special characteristics of the straight skeleton, is shown. @article{haunert2008, | |
J.-H. Haunert, and A. Wolff. Optimal simplification of building ground plans. In volume XXXVII(Part B2) of International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Proc. XXIst ISPRS Congress, pages 373-378. 2008.
| |
This paper presents an optimization approach to simplify building ground plans given by two-dimensional polygons. We define a simplified building as a subsequence of the original building edges; its vertices are defined by intersections of consecutive (and possibly extended) edges in the selected sequence. Our aim is to minimize the number of edges subject to a user-defined error tolerance. We prove that this problem is NP-hard when requiring that the output consists of non-intersecting simple polygons. Thus we cannot hope for an efficient, exact algorithm. Instead, we propose a heuristic and an integer programming formulation that can be used to solve the problem with existing optimization software. We discuss results of our algorithms and how to incorporate more sophisticated objective functions into our model. @inproceedings{haunert2008b, | |
J.-H. Haunert, and M. Sester. Assuring logical consistency and semantic accuracy in map generalization. Photogrammetrie - Fernerkundung - Geoinformation (PFG), 2008(3):165-173, 2008.
| |
In recent years national mapping agencies have increasingly integrated automatic map generalization methods in their production lines. This raises the question of how to assess and assure the quality of mapping products such as digital landscape models. Generalization must not only ensure specified standards for an output scale, but also needs to keep semantics as similar as possible under these given requirements. In order to allow for objective comparisons of different generalization results we introduce a semantic distance measure. We present results that optimize this measure subject to constraints reflecting database specifications and show how this measure can be used to compare the results of different methods, including exact and heuristic approaches. @article{haunert2008d, | |
M. Sester, J.-H. Haunert, and K.-H. Anders. Modell- und kartographische Generalisierung von topographischen und thematischen Informationen. Kartographische Nachrichten, 6:307-314, 2008.
| |
Automation in generalization is a challenge that researchers and companies have been tackling for several years. Very good results have already been achieved, as evidenced by the availability of highly capable algorithms. This contribution discusses some of these operations, points out which questions are still open, and shows how they can be solved by new approaches. Particular emphasis is placed on the possibility of describing the overall course of generalization as a process, which thereby becomes traceable and reproducible. @article{sester2008, |
2007
C. Brenner, V. Paelke, J.-H. Haunert, and N. Ripperda. Das virtuelle Fernrohr GeoScope. In Unimagazin Hannover. Übers Reisen - Forschung rund um das Fernweh, pages 34-37. Leibniz Universität Hannover, 2007.
| |
When preparing for a trip, many people already use internet-based and virtual representations with a spatial reference: route planners for the journey, Google Earth for an aerial view of the destination, or pictures and films on the internet. Once on holiday, however, the choice of technology-based information sources is rather sobering and is often limited to a printed map. Yet a very well-known technical device for tourism has existed for a long time: the coin-operated telescope. Four researchers of the Institut für Kartographie und Geoinformatik show how it can be 'virtualized' and extended for conveying information. @incollection{brenner2007, | |
J.-H. Haunert. Optimization methods for area aggregation in land cover maps. In Proc. 23rd International Cartographic Conference (ICC'07), August 4-10, 2007, Moscow, Russia. 2007.
| |
The aggregation of areas is an important subproblem of the map generalization task. In particular, it is relevant for the generalization of topographic maps which contain areas of different land cover, such as settlement, water, or different kinds of vegetation. An existing approach is to apply algorithms that iteratively merge adjacent areas, taking only local measures into consideration. In contrast, global optimization methods are proposed in this paper to derive maps of higher quality. Given a planar subdivision in which each area is assigned to a land cover class, we consider the problem of aggregating areas such that defined thresholds are satisfied. The aggregation is directed at two objectives: Classes of areas shall change as little as possible and compact shapes are preferred. In this paper, the problem is formalized and two different approaches are compared, namely mixed-integer programming and simulated annealing. @inproceedings{haunert2007, | |
J.-H. Haunert. Efficient area aggregation by combination of different techniques. In Proc. 10th ICA Workshop on Generalisation and Multiple Representation, August 2-3, 2007, Moscow, Russia. 2007.
| |
When reducing the scale of a topographic database, some areas of the data set become too small for representation and need to be aggregated with others, unintentionally but unavoidably leading to changes of some areas' land cover classes. In this paper, we approach this problem by optimisation: Given a planar subdivision containing areas of different land cover classes, the problem is to aggregate areas into contiguous regions and to define the class for each region, such that each region satisfies a size threshold and the overall class change is minimal. A second objective is to create compact shapes. In an earlier paper we proved that the problem is NP-hard. We have presented a method by mixed-integer programming for its solution and introduced several heuristics. Our tests revealed that, even with the defined heuristics, our method generally does not allow solving problem instances of more than 400 areas. In this paper we present a new efficient heuristic for the problem. Our approach is to locally introduce intermediate levels of detail. Steps between these scales can be processed using our previously presented method. This approach allows large data sets to be processed: a complete map sheet of the German topographic map at scale 1:50,000 was processed to meet the requirements for the scale 1:250,000. We show that our method generalizes an existing iterative algorithm for the same problem and compare the results obtained with different settings of our method. Compared with the existing iterative algorithm, our method resulted in 27.4% less change of land cover classes. @inproceedings{haunert2007b, | |
J.-H. Haunert. A formal model and mixed-integer program for area aggregation in map generalization. In volume XXXVI(Part 3/W49A) of International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences. Photogrammetric Image Analysis (PIA'07), September 19-21, 2007, Munich, Germany, pages 161-166. 2007.
| |
This paper presents a model and an optimization method for a problem that appears when reducing the scale of a topographic database. Such a database commonly contains areas of different land cover classes that define a planar subdivision. When reducing its scale, some areas become too small and need to be aggregated. In order to produce contiguous aggregates that are not smaller than a user-defined threshold, it is necessary to change the classes of some areas. As generalization intends to preserve the characteristic features of the map, we aim to change classes as little as possible. A second objective is to create simple, compact shapes. Based on a previous work that neglected this second objective, we define a more general problem in this paper that reflects both aims of generalization. The problem was proven to be NP-hard, meaning that it is unlikely to find an efficient solution. Therefore, we propose a mixed-integer program (MIP) and heuristics, which enable the production of near-optimal results. The paper concludes with the presentation of some results we obtained using our method. @inproceedings{haunert2007d, | |
E. S. Podolskaya, K.-H. Anders, J.-H. Haunert, and M. Sester. Quality assessment for polygon generalization. In Quality Aspects in Spatial Data Mining, pages 211-220. CRC Press, Boca Raton, USA, 2007.
| |
Different methods for map generalization have been developed. However, there are only a few publications that discuss the automatic quality assessment of generalized maps. This problem becomes crucial when the usefulness of a map has to be evaluated or different methods need to be compared, e.g. to find the best algorithm for a specific application. In this paper we present a new approach for the quality assessment of generalized polygons. In particular the simplification of buildings and the generalization of polygons that represent land use in a topographic database are discussed. Our approach distinguishes between two aims of generalization: Reducing the amount of data and keeping the map similar to the input map. A measure of quality that defines a compromise between these conflicting objectives is introduced. The proposed method was tested for a German cadastral data set and the official German topographic database ATKIS. @incollection{podolskaya2007, |
2006
C. Brenner, V. Paelke, J.-H. Haunert, and N. Ripperda. The GeoScope - a mixed-reality system for planning and public participation. In Proc. 25th Urban Data Management Symposium (UDMS'06), May 15-17, 2006, Aalborg, Denmark. 2006.
| |
The augmentation of real environments with additional computer generated information (augmented reality, AR), and the addition of virtual contents (mixed reality, MR) are promising interaction paradigms for the support of applications outside the classical office scenario. While the potential of AR for applications such as geo-visualization has been shown using a number of demonstration systems, there are virtually no systems in daily use up to now. This is mainly due to technical reasons. Besides suitable displays, there is a lack of reliable and accurate tracking systems, which are however required for an exact registration of the virtual contents and the real scene. Additionally, there is a lack of experience and tools for the creation of mixed reality applications and often, of intuitive interaction techniques. Especially for applications involving public audience (e.g. exhibitions, museums, public participation), additional requirements apply, such as reliability, robustness, easy adaptation to different users, and the necessity for relatively low operating expenses. In this paper, we introduce the GeoScope, a mixed-reality input/output device, which addresses those problems, especially for public applications. The GeoScope is mounted at a fixed position and consists of a display, oriented towards the user, and a camera, which points at the surroundings. Just as a telescope, the GeoScope can be turned around two axes, the two angles being captured by measuring devices. Together with the known position, a fast and highly precise tracking of the current view direction is possible, allowing the superposition of the real scene, as delivered by the camera, and virtually generated information. @inproceedings{Brenner2006, | |
J.-H. Haunert, K.-H. Anders, and M. Sester. Hierarchical structures for rule-based incremental generalisation. In volume XXXVI(Part 2/W40) of International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences. Proc. ISPRS Workshop on Multiple Representation and Interoperability of Spatial Data, February 22-24, 2006, Hannover, Germany, pages 49-57. 2006.
| |
Multiple Representation Databases are structures that link different data sets based on common geometry and/or semantics. They offer the possibility to realize an efficient updating process in case of changes, as only one data set has to be updated and the changes can be propagated to the linked one. There are some prerequisites for the realisation of such a functionality which are elaborated on in the paper. First of all, functional dependencies between the data sets have to be known or made explicit. Secondly, partitioning schemes of space have to be available in order to make sure that local changes can be restricted locally and do not spread over the whole data set. @inproceedings{haunert2006a, | |
J.-H. Haunert, and A. Wolff. Generalization of land cover maps by mixed integer programming. In Proc. 14th Annual ACM International Symposium on Advances in Geographic Information Systems (ACM GIS '06), pages 75-82. 2006.
| |
We present a novel method for the automatic generalization of land cover maps. A land cover map is composed of areas that collectively form a tessellation of the plane and each area is assigned to a land cover class such as lake, forest, or settlement. Our method aggregates areas into contiguous regions of equal class and of size greater than a user-defined threshold. To achieve this goal, some areas need to be enlarged at the expense of others. Given a function that defines costs for the transformation between pairs of classes, our method guarantees to return a solution of minimal total cost. The method is based on a mixed integer program (MIP). To process maps with more than 50 areas, heuristics are introduced that lead to an alternative MIP formulation. The effects of the heuristics on the obtained solution and the computation time are discussed. The methods were tested using real data from the official German topographic data set (ATKIS) at scales 1:50,000 and 1:250,000. @inproceedings{haunert2006b, | |
J.-H. Haunert. Aktualisierung von Geodaten in einer Multiple Representation Database. In Publikationen der Deutschen Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation e.V., DGPF Jahrestagung 2006, Berlin, Germany. 2006.
| |
Topographic datasets of different scales are increasingly maintained in a common database. Updating the data within such a database is a current problem in geoinformatics. In general, starting from a source dataset, the goal is to propagate information into a target dataset of lower resolution by means of generalization. In cartography, the landscape is usually partitioned into areas of different land-use and vegetation classes. The aggregation of these areas is an important task of model generalization. In this contribution, an iterative aggregation method that considers local relationships between objects is compared with a global optimization method. The possibilities of employing these methods for updating are discussed. @inproceedings{haunert2006c, |
2005
J.-H. Haunert, C. Brenner, and H. Neidhart. Using a geographic information system for the generation of driving simulator scenes. Advances in Transportation Studies - An International Journal, Special Issue:33-44, 2005.
| |
Driving simulators often require a linear description of scenes in terms of events which happen at specified locations. This concept is very sensible for the actual simulation, since it not only saves space, but also makes simulations repeatable. Since maneuverability is essentially constrained to one dimension, test persons will encounter the very same scenes and challenges during the simulation. However, defining scenes with a reasonable complexity or even scenes resembling real world situations is very elaborate. Furthermore, scene reuse is not supported very well, since cutting and pasting of linear scene descriptions requires a considerable ability to imagine the actual scenes being manipulated. In this paper, we show how Geographic Information Systems (GIS) can be used to define, visualize, and manage scene descriptions for a simulator system. To that end, we have chosen ArcGIS from ESRI, Inc. for the GIS part and STISIM drive from Systems Technology, Inc. for the simulator part. One key characteristic of the approach is that rather than putting the entire effort into building individual simulator scenes, one builds simulator worlds, from which later different simulator scenes can be derived with little effort. @article{haunert2005, | |
J.-H. Haunert. Link based conflation of geographic datasets. In Proc. 8th ICA Workshop on Generalisation and Multiple Representation. 2005.
| |
The conflation of geographic data from different sources is an important research area in geographic information sciences. In recent times, this has been increasingly motivated by the demand for a higher usability of expensively collected data on the part of the national mapping agencies. For this reason, the linkage of geographic datasets of different scales and their joint administration in one database is also under examination. Linking objects of different scales will ease the propagation of new information to a generalized dataset, and the transfer of information from lower to higher scale becomes more feasible. In this paper a method for a topologically consistent transfer of features is presented, which assumes that links between corresponding features exist. The method is based on a map transformation which considers locally existing geometric differences between the two involved datasets. Even though algorithms and mathematical formulas of similar existing methods differ among each other and in relation to the presented work, the term rubber sheeting is often used for this kind of transformation and will also be used here. It refers to the illustration of the map as a flexible membrane that is fitted into a frame and hence is forced to become deformed. Rubber sheeting can be implemented by an interpolation of coordinate differences that are given for control points. In this work an infinite number of control points on corresponding linear elements is defined. After an overview of related work, this paper addresses how the transformation can be set up by exploiting links between linear features. The mathematical foundations for the transformation are also explained, and results for existing datasets are shown. The conclusion presents limitations that require further development and research. @inproceedings{haunert2005a, | |
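Rubber sheeting as interpolation of coordinate differences can be sketched with plain inverse-distance weighting over a handful of control-point pairs. The paper instead derives, in effect, infinitely many control points along corresponding linear features, so the following shows only the basic mechanism on invented point pairs:

```python
# Inverse-distance-weighted interpolation of control-point displacements
# as a minimal stand-in for rubber sheeting; the pairs are invented.
import math

def rubber_sheet(point, control):
    # control: list of ((x_src, y_src), (x_dst, y_dst)) pairs
    weights, dx, dy = [], 0.0, 0.0
    for (sx, sy), (tx, ty) in control:
        d = math.hypot(point[0] - sx, point[1] - sy)
        if d == 0.0:
            return (tx, ty)  # exactly on a control point
        w = 1.0 / (d * d)
        weights.append(w)
        dx += w * (tx - sx)
        dy += w * (ty - sy)
    total = sum(weights)
    return (point[0] + dx / total, point[1] + dy / total)

pairs = [((0, 0), (0.2, 0.1)), ((10, 0), (10.0, -0.1)), ((5, 8), (5.1, 8.2))]
print(rubber_sheet((5.0, 2.0), pairs))  # displaced interior point
```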
J.-H. Haunert, and M. Sester. Propagating updates between linked data sets of different scale. In Proc. 22nd International Cartographic Conference (ICC'05), July 11-16, 2005, Spain. 2005.
| |
This work shows an incremental approach for the propagation of updates from a topographic source dataset to a generalised dataset. It aims at a result that is equal to the complete generalisation of the source dataset. For this, the information from links that express correspondences of features, the knowledge of the generalisation rules, and the topology of the features are considered. The problem is discussed extensively for the case of the aggregation of areas, which is based on rules that take thematic and geometrical criteria into account. Results for the generalisation of an existing dataset are shown. For the incremental method, it is considered that the update of a single feature can have different impacts on the target dataset when following the rules. An algorithm is described which is based on the derivation of the influence area to which the update is restricted. @inproceedings{haunert2005b, | |
J.-H. Haunert. Geometrietypwechsel in einer Multi-Resolution-Datenbank. In Mitteilungen des Bundesamtes für Kartographie und Geodäsie, Band 34: Arbeitsgruppe Automation in der Kartographie - Tagung 2004. Verlag des Bundesamtes für Kartographie und Geodäsie, Frankfurt am Main, Germany, 2005.
| |
Maintaining cartographic datasets of different scales together in one database presents a new challenge in cartography and geoinformatics. The motivation arises from the demand for fast and dynamic zooming in web cartography and for an effective procedure for updating the data holdings. Data models have been proposed in which correspondences between homologous objects are described explicitly, and the developed concepts have in part already been implemented. However, procedures for updating the data within such a database are still missing. In general, starting from a source dataset, the goal is to propagate information into a target dataset of lower resolution. This task is a problem of model generalization. In this work, a skeleton algorithm is presented that performs the required change of geometry type from an areal to a linear representation. Examples of the use of the algorithm and modifications for special problems are presented. @incollection{haunert2005c, |
2004
K.-H. Anders, and J.-H. Haunert. MRDB to connect 3D city models, cadastral, and topographic data together. In Proc. 24th Urban Data Management Symposium (UDMS'04), October 27-29, 2004, Chioggia (Venice), Italy. 2004.
| |
A multi-resolution/multi-representation database (MRDB) can be described as a spatial database that stores the same real-world phenomena at different levels of precision, accuracy, and resolution. Furthermore, these phenomena can be stored with different kinds of presentation or symbolisation. There are several reasons for introducing an MRDB. On the one hand, it allows a multi-scale analysis of the data: information in one resolution can be analysed with respect to information given in another resolution. On the other hand, a major reason for national mapping agencies to investigate and implement an MRDB is the possibility of propagating updates between the scales, which is also called incremental generalisation. In cooperation with the German Federal Agency for Cartography and Geodesy (BKG), we are developing at the Institute of Cartography and Geoinformatics (ikg) an automatic model generalisation and update (data revision) system for the German ATKIS (Authoritative Topographic-Cartographic Information System). In this paper, we describe the structure and functionality of our ATKIS MRDB and the planned extension to store cadastral and 3D building information. @inproceedings{anders2004b, | |
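The linking structure described above can be sketched as a toy in-memory store, assuming two dict-based scale levels with symmetric links between corresponding objects; the ATKIS MRDB of the paper is, of course, a full spatial database rather than a data structure like this.

```python
# Toy MRDB: per-scale feature stores plus explicit, symmetric links between
# objects that represent the same real-world phenomenon. Illustrative only.
from collections import defaultdict

class MRDB:
    def __init__(self):
        self.levels = defaultdict(dict)   # scale level -> {feature id: attributes}
        self.links = defaultdict(set)     # (level, id) -> linked (level, id) pairs

    def add(self, level, fid, feature):
        self.levels[level][fid] = feature

    def link(self, src, dst):
        self.links[src].add(dst)
        self.links[dst].add(src)          # links are modelled symmetrically

    def corresponding(self, level, fid):
        """Objects in other levels that represent the same phenomenon."""
        return self.links[(level, fid)]

db = MRDB()
db.add('1:1000', 'b17', {'type': 'building'})
db.add('1:25000', 'blk3', {'type': 'built-up block'})
db.link(('1:1000', 'b17'), ('1:25000', 'blk3'))
print(db.corresponding('1:1000', 'b17'))  # -> {('1:25000', 'blk3')}
```

An update arriving at one level would then follow exactly these links to find the representations that must be re-generalised.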
J.-H. Haunert, and M. Sester. Using the straight skeleton for generalisation in a multiple representation environment. In Proc. 7th ICA Workshop on Generalisation and Multiple Representation, August 20-21, 2004, Leicester, UK. 2004.
| |
Recently, much research has been directed towards the comprehensive administration of geographic data of different scales and thematic domains. A promising approach is the use of a multiple representation database (MRDB) in which links between corresponding objects are explicitly modelled. The advantage of this approach is a reduction of redundancies and inconsistencies. It can be expected that this comprehensive administration of different datasets supports complex analyses and eases the updating process (Sester et al. 1998). Our aim is to build a framework enabling the automatic update of features in an MRDB. We envision a system that is able to propagate information into the other representations in the database when a feature is added at an arbitrary level. To meet this demand, algorithms are needed that can generalise the added data while considering the embedding context. The algorithms needed for generalisation depend on the relationships between objects in adjacent scale levels. The main relation is aggregation; however, there are also more complex ones, two of which are area collapses and geometry type changes, including partial geometry type changes. The paper is organised as follows: after a review of related work, the relationships between objects at different scales are briefly analysed in Section 2. The main focus of the paper, however, lies on the presentation of a skeleton algorithm, here the straight skeleton, that can be applied to the generalisation cases of area collapse and geometry type change from area to line. The algorithm and its application to generalisation are presented in Section 3. In Section 4, first ideas are presented on how this generalisation method can be used for the propagation of updates in an MRDB. A summary concludes the paper. @inproceedings{haunert2004, |
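A partial geometry type change, as mentioned above, can be illustrated with a simple width test based on negative buffering, which flags the parts of a polygon too narrow to survive as areas; note that this is only a diagnostic sketch, and the paper performs the actual collapse with the straight skeleton rather than with buffering.

```python
# Sketch of detecting a partial geometry type change: erode the polygon by
# half the minimum width; parts that vanish are candidates for collapsing
# to lines, the rest keeps its areal geometry (up to rounding artefacts
# introduced by the buffer operations at corners). Illustrative only.
from shapely.geometry import Polygon

def split_by_width(poly, min_width):
    core = poly.buffer(-min_width / 2.0)       # narrow parts erode away
    if core.is_empty:
        return None, poly                      # everything narrow: full collapse
    wide = core.buffer(min_width / 2.0).intersection(poly)
    narrow = poly.difference(wide)             # candidates for area-to-line collapse
    return wide, narrow

# A wide square with a thin spur: the spur is flagged for collapse.
shape = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]).union(
    Polygon([(4, 1.8), (9, 1.8), (9, 2.2), (4, 2.2)]))
wide, narrow = split_by_width(shape, min_width=1.0)
print(round(wide.area, 2), round(narrow.area, 2))
```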