J.-H. Haunert. Aggregation in map generalization by combinatorial optimization. Veröffentlichungen der Deutschen Geodätischen Kommission, Reihe C, no. 626. Leibniz Universität Hannover, Germany, 2009. Dissertation.
This thesis presents a new method for aggregating area objects in topographic databases. Aggregation is an important sub-problem of automatic generalization, which in cartography and geoinformatics means deriving representations of higher abstraction and smaller scale from given datasets. The developed method aims to create results of maximum quality and to meet standards, such as minimal dimensions, that are given in database specifications. Two quality criteria are considered relevant: (1) Large-scale objects are to be represented by small-scale objects of semantically similar classes; this aim is formalized by defining a semantic distance between classes. (2) Areas at the smaller scale need to have a geometrically simple, compact shape. For the first time, this task is consistently formulated as an optimization problem and solved by applying combinatorial optimization techniques. In a simplified form, the problem can be stated as follows: Given a planar map with areas of different classes and a minimal allowed area size theta for the target scale, combine the areas into contiguous regions and define a new class for each region, such that each region has size at least theta and the area covered by objects whose classes change is minimized. It is proven that this simplified problem is already NP-hard; therefore, it is unlikely that an exact polynomial-time algorithm exists for its solution. This motivates an approach based on mixed-integer linear programming and heuristics. The proposed methods allow semantic class distances and different compactness measures to be considered; these criteria are incorporated into the cost function that is minimized. Furthermore, it is possible to define size thresholds for different classes in order to preserve objects of high priority. Due to the high complexity of the problem, the exact solution cannot be obtained for large problem instances. Nevertheless, for small instances, the approach allows the results of heuristics to be compared with optimal solutions, which enables the performance of heuristic methods to be assessed with respect to the quality of the result. The development of the presented heuristics is motivated by two aims: Firstly, large problem instances, for example national topographic databases, need to be generalized. Secondly, local updates need to be propagated into a previously derived small-scale map without processing the complete dataset again. In addition to heuristics that eliminate certain variables in the mixed-integer programs, a heuristic is proposed that decomposes a problem instance into smaller, independent instances. With this approach both aims are achieved. Furthermore, a generalization workflow is presented that, in addition to the aggregation method, comprises methods for area collapse and line simplification. To demonstrate the applicability of the proposed method in practice, it was applied to a dataset of the official German topographic database ATKIS. A dataset of the ATKIS DLM 50 was processed, which contains a similar amount of detail to the digital topographic map at scale 1:50 000 (DTK 50). The minimal allowed size theta was defined according to existing specifications of the ATKIS DLM 250. A dataset of 22 km × 22 km with 5537 areas (a map sheet of the DTK 50) was aggregated in 82 minutes. Compared to an existing iterative approach (the so-called region-growing), the new method resulted in 20% less class change and 2% less cost for non-compact shapes. The developed method has several advantages; in particular, it offers results of high semantic accuracy. Furthermore, the proposed optimization method is deterministic and allows (hard) constraints to be handled. These are advantages over existing optimization methods for map generalization, which are often based on iterative meta-heuristics such as simulated annealing. How to adapt the developed approach to other generalization tasks, in particular building simplification, is discussed as a question for future research.
@phdthesis{haunert2008f,
author = {Haunert, J.-H.},
file = {c-626.pdf:http\://129.187.165.2/typo3_dgk/docs/c-626.pdf:PDF},
isbn = {978-3-7696-5038-9},
number = {626},
school = {Leibniz Universität Hannover, Germany},
series = {Veröffentlichungen der Deutschen Geodätischen Kommission, Reihe C},
title = {Aggregation in Map Generalization by Combinatorial Optimization},
type = {Dissertation},
year = {2009}
}
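For readers who want a concrete starting point, the sketch below implements the simple iterative region-growing baseline that the thesis compares its optimization approach against: each area below the size threshold theta is merged into the neighbour with the smallest semantic class distance. This is a minimal sketch, not the thesis's mixed-integer programming method; the class names, area sizes, adjacency and distance values are invented for illustration.

# Greedy region-growing baseline (illustrative; all data below is made up).
# Each area below the threshold theta is absorbed by the neighbour whose class
# is semantically closest to its own.

SEM_DIST = {  # symmetric semantic distance between land-cover classes
    ("forest", "forest"): 0.0, ("forest", "grass"): 0.3, ("forest", "water"): 1.0,
    ("grass", "grass"): 0.0, ("grass", "water"): 0.8, ("water", "water"): 0.0,
}

def sem_dist(a, b):
    return SEM_DIST.get((a, b), SEM_DIST.get((b, a), 1.0))

def region_growing(areas, adjacency, theta):
    """areas: id -> (class, size); adjacency: id -> set of neighbouring ids."""
    areas = dict(areas)
    adjacency = {k: set(v) for k, v in adjacency.items()}
    while True:
        too_small = [i for i, (_, size) in areas.items() if size < theta]
        if not too_small:
            return areas
        i = min(too_small, key=lambda k: areas[k][1])            # smallest offender
        ci, si = areas[i]
        j = min(adjacency[i], key=lambda k: sem_dist(ci, areas[k][0]))
        cj, sj = areas[j]
        areas[j] = (cj, si + sj)                                 # i is absorbed by j
        for n in adjacency.pop(i):                               # rewire adjacency
            adjacency[n].discard(i)
            if n != j:
                adjacency[n].add(j)
                adjacency[j].add(n)
        del areas[i]

areas = {1: ("forest", 40.0), 2: ("grass", 3.0), 3: ("water", 25.0), 4: ("grass", 12.0)}
adjacency = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
print(region_growing(areas, adjacency, theta=10.0))   # the grass sliver joins area 4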
J.-H. Haunert, and C. Brenner. Vehicle localization by matching triangulated point patterns. In Proc. 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS '09), pages 344-351. 2009.
We consider the problem of localizing a moving vehicle based on landmarks that were detected with a vehicle-mounted sensor. Landmarks are represented as points; correspondences of these points with those in a reference database are sought based on their geometric configurations. More specifically, we triangulate the landmark points and match the obtained triangles with triangles in a reference database according to their geometric similarity. We maximize the number of triangle matches while considering the topological relations between different triangles; for example, if two triangles share an edge, then the corresponding reference triangles must share an edge. Our method exploits the fact that the observed points typically form a characteristic configuration: They appear at a limited distance from the vehicle's trajectory, thus the typical point pattern has a large extent in the driving direction and a relatively small lateral extent. This characteristic allows us to triangulate the observed point set such that we obtain a triangle strip (a sequence of triangles) in which every two consecutive triangles share one edge and each triangle connects three points that are relatively close to each other, that is, the triangle strip appropriately defines a neighborhood relationship for the landmarks. The adjacency graph of the triangles becomes a path; this allows for an efficient solution of our matching problem by dynamic programming. We present results of our method with data acquired with a mobile laser scanning system. The landmarks are objects of cylindrical shape, for example poles of traffic signs, which can easily be detected with the employed sensor. We tested the method with respect to its running time and its robustness when imposing different types of errors on the data. In particular, we tested the effect of non-rigid distortions of the observed point set, which are typically encountered during dead reckoning. Our matching approach copes well with such errors since it is based on local similarity measures of triangles, that is, we do not assume that a global non-rigid transformation between the observed point set and the reference point set exists.
@inproceedings{haunert2009c,
author = {Haunert, J.-H. and Brenner, C.},
booktitle = {Proc. 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS '09)},
doi = {10.1145/1653771.1653819},
file = {HaunertBrenner2009.pdf:http\://www1.informatik.uni-wuerzburg.de/pub/haunert/pdf/HaunertBrenner2009.pdf:PDF},
pages = {344--351},
title = {Vehicle Localization by Matching Triangulated Point Patterns},
url = {http://doi.acm.org/10.1145/1653771.1653819},
year = {2009}
}
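The following toy sketch, written under simplifying assumptions, illustrates the kind of dynamic program that the strip structure makes possible: each observed triangle is either left unmatched or assigned to a geometrically similar reference triangle, and two consecutively matched strip triangles must map to adjacent reference triangles. The similarity test (comparing sorted side lengths), the tolerance and all coordinates are invented for illustration and are not the paper's actual measures or data.

from itertools import combinations
from math import dist

def side_lengths(tri):
    # sorted side lengths of a triangle given as three (x, y) points
    return sorted(dist(p, q) for p, q in combinations(tri, 2))

def similar(t1, t2, tol=0.5):
    # two triangles match if their corresponding side lengths roughly agree
    return all(abs(a - b) < tol for a, b in zip(side_lengths(t1), side_lengths(t2)))

def match_strip(strip, ref, ref_adj):
    """strip: observed triangles in strip order; ref: id -> reference triangle;
    ref_adj: id -> adjacent reference ids. Returns (match count, assignment)."""
    NEG = float("-inf")
    states = [None] + list(ref)                 # None = strip triangle left unmatched
    dp = {s: (0 if s is None else NEG) for s in states}
    assign = {s: [] for s in states}
    for tri in strip:
        new_dp = {s: NEG for s in states}
        new_assign = {s: [] for s in states}
        for s in states:
            if s is not None and not similar(tri, ref[s]):
                continue                        # s is not a geometric candidate
            for prev in states:
                if dp[prev] == NEG:
                    continue
                # consecutively matched triangles must map to adjacent references
                if s is not None and prev is not None and s not in ref_adj[prev]:
                    continue
                value = dp[prev] + (1 if s is not None else 0)
                if value > new_dp[s]:
                    new_dp[s], new_assign[s] = value, assign[prev] + [s]
        dp, assign = new_dp, new_assign
    best = max(dp, key=dp.get)
    return dp[best], assign[best]

ref = {"a": [(0, 0), (2, 0), (0, 2)],           # reference strip a - b - c
       "b": [(2, 0), (0, 2), (2, 2)],
       "c": [(2, 0), (2, 2), (4, 1)]}
ref_adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
strip = [[(0.1, 0.0), (2.0, 0.1), (0.0, 2.0)],  # perturbed copies of a and b
         [(2.0, 0.1), (0.0, 2.0), (2.1, 2.0)]]
print(match_strip(strip, ref, ref_adj))         # 2 matches and a consistent assignment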
J.-H. Haunert, A. Dilo, and P. van Oosterom. Constrained set-up of the tGAP structure for progressive vector data transfer. Computers & Geosciences, 35(11):2191-2203, 2009.
A promising approach to submitting a vector map from a server to a mobile client is to send a coarse representation first, which is then incrementally refined. We consider the problem of defining a sequence of such increments for areas of different land-cover classes in a planar partition. In order to submit well-generalised datasets, we propose a two-stage method: First, we create a generalised representation from a detailed dataset, using an optimisation approach that satisfies certain cartographic constraints. Second, we define a sequence of basic merge and simplification operations that transforms the most detailed dataset gradually into the generalised dataset. The obtained sequence of gradual transformations is stored without geometrical redundancy in a structure that builds on the previously developed tGAP (topological Generalised Area Partitioning) structure. This structure and the algorithm for intermediate levels of detail (LoD) have been implemented in an object-relational database and tested for land-cover data from the official German topographic dataset ATKIS at scale 1:50 000, generalised to the target scale 1:250 000. The results of these tests allow us to conclude that the data at the lowest LoD and at intermediate LoDs are well generalised. By applying specialised heuristics, the optimisation method copes with large datasets; the tGAP structure allows users to efficiently query and retrieve a dataset at a specified LoD. Data are sent progressively from the server to the client: first a coarse representation is sent, which is refined until the requested LoD is reached.
@article{haunert2009d,
author = {Haunert, J.-H. and Dilo, A. and van Oosterom, P.},
doi = {10.1016/j.cageo.2008.11.002},
file = {HaunertEtAl2009.pdf:http\://www1.informatik.uni-wuerzburg.de/pub/haunert/pdf/HaunertEtAl2009.pdf:PDF},
journal = {Computers \& Geosciences},
number = {11},
pages = {2191-2203},
title = {Constrained set-up of the t{GAP} structure for progressive vector data transfer},
url = {http://dx.doi.org/10.1016/j.cageo.2008.11.002},
volume = {35},
year = {2009}
}
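As a rough illustration of the idea behind the tGAP structure used above (a sketch only, not the paper's database schema), the snippet below records each merge operation once, stores for every face the range of generalization steps in which it is alive, and answers a level-of-detail query as a simple range filter; this is what allows faces to be sent progressively from coarse to fine without geometric redundancy. Face ids, classes and the merge sequence are invented.

class TGapSketch:
    """Toy tGAP-like hierarchy: faces live from a birth step to a death step."""

    def __init__(self, faces):
        # face id -> [class, birth step, death step (None = still alive)]
        self.faces = {f: [c, 0, None] for f, c in faces.items()}
        self.step = 0

    def merge(self, a, b, new_id, new_class):
        """One generalization step: faces a and b die, new_id is born."""
        self.step += 1
        self.faces[a][2] = self.step
        self.faces[b][2] = self.step
        self.faces[new_id] = [new_class, self.step, None]

    def at_lod(self, step):
        """Faces visible after `step` generalization steps (0 = most detailed)."""
        return {f: c for f, (c, birth, death) in self.faces.items()
                if birth <= step and (death is None or death > step)}

t = TGapSketch({"f1": "forest", "f2": "grass", "f3": "water"})
t.merge("f1", "f2", "f4", "forest")   # small grass patch absorbed by the forest
t.merge("f4", "f3", "f5", "forest")   # coarsest level: one region remains
print(t.at_lod(0))                    # {'f1': 'forest', 'f2': 'grass', 'f3': 'water'}
print(t.at_lod(1))                    # {'f3': 'water', 'f4': 'forest'}
print(t.at_lod(2))                    # {'f5': 'forest'}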
B. Kieler, W. Huang, J.-H. Haunert, and J. Jiang. Matching river datasets of different scales. In M. Sester, L. Bernard, and V. Paelke, editors, Advances in GIScience: Proc. 12th AGILE International Conference on Geographic Information Science, Lecture Notes in Geoinformation and Cartography, pages 56-63. Springer-Verlag, Berlin, Germany, 2009.
In order to ease the propagation of updates between geographic datasets of different scales and to support multi-scale analyses, different datasets need to be matched, that is, objects that represent the same entity in the physical world need to be identified. We propose a method for matching datasets of river systems that were acquired at different scales. This task is related to the problem of matching networks of lines, for example road networks. However, we also take into account that rivers may be represented by polygons: the geometric dimension of a river object may depend, for example, on the width of the river and the scale. Our method comprises three steps. First, in order to cope with geometries of different dimensions, we collapse river polygons to centerlines by applying a skeletonization algorithm. We show how to preserve the topology of the river system in this step, which is an important requirement for the subsequent matching steps. Second, we perform a pre-matching of the arcs and nodes of the line network generated in the first step, that is, we detect candidate matches and define their quality. Third, we perform the final matching by selecting a consistent set of good candidate matches. We tested our method for two Chinese river datasets of the same areal extent, which were acquired at scales 1:50 000 and 1:250 000. The evaluation of our results allows us to conclude that our method seldom yields incorrect matches, and the number of correct matches that it misses is quite small.
@inproceedings{kieler2009,
author = {Kieler, B. and Huang, W. and Haunert, J.-H. and Jiang, J.},
booktitle = {Advances in GIScience: Proc. 12th AGILE International Conference on Geographic Information Science},
doi = {10.1007/978-3-642-00318-9_7},
editor = {Sester, M. and Bernard, L. and Paelke, V.},
file = {KielerEtAl2009.pdf:http\://www1.informatik.uni-wuerzburg.de/pub/haunert/pdf/KielerEtAl2009.pdf:PDF},
pages = {56--63},
publisher = {Springer-Verlag, Berlin, Germany},
series = {Lecture Notes in Geoinformation and Cartography},
title = {Matching River Datasets of Different Scales},
url = {http://dx.doi.org/10.1007/978-3-642-00318-9_7},
year = {2009}
}
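A minimal sketch of the second and third steps described in the abstract (pre-matching candidate arcs, then selecting a consistent set of matches), assuming the river polygons have already been collapsed to centerlines. Arcs are reduced to their two endpoints; the scoring function, the threshold and the sample coordinates are assumptions made purely for illustration.

from math import dist

def score(arc_a, arc_b):
    # candidate quality: small endpoint distances and similar lengths score high
    (a1, a2), (b1, b2) = arc_a, arc_b
    end_dist = min(dist(a1, b1) + dist(a2, b2), dist(a1, b2) + dist(a2, b1))
    len_diff = abs(dist(a1, a2) - dist(b1, b2))
    return 1.0 / (1.0 + end_dist + len_diff)

def match_arcs(arcs_a, arcs_b, min_score=0.2):
    # pre-matching: keep every candidate pair whose score passes a plausibility threshold
    candidates = [(s, i, j)
                  for i, a in arcs_a.items()
                  for j, b in arcs_b.items()
                  if (s := score(a, b)) >= min_score]
    # final matching: greedily keep the best-scoring candidates that stay one-to-one
    matches, used_a, used_b = {}, set(), set()
    for s, i, j in sorted(candidates, reverse=True):
        if i not in used_a and j not in used_b:
            matches[i] = j
            used_a.add(i)
            used_b.add(j)
    return matches

arcs_50k = {"r1": ((0, 0), (5, 1)), "r2": ((5, 1), (9, 4))}            # detailed dataset
arcs_250k = {"s1": ((0.2, 0), (5, 1.3)), "s2": ((5, 1.3), (9.1, 4))}   # coarse dataset
print(match_arcs(arcs_50k, arcs_250k))                                 # r1 -> s1, r2 -> s2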
B. Kieler, J.-H. Haunert, and M. Sester. Deriving scale-transition matrices from map samples for simulated annealing-based aggregation. Annals of GIS, 15(2):107-116, 2009.
The goal of the research described in this article is to derive parameters necessary for the automatic generalization of land-cover maps. A digital land-cover map is often given as a partition of the plane into areas of different classes. Generalizing such maps is usually done by aggregating small areas into larger regions. This can be modelled using cost functions in an optimization process, where a major objective is to minimize the class changes. Thus, an important input parameter for the aggregation is the information about possible aggregation partners of individual object classes. This can be coded in terms of a transition matrix listing the costs that are charged for changing a unit area from one class into another. In our case, we consider the problem of determining the transition matrix based on two data sets of different scales. We propose three options to solve the problem: (1) the conventional way, in which an expert defines the transition matrix manually; (2) deriving the transition matrix from an analysis of an overlay of both data sets; and (3) an automatic way, in which the optimization is iterated while adapting the transition matrix until the difference between the intersection areas of both data sets before and after the generalization is minimized. As the underlying aggregation procedure, we use an approach based on global combinatorial optimization. We tested our approach for two German topographic data sets of different origin, which cover the same areal extent and were acquired at scales 1:1,000 and 1:25,000, respectively. The evaluation of our results allows us to conclude that our method is promising for the derivation of transition matrices from map samples. In the discussion, we describe the advantages and disadvantages and show options for future work.
@article{kieler2009b,
author = {Kieler, B. and Haunert, J.-H. and Sester, M.},
doi = {10.1080/19475680903464639},
file = {KielerEtAl2009b.pdf:http\://www1.informatik.uni-wuerzburg.de/pub/haunert/pdf/KielerEtAl2009b.pdf:PDF},
journal = {Annals of GIS},
number = {2},
pages = {107--116},
title = {Deriving scale-transition matrices from map samples for simulated annealing-based aggregation},
url = {http://dx.doi.org/10.1080/19475680903464639},
volume = {15},
year = {2009}
}
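To make option (2) concrete, the small worked example below derives transition costs from an overlay of the detailed and the generalized dataset: the cost of changing a unit area from class i to class j is taken as the negative logarithm of the observed share of class-i area that ends up as class j. The overlay areas are invented, and the actual cost definition used in the article may differ.

from math import log

# overlap[i][j] = area (e.g. in hectares) of detailed class i that is covered by
# generalized class j in the overlay of the two data sets (illustrative numbers)
overlap = {
    "settlement": {"settlement": 90.0, "grass": 8.0, "forest": 2.0},
    "grass": {"settlement": 15.0, "grass": 60.0, "forest": 25.0},
    "forest": {"settlement": 1.0, "grass": 9.0, "forest": 90.0},
}

def transition_costs(overlap, eps=1e-6):
    costs = {}
    for i, row in overlap.items():
        total = sum(row.values())
        costs[i] = {j: -log(max(area / total, eps)) for j, area in row.items()}
        costs[i][i] = 0.0                       # keeping the class is free
    return costs

for i, row in transition_costs(overlap).items():
    print(i, {j: round(c, 2) for j, c in row.items()})
# e.g. grass -> forest costs about -log(0.25) = 1.39, grass -> settlement about 1.90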
Y. Dehbi, J. Schmittwilken, and L. Plümer. Learning semantic models and grammar rules of building parts. In Proc. of the ISPRS WG II/2+3+4 and Cost Workshop, Lund, Sweden, volume XXXVIII-2/W11 of ISPRS Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2009.
Building reconstruction and building model generation nowadays receive increasing attention. In this context, models such as formal grammars play a major role in 3D geometric modelling. Up to now, models have been designed manually by experts such as architects. Hence, this paper describes an Inductive Logic Programming (ILP)-based approach for learning semantic models and grammar rules of buildings and their parts. Due to their complex structure and their important role as a link between the building and its exterior, straight stairs are presented as an example. ILP is introduced and applied as a machine learning method. The learning process is explained, and the learned models and results are presented.
@inproceedings{dehbi2009,
author = {Dehbi, Y. and Schmittwilken, J{\"{o}}rg and Pl{\"{u}}mer, L.},
booktitle = {Proc. of the ISPRS WG II/2+3+4 and Cost Workshop. Lund, Sweden 2009},
series = {ISPRS Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences},
title = {Learning Semantic Models and Grammar Rules of Building Parts},
volume = {XXXVIII-2/W11},
year = {2009}
}
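As a loose and heavily simplified illustration of the bottom-up generalization idea that ILP builds on (not the paper's actual ILP setup, which works on logical representations of building parts), the snippet below starts from the most specific hypothesis given by one positive stair example and widens interval constraints just enough to cover the remaining examples. The attribute names and measurements are invented.

# Specific-to-general rule induction over numeric attributes (illustrative only;
# the attribute names and values below are assumptions, not data from the paper).
positive_stairs = [
    {"riser_height": 0.17, "tread_depth": 0.29, "width": 1.00},
    {"riser_height": 0.18, "tread_depth": 0.27, "width": 1.20},
    {"riser_height": 0.16, "tread_depth": 0.30, "width": 0.90},
]

def generalize(examples):
    """Least general interval hypothesis that covers all positive examples."""
    rule = {k: (v, v) for k, v in examples[0].items()}       # most specific start
    for ex in examples[1:]:
        rule = {k: (min(lo, ex[k]), max(hi, ex[k])) for k, (lo, hi) in rule.items()}
    return rule

for attr, (lo, hi) in generalize(positive_stairs).items():
    print(f"straight_stair :- {lo} <= {attr} <= {hi}")
# e.g. straight_stair :- 0.16 <= riser_height <= 0.18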