### Planet PostGIS

Welcome to Planet PostGIS

February 16, 2021

Crunchy Data

ArcGIS Feature Service to PostGIS: The QGIS Way

As a GIS newbie, I've been trying to use local open data for my own learning projects. I recently relocated to Tampa, Florida and was browsing through the City of Tampa open data portal and saw that they have a Public Art map. That looked like a cool dataset to work with but I couldn't find the data source anywhere in the portal. I reached out to the nice folks on the city's GIS team and they gave me an ArcGIS-hosted URL. 

To get the public art features into PostGIS I decided to use the "ArcGIS Feature Service" option in QGIS to point to the ArcGIS API, then export the feature layer to PostGIS. 

by Kat Batuigas (kat.batuigas@crunchydata.com) at February 16, 2021 02:31 PM

February 02, 2021

Martin Davis (Lin.ear th.inking)

OverlayNG and Invalid Geometry

A recent blog post by Elephant Tamer gives a critical appraisal of the improvements to overlay processing shipped in PostGIS 3.1 with GEOS 3.9.  The author is disappointed that PostGIS still reports errors when overlay is used on invalid geometry.  However, this is based on a misunderstanding of the technology. 

GEOS 3.9 includes OverlayNG, ported from the JTS Topology Suite. It brings a major advance in overlay robustness, along with other improvements (described here and here).  Previously, robustness limitations in the overlay algorithm could sometimes cause errors even for inputs which were topologically valid.  This was doubly problematic because there was no fully effective way to process the input geometries to avoid the errors.  Now, OverlayNG solves this problem completely.  Valid inputs will always produce a valid and essentially correct(*) output.

(*) "Essentially" correct, because in order to achieve full robustness a snapping heuristic may be applied to the input geometry.  However, this is done with a very fine tolerance, so should not appreciably alter the output from the theoretically correct value.

But for invalid inputs, OverlayNG will still report errors.  The reason is that there is a wide variety of gruesome ways in which geometry can be invalid.  Automated handling of invalidity would involve expensive extra processing, and also require making assumptions about what area a corrupt geometry is intended to represent.  Rather than silently repairing invalid geometry and returning potentially incorrect results, the design decision is to report this situation as an error.

In fact, OverlayNG is able to handle "mildly" invalid polygons, as described in this post. This covers situations which are technically invalid according to the OGC SFS specification, but which still have well-defined topology.  This includes self-touching rings (sometimes called "inverted polygons" or "exverted holes"), and zero-width gores and spikes.

Taking a detailed look at the data used in the blog post, we can see these improvements at work.  The dataset is the ecology polygons obtained from the GDOS WFS server.  This contains 7662 geometries, of which 10 are invalid.  Using the old overlay algorithm, 9 of these invalid polygons cause TopologyException errors.  Using OverlayNG, only 4 of them cause errors. 
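
For reference, finding such invalid geometries is a one-line query.  A minimal sketch, assuming the layer was loaded into a table named ecology with id and geom columns (hypothetical names):

SELECT id, ST_IsValidReason(geom) AS reason
FROM ecology
WHERE NOT ST_IsValid(geom);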

The polygons that can now be processed successfully are typical "OGC-invalid" situations, which do not materially affect the polygonal topology.  These include self-touching rings with pinch points, and zero-width gores.

Of the cases that still cause errors, two are classic small bow-tie errors, and two are wildly invalid self-crossing rings.

The last two are good examples of geometry which is so invalid that it is impossible to unambiguously decide what area is represented (although ST_MakeValid will happily grind them into something that is technically valid).

Ultimately it is the user's responsibility to ensure that geometries to be processed by overlay (and many other PostGIS functions) have valid topology (as reported by ST_IsValid).  Ideally this is done by correcting the data at source.  But it can also be done a posteriori in the database itself, by either the ST_MakeValid function, or the well-known buffer(0) trick. (Which to use is a topic for another blog post...)
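
As a minimal sketch of the a posteriori approach (the parcels table and geom column are hypothetical, and note that the two repairs can produce different results):

UPDATE parcels
SET geom = ST_MakeValid(geom)
WHERE NOT ST_IsValid(geom);

-- or the well-known buffer(0) trick:
UPDATE parcels
SET geom = ST_Buffer(geom, 0)
WHERE NOT ST_IsValid(geom);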

One improvement that could be made is to check for input validity when OverlayNG throws an error.  Then PostGIS can report definitively that an overlay error is caused by invalid input.  If there is an overlay error that is not caused by invalidity, the PostGIS team wants to hear about it! 

And perhaps there is a case to be made for repairing invalid geometry automatically, even if the repair is suspect.  Possibly this could be invoked via a flag parameter on the overlay functions. More research is required - feedback is welcome!

by Dr JTS (noreply@blogger.com) at February 02, 2021 06:42 PM

January 28, 2021

PostGIS Development

PostGIS 3.1.1

The PostGIS Team is pleased to announce the release of PostGIS 3.1.1!

This release is a bug fix release, addressing issues found in the previous 3.1 release.

  • #4814, Crash passing collection with only empty components to ST_MakeValid
  • #4818, Make the VSICURL synthetic driver work as documented
  • #4825, Unstable results from ST_MakeValid
  • #4823, Avoid listing the same geometry in different collections
Continue Reading by clicking title hyperlink ..

by Paul Ramsey at January 28, 2021 12:00 AM

December 19, 2020

Martin Davis (Lin.ear th.inking)

Randomization to the Rescue!

Now that OverlayNG has landed on JTS master, it is getting attention from downstream projects interested in porting it or using it.  One of the JTS ports is the Net Topology Suite (NTS) project, and it is very proactive about tracking the JTS codebase.  Soon after the OverlayNG commit an NTS developer noticed an issue while running the performance tests in JTS: the InteriorPointInAreaPerfTest was now causing a StackOverflowError.  This turned out to be caused by the change to GeometryPrecisionReducer to use OverlayNG with Snap-Rounding to perform robust precision reduction of polygons.  Further investigation revealed that failure was occurring while querying the KdTree used in the HotPixelIndex in the SnapRoundingNoder.  Moreover, in addition to the outright failure, even when queries did succeed they were taking an excessively long time for large inputs.

The reason for using a K-D tree as the search structure for HotPixels is that it supports two kinds of queries with a single structure: 

  • Lookup queries to find the HotPixel for a coordinate.  These queries are performed while building the index incrementally. Hence a dynamic data structure like a K-D tree is required, rather than a static index such as STRtree.
  • Range queries to find all HotPixels within a given extent.  These queries are run after the index is loaded.

The JTS KdTree supports both of these queries more efficiently than QuadTree, which is the other dynamic index in JTS.  However, K-D trees are somewhat notorious for becoming unbalanced if inserted points are coherent (i.e. contain monotonic runs in either ordinate dimension).  This is exactly the situation which occurs in large, relatively smooth polygons - which is the test data used in InteriorPointInAreaPerfTest.  An unbalanced tree is deeper than it would be if perfectly balanced.  In extreme cases this leads to trees of very large depth relative to their size.  This slows down query performance and, because the KdTree query implementation uses recursion, can also lead to stack overflow.  

A perfectly balanced K-D tree, with depth = logN.  In the worst case an unbalanced tree has only one node at each level (depth = N).

I considered a few alternatives to overcome this major blocker.  One was to use a QuadTree, but as mentioned this would reduce performance.  There are schemes to load a K-D tree in a balanced way, but they seemed complex and potentially non-performant.

It's always great when people who file issues are also able to provide code to fix the problem.  And happily the NTS developer submitted a pull request with a very nice solution.  He observed that while the K-D tree was being built incrementally, in fact the inserted points were all available beforehand.  He proposed randomizing them before insertion, using the elegantly simple Fisher-Yates shuffle.  This worked extremely well, providing a huge performance boost and eliminating the stack overflow error.  Here are the relative performance times:

Num Pts    Randomized    In Order
10K        126 ms        341 ms
20K        172 ms        1924 ms
50K        417 ms        12.3 s
100K       1030 ms       59 s
200K       1729 ms       240 s
500K       5354 ms       (stack overflow)

Once the solution was merged into JTS, my colleague Paul Ramsey quickly ported it to GEOS, so that PostGIS and all the other downstream clients of GEOS would not encounter this issue.

It's surprising to me that this performance issue hasn't shown up in the other two JTS uses of KdTree: SnappingNoder and ConstrainedDelaunayTriangulator. More investigation required!




by Dr JTS (noreply@blogger.com) at December 19, 2020 03:35 AM

December 18, 2020

PostGIS Development

PostGIS 3.1.0

The PostGIS Team is pleased to announce the release of PostGIS 3.1.0!

This version exposes the new features of GEOS 3.9 as well as numerous core performance enhancements for spatial joins, large object access, text format output and more.

Performance is a key feature of this release, with improvements to spatial joins, text outputs, large object reads, vector tile output, and a host of smaller tweaks.

The k-means clustering code has been enhanced to support weighting and higher dimensional clusters.
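
For example, basic usage of the clustering function looks like this (table and column names are hypothetical; the new weighting and higher-dimension behaviour is covered in the full release notes):

SELECT id, ST_ClusterKMeans(geom, 10) OVER () AS cluster_id
FROM pois;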

Geometry generators to create hexagonal and square tilings have been added, for simpler in-the-database summarization queries.

Finally, PostGIS exposes the latest enhancements in the GEOS geometry library 3.9 version. The new overlay engine (aka “OverlayNG”) provides more robust handling of difficult input geometries, using a set of new noding strategies to process geometry. For the end user, this should mean no more “topology exceptions” when using the union, difference, intersection or symmetric difference functions. PostGIS also exposes the new fixed precision overlay capability via an additional grid-size parameter on ST_Intersection and the other overlay functions.

Continue Reading by clicking title hyperlink ..

by Paul Ramsey at December 18, 2020 12:00 AM

December 16, 2020

Paul Ramsey

Waiting for PostGIS 3.1: GEOS 3.9

This post originally appeared on the Crunchy Data blog.


While we talk about “PostGIS” like it’s one thing, it’s actually a collection of specialized geospatial libraries, along with a bunch of code of its own.

  • PostGIS provides core functionality
    • bindings to PostgreSQL, the types and indexes,
    • format reading and writing
    • basic algorithms like distance and area
    • performance tricks like caching
    • simple geometry manipulations (add a point, dump rings, etc)
    • algorithms that don’t exist in the other libraries
  • Proj provides coordinate system transformations
  • GDAL provides raster algorithms and format support
  • GEOS provides computational geometry algorithms
    • geometry relationships, like “intersects”, “touches” and “relate”
    • geometry operations, like “intersection”, “union”
    • basic algorithms, like “triangulate”

The algorithms in GEOS are actually a port to C++ of algorithms in the JTS Java library. The ecosystem of projects that depend on GEOS or JTS or one of the other language ports of GEOS is very large.

GEOS/JTS Ecosystem

Overlay NG

Over the past 12 months, the geospatial team at Crunchy Data has invested heavily in JTS/GEOS development, overhauling the overlay engine that backs the Intersection, Union, Difference and SymDifference functions in all the projects that depend on the library.

Intersection / Union

The new overlay engine, “Overlay NG”, promises to be more reliable, and hopefully also faster for most common cases.

One use of overlay code is chopping large objects up, to find the places they have in common. This query summarizes climate zones (bec) by watershed (wsa).

SELECT 
    Sum(ST_Area(ST_Intersection(b.geom, w.geom))) AS area_zone, 
    w.wsd_id, 
    b.zone
FROM bec b
JOIN wsa w
ON ST_Intersects(b.geom, w.geom)
WHERE w.nwwtrshdcd like '128-835500-%'
GROUP BY 2, 3

Summarization

The new implementation for this query runs about 2 times faster than the original. Even better, when run on a larger area with more data, the original implementation fails – it’s not possible to get a result out. The new implementation runs to completion.

Another common use of overlay code is melting together areas that share an attribute. This query takes (almost) every watershed on Vancouver Island and melts them together.

SELECT ST_Union(geom)
FROM wsa
WHERE nwwtrshdcd like '920-%'
   OR nwwtrshdcd like '930-%'

At the start, there are 1384 watershed polygons.

Vancouver Island watersheds

At the end there is just one.

Vancouver Island

The new implementation takes about 50% longer currently, but it is more robust and less likely to fail than the original.

Fixed Precision Overlay

The way Overlay NG ensures robust results is by falling back to progressively more reliable noding approaches. “Noding” refers to how new vertices are introduced into geometries during the overlay process.

  • Initially a naive “floating point” noding is used, that just uses double precision coordinates. This works most of the time, but occasionally fails when noding “almost parallel” edges.
  • On failure, a “snapping” noding is used, which nudges nearby edges and nodes together within a tolerance. That works most of the time, but occasionally fails.
  • Finally, a “fixed precision” noding nudges all of the coordinates in both geometries into a fixed space, where edge collapses can be handled deterministically. This is the lowest-performance approach, but it is needed only very rarely.

Sometimes, end users actually prefer to have their geometry forced into a fixed precision grid, and for overlay to use a fixed precision. For those users, with PostGIS 3.1 and GEOS 3.9 there are some new parameters in the intersection/union/difference functions.

Precision reduction

The new “gridSize” parameter determines the size of the grid to snap to when generating new outputs. This can be used both to generate new geometries, and also to precision reduce existing geometries, just by unioning a geometry with an empty geometry.
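
A sketch of both uses, reusing the bec and wsa tables from the queries above (the 0.001 grid size is an arbitrary example value):

-- overlay computed on a fixed precision grid
SELECT ST_Intersection(b.geom, w.geom, 0.001) AS geom
FROM bec b
JOIN wsa w
ON ST_Intersects(b.geom, w.geom);

-- precision-reduce an existing geometry by unioning with an empty geometry
SELECT ST_Union(geom, 'POINT EMPTY'::geometry, 0.001) AS geom
FROM wsa;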

Inscribed Circle

As always, there are a few random algorithmic treats in each new GEOS release. For 3.9, there is the “inscribed circle”, which finds the largest circle that can be fit inside a polygon (or any other boundary).

Vancouver Island inscribed circle

In addition to making a nice picture, the inscribed circle functions as a measure of the “wideness” of a polygon, so it can be used for things like analyzing river polygons to determine the widest point, or placing labels.
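
A sketch of how that might look in SQL (ST_MaximumInscribedCircle returns a record with a center, a nearest point, and a radius; the rivers table is hypothetical):

SELECT name,
       (mic).center AS widest_spot,
       (mic).radius * 2 AS width
FROM (
    SELECT name, ST_MaximumInscribedCircle(geom) AS mic
    FROM rivers
) t;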

December 16, 2020 08:00 AM

Paul Ramsey

Waiting for PostGIS 3.1: Grid Generators

This post originally appeared on the Crunchy Data blog.


Summarizing data against a fixed grid is a common way of preparing data for analysis. Fixed grids have some advantages over natural and administrative boundaries:

  • No appeal to higher authorities
  • Equal unit areas
  • Equal distances between cells
  • Good for passing data from the “spatial” computational realm to a “non-spatial” realm

Ideally, we want to be able to generate grids that have some key features:

  • Fixed origin point, so that grid can be re-generated and not move
  • Fixed cell coordinates for a given cell size, so that the same cell can be referred to just using a cell address, without having to materialize the cell bounds

ST_SquareGrid()

The ST_SquareGrid(size, bounds) function generates a grid with an origin at (0, 0) in the coordinate plane, and fills in the square bounds of the provided geometry.

SELECT (ST_SquareGrid(400000, ST_Transform(a.geom, 3857))).* 
FROM admin a  
WHERE name = 'Brazil';

So a grid generated using Brazil as the driving geometry looks like this.

Brazil square grid

ST_HexagonGrid()

The ST_HexagonGrid(size, bounds) function works much the same as the square grid function.

Hexagons are popular for some cartographic display purposes and modeling purposes. Surprisingly they can also be indexed using the same two-dimensional indexing scheme as squares.

The hexagon grid starts with a (0, 0) hexagon centered at the origin, and the gridding for a bounds includes all hexagons that touch the bounds.

Hexagon gridding

As with the square gridding, the coordinates of hexes are fixed for a particular gridding size.

SELECT (ST_HexagonGrid(100000, ST_Transform(a.geom, 3857))).* 
FROM admin a  
WHERE name = 'Germany';

Here’s a 100km hexagon gridding of Germany.

Germany hex grid

Summarizing with Grids

It’s possible to materialize grid-based summaries, without actually materializing the grids, using the generator functions to create the desired grids on-the-fly.

Here’s a summary of population points, using a hex grid.

SELECT sum(pop_max) as pop_max, hexes.geom
FROM
    ST_HexagonGrid(
        4.0,
        ST_SetSRID(ST_EstimatedExtent('places', 'geom'), 4326)
    ) AS hexes
    INNER JOIN
    places AS p
    ON ST_Intersects(p.geom, hexes.geom)
GROUP BY hexes.geom;

World population summary

It’s also possible to join up on-the-fly gridding to visualization tools, for very dynamic user experiences, feeding these dynamically generated grids out to the end user via pg_tileserv.

December 16, 2020 08:00 AM

Paul Ramsey

Waiting for PostGIS 3.1: Performance

This post originally appeared on the Crunchy Data blog.


Open source developers sometimes have a hard time figuring out what feature to focus on, in order to generate the maximum value for end users. As a result, they will often default to performance.

Performance is the one feature that every user approves of. The software will keep on doing all the same cool stuff, only faster.

For PostGIS 3.1, there have been a number of performance improvements that, taken together, might add up to a substantial performance gain for your workloads.

Large Geometry Caching

Spatial joins have been slowed down by the overhead of large geometry access for a very long time.

SELECT A.*, B.*
FROM A
JOIN B
ON ST_Intersects(A.geom, B.geom)

PostgreSQL will plan and execute spatial joins like this using a “nested loop join”, which means iterating through one side of the join, and testing the join condition. This results in executions that look like:

  • ST_Intersects(A.geom(1), B.geom(1))
  • ST_Intersects(A.geom(1), B.geom(2))
  • ST_Intersects(A.geom(1), B.geom(3))

So one side of the test repeats over and over.

Geometry Caches

Caching that side and avoiding re-reading the large object for each iteration of the loop makes a huge difference to performance. We have seen 20 times speed-ups in common spatial join workloads (see below).

The fixes are quite technical, but if you are interested we have a detailed write-up available.

Header-only Geometry Reads

The on-disk format for geometry includes a short header that includes information about the geometry bounds, the spatial reference system and dimensionality. That means it’s possible for some functions to return an answer after only reading a few bytes of the header, rather than the whole object.

However, not every function that could do a fast read, did do a fast read. That is now fixed.
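
For instance, functions that need only the SRID or dimensionality should be able to answer from the header alone, without detoasting a multi-megabyte object (a hedged illustration; the exact list of functions fixed isn't enumerated here):

SELECT ST_SRID(geom), ST_NDims(geom)
FROM admin;  -- answerable from the header, without reading every vertex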

Faster Text Generation

It’s pretty common for web applications and others to generate text formats inside the database, and the code for doing so was not optimized. Generating “well-known text” (WKT), GeoJSON, and KML output all now use a faster path and avoid unnecessary copying.

PostGIS also now uses the same number-to-text code as PostgreSQL, which has been shown to be faster, and also allows us to expose a little more control over precision to end users.
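
For example, assuming the new optional precision argument on the text output functions, the number of decimal digits can now be capped at output time:

SELECT ST_AsText(geom, 5) FROM places LIMIT 1;     -- at most 5 decimal digits
SELECT ST_AsGeoJSON(geom, 5) FROM places LIMIT 1;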

How Much Faster?

For the specific use case of spatially joining, here is my favourite test case:

Admin0 and Populated Places

Load the data into both versions.

shp2pgsql -D -s 4326 -I ne_10m_admin_0_countries admin | psql postgis30
shp2pgsql -D -s 4326 -I ne_10m_populated_places places | psql postgis30

Run a spatial join that finds the sum of populated places in each country.

EXPLAIN ANALYZE
SELECT Sum(p.pop_max) as pop_max, a.name
FROM admin a
JOIN places p
ON ST_Intersects(a.geom, p.geom)
GROUP BY a.name

Average time over 5 runs:

  • PostGIS 3.0 = 23.4s
  • PostGIS 3.1 = 0.9s

This test is somewhat of a “worst case”, in that there are lots of very large countries in the admin data, but it gives an idea of the kind of speed-ups that are available for spatial joins against collections that include larger (250+ coordinates) geometries.

December 16, 2020 08:00 AM

December 14, 2020

PostGIS Development

PostGIS 3.1.0rc1

The PostGIS Team is pleased to announce the release candidate of the upcoming PostGIS 3.1.0 release. This version exposes some of the new performance and feature enhancements in GEOS 3.9, as well as numerous speed enhancements not requiring newer GEOS.

Best served with:

Continue Reading by clicking title hyperlink ..

by Paul Ramsey at December 14, 2020 12:00 AM

December 13, 2020

Raúl Marín Rodríguez

Waiting for Postgis 3.1: Performance with large geometries

Today I want to talk about partial decompression in PostGIS and how it has improved the speed of some operations with large geometries, that is, geometries made of hundreds of points or vertices. I don’t think we’ve intentionally focused on performance for the upcoming PostGIS 3.1, but unconsciously I...

December 13, 2020 11:00 PM

Raúl Marín Rodríguez

Using WebAssembly to add Argon2 to Snowflake

Snowflake seems like one of the hottest databases of the moment. Sorry, data warehouse. Tomayto, tomahto. Anyway, I wanted to give it a try and see how hard it is to add new SQL functions to it. Although Snowflake supports Javascript User Defined Functions I don’t like using JS to...

December 13, 2020 11:00 PM

December 09, 2020

PostGIS Development

PostGIS 3.1.0beta1

The PostGIS Team is pleased to release the first beta of the upcoming PostGIS 3.1.0 release. This version exposes some of the new performance and feature enhancements in the not yet released GEOS 3.9, as well as numerous speed enhancements not requiring newer GEOS. Requires GEOS 3.6+ and PostgreSQL 9.6+. To use MVT you will need protobuf-c 1.1 or higher.

Best served with:

Continue Reading by clicking title hyperlink ..

by Paul Ramsey at December 09, 2020 12:00 AM

December 05, 2020

Raúl Marín Rodríguez

Waiting for Postgis 3.1: Output functions

On the improvements to PostGIS output functions - To continue with the changes in PostGIS 3.1, in this post I’ll cover the performance improvements on many functions that output geometries either as binary or as text. I will talk about several changes which have in common that they were kicked off by a single question: “Now what?” After the...

December 05, 2020 11:00 PM

November 20, 2020

PostGIS Development

PostGIS 3.0.3

The PostGIS Team is pleased to release PostGIS 3.0.3.

Best served with PostgreSQL 13.1, and GEOS 3.8.1

pgRouting 3.1.1

Continue Reading by clicking title hyperlink ..

by Regina Obe at November 20, 2020 12:00 AM

November 19, 2020

Paul Ramsey

Waiting for Postgis 3.1: Vector tile improvements

This is a guest post from Raúl Marín, a core PostGIS contributor and a former colleague of mine at Carto. Raúl is an amazing systems engineer and has been cruising through the PostGIS code base making things faster and more efficient. You can find the original of this post at his new personal tech blog. – Paul


I’m not big on creating new things; I would rather work on improving something that’s already in use and has proven its usefulness. So whenever I’m thinking about what I should do next I tend to look for projects or features that are widely used, where the balance between development and runtime costs favors a more in-depth approach.

Upon reviewing the changes of the upcoming PostGIS 3.1 release, it shouldn’t come as a surprise then that most of my contributions are focused on performance. When in doubt, just make it faster.

Since CARTO, the company that pays for my lunch, uses PostGIS’ Vector Tile functions as its backend for dynamic vector maps, any improvement there will have a clear impact on the platform. This is why since the appearance of the MVT functions in PostGIS 2.4 they’ve been enhanced in each major release, and 3.1 wasn’t going to be any different.

On this occasion the main reason behind the changes wasn’t the usual me looking for trouble, but the other way around. As ST_AsMVT makes it really easy to extract information from the database and into the browser, a common pitfall is to use SELECT * to extract all available columns, which might move a lot of data unnecessarily and generate extremely big tiles. The easy solution to this problem is to select only the properties needed for the visualization, but it’s hard to apply that retroactively once the application is in production and already depending on the inefficient design.
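
A sketch of the recommended pattern (the table, property column, and tile coordinates here are hypothetical, and the geometry is assumed to already be stored in web mercator):

WITH mvtgeom AS (
    SELECT
        ST_AsMVTGeom(t.geom, ST_TileEnvelope(12, 1024, 1531)) AS geom,
        t.name  -- only the properties the visualization needs, never SELECT *
    FROM points t
    WHERE t.geom && ST_TileEnvelope(12, 1024, 1531)
)
SELECT ST_AsMVT(mvtgeom.*)
FROM mvtgeom;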

So there I was, looking into why the OOM killer was stopping databases, and discovering queries using a massive amount of resources to generate tiles 50-100 times bigger than they should be (the recommendation is smaller than 500 KB). And in this case, the bad design of extracting all columns from the dataset was worsened by the fact that it was being applied to a large dataset; this triggered PostgreSQL parallelism, requiring extra resources to generate chunks in parallel and later merge them together. In PostGIS 3.1 I introduced several changes to improve the performance of these two steps: the parallel processing and the merge of intermediate results.

The changes

Without getting into too much detail, the main benefit comes from changing the vector tile .proto such that a feature can only hold one value at a time. This is what the specification says, but not what the .proto enforced, so the internal library was allocating memory that it never used.

There are other additional changes, such as improving how values are merged between parallel workers, so feel free to have a look at the final commit itself if you want more details.

Performance comparison

The best way to see the impact of these changes is through some examples. In both cases I am generating the same tile, on the same exact server and with the same dependencies; the only change was to replace the PostGIS library, which from 3.0 to 3.1 doesn’t require a database upgrade.

In the first example the tile contains all the columns of the 287k points in it. As I’ve mentioned before, it is discouraged to do this, but it is the simplest query to generate.

And for the second example, I’m generating the same tile but now only including the minimal columns for the visualization:

We can see, both in 3.0 and 3.1, that adding only the necessary properties makes things 10 times as fast as with the full data, and also that PostGIS 3.1 is 30-40% faster in both situations.

Memory usage

Aside from speed, this change also greatly reduces the amount of memory used to generate a tile.

To see it in action, we monitor the PostgreSQL process while it’s generating the tile with all the properties. In 3.0, we observe in the blue line that the memory usage increases with time until it reaches around 2.7 GB at the end of the transaction.

We now monitor the same request on a server using PostGIS 3.1. In this case the server uses around a third of the memory it did in 3.0 (1 GB vs 2.7 GB) and, instead of increasing linearly, the memory is returned back to the system as soon as possible.

To sum it all up: PostGIS 3.1 is faster and uses less memory when generating large vector tiles.

November 19, 2020 08:00 PM

PostGIS Development

PostGIS 3.1.0alpha3

The PostGIS Team is pleased to release the third alpha of the upcoming PostGIS 3.1.0 release. This version exposes some of the new performance and feature enhancements in the not yet released GEOS 3.9, as well as numerous speed enhancements not requiring newer GEOS. Requires GEOS 3.6+ and PostgreSQL 9.6+. To use MVT you will need protobuf-c 1.1 or higher.

Best served with

PostgreSQL 13.1, GEOS 3.7 or higher is recommended.

pgRouting 3.1.1

Continue Reading by clicking title hyperlink ..

by Regina Obe at November 19, 2020 12:00 AM

November 18, 2020

Raúl Marín Rodríguez

Waiting for Postgis 3.1: Vector tile improvements

On the improvements to MVT generation in PostGIS 3.1 - I’m not big on creating new things, I would rather work on improving something that’s already in use and has proven its usefulness. So whenever I’m thinking about what I should do next I tend to look for projects or features that are widely used, where the balance between development...

November 18, 2020 11:00 PM

October 26, 2020

Boston GIS (Regina Obe, Leo Hsu)

Waiting for PostGIS 3.1: ST_Subdivide and other function support with fixed precision

One of the new features coming in PostGIS 3.1 is fixed precision support. This new feature will require compilation with the not yet released GEOS 3.9. There are a couple of functions already in PostGIS 3.1 that have this new feature -- they are ST_Subdivide, ST_Union, ST_SymDifference, and ST_UnaryUnion, as summarized in What's new in PostGIS 3.1. To take advantage of these new, to-die-for features, you'll need to have compiled your PostGIS with development GEOS 3.9, which is planned for release around the same time we get around to releasing PostGIS 3.1. Windows users can download binaries with GEOS 3.9 support from PostGIS windows experimental binaries
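
A hedged sketch of what the fixed precision calls might look like, assuming the grid size is exposed as an optional trailing argument as it is in the overlay functions (the parcels table is hypothetical):

-- subdivide into parts of at most 128 vertices, snapped to a 0.001 grid
SELECT ST_Subdivide(geom, 128, 0.001) AS geom
FROM parcels;

-- union two geometries on the same fixed grid
SELECT ST_Union(
    'POLYGON((0 0, 1 0, 1 1, 0 0))'::geometry,
    'POLYGON((0.5 0.5, 2 0.5, 2 2, 0.5 0.5))'::geometry,
    0.001);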

The ST_Union feature should improve a lot of cases where people ran into topological exceptions. Perhaps I'll demonstrate that in a separate article once I've given it a test drive.

Continue reading "Waiting for PostGIS 3.1: ST_Subdivide and other function support with fixed precision"

by Regina Obe (nospam@example.com) at October 26, 2020 03:33 AM

October 12, 2020

Paul Ramsey

Talking PostGIS on Podcasts

Here in the Covid-times, I haven’t been able to keep up my previous schedule of speaking at conferences, but I have managed to participate in a couple of episodes of the MapScaping Podcast, hosted by Daniel O’Donohue.

Podcast

Daniel is a great interviewer and really puts together a tight show. So far I’ve been on two, and I quietly hope to join him again some time in the future.

October 12, 2020 08:00 AM

August 15, 2020

PostGIS Development

PostGIS 3.0.2, 2.5.5, 2.4.9 Released

The PostGIS development team is pleased to provide bug fix and performance enhancement releases 3.0.2, 2.5.5, and 2.4.9 for the 3.0, 2.5, and 2.4 stable branches.

Continue Reading by clicking title hyperlink ..

by Regina Obe at August 15, 2020 12:00 AM

July 18, 2020

PostGIS Development

PostGIS 3.1.0alpha2

The PostGIS Team is pleased to release the second alpha of the upcoming PostGIS 3.1.0 release.

Best served with

PostgreSQL 13beta2, GEOS 3.7 or higher is recommended.

ST_MaximumInscribedCircle requires compilation with GEOS 3.9.0 in development to be enabled.

pgRouting 3.1.0 which will also be released soon.

Continue Reading by clicking title hyperlink ..

by Regina Obe at July 18, 2020 12:00 AM

June 11, 2020

Paul Ramsey

Developers Diary 2

Have you ever watched a team of five-year-olds play soccer? The way the mass of children chases the ball around in a group? I think programmers do that too.

Get the ball!

There’s something about working on a problem together that is so much more rewarding than working separately, we cannot help but get drawn into other peoples problems. There’s a lot of gratification to be had in finding a solution to a shared difficulty!

Even better, different people bring different perspectives to a problem, and illuminate different areas of improvement.

Maximum Inscribed Circle

A couple months ago, my colleague Martin Davis committed a pair of new routines into JTS, to calculate the largest circles that can fit inside a polygon or in a collection of geometries.

Maximum Inscribed Circle

We want to bring all the algorithmic goodness of JTS to PostGIS, so I took up the first step, and ported “maximum inscribed circle” to GEOS and to PostGIS.

When I ported the GEOS test cases, I turned up some odd performance problems. The calculation seemed to be taking inordinately long for larger inputs. What was going on?

The “maximum inscribed circle” algorithm leans heavily on a routine called IndexedFacetDistance to calculate distances between polygon boundaries and candidate circle-centers while converging on the “maximum inscribed circle”. If that routine is slow, the whole algorithm will be slow.

Dan Baston, who originally ported the “IndexedFacetDistance” class got interested and started looking at some test cases of his own.

He found he could improve his old implementation using better memory management that he’d learned in the meantime. He also found some short-circuits to envelope distance calculation that improved performance quite a bit.

In fact, they improved performance so much that Martin ported them back to JTS, where he found that for some cases he could log a 10x performance gain in distance calculations.

There’s something alchemical about the whole thing.

  • There was a bunch of long-standing code nobody was looking at.
  • I ported an unrelated algorithm which exercised that code.
  • I wrote a test case and reported some profiling information.
  • Other folks with more knowledge were intrigued.
  • They fed their knowledge back and forth and developed more tests.
  • Improvements were found that made everything faster.

I did nothing except shine a light in a dark hole, and everyone else got very excited and things happened.

Toast Caching Redux

In a similar vein, as I described in my last diary entry, a long-standing performance issue in PostGIS was the repeated reading of large geometries during spatial joins.

Much of the problem was solved by dropping a very small “TOAST cache” into the process by which PostGIS reads geometries in functions frequently used in spatial joins.

TOAST

I was so happy with the improvement the TOAST cache provided that I just stopped. Fortunately, my fellow PostGIS community member Raúl Marín was more stubborn.

Having seen my commit of the TOAST cache, and having done some work in other caching parts of PostGIS, he took up the challenge and integrated the TOAST cache with the existing index caches.

The integrated system now uses TOAST identifiers to note identical repeated inputs and avoid both unnecessary reads off disk and unnecessary cache checks of the index cache.

The result is that, for spatial joins over large objects, PostGIS 3.1 will be as much as 60x faster than the performance in PostGIS 3.0.

I prepared a demo for a bid proposal this week and found that an example query that took 800ms on my laptop took a full minute on the beefy 16-core demo server. What had I done wrong? Ah! My laptop is running the latest PostGIS code (which will become 3.1) while the cloud server was running PostGIS 2.4. Mystery solved!

Port, Port, Port

I may have mentioned that I’m not a very good programmer.

My current task is definitely exercising my imposter syndrome: porting Martin’s new overlay code from JTS to GEOS.

I knew it would take a long time, and I knew it would be a challenge; but knowing and experiencing are quite different things.

The challenges, as I’ve experienced them are:

  • Moving from Java’s garbage collected memory model to C++’s managed memory model means that I have to understand the object life-cycle which is implicit in Java and make it explicit in C++, all while avoiding accidentally introducing a lot of memory churn and data copying into the GEOS port. Porting isn’t a simple matter of transcribing and papering over syntactic idiom, it involves first understanding the actual JTS algorithms.
  • The age of the GEOS code base, and number of contributors over time, mean that there are a huge number of different patterns to potentially follow in trying to make a “consistent” port to GEOS. Porting isn’t a matter of blank-slate implementation of the JTS code – the ported GEOS code has to slot into the existing GEOS layout. So I have to spend a lot of time learning how previous implementations chose to handle life cycles and call patterns (pass reference, or pointer? yes. Return value? or void return and output parameter? also yes.)
  • My lack of C++ idiom means I spend an excessive amount of time looking up core functions and methods associated with them. This is the only place I’ve felt myself measurably get better over the past weeks.

I’m still only just getting started, having ported some core data structures, and little pieces of dependencies that the overlay needs. The reward will be a hugely improved overlay code for GEOS and thus PostGIS, but I anticipate the debugging stage of the port will take quite a while, even when the code is largely complete.

Wish me luck, I’m going to need it!

If you would like to test the new JTS overlay code, it resides on this branch.
If you would like to watch me suffer as I work on the port, the GEOS branch is here.

June 11, 2020 08:00 AM

May 31, 2020

PostGIS Development

PostGIS 2.3.11

The PostGIS Team is pleased to release PostGIS 2.3.11. This is the last bug fix release of the PostGIS 2.3 series. Please upgrade to 2.4 or higher if you want to continue receiving bug fixes.

If you come across any issues, feel free to report via our ticket tracker https://trac.osgeo.org/postgis or mailing list with details as described here. For security issues, send reports to security@postgis.net.

Continue Reading by clicking title hyperlink ..

by Regina Obe at May 31, 2020 12:00 AM

May 05, 2020

Michal Zimmermann

PostGIS Data Anonymization

Among all the sensitive spatial data being collected through cellphones and credit cards, our address of residency is probably the most delicate one. Can it be anonymized/pseudonymized/obscured before you share it with your business partners?

Imagine you are given a set of address points for each of your clients and the set of all address points in the country; you should adjust it in the following way:

  • find the two nearest address points for each address point of your client
  • find the center of these two and the client address point
  • measure the distance of the computed center to each of three points and keep the maximum value
  • make the biggest distance even bigger by adding 10 % of its value
  • ceil the value
  • output the new position and the ceiled distance

This shifts each address point by a dynamic distance, giving us at least three points within the given distance (one of them being the original address point).

SELECT
    tmp.code,
    ST_X(tmp.new_position) x,
    ST_Y(tmp.new_position) y,
    ceil(MAX(biggest_distance) + MAX(biggest_distance) * 0.1) round_distance
FROM (
    SELECT
        tmp.code,
        tmp.geom,
        ST_Centroid((ST_Union(two_closest_points, tmp.geom))) new_position,
        -- get distance to two closest points and the client address point
        ST_Centroid((ST_Union(two_closest_points, tmp.geom))) <-> (ST_DumpPoints(ST_Union(two_closest_points, tmp.geom))).geom biggest_distance
    FROM (
        SELECT
            r1.code,
            r1.geom,
            ST_Union(neighbours.geom) two_closest_points
        FROM address_points r1,
        LATERAL (
            -- keep two closest points to each client address point
            SELECT
                r2.code,
                r2.geom,
                r1.geom <-> r2.geom distance
            FROM address_points r2
            WHERE r1.code <> r2.code
            ORDER BY r1.geom <-> r2.geom ASC
            LIMIT 2
        ) neighbours
        GROUP BY
            r1.code,
            r1.geom
    ) tmp
) tmp
GROUP BY
    tmp.code,
    tmp.geom,
    tmp.new_position;

You might want to use LATERAL for tasks like this.

by Michal Zimmermann at May 05, 2020 02:00 PM

April 16, 2020

Paul Ramsey

Developers Diary 1

I’m not a particularly good developer.

I don’t plan well, I tend to hack first and try and find the structure afterwards. I am easily distracted. It takes me an exceedingly long time to marshal a problem in my head enough to attack it.

That said, the enforced slow-down from pandemic time has given me the opportunity to sit and look at code, knowing nothing else is coming down the pipe. There are no talks to prepare, no big-think keynotes to draft. I enjoy those things, and I really enjoy the ego-boost of giving them, but the preparation of them puts me in a mental state that is not conducive to doing code work.

So the end of travel has been good, for at least one aspect of my professional work.

The Successful Failure

Spatial operations against large objects have always been a performance hot spot.

The first problem is that large objects are … large. So if you have algorithms that scale O(n^2) on the number of vertices large objects will kill you. Guess what? Distance, intersects tests, and so on are all O(n^2) in their basic implementations.

We solved this problem a long time ago in PostGIS by putting in an extra layer of run-time indexing.

INDEXING!

During a query (for those functions where it makes sense) if we see the same object twice in a row, we build an index on the edges of that object and keep the index in memory, for the life of the query. This gives us O(log(n)) performance for intersects, point-in-polygon, and so on. For joins in particular, this pattern of “seeing the same big thing multiple times” is very common.

This one small trick is one reason PostGIS is so much faster than “the leading brands”.

However, in order to “see the same object twice” we have to, for each function call in the query, retrieve the whole object, in order to compare it against the one we are holding in memory, to see if it is the same.

Here we run into an issue with our back-end.

PostgreSQL deals with large objects by (a) compressing them and (b) cutting the compressed object into slices and storing them in a side table. This all happens in the background, and is why you can store 1GB objects transparently in a database that has only an 8KB page size.

It’s quite computationally expensive, though. So much so that I found that simply bypassing the compression part of this feature could provide 5x performance gains on our spatial join workload.

Faster

At a code sprint in 2018, the PostGIS team agreed on the necessary steps to work around this long-standing performance issue.

  • Enhance PostgreSQL to allow partial decompression. This would allow the PostGIS caching system to retrieve just a little bit of large objects and use that part to determine if the object was not already in the cache.
  • Enhance the PostGIS serialization scheme to add a hashcode at the front of each large object. This way “is this a new object” could be answered with just a few bytes of hash, instead of checking the whole object.
  • Actually update the caching code to use the hash code and avoid unnecessary object retrievals.

Since this involved a change in PostgreSQL, which runs on an annual release cycle, and a change to the PostGIS serialization scheme, which is a major release marker, the schedule for this work was… long term.

Long Term

Still, I managed to slowly chip away at it, goal in mind:

That left adding the hash code to the front of the objects, and using that code in the PostGIS statement cache.

And this is where things fall apart.

Things Fall Apart

The old statement cache was focussed on ensuring the in-memory indexes were in place. It didn’t kick in until the object had already been retrieved. So avoiding retrieval overhead was going to involve re-working the cache quite a bit, to handle both object and index caching.

I started on the work, which still lives on in this branch, but the many possible states of the cache (do I have part of an object? a whole object? an indexed object?) and the fact that it was used in multiple places by different indexing methods (geography tree, geometry tree, GEOS tree), made the change worrisomely complex.

And so I asked a question, that I should have asked years ago, to the pgsql-hackers list:

… within the context of a single SQL statement, will the Datum values for a particular object remain constant?

Basically, could I use the datum values as unique object keys without retrieving the whole object? That would neatly remove any need to retrieve full objects in order to determine if the cache needed to be updated. As usual, Tom Lane had the answer:

Jeez, no, not like that.

Oh, “good news”, I guess, my work is not in vain. Except wait, Tom included a codicil:

The case where this would actually be worth doing, probably, is where you are receiving a toasted-out-of-line datum. In that case you could legitimately use the toast pointer ID values (va_valueid + va_toastrelid) as a lookup key for a cache, as long as it had a lifespan of a statement or less.

Hm. So for a subset of objects, it was possible to generate a unique key without retrieving the whole object.

Facepalm

And that subset – “toasted-out-of-line datum” – were in fact the objects causing the hot spot: objects large enough to have been compressed and then stored in a side table in 8KB chunks.

What if, instead of re-writing my whole existing in-memory index cache, I left that in place, and just added a simple new cache that only worried about object retrieval. And only cached objects that it could obtain unique keys for, these “toasted-out-of-line” objects. Would that improve performance?

It did. By 20 times on my favourite spatial join benchmark. It increased performance by 5 times on a join where only 10% of the objects were large ones. And on joins where none of the objects were large, the new code did not reduce performance at all.

And here’s the punch line: I’ve known about the large object hot spot for at least 5 years. Probably longer. I put off working on it because I thought the solution involved core changes to PostgreSQL and PostGIS, so first I had to put those changes in, which took a long time.

Once I started working on the “real problem”, I spent a solid week:

  • First on a branch to add hash codes, using the new serialization mechanisms from PostGIS 3.
  • Then on a unified caching system to replace the old in-memory index cache.

And then I threw all that work away, and in about 3 hours, wrote and tested the final patch that gave a 20x performance boost.

So, was this a success or a failure?

Stupid

I’ve become inured to the huge mismatch in “time spent versus code produced”, particularly when debugging. Spending 8 hours stepping through a debugger to generate a one-line patch is pretty routine.

But something about the mismatch between my grandiose and complex solution (partial retrieval! hash code!) and the final solution (just ask! try the half-measure! see if it’s better!) has really gotten on my nerves.

I like the win, but the path was a long and windy one, and PostGIS users have had slower queries than necessary for years because I failed to pose my simple question to the people who had an answer.

The Successful Success

Contra to that story of the past couple weeks, this week has been a raging success. I keep pinching myself and waiting for something to go wrong.

A number of years ago, JTS got an improvement to robustness in some operations by doing determinant calculations in higher precision than the default IEEE double precision.

Those changes didn’t make it into GEOS. There was an experimental branch, that Mateusz Loskot put together, and it sat un-merged for years, until I picked it up last fall, rebased it and merged it. I did so thinking that was the fastest way, and probably it was, but it included a dependency on a full-precision math library, ttmath, which I added to our tree.

Stupid

Unfortunately, ttmath is basically unmaintained now.

And ttmath is arbitrary precision, while we really only need “higher precision”. JTS just uses a “double double” implementation, that uses the register space of two doubles for higher precision calculations.

And ttmath doesn’t support big-endian platforms (like Sparc, Power, and other chips), which was the real problem. We couldn’t go on into the future without support for these niche-but-not-uncommon platforms.

And ttmath includes some fancy assembly language that makes the build system more complex.

Fortunately, the JTS DD is really not that large, and it has no endian assumptions in it, so I ported it and tested it out against ttmath.

It’s smaller.

It’s faster. (About 5-10%. Yes, even though it uses no special assembly tricks, probably because it doesn’t have to deal with arbitrary precision.)

And here’s the huge surprise: it caused zero regression failures! It has exactly the same behaviour as the old implementation!

Perfect

So needless to say, once the branch was stable, I merged it in and stood there in wonderment. It seems implausible that something as foundational as the math routines could be swapped out without breaking something.

The whole thing took just a few days, and it was so painless that I’ve also made a patch to the 3.8 stable series to bring the new code back for big endian platform support in the meantime.

The next few days I’ll be doing ports of JTS features and fixes that are net-new to GEOS, contemplative work that isn’t too demanding.

Some days everything is easy.

Some days everything is hard.

Don’t let the hard days hold you back!

Perfect

April 16, 2020 08:00 AM

March 02, 2020

Anita Graser (Underdark)

Movement data in GIS #29: power your web apps with movement data using mobilitydb-sqlalchemy

This is a guest post by Bommakanti Krishna Chaitanya @chaitan94

Introduction

This post introduces mobilitydb-sqlalchemy, a tool I’m developing to make it easier for developers to use movement data in web applications. Many web developers use Object Relational Mappers such as SQLAlchemy to read/write Python objects from/to a database.

Mobilitydb-sqlalchemy integrates the moving objects database MobilityDB into SQLAlchemy and Flask. This is an important step towards dealing with trajectory data using appropriate spatiotemporal data structures rather than plain spatial points or polylines.

To make it even better, mobilitydb-sqlalchemy also supports MovingPandas. This makes it possible to write MovingPandas trajectory objects directly to MobilityDB.

For this post, I have made a demo application which you can find live at https://mobilitydb-sqlalchemy-demo.adonmo.com/. The code for this demo app is open source and available on GitHub. Feel free to explore both the demo app and code!

In the following sections, I will explain the most important parts of this demo app, to show how to use mobilitydb-sqlalchemy in your own webapp. If you want to reproduce this demo, you can clone the demo repository and do a “docker-compose up --build” as it automatically sets up this docker image for you along with running the backend and frontend. Just follow the instructions in README.md for more details.

Declaring your models

For the demo, we used a very simple table – with just two columns – an id and a tgeompoint column for the trip data. Using mobilitydb-sqlalchemy this is as simple as defining any regular table:

from flask_sqlalchemy import SQLAlchemy
from mobilitydb_sqlalchemy import TGeomPoint

db = SQLAlchemy()

class Trips(db.Model):
   __tablename__ = "trips"
   trip_id = db.Column(db.Integer, primary_key=True)
   trip = db.Column(TGeomPoint)

Note: The library also allows you to use the Trajectory class from MovingPandas. More about this is explained later in this tutorial.

Populating data

When adding data to the table, mobilitydb-sqlalchemy expects data in the tgeompoint column to be a time-indexed pandas DataFrame, with two columns: one for the spatial data, called “geometry”, holding Shapely Point objects, and one for the temporal data, “t”, holding regular Python datetime objects.

from datetime import datetime

import pandas as pd
from shapely.geometry import Point

# Prepare and insert the data
# Typically it won’t be hardcoded like this, but it might be coming from 
# other data sources like a different database or maybe csv files
df = pd.DataFrame(
   [
       {"geometry": Point(0, 0), "t": datetime(2012, 1, 1, 8, 0, 0),},
       {"geometry": Point(2, 0), "t": datetime(2012, 1, 1, 8, 10, 0),},
       {"geometry": Point(2, -1.9), "t": datetime(2012, 1, 1, 8, 15, 0),},
   ]
).set_index("t")

trip = Trips(trip_id=1, trip=df)
db.session.add(trip)
db.session.commit()

Writing queries

In the demo, you see two modes. Both modes were designed specifically to explain how functions defined within MobilityDB can be leveraged by our webapp.

1. All trips mode – In this mode, we extract all trip data, along with distance travelled within each trip, and the average speed in that trip, both computed by MobilityDB itself using the ‘length’, ‘speed’ and ‘twAvg’ functions. This example also shows that MobilityDB functions can be chained to form more complicated queries.

mobilitydb-sqlalchemy-demo-1

from sqlalchemy import func  # SQLAlchemy's generic SQL function accessor, used to call MobilityDB functions

trips = db.session.query(
   Trips.trip_id,
   Trips.trip,
   func.length(Trips.trip),
   func.twAvg(func.speed(Trips.trip))
).all()

2. Spatial query mode – In this mode, we extract only selective trip data, filtered by a user-selected region of interest. We then make a query to MobilityDB to extract only the trips which pass through the specified region. We use MobilityDB’s ‘intersects’ function to achieve this filtering at the database level itself.

mobilitydb-sqlalchemy-demo-2

trips = db.session.query(
   Trips.trip_id,
   Trips.trip,
   func.length(Trips.trip),
   func.twAvg(func.speed(Trips.trip))
).filter(
   func.intersects(Point(lat, lng).buffer(0.01).wkb, Trips.trip),
).all()

Using MovingPandas Trajectory objects

Mobilitydb-sqlalchemy also provides first-class support for MovingPandas Trajectory objects, which can be installed as an optional dependency of this library. Using this Trajectory class instead of plain DataFrames allows us to make use of much richer functionality over trajectory data, like speed analysis, interpolation, splitting and simplification of trajectory points, calculating bounding boxes, etc. To make use of this feature, you have to set the use_movingpandas flag to True while declaring your model, as shown in the below code snippet.

class TripsWithMovingPandas(db.Model):
   __tablename__ = "trips"
   trip_id = db.Column(db.Integer, primary_key=True)
   trip = db.Column(TGeomPoint(use_movingpandas=True))

Now when you query over this table, you automatically get the data parsed into Trajectory objects without having to do anything else. This also works during insertion of data – you can directly assign your movingpandas Trajectory objects to the trip column. In the below code snippet we show how inserting and querying works with movingpandas mode.

from datetime import datetime

import movingpandas as mpd
import pandas as pd
from geopandas import GeoDataFrame
from shapely.geometry import Point

# Prepare and insert the data
# Typically it won’t be hardcoded like this, but it might be coming from 
# other data sources like a different database or maybe csv files
df = pd.DataFrame(
   [
       {"geometry": Point(0, 0), "t": datetime(2012, 1, 1, 8, 0, 0),},
       {"geometry": Point(2, 0), "t": datetime(2012, 1, 1, 8, 10, 0),},
       {"geometry": Point(2, -1.9), "t": datetime(2012, 1, 1, 8, 15, 0),},
   ]
).set_index("t")

geo_df = GeoDataFrame(df)
traj = mpd.Trajectory(geo_df, 1)

trip = Trips(trip_id=1, trip=traj)
db.session.add(trip)
db.session.commit()

# Querying over this table would automatically map the resulting tgeompoint 
# column to movingpandas’ Trajectory class
result = db.session.query(TripsWithMovingPandas).filter(
   TripsWithMovingPandas.trip_id == 1
).first()

print(result.trip.__class__)
# <class 'movingpandas.trajectory.Trajectory'>

Bonus: trajectory data serialization

Along with mobilitydb-sqlalchemy, recently I have also released trajectory data serialization/compression libraries based on Google’s Encoded Polyline Format Algorithm, for Python and JavaScript, called trajectory and trajectory.js respectively. These libraries let you send trajectory data in a compressed format, resulting in smaller payloads when sending your data through human-readable serialization formats like JSON. In some of the internal APIs we use at Adonmo, we have seen this reduce our response sizes by more than half (>50%), sometimes up to 90%.

Want to learn more about mobilitydb-sqlalchemy? Check out the quick start & documentation.


This post is part of a series. Read more about movement data in GIS.

by chaitan94 at March 02, 2020 05:51 PM

February 28, 2020

PostGIS Development

PostGIS 2.5.4

The PostGIS Team is pleased to release PostGIS 2.5.4.

Continue Reading by clicking title hyperlink ..

by Paul Ramsey at February 28, 2020 12:00 AM

February 27, 2020

Paul Ramsey

PostGIS Day in STL

Every year, on the second Wednesday of November, Esri (“the Microsoft of GIS”) promotes a day of celebration, “GIS Day” in which the members of our community unite to tell the world about the wonders of cartography and spatial data and incidentally use their software a lot in the process.

And every year, for a number of years now, on the day after “GIS Day”, a motley crew of open source users and SQL database aficionados observe “PostGIS Day”. Until this fall, I had never had a chance to personally participate in a PostGIS Day event, but this year Crunchy sponsored a day in St Louis, and I got to talk an awful lot about PostGIS.

It was really good, and I feel like there’s lots more to be done, if only on the subject of spatial SQL and analysis in the database. Here are the talks I gave; the rest are on the event page.

PostGIS Introduction

Serving Dynamic Vector Tiles

Geocoding and Text Search in PostGIS

PostGIS 3.0 Overview

February 27, 2020 08:00 AM

February 20, 2020

PostGIS Development

PostGIS 3.0.1

The PostGIS Team is pleased to release PostGIS 3.0.1.

Best served with PostgreSQL 12.2, GEOS 3.8.0, SFCGAL 1.3.7, GDAL 3.0.4, PROJ 6.3.1, protobuf-c 1.3.3, json-c 0.13.1.

Continue Reading by clicking title hyperlink ..

by Darafei Praliaskouski at February 20, 2020 12:00 AM

February 04, 2020

Martin Davis (Lin.ear th.inking)

Running commands in the JTS TestBuilder

The JTS TestBuilder is a great tool for creating geometry, processing it with JTS spatial functions, and visualizing the results.  It has powerful capabilities for inspecting the fine details of geometry (such as the Reveal Topology mode).  I've often thought it would be handy if there was a similar tool for PostGIS.  Of course QGIS excels at visualizing the results of PostGIS queries.  But it doesn't offer the same simplicity for creating geometry and passing it into PostGIS functions.

This is the motivation behind a recent enhancement to the TestBuilder to allow running external (system) commands that return geometry output.  The output can be in any text format that TestBuilder recognizes (currently WKT, WKB and GeoJSON).   It also provides the ability to encode the A and B TestBuilder input geometries as literal WKT or WKB values in the command.  The net result is the ability to run external geometry functions just as if they were functions built into the TestBuilder.

Examples

Running PostGIS spatial functions

Combined with the versatile Postgres command-line interface psql, this allows running a SQL statement and loading the output as geometry. Here's an example of running a PostGIS spatial function.  In this case a MultiPoint geometry has been digitized in the TestBuilder, and processed by the ST_VoronoiPolygons function.  The SQL output geometry is displayed as the TestBuilder result.

The command run is:

/Applications/Postgres.app/Contents/Versions/latest/bin/psql -qtA -c 
"SELECT ST_VoronoiPolygons('#a#'::geometry);"

Things to note:
  • the full path to psql is needed because the TestBuilder processes the command using a plain sh shell.  (It might be possible to improve this.)
  • The psql options -qtA  suppress messages, table headers, and column alignment, so that only the plain WKB of the result is output
  • The variable #a# has the WKT of the A input geometry substituted when the command is run.  This is converted into a PostGIS geometry via the ::geometry cast.  (#awkb# can be used to supply WKB, if full numeric precision is needed)

Loading data from PostGIS

This also makes it easy to load data from PostGIS to make use of TestBuilder geometry analysis and visualization capabilities.  The query can be any kind of SELECT statement, which makes it easy to control what data is loaded.  For large datasets it can be useful to draw an Area of Interest in the TestBuilder and use that as a spatial filter for the query.  The TestBuilder is able to load multiple textual geometries, so there is no need to collect the query result into a single geometry.
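
For example, a command along these lines (the roads table and geom column are hypothetical) uses the A input geometry as the Area of Interest filter:

/Applications/Postgres.app/Contents/Versions/latest/bin/psql -qtA -c 
"SELECT geom FROM roads WHERE ST_Intersects(geom, '#a#'::geometry);"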



Loading data from the Web

Another use for commands is to load data from the Web, by using curl to retrieve a dataset.  Many web spatial datasets are available in GeoJSON, which loads fine into the TestBuilder. Here's an example of loading a dataset provided by an OGC Features service (pygeoapi):
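
For example, a command like the following (the pygeoapi demo server and its obs collection are used here purely as an example endpoint) retrieves a GeoJSON feature collection that the TestBuilder can parse directly:

curl -s "https://demo.pygeoapi.io/master/collections/obs/items?f=json"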



Command Panel User Interface


The Command panel provides a simple UI to make it easier to work with commands.  Command text can be pasted and cleared.  A history of commands run is recorded for the TestBuilder session.  Recorded commands in the session can be recalled via the Previous and Next Command buttons.
Buttons are provided to insert substitution variable text.


To help debug incorrect command syntax, error output from commands is displayed.


It can happen that a command executes successfully, but returns output that cannot be parsed.  This is indicated by an error in the Result panel.  A common cause of this is that the command produces logging output as well as the actual geometry text, which interferes with parsing.  To aid in debugging this situation the command window shows the first few hundred characters of command output.  The good news is that many commands offer a "quiet mode" to provide data-only output.

Unparseable psql output due to presence of column headers.  The psql -t option fixes this.



If you find an interesting use for the TestBuilder Command capability, post it in the comments!

by Dr JTS (noreply@blogger.com) at February 04, 2020 05:40 AM

February 02, 2020

PostGIS Development

PostGIS 3.1.0alpha1

The PostGIS Team is pleased to release the first alpha of the upcoming PostGIS 3.1.0 release.

Best served with PostgreSQL 12.1, GEOS 3.8.0.

Continue Reading by clicking title hyperlink ..

by Darafei Praliaskouski at February 02, 2020 12:00 AM

November 18, 2019

Paul Ramsey

OGR FDW Spatial Filtering

The OGR FDW now pushes spatial filters down to remote data sources!

Whuuuut?!?!?

The Basics

OK, first, “OGR” is a subcomponent of the GDAL toolkit that allows generic access to dozens of different geospatial file formats. The OGR part handles the “vector” data (points, lines and polygons) and the GDAL part handles the “raster” data (imagery, elevation grids).

Second, “FDW” is a “foreign data wrapper”, an extension API for PostgreSQL that allows developers to connect non-database information to the database and present it in the form of a table.

The simplest FDWs, like the Oracle FDW, just make remote database tables in foreign systems look like local ones. Connecting two databases is “easy” because they share the same data model: tables of typed columns and rows of data.

The OGR data model is pleasantly similar to the database data model. Every OGR “datasource” (database) has “layers” (tables) made of “fields” (columns) with data types like “string” (varchar) and “number” (integer, real).

Now, combine the two ideas of “OGR” and “FDW”!

The “OGR FDW” uses the OGR library to present geospatial data sources as tables inside a PostgreSQL database. The FDW abstraction layer lets us make tables, and the OGR abstraction layer lets those tables be sourced from almost any geospatial file format or server.

It’s an abstraction layer over an abstraction layer… the best kind!

Setup the FDW

Here’s an example that connects to a “web feature service” (WFS) from Belgium (we all speak Flemish, right?) and makes a table of it.


CREATE EXTENSION postgis;
CREATE EXTENSION ogr_fdw;

CREATE SERVER wfsserver 
  FOREIGN DATA WRAPPER ogr_fdw 
  OPTIONS (
    datasource 'WFS:http://geoservices.informatievlaanderen.be/overdrachtdiensten/Haltes/wfs',
    format 'WFS',
    config_options 'CPL_DEBUG=ON'
  );

CREATE FOREIGN TABLE haltes (
    fid bigint,
    shape Geometry(Point,31370),
    gml_id varchar,
    uidn double precision,
    oidn double precision,
    stopid double precision,
    naamhalte varchar,
    typehalte integer,
    lbltypehal varchar,
    codegem varchar,
    naamgem varchar
  ) 
  SERVER wfsserver 
  OPTIONS (
    layer 'Haltes:Halte'
  );

Pushdown from FDW

Let’s run a query on the haltes table, and peek into what the OGR FDW is doing, by setting the debug level to DEBUG1.

SET client_min_messages = DEBUG1;

SELECT gml_id, ST_AsText(shape) AS shape, naamhalte, lbltypehal
  FROM haltes 
  WHERE lbltypehal = 'Niet-belbus'
    AND shape && ST_MakeEnvelope(207950, 186590, 207960, 186600, 31370);

We get back one record, and two debug entries:

DEBUG:  OGR SQL: (LBLTYPEHAL = 'Niet-belbus')
DEBUG:  OGR spatial filter (207950 186590, 207960 186600)
-[ RECORD 1 ]-----------------------
gml_id     | Halte.10328
shape      | POINT(207956 186596)
naamhalte  | Lummen Frederickxstraat
lbltypehal | Niet-belbus

The debug entries are generated by the OGR FDW code, when it recognizes there are parts of the SQL query that can be passed to OGR:

  • OGR understands some limited SQL syntax, and OGR FDW passes those parts of any PostgreSQL query down to OGR.
  • OGR can handle simple bounding box spatial filters, and when OGR FDW sees the use of the && PostGIS operator, it passes the filter constant down to OGR.

So OGR FDW is passing the attribute and spatial filters from the SQL down to the OGR layer. But are they then being passed on to the remote datasource?

Pushdown from OGR

Every OGR “driver” is capable of pushing different amounts of logic down to the source data.

  • A driver that reads a file format cannot push anything down: there is no logic in a file.
  • A driver that reads from a database can push a lot down: databases are rich and powerful execution engines in their own right.

Our example data source, the Belgian “web feature server” actually supports both attribute and spatial filters, and the OGR driver will pass them down.

We can see OGR passing the filters down because when we created the server, we set config_options 'CPL_DEBUG=ON', to expose the GDAL logging information to our PostgreSQL server.

The GDAL debug entries are visible when we set the logging level to DEBUG2:

SET client_min_messages = DEBUG2;

SELECT gml_id, ST_AsText(shape) AS shape, naamhalte, lbltypehal
  FROM haltes 
  WHERE lbltypehal = 'Niet-belbus'
    AND shape && ST_MakeEnvelope(207950, 186590, 207960, 186600, 31370);

Now we get a whole slew of logging, but I’m only going to pull out one line, the line that shows the WFS query that OGR sends to the remote server:

DEBUG:  GDAL None [0] WFS: http://geoservices.informatievlaanderen.be/overdrachtdiensten/Haltes/wfs?SERVICE=WFS&VERSION=1.1.0&REQUEST=GetFeature&TYPENAME=Haltes:Halte&FILTER=%3CFilter%20xmlns%3D%22http:%2F%2Fwww.opengis.net%2Fogc%22%20xmlns:Haltes%3D%22informatievlaanderen.be%2FHaltes%22%20xmlns:gml%3D%22http:%2F%2Fwww.opengis.net%2Fgml%22%3E%3CAnd%3E%3CPropertyIsEqualTo%3E%3CPropertyName%3ELBLTYPEHAL%3C%2FPropertyName%3E%3CLiteral%3ENiet%2Dbelbus%3C%2FLiteral%3E%3C%2FPropertyIsEqualTo%3E%3CBBOX%3E%3CPropertyName%3EHaltes:SHAPE%3C%2FPropertyName%3E%3Cgml:Box%3E%3Cgml:coordinates%3E207950.0000000000000000,186590.0000000000000000%20207960.0000000000000000,186600.0000000000000000%3C%2Fgml:coordinates%3E%3C%2Fgml:Box%3E%3C%2FBBOX%3E%3C%2FAnd%3E%3C%2FFilter%3E

Awesome, right?

That’s pretty much un-readable, but if I copy out the value in the FILTER request variable, and reverse the URL encoding, I get this:

<Filter
  xmlns="http://www.opengis.net/ogc"
  xmlns:Haltes="informatievlaanderen.be/Haltes"
  xmlns:gml="http://www.opengis.net/gml">
  <And>
    <PropertyIsEqualTo>
      <PropertyName>LBLTYPEHAL</PropertyName>
      <Literal>Niet-belbus</Literal>
    </PropertyIsEqualTo>
    <BBOX>
      <PropertyName>Haltes:SHAPE</PropertyName>
      <gml:Box>
        <gml:coordinates>
          207950.0000000000000000,186590.0000000000000000
          207960.0000000000000000,186600.0000000000000000
        </gml:coordinates>
      </gml:Box>
    </BBOX>
  </And>
</Filter>

I know, who ever thought that jamming an XML encoded version of a SQL filter into an HTTP GET request was a good idea? (Some very very nice people.)

Anyways, as you can see, both the attribute and spatial portions of our original SQL query have been re-encoded as a WFS XML filter, and sent to the remote server.

OGR FDW correctly pushed the attribute and spatial portions of the WHERE clause into OGR, and OGR correctly pushed those filters into the dialect of the driver we were using, in this case the WFS driver.

The End

The really really cool part is that if we had been using, for example, the Oracle driver, OGR would have instead generated Oracle-compatible SQL and pushed that down!

It’s an abstraction layer over an abstraction layer… the best kind!

November 18, 2019 08:00 AM

November 16, 2019

Anita Graser (Underdark)

Movement data in GIS #25: moving object databases

Recently there has been some buzz on Twitter about a new moving object database (MOD) called MobilityDB that builds on PostgreSQL and PostGIS (Zimányi et al. 2019). The MobilityDB Github repo was published in February 2019, but according to the following presentation at PgConf.Russia 2019 it had been under development for a few years:

Of course, moving object databases have been around for quite a while. The two most commonly cited MODs are HermesDB (Pelekis et al. 2008), which comes as an extension for either PostgreSQL or Oracle and is developed at the University of Piraeus, and SECONDO (de Almeida et al. 2006), which is a stand-alone database system developed at the Fernuniversität Hagen. However, both MODs remain at the research prototype level and have not achieved broad adoption.

It will be interesting to see if MobilityDB will be able to achieve the goal they have set in the title of Zimányi et al. (2019) to become “a mainstream moving object database system”. It’s promising that they are building on PostGIS and using its mature spatial analysis functionality instead of reinventing the wheel. They also discuss why they decided that PostGIS trajectories (which I’ve written about in previous posts) are not the way to go:

However, the presentation does not go into detail whether there are any straightforward solutions to visualizing data stored in MobilityDB.

According to the Github readme, MobilityDB runs on Linux and needs PostGIS 2.5. They also provide an online demo as well as a Docker container with MobilityDB and all its dependencies. If you give it a try, I would love to hear about your experiences.
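
If you want a feel for the data model before trying the demo, a trips table in MobilityDB looks roughly like the sketch below (the tgeompoint type and the length/speed/twAvg functions are MobilityDB’s; exact signatures may differ between versions):

CREATE TABLE trips (
  trip_id integer PRIMARY KEY,
  trip tgeompoint  -- a temporal geometry point: position as a function of time
);

-- total distance travelled and time-weighted average speed per trip
SELECT trip_id, length(trip), twAvg(speed(trip)) FROM trips;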

References

  • de Almeida, V. T., Güting, R. H., & Behr, T. (2006). Querying moving objects in SECONDO. In 7th International Conference on Mobile Data Management (MDM’06) (pp. 47-47). IEEE.
  • Pelekis, N., Frentzos, E., Giatrakos, N., & Theodoridis, Y. (2008). HERMES: aggregative LBS via a trajectory DB engine. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data (pp. 1255-1258). ACM.
  • Zimányi, E., Sakr, M., Lesuisse, A., & Bakli, M. (2019). MobilityDB: A Mainstream Moving Object Database System. In Proceedings of the 16th International Symposium on Spatial and Temporal Databases (pp. 206-209). ACM.

This post is part of a series. Read more about movement data in GIS.

by underdark at November 16, 2019 02:14 PM

November 13, 2019

Paul Ramsey

ST_Subdivide all the Things

This post originally appeared in the CARTO blog.

One of the things that makes managing geospatial data challenging is the huge variety of scales that geospatial data covers: areas as large as a continent or as small as a man-hole cover.

The data in the database also covers a wide range, from single points, to polygons described with thousands of vertices. And size matters! A large object takes more time to retrieve from storage, and more time to run calculations on.

The Natural Earth countries file is a good example of that variation. Load the data into PostGIS and inspect the object sizes using SQL:

SELECT admin, ST_NPoints(the_geom), ST_MemSize(the_geom) 
FROM ne_10m_admin_0_countries 
ORDER BY ST_NPoints(the_geom);
  • Coral Sea Islands are represented with a 4 point polygon, only 112 bytes.
  • Canada is represented with a 68,159 point multi-polygon, about 1 megabyte in size!

Countries by Size in KB

Over half (149) of the countries in the table are larger than the database page size (8KB), which means they will take extra time to retrieve.

SELECT Count(*) 
FROM ne_10m_admin_0_countries 
WHERE ST_MemSize(the_geom) > 8192;

We can see the overhead involved in working with large data by forcing a large retrieval and computation.

Load the Natural Earth populated places into PostGIS as well, and then run a full spatial join between the two tables:

SELECT Count(*)
FROM ne_10m_admin_0_countries countries 
JOIN ne_10m_populated_places_simple places 
ON ST_Contains(countries.the_geom, places.the_geom);

Even though the places table (7,322 rows) and countries table (255 rows) are quite small, the computation still takes a noticeable amount of time (about 30 seconds on my computer).

The large objects cause a number of inefficiencies:

  • Geographically large areas (like Canada or Russia) have large bounding boxes, so the indexes don’t work as efficiently in winnowing out points that don’t fall within the countries.
  • Physically large objects have large vertex lists, which take a long time to pass through the containment calculation. This combines with the poor winnowing to make a bad situation worse.

How can we speed things up? Make the large objects smaller using ST_Subdivide()!

First, generate a new, sub-divided countries table:

CREATE TABLE ne_10m_admin_0_countries_subdivided AS
SELECT ST_SubDivide(the_geom) AS the_geom, admin 
FROM ne_10m_admin_0_countries;

Now we have the same data, but no object is more than 255 vertices (about 4Kb) in size!

Subdivided Countries by Size in KB

Run the spatial join torture test again, and see the change!

SELECT Count(*)
FROM ne_10m_admin_0_countries_subdivided countries 
JOIN ne_10m_populated_places_simple places 
ON ST_Contains(countries.the_geom, places.the_geom);

On my computer, the return time is about 0.5 seconds, or 60 times faster, even though the countries table is now 8,633 rows. The subdivision has accomplished two things:

  • Each polygon now covers a smaller area, so index searches are less likely to pull up points that are not within the polygon.
  • Each polygon is now below the page size, so retrieval from disk will be much faster.

Subdividing big things can make map drawing faster too, but beware: once your polygons are subdivided you’ll have to turn off the polygon outlines to avoid showing the funny square boundaries in your rendered map.

Happy mapping and querying!

November 13, 2019 08:00 AM

November 06, 2019

Paul Ramsey

GZip in PostgreSQL

I love PostgreSQL extensions.

Extensions are the truest expression of the second principle of the original “design of Postgres” vision, to

provide user extendibility for data types, operators and access methods.

Extensions allow users to do more with PostgreSQL than just basic storage and retrieval. PostgreSQL is a full-on integration environment, like Python or Perl, and you can build very complete data manipulation pipelines very close to the metal using native and extension features of PostgreSQL.

Even though I’m a contributor to one of the largest PostgreSQL extensions, I have come to particularly love small extensions that do one simple thing, especially something we maybe take for granted in other environments.

My old HTTP extension is just a binding of libcurl to a SQL interface, so users can do web queries inside the SQL environment.

And today I’ve finished up a GZIP extension that is just a binding of zlib to SQL, so that users can… compress and decompress things.

It’s not a lot, but it’s a little.

The GZIP extension came about because of an email on the PostGIS development list, where Yuri noted

The amazing ST_AsMVT() has two common usage patterns: copy resulting MVTs to a tile cache (e.g. .mbtiles file or a materialized view), or serve MVT to the users (direct SQL->browser approach). Both patterns still require one additional data processing step – gziping.

Huh. And this use case also applies to people generating GeoJSON directly in the database and sending it out to web clients.

The PostgreSQL core has generally frowned on compression functions at the SQL level, because the database already does compression of over-sized tuples as necessary. The last thing we want is people manually applying compression to column values, and then stuffing them into rows where the database will then have to re-compress them internally. From the perspective of storage efficiency, just standing back and letting PostgreSQL do its work is preferable.

But from the perspective of an integration environment, where an application might be expected to emit or consume compressed data, having a tool in SQL to pack and unpack that data is potentially quite useful.

So I did the tiny binding to zlib and packed it up in an extension.
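
Usage is about as simple as it sounds. A minimal round-trip looks something like this (function names as documented in the extension’s README; gunzip() returns bytea, hence the conversion back to text):

CREATE EXTENSION gzip;

SELECT convert_from(gunzip(gzip('this text is compressed')), 'UTF8');
-- this text is compressed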

I hope lots of people find it useful.

November 06, 2019 08:00 AM

October 20, 2019

PostGIS Development

PostGIS 3.0.0

The PostGIS development team is pleased to release PostGIS 3.0.0.

This release works with PostgreSQL 9.5-12 and GEOS >= 3.6.

If you are using the postgis_sfcgal extension, you need to compile against SFCGAL 1.3.1 or higher.

Best served with PostgreSQL 12, GEOS 3.8.0 and pgRouting 3.0.0-beta.

Continue Reading by clicking title hyperlink ..

by Regina Obe at October 20, 2019 12:00 AM

October 15, 2019

Postgres OnLine Journal (Leo Hsu, Regina Obe)

PostGIS 3.0.0 coming soon - Try 3.0.0rc2 at a package repo near you

PostGIS 3.0.0 is planned for release early next week. In the meantime you will find PostGIS 3.0.0rc1 or rc2 available via yum.postgresql.org, apt.postgresql.org, and EDB Windows 64-bit stackbuilder for PostgreSQL 12.

Continue reading "PostGIS 3.0.0 coming soon - Try 3.0.0rc2 at a package repo near you"

by Leo Hsu and Regina Obe (nospam@example.com) at October 15, 2019 11:15 PM

October 13, 2019

PostGIS Development

PostGIS 3.0.0rc2

The PostGIS development team is pleased to release PostGIS 3.0.0rc2. This will be the final RC before release.

This release works with PostgreSQL 9.5-12 and GEOS >= 3.6

Best served with PostgreSQL 12, GEOS 3.8.0 and pgRouting 3.0.0-alpha.

Continue Reading by clicking title hyperlink ..

by Regina Obe at October 13, 2019 12:00 AM

October 08, 2019

PostGIS Development

PostGIS 3.0.0rc1

The PostGIS development team is pleased to release PostGIS 3.0.0rc1.

This release works with PostgreSQL 9.5-12 and GEOS >= 3.6

Best served with PostgreSQL 12, GEOS 3.8.0rc2 and pgRouting 3.0.0-alpha.

Continue Reading by clicking title hyperlink ..

by Regina Obe at October 08, 2019 12:00 AM

September 28, 2019

PostGIS Development

PostGIS 3.0.0beta1

The PostGIS development team is pleased to release PostGIS 3.0.0beta1.

This release works with PostgreSQL 9.5-12RC1 and GEOS >= 3.6

Best served with PostgreSQL 12RC1 and GEOS 3.8.0beta1, both of which came out in the past couple of days.

Continue Reading by clicking title hyperlink ..

by Regina Obe at September 28, 2019 12:00 AM

August 26, 2019

Paul Ramsey

Waiting for PostGIS 3: GEOS 3.8

While PostGIS includes lots of algorithms and functionality we have built ourselves, it also adds geospatial smarts to PostgreSQL by linking in specialized libraries to handle particular problems:

  • Proj for coordinate reference support;
  • GDAL for raster functions and formats;
  • GEOS for computational geometry (basic operations);
  • CGAL for more computational geometry (3D operations); and
  • libxml2, json-c, and protobuf-c for format support.

Many of the standard geometry processing functions in PostGIS are actually evaluated inside the GEOS library, so updates in GEOS are very important to PostGIS – they add new functionality or smooth the behaviour of existing functions.

Functions backed by GEOS include:

  • ST_Intersection()
  • ST_Union()
  • ST_Difference()
  • ST_SymDifference()

These functions are all “overlay operation” functions – they take in geometry arguments and construct new geometries for output. Under the covers is an operation called an “overlay”, which combines all the edges of the inputs into a graph and then extracts new outputs from that graph.

While the “overlay operations” in GEOS are very reliable, they are not 100% reliable. When operations fail, the library throws the dreaded TopologyException, which indicates the graph is in an inconsistent and unusable state.

Because there are a lot of PostGIS users and they manage a lot of data, there are a non-zero number of cases that cause TopologyExceptions, and upset users. We would like to take that number down to zero.

Update: Next-generation overlay did not make the 3.8 GEOS release and will be part of 3.9 instead.

With luck, GEOS 3.8 will succeed in finally bringing fully robust overlay operations to the open source community. The developer behind the GEOS algorithms, Martin Davis, recently joined Crunchy Data, and has spent this summer working on a new overlay engine.

Overlay failures are caused when intersections between edges result in inconsistencies in the overlay graph. Even using double precision numbers, systems have only about 52 bits of precision to represent coordinates, and that fixed precision can result in graphs that don’t correctly reflect their inputs.

The solution is building a system that can operate on any fixed precision and retain valid geometry. As an example, here the new engine builds valid representations of Europe at any precision, even ludicrously coarse ones.

europe at different precisions

In practice, the engine will be used with a tolerance that is close to double precision, but still provides enough slack to handle tricky cases in ways that users find visually “acceptable”. Initially the new functionality should slot under the existing PostGIS functions without change, but in the future we will be able to expose knobs to allow users to explicitly set the precision domain they want to work in.
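
As a preview of what such a knob might look like: PostGIS 3.1 (after this post was written) exposed this as an optional grid-size argument on the overlay functions, along these lines:

SELECT ST_AsText(ST_Intersection(
  'LINESTRING(0 0, 10 10)'::geometry,
  'LINESTRING(0 10, 10 0)'::geometry,
  0.5));  -- snap output coordinates to a 0.5 grid
-- POINT(5 5)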

GEOS 3.8 may not be released in time for PostGIS 3, but it will be a close thing. In addition to the new overlay engine, a lot of work has been done making the code base cleaner, using more “modern” C++ idioms, and porting forward new fixes to existing algorithms.

August 26, 2019 08:00 AM

July 15, 2019

Martin Davis (Lin.ear th.inking)

Better/Faster ST_PointOnSurface for PostGIS

And now for the final chapter in the saga of improving InteriorPoint / PointOnSurface.  For those who missed the first two episodes, the series began with a new approach for the venerable JTS Geometry.interiorPoint() for polygons algorithm.  Episode 2 travelled deep into the wilds of C++ with a port to GEOS.  The series finale shows how this results in greatly improved performance of PostGIS ST_PointOnSurface.

The BC Voting Area dataset is a convenient test case, since it has lots of large polygons (shown here with interior points computed).
The query is about as simple as it gets:

   select ST_PointOnSurface(geom) from ebc.voting_area;

Here's the query timings comparison, using the improved GEOS code and the previous implementation:

Data size                             Time     Time (old)   Improvement   Time (ST_Centroid)
5,658 polygons (2,171,676 vertices)   341 ms   4,613 ms     x 13          369 ms

As expected, there is a dramatic improvement in performance.  The improved ST_PointOnSurface runs 13 times faster than the old code.  And it's now as fast as ST_Centroid.  It's also more robust and tolerant of invalid input (although this test doesn't show it).

This should show up in PostGIS in the fall release (PostGIS 3 / GEOS 3.8).

On to the next improvement... (and also gotta update the docs and the tutorial!)

by Dr JTS (noreply@blogger.com) at July 15, 2019 05:53 AM

June 28, 2019

Stephen Mather

Tracing golden monkeys through time

My colleague TUYISINGIZE Deogratias (“Deo”) and others at the Dian Fossey Gorilla Fund International have been studying golden monkeys (Cercopithecus kandti) in Rwanda. Golden monkeys are an endangered monkey species along the Albertine Rift (including the Virungas, host to the endangered mountain gorilla). They are also cute as can be, but more on that another time.

Golden monkey (Cercopithecus kandti)

Deo has been leading efforts to track the golden monkeys in several locations across their range, observing their habits. Among the data gathered are the locations of the monkey groups as they move through their range. One element we want to understand from these data is how much each group moves per day.

The raw data look something like this:

raw_data

So we tweak things a bit to get ids in order of date and time, and also prep the data so that the date and time are proper types in PostgreSQL:

DROP TABLE IF EXISTS goldenmonkeys_sorted;
CREATE TABLE goldenmonkeys_sorted AS
(
    WITH nodate AS (
        SELECT gid, geom, id, lat, lon, alt, dater || ' ' || timer AS dater, month AS monther, season, groupid FROM hr_g_all
    )
    , sorted AS (
	SELECT gid, geom, id, lat, lon, alt, dater::TIMESTAMP WITH TIME ZONE AS datetimer, monther, season, groupid FROM nodate
		ORDER BY groupid, datetimer
    )
    SELECT gid AS id, ROW_NUMBER() OVER (ORDER BY groupid, datetimer) AS gid, datetimer, date(datetimer) AS dater, monther, season, groupid, geom FROM sorted
        ORDER BY gid
);

Resulting in the following:

massaged_data

Golden Monkey Ranging in Gishwati National Park

Ok. Now we want to turn this into traces of the movements of the monkeys every day. Something like this:

gishwati_trace

But one trace per day, for each group.

We will create a function that leverages WITH RECURSIVE. We’ve seen this before. WITH RECURSIVE allows us to take each record in sequence and perform operations with the previous record, in this case calculating travel time, travel distance, and combining the individual points into a single line with ST_MakeLine.

CREATE OR REPLACE FUNCTION goldenmonkey_time (date, text)
RETURNS TABLE(dater date, monther text, traveltime interval, distance float, geom geometry) AS $$
WITH RECURSIVE gtime AS (
SELECT gid, dater, monther, datetimer, datetimer - (SELECT min(datetimer) FROM goldenmonkeys_sorted WHERE dater = $1 AND groupid = $2) AS timediff, 0::float AS distance, geom
FROM goldenmonkeys_sorted WHERE dater = $1 AND groupid = $2
UNION ALL
SELECT n.gid, w.dater, w.monther, w.datetimer, n.datetimer - w.datetimer AS timediff, ST_Distance(n.geom, w.geom) AS distance, n.geom
FROM goldenmonkeys_sorted n, gtime w
WHERE w.dater = $1 AND n.gid::integer = w.gid::integer + 1
)
SELECT max(dater) AS dater, max(monther) AS monther, max(timediff) AS traveltime, ST_Length(ST_MakeLine(geom)) AS length, ST_MakeLine(geom) FROM gtime;
$$ LANGUAGE SQL;

Now to use our function, we need a list of dates and groups so we can calculate this for each day:

WITH dategroup AS (
    SELECT DISTINCT dater, groupid
    FROM goldenmonkeys_sorted
    ORDER BY groupid, dater
)
SELECT groupid,
    (goldenmonkey_time(dater, groupid)).dater,
    (goldenmonkey_time(dater, groupid)).monther,
    (goldenmonkey_time(dater, groupid)).traveltime,
    (goldenmonkey_time(dater, groupid)).distance,
    (goldenmonkey_time(dater, groupid)).geom
FROM dategroup
ORDER BY groupid, (goldenmonkey_time(dater, groupid)).dater;


Now we have traces not just for one day and group, but for all days and groups:

gishwati_traces


by smathermather at June 28, 2019 10:51 AM

May 22, 2019

Anita Graser (Underdark)

Movement data in GIS #23: trajectories in context

Today’s post continues where “Why you should be using PostGIS trajectories” leaves off. It’s the result of a collaboration with Eva Westermeier. I had the pleasure to supervise her internship at AIT last year and also co-supervised her Master’s thesis [0] on the topic of enriching trajectories with information about their geographic context.

Context-aware analysis of movement data is crucial for different domains and applications, from transport to ecology. While there is a wealth of data, efficient and user-friendly contextual trajectory analysis is still hampered by a lack of appropriate conceptual approaches and practical methods. (Westermeier, 2018)

Part of the work was focused on evaluating different approaches to adding context information from vector datasets to trajectories in PostGIS. For example, adding land cover context to animal movement data or adding information on anchoring and harbor areas to vessel movement data.

Classic point-based model vs. line-based model

The obvious approach is to intersect the trajectory points with context data. This is the classic point data model of contextual trajectories. It’s straightforward to add context information in the point-based model but it also generates large numbers of repeating annotations. In contrast, the line data model using, for example, PostGIS trajectories (LinestringM) is more compact since trajectories can be split into segments at context borders. This creates one annotation per segment and the individual segments are convenient to analyze (as described in part #12).

Spatio-temporal interpolation as provided by the line data model offers additional advantages for the analysis of annotated segments. Contextual segments start and end at the intersection of the trajectory linestring with context polygon borders. This means that there are no gaps like in the point-based model. Consequently, while the point-based model systematically underestimates segment length and duration, the line-based approach offers more meaningful segment length and duration measurements.

Schematic illustration of a subset of an annotated trajectory in two context classes, a) systematic underestimation of length or duration in the point data model, b) full length or duration between context polygon borders in the line data model (source: Westermeier (2018))

Another issue of the point data model is that brief context changes may be missed or represented by just one point location. This makes it impossible to compute the length or duration of the respective context segment. (Of course, depending on the application, it can be desirable to ignore brief context changes and make the annotation process robust towards irrelevant changes.)

Schematic illustration of context annotation for brief context changes, a) and b)
two variants for the point data model, c) gapless annotation in the line data model (source: Westermeier (2018) based on Buchin et al. (2014))

Beyond annotations, context can also be considered directly in an analysis, for example, when computing distances between trajectories and contextual point objects. In this case, the point-based approach systematically overestimates the distances.

Schematic illustration of distance measurement from a trajectory to an external
object, a) point data model, b) line data model (source: Westermeier (2018))

The above examples show that there are some good reasons to dump the classic point-based model. However, the line-based model is not without its own issues.

Issues

Computing the context annotations for trajectory segments is tricky. The main issue is that ST_Intersection drops the M values. This effectively destroys our trajectories! There are ways to deal with this issue – and the corresponding SQL queries are published in the thesis (p. 38-40) – but it’s a real bummer. Basically, ST_Intersection only provides geometric output. Therefore, we need to reconstruct the temporal information in order to create usable trajectory segments.
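
A minimal illustration of the problem, and of one way to re-attach a measure (this is just a sketch; the full reconstruction queries are in the thesis):

-- intersecting a LinestringM with a polygon drops the M (time) values
SELECT ST_AsText(ST_Intersection(
  'LINESTRINGM(0 0 0, 10 0 100)'::geometry,
  ST_Buffer('POINT(5 0)'::geometry, 2.0)));
-- LINESTRING(3 0,7 0)

-- interpolating the measure from the original trajectory recovers it
SELECT ST_InterpolatePoint(
  'LINESTRINGM(0 0 0, 10 0 100)'::geometry,
  'POINT(3 0)'::geometry);
-- 30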

Finally, while the line-based model is well suited to adding context from other vector data, it is less useful for context data from continuous rasters, but that was beyond the scope of this work.

Conclusion

After the promising results of my initial investigations into PostGIS trajectories, I was optimistic that context annotations would be a straightforward add-on. The line-based approach has multiple advantages when it comes to analyzing contextual segments. Unfortunately, generating these contextual segments is much less convenient and also slower than I had hoped. Originally, I had planned to turn this work into a plugin for the Processing toolbox but the results of this work motivated me to look into other solutions. You’ve already seen some of the outcomes in part #20 “Trajectools v1 released!”.

References

[0] Westermeier, E.M. (2018). Contextual Trajectory Modeling and Analysis. Master Thesis, Interfaculty Department of Geoinformatics, University of Salzburg.


This post is part of a series. Read more about movement data in GIS.

by underdark at May 22, 2019 07:31 PM

March 25, 2019

Postgres OnLine Journal (Leo Hsu, Regina Obe)

PGConf US 2019 Data Loading Slides up

I gave a talk at PGConf US 2019 on some of the many ways you can load data into PostgreSQL using open source tools. This is similar to the talk I gave last year but with the addition of the pgloader commandline tool and the http PostgreSQL extension.

HTML slides PDF slides

Even though the talk was not much about PostGIS, just tricks for loading data, I managed to get a mouthful of PostGIS in there.

by Leo Hsu and Regina Obe (nospam@example.com) at March 25, 2019 10:00 PM

February 11, 2019

Postgres OnLine Journal (Leo Hsu, Regina Obe)

Compiling http extension on ubuntu 18.04

We recently installed PostgreSQL 11 on an Ubuntu 18.04 using apt.postgresql.org. Many of our favorite extensions were already available via apt (postgis, ogr_fdw to name a few), but it didn't have the http extension we use a lot. The http extension is pretty handy for querying things like Salesforce and other web api based systems. We'll outline the basic compile and install steps. While it's specific to the http extension, the process is similar for any other extension you may need to compile.
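
As a taste of why it’s handy, a call looks something like this (the http_get() function and its result columns are as shown in the extension’s README; the URL is just an example):

SELECT status, content_type
FROM http_get('http://httpbin.org/get');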

Continue reading "Compiling http extension on ubuntu 18.04"

by Leo Hsu and Regina Obe (nospam@example.com) at February 11, 2019 08:31 AM

February 06, 2019

Martin Davis (Lin.ear th.inking)

Joining Crunchy Data to work on PostGIS

I'm happy to announce that I am taking a position with Crunchy Data as a Senior Geospatial Engineer.  I'm working alongside fellow Victorian geospatial maven extraordinaire Paul Ramsey, as the core of a proposed Geospatial Data Centre of Excellence.

Our mission statement is simple:  make PostGIS bigger, better, faster!

  • Bigger - more spatial algorithms and functions
  • Better - enhance existing functionality to make it easier to use, more powerful, and more robust
  • Faster - keep looking for algorithmic optimizations and ways to use the power of Postgres to make spatial processing faster

A lot of this work will involve enhancements to the core GEOS geometry library.  Part of the goal is to keep JTS and GEOS aligned, so this should produce a nice boost to JTS as well.

Having been lurking in the background for many years now, I'm stoked to be (finally) able to work directly on PostGIS.  And I'm excited to be part of the Crunchy team. They have some of the leading Postgres experts in-house, so I'm expecting that it will be a great learning experience.  And their client list promises to expose us to some fascinating large-scale use cases for spatial data processing, which can only be good for the power and robustness of PostGIS.

I'm also looking forward to re-engaging with the geospatial open source community, and learning more about the (even bigger) open source Postgres community.  Great things lie ahead!



by Dr JTS (noreply@blogger.com) at February 06, 2019 10:48 PM

January 01, 2019

Boston GIS (Regina Obe, Leo Hsu)

Using pg_upgrade to upgrade PostgreSQL 9.3 PostGIS 2.1 to PostgreSQL 11 2.5 on Yum

In a previous article, Using pg_upgrade to upgrade PostGIS without installing an older version, I demonstrated a trick for upgrading to a newer PostgreSQL instance from PostGIS 2.2 - 2.whatever without having to install the older version of PostGIS in your new PostgreSQL service. Unfortunately that trick does not work if coming from PostGIS 2.1, because in PostGIS 2.2 we renamed a C lib function that backed SQL functions in 2.1.

Fear not. There is still a way to upgrade from 2.1 to 2.5 without installing an older version of PostGIS in your new PostgreSQL instance. To do so, you need to add a step and that is to remove the functions in 2.1 that are backed by this renamed lib function. In upcoming PostGIS 3.0, we've added this function back and have it throw an error so that even coming from PostGIS 2.1, you can upgrade just the same as you do from later versions.

Continue reading "Using pg_upgrade to upgrade PostgreSQL 9.3 PostGIS 2.1 to PostgreSQL 11 2.5 on Yum"

by Regina Obe (nospam@example.com) at January 01, 2019 06:48 AM

December 19, 2018

Michal Zimmermann

CentOS PostGIS Upgrade Hell… Yet Again

PostGIS upgrades used to be a nightmare. Broken dependencies, version mismatches, you name it. Upgrading PostgreSQL 10 with PostGIS 2.4 to PostgreSQL 11 on CentOS has been my mission impossible for two days. And it doesn’t seem to come to an end.

What? Why?

We’re running a fairly large spatially enabled PostgreSQL 10 database cluster. To keep up with the pretty fast pace of development, I was hoping to pg_upgrade it to PostgreSQL 11.

Tried and failed

I’ve been trying different upgrade strategies with PostgreSQL 11 already running to no avail. Here comes the list.

Install PostGIS 2.4 to PostgreSQL 11 and pg_upgrade

yum install postgis24_11
systemctl stop postgresql-11

su postgres
/usr/pgsql-11/bin/pg_upgrade \
  --check \
  -b /usr/pgsql-10/bin/ -B /usr/pgsql-11/bin/ \
  -d /var/lib/pgsql/10/data -D /var/lib/pgsql/11/data \
  --link \
  -U root \
  -o ' -c config_file=/var/lib/pgsql/10/data/postgresql.conf' -O ' -c config_file=/var/lib/pgsql/11/data/postgresql.conf'

This results in:

Your installation references loadable libraries that are missing from the new installation. You can add these libraries to the new installation, or remove the functions using them from the old installation. A list of problem libraries is in the file: loadable_libraries.txt

loadable_libraries.txt says the following:

could not load library "$libdir/postgis-2.4": ERROR:  could not load library "/usr/pgsql-11/lib/postgis-2.4.so": /usr/pgsql-11/lib/postgis-2.4.so: undefined symbol: geod_polygon_init

Duckduckgoing I found the related PostgreSQL mailing list thread.

Build and install PostGIS 2.4 from source to PostgreSQL 11 and pg_upgrade

The bug report says there’s something wrong with the proj4 version, so I chose proj49 and geos37.

yum install proj49 proj49-devel
wget https://download.osgeo.org/postgis/source/postgis-2.4.6.tar.gz
tar -xzvf postgis-2.4.6.tar.gz
cd postgis-2.4.6

./configure \
  --with-pgconfig=/usr/pgsql-11/bin/pg_config \
  --with-geosconfig=/usr/geos37/bin/geos-config \
  --with-projdir=/usr/proj49/

make && make install

CREATE EXTENSION postgis fails with could not load library "/usr/pgsql-11/lib/postgis-2.4.so": /usr/pgsql-11/lib/postgis-2.4.so: undefined symbol: geod_polygon_init. Oh my.

Install PostGIS 2.5 to PostgreSQL 10 and pg_upgrade

Running out of ideas, I tried to install PostGIS 2.5 to our PostgreSQL 10 cluster and pg_upgrade.

yum install postgis25_10

The resulting error appeared almost instantly:

Transaction check error:
file /usr/pgsql-10/bin/shp2pgsql-gui from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/lib/liblwgeom.so from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/lib/postgis-2.4.so from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/address_standardizer.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/address_standardizer.sql from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/address_standardizer_data_us.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/address_standardizer_data_us.sql from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/postgis.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/postgis_sfcgal.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/postgis_tiger_geocoder.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/postgis_topology.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64

What the…

Build and install PostGIS 2.5 from source to PostgreSQL 10 and pg_upgrade

wget https://download.osgeo.org/postgis/source/postgis-2.5.1.tar.gz
tar -xzvf postgis-2.5.1.tar.gz
cd postgis-2.5.1

./configure \
  --with-pgconfig=/usr/pgsql-10/bin/pg_config \
  --with-geosconfig=/usr/geos37/bin/geos-config

make && make install

CREATE EXTENSION postgis fails with ERROR: could not load library "/usr/pgsql-10/lib/postgis-2.5.so": /usr/pgsql-10/lib/postgis-2.5.so: undefined symbol: GEOSFrechetDistanceDensify. Again? Really?

GEOSFrechetDistanceDensify was added in GEOS 3.7 (linked in ./configure), yet ldd /usr/pgsql-10/lib/postgis-2.5.so says:

linux-vdso.so.1 =>  (0x00007ffd4c5fa000)
libgeos_c.so.1 => /usr/geos36/lib64/libgeos_c.so.1 (0x00007f68ddf5a000)
libproj.so.0 => /lib64/libproj.so.0 (0x00007f68ddd07000)
libjson-c.so.2 => /lib64/libjson-c.so.2 (0x00007f68ddafc000)
libxml2.so.2 => /lib64/libxml2.so.2 (0x00007f68dd792000)
libm.so.6 => /lib64/libm.so.6 (0x00007f68dd48f000)
libSFCGAL.so.1 => /lib64/libSFCGAL.so.1 (0x00007f68dc9c0000)
libc.so.6 => /lib64/libc.so.6 (0x00007f68dc5f3000)
libgeos-3.6.3.so => /usr/geos36/lib64/libgeos-3.6.3.so (0x00007f68dc244000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f68dbf3d000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f68dbd27000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f68dbb22000)
libz.so.1 => /lib64/libz.so.1 (0x00007f68db90c000)
liblzma.so.5 => /lib64/liblzma.so.5 (0x00007f68db6e6000)
/lib64/ld-linux-x86-64.so.2 (0x000055960f119000)
libCGAL.so.11 => /usr/lib64/libCGAL.so.11 (0x00007f68db4bd000)
libCGAL_Core.so.11 => /usr/lib64/libCGAL_Core.so.11 (0x00007f68db284000)
libmpfr.so.4 => /usr/lib64/libmpfr.so.4 (0x00007f68db029000)
libgmp.so.10 => /usr/lib64/libgmp.so.10 (0x00007f68dadb0000)
libboost_date_time-mt.so.1.53.0 => /usr/lib64/libboost_date_time-mt.so.1.53.0 (0x00007f68dab9f000)
libboost_thread-mt.so.1.53.0 => /usr/lib64/libboost_thread-mt.so.1.53.0 (0x00007f68da988000)
libboost_system-mt.so.1.53.0 => /usr/lib64/libboost_system-mt.so.1.53.0 (0x00007f68da783000)
libboost_serialization-mt.so.1.53.0 => /usr/lib64/libboost_serialization-mt.so.1.53.0 (0x00007f68da517000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f68da2fa000)
librt.so.1 => /usr/lib64/librt.so.1 (0x00007f68da0f2000)

I’m nearly desperate after spending two days trying to break through. I have ~ 300 GB of PostgreSQL data to migrate to the current version and there seems to be no possible way to do it in CentOS.

One more thing to note: using yum install postgis25_11 and CREATE EXTENSION postgis in a v11 database fails with the exact same error as the one above. I really enjoy working with PostgreSQL and PostGIS, yet there’s hardly anything I fear more than trying to upgrade those two things together.

by Michal Zimmermann at December 19, 2018 09:00 AM

December 10, 2018

Postgres OnLine Journal (Leo Hsu, Regina Obe)

PostGIS 2.5.1 Bundle for Windows

PostGIS 2.5.1 was released on November 18th 2018 and I finished off packaging the PostGIS 2.5.1 Windows builds and installers targeted for the PostgreSQL EDB distribution this weekend, pushing them up to stackbuilder. This covers PostgreSQL 9.4-11 (64-bit) and PostgreSQL 9.5-10 (32-bit).

Note that the PostGIS 2.5 series will be the last of the PostGIS 2s. Say goodbye to PostGIS 2.* and start playing with the in-development version of PostGIS 3. Snapshot binaries for PostGIS 3.0 Windows development are also available on the PostGIS windows download page. These should work for both BigSQL and EDB distributions.

Continue reading "PostGIS 2.5.1 Bundle for Windows"

by Leo Hsu and Regina Obe (nospam@example.com) at December 10, 2018 12:18 AM