### Planet PostGIS

Welcome to Planet PostGIS

October 14, 2020

Crunchy Data

Tuning Your Postgres Database for High Write Loads

As a database grows and scales up from a proof of concept to a full-fledged production instance, there are always a variety of growing pains that database administrators and systems administrators will run into.

Very often, the engineers on the Crunchy Data support team help support enterprise projects which start out as small proof-of-concept systems and are then promoted to large-scale production use. As these systems receive increased traffic load beyond their original proof-of-concept sizes, one issue may be observed in the Postgres logs as the following:

LOG:  checkpoints are occurring too frequently (9 seconds apart)
HINT:  Consider increasing the configuration parameter "max_wal_size".
LOG:  checkpoints are occurring too frequently (2 seconds apart)
HINT:  Consider increasing the configuration parameter "max_wal_size".
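
To illustrate the hint, here is a minimal sketch of the kind of adjustment it points at. The values are placeholders only; the right numbers depend on your write volume, available disk, and recovery-time requirements.

-- Raise the WAL ceiling so checkpoints are triggered by time rather than volume
ALTER SYSTEM SET max_wal_size = '8GB';
-- Let checkpoints happen less often, and spread their I/O over the interval
ALTER SYSTEM SET checkpoint_timeout = '15min';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
-- Apply without a restart
SELECT pg_reload_conf();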

by Tom Swartz at October 14, 2020 04:47 PM

May 30, 2014

Jorge Arévalo

Write support for GDAL PostGIS Raster driver on its way

It’s been a long time since the last news about GDAL PostGIS Raster driver. But now we have some good news. The write support will be implemented in the 2014 edition of Google Summer of Code. The student who will do the work is Varun Saraf, from India. Check his status reports here

by Jorge Arévalo at May 30, 2014 07:27 PM

October 12, 2020

Crunchy Data

PostgreSQL Monitoring for App Developers: Alerts & Troubleshooting

We've seen an example of how to set up PostgreSQL monitoring in Kubernetes. We've looked at two sets of statistics to keep track of in your PostgreSQL cluster: your vitals (CPU/memory/disk/network) and your DBA fundamentals.

While staring at these charts should help you to anticipate, diagnose, and respond to issues with your Postgres cluster, the odds are that you are not staring at your monitor 24 hours a day. This is where alerts come in: a properly set up alerting system will let you know if you are on the verge of a major issue so you can head it off at the pass (and alerts should also let you know when there is a major issue).

Dealing with operational production issues was a departure from my application developer roots, but I looked at it as an opportunity to learn a new set of troubleshooting skills. It also offered an opportunity to improve communication skills: I would often convey to the team and customers what transpired during a downtime or performance degradation situation (VSSE: be transparent!). Some of what I observed I used to  help us to improve the application, while other parts helped me to better understand how PostgreSQL works.

But I digress: let's drill into alerts on your Postgres database.

Note that just because an alert or alarm is going off, it does not mean you need to immediately react: for example, a transient network degradation issue may cause a replica to lag further behind a primary for a bit too long but will clear up when the degradation passes. That said, you typically want to investigate the alert to understand what is causing it.
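
As an aside, replication lag like that can be quantified directly with stock PostgreSQL views and functions; the queries below are a generic sketch, not the specific alert rules shipped with any particular monitoring stack.

-- On the primary: how far behind each standby's replay position is, in bytes of WAL
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;

-- On a replica: approximate lag as wall-clock time since the last replayed transaction
SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;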

Additionally, it's important to understand what actions you want to take to solve the problem. For example, a common mistake during an "out-of-disk" error is to delete the PostgreSQL WAL logs with a rm command; doing so can lead to a very bad day (and is also an advertisement for ensuring you have backups).

As mentioned in the post on setting up PostgreSQL monitoring in Kubernetes, the Postgres Operator uses pgMonitor for metric collection and visualization via open source projects like Prometheus and Grafana. pgMonitor uses the open source Alertmanager for configuring and sending alerts, and this is what the PostgreSQL Operator uses as well.

Using the above, let's dive into some of the items that you should be alerting on, and I will describe how my experience as an app developer translated into troubleshooting strategies.

by Jonathan S. Katz (jonathan.katz@crunchydata.com) at October 12, 2020 10:04 PM

Paul Ramsey

Talking PostGIS on Podcasts

Here in the Covid-times, I haven’t been able to keep up my previous schedule of speaking at conferences, but I have managed to participate in a couple of episodes of the MapScaping Podcast, hosted by Daniel O’Donohue.

Podcast

Daniel is a great interviewer and really puts together a tight show. So far I’ve been on two, and I quietly hope to join him again some time in the future.

October 12, 2020 08:00 AM

October 10, 2020

Crunchy Data

PostgreSQL Monitoring for Application Developers: The DBA Fundamentals

I am an accidental DBA, with a huge emphasis on "accidental." I came to PostgreSQL as an application developer who really liked to program with SQL and use the database to help solve my problems. Nonetheless, these systems would enter into production, and as such I had to learn to support them.

PostgreSQL monitoring and performance optimization is a vast topic. In fact, I'll read content like what my colleague Greg Smith wrote on benchmarking PostgreSQL 13 on Ubuntu and be reminded that I have much more to learn! There's no way a few blog posts on the topic can do justice to it, but I want to try and share some of the insights I gleaned as the application developer thrust into an operations role.

In the previous post, I talked about how I could learn a lot from monitoring my Postgres system just by looking at the vital statistics: CPU, memory, disk, and network utilization. These stats about resource utilization were helpful as a starting point: they could indicate what part of the system was having an issue, but maybe not exactly what the problem is.

This is where the fundamental DBA statistics come into play.

These statistics provide an overall look at PostgreSQL database activity, can help you spot problems with performance and availability, and can provide "early warnings" before "bad things" start to happen.

So, what do these metrics tell you, and how can you use them to successfully manage your PostgreSQL database?
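
As a rough sketch of what those fundamentals look like in plain SQL (illustrative queries against the standard statistics views, not the pgMonitor definitions themselves):

-- Connection usage versus the configured limit
SELECT count(*) AS connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;

-- Shared buffer cache hit ratio per database
SELECT datname,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database;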

For these examples, we are using the monitoring stack powered by pgMonitor that can be installed alongside the Postgres Operator.

by Jonathan S. Katz (jonathan.katz@crunchydata.com) at October 10, 2020 03:52 PM

October 08, 2020

Crunchy Data

PostgreSQL Monitoring for Application Developers: The Vitals

My professional background has been in application development with a strong affinity for developing with PostgreSQL (which I hope comes through in previous articles). However, in many of my roles, I found myself as the "accidental" systems administrator, where I would troubleshoot issues in production and do my best to keep things running and safe.

When it came to monitoring my Postgres databases, I initially took what I knew about monitoring a web application itself, i.e. looking at CPU, memory, and network usage, and used that to try to detect issues. In many cases, it worked: for instance, I could see a CPU spike on a PostgreSQL database and deduce that there was a runaway query slowing down the system.
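
A sketch of how that kind of runaway query can be surfaced from pg_stat_activity (illustrative only, not one of the monitoring stack's charts):

-- Longest-running non-idle statements right now
SELECT pid,
       now() - query_start AS runtime,
       state,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC NULLS LAST
LIMIT 10;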

Over time, I learned about other types of metrics that would make it easier to triage and mitigate PostgreSQL issues. Combined with what I learned as an accidental systems administrator, I've found they make a powerful toolkit that even helps with application construction.

To help with sharing these experiences, I set up a few PostgreSQL clusters with the PostgreSQL Operator monitoring stack. The Postgres Operator monitoring stack uses the Kubernetes support of pgMonitor to collect metrics about the deployment environment (e.g. Pods) and the specific PostgreSQL instances. I'll also add a slight Kubernetes twist to it, given there are some special considerations you need to make when monitoring Postgres on Kubernetes.

We'll start with my "go to" set of statistics, what I call "the vitals."

The Vital Statistics: CPU, Memory, Disk, and Network Utilization

by Jonathan S. Katz (jonathan.katz@crunchydata.com) at October 08, 2020 09:17 AM

October 06, 2020

Crunchy Data

How to Set Up PostgreSQL Monitoring in Kubernetes

You don't need monitoring until you need it. But if you're running anything in production, you always need it.

This is particularly true if you are managing databases. You need to be able to answer questions like  "am I running out of disk?" or "why does my application have degraded performance?" to be able to troubleshoot or mitigate problems before they occur.
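
As a small illustration, the "am I running out of disk?" question can at least be approximated from inside Postgres with the standard size functions (a sketch, independent of any particular monitoring stack):

-- Size of each database
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database;

-- Largest tables (including indexes and TOAST) in the current database
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;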

When I first made a foray into how to monitor PostgreSQL in Kubernetes, let alone in a containerized environment, I learned that a lot of the tools that I had used previously did not exactly apply (though keep in mind, that foray was a while back -- things have changed!). I found myself learning a whole new tech stack for monitoring, including open source projects such as Prometheus and Grafana.

I also learned how much I had taken for granted how easy it was to collect information like CPU and memory statistics in other environments. In the container world this was a whole different ballgame, as you needed to get this information from cgroups. Fortunately for me, my colleague Joe Conway built a PostgreSQL extension called pgnodemx that reads these values from within PostgreSQL itself. Read more about pgnodemx.

And then there is the process of getting the metrics stack set up. Even with my earlier experiments on setting up PostgreSQL monitoring with Docker, I knew there was more work to be done to make an easy-to-setup monitoring solution in Kubernetes.

All this, combined with the adoption of the PostgreSQL Operator,  made us want to change how we support monitoring PostgreSQL clusters on Kubernetes. We wanted to continue using proven open source solutions for monitoring and analyzing systems in Kubernetes (e.g. Prometheus, Grafana), introduce support for alerting (Alertmanager), and provide accurate host-style metrics for things like CPU, memory, and disk usage.

In other words, we wanted to fully support pgMonitor for Kubernetes.

As such, the key introduction in the PostgreSQL Operator 4.5 release is a revamped monitoring system.

Which leads us to...

by Jonathan S. Katz (jonathan.katz@crunchydata.com) at October 06, 2020 09:27 AM

September 30, 2020

Crunchy Data

Synchronous Replication in PostgreSQL

PostgreSQL has supported streaming replication and hot standbys since version 9.0 (2010), and synchronous replication since version 9.1 (2011). Streaming replication (and in this case we're referring to "binary" streaming replication, not "logical")  sends the PostgreSQL WAL stream over a network connection from primary to a replica. By default, streaming replication is asynchronous: the primary does not wait for a replica to indicate that it wrote the data. With synchronous replication, the primary will wait for any or all  replicas (based on synchronous replication mode) to confirm that they received and wrote the data.
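
As a rough sketch of how that is configured on the primary (the standby names below are placeholders, and the right mode depends on your durability and latency requirements):

-- Wait for the first available standby from this list to confirm each commit
ALTER SYSTEM SET synchronous_standby_names = 'FIRST 1 (standby1, standby2)';
-- 'on' waits for the standby to flush the WAL; 'remote_apply' also waits for replay
ALTER SYSTEM SET synchronous_commit = 'on';
SELECT pg_reload_conf();

-- Verify which standbys are currently treated as synchronous
SELECT application_name, sync_state FROM pg_stat_replication;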

by David Youatt at September 30, 2020 05:35 PM

September 29, 2020

Crunchy Data

PostgreSQL 13 Upgrade and Performance Check on Ubuntu/Debian: 1.6GB/s random reads

PostgreSQL 13 was released last week. I'm excited about this one, as the more mature partitioning plus logical replication features allow some long-requested deployment architectures. I ran 13 through my usual 144-test quick spin to see if everything was working as expected. Mainly boring stuff, but I was pleased to see that with the simple 128 client/4X RAM benchmark workload, Postgres 13 is driving 1.6GB/s of random read traffic to my PCIe 4.0 NVMe SSD. It keeps up with a whole RAID volume of the old SATA SSDs! (I checked)

by Greg Smith at September 29, 2020 01:15 PM

September 23, 2020

Crunchy Data

Using Postgres and pgRouting To Explore The Smooth Waves of Yacht Rock

pgRouting is a powerful routing tool, usually used for pathfinding/mapping/direction applications. (See Paul Ramsey's introduction to pgRouting here). It is, however, also a robust graph database implementation, and can be used for much more than just finding the directions to your great aunt Tildy’s (a quick sketch of the graph side follows below).
Yacht Rock (as if you didn’t know) is a music genre created well after its active era. It’s characterized by smooth dulcet sounds that bring to mind wavy blond-haired waspy men in boat shoes, and ultimately provides a sound that rocks, but won’t rock the boat.  
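
As a minimal sketch of that graph side (hypothetical data, not from the post): assume an edges(id, source, target, cost) table describing connections between artists; pgr_dijkstra can then walk the cheapest path between two vertices.

-- Cheapest path from vertex 1 to vertex 42 over a hypothetical edges table
SELECT seq, node, edge, cost
FROM pgr_dijkstra(
    'SELECT id, source, target, cost FROM edges',
    1,     -- start vertex id
    42,    -- end vertex id
    false  -- treat the graph as undirected
);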

by John Porvaznik at September 23, 2020 07:07 PM

September 22, 2020

Crunchy Data

Postgres Data Types in PL/Python

I've been digging a little bit into PL/Python recently, inspired by our launch of Crunchy Bridge: a multi-cloud Postgres service with PL/Python and PL/R built right in. One thing that has tripped me up a few times is how PostgreSQL data types work with Python, especially when using external packages like NumPy or SciPy.

Data type mapping is something you'll deal with when working with different languages or trying to get separate systems to talk to one another. Type mapping isn't unique to PL/Python; in fact, one colleague once wrote a program in C to interoperate between Ruby and Python to make some of these things easier. For the procedural languages available in Postgres, there's going to be at least some documentation on which data types in that language correspond to which types in Postgres (and vice versa). For PL/Python, there are a few differences to note in the mappings for Python 2 and Python 3; I'd recommend studying the docs as they're pretty comprehensive.
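
As a small illustration of that mapping (assuming the plpython3u extension is installed), a throwaway function can report how a few Postgres types arrive on the Python side:

CREATE EXTENSION IF NOT EXISTS plpython3u;

CREATE OR REPLACE FUNCTION py_type_names(i integer, n numeric, t text, a integer[])
RETURNS text
AS $$
    # Per the PL/Python docs: integer arrives as int, numeric as Decimal,
    # text as str, and arrays as Python lists
    return ', '.join(type(v).__name__ for v in (i, n, t, a))
$$ LANGUAGE plpython3u;

SELECT py_type_names(1, 2.5, 'hello', ARRAY[1, 2, 3]);
-- int, Decimal, str, list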

by Kat Batuigas at September 22, 2020 05:04 PM

September 18, 2020

Crunchy Data

Exploring PL/Python: Turn Postgres Table Data Into a NumPy Array

PL/Python can be a convenient and powerful way to take advantage of your PostgreSQL database. In an earlier post, I talked a little bit about how you can use modules included in the Python standard library with PL/Python functions. In this post, we'll try running NumPy in a simple user-defined function which also takes advantage of PL/Python database access functions. The function will show a working example of how to easily convert a data table in Postgres to a NumPy array.
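
A minimal sketch of the idea, assuming a hypothetical measurements(value) table and an installation with plpython3u and NumPy available; the post itself builds a fuller example.

CREATE OR REPLACE FUNCTION measurements_mean()
RETURNS double precision
AS $$
    import numpy as np
    # plpy.execute returns rows addressable by column name
    rows = plpy.execute("SELECT value FROM measurements")
    arr = np.array([row["value"] for row in rows], dtype=float)
    return float(arr.mean())
$$ LANGUAGE plpython3u;

SELECT measurements_mean();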

by Kat Batuigas at September 18, 2020 04:30 PM

September 17, 2020

Crunchy Data

Announcing Crunchy Bridge: A modern Postgres as a service

Most database services in the cloud have not significantly evolved in the past 5 years. They tend to support the basics of read replicas and backups and then move on to other new shiny services, forgetting about a key pillar of your application: your PostgreSQL database. Today we're looking to change that with the launch of Crunchy Bridge. At Crunchy Data, we believe in the power of Postgres and want to bring all the greatness that is Postgres to all developers.  

Crunchy Bridge delivers on the premise of a managed database service, allowing you to focus on your application, not your database, but we go several steps further.

For starters, you can choose your cloud, and have the ability to replicate across clouds. Start on AWS, but have a business need to shift to Azure? Migration is as simple as clicking a button. Wish you could avoid AWS downtime by having a standby replica running in Azure? Crunchy Bridge can do that.

Then there is how much you can do with your database. Need to do some more advanced learning on your database but don't want to write PL/pgSQL? PL/Python and PL/R are there for you... And for those of you who are curious, our PL/Python setup comes with SciPy, NumPy, and Pandas already set up for you, so you can start doing data science directly inside your Postgres database.

And then there are extensions. Postgres extensions are surprisingly powerful, allowing Postgres to add new data types, new functions, and new functionality without waiting on a release of the Postgres core. But let’s not get ahead of ourselves. At the core of our service is a database you can trust, including:

  • Configured to be production-ready out of the box
  • Automatic backups
  • High availability
  • Point in time recovery to any time in last 7 days
  • Encryption in transit and at rest

As of today, Crunchy Bridge is available on Amazon Web Services (AWS) and Azure. Crunchy Bridge is fully self-service with no annual contracts needed; if you don't want to talk to a salesperson, you don't have to. But if you have questions that we can help with, we are available, and I'm personally happy to help answer them. Pricing is simple: you select the instance (memory/CPU) and the amount of storage. Database clusters start at $70 a month and scale up from there.

by Craig Kerstiens at September 17, 2020 03:23 PM

September 15, 2020

Crunchy Data

Hidden Gems of PostgreSQL 13

PostgreSQL 13 is a lucky release! There are some very impactful new features and the release is coming out at its regularly scheduled time in the fall.

One of my favorite parts around the PostgreSQL release process is when I can take a step back, review and discuss the features going into a new release, and learn how they can make a positive impact on our users. For example, in a past hallway track at PGCon, I learned how PostgreSQL 12 introduced the ability to do a form of multi-factor authentication!

Each release has a lot of these "hidden gems" -- features that may not jump off the page, but can have a big impact when you actually need them. Postgres 13 is no exception: some of these features make it easier to write queries, add additional layers of security, or help you to avoid downtime.

So what are the hidden gems of PostgreSQL 13?

by Jonathan S. Katz (jonathan.katz@crunchydata.com) at September 15, 2020 08:40 AM

September 11, 2020

Crunchy Data

Joins or Subquery in PostgreSQL: Lessons Learned

My introduction to databases  and PostgreSQL was for web application development and statistical analysis. I learned just enough SQL to get the queries to return the right answers. Because of my work with PostGIS (and FOSS4G) I became friends with Paul Ramsey. We are now co-workers at Crunchy Data and he is helping me up my SQL-fu. One of the first lessons he taught me was "Try to use joins rather than subqueries."

Today's post is going to work through this  advice, as Paul and I work through some SQL.
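
As a small illustration of that advice (hypothetical customers and orders tables, not the queries from the post), here is the same question written both ways:

-- Subquery version: customers who placed an order in 2020
SELECT c.id, c.name
FROM customers c
WHERE c.id IN (
    SELECT o.customer_id
    FROM orders o
    WHERE o.created_at >= '2020-01-01'
);

-- Join version of the same question
SELECT DISTINCT c.id, c.name
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE o.created_at >= '2020-01-01';

The planner can often produce similar plans for both, but the join form tends to be easier to extend and reason about as queries grow.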

by Steve Pousty at September 11, 2020 01:09 PM

September 08, 2020

Crunchy Data

PostgreSQL Node Metrics: Don't Fly Blind

tl;dr -- If you are not actively monitoring your PostgreSQL clusters, and alerting on anomalous conditions, you are "flying blind". Don't do that!

At Crunchy Data, we distribute and support PostgreSQL as a product, in multiple forms, to many enterprise customers. We also make our products available via GitHub, our developer portal, and other places.

In particular, our team is well known for our Container Suite and Kubernetes Postgres Operator. Also pertinent to this blog, if less well known, is our pgMonitor project.

Recently we decided to bolster our monitoring capability as deployed by our Operator to include better node metrics. Specifically the goals were to:

    1. Gain access to certain Host/Operating System, and Kubernetes Pod (i.e. cgroup v1) specific, data
    2. Do so entirely via the SQL interface to PostgreSQL itself

There are various existing methods to achieve these goals, but we decided to create a purpose-built PostgreSQL extension to facilitate the gathering of these metrics. We will not debate the wisdom of that decision here, although personally I think it has worked out extremely well.

The extension is called pgnodemx. I also had additional goals for the extension to provide support for:

    3. Non-containerized PostgreSQL clusters
    4. Hosts running cgroup v2

The purpose of today's blog is to discuss this new extension in some detail.

Anatomy of the Solution

by Joe Conway (joe.conway@crunchydata.com) at September 08, 2020 05:58 PM

September 01, 2020

Crunchy Data

Three Easy Things To Remember About Postgres Indexes

If you're an application developer, analyst, data scientist, or anyone who's had to figure out how to work with relational databases, chances are you're familiar with indexes. At least to the extent that you know they somehow help speed up your queries. (That's where I'd left my understanding of indexes for a good amount of time). You may have also seen that PostgreSQL automatically creates an index for a column with a primary key or unique constraint. But start to dig just a little bit deeper and you'll quickly realize that there's a lot more to it than just making searches go faster!
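
As a quick, illustrative sketch (a hypothetical users table, not from the post): create an index, check whether the planner actually uses it, and see how often existing indexes get scanned.

-- A plain B-tree index on a commonly filtered column
CREATE INDEX users_email_idx ON users (email);

-- Does the planner use it for this query?
EXPLAIN (ANALYZE)
SELECT * FROM users WHERE email = 'kat@example.com';

-- How often each index in the current database is being scanned
SELECT relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;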

If you're getting started with learning about indexes, here are a few things that hopefully will help round out your understanding.

by Kat Batuigas at September 01, 2020 07:17 PM

August 28, 2020

Crunchy Data

Introducing the Postgres Prometheus Adapter

Prometheus is a popular open source monitoring tool and we have many customers that leverage it when using the Crunchy  PostgreSQL Operator or Crunchy PostgreSQL High Availability. Prometheus ships out-of-the-box with its own time series data store but we’re big fans of Postgres, and we know Postgres can do time series just fine. Furthermore, if you’re already running PostgreSQL and using Prometheus to monitor it, why not just store that data in a Postgres database?

Just because you can do something, doesn’t mean you should, but in this case it's not such a bad idea. By storing Prometheus metric data natively in Postgres we can leverage many of the other features of PostgreSQL including:

To make it easier for anyone that wants to use Postgres as their backing store for Prometheus, we’re proud to announce the release of the PostgreSQL Prometheus Adapter.

by Yogesh Sharma at August 28, 2020 02:56 PM

August 25, 2020

Crunchy Data

Why PostgreSQL 13 is a Lucky Release

I'll admit it: I used to have triskaidekaphobia, otherwise known as a "fear or avoidance of the number 13." After all, I grew up in an area with many tall buildings where the elevators have no 13th floor, so it seems like I could rationalize this irrationality. I've been able to push aside this phobia, but I can completely empathize with those who don't like dealing with 13.

Which brings us to our topic today: the upcoming PostgreSQL 13 release.

The fear of the number 13 was enough to prompt discussions about the upcoming version number, but given a lot of factors (including the relatively recent version numbering scheme change), the community opted to stick with the status quo.

But the aim of this post is not to dive into the psychology behind the number thirteen, but rather to go into why Postgres 13 is a special, if not lucky, release.

Similar to what I wrote last year, PostgreSQL 13 is not a "This is the FEATURE release" that we've had in previous years (Partitioning! Logical Replication! Parallelism! Upsert! JSON! etc.), but rather improves upon many existing features based on real-world PostgreSQL experiences. In fact, I'd call PostgreSQL 13 the "something for everyone" release: there is very likely a feature that you will immediately benefit from by upgrading to PostgreSQL 13.

So while I encourage you to check out the Postgres 13 release notes (which at the time of this writing is still a draft), let me discuss some of the features that make this release very lucky!

by Jonathan S. Katz (jonathan.katz@crunchydata.com) at August 25, 2020 10:50 AM

August 15, 2020

PostGIS Development

PostGIS 3.0.2, 2.5.5, 2.4.9 Released

The PostGIS development team is pleased to provide bug fix and performance enhancement releases 3.0.2, 2.5.5, and 2.4.9 for the 3.0, 2.5, and 2.4 stable branches.

by Regina Obe at August 15, 2020 12:00 AM

July 18, 2020

PostGIS Development

PostGIS 3.1.0alpha2

The PostGIS Team is pleased to release the second alpha of upcoming PostGIS 3.1.0 release.

Best served with:

  • PostgreSQL 13beta2
  • GEOS 3.7 or higher is recommended; ST_MaximumInscribedCircle requires compilation with GEOS 3.9.0 (currently in development) to be enabled
  • pgRouting 3.1.0, which will also be released soon

by Regina Obe at July 18, 2020 12:00 AM

June 11, 2020

Paul Ramsey

Developers Diary 2

Have you ever watched a team of five-year-olds play soccer? The way the mass of children chases the ball around in a group? I think programmers do that too.

Get the ball!

There’s something about working on a problem together that is so much more rewarding than working separately, we cannot help but get drawn into other peoples problems. There’s a lot of gratification to be had in finding a solution to a shared difficulty!

Even better, different people bring different perspectives to a problem, and illuminate different areas of improvement.

Maximum Inscribed Circle

A couple months ago, my colleague Martin Davis committed a pair of new routines into JTS, to calculate the largest circles that can fit inside a polygon or in a collection of geometries.

Maximum Inscribed Circle

We want to bring all the algorithmic goodness of JTS to PostGIS, so I took up the first step, and ported “maximum inscribed circle” to GEOS and to PostGIS.

When I ported the GEOS test cases, I turned up some odd performance problems. The calculation seemed to be taking inordinately long for larger inputs. What was going on?

The “maximum inscribed circle” algorithm leans heavily on a routine called IndexedFacetDistance to calculate distances between polygon boundaries and candidate circle-centers while converging on the “maximum inscribed circle”. If that routine is slow, the whole algorithm will be slow.

Dan Baston, who originally ported the “IndexedFacetDistance” class, got interested and started looking at some test cases of his own.

He found he could improve his old implementation using better memory management that he’d learned in the meantime. He also found some short-circuits to envelope distance calculation that improved performance quite a bit.

In fact, they improved performance so much that Martin ported them back to JTS, where he found that for some cases he could log a 10x performance improvement in distance calculations.

There’s something alchemical about the whole thing.

  • There was a bunch of long-standing code nobody was looking at.
  • I ported an unrelated algorithm which exercised that code.
  • I wrote a test case and reported some profiling information.
  • Other folks with more knowledge were intrigued.
  • They fed their knowledge back and forth and developed more tests.
  • Improvements were found that made everything faster.

I did nothing except shine a light in a dark hole, and everyone else got very excited and things happened.

Toast Caching Redux

In a similar vein, as I described in my last diary entry, a long-standing performance issue in PostGIS was the repeated reading of large geometries during spatial joins.

Much of the problem was solved by dropping a very small “TOAST cache” into the process by which PostGIS reads geometries in functions frequently used in spatial joins.

TOAST

I was so happy with the improvement the TOAST cache provided that I just stopped. Fortunately, my fellow PostGIS community member Raúl Marín was more stubborn.

Having seen my commit of the TOAST cache, and having done some work in other caching parts of PostGIS, he took up the challenge and integrated the TOAST cache with the existing index caches.

The integrated system now uses TOAST identifiers to note identical repeated inputs and avoid both unnecessary reads off disk and unnecessary cache checks of the index cache.

The result is that, for spatial joins over large objects, PostGIS 3.1 will be as much as 60x faster than the performance in PostGIS 3.0.

I prepared a demo for a bid proposal this week and found that an example query that took 800ms on my laptop took a full minute on the beefy 16-core demo server. What had I done wrong? Ah! My laptop is running the latest PostGIS code (which will become 3.1) while the cloud server was running PostGIS 2.4. Mystery solved!

Port, Port, Port

I may have mentioned that I’m not a very good programmer.

My current task is definitely exercising my imposter syndrome: porting Martin’s new overlay code from JTS to GEOS.

I knew it would take a long time, and I knew it would be a challenge; but knowing and experiencing are quite different things.

The challenges, as I’ve experienced them are:

  • Moving from Java’s garbage collected memory model to C++’s managed memory model means that I have to understand the object life-cycle which is implicit in Java and make it explicit in C++, all while avoiding accidentally introducing a lot of memory churn and data copying into the GEOS port. Porting isn’t a simple matter of transcribing and papering over syntactic idiom, it involves first understanding the actual JTS algorithms.
  • The age of the GEOS code base, and number of contributors over time, mean that there are a huge number of different patterns to potentially follow in trying to make a “consistent” port to GEOS. Porting isn’t a matter of blank-slate implementation of the JTS code – the ported GEOS code has to slot into the existing GEOS layout. So I have to spend a lot of time learning how previous implementations chose to handle life cycles and call patterns (pass reference, or pointer? yes. Return value? or void return and output parameter? also yes.)
  • My lack of C++ idiom means I spend an excessive amount of time looking up core functions and methods associated with them. This is the only place I’ve felt myself measurably get better over the past weeks.

I’m still only just getting started, having ported some core data structures, and little pieces of dependencies that the overlay needs. The reward will be a hugely improved overlay code for GEOS and thus PostGIS, but I anticipate the debugging stage of the port will take quite a while, even when the code is largely complete.

Wish me luck, I’m going to need it!

If you would like to test the new JTS overlay code, it resides on this branch.
If you would like to watch me suffer as I work on the port, the GEOS branch is here.

June 11, 2020 08:00 AM

May 31, 2020

PostGIS Development

PostGIS 2.3.11

The PostGIS Team is pleased to release PostGIS 2.3.11. This is the last bug fix release of the PostGIS 2.3 series. Please upgrade to 2.4 or higher if you want to continue receiving bug fixes.

If you come across any issues, feel free to report via our ticket tracker https://trac.osgeo.org/postgis or mailing list with details as described here. For security issues, send reports to security@postgis.net.

by Regina Obe at May 31, 2020 12:00 AM

May 05, 2020

Michal Zimmermann

PostGIS Data Anonymization

Among all the sensitive spatial data being collected through cellphones and credit cards, our address of residency is probably the most delicate one. Can it be anonymized/pseudonymized/obscured before you share it with your business partners?

Imagine you are given a set of address points for each of your clients and the set of all address points in the country; you should adjust the client points in the following way:

  • find the two nearest address points for each address point of your client
  • find the center of these two and the client address point
  • measure the distance of the computed center to each of three points and keep the maximum value
  • make the biggest distance even bigger by adding 10 % of its value
  • ceil the value
  • output the new position and the ceiled distance

This shifts each address point by a dynamic distance, giving us at least three points within the given distance (one of them being the original address point).

SELECT
    tmp.code,
    ST_X(tmp.new_position) x,
    ST_Y(tmp.new_position) y,
    ceil(MAX(biggest_distance) + MAX(biggest_distance) * 0.1) round_distance
FROM (
    SELECT
        tmp.code,
        tmp.geom,
        ST_Centroid((ST_Union(two_closest_points, tmp.geom))) new_position,
        -- get distance to two closest points and the client address point
        ST_Centroid((ST_Union(two_closest_points, tmp.geom))) <-> (ST_DumpPoints(ST_Union(two_closest_points, tmp.geom))).geom biggest_distance
    FROM (
        SELECT
            r1.code,
            r1.geom,
            ST_Union(neighbours.geom) two_closest_points
        FROM address_points r1,
        LATERAL (
            -- keep two closest points to each client address point
            SELECT
                r2.code,
                r2.geom,
                r1.geom <-> r2.geom distance
            FROM address_points r2
            WHERE r1.code <> r2.code
            ORDER BY r1.geom <-> r2.geom ASC
            LIMIT 2
        ) neighbours
        GROUP BY
            r1.code,
            r1.geom
    ) tmp
) tmp
GROUP BY
    tmp.code,
    tmp.geom,
    tmp.new_position;

You might want to use LATERAL for tasks like this.

by Michal Zimmermann at May 05, 2020 02:00 PM

April 16, 2020

Paul Ramsey

Developers Diary 1

I’m not a particularly good developer.

I don’t plan well, I tend to hack first and try and find the structure afterwards. I am easily distracted. It takes me an exceedingly long time to marshal a problem in my head enough to attack it.

That said, the enforced slow-down from pandemic time has given me the opportunity to sit and look at code, knowing nothing else is coming down the pipe. There are no talks to prepare, no big-think keynotes to draft. I enjoy those things, and I really enjoy the ego-boost of giving them, but the preparation of them puts me in a mental state that is not conducive to doing code work.

So the end of travel has been good, for at least one aspect of my professional work.

The Successful Failure

Spatial operations against large objects have always been a performance hot spot.

The first problem is that large objects are … large. So if you have algorithms that scale O(n^2) on the number of vertices large objects will kill you. Guess what? Distance, intersects tests, and so on are all O(n^2) in their basic implementations.

We solved this problem a long time ago in PostGIS by putting in an extra layer of run-time indexing.

INDEXING!

During a query (for those functions where it makes sense) if we see the same object twice in a row, we build an index on the edges of that object and keep the index in memory, for the life of the query. This gives us O(log(n)) performance for intersects, point-in-polygon, and so on. For joins in particular, this pattern of “seeing the same big thing multiple times” is very common.

This one small trick is one reason PostGIS is so much faster than “the leading brands”.

However, in order to “see the same object twice” we have to, for each function call in the query, retrieve the whole object, in order to compare it against the one we are holding in memory, to see if it is the same.

Here we run into an issue with our back-end.

PostgreSQL deals with large objects by (a) compressing them and (b) cutting the compressed object into slices and storing them in a side table. This all happens in the background, and is why you can store 1GB objects transparently in a database that has only an 8KB page size.

It’s quite computationally expensive, though. So much so that I found that simply bypassing the compression part of this feature could provide 5x performance gains on our spatial join workload.

Faster

At a code sprint in 2018, the PostGIS team agreed on the necessary steps to work around this long-standing performance issue.

  • Enhance PostgreSQL to allow partial decompression. This would allow the PostGIS caching system to retrieve just a little bit of large objects and use that part to determine if the object was not already in the cache.
  • Enhance the PostGIS serialization scheme to add a hashcode at the front of each large object. This way “is this a new object” could be answered with just a few bytes of hash, instead of checking the whole object.
  • Actually update the caching code to use the hash code and avoid unnecessary object retrievals.

Since this involved a change in PostgreSQL, which runs on an annual release cycle, and a change to the PostGIS serialization scheme, which is a major release marker, the schedule for this work was… long term.

Long Term

Still, I managed to slowly chip away at it, goal in mind:

That left adding the hash code to the front of the objects, and using that code in the PostGIS statement cache.

And this is where things fall apart.

Things Fall Apart

The old statement cache was focussed on ensuring the in-memory indexes were in place. It didn’t kick in until the object had already been retrieved. So avoiding retrieval overhead was going to involve re-working the cache quite a bit, to handle both object and index caching.

I started on the work, which still lives on in this branch, but the many possible states of the cache (do I have part of an object? a whole object? an indexed object?) and the fact that it was used in multiple places by different indexing methods (geography tree, geometry tree, GEOS tree), made the change worrisomely complex.

And so I asked a question, that I should have asked years ago, to the pgsql-hackers list:

… within the context of a single SQL statement, will the Datum values for a particular object remain constant?

Basically, could I use the datum values as unique object keys without retrieving the whole object? That would neatly remove any need to retrieve full objects in order to determine if the cache needed to be updated. As usual, Tom Lane had the answer:

Jeez, no, not like that.

Oh, “good news”, I guess, my work is not in vain. Except wait, Tom included a codicil:

The case where this would actually be worth doing, probably, is where you are receiving a toasted-out-of-line datum. In that case you could legitimately use the toast pointer ID values (va_valueid + va_toastrelid) as a lookup key for a cache, as long as it had a lifespan of a statement or less.

Hm. So for a subset of objects, it was possible to generate a unique key without retrieving the whole object.

Facepalm

And that subset – “toasted-out-of-line datum” – were in fact the objects causing the hot spot: objects large enough to have been compressed and then stored in a side table in 8KB chunks.

What if, instead of re-writing my whole existing in-memory index cache, I left that in place, and just added a simple new cache that only worried about object retrieval. And only cached objects that it could obtain unique keys for, these “toasted-out-of-line” objects. Would that improve performance?

It did: by 20 times on my favourite spatial join benchmark. It increased performance by 5 times on a join where only 10% of the objects were large ones. And on joins where none of the objects were large, the new code did not reduce performance at all.

And here’s the punch line: I’ve known about the large object hot spot for at least 5 years. Probably longer. I put off working on it because I thought the solution involved core changes to PostgreSQL and PostGIS, so first I had to put those changes in, which took a long time.

Once I started working on the “real problem”, I spent a solid week:

  • First on a branch to add hash codes, using the new serialization mechanisms from PostGIS 3.
  • Then on a unified caching system to replace the old in-memory index cache.

And then I threw all that work away, and in about 3 hours, wrote and tested the final patch that gave a 20x performance boost.

So, was this a success or a failure?

Stupid

I’ve become inured to the huge mismatch in “time spent versus code produced”, particularly when debugging. Spending 8 hours stepping through a debugger to generate a one-line patch is pretty routine.

But something about the mismatch between my grandiose and complex solution (partial retrieval! hash code!) and the final solution (just ask! try the half-measure! see if it’s better!) has really gotten on my nerves.

I like the win, but the path was a long and windy one, and PostGIS users have had slower queries than necessary for years because I failed to pose my simple question to the people who had an answer.

The Successful Success

Contra to that story of the past couple weeks, this week has been a raging success. I keep pinching myself and waiting for something to go wrong.

A number of years ago, JTS got an improvement to robustness in some operations by doing determinant calculations in higher precision than the default IEEE double precision.

Those changes didn’t make it into GEOS. There was an experimental branch, that Mateusz Loskot put together, and it sat un-merged for years, until I picked it up last fall, rebased it and merged it. I did so thinking that was the fastest way, and probably it was, but it included a dependency on a full-precision math library, ttmath, which I added to our tree.

Stupid

Unfortunately, ttmath is basically unmaintained now.

And ttmath is arbitrary precision, while we really only need “higher precision”. JTS just uses a “double double” implementation, that uses the register space of two doubles for higher precision calculations.

And ttmath doesn’t support big-endian platforms (like Sparc, Power, and other chips), which was the real problem. We couldn’t go on into the future without support for these niche-but-not-uncommon platforms.

And ttmath includes some fancy assembly language that makes the build system more complex.

Fortunately, the JTS DD is really not that large, and it has no endian assumptions in it, so I ported it and tested it out against ttmath.

It’s smaller.

It’s faster. (About 5-10%. Yes, even though it uses no special assembly tricks, probably because it doesn’t have to deal with arbitrary precision.)

And here’s the huge surprise: it caused zero regression failures! It has exactly the same behaviour as the old implementation!

Perfect

So needless to say, once the branch was stable, I merged it in and stood there in wonderment. It seems implausible that something as foundational as the math routines could be swapped out without breaking something.

The whole thing took just a few days, and it was so painless that I’ve also made a patch to the 3.8 stable series to bring the new code back for big endian platform support in the meantime.

The next few days I’ll be doing ports of JTS features and fixes that are net-new to GEOS, contemplative work that isn’t too demanding.

Some days everything is easy.

Some days everything is hard.

Don’t let the hard days hold you back!

Perfect

April 16, 2020 08:00 AM

March 02, 2020

Anita Graser (Underdark)

Movement data in GIS #29: power your web apps with movement data using mobilitydb-sqlalchemy

This is a guest post by Bommakanti Krishna Chaitanya @chaitan94

Introduction

This post introduces mobilitydb-sqlalchemy, a tool I’m developing to make it easier for developers to use movement data in web applications. Many web developers use Object Relational Mappers such as SQLAlchemy to read/write Python objects from/to a database.

Mobilitydb-sqlalchemy integrates the moving objects database MobilityDB into SQLAlchemy and Flask. This is an important step towards dealing with trajectory data using appropriate spatiotemporal data structures rather than plain spatial points or polylines.

To make it even better, mobilitydb-sqlalchemy also supports MovingPandas. This makes it possible to write MovingPandas trajectory objects directly to MobilityDB.

For this post, I have made a demo application which you can find live at https://mobilitydb-sqlalchemy-demo.adonmo.com/. The code for this demo app is open source and available on GitHub. Feel free to explore both the demo app and code!

In the following sections, I will explain the most important parts of this demo app, to show how to use mobilitydb-sqlalchemy in your own webapp. If you want to reproduce this demo, you can clone the demo repository and do a "docker-compose up --build", as it automatically sets up the docker image for you along with running the backend and frontend. Just follow the instructions in README.md for more details.

Declaring your models

For the demo, we used a very simple table – with just two columns – an id and a tgeompoint column for the trip data. Using mobilitydb-sqlalchemy this is as simple as defining any regular table:

from flask_sqlalchemy import SQLAlchemy
from mobilitydb_sqlalchemy import TGeomPoint

db = SQLAlchemy()

class Trips(db.Model):
   __tablename__ = "trips"
   trip_id = db.Column(db.Integer, primary_key=True)
   trip = db.Column(TGeomPoint)

Note: The library also allows you to use the Trajectory class from MovingPandas as well. More about this is explained later in this tutorial.

Populating data

When adding data to the table, mobilitydb-sqlalchemy expects data in the tgeompoint column to be a time-indexed pandas DataFrame, with two columns: one for the spatial data called “geometry” with Shapely Point objects, and one for the temporal data “t” with regular Python datetime objects.

from datetime import datetime

import pandas as pd
from shapely.geometry import Point

# Prepare and insert the data
# Typically it won’t be hardcoded like this, but it might be coming from 
# other data sources like a different database or maybe csv files
df = pd.DataFrame(
   [
       {"geometry": Point(0, 0), "t": datetime(2012, 1, 1, 8, 0, 0),},
       {"geometry": Point(2, 0), "t": datetime(2012, 1, 1, 8, 10, 0),},
       {"geometry": Point(2, -1.9), "t": datetime(2012, 1, 1, 8, 15, 0),},
   ]
).set_index("t")

trip = Trips(trip_id=1, trip=df)
db.session.add(trip)
db.session.commit()

Writing queries

In the demo, you see two modes. Both modes were designed specifically to explain how functions defined within MobilityDB can be leveraged by our webapp.

1. All trips mode – In this mode, we extract all trip data, along with distance travelled within each trip, and the average speed in that trip, both computed by MobilityDB itself using the ‘length’, ‘speed’ and ‘twAvg’ functions. This example also shows that MobilityDB functions can be chained to form more complicated queries.

from sqlalchemy import func  # needed for calling MobilityDB functions such as length/speed/twAvg

trips = db.session.query(
   Trips.trip_id,
   Trips.trip,
   func.length(Trips.trip),
   func.twAvg(func.speed(Trips.trip))
).all()

2. Spatial query mode – In this mode, we extract only selective trip data, filtered by a user-selected region of interest. We then make a query to MobilityDB to extract only the trips which pass through the specified region. We use MobilityDB’s ‘intersects’ function to achieve this filtering at the database level itself.

trips = db.session.query(
   Trips.trip_id,
   Trips.trip,
   func.length(Trips.trip),
   func.twAvg(func.speed(Trips.trip))
).filter(
   func.intersects(Point(lat, lng).buffer(0.01).wkb, Trips.trip),
).all()

Using MovingPandas Trajectory objects

Mobilitydb-sqlalchemy also provides first-class support for MovingPandas Trajectory objects, which can be installed as an optional dependency of this library. Using this Trajectory class instead of plain DataFrames allows us to make use of much richer functionality over trajectory data, like speed analysis, interpolation, splitting and simplification of trajectory points, calculating bounding boxes, etc. To make use of this feature, you have to set the use_movingpandas flag to True while declaring your model, as shown in the code snippet below.

class TripsWithMovingPandas(db.Model):
   __tablename__ = "trips"
   trip_id = db.Column(db.Integer, primary_key=True)
   trip = db.Column(TGeomPoint(use_movingpandas=True))

Now when you query over this table, you automatically get the data parsed into Trajectory objects without having to do anything else. This also works during insertion of data – you can directly assign your movingpandas Trajectory objects to the trip column. In the below code snippet we show how inserting and querying works with movingpandas mode.

from datetime import datetime

import movingpandas as mpd
import pandas as pd
from geopandas import GeoDataFrame
from shapely.geometry import Point

# Prepare and insert the data
# Typically it won’t be hardcoded like this, but it might be coming from 
# other data sources like a different database or maybe csv files
df = pd.DataFrame(
   [
       {"geometry": Point(0, 0), "t": datetime(2012, 1, 1, 8, 0, 0),},
       {"geometry": Point(2, 0), "t": datetime(2012, 1, 1, 8, 10, 0),},
       {"geometry": Point(2, -1.9), "t": datetime(2012, 1, 1, 8, 15, 0),},
   ]
).set_index("t")

geo_df = GeoDataFrame(df)
traj = mpd.Trajectory(geo_df, 1)

trip = Trips(trip_id=1, trip=traj)
db.session.add(trip)
db.session.commit()

# Querying over this table would automatically map the resulting tgeompoint 
# column to movingpandas’ Trajectory class
result = db.session.query(TripsWithMovingPandas).filter(
   TripsWithMovingPandas.trip_id == 1
).first()

print(result.trip.__class__)
# <class 'movingpandas.trajectory.Trajectory'>

Bonus: trajectory data serialization

Along with mobilitydb-sqlalchemy, I have also recently released trajectory data serialization/compression libraries based on Google’s Encoded Polyline Format Algorithm, for Python and JavaScript, called trajectory and trajectory.js respectively. These libraries let you send trajectory data in a compressed format, resulting in smaller payloads if you send your data through human-readable serialization formats like JSON. In some of the internal APIs we use at Adonmo, we have seen this reduce our response sizes by more than half (>50%), sometimes up to 90%.

Want to learn more about mobilitydb-sqlalchemy? Check out the quick start & documentation.


This post is part of a series. Read more about movement data in GIS.

by chaitan94 at March 02, 2020 05:51 PM

February 28, 2020

PostGIS Development

PostGIS 2.5.4

The PostGIS Team is pleased to release PostGIS 2.5.4.

by Paul Ramsey at February 28, 2020 12:00 AM

February 27, 2020

Paul Ramsey

PostGIS Day in STL

Every year, on the second Wednesday of November, Esri (“the Microsoft of GIS”) promotes a day of celebration, “GIS Day” in which the members of our community unite to tell the world about the wonders of cartography and spatial data and incidentally use their software a lot in the process.

And every year, for the last number of years, on the day after “GIS Day”, a motley crew of open source users and SQL database aficionados observe “PostGIS Day”. Until this fall, I had never had a chance to personally participate in a PostGIS Day event, but this year Crunchy sponsored a day in St Louis, and I got to talk an awful lot about PostGIS.

It was really good, and I feel like there’s lots more to be done, if only on the subject of spatial SQL and analysis in the database. Here are the talks I gave; the balance are on the event page.

PostGIS Introduction

Serving Dynamic Vector Tiles

Geocoding and Text Search in PostGIS

PostGIS 3.0 Overview

February 27, 2020 08:00 AM

February 20, 2020

PostGIS Development

PostGIS 3.0.1

The PostGIS Team is pleased to release PostGIS 3.0.1.

Best served with PostgreSQL 12.2, GEOS 3.8.0, SFCGAL 1.3.7, GDAL 3.0.4, PROJ 6.3.1, protobuf-c 1.3.3, json-c 0.13.1.

by Darafei Praliaskouski at February 20, 2020 12:00 AM

February 02, 2020

PostGIS Development

PostGIS 3.1.0alpha1

The PostGIS Team is pleased to release the first alpha of upcoming PostGIS 3.1.0 release.

Best served with PostgreSQL 12.1, GEOS 3.8.0.

by Darafei Praliaskouski at February 02, 2020 12:00 AM

November 18, 2019

Paul Ramsey

OGR FDW Spatial Filtering

The OGR FDW now pushes spatial filters down to remote data sources!

Whuuuut?!?!?

The Basics

OK, first, “OGR” is a subcomponent of the GDAL toolkit that allows generic access to dozens of different geospatial file formats. The OGR part handles the “vector” data (points, lines and polygons) and the GDAL part handles the “raster” data (imagery, elevation grids).

Second, “FDW” is a “foreign data wrapper”, an extension API for PostgreSQL that allows developers to connect non-database information to the database and present it in the form of a table.

The simplest FDWs, like the Oracle FDW, just make remote database tables in foreign systems look like local ones. Connecting two databases is “easy” because they share the same data model: tables of typed columns and rows of data.

The OGR data model is pleasantly similar to the database data model. Every OGR “datasource” (database) has “layers” (tables) made of “fields” (columns) with data types like “string” (varchar) and “number” (integer, real).

Now, combine the two ideas of “OGR” and “FDW”!

The “OGR FDW” uses the OGR library to present geospatial data sources as tables inside a PostgreSQL database. The FDW abstraction layer lets us make tables, and OGR abstraction layer lets those tables be sourced from almost any geospatial file format or server.

It’s an abstraction layer over an abstraction layer… the best kind!

Setup the FDW

Here’s an example that connects to a “web feature service” (WFS) from Belgium (we all speak Flemish, right?) and makes a table of it.


CREATE EXTENSION postgis;
CREATE EXTENSION ogr_fdw;

CREATE SERVER wfsserver 
  FOREIGN DATA WRAPPER ogr_fdw 
  OPTIONS (
    datasource 'WFS:http://geoservices.informatievlaanderen.be/overdrachtdiensten/Haltes/wfs',
    format 'WFS',
    config_options 'CPL_DEBUG=ON'
  );

CREATE FOREIGN TABLE haltes (
    fid bigint,
    shape Geometry(Point,31370),
    gml_id varchar,
    uidn double precision,
    oidn double precision,
    stopid double precision,
    naamhalte varchar,
    typehalte integer,
    lbltypehal varchar,
    codegem varchar,
    naamgem varchar
  ) 
  SERVER wfsserver 
  OPTIONS (
    layer 'Haltes:Halte'
  );

Pushdown from FDW

Let’s run a query on the haltes table, and peak into what the OGR FDW is doing, by setting the debug level to DEBUG1.

SET client_min_messages = DEBUG1;

SELECT gml_id, ST_AsText(shape) AS shape, naamhalte, lbltypehal
  FROM haltes 
  WHERE lbltypehal = 'Niet-belbus'
    AND shape && ST_MakeEnvelope(207950, 186590, 207960, 186600, 31370);

We get back one record, and two debug entries:

DEBUG:  OGR SQL: (LBLTYPEHAL = 'Niet-belbus')
DEBUG:  OGR spatial filter (207950 186590, 207960 186600)
-[ RECORD 1 ]-----------------------
gml_id     | Halte.10328
shape      | POINT(207956 186596)
naamhalte  | Lummen Frederickxstraat
lbltypehal | Niet-belbus

The debug entries are generated by the OGR FDW code, when it recognizes there are parts of the SQL query that can be passed to OGR:

  • OGR understands some limited SQL syntax, and OGR FDW passes those parts of any PostgreSQL query down to OGR.
  • OGR can handle simple bounding box spatial filters, and when OGR FDW sees the use of the && PostGIS operator, it passes the filter constant down to OGR.

So OGR FDW is passing the attribute and spatial filters from the SQL down to the OGR layer. But are they then being passed on to the remote datasource?

Pushdown from OGR

Every OGR “driver” is capable of pushing different amounts of logic down to the source data.

  • A driver that reads a file format cannot push anything down: there is no logic in a file.
  • A driver that reads from a database can push a lot down: databases are rich and powerful execution engines in their own right.

Our example data source, the Belgian “web feature server” actually supports both attribute and spatial filters, and the OGR driver will pass them down.

We can see OGR passing the filters down because when we created the server, we set config_options 'CPL_DEBUG=ON', to expose the GDAL logging information to our PostgreSQL server.

The GDAL debug entries are visible when we set the logging level to DEBUG2.

SET client_min_messages = DEBUG2;

SELECT gml_id, ST_AsText(shape) AS shape, naamhalte, lbltypehal
  FROM haltes 
  WHERE lbltypehal = 'Niet-belbus'
    AND shape && ST_MakeEnvelope(207950, 186590, 207960, 186600, 31370);

Now we get a whole slew of logging, but I’m only going to pull out one line, the line that shows the WFS query that OGR sends to the remote server:

DEBUG:  GDAL None [0] WFS: http://geoservices.informatievlaanderen.be/overdrachtdiensten/Haltes/wfs?SERVICE=WFS&VERSION=1.1.0&REQUEST=GetFeature&TYPENAME=Haltes:Halte&FILTER=%3CFilter%20xmlns%3D%22http:%2F%2Fwww.opengis.net%2Fogc%22%20xmlns:Haltes%3D%22informatievlaanderen.be%2FHaltes%22%20xmlns:gml%3D%22http:%2F%2Fwww.opengis.net%2Fgml%22%3E%3CAnd%3E%3CPropertyIsEqualTo%3E%3CPropertyName%3ELBLTYPEHAL%3C%2FPropertyName%3E%3CLiteral%3ENiet%2Dbelbus%3C%2FLiteral%3E%3C%2FPropertyIsEqualTo%3E%3CBBOX%3E%3CPropertyName%3EHaltes:SHAPE%3C%2FPropertyName%3E%3Cgml:Box%3E%3Cgml:coordinates%3E207950.0000000000000000,186590.0000000000000000%20207960.0000000000000000,186600.0000000000000000%3C%2Fgml:coordinates%3E%3C%2Fgml:Box%3E%3C%2FBBOX%3E%3C%2FAnd%3E%3C%2FFilter%3E

Awesome, right?

That’s pretty much unreadable, but if I copy out the value of the FILTER request variable and reverse the URL encoding, I get this:

<Filter
  xmlns="http://www.opengis.net/ogc"
  xmlns:Haltes="informatievlaanderen.be/Haltes"
  xmlns:gml="http://www.opengis.net/gml">
  <And>
    <PropertyIsEqualTo>
      <PropertyName>LBLTYPEHAL</PropertyName>
      <Literal>Niet-belbus</Literal>
    </PropertyIsEqualTo>
    <BBOX>
      <PropertyName>Haltes:SHAPE</PropertyName>
      <gml:Box>
        <gml:coordinates>
          207950.0000000000000000,186590.0000000000000000
          207960.0000000000000000,186600.0000000000000000
        </gml:coordinates>
      </gml:Box>
    </BBOX>
  </And>
</Filter>

I know, who ever thought that jamming an XML encoded version of a SQL filter into an HTTP GET request was a good idea? (Some very very nice people.)

Anyways, as you can see, both the attribute and spatial portions of our original SQL query have been re-encoded as a WFS XML filter, and sent to the remote server.

OGR FDW correctly pushed the attribute and spatial portions of the WHERE clause into OGR, and OGR correctly pushed those filters into the dialect of the driver we were using, in this case the WFS driver.

The End

The really really cool part is that if we had been using, for example, the Oracle driver, OGR would have instead generated Oracle-compatible SQL and pushed that down!

It’s an abstraction layer over an abstraction layer… the best kind!

November 18, 2019 08:00 AM

November 16, 2019

Anita Graser (Underdark)

Movement data in GIS #25: moving object databases

Recently there has been some buzz on Twitter about a new moving object database (MOD) called MobilityDB that builds on PostgreSQL and PostGIS (Zimányi et al. 2019). The MobilityDB Github repo was published in February 2019, but according to a presentation at PgConf.Russia 2019 it has been under development for a few years.

Of course, moving object databases have been around for quite a while. The two most commonly cited MODs are HermesDB (Pelekis et al. 2008) which comes as an extension for either PostgreSQL or Oracle and is developed at the University of Piraeus and SECONDO (de Almeida et al. 2006) which is a stand-alone database system developed at the Fernuniversität Hagen. However, both MODs remain at the research prototype level and have not achieved broad adoption.

It will be interesting to see if MobilityDB will be able to achieve the goal they have set in the title of Zimányi et al. (2019) to become “a mainstream moving object database system”. It’s promising that they are building on PostGIS and using its mature spatial analysis functionality instead of reinventing the wheel. They also discuss why they decided that PostGIS trajectories (which I’ve written about in previous posts) are not the way to go.

However, the presentation does not go into detail on whether there are any straightforward solutions for visualizing data stored in MobilityDB.

According to the Github readme, MobilityDB runs on Linux and needs PostGIS 2.5. They also provide an online demo as well as a Docker container with MobilityDB and all its dependencies. If you give it a try, I would love to hear about your experiences.

References

  • de Almeida, V. T., Güting, R. H., & Behr, T. (2006). Querying moving objects in SECONDO. In 7th International Conference on Mobile Data Management (MDM’06) (pp. 47-47). IEEE.
  • Pelekis, N., Frentzos, E., Giatrakos, N., & Theodoridis, Y. (2008). HERMES: aggregative LBS via a trajectory DB engine. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data (pp. 1255-1258). ACM.
  • Zimányi, E., Sakr, M., Lesuisse, A., & Bakli, M. (2019). MobilityDB: A Mainstream Moving Object Database System. In Proceedings of the 16th International Symposium on Spatial and Temporal Databases (pp. 206-209). ACM.

This post is part of a series. Read more about movement data in GIS.

by underdark at November 16, 2019 02:14 PM

November 13, 2019

Paul Ramsey

ST_Subdivide all the Things

This post originally appeared in the CARTO blog.

One of the things that makes managing geospatial data challenging is the huge variety of scales that geospatial data covers: areas as large as a continent or as small as a manhole cover.

The data in the database also covers a wide range, from single points, to polygons described with thousands of vertices. And size matters! A large object takes more time to retrieve from storage, and more time to run calculations on.

The Natural Earth countries file is a good example of that variation. Load the data into PostGIS and inspect the object sizes using SQL:

SELECT admin, ST_NPoints(the_geom), ST_MemSize(the_geom) 
FROM ne_10m_admin_0_countries 
ORDER BY ST_NPoints(the_geom);

  • Coral Sea Islands are represented with a 4 point polygon, only 112 bytes.
  • Canada is represented with a 68159 point multi-polygon, over 1 megabyte in size!

Countries by Size in KB

Over half (149) of the countries in the table are larger than the database page size (8Kb), which means they will take extra time to retrieve.

SELECT Count(*) 
FROM ne_10m_admin_0_countries 
WHERE ST_MemSize(the_geom) > 8192;

We can see the overhead involved in working with large data by forcing a large retrieval and computation.

Load the Natural Earth populated places into PostGIS as well, and then run a full spatial join between the two tables:

SELECT Count(*)
FROM ne_10m_admin_0_countries countries 
JOIN ne_10m_populated_places_simple places 
ON ST_Contains(countries.the_geom, places.the_geom);

Even though the places table (7322 rows) and countries table (255 rows) are quite small, the computation still takes a long time: about 30 seconds on my computer.

The large objects cause a number of inefficiencies:

  • Geographically large areas (like Canada or Russia) have large bounding boxes, so the indexes don’t work as efficiently in winnowing out points that don’t fall within the countries.
  • Physically large objects have large vertex lists, which take a long time to pass through the containment calculation. This combines with the poor winnowing to make a bad situation worse.

How can we speed things up? Make the large objects smaller using ST_Subdivide()!

First, generate a new, sub-divided countries table:

CREATE TABLE ne_10m_admin_0_countries_subdivided AS
SELECT ST_SubDivide(the_geom) AS the_geom, admin 
FROM ne_10m_admin_0_countries;

Now we have the same data, but no object is more than 255 vertices (about 4Kb) in size!
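
If you are following along at home, it is worth adding a spatial index on the new table and double-checking the size cap before re-running the join. A minimal sketch, assuming the table and column names used above:

-- Index the subdivided geometry so the spatial join can use it
CREATE INDEX ne_10m_admin_0_countries_subdivided_gix
  ON ne_10m_admin_0_countries_subdivided
  USING GIST (the_geom);

-- Confirm no subdivided piece exceeds the vertex or page-size limits
SELECT Max(ST_NPoints(the_geom)) AS max_points,
       Max(ST_MemSize(the_geom)) AS max_bytes
FROM ne_10m_admin_0_countries_subdivided;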

Subdivided Countries by Size in KB

Run the spatial join torture test again, and see the change!

SELECT Count(*)
FROM ne_10m_admin_0_countries_subdivided countries 
JOIN ne_10m_populated_places_simple places 
ON ST_Contains(countries.the_geom, places.the_geom);

On my computer, the query now returns in about 0.5 seconds, roughly 60 times faster, even though the countries table is now 8633 rows. The subdivision has accomplished two things:

  • Each polygon now covers a smaller area, so index searches are less likely to pull up points that are not within the polygon.
  • Each polygon is now below the page size, so retrieval from disk will be much faster.

Subdividing big things can make map drawing faster too, but beware: once your polygons are subdivided you’ll have to turn off the polygon outlines to avoid showing the funny square boundaries in your rendered map.

Happy mapping and querying!

November 13, 2019 08:00 AM

November 06, 2019

Paul Ramsey

GZip in PostgreSQL

I love PostgreSQL extensions.

Extensions are the truest expression of the second principle of the original “design of Postgres” vision, to

provide user extendibility for data types, operators and access methods.

Extensions allow users to do more with PostgreSQL than just basic storage and retrieval. PostgreSQL is a full-on integration environment, like Python or Perl, and you can build very complete data manipulation pipelines very close to the metal using native and extension features of PostgreSQL.

Even though I’m a contributor to one of the largest PostgreSQL extensions, I have particularly come to love small extensions, that do one simple thing, particularly one simple thing we maybe take for granted in other environments.

My old HTTP extension is just a binding of libcurl to a SQL interface, so users can do web queries inside the SQL environment.

And today I’ve finished up a GZIP extension, which is just a binding of zlib to SQL, so that users can… compress and decompress things.

It’s not a lot, but it’s a little.

The GZIP extension came about because of an email on the PostGIS development list, where Yuri noted:

The amazing ST_AsMVT() has two common usage patterns: copy resulting MVTs to a tile cache (e.g. .mbtiles file or a materialized view), or serve MVT to the users (direct SQL->browser approach). Both patterns still require one additional data processing step – gziping.

Huh. And this use case also applies to people generating GeoJSON directly in the database and sending it out to web clients.

The PostgreSQL core has generally frowned on compression functions at the SQL level, because the database already does compression of over-sized tuples as necessary. The last thing we want is people manually applying compression to column values, and then stuffing them into rows where the database will then have to re-compress them internally. From the perspective of storage efficiency, just standing back and letting PostgreSQL do its work is preferable.

But from the perspective of an integration environment, where an application might be expected to emit or consume compressed data, having a tool in SQL to pack and unpack that data is potentially quite useful.

So I did the tiny binding to zlib and packed it up in an extension.
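
A minimal usage sketch, assuming the extension installs as gzip and exposes gzip() and gunzip() functions over bytea (check the extension documentation for the exact signatures); the features and tile_cache tables here are hypothetical:

CREATE EXTENSION gzip;

-- Compress a GeoJSON payload on the way out to a web client
SELECT gzip(convert_to(ST_AsGeoJSON(geom), 'UTF8')) AS body_gz
FROM features;

-- And decompress a stored payload on the way back in
SELECT convert_from(gunzip(body_gz), 'UTF8') AS body
FROM tile_cache;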

I hope lots of people find it useful.

November 06, 2019 08:00 AM

October 20, 2019

PostGIS Development

PostGIS 3.0.0

The PostGIS development team is pleased to release PostGIS 3.0.0.

This release works with PostgreSQL 9.5-12 and GEOS >= 3.6.

If you are using postgis_sfcgal extension, you need to compile against SFCGAL 1.3.1 or higher.

Best served with PostgreSQL 12, GEOS 3.8.0 and pgRouting 3.0.0-beta.

Continue Reading by clicking title hyperlink ..

by Regina Obe at October 20, 2019 12:00 AM

October 15, 2019

Postgres OnLine Journal (Leo Hsu, Regina Obe)

PostGIS 3.0.0 coming soon - Try 3.0.0rc2 at a package repo near you

PostGIS 3.0.0 is planned for release early next week. In the meantime you will find PostGIS 3.0.0rc1 or rc2 available via yum.postgresql.org, apt.postgresql.org, and EDB Windows 64-bit stackbuilder for PostgreSQL 12.

Continue reading "PostGIS 3.0.0 coming soon - Try 3.0.0rc2 at a package repo near you"

by Leo Hsu and Regina Obe (nospam@example.com) at October 15, 2019 11:15 PM

October 13, 2019

PostGIS Development

PostGIS 3.0.0rc2

The PostGIS development team is pleased to release PostGIS 3.0.0rc2. This will be the final RC before release.

This release works with PostgreSQL 9.5-12 and GEOS >= 3.6

Best served with PostgreSQL 12, GEOS 3.8.0 and pgRouting 3.0.0-alpha.

Continue Reading by clicking title hyperlink ..

by Regina Obe at October 13, 2019 12:00 AM

October 08, 2019

PostGIS Development

PostGIS 3.0.0rc1

The PostGIS development team is pleased to release PostGIS 3.0.0rc1.

This release works with PostgreSQL 9.5-12 and GEOS >= 3.6

Best served with PostgreSQL 12, GEOS 3.8.0rc2 and pgRouting 3.0.0-alpha.

Continue Reading by clicking title hyperlink ..

by Regina Obe at October 08, 2019 12:00 AM

September 28, 2019

PostGIS Development

PostGIS 3.0.0beta1

The PostGIS development team is pleased to release PostGIS 3.0.0beta1.

This release works with PostgreSQL 9.5-12RC1 and GEOS >= 3.6

Best served with PostgreSQL 12RC1 and GEOS 3.8.0beta1 both of which came out in the past couple of days.

Continue Reading by clicking title hyperlink ..

by Regina Obe at September 28, 2019 12:00 AM

August 26, 2019

Paul Ramsey

Waiting for PostGIS 3: GEOS 3.8

While PostGIS includes lots of algorithms and functionality we have built ourselves, it also adds geospatial smarts to PostgreSQL by linking in specialized libraries to handle particular problems:

  • Proj for coordinate reference support;
  • GDAL for raster functions and formats;
  • GEOS for computational geometry (basic operations);
  • CGAL for more computational geometry (3D operations); and
  • libxml2, libjson-c and libprotobuf-c for format support.

Many of the standard geometry processing functions in PostGIS are actually evaluated inside the GEOS library, so updates in GEOS are very important to PostGIS – they add new functionality or smooth the behaviour of existing functions.

Functions backed by GEOS include:

  • ST_Intersection()
  • ST_Union()
  • ST_Difference()
  • ST_SymDifference()

These functions are all “overlay operation” functions – they take in geometry arguments and construct new geometries for output. Under the covers is an operation called an “overlay”, which combines all the edges of the inputs into a graph and then extracts new outputs from that graph.

While the “overlay operations” in GEOS are very reliable, they are not 100% reliable. When operations fail, the library throws the dreaded TopologyException, which indicates the graph is in an inconsistent and unusable state.

Because there are a lot of PostGIS users and they manage a lot of data, there are a non-zero number of cases that cause TopologyExceptions, and upset users. We would like to take that number down to zero.

Update: Next-generation overlay did not make the 3.8 GEOS release and will be part of 3.9 instead.

With luck, GEOS 3.8 will succeed in finally bringing fully robust overlay operations to the open source community. The developer behind the GEOS algorithms, Martin Davis, recently joined Crunchy Data, and has spent this summer working on a new overlay engine.

Overlay failures are caused when intersections between edges result in inconsistencies in the overlay graph. Even using double precision numbers, systems have only 51 bits of precision to represent coordinates, and that fixed precision can result in graphs that don’t correctly reflect their inputs.

The solution is building a system that can operate on any fixed precision and retain valid geometry. As an example, here the new engine builds valid representations of Europe at any precision, even ludicrously coarse ones.

europe at different precisions

In practice, the engine will be used with a tolerance that is close to double precision, but still provides enough slack to handle tricky cases in ways that users find visually “acceptable”. Initially the new functionality should slot under the existing PostGIS functions without change, but in the future we will be able to expose knobs to allow users to explicitly set the precision domain they want to work in.

GEOS 3.8 may not be released in time for PostGIS 3, but it will be a close thing. In addition to the new overlay engine, a lot of work has been done making the code base cleaner, using more “modern” C++ idioms, and porting forward new fixes to existing algorithms.

August 26, 2019 08:00 AM

August 25, 2019

Paul Ramsey

Waiting for PostGIS 3: Parallelism in PostGIS

Parallel query has been a part of PostgreSQL since 2016 with the release of version 9.6 and in theory PostGIS should have been benefiting from parallelism ever since.

In practice, the complex nature of PostGIS has meant that very few queries would parallelize under normal operating configurations – they could only be forced to parallelize using oddball configurations.

With PostgreSQL 12 and PostGIS 3, parallel query plans will be generated and executed far more often, because of changes to both pieces of software:

  • PostgreSQL 12 includes a new API that allows extensions to modify query plans and add index clauses. This has allowed PostGIS to remove a large number of inlined SQL functions that were previously acting as optimization barriers to the planner.
  • PostGIS 3 has taken advantage of the removal of the SQL inlines to re-cost all the spatial functions with much higher costs. The combination of function inlining and high costs used to cause the planner to make poor decisions, but with the updates in PostgreSQL that can now be avoided.

Increasing the costs of PostGIS functions has allowed us to encourage the PostgreSQL planner to be more aggressive in choosing parallel plans.

PostGIS spatial functions are far more computationally expensive than most PostgreSQL functions. An area computation involves lots of math involving every point in a polygon. An intersection or reprojection or buffer can involve even more. Because of this, many PostGIS queries are bottlenecked on CPU, not on I/O, and are in an excellent position to take advantage of parallel execution.

One of the functions that benefits from parallelism is the popular ST_AsMVT() aggregate function. When there are enough input rows, the aggregate will fan out and parallelize, which is great since ST_AsMVT() calls usually wrap a call to the expensive geometry processing function, ST_AsMVTGeom().

Tile 1/0/0

Using the Natural Earth Admin 1 layer of states and provinces as an input, I ran a small performance test, building a vector tile for zoom level one.
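
A tile-building query for this kind of test might look roughly like the following sketch; the table and column names are assumptions based on the Natural Earth Admin 1 download, and ST_TileEnvelope() is the PostGIS 3 helper for computing tile bounds:

-- Build tile 1/0/0 from the admin-1 polygons (names are assumptions)
SELECT ST_AsMVT(mvtgeom.*, 'admin1') AS tile
FROM (
  SELECT ST_AsMVTGeom(
           ST_Transform(the_geom, 3857),   -- tiles are built in web mercator
           ST_TileEnvelope(1, 0, 0)        -- bounds of tile zoom=1, x=0, y=0
         ) AS geom,
         name
  FROM ne_10m_admin_1_states_provinces
) mvtgeom;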

parallel MVT tile performance

Spatial query performance appears to scale about the same as non-spatial as the number of cores increases, taking 30-50% less time with each doubling of processors, so not quite linearly.

Joins, aggregates and scans all benefit from parallel planning, though since the gains are sublinear there’s a limit to how much performance you can extract from an operation by throwing more processors at it. Also, operations that do a large amount of computation within a single function call, like ST_ClusterKMeans, do not automatically parallelize: the system can only parallelize the calling of functions multiple times, not the internal workings of single functions.

August 25, 2019 08:00 AM

June 28, 2019

Stephen Mather

Tracing golden monkeys through time

My colleague TUYISINGIZE Deogratias (“Deo”) and others at the Dian Fossey Gorilla Fund International have been studying golden monkeys (Cercopithecus kandti) in Rwanda. Golden monkeys are an endangered species found along the Albertine Rift (including the Virungas, home to the endangered mountain gorilla). They are also cute as can be, but more on that another time.

Golden monkey (Cercopithecus kandti) head

Deo has been leading efforts to track the golden monkeys in several locations across their range, observing their habits. Among the data gathered are the locations of the groups of monkeys as they move through their range. One thing we want to understand from these data is how much each group moves per day.

The raw data look something like this:

raw_data

So we tweak things a bit to get ids in order of date and time, and also prep the data so that the date and time are proper types in PostgreSQL:

DROP TABLE IF EXISTS goldenmonkeys_sorted;
CREATE TABLE goldenmonkeys_sorted AS
(
    WITH nodate AS (
        SELECT gid, geom, id, lat, lon, alt, dater || ' ' || timer AS dater, month AS monther, season, groupid FROM hr_g_all
    )
    , sorted AS (
	SELECT gid, geom, id, lat, lon, alt, dater::TIMESTAMP WITH TIME ZONE AS datetimer, monther, season, groupid FROM nodate
		ORDER BY groupid, datetimer
    )
    SELECT gid AS id, ROW_NUMBER() OVER (ORDER BY groupid, datetimer) AS gid, datetimer, date(datetimer) AS dater, monther, season, groupid, geom FROM sorted
        ORDER BY gid
);

Resulting in the following:

massaged_data

gishwati

Golden Monkey Ranging in Gishwati National Park

Ok. Now we want to turn this into traces of the movements of the monkeys everyday. Something like this:

gishwati_trace

But for every trace, for every day for each group.

We will create a function that leverages WITH RECURSIVE. We’ve seen this before. WITH RECURSIVE allows us to take each record in sequence and perform operations with the previous record, in this case calculating travel time, travel distance, and combining the individual points into a single line with ST_MakeLine.

CREATE OR REPLACE FUNCTION goldenmonkey_time (date, text)
RETURNS TABLE(dater date, monther text, traveltime interval, distance float, geom geometry) AS $$
WITH RECURSIVE gtime AS (
SELECT gid, dater, monther, datetimer, datetimer - (SELECT min(datetimer) FROM goldenmonkeys_sorted WHERE dater = $1 AND groupid = $2) AS timediff, 0::float AS distance, geom
FROM goldenmonkeys_sorted WHERE dater = $1 AND groupid = $2
UNION ALL
SELECT n.gid, w.dater, w.monther, w.datetimer, n.datetimer - w.datetimer AS timediff, ST_Distance(n.geom, w.geom) AS distance, n.geom
FROM goldenmonkeys_sorted n, gtime w
WHERE w.dater = $1 AND n.gid::integer = w.gid::integer + 1
)
SELECT max(dater) AS dater, max(monther) AS monther, max(timediff) AS traveltime, ST_Length(ST_MakeLine(geom)) AS length, ST_MakeLine(geom) FROM gtime;
$$ LANGUAGE SQL;

Now to use our function, we need a list of dates and groups so we can calculate this for each day:

WITH dategroup AS (
SELECT
DISTINCT dater, groupid
FROM
goldenmonkeys_sorted ORDER by groupid, dater
)
SELECT groupid, (goldenmonkey_time(dater, groupid)).dater,
(goldenmonkey_time(dater, groupid)).monther,
(goldenmonkey_time(dater, groupid)).traveltime,
(goldenmonkey_time(dater, groupid)).distance,
(goldenmonkey_time(dater, groupid)).geom FROM dategroup
ORDER BY groupid, (goldenmonkey_time(dater, groupid)).dater;

Now we have traces not just for one day and group, but all traces and groups:

gishwati_tracesz

gishwati_traces

by smathermather at June 28, 2019 10:51 AM

May 22, 2019

Anita Graser (Underdark)

Movement data in GIS #23: trajectories in context

Today’s post continues where “Why you should be using PostGIS trajectories” leaves off. It’s the result of a collaboration with Eva Westermeier. I had the pleasure to supervise her internship at AIT last year and also co-supervised her Master’s thesis [0] on the topic of enriching trajectories with information about their geographic context.

Context-aware analysis of movement data is crucial for different domains and applications, from transport to ecology. While there is a wealth of data, efficient and user-friendly contextual trajectory analysis is still hampered by a lack of appropriate conceptual approaches and practical methods. (Westermeier, 2018)

Part of the work was focused on evaluating different approaches to adding context information from vector datasets to trajectories in PostGIS. For example, adding land cover context to animal movement data or adding information on anchoring and harbor areas to vessel movement data.

Classic point-based model vs. line-based model

The obvious approach is to intersect the trajectory points with context data. This is the classic point data model of contextual trajectories. It’s straightforward to add context information in the point-based model but it also generates large numbers of repeating annotations. In contrast, the line data model using, for example, PostGIS trajectories (LinestringM) is more compact since trajectories can be split into segments at context borders. This creates one annotation per segment and the individual segments are convenient to analyze (as described in part #12).

Spatio-temporal interpolation as provided by the line data model offers additional advantages for the analysis of annotated segments. Contextual segments start and end at the intersection of the trajectory linestring with context polygon borders. This means that there are no gaps like in the point-based model. Consequently, while the point-based model systematically underestimates segment length and duration, the line-based approach offers more meaningful segment length and duration measurements.

Schematic illustration of a subset of an annotated trajectory in two context classes, a) systematic underestimation of length or duration in the point data model, b) full length or duration between context polygon borders in the line data model (source: Westermeier (2018))

Another issue of the point data model is that brief context changes may be missed or represented by just one point location. This makes it impossible to compute the length or duration of the respective context segment. (Of course, depending on the application, it can be desirable to ignore brief context changes and make the annotation process robust towards irrelevant changes.)

Schematic illustration of context annotation for brief context changes, a) and b) two variants for the point data model, c) gapless annotation in the line data model (source: Westermeier (2018) based on Buchin et al. (2014))

Beyond annotations, context can also be considered directly in an analysis, for example, when computing distances between trajectories and contextual point objects. In this case, the point-based approach systematically overestimates the distances.

Schematic illustration of distance measurement from a trajectory to an external object, a) point data model, b) line data model (source: Westermeier (2018))
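
A toy example makes the effect easy to see; here the trajectory is a simple two-vertex line and the external object a single point (geometries made up for illustration):

-- Distance from an external point to a trajectory:
-- line-based model vs. distance to the stored trajectory points
WITH traj AS (
  SELECT 'LINESTRING(0 0, 10 0)'::geometry AS line,
         'POINT(5 3)'::geometry            AS obj
)
SELECT ST_Distance(obj, line) AS line_based,        -- 3
       LEAST(ST_Distance(obj, ST_StartPoint(line)),
             ST_Distance(obj, ST_EndPoint(line)))   AS point_based  -- ~5.83
FROM traj;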

The above examples show that there are some good reasons to dump the classic point-based model. However, the line-based model is not without its own issues.

Issues

Computing the context annotations for trajectory segments is tricky. The main issue is that ST_Intersection drops the M values. This effectively destroys our trajectories! There are ways to deal with this issue – and the corresponding SQL queries are published in the thesis (p. 38-40) – but it’s a real bummer. Basically, ST_Intersection only provides geometric output. Therefore, we need to reconstruct the temporal information in order to create usable trajectory segments.
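
A quick way to see the problem, using a made-up trajectory: intersect a LineString M with a polygon and the result comes back without the measure dimension, at least in the PostGIS/GEOS versions discussed here:

SELECT ST_AsText(
  ST_Intersection(
    'LINESTRING M (0 0 0, 10 0 100)'::geometry,  -- M carries the time
    ST_MakeEnvelope(2, -1, 8, 1)
  )
);
-- Returns LINESTRING(2 0, 8 0): the clipped geometry is right,
-- but the M values (and with them the timestamps) are gone.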

Finally, while the line-based model is well suited to add context from other vector data, it is less useful for context data from continuous rasters but that was beyond the scope of this work.

Conclusion

After the promising results of my initial investigations into PostGIS trajectories, I was optimistic that context annotations would be a straightforward add-on. The line-based approach has multiple advantages when it comes to analyzing contextual segments. Unfortunately, generating these contextual segments is much less convenient and also slower than I had hoped. Originally, I had planned to turn this work into a plugin for the Processing toolbox but the results of this work motivated me to look into other solutions. You’ve already seen some of the outcomes in part #20 “Trajectools v1 released!”.

References

[0] Westermeier, E.M. (2018). Contextual Trajectory Modeling and Analysis. Master Thesis, Interfaculty Department of Geoinformatics, University of Salzburg.


This post is part of a series. Read more about movement data in GIS.

by underdark at May 22, 2019 07:31 PM

March 25, 2019

Postgres OnLine Journal (Leo Hsu, Regina Obe)

PGConf US 2019 Data Loading Slides up

I gave a talk at PGConf US 2019 on some of the many ways you can load data into PostgreSQL using open source tools. This is similar to the talk I gave last year but with the addition of the pgloader commandline tool and the http PostgreSQL extension.

HTML slides PDF slides

Even though the talk was not so much about PostGIS as about tricks for loading data, I managed to get a mouthful of PostGIS in there.

by Leo Hsu and Regina Obe (nospam@example.com) at March 25, 2019 10:00 PM

February 11, 2019

Postgres OnLine Journal (Leo Hsu, Regina Obe)

Compiling http extension on ubuntu 18.04

We recently installed PostgreSQL 11 on an Ubuntu 18.04 using apt.postgresql.org. Many of our favorite extensions were already available via apt (postgis, ogr_fdw to name a few), but it didn't have the http extension we use a lot. The http extension is pretty handy for querying things like Salesforce and other web api based systems. We'll outline the basic compile and install steps. While it's specific to the http extension, the process is similar for any other extension you may need to compile.

Continue reading "Compiling http extension on ubuntu 18.04"

by Leo Hsu and Regina Obe (nospam@example.com) at February 11, 2019 08:31 AM

January 01, 2019

Boston GIS (Regina Obe, Leo Hsu)

Using pg_upgrade to upgrade PostgreSQL 9.3 PostGIS 2.1 to PostgreSQL 11 2.5 on Yum

In a previous article Using pg upgrade to upgrade PostGIS without installing older version I demonstrated a trick for upgrading to a newer PostgreSQL instance from PostGIS 2.2 - 2.whatever without having to install the older version of PostGIS in your new PostgreSQL service. Unfortunately that trick does not work if coming from PostGIS 2.1, because in PostGIS 2.2 we renamed a C lib function that backed SQL functions in 2.1.

Fear not. There is still a way to upgrade from 2.1 to 2.5 without installing an older version of PostGIS in your new PostgreSQL instance. To do so, you need to add a step and that is to remove the functions in 2.1 that are backed by this renamed lib function. In upcoming PostGIS 3.0, we've added this function back and have it throw an error so that even coming from PostGIS 2.1, you can upgrade just the same as you do from later versions.

Continue reading "Using pg_upgrade to upgrade PostgreSQL 9.3 PostGIS 2.1 to PostgreSQL 11 2.5 on Yum"

by Regina Obe (nospam@example.com) at January 01, 2019 06:48 AM

December 19, 2018

Michal Zimmermann

CentOS PostGIS Upgrade Hell… Yet Again

PostGIS upgrades used to be a nightmare. Broken dependencies, version mismatches, you name it. Upgrading PostgreSQL 10 with PostGIS 2.4 to PostgreSQL 11 on CentOS has been my mission impossible for two days. And it doesn’t seem to come to an end.

What? Why?

We’re running a fairly large, spatially enabled PostgreSQL 10 database cluster. To keep up with pretty fast development, I was hoping to pg_upgrade it to PostgreSQL 11.

Tried and failed

I’ve been trying different upgrade strategies with PostgreSQL 11 already running to no avail. Here comes the list.

Install PostGIS 2.4 to PostgreSQL 11 and pg_upgrade

yum install postgis24_11
systemctl stop postgresql-11

su postgres
/usr/pgsql-11/bin/pg_upgrade \
  --check \
  -b /usr/pgsql-10/bin/ -B /usr/pgsql-11/bin/ \
  -d /var/lib/pgsql/10/data -D /var/lib/pgsql/11/data \
  --link \
  -U root \
  -o ' -c config_file=/var/lib/pgsql/10/data/postgresql.conf' -O ' -c config_file=/var/lib/pgsql/11/data/postgresql.conf'

This results in:

Your installation references loadable libraries that are missing from the new installation. You can add these libraries to the new installation, or remove the functions using them from the old installation. A list of problem libraries is in the file: loadable_libraries.txt

loadable_libraries.txt says the following:

could not load library "$libdir/postgis-2.4": ERROR:  could not load library "/usr/pgsql-11/lib/postgis-2.4.so": /usr/pgsql-11/lib/postgis-2.4.so: undefined symbol: geod_polygon_init

Duckduckgoing I found the related PostgreSQL mailing list thread.

Build and install PostGIS 2.4 from source to PostgreSQL 11 and pg_upgrade

The bug report says there’s something wrong with proj4 version, so I chose proj49 and geos37.

yum install proj49 proj49-devel
wget https://download.osgeo.org/postgis/source/postgis-2.4.6.tar.gz
tar -xzvf postgis-2.4.6.tar.gz
cd postgis-2.4.6

./configure \
  --with-pgconfig=/usr/pgsql-11/bin/pg_config \
  --with-geosconfig=/usr/geos37/bin/geos-config \
  --with-projdir=/usr/proj49/

make && make install

CREATE EXTENSION postgis fails with could not load library "/usr/pgsql-11/lib/postgis-2.4.so": /usr/pgsql-11/lib/postgis-2.4.so: undefined symbol: geod_polygon_init. Oh my.

Install PostGIS 2.5 to PostgreSQL 10 and pg_upgrade

Running out of ideas, I tried to install PostGIS 2.5 to our PostgreSQL 10 cluster and pg_upgrade.

yum install postgis25_10

The resulting error appeared almost instantly:

Transaction check error:
file /usr/pgsql-10/bin/shp2pgsql-gui from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/lib/liblwgeom.so from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/lib/postgis-2.4.so from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/address_standardizer.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/address_standardizer.sql from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/address_standardizer_data_us.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/address_standardizer_data_us.sql from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/postgis.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/postgis_sfcgal.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/postgis_tiger_geocoder.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64
file /usr/pgsql-10/share/extension/postgis_topology.control from install of postgis25_10-2.5.1-1.rhel7.x86_64 conflicts with file from package postgis24_10-2.4.5-1.rhel7.x86_64

What the…

Build and install PostGIS 2.5 from source to PostgreSQL 10 and pg_upgrade

wget https://download.osgeo.org/postgis/source/postgis-2.5.1.tar.gz
tar -xzvf postgis-2.5.1.tar.gz
cd postgis-2.5.1

./configure \
  --with-pgconfig=/usr/pgsql-10/bin/pg_config \
  --with-geosconfig=/usr/geos37/bin/geos-config

make && make install

CREATE EXTENSION postgis fails with ERROR: could not load library "/usr/pgsql-10/lib/postgis-2.5.so": /usr/pgsql-10/lib/postgis-2.5.so: undefined symbol: GEOSFrechetDistanceDensify. Again? Really?

GEOSFrechetDistanceDensify was added in GEOS 3.7 (linked in ./configure), yet ldd /usr/pgsql-10/lib/postgis-2.5.so says:

linux-vdso.so.1 =>  (0x00007ffd4c5fa000)
libgeos_c.so.1 => /usr/geos36/lib64/libgeos_c.so.1 (0x00007f68ddf5a000)
libproj.so.0 => /lib64/libproj.so.0 (0x00007f68ddd07000)
libjson-c.so.2 => /lib64/libjson-c.so.2 (0x00007f68ddafc000)
libxml2.so.2 => /lib64/libxml2.so.2 (0x00007f68dd792000)
libm.so.6 => /lib64/libm.so.6 (0x00007f68dd48f000)
libSFCGAL.so.1 => /lib64/libSFCGAL.so.1 (0x00007f68dc9c0000)
libc.so.6 => /lib64/libc.so.6 (0x00007f68dc5f3000)
libgeos-3.6.3.so => /usr/geos36/lib64/libgeos-3.6.3.so (0x00007f68dc244000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f68dbf3d000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f68dbd27000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f68dbb22000)
libz.so.1 => /lib64/libz.so.1 (0x00007f68db90c000)
liblzma.so.5 => /lib64/liblzma.so.5 (0x00007f68db6e6000)
/lib64/ld-linux-x86-64.so.2 (0x000055960f119000)
libCGAL.so.11 => /usr/lib64/libCGAL.so.11 (0x00007f68db4bd000)
libCGAL_Core.so.11 => /usr/lib64/libCGAL_Core.so.11 (0x00007f68db284000)
libmpfr.so.4 => /usr/lib64/libmpfr.so.4 (0x00007f68db029000)
libgmp.so.10 => /usr/lib64/libgmp.so.10 (0x00007f68dadb0000)
libboost_date_time-mt.so.1.53.0 => /usr/lib64/libboost_date_time-mt.so.1.53.0 (0x00007f68dab9f000)
libboost_thread-mt.so.1.53.0 => /usr/lib64/libboost_thread-mt.so.1.53.0 (0x00007f68da988000)
libboost_system-mt.so.1.53.0 => /usr/lib64/libboost_system-mt.so.1.53.0 (0x00007f68da783000)
libboost_serialization-mt.so.1.53.0 => /usr/lib64/libboost_serialization-mt.so.1.53.0 (0x00007f68da517000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f68da2fa000)
librt.so.1 => /usr/lib64/librt.so.1 (0x00007f68da0f2000)

I’m nearly desperate after spending two days trying to break through. I have ~ 300 GB of PostgreSQL data to migrate to the current version and there seems to be no possible way to do it in CentOS.

One more thing to note: using yum install postgis25_11 and CREATE EXTENSION postgis in a v11 database fails with the exact same error as the one above. I really enjoy working with PostgreSQL and PostGIS, yet there’s hardly anything I fear more than trying to upgrade those two things together.

by Michal Zimmermann at December 19, 2018 09:00 AM

December 10, 2018

Postgres OnLine Journal (Leo Hsu, Regina Obe)

PostGIS 2.5.1 Bundle for Windows

PostGIS 2.5.1 was released on November 18th 2018 and I finished off packaging the PostGIS 2.5.1 windows builds and installers targeted for PostgreSQL EDB distribution this weekend and pushing them up to stackbuilder. This covers PostgreSQL 9.4-11 (64-bit) and PostgreSQL 9.5-10 (32-bit).

Note that the PostGIS 2.5 series will be the last of the PostGIS 2s. Say goodbye to PostGIS 2.* and start playing with the in-development version of PostGIS 3. Snapshot binaries for PostGIS 3.0 windows development are also available on the PostGIS windows download page. These should work for both BigSQL and EDB distributions.

Continue reading "PostGIS 2.5.1 Bundle for Windows"

by Leo Hsu and Regina Obe (nospam@example.com) at December 10, 2018 12:18 AM

November 26, 2018

Michal Zimmermann

Implementing Linked List with PostgreSQL Recursive CTE

I’ve been working on a book/storytelling pet project recently. Dealing with book events and keeping them in order was a task that was to be tackled sooner or later. While both frontend and backend of the app could deal with linked and ordered data, database might be just about the best place to do so.

What you might need a linked list for

You have a set of chronological events. The set is not complete at the beginning, and the position of events might change over time (e.g. their neighbouring events might change).

Implementation

A linked list is a perfect structure for such a case (see Wikipedia). You can keep your data intact using just an id and a previous/next id.

CREATE TABLE public.events (
  id integer generated always as identity primary key,
  previous_id integer
);

COPY public.events (id, previous_id) FROM stdin;
7   \N
10  5
5   1
1   3
3   8
8   9
9   2
2   6
6   4
4   7
\.

Generating the list of events in the right order is the matter of running one recursive CTE query.

WITH RECURSIVE evt AS (
SELECT
    id,
    previous_id
FROM events
WHERE previous_id IS NULL
UNION
SELECT
    e.id,
    e.previous_id
FROM events e
JOIN evt ON (e.previous_id = evt.id)
)
SELECT * FROM evt;

It gathers the first event (the one having the previous pointer set to NULL) and iteratively adds the following ones. Note that this version is actually the reverse implementation of the linked list, pointing to the previous instead of the next event. All it would take to change that would be starting from the event id that is not present in the previous_id column, instead of WHERE previous_id IS NULL; see the sketch below.
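
A minimal sketch of that variant, walking the list from the other end (starting at the event whose id nothing points to):

WITH RECURSIVE evt AS (
SELECT
    id,
    previous_id
FROM events
WHERE id NOT IN (
    SELECT previous_id FROM events WHERE previous_id IS NOT NULL
)
UNION
SELECT
    e.id,
    e.previous_id
FROM events e
JOIN evt ON (evt.previous_id = e.id)
)
SELECT * FROM evt;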

With the data coming properly sorted to the client, all it has to do is rendering the list.

by Michal Zimmermann at November 26, 2018 08:00 PM