Welcome to Planet PostGIS

February 05, 2016

Stephen Mather

Taking Slices from LiDAR data: Part IV

Trans-Alaska Pipeline System. Photo: Luca Galuzzi, 2005.

In PDAL, a pipeline file can be used to do a variety of operations. In the context that follows, I think of a pipeline file as an ad hoc preferences file, where I can use an external command to iterate through the things I want to change while holding everything else in the pipeline file constant.

In my use case for this vignette, I’ll use the pipeline file to hold my database preferences for putting the point clouds into my PostGIS database. For the record, I’m using vpicavet’s docker pggis as the starting place for installing PostGIS with the pgpointcloud extension. I have adapted the following pipeline file from the PDAL writers.pgpointcloud example.

<?xml version="1.0" encoding="utf-8"?>
<Pipeline version="1.0">
  <Writer type="writers.pgpointcloud">
    <Option name="connection">
      host='localhost' dbname='lidar' user='user' password='password'
    </Option>
    <Option name="table">54001640PAN_heightasz_0-1.5</Option>
    <Option name="compression">dimensional</Option>
<!--    <Option name="srid">3270</Option> -->
    <Filter type="filters.chipper">
      <Option name="capacity">400</Option>
      <Reader type="readers.las">
          <Option name="filename">54001640PAN_heightasz_0-1.5.las</Option>
<!--          <Option name="spatialreference">EPSG:3270</Option> -->
      </Reader>
    </Filter>
  </Writer>
</Pipeline>

Some things to note: I have commented out the SRID and the reader's spatialreference option in the XML above. We'll rely on PDAL to discover this, and use the default output of EPSG:4326 to store our point clouds for the time being.
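
If you want to confirm later what spatial reference the loaded patches actually ended up with, pgpointcloud's metadata table records it; a quick check might look like this:

-- each point cloud schema entry records the SRID it was stored with
SELECT pcid, srid FROM pointcloud_formats;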

Our wrapper script for the pipeline file is very simple. We will use the wrapper script to specify the table and the input file name.

#!/bin/bash

# Table name comes from the input file name, sans path and extension
TABLE=$(basename "$1" .bpf)
INPUT=$1

#echo "$TABLE"
pdal pipeline -i pipeline.xml --writers.pgpointcloud.table="$TABLE" --readers.bpf.filename="$INPUT"

Now to use it, we'll use GNU Parallel, as much for its xargs-like functionality as for its scalability:

ls *.bpf | parallel -j6 ./pipeliner.sh {}

Now we can see what tables got loaded:

psql -d lidar
psql (9.5.0)
Type "help" for help.

lidar=# \dt
                    List of relations
 Schema |             Name              | Type  |  Owner  
--------+-------------------------------+-------+---------
 public | 54001640PAN_heightasz_0-1.5   | table | user
 public | 54001640PAN_heightasz_1.5-6   | table | user
 public | 54001640PAN_heightasz_100-200 | table | user
 public | 54001640PAN_heightasz_30-45   | table | user
 public | 54001640PAN_heightasz_45-55   | table | user
 public | 54001640PAN_heightasz_55-65   | table | user
 public | 54001640PAN_heightasz_6-30    | table | user
 public | 54001640PAN_heightasz_65-75   | table | user
 public | 54001640PAN_heightasz_75-85   | table | user
 public | 54001640PAN_heightasz_85-100  | table | user
 public | pointcloud_formats            | table | user
 public | spatial_ref_sys               | table | user
(12 rows)

W00t! We’ve got point clouds in our database! Next, we will visualize the data, and extract some analyses from it.
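
Before we get there, a quick sanity check is possible in SQL. A minimal sketch, assuming the default patch column name pa that writers.pgpointcloud creates:

-- count patches and total points in one of the slice tables
SELECT Count(*) AS patches, Sum(PC_NumPoints(pa)) AS points
FROM "54001640PAN_heightasz_0-1.5";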


by smathermather at February 05, 2016 11:08 PM

Stephen Mather

Taking Slices from LiDAR data: Part III

Forest structure diagram borrowed from: http://irwantoshut.com

Continuing my series on slicing LiDAR data in order to analyze a forest, one of the objectives of the current project is to understand the habitats that particular species of birds prefer. This will be accomplished using field info from breeding bird surveys combined with LiDAR data of forest structure to help predict what habitats are necessary for particular species of breeding birds.

There are a number of studies doing just this for a variety of regions, which I will detail later, but suffice it to say, the structure of vegetation matters a lot to birds, so using LiDAR to map out structure can be an important predictive tool for mapping bird habitat. Being able to do that at scale across entire ecoregions—well, that's just an exciting prospect.

Let's get to some coding. I would like to take a few slices through our forest based on functional height groups:

Forest Canopy >15 meters
Sub-Canopy 10-15 meters
Tall Shrub 2-10 meters
Short Shrub 0.5-2 meters

(For the record, we don't have to get these perfectly right. Sub-canopy, for example, could be taller or shorter depending on the height of the forest it sits in. This is just a convenience for dividing and reviewing the data for the time being. Also, we'll split a little finer than the above numbers just for giggles.)

We’ll start by just echoing our pdal translate to make sure we like what we are getting for output:

for START in 0:0.5 0.5:1 1:2 2:5 5:10 10:15 15:20 20:35 35:50 50:75 75:100 100:200
 do
  nameend=`echo $START | sed s/:/-/g`
  echo pdal translate 54001640PAN_heightasz.bpf 54001640PAN_heightasz_$nameend.bpf -f range --filters.range.limits="Height[$START)"
done

Thus we get the following output:

pdal translate 54001640PAN_heightasz.bpf 54001640PAN_heightasz_0-0.5.bpf -f range --filters.range.limits=Height[0:0.5)
pdal translate 54001640PAN_heightasz.bpf 54001640PAN_heightasz_0.5-1.bpf -f range --filters.range.limits=Height[0.5:1)
pdal translate 54001640PAN_heightasz.bpf 54001640PAN_heightasz_1-2.bpf -f range --filters.range.limits=Height[1:2)
pdal translate 54001640PAN_heightasz.bpf 54001640PAN_heightasz_2-5.bpf -f range --filters.range.limits=Height[2:5)
... etc.

Let’s remove our echo statement so this actually runs:

for START in 0:0.5 0.5:1 1:2 2:5 5:10 10:15 15:20 20:35 35:50 50:75 75:100 100:200
 do
  nameend=`echo $START | sed s/:/-/g`
  pdal translate 54001640PAN_heightasz.bpf 54001640PAN_heightasz_$nameend.bpf -f range --filters.range.limits="Height[$START)"
done

We should generalize this a bit too. Let’s make this a script to which we can pass our filenames and ranges:

#!/bin/bash
namer=`basename $1 .bpf`
for START in $2
 do
  nameend=`echo $START | sed s/:/-/g`
  pdal translate $namer.bpf $namer"_"$nameend".bpf" -f range --filters.range.limits="Height[$START)"
done

To run it, we use a statement like the following:

./tree_slicer.sh 54001640PAN_heightasz.bpf "0:0.5 0.5:1 1:2 2:5 5:10 10:15 15:20 20:35 35:50 50:75 75:100 100:200"

I forgot that all my data are in feet while my height classes are in meters, so we'll just multiply the input values by a factor of 3 (1 meter is roughly 3.28 feet) to get in the same ballpark (future refinement likely):

./tree_slicer.sh 54001640PAN_heightasz.bpf "0:1.5 1.5:3 3:6 6:15 15:30 30:45 45:60 60:105 105:150 150:200"

We could alternatively stick to our original categories (short shrub, tall shrub, sub-canopy, canopy) as our break points:

./tree_slicer.sh 54001640PAN_heightasz.bpf "0:1.5 1.5:6 6:30 30:45 45:200"

Finally, we can convert to LAZ and load all our slices of point clouds into plas.io and animate between the slices:

for OUTPUT in $(ls *.bpf); do docker run -v /home/gisuser/test/test:/data pdal/master pdal translate //data/$OUTPUT //data/$OUTPUT.laz; done

If you look closely, you should be able to see where a tornado in the 80s knocked down much of the forest here. Its signature is in the tall shrub / sub-canopy layer:


by smathermather at February 05, 2016 01:12 PM

January 30, 2016

Oslandia

Tempus V2.0: routing faster, better, stronger!


We are pleased to announce the release of a new version of our route planning framework: Tempus.



Tempus is an Open Source C++/Python framework for using, developing, and testing route planning algorithms, with a particular focus on multimodal routing, where every possible transport mode is taken into consideration for a trip: private cars, public transport, shared vehicles, bikes, etc.


This new 2.0 version results from a partnership with Mappy and focuses on performance improvements and scalability. It brings the following major changes:

  • use of modern C++11 constructs for cleaner code
  • a new plugin that implements the Contraction Hierarchies algorithm on road networks
  • more flexibility in graph data structures: specialized graph types can exist, in addition to the multimodal graph
  • a smaller memory footprint for in-memory structures, thanks to optimisations and the use of serialization to avoid memory fragmentation
  • continuous integration with Travis CI and AppVeyor for more robust code

The contraction hierarchies algorithm makes it possible to answer road itinerary requests very quickly, thanks to an efficient preprocessing phase in which shortcuts are added to the graph. In conjunction with the newly improved data structures, it is now possible to process requests on country- or continental-sized networks.



Tempus is on GitHub: get it, fork it, PR it!


Development is still ongoing to improve on state-of-the-art algorithms. Please let us know if you are interested in development, training, or consulting around these pieces of technology. Do not hesitate to contact us at infos@oslandia.com.

by Hugo Mercier at January 30, 2016 09:00 AM

January 29, 2016

Postgres OnLine Journal (Leo Hsu, Regina Obe)

An almost idiot's guide to install PostgreSQL 9.5, PostGIS 2.2 and pgRouting 2.1.0 with Yum

If you already have a working PostgreSQL 9.5 install, and just want to skip to relevant sections, follow this list:

As a general note, these instructions are what I did for CentOS 7. For lower versions there are some differences in the packages you'll get. For example, currently if you are installing on CentOS 6 (and, I presume, by extension the rest of the 6 family), you won't get SFCGAL and might have pgRouting 2.0 (instead of 2.1).


Continue reading "An almost idiot's guide to install PostgreSQL 9.5, PostGIS 2.2 and pgRouting 2.1.0 with Yum"

by Leo Hsu and Regina Obe (nospam@example.com) at January 29, 2016 08:21 AM

January 28, 2016

Boston GIS (Regina Obe, Leo Hsu)

pgRouting: A PRACTICAL GUIDE now on sale

Our upcoming book pgRouting: A PRACTICAL GUIDE (publisher: LocatePress) just went on sale a couple of days ago. We currently have a little over 100 pages of content. We expect the final published book to weigh in at a little over 250 pages.

pgRouting is a PostgreSQL extension that also utilizes PostGIS for solving routing problems. Routing problems involve applying costs to network paths, as well as considering the parties involved, such as the kinds of vehicles, people, packages, etc. that will be carried across these networks. While routing is often applied to physical networks like roads and bike paths, the fundamentals of routing go beyond the physical. Spatial concepts such as edges and nodes are a convenient way of visualizing costs across even a logical network.
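
For a flavor of the SQL involved, here is a minimal pgr_dijkstra call in the classic pgRouting 2.x form; the edges table and its columns are illustrative, not taken from the book:

-- cheapest path from node 1 to node 2 over a cost-attributed edge network
SELECT seq, id1 AS node, id2 AS edge, cost
FROM pgr_dijkstra(
  'SELECT id, source, target, cost FROM edges',
  1, 2, false, false);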

If you purchase now, you'll get the following benefits:

  • Significant discount off the eBook final price. Currently 50% off
  • Our current preview draft and updates to the draft as we update it
  • Input via our discussion page of what you'd like to see covered. That could be specific functions or kinds of examples or just some clarity.

The focus of this book is pgRouting 2.1+. It's really hard to talk about pgRouting without giving some air time to PostGIS and PostgreSQL, which are some of the key foundations of pgRouting. As such, expect to see a lot of talk of PostGIS and PostgreSQL in this book. Since pgRouting 2.0 is so different from pgRouting 2.1, we had to make a difficult decision to focus on the future rather than the past. The appendix does cover some of how things were done in pgRouting 2.0 and how they changed in pgRouting 2.1+. You can expect the appendix discussion to be expanded. For the core of the book we've largely abandoned any mention of pgRouting 2.0, though pgRouting 2.1 is for the most part backward compatible with pgRouting 2.0 if you choose to be limited by the pgRouting 2.0 function signatures.

We've been pushing package maintainers to focus on releasing pgRouting 2.1 rather than sticking with pgRouting 2.0 and also encouraging maintainers who haven't carried pgRouting before, to start doing so. The PostGIS 2.2 windows bundle comes packaged with pgRouting 2.1 for example. Yes these are selfish reasons, but FOSS is inherently a selfish scratch your own itch kind of pursuit, and that's a good thing since all our common self-interests come together to make something very special that will change the world.

We are planning for the book to be released in another 3-4 months. I know we had planned to release around now, but as happens with every book we've written so far, as we get deeper into it we realize that keeping it baking just a little longer is so much better than rushing through the process and outputting a half-baked pie. Although pgRouting 2.2 is still being heavily worked on, the development version is almost feature complete and slated for release in the next three to four months. As such we plan to cover some new features coming in pgRouting 2.2 and hope pgRouting 2.2 is out just before or around the same time as this book.

by Regina Obe (nospam@example.com) at January 28, 2016 03:55 PM

Stephen Mather

parallel processing in PDAL

Frankfurt Airport tunnel.JPG
“Frankfurt Airport tunnel” by Peter Isotalo, own work. Licensed under CC BY-SA 3.0 via Commons.

In my ongoing quest to process all the LiDAR data for Pennsylvania and Ohio into one gigantic usable dataset, I finally had to break down and learn how to do parallel processing in BASH. Yes, I still need to jump on the Python band wagon (the wagon is even long in the tooth, if we choose to mix metaphors), but BASH makes me soooo happy.

So, in a previous post, I wanted to process relative height in a point cloud. By relative height, I mean height relative to ground. PDAL has a nice utility for this, and it’s pretty easy to use, if you get PDAL installed successfully.

pdal translate 55001640PAN.las 55001640PAN_height.bpf height --writers.bpf.output_dims="X,Y,Z,Height"

Installing PDAL is not too easy, so I used the dockerized version of PDAL and it worked great. Problem is, the dockerized version complicates my commands on the command line, especially if I want to run it on a bunch of files.

Naturally, the next step is to run it on a whole bunch of LiDAR files. For that I need a little control script which I called pdal_height.sh, and then I need to run that in a “for” loop.

#!/bin/bash
# Get the pathname from the input value
pathname="${1%/*}";

# Get the short name of the file, sans path and sans extension
name=`basename $1 .las`

docker run -v $pathname:/data pdal/master pdal translate //data/"$name".las //data/"$name"_height.bpf height --writers.bpf.output_dims="X,Y,Z,Intensity,ReturnNumber,NumberOfReturns,ScanDirectionFlag,EdgeOfFlightLine,Classification,ScanAngleRank,UserData,PointSourceId,Height";

Now we need a basic for loop that will take care of sending the las files into our pdal_height.sh, thus looping through all available las files:

for OUTPUT in $(ls *.las); do ~/./pdal_height.sh $OUTPUT; done;

This is great, but I calculated it would take 13 days to complete on my 58,366 LiDAR files. We're talking approximately 41,000 square miles of non-water areas for Ohio and approximately 44,000 square miles of non-water areas for Pennsylvania. I'm on no particular timeline, but I'm not really that patient. A quick DuckDuckGo search later, and I remembered the GNU Parallel project. It's wicked easy to use for this use case.

ls *.las | parallel -j24 ~/./pdal_height.sh

How simple! First we list our las files, then we "pipe" them as a list to parallel, and we tell parallel to spawn 24 independent processes using that list as the input for our pdal_height script.

Now we can run it on 24 cores simultaneously. Sadly, I have slow disks :( so really I ran it like this:

ls *.las | parallel -j6 ~/./pdal_height.sh

Time to upgrade my disks! Finally, I want to process my entire LiDAR dataset irrespective of location. For this, we use the find command, name the starting directory location, and tell it we want to search based on name.

find /home/myspecialuser/LiDAR/ -name "*.las" | parallel -j6 ~/./pdal_height.sh

Estimated completion time: 2 days. I can live with that until I get better disks. Only one problem: I should make sure this doesn't stop if my connection drops for any reason. Let's wrap this in nohup, which will keep the job running through any network-based hangups:

nohup sh -c 'find /home/myspecialuser/LiDAR -name "*.las" | parallel -j6 ~/./pdal_height.sh {}'

by smathermather at January 28, 2016 01:00 PM

January 27, 2016

Stephen Mather

wget for downloading boatloads of data

My current project to create a complete dataset of airborne LiDAR data for Ohio and Pennsylvania has been teaching me some simple, albeit really powerful, tricks. We'll just discuss one today — recursive use of wget. This allows us to mirror entire trees of web sites, or in our case, download all the data. Additionally, wget works on FTP trees as well, with no real change in syntax.

wget -r ftp://pamap.pasda.psu.edu/pamap_lidar/cycle1/LAS/


I recognize that curl can be a more powerful utility than wget, but for dead simple problems like this, a dead simple tool is more than adequate.


by smathermather at January 27, 2016 01:04 PM

January 25, 2016

Oslandia

Best wishes for 2016!




Happy new year 2016


The whole team at Oslandia wishes you all the best for 2016 in your projects.


The past year was rich and intense, and Oslandia continues to strengthen its team. With new collaborators, new experience and skills, and new projects, 2016 is off to a good start.


For this new year, we offer you a new release of Tempus, our multimodal routing engine developed in collaboration with IFSTTAR: version 2.0 has just seen the light of day. It features "Contraction Hierarchies" and other improvements for high performance routing. Stay tuned, the official release announcement is coming soon!


This is just the beginning. In 2016 our collaboration with the French IGN will allow us to improve open source 3D solutions. We still contribute actively to QGIS, and version 3.0 should be out later this year. As for the water management domain, qWAT and qgis-epanet progress is already planned.


We will be really pleased to work with you on innovative and motivating projects. Do not hesitate to contact us for any need in consulting, development, or training!


See you very soon,

Oslandia Team

For any information: infos@oslandia.com

by Vincent Picavet at January 25, 2016 11:00 AM

January 24, 2016

Boston GIS (Regina Obe, Leo Hsu)

Early look at Upcoming PostGIS 2.3

PostGIS 2.3 is still in its incubation, but I suspect it will be even more impressive than PostGIS 2.2. PostGIS 2.2 was long in the oven, and a lot came out of it, but I'm hoping PostGIS 2.3 will have a shorter gestation period while being just as, if not more, impressive than PostGIS 2.2.

Before starting my "what's already there and what I hope to be there" list for PostGIS 2.3, I'll point out things that PostGIS 2.2 brought:

PostGIS 2.2 offerings

Many are itemized in the PostGIS 2.2 new features list:

  • KNN for geography and 3D geometries
  • Geometry clustering functions
  • True distance KNN for PostgreSQL 9.5+. If you are using 9.5, your KNN is not just bounding box distance, but true distance (see the sketch after this list).
  • Faster topology, now a lot has been moved into C for better performance.
  • Lots more 3D advanced functions via SFCGAL, SFCGAL now installable as extension.
  • Lots of new geometry functions like geometry clustering, geometry clipping, subdivide, trajectory functions
  • More raster stats and create overview table function without having to rely on raster2pgsql
  • Tiger geocoder updated for Tiger 2015 data. Address standardizer now part of PostGIS.
  • Last but not least, we added Dan Baston to our Core Contributor list. He gave us all the clustering functions in PostGIS 2.2, fixed lots of bugs, and already has more commits and proposed plans than anyone else for PostGIS 2.3. Go Dan go, and welcome.
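
To illustrate the true distance KNN item above: the <-> ordering operator drives index-assisted nearest-neighbor searches, and on PostgreSQL 9.5+ it orders by true distance rather than bounding-box distance. A minimal sketch against an illustrative table:

-- ten nearest parcels to a point; a GiST index on geom keeps this fast
SELECT id
FROM parcels
ORDER BY geom <-> 'SRID=4326;POINT(-81.7 41.4)'::geometry
LIMIT 10;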

Continue reading "Early look at Upcoming PostGIS 2.3"

by Regina Obe (nospam@example.com) at January 24, 2016 10:32 AM

January 20, 2016

CartoDB

Time’s Running Out! Voting for FOSS4G-NA 2016

CartoDB at FOSS4G NA 2016

CartoDB is very excited about the upcoming FOSS4G-NA conference!

We've given sessions at the conference for the last 2 years and hope 2016 won't be an exception. Held in Raleigh, North Carolina from May 2-5, this is the premier conference for free, open source software designed for geospatial in North America. Thousands of participants and hundreds of companies converge to share, demo, and talk about the things we love most.

In order to see us there we need your help!

FOSS4G NA has opened up community voting to allow participants to have a say in what sessions they would like to see at this year’s conference. You can vote for as many panels as you’d like! Voting remains open until February 8, 2016, 11:59 EST.

Here are CartoDB’s proposed sessions:

Cast your votes for our 5 star panels!

We can’t wait to see you at FOSS4G-NA 2016!

January 20, 2016 02:00 PM

Stephen Mather

Point Clouds – the (re)start of a journey

If you have followed this blog for a while, you will notice a continual returning to and refinement of ideas and topics. That's how the blog started, and the role it has served as a touchstone in my exploration of topics is critical to how I use it. I hope it is useful to you too, as a reader.

So, let's talk about point clouds again. Point clouds are like ordinary geographic points — they have X, Y, and Z (typically), but there are a whole lot more of them than ordinary points. How many more? Well, instead of thinking in the hundreds of thousands, we think more along the lines of hundreds of millions or billions of points.

The other thing we need to recognize about point clouds is that they may be n-dimensional — a LiDAR point cloud may have X, Y, Z, intensity, return number, class, etc.

Good Free and Open Source Software (FOSS) tools for dealing with point clouds span a spectrum of projects, including PDAL (Point Data Abstraction Library) and the Pointcloud extension for PostgreSQL (with PostGIS integration, of course).

Previously, I was attempting to process large amounts of LiDAR point clouds in order to model bird habitat. Now that I have the requisite storage, and PDAL has a height calculator, I am ready to dive back into this.

Patch heights showing vegetation and open areas; isometric 3D visualization of 3D chipping; height cloud overlaid with patches.

In the meantime, there are some cool things to check out in this space. Let's start with this presentation by Michael Smith on PDAL from FOSS4G in Seoul last year.


Want to see some of the incredible things you can do in the browser? Look no further than here:


This goes deeper than simple vector tile approaches and employs the level-of-detail optimizations that are necessary for a use case this large. More on this later.

postscript 1/20/2016:

Looks like I outed the above project… Apologies to https://hobu.co for this. For the record, and from their site:

Hobu is a team of three software engineers located in Iowa City, Iowa and Cambridge, Iowa. We have been on the forefront of open source LiDAR software for over five years, and we have been building open source GIS software for even longer. We provide open source lidar software systems development, open source GIS software development in addition to contract research and development, systems design, evaluation and implementation.

Please contact us about any of the open source software for which we provide support.

We are the primary support organization for the following libraries:

  • PDAL — Point Data Abstraction Library
  • plas.io — WebGL point cloud rendering
  • Greyhound — Point cloud data streaming
  • libLAS — ASPRS LAS read/write

Welp, there we go, an unintentional public launch. Awesome work though…


by smathermather at January 20, 2016 01:39 AM

January 15, 2016

Stephen Mather

Moar generative PostGIS Art

DROP TABLE IF EXISTS cuyahogaaaahh;
CREATE TABLE cuyahogaaaahh AS
SELECT row_number() OVER() AS id, geom
FROM
(
SELECT ST_Buffer(ST_SubDivide(ST_Buffer(
     geom, 50, 100), generate_series), generate_series) AS geom
	FROM generate_series(8,50), cuyahoga
) subdivide;

by smathermather at January 15, 2016 05:29 AM

Stephen Mather

Generative art in PostGIS


SELECT ST_Buffer(ST_SubDivide(ST_Buffer(
     ST_GeomFromText(
      'POINT(0 0)'
     ), 50000, 100), generate_series), generate_series * 100) AS geom
	FROM generate_series(8,100);

by smathermather at January 15, 2016 04:50 AM

CartoDB

Bulk Nearest Neighbor using Lateral Joins

“Find me the nearest…” is a common way to start a geographical query, and we’ve talked about how to write such queries for single points, but it can be tricky to carry out in bulk.

CartoDB supports high performance “nearest neighbor” queries, but getting them to run in bulk on a whole table can be tricky, so let’s look at the SQL we need to make it happen.

This post will use data from the City of Victoria Open Data Catalogue if you want to follow along:

The goal is to create a map of parcels that colors each parcel by how far away it is from the nearest fire hydrant – the kind of map that might be useful for fire department planners.

Nearest To One Location

The underlying functionality to get the nearest neighbor is a PostGIS ordering operation (the “<->” symbol below) that sorts results of a query in distance order relative to a point.

SELECT cartodb_id, fulladdr
FROM parcelsshp
ORDER BY the_geom <-> CDB_LatLng(48.43, -123.35)
LIMIT 1;

By limiting our result to just the first entry in the sorted list, we naturally get the nearest parcel to our query point.

The one nearest parcel

The trouble is, this trick only works one record at a time. If we want to map the distance each parcel is from its nearest fire hydrant, we need some way of running this query 30,288 times (that’s how many parcels there are in total).

Nearest To Many Locations

To create a query where every resulting row is tied to a "nearest neighbor", we have to iterate a PostGIS ordering operation once for each input row. Fortunately, we can do this using a "lateral join".

SELECT
  parcels.*,
  hydrants.cartodb_id as hydrant_cartodb_id,
  ST_Distance(geography(hydrants.the_geom), geography(parcels.the_geom)) AS distance
FROM
  parcelsshp AS parcels
CROSS JOIN LATERAL
  (SELECT cartodb_id, the_geom
   FROM hydrantsshp
   ORDER BY
     parcels.the_geom_webmercator <-> the_geom_webmercator
   LIMIT 1) AS hydrants

You can see our original nearest neighbor query embedded after the “lateral” keyword: for each parcel, we’re finding the nearest hydrant.

Since there is always at least one nearest hydrant, we can use a "cross join" that takes every combination of records (just one, in our case) on each side of the join.

Nearest hydrant distance calculated per-parcel

Once we have the nearest hydrant, we just calculate a little extra information for the parcel/hydrant pair: how far apart are they? To get an accurate answer, we do the calculation using the “geography” type, as the distance between two geographies will be calculated exactly on the spheroid.

ST_Distance(geography(hydrants.the_geom), geography(parcels.the_geom))

To get a neat visualization, use the "choropleth" wizard to build a basic 5-category style, then go into the CSS code and tidy up the breaks to nice even numbers (every 30 meters, in this case).

#parcelsshp{
  polygon-fill: #eee;
  polygon-opacity: 0.8;
  line-color: #ddd;
  line-width: 0.5;
  line-opacity: 0.8;
}
#parcelsshp [ distance <= 150] {
   polygon-fill: #BD0026;
}
#parcelsshp [ distance <= 120] {
   polygon-fill: #F03B20;
}
#parcelsshp [ distance <= 90] {
   polygon-fill: #FD8D3C;
}
#parcelsshp [ distance <= 60] {
   polygon-fill: #FECC5C;
}
#parcelsshp [ distance <= 30] {
   polygon-fill: #FFFFb2;
}

Tidying Up

The map we get from the lateral join is almost perfect. We can color parcels by distance, and get an attractive rendering.

However, there are a few areas where the coloring is slightly off, because the City of Victoria parcel data includes multiple polygons for parcels that are strata titles (condominiums).

To remove the duplicates, we add a “distinct on” clause that strips out duplicate geometries. And to remove unattributed parcels we also filter out all parcels with a null value in the parcel number (“pid”).

SELECT
  parcels.*,
  hydrants.cartodb_id as hydrant_cartodb_id,
  ST_Distance(geography(hydrants.the_geom), geography(parcels.the_geom)) as distance
FROM
  (SELECT DISTINCT ON (the_geom) *
   FROM parcelsshp
   WHERE pid IS NOT NULL) AS parcels
CROSS JOIN LATERAL
  (SELECT cartodb_id, the_geom
   FROM hydrantsshp
   ORDER BY parcels.the_geom_webmercator <-> the_geom_webmercator
   LIMIT 1) AS hydrants

And now we have the completed map, calculating the nearest hydrant to all parcels in real time.

Calculating nearest neighbors is fast, but not instantaneous. For this example, the nearest-hydrant-for-each-parcel calculation takes about 3 seconds for 30,000 parcels. So don't try this on more than a few hundred thousand records, or your query will probably time out before your map has a chance to draw.

Happy data mapping!

January 15, 2016 12:00 AM

January 14, 2016

Stephen Mather

Yet another approach to ST_Buffer on geography

Another approach to ST_Buffer would be to subdivide the geometries before buffering, and put them all together at the end. ST_SubDivide can do this for us. We can tell it how many vertices we want in each geometry (minimum of 8). Since _ST_BestSRID will try UTM first, we'll add enough nodes to ensure we always have 8 nodes within the 1,000,000-meter width of a UTM zone by segmentizing at 62,500 meters.

CREATE OR REPLACE FUNCTION ST_Buffer(g1 geography, radius_of_buffer float,
num_seg_quarter_circle integer)
RETURNS geography AS
$BODY$

WITH subdivided AS (
 -- points need no subdividing; everything else gets segmentized and split
 SELECT ST_SubDivide(ST_Segmentize(g1::geometry, 62500), 8) AS geom
 WHERE ST_GeometryType(g1::geometry) <> 'ST_Point'
 UNION ALL
 SELECT g1::geometry
 WHERE ST_GeometryType(g1::geometry) = 'ST_Point'
),
transformed_local AS (
 SELECT ST_Transform(geom, _ST_BestSRID(geom)) AS geom FROM subdivided
),
buffered AS (
 SELECT ST_Buffer(geom, radius_of_buffer, num_seg_quarter_circle) AS geom
 FROM transformed_local
),
transformed_4326 AS (
 SELECT (ST_Dump(ST_Transform(geom, 4326))).geom AS geom FROM buffered
)

SELECT ST_Union(geom)::geography FROM transformed_4326;
 
$BODY$
LANGUAGE sql VOLATILE;
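
A quick smoke test of the function above, assuming the overload is in your search path: buffer a long line that would badly strain any single local coordinate system.

-- 100 km buffer around a line spanning many UTM zones
SELECT ST_Buffer('LINESTRING(-100 45, 100 45)'::geography, 100000, 8);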

Aaand, a somewhat unrelated image (just the buffered subdivision of a buffered point, not really an output from our function above). More on all this later.



by smathermather at January 14, 2016 04:30 AM

January 13, 2016

Stephen Mather

ST_Buffer on Geography — Iteration

In my previous post, Imagico said:

This is nice but still fails in one important aspect – it will create significant inaccuracies once your buffering distance reaches values where buffering in a single local coordinate system is getting problematic, which, depending on your precision requirements, will likely happen at distances of a few degrees. A solution to this would require either an iterative split and merge technique or a completely different approach on a very basic level (i.e. attempting truly spherical buffering).

Ok, ok. I kinda forgot about this aspect. The proper way to do this, as always, is to do it at a lower level than in an SQL function, preferably calculating the buffer on the spheroid, ellipsoid, or geoid. But, ouch. I think we can munge something good enough for now.

We’ll still do the buffer:

WITH transformed AS (
SELECT ST_Transform(g1::geometry, _ST_BestSRID(g1)) AS geom
),
buffered AS (
SELECT ST_Buffer(geom, radius_of_buffer, num_seg_quarter_circle) AS geom
FROM transformed
),
transformed_4326 AS (
SELECT ST_Transform(geom, 4326) AS geom FROM buffered
)
...

But now that we've performed the buffer, we need to determine whether the buffered geometry is too large for our local coordinate system. To do that, we'll compare the best SRID for our buffered geometry to the original best SRID, calculated from the original input geometry.

SELECT
CASE WHEN _ST_BestSRID(geom) = _St_BestSRID(g1) THEN geom::geography

If the two SRIDs aren't equal, then we need to use the new best SRID based on the buffered geometry, and buffer using that new best SRID.

ELSE ST_Transform(ST_Buffer(ST_Transform(g1::geometry, _ST_BestSRID(geom)), radius_of_buffer, num_seg_quarter_circle), 4326)::geography

Now let’s put it all together:

CREATE OR REPLACE FUNCTION ST_Buffer(g1 geography, radius_of_buffer float,
num_seg_quarter_circle integer)
RETURNS geography AS
$BODY$
 
WITH transformed AS (
SELECT ST_Transform(g1::geometry, _ST_BestSRID(g1)) AS geom
),
buffered AS (
SELECT ST_Buffer(geom, radius_of_buffer, num_seg_quarter_circle) AS geom
FROM transformed
),
transformed_4326 AS (
SELECT ST_Transform(geom, 4326) AS geom FROM buffered
)
SELECT
 CASE WHEN _ST_BestSRID(geom) = _St_BestSRID(g1) THEN geom::geography 
 ELSE ST_Transform(ST_Buffer(ST_Transform(g1::geometry, _ST_BestSRID(geom)), radius_of_buffer, num_seg_quarter_circle), 4326)::geography
 END
 FROM transformed_4326
 
$BODY$
LANGUAGE sql VOLATILE;


CREATE OR REPLACE FUNCTION ST_Buffer(g1 geography, radius_of_buffer float,
buffer_style_parameters text)
RETURNS geography AS
$BODY$
 
WITH transformed AS (
SELECT ST_Transform(g1::geometry, _ST_BestSRID(g1)) AS geom
),
buffered AS (
SELECT ST_Buffer(geom, radius_of_buffer, buffer_style_parameters) AS geom
FROM transformed
),
transformed_4326 AS (
SELECT ST_Transform(geom, 4326) AS geom FROM buffered
)
SELECT
 CASE WHEN _ST_BestSRID(geom) = _St_BestSRID(g1) THEN geom::geography 
 ELSE ST_Transform(ST_Buffer(ST_Transform(g1::geometry, _ST_BestSRID(geom)), radius_of_buffer, buffer_style_parameters), 4326)::geography
 END
 FROM transformed_4326
 
$BODY$
LANGUAGE sql VOLATILE;

So, what's the effect? Pretty significant, really.

SELECT ST_Buffer(
	 ST_GeomFromText(
	  'POINT(0 0)'
	 )::geography, 5000000, 50);
Blue line showing correct buffer, dashed red line showing distorted buffer from using wrong local coordinate system.


by smathermather at January 13, 2016 02:21 AM

January 12, 2016

CartoDB

Stacking Chips - Showing Many Points that Share the Same Location

The NYPD recently made waves with an open data release, sharing fairly detailed crime data about the “seven major felonies” for the first 3 quarters of 2015. The dataset has a row for every incident, including location (as lat/lon), date/time, etc. However, as with many crime datasets, the point locations have been adjusted to mid-block or nearest intersection so that you can’t determine exactly where the reported crime occurred. As you might expect, there are many locations in NYC where multiple crimes occurred on the same block/intersection, so we end up with many rows that have the exact same point coordinates. This is a mapper’s conundrum, as point markers will simply overlap each other, only allowing you to see the one point when there are many beneath it.

Some Workarounds to the ‘Overlapping Points Problem’

We’ll use a simple dummy dataset that has 10 points in one location, and 5 points in another location. When rendered by CartoDB, we see 2 points when there are actually 15:

Heatmaps

A heatmap overlay “heats up” an area on the map based on the number of points, or based on some aggregation of a numeric attribute. This obscures the point markers themselves, but at least the data are all visually accounted for.

Clustering

Clustering replaces many markers with a single marker, either with a label or another visual representation of the points represented. When you zoom in closer, the algorithm determines whether there is still overlap. If there is none, it breaks up the cluster. Of course, if the points have the exact same coordinates, the cluster will never split up. Go ahead, zoom all the way in… the clusters will remain:

Spidering

Spidering, or spiderifying, is a visualization technique where overlapping points are offset around their mapped location and visually linked to it with a line drawn on the map. This is sort of an "exploded view" for when there are multiple markers in the same area, allowing the user to interact with each one while indicating that the marker has been shifted away from the location it represents.

This functionality is built into the leaflet.MarkerCluster plugin, which makes pretty animated clusters. Here’s an example using the same dummy data (click each cluster to see the spiderify animations):

Crime Wafers, Mentos, Chip Stacks - A SQL Hack in CartoDB

While tinkering with the NYC Felonies data shortly after its release, I was struggling with the overlapping points problem, and tried to implement a CartoDB-based approach to spidering based on this example by our very own Andrew Hill.

Because of the density of the felonies data, the results of trying this technique were a huge mess. Even after reining in the spirals, the clusters were still too close together and the map was unusable.

I eventually figured out a way to modify the example so that instead of spreading out the points in a spiral pattern, I could simply offset each point along the y axis. The result was a felony map that shows “stacks” of markers at each location, allowing both visual styling and interactivity with every point in the dataset. Take a look:

So what's the secret sauce? The example below shows the same dummy data from before rendered using this method. I'll walk through each step of the complex SQL query, explaining what each part does.

Step 1:

WITH
  m AS (
    SELECT array_agg(cartodb_id) id_list, the_geom_webmercator, ST_Y(the_geom_webmercator) y 
    FROM chriswhong.stack_dummy 
    GROUP BY the_geom_webmercator 
    ORDER BY y DESC
  ),

This step groups the points by their geometry, so points with identical locations will be grouped. It stashes the cartodb_ids for each group into an array using array_agg(). The groups are then sorted by their y coordinate in descending order (ST_Y is used to generate a new column with the y value), so that groups that are further south appear last. This ensures the more southern "stacks" are rendered in the foreground.

Now we’ve got one row per location, each with an array of unique ids from the source data associated with that location.

Step 2:

f AS (
    SELECT  generate_series(1, array_length(id_list,1)) p, unnest(id_list) cartodb_id, the_geom_webmercator 
    FROM m
  ),

With this step, we're going to un-group the data. generate_series() assigns a number to each row within its group, which we'll use later to offset the geometries. The cartodb_ids are pulled out of the array we stashed them in, so now we've got a row for each felony in the original dataset, with its original geometry, its cartodb_id, and its number within its previous location group.

Step 3:

SELECT  ST_Translate(f.the_geom_webmercator,0,f.p*12) the_geom_webmercator, f.cartodb_id, q.category
FROM f, chriswhong.stack_dummy q
WHERE f.cartodb_id = q.cartodb_id

The last step generates the result set that is actually used in the map. ST_Translate() is used to create new geometries for each point based on the "location group" numbering we added in step 2. The x value of the new geometry remains the same as the original, but the y value is calculated as ST_Y(f.the_geom_webmercator) + f.p*12. (12 is the y-offset in web mercator units; you can modify this to tighten or spread out the points.) The first point in the location group will be 12 web mercator units north of the original point, the second one will be 24, and so on.

A little CartoCSS makes the markers into ovals instead of circles for a 3D-ish look of stacked chips.

marker-type: ellipse;
marker-width: 10;
marker-height: 8;

This approach with a hard-coded offset will only work at a single zoom level. If you zoom in on the map above, you’ll see the points appear to spread out. This is due to the fact that 12 web mercator units takes up double the distance on your screen each time you zoom in. To overcome this in the felony map, we must dynamically define the offset based on the zoom level.

Here's a little JavaScript function that returns a y-offset for a given zoom level.

function getYOffset(zoom) {
      var yOffset = 655360; //offset in webmercator units at zoom level 0
      for(var i=0;i<zoom;i++) {
        yOffset = yOffset/2;
      }
      return yOffset;
    }

For each zoom level above zero, the y-offset is cut in half. The offset is then used with a SQL template in cartodb.js to re-render the map whenever the user zooms, so the "stacks" appear the same no matter what zoom level you're on. Check out the full code for the felonies map here.

Next time you find yourself facing the overlapping points problem, give this method a try and stack your chips!

Happy Stacking!

January 12, 2016 12:00 AM

January 11, 2016

Stephen Mather

Final?: ST_Buffer on Geography, or My love for @NullIsland

Thanks to Paul Norman’s reminder in a previous post, we now have all the pieces we need to complete an ST_Buffer function that exposes all the wonderful goodness of buffer style parameters to geography users and also chooses the best local coordinate system automatically.

We use _ST_BestSRID to choose our local coordinate system.

Remember my previous three disadvantages:

  1. It didn't do this at a low enough level to automatically use the best available local coordinate system. This left it to the user to choose the best available local coordinate system.
  2. It used a different function name than you otherwise use for buffering.
  3. Finally, it wouldn't let you use all the other parameters that ST_Buffer exposes through the use of buffer style parameters.

They are now all solved.

CREATE OR REPLACE FUNCTION ST_Buffer(g1 geography, radius_of_buffer float,
num_seg_quarter_circle integer)
RETURNS geography AS
$BODY$

WITH transformed AS (
SELECT ST_Transform(g1::geometry, _ST_BestSRID(g1)) AS geom
),
buffered AS (
SELECT ST_Buffer(geom, radius_of_buffer, num_seg_quarter_circle) AS geom
FROM transformed
),
transformed_4326 AS (
SELECT ST_Transform(geom, 4326) AS geom FROM buffered
)
SELECT geom::geography FROM transformed_4326

$BODY$
LANGUAGE sql VOLATILE;

 

CREATE OR REPLACE FUNCTION ST_Buffer(g1 geography, radius_of_buffer float,
buffer_style_parameters text)
RETURNS geography AS
$BODY$

WITH transformed AS (
SELECT ST_Transform(g1::geometry, _ST_BestSRID(g1)) AS geom
),
buffered AS (
SELECT ST_Buffer(geom, radius_of_buffer, buffer_style_parameters) AS geom
FROM transformed
),
transformed_4326 AS (
SELECT ST_Transform(geom, 4326) AS geom FROM buffered
)
SELECT geom::geography FROM transformed_4326

$BODY$
LANGUAGE sql VOLATILE;

Let’s try this!

SELECT ST_Buffer(
ST_GeomFromText(
'LINESTRING(5 2.5,0 -2.50,-5 2.5)'
)::geography, 500000, 'join=mitre');
Heart shaped buffer over Null Island.

Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under ODbL.

(Poor editing earlier tonight. My apologies.)


by smathermather at January 11, 2016 11:07 PM

Stephen Mather

ST_Buffer diving under the covers part 1

This will be a quick post. I just looked at how ST_Buffer is written in the PostGIS codebase, and I thought this was interesting: when you use ST_Buffer and specify a geometry, a buffer distance, and an integer value, the call gets converted by this overloaded function into a quad_segs version.

CREATE OR REPLACE FUNCTION ST_Buffer(geometry,float8,integer)
RETURNS geometry
AS $$ SELECT _ST_Buffer($1, $2,
CAST('quad_segs='||CAST($3 AS text) as cstring))
$$
LANGUAGE 'sql' IMMUTABLE STRICT;
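
In other words, the integer form is just sugar for the buffer-style-parameter form, so these two calls should produce identical output:

SELECT ST_AsText(ST_Buffer('POINT(0 0)'::geometry, 10, 4));
SELECT ST_AsText(ST_Buffer('POINT(0 0)'::geometry, 10, 'quad_segs=4'));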

P.S. I would have written more, but Sherlock just started.


by smathermather at January 11, 2016 03:02 AM

January 09, 2016

Stephen Mather

ST_Buffer for those who really want all buffer features in geography

My previous post on improving buffering geography in PostGIS has three disadvantages:

  1. It doesn’t do this at a low enough level to automatically use the best available local coordinate system. This leaves it to the user to choose the best available local coordinate system. I should fix this (but I probably won’t just now…).
  2. It uses a different function name than you otherwise use for buffering. Can we use the same name as ST_Buffer?
  3. Finally, it won’t let you use all the other parameters that ST_Buffer exposes through the use of buffer style parameters.

First, let's address point 2. With PostgreSQL functions, it is possible to "overload" a function. This means that the same function name, but with a different set of inputs, is effectively a different function with the same name. This is because function + inputs = signature. Since the inputs for our new function are different in number and type from the existing ST_Buffer functions, we can add our function using the same name. The disadvantage is that we will likely lose and have to re-create these functions when we dump and reload the database. (For the advanced user: put it in an alternate schema, and add that to your schema search path so it's readily available when you want to use it, but you don't have to go out of your way to make sure that it dumps and reloads.)
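
For that advanced-user aside, a minimal sketch of the alternate-schema trick (the schema and database names are illustrative):

-- keep custom overloads out of public so dump/reload stays painless
CREATE SCHEMA IF NOT EXISTS overloads;
-- create the ST_Buffer overloads below inside that schema, then add it
-- to the search path so unqualified calls can find them:
ALTER DATABASE mydb SET search_path = overloads, public;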

CREATE OR REPLACE FUNCTION ST_Buffer(g1 geography, radius_of_buffer float,
 num_seg_quarter_circle integer, local_coord integer)
RETURNS geometry AS
$BODY$
 
WITH transformed AS (
 SELECT ST_Transform(g1::geometry, local_coord) AS geom
),
buffered AS (
 SELECT ST_Buffer(geom, radius_of_buffer, num_seg_quarter_circle) AS geom
 FROM transformed
),
transformed_4326 AS (
 SELECT ST_Transform(geom, 4326) AS geom FROM buffered
)
SELECT * FROM transformed_4326
 
$BODY$
LANGUAGE sql VOLATILE;

We can also address point 3. Let’s perform a similar overload for buffer style parameters:

CREATE OR REPLACE FUNCTION ST_Buffer(g1 geography, radius_of_buffer float,
 buffer_style_parameters text, local_coord integer)
RETURNS geometry AS
$BODY$
 
WITH transformed AS (
 SELECT ST_Transform(g1::geometry, local_coord) AS geom
),
buffered AS (
 SELECT ST_Buffer(geom, radius_of_buffer, buffer_style_parameters) AS geom
 FROM transformed
),
transformed_4326 AS (
 SELECT ST_Transform(geom, 4326) AS geom FROM buffered
)
SELECT * FROM transformed_4326
 
$BODY$
LANGUAGE sql VOLATILE;

Finally, we can use our special buffer style parameters with geography:

SELECT 0 AS id, ST_Buffer(
 ST_GeomFromText(
 'LINESTRING(-81 41,-82 41,-82 42)'
 ), 500000, 'join=mitre mitre_limit=5.0', 3734);

 

Example buffer of line with join=mitre mitre_limit=5.0.

This is a hack. Daniel’s note in the previous post still stands — it would be better to do this at a lower level within geography, rather than asking the user to know and employ the best possible local coordinate system.


by smathermather at January 09, 2016 09:43 PM

January 08, 2016

Stephen Mather

ST_Buffer for those who want really round geographic circles

A little functionality request from @antoniolocandro on Twitter:

Ask and ye might receive. Let's build a function that will take your input geography, how far you want to buffer (in local coordinate units), the number of segments you want in a quarter circle, and the local coordinate system you want to use for your buffer, as an EPSG code.

CREATE OR REPLACE FUNCTION zz_buffer(g1 geography, radius_of_buffer float,
num_seg_quarter_circle integer, local_coord integer)
RETURNS geometry AS
$BODY$

WITH transformed AS (
SELECT ST_Transform(g1::geometry, local_coord) AS geom
),
buffered AS (
SELECT ST_Buffer(geom, radius_of_buffer, num_seg_quarter_circle) AS geom
FROM transformed
),
transformed_4326 AS (
SELECT ST_Transform(geom, 4326) AS geom FROM buffered
)
SELECT * FROM transformed_4326

$BODY$
LANGUAGE sql VOLATILE
COST 100;

Now let’s use this:

SELECT 0 AS id, zz_buffer(ST_GeomFromText ('POINT(-81 41)', 4326) , 15000, 2, 3734)
Image of 8 segment buffer from geography.

Now, we can test which level of smoothness we like best:

SELECT 0 AS id, zz_buffer(ST_GeomFromText ('POINT(-81 41)', 4326) , 15000, generate_series, 3734)
FROM generate_series(1,50)
Square, Octagon, etc. generated from new buffer function.

Zoom in of different buffer smoothnesses.


by smathermather at January 08, 2016 11:36 PM

CartoDB

SQL the Sequel: Intermediate SQL Session with CartoDB

Join CartoDB scientists Andy Eschbacher and Stuart Lynn, fellow geospatial data wranglers, and web cartographers at our Intermediate SQL workshop on Tuesday, January 12th!

We strongly recommend taking our first two courses on SQL and PostGIS in our Map Academy to prepare, because this course will not be introducing basic SQL.

In this workshop you'll learn:

  • Aggregate functions and GROUP BY
  • Subqueries
  • Super fun spatial queries
  • Ultra cool tricks

Where can you find us? That's easy. This workshop will be at our Brooklyn office in Bushwick (201 Moore Street, Brooklyn, NY) on Tuesday, January 12 at 6:30 PM.

Can’t make this session or simply can’t get enough of CartoDB? Join us for our twice-monthly Friday open office hours at our spacious Brooklyn office. Bring that project you need a little extra help on, come learn that webmapping skill you’ve always wanted to pick up, or just come to have a coffee!

Happy data mapping!

January 08, 2016 02:00 PM

January 07, 2016

Postgres OnLine Journal (Leo Hsu, Regina Obe)

Import Foreign Schema hack with OGR_FDW and reading LibreOffice calc workbooks

PostgreSQL 9.4 and below don't support importing a whole set of tables from a FOREIGN server, but PostgreSQL 9.5 does with the upcoming IMPORT FOREIGN SCHEMA. Using it will require FDW wrapper designers to be aware of this feature and use the plumbing in their wrappers. IMPORT FOREIGN SCHEMA for ogr_fdw, come the PostgreSQL 9.5 release, is on the features ticket list.

UPDATE: If you are using PostgreSQL 9.5+, you can use the IMPORT FOREIGN SCHEMA feature which is available in ogr_fdw 1.0.1+. We demonstrate this in: ogr fdw IMPORT FOREIGN SCHEMA.

The ogr_fdw wrapper comes with a to-die-for command-line utility called ogr_fdw_info that generates the table structures for you and will also list all the tables in the foreign data source if you don't give it a specific table name. So with this utility I wrote a little hack that uses PostgreSQL's COPY ... FROM PROGRAM feature to call out to the ogr_fdw_info command-line tool to figure out the table names, plus some DO magic to create the tables.

Though ogr_fdw is designed to be a spatial foreign data wrapper, it's turning out to be a pretty nice non-spatial FDW as well, especially for reading spreadsheets, which we seem to get a lot of. The hack I am about to demonstrate uses a LibreOffice/OpenOffice workbook, but it works equally well with Excel workbooks and almost any data source that OGR supports.
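
For a taste of where this ends up, here is a minimal ogr_fdw setup for a Calc workbook (file, server, and schema names are illustrative). On PostgreSQL 9.5+ with ogr_fdw 1.0.1+, the IMPORT FOREIGN SCHEMA shortcut at the end replaces the hack entirely:

CREATE EXTENSION IF NOT EXISTS ogr_fdw;

-- point a foreign server at the workbook
CREATE SERVER calc_svr
  FOREIGN DATA WRAPPER ogr_fdw
  OPTIONS (datasource '/data/budget.ods', format 'ODS');

-- pull in every sheet as a foreign table (ogr_all means all layers)
CREATE SCHEMA IF NOT EXISTS staging;
IMPORT FOREIGN SCHEMA ogr_all FROM SERVER calc_svr INTO staging;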


Continue reading "Import Foreign Schema hack with OGR_FDW and reading LibreOffice calc workbooks"

by Leo Hsu and Regina Obe (nospam@example.com) at January 07, 2016 07:59 AM

January 06, 2016

PostGIS Development

PostGIS 2.2.1 Released

The PostGIS development team is happy to release a patch for PostGIS 2.2: the 2.2.1 release. As befits a patch release, the focus is on bugs, breakages, and performance issues. This release includes many fixes for topology, so topology users should give it special attention.

Bug Fixes and Improvements

  • #2232, avoid accumulated error in SVG rounding
  • #3321, Fix performance regression in topology loading
  • #3329, Fix robustness regression in TopoGeo_addPoint
  • #3349, Fix installation path of postgis_topology scripts
  • #3351, set endnodes isolation on ST_RemoveIsoEdge (and lwt_RemIsoEdge)
  • #3355, geography ST_Segmentize has geometry bbox
  • #3359, Fix toTopoGeom loss of low-id primitives from TopoGeometry definition
  • #3360, _raster_constraint_info_scale invalid input syntax
  • #3375, crash in repeated point removal for collection(point)
  • #3378, Fix handling of hierarchical TopoGeometries in presence of multiple topologies
  • #3380, #3402, Decimate lines on topology load
  • #3388, #3410, Fix missing end-points in ST_RemoveRepeatedPoints
  • #3389, Buffer overflow in lwgeom_to_geojson
  • #3390, Compilation under Alpine Linux 3.2 gives an error when compiling the postgis and postgis_topology extension
  • #3393, ST_Area NaN for some polygons
  • #3401, Improve ST_Split robustness on 32bit systems
  • #3404, ST_ClusterWithin crashes backend
  • #3407, Fix crash on splitting a face or an edge defining multiple TopoGeometry objects
  • #3411, Clustering functions not using spatial index
  • #3412, Improve robustness of snapping step in TopoGeo_addLinestring
  • #3415, Fix OSX 10.9 build under pkgsrc
  • Fix memory leak in lwt_ChangeEdgeGeom [liblwgeom]

See the full list of changes in the news file and please report bugs that you find in the release.

Binary packages will appear in repositories over the coming weeks as packagers roll out builds.

View all closed tickets for 2.2.1.

by Regina Obe at January 06, 2016 12:00 AM

January 05, 2016

Stephen Mather

ST_ApproximateMedialAxis(geom)

Trying out PostGIS 2.2’s ST_ApproximateMedialAxis capabilities today. I’m going to use it to label parcels. So here is my parcel fabric:


And here’s what the skeleton of that fabric looks like:


It gets pretty interesting where the parcels are more complicated:


more:


Oh, the code you say? Well, it couldn’t be much easier:

SELECT gid, ST_ApproximateMedialAxis(geom) AS geom FROM cuy_parcel_2015;

Or the full version for a new table:

DROP TABLE IF EXISTS cuy_parcel_2015_medial;

CREATE TABLE cuy_parcel_2015_medial AS (
 SELECT gid, ST_ApproximateMedialAxis(geom) AS geom FROM cuy_parcel_2015
 );
ALTER TABLE cuy_parcel_2015_medial ADD PRIMARY KEY (gid);
CREATE INDEX cuy_parcel_2015_medial_geom_idx ON cuy_parcel_2015_medial USING GIST ( geom );
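
Since the goal here is labeling, one way to turn each skeleton into a label point is to grab the midpoint of one of its member lines; a rough sketch using the table created above:

-- ST_ApproximateMedialAxis returns a MultiLineString; take the point halfway
-- along its first member line as a candidate label anchor
SELECT gid, ST_LineInterpolatePoint(ST_GeometryN(geom, 1), 0.5) AS label_geom
FROM cuy_parcel_2015_medial;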

Happy New Year.

by smathermather at January 05, 2016 12:09 AM

January 04, 2016

CartoDB

Visualize 2015 Urban Populations with Proportional Symbols


Proportional symbol maps are used to represent point data that are attached to a specific geographic location (like a city) or data aggregated to a point from an area (like a state). The area of each symbol on the map (usually a circle) is scaled according to its value at a given geographic location using either absolute scaling or range-grading. The result is a map where larger symbols indicate higher values and smaller ones, lower values. This type of map is flexible in that you can represent raw data values (total population) or data that are normalized (percentage of population).

In this blog, I’ll walk through the basic principles of proportional symbols and some best practices when designing them. Then, I’ll cover the steps to make a map of 2015 urban population by country using the absolute scaling method.

Proportional Symbol Maps

Absolute Scaling vs. Range-Grading

There are two methods used to create proportional symbols: absolute scaling and range-grading. With absolute scaling, the area of each symbol on the map is scaled proportionately to its value in the data. With range-grading, values are broken into ranges (typically 3-5) using a classification method where symbols are sized based on the range they fall into (this is similar to the choropleth technique used for polygon data).

Use absolute scaling when you want your map reader to estimate relative magnitudes between individual values. Use range-grading to show magnitudes using classes that represent a range of values.

You will hear proportional symbol maps referred to in different ways. In the CartoDB Editor, we call them Bubble maps. Others refer to absolutely scaled symbols as proportional symbols and range-graded ones as graduated symbols. Another pairing you will see is classed (range-graded) versus unclassed (absolute scaling).

Best Practices

There are some important design considerations when making proportional symbol maps:

  • Avoid a homogeneous looking map. It is important that there is enough variation between symbol sizes representing the range of values. With absolute scaling if there isn’t a lot of variation between the values in your data, there will be little variation in symbol size. With range-grading, make sure there is enough differentiation in symbol size from class to class so they are easily distinguishable from one another

  • Use the right method for your data. Research shows that map readers have a hard time estimating the area of a single symbol on a map, and even more difficulty comparing areas between symbols. Make sure your data are appropriate for absolutely scaled symbols; if not, use range-grading to limit the number of symbol sizes that need to be interpreted

  • Think about your map reader. Cartographic research also shows that map readers interpret the areas of circles and squares best on proportional symbol maps. Definitely experiment with other symbol types (even text!) but when in doubt, default to these

  • Legends are important. For absolute scaling, your legend should provide symbols that are sized close to the largest and smallest values in the data. And of course with range-grading, each symbol should be sized as it is on the map with its associated values listed

  • Symbol legibility. Make sure to order your data so symbols representing smaller values don’t get lost underneath symbols with larger values. In areas where there is a concentration of symbols, the key is to help each one stand out against the other. This can be done using a couple of design tricks like symbol outlines

For more information on proportional symbols, definitely check out Cartography: Thematic Map Design by Borden Dent where a lot of this information comes from!

Let’s Get Mapping!

Before we dive into the data preparation and map making process, let’s take a look at some details about the final map:

  • It was designed to be viewed from zooms 2-5
  • Symbols are sized using absolute scaling
  • Only countries with urban populations of greater than 1 million are included
  • The color of each country point relates to its continent
  • It uses a simple basemap to promote visual hierarchy as discussed in this blog
  • The final map projection is World Robinson (for more on projections in CartoDB see this blog)
  • Zoom in on the map below to see the multi-scale legend, which uses custom data and CartoCSS for its design (this will be covered in another blog post!)

As mentioned earlier, with absolute scaling, the area of each symbol is proportionate to its value in the data. In future versions of CartoDB.js and the CartoDB Editor, you will be able to do this on the fly. Until then, this blog outlines the process to make a proportional symbol map using absolute scaling.

The topics covered are:

  • Data Prep
  • Formula overview for absolute scaling
  • Calculating symbol sizes in CartoDB
  • Applying symbology based on values
  • Final touches

Data

The data used for this map come from the World Bank Data Bank.

To get the data ready for the map:

  • I downloaded total urban population projections by country for 2015 in .csv format
  • Next, I uploaded the .csv to CartoDB where it was georeferenced using country name
  • I converted the resulting polygons to points using ST_Centroid
  • Finally, I assigned a continent to each point in the data using a method similar to the one outlined here

I won’t outline each step in detail, but if you would like to follow along, you can download the prepared dataset here.

The attributes in the dataset are:

country_name: the name of the country
country_code: the three letter country code
continent_name: the continent the country belongs to
pop_2015: the total count for 2015 urban population

Note: the SQL and CartoCSS in this blog assume that the name of your table is world_population_2015

Formula Overview

To determine the proper symbol size for each country, we’ll do the following:

  1. Pick a maximum symbol size that will be used for the largest value in the data
  2. Calculate the square root for each value in the data
  3. Use those two pieces of information to calculate the symbol size for each country

To simplify this to an equation:

symbol size = (maximum symbol size) * (square root / max square root)

Where:

maximum symbol size = the size of the largest symbol on our map
square root = the square root of each value in the data
max square root = the square root of the largest value in the data
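To sanity-check the formula: a country with one quarter of China's urban population has a square root half as large, so it gets a symbol half the maximum width (37.5px for a 75px maximum). That is exactly what we want, since a circle with half the diameter has one quarter of the area.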

Note: For a more detailed discussion on this calculation, see Chapter 9 of Borden Dent’s book!

Symbol Size Calculation in CartoDB

1. Choose a Maximum Symbol Size

The maximum symbol size is assigned to the largest value in the data and is the starting point from which all other symbols will be scaled. Choose a maximum symbol size that won’t cause too much clutter or overpower the map.

For the final map (after some trial and error), I chose a maximum symbol size of 75px based on the values in the data, the opening scale, and extent.

Note: You should adjust this number depending on your map’s extent, scale, and the range of values. Using the method outlined below, you can easily adjust this number and recalculate symbol sizes.

2. Add Fields

We need to add two numeric fields to our table: square_root where we’ll calculate the square root for each pop_2015 value, and symbol_size where we’ll calculate the symbol size for each country.

  • Go to DATA VIEW
  • In the bottom right hand corner, click the button to Add Column
Add Column
  • Name the column square_root and hit enter
  • Change the column type to number by selecting it from the drop-down
  • Follow the same steps to add another column named symbol_size
Add Field
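If you prefer SQL to the UI buttons, here is a sketch of the equivalent (assuming the world_population_2015 table name from the note above):

ALTER TABLE world_population_2015 ADD COLUMN square_root numeric;
ALTER TABLE world_population_2015 ADD COLUMN symbol_size numeric;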

3. Calculate Square Root

  • From DATA VIEW open the SQL tray
  • Using the SQL query below, we’ll update our square_root field with the square root of each value in the pop_2015 field:
UPDATE 
  world_population_2015 
SET 
  square_root = SQRT(pop_2015)
  • Click Apply Query
Calculate Square Root

4. Calculate Symbol Size

Now that we have decided on our maximum symbol size (75) and have the square_root field populated, we can use that information to calculate the symbol_size for all other values in the data.

Let’s take a look at our equation again:

symbol size = (maximum symbol size) * (square root / max square root)

Where:
symbol size = our symbol_size field
maximum symbol size = 75
square root = our square_root field
max square root = 27620.2642999664 (the square_root value for China)

With all of the information that we need, we can construct the equation using SQL to update the symbol_size field:

  • From Data View open your SQL tray
  • Copy/paste the SQL below to calculate a symbol size for each value in the data
UPDATE 
  world_population_2015 
SET 
  symbol_size = 75.0 * square_root / 27620.2642999664
  • Click Apply Query
Calculate Symbol Size

If we look at the resulting sizes, we can see that the country with the largest population (China) has a symbol size of 75 and all other values are scaled accordingly:

Final Symbol Size
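If you'd rather not hard-code the maximum square root, a variant of the same update (a sketch, same table assumed) computes it on the fly, which also makes recalculating after choosing a new maximum symbol size a single step:

UPDATE
  world_population_2015
SET
  symbol_size = 75.0 * square_root /
    (SELECT MAX(square_root) FROM world_population_2015);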

Symbol Design

Now that we have the additional attributes in the data, we’ll walk through the steps to size, color, and define the drawing order of the symbols in MAP VIEW.

  • From DATA VIEW click the option to VISUALIZE to create a map from our data table
Default Map

1. Symbolize by Continent

On the final map, each symbol is colored according to its continent using the continent_name field.

  • Open the styling Wizard
  • Select the Category option and choose continent_name for Column
Color Points

2. Size Symbols

Next, we’ll size each point symbol using the values from our [symbol_size] field.

  • Open the CartoCSS tray for the layer
  • Replace the value for marker-width: from 10 to [symbol_size] so the first block of global styling looks like:
#world_population_2015 {
   marker-fill-opacity: 0.9;
   marker-line-color: #FFF;
   marker-line-width: 1;
   marker-line-opacity: 0.8;
   marker-placement: point;
   marker-type: ellipse;
   marker-width: [symbol_size];
   marker-allow-overlap: true;
}

...
  • Click Apply Style

If you aren’t satisfied with the symbol sizes after seeing them on the map, you can change the maximum symbol size value and rerun the equation to calculate new values for the symbol_size field.

Default Map

3. Color Symbols

Similar to the drought monitor map, I chose six different colors, one for each continent, assigned them as variables, and then replaced the default category colors. I also made a slight adjustment to the default marker-line-opacity, from 1 to 0.8.

  • In the CartoCSS tray, copy and paste the six color variables (or choose your own!) and paste them above the first block of CartoCSS
@africa:  #75445C;
@asia:    #AF6458;
@europe:  #D5A75B;
@northam: #736F4C;
@oceania: #5b788e;
@southam: #4C4E8F;
  • Next, assign the color variables to each continent name
  • For example, the CartoCSS for Africa would look like this:
#world_population_2015[continent_name="Africa"] {
   marker-fill: @africa;
}
  • Click Apply Style to see the changes
Color Symbols

4. Drawing Order

As outlined in the best practices section, we need to make sure that smaller symbols are being drawn on top of larger symbols so important information does not get covered up. Looking around the map, we can see that there are cases where this is happening.

To fix this, we will change the default drawing order (from ascending) to descending using a field (symbol_size) in our data along with the SQL ORDER BY clause. This forces larger symbol_size values to draw first and smaller values to draw on top.

  • Open the SQL tray
  • Copy/paste the following
SELECT * 
FROM 
  world_population_2015 
ORDER BY 
  symbol_size 
DESC
  • Click Apply Query to see the changes
Drawing Order

Final Touches

1. Symbol Size Through Zoom

Since the map will be viewed from zooms 2-5, we need to size the symbols appropriately based on viewing scale.

We’ll do this with zoom dependent styling that increases the symbol_size by a factor of 2 at each zoom level for the marker-width property.

  • Open the CartoCSS tray for the layer
  • We’ll add the additional marker-width styling to the global properties of the layer
#world_population_2015 {
   marker-fill-opacity: 0.9;
   marker-line-color: #FFF;
   marker-line-width: 1;
   marker-line-opacity: 0.8;
   marker-placement: point;
   marker-type: ellipse;
   marker-width: [symbol_size];
   marker-allow-overlap: true;
   
   //scale symbols based on zoom level
   [zoom=3]{marker-width: [symbol_size]*2;}
   [zoom=4]{marker-width: [symbol_size]*4;}
   [zoom=5]{marker-width: [symbol_size]*8;}
}
  • Click Apply Style

At this point, your map should look something like this where symbols are colored by continent, sized by the value they represent in the data, smaller symbols are drawing on top of larger ones, and a symbol’s size scales between zooms 2-5:

2. Increase Buffer Size

If you zoom into China, you’ll notice that at zoom 4, the symbol cuts off along a tile boundary.

We can fix this by increasing the map’s buffer-size, which adds additional pixels around each tile.

  • Copy the following
Map {
  buffer-size: 256;
}
  • Paste it above the color variables for continent colors
  • Click Apply Style

And the result:

3. Basemap

For the final map, I designed a simple basemap in World Robinson. For tips on designing a basemap for thematic overlays in a variety of projections, see this post and this post.

Alternatively, you can use one of the default CartoDB basemaps like Dark Matter (the lite version):

4. Other Map Elements

As discussed earlier, legends are important for proportional symbol maps. For the final map, I made a legend with three data points, assigned them large, medium, and small values, and then scaled them using the same formula that was used for all other symbols. I then positioned the circles and text using CartoCSS through zoom level.
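As a sketch of that legend-sizing step, the same formula can be run against a few reference values (the three populations below are invented round numbers for illustration, not values from the dataset):

SELECT v AS population,
       75.0 * SQRT(v) / 27620.2642999664 AS legend_symbol_size
FROM (VALUES (750000000.0), (250000000.0), (25000000.0)) AS t(v);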

I followed the same method for the map title, description, and data source text.

Experiment!

Proportional symbol maps are some of my favorites! There are a lot of interesting stories that can be told using this method in a variety of creative ways.

For example, the final map from the blog can be taken further to include the World Bank’s population projections for multiple time periods:

or to visualize 50+ years of cholera outbreaks around the world:

or for tiger counts in India:

… and many, many more …

Happy proportional symbol mapping!

January 04, 2016 10:00 AM

January 02, 2016

Boston GIS (Regina Obe, Leo Hsu)

ogr_fdw foreign data wrapper for PostgreSQL 9.5rc1

The PostgreSQL 9.5 FDW API changed slightly between the 9.5beta2 release and 9.5rc1. This means that many FDWs may be broken at the moment. I patched up the OGR_FDW to be compatible with 9.5rc1, and Paul merged this into the ogr_fdw repo.

That said, I have not repackaged the Windows PostGIS 2.2.0 9.5 bundle to include this updated library, so if you upgraded to PostgreSQL 9.5rc1, the ogr_fdw from Stackbuilder will crash your backend. However, there are newer versions compiled against 9.5rc1 with the patch in winnie's 9.5 extras folder - http://winnie.postgis.net/download/windows/pg95/buildbot/extras.

We plan to do another release with this patched version (or newer patched version), for PostGIS 2.2.1, which should be out within the next week or so.

I also submitted a patch for ogr_fdw adding support for the new IMPORT FOREIGN SCHEMA feature available in 9.5+. This patch has not been accepted yet, but I'm hoping it will be, in some shape or form, before the 9.5/PostGIS 2.2.1 release. How to take advantage of this new feature is briefly described in Import Foreign Schema for ogr fdw for PostgreSQL 9.5. If anyone on windows is in a rush to try it, let me know and I'll spin up a binary from my ogr_fdw fork. The IMPORT FOREIGN SCHEMA feature is much more useful than I dreamed it would be for ogr_fdw, so I'm very excited.

by Regina Obe (nospam@example.com) at January 02, 2016 10:05 PM

December 31, 2015

Postgres OnLine Journal (Leo Hsu, Regina Obe)

Import Foreign Schema for ogr_fdw for PostgreSQL 9.5

PostgreSQL 9.5RC1 got released recently, and as with PostgreSQL 9.5beta2, the FDW API changed just enough that the ogr_fdw I compiled for PostgreSQL 9.5beta2 no longer worked with PostgreSQL 9.5RC1. While patching up ogr_fdw to work with 9.5RC1, I took a study of postgres_fdw to see how much effort it would be to implement the new PostgreSQL 9.5 Import Schema functionality for my favorite FDW, ogr_fdw. It took me about a day's work; had I been more experienced, grafting the logic from postgres_fdw onto the ogr_fdw_info work that Paul Ramsey had already done would probably have taken only an hour to achieve Import Foreign Schema nirvana. Here's hoping my ogr_fdw patch gets accepted in some shape or form in time for the PostgreSQL 9.5 release, and in time to package for the Windows PostGIS 2.2 Bundle for PostgreSQL 9.5.

UPDATE - ogr_fdw 1.0.1+ now includes the IMPORT FOREIGN SCHEMA functionality discussed here.
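For a flavor of what the feature looks like in use, here's a hedged sketch (the server name, datasource path, and target schema are invented; ogr_all is, as I understand it, the special schema name ogr_fdw uses to mean "import every layer"):

CREATE SERVER shp_server
  FOREIGN DATA WRAPPER ogr_fdw
  OPTIONS (datasource '/data/shapefiles', format 'ESRI Shapefile');

CREATE SCHEMA staging;

IMPORT FOREIGN SCHEMA ogr_all
  FROM SERVER shp_server INTO staging;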

Continue reading "Import Foreign Schema for ogr_fdw for PostgreSQL 9.5"

by Leo Hsu and Regina Obe (nospam@example.com) at December 31, 2015 11:43 PM

November 29, 2015

Paul Ramsey

PostGIS Gotchas @ PgConfSV 2015

I attended PgConf Silicon Valley a couple weeks ago and gave a new talk about aspects of PostGIS that come as a surprise to new users. Folks in Silicon Valley arrive at PostGIS with lots of technical chops, but often little experience with geospatial concepts, which can lead to fun misunderstandings. Also, PostGIS just has a lot of historical behaviours we've kept in place for backwards compatibility over the years.

Thanks to everyone who turned out to attend!

by Paul Ramsey (noreply@blogger.com) at November 29, 2015 12:53 AM

November 24, 2015

Boston GIS (Regina Obe, Leo Hsu)

FOSS4G NA 2016 Call for Proposals and Sponsors

For those who are unaware, Free and Open Source 4 GIS North America (FOSS4G NA 2016) will be happening in Raleigh, NC May 2-5th 2016. We are accepting proposals now. The Early Bird proposal deadline is January 22nd, 2016, which means if you submit early, you'll get an early notification, before the February 8th, 2016 deadline.

Unlike last year, where decisions were made entirely by the program committee, we will have users voting for presentations, similar to what has been done for many FOSS4G International events.

We are also looking for Sponsors, so if you'd like to sponsor the event in some way, please refer to our Sponsor Prospectus page.

by Regina Obe (nospam@example.com) at November 24, 2015 08:07 AM

November 22, 2015

Postgres OnLine Journal (Leo Hsu, Regina Obe)

PostGIS 2.2.0 windows bundle for PostgreSQL 9.5beta2 32 and 64 bit

We just pushed out installers for PostGIS 2.2.0 for PostgreSQL 9.5beta2 on windows, both 32-bit and 64-bit, on Application Stackbuilder. These installers are also available as standalone downloads listed on the PostGIS windows page. This is the first PostGIS 2.2.0 release for PostgreSQL 9.5 32-bit and a re-release for PostgreSQL 9.5 64-bit (this time compiled against beta2 instead of beta1).

In quick testing, the PostGIS 2.2.0 release compiled against 9.5beta1 and pgRouting 2.1.0 worked fine on 9.5beta2; however, you may want to reinstall anyway just to be safe. You can just reinstall over your existing install, no need to uninstall first. Similarly, upgrading from PostgreSQL 9.5beta1 to 9.5beta2 did not seem to require pg_upgrade or a dump/restore, so it is safe to upgrade in place. Other notes about this 9.5beta2 PostGIS 2.2.0 release:

  • The FDW API changed between PostgreSQL 9.5beta1 and PostgreSQL 9.5beta2, so if you don't reinstall the bundle, the OGR_FDW (the PostGIS 2.2 beta1 executables) will crash and burn in PostgreSQL 9.5beta2. Similarly, this newly compiled OGR_FDW will not work on PostgreSQL 9.5beta1 (so upgrade to 9.5beta2 first).
  • The PostgreSQL 9.5 beta bundles (both beta1 and beta2) are compiled against the pointcloud 1.1 master branch. This was required because the released pointcloud 1.0.1 does not compile against PostgreSQL 9.5.
  • The PostgreSQL 9.5beta2 PostGIS 2.2.0 release comes packaged with SFCGAL 1.2.2 (instead of 1.2.0 like the other versions), which fixes a crasher with ST_StraightSkeleton as noted in ticket - https://trac.osgeo.org/postgis/ticket/3324. A newer SFCGAL will be packaged with the upcoming PostGIS 2.2.1, but if you are on an older edition and are using SFCGAL, you can always copy the latest SFCGAL.dll binaries from the 2.2.1dev packages on the PostGIS windows page http://postgis.net/windows_downloads/.

by Leo Hsu and Regina Obe (nospam@example.com) at November 22, 2015 06:47 AM

November 20, 2015

Geo Perspectives (Justin Lewis)

PostGIS Day 2015 (Denver) – Better than your average show and tell

 

Here in Denver we have a great meetup called GeoSpatial Amateurs that devoted (and aspiring) users and contributors of open source spatial tools frequently attend. For those who aren’t familiar with the group, don’t be fooled by the name: the use of “Amateurs” was strategic, meant to scare away hungry vendors who might want to encroach on the grass-roots open source feel of the meetup. All that aside, the meetup is often run by the ever charismatic Brian Timoney (@briantimoney), who with his co-organizers (Peter Batty @pmbatty and Nate Irwin @nateirwin) organized a ‘Post-GIS’ day… the day after GIS day. While a very clever play on names and timing, it also proved to be a great way to celebrate the amazing PostGIS database, FOSS4G, and geography in general. The structure was your common round of lightning talks running for about 1.5 hours. In all I think there were about 15 talks, all of which I learned something from.

 

Peter Batty (@pmbatty) – Talked about the use of PostGIS for large data systems and specifically the amazing performance of loading data with the PostgreSQL copy command.  He also got into comparing PostgreSQL/PostGIS with Oracle, which was pretty interesting and entertaining.

What I learned:

PostgreSQL is just better.

 

Matt Krusemark (@kspatial) – Talked about his history with PostGIS, how to bring it into an organization, and how he teaches core database techniques at CU Denver with PostGIS.

What I learned:

When you teach people the core concepts of a powerful system like PostgreSQL/PostGIS it can literally transform their career and therefore lives.  Open source is empowering.

 

Ted Quinby – Showed the crowd very useful techniques for analyzing PostgreSQL queries, and for using special operators to ensure that a query utilizes the correct indexes.

What I learned:

The newish <-> operator is amazing for fast, index-backed nearest-neighbor queries that return an ordered list of data. Here is a blog post on the topic if you want to know more.
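A minimal sketch of the pattern (the table, column, and coordinates are invented; note that on PostGIS releases before 2.2, the geometry <-> operator ordered by bounding-box centroid distance rather than true distance):

-- the <-> in ORDER BY lets the planner walk the spatial index,
-- returning the ten nearest features without scanning every row
SELECT gid, name
FROM poi
ORDER BY geom <-> ST_SetSRID(ST_MakePoint(-104.99, 39.74), 4326)
LIMIT 10;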

 

Jim McAndrews (@jimmyrocks) – Demonstrated materialized views in PostgreSQL and why they are so damn awesome for making database views of spatial data fast.  

What I learned:

I now better understand what a materialized view actually is and how to best utilize it.
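The pattern, as a sketch with invented table names: precompute an expensive spatial join once, index the result, and refresh it on your own schedule:

CREATE MATERIALIZED VIEW parcel_hoods AS
SELECT p.gid, p.geom, n.hood_name
FROM parcels p
JOIN neighborhoods n ON ST_Intersects(p.geom, n.geom);

-- index the materialized result so it queries like a real table
CREATE INDEX parcel_hoods_geom_idx ON parcel_hoods USING GIST (geom);

-- rerun whenever the source tables change
REFRESH MATERIALIZED VIEW parcel_hoods;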

 

Matt Baker (@MapBaker) – Demonstrated how he used PostGIS and QGIS to process data for maps that help local school districts analyze student demographics.  The resulting maps were really interesting and got a number of people around me talking about the demographic patterns they revealed.

What I learned:

I don’t know how I have gone so long not knowing about case statements in PostgreSQL but this was new to me.  Also, Denver appears to be a very segregated city in terms of student race/ethnicity.
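For anyone else who missed them, a tiny sketch of a CASE expression (the table and column names are invented):

SELECT school_name,
       CASE
         WHEN pct_nonwhite >= 75 THEN 'high'
         WHEN pct_nonwhite >= 50 THEN 'medium'
         ELSE 'low'
       END AS demographic_band
FROM school_demographics;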

 

Andy Mention (@amention) – Talked about how he constructs PostGIS queries with Ruby and ActiveRecord. In the end he showed a query running against 3.2 million records in 15 milliseconds.  Pretty damn fast.

What I learned:

Apparently you can get fast queries through tools that abstract SQL. Ruby just piqued my interest a little more.

 

Mamata Akella (@mamataakella) – Showed us some of the amazing cartography she is doing in her new role with CartoDB. She also demonstrated how we can use projections other than Web Mercator in CartoDB.

What I learned:

CartoDB is getting better and better at supporting beautiful custom cartography AND Mamata makes some of the most beautiful maps on the web.

 

Steve Coast (@SteveC) – That’s right… The founder of Open Street Map made an appearance! He promoted his new book about the history of Open Street Map.

What I learned:

His book is more about the history of the unique beginnings of Open Street Map than what OSM is or how to use it.  This should be interesting and can be purchased in digital or paper form.

 

Chris Ebright – Shared a great story of how difficult it was to work with different projections in various technologies, until he found out how amazingly easy it is in PostGIS.

What I learned:

The power of PostGIS can be a game changer for even the most basic of functions.

 

Chad Hammond – Demonstrated dynamic calculation of zones derived from points within certain distances of a POI using PostGIS and pgRouting.

What I learned:

pgRouting is surprisingly fast!

 

Mike Miller – Demonstrated how he used PostGIS as a back-end for an application running on both desktop and mobile devices.

What I learned:

PostGIS empowers application developers to expose powerful query functionality to the browser.

 

Brian Timoney (@briantimoney) – Talked about his early history with PostGIS and how he used it to provide useful solutions that work within his customers’ environments.

What I learned:

Brian has seen it all and is probably one of the best FOSS4G spokesmen.

 

Me (@jmapping) – Talked a little about why PostGIS stands out as an amazing spatial database.

What I learned:

Sometimes giving a presentation with physical props instead of digital media is much more fun and engaging.  Also, people get really excited by some of the functionality that PostGIS offers.

 

I’m missing at least half the speakers because I didn’t take notes.  If anyone wants to send me a message about who/what I am forgetting, please do, and I will update this page.

by jmapping at November 20, 2015 08:07 PM

November 18, 2015

CartoDB

Make a Thematic Map of Current Drought Conditions

header image

The most common maps built with CartoDB highlight a specific topic or theme of information. Cartographers refer to these as thematic maps. Typically, thematic maps have two components: a basemap and a thematic overlay.

The logical ordering (or visual hierarchy) of elements on thematic maps is especially important. The thematic layer should be highest in the visual hierarchy, designed with bold colors to stand out against the more subtly designed basemap.

Building on these ideas, and this previous discussion on map projections in CartoDB, this blog walks through the steps to make a thematic map of current drought conditions in the United States using data from the United States Drought Monitor.

All of the maps in this post are pannable and zoomable so as you go through the different steps, interact with them along the way!

Getting Started

First, we’ll design a simple basemap that is made up of a base layer and a reference layer. The purpose of the basemap is to provide our map readers with just enough geographic context to interpret the drought conditions around the country without distracting them from the overall pattern of drought.

Data

  • From your dashboard, click the option for NEW MAP
  • In the Add datasets dialog, click on Search and type in ‘states’
  • Look for the dataset titled “USA States” from Natural Earth (ne_50m_admin_1_states), highlight it and click CREATE MAP
Add State Data

Next, add the US states layer again. This is done so we can use one layer for the base and the second copy for the reference information placed on top of the drought monitor data as seen in the final map.

  • Expand the right hand layer panel and click Add Layer
  • From the “Add new layer” dialog, highlight ne_50m_admin_1_states and click the option to ADD LAYER
  • Rename the layers in your map to reference and base by double clicking on the layer name in the layer panel
Rename Layers

Labeling Attributes

The state labels on the final map transition from two-letter postal codes at zoom 4, to abbreviated names at zoom 5, to full state names at zooms 6 and larger.

Let’s take a look at the attributes for the reference layer to determine which attributes we’ll use for labeling:

  • From the reference layer, click on DATA VIEW
  • The three attributes we’ll use for state labeling through zoom are: postal (the two letter postal code for a state), abbrev (an abbreviated version of the state name), and name (the state’s full name)

Keep note of these attributes as we’ll use them in the SQL query we construct for the reference layer.

Reference Layer Attributes

Projection

Our final map uses the Albers Equal Area Conic projection centered on the contiguous United States (SRID 5070). This is a very common projection for thematic maps of the US. It is an equal-area projection, meaning areas are preserved (the distortion shows up in shape instead).

This projection is part of the default spatial_ref_sys table in your CartoDB account. If you have trouble accessing this SRID in your map, you can insert it into your spatial_ref_sys table using the instructions outlined in this blog.

SQL Queries

For the reference layer we’ll need to add two pieces to the SQL query: the projection information and the attributes that we need for labeling.

  • Open the SQL tray for the reference layer and either copy/paste or type in the following and then click Apply Query:
SELECT 
	ST_Transform(the_geom, 5070) 
AS 
	the_geom_webmercator,
	postal,
	abbrev,
	name
FROM 
	ne_50m_admin_1_states
Reference Layer SQL
  • Next, open the SQL tray for the base layer and copy/paste or type in the following and then click Apply Query:
SELECT 
	ST_Transform(the_geom, 5070) 
AS 
	the_geom_webmercator
FROM 
	ne_50m_admin_1_states

At this point, your map should look something like this:

Basemap Styling

Base Layer

First, let’s turn off Positron as the basemap and set the background to white:

  • Click the option Change Basemap in the bottom left hand corner of the map editor view
  • Next, choose the option for Custom
  • Select the white color chip (#FFFFFF)
Change Basemap

Next, color the US polygon (base) that will show everywhere there is not drought:

  • Open the styling editor for base
  • Click on CSS and copy and paste the following CartoCSS (or add your own!)
#ne_50m_admin_1_states {
  polygon-fill: #E2E4DC;
}

Reference Layer

State Lines

  • We’ll style state lines white, with a line width of 0.5px, and some transparency:
#ne_50m_admin_1_states {
  line-color: #FFF;
  line-width: 0.5;
  line-opacity: 0.8;
}
  • Next, add some zoom dependent styling to increase the line width at larger scales:
#ne_50m_admin_1_states {
  line-color: #FFF;
  line-width: 0.5;
  line-opacity: 0.8;

  //at a zoom level greater than or equal to 5
  //the line width changes to `1px`
  [zoom>=5]{line-width: 1;}
}

State Labels

The three attributes we are using for the state labels are postal, abbrev, and name. We’ll use zoom level conditions to determine when the labels turn on and zoom dependent styling to determine which version of the label draws when.

  • Add the CartoCSS for state labels inside of the existing reference block using the CartoCSS symbolizer ::label
#ne_50m_admin_1_states {
  line-color: #FFF;
  line-width: 0.5;
  line-opacity: 0.8;

  //at a zoom level greater than or equal to 5
  //the line width changes to `1px`
  [zoom>=5]{line-width: 1;}

  //state labels begin at zoom 4
  ::label[zoom>=4]{

    //states labeled with postal code to start out using a 9pt font size
    text-name: [postal];
    text-face-name: 'Lato Bold';
    text-fill: #7F858E;
    text-size: 9;

    //transform state labels to uppercase
    text-transform: uppercase;

    text-halo-fill: fadeout(#fff,70);
    text-halo-radius: 2;
    	
    //at zoom 5 change to abbreviated state label and increase the font size
    [zoom=5]{
	    text-name: [abbrev];
      text-size: 10;
    }
    
    //at zoom 6 change to the full name for state labels, 
    //increase the font size, 
    //and wrap text for longer state labels
    [zoom>=6]{
      text-name: [name];
      text-size: 11;
      text-wrap-width: 20;
    }

    //at zooms 7 and larger increase font size
    [zoom>=7]{
      text-size:12;
    }
  }	
}

Projected and styled, the finished basemap looks like this:

Drought Monitor Readings

Next, we’ll add the latest drought monitor readings to the map, compose the layer’s SQL query, and apply a style.

Data

  • From the US Drought Monitor site, right-click and copy the link address for the most recent readings in shapefile format
Drought Monitor Data
  • Back in your map, click the + sign on the right hand panel to Add Layer
Add Layer
  • From the Add a new layer dialog, click the option Connect Dataset
  • Right click and paste the data URL from the drought monitor site, click SUBMIT and then ADD LAYER
Connect Drought Data
  • Once added to the map, rename the layer to drought and drag and drop it in between the reference and base layers to define the correct drawing order
Layer Order

The drought monitor data will not be visible in the current map view because we still need to project it!

Before we do, let’s take a look at the attributes that we’ll need for styling:

  • Click on DATA VIEW
  • The attribute that we’ll use to color each polygon based on drought severity is dm

The values in the dm field range from 0-4 where 0 represents abnormally dry areas and 4, areas of exceptional drought as seen in the metadata and authoritative maps.

Drought Data Attributes

SQL Query

  • In the SQL tray for the drought layer, add the following query and click Apply Query
SELECT 
	ST_Transform(the_geom, 5070) 
AS 
	the_geom_webmercator,
	dm
FROM 
	usdm_20151103
Drought Data SQL

NOTE: if you download a different time period of drought monitor data, you will have to change the table name accordingly. This query assumes that your table name is usdm_20151103.

  • Back in MAP VIEW you should see the drought monitor data on your map drawn with default symbology sandwiched in between the reference and base layers

Drought Data Styling

  • Back in MAP VIEW click on the Wizard icon and symbolize the drought readings by CATEGORY based on the dm field
Symbolize by Category
  • For the final map, I chose a multi-hue sequential ramp that ranges from pale yellow to red
Color Ramp
  • We’ll define CartoCSS variables for each color and then assign them to the associated value in the drought data
  • Click on the CSS option for the drought layer to customize the default symbology in the CartoCSS Editor
  • Copy the CartoCSS for the color variables below (or assign your own colors!) and paste it at the top of the style sheet
@0: #FAEECA;
@1: #F7E4AB;
@2: #F7C791;
@3: #E99083;
@4: #CB6787;
Color Variables
  • Set the polygon-fill for each dm value to its associated color variable in the CartoCSS and click Apply Style
Color Variables CartoCSS

At this point, this is what our CartoCSS looks like:

/** category visualization */

@0: #FAEECA;
@1: #F7E4AB;
@2: #F7C791;
@3: #E99083;
@4: #CB6787;

#usdm_20151103 {
   polygon-opacity: 0.7;
   line-color: #FFF;
   line-width: 0.5;
   line-opacity: 1;
}

#usdm_20151103[dm=0] {
   polygon-fill: @0;
}
#usdm_20151103[dm=1] {
   polygon-fill: @1;
}
#usdm_20151103[dm=2] {
   polygon-fill: @2;
}
#usdm_20151103[dm=3] {
   polygon-fill: @3;
}
#usdm_20151103[dm=4] {
   polygon-fill: @4;
}

And this is what our map looks like:

Customize More

By default, the Editor assigns global styles to the layer. Since the drought monitor readings are the main focus of the map, we’ll remove the default polygon-opacity. Additionally, the default white lines make the drought data look more abrupt than continuous so we’ll remove that style as well.

  • To do this, delete the first block of CartoCSS (commented below) and click Apply Style
/** category visualization **/

@0: #FAEECA;
@1: #F7E4AB;
@2: #F7C791;
@3: #E99083;
@4: #CB6787;

//remove this block
//starting here
#usdm_20151103 {
   polygon-opacity: 0.7;
   line-color: #FFF;
   line-width: 0.5;
   line-opacity: 1;
}
//ending here

#usdm_20151103[dm=0] {
   polygon-fill: @0;
}
#usdm_20151103[dm=1] {
   polygon-fill: @1;
}
#usdm_20151103[dm=2] {
   polygon-fill: @2;
}
#usdm_20151103[dm=3] {
   polygon-fill: @3;
}
#usdm_20151103[dm=4] {
   polygon-fill: @4;
}

We now have our basemap and drought monitor data styled and are ready to put a couple final touches on the map!

On the map above, the labels in areas of exceptional drought (like California!) are hard to read against the darkest color in our palette. A slight adjustment you can make here is to add a white halo that has transparency by adding text-halo-fill: rgba(255,255,255,0.5) to the state label properties. Alternatively, you can adjust the text-fill to a color that you feel is more suitable.

I used a small transparent halo on my state labels:

Other map elements that I customized for the final map are:

  • a legend
  • an appropriate title
  • and a data source citation for the US Drought Monitor

You can check out a live version of the final map here.

BONUS!

My talented co-worker, UI Developer and Designer, Emilio made a template that can be used to compare drought readings over three different time periods. This example shows maps of drought severity for the same week in November in 2015, 2014, and 2013. You can find the template here.

Happy thematic mapping!

November 18, 2015 10:00 AM

October 30, 2015

CartoDB

Free Your Maps from Web Mercator!

header image

Most maps that we see on the web use the Web Mercator projection. Web Mercator gained its popularity because it provides an efficient way for a two-dimensional flat map to be chopped up into seamless 256x256 pixel map tiles that load quickly into the rectangular shape of your browser.

If you asked a cartographer which map projection you should choose for your map, most of the time the answer would not be Web Mercator. What projection you choose depends on your map’s extent, the type of data you are mapping, and as with all maps, the story you want to tell.

Well, get excited because with a few lines of SQL in CartoDB, you can free your maps from Web Mercator!

This blog covers how you can go from the standard Web Mercator:

To Albers Equal Area Conic (a popular choice for thematic maps of North America):

Projections in CartoDB

Every CartoDB account comes with a set of default projections. Even if the projection you are looking for isn’t in the default set, no problem! In a few steps, you can start using nearly any projection you want for your web maps.

Even better, this is all done on the fly and you can project the same dataset in multiple ways for different maps.

For a more detailed discussion on projections, see this tutorial.

Getting Started

Let’s start by accessing the list of default projections available in CartoDB:

  • From your account open any SQL tray, and type in
SELECT * FROM spatial_ref_sys
Spatial Reference
  • Next, click Apply Query

You will see a table load with all of the projections that you can use for your maps. Take some time to sort through the table to see what is available. As mentioned before, even if you don’t see the projection you are looking for, it’s OK!

Spatial Reference

Adding a Projection

Our final map is in Albers Equal Area Conic centered on North America. I know this projection isn’t in the default list, so let’s add it.

To add a projection, we need to insert its definition into the spatial_ref_sys table. There are a couple of great websites out there where you can copy and paste the definition that you need. Two of the ones that I’ve found most useful are spatialreference.org and epsg.io.

  • In a web browser go to epsg.io
  • In the search bar, type Albers Equal Area
Search for Albers
  • Scroll to the bottom of the first page and click on the link for North America Albers Equal Area Conic
  • Under the Export list on the left hand side of the page, click on PostGIS and copy the projection definition text
Copy PostGIS Text
  • Back in your CartoDB dashboard, paste the definition text into a SQL tray and click Apply Query (a sketch of what that pasted INSERT looks like follows this list)
Insert Spatial Reference
  • Your projections table is now updated with North America Albers Equal Area Conic (SRID 102008)
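For reference, the PostGIS export you paste is an INSERT into spatial_ref_sys along these lines (the proj4 parameters shown are the usual ones for this projection, but the srtext is truncated here, so copy the full definition from the site rather than from this sketch):

INSERT INTO spatial_ref_sys (srid, auth_name, auth_srid, proj4text, srtext)
VALUES (
  102008, 'ESRI', 102008,
  '+proj=aea +lat_1=20 +lat_2=60 +lat_0=40 +lon_0=-96 +x_0=0 +y_0=0 +datum=NAD83 +units=m +no_defs',
  'PROJCS["North_America_Albers_Equal_Area_Conic", ... ]'
);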

Let’s Make a Map!

Now that we have added the projection definition to CartoDB, we can use its SRID to project any data layers on the fly. In this example, we’ll use two datasets from Natural Earth (land and ocean) that are available in CartoDB’s Data Library.

  • From your Maps dashboard, click the option NEW MAP
  • In the Add Dataset window, we’ll search for available Natural Earth datasets by typing ne_50m into the data search bar
  • From the available list, highlight Land (ne_50m_land) and Ocean (ne_50m_ocean) and click the option to CREATE MAP
Add Data
  • Next, we’ll project each layer using ST_Transform and the projection’s SRID
Project Layers
  • Copy/paste or type in the following query into each layer’s SQL tray and click Apply Query

ne_50m_land

SELECT 
	ST_Transform(the_geom, 102008)
AS 
	the_geom_webmercator
FROM 
	ne_50m_land

ne_50m_ocean

SELECT 
	ST_Transform(the_geom, 102008)
AS 
	the_geom_webmercator
FROM 
	ne_50m_ocean

The land and ocean datasets should now be projected and your map probably looks something like this:

Ok! Let’s add some final touches to the map.

  • Since we are using the Albers projection centered on North America, let’s zoom our map to focus on that part of the world
  • Next, we’ll remove Positron as our basemap and instead use a white background. To do this click on the option to Change Basemap and choose white (#FFFFFF) for your Custom color
Change Basemap
  • As a final design touch, change the color of the land and ocean. If you would like to use similar colors to the final map, here is the CartoCSS:

ne_50m_land

#ne_50m_land {
  polygon-fill: #98B087;
}

ne_50m_ocean

#ne_50m_ocean {
  polygon-fill: #B8D0CB;
}

And here is our final map! (If you would like to add graticule lines to your map you can download them from Natural Earth, and add the ST_Transform SQL from above.)

Coming Soon

In the coming weeks, look for more detailed blog posts going over some advanced cartographic effects on a variety of maps… most of which are NOT Web Mercator!

Other Projections and Additional Resources

And for fun, here are some other projections that you might like to use in your maps. This CartoDBlock has a more detailed overview with links to the projection text and SQL examples.

North Pole Azimuthal Equidistant

World Bonne

Lambert Conformal Conic centered on Asia

Winkel Tripel

We’ll be hosting a CartoCamp here at the CartoDB offices in NYC on Tuesday, November 10th. Join us to learn more about advanced mapping techniques (including projections) with CartoDB! You can sign up here.

Happy Mapping!

October 30, 2015 10:00 AM

Geo Perspectives (Justin Lewis)

Creating a Node.js & PostGIS App on OpenShift

This is the first part of what I plan to be a series on setting up an app built on PostGIS, Node.js, Leaflet.js, and D3.js to visualize the routes a climber has done (ticks) and those that they want to do (todos). Specifically, the idea is to show where those routes are on the earth, with some interactivity to help better understand details about a climber’s past ticks and future todos.  For this post we will ignore the map and just focus on getting Node and PostGIS on OpenShift.

To make sure everyone is up to speed I figure I’ll go over some questions that might come up on this topic.

1.  What is Node.js?

Answer:  A runtime for running JavaScript from server to client. It’s a fairly disruptive concept, but as I’m learning, it has some nice advantages that we won’t get into here.

2.  What is PostGIS?

Answer:  PostGIS is the amazing spatial extension to PostgreSQL; together they make the best damn spatial database that exists today.

3.  What are Leaflet.js and D3.js?

Answer: Leaflet is a really nice mapping JavaScript library for putting dynamic maps on the web. D3.js is a really powerful visualization library that excels at making customized web visualizations.  Both are damn awesome.

4.  What is OpenShift?

Answer:  OpenShift is Red Hat’s Platform as a Service (PaaS) solution for hosting web applications.  It makes life really easy because OpenShift manages all the infrastructure around your app while also incorporating a bunch of really handy tools.  It’s also really cheap for a simple, non-resource-hungry app.  OpenShift is also damn awesome.

5.  Why am I writing about this topic?

Answer: There just wasn’t enough on the internet to easily answer my simple questions, so I thought I would add my experience. There are a couple of write-ups on the internet covering this topic, but the primary OpenShift blog post was broken at the time.

What you would need if you were trying to set this up as well

  1. A free (or paid if ya like) OpenShift account.
  2. To set up OpenShift client tools.
  3. Node.js installed on your computer.
  4. PostgreSQL and PostGIS installed on your computer (if you want to test locally).
  5. Git installed on your computer.
  6. Some JavaScript skillz (yes, with a ‘z’).

 

2 Ways to set up an initial environment

  1. Using OpenShift’s instant app functionality is probably the easiest and fastest way to get started.
  2. Of course, you can also convert any Node.js app to OpenShift with a couple easy modifications.

Lets cover both methods just for fun.

Method #1 – OpenShift’s Handy Instant App

1.  After setting up an account with OpenShift, create a new Node.js app following the basic instructions.  There are lots of resources online for doing this, using either OpenShift’s command line tools, the OpenShift web interface, or both.  Here is an official doc from OpenShift on the topic.

NOTE: Leave the “Source Code” field blank if using the web interface. See Method #2 below for more details.

2.  Now that you have an app created you can add a PostgreSQL database to it… This is amazingly easy.  Simply go to your app’s main page on the OpenShift website and click the “Add PostgreSQL x.x” button in the databases section below your app’s cartridge section. This will ask if you want to add the PostgreSQL cartridge, to which you will click “yes”.

  • Take note of your PostgreSQL credentials.  Those are important.

3.  Getting PostGIS installed is easier than it used to be, but you still have to SSH into your app. You can find the SSH command on your app’s home page on the OpenShift website. This is what my SSH command looks like:

ssh 90840-YOURSECRETKEY-080e8r@YOURAPPNAME-YOUROPENSHIFTDOMAIN.rhcloud.com

Once you are SSH’d into your app you have access to psql, the primary terminal-based client for interacting with PostgreSQL. Type ‘psql’ in the terminal. The prompt should change to something like ‘APPNAME=#’. That’s good; it means you are at the psql prompt.  Now simply type:

CREATE EXTENSION postgis;

The terminal should stall for a couple seconds and then output ‘CREATE EXTENSION’.
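To double-check that the extension really took, you can ask PostGIS for its version string while you’re still at the psql prompt:

SELECT postgis_full_version();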

 

Congratulations! You just setup a new Node.js app with a PostGIS database!

 

4.  Now that you’ve created your first OpenShift Node app, you can clone the Git repo that OpenShift requires you to use as a code repository.  This is what the clone command looks like when typed in the terminal:

git clone ssh://5632ea3d2d5271fc9c000028@junk-maplabs.rhcloud.com/~/git/THENAMEOFYOURAPP.git/

Why clone a repo you might ask?

Answer:  OpenShift relies on the wildly popular Git as the version control system for your application’s source code.  They do some amazing things to make your life easier, like managing your app’s infrastructure, allowing simple integration of build tools (like Jenkins), and letting you redeploy an app with a ‘git push’ to your remote OpenShift repo.  I love that I rarely have to SSH into my app to pull new source or perform other menial tasks.

This repo is fully configured to run on OpenShift. All you have to do now is make any desired code changes and push them to OpenShift via Git. This means deploying your changes to your app is as simple as:

git push origin master

  • When running this command you should see all kinds of fun messages, like notices that your app and database cartridges are restarting.

 

Method #2 – Adding an existing Node.js app to OpenShift

1.  Step one for this method is very similar to method #1, except that you must use the option to add an existing Git repo to the application during the setup process. I haven’t done this via the terminal, but in the web interface you enter the URL of your Git repository in the “Source Code” field. You can optionally specify a branch.

2. Steps 2, 3, and 4 are the same as in method #1, so follow those steps to add a PostgreSQL database, add the PostGIS extension, and clone the OpenShift Git repo.

3. In my experience there were really only a couple of changes needed to get my Node.js app working with this method.  They primarily have to do with the fact that OpenShift will not modify existing files or create certain configuration files for you.  There is some documentation on this, but I will outline my steps here for the sake of clarity.

  1. Make sure there is a package.json file in the root of the project. From what I can tell, having one is standard for Node, so this step is basically a gimme.  However, my default Node configuration was different, so I changed the ‘scripts.start’ and ‘main’ options to reference the correct files. You can see my whole package.json file below.
  2. Add some OpenShift-specific settings to the top of your main app configuration file (mine is called server.js). This is needed because OpenShift uses environment variables in your app environment to set the port and IP address for your app.  The example code below will use port 8080 and a localhost address (127.0.0.1) if the OpenShift environment variables don’t exist. This is useful for not having to change code between your dev and production environments.  You can see my example below.

 

server.js OpenShift specific configuration example

var express = require('express');
var path = require('path');

// Create the Express app -- without this line, the app.set and
// app.listen calls below have nothing to operate on.
var app = express();

// OpenShift injects the port and IP through environment variables;
// fall back to localhost defaults when running locally.
var server_port = process.env.OPENSHIFT_NODEJS_PORT || 8080;
var server_ip_address = process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1';

app.set('port', process.env.OPENSHIFT_NODEJS_PORT || process.env.PORT || 8080);
app.set('ip', process.env.OPENSHIFT_NODEJS_IP || "127.0.0.1");

app.listen(server_port, server_ip_address, function () {
  console.log( "Listening on " + server_ip_address + ", server_port " + server_port )
});

...

 

package.json example:

{
  "name": "APPNAME",
  "version": "0.0.0",
  "private": true,
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "body-parser": "~1.13.2",
    "cookie-parser": "~1.3.5",
    "debug": "~2.2.0",
    "express": "~4.13.1",
    "hjs": "~0.0.6",
    "morgan": "~1.6.1",
    "serve-favicon": "~2.3.0"
  }
}

 

That’s all, folks! I think next I will cover the basics of using your new database, but this is enough for now. I hope this helped some of you stumble a little less than I did.

by jmapping at October 30, 2015 05:31 AM

October 25, 2015

PostGIS Development

Why can't I do CREATE EXTENSION postgis_sfcgal

PostGIS 2.2.0 came out this month, and SFCGAL, which offers advanced 3D and volumetric support in addition to some extended 2D functions like ST_ApproximateMedialAxis, became a standard PostgreSQL extension and seems to be fairly popular.

I’ve seen several reports on GIS Stack Exchange of people trying to install PostGIS SFCGAL and getting things such as

ERROR: could not open extension control file /Applications/Postgres.app/Contents/Versions/9.5/share/postgresql/extension/postgis_sfcgal.control

as in this question: SFCGAL in PostGIS problem.

There are two main causes for this:

Continue Reading by clicking title hyperlink ..

by Regina Obe at October 25, 2015 12:00 AM

October 18, 2015

Boston GIS (Regina Obe, Leo Hsu)

PostGIS 2.2.0 windows bundle is now on Application Stackbuilder includes pgRouting 2.1.0

The PostGIS 2.2.0 bundles for windows for PostgreSQL 9.3-9.4 (both 32/64-bit) and PostgreSQL 9.5beta1 64-bit are now on stackbuilder. I'm working out some compile issues for the PostgreSQL 9.5beta1 32-bit, so I don't have those ready yet. The bundle includes pgRouting 2.1.0 and the inaugural windows releases of the PostGIS SFCGAL extension, the OGR FDW spatial foreign data wrapper, and pgPointcloud.

I have also updated our popular BostonGIS - Part 1: Getting Started With PostGIS: An almost Idiot's Guide to utilize PostgreSQL 9.5beta1 and PostGIS 2.2. Still on our todo list is updating the loader/query example and the pgRouting tutorial for pgRouting 2.1.0. A ton has changed in pgRouting 2.1.0, so be sure to buy our book, pgRouting: A Practical Guide, which will have a preview-release sale coming soon with the first 4-5 draft chapters available, and all chapters (approximately 10-12) on final publication.

Check out the PostGIS Bundle 2.2 extensions highlighted in yellow in the pgAdmin extensions drop-down

To take full advantage of what PostGIS 2.2 has to offer (notably, true KNN distance for geometry, geography, and 3D geometries -- not just the bounding-box distance available for PostgreSQL 9.4 and below), you need to use PostgreSQL 9.5. So start testing your apps on PostgreSQL 9.5 now so you will be ready when it hits release.

by Regina Obe (nospam@example.com) at October 18, 2015 02:37 AM

October 07, 2015

PostGIS Development

2.2.0 Released!

PostGIS 2.2.0 is released! Over the last two years a number of interesting new features have been added.

See the full list of changes in the news file and please report bugs that you find in the release.

Binary packages will appear in repositories over the coming weeks as packagers roll out builds.

View all closed tickets for 2.2.0.

by Paul Ramsey at October 07, 2015 12:00 AM

September 30, 2015

CartoDB

Flying Around the World with CartoDB

As a child, nothing was more exciting to me than a chance to ride on an airplane. And after enjoying playing with the seatbelt buckle and feeling the crazy push of take-off acceleration, I would usually settle in and page to the back of the in-flight magazine where the airline route maps were: where were we going today, and where could we go tomorrow?

Destination Map

We can build route maps for any city in the world using airport and route data from OpenFlights.org. Start by uploading the airports.csv and routes.csv files into CartoDB.

We can see every destination available starting from Vancouver, Canada (airport code “YVR”) by making some custom SQL to join the airports table to the routes table and restricting to just the “YVR” routes:

SELECT a2.cartodb_id, a2.the_geom_webmercator, a2.city, r.airline
FROM airports a1
JOIN routes r ON r.airport_st = a1.code_iata
JOIN airports a2 ON r.airport_end = a2.code_iata
WHERE a1.code_iata = 'YVR' AND r.codeshare IS NULL

That’s the data we want, but without the flight lines it lacks a sense of movement.

Simple Route Map

Our query is already joining the airports twice: once for the origin and once for the destination airport, so we can turn the end points into a line very easily using the ST_MakeLine() function:

SELECT a2.cartodb_id,
 a2.city, r.airline,
 ST_Makeline(a2.the_geom_webmercator, a1.the_geom_webmercator) as the_geom_webmercator
FROM airports a1
JOIN routes r ON r.airport_st = a1.code_iata
JOIN airports a2 ON r.airport_end = a2.code_iata
WHERE a1.code_iata = 'YVR' AND r.codeshare IS NULL

That looks much better! But there’s something wrong about this map – actually two things wrong.

First, the routes are all straight lines, and they should be great circle routes, that’s how the airplanes fly!

Second, some of the routes go the wrong way around the world: no airline would fly from Vancouver to Sydney via Africa!

Great Circle Route Map

If we convert our lines into great circle routes, we can maybe kill both of these birds with one stone, since the great circle routes will go the right direction.

SELECT a2.cartodb_id, 
 a2.city AS city, r.airline,
  ST_Transform(
     ST_Segmentize(
         ST_Makeline(
           a2.the_geom, 
           a1.the_geom
         )::geography, 
         100000
     )::geometry,
     3857
   ) as the_geom_webmercator
FROM airports a1
JOIN routes r ON r.airport_st = a1.code_iata
JOIN airports a2 ON r.airport_end = a2.code_iata
WHERE a1.code_iata = 'YVR' AND r.codeshare IS NULL

This is a bit complex, but reading the nested functions outwards starting from the ST_MakeLine(), we:

  • Cast the point-to-point line to the “geography” type, which understands edges (also known as connections between nodes) as great circles; then
  • Use the ST_Segmentize(geography) function to add points to the line along the great circle (so when we put it back on the flat map, it’ll appear curved); then
  • Cast the line back into the “geometry” type we use for flat mapping; and finally
  • Transform the coordinates of the line using ST_Transform(geometry, srid) to the web mercator projection we use on our flat maps.

The end result is really, really close!

But what is going on with those horizontal lines?

Great Circle Route Map with Dateline Fix

There’s a gap, right where the horizontal line appears.

Everything is fine until an edge on the great circle route tries to cross the dateline. Then it zings around the world in order to hook up to the next edge. Fundamentally our map still does not understand the circularity of the world, even though the edges we built do understand it. We have to work around the limitations of the flat map, by chopping our data at the dateline to avoid having edges that cross it.

-- First build our lines just as before, this can be any raw data you
-- need to feed into a dateline chopping process
WITH lines AS (
  SELECT a2.cartodb_id, 
  a2.city, r.airline,
  ST_Segmentize(ST_MakeLine(a2.the_geom, a1.the_geom)::geography,100000)::geometry as the_geom
  FROM airports a1
  JOIN routes r ON r.airport_st = a1.code_iata
  JOIN airports a2 ON r.airport_end = a2.code_iata
  WHERE a1.code_iata = 'YVR' AND r.codeshare IS NULL
),
-- Now break the input data into two sets, one to split and one to leave 
-- unprocessed. Objects that cross the dateline will appear to be very wide 
-- (as they zing across the world) so we'll only chop features that are very 
-- wide. This is just for efficiency.
tosplit AS (
  SELECT * FROM lines 
  WHERE ST_XMax(the_geom) - ST_XMin(the_geom) > 180
),
-- Narrow objects we'll leave un-chopped.
nosplit AS (
  SELECT * FROM lines 
  WHERE ST_XMax(the_geom) - ST_XMin(the_geom) <= 180
),
-- In order to chop the objects we need to get them into a space where they make
-- "sense" so we shift any vertex that is < 0 to the right by 180 units, 
-- effectively building a map ranging from 0 to 360 units centered around the 
-- dateline, instead of the usual map from -180 to 180 centered at Greenwich.
-- Then if we split at 180, we get a nice cut, just where we want it.
-- We split by removing a very narrow gap from the objects, centered at 180, wide
-- enough so that we don't get any wee rounding errors at the dateline.
split AS (
  SELECT 
    cartodb_id,
    city, airline,
    ST_Difference(ST_Shift_Longitude(the_geom), 
                  ST_Buffer(ST_GeomFromText('LINESTRING(180 90, 180 -90)',4326),
                            0.00001)) AS the_geom
  FROM tosplit
),
-- Merge the split features back with the unprocessed features.
final AS (
  SELECT * FROM split
  UNION ALL
  SELECT * FROM nosplit
)
-- Transform them all into web mercator for mapping. 
-- The web mercator map projection handles the longitudes > 180 as if they were 
-- negative longitudes, so there is no need to convert them back.
SELECT 
  cartodb_id,
  city, airline,
  ST_Transform(the_geom,3857) AS the_geom_webmercator
FROM final

And it works!

Of course, this is a route map of all flights leaving Vancouver (YVR), so it’s not exactly the kind of map you’d find in an in-flight magazine. However, it’s easy to build such a map, just by changing the set of input airports we use to build the routes.

Where the existing query says:

WHERE a1.code_iata = 'YVR' AND r.codeshare IS NULL

Replace the airport filter with an airline filter of “AC” to get an Air Canada route map:

WHERE r.airline = 'AC' AND r.codeshare IS NULL

Or try “UA” for a United map, or “DL” for a Delta map.

Happy flying!

September 30, 2015 10:45 AM

September 27, 2015

Postgres OnLine Journal (Leo Hsu, Regina Obe)

Connecting to SQL Server from Linux using FDWs

There are two PostgreSQL FDWs (currently maintained) that I know of for connecting to SQL Server from a Linux/Unix PostgreSQL box. The first is the TDS foreign data wrapper (tds_fdw), which relies on the FreeTDS driver. This is a fairly light-weight FDW, since FreeTDS is commonly already available on Linux installs or an easy install away. Unfortunately, when I tried to use it on Windows (compiling my usual mingw64 way), it compiled and installed but crashed when I attempted to connect to a table on my SQL Server 2008 R2 box, so I gave up on it for the time being as a cross-platform solution. One thing I will say for it is that, from what I can see, it accepts ad-hoc queries as a data source, which is pretty nice, so we may revisit it in the future to see if we can get it to work on Windows. I'm not sure whether tds_fdw supports SQL Server spatial geometry columns, though it would be interesting to try.
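
For flavor, a minimal tds_fdw setup looks something like the sketch below. The host, credentials, database, and table names are all made up, and the exact option names (e.g. table vs. table_name) vary between tds_fdw versions, so treat this as a rough outline rather than gospel.

CREATE EXTENSION tds_fdw;

-- Hypothetical SQL Server host and database
CREATE SERVER mssql_svr
  FOREIGN DATA WRAPPER tds_fdw
  OPTIONS (servername 'mssql.example.com', port '1433', database 'mydb');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER mssql_svr
  OPTIONS (username 'sqluser', password 'sqlpass');

-- Bind a foreign table to a SQL Server table; tds_fdw can also take
-- an ad-hoc query as the data source instead of a table name
CREATE FOREIGN TABLE mssql_orders (
  order_id   integer,
  order_date timestamp
)
SERVER mssql_svr
OPTIONS (table_name 'dbo.orders');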

The second option, which as you may have noticed we have spent much time talking about, is the ogr_fdw foreign data wrapper. ogr_fdw utilizes unixODBC on Linux, iODBC on Mac OS X, and Windows ODBC on Windows for connecting to SQL Server. The big downside of ogr_fdw is its dependency on GDAL, a hefty FOSS swiss-army-knife ETL tool that is a staple of all sorts of spatial folks doing both open source and proprietary development. The good thing about ogr_fdw is that, since it is a spatial driver, it knows how to translate SQL Server geometry to its equivalent PostGIS form, in addition to being able to handle most of the other not-so-spatial columns.
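
By way of comparison, here is a rough ogr_fdw sketch for the same kind of connection. The DSN, credentials, columns, and layer name are made up; the datasource string uses GDAL's ODBC connection syntax (ODBC:user/password@DSN).

CREATE EXTENSION ogr_fdw;

-- Hypothetical ODBC DSN pointing at the SQL Server database.
-- GDAL's MSSQLSpatial driver ('MSSQL:server=...;database=...') is an
-- alternative datasource with fuller geometry support.
CREATE SERVER mssql_ogr
  FOREIGN DATA WRAPPER ogr_fdw
  OPTIONS (datasource 'ODBC:sqluser/sqlpass@mssql_dsn', format 'ODBC');

-- Expose one SQL Server table; a geometry column should come across
-- as PostGIS geometry since ogr_fdw is spatially aware
CREATE FOREIGN TABLE mssql_parcels (
  fid       bigint,
  parcel_id varchar,
  geom      geometry
)
SERVER mssql_ogr
OPTIONS (layer 'parcels');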


Continue reading "Connecting to SQL Server from Linux using FDWs"

by Leo Hsu and Regina Obe (nospam@example.com) at September 27, 2015 06:38 AM

September 24, 2015

PostGIS Development

2.2.0rc1 Available

This is it, PostGIS 2.2 is almost out the door, so we’re looking for testing and feedback! Please give the release candidate a try and report back any issues you encounter.

View all closed tickets for 2.2.0.

by Paul Ramsey at September 24, 2015 12:00 AM

September 21, 2015

Boston GIS (Regina Obe, Leo Hsu)

PostGIS and extended SFCGAL and pgRouting family spotted in Seoul

I missed Seoul, but I spotted these familiar looking family faces in Jody Garnett's picture deck.

Ko Nagase and Vicky Vergara: pgRouting (from KYUHO SHIM's photos)
Laughter at FOSS4G
Hiroaki Sengoku and Vicky Vergara: pgRouting
Height Difference
Vicky Vergara and Vincent Picavet: pgRouting and SFCGAL
Olivier Courtin: PostGIS and SFCGAL
Daniel Kastl: pgRouting

And last but not least - the Lone Ranger
Paul looking toward the Cloud
Paul Ramsey: PostGIS

by Regina Obe (nospam@example.com) at September 21, 2015 01:50 PM

Simon Greener

Vectorization: Exploding a linestring or polygon into individual vectors in PostGIS [9]

I find having a vectorising function in my database kit-bag almost indispensable in my work. I have written and revised my Oracle PL/SQL functions that do this many times over the years and so, when I first started using PostGIS, I decided to learn how to program in PL/pgSQL by implementing something familiar: a GetVector() function.

Now, my PostgreSQL/PostGIS is not bad, but I am always willing to learn from others who are more expert than me: so don’t take the following function (which, for example, doesn’t handle Curves, as they are still in their infancy in PostGIS) as being perfect or correct in every manner.

Firstly, we need to create two data types that we can use.

A Coordinate Type

CREATE TYPE coordtype AS (
    x double precision,
    y double precision,
    z double precision,
    m double precision
);
Query returned successfully with no result in 78 ms.

A Vector Type based on the Coordinate Type

CREATE TYPE vectortype AS (
    startcoord coordtype,
    endcoord coordtype
);
Query returned successfully with no result in 16 ms.

What I tried to do in the following ST_GetVector() function is handle all geometry types in the one function. Initially I used one cursor per geometry type (and a single refCursor for processing) but in this implementation I have chosen to use a single cursor with UNION ALLs between the SQL that processes the vertices of each geometry type. I am hoping that the PostgreSQL query optimiser does its equivalent of Oracle’s partition pruning. Perhaps someone may test this aspect and report if it is not performing as fast as one would hope.


CREATE OR REPLACE FUNCTION ST_GetVector(p_geometry geometry)
  RETURNS SETOF VectorType IMMUTABLE
AS $$
DECLARE
  v_GeometryType varchar(1000);
  v_rec          RECORD;
  v_vector       VectorType;
  v_start        CoordType;
  v_end          CoordType;
  -- One cursor for all geometry types: each UNION ALL branch handles one
  -- type and is switched on or off by its ST_GeometryType() predicate.
  c_points CURSOR ( p_geom geometry ) IS
    SELECT ST_X(sp) as sx, ST_Y(sp) as sy, ST_Z(sp) as sz, ST_M(sp) as sm,
           ST_X(ep) as ex, ST_Y(ep) as ey, ST_Z(ep) as ez, ST_M(ep) as em
      FROM (SELECT ST_PointN(p_geom, generate_series(1, ST_NPoints(p_geom)-1)) as sp,
                   ST_PointN(p_geom, generate_series(2, ST_NPoints(p_geom)  )) as ep
             WHERE ST_GeometryType(p_geom) = 'ST_LineString'
            UNION ALL
            SELECT ST_PointN(b.geom, generate_series(1, ST_NPoints(b.geom)-1)) as sp,
                   ST_PointN(b.geom, generate_series(2, ST_NPoints(b.geom)  )) as ep
              FROM (SELECT ST_GeometryN(p_geom, generate_series(1, ST_NumGeometries(p_geom))) as geom
                     WHERE ST_GeometryType(p_geom) = 'ST_MultiLineString') as b
            UNION ALL
            SELECT ST_PointN(a.geom, generate_series(1, ST_NPoints(a.geom)-1)) as sp,
                   ST_PointN(a.geom, generate_series(2, ST_NPoints(a.geom)  )) as ep
              FROM (SELECT ST_ExteriorRing(p_geom) as geom
                    UNION ALL
                    SELECT ST_InteriorRingN(p_geom, generate_series(1, ST_NumInteriorRings(p_geom))) as geom
                   ) a
             WHERE ST_GeometryType(p_geom) = 'ST_Polygon'
            UNION ALL
            SELECT ST_PointN(a.geom, generate_series(1, ST_NPoints(a.geom)-1)) as sp,
                   ST_PointN(a.geom, generate_series(2, ST_NPoints(a.geom)  )) as ep
              FROM (SELECT ST_ExteriorRing(b.geom) as geom
                      FROM (SELECT ST_GeometryN(p_geom, generate_series(1, ST_NumGeometries(p_geom))) as geom) as b
                    UNION ALL
                    SELECT ST_InteriorRingN(c.geom, generate_series(1, ST_NumInteriorRings(c.geom))) as geom
                      FROM (SELECT ST_GeometryN(p_geom, generate_series(1, ST_NumGeometries(p_geom))) as geom) as c
                   ) a
             WHERE ST_GeometryType(p_geom) = 'ST_MultiPolygon'
           ) as f;
BEGIN
  IF ( p_geometry IS NULL ) THEN
    RETURN;
  END IF;
  v_GeometryType := ST_GeometryType(p_geometry);
  -- RAISE NOTICE 'ST_GeometryType %', v_GeometryType;
  IF ( v_GeometryType IN ('ST_Point','ST_MultiPoint','ST_Geometry') ) THEN
    RETURN;  -- points have no vectors
  END IF;
  IF ( v_GeometryType IN ('ST_GeometryCollection') ) THEN
    -- Recurse into each member of the collection
    FOR v_geom IN 1..ST_NumGeometries(p_geometry) LOOP
      FOR v_rec IN SELECT * FROM ST_GetVector(ST_GeometryN(p_geometry, v_geom)) LOOP
        RETURN NEXT v_rec;
      END LOOP;
    END LOOP;
  ELSE
    OPEN c_points(p_geometry);
    LOOP
      FETCH c_points INTO v_start.x, v_start.y, v_start.z, v_start.m,
                          v_end.x,   v_end.y,   v_end.z,   v_end.m;
      EXIT WHEN NOT FOUND;
      v_vector.startcoord := v_start;
      v_vector.endcoord   := v_end;
      RETURN NEXT v_vector;
    END LOOP;
    CLOSE c_points;
  END IF;
END;
$$ LANGUAGE 'plpgsql';

Query returned successfully with no result in 156 ms.
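
As a quick sanity check on the pruning question above, here is a sketch of mine (not from the original article): plan one of the cursor's UNION ALL branch patterns with a literal geometry of the "wrong" type. Because ST_GeometryType() is immutable, the planner can fold the branch's WHERE clause to a constant at plan time, so a dead branch should show up in EXPLAIN as a One-Time Filter: false node that is never executed.

-- Plan a "dead" branch with a literal LineString; the WHERE clause should
-- fold to false at plan time, so the branch is never executed.
EXPLAIN
SELECT ST_PointN(geom, generate_series(1, ST_NPoints(geom)-1)) AS sp,
       ST_PointN(geom, generate_series(2, ST_NPoints(geom)))   AS ep
  FROM (SELECT 'LINESTRING(0 0,1 1,2 2)'::geometry AS geom) AS g
 WHERE ST_GeometryType(geom) = 'ST_Polygon';  -- wrong type on purpose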

Now, so that we can use this function in nested table SELECT statements, we construct a “wrapper”.

CREATE OR REPLACE FUNCTION ST_GetVectorSQL(p_geometry geometry)
  RETURNS SETOF VectorType IMMUTABLE 
AS
$$
	SELECT * FROM ST_GetVector($1);
$$
LANGUAGE 'sql';
Query returned successfully with no result in 31 ms.

Now let’s conduct some simple tests.

select * from ST_GetVector('GEOMETRYCOLLECTION(POINT(2 3 4),LINESTRING(2 3 4,3 4 5))'::geometry);

 startcoord (coordtype) | endcoord (coordtype)
------------------------+----------------------
 (2,3,4,)               | (3,4,5,)

select * from ST_GetVector('LINESTRING(0 0, 1 1, 2 2, 3 3)'::geometry) as geom;

 startcoord (coordtype) | endcoord (coordtype)
------------------------+----------------------
 (0,0,,)                | (1,1,,)
 (1,1,,)                | (2,2,,)
 (2,2,,)                | (3,3,,)

select * from ST_GetVector('MULTILINESTRING((0 0,1 1,1 2),(2 3,3 2,5 4))'::geometry) As GV;

 startcoord (coordtype) | endcoord (coordtype)
------------------------+----------------------
 (0,0,,)                | (1,1,,)
 (1,1,,)                | (1,2,,)
 (2,3,,)                | (3,2,,)
 (3,2,,)                | (5,4,,)

select * from ST_GetVector('POLYGON((326454.7 5455793.7,326621.3 5455813.7,326455.4 5455796.6,326454.7 5455793.7))'::geometry);

 startcoord (coordtype) | endcoord (coordtype)
------------------------+------------------------
 (326454.7,5455793.7,,) | (326621.3,5455813.7,,)
 (326621.3,5455813.7,,) | (326455.4,5455796.6,,)
 (326455.4,5455796.6,,) | (326454.7,5455793.7,,)

select * from ST_GetVector('MULTIPOLYGON(((326454.7 5455793.7,326621.3 5455813.7,326455.4 5455796.6,326454.7 5455793.7)),((326771.6 5455831.6,326924.1 5455849.9,326901.9 5455874.2,326900.7 5455875.8,326888.9 5455867.3,326866 5455853.1,326862 5455851.2,326847.4 5455845.8,326827.7 5455841.2,326771.6 5455831.6)))'::geometry);

 startcoord (coordtype) | endcoord (coordtype)
------------------------+------------------------
 (326454.7,5455793.7,,) | (326621.3,5455813.7,,)
 (326621.3,5455813.7,,) | (326455.4,5455796.6,,)
 (326455.4,5455796.6,,) | (326454.7,5455793.7,,)
 (326771.6,5455831.6,,) | (326924.1,5455849.9,,)
 (326924.1,5455849.9,,) | (326901.9,5455874.2,,)
 (326901.9,5455874.2,,) | (326900.7,5455875.8,,)
 (326900.7,5455875.8,,) | (326888.9,5455867.3,,)
 (326888.9,5455867.3,,) | (326866,5455853.1,,)
 (326866,5455853.1,,)   | (326862,5455851.2,,)
 (326862,5455851.2,,)   | (326847.4,5455845.8,,)
 (326847.4,5455845.8,,) | (326827.7,5455841.2,,)
 (326827.7,5455841.2,,) | (326771.6,5455831.6,,)


Table Tests
Now we can test our function against some real tables and SQL.

DROP TABLE v_polygons;

Query returned successfully with no result in 16 ms.

CREATE TABLE v_polygons
(
  gid serial NOT NULL primary key
)
WITH (
  OIDS=FALSE
);

NOTICE:  CREATE TABLE will create implicit sequence "v_polygons_gid_seq" for serial column "v_polygons.gid"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "v_polygons_pkey" for table "v_polygons"
Query returned successfully with no result in 125 ms.

ALTER TABLE v_polygons OWNER TO postgres;

Query returned successfully with no result in 16 ms.

SELECT addGeometryColumn('public','v_polygons','geom','28355','MULTIPOLYGON','2'); 
Total query runtime: 78 ms.
1 rows retrieved.

 addgeometrycolumn (text)
------------------------------------------------------------
 public.v_polygons.geom SRID:28355 TYPE:MULTIPOLYGON DIMS:2

ALTER TABLE v_polygons DROP CONSTRAINT enforce_geotype_geom;

Query returned successfully with no result in 16 ms.

ALTER TABLE v_polygons ADD  CONSTRAINT enforce_geotype_geom CHECK ((geometrytype(geom) = ANY (ARRAY['MULTIPOLYGON'::text, 'POLYGON'::text])) OR geom IS NULL);

INSERT INTO v_polygons (geom) 
VALUES 
(ST_GeomFromText('MULTIPOLYGON(((326454.7 5455793.7, 326621.3 5455813.7, 326455.4 5455796.6, 326454.7 5455793.7)), ((326771.6 5455831.6, 326924.1 5455849.9, 326901.9 5455874.2, 326900.7 5455875.8, 326888.9 5455867.3, 326866 5455853.1, 326862 5455851.2, 326847.4 5455845.8, 326827.7 5455841.2, 326771.6 5455831.6)))',28355)),
(ST_PolygonFromText('POLYGON((326667.8 5455760.3, 326669.7 5455755.6, 326688.1 5455766.0, 326685.3 5455770.5, 326667.8 5455760.3))',28355));

SELECT v_polygons.gid, 
       (ST_GetVectorSQL(v_polygons.geom)).*
  FROM v_polygons;

 gid (integer) | startcoord (coordtype) | endcoord (coordtype)
---------------+------------------------+------------------------
 1             | (326454.7,5455793.7,,) | (326621.3,5455813.7,,)
 1             | (326621.3,5455813.7,,) | (326455.4,5455796.6,,)
 1             | (326455.4,5455796.6,,) | (326454.7,5455793.7,,)
 1             | (326771.6,5455831.6,,) | (326924.1,5455849.9,,)
 1             | (326924.1,5455849.9,,) | (326901.9,5455874.2,,)
 1             | (326901.9,5455874.2,,) | (326900.7,5455875.8,,)
 1             | (326900.7,5455875.8,,) | (326888.9,5455867.3,,)
 1             | (326888.9,5455867.3,,) | (326866,5455853.1,,)
 1             | (326866,5455853.1,,)   | (326862,5455851.2,,)
 1             | (326862,5455851.2,,)   | (326847.4,5455845.8,,)
 1             | (326847.4,5455845.8,,) | (326827.7,5455841.2,,)
 1             | (326827.7,5455841.2,,) | (326771.6,5455831.6,,)
 2             | (326667.8,5455760.3,,) | (326669.7,5455755.6,,)
 2             | (326669.7,5455755.6,,) | (326688.1,5455766,,)
 2             | (326688.1,5455766,,)   | (326685.3,5455770.5,,)
 2             | (326685.3,5455770.5,,) | (326667.8,5455760.3,,)


Let’s visualise this by shifting the resultant vectors so they sit parallel to, and 10 meters away from, the original polygon boundaries.

We will use Regina Obe’s upgis_lineshift function to move individual polygon vectors to the right by 10m.

CREATE OR REPLACE FUNCTION upgis_lineshift(centerline geometry, dist double precision)
  RETURNS geometry AS
  $$
  DECLARE
        delx float;
        dely float;
        x0 float;
        y0 float;
        x1 float;
        y1 float;
        az float;
        dir integer;
        line geometry;
  BEGIN
        az := ST_Azimuth (ST_StartPoint(centerline), ST_EndPoint(centerline));
        dir := CASE WHEN az < pi() THEN -1 ELSE 1 END;
        delx := ABS(COS(az)) * dist * dir;
        dely := ABS(SIN(az)) * dist * dir;

        IF az > pi()/2 AND az < pi() OR az > 3 * pi()/2 THEN
                line := ST_Translate(centerline, delx, dely) ;
        ELSE
                line := ST_Translate(centerline, -delx, dely);
        END IF;

        RETURN line;
  END;
  $$
  LANGUAGE 'plpgsql' IMMUTABLE;
  COMMENT ON FUNCTION upgis_lineshift(geometry, double precision) IS 'Takes a 2D line string and shifts it dist units along the perpendicular defined by the straight line between the start and end point.
Convention: right is positive and left is negative. Right is defined as to the right of an observer standing at the start point and looking toward the end point.';

Query returned successfully with no result in 78 ms.

Then we need a table to hold the vectors we create and shift.

DROP TABLE polygon_vectors;

Query returned successfully with no result in 31 ms.

CREATE TABLE polygon_vectors
(
  gid serial NOT NULL primary key
)
WITH (
  OIDS=FALSE
);

NOTICE:  CREATE TABLE will create implicit sequence "polygon_vectors_gid_seq" for serial column "polygon_vectors.gid"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "polygon_vectors_pkey" for table "polygon_vectors"
Query returned successfully with no result in 125 ms.

ALTER TABLE polygon_vectors OWNER TO postgres;

Query returned successfully with no result in 16 ms.

SELECT addGeometryColumn('public','polygon_vectors','geom','28355','LINESTRING','2'); 

 addgeometrycolumn (text)
--------------------------------------------------------------
 public.polygon_vectors.geom SRID:28355 TYPE:LINESTRING DIMS:2


Finally, let’s create, shift and write the vectors.

INSERT INTO polygon_vectors (geom)
SELECT upgis_lineshift(ST_SetSRID(ST_MakeLine(ST_MakePoint((startcoord).x,(startcoord).y)::geometry, 
                                              ST_MakePoint((endcoord).x,  (endcoord).y)::geometry),
                                  28355),
                       10)
  FROM (SELECT (ST_GetVector(v_polygons.geom)).*
          FROM v_polygons ) as v;

Query returned successfully: 16 rows affected, 15 ms execution time.

SELECT COUNT(*)
  FROM polygon_vectors;

Total query runtime: 16 ms.
1 rows retrieved.

 count (bigint)
----------------
 16


Finally, let’s visualise the polygons and the shifted vectors.

Polygons and Vectors

I hope this function, and article, is of use to someone.

by Simon Greener at September 21, 2015 11:07 AM

September 20, 2015

Geo Perspectives (Justin Lewis)

FOSS4G 2015 Seoul Korea – Recap

Seoul Korea

Copyright © 2015 GEO-PD – Genetic Epidemiology of Parkinson’s Disease.

I was very lucky to have been able to participate at FOSS4G in Seoul, Korea this year as both a presenter and avid listener.  While the first and only other FOSS4G I attended (in Denver in 2011) was a huge eye-opener for me from a technical perspective, this experience was much different. In Seoul I felt a whole new sense of the open source geo community's strength and cohesiveness.  In the past I have always approached these groups as an avid user, mostly concerned with technical details, but this time was a little different.  This time I came to the conference as a presenter, as an open source contributor, and with a strong desire to build relationships with this inspiring community.  As a result I had a fantastic and productive time that will likely affect my path ahead in this dynamic and kinda crazy industry.

A Bit About The Venue

Seoul is a wonderful city of around 10 million people where ancient temples sit below modern skyscrapers.  The local transit system is one of the best I have ever experienced, making it ridiculously easy to travel all around the city in comfort.  Seriously, their subway is cleaner than your computer desktop.  The food is amazing as well: everywhere you look are bunches of small restaurants offering different flavors of Korea.  And finally, the people are incredibly nice. Many people speak English, or at least can figure out what you need with a little effort. All in all I found this to be a great venue to bring people from around the world.

 

Sensors and the Internet of Things (IOT)

I had the great opportunity to spend time with Ray from SensorUp, who is doing some great work with the Internet of Things (IOT), which I think will be a huge part of geo-tech in the not-so-distant future.  The basic idea of IOT is that there will be (or are) sensors broadcasting to a common network, from which applications can access signals to make more intelligent decisions based on that live environmental feedback.  Right now IOT seems to be in its earlyish stages, but after seeing how connected, automated, and sensor-driven so much of Seoul's day-to-day activity is, I think it won't be long before we are all working with sensor data in real time to add rich features to our applications.

* His business partner couldn't make it because he was needed to push their IOT work through as an OGC standard!

GeoCouch

Volker Mische demonstrated an amazingly simple and scalable distributed system of document stores with spatial support.  It was impressive to see how simple it is to distribute geo data across a cluster, and then redistribute it across a changed cluster when a machine is added or removed.  The ease of use and flexibility of this project will likely lead to more people deploying apps on advanced scalable architectures as the barriers to entry are removed.

GeoServer

The GeoServer team has been pushing lots of improvements to raster-based performance and functionality, including creating new supporting core components to replace old project dependencies.  Andrea Aime was busy running around to the many presentations he had throughout the conference.  The list of new features is pretty big, but here is a short snapshot.

  • Faster, Faster, Faster, Faster!
  • Clipping rasters by vector or raster mask
  • Image overlay averages to combine images rather than just using alphas
  • … I’m forgetting a lot but the videos will be out soon

Also, the Boundless Geo folks gave me a sneak peek at an upcoming GeoServer UI, which looks awesome and much more useful than what we are all used to.  It takes more of a map-designer perspective, where layers are styled in a map studio of sorts.

PostGIS

The PostGIS team is busy as usual, making lots of improvements to the spatial database we all love so much.  It seems like most of the new work has been done on the raster side of things, where in general we will increasingly be able to do more crazy-robust computation with less code.  You'll have to watch the videos for details, because I didn't write down the syntax.  Needless to say, this team, and especially Paul Ramsey, is not slowing down and will continue to build incredible functionality for the open source community!

OpenLayers 3

OpenLayers 3 is coming along nicely, with lots of new developments in all kinds of areas.  The pieces that stuck out to me most include growing support for all kinds of formats (including Vector Tiles and WebGL).  They also mentioned that support for WebGL is currently limited to points, but support for polygons and lines is in the works.  I really liked their demonstration of advanced topological editing: it showed how sophisticated OpenLayers 3 is, while giving some indication that this library is going to kick even more ass in the future.

MapZen

MapZen is doing great things with WebGL and data visualization in general, but they stirred up some thought and conversation by talking about a global gazetteer of geographic places.  You may not think this is interesting, but the difficulties in achieving such a goal range from technical to social, and if successful it could affect all of us as geographers, so pay attention. You can read more about it here.

Free vs. Open

While not exactly a technology or a specific presentation, there were numerous discussions about the difference between the meaning of ‘free’ vs ‘open’.  While it may seem like semantics, I was convinced that this does matter, though it is not so easy to measure.  This is actually not a new debate; you can read about a long running discussion on the GNU website here for more details, but in general you can think of free as meaning the freedom to use software as you want, while open simply means the ability to see it but not necessarily use it for your intended purpose.  Some may argue that this is just a matter of what licence you use, but the issue is more complicated.  This discussion reaches deep down to the very core of what we do in the open source community and should continue to be discussed.

The importance of OSGeo, LocationTech, and fostering open source

Easily one of the biggest lessons I was reminded of at FOSS4G this year is that community is important.  It's very important.  In fact, I think it's so important that at some point it's what makes or breaks an open source project.  It was very clear that the work OSGeo and LocationTech are doing to support open source groups is valuable to both developers and users.  For those that don't know, these groups provide resources, community, and support for projects to grow and thrive.  Projects that are part of these groups get branded as stable and reliable, which provides a mechanism for reaching beyond the open source community to show others that these tools are not just a basement project.

Ontologies

I gave a talk about how we implement ontologies and geo-ontologies to enhance data quality, data accessibility, and mapping, and I was excited to see another group also talking about ontologies.  Specifically, Hamish Mcnair and Paul Goodhue were talking about how they used ontologies to ensure data quality in biological surveys.  It was great to see others using these techniques, as they are proving to be very useful for projects we work on at TerraFrame.

Other points of note

  • People have been pushing lots of data into Vector Tiles with reasonable performance (look for “Stuffing your vector tiles full of data”).
  • Changing data on the fly in already loaded Vector Tiles is still really tricky.
  • OL3-Cesium is a pretty impressive library for adding a Cesium 3D globe to applications based on OpenLayers 3 (look for “OL3-Cesium: 3D for OpenLayers maps”).
  • Tile Reduce is really great for handling larger data sets in the browser (look for “Big data analysis with Tile Reduce and Turf.js”).
  • OGC GeoPackage… Use it! It’s great.
  • Huge point clouds in the browser with Potree (look for “Point Clouds in a Browser with WebGL”).
  • Write more documentation! (look for “Everybody wants (someone else to do) it: Writing documentation for open source software”).

 

That’s about all I have for now.  Keep an eye out for the session videos which should come out soon.

by jmapping at September 20, 2015 02:26 PM

September 19, 2015

Postgres OnLine Journal (Leo Hsu, Regina Obe)

Compiling and installing ogr_fdw on CentOS after Yum Install PostgreSQL PostGIS

After installing PostgreSQL 9.4 and PostGIS on my CentOS 6.7 64-bit box, following An Almost Idiot's guide to installing PostgreSQL, PostGIS, and pgRouting (replacing the 9.3 references with their 9.4 equivalents), I proceeded to install ogr_fdw. To my disappointment, there are no binaries yet for that, which is not surprising considering there aren't generally binaries for any OS, except the Windows ones I built, which I will be packaging with the PostGIS 2.2 Windows bundle. Getting out of my Windows comfort zone, I proceeded to build ogr_fdw on CentOS, mainly because I have a client on CentOS whose workflow I think ogr_fdw is a perfect fit for, and I wanted to see how difficult a feat this would be. In this article I'll go over the steps I used for building it, and the stumbling blocks I ran into, in the hope it will be of benefit to those who find themselves in a similar situation.

UPDATE: pgdg yum now has ogr_fdw as an offering. If you are on PostgreSQL 9.4, you can now install it with: yum install ogr_fdw94

Continue reading "Compiling and installing ogr_fdw on CentOS after Yum Install PostgreSQL PostGIS"

by Leo Hsu and Regina Obe (nospam@example.com) at September 19, 2015 03:39 PM

September 15, 2015

CartoDB

Powered by CartoDB: MapGeo brings location intelligence to local government

The ‘smarter city’ is all the rage, and while a buzzing megapolis may have the complexity and resources to deploy a mayoral ‘geek squad’ or install flocks of sensors, the average town has to fulfill its duties on a much tighter budget.

We’re all about tools that bring location intelligence to everyone, which is why a new product from our partner AppGeo has us excited. They just launched the next generation of MapGeo, a data management and visualization platform for local governments. They’ve rebuilt it from scratch using CartoDB, and, if we may say so, MapGeo 2.0 is fantastic.

If you’re interested in learning more about what MapGeo can offer your town or city, we hope you’ll join our October 8 webinar.

MapGeo

Why Location Matters to Municipalities

Cities and towns manage services and infrastructure across a geographic area, so they need to track and understand what’s happening where. Location-based decisions are central to many city activities – from transportation to housing to public safety. Local authorities are also looking for ways to work smart – to do more, more efficiently, and to use data to aid their decision-making.

Kate Hickey, AppGeo’s VP for Local Government Services, explains the inspiration for MapGeo:

“After a decade of rolling out custom websites for local agencies, we had a sense of the common needs. Everybody needed base maps, query tools, access to property information and the like. We started to build a blueprint of what a hosted platform might look like in partnership with several municipalities. That blueprint became MapGeo.”

“The more users interacted with MapGeo, the more they wanted to do. They wanted the power of heatmaps and the impact of animated visualizations to explain change. City leaders wanted summary graphics to guide their work and share their reasoning with constituents. CartoDB’s mapping and analysis features make data easy to understand, and the platform is really robust, so we know we can build even more great functionality over time.”

More than a hundred municipalities are using MapGeo today. So this upgrade is based on a deep understanding of what citizens and officials do and care about. Here are some of the ways MapGeo helps address their needs.

Using Land Records to Spot Trends

Land records stewardship is crucial for determining zoning regulations, collecting taxes, and managing rights-of-way, along with dozens of other day-to-day municipal tasks. MapGeo was designed with these parcel data at its heart. An intuitive interface makes data updates and publishing fast and simple. The result is that staff and citizens have quick access to key information and broader insights.

Powered by the CartoDB APIs, MapGeo converts these tables of records into visuals showing trends and patterns. Users can see economic development indicators, such as property sales and values, construction activity, neighborhood revitalization projects, and commercial site availability, and track change over time.

“In modernizing MapGeo we evaluated a variety of cloud-based solutions, including Google Maps Engine,” explains Hickey. “In the end, CartoDB stood out as the best fit for us offering the flexibility, power and features we needed.”

A CartoDB-powered heatmap of property values helps city leaders spot clusters of economic activity.

Self-service tools for the public

MapGeo uses CartoDB’s analytical functions, via the CartoDB SQL API, to automate previously laborious and error-prone tasks.

If you’re building on a property, many towns require that you notify nearby landowners – for example, those within a certain radius – to protect against environmental and safety hazards. MapGeo allows you to simply choose your property and then query the land records for the list of nearby properties – and even create a set of mailing labels – according to the exact local regulations.
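
To give a feel for the kind of query at work here, a minimal abutters lookup in PostGIS might look like the sketch below. The table and column names (parcels, parcel_id, owner_name, the_geom) are hypothetical, and the distance of 100 assumes a projected coordinate system measured in meters.

-- Find every parcel within 100 meters of a chosen parcel
SELECT nbr.parcel_id, nbr.owner_name
  FROM parcels AS target
  JOIN parcels AS nbr
    ON ST_DWithin(target.the_geom, nbr.the_geom, 100)
 WHERE target.parcel_id = '123-456'
   AND nbr.parcel_id <> target.parcel_id;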

Identify abutting properties for notification and other regulations, saving a trip to the land records office.

MapGeo also makes it easy for the public to find other location-dependent information. Residents can learn when trash pickup occurs in their neighborhood. Visitors can find information about where to park. Business owners can find industrial zones to help determine where to locate a new warehouse.

Fresh information saves time

Direct access to up-to-date data, including details and related documents, can improve operations, offer new perspective, and uncover hidden opportunities.

MapGeo combines data from a jurisdiction with other public and private sources, including Google’s Street View, to help city staff save time. One simple example is property assessment. Now, when assessors and inspectors want to check the status of a development project, they can answer a lot of questions before they even arrive on site.

CartoDB powers MapGeo’s on-demand visualizations, allowing for timelapse views of everything from municipal inspection sites to home sales.

The idea is to offer each audience just what they need. For example, while staffers of the Department of Public Works might look at just a few types of data – repair requests or locations of infrastructure like hydrants – city leaders want to know how a given street or neighborhood is impacted by the full range of government services.

Various departments can look at the city’s workforce activity, showing crime incidents and responses, summarizing the number and location of inspections, detailing work order requests and how quickly they are closed.

The smarter city is within reach

Great technology is all about breaking down barriers and making hard things easy. MapGeo is a real step toward helping local governments to do just that, and we’re proud to be involved.

We’re getting closer to a day when – whether you’re a town council member, city planner, or new resident – you won’t need to visit multiple offices or dig through endless file cabinets to find an answer. Those of you who’ve ever worked on local policy or investment projects will understand just what a big step forward this is!

If you’re interested in MapGeo for your town, join our October 8 webinar to learn more.

Start developing with the CartoDB APIs

Learn more about MapGeo

September 15, 2015 10:30 PM

September 08, 2015

Boston GIS (Regina Obe, Leo Hsu)

pgRouting 2.1.0 and osm2pgrouting-2.1.0-alpha is out, windows binaries available

pgRouting 2.1.0 got released today along with pgRouting 2.0.1 and osm2pgrouting 2.1.0-alpha1. Full details in Daniel's note: https://lists.osgeo.org/pipermail/pgrouting-users/2015-September/002148.html.

I want to especially thank Vicky Vergara for putting in all the long hours to make this happen. I am most excited about the changes in osm2pgrouting -- now with bigint support, speed improvements, and multi-file loading, much of which is Vicky's and GSoC work. As a small contribution, I added schema support for osm2pgrouting 2.1.

I also want to extend a BIG thank you to Ko Nagase for fixing a TRSP bug that only exhibited on 64-bit Windows (fixed in pgRouting 2.0.1 and pgRouting 2.1.0), as well as for fixing other Windows- and Boost-related compile issues.

For Windows users excited to try all these new features, you can get them on the PostGIS windows download page (for PostgreSQL 9.2-9.4): http://postgis.net/windows_downloads

We plan to package osm2pgrouting 2.1.0 (latest) and pgRouting 2.1.0 as part of the upcoming PostGIS 2.2 Windows Application builder bundle install for PostgreSQL 9.3-9.5. We would very much appreciate people testing these packages out now so there are no surprises come official package time.

If you already have pgRouting 2.0.0 installed in your database, you can upgrade with:

ALTER EXTENSION pgrouting UPDATE TO "2.1.0";

If you were experimenting with a beta or RC pgRouting, you'll need to drop the extension and re-add it:

DROP EXTENSION pgrouting;
CREATE EXTENSION pgrouting VERSION "2.1.0";

To confirm you are running the latest and greatest, run this query:

SELECT * 
FROM pgr_version();

Which should output:

 version |       tag       | build |  hash   | branch | boost
---------+-----------------+-------+---------+--------+--------
 2.1.0   | pgrouting-2.1.0 | 1     | b38118a | master | 1.59.0
(1 row)

by Regina Obe (nospam@example.com) at September 08, 2015 02:26 AM

August 20, 2015

Archaeogeek (Jo Cook)

Portable GIS 5.2

I’m pleased to announce the latest release of Portable GIS. This version (v5.2) has only a couple of changes:

  • QGIS 2.8 (I’m going to try and do a release to coincide with each long-term release of QGIS)
  • Loader has been updated to the latest version

You can download the setup exe and the md5 checksum here.

Older versions are still available but have been archived to avoid confusion.

As always, please let me know of any problems via the Portable GIS google group.

This version also marks the start of an attempt to create a repository for Portable GIS, which will hold the additional and/or changed files and instructions for rolling your own version. This is something that I have wanted to do for some time, but it’s not ready for release yet. Watch this space!

August 20, 2015 06:16 PM

August 05, 2015

Boston GIS (Regina Obe, Leo Hsu)

pgRouting 2.1.0 Beta Release

pgRouting Beta 2.1.0 released details: https://lists.osgeo.org/pipermail/pgrouting-dev/2015-August/001569.html

The pgRouting team would like to announce:
 
      pgRouting 2.1.0 Beta Release
 
is ready for review and testing.

NEWS:
https://github.com/pgRouting/pgrouting/blob/develop/NEWS
 
Documentation:
http://docs.pgrouting.org/
 
Download:
https://github.com/pgRouting/pgrouting/releases/tag/pgrouting-2.1.0-beta
 
Source:
https://github.com/pgRouting/pgrouting/tree/develop
 
Closed Issues
https://github.com/pgRouting/pgrouting/issues?q=is%3Aissue+milestone%3A%22Release+2.1.0%22+is%3Aclosed

 
We are very excited about this release and all the new features that are
being made available. 
 
We hope the community will join in and support all the effort to get the
release this far with additional testing and feedback.
 
Best regards,
   The pgRouting Team

Windows binaries

Windows experimental binaries for both PostGIS 2.2.0dev and pgRouting Beta 1 available at http://postgis.net/windows_downloads

by Regina Obe (nospam@example.com) at August 05, 2015 03:45 AM

Postgres OnLine Journal (Leo Hsu, Regina Obe)

pgRouting 2.1 Beta released

pgRouting 2.1 Beta release is out. Full details at https://lists.osgeo.org/pipermail/pgrouting-dev/2015-August/001569.html. pgRouting 2.1 works with PostGIS 2.1 and 2.2.0dev.


Continue reading "pgRouting 2.1 Beta released"

by Leo Hsu and Regina Obe (nospam@example.com) at August 05, 2015 03:31 AM