Raycasting with PostGIS

I recently developed a function in PostGIS that performs simple 2D raycasting. Raycasting is the process of drawing lines from a single point which radiate outwards until they intersect a boundary. It is used quite often in video game development, particularly in first-person shooter (FPS) games. There’s a ton of literature and videos out there on developing raycasting engines for games, but this one, I think, gives a great overview of how it works, along with a very interesting dive into how early FPSs “cheated” by using a 2D raycaster and some visual trickery.

I was surprised not to find any straight-up application of raycasting in GIS, apart from the common use case of determining whether a point is inside a polygon, or line-of-sight analysis, which is almost always done using a raster dataset. I thought it would be an interesting topic to tackle, so I went ahead and developed a 2D raycasting function in PostGIS. The function is pretty straightforward: give it some points and some lines, and it will cast rays outwards from each point until they intersect one of the lines, then stop, returning either the set of points where the rays intersected the lines, or the lines defining the rays themselves.

Each point’s colour-coded rays and intersection points with the black boundaries.

I’m pretty happy with how it turned out. You can specify the number of rays to cast, as well as the maximum distance that a ray can be cast, to control the resolution and the distance that the points can “see”, respectively. In order to use it, all you have to do is run the function definition query on your database and start calling it as you would any other PostGIS function. It returns a GEOMETRY data type, always either a MULTIPOINT or a MULTILINESTRING, representing the aggregated intersection points or rays for each source point.
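
The core idea is simple enough to sketch outside the database. Below is a rough Python/Shapely illustration of what each ray does – to be clear, this is just a sketch of the concept, not the actual PostGIS source, and the function name and defaults are mine. It assumes rays cross the boundaries at points.

```python
import math
from shapely.geometry import Point, LineString, MultiLineString

def cast_rays(origin, boundaries, n_rays=32, max_dist=100.0):
    """Cast n_rays outward from origin, clipping each at its first hit."""
    rays = []
    for i in range(n_rays):
        angle = 2 * math.pi * i / n_rays
        end = (origin.x + max_dist * math.cos(angle),
               origin.y + max_dist * math.sin(angle))
        ray = LineString([(origin.x, origin.y), end])
        hit = ray.intersection(boundaries)
        if not hit.is_empty:
            # Clip the ray at the intersection closest to the origin.
            pts = [hit] if hit.geom_type == "Point" else list(hit.geoms)
            nearest = min(pts, key=origin.distance)
            ray = LineString([(origin.x, origin.y), (nearest.x, nearest.y)])
        rays.append(ray)
    return MultiLineString(rays)

# Example: eight rays cast from a point inside a 10 x 10 "room".
walls = MultiLineString([[(0, 0), (10, 0)], [(10, 0), (10, 10)],
                         [(10, 10), (0, 10)], [(0, 10), (0, 0)]])
print(cast_rays(Point(3, 4), walls, n_rays=8, max_dist=50.0))
```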

The source code, usage and examples can be found at the GitHub page.

Expanding and shrinking polygons with ST_Dilate

Creating a buffer of a feature is quite simple – it is often one of the very first things that a student learns in an “Intro to GIS” class. You input your features, how far outward (or inward) to buffer, and bam, you’re done.

But what if you want to buffer a polygon feature in such a way that you want the resultant polygon to be a percentage bigger or smaller than the source polygon? What distance would you enter for that?

The truth is that for anything but trivial geometric shapes (circles, ellipses, rectangles), it is infeasible to try and figure it out mathematically. We need to employ an iterative solution – one that tries different buffer distances, figures out how close it is to the correct answer, and tries again and again until it finds a value that produces a result similar enough to the desired one to be considered correct.

The Solution

I have written a PostGIS function called ST_Dilate that does exactly that – it takes the following parameters:

  • in_geom – The geometry field for the polygon to be buffered.
  • scale_factor – How much bigger or smaller the final buffered polygon should be than in_geom. A value of 1.5 means that it should be 50% bigger.
  • tol – How similar the final buffered polygon’s area must be to the desired area, which itself is calculated by multiplying in_geom’s area by the scale factor. Default is 0.001, or 0.1%. A smaller value means that the final buffered polygon will be much more precise, but at the cost of usually requiring more iterations.
  • guess – The first value to try when buffering the source polygon. Default is 1.
  • safety – A maximum number of attempts before the function gives up and outputs a NULL geometry. As the solution is iterative, it is good to have this kind of mechanism to stop infinite loops in case of unintended behaviour. Default is 1000.
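
Assuming the function has been created on your database, a call from Python might look like the following sketch – the table and column names (parcels, geom) are hypothetical.

```python
import psycopg2

conn = psycopg2.connect(dbname="gisdb")  # hypothetical connection
with conn, conn.cursor() as cur:
    # Grow each polygon to 150% of its original area, leaving tol,
    # guess and safety at their defaults.
    cur.execute("SELECT ST_AsText(ST_Dilate(geom, 1.5)) FROM parcels;")
    for (wkt,) in cur.fetchall():
        print(wkt)
```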

How it works

The function first calculates the desired final area by multiplying in_geom’s area by scale_factor. It then buffers in_geom using a distance value of guess. After comparing the area of the resulting buffered polygon with the desired final area, it increments guess (if the buffered polygon’s area was too small) or decrements it (if it was too big) by half of itself and tries again. Whenever the result overshoots – that is, the area was too big on the last run and too small on the current one, or vice versa – the amount by which guess is adjusted is halved. Once the buffered polygon’s area is sufficiently similar to the desired final area (the normalized difference is less than tol), that polygon is returned.
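
To make the loop concrete, here is a rough Python/Shapely translation of the logic just described – an approximation for illustration, not the actual PL/pgSQL source.

```python
from shapely.geometry import Point

def dilate(in_geom, scale_factor, tol=0.001, guess=1.0, safety=1000):
    """Buffer in_geom until its area is scale_factor times the original."""
    target = in_geom.area * scale_factor
    step = guess / 2.0                      # initial adjustment: half of guess
    last_too_small = None
    for _ in range(safety):
        result = in_geom.buffer(guess)
        diff = result.area - target
        if abs(diff) / target < tol:        # close enough: done
            return result
        too_small = diff < 0
        if last_too_small is not None and too_small != last_too_small:
            step /= 2.0                     # overshot: halve the adjustment
        guess += step if too_small else -step
        last_too_small = too_small
    return None                             # gave up after `safety` attempts

# Example: grow a (nearly) unit circle's area by 50%.
print(dilate(Point(0, 0).buffer(1.0), 1.5).area)
```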

safety is a parameter that indicates the maximum number of attempts to find the correct final buffered polygon before giving up and returning a NULL geometry. This is intended to be a failsafe measure to avoid creating an infinite loop and causing damage to the greater system in the event of unintended behaviour.

Dilation of polygons by both negative and positive values.

To get a better feel for how the iteration works in this solution, we can look at the below graph, which shows the discrepancy between the area of the resultant buffered polygon and the desired final area. The blue line is the first attempt at dilating a polygon, using a guess value of 25. The orange line is the same polygon, but with a guess value of 100.

We can clearly see that changing this initial guess can have a significant effect on the number of iterations needed to arrive at the correctly buffered polygon. This is important to keep in mind when dilating your polygons. Changing the tolerance would have an effect, too – but at the cost of precision.

As usual, the source code, as well as installation instructions, is available on GitHub.

Raster Stamp

 

GitHub link.

A typical application of GIS is translating biologists’ assessments into spatial analysis algorithms and applying them to the data at hand. The biologist might say, “Everything within 50m of the development’s footprint is ‘highly disturbed’, then everything 100m beyond that is ‘somewhat disturbed’, and everything else up to 500m is ‘minimally disturbed’.” The GIS person would hear that as “Make three buffers, one at 50m, another at 150m, then another at 500m, and summarize the amount of suitable habitat that intersects each of them.”

Easy peasy, but rather limited. After all, what exactly do “highly disturbed”, “somewhat disturbed” and “minimally disturbed” mean? This ordinal classification scheme was good enough for the task at hand, but I couldn’t help but see it as a simple case of a much broader application. Let’s look at the case as described above, but using a road as the proposed development:

There are our ordinally-classified boundaries. You could take this and intersect each of those stretched donut rings with a habitat suitability map to get an idea of how much environmental impact the development has on the area. If we think of it in terms of environmental degradation, we could imagine this being a “stamp” that we press onto a surface whose height represents the suitability for a species. Such a stamp would look something like this:

And the result of “stamping” it onto a surface would leave an imprint like this:

The rendering in this picture looks a little wonky – it seems to be a graphical glitch in ArcScene, but you can see some plateauing along the edges as a result of the stamp being used.

Doesn’t look so nice, does it? The chosen boundaries are too wide, and they cause a noticeable plateauing effect. It would be nice to be able to create some smoother edges – and while we’re at it, wouldn’t it be nice if we could have complete control over the shape of the stamp? Say, from a mathematical function?

Enter the Raster Stamp

The raster stamp is an arcpy-based tool that I wrote for the “Student of the Year” competition for ESRI in Kranzberg, Germany. I thought back on some old unexecuted ideas I had kicking around in my head. Why not give the user control over the exact number and distance of “bands” from the development? And then automatically calculate the “height” of each band with a mathematical function, taking the distance from the development as input?
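
To make that concrete, here is a hedged sketch of what such a distance-height function could look like – the function name and constants here are mine, not the tool’s.

```python
def parabolic_decay(distance, max_dist=500.0, max_height=1.0):
    """Full 'disturbance' at the development, decaying to zero at max_dist."""
    if distance >= max_dist:
        return 0.0
    return max_height * (1.0 - distance / max_dist) ** 2

# Twenty evenly spaced bands out to 500 m, one stamp "height" per band.
band_edges = [i * 25.0 for i in range(1, 21)]
heights = [round(parabolic_decay(d), 3) for d in band_edges]
print(heights)
```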

Well, I scripted it up, made a GUI in ArcGIS, and packaged it all into a Toolbox that you can download from my GitHub. It creates such a “stamp” from user input: a set of distances from the input vector features, with the height of each band evaluated from a mathematical distance function, also supplied by the user. Here is the stamp created when using a very high number of distances (and therefore bands), with their heights defined by a parabolic decay function:

And probably more intuitively, in 3D:

That looks a lot more natural. If the surface onto which it gets pressed has a sufficient spatial resolution, it could even be considered continuous. The best part is that you can tweak the resultant stamp simply by adjusting the parameters of the input distance function. Different species might warrant different parameters, if we go back to the “habitat degradation” application example. This is just the stamp, though. Here is what it looks like after we “stamp” it onto the surface:

Much nicer. So sleek and smooth. This also much more accurately represents real-world phenomena, which rarely have sudden sharp changes across continuous surfaces. And since the distance-height function is itself a parameter, the shape of the stamp is entirely up to you. Here are three examples of stamped surfaces, from top to bottom: a linear decay function, a parabolic decay function (what we just walked through), and a sinusoidal decay function:

Linear decay function.
Parabolic decay function.
Sinusoidal decay function.

Pine Beetle Infestation 1999 – 2014 Webmap

In 1999, while many were living in fear of the infamous flop of a supposed world-ending disaster known as “the Y2K bug”, there was a bug beginning an infestation that actually did do significant damage. Dendroctonus ponderosae, order Coleoptera, family Curculionidae, otherwise known simply as the mountain pine beetle, has devastated the pine forests of western Canada, most notably in the province of British Columbia. For a province whose forest products comprise a whopping one third of all exports, this little bug has been a big problem.

I took a course in the International Master’s in Geomatics program at Hochschule Karlsruhe called “Time Dependent Visualization”. The aim of the course was to understand the various methods of depicting time on both static and dynamic maps. The final project was to create a static map and a web map, both depicting the same temporal phenomenon, but the dynamic map had to have some intrinsic added value. The end result of my work was the webmap “Pine Beetle Infestation in Productive Forests in British Columbia, 1999-2014”.

The map, made using JavaScript (plus various helper libraries), Leaflet and d3, shows the progression of the pine beetle infestation from 1999 to 2014, using data scraped from a comprehensive report commissioned by the British Columbia provincial government. It presents the information in as direct a way as possible, while offering some additional context to help put some of the numbers in perspective. For instance, the symbols for each region plainly show the amount of productive forest it has, and how much of that is actually composed of pine forest, the trees affected by the infestation. Saying “70% destroyed” can sound absolutely catastrophic, but showing that the 70% is only 70% of 5% of the total forest softens the blow a little.

A time slider at the top right, as well as a “Play” button, allows the user to watch the infestation play out, and see which regions were hit when, how fast, and how long it took until things leveled out.

As far as interactivity goes, I added some controls for clicking on the regions. The “+” buttons will add that region’s loss over time to the left or right side of a pyramid bar chart, allowing regions to be aggregated and compared to each other. The arrow buttons just next to them will completely replace their side of the pyramid with the next-clicked region.

You can also change the visualization method by selecting the tabs just below the “Play” and “Stop” buttons. By visualizing the data in terms of percent loss, the spatial extent of the infestation over time becomes clearer, as the viewer can see where the infestation started, and how it spread over time.

I’m rather happy with how this map turned out. I learned a lot about JavaScript by making it, especially from the process of linking together the map and the pyramid containers. I also started to get a lot more familiar with Bootstrap on this project, which was nice, because up to then I had usually been making layouts manually, which took up a lot of time.

A static version of this map can be found in “Static Maps”:

Static Maps

Some people say that the age of paper maps is past. Flashy, interactive web maps seem to be the status quo. Symbols, colours, arrangement and presentation have to be generated on the fly, directly from the underlying data. I remember learning in my cartography classes during my master’s that the meaning of the term “dynamic map” has changed over the years – a map being “dynamic” once meant simply that it showed information at two different points in time. Nowadays, of course, “dynamic”, at least in the world of cartography, is nearly synonymous with “interactive”. “Static” is the word typically used to describe the traditional, non-changing, non-interactive map.

But are static maps really dead? I don’t think so. They may have lost some of their “wow factor” in the age of slippy maps and data-driven story maps, but there is some kind of succinct elegance that comes with a well-executed paper map. The classical art of cartography may not be the career unto itself that it once was, but it is necessary to know the old rules and systems before you can extend them to the digital domain.

I thought I would share some of my own static map creations, most of which come from the beginning of my GIS career. Most of my maps are of the thematic variety – maps that tie information to geographical space in order to communicate a message. The first one I present is the first “real” map I ever created, as the final project for my very first cartography class at Memorial University of Newfoundland, in St. John’s, Newfoundland, Canada, in 2012:

What story was I trying to tell here? Well, not much. I was still very new in the world of GIS, and fighting with ArcMap to make it look the way I wanted it to was a challenge unto itself. I had some data about forest fires and some demographic information. My goal here was to “colour within the lines” and just make a map that looked professional. I feel like it worked, because bringing it to an interview helped get me my first job as a GIS Tech. I blacked out my name because I posted it on some social media sites a long time ago and wanted to protect my identity.

Fast forward a few years and I was working full-time as a GIS Specialist at the same company where the first map got me hired. I had moved out to Calgary because oil prices were soaring and everybody needed maps. Environmental reports for pipelines still needed paper maps, and that ended up becoming a big part of my job. The controversial and now-abandoned Energy East Pipeline project provided me with a LOT of practice in using ArcMap to make official maps. I couldn’t put a number on how many maps I produced during my time there, but since they followed the company, client and project standards, they mostly looked a lot like this:

I always liked the maps we produced on this project. I thought they looked clean, orderly and professional. They give you the relevant information up-front with little distraction, and just enough background detail to make the map feel balanced and aesthetically pleasing. My senior colleagues were very good at finding the right balance to make a static map look good, and I learned a lot from them during my time there.

As time went on, I enrolled in a master’s program in Geomatics in Karlsruhe, Germany. Even though ArcGIS is still considered the standard in the GIS world, here I found open source solutions and technologies in abundance – far more than I had ever seen up to that point. There was still a strong anchor in the classic ways, but we were encouraged to push the boundaries of what we knew about traditional mapping. One particular course had us explore more recent trends in cartography, techniques that are only practically possible with a computer. One such technique is “firefly mapping”. Using some data from the New York City Open Data Portal, I put together a map showing green infrastructure in the style of a firefly map in Quantum GIS (QGIS):

This is also around the time I began to incorporate graphic design software into my mapmaking workflow. As powerful as GIS packages like ArcGIS and QGIS are, you sometimes find yourself hitting a ceiling when trying to make the map really pop. In this one, for instance, I had no way of making the legend look the way it does. The little firefly dots weren’t actually an option to use in the legend – I had to make them manually afterwards in GIMP. The leafy-looking Statue of Liberty was also done in GIMP.

My most recent static map creation was also done during my master’s. I was tasked with showing a time-dependent pattern in both a static and a dynamic map environment. The dynamic map is actually also an item in my portfolio – the Pine Beetle Infestation in Productive Forests in British Columbia webmap. Our professor, however, was a firm believer in learning the classic methods, and made us design the map from scratch – only the underlying base map showing the regions was allowed to be created in GIS software. I found a spreadsheet showing the infestation rates of the mountain pine beetle in British Columbia, and after weeks of data cleaning, aggregating, experimenting, slaving, getting to the very end, realizing something wasn’t quite right, and starting all over again, I eventually came to my final result:

The square symbols were generated by a Python script that I wrote, which would programmatically create rectangles with dimensions read from an Excel table that held all the source data. After that, though, they had to be placed manually – that was the prof’s requirement. They wanted us to get a real appreciation for the old ways of making a map by hand, and even though the symbols were technically made by a computer, designing each symbol and then finding a way to fit them all together was a laborious, humbling, but nonetheless enlightening exercise.
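
For a rough idea of what that generator did, here is a hypothetical reconstruction – the spreadsheet layout and the SVG output are my assumptions, not the original script.

```python
import openpyxl

# Hypothetical layout: one row per symbol, with columns for the region
# name and the rectangle's width and height.
wb = openpyxl.load_workbook("symbols.xlsx")
for row in wb.active.iter_rows(min_row=2, values_only=True):
    region, width_mm, height_mm = row[:3]
    print(f'<!-- {region} -->')
    print(f'<rect width="{width_mm}mm" height="{height_mm}mm" fill="#8b0000"/>')
```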

It actually feels quite fitting that, at the time of writing this, the last map I created was of the same area as my first ever completed map. When I look at the two, I think of all the time spent learning new systems and methods, and the road my career has taken since then. I may focus a lot more on development and data science now, but the truth is that classic cartography always finds a way to creep back into vogue. After all, what good are your results if they’re not interesting to look at? Static maps may be old, but they never seem to lose their ability to captivate the viewer.

Multi-Locational-Temporal Weather Application (MuLTWA)

People tend to have mixed feelings about group work, but when a group clicks, the results are orders of magnitude better than whatever one person is able to accomplish in the same time. That is exactly what happened with the group I worked with on the “Multi-Locational-Temporal Weather Application”, or “MuLTWA” for short. Nobody could think of a better name, and we felt that “MuLTWA” was pretty memorable, so it just kind of stuck. The first result on a Google Image search (at least for me) is a picture of the application, so we got that going for us.

I’ll try not to let the fame go to my head.

Anyway, the goal of this project, which was for the “Location Based Services” class of the International Master’s in Geomatics program at Hochschule Karlsruhe, was to design some kind of web application (with a geographic/cartographic focus) that implemented a web service of some sort. Other groups used geolocated Flickr images and tweets. We thought it would be neat if you could have a “traveling weather map” to optimize your travel plans when you’re hitting multiple destinations within the next couple of weeks. We couldn’t find an example of that kind of information consolidated on a single page on a whim, so we ran with it. I think I can speak for the group when I say that we were all happy with how it turned out.

It was created by Igbiloba Olumide Enoch, Adrián Castelló Martínez, Macarena Parrizas Siles and myself, and I remember the work fondly. I was able to focus my efforts on creating the carousel you can see at the bottom as you add and remove locations/dates, as well as handling the asynchronous JavaScript requests to the weather service we used to power the whole thing. Adrián and Olumide worked together to set up the Leaflet map, as well as the date and time pickers and the geocoding service. Macarena was the CSS wizard who really went above and beyond in designing logos, layouts and other visual elements and interactivity.

MuLTWA was also one of the first experiences most of us had using git as a collaboration tool. We may have nuked the repository a couple of times by accident, but we managed to land on our feet, with a few valuable lessons on the conceptual differences between pulling and fetching under our belts to show for it. We’re all a lot better at using git now, but I feel like this classic really nails my first exposure to it:

I promise I understand git a lot better now. Honest. (Source)

Feel free to check out the source code on GitHub. I think there is still some room for improvement; for instance, the weather service can only actually be queried for weather up to two weeks in advance, so it would be nice to constrain the date picker to that window. I will take a crack at it eventually if I ever find the time, but if someone is in a collaborative mood, I think it would make a good task for a beginner web developer.

Webmap of castles and ruins in Baden-Württemberg

My first finished Leaflet project was done in July of 2017, for the “Web map development” class of the International Master’s degree in Geomatics at Hochschule Karlsruhe. Having recently moved to Germany, I was enamoured with the centuries of history embodied in the various castles and ruins of Baden-Württemberg. Naturally there are castles all over Germany, but I was limited by what could feasibly be drawn in a Leaflet map in a web browser. Baden-Württemberg seemed appropriate given that I live there (at least at the time of writing). Click here to see it.

The data used for the castles and ruins comes from OpenStreetMap. Specifically, I downloaded the Baden-Württemberg OSM data from Geofabrik and parsed the .osm file into a PostGIS database using osm2pgsql. I could have used the Overpass API (specifically its web interface, Overpass Turbo), but I wanted to get all the data to see what was really in there to play with. It took quite a long time to parse. Germany is shockingly dense with OSM data – Germans seem to LOVE open source and crowdsourced data. I do, too. OSM is a treasure trove of useful geospatial information, and companies like Geofabrik provide an amazing service in making it available to the public.

Once the data was downloaded and loaded into my PostGIS database, I did some fiddling around to figure out what was really there for castles. It turns out that the definitions for “Castle” and “Ruin” can be rather blurry in OpenStreetMap, which is not surprising for crowdsourced data. This brings up an important pitfall to be aware of when it comes to using free data – you get what you pay for. That isn’t a knock at OpenStreetMap. Like I said, I love it and what it provides to geospatial research and development. But you always have to make sure to examine your data very thoroughly and perform cleaning as necessary.

“Castle” Neuscharfeneck – or is it a “Ruin”? Definitions can be subjective – especially when using free data. Photo credit yours truly.
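
For reference, the kind of selection I mean looks roughly like the sketch below, assuming osm2pgsql’s default schema (table planet_osm_point, geometry column way in EPSG:3857). The database name is hypothetical, and the tag filter is simplified compared to what I actually ended up using.

```python
import psycopg2

conn = psycopg2.connect(dbname="osm_bw")  # hypothetical database name
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT name, historic, ST_AsGeoJSON(ST_Transform(way, 4326))
        FROM planet_osm_point
        WHERE historic IN ('castle', 'ruins')  -- OSM tags historic=castle/ruins
          AND name IS NOT NULL;
    """)
    for name, kind, geojson in cur.fetchall():
        print(name, kind)  # geojson could instead be written to a file
```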

After I had my castles and ruins picked out, I saved them to GeoJSON format and plunked them into a Leaflet map with a CSS template frame. I modified the frame to show the information I wanted and also added some functionality for saving bookmarked locations. Try clicking on castles and ruins and see how they get added to the sidebar. You can click on their names later to return to that location.

I’m also happy to report that I passed the class. I think it would be neat to implement some functionality to auto-generate Google Image search results for a castle’s name when it is clicked. Maybe I’ll come back to this one someday. I like castles.

If you want to have a look at the source code, go ahead.