Ensuring quality in the new environment

All organizations are forced to update their spatial data infrastructures from time to time. Transport agencies have an extra incentive to do so, since traffic data is evolving at a rapid pace and will in the future be used by, for example, autonomous cars and drones. New OGC standards (Read more: WFS3 is a novel OGC API for feature access) are being adopted into APIs, and traditional spatial technologies are maturing and being updated as well.
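
To give a concrete feel for what these new APIs look like, here is a minimal sketch of querying an OGC API Features (WFS3) service in Python. The service root and collection id are hypothetical placeholders, not any real endpoint:

    # Minimal sketch of an OGC API - Features (WFS3) request.
    # The base URL and collection id are hypothetical placeholders.
    import requests

    BASE = "https://example.org/ogcapi"   # hypothetical service root
    COLLECTION = "road-network"           # hypothetical collection id

    # List the feature collections the service offers
    collections = requests.get(f"{BASE}/collections", params={"f": "json"}).json()
    for c in collections["collections"]:
        print(c["id"], "-", c.get("title", ""))

    # Fetch the first ten features of one collection as GeoJSON
    items = requests.get(
        f"{BASE}/collections/{COLLECTION}/items",
        params={"limit": 10, "f": "json"},
    ).json()
    print("received", len(items["features"]), "features")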

New environments have to be built on parts of old, already proven standards, while utilizing new technologies to their fullest potential. We believe that GeoServer, for example, is one of our industry standards these days, and that it can serve as a cornerstone for any spatial web service in 2020.

As long as end users of these new spatial web services are not experiencing any problems, everything is running smoothly. The problems arise once the services stop behaving as intended and downtime occurs. One of the first steps when building new spatial web services is therefore to ensure that availability stays very high. Displaying third-party proof of that availability also builds trust with your users.
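
One simple way to verify availability on your own is to poll a service's capabilities document at regular intervals and log whether it answers correctly. The sketch below does this for a WMS endpoint; the service URL is a hypothetical placeholder, and a real monitoring product tracks far more than this:

    # Minimal availability probe: poll a WMS GetCapabilities endpoint once a minute.
    # The service URL is a hypothetical placeholder.
    import time
    import requests

    SERVICE = "https://example.org/geoserver/ows"   # hypothetical endpoint
    PARAMS = {"service": "WMS", "request": "GetCapabilities"}

    while True:
        try:
            r = requests.get(SERVICE, params=PARAMS, timeout=10)
            # A healthy WMS answers 200 with an XML capabilities document
            up = r.status_code == 200 and r.content.lstrip().startswith(b"<")
        except requests.RequestException:
            up = False
        print(time.strftime("%Y-%m-%dT%H:%M:%S"), "UP" if up else "DOWN")
        time.sleep(60)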

Traficom is a good example of this approach. They started using Spatineo Monitor in April 2020, and one of the first things they did was to ensure the quality of their services. They communicate their availability with high transparency via their “Availability dashboards”, which display the status of all their spatial web services neatly on one page. This availability was achieved with hard work and smart choices in their spatial data infrastructure.

Spatial Web Service Availability Dashboard created by Traficom

Building better spatial web services for users

Usability and accessibility are a couple of things to pay attention to when updating or building new spatial web services. Can the services maintain good quality even under heavy load? If thousands of people decide to check your service at the same time, can you withstand that kind of traffic?

Both of these issues can be addressed before the actual publication phase; removing the bottlenecks in your service is better done before going public. Launching traffic data that is available but “laggy” might have long-lasting effects on your user base. The first time users try your spatial web services must go smoothly, so that the first impression is a good one.
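
As a rough illustration of the idea behind such pre-launch testing, the sketch below fires parallel requests at a service and reports the latency distribution. The endpoint is a hypothetical placeholder, and a dedicated load-testing product ramps traffic up far more carefully than this:

    # Crude load sketch: send parallel requests and report latency percentiles.
    # The endpoint is a hypothetical placeholder.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://example.org/geoserver/ows"   # hypothetical endpoint
    PARAMS = {"service": "WMS", "request": "GetCapabilities"}

    def timed_request(_):
        start = time.perf_counter()
        try:
            ok = requests.get(URL, params=PARAMS, timeout=30).status_code == 200
        except requests.RequestException:
            ok = False
        return ok, time.perf_counter() - start

    # 200 requests, at most 20 in flight at any one time
    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(timed_request, range(200)))

    latencies = sorted(t for ok, t in results if ok)
    print(f"failed: {sum(1 for ok, _ in results if not ok)}/{len(results)}")
    if latencies:
        print(f"median: {statistics.median(latencies):.2f}s")
        print(f"p95:    {latencies[int(0.95 * len(latencies))]:.2f}s")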

This is why we highly recommend testing the performance of your spatial web services after updates and changes to your infrastructure. Spatineo Performance was, for example, used by Traficom when they were updating their spatial web services, and they were able to remove some of the bottlenecks before going public with the updated APIs.

Read more: Water Map Popularity in Finland Proved the Importance of Public Maps’ Performance Testing

Spatineo also offered consulting services to accompany our performance testing, and with these Traficom was able to get the most out of the tests. The test results were shared with the IT companies responsible for operating Traficom’s IT systems, so everyone involved with the spatial web services was up to date with the latest developments.
Juha Tiihonen from Traficom gave Spatineo a brief comment on why they acquired Spatineo’s tools to support their goal of making spatial web services more available and usable:

“Spatineo tools offer a user-friendly way to be aware of the availability and performance of our data services, and to monitor possible bottlenecks as well.”

Traficom acquired Spatineo tools to enhance their spatial web service quality

After Traficom ensured that the services provided by the new platform could handle large numbers of requests, their spatial web services went operational in April.

The capability to constantly monitor their services now also gives Traficom the power to optimize the quality of those services efficiently. In the long run, the outcome of this optimization is that availability stays high and users get to experience robust services. Every minor change to the data infrastructure might have an impact on the quality of the service, so vigilant monitoring helps to ensure that the quality stays as high as possible.