Spatineo leading OGC activities for Quality of Service


We all know that sometimes computer-related things just don’t work: cryptic error messages, programs stuck for an indefinite time, or crashing altogether. Sometimes the culprit can be outside your laptop or mobile phone: the resources it’s trying to fetch are just not there or cannot be accessed as fast as usual. Whatever the reason behind this behaviour is, the ultimate result is the same: the user experience sucks.

To make things work better together, you need to agree on how they are expected to function. The standardisation work of the Open Geospatial Consortium (OGC) has greatly influenced how computers and software from different vendors handle and exchange spatial information. Still, product-level compliance with functional standards alone is not enough to guarantee fully functional and reliable networked systems in the real world: even the best products can be configured and connected sub-optimally, making them inefficient, unreliable and difficult to use. Like any computer systems, spatial data servers and clients also suffer from occasional technical failures and connectivity problems, resulting in degraded quality as experienced by the user. The key to systematically improving the experienced Quality of Service is to measure it constantly, using well-defined and consistent metrics.
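As a concrete illustration of what such measurement can look like, the minimal sketch below probes a single OGC WMS endpoint and records two simple metrics: availability and response time. This is not Spatineo's production monitoring, and the endpoint URL is a placeholder; the sketch only shows the basic idea of collecting consistent measurements.

    # A minimal sketch of measuring Quality of Service for one OGC WMS endpoint.
    # The URL is a placeholder; point it at any WMS you want to probe.
    import time
    import requests

    WMS_URL = "https://example.org/geoserver/wms"  # placeholder endpoint

    def probe_wms(url, timeout=10.0):
        """Send a GetCapabilities request and record simple QoS metrics."""
        params = {"service": "WMS", "request": "GetCapabilities"}
        start = time.monotonic()
        try:
            response = requests.get(url, params=params, timeout=timeout)
            elapsed = time.monotonic() - start
            # Count the service as available only if it returns HTTP 200 and an
            # actual capabilities document rather than an error page.
            available = response.ok and b"Capabilities" in response.content
            return {"available": available, "response_time_s": round(elapsed, 3)}
        except requests.RequestException:
            return {"available": False, "response_time_s": None}

    if __name__ == "__main__":
        print(probe_wms(WMS_URL))

Run at a regular interval and stored, these two numbers alone are already enough to compute an availability percentage and response-time percentiles for a service.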

When we joined the OGC in early 2016, we wanted to bring the Quality of Service issues of OGC spatial data services to the table. At the Opening Plenary of the OGC Technical Committee meeting held in Dublin in June 2016, I presented the initial idea of a new OGC working group for improving how the OGC Web Service standards, such as WMS, WFS and WCS, could help data providers create and maintain reliable, high-quality spatial data infrastructures. These ideas were well received by the OGC members, and the Quality of Service ad hoc group has since met six times to discuss the focus and type of the proposed new OGC working group and to draft its charter document.

The preparatory phase for the new OGC working group concentrating on Quality of Service and Quality of Experience topics is now about to come to a conclusion, as the OGC released the draft charter document for the OGC Quality of Service and Experience Domain Working Group (QoSE DWG) for a three-week public comment period on 25th October 2016. The plan is to have the OGC Technical Committee vote on approving the final version of the charter, and thus approve the forming of the new working group, during the next OGC meeting in Taichung, Taiwan in December.

Key issues and actions of the proposed OGC QoSE DWG include the following:

  • Make it easier to evaluate and compare the Quality of Service of live spatial data service instances implementing OGC standards.
  • Further the standardisation work within the OGC for declaring the expected level of Quality of Service of OGC compliant services in a formal, machine and human readable way (see the sketch after this list).
  • Create and improve guidance and best practices for improving the Quality of Experience of OGC spatial data services by encouraging data and software providers to publish better and more understandable metadata, layer names, legends, descriptions and other properties that help the users of these services understand and use the services and data more effectively.
  • Encourage activities for piloting technical means for improving the automatic evaluation and monitoring of the Quality of Service of spatial data services in the OGC Testbeds and other activities of the OGC Interoperability Program, and present the results of these real-world experiments to the OGC TC for further action.
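To make the second bullet more tangible, here is a purely hypothetical sketch of what a formal, machine- and human-readable Quality of Service declaration for a single service might contain. The OGC has not standardised such a format at the time of writing; every field name below is an assumption used only for illustration.

    # Hypothetical sketch only: no OGC standard defines these fields at the time
    # of writing. The values show the kind of commitments a provider could declare.
    expected_qos = {
        "service": "https://example.org/geoserver/wms",   # placeholder endpoint
        "availability_target_percent": 99.5,              # share of successful probes per month
        "response_time_target_s": {
            "GetCapabilities": 1.0,
            "GetMap": 5.0,
        },
        "capacity_requests_per_second": 20,
        "maintenance_window": "Sundays 02:00-04:00 UTC",
    }

Published alongside a service's capabilities metadata, a declaration like this would let both client software and human users judge whether a service suits their use case, and let monitoring tools compare the declared targets against measured reality.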

The OGC Quality of Service ad hoc group will have a face-to-face meeting during the Taichung OGC meeting between 4th and 8th December. If all goes well, this will be the last meeting of the ad hoc group and the kick-off for the first year of the new OGC Quality of Service and Experience DWG. I hope to see you there.


The future is already here
By Sampo Savolainen and Anita Lankinen, originally published in the Geo International Nov/Dec 2016 issue

Sampo Savolainen and Anita Lankinen look at some of the latest technological advances and analyse how they use spatial data infrastructures, what else can and should be done, and explore what role they will play in our future

Transport agencies, cities and municipalities, environmental agencies, mapping and cadaster authorities – all of these public organisations have been collecting and storing spatial data for many years. However, often companies end up developing their own spatial data infrastructures (SDIs) and creating their core data by themselves instead of leveraging this publicly collected data.

Look at self-driving cars. Instead of relying on available information on road infrastructure, car manufacturer Tesla has mostly relied on sensors to guide its Autopilot system. The sensors are placed all around the car to help the car understand its environment, so that it can safely steer itself in most situations.

Every Tesla car, with or without Autopilot, is connected to the cloud. The company is constantly monitoring and collecting data from each car to create and update its maps. In short, the company acquires all its data through the drivers of its cars. As a result, Tesla will have its very own map of the world, built from data collected by the hundreds of sensors embedded in each of its cars. Founder Elon Musk calls this a ‘fleet learning network’, where every car learns from the others. Musk gave the example of a Californian highway where the lane lines are badly marked, which, however, does not hinder Autopilot, as the system uses information from Tesla drivers who have already driven this section of road. This means that the data Tesla is collecting goes far beyond two-dimensional road maps.

Why not use existing maps?

So why do Tesla and other private companies prefer building SDIs from scratch instead of using the data governments have been collecting for years? We have come across the following typical arguments for this choice.

Firstly, governmental data is often missing particular information that is crucial for the private organisation. For instance, TomTom could not rely on base maps to contain information about allowed turns at intersections, so it had to collect this crucial information by driving through every intersection in every country where it was missing.

Secondly, the update cycle of suitable public data might not match the needs of private enterprises. For example, it might be dangerous for autonomous cars to rely on old data.

Lastly, spatial data is often recorded differently across countries or administrative areas. This is one of the major issues the European INSPIRE directive aims to address by developing easily accessible, harmonised data and an interoperable infrastructure for spatial information that supports environmental policy-making. Similar goals are pursued by the members of the Arctic SDI – a collaboration between eight national mapping agencies (Canada, Finland, Iceland, Norway, Russia, Sweden, the USA and Denmark) to provide politicians, governments, policy makers, scientists, private enterprises and citizens in the Arctic with access to harmonised data, digital maps and tools to assist in monitoring and decision-making in the region.

Moreover, in our experience, private companies often simply assume that collaboration with public authorities would end up being too difficult.

But what if there were less friction and greater collaboration, and data sharing between authorities and companies worked both ways? What if, say, Tesla shared some of the data it collects with public authorities? The data could be integrated into a common, reliable data set and shared with more partners. One clear benefit would be for autonomous cars: cars from different manufacturers would be able to coordinate and exchange information better, leading to even safer journeys.

This is just one example of how new technologies would evolve better and faster through more data sharing. It would not be the first time Tesla shared some of its innovations and hard work for the common good, either – back in 2014, it opened its patents for electric vehicle technology to advance the development of sustainable transport. That was not an act of pure charity: having more electric cars and charging stations on the market was in the company's commercial interest.

This makes it even clearer that the quality of public spatial data should be constantly monitored, improved and recorded, and that the data should be accessible through common standards across boundaries. With reliable, good-quality data readily available, our cities and countries will become smarter and more interconnected faster, and collaboration between governments and the private sector to build better services will become a natural step towards greater efficiency. Data is the foundation of smart cities and of our improved tomorrow, so we need to make sure it is available when it is required.

Artificial intelligence

In the past few years, artificial intelligence (AI) technology has been commoditised, and it is now being used in applications as varied as autonomous cars, Google search, and evaluating and categorising produce. This is largely due to advances in AI research, such as recurrent neural networks, as well as large software companies opening the source code of the technology platforms needed for massive data analysis.

Soon we will reach the point where every building collects and analyses data about its inhabitants to better serve their needs, every traffic light constantly analyses traffic and reacts accordingly, and waste management – like every other system in a city – is monitored to allow for optimisation. The amount of data collected will be so massive and its analysis so complex that the only way to truly process and make sense of it all will be through AI. This will both lower costs, by automating previously labour-intensive tasks, and enable truly innovative new products and services.

Cities will need to run several digital applications simultaneously, with SDIs serving as the base on top of which all this data is merged, as location is one of the most intuitive ways to connect the different characteristics of a specific place. It is interesting to see how companies are already enriching their maps with data from the Internet of Things (IoT). Smartphones are among the most widely used sensors, collecting a wide palette of signals and data and acting as an enormous distributed sensor network.

Google has used the location data of smartphone users to find out which areas in cities are the most crowded. Take a closer look at your city in Google Maps and you will notice that some areas are highlighted with darker shades of brown. These are the hot-spot areas and buildings – a tremendous resource for real estate agents and buyers, as well as for tourists who want to visit areas of interest.

Humankind has a great track record in innovation and disruption, but historically we have not been able to fully understand the repercussions of our new inventions beforehand. During the early days of the commercial internet, most companies were unable to see the benefits it could offer them, and reputable papers were publishing articles proclaiming that e-commerce would never happen.

In 1995, Newsweek said: ‘The truth is no online database will replace your daily newspaper.’ Looking at this quote today highlights not only the power of our inventions but also our short-sightedness in understanding their true value. Just as AI has been, IoT is now being commoditised quickly; the combination of the two will bring unforeseen advancements and implementations. This is the right time for fearless innovators to take action and seize the moment.

Spatial Data Is Never on Holiday

By Kari-Pekka Karlsson, Secretary of the KMTK Project, National Land Survey of Finland. Originally published in Finnish on www.maanmittauslaitos.fi.

Do we realise how widely spatial data is used in our society? Perhaps not even GIS experts know how often we refer to geographical data, even though Google and other widely used map and spatial data services and products have increased the understanding of the importance of spatial data – or of maps, at least.

Topographical data, often called reference data, is spatial data in its most basic form. This reference data, typically complemented with other datasets, is used as a basis for most of the operations that are based on spatial data.

Some years ago, I did a small survey of the applications in which the reference data of the National Land Survey of Finland is used. I was surprised by the results: it felt like it would be much easier to list the fields where the reference data is not used.

What Needs to Be Improved and Why?

When data is used this widely, it is important to put constant effort into ensuring its good usability. Furthermore, the potential of reference data is currently not fully utilised – partly because of usability issues caused by incompatible datasets.

National Topographic Database Project Focuses on Development Needs

The National Topographic Database (KMTK) project focuses on the following development need: easy-to-access and relevant basic spatial data is needed to make operations in many areas of society more efficient. KMTK will not solve all the problems, but its mission is to create a basis for a better, more efficient future of spatial data usage, on which the applications of the future can be built.

I once heard a thought-provoking comment from a police officer: “The police car does not start driving unless the navigator is working.” That makes perfect sense. In addition to the security sector, reference data is used 24/7/365 for important purposes in many other applications.