Data flow is like a river: it may have several sources, it can pass through several points, and it usually pours into a larger body of water in the end. We think that monitoring these streams and enabling them to flow as seamlessly as possible is very important. Opening data from silos and breaking down barriers along the flow will eventually lead to better data and information for the whole of society.
As data becomes more of a commodity, the way it is used is changing as well. We are now at a crossroads where our choices will greatly shape what data will be in the future. Will it be like a river that everyone can enjoy, or a siloed and hoarded commodity? Enabling data to flow efficiently is one of the most important things any organisation can do right now.
Data Flow is Complex
The amount of data is increasing globally, and more and more things rely on networks and data flows, which are themselves becoming more complex. Monitoring online resources is therefore crucial, and performance testing, for example, is becoming more important. Likewise, data quality matters more than ever.
The purpose of the EU’s data strategy is to make the EU the leader in a data-driven society. The importance of data and its open flow has been recognised as essential at the European level as well.
In Finland, the “Report on spatial data policy” sets the target that Finland will have the most innovative and secure spatial data ecosystem in the world.
INSPIRE requirements have also made some public sector data flows mandatory in the EU, but we see this as just the beginning: much more progress, and many more benefits, can and should be achieved in the near future.
In Finland we are already forerunners, leading many of these discussions. Sitra, for example, has published a rulebook for a fair data economy, laying the foundation for fair data policy in Europe.
Enable the use of your data in every corner
Data should be available throughout its flows and functions. Why, you ask? Don’t users just want refined and processed data? Yes, some of them might want exactly that, but we should take into account that your data may have several different processed outcomes. Some users might want to apply refining processes to your raw data that you could never carry out yourself. Let those users have the data at its origin.
We can use weather data as an example here. As an average weather-app-checking citizen, I just want to see what the temperature is in Vantaa at any given time, and that is enough for me. The Finnish Meteorological Institute (FMI) gives me that refined data via its website. But what if I were building a climate model of Vantaa that requires much more data than just the current temperature? FMI has that available through its APIs, enabling users to reach the data at their convenience. Now I can fetch that data and possibly even build a business around it!
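To make this concrete, here is a minimal sketch of how a user might construct a query against an open weather-data API such as FMI’s WFS interface. The endpoint URL and stored query name below are illustrative assumptions for the sake of the example, not verified details of FMI’s current service.

```python
from urllib.parse import urlencode

# Illustrative assumption: FMI-style open data served over a WFS interface.
FMI_WFS = "https://opendata.fmi.fi/wfs"

def build_observation_query(place: str) -> str:
    """Build a WFS GetFeature URL for simple weather observations at a place."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "getFeature",
        # Stored query id is a hypothetical example value.
        "storedquery_id": "fmi::observations::weather::simple",
        "place": place,
    }
    return f"{FMI_WFS}?{urlencode(params)}"

url = build_observation_query("vantaa")
print(url)
# The XML response could then be fetched with urllib.request.urlopen(url)
# and parsed for temperature observations.
```

The point is that when data is published at its origin through a documented API, a single URL like this is all a downstream user needs to start building their own refinements.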
In Finland the ecosystems and guidance are in reasonably good shape, but there is still a lot to be done. Enabling data flows gives societies a competitive advantage, boosts new innovation, and gives us better chances than ever before to do things differently through digitalisation. Data is the fuel of digitalisation, and data quality is crucial: ideally we would use not just whatever data happens to be available, but the most appropriate data. There are already massive amounts of data, increasing every single day, so one might think there is actually too much of it. Yet in some use cases, such as AI and machine learning, more data means better results and more valuable impact. The other side of the coin is the quality of the data used: it must be ensured, and often harmonised. And of course, data must be enabled and ensured to flow well both inside organisations and between them!
Break down the barriers to usage
What we are aiming for is open data. But what makes data truly open? Accessibility. So the question becomes: how can we make our data accessible? The best way to approach this is from the user’s perspective.
If there are any hoops users need to jump through in order to get the data, those hoops should be removed. Don’t force users to log in to your systems, don’t make them sign arbitrary terms and conditions you never follow up on, and if it suits your data flow, make your data free. Especially if you are funded through municipal or governmental money, you should definitely reconsider the pricing of your data. Open data usually produces the largest economic output at the societal level.
So think about what is “slowing” your users down on their way to your data. Consider whether these obstacles are really necessary, and remove the ones that are not. Also think about how open data will change after a world-changing pandemic. We’ve written a whole article about this topic, which we urge you to check out.
Read more: What Will Open Data Look Like After Covid-19?
Hurdles users have to overcome to reach data flows
One hurdle is certainly technical: knowledge of the possibilities and standards too often stays with the technical people alone. It is also a question of mindset at every level of the enterprise. In other words, this is definitely not only a technical question; there are also business, competitive, juridical, ethical, security, and data protection aspects. I believe that understanding of these will increase greatly during this year, and Spatineo can help in these areas as well. We really encourage you to think of data flows holistically, as a success factor as a whole, and if you don’t know where to start or need a sparring partner, help is available. Don’t go it alone: discuss with your network more than ever.
Know the usage and impact of your data flow
Let’s continue with the river and water allegory: your data flows like a river, but you don’t know where the flow is at its peak or where the river splits into smaller streams. Do you know where the local people fetch their water, or whether they use automated watering systems? You should be able to know all of these things!
Being able to identify where all the action is enables you to develop your data flow. The key is identifying the most used services and what kind of data is being fetched from your APIs. If you’re in charge of the river’s maintenance, you also need to know the water’s quality and which way the water mass is headed. Our Spatineo Monitor tool is developed for this exact purpose, and you can read more about the benefits of monitoring here.
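As a simple illustration of what “identifying the most used services” can mean in practice, here is a minimal sketch that counts requests per endpoint from web server access logs. The log line format and the sample lines are illustrative assumptions; real monitoring tooling handles far more than this.

```python
import re
from collections import Counter

# Matches the request path in Common Log Format lines, dropping the query
# string so all requests to the same endpoint are counted together.
LOG_PATTERN = re.compile(r'"(?:GET|POST) (\S+?)(?:\?\S*)? HTTP')

def top_endpoints(log_lines, n=3):
    """Return the n most requested endpoint paths with their counts."""
    counts = Counter()
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts.most_common(n)

# Hypothetical sample log lines for demonstration.
sample = [
    '1.2.3.4 - - [01/Jan/2021] "GET /wfs?request=GetCapabilities HTTP/1.1" 200 512',
    '1.2.3.4 - - [01/Jan/2021] "GET /wfs?request=GetFeature HTTP/1.1" 200 90210',
    '5.6.7.8 - - [01/Jan/2021] "GET /wms?request=GetMap HTTP/1.1" 200 4096',
]
print(top_endpoints(sample))  # → [('/wfs', 2), ('/wms', 1)]
```

Even this crude counting already answers the first question from the allegory: which branch of the river carries the most water.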
Mission to enable better data flow
With the three steps above, every organisation should have the basic knowledge needed to start making its data flow more reachable and robust. Data flow has been part of the public discussion since at least 2012, so we wonder: why haven’t more organisations taken these steps already? All of these actions improve the user experience in the long run.
We’d also like to hear your thoughts on what would make data flow better and more reachable. We believe we’ve only scratched the surface with these three steps. Several organisations have gone far beyond these measures to make their data flows extraordinarily good. Please share your best practices in the comments below!