This is just a quick notice that we’ve launched a new Spatineo channel (http://www.youtube.com/user/spatineoinc) on YouTube. The channel mainly features videos about our products Spatineo Monitor, Spatineo Directory and the upcoming Spatineo Performance, but also some selected highlights of Spatineo-related events.
We previously had a YouTube channel at http://www.youtube.com/user/spatineo, which is no longer available, but all the videos have been transferred to the new channel. Creating the new channel was unfortunately necessary to associate it with our Google+ page and to better organize our appearance in social media. While making these changes, I also took the opportunity to give the channel’s visual appearance a face-lift.
Next in line for the channel are a couple of “User manual” videos covering some of the most typical usage scenarios of Spatineo Monitor. It would also be interesting to experiment with Hangouts On Air to discuss our products with you and answer any questions you might have. Let’s see how that works out, so stay tuned.
Robots.txt refers to the file name specified in the unofficial robots exclusion “standard”. It is used to inform automatic web crawlers which parts of a server should not be crawled, and you can specify different rules for different crawlers. This standard is not a technical barrier for crawlers but a gentlemen’s agreement that automated processes should, and generally do, respect.
A website may define robots exclusion information by publishing a robots.txt in the root path of the service. For example http://www.spatineo.com/robots.txt is the exclusion information for our website.
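As a purely hypothetical illustration (the paths here are made up, not taken from our site), a robots.txt that keeps all crawlers out of an administrative area while leaving the rest of the site open could look like this:

```
User-agent: *
Disallow: /admin/
```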
Spatineo Monitor adheres to these exclusion rules and thus does not monitor web services that are disallowed via this mechanism. Spatineo does, however, load service descriptions despite robots.txt in the following cases, where we think it is nevertheless appropriate.
A user may request to update or add a service to our registry. This is a user-initiated operation, and thus robots.txt does not apply to this situation.
We attempt to update every service once per week. This is because we want to avoid Spatineo Directory containing outdated or incorrect information about other service providers (you, perhaps?). One request per week should not cause performance issues for anyone.
“Why is there no availability information for my service?”
It is common practice for IT maintenance to disallow all crawling of web services, usually by placing a catch-all disallow-all robots.txt on the server in question. This prevents generic web crawlers from inadvertently causing load peaks and performance issues on the servers. While it is true that typical search engine spiders will usually only be confused by web service descriptions and operations, Spatineo Monitor is built specifically to understand these services. As such, allowing Spatineo to crawl the service will not cause performance issues.
We recommend you make sure that your current robots.txt is truly appropriate for your server. Broad exclusion of crawlers will mean that your users may never find interesting information you have published on the server. Generally, when you publish something online, you want that to be found.
The easiest change (besides removing robots.txt completely) you can make to allow Spatineo monitoring is to add the following lines to your robots.txt, before all other content:
User-agent: spatineo
Allow: /
Please note that both “User-agent” and “spatineo” here are case sensitive. Also, our monitoring follows the first ruleset that matches our user agent.
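If you want to double-check how such a ruleset is interpreted, a quick sketch using Python’s standard-library robots.txt parser can help. The ruleset below combines the “allow spatineo” lines with a hypothetical catch-all disallow, and the service URL is a made-up example:

```python
# Sketch: check the "allow spatineo first" ruleset with Python's
# standard-library robots.txt parser (urllib.robotparser).
from urllib import robotparser

robots_txt = """\
User-agent: spatineo
Allow: /

User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Spatineo's user agent matches the first ruleset and may fetch anything...
print(rp.can_fetch("spatineo", "http://example.com/geoserver/wms"))  # True
# ...while other crawlers fall through to the catch-all disallow.
print(rp.can_fetch("someotherbot", "http://example.com/geoserver/wms"))  # False
```

Note that while this parser is a convenient sanity check, individual crawlers may differ in how strictly they match user-agent names, which is why we recommend copying the directives exactly as shown.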
“I want you to stop monitoring my service”
If monitoring is causing performance issues for you, we recommend you first take a look at how your service is built and configured. We monitor services once every 5 minutes, which should not cause noticeable load on any web service. If performance issues are not the reason you want to stop our monitoring, then I urge you to reconsider: Does monitoring take anything away from you? Do your users appreciate having availability statistics publicly available? If you have a good reason for us not to monitor you besides performance, I ask you to comment on this post and we can discuss your case.
In case your mind is made up, you can forbid us from monitoring your service. You can either upload a catch-all disallow-all robots.txt on your server, or place the following directives in your robots.txt:
User-agent: spatineo
Disallow: /
Please note that both “User-agent” and “spatineo” here are case sensitive and should be written exactly as in the example above. Also keep in mind that directives are read in order and robots use only the first matching ruleset, so place the directives above first, or at least before the “User-agent: *” ruleset.
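As a quick sanity check, the effect of this spatineo-specific ruleset can be verified with Python’s standard-library robots.txt parser; the service URL below is a made-up example:

```python
# Sketch: confirm that a spatineo-specific Disallow ruleset blocks our
# monitoring without affecting other robots.
from urllib import robotparser

robots_txt = """\
User-agent: spatineo
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Spatineo's user agent matches the ruleset and is blocked...
print(rp.can_fetch("spatineo", "http://example.com/geoserver/wms"))  # False
# ...while a crawler with no matching ruleset remains unaffected.
print(rp.can_fetch("genericcrawler", "http://example.com/geoserver/wms"))  # True
```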
If you think you have already set up blocking correctly, but we are still monitoring your service, please do the following:
Make sure the character cases in your robots.txt match the above example (User-agent != User-Agent).
Check that your robots.txt does not have conflicting rules which would specifically allow our monitoring.
If you only just changed the file, you can update our records manually: enter the complete URL to your service into our search engine. This will update the records for that service and monitoring will cease.
In case this does not stop the requests, please post below or contact us via this page.
We had an interesting real-world case of using open environmental data for journalism a couple of weeks ago in Finland. In the early hours of Saturday the 10th of November, Yle, the Finnish public broadcasting company, published a background news item on their site related to the continued pollution leakage at the Talvivaara mining site in Sotkamo, Finland.
A few hours later the map was rendered practically useless because of the serious performance problems of the background WMS services providing the data.
The map window application at Paikkatietoikkuna makes it possible for any user to aggregate and publish web maps with their preferred selection of visualized geospatial data layers provided by various Finnish governmental organizations. The data layers are served by WMS servers hosted by the organizations themselves; the application only provides an interactive graphical user interface for displaying them as a mashup. In this case, Yle reporters had been able to make an up-to-date, interactive map covering soil types, lakes and rivers, groundwater reserves, mining claims and nature protection areas simply by selecting the layers and publishing a link to the map in their news item.
The one-month time series of one of the services (Soil data) shows that the average response times on 10th Nov. were considerably above normal for that service:
It seems that journalists are really starting to take advantage of public open geospatial data resources and easily available web map tools like Paikkatietoikkuna, but the data providers are not well prepared for even fairly minor “slashdot effects” caused by sudden increases in traffic to their services.
We at Spatineo are quite glad to be able to report things like this based on our continuous monitoring of thousands of spatial web services around the world. It confirms to us that our proactive monitoring strategy is the right one: in most cases we have already been collecting performance data before our customers experience performance problems in their spatial web services.