Posts Tagged ‘SDI’

eBook “An Architectural View of Spatial Data Infrastructures”

Sunday, January 13th, 2013, Fco. Javier Zarazaga-Soria

Spatial data infrastructures are large distributed information systems on the Internet, based on open standards, that enable the sharing and use of data for which location is important, such as roads, satellite and aerial imagery, business and tourist attractions, noise and pollution maps, street maps or demographics. This book presents an approach based on distributed information system architectures for specifying and documenting spatial data infrastructures, facilitating their development and analysis.
Available at: bit.ly/13t230k
A quick view at: bit.ly/WT0kvc

DEXA 2010 and EGOVIS 2010

Monday, June 7th, 2010, Francisco J Lopez-Pellicer

DEXA’10 and EGOVIS’10 will be held jointly next September. We are presenting two papers:

  • Francisco J. Lopez-Pellicer, Mário J. Silva, Marcirio Chaves, F. Javier Zarazaga-Soria and Pedro R. Muro-Medrano, Geo Linked Data
  • Miguel Ángel Latre, Francisco J. Lopez-Pellicer, Javier Nogueras-Iso, Rubén Béjar, and Pedro R. Muro-Medrano, Facilitating E-government Services through SDIs, an Application for Water Abstractions Authorizations

At DEXA 2010 we present the paper Geo Linked Data. This paper investigates a feasible approach for using maps as data in Semantic Web applications. If maps were machine-processable, it would be possible to avoid messages such as “We are sorry, but we don’t have imagery at this zoom level for this region”, or to select the most appropriate map to present data. Our contribution involves (1) the characterization of geospatial proxies, geospatial Web resources that can complement Semantic Web descriptions of entities, (2) the identification of their roles in semantic applications, (3) a recipe based on Linked Data practices for publishing geospatial Web resources alongside RDF descriptions, and (4) best practices for advertising the presence, role and location of geospatial proxies.
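To give a flavour of what the recipe means in practice, here is a minimal sketch in Python. All URIs and the hasMap predicate are hypothetical, invented for illustration only; the paper’s actual vocabulary and recipe may differ. The idea is simply that the RDF description of an entity links out to a geospatial Web resource (a map acting as a geospatial proxy), so both can be published alongside each other:

```python
# Hypothetical triples: an entity's RDF description pointing to a map
# resource (the "geospatial proxy") plus an ordinary label.
TRIPLES = [
    ("<http://example.org/id/ebro-river>",
     "<http://example.org/vocab#hasMap>",  # assumed predicate, not from the paper
     "<http://example.org/maps/ebro?SERVICE=WMS&REQUEST=GetMap>"),
    ("<http://example.org/id/ebro-river>",
     "<http://www.w3.org/2000/01/rdf-schema#label>",
     '"Ebro river"@en'),
]

def to_ntriples(triples):
    """Serialize (subject, predicate, object) tuples as N-Triples lines."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in triples)

print(to_ntriples(TRIPLES))
```

A client that understands the assumed hasMap link could then fetch the map resource instead of falling back to a generic “no imagery” tile.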

At EGOVIS 2010 we present the paper Facilitating E-government Services through SDIs, an Application for Water Abstractions Authorizations. In recent years, there has been a huge increase in the number of e-government services offered to citizens and companies. However, environment-related permits are among the least developed kinds of e-government services in Europe. Environmental management and government require the use of geographic information; SDIs are providing the framework for optimizing its management, and they are becoming a legal obligation for some countries and institutions. To take advantage of geographic information technologies, and of the obligation to build SDIs, in order to contribute to the development of e-government services, we present how to use SDIs (for example, IDE Ebro) in a real tool in the area of environment-related permits: the application for a water abstraction authorization. SDI services are used for the capture, management and assessment of geographic information in a fully transactional e-government service.

Publishing Open Government Data

Friday, May 21st, 2010, Francisco J Lopez-Pellicer

I have just re-read the W3C Working Draft Publishing Open Government Data. Its abstract is compelling:

Every day, governments and government agencies publish more data on the Internet. Sharing this data enables greater transparency; delivers more efficient public services; and encourages greater public and commercial use and re-use of government information.

Publishing and sharing imply clients that access and integrate data. In the geospatial world, this is enabled by Spatial Data Infrastructures (SDIs):

There is a clear need, at all scales, to be able to access, integrate and use spatial data from disparate sources in guiding decision making. Our ability then, to make sound decisions collectively at the local, regional, and global levels, is dependent on the implementation of SDI that provides for compatibility across jurisdictions that promotes data access and use (Douglas Nebert, The SDI Cookbook).

However, the approach behind the W3C draft is quite straightforward:

  • Publish the data in its raw form (XML, RDF, CSV). Formats that allow data to be seen but not extracted are not useful for this approach.
  • Create an online catalog of the raw data (complete with documentation) so people can discover what has been posted.
  • Make the data both human- and machine-readable. That is, use XHTML, RDFa, content negotiation, “cool URIs”…
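The last step, serving human- and machine-readable views from the same “cool URI”, is usually implemented with content negotiation. The sketch below is a minimal, self-contained illustration of the idea; the media types and paths are assumptions, not taken from the W3C draft:

```python
# Pick a representation of one resource based on the HTTP Accept header.
# Paths and media types are illustrative only.
SUPPORTED = {
    "text/turtle": "/data/resource.ttl",              # machine-readable RDF
    "application/rdf+xml": "/data/resource.rdf",      # machine-readable RDF
    "application/xhtml+xml": "/page/resource.xhtml",  # human-readable page
}

def negotiate(accept_header):
    """Return the path of the first supported media type in the Accept header,
    falling back to the human-readable XHTML page."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()  # drop q-values like ";q=0.9"
        if media_type in SUPPORTED:
            return SUPPORTED[media_type]
    return SUPPORTED["application/xhtml+xml"]

print(negotiate("text/turtle"))                  # a Semantic Web client
print(negotiate("text/html,application/xhtml+xml"))  # a browser
```

A real deployment would also send the proper Content-Type and a 303 redirect or Content-Location header, but the core decision is the one sketched here.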

The intended purpose of this approach is to allow third parties to create and develop new interfaces to the data that may not be obvious (or may even seem absurd) to the data providers. The publication of the data should be separated from the interfaces, giving mashup developers, data integrators, data crawlers and others access to the raw data.