A few days ago I had the chance to participate in the “Touch the Web 2010” workshop.
The goals of the workshop were rather similar to those of WoT 2010. However, rather than being hosted at a Ubicomp/Pervasive venue, Touch the Web was collocated with ICWE 2010, a pure Web engineering conference.
The most surprising fact was probably how close the two communities are getting. Web people are increasingly interested in embedded/physical/sensor computing, while pervasive people are getting more and more convinced that Web protocols are not so bad after all (take this paper for instance), or at least good enough for a good range of applications. Quite a change of mindset compared to a year ago.
One of the good outcomes of the workshop was the fruitful final discussion. Three big challenges seemed to emerge: the discovery of things, real-time communication with things, and understanding the needs for Web-enabled things. These three challenges were also identified as key at WoT 2010.
Discovery of Things
We need to look at describing things so that they can both be discovered by machines (i.e., network discovery) and have their “services” understood by humans (i.e., service discovery). REST is good, REST is great, but its raw expressiveness is not enough to understand things. By crawling a RESTful API you can find the resources it exposes, and by reading the URIs you can get rough “tags” (e.g. /temperature) describing their nature. But this is not enough for users, nor for machines. As an example, attendees mentioned the need to generate sense-making UIs on the fly or to customize page rendering depending on the thing one discovers. A simple example of this is Google Rich Snippets, where the search engine renders page results differently if they embed some semantics. What if Google could render search results for things in a way that helps users interact with them?
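To make the limits of this “raw” discovery concrete, here is a minimal sketch of what a crawler can extract from a thing's RESTful API with no extra semantics: just the last path segment of each URI as a crude tag. The sensor node URL and its resources are hypothetical, for illustration only.

```python
# Sketch of "raw" RESTful discovery: with no added semantics, the only
# machine-readable hints are the URI path segments themselves.
from urllib.parse import urlparse

def rough_tags(resource_uris):
    """Derive a crude 'tag' for each resource from its last URI path segment."""
    tags = {}
    for uri in resource_uris:
        path = urlparse(uri).path
        segments = [s for s in path.split("/") if s]
        tags[uri] = segments[-1] if segments else ""
    return tags

# Resources a hypothetical sensor node might expose:
uris = [
    "http://sensor-node.local/temperature",
    "http://sensor-node.local/humidity",
    "http://sensor-node.local/leds/1",
]
print(rough_tags(uris))
```

Note how the last resource yields only the tag "1": the URI alone tells a machine (or a UI generator) almost nothing about what the resource is or how to interact with it, which is exactly the gap richer descriptions aim to fill.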
Thus, researchers are exploring ways of better describing things, directly inspired by the Semantic Web. In “A Triple Space-Based Semantic Distributed Middleware for Internet of Things”, the authors suggest using RDF. In the mashup framework we presented, an RDFa-based solution makes it possible to integrate newly discovered devices directly as mashup actors. These solutions, however, have the drawback of being based on a well-known syntax but “proprietary semantics”, i.e., they cannot be understood by clients that do not already know that vocabulary. One alternative we (and others) are currently exploring is the use of Microformats, which enable the use of “agreed-upon” lightweight semantics. Their recent fast-paced expansion makes them even more interesting (I should post about our early experiments with things and Microformats here soon!).
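To illustrate the Microformats idea, here is a small sketch of a client extracting structured data from a thing's HTML representation via class attributes. The “hsensor” vocabulary below is entirely hypothetical (invented for this example, not an actual Microformat); the point is only that shared class names turn a human-readable page into machine-readable data.

```python
# Sketch of lightweight-semantics parsing: a thing serves an HTML page
# whose class attributes carry an agreed-upon (here: made-up) vocabulary.
from html.parser import HTMLParser

PAGE = """
<div class="hsensor">
  <span class="type">temperature</span>
  <span class="unit">celsius</span>
  <span class="value">22.5</span>
</div>
"""

class SensorParser(HTMLParser):
    """Collect the text content of elements with recognized class names."""
    FIELDS = {"type", "unit", "value"}

    def __init__(self):
        super().__init__()
        self.current = None  # field we are currently inside, if any
        self.record = {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls in self.FIELDS:
            self.current = cls

    def handle_data(self, data):
        if self.current:
            self.record[self.current] = data.strip()
            self.current = None

parser = SensorParser()
parser.feed(PAGE)
print(parser.record)  # {'type': 'temperature', 'unit': 'celsius', 'value': '22.5'}
```

The same page renders fine in a browser for humans, which is the appeal over a separate machine-only description format.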
Real-Time Things
Next in line among the important aspects for a WoT was the need for real-time communication patterns. This is not ground-breaking, since the topic has been around WoT architectural discussions since the beginning, but the workshop made it clear: client-server architectures are great for controlling things, but for monitoring we also need things to be able to push data. Of course, we would also like this push pattern to be as Web-oriented as possible. On the REST side, people talked about Atom and especially the latest push-based mechanism using it, PuSH (aka PubSubHubbub). On the more WS-* side, speakers talked about using WS-Eventing. We also shared our experiences with WS-Eventing in DPWS (a WS-* stack tailored to devices) and concluded that it is getting better but still quite heavy for many devices and rather hard to get one's hands on.
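For readers unfamiliar with PuSH: a subscriber registers interest in a topic (e.g. a thing's Atom feed) with a single form-encoded POST to a hub, which verifies the callback URL and then pushes new entries to it. Here is a minimal sketch of building that subscription request, following the PubSubHubbub 0.3 parameter names; the hub, topic, and callback URLs are hypothetical.

```python
# Sketch of the PubSubHubbub (PuSH) subscription handshake, subscriber side:
# build the form-encoded body of the POST sent to the hub.
from urllib.parse import urlencode

def subscription_request(hub_url, topic_url, callback_url):
    """Return (hub URL, form body) for a PuSH 0.3-style subscription POST."""
    body = urlencode({
        "hub.mode": "subscribe",
        "hub.topic": topic_url,        # the thing's Atom feed
        "hub.callback": callback_url,  # where the hub will push updates
        "hub.verify": "async",         # hub confirms via a GET to the callback
    })
    return hub_url, body

hub, body = subscription_request(
    "http://hub.example.org/",
    "http://sensor-node.local/temperature/feed.atom",
    "http://my-app.example.org/push-callback",
)
print(body)
```

The attraction for things is that the device itself only has to publish a plain Atom feed and ping the hub; the fan-out to subscribers is the hub's job, not the constrained device's.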
Overall, it seemed that this space is still open for further exploration. Speaking of which, we also presented a paper at the main conference about a lightweight messaging service for things called RMS (Vlad will tell you more about it here soon!).
Understanding the Needs for Web-enabled Things
Last but not least, one really tricky question emerged: “why do we do this?” We propose a re-programmable world where everything is created not as a single-purpose object but rather as an API ready for opportunistic applications. But do people want that, and why?
Most of the people there believed they do, for various reasons ranging from sustainability (objects get a second life by being involved in new use cases) to customization (things are often not quite the way we want them to be) and the satisfaction of DIY (Do It Yourself).
However, raising this question is key, and it revealed the strong need to better understand the “mashup space” from an end-user point of view. What would people like to mash up in their homes, cities and offices?