Main Page


Semantic Web Service Challenge: Evaluating Semantic Web Services Mediation, Choreography and Discovery

sponsored by
SEALS, KW, STI Innsbruck, STI International, and SAP


NEW: The SWS Challenge will be supported by and aligned with the SEALS Project from June 1, 2009
NEW: The SWS Challenge has a blog at http://blog.sws-challenge.org/

The goal of the SWS Challenge is to develop a common understanding of various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. The intent of this challenge is to explore the trade-offs among existing approaches. Additionally, we would like to find out which parts of the problem space are not yet covered.

Our most important service is that we provide a certification of SOA (Service Oriented Architecture) technologies. We challenge the technical community to show what their web service mediation, discovery, and composition technologies can really do.

Jump to

Evaluation Results

Detailed Description

The Challenge workshops seek participation from industry and academic researchers developing software components and/or intelligent agents that have the ability to automate mediation, choreography and discovery processes between Web services.

The Challenge aims to provide a forum for discussion based on a common application base. The approach is to provide a set of problems that participants solve in a series of workshops. In each workshop, participants self-select which scenario problems and sub-problems they would like to attempt to solve. The call for problem solutions is a continuous one: new participants are invited to start working on the challenge problems at any time and to present their solutions at the next workshop! Solutions are verified by Challenge staff; participants must invoke the right web services with the right sequence of correct messages in order to solve each problem. In addition, we attempt to evaluate the level of effort required by each software engineering approach when moving from a problem to one of its sub-problems. This evaluation methodology is evolving in the W3C SWS Testbed Incubator; the most recent version is linked under Scenarios, as are the problems and past solutions. As of February 2008, there is a report of Recommended Best Practices from the Incubator.
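As a concrete illustration of this verification criterion, the following minimal sketch (in Python; the endpoint URL, operation name, and message content are invented for illustration and do not correspond to actual testbed addresses) shows what "invoking a web service with a correct message" amounts to at the wire level. A real solution must produce such calls in the correct order, normally generated by the participant's mediation or discovery machinery rather than hand-coded.

    # Minimal sketch (hypothetical endpoint and message): a solution is judged by
    # whether it sends the right messages, in the right order, to the testbed services.
    import urllib.request

    ENDPOINT = "http://example.org/testbed/moon/CRMService"  # hypothetical URL
    SOAP_ACTION = "searchCustomer"                           # hypothetical operation

    request_body = """<?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <searchCustomer>
          <companyName>Blue Company</companyName>
        </searchCustomer>
      </soapenv:Body>
    </soapenv:Envelope>"""

    req = urllib.request.Request(
        ENDPOINT,
        data=request_body.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": SOAP_ACTION},
    )
    with urllib.request.urlopen(req) as response:
        # The reply feeds the next message in the sequence.
        print(response.read().decode("utf-8"))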

The most current solutions and their evaluations are always displayed below on this home page. This constitutes a certification that the Challenge staff has verified the solution and that the workshop attendees evaluated the software engineering of the solution and came to a consensus, possibly with some footnotes, as below. Participants may then make claims on their home websites, linked to this wiki, and may use the Challenge logo to indicate certification of the claims.

Participants are also expected to share their code and/or ontologies and to add scenarios. This is an "open source" initiative to create a pool of re-usable web services (test services, but real, invokable ones) and associated ontologies. Participants are encouraged to "steal" parts of solutions from each other in order to eventually converge upon the "best of breed" solutions for each type of problem.

Because of this methodology, we will limit the number of participants to a relatively small group so that we can carefully examine the solution of each participant, usually on the 2nd day of the workshop.

Related Work

This SWS Challenge is related to but distinct from the IEEE Web Services Challenge. The WSC is indeed beginning to consider semantics in relating XML descriptions of the input and output messages of the WSDL. The SWSC allows participants to provide additional semantic annotations of the WSDL in order to solve the problems and also evaluates the efficacy of the different approaches to doing so.
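To give an idea of what such an annotation might look like in practice, the sketch below (Python standard library only; the file name is hypothetical) lists SAWSDL modelReference annotations found in a WSDL document. SAWSDL is just one possible annotation mechanism; participants may use whatever formalism they prefer.

    # Sketch: list SAWSDL modelReference annotations found in a (hypothetical) WSDL file.
    # SAWSDL attaches references to ontology concepts via the sawsdl:modelReference attribute.
    import xml.etree.ElementTree as ET

    SAWSDL_NS = "http://www.w3.org/ns/sawsdl"

    tree = ET.parse("service.wsdl")  # hypothetical local copy of a scenario WSDL
    for element in tree.iter():
        ref = element.get("{%s}modelReference" % SAWSDL_NS)
        if ref:
            # Element carries one or more ontology concept references.
            print(element.tag, "->", ref)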

Similarly, this SWS Challenge is related to but distinct from the Semantic Services Selection (S3) Contest. It is related in that both initiatives attempt to create common testbeds of services and both evaluate the efficacy of semantic annotations for service selection. However, the SWS Challenge problem set is not limited to service discovery but includes other types of problems, such as service mediation and composition.

More importantly, with respect to both the WSC and the S3 Contest, the SWS Challenge emphasizes not computing speed but rather programmer productivity. It assumes that semantic annotations of web services (derived from the natural-language scenario descriptions in the formalism of the participant's choice) will demonstrate such productivity, but all approaches are welcome.

We note that the S3 Contest additionally makes a comparative measurement of the retrieval performance of tools for semantic service discovery (e.g. recall, precision, F1, accuracy, average query response time), which are not measured by the SWS Challenge.
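For readers unfamiliar with these measures, the following small worked example (Python, with invented relevance judgements) shows how precision, recall, and F1 are typically computed for a single discovery query; the S3 Contest aggregates such figures over many queries, whereas the SWS Challenge does not measure them at all.

    # Worked example (invented data): retrieval measures for one discovery query.
    relevant = {"ServiceA", "ServiceB", "ServiceC"}    # services that actually satisfy the goal
    retrieved = {"ServiceA", "ServiceB", "ServiceD"}   # services returned by the matchmaker

    true_positives = len(relevant & retrieved)
    precision = true_positives / len(retrieved)        # 2/3, about 0.67
    recall = true_positives / len(relevant)            # 2/3, about 0.67
    f1 = 2 * precision * recall / (precision + recall) # about 0.67

    print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")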

Finally, unlike either the WSC or the S3 Contest, the SWS Challenge is a certification challenge rather than a contest. One may see from the results that certain technologies can solve more problems than others, but there is no official winner. All participants are winners in that they have been certified to be able to substantiate the claims of their papers by demonstrating their solution on a common set of problems, in comparison to, and in consensus with, other participants and the Challenge staff.

Surprise Problem Agenda

A special feature of the SWS Challenge is that we also provide a surprise problem. The objective is to test the adaptability of the approaches. The problem is a variation on one previously solved, and we announce in advance which public base scenario it is a variant of. The process constrains the amount of time teams have to solve the surprise problem. As the Challenge develops, we plan for the problems to get harder and the times shorter, so that not all teams can succeed with all surprise problems and some differentiation emerges among the approaches and problems.

The surprise problem is kept secret. Teams announce their intent to attempt it by joining the process, and participating teams should already have solved the base scenario. Because the problem is secret, teams that have attempted it, whether or not they succeeded, are ineligible to try the same problem at another workshop; we therefore indicate both outcomes in the results. Typically, we run only one surprise problem at a workshop due to resource restrictions, but there may be exceptions to this practice.

The process is summarized in the table below. See the Karlsruhe workshop for an example of actual dates.

Date/Time | Phase | Description
(1) Day 0 (deadline) | "Code freeze" credentials distribution | Committed participants receive instructions and credentials for Phase 2.
(2) Day 1 | "Code freeze" submission | Deadline for submission of existing solutions.
(3) Day 2 | Surprise problem announcement | Committed participants gain access to the surprise problem description.
(4) Day 2 + n (depending on the problem) | Solution submission | Deadline for surprise problem solution submissions.
(5) Next day | Solution verification | The surprise problem solutions are verified and reported.

A solution submission (Phase 4) should be accompanied by a document clearly stating all changes that were introduced to the frozen version in order to meet the surprise problem requirements. Further, we may ask at the workshop that participants demonstrate on the spot the ability to respond to minor changes to the surprise problem and obtain the new correct answer.

Most Recent Aggregated Certification Results

The tables below reflect the aggregated certification results of all workshops up to and including the ECOWS 2009 workshop (Eindhoven, November 2009). They show the extent to which participating solutions were able to solve particular problem levels of the mediation and discovery scenarios. For detailed information about the individual solutions and to access the available technical content, please visit the solution overview and documentation page or use the links in the table headers below.

Aggregated Evaluation Results for the Mediation Scenarios
Problem Level PoliMi - Cefriel
(Solution Details)
DERI AT & DERI IE
(Solution Details)
FSU Jena
(Solution Details)
University of Dortmund & University of Potsdam (jABC)
(Solution Details)
University of Dortmund & University of Potsdam (LTL)
(Solution Details)
University of Dortmund & University of Potsdam & SAP Research
(Solution Details)
Fraunhofer FOKUS
(Solution Details)
LSDIS Labs
(Solution Details)
IBM - Max Maximilien
(Solution Details)
Novay (formerly, Telematica Instituut) & University of Twente
(Solution Details)
0: Static mediation
1a: Changes data mediation 1
1b: Changes process mediation 4 2
1c: Mediation/integration for payment authorization √+ √+ 5
1d: Mediation Surprise √+ √+ √+ √+

1 Only adapters changed

2 Different addresses at the line-item level were not handled correctly

4 Abstract code model change

5 The mediator runs in the simulation environment only. It is accessible from the testbed and properly interacts with Moon and Blue, but it has not been transformed to BPEL and made available as a web service.

+ Successfully implemented "Surprise Problem Changes" (see Surprise Problem Methodology)



Aggregated Evaluation Results for the Discovery Scenarios
Problem Level PoliMi - Cefriel
(Solution Details)
University Milano-Bicocca - Cefriel
(Solution Details)
DERI AT & DERI IE
(Solution Details)
FSU Jena
(Solution Details)
University of Dortmund & University of Potsdam
(Solution Details)
Shipping Discovery Scenario 2a: Discovery based on Destination 1
2b: Discovery based on Destination and Weight 2 1
2c: Discovery based on Destination, Weight and Price 1
2d: Discovery involving simple composition
2e: Discovery including temporal reasoning 3
Hardware Purchasing Scenario 3a: Discovery based on clearly defined product specifications - Goal A1
3a: Discovery based on clearly defined product specifications - Goal A2
3b: Discovery 3B - Additionally specify preferences - Goal B1
3b: Discovery 3B - Additionally specify preferences - Goal B2
3c: Discovery 3C Composition of services - Goal C1 (unrelated composition)
3c: Discovery 3C Composition of services - Goal C2 (correlated composition)
3c: Discovery 3C Composition of services - Goal C3 (unrelated but global condition)
3c: Discovery 3C Composition of services - Goal C4 (unrelated with global condition and preferences)
Logistics Management Scenario A1: Standard single order 4
A2: A.D.R. rules 4
A3: A.T.P. truck 4
B1: A2 + simple soft constraints 4
C1: A3 + soft constraints with preferences 4
D1: warehouse 4
E1: A.T.P. truck + warehouse 4

1 No automated invocation

2 Arithmetic calculation performed by external Web services (which is perfectly acceptable)

3 The algorithm is correct, but not complete

4 The representation and execution of the A.T.P. and A.D.R. regulations as well as the preference policies were solved correctly, but there were bugs in the underlying functional discovery with respect to the computation of shipping times and the corresponding filtering of providers.

About this Wiki

This wiki serves as the collaboration platform for the Semantic Web Service Challenge. Note: to use the testbed and to edit this wiki you need an account. You most likely already have one; check the user list. If you forgot your password, go to the login page and request that it be resent. If you have not yet received an account, please mail Srdjan Komazec (srdjan.komazec<at>sti2<dot>at).