News and Updates
- The workshop's program has been posted.
- Instructions for camera-ready papers have been posted.
- The submission site is now open. The submission deadline has been extended to March 25, 2013.
- Prof. Volker Markl (TU Berlin) will deliver the workshop's keynote.
Motivation and Scope
Testing and tuning database systems has become increasingly expensive: functionality keeps expanding, new applications and usage patterns are emerging, cloud computing and the software-as-a-service paradigm have led to new system architectures, and the era of Big Data has ushered in an ecosystem of tools and platforms that may be loosely connected yet function as a single "unit" for the end user. It is not unusual to find that fifty percent of a DBMS's development cost derives from testing and tuning, and that several months of testing are needed before a new release can ship. This situation will only get worse unless new ideas can be brought to bear.
Building on the success of the five previous workshops, the goal of DBTest 2013 is to bring together researchers and practitioners from academia and industry to discuss key problems and ideas related to testing database systems and applications. The long-term objective is to reduce the cost and time required to test and tune database products so that users and vendors can spend more time and energy on actual innovations.
Topics of Interest
- Testing and resilience in service-oriented architectures
- Testing issues in multi-tenant database systems and cloud database systems
- Testing issues in large-scale analytics systems (e.g., Hadoop)
- Testing database systems, data storage services, and database applications
- Generation of test artifacts (e.g., test databases, test queries)
- Interactions between testing and tuning of database systems
- Maximizing code coverage of database systems/applications
- Testing the reliability and availability of database systems
- Testing the reliability of user-defined functions
- Improving the usability of database systems
- Testing and designing systems that are robust to estimation inaccuracies
- Testing the efficiency of adaptive policies and components
- Identifying performance bottlenecks
- Robust query processing
- Metrics for predictability of query and workload performance
- Security and vulnerability testing
- War stories and vision papers