ADMS 2011
Second International Workshop on Accelerating Data Management Systems Using Modern Processor and Storage Architectures

September 2, 2011
In conjunction with VLDB 2011
St. Helens Room, Westin Hotel, Seattle, WA
Workshop Overview

The objective of this one-day workshop is to investigate opportunities in accelerating data management systems and workloads (which include traditional OLTP, data warehousing/OLAP, ETL, Streaming/Real-time, Business Analytics, and XML/RDF Processing) using processors (e.g., commodity and specialized Multi-core, GPUs, and FPGAs), storage systems (e.g., Storage-class Memories like SSDs and Phase-change Memory), and hybrid programming models like CUDA and OpenCL.

The current data management scenario is characterized by the following trends: traditional OLTP and OLAP/data warehousing systems are being used for increasingly complex workloads (e.g., petabytes of data, complex queries under real-time constraints, etc.); applications are becoming far more distributed, often consisting of different data processing components; non-traditional domains such as bio-informatics, social networking, mobile computing, sensor applications, and gaming are generating growing quantities of data of different types; economic and energy constraints are leading to greater consolidation and virtualization of resources; and analyzing vast quantities of complex data is becoming more important than traditional transactional processing.

At the same time, there have been tremendous improvements in CPU and memory technologies. Newer processors offer greater compute and memory capabilities and are optimized for multiple application domains. Commodity systems increasingly use multi-core processors with more than 4 cores per chip, and enterprise-class systems use processors with 8 cores per chip, where each core can execute up to 4 simultaneous threads (4-way SMT). Specialized multi-core processors such as GPUs have brought the computational capabilities of supercomputers to cheaper commodity machines. On the storage front, flash-based solid state devices (SSDs) are becoming smaller in size, cheaper in price, and larger in capacity. Exotic technologies like Phase-change Memory are on the near-term horizon and can be game-changers in the way data is stored and processed.

Despite these trends, there is currently limited usage of these technologies in the data management domain. Naive usage of multi-core processors or SSDs often leads to unbalanced systems. It is therefore important to evaluate applications in a holistic manner to ensure effective utilization of CPU and memory resources. This workshop aims to understand the impact of modern hardware technologies on accelerating core components of data management workloads. Specifically, the workshop hopes to explore the interplay between overall system design, core algorithms, query optimization strategies, programming approaches, performance modeling and evaluation, etc., from the perspective of data management applications.

Topics of Interest

The suggested topics of interest include, but are not restricted to:

  • Hardware and System Issues in Domain-specific Accelerators
  • New Programming Methodologies for Data Management Problems on Modern Hardware
  • Query Processing for Hybrid Architectures
  • Large-scale I/O-intensive (Big Data) Applications
  • Parallelizing/Accelerating Analytical (e.g., Data Mining) Workloads
  • Autonomic Tuning for Data Management Workloads on Hybrid Architectures
  • Algorithms for Accelerating Multi-modal Multi-tiered Systems
  • Energy Efficient Software-Hardware Co-design for Data Management Workloads
  • Parallelizing non-traditional (e.g., graph mining) workloads
  • Algorithms and Performance Models for modern Storage Sub-systems
  • Data Layout Issues for Modern Memory and Storage Hierarchies
  • Novel Applications of Low-Power Processors (e.g., ARM Processor based systems)
  • New Benchmarking Methodologies for Storage-class Memories

Workshop Program

9 am-5.30 pm, St. Helens Room

8-8.30 am: Breakfast (Cascade North Foyer & San Juan Foyer)

9-9.10 am: Welcome Comments

9.15-10.30 am: Keynote by Sumanta Chatterjee (VP of Development in the Oracle Database Group)

Clusters Accelerated: a Study (Slides)

Traditional DB configurations with monolithic SMP servers and storage arrays have very high costs with limited scalability. I will present an alternative grid-based design that has superior scaling factors at a fraction of the cost. In our architecture, both compute and storage nodes are connected via an RDMA-capable Infiniband fabric. I will discuss how the grid-based architecture can be leveraged for large-scale database deployments in the enterprise.

10.30-10.45 am: Coffee Break (Cascade North Foyer & San Juan Foyer)

10.45 am-12.15 pm Session 1: Optimizing for Multi-core Processors

12.15-1.45 pm: Lunch (Grand 3)

1.45-3.15 pm Session 2: Optimizing for Memory Sub-systems

3.15-3.45 pm: Coffee Break (Cascade North Foyer & San Juan Foyer)

3.45-5.30 pm: Workshop Panel (Moderator: Guy Lohman, IBM Almaden Research)

In the multi-core age, how do larger, faster, cheaper and more responsive memory sub-systems affect data management?

Panelists include:

5.30 pm: Concluding Remarks

Important Dates

  • Paper Submission: Friday, June 24, 2011 (Updated)
  • Notification of Acceptance: Monday, July 11, 2011
  • Camera-ready Submission: Monday, July 25, 2011
  • Workshop Date: Friday, September 2, 2011

Submission Instructions

The workshop proceedings will be published by VLDB.

Submission Site 

All submissions will be handled electronically via EasyChair.

Formatting Guidelines 

It is the authors' responsibility to ensure that their submissions adhere strictly to the VLDB format detailed here. In particular, the format may not be modified with the objective of squeezing in more material. Submissions that do not comply with the formatting detailed here will be rejected without review.

The paper length is limited to 8 pages. You are permitted a 4-page appendix beyond these 8 pages. However, reviewers are not required to read this appendix, and the paper should be self-contained, complete, and understandable within the 8 pages. Typically, it is appropriate to place proofs, algorithm pseudocode, data set descriptions, etc. in the appendix. It is usually not appropriate to move definitions, theorems, figures, the bibliography, or experimental results to the appendix. Any references to the appendix from the main paper should only be in the nature of "for additional detail see...". In particular, there should be nothing in the appendix that is necessary for a reader to understand the paper. This 8+4 page rule applies to both submissions and camera-ready versions.

We will use the same document templates as the VLDB conference. You can find them here.


Workshop Co-Chairs

       For questions regarding the workshop, please send email to

Program Committee

  • Sumanta Chatterjee, Oracle
  • John Davis, Microsoft Research
  • Christophe Dubach, University of Edinburgh
  • Pradeep Dubey, Intel
  • Michael Garland, Nvidia
  • Bugra Gedik, IBM Watson Research
  • Goetz Graefe, HP Labs
  • Annie Foong, Intel
  • George Mihaila, Google
  • C. Mohan, IBM Almaden Research
  • Bongki Moon, University of Arizona
  • Ji-Yong Shin, Cornell University
  • Sayantan Sur, Ohio State University
  • Jens Teubner, ETH Zürich
  • Philip S. Yu, University of Illinois, Chicago