Call for Papers
Processing very large data sets requires a unique combination of data management and distributed systems engineering expertise. The data management challenges include, among others, developing new approaches and algorithms that reduce the complexity of data processing and allow incremental, continuous, and as accurate as possible production of results. At the same time, the sheer volume and velocity of the data require systems that can automatically and adaptively scale up and out in order to accommodate big data processing algorithms.
The focus of this workshop is on new cloud-based data management and processing systems that span tens of thousands of machines in order to support the processing of contemporary, very large data sets. Such systems require novel architectures, programming models, and designs that go beyond approaches used in fixed-size compute clusters. These systems aim to support users who interactively explore and analyze large and rapidly changing data sets. The right platforms and techniques can simplify and accelerate the design, implementation, and execution of new “big data” applications.
In the past, data processing in the cloud was dominated by batch processing paradigms such as MapReduce, but increasingly users seek to consume their results in near real time. Efficiently supporting these new types of applications requires overcoming the challenges of adaptive, near real-time data processing in cloud environments. Ultimately, adaptive low-latency data processing across large numbers of machines brings a new set of problems spanning systems, distributed systems and geo-distribution, networking, fault tolerance, and data management research.
Instead of providing a forum for merely extending existing cloud data systems and platforms, we hope to encourage the discussion of radical new alternatives. In particular, we want to foster the development of new infrastructures and platforms that rethink how data can be processed in cloud-based systems. We plan to attract research that has the potential to underpin the next generation of scalable and efficient data management applications on top of high-level, flexible platforms.
The topics of the workshop relate to various aspects of cloud-based data management platforms and the resulting challenges for the supporting cloud infrastructures. Specifically, we invite submissions focusing on, but not limited to, the following topics:
- adaptive data management
- case studies
- cloud networking
- cloud storage
- cloud security, privacy, compliance, and trust
- dependability and fault tolerance
- elasticity and adaptive scheduling
- energy management
- large-scale and distributed deployments
- mobile and edge cloud computing
- multi-tenancy and virtualization
- new processing paradigms
- new programming models
- predictability in cloud environments
- resource allocation and provisioning
- vertical and horizontal scalability
The workshop format will change this year to facilitate submissions and to maximize the feedback authors receive on their work. Submissions are expected to be one page long, clearly and concisely conveying the challenges being addressed and hinting at key aspects of the proposed approach or solution. Reviews will also be short, on the order of one paragraph per review. However, every submission will be reviewed by every member of the workshop’s program committee.
The workshop will not publish proceedings, making it easy to submit improved versions of the same ideas to other venues.
Paper submission deadline: February 26, 2016 (23:59, anywhere on earth)
Notification of acceptance: March 23rd, 2016 (was 11th)
Paper submission can be done here.
The CloudDP 2016 workshop is co-located with the EuroSys 2016 conference. Please refer to the EuroSys 2016 local information pages for details about the venue and accommodation.