Show simple item record

dc.contributor.supervisor: Humm, Bernhard, Prof. Dr.
dc.contributor.author: Swientek, Martin
dc.contributor.other: Faculty of Science and Engineering (en_US)
dc.date.accessioned: 2015-07-28T09:01:02Z
dc.date.available: 2015-07-28T09:01:02Z
dc.date.issued: 2015
dc.identifier: 10178189 (en_US)
dc.identifier.uri: http://hdl.handle.net/10026.1/3461
dc.description.abstract (en_US):

Enterprise systems such as customer-billing systems or financial transaction systems are required to process large volumes of data within a fixed period of time. Increasingly, these systems must also provide near-time processing of data to support new service offerings. Common data-processing systems are optimized either for high maximum throughput or for low latency.

This thesis proposes the concept of an adaptive middleware, a new approach to designing systems for bulk data processing. The adaptive middleware adapts its processing type fluidly between batch processing and single-event processing. Using message aggregation, message routing and a closed feedback loop to adjust the data granularity at runtime, the system minimizes end-to-end latency under different load scenarios.
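
To make the idea concrete, the following is a minimal sketch in Java (not the thesis's research prototype) of a message aggregator whose batch size, i.e. the data granularity, is adjusted at runtime by a closed feedback loop; the class and method names and the simple doubling/halving control rule are illustrative assumptions.

import java.util.ArrayList;
import java.util.List;

/** Illustrative aggregator: batches grow under load and shrink toward single events otherwise. */
public class AdaptiveAggregator {

    private final long targetLatencyMs;       // desired end-to-end latency
    private final int maxBatchSize;
    private int batchSize = 1;                // current data granularity (1 = single-event processing)
    private final List<String> buffer = new ArrayList<>();

    public AdaptiveAggregator(long targetLatencyMs, int maxBatchSize) {
        this.targetLatencyMs = targetLatencyMs;
        this.maxBatchSize = maxBatchSize;
    }

    /** Accepts one incoming message and emits an aggregated batch once it is full. */
    public List<String> onMessage(String message) {
        buffer.add(message);
        if (buffer.size() >= batchSize) {
            List<String> batch = new ArrayList<>(buffer);
            buffer.clear();
            return batch;                     // handed on to the processing stage
        }
        return List.of();                     // nothing to emit yet
    }

    /** Closed feedback loop: called periodically with the measured end-to-end latency. */
    public void onLatencyMeasured(long measuredLatencyMs) {
        if (measuredLatencyMs > targetLatencyMs) {
            batchSize = Math.min(maxBatchSize, batchSize * 2);  // falling behind: aggregate more for throughput
        } else {
            batchSize = Math.max(1, batchSize / 2);             // headroom: move toward single-event processing
        }
    }

    public int currentBatchSize() {
        return batchSize;
    }
}

This sketch only adjusts the aggregation granularity; the middleware described in the abstract additionally uses message routing, which is not modelled here.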

The relationship between end-to-end latency and throughput in batch and message-based systems is formally analyzed, and a performance evaluation of both processing types is conducted. In addition, the impact of message aggregation on throughput and latency is investigated.
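
One simple way to picture the trade-off analyzed here (an illustrative model with assumed parameters, not the thesis's actual derivation): let \(\lambda\) be the message arrival rate, \(s\) a fixed per-batch setup cost and \(c\) the per-message processing cost. Aggregating \(n\) messages per batch then gives roughly

\[
  \bar{L}(n) \;\approx\; \frac{n-1}{2\lambda} + s + n\,c,
  \qquad
  X_{\max}(n) \;\approx\; \frac{n}{s + n\,c},
\]

so larger batches amortize the setup cost \(s\) and push the maximum throughput \(X_{\max}(n)\) toward \(1/c\), while the aggregation delay \(\tfrac{n-1}{2\lambda}\) raises the mean end-to-end latency \(\bar{L}(n)\); adjusting \(n\) at runtime is how an adaptive middleware can trade one against the other.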

The proposed middleware concept has been implemented as a research prototype and evaluated. The results show that the concept is viable and able to optimize the end-to-end latency of a system.

The design, implementation and operation of an adaptive system for bulk data processing differ from common approaches to implementing enterprise systems. A conceptual framework has been developed to guide the process of building adaptive software for bulk data processing. It defines the required roles and their skills, the necessary tasks and their relationships, the artifacts created and required by the different tasks, the tools needed to perform the tasks, and the processes that describe the order of the tasks.
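
The framework's elements and their relationships can be pictured with a small, purely illustrative Java data model (the type names below are hypothetical, not taken from the thesis):

import java.util.List;

record Role(String name, List<String> skills) {}            // roles and their skills
record Artifact(String name) {}                              // artifacts created and required by tasks
record Tool(String name) {}                                  // tools needed to perform tasks
record Task(String name, Role responsible,
            List<Artifact> requires, List<Artifact> creates,
            List<Tool> tools) {}                             // tasks and their relationships
record Process(String name, List<Task> orderedTasks) {}      // processes define the order of tasks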

dc.language.iso: en (en_US)
dc.publisher: Plymouth University (en_US)
dc.subject: Adaptive Middleware (en_US)
dc.subject: Message Aggregation (en_US)
dc.subject: Batch Processing (en_US)
dc.subject: Message-Based Processing (en_US)
dc.subject: Messaging (en_US)
dc.subject: Throughput (en_US)
dc.subject: End-to-End Latency (en_US)
dc.subject: Near-Time Processing (en_US)
dc.subject: Performance Evaluation (en_US)
dc.subject: Feedback-Control (en_US)
dc.title: High-Performance Near-Time Processing of Bulk Data (en_US)
dc.type: Thesis (en_US)
plymouth.version: Full version (en_US)
dc.identifier.doi: http://dx.doi.org/10.24382/1311



