Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      05 January 2017, Volume 32 Issue 1   
    Special Section on Dataflow Architecture
    Preface
    Wen-Guang Chen
    Journal of Data Acquisition and Processing, 2017, 32 (1): 1-2. 
    Principles to Support Modular Software Construction
    Jack B. Dennis
    Journal of Data Acquisition and Processing, 2017, 32 (1): 3-10. 
    The construction of large software systems is always achieved through the assembly of independently written components (program modules). For these software components to work together, they must share a common set of data types and principles for representing structured data such as arrays of values and files. This common set of tools for creating and operating on data objects is provided by the infrastructure of the computer system: the hardware, operating system, and runtime code. Because the nature and properties of these tools are crucial for the correct operation of software components and their interoperation, it is essential to have a precise specification that may be used to verify the correctness of application software on the one hand, and the correctness of system behavior on the other. We call such a specification a program execution model (PXM). It is evident that the properties of the PXM implemented by a computer system can have a serious impact on the ability of application programmers to practice modular software construction. This paper discusses the concept of program execution models and presents a set of principles that a PXM must satisfy to provide a sound basis for modular software construction. Because parallel program execution on computer systems with many processing units is an essential part of contemporary computing environments, the expression of parallelism and modular software construction using components involving parallel operations is included in this treatment. The conclusion is that it is possible to build computer systems that implement a PXM within which any parallel program may be used, unmodified, as a component for building more substantial parallel programs.
    An Efficient Network-on-Chip Router for Dataflow Architecture
    Xiao-Wei Shen, Xiao-Chun Ye, Xu Tan, Da Wang, Lunkai Zhang, Wen-Ming Li, Zhi-Min Zhang, Dong-Rui Fan, Ning-Hui Sun
    Journal of Data Acquisition and Processing, 2017, 32 (1): 11-25. 
    Dataflow architecture has shown its advantages in many high-performance computing cases. In dataflow computing, large amounts of data are frequently transferred among processing elements through the network-on-chip (NoC). Thus, the router design has a significant impact on the performance of dataflow architecture. Common routers are designed for control-flow multi-core architectures, and we find they are not suitable for dataflow architecture. In this work, we analyze and extract the features of data transfers in NoCs of dataflow architecture: multiple destinations, a high injection rate, and performance sensitivity to delay. Based on these three features, we propose a novel and efficient NoC router for dataflow architecture. The proposed router supports multi-destination transfer; thus it can deliver data to multiple destinations in a single transfer. Moreover, the router adopts output buffering to maximize throughput and non-flit packets to minimize transfer delay. Experimental results show that the proposed router can improve the performance of dataflow architecture by 3.6x over a state-of-the-art router.
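To make the multi-destination idea concrete, here is a toy sketch in which one injected packet carries a destination bitmask and is forked inside the network, rather than being sent as one unicast per destination. The packet format and the next_hop_for helper are illustrative assumptions, not the proposed router's actual design:

```python
# Toy model of multi-destination transfer via a destination bitmask.
def make_packet(payload, dest_ids):
    mask = 0
    for d in dest_ids:
        mask |= 1 << d                      # one bit per destination node
    return {"payload": payload, "dest_mask": mask}

def route_step(packet, here, next_hop_for):
    """At router `here`: deliver locally if our bit is set, then split the
    remaining destinations across output ports (one copy per port)."""
    delivered = bool(packet["dest_mask"] & (1 << here))
    remaining = packet["dest_mask"] & ~(1 << here)
    per_port = {}
    d = 0
    while remaining >> d:
        if remaining & (1 << d):
            port = next_hop_for(here, d)
            per_port[port] = per_port.get(port, 0) | (1 << d)
        d += 1
    return delivered, per_port              # forward one packet per port
```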
    Computer Architecture and Systems
    A Lookahead Read Cache: Improving Read Performance for Deduplication Backup Storage
    Dongchul Park, Ziqi Fan, Young Jin Nam, David H. C. Du
    Journal of Data Acquisition and Processing, 2017, 32 (1): 26-40. 
    Data deduplication (dedupe for short) is a special data compression technique. It has been widely adopted to save backup time as well as storage space, particularly in backup storage systems. Therefore, most dedupe research has primarily focused on improving dedupe write performance. However, dedupe read performance in backup storage is also a crucial problem for storage recovery. This paper designs a new dedupe storage read cache for backup applications that improves read performance by exploiting a special characteristic: the read sequence is the same as the write sequence. Consequently, for better cache utilization, by looking ahead at future references within a moving window, it evicts from the cache the victims with the fewest future accesses. Moreover, to further improve read cache performance, it maintains a small log buffer to judiciously cache data chunks that will be accessed in the future. Extensive experiments with real-world backup workloads demonstrate that the proposed read cache scheme improves read performance by up to 64.3%.
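Because a restore replays the backup write sequence, the cache can look ahead in the known chunk sequence and approximate Belady's optimal replacement within a moving window. A minimal sketch of that eviction rule (names and window handling are illustrative; the paper's log buffer is omitted):

```python
# Sketch: evict the cached chunk whose next reference lies furthest ahead
# (or nowhere) within a lookahead window over the known future read sequence.
def pick_victim(cached_chunks, future_refs, window=1000):
    upcoming = future_refs[:window]
    def next_use(chunk_id):
        try:
            return upcoming.index(chunk_id)      # smaller index = sooner reuse
        except ValueError:
            return window                        # not referenced in the window
    return max(cached_chunks, key=next_use)      # furthest reuse is evicted
```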
    dCompaction: Speeding up Compaction of the LSM-Tree via Delayed Compaction
    Feng-Feng Pan, Yin-Liang Yue, Jin Xiong
    Journal of Data Acquisition and Processing, 2017, 32 (1): 41-54. 
    Key-value (KV) stores have become a backbone of large-scale applications in today's data centers. Write-optimized data structures like the Log-Structured Merge-tree (LSM-tree) and their variants are widely used in KV storage systems like BigTable and RocksDB. A conventional LSM-tree organizes KV items into multiple, successively larger components, and uses compaction to push KV items from one smaller component to the adjacent larger component until the KV items reach the largest component. Unfortunately, the current compaction scheme incurs significant write amplification due to repeated reads and writes of KV items, which results in poor throughput. We propose a new compaction scheme, delayed compaction (dCompaction), which decreases write amplification. dCompaction postpones some compactions and gathers them into a following compaction. In this way, it avoids repeated reads and writes of KV items during compaction, and consequently improves the throughput of LSM-tree based KV stores. We implement dCompaction on RocksDB and conduct extensive experiments. Validation using the YCSB framework shows that, compared with RocksDB, dCompaction improves write performance by about 40% while providing comparable read performance.
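A toy model of the delayed-compaction idea: postponed merges accumulate and are later executed together as one k-way merge, so each KV item is read and written once rather than repeatedly. This is a sketch under assumed names, not dCompaction's actual RocksDB implementation:

```python
# Toy sketch: batch several pending compactions into one physical merge.
import heapq

class Level:
    def __init__(self):
        self.runs = []           # each run is a sorted list of (key, value)
        self.pending = []        # postponed ("virtual") compaction inputs

    def request_compaction(self, run):
        self.pending.append(run)             # delay instead of merging now

    def compact_if_due(self, threshold=4):
        if len(self.pending) < threshold:
            return                           # keep postponing
        # one k-way merge over all postponed runs: each item moves only once
        self.runs.append(list(heapq.merge(*self.pending)))
        self.pending.clear()
```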
    MemSC: A Scan-Resistant and Compact Cache Replacement Framework for Memory-Based Key-Value Cache Systems
    Mei Li, Hong-Jun Zhang, Yan-Jun Wu, Chen Zhao
    Journal of Data Acquisition and Processing, 2017, 32 (1): 55-67. 
    Memory-based key-value cache systems, such as Memcached and Redis, have become indispensable components of data center infrastructures and have been used to cache performance-critical data to avoid expensive back-end database accesses. As the memory is usually not large enough to hold all the items, cache replacement must be performed to evict some cached items to make room for newly arriving items when there is no free space. Many real-world workloads target small items and have frequent bursts of scans (a scan is a sequence of one-time access requests). The commonly used LRU policy does not work well under such workloads, since LRU needs a large amount of metadata and tends to discard hot items during scans. Small decreases in hit ratio can result in large end-to-end losses in these systems. This paper presents MemSC, a scan-resistant and compact cache replacement framework for Memcached. MemSC assigns a multi-granularity reference flag to each item, which requires only a few bits (two bits are enough for general use) per item to support scan-resistant cache replacement policies. To evaluate MemSC, we implement three representative cache replacement policies (MemSC-HM, MemSC-LH, and MemSC-LF) on MemSC and test them using various workloads. The experimental results show that MemSC outperforms prior techniques. Compared with the optimized LRU policy in Memcached, MemSC-LH reduces the cache miss ratio and the memory usage of the resulting system by up to 23% and 14%, respectively.
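The few-bits-per-item idea can be pictured with a CLOCK-style counter. In the sketch below (illustrative only, not MemSC's actual policies), items enter at level 0, so one-time scan accesses never lift an item and scan victims are evicted first:

```python
# Illustrative 2-bit reference-flag replacement (CLOCK-like).
class TinyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.flags = {}                       # item -> 2-bit counter (0..3)

    def access(self, item):
        if item in self.flags:
            self.flags[item] = min(3, self.flags[item] + 1)   # reuse detected
            return
        if len(self.flags) >= self.capacity:
            self._evict()
        self.flags[item] = 0                  # new (possibly scanned) item

    def _evict(self):
        # age items until one reaches 0, then evict it; one-time scan
        # accesses leave items at 0, so they are the first to go
        while True:
            victim = min(self.flags, key=self.flags.get)
            if self.flags[victim] == 0:
                del self.flags[victim]
                return
            for k in self.flags:
                self.flags[k] -= 1
```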
    Data Management and Data Mining
    Sparse Support Vector Machine with Lp Penalty for Feature Selection
    Lan Yao, Feng Zeng, Dong-Hui Li, Zhi-Gang Chen
    Journal of Data Acquisition and Processing, 2017, 32 (1): 68-77. 
    We study strategies for feature selection with the sparse support vector machine (SVM). Recently, the so-called Lp-SVM (0 < p < 1) has attracted much attention because it can encourage better sparsity than the widely used L1-SVM. However, Lp-SVM is a non-convex and non-Lipschitz optimization problem, and solving it numerically is challenging. In this paper, we reformulate the Lp-SVM into an optimization model with a linear objective function and smooth constraints (LOSC-SVM) so that it can be solved by numerical methods for smooth constrained optimization. Our numerical experiments on artificial datasets show that LOSC-SVM (0 < p < 1) can improve classification performance in both feature selection and classification by choosing a suitable parameter p. We also apply it to several real-life datasets, and the experimental results show that it is superior to L1-SVM.
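For reference, the Lp-penalized soft-margin linear SVM usually takes the following standard form (the paper's LOSC reformulation with smooth constraints is not reproduced here):

```latex
% Standard L_p-penalized linear SVM, 0 < p < 1: minimizing the L_p
% quasi-norm of w drives many feature weights to exactly zero.
\min_{w,\,b,\,\xi}\;\; \|w\|_p^p + C \sum_{i=1}^{n} \xi_i
\quad\text{s.t.}\quad y_i\,(w^{\top} x_i + b) \ge 1 - \xi_i,\qquad
\xi_i \ge 0,\quad i = 1,\dots,n.
```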
    Efficient Processing of Distributed Twig Queries Based on Node Distribution
    Xin Bi, Xiang-Guo Zhao, Guo-Ren Wang
    Journal of Data Acquisition and Processing, 2017, 32 (1): 78-92. 
    Massive XML data are increasingly generated for the representation, storage and exchange of web information. Twig query processing over massive XML data has become a research focus. However, most traditional algorithms cannot be directly implemented in a distributed manner. Some of the existing distributed algorithms generate many useless intermediate results and execute many join operations over partial results in most cases; others require a priori knowledge of the query pattern before XML partitioning, storage and query processing, which is impractical in the case of large-scale data or frequently arriving new queries. To improve efficiency and scalability, in this paper we propose a 3-phase distributed algorithm, DisT3, based on a node distribution mechanism that avoids unnecessary intermediate results. Furthermore, we propose a lightweight local index, ReP, with an enhanced XML partitioning approach that works with an arbitrary partitioning strategy, and based on ReP we propose an improved 2-phase distributed algorithm, DisT2ReP, to further reduce the communication cost. After analyzing the performance guarantees, we conduct extensive experiments to verify the efficiency and scalability of the proposed algorithms in distributed twig query applications.
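Twig matching commonly builds on a region encoding of XML nodes; the standard containment tests below are background for such algorithms, not the DisT3 algorithm itself:

```python
# Standard region encoding: each XML node carries (start, end, level).
from collections import namedtuple

Node = namedtuple("Node", ["start", "end", "level"])

def is_ancestor(a, d):
    # a's interval encloses d's interval iff a is an ancestor of d
    return a.start < d.start and d.end < a.end

def is_parent(a, d):
    return is_ancestor(a, d) and a.level == d.level - 1

# Matching a twig such as //a[b]/c then amounts to joining a-nodes with
# b-descendants and c-children via these predicates.
```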
    Approximate Continuous Top-k Query over Sliding Window
    Rui Zhu, Bin Wang, Shi-Ying Luo, Xiao-Chun Yang, Guo-Ren Wang
    Journal of Data Acquisition and Processing, 2017, 32 (1): 93-109. 
    Continuous top-k query over a sliding window is a fundamental problem in databases; it retrieves the k objects with the highest scores when the window slides. Existing studies mainly adopt exact algorithms to tackle this type of query, whose key idea is to maintain a subset of objects in the window and try to retrieve answers from it. However, all the existing algorithms are sensitive to query parameters and data distribution. In addition, they suffer from expensive overhead for incremental maintenance, and thus cannot satisfy real-time requirements. In this paper, we define a novel query named the (ε,δ)-approximate continuous top-k query, which returns approximate answers for the top-k query. In order to support this query efficiently, we propose a framework named PABF (Probabilistic Approximate Based Framework) for approximate top-k queries over sliding windows. We first maintain a self-adaptive pruning value, which filters out newly arrived objects that have a probability of less than 1 − δ of being a query result. Objects that are not filtered are combined if the score difference among them is less than a threshold. To efficiently maintain these combined results, PABF employs a multi-phase merging algorithm. Theoretical analysis indicates that, even in the worst case, maintaining each candidate requires only logarithmic complexity.
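The arrival-time filtering can be sketched with a plain threshold; window expiry, the probabilistic (1 − δ) filter, and result merging are omitted, and the names are illustrative:

```python
# Simplified sketch of threshold pruning for top-k over a stream.
import heapq
from itertools import count

class ApproxTopK:
    def __init__(self, k):
        self.k = k
        self.heap = []                        # min-heap of (score, seq, obj)
        self.seq = count()                    # tie-breaker for equal scores
        self.prune_below = float("-inf")      # self-adaptive pruning value

    def arrive(self, obj, score):
        if score < self.prune_below:
            return                            # filtered on arrival
        heapq.heappush(self.heap, (score, next(self.seq), obj))
        if len(self.heap) > self.k:
            heapq.heappop(self.heap)          # drop the current minimum
            self.prune_below = self.heap[0][0]    # tighten the filter

    def topk(self):
        return [(s, o) for s, _, o in sorted(self.heap, reverse=True)]
```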
    Survey
    Intelligent Visual Media Processing: When Graphics Meets Vision
    Ming-Ming Cheng, Qi-Bin Hou, Song-Hai Zhang, Paul L. Rosin
    Journal of Data Acquisition and Processing, 2017, 32 (1): 110-121. 
    The computer graphics and computer vision communities have been working closely together in recent years, and a variety of algorithms and applications have been developed to analyze and manipulate the visual media around us. There are three major driving forces behind this phenomenon: 1) the availability of big data from the Internet has created a demand for dealing with the ever-increasing, vast amount of resources; 2) powerful processing tools, such as deep neural networks, provide effective ways of learning how to deal with heterogeneous visual data; 3) new data capture devices, such as the Kinect, bridge the gap between algorithms for 2D image understanding and 3D model analysis. These driving forces have emerged only recently, and we believe that the computer graphics and computer vision communities are still at the beginning of their honeymoon phase. In this work we survey recent research on how computer vision techniques benefit computer graphics techniques and vice versa, covering research on analysis, manipulation, synthesis, and interaction. We also discuss existing problems and suggest possible directions for further research.
    A Survey on Pre-Processing in Image Matting
    Gui-Lin Yao
    Journal of Data Acquisition and Processing, 2017, 32 (1): 122-138. 
    Pre-processing is an important step in digital image matting, which aims to classify more accurate foreground and background pixels from the unknown region of the input three-region mask (Trimap). This step has no relation to the well-known matting equation and only compares color differences between the current unknown pixel and the known pixels. The newly classified pure pixels are then fed to the matting process as samples to improve the quality of the final matte. However, in the research field of image matting, the importance of the pre-processing step remains unclear. Moreover, there are no corresponding review articles for this step, and the quantitative comparison of Trimaps and alpha mattes after this step remains an open issue. In this paper, the necessity and the importance of the pre-processing step in image matting are first discussed in detail. Next, current pre-processing methods are introduced using two categories: static thresholding methods and dynamic thresholding methods. Analyses and experimental results show that static thresholding methods, especially the most popular iterative method, can make accurate pixel classifications in general Trimaps with relatively few unknown pixels. In much larger Trimaps, however, these methods are limited by their conservative color and spatial thresholds. In contrast, dynamic thresholding methods can make more aggressive classifications in more difficult cases, but they still suffer severely from noise and false classifications. In addition, the sharp boundary detector is further discussed as a prior on pure pixels. Finally, summaries and a more effective approach for pre-processing are presented, in comparison with the existing methods.
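Static thresholding amounts to a color-distance test against spatially nearby known pixels. A minimal sketch, assuming the usual Trimap coding (0 background, 128 unknown, 255 foreground) and hypothetical threshold values; real iterative methods repeat and refine this pass:

```python
# Illustrative static-thresholding pre-processing for image matting.
import numpy as np

def expand_trimap(img, trimap, color_t=15.0, space_t=9.0):
    """Classify an unknown pixel as pure fg/bg when some known pixel is
    close in both space and color; otherwise leave it unknown."""
    fg = np.argwhere(trimap == 255)          # known foreground coordinates
    bg = np.argwhere(trimap == 0)            # known background coordinates
    out = trimap.copy()
    for y, x in np.argwhere(trimap == 128):  # unknown region
        for known, label in ((fg, 255), (bg, 0)):
            d_space = np.linalg.norm(known - (y, x), axis=1)
            near = known[d_space <= space_t]
            if len(near) and np.any(
                    np.linalg.norm(img[near[:, 0], near[:, 1]] - img[y, x],
                                   axis=1) <= color_t):
                out[y, x] = label
                break
    return out
```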
    Regular Paper
    On Participation Constrained Team Formation
    Yu Zhou, Jian-Bin Huang, Xiao-Lin Jia, He-Li Sun
    Journal of Data Acquisition and Processing, 2017, 32 (1): 139-154. 
    Task assignment on the Internet has been widely applied to many areas, e.g., online labor markets, online paper review and social activity organization. In this paper, we are concerned with the task assignment problem in the online labor market, termed ClusterHire. We improve the definition of the ClusterHire problem and propose an efficient and effective algorithm named Influence. In addition, we place a participation constraint on ClusterHire: it constrains the load of each expert in order to keep all members from overworking. For the participation-constrained ClusterHire problem, we devise two algorithms, named ProjectFirst and Era. The former generates a participation-constrained team by adding experts to an initial team, and the latter generates a participation-constrained team by removing the experts with the minimum influence from the universe of experts. The experimental evaluations indicate that 1) Influence performs better than the state-of-the-art algorithms in terms of both effectiveness and time efficiency; 2) ProjectFirst performs better than Era in terms of time efficiency, while Era performs better than ProjectFirst in terms of effectiveness.
    Reverse Furthest Neighbors Query in Road Networks
    Xiao-Jun Xu, Jin-Song Bao, Bin Yao, Jing-Yu Zhou, Fei-Long Tang, Min-Yi Guo, Jian-Qiu Xu
    Journal of Data Acquisition and Processing, 2017, 32 (1): 155-167. 
    Given a road network G = (V, E), where V (E) denotes the set of vertices (edges) in G, a set of points of interest P, and a query point q residing in G, the reverse furthest neighbors (RFN) query in road networks fetches the set of points p ∈ P that take q as their furthest neighbor compared with all points in P ∪ {q}. This is the monochromatic RFN (MRFN) query. Another interesting version is the bichromatic reverse furthest neighbor (BRFN) query. Given two sets of points P and Q, and a query point q ∈ Q, a BRFN query fetches the set of points p ∈ P that take q as their furthest neighbor compared with all points in Q. This paper presents efficient algorithms for both MRFN and BRFN queries, which utilize landmarks and partitioning-based techniques. Experiments on real datasets confirm the efficiency and scalability of the proposed algorithms.
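For intuition, a brute-force monochromatic RFN follows directly from the definition (quadratic time; the landmark- and partition-based algorithms in the paper exist precisely to avoid this cost):

```python
# Brute-force monochromatic reverse furthest neighbors, straight from the
# definition: p is an answer iff q is furthest from p among P ∪ {q}.
def mrfn(points, q, dist):
    return [p for p in points
            if all(dist(p, q) >= dist(p, o) for o in points + [q])]

# Example with Euclidean distance in the plane:
if __name__ == "__main__":
    from math import dist as euclid
    P = [(0, 0), (1, 0), (5, 5)]
    print(mrfn(P, (6, 6), euclid))   # -> [(0, 0), (1, 0)]
```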
    A Light-Weight Opportunistic Forwarding Protocol with Optimized Preamble Length for Low-Duty-Cycle Wireless Sensor Networks
    Hai-Ming Chen, Li Cui, Gang Zhou
    Journal of Data Acquisition and Processing, 2017, 32 (1): 168-180. 
    In wireless sensor networks, sensed information is expected to be delivered to a sink reliably and in a timely manner, in an ad-hoc way. However, achieving this goal is challenging because of the highly dynamic topology induced by asynchronous duty cycles and temporally and spatially varying link quality among nodes. Some opportunistic forwarding protocols have been proposed to address this challenge. However, they involve complicated mechanisms to determine the best forwarder at each hop, which incurs heavy overhead for resource-constrained nodes. In this paper, we propose a light-weight opportunistic forwarding (LWOF) scheme. Different from other recently proposed opportunistic forwarding schemes, LWOF employs neither historical network information nor a contention process to select a forwarder prior to data transmission. It confines forwarding candidates to an optimized area, and takes advantage of the preamble in low-power-listening (LPL) MAC protocols and dual-channel communication to forward a packet to a unique downstream node towards the sink with high probability, without making a forwarding decision before transmitting. Under LWOF, we optimize the LPL MAC protocol to use a shortened preamble (LWMAC), based on a theoretical analysis of the relationship among preamble length, per-hop delivery probability, node density, and sleep duration. Simulation results show that LWOF, together with LWMAC, achieves relatively good delivery reliability and latency for a receiver-based opportunistic forwarding protocol, while reducing energy consumption per packet by at least half.
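One common way to relate preamble length to delivery probability, shown here only as an assumed toy model (not the paper's actual analysis): if each receiver wakes once per sleep interval T at a uniformly random phase, a preamble of length L is heard with probability min(1, L/T), and more candidate forwarders compensate for a shorter preamble:

```python
# Toy LPL model: a receiver wakes once per sleep interval T at a uniform
# random phase, so a preamble of length L is detected w.p. min(1, L / T).
def hear_prob(L, T):
    return min(1.0, L / T)

def some_forwarder_hears(L, T, n):
    # probability that at least one of n independent candidates hears it
    return 1 - (1 - hear_prob(L, T)) ** n

print(some_forwarder_hears(L=0.25, T=1.0, n=8))   # ~0.90
```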
    High-Impact Bug Report Identification with Imbalanced Learning Strategies
    Xin-Li Yang, David Lo, Xin Xia, Qiao Huang, Jian-Ling Sun
    Journal of Data Acquisition and Processing, 2017, 32 (1): 181-198. 
    In practice, some bugs have more impact than others and thus deserve more immediate attention. Due to tight schedules and limited human resources, developers may not have enough time to inspect all bugs, so they often concentrate on those that are highly impactful. In the literature, high-impact bugs refer to bugs that appear at unexpected times or locations and bring more unexpected effects (i.e., surprise bugs), or that break pre-existing functionality and destroy the user experience (i.e., breakage bugs). Unfortunately, identifying high-impact bugs among thousands of bug reports in a bug tracking system is no easy feat. An automated technique that identifies high-impact bug reports can thus help developers become aware of them early, rectify them quickly, and minimize the damage they cause. Considering that only a small proportion of bugs are high-impact bugs, the identification of high-impact bug reports is a difficult task. In this paper, we propose an approach to identify high-impact bug reports by leveraging imbalanced learning strategies. We investigate the effectiveness of various variants, each of which combines one particular imbalanced learning strategy and one particular classification algorithm. In particular, we choose four widely used strategies for dealing with imbalanced data and four state-of-the-art text classification algorithms to conduct experiments on four datasets from four different open source projects. We mainly perform an analytical study on two types of high-impact bugs, i.e., surprise bugs and breakage bugs. The results show that different variants have different performances, and the best performing variants, SMOTE (synthetic minority over-sampling technique) + KNN (K-nearest neighbours) for surprise bug identification and RUS (random under-sampling) + NB (naive Bayes) for breakage bug identification, outperform in F1-score the two state-of-the-art approaches by Thung et al. and by Garcia and Shihab.
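Such a variant can be assembled from standard libraries. A hedged sketch of the SMOTE + KNN combination using scikit-learn and imbalanced-learn (illustrative features and defaults, not the paper's exact setup):

```python
# Illustrative SMOTE + KNN variant for imbalanced bug-report classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline    # applies resampling during fit only

surprise_clf = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=5000)),
    ("smote", SMOTE(random_state=42)),    # oversample rare high-impact reports
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
# surprise_clf.fit(report_texts, labels): note that SMOTE needs more minority
# examples than its k_neighbors setting in order to synthesize new ones.
```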
    Automated Testing of Web Applications Using Combinatorial Strategies
    Xiao-Fang Qi, Zi-Yuan Wang, Jun-Qiang Mao, Peng Wang
    Journal of Data Acquisition and Processing, 2017, 32 (1): 199-210. 
    Recently, testing techniques based on dynamic exploration, which try to automatically exercise every possible user interface element, have been extensively used to test web applications thoroughly. Most such testing tools, however, are not effective in reaching dynamic pages induced by form interactions, owing to their emphasis on handling client-side scripting. In this paper, we present a combinatorial strategy to achieve full form testing and build an automated test model. We propose an algorithm called pairwise testing with constraints (PTC) to implement the strategy. Our PTC algorithm uses pairwise coverage and handles semantic constraints and illegal values. We have implemented a prototype tool, ComjaxTest, and conducted an empirical study on five web applications. Experimental results indicate that our PTC algorithm generates fewer form test cases while achieving higher coverage of dynamic pages than the general pairwise testing algorithm. Additionally, ComjaxTest generates a relatively complete test model and thus detects more faults in a reasonable amount of time, as compared with other existing tools based on dynamic exploration.
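Greedy generation of pairwise form tests under constraints can be sketched generically (this is the textbook all-pairs idea with a legality filter, not the PTC algorithm itself; real tools avoid enumerating the full Cartesian product):

```python
# Generic pairwise (all-pairs) test generation with a constraint predicate.
from itertools import combinations, product

def pairwise_tests(fields, is_legal):
    """fields: {name: [values]}; is_legal: predicate over a full assignment."""
    names = sorted(fields)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in fields[a] for vb in fields[b]}
    tests = []
    for combo in product(*(fields[n] for n in names)):
        case = dict(zip(names, combo))
        if not is_legal(case):
            continue                      # skip semantically invalid forms
        pairs = {((a, case[a]), (b, case[b]))
                 for a, b in combinations(names, 2)}
        if pairs & uncovered:             # keep cases covering new pairs
            tests.append(case)
            uncovered -= pairs
    return tests
```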
Journal of Data Acquisition and Processing
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190, P.R. China

E-mail: info@sjcjycl.cn
 
  Copyright ©2015 JCST, All Rights Reserved