Bimonthly, Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
Table of Contents
05 March 2015, Volume 30 Issue 2
    Special Section on Applications and Industry
    Preface
    Guo-Jie Li
    Journal of Data Acquisition and Processing, 2015, 30 (2): 225-226. 
While JCST is an academic journal, it welcomes submissions that report research work and results aiming at direct impact on the computing industry and applications. These include innovative techniques leading to hardware products, new software designs benefiting companies or the open source community, and evaluation results offering new insights to the industry. We especially encourage papers that report joint work between industry and academia. ......
    Reevaluating Data Stall Time with the Consideration of Data Access Concurrency
    Yu-Hang Liu, Xian-He Sun
    Journal of Data Acquisition and Processing, 2015, 30 (2): 227-245. 
Data access delay has become the prominent performance bottleneck of high-end computing systems. The key to reducing data access delay in system design is to diminish data stall time. Memory locality and concurrency are the two essential factors influencing the performance of modern memory systems. However, existing studies on reducing data stall time rarely focus on utilizing data access concurrency, because the impact of memory concurrency on overall memory system performance is not well understood. In this study, a pair of novel data stall time models is presented: the L-C model for the combined effect of locality and concurrency, and the P-M model for the effect of pure misses on data stall time. The models provide a new understanding of data access delay and new directions for performance optimization. Based on these new models, a summary table of advanced cache optimizations is presented. It has 38 entries contributed by data concurrency but only 21 contributed by data locality, which shows the value of data concurrency. The L-C and P-M models and their associated results and opportunities introduced in this study are important and necessary for future data-centric architecture and algorithm design of modern computing systems.
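The abstract gives the models' intent but not their closed forms. The toy Python sketch below contrasts a locality-only stall-time estimate with a concurrency-aware one, under the simplifying assumptions that only "pure miss" cycles (cycles in which no hit activity hides the latency) stall the processor and that those cycles are shared evenly among overlapping misses; the function names and numbers are illustrative, not the paper's L-C/P-M formulas.

```python
# Toy contrast between a locality-only stall-time estimate and a
# concurrency-aware one. Not the paper's exact L-C / P-M models.

def stall_locality_only(miss_count, miss_penalty):
    # Classic AMAT-style view: every miss stalls the processor fully.
    return miss_count * miss_penalty

def stall_with_concurrency(pure_miss_cycles, avg_miss_concurrency):
    # Only pure-miss cycles stall, and they are amortized across
    # the misses that are outstanding at the same time.
    return pure_miss_cycles / avg_miss_concurrency

# Hypothetical numbers, for illustration only.
print(stall_locality_only(1000, 200))        # 200000 cycles
print(stall_with_concurrency(150000, 4.0))   # 37500.0 cycles
```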
    Tencent and Facebook Data Validate Metcalfe's Law
    Xing-Zhou Zhang, Jing-Jie Liu, Zhi-Wei Xu
    Journal of Data Acquisition and Processing, 2015, 30 (2): 246-251. 
In the 1980s, Robert Metcalfe, the inventor of Ethernet, proposed a formulation of network value in terms of the network size (the number of nodes of the network), which was later named Metcalfe's law. The law states that the value V of a network is proportional to the square of the size n of the network, i.e., V ∝ n^2. Metcalfe's law has been influential and an embodiment of the network effect concept. It has also generated many controversies; some scholars went so far as to state that "Metcalfe's law is wrong" and "dangerous". Other laws have been proposed, including Sarnoff's law (V ∝ n), Odlyzko's law (V ∝ n log n), and Reed's law (V ∝ 2^n). Despite these arguments, for 30 years no evidence based on real data was available for or against Metcalfe's law. The situation changed in late 2013, when Metcalfe himself used Facebook's data over the past 10 years to show a good fit for Metcalfe's law. In this paper, we expand Metcalfe's results by utilizing the actual data of Tencent (China's largest social network company) and Facebook (the world's largest social network company). Our results show that: 1) of the four laws of network effect, Metcalfe's law by far fits the actual data the best; 2) both Tencent and Facebook data fit Metcalfe's law quite well; 3) the costs of Tencent and Facebook are proportional to the squares of their network sizes, not linear; and 4) the growth trends of Tencent and Facebook monthly active users fit the netoid function well.
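The fitting exercise behind these results can be reproduced in miniature. The sketch below fits three of the candidate laws to synthetic data with scipy (the actual Tencent and Facebook figures are not reproduced here); Reed's V ∝ 2^n is omitted because it overflows at realistic network sizes.

```python
# Minimal law-fitting sketch on synthetic data. Assumed forms:
# Metcalfe V = a*n^2, Sarnoff V = a*n, Odlyzko V = a*n*log(n).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
n = np.linspace(1, 1000, 50)                  # users, in millions
v = 0.005 * n**2 * (1 + 0.05 * rng.standard_normal(n.size))  # synthetic "value"

laws = {
    "Metcalfe": lambda n, a: a * n**2,
    "Sarnoff":  lambda n, a: a * n,
    "Odlyzko":  lambda n, a: a * n * np.log(n),
}
for name, law in laws.items():
    (a,), _ = curve_fit(law, n, v)
    mse = np.mean((law(n, a) - v) ** 2)
    print(f"{name:8s} a={a:.4g}  MSE={mse:.4g}")  # Metcalfe should fit best
```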
    Software-Defined Cluster
    Hua Nie, Xiao-Jun Yang, Tao-Ying Liu
    Journal of Data Acquisition and Processing, 2015, 30 (2): 252-258. 
The cluster architecture has played an important role in high-end computing for the past 20 years. With the advent of Internet services, big data, and cloud computing, traditional clusters face three challenges: 1) providing flexible system balance among computing, memory, and I/O capabilities; 2) reducing resource pooling overheads; and 3) addressing low performance-power efficiency. This position paper proposes a software-defined cluster (SDC) architecture to deal with these challenges. The SDC architecture inherits two features of traditional clusters: a multicomputer architecture and loosely coupled interconnect. SDC provides two new mechanisms: global I/O space (GIO) and hardware-supported native access (HNA) to remote devices. Application software can define a virtual cluster best suited to its needs from resource pools provided by a physical cluster, and traditional cluster ecosystems need no modification. We also discuss a prototype design and implementation of a 32-processor cloud server utilizing the SDC architecture.
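As a loose illustration of the "software-defined" idea, the toy sketch below carves a virtual cluster out of a physical resource pool; the types and function names are invented for the example and are not the SDC interfaces.

```python
# Toy "software-defined" allocation: an application defines a virtual
# cluster and the physical pool shrinks accordingly.
from dataclasses import dataclass

@dataclass
class Resources:
    cpus: int
    memory_gb: int
    io_devices: int      # devices reachable through a global I/O space

def define_virtual_cluster(pool: Resources, need: Resources) -> Resources:
    if (need.cpus > pool.cpus or need.memory_gb > pool.memory_gb
            or need.io_devices > pool.io_devices):
        raise ValueError("insufficient physical resources")
    pool.cpus -= need.cpus
    pool.memory_gb -= need.memory_gb
    pool.io_devices -= need.io_devices
    return Resources(need.cpus, need.memory_gb, need.io_devices)

physical = Resources(cpus=32, memory_gb=1024, io_devices=64)
vc = define_virtual_cluster(physical, Resources(8, 256, 4))
print(vc, physical, sep="\n")
```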
    High Performance Interconnect Network for Tianhe System
    Xiang-Ke Liao, Zheng-Bin Pang, Ke-Fei Wang, Yu-Tong Lu, Min Xie, Jun Xia, De-Zun Dong, Guang Suo
    Journal of Data Acquisition and Processing, 2015, 30 (2): 259-272. 
In this paper, we present the Tianhe-2 interconnect network and its message passing services. We describe the architecture of the router and network interface chips, and highlight a set of hardware and software features that effectively support high performance communications, ranging from remote direct memory access and collective optimization to hardware-enabled reliable end-to-end communication and user-level message passing services. Measured hardware performance results are also presented.
    Fatman: Building Reliable Archival Storage Based on Low-Cost Volunteer Resources
    An Qin, Dian-Ming Hu, Jun Liu, Wen-Jun Yang, Dai Tan
    Journal of Data Acquisition and Processing, 2015, 30 (2): 273-282. 
We present Fatman, an enterprise-scale archival storage system built on volunteered resources from underutilized web servers, usually deployed on thousands of nodes with spare storage capacity. Fatman is specifically designed to enhance the utilization of existing storage resources and cut down hardware purchase cost. Two major concerns in the system design are maximizing the resource utilization of volunteer nodes without violating service level objectives (SLOs) and minimizing cost without reducing the availability of the archival system. Fatman has been widely deployed on tens of thousands of server nodes across several datacenters, providing more than 100 PB of storage capacity and serving dozens of internal mass-data applications. The system realizes efficient storage quota consolidation through strong isolation and budget limitation, to maximally support resource contribution without any degradation of host-level SLOs. It improves data reliability through a novel fault-aware data management scheme that applies disk failure prediction to minimize failure recovery cost, dramatically reducing the mean time to repair (MTTR) by 76.3% and decreasing the file crash ratio by 35% on real-life production workloads.
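The fault-aware data management can be illustrated with a toy placement policy: prefer the disks whose predicted failure probability is lowest. The predictor below is a stub; Fatman's actual prediction model is not described in the abstract.

```python
# Toy fault-aware placement: put the k replicas of a file on the disks
# with the lowest predicted failure probability.

def place_replicas(disks, failure_prob, k=3):
    """disks: iterable of disk ids; failure_prob: dict id -> [0, 1]."""
    return sorted(disks, key=lambda d: failure_prob[d])[:k]

failure_prob = {"d1": 0.30, "d2": 0.02, "d3": 0.10, "d4": 0.01, "d5": 0.50}
print(place_replicas(failure_prob.keys(), failure_prob))  # ['d4', 'd2', 'd3']
```

Migrating replicas off disks with rising predicted failure probability, before they actually fail, is what shrinks recovery cost and MTTR.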
    Accelerating Iterative Big Data Computing Through MPI
    Fan Liang, Xiaoyi Lu
    Journal of Data Acquisition and Processing, 2015, 30 (2): 283-294. 
Current popular systems, Hadoop and Spark, cannot achieve satisfactory performance on iterative big data applications because of inefficient overlapping of computation and communication. The pipeline of computing, data movement, and data management plays a key role in current distributed data computing systems. In this paper, we first analyze the overhead of the shuffle operation in Hadoop and Spark when running the PageRank workload, and then propose DataMPI-Iteration, an MPI-based library for iterative big data computing, featuring an event-driven pipeline and an in-memory shuffle design with better overlapping of computation and communication. Our performance evaluation shows DataMPI-Iteration can achieve 9X~21X speedup over Apache Hadoop, and 2X~3X speedup over Apache Spark for PageRank and K-means.
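The overlap the paper targets can be sketched with nonblocking MPI operations. The example below (using mpi4py, not the DataMPI-Iteration API) posts a shuffle exchange, computes while it is in flight, and only then waits.

```python
# Minimal overlap sketch with mpi4py: start the per-iteration exchange,
# do local work, then complete the exchange.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
right, left = (rank + 1) % size, (rank - 1) % size

buf_out = np.zeros(1 << 20)
buf_in = np.empty_like(buf_out)
work = np.ones(1 << 20)

for it in range(10):
    # Start the "shuffle" for this iteration without blocking...
    reqs = [comm.Isend(buf_out, dest=right, tag=it),
            comm.Irecv(buf_in, source=left, tag=it)]
    # ...and overlap it with this iteration's local computation.
    work = np.sqrt(work + it)
    MPI.Request.Waitall(reqs)
    buf_out = work + buf_in   # fold received data into the next round
```

Run under an MPI launcher, e.g. mpirun -np 4 python overlap.py.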
    Global Optimization for Advertisement Selection in Sponsored Search
    Qing Cui, Feng-Shan Bai, Bin Gao, Tie-Yan Liu
    Journal of Data Acquisition and Processing, 2015, 30 (2): 295-310. 
Advertisement (ad) selection plays an important role in sponsored search, since it is an upstream component and heavily influences the effectiveness of the subsequent auction mechanism. However, most existing ad selection methods regard ad selection as a relatively independent module, and only consider the literal or semantic matching between queries and keywords during the selection process. In this paper, we argue that this approach is not globally optimal. Our proposal is to formulate ad selection as an optimization problem in which the selected ads work together with downstream components (e.g., the auction mechanism) to maximize user clicks, advertiser social welfare, and search engine revenue (for ease of reference, we call the combination of these objectives the marketplace objective). To this end, we 1) extract a set of features to represent each pair of query and keyword, and 2) train a machine learning model that maps the features to a binary variable indicating whether the keyword is selected, by maximizing the aforementioned marketplace objective. This formalization seems quite natural; however, it is technically difficult because the marketplace objective is non-convex, discontinuous, and non-differentiable with respect to the model parameters, due to the ranking and second-price rules in the auction mechanism. To tackle the challenge, we propose a probabilistic approximation of the marketplace objective, which is smooth and can be effectively optimized by conventional optimization techniques. We test the ad selection model learned with our proposed method using the sponsored search log from a commercial search engine. The experimental results show that our method can significantly outperform several ad selection algorithms on all the metrics under investigation.
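The smoothing idea can be shown in miniature: replace the hard selection indicator with a sigmoid so that an otherwise discontinuous objective becomes differentiable and amenable to gradient ascent. This is a generic illustration of the technique, not the paper's exact probabilistic approximation of the ranking and second-price rules.

```python
# Generic smoothing of a discontinuous selection objective: the hard
# indicator 1[w.x > 0] is replaced by sigmoid(w.x).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smooth_objective(w, X, value):
    # value: marketplace value of selecting each (query, keyword) pair;
    # may be negative (e.g., revenue minus a quality penalty).
    return np.sum(sigmoid(X @ w) * value)

def gradient(w, X, value):
    p = sigmoid(X @ w)
    return X.T @ (p * (1 - p) * value)   # d/dw of the smooth objective

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
value = rng.uniform(-1, 1, size=1000)

w = np.zeros(8)
for _ in range(200):                     # plain gradient ascent
    w += 0.01 * gradient(w, X, value)
print(smooth_objective(w, X, value))
```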
    A New ETL Approach Based on Data Virtualization
    Shu-Sheng Guo, Zi-Mu Yuan, Ao-Bing Sun, Qiang Yue
    Journal of Data Acquisition and Processing, 2015, 30 (2): 311-323. 
ETL (Extract-Transform-Load) usually includes three phases: extraction, transformation, and loading. In building a data warehouse, it plays the role of data injection and is the most time-consuming activity; improving ETL performance is thus essential. In this paper, a new ETL approach, TEL (Transform-Extract-Load), is proposed. The TEL approach applies virtual tables to realize the transformation stage before the extraction and loading stages, without a data staging area or staging database to store raw data extracted from each of the disparate source data systems. The TEL approach reduces the data transmission load and improves the performance of queries from access layers. Experimental results based on our proposed benchmarks show that the TEL approach is feasible and practical.
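The transform-before-extract order is easy to demonstrate with database views playing the role of the virtual tables. A minimal SQLite sketch, with made-up table and column names:

```python
# Toy TEL order: the transformation is declared first as a view, so
# queries extract already-transformed rows with no staging area.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE src_orders(id INTEGER, amount_cents INTEGER, currency TEXT);
    INSERT INTO src_orders VALUES (1, 1999, 'usd'), (2, 2500, 'eur');

    -- Transform step, defined up front as a virtual table (view):
    CREATE VIEW v_orders AS
        SELECT id, amount_cents / 100.0 AS amount, upper(currency) AS currency
        FROM src_orders;
""")
# Extract + load then run against the already-transformed view.
print(con.execute("SELECT * FROM v_orders").fetchall())
# [(1, 19.99, 'USD'), (2, 25.0, 'EUR')]
```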
    Special Section on Object Recognition
    Preface
    Xi-Lin Chen
    Journal of Data Acquisition and Processing, 2015, 30 (2): 324-324. 
Last year, the Editorial Board decided to make some changes to this journal. As a part of this change, six areas were selected as highlight topics, rotating through the issues of each year. We hope this change will help readers choose papers more easily. One of the six areas is "Artificial Intelligence and Pattern Recognition". In my personal view, this is still too broad to highlight, so as a trade-off we have to focus on a narrower area. We selected four papers on the topic of object recognition from 10 submissions. I do not think I can provide more information than the papers themselves, and I also do not believe the readers need a guide to read them. For these reasons, I only briefly summarize the papers below. ......
    VFM: Visual Feedback Model for Robust Object Recognition
    Chong Wang, Kai-Qi Huang
    Journal of Data Acquisition and Processing, 2015, 30 (2): 325-339. 
    Object recognition, which consists of classification and detection, has two important attributes for robustness: 1) closeness: detection windows should be as close to object locations as possible, and 2) adaptiveness: object matching should be adaptive to object variations within an object class. It is difficult to satisfy both attributes using traditional methods which consider classification and detection separately; thus recent studies propose to combine them based on confidence contextualization and foreground modeling. However, these combinations neglect feature saliency and object structure, and biological evidence suggests that the feature saliency and object structure can be important in guiding the recognition from low level to high level. In fact, object recognition originates in the mechanism of "what" and "where" pathways in human visual systems. More importantly, these pathways have feedback to each other and exchange useful information, which may improve closeness and adaptiveness. Inspired by the visual feedback, we propose a robust object recognition framework by designing a computational visual feedback model (VFM) between classification and detection. In the "what" feedback, the feature saliency from classification is exploited to rectify detection windows for better closeness; while in the "where" feedback, object parts from detection are used to match object structure for better adaptiveness. Experimental results show that the "what" and "where" feedback is effective to improve closeness and adaptiveness for object recognition, and encouraging improvements are obtained on the challenging PASCAL VOC 2007 dataset.
    RGB-D Hand-Held Object Recognition Based on Heterogeneous Feature Fusion
    Xiong Lv, Shu-Qiang Jiang, Luis Herranz, Shuang Wang
    Journal of Data Acquisition and Processing, 2015, 30 (2): 340-352. 
    Object recognition has many applications in human-machine interaction and multimedia retrieval. However, due to large intra-class variability and inter-class similarity, accurate recognition relying only on RGB data is still a big challenge. Recently, with the emergence of inexpensive RGB-D devices, this challenge can be better addressed by leveraging additional depth information. A very special yet important case of object recognition is hand-held object recognition, as manipulating objects with hands is common and intuitive in human-human and human-machine interactions. In this paper, we study this problem and introduce an effective framework to address it. This framework first detects and segments the hand-held object by exploiting skeleton information combined with depth information. In the object recognition stage, this work exploits heterogeneous features extracted from different modalities and fuses them to improve the recognition accuracy. In particular, we incorporate handcrafted and deep learned features and study several multi-step fusion variants. Experimental evaluations validate the effectiveness of the proposed method.
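A minimal late-fusion sketch in the spirit of this framework: concatenate a handcrafted descriptor with a deep feature vector and train a linear classifier on the fused representation. The extractors and data below are placeholders, not the paper's features or fusion variants.

```python
# Toy heterogeneous feature fusion: handcrafted + (stubbed) deep features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def handcrafted(rgb, depth):
    # Stand-in for, e.g., color and depth histograms.
    h1 = np.histogram(rgb, bins=32, range=(0, 2))[0]
    h2 = np.histogram(depth, bins=32, range=(0, 2))[0]
    return np.concatenate([h1, h2]).astype(float)

def deep(rgb):
    return rng.normal(size=256)      # stand-in for a CNN embedding

X, y = [], []
for label in (0, 1):
    for _ in range(50):
        rgb = rng.uniform(size=(64, 64, 3)) + label     # toy "images"
        depth = rng.uniform(size=(64, 64))
        X.append(np.concatenate([handcrafted(rgb, depth), deep(rgb)]))
        y.append(label)

X = StandardScaler().fit_transform(np.array(X))
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```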
    Robust Video Text Detection with Morphological Filtering Enhanced MSER
    Yun-Zhi Zhuge, Hu-Chuan Lu
    Journal of Data Acquisition and Processing, 2015, 30 (2): 353-363. 
Video text detection is a challenging problem, since video image backgrounds are generally complex and subtitles often suffer from color bleeding, fuzzy boundaries, and low contrast due to lossy video compression and low resolution. In this paper, we propose a robust framework to solve these problems. Firstly, we exploit a gradient amplitude map (GAM) to enhance the edges of an input image, which overcomes the problems of color bleeding and fuzzy boundaries. Secondly, a two-direction morphological filtering is developed to filter background noise and enhance the contrast between background and text. Thirdly, maximally stable extremal region (MSER) detection is applied to find text regions of two extreme colors, and we use the mean intensity of the regions as the graph-cut label set, and the Euclidean distance of the three channels in HSI color space as the graph-cut smoothness term, to get optimal segmentations. Finally, we group the regions into text lines using the geometric characteristics of the text, and then corner detection, multi-frame verification, and some heuristic rules are used to eliminate non-text regions. We test our scheme on some challenging videos, and the results show that our text detection framework is more robust than previous methods.
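The detection core of this pipeline maps naturally onto OpenCV primitives. A minimal sketch of the first three stages (gradient amplitude map, two-direction morphological filtering, MSER) follows; the graph-cut segmentation and multi-frame verification stages are omitted, and the input filename is hypothetical.

```python
# Sketch of the GAM + morphology + MSER front end of a text detector.
import cv2
import numpy as np

frame = cv2.imread("frame.png")                     # hypothetical input
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Gradient amplitude map to sharpen fuzzy character edges.
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
gam = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Two-direction (horizontal + vertical) morphological filtering.
kh = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1))
kv = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 9))
filtered = cv2.morphologyEx(cv2.morphologyEx(gam, cv2.MORPH_CLOSE, kh),
                            cv2.MORPH_CLOSE, kv)

# MSER proposes stable candidate text regions.
mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(filtered)
for x, y, w, h in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("candidates.png", frame)
```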
    Raw Trajectory Rectification via Scene-Free Splitting and Stitching
    Chun-Chao Guo, Xiao-Jun Hu, Jian-Huang Lai, Shi-Chang Shi, Shi-Zhe Chen
    Journal of Data Acquisition and Processing, 2015, 30 (2): 364-372. 
Trajectories carry rich motion cues and thus have been leveraged in many high-level computer vision tasks. Due to the easy implementation of simple trackers, most previous work on trajectory-based applications utilizes raw tracking outputs without explicitly considering tracking errors. Reliable trajectories, however, are a prerequisite for modeling and recognizing high-level behaviors. This paper therefore tackles such problems by rectifying raw trajectories, aiming to post-process existing trajectories. Our approach first splits them into short tracks and then infers identity ambiguity to remove unqualified detection responses. At last, short tracks are stitched via maximum bipartite graph matching. This post-processing is completely scene-free. Results of trajectory rectification and their benefits are evaluated on two challenging datasets. The results demonstrate that rectified trajectories are conducive to high-level tasks and that the proposed approach is competitive with state-of-the-art multi-target tracking methods.
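The stitching step can be sketched as an assignment problem: score each (track ending, track beginning) pair with a spatio-temporal affinity and solve a maximum bipartite matching, here via the Hungarian method on negated scores. The affinity below is deliberately simple; the paper's model is richer.

```python
# Toy track stitching via maximum bipartite matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

def affinity(end, start):
    # end/start: (frame, x, y). Reward small gaps in time and space.
    dt = start[0] - end[0]
    if dt <= 0 or dt > 30:
        return -np.inf                       # cannot stitch backwards/far
    dist = np.hypot(start[1] - end[1], start[2] - end[2])
    return -dist - 2.0 * dt

ends   = [(10, 5.0, 5.0), (12, 40.0, 7.0)]   # toy short tracks
starts = [(13, 41.0, 8.0), (11, 6.0, 5.5)]

score = np.array([[affinity(e, s) for s in starts] for e in ends])
score[~np.isfinite(score)] = -1e9            # big penalty for forbidden pairs
rows, cols = linear_sum_assignment(-score)   # maximize total affinity
for r, c in zip(rows, cols):
    if score[r, c] > -1e8:
        print(f"stitch ending {r} -> beginning {c}")
```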
    Computer Architecture and Systems
    SRAM-Based FPGA Systems for Safety-Critical Applications: A Survey on Design Standards and Proposed Methodologies
    Cinzia Bernardeschi, Luca Cassano, Andrea Domenici
    Journal of Data Acquisition and Processing, 2015, 30 (2): 373-390. 
As ASIC design costs become affordable only for very large-scale production, FPGA technology is becoming the leading technology for applications that require small-scale production. FPGAs can be considered a technology crossing between hardware and software. Only a small number of standards for the design of safety-critical systems give guidelines and recommendations that take the peculiarities of FPGA technology into consideration. The main contribution of this paper is an overview of the existing design standards that regulate the design and verification of FPGA-based systems in safety-critical application fields. Moreover, the paper surveys significant published research proposals and existing industrial guidelines on the topic, and reports some lessons learned from industrial and research projects involving the use of FPGA devices.
    Register Clustering Methodology for Low Power Clock Tree Synthesis
    Chao Deng, Yi-Ci Cai, Qiang Zhou
    Journal of Data Acquisition and Processing, 2015, 30 (2): 391-403. 
Clock networks dissipate a significant fraction of the entire chip power budget. Therefore, optimizing the power consumption of clock networks has become one of the most important objectives in high performance IC design. In contrast to most traditional studies, which handle this problem with clock routing or buffer insertion strategies, this paper proposes a novel register clustering methodology for generating the leaf-level topology of the clock tree to reduce power consumption. Three register clustering algorithms, called KMR, KSR, and GSR, are developed, and a comprehensive study of them is presented in this paper. Meanwhile, a buffer allocation algorithm is proposed to satisfy the slew constraint within the clusters at a minimum cost in power. We integrate our algorithms into a classical clock tree synthesis (CTS) flow to test the register clustering methodology on the ISPD 2010 benchmark circuits. Experimental results show that all three register clustering algorithms achieve more than 20% reduction in power consumption without affecting the skew or the maximum latency of the clock tree. As the most effective of the three, the GSR algorithm achieves a 31% reduction in power consumption as well as a 4% reduction in skew and a 5% reduction in maximum latency. Moreover, the total runtime of the CTS flow with our register clustering algorithms is reduced by almost an order of magnitude.
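As a rough illustration of leaf-level register clustering, in the spirit of a k-means-based variant such as KMR (the KSR/GSR variants and the slew-driven buffer allocation are omitted), one can group registers by placement coordinates so each cluster shares a leaf buffer:

```python
# Toy leaf-level register clustering by (x, y) placement coordinates.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
registers = rng.uniform(0, 1000, size=(500, 2))   # (x, y) placements, um

k = 25                                            # target leaf clusters
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(registers)

for c in range(3):                                # inspect a few clusters
    members = registers[labels == c]
    print(f"cluster {c}: {len(members)} registers, "
          f"centroid {members.mean(axis=0).round(1)}")
```

Keeping each cluster spatially tight is what shortens leaf-level wiring and lets one buffer drive the whole cluster within the slew budget.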
    Computer Networks and Distributed Computing
    Provisioning of Inter-Domain QoS-Aware Services
    Fernando Matos, Alexandre Matos, Paulo Si
    Journal of Data Acquisition and Processing, 2015, 30 (2): 404-420. 
Cooperation among service providers, network providers, and access providers in the Internet allows the creation of new services offered to customers in other domains, thus increasing revenue. However, the Internet's heterogeneous environment, where each provider has its own policies, infrastructure, and business goals, hinders the deployment of more advanced communication services. This paper presents a Quality of Service for Inter-Domain Services (QIDS) model that allows inter-domain QoS-aware services to be defined, configured, and adapted in a dynamic and on-demand fashion among service providers. This is accomplished by: 1) the use of a common communication channel (business layer) where service providers publish and search for services, and interact with each other to contract and manage these services; 2) templates that specify the business and technical characteristics of the services; 3) the automatic composition of services from service elements (smaller services) according to performance and service-specific QoS parameters; and 4) the creation and enforcement of configuration rules for the underlying infrastructure. A prototype was implemented to validate QIDS, and performance tests were conducted on an inter-domain Border Gateway Protocol (BGP)/Multiprotocol Label Switching (MPLS) Virtual Private Network (VPN) scenario.
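A hedged illustration of what such a service template might look like, as a single structure carrying both business terms and technical QoS parameters; all field names are invented for the example, not the paper's schema.

```python
# Invented QIDS-style service template plus a trivial matching check.
service_template = {
    "service": "inter-domain-vpn",
    "business": {
        "provider": "ISP-A",
        "price_per_month_eur": 120.0,
        "contract_term_months": 12,
    },
    "technical": {
        "qos": {"max_latency_ms": 40, "min_bandwidth_mbps": 100,
                "max_packet_loss": 0.001},
        "composition": ["access-link", "bgp-mpls-core", "access-link"],
    },
}

def satisfies(template, need):
    q = template["technical"]["qos"]
    return (q["max_latency_ms"] <= need["latency_ms"]
            and q["min_bandwidth_mbps"] >= need["bandwidth_mbps"])

print(satisfies(service_template, {"latency_ms": 50, "bandwidth_mbps": 80}))
```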
    Service-Oriented Resource Allocation in Clouds: Pursuing Flexibility and Efficiency
    Sheng Zhang, Zhu-Zhong Qian, Jie Wu, Sang-Lu Lu
    Journal of Data Acquisition and Processing, 2015, 30 (2): 421-436. 
The networking-oblivious resource reservation model in today's public clouds cannot guarantee the performance of tenants' applications. Virtual networks that capture both the computing and the networking resource requirements of tenants have been proposed as better interfaces between cloud providers and tenants. In this paper, we propose a novel virtual network model that can specify not only absolute and relative location requirements but also time-varying resource demands. Building on top of our model, we study how to efficiently and flexibly place multiple virtual networks in a cloud, and we propose two algorithms, MIPA and SAPA, which focus on optimizing resource utilization and providing flexible placement, respectively. The mixed integer programming-based MIPA transforms the placement problem into the multi-commodity flow problem by augmenting the physical network with shadow nodes and links. The simulated annealing-based SAPA achieves resource utilization efficiency by opportunistically sharing physical resources among multiple resource demands. Moreover, SAPA allows cloud providers to control the trade-offs between performance guarantee and resource utilization, and between allocation optimality and running time, and allows tenants to control the trade-off between application performance and placement cost. Extensive simulation results confirm the efficiency of MIPA in resource utilization and the flexibility of SAPA in controlling trade-offs.
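The simulated-annealing side can be sketched in a few lines: search over assignments of virtual nodes to physical hosts, accepting uphill moves with a temperature-controlled probability. The toy below minimizes only a load-balance cost and omits links, locations, and time-varying demands, which the real SAPA handles.

```python
# Toy simulated annealing for placing virtual nodes on physical hosts.
import math, random

random.seed(0)
V, H = 12, 4                                   # virtual nodes, hosts
demand = [random.randint(1, 8) for _ in range(V)]

def cost(assign):
    load = [0] * H
    for v, h in enumerate(assign):
        load[h] += demand[v]
    return max(load)                           # minimize the hottest host

cur = [random.randrange(H) for _ in range(V)]
cur_cost = cost(cur)
best, best_cost, T = list(cur), cur_cost, 10.0
for _ in range(5000):
    cand = list(cur)
    cand[random.randrange(V)] = random.randrange(H)   # random local move
    c = cost(cand)
    # Accept improvements always, regressions with prob. exp(-delta/T).
    if c <= cur_cost or random.random() < math.exp((cur_cost - c) / T):
        cur, cur_cost = cand, c
        if c < best_cost:
            best, best_cost = list(cand), c
    T *= 0.999                                        # cool down
print(best_cost, best)
```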