Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      05 March 2017, Volume 32 Issue 2   
    Special Section on MOST Cloud and Big Data
    Providing Virtual Cloud for Special Purposes on Demand in JointCloud Computing Environment
    Dong-Gang Cao, Bo An, Pei-Chang Shi, Huai-Min Wang
    Journal of Data Acquisition and Processing, 2017, 32 (2): 211-218. 
    Cloud computing has been widely adopted by enterprises because of its on-demand and elastic resource usage paradigm. Currently most cloud applications run on a single cloud. However, more and more applications need to run across several clouds to satisfy requirements like best cost efficiency, avoidance of vendor lock-in, and geolocation-sensitive service. JointCloud computing is a new research initiative launched by Chinese institutes to address the computing issues concerned with multiple clouds. In JointCloud, users' diverse and dynamic requirements on cloud resources are satisfied by providing users with virtual clouds (VCs) for special purposes. A virtual cloud for special purposes is in essence a user's specific cloud working environment having the customized software stacks, configurations and computing resources readily available. This paper first introduces what JointCloud computing is and then describes the design rationales, motivating examples, mechanisms and enabling technologies of VCs in JointCloud.
    Labeled von Neumann Architecture for Software-Defined Cloud
    Yun-Gang Bao, Sa Wang
    Journal of Data Acquisition and Processing, 2017, 32 (2): 219-223. 
    As cloud computing moves forward rapidly, cloud providers have been encountering great challenges: long tail latency, low utilization, and high interference. They intend to co-locate multiple workloads on a single server to improve resource utilization. But the co-located applications suffer from severe performance interference and long tail latency, which lead to unpredictable user experience. To meet these challenges, software-defined cloud has been proposed to facilitate tighter coordination among application, operating system and hardware. Users' quality of service (QoS) requirements could be propagated all the way down to the hardware with differential management mechanisms. However, there is little hardware support to maintain and guarantee users' QoS requirements. To this end, this paper proposes Labeled von Neumann Architecture (LvNA), which introduces a labeling mechanism to convey more of the software's semantic information, such as QoS and security, to the underlying hardware. LvNA is able to correlate labels with various entities, e.g., virtual machine, process and thread, propagate labels throughout the whole machine, and program differentiated services based on rules. We consider LvNA a fundamental hardware support for the software-defined cloud.
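The abstract describes LvNA only at the architectural level; as a rough software analogy of the labeling idea, the sketch below attaches a label to each entity and programs a per-label rule table. All class, method and rule names here are hypothetical illustrations, not LvNA's actual interface.

```python
class LabelTable:
    """Software-terms sketch of a labeling mechanism: tag entities
    (VMs, processes, threads) with labels, and let per-label rules
    program differentiated service for the requests they issue."""

    def __init__(self):
        self.labels = {}   # entity id -> label
        self.rules = {}    # label -> dict of QoS parameters

    def tag(self, entity, label):
        self.labels[entity] = label

    def set_rule(self, label, **qos):
        self.rules[label] = qos

    def service_for(self, entity):
        # A request inherits its issuer's label; the matching rule
        # decides how the request is served (empty dict = best effort).
        return self.rules.get(self.labels.get(entity), {})
```

In real LvNA the label travels with memory and I/O requests through the hardware; this table only illustrates the correlate-and-differentiate pattern.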
    Evolution of Cloud Operating System: From Technology to Ecosystem
    Zuo-Ning Chen, Kang Chen, Jin-Lei Jiang, Lu-Fei Zhang, Song Wu, Zheng-Wei Qi, Chun-Ming Hu, Yong-Wei Wu, Yu-Zhong Sun, Hong Tang, Ao-Bing Sun, Zi-Lu Kang
    Journal of Data Acquisition and Processing, 2017, 32 (2): 224-241. 
    The cloud operating system (cloud OS) is used for managing cloud resources such that they can be used effectively and efficiently. It is also the duty of the cloud OS to provide convenient interfaces for users and applications. However, these two goals often conflict because convenient abstraction usually needs more computing resources. Thus, the cloud OS has its own characteristics of resource management and task scheduling for supporting various kinds of cloud applications. The evolution of the cloud OS is in fact driven by these two often conflicting goals, and finding the right tradeoff between them makes each phase of the evolution happen. In this paper, we investigate the ways of cloud OS evolution from three different aspects: enabling technology evolution, OS architecture evolution and cloud ecosystem evolution. We show that finding the appropriate APIs (application programming interfaces) is critical for the next phase of cloud OS evolution. Convenient interfaces need to be provided without sacrificing efficiency when APIs are chosen. We present an API-driven cloud OS practice, showing the great capability of APIs for developing a better cloud OS and helping build and run a healthy cloud ecosystem.
    Intelligent Development Environment and Software Knowledge Graph
    Ze-Qi Lin, Bing Xie, Yan-Zhen Zou, Jun-Feng Zhao, Xuan-Dong Li, Jun Wei, Hai-Long Sun, Gang Yin
    Journal of Data Acquisition and Processing, 2017, 32 (2): 242-249. 
    Software intelligent development has become one of the most important research trends in software engineering. In this paper, we put forward two key concepts, intelligent development environment (IntelliDE) and software knowledge graph, for the first time. IntelliDE is an ecosystem in which software big data are aggregated, mined and analyzed to provide intelligent assistance throughout the life cycle of software development. We present its architecture and discuss its key research issues and challenges. The software knowledge graph is a software knowledge representation and management framework, which plays an important role in IntelliDE. We study its concept and introduce concrete details and examples to show how it can be constructed and leveraged.
    Experience Availability: Tail-Latency Oriented Availability in Software-Defined Cloud Computing
    Bin-Lei Cai, Rong-Qi Zhang, Xiao-Bo Zhou, Lai-Ping Zhao, Ke-Qiu Li
    Journal of Data Acquisition and Processing, 2017, 32 (2): 250-257. 
    Resource sharing, multi-tenant interference and bursty workloads in cloud computing lead to high tail latency that severely affects user quality of experience (QoE), where response latency is a critical factor. Many research efforts are dedicated to reducing high tail latency and improving user QoE, such as software-defined cloud computing (SDC). However, the traditional availability analysis of cloud computing captures the pure failure-repair behavior while ignoring user QoE. In this paper, we propose a conceptual framework, experience availability, to properly assess the effectiveness of SDC while taking both availability and response latency into account simultaneously. We review related work on availability models and methods for cloud systems, and discuss open problems in evaluating experience availability in SDC. We also show some of our preliminary results to demonstrate the feasibility of our ideas.
    Architectural Design of a Cloud Robotic System for Upper-Limb Rehabilitation with Multimodal Interaction
    Hui-Jun Li, Ai-Guo Song
    Journal of Data Acquisition and Processing, 2017, 32 (2): 258-268. 
    The rise in cases of motor-impairing illnesses demands research into improvements in rehabilitation therapy. Because the service of professional therapists cannot currently meet the needs of motor-impaired subjects, a cloud robotic system is proposed to provide an Internet-based process for upper-limb rehabilitation with multimodal interaction. In this system, therapists and subjects are connected through the Internet using a client/server architecture. At the client site, gradual virtual games are introduced so that the subjects can control and interact with virtual objects through interaction devices such as robot arms. Computer graphics show the geometric results, and haptic/force feedback is provided during exercising. Both video/audio information and kinematic/physiological data are transferred to the therapist for monitoring and analysis. In this way, patients can be diagnosed and directed, and therapists can manage therapy sessions remotely. The rehabilitation process can be monitored through the Internet. Expert libraries on the central server can serve as a supervisor and give advice based on the training data and the physiological data. The proposed solution is a convenient application with several features that take advantage of the extensive technological utilization in the areas of physical rehabilitation and multimodal interaction.
    Computer Architecture and Systems
    Parallel Turing Machine, a Proposal
    Peng Qu, Jin Yan, You-Hui Zhang, Guang R. Gao
    Journal of Data Acquisition and Processing, 2017, 32 (2): 269-285. 
    We have witnessed the tremendous momentum of the second spring of parallel computing in recent years. But we should remember the low points of the field more than 20 years ago, and review the lesson that led to the question, raised in an article entitled "The Death of Parallel Computing" by the late Ken Kennedy, a prominent leader of parallel computing, of whether "parallel computing will soon be relegated to the trash heap reserved for promising technologies that never quite make it". Facing the new era of parallel computing, we should learn from the robust history of sequential computation over the past 60 years. We should study the foundation established by the Turing machine model (1936) and its profound impact on this history. To this end, this paper examines the disappointing state of the work on parallel Turing machine models over the past 50 years of parallel computing research. The lack of a solid yet intuitive parallel Turing machine model will continue to be a serious challenge for future parallel computing. Our paper presents an attempt to address this challenge by proposing a parallel Turing machine model. We also discuss why we start our work from a parallel Turing machine model rather than other choices.
    DLPlib: A Library for Deep Learning Processor
    Hui-Ying Lan, Lin-Yang Wu, Xiao Zhang, Jin-Hua Tao, Xun-Yu Chen, Bing-Rui Wang, Yu-Qing Wang, Qi Guo, Yun-Ji Chen
    Journal of Data Acquisition and Processing, 2017, 32 (2): 286-296. 
    Recently, deep learning processors have become one of the most promising ways to accelerate deep learning algorithms. Currently, the only way to program deep learning processors is to write assembly instructions by hand, which requires substantial programming effort and yields very low productivity. One solution is to integrate the deep learning processors as a new back-end into a prevalent high-level deep learning framework (e.g., the TPU (tensor processing unit) is integrated into TensorFlow directly). However, this prevents other frameworks from profiting from the programming interface. The alternative approach is to design a framework-independent low-level library for deep learning processors (like cuDNN, the deep learning library for GPUs). In this fashion, the library can be conveniently invoked from high-level programming frameworks and provides more generality. To allow more deep learning frameworks to gain benefits from this environment, we envision it as a low-level library that can be easily embedded into current high-level frameworks and provide high performance. Three major issues in designing such a library are discussed. The first is the design of data structures: data structures should be as few as possible while supporting all possible operations, which allows us to optimize them more easily without compromising generality. The second is the selection of operations, which should cover a rather wide range of operations to support various types of networks with high efficiency. The third is the design of the API, which should provide a flexible and user-friendly programming model and should be easy to embed into existing deep learning frameworks. Considering all the above issues, we propose DLPlib, a tensor-filter-based library designed specifically for deep learning processors. It contains two major data structures, tensor and filter, and a set of operators including basic neural network primitives and matrix/vector operations. It provides a descriptor-based API exposed as a C++ interface. The library achieves a speedup of 0.79x compared with the performance of hand-written assembly instructions.
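The abstract names tensor and filter descriptors but does not spell out the API. As an illustration of how descriptor-based libraries (cuDNN-style) validate shapes before launching a kernel, here is a minimal Python sketch; the descriptor fields and function names are hypothetical, not DLPlib's actual C++ interface.

```python
from dataclasses import dataclass

@dataclass
class TensorDescriptor:
    # Shape in NCHW order; a real library would also carry dtype and layout.
    n: int; c: int; h: int; w: int

@dataclass
class FilterDescriptor:
    # out_channels (k), in_channels (c), kernel height (r), kernel width (s).
    k: int; c: int; r: int; s: int

def conv_output_desc(x: TensorDescriptor, f: FilterDescriptor,
                     pad: int = 0, stride: int = 1) -> TensorDescriptor:
    """Derive the output tensor descriptor of a convolution from the
    input and filter descriptors, checking channel compatibility first."""
    assert x.c == f.c, "input channels must match filter channels"
    oh = (x.h + 2 * pad - f.r) // stride + 1
    ow = (x.w + 2 * pad - f.s) // stride + 1
    return TensorDescriptor(x.n, f.k, oh, ow)
```

A 3x3 convolution with padding 1 over a 1x3x32x32 input and 16 output channels yields a 1x16x32x32 descriptor, which is exactly the shape bookkeeping a descriptor-based API performs up front.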
    A Power and Area Optimization Approach of Mixed Polarity Reed-Muller Expression for Incompletely Specified Boolean Functions
    Zhen-Xue He, Li-Min Xiao, Li Ruan, Fei Gu, Zhi-Sheng Huo, Guang-Jun Qin, Ming-Fa Zhu, Long-Bing Zhang, Rui Liu, Xiang Wang
    Journal of Data Acquisition and Processing, 2017, 32 (2): 297-311. 
    The power and area optimization of Reed-Muller (RM) circuits has received wide attention. However, almost none of the existing power and area optimization approaches can obtain all the Pareto-optimal solutions of the original problem, and few are efficient enough. Moreover, they do not consider don't-care terms, which prevents the circuit performance from being further optimized. In this paper, we propose a power and area optimization approach for the mixed-polarity RM expression (MPRM) of incompletely specified Boolean functions based on the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). First, the incompletely specified Boolean function is transformed into a zero-polarity incompletely specified MPRM (ISMPRM) using a novel ISMPRM acquisition algorithm. Second, the polarity and the allocation of don't-care terms of the ISMPRM are encoded as a chromosome. Last, the Pareto-optimal solutions are obtained using NSGA-II, in which the MPRM corresponding to a given chromosome is obtained by a chromosome conversion algorithm. Results on incompletely specified Boolean functions and MCNC benchmark circuits show a significant power and area improvement compared with existing power and area optimization approaches for RM circuits.
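The approach returns all Pareto-optimal (power, area) trade-offs rather than a single solution. The core of NSGA-II's non-dominated sorting, extracting the first front from a set of candidate solutions, can be sketched as follows (a generic illustration of the concept, not the paper's implementation):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both power and area are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of (power, area) pairs: the
    first front that NSGA-II's non-dominated sorting produces."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```

For example, among the candidates (3, 5), (4, 4), (5, 3), (4, 5) and (6, 6), the first three form the Pareto front: (4, 5) is dominated by (3, 5) and (6, 6) by (4, 4).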
    A Hint Frequency Based Approach to Enhancing the I/O Performance of Multilevel Cache Storage Systems
    Xiao-Dong Meng, Chen-Tao Wu, Min-Yi Guo, Jie Li, Xiao-Yao Liang, Bin Yao, Long Zheng
    Journal of Data Acquisition and Processing, 2017, 32 (2): 312-328. 
    With enormous and increasing user demand, I/O performance is one of the primary considerations in building a data center. Several new technologies in data centers, such as tiered storage, prompt the widespread usage of multilevel cache techniques. In these storage systems, the upper-level storage typically serves as a cache for the lower level, which forms a distributed multilevel cache system. However, although many excellent multilevel cache algorithms have been proposed to improve I/O performance, they can still be enhanced by investigating the history information of hints. To address this challenge, we propose a novel hint frequency based approach (HFA) to improve the overall multilevel cache performance of storage systems. The main idea of HFA is to use hint frequencies (the total number of demotions/promotions performed via demote/promote hints) to efficiently explore the valuable history information of data blocks across multiple levels. HFA can be applied with several popular multilevel cache algorithms, such as Demote, Promote and Hint-K. Simulation results show that, compared with the original multilevel cache algorithms such as Demote, Promote and Hint-K, HFA can improve I/O performance by up to 20% under different I/O workloads.
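The hint frequency that HFA builds on is simply a per-block count of demote/promote events. A minimal bookkeeping sketch (class and method names are illustrative; the paper's integration with Demote/Promote/Hint-K is more involved):

```python
from collections import Counter

class HintFrequencyTracker:
    """Track how often each data block is demoted or promoted between
    cache levels; the combined count is its hint frequency, which HFA
    uses as history information when deciding block placement."""

    def __init__(self):
        self.demotes = Counter()
        self.promotes = Counter()

    def on_demote(self, block):
        self.demotes[block] += 1

    def on_promote(self, block):
        self.promotes[block] += 1

    def hint_frequency(self, block):
        # Total number of demotions/promotions observed for this block.
        return self.demotes[block] + self.promotes[block]
```

Blocks that accumulate high hint frequencies are the ones repeatedly shuttled between levels, so keeping them at a favorable level avoids redundant movement.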
    Enhancing Security of FPGA-Based Embedded Systems with Combinational Logic Binding
    Ji-Liang Zhang, Wei-Zheng Wang, Xing-Wei Wang, Zhi-Hua Xia
    Journal of Data Acquisition and Processing, 2017, 32 (2): 329-339. 
    With the increasing use of field-programmable gate arrays (FPGAs) in embedded systems and many embedded applications, the failure to protect FPGA-based embedded systems from cloning attacks has brought serious losses to system developers. This paper proposes a novel combinational logic binding technique to specially protect FPGA-based embedded systems from cloning attacks and provides a pay-per-device licensing model for the FPGA market. Security analysis shows that the proposed binding scheme is robust against various types of malicious attacks. Experimental evaluations demonstrate the low overhead of the proposed technique.
    Artificial Intelligence and Pattern Recognition
    A Novel Hardware/Software Partitioning Method Based on Position Disturbed Particle Swarm Optimization with Invasive Weed Optimization
    Xiao-Hu Yan, Fa-Zhi He, Yi-Lin Chen
    Journal of Data Acquisition and Processing, 2017, 32 (2): 340-355. 
    With the growing design complexity of embedded systems, hardware/software (HW/SW) partitioning has become a challenging optimization problem in HW/SW co-design. This paper presents a novel HW/SW partitioning method based on position-disturbed particle swarm optimization with invasive weed optimization (PDPSO-IWO). Biologists have found that ground squirrels produce alarm calls that warn their peers to move away when there is a potential predatory threat. We present the PDPSO algorithm, in each iteration of which the squirrel behavior of escaping from the global worst particle is simulated to increase population diversity and avoid local optima. We also present new initialization and reproduction strategies that improve the IWO algorithm's search for a better position, with which the global best position can be updated. The search accuracy and solution quality can thus be enhanced. PDPSO and the improved IWO are synthesized into a single PDPSO-IWO algorithm, which maintains both search diversification and search intensification. Furthermore, a hybrid NodeRank (HNodeRank) algorithm is proposed to initialize the population of PDPSO-IWO, further enhancing solution quality. Since computing the HW/SW communication cost is the most time-consuming part of a HW/SW partitioning algorithm, we adopt GPU parallelization to accelerate it, which efficiently reduces the runtime of PDPSO-IWO on large-scale HW/SW partitioning problems. Finally, multiple experiments on benchmarks from state-of-the-art publications and on large-scale HW/SW partitioning demonstrate that the proposed algorithm achieves higher performance than other algorithms.
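The position-disturbance idea, particles being repelled from the global worst position the way squirrels flee an alarm call, can be sketched as one extra term in the standard PSO velocity update. The coefficients and the exact form of the disturbance term below are illustrative assumptions, not the paper's tuned formulation:

```python
import random

def pdpso_velocity(v, x, pbest, gbest, gworst,
                   w=0.7, c1=1.5, c2=1.5, c3=0.5):
    """One velocity update per dimension: inertia, attraction to the
    personal best and global best (standard PSO), plus a repulsion
    term that pushes the particle away from the global worst position
    to increase population diversity."""
    r1, r2, r3 = (random.random() for _ in range(3))
    return [w * vi
            + c1 * r1 * (pb - xi)
            + c2 * r2 * (gb - xi)
            - c3 * r3 * (gw - xi)   # repulsion from the global worst
            for vi, xi, pb, gb, gw in zip(v, x, pbest, gbest, gworst)]
```

The sign flip on the third term is the disturbance: instead of being drawn toward a reference position, the particle is driven away from the worst one found so far.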
    Automatic Fall Detection Using Membership Based Histogram Descriptors
    Mohamed Maher Ben Ismail, Ouiem Bchir
    Journal of Data Acquisition and Processing, 2017, 32 (2): 356-367. 
    We propose a framework for automatic fall detection based on visual feature extraction from video. The proposed approach relies on a membership histogram descriptor that encodes the visual properties of the video frames. This descriptor is obtained by mapping the original low-level visual features to a more discriminative descriptor using possibilistic memberships. This mapping can be summarized in two main phases. The first consists in categorizing the low-level visual features of the video frames and generating homogeneous clusters in an unsupervised way. The second uses the membership degrees generated by the clustering process to compute the membership based histogram descriptor (MHD). In the testing stage, the proposed fall detection approach categorizes unlabeled videos as "Fall" or "Non-Fall" scenes using a possibilistic K-nearest neighbors classifier. The proposed approach is assessed on a standard video dataset that simulates patient falls. We also compare its performance with that of state-of-the-art fall detection techniques.
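The second phase, accumulating soft membership degrees into one histogram bin per cluster, can be sketched as below. For simplicity this sketch uses inverse-distance memberships; the paper uses possibilistic memberships, so treat the weighting here as a stand-in assumption.

```python
def membership_histogram(features, centroids):
    """Build a membership-based histogram: instead of hard-assigning
    each low-level feature to its nearest cluster, spread each feature's
    unit mass across all clusters according to soft membership degrees,
    then accumulate per-cluster bins."""
    hist = [0.0] * len(centroids)
    for f in features:
        dists = [sum((a - b) ** 2 for a, b in zip(f, c)) ** 0.5
                 for c in centroids]
        inv = [1.0 / (d + 1e-9) for d in dists]   # closer cluster -> larger membership
        total = sum(inv)
        for k, u in enumerate(inv):
            hist[k] += u / total                   # memberships sum to 1 per feature
    return hist
```

Because each feature contributes exactly one unit of mass, the histogram entries sum to the number of features, while the soft weighting keeps information about features that lie between clusters, which a hard-assignment histogram would discard.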
    Regular Paper
    Parallel Incremental Frequent Itemset Mining for Large Data
    Yu-Geng Song, Hui-Min Cui, Xiao-Bing Feng
    Journal of Data Acquisition and Processing, 2017, 32 (2): 368-385. 
    Frequent itemset mining (FIM) is a popular data mining task adopted in many fields, such as commodity recommendation in the retail industry, log analysis in web search, and query recommendation (or related search). A large number of FIM algorithms have been proposed to obtain better performance, including parallelized algorithms for processing large data volumes. Incremental FIM algorithms have also been proposed to deal with incremental database updates. However, most of these incremental algorithms have low parallelism, causing low efficiency on huge databases. This paper presents two parallel incremental FIM algorithms, IncMiningPFP and IncBuildingPFP, implemented on the MapReduce framework. IncMiningPFP preserves the FP-tree mining results of the original pass and utilizes them for incremental calculations. In particular, we propose a method to generate a partial FP-tree in the incremental pass, in order to avoid unnecessary mining work. Further, some of the incremental parallel tasks can be omitted when the inserted transactions include fewer items. IncBuildingPFP preserves the CanTrees built in the original pass, and then adds new transactions to them during the incremental passes. Our experimental results show that IncMiningPFP achieves significant speedup over PFP (Parallel FP-Growth) and a sequential incremental algorithm (CanTree) for most incremental input databases, and in the remaining cases IncBuildingPFP does.
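The core idea both algorithms share, preserving results from the original pass and scanning only the inserted transactions, can be shown with plain itemset support counting (a deliberately simplified stand-in for the FP-tree/CanTree structures the paper actually preserves):

```python
from collections import Counter
from itertools import combinations

def count_itemsets(transactions, k=2):
    """Count the support of every k-itemset in a batch of transactions."""
    counts = Counter()
    for t in transactions:
        for iset in combinations(sorted(set(t)), k):
            counts[iset] += 1
    return counts

def incremental_update(old_counts, new_transactions, k=2):
    """Incremental pass: reuse the counts from the original pass and
    scan only the inserted transactions, instead of re-mining the
    whole database from scratch."""
    return old_counts + count_itemsets(new_transactions, k)
```

With support counts, merging old and new results is a simple addition; the contribution of the paper is doing the analogous reuse for FP-tree mining results and CanTrees, in parallel on MapReduce.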
    A New Feistel-Type White-Box Encryption Scheme
    Ting-Ting Lin, Xue-Jia Lai, Wei-Jia Xue, Yin Jia
    Journal of Data Acquisition and Processing, 2017, 32 (2): 386-395. 
    The white-box attack is a new attack context in which cryptographic software is assumed to be implemented on an untrusted platform and all the implementation details are controlled by the attackers. So far, almost all white-box solutions have been broken. In this study, we propose a white-box encryption scheme that is not a variant of obfuscating existing ciphers but a completely new solution. The new scheme is based on the unbalanced Feistel network as well as the ASASASA (where "A" means affine and "S" means substitution) structure. It has a flexible input block size and saves space compared with other solutions because its space requirement grows only slowly (linearly) with block size. Moreover, our scheme not only has huge white-box diversity and white-box ambiguity but also has a particular construction that bypasses public white-box cryptanalysis techniques, including attacks aimed at white-box variants of existing ciphers and attacks specific to the ASASASA structure. More precisely, we present a definition of white-box security with regard to equivalent keys, and prove that our scheme satisfies this security requirement.
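To make the ASASASA structure concrete: it alternates invertible affine layers (A) with substitution layers (S). The toy sketch below shows one A-S-A slice over bytes; the affine constants and S-box are arbitrary illustrative choices (real constructions use large, secret, randomly generated maps), so this demonstrates only the layering, not any security property.

```python
# Toy byte S-box: x -> 7x + 3 mod 256 is a bijection since gcd(7, 256) = 1.
SBOX = [(x * 7 + 3) % 256 for x in range(256)]

def affine(state, a=5, b=11):
    """Toy affine layer over bytes; invertible because gcd(a, 256) = 1."""
    return [(a * s + b) % 256 for s in state]

def asa_round(state):
    """One A-S-A slice of the ASASASA structure: affine map, byte-wise
    substitution, affine map. Every layer is invertible, so the whole
    slice (and the full ASASASA stack) is too."""
    return affine([SBOX[s] for s in affine(state)])
```

Stacking such slices gives the ASASASA shape; white-box security comes from keeping the composed affine maps and S-boxes secret while publishing only the composition, which this toy example does not attempt.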
    Developer Role Evolution in Open Source Software Ecosystem: An Explanatory Study on GNOME
    Can Cheng, Bing Li, Zeng-Yang Li, Yu-Qi Zhao, Feng-Ling Liao
    Journal of Data Acquisition and Processing, 2017, 32 (2): 396-414. 
    An open source software (OSS) ecosystem refers to an OSS development community composed of many software projects and the developers contributing to these projects. The projects and developers co-evolve in an ecosystem. To keep such OSS ecosystems evolving healthily, there is a need to attract and retain developers, particularly project leaders and core developers who have a major impact on the project and the whole team. Therefore, it is important to figure out the factors that influence developers' chances of evolving into project leaders and core developers. To identify such factors, we conducted a case study on the GNOME ecosystem. First, we collected indicators reflecting developers' subjective willingness to contribute to a project and the project environment they stay in. Second, we calculated these indicators on the GNOME dataset. Then, we fitted logistic regression models, taking as independent variables the resulting indicators after eliminating the most collinear ones, and taking as the dependent variable the future developer role (core developer or project leader). The results show that some of these indicators of subjective willingness and project environment (e.g., the total number of projects a developer joined) significantly influence developers' chances of evolving into core developers and project leaders. Under different validation methods, our model performs well in predicting developers' evolution into core developers, with stable prediction performance (F-value 0.770).
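The model family used here is ordinary logistic regression: indicators in, probability of the future role out. A minimal gradient-descent sketch (the single indicator and the toy data are illustrative, not the GNOME indicators or the paper's fitting procedure):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient logistic regression: relate developer
    indicators (e.g., number of projects joined) to the probability
    of becoming a core developer (label 1) versus not (label 0)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                       # gradient of the log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b
```

The fitted coefficients are what the study interprets: a significantly positive weight on an indicator means higher values raise the odds of evolving into a core developer or project leader.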
E-mail: info@sjcjycl.cn
 
  Copyright ©2015 JCST, All Rights Reserved