Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      05 November 2017, Volume 32 Issue 6   
    Special Section on Software Systems 2017
    Preface
    Tao Xie, Yuanfang Cai, Xuanzhe Liu, Xiaoyin Wang, Mithun P. Acharya, Marcelo d'Amorim, Xiaoxing Ma
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1057-1059. 
    What Are They Talking About? Analyzing Code Reviews in Pull-Based Development Model
    Zhi-Xing Li, Yue Yu, Gang Yin, Tao Wang, Huai-Min Wang
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1060-1075. 
Code reviews in the pull-based model are open to community users on GitHub. A variety of participants take part in the review discussions, and the topics cover not only the improvement of code contributions but also project evolution and social interaction. A comprehensive understanding of the review topics in the pull-based model would help to better organize the code review process and optimize review tasks such as reviewer recommendation and pull-request prioritization. In this paper, we first conduct a qualitative study on three popular open-source software projects hosted on GitHub and construct a fine-grained two-level taxonomy covering four level-1 categories (code correctness, pull-request decision-making, project management, and social interaction) and 11 level-2 subcategories (e.g., defect detecting, reviewer assigning, contribution encouraging). Second, we conduct a preliminary quantitative analysis on a large set of review comments labeled by TSHC (a two-stage hybrid classification algorithm), which automatically classifies review comments by combining rule-based and machine-learning techniques. Through the quantitative study, we explore typical review patterns. We find that the three projects present similar comment distributions over the subcategories. Pull-requests submitted by inexperienced contributors tend to contain potential issues even when they have passed the tests. Furthermore, external contributors are more likely to break project conventions in their early contributions.
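The two-stage idea behind TSHC can be sketched as follows. This is a minimal illustration of the rule-then-ML pipeline only; the keyword rules and the fallback classifier here are hypothetical stand-ins, not the paper's implementation:

```python
def two_stage_classify(comment, rules, ml_predict):
    """Two-stage hybrid classification in the spirit of TSHC: stage one
    applies hand-written keyword rules; only comments that no rule
    matches fall through to the machine-learning classifier."""
    text = comment.lower()
    for keyword, label in rules:
        if keyword in text:
            return label
    return ml_predict(comment)

# Hypothetical rules and a dummy ML stage for illustration.
rules = [("typo", "defect detecting"), ("lgtm", "social interaction")]
print(two_stage_classify("LGTM, thanks!", rules, lambda c: "other"))
# → social interaction
```

Rules handle the unambiguous cases cheaply and precisely; the learned classifier covers the long tail the rules miss.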
CSLabel: An Approach for Labelling Mobile App Reviews
    Li Zhang, Xin-Yue Huang, Jing Jiang, Ya-Kun Hu
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1076-1089. 
Mobile apps (applications) have become a popular form of software, and user reviews of apps have become an important feedback resource. Users may raise issues in their reviews, such as a functional bug, a network lag, or a request for a feature. Understanding these issues can help developers focus on users' concerns, and help users evaluate similar apps before download or purchase. However, it is not obvious which types of issues are raised in a given review; moreover, the number of user reviews is huge and their text is unstructured and informal. In this paper, we analyze 3 902 user reviews of 11 mobile apps in a Chinese app store, 360 Mobile Assistant, and uncover 17 issue types. We then propose CSLabel, an approach that labels user reviews according to the issue types they raise. CSLabel uses a cost-sensitive learning method to mitigate the effects of imbalanced data, and optimizes the kernel function setting of the support vector machine (SVM) classifier. Results show that CSLabel correctly labels reviews with a precision of 66.5%, a recall of 69.8%, and an F1 measure of 69.8%. Compared with the state-of-the-art approach, CSLabel improves the precision by 14%, the recall by 30%, and the F1 measure by 22%. Finally, we apply our approach in two real scenarios: 1) we provide an overview of 1 076 786 user reviews from 1 100 apps in the 360 Mobile Assistant, and 2) we find that some issue types correlate negatively with users' evaluation of apps.
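Cost-sensitive learning counters class imbalance by charging more for mistakes on rare classes. A minimal sketch of one common weighting scheme (inverse class frequency; the paper's exact cost settings are not reproduced here):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: rare issue types receive larger
    misclassification costs, mitigating class imbalance."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# A toy imbalanced label set: functional bugs dominate the reviews.
labels = ["bug"] * 8 + ["network lag", "feature request"]
weights = class_weights(labels)
print(weights["bug"] < weights["network lag"])  # → True
```

Such per-class weights can be passed to an SVM so that the margin penalty scales with the cost of misclassifying each issue type.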
    A Cluster Based Feature Selection Method for Cross-Project Software Defect Prediction
    Chao Ni, Wang-Shu Liu, Xiang Chen, Qing Gu, Dao-Xu Chen, Qi-Guo Huang
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1090-1107. 
Cross-project defect prediction (CPDP) uses labeled data from external source software projects to compensate for the shortage of useful data in the target project, in order to build a meaningful classification model. However, the distribution gap between software features extracted from the source and the target projects may be too large for the mixed data to be useful for training. In this paper, we propose a novel cluster-based method, FeSCH (Feature Selection Using Clusters of Hybrid-Data), that alleviates the distribution differences through feature selection. FeSCH has two phases: the feature clustering phase clusters features using a density-based clustering method, and the feature selection phase selects features from each cluster using a ranking strategy. For CPDP, we design three different heuristic ranking strategies for the second phase. To investigate the prediction performance of FeSCH, we design experiments based on real-world software projects and study the effects of FeSCH's design options (such as the ranking strategy, feature selection ratio, and classifiers). The experimental results demonstrate the effectiveness of FeSCH. First, compared with state-of-the-art baseline methods, FeSCH achieves better performance, and its performance is less affected by the choice of classifier. Second, FeSCH enhances performance by effectively selecting features across feature categories, and provides guidelines for selecting useful features for defect prediction.
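The two-phase structure can be sketched abstractly. This toy version substitutes a simple similarity-threshold grouping for the paper's density-based clustering and a single generic ranking function for its three heuristic strategies, so it illustrates the cluster-then-select shape only:

```python
def fesch_sketch(features, similarity, rank, threshold=0.8, ratio=0.5):
    """Two-phase selection in the spirit of FeSCH: (1) greedily group
    features whose pairwise similarity exceeds a threshold (a stand-in
    for the paper's density-based clustering); (2) keep the top-ranked
    fraction of each cluster, so the selected set spans all clusters."""
    clusters = []
    for f in features:
        for c in clusters:
            if all(similarity(f, g) >= threshold for g in c):
                c.append(f)
                break
        else:
            clusters.append([f])
    selected = []
    for c in clusters:
        c.sort(key=rank, reverse=True)
        selected.extend(c[:max(1, int(len(c) * ratio))])
    return selected

# Toy data: features a/b are redundant, as are c/d; the ranking is hypothetical.
sim = lambda f, g: 1.0 if {f, g} <= {"a", "b"} or {f, g} <= {"c", "d"} else 0.0
rank = {"a": 1, "b": 2, "c": 3, "d": 4}.get
print(fesch_sketch(["a", "b", "c", "d"], sim, rank))  # → ['b', 'd']
```

Picking from every cluster is what keeps the selected subset diverse: redundant features land in the same cluster and only the best of them survives.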
    On Locating Malicious Code in Piggybacked Android Apps
    Li Li, Daoyuan Li, Tegawendé F. Bissyandé, Jacques Klein, Haipeng Cai, David Lo, Yves Le Traon
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1108-1124. 
To devise efficient approaches and tools for detecting malicious packages in the Android ecosystem, researchers increasingly need a deep understanding of malware. There is thus a need for a framework for dissecting malware and locating malicious program fragments within app code, in order to build a comprehensive dataset of malicious samples. To address this need, we propose a tool-based approach called HookRanker, which produces ranked lists of potentially malicious packages based on the way malware behaviour code is triggered. In experiments on a ground truth of piggybacked apps, we are able to automatically locate the malicious packages of piggybacked Android apps with an accuracy@5 of 83.6% for packages triggered through method invocations and 82.2% for packages triggered independently.
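The accuracy@5 metric counts a case as a hit when the true malicious package appears among the top five ranked candidates; a minimal sketch (package names are invented for illustration):

```python
def accuracy_at_k(ranked_lists, truths, k=5):
    """Fraction of apps whose known malicious package appears in the
    top-k entries of the ranked candidate list."""
    hits = sum(truth in ranked[:k] for ranked, truth in zip(ranked_lists, truths))
    return hits / len(truths)

# Two toy apps: the first is a top-5 hit, the second is not.
ranked = [["com.pay", "com.ads", "com.mal"], ["com.a", "com.b"]]
print(accuracy_at_k(ranked, ["com.mal", "com.evil"], k=5))  # → 0.5
```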
    Automated String Constraints Solving for Programs Containing String Manipulation Functions
    Xu-Zhou Zhang, Yun-Zhan Gong, Ya-Wen Wang, Ying Xing, Ming-Zhe Zhang
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1125-1135. 
The ability to solve various constraints is a principal factor in automatic constraint solvers. Most object-oriented languages treat a character string as a primitive data type manipulated by string library functions, yet most constraint solvers place limitations on their input constraints, such as strong restrictions on expressiveness or an inability to solve hybrid constraints. These limitations hinder the application of automated constraint solvers to program analysis for programs containing strings and string manipulation functions. We propose an approach that automatically solves program constraints involving strings and string manipulation functions. Based on the character-array model, we design a constraint language whose primitive operations precisely describe the constraints of commonly used string manipulation functions. The translated string constraints, together with numeric constraints, are then solved by a two-phase test generation procedure: first, a partial solution is obtained that satisfies the arithmetic constraints on the position variables, and this solution is used to simplify the string constraints into pure character-array constraints; second, the pure array constraints are solved by an off-the-shelf constraint solver based on an array-specific theory. We integrate the approach into an automated testing tool to support the generation of string test cases and then perform experiments. The results show that the proposed approach improves the testing coverage of the existing tool, and that the integrated tool has an advantage in handling specific string manipulation functions compared with an existing string solver.
    Special Section of CAD/Graphics 2017
    Preface
    Li-Gang Liu, Kai Xu
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1136-1137. 
ExquiMo: An Exquisite Corpse Tool for Collaborative 3D Shape Design
    Warunika Ranaweera, Parmit Chilana, Daniel Cohen-Or, Hao Zhang
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1138-1149. 
We introduce ExquiMo, a collaborative modeling tool that enables novice users to work together to generate interesting, and even creative, 3D shapes. Inspired by Exquisite Corpse gameplay, our tool allocates distinct parts of a shape to multiple players who model the assigned parts in sequence. Our approach is motivated by the understanding that effective surprise leads to creative outcomes. Hence, to maintain the surprise factor of the output, we conceal the previously modeled parts from the most recent player. Part designs from individual players are fused together to produce an often unexpected and novel end result. We investigate the effect of collaboration on the output designs by conducting a sequence of user studies to validate hypotheses formed from our research questions. The results of the user studies support our hypotheses that multi-user collaborative 3D modeling via ExquiMo tends to lead to more creative novice designs according to the commonly used criteria for creativity: novelty and surprise.
    Estimation of Vehicle Pose and Position with Monocular Camera at Urban Road Intersections
    Jin-Zhao Yuan, Hui Chen, Bin Zhao, Yanyan Xu
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1150-1161. 
With rapid urbanization, cities are expanding day by day and road environments are becoming more and more complicated. Vehicle ego-localization in complex road environments places imperative requirements on intelligent driving technology. Reliable vehicle ego-localization at complex traffic intersections, including lane recognition and vehicle position and attitude estimation, is significant for the intelligent driving of the vehicle. In this article, we focus on complex urban road environments and propose a pose and position estimation method based on road signs, using only a monocular camera and a common GPS (global positioning system). Combined with a multi-sensor cascade system, this method can serve as a stable and reliable alternative when the precision of the multi-sensor cascade system decreases. The experimental results show that, within 100 meters of the road signs, the pose error is less than 2 degrees and the position error is less than one meter, reaching lane-level positioning accuracy. In a comparison with the BeiDou high-precision positioning system L202, our method is more accurate at detecting which lane the vehicle is driving in.
    Mechanical Assembly Packing Problem Using Joint Constraints
    Ming-Liang Xu, Ning-Bo Gu, Wei-Wei Xu, Ming-Yuan Li, Jun-Xiao Xue, Bing Zhou
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1162-1171. 
The three-dimensional packing problem generally asks how to pack a set of models into a given bounding box with the smallest packaging volume; it is known to be NP-hard. For the packing problem in the mechanical field, the space utilization of a mechanism is low due to the constraints imposed by mechanical joints between different mechanical parts. Although the situation can be improved by breaking the mechanism into components at every joint, doing so burdens the user when reassembling the mechanism and may also reduce the service life of the mechanical parts. In this paper, we propose a novel mechanism packing algorithm that deliberately considers the DOFs (degrees of freedom) of mechanical joints. The algorithm constructs the solution space according to each joint; while building the search tree of the splitting scheme, it does not break a joint but moves it. Therefore, the proposed algorithm requires only a minimal number of splits to meet the space utilization goal. Numerical examples show that the proposed method conveniently and efficiently packs three-dimensional models into a given bounding box with high space utilization.
    Non-Frontal Facial Expression Recognition Using a Depth-Patch Based Deep Neural Network
    Nai-Ming Yao, Hui Chen, Qing-Pei Guo, Hong-An Wang
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1172-1185. 
The challenge of coping with non-frontal head poses during facial expression recognition results in considerable reduction of accuracy and robustness when capturing expressions that occur during natural communication. In this paper, we attempt to recognize facial expressions under poses with large rotation angles from 2D videos. A depth-patch based 4D expression representation model is proposed; it is reconstructed from 2D dynamic images to delineate continuous spatial changes and temporal context in non-frontal cases. Furthermore, we present an effective deep neural network classifier that accurately captures pose-variant expression features from the depth patches and recognizes non-frontal expressions. Experimental results on the BU-4DFE database show that the proposed method achieves a high recognition accuracy of 86.87% for non-frontal facial expressions with head rotation angles of up to 52°, outperforming existing methods. We also present a quantitative analysis of the components contributing to the performance gain through tests on the BU-4DFE and Multi-PIE datasets.
    Surface Tension Model Based on Implicit Incompressible Smoothed Particle Hydrodynamics for Fluid Simulation
    Xiao-Kun Wang, Xiao-Juan Ban, Ya-Lan Zhang, Si-Nuo Liu, Peng-Fei Ye
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1186-1197. 
To capture stable and realistic microscopic features of a fluid surface, a surface tension and adhesion method based on implicit incompressible SPH (smoothed particle hydrodynamics) is presented in this paper. It provides a steady and fast tension model and remedies the neglect of adhesion in previous models. Molecular cohesion and surface-area minimization are considered for surface tension, and adhesion is added to capture the microscopic characteristics of the surface. To simulate surface tension and adhesion stably and efficiently, the surface tension and adhesion model is integrated into an implicit incompressible SPH method. The experimental results show that, compared with previous methods, the method better simulates surface features in a variety of scenarios while ensuring stability and efficiency.
    Stable Real-Time Surgical Cutting Simulation of Deformable Objects Embedded with Arbitrary Triangular Meshes
Shi-Yu Jia, Zhen-Kuan Pan, Guo-Dong Wang, Wei-Zhong Zhang, Xiao-Kang Yu
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1198-1213. 
Surgical simulators need to simulate deformation and cutting of deformable objects. Adaptive octree-mesh based cutting methods embed the deformable objects into octree meshes that are recursively refined near the cutting tool trajectory. Deformation is applied only to the octree meshes; thus the deformation instability caused by degenerate elements is avoided. Biological tissues and organs usually contain complex internal structures that were ignored by previous work. In this paper the deformable objects are modeled as voxels connected by links and embedded inside adaptive octree meshes. Links swept by the cutting tool are disconnected, and object surface meshes are reconstructed from the disconnected links. Two novel methods for embedding triangular meshes as internal structures are proposed. The surface mesh embedding method is applicable to arbitrary triangular meshes, but these meshes have no physical properties. The material sub-region embedding method associates the interiors enclosed by the triangular meshes with physical properties, but requires that these meshes be watertight, have no self-intersections, and have smallest features larger than a voxel. Some local features are constructed in a pre-calculation stage to increase simulation performance. Simulation tests show that our methods can cut embedded structures in a way consistent with the cutting of the deformable objects. Cut fragments also deform correctly along with the deformable objects.
    Automatic Anterior Lamina Cribrosa Surface Depth Measurement Based on Active Contour and Energy Constraint
    Zai-Liang Chen, Peng Peng, Bei-Ji Zou, Hai-Lan Shen, Hao Wei, Rong-Chang Zhao
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1214-1221. 
The lamina cribrosa is affected by intraocular pressure, the major risk factor for glaucoma. However, the capability to evaluate the lamina cribrosa in vivo has been limited until recently due to poor image quality and the posterior laminar displacement of glaucomatous eyes. In this study, we propose an automatic method to measure the anterior lamina cribrosa surface depth (ALCSD), including a method for detecting the Bruch's membrane opening (BMO) based on k-means clustering and a region-based active contour. An anterior lamina cribrosa surface segmentation method based on an energy constraint is also proposed. In BMO detection, we initialize the Chan-Vese active contour model with the segmentation map of the k-means cluster. In the segmentation of the anterior lamina cribrosa surface, we use an energy function in each A-scan to establish a set of candidate points, remove the points that fail to meet the constraints, and finally fit the result with a B-spline. The proposed automatic method can model the posterior laminar displacement by measuring the ALCSD. It achieves a mean error of 45.34 μm in BMO detection; on the anterior lamina cribrosa surface, 94.1% of errors are within five pixels and 76.1% are within three pixels.
    Supervised Vessels Classification Based on Feature Selection
    Bei-Ji Zou, Yao Chen, Cheng-Zhang Zhu, Zai-Liang Chen, Zi-Qian Zhang
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1222-1230. 
Arterial-venous classification of retinal blood vessels is important for the automatic detection of cardiovascular diseases such as hypertensive retinopathy and stroke. In this paper, we propose an arterial-venous classification (AVC) method that focuses on feature extraction and selection from vessel centerline pixels. The vessel centerline is extracted after preprocessing for vessel segmentation and optic disc (OD) localization. Then, a region of interest (ROI) is extracted around the OD, and the most efficient features of each centerline pixel in the ROI are selected from local features, grey-level co-occurrence matrix (GLCM) features, and an adaptive local binary pattern (A-LBP) feature using a max-relevance and min-redundancy (mRMR) scheme. Finally, a feature-weighted K-nearest neighbor (FW-KNN) algorithm is used to classify the arterial-venous vessels. Experiments on the DRIVE and INSPIRE-AVR databases achieve high accuracies of 88.65% and 88.51% in the ROI, respectively.
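The feature-weighted nearest-neighbor step can be sketched as follows. This is a generic FW-KNN, not the paper's exact variant: in the paper the weights would come from the mRMR selection stage, whereas here both the weights and the two-dimensional toy features are illustrative:

```python
import math
from collections import Counter

def fw_knn(train, weights, query, k=3):
    """Feature-weighted k-NN: each feature contributes to the distance
    in proportion to its weight, so more informative features dominate
    the vote of the k nearest training samples."""
    def dist(point):
        return math.sqrt(sum(w * (x - y) ** 2
                             for w, x, y in zip(weights, point, query)))
    nearest = sorted(train, key=lambda item: dist(item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy centerline-pixel features (e.g., brightness, redness) with labels.
train = [((0.2, 0.9), "artery"), ((0.3, 0.8), "artery"),
         ((0.8, 0.1), "vein"), ((0.9, 0.2), "vein")]
print(fw_knn(train, weights=(1.0, 0.5), query=(0.25, 0.85)))  # → artery
```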
    Regular Paper
    Effective Query Grouping Strategy in Clouds
    Qin Liu, Yuhong Guo, Jie Wu, Guojun Wang
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1231-1249. 
As demand for cloud computing grows, more and more organizations outsource their data and query services to the cloud for cost savings and flexibility. Consider an organization with a great number of users that deploys multiple proxy servers in the cloud to achieve cost efficiency and load balancing. Given n queries, each expressed as several keywords, and k proxy servers, the problem is how to partition the n queries into k groups so as to minimize both the size difference between groups and the number of distinct keywords across all groups. Since this problem is NP-hard, we solve it in mathematic and heuristic ways: mathematic grouping uses a local optimization method, and heuristic grouping is based on k-means. Specifically, two extensions are provided: the first focuses on robustness, i.e., each user obtains search results even if some proxy servers fail; the second focuses on benefit, i.e., each user can retrieve as many files of potential interest as possible without increasing the overall sum. Extensive evaluations on both a synthetic dataset and real query traces verify the effectiveness of our strategies.
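A k-means-style heuristic over keyword sets can be sketched as below. This is a simplified illustration only: it seeds the pools with the first k queries and maximizes keyword overlap, whereas the paper's heuristic additionally balances group sizes:

```python
def group_queries(queries, k, iterations=10):
    """Heuristic keyword-based grouping in the spirit of a k-means
    method: assign each query (a set of keywords) to the group whose
    current keyword pool it overlaps most, then rebuild the pools from
    the assignment; repeat until (near) stable."""
    groups = [set(q) for q in queries[:k]]   # seed pools with the first k queries
    assignment = [0] * len(queries)
    for _ in range(iterations):
        for i, q in enumerate(queries):
            assignment[i] = max(range(k), key=lambda j: len(set(q) & groups[j]))
        groups = [set() for _ in range(k)]
        for i, q in enumerate(queries):
            groups[assignment[i]] |= set(q)
    return assignment

queries = [["cloud", "storage"], ["graph", "search"],
           ["cloud", "backup"], ["graph", "path"]]
print(group_queries(queries, k=2))  # → [0, 1, 0, 1]
```

Grouping queries that share keywords onto the same proxy keeps each group's distinct-keyword pool small, which is the quantity the objective penalizes.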
    A Reverse Auction Framework for Hybrid Access in Femtocell Network
    Yan-Jiao Chen, Xiao-Yan Yin, Jin Zhang
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1250-1264. 
In two-tier macro-femto heterogeneous networks, hybrid access is regarded as the most promising access control approach for mitigating macro-femto cross-tier interference and enhancing overall network performance. However, its implementation is hindered by the lack of an incentive market mechanism to motivate private femtocell owners to offer access permissions to macro users. In this paper, we propose a reverse auction framework for access permission transactions between a macrocell operator and multiple femtocell owners to promote hybrid access. Our goal is to maximize social welfare while guaranteeing the truthfulness of the auction. Since the coverage of multiple femtocells may overlap, we partition each cell to adjust the granularity of access permission availability. We first propose a Vickrey-Clarke-Groves (VCG)-based mechanism, which costs the least among all auction mechanisms that produce maximum social welfare. As the VCG mechanism is too time-consuming, we propose two alternative truthful mechanisms, namely a generalized second-price mechanism and a suboptimal mechanism. We further extend the auction framework to the scenario where femtocell owners have heterogeneous valuations for access permissions in different locations.
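In the single-permission special case, the VCG payment rule reduces to the familiar second-price form: the lowest-cost owner wins and is paid the second-lowest bid, so no owner gains by misreporting its cost. A sketch of that special case only; the paper's full mechanism additionally handles overlapping femtocell coverage and cell partitioning:

```python
def vcg_reverse_single(bids):
    """Single-item reverse (procurement) auction: select the lowest-cost
    seller and pay it the second-lowest bid, which is the VCG payment in
    this special case and makes truthful bidding a dominant strategy."""
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    winner, runner_up = order[0], order[1]
    return winner, bids[runner_up]

# Three femtocell owners bid their costs for granting one permission.
print(vcg_reverse_single([5.0, 3.0, 8.0]))  # → (1, 5.0)
```

The winner's payment depends only on the others' bids, which is exactly what removes the incentive to shade one's own bid.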
LTSS: Load-Adaptive Traffic Steering and Forwarding for Security Services in Multi-Tenant Cloud Datacenters
    Xue-Kai Du, Zhi-Hui Lu, Qiang Duan, Jie Wu, Cheng-Rong Wu
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1265-1278. 
Currently, different kinds of security devices are deployed in cloud datacenter environments, and tenants may choose their desired security services, such as a firewall or an IDS (intrusion detection system). At the same time, tenants in cloud computing datacenters are dynamic and have differing requirements, so security device deployment in cloud datacenters is complex and may lead to inefficient resource utilization. In this paper, we study this problem in a software-defined network (SDN) based multi-tenant cloud datacenter environment. We propose a load-adaptive traffic steering and packet forwarding scheme called LTSS to solve the problem. Our scheme combines an SDN controller with a TagOper plug-in to determine the minimum-load traffic paths for tenants, allowing tenants to obtain their desired security services in SDN-based datacenter networks. We also build a prototype system for LTSS to verify its functionality and evaluate the performance of our design.
    Intermittent Fault Diagnosability of Interconnection Networks
    Jia-Rong Liang, Hao Feng, Xiaojiang Du
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1279-1287. 
An interconnection network's diagnosability is an important metric for measuring its self-diagnostic capability. Permanent faults and intermittent faults are two different fault models in an interconnection network. In this paper, we focus on the diagnosability of interconnection networks in the intermittent-fault situation. First, we study a class of interconnection networks called crisp three-cycle networks, in which the cnin-number (the number of common vertices shared by each pair of vertices) is no more than one. Necessary and sufficient conditions are derived for the diagnosability of crisp three-cycle networks under the PMC (Preparata, Metze, and Chien) model. A simple check shows that many well-known interconnection networks are crisp three-cycle networks. Second, we prove that an interconnection network S is a ti-fault diagnosable system without repair if and only if its minimum in-degree is greater than ti under the BGM (Barsi, Grandoni, and Maestrini) model. Finally, we extend the necessary and sufficient conditions for determining whether an interconnection network S is ti-fault diagnosable without repair under the MM (Maeng and Malek) model from the permanent-fault situation to the intermittent-fault situation.
CHAUS: Scalable VM-Based Channels for Unbounded Streaming
    Yu Zhang, Yu-Fen Yu, Hui-Fang Cao, Jian-Kang Chen, Qi-Liang Zhang
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1288-1304. 
Stream processing is a special form of the dataflow execution model that offers extensive opportunities for optimization and automatic parallelism. A streaming application is represented by a graph of computation stages that communicate with each other via FIFO channels. In a shared-memory environment, a FIFO channel is classically a common, fixed-size synchronized buffer shared between the producer and the consumer. As the number of concurrent stage workers increases, synchronization overheads such as contention and waiting times rise sharply and severely impair application performance. In this paper, we present a novel multithreaded model that isolates memory between threads by default and provides a higher-level abstraction for scalable unicast or multicast communication between threads: CHAUS (Channel for Unbounded Streaming). The CHAUS model hides the underlying synchronization details but requires the user to declare the producer-consumer relationships of a channel in advance; it is the duty of the runtime system to ensure reliable data transmission at data-item granularity as declared. To achieve an unbounded buffer for streaming and to reduce synchronization overheads, we propose a virtual-memory based implementation of a scalable CHAUS channel. We check the programmability of CHAUS by successfully porting dedup and ferret from PARSEC and by implementing a MapReduce library with a Phoenix-like API. The experimental results show that workloads built with CHAUS run faster than those built with Pthreads, and CHAUS has the best scalability compared with two Pthreads versions; three workloads' CHAUS versions spend no more than 0.17x the runtime of their Pthreads counterparts on both 16 and 32 cores.
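The stage-and-channel abstraction can be illustrated with a minimal producer-consumer pipeline over an unbounded FIFO. This sketch shows only the programming model; it uses an ordinary queue, not the paper's virtual-memory based channel:

```python
import queue
import threading

def run_pipeline(items):
    """One producer stage feeding one consumer stage through an
    unbounded FIFO channel, mirroring the stage/channel abstraction."""
    chan = queue.Queue()              # unbounded by default
    out = []

    def producer():
        for x in items:
            chan.put(x)
        chan.put(None)                # end-of-stream marker

    def consumer():
        while (x := chan.get()) is not None:
            out.append(x * x)         # a toy computation stage

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out

print(run_pipeline([1, 2, 3]))  # → [1, 4, 9]
```

A fixed-size shared buffer would add blocking on `put` under load; the unbounded channel trades memory for reduced producer-side waiting, which is the contention CHAUS targets.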
    A Configurable Circuit for Cross-Correlation in Real-Time Image Matching
    Quan Zhou, Liang Yang, Hui Cao
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1305-1318. 
Cross-correlation (CC) is the most time-consuming part of implementing image matching algorithms based on the correlation method; how fast CC can be computed is therefore crucial to real-time image matching. This work reveals that the single cascading multiply-accumulate (CAMAC) and concurrent multiply-accumulate (COMAC) architectures, which have been widely used in the past, do not necessarily deliver satisfactory time performance for CC. To obtain better time performance and higher resource efficiency, this paper proposes a configurable circuit that combines the advantages of CAMAC and COMAC for the large number of multiply-accumulate (MAC) operations required by CC in exhaustive search. The proposed circuit works in an array manner and adapts well to image matching with changing sizes in real-time processing. Experimental results demonstrate that this novel circuit, which combines the two structures, completes vast MAC calculations at very high speed; compared with existing related work, it further improves computation density and is more flexible to use.
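As a software reference for the computation the circuit accelerates, exhaustive-search CC is just a nest of MAC loops over every template offset (this sketch illustrates the arithmetic, not the hardware architecture):

```python
def cross_correlate(image, template):
    """Exhaustive-search cross-correlation: slide the template over the
    image and compute a multiply-accumulate (MAC) sum at every offset.
    The two inner loops are the MAC workload the circuit parallelizes."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    scores = []
    for dy in range(H - h + 1):
        row = []
        for dx in range(W - w + 1):
            acc = 0
            for i in range(h):
                for j in range(w):
                    acc += image[dy + i][dx + j] * template[i][j]
            row.append(acc)
        scores.append(row)
    return scores

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
template = [[1, 0], [0, 1]]
print(cross_correlate(image, template))  # → [[6, 8], [12, 14]]
```

For an H×W image and h×w template this costs on the order of H·W·h·w multiplications, which is why MAC throughput dominates real-time matching.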
    Greedy Randomized Adaptive Search Procedure with Path-Relinking for the Vertex p-Center Problem
    Ai-Hua Yin, Tao-Qing Zhou, Jun-Wen Ding, Qing-Jie Zhao, Zhi-Peng Lv
    Journal of Data Acquisition and Processing, 2017, 32 (6): 1319-1333. 
The p-center problem consists of choosing a subset of vertices in an undirected graph as facilities in order to minimize the maximum distance between a client and its closest facility. This paper presents a greedy randomized adaptive search procedure with path-relinking (GRASP/PR) algorithm for the p-center problem, which combines GRASP and path-relinking. Each iteration of GRASP/PR consists of the construction of a randomized greedy solution followed by a tabu search procedure. The resulting solution is combined with one of the elite solutions by path-relinking, which explores trajectories connecting high-quality solutions. Experiments show that GRASP/PR is competitive with the state-of-the-art algorithms in the literature in terms of both solution quality and computational efficiency; specifically, it improves the previous best known results for 10 out of 40 large instances while matching the best known results for the others.
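The randomized greedy construction step can be sketched for p-center as follows. This shows only the restricted-candidate-list idea; the tabu search and path-relinking phases of GRASP/PR are omitted, and the alpha parameter and seeding are illustrative choices:

```python
import random

def grasp_construct(dist, p, alpha=0.5, seed=0):
    """Randomized greedy construction for the p-center problem: at each
    step, rank candidate facilities by the cover radius they would
    yield, then pick uniformly from the best alpha-fraction (the
    restricted candidate list), trading greediness for diversity."""
    rng = random.Random(seed)
    n = len(dist)
    centers = [rng.randrange(n)]
    while len(centers) < p:
        def radius_with(c):
            return max(min(dist[v][u] for u in centers + [c]) for v in range(n))
        cands = sorted((v for v in range(n) if v not in centers), key=radius_with)
        rcl = cands[:max(1, int(len(cands) * alpha))]
        centers.append(rng.choice(rcl))
    radius = max(min(dist[v][u] for u in centers) for v in range(n))
    return centers, radius

# Four clients on a line at positions 0, 1, 10, 11; choose p = 2 centers.
pts = [0, 1, 10, 11]
dist = [[abs(a - b) for b in pts] for a in pts]
centers, radius = grasp_construct(dist, p=2)
print(len(centers), radius)
```

Running the construction many times with different seeds yields the diverse starting solutions that the subsequent local search and path-relinking phases then improve.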
E-mail: info@sjcjycl.cn
 
  Copyright ©2015 JCST, All Rights Reserved