Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      05 May 2016, Volume 31 Issue 3   
    Special Section of CVM 2016
    Preface
    Shi-Min Hu, Li-Gang Liu, Ralph R. Martin
    Journal of Data Acquisition and Processing, 2016, 31 (3): 437-438. 
    Abstract   PDF(108KB) ( 695 )  
    Skeleton-Sectional Structural Analysis for 3D Printing
    Wen-Peng Xu, Wei Li, Li-Gang Liu
    Journal of Data Acquisition and Processing, 2016, 31 (3): 439-449. 
    Abstract   PDF(1642KB) ( 913 )  
    3D printing has become popular and has been widely used in various applications in recent years. More and more home users are motivated to design their own models and fabricate them using 3D printers. However, the printed objects may have structural or stress defects, as users may lack knowledge of stress analysis on 3D models. In this paper, we present an approach to help users analyze a model's structural strength while designing its shape. We adopt sectional structural analysis instead of conventional FEM (Finite Element Method) analysis, which is computationally expensive. Based on sectional structural analysis, our approach uses skeletons to integrate mesh design, strength computation, and mesh correction; the skeleton also guides section construction and load calculation for the analysis. For weak regions whose stress exceeds a threshold value in the analysis result, our system applies a correction by scaling the corresponding skeleton bones so as to make these regions stiff enough. A number of experiments demonstrate the applicability and practicality of our approach.
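    To make the per-section stress check concrete, here is a minimal sketch (our illustration, not the paper's implementation) that treats a skeleton bone as a cantilever beam with a solid circular cross-section; the load, lever arm, and PLA yield stress are illustrative assumptions:

```python
# Minimal sketch of a sectional stress check: bending stress sigma = M*c/I
# for a solid circular section, and the radius needed to pass a threshold.
import math

def bending_stress(load_n: float, lever_arm_m: float, radius_m: float) -> float:
    """Maximum bending stress at the section, sigma = M*c/I, in Pa."""
    moment = load_n * lever_arm_m               # bending moment M
    inertia = math.pi * radius_m ** 4 / 4.0     # second moment of area I
    return moment * radius_m / inertia          # c = radius for a solid circle

def safe_radius(load_n: float, lever_arm_m: float, yield_pa: float,
                safety: float = 1.5) -> float:
    """Smallest radius keeping sigma = 4*M/(pi*r^3) under yield/safety."""
    moment = load_n * lever_arm_m
    return (4.0 * moment * safety / (math.pi * yield_pa)) ** (1.0 / 3.0)

# Example: 20 N load, 0.1 m from the section, 5 mm strut of PLA (~60 MPa).
print(bending_stress(20.0, 0.1, 0.005))   # ~2.0e7 Pa, below 60 MPa
print(safe_radius(20.0, 0.1, 60e6))       # ~4.0 mm minimal safe radius
```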
    Improving Shape from Shading with Interactive Tabu Search
    Jing Wu, Paul L. Rosin, Xianfang Sun, Ralph R. Martin
    Journal of Data Acquisition and Processing, 2016, 31 (3): 450-462. 
    Abstract   PDF(7300KB) ( 870 )  
    Optimisation-based shape from shading (SFS) is sensitive to initialization: errors in initialization are a significant cause of poor overall shape reconstruction. In this paper, we present a method to help overcome this problem by means of user interaction. There are two key elements in our method. Firstly, we extend SFS to consider a set of initializations, rather than a single one. Secondly, we efficiently explore this initialization space using a heuristic search method, tabu search, guided by user evaluation of the reconstruction quality. Reconstruction results on both synthetic and real images demonstrate the effectiveness of our method in providing more desirable shape reconstructions.
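    As a rough illustration of the search loop (not the authors' exact procedure), the sketch below runs generic tabu search over a discrete set of candidate initializations; `neighbours` and `evaluate` stand in for the paper's move operator and the user's interactive quality rating:

```python
# Generic tabu-search skeleton: always move to the best non-tabu neighbour,
# even when it is worse than the current state, to escape local optima.
def tabu_search(start, neighbours, evaluate, iters=50, tabu_size=10):
    """neighbours(s) -> iterable of states; evaluate(s) -> quality score."""
    best, best_score = start, evaluate(start)
    current, tabu = start, [start]               # tabu list: short-term memory
    for _ in range(iters):
        candidates = [s for s in neighbours(current) if s not in tabu]
        if not candidates:
            break
        current = max(candidates, key=evaluate)  # best non-tabu move
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)                          # expire the oldest entry
        score = evaluate(current)
        if score > best_score:
            best, best_score = current, score
    return best, best_score
```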
    View-Aware Image Object Compositing and Synthesis from Multiple Sources
    Xiang Chen, Wei-Wei Xu, Sai-Kit Yeung, Kun Zhou
    Journal of Data Acquisition and Processing, 2016, 31 (3): 463-478. 
    Abstract   PDF(5766KB) ( 1086 )  
    Image compositing is widely used to combine visual elements from separate source images into a single image. Although recent image compositing techniques are capable of achieving smooth blending of visual elements from different sources, most of them implicitly assume the source images are taken from the same viewpoint. In this paper, we present an approach to compositing novel image objects from multiple source images which have different viewpoints. Our key idea is to construct 3D proxies for meaningful components of the source image objects, and use these 3D component proxies to warp and seamlessly merge the components together in the same viewpoint. To realize this idea, we introduce a coordinate-frame-based single-view camera calibration algorithm to handle general types of image objects, a structure-aware cuboid optimization algorithm to obtain cuboid proxies for image object components with correct structural relationships, and finally a 3D-proxy transformation guided image warping algorithm to stitch object components. We further describe a novel application based on this compositing approach to automatically synthesize a large number of image objects from a set of exemplars. Experimental results show that our compositing approach can be applied to a variety of image objects, such as chairs, cups, lamps, and robots, and that the synthesis application can create novel image objects with significant shape and style variations from a small set of exemplars.
    A Linear Approach for Depth and Colour Camera Calibration Using Hybrid Parameters
    Ke-Li Cheng, Xuan Ju, Ruo-Feng Tong, Min Tang, Jian Chang, Jian-Jun Zhang
    Journal of Data Acquisition and Processing, 2016, 31 (3): 479-488. 
    Abstract   PDF(1415KB) ( 1153 )  
    Many recent applications of computer graphics and human-computer interaction have adopted both colour cameras and depth cameras as input devices. Therefore, an effective calibration of both types of hardware taking different colour and depth inputs is required. Our approach removes the numerical difficulties of previous methods, which use non-linear optimization to explicitly resolve the camera intrinsics as well as the transformation between the depth and colour cameras. A matrix of hybrid parameters is introduced to linearize our optimization. The hybrid parameters offer a transformation from a depth parametric space (depth camera image) to a colour parametric space (colour camera image) by combining the intrinsic parameters of the depth camera and a rotation transformation from the depth camera to the colour camera. Both the rotation transformation and the intrinsic parameters can be explicitly calculated from our hybrid parameters with the help of a standard QR factorisation. We test our algorithm with both synthesized data and real-world data where ground-truth depth information is captured by a Microsoft Kinect. The experiments show that our approach provides calibration accuracy comparable to state-of-the-art algorithms while taking much less computation time (1/50 of Herrera's method and 1/10 of Raposo's method), owing to the advantage of using hybrid parameters.
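    The factorisation step can be illustrated independently of the paper's pipeline: if a 3x3 hybrid matrix is the product H = K·R of an upper-triangular intrinsic matrix K and a rotation R, an RQ decomposition (a QR variant) separates the two. The sketch below, on illustrative synthetic values, is our reading of that step rather than the authors' code:

```python
# Split a hybrid matrix H = K @ R into intrinsics K and rotation R.
import numpy as np
from scipy.linalg import rq

def split_hybrid(H):
    """Factor H = K @ R with K upper triangular (intrinsics), R a rotation."""
    K, R = rq(H)
    # RQ is unique only up to signs; force diag(K) > 0 so K is a valid
    # intrinsic matrix, compensating inside R (signs @ signs == identity).
    signs = np.diag(np.sign(np.diag(K)))
    K, R = K @ signs, signs @ R
    return K / K[2, 2], R

# Round trip on synthetic values: recover K and R from their product.
K_true = np.array([[580.0, 0.0, 320.0], [0.0, 580.0, 240.0], [0.0, 0.0, 1.0]])
a = 0.1
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
K, R = split_hybrid(K_true @ R_true)
assert np.allclose(K, K_true) and np.allclose(R, R_true)
```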
    Multi-Task Learning for Food Identification and Analysis with Deep Convolutional Neural Networks
    Xi-Jin Zhang, Yi-Fan Lu, Song-Hai Zhang
    Journal of Data Acquisition and Processing, 2016, 31 (3): 489-500. 
    Abstract   PDF(1917KB) ( 1823 )  
    In this paper, we propose a multi-task system that can identify dish types, food ingredients, and cooking methods from food images with deep convolutional neural networks. We built a dataset of 360 classes of different foods with at least 500 images for each class. To reduce noise in the data, which was collected from the Internet, outlier images were detected and eliminated through a one-class SVM trained with deep convolutional features. We simultaneously trained a dish identifier, a cooking method recognizer, and a multi-label ingredient detector; they share a few low-level layers in the deep network architecture. The proposed framework shows higher accuracy than traditional methods with handcrafted features, and the cooking method recognizer and ingredient detector can be applied to dishes not included in the training dataset to provide reference information for users.
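    A minimal PyTorch sketch of this sharing pattern follows; the layer sizes and the method/ingredient class counts are placeholders (only the 360 dish classes come from the abstract), and the trunk is far shallower than a real network:

```python
import torch
import torch.nn as nn

class FoodMultiTaskNet(nn.Module):
    """Shared convolutional trunk feeding one head per task."""
    def __init__(self, n_dishes=360, n_methods=20, n_ingredients=100):
        super().__init__()
        self.trunk = nn.Sequential(               # shared low-level layers
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.dish_head = nn.Linear(64, n_dishes)              # single label
        self.method_head = nn.Linear(64, n_methods)           # single label
        self.ingredient_head = nn.Linear(64, n_ingredients)   # multi-label

    def forward(self, x):
        h = self.trunk(x)
        return self.dish_head(h), self.method_head(h), self.ingredient_head(h)

# Training couples the tasks through the trunk: cross-entropy for dish and
# cooking method, binary cross-entropy (sigmoid per class) for ingredients.
net = FoodMultiTaskNet()
dish, method, ingredients = net(torch.randn(4, 3, 224, 224))
```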
    A Modified Fuzzy C-Means Algorithm for Brain MR Image Segmentation and Bias Field Correction
    Wen-Qian Deng, Xue-Mei Li, Xifeng Gao, Cai-Ming Zhang
    Journal of Data Acquisition and Processing, 2016, 31 (3): 501-511. 
    Abstract   PDF(1523KB) ( 1354 )  
    In quantitative brain image analysis, accurate segmentation of brain tissue from magnetic resonance (MR) images is a critical step. It is considered the most important and difficult issue in the field of medical image processing. The quality of MR images is degraded by the partial volume effect, noise, and intensity inhomogeneity, which render the segmentation task extremely challenging. We present a novel fuzzy c-means algorithm (RCLFCM) for segmentation and bias field correction of brain MR images. We employ a new gray-difference coefficient and design a new impact factor to measure the effect of neighboring pixels, so that robustness to noise is enhanced. Moreover, we redefine the objective function of FCM (fuzzy c-means) by adding a bias field estimation model, so as to overcome the intensity inhomogeneity in the image and segment the brain MR images simultaneously. We also construct a new spatial function by combining pixel gray-value dissimilarity with membership, making full use of the spatial information between pixels to update the membership. Compared with other state-of-the-art approaches using similarity accuracy on synthetic MR images with different levels of noise and intensity inhomogeneity, the proposed algorithm generates results with high accuracy and robustness to noise.
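    For reference, here are the unmodified FCM updates that the paper's objective extends; the bias-field term and the new spatial/impact factors are the paper's contributions and are not reproduced in this plain sketch:

```python
import numpy as np

def fcm(x, n_clusters=3, m=2.0, iters=100, eps=1e-5, seed=0):
    """Plain fuzzy c-means on a 1D array of gray values x."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, n_clusters, replace=False).astype(float)
    u = None
    for _ in range(iters):
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-12        # (C, N)
        # membership update: u_ci = 1 / sum_k (d_ci / d_ki)^(2/(m-1))
        ratio = (dist[:, None, :] / dist[None, :, :]) ** (2.0 / (m - 1.0))
        u = 1.0 / ratio.sum(axis=1)                                 # (C, N)
        # center update: v_c = sum_i u_ci^m x_i / sum_i u_ci^m
        new_centers = (u ** m @ x) / (u ** m).sum(axis=1)
        if np.abs(new_centers - centers).max() < eps:
            centers = new_centers
            break
        centers = new_centers
    return centers, u
```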
    Nonlinear Dimensionality Reduction by Local Orthogonality Preserving Alignment
    Tong Lin, Yao Liu, Bo Wang, Li-Wei Wang, Hong-Bin Zha
    Journal of Data Acquisition and Processing, 2016, 31 (3): 512-524. 
    Abstract   PDF(4177KB) ( 1059 )  
    We present a new manifold learning algorithm called Local Orthogonality Preserving Alignment (LOPA). Our algorithm is inspired by the Local Tangent Space Alignment (LTSA) method, which aims to align multiple local neighborhoods into a global coordinate system using affine transformations. However, LTSA often fails to preserve original geometric quantities such as distances and angles. Although an iterative alignment procedure for preserving orthogonality was suggested by the authors of LTSA, neither the corresponding initialization nor experiments were given. Procrustes Subspaces Alignment (PSA) implements the orthogonality-preserving idea by estimating each rotation transformation separately with simulated annealing. However, the optimization in PSA is complicated, and multiple separately estimated local rotations may produce globally contradictory results. To address these difficulties, we first use the pseudo-inverse trick of LTSA to represent each local orthogonal transformation with the unified global coordinates. Second, the orthogonality constraints are relaxed into an instance of semi-definite programming (SDP). Finally, a two-step iterative procedure is employed to further reduce the errors in the orthogonality constraints. Extensive experiments show that LOPA can faithfully preserve distances, angles, inner products, and neighborhoods of the original datasets. In comparison, the embedding performance of LOPA is better than that of PSA and comparable to that of state-of-the-art algorithms like MVU and MVE, while the runtime of LOPA is significantly shorter than that of PSA, MVU, and MVE.
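    The LTSA building block referred to above can be sketched in a few lines: each neighborhood's local coordinates come from an SVD of the centered neighbors. LOPA's pseudo-inverse alignment and SDP relaxation build on top of this step and are not shown:

```python
import numpy as np

def local_tangent_coords(X, idx, d):
    """X: (N, D) data; idx: indices of one neighbourhood; d: target dimension.
    Returns the neighbourhood's coordinates in its estimated tangent space."""
    nbrs = X[idx] - X[idx].mean(axis=0)          # centre the neighbourhood
    # right singular vectors = principal directions of the local patch
    _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
    return nbrs @ vt[:d].T                       # project onto top-d directions
```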
    Computer Graphics and Multimedia
    Texture Repairing by Unified Low Rank Optimization
    Xiao Liang, Xiang Ren, Zhengdong Zhang, Yi Ma
    Journal of Data Acquisition and Processing, 2016, 31 (3): 525-546. 
    Abstract   PDF(8335KB) ( 811 )  
    In this paper, we show how to harness both low-rank and sparse structures in regular or near-regular textures for image completion. Our method is based on a unified formulation for both random and contiguous corruption. In addition to the low-rank property of the texture, the algorithm also exploits the sparsity of natural images: because a natural image is piecewise smooth, it is sparse in certain transform domains (such as the Fourier or wavelet domain). We combine the low-rank and sparsity properties of the texture image in the proposed algorithm. Our algorithm, based on convex optimization, can automatically and correctly repair the global structure of a corrupted texture, even without precise information about the regions to be completed. The algorithm integrates texture rectification and repair into one optimization problem. Through extensive simulations, we show that our method can complete and repair textures corrupted by errors with both random and contiguous supports better than existing low-rank matrix recovery methods. Our method also demonstrates significant advantages over local patch based texture synthesis techniques in dealing with large corruption, non-uniform texture, and large perspective deformation.
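    The low-rank-plus-sparse split at the core of such formulations can be illustrated with a generic robust-PCA iteration (an inexact augmented Lagrangian scheme); the paper's full objective, which also folds in texture rectification, is not reproduced here:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    u, s, vt = np.linalg.svd(M, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

def soft(M, tau):
    """Entrywise soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(D, lam=None, iters=100, tol=1e-7):
    """Split D into L (low rank) + S (sparse) by inexact ALM."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(D, 2)
    Y = D / max(norm2, np.abs(D).max() / lam)    # dual variable initialisation
    mu, rho = 1.25 / norm2, 1.5
    S = np.zeros_like(D, dtype=float)
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)        # low-rank update
        S = soft(D - L + Y / mu, lam / mu)       # sparse update
        resid = D - L - S
        Y = Y + mu * resid                       # dual ascent
        mu *= rho
        if np.linalg.norm(resid) <= tol * np.linalg.norm(D):
            break
    return L, S
```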
    Variance Analysis and Adaptive Sampling for Indirect Light Path Reuse
    Hao Qin, Xin Sun, Jun Yan, Qi-Ming Hou, Zhong Ren, Kun Zhou
    Journal of Data Acquisition and Processing, 2016, 31 (3): 547-560. 
    Abstract   PDF(4437KB) ( 1236 )  
    In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes — in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents to what degree the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance as well as an adaptive sampling scheme based on this heuristic to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.
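    The resulting sampling policy follows a generic adaptive pattern: spend a fixed budget where the estimated variance is largest. The covariance heuristic itself is the paper's contribution and is stood in for by `var_est` in this sketch:

```python
import numpy as np

def allocate_samples(var_est, budget, min_samples=1):
    """var_est: (H, W) nonnegative per-pixel variance estimates.
    Returns integer per-pixel sample counts summing to at most `budget`."""
    base = min_samples * var_est.size             # every pixel gets a floor
    weights = var_est / max(var_est.sum(), 1e-12)
    extra = np.floor(weights * (budget - base))   # spend the rest on variance
    return (min_samples + extra).astype(int)

# Example: a 4x4 image where one high-variance pixel attracts most samples.
v = np.ones((4, 4)); v[0, 0] = 20.0
print(allocate_samples(v, budget=128))
```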
    Data Management and Data Mining
    Subgroup Discovery Algorithms: A Survey and Empirical Evaluation
    Sumyea Helal
    Journal of Data Acquisition and Processing, 2016, 31 (3): 561-576. 
    Abstract   PDF(330KB) ( 1646 )  
    Subgroup discovery is a data mining technique that discovers interesting associations among different variables with respect to a property of interest. Existing subgroup discovery methods employ different strategies for searching, pruning and ranking subgroups. It is crucial to learn which features of a subgroup discovery algorithm should be considered for generating quality subgroups. In this regard, a number of reviews have been conducted on subgroup discovery. Although they provide a broad overview of some popular subgroup discovery methods, they employ few datasets and measures for subgroup evaluation. With the existing measures alone, subgroups cannot be appraised from all perspectives. Our work performs an extensive analysis of some popular subgroup discovery methods, using a wide range of datasets and defining new measures for subgroup evaluation. The results of this analysis help in understanding the major subgroup discovery methods, uncovering gaps for further improvement, and selecting the suitable category of algorithms for specific application domains.
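    As one concrete example of such a measure, weighted relative accuracy (WRAcc), a standard subgroup quality function, trades the subgroup's coverage off against the lift of the target share inside it:

```python
def wracc(n_total, n_pos, n_sub, n_sub_pos):
    """Weighted relative accuracy of a subgroup.
    n_total/n_pos: dataset size and positives; n_sub/n_sub_pos: the same
    counts restricted to the subgroup."""
    coverage = n_sub / n_total
    return coverage * (n_sub_pos / n_sub - n_pos / n_total)

# Example: 1000 rows with 200 positives; a 100-row subgroup with 60 positives
# scores 0.1 * (0.6 - 0.2) = 0.04 (0 would mean no better than chance).
print(wracc(1000, 200, 100, 60))
```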
    Trinity: Walking on a User-Object-Tag Heterogeneous Network for Personalised Recommendations
    Ming-Xin Gan, Lily Sun, Rui Jiang
    Journal of Data Acquisition and Processing, 2016, 31 (3): 577-594. 
    Abstract   PDF(3109KB) ( 1087 )  
    The rapid evolution of the Internet has created demand for effective recommender systems that can pinpoint useful information among online resources. Although historical rating data has been widely used as the most important information in recommendation methods, recent work has demonstrated improved recommendation performance when tag information is incorporated. Furthermore, the availability of tag annotations is supported by such popular online social tagging applications as CiteULike, MovieLens and BibSonomy, which allow users to express their preferences, upload resources and assign their own tags. Nevertheless, most existing tag-aware recommendation approaches model relationships among users, objects and tags using a tripartite graph, and hence overlook relationships within the same type of nodes. To overcome this limitation, we propose a novel approach, Trinity, that integrates historical data and tag information for personalised recommendation. Trinity constructs a three-layered object-user-tag network that considers not only interconnections between different types of nodes but also relationships within the same type of nodes. Based on this heterogeneous network, Trinity adopts a random walk with restart model to assign strengths of association to candidate objects, thereby providing a means of prioritizing the objects for a query user. We validate our approach via a series of large-scale 10-fold cross-validation experiments and evaluate its performance using three comprehensive criteria. Results show that our method outperforms several existing methods, including supervised random walk with restart, simulation of resource allocating processes, and traditional collaborative filtering.
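    The random walk with restart itself is a standard iteration; a minimal sketch on a plain adjacency matrix follows. Constructing the three-layer network and weighting its intra- and inter-layer edges is the paper's own contribution and is not reproduced:

```python
import numpy as np

def rwr(A, seed_idx, restart=0.15, iters=1000, tol=1e-8):
    """A: (N, N) nonnegative adjacency matrix; returns relevance scores of
    all nodes with respect to the seed node (here, the query user)."""
    W = A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)  # column-stochastic
    e = np.zeros(A.shape[0]); e[seed_idx] = 1.0              # restart vector
    p = e.copy()
    for _ in range(iters):
        p_next = (1.0 - restart) * (W @ p) + restart * e     # walk or restart
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p
```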
    A Hybrid Method of Domain Lexicon Construction for Opinion Targets Extraction Using Syntax and Semantics
    Chun Liao, Chong Feng, Sen Yang, He-Yan Huang
    Journal of Data Acquisition and Processing, 2016, 31 (3): 595-603. 
    Abstract   PDF(624KB) ( 1016 )  
    Opinion targets extraction from Chinese microblogs plays an important role in opinion mining. There has been significant progress in this area recently, especially with methods based on conditional random fields (CRF). However, such methods only take lexicon-related features into consideration and do not exploit the implicit syntactic and semantic knowledge. We propose a novel approach which incorporates a domain lexicon with groups of syntactic and semantic features. The approach acquires the domain lexicon in a novel way, exploring syntactic and semantic information through part-of-speech tags, dependency structure, phrase structure, semantic roles and word-embedding-based semantic similarity. We then combine the domain lexicon with opinion targets extracted by a CRF with grouped features for opinion targets extraction. Experimental results on the COAE2014 dataset show that the approach outperforms other well-known methods on the task of opinion targets extraction.
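    The embedding-similarity step used for lexicon expansion can be sketched as follows; the threshold, the vector table, and the helper names are our illustrative assumptions, not the paper's:

```python
import numpy as np

def expand_lexicon(lexicon, candidates, vec, threshold=0.7):
    """vec: dict word -> embedding vector. A candidate joins the domain
    lexicon if its cosine similarity to any known target passes the threshold."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return {w for w in candidates
            if any(cos(vec[w], vec[t]) >= threshold for t in lexicon)}
```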
    Computer Networks and Distributed Computing
    Metadata Feedback and Utilization for Data Deduplication Across WAN
    Bing Zhou, Jiang-Tao Wen
    Journal of Data Acquisition and Processing, 2016, 31 (3): 604-623. 
    Abstract   PDF(575KB) ( 837 )  
    Data deduplication for file communication across wide area networks (WAN), in applications such as file synchronization and mirroring in cloud environments, usually achieves significant bandwidth savings at the cost of significant time overheads. The time overheads include the time required for data deduplication at two geographically distributed nodes (e.g., the disk access bottleneck) and the duplication query/answer operations between the sender and the receiver, since each query or answer introduces at least one round-trip time (RTT) of latency. In this paper, we present a data deduplication system across WAN with metadata feedback and metadata utilization (MFMU), in order to rein in the deduplication-related time overheads. In the proposed MFMU system, selective metadata feedbacks from the receiver to the sender are introduced to reduce the number of duplication query/answer operations. In addition, to reduce the metadata-related disk I/O operations at the receiver, as well as the bandwidth overhead introduced by the metadata feedbacks, a hysteresis hash re-chunking mechanism based metadata utilization component is introduced. Our experimental results demonstrate that MFMU achieved an average of 20%~40% deduplication acceleration, with the bandwidth saving ratio not reduced by the metadata feedbacks, as compared with the "baseline" content defined chunking (CDC) used in LBFS (Low-bandwidth Network File System) and existing state-of-the-art bimodal chunking based data deduplication solutions.
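    For context, content defined chunking cuts a byte stream wherever the low bits of a rolling fingerprint hit a fixed pattern, so chunk boundaries survive insertions and deletions. The toy fingerprint below merely illustrates the idea (LBFS uses Rabin fingerprints over a sliding window):

```python
import hashlib

def cdc_chunks(data: bytes, mask=0x1FFF, min_size=2048, max_size=65536):
    """Cut `data` at content-defined boundaries; returns a list of chunks."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) ^ byte) & 0xFFFFFFFF       # toy rolling fingerprint
        size = i - start + 1
        if (size >= min_size and (h & mask) == mask) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

# Deduplication then sends or stores each chunk once, keyed by a
# collision-resistant digest; the receiver asks only about unseen digests.
digests = [hashlib.sha1(c).hexdigest() for c in cdc_chunks(b"x" * 100000)]
```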
    Artificial Intelligence and Pattern Recognition
    Learning Better Word Embedding by Asymmetric Low-Rank Projection of Knowledge Graph
    Fei Tian, Bin Gao, En-Hong Chen, Tie-Yan Liu
    Journal of Data Acquisition and Processing, 2016, 31 (3): 624-634. 
    Abstract   PDF(332KB) ( 1055 )  
    Word embedding, which refers to low-dimensional dense vector representations of natural words, has demonstrated its power in many natural language processing tasks. However, it may suffer from the inaccurate and incomplete information contained in the free-text corpus used as training data. To tackle this challenge, quite a few studies leverage knowledge graphs as an additional information source to improve the quality of word embedding. Although these studies have achieved some success, they have neglected important facts about knowledge graphs: 1) many relationships in knowledge graphs are many-to-one, one-to-many or even many-to-many, rather than simply one-to-one; 2) most head entities and tail entities in knowledge graphs come from very different semantic spaces. To address these issues, we propose a new algorithm named ProjectNet. ProjectNet models the relationships between head and tail entities after transforming them with different low-rank projection matrices. The low-rank projection allows non-one-to-one relationships between entities, while different projection matrices for head and tail entities allow them to originate in different semantic spaces. The experimental results demonstrate that ProjectNet yields more accurate word embedding than previous approaches, and thus leads to clear improvements in various natural language processing tasks.
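    A toy scoring function shows the structure of the idea; the dimensions, the factored matrices, and the TransE-style distance are illustrative stand-ins, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 10                                   # embedding dim, projection rank
# separate low-rank projections for head and tail, factored as (d x k)(k x d)
A_h, B_h = rng.normal(size=(d, k)), rng.normal(size=(k, d))
A_t, B_t = rng.normal(size=(d, k)), rng.normal(size=(k, d))

def score(h, r, t):
    """Smaller is better: ||P_h h + r - P_t t|| with low-rank projections P.
    Low rank lets a relation map many entities to one region (non-one-to-one);
    distinct P_h and P_t let heads and tails live in different semantic spaces."""
    return float(np.linalg.norm(A_h @ (B_h @ h) + r - A_t @ (B_t @ t)))

h, r, t = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
print(score(h, r, t))
```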
SCImago Journal & Country Rank
 

ISSN 1004-9037

         

Home
Editorial Board
Author Guidelines
Subscription
Journal of Data Acquisition and Processing
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190, P.R. China
E-mail: info@sjcjycl.cn
Copyright ©2015 JCST, All Rights Reserved