Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      05 May 2010, Volume 25 Issue 3
    Special Section on Trends Changing Data Management
    Preface
    Xiao-Feng Meng and Hai-Xun Wang
    Journal of Data Acquisition and Processing, 2010, 25 (3): 387-388. 
    Abstract   PDF(165KB) ( 1672 )  

    Information explosion and advances in computing hardware have brought forth a new generation of applications on the Internet and on mobile devices that are poised to transform the way we work and play. One of the biggest challenges for the database community is to better support the ubiquity of big data in the Internet age. This calls for new data management solutions that traditional DBMSs cannot provide. The JCST Special Section on Trends Changing Data Management aims at bringing together researchers in data management to discuss the state of database research and its impact on practice.
    The special section received an enthusiastic response, with submissions on topics ranging from flash memory databases to cloud databases. After careful review, we accepted 13 papers, each of high technical quality; collectively they cover a wide range of topics that reflect new trends in data management.
    The paper "Outlier Detection over Sliding Windows for Probabilistic Data Streams'' by Wang {\it et al}. addresses the challenges inherent in the probabilistic or the uncertain data. It studies a classic problem --- outlier detection --- in the new setting.
    The paper "Privacy-Preserving Data Sharing in Cloud Computing'' by Hui Wang addresses two important problems that are characteristic of the Internet age. Cloud computing is poised to revolutionize IT, and it also represents a controversial concept because of the disruptive change it may incur in the area of privacy. The paper is one of the first that addresses this concern.
    The paper "Efficient Location Updates for Continuous Queries over Moving Objects'' by Hsueh {\it et al}. addresses challenges in location databases. The wide adoption of mobile devices spurred many applications in this domain, and the paper sheds light on advances in processing location aware queries.
    The paper "Towards Progressive and Load Balancing Distributed Computation: A Case Study on Skyline Analysis'' by Huang {\it et al}. addresses another reality in the new trends of data management: data distribution and load balancing. The paper proposed a solution known as progressive query processing for this task.
    The paper "Context-Sensitive Document Ranking'' by Chang {\it et al}. focuses on an important data management issue on the Web: ranking. The paper focuses on using auxiliary information associated with documents to improve ranking.
    The paper "The Inverse Classification Problem'' by Aggarwal {\it et al}. focuses on data mining, which is the cornerstone technique in handling the big data. The paper proposed a very interesting problem --- determining attribute values given class labels --- that may benefit many applications.
    The paper "Annotation Based Query Answer over Inconsistent Database'' by Wu {\it et al}. focuses on query processing against inconsistent database. It addresses one of the current challenges in data management, i.e., data does not observe constraints, such as functional dependency, required by a traditional DBMS.
    The paper "HAPS: Supporting Effective and Efficient Full-Text P2P Search with Peer Dynamics'' by Ren {\it et al}. shows that much data is being stored in P2P systems, where data management issues abound. Specifically, it studies how to improve search in a P2P environment.
    The paper "A Solution of Data Inconsistencies in Data Integration --- Designed for Pervasive Computing Environment'' by Wang {\it et al}. addresses data inconsistency issues in data integration. The approach it proposes uses data quality as a criterion.
    The paper "Flash-Optimized B+-Tree'' by On {\it et al}. studies issues of data access in flash memory databases. As flash memory has increasing data storage capacity, the database community is faced with the challenge of re-optimizing data accesses for memory based instead of disk based devices.
    The paper "Efficient Distributed Skyline Queries for Mobile Applications'' by Xiao {\it et al}. focuses on query processing in mobile environment. It deals with a wide range of issues including mobility, wireless bandwidth, disconnections, etc.
    The paper "A Query Interface Matching Approach Based on Extended Evidence Theory for Deep Web'' by Dong {\it et al}. studies a schema matching like problem in deep Web, i.e., how to match query interfaces specified by Web forms so that the users can tap into the rich data hidden inside the deep Web.
    The paper "Dynamic Damage Recovery for Web Databases'' by Zhu {\it et al}. investigate a problem familiar to the database community --- data recovery, except that the database is web based.
    We thank our authors and reviewers, who contributed greatly to the special section. We are proud of its high-quality papers and diverse topics. We hope that the vision, technical achievements, and empirical findings presented in this special section will encourage the database community to address the challenges posed by these new trends in data management.

    Outlier Detection over Sliding Windows for Probabilistic Data Streams
    Bin Wang, Member, CCF, Xiao-Chun Yang, Senior Member, CCF, Member, ACM, IEEE, Guo-Ren Wang, Senior Member, CCF, Member, ACM, IEEE, and Ge Yu, Senior Member, CCF, Member, ACM, IEEE
    Journal of Data Acquisition and Processing, 2010, 25 (3): 389-400. 
    Abstract   PDF(832KB) ( 2035 )  

    Outlier detection is a very useful technique in many applications, where data is generally uncertain and can be described using probability. While it has been studied intensively for deterministic data, outlier detection is still novel in the emerging field of uncertain data. In this paper, we study the semantics of outlier detection on probabilistic data streams and present a new definition of distance-based outliers over sliding windows. We then show that the problem of detecting an outlier over a set of possible world instances is equivalent to the problem of finding the $k$-th element in its neighborhood. Based on this observation, a dynamic programming algorithm (DPA) is proposed to reduce the detection cost from $O(2^{|R(e,d)|})$ to $O(k \cdot |R(e,d)|)$, where $R(e,d)$ is the $d$-neighborhood of $e$. Furthermore, we propose a pruning-based approach (PBA) to effectively and efficiently filter non-outliers in a single window and to incrementally detect outliers over the most recent $m$ elements. Finally, detailed analysis and thorough experimental results demonstrate the efficiency and scalability of our approach.
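    A minimal sketch of the deterministic baseline may help fix ideas: an element is a distance-based outlier in the current window if fewer than k other elements lie within distance d of it. The Python below is illustrative only (1D values, brute-force counting); the paper's contribution is doing this efficiently over the possible worlds of a probabilistic stream.

```python
from collections import deque

def sliding_window_outliers(stream, window_size, d, k):
    """Yield (index, element, is_outlier) as each element arrives."""
    window = deque(maxlen=window_size)
    for i, e in enumerate(stream):
        window.append(e)
        # |R(e, d)|: size of the d-neighborhood of e in the current
        # window (subtract 1 for the self-match).
        n = sum(1 for x in window if abs(x - e) <= d) - 1
        yield i, e, n < k

for i, e, out in sliding_window_outliers([1.0, 1.1, 0.9, 9.5, 1.2],
                                         window_size=4, d=0.5, k=2):
    print(i, e, "outlier" if out else "inlier")
```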

    Privacy-Preserving Data Sharing in Cloud Computing
    Hui Wang
    Journal of Data Acquisition and Processing, 2010, 25 (3): 401-414. 
    Abstract   PDF(823KB) ( 4571 )  

    Storing and sharing databases in the cloud raises serious concerns about individual privacy. We consider two kinds of privacy risk: presence leakage, by which attackers can explicitly identify individuals in (or not in) the database, and association leakage, by which attackers can unambiguously associate individuals with sensitive information. Existing privacy-preserving data sharing techniques either fail to protect presence privacy or incur considerable information loss. In this paper, we propose a novel technique, Ambiguity, to protect both presence privacy and association privacy with low information loss. We formally define the privacy model and quantify the privacy guarantee of Ambiguity against both presence leakage and association leakage. We prove both theoretically and empirically that the information loss of Ambiguity is always less than that of the classic generalization-based anonymization technique. We further propose an improved scheme, PriView, that achieves lower information loss than Ambiguity. We propose efficient algorithms to construct both Ambiguity and PriView schemes. Extensive experiments demonstrate the effectiveness and efficiency of both schemes.

    Efficient Location Updates for Continuous Queries over Moving Objects
    Yu-Ling Hsueh, Roger Zimmermann, Member, ACM, Senior Member, IEEE, and Wei-Shinn Ku, Member, ACM, IEEE
    Journal of Data Acquisition and Processing, 2010, 25 (3): 415-430. 
    Abstract   PDF(1873KB) ( 1802 )  

    The significant overhead related to frequent location updates from moving objects often results in poor performance. As most location updates do not affect query results, the network bandwidth and the battery life of moving objects are wasted. Existing solutions propose lazy updates, but such techniques generally avoid only a small fraction of all unnecessary location updates because of their basic approach (e.g., safe regions, time or distance thresholds). Furthermore, most prior work focuses on a simplified scenario where queries are either static or rarely change their positions. In this study, two novel and efficient location update strategies are proposed, for a trajectory movement model and an arbitrary movement model, respectively. The first strategy, for the trajectory movement environment, is the Adaptive Safe Region (ASR) technique, which retrieves an adjustable safe region that is continuously reconciled with the surrounding dynamic queries. The communication overhead is reduced in a highly dynamic environment where both queries and data objects change their positions frequently. In addition, we design a framework that supports multiple query types (e.g., range and c-$k$NN queries). In this framework, our query re-evaluation algorithms take advantage of ASRs and issue location probes only to the affected data objects, without flooding the system with unnecessary location update requests. The second strategy, for the arbitrary movement environment, is the Partition-based Lazy Update (PLU) algorithm, which takes this idea further by adopting Location Information Tables (LITs) that (a) allow each moving object to estimate possible query movements and issue a location update only when it may affect query results, and (b) enable smart server probing that results in fewer messages. We first define the data structure of an LIT, which is essentially packed with a set of surrounding query locations across the terrain, and then discuss the mobile-side and server-side processes corresponding to the utilization of LITs. Simulation results confirm that both the ASR and PLU concepts improve scalability and efficiency over existing methods.
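    The safe-region idea underlying ASR can be sketched as follows: a client reports its position only after leaving the region granted by the server, so moves inside the region cost no messages. This toy Python protocol (circular regions, fixed radius, made-up method names) is an assumption for illustration, not the paper's actual ASR reconciliation.

```python
import math

class MobileClient:
    def __init__(self, x, y, safe_center, safe_radius):
        self.x, self.y = x, y
        self.safe_center, self.safe_radius = safe_center, safe_radius

    def move(self, x, y, server):
        self.x, self.y = x, y
        cx, cy = self.safe_center
        if math.hypot(x - cx, y - cy) > self.safe_radius:
            # Left the safe region: this update may affect query results.
            self.safe_center, self.safe_radius = server.report(x, y)

class Server:
    def report(self, x, y):
        # Re-evaluate affected queries, then grant a new safe region.
        # A real system (like ASR) would adapt the radius to nearby queries.
        print(f"update received at ({x}, {y})")
        return (x, y), 5.0

c = MobileClient(0, 0, (0, 0), 5.0)
s = Server()
c.move(2, 2, s)   # still inside the safe region: no message sent
c.move(8, 1, s)   # outside: triggers exactly one location update
```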

    Towards Progressive and Load Balancing Distributed Computation: A Case Study on Skyline Analysis
    Jin Huang, Feng Zhao, Jian Chen, Member, CCF, Jian Pei, Senior Member, ACM, IEEE, and Jian Yin, Senior Member, CCF
    Journal of Data Acquisition and Processing, 2010, 25 (3): 431-443. 
    Abstract   PDF(878KB) ( 2164 )  

    Many of the latest high-performance distributed computing environments offer high communication bandwidth. Such high-bandwidth distributed systems provide unprecedented opportunities for analyzing huge datasets, but simultaneously pose new technical challenges. For users, progressive query answering is important; for system utilization, load balancing is critical. How to achieve progressive and load-balanced distributed computation is an interesting and promising research direction. As skyline analysis has been shown to be very useful in many multi-criteria decision making applications, in this paper we study the problem of progressive and load-balanced distributed skyline analysis. We propose a simple yet scalable approach with several nice properties for progressive and load-balanced query answering. We conduct extensive experiments which demonstrate the feasibility and effectiveness of the proposed method.
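    For readers new to skyline analysis: a point p dominates q if p is at least as good in every dimension and strictly better in at least one; the skyline is the set of non-dominated points. A minimal single-machine sketch (assuming smaller is better; not the paper's distributed, progressive algorithm):

```python
def dominates(p, q):
    """p dominates q: no worse everywhere, strictly better somewhere."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Brute-force skyline: keep every point no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

hotels = [(100, 2.0), (80, 3.5), (120, 1.0), (80, 1.5)]  # (price, distance)
print(skyline(hotels))  # [(120, 1.0), (80, 1.5)]
```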

    Context-Sensitive Document Ranking
    Li-Jun Chang, Jeffrey Xu Yu, and Lu Qin
    Journal of Data Acquisition and Processing, 2010, 25 (3): 444-457. 
    Abstract   PDF(661KB) ( 1702 )  

    Ranking is a main research issue in IR-style keyword search over a set of documents. In this paper, we study a new keyword search problem, called context-sensitive document ranking, which is to rank documents with an additional context that provides information about the application domain in which the documents are to be searched and ranked. The work is motivated by the fact that additional information associated with documents can help users find more relevant documents when they are unable to find them from the documents alone. In this paper, a context is a multi-attribute graph, which can represent any information maintained in a relational database, where multi-attribute nodes represent tuples and edges represent primary key and foreign key references among nodes. Context-sensitive ranking raises several research issues: how to score documents, how to evaluate the additional information obtained from the context that may contribute to document ranking, and how to rank documents by combining the scores/costs from the documents and the context. More importantly, the relationships between documents and the information stored in a relational database may be uncertain, because they come from different data sources and the relationships are determined systematically using similarity matching, which introduces uncertainty. In this paper, we concentrate on these research issues and provide a solution for ranking documents in a context when there is uncertainty between the documents and the context. We confirm the effectiveness of our approaches by conducting extensive experimental studies using real datasets, and we present our findings in this paper.

    The Inverse Classification Problem
    Charu C. Aggarwal, Member, ACM, Fellow, IEEE, Chen Chen, and Jiawei Han, Fellow, ACM, IEEE
    Journal of Data Acquisition and Processing, 2010, 25 (3): 458-468. 
    Abstract   PDF(519KB) ( 1791 )  
    In this paper, we examine an emerging variation of the classification problem, known as the inverse classification problem. In this problem, we determine the feature values to be used to create a record which will result in a desired class label. Such an approach is useful in applications where the objective is to determine a set of actions to be taken in order to guide the data mining application towards a desired solution. The system can be used for a variety of decision support applications that have pre-determined task criteria. We show that the inverse classification problem is a powerful and general model which encompasses a number of different criteria. We propose a number of algorithms for the inverse classification problem, which use an inverted list representation as the intermediate data structure for classification. We validate our approach over a number of real datasets.
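    The inverted-list flavor of the approach can be illustrated with a toy sketch: index training records by (feature, value) pairs, then, for a desired class label, rank the values of a feature by how often they co-occur with that label. Data, scoring rule, and names below are hypothetical; the paper's algorithms are more elaborate.

```python
from collections import defaultdict

records = [  # (features, label) -- made-up training data
    ({"age": "young", "income": "low"},  "decline"),
    ({"age": "young", "income": "high"}, "approve"),
    ({"age": "old",   "income": "high"}, "approve"),
    ({"age": "old",   "income": "low"},  "decline"),
]

# Inverted lists: (feature, value) -> ids of records containing it
index = defaultdict(list)
for rid, (feats, label) in enumerate(records):
    for fv in feats.items():
        index[fv].append(rid)

def suggest(feature, desired):
    """Rank values of `feature` by how often they lead to `desired`."""
    scores = {}
    for (f, v), rids in index.items():
        if f == feature:
            hits = sum(records[r][1] == desired for r in rids)
            scores[v] = hits / len(rids)
    return max(scores, key=scores.get)

print(suggest("income", "approve"))  # 'high'
```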
    Annotation Based Query Answer over Inconsistent Database
    Ai-Hua Wu, Zi-Jing Tan, and Wei Wang, Senior Member, CCF
    Journal of Data Acquisition and Processing, 2010, 25 (3): 469-481. 
    Abstract   PDF(752KB) ( 1982 )  

    In this paper, we introduce the concept of Annotation Based Query Answer and a method for its computation, which can answer queries on relational databases that may violate a set of functional dependencies. In this approach, inconsistency is viewed as a property of the data and described with annotations. To be more precise, every piece of data in a relation can have zero or more annotations, and annotations are propagated along with queries from the source to the output. With annotations, inconsistent data in both input tables and query answers can be marked out but preserved, instead of being filtered out as in most previous work. This approach thus avoids information loss, a vital and common deficiency of most previous work in this area. To compute query answers on an annotated database, we propose an algorithm to annotate the input tables, and we redefine the five basic relational algebra operations (selection, projection, join, union, and difference) so that annotations are correctly propagated as the valid set of functional dependencies changes during query processing. We also prove the soundness and completeness of the whole annotation computing system. Finally, we implement a prototype of our system and report performance experiments, which demonstrate that our approach has reasonable running time and excels at information preservation.
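    The annotation idea can be made concrete: tuples violating a functional dependency are marked rather than dropped, and operators carry the marks through to the answer. A minimal sketch, with an assumed representation of annotated tuples and only selection shown:

```python
def annotate_fd_violations(rows, lhs, rhs):
    """Attach annotations to rows violating the FD lhs -> rhs."""
    seen = {}
    for row in rows:
        seen.setdefault(row[lhs], set()).add(row[rhs])
    # A row is inconsistent if its lhs value maps to several rhs values.
    return [(row, {f"violates {lhs}->{rhs}"} if len(seen[row[lhs]]) > 1
             else set())
            for row in rows]

def select(annotated, pred):
    # Selection keeps each qualifying tuple together with its annotations,
    # so inconsistent data is preserved but still marked in the answer.
    return [(row, ann) for row, ann in annotated if pred(row)]

rows = [{"zip": "100", "city": "A"}, {"zip": "100", "city": "B"}]
for row, ann in select(annotate_fd_violations(rows, "zip", "city"),
                       lambda r: r["zip"] == "100"):
    print(row, ann or "consistent")
```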

    HAPS: Supporting Effective and Efficient Full-Text P2P Search with Peer Dynamics
    Zu-Jie Ren, Ke Chen, Li-Dan Shou, Gang Chen, Yi-Jun Bei, and Xiao-Yan Li
    Journal of Data Acquisition and Processing, 2010, 25 (3): 482-498. 
    Abstract   PDF(1026KB) ( 1855 )  

    Recently, peer-to-peer (P2P) search techniques have become popular on the Web as an alternative to centralized search, due to their high scalability and low deployment cost. However, P2P search systems are known to suffer from the problem of peer dynamics, such as frequent node joins/leaves and document changes, which cause serious performance degradation. This paper presents the architecture of a P2P search system, named HAPS, that supports full-text search in an overlay network with peer dynamics. HAPS consists of two layers of peers. The upper layer is a DHT (distributed hash table) network interconnected by super peers (which we refer to as hubs). Each hub maintains distributed data structures called search directories, which can be used to guide queries and to control search cost. The bottom layer consists of clusters of ordinary peers (called providers), which receive queries and return relevant results. Extensive experimental results indicate that HAPS performs searches effectively and efficiently. In addition, a performance comparison illustrates that HAPS outperforms a flat structured system and a hierarchical unstructured system in an environment with peer dynamics.

    A Solution of Data Inconsistencies in Data Integration --- Designed for Pervasive Computing Environment
    Xin Wang, Student Member, CCF, Lin-Peng Huang, Senior Member, CCF, Yi Zhang, Xiao-Hui Xu, Student Member, CCF, and Jun-Qing Chen, Student Member, CCF
    Journal of Data Acquisition and Processing, 2010, 25 (3): 499-508. 
    Abstract   PDF(603KB) ( 2070 )  

    New challenges, such as how to share information across heterogeneous devices, arise in data-intensive pervasive computing environments, and data integration is a practical approach to these applications. Dealing with inconsistencies is one of the important problems in data integration. In this paper we motivate the problem of resolving data inconsistencies for data integration in pervasive environments. We define data quality criteria and expense quality criteria for data sources to resolve data inconsistency. In our solution, data sources from which obtaining data is expensive are first discarded using the expense quality criteria and a utility function. Since it is difficult to obtain the actual quality of data sources in a pervasive computing environment, we then introduce a fuzzy multi-attribute group decision making approach to select the appropriate data sources. The experimental results show that our solution is effective.

    Flash-Optimized B+-Tree
    Sai Tung On, Haibo Hu, Yu Li, and Jianliang Xu
    Journal of Data Acquisition and Processing, 2010, 25 (3): 509-522. 
    Abstract   PDF(758KB) ( 2092 )  

    With the rapidly increasing capacity of flash memory, flash-aware indexing techniques are highly desirable for flash devices. The unique features of flash memory, such as the erase-before-write constraint and the asymmetric read/write cost, severely deteriorate the performance of the traditional B+-tree algorithm. In this paper, we propose an optimized indexing method, called the lazy-update B+-tree, to overcome the limitations of flash memory. The basic idea is to defer the committing of update requests to the B+-tree by buffering them in a segment of main memory. They are later committed in groups, so that the cost of each write operation is amortized over a batch of update requests. We identify a victim selection problem for the lazy-update B+-tree and develop two heuristic-based commit policies to address it. Simulation results show that the proposed lazy-update method, along with a well-designed commit policy, greatly improves the update performance of the traditional B+-tree while preserving query efficiency.
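    The buffering idea is easy to sketch: updates accumulate in main memory and are committed to the tree in groups, so each expensive flash write is amortized over a batch. In this illustrative sketch a dict stands in for the on-flash B+-tree, and the commit policy is a naive "buffer full" rule rather than the paper's victim-selection policies.

```python
class LazyUpdateTree:
    def __init__(self, capacity=4):
        self.tree = {}        # stand-in for the on-flash B+-tree
        self.buffer = {}      # deferred update requests in main memory
        self.capacity = capacity

    def insert(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.capacity:
            self._commit()

    def _commit(self):
        # One grouped write instead of many small random flash writes.
        print(f"committing {len(self.buffer)} buffered updates")
        self.tree.update(self.buffer)
        self.buffer.clear()

    def search(self, key):
        # Queries must consult the buffer as well as the tree.
        return self.buffer.get(key, self.tree.get(key))

t = LazyUpdateTree()
for k in range(6):
    t.insert(k, k * k)
print(t.search(5))  # found in the buffer, before its group commit
```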

    Efficient Distributed Skyline Queries for Mobile Applications
    Ying-Yuan Xiao, Senior Member, CCF, and Yue-Guo Chen
    Journal of Data Acquisition and Processing, 2010, 25 (3): 523-536. 
    Abstract   PDF(554KB) ( 1845 )  

    In this paper, we consider skyline queries in a mobile and distributed environment, where data objects are distributed across sites (database servers) interconnected through a high-speed wired network, and queries are issued by mobile units (laptops, cell phones, etc.) which access the data objects of the database servers over wireless channels. The inherent properties of the mobile computing environment, such as mobility, limited wireless bandwidth, and frequent disconnection, make skyline queries more complicated. We show how to efficiently perform distributed skyline queries in a mobile environment and propose a skyline query processing approach called efficient distributed skyline based on mobile computing (EDS-MC). In EDS-MC, a distributed skyline query is decomposed into five processing phases, and each phase is carefully designed to reduce network communication, network delay, and query response time. We conduct extensive experiments in a simulated mobile database system, and the experimental results demonstrate the superiority of EDS-MC over other skyline query processing techniques in mobile computing.

    A Query Interface Matching Approach Based on Extended Evidence Theory for Deep Web
    Yong-Quan Dong, Member, CCF, Qing-Zhong Li, Senior Member, CCF, Yan-Hui Ding, Member, CCF, and Zhao-Hui Peng, Member, CCF
    Journal of Data Acquisition and Processing, 2010, 25 (3): 537-547. 
    Abstract   PDF(637KB) ( 1986 )  

    Matching query interfaces is a crucial step in data integration across multiple Web databases. Different types of information about query interface schemas have been used to match attributes between schemas. Relying on a single type of information is not sufficient, and the matching results of individual matchers are often inaccurate and uncertain. Evidence theory is the state-of-the-art approach for combining multiple sources of uncertain information. However, traditional evidence theory has the limitation of treating individual matchers equally across different matching tasks, which reduces matching performance. This paper proposes a novel query interface matching approach based on extended evidence theory for the deep Web. Our approach first introduces a procedure that dynamically predicts the credibility of each matcher. It then extends traditional evidence theory with these credibilities and uses exponentially weighted evidence theory to combine the results of multiple matchers. Finally, it makes matching decisions using several heuristics to obtain the final matches. Our approach overcomes the shortcomings of the traditional method and can adapt to different matching tasks. Experimental results demonstrate the feasibility and effectiveness of the proposed approach.
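    For background, the core of evidence theory is Dempster's rule of combination; matcher credibility can be injected by discounting a mass function before combining. The sketch below uses standard Shafer discounting as a stand-in for the paper's exponential weighting, with a made-up frame of candidate matches.

```python
from itertools import product

def discount(m, alpha, frame):
    """Shafer discounting: scale masses by credibility alpha and move
    the remainder to total ignorance (the whole frame)."""
    out = {A: alpha * v for A, v in m.items() if A != frame}
    out[frame] = 1 - alpha + alpha * m.get(frame, 0.0)
    return out

def combine(m1, m2):
    """Dempster's rule: conjunctive combination, conflict renormalized."""
    raw, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            raw[C] = raw.get(C, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {A: v / (1 - conflict) for A, v in raw.items()}

frame = frozenset({"title=name", "title=label"})
m1 = {frozenset({"title=name"}): 0.7, frame: 0.3}   # matcher 1's beliefs
m2 = {frozenset({"title=name"}): 0.5,
      frozenset({"title=label"}): 0.3, frame: 0.2}  # matcher 2's beliefs
# Matcher 1 is judged more credible (0.9) than matcher 2 (0.6).
print(combine(discount(m1, 0.9, frame), discount(m2, 0.6, frame)))
```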

    Dynamic Damage Recovery for Web Databases
    Hong Zhu, Member, CCF, Ge Fu, Yu-Cai Feng, and Kevin Lü
    Journal of Data Acquisition and Processing, 2010, 25 (3): 548-561. 
    Abstract   PDF(873KB) ( 1637 )  

    In the Web context, there is an urgent need for self-healing database systems that can automatically locate and undo sets of transactions corrupted by malicious attacks. The metrics of survivability and availability require a database to provide continuous service during the period of recovery, which is referred to as dynamic recovery. In this paper, we show that an extended read operation on corrupted data can cause damage to spread. We build a fine-grained transaction log that records the extended read and write operations while user transactions are being processed. Based on this log, we propose a dynamic recovery system that implements damage repair. The system captures damage spreading caused by extended read-write dependencies between transactions. It also retains the execution results of blind-write transactions and gives a solution to recovery conflicts caused by forward recovery. Moreover, a confinement activity is imposed on in-repair data to prevent further damage propagation while data recovery is in progress. The performance evaluation in our experiments shows that the system is reliable and highly efficient.
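    The damage-spreading computation can be sketched as a forward pass over the transaction log: starting from the known-malicious transactions, any transaction that reads a tainted item becomes tainted, and its writes taint further items. The log format below is an assumption for illustration.

```python
log = [  # (txn id, read set, write set), in commit order -- made up
    ("T1", set(),       {"a"}),
    ("T2", {"a"},       {"b"}),    # reads data written by T1
    ("T3", {"c"},       {"d"}),
    ("T4", {"b", "d"},  {"e"}),    # reads b, which is tainted via T2
]

def damaged_transactions(log, malicious):
    """Follow read-write dependencies forward from malicious txns."""
    dirty, damaged = set(), set(malicious)
    for tid, reads, writes in log:
        if tid in damaged or reads & dirty:
            damaged.add(tid)
            dirty |= writes        # its writes spread the damage further
    return damaged

print(damaged_transactions(log, {"T1"}))  # {'T1', 'T2', 'T4'}
```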

    Computer Graphics and Visualization
    Harmonic Field Based Volume Model Construction from Triangle Soup
    Chao-Hui Shen, Guo-Xin Zhang, Student Member, CCF, Yu-Kun Lai, Shi-Min Hu, Senior Member, CCF, and Ralph R. Martin
    Journal of Data Acquisition and Processing, 2010, 25 (3): 562-571. 
    Abstract   PDF(9233KB) ( 1651 )  

    Surface triangle meshes and volume data are two commonly used representations of digital geometry. Converting from triangle meshes to volume data is challenging, since triangle meshes often contain defects such as small holes, internal structures, or self-intersections. In the extreme case, we may simply be presented with a set of arbitrarily connected triangles, a "triangle soup". This paper presents a novel method to generate volume data, represented as an octree, from a general 3D triangle soup. Our motivation is the Faraday cage from electrostatics. We consider the input triangles as forming an approximately closed Faraday cage and set its potential to zero. We then introduce a second conductor surrounding it and give it a higher constant potential. Due to the electrostatic shielding effect, the resulting electric field approximately lies in the part of space outside the shape implicitly determined by the triangle soup. Unlike previous approaches, our method is insensitive to small holes and internal structures, and is observed to generate volumes with low topological complexity. While our approach is somewhat limited in accuracy by the requirement of filling holes, it is still useful, for example, as a preprocessing step for applications such as mesh repair and skeleton extraction.
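    A 2D analogue conveys the electrostatic intuition: fix the "cage" cells at potential 0 and the outer boundary at 1, then relax Laplace's equation; the field stays near zero inside the cage even when the cage has a small hole. This grid-based sketch is illustrative only; the paper works on an octree around a 3D triangle soup.

```python
import numpy as np

n = 41
phi = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
phi[0, :] = phi[-1, :] = phi[:, 0] = phi[:, -1] = 1.0   # outer conductor
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True

cage = np.zeros_like(fixed)                              # square "cage"
cage[10, 10:31] = cage[30, 10:31] = True
cage[10:31, 10] = cage[10:31, 30] = True
cage[10, 19:22] = False                                  # small hole
phi[cage], fixed[cage] = 0.0, True

for _ in range(2000):   # Jacobi relaxation of Laplace's equation
    avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                  + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = np.where(fixed, phi, avg)                      # keep conductors fixed

print("potential at cage center:", round(phi[20, 20], 4))  # ~0: shielded
```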

    Making Slide Shows with Zoomquilts
    Lin Cong, Ruo-Feng Tong, Member, CCF, and Jin-Xiang Dong
    Journal of Data Acquisition and Processing, 2010, 25 (3): 572-582. 
    Abstract   PDF(34974KB) ( 1820 )  

    We present a novel method for generating a slide show, which takes as input a collection of images and outputs a video consisting of these images, switching between images smoothly in a continuous zoom-like process: as the sequence plays, a miniature of the next image is embedded in the current image and enlarges until it eventually replaces the current image. Color differences, texture similarity, image complexity, etc. are taken into account to measure the distance between two images. Based on this distance, a dynamic programming algorithm is used to generate the best playing sequence, which minimizes the sum of distances between successive images. The embedded image is naturally merged with the current one for a smooth sequence through a graph-cut-guided blending strategy, and interframe coherence is maintained to avoid abrupt changes. Experiments show that our approach is very effective on image collections of scenic spots.
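    The ordering step is a path version of the travelling salesman problem, so a Held-Karp style dynamic program over subsets finds the sequence minimizing the sum of successive distances. A minimal sketch with a made-up distance matrix (the paper's distance combines color, texture, and complexity cues):

```python
from functools import lru_cache

D = [[0, 4, 9, 7],      # pairwise image distances (illustrative)
     [4, 0, 3, 8],
     [9, 3, 0, 2],
     [7, 8, 2, 0]]
n = len(D)

@lru_cache(maxsize=None)
def best(mask, last):
    """Cheapest ordering of the images in `mask` that ends at `last`."""
    if mask == 1 << last:
        return 0.0
    prev_mask = mask ^ (1 << last)
    return min(best(prev_mask, p) + D[p][last]
               for p in range(n) if prev_mask & (1 << p))

full = (1 << n) - 1
cost, end = min((best(full, e), e) for e in range(n))
print("minimum total distance:", cost)
```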

    Model Transduction for Triangle Meshes
    Huai-Yu Wu, Chun-Hong Pan, Hong-Bin Zha, and Song-De Ma
    Journal of Data Acquisition and Processing, 2010, 25 (3): 583-594. 
    Abstract   PDF(18086KB) ( 2473 )  

    This paper proposes a novel method, called model transduction, to directly transfer pose between different meshes, without the need to build skeleton configurations for the meshes. Unlike previous retargetting methods such as deformation transfer, model transduction does not require a reference source mesh to obtain the source deformation, thus effectively avoiding unsatisfactory results when the source and target have different reference poses. Moreover, we show two other applications of the model transduction method: pose correction after various mesh editing operations, and skeleton-free deformation animation based on 3D Mocap (motion capture) data. Model transduction is based on two ingredients: model deformation and model correspondence. Specifically, based on the mean-value manifold operator, our mesh deformation method produces visually pleasing deformation results under large-angle rotations or large-scale translations of handles. We then propose a novel scheme for shape-preserving correspondence between manifold meshes. Our method fits nicely into a unified framework, where the same type of operator is applied in all phases. The resulting quadratic formulation can be efficiently minimized by fast solution of the sparse linear system. Experimental results show that model transduction can successfully transfer both complex skeletal structures and subtle skin deformations.

    Feature Preserving Mesh Simplification Using Feature Sensitive Metric
    Jin Wei, Student Member, CCF, ACM, and Yu Lou, Student Member, ACM
    Journal of Data Acquisition and Processing, 2010, 25 (3): 595-605. 
    Abstract   PDF(22939KB) ( 1756 )  

    We present a new method for feature-preserving mesh simplification based on a feature sensitive (FS) metric. The previous quadric error based approach is extended to a high-dimensional FS space so as to measure geometric distance together with normal deviation. As the normal direction of a surface point is uniquely determined by its position in Euclidean space, we employ a two-step linear optimization scheme to efficiently derive the constrained optimal target point. We demonstrate that our algorithm can preserve features more precisely under the global geometric properties, and can naturally retain more triangular patches in feature regions without a special feature detection procedure during the simplification process. Taking advantage of the blow-up phenomenon in FS space, we design an error weight that produces more suitable results. We also show that the Hausdorff distance is markedly reduced during FS simplification.

    Geometry Texture Synthesis Based on Laplacian Texture Image
    Ling-Qiang Ran and Xiang-Xu Meng
    Journal of Data Acquisition and Processing, 2010, 25 (3): 606-613. 
    Abstract   PDF(14927KB) ( 1989 )  

    In this paper, we present a new method to synthesize geometric texture details on an arbitrary surface from a sample texture patch. The key idea is to use Laplacian texture images to represent geometric texture details, which in turn facilitates simple and effective geometry texture synthesis and enables flexible geometry texture editing. Given a sample model and a target model, we first select a patch from the sample model and extract the geometric texture details. Next, we construct a Laplacian texture image for the extracted texture and synthesize the Laplacian texture image onto the target model using an on-surface image texture synthesis technique. Finally, we reconstruct the textured target model with adjusted Laplacian coordinates. Compared to existing methods, our method is easy to implement and produces results of high quality. Furthermore, it allows flexible control over the appearance of the textured target model through operations on the Laplacian texture image.
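    For background, the Laplacian (differential) coordinate of a vertex is its position minus the average of its neighbors, which is what encodes the geometric detail being synthesized. A toy sketch with a uniform (umbrella) Laplacian on a made-up mesh:

```python
import numpy as np

verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.5]])
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

def laplacian_coords(verts, neighbors):
    """delta_i = v_i - mean of v_i's one-ring neighbors."""
    delta = np.empty_like(verts)
    for i, nbrs in neighbors.items():
        delta[i] = verts[i] - verts[nbrs].mean(axis=0)
    return delta

# Small deltas mean a smooth region; large deltas carry geometric detail.
print(laplacian_coords(verts, neighbors))
```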

    Geometric Bone Modeling: From Macro to Micro Structures
    Oded Zaideman and Anath Fischer
    Journal of Data Acquisition and Processing, 2010, 25 (3): 614-622. 
    Abstract   PDF(23500KB) ( 11311 )  

    There is major interest within the bio-engineering community in developing accurate and non-invasive means for visualizing, modeling, and analyzing bone micro-structures. Bones are composed of hierarchical bio-composite materials characterized by complex multi-scale structural geometry. The process of reconstructing a volumetric bone model is usually based on CT/MRI scanned images. Meshes generated by current commercial CAD systems cannot be used for further modeling or analysis. Moreover, recently developed methods are only capable of capturing the micro-structure of small volumes (biopsy samples). This paper examines the problem of re-meshing a 3D computerized model of bone micro-structure. The proposed method is based on the following phases: defining sub-meshes of the original model in a grid-based structure, remeshing each sub-mesh using the neural network (NN) method, and merging the sub-meshes into a global mesh. Applying the NN method to micro-structures proved to be quite time-consuming; therefore, a parallel, grid-based approach was applied, yielding a simpler structure in each grid cell. The performance of this method is analyzed, and the method is demonstrated on real bone micro-structures. Furthermore, the method may be used as the basis for generating a multi-resolution bone geometric model.

    Pattern Recognition and Image Processing
    Nonlocal-Means Image Denoising Technique Using Robust M-Estimator
    Dinesh J. Peter, V. K. Govindan, and Abraham T. Mathew
    Journal of Data Acquisition and Processing, 2010, 25 (3): 623-631. 
    Abstract   PDF(6572KB) ( 3740 )  

    Edge-preserving smoothing techniques have gained importance in image processing applications. The nonlocal-means filter preserves edges better than linear model based approaches. This paper explores a variant of the nonlocal-means filter that uses a robust M-estimator function, rather than the exponential function, for its weight calculation. Here the filter output at each pixel is the weighted average of pixels in surrounding neighborhoods, with weights given by the chosen robust M-estimator function. The main goal of this paper is to identify the best robust M-estimator function for the nonlocal-means denoising algorithm. In order to speed up the computation, a new patch classification method is used to eliminate uncorrelated patches from the weighted averaging process. This patch classification approach compares favorably to existing techniques in terms of quality versus computational time. Validation using standard test images and brain atlas images was performed, and the results were compared with other known methods; the comparison gives reason to believe that the proposed refined technique has notable strengths.
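    The weighting change can be sketched directly: replace the usual exp(-dist/h) patch weight with a robust M-estimator. The 1D sketch below uses the Tukey biweight as one such estimator; the 1D setting, the parameters, and the Tukey choice are our assumptions for illustration (the paper compares several M-estimators).

```python
import numpy as np

def tukey_weight(dist, c=2.0):
    """Tukey biweight: down-weights dissimilar patches, zero beyond c."""
    return np.where(dist < c, (1 - (dist / c) ** 2) ** 2, 0.0)

def nlm_1d(signal, patch=2, search=5, c=2.0):
    x = np.asarray(signal, dtype=float)
    pad = np.pad(x, patch, mode="reflect")
    out = np.empty_like(x)
    for i in range(len(x)):
        pi = pad[i:i + 2 * patch + 1]                 # patch around i
        lo, hi = max(0, i - search), min(len(x), i + search + 1)
        ws, vals = [], []
        for j in range(lo, hi):
            pj = pad[j:j + 2 * patch + 1]
            dist = np.sqrt(np.mean((pi - pj) ** 2))   # patch distance
            ws.append(tukey_weight(dist, c))          # robust weight
            vals.append(x[j])
        ws = np.array(ws)
        out[i] = np.dot(ws, vals) / ws.sum() if ws.sum() > 0 else x[i]
    return out

noisy = np.sin(np.linspace(0, 3, 50)) + 0.2 * np.random.randn(50)
print(np.round(nlm_1d(noisy)[:5], 3))
```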

    Multilevel Threshold Based Image Denoising in Curvelet Domain
    Nguyen Thanh Binh and Ashish Khare
    Journal of Data Acquisition and Processing, 2010, 25 (3): 632-640. 
    Abstract   PDF(4850KB) ( 5164 )  

    In this paper, we propose a multilevel thresholding technique for noise removal in the curvelet transform domain using cycle-spinning. Most uncorrelated noise is removed by thresholding curvelet coefficients at the lowest level, while only a fraction of correlated noise is removed at lower levels, so we apply multilevel thresholding to the curvelet coefficients. The threshold in the proposed method depends on the variance of the curvelet coefficients and on the mean and median of the absolute curvelet coefficients at a particular level, which makes it adaptive in nature. Results obtained for 2-D images demonstrate improved performance over other recent related methods available in the literature.

    A New Classifier for Facial Expression Recognition: Fuzzy Buried Markov Model
    Yong-Zhao Zhan, Senior Member, CCF, Ke-Yang Cheng, Member, CCF, Ya-Bi Chen, and Chuan-Jun Wen
    Journal of Data Acquisition and Processing, 2010, 25 (3): 641-650. 
    Abstract   PDF(8921KB) ( 2389 )  

    To overcome the disadvantage of classical recognition models, which cannot perform well when there is noise or there are lost frames in expression image sequences, a novel model called the fuzzy buried Markov model (FBMM) is presented in this paper. FBMM relaxes the conditional independence assumptions of the classical hidden Markov model (HMM) by adding specific cross-observation dependencies between observation elements. Compared with the buried Markov model (BMM), FBMM uses cloud distributions in place of probability distributions to describe state transitions and observation symbol generation, and adopts the maximum mutual information (MMI) method in place of the maximum likelihood (ML) method to estimate parameters. Theoretical justifications and experimental results verify that FBMM achieves a higher recognition rate and stronger robustness for facial expression recognition on image sequences than HMM and BMM.
