Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      01 September 2010, Volume 25 Issue 5   
    Computer Graphics and Visualization
    Novel Geometrical Voxelization Approach with Application to Streamlines
    Hsien-Hsi Hsieh (谢咸熙), Chin-Chen Chang (张勤振), Wen-Kai Tai (戴文凯), and Han-Wei Shen (沈汉威)
    Journal of Data Acquisition and Processing, 2010, 25 (5): 895-904. 
    Abstract   PDF(8599KB) ( 2153 )  

    This paper presents a novel geometrical voxelization algorithm for polygonal models. First, distance computation is performed slice by slice on graphics processing units (GPUs) between geometrical primitives and voxels for line/surface voxelization. A novel solid filling process is then proposed to assist surface voxelization and achieve solid voxelization. Furthermore, using the proposed transfer functions, both binary and anti-aliased voxelizations are achievable. Finally, the proposed approach can be applied to voxelize streamlines for 3D vector fields using line voxelization. Experimental results demonstrate the effectiveness of the proposed approach.
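
    The core of the approach is a distance computation between primitives and voxels followed by a transfer function that maps distance to voxel values. Below is a minimal NumPy sketch of distance-based line voxelization with a binary and a simple anti-aliasing transfer function; it is our own single-threaded illustration (the grid size, radius and linear falloff are assumptions), not the authors' slice-by-slice GPU implementation.

    # Sketch only: CPU stand-in for distance-based line voxelization.
    import numpy as np

    def voxelize_segment(p0, p1, dim=64, radius=1.0):
        """Voxelize the segment p0-p1 on a dim^3 grid via point-to-segment distance."""
        centers = np.indices((dim, dim, dim)).reshape(3, -1).T + 0.5    # voxel centers
        d = p1 - p0
        t = np.clip(((centers - p0) @ d) / (d @ d), 0.0, 1.0)           # projection parameter onto segment
        dist = np.linalg.norm(centers - (p0 + t[:, None] * d), axis=1)  # distance to segment
        binary = (dist <= radius).astype(np.float32)                    # binary transfer function
        smooth = np.clip(1.0 - dist / radius, 0.0, 1.0)                 # anti-aliasing transfer function
        return binary.reshape(dim, dim, dim), smooth.reshape(dim, dim, dim)

    b, s = voxelize_segment(np.array([5.0, 5.0, 5.0]), np.array([60.0, 40.0, 20.0]))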

    Level-of-Detail Rendering of Large-Scale Irregular Volume Datasets Using Particles
    Takuma Kawamura, Naohisa Sakamoto, and Koji Koyamada, Member, IEEE
    Journal of Data Acquisition and Processing, 2010, 25 (5): 905-915. 
    Abstract   PDF(597KB) ( 2121 )  

    This paper describes a level-of-detail rendering technique for large-scale irregular volume datasets. It is well known that the memory bandwidth consumed by visibility sorting becomes the limiting factor when carrying out volume rendering of such datasets. To develop a sorting-free volume rendering technique, we previously proposed a particle-based technique that generates opaque and emissive particles using a density function that is constant within an irregular volume cell and projects the particles onto an image plane with sub-pixels. When the density function changes significantly within an irregular volume cell, the cell boundary may become prominent, which can cause blocky noise; and when the number of sub-pixels increases, the required frame buffer tends to become large. To solve these problems, this work proposes a new particle-based volume rendering technique that generates particles using Metropolis sampling and renders the particles using an ensemble average. To confirm the effectiveness of this method, we applied the proposed technique to several irregular volume datasets, with the result that the ensemble average outperforms the sub-pixel average in computational complexity and memory usage. In addition, the ensemble average technique allowed us to implement level-of-detail control in the interactive rendering of a 71-million-cell hexahedral volume dataset and a 26-million-cell quadratic tetrahedral volume dataset.
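
    The key change is replacing sub-pixel averaging with Metropolis-sampled particles whose positions follow the cell's density function, averaged over repeated stochastic renderings. The sketch below illustrates both ideas on a toy 2D projection; the density function, step size and frame count are assumptions of ours, not values from the paper.

    # Sketch only: Metropolis sampling of particles plus an ensemble average.
    import numpy as np

    rng = np.random.default_rng(0)

    def density(p):                        # hypothetical density varying inside a unit cell
        return 1.0 + 4.0 * p[0] * p[1]

    def metropolis_particles(n, step=0.1):
        x, samples = rng.random(3), []
        for _ in range(n):
            cand = np.clip(x + rng.normal(0.0, step, 3), 0.0, 1.0)
            if rng.random() < min(1.0, density(cand) / density(x)):
                x = cand                   # accept the move (more likely toward higher density)
            samples.append(x.copy())
        return np.array(samples)

    def render_frame(bins=8):
        pts = metropolis_particles(2000)
        img, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=bins, range=[[0, 1], [0, 1]])
        return img / len(pts)              # toy stand-in for projecting particles to the image

    estimate = np.mean([render_frame() for _ in range(16)], axis=0)   # ensemble average of frames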

    Degeneracy from Twisted Cubic Under Two Views
    Yi-Hong Wu (吴毅红), Tian Lan (兰 添), and Zhan-Yi Hu (胡占义)
    Journal of Data Acquisition and Processing, 2010, 25 (5): 916-924. 
    Abstract   PDF(1088KB) ( 1811 )  
    The fundamental matrix, which encodes the geometric relationship between two images, plays an important role in three-dimensional computer vision. Degenerate configurations of the space points and the two camera optical centers affect the stability of fundamental matrix computation. In order to estimate the fundamental matrix robustly, it is necessary to study these degenerate configurations. We analyze all the possible degenerate configurations caused by a twisted cubic and give the corresponding degenerate rank for each case. Relationships with the previously reported general degeneracies, namely the ruled quadric degeneracy and the homography degeneracy, are also discussed.
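
    For context, the degeneracies in question concern the rank of the standard linear system used to estimate the fundamental matrix; the recap below is textbook material (notation ours), not a result from the paper.

    \[ \mathbf{x}'^{\top} F \mathbf{x} = 0, \qquad F \in \mathbb{R}^{3 \times 3}, \quad \operatorname{rank}(F) = 2. \]
    Stacking one such equation per correspondence $(\mathbf{x}_i, \mathbf{x}'_i)$ gives a homogeneous system $A\mathbf{f} = \mathbf{0}$, where $\mathbf{f}$ collects the nine entries of $F$; a configuration is degenerate when $\operatorname{rank}(A)$ drops below 8, so that $F$ is no longer determined up to scale.
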
    Volumetric Vector-Based Representation for Indirect Illumination Caching
    Romain Pacanowski, Member, ACM, Xavier Granier, Member, ACM, Christophe Schlick, and Pierre Poulin
    Journal of Data Acquisition and Processing, 2010, 25 (5): 925-932. 
    Abstract   PDF(417KB) ( 1894 )  

    This paper introduces a caching technique based on a volumetric representation that captures low-frequency indirect illumination. This structure is intended for efficient storage and manipulation of illumination. It is based on a 3D grid that stores a fixed set of irradiance vectors. During preprocessing, this representation can be built using almost any existing global illumination software. During rendering, the indirect illumination within a voxel is interpolated from its associated irradiance vectors, and is used as additional local light sources. Compared with other techniques, the 3D vector-based representation of our technique offers increased robustness against local geometric variations of a scene. We thus demonstrate that it may be employed as an efficient and high-quality caching data structure for bidirectional rendering techniques such as particle tracing or photon mapping.
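
    The cache is essentially a regular grid of irradiance vectors that is interpolated at shading time. A minimal sketch of such a structure with trilinear interpolation follows; the class layout and grid resolution are our own assumptions, not the paper's data layout.

    # Sketch only: a regular 3D grid of irradiance vectors with trilinear lookup.
    import numpy as np

    class IrradianceGrid:
        def __init__(self, dim):
            self.dim = dim
            self.data = np.zeros((dim, dim, dim, 3))   # one irradiance vector per grid vertex

        def lookup(self, p):
            """Trilinearly interpolate the stored irradiance vector at p in [0, 1]^3."""
            g = np.clip(np.asarray(p, dtype=float) * (self.dim - 1), 0.0, self.dim - 1 - 1e-6)
            i0 = g.astype(int)
            f = g - i0
            acc = np.zeros(3)
            for dx in (0, 1):
                for dy in (0, 1):
                    for dz in (0, 1):
                        w = (f[0] if dx else 1 - f[0]) * \
                            (f[1] if dy else 1 - f[1]) * \
                            (f[2] if dz else 1 - f[2])
                        acc += w * self.data[i0[0] + dx, i0[1] + dy, i0[2] + dz]
            return acc   # used at render time like an additional local light source

    grid = IrradianceGrid(8)
    grid.data[3, 3, 3] = [0.2, 0.5, 0.1]        # e.g. filled by a global illumination pass
    print(grid.lookup([0.45, 0.45, 0.45]))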

    Distributed Computing and Systems
    DICE: An Effective Query Result Cache for Distributed Storage Systems
    Jun-Ki Min, Member, ACM, and Mi-Young Lee
    Journal of Data Acquisition and Processing, 2010, 25 (5): 933-944. 
    Abstract   PDF(407KB) ( 1877 )  

    Due to the proliferation of the Internet and intranets, distributed storage systems have received a lot of attention. These systems span a large number of machines and store huge amounts of data for many users. In distributed storage systems, a row can be accessed directly using its row key. We concentrate on the problem of efficiently processing queries whose predicate is on a column other than the row key. In this paper, we present a cache management technique, called DICE, which maintains the results of range queries to support subsequent range queries. To accelerate the search over the cached query results, we use modified Interval Skip Lists. In addition, we devise a novel cache replacement policy, since DICE maintains intervals rather than individual data items. Because our cache replacement policy considers the properties of intervals, the proposed technique is more efficient than traditional buffer replacement algorithms. Our experimental results demonstrate the efficiency of the proposed technique.
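
    The idea is to cache each answered range as an interval plus its rows and to serve later ranges from overlapping cached intervals. A much simplified sketch follows; DICE itself keeps the intervals in a modified Interval Skip List and uses an interval-aware replacement policy, whereas this illustration uses a plain list and a placeholder FIFO eviction.

    # Sketch only: interval-based caching of range-query results.
    class IntervalCache:
        def __init__(self, capacity=100):
            self.entries = []                       # list of (lo, hi, rows); a skip list in DICE
            self.capacity = capacity

        def lookup(self, lo, hi):
            hits = [(l, h, rows) for (l, h, rows) in self.entries if l <= hi and h >= lo]
            covered = any(l <= lo and h >= hi for (l, h, _) in hits)
            return hits, covered                    # if not covered, missing sub-ranges go to storage

        def insert(self, lo, hi, rows):
            if len(self.entries) >= self.capacity:
                self.entries.pop(0)                 # placeholder FIFO; DICE is interval-aware here
            self.entries.append((lo, hi, rows))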

    Self-Adaptive Resource Management for Large-Scale Shared Clusters
    Yan Li (李 研), Feng-Hong Chen (陈峰宏), Xi Sun (孙 熙), Ming-Hui Zhou (周明辉), Member, CCF, Wen-Pin Jiao (焦文品), Senior Member, CCF, Dong-Gang Cao (曹东刚), and Hong Mei (梅 宏), Senior Member, CCF
    Journal of Data Acquisition and Processing, 2010, 25 (5): 945-957. 
    Abstract   PDF(486KB) ( 1965 )  

    In a shared cluster, each application runs on a subset of nodes, and these subsets can overlap with one another. Resource management in such a cluster should adaptively change the application placement and workload assignment to satisfy the dynamic application workloads and optimize resource usage. This becomes a challenging problem as the cluster scale and the number of applications grow. This paper proposes a novel self-adaptive resource management approach inspired by human markets: nodes trade their shares of application requests with one another via auction and bidding to decide their own resource allocations, and a high-quality global resource allocation emerges as the collective behavior of the market. Experimental results show that the proposed approach ensures quick responsiveness, high scalability, and application prioritization in addition to managing the resources effectively.
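
    The market metaphor can be pictured as repeated auction rounds in which nodes bid for shares of an application's requests. The toy round below is our own illustration of that mechanism (the bid definition and data layout are assumptions), not the paper's protocol.

    # Sketch only: one auction round where the node with the most spare capacity wins.
    def auction_round(nodes, amount):
        """nodes: dict name -> {'capacity': float, 'load': float}; amount: request share offered."""
        bids = {n: s['capacity'] - s['load'] for n, s in nodes.items()}   # bid = spare capacity
        winner = max(bids, key=bids.get)
        if bids[winner] >= amount:
            nodes[winner]['load'] += amount       # the winning node absorbs the request share
            return winner
        return None                               # no node can absorb the share this round

    nodes = {'n1': {'capacity': 10, 'load': 9}, 'n2': {'capacity': 10, 'load': 4}}
    print(auction_round(nodes, 3))                # -> 'n2'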

    ERFC: An Enhanced Recursive Flow Classification Algorithm
    Xiang-Yang Gong (龚向阳), Wen-Dong Wang (王文东), Senior Member, CCF, and Shi-Duan Cheng (程时端)
    Journal of Data Acquisition and Processing, 2010, 25 (5): 958-969. 
    Abstract   PDF(474KB) ( 1951 )  

    Packet classification on multiple fields is a fundamental mechanism in network equipment, and various classification solutions have been proposed. Because of inherent difficulties, many of these solutions scale poorly in either time or space as rule sets grow in size. Recursive Flow Classification (RFC) is an algorithm with a very high classification speed; however, its preprocessing complexity and memory requirement are rather high. In this paper, we propose an enhanced RFC (ERFC) algorithm, in which a hash-based aggregated bit vector scheme is exploited to speed up the preprocessing procedure. A compressed and cacheable data structure is also introduced to decrease the total memory requirement and improve searching performance. Evaluation results show that ERFC provides a great improvement over RFC in both space requirement and preprocessing time. The search time complexity of ERFC is equivalent to that of RFC in the worst case, and its average classification speed is improved by about 100%.
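
    ERFC builds on bit-vector style preprocessing: each field value is associated with the set of rules it can match, and intersecting the per-field sets yields the matching rules. The toy classifier below illustrates that bit-vector intersection on two fields; it is a didactic sketch of the general idea only, not the ERFC tables or its hash-based aggregation.

    # Sketch only: per-field rule bit vectors intersected to classify a packet.
    rules = [                        # (src_prefix, dst_port) -> action, highest priority first
        (('10.0.', 80), 'allow'),
        (('10.0.', None), 'log'),
        ((None, 22), 'deny'),
    ]

    def field_bitvector(match):
        bv = 0
        for i, (cond, _) in enumerate(rules):
            if match(cond):
                bv |= 1 << i
        return bv

    def classify(src_ip, dst_port):
        bv_src = field_bitvector(lambda c: c[0] is None or src_ip.startswith(c[0]))
        bv_dst = field_bitvector(lambda c: c[1] is None or dst_port == c[1])
        both = bv_src & bv_dst                                 # rules matching on all fields
        if both:
            return rules[(both & -both).bit_length() - 1][1]   # lowest set bit = best-priority rule
        return 'default'

    print(classify('10.0.3.7', 80))   # -> 'allow'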

    QoS-TEOS: QoS Guaranteed Throughput-Efficiency Optimal Distributed Scheduling in WiMAX Mesh Networks
    Da Teng (滕 达), Member, CCF, Shou-Bao Yang (杨寿保), Senior Member, CCF, Wei-Qing He (赫卫卿), Member, CCF, and Yun Hu (胡 云), Member, CCF
    Journal of Data Acquisition and Processing, 2010, 25 (5): 970-981. 
    Abstract   PDF(450KB) ( 1768 )  

    WiMAX distributed scheduling can be modeled as two procedures: a three-way handshaking procedure and a data subframe scheduling procedure. Because it manipulates data transmission directly, data subframe scheduling has a closer relationship with user Quality of Service (QoS) satisfaction and a stronger impact on network performance than the handshaking procedure. A QoS-guaranteed Throughput-Efficiency Optimal distributed data subframe Scheduling scheme, named QoS-TEOS, is proposed. QoS-TEOS guarantees QoS by modeling services into different ranks and assigning them corresponding priorities. A higher-priority service is scheduled ahead of a lower-priority one and is offered better QoS. Services of the same kind that request similar QoS are classified into one service set, and different service sets are scheduled with different strategies. QoS-TEOS improves network performance by increasing network throughput and efficiency. Theoretical analysis shows that the scheduled data transmission should balance the data generation rate of the upper layer and the transmission rate of the physical layer to avoid declines in network throughput and efficiency. Simulation results show that QoS-TEOS achieves throughput-efficiency optimization and guarantees high QoS.
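
    At its core, the data subframe scheduler serves ranked service sets strictly by priority within the slots of a frame. The fragment below is a toy priority scheduler of ours that illustrates this ordering; the service ranks, slot counts and tie-breaking are assumptions, not the QoS-TEOS rules.

    # Sketch only: strict-priority slot allocation within one data subframe.
    import heapq

    requests = [  # (priority, service_id, slots_needed); lower number = higher rank
        (2, "best_effort_1", 3),
        (0, "voip_1", 1),
        (1, "video_1", 2),
    ]

    def schedule(requests, slots_per_frame=4):
        heap = list(requests)
        heapq.heapify(heap)                       # pops by ascending priority value
        frame, free = [], slots_per_frame
        while heap and free > 0:
            prio, sid, need = heapq.heappop(heap)
            granted = min(need, free)
            frame.append((sid, granted))
            free -= granted
        return frame

    print(schedule(requests))   # [('voip_1', 1), ('video_1', 2), ('best_effort_1', 1)]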

    Uncertain Distance-Based Range Queries over Uncertain Moving Objects
    Yi-Fei Chen (陈逸菲), Member, CCF, Xiao-Lin Qin (秦小麟), Senior Member, CCF, and Liang Liu (刘 亮), Student Member, CCF
    Journal of Data Acquisition and Processing, 2010, 25 (5): 982-998. 
    Abstract   PDF(719KB) ( 2046 )  

    Distance-based range search is crucial in many real applications. In particular, given a database and a query issuer, a distance-based range search retrieves all the objects in the database whose distances from the query issuer are less than or equal to a given threshold. Often, due to the accuracy of positioning devices, updating protocols or characteristics of applications (for example, location privacy protection), data obtained from the real world are imprecise or uncertain. Therefore, existing approaches over exact databases cannot be directly applied to the uncertain scenario. In this paper, we redefine the distance-based range query in the context of uncertain databases as the probabilistic uncertain distance-based range (PUDR) query, which returns objects with confidence guarantees. We categorize the topological relationships between uncertain objects and uncertain search ranges into six cases and present the probability evaluation for each case. Experiments verify that our approach outperforms the Monte Carlo method used in most existing work in both precision and time cost for the uniform uncertainty distribution, and it approximates, with acceptable error, the probabilities of objects following other practical uncertainty distributions, such as the Gaussian distribution. Since answering a PUDR query requires accessing all the objects in the database, which is quite costly, we propose spatial pruning and probabilistic pruning techniques to reduce the search space. Two metrics, the false positive rate and the false negative rate, are introduced to measure the quality of query results. An extensive empirical study demonstrates the efficiency and effectiveness of our proposed algorithms under various experimental settings.
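
    The baseline the paper compares against evaluates such probabilities by Monte Carlo sampling of the two uncertainty regions. The sketch below shows that baseline for circular uniform uncertainty regions; the disk model and sample count are our assumptions, and the paper's contribution is the analytic, case-based evaluation that replaces this sampling.

    # Sketch only: Monte Carlo estimate of P(dist(O, Q) <= r) under uniform disk uncertainty.
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_disk(center, u, n):
        ang = rng.uniform(0, 2 * np.pi, n)
        rad = u * np.sqrt(rng.uniform(0, 1, n))          # uniform over the disk area
        return np.column_stack([center[0] + rad * np.cos(ang),
                                center[1] + rad * np.sin(ang)])

    def prob_within(center_o, center_q, u, r, n=100_000):
        o, q = sample_disk(center_o, u, n), sample_disk(center_q, u, n)
        return float(np.mean(np.linalg.norm(o - q, axis=1) <= r))

    print(prob_within((0, 0), (3, 0), u=1.0, r=3.0))     # estimated qualification probability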

    Building Reusable Remote Labs with Adaptable Client User-Interfaces
    Salaheddin Odeh, Member, ACM
    Journal of Data Acquisition and Processing, 2010, 25 (5): 999-1015. 
    Abstract   PDF(636KB) ( 2270 )  

    Remote laboratories nowadays suffer from a lack of reusability, and their construction and maintenance require time, money and skills; the system implementation of a specific remote lab is neither generic nor reusable. In this paper, a solution for a reusable remote lab dedicated to disparate types of scientific and engineering experiments is presented. The experiment designer only needs to connect the experiment components and equipment, such as capacitors, resistors, transistors and function generators, to the switch system of a lab server, and then map this connection structure into a configuration data structure. Once a student starts the Web-based client user interface and logs in to the lab server, the menu structure of the graphical user interface builds and initializes itself automatically, using the information stored in the configuration data structure. This contribution discusses some existing lab servers and their drawbacks, the desirable requirements for a universal remote lab that simplifies the building of new lab experiments consisting of experiment components and equipment, and a client user interface that enables students to access the experiment remotely.
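
    The reusability hinges on a configuration data structure that records how components are wired to the lab server's switch system and from which the client menu is generated. The sketch below is a hypothetical illustration of such a structure; the field names and the RC example are ours, not the paper's schema.

    # Sketch only: a configuration data structure and a menu derived from it.
    experiment_config = {
        "name": "RC low-pass filter",
        "components": [
            {"id": "R1", "type": "resistor", "value": "1k", "switch_ports": [1, 2]},
            {"id": "C1", "type": "capacitor", "value": "100n", "switch_ports": [2, 3]},
            {"id": "FG", "type": "function_generator", "switch_ports": [1, 0]},
        ],
    }

    def build_menu(config):
        """Derive the client user-interface menu entries from the configuration."""
        return [f"{c['id']} ({c['type']}, ports {c['switch_ports']})"
                for c in config["components"]]

    for entry in build_menu(experiment_config):
        print(entry)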

    A Survey of Dynamic Software Metrics
    Jitender Kumar Chhabra and Varun Gupta
    Journal of Data Acquisition and Processing, 2010, 25 (5): 1016-1029. 
    Abstract   PDF(244KB) ( 2749 )  

    Software metrics help us make meaningful estimates for software products and guide us in taking managerial and technical decisions. However, conventional static metrics have been found to be inadequate for modern object-oriented software due to the presence of features such as polymorphism, dynamic binding, inheritance and unused code. This fact motivates a focus on dynamic metrics in place of traditional static metrics. Moreover, dynamic metrics are more precise than static metrics, as they capture the dynamic behaviour of the software system during measurement. These dynamic metrics are usually obtained from execution traces of the code or from executable models. In this paper, the advantages of dynamic metrics over static metrics are discussed, and then a survey of the existing dynamic metrics is carried out. These metrics are classified into different categories, such as dynamic coupling metrics and dynamic cohesion metrics. Towards the end of the paper, potential research challenges and opportunities in the field of dynamic metrics are identified.
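
    As a concrete example of the kind of measure surveyed, a dynamic coupling value can be computed from runtime call events rather than from static references. The fragment below is our own toy illustration of that idea, not a metric definition taken from the survey.

    # Sketch only: a dynamic coupling count derived from an execution trace.
    from collections import defaultdict

    trace = [("Order", "Customer"), ("Order", "Invoice"),
             ("Order", "Customer"), ("Invoice", "Printer")]   # (caller class, callee class) events

    def dynamic_coupling(trace):
        """Number of distinct classes each class actually calls during execution."""
        callees = defaultdict(set)
        for caller, callee in trace:
            if caller != callee:
                callees[caller].add(callee)
        return {cls: len(s) for cls, s in callees.items()}

    print(dynamic_coupling(trace))   # {'Order': 2, 'Invoice': 1}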

    Artificial Intelligence
    Unsupervised WSD by Finding the Predominant Sense Using Context as a Dynamic Thesaurus
    Javier Tejada-Cárcamo, Hiram Calvo, Alexander Gelbukh, and Kazuo Hara
    Journal of Data Acquisition and Processing, 2010, 25 (5): 1030-1039. 
    Abstract   PDF(982KB) ( 1616 )  

    We present and analyze an unsupervised method for Word Sense Disambiguation (WSD). Our work is based on the method presented by McCarthy et al. in 2004 for finding the predominant sense of each word in an entire corpus. Their maximization algorithm allows weighted terms (similar words) from a distributional thesaurus to accumulate a score for each sense of an ambiguous word, i.e., the sense with the highest score is chosen based on votes from a weighted list of terms related to the ambiguous word. This list is obtained using the distributional similarity method proposed by Dekang Lin to build a thesaurus. In the method of McCarthy et al., every occurrence of the ambiguous word uses the same thesaurus, regardless of the context in which the ambiguous word occurs. Our method accounts for the context of a word when determining its sense by building the list of distributionally similar words based on the syntactic context of the ambiguous word. We obtain a top precision of 77.54% versus 67.10% for the original method when tested on SemCor. We also analyze the effect of the number of weighted terms on the tasks of finding the Most Frequent Sense (MFS) and WSD, and experiment with several corpora for building the Word Space Model.
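
    The disambiguation step is a weighted vote: each thesaurus neighbour of the ambiguous word contributes its weight times its similarity to each candidate sense, and the highest-scoring sense wins. The toy sketch below illustrates the voting with invented similarity scores; the real method derives the neighbour list from a context-dependent distributional thesaurus and uses a sense-to-word similarity measure in place of these numbers.

    # Sketch only: sense voting from a weighted list of thesaurus neighbours.
    neighbors = [("currency", 0.9), ("water", 0.7), ("finance", 0.5)]   # (term, weight)

    sense_sim = {   # hypothetical similarity of each neighbour to each sense of "bank"
        "bank#financial": {"currency": 0.8, "water": 0.1, "finance": 0.9},
        "bank#river":     {"currency": 0.1, "water": 0.9, "finance": 0.0},
    }

    def vote(neighbors, sense_sim):
        scores = {s: sum(w * sims.get(t, 0.0) for t, w in neighbors)
                  for s, sims in sense_sim.items()}
        return max(scores, key=scores.get), scores

    print(vote(neighbors, sense_sim))   # ('bank#financial', {...})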

    A Generalization of Haussler's Convolution Kernel – Mapping Kernel and Its Application to Tree Kernels
    Kilho Shin and Tetsuji Kuboyama
    Journal of Data Acquisition and Processing, 2010, 25 (5): 1040-1054. 
    Abstract   PDF(1219KB) ( 1611 )  

    Haussler's convolution kernel provides an effective framework for engineering positive semidefinite kernels and has a wide range of applications. The mapping kernel that we introduce in this paper is its natural generalization and enlarges the range of application significantly. Our main theorem on the positive semidefiniteness of the mapping kernel (1) implies Haussler's theorem as a corollary, (2) exhibits an easy-to-check necessary and sufficient condition for mapping kernels to be positive semidefinite, and (3) formalizes the mapping kernel so that significant flexibility is provided in engineering new kernels. As evidence of the effectiveness of our results, we present a framework to engineer tree kernels. The tree is a data structure widely used in many applications, and tree kernels provide an effective method to analyze tree-type data; thus, the framework is important not only as an example but also as a practical research tool. The description of the framework is accompanied by a survey of the tree kernels in the literature, in which we see that 18 out of the 19 surveyed tree kernels of different types are instances of the mapping kernel, and by examples of novel, interesting tree kernels.
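
    In symbols, and using our own notation rather than the paper's, the two kernel forms compared in the abstract can be sketched as follows: Haussler's convolution kernel sums a base kernel over all pairs of parts, while a mapping kernel restricts the sum to a chosen subset of part pairs.

    \[ K_{\mathrm{conv}}(x, y) = \sum_{x' \in R^{-1}(x)} \; \sum_{y' \in R^{-1}(y)} k(x', y'), \qquad K_{\mathrm{map}}(x, y) = \sum_{(x', y') \in M_{x, y}} k(x', y'), \]
    where $R^{-1}(\cdot)$ enumerates the parts of an object and $M_{x, y} \subseteq R^{-1}(x) \times R^{-1}(y)$ selects the pairs over which the base kernel $k$ is accumulated.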

    Explanation Knowledge Graph Construction Through Causality Extraction from Texts
    Chaveevan Pechsiri and Rapepun Piriyakul
    Journal of Data Acquisition and Processing, 2010, 25 (5): 1055-1070. 
    Abstract   PDF(578KB) ( 3141 )  

    Explanation knowledge expressed as a graph, especially a graphical model, is essential for clearly comprehending all paths of effect events in a causal relation for basic diagnosis. This research focuses on determining the effect boundary using a statistics-based approach, and on the patterns of effect events in the graph, namely whether they are consequent or concurrent, without temporal markers. All causality events necessary for the graph construction are extracted from texts over multiple clauses/EDUs (Elementary Discourse Units), which assists in determining effect-event patterns from written event sequences in documents. Extracting the causality events from documents requires solving the effect-boundary determination problem after verb-pair rules (a causative verb and an effect verb) are applied to identify the causality. Therefore, we propose using a Bayesian network and Maximum Entropy to determine the boundary of the effect EDUs. We also propose learning effect-verb order pairs from adjacent effect EDUs to determine the effect-event patterns for representing the extracted causality in the constructed graph. The accuracy of the explanation knowledge graph construction is 90% based on expert judgment, whereas the average accuracies of effect-boundary determination by the Bayesian network and Maximum Entropy are 90% and 93%, respectively.
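
    Concretely, the verb-pair rules scan the EDU sequence for a causative verb and then collect the following effect EDUs until the boundary decision stops them. The fragment below is a deliberately simplified illustration of that scan; the word lists and the break condition are ours, and the paper learns the boundary with a Bayesian network or Maximum Entropy instead.

    # Sketch only: verb-pair rules over an EDU sequence with a naive boundary cut-off.
    causative_verbs = {"cause", "trigger", "infect"}
    effect_verbs = {"wilt", "rot", "die"}

    edus = [
        "the fungus infects the leaf",     # candidate cause EDU
        "the leaf wilts",                  # effect EDU
        "the plant dies",                  # effect EDU (how far to extend is the boundary problem)
        "farmers sell the crop",
    ]

    def extract_causality(edus):
        pairs = []
        for i, edu in enumerate(edus):
            if any(v in edu for v in causative_verbs):
                effects = []
                for nxt in edus[i + 1:]:
                    if any(v in nxt for v in effect_verbs):
                        effects.append(nxt)
                    else:
                        break              # stand-in for the learned boundary decision
                if effects:
                    pairs.append((edu, effects))
        return pairs

    print(extract_causality(edus))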

    Hierarchical Polytope ARTMAP for Supervised Learning
    Leonardo Liao (廖鑫鹏), Yong-Qiang Wu (吴永强), and Chong-Zhao Han (韩崇昭)
    Journal of Data Acquisition and Processing, 2010, 25 (5): 1071-1082. 
    Abstract   PDF(1178KB) ( 1786 )  

    The recent Polytope ARTMAP (PTAM) suggests that irregular polytopes are more flexible than predefined category geometries for approximating the borders among the desired output predictions. However, category expansion and adjustment steps that use no statistical information make PTAM not robust to noise and category overlap. In order to push the learning problem towards Structural Risk Minimization (SRM), this paper proposes Hierarchical Polytope ARTMAP (HPTAM), which uses a hierarchical structure with different levels determined by the complexity of the regions incorporating the input pattern. In addition, overlapping of simplexes from the same desired prediction is designed to reduce category proliferation. Although HPTAM is still inevitably sensitive to noisy outliers in the presence of noise, experimental results show that HPTAM can achieve a balance between representation error and approximation error, which improves the overall generalization capability.

    VLSI Design
    Multilevel Optimization for Large-Scale Hierarchical FPGA Placement
    Hui Dai (戴 晖), Qiang Zhou (周 强), and Ji-Nian Bian (边计年)
    Journal of Data Acquisition and Processing, 2010, 25 (5): 1083-1091. 
    Abstract   PDF(419KB) ( 2320 )  

    This paper proposes a multilevel placer targeted at hierarchical FPGA (Field Programmable Gate Array) devices. The placer is based on a multilevel optimization method that combines a bottom-up clustering process and a top-down placement process into a V-cycle. It provides superior wirelength results over a known high-quality heuristic placement tool on a set of large circuits when restricted to a short run time. For example, it can generate a placement result for a circuit with 5000 4-LUTs (4-input Look-Up Tables) in 70 seconds, with an almost 30% decrease in wirelength compared with the heuristic implementation, which takes over 500 seconds. We have verified that our algorithm yields a good quality-time tradeoff, as a low-temperature simulated annealing refinement process can only improve the result by an average of 1.11% at the cost of over 25 times the runtime.

    Quasi Delay-Insensitive High Speed Two-Phase Protocol Asynchronous Wrapper for Network on Chips
    Xu-Guang Guan (管旭光), Student Member, IEEE, Xing-Yuan Tong (佟星元), and Yin-Tang Yang (杨银堂)
    Journal of Data Acquisition and Processing, 2010, 25 (5): 1092-1100. 
    Abstract   PDF(609KB) ( 1757 )  

    To overcome the low speed and high power consumption of asynchronous wrappers in conventional networks-on-chip, this paper proposes a quasi delay-insensitive, high-speed, two-phase operation mode asynchronous wrapper. Metastability in the data sampling procedure is avoided by detecting the write/read signal, which is used to stop the clock. The empty/full status of the registers is determined by detecting the pulse signal of the two-phase asynchronous register, which then controls the wrapper to sample input/output data. The sender and receiver wrappers consist of C-elements and threshold gates, which ensure the quasi delay-insensitive characteristic and enhance robustness. Simulations under different process corners are performed based on the SMIC 0.18 μm standard CMOS technology. The sender and receiver wrappers allow synchronous modules to work at speeds of 3.08 GHz and 2.98 GHz respectively, with average dynamic power consumption of 1.727 mW and 1.779 mW. Its advantages of high throughput, low power, scalability and robustness make the wrapper a viable option for high-speed, low-power network-on-chip interconnection.

Journal of Data Acquisition and Processing
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190, P.R. China
E-mail: info@sjcjycl.cn
Copyright ©2015 JCST, All Rights Reserved