Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      15 September 2007, Volume 22 Issue 5   
    Articles
    Server-Based Data Push Architecture for Multi-Processor Environments
    Xian-He Sun, Surendra Byna, and Yong Chen
    Journal of Data Acquisition and Processing, 2007, 22 (5): 641-652 . 
    Abstract   PDF(543KB) ( 9554 )  
    Data access delay is a major bottleneck in utilizing current high-end computing (HEC) machines. Prefetching, where data is fetched before the CPU demands it, has been considered an effective solution for masking data access delay. However, current client-initiated prefetching strategies, where a computing processor initiates prefetching instructions, have many limitations. They do not work well for applications with complex, non-contiguous data access patterns. While technology advances continue to widen the gap between computing and data access performance, trading computing power for reduced data access delay has become a natural choice. In this paper, we present a server-based data-push approach and discuss its associated implementation mechanisms. In the server-push architecture, a dedicated server called the Data Push Server (DPS) initiates and proactively pushes data closer to the client ahead of time. Issues such as what data to fetch, when to fetch, and how to push are studied. To test DPS-based prefetching, the SimpleScalar simulator is modified with a dedicated prefetching engine that pushes data for another processor. Simulation results show that the L1 cache miss rate can be reduced by up to 97% (71% on average) over a superscalar processor for SPEC CPU2000 benchmarks that have high cache miss rates.
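    Deciding "what data to fetch" typically starts from a reference predictor. The sketch below is a minimal stride-based predictor, one common ingredient such a push engine could use; the class name and table layout are assumptions for illustration, not the paper's DPS design.

```python
# Minimal sketch of a stride-based prefetch predictor, one ingredient a
# push server could use to decide *what* data to fetch next. The table
# maps an instruction address (pc) to its last seen address and stride.

class StridePrefetcher:
    def __init__(self):
        self.table = {}  # pc -> (last_addr, stride)

    def observe(self, pc, addr):
        """Record a memory access; return a predicted prefetch address or None."""
        last = self.table.get(pc)
        if last is None:
            self.table[pc] = (addr, 0)
            return None
        last_addr, stride = last
        new_stride = addr - last_addr
        self.table[pc] = (addr, new_stride)
        # Prefetch only when the stride repeats (a stable access pattern).
        if new_stride == stride and stride != 0:
            return addr + stride
        return None

pf = StridePrefetcher()
for a in (100, 108, 116, 124):
    hint = pf.observe(pc=0x400, addr=a)   # stabilizes on stride 8
```

A real push server would act on `hint` by moving the predicted block toward the client's cache instead of waiting for a demand miss.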
    A Yield-Driven Gridless Router
    Qiang Zhou, Yi-Ci Cai, Duo Li, and Xian-Long Hong
    Journal of Data Acquisition and Processing, 2007, 22 (5): 653-660 . 
    Abstract   PDF(361KB) ( 2983 )  
    A new gridless router that improves the yield of IC layouts is presented. The improvement in yield is achieved by reducing the critical areas where circuit failures are likely to happen. This gridless area router benefits from a novel cost function that computes critical areas during the routing process, and heuristically lays patterns on the chip area where critical areas are less likely to be induced. The router also takes other objectives into consideration, such as routing completion rate and net length. It takes advantage of gridless routing to gain more flexibility and a higher completion rate. Experimental results show that critical areas are effectively decreased by 21% on average while the routing completion rate is maintained above 99%.
    A Novel VLSI Architecture for Real-Time Line-Based Wavelet Transform Using Lifting Scheme
    Kai Liu, Ke-Yan Wang, Yun-Song Li, and Cheng-Ke Wu
    Journal of Data Acquisition and Processing, 2007, 22 (5): 661-672 . 
    Abstract   PDF(522KB) ( 2619 )  
    In this paper, we propose a VLSI architecture that performs the line-based discrete wavelet transform (DWT) using a lifting scheme. The architecture consists of row processors, column processors, an intermediate buffer and a control module. The row and column processors work as the horizontal and vertical filters, respectively. The intermediate buffer is composed of five FIFOs that store temporary results of the horizontal filter. The control module schedules the output order to external memory. Compared with existing architectures, the presented one parallelizes all levels of the wavelet transform to compute a multilevel DWT within one image transmission time, and uses no external memory but only one intermediate buffer to store several line results of horizontal filtering, which significantly decreases the resources required and uses memory efficiently. This architecture is suitable for various real-time image/video applications.
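    To make the lifting scheme concrete, here is one level of the integer LeGall 5/3 lifting DWT on a 1-D signal; the hardware above applies such a filter along rows and columns. The abstract does not name the 5/3 filter, so treat this particular kernel and its boundary handling as an illustrative assumption.

```python
def lifting_53_forward(x):
    """One level of the LeGall 5/3 lifting DWT on an even-length 1-D signal.
    The predict step produces the detail (high-pass) coefficients; the
    update step produces the approximation (low-pass) coefficients.
    Boundaries are handled by repeating the edge sample."""
    even = x[0::2]
    odd = x[1::2]
    n = len(odd)
    # Predict: odd samples minus the average of their even neighbours.
    detail = [odd[i] - (even[i] + even[min(i + 1, n - 1)]) // 2 for i in range(n)]
    # Update: even samples plus a correction from neighbouring details.
    approx = [even[i] + (detail[max(i - 1, 0)] + detail[i] + 2) // 4 for i in range(n)]
    return approx, detail

# A linear ramp: the 5/3 predictor is exact on it, so interior details are 0.
a, d = lifting_53_forward([10, 12, 14, 16, 18, 20, 22, 24])
```

In the line-based architecture, each FIFO of the intermediate buffer holds one such partially transformed line until the column (vertical) pass can consume it.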
    Leakage Current Optimization Techniques During Test Based on Don't Care Bits Assignment
    Wei Wang, Yu Hu, Yin-He Han, Xiao-Wei Li, and You-Sheng Zhang
    Journal of Data Acquisition and Processing, 2007, 22 (5): 673-680 . 
    Abstract   PDF(363KB) ( 3579 )  

    It is a well-known fact that test power consumption may exceed that during functional operation. Leakage power dissipation caused by leakage current in Complementary Metal-Oxide-Semiconductor (CMOS) circuits during test has become a significant part of the total power dissipation. Hence, it is important to reduce leakage power to prolong battery life in portable systems which employ periodic self-test, to increase test reliability and to reduce test cost. This paper analyzes leakage current and presents a leakage current simulator based on the transistor stacking effect. Using it, we propose techniques based on don't care bits (denoted by Xs) in test vectors to optimize leakage current during integrated circuit (IC) test with a genetic algorithm. The techniques identify a set of don't care inputs in given test vectors and reassign specified logic values to the X inputs by the genetic algorithm to obtain the minimum leakage vector (MLV). Experimental results indicate that the techniques can effectively optimize the leakage current of combinational and sequential circuits during test while maintaining high fault coverage.
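    The search structure can be sketched as follows: keep the specified bits of a test vector fixed and let a genetic algorithm assign the X positions to minimise an estimated leakage. The leakage model below is a stand-in with an arbitrary weight table, not the paper's stacking-effect simulator, and the GA parameters are illustrative assumptions.

```python
import random

random.seed(0)
VECTOR = "1X0XX10X"                      # specified bits stay fixed
X_POS = [i for i, b in enumerate(VECTOR) if b == "X"]
WEIGHTS = {i: random.uniform(0.5, 2.0) for i in range(len(VECTOR))}  # hypothetical

def leakage(bits):
    # Stand-in cost: weighted sum of inputs held at logic 1.
    return sum(WEIGHTS[i] for i, b in enumerate(bits) if b == "1")

def fill(assignment):
    """Substitute an assignment (one bit per X position) into the vector."""
    bits = list(VECTOR)
    for pos, b in zip(X_POS, assignment):
        bits[pos] = b
    return "".join(bits)

def ga_mlv(pop_size=20, generations=30, mut=0.1):
    pop = [[random.choice("01") for _ in X_POS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: leakage(fill(a)))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, len(X_POS))  # one-point crossover
            child = p1[:cut] + p2[cut:]
            children.append([random.choice("01") if random.random() < mut else b
                             for b in child])      # bit-flip mutation
        pop = survivors + children
    return fill(min(pop, key=lambda a: leakage(fill(a))))

mlv = ga_mlv()   # candidate minimum leakage vector
```

In the paper the cost function would be the stacking-effect leakage simulator, and the vector comes from a real test set; everything else about the GA loop carries over.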

    Low Cost Scan Test by Test Correlation Utilization
    Ozgur Sinanoglu
    Journal of Data Acquisition and Processing, 2007, 22 (5): 681-694 . 
    Abstract   PDF(424KB) ( 2317 )  
    Scan-based testing methodologies remedy the testability problem of sequential circuits; yet they suffer from prolonged test time and excessive test power due to numerous shift operations. The correlation among test data, along with the high density of unspecified bits in test data, enables the utilization of the existing test data in the scan chain for the generation of the subsequent test stimulus, thus reducing both test time and test data volume. We propose a pair of scan approaches in this paper; in the first approach, a test stimulus partially consists of the preceding stimulus, while in the second approach, a test stimulus partially consists of the preceding test response bits. Both proposed scan-based test schemes access only a subset of scan cells for loading the subsequent test stimulus while freezing the remaining scan cells with the preceding test data, thus decreasing scan chain transitions during shift operations. The proposed scan architecture is coupled with test data manipulation techniques, including test stimuli ordering and partitioning algorithms, which boost the test time reductions. The experimental results confirm that test time reductions exceeding 97%, and test power reductions exceeding 99%, can be achieved by the proposed scan-based testing methodologies on the larger ISCAS89 benchmark circuits.
    Cooperating CoScheduling: A Coscheduling Proposal Aimed at Non-Dedicated Heterogeneous NOWs
    Francesc Giné, Francesc Solsona, Mauricio Hanzich, Porfidio Hernández, and Emilio Luque
    Journal of Data Acquisition and Processing, 2007, 22 (5): 695-710 . 
    Abstract   PDF(797KB) ( 3943 )  
    Implicit coscheduling techniques applied to non-dedicated homogeneous Networks Of Workstations (NOWs) have shown that they can perform well when many local users compete with a single parallel job. Implicit coscheduling deals with minimizing the communication waiting time of parallel processes by identifying the processes in need of coscheduling through gathering and analyzing implicit runtime information, basically communication events. Unfortunately, implicit coscheduling techniques do not guarantee the performance of local and parallel jobs when the number of parallel jobs competing against each other increases; the idle computational resources are thus used inefficiently. In order to solve these problems, a new technique, named Cooperating CoScheduling (CCS), is presented in this work. Unlike traditional implicit coscheduling techniques, under CCS each node takes its scheduling decisions from the occurrence of local events, basically communication, memory, Input/Output and CPU, together with foreign events received from cooperating nodes. This allows CCS to provide a social contract based on reserving a percentage of CPU and memory resources to ensure the progress of parallel jobs without disturbing the local users, while coscheduling of communicating tasks is ensured. Besides, the CCS algorithm uses status information from the cooperating nodes to balance the resources across the cluster when necessary. Experimental results in a non-dedicated heterogeneous NOW reveal that CCS allows the idle resources to be exploited efficiently, thus obtaining a satisfactory speedup and incurring an overhead that is imperceptible to the local user.
    A Semi-Random Multiple Decision-Tree Algorithm for Mining Data Streams
    Xue-Gang Hu, Pei-Pei Li, Xin-Dong Wu, and Gong-Qing Wu
    Journal of Data Acquisition and Processing, 2007, 22 (5): 711-724 . 
    Abstract   PDF(463KB) ( 4212 )  
    Mining streaming data is a hot topic in data mining. When performing classification on data streams, traditional classification algorithms based on decision trees, such as ID3 and C4.5, have relatively poor efficiency in both time and space due to the characteristics of streaming data. Random decision trees, by contrast, offer advantages in both time and space. This paper proposes SRMTDS (Semi-Random Multiple decision Trees for Data Streams), an incremental algorithm for mining data streams based on random decision trees. SRMTDS uses the Hoeffding bound inequality to choose the minimum number of split examples, a heuristic method to compute the information gain for obtaining the split thresholds of numerical attributes, and a Naïve Bayes classifier to estimate the class labels of tree leaves. Our extensive experimental study shows that SRMTDS improves on VFDTc, a state-of-the-art decision-tree algorithm for classifying data streams, in time, space, accuracy and anti-noise capability.
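    The Hoeffding bound mentioned above is how such stream learners decide they have seen enough examples to commit to a split: with probability 1 - delta, the observed mean of a quantity with range R is within epsilon of its true mean after n observations. A minimal sketch of that test (function names are mine, not SRMTDS's API):

```python
from math import log, sqrt

def hoeffding_epsilon(R, delta, n):
    """Hoeffding bound: eps = sqrt(R^2 * ln(1/delta) / (2n))."""
    return sqrt(R * R * log(1.0 / delta) / (2.0 * n))

def can_split(gain_best, gain_second, R, delta, n):
    """Commit to the best attribute once its observed information-gain lead
    over the runner-up exceeds the Hoeffding epsilon."""
    return (gain_best - gain_second) > hoeffding_epsilon(R, delta, n)
```

For example, with R = 1, delta = 1e-7 and n = 1000 stream examples, epsilon is about 0.09, so a gain lead of 0.2 justifies splitting while a lead of 0.05 means the learner must keep reading the stream.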
    Efficient Mining of Frequent Closed XML Query Pattern
    Jian-Hua Feng, Qian Qian, Jian-Yong Wang, and Li-Zhu Zhou
    Journal of Data Acquisition and Processing, 2007, 22 (5): 725-735 . 
    Abstract   PDF(478KB) ( 6366 )  
    Previous research has presented convincing arguments that a frequent pattern mining algorithm should mine not all frequent patterns but only the closed ones, because the latter leads not only to a more compact yet complete result set but also to better efficiency. Upon discovery of frequent closed XML query patterns, indexing and caching can be effectively adopted for query performance enhancement. Most previous algorithms for finding frequent patterns introduced a straightforward generate-and-test strategy. In this paper, we present SOLARIA*, an efficient algorithm for mining frequent closed XML query patterns without candidate maintenance and costly tree-containment checking. An efficient sequence mining algorithm is employed to discover frequent tree-structured patterns, aiming to replace expensive containment testing with cheap parent-child checking in sequences. SOLARIA* deeply prunes the unrelated search space for frequent pattern enumeration by a parent-child relationship constraint. Through a thorough experimental study on various real-life data, we demonstrate the efficiency and scalability of SOLARIA* over the previously known alternative. SOLARIA* is also linearly scalable in terms of XML query size.
    HCH for Checking Containment of XPath Fragment
    Jian-Hua Feng, Yu-Guo Liao, and Yong Zhang
    Journal of Data Acquisition and Processing, 2007, 22 (5): 736-748 . 
    Abstract   PDF(473KB) ( 3855 )  
    XPath is ubiquitous in XML applications for navigating XML trees and selecting a set of element nodes. In XPath query processing, one of the most important issues is how to efficiently check the containment relationship between two XPath expressions. To avoid the intricacy and complexity caused by numerous XPath features, we investigate this issue on a frequently used fragment of XPath expressions that consists of node tests, the child axis (/), the descendant axis (//), branches ([ ]) and label wildcards (*). Prior work has shown that homomorphism techniques can be used for containment checking. However, homomorphism is a sufficient but not necessary condition for containment: for special classes of this fragment, the homomorphism algorithm returns false negatives. To address this problem, this paper proposes two containment techniques, conditioned homomorphism and hidden conditioned homomorphism, and then presents sound algorithms for checking containment. Experimental results confirm the practicability and efficiency of the proposed algorithms.
    Differentials-Based Segmentation and Parameterization for Point-Sampled Surfaces
    Yong-Wei Miao, Jie-Qing Feng, Chun-Xia Xiao, Qun-Sheng Peng, and A.R. Forrest
    Journal of Data Acquisition and Processing, 2007, 22 (5): 749-760 . 
    Abstract   PDF(839KB) ( 2438 )  
    Efficient parameterization of point-sampled surfaces is a fundamental problem in the field of digital geometry processing. In order to parameterize a given point-sampled surface with minimal distance distortion, a differentials-based segmentation and parameterization approach is proposed in this paper. Our approach partitions the point-sampled geometry based on two criteria: the variation of Euclidean distance between sample points, and the angular difference between surface differential directions. Based on an analysis of normal curvatures for some specified directions, a new projection approach is adopted to estimate the local surface differentials. A k-means clustering (k-MC) algorithm then partitions the model into a set of charts according to the estimated local surface attributes. Finally, each chart is parameterized with a statistical method, the multidimensional scaling (MDS) approach, and the parameterization results of all charts form an atlas for compact storage.
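    The chart-partitioning step can be sketched with plain k-means over per-point attribute vectors. This toy clusters 2-D positions only; the paper clusters by estimated surface differentials, which this example does not model, and the function and data are illustrative assumptions.

```python
import random

def kmeans(points, k, iters=50, seed=1):
    """Plain Lloyd-style k-means over tuples of equal length."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest centre (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Move each non-empty cluster's centre to its mean.
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

# Two well-separated blobs of sample points -> two charts.
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centers, charts = kmeans(pts, k=2)
```

Each resulting chart would then be flattened independently (MDS in the paper) and the flattened charts collected into the atlas.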
    Accelerated Parallel Texture Optimization
    Hao-Da Huang, Xin Tong, and Wen-Cheng Wang
    Journal of Data Acquisition and Processing, 2007, 22 (5): 761-769 . 
    Abstract   PDF(675KB) ( 2080 )  
    Texture optimization is a texture synthesis method that can efficiently reproduce various features of exemplar textures. However, its slow synthesis speed limits its usage in many interactive or real-time applications. In this paper, we propose a parallel texture optimization algorithm that runs on GPUs. In our algorithm, k-coherence search and principal component analysis (PCA) are used for hardware acceleration, and two acceleration techniques are further developed to speed up our GPU-based texture optimization. With a reasonable precomputation cost, the online synthesis speed of our algorithm is over 4000 times faster than that of the original texture optimization algorithm, making our algorithm suitable for interactive applications. The advantages of the new scheme are demonstrated by applying it to interactive editing of flow-guided synthesis.
    Fast Adaptive Wavelet for Remote Sensing Image Compression
    Bo Li, Run-Hai Jiao, and Yuan-Cheng Li
    Journal of Data Acquisition and Processing, 2007, 22 (5): 770-778 . 
    Abstract   PDF(483KB) ( 2403 )  
    It is hard to achieve high compression ratios on remote sensing images because of their rich texture. By analyzing the influence of wavelet properties on image compression, this paper proposes wavelet construction rules and builds a new parameterized biorthogonal wavelet construction model. The model parameters are optimized by a genetic algorithm with energy compaction as the objective function. In addition, to resolve the computational complexity of online construction, we construct wavelets for different classes of images according to the image classification rule proposed in this paper, and implement a fast adaptive wavelet selection algorithm (FAWS). Experimental results show that the wavelet bases of FAWS achieve better compression performance than Daubechies 9/7.
    A New FIR Filter for State Estimation and Its Application
    Pyung-Soo Kim and Myung-Eui Lee
    Journal of Data Acquisition and Processing, 2007, 22 (5): 779-784 . 
    Abstract   PDF(293KB) ( 3948 )  
    This paper proposes a new FIR (finite impulse response) filter under a least squares criterion using a forgetting factor. The proposed FIR filter requires neither information about the noise covariances nor the initial state, and has inherent properties such as time-invariance, unbiasedness and deadbeat. The proposed FIR filter is represented in a batch form and then, as an alternative, in a recursive form. From a discussion of the choice of the forgetting factor and the window length, it is shown that these can serve as useful tuning parameters to make the estimation performance of the proposed FIR filter as good as possible. Computer simulations show that the proposed FIR filter can outperform the existing FIR filter with incorrect noise covariances. Finally, as a useful application, an image sequence stabilization problem is considered. Through this application, the FIR filtering based approach is shown to be superior to the Kalman filtering based approach.
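    The estimation idea can be illustrated in miniature: a finite-window least-squares estimate of a scalar constant state from noisy measurements, with a forgetting factor lambda weighting recent samples inside the window more heavily. The paper's filter covers general state-space models; this sketch, with names of my choosing, shows only the weighted-window principle.

```python
def fir_estimate(measurements, window, lam):
    """Forgetting-factor weighted least-squares estimate of a scalar state
    from the last `window` measurements (newest sample gets weight 1)."""
    y = measurements[-window:]
    weights = [lam ** (len(y) - 1 - i) for i in range(len(y))]
    return sum(w * v for w, v in zip(weights, y)) / sum(weights)

# A jump in the signal: the finite window plus forgetting factor lets the
# estimate move toward the new level quickly, unlike an infinite-memory mean.
ys = [1.0, 1.0, 1.0, 5.0]
est = fir_estimate(ys, window=3, lam=0.5)   # weights 0.25, 0.5, 1.0
```

Shrinking the window or the forgetting factor makes the filter track changes faster at the cost of more noise sensitivity, which is exactly the tuning trade-off the abstract describes.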
E-mail: info@sjcjycl.cn
 
  Copyright ©2015 JCST, All Rights Reserved