Bimonthly    Since 1986
ISSN 1004-9037
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Distributed by:
China: All Local Post Offices
  • Table of Contents
      15 May 2006, Volume 21 Issue 3   
    AVS Intellectual Property Rights (IPR) Policy
    Cliff Reader
    Journal of Data Acquisition and Processing, 2006, 21 (3): 306-309 . 
    Abstract   PDF(159KB) ( 1525 )  
    The AVS Workgroup has developed an IPR Policy to facilitate the adoption of standards in the marketplace. The policy is based on consideration of IPR issues in parallel with the technical work for drafting the standard. The paper describes the relationship between IPR and the standard, and how the goals for the standard must be complemented by goals for the IPR. The existing IPR policies of the ITU and ISO are outlined, and then the AVS IPR policy is described, organized by its three main components: commitment to license on declared basic terms, disclosure of intellectual property, and protection of IPR.
    Performance Comparison of AVS and H.264/AVC Video Coding Standards
    Xin-Fu Wang and De-Bin Zhao
    Journal of Data Acquisition and Processing, 2006, 21 (3): 310-314 . 
    Abstract   PDF(340KB) ( 1692 )  
    A new audio and video compression standard of China, known as the advanced Audio Video coding Standard (AVS), is emerging. This standard provides a technical solution for many applications within the information industry, such as digital broadcast, high-density laser-digital storage media, and so on. The basic part of AVS, AVS1-P2, targets standard definition (SD) and high definition (HD) video compression, and aims to achieve coding efficiency similar to that of H.264/AVC but with lower computational complexity. In this paper, we first briefly describe the major coding tools in AVS1-P2, and then compare the coding efficiency of the AVS1-P2 Jizhun profile with that of the H.264/AVC main profile. The experimental results show that the AVS1-P2 Jizhun profile has an average efficiency loss of 2.96% relative to the H.264/AVC main profile in terms of bit-rate saving on HD progressive-scan sequences, and an average coding loss of 28.52% on interlace-scan sequences. Nevertheless, AVS1-P2 retains the valuable feature of lower computational complexity.
    Context-Based 2D-VLC Entropy Coder in AVS Video Coding Standard
    Qiang Wang, De-Bin Zhao, and Wen Gao
    Journal of Data Acquisition and Processing, 2006, 21 (3): 315-322 . 
    Abstract   PDF(385KB) ( 2153 )  
    In this paper, a Context-based 2D Variable Length Coding (C2DVLC) method for coding the transformed residuals in the AVS video coding standard is presented. One feature of C2DVLC is the use of multiple 2D-VLC tables; another is the use of simple Exponential-Golomb codes. C2DVLC employs context-based adaptive multiple-table coding to exploit the statistical correlation between the DCT coefficients of one block for higher coding efficiency. Exp-Golomb codes are applied to code the pairs of the run-length of zero coefficients and the nonzero coefficient, lowering the storage requirement. C2DVLC is a low-complexity coder in terms of both computational time and memory requirement. The experimental results show that C2DVLC gains 0.34dB on average for the tested videos when compared with a traditional 2D-VLC coding method such as that used in MPEG-2. Compared with CAVLC in H.264/AVC, C2DVLC shows similar coding efficiency.
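    The Exp-Golomb codes mentioned in the abstract are easy to illustrate. The following is a minimal sketch of order-0 Exponential-Golomb encoding and decoding, the general code family referred to; it illustrates the technique itself, not the exact run-level mapping used in C2DVLC.

```python
def exp_golomb_encode(n: int) -> str:
    """Order-0 Exponential-Golomb code for a non-negative integer n.

    The codeword is (m zeros) + binary(n + 1), where m is the number
    of bits in binary(n + 1) minus one, so small values get short codes.
    """
    if n < 0:
        raise ValueError("Exp-Golomb codes are defined for n >= 0")
    suffix = bin(n + 1)[2:]                 # binary representation of n + 1
    return "0" * (len(suffix) - 1) + suffix


def exp_golomb_decode(bits: str) -> int:
    """Inverse of exp_golomb_encode for a single codeword."""
    m = 0
    while bits[m] == "0":                   # count the leading zeros
        m += 1
    return int(bits[m:2 * m + 1], 2) - 1    # read m+1 bits, subtract 1
```

    For example, the values 0, 1, 2, 3 encode to "1", "010", "011", "00100": frequent small run-level values cost few bits, which is why no stored codeword table is needed.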
    A Novel MBAFF Scheme of AVS
    Jian-Wen Chen, Guo-Ping Li, and Yun He
    Journal of Data Acquisition and Processing, 2006, 21 (3): 323-331 . 
    Abstract   PDF(816KB) ( 1491 )  
    Adaptive frame/field coding techniques have been adopted in many international video standards for interlaced sequence coding. When the frame/field adaptation is applied at the picture level, coding efficiency improves greatly compared with pure frame coding or pure field coding. Picture-level adaptive frame/field coding (PAFF) selects frame or field coding once per picture. If this frame/field adaptation is extended to the Macro Block (MB) level, coding efficiency increases further. In this paper, a novel MB-level adaptive frame/field (MBAFF) coding scheme is proposed, in which the top field of the current picture is used as a reference. The experiments are implemented on the Audio Video coding Standard (AVS) base profile and H.264/AVC platforms, respectively. On the AVS platform, an average gain of 0.35dB is achieved compared with the AVS1.0 anchor. On the H.264/AVC platform, an average gain of 0.16dB is achieved compared with the MBAFF scheme of H.264/AVC. Additionally, a substantial subjective quality improvement can be achieved by the proposed scheme.
    AVS-M: From Standards to Applications
    Ye-Kui Wang
    Journal of Data Acquisition and Processing, 2006, 21 (3): 332-344 . 
    Abstract   PDF(438KB) ( 2187 )  
    AVS stands for the Audio Video coding Standard Workgroup of China, which develops audio/video coding standards as well as system and digital rights management standards. AVS-M is the AVS video coding standard targeting mobile multimedia applications. Besides the coding specification, AVS also developed the file format and Real-time Transport Protocol (RTP) payload format specifications to enable the use of AVS-M video in various services. This paper reviews the high-level coding tools and features of the AVS-M coding standard as well as the file format and payload format standards. In particular, sixteen AVS-M high-level coding tools and features, which cover most of the high-level topics raised during AVS-M standardization, are discussed in some detail. After that, the error resilience tools are briefly reviewed before the file format and RTP payload format discussions. The coding efficiency and error-resilience performance of AVS-M is provided finally. H.264/AVC is used extensively as a comparison in many of the discussions and simulation results.
    Low-Complexity Tools in AVS Part 7
    Feng Yi, Qi-Chao Sun, Jie Dong, and Lu Yu
    Journal of Data Acquisition and Processing, 2006, 21 (3): 345-353 . 
    Abstract   PDF(402KB) ( 1331 )  
    The Audio Video coding Standard (AVS) is established by the AVS Working Group of China. The main goal of AVS Part 7 is to provide high compression performance with relatively low complexity for mobile applications. There are three main low-complexity tools: the deblocking filter, context-based adaptive 2D-VLC, and direct intra prediction. These tools are presented and analyzed in turn. Finally, we compare the performance and the decoding speed of AVS Part 7 and the H.264 baseline profile. The analysis and results indicate that AVS Part 7 achieves similar performance at lower cost.
    Low Complexity Integer Transform and Adaptive Quantization Optimization
    Si-Wei Ma and Wen Gao
    Journal of Data Acquisition and Processing, 2006, 21 (3): 354-359 . 
    Abstract   PDF(328KB) ( 2156 )  
    In this paper, a new low complexity integer transform is proposed, which has been adopted by AVS1-P7. The proposed transform enables AVS1-P7 to share the same quantization/dequantization table with AVS1-P2. As the basis vectors of the proposed transform have very similar norms, the transform normalization can be implemented on the encoder side only, and the dequantization table size can be reduced compared with the transform used in H.264/MPEG-4 AVC. Building on this feature of the proposed transform, adaptive dead-zone quantization optimization for it is studied. Experimental results show that the proposed integer transform has coding performance similar to the transform used in H.264/MPEG-4 AVC, and gains nearly 0.1dB with the adaptive dead-zone quantization optimization.
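    The dead-zone quantization studied above can be sketched as a uniform scalar quantizer with an adjustable rounding offset. This is a generic illustration only; the function names and offset parameterization are ours, not taken from the AVS or H.264 specifications.

```python
def deadzone_quantize(coeff: int, qstep: int,
                      offset_num: int, offset_den: int) -> int:
    """Uniform scalar quantizer with an adjustable dead zone.

    level = sign(c) * floor((|c| + f) / qstep), with rounding offset
    f = qstep * offset_num / offset_den (f < qstep / 2).  A smaller f
    widens the dead zone around zero, so more small transform
    coefficients quantize to level 0 -- the knob an adaptive
    dead-zone optimization tunes per block or per frequency.
    """
    f = qstep * offset_num // offset_den
    level = (abs(coeff) + f) // qstep
    return level if coeff >= 0 else -level


def dequantize(level: int, qstep: int) -> int:
    """Decoder-side reconstruction: the table only needs qstep."""
    return level * qstep
```

    Because reconstruction depends only on the step size, the encoder can vary the dead-zone offset freely without any change on the decoder side, which is consistent with keeping the dequantization table small.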
    Introduction to AVS Audio
    Hao-Jun Ai, Shui-Xian Chen, and Rui-Min Hu
    Journal of Data Acquisition and Processing, 2006, 21 (3): 360-365 . 
    Abstract   PDF(389KB) ( 1554 )  
    This paper describes a general audio coding algorithm which has recently been standardized by AVS, China. The algorithm is based on a perceptual coding technique. The codec delivers near CD-quality audio at 128kb/s. This paper describes the coder structure in detail and discusses the reasons for specific design choices. A summary of the subjective test results is presented for the prototype codec. A Comparison Mean Opinion Score (CMOS) test indicates that the quality of the AVS audio coder is comparable to that of the MPEG Layer-3 audio coder. A real-time decoder based on a 16-bit fixed-point DSP was used for the characterization test. The performance of the DSP solution, including computational complexity and storage characteristics, was demonstrated.
    Basic Considerations on AVS DRM Architecture
    Tie-Jun Huang and Yong-Liang Liu
    Journal of Data Acquisition and Processing, 2006, 21 (3): 366-369 . 
    Abstract   PDF(204KB) ( 1754 )  
    Digital Rights Management (DRM) is an important infrastructure for the digital media age. It is a part of the AVS (Audio and Video coding Standard) of China. The AVS Trusted Decoder (ATD), which plays back digital media programs according to rights conditions, is the core of the AVS DRM architecture. Adaptation layers are responsible for translating or negotiating between the ATD and peripheral systems. The Packaging Adaptation Layer (PAL), Licensing Adaptation Layer (LAL) and Rendering Adaptation Layer (RAL) help the ATD achieve interoperability in various DRM environments.
    An Efficient VLSI Architecture for Motion Compensation of AVS HDTV Decoder
    Jun-Hao Zheng, Lei Deng, Peng Zhang, and Don Xie
    Journal of Data Acquisition and Processing, 2006, 21 (3): 370-377 . 
    Abstract   PDF(556KB) ( 1629 )  
    In Part 2 of the advanced Audio Video coding Standard (AVS-P2), many efficient coding tools are adopted in motion compensation, such as the new motion vector prediction, symmetric matching, and quarter-precision interpolation. However, these new features greatly increase the computational complexity and memory bandwidth requirement, which makes motion compensation a difficult component in the implementation of an AVS HDTV decoder. This paper proposes an efficient motion compensation architecture for the AVS-P2 video standard up to Level 6.2 of the Jizhun profile. It has a macroblock-level pipelined structure consisting of an MV predictor unit, a reference fetch unit and a pixel interpolation unit. The proposed architecture exploits the parallelism in the AVS motion compensation algorithm to accelerate operations and uses a dedicated design to optimize memory access. It has been integrated in a prototype chip fabricated with TSMC 0.18-um CMOS technology, and the experimental results show that the architecture achieves real-time AVS-P2 decoding for HDTV 1080i (1920x1088 4:2:0 60field/s) video. The design works at 148.5MHz with a total gate count of about 225K.
    Improved FFSBM Algorithm and Its VLSI Architecture for AVS Video Standard
    Li Zhang, Don Xie, and Di Wu
    Journal of Data Acquisition and Processing, 2006, 21 (3): 378-382 . 
    Abstract   PDF(331KB) ( 1398 )  
    The video part of AVS (Audio Video coding Standard) has recently been finalized. It adopts variable block size motion compensation to improve coding efficiency, which brings a heavy computational burden when applied to compressing HDTV (high definition television) content. Based on the original FFSBM (fast full-search block matching), this paper proposes an improved FFSBM algorithm that adaptively reduces the complexity of motion estimation according to the actual motion intensity. The main idea of the proposed algorithm is to use the statistical distribution of the MVD (motion vector difference). A VLSI (very large scale integration) architecture is also proposed to implement the improved motion estimation algorithm. Experimental results show that this algorithm-hardware co-design gives a better trade-off between gate count and throughput than existing designs and is a proper solution for variable block size motion estimation in AVS.
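    The full-search block matching that FFSBM accelerates can be sketched as follows: for one block, every candidate displacement in a search window is scored by the sum of absolute differences (SAD). This is a plain illustration of the baseline the paper improves on; the function names and the toy search range are ours.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))


def full_search(cur, ref, bx, by, bs, sr):
    """Exhaustive block matching for the bs x bs block of `cur` at
    (bx, by): test every displacement (dx, dy) within +/-sr in the
    reference frame `ref` and return the best (dx, dy, cost)."""
    h, w = len(ref), len(ref[0])
    cur_block = [row[bx:bx + bs] for row in cur[by:by + bs]]
    best = (0, 0, float("inf"))
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            x, y = bx + dx, by + dy
            if 0 <= x and x + bs <= w and 0 <= y and y + bs <= h:
                cand = [row[x:x + bs] for row in ref[y:y + bs]]
                cost = sad(cur_block, cand)
                if cost < best[2]:
                    best = (dx, dy, cost)
    return best
```

    The cost of this baseline grows with the square of the search range and with the number of block partitions, which is exactly why an MVD-statistics-driven reduction of the search effort pays off in hardware.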
    Second Attribute Algorithm Based on Tree Expression
    Su-Qing Han and Jue Wang
    Journal of Data Acquisition and Processing, 2006, 21 (3): 383-392 . 
    Abstract   PDF(423KB) ( 1393 )  
    One view of finding a personalized solution of reduct in an information system is grounded on the viewpoint that an attribute order can serve as a kind of semantic representation of user requirements. Thus the problem of finding personalized solutions can be transformed into computing the reduct on an attribute order. The second attribute theorem describes the relationship between the set of attribute orders and the set of reducts, and can be used to transform the problem of searching for solutions that meet user requirements into the problem of modifying a reduct based on a given attribute order. An algorithm based on the second attribute theorem, with computation on the discernibility matrix, follows directly. Its time complexity is O(n^2*m), where n is the number of objects and m the number of attributes of an information system. This paper presents another effective second attribute algorithm for facilitating the use of the second attribute theorem, with computation on the tree expression of an information system. The time complexity of the new algorithm is linear in n. This algorithm is proved to be equivalent to the algorithm on the discernibility matrix.
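    The discernibility matrix underlying this line of work can be sketched as follows. This is the standard rough-set construction (Skowron's matrix) that the O(n^2*m) algorithm computes on, not the paper's tree-expression algorithm; the function names are ours.

```python
def discernibility_matrix(table):
    """Skowron's discernibility matrix of an information system.

    table: list of objects, each a tuple of attribute values.
    Entry (i, j) is the set of attribute indices on which objects
    i and j take different values.  Building it already costs
    O(n^2 * m), which motivates the linear-in-n alternative.
    """
    n = len(table)
    return {(i, j): {a for a in range(len(table[i]))
                     if table[i][a] != table[j][a]}
            for i in range(n) for j in range(i + 1, n)}


def is_super_reduct(table, attrs):
    """An attribute subset preserves all discernibility iff it
    intersects every non-empty entry of the matrix."""
    return all(entry & attrs
               for entry in discernibility_matrix(table).values()
               if entry)
```

    A reduct is then a minimal such subset; which reduct is chosen can be steered by an attribute order, which is where the second attribute theorem enters.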
    Force-Based Incremental Algorithm for Mining Community Structure in Dynamic Network
    Bo Yang and Da-You Liu
    Journal of Data Acquisition and Processing, 2006, 21 (3): 393-400 . 
    Abstract   PDF(574KB) ( 1582 )  
    Community structure is an important property of networks. Being able to identify communities can provide invaluable help in exploiting and understanding both social and non-social networks. Several algorithms have been developed to date. However, these algorithms work well only with small or moderate networks, with on the order of 10^4 vertices. Moreover, all the existing algorithms are off-line and cannot cope with highly dynamic networks such as the web, in which pages are updated frequently. When an already clustered network is updated, the entire network, including both its original and incremental parts, has to be recalculated, even though only slight changes are involved. To address this problem, an incremental algorithm is proposed that allows mining community structure in large-scale, dynamic networks. Based on the community structure detected previously, the algorithm takes little time to reclassify the entire network, including both the original and incremental parts. Furthermore, the algorithm is faster than most existing algorithms, such as Girvan and Newman's algorithm and its improved versions. The algorithm can also help visualize community structures in a network and provides a new approach to studying the evolution of dynamic networks.
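    Algorithms in the Girvan-Newman family score a candidate partition by Newman's modularity Q, which compares the intra-community edge fraction with what a random graph of the same degrees would give. As background for the comparison above, a minimal sketch (the function names are ours):

```python
def modularity(edges, community):
    """Newman's modularity Q of a partition of an undirected graph.

    edges: list of (u, v) pairs (no self-loops assumed here);
    community: dict mapping each vertex to its community label.
    Q = sum over communities c of  e_c / m - (d_c / 2m)^2,
    where e_c is the number of intra-community edges, d_c the total
    degree inside c, and m the total number of edges.
    """
    m = len(edges)
    internal = {}     # community -> intra-community edge count
    degree_sum = {}   # community -> sum of vertex degrees
    for u, v in edges:
        degree_sum[community[u]] = degree_sum.get(community[u], 0) + 1
        degree_sum[community[v]] = degree_sum.get(community[v], 0) + 1
        if community[u] == community[v]:
            internal[community[u]] = internal.get(community[u], 0) + 1
    return sum(internal.get(c, 0) / m - (d / (2 * m)) ** 2
               for c, d in degree_sum.items())
```

    Two triangles joined by a single bridge edge, split into their natural halves, score Q = 5/14; an incremental algorithm tries to keep such a score high while only re-examining the changed part of the network.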
    Constraint-Based Fuzzy Models for an Environment with Heterogeneous Information-Granules
    K. Robert Lai and Yi-Yuan Chiang
    Journal of Data Acquisition and Processing, 2006, 21 (3): 401-411 . 
    Abstract   PDF(547KB) ( 1283 )  
    A novel framework for fuzzy modeling and model-based control design is described. Based on the theory of fuzzy constraint processing, the fuzzy model can be viewed as a generalized Takagi-Sugeno (TS) fuzzy model with fuzzy functional consequences. It uses multivariate antecedent membership functions obtained by granular-prototype fuzzy clustering methods and consequent fuzzy equations obtained by fuzzy regression techniques. Constrained optimization is used to estimate the consequent parameters, where the constraints are based on control-relevant a priori knowledge about the modeled process. The fuzzy-constraint-based approach provides the following features. 1) The knowledge base of a constraint-based fuzzy model can incorporate information with various types of fuzzy predicates. Consequently, it is easy to provide a fusion of different types of knowledge; the knowledge can come from data-driven approaches and/or from control-relevant physical models. 2) A corresponding inference mechanism for the proposed model can deal with heterogeneous information granules. 3) Both numerical and linguistic inputs can be accepted for predicting new outputs. The proposed techniques are demonstrated by means of two examples: a nonlinear function-fitting problem and the well-known Box-Jenkins gas furnace process. The first example shows that the proposed model achieves results similar to the traditional rule-based approach with fewer fuzzy predicates, while the second shows that performance can be significantly improved when control-relevant constraints are considered.
    Agent-Oriented Probabilistic Logic Programming
    Jie Wang, Shi-Er Ju, and Chun-Nian Liu
    Journal of Data Acquisition and Processing, 2006, 21 (3): 412-417 . 
    Abstract   PDF(292KB) ( 2495 )  
    Currently, agent-based computing is an active research area, and great efforts have been made towards agent-oriented programming from both theoretical and practical perspectives. However, most of this work assumes that there is no uncertainty in an agent's mental state or environment. In other words, under this assumption developers can only specify how an agent acts when it is 100% sure about what is true or false. In this paper, this unrealistic assumption is removed and a new agent-oriented probabilistic logic programming language is proposed that can deal with uncertain information about the world. The language combines features of probabilistic logic programming and imperative programming.
    Checking Content Consistency of Integrated Web Documents
    Franz Weitl and Burkhard Freitag
    Journal of Data Acquisition and Processing, 2006, 21 (3): 418-429 . 
    Abstract   PDF(513KB) ( 1649 )  
    A conceptual framework for the specification and verification of constraints on the content and narrative structure of documents is proposed. As a specification formalism, CTLDL is defined, an extension of the temporal logic CTL by description logic concepts. In contrast to existing solutions, this approach allows integrating ontologies to achieve interoperability and abstraction from the implementation aspects of documents. This makes CTLDL particularly suitable for integrating heterogeneous and distributed information resources in the semantic web.
    GDM: A New Graph Based Data Model Using Functional Abstraction
    Sankhayan Choudhury, Nabendu Chaki, and Swapan Bhattacharya
    Journal of Data Acquisition and Processing, 2006, 21 (3): 430-438 . 
    Abstract   PDF(341KB) ( 2030 )  
    In this paper, a Graph-based semantic Data Model (GDM) is proposed with the primary objective of bridging the gap between the human perception of an enterprise and the need of the computing infrastructure to organize information in a particular manner for efficient storage and retrieval. GDM is proposed as an alternative data model that combines the advantages of the relational model with the positive features of semantic data models. The proposed GDM offers the designer a structural representation that makes it easy to comprehend complex relations amongst basic data items. GDM allows an entire database to be viewed as a graph (V, E) in a layered organization. The graph is created bottom-up, where V represents basic instances of data or functionally abstracted modules, called primary semantic groups (PSGs) and secondary semantic groups (SSGs). An edge in the model represents a relationship among secondary semantic groups. The contents of the lowest layer are the semantically grouped data values in the form of primary semantic groups. The SSGs are higher-level abstractions, created by encapsulating various PSGs, SSGs and basic data elements. This encapsulation continues, generating further secondary semantic groups, until the designer considers the actual problem domain sufficiently described. GDM thus uses the standard abstractions available in a semantic data model together with a structural representation in terms of a graph. The operations on the data model are formalized in the proposed graph algebra. A Graph Query Language (GQL) is also developed, maintaining similarity with the widely accepted and user-friendly SQL. Finally, the paper presents a methodology for making GDM compatible with a distributed environment, and a corresponding distributed query processing technique is suggested for completeness.
    A Supervised Learning Approach to Search of Definitions
    Jun Xu, Yun-Bo Cao, Hang Li, Min Zhao, and Ya-Lou Huang
    Journal of Data Acquisition and Processing, 2006, 21 (3): 439-449 . 
    Abstract   PDF(426KB) ( 2537 )  
    This paper addresses the issue of searching for definitions. Specifically, for a given term, we find its definition candidates and rank them according to their likelihood of being good definitions. This is in contrast to the traditional methods of either generating a single combined definition or outputting all retrieved definitions. Definition ranking is essential for this task. A specification for judging the goodness of a definition is given, in which a definition is categorized into one of three levels: good, indifferent, or bad. Methods of performing definition ranking are also proposed, formalizing the problem as either classification or ordinal regression. We employ SVM (Support Vector Machines) as the classification model and Ranking SVM as the ordinal regression model, respectively; both rank definition candidates according to their likelihood of being good definitions. Features for constructing the SVM and Ranking SVM models are defined, representing the characteristics of the terms, the definition candidates, and their relationship. Experimental results indicate that the use of SVM and Ranking SVM can significantly outperform baseline methods such as heuristic rules, conventional information retrieval (Okapi), or SVM regression. This holds both when the answers are paragraphs and when they are sentences. Experimental results also show that SVM and Ranking SVM models trained in one domain can be adapted to another, indicating that generic models for definition ranking can be constructed.
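    The pairwise transformation behind Ranking SVM can be sketched as follows. For simplicity a perceptron stands in for the SVM solver, so this illustrates only the ordinal-regression formulation, not the authors' trained models; all names are ours.

```python
def pairwise_differences(candidates):
    """The Ranking SVM reduction: for every pair of candidates
    (x_i, y_i), (x_j, y_j) with y_i > y_j (i is the better
    definition), emit the difference vector x_i - x_j.  A linear
    ranker then only needs w . d > 0 for every such d."""
    diffs = []
    for x_i, y_i in candidates:
        for x_j, y_j in candidates:
            if y_i > y_j:
                diffs.append([a - b for a, b in zip(x_i, x_j)])
    return diffs


def train_ranker(candidates, epochs=50, lr=0.1):
    """Perceptron stand-in for the SVM: learn a weight vector w
    that scores every preference difference vector positively."""
    w = [0.0] * len(candidates[0][0])
    for _ in range(epochs):
        for d in pairwise_differences(candidates):
            if sum(wi * di for wi, di in zip(w, d)) <= 0:
                w = [wi + lr * di for wi, di in zip(w, d)]
    return w


def rank(w, xs):
    """Order candidates by descending linear score w . x."""
    return sorted(xs, key=lambda x: -sum(wi * xi for wi, xi in zip(w, x)))
```

    The same reduction works whether the graded labels come from a three-level good/indifferent/bad scheme or any other ordinal scale, which is what makes the ordinal-regression view natural here.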
    A Component-Based Debugging Approach for Detecting Structural Inconsistencies in Declarative Equation Based Models
    Jian-Wan Ding, Li-Ping Chen, and Fan-Li Zhou
    Journal of Data Acquisition and Processing, 2006, 21 (3): 450-458 . 
    Abstract   PDF(448KB) ( 1594 )  
    Object-oriented modeling with declarative equation based languages often inadvertently leads to structural inconsistencies. Component-based debugging is a new structural analysis approach that addresses this problem by analyzing the structure of each component in a model to locate faulty components separately. The analysis procedure is performed recursively, following a depth-first rule. It first generates fictitious equations for a component to establish a debugging environment, and then detects structural defects by using graph-theoretic approaches to analyze the structure of the system of equations resulting from the component. The proposed method can automatically locate the components that cause structural inconsistencies and show the user detailed error messages. This information is a great help in finding and localizing structural inconsistencies, and in some cases pinpoints them immediately.
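    The graph-theoretic structural test can be illustrated by the classical equation-variable matching criterion: a structurally well-posed system must admit a perfect matching between its equations and the unknowns they mention. Below is a minimal augmenting-path sketch of that generic criterion, not the paper's component-wise procedure; the names are ours.

```python
def perfect_matching_exists(eq_vars):
    """Structural consistency test for a system of equations.

    eq_vars: one set of variable names per equation (the bipartite
    incidence structure).  Returns True iff every equation can be
    paired with a distinct variable it contains; a failure signals
    an over-constrained or otherwise structurally singular subset.
    """
    match = {}                              # variable -> equation index

    def try_assign(eq, seen):
        """Find an augmenting path for equation `eq`."""
        for v in eq_vars[eq]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current owner can be re-matched elsewhere
            if v not in match or try_assign(match[v], seen):
                match[v] = eq
                return True
        return False

    return all(try_assign(i, set()) for i in range(len(eq_vars)))
```

    For example, the pair {x = 1, x = 2} shares a single unknown between two equations, so no perfect matching exists and the subset is reported as structurally inconsistent.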
    Visual Similarity Based Document Layout Analysis
    Di Wen and Xiao-Qing Ding
    Journal of Data Acquisition and Processing, 2006, 21 (3): 459-448 . 
    Abstract   PDF(833KB) ( 1652 )  
    In this paper, a visual similarity based document layout analysis (DLA) scheme is proposed which, by using a clustering strategy, can adaptively deal with documents in different languages, with different layout structures and skew angles. Aiming at a robust and adaptive DLA approach, the authors first find a set of representative filters and statistics that characterize typical texture patterns in document images, through a visual similarity testing process. Texture features are then extracted with these filters and passed into a dynamic clustering procedure called visual similarity clustering. Finally, text contents are located from the clustered results. Benefiting from this scheme, the algorithm demonstrates strong robustness and adaptability across a wide variety of documents, which previous traditional DLA approaches do not possess.
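    The clustering step can be illustrated with a plain k-means stand-in: texture-feature vectors extracted from page regions are grouped by visual similarity. The paper's dynamic visual similarity clustering differs from k-means, so this only sketches the grouping idea; the initialisation and names are ours.

```python
def kmeans(points, k, iters=20):
    """Plain k-means over feature vectors: each page region joins
    the cluster whose centre is nearest in feature space, then the
    centres are re-estimated; repeated until (effectively) stable."""
    centers = [list(p) for p in points[:k]]      # naive initialisation
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[j].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups
```

    In a DLA setting the resulting clusters would correspond to visually homogeneous region types (e.g. body text vs. graphics), from which the text regions are then selected.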
E-mail: info@sjcjycl.cn