Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      15 March 2003, Volume 18, Issue 2
    Articles
    Problems in the Information Dissemination of the Internet Routing
    ZHAO YiXin (赵邑新), YIN Xia (尹 霞) and WU JianPing (吴建平)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(652KB) ( 1235 )  
    Internet routing is achieved by a set of nodes running distributed algorithms, i.e., routing protocols. However, many nodes cannot resist wrong messages or improper operations and are unable to detect or correct them, so a single wrong message or improper operation can easily sweep across almost the whole Internet. This fragility stems from the features of the algorithms and protocols themselves, and the strategies adopted by network equipment manufacturers and administrators also have an important influence. When choosing implementation or operation options, they tend to focus on the cost to a single node or a single area and simplify implementations and configurations, while paying little attention to the effect on the whole network. This paper argues that such a scheme is not reasonable and suggests taking the view of overall optimization instead. From three typical cases in Internet routing, a general model is abstracted, which makes the results relevant to a broader range of Internet-related issues. The paper evaluates the complexity of the theoretical analysis and then obtains the effect of erroneous information on the whole network through simulation on the Internet topology. It is shown that even a very small amount of erroneous information can severely impact the Internet, and that downstream nodes must spend much more effort on remedies. This result is presented intuitively through comparative charts and visual presentations. Finally, a hierarchical solution for establishing an upgrade plan is given, which helps to upgrade the nodes of the network in the most efficient and economical way.
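    To make the scale of the problem concrete, the following sketch (not the authors' model; the topology generator, acceptance probability, and node count are illustrative assumptions) propagates a single wrong routing message over a random graph and counts how many nodes it reaches.

```python
# Minimal sketch (not the paper's model): propagate a wrong route
# announcement over a random graph and count the nodes it reaches.
# Topology size, degree, and acceptance probability are assumptions.
import random
from collections import deque

def build_random_topology(n_nodes=200, avg_degree=4, seed=1):
    random.seed(seed)
    edges = {i: set() for i in range(n_nodes)}
    for i in range(n_nodes):
        for j in random.sample(range(n_nodes), avg_degree):
            if j != i:
                edges[i].add(j)
                edges[j].add(i)
    return edges

def affected_nodes(edges, origin, accept_prob=0.9):
    """BFS from the node that emits the wrong message; each neighbor
    accepts and re-propagates it with probability accept_prob."""
    seen, queue = {origin}, deque([origin])
    while queue:
        u = queue.popleft()
        for v in edges[u]:
            if v not in seen and random.random() < accept_prob:
                seen.add(v)
                queue.append(v)
    return seen

topology = build_random_topology()
impacted = affected_nodes(topology, origin=0)
print(f"{len(impacted)} of {len(topology)} nodes received the wrong message")
```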
    Quantitative Adaptive RED in Differentiated Service Networks
    LONG KePing (隆克平), WANG Qian (王 茜), CHENG ShiDuan (程时端) and CHEN JunLiang (陈俊亮)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(361KB) ( 1417 )  
    This paper derives a quantitative model relating RED (Random Early Detection) max_p to the committed traffic rate for token-based marking schemes in DiffServ IP networks. A DiffServ Quantitative RED (DQRED) is then presented, which adapts its dropping probability to the marking probability of the edge router so as to reflect not only the shared bandwidth but also the performance requirements of these services. Hence, DQRED can cooperate with marking schemes to guarantee fairness among different DiffServ AF class services. A new marking probability metering algorithm is also proposed to work with DQRED. Simulation results verify that the DQRED mechanism not only controls congestion in a DiffServ network very well but also satisfies the different quality requirements of AF class services. The performance of DQRED is better than that of WRED.
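    For readers unfamiliar with RED, the sketch below shows the classic RED drop-probability ramp and the max_p knob that DQRED adapts; the actual mapping from marking probability to max_p follows the paper's quantitative model and is not reproduced here. The thresholds and queue values are illustrative.

```python
# Classic RED drop probability (a sketch of the knob DQRED adapts).
# DQRED's mapping from marking probability to max_p follows the paper's
# quantitative model; max_p below is just a placeholder input.
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    if avg_queue < min_th:
        return 0.0                      # no early drops
    if avg_queue >= max_th:
        return 1.0                      # force drops
    # linear ramp between the thresholds, scaled by max_p
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# Example: a router whose edge marker reports heavier marking could
# raise max_p so that packets are dropped earlier for the same queue.
for max_p in (0.02, 0.1):
    print(max_p, red_drop_probability(avg_queue=40, min_th=20, max_th=60, max_p=max_p))
```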
    A Cost Effective Fault-Tolerant Scheme for RAIDs
    FANG Liang (方 粮) and LU XiCheng (卢锡城)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(275KB) ( 1480 )  
    The rapid progress in mass storage technology has made it possible for designers to implement large data storage systems for a variety of applications. One efficient way to build large storage systems is to use RAIDs as basic storage modules. In general, data can be recovered in a RAID only when a single error occurs. But in large RAID systems, the fault probability increases with the number of disks, and the use of high-capacity disks prolongs recovery time, which in turn increases the probability of a second disk failing. Therefore, it is necessary to develop methods to recover data when two or more errors have occurred. In this paper, a fault-tolerant scheme based on an extended Reed-Solomon code is proposed, a recovery procedure that corrects up to two errors is designed and implemented jointly in software and hardware, and the scheme is verified by computer simulation. In this scheme, only two redundant disks are used to recover from up to two disk faults. The encoding and decoding methods and the combined software/hardware implementation are described, as is the application of the scheme in software RAIDs built into cluster computers. Compared with existing methods such as EVENODD and DH, the proposed scheme offers distinct improvements in implementation and redundancy.
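    The idea of two redundant disks can be illustrated with a simplified GF(2^8) dual-parity stripe in the style of RAID-6. This is only a sketch of the underlying algebra, not the paper's extended Reed-Solomon code or its hardware/software split, and the disk contents below are made up.

```python
# Simplified GF(2^8) dual-parity stripe (RAID-6 style), meant only to
# illustrate the idea of two redundant disks; the paper's extended
# Reed-Solomon scheme is not reproduced here.
def gf_mul(a, b):                      # multiply in GF(2^8), polynomial 0x11D
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1D
        b >>= 1
    return p

def parities(stripes):
    """P is plain XOR; Q weights disk i by the generator 2**i in GF(2^8)."""
    length = len(stripes[0])
    p, q = bytearray(length), bytearray(length)
    for i, chunk in enumerate(stripes):
        g = 1
        for _ in range(i):
            g = gf_mul(g, 2)
        for k, byte in enumerate(chunk):
            p[k] ^= byte
            q[k] ^= gf_mul(g, byte)
    return bytes(p), bytes(q)

def recover_single(stripes, lost, p):
    """Rebuild one lost data chunk from P and the surviving chunks."""
    out = bytearray(p)
    for i, chunk in enumerate(stripes):
        if i == lost:
            continue
        for k, byte in enumerate(chunk):
            out[k] ^= byte
    return bytes(out)

data = [b"disk0data", b"disk1data", b"disk2data"]   # toy stripe contents
p, q = parities(data)
assert recover_single(data, lost=1, p=p) == data[1]
```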
    Improved Method to Generate Path-Wise Test Data
    SHAN JinHui (单锦辉), WANG Ji (王 戟), QI ZhiChang (齐治昌) and WU JianPing (吴建平)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(338KB) ( 1377 )  
    Gupta et al. proposed a method, referred to as the Iterative Relaxation Method, to generate test data for a given path in a program by linearizing the predicate functions. In this paper, a model language is presented and the properties of static and dynamic data dependencies are investigated. The notions used in the Iterative Relaxation Method are defined formally. The predicate slice proposed by Gupta et al. is extended to a path-wise static slice, and the correctness of the construction algorithm is proved. The improvement shows that the constructions of the predicate slice and the input dependency set can be omitted. The equivalence of the systems of constraints generated by both methods is proved. A prototype of the path-wise test data generator is presented. Experiments show that the method is practical and suits the path-wise automatic generation of test data for both white-box and black-box testing.
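    As a toy illustration of path-wise test data generation (not the Iterative Relaxation Method itself), the sketch below nudges a numeric input until all branch predicates along a chosen path hold; the program under test and the step size are assumptions made only for the example.

```python
# Toy illustration (not the Iterative Relaxation Method): adjust a
# numeric input until the branch predicates along a target path hold.
def program_path_predicates(x):
    """Predicates that must all be true for the target path: take the
    'then' branch of `if x > 10` and of `if x * 2 < 40`."""
    return [x > 10, x * 2 < 40]

def generate_input(x0=0.0, step=1.0, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        preds = program_path_predicates(x)
        if all(preds):
            return x
        # move toward satisfying the first violated predicate
        if not preds[0]:
            x += step
        elif not preds[1]:
            x -= step
    raise RuntimeError("no input found for the target path")

print(generate_input())   # e.g. 11.0 drives execution down the target path
```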
    Multi-Cue-Based Face and Facial Feature Detection on Video Segments
    PENG ZhenYun (彭振云), AI HaiZhou (艾海舟), HONG Wei (洪 微), LIANG LuHong (梁路宏) and XU GuangYou (徐光祐)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(396KB) ( 1531 )  
    An approach is presented to detect faces and facial features in a video segment based on multiple cues, including gray-level distribution, color, motion, templates, and algebraic features. Faces are first detected across the frames using color segmentation, template matching, and an artificial neural network. A PCA-based (Principal Component Analysis) feature detector for still images is then used to detect facial features on each single frame until the resulting features of three adjacent frames, referred to as base frames, are consistent with each other. The features of frames neighboring the base frames are first detected by the still-image feature detector and then verified and corrected according to a smoothness constraint and a planar surface motion constraint. Experiments have been performed on video segments captured under different environments, and the presented method is shown to be robust and accurate over varying poses, ages, and illumination conditions.
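    The PCA scoring idea behind a still-image feature detector can be sketched as follows: build an eigen-subspace from training patches and score a candidate patch by its reconstruction error. The training data here is random and purely illustrative; it is not the detector trained in the paper.

```python
# Sketch of the PCA idea: score a candidate patch by its reconstruction
# error in an eigen-subspace built from training patches (synthetic here).
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(100, 64))         # 100 flattened 8x8 feature patches
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:10]                            # keep the top 10 components

def reconstruction_error(patch):
    """Lower error => the patch looks more like the trained feature."""
    centered = patch - mean
    coeffs = basis @ centered
    return float(np.linalg.norm(centered - basis.T @ coeffs))

candidate = train[0]                       # a patch similar to the training set
outlier = rng.normal(loc=5.0, size=64)     # a patch unlike the training set
print(reconstruction_error(candidate), reconstruction_error(outlier))
```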
    A Constraint Satisfaction Neural Network and Heuristic Combined Approach for Concurrent Activities Scheduling
    YAN JiHong (闫纪红) and WU Cheng (吴 澄)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(297KB) ( 1268 )  
    Scheduling activities in a concurrent product development process is of great significance for shortening development lead time and minimizing cost. Moreover, it can eliminate unnecessary redesign periods and guarantee that serial activities are executed as concurrently as possible. This paper presents a combined constraint-satisfaction-neural-network and heuristic approach for concurrent activity scheduling. In the combined approach, the neural network is used to obtain feasible starting times for all activities based on sequence constraints, while the heuristic algorithm is used to obtain a feasible solution of the scheduling problem based on resource constraints. The feasible scheduling solution is obtained by a gradient optimization function. Simulations have shown that the proposed combined approach is efficient and feasible for concurrent activity scheduling.
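    The heuristic stage can be illustrated with a plain list-scheduling sketch that respects precedence and unary resource constraints. The neural-network stage and the gradient optimization function of the paper are not reproduced, and the activities, resources, and durations below are invented for the example.

```python
# A plain list-scheduling heuristic (precedence + unary resources) to
# illustrate the kind of feasible schedule the combined approach seeks.
def schedule(activities, predecessors, resource_of, duration):
    """Greedy: repeatedly start the first activity whose predecessors are
    done, at the earliest time its resource is free."""
    finish, busy_until = {}, {}
    remaining = set(activities)
    while remaining:
        for a in sorted(remaining):
            if all(p in finish for p in predecessors[a]):
                ready = max([finish[p] for p in predecessors[a]], default=0)
                start = max(ready, busy_until.get(resource_of[a], 0))
                finish[a] = start + duration[a]
                busy_until[resource_of[a]] = finish[a]
                remaining.remove(a)
                break
    return finish

acts = ["design", "analysis", "prototype", "test"]
preds = {"design": [], "analysis": ["design"],
         "prototype": ["design"], "test": ["analysis", "prototype"]}
res = {"design": "cad", "analysis": "cad", "prototype": "lab", "test": "lab"}
dur = {"design": 3, "analysis": 2, "prototype": 4, "test": 2}
print(schedule(acts, preds, res, dur))     # finish times of each activity
```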
    A Site-Based Proxy Cache
    ZHU Jing (朱 晶), YANG GuangWen (杨广文), HU Min (胡 敏) and SHEN MeiMing (沈美明)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(285KB) ( 1514 )  
    In traditional proxy caches, any visited page from any Web server is cached independently, ignoring the connections between pages, and users still have to visit indexing pages frequently just to reach the truly informative ones, which wastes caching space and causes unnecessary Web traffic. To solve this problem, this paper introduces a site graph model to describe the WWW and builds a site-based replacement strategy on top of it. The concept of "access frequency" is developed to evaluate whether a Web page is worth keeping in the caching space. On the basis of a user's access history, auxiliary navigation information is provided to help the user reach target pages more quickly. Performance tests show that the proposed proxy cache system achieves a higher hit ratio than traditional ones and effectively reduces users' access latency.
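    A minimal sketch of frequency-based replacement is given below; the site graph model and the navigation hints of the paper are not reproduced, and the class name, capacity, and URLs are illustrative assumptions.

```python
# Sketch of a frequency-based replacement policy (the paper's site-graph
# model and navigation hints are not reproduced here).
class FrequencyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}        # url -> page body
        self.hits = {}         # url -> access count ("access frequency")

    def get(self, url, fetch):
        """Return a cached page, or fetch and cache it, evicting the
        least frequently accessed entry when the cache is full."""
        if url in self.store:
            self.hits[url] += 1
            return self.store[url]
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda u: self.hits[u])
            del self.store[victim], self.hits[victim]
        self.store[url] = fetch(url)
        self.hits[url] = 1
        return self.store[url]

cache = FrequencyCache(capacity=2)
fetch = lambda url: f"<html>{url}</html>"      # stand-in for a real fetch
for url in ["/index", "/index", "/a", "/b"]:   # "/a" is evicted before "/b"
    cache.get(url, fetch)
print(sorted(cache.store))                     # ['/b', '/index']
```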
    Verification of Duration Systems Using an Approximation Approach
    Riadh Robbana
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(354KB) ( 1191 )  
    We consider the verification problem of invariance properties for timed systems modeled by (extended) Timed Graphs with duration variables. This problem is undecidable in the general case. Nevertheless, this paper gives a technique that extends a given system into another one containing the initial computations as well as additional ones. A digitization technique is then defined that allows translation from the continuous case to the discrete one. Using this digitization, we show that each real computation in the initial system corresponds to a discrete computation in the extended system. We then show that the extended system is a very close approximation of the initial one, consequently allowing a good analysis of the invariance properties of the initial system.
    Construction of Feature-Matching Perception in Virtual Assembly
    CHENG Cheng (程 成), WANG HongAn (王宏安) and DAI GuoZhong (戴国忠)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(380KB) ( 1394 )  
    An important characteristic of virtual assembly is interaction. Traditional direct manipulation in virtual assembly relies on dynamic collision detection, which is very time-consuming and even impossible in a desktop virtual assembly environment. Feature matching is a critical process in harmonious virtual assembly and is the premise of assembly constraint sensing. This paper puts forward an active-object-based feature-matching perception mechanism and a feature-matching interactive computing process, both of which free direct manipulation in virtual assembly from collision detection. They also help the virtual environment understand user intention and improve interaction performance. Experimental results show that this perception mechanism enables users to achieve real-time direct manipulation in a desktop virtual environment.
    Specification and Verification of Multimedia Synchronization in Duration Calculus
    MA HuaDong (马华东)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(325KB) ( 1821 )  
    This paper proposes a new method of specifying multimedia synchronization based on Duration Calculus (DC), a real-time interval logic, presents the completeness of the new model, and uses it to specify the temporal relations between multimedia objects. Moreover, the paper provides a method of constructing a meta-script based on basic synchronization requirements. Some properties of the formal specifications, including safety and liveness, are stated in DC, and their verification is discussed in DC semantics. Compared with other methods for specifying multimedia synchronization, this method is more powerful and flexible, and it is well suited to specifying the quantitative properties of multimedia synchronization.
    Z-SATCHMORE: An Improvement of A-SATCHMORE
    HE LiFeng (何立风), Yuyan Chao, Tsuyoshi Nakamura and Hidenori Itoh
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(354KB) ( 1407 )  
    This paper presents an improvement of A-SATCHMORE (SATCHMORE with Availability). A-SATCHMORE incorporates relevancy testing and availability checking into SATCHMO to prune away irrelevant forward chaining. However, because it treats every consequent atom of non-Horn clauses as derivable, A-SATCHMORE may suffer a potential explosion of the search space when some of those consequent atoms are actually underivable. This paper introduces a solution to this problem and shows its correctness.
    Fault-Tolerant Systems with Concurrent Error-Locating Capability
    JIANG JianHui (江建慧), MIN YingHua (闵应骅) and PENG ChengLian (彭澄廉)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(393KB) ( 1359 )  
    Fault-tolerant systems have found wide application in military, industrial, and commercial areas. Most of these systems are constructed with multiple-modular redundancy or error control coding techniques and need fault-tolerance-specific components (such as voters, switchers, encoders, or decoders) to implement error-detecting or error-correcting functions. However, the problem of error detection, location, or correction for the fault-tolerance-specific components themselves has not yet been solved properly, which greatly affects the dependability of the whole fault-tolerant system. This paper presents a theory of robust fault-masking digital circuits for characterizing fault-tolerant systems with concurrent error-locating capability, together with a new scheme for dual-modular redundant systems with a partially robust fault-masking property. A basic robust fault-masking circuit is composed of a basic functional circuit and an error-locating corrector; such a circuit has the ability of both concurrent error correction and concurrent error location. Under this circuit model, in a partially robust fault-masking dual-modular redundant system, two redundant modules based on alternating-complementary logic constitute the basic functional circuit, and an error-correction-specific circuit called an alternating-complementary corrector serves as the error-locating corrector. The performance of the scheme (such as hardware complexity and time delay) is analyzed.
    DCF+: An Enhancement for Reliable Transport Protocol over WLAN
    WU HaiTao (邬海涛) and CHENG ShiDuan (程时端)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(384KB) ( 1600 )  
    The Distributed Coordination Function (DCF) is the basis of the IEEE 802.11 WLAN MAC protocols. This paper proposes a scheme named DCF+, which can be regarded as an optional extension to DCF, to enhance the performance of reliable transport protocols over WLAN. To analyze the performance of DCF and DCF+, the paper also introduces an analytical model to compute the saturated throughput of a WLAN. Compared with other models, this model is more accurate, as verified by elaborate simulations. Moreover, DCF+ is shown, by both modeling and simulation, to improve the performance of TCP over WLAN.
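    For orientation, the sketch below solves the widely used Bianchi-style fixed point for DCF saturation, i.e., the kind of baseline model that the paper's more accurate model refines. It is not the paper's model or DCF+, and the contention window parameters are illustrative.

```python
# Bianchi-style fixed point for DCF saturation (a common baseline model,
# not the paper's): tau = f(p), p = 1 - (1 - tau)^(n-1), solved by bisection.
def bianchi_tau(n, w_min=32, m=5):
    def f(p):                      # transmit probability given collision prob p
        return (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (w_min + 1) + p * w_min * (1.0 - (2.0 * p) ** m))
    def g(tau):                    # root of g is the self-consistent tau
        p = 1.0 - (1.0 - tau) ** (n - 1)
        return tau - f(p)
    lo, hi = 1e-6, 0.999
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for n in (5, 20, 50):
    tau = bianchi_tau(n)
    p = 1.0 - (1.0 - tau) ** (n - 1)
    print(f"stations={n:3d}  tau={tau:.4f}  collision prob p={p:.4f}")
```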
    An Efficient Key Assignment Scheme Based on One-Way Hash Function in a User Hierarchy
    Tzer-Shyong Chen and Yu-Fang Chung
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(227KB) ( 1450 )  
    To solve the problems arising from dynamic access control in a user hierarchy, a cryptographic key assignment scheme was proposed by Prof. Lin to improve performance and simplify the procedure. However, that scheme may endanger security when a user changes his secret key; besides, some secret keys may be disclosed due to an unsuitable selection of the security classes' identities. By incorporating a one-way hash function into Lin's scheme, the proposed modification greatly improves its security.
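    The general idea of hash-based key assignment in a hierarchy can be sketched as follows: a parent class derives a subordinate's key with a one-way hash of its own key and the subordinate's identity, so the derivation cannot be reversed. This is a generic illustration, not Lin's scheme or the paper's exact modification; the identities and key material are made up.

```python
# Generic illustration (not Lin's scheme or the paper's exact fix): a
# parent class derives a child's key from a one-way hash of its own key
# and the child's identity, so the derivation cannot be run in reverse.
import hashlib

def derive_child_key(parent_key: bytes, child_id: str) -> bytes:
    return hashlib.sha256(parent_key + child_id.encode()).digest()

root_key = b"secret-key-of-top-security-class"     # held by the top class only
dept_key = derive_child_key(root_key, "dept:engineering")
team_key = derive_child_key(dept_key, "team:storage")

# The top class can recompute team_key on demand, but a holder of
# team_key cannot invert SHA-256 to learn dept_key or root_key.
print(team_key.hex())
```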
    A Motion Compensated Lifting Wavelet Codec for 3D Video Coding
    LUO Lin (罗 琳), LI Jin (李 劲), LI ShiPeng (李世鹏) and ZHUANG ZhenQuan (庄镇泉)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(431KB) ( 1518 )  
    A motion-compensated lifting (MCLIFT) framework for 3D wavelet video coding is proposed in this paper. By using bi-directional motion compensation in each lifting step along the temporal direction, the video frames are effectively de-correlated. With proper entropy coding and bit-stream packaging schemes, the MCLIFT wavelet video coder is scalable in frame rate and quality. Experimental results show that the MCLIFT video coder outperforms the 3D wavelet video coder without motion by an average of 0.9-1.3 dB, and outperforms the MPEG-4 coder by an average of 0.2-0.6 dB.
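    A plain temporal Haar lifting step (without motion compensation) is sketched below to show the predict/update structure; MCLIFT additionally warps the reference frames by motion vectors inside each step, which is not reproduced here, and the frame sizes are toy values.

```python
# Plain temporal Haar lifting over a frame pair (a sketch; MCLIFT would
# motion-compensate the reference used in each predict/update step).
import numpy as np

def temporal_lift(even, odd):
    high = odd - even            # predict: difference frame
    low = even + high / 2        # update: average-like frame
    return low, high

def temporal_unlift(low, high):
    even = low - high / 2
    odd = high + even
    return even, odd

f0 = np.random.rand(4, 4)        # two consecutive frames (toy size)
f1 = np.random.rand(4, 4)
low, high = temporal_lift(f0, f1)
r0, r1 = temporal_unlift(low, high)
assert np.allclose(r0, f0) and np.allclose(r1, f1)   # perfect reconstruction
```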
    Allocating Network Resources by Weight Between TCP Traffics
    XU ChangBiao (徐昌彪), LONG KePing (隆克平) and YANG ShiZhong (杨士中)
    Journal of Data Acquisition and Processing, 2003, 18 (2): 0-0. 
    Abstract   PDF(263KB) ( 1541 )  
    Under the current TCP/IP architecture, all TCP flows compete for network resources completely fairly, which makes it difficult to satisfy the versatile communication requirements of applications. This paper presents an improved TCP congestion control mechanism in which, for every window W containing a packet loss, the congestion window becomes ω(1-b)W rather than (1-b)W. Theoretical analysis and simulation results show that the mechanism can be implemented easily with little additional overhead, and that it allocates network resources among flows in similar communication environments according to the weight parameter ω, which leads to guaranteed relative quality of service and improves network performance.
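    A minimal sketch of the weighted back-off described above: on each loss the window becomes ω(1-b)W instead of (1-b)W, so a flow with a larger ω retains a larger share. The weights, initial window, and loss pattern below are illustrative assumptions, not values from the paper.

```python
# Sketch of the weighted multiplicative decrease: on a loss the window
# becomes omega*(1-b)*W instead of (1-b)*W; omega = 1 is standard TCP.
def on_packet_loss(cwnd, omega=1.0, b=0.5):
    """Weighted back-off; omega = 1 reproduces standard TCP halving."""
    return max(1.0, omega * (1.0 - b) * cwnd)

def on_ack(cwnd):
    return cwnd + 1.0 / cwnd          # ordinary congestion-avoidance growth

# Two flows with the same loss pattern but different weights:
for omega in (1.0, 1.5):
    cwnd = 20.0
    for _ in range(3):                # three loss events
        cwnd = on_packet_loss(cwnd, omega=omega)
    print(f"omega={omega}: window after losses = {cwnd:.2f}")
```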