Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      15 January 2007, Volume 22 Issue 1   
    Articles
    Implementing a 1GHz Four-Issue Out-of-Order Execution Microprocessor in a Standard Cell ASIC Methodology
    Wei-Wu Hu, Ji-Ye Zhao, Shi-Qiang Zhong, Xu Yang, Elio Guidetti, and Chris Wu
    Journal of Data Acquisition and Processing, 2007, 22 (1): 1-14 . 
    Abstract   PDF(541KB) ( 50445 )  
    This paper introduces the microarchitecture and physical implementation of the Godson-2E processor, a four-issue superscalar RISC processor that supports the 64-bit MIPS instruction set. The adoption of aggressive out-of-order execution and memory hierarchy techniques helps Godson-2E achieve high performance. The Godson-2E processor has been physically designed in a 7-metal 90nm CMOS process using a cell-based methodology with some bit-sliced manual placement and a number of crafted cells and macros. The processor runs at 1GHz and achieves a SPEC CPU2000 rate higher than 500.
    An Energy-Efficient Instruction Scheduler Design with Two-Level Shelving and Adaptive Banking
    Yu-Lai Zhao, Xian-Feng Li, Dong Tong, and Xu Cheng
    Journal of Data Acquisition and Processing, 2007, 22 (1): 15-24 . 
    Abstract   PDF(590KB) ( 5598 )  
    Mainstream processors implement the instruction scheduler using a monolithic CAM-based issue queue (IQ), which consumes increasingly high energy as its size scales. In particular, its instruction wakeup logic accounts for a major portion of the consumed energy. Our study shows that instructions with 2 non-ready operands (called 2OP instructions) account for only a small percentage of instructions, but tend to spend long latencies in the IQ. They can be effectively shelved in a small RAM-based waiting instruction buffer (WIB) and steered into the IQ at the appropriate time. With this two-level shelving ability, half of the CAM tag comparators are eliminated from the IQ, which significantly reduces the energy of the wakeup operation. In addition, we propose an adaptive banking scheme to downsize the IQ and reduce the bit-width of the tag comparators. Experiments indicate that for an 8-wide issue superscalar or SMT processor, the energy consumption of the instruction scheduler can be reduced by 67%. Furthermore, the new design has potentially faster scheduler clock speed while maintaining an IPC close to that of the monolithic scheduler design. Compared with previous work on eliminating tags through prediction, our design is superior in terms of both energy reduction and SMT support.
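    To make the two-level shelving idea concrete, the following minimal Python sketch models the steering decision described above: instructions whose two source operands are both not ready wait in a RAM-like buffer (WIB) and enter the CAM-based issue queue only once at most one operand is still outstanding, so each IQ entry needs a single tag comparator. The class, field names, and structure sizes are illustrative assumptions, not the paper's design.

```python
from collections import deque, namedtuple

Instr = namedtuple("Instr", "sources")          # tags of the source operands

class TwoLevelScheduler:
    def __init__(self, iq_size=32, wib_size=64):
        self.iq, self.wib = deque(), deque()    # issue queue and waiting instruction buffer
        self.iq_size, self.wib_size = iq_size, wib_size
        self.ready = set()                      # tags of already-produced results

    def dispatch(self, instr):
        pending = [s for s in instr.sources if s not in self.ready]
        if len(pending) >= 2 and len(self.wib) < self.wib_size:
            self.wib.append(instr)              # shelve the 2OP instruction
            return True
        if len(self.iq) < self.iq_size:
            self.iq.append(instr)               # at most one tag left to watch in the IQ
            return True
        return False                            # both structures full: stall dispatch

    def wakeup(self, produced_tag):
        self.ready.add(produced_tag)
        for instr in list(self.wib):            # steer shelved instructions whose
            pending = [s for s in instr.sources if s not in self.ready]
            if len(pending) <= 1 and len(self.iq) < self.iq_size:
                self.wib.remove(instr)          # pending operands dropped to <= 1
                self.iq.append(instr)
```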
    A 485ps 64-Bit Parallel Adder in 0.18$\mu$m CMOS
    Dong-Yu Zheng, Yan Sun, Shao-Qing Li, and Liang Fang
    Journal of Data Acquisition and Processing, 2007, 22 (1): 25-27 . 
    Abstract   PDF(265KB) ( 5859 )  
    This paper presents an optimized 64-bit parallel adder. The sparse-tree architecture enables low carry-merge fan-outs and reduces inter-stage wiring complexity. Single-rail, semi-dynamic circuits improve operation speed. Simulation results show that the proposed adder operates with a delay of 485ps and a power of 25.6mW in a 0.18$\mu$m CMOS process, achieving the goal of higher speed and lower power.
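    The adder itself is a full-custom circuit, but the generate/propagate prefix computation behind any parallel adder can be sketched in software. The Python snippet below shows a dense Kogge-Stone-style prefix purely to illustrate the logarithmic-depth carry computation; the paper's sparse-tree design prunes this tree to lower carry-merge fan-out and wiring, which is not modelled here.

```python
def kogge_stone_add(a, b, width=64):
    """Software sketch of a dense parallel-prefix (Kogge-Stone) adder."""
    mask = (1 << width) - 1
    a, b = a & mask, b & mask
    g = [((a >> i) & (b >> i)) & 1 for i in range(width)]   # generate bits
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]   # propagate bits
    G, P = g[:], p[:]
    d = 1
    while d < width:                      # log2(width) combine levels
        for i in range(width - 1, d - 1, -1):
            G[i] = G[i] | (P[i] & G[i - d])
            P[i] = P[i] & P[i - d]
        d *= 2
    carry = [0] * (width + 1)
    for i in range(width):
        carry[i + 1] = G[i]               # carry out of bits [i..0], carry-in = 0
    s = 0
    for i in range(width):
        s |= (p[i] ^ carry[i]) << i       # sum bits
    return s & mask, carry[width]         # (sum, carry-out)

assert kogge_stone_add(3, 1, width=2) == (0, 1)     # 3 + 1 wraps in 2 bits
```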
    Unified Parallel Systolic Multiplier Over GF(2^m)
    Chiou-Yng Lee, Yung-Hui Chen, Che-Wun Chiou, and Jim-Min Lin
    Journal of Data Acquisition and Processing, 2007, 22 (1): 28-38 . 
    Abstract   PDF(427KB) ( 5895 )  
    In general, there are three popular basis representations for elements in ${\it GF}(2^{m})$: the standard (canonical, polynomial) basis, the normal basis, and the dual basis. Each basis representation has its distinct advantages and its own associated multiplication architectures. In this paper, we present a unified systolic multiplication architecture, based on Hankel matrix-vector multiplication, for the various basis representations. We show that multiplication in each of these bases can be performed by Hankel matrix-vector multiplications. A comparison with existing and similar structures shows that the proposed architectures perform well in both space and time complexity.
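    For readers less familiar with the arithmetic being systolized, the short Python sketch below computes a standard (polynomial) basis product in ${\it GF}(2^{m})$ by shift-and-reduce. It only shows what the multiplier computes; the mapping of such products onto Hankel matrix-vector systolic arrays, which is the paper's contribution, is not reproduced, and the example reduction polynomial is chosen only for illustration.

```python
def gf2m_mul(a, b, poly, m):
    """Polynomial-basis multiplication in GF(2^m).
    'poly' is the irreducible reduction polynomial including the x^m term,
    e.g. m=8 and poly=0x11B for the field used by AES."""
    r = 0
    for i in range(m):                    # carry-less multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * m - 2, m - 1, -1): # reduce modulo the irreducible polynomial
        if (r >> i) & 1:
            r ^= poly << (i - m)
    return r

print(hex(gf2m_mul(0x57, 0x83, 0x11B, 8)))   # 0xc1, the classic GF(2^8) example
```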
    Bounded Model Checking of CTL^*
    Zhi-Hong Tao, Cong-Hua Zhou, Zhong Chen, and Li-Fu Wang
    Journal of Data Acquisition and Processing, 2007, 22 (1): 39-43 . 
    Abstract   PDF(293KB) ( 5607 )  
    Bounded Model Checking has recently been introduced as an efficient verification method for reactive systems. This technique reduces model checking of linear temporal logic to propositional satisfiability. In this paper we first present how quantified Boolean decision procedures can replace BDDs. We introduce a bounded model checking procedure for the temporal logic CTL* which reduces model checking to the satisfiability of quantified Boolean formulas. Our new technique avoids the space blow-up of BDDs and extends the concept of bounded model checking.
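    As a toy illustration of the bounding idea (not of the paper's QBF encoding for CTL*), the sketch below unrolls a transition relation for k steps and asks whether some state on a length-k path violates a safety property; the brute-force search over assignments stands in for a SAT/QBF solver. The 2-bit counter and its "bad" state are invented for the example.

```python
from itertools import product

def bmc_reaches_bad(init, trans, bad, nbits, k):
    """Check whether init(s0) & trans(s0,s1) & ... & trans(s_{k-1},s_k) & bad(s_j)
    is satisfiable for some j <= k, by brute force over all paths."""
    states = list(product([0, 1], repeat=nbits))
    for path in product(states, repeat=k + 1):
        if not init(path[0]):
            continue
        if all(trans(path[i], path[i + 1]) for i in range(k)):
            if any(bad(s) for s in path):
                return True
    return False

# 2-bit counter that increments modulo 4; the "bad" state is the value 3
init = lambda s: s == (0, 0)
trans = lambda s, t: (s[0] + 2 * s[1] + 1) % 4 == t[0] + 2 * t[1]
bad = lambda s: s == (1, 1)
print(bmc_reaches_bad(init, trans, bad, nbits=2, k=3))   # True at bound 3
```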
    CSchema: A Downgrading Policy Language for XML Access Control
    Dong-Xi Liu
    Journal of Data Acquisition and Processing, 2007, 22 (1): 44-53 . 
    Abstract   PDF(396KB) ( 4890 )  
    The problem of regulating access to XML documents has attracted much attention from both the academic and industry communities. In existing approaches, the XML elements specified by access policies are either accessible or inaccessible according to their sensitivity. However, in some cases, the original XML elements are sensitive and inaccessible, but after being processed in some appropriate way, the results become insensitive and thus accessible. This paper proposes a policy language to accommodate such cases, which can express downgrading operations on sensitive data in XML documents through explicit calculations on them. The proposed policy language is called {\it calculation-embedded schema} (CSchema); it extends ordinary schema languages with a {\it protection type} for protecting sensitive data and specifying downgrading operations. The CSchema language has a type system that guarantees the type correctness of the embedded calculation expressions; moreover, this type system also generates a security view after type checking a CSchema policy. Access policies specified by CSchema are enforced by a validation procedure, which produces released documents containing only the accessible data by validating the protected documents against CSchema policies. These released documents are then ready to be accessed by, for instance, XML query engines. By incorporating this validation procedure, other XML processing technologies can use CSchema as their access control module.
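    The following small Python sketch illustrates the downgrading idea only: an exact salary is sensitive, but the result of a calculation over it (a coarse salary band) may be released. The element names, the banding rule, and the enforcement function are hypothetical; CSchema's actual policy syntax, protection types, and type system are not shown.

```python
import xml.etree.ElementTree as ET

def downgrade(doc):
    """Hypothetical enforcement step: replace each sensitive <salary> element
    with a releasable <salaryBand> computed from it, producing the released document."""
    root = ET.fromstring(doc)
    for emp in root.findall("employee"):
        salary = emp.find("salary")
        band = "high" if int(salary.text) >= 100000 else "standard"
        salary.tag = "salaryBand"         # downgraded element replaces the sensitive one
        salary.text = band
    return ET.tostring(root, encoding="unicode")

doc = "<staff><employee><name>Ann</name><salary>120000</salary></employee></staff>"
print(downgrade(doc))                     # exact figure replaced by an accessible band
```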
    Two-Factor Cancelable Biometrics Authenticator
    Ying-Han Pang, Andrew T. B. J., and David N. C. L.
    Journal of Data Acquisition and Processing, 2007, 22 (1): 54-59 . 
    Abstract   PDF(362KB) ( 5089 )  
    Biometrics-based authentication systems offer the advantages of high reliability and accuracy. However, contemporary authentication systems are vulnerable to compromise: if biometric data is compromised, it cannot be replaced and is rendered unusable. In this paper, a cancelable biometrics-based authenticator is proposed to solve this irrevocability issue. The proposed approach is a two-factor authentication system, which requires both random data and a facial feature in order to access the system. In this system, tokenized pseudo-random data is coupled with a moment-based facial feature via an inner product algorithm. The output of the product is then discretized to generate a set of private binary code, coined the 2factor-Hashing code, which acts as the verification key. If this biometrics-based verification key is compromised, a new one can be issued by substituting a different set of random numbers via token replacement, and the compromised key is rendered completely useless. This feature offers an extra protection layer against biometrics fabrication since the verification code is replaceable. Experimental results demonstrate that the proposed system achieves a zero Equal Error Rate, with a clear separation between the genuine and impostor distributions.
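    A minimal sketch of the two-factor construction, under stated assumptions: a token-seeded pseudo-random matrix is combined with the facial feature vector by inner products, and the result is thresholded into a binary verification key. Feature extraction (the moment-based features) and the exact discretisation used in the paper are not reproduced; the seed values, dimensions, and function names below are illustrative.

```python
import numpy as np

def two_factor_code(feature, token_seed, n_bits=64):
    """Project the feature onto token-seeded random directions (inner products)
    and threshold to a binary code that serves as the verification key."""
    rng = np.random.default_rng(token_seed)          # tokenized pseudo-random data
    R = rng.standard_normal((n_bits, feature.size))
    projection = R @ feature                          # inner products
    return (projection > 0).astype(np.uint8)          # discretized verification key

feature = np.random.rand(32)                          # stand-in for a facial feature vector
code = two_factor_code(feature, token_seed=12345)
# if the code is compromised, issue a new token and re-enrol the same biometric
new_code = two_factor_code(feature, token_seed=67890)
```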
    Higher-Level Hardware Synthesis of the KASUMI Algorithm
    Issam W. Damaj
    Journal of Data Acquisition and Processing, 2007, 22 (1): 60-70 . 
    Abstract   PDF(443KB) ( 5737 )  
    Programmable Logic Devices (PLDs) continue to grow in size and currently contain several million gates. At the same time, research effort is going into higher-level hardware synthesis methodologies for reconfigurable computing that can exploit PLD technology. In this paper, we explore the effectiveness of, and extend, one such formal methodology in the design of massively parallel algorithms. We take a step-wise refinement approach to the development of correct reconfigurable hardware circuits from formal specifications. A functional programming notation is used for specifying algorithms and for reasoning about them. The specifications are realised through a combination of function decomposition strategies, data refinement techniques, and off-the-shelf refinements based upon higher-order functions. The off-the-shelf refinements are inspired by the operators of Communicating Sequential Processes (CSP) and map easily to programs in Handel-C (a hardware description language). The Handel-C descriptions are directly compiled into reconfigurable hardware. The practical realisation of this methodology is evidenced by a case study of the third-generation mobile communication security algorithms. The investigated algorithm is the KASUMI block cipher. In this paper, we obtain several hardware implementations with different performance characteristics by applying different refinements to the algorithm. The developed designs are compiled and tested on Celoxica's RC-1000 reconfigurable computer with its 2-million-gate Virtex-E FPGA. Performance analysis and evaluation of these implementations are included.
    Breaking and Repairing Trapdoor-Free Group Signature Schemes from Asiacrypt 2004
    Xin-Yi Huang, Willy Susilo, Yi Mu, and Fu-Tai Zhang
    Journal of Data Acquisition and Processing, 2007, 22 (1): 71-74 . 
    Abstract   PDF(298KB) ( 4930 )  
    Group signature schemes allow a member of a group to sign messages anonymously on behalf of the group. In case of later dispute, a designated group manager can revoke the anonymity and identify the originator of a signature. In Asiacrypt 2004, Nguyen and Safavi-Naini proposed a group signature scheme that has constant-sized public key and signature length and, more importantly, does {\em not} require a trapdoor. Their scheme is very efficient and the sizes of its signatures are smaller than those of the other existing schemes. In this paper, we point out that Nguyen and Safavi-Naini's scheme is insecure. In particular, our cryptanalysis shows that the scheme allows a non-member of the group to sign on behalf of the group, and the resulting signature convinces any third party that a member of the group has indeed generated such a signature, although none of the members has done so. Therefore, in case of dispute, even the group manager cannot identify who has signed the message. A new scheme that does not suffer from this problem is also provided.
    Relationship Between a Non-Malleable Commitment Scheme and a Modified Selective Decommitment Scheme
    Hai-Xia Xu and Bao Li
    Journal of Data Acquisition and Processing, 2007, 22 (1): 75-78 . 
    Abstract   PDF(275KB) ( 4861 )  
    The notion of commitment is one of the most important primitives in cryptography. To meet various needs, many kinds of commitment schemes have been proposed, among which the non-malleable commitment scheme and the selective decommitment scheme are important and in general use. The increasing security demands suggest a closer look at the relationship between the two schemes. For the convenience of our proof, a new definition for the selective decommitment scheme is proposed, named the modified selective decommitment scheme. We deduce the security relation that a non-malleable commitment scheme implies a modified selective decommitment scheme, but the reverse is not true.
    Improved Collision Attack on Hash Function MD5
    Jie Liang and Xue-Jia Lai
    Journal of Data Acquisition and Processing, 2007, 22 (1): 79-87 . 
    Abstract   PDF(363KB) ( 5524 )  
    In this paper, we present a fast attack algorithm to find two-block collisions of the hash function MD5. The algorithm is based on the two-block collision differential path of MD5 that was presented by Wang {\it et al}. at EUROCRYPT 2005. We found that the derived conditions for the desired collision differential path were not sufficient to guarantee that the path holds, and that some conditions could be modified to enlarge the collision set. By using the technique of small-range searching and omitting the computing steps that check the characteristics in the attack algorithm, we can speed up the attack on MD5 considerably. Compared with the Advanced Message Modification technique presented by Wang {\it et al}., the small-range searching technique can correct 4 more conditions for the first iteration differential and 3 more conditions for the second iteration differential, thus improving the probability of finding collisions and reducing the complexity. The whole attack on MD5 can be accomplished within 5 hours using a PC with a Pentium4 1.70GHz CPU.
    Secure Two-Party Point-Circle Inclusion Problem
    Yong-Long Luo, Liu-Sheng Huang, and Hong Zhong
    Journal of Data Acquisition and Processing, 2007, 22 (1): 88-91 . 
    Abstract   PDF(285KB) ( 5692 )  
    Privacy-preserving computational geometry is a special secure multi-party computation and has many applications. Previous protocols for determining whether a point is inside a circle are not secure enough. We present a two-round protocol for computing the distance between two private points and develop a more efficient protocol for the point-circle inclusion problem based on the distance protocol. In comparison with previous solutions, our protocol not only is more secure but also reduces the number of communication rounds and the number of modular multiplications significantly.
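    Stripped of the cryptography, the predicate that the protocol evaluates is the ordinary squared-distance test below; in the actual two-party protocol the point and the circle are private inputs and neither side learns the other's values, and the two-round distance sub-protocol is not shown here.

```python
def inside_circle(px, py, cx, cy, r):
    """Plaintext version of the point-circle inclusion test: compare the squared
    distance between the point and the circle centre with the squared radius."""
    return (px - cx) ** 2 + (py - cy) ** 2 <= r ** 2

print(inside_circle(1, 2, 0, 0, 3))   # True: the point lies inside the circle
```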
    Attack on Digital Multi-Signature Scheme Based on Elliptic Curve Cryptosystem
    Duo Liu, Ping Luo, and Yi-Qi Dai
    Journal of Data Acquisition and Processing, 2007, 22 (1): 92-94 . 
    Abstract   PDF(232KB) ( 5450 )  
    The concept of multisignature, in which multiple signers can cooperate to sign the same message and any verifier can verify the validity of the multi-signature, was first introduced by Itakura and Nakamura. Several multisignature schemes have been proposed since. Chen {\it et al}. recently proposed a new digital multi-signature scheme based on the elliptic curve cryptosystem. In this paper, we show that their scheme is insecure, as it is vulnerable to so-called active attacks, such as the substitution of a "false" public key for a "true" one in a key directory or during transmission. The attacker can then produce a legal signature on a message that other users have signed, and forge a signature himself that is accepted by the verifier.
    A New Public-Key Encryption Scheme
    Hai-Bo Tian, Xi Sun, and Yu-Min Wang
    Journal of Data Acquisition and Processing, 2007, 22 (1): 95-102 . 
    Abstract   PDF(320KB) ( 5527 )  
    This paper proposes a new public-key encryption scheme which removes one element from the public-key tuple of the original Cramer-Shoup scheme. As a result, a ciphertext is not a quadruple but a triple, at the cost of a strong assumption, the third version of the knowledge of exponent assumption (KEA3). Under the KEA3, decisional Diffie-Hellman (DDH), and a variant of target collision resistance (TCRv) assumptions, the new scheme is proved secure against indistinguishable adaptive chosen ciphertext attack (IND-CCA2). The scheme is as efficient as the Damg{\aa}rd ElGamal (DEG) scheme when it makes use of a well-known algorithm for products of exponentiations. The DEG scheme was recently proved IND-CCA1 secure by Bellare and Palacio in ASIACRYPT 2004 under another strong assumption. In addition to the IND-CCA2 security of our scheme, we also believe that the security proof itself provides useful insight into ElGamal-based encryption schemes that are secure in the real world.
    Efficient ID-Based Multi-Decrypter Encryption with Short Ciphertexts
    Zhen-Chuan Chai, Zhen-Fu Cao, and Yuan Zhou
    Journal of Data Acquisition and Processing, 2007, 22 (1): 103-108 . 
    Abstract   PDF(311KB) ( 5072 )  
    Multi-decrypter encryption is a typical application in multi-user cryptographic branches. In multi-decrypter encryption, a message is encrypted under multiple decrypters' public keys in such a way that only when all the decrypters cooperate can the message be read. However, trivial implementation of multi-decrypter encryption using standard approaches leads to heavy computation costs and a long ciphertext which grows as the receiver group expands. This consumes much precious bandwidth in wireless environments, such as mobile ad hoc networks. In this paper, we propose an efficient identity-based multi-decrypter encryption scheme, which needs only one or zero (if precomputed) pairing computations and whose ciphertext contains only three group elements regardless of the number of receivers. Moreover, we give a formal security definition for the scheme, prove the scheme to be chosen ciphertext secure in the random oracle model, and discuss how to modify the scheme to resist chosen ciphertext attack.
    Chameleon Hashes Without Key Exposure Based on Factoring
    Wei Gao, Xue-Li Wang, and Dong-Qing Xie
    Journal of Data Acquisition and Processing, 2007, 22 (1): 109-113 . 
    Abstract   PDF(291KB) ( 5224 )  
    Chameleon hash is the main primitive for constructing a chameleon signature scheme, which provides non-repudiation and non-transferability simultaneously. However, the initial chameleon hash schemes suffer from the key exposure problem: non-transferability is based on the unsound assumption that the designated receiver is willing to abuse his private key regardless of its exposure. Recently, several key-exposure-free chameleon hashes have been constructed based on the RSA assumption and the SDH (strong Diffie-Hellman) assumption. In this paper, we propose a factoring-based chameleon hash scheme which is proven to enjoy all the advantages of the previous schemes. To support it, we propose a variant of the Rabin signature scheme which is proven secure against a new type of attack in the random oracle model.
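    For intuition about the primitive (not the paper's factoring-based construction), the toy Python sketch below implements the classical discrete-log chameleon hash: anyone can evaluate the hash, while the holder of the trapdoor can find collisions at will. The tiny parameters are purely illustrative, and this simple version does not address the key exposure problem that the paper solves.

```python
# Classical discrete-log chameleon hash, shown only for intuition.
p, q, g = 23, 11, 4          # toy parameters: q divides p-1 and g has order q mod p
x = 7                        # receiver's trapdoor key
y = pow(g, x, p)             # receiver's public key

def ch_hash(m, r):
    return (pow(g, m, p) * pow(y, r, p)) % p

def collide(m, r, m2):
    # with the trapdoor x, find r2 such that ch_hash(m2, r2) == ch_hash(m, r)
    return (r + (m - m2) * pow(x, -1, q)) % q        # modular inverse needs Python 3.8+

m, r, m2 = 3, 5, 9
r2 = collide(m, r, m2)
assert ch_hash(m, r) == ch_hash(m2, r2)              # a collision, computable only with x
```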
    An Efficient Handoff Decision Algorithm for Vertical Handoff Between WWAN and WLAN
    Min Liu, Zhong-Cheng Li, and Xiao-Bing Guo
    Journal of Data Acquisition and Processing, 2007, 22 (1): 114-120 . 
    Abstract   PDF(332KB) ( 5664 )  
    Vertical handoff is one significant challenge for mobility management in heterogeneous wireless networks. Compared with horizontal handoff, vertical handoff involves different wireless network technologies that vary widely in terms of bandwidth, delay, coverage area, power consumption, etc. In this paper, we analyze the signal strength model of the mobile node and present a new vertical handoff decision algorithm. This algorithm can adapt to changes in the mobile node's velocity and improve the handoff efficiency significantly. We analyze the algorithm's performance and the effect of different parameters on handoff triggering. In addition, we propose three performance evaluation models and verify the algorithm's feasibility and effectiveness through simulations.
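    A hypothetical, much-simplified trigger rule in the spirit of a velocity-adaptive handoff decision is sketched below: the faster the node moves, the longer the WLAN signal must stay above a threshold before handing off is worthwhile. The thresholds, scaling, and function names are assumptions made for illustration, not the algorithm or parameters of the paper.

```python
def should_handoff_to_wlan(rss_samples, velocity, rss_threshold=-85.0, base_dwell=2.0):
    """rss_samples: list of (timestamp_seconds, rss_dBm) pairs, oldest first.
    Hand off only if the signal has stayed above the threshold for a dwell
    time that grows with the node's velocity (fast nodes leave small cells quickly)."""
    required_dwell = base_dwell * (1.0 + velocity / 5.0)   # seconds, illustrative scaling
    good_since = None
    for t, rss in rss_samples:
        if rss >= rss_threshold:
            if good_since is None:
                good_since = t                             # start of the current good run
        else:
            good_since = None                              # run broken by a weak sample
    return good_since is not None and rss_samples[-1][0] - good_since >= required_dwell
```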
    FloodNet: Coupling Adaptive Sampling with Energy Aware Routing in a Flood Warning System
    Jing Zhou and David De Roure
    Journal of Data Acquisition and Processing, 2007, 22 (1): 121-130 . 
    Abstract   PDF(404KB) ( 5206 )  
    We describe the design of FloodNet, a flood warning system, which uses a grid-based flood predictor model developed by environmental experts to make flood predictions based on readings of water level collected by a set of sensor nodes. To optimize battery consumption, the reporting frequency of sensor nodes is required to be adaptive to local conditions as well as the flood predictor model. We therefore propose an energy aware routing protocol which allows sensor nodes to consume energy according to this need. This system is notable both for the adaptive sampling regime and the methodology adopted in the design of the adaptive behavior, which involved development of simulation tools and very close collaboration with environmental experts.
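    The adaptive reporting idea can be sketched as a simple rule of thumb: report frequently when the water level is changing quickly, and stretch the interval when the river is quiet or the battery is nearly flat. The form of the rule and every constant below are illustrative assumptions, not FloodNet's deployed policy or its grid-based predictor.

```python
def next_report_interval(levels, battery, t_min=60.0, t_max=3600.0):
    """levels: recent water-level readings in metres, oldest first (at least two).
    battery: remaining capacity as a fraction in [0, 1].
    Returns the next reporting interval in seconds."""
    rate = abs(levels[-1] - levels[-2])        # recent change per reporting period
    urgency = min(rate / 0.05, 1.0)            # a 5 cm change maps to maximum urgency
    interval = t_max - urgency * (t_max - t_min)
    if battery < 0.2:                          # conserve an almost-flat battery
        interval *= 2.0
    return min(max(interval, t_min), t_max * 2.0)

print(next_report_interval([1.20, 1.27], battery=0.8))   # rising fast: report often
```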
    An Efficient Data Dissemination Scheme for Spatial Query Processing
    Kwangjin Park, Hyunseung Choo, and Chong-Sun Hwang
    Journal of Data Acquisition and Processing, 2007, 22 (1): 131-134 . 
    Abstract   PDF(291KB) ( 5004 )  
    Due to advances in personal portable devices and wireless communication technologies, Location Dependent Information Services (LDISs) have received a lot of attention from both the industrial and academic communities. In LDISs, it is important to reduce the query response time, since a late query response may contain out-of-date information. In this paper, we study LDISs using a Voronoi diagram. We introduce a new NN search method, called the Exponential Sequence Scheme (ESS), to support NN query processing in periodic broadcast environments. This paper aims to provide research directions towards minimizing both the access latency and the energy consumption of NN-query processing.
    End-to-End Utilization Control for Aperiodic Tasks in Distributed Real-Time Systems
    Yong Liao, Xu-Dong Chen, Guang-Ze Xiong, Qing-Xin Zhu, and Nan Sang
    Journal of Data Acquisition and Processing, 2007, 22 (1): 135-146 . 
    Abstract   PDF(585KB) ( 20417 )  
    An increasing number of {DRTS} (Distributed Real-Time Systems) are employing an end-to-end aperiodic task model. The key challenges of such {DRTS} are guaranteeing utilization on multiple processors to achieve overload protection, and meeting the end-to-end deadlines of aperiodic tasks. This paper proposes an end-to-end utilization control architecture and an {IC-EAT} (Integration Control for End-to-End Aperiodic Tasks) algorithm, which features a distributed feedback loop that dynamically enforces the desired utilization bound on multiple processors. IC-EAT integrates admission control with feedback control, which is able to dynamically determine the QoS (Quality of Service) of incoming tasks and guarantee the end-to-end deadlines of admitted tasks. Then an LQOCM (Linear Quadratic Optimal Control Model) is presented. Finally, experiments demonstrate that, for the end-to-end {DRTS} whose control matrix $\pmb G$ falls into the stable region, the {IC-EAT} is convergent and stable. Moreover, it is capable of providing better QoS guarantees for end-to-end aperiodic tasks and improving the system throughput.
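    At its core the architecture closes a feedback loop around processor utilization. The single-processor sketch below adjusts the admission rate in proportion to the utilization error each sampling period; IC-EAT couples such loops across processors and derives its behaviour from a linear-quadratic optimal control model, neither of which is shown, and the gain, set point, and plant model here are illustrative assumptions.

```python
def control_step(u_measured, u_setpoint, admit_rate, k_p=0.5):
    """Proportional feedback: raise the admission rate when utilization is below
    the set point, cut it back when the processor is overloaded."""
    error = u_setpoint - u_measured
    return max(0.0, admit_rate * (1.0 + k_p * error))

# example: utilization settles towards the 70% set point under a crude plant model
admit_rate, utilization = 1.0, 0.95
for _ in range(10):
    admit_rate = control_step(utilization, 0.70, admit_rate)
    utilization = 0.95 * admit_rate          # assumed plant: load tracks admissions
print(round(utilization, 2))
```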
    3D Morphing Using Strain Field Interpolation
    Han-Bing Yan, Shi-Min Hu, and Ralph R. Martin
    Journal of Data Acquisition and Processing, 2007, 22 (1): 147-155 . 
    Abstract   PDF(424KB) ( 5553 )  
    In this paper, we present a new technique based on strain fields to carry out 3D shape morphing for applications in computer graphics and related areas. Strain is an important geometric quantity used in mechanics to describe the deformation of objects. We apply it in a novel way to analyze and control deformation in morphing. Using position vector fields, the strain field relating source and target shapes can be obtained. By interpolating this strain field between zero and a final desired value we can obtain the position field for intermediate shapes. This method ensures that the 3D morphing process is smooth. Locally, volumes suffer minimal distortion, and no shape jittering or wobbling happens: other methods do not necessarily have these desirable properties. We also show how to control the method so that changes of shape (in particular, size changes) vary linearly with time.
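    A two-dimensional analogue of the core idea, under stated assumptions, is sketched below: instead of blending positions linearly, factor the deformation gradient relating source and target into a rotation and a symmetric stretch and interpolate each from the identity, which avoids the shrinkage artefacts of straight linear blending. The paper works with 3D strain fields and reconstructs the position field from the interpolated strain; that reconstruction step is not shown, and the example matrix is invented.

```python
import numpy as np

def blend_deformation(F, t):
    """Return the intermediate deformation at time t in [0, 1] for a 2x2
    deformation gradient F with det(F) > 0 (no reflection), by interpolating
    its rotation and symmetric stretch factors separately."""
    U, sigma, Vt = np.linalg.svd(F)
    R = U @ Vt                                   # rotation part (polar decomposition)
    S = Vt.T @ np.diag(sigma) @ Vt               # symmetric stretch part
    theta = np.arctan2(R[1, 0], R[0, 0])
    R_t = np.array([[np.cos(t * theta), -np.sin(t * theta)],
                    [np.sin(t * theta),  np.cos(t * theta)]])
    S_t = (1.0 - t) * np.eye(2) + t * S
    return R_t @ S_t

F = np.array([[0.0, -2.0],
              [1.0,  0.0]])                      # rotate 90 degrees, stretch one axis by 2
half_way = blend_deformation(F, 0.5)             # well-shaped intermediate deformation
```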
    Visual Simulation of Multiple Unmixable Fluids
    Wen Zheng, Jun-Hai Yong, and Jean-Claude Paul
    Journal of Data Acquisition and Processing, 2007, 22 (1): 156-160 . 
    Abstract   PDF(356KB) ( 5381 )  
    We present a novel grid-based method for simulating multiple unmixable fluids moving and interacting. Unlike previous methods that can only represent the interface between two fluids (usually between liquid and gas), this method can handle an arbitrary number of fluids through multiple independent level sets coupled with a constraint condition. To capture the fluid surface more accurately, we extend the particle level set method to a multi-fluid version. It shares the advantages of the particle level set method and has the ability to track the interfaces of multiple fluids. To handle the dynamic behavior of different fluids existing together, we use a multiphase fluid formulation based on a smooth weight function.
    A Markov Random Field Model-Based Approach to Natural Image Matting
    Sheng-You Lin and Jiao-Ying Shi
    Journal of Data Acquisition and Processing, 2007, 22 (1): 161-167 . 
    Abstract   PDF(481KB) ( 5807 )  
    This paper proposes a Markov Random Field (MRF) model-based approach to natural image matting for complex scenes. After the trimap for matting is given manually, the unknown region is roughly segmented into several joint sub-regions. In each sub-region, we partition the colors of neighboring background or foreground pixels into several clusters in RGB color space and assign a matting label to each unknown pixel. All the labels are modelled as an MRF and the matting problem is then formulated as a maximum a posteriori (MAP) estimation problem. Simulated annealing is used to find the optimal MAP estimate. Better results can be obtained with the same user interactions when images are complex. Results of natural image matting experiments performed on complex images using this approach are shown and compared in this paper.
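    The MAP search can be illustrated with a generic simulated-annealing sweep over an MRF labelling, shown below under assumptions: the energy is a user-supplied data term plus a Potts smoothness term, and the Metropolis rule accepts uphill moves with a temperature-controlled probability. The matting-specific costs (the RGB colour clusters and the alpha estimation) from the paper are not reproduced.

```python
import math, random

def anneal_labels(labels, data_cost, neighbors, beta=1.0, t0=3.0, cooling=0.97, sweeps=200):
    """labels: initial labelling (list) whose values cover all candidate labels.
    data_cost(i, lab): per-site data term; neighbors(i): indices adjacent to site i.
    Returns an approximate MAP labelling for the data + Potts smoothness energy."""
    def local_energy(i, lab):
        return data_cost(i, lab) + beta * sum(lab != labels[j] for j in neighbors(i))

    t = t0
    label_set = sorted(set(labels))
    for _ in range(sweeps):
        for i in range(len(labels)):
            old, new = labels[i], random.choice(label_set)
            delta = local_energy(i, new) - local_energy(i, old)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                labels[i] = new                      # Metropolis acceptance
        t *= cooling                                 # annealing schedule
    return labels
```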
Journal of Data Acquisition and Processing
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190, P.R. China
E-mail: info@sjcjycl.cn
Copyright ©2015 JCST, All Rights Reserved