Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      15 May 2007, Volume 22, Issue 3
    Articles
    An Empirical Study on the Impact of Automation on the Requirements Analysis Process
    Giuseppe Lami and Robert W. Ferguson
    Journal of Data Acquisition and Processing, 2007, 22 (3): 338-347 . 
    Abstract   PDF(359KB) ( 6517 )  
    Requirements analysis is an important phase in a software project. The analysis is often performed informally by specialists who review documents looking for ambiguities, technical inconsistencies and incomplete parts. Automation is still far from being applied in requirements analysis, mainly because natural languages are informal and therefore difficult to process automatically. Only a few tools can analyse such texts. One of them, called QuARS, was developed by the Istituto di Scienza e Tecnologie dell'Informazione and can analyse texts for ambiguity. This paper describes how QuARS was used in a formal empirical experiment to assess the effectiveness and efficiency of automation in the requirements review process of a software company.
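    As a toy illustration of the kind of lexical check an ambiguity-detection tool can perform on requirements text, the sketch below flags vague indicator words in a requirement sentence; the word list is invented here and is not QuARS's actual dictionary.

```python
# Toy lexical ambiguity check on requirements text; the indicator word list is
# invented for illustration and is not QuARS's actual dictionary.
import re

VAGUE_TERMS = {"adequate", "appropriate", "fast", "flexible", "some",
               "user-friendly", "etc"}

def flag_ambiguities(requirement):
    """Return the vague indicator words found in a single requirement sentence."""
    words = re.findall(r"[a-z\-]+", requirement.lower())
    return sorted(set(words) & VAGUE_TERMS)

print(flag_ambiguities("The system shall respond fast and provide adequate logging."))
# -> ['adequate', 'fast']
```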
    Developing Project Duration Models in Software Engineering
    Pierre Bourque, Serge Oligny, Alain Abran, and Bertrand Fournier
    Journal of Data Acquisition and Processing, 2007, 22 (3): 348-357 . 
    Abstract   PDF(362KB) ( 4844 )  
    Based on the empirical analysis of data contained in the International Software Benchmarking Standards Group (ISBSG) repository, this paper presents software engineering project duration models based on project effort. Duration models are built for the entire dataset and for subsets of projects developed for personal computer, mid-range and mainframe platforms. Duration models are also constructed for projects requiring fewer than 400 person-hours of effort and for projects requiring more than 400 person-hours of effort. The usefulness of adding the maximum number of assigned resources as a second independent variable to explain duration is also analyzed. The opportunity to build duration models directly from project functional size in function points is investigated as well.
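    As a rough illustration of the kind of effort-to-duration model the abstract refers to, the sketch below fits duration = a * effort^b by least squares in log-log space; the power-law form, the variable names, and the made-up sample values are assumptions and are not taken from the ISBSG data or the paper.

```python
# Minimal sketch of a duration-vs-effort model; the log-log regression form and
# the example numbers are assumptions, not the paper's actual models.
import numpy as np

def fit_duration_model(effort_hours, duration_months):
    """Fit duration = a * effort^b by ordinary least squares in log-log space."""
    x = np.log(np.asarray(effort_hours, dtype=float))
    y = np.log(np.asarray(duration_months, dtype=float))
    b, log_a = np.polyfit(x, y, deg=1)          # slope and intercept in log space
    return np.exp(log_a), b

def predict_duration(a, b, effort_hours):
    return a * np.asarray(effort_hours, dtype=float) ** b

# Example with made-up numbers: fit on a small sample, then predict.
a, b = fit_duration_model([200, 400, 1200, 5000], [2.0, 3.1, 5.5, 11.0])
print(predict_duration(a, b, [300, 2000]))
```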
    On the Estimation of the Functional Size of Software from Requirements Specifications
    Nelly Condori-Fernández, Silvia Abrahão, and Oscar Pastor
    Journal of Data Acquisition and Processing, 2007, 22 (3): 358-370 . 
    Abstract   PDF(498KB) ( 4964 )  
    This paper introduces a measurement procedure, called RmFFP, which describes a set of operations for modelling and estimating the size of object-oriented software systems from high-level specifications using the OO-Method Requirement Model. OO-Method is an automatic software production method. The contribution of this work is to systematically define a set of rules that allows estimating the functional size at an early stage of the software production process, in accordance with COSMIC-FFP. To do this, we describe the design, the application, and the analysis of the proposed measurement procedure following the steps of a process model for software measurement. We also report initial results on the evaluation of RmFFP in terms of its reproducibility.
    Software Project Effort Estimation Based on Multiple Parametric Models Generated Through Data Clustering
    Juan J. Cuadrado Gallego, Daniel Rodríguez, Miguel Angel Sicilia, Miguel Garre Rubio, and Angel García Crespo
    Journal of Data Acquisition and Processing, 2007, 22 (3): 371-378 . 
    Abstract   PDF(539KB) ( 4465 )  
    Parametric software effort estimation models usually consist of only a single mathematical relationship. With the advent of software repositories containing data from heterogeneous projects, such models suffer from poor fit and poor predictive accuracy. One possible way to alleviate this problem is to use a set of mathematical equations obtained by dividing the historical project dataset into subsets, called partitions, according to different parameters. In turn, partitions are divided into clusters that serve as the basis for more accurate models. In this paper, we describe the process, tool and results of this approach through a case study using a publicly available repository, ISBSG. The results suggest that the technique is an adequate extension of existing single-expression models, without making the estimation process much more complex than one that uses a single estimation model. A tool to support the process is also presented.
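    The sketch below illustrates the general idea of per-cluster parametric models: group projects into clusters and fit a separate effort = a * size^b equation within each cluster. The use of k-means and these attribute names are illustrative assumptions; the paper's own partitioning procedure may differ.

```python
# Hedged illustration of per-cluster parametric estimation models; k-means and
# the attribute names are assumptions, not the paper's exact procedure.
import numpy as np
from sklearn.cluster import KMeans

def fit_per_cluster_models(size_fp, effort_hours, n_clusters=3):
    """Cluster projects in log space, then fit effort = a * size^b per cluster."""
    X = np.log(np.column_stack([size_fp, effort_hours]))
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    models = {}
    for c in range(n_clusters):
        xs, ys = X[labels == c, 0], X[labels == c, 1]
        b, log_a = np.polyfit(xs, ys, deg=1)    # per-cluster log-log regression
        models[c] = (np.exp(log_a), b)          # effort ~ a * size^b within cluster c
    return models, labels
```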
    Component Dependency in Object-Oriented Software
    Li-Guo Yu and Srini Ramaswamy
    Journal of Data Acquisition and Processing, 2007, 22 (3): 379-386 . 
    Abstract   PDF(359KB) ( 4615 )  
    Component dependency is an important software measure. It is directly related to software understandability, maintainability, and reusability. Two important parameters in describing component dependency are the type of coupling between two components and the type of the dependent component. Depending on the type of coupling and the type of the dependent component, there can be different effects on component maintenance and component reuse. In this paper, we divide dependent components into three types. We then classify various component dependencies and analyze their effects on maintenance and reuse. Based on the classification, we present a dependency metric and validate it on 11 open-source Java components. Our study shows that a strong correlation exists between the dependency measurement of a component and the effort required to reuse it. This indicates that the classification of component dependency and the suggested metric could be further used to represent other external software quality factors.
    Improving Software Quality Prediction by Noise Filtering Techniques
    Taghi M. Khoshgoftaar and Pierre Rebours
    Journal of Data Acquisition and Processing, 2007, 22 (3): 387-396 . 
    Abstract   PDF(362KB) ( 4945 )  
    The accuracy of machine learners is affected by the quality of the data on which they are induced. In this paper, the quality of the training dataset is improved by removing instances detected as noisy by the Partitioning Filter. The fit dataset is first split into subsets, and different base learners are induced on each of these splits. The predictions are combined in such a way that an instance is identified as noisy if it is misclassified by a certain number of base learners. Two versions of the Partitioning Filter are used: the Multiple-Partitioning Filter and the Iterative-Partitioning Filter. The number of instances removed by the filters is tuned by the filter's voting scheme and the number of iterations. The primary aim of this study is to compare the predictive performance of the final models built on the filtered and the unfiltered training datasets. A case study of software measurement data from a high-assurance software project is performed. It is shown that the predictive performance of models built on the filtered fit datasets and evaluated on a noisy test dataset is generally better than that of models built on the noisy (unfiltered) fit dataset. However, the predictive performance obtained with certain aggressive filters is affected by the presence of noise in the evaluation dataset.
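    The sketch below illustrates a generic partitioning-style noise filter of the kind the abstract describes: split the fit data into partitions, induce one base learner per partition, and flag an instance as noisy when at least a threshold number of learners misclassify it. The choice of decision trees and the voting threshold are assumptions, not the paper's exact configuration.

```python
# Generic partitioning-style noise filter sketch; decision trees and the voting
# threshold are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def partitioning_filter(X, y, n_partitions=5, votes_needed=3, random_state=0):
    """X: 2-D feature array, y: 1-D label array; returns filtered data and a noise mask."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(random_state)
    parts = np.array_split(rng.permutation(len(y)), n_partitions)
    misclassified_votes = np.zeros(len(y), dtype=int)
    for part in parts:
        clf = DecisionTreeClassifier(random_state=random_state).fit(X[part], y[part])
        misclassified_votes += (clf.predict(X) != y).astype(int)
    noisy = misclassified_votes >= votes_needed   # majority-style voting scheme
    return X[~noisy], y[~noisy], noisy
```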
    Improving Fault Detection in Modified Code --- A Study from the Telecommunication Industry
    Piotr Tomaszewski, Lars Lundberg, and Håkan Grahn
    Journal of Data Acquisition and Processing, 2007, 22 (3): 397-409 . 
    Abstract   PDF(588KB) ( 4466 )  
    Many software systems are developed in a number of consecutive releases. In each release, not only is new code added, but existing code is often modified as well. In this study we show that the modified code can be an important source of faults. Faults are widely recognized as one of the major cost drivers in software projects. Therefore, we look for methods that improve fault detection in the modified code. We propose and evaluate a number of prediction models that increase the efficiency of fault detection. To build and evaluate our models we use data collected from two large telecommunication systems produced by Ericsson. We evaluate the performance of our models by applying them both to a different release of the system than the one they were built on and to a different system. The performance of our models is compared to that of the theoretical best model, to a simple model based on size, and to analyzing the code in random order (not using any model). We find that the use of our models provides a significant improvement over not using any model at all and over using a simple model based on class size. The gain offered by our models corresponds to 38% to 57% of the theoretical maximum gain.
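    As a hedged illustration of how a figure like "38% to 57% of the theoretical maximum gain" can be computed, the sketch below compares the faults found within a fixed inspection budget when classes are ordered by a model's prediction, by their true fault counts (the theoretical best), and at random; the budget fraction and evaluation details are assumptions, not the paper's exact procedure.

```python
# Illustrative gain-relative-to-theoretical-maximum computation; the inspection
# budget and evaluation details are assumptions.
import numpy as np

def faults_found(order, faults, sizes, budget_fraction=0.2):
    """Faults detected when inspecting classes in `order` until the size budget is spent."""
    budget = budget_fraction * sizes.sum()
    spent, found = 0.0, 0
    for i in order:
        if spent + sizes[i] > budget:
            break
        spent += sizes[i]
        found += faults[i]
    return found

def relative_gain(predicted, faults, sizes, budget_fraction=0.2, seed=0):
    faults, sizes = np.asarray(faults), np.asarray(sizes, dtype=float)
    model_order = np.argsort(-np.asarray(predicted))   # most fault-prone first
    best_order = np.argsort(-faults)                   # theoretical best ordering
    random_order = np.random.default_rng(seed).permutation(len(faults))
    m = faults_found(model_order, faults, sizes, budget_fraction)
    b = faults_found(best_order, faults, sizes, budget_fraction)
    r = faults_found(random_order, faults, sizes, budget_fraction)
    return (m - r) / (b - r)        # share of the theoretical maximum gain over random
```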
    A Three-Layer Model for Business Processes --- Process Logic, Case Semantics and Workflow Management
    Chong-Yi Yuan, Wen Zhao, Shi-Kun Zhang, and Yu Huang
    Journal of Data Acquisition and Processing, 2007, 22 (3): 410-425 . 
    Abstract   PDF(584KB) ( 5103 )  
    Workflow management aims at controlling, monitoring, optimizing and supporting business processes. Well-designed formal models facilitate such management since they provide explicit representations of business processes as the basis for computerized analysis, verification and execution. Petri Nets have been recognized as the most suitable candidate for workflow modeling, and formal models based on Petri Nets have been proposed accordingly; among them, the WF-net by Aalst is the most popular. But the WF-net has turned out to be conceptually chaotic, as will be illustrated in this paper with an example from Aalst's book. This paper proposes a series of models for the description and analysis of business processes at conceptually different hierarchical layers. Analytic goals and methods at these layers are also discussed. The underlying structure shared by all these models is the SYNCHRONIZER, which is designed under the guidance of the synchrony theory of GNT (General Net Theory) and serves as the conceptual foundation of formal workflow models. Structurally, synchronizers connect tasks to form a whole; dynamically, they control tasks to achieve synchronization.
    Garbage Collector Verification for Proof-Carrying Code
    Chun-Xiao Lin, Yi-Yun Chen, Long Li, and Bei Hua
    Journal of Data Acquisition and Processing, 2007, 22 (3): 426-437 . 
    Abstract   PDF(682KB) ( 4514 )  
    We present the verification of the machine-level implementation of a conservative variant of the standard mark-sweep garbage collector in a Hoare-style program logic. The specification of the collector is given on a machine-level memory model using separation logic, and is strong enough to preserve the safety property of any common mutator program. Our verification is fully implemented in the Coq proof assistant and can be packed immediately as a foundational proof-carrying code package. Our work makes an important attempt toward building fully certified production-quality garbage collectors.
    An Anti-Counterfeiting RFID Privacy Protection Protocol
    Xiaolan Zhang and Brian King
    Journal of Data Acquisition and Processing, 2007, 22 (3): 438-448 . 
    Abstract   PDF(392KB) ( 4790 )  
    The privacy problem of many RFID systems has been extensively studied. Yet integrity in RFID systems has not received as much attention as it has in regular computer systems. When evaluating an identification protocol for an RFID system for anti-counterfeiting, it is important to consider integrity issues. Moreover, many RFID systems are accessed by parties at multiple levels of trust, which makes comprehensive integrity protection even harder. In this paper, we first propose an integrity model for RFID protocols. We then use the model to analyze the integrity problems in the Squealing Euros protocol. Squealing Euros was proposed by Juels and Pappu for RFID-enabled banknotes that support anti-forgery and lawful tracing yet preserve individuals' privacy. We analyze its integrity, discuss the problems that arise, and propose solutions to them. We then construct an improved protocol with integrity protection for law enforcement, which includes an unforgeable binding between the banknote serial number and the RF ciphertext readable only to law enforcement. The same protocol can be applied in many other applications that require a privacy-protecting anti-counterfeiting mechanism.
    Impossible Differential Cryptanalysis of Reduced-Round ARIA and Camellia
    Wen-Ling Wu, Wen-Tao Zhang, and Deng-Guo Feng
    Journal of Data Acquisition and Processing, 2007, 22 (3): 449-456 . 
    Abstract   PDF(401KB) ( 5175 )  
    This paper studies the security of the block ciphers ARIA and Camellia against impossible differential cryptanalysis. Our work improves the best impossible differential cryptanalysis of ARIA and Camellia known so far. The designers of ARIA expected that no impossible differentials exist for 4-round ARIA. However, we found some nontrivial 4-round impossible differentials, which may lead to a possible attack on 6-round ARIA. Moreover, we found some nontrivial 8-round impossible differentials for Camellia, whereas only 7-round impossible differentials were previously known. Using the 8-round impossible differentials, we present an attack on 12-round Camellia without the FL/FL^{-1} layers.
    Targeted Local Immunization in Scale-Free Peer-to-Peer Networks
    Xin-Li Huang, Fu-Tai Zou, and Fan-Yuan Ma
    Journal of Data Acquisition and Processing, 2007, 22 (3): 457-468 . 
    Abstract   PDF(519KB) ( 4567 )  
    The power-law node degree distributions of peer-to-peer overlay networks make them extremely robust to random failures but highly vulnerable to intentional targeted attacks. To enhance the attack survivability of these networks, we propose DeepCure, a novel heuristic immunization strategy that conducts decentralized but targeted immunization. Different from existing strategies, DeepCure identifies as immunization targets not only the highly-connected nodes but also the nodes with high availability and/or high link load, with the aim of injecting immunization information into just the right targets to cure. To better trade off cost and efficiency, DeepCure deliberately selects these targets from the 2-local neighborhood, as well as from topologically-remote but semantically-close friends if needed. To remedy the weakness of existing strategies in case of a sudden epidemic outbreak, DeepCure is also coupled with a local-hub-oriented rate-throttling mechanism to enforce proactive rate control. Extensive simulation results show that DeepCure outperforms its competitors, producing a striking increase in network attack tolerance at a lower cost of eliminating viruses or malicious attacks.
    Cryptanalysis of Achterbahn-Version 1 and -Version 2
    Xiao-Li Huang and Chuan-Kun Wu
    Journal of Data Acquisition and Processing, 2007, 22 (3): 469-475 . 
    Abstract   PDF(312KB) ( 4505 )  
    Achterbahn is one of the candidate stream ciphers submitted to eSTREAM, the ECRYPT Stream Cipher Project. The cipher uses a new structure based on several nonlinear feedback shift registers (NLFSRs) and a nonlinear combining output Boolean function. This paper proposes distinguishing attacks on Achterbahn-Version 1 and -Version 2 in both the reduced mode and the full mode. These distinguishing attacks are based on linear approximations of the output functions. On the basis of these linear approximations and the periods of the registers, parity checks with noticeable biases are found, and distinguishing attacks can then be mounted through these biased parity checks. For Achterbahn-Version 1, all three possible choices of the output function are analyzed. Achterbahn-Version 2, the modified version of Achterbahn-Version 1, was designed to avert attacks based on approximations of the output Boolean function. Our attack on Achterbahn-Version 2, which has even lower complexity, shows that Achterbahn-Version 2 cannot prevent attacks based on linear approximations.
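    For background, the sketch below shows the standard back-of-envelope reasoning behind such distinguishers (not the paper's concrete attack): the piling-up lemma combines the biases of independent linear approximations, and a distinguisher needs on the order of 1/bias^2 parity-check samples to detect the imbalance; the component biases in the example are invented.

```python
# Standard piling-up-lemma reasoning behind bias-based distinguishers; the
# example biases are invented and are not the paper's figures.
def combined_bias(biases):
    """Piling-up lemma: bias of the XOR of independent linear approximations."""
    result = 2 ** (len(biases) - 1)
    for eps in biases:
        result *= eps
    return result

def samples_needed(bias):
    """Rough data complexity of a distinguisher exploiting the given bias."""
    return int(1 / bias ** 2)

eps = combined_bias([2 ** -3, 2 ** -3, 2 ** -4])   # hypothetical component biases
print(eps, samples_needed(eps))                     # 2^-8 bias, about 2^16 samples
```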
    Smart Proactive Caching Scheme for Fast Authenticated Handoff in Wireless LAN
    Sin-Kyu Kim, Jae-Woo Choi, Dae-Hun Nyang, Gene-Beck Hahn, and Joo-Seok Song
    Journal of Data Acquisition and Processing, 2007, 22 (3): 476-480 . 
    Abstract   PDF(356KB) ( 4549 )  
    Handoff in IEEE 802.11 requires repeated authentication and key exchange procedures, which make the provision of seamless services in wireless LANs more difficult. To reduce this overhead, proactive caching schemes have been proposed. However, they require too many control packets to deliver the security context information to neighboring access points. Our contribution is two-fold: a significant decrease in the number of control packets needed for proactive caching, and a superior cache replacement algorithm.
    Some Notes on Prime-Square Sequences
    En-Jian Bai and Xiao-Juan Liu
    Journal of Data Acquisition and Processing, 2007, 22 (3): 481-486 . 
    Abstract   PDF(264KB) ( 4699 )  
    The well-known binary Legendre sequences possess good autocorrelation functions and high linear complexity, and are just special cases of much larger families of cyclotomic sequences. Prime-square sequences are a generalization of these Legendre sequences, but the ratio of their linear complexity to their least period tends to zero as the prime grows. However, a relatively straightforward modification can radically improve this situation. The structure and properties, including the linear complexity, minimal polynomial, and autocorrelation function, of these modified prime-square sequences are investigated. The hardware implementation is also considered.
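    For reference, the sketch below generates the classical binary Legendre sequence of odd prime period p using Euler's criterion; setting s_0 = 0 and mapping quadratic residues to 0 and non-residues to 1 is one common convention, not necessarily the one used in the paper.

```python
# Classical binary Legendre sequence of period p (an odd prime), under one
# common convention: s_0 = 0, residues -> 0, non-residues -> 1.
def legendre_sequence(p):
    seq = []
    for i in range(p):
        if i == 0:
            seq.append(0)                       # convention for the i = 0 position
        elif pow(i, (p - 1) // 2, p) == 1:      # Euler's criterion: quadratic residue
            seq.append(0)
        else:
            seq.append(1)                       # quadratic non-residue
    return seq

print(legendre_sequence(7))   # [0, 0, 0, 1, 0, 1, 1] since 1, 2, 4 are residues mod 7
```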
    ID-Based Fair Off-Line Electronic Cash System with Multiple Banks
    Chang-Ji Wang, Yong Tang, and Qing Li
    Journal of Data Acquisition and Processing, 2007, 22 (3): 487-493 . 
    Abstract   PDF(345KB) ( 4309 )  
    ID-based public key cryptography (ID-PKC) has many advantages over certificate-based public key cryptography (CA-PKC) and has drawn researchers' extensive attention in recent years. However, existing electronic cash schemes are constructed under CA-PKC, and to the best of our knowledge no electronic cash scheme under ID-PKC has been proposed so far. It is important to study how to construct electronic cash schemes based on ID-PKC, both from a practical perspective and as a pure research issue. In this paper, we present a simpler and provably secure ID-based restrictive partially blind signature (RPBS), and then propose an ID-based fair off-line electronic cash (ID-FOLC) scheme with multiple banks based on the proposed ID-based RPBS. The proposed ID-FOLC scheme with multiple banks is more efficient than existing electronic cash schemes with multiple banks based on group blind signatures.
Journal of Data Acquisition and Processing
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190, P.R. China

E-mail: info@sjcjycl.cn
 
  Copyright ©2015 JCST, All Rights Reserved