Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  Table of Contents
      05 March 2009, Volume 24 Issue 2   
    Special Issue on Software Engineering for High-Confidence Systems
    Preface
    Shing-Chi Cheung, Hong Mei, and Jian Lv
    Journal of Data Acquisition and Processing, 2009, 24 (2): 181-182. 

    It is our greatest honor to edit this special issue, entitled "Software Engineering for High-Confidence Systems", for the Journal of Data Acquisition and Processing (JCST). Our main objective is to disseminate inspiring work that addresses the problem of developing and managing high-confidence systems. With the increasing use of software systems for enterprise, services, and ubiquitous computing, software becomes complex and difficult to evolve. However, these systems need to be high-confidence, demonstrating dependability, trust, security, and privacy, if they are to be deployed for mission-critical tasks. The design and development of high-confidence software systems have drawn great research interest from the software engineering community.

    ......

    A Rigorous Architectural Approach to Adaptive Software Engineering
    Jeff Kramer, Fellow, ACM, and Jeff Magee
    Journal of Data Acquisition and Processing, 2009, 24 (2): 183-188. 

    The engineering of distributed adaptive software is a complex task which requires a rigorous approach. Software architectural (structural) concepts and principles are highly beneficial in specifying, designing, analysing, constructing and evolving distributed software. A rigorous architectural approach dictates formalisms and techniques that are compositional, components that are context independent, and systems that can be constructed and evolved incrementally. This paper overviews some of the underlying reasons for adopting an architectural approach, including a brief "rational history" of our research work, and indicates how an architectural model can potentially facilitate the provision of self-managed adaptive software systems.
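
    As a toy illustration of this architectural style of reasoning, the following Python sketch (our own construction, not the authors' formalism) models components with provided and required services and checks that an incrementally constructed configuration is complete:

        # Hypothetical sketch of incremental architectural configuration.
        # Component and service names are illustrative, not from the paper.

        class Component:
            def __init__(self, name, provides=(), requires=()):
                self.name = name
                self.provides = set(provides)
                self.requires = set(requires)

        class Architecture:
            def __init__(self):
                self.components = {}
                self.bindings = set()   # (requirer, service, provider)

            def add(self, comp):
                self.components[comp.name] = comp

            def bind(self, requirer, service, provider):
                assert service in self.components[requirer].requires
                assert service in self.components[provider].provides
                self.bindings.add((requirer, service, provider))

            def unbound(self):
                # Required services not yet satisfied; must be empty
                # before the configuration is considered complete.
                bound = {(r, s) for r, s, _ in self.bindings}
                return {(c.name, s) for c in self.components.values()
                        for s in c.requires if (c.name, s) not in bound}

        arch = Architecture()
        arch.add(Component("sensor", provides={"readings"}))
        arch.add(Component("controller", requires={"readings"}))
        arch.bind("controller", "readings", "sensor")
        print(arch.unbound())   # set() -> configuration is complete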

    Software, Software Engineering and Software Engineering Research: Some Unconventional Thoughts
    David Notkin, Fellow, ACM, IEEE
    Journal of Data Acquisition and Processing, 2009, 24 (2): 189-197. 

    Software engineering is broadly discussed as falling far short of expectations. Data and examples are used to justify how software itself is often poor, how the engineering of software leaves much to be desired, and how research in software engineering has not made enough progress to help overcome these weaknesses. However, these data and examples are presented and interpreted in ways that are arguably imbalanced. This imbalance, usually taken at face value, may be distracting the field from making significant progress towards improving the effective engineering of software, a goal the entire community shares. Research dichotomies, which tend to pit one approach against another, often subtly hint that there is a best way to engineer software or a best way to perform research on software. This, too, may be distracting the field from important classes of progress.

    Formalisms to Support the Definition of Processes
    Leon J. Osterweil, Fellow, ACM, Member, IEEE
    Journal of Data Acquisition and Processing, 2009, 24 (2): 198-211. 

    This paper emphasizes the importance of defining processes rigorously, completely, clearly, and in detail in order to support the complex projects that are essential to the modern world. The paper argues that such process definitions provide needed structure and context for the development of effective software systems. The centrality of process is argued by enumerating seven key ways in which processes and their definitions are expected to provide important benefits to society. The paper provides an example of a process formalism that makes good progress towards the difficult goal of being simultaneously rigorous, detailed, broad, and clear. Early experience suggests that these four key characteristics of the formalism do indeed help it deliver the seven key benefits sought from process definitions. Additional research is suggested to gain more insight into the needs in the area of process definition formalisms.
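
    To make the idea of a rigorous, analyzable process definition concrete, here is a minimal hypothetical sketch; the step names and the well-formedness check are our own illustration, not the paper's formalism:

        # Toy process definition: steps declare inputs/outputs and an
        # exception path, and can be checked before execution. Invented.

        steps = [
            {"name": "collect_requirements", "in": [], "out": ["reqs"]},
            {"name": "design", "in": ["reqs"], "out": ["spec"]},
            {"name": "review", "in": ["spec"], "out": ["approved_spec"],
             "on_exception": "design"},   # declared rework path
        ]

        def well_formed(steps):
            produced = set()
            for s in steps:   # every input must come from an earlier step
                if not set(s["in"]) <= produced:
                    return False, s["name"]
                produced |= set(s["out"])
            return True, None

        print(well_formed(steps))   # (True, None)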

    Architecting Fault Tolerance with Exception Handling: Verification and Validation
    Patrick H. S. Brito, Rogerio de Lemos, Cecilia M. F. Rubira, and Eliane Martins
    Journal of Data Acquisition and Processing, 2009, 24 (2): 212-237. 

    When building dependable systems by integrating untrusted software components that were not originally designed to interact with each other, architectural mismatches related to assumptions about their failure behaviour are likely to occur. These mismatches, if not prevented during system design, have to be tolerated at runtime. This paper presents an architectural abstraction based on exception handling for structuring fault-tolerant software systems. This abstraction comprises several components and connectors that promote an existing untrusted software element into an idealised fault-tolerant architectural element. Moreover, it is considered in the context of a rigorous software development approach based on formal methods for representing the structure and behaviour of the software architecture. The proposed approach relies on formal specification and verification for analysing exception propagation and for verifying important dependability properties, such as deadlock freedom, and scenarios of architectural reconfiguration. The formal models are automatically generated by model transformation from UML diagrams: a component diagram representing the system structure, and sequence diagrams representing the system behaviour. Finally, the formal models are also used for generating unit and integration test cases, which are used to assess the correctness of the source code. The feasibility of the proposed architectural approach was evaluated on a critical embedded case study.
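
    The idealised fault-tolerant element can be pictured as a wrapper that turns every outcome of an untrusted component into either a normal response or an explicit exception on a separate interface. A minimal sketch, with invented names and deliberately simplified behaviour:

        # Illustrative sketch of an "idealised fault-tolerant component":
        # every outcome is a normal response, an interface exception, or
        # a failure exception. Names and checks are ours, not the paper's.

        class InterfaceException(Exception): pass
        class FailureException(Exception): pass

        class IdealisedElement:
            def __init__(self, inner, precondition):
                self.inner = inner
                self.precondition = precondition

            def service(self, request):
                if not self.precondition(request):
                    raise InterfaceException(f"bad request: {request!r}")
                try:
                    return self.inner(request)   # normal activity
                except Exception as exc:
                    # local handling failed: signal an explicit failure
                    # exception to the enclosing architectural context
                    raise FailureException(str(exc)) from exc

        elem = IdealisedElement(lambda r: 100 / r,
                                precondition=lambda r: isinstance(r, int))
        print(elem.service(4))   # 25.0, a normal response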

    Global-to-Local Approach to Rigorously Developing Distributed System with Exception Handling
    Chao Cai, Zong-Yan Qiu, Senior Member, CCF, Member, IEEE, Hong-Li Yang, and Xiang-Peng Zhao
    Journal of Data Acquisition and Processing, 2009, 24 (2): 238-249. 

    Cooperative distributed systems cover a wide range of applications, such as systems for industrial control and business-to-business trading, which are usually safety-critical. Coordinated exception handling (CEH) refers to exception handling in cooperative distributed systems, where exceptions raised on one peer should be dealt with by all relevant peers in a consistent manner. Several CEH algorithms have been proposed. A crucial problem in using these algorithms is how to develop peers that are guaranteed to be coherent in both normal and exceptional execution. Straightforward testing or model checking is very expensive. In this paper, we propose an effective way to rigorously develop systems with correct CEH behavior. Firstly, we formalize the CEH algorithm by proposing a Peer Process Language to precisely describe the distributed systems and their operational semantics. Then we derive a set of syntactic conditions and prove their sufficiency for system coherence. Finally, we propose a global-to-local approach for developing such systems, including a language describing the distributed systems from a global perspective and a projection algorithm. Given a well-formed global description, a set of peers can be generated automatically. We prove that the system composed of these peers satisfies the conditions, that is, it is always coherent and correct with respect to CEH.
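
    The coherence requirement can be conveyed with a toy, sequentialized model of coordinated exception handling: once any peer raises, every peer in the conversation handles the same exception. This is our simplification; the paper's algorithm deals with genuinely concurrent peers:

        # Toy CEH sketch: all peers handle the SAME exception, coherently.
        # Peer names, work, and handlers are hypothetical.

        class Peer:
            def __init__(self, name, work, handlers):
                self.name, self.work, self.handlers = name, work, handlers

        def run_conversation(peers):
            raised = None
            for p in peers:   # normal phase (sequentialized for brevity)
                try:
                    p.work()
                except Exception as exc:
                    raised = type(exc).__name__
                    break
            if raised is None:
                return "normal completion"
            for p in peers:   # exceptional phase: consistent handling
                p.handlers[raised]()
            return f"all peers handled {raised}"

        def fail():
            raise TimeoutError("no reply")

        peers = [Peer("p1", lambda: None,
                      {"TimeoutError": lambda: print("p1 rolls back")}),
                 Peer("p2", fail,
                      {"TimeoutError": lambda: print("p2 rolls back")})]
        print(run_conversation(peers))   # both peers roll back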

    QoS-Driven Self-Healing Web Service Composition Based on Performance Prediction
    Yu Dai, Member, CCF, Lei Yang, and Bin Zhang
    Journal of Data Acquisition and Processing, 2009, 24 (2): 250-261. 

    Web services run in a highly dynamic environment; as a result, their QoS changes relatively frequently. In order to make composite services adapt to this dynamic property of Web services, we propose a self-healing approach for Web service composition. The approach integrates backing up during selection with reselecting during execution. In order to make the composite service heal itself as quickly as possible and to minimize the number of reselections, a performance-prediction method is proposed in this paper. On this basis, the self-healing approach is presented, including the framework, the triggering algorithm for reselection, and the reliability model of the service. Experiments show that the proposed solutions perform better in supporting self-healing Web service composition.
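
    The triggering idea can be sketched in a few lines: reselection fires only when the predicted QoS of the remaining execution would violate the requirement. The moving-average predictor and the figures below are our assumptions, not the paper's prediction model:

        # Prediction-triggered reselection sketch; numbers are invented.

        def predict(history, window=5):
            recent = history[-window:]
            return sum(recent) / len(recent)   # simple moving average

        def needs_reselection(latency_history, remaining_budget_ms):
            # re-plan only if the predicted latency would break the budget
            return predict(latency_history) > remaining_budget_ms

        print(needs_reselection([120, 130, 180, 240, 300],
                                remaining_budget_ms=150))   # True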

    Do Rules and Patterns Affect Design Maintainability?
    Javier Garzás, Félix García, and Mario Piattini
    Journal of Data Acquisition and Processing, 2009, 24 (2): 262-272. 

    At the present time, rules and patterns have reached a zenith in popularity and diffusion, thanks to the software community's efforts to discover, classify and spread knowledge concerning all types of rules and patterns. Rules and patterns are useful elements, but many features remain to be studied if we wish to apply them in a rational manner. The improvement in quality that rules and patterns can inject into design is a key issue to be analyzed, so a complete body of empirical knowledge dealing with this is necessary. This paper tackles the question of whether design rules and patterns can help to improve the extent to which designs are easy to understand and modify. An empirical study, composed of one experiment and a replication, was conducted with the aim of validating our conjecture. The results suggest that the use of rules and patterns affects the understandability and modifiability of a design: diagrams with rules and patterns are more difficult to understand than the non-rule/pattern versions, and more effort is required to carry out modifications to designs with rules and patterns.

    Package Coupling Measurement in Object-Oriented Software
    Varun Gupta and Jitender Kumar Chhabra
    Journal of Data Acquisition and Processing, 2009, 24 (2): 273-283. 

    The grouping of correlated classes into packages helps in the better organization of modern object-oriented software. The quality of such packages needs to be measured so as to estimate their utility. In this paper, new package coupling metrics are proposed which take into consideration the hierarchical structure of packages and the direction of connections among package elements. The proposed measures have been validated theoretically as well as empirically, using 18 packages taken from two open-source software systems. The results obtained from this study show a strong correlation between package coupling and the understandability of a package, which suggests that the proposed metrics could further be used to represent other external software quality factors.
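
    A directed, inter-package coupling count, one simple member of the metric family the paper studies, can be sketched as follows; the formula is our illustration, not the paper's metric:

        # Lift class-level references to their packages, keep direction,
        # and count only inter-package edges. Dependencies are invented.

        from collections import Counter

        def package_of(cls):   # "a.b.C" -> "a.b"
            return cls.rsplit(".", 1)[0]

        def package_coupling(class_deps):
            """class_deps: iterable of (from_class, to_class) references."""
            coupling = Counter()
            for src, dst in class_deps:
                ps, pd = package_of(src), package_of(dst)
                if ps != pd:   # intra-package references do not count
                    coupling[(ps, pd)] += 1
            return coupling

        deps = [("ui.Window", "core.Model"), ("ui.Dialog", "core.Model"),
                ("core.Model", "util.Log")]
        print(package_coupling(deps))
        # Counter({('ui', 'core'): 2, ('core', 'util'): 1})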

    Test-Data Generation Guided by Static Defect Detection
    Dan Hao, Member, CCF, Lu Zhang, Senior Member, CCF, Ming-Hao Liu, He Li, and Jia-Su Sun, Senior Member, CCF
    Journal of Data Acquisition and Processing, 2009, 24 (2): 284-293. 

    Software testing is an important technique to assure the quality of software systems, especially high-confidence systems. To automate the process of software testing, many automatic test-data generation techniques have been proposed. To generate effective test data, we propose in this paper a test-data generation technique guided by static defect detection. Using static defect detection analysis, our approach first identifies a set of suspicious statements which are likely to contain faults, and then generates test data to cover these suspicious statements by converting the problem of test-data generation into a constraint satisfaction problem. We performed a case study to validate the effectiveness of our approach, and made a simple comparison with JUnit Factory, an online test-data generation tool. The results show that, compared with JUnit Factory, our approach generates fewer test data while remaining competitive at fault detection.
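
    The core reduction, covering a suspicious statement by solving the branch conditions that guard it, can be sketched as follows. A real tool would hand the conditions to a constraint solver; this brute-force search and the conditions themselves are only illustrative:

        # Find inputs reaching a hypothetical suspicious statement that
        # is guarded by the conditions below.

        def path_conditions(x, y):
            return x > 10 and x + y == 20 and y >= 0

        def generate_test_data():
            for x in range(-50, 51):   # naive search over a small domain
                for y in range(-50, 51):
                    if path_conditions(x, y):
                        return x, y
            return None

        print(generate_test_data())   # (11, 9) reaches the statement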

    Runtime Monitoring Composite Web Services Through Stateful Aspect Extension
    Tao Huang, Member, CCF, Guo-Quan Wu, and Jun Wei, Member, CCF
    Journal of Data Acquisition and Processing, 2009, 24 (2): 294-308. 

    The execution of composite Web services with WS-BPEL relies on externally autonomous Web services. This implies the need to constantly monitor the running behaviour of the involved parties. Moreover, monitoring the execution of composite Web services for particular patterns is critical to enhancing the reliability of the processes. In this paper, we propose an aspect-oriented framework as a solution to provide monitoring and recovery support for composite Web services. In particular, this framework includes: 1) a stateful-aspect-based template, in which a history-based pointcut specifies patterns of interest that must not be violated within a given scope, while the advice specifies the associated recovery action; 2) tool support for runtime monitoring and recovery based on an aspect-oriented execution environment. Our experiments indicate that the proposed monitoring approach incurs minimal overhead and is efficient.
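
    The flavour of a history-based pointcut can be conveyed with a toy state machine that tracks the event history and invokes recovery advice when the monitored pattern is violated; the events and the pattern are hypothetical:

        # Stateful monitor sketch: "reserve" must be followed by "pay"
        # before "ship". Violations trigger the recovery advice.

        class StatefulMonitor:
            def __init__(self):
                self.state = "start"

            def on_event(self, event):
                transitions = {("start", "reserve"): "reserved",
                               ("reserved", "pay"): "paid",
                               ("paid", "ship"): "done"}
                if (self.state, event) in transitions:
                    self.state = transitions[(self.state, event)]
                elif event == "ship":   # pattern violated
                    self.recover(event)

            def recover(self, event):
                print(f"violation at {event!r}: run recovery advice")

        m = StatefulMonitor()
        for e in ["reserve", "ship"]:   # "pay" was skipped
            m.on_event(e)               # -> violation at 'ship'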

    A Secure Elliptic Curve-Based RFID Protocol
    Santi Martínez, Magda Valls, Concepció Roig, Josep M. Miret, and Francesc Giné
    Journal of Data Acquisition and Processing, 2009, 24 (2): 309-318. 

    Nowadays, the use of Radio Frequency Identification (RFID) systems in industry and stores has increased. Nevertheless, some of these systems present privacy problems that may discourage potential users. Hence, high-confidence and efficient privacy protocols are urgently needed. Previous studies in the literature proposed schemes that are proven to be secure, but they have scalability problems. A feasible and scalable protocol to guarantee privacy is presented in this paper. The proposed protocol uses elliptic curve cryptography combined with a zero-knowledge-based authentication scheme. An analysis proving the system secure, and even forward secure, is also provided.
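
    For flavour, here is one round of a toy Schnorr-style zero-knowledge identification. For brevity it works in a small multiplicative group rather than on an elliptic curve, and the parameters are far too small for real security; it shows only the commit/challenge/response pattern such schemes build on, not the paper's protocol:

        import random

        p, q, g = 2267, 103, 354   # g has prime order q modulo p (toy sizes)
        x = 47                     # tag's secret key
        y = pow(g, x, p)           # tag's public key

        def prove():
            k = random.randrange(1, q)
            t = pow(g, k, p)             # commitment
            c = random.randrange(1, q)   # reader's challenge
            s = (k + c * x) % q          # response
            return t, c, s

        def verify(t, c, s):
            return pow(g, s, p) == (t * pow(y, c, p)) % p

        print(verify(*prove()))   # True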

    Feature-Oriented Nonfunctional Requirement Analysis for Software Product Line
    Xin Peng, Member, CCF, Seok-Won Lee, and Wen-Yun Zhao, Senior Member, CCF
    Journal of Data Acquisition and Processing, 2009, 24 (2): 319-338. 

    Domain analysis in software product line (SPL) development provides a basis for core assets design and implementation through a systematic and comprehensive commonality/variability analysis. In feature-oriented SPL methods, the products of domain analysis are domain feature models and corresponding feature decision models that facilitate application-oriented customization. As in requirement analysis for a single system, domain analysis in SPL development should consider both functional and nonfunctional domain requirements. However, nonfunctional requirements (NFRs) are often neglected in existing domain analysis methods. In this paper, we propose a context-based method of NFR analysis for SPL development. In the method, NFRs are materialized by connecting nonfunctional goals with real-world context; thus, NFR elicitation and variability analysis can be performed by context analysis for the whole domain with the assistance of NFR templates and NFR graphs. After the variability analysis, our method integrates both functional and nonfunctional perspectives by incorporating the nonfunctional goals and operationalizations into an initial functional feature model. NFR-related constraints are also elicited and integrated. Finally, a decision model with both functional and nonfunctional perspectives is constructed to facilitate application-oriented feature model customization. A computer-aided grading system (CAGS) product line is employed to demonstrate the method throughout the paper.
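
    A fragment of the idea in data-structure form: nonfunctional goals attached to functional features, plus an NFR-related constraint that customization must respect. The feature names and the constraint are invented for illustration, loosely echoing the CAGS example:

        # Toy feature model fragment with NFR annotations and a constraint.

        features = {
            "OnlineGrading": {"selected": True},
            "AutoScore":     {"selected": True, "nfr": "response time < 2s"},
            "Encryption":    {"selected": False, "nfr": "confidentiality"},
        }
        # NFR-related constraint: AutoScore requires Encryption as an
        # operationalization of confidentiality (invented rule).
        constraints = [("AutoScore", "requires", "Encryption")]

        def violations(features, constraints):
            return [(a, b) for a, rel, b in constraints
                    if features[a]["selected"] and not features[b]["selected"]]

        print(violations(features, constraints))
        # [('AutoScore', 'Encryption')] -> the customization is invalid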

    Availability Analysis of Application Servers Using Software Rejuvenation and Virtualization
    Thandar Thein and Jong Sou Park, Member, IEEE
    Journal of Data Acquisition and Processing, 2009, 24 (2): 339-346. 

    Demands on software reliability and availability have increased tremendously due to the nature of present-day applications. We focus on the software aspect of high availability for application servers, since the unavailability of servers more often originates from software faults than from hardware faults. The software rejuvenation technique has been widely used to avoid the occurrence of unplanned failures, mainly due to the phenomenon of software aging or to transient failures. In this paper, we first present a new way of using virtual-machine-based software rejuvenation, named VMSR, to offer high availability for application server systems. Second, we model a single physical server that hosts multiple virtual machines (VMs) under the VMSR framework using stochastic modeling, and evaluate the model through both numerical analysis and simulation with the SHARPE (Symbolic Hierarchical Automated Reliability and Performance Evaluator) tool. This VMSR model is very general and can capture application server characteristics, failure behavior, and performability measures. Our results demonstrate that the VMSR approach is a practical way to ensure uninterrupted availability and to optimize performance for aging applications.
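
    As a back-of-envelope illustration of this kind of availability analysis (not the paper's SHARPE model), the steady state of a tiny three-state aging/rejuvenation chain can be computed numerically; all rates are invented:

        import numpy as np

        # States: 0 healthy, 1 aged, 2 failed. Generator matrix with
        # assumed rates per hour: aging, rejuvenation, failure, repair.
        Q = np.array([[-0.01,  0.01,  0.00],    # healthy -> aged
                      [ 0.50, -0.51,  0.01],    # rejuvenate or fail
                      [ 0.20,  0.00, -0.20]])   # repair -> healthy

        # Steady state: pi Q = 0 with sum(pi) = 1.
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(f"steady-state availability: {pi[0] + pi[1]:.6f}")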

    Demand-Driven Memory Leak Detection Based on Flow- and Context-Sensitive Pointer Analysis
    Ji Wang, Senior Member, CCF, Xiao-Dong Ma, Wei Dong, Hou-Feng Xu, and Wan-Wei Liu, Member, CCF
    Journal of Data Acquisition and Processing, 2009, 24 (2): 347-356. 

    We present a demand-driven memory leak detection algorithm based on flow- and context-sensitive pointer analysis. The detection algorithm first assumes the presence of a memory leak at some program point and then runs a backward analysis to see if this assumption can be disproved. Our algorithm computes the memory abstraction of programs based on the points-to graph resulting from flow- and context-sensitive pointer analysis. We have implemented the algorithm in the SUIF2 compiler infrastructure and used the implementation to analyze a set of C benchmark programs. The experimental results show that the approach has better precision with satisfactory scalability, as expected.
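
    The demand-driven query can be caricatured as follows: assume the object allocated into p leaks, then search for evidence that disproves the assumption (a free through any alias). The real analysis runs backwards over a flow- and context-sensitive points-to graph; this toy scans a straight-line trace instead:

        # Leak query over a toy trace of (op, dst, src) statements.

        def leaks(stmts, alloc_var):
            refs = {alloc_var}   # names that may still reach the object
            for op, dst, src in stmts:
                if op == "copy" and src in refs:
                    refs.add(dst)                # aliasing
                elif op == "kill" and dst in refs:
                    refs.discard(dst)            # reference overwritten
                elif op == "free" and dst in refs:
                    return False                 # assumption disproved
            return True                          # no disproof: report leak

        trace = [("copy", "q", "p"), ("kill", "p", None), ("free", "q", None)]
        print(leaks(trace, "p"))   # False: the alias q was freed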

    QoS Requirement Generation and Algorithm Selection for Composite Service Based on Reference Vector
    Bang-Yu Wu, Chi-Hung Chi, Shi-Jie Xu, Ming Gu, and Jia-Guang Sun
    Journal of Data Acquisition and Processing, 2009, 24 (2): 357-372. 

    Under SOA (Service-Oriented Architecture), a composite service is formed by aggregating multiple component services together in a given workflow. One key issue in this research area is QoS composition. Most work on service composition focuses on algorithms for composing services according to an assumed QoS requirement, without considering where the required QoS comes from, or how a user chooses among composition algorithms with different computational costs and different selection results. In this paper, we propose to strengthen the current service composition mechanism with QoS requirement generation and algorithm selection, based on QoS reference vectors that the registry computes optimally from the QoS of the existing individual services. These vectors give a QoS overview of the best QoS, the worst (or most economical) QoS, and the average QoS of all composite services. To implement the QoS requirement, which is determined according to this QoS overview, the paper introduces two selection algorithms as experimental examples: one aiming at the most accurate service selection, and the other seeking a trade-off between selection cost and result. Experimental results show that our mechanism can help the requester achieve the expected composite service with an appropriate QoS requirement and a customized selection algorithm.
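
    The notion of a QoS reference vector can be illustrated for a two-task sequential workflow with additive QoS dimensions; the candidate figures are invented, and real aggregation rules differ per dimension and workflow pattern:

        # Per-task candidates offer (latency_ms, cost); the registry can
        # publish best / worst / average end-to-end reference vectors.

        candidates = [
            [(120, 5.0), (90, 8.0)],
            [(200, 3.0), (150, 6.0), (180, 4.0)],
        ]

        def reference_vectors(tasks, dims=2):
            best = [sum(min(q[i] for q in t) for t in tasks)
                    for i in range(dims)]
            worst = [sum(max(q[i] for q in t) for t in tasks)
                     for i in range(dims)]
            avg = [sum(sum(q[i] for q in t) / len(t) for t in tasks)
                   for i in range(dims)]
            return best, worst, avg

        best, worst, avg = reference_vectors(candidates)
        print("best end-to-end (latency, cost):", best)   # [240, 8.0]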

    A Trust-Based Approach to Estimating the Confidence of the Software System in Open Environments
    Feng Xu, Member, CCF, Jing Pan, and Wen Lu
    Journal of Data Acquisition and Processing, 2009, 24 (2): 373-385. 

    Emerging with open environments, software paradigms such as open resource coalition and Internetware present several novel characteristics, including user-centricity, non-central control, and continual evolution. The goal of obtaining high confidence in such systems is more difficult to achieve. The general developer-oriented metrics and testing-based methods adopted in traditional measurement of high-confidence software seem infeasible in the new situation. Firstly, software development has changed from developer-centric to user-centric, and users' opinions are usually subjective and cannot be generalized into one objective metric. Secondly, there is no central control to guarantee the testing of the components that form the software system, and continual evolution makes it impossible to test the whole software system. Therefore, this paper proposes a trust-based approach that consists of three sequential sub-stages: 1) describing metrics for confidence estimation from users; 2) estimating the confidence of the components based on quantitative information from trusted recommenders; 3) estimating the confidence of the whole software system based on the component confidences and their interactions. The approach thereby attempts to make a step toward a reasonable and effective method for confidence estimation of software systems in open environments.
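
    The third sub-stage, combining component confidences through their interactions, might look like the following sketch; the usage-count weighting is our assumption, not the paper's model:

        # Weight each component's confidence by how often it participates
        # in interactions; values and call pairs are invented.

        def system_confidence(confidence, interactions):
            usage = {c: 1 for c in confidence}   # each component counts once
            for a, b in interactions:
                usage[a] += 1
                usage[b] += 1
            total = sum(usage.values())
            return sum(confidence[c] * usage[c] / total for c in usage)

        conf = {"ui": 0.9, "auth": 0.8, "storage": 0.95}
        calls = [("ui", "auth"), ("ui", "storage"), ("auth", "storage")]
        print(round(system_confidence(conf, calls), 3))   # 0.883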

    A Scalable Testing Framework for Location-Based Services
    Jiang Yu, Andrew Tappenden, James Miller, and Michael Smith
    Journal of Data Acquisition and Processing, 2009, 24 (2): 386-404. 

    A novel testing framework for location-based services is introduced. In particular, the paper showcases a novel architecture for such a framework. The implementation of the framework illustrates both the functionality and feasibility of the proposed framework and the utility of the architecture. The new framework is evaluated through comparison with several other methodologies currently available for testing location-based applications. A case study is presented in which the testing framework was applied to a typical mobile service tracking system. It is concluded that, among the currently available methodologies, the proposed testing framework achieves the best coverage of the entire location-based service testing problem, being equipped to test the widest array of application attributes and allowing for the automation of testing activities.
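
    One ingredient such a framework needs is a repeatable source of location fixes. A hypothetical sketch of a scripted provider driving a geofence check in an application under test; all names and coordinates are invented:

        # Replay scripted GPS fixes so location-driven behaviour can be
        # exercised deterministically.

        class ScriptedLocationProvider:
            def __init__(self, fixes):
                self.fixes = list(fixes)   # [(lat, lon), ...]

            def stream(self):
                yield from self.fixes

        def app_under_test(provider, geofence):
            lat0, lon0, radius = geofence
            for lat, lon in provider.stream():
                inside = (lat - lat0) ** 2 + (lon - lon0) ** 2 <= radius ** 2
                yield ("INSIDE" if inside else "OUTSIDE", (lat, lon))

        fixes = [(39.90, 116.39), (39.91, 116.40), (40.10, 116.60)]
        for event in app_under_test(ScriptedLocationProvider(fixes),
                                    geofence=(39.905, 116.395, 0.02)):
            print(event)   # INSIDE, INSIDE, OUTSIDE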

Journal of Data Acquisition and Processing
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China

E-mail: info@sjcjycl.cn
 
  Copyright ©2015 JCST, All Rights Reserved