Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      01 November 2010, Volume 25 Issue 6   
    Special Section on Software/Service in Networked Environments
    Preface
    De-Yi Li, Zhi-Yong Liu and Ke-Qing He
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1101-1102. 
    Abstract   PDF(137KB) ( 1801 )  
    Special Issue on Software Engineering for High-Confidence Systems
    Semantic Interoperability Aggregation in Service Requirements Refinement
    Ke-Qing He (何克清), Senior Member, CCF, IEEE, Jian Wang (王健), Member, CCF and Peng Liang (梁鹏), Member, CCF, ACM, IEEE
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1103-1117. 
    Abstract   PDF(684KB) ( 2149 )  
    Semantic refinement of stakeholders' requirements is a fundamental issue in requirements engineering. Faced with the problem of on-demand collaboration among heterogeneous, autonomous, and dynamic service resources on the Web, service requirements refinement becomes extremely important, and its key issue is semantic interoperability aggregation. A method for creating connecting ontologies driven by requirement sign ontology is proposed. Based on connecting ontologies, a method for semantic interoperability aggregation in requirements refinement is proposed. In addition, we discover that the necessary condition for semantic interoperability is semantic similarity, and the sufficient condition is the coverability of the agreed mediation ontology. Based on this viewpoint, a metric framework for calculating semantic interoperability capability is proposed. This methodology provides a semantic representation mechanism for refining users' requirements; meanwhile, since users' requirements on the Web usually originate from different domains, it also provides semantic interoperability guidance for networked service discovery and is an effective approach to realizing on-demand service integration. The methodology will be beneficial to service-oriented software engineering and cloud computing.
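As a rough illustration of the two conditions named in the abstract (semantic similarity as necessary, coverability of the mediation ontology as sufficient), here is a minimal sketch; the Jaccard similarity, the concept lists and the coverage test are invented stand-ins, not the paper's metric framework:

```python
# Hypothetical sketch: similarity between two requirement ontologies'
# concept sets, plus a coverage test for an agreed mediation ontology.
def similarity(concepts_a, concepts_b):
    """Jaccard similarity over concept sets (a stand-in metric)."""
    a, b = set(concepts_a), set(concepts_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def covers(mediation, concepts_a, concepts_b):
    """The mediation ontology must cover both sides' concepts."""
    return set(concepts_a) | set(concepts_b) <= set(mediation)

a = ["payment", "order", "customer"]
b = ["order", "shipment", "customer"]
print(similarity(a, b))                                          # 0.5
print(covers(["payment", "order", "customer", "shipment"], a, b))  # True
```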
    Special Section on Software/Service in Networked Environments
    Goal-Based Automated Code Generation in Self-Adaptive System
    Joonhoon Lee, Jeongmin Park, Giljong Yoo and Eunseok Lee
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1118-1129. 
    Abstract   PDF(777KB) ( 2331 )  

    System administrators deal with many problems as computing environments become increasingly complex. Systems that can recognize their own state and adapt to resolve these problems offer a solution, but much experience and knowledge are required to build such a self-adaptive system, and self-adaptive systems have inherent difficulties. This paper proposes a technique that automatically generates the code for a self-adaptive system, making the system easier to build. Self-adaptive systems in previous research required high system resource usage, and incorrect operation could be invoked by external factors such as viruses. We propose an improved self-adaptive system approach and apply it to a video conference system and a robot system. We compared the lines of code and the number of classes created by the developers, and confirmed that this enhanced approach is effective in reducing these development metrics.
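The kind of goal-driven adaptation logic that such generated code embodies can be caricatured in a few lines; the goals, thresholds and action names below are entirely hypothetical, not taken from the paper:

```python
# Hypothetical monitor-analyze-plan sketch: each goal is a predicate over a
# monitored value, with a repair action to run when the goal is violated.
goals = {"cpu": lambda v: v < 0.9, "mem": lambda v: v < 0.8}
actions = {"cpu": "throttle_tasks", "mem": "flush_cache"}

def adapt(state):
    """Return the repair actions for every violated goal."""
    return [actions[k] for k, ok in goals.items() if not ok(state[k])]

print(adapt({"cpu": 0.95, "mem": 0.5}))  # ['throttle_tasks']
```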

    Special Issue on Software Engineering for High-Confidence Systems
    A Cloud-Based Trust Model for Evaluating Quality of Web Services
    Shou-Xin Wang (王守信), Member CCF, Li Zhang (张莉), Member CCF, Shuai Wang (王帅) and Xiang Qiu (邱翔)
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1130-1142. 
    Abstract   PDF(651KB) ( 3482 )  
    Because trust is regarded as an essential security relationship within a distributed network environment, selecting services over the Internet from the viewpoint of trust has become a major trend. Current research on trust models and evaluation in the context of Web services does not rationally and accurately reflect some essential characteristics of trust, such as subjective uncertainty and dynamism. In this paper, we analyze some important characteristics of trust and some key factors that affect the trust relation in the Web service environment. Accordingly, we propose a trust model based on Cloud Model theory to describe the subjective uncertainty of trust factors. A time-related backward cloud generation algorithm is given to express the dynamism of trust. Furthermore, according to the trust model and algorithm, a formalized calculation approach is provided to evaluate the trust degree that service requestors place in providers. Our experiments show that the evaluation of trust degree can effectively support trust decisions and provides a helpful basis for selecting services from the viewpoint of trust.
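The backward cloud generator mentioned above has a well-known basic form that estimates a cloud's digital characteristics (Ex, En, He) from samples; the sketch below shows that basic estimator only, not the paper's time-related variant:

```python
import math

def backward_cloud(samples):
    """Estimate (Ex, En, He) of a normal cloud from trust samples.

    Standard backward cloud generator: Ex is the sample mean, En is
    derived from the mean absolute deviation, He from the residual
    variance. Sample values below are invented.
    """
    n = len(samples)
    ex = sum(samples) / n
    en = math.sqrt(math.pi / 2) * sum(abs(x - ex) for x in samples) / n
    s2 = sum((x - ex) ** 2 for x in samples) / (n - 1)
    he = math.sqrt(max(s2 - en ** 2, 0.0))
    return ex, en, he

ex, en, he = backward_cloud([0.8, 0.9, 0.85, 0.7, 0.95])
print(round(ex, 2))  # 0.84
```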
    Special Section on Software/Service in Networked Environments
    Web Service Composition Based on QoS Rules
    Ming-Wei Zhang (张明卫), Member, CCF, Bin Zhang (张斌), Senior Member, CCF, Ying Liu (刘莹), Jun Na (那俊) and Zhi-Liang Zhu (朱志良), Senior Member, CCF
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1143-1156. 
    Abstract   PDF(959KB) ( 1934 )  

    In workflow-based service composition approaches, the relation between Web service QoS and the execution environment is usually not considered, so the QoS information used for composite service selection is inaccurate. This makes the selected composite service inefficient, or even unexecutable. To address this problem, a novel service composition approach based on production QoS rules is proposed in this paper. It is generally very difficult to directly analyze how different kinds of environment factors influence Web service QoS, so we adopt a "black-box" method of optimizing composite services, discovering knowledge such as "the QoS of a given Web service will be higher in specific environments". In our approach, the execution information of the composite service is first recorded in a log, which is taken as the basis of subsequent statistical analysis and data mining. Then, current QoS values of the Web services are estimated, and production QoS rules qualitatively expressing the different performance of Web service QoS in different environments are mined. Finally, we employ the mined QoS knowledge of the Web services to optimize composite service selection. Extensive experimental results show that our approach can improve the performance of selected composite services while keeping the selection computation cost acceptable.
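The selection step described above, adjusting baseline QoS by mined environment rules and then picking the best candidate, can be sketched as follows; the services, QoS values and rule format are all invented:

```python
# Sketch: choose the candidate with the best estimated response time,
# adjusting a baseline QoS value by mined environment rules.
base_rt = {"s1": 120, "s2": 150}   # baseline response time in ms (invented)
rules = {("s2", "night"): 0.6}     # mined rule: "s2 is 40% faster at night"

def estimated_rt(service, env):
    """Environment-adjusted response-time estimate."""
    return base_rt[service] * rules.get((service, env), 1.0)

def select(candidates, env):
    """Pick the candidate with the lowest estimated response time."""
    return min(candidates, key=lambda s: estimated_rt(s, env))

print(select(["s1", "s2"], "day"))    # s1
print(select(["s1", "s2"], "night"))  # s2 (150 * 0.6 = 90 < 120)
```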

    A Cloud-Based BPM Architecture with User-End Distribution of Non-Compute-Intensive Activities and Sensitive Data
    Yan-Bo Han (韩燕波), Jun-Yi Sun (孙君意), Gui-Ling Wang (王桂玲) and Hou-Fu Li (李厚福)
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1157-1167. 
    Abstract   PDF(652KB) ( 7327 )  

    While cloud-based BPM (Business Process Management) shows the potential for inherent scalability and expenditure reduction, issues such as user autonomy, privacy protection and efficiency have popped up as major concerns. Users may have their own rudimentary or even full-fledged BPM systems at their end, which may be embodied by local EAI systems, but still intend to make use of cloud-side infrastructure services and BPM capabilities, which may appear as PaaS (Platform-as-a-Service) services, at the same time. A whole business process may contain a number of non-compute-intensive activities for which cloud computing is over-provisioning. Moreover, some users fear data leakage and loss of privacy if their sensitive data is processed in the cloud. This paper proposes and analyzes a novel architecture of cloud-based BPM that supports user-end distribution of non-compute-intensive activities and sensitive data. An approach to the optimal distribution of activities and data for synthetically utilizing both user-end and cloud-side resources is discussed. Experimental results show that with the help of suitable distribution schemes, data privacy can be satisfactorily protected, and resources on both sides can be utilized at lower cost.
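The distribution decision described above can be caricatured as a simple placement rule; the cost threshold, activity tuples and field names are invented, and the paper's actual optimization is more elaborate:

```python
# Sketch: keep non-compute-intensive or privacy-sensitive activities at the
# user end and push the rest to the cloud (threshold and data invented).
def place(activities):
    """Map each (name, compute_cost, sensitive) activity to a location."""
    placement = {}
    for name, cost, sensitive in activities:
        placement[name] = "user-end" if sensitive or cost < 10 else "cloud"
    return placement

acts = [("approve", 2, False), ("render", 50, False), ("payroll", 80, True)]
print(place(acts))
# {'approve': 'user-end', 'render': 'cloud', 'payroll': 'user-end'}
```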

    Modeling and Verifying Concurrent Programs with Finite Chu Spaces
    Xu-Tao Du (杜旭涛), Chun-Xiao Xing (邢春晓), Member, CCF, IEEE and Li-Zhu Zhou (周立柱), Member, ACM
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1168-1183. 
    Abstract   PDF(395KB) ( 2087 )  

    Finite Chu spaces are proposed for the modeling and verification of concurrent programs. In order to model not only typical concurrent behaviors but also modern exception handling and synchronization mechanisms, we design an enriched process algebra of Chu spaces from a practical point of view. To illustrate the power of finite Chu spaces and the process algebra while abstracting away from language-specific details, an imaginary concurrent programming language (ICL) is designed. A denotational semantics of ICL is presented using finite Chu spaces and the enriched process algebra. The valuation functions are fairly straightforward since the carefully designed operators have done much of the job. The enriched process algebra is also used as the specification language for Chu spaces, with which process-algebraic properties can be specified. Verification algorithms are presented with their time complexities discussed.
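A finite Chu space is essentially a matrix relating events to states; the toy example below illustrates only the representation, with invented events and states, and says nothing about ICL's actual semantics or the enriched process algebra:

```python
# A finite Chu space over K = {0, 1}: events A, states X, and a matrix r
# recording whether event a has occurred in state x (toy data).
A = ["start", "send", "recv"]
X = ["s0", "s1", "s2"]
r = {("start", "s0"): 0, ("start", "s1"): 1, ("start", "s2"): 1,
     ("send", "s0"): 0, ("send", "s1"): 1, ("send", "s2"): 1,
     ("recv", "s0"): 0, ("recv", "s1"): 0, ("recv", "s2"): 1}

def occurred(state):
    """Events that have occurred in the given state."""
    return [a for a in A if r[(a, state)] == 1]

print(occurred("s1"))  # ['start', 'send']
```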

    A Hybrid Set of Complexity Metrics for Large-Scale Object-Oriented Software Systems
    Yu-Tao Ma (马于涛), Member, CCF, ACM, Ke-Qing He (何克清), Senior Member, CCF, IEEE, Bing Li (李兵), Senior Member, CCF, Jing Liu (刘婧) and Xiao-Yan Zhou (周晓燕)
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1184-1201. 
    Abstract   PDF(1128KB) ( 3059 )  

    Large-scale object-oriented (OO) software systems have recently been found to share global network characteristics such as small world and scale free, which go beyond the scope of traditional software measurement and assessment methodologies. To measure the complexity at various levels of granularity, namely graph, class (and object) and source code, we propose a hierarchical set of metrics in terms of coupling and cohesion --- the most important characteristics of software, and analyze a sample of 12 open-source OO software systems to empirically validate the set. Experimental results of the correlations between cross-level metrics indicate that the graph measures of our set complement traditional software metrics well from the viewpoint of network thinking, and provide more effective information about fault-prone classes in practice.
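At the graph level, coupling can be read off a class dependency graph as fan-in/fan-out counts; this is a generic illustration with an invented edge list, not the paper's hierarchical metric set:

```python
# Sketch: simple coupling measures over a class dependency graph.
# Each edge (u, v) means "class u depends on class v" (edges invented).
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "C")]

def fan_out(c):
    """Number of classes c depends on (efferent coupling)."""
    return sum(1 for s, _ in edges if s == c)

def fan_in(c):
    """Number of classes depending on c (afferent coupling)."""
    return sum(1 for _, t in edges if t == c)

print(fan_in("C"), fan_out("A"))  # 3 2
```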

    Measuring Structural Quality of Object-Oriented Softwares via Bug Propagation Analysis on Weighted Software Networks
    Wei-Feng Pan (潘伟丰), Member, CCF, Bing Li (李兵), Senior Member, CCF, Yu-Tao Ma (马于涛), Member, CCF, ACM, Ye-Yi Qin (覃叶宜) and Xiao-Yan Zhou (周晓燕)
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1202-1213. 
    Abstract   PDF(579KB) ( 2849 )  

    The quality of a software system is partially determined by its (topological) structure, so the need to quantitatively analyze the quality of that structure has become pressing. In this paper a novel metric called software quality of structure (SQoS) is presented for quantitatively measuring the structural quality of object-oriented (OO) software via bug propagation analysis on weighted software networks (WSNs). First, a software system is modeled as a WSN, a weighted class dependency network (WCDN), in which classes are nodes and the interaction between each pair of classes, if any, is a directed edge with a weight indicating the probability that a bug in one class will propagate to the other. We then analyze the bug propagation process in the WCDN together with the bug proneness of each class, and on this basis develop a metric (SQoS) to measure the structural quality of OO software as a whole. The approach is evaluated in two case studies on open source Java programs using different software structures (one employs design patterns and the other does not) for the same OO software. The results of the case studies validate the effectiveness of the proposed metric. The approach is fully automated by a tool written in Java.
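The WCDN edge weights support propagation estimates of the kind sketched below; the weights, the one-step horizon and the independence assumption are illustrative simplifications, not the paper's propagation model:

```python
# Sketch: one-step bug propagation on a weighted class dependency network.
# w[(u, v)] = probability a bug in class u propagates to class v (invented).
w = {("A", "B"): 0.3, ("A", "C"): 0.5, ("B", "C"): 0.2}

def infection_prob(target, buggy):
    """Probability `target` is reached from any buggy class in one step,
    assuming independent propagation along each incoming edge."""
    p_clean = 1.0
    for (u, v), p in w.items():
        if v == target and u in buggy:
            p_clean *= 1.0 - p
    return 1.0 - p_clean

print(round(infection_prob("C", {"A", "B"}), 2))  # 0.6
```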

    Covering-Based Routing Algorithms for Cyclic Content-Based P/S Overlays
    Ming-Wen Chen (陈明文), Jian Zhang (张健), Song-Lin Hu (虎嵩林), Senior Member, CCF and Zhi-Yong Liu (刘志勇), Senior Member, CCF
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1214-1224. 
    Abstract   PDF(649KB) ( 1656 )  

    Content-based routing (CBR) publish/subscribe (P/S) systems are an important class of distributed systems. They differ from classical paradigms in that messages are routed based on their content rather than their destination address, so as to provide fine-granularity event dissemination and support more flexible decoupling of applications. Covering-based routing is a typical optimization method for CBR and has been widely used as a building block in many distributed P/S systems, for it maintains a compact routing table and reduces the costs of communication and matching computation. As far as we know, this optimization method has only been implemented on acyclic overlay networks and cannot be directly utilized on cyclic networks. As CBR in cyclic systems becomes a new focus of research, developing covering-based protocols and algorithms for cyclic P/S systems is becoming significantly important. This paper contributes a cyclic covering-based routing protocol with corresponding algorithms to support covering-based routing in cyclic P/S systems, and implements it in PADRES, a distributed event management infrastructure based on the publish/subscribe model.
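The covering relation at the heart of covering-based routing can be sketched for interval predicates: s1 covers s2 when every event matching s2 also matches s1. The attribute names and interval encoding below are invented, and real P/S systems support richer predicates:

```python
# Sketch: subscription covering for interval predicates. A subscription is
# a dict attribute -> (lo, hi); missing attributes are unconstrained.
def covers(s1, s2):
    """s1 covers s2 iff each attribute constrained by s1 is constrained
    at least as tightly by s2 (so every s2-match is an s1-match)."""
    return all(a in s2 and s1[a][0] <= s2[a][0] and s2[a][1] <= s1[a][1]
               for a in s1)

s1 = {"price": (0, 100)}
s2 = {"price": (10, 50), "qty": (1, 5)}
print(covers(s1, s2), covers(s2, s1))  # True False
```

A broker can then forward only the covering (more general) subscription and drop covered ones, which is what keeps the routing table compact.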

    An Uncertainty Enhanced Trust Evolution Strategy for e-Science
    Wei Du (杜薇), Guo-Hua Cui (崔国华) and Wei Liu (刘伟), Member, CCF
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1225-1236. 
    Abstract   PDF(469KB) ( 1938 )  

    Resources shared in e-Science have critical security requirements, so subjective trust management is essential to guarantee users' collaborations and communications on such a promising infrastructure. As an important characteristic of subjective trust, uncertainty should be preserved and exhibited in trust definition, representation and evolution. Considering the drawbacks of existing mechanisms based on random mathematics and fuzzy theory, this paper designs an uncertainty-enhanced trust evolution strategy based on cloud model theory. We define subjective trust as a trust cloud, and then propose new algorithms to propagate, aggregate and update trust. Furthermore, based on the concept of the similar cloud, a method to assess trust level is put forward. The simulation results show the effectiveness, rationality and efficiency of our proposed strategy.
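For intuition only, here is an invented simplification of propagation and aggregation that uses just the expectation of each trust cloud; the paper's cloud-model operators also combine the entropy En and hyper-entropy He:

```python
# Invented simplification: propagate trust along a chain by multiplying
# expectations; aggregate parallel recommendation paths by averaging.
def propagate(path):
    """Trust expectation along a chain of recommenders."""
    t = 1.0
    for ex in path:
        t *= ex
    return t

def aggregate(paths):
    """Average the propagated trust of several parallel paths."""
    return sum(propagate(p) for p in paths) / len(paths)

print(round(propagate([0.9, 0.8]), 2))           # 0.72
print(round(aggregate([[0.9, 0.8], [0.6]]), 2))  # 0.66
```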

    Regular Papers
    Efficient Relational Techniques for Processing Graph Queries
    Sherif Sakr, Member, ACM and Ghazi Al-Naymat, Member, ACM
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1237-1255. 
    Abstract   PDF(740KB) ( 1688 )  

    Graphs are widely used for modeling complicated data such as social networks, chemical compounds, protein interactions and the semantic web. To effectively understand and utilize any collection of graphs, a graph database that efficiently supports elementary querying mechanisms is crucial. For example, subgraph and supergraph queries are important types of graph queries which have many applications in practice. A primary challenge in computing the answers of graph queries is that pair-wise comparisons of graphs are usually hard problems. Relational database management systems (RDBMSs) have repeatedly been shown to be able to efficiently host different types of data such as complex objects and XML data. RDBMSs derive much of their performance from sophisticated optimizer components which make use of physical properties that are specific to the relational model such as sortedness, proper join ordering and powerful indexing mechanisms. In this article, we study the problem of indexing and querying graph databases using the relational infrastructure. We present a purely relational framework for processing graph queries. This framework relies on building a layer of graph-features knowledge which captures metadata and summary features of the underlying graph database. We describe different querying mechanisms which make use of this layer to achieve scalable performance for processing graph queries. Finally, we conduct an extensive set of experiments on real and synthetic datasets to demonstrate the efficiency and the scalability of our techniques.
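A features layer supports pruning of the kind sketched below: a graph can contain a subgraph query only if it has at least as many nodes of each label as the query uses. The relational encoding is reduced to a dict here, and the data is invented; it only illustrates the filter-before-verify idea:

```python
# Sketch: prune candidate graphs by node-label counts before any expensive
# subgraph-isomorphism verification (graphs and labels invented).
db = {"g1": {"C": 6, "O": 1}, "g2": {"C": 2}}   # graph -> label counts

def candidates(query_counts):
    """Graphs that pass the necessary label-count condition."""
    return [g for g, counts in db.items()
            if all(counts.get(label, 0) >= n
                   for label, n in query_counts.items())]

print(candidates({"C": 3}))  # ['g1']
```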

    Negative Selection of Written Language Using Character Multiset Statistics
    Matti Pöllä and Timo Honkela
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1256-1266. 
    Abstract   PDF(585KB) ( 1472 )  

    We study the combination of symbol frequency analysis and negative selection for anomaly detection of discrete sequences where conventional negative selection algorithms are not practical due to data sparsity. Theoretical analysis on ergodic Markov chains is used to outline the properties of the presented anomaly detection algorithm and to predict the probability of successful detection. Simulations are used to evaluate the detection sensitivity and the resolution of the analysis on both generated artificial data and real-world language data including the English Wikipedia. Simulation results on large reference corpora are used to study the effects of the assumptions made in the theoretical model in comparison to real-world data.
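The core idea, learning per-character frequency ranges from a reference corpus and flagging strings that fall outside them, can be sketched as toy thresholding; this is not the paper's negative selection detector construction:

```python
# Toy sketch: flag a string as anomalous when some character's relative
# frequency falls outside the range observed in the reference corpus.
from collections import Counter

def profile(corpus):
    """Per-character (min, max) relative-frequency ranges over the corpus."""
    lo, hi = {}, {}
    alphabet = set("".join(corpus))
    for text in corpus:
        freq = Counter(text)
        for ch in alphabet:
            f = freq.get(ch, 0) / len(text)
            lo[ch] = min(lo.get(ch, f), f)
            hi[ch] = max(hi.get(ch, f), f)
    return lo, hi

def is_anomalous(text, lo, hi):
    freq = Counter(text)
    return any(not (lo[ch] <= freq.get(ch, 0) / len(text) <= hi[ch])
               for ch in lo)

lo, hi = profile(["abab", "aabb"])
print(is_anomalous("aaaa", lo, hi))  # True
```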

    Eventual Leader Election with Weak Assumptions on Initial Knowledge, Communication Reliability, and Synchrony
    Antonio Fernández Anta, Senior Member, ACM, IEEE, Ernesto Jiménez and Michel Raynal
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1267-1281. 
    Abstract   PDF(444KB) ( 1588 )  

    This paper considers the eventual leader election problem in asynchronous message-passing systems where an arbitrary number t of processes can crash (t<n, where n is the total number of processes). It considers weak assumptions both on the initial knowledge of the processes and on the network behavior. More precisely, initially a process knows only its identity and the fact that the process identities are different and totally ordered (it knows neither n nor t). Two eventual leader election protocols and a lower bound are presented. The first protocol assumes that a process also knows a lower bound α on the number of processes that do not crash. This protocol requires the following behavioral properties from the underlying network: the graph made up of the correct processes and fair lossy links is strongly connected, and there is a correct process connected to (n-f)-α other correct processes (where f is the actual number of crashes in the considered run) through eventually timely paths (paths made up of correct processes and eventually timely links). This protocol is not communication-efficient in the sense that each correct process has to send messages forever. The second protocol is communication-efficient: after some time, only the final common leader has to send messages forever. This protocol does not require the processes to know α, but requires stronger properties from the underlying network: each pair of correct processes has to be connected by fair lossy links (one in each direction), and there is a correct process whose n-f-1 output links to the rest of the correct processes have to be eventually timely. A matching lower bound result shows that any eventual leader election protocol must have runs with this number of eventually timely links, even if all processes know all the process identities. In addition to being communication-efficient, the second protocol has another noteworthy efficiency property: whether the run is finite or infinite, all the local variables and message fields have a finite domain in the run.
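For readers unfamiliar with the Omega failure detector, here is a drastically simplified caricature of the usual pattern (elect the smallest identifier among processes not currently suspected); it deliberately ignores the paper's weak link assumptions, the parameter α, and communication efficiency:

```python
# Toy sketch of the Omega pattern: suspect any process that has been
# silent longer than `timeout`, elect the smallest unsuspected identifier.
def leader(last_heard, now, timeout):
    """last_heard maps process id -> time of its last heartbeat."""
    alive = [p for p, t in last_heard.items() if now - t <= timeout]
    return min(alive) if alive else None

print(leader({1: 2, 3: 9, 5: 10}, now=10, timeout=3))  # 3
```

With eventually timely links, all correct processes eventually agree on the same surviving minimum, which is the essence of eventual leader election.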

    Frontal and Semi-Frontal Facial Caricature Synthesis Using Non-Negative Matrix Factorization
    Hua Huang (黄华), Senior Member, CCF, Member, IEEE and Xiang-Wang Ma (马湘旺)
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1282-1292. 
    Abstract   PDF(497KB) ( 2027 )  

    In this paper, we present a novel approach to synthesizing frontal and semi-frontal cartoon-like facial caricatures from an image. The caricature is generated by warping the input face from the original feature points to the corresponding exaggerated feature points. A 3D mean face model is incorporated to facilitate mapping the face to its caricature by inferring the depth of 3D feature points and the spatial transformation. The 3D face is then deformed using non-negative matrix factorization and projected back to the image plane for further warping. To efficiently solve the nonlinear spatial transformation, we propose a novel initialization scheme to set up the Levenberg-Marquardt optimization. According to the spatial transformation, exaggeration is applied to the most salient features by exaggerating their normalized difference from the mean. Non-photorealistic rendering (NPR) based stylization completes the cartoon caricature. Experiments demonstrate that our method outperforms existing methods in terms of view angles and aesthetic visual quality.
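The exaggeration rule, pushing each feature away from the mean by a scaled difference, reduces to one line; the points and factor below are invented, and the paper applies this per feature after normalization:

```python
# Sketch of difference-from-mean exaggeration: k = 1 reproduces the input,
# k > 1 pushes the feature point away from the mean face (data invented).
def exaggerate(point, mean, k):
    return tuple(m + k * (p - m) for p, m in zip(point, mean))

print(exaggerate((12.0, 8.0), (10.0, 10.0), 1.5))  # (13.0, 7.0)
```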

    Attribute-Based Signature with Policy-and-Endorsement Mechanism
    Huai-Xi Wang (王怀习), Yan Zhu (朱岩), Rong-Quan Feng (冯荣权) and Stephen S. Yau, Fellow, IEEE
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1293-1304. 
    Abstract   PDF(483KB) ( 2685 )  

    In this paper a new signature scheme, called Policy-Endorsing Attribute-Based Signature, is developed to correspond with the existing Ciphertext-Policy Attribute-Based Encryption. This signature provides a policy-and-endorsement mechanism. In this mechanism a single user, whose attributes satisfy the predicate, endorses the message. This signature allows the signer to announce his endorsement using an access policy without having to reveal the identity of the signer. The security of this signature, selfless anonymity and existential unforgeability, is based on the Strong Diffie-Hellman assumption and the Decision Linear assumption in bilinear map groups.

    Formally Analyzing Expected Time Complexity of Algorithms Using Theorem Proving
    Osman Hasan and Sofiéne Tahar, Senior Member, IEEE, Member, ACM
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1305-1320. 
    Abstract   PDF(339KB) ( 1709 )  

    Probabilistic techniques are widely used in the analysis of algorithms to estimate the computational complexity of an algorithm or a computational problem. Traditionally, such analyses are performed using paper-and-pencil proofs, and the results are sometimes validated using simulation techniques. These techniques are informal and thus may result in an inaccurate analysis. In this paper, we propose a formal technique for analyzing the expected time complexity of algorithms using higher-order-logic theorem proving. The approach calls for mathematically modeling the algorithm along with its inputs, using indicator random variables, in higher-order logic. This model is then used to formally reason about the expected time complexity of the underlying algorithm in a theorem prover. The paper includes the higher-order-logic formalization of indicator random variables, which are fundamental to the proposed infrastructure. To illustrate the practical effectiveness and utilization of the proposed infrastructure, the paper also includes the analysis of algorithms for three well-known problems: the hat-check problem, the birthday paradox and the hiring problem.
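For one of the named examples, the indicator-variable analysis of the hat-check problem gives E[matches] = Σᵢ P(person i gets own hat) = n · (1/n) = 1; a quick simulation (parameters invented, not from the paper) agrees with this value:

```python
import random

def avg_matches(n, trials, seed=0):
    """Average number of fixed points of a random permutation of size n.

    By linearity of expectation over indicator variables, the true
    expectation is exactly 1 for every n.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        total += sum(1 for i, p in enumerate(perm) if i == p)
    return total / trials

print(avg_matches(10, 20000))  # close to 1.0
```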

    NP-Logic Systems and Model-Equivalence Reductions
    Yu-Ping Shen (沈榆平) and Xi-Shun Zhao (赵希顺)
    Journal of Data Acquisition and Processing, 2010, 25 (6): 1321-1326. 
    Abstract   PDF(245KB) ( 1630 )  

    In this paper we investigate the existence of model-equivalence reductions between NP-logic systems, i.e., logic systems whose model existence problem is in NP. It is shown that among all NP-systems with model checking problem in NP, the existentially quantified propositional logic (∃PF) is maximal with respect to poly-time model-equivalent reduction. However, ∃PF seems not to be a maximal NP-system in general, because there exists an NP-system with a DP-complete model checking problem.

E-mail: info@sjcjycl.cn
 