Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      15 July 2006, Volume 21 Issue 4   
    Articles
    Scalable and Practical Nonblocking Switching Networks
    Si-Qing Zheng and Ashwin Gumaste
    Journal of Data Acquisition and Processing, 2006, 21 (4): 466-475.
    Abstract   PDF(487KB) ( 1486 )  
    Large-scale strictly nonblocking (SNB) and wide-sense nonblocking (WSNB) networks may be infeasible due to their high cost. In contrast, rearrangeable nonblocking (RNB) networks are more scalable because of their much lower cost, but they are not suitable for circuit switching. In this paper, the concept of virtual nonblockingness is introduced. It is shown that a virtual nonblocking (VNB) network functions like an SNB or WSNB network, but is constructed at the cost of an RNB network. The results indicate that for large-scale circuit switching applications, it suffices to build VNB networks.
    Optimal Routing in a Small-World Network
    Jian-Yang Zeng and Wen-Jing Hsu
    Journal of Data Acquisition and Processing, 2006, 21 (4): 476-481.
    Abstract   PDF(333KB) ( 1591 )  
    Substantial research has been devoted to modelling the small-world phenomenon that arises in nature as well as in human society. Earlier work focused on the static properties of various small-world models. To examine the routing aspects, Kleinberg proposed a model based on a d-dimensional toroidal lattice with long-range links chosen at random according to the d-harmonic distribution. Kleinberg showed that, using only local information, the greedy routing algorithm delivers messages in O(lg^2 n) expected hops. We extend Kleinberg's small-world model by allowing each node x two additional random links to nodes chosen uniformly at random within (lg n)^(2/d) Manhattan distance of x. Based on this extended model, we propose an oblivious algorithm that can route messages between any two nodes in O(lg n) expected hops. Our routing algorithm keeps only O((lg n)^{beta+1}) bits of information on each node, where 1 < beta < 2, and is thus scalable w.r.t. the network size. To our knowledge, our result is the first to achieve the optimal routing complexity while keeping a poly-logarithmic number of bits of information on each node in small-world networks.
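The greedy rule underlying Kleinberg's result (always forward to whichever neighbor is closest to the target in lattice distance) can be sketched on a 1-D ring. This is an illustrative toy, not the paper's extended model: the harmonic long-range link construction and all function names are assumptions for the sketch.

```python
import random

def ring_dist(a, b, n):
    """Lattice (torus) distance between nodes a and b on a ring of n nodes."""
    d = abs(a - b) % n
    return min(d, n - d)

def build_links(n, rng):
    """Each node gets its two local neighbors plus one long-range link
    drawn with probability proportional to 1/d (1-harmonic distribution)."""
    links = {}
    for u in range(n):
        weights = [0.0 if v == u else 1.0 / ring_dist(u, v, n) for v in range(n)]
        long_range = rng.choices(range(n), weights=weights, k=1)[0]
        links[u] = {(u - 1) % n, (u + 1) % n, long_range}
    return links

def greedy_route(links, src, dst, n):
    """Forward to the neighbor closest to dst in lattice distance.
    A local (+/-1) neighbor always makes progress, so the loop terminates."""
    hops, cur = 0, src
    while cur != dst:
        cur = min(links[cur], key=lambda v: ring_dist(v, dst, n))
        hops += 1
    return hops
```

Because each hop strictly decreases the distance to the destination, the hop count is bounded by the initial lattice distance; the long-range links are what bring the expected count down to poly-logarithmic.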
    An SPN-Based Integrated Model for Web Prefetching and Caching
    Lei Shi, Ying-Jie Han, Xiao-Guang Ding, Lin Wei and Zhi-Min Gu
    Journal of Data Acquisition and Processing, 2006, 21 (4): 482-489.
    Abstract   PDF(578KB) ( 1811 )  
    The World Wide Web has become the primary means of information dissemination. Because network bandwidth is limited, users often suffer long waiting times. Web prefetching and web caching are the primary approaches to reducing user-perceived access latency and improving the quality of service. In this paper, a Stochastic Petri Net (SPN) based integrated web prefetching and caching model (IWPCM) is presented and its performance is evaluated. Four performance metrics are analyzed and discussed: access latency, throughput, hit ratio (HR) and byte hit ratio (BHR). Simulations show that, compared with a caching-only model (CM), IWPCM can further improve the throughput, HR and BHR and reduce the access latency. The performance evaluation based on the SPN model can provide a basis for implementing web prefetching and caching, and their combination holds the promise of improving the QoS of web systems.
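The gain the abstract measures can be seen in a toy replay simulation. This is not the SPN model itself: the request trace, cache size, and the naive next-page prefetch rule are all invented for illustration.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache that tracks the hit ratio (HR)."""
    def __init__(self, capacity):
        self.capacity, self.data = capacity, OrderedDict()
        self.hits = self.requests = 0

    def access(self, page, count=True):
        # count=False marks a prefetch: it fills the cache but is not a user request.
        if count:
            self.requests += 1
        if page in self.data:
            self.data.move_to_end(page)
            if count:
                self.hits += 1
        else:
            self.data[page] = True
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)  # evict least recently used

def hit_ratio(trace, capacity, prefetch=False):
    """Replay a page trace; with prefetch=True, also pre-load page+1
    after every request (a deliberately naive prediction rule)."""
    cache = LRUCache(capacity)
    for page in trace:
        cache.access(page)
        if prefetch:
            cache.access(page + 1, count=False)
    return cache.hits / cache.requests
```

On a purely sequential trace the caching-only replay scores zero hits, while the same cache with prefetching converts almost every request into a hit, which is the qualitative effect IWPCM quantifies.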
    Coverage and Exposure Paths in Wireless Sensor Networks
    Liu-Sheng Huang, Hong-Li Xu, Yang Wang, Jun-Min Wu, and Hong Li
    Journal of Data Acquisition and Processing, 2006, 21 (4): 490-495.
    Abstract   PDF(314KB) ( 1413 )  
    Wireless sensor networks pose a number of challenging problems such as localization, deployment and tracking. One interesting problem is the calculation of coverage and exposure paths for sensor networks. This paper presents a fully localized algorithm for the worst coverage problem first introduced by Meguerdichian et al. The nodes of the sensor network cooperate to construct the worst coverage path using only one-hop neighbor information, thus avoiding massive communication and conserving energy. The correctness of the proposed algorithm is proved formally under the sensing diminishing model. Moreover, the algorithm can easily be extended to solve the minimal exposure problem with local information as well.
    Random Walk Routing in WSNs with Regular Topologies
    Hui Tian, Hong Shen, and Teruo Matsuzawa
    Journal of Data Acquisition and Processing, 2006, 21 (4): 496-502.
    Abstract   PDF(318KB) ( 1694 )  
    Topology is one of the most important characteristics of any type of network because it represents the network's inherent properties and has great impact on its performance. For wireless sensor networks (WSNs), a well-deployed regular topology can save more energy than a random topology, and WSNs with regular topologies can prolong network lifetime, as shown in many previous studies. However, little work has been done on effective routing algorithms for WSNs with regular topologies, beyond routing along a shortest path with knowledge of the global location information of sensor nodes. In this paper, a new routing protocol based on random walks is proposed. It does not require global location information, and it inherently achieves load balancing, a property that is difficult to achieve with other routing protocols. In scenarios where the message to be sent to the base station is small compared with the inquiry messages exchanged among neighboring nodes, it is proved that the random walk routing protocol guarantees a high probability of successful transmission from the source to the base station with the same energy consumption as shortest path routing. Since in many WSN applications sensor nodes often send only beep-like small messages to the base station to report their status, the proposed random walk routing is a viable scheme that works efficiently in these scenarios. Although the random walk routing provides load balancing, the nodes near the base station inevitably carry a heavier burden than those far away. Therefore, a density-aware deployment scheme is further proposed to guarantee that the heavily loaded nodes do not affect the network lifetime even if their energy is exhausted. The main idea is to deploy sensors with different densities according to their distance to the base station. It is shown in this paper that combining the random walk routing protocol with the density-aware deployment scheme can effectively prolong the network lifetime.
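The core mechanism is easy to sketch: on a regular grid, a message that is forwarded to a uniformly random neighbor eventually reaches the base station with no node ever needing global location information. The sketch below is a toy, not the paper's protocol; tracking how often each node forwards also exposes the uneven per-node load that motivates the density-aware deployment.

```python
import random

def neighbors(x, y, n):
    """4-neighbors of (x, y) on an n x n grid."""
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < n and 0 <= y + dy < n]

def random_walk_route(n, start, base, rng):
    """Forward the message to a uniformly random neighbor until it reaches
    the base station. Returns (hop count, per-node forwarding load)."""
    load = {}
    cur, hops = start, 0
    while cur != base:
        load[cur] = load.get(cur, 0) + 1
        cur = rng.choice(neighbors(*cur, n))
        hops += 1
    return hops, load
```

On a finite grid the walk reaches the base station with probability 1, though the hop count is much larger than the Manhattan distance; the paper's argument is that the per-message cost stays acceptable when the data messages themselves are tiny.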
    2DCMA: An Effective Maintenance Algorithm of Materialized Views in Peer Data Management Systems
    Biao Qin, Shan Wang, and Xiao-Yong Du
    Journal of Data Acquisition and Processing, 2006, 21 (4): 503-512.
    Abstract   PDF(458KB) ( 1327 )  
    Update management is very important for data integration systems, so update management in peer data management systems (PDMSs) is an active research area. This paper studies view maintenance in PDMSs. First, the definition of a view is extended, and peer views, local views and global views are proposed according to application requirements. Two main factors affect materialized views in PDMSs: schema mappings between peers may change, and peers may update their data. Based on these requirements, this paper proposes an algorithm called 2DCMA, which includes two sub-algorithms, for data and definition consistency maintenance, to effectively maintain views. For data consistency maintenance, Mork's rules governing the use of updategrams and boosters are extended; the new rule system can be used to optimize the execution plan, and the data consistency maintenance algorithm is based on it. Furthermore, an ECA rule is adopted for definition consistency maintenance. Finally, extensive simulation experiments are conducted in SPDMS. The simulation results show that 2DCMA outperforms Mork's algorithm when maintaining data consistency, and outperforms a centralized view maintenance algorithm when maintaining definition consistency.
    Globus Toolkit Version 4: Software for Service-Oriented Systems
    Ian Foster
    Journal of Data Acquisition and Processing, 2006, 21 (4): 513-520.
    Abstract   PDF(426KB) ( 2601 )  
    The Globus Toolkit (GT) has been developed since the late 1990s to support the development of service-oriented distributed computing applications and infrastructures. Core GT components address, within a common framework, fundamental issues relating to security, resource access, resource management, data movement, resource discovery, and so forth. These components enable a broader "Globus ecosystem" of tools and components that build on, or interoperate with, GT functionality to provide a wide range of useful application-level functions. These tools have in turn been used to develop a wide range of both "Grid" infrastructures and distributed applications. I summarize here the principal characteristics of the recent Web Services-based GT4 release, which provides significant improvements over previous releases in terms of robustness, performance, usability, documentation, standards compliance, and functionality. I also introduce the new "dev.globus" community development process, which allows a larger community to contribute to the development of Globus software.
    Design and Implementation of NAREGI SuperScheduler Based on the OGSA Architecture
    Satoshi Matsuoka, Masayuki Hatanaka, Yasumasa Nakano, Yuji Iguchi, Toshio Ohno, Kazushige Saga, and Hidemoto Nakada
    Journal of Data Acquisition and Processing, 2006, 21 (4): 521-528.
    Abstract   PDF(397KB) ( 1562 )  
    NAREGI is a five-year Japanese national grid project running from 2003 to 2007, whose chief aim is to develop a set of grid middleware to serve as a basis for future e-Science. NAREGI also aims to lead the way in the standardization of grid middleware based on the OGSA architecture. Its super-scheduler is based on the proposed OGSA-EMS architecture, and is the first working implementation of the component relationships documented in the OGSA-EMS architecture document v1.0. The design and implementation experience has confirmed that the documented OGSA-EMS architecture is quite feasible, but will require a significant amount of refinement and speed improvement before its detailed specifications can be finalized. The super-scheduler also supports co-allocation across multiple sites for automated execution of grid-based MPI jobs that execute across machines. Such resource allocation requires sophisticated interactions between the OGSA-EMS components not covered in the current OGSA-EMS architecture, some of which are non-trivial. Overall, job scheduling with OGSA-EMS has proven not only to work, but also to keep job allocation and execution time within reasonable bounds.
    Automatic Transaction Compensation for Reliable Grid Applications
    Fei-Long Tang, Ming-Lu Li, and Joshua Zhexue Huang
    Journal of Data Acquisition and Processing, 2006, 21 (4): 529-536.
    Abstract   PDF(372KB) ( 1310 )  
    As grid technology expands from scientific computing to business applications, service-oriented grid computing aims to provide reliable services for users and to hide the complexity of service processes from them. Grid services for coordinating the long-lived transactions that occur in business applications play an important role in reliable grid applications. In this paper, the grid transaction service (GridTS) is proposed for dealing with long-lived business transactions. We present a compensation-based long-lived transaction coordination algorithm that enables users to select results from committed sub-transactions. Unlike other long-lived transaction models that require application programmers to develop the corresponding compensating transactions, GridTS can automatically generate compensating transactions during the execution of a long-lived grid transaction. Simulation results demonstrate the feasibility of GridTS and the effectiveness of the corresponding algorithm.
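The compensation idea itself (run sub-transactions forward; on failure, undo the already-committed ones in reverse order) can be sketched generically. This toy does not reproduce GridTS's automatic generation of compensating transactions; the coordinator shape and all names are assumptions.

```python
def run_long_lived(steps):
    """Execute sub-transactions in order. Each step is a (do, compensate)
    pair of callables. If any do() raises, run the compensations of the
    already-committed steps in reverse order, then re-raise the error."""
    committed = []
    try:
        for do, compensate in steps:
            do()
            committed.append(compensate)
    except Exception:
        for compensate in reversed(committed):
            compensate()
        raise
```

Reverse order matters: a later sub-transaction may depend on the effects of an earlier one, so its compensation must run first.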
    Client-Centric Adaptive Scheduling of Service-Oriented Applications
    Jing Wang, Li-Yong Zhang, Yan-Bo Han
    Journal of Data Acquisition and Processing, 2006, 21 (4): 537-546.
    Abstract   PDF(483KB) ( 1715 )  
    This paper proposes a client-centric computing model that allows for adaptive execution of service-oriented applications. The model can flexibly dispatch application tasks to the client side and the network side and can dynamically adjust an execution scheme to adapt to environmental changes; it is thus expected to achieve better scalability, higher performance and more controllable privacy. Scheduling algorithms and rescheduling strategies are proposed for the model. Experiments show that the model can improve the execution performance of service-oriented applications.
    QoS-Aware Composite Services Retrieval
    Xiao-Ling Wang, Sheng Huang, and Ao-Ying Zhou
    Journal of Data Acquisition and Processing, 2006, 21 (4): 547-558.
    Abstract   PDF(704KB) ( 1754 )  
    In current service-oriented applications, an individual web service usually cannot meet the requirements arising from real-world applications, so it is necessary to combine the functionalities of different web services into a composite service in response to users' requests. To address the problem of web service composition, this paper proposes an efficient approach to composing basic services when no individual service can fully satisfy a user's request. Unlike most previously proposed approaches, which produce only the best composition solution, the proposed QoS-aware framework returns the top-k solutions, allowing more candidates that are likely to meet the users' requirements. The approach is based on a succinct binary tree data structure, and a system named ATC (Approach to Top-k Composite services retrieval) is implemented. In ATC, QoS is taken into account for composite services, and a heuristic-based search method is proposed to retrieve the top-k composite services. Extensive experiments are designed, and two web service benchmarks are used for the performance study. The experimental results show that the proposed approach achieves high precision and efficiency for composite service search.
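The flavor of top-k composition retrieval can be sketched with a best-first search over service chains. The catalogue, the one-input/one-output service model, and the additive QoS cost are all simplifying assumptions for illustration; they are not ATC's data structure or search method.

```python
import heapq

# Hypothetical catalogue: each service turns one input concept into one
# output concept and carries a QoS cost (lower is better).
SERVICES = [
    ("s1", "zip", "city", 1.0),
    ("s2", "city", "weather", 2.0),
    ("s3", "zip", "weather", 4.0),
    ("s4", "city", "weather", 2.5),
]

def top_k_compositions(src, dst, k):
    """Uniform-cost (best-first) search over service chains. Partial chains
    are popped in order of accumulated QoS cost, so the first k complete
    chains reaching dst are the k best compositions."""
    heap = [(0.0, src, [])]
    results = []
    while heap and len(results) < k:
        cost, concept, chain = heapq.heappop(heap)
        if concept == dst:
            results.append((cost, chain))
            continue
        for name, inp, out, c in SERVICES:
            if inp == concept and name not in chain:  # avoid reusing a service
                heapq.heappush(heap, (cost + c, out, chain + [name]))
    return results
```

Returning k solutions instead of one is what gives the user fallback candidates when the nominal best composition turns out to be unsuitable.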
    A Heuristic Algorithm for Task Scheduling Based on Mean Load on Grid
    Li-Na Ni, Jin-Quan Zhang, Chun-Gang Yan, and Chang-Jun Jiang
    Journal of Data Acquisition and Processing, 2006, 21 (4): 559-564.
    Abstract   PDF(333KB) ( 1302 )  
    Efficient task scheduling is critical to achieving high performance in a grid computing environment. This paper studies task scheduling on a grid as an optimization problem and presents a heuristic task scheduling algorithm that achieves load balancing of grid resources. The algorithm first schedules tasks using the mean load, computed from predicted task execution times, as heuristic information to obtain an initial scheduling strategy. An optimized scheduling strategy is then obtained by selecting pairs of machines that satisfy a given condition and adjusting their loads by reassigning their tasks under the heuristic of their mean load. Methods for selecting machines and tasks are given to increase the throughput of the system and reduce the total waiting time. The efficiency of the algorithm is analyzed, and its performance is evaluated via extensive simulation experiments. Experimental results show that the heuristic algorithm ensures good load balancing and achieves a near-optimal scheduling strategy almost all the time. Furthermore, the results show that the algorithm is highly efficient in terms of time complexity.
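The two-stage structure (an initial assignment from predicted execution times, then pairwise rebalancing toward the mean load) can be sketched as follows. This is a minimal sketch of the general idea, not the paper's algorithm: the greedy initial rule and the "move a task while it shrinks the load gap" condition are assumptions.

```python
def schedule(tasks, m):
    """Initial assignment: each task (given by its predicted execution
    time) goes to the currently least-loaded of m machines."""
    loads = [0.0] * m
    assign = [[] for _ in range(m)]
    for t in sorted(tasks, reverse=True):  # place longest tasks first
        i = loads.index(min(loads))
        loads[i] += t
        assign[i].append(t)
    return loads, assign

def rebalance(loads, assign):
    """Repeatedly move a task from the most-loaded to the least-loaded
    machine when the move shrinks the load gap (task time < gap).
    Each such move strictly decreases the sum of squared loads,
    so the loop terminates."""
    while True:
        hi = loads.index(max(loads))
        lo = loads.index(min(loads))
        gap = loads[hi] - loads[lo]
        movable = [t for t in assign[hi] if t < gap]
        if not movable:
            return loads, assign
        t = max(movable)  # biggest improving move
        assign[hi].remove(t); assign[lo].append(t)
        loads[hi] -= t; loads[lo] += t
```

Both machines end up closer to the mean load after every move, which is the invariant the paper's heuristic exploits.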
    Performance Aware Service Pool in Dependable Service Oriented Architecture
    Gang Huang, Li Zhou, Xuan-Zhe Liu, Hong Mei, and Shing-Chi Cheung
    Journal of Data Acquisition and Processing, 2006, 21 (4): 565-573.
    Abstract   PDF(458KB) ( 1834 )  
    As a popular approach to dependable service-oriented architecture (SOA), a service pool collects a set of services providing the same functionality from different service providers in order to achieve the desired reliability. However, if the tradeoff between reliability and other important qualities, e.g., performance, has to be considered, the construction and management of a service pool become much more complex. In this paper, an automated approach to this problem is presented. Based on an investigation of service pools in the typical triangle SOA model, two challenges critical to the effectiveness and efficiency of service pools are identified: which services should be held by a pool, and in what order these services should be invoked. A set of algorithms is designed to address these two challenges, so that a service pool can be automatically constructed and managed for given reliability and performance requirements in polynomial time. The approach is demonstrated on a J2EE-based service platform, and comparison results between different pooling algorithms are evaluated.
    A Semantic Matchmaker for Ranking Web Services
    Bin Xu, Po Zhang, Juan-Zi Li, and Wen-Jun Yang
    Journal of Data Acquisition and Processing, 2006, 21 (4): 574-581.
    Abstract   PDF(336KB) ( 1485 )  
    This paper is concerned with a matchmaker that ranks web services using semantics. Several semantic matchmaking methods have been proposed so far; most of them, however, focus on classifying services into predefined categories rather than providing a ranking result. In this paper, a new semantic matchmaking method is proposed for ranking web services. It uses semantic distance to estimate the matching degree between a service and a user request. Four types of semantic distances are defined, and four algorithms are implemented to calculate them. Experimental results show that the proposed semantic matchmaker significantly outperforms the keyword-based baseline method.
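One common way to realize a semantic distance (not necessarily any of the paper's four definitions, which are not spelled out here) is the path length between two concepts in an ontology tree via their lowest common ancestor; services are then ranked by the distance between their advertised concept and the request. The mini-taxonomy below is invented for illustration.

```python
# Hypothetical mini-taxonomy: child concept -> parent concept.
TAXONOMY = {
    "Sedan": "Car", "SUV": "Car", "Car": "Vehicle",
    "Bicycle": "Vehicle", "Vehicle": "Thing",
}

def ancestors(c):
    """Path from concept c up to the taxonomy root, inclusive."""
    path = [c]
    while c in TAXONOMY:
        c = TAXONOMY[c]
        path.append(c)
    return path

def semantic_distance(a, b):
    """Tree path length between two concepts via their lowest common
    ancestor; 0 means an exact match."""
    pa, pb = ancestors(a), ancestors(b)
    for i, c in enumerate(pa):
        if c in pb:
            return i + pb.index(c)
    return float("inf")  # no common ancestor

def rank_services(request, services):
    """Rank (name, advertised concept) pairs by semantic distance to the
    requested concept; smaller distance means a better match."""
    return sorted(services, key=lambda s: semantic_distance(s[1], request))
```

Unlike a category classifier, this produces a total order over candidates, which is exactly the ranking behavior the abstract contrasts with earlier matchmakers.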
    Facilitating Service Discovery with Semantic Overlay
    Hai Jin, Hao Wu, and Xiao-Min Ning
    Journal of Data Acquisition and Processing, 2006, 21 (4): 582-591.
    Abstract   PDF(511KB) ( 1458 )  
    Service Oriented Architecture (SOA) and Peer-to-Peer (P2P) computing share many common characteristics, and combining the two emerging techniques is a promising way to advance web services (WS). Because service discovery plays a key role in this integration, a P2P-based framework for managing service knowledge and locating services is proposed. This paper describes the principle, construction and maintenance of the service semantic overlay architecture, and illustrates how the semantic overlay facilitates the discovery of service resources. To exploit the advantages of semantic web services, a Service Ontology, which serves as the service semantic model, is employed to describe services. Service discovery consists of two phases: searching on the service semantic overlay, and local discovery in a peer's service repository. Solutions are proposed to realize both phases. Furthermore, tests are carried out to evaluate service discovery on the architecture.
    Using DragPushing to Refine Concept Index for Text Categorization
    Xueqi Cheng, Songbo Tan, and Lilian Tang
    Journal of Data Acquisition and Processing, 2006, 21 (4): 592-596.
    Abstract   PDF(310KB) ( 1693 )  
    Concept indexing (CI) is a very fast and efficient feature extraction (FE) algorithm for text classification. The key idea of the CI scheme is to express each document as a function of the various concepts (centroids) present in the collection. However, the ability of centroids to represent the corpus for categorization is often impaired by so-called model misfit, caused by a number of factors in the FE process ranging from feature selection to the similarity measure. To address this issue, this work employs the "DragPushing" strategy to refine the centroids used for concept indexing. We present an extensive experimental evaluation of the refined concept index (RCI) on two English collections and one Chinese corpus using a state-of-the-art Support Vector Machine (SVM) classifier. The results indicate that in each case, RCI-based SVM yields much better performance than normal CI-based SVM, at a lower computation cost during the training and classification phases.
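The drag-pushing strategy the abstract names can be sketched as follows: for each misclassified training document, drag the centroid of its true class toward the document and push the wrongly predicted centroid away. This is a minimal 2-D sketch under assumed dot-product similarity and an assumed learning rate, not the paper's exact refinement procedure.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def classify(x, centroids):
    """Predict the class whose centroid is most similar to x (dot product)."""
    return max(centroids, key=lambda c: dot(x, centroids[c]))

def drag_push(docs, centroids, eta=0.1, epochs=5):
    """For each misclassified (vector, label) pair, drag the true class's
    centroid toward the document and push the predicted (wrong) centroid
    away from it, scaled by the learning rate eta."""
    for _ in range(epochs):
        for x, y in docs:
            pred = classify(x, centroids)
            if pred != y:
                centroids[y] = [c + eta * xi for c, xi in zip(centroids[y], x)]
                centroids[pred] = [c - eta * xi for c, xi in zip(centroids[pred], x)]
    return centroids
```

The refinement only touches centroids on errors, which is why it adds little cost on top of plain centroid construction.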
    Dynamic Query Optimization Approach for Semantic Database Grid
    Xiao-Qing Zheng, Hua-Jun Chen, Zhao-Hui Wu, and Yu-Xin Mao
    Journal of Data Acquisition and Processing, 2006, 21 (4): 597-608.
    Abstract   PDF(700KB) ( 3220 )  
    Fundamentally, a semantic grid database brings globally distributed databases together to coordinate resource sharing and problem solving, with information given well-defined meaning; DartGrid II is an implemented database grid system whose goal is to provide a semantic solution for integrating database resources on the Web. Although many algorithms have been proposed for optimizing query processing to minimize the cost and/or response time of answering a query in a distributed database system, the query optimization problem in a database grid is fundamentally different from traditional distributed query optimization. These differences are consequences of the autonomy and heterogeneity of database nodes in a database grid, which raise more challenges for query optimization than in traditional distributed databases. Following this observation, the design of the query optimizer in DartGrid II is presented, and a heuristic, dynamic and parallel approach to optimizing query processing in a database grid is proposed. A set of semantic tools supporting relational database integration and semantic-based information browsing has also been implemented to realize the above vision.
    BEAP: An End-User Agile Programming Paradigm for Business Applications
    Cheng-Chun Shu, Hai-Yan Yu, and Hao-Zhi Liu
    Journal of Data Acquisition and Processing, 2006, 21 (4): 609-619.
    Abstract   PDF(455KB) ( 1216 )  
    Business applications are subject to change with technology trends and market demands, yet responding quickly to these changes is still a challenging issue. Most existing architectures (e.g., CORBA, Web Services) still expose developers to excessive low-level details and force a tight coupling between program modules. For end users, developing, customizing, and reengineering applications remain difficult and time-consuming tasks. A high-level programming model is presented, together with a descriptive programming paradigm called BEAP, to facilitate end-user programming. In this approach, applications can be visually composed, in an event-driven fashion, from well-defined software components called "funnels". Application examples show that, by raising the level of abstraction and simplifying the programming model, BEAP can empower end users to build business applications on demand with improved productivity.
    Extending Interactive Web Services for Improving Presentation Level Integration in Web Portals
    Jing-Yu Song, Jun Wei, Shu-Chao Wan, and Tao Huang
    Journal of Data Acquisition and Processing, 2006, 21 (4): 620-629.
    Abstract   PDF(483KB) ( 1528 )  
    Presentation-level integration has become an important and fast-growing trend in enterprise computing. Portal-based composite applications use portlets and interactive web services, which usually offer several portlets, as their basic constituents. Hence, portlet description and discovery are key issues to be considered in the development of portal-based composite applications. This paper proposes a novel concept, the POI (Presentation Oriented Interface), to describe the presentation features of a portlet, so that interactive web services may be extended to facilitate the selection and interoperation of portlets. Portlet discovery can be effectively achieved by calculating POI similarity, which considers both type and structure similarity. Experiments show that the proposed approach can improve the satisfaction of portlet discovery and facilitate portlet interoperation, thereby achieving better application integration at the presentation level.
E-mail: info@sjcjycl.cn
 
  Copyright ©2015 JCST, All Rights Reserved