Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      05 July 2011, Volume 26 Issue 4   
    Special Section on Perspectives on Future Computer Science
    Preface
    Xiao-Ming Li, Xiaodong Zhang
    Journal of Data Acquisition and Processing, 2011, 26 (4): 567-568. 
    Abstract   PDF(173KB) ( 1528 )  
    Sponsored by NSF China and NSF USA, the Third U.S.-China Computer Science Leadership Summit was held at Peking University on June 14~15, 2010. Over 50 senior computer scientists from both countries came together to exchange the status of research activities and to discuss future trends in computer science. After the event, some of the delegates were kind enough to write down the thoughts exchanged during the summit, and these contributions form the major part of this special section.
    Understanding Science Through the Computational Lens
    Richard M. Karp
    Journal of Data Acquisition and Processing, 2011, 26 (4): 569-577. 
    Abstract   PDF(606KB) ( 2156 )  
    This article explores the changing nature of the interaction between computer science and the natural and social sciences. After briefly tracing the history of scientific computation, the article presents the concept of computational lens, a metaphor for a new relationship that is emerging between the world of computation and the world of the sciences. Our main thesis is that, in many scientific fields, the processes being studied can be viewed as computational in nature, in the sense that the processes perform dynamic transformations on information represented as digital data. Viewing natural or engineered systems through the lens of their computational requirements or capabilities provides new insights and ways of thinking. A number of examples are discussed in support of this thesis. The examples are from various fields, including quantum computing, statistical physics, the World Wide Web and the Internet, mathematics, and computational molecular biology.
    New Methodologies for Parallel Architecture
    Dong-Rui Fan (范东睿), Member, CCF, IEEE, Xiao-Wei Li (李晓维), and Guo-Jie Li (李国杰), Fellow, CCF
    Journal of Data Acquisition and Processing, 2011, 26 (4): 578-587. 
    Abstract   PDF(799KB) ( 3289 )  
    Moore's law continues to grant computer architects ever more transistors in the foreseeable future, and parallelism is the key to continued performance scaling in modern microprocessors. In this paper, the achievements of our research project on parallel architecture, supported by the National Basic Research 973 Program of China, are systematically presented. The innovative approaches and techniques for solving the significant problems in parallel architecture design are summarized, including architecture-level optimization, compiler- and language-supported technologies, reliability, power-performance-efficient design, test and verification challenges, and platform building. Two prototype chips, the multi-heavy-core Godson-3 and the many-light-core Godson-T, are described to demonstrate the highly scalable and reconfigurable parallel architecture designs. We also present some of our achievements published in ISCA, MICRO, ISSCC, HPCA, PLDI, PACT, IJCAI, Hot Chips, DATE, IEEE Trans. VLSI, IEEE Micro, IEEE Trans. Computers, etc.
    Internetware: An Emerging Software Paradigm for Internet Computing
    Hong Mei (梅宏), Fellow, CCF, and Xuan-Zhe Liu (刘譞哲), Member, CCF
    Journal of Data Acquisition and Processing, 2011, 26 (4): 588-599. 
    Abstract   PDF(1194KB) ( 2486 )  
    The Internet is undergoing a tremendous change towards a globalized computing environment. Due to the open, dynamic and uncontrollable nature of the Internet, software running in the Internet computing environment has some new features, which bring challenges to current software technologies in terms of software model, software operating platform, software engineering approaches and software quality. Researchers in China have proposed the term "Internetware" to denote this emerging software paradigm. Sponsored by the National Basic Research 973 Program, several research efforts on Internetware have been carried out in the past decade. This paper summarizes the progress and status of Internetware research. A technical solution framework for the Internetware paradigm is proposed from four aspects: the Internetware software model defines what the Internetware is to be; the Internetware middleware determines how to run the Internetware applications; the engineering methodology determines how to develop the Internetware applications; and the Internetware quality assurance determines how well the Internetware applications can perform. The paper also discusses the ongoing research issues and future trends of Internetware.
    A Programming Language Approach to Internet-Based Virtual Computing Environment
    Ji Wang (王戟), Senior Member, CCF, Member, IEEE, Rui Shen (沈锐), Student Member, IEEE, and Huai-Min Wang (王怀民), Senior Member, CCF, Member, IEEE
    Journal of Data Acquisition and Processing, 2011, 26 (4): 600-615. 
    Abstract   PDF(944KB) ( 1613 )  
    There is an increasing need to build scalable distributed systems over the Internet infrastructure. However, the development of scalable distributed applications suffers from the lack of a widely accepted virtual computing environment. Users have to make great efforts to manage and share the resources involved over the Internet, which are characterized by intrinsic growth, autonomy and diversity. To deal with this challenge, the Internet-based Virtual Computing Environment (iVCE) is proposed and developed to serve as a platform for scalable distributed applications over the open infrastructure, whose kernel mechanisms are on-demand aggregation and autonomic collaboration of resources. In this paper, we present a programming language for iVCE named Owlet. Owlet conforms to the conceptual model of iVCE and exposes the iVCE to application developers. As an interaction language based on a peer-to-peer content-based publish/subscribe scheme, Owlet abstracts the Internet as an environment for roles to interact in, and uses roles to build a relatively stable view of resources for on-demand resource aggregation. It provides language constructs for 1) distributed event-driven rules to describe interaction protocols among different roles, 2) conversations to correlate events and rules into a common context, and 3) resource pooling for fault tolerance and load balancing among networked nodes. We have implemented an Owlet compiler and its runtime environment according to the architecture of iVCE, and built several Owlet applications, including a peer-to-peer file sharing application. Experimental results show that, with iVCE, the separation of resource aggregation logic and business logic significantly eases the process of building scalable distributed applications.
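    The sketch below illustrates the content-based publish/subscribe interaction style that, per the abstract, Owlet builds on: roles react to events whose content matches their interest. The Broker class, role names and event fields are illustrative assumptions for this note, not the Owlet language or the iVCE runtime.

```python
# Minimal sketch of content-based publish/subscribe among roles (all names are
# hypothetical; this is not Owlet syntax or the iVCE middleware).
from typing import Callable, Dict, List, Tuple

Event = Dict[str, object]

class Broker:
    """Routes events to subscribers whose predicate matches the event content."""
    def __init__(self) -> None:
        self._subs: List[Tuple[Callable[[Event], bool], Callable[[Event], None]]] = []

    def subscribe(self, predicate: Callable[[Event], bool],
                  handler: Callable[[Event], None]) -> None:
        self._subs.append((predicate, handler))

    def publish(self, event: Event) -> None:
        for predicate, handler in self._subs:
            if predicate(event):
                handler(event)

broker = Broker()

# A "requestor" role registers an event-driven rule that fires only on events
# whose content matches its interest; a "provider" role then publishes an offer.
broker.subscribe(lambda e: e.get("type") == "file-offer" and e.get("name") == "report.pdf",
                 lambda e: print("requestor: downloading from", e["peer"]))

broker.publish({"type": "file-offer", "name": "report.pdf", "peer": "node-17"})
```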
    Three New Concepts of Future Computer Science
    Zhi-Wei Xu (徐志伟) and Dan-Dan Tu (涂丹丹)
    Journal of Data Acquisition and Processing, 2011, 26 (4): 616-624. 
    Abstract   PDF(304KB) ( 4813 )  
    This article presents an observation resulting from the six-year Sino-USA computer science leadership exchanges: the trend towards the emergence of a new computer science that is more universal and fundamental than that of the past. In the 21st century, the field of computer science is experiencing fundamental transformations in its scope, objects of study, basic metrics, main abstractions, fundamental principles, and its relationship to other sciences and to human society, while inheriting the basic way of thinking and the time-tested body of knowledge accumulated over the past 50 years. We discuss three new concepts related to this trend: the computational lens and computational thinking, articulated by US scientists, and ternary computing for the masses, proposed by Chinese scientists. We review the salient features of these concepts, discuss their impact, and summarize future research directions.
    Possibilities for Healthcare Computing
    Peter Szolovits
    Journal of Data Acquisition and Processing, 2011, 26 (4): 625-631. 
    Abstract   PDF(226KB) ( 1518 )  
    Advances in computing technology promise to aid in achieving the goals of healthcare. We review how such changes can support each of the goals of healthcare as identified by the U.S. Institute of Medicine: safety, effectiveness, patient-centricity, timeliness, efficiency, and equitability. We also describe current foci of computing technology research aimed at realizing the ambitious goals for health information technology that have been set by the American Recovery and Reinvestment Act of 2009 and the Health Reform Act of 2010. Finally, we mention efforts to build health information technologies to support improved healthcare delivery in developing countries.
    Overview of Center for Domain-Specific Computing
    Jason Cong (丛京生)
    Journal of Data Acquisition and Processing, 2011, 26 (4): 632-635. 
    Abstract   PDF(1101KB) ( 1562 )  
    In this short article, we would like to introduce the Center for Domain-Specific Computing (CDSC) established in 2009, primarily funded by the US National Science Foundation with an award from the 2009 Expeditions in Computing Program. In this project we look beyond parallelization and focus on customization as the next disruptive technology to bring orders-of-magnitude power-performance efficiency improvement for applications in a specific domain.
    The Challenges of Multidisciplinary Education in Computer Science
    Fred S. Roberts
    Journal of Data Acquisition and Processing, 2011, 26 (4): 636-642. 
    Abstract   PDF(195KB) ( 1465 )  
    Some of the most important problems facing the United States and China, indeed facing our entire planet, require approaches that are fundamentally multidisciplinary in nature. Many of those require skills in computer science (CS), basic understanding of another discipline, and the ability to apply the skills in one discipline to the problems of another. Modern training in computer science needs to prepare students to work in other disciplines or to work on multidisciplinary problems. What do we do to prepare them for a multidisciplinary world when there are already too many things we want to teach them about computer science? This paper describes successful examples of multidisciplinary education at the interface between CS and the biological sciences, as well as other examples involving CS and security, CS and sustainability, and CS and the social and economic sciences. It then discusses general principles for multidisciplinary education of computer scientists.
    Machine Learning and Data Mining
    DHC: Distributed, Hierarchical Clustering in Sensor Networks
    Xiu-Li Ma (马秀莉), Hai-Feng Hu (胡海峰), Shuang-Feng Li (李双峰), Hong-Mei Xiao (肖红梅), Qiong Luo (罗琼), Dong-Qing Yang (杨冬青), Member, CCF, and Shi-Wei Tang (唐世渭), Senior Member, CCF
    Journal of Data Acquisition and Processing, 2011, 26 (4): 643-662. 
    Abstract   PDF(1112KB) ( 1553 )  
    In many sensor network applications, it is essential to obtain the distribution of an attribute's values over the network. Such a data distribution can be obtained through clustering, which partitions the network into contiguous regions, each of which contains sensor nodes with a range of similar readings. This paper proposes a method named Distributed, Hierarchical Clustering (DHC) for online data analysis and mining in sensor networks. Different from the acquisition and aggregation of raw sensory data, DHC clusters sensor nodes based on their current data values as well as their geographical proximity, and computes a summary for each cluster. Furthermore, these clusters, together with their summaries, are produced in a distributed, bottom-up manner. The resulting hierarchy of clusters and their summaries facilitates interactive data exploration at multiple resolutions. It can also be used to improve the efficiency of data-centric routing and query processing in sensor networks. We also design and evaluate maintenance mechanisms that enable DHC to work on evolving data. Our simulation results on real-world as well as synthetic datasets show the effectiveness and efficiency of our approach.
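    The toy sketch below shows the bottom-up idea described in the abstract: geographically adjacent nodes with similar readings are merged into one cluster, and each cluster keeps a compact summary. The grid topology, similarity threshold and summary format are assumptions made for illustration; the paper's distributed, in-network protocol and its maintenance mechanisms are not reproduced.

```python
# Centralized toy version of value-and-proximity clustering with per-cluster summaries.
def similar(a, b, eps=2.0):
    return abs(a - b) <= eps

def cluster_grid(readings, eps=2.0):
    """readings: dict mapping (x, y) grid position -> sensed value."""
    parent = {p: p for p in readings}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    def union(p, q):
        parent[find(p)] = find(q)

    # Merge geographically adjacent nodes with similar readings (union-find).
    for (x, y), v in readings.items():
        for nb in ((x + 1, y), (x, y + 1)):
            if nb in readings and similar(v, readings[nb], eps):
                union((x, y), nb)

    clusters = {}
    for p, v in readings.items():
        clusters.setdefault(find(p), []).append(v)
    # Summarize each cluster as (size, min, max, mean): cheap enough to ship upstream.
    return {root: (len(vs), min(vs), max(vs), sum(vs) / len(vs))
            for root, vs in clusters.items()}

demo = {(0, 0): 20.1, (0, 1): 20.6, (1, 0): 35.0, (1, 1): 34.2}
print(cluster_grid(demo))   # two clusters: a "cool" region and a "hot" region
```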
    Bootstrapping Object Coreferencing on the Semantic Web
    Wei Hu (胡伟), Yu-Zhong Qu (瞿裕忠), Senior Member, CCF, and Xing-Zhi Sun (孙行智)
    Journal of Data Acquisition and Processing, 2011, 26 (4): 663-675. 
    Abstract   PDF(529KB) ( 2335 )  
    An object on the Semantic Web is likely to be denoted with several URIs by different parties. Object coreferencing is the process of identifying "equivalent" URIs of objects for achieving a better Data Web. In this paper, we propose a bootstrapping approach for object coreferencing on the Semantic Web. For an object URI, we first establish a kernel that consists of semantically equivalent URIs derived from same-as statements, (inverse) functional properties and (max-)cardinalities, and then extend the kernel with respect to the textual descriptions (e.g., labels and local names) of URIs. We also propose a trustworthiness-based method to rank the coreferent URIs in the kernel, as well as a similarity-based method for ranking the URIs in the extension of the kernel. We implement the proposed approach, called ObjectCoref, on a large-scale dataset that contains 76 million URIs collected by the Falcons search engine up to 2008. The evaluation on precision, relative recall and response time demonstrates the feasibility of our approach. Additionally, we apply the proposed approach to investigate the popularity of the URI alias phenomenon on the current Semantic Web.
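    The following sketch illustrates the two-stage idea in the abstract: a kernel of coreferent URIs built from explicit same-as links (here by transitive closure only), then an extension found by comparing textual descriptions such as labels. The sample data, string-similarity measure and threshold are assumptions; the paper's use of (inverse) functional properties, cardinalities and trustworthiness ranking is not modelled.

```python
# Kernel-and-extension sketch of object coreferencing (illustrative data and names).
from difflib import SequenceMatcher

same_as = [("ex:Beijing", "dbpedia:Beijing"), ("dbpedia:Beijing", "geo:1816670")]
labels = {"ex:Beijing": "Beijing", "dbpedia:Beijing": "Beijing",
          "geo:1816670": "Beijing, China", "ex:Peking": "Peking (Beijing)"}

def kernel(seed, links):
    """Transitive closure over same-as links starting from the seed URI."""
    group, frontier = {seed}, [seed]
    while frontier:
        u = frontier.pop()
        for a, b in links:
            for other in ((b,) if a == u else (a,) if b == u else ()):
                if other not in group:
                    group.add(other)
                    frontier.append(other)
    return group

def extend(group, labels, threshold=0.6):
    """Add URIs whose label is textually similar to a label already in the kernel."""
    known = [labels[u] for u in group if u in labels]
    out = set()
    for uri, text in labels.items():
        if uri in group:
            continue
        if any(SequenceMatcher(None, text.lower(), k.lower()).ratio() >= threshold
               for k in known):
            out.add(uri)
    return out

k = kernel("ex:Beijing", same_as)
print(k)                    # the kernel of explicitly equivalent URIs
print(extend(k, labels))    # candidate coreferent URIs found via label similarity
```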
    Guided Structure-Aware Review Summarization
    Feng Jin (金锋), Min-Lie Huang (黄民烈), and Xiao-Yan Zhu (朱小燕), Member, CCF
    Journal of Data Acquisition and Processing, 2011, 26 (4): 676-684. 
    Abstract   PDF(581KB) ( 1449 )  
    Although the goal of traditional text summarization is to generate summaries with diverse information, most existing applications have no explicit definition of the information structure. Thus, it is difficult to generate truly structure-aware summaries, because the information structure needed to guide summarization is unclear. In this paper, we present a novel framework to generate guided summaries for product reviews. A guided summary has an explicitly defined structure, which comes from the important aspects of products. The proposed framework attempts to maximize expected aspect satisfaction during summary generation. The importance of an aspect to a generated summary is modeled using Labeled Latent Dirichlet Allocation. Experimental results on consumer reviews of cars show the effectiveness of our method.
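    As a rough illustration of "maximizing expected aspect satisfaction", the sketch below greedily picks review sentences that add the most not-yet-covered aspect weight. In the paper the aspect importance comes from Labeled LDA; here the weights, aspect tags and sentences are hard-coded assumptions, and the greedy selection is only a stand-in for the paper's generation procedure.

```python
# Aspect-weighted greedy sentence selection (illustrative data, not the paper's model).
aspect_weight = {"fuel economy": 0.4, "comfort": 0.3, "price": 0.2, "styling": 0.1}

sentences = [
    ("Great mileage on the highway.", {"fuel economy"}),
    ("Seats are comfortable and the ride is quiet.", {"comfort"}),
    ("A bit pricey but it looks sharp.", {"price", "styling"}),
    ("Fuel economy is excellent for the class.", {"fuel economy"}),
]

def summarize(candidates, weights, k=2):
    chosen, covered = [], set()
    pool = list(candidates)
    for _ in range(k):
        # Pick the sentence adding the largest total weight of uncovered aspects.
        best = max(pool, key=lambda s: sum(weights[a] for a in s[1] - covered), default=None)
        if best is None or not (best[1] - covered):
            break
        chosen.append(best[0])
        covered |= best[1]
        pool.remove(best)
    return chosen

print(summarize(sentences, aspect_weight))
```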
    Aggressive Complex Event Processing with Confidence over Out-of-Order Streams
    Chuan-Wen Li (李传文), Student Member, CCF, Member, ACM, Yu Gu (谷峪), Member, CCF, ACM, and Ge Yu (于戈), Senior Member, CCF, Member, ACM
    Journal of Data Acquisition and Processing, 2011, 26 (4): 685-696. 
    Abstract   PDF(483KB) ( 1519 )  
    In recent years, there has been a growing need for complex event processing (CEP), ranging from supply chain management to security monitoring. In many scenarios, events are generated at different sources but arrive at the central server out of order due to differences in network latency. Most state-of-the-art techniques process out-of-order events by buffering them until the total event order within a specified range can be guaranteed, which increases response time and reduces system throughput. This paper aims to build a high-performance out-of-order event processing mechanism that can match events as soon as they arrive instead of buffering them until all of them have arrived. A suffix-automaton-based event matching algorithm is proposed to speed up query processing, and a confidence-based accuracy evaluation is proposed to control the quality of query results. The performance of our approach is evaluated through detailed accuracy and response time analysis. As the experimental results show, our approach significantly speeds up query matching and produces reasonable query results.
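    The toy sketch below shows the contrast the abstract draws with buffering approaches: matches for a simple "A followed by B with no C in between" pattern are emitted as soon as a B arrives, and a late-arriving C can retract a previously emitted match. The pattern, data model and retraction policy are assumptions for this note; the paper's suffix-automaton matcher and confidence model are not reproduced.

```python
# Aggressive (non-buffering) matching of SEQ(A, B) with "no C in between", by timestamp.
class AggressiveMatcher:
    def __init__(self):
        self.a_times, self.c_times, self.emitted = [], [], []

    def on_event(self, etype, ts):
        if etype == "A":
            self.a_times.append(ts)
        elif etype == "C":
            self.c_times.append(ts)
            # A late C may invalidate matches whose window it falls into.
            self.emitted = [(a, b) for (a, b) in self.emitted if not (a < ts < b)]
        elif etype == "B":
            for a in self.a_times:
                if a < ts and not any(a < c < ts for c in self.c_times):
                    self.emitted.append((a, ts))
        return list(self.emitted)

m = AggressiveMatcher()
for etype, ts in [("A", 1), ("B", 5), ("C", 3), ("B", 8)]:
    print(etype, ts, "->", m.on_event(etype, ts))   # (1, 5) is emitted, then retracted by C@3
```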
    Information Security
    Provably Secure Role-Based Encryption with Revocation Mechanism
    Yan Zhu (朱岩), Member, CCF, Hong-Xin Hu (胡宏新), Gail-Joon Ahn, Senior Member, ACM, IEEE, Huai-Xi Wang (王怀习), and Shan-Biao Wang (王善标)
    Journal of Data Acquisition and Processing, 2011, 26 (4): 697-710. 
    Abstract   PDF(785KB) ( 1564 )  
    Role-Based Encryption (RBE) realizes access control over encrypted data according to the widely adopted hierarchical RBAC model. In this paper, we present a practical RBE scheme with a revocation mechanism based on a partial-order key hierarchy with respect to the public key infrastructure, in which each user is assigned a unique private key to support user identification, and each role corresponds to a public group key that is used to encrypt data. Based on this key hierarchy structure, our RBE scheme allows a sender to directly specify a role for encrypting data, which can be decrypted by all senior roles, as well as to revoke any subgroup of users and roles. We give a full proof of security of our scheme against hierarchical collusion attacks. In contrast to existing solutions for encrypted file systems, our scheme not only supports dynamically joining and revoking users, but also has shorter ciphertexts and constant-size decryption keys.
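    To make the partial-order key hierarchy concrete, the sketch below shows a senior role deriving the keys of the roles below it, so anything encrypted to a junior role is readable up the hierarchy. This toy uses symmetric hash-based key derivation purely to illustrate the hierarchy idea; the paper's scheme is public-key based, supports revocation and user identification, and is provably secure, none of which is modelled here.

```python
# Hash-based key derivation over a small role partial order (illustration only).
import hashlib

# role -> list of immediately junior roles
juniors = {"director": ["manager"], "manager": ["engineer"], "engineer": []}

def derive(key: bytes, child_role: str) -> bytes:
    return hashlib.sha256(key + child_role.encode()).digest()

def reachable_keys(role: str, key: bytes) -> dict:
    """All role keys a holder of (role, key) can derive, including its own."""
    keys = {role: key}
    for child in juniors[role]:
        keys.update(reachable_keys(child, derive(key, child)))
    return keys

director_key = b"\x01" * 32
print(sorted(reachable_keys("director", director_key)))                       # all three roles
print(sorted(reachable_keys("manager", derive(director_key, "manager"))))     # manager, engineer only
```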
    Improvement on the Multihop Shareholder Discovery for Threshold Secret Sharing in MANETs
    Seleviawati Tarmizi, Prakash Veeraraghavan, and Somnath Ghosh
    Journal of Data Acquisition and Processing, 2011, 26 (4): 711-721. 
    Abstract   PDF(541KB) ( 1351 )  
    The collaboration of at least a threshold number of secret shareholders in a threshold secret sharing scheme is a strict requirement to ensure its intended functionality. Due to its promising characteristics, such a scheme has been proposed to solve a range of security problems in mobile ad hoc networks. However, discovering a sufficient number of secret shareholders in such dynamic and unpredictable networks is not easy. In this paper, we propose a more efficient shareholder discovery mechanism compared to our previous work. The discovery process is performed in a multihop fashion to suit the mobile ad hoc network environment. We introduce batch extension, which gradually extends the shareholders' collaboration boundary by more than one hop at a time around the service requestor, to find at least the threshold number of unknown shareholders. Through batch extension, reply aggregation becomes applicable, hence reducing the redundant use of reply routes, decreasing the number of required packet transmissions, and lessening the service delay, compared to the previously proposed mechanism. Our simulation results show that, with an appropriate batch size, the new mechanism is more efficient, with only an insignificant increase in control overhead.
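    The sketch below captures the batch extension idea at its simplest: the search boundary around the requestor grows by several hops per round and stops once at least the threshold number of shareholders falls inside it. The graph, hop distances and batch size are toy assumptions; routing, reply aggregation and mobility are omitted.

```python
# Boundary extension by `batch` hops per round until enough shareholders are reached.
from collections import deque

def hops_from(source, adjacency):
    """Breadth-first hop distance from the source to every reachable node."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def discover(source, adjacency, shareholders, threshold, batch=2, max_hops=8):
    dist = hops_from(source, adjacency)
    boundary = 0
    while boundary < max_hops:
        boundary += batch          # extend the collaboration boundary by `batch` hops
        found = [n for n in shareholders if dist.get(n, max_hops + 1) <= boundary]
        if len(found) >= threshold:
            return boundary, found
    return boundary, []

adjacency = {"s": ["a"], "a": ["s", "b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(discover("s", adjacency, shareholders={"b", "c", "d"}, threshold=2))
```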
    Differential Attack on Five Rounds of the SC2000 Block Cipher
    Ji-Qiang Lv (吕继强)
    Journal of Data Acquisition and Processing, 2011, 26 (4): 722-731. 
    Abstract   PDF(372KB) ( 1331 )  
    The SC2000 block cipher has a 128-bit block size and a user key of 128, 192 or 256 bits, and employs a total of 6.5 rounds when a 128-bit user key is used. It is a CRYPTREC-recommended e-government cipher in Japan. In this paper we address how to recover the user key from a few subkey bits of SC2000, and describe two 4.75-round differential characteristics with probability 2^-126 and seventy-six 4.75-round differential characteristics with probability 2^-127. Finally, we present a differential cryptanalysis attack on a 5-round reduced version of SC2000 when used with a 128-bit key; the attack requires 2^125.68 chosen plaintexts and has a time complexity of 2^125.75 5-round SC2000 encryptions. The attack does not threaten the security of the full SC2000 cipher, but it suggests for the first time that the safety margin of SC2000 with a 128-bit key decreases to below one and a half rounds.
    A New Protocol for the Detection of Node Replication Attacks in Mobile Wireless Sensor Networks
    Xiao-Ming Deng (邓晓明) and Yan Xiong (熊焰), Member, IEEE, ACM, CCF
    Journal of Data Acquisition and Processing, 2011, 26 (4): 732-743. 
    Abstract   PDF(379KB) ( 2100 )  
    Wireless sensor networks (WSNs) are often deployed in harsh environments, where adversaries can capture some nodes, replicate them and deploy the replicas back at strategic positions in the network to launch a variety of attacks. These are referred to as node replication attacks. Some methods of defending against node replication attacks have been proposed, yet they are not well suited to mobile wireless sensor networks. In this paper, we propose a new protocol to detect replicas in mobile WSNs. In this protocol, a polynomial-based pair-wise key pre-distribution scheme and Counting Bloom Filters are used to guarantee that replicas can never lie about their real identifiers and to collect the number of pair-wise keys established by each sensor node. Replicas are detected by checking whether the number of pair-wise keys established by a node exceeds a threshold. We also derive an accurate closed-form expression for the expected number of pair-wise keys established by each node under the commonly used random waypoint model. Analyses and simulations verify that the protocol accurately detects replicas in mobile WSNs and supports their removal.
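    The toy sketch below shows the counting step of the detection idea: every pair-wise key establishment inserts the node's identifier into a Counting Bloom Filter, and a node whose estimated key count exceeds a threshold is flagged, since its replicas keep establishing new keys under the same identifier. The filter size, hash construction and threshold value are assumptions made for illustration; the key pre-distribution scheme and the closed-form threshold analysis are not reproduced.

```python
# Counting Bloom Filter over node identifiers, with a replica-detection threshold.
import hashlib

class CountingBloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.counts = [0] * size

    def _positions(self, item: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.counts[pos] += 1

    def estimate(self, item: str) -> int:
        # The minimum counter is an upper bound on the true count (collisions only inflate it).
        return min(self.counts[pos] for pos in self._positions(item))

THRESHOLD = 5          # assumed maximum plausible key count for a single genuine node
cbf = CountingBloomFilter()

for _ in range(8):     # node "n42" (and its replicas) keeps establishing pair-wise keys
    cbf.add("n42")
for _ in range(3):     # node "n07" behaves normally
    cbf.add("n07")

for node in ("n42", "n07"):
    status = "replica suspected" if cbf.estimate(node) > THRESHOLD else "ok"
    print(node, cbf.estimate(node), status)
```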
    Impossible Differential Attacks on 13-Round CLEFIA-128
    Hamid Mala, Mohammad Dakhilalian, and Mohsen Shakiba
    Journal of Data Acquisition and Processing, 2011, 26 (4): 744-750. 
    Abstract   PDF(457KB) ( 1429 )  
    CLEFIA, a new 128-bit block cipher proposed by Sony Corporation, is increasingly attracting cryptanalysts' attention. In this paper, we present two new impossible differential attacks on 13 rounds of CLEFIA-128. The proposed attacks utilize a variety of previously known techniques, in particular the hash table technique and redundancy in the key schedule of this block cipher. The first attack does not consider the whitening layers of CLEFIA, requires 2^109.5 chosen plaintexts, and has a running time equivalent to about 2^112.9 encryptions. The second attack preserves the whitening layers, requires 2^117.8 chosen plaintexts, and has a total time complexity equivalent to about 2^121.2 encryptions.