Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      05 July 2012, Volume 27 Issue 4   
    Special Section on Theoretical Computer Science
    Preface
    Xiao-Ming Sun
    Journal of Data Acquisition and Processing, 2012, 27 (4): 667-667. 
    Abstract   PDF(98KB) ( 1296 )  
    In the summer of 2010, JCST invited me to organize a special section on theoretical computer science (TCS). The goal of this special section is to provide a venue for presenting recent advances in the TCS community and to promote theoretical computer science research in China.
    Satisfiability with Index Dependency
    Hong-Yu Liang (梁宏宇) and Jing He (何晶)
    Journal of Data Acquisition and Processing, 2012, 27 (4): 668-677. 
    Abstract   PDF(336KB) ( 1749 )  
    We study the Boolean satisfiability problem (SAT) restricted to input formulas for which linear arithmetic constraints are imposed on the indices of variables occurring in the same clause. This can be seen as a structural counterpart of Schaefer's dichotomy theorem, which studies the SAT problem with additional constraints on the assigned values of variables in the same clause. More precisely, let k-SAT(m,A) denote the SAT problem restricted to instances of k-CNF formulas, in every clause of which the indices of the last k−m variables are totally decided by the first m ones through some linear equations chosen from A. For example, if A contains i3 = i1 + 2i2 and i4 = i2 - i1 + 1, then a clause of the input to 4-SAT(2,A) has the form y_{i1} ∨ y_{i2} ∨ y_{i1+2i2} ∨ y_{i2-i1+1}, with each y_i being either x_i or ¬x_i. We obtain the following results: 1) If m ≥ 2, then for any set A of linear constraints, the restricted problem k-SAT(m,A) is either in P or NP-complete, assuming P ≠ NP. Moreover, the corresponding #SAT problem is always #P-complete, and the Max-SAT problem does not admit a polynomial-time approximation scheme assuming P ≠ NP. 2) m = 1, that is, in every clause only one index can be chosen freely. In this case, we develop a general framework together with some techniques for designing polynomial-time algorithms for the restricted SAT problems. Using these, we prove that for any A, #2-SAT(1,A) and Max-2-SAT(1,A) are both polynomial-time solvable, which is in sharp contrast with the hardness results for general #2-SAT and Max-2-SAT. For fixed k ≥ 3, we obtain a large class of non-trivial constraint sets A under which the problems k-SAT(1,A), #k-SAT(1,A) and Max-k-SAT(1,A) can all be solved in polynomial time or quasi-polynomial time.
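    For concreteness, the following minimal Python sketch (not from the paper) checks whether the variable indices of a clause obey the example constraint set A = {i3 = i1 + 2*i2, i4 = i2 - i1 + 1} given above, so that the clause is a legal input to 4-SAT(2, A); the helper name and clause representation are assumptions made for illustration.

```python
# Minimal sketch (not from the paper): check that the variable indices of a
# 4-CNF clause obey the example constraint set A = {i3 = i1 + 2*i2,
# i4 = i2 - i1 + 1}, so the clause is a legal input to 4-SAT(2, A).

def satisfies_index_constraints(indices, constraints, m):
    """indices: tuple (i1, ..., ik) of variable indices in one clause.
    constraints: list of functions mapping the first m indices to the
    expected value of each of the remaining k - m indices."""
    free, dependent = indices[:m], indices[m:]
    return all(f(*free) == d for f, d in zip(constraints, dependent))

# Example constraint set A from the abstract.
A = [
    lambda i1, i2: i1 + 2 * i2,      # i3 = i1 + 2*i2
    lambda i1, i2: i2 - i1 + 1,      # i4 = i2 - i1 + 1
]

print(satisfies_index_constraints((2, 3, 8, 2), A, m=2))   # True
print(satisfies_index_constraints((2, 3, 7, 2), A, m=2))   # False
```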
    A Quadratic Lower Bound for Three-Query Linear Locally Decodable Codes over Any Field
    David P. Woodruff
    Journal of Data Acquisition and Processing, 2012, 27 (4): 678-686. 
    Abstract   PDF(272KB) ( 1512 )  
    A linear (q, δ, ε, m(n))-locally decodable code (LDC) C: F^n → F^{m(n)} is a linear transformation from the vector space F^n to the space F^{m(n)} for which each message symbol x_i can be recovered with probability at least 1/|F| + ε from C(x) by a randomized algorithm that queries only q positions of C(x), even if up to δm(n) positions of C(x) are corrupted. In recent work, Dvir showed that lower bounds for linear LDCs can imply lower bounds for arithmetic circuits. He suggests that proving lower bounds for LDCs over the complex or real field is a good starting point for approaching one of his conjectures. Our main result is an m(n) = Ω(n^2) lower bound for linear 3-query LDCs over any, possibly infinite, field. The constant in the Ω(·) depends only on ε and δ. This is the first lower bound better than the trivial m(n) = Ω(n) for arbitrary fields and more than two queries.
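    As a concrete illustration of the definition (not from the paper), the classical Hadamard code over F_2 is the textbook example of a linear 2-query LDC, with the exponential length m(n) = 2^n: x_i is recovered from the two queries C(a) and C(a ⊕ e_i). A minimal sketch:

```python
# Illustrative sketch (not from the paper): the Hadamard code over F_2 is the
# standard example of a linear 2-query LDC with m(n) = 2^n. Bit x_i is
# recovered from two queries, since <x, a> + <x, a XOR e_i> = x_i (mod 2).
import random

def hadamard_encode(x):
    """x: list of bits. Codeword position a stores the parity <x, a> mod 2."""
    n = len(x)
    return [sum(x[j] for j in range(n) if (a >> j) & 1) % 2 for a in range(2 ** n)]

def decode_bit(codeword, n, i):
    """Recover x_i from two (random) positions of the codeword."""
    a = random.randrange(2 ** n)
    b = a ^ (1 << i)                 # flip the i-th coordinate of the mask
    return (codeword[a] + codeword[b]) % 2

x = [1, 0, 1, 1]
C = hadamard_encode(x)
print([decode_bit(C, len(x), i) for i in range(len(x))])  # [1, 0, 1, 1]
```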
    On Isomorphism Testing of Groups with Normal Hall Subgroups
    You-Ming Qiao (乔友明), Jayalal Sarma M.N., and Bang-Sheng Tang (唐邦晟)
    Journal of Data Acquisition and Processing, 2012, 27 (4): 687-701. 
    Abstract   PDF(476KB) ( 1525 )  
    A normal Hall subgroup N of a group G is a normal subgroup whose order is coprime with its index. The Schur-Zassenhaus theorem states that every normal Hall subgroup has a complement subgroup, that is, a set of coset representatives H which also forms a subgroup of G. In this paper, we present a framework for testing isomorphism of groups with at least one normal Hall subgroup, when groups are given as multiplication tables. To establish the framework, we first observe that a proof of the Schur-Zassenhaus theorem is constructive, and we formulate a necessary and sufficient condition for testing isomorphism in terms of the associated actions of the semidirect products and isomorphisms of the normal parts and complement parts. We then focus on the case when the normal subgroup is abelian. Utilizing basic facts of representation theory of finite groups and a technique by Le Gall (STACS 2009), we first obtain an efficient isomorphism testing algorithm when the complement has a bounded number of generators. For the case when the complement subgroup is elementary abelian, which does not necessarily have a bounded number of generators, we obtain a polynomial-time isomorphism testing algorithm by reducing to the generalized code isomorphism problem, which asks whether two linear subspaces are the same up to permutation of coordinates. A solution to the latter can be obtained by a mild extension of the singly exponential (in the number of coordinates) time algorithm for the code isomorphism problem developed recently by Babai et al. (SODA 2011). En route to obtaining the above reduction, we study the following computational problem in representation theory of finite groups: given two representations ρ and τ of a group H over Z_{p^d}, p a prime, determine whether there exists an automorphism Φ: H → H such that the induced representation ρ^Φ = ρ ∘ Φ and τ are equivalent, in time poly(|H|, p^d).
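    As a small companion to the setting (an illustration only, not the paper's algorithm), the following sketch checks, from a multiplication table, whether a candidate subset N is a normal Hall subgroup: closed under the product (which is enough for a nonempty finite subset to be a subgroup), normal, and of order coprime to its index. Function and argument names are assumptions.

```python
from math import gcd

def is_normal_hall_subgroup(mult, N):
    """mult: |G| x |G| multiplication table, mult[g][h] = g*h (indices 0..|G|-1).
    N: iterable of element indices. Returns True iff N is a normal Hall subgroup."""
    G = range(len(mult))
    N = set(N)
    # identity and inverses read off the table
    e = next(g for g in G if all(mult[g][x] == x for x in G))
    inv = {g: next(h for h in G if mult[g][h] == e) for g in G}
    closed = all(mult[a][b] in N for a in N for b in N)            # subgroup test
    normal = all(mult[mult[g][n]][inv[g]] in N for g in G for n in N)
    coprime = gcd(len(N), len(mult) // len(N)) == 1                # |N| coprime to index
    return closed and normal and coprime

# Z6 under addition mod 6; N = {0, 2, 4} has order 3 and index 2.
Z6 = [[(a + b) % 6 for b in range(6)] for a in range(6)]
print(is_normal_hall_subgroup(Z6, {0, 2, 4}))   # True
```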
    Envy-Free Pricing with General Supply Constraints for Unit Demand Consumers
    Sungjin Im, Pin-Yan Lu (陆品燕), Member, ACM, and Ya-Jun Wang (王亚军), Member, ACM
    Journal of Data Acquisition and Processing, 2012, 27 (4): 702-709. 
    Abstract   PDF(323KB) ( 1296 )  
    The envy-free pricing problem can be stated as finding a pricing and allocation scheme in which each consumer is allocated a set of items that maximizes his/her utility under the pricing. The goal is to maximize seller revenue. We study the problem with general supply constraints, which are given as an independence system defined over the items. The constraints, for example, can be a number of linear constraints or matroids. This captures the situation where items do not pre-exist but are produced, in reflection of consumer valuation of the items, under the limit of resources. This paper focuses on the case of unit-demand consumers. In this setting, there are n consumers and m items; each item may be produced in multiple copies. Each consumer i ∈ [n] has a valuation v_{ij} on each item j in the set S_i in which he/she is interested. He/she must be allocated (if any) an item which gives the maximum (non-negative) utility. Suppose we are given an α-approximation oracle for finding the maximum weight independent set for the given independence system (or a slightly stronger oracle); for a large number of natural and interesting supply constraints, constant approximation algorithms are available. We obtain the following results: 1) an O(α log n)-approximation for the general case; 2) an O(αk)-approximation when each consumer is interested in at most k distinct types of items; 3) an O(αf)-approximation when each type of item is of interest to at most f consumers. Note that the final two results were previously unknown even without the independence system constraint.
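    The following sketch (an illustration of the model only, not the paper's algorithm, and ignoring the independence-system supply constraints) computes the revenue of an envy-free outcome for unit-demand consumers under given item prices: each consumer takes an item of maximum non-negative utility v_{ij} - p_j from his/her set S_i.

```python
def envy_free_revenue(valuations, prices):
    """valuations: list of dicts {item: v_ij}, one per consumer (the set S_i).
    prices: dict {item: p_j}. Each unit-demand consumer takes an item of
    maximum non-negative utility v_ij - p_j; returns (allocation, revenue).
    Sketch of the model only; supply/independence constraints are ignored."""
    allocation, revenue = {}, 0
    for i, vals in enumerate(valuations):
        best = max(vals, key=lambda j: vals[j] - prices[j], default=None)
        if best is not None and vals[best] - prices[best] >= 0:
            allocation[i] = best
            revenue += prices[best]
    return allocation, revenue

vals = [{"a": 5, "b": 3}, {"b": 4}]
print(envy_free_revenue(vals, {"a": 4, "b": 2}))  # ({0: 'a', 1: 'b'}, 6)
```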
    Worst-Case Nash Equilibria in Restricted Routing
    Pin-Yan Lu (陆品燕), Member, ACM, and Chang-Yuan Yu (余昌远)
    Journal of Data Acquisition and Processing, 2012, 27 (4): 710-717. 
    Abstract   PDF(272KB) ( 1259 )  
    We study the network routing problem with restricted and related links. There are parallel links with possibly different speeds between a source and a sink. There are also users, and each user has a traffic of some weight to assign to one of the links from a subset of all the links, called his/her allowable set. The users choosing the same link suffer the same delay, which is equal to the total weight assigned to that link divided by its speed. A state of the system is called a Nash equilibrium if no user can decrease his/her delay by unilaterally changing his/her link. To measure the performance degradation of the system due to the selfish behavior of all the users, Koutsoupias and Papadimitriou proposed the notion of the Price of Anarchy (denoted by PoA), which is the ratio of the maximum delay in the worst-case Nash equilibrium to that in an optimal solution. The PoA of this restricted related model has been studied, and a linear lower bound was obtained. However, in their bad instance, some users can only use extremely slow links. This is somewhat artificial and unlikely to appear in the real world. In order to better understand this model, we introduce a parameter for the system and prove a better Price of Anarchy in terms of this parameter. We also show an important application of our result to coordination mechanism design for the task scheduling game. We propose a new coordination mechanism, Group-Makespan, for the unrelated selfish task scheduling game with an improved price of anarchy.
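    A minimal sketch of the equilibrium condition in this model (an illustration only, not from the paper): an assignment of users to allowed links is a pure Nash equilibrium if no user can strictly decrease his/her delay, i.e., link load divided by link speed, by unilaterally moving to another allowed link.

```python
def is_nash_equilibrium(weights, speeds, allowed, assignment):
    """weights[i]: traffic of user i; speeds[l]: speed of link l;
    allowed[i]: set of links user i may use; assignment[i]: chosen link.
    Delay of a link = total assigned weight / speed. Sketch check only."""
    load = {l: 0.0 for l in range(len(speeds))}
    for i, l in enumerate(assignment):
        load[l] += weights[i]
    for i, l in enumerate(assignment):
        current = load[l] / speeds[l]
        for alt in allowed[i]:
            if alt == l:
                continue
            # delay user i would see after unilaterally moving to alt
            if (load[alt] + weights[i]) / speeds[alt] < current:
                return False
    return True

# Two identical links, two unit-weight users, each allowed on both links.
print(is_nash_equilibrium([1, 1], [1, 1], [{0, 1}, {0, 1}], [0, 1]))  # True
print(is_nash_equilibrium([1, 1], [1, 1], [{0, 1}, {0, 1}], [0, 0]))  # False
```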
    Pricing Loss Leaders Can be Hard
    Yi Wu (吴奕)
    Journal of Data Acquisition and Processing, 2012, 27 (4): 718-726. 
    Abstract   PDF(269KB) ( 1251 )  
    Consider the problem of pricing n items under unlimited supply with m single-minded buyers, each of whom is interested in at most k of the items. The goal is to price each item with a profit margin p_1, p_2, ..., p_n so as to maximize the overall profit. There is an O(k)-approximation algorithm by Balcan and Blum when the price of each item must be above its margin cost, i.e., each p_i > 0. We investigate the above problem when the seller is allowed to price some of the items below their margin cost. It was shown by Balcan et al. that by pricing some of the items below cost, the seller could possibly increase the maximum profit by a factor of Ω(log n). Such items, sold at low prices to stimulate other profitable sales, are usually called "loss leaders". It is unclear what kind of approximation guarantees are achievable when some of the items can be priced below cost. Understanding this question was posed as an open problem by Balcan and Blum. In this paper, we give a strong negative result for the problem of pricing loss leaders. We prove that, assuming the Unique Games Conjecture (UGC), there is no constant-approximation algorithm for item pricing with prices below cost allowed, even when each customer is interested in at most three items. Conceptually, our result indicates that although it is possible to make more money by selling some items below their margin cost, it can be computationally intractable to do so.
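    The following sketch (an illustration of the objective only, not the paper's construction) evaluates the profit of a given pricing for single-minded buyers under unlimited supply, where a negative margin marks a loss leader; the cost/margin bookkeeping is an assumption made to keep the example self-contained.

```python
def pricing_profit(buyers, costs, margins):
    """buyers: list of (bundle, valuation); costs[j]: margin cost of item j;
    margins[j]: profit margin p_j (negative means the item is a loss leader,
    priced below cost). Price of item j is costs[j] + margins[j]. A buyer
    purchases iff the bundle's total price is at most the valuation; the
    seller's profit is the total margin of sold items. Unlimited supply."""
    profit = 0
    for bundle, valuation in buyers:
        price = sum(costs[j] + margins[j] for j in bundle)
        if price <= valuation:
            profit += sum(margins[j] for j in bundle)
    return profit

buyers = [({"a", "b"}, 7), ({"b"}, 6)]
costs = {"a": 4, "b": 2}
# Pricing "a" below cost draws in the first buyer and raises total profit.
print(pricing_profit(buyers, costs, {"a": -1, "b": 2}))  # (-1 + 2) + 2 = 3
print(pricing_profit(buyers, costs, {"a": 1, "b": 2}))   # first buyer out: 2
```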
    Regular Papers
    Improving Application Launch Performance on Solid State Drives
    Yongsoo Joo, Member, IEEE, Junhee Ryu, Sangsoo Park, Member, IEEE, and Kang G. Shin, Fellow, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (4): 727-743. 
    Abstract   PDF(2046KB) ( 1797 )  
    Application launch performance is of great importance to system platform developers and vendors as it greatly affects the degree of users' satisfaction. The single most effective way to improve application launch performance is to replace a hard disk drive (HDD) with a solid state drive (SSD), which has recently become affordable and popular. A natural question is then whether or not to replace the traditional HDD-aware application launchers with a new SSD-aware optimizer. We address this question by analyzing the inefficiency of HDD-aware application launchers on SSDs and then proposing a new SSD-aware application prefetching scheme, called the Fast Application STarter (FAST). The key idea of FAST is to overlap the computation (CPU) time with the SSD access (I/O) time during an application launch. FAST is composed of a set of user-level components and system debugging tools provided by the Linux OS (operating system). Hence, FAST can be easily deployed in any recent Linux version without kernel recompilation. We implement FAST on a desktop PC with an SSD running the Linux 2.6.32 OS and evaluate it by launching a set of widely-used applications, demonstrating an average 28% reduction in application launch time compared to a PC without a prefetcher.
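    The core idea, overlapping I/O with computation, can be caricatured by the toy sketch below; it is not FAST itself (FAST replays a block-level access sequence collected with Linux debugging tools), and the file list and function names are purely hypothetical.

```python
# Toy sketch (not FAST itself): overlap application computation with I/O by
# prefetching, in a background thread, the files a launch is expected to read.
# The file list here is hypothetical; FAST derives the real block-level
# access sequence from a trace of a previous launch.
import threading

def prefetch(paths):
    for p in paths:
        try:
            with open(p, "rb") as f:
                while f.read(1 << 20):      # pull data into the page cache
                    pass
        except OSError:
            pass                            # missing files are simply skipped

def launch(app_main, predicted_paths):
    t = threading.Thread(target=prefetch, args=(predicted_paths,), daemon=True)
    t.start()                               # I/O proceeds while app_main computes
    app_main()

launch(lambda: print("application running"), ["/etc/hostname"])
```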
    A Parallel Interval Computation Model for Global Optimization with Automatic Load Balancing
    Yong Wu (吴勇) and Arun Kumar
    Journal of Data Acquisition and Processing, 2012, 27 (4): 744-753. 
    Abstract   PDF(448KB) ( 1437 )  
    In this paper, we propose a decentralized parallel computation model for global optimization using interval analysis. The model is adaptive to any number of processors, and the workload is automatically and evenly distributed among all processors by alternative message passing. The problems received by each processor are processed based on their local dominance properties, which avoids unnecessary interval evaluations. Further, the problem is treated as a whole at the beginning of computation so that no initial decomposition scheme is required. Numerical experiments indicate that the model works well and is stable with different numbers of parallel processors, distributes the load evenly among the processors, and provides an impressive speedup, especially when the problem is time-consuming to solve.
    SWIP Prediction: Complexity-Effective Indirect-Branch Prediction Using Pointers
    Zi-Chao Xie (谢子超), Dong Tong (佟冬), Member, CCF, ACM, Ming-Kai Huang (黄凯明), Qin-Qing Shi (史秦青), and Xu Cheng (程旭), Senior Member, CCF
    Journal of Data Acquisition and Processing, 2012, 27 (4): 754-768. 
    Abstract   PDF(2957KB) ( 1301 )  
    Predicting indirect-branch targets has become a performance bottleneck for many applications. Previous high-performance indirect-branch predictors usually require significant hardware storage or additional compiler support, which increases the complexity of the processor front-end or the compilers. This paper proposes a complexity-effective indirect-branch prediction mechanism, called Set-Way Index Pointing (SWIP) prediction. It stores multiple indirect-branch targets in different branch target buffer (BTB) entries, whose set indices and way locations are treated as set-way index pointers. These pointers are stored in the existing branch-direction predictor. SWIP prediction reuses the branch-direction predictor to provide such pointers, and then accesses the pointed BTB entries for the predicted indirect-branch target. Our evaluation shows that SWIP prediction achieves attractive performance improvement without requiring large dedicated storage or additional compiler support. It improves the indirect-branch prediction accuracy by 36.5% compared to that of a commonly-used BTB, resulting in an average performance improvement of 18.56%. Its energy consumption is also reduced by 14.34% over that of the baseline.
    Active Store Window: Enabling Far Store-Load Forwarding with Scalability and Complexity-Efficiency
    Zhen-Hao Zhang (张栚滈), Xiao-Yin Wang (王箫音), Dong Tong (佟冬), Member, CCF, ACM, Jiang-Fang Yi (易江芳), Jun-Lin Lu (陆俊林), and Ke-Yi Wang (王克义)
    Journal of Data Acquisition and Processing, 2012, 27 (4): 769-780. 
    Abstract   PDF(1204KB) ( 1271 )  
    Conventional dynamically scheduled processors often use a fully associative structure named the load/store queue (LSQ) to implement value communication between loads and older in-flight stores and to detect store-load order violations. But this in-flight forwarding accounts for only about 15% of all store-load communication, which makes the CAM-based micro-architecture the major bottleneck to scaling store-load communication further. This paper presents a new micro-architecture named ASW (short for active store window). It provides a new structure named the speculative active store window to implement more aggressive speculative store-load forwarding than the conventional LSQ. This structure can forward the data of committed stores to executing loads without accessing the L1 data cache, which is referred to as far forwarding in this paper. At the back-end of the pipeline, it uses in-order load re-execution filtered by the tagged SSBF (short for store sequence bloom filter) to verify the correctness of the store-load forwarding. The speculative active store window and tagged store sequence bloom filter are both set-associative structures that are more efficient and scalable than fully associative structures. Experiments show that this simpler and faster design outperforms a conventional load/store queue based design and the NoSQ design on most benchmarks by 10.22% and 8.71%, respectively.
    Effectiveness Analysis of DVFS and DPM in Mobile Devices
    Youngbin Seo, Jeongki Kim, and Euiseong Seo
    Journal of Data Acquisition and Processing, 2012, 27 (4): 781-790. 
    Abstract   PDF(1207KB) ( 2181 )  
    The demand for high-performance embedded processors in multimedia mobile electronics is growing, and their power consumption thus increasingly threatens battery lifetime. It is usually believed that the dynamic voltage and frequency scaling (DVFS) feature saves significant energy by changing the performance levels of processors to match the performance demands of applications on the fly. However, because the energy efficiency of embedded processors is rapidly improving, the effectiveness of DVFS is expected to change. In this paper, we analyze the benefit of DVFS in state-of-the-art mobile embedded platforms in comparison to that in servers or PCs. To obtain a clearer view of the relationship between power and performance, we develop a measurement methodology that can synchronize time series for power consumption with those for processor utilization. The results show that DVFS hardly improves the energy efficiency of mobile multimedia electronics, and can even significantly worsen energy efficiency and performance in some cases. Based on this observation, we suggest that power management for mobile electronics should concentrate on adaptive and intelligent power management for peripheral devices. As a preliminary design, we implement an adaptive network interface card (NIC) speed control that reduces power consumption by 10% when the NIC is not heavily used. Our results provide valuable insights into the design of power management schemes for future mobile embedded systems.
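    The flavor of an adaptive NIC speed control can be sketched as a simple utilization-driven policy; the thresholds, speed set, and decision function below are hypothetical, and the sketch deliberately stops short of invoking any real driver interface.

```python
# Toy sketch of the adaptive NIC-speed idea only; thresholds and the decision
# hook are hypothetical (a real implementation would drive the NIC through its
# driver, which this sketch deliberately does not do).
SPEEDS_MBPS = [100, 1000]

def choose_speed(current_speed, throughput_mbps, low=0.1, high=0.8):
    utilization = throughput_mbps / current_speed
    if utilization < low and current_speed != SPEEDS_MBPS[0]:
        return SPEEDS_MBPS[0]          # link mostly idle: drop to the slow mode
    if utilization > high and current_speed != SPEEDS_MBPS[-1]:
        return SPEEDS_MBPS[-1]         # link saturated: restore full speed
    return current_speed

print(choose_speed(1000, 5))     # 100  (idle -> power down)
print(choose_speed(100, 90))     # 1000 (busy -> power up)
```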
    Multidimensional Projections for Visual Analysis of Social Networks
    Rafael Messias Martins, Gabriel Faria Andery, Henry Heberle, Fernando Vieira Paulovich, Alneu de Andrade Lopes, Helio Pedrini, Member, IEEE, and Rosane Minghim, Member, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (4): 791-810. 
    Abstract   PDF(10916KB) ( 1388 )  
    Visual analysis of social networks is usually based on graph drawing algorithms and tools. However, social networks are a special kind of graph in the sense that interpretation of displayed relationships is heavily dependent on context. Context, in turn, is given by attributes associated with graph elements, such as individual nodes, edges, and groups of edges, as well as by the nature of the connections between individuals. In most systems, attributes of individuals and communities are not taken into consideration during graph layout, except to derive weights for force-based placement strategies. This paper proposes a set of novel tools for displaying and exploring social networks based on attribute and connectivity mappings. These properties are employed to lay out nodes on the plane via multidimensional projection techniques. For the attribute mapping, we show that node proximity in the layout corresponds to similarity in attributes, making it easy to locate similar groups of nodes. The projection based on connectivity yields an initial placement that forgoes force-based or graph-analysis algorithms, reaching a meaningful layout in one pass. When a force algorithm is then applied to this initial mapping, the final layout presents better properties than conventional force-based approaches. Numerical evaluations show a number of advantages of pre-mapping points via projections. User evaluation demonstrates that these tools promote ease of manipulation as well as fast identification of concepts and associations which cannot be easily expressed by conventional graph visualization alone. In order to allow better space usage for complex networks, a graph mapping on the surface of a sphere is also implemented.
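    A minimal sketch of the attribute-mapping idea, with plain PCA standing in for the multidimensional projection techniques actually used in the paper: project each node's attribute vector to 2D so that similar nodes land close together, and use the result either directly as a layout or as the initial placement for a force-directed refinement.

```python
import numpy as np

def attribute_projection_layout(attributes):
    """attributes: (num_nodes, num_attributes) array. Project to 2D so that
    nodes with similar attributes land near each other. Sketch of the general
    idea only (PCA here is a stand-in for dedicated projection techniques)."""
    X = attributes - attributes.mean(axis=0)
    # principal directions from the SVD of the centered attribute matrix
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T               # 2D coordinates, one row per node

coords = attribute_projection_layout(np.random.rand(50, 8))
print(coords.shape)                   # (50, 2)
```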
    TangiWheel: A Widget for Manipulating Collections on Tabletop Displays Supporting Hybrid Input Modality
    Alejandro Catalá, Member, ACM, IEEE, Fernando Garcia-Sanjuan, Javier Jaen, and Jose A. Mocholi
    Journal of Data Acquisition and Processing, 2012, 27 (4): 811-829. 
    Abstract   PDF(2902KB) ( 1248 )  
    In this paper we present TangiWheel, a collection manipulation widget for tabletop displays. Our implementation is flexible, allowing either multi-touch or tangible interaction, or even a hybrid scheme, to better suit user choice and convenience. Different TangiWheel aspects and features are compared with those of other existing widgets for collection manipulation. The study reveals that TangiWheel is the first proposal to support a hybrid input modality with a large degree of resemblance between the touch and tangible interaction styles. Several experiments were conducted to evaluate the techniques used in each input scheme, for a better understanding of tangible surface interfaces in complex tasks performed by a single user (e.g., involving a typical master-slave exploration pattern). The results show that tangibles perform significantly better than fingers, despite dealing with a greater number of interactions, in situations that require a large number of acquisitions and basic manipulation tasks such as establishing location and orientation. However, when users have to perform multiple exploration and selection operations that do not require previous basic manipulation tasks, for instance when collections are fixed in the interface layout, touch input is significantly better in terms of required time and number of actions. Finally, when a more elastic collection layout or more complex additional insertion or displacement operations are needed, the hybrid and tangible approaches clearly outperform finger-based interactions.
    Edit Propagation via Edge-Aware Filtering
    Wei Hu (胡伟), Zhao Dong (董朝), and Guo-Dong Yuan (袁国栋)
    Journal of Data Acquisition and Processing, 2012, 27 (4): 830-840. 
    Abstract   PDF(25725KB) ( 2741 )  
    This paper presents a novel framework for efficiently propagating stroke-based user edits to regions with similar colors and locations in high resolution images and videos. Our framework is based on the key observation that edit propagation can intrinsically also be achieved by utilizing recently proposed edge-preserving filters. Therefore, instead of adopting the traditional global optimization, which may involve a time-consuming solution, our algorithm propagates edits with the aid of edge-preserving filters. Such a propagation scheme has low computational complexity and supports multiple kinds of strokes for more flexible user interaction. Further, our method can be easily and efficiently implemented on the GPU. The experimental results demonstrate the efficiency and user-friendliness of our approach.
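    The following brute-force sketch conveys the filtering view of edit propagation; it is a generic joint-bilateral-style stand-in, not the paper's particular edge-aware filter, and all parameters are assumptions. Stroke values are averaged with weights that decay with color difference and spatial distance, so edits spread within similar-colored regions and stop at strong edges.

```python
import numpy as np

def propagate_edits(image, strokes, mask, radius=5, sigma_c=0.1, sigma_s=5.0):
    """Brute-force joint-bilateral propagation sketch (not the paper's filter).
    image: (H, W, 3) floats in [0, 1]; strokes: (H, W) edit strengths;
    mask: (H, W) booleans marking stroked pixels."""
    H, W = mask.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            win_mask = mask[y0:y1, x0:x1]
            if not win_mask.any():
                continue
            dc = image[y0:y1, x0:x1] - image[y, x]         # color differences
            yy, xx = np.mgrid[y0:y1, x0:x1]                # spatial offsets
            w = np.exp(-(dc ** 2).sum(-1) / (2 * sigma_c ** 2)) \
              * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w = w * win_mask
            if w.sum() < 1e-6:
                continue                                   # edit blocked by an edge
            out[y, x] = (w * strokes[y0:y1, x0:x1]).sum() / w.sum()
    return out

img = np.zeros((12, 12, 3)); img[:, 6:] = 1.0        # two flat color regions
strokes = np.zeros((12, 12)); mask = np.zeros((12, 12), bool)
strokes[6, 2] = 1.0; mask[6, 2] = True               # single stroke on the left
edited = propagate_edits(img, strokes, mask)         # spreads left, stops at the edge
```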
    A Geometric Approach for Multi-Degree Spline
    Xin Li (李新), Zhang-Jin Huang (黄章进), and Zhao Liu (刘昭)
    Journal of Data Acquisition and Processing, 2012, 27 (4): 841-850. 
    Abstract   PDF(1597KB) ( 1583 )  
    A multi-degree spline (MD-spline for short) is a generalization of a B-spline that comprises polynomial segments of various degrees. The present paper provides a new definition for MD-spline curves in a geometrically intuitive way, based on an efficient and simple evaluation algorithm. MD-spline curves maintain various desirable properties of B-spline curves, such as the convex hull, local support and variation diminishing properties. They can also be refined exactly with knot insertion. The continuity between two adjacent segments with different degrees is at least C^1, and that between two adjacent segments of the same degree d is C^{d-1}. Benefiting from the exact refinement algorithm, we also provide several operators for MD-spline curves, such as converting each curve segment into Bézier form, an efficient merging algorithm and a new curve subdivision scheme which allows different degrees for each segment.
    Hyperspectral Imagery Denoising Using a Spatial-Spectral Domain Mixing Prior
    Shao-Lin Chen (陈绍林), Xi-Yuan Hu (胡晰远), Member, IEEE, and Si-Long Peng (彭思龙)
    Journal of Data Acquisition and Processing, 2012, 27 (4): 851-861. 
    Abstract   PDF(3380KB) ( 2040 )  
    By introducing a novel spatial-spectral domain mixing prior, this paper establishes a maximum a posteriori (MAP) framework for hyperspectral image (HSI) denoising. The proposed mixing prior takes advantage of the different properties of HSIs in the spatial and spectral domains. Furthermore, we propose a spatially adaptive weighted prior combining a smoothing prior and a discontinuity-preserving prior in the spectral domain. The weights are defined as a function of the spectral discontinuity measure (DM). For minimizing the objective function, a half-quadratic optimization algorithm is used. The experimental results illustrate that our proposed model achieves a higher signal-to-noise ratio (SNR) than using only the smoothing prior or the discontinuity-preserving prior.
    2D-Manifold Boundary Surfaces Extraction from Heterogeneous Object on GPU
    Ming Wang (王明), Member, CCF, and Jie-Qing Feng (冯结青), Senior Member, CCF
    Journal of Data Acquisition and Processing, 2012, 27 (4): 862-871. 
    Abstract   PDF(1326KB) ( 1796 )  
    Conventional isosurface techniques are not competent for meshing a heterogeneous object because they assume that the object is homogeneous. Thus a visualization method that takes the heterogeneity into account is desired. In this paper, we propose a novel algorithm to extract the boundary surfaces from a heterogeneous object in one pass, whose remarkable advantage is that it is independent of the number of materials contained. The heterogeneous object is first classified into a series of homogeneous material components. Then each component is enclosed with a 2D-manifold boundary surface extracted via our algorithm. Information important to the heterogeneous object is also provided, such as the interface between two materials, the intersection curve where three materials meet and the intersection point where four materials meet. To improve performance, the algorithm is also designed and implemented on the GPU. Experimental results demonstrate the effectiveness and efficiency of the proposed algorithm.
    Lazy Slicing for State-Space Exploration
    Shao-Bin Huang (黄绍斌), Senior Member, CCF, Hong-Tao Huang (黄宏涛), Zhi-Yuan Chen (陈志远), Tian-Yang Lü (吕天阳), Member, CCF, and Tao Zhang (张涛)
    Journal of Data Acquisition and Processing, 2012, 27 (4): 872-890. 
    Abstract   PDF(1834KB) ( 1921 )  
    CEGAR (counterexample-guided abstraction refinement)-based slicing is one of the most important techniques for reducing the state space in model checking. However, CEGAR-based slicing repeatedly explores state space handled previously whenever a spurious counterexample is found. Inspired by lazy abstraction, we introduce the concept of lazy slicing, which eliminates this repeated computation. Lazy slicing is done on-the-fly, and only up to the precision necessary to rule out spurious counterexamples. It identifies a spurious counterexample by concretizing a path fragment rather than the full path, which significantly reduces the cost of deciding whether a counterexample is spurious. Besides, we present an improved over-approximate slicing method to build a more precise slice model. We also provide a proof of the correctness and termination of lazy slicing, and implement a prototype model checker to verify safety properties. Experimental results show that lazy slicing scales to larger systems than CEGAR-based slicing methods.
    An Optimized Divide-and-Conquer Algorithm for the Closest-Pair Problem in the Planar Case
    José C. Pereira and Fernando G. Lobo, Member, ACM
    Journal of Data Acquisition and Processing, 2012, 27 (4): 891-896. 
    Abstract   PDF(1607KB) ( 1717 )  
    We present an engineered version of the divide-and-conquer algorithm for finding the closest pair of points within a given set of points in the XY-plane. For this version of the algorithm we show that only two pairwise comparisons are required in the combine step for each point that lies in the 2δ-wide vertical slab. The correctness of the algorithm is shown for all Minkowski distances with p ≥ 1. We also show empirically that, although the time complexity of the algorithm is still O(n lg n), the reduction in the total number of comparisons leads to a significant reduction in the total execution time for sufficiently large inputs.
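    For reference, a textbook version of the divide-and-conquer closest-pair algorithm is sketched below; it uses the standard constant-size strip check rather than the paper's engineered two-comparisons-per-point combine step.

```python
import math

def closest_pair(points):
    """Classic O(n log n) divide-and-conquer closest pair in the plane
    (Euclidean distance, at least two input points). Textbook sketch only."""
    pts = sorted(points)                          # sort once by x

    def solve(p_by_x):
        n = len(p_by_x)
        if n <= 3:                                # brute force small cases
            return min(math.dist(a, b)
                       for i, a in enumerate(p_by_x) for b in p_by_x[i + 1:])
        mid = n // 2
        mid_x = p_by_x[mid][0]
        d = min(solve(p_by_x[:mid]), solve(p_by_x[mid:]))
        # combine: only points inside the 2*delta-wide vertical slab matter
        strip = sorted((p for p in p_by_x if abs(p[0] - mid_x) < d),
                       key=lambda p: p[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:          # constant-size neighborhood
                if b[1] - a[1] >= d:
                    break
                d = min(d, math.dist(a, b))
        return d

    return solve(pts)

print(closest_pair([(0, 0), (3, 4), (1, 1), (5, 5)]))   # 1.414... = sqrt(2)
```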
    Some Indices of Alphabet Overlap Graph
    Rong Yang (杨荣), Zhao-Lan Yang (杨兆兰), and He-Ping Zhang (张和平)
    Journal of Data Acquisition and Processing, 2012, 27 (4): 897-902. 
    Abstract   PDF(284KB) ( 1339 )  
    The undirected de Bruijn graph is often used as a model of communication networks for its useful properties, such as short diameter and small maximum vertex degree. In this paper, we consider the alphabet overlap graph G(k, d, s): the vertex set is V = {v | v = (v_1 ... v_k), v_i ∈ {1, 2, ..., d}, i = 1, 2, ..., k}, and two distinct vertices u = (u_1 ... u_k) and v = (v_1 ... v_k) are adjacent if and only if u_{s+i} = v_i or v_{s+i} = u_i (i = 1, 2, ..., k-s). In particular, when s = 1, G(k, d, s) is just an undirected de Bruijn graph. First, we give a formula to calculate the vertex degree of G(k, d, s). Then, we use the corollary of Menger's theorem to prove that the connectivity of G(k, d, s) is 2d^s - 2d^{2s-k} for s ≥ k/2.
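    For small parameters, G(k, d, s) can be built by brute force to sanity-check such formulas; the sketch below (an illustration only, not from the paper) constructs the graph directly from the adjacency condition.

```python
from itertools import product

def alphabet_overlap_graph(k, d, s):
    """Brute-force construction of G(k, d, s) for small parameters: vertices
    are words of length k over {1..d}; distinct u and v are adjacent iff
    u_{s+i} = v_i for all i = 1..k-s, or v_{s+i} = u_i for all i."""
    V = list(product(range(1, d + 1), repeat=k))
    def overlaps(u, v):
        return all(u[s + i] == v[i] for i in range(k - s))
    E = {(u, v) for u in V for v in V
         if u != v and (overlaps(u, v) or overlaps(v, u))}
    return V, E

V, E = alphabet_overlap_graph(k=3, d=2, s=2)
degree = {v: sum(1 for (a, b) in E if a == v) for v in V}
print(min(degree.values()), max(degree.values()))
```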
Journal of Data Acquisition and Processing
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China

E-mail: info@sjcjycl.cn
 
  Copyright ©2015 JCST, All Rights Reserved