MapReduce is a popular parallel computing paradigm for large-scale data processing in clusters and data centers. A MapReduce workload generally contains a set of jobs, each of which consists of multiple map tasks followed by multiple reduce tasks. Because 1) map tasks can only run in map slots and reduce tasks can only run in reduce slots, and 2) map tasks are generally executed before reduce tasks, different job execution orders and map/reduce slot configurations for a MapReduce workload yield significantly different performance and system utilization. This paper proposes two classes of algorithms to minimize the makespan and the total completion time for an offline MapReduce workload. Our first class of algorithms focuses on job ordering optimization for a MapReduce workload under a given map/reduce slot configuration. In contrast, our second class of algorithms considers the scenario where we can also optimize the map/reduce slot configuration for a MapReduce workload. We perform simulations as well as experiments on Amazon EC2 and show that our proposed algorithms produce results that are 15 to 80 percent better than the current unoptimized Hadoop, leading to significant reductions in running time in practice.
A MapReduce job consists of a set of map and reduce tasks, where reduce tasks are performed after the map tasks. Hadoop, an open-source implementation of MapReduce, has been deployed in large clusters containing thousands of machines by companies such as Amazon and Facebook. In those cluster and data center environments, MapReduce and Hadoop are used to support batch processing for jobs submitted by multiple users (i.e., MapReduce workloads). Despite the many research efforts devoted to improving the performance of a single MapReduce job, relatively little attention has been paid to the system performance of MapReduce workloads. This paper therefore aims to improve the performance of MapReduce workloads.
Slow performance of MapReduce workloads.
In this paper, we target one subset of production MapReduce workloads: those consisting of a set of independent jobs (e.g., each job processes a distinct data set, with no dependencies between jobs). For dependent jobs (i.e., a MapReduce workflow), a job can start only when all the jobs it depends on have finished, subject to the input-output data dependency. In contrast, for independent jobs, computation can overlap between two jobs: when the current job completes its map-phase computation and starts its reduce-phase computation, the next job can begin its map-phase computation in a pipelined fashion by taking over the map slots released by the previous job.
We propose slot configuration algorithms for minimizing makespan and total completion time. We also show that these algorithms have a proportionality property, which is important because it can be used to address the time efficiency problem of the proposed enumeration algorithms when the total number of slots is large.
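The paper's own algorithms are not reproduced in this survey. As a rough illustration of why job ordering matters under the map-then-reduce overlap described above, the following is a minimal sketch that orders jobs with the classical Johnson's rule for a two-stage flow shop; the job names and timings are illustrative assumptions, and the single-slot pipeline model is a simplification of the real multi-slot setting.

```python
# Hedged sketch: Johnson's rule for ordering jobs in a two-stage
# (map-then-reduce) pipeline to reduce makespan. All timings are
# illustrative estimates, not values from the paper.

def johnson_order(jobs):
    """jobs: list of (name, map_time, reduce_time) tuples."""
    group1 = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    group2 = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: j[2],
                    reverse=True)
    return group1 + group2

def makespan(ordered):
    map_end = reduce_end = 0.0
    for _, m, r in ordered:
        map_end += m                               # map phases run back to back
        reduce_end = max(reduce_end, map_end) + r  # reduce waits for its map
    return reduce_end

jobs = [("j1", 4, 2), ("j2", 1, 5), ("j3", 3, 3)]
print(makespan(johnson_order(jobs)))  # order j2, j3, j1 -> makespan 11.0
```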
It is cost-efficient for a tenant with a limited budget to establish a virtual MapReduce cluster by renting multiple virtual private servers (VPSs) from a VPS provider. To provide an appropriate scheduling scheme for this type of computing environment, we propose in this paper a hybrid job-driven scheduling scheme (JoSS for short) from a tenant's perspective. JoSS provides not only job-level scheduling, but also map-task-level and reduce-task-level scheduling. JoSS classifies MapReduce jobs based on job scale and job type and designs an appropriate scheduling policy for each class of jobs. The goal is to improve data locality for both map tasks and reduce tasks, avoid job starvation, and improve job execution performance. Two variations of JoSS are further introduced to separately achieve better map-data locality and faster task assignment. We conduct extensive experiments to evaluate and compare the two variations with current scheduling algorithms supported by Hadoop. The results show that the two variations outperform the other tested algorithms in terms of map-data locality, reduce-data locality, and network overhead without incurring significant overhead. In addition, the two variations are separately suitable for different MapReduce-workload scenarios and provide the best job performance among all tested algorithms.
Typically, a MapReduce cluster consists of a set of commodity machines/nodes located on several racks and interconnected with each other in a local area network (LAN). In this paper, we call this a conventional MapReduce cluster. Because building and maintaining a conventional MapReduce cluster is costly for a person or organization with a limited budget, an alternative is to establish a virtual MapReduce cluster by either renting a MapReduce framework from a MapReduce service provider or renting multiple virtual private servers (VPSs) from a VPS provider. Each VPS is a virtual machine with its own operating system and disk space. For reasons such as availability issues of a datacenter or a resource shortage in a popular datacenter, a tenant might rent VPSs from different datacenters operated by the same VPS provider to establish his/her virtual MapReduce cluster.
Slow performance of MapReduce workloads.
In order to provide an appropriate scheduling scheme for a tenant to achieve high map-and-reduce data locality and improve job performance in his/her virtual MapReduce cluster, in this paper we propose a hybrid job-driven scheduling scheme (JoSS for short) that provides scheduling at three levels: job, map task, and reduce task. JoSS classifies MapReduce jobs as either large or small based on each job's input size relative to the average datacenter scale of the virtual MapReduce cluster, and further classifies small MapReduce jobs as either map-heavy or reduce-heavy based on the ratio between each job's reduce-input size and its map-input size. JoSS then uses a particular scheduling policy for each class of jobs so that the corresponding network traffic generated during job execution (especially inter-datacenter traffic) is reduced and job performance is improved. In addition, we propose two variations of JoSS, named JoSS-T and JoSS-J, to guarantee fast task assignment and to further increase VPS-locality, respectively.
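As an illustration of the classification step described above, the following is a minimal sketch; the field names, units, and the heavy_ratio threshold are assumptions for illustration, not JoSS's exact parameters.

```python
# Hedged sketch of JoSS-style job classification. Thresholds and names
# are illustrative assumptions, not the paper's exact parameters.

def classify_job(map_input_size, reduce_input_size, avg_datacenter_scale,
                 heavy_ratio=1.0):
    # A job whose input exceeds the average datacenter scale is "large".
    if map_input_size > avg_datacenter_scale:
        return "large"
    # Small jobs are split by the reduce-input / map-input ratio
    # (map_input_size is assumed positive).
    ratio = reduce_input_size / map_input_size
    return "small/reduce-heavy" if ratio > heavy_ratio else "small/map-heavy"

print(classify_job(map_input_size=2.0, reduce_input_size=3.5,
                   avg_datacenter_scale=10.0))  # -> small/reduce-heavy
```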
Big sensor data is prevalent in both industry and scientific research applications, where data is generated with high volume and velocity and is difficult to process using on-hand database management tools or traditional data processing applications. Cloud computing provides a promising platform for addressing this challenge, as it offers a flexible stack of massive computing, storage, and software services in a scalable manner at low cost. Some techniques have been developed in recent years for processing sensor data on the cloud, such as sensor-cloud. However, these techniques do not provide efficient support for fast detection and location of errors in big sensor data sets. For fast error detection in big sensor data sets, we develop in this paper a novel data error detection approach which exploits the full computation potential of the cloud platform and the network features of WSNs. First, a set of sensor data error types are classified and defined. Based on that classification, the network feature of a clustered WSN is introduced and analyzed to support fast error detection and location. Specifically, in our proposed approach, error detection is based on the scale-free network topology, and most detection operations can be conducted within limited temporal or spatial data blocks instead of over the whole big data set. Hence the detection and location process can be dramatically accelerated. Furthermore, the detection and location tasks can be distributed to the cloud platform to fully exploit its computation power and massive storage. Through experiments on our U-Cloud cloud computing platform, we demonstrate that our proposed approach can significantly reduce the time for error detection and location in big data sets generated by large-scale sensor network systems, with acceptable error detection accuracy.
Index Terms—Big data, cloud computing, data abnormality, error detection, time efficiency, sensor networks, complex network systems.
One important source of scientific big data is the data sets collected by wireless sensor networks (WSNs). Wireless sensor networks have the potential to significantly enhance people's ability to monitor and interact with their physical environment. Big data sets from sensors are often subject to corruption and loss due to the wireless medium of communication and hardware inaccuracies in the nodes. For a WSN application to deduce an appropriate result, the data received must be clean, accurate, and lossless. However, effective detection and cleaning of sensor big data errors is a challenging issue demanding innovative solutions.
WSN big data error detection commonly requires powerful real-time processing and storage of the massive sensor data, as well as analysis using inherently complex error models to identify and locate abnormal events. In this paper, we aim to develop a novel error detection approach that exploits the massive storage, scalability, and computation power of the cloud to detect errors in big data sets from sensor networks. Some work has been done on processing sensor data on the cloud, but fast detection of data errors in big data with the cloud remains challenging. In particular, how to use the computation power of the cloud to quickly find and locate errors of nodes in a WSN needs to be explored.
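To make the block-wise idea above concrete, the following is a minimal sketch that flags outlier readings inside limited temporal blocks rather than scanning the whole data set. The z-score test, block size, and threshold are illustrative assumptions and stand in for the paper's error classification and scale-free-topology analysis.

```python
# Hedged sketch: block-wise outlier detection over sensor readings.
# Detection runs inside limited temporal blocks, mirroring the idea of
# avoiding a scan over the whole big data set.
import statistics

def detect_block_errors(readings, block_size=100, z_threshold=3.0):
    errors = []
    for start in range(0, len(readings), block_size):
        block = readings[start:start + block_size]
        if len(block) < 2:
            continue
        mean = statistics.fmean(block)
        stdev = statistics.stdev(block)
        if stdev == 0:
            continue
        for i, value in enumerate(block):
            if abs(value - mean) / stdev > z_threshold:
                errors.append(start + i)  # global index of suspect reading
    return errors
```

Because each block is processed independently, the blocks can be dispatched to different cloud workers, which is the distribution property the paper relies on.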
Trust management is one of the most challenging issues for the adoption and growth of cloud computing. The highly dynamic, distributed, and non-transparent nature of cloud services introduces several challenging issues such as privacy, security, and availability. Preserving consumers’ privacy is not an easy task due to the sensitive information involved in the interactions between consumers and the trust management service. Protecting cloud services against their malicious users (e.g., such users might give misleading feedback to disadvantage a particular cloud service) is a difficult problem. Guaranteeing the availability of the trust management service is another significant challenge because of the dynamic nature of cloud environments. In this article, we describe the design and implementation of CloudArmor, a reputation-based trust management framework that provides a set of functionalities to deliver trust as a service (TaaS), which includes i) a novel protocol to prove the credibility of trust feedbacks and preserve users’ privacy, ii) an adaptive and robust credibility model for measuring the credibility of trust feedbacks to protect cloud services from malicious users and to compare the trustworthiness of cloud services, and iii) an availability model to manage the availability of the decentralized implementation of the trust management service. The feasibility and benefits of our approach have been validated by a prototype and experimental studies using a collection of real-world trust feedbacks on cloud services.
According to researchers at Berkeley, trust and security are ranked among the top 10 obstacles to the adoption of cloud computing. Indeed, Service-Level Agreements (SLAs) alone are inadequate to establish trust between consumers and cloud providers. Consumers' feedback is a good source for assessing the overall trustworthiness of cloud services, and several researchers have recognized the significance of trust management and proposed solutions to assess and manage trust based on feedback collected from participants. We find the following problem issues:
1. Guaranteeing the availability of TMS is a difficult problem due to the unpredictable number of users and the highly dynamic nature of the cloud environment.
2. A self-promoting attack might have been performed on a cloud service s_y, which means s_x should have been selected instead.
3. Malicious users might disadvantage a cloud service by giving multiple misleading trust feedbacks (i.e., collusion attacks).
4. They might trick users into trusting cloud services that are not trustworthy by creating several accounts and giving misleading trust feedbacks (i.e., Sybil attacks).
In this paper, we give an overview of the design and implementation of CloudArmor (Cloud consumers' credibility Assessment and trust management of cloud services), a framework for reputation-based trust management in cloud environments. In CloudArmor, trust is delivered as a service (TaaS), and the TMS spans several distributed nodes to manage feedback in a decentralized way. CloudArmor exploits techniques to distinguish credible feedback from malicious feedback, as sketched after the list below. The advantages of the proposed system:
1. TrustCloud, a framework for accountability and trust in cloud computing; TrustCloud consists of five layers, including a workflow layer.
2. A multi-faceted Trust Management (TM) system architecture for cloud computing that helps cloud service users identify trustworthy cloud service providers.
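To illustrate the general idea of separating credible from misleading feedback, the following is a minimal sketch that weights each feedback by an assumed credibility score; CloudArmor's actual credibility model (which accounts for factors such as feedback density and majority consensus) is considerably richer.

```python
# Hedged sketch: credibility-weighted reputation aggregation. The
# credibility weights in [0, 1] are assumed inputs, standing in for
# CloudArmor's adaptive credibility model.

def reputation(feedbacks):
    """feedbacks: list of (rating, credibility) pairs, rating in [0, 1]."""
    weighted = sum(rating * cred for rating, cred in feedbacks)
    total = sum(cred for _, cred in feedbacks)
    return weighted / total if total else 0.0

# A malicious low rating with low credibility barely moves the score.
print(reputation([(0.9, 1.0), (0.8, 0.9), (0.1, 0.1)]))  # ~0.815
```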
Recently, a number of extended Proxy Re-Encryptions (PRE), e.g., conditional PRE (CPRE), identity-based PRE (IPRE), and broadcast PRE (BPRE), have been proposed for flexible applications. By incorporating CPRE, IPRE, and BPRE, this paper proposes a versatile primitive referred to as conditional identity-based broadcast PRE (CIBPRE) and formalizes its semantic security. CIBPRE allows a sender to encrypt a message to multiple receivers by specifying these receivers' identities, and the sender can delegate a re-encryption key to a proxy so that the proxy can convert the initial ciphertext into a new one for a new set of intended receivers. Moreover, the re-encryption key can be associated with a condition such that only matching ciphertexts can be re-encrypted, which allows the original sender to enforce access control over his remote ciphertexts in a fine-grained manner. We propose an efficient CIBPRE scheme with provable security. In the instantiated scheme, the initial ciphertext, the re-encrypted ciphertext, and the re-encryption key are all of constant size, and the parameters to generate a re-encryption key are independent of the original receivers of any initial ciphertext. Finally, we show an application of our CIBPRE to a secure cloud email system that is advantageous over existing secure email systems based on the Pretty Good Privacy protocol or identity-based encryption.
Proxy Re-Encryption (PRE) provides a secure and flexible method for a sender to store and share data. A user may encrypt his file with his own public key and then store the ciphertext in an honest-but-curious server. When the receiver is decided, the sender can delegate a re-encryption key associated with the receiver to the server as a proxy. Then the proxy re-encrypts the initial ciphertext to the intended receiver. Finally, the receiver can decrypt the resulting ciphertext with her private key.
The security of PRE usually assures that (1) neither the server/proxy nor non-intended receivers can learn any useful information about the (re-)encrypted file, and (2) before receiving the re-encryption key, the proxy cannot re-encrypt the initial ciphertext in a meaningful way. Efforts have been made to equip PRE with versatile capabilities. The early PRE was proposed in the traditional public-key infrastructure setting, which incurs complicated certificate management. To address this problem, several identity-based PRE (IPRE) schemes were proposed so that the receivers' recognizable identities can serve as public keys. Instead of fetching and verifying the receivers' certificates, the sender and the proxy just need to know the receivers' identities, which is more convenient in practice. We find the following problem issues:
1. The early PRE was proposed in the traditional public-key infrastructure setting, which incurs complicated certificate management.
In this paper, we refine PRE by incorporating the advantages of IPRE, CPRE, and BPRE for more flexible applications, and propose a new concept of conditional identity-based broadcast PRE (CIBPRE). In a CIBPRE system, a trusted key generation center (KGC) initializes the system parameters of CIBPRE and generates private keys for users. To securely share files with multiple receivers, a sender can encrypt the files with the receivers' identities and file-sharing conditions. If the sender later wants to share some files associated with the same condition with other receivers, the sender can delegate a re-encryption key labeled with the condition to the proxy, and the parameters to generate the re-encryption key are independent of the original receivers of these files. The proxy can then re-encrypt the initial ciphertexts matching the condition to the new receiver set.
With CIBPRE, in addition to the initially authorized receivers, who can access the file by decrypting the initial ciphertext with their private keys, newly authorized receivers can also access the file by decrypting the re-encrypted ciphertext with their private keys. Note that the initial ciphertexts may be stored remotely while kept secret. The sender does not need to download and re-encrypt them repeatedly, but delegates a single key matching the condition to the proxy. These features make CIBPRE a versatile tool for securing remotely stored files, especially when different receivers need to share the files as time passes; the workflow is sketched after the list below. The advantages of the proposed system:
1. It allows a user to share his/her outsourced encrypted data with others in a fine-grained manner.
2. It spares a user from fetching and verifying other users' certificates before encrypting his/her data.
3. Moreover, it allows a user to generate a broadcast ciphertext for multiple receivers and share his/her outsourced encrypted data with multiple receivers in a batch manner.
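The following sketch illustrates only the CIBPRE workflow (encryption, re-encryption key generation, re-encryption, decryption) at the interface level. The dictionary-based placeholders are NOT cryptography; they merely mimic the access rules of the pairing-based scheme. Note how the re-encryption key depends only on the condition and the new receivers, mirroring the property claimed above.

```python
# Hedged sketch of the CIBPRE workflow. The placeholder "encryption"
# below is NOT cryptographically secure; it only imitates who may read
# what under the scheme's rules.

def encrypt(message, receiver_ids, condition):
    return {"msg": message, "rcv": set(receiver_ids), "cond": condition}

def rk_gen(condition, new_receiver_ids):
    # Independent of the original receivers of any initial ciphertext.
    return {"cond": condition, "new_rcv": set(new_receiver_ids)}

def re_encrypt(ciphertext, rk):
    if ciphertext["cond"] != rk["cond"]:
        raise ValueError("condition mismatch: cannot re-encrypt")
    return {**ciphertext, "rcv": rk["new_rcv"]}

def decrypt(ciphertext, user_id):
    if user_id not in ciphertext["rcv"]:
        raise PermissionError("not an intended receiver")
    return ciphertext["msg"]

ct = encrypt("report.pdf", ["alice", "bob"], condition="project-x")
rk = rk_gen("project-x", ["carol"])
print(decrypt(re_encrypt(ct, rk), "carol"))  # carol can now read it
```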
Attribute-based Encryption (ABE) is regarded as a promising cryptographic tool to guarantee data owners' direct control over their data in public cloud storage. The earlier ABE schemes involve only one authority to maintain the whole attribute set, which can create a single-point bottleneck for both security and performance. Subsequently, some multi-authority schemes were proposed, in which multiple authorities separately maintain disjoint attribute subsets. However, the single-point bottleneck problem remains unsolved. In this paper, from another perspective, we construct a threshold multi-authority CP-ABE access control scheme for public cloud storage, named TMACS, in which multiple authorities jointly manage a uniform attribute set. In TMACS, taking advantage of (t, n) threshold secret sharing, the master key can be shared among multiple authorities, and a legal user can generate his/her secret key by interacting with any t authorities. Security and performance analysis results show that TMACS is not only verifiably secure when fewer than t authorities are compromised, but also robust when no fewer than t authorities are alive in the system. Furthermore, by efficiently combining the traditional multi-authority scheme with TMACS, we construct a hybrid scheme that satisfies the scenario of attributes coming from different authorities as well as achieving security and system-level robustness.
In most existing schemes, there is only one authority responsible for attribute management and key distribution. This only-one-authority scenario can create a single-point bottleneck for both security and performance. Once the authority is compromised, an adversary can easily obtain the authority's master key and then generate private keys for any attribute subset to decrypt specific encrypted data. Moreover, if the authority crashes or goes offline, private keys for all attributes in the attribute subset it maintains can no longer be generated and distributed, which affects the whole system's effective operation. We find the following problem issues:
1. If a specific authority crashes or goes offline, private keys for all attributes in the attribute subset maintained by this authority cannot be generated and distributed, which affects the whole system's effective operation.
2. The access structure is not flexible enough to satisfy complex environments; subsequently, much effort has been made to deal with the disadvantages of the early schemes.
In this paper, we propose a robust and verifiable threshold multi-authority CP-ABE access control scheme, named TMACS, to deal with the single-point bottleneck on both security and performance in most existing schemes. In TMACS, multiple authorities jointly manage the whole attribute set, but no single one has full control of any specific attribute. Since in CP-ABE schemes there is always a secret key (SK) used to generate attribute private keys, we introduce (t, n) threshold secret sharing into our scheme to share the secret key among authorities. In TMACS, we redefine the secret key in the traditional CP-ABE schemes as the master key. The introduction of (t, n) threshold secret sharing guarantees that the master key cannot be obtained by any authority alone. TMACS is not only verifiably secure when fewer than t authorities are compromised, but also robust when no fewer than t authorities are alive in the system. To the best of our knowledge, this paper is the first attempt to address the single-point bottleneck on both security and performance in CP-ABE access control schemes for public cloud storage. (A minimal sketch of the underlying (t, n) secret sharing follows the list below.) The advantages of the proposed system:
1. Taking advantage of (t, n) threshold secret sharing, TMACS removes the single-point bottleneck on both security and performance caused by the only-one-authority scenario: it is verifiably secure when fewer than t authorities are compromised and remains robust as long as at least t authorities are alive.
2. By efficiently combining the traditional multi-authority scheme with TMACS, a hybrid scheme is obtained that satisfies the scenario of attributes coming from different authorities while achieving security and system-level robustness, bringing CP-ABE closer to practical use for access control in public cloud storage.
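The building block that enables TMACS's joint management of the master key is (t, n) threshold secret sharing. The following is a minimal, self-contained sketch of Shamir's scheme over a prime field, with a toy key size; TMACS layers CP-ABE key generation and verifiability on top of this idea.

```python
# Hedged sketch: (t, n) threshold sharing of a master key with Shamir's
# scheme. Any t of n authorities can reconstruct (or jointly use) the
# key; fewer than t shares reveal nothing. Field size is illustrative.
import random

P = 2**127 - 1  # a Mersenne prime; real deployments use larger fields

def make_shares(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)  # any 3 of 5 suffice -> True
```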
In this paper, we introduce a new fine-grained two-factor authentication (2FA) access control system for web-based cloud computing services. Specifically, in our proposed 2FA access control system, an attribute-based access control mechanism is implemented with the necessity of both a user secret key and a lightweight security device. As a user cannot access the system if they do not hold both, the mechanism can enhance the security of the system, especially in those scenarios where many users share the same computer for web-based cloud services. In addition, attribute-based control in the system also enables the cloud server to restrict the access to those users with the same set of attributes while preserving user privacy, i.e., the cloud server only knows that the user fulfills the required predicate, but has no idea on the exact identity of the user. Finally, we also carry out a simulation to demonstrate the practicability of our proposed 2FA system.
Though the new paradigm of cloud computing provides great advantages, there are also concerns about security and privacy, especially for web-based cloud services. As sensitive data may be stored in the cloud for sharing purposes or convenient access, and eligible users may access the cloud system for various applications and services, user authentication has become a critical component of any cloud system. A user is required to log in before using cloud services or accessing sensitive data stored in the cloud. There are two problems with the traditional account/password-based system. We find the following problem issues:
1. First, traditional account/password-based authentication is not privacy-preserving, yet it is well acknowledged that privacy is an essential feature that must be considered in cloud computing systems.
2. Second, it is common to share a computer among different people, and it may be easy for hackers to install spyware to learn the login password from the web browser.
3. In existing systems, even though the computer may be locked by a password, the password can still be guessed or stolen by undetected malware.
In this paper, we propose a fine-grained two-factor access control protocol for web-based cloud computing services, using a lightweight security device. The device has the following properties: (1) it can compute some lightweight algorithms, e.g., hashing and exponentiation; and (2) it is tamper-resistant, i.e., it is assumed that no one can break into it to obtain the secret information stored inside. (A minimal verification sketch follows the list below.) The advantages of the proposed system:
1. Our protocol provides 2FA security.
2. Our protocol supports fine-grained attribute-based access, which gives the system great flexibility to set different access policies for different scenarios, while the privacy of the user is also preserved.
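A minimal sketch of the two-factor acceptance rule follows: the server grants access only when both the user's secret key and the security device answer a challenge correctly. The HMAC responses are an illustrative stand-in for the paper's attribute-based credentials and the device's exponentiation-based computation.

```python
# Hedged sketch: access is granted only when BOTH factors verify.
# HMACs stand in for the actual attribute-based cryptography.
import hmac
import hashlib

def verify_2fa(challenge, user_secret, device_secret, user_resp, device_resp):
    expected_user = hmac.new(user_secret, challenge, hashlib.sha256).digest()
    expected_dev = hmac.new(device_secret, challenge, hashlib.sha256).digest()
    return (hmac.compare_digest(expected_user, user_resp) and
            hmac.compare_digest(expected_dev, device_resp))

challenge = b"nonce-42"
u, d = b"user-key", b"device-key"
ok = verify_2fa(challenge, u, d,
                hmac.new(u, challenge, hashlib.sha256).digest(),
                hmac.new(d, challenge, hashlib.sha256).digest())
print(ok)  # True only when the key holder AND the device both respond
```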
Link errors and malicious packet dropping are two sources of packet losses in multi-hop wireless ad hoc networks. In this paper, while observing a sequence of packet losses in the network, we are interested in determining whether the losses are caused by link errors only, or by the combined effect of link errors and malicious drops. We are especially interested in the insider-attack case, whereby malicious nodes that are part of the route exploit their knowledge of the communication context to selectively drop a small number of packets critical to network performance. Because the packet dropping rate in this case is comparable to the channel error rate, conventional algorithms based on detecting the packet loss rate cannot achieve satisfactory detection accuracy. To improve the detection accuracy, we propose to exploit the correlations between lost packets. Furthermore, to ensure truthful calculation of these correlations, we develop a homomorphic linear authenticator (HLA) based public auditing architecture that allows the detector to verify the truthfulness of the packet loss information reported by nodes. This construction is privacy-preserving, collusion-proof, and incurs low communication and storage overheads. To reduce the computation overhead of the baseline scheme, a packet-block-based mechanism is also proposed, which allows one to trade detection accuracy for lower computation complexity. Through extensive simulations, we verify that the proposed mechanisms achieve significantly better detection accuracy than conventional methods such as maximum-likelihood based detection.
Most of the related works preclude the ambiguity of the environment by assuming that malicious dropping is the only source of packet loss, so that there is no need to account for the impact of link errors. On the other hand, for the small number of works that differentiate between link errors and malicious packet drops, the detection algorithms usually require the number of maliciously dropped packets to be significantly higher than that of link errors in order to achieve acceptable detection accuracy. Depending on how much weight a detection algorithm gives to link errors relative to malicious packet drops, the related work can be classified into the following two categories.
The first category aims at high malicious dropping rates, where most (or all) lost packets are caused by malicious dropping. The second category targets the scenario where the number of maliciously dropped packets is significantly higher than that caused by link errors, but the impact of link errors is non-negligible.
1. Most of the related works assume that malicious dropping is the only source of packet loss.
2. In the reputation-based approach, a malicious node can maintain a reasonably good reputation by forwarding most of the packets to the next hop.
3. While the Bloom-filter scheme is able to provide a packet-forwarding proof, the correctness of the proof is probabilistic and it may contain errors.
Our goal is to develop an accurate algorithm for detecting selective packet drops made by insider attackers. The algorithm also provides truthful and publicly verifiable decision statistics as a proof to support the detection decision. High detection accuracy is achieved by exploiting the correlations between the positions of lost packets, as calculated from the auto-correlation function (ACF) of the packet-loss bitmap, a bitmap describing the lost/received status of each packet in a sequence of consecutive packet transmissions. The main challenge in our mechanism lies in how to guarantee that the packet-loss bitmaps reported by individual nodes along the route are truthful, i.e., reflect the actual status of each packet transmission. Such truthfulness is essential for correct calculation of the correlation between lost packets, and it can be achieved by auditing.
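To make the ACF-based distinction concrete, the following is a minimal sketch that computes the autocorrelation of a packet-loss bitmap (1 = lost); the example bitmaps are fabricated for illustration. Random link errors tend to yield low autocorrelation at small lags, whereas selective dropping leaves correlated loss positions.

```python
# Hedged sketch: sample autocorrelation of a packet-loss bitmap.
# High ACF at small lags hints at correlated (possibly malicious)
# losses rather than independent link errors.

def acf(bitmap, lag):
    n = len(bitmap)
    mean = sum(bitmap) / n
    var = sum((b - mean) ** 2 for b in bitmap) / n
    if var == 0:
        return 0.0
    cov = sum((bitmap[i] - mean) * (bitmap[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var

random_losses = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
bursty_losses = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0]
print(acf(random_losses, 1), acf(bursty_losses, 1))  # bursty ACF is higher
```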
The public-auditing problem is addressed using the homomorphic linear authenticator (HLA) cryptographic primitive, which is basically a signature scheme widely used in cloud computing and storage server systems to provide a proof of storage from the server to entrusting clients.
1. High detection accuracy.
2. Privacy-preserving: the public auditor should not be able to discern the content of a packet delivered on the route through the auditing information submitted by individual hops.
3. Low communication and storage overheads at intermediate nodes.
In this study, we design a privacy-aware map generation scheme, PMG. Unlike existing methods, in our scheme each user selectively chooses, reshuffles, and uploads only a few locations from their traces, instead of the entire traces. After receiving those unorganized points from a group of users, the server generates the final map. To provide a high-quality map generation service while preserving each user's privacy, we need to address three major challenges: 1) quantifying the privacy leakage of data points provided by individual users; 2) generating a theoretically proven map from the reported unorganized point cloud; and 3) designing a map generation scheme that is robust to various discrepancies such as GPS error.
1. We generate an accurate and reliable map while avoiding leakage of users' privacy.
Traditional IP-geolocation mapping schemes are primarily delay-measurement based. In these schemes, there are a number of landmarks with known geolocations. The delays from a targeted client to the landmarks are measured, and the targeted client is mapped to a geolocation inferred from the measured delays. However, most of the schemes are based on the assumption of a linear correlation between networking delay and the physical distance between targeted client and landmark.
1. The strong correlation has been verified in some regions of the Internet, such as North America and Western Europe. But as pointed out in the literature, Internet connectivity around the world is very complex, and such a strong correlation may not hold for the Internet everywhere.
1. The contributions of this paper are twofold. First, by studying a large data set, we show that most traditional IP-geolocation mapping schemes cannot work well for moderately connected Internet regions, since the linear delay-distance correlation is weak in such regions. Second, based on the measurement results, we develop and implement GeoGet, which uses the closest-shortest rule and works much better than traditional schemes in moderately connected Internet regions. We acknowledge that we are not the first to apply the closest-shortest rule, and the mapping accuracy of GeoGet is still not very high. However, we take a large step toward a better IP-geolocation system for moderately connected Internet regions, and we believe the accuracy will improve significantly if more landmarks are probed.
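A minimal sketch of the closest-shortest rule follows, assuming illustrative landmark locations and delay measurements: the target is simply mapped to the location of the landmark with the smallest measured delay, with no linear delay-distance model fitted at all.

```python
# Hedged sketch of the closest-shortest rule. Landmark names and delay
# values are illustrative assumptions.

def closest_shortest(delays_ms):
    """delays_ms: {landmark_location: measured delay to the target}."""
    return min(delays_ms, key=delays_ms.get)

delays = {"Beijing": 38.2, "Shanghai": 12.7, "Guangzhou": 55.1}
print(closest_shortest(delays))  # target mapped near Shanghai
```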
Network coding has been shown to be an effective approach to improving wireless system performance. However, many security issues impede its wide deployment in practice. Besides the well-studied pollution attacks, there is another severe threat, that of wormhole attacks, which undermines the performance gain of network coding. Since the underlying characteristics of network coding systems are distinctly different from traditional wireless networks, the impact of wormhole attacks and the countermeasures are generally unknown. In this paper, we quantify the devastating impact of wormholes on network coding system performance through experiments. We first propose a centralized algorithm to detect wormholes and show its correctness rigorously. For distributed wireless networks, we propose DAWN, a Distributed detection Algorithm against Wormholes in wireless Network coding systems, by exploring the change of the flow directions of the innovative packets caused by wormholes. We rigorously prove that DAWN guarantees a good lower bound on the successful detection rate. We analyze the resistance of DAWN against collusion attacks, find that the robustness depends on the node density in the network, and prove a necessary condition to achieve collusion resistance. DAWN does not rely on any location information, global synchronization assumptions, or special hardware/middleware. It is based only on local information that can be obtained from regular network coding protocols, and thus the overhead of our algorithms is tolerable. Extensive experimental results have verified the effectiveness and the efficiency of DAWN.
In contrast, in wireless network coding systems, the forwarders are allowed to apply encoding schemes to what they receive, and thus they create and transmit new packets. The idea of mixing packets at each node takes advantage of the opportunistic diversity and broadcast nature of wireless communications and significantly enhances system performance. However, practical wireless network coding systems face new challenges and attacks whose impact and countermeasures are still not well understood, because their underlying characteristics differ from well-studied traditional wireless networks. The wormhole attack is one of these attacks.
1. Existing systems suffer from security issues.
2. They are also vulnerable to wormhole attacks.
The main objective of this paper is to detect and localize wormhole attacks in wireless network coding systems. Major differences in routing and packet forwarding rule out using existing countermeasures designed for traditional networks. In network coding systems like MORE, the connectivity of the network is described using the link loss probability between each pair of nodes, while traditional networks use connectivity graphs with a binary relation (i.e., connected or not) on the set of nodes. In this paper, we first propose a centralized algorithm that detects wormholes by leveraging a central node in the network. For distributed scenarios, we propose a distributed algorithm, DAWN, to detect wormhole attacks in wireless intra-flow network coding systems.
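DAWN's exact decision procedure is not reproduced here; the following sketch is only loosely inspired by the idea of watching the flow direction of innovative packets. It assumes nodes report the upstream sender of each innovative packet and that an expected ordering toward the destination (e.g., by ETX rank) is known; the rank-gap rule and all values are illustrative assumptions.

```python
# Hedged sketch: innovative packets normally flow from nodes closer to
# the source toward nodes closer to the destination. A far-downstream
# node receiving innovative packets directly from a far-upstream
# non-neighbor hints at a wormhole shortcut.

def suspicious_links(reports, etx_rank, neighbor_pairs, rank_gap=2):
    """reports: iterable of (receiver, upstream_sender) observations."""
    suspects = set()
    for rcv, snd in reports:
        gap = etx_rank[snd] - etx_rank[rcv]  # how many ranks were skipped
        if gap >= rank_gap and (snd, rcv) not in neighbor_pairs:
            suspects.add((snd, rcv))
    return suspects

etx_rank = {"src": 4, "a": 3, "b": 2, "c": 1, "dst": 0}
neighbors = {("src", "a"), ("a", "b"), ("b", "c"), ("c", "dst")}
print(suspicious_links([("c", "src"), ("b", "a")], etx_rank, neighbors))
# -> {('src', 'c')}: 'c' heard 'src' directly despite the rank gap
```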
1. We investigate the harmful impact of wormholes on system performance and regional nodes' resource utilization.
2. We propose a centralized algorithm to detect wormholes.
Low-altitude unmanned aerial vehicles (UAVs) combined with WLAN mesh networks (WMNs) have facilitated the emergence of airborne network-assisted applications. In disaster relief, they are key solutions for 1) on-demand ubiquitous network access and 2) efficient exploration of sizable areas. Nevertheless, these solutions still face major security challenges, as WMNs are prone to routing attacks. Consequently, the network can be sabotaged, and the attacker might manipulate payload data or even hijack the UAVs. Contemporary security standards, such as IEEE 802.11i and the security mechanisms of the IEEE 802.11s mesh standard, are vulnerable to routing attacks, as we experimentally showed in previous works. Therefore, a secure routing protocol is indispensable for making the deployment of UAV-WMN feasible. As far as we know, none of the existing research approaches have gained acceptance in practice due to their high overhead or strong assumptions. Here, we present the position-aware, secure, and efficient mesh routing approach (PASER). Our proposal prevents more attacks than the IEEE 802.11s/i security mechanisms and the well-known secure routing protocol ARAN, without making restrictive assumptions. In realistic UAV-WMN scenarios, PASER achieves performance results similar to those of the well-established, non-secure routing protocol HWMP combined with the IEEE 802.11s security mechanisms.
While the WMN capability for auto-configuration and self-healing significantly reduces the complexity of network deployment and maintenance, it makes the WMN backbone prone to routing attacks, including wormhole and blackhole attacks. Consequently, an attacker can, with little cost or effort, redirect the traffic and drop the data packets even if the wireless backbone links are encrypted. In UAV-WMN-assisted disaster relief situations, this can sabotage the communication between rescue fighters, and the data exchanged between the UAVs and their ground station will be disrupted. This issue makes the use of WMNs (or any wireless multi-hop solution relying on a routing protocol to dynamically set up routes) problematic for the command and control of UAVs in practice, as flight regulations impose that it should always be possible to remotely pilot the UAVs. In case the attacker is able to compromise the network credentials, and as long as there is no efficient way to refresh those credentials, the attacker might manipulate payload data or even inject corrupted control information that could lead to the hijacking of the UAVs.
PASER aims to secure the routing process in UAV-WMN in a feasible manner. We initially proposed PASER in our previous works. In this section, we extend those works by clearly defining the network and attacker models of PASER and by extending its security goals, based on discussions with UAV-WMN end-users and stakeholders, among others. PASER has been enhanced to provide origin authentication in order to proactively minimize the harm of internal attackers, i.e., to combat fabrication and blackhole attacks. The dynamic key management scheme of PASER has been adjusted to include the key number in all PASER messages for better detection of key changes. From the routing point of view, path accumulation has been removed, as this scheme was observed to be ineffective in UAV-WMN: the information gained from path accumulation is worth less than the overhead it generates. In addition, while our previous works only addressed the route discovery process, we have upgraded PASER to include a route maintenance mechanism.
In multihop wireless networks, when a mobile node wants to communicate with a destination, it relies on other nodes to forward the packets. This multihop packet transmission can extend the network coverage area using limited power and improve area spectral efficiency. The proposed scheme, E-STAR, integrates payment and trust systems with a routing protocol in multihop wireless networks, with the goal of enhancing route reliability and stability. The payment system charges the nodes that send packets and rewards those that forward packets. The trust system evaluates the nodes' trustworthiness and reliability in forwarding packets in terms of multi-dimensional trust values; trust values are calculated for each node, and two routing protocols are developed to send packets through highly trusted nodes having sufficient energy, minimizing the probability of breaking the route. To strengthen the trust evaluation, recommendations from each node are included in the trust calculation by a trusted party (TP). The protocol is implemented over a MANET and simulated using NS2. Performance is evaluated using parameters such as packet delivery ratio, call acceptance ratio, and route lifetime.
Multihop wireless networks have been implemented in many useful applications, such as data sharing and multimedia data transmission. They can establish a network to communicate, distribute files, and share information. However, the underlying assumption is that the nodes are willing to spend their limited resources, such as battery energy and available network bandwidth. A drawback of the existing routing protocols is the assumption that the network nodes are willing to relay other nodes' packets. This assumption is reasonable in disaster recovery, because the nodes pursue a common goal and belong to one authority, but it may not hold for civilian applications, where the nodes aim to maximize their benefits, since their cooperation consumes valuable resources such as bandwidth, energy, and computing power without any benefit. In civilian applications, selfish nodes will not voluntarily cooperate without sufficient incentive and will make use of the cooperative nodes to relay their packets, which has a negative effect on network fairness and performance. A fairness issue arises when a selfish node takes advantage of the cooperative nodes without contributing to them, and the cooperative nodes are unfairly overloaded. Such selfish behavior degrades network performance significantly, resulting in failure of the multi-hop communication. In addition, some nodes may break routes because they do not have sufficient energy to relay the source nodes' packets and keep the routes alive.
We develop two routing protocols to direct traffic through highly trusted nodes having sufficient energy, to minimize the probability of breaking the route. In this way, E-STAR can stimulate the nodes not only to relay packets, but also to maintain route stability and report their battery energy capability correctly. We propose a multi-dimensional trust system based on processing the payment receipts, and we propose trust-based and energy-aware routing protocols to establish stable routes. Unlike most of the existing schemes, which aim to identify and mitigate the malicious nodes, E-STAR aims to identify the good nodes and select them for routing.
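A minimal sketch of the route selection idea described above follows, assuming illustrative per-node trust and energy values: routes containing an energy-poor node are filtered out, and the remaining route with the highest product of trust values is selected. E-STAR's actual multi-dimensional trust computation from payment receipts is not modeled here.

```python
# Hedged sketch: trust-based, energy-aware route selection. Route
# reliability is modelled (as a simplification) as the product of the
# per-node trust values along the route.
from math import prod

def best_route(routes, trust, energy, min_energy=0.2):
    viable = [r for r in routes if all(energy[n] >= min_energy for n in r)]
    return max(viable, key=lambda r: prod(trust[n] for n in r), default=None)

trust = {"a": 0.9, "b": 0.95, "c": 0.6, "d": 0.9}
energy = {"a": 0.8, "b": 0.5, "c": 0.9, "d": 0.1}
routes = [["a", "b"], ["a", "c"], ["a", "d"]]
print(best_route(routes, trust, energy))  # ['a', 'b']: trusted AND energized
```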
Equipped with state-of-the-art smart phones and mobile devices, today's highly interconnected urban population is increasingly dependent on these gadgets to organize and plan their daily lives. The relevant applications often rely on the current (or preferred) locations of individual users or a group of users to provide the desired service, which jeopardizes their privacy; users do not necessarily want to reveal their current (or preferred) locations to the service provider or to other, possibly untrusted, users. In this paper, we propose privacy-preserving algorithms for determining an optimal meeting location for a group of users. We perform a thorough privacy evaluation by formally quantifying the privacy loss of the proposed approaches. In order to study the performance of our algorithms in a real deployment, we implement and test their execution efficiency on Nokia smart phones. By means of a targeted user study, we attempt to gain insight into the privacy awareness of users of location-based services and the usability of the proposed solutions.
The rapid proliferation of smart phone technology in urban communities has enabled mobile users to utilize context-aware services on their devices. Service providers take advantage of this dynamic and ever-growing technology landscape by proposing innovative context-dependent services for mobile subscribers. Location-based Services (LBS), for example, are used by millions of mobile subscribers every day to obtain location-specific information. Two popular features of location-based services are location check-ins and location sharing. By checking into a location, users can share their current location with family and friends or obtain location-specific services from third-party providers; the obtained service does not depend on the locations of other users. The other type of location-based service, which relies on the sharing of locations (or location preferences) by a group of users in order to obtain some service for the whole group, is also becoming popular. According to a recent study, location sharing services are used by almost 20% of all mobile phone users. One prominent example of such a service is the taxi-sharing application, offered by a global telecom operator, where smart phone users can share a taxi with other users at a suitable location by revealing their departure and destination locations. Similarly, another popular service enables a group of users to find the most geographically convenient place to meet.
We then propose two algorithms for solving the above formulation of the FRVP problem in a privacy-preserving fashion, where each user participates by providing only a single location preference to the FRVP solver or the service provider. In this significantly extended version of our earlier conference paper, we evaluate the security of our proposal under various passive and active adversarial scenarios, including collusion. We also provide an accurate and detailed analysis of the privacy properties of our proposal and show that our algorithms do not provide any probabilistic advantage to a passive adversary in correctly guessing the preferred location of any participant. In addition to the theoretical analysis, we evaluate the practical efficiency and performance of the proposed algorithms by means of a prototype implementation on a test bed of Nokia mobile devices. We also address the multi-preference case, where each user may have multiple prioritized location preferences. We highlight the main differences, in terms of performance, from the single-preference case, and present initial experimental results for the multi-preference implementation. Finally, by means of a targeted user study, we provide insight into the usability of our proposed solutions.
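The following sketch shows only the plaintext optimization objective behind the fair rendezvous point (minimizing the maximum user displacement over candidate locations); the paper's contribution, computing this without revealing the preferred locations, is deliberately omitted here, and all coordinates are illustrative.

```python
# Hedged sketch of the FRVP objective only (no privacy layer): pick the
# candidate location that minimizes the worst-case travel distance.
import math

def fair_rendezvous(preferred, candidates):
    def worst_displacement(c):
        return max(math.dist(c, p) for p in preferred)
    return min(candidates, key=worst_displacement)

users = [(0, 0), (4, 0), (2, 3)]
candidates = [(0, 0), (2, 1), (4, 4)]
print(fair_rendezvous(users, candidates))  # (2, 1) minimizes the max distance
```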
Recent developments in wireless sensor networks (WSNs) have enabled their use in a wide range of applications, such as military sensing and tracking and health monitoring. Wireless sensor nodes have restricted computational resources and are often deployed in harsh, unattended, or hostile environments; network security therefore represents a challenging task. This work presents a public-key-based predistribution scheme with time-position nodes for the simultaneous exchange of secure keys. In this paper, we propose a general three-tier security framework for authentication and pairwise key establishment between mobile sinks and sensor nodes. The proposed attack-defense and key management mechanism for sensor network applications can successfully handle sink mobility and can continually deliver data to neighboring nodes and sinks. Simulation results indicate that the proposed mechanism can reduce energy consumption and extend the average network lifetime by about 25%.
The existing systems used various techniques, such as:
Asymmetric key techniques for key exchange.
Probabilistic key predistribution scheme
Two key predistribution schemes
Although the above security approach makes the network more resilient to mobile sink replication attacks compared to the single polynomial pool-based key predistribution scheme, it is still vulnerable to stationary access node replication attacks. In these types of attacks, the attacker is able to launch a replication attack similar to the mobile sink replication attack. After a fraction of sensor nodes have been compromised by an adversary, captured static polynomials can be loaded into a replicated stationary access node that transmits the recorded mobile sink’s data request messages to trigger sensor nodes to send their aggregated data.
To address the above-mentioned problem, we have developed a general framework that permits the use of any pairwise key predistribution scheme as its basic component, to provide authentication and pairwise key establishment between sensor nodes and MSs.
To facilitate the study of a new security technique, we first developed a general three-tier security framework for authentication and pairwise key establishment, based on the polynomial pool-based key predistribution scheme.
To make the three-tier security scheme more robust against a stationary access node replication attack, we have strengthened the authentication mechanism between the stationary access nodes and sensor nodes using a one-way hash chain algorithm in conjunction with the static polynomial pool-based scheme. Our analytical results indicate that the new security technique makes the network more resilient to both mobile sink replication attacks and stationary access node replication attacks compared to the single polynomial pool-based approach.
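To illustrate the one-way hash chain mechanism mentioned above, the following is a minimal sketch: the verifier stores only the chain anchor and can check each newly released value with a single hash, while an attacker cannot compute the next value in the chain. The chain length and seed are illustrative.

```python
# Hedged sketch: one-way hash chain authentication. Sensor nodes store
# only the anchor h^n(seed); each released value verifies with one hash
# but cannot be forged forward.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_chain(seed: bytes, n: int):
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain  # chain[n] is the public anchor

chain = build_chain(b"secret-seed", n=100)
anchor = chain[100]

# The access node reveals chain[99]; a sensor node verifies it cheaply.
print(h(chain[99]) == anchor)  # True; the next round verifies chain[98]
```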
It is cost-efficient for a tenant with a limited budget to establish a virtual MapReduce cluster by renting multiple virtual private servers (VPSs) from a VPS provider. To provide an appropriate scheduling scheme for this type of computing environment, we propose in this paper a hybrid job-driven scheduling scheme (JoSS for short) from a tenant's perspective. JoSS provides not only job-level scheduling, but also map-task-level and reduce-task-level scheduling. JoSS classifies MapReduce jobs based on job scale and job type and designs an appropriate scheduling policy for each class of jobs. The goal is to improve data locality for both map tasks and reduce tasks, avoid job starvation, and improve job execution performance. Two variations of JoSS are further introduced to separately achieve better map-data locality and faster task assignment. We conduct extensive experiments to evaluate and compare the two variations with current scheduling algorithms supported by Hadoop. The results show that the two variations outperform the other tested algorithms in terms of map-data locality, reduce-data locality, and network overhead without incurring significant overhead. In addition, the two variations are separately suitable for different MapReduce-workload scenarios and provide the best job performance among all tested algorithms.
MapReduce enables a programmer to define a MapReduce job as a map function and a reduce function, and provides a runtime system to divide the job into multiple map tasks and reduce tasks and perform these tasks on a MapReduce cluster in parallel. Typically, a MapReduce cluster consists of a set of commodity machines/nodes located on several racks and interconnected in a local area network (LAN). Many task scheduling algorithms have been proposed to improve data locality and to shorten job turnaround time, but most of them focus only on scheduling map tasks rather than reduce tasks. Hence, employing them in a virtual MapReduce cluster might cause low reduce-data locality. Besides, most current scheduling algorithms are designed to achieve node locality and rack locality for conventional MapReduce clusters, rather than the VPS-locality and Cen-locality needed in virtual MapReduce clusters. Consequently, adopting them in a virtual MapReduce cluster might fail to provide high map-data locality. The following are the issues:
1. Low reduce-data locality and low map-data locality.
2. A conventional MapReduce cluster is costly for a person or organization with a limited budget; an alternative is to establish a virtual MapReduce cluster by either renting a MapReduce framework from a MapReduce service provider or renting multiple virtual private servers (VPSs) from a VPS provider.
We propose a hybrid job-driven scheduling scheme (JoSS for short) that provides scheduling at three levels: job, map task, and reduce task. JoSS classifies MapReduce jobs as either large or small based on each job's input size relative to the average datacenter scale of the virtual MapReduce cluster, and further classifies small MapReduce jobs as either map-heavy or reduce-heavy based on the ratio between each job's reduce-input size and its map-input size. JoSS then uses a particular scheduling policy for each class of jobs so that the network traffic generated during job execution (especially inter-datacenter traffic) is reduced and job performance is improved. In addition, we propose two variations of JoSS, named JoSS-T and JoSS-J, to guarantee fast task assignment and to further increase VPS-locality, respectively. The following are the advantages:
1. We introduce JoSS to appropriately schedule MapReduce jobs in a virtual MapReduce cluster by addressing both map-data locality and reduce-data locality from the perspective of a tenant.
2. By classifying jobs into map-heavy and reduce-heavy jobs and designing corresponding policies to schedule each class of jobs, JoSS increases data locality and improves job performance.
Virtualization of resources on the cloud offers a scalable means of consuming services beyond the capabilities of small systems. In a cloud that offers infrastructure such as processors, memory, hard disks, etc., a coalition of two or more virtual machines may be needed. Economical management of cloud resources requires allocation strategies with minimum wastage, while configuring services ahead of actual requests. We propose a resource allocation mechanism for machines on the cloud based on the principles of coalition formation and the uncertainty principle of game theory. We compare the results of applying this mechanism with existing resource allocation methods that have been deployed on the cloud. We also show that this method of resource allocation by coalition formation of the machines on the cloud leads not only to better resource utilization but also to higher request satisfaction.
Optimizing resource allocation to ensure the best performance can be done in many ways. Present IaaS service providers, largely unaware of application-level requirements, do not provide any optimization by configuring the required software on the VMs. Relying only on application-level optimization is not sensible, as it is restricted to an existing infrastructure allocation. The placement of VMs is, however, in the hands of the IaaS provider and can be changed based on the topology of the machines in the cloud system. An application-level optimization technique along with topology-based VM placement offers better chances of performance improvement with lower resource wastage. The following are the issues:
1. Cloud providers currently know the types of VMs that may be requested but are unaware of the exact request specifications, such as the number of instances of a particular type of VM.
2. Heavy resource wastage.
In this paper, we model the cloud as a multi-agent system composed of agents (machines) with varied capabilities. Allocating resources to perform specific tasks requires agents to form coalitions, as the resource requirements may be beyond the capabilities of any single agent (machine). Coalition formation is modeled as a game, and the uncertainty principle of game theory is used to arrive at approximately optimal strategies. We implement a resource allocation mechanism for the cloud that is demand-aware and topology-aware and uses a game-theoretic approach based on coalition formation of machines for requests with uncertain task information. With these ideas in place, we can use our agent-based resource allocation mechanism for the IaaS cloud. We evaluate the efficacy of our approach by comparison with common commercial allocation strategies on the cloud, based on randomly generated VM requests that include data-intensive requests. (A greedy coalition-formation sketch follows the list below.) The following are the advantages:
1) By solving the optimization problem of coalition formation, we avoid the complexities of integer programming.
2) The resource allocation mechanism, when deployed, is found to perform better with respect to lower task allocation time, lower resource wastage, and higher request satisfaction.
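As a rough illustration of coalition formation for resource allocation (referenced after the list above), the following greedy sketch pools machine capacities until a request is covered. The paper's game-theoretic strategy selection under uncertain task information is not reproduced, and the capacities are illustrative single-dimension numbers.

```python
# Hedged sketch: greedy coalition formation. Machines join a coalition
# until the pooled capacity covers the request.

def form_coalition(machines, request):
    """machines: {name: capacity}; request: required total capacity."""
    coalition, pooled = [], 0
    # Prefer larger machines first to keep coalitions small.
    for name, cap in sorted(machines.items(), key=lambda kv: -kv[1]):
        if pooled >= request:
            break
        coalition.append(name)
        pooled += cap
    return coalition if pooled >= request else None  # None = unsatisfiable

machines = {"m1": 8, "m2": 4, "m3": 4, "m4": 2}
print(form_coalition(machines, request=10))  # ['m1', 'm2']
```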
The provisioning of basic security mechanisms such as authentication and confidentiality is highly challenging in a content-based publish/subscribe system. Authentication of publishers and subscribers is difficult to achieve due to the loose coupling of publishers and subscribers. Likewise, confidentiality of events and subscriptions conflicts with content-based routing. This paper presents a novel approach to providing confidentiality and authentication in a broker-less content-based publish/subscribe system. The authentication of publishers and subscribers, as well as the confidentiality of events, is ensured by adapting pairing-based cryptography mechanisms to the needs of a publish/subscribe system. Furthermore, an algorithm to cluster subscribers according to their subscriptions preserves a weak notion of subscription confidentiality. In addition to our previous work, this paper contributes 1) the use of searchable encryption to enable efficient routing of encrypted events, 2) multi-credential routing, a new event dissemination strategy to strengthen the weak subscription confidentiality, and 3) a thorough analysis of different attacks on subscription confidentiality. The overall approach provides fine-grained key management, and the cost for encryption, decryption, and routing is in the order of the subscribed attributes. Moreover, the evaluations show that providing security is affordable w.r.t. 1) the throughput of the proposed cryptographic primitives and 2) the delays incurred during the construction of the publish/subscribe overlay and the event dissemination.
Content-based publish/subscribe is the variant that provides the most expressive subscription model, where subscriptions define restrictions on the message content. Its expressiveness and asynchronous nature are particularly useful for large-scale distributed applications with high-volume data streams. Access control in the context of a publish/subscribe system means that only authenticated publishers are allowed to disseminate events in the network, and only those events are delivered to authorized subscribers. Similarly, the content of events should not be exposed to the routing infrastructure, and a subscriber should receive all relevant events without revealing its subscription to the system. These security issues are not trivial to solve in a content-based publish/subscribe system and pose new challenges. The major issues are:
1. It is very hard to provide subscription confidentiality in a broker-less publish/subscribe system, where the subscribers are arranged in an overlay network according to the containment relationship between their subscriptions. In this case, regardless of the cryptographic primitives used, the maximum level of attainable confidentiality is very limited.
2. The limitation arises from the fact that a parent can decrypt every event it forwards to its children. Therefore, mechanisms are needed to provide a weaker notion of confidentiality.
3. The approach does not intend to solve the digital copyright problem.
In this paper, we present a new approach to provide authentication and confidentiality in a broker-less publish/subscribe system. Our approach allows subscribers to maintain credentials according to their subscriptions. Private keys assigned to the subscribers are labelled with the credentials, and a publisher associates each encrypted event with a set of credentials. We adapt identity-based encryption mechanisms for this purpose. The advantages are as follows (a toy illustration of the credential matching appears after the list):
1. A particular subscriber can decrypt an event only if there is a match between the credentials associated with the event and those labelling its key.
2. Subscribers can verify the authenticity of received events. Furthermore, we address the issue of subscription confidentiality in the presence of semantic clustering of subscribers. A weaker notion of subscription confidentiality is defined, and a secure connection protocol is designed to preserve it. Finally, the evaluations demonstrate the viability of the proposed security mechanisms.
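To illustrate the credential-matching idea only, here is a self-contained Python toy. It is not pairing-based cryptography: a repeating-key XOR stands in for the actual identity-based encryption, and the key-server secret and credential labels are hypothetical. The point is the structure, i.e. a subscriber recovers an event only when one of its labelled keys matches the credential the publisher attached.

```python
import hashlib

def cred_key(master_secret: bytes, credential: str) -> bytes:
    """Toy key derivation: one key per credential label (a stand-in for the
    labelled private keys issued in identity-based encryption)."""
    return hashlib.sha256(master_secret + credential.encode()).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy cipher: repeating-key XOR. NOT secure; it only makes the
    match/no-match behaviour visible."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

MASTER = b"key-server-secret"  # held by the key server (hypothetical)

# Publisher: associates the encrypted event with the credential needed to read it.
event = b"stock=ACME, price=42"
required = "topic:finance"
ciphertext = xor_stream(cred_key(MASTER, required), event)

# Subscriber: decryption succeeds only for a key whose credential label
# matches the one the publisher used.
subscriber_creds = ["topic:finance", "topic:sports"]
for c in subscriber_creds:
    if c == required:  # match between event credential and key label
        print(xor_stream(cred_key(MASTER, c), ciphertext).decode())
```

In the real scheme the "match" is enforced by the cryptography itself rather than by an explicit comparison, which is what keeps the event content hidden from the routing infrastructure.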
Cloud computing allows business customers to scale their resource usage up and down based on needs. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. In this paper, we present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and to support green computing by optimizing the number of servers in use. We introduce the concept of "skewness" to measure the unevenness in the multi-dimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads nicely and improve the overall utilization of server resources. We develop a set of heuristics that prevent overload in the system effectively while saving energy. Trace-driven simulation and experiment results demonstrate that our algorithm achieves good performance.
Virtual machine monitors (VMMs) like Xen provide a mechanism for mapping virtual machines (VMs) to physical resources. This mapping is largely hidden from cloud users. Users of the Amazon EC2 service, for example, do not know where their VM instances run. It is up to the cloud provider to make sure the underlying physical machines (PMs) have sufficient resources to meet their needs. VM live migration technology makes it possible to change the mapping between VMs and PMs while applications are running. The major issues are:
1. Most existing IDS are optimized to detect attacks with high accuracy. However, they still have various disadvantages that have been outlined in a number of publications, and much work has been done to analyze IDS in order to direct future research.
2. Among other drawbacks, one is the large number of alerts produced.
In this paper, we present the design and implementation of an automated resource management system that achieves a good balance between the following two goals:
a. Overload avoidance: the capacity of a PM should be sufficient to satisfy the resource needs of all VMs running on it. Otherwise, the PM is overloaded, which can degrade the performance of its VMs.
b. Green computing: the number of PMs in use should be minimized as long as they can still satisfy the needs of all VMs; idle PMs can be turned off to save energy.
The advantages are as follows:
1. We develop a resource allocation system that can avoid overload in the system effectively while minimizing the number of servers used.
2. We introduce the concept of "skewness" to measure the uneven utilization of a server. By minimizing skewness, we can improve the overall utilization of servers in the face of multi-dimensional resource constraints (a minimal sketch of the skewness computation follows).
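The skewness measure captures how unevenly one server's resources are used. A minimal Python sketch, assuming the formula has the shape sqrt(sum_i (u_i/ū - 1)²) over the per-resource utilizations u_i with mean ū, which is consistent with the description above:

```python
import math

def skewness(utilizations):
    """Skewness of one server's multi-dimensional utilization: deviation of
    each resource's utilization from the mean across resources.
    (Formula shape assumed: sqrt(sum((u_i / mean - 1)^2)).)"""
    mean = sum(utilizations) / len(utilizations)
    if mean == 0:
        return 0.0  # fully idle server: treat as perfectly balanced
    return math.sqrt(sum((u / mean - 1) ** 2 for u in utilizations))

# A server loaded evenly across CPU, memory, and network has zero skewness;
# heavy use of a single dimension drives skewness (and wasted headroom) up.
print(skewness([0.5, 0.5, 0.5]))   # 0.0  - balanced workload mix
print(skewness([0.9, 0.2, 0.2]))   # high - CPU-heavy, memory/network idle
```

Placing a memory-heavy VM on the CPU-heavy server above would lower its skewness, which is how minimizing skewness steers complementary workloads onto the same PM.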
In this paper, we propose adaptive configuration of spatial and frequency resources to maximize energy efficiency (EE) and reveal the relationship between spectral efficiency (SE) and EE in downlink multiple-input-multiple-output (MIMO) orthogonal frequency division multiple access (OFDMA) systems. The problem is formulated as minimizing the total power consumed at the base station under constraints on the average data rates of multiple users, the total number of subcarriers, and the number of radio frequency (RF) chains. A two-step searching algorithm is developed to solve this problem, which first finds the near-optimal numbers of subcarriers for the users based on Karush-Kuhn-Tucker (KKT) conditions and then optimizes the number of active RF chains. Simulation results demonstrate that increasing the frequency resource improves both the SE and the EE, and that it is more efficient than increasing the spatial resource. Consequently, there is a tradeoff between the SE and the EE only when the frequency resource is limited. In general, the adaptive configuration of spatial and frequency resources outperforms the adaptive configuration of only spatial resources and that of only frequency resources.
Multiple-input-multiple-output (MIMO) orthogonal frequency division multiple access (OFDMA) systems are very popular owing to their high spectral efficiency (SE). However, whether they also achieve high energy efficiency (EE) is not clear. Although MIMO requires less transmit power than single-input-single-output (SISO) for the same data rate, it consumes more circuit power because more active transmit or receive radio frequency (RF) chains are used. On the other hand, in MIMO-OFDMA systems, spatial precoding and other baseband processing are carried out on each subcarrier, and thus the circuit power consumed by processing increases with the number of subcarriers. Since signal processing becomes more complicated as demands on data rate and transmission reliability grow, we cannot neglect the circuit power drawn by both spatial and frequency resources, besides the transmit power, when designing an energy-efficient MIMO-OFDMA system. There are some preliminary results on energy saving by adaptively using the spatial and frequency resources. The EE of the Alamouti diversity scheme has been discussed in prior work: it has been shown that if the modulation order is adaptively adjusted to balance the transmit and circuit power consumption, multiple-input-single-output always outperforms SISO. Adaptive switching between MIMO and single-input-multiple-output modes has been addressed to save energy in uplink cellular networks. The relationship between the EE and bandwidth has also been investigated: the EE has been shown to increase with bandwidth if the circuit power consumption either does not depend on or linearly increases with the bandwidth. Energy-efficient link adaptation for MIMO-OFDM systems has been studied, where the active RF chains, the overall bandwidth, and the MIMO transmission mode can be adjusted according to the required data rate and channel fading. Prior work mainly focuses on point-to-point MIMO transmission. In downlink MIMO-OFDMA networks, RF chains are shared by different users; in this scenario, switching RF chains on or off and allocating bandwidth are intertwined, which makes studying the EE complicated. In this paper, we study adaptive configuration of spatial and frequency resources to improve the EE in downlink MIMO-OFDMA systems.
Consider a downlink MIMO-OFDMA system with one base station (BS) and M users, with Nt and Nr RF chains configured at the BS and at each user, respectively. Overall, K subcarriers are shared by the users without overlap. Since a large portion of power is consumed by the BS during downlink transmission, we focus on saving energy at the BS side. The number of active RF chains at the BS and the number of subcarriers allocated to each user can be adjusted based on the data rates required by the users. A typical transmitter structure of MIMO-OFDMA systems is shown in Fig. 1: the data first pass through the channel coding and modulation mapping unit and are mapped into complex symbols; after spatial processing in the MIMO encoder unit, the signals are output to nt active RF chains; per-branch OFDM operations follow, including serial-to-parallel conversion (S/P), inverse fast Fourier transform (IFFT), and parallel-to-serial conversion (P/S); after digital processing, the analog signals generated by the digital-to-analog converter (D/A) are filtered and up-converted to a high frequency band; finally, the signals are amplified by the power amplifiers (PAs) and radiated into the air. We first formulate the optimization problem of minimizing the total power consumed at the BS subject to the average data rate requirements of the users, and then develop a two-step searching algorithm (a simplified sketch is given below). Simulation results indicate that increasing the frequency resource helps to improve both the SE and the EE; the tradeoff between the SE and the EE exists only when the total number of active subcarriers is capped. On the other hand, the optimal number of active RF chains increases only when the total number of used subcarriers cannot be increased, which means that the frequency resource is more efficient than the spatial resource for improving the EE. The proposed spatial-frequency resource adaptive configuration outperforms both the spatial-only adaptation and the frequency-only adaptation.
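As a rough illustration of the two-step structure only, here is a hedged Python sketch: the outer loop sweeps the number of active RF chains, and the inner step allocates subcarriers per user from a capacity-style rate estimate, standing in for the KKT-based allocation of the paper. All power-model constants, bandwidths, and SNR values are hypothetical.

```python
import math

# Illustrative power model (all constants hypothetical): circuit power grows
# with active RF chains and with subcarriers processed; per-subcarrier rate
# grows with the spatial multiplexing order min(nt, NR).
P_RF, P_SC, P_TX = 1.0, 0.05, 0.5   # W per RF chain / per-subcarrier circuit / TX
NT_MAX, NR, K = 4, 2, 64            # BS RF chains, user RF chains, subcarriers
B, SNR = 15e3, 10.0                 # per-subcarrier bandwidth (Hz), linear SNR

def subcarriers_needed(rate_bps, nt):
    """Subcarriers a user needs at spatial order min(nt, NR) to meet its rate
    (capacity-style estimate; stands in for the KKT-based allocation)."""
    per_sc = min(nt, NR) * B * math.log2(1 + SNR)
    return math.ceil(rate_bps / per_sc)

def two_step_search(rates_bps):
    """Outer step: sweep active RF chains; inner step: allocate subcarriers.
    Returns (total power, nt, per-user allocation) of the cheapest feasible
    configuration, or None if no configuration meets the rates."""
    best = None
    for nt in range(1, NT_MAX + 1):
        alloc = [subcarriers_needed(r, nt) for r in rates_bps]
        if sum(alloc) > K:           # frequency resource exhausted
            continue
        power = nt * P_RF + sum(alloc) * (P_SC + P_TX)
        if best is None or power < best[0]:
            best = (power, nt, alloc)
    return best

print(two_step_search([200e3, 400e3, 100e3]))
```

With these toy numbers the search settles on an intermediate number of RF chains: extra chains reduce the subcarriers each user needs but add circuit power, which is exactly the spatial-frequency tradeoff described above.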