1.1 Introduction
Voice security is critical in voice communication, especially in hostile and unattended environments. Public safety land mobile radio systems are susceptible to eavesdropping and can easily be exploited by criminals. Readily available scanners and other devices can be used to receive voice signals from analog and digital public safety radio systems, including trunked radio systems. Lists of frequencies and channel assignments used in public safety jurisdictions are easily obtained from numerous print and online sources. To ensure that sensitive information is shared only among authorized individuals or organizations, public safety operators need to ensure the confidentiality of sensitive radio traffic. This is typically accomplished through voice encryption with an effective encryption algorithm.
1.2 Types of Secure Voice
There are two types of secure voice:
1. Digital Secure Voice
2. Analog Secure Voice
2.1 Importance of voice security
Effective management is essential to implementing an encrypted voice network successfully.
• Unencrypted public safety voice transmissions can be intercepted, abetting criminal activity, thwarting public safety efforts, and endangering the public and public safety personnel.
• Voice encryption helps ensure that voice transmissions can be accessed only by authorized personnel, thereby increasing the safety and efficiency of public safety personnel.
• Although voice encryption adds complexity and cost to public safety voice networks, it is what maintains their security.
2.2 Digital Secure Voice technologies
A digital secure voice system usually includes two components: a digitizer to convert between speech and digital signals, and an encryption system to provide confidentiality. What makes ciphony difficult in practice is the need to send the encrypted signal over the same communication system used to transmit unencrypted voice, e.g. telephone lines, mobile radios, or sensors. This has led to the use of voice coders (vocoders) to achieve tight bandwidth compression of the speech signals. NSA's STU-III, KY-57, and SCIP are examples of systems that operate over existing voice channels. The STE system, by contrast, requires wide-bandwidth ISDN lines for its normal mode of operation. For encrypting GSM and VoIP, which are digital anyway, the standard protocol ZRTP can be used as an end-to-end encryption technology. Secure voice's robustness greatly benefits from having the voice data compressed to very low bit rates by a special component for speech coding or voice compression, known as a voice coder or vocoder. The older secure voice compression standards include CVSD, CELP, LPC-10e, and MELP; the latest standard is the state-of-the-art MELPe algorithm.
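To make the bandwidth pressure concrete, the rates above can be compared against toll-quality 64 kbit/s PCM telephony. The CVSD figure below is a typical military rate and, like the comparison itself, should be treated as an illustrative assumption rather than a normative value:

```python
# Illustrative bandwidth comparison between uncompressed telephony PCM
# and the low-rate vocoders named in the text. Rates are typical values,
# stated here as assumptions.
pcm_rate = 64_000  # bit/s: 8 kHz sampling x 8-bit PCM (toll-quality telephony)
vocoder_rates = {
    "CVSD": 16_000,             # typical military CVSD rate (assumption)
    "LPC-10e": 2_400,
    "MELP": 2_400,
    "MELPe (half rate)": 1_200,
}

for name, rate in vocoder_rates.items():
    ratio = pcm_rate / rate
    print(f"{name}: {rate} bit/s, {ratio:.1f}x smaller than 64 kbit/s PCM")
```

The roughly 27x (and at 1200 bit/s, 53x) reduction is what lets the encrypted stream fit through narrowband radio channels.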
2.3 Analog Secure Voice technologies
Digital secure voice is not always necessary to achieve security: the Australian CODAN analog system (originally designed for HF but also used on VHF and UHF) has proven that digital compression and encryption methods are not always required. Although CODAN is by no means an original or unique technology or product, its recognition in the security market shows that exclusively digital methods are not always needed. Voice inversion methods were commonplace in the 20th century. Few analog voice offerings exist today due to the rise of exclusively digital solutions to the voice security problem.
2.4 The encryption process
A voice message is first digitized and then encrypted (or locked). The process requires an algorithm and a unique cryptographic key, which are analogous to a door lock and its key: although many houses may use the same brand of lock, how the lock is keyed makes it unique. The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are two well-known algorithms. Voice encryption:
• Provides confidentiality for sensitive radio traffic.
• Prevents unauthorized parties from successfully monitoring radio traffic.
• Enhances personnel security.
• Provides some user authentication on radio traffic.
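The lock-and-key process above can be sketched with a toy symmetric cipher. This is a teaching sketch of counter-mode keystream encryption, not the DES or AES algorithms named in the text; key, nonce, and frame contents are all illustrative placeholders:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key||nonce||counter.
    A teaching sketch of counter-mode keystream generation, NOT real AES."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice restores the plaintext,
    # so the same routine both "locks" and "unlocks" a voice frame.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret key material"   # both radios hold this key
nonce = b"frame-0001"                 # must be unique per frame
frame = b"digitized voice frame"      # placeholder for vocoder output

ciphertext = xor_cipher(key, nonce, frame)
assert xor_cipher(key, nonce, ciphertext) == frame  # round-trips correctly
```

A radio without the key sees only the ciphertext; a fielded system would use a vetted algorithm such as AES in place of this sketch.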
2.5 The MELPe vocoder
MELPe, or enhanced MELP (Mixed Excitation Linear Prediction), is a United States Department of Defense speech coding standard used mainly in military applications, satellite communications, secure voice, and secure radio devices. Its development was led and supported by the NSA and NATO. The US government's MELPe secure voice standard is known as MIL-STD-3005, and NATO's MELPe secure voice standard is known as STANAG-4591. The 2400 bit/s MELP was created by Texas Instruments and first standardized in 1997 as MIL-STD-3005. Between 1998 and 2001, a new MELP-based vocoder was created at half the rate (i.e. 1200 bit/s) and substantial enhancements were added to MIL-STD-3005 by SignalCom (later acquired by Microsoft) and AT&T, which included:
• An additional new vocoder at half the rate (i.e. 1200 bit/s),
• Substantially improved encoding (analysis),
• Substantially improved decoding (synthesis),
• Noise preprocessing for removing background noise,
• Transcoding between the 2400 bit/s and 1200 bit/s bit streams.
This significant development aimed to create a new coder at half the rate that remains interoperable with the old MELP standard.
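The relationship between the two rates can be checked arithmetically. The frame parameters below (a 54-bit frame every 22.5 ms for 2400 bit/s MELP, with the 1200 bit/s coder packing three such frame periods into one 67.5 ms superframe of 81 bits) are commonly cited figures and should be treated as assumptions, not normative values:

```python
# Rate check for the two MELP streams, using commonly cited frame
# parameters (treated as assumptions).
frame_bits, frame_ms = 54, 22.5
rate_2400 = frame_bits / (frame_ms / 1000)            # bits per second

superframe_bits, superframe_ms = 81, 3 * frame_ms     # 67.5 ms superframe
rate_1200 = superframe_bits / (superframe_ms / 1000)

print(round(rate_2400), round(rate_1200))             # 2400 1200
```

The shared 22.5 ms frame clock is what makes transcoding between the two bit streams tractable.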
2.6 Secure encrypted dualmode military band base stations
Figure 2.1 illustrates secure encrypted dual-mode military band base stations, vehicle mobiles, manpacks, and handheld radios. These devices are used in hostile and unattended environments.
Figure: 2.1 Secure encrypted dualmode Military Band Base stations
The Nabishi NHT78A handheld radio, the Nabishi MMR78 multipurpose radio (base, mobile, or manpack), and the MR3088 mobile VHF military radio are used in modern tactical environments. The radios are dual-mode, meaning they work in either DIGITAL or ANALOGUE mode, a feature that is user-changeable at the press of a button. The radios provide secure and reliable voice and data communication over the entire military band of 30-88 MHz with 12.5 or 25 kHz steps. Some of their powerful features include the capability to transmit both data and voice, communication secured with a scrambler, selectable tone signaling, versatile scanning options, a dynamic antenna matching circuit, and automatic battery detection. The unit is encrypted for secure operation with true 256-bit AES digital encryption in digital mode, providing secure voice communications that prevent unauthorized listening to radio conversations or eavesdropping. The radios fully complement one another. The NHT78A conforms to rigid military standards: MIL-STD-810E tests for reliability under harsh environments and MIL-STD-461C tests for EMI/EMC. It is also fully compatible with older-generation radios such as the AN/PRC-77, AN/VRC-12, etc.
2.7 Secure voice communication systems
Secure voice communication systems fall into several classes according to their applications and application fields. These are as below:
• Secure Press to Talk (PTT) systems
• Push to Talk (PTT)
• Secure GSM phones
• VoIP and VPN
• Cellular voice communication
• The Thuraya satellite communication system
• Voice communication in wireless sensor networks
2.7.1 Secure Press to Talk (PTT) System
Just like a walkie-talkie network, PTT not only connects users within their group instantly, but its secure encryption also ensures privacy during the conversation, preventing unauthorized listening or eavesdropping. The system provides instant one-to-one or one-to-many voice calls to anywhere in the world via the GSM network. Designed for fast communications for police, security, government, hospitals, transport, and public utilities, the Nabishi Secure PTT System works from one country to another, and because it does not need to dial up, the call is instant and there are no expensive call charges. The Secure PTT System works seamlessly over any GSM network in the world using 3G, GPRS, or EDGE, as well as Wi-Fi.
2.7.2 Push to Talk (PTT)
Push to Talk (PTT) is like turning a mobile phone into a walkie-talkie, but with global range. Instead of transmitting and receiving radio frequencies, Push to Talk uses the mobile phone's Wi-Fi, 3G, GPRS, or EDGE data connection to send and receive calls in encrypted mode using VoIP technology. It offers voice communication at the push of a button: communicate privately on a one-to-one basis, or hold a discussion with the whole team on an instant conference call in clear digital transparency.
2.7.3 Cellular Voice Encryption
The cellular voice encryption unit snaps onto the data port of a cellular phone (included), working over the GSM network. Its simple operation uses a military-grade 256-bit AES encryption algorithm, the most advanced encryption standard for voice communication, more advanced than the DES standard. Operation is transparent, with no action required by the user.
2.7.4 VoIP VPN
A VoIP VPN combines Voice over IP and Virtual Private Network technologies to offer a method for delivering secure voice. Because VoIP transmits digitized voice as a stream of data, the VoIP VPN solution accomplishes voice encryption quite simply, applying standard dataencryption mechanisms inherently available in the collection of protocols used to implement a VPN. The VoIP gatewayrouter first converts the analog voice signal to digital form, encapsulates the digitized voice within IP packets, then encrypts the digitized voice using IPSec, and finally routes the encrypted voice packets securely through a VPN tunnel. At the remote site, another VoIP router decodes the voice and converts the digital voice to an analog signal for delivery.
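The digitize-packetize-encrypt-tunnel pipeline just described can be walked through with a toy end-to-end sketch. The XOR "cipher" here stands in for IPsec, and all names, header sizes, and the shared key are illustrative assumptions:

```python
import hashlib

KEY = b"vpn tunnel shared secret"  # illustrative pre-shared key

def xor_crypt(packet: bytes) -> bytes:
    # Same operation encrypts and decrypts (XOR with a key-derived pad).
    # A stand-in for IPsec, not a real cipher.
    pad = (hashlib.sha256(KEY).digest() * (len(packet) // 32 + 1))[:len(packet)]
    return bytes(a ^ b for a, b in zip(packet, pad))

def send(analog_voice: str, seq: int) -> bytes:
    digitized = analog_voice.encode()             # ADC / vocoder stage
    packet = seq.to_bytes(2, "big") + digitized   # minimal voice packet
    encrypted = xor_crypt(packet)                 # "IPsec" stage
    return b"TUNNEL" + encrypted                  # outer VPN encapsulation

def receive(tunneled: bytes) -> tuple[int, str]:
    packet = xor_crypt(tunneled[len(b"TUNNEL"):])  # strip header, decrypt
    return int.from_bytes(packet[:2], "big"), packet[2:].decode()

seq, voice = receive(send("hello over the vpn", 7))
assert (seq, voice) == (7, "hello over the vpn")
```

Note how the eavesdropper on the tunnel sees only the outer header and ciphertext; the sequence number and voice payload are exposed only after decryption at the far gateway.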
2.7.5 Secure GSM Phone
Figure: 2.2 Secure GSM Phone
Illegal wiretapping poses ever-bigger risks to the confidentiality of voice communication, as the required technical effort has become less complex with the advent of Voice over IP (VoIP) and GSM networks. So-called IMSI catchers are available on the market that fake a base station towards a mobile phone and can thus be used for a man-in-the-middle attack. Other attackers focus directly on the internal network of a GSM provider to catch many more calls as they pass through the respective switches. This danger becomes especially real when road warriors travel in less secure or trustworthy areas of the world. Figure 2.2 illustrates the secure GSM system.
2.7.6 Thuraya satellite communication system
Thuraya is a satellite communication system that covers Europe, the Middle East, most of Africa, and large parts of Asia using geostationary satellites. Thuraya combines excellent voice quality with very low per-minute rates, and its affordable terminal equipment is approximately the size of an old-fashioned GSM mobile phone. The Nabishi T2 in Thuraya mode is the perfect solution for secure communication in areas without GSM coverage, or when one does not wish to depend on the local network infrastructure. It is used heavily in marine and naval (vessel) communication.
2.7.7 Voice communication in Wireless Sensor Network
In global secure voice communication, the future trend is towards wireless sensor networks. Wireless sensor networks are quickly gaining popularity because they are potentially low-cost solutions to a variety of real-world challenges. Their low cost provides a means to deploy large sensor arrays in a variety of conditions capable of performing both military and civilian tasks. But sensor networks also introduce severe resource constraints due to their lack of data storage and power, both of which represent major obstacles to the implementation of traditional computer security techniques in a wireless sensor network. The unreliable communication channel and unattended operation make the security defenses even harder. Indeed, as has been pointed out in the literature, wireless sensors often have the processing characteristics of machines that are decades old (or older), and the industrial trend is to reduce the cost of wireless sensors while maintaining similar computing power. With that in mind, many researchers have begun to address the challenges of maximizing the processing capabilities and energy reserves of wireless sensor nodes while also securing them against attackers. In addition to these traditional security issues, we observe that many general-purpose sensor network techniques (particularly the early research) assumed that all nodes are cooperative and trustworthy. This is not the case for most real-world wireless sensor networking applications, which require a certain amount of trust in the application in order to maintain proper network functionality. We can divide sensor network security into four major categories: the obstacles to sensor network security, the requirements of a secure wireless sensor network, attacks, and defensive measures. Figure 2.3 illustrates a wireless sensor network system.
Figure: 2.3 Wireless Sensor Networks System
2.7.7.1 Wireless Voice Sensor
A voice sensor needs two or three microphones as its input device. Observing the waveforms received at each instant, the voice detector continuously checks whether they are potential voice or not. When voice is detected, a feature extractor extracts high-dimensional features such as spectral coefficients. Finally, a recognized voice is obtained by calculating the probability of the input waveform. The voice sensor is shown in Figure 2.4.
Figure: 2.4 Voice sensor
The recognized voice consists of a string of characters, which is an encoded string. This string is transported to the pretend console driver in the kernel, which is responsible for translating the input string into keyboard input strokes. Those key strokes are treated as if someone were typing on a keyboard, so the recognized voice can be efficiently transmitted to each application requiring input, in accordance with other secure protocols or algorithms.
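The voice-detection stage described above can be sketched as a simple energy-based detector: frame the waveform, compute per-frame RMS energy, and flag frames above a threshold as potential voice. The frame length (160 samples, i.e. 20 ms at 8 kHz) and threshold are illustrative assumptions; real detectors use spectral features as the text notes:

```python
import math

def detect_voice(samples: list[float], frame_len: int = 160,
                 threshold: float = 0.1) -> list[bool]:
    """Flag each complete frame whose RMS energy exceeds the threshold."""
    flags = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        flags.append(rms > threshold)
    return flags

silence = [0.0] * 160
tone = [0.5 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(160)]
print(detect_voice(silence + tone))   # [False, True]
```

Only frames flagged True would be passed on to the feature extractor, saving the energy that full recognition of silence would waste.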
2.7.7.2 Obstacles to Sensor Network Security
A wireless sensor network is an exceptional network with many constraints compared to a traditional computer network. Due to these constraints it is difficult to directly apply existing security approaches to the area of wireless sensor networks. Therefore, to build functional security mechanisms while borrowing ideas from current security techniques, it is necessary to know and understand these constraints first.
2.7.7.3 Limitations of Resources
Every security idea requires a certain amount of resources for the implementation, including data memory, code space, and energy to power the sensor. However, currently these resources are very limited in a tiny wireless sensor. Figure 2.5 illustrates WSN device which are limited in several options.
• Limited Memory and Storage Space: A sensor is a tiny device with only a small amount of memory and storage space for code. In order to build an effective security mechanism, it is necessary to limit the code size of the security algorithm. For example, one common sensor type (TelosB) has a 16-bit, 8 MHz RISC CPU with only 10K RAM, 48K program memory, and 1024K flash storage. With such limitations, the software built for the sensor must also be quite small. The total code space of TinyOS, the de facto standard operating system for wireless sensors, is approximately 4K, and the core scheduler occupies only 178 bytes. Therefore, the code size of all security-related code must also be small.
• Power Limitation: Energy is the biggest constraint on wireless sensor capabilities. We assume that once sensor nodes are deployed in a sensor network, they cannot be easily replaced (high operating cost) or recharged (high cost of sensors). Therefore, the battery charge taken with them into the field must be conserved to extend the life of the individual sensor node and the entire sensor network. When implementing a cryptographic function or protocol within a sensor node, the energy impact of the added security code must be considered. When adding security to a sensor node, we are interested in the impact that security has on the lifespan of the sensor (i.e., its battery life). The extra power consumed by sensor nodes due to security is related to the processing required for security functions (e.g., encryption, decryption, signing data, verifying signatures), the energy required to transmit security-related data or overhead (e.g., initialization vectors needed for encryption/decryption), and the energy required to store security parameters in a secure manner (e.g., cryptographic key storage).
Figure: 2.5 WSN device
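The battery-life impact of security overhead can be estimated with a back-of-the-envelope calculation in the spirit of the power discussion above. Every number here (battery capacity, per-packet energy, 20% security overhead) is an assumption for illustration, not a measurement:

```python
# Assumed figures, for illustration only.
cells = 2
capacity_mah, volts = 2600, 1.5                 # two AA-class cells
battery_j = cells * (capacity_mah / 1000) * 3600 * volts   # joules available

per_packet_uj = 50.0        # assumed CPU+radio cost per plaintext packet (uJ)
security_overhead = 0.20    # assumed 20% extra for encryption + MAC bytes

packets_plain = battery_j / (per_packet_uj * 1e-6)
packets_secure = battery_j / (per_packet_uj * (1 + security_overhead) * 1e-6)

print(f"lifetime shrinks to {packets_secure / packets_plain:.0%} of baseline")
```

Under these assumptions a fixed 20% per-packet overhead cuts node lifetime to about five-sixths of the unsecured baseline, which is why lightweight ciphers and short MACs matter at this scale.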
2.7.7.4 Unreliable Communication
Certainly, unreliable communication is another threat to sensor security. The security of the network relies heavily on a deﬁned protocol, which in turn depends on communication.
• Unreliable Transfer: Normally the packet-based routing of the sensor network is connectionless and thus inherently unreliable. Packets may be damaged by channel errors or dropped at highly congested nodes, resulting in lost or missing packets. A higher channel error rate also forces the software developer to devote resources to error handling. More importantly, if the protocol lacks appropriate error handling, it is possible to lose critical security packets, including, for example, a cryptographic key.
• Conflicts: Even if the channel is reliable, the communication may still be unreliable. This is due to the broadcast nature of the wireless sensor network. If packets meet in the middle of a transfer, conflicts occur and the transfer fails. In a crowded (high-density) sensor network, this can be a major problem. More details about the effects of wireless communication can be found in the literature.
• Latency: Multi-hop routing, network congestion, and node processing can lead to greater latency in the network, making it difficult to achieve synchronization among sensor nodes. Synchronization issues can be critical to sensor security where the security mechanism relies on critical event reports and cryptographic key distribution. Interested readers may refer to the literature on real-time communications in wireless sensor networks.
2.7.7.5 Unattended Operation
Depending on the function of the particular sensor network, the sensor nodes may be left unattended for long periods of time. There are three main caveats to unattended sensor nodes:
• Exposure to Physical Attacks: A sensor may be deployed in an environment open to adversaries, bad weather, and so on. The likelihood that a sensor suffers a physical attack in such an environment is therefore much higher than for a typical PC, which is located in a secure place and mainly faces attacks from a network.
• Managed Remotely: Remote management of a sensor network makes it virtually impossible to detect physical tampering (e.g., through tamper-proof seals) and to handle physical maintenance issues (e.g., battery replacement). Perhaps the most extreme example of this is a sensor node used for remote reconnaissance missions behind enemy lines. In such a case, the node may not have any physical contact with friendly forces once deployed.
• No Central Management Point: A sensor network should be a distributed network without a central management point. This increases the vitality of the sensor network. However, if designed incorrectly, it will make network organization difficult, inefficient, and fragile. Perhaps most importantly, the longer a sensor is left unattended, the more likely it is that an adversary has compromised the node.
2.7.7.6 Security Requirements
A sensor network is a special type of network. It shares some commonalities with a typical computer network, but also poses unique requirements of its own. Therefore, we can think of the requirements of a wireless sensor network as encompassing both the typical network requirements and the unique requirements suited solely to wireless sensor networks.
2.7.7.6.1 Data Conﬁdentiality
Data conﬁdentiality is the most important issue in network security. Every network with any security focus will typically address this problem ﬁrst. In sensor networks, the conﬁdentiality relates to the following.
 A sensor network should not leak sensor readings to its neighbors. Especially in a military application, the data stored in the sensor node may be highly sensitive.
 In many applications nodes communicate highly sensitive data, e.g., key distribution; therefore it is extremely important to build a secure channel in a wireless sensor network.
 Public sensor information, such as sensor identities and public keys, should also be encrypted to some extent to protect against traﬃc analysis attacks. The standard approach for keeping sensitive data secret is to encrypt the data with a secret key that only intended receivers possess, thus achieving conﬁdentiality.
2.7.7.6.2 Data Integrity
With the implementation of conﬁdentiality, an adversary may be unable to steal information. However, this doesn’t mean the data is safe. The adversary can change the data, so as to send the sensor network into disarray. For example, a malicious node may add some fragments or manipulate the data within a packet. This new packet can then be sent to the original receiver. Data loss or damage can even occur without the presence of a malicious node due to the harsh communication environment. Thus, data integrity ensures that any received data has not been altered in transit.
2.7.7.6.3 Data Freshness
Even if confidentiality and data integrity are assured, we also need to ensure the freshness of each message. Informally, data freshness suggests that the data is recent, and it ensures that no old messages have been replayed. This requirement is especially important when shared-key strategies are employed in the design. Typically, shared keys need to be changed over time. However, it takes time for new shared keys to be propagated to the entire network. In this case, it is easy for the adversary to use a replay attack. It is also easy to disrupt the normal work of the sensor if the sensor is unaware of the new key change time. To solve this problem a nonce, or another time-related counter, can be added to the packet to ensure data freshness.
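The counter idea above can be sketched as a per-sender replay filter: each message carries a monotonically increasing counter, and anything at or below the last accepted value is dropped. Counter handling here is a sketch; a real protocol would also authenticate the counter field so an attacker cannot forge it:

```python
# Last accepted counter per sender; -1 means nothing seen yet.
last_seen: dict[str, int] = {}

def accept(sender: str, counter: int) -> bool:
    """Accept a message only if its counter is strictly newer."""
    if counter <= last_seen.get(sender, -1):
        return False           # stale or replayed message: reject
    last_seen[sender] = counter
    return True

assert accept("node-7", 0)
assert accept("node-7", 1)
assert not accept("node-7", 1)   # replayed packet is dropped
assert not accept("node-7", 0)   # old packet is dropped
```

Gaps in the sequence are tolerated (the channel is lossy), but no packet can ever be accepted twice, which defeats the replay attack described above.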
2.7.7.6.4 Availability
Adjusting the traditional encryption algorithms to ﬁt within the wireless sensor network is not free, and will introduce some extra costs. Some approaches choose to modify the code to reuse as much code as possible. Some approaches try to make use of additional communication to achieve the same goal. What’s more, some approaches force strict limitations on the data access, or propose an unsuitable scheme (such as a central point scheme) in order to simplify the algorithm. But all these approaches weaken the availability of a sensor and sensor network for the following reasons:
 Additional computation consumes additional energy. If no more energy exists, the data will no longer be available.
 Additional communication also consumes more energy. What’s more, as communication increases so too does the chance of incurring a communication conﬂict.
 A single point failure will be introduced if using the central point scheme. This greatly threatens the availability of the network. The requirement of security not only aﬀects the operation of the network, but also is highly important in maintaining the availability of the whole network.
2.7.7.6.5 SelfOrganization
A wireless sensor network is typically an ad hoc network, which requires every sensor node to be independent and flexible enough to be self-organizing and self-healing according to different situations. There is no fixed infrastructure available for the purpose of network management in a sensor network. This inherent feature brings a great challenge to wireless sensor network security as well. For example, the dynamics of the whole network inhibit the idea of pre-installing a shared key between the base station and all sensors. Several random key predistribution schemes have been proposed in the context of symmetric encryption techniques. In the context of applying public-key cryptography techniques in sensor networks, an efficient mechanism for public-key distribution is necessary as well. In the same way that distributed sensor networks must self-organize to support multi-hop routing, they must also self-organize to conduct key management and build trust relations among sensors. If self-organization is lacking in a sensor network, the damage resulting from an attack, or even from the hazardous environment itself, may be devastating.
2.7.7.6.6 Time Synchronization
Most sensor network applications rely on some form of time synchronization. In order to conserve power, an individual sensor's radio may be turned off for periods of time. Furthermore, sensors may wish to compute the end-to-end delay of a packet as it travels between two pairwise sensors. A more collaborative sensor network may require group synchronization for tracking applications, etc. In the literature, secure synchronization protocols have been proposed for sender-receiver (pairwise), multi-hop sender-receiver (for use when the pair of nodes are not within single-hop range), and group synchronization.
2.7.7.6.7 Secure Localization
Often, the utility of a sensor network relies on its ability to accurately and automatically locate each sensor in the network. A sensor network designed to locate faults needs accurate location information in order to pinpoint the location of a fault. Unfortunately, an attacker can easily manipulate non-secured location information by reporting false signal strengths, replaying signals, etc. One proposed technique is verifiable multilateration (VM). In multilateration, a device's position is accurately computed from a series of known reference points, and authenticated ranging and distance bounding are used to ensure accurate location of a node. Because of distance bounding, an attacking node can only increase its claimed distance from a reference point. However, to ensure location consistency, an attacking node would also have to prove that its distance from another reference point is shorter. Since it cannot do this, a node manipulating the localization protocol can be found. For large sensor networks, the SPINE (Secure Positioning for sensor NEtworks) algorithm, a three-phase algorithm based on verifiable multilateration, has been proposed. Another proposal, SeRLoc (Secure Range-Independent Localization), is novel in its decentralized, range-independent nature. SeRLoc uses locators that transmit beacon information. It is assumed that the locators are trusted and cannot be compromised, and that each locator knows its own location. A sensor computes its location by listening for the beacon information sent by each locator; the beacons include the locator's location. Using all of the beacons that a sensor node detects, the node computes an approximate location based on the coordinates of the locators. Using a majority vote scheme, the sensor then computes an overlapping antenna region. The final computed location is the "center of gravity" of the overlapping antenna region.
All beacons transmitted by the locators are encrypted with a shared global symmetric key that is preloaded to the sensor prior to deployment. Each sensor also shares a unique symmetric key with each locator. This key is also preloaded on each sensor.
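The final "center of gravity" step of SeRLoc can be illustrated with a minimal sketch. A real implementation first intersects antenna sectors and takes the centroid of that overlap region; this sketch, as a simplifying assumption, averages the heard locators' coordinates directly:

```python
# Simplified "center of gravity" estimate: average the coordinates of the
# locators a sensor can hear. Coordinates are illustrative.
def estimate_position(locators: list[tuple[float, float]]) -> tuple[float, float]:
    n = len(locators)
    return (sum(x for x, _ in locators) / n,
            sum(y for _, y in locators) / n)

heard = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
print(estimate_position(heard))   # (5.0, 5.0)
```

Because each beacon is authenticated with the preloaded keys, an attacker cannot simply inject a fake locator coordinate to drag this centroid toward a false position.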
2.7.7.6.8 Authentication
An adversary is not limited to modifying data packets; it can change the whole packet stream by injecting additional packets. So the receiver needs to ensure that the data used in any decision-making process originates from the correct source. On the other hand, when constructing the sensor network, authentication is necessary for many administrative tasks (e.g., network reprogramming or controlling sensor node duty cycle). From the above, we can see that message authentication is important for many applications in sensor networks. Informally, data authentication allows a receiver to verify that the data really was sent by the claimed sender. In the case of two-party communication, data authentication can be achieved through a purely symmetric mechanism: the sender and the receiver share a secret key to compute the message authentication code (MAC) of all communicated data. Adrian Perrig et al. propose a key-chain distribution system for their µTESLA secure broadcast protocol. The basic idea of the µTESLA system is to achieve asymmetric cryptography by delaying the disclosure of the symmetric keys. In this case a sender broadcasts a message generated with a secret key. After a certain period of time, the sender discloses the secret key. The receiver is responsible for buffering the packet until the secret key has been disclosed; after disclosure, the receiver can authenticate the packet, provided that the packet was received before the key was disclosed. One limitation of µTESLA is that some initial information must be unicast to each sensor node before authentication of broadcast messages can begin. Liu and Ning propose an enhancement to the µTESLA system that broadcasts the key-chain commitments rather than using µTESLA's unicasting technique. They present a series of schemes, starting with a simple predetermination of key chains and finally settling on a multi-level key chain technique.
The multilevel key chain scheme uses predetermination and broadcasting to achieve a scalable key distribution technique that is designed to be resistant to denial of service attacks, including jamming.
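The one-way key chain underlying these schemes can be sketched directly: keys are generated backwards (each key is the hash of the next one to be disclosed), disclosed forwards, and a receiver holding only the public commitment can verify any later key by repeated hashing. Chain length and seed are illustrative:

```python
import hashlib

def make_chain(seed: bytes, length: int) -> list[bytes]:
    """Build a one-way key chain; chain[0] is the public commitment and
    keys are disclosed in order chain[1], chain[2], ..."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return list(reversed(chain))

def verify_key(commitment: bytes, key: bytes, interval: int) -> bool:
    """Hash the disclosed key `interval` times; it must land on the commitment."""
    for _ in range(interval):
        key = hashlib.sha256(key).digest()
    return key == commitment

chain = make_chain(b"secret seed", 5)
assert verify_key(chain[0], chain[3], 3)        # genuine disclosed key
assert not verify_key(chain[0], b"x" * 32, 3)   # forged key fails
```

Because the hash is one-way, knowing disclosed keys never lets an attacker compute a not-yet-disclosed one, which is exactly the asymmetry µTESLA relies on.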
2.7.7.7 Attacks
Sensor networks are particularly vulnerable to several key types of attacks. Attacks can be performed in a variety of ways, most notably as denial of service attacks, but also through traffic analysis, privacy violation, physical attacks, and so on. Denial of service attacks on wireless sensor networks can range from simply jamming the sensor's communication channel to more sophisticated attacks designed to violate the 802.11 MAC protocol or any other layer of the wireless sensor network. Due to the potential asymmetry in power and computational constraints, guarding against a well-orchestrated denial of service attack on a wireless sensor network can be nearly impossible: a more powerful node can easily jam a sensor node and effectively prevent the sensor network from performing its intended duty. We note that attacks on wireless sensor networks are not limited to denial of service attacks, but rather encompass a variety of techniques including node takeovers, attacks on the routing protocols, and attacks on a node's physical security. In this section, we first address some common denial of service attacks and then describe additional attacks, including those on the routing protocols as well as an identity-based attack known as the Sybil attack.
2.7.7.7.1 Contemplation of attacks
Wood and Stankovic define one kind of denial of service attack as "any event that diminishes or eliminates a network's capacity to perform its expected function". Certainly, denial of service attacks are not a new phenomenon. In fact, there are several standard techniques used in traditional computing to cope with some of the more common denial of service techniques, although this is still an open problem for the network security community. Unfortunately, wireless sensor networks cannot afford the computational overhead necessary to implement many of the typical defensive strategies. What makes the prospect of denial of service attacks even more alarming is the projected use of sensor networks in highly critical and sensitive applications. For example, a sensor network designed to alert building occupants in the event of a fire could be highly susceptible to a denial of service attack; even worse, such an attack could result in the deaths of building occupants due to the non-operational fire detection network. Other possible uses for wireless sensors include the monitoring of traffic flows, which may include the control of traffic lights, and so forth. A denial of service attack on such a sensor network could prove very costly, especially on major roads. For this reason, researchers have spent a great deal of time both identifying the various types of denial of service attacks and devising strategies to subvert them. We now describe some of the major types of denial of service attacks.
2.7.7.7.2 Types of Denial of Service attacks
A standard attack on wireless sensor networks is simply to jam a node or set of nodes. Jamming, in this case, is the transmission of a radio signal that interferes with the radio frequencies being used by the sensor network. Jamming can come in two forms: constant jamming and intermittent jamming. Constant jamming involves the complete jamming of the entire network: no messages can be sent or received. If the jamming is only intermittent, then nodes are able to exchange messages periodically, but not consistently. This too can have a detrimental impact on the sensor network, as the messages being exchanged between nodes may be time-sensitive. Attacks can also be made on the link layer itself. One possibility is that an attacker may intentionally violate the communication protocol, e.g. the ZigBee or IEEE 802.11b (WiFi) protocol, and continually transmit messages in an attempt to generate collisions. Such collisions would require the retransmission of any packet affected by the collision. Using this technique it would be possible for an attacker to deplete a sensor node's power supply by forcing too many retransmissions. At the routing layer, a node may take advantage of a multihop network by simply refusing to route messages. This could be done intermittently or constantly, with the net result that any neighbor who routes through the malicious node will be unable to exchange messages with at least part of the network. Extensions to this technique include intentionally routing messages to incorrect nodes (misdirection). The transport layer is also susceptible to attack, as in the case of flooding. Flooding can be as simple as sending many connection requests to a susceptible node. In this case, resources must be allocated to handle the connection request. Eventually a node's resources will be exhausted, thus rendering the node useless.
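The energy-depletion effect of forced retransmissions can be sketched with a simple Monte Carlo model; the collision rates below are hypothetical, chosen only to contrast a benign channel with a jammed one:

```python
import random

def expected_transmissions(collision_rate, trials=100_000, max_tries=1000, seed=1):
    """Monte Carlo estimate of how many sends one packet needs
    when an attacker forces a fraction of frames to collide."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        tries = 1
        while rng.random() < collision_rate and tries < max_tries:
            tries += 1  # collision -> retransmit, burning more energy
        total += tries
    return total / trials

# Without jamming (5% natural collisions) vs. with link-layer jamming (90%).
# Energy per delivered packet scales directly with the number of transmissions.
print(round(expected_transmissions(0.05), 2))
print(round(expected_transmissions(0.90), 2))
```

With a 90% forced-collision rate the node spends roughly ten times the energy per delivered packet, which is exactly the depletion effect described above.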
2.7.7.7.3 The Sybil attack
Newsome et al. describe the Sybil attack as it relates to wireless sensor networks. Simply put, the Sybil attack is defined as a "malicious device illegitimately taking on multiple identities". It was originally described as an attack able to defeat the redundancy mechanisms of distributed data storage systems in peer-to-peer networks. In addition to defeating distributed data storage systems, the Sybil attack is also effective against routing algorithms, data aggregation, voting, fair resource allocation and foiling misbehavior detection. Regardless of the target (voting, routing, aggregation), the Sybil attack functions similarly: all of the techniques involve utilizing multiple identities. For instance, in a sensor network voting scheme, the Sybil attack might utilize multiple identities to generate additional "votes". Similarly, to attack the routing protocol, the Sybil attack would rely on a malicious node taking on the identity of multiple nodes, thus routing multiple paths through a single malicious node.
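As a toy illustration of how forged identities defeat a majority vote, consider this minimal sketch (the node names, readings, and identity counts are hypothetical):

```python
from collections import Counter

def tally(votes):
    """votes: list of (node_id, choice); one vote per claimed identity."""
    return Counter(choice for _, choice in votes)

# Three honest nodes report the true reading "A".
honest = [("n1", "A"), ("n2", "A"), ("n3", "A")]
# One malicious device claims five fabricated identities, all voting "B".
sybil = [(f"fake{i}", "B") for i in range(5)]

result = tally(honest + sybil)
print(result.most_common(1)[0][0])  # forged identities outvote reality
```

The vote-counting scheme has no way to tell that five of the "voters" are one physical device, which is why Sybil defenses focus on binding identities to something physically scarce (radio fingerprints, keys, position).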
2.7.7.7.4 Traﬃc Analysis Attacks
Wireless sensor networks are typically composed of many low-power sensors communicating with a few relatively robust and powerful base stations. It is not unusual, therefore, for data to be gathered by the individual nodes and ultimately routed to the base station. Often, for an adversary to effectively render the network useless, the attacker can simply disable the base station. To make matters worse, Deng et al. demonstrate two attacks that can identify the base station in a network (with high probability) without even understanding the contents of the packets (even if the packets are themselves encrypted). A rate monitoring attack makes use of the idea that nodes closest to the base station tend to forward more packets than those farther away from it. An attacker need only monitor which nodes are sending packets and follow those nodes that are sending the most packets. In a time correlation attack, an adversary generates events and monitors to whom a node sends its packets. To generate an event, the adversary could simply create a physical event that would be monitored by the sensor(s) in the area (turning on a light, for instance).
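The rate monitoring attack reduces to picking the node with the highest observed forwarding count; a minimal sketch (the node IDs and packet counts below are hypothetical captures, and note the packet contents are never inspected):

```python
def likely_base_station(forward_counts):
    """Rate-monitoring heuristic: the node observed forwarding the most
    packets is probably adjacent to (or is) the base station."""
    return max(forward_counts, key=forward_counts.get)

# Observed packets-forwarded per node ID (hypothetical traffic capture).
observed = {"node7": 12, "node3": 48, "base": 310, "node9": 95}
print(likely_base_station(observed))
```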
2.7.7.7.5 Node Replication Attacks
Conceptually, a node replication attack is quite simple: an attacker seeks to add a node to an existing sensor network by copying (replicating) the node ID of an existing sensor node. A node replicated in this fashion can severely disrupt a sensor network's performance: packets can be corrupted or even misrouted. This can result in a disconnected network, false sensor readings, and so on. If an attacker can gain physical access to the entire network, he can copy cryptographic keys to the replicated sensor and can also insert the replicated node into strategic points in the network. By inserting the replicated nodes at specific network points, the attacker could easily manipulate a specific segment of the network, perhaps by disconnecting it altogether.
2.7.7.7.6 Attacks against Privacy
Sensor network technology promises a vast increase in automatic data collection capabilities through efficient deployment of tiny sensor devices. While these technologies offer great benefits to users, they also exhibit significant potential for abuse. Particularly relevant concerns are privacy problems, since sensor networks provide increased data collection capabilities. Adversaries can use even seemingly innocuous data to derive sensitive information if they know how to correlate multiple sensor inputs. For example, in the famous "panda-hunter problem", the hunter can infer the position of pandas by monitoring the traffic. The main privacy problem, however, is not that sensor networks enable the collection of information. In fact, much information from sensor networks could probably be collected through direct site surveillance. Rather, sensor networks aggravate the privacy problem because they make large volumes of information easily available through remote access. Hence, adversaries need not be physically present to maintain surveillance. They can gather information in a low-risk, anonymous manner. Remote access also allows a single adversary to monitor multiple sites simultaneously. Some of the more common attacks against sensor privacy are:
 Monitoring and eavesdropping: This is the most obvious attack on privacy. By listening to the data, the adversary can easily discover the communication contents. When the traffic conveys control information about the sensor network configuration, which contains potentially more detailed information than is accessible through the location server, eavesdropping can act effectively against privacy protection.
 Traffic analysis: Traffic analysis is typically combined with monitoring and eavesdropping. An increase in the number of packets transmitted between certain nodes can signal that a specific sensor has registered activity. Through analysis of the traffic, sensors with special roles or activities can be effectively identified.
 Camouflage: Adversaries can insert their own nodes, or compromise existing nodes, to hide in the sensor network. These nodes can then masquerade as normal nodes to attract packets and misroute them, e.g. forward the packets to nodes conducting the privacy analysis. It is worth noting that, as pointed out in the literature, the current understanding of privacy in wireless sensor networks is immature, and more research is needed.
2.7.7.7.7 Physical Attacks
Sensor networks typically operate in hostile outdoor environments. In such environments, the small form factor of the sensors, coupled with the unattended and distributed nature of their deployment, makes them highly susceptible to physical attacks, i.e., threats due to physical node destruction. Unlike many of the attacks mentioned above, physical attacks destroy sensors permanently, so the losses are irreversible. For instance, attackers can extract cryptographic secrets, tamper with the associated circuitry, modify the programming in the sensors, or replace them with malicious sensors under the control of the attacker. Recent work has shown that standard sensor nodes, such as the MICA2 motes, can be compromised in less than one minute. While these results are not surprising given that the MICA2 lacks tamper-resistant hardware protection, they provide a cautionary note about the speed of a well-trained attacker. If an adversary compromises a sensor node, then the code inside the physical node may be modified.
2.8 Summary
We are now in a position to describe the measures for satisfying security requirements and protecting the sensor network from attacks. This starts with key establishment in wireless sensor networks, which lays the foundation for security in a wireless sensor network, followed by defending against DoS attacks, secure broadcasting and multicasting, defending against attacks on routing protocols, combating traffic analysis attacks, defending against attacks on sensor privacy, intrusion detection, secure data aggregation, defending against physical attacks, and trust management. A first consideration is that an encryption algorithm must be simple to design, develop, and operate, secure against attacks, and very hard for unauthorized parties to break. An encryption algorithm for voice or other communication systems must also be compatible with existing systems for faster implementation.
LITERATURE REVIEW ON CRYPTOGRAPHY
3.1 Introduction
Cryptography is the science of securing data using different techniques to encrypt and decrypt it. Cryptography enables us to store sensitive information or transmit it across insecure networks (like the Internet) so that it cannot be read by anyone except the intended recipient. While cryptography is the science of securing data, cryptanalysis is the science of analyzing and breaking secure communication. Classical cryptanalysis involves an interesting combination of analytical reasoning, application of mathematical tools, pattern finding, patience, determination, and luck. Cryptography is a technique used to hide the meaning of a message and is derived from the Greek word kryptos (hidden). This is different from steganographic techniques in that one is not hiding the actual message, only the meaning of the message. If a message were to fall into the hands of the wrong person, cryptography should ensure that the message cannot be read. Typically the sender and receiver agree upon a message-scrambling protocol beforehand and agree upon methods for encrypting and decrypting messages. Cryptography is further divided into two implementation techniques: transposition and substitution.
3.2 Requirements of Cryptography in voice communication
3.2.1 Message Content Security
 No observer can access the contents of the message
 No observer can identify the sender and receiver.
3.2.2 Integrity of Message Content
 The message has not been changed or lost during transmission;
 The message has not been prevented from reaching the recipient; and
 The message has not reached the recipient twice.
3.2.3 Authentication of the Sender and Recipient
 The sender can be sure that the message reaches the intended recipient, and only the intended recipient; and
 The recipient can be sure that the message came from the sender and not an imposter. The act by an imposter of sending such a message is referred to as 'spoofing'.
3.2.4 NonRepudiation by the Sender and Recipient
 The sender cannot credibly deny that the message was sent by them; and
 The recipient cannot credibly deny that the message was received by them.
Data that can be read and understood without any special measures is called plaintext or clear text. This is the message or data that has to be secured. The method of disguising plaintext in such a way as to hide its substance is called encryption. Encrypting plaintext results in unreadable gibberish called cipher text. You use encryption to ensure that information is hidden from anyone for whom it is not intended, even those who can see the encrypted data. The process of reverting cipher text to its original plaintext is called decryption. A cryptographic algorithm, or cipher, is a mathematical function used in the encryption and decryption process. A cryptographic algorithm works in combination with a key (a word, number, or phrase) to encrypt the plaintext. The same plaintext encrypts to different cipher text with different keys. The security of encrypted data is entirely dependent on two things: the strength of the cryptographic algorithm and the secrecy of the key. A cryptographic algorithm, plus all possible keys and all the protocols that make it work, comprise a cryptosystem. PGP is a cryptosystem. Cryptography is an important element of any strategy to address message transmission security requirements. It is the practical art of converting messages or data into a different form, such that no one can read them without having access to the 'key'. The message may be converted using a 'code' (in which case each character or group of characters is substituted by an alternative one), or a 'cipher' (in which case the message as a whole is converted, rather than individual characters).
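The two implementation techniques named in Section 3.1, substitution and transposition, can be sketched in a few lines of Python (the shift key of 3 and the column count of 4 are arbitrary illustrative choices):

```python
def substitute(plaintext, key=3):
    """Caesar-style substitution: each letter is replaced by another."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def transpose(plaintext, cols=4):
    """Columnar transposition: reorder characters rather than replace them."""
    padded = plaintext + ' ' * (-len(plaintext) % cols)
    return ''.join(padded[r] for c in range(cols)
                   for r in range(c, len(padded), cols))

msg = "ATTACK AT DAWN"
print(substitute(msg))        # DWWDFN DW GDZQ
print(transpose(msg))
```

Decryption reverses each step: `substitute(ciphertext, -3)` recovers the plaintext, because the same key that scrambled the message unscrambles it, which is exactly the symmetric-key idea developed below.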
 Cryptography is the science of using mathematics to "encrypt" and "decrypt" data. Cryptography enables us to store sensitive information or transmit it across insecure networks like the Internet so that no one other than the intended recipient can read it.
 Cryptanalysis is the art of breaking ciphers, that is, retrieving the original message without knowing the proper key. Cryptography deals with all aspects of secure messaging, authentication, digital signatures, electronic money, and other applications.
 Breaking encryption scheme.
 Cracking encryption scheme.
 Strong encryption scheme.
 Weak encryption scheme.
3.3.1 Strong encryption scheme
A 'strong encryption scheme' is one that cannot be cracked quickly enough to threaten the security of the message, even using the most powerful computers available for the task.
3.3.2 Weak encryption scheme
With a 'weak encryption scheme', on the other hand, there is a significant risk that the key can be discovered by an organization that has access to sufficient computing power. Discussions in this area generally relate to the computing power owned by the U.S. National Security Agency (NSA). There are several aspects of a scheme that determine its strength. One of particular importance is the key length needed to make it highly unlikely that a cracking attempt will be successful.
There are two kinds of cryptosystem:
 Symmetric Cryptosystem (private key)
 Asymmetric Cryptosystem (public key)
3.3.3 Private Key Cryptography processes
There are two kinds of private-key cryptography processes, as below:
 Private key Encryption Process
 Private key Decryption process
3.3.3.1 Private key Encryption Process
In private-key cryptography, the sender and recipient agree beforehand on a secret private key. The plaintext is somehow combined with the key to create the cipher text. The method of combination is such that, it is hoped, an adversary could not determine the meaning of the message without decrypting it, for which he needs the key. The following diagram illustrates the encryption process. The problems of key distribution are solved by public-key cryptography, the concept of which was introduced by Whitfield Diffie and Martin Hellman in 1976. (There is now evidence that the British secret service invented it a few years before Diffie and Hellman, but kept it a military secret and did nothing with it.) Public-key cryptography uses a pair of keys: a public key, which encrypts data, and a corresponding private key for decryption. Because it uses two keys, it is sometimes called asymmetric cryptography. Figure 3.1 shows private-key encryption.
Figure: 3.1 Private key encryption
3.3.3.2 Private key Decryption process
To break a message encrypted with private-key cryptography, an adversary must either exploit a weakness in the encryption algorithm itself, or else try an exhaustive search of all possible keys (the brute-force method). If the key is large enough (e.g. 128 bits), such a search would take an impractically long time, even with very powerful computers.
Figure: 3.2 Private key decryption
Private-key methods are efficient and difficult to break. However, one major drawback is that the key must be exchanged between the sender and recipient beforehand, raising the issue of how to protect the secrecy of the key. When the president of the United States exchanges launch codes with a nuclear weapons site under his command, the key is accompanied by a team of armed couriers. Banks likewise use high security in transferring their keys between branches. These types of key exchanges are not practical, however, for e-commerce between, say, Amazon.com and a casual web surfer. Figure 3.2 shows private-key decryption.
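The exhaustive key search described above can be sketched with a deliberately weak toy cipher (the single-byte XOR cipher and the 8-bit key are assumptions made purely for illustration; real ciphers use far larger keys, and each extra key bit doubles the attacker's work):

```python
def xor_encrypt(data: bytes, key: int) -> bytes:
    """Toy cipher: XOR every byte with an 8-bit key (illustration only)."""
    return bytes(b ^ (key & 0xFF) for b in data)

def brute_force(ciphertext, known_plaintext, key_bits, encrypt):
    """Try every possible key until one reproduces the ciphertext."""
    for key in range(2 ** key_bits):
        if encrypt(known_plaintext, key) == ciphertext:
            return key
    return None

secret_key = 0x5A
ct = xor_encrypt(b"launch", secret_key)
print(brute_force(ct, b"launch", 8, xor_encrypt))  # 90 == 0x5A
```

An 8-bit key falls in at most 256 tries; a 128-bit key would require up to 2^128 tries, which is why the text calls such a search impractical.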
3.3.4 Public key Cryptography processes
There are two kinds of public-key cryptography processes, as below:
 Public key Encryption Process
 Public key decryption process
3.3.4.1 Public key Encryption Process
Public-key cryptography uses two keys: a private key (known only to the recipient) and a public key (known to everybody). The public key is used to encrypt the message, which is then sent to the recipient, who can decrypt it using the private key. A message encrypted with the public key cannot be decrypted with any other key except its corresponding private key. Figure 3.3 illustrates the encryption process in public-key cryptography.
Figure: 3.3 Public key encryption
3.3.4.2 Public key decryption process
Figure: 3.4 Public key Decryption processes
The public-key algorithm uses a one-way function to translate plaintext to cipher text. Without the private key, it is very difficult for anyone (including the sender) to reverse the process, i.e., translate the cipher text back to plaintext. One such one-way function used in public-key cryptography involves factoring very large numbers. The idea is that it is relatively easy to multiply numbers, even large ones, with a computer; however, it is very difficult to factor large numbers. The only known algorithms basically have to do a sort of exhaustive search (does 2 divide it? does 3? 5? 7? and so on). With the number sizes used in practice (hundreds of digits), such a search requires performing more tests than is remotely feasible. Figure 3.4 shows the public-key decryption process.
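The factoring-based one-way function can be illustrated with textbook RSA on deliberately tiny primes. This is a sketch, not a secure implementation: real deployments use moduli of 2048 bits or more, plus padding.

```python
# Toy RSA with tiny primes (illustration only).
p, q = 61, 53
n = p * q                # 3233, the public modulus; factoring n breaks the scheme
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: e*d = 1 (mod phi) -> 2753

m = 65                   # plaintext, encoded as a number < n
c = pow(m, e, n)         # encrypt with the public key (the easy direction)
print(c)                 # 2790
print(pow(c, d, n))      # decrypt with the private key -> 65
```

Anyone can compute c from m and the public pair (n, e), but recovering m from c requires d, and finding d requires factoring n into p and q, which is the hard direction of the one-way function.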
3.3.5 Public key’s characteristics
 Plaintext: This is the readable message or data that is fed into the algorithm as input.
 Encryption algorithm: The encryption algorithm performs various transformations on the plaintext.
 Public and private keys: This is a pair of keys that have been selected so that if one is used for encryption, the other is used for decryption. The exact transformations performed by the algorithm depend on the public or private key that is provided as input.
 Ciphertext: This is the scrambled message produced as output. It depends on the plaintext and the key. For a given message, two different keys will produce two different ciphertexts.
 Decryption algorithm: This algorithm accepts the ciphertext and the matching key and produces the original plaintext. The scheme has several essential steps:
 Each user generates a pair of keys to be used for the encryption and decryption of messages.
 Each user places one of the two keys in a public register or other accessible file. This is the public key. The companion key is kept private. As Figure 3.1 suggests, each user maintains a collection of public keys obtained from others.
 If Boby wishes to send a confidential message to Ali, Boby encrypts the message using Ali's public key.
 When Ali receives the message, she decrypts it using her private key. No other recipient can decrypt the message because only Ali knows Ali's private key.
3.3.6 Applications for Public key Cryptosystems
 Encryption/decryption: The sender encrypts a message with the recipient's public key.
 Digital signature: The sender "signs" a message with its private key. Signing is achieved by a cryptographic algorithm applied to the message or to a small block of data that is a function of the message.
 Key exchange: Two sides cooperate to exchange a session key. Several different approaches are possible, involving the private key(s) of one or both parties.
 Randomize Authentication: A randomized protocol is a deterministic protocol that uses an extra random string in addition to its normal input.
Table 3.1 Applications supported by public-key encryption algorithms

Algorithm        Encryption/Decryption   Digital Signature   Key Exchange   Randomized Authentication
RSA              Yes                     Yes                 Yes            No
Elliptic Curve   Yes                     Yes                 Yes            No
Diffie-Hellman   No                      No                  Yes            No
DSS              No                      Yes                 No             No
3.3.7 PublicKey Cryptanalysis
As with symmetric encryption, a public-key encryption scheme is vulnerable to a brute-force attack. The countermeasure is the same: use large keys. However, there is a tradeoff to be considered. Public-key systems depend on the use of some sort of invertible mathematical function. The complexity of calculating these functions may not scale linearly with the number of bits in the key but grow more rapidly than that. Thus, the key size must be large enough to make brute-force attack impractical but small enough for practical encryption and decryption. In practice, the key sizes that have been proposed do make brute-force attack impractical but result in encryption/decryption speeds that are too slow for general-purpose use. Instead, as was mentioned earlier, public-key encryption is currently confined to key management and signature applications. Another form of attack is to find some way to compute the private key given the public key. To date, it has not been mathematically proven that this form of attack is infeasible for a particular public-key algorithm. Thus, any given algorithm, including the widely used RSA algorithm, is suspect. The history of cryptanalysis shows that a problem that seems insoluble from one perspective can be found to have a solution if looked at in an entirely different way. Finally, there is a form of attack that is peculiar to public-key systems. This is, in essence, a probable-message attack. Suppose, for example, that a message is to be sent that consists solely of a 56-bit DES key. An adversary could encrypt all possible 56-bit DES keys using the public key and could discover the encrypted key by matching the transmitted ciphertext. Thus, no matter how large the key size of the public-key scheme, the attack is reduced to a brute-force attack on a 56-bit key. This attack can be thwarted by appending some random bits to such simple messages.
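The probable-message attack can be sketched with textbook RSA on tiny illustrative parameters: when the plaintext space is small, the adversary simply encrypts every candidate under the public key and compares against the observed ciphertext, never touching the private key.

```python
# Toy RSA public key (tiny primes, illustration only).
p, q, e = 61, 53, 17
n = p * q

def probable_message_attack(ciphertext, message_space):
    """Encrypt every candidate plaintext under the public key and
    return the one whose ciphertext matches."""
    for m in message_space:
        if pow(m, e, n) == ciphertext:
            return m
    return None

c = pow(42, e, n)                              # victim encrypts a small secret
print(probable_message_attack(c, range(256)))  # 42 recovered, no private key used
```

Appending random bits to the plaintext, as the text suggests, blows up the candidate space and defeats this enumeration.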
3.3.8 Confusion about cryptographic complexity
Complexity has always been a part of our environment, and therefore many scientific fields have dealt with complex systems and phenomena. Indeed, some would say that only what is somehow complex (what displays variation without being random) is worthy of interest. The term complex is often confused with the term complicated. In today's systems, this is the difference between myriad connecting "stovepipes" and effective "integrated" solutions.^{[3]} This means that complex is the opposite of independent, while complicated is the opposite of simple. While this has led some fields to come up with specific definitions of complexity, there is a more recent movement to regroup observations from different fields to study complexity in itself, whether it appears in anthills, human brains, or stock markets. One such interdisciplinary group of fields is relational order theories.
3.4 Complexity defined using big-O notation
We express complexity using big-O notation. For a problem of size n:
 a constanttime method is "order 1": O(1)
 a lineartime method is "order n": O(n)
 a quadratictime method is "order n squared": O(n^{2})
A function T(n) is O(f(n)) if for some constant c and for all values of n greater than some value n_{0}:
T(n) <= c * f(n)
The idea is that T(n) is the exact complexity of a method or algorithm as a function of the problem size n, and that f(n) is an upper bound on that complexity (i.e., the actual time/space for a problem of size n will be no worse than f(n)). In practice, we want the smallest f(n), the least upper bound on the actual complexity. For example, consider T(n) = 2 * n^{2} + 5. We can show that T(n) is O(n^{2}) by choosing c = 4 and n_{0} = 2. This is because for all values of n greater than 2:
2 * n^{2} + 5 <= 4 * n^{2}
T(n) is not O(n), because whatever constant c and value n_{0} we choose, we can always find a value of n greater than n_{0} such that 2 * n^{2} + 5 is greater than c * n. Here O(n^{2}) is the least upper bound, so the complexity is O(n^{2}).
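Both claims can be checked numerically; the cutoff of 10,000 and the generous constant c = 1000 for the failing bound are arbitrary illustrative choices:

```python
def T(n):
    return 2 * n**2 + 5

# The claimed bound T(n) <= c * f(n) holds with c = 4, f(n) = n^2, n0 = 2.
print(all(T(n) <= 4 * n**2 for n in range(2, 10_000)))   # True

# No linear bound works: even c = 1000 fails once n grows past c/2 or so.
print(all(T(n) <= 1000 * n for n in range(2, 10_000)))   # False
```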
3.4.1 How to Determine the Complexity of an Algorithm
 Sequence of statements
statement 1;
statement 2;
...
statement k;
Total time = time(statement 1) + time(statement 2) + … + time(statement k)
If each statement is "simple" (only involves basic operations) then the time for each statement is constant and the total time is also constant: O (1).
 if-then-else statements
if (condition) {
    sequence of statements 1
}
else {
    sequence of statements 2
}
Either sequence 1 or sequence 2 executes, so the worst-case time is the slower of the two branches: max(time(sequence 1), time(sequence 2)). If the condition test is simple, it adds only constant time.
 for loops
for (i = 0; i < N; i++) {
    sequence of statements
}
The loop body executes N times, so the total time is N * time(sequence of statements); if the body is O(1), the loop is O(N).
 Nested loops
for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++) {
        sequence of statements
    }
}
The outer loop executes n times and the inner loop m times per outer iteration, so the body runs n * m times in total: O(n * m).
for (i = 0; i < n; i++) {
    for (j = i+1; j < n; j++) {
        sequence of statements
    }
}
Here the inner loop runs n-1, n-2, ..., 1, 0 times, for a total of n(n-1)/2 iterations, which is still O(n^2).
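Counting executions of the innermost statement confirms these orders; a small sketch:

```python
def count_nested(n, m):
    """Count how often the body of the fully nested loops runs."""
    ops = 0
    for i in range(n):
        for j in range(m):
            ops += 1          # the "sequence of statements" runs once
    return ops                # n * m  ->  O(n * m)

def count_triangular(n):
    """Count the body executions when the inner loop starts at i+1."""
    ops = 0
    for i in range(n):
        for j in range(i + 1, n):
            ops += 1
    return ops                # n*(n-1)/2  ->  still O(n^2)

print(count_nested(10, 20))   # 200
print(count_triangular(10))   # 45
```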
3.4.2 Time complexity of mathematical operations in big-O notation
Assume that a, b and c are n-bit numbers. Then we have:
Operation    Complexity
a + b        O(n)
a . b        O(n^2)
a mod c      O(n^2)
In general, the bigO notation makes use of the term that grows the fastest. For example,
 O(ax^{7} + 2x^{2} + sin(x)) = O(ax^{7}) = O(x^{7})
 O(e^{n} + an^{10}) = O(e^{n})
 O(n! + n^{50}) = O(n!)
An algorithm with an input of size n is said to be
 Linear: If the running time is O(n)
 Polynomial: If the running time is O(n^{t}) for some constant t
 Exponential: If the running time is O(t^{h(n)}) for some constant t and polynomial h(n)
3.4.3 Bestcase and Averagecase Complexity
In general, we may want to consider the best and average time requirements of a method as well as its worst-case time requirements. Which is considered the most important will depend on several factors. For example, if a method is part of a time-critical system like one that controls an airplane, the worst-case times are probably the most important (if the plane is flying towards a mountain and the controlling program can't make the next course correction until it has performed a computation, then the best-case and average-case times for that computation are not relevant: the computation needs to be guaranteed to be fast enough to finish before the plane hits the mountain).
On the other hand, if occasionally waiting a long time for an answer is merely inconvenient (as opposed to life-threatening), it may be better to use an algorithm with a slow worst-case time and a fast average-case time, rather than one with so-so times in both the average and worst cases. Note that calculating the average-case time for a method can be tricky. You need to consider all possible values for the important factors, and whether they will be distributed evenly.
3.4.4 Memory complexity
Here we provide an analysis of the computational and memory complexity for H.264/AVC decoders when applying VCR functionalities. We denote the index of the first I-packet as 0, and define N as the number of packets in one GOP (group of pictures) and M as the inter-packet distance between every two successive I/P-packets, which we assume fixed throughout the whole GOP. Also, we will use R_P, R_BL0, and R_BL1 as the number of reference packets used for P-packets, B-packet List 0, and B-packet List 1, respectively. The bitstreams are encoded with a conventional prediction structure where every P-packet is predicted from the nearest forward I/P-packet and every B-packet uses the nearest I/P-packets as references. Forward reference packets are in List 0 and backward reference packets are in List 1. When random access to a specific packet is required, the packet to be displayed and any packets that it depends on need to be decoded. The complexity to access the j-th packet in one GOP is denoted as C_RA(j), which indicates the number of packets to be decoded.
C_RA(j) =
    1,                              for an I-packet
    j/M + 1,                        for a P-packet
    min([j/M] + R, N/M + 1),        for a B-packet
Within one GOP, the decoding complexity for a P-packet is determined by its index j, since a large j means more previously encoded P-packets need to be decoded due to MCP reference dependencies.
3.4.5 Communication complexity (CC)
The notion of communication complexity (CC) was introduced by Yao in 1979, who investigated the following problem involving two separated parties (Ali and Boby). Ali receives an n-bit string x and Boby another n-bit string y, and the goal is for one of them (say Boby) to compute a certain function f(x, y) with the least amount of communication between them. Note that here we are not concerned about the number of computational steps, or the size of the computer memory used. Communication complexity tries to quantify the amount of communication required for such distributed computations. CC is always at most O(n), since one party can simply send its entire input to the other.
3.4.7 Constants matter in complexity
3.5 Mechanism of cryptography
3.5.1 Scheduling
Scheduling is the process of allocating capacity to data traffic. Scheduling organizes the order in which packets are transmitted at the output of each queue, ensuring that packets from different applications meet their QoS requirements. Scheduling algorithms provide mechanisms for data allocation and multiplexing at the packet level. A suitable scheduling mechanism will provide the necessary QoS guarantees required by heterogeneous classes of traffic while utilizing the resources as efficiently as possible. Scheduling is an important concept in IEEE 802.16 networks: admission control and congestion control policies all depend on the specific scheduling disciplines used. Scheduling refers to the way traffic is assigned priorities according to the queuing algorithm. This task is carried out by software known as a scheduler.
3.5.2 Queuing
Queuing is the way in which traffic packets are held in order while they wait for processing by the scheduler. The scheduler decides which queue, and which packets from that queue, should be transmitted. The order in which the scheduler selects packets to process can affect network performance, as shown in the following sections.
3.5.3 Different queuing applications
There are several queuing techniques, such as priority queuing, FIFO queuing, round-robin queuing, custom queuing and weighted queuing.
3.5.3.1 Priority Queuing
Priority Queuing is a well-known queuing technique. Traffic is prioritized with a priority queue list. Packets are classified based on user-specified criteria and placed into one of four output queues (high, medium, normal, and low) according to the assigned priority. When traffic packets arrive, they are classified into the high, medium, normal and low priority queues. When the scheduler is ready to allocate data to traffic, it searches the high queue for a request. If there is one, it gets answered. If not, the medium queue is checked. If there is a request, it is answered. If not, the normal queue and finally the low priority queue are checked in turn. The process is repeated for the next cycle. In PQ, higher priority traffic can starve lower priority queues of channel access: if there is enough traffic in the higher queue, the rest of the queues will never be answered. This disadvantage is also noted in the literature. Figure 3.1 illustrates the working principle of PQ. PQ gives higher priority queues absolute preferential treatment over lower priority queues. When a packet is to be sent out to an interface, the high priority queue is scanned and serviced first. The packet at the head of the highest queue is chosen for transmission. This procedure is repeated until the highest queue is empty. The lower queues are then serviced in sequence. In the literature, strict PQ is used for the bandwidth allocation of the four different service flows (UGS, rtPS, nrtPS and BE) in a hierarchical structure of bandwidth allocation for the IEEE 802.16 standard. The authors point out that one disadvantage of the strict PQ scheduling algorithm is that higher priority connections can starve the bandwidth of lower priority connections. To overcome this problem, the authors include a traffic-policing module in each SS which forces the connection's bandwidth demand to stay within its traffic contract.
This approach prevents higher priority connections from using more than their allocation bandwidth. This means every connection has its hardthreshold in its bandwidth demand. In, the author presents that PQ could be suitable for voice traffic in a network. PQ offers a queue that will offer the highest priority to specific traffic.
Figure: 3.1 Priority Queuing
The problem is that congestion can occur for the lower queues, which have lower priority and whose traffic may not get adequate bandwidth.
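The strict-priority behaviour described above can be sketched in a few lines of Python. This is an illustrative model only (the class and queue names are invented here, not taken from any router implementation); it shows why a busy high queue starves the others.

```python
from collections import deque

# Sketch of strict priority queuing: four queues are scanned in fixed
# order, so a packet is taken from "high" whenever one is waiting,
# which is exactly how lower queues can starve.
QUEUES = ["high", "medium", "normal", "low"]

class PriorityScheduler:
    def __init__(self):
        self.queues = {name: deque() for name in QUEUES}

    def enqueue(self, packet, priority):
        # Classification: the caller assigns the priority class.
        self.queues[priority].append(packet)

    def dequeue(self):
        # Always service the highest non-empty queue first.
        for name in QUEUES:
            if self.queues[name]:
                return self.queues[name].popleft()
        return None  # all queues empty

sched = PriorityScheduler()
sched.enqueue("bulk-1", "low")
sched.enqueue("voice-1", "high")
sched.enqueue("web-1", "normal")
print(sched.dequeue())  # voice-1 is sent before earlier-arrived packets
```

Note that as long as new "high" packets keep arriving, `dequeue` never reaches the lower queues, which is the starvation problem discussed above.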
3.5.3.2 First-in First-out Queuing
Figure: 3.2 FIFO Scheduler
First-in First-out queuing is also called First-come First-served. It works like the queue at a supermarket checkout: the first item to arrive is the first processed. As new packets arrive, they are simply added to the end of the queue. If the queue becomes full, the supermarket analogy breaks down: newly arriving packets are discarded. This is known as tail drop. Figure 3.2 illustrates the concept of FIFO.
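The FIFO behaviour with tail drop can be sketched as follows (an illustrative model with an invented capacity, not any particular device's buffer size):

```python
from collections import deque

# Minimal FIFO (first-come first-served) queue with tail drop:
# when the queue is full, a newly arriving packet is discarded.
class FifoQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            return False  # tail drop: arriving packet is discarded
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

q = FifoQueue(capacity=2)
ok1 = q.enqueue("p1")
ok2 = q.enqueue("p2")
ok3 = q.enqueue("p3")   # queue is full, so p3 is tail-dropped
print(ok3)              # False: p3 was discarded
print(q.dequeue(), q.dequeue())  # p1 p2, in arrival order
```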
3.5.3.3 Custom Queuing
CQ has been available for a number of years and allows approximate bandwidth allocation for different queues. In CQ, traffic is classified into multiple queues with configurable queue limits, by specifying the number of packets or bytes to be serviced for each class of traffic. In queue 0, the system queues high-priority packets such as signaling packets, so that these packets are always serviced first. System queue 0 is not considered in this study, since the focus here is on traffic queues. For the other queues, packets are sent from each queue by cycling through the queues in round-robin fashion, ensuring that each queue is serviced and that no individual queue can starve the others. As each packet is sent, a byte counter is incremented. When the byte counter reaches or exceeds the default or configured threshold for the queue, transmission moves to the next queue. When one queue is empty, transmission also moves to the next queue.


Figure: 3.3 Custom Queuing
Figure 3.3 illustrates CQ. With custom queuing, service providers determine how to assign bandwidth to different applications according to their service demands. Queue 0 is used for system traffic and is not user configurable. The size of queues 1 through 13 can be specified by associating each queue with a configurable byte count, which specifies how many bytes of data the scheduler should send from the current queue before moving on to the next. For instance, a service provider may assign queue 1 a 20% quota of the link bandwidth or frame capacity, queue 2 a 20% quota, and so forth. CQ provides fairness in the division of bandwidth, although the bandwidth used by a specific queue can only be specified indirectly, in terms of byte count and queue length. When a particular queue is being processed, packets are sent until the byte count or packet limit is met or the queue is empty. It has been pointed out that CQ ensures that no application or specified class of service receives more than a predetermined proportion of overall capacity once that proportion is decided. As in the case of PQ, CQ is statically configured and does not automatically adapt to changing network conditions.
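The byte-count round robin described above can be sketched as follows. The queues, packet names, and thresholds are invented for illustration; the point is that a queue is serviced until its byte counter reaches or exceeds its threshold, then the scheduler moves on.

```python
from collections import deque

# Sketch of custom queuing: each traffic queue has a configured byte
# count; packets are sent from a queue until the byte threshold is
# met or the queue empties, then service moves to the next queue.
def custom_queuing(queues, byte_counts):
    """queues: list of deques of (name, size) packets;
    byte_counts: per-queue byte thresholds. Yields packets in service order."""
    while any(queues):
        for q, threshold in zip(queues, byte_counts):
            sent = 0
            while q and sent < threshold:
                name, size = q.popleft()
                sent += size
                yield name

q1 = deque([("a1", 600), ("a2", 600)])   # e.g. bulk transfer
q2 = deque([("b1", 300)])                # e.g. interactive traffic
order = list(custom_queuing([q1, q2], byte_counts=[1000, 500]))
print(order)  # a1 and a2 are sent (600 + 600 >= 1000), then b1
```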
3.5.3.4 Weighted Fair Queuing
Weighted Fair Queuing (WFQ) is a hashing-based algorithm that places flows into separate queues, where weights determine how many packets are processed at a time. The weights depend on the service requirements; there is no unique rule for determining them. WFQ works like having several doors: when a packet arrives, it is classified and put through one of the doors. Different doors have different weights, assigned to the different applications according to their QoS requirements. Each door is the entry to a queue that is served, together with the other queues, in weighted round-robin order. Figure 3.4 illustrates Weighted Fair Queuing.


Figure: 3.4 Weighted Fair Queuing
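A minimal model of this weighted round-robin service is sketched below. The flow names and weights are invented for illustration; each queue's weight sets how many packets it may send per scheduling round.

```python
from collections import deque

# Simplified weighted fair queuing sketch: flows sit in separate
# queues, and a queue's weight is the number of packets it may send
# in each round of the weighted round-robin scheduler.
def wfq(flows, weights):
    """flows: dict name -> deque of packets; weights: dict name -> packets per round."""
    order = []
    while any(flows.values()):
        for name, q in flows.items():
            for _ in range(weights[name]):
                if q:
                    order.append(q.popleft())
    return order

flows = {"voice": deque(["v1", "v2"]), "data": deque(["d1", "d2", "d3"])}
order = wfq(flows, weights={"voice": 2, "data": 1})  # voice gets 2:1 service
print(order)
```

With these weights, voice sends two packets for every one data packet, so both flows make progress but in proportion to their assigned weights.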
3.5.3.5 Round Robin Scheduling
Round-robin scheduling is a simple and well-known algorithm for scheduling processes in an operating system: it assigns time slices to the processes in equal portions, treating all processes as having the same priority. Round-robin scheduling is starvation-free. It can also be applied to other scheduling problems, such as network resource scheduling. In wireless networks where many stations share one channel, the round-robin algorithm lets every station transmit or receive on the shared channel at a regular interval. Round robin is fair by its very nature. However, it is not efficient if the processes have different processing capacities: some processes are likely to receive more requests than they can handle, while others will use only part of their resources. Figure 3.5 illustrates the round-robin scheduling algorithm.
Figure: 3.5 Round Robin Scheduling
3.5.3.6 Data Slicing System in cryptography
Data slicing systems are of two types:
 Segmentation, the process of chopping streams of data into smaller chunks.
 Fragmentation (also called re-segmentation), the process of chopping larger chunks of data into smaller chunks.
3.6.1 Segmentation
Segmentation is the term used to describe the process of chopping streams of data into smaller chunks as shown in Figure 3.6. Segmentation usually occurs fairly early in the communication process and it is almost always software that performs the segmentation process. The segmentation process is performed prior to transfer of data across a network or before storage on a peripheral device. Segmentation is necessary because today's communication systems use what is called packetized communication. Chopping the data into segments has several advantages.
 Once segmented, the segments are referred to individually as protocol data units (PDUs).
 These PDUs are encapsulated into packets.
 Packets can be sent along more than one path to the destination.
 This increases both the reliability and the speed at which data can travel across a network.
 Packets can travel across a packet switched network without tying up a communications circuit expressly for that purpose.
 Multiple conversations between different parties can therefore share a single communications link.
 If any single packet is lost, it can be retransmitted instead of having to start the entire conversation all over again.
Figure: 3.6 Processes of Segmentation
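The segmentation process above can be sketched in a few lines. The function names and segment size are invented for illustration; the sketch shows a stream chopped into sequence-numbered PDUs that can be reassembled even if they arrive out of order.

```python
# Sketch of segmentation: a byte stream is chopped into fixed-size
# chunks (PDUs), each tagged with a sequence number so a lost segment
# can be retransmitted individually.
def segment(data: bytes, segment_size: int):
    return [(seq, data[i:i + segment_size])
            for seq, i in enumerate(range(0, len(data), segment_size))]

def reassemble(segments):
    # Segments may arrive out of order over different paths;
    # sorting by sequence number restores the original stream.
    return b"".join(chunk for _, chunk in sorted(segments))

pdus = segment(b"hello, packetized world", segment_size=8)
print(len(pdus))                          # number of PDUs produced
print(reassemble(list(reversed(pdus))))   # original stream restored
```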
3.6.2 Fragmentation
Fragmentation, also called re-segmentation, is the process of chopping larger chunks of data into smaller chunks, as shown in Figure 3.7. Fragmentation is usually performed at the hardware level, and when data is chopped into fragments, each piece is referred to as a frame. Fragmentation occurs so that data can be transmitted across a connection without overwhelming the memory buffers on either side of the connection. It also allows the coordination of data transmission among devices connected to a common transmission medium.



Figure: 3.7 Processes of Fragmentation
3.6.3 Process of Encapsulation System
All communications on a network originate at a source and are sent to a destination. The information sent on a network is referred to as data or data packets. If one computer (host A) wants to send data to another computer (host B), the data must first be packaged through a process called encapsulation. Encapsulation wraps data with the necessary protocol information before network transit. Therefore, as the data packet moves down through the layers of the OSI model, it receives headers, trailers, and other information. To see how encapsulation occurs, examine the manner in which data travels through the layers, as illustrated in Figure 3.8. Once the data is sent from the source, it travels down from the application layer through the other layers.



Figure: 3.8 Processes of Encapsulation
The packaging and flow of the data that is exchanged goes through changes as the layers perform their services for end users. Networks must perform the following conversion steps in order to encapsulate data.
3.6.3.1 Package the data for endtoend transport
The data is packaged for internetwork transport. By using segments, the transport function ensures that the message hosts at both ends of the email system can reliably communicate.
3.6.3.2 Add the network IP address to the header
The data is put into a packet or datagram that contains a packet header with source and destination logical addresses. These addresses help network devices send the packets across the network along a chosen path.
3.6.3.3 Add the data link layer header and trailer
Each network device must put the packet into a frame. The frame allows connection to the next directly connected network device on the link. Each device in the chosen network path requires framing in order for it to connect to the next device.
3.6.3.4 Convert to bits for transmission
The frame must be converted into a pattern of 1s and 0s (bits) for transmission on the medium. A clocking function enables the devices to distinguish these bits as they travel across the medium. The medium on the physical internetwork can vary along the path used. For example, the email message can originate on a LAN, cross a campus backbone, and go out a WAN link until it reaches its destination on another remote host.
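The encapsulation steps above can be sketched as nested wrappers. The header formats and addresses here are purely illustrative placeholders, not real protocol encodings; the point is the layering (transport header, then network addresses, then a data-link frame, then bits).

```python
# Toy illustration of encapsulation: as data moves down the layers it
# is wrapped with a transport header, an IP header carrying logical
# addresses, and a data-link frame header/trailer, then serialized to bits.
def encapsulate(data: str) -> str:
    segment = f"[TCP|{data}]"                             # package for end-to-end transport
    packet = f"[IP src=10.0.0.1 dst=10.0.0.2|{segment}]"  # add network addresses
    frame = f"[ETH|{packet}|FCS]"                         # add data-link header and trailer
    return frame

def to_bits(frame: str) -> str:
    # Convert the frame to 1s and 0s for transmission on the medium.
    return "".join(format(byte, "08b") for byte in frame.encode())

frame = encapsulate("email body")
bits = to_bits(frame)
print(frame)
print(bits[:16])  # first two transmitted octets
```

At the receiver, each layer would strip its own wrapper in the reverse order, recovering the original data.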
3.7 Sending of data
As a user sends a message, its alphanumeric characters are converted to data that can travel across the internetwork. The process continues in this way until all packets of data have been sent.
3.8 Summary
Cryptographic requirements are not linear; they are always dynamic. Supported protocols, a strong and secure encryption algorithm, the cryptographic mechanism, and its complexities all contribute to effective secure communication. These requirements should therefore be considered first when designing any communication security system, whether for voice communication or otherwise.
CHAPTER FOUR
COMPARISON AMONG THE DIFFERENT ALGORITHMS
4.1 Introduction
This chapter discusses, evaluates, and summarizes different algorithms: the Diffie-Hellman key exchange, the Rivest-Shamir-Adleman (RSA) algorithm, the Elliptic Curve Cryptography (ECC) algorithm, and the Digital Signature Standard (DSS). Each algorithm involves several kinds of complexity, such as time complexity, memory complexity, and communication complexity, and requires associated components such as queues, schedulers, and scanners.
4.2 DiffieHellman Key Exchange Algorithm
The Diffie-Hellman key exchange is an algorithm for secure communication. The first published public-key algorithm appeared in the seminal paper by Diffie and Hellman that defined public-key cryptography, and it is generally referred to as Diffie-Hellman key exchange. A number of commercial products employ this key exchange technique. The purpose of the algorithm is to enable two users to securely exchange a key that can then be used for subsequent encryption of messages. The Diffie-Hellman algorithm depends for its effectiveness on the difficulty of computing discrete logarithms, which can be defined briefly as follows.
 First, we define a primitive root of a prime number p as one whose powers modulo p generate all the integers from 1 to p − 1. That is, if a is a primitive root of the prime number p, then the numbers a mod p, a^{2} mod p, …, a^{p−1} mod p are distinct and consist of the integers from 1 through p − 1 in some permutation.
 For any integer b and a primitive root a of prime number p, we can find a unique exponent i such that b ≡ a^{i} (mod p), where 0 ≤ i ≤ (p − 1). The exponent i is referred to as the discrete logarithm of b for the base a, mod p. We express this value as dlog_{a,p}(b).
4.2.1 Concept of the Diffie-Hellman Key Exchange Algorithm
For this scheme there are two publicly known numbers: a prime number q and an integer α that is a primitive root of q. User A selects a random secret integer X_{A} < q and computes the public value Y_{A} = α^{X_A} mod q; user B independently selects a secret X_{B} < q and computes Y_{B} = α^{X_B} mod q. Each side then computes the shared key from the other's public value, and the two calculations produce identical results:
K = (Y_{B})^{X_A} mod q
  = (α^{X_B} mod q)^{X_A} mod q
  = (α^{X_B})^{X_A} mod q     (by the rules of modular arithmetic)
  = α^{X_B X_A} mod q
  = (α^{X_A})^{X_B} mod q
  = (α^{X_A} mod q)^{X_B} mod q
  = (Y_{A})^{X_B} mod q
The result is that the two sides have exchanged a secret value. Because X_{A} and X_{B} are private, an adversary is forced to compute a discrete logarithm to determine the key; for example, to attack user B, the adversary must compute
X_{B} = dlog_{α,q}(Y_{B})
The adversary can then calculate the key K in the same manner as user B calculates it. The security of the Diffie-Hellman key exchange lies in the fact that, while it is relatively easy to calculate exponentials modulo a prime, it is very difficult to calculate discrete logarithms. For large primes, the latter task is considered infeasible. Here is an example. The key exchange is based on the prime number q = 353 and a primitive root of 353, in this case α = 3. A and B select secret keys X_{A} = 97 and X_{B} = 233, respectively. Each computes its public key:
A computes Y_{A} = 3^{97} mod 353 = 40.
B computes Y_{B} = 3^{233} mod 353 = 248.
A computes K = (Y_{B})^{X_A} mod 353 = 248^{97} mod 353 = 160.
B computes K = (Y_{A})^{X_B} mod 353 = 40^{233} mod 353 = 160.
In summary: q = 353; α = 3; Y_{A} = 40; Y_{B} = 248.
In this simple example, it would be possible by brute force to determine the secret key 160. In particular, an attacker E can determine the common key by discovering a solution to the equation 3^{a} mod 353 = 40 or the equation 3^{b} mod 353 = 248. The brute-force approach is to calculate powers of 3 modulo 353, stopping when the result equals either 40 or 248. The desired answer is reached at the exponent value 97, which gives 3^{97} mod 353 = 40. With larger numbers, this attack becomes impractical.
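The worked example above (q = 353, α = 3, X_A = 97, X_B = 233) can be checked directly with Python's built-in modular exponentiation:

```python
# Verify the Diffie-Hellman example numerically.
q, alpha = 353, 3
x_a, x_b = 97, 233          # secret keys chosen by A and B

y_a = pow(alpha, x_a, q)    # A's public key: 3^97 mod 353
y_b = pow(alpha, x_b, q)    # B's public key: 3^233 mod 353

k_a = pow(y_b, x_a, q)      # A computes (Y_B)^X_A mod q
k_b = pow(y_a, x_b, q)      # B computes (Y_A)^X_B mod q
print(y_a, y_b, k_a, k_b)   # 40 248 160 160
```

Both sides arrive at the same key, 160, without ever transmitting their secret exponents.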
4.2.2 Limitations
· Man-in-the-Middle Attack: The protocol as described is insecure against a man-in-the-middle attack.
 Limited to the exchange of secret values: The purpose of the algorithm is to enable two users to securely exchange a key that can then be used for subsequent encryption of messages. The algorithm itself is limited to the exchange of secret values.
 Difficulty of computing discrete logarithms: The DiffieHellman algorithm depends for its effectiveness on the difficulty of computing discrete logarithms
The Diffie-Hellman key exchange algorithm involves several kinds of complexity: time complexity, memory complexity, and communication complexity.
 Encryption Time Complexity: The dominant cost is the modular exponentiation (Y_{B})^{X_A} mod q; both the minimum and maximum time complexity of the Diffie-Hellman key exchange algorithm are O(n).
 Decryption Time Complexity: This is likewise dominated by the modular exponentiation (Y_{A})^{X_B} mod q, with time complexity O(n).
 Memory Complexity: The maximum memory complexity of the Diffie-Hellman key exchange algorithm is O(n), where n is the key size.
 Communication Complexity: The maximum communication complexity of the Diffie-Hellman key exchange algorithm is O(n).
4.3 Introduction of RSA Algorithm
4.3.1 Concept of RSA Algorithm
C = M^{e} mod n
M = C^{d} mod n = (M^{e})^{d} mod n = M^{ed} mod n
Both sender and receiver must know the value of n. The sender knows the value of e, and only the receiver knows the value of d. Thus, this is a public-key encryption algorithm with a public key of PU = {e, n} and a private key of PR = {d, n}. For this algorithm to be satisfactory for public-key encryption, the following requirements must be met:
 It is possible to find values of e, d, n such that M^{ed} mod n = M for all M < n.
 It is relatively easy to calculate M^{e} mod n and C^{d} mod n for all values of M < n.
 It is infeasible to determine d given e and n.
M^{ed} mod n = M
The preceding relationship holds if e and d are multiplicative inverses modulo φ(n), where φ(n) is the Euler totient function. For p, q prime, φ(pq) = (p − 1)(q − 1). The relationship between e and d can be expressed as follows:
ed mod φ(n) = 1
This is equivalent to saying
ed ≡ 1 (mod φ(n))
d ≡ e^{−1} (mod φ(n))
That is, e and d are multiplicative inverses mod φ(n). Note that, according to the rules of modular arithmetic, this is true only if d (and therefore e) is relatively prime to φ(n). Equivalently, gcd(φ(n), d) = 1.
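The relation between e and d can be checked numerically. The following sketch uses toy values chosen here for illustration (p = 17, q = 11, e = 7) and Python's three-argument pow, which computes modular inverses in Python 3.8 and later:

```python
# Compute d = e^-1 mod phi(n) for toy RSA parameters.
p, q = 17, 11
phi = (p - 1) * (q - 1)      # phi(n) = 160
e = 7                        # gcd(7, 160) = 1
d = pow(e, -1, phi)          # multiplicative inverse of e mod phi(n)
print(d, (e * d) % phi)      # d = 23, since 7 * 23 = 161 = 1 mod 160
```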
To state the RSA scheme: the ingredients are the following:
p, q, two prime numbers  (private, chosen)
n = pq  (public, calculated)
e, with gcd(φ(n), e) = 1; 1 < e < φ(n)  (public, chosen)
d ≡ e^{−1} (mod φ(n))  (private, calculated)
The private key consists of {d, n} and the public key consists of {e, n}. Suppose that user A has published its public key and that user B wishes to send the message M to A. Then B calculates C = M^{e} mod n and transmits C. On receipt of this ciphertext, user A decrypts by calculating
M = C^{d} mod n.
The basic elements of the RSA algorithm can be summarized as follows. Given two prime numbers p and q, with n = pq and a message block M < n, two integers e and d are chosen such that
M^{ed} mod n = M
As stated above, the preceding relationship holds if e and d are multiplicative inverses modulo φ(n), where φ(n) is the Euler totient function. For p, q prime, φ(n) = (p − 1)(q − 1). The relationship between e and d can be expressed as ed mod φ(n) = 1.
4.3.2 Proof of the RSA Algorithm
[(a mod n) x (b mod n)] mod n = (a x b) mod n
From this, it is easy to see that if x mod n = 1, then x^{2} mod n = 1 and, for any integer y, x^{y} mod n = 1. Similarly, if x mod n = 0, then for any integer y, x^{y} mod n = 0. Another property of modular arithmetic is [(a mod n) − (b mod n)] mod n = (a − b) mod n. Finally, by Euler's theorem, if integers a and n are relatively prime, then a^{φ(n)} mod n = 1.
4.3.2.1 Mathematics behind RSA
The RSA algorithm works as follows. It first finds two prime numbers and generates a key pair from them; encryption and decryption are then done using the key pair. Let p and q be distinct primes, with n = p·q.
Find a, b such that: a·b ≡ 1 mod (p−1)(q−1)
Encryption key: e = (b, n)
Decryption key: d = (a, n)
Encryption of message m: E_{e}(m) = m^{b} mod n = c (the cipher)
Decryption of c: D_{d}(c) = c^{a} mod n = m
So in order for RSA to work we must have the property:
(m^{b})^{a} ≡ m mod n
If this equation is proved, we can say that the RSA algorithm really works.
4.3.3 RSA Algorithm Processes
· Key Generation: The first step in RSA encryption is to generate a key pair. Two keys are generated, of which one is used as the public key and the other as the private key. The keys are generated with the help of two large prime numbers, as follows:
1. Choose two large distinct prime numbers p and q.
2. Compute n, the product of those two prime numbers: n = pq.
3. Compute φ(n) = (p1)(q1).
4. Choose an integer e, 1 < e < φ(n), such that gcd(e, φ(n)) = 1.
5. Compute the secret exponent d, 1 < d < φ(n), such that ed ≡ 1 (mod φ(n)).
6. The public key is (n, e) and the private key is (n, d). The values of p, q, and φ(n) should also be kept secret.
 n is known as the modulus.
 e is known as the public exponent or encryption exponent.
 d is known as the secret exponent or decryption exponent.
 Encryption: Encryption is done using the public key component e and the modulus n; the message is encrypted with the recipient's public key (e, n). Encryption consists of raising the message m to the power of the public exponent e and then reducing modulo n. The following steps are performed in encryption.
1. Obtain the recipient’s public key (n,e)
2. Represent the plaintext message as a positive integer m < n
3. Compute the ciphertext c = m^{e} mod n.
4. Send the ciphertext c to the recipient.
· Decryption: Decryption is done using the private key. The person receiving the encrypted message uses his own private key to decrypt it. Decryption is similar to encryption except that the key used is different:
1. Compute the integer representative m = c^{d} mod n.
2. Extract the plaintext from the integer representative m.
The RSA algorithm has been implemented in many applications and is currently one of the most popular encryption algorithms.
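The key generation, encryption, and decryption steps above can be sketched end to end with textbook-sized primes (chosen here for illustration; real deployments use primes hundreds of digits long):

```python
# End-to-end RSA sketch with toy parameters.
p, q = 17, 11
n = p * q                    # modulus: 187
phi = (p - 1) * (q - 1)      # phi(n) = 160
e = 7                        # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)          # private exponent: ed = 1 mod phi(n)

m = 88                       # plaintext as an integer m < n
c = pow(m, e, n)             # encryption: c = m^e mod n
recovered = pow(c, d, n)     # decryption: m = c^d mod n
print(c, recovered)          # 11 88
```

The recovered value equals the original plaintext, confirming that (m^{e})^{d} ≡ m (mod n) for these parameters.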
· Prime Number Hunt: But how are the prime numbers actually found? There are many theorems that can be used to decide whether a given number is prime. One of the most popular is Fermat's little theorem, which states that for any a less than a prime p, a^{p−1} ≡ 1 mod p. This theorem can be used to test the primality of a number, a procedure called primality testing.
To find a prime number, one can proceed as follows:
Step 1: Choose a number p at random. If p is large, it has a chance of being prime on the order of 1 in several thousand.
Step 2: Then choose a number a < p at random; a is used to test the primality of p. First, we know that if p is prime, then a^{p−1} ≡ 1 (mod p). Second, we know that if for some x (where x is not 1 or p − 1), x^{2} ≡ 1 (mod p), then p is not prime.
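The first of these checks, the Fermat test, can be sketched as follows (an illustrative probabilistic test; the function name and number of rounds are choices made here, and the test can be fooled by the rare Carmichael numbers):

```python
import random

# Fermat primality test: if a^(p-1) mod p != 1 for some a < p, then p
# is certainly composite; if the test passes for several random bases,
# p is probably prime.
def probably_prime(p: int, rounds: int = 20) -> bool:
    if p < 4:
        return p in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, p - 1)
        if pow(a, p - 1, p) != 1:
            return False  # Fermat witness found: p is composite
    return True           # probably prime (Carmichael numbers can fool this)

print(probably_prime(353), probably_prime(351))  # True False
```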
4.3.4 Limitation of RSA
The RSA algorithm has several limitations, described below.
 The security of the RSA algorithm lies in the fact that there is no efficient way of factoring numbers: no one currently knows a practical way to factorize a large number into its prime factors. As long as this remains true, RSA will be safe and will be one of the best encryption algorithms in use. If someone comes up with an efficient factoring method, that is the end of RSA.
 The computational complexity of the RSA algorithm depends entirely on the key length and the length of the modulus n: the message is raised to the power of the key, and the result is reduced modulo n, so the computational complexity depends on these two factors. The exponentiation can be computed by repeated squaring; for example, to find 5^8, one first finds 5^2 by squaring 5, then 5^4 by squaring 5^2, and then 5^8 by squaring 5^4. Hence the complexity of encryption and decryption depends on the length of the key.
 To compute the overall complexity of RSA, one has to look at all the steps of the protocol, starting from prime number generation. Setting the complexity of generating prime numbers aside for a moment, since a deterministic polynomial-time algorithm for finding the prime p is considered separately, we start with the complexities of the other steps in RSA.
4.3.5 Complexities
The RSA algorithm involves several kinds of complexity: time complexity, memory complexity, and communication complexity.
 Encryption Time Complexity: The computational complexity of the RSA algorithm depends entirely on the key length and the length of the modulus n (here n = 200). The message is raised to the power of the key and reduced modulo n, so the computational complexity depends on these two factors. The exponentiation can be computed by repeated squaring, as in the 5^8 example above, so the complexity of encryption and decryption depends on the length of the key. The encryption c = m^{e} mod n has time complexity O(log^{3} n).
 Decryption Time Complexity: The decryption m = c^{d} mod n likewise has time complexity O(log^{3} n).
 Memory Complexity: The maximum memory complexity of the RSA algorithm is O(n), where n is the key size.
 Communication Complexity: The maximum communication complexity of the RSA algorithm is O(n).
4.4 Concept of Elliptic Curve Algorithm
An elliptic curve E is a set of points (x, y) with coordinates x and y lying in the field F_{2^m} and satisfying the equation y^{2} + xy = x^{3} + ax^{2} + b, where a and b are constant field elements specified later. For a particular choice of a and b, the points (x, y) form a commutative group under "addition"; the rule for "addition" involves several field operations, including computing a reciprocal. The elliptic curve analogue of the Diffie-Hellman key exchange method uses an elliptic curve E_{a,b} defined over a field F, and a point P = (x_0, y_0) which generates the whole addition group of E. E_{a,b}, F, P, x_0 and y_0 are all system-wide public parameters. When user A wants to start a conversation, he chooses a secret integer multiplier K_A in the appropriate range and computes K_A·P by iterated addition of P using a "double and add" scheme. User A sends the x and y coordinates of the point K_A·P to user B. User B selects his own secret multiplier K_B, and computes and sends to user A the point K_B·P. Each user can then compute the point (K_A·K_B)·P. Some bits are selected from its coordinates to become the secret session key for the conversation. As far as is known, there is no effective method for recovering (K_A·K_B)·P by eavesdropping on this conversation other than solving the discrete logarithm problem. The elliptic curve operations require addition, multiplication, squaring, and inversion in the underlying field. The number of applications of each operation depends on the exact details of the implementation; in all implementations the inversion operation is by far the most expensive (by a factor of 5 to 20 over multiplication).
4.4.1 Importance of Elliptic Curve Algorithm
 Elliptic curve arithmetic can be used to develop a variety of elliptic curve cryptography (ECC) schemes, including key exchange, encryption, and digital signature.
 For purposes of ECC, elliptic curve arithmetic involves the use of an elliptic curve equation defined over a finite field. The coefficients and variables in the equation are elements of a finite field. Schemes using Z_{p} and GF(2^{m}) have been developed.
4.4.2 Operation Steps ECC
Closure: If a and b belong to G, then a • b is also in G.
Associative: a • (b • c) = (a • b) • c for all a, b, c in G.
Identity element: There is an element e in G such that a • e = e • a = a for all a in G.
Inverse element: For each a in G there is an element a' in G such that a • a' = a' • a = e.
Commutative: a • b = b • a for all a, b in G.
A number of public-key ciphers are based on the use of an abelian group. For example, Diffie-Hellman key exchange involves multiplying pairs of nonzero integers modulo a prime number q. Keys are generated by exponentiation over the group, with exponentiation defined as repeated multiplication; for example, a^{k} mod q = (a · a · … · a) mod q. To attack Diffie-Hellman, the attacker must determine k given a and a^{k}; this is the discrete logarithm problem. For elliptic curve cryptography, an operation over elliptic curves, called addition, is used, and multiplication is defined by repeated addition: a·k = (a + a + … + a), where the addition is performed over an elliptic curve. Cryptanalysis involves determining k given a and a·k. An elliptic curve is defined by an equation in two variables with coefficients. For cryptography, the variables and coefficients are restricted to elements in a finite field, which results in the definition of a finite abelian group. Before looking at this, we first look at elliptic curves in which the variables and coefficients are real numbers, a case that is perhaps easier to visualize.
4.4.3 Elliptic Curve Encryption/Decryption
To encrypt and send a message P_{m} to B, A chooses a random positive integer k and produces the ciphertext C_{m} consisting of the pair of points:
C_{m} = {kG, P_{m} + kP_{B}}
Note that A has used B's public key P_{B}. To decrypt the ciphertext, B multiplies the first point in the pair by B's secret key and subtracts the result from the second point:
P_{m} + kP_{B} − n_{B}(kG) = P_{m} + k(n_{B}G) − n_{B}(kG) = P_{m}
A has masked the message P_{m} by adding kP_{B} to it. Nobody but A knows the value of k, so even though P_{B} is a public key, nobody can remove the mask kP_{B}. However, A also includes a "clue," which is enough to remove the mask if one knows the private key n_{B}. For an attacker to recover the message, the attacker would have to compute k given G and kG, which is assumed hard.
As an example of the encryption process (taken from the literature), take p = 751; E_{p}(−1, 188), which is equivalent to the curve y^{2} = x^{3} − x + 188; and G = (0, 376). Suppose that A wishes to send a message to B that is encoded in the elliptic point P_{m} = (562, 201), and that A selects the random number k = 386. B's public key is P_{B} = (201, 5). We have 386(0, 376) = (676, 558), and (562, 201) + 386(201, 5) = (385, 328). Thus A sends the ciphertext {(676, 558), (385, 328)}.
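The numeric example above (p = 751, curve y² = x³ − x + 188, G = (0, 376)) can be verified with a small prime-field implementation of point addition and double-and-add scalar multiplication. This is an illustrative sketch for the prime-field case only, not a hardened ECC library:

```python
# Tiny elliptic-curve arithmetic over Z_p for the worked example.
P_PRIME, A, B = 751, -1, 188    # curve y^2 = x^3 + ax + b over Z_p
O = None                        # point at infinity (group identity)

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_PRIME == 0:
        return O                # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_PRIME)   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_PRIME)          # chord slope
    x3 = (lam * lam - x1 - x2) % P_PRIME
    y3 = (lam * (x1 - x3) - y1) % P_PRIME
    return (x3, y3)

def mul(k, P):
    # double-and-add scalar multiplication
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G, Pm, Pb, k = (0, 376), (562, 201), (201, 5), 386
C1 = mul(k, G)                  # kG
C2 = add(Pm, mul(k, Pb))        # Pm + kPb
print(C1, C2)                   # (676, 558) (385, 328)
```

The two points printed match the ciphertext pair {(676, 558), (385, 328)} given in the example.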
4.4.4 Security of Elliptic Curve Cryptography
4.4.5 Application with an Elliptic Curve
There are several applications of ECC; these are discussed step by step below.
4.4.5.1 Adding and Doubling Points
The two elliptic curve operations most relevant to the complexity of multiplying a group element by a constant are the Add and Double operations. We also include the Negation operation, since it offers a speed increase. The operations are presented slightly modified from their original presentation. The elliptic curve E_{a,b} is the set of all solutions (x, y) to the equation y^{2} + xy = x^{3} + ax^{2} + b, where a and b are constants from the field F_{2^m} and b must be nonzero; x and y are also elements of F_{2^m}. A solution (x, y) is called a point of the curve. An extra point O is also needed to represent the group identity; we use (0, 0) to represent O. (Because b ≠ 0, (0, 0) is never a solution of the equation.) The Add and Double routines make a special check to see whether an input is O. The Double routine is not just an optimized special case of the Add routine: the Add formula fails when the two input points have equal x coordinates, because it calls for a division by 0. A check is made for this case; if the y coordinates are also equal, the two input points are equal and the Double routine is called. If the y coordinates differ, the two points are inverses in the elliptic curve group, so O is returned.
4.4.5.1 Rule for Adding two points
Let (x1, y1) ∈ E(F_{2^m}) and (x2, y2) ∈ E(F_{2^m}) be two points.
 If either point is O, return the other point as the sum.
 If x1 = x2 and y1 = y2, use the doubling rule.
 If x1 = x2 but y1 ≠ y2, return O as the sum.
 If x1 ≠ x2, then (x1, y1) + (x2, y2) = (x3, y3), where
4.4.5.2 Rule for doubling a point
Let (x1, y1) ∈ E(F_{2^m}) be a point.
 If x1 = 0, then 2(x1, y1) = O.
 If x1 ≠ 0, then 2(x1, y1) = (x2, y2), where
The cited reference gives the formula for x2 as x2 = x1^{2} + b/x1^{2}. Our formula is equivalent, but saves the multiplication by the constant b.
4.4.5.3 Rule for negating a point
Let (x1, y1) ∈ E(F_{2^m}) be a point. Then −(x1, y1) = (x1, x1 + y1).
From these formulas we can determine the number of field operations required for each kind of elliptic curve operation. An Addition step usually requires eight additions, two multiplications, one squaring, three reductions mod T(u), and one inversion. A Doubling step usually requires four additions, two multiplications, two squarings, four reductions mod T(u), and one inversion. A Negation step requires one addition. The important contributors to the run time are the multiplications and inversions.
4.4.6 Choosing Different Options
Choosing the Curves: The constant a in the elliptic curve can be chosen to simplify the operations of doubling a point and of adding two points. Here a = 0 is used, which eliminates one addition from the formula for the x coordinate in both operations. The size of the group depends on the choice of a and b; improvements by Atkin, Elkies, Morain, and Couveignes [9] can be used for determining the group order. The order is always close to the number of field elements. For maximum security, the order should have as large a prime factor as possible. With a = 0, the best possible order has the form 4p, with p prime. (If a ≠ 0 is selected, the order can have the form 2p, giving a small amount of extra security.) Lay and Zimmer give a method for creating a curve with a given order, but we are reluctant to use their scheme, since it produces curves closely related to rational curves with an extra structural property called complex multiplication; this extra structure might be insecure. We recommend trying small values of b, computing the curve order, and checking whether the order is of the form 4p. For curves based on F_{2^m}, a few hundred tries may be necessary. One scheme for selecting the values of b is to try small values for (x, y) and to compute b from the equation y^{2} + xy = x^{3} + b. The best known methods for computing elliptic curve discrete logarithms take time proportional to the square root of the largest prime factor of the group order; for the field sizes considered here, finding discrete logarithms is infeasible.
Choosing a Multiplier: The number of additions and doublings necessary for computing nP (where P is a point on the curve and n is an integer) is an important factor in the speed of the DH algorithm. We have implemented the straightforward double-and-add approach based on the binary expansion of n. For a random 155-bit multiplier n, computing nP will require 154 doubling steps and an average of 77 addition steps. (The number of addition steps depends on the number of 1 bits in the binary representation of n.) Although the number of doubling steps is roughly fixed, it is possible to reduce the number of addition steps considerably. This problem is studied in a large literature on addition chains. We mention a few points:
 Menezes discusses the idea of using a low Hamming weight multiplier. If one is content to select from among 2 multipliers, while allowing multipliers up to about 2, then one can select a multiplier of low Hamming weight.
 The operation of negating a point is cheap, so one can use addition-subtraction chains instead of simple addition chains. The extra flexibility permits shorter chains.
 If one multiplier is used for many connections, it pays to invest some effort in finding a good addition-subtraction chain for it. The speedup available from using a good addition-subtraction chain with a random multiplier and a novel point is 20-30%.
 Since we are free to select a random multiplier, one approach is to select a random addition-subtraction chain instead. This allows more freedom of selection at each chain step, so fewer addition or subtraction steps are needed to generate roughly 2 possible multipliers. Some care must be taken in the selection algorithm to avoid making some multipliers overly likely. (This can happen because many steps in addition-subtraction chains commute.)
 When the starting point P is known ahead of time, tables can be prepared that give a considerable speedup. This is the situation for the first two of the four point multiplications in Diffie-Hellman key exchange.
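The straightforward double-and-add approach mentioned above can be sketched as follows. This is a minimal illustration over a small prime field (the toy curve y² = x³ + 2x + 2 mod 17 and its generator are assumptions chosen for illustration, not the binary fields F2 discussed in the text):

```python
# Double-and-add scalar multiplication on y^2 = x^3 + ax + b (mod p).
# Toy parameters for illustration only; real systems use much larger fields.
p, a = 17, 2  # curve y^2 = x^3 + 2x + 2 over F_17

def ec_add(P, Q):
    """Add two affine points; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                     # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(n, P):
    """Compute nP: one doubling per bit of n, one addition per 1 bit."""
    R = None
    for bit in bin(n)[2:]:              # most significant bit first
        R = ec_add(R, R)                # doubling step
        if bit == '1':
            R = ec_add(R, P)            # addition step
    return R
```

The point G = (5, 1) generates a group of order 19 on this toy curve, so scalar_mult(19, G) returns the point at infinity; the number of addition steps performed equals the number of 1 bits in n, exactly as described above.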
4.4.7 Limitations of ECC
ECC has several limitations, which are as follows:
 The discrete logarithm problem is hard for elliptic curves, because the index calculus attack that is so effective in the multiplicative group modulo a prime does not work.
 Furthermore, using ECC, with or without precomputed values, to perform key exchange is not that much faster than RSA. So the only real advantage of using ECC for key exchange is key and transmission size.
 Another disadvantage of ECC is certificates. Public-key cryptography does not really work without digital certificates, and digital certificates do not really work without certificate authorities. It is hard to find ECC digital certificates, so even if one wants to use ECC, one might not be able to get a certificate.
 People say that ECC is very much faster than RSA, but actually ECC is significantly faster than RSA only when used with precomputed values. That is, one can store an ECC key in a small space, but to get the performance advantage one also has to store tables of precomputed values. These tables can be as large as 20,000 bytes. If a device (say, a smart card) does not have 20,000 bytes of storage space to spare, it may not be able to use the precomputed tables, and then ECC is not that much faster than RSA. With ECC one can sign fast or save storage space, but not both. Of course, saving storage space and transmission size may be reason enough for ECC.
4.4.8 Complexities
The ECC algorithm involves several kinds of complexity, such as time complexity, memory complexity and communication complexity.
 Encryption Time Complexity: The computational complexity of the ECC algorithm depends entirely upon the key length and the length of the modulus n. The minimum time complexity of the ECC encryption P_{A} = n_{A} × G is O(n), or O(log^{3} n), and the maximum time complexity is O(n).
 Decryption Time Complexity: The minimum time complexity of the ECC decryption C_{m} = {kG, P_{m} + kP_{B}} is O(n), and the maximum time complexity is O(n).
 Memory Complexity: The maximum memory complexity of the ECC algorithm is O(n), where n is the key size.
 Communication Complexity: The maximum communication complexity of the ECC algorithm is O(n).
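The encryption and decryption expressions above, P_{A} = n_{A} × G and C_{m} = {kG, P_{m} + kP_{B}}, can be illustrated with a minimal elliptic-curve ElGamal sketch. The toy curve, generator and key values below are assumptions chosen purely for illustration; the plaintext P_m must itself already be a point on the curve:

```python
# EC-ElGamal sketch: C_m = {kG, P_m + k P_B}; decrypt with P_m = C2 - n_B C1.
# Toy curve y^2 = x^3 + 2x + 2 over F_17, generator G = (5, 1); illustration only.
p, a = 17, 2
G = (5, 1)

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                     # point at infinity
    lam = ((3 * x1 * x1 + a) * pow(2 * y1, -1, p) if P == Q
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mult(n, P):
    R = None
    for bit in bin(n)[2:]:              # double-and-add
        R = ec_add(R, R)
        if bit == '1':
            R = ec_add(R, P)
    return R

def ec_neg(P):
    return None if P is None else (P[0], (-P[1]) % p)

# Receiver B: private key n_B, public key P_B = n_B * G.
n_B = 7
P_B = ec_mult(n_B, G)

def encrypt(P_m, k):
    """Sender picks a random k and sends the pair C_m = {kG, P_m + k P_B}."""
    return (ec_mult(k, G), ec_add(P_m, ec_mult(k, P_B)))

def decrypt(C):
    """Receiver computes P_m = C2 - n_B * C1."""
    C1, C2 = C
    return ec_add(C2, ec_neg(ec_mult(n_B, C1)))
```

Encoding an arbitrary message as a curve point is a separate step not shown here.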
4.5 History of Digital Signature
4.5.1 Concept of Digital Signature algorithm
4.5.2 The Digital Signature Standard (DSS)
Global Public-Key Components:
 p, prime number where 2^{L-1} < p < 2^{L} for 512 ≤ L ≤ 1024 and L a multiple of 64; i.e., the bit length of p is between 512 and 1024 bits in increments of 64 bits
 q, prime divisor of (p - 1), where 2^{159} < q < 2^{160}; i.e., the bit length of q is 160 bits
 g = h^{(p-1)/q} mod p, where h is any integer with 1 < h < (p - 1) such that h^{(p-1)/q} mod p > 1
User's Private Key:
x = random or pseudorandom integer with 0 < x < q
User's Public Key:
y = g^{x} mod p
User's Per-Message Secret Number:
k= random or pseudorandom integer with 0 < k < q
Signing:
r = (g^{k} mod p) mod q
s = [k^{-1} (H(M) + xr)] mod q
Signature = (r, s)
Verifying:
w = (s')^{-1} mod q
u1 = [H(M')w] mod q
u2 = (r')w mod q
v = [(g^{u1} y^{u2}) mod p] mod q
TEST: v = r'
M = message to be signed
H(M) = hash of M using SHA-1
M', r', s' = received versions of M, r, s
With these numbers in hand, each user selects a private key and generates a public key. The private key x must be a number from 1 to (q - 1) and should be chosen randomly or pseudorandomly. The public key is calculated from the private key as y = g^{x} mod p. The calculation of y given x is relatively straightforward; however, given the public key y, it is believed to be computationally infeasible to determine x, which is the discrete logarithm of y to the base g, mod p. To create a signature, a user calculates two quantities, r and s, that are functions of the public-key components (p, q, g), the user's private key (x), the hash code of the message, H(M), and an additional integer k that should be generated randomly or pseudorandomly and be unique for each signing. At the receiving end, verification is performed using the formulas above. The receiver generates a quantity v that is a function of the public-key components, the sender's public key and the hash code of the incoming message. If this quantity matches the r component of the signature, then the signature is validated. The structure of the algorithm is quite interesting. Note that the test at the end is on the value r, which does not depend on the message at all. Instead, r is a function of k and the three global public-key components. The multiplicative inverse of k (mod q) is passed to a function that also takes as inputs the message hash code and the user's private key. The structure of this function is such that the receiver can recover r using the incoming message and signature, the public key of the user and the global public key. Another point worth noting is that the only computationally demanding task in signature generation is the exponential calculation g^{k} mod p. Because this value does not depend on the message to be signed, it can be computed ahead of time; indeed, a user could precalculate a number of values of r to be used to sign documents as needed.
The only other somewhat demanding task is the determination of a multiplicative inverse, k^{-1}; again, a number of these values can be precalculated. The digital signature is the most important development from the work on public-key cryptography, providing a set of security capabilities that would be difficult to implement in any other way.
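The signing and verification equations above can be exercised with a toy implementation. The tiny parameters below (p = 607, q = 101, h = 2, so that q divides p - 1) are illustrative assumptions only; real DSS parameters use a 512- to 1024-bit p and a 160-bit q as specified above:

```python
import hashlib
import secrets

# Toy DSA parameters (illustrative only): q | (p - 1), g = h^((p-1)/q) mod p.
p, q = 607, 101
g = pow(2, (p - 1) // q, p)            # h = 2 gives g = 64, with g^q mod p = 1

def H(msg: bytes) -> int:
    """Hash with SHA-1, reduced mod q (a toy reduction for the tiny q here)."""
    return int.from_bytes(hashlib.sha1(msg).digest(), 'big') % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private key, 0 < x < q
    return x, pow(g, x, p)             # (x, y = g^x mod p)

def sign(msg, x):
    while True:
        k = secrets.randbelow(q - 1) + 1           # per-message secret, 0 < k < q
        r = pow(g, k, p) % q                       # r = (g^k mod p) mod q
        s = pow(k, -1, q) * (H(msg) + x * r) % q   # s = k^-1 (H(M) + xr) mod q
        if r != 0 and s != 0:
            return r, s

def verify(msg, r, s, y):
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)                  # w = (s')^-1 mod q
    u1 = H(msg) * w % q
    u2 = r * w % q
    v = pow(g, u1, p) * pow(y, u2, p) % p % q
    return v == r                      # TEST: v = r'
```

As the text notes, r depends only on k and the global components, so the pow(g, k, p) step inside sign could be precomputed ahead of time.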
4.5.3 Requirements of DSS
 Mina may forge a different message and claim that it came from Nina. Mina would simply have to create a message and append an authentication code using the key that Nina and Mina share.
 Nina can deny sending the message. Because it is possible for Mina to forge a message, there is no way to prove that Nina did in fact send the message.
In situations where there is not complete trust between sender and receiver, something more than authentication is needed. The most attractive solution to this problem is the digital signature, which is analogous to the handwritten signature. It must have the following properties:
 It must verify the author and the date and time of the signature.
 It must authenticate the contents at the time of the signature.
 It must be verifiable by third parties, to resolve disputes.
 The signature must be a bit pattern that depends on the message being signed.
 The signature must use some information unique to the sender, to prevent both forgery and denial.
 It must be relatively easy to produce the digital signature.
 It must be relatively easy to recognize and verify the digital signature.
 It must be computationally infeasible to forge a digital signature, either by constructing a new message for an existing digital signature or by constructing a fraudulent digital signature for a given message.
 It must be practical to retain a copy of the digital signature in storage.
4.5.4 Complexities
The DSS algorithm involves several kinds of complexity, such as time complexity, memory complexity and communication complexity.
 Encryption Time Complexity: The computational complexity of the DSS algorithm depends entirely upon the key length and the length of the modulus n. The minimum time complexity of the DSS operation g^{x} mod p is O(n), or O(log^{3} n).
 Decryption Time Complexity: The minimum time complexity of the DSS operation g^{x} mod p is O(n), and the maximum time complexity of (g^{k} mod p) mod q is O(n).
 Memory Complexity: The maximum memory complexity of the DSS algorithm is O(n), where n is the key size.
 Communication Complexity: The maximum communication complexity of the DSS algorithm is O(n).
4.6 Comparisons of Complexities for Public-Key Cryptosystems
Table 4.1 lists the complexities of public-key cryptosystems for different algorithms: RSA, Elliptic Curve, Diffie-Hellman, DSS and the Bit Reallocation Algorithm (BRA).
Table 4.1 Complexities for Public-Key Cryptosystems

Algorithm | Regular Encryption Complexity | Minimum Encryption Complexity | Maximum Encryption Complexity | Authorized Decryption Complexity | Unauthorized Decryption Complexity
RSA | O(n) | O(n) | O(n) | O(n) | O(n)
Elliptic Curve | O(n) | O(n) | O(n) | O(n) | O(n)
Diffie-Hellman | O(n) | O(n) | O(n) | O(n) | O(n)
DSS | O(n) | O(n) | O(n) | O(n) | O(n)
BRA | O(n) | O(n) | O(n) | O(n) | O(2^{n})
Table 4.2 illustrates the application techniques for the public-key cryptosystems.
Table 4.2 Applications for Public-Key Cryptosystems

Algorithm | Encryption/Decryption | Digital Signature | Key Exchange | Randomizing | Significant always
RSA | Yes | Yes | Yes | Yes | No
Elliptic Curve | Yes | Yes | Yes | No | No
Diffie-Hellman | No | No | Yes | No | No
DSS | No | Yes | No | No | No
BRA | Yes | Yes | Yes | Yes | Yes
4.8 Summary
Although every algorithm has some limitations, these are acceptable for application, and their complexities are also acceptable. Many algorithms have been proposed for security, but very few are acceptable for implementation in the communication field, because of their low encryption complexity, high decryption complexity for unauthorized users, and high security.
CHAPTER FIVE
PROPOSED ALGORITHM
5.1 Introduction
The proposed algorithm will also benefit WSN providers because it aims to maximize one-stop security and the simplicity of the voice communication system while minimizing complexity and requirements. In other words, it can serve multiple securities concurrently without decreasing the quality of service, in an avant-garde hybrid design.
5.2 Development of the Bit Reallocation Algorithm Approach
Figure 5.1 illustrates the development of the bit reallocation algorithm approach. The constructions of the approach are discussed step by step.
5.2.1 Construction 1
The algorithm proposed in this study is a Bit Reallocation algorithm for WSN. The approach uses and develops features that are based on FIFO Queuing, Priority Queuing (PQ), Custom Queuing (CQ) and the Round Robin (RR) algorithm in a WSN system. It can improve the fragmentation and reallocation service for future WSN users because it achieves acceptable co-services, QoS guarantees and fairness for all information services.
5.2.2 Construction 2
The literature review and the performance comparisons among ECC, RSA, DSS, DH and the proposed BRA show that the existing algorithms can satisfy the QoS requirements of higher-security services but ignore the QoS of the other, lower-security services. ECC is significantly faster than the others only when used with precomputed values; ECC can provide efficient guarantees for allocated information in different services but fails to consider unformed or over-formal information or miscomputed data utilization. The proposed algorithm (BRA) overcomes this problem with a blind (binary bits) technique while retaining the significantly faster mechanism of ECC.
5.2.3 Construction 3
RSA has a well-developed certificate infrastructure; currently, in industry, RSA is winning. The key size, transmission size and signature performance issues concern makers of small devices, but they often find that RSA is fast and small enough. It is not the fastest signer or the smallest key, but it still works, using only prime numbers. The security of the RSA algorithm lies in the fact that there is no good way of factoring numbers: no one currently knows an efficient way to factor a large number into its prime factors. As long as no one finds such a way, RSA will be safe and will remain one of the best encryption algorithms in use; a future efficient factoring algorithm, however, would mean the end of RSA. The proposed algorithm (BRA) overcomes this problem with a bit reallocation technique while retaining the transmission-key mechanism of RSA.
5.2.4 Construction 4
The most important development from the work on public-key cryptography is the digital signature, which provides a set of security capabilities that would be difficult to implement in any other way; many authentication protocols depend on it. Message authentication protects two parties who exchange messages from any third party; however, it does not protect the two parties against each other. It is used in message protection and authentication, not in voice security, and the requirements of DSS are high while its communication QoS is low. The proposed algorithm (BRA) overcomes this problem with a parallel bit-processing technique while retaining the unique authentication mechanism of DSS.
Figure: 5.1 The development of Bit Reallocation Algorithm Approach
5.2.5 Construction 5
DH is a very popular encryption algorithm and has been in use for a long time. The purpose of the algorithm is to enable two users to securely exchange a key that can then be used for subsequent encryption of messages; the algorithm itself is limited to the exchange of secret values. The Diffie-Hellman (DH) algorithm depends for its effectiveness on the difficulty of computing discrete logarithms. The proposed algorithm (BRA) overcomes this problem with a multiple and adaptive reallocation technique while retaining the exchange mechanism of DH.
5.3 Control unit
The control unit contains the control value, control link, control counter and control effect of the algorithm, as well as the unauthorized authentication complexity; each is described step by step.
5.3.1 Control Value
This is the most important part of the proposed algorithm: all counters take their values from it. The random unit takes the value of n for the (n × n) matrix, the counter value and the number of active bit position queues from the control value. The fragmentation unit takes the counter value and the number of active queues from the control value. All schedulers take their counter values from the control value. It may be of two types: a default control value or a custom control value. In the proposed algorithm, 4 is used as the control value, as shown in Figure 5.2.
Figure: 5.2 Control value
 Default control value:
 Custom control value:
5.3.2 Control link
There are many links in the full algorithm between the control value and the different counters and schedulers. The arrow lines from the control value to the other components of the algorithm are called control links, as shown in Figure 5.2. They connect the random bit position generator, the counters (the random 2^{n} bit position counter, the control bit position segmentation counter and the data bit fragmentation counter) and schedulers such as the data bit queue scheduler.
5.3.3 Control counter
A counter that takes a counter value equal to the control value is called a control counter. The random 2^{n} bit position counter, the control bit position segmentation counter and the data bit fragmentation counter all take a counter value equal to the control value.
5.3.4 Control effect on the algorithm
The random bit position generator depends on the control value, which may be custom or default (fixed): it determines how many number elements the generator will produce. All counters, especially the random 2^{n} bit position counter, the control bit position segmentation counter and the data bit fragmentation counter, the schedulers such as the data bit queue scheduler and the header & secure key scheduler, and the queues such as the secure data buildup queue stack and the control bit FIFO queue, all depend on the control value.
5.3.5 Unauthorized authentication complexity
The random bit position generator generates a matrix of decimal numbers of size (n × n). Here (n × n) is (4 × 4), which corresponds to a 65536 × 65536 matrix with 65536 columns and 65536 rows. Every row of the matrix contains 65536 number elements, so the 65536 rows together contain 4294967296 number elements, and the elements do not all contain the same digits. More than four billion number elements is a large volume of elements, generated in a moment by the random bit position generator. The random bit position scanner then scans only 2^{n} = 16 elements from the 4294967296 numbers, and every number element is an indicator of the bit positions of the control queue. For example, 2565 is a number element in which 5 indicates the bit position of the zero fragmented queue (FQ0), 6 indicates the bit position of the first fragmented queue (FQ1), 5 indicates the bit position of the second fragmented queue (FQ2) and 2 indicates the bit position of the third fragmented queue (FQ3). For unauthorized authentication, the access complexity is therefore very high.
When n = 4, the number of decryption options is 65519456536. Evaluating 65519456536 elements within seconds is impossible for any user, because in those seconds the random unit produces 65519456536 decryption options, of which only one element is applicable.
5.4 Random unit
The random unit contains the random bit position generator, the random 2^{n} bit position scanner and the control bit position segmentation; each is described step by step.
5.4.1 Random bit position generator
The random bit position generator generates a matrix of decimal numbers; every decimal number is called a number element here. It may be an electronic circuit or a small program that can randomize a matrix of (n × n), as shown in Figure 5.3. Here (n × n) is (4 × 4), which corresponds to a 65536 × 65536 matrix with 65536 columns and 65536 rows. Every row of the matrix contains 65536 number elements, so the 65536 rows together contain 4294967296 number elements, and the elements do not all contain the same digits. More than four billion number elements is a large volume of elements, generated in a moment by the random bit position generator, and every number element is an indicator of the bit positions of the control queue.

Figure: 5.3 Random bit position generator
5.4.2 Random 2^{n} Bit Position scanner
The random bit position scanner scans decimal numbers, called number elements. Here the matrix is 65536 × 65536 and contains 4294967296 number elements, and the elements do not all contain the same digits. More than four billion number elements is a large volume of elements, generated in a moment by the random bit position generator. From these number elements, only 2^{n} = 2^{4} = 16 elements are received randomly by the random 2^{n} bit position scanner, and every number element is an indicator of the bit positions of the control queue, as shown in Figure 5.4. For example, 2565 is a number element in which 5 indicates the bit position of the zero fragmented queue (FQ0), 6 indicates the bit position of the first fragmented queue (FQ1), 5 indicates the bit position of the second fragmented queue (FQ2) and 2 indicates the bit position of the third fragmented queue (FQ3).

Figure: 5.4 Random 2^{n} Bit Position scanner
5.4.3 Control Bit position Segmentation
The random bit position scanner scans only 2^{n} = 2^{4} = 16 elements, received randomly, where every number element is an indicator of the bit positions of the control queue. For example, 2134 is a number element in which 4 indicates the bit position of the zero fragmented queue (FQ0), 3 indicates the bit position of the first fragmented queue (FQ1), 1 indicates the bit position of the second fragmented queue (FQ2) and 2 indicates the bit position of the third fragmented queue (FQ3). To set the indicators of the fragmented queues, 2134 must be segmented and stored in n queues (here 4 queues). The element scheduler schedules 2134 in FIFO order to be stored as the indicators of the bit position queues: 4 is stored in the zero bit position queue, 3 in the first bit position queue, 1 in the second bit position queue and 2 in the third bit position queue, as shown in Figure 5.5.
Figure: 5.5 Control Bits position Fragmentation
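The digit segmentation described above can be sketched as follows. The function name and the digit order (least significant digit first, matching the mapping of element 2134 to FQ0...FQ3) are assumptions for illustration:

```python
def segment_element(element: int, n: int = 4) -> list:
    """Split a number element into n digits, least significant first.

    Digit i becomes the control bit position indicator for fragmented
    queue FQ_i, following the example where 2134 maps to FQ0 = 4,
    FQ1 = 3, FQ2 = 1 and FQ3 = 2.
    """
    digits = []
    for _ in range(n):
        digits.append(element % 10)   # take the rightmost remaining digit
        element //= 10
    return digits

# Element 2134 -> FQ0 = 4, FQ1 = 3, FQ2 = 1, FQ3 = 2
```

In FIFO order, digit 0 of the result is stored in the zero bit position queue, digit 1 in the first, and so on.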
5.5 Data bit unit
The data bit unit contains the data bit scanner, fragmentation, the queue & bit scheduler and the header & secure key scheduler.
5.5.1 Data bit scanner
It is an endless queue that can read binary data bits only, as shown in Figure 5.6. It reads the data bits of an encapsulated packet one by one from the encapsulation channel and sends the header bits to the header & secure key scheduler.

Figure: 5.6 Data bit scanner
5.5.2 Fragmentation
Re-segmentation is called fragmentation: the process of chopping larger chunks of data into smaller chunks, as shown in Figure 5.7. Fragmentation requires scheduling the data bits and storing the fragmented data bits in the fragmentation queues stack.
5.5.2.1 Data bits scheduler
The data bits scheduler is a minimal scheduler that distributes the data bits according to the control value. If the control value is n, it stores the data into n queues sequentially, in FIFO order. From the first queue to the second-last queue, each queue contains the average number of bits, while the last queue contains all the remaining bits, which is not more than twice the average and not less than one bit; as a result, the last queue may overflow or underflow. With 4 queues (q0, q1, q2, q3) and 16 data bits, the average is 16/4 = 4 bits, and every queue contains the average of 4 data bits, as shown in Figure 5.7.
5.5.2.2 Fragmentation data queues stack
The fragmentation data queues stack segments all data bits in FIFO order according to the control value.
The fragmentation data queues stack contains n queues, of which only the required queues become active: if the control value is 4, then only 4 queues from the queue stack become active. From the first queue to the second-last queue, each queue contains the average number of bits, while the last queue contains all the remaining bits, which is not more than twice the average and not less than one bit; as a result, the last queue may overflow or underflow. Packet bits / control value = average bits per queue. With 4 queues (q0, q1, q2, q3) and 16 data bits, the average is 16/4 = 4 bits, and every queue contains the average of 4 data bits, as shown in Figure 5.7. The last data bit queue is in one of three conditions:
 All queues contain an equal number of data bits, which is called the average.
 The last queue contains fewer bits than the other queues, but not fewer than one bit; this is called underflow.
 The last queue contains more bits than the other queues, but not more than twice the other queues; this is called overflow.
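A minimal sketch of this fragmentation rule, assuming the first n - 1 queues each take the average (integer) share and the last queue takes the remainder:

```python
def fragment(bits, n=4):
    """Split a bit sequence into n FIFO queues.

    Queues 0..n-2 each receive the average number of bits
    (len(bits) // n); the last queue receives all remaining bits,
    which is at least 1 and less than twice the average (the
    overflow/underflow condition described in the text).
    """
    avg = len(bits) // n
    queues = [list(bits[i * avg:(i + 1) * avg]) for i in range(n - 1)]
    queues.append(list(bits[(n - 1) * avg:]))   # the rest goes to the last queue
    return queues
```

With 16 bits and 4 queues, every queue receives 4 bits (the average case); with 18 bits, the queues receive 4, 4, 4 and 6 bits (overflow in the last queue).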
Figure: 5.7 Data bits Fragmentation
5.5.3 Queue & Bit Scheduler
The queue and bit scheduler reads two queues in parallel and simultaneously: one from the fragmentation queues stack and another from the bit position queues stack. It schedules queues according to the control value. It schedules the data bits from the first to the last bit of one queue of the fragmentation queues stack, schedules the control bit according to the bit position value from all queues of the fragmentation queues stack, and completes the scheduling process queue by queue; when all queues are completed, it continues with another cycle of the same process, as shown in Figure 5.8.
Figure: 5.8 Queues & Bits Scheduler
5.5.4 Header & secure key scheduler
The header & secure key scheduler reads the header of the encapsulated packets and the bit position values in parallel and simultaneously, creates a secure key value and schedules it. The secure key (SK) is created from the destination, the source and the control bits' positions, as shown in Figure 5.9. The SK creates a very high access complexity for unauthorized users.
Figure: 5.9 Header & Secure key Scheduler
Figure 5.10 illustrates the surroundings of the header & secure key scheduler: the bit position data queue, the secure data buildup queue and the fragmentation data queues stack.
Figure: 5.10 Surroundings of Header & Secure key Scheduler
5.6 Secure data Buildup queue Stack
The secure data buildup queue stack contains one control bit queue, called Cq; H1, H2 or more header queues; and the other data bit queues (n queues in total for data bits, here n = 4). All queues use a FIFO mechanism, and the secure data buildup queue stack also uses a parallel round-robin mechanism. The queue and bit scheduler reads two queues in parallel and simultaneously, one from the fragmentation queues stack and another from the bit position queues stack, and schedules queues according to the control value. It schedules the control bit according to the bit position value and stores it in Cq in the secure data buildup queue stack. It schedules the data bits from the first to the last bit of one queue of the fragmentation queues stack and completes the scheduling process queue by queue, storing the bits in the FIFO queues of the secure data buildup queue stack, until all queues of the fragmentation queues stack are completed. The header & secure key scheduler reads the header of the encapsulated packets and the bit position values in parallel and simultaneously, creates a secure key value and stores it in the H1 and H2 queues of the secure data buildup queue stack, as shown in Figure 5.11.
Figure: 5.11 Secure data buildup queue stack
5.7 Control bit FIFO Queue
The control queue (Cq) contains only control bits. Every bit of Cq is read according to the control value and the bit position element: if the control value is n, then n bits are stored in Cq (here n = 4). Every bit of Cq is collected from a queue of the fragmentation data queues stack according to the bit position of the control bit of that queue. This is the main mechanism of the algorithm: the bit reallocation algorithm depends entirely on Cq and its mechanism, and all parts of the proposed algorithm work together to fulfill the requirements of Cq. Cq is shown in Figure 5.12.

Figure: 5.12 Control bit FIFO queue
5.8 Queue Scheduler
The queue scheduler is a parallel scheduler that schedules all FIFO queues of the secure data buildup queue stack toward the cipher data queue, as shown in Figure 5.13.
Figure: 5.13 Queue Scheduler
5.9 Cipher data queue
Reassembly is the reverse of segmentation: protocol data units are put back together in the correct order to reassemble a stream of data as a data bit packet. The cipher data queue receives the reallocated data bits from the queue scheduler, builds up the cipher data packet and then transmits it toward the transmission channel of the proposed algorithm, as shown in Figure 5.14.
Figure: 5.14 Cipher data queue
5.10 Proposed Bits Reallocation algorithm
The Bit Reallocation algorithm is proposed for voice security with encryption and decryption. The flowchart of the proposed algorithm, the proposed BR algorithm, the architecture of the proposed algorithm and its working procedure are discussed step by step.
5.10.1 Flow chart of the algorithm
The flow chart of the algorithm is shown in Figure 5.15.
Figure: 5.15 Flowchart of the proposed algorithm
5.10.2 Proposed Bits Reallocation algorithm
Step1: Randomize the bit positions ║ Read the Data bits (every 1/8 second)
Step2: Fragmentation of Data bits ║ Segmentation bit positions
{
Step3: Count=i++ until i=n (here n=4 and n is the number of fragmented queues, FQ)
{
Step4: Count=j++ until j=q[m] (here m is the number of bits in a queue)
{
Step5: Count=p++ until p==q[t];
(here q[t] is the control bit position queue and t is the control bit position.)
Step6: Store bit in control Queue(Cq)
Otherwise
Step7: Store bit in data Queue
}
Step8: Goto step4
}
Step9: Goto step3
}
Step10: Go to step1
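Under the assumption that each fragmented queue FQ_i contributes the bit at its control position t_i to Cq and its remaining bits to the data queues, one pass of steps 2-9 can be sketched as follows (the function and variable names are illustrative):

```python
def reallocate(data_bits, positions):
    """One encryption pass of the BR sketch.

    data_bits: the bits of one packet; positions: the control bit
    position for each fragmented queue, taken from the segmented
    number element (e.g. [4, 3, 1, 2] for element 2134; 1-indexed).
    Returns (cq, data_queues): the control bits and the remaining bits.
    """
    n = len(positions)                       # control value
    avg = len(data_bits) // n
    queues = [list(data_bits[i * avg:(i + 1) * avg]) for i in range(n - 1)]
    queues.append(list(data_bits[(n - 1) * avg:]))       # fragmentation
    cq, data_queues = [], []
    for fq, t in zip(queues, positions):
        cq.append(fq[t - 1])                 # control bit goes to Cq
        data_queues.append(fq[:t - 1] + fq[t:])  # the rest to the data queue
    return cq, data_queues
```

The receiver, knowing the bit positions from the secure key, can reinsert each Cq bit at position t_i of its queue to invert the pass.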
5.10.3 Architectural design of Proposed Bit Reallocation (BR) Algorithm
The BR algorithm is shown in Figure 5.16. All units, schedulers and queues are included here to build up the total procedural design of the Bit Reallocation algorithm. An oval is the symbol for a scheduler and a rectangle is the symbol for a queue; unidirectional or bidirectional arrows indicate the direction of data flow or operations. The control value here is 4. Several small ovals are used for counters.
Figure: 5.16 Design of Proposed BR Algorithm
5.10.4 Working procedure of Proposed Bit Reallocation (BR) Algorithm
Step1: Randomize the bit positions and read the data bits parallel and simultaneously
Step2: Fragmentation of data bits and segmentation bit positions parallel and simultaneously
Step3: A scheduler schedules the data bits according to a counter i until the counter becomes equal to n, the control value.
Step4: A scheduler schedules the data bits according to a counter j until the counter becomes equal to q(m), the queue's bit number.
Step5: A scheduler schedules the data bits according to a counter p until the counter becomes equal to q(t), the control bit position queue, where t is the control bit position.
Step6: Store the bit in the control queue (Cq) or store the bit in a data queue.
Step7: Store the bits in the cipher data queue.
Step8: Transmit the data bits into the tunnel.
5.10.5 Proof of Proposed Algorithm for key exchange
Equation of Encryption:
Y_{U} = C^{X_U} mod q
Equation of Decryption:
Y_{V} = C^{X_V} mod q
For this algorithm, there are two publicly known numbers: a prime number q and an integer C that is a primitive root of q. Suppose the sensor U and the data storage or base station V wish to exchange a key. U selects a random integer X_{U} < q and computes Y_{U} = C^{X_U} mod q. Similarly, V independently selects a random integer X_{V} < q and computes Y_{V} = C^{X_V} mod q. Each side keeps its X value private and makes its Y value publicly available to the other side. User U computes the key as K = (Y_{V})^{X_U} mod q and user V computes the key as K = (Y_{U})^{X_V} mod q. These two calculations produce identical results:
K = (Y_V)^{X_U} mod q
  = (C^{X_V} mod q)^{X_U} mod q
  = (C^{X_V})^{X_U} mod q    [by the rules of modular arithmetic]
  = (C^{X_U})^{X_V} mod q
  = (C^{X_U} mod q)^{X_V} mod q
  = (Y_U)^{X_V} mod q
The result is that the two sides have exchanged a secret value. Furthermore, because X_U and X_V are private, an adversary has only the following ingredients to work with: q, C, Y_U and Y_V.
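The exchange can be checked numerically with a short sketch in the document's notation (Y = C^X mod q). The toy modulus q = 353 and base C = 3 below are illustrative assumptions; real deployments use much larger primes:

```python
import secrets

# Toy Diffie-Hellman exchange: Y = C^X mod q, K = Y^X mod q.
q, C = 353, 3           # small prime and base, for illustration only

X_U = secrets.randbelow(q - 2) + 1    # U's private value, 1 <= X_U < q - 1
X_V = secrets.randbelow(q - 2) + 1    # V's private value

Y_U = pow(C, X_U, q)    # U publishes Y_U = C^{X_U} mod q
Y_V = pow(C, X_V, q)    # V publishes Y_V = C^{X_V} mod q

K_U = pow(Y_V, X_U, q)  # U computes K = (Y_V)^{X_U} mod q
K_V = pow(Y_U, X_V, q)  # V computes K = (Y_U)^{X_V} mod q
assert K_U == K_V       # both sides derive the same secret key
```

Both computed keys equal C^{X_U X_V} mod q, which is exactly the identity derived above.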
5.11 Simulation model of Wireless Sensor Network (WSN)
Based on the IEEE 802.15.4 standard, the WSN system was set up as illustrated in Figure 5.17. The simulation model was developed with the OMNeT++, GNED and PLOVE simulation modeling tools; OMNeT++ can be described as an object-oriented modular discrete event simulation system. It is a network-simulating tool that allows
Figure: 5.17 Simulation model of Wireless Sensor Network (WSN)
Several wireless sensors, one data storage, one micro-power generator, two monitors, two high-gain receivers, one hacker and one attacker are included here to build up a wireless sensor network (WSN) in the simulation model with OMNeT++. The GNED and PLOVE simulation modeling tools of OMNeT++ help to design and develop the simulation model of the WSN and to create an experimental field for secure voice communication.
5.11.1 A slice of the source's voice signal in the simulation experiment
Figure 5.18 illustrates the voice signal of the source. It is a pre-stored voice slice of the source. The X-axis shows the phases of the voice signal for ten bits in digital form; the Y-axis shows the amplitude of the voice signal in digital form.
Figure: 5.18 Voice Signal of Source
The X-axis of the source voice signal runs from 0 to 10 and is divided into five portions: 0 to 2, 2 to 4, 4 to 6, 6 to 8 and 8 to 10. Each portion of the signal contains the phases of two binary bits, so the ten phases of the whole signal carry ten bits of voice. The Y-axis shows the amplitude of each phase of the voice signal and runs from 0 to 18 units.
5.11.2 A slice of the encrypted voice signal in the simulation experiment
Figure 5.19 illustrates the encrypted version of the source's voice signal. It is a slice of the encrypted signal of the pre-stored source voice slice. The X-axis shows the phases of the encrypted voice signal for ten bits in digital form; the Y-axis shows its amplitude in digital form.
Figure: 5.19 Encrypted voice signal
The X-axis of the encrypted voice signal runs from 0 to 10 and is divided into five portions: 0 to 2, 2 to 4, 4 to 6, 6 to 8 and 8 to 10. Each portion contains the phases of two binary bits, so the ten phases of the whole signal carry ten bits of voice. The Y-axis shows the amplitude of each phase and runs from 0 to 14 units.
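A bit-level view of how an encrypted slice can differ from its source while remaining exactly recoverable is sketched below. The permutation key used here is an invented example for illustration only; it is not the actual reallocation table of the proposed BRA.

```python
def reallocate_bits(bits, key):
    """Encrypt a slice by moving bit i to position key[i] (key is a permutation)."""
    out = [0] * len(bits)
    for i, b in enumerate(bits):
        out[key[i]] = b
    return out

def restore_bits(cipher, key):
    """Decrypt by reading each original position back through the permutation."""
    return [cipher[key[i]] for i in range(len(cipher))]

source = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]      # a ten-bit voice slice (illustrative)
key    = [3, 0, 7, 1, 9, 2, 5, 8, 6, 4]      # hypothetical permutation of 0..9
cipher = reallocate_bits(source, key)
assert cipher != source                      # the encrypted slice looks different
assert restore_bits(cipher, key) == source   # an authorized receiver recovers it
```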
5.11.3 A slice of the received voice signal in the simulation experiment
Figure 5.20 illustrates the received voice signal. It overlaps the pre-stored source voice slice, as received by the receiver. The X-axis shows the phases of the received voice signal for ten bits in digital form; the Y-axis shows its amplitude in digital form.
Figure: 5.20 Received Voice signal
The X-axis of the received voice signal runs from 0 to 10 and is divided into five portions: 0 to 2, 2 to 4, 4 to 6, 6 to 8 and 8 to 10. Each portion contains the phases of two binary bits, so the ten phases of the whole signal carry ten bits of voice. The Y-axis shows the amplitude of each phase and runs from 0 to 18 units, the same as the source voice signal.
5.11.4 Comparison among the different Voice signals in simulation
Figure 5.21 illustrates the comparison of the source, encrypted and received voice signals in the simulation system. The source and received signals overlap. The X-axis shows the phases of the signals for ten bits in digital form; the Y-axis shows their amplitudes in digital form.
Figure: 5.21 Comparison of source, encrypted and received voice signals
The X-axis runs from 0 to 10 and is divided into five portions: 0 to 2, 2 to 4, 4 to 6, 6 to 8 and 8 to 10. Each portion contains the phases of two binary bits, so the ten phases of the whole signal carry ten bits of voice. The Y-axis shows the amplitude of each phase and runs from 0 to 18 units. Three signals are included. The source and received voice signals overlap, which is why the orange (source) signal is invisible; the green signal is the encrypted voice signal.
5.11.5 Comparison of Source, Encrypt and Received Voice signal in simulation
Figure 5.22 illustrates the comparison of the source and received voice signals in the simulation system. The source and received signals overlap, while the encrypted voice signal differs from both. The X-axis shows the phases of the signals for ten bits in digital form; the Y-axis shows their amplitudes in digital form.
Figure: 5.22 Comparison of Source and Received Voice signal
The X-axis of the source and received voice signals runs from 0 to 10 and is divided into five portions: 0 to 2, 2 to 4, 4 to 6, 6 to 8 and 8 to 10. Each portion contains the phases of two binary bits, so the ten phases of the whole signal carry ten bits of voice. The Y-axis shows the amplitude of each phase and runs from 0 to 18 units. Two signals are included; the source and received signals overlap, which is why the orange (source) signal is invisible.
5.12 Comparison of complexities among the algorithms
Figure: 5.23 encryption complexities
Figure 5.23 shows the encryption complexities of the different algorithms for different key sizes.
Figure: 5.24 Decryption complexities
Figure 5.24 shows the decryption complexities of the different algorithms for different key sizes.
5.13 Summary
According to the accuracy of the simulation-based experiment and the simplicity of its design, the Bit Reallocation Algorithm is demonstrated to be a workable encryption algorithm for secure voice communication. Its complexities are acceptable, and its simplicity of operation and development make it a candidate one-stop security algorithm for the future. Considering everything, it may become a powerful, applicable algorithm in unattended and hostile environments because of its high unauthorized-access complexity. Other algorithms increase their encryption and decryption complexities as their key size grows, whereas the encryption complexity of the Bit Reallocation Algorithm does not grow with key size even as its unauthorized-decryption complexity increases; its encryption complexity therefore plots as a horizontal line parallel to the X-axis.
CONCLUSION
6.1 Conclusion
The conclusion is an important part of a dissertation because it brings the work to a close. This chapter narrows the discussion step by step toward its final conclusions.
6.2 Contributions of the thesis
Contributions are the most important part of a research project. Without effective contributions, the arguments and assumptions of a study carry little weight. This research makes several contributions, which are addressed step by step below.
6.2.1 Contribution 1
The proposed algorithm will also benefit WSN providers because it aims to maximize one-stop security and the simplicity of the voice communication system while minimizing complexity and requirements. In other words, its avant-garde hybrid design can serve multiple security functions concurrently without decreasing the quality of service.
6.2.2 Contribution 2
The algorithm proposed in this study is a Bit Reallocation Algorithm for WSNs. The approach uses and develops features based on Priority Queuing (PQ), Custom Queuing (CQ) and the Round Robin (RR) algorithm in a WSN system. It can improve the fragmentation and reallocation service for future WSN users because it achieves acceptable co-services, QoS guarantees and fairness for all information services.
6.2.3 Contribution 3
The literature review and the performance comparisons among ECC, RSA, DSS, DH and the proposed BRA show that the existing algorithms can satisfy the QoS requirements of higher-security services but ignore the QoS of lower-security services. ECC is significantly faster than the others only when used with precomputed values; it can provide efficient guarantees for allocated information in different services but fails to consider unformed or over-formal information and miscomputed data. The proposed BRA overcomes this problem with a blind (binary-bit) technique while incorporating the significantly faster mechanism of ECC.
6.2.4 Contribution 4
RSA has a well-developed certificate infrastructure, and currently the industry favors RSA. Key size, transmission size and signature performance concern makers of small devices, but they often find that RSA is fast and small enough; it is not the fastest signer, nor does it have the smallest key, but it still works well with prime numbers. The security of the RSA algorithm lies in the fact that there is no good way of factoring large numbers: no one yet knows how to factor a number into its prime factors efficiently. As long as that remains true, RSA will be safe and will remain one of the best encryption algorithms in use; if someone finds an efficient factoring method, that is the end of RSA. The proposed BRA overcomes this problem with its bit-reallocation technique while incorporating the transmission-key mechanism of RSA.
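The dependence of RSA on factoring can be seen in a textbook sketch with deliberately tiny primes (the standard worked example with p = 61 and q = 53; real keys use primes hundreds of digits long, and anyone who could factor n = p*q would recover the private exponent):

```python
def toy_rsa_demo():
    """Textbook RSA with tiny primes (insecure; illustration only)."""
    p, q = 61, 53
    n = p * q                 # 3233; security rests on the difficulty of factoring n
    phi = (p - 1) * (q - 1)   # 3120
    e = 17                    # public exponent, coprime with phi
    d = pow(e, -1, phi)       # private exponent: modular inverse of e mod phi
    m = 65                    # message
    c = pow(m, e, n)          # encrypt: c = m^e mod n
    return pow(c, d, n)       # decrypt: c^d mod n recovers m

assert toy_rsa_demo() == 65
```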
6.2.5 Contribution 5
The most important development from the work on public-key cryptography is the digital signature, which provides a set of security capabilities that would be difficult to implement in any other way; many authentication protocols depend on it. Message authentication protects two parties who exchange messages from any third party, but it does not protect the two parties against each other. DSS is used for message protection and authentication, not for voice security; its requirements are maximal while its communication QoS is minimal. The proposed BRA overcomes this problem with a parallel bit-processing technique while incorporating the unique authentication mechanism of DSS.
6.2.6 Contribution 6
DH is a popular encryption algorithm that has been in use for a long time. Its purpose is to enable two users to securely exchange a key that can then be used for subsequent encryption of messages; the algorithm itself is limited to the exchange of secret values. The Diffie-Hellman algorithm (DH) depends for its effectiveness on the difficulty of computing discrete logarithms. The proposed BRA overcomes this limitation with a multiple and adaptive reallocation technique while incorporating the key-exchange mechanism of DH.
6.2.7 Contribution 7
The various complexities of the Bit Reallocation Algorithm are acceptable: its time complexity is less than O(n^3); its memory, hardware and communication overheads are near zero (about 0.005%, 0.00% and 0.05% respectively); and its decryption complexity is O(n) for each 1/8 second of voice.
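The claimed O(n) decryption cost can be sanity-checked by counting per-bit operations. The pass below is a stand-in workload under that assumption, not the BRA's actual decryption routine:

```python
def linear_pass(bits):
    """Visit each bit exactly once, as an O(n) decryption-style pass would."""
    ops = 0
    out = [0] * len(bits)
    for i, b in enumerate(bits):
        out[i] = b ^ 1   # placeholder per-bit work
        ops += 1
    return out, ops

# the operation count scales linearly with the slice length
_, ops_small = linear_pass([0, 1] * 50)    # n = 100
_, ops_large = linear_pass([0, 1] * 500)   # n = 1000
assert ops_large == 10 * ops_small
```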
6.3 Prospective
The bit reallocation algorithm should quickly gain popularity owing to its strong voice security, acceptable access complexities, operational simplicity and high overall unauthorized-access complexity.
6.4 Limitations of this Study
It is known that QoS improvement and system resource utilization require consideration of all the subparts of the network, which is very difficult. This study focuses on the encryption algorithm as presented above and delimits itself from the following issues:
• WSN refers to the IEEE 802.15-2004 standard in this report. The IEEE 802.15 family has several standards for both fixed and mobile wireless access; only the IEEE 802.15-2004 standard is adopted.
• The IEEE 802.15-2004 standard supports many multimedia applications. Only voice and data services are applied and evaluated in an integrated voice-and-data WSN network. In this context, voice traffic refers to Voice over IP (VoIP) traffic, and data traffic refers to data-based services such as e-mail, web access and file transfer.
• The WSN architecture has mesh and Point-to-Multipoint (PMP) modes; PMP is adopted.
• A number of traffic queuing and scheduling algorithms have been proposed and studied. Only the Round Robin (RR) algorithm, Priority Queuing (PQ) and Custom Queuing (CQ) are used.
• The performance evaluation among ECC, RSA, DSS, DH and the proposed BRA covers only encryption and related issues.
• The performance of the BRA is evaluated by simulation, for a slice of voice in digital form, with comparisons in OMNeT++.
There are several related options for further study of the proposed algorithm:
• Application of the bit reallocation algorithm in sequential voice communication
• Application of the bit reallocation algorithm in secure video communication
• Study of the efficiency of the bit reallocation algorithm
• Study of the significance of the bit reallocation algorithm
• Study of the QoS activities of the bit reallocation algorithm