Voice Security In Wireless Sensor Network



1.1 Introduction
Voice security is critical in voice communication, especially in hostile and unattended environments. Public safety land mobile radio systems are susceptible to eavesdropping and can easily be exploited by criminals. Readily available scanners and other devices can be used to receive voice signals from analog and digital public safety radio systems, including trunked radio systems. Lists of frequencies and channel assignments used in public safety jurisdictions are easily obtained from numerous print and online sources. To ensure that sensitive information is shared only among authorized individuals or organizations, public safety operators need to ensure the confidentiality of sensitive radio traffic. This is typically accomplished through voice encryption with an effective encryption algorithm.
1.2 Types of Secure Voice
There are two types of secure voice:
  • Digital secure voice
  • Analog secure voice
2.1 Importance of voice security
Effective management is essential to implementing an encrypted voice network successfully.

  • Unencrypted public safety voice transmissions can be intercepted, abetting criminal activity, thwarting public safety efforts, and endangering the public and public safety personnel.
  • Voice encryption helps ensure that voice transmissions can be accessed only by authorized personnel, thereby increasing the safety and efficiency of public safety personnel.
  • Although voice encryption adds complexity and cost to public safety voice networks, it is what maintains their security.
2.2 Digital Secure Voice
A digital secure voice system usually includes two components: a digitizer to convert between speech and digital signals, and an encryption system to provide confidentiality. What makes ciphony difficult in practice is the need to send the encrypted signal over the same communication system used to transmit unencrypted voice, e.g. telephone lines, mobile radios or sensors. This has led to the use of voice coders (vocoders) to achieve tight bandwidth compression of the speech signals. NSA's STU-III, KY-57 and SCIP are examples of systems that operate over existing voice systems. The STE system, by contrast, requires wide-bandwidth ISDN lines for its normal mode of operation. For encrypting GSM and VoIP, which are digital anyway, the standard protocol ZRTP can be used as an end-to-end encryption technology. Secure voice's robustness greatly benefits from having the voice data compressed to very low bit rates by a special component called a speech coder, voice compressor or voice coder (also known as a vocoder). The older secure voice compression standards include CVSD, CELP, LPC-10e and MELP; the latest standard is the state-of-the-art MELPe algorithm.
2.3 Analog Secure Voice technologies

Digital secure voice is not always necessary to achieve security. The Australian CODAN analog system (originally designed for HF but also used on VHF and UHF) has proven that digital compression and encryption methods are not always required for voice security. Although CODAN is by no means an original or unique technology, it has earned recognition in the security market as evidence that exclusively digital methods are not always needed. Voice inversion methods were commonplace in the 20th century. Few analog voice offerings exist today due to the rise of exclusively digital solutions to the voice security problem.

 
2.4 The encryption process
A voice message is first digitized and then encrypted (or locked). The process requires an algorithm and a unique cryptographic key, which are analogous to a door lock and its key: although many houses may use the same brand of lock, how the lock is keyed makes it unique. Two well-known algorithms are:
  • The Data Encryption Standard (DES)
  • The Advanced Encryption Standard (AES)
Cryptographic keys are usually expressed in terms of their number of bits. Typical key sizes are 56 bits for DES and 128 to 256 bits for AES. Once the voice message has been encrypted and transmitted, the receiver can decrypt (or unlock) the message only by using the same algorithm and the same unique key as the transmitter. For security purposes, only the sender and the intended recipient(s) of the encrypted message should know the key.
If members of a group need to communicate securely with each other, all radios belonging to the group must share the same key. When several radios share the same key, the group is known as a cryptonet. If a radio participates in more than one cryptonet, it must hold a unique key for each cryptonet. When it is necessary to communicate with persons who do not have access to the cryptonet, a gateway device is often employed. Such devices can compromise communication security because the voice message has to be unlocked to pass through the gateway. Also, there is no assurance of the level of security of the system through which the voice message is passed. The advantages of voice encryption are as follows:
  • Provides confidentiality for sensitive radio traffic.
  • Prevents unauthorized parties from successfully monitoring radio traffic.
  • Enhances personnel security.
  • Provides some user authentication on radio traffic.
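As a concrete illustration of the shared-key principle behind a cryptonet, the following Python sketch encrypts a digitized voice frame with a symmetric keystream cipher. The keystream is derived from SHA-256 purely to keep the example dependency-free; real radios use DES or AES, and all names here are illustrative, not from any real radio API.

```python
# Toy illustration of symmetric ("same key at both ends") voice encryption.
# Real systems use AES (128/256-bit keys) or DES (56-bit); a SHA-256-based
# keystream stands in for the cipher so the sketch needs no dependencies.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the shared key and a nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, voice_frame: bytes) -> bytes:
    """XOR the digitized voice frame with the keystream ("locking")."""
    ks = keystream(key, nonce, len(voice_frame))
    return bytes(a ^ b for a, b in zip(voice_frame, ks))

decrypt = encrypt  # XOR is its own inverse: the same key + nonce "unlocks"

# All radios in one cryptonet share this 256-bit key.
cryptonet_key = hashlib.sha256(b"cryptonet-A").digest()
frame = b"digitized voice frame"
ct = encrypt(cryptonet_key, b"nonce-01", frame)
assert decrypt(cryptonet_key, b"nonce-01", ct) == frame   # same key works
assert decrypt(b"\x00" * 32, b"nonce-01", ct) != frame    # wrong key fails
```

Only radios holding `cryptonet_key` can recover the frame, which is exactly the cryptonet property described above.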
2.5 Digital methods used in voice compression
MELPe, or enhanced MELP (Mixed Excitation Linear Prediction), is a United States Department of Defense speech coding standard used mainly in military applications and satellite communications, secure voice, and secure radio devices. Its development was led and supported by the NSA and NATO. The US government's MELPe secure voice standard is also known as MIL-STD-3005, and NATO's MELPe secure voice standard is also known as STANAG-4591. The 2400 bit/s MELP was created by Texas Instruments and first standardized in 1997 as MIL-STD-3005. Between 1998 and 2001, a new MELP-based vocoder was created at half the rate (i.e. 1200 bit/s) and substantial enhancements were added to MIL-STD-3005 by SignalCom (later acquired by Microsoft) and AT&T, which included:
  • Additional new vocoder at half the rate (i.e. 1200 bit/s),
  • Substantially improved encoding (analysis),
  • Substantially improved decoding (synthesis),
  • Noise-Preprocessing for removing background noise,
  • Transcoding between the 2400 bit/s and 1200 bit/s bit streams. This fairly significant development aimed to create a new coder at half the rate and have it interoperate with the old MELP standard.
This enhanced MELP (also known as MELPe) was adopted as the new MIL-STD-3005 in 2001 in the form of annexes and supplements to the original MIL-STD-3005. The significant breakthrough of the 1200 bit/s MELPe is that it delivers the same quality as the old 2400 bit/s MELP at half the rate. One of the greatest advantages of the new 2400 bit/s MELPe is that it shares the same bit format as MELP and hence can interoperate with legacy MELP systems, while delivering better quality at both ends. MELPe provides much better quality than all older military standards, especially in noisy environments such as battlefields, vehicles, and aircraft. In 2002, the US DoD MELPe was also adopted as a NATO standard, known as STANAG-4591. As part of NATO testing for the new standard, MELPe was tested against other candidates such as France's HSX (Harmonic Stochastic eXcitation) and Turkey's SB-LPC (Split-Band Linear Predictive Coding), as well as the old secure voice standards FS1015 LPC-10e (2.4 kbit/s), FS1016 CELP (4.8 kbit/s) and CVSD (16 kbit/s). MELPe won the NATO competition, surpassing the quality of all the other candidates as well as that of all the old secure voice standards (CVSD, CELP and LPC-10e). The NATO competition concluded that MELPe substantially improved performance (in terms of speech quality, intelligibility, and noise immunity) while reducing throughput requirements. The NATO testing also included interoperability tests, used over 200 hours of speech data, and was conducted by three test laboratories worldwide. In 2005, a new 600 bit/s MELPe vocoder was added to STANAG-4591 by Thales (France), and there are further efforts to lower the bit rate to 300 bit/s and even 150 bit/s.
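The bit rates above follow directly from the coders' frame sizes. The figures below (54 bits per 22.5 ms frame for 2400 bit/s MELP, 81 bits per 67.5 ms superframe for 1200 bit/s MELPe) are the commonly cited frame parameters and should be treated as illustrative; the arithmetic can be checked in a few lines:

```python
# Bit rate = bits per frame / frame duration.
def bit_rate(bits_per_frame: int, frame_ms: float) -> float:
    return bits_per_frame / (frame_ms / 1000.0)

assert abs(bit_rate(54, 22.5) - 2400) < 1e-6   # original 2400 bit/s MELP
assert abs(bit_rate(81, 67.5) - 1200) < 1e-6   # MELPe half-rate coder
# One second of 2400 bit/s compressed speech fits in 300 bytes:
assert 2400 / 8 == 300.0
```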
2.6 Secure encrypted dual-mode military band base stations
Figure 2.1 illustrates secure encrypted dual-mode military band base stations, vehicle mobiles, manpacks and hand-held radios. These devices are used in hostile and unattended environments.

Figure: 2.1 Secure encrypted dual-mode Military Band Base stations
The Nabishi NHT-78A hand-held radio, the Nabishi MMR-78 multi-purpose radio (base, mobile or manpack) and the MR-3088 mobile VHF military radio are used in modern tactical environments. The radios are dual-mode, meaning they work in either DIGITAL or ANALOGUE mode, a feature that is user-changeable at the press of a button. The radios provide secure and reliable voice and data communication over the entire military band of 30-88 MHz with 12.5 or 25 kHz steps. Some of their powerful features include: the capability to transmit both data and voice, securing communication with a scrambler, selectable tone signaling, versatile scanning options, a dynamic antenna matching circuit, auto battery detection, etc. The unit uses true 256-bit AES digital encryption in digital mode for secure voice communications, preventing unauthorized listening to radio conversations or eavesdropping. The radios fully complement one another. The NHT-78A conforms to rigid military standards: MIL-STD-810E tests for reliability under harsh environments and MIL-STD-461C tests for EMI/EMC. It is also fully compatible with old-generation radios such as the AN/PRC-77, AN/VRC-12, etc.
2.7 Secure voice communication systems
Secure voice communication systems fall into several classes according to their applications and application fields. These are as below:
  • Secure Press to Talk (PTT) System
  • Push To Talk (PTT)
  • Secure GSM Phone
  • VOIP and VPN
  • The cellular Voice communication
  • Thuraya satellite communication system
  • Voice communication in Wireless Sensor Network
2.7.1 Secure Press to Talk (PTT) System
Just like a walkie-talkie network, PTT connects users within their group instantly, and the use of secure encryption ensures privacy during the conversation, preventing unauthorized listening or eavesdropping. The system provides instant one-to-one or one-to-many voice calls to anywhere in the world via the GSM network. The Nabishi Secure PTT System is designed for fast communications for police, security, government, hospitals, transport and public utilities, even from one country to another; because it does not need to dial up, the call is instant and there are no expensive call charges. The Secure PTT system works seamlessly over any GSM network in the world using 3G, GPRS and EDGE, as well as WiFi.
2.7.2 Push to Talk (PTT)
Push to Talk (PTT) is like turning a mobile phone into a walkie-talkie, but with a global range. Instead of transmitting and receiving radio frequencies, Push to Talk uses the WiFi, 3G, GPRS or EDGE data connection of a mobile phone to send and receive calls in encrypted mode using VoIP technology. It provides voice communication at the push of a button: communicate privately on a one-to-one basis, or hold a discussion with the whole team on an instant conference call in clear digital quality.
2.7.3 The cellular Voice
The cellular voice encryption unit snaps onto the data port of a cellular phone (included) and works over the GSM network. It uses the military-grade 256-bit AES encryption algorithm, which is the most advanced encryption standard for voice communication, considerably stronger than the older DES standard. Operation is transparent: no action is required by the user.
2.7.4 VoIP VPN
A VoIP VPN combines Voice over IP and Virtual Private Network technologies to offer a method for delivering secure voice. Because VoIP transmits digitized voice as a stream of data, the VoIP VPN solution accomplishes voice encryption quite simply, applying standard data-encryption mechanisms inherently available in the collection of protocols used to implement a VPN. The VoIP gateway-router first converts the analog voice signal to digital form, encapsulates the digitized voice within IP packets, then encrypts the digitized voice using IPSec, and finally routes the encrypted voice packets securely through a VPN tunnel. At the remote site, another VoIP router decodes the voice and converts the digital voice to an analog signal for delivery.
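The gateway pipeline described above (digitize, packetize, encrypt, tunnel) can be sketched end to end. The code below is a toy model: a stand-in XOR cipher plays the role of IPsec ESP, and the header strings are placeholders, not real protocol formats.

```python
# Sketch of the VoIP VPN pipeline: digitize -> packetize -> encrypt -> tunnel.
# The "cipher" is a stand-in XOR keystream; a real gateway would use IPsec (ESP).
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    ks = hashlib.sha256(key).digest()
    # Repeat the keystream to cover the data (toy only; never do this in practice).
    ks = (ks * (len(data) // len(ks) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

def tunnel_send(voice_samples: bytes, key: bytes) -> bytes:
    rtp_packet = b"RTP" + voice_samples         # encapsulate digitized voice
    esp_payload = toy_encrypt(key, rtp_packet)  # encrypt for the VPN tunnel
    return b"IP-HDR" + esp_payload              # outer IP header stays in clear

def tunnel_recv(packet: bytes, key: bytes) -> bytes:
    esp_payload = packet[len(b"IP-HDR"):]
    rtp_packet = toy_encrypt(key, esp_payload)  # XOR cipher decrypts itself
    return rtp_packet[len(b"RTP"):]             # recover digitized voice

key = b"vpn-shared-key"
samples = bytes(range(20))                      # pretend PCM samples
assert tunnel_recv(tunnel_send(samples, key), key) == samples
```

The point of the sketch is the layering: only the innermost voice payload is sensitive, and the VPN encrypts it while the outer routing header remains readable to the network.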

2.7.5 Secure GSM Phone

The Secure GSM Phone has been designed to provide end-to-end secure communication across the GSM network. It is a software application that works with any mobile phone running the Symbian 60.2 or 60.3 operating system. It uses the AES 256-bit symmetric encryption algorithm, currently the highest security standard worldwide, and provides complete end-to-end protection, from phone to phone, for both audio and text messages. The phone is designed for easy operation; no security knowledge is required of the user. It encrypts the information and sends it automatically, without any need for user interaction.
 
 

 
Figure: 2.2 Secure GSM Phone
Illegal wiretapping poses ever-bigger risks to the confidentiality of voice communication, as the required technical effort has become less demanding with the advent of Voice over IP (VoIP) and GSM networks. So-called IMSI-catchers are available on the market; these fake a base station towards a mobile phone and can thus be used for a man-in-the-middle attack. Other attackers focus directly on the internal network of a GSM provider to capture many more calls as they pass through the provider's switches. This danger becomes especially real when road warriors travel in less secure or trustworthy areas of the world. Figure 2.2 illustrates the Secure GSM system.
2.7.6 Thuraya satellite communication system
Thuraya is a satellite communication system that covers Europe, the Middle East, most of Africa and large parts of Asia using geostationary satellites. Thuraya combines excellent voice quality with very low per-minute rates. It uses affordable terminal equipment approximately the size of an old-fashioned GSM mobile phone. The Nabishi T2 in Thuraya mode is the perfect solution for secure communication in areas without GSM coverage, or when one does not wish to depend on the local network infrastructure. It is widely used in marine and naval (vessel) communication.
2.7.7 Voice communication in Wireless Sensor Network
In global secure voice communication, the future trend is towards wireless sensor networks. Wireless sensor networks are quickly gaining popularity because they are potentially low-cost solutions to a variety of real-world challenges. Their low cost provides a means to deploy large sensor arrays in a variety of conditions capable of performing both military and civilian tasks. But sensor networks also introduce severe resource constraints due to their lack of data storage and power. Both of these represent major obstacles to the implementation of traditional computer security techniques in a wireless sensor network. The unreliable communication channel and unattended operation make the security defenses even harder. Indeed, as researchers have pointed out, wireless sensors often have the processing characteristics of machines that are decades old (or older), and the industrial trend is to reduce the cost of wireless sensors while maintaining similar computing power. With that in mind, many researchers have begun to address the challenges of maximizing the processing capabilities and energy reserves of wireless sensor nodes while also securing them against attackers. In addition to those traditional security issues, observe that many general-purpose sensor network techniques (particularly in the early research) assumed that all nodes are cooperative and trustworthy. This is not the case for most real-world wireless sensor networking applications, which require a certain amount of trust in the application in order to maintain proper network functionality. Sensor network security can be divided into four major categories: the obstacles to sensor network security, the requirements of a secure wireless sensor network, attacks, and defensive measures. Figure 2.3 illustrates a wireless sensor network system.
 

 
Figure: 2.3 Wireless Sensor Networks System
 
2.7.7.1 Wireless Voice Sensor
 
A voice sensor needs two or three microphones as its input device. Observing the waveforms received at each instant, the voice detector continuously checks whether they contain potential voice. When voice is detected, a feature extractor extracts high-dimensional features such as spectral coefficients. Finally, a recognized voice is obtained by calculating the probability of the input waveform. The voice sensor is shown in the following Figure 2.4.

Figure: 2.4 Voice sensor
 
The recognized voice consists of a string of characters, which is an encoded string. This string of characters is transported to a pseudo console driver in the kernel, which is responsible for translating the input string into keyboard input strokes. Those key strokes are treated as if someone had typed them on a keyboard, so the recognized voice can be efficiently delivered to each application requiring input, subject to the applicable secure protocols and algorithms.
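The voice-detection step described above can be sketched as a simple short-time-energy detector: frames whose average energy exceeds a threshold are flagged as potential voice. The frame length and threshold below are arbitrary illustrative values.

```python
# Minimal energy-based voice activity detector (illustrative, not a real
# recognizer): flag each fixed-length frame whose mean squared amplitude
# exceeds a threshold as potential voice.
def detect_voice(samples, frame_len=4, threshold=100.0):
    """Return one True/False flag per complete frame of `samples`."""
    flags = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        flags.append(energy > threshold)
    return flags

silence = [1, -2, 1, 0] * 2      # two low-energy frames
speech = [40, -35, 50, -45]      # one high-energy frame
assert detect_voice(silence + speech) == [False, False, True]
```

A real voice sensor would follow this gate with spectral feature extraction and probabilistic recognition, as described above.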
 
2.7.7.2 Obstacles of Sensor Network Security
A wireless sensor network is a special kind of network with many constraints compared to a traditional computer network. Due to these constraints it is difficult to directly adopt existing security approaches in the area of wireless sensor networks. Therefore, to build functional security mechanisms while borrowing ideas from current security techniques, it is necessary to first understand these constraints.
 
2.7.7.3 Limitations of Resources
 
Every security approach requires a certain amount of resources for its implementation, including data memory, code space, and energy to power the sensor. However, these resources are very limited in a tiny wireless sensor. Figure 2.5 illustrates a WSN device, which is constrained in several ways.
 
  • Limited Memory and Storage Space: A sensor is a tiny device with only a small amount of memory and storage space for code. In order to build an effective security mechanism, it is necessary to limit the code size of the security algorithm. For example, one common sensor type (TelosB) has a 16-bit, 8 MHz RISC CPU with only 10K RAM, 48K program memory, and 1024K flash storage. With such a limitation, the software built for the sensor must also be quite small. The total code space of TinyOS, the de facto standard operating system for wireless sensors, is approximately 4K, and the core scheduler occupies only 178 bytes. Therefore, the code size of all security-related code must also be small.
  • Power Limitation: Energy is the biggest constraint on wireless sensor capabilities. We assume that once sensor nodes are deployed in a sensor network, they cannot be easily replaced (high operating cost) or recharged (high cost of sensors). Therefore, the battery charge taken with them into the field must be conserved to extend the life of the individual sensor node and the entire sensor network. When implementing a cryptographic function or protocol within a sensor node, the energy impact of the added security code must be considered. When adding security to a sensor node, we are interested in the impact that security has on the lifespan of a sensor (i.e., its battery life). The extra power consumed by sensor nodes due to security is related to the processing required for security functions (e.g., encryption, decryption, signing data, verifying signatures), the energy required to transmit the security-related data or overhead (e.g., initialization vectors needed for encryption/decryption), and the energy required to store security parameters in a secure manner (e.g., cryptographic key storage).
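The energy impact just described can be captured in a back-of-envelope model: processing cost plus the radio cost of the extra security bytes (IVs, MACs). The per-unit costs below are assumed illustrative numbers, not measurements from any real sensor platform.

```python
# Back-of-envelope model of the per-packet security energy overhead:
# total cost = CPU cycles spent on crypto + radio cost of extra bytes.
# The per-unit costs are made-up illustrative numbers, not measured values.
E_PER_CYCLE_NJ = 1.0       # nJ per CPU cycle (assumed)
E_PER_BYTE_TX_NJ = 1500.0  # nJ per byte transmitted (assumed)

def security_overhead_nj(crypto_cycles: int, extra_bytes: int) -> float:
    return crypto_cycles * E_PER_CYCLE_NJ + extra_bytes * E_PER_BYTE_TX_NJ

# Encrypting one packet (say 20k cycles) and adding an 8-byte IV + 4-byte MAC:
cost = security_overhead_nj(20_000, 8 + 4)
assert cost == 20_000 * 1.0 + 12 * 1500.0   # 38,000 nJ for this packet
```

Even in this crude model the radio overhead dominates the CPU cost, which is why sensor security protocols fight hard to keep MACs and IVs short.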
 

 
Figure: 2.5 WSN device
 
2.7.7.4 Unreliable Communication
 
Certainly, unreliable communication is another threat to sensor security. The security of the network relies heavily on a defined protocol, which in turn depends on communication.
 
  • Unreliable Transfer: Normally, the packet-based routing of the sensor network is connectionless and thus inherently unreliable. Packets may get damaged due to channel errors or be dropped at highly congested nodes, resulting in lost or missing packets. Furthermore, the unreliable wireless communication channel also results in damaged packets. A higher channel error rate also forces the software developer to devote resources to error handling. More importantly, if the protocol lacks appropriate error handling it is possible to lose critical security packets, for example a cryptographic key.
  • Conflicts: Even if the channel is reliable, the communication may still be unreliable. This is due to the broadcast nature of the wireless sensor network. If packets meet in the middle of a transfer, conflicts occur and the transfer fails. In a crowded (high-density) sensor network, this can be a major problem. More details about the effects of wireless communication can be found in the literature.
  • Latency: Multi-hop routing, network congestion, and node processing can lead to greater latency in the network, making it difficult to achieve synchronization among sensor nodes. Synchronization issues can be critical to sensor security where the security mechanism relies on critical event reports and cryptographic key distribution. Interested readers may consult the literature on real-time communications in wireless sensor networks.
 
2.7.7.5 Unattended Operation
 
Depending on the function of the particular sensor network, the sensor nodes may be left unattended for long periods of time. There are three main caveats to unattended sensor nodes:
 
  • Exposure to Physical Attacks: The sensor may be deployed in an environment open to adversaries, bad weather, and so on. The possibility that a sensor suffers a physical attack in such an environment is therefore much higher than for a typical PC, which is located in a secure place and mainly faces attacks from a network.
  • Managed Remotely: Remote management of a sensor network makes it virtually impossible to detect physical tampering (e.g., through tamper-proof seals) and to handle physical maintenance issues (e.g., battery replacement). Perhaps the most extreme example of this is a sensor node used for remote reconnaissance missions behind enemy lines. In such a case, the node may not have any physical contact with friendly forces once deployed.
  • No Central Management Point: A sensor network should be a distributed network without a central management point. This increases the vitality of the sensor network. However, if designed incorrectly, it makes the network organization difficult, inefficient, and fragile. Perhaps most importantly, the longer a sensor is left unattended, the more likely it is that an adversary has compromised the node.
 
2.7.7.6 Security Requirements
 
A sensor network is a special type of network. It shares some commonalities with a typical computer network, but also poses unique requirements of its own. Therefore, one can think of the requirements of a wireless sensor network as encompassing both the typical network requirements and the unique requirements suited solely to wireless sensor networks.
 
2.7.7.6.1 Data Confidentiality
 
Data confidentiality is the most important issue in network security. Every network with any security focus will typically address this problem first. In sensor networks, the confidentiality relates to the following.
  • A sensor network should not leak sensor readings to its neighbors. Especially in a military application, the data stored in the sensor node may be highly sensitive.
 
  • In many applications nodes communicate highly sensitive data, e.g., key distribution; therefore it is extremely important to build a secure channel in a wireless sensor network.
  • Public sensor information, such as sensor identities and public keys, should also be encrypted to some extent to protect against traffic analysis attacks. The standard approach for keeping sensitive data secret is to encrypt the data with a secret key that only intended receivers possess, thus achieving confidentiality.
 
2.7.7.6.2 Data Integrity
 
With the implementation of confidentiality, an adversary may be unable to steal information. However, this doesn’t mean the data is safe. The adversary can change the data, so as to send the sensor network into disarray. For example, a malicious node may add some fragments or manipulate the data within a packet. This new packet can then be sent to the original receiver. Data loss or damage can even occur without the presence of a malicious node due to the harsh communication environment. Thus, data integrity ensures that any received data has not been altered in transit.
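A minimal sketch of the integrity guarantee just described, using a message authentication code over a pre-shared link key (the truncated 8-byte tag mirrors the short MACs typical in sensor networks; names and sizes are illustrative):

```python
# Data integrity via a message authentication code (MAC): the receiver
# recomputes the MAC with the shared key and rejects the packet on mismatch.
import hmac
import hashlib

key = b"shared-link-key"

def protect(payload: bytes) -> bytes:
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:8]  # truncated MAC
    return payload + tag

def verify(packet: bytes):
    payload, tag = packet[:-8], packet[-8:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return payload if hmac.compare_digest(tag, expected) else None

packet = protect(b"temp=21.5")
assert verify(packet) == b"temp=21.5"        # unmodified: accepted
tampered = b"temp=99.9" + packet[-8:]
assert verify(tampered) is None              # altered in transit: rejected
```

A malicious node that changes the payload cannot produce the matching tag without the key, so the alteration is detected on receipt.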
 
2.7.7.6.3 Data Freshness
 
Even if confidentiality and data integrity are assured, we also need to ensure the freshness of each message. Informally, data freshness means that the data is recent, and it ensures that no old messages have been replayed. This requirement is especially important when shared-key strategies are employed in the design. Typically, shared keys need to be changed over time. However, it takes time for new shared keys to be propagated to the entire network. In this case, it is easy for an adversary to use a replay attack. It is also easy to disrupt the normal operation of the sensor if the sensor is unaware of the new key change time. To solve this problem, a nonce or another time-related counter can be added to the packet to ensure data freshness.
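The counter-based freshness check just described can be sketched as follows. In a real protocol the counter would itself be covered by the packet's MAC so an adversary cannot simply rewrite it; this sketch shows only the replay-rejection logic.

```python
# Data freshness via a monotonically increasing counter carried in each
# packet: the receiver drops anything whose counter is not strictly newer
# than the last accepted value, defeating simple replay of old messages.
class FreshnessChecker:
    def __init__(self):
        self.last_counter = -1

    def accept(self, counter: int, payload: bytes):
        if counter <= self.last_counter:
            return None               # stale or replayed packet
        self.last_counter = counter
        return payload

rx = FreshnessChecker()
assert rx.accept(1, b"reading-1") == b"reading-1"
assert rx.accept(2, b"reading-2") == b"reading-2"
assert rx.accept(1, b"reading-1") is None   # replayed packet is dropped
```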
 
 
2.7.7.6.4 Availability
 
Adjusting the traditional encryption algorithms to fit within the wireless sensor network is not free, and will introduce some extra costs. Some approaches choose to modify the code to reuse as much code as possible. Some approaches try to make use of additional communication to achieve the same goal. What’s more, some approaches force strict limitations on the data access, or propose an unsuitable scheme (such as a central point scheme) in order to simplify the algorithm. But all these approaches weaken the availability of a sensor and sensor network for the following reasons:
 
  • Additional computation consumes additional energy. If no more energy exists, the data will no longer be available.
  • Additional communication also consumes more energy. What’s more, as communication increases so too does the chance of incurring a communication conflict.
  • A single point of failure is introduced if the central point scheme is used. This greatly threatens the availability of the network. The requirement of security thus not only affects the operation of the network, but is also highly important in maintaining the availability of the whole network.
 
2.7.7.6.5 Self-Organization
 
A wireless sensor network is typically an ad hoc network, which requires every sensor node to be independent and flexible enough to be self-organizing and self-healing according to different situations. There is no fixed infrastructure available for the purpose of network management in a sensor network. This inherent feature brings a great challenge to wireless sensor network security as well. For example, the dynamics of the whole network inhibits the idea of pre-installing a shared key between the base station and all sensors. Several random key predistribution schemes have been proposed in the context of symmetric encryption techniques. In the context of applying public-key cryptography techniques in sensor networks, an efficient mechanism for public-key distribution is necessary as well. In the same way that distributed sensor networks must self-organize to support multihop routing, they must also self-organize to conduct key management and build trust relations among sensors. If self-organization is lacking in a sensor network, the damage resulting from an attack or even from the hazardous environment may be devastating.
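The random key predistribution idea mentioned above can be sketched as follows, in the spirit of the Eschenauer-Gligor scheme; the pool and ring sizes are illustrative, and key identifiers stand in for actual keys.

```python
# Random key predistribution sketch: each node is pre-loaded with a random
# subset ("key ring") drawn from a large pool; two neighbors can self-organize
# a secure link whenever their rings intersect. Identifiers stand in for keys.
import random

random.seed(7)                    # fixed seed so the example is repeatable
POOL = list(range(1000))          # key pool
RING_SIZE = 100

def key_ring():
    return set(random.sample(POOL, RING_SIZE))

node_a, node_b = key_ring(), key_ring()
shared = node_a & node_b
# With a pool of 1000 and rings of 100, two rings overlap with probability
# about 1 - (1 - 100/1000)**100, i.e. nearly always.
if shared:
    link_key = min(shared)        # deterministically agree on one common key
    assert link_key in node_a and link_key in node_b
```

The scheme needs no base station online at link-setup time, which is exactly the self-organization property discussed above.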
 
2.7.7.6.6 Time Synchronization
 
Most sensor network applications rely on some form of time synchronization. In order to conserve power, an individual sensor's radio may be turned off for periods of time. Furthermore, sensors may wish to compute the end-to-end delay of a packet as it travels between a pair of sensors. A more collaborative sensor network may require group synchronization for tracking applications, etc. Researchers have proposed sets of secure synchronization protocols for sender-receiver (pairwise) synchronization, multihop sender-receiver synchronization (for use when the pair of nodes is not within single-hop range), and group synchronization.
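Pairwise sender-receiver synchronization is typically computed from four timestamps exchanged in one round trip; assuming a symmetric link, the receiver's clock offset and the one-way delay fall out as below. This is a generic sketch of the arithmetic, not any specific protocol's message format.

```python
# Pairwise sender-receiver synchronization: node A timestamps t1 (send) and
# t4 (receive reply); node B timestamps t2 (receive) and t3 (send reply).
# Assuming a symmetric link, B's clock offset and the one-way delay are:
def pairwise_sync(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: B's clock runs 5 ms ahead, and the true one-way delay is 2 ms.
t1 = 100.0
t2 = t1 + 2 + 5      # arrives 2 ms later, stamped by a clock 5 ms ahead
t3 = t2 + 1          # B processes the request for 1 ms
t4 = t3 - 5 + 2      # reply takes 2 ms, stamped by A's (true) clock
offset, delay = pairwise_sync(t1, t2, t3, t4)
assert offset == 5.0 and delay == 2.0
```

A secure variant would authenticate the timestamps (e.g. with a MAC) so an attacker cannot forge them to skew the computed offset.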
 
2.7.7.6.7 Secure Localization
 
Often, the utility of a sensor network relies on its ability to accurately and automatically locate each sensor in the network. A sensor network designed to locate faults needs accurate location information in order to pinpoint the location of a fault. Unfortunately, an attacker can easily manipulate non-secured location information by reporting false signal strengths, replaying signals, etc. A technique called verifiable multilateration (VM) has been described in the literature. In multilateration, a device's position is accurately computed from a series of known reference points. Authenticated ranging and distance bounding are used to ensure the accurate location of a node. Because of distance bounding, an attacking node can only increase its claimed distance from a reference point. However, to ensure location consistency, an attacking node would also have to prove that its distance from another reference point is shorter. Since it cannot do this, a node manipulating the localization protocol can be found. For large sensor networks, the SPINE (Secure Positioning for sensor NEtworks) algorithm is used; it is a three-phase algorithm based on verifiable multilateration. SeRLoc (Secure Range-Independent Localization) has also been described; its novelty is its decentralized, range-independent nature. SeRLoc uses locators that transmit beacon information. It is assumed that the locators are trusted and cannot be compromised. Furthermore, each locator is assumed to know its own location. A sensor computes its location by listening for the beacon information sent by each locator. The beacons include the locator's location. Using all of the beacons that a sensor node detects, the node computes an approximate location based on the coordinates of the locators. Using a majority vote scheme, the sensor then computes an overlapping antenna region. The final computed location is the "center of gravity" of the overlapping antenna region.
All beacons transmitted by the locators are encrypted with a shared global symmetric key that is pre-loaded onto each sensor prior to deployment. Each sensor also shares a unique symmetric key with each locator, likewise pre-loaded.
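The multilateration step can be illustrated in two dimensions: given distances to three known reference points, the circle equations linearize to a small system that pins down the node's position. A minimal sketch of the geometry only (no attack detection or distance bounding):

```python
# Multilateration sketch: recover a node's 2-D position from distances to
# three known reference points. Subtracting the first circle equation
# (x-xi)^2 + (y-yi)^2 = di^2 from the others gives a 2x2 linear system,
# solved here by Cramer's rule.
def multilaterate(refs, dists):
    (x1, y1), (x2, y2), (x3, y3) = refs
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

refs = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [((true_pos[0] - x) ** 2 + (true_pos[1] - y) ** 2) ** 0.5
         for x, y in refs]
x, y = multilaterate(refs, dists)
assert abs(x - 3.0) < 1e-9 and abs(y - 4.0) < 1e-9
```

Verifiable multilateration adds distance bounding on top of this geometry: since a cheating node can only enlarge its claimed distances, it cannot produce a consistent position inside the triangle of reference points.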
 
2.7.7.6.8 Authentication
 
An adversary is not just limited to modifying the data packet. It can change the whole packet stream by injecting additional packets. So the receiver needs to ensure that the data used in any decision-making process originates from the correct source. On the other hand, when constructing the sensor network, authentication is necessary for many administrative tasks (e.g. network reprogramming or controlling sensor node duty cycle). From the above, we can see that message authentication is important for many applications in sensor networks. Informally, data authentication allows a receiver to verify that the data really is sent by the claimed sender. In the case of two-party communication, data authentication can be achieved through a purely symmetric mechanism: the sender and the receiver share a secret key to compute the message authentication code (MAC) of all communicated data. Adrian Perrig et al. propose a key-chain distribution system for their µTESLA secure broadcast protocol. The basic idea of the µTESLA system is to achieve asymmetric cryptography by delaying the disclosure of the symmetric keys. In this case a sender will broadcast a message generated with a secret key. After a certain period of time, the sender will disclose the secret key. The receiver is responsible for buffering the packet until the secret key has been disclosed. After disclosure the receiver can authenticate the packet, provided that the packet was received before the key was disclosed.9One limitation of µTESLA is that some initial information must be unicast to each sensor node before authentication of broadcast messages can begin.Liu and Ning propose an enhancement to the µTESLA system that uses broadcasting of the key chain commitments rather than µTESLA’s unicasting technique. They present a series of schemes starting with a simple pre-determination of key chains and finally settling on a multi-level key chain technique. 
The multi-level key chain scheme uses pre-determination and broadcasting to achieve a scalable key distribution technique that is designed to be resistant to denial of service attacks, including jamming.
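The delayed-disclosure idea behind µTESLA can be sketched in a few lines of Python. This is a simplified illustration, not the actual protocol: the chain length, disclosure index, and the SHA-256/HMAC choices are assumptions, and real µTESLA additionally requires loose time synchronization between sender and receivers.

```python
import hashlib
import hmac

def make_key_chain(seed: bytes, length: int):
    """Generate a one-way key chain where chain[i] = H(chain[i+1]).
    chain[0] is the public commitment; later keys are disclosed over time."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()  # chain[0] = commitment, chain[-1] = original seed
    return chain

# Sender side: authenticate a broadcast packet with key chain[i],
# then disclose chain[i] after a delay.
chain = make_key_chain(b"secret-seed", length=5)
commitment = chain[0]            # pre-loaded on every receiver
packet = b"sensor reading: 42"
i = 2
mac = hmac.new(chain[i], packet, hashlib.sha256).digest()

# Receiver side: buffer (packet, mac); once the key is disclosed, first
# verify that it belongs to the chain (hash it i times back to the
# commitment), then check the MAC.
def authenticate(packet, mac, disclosed_key, commitment, i):
    k = disclosed_key
    for _ in range(i):
        k = hashlib.sha256(k).digest()
    if k != commitment:
        return False  # forged key: does not hash back to the commitment
    return hmac.compare_digest(
        hmac.new(disclosed_key, packet, hashlib.sha256).digest(), mac)

assert authenticate(packet, mac, chain[i], commitment, i)
```

Because any receiver can recompute the hash chain forward but no one can invert it, a disclosed key authenticates earlier traffic without letting an attacker forge future packets.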
 
2.7.7.7 Attacks
 
Sensor networks are particularly vulnerable to several key types of attacks. Attacks can be performed in a variety of ways, most notably as denial of service attacks, but also through traffic analysis, privacy violation, physical attacks, and so on. Denial of service attacks on wireless sensor networks can range from simply jamming the sensors' communication channel to more sophisticated attacks designed to violate the 802.11 MAC protocol or any other layer of the wireless sensor network. Due to the potential asymmetry in power and computational constraints, guarding against a well-orchestrated denial of service attack on a wireless sensor network can be nearly impossible: a more powerful node can easily jam a sensor node and effectively prevent the sensor network from performing its intended duty. Attacks on wireless sensor networks are not limited to denial of service, but encompass a variety of techniques including node takeovers, attacks on the routing protocols, and attacks on a node's physical security. In this section, we first address some common denial of service attacks and then describe additional attacks, including those on the routing protocols as well as an identity-based attack known as the Sybil attack.
 
2.7.7.7.1 Contemplation of attacks
 
Wood and Stankovic define one kind of denial of service attack as "any event that diminishes or eliminates a network's capacity to perform its expected function". Certainly, denial of service attacks are not a new phenomenon. In fact, there are several standard techniques used in traditional computing to cope with some of the more common denial of service techniques, although this is still an open problem for the network security community. Unfortunately, wireless sensor networks cannot afford the computational overhead necessary to implement many of the typical defensive strategies. What makes the prospect of denial of service attacks even more alarming is the projected use of sensor networks in highly critical and sensitive applications. For example, a sensor network designed to alert building occupants in the event of a fire could be highly susceptible to a denial of service attack; even worse, such an attack could result in the deaths of building occupants due to the non-operational fire detection network. Other possible uses for wireless sensors include the monitoring of traffic flows, which may include the control of traffic lights, and so forth. A denial of service attack on such a sensor network could prove very costly, especially on major roads. For this reason, researchers have spent a great deal of time both identifying the various types of denial of service attacks and devising strategies to subvert them. We now describe some of the major types of denial of service attacks.
 
2.7.7.7.2 Types of Denial of Service attacks
 
A standard attack on wireless sensor networks is simply to jam a node or set of nodes. Jamming, in this case, is simply the transmission of a radio signal that interferes with the radio frequencies being used by the sensor network. The jamming of a network can come in two forms: constant jamming and intermittent jamming. Constant jamming involves the complete jamming of the entire network: no messages can be sent or received. If the jamming is only intermittent, then nodes are able to exchange messages periodically, but not consistently. This too can have a detrimental impact on the sensor network, as the messages being exchanged between nodes may be time sensitive.

Attacks can also be made on the link layer itself. One possibility is that an attacker may intentionally violate the communication protocol, e.g. the ZigBee or IEEE 802.11b (Wi-Fi) protocol, and continually transmit messages in an attempt to generate collisions. Such collisions would require the retransmission of any packet affected by the collision. Using this technique, an attacker could deplete a sensor node's power supply simply by forcing too many retransmissions.

At the routing layer, a node may take advantage of a multihop network by simply refusing to route messages. This could be done intermittently or constantly, with the net result that any neighbor who routes through the malicious node will be unable to exchange messages with at least part of the network. Extensions of this technique include intentionally routing messages to incorrect nodes (misdirection).

The transport layer is also susceptible to attack, as in the case of flooding. Flooding can be as simple as sending many connection requests to a susceptible node. In this case, resources must be allocated to handle each connection request, and eventually a node's resources will be exhausted, rendering the node useless.
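The energy-depletion effect of forced retransmissions can be illustrated with a toy model. The battery budget, the collision rates, and the deterministic expected-value model below are assumptions chosen only to show the trend, not measurements of any real radio:

```python
def packets_delivered(battery: int, collision_rate: float) -> int:
    """Packets successfully delivered before the battery is exhausted, when
    a fraction `collision_rate` of transmissions collide and must be resent
    (simple expected-value model, for illustration only)."""
    # Each delivered packet costs on average 1 / (1 - collision_rate)
    # transmissions, so useful work scales with (1 - collision_rate).
    return round(battery * (1 - collision_rate))

normal = packets_delivered(1000, 0.01)    # benign channel: ~1% collisions
attacked = packets_delivered(1000, 0.90)  # attacker forces 90% collisions
assert attacked < normal  # same energy budget, far less useful work
```

Even a partial collision attack is effective: the attacker spends energy only on short interfering bursts, while the victim pays for every retransmission.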
 
2.7.7.7.3 The Sybil attack
 
Newsome et al. describe the Sybil attack as it relates to wireless sensor networks. Simply put, the Sybil attack is defined as a “malicious device illegitimately taking on multiple identities”. It was originally described as an attack able to defeat the redundancy mechanisms of distributed data storage systems in peer-to-peer networks. In addition to defeating distributed data storage systems, the Sybil attack is also effective against routing algorithms, data aggregation, voting, fair resource allocation and foiling misbehavior detection. Regardless of the target (voting, routing, aggregation), the Sybil algorithm functions similarly. All of the techniques involve utilizing multiple identities. For instance, in a sensor network voting scheme, the Sybil attack might utilize multiple identities to generate additional “votes.” Similarly, to attack the routing protocol, the Sybil attack would rely on a malicious node taking on the identity of multiple nodes, and thus routing multiple paths through a single malicious node.
 
 
2.7.7.7.4 Traffic Analysis Attacks
 
Wireless sensor networks are typically composed of many low-power sensors communicating with a few relatively robust and powerful base stations. It is not unusual, therefore, for data gathered by the individual nodes to ultimately be routed to the base station. Often, for an adversary to effectively render the network useless, the attacker can simply disable the base station. To make matters worse, Deng et al. demonstrate two attacks that can identify the base station in a network (with high probability) without even understanding the contents of the packets (even if the packets are themselves encrypted). A rate monitoring attack simply makes use of the idea that nodes closest to the base station tend to forward more packets than those farther away. An attacker need only monitor which nodes are sending packets and follow those nodes that are sending the most packets. In a time correlation attack, an adversary simply generates events and monitors to whom a node sends its packets. To generate an event, the adversary could simply trigger a physical event that would be monitored by the sensor(s) in the area (turning on a light, for instance).
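A rate monitoring attack needs nothing more than counting. In the sketch below the node ids and observed traffic are invented for illustration; it shows how a passive observer might guess which node sits closest to the base station:

```python
from collections import Counter

# Sender ids of forwarded packets, as logged by a passive eavesdropper.
# The ids and the traffic pattern are hypothetical.
observed = ["n7", "n3", "n7", "n7", "n2", "n7", "n3", "n7", "n5", "n7"]

counts = Counter(observed)
# The heaviest forwarder is the best guess for a node adjacent to the
# base station, since traffic funnels toward it.
suspect, _ = counts.most_common(1)[0]
assert suspect == "n7"
```

Note that the attacker never needs to decrypt a single payload; the packet rate alone leaks the topology.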
 
2.7.7.7.5 Node Replication Attacks
 
Conceptually, a node replication attack is quite simple: an attacker seeks to add a node to an existing sensor network by copying (replicating) the node ID of an existing sensor node. A node replicated in this fashion can severely disrupt a sensor network's performance: packets can be corrupted or even misrouted. This can result in a disconnected network, false sensor readings, etc. If an attacker can gain physical access to the entire network, he can copy cryptographic keys to the replicated sensor and can also insert the replicated node at strategic points in the network. By inserting replicated nodes at specific network points, the attacker could easily manipulate a specific segment of the network, perhaps by disconnecting it altogether.
 
 
2.7.7.7.6 Attacks against Privacy
 
Sensor network technology promises a vast increase in automatic data collection capabilities through efficient deployment of tiny sensor devices. While these technologies offer great benefits to users, they also exhibit significant potential for abuse. Particularly relevant concerns are privacy problems, since sensor networks provide increased data collection capabilities. Adversaries can use even seemingly innocuous data to derive sensitive information if they know how to correlate multiple sensor inputs. For example, in the famous "panda-hunter problem", the hunter can infer the position of pandas by monitoring the traffic. The main privacy problem, however, is not that sensor networks enable the collection of information; in fact, much information from sensor networks could probably be collected through direct site surveillance. Rather, sensor networks aggravate the privacy problem because they make large volumes of information easily available through remote access. Hence, adversaries need not be physically present to maintain surveillance; they can gather information in a low-risk, anonymous manner. Remote access also allows a single adversary to monitor multiple sites simultaneously. Some of the more common attacks against sensor privacy are:
 
  • Monitoring and Eavesdropping: This is the most obvious attack on privacy. By listening to the data, the adversary can easily discover the communication contents. When the traffic conveys control information about the sensor network configuration, which potentially contains more detailed information than is accessible through the location server, eavesdropping can act effectively against privacy protection.
  • Traffic Analysis: Traffic analysis is typically combined with monitoring and eavesdropping. An increase in the number of packets transmitted between certain nodes can signal that a specific sensor has registered activity. Through analysis of the traffic, sensors with special roles or activities can be effectively identified.
  • Camouflage: Adversaries can insert their own nodes, or compromise existing ones, to hide in the sensor network. These nodes can then masquerade as normal nodes to attract packets and misroute them, e.g. by forwarding them to nodes conducting the privacy analysis. It is worth noting that, as pointed out in the literature, the current understanding of privacy in wireless sensor networks is immature, and more research is needed.
 
2.7.7.7.7 Physical Attacks
 
Sensor networks typically operate in hostile outdoor environments. In such environments, the small form factor of the sensors, coupled with the unattended and distributed nature of their deployment, makes them highly susceptible to physical attacks, i.e., threats of physical node destruction. Unlike many of the attacks mentioned above, physical attacks destroy sensors permanently, so the losses are irreversible. For instance, attackers can extract cryptographic secrets, tamper with the associated circuitry, modify the programming in the sensors, or replace them with malicious sensors under the control of the attacker. Recent work has shown that standard sensor nodes, such as the MICA2 motes, can be compromised in less than one minute. While these results are not surprising given that the MICA2 lacks tamper-resistant hardware protection, they provide a cautionary note about the speed of a well-trained attacker. If an adversary compromises a sensor node, the code inside the physical node may be modified.
 
2.8 Summary
 
We are now in a position to describe the measures for satisfying security requirements and protecting the sensor network from attacks. This starts with key establishment in wireless sensor networks, which lays the foundation for security in a wireless sensor network, followed by defending against DoS attacks, secure broadcasting and multicasting, defending against attacks on routing protocols, combating traffic analysis attacks, defending against attacks on sensor privacy, intrusion detection, secure data aggregation, defending against physical attacks, and trust management. An encryption algorithm should be simple to design, develop, and operate, secure against attacks, and very hard for unauthorized parties to break. An encryption algorithm for voice or other communication systems must also be compatible with existing systems to allow faster implementation.
 
 
LITERATURE REVIEW ON CRYPTOGRAPHY
 
3.1 Introduction
 
Cryptography is the science of security in which different techniques are used to encrypt and decrypt data. Cryptography enables us to store sensitive information or transmit it across insecure networks (like the Internet) so that it cannot be read by anyone except the intended recipient. While cryptography is the science of securing data, cryptanalysis is the science of analyzing and breaking secure communication. Classical cryptanalysis involves an interesting combination of analytical reasoning, application of mathematical tools, pattern finding, patience, determination, and luck. Cryptography is a technique used to hide the meaning of a message and is derived from the Greek word kryptos (hidden). This is different from steganographic techniques in that one is not hiding the actual message, only the meaning of the message. If a message were to fall into the hands of the wrong person, cryptography should ensure that the message cannot be read. Typically the sender and receiver agree upon a message scrambling protocol beforehand and agree upon methods for encrypting and decrypting messages. Cryptography is further divided into two implementation techniques: transposition and substitution.
 

3.2 Requirements of Cryptography in voice communication

Cryptography ensures secure voice communication. The requirements of a security establishment that addresses these risks are conventionally summarized under four broad headings. For the sender and receiver to be confident in the outcome of their communications, all of these requirements need to be satisfied.

3.2.1 Message Content Security

This comprises two separate requirements for a message's transit from sender to receiver:
  • No observer can access the contents of the message
  • No observer can identify the sender and receiver.
The term 'confidentiality' is used by computer scientists who specialize in security matters. This is most unfortunate, because the term has an entirely different meaning within commerce generally, which derives from the law of confidence. For this reason, the alternative term 'message content security' is used in this Module.

3.2.2 Integrity of Message Content

This requires that the recipient can be sure that, whether accidentally, or because of an action by any party:
  • The message has not been changed or lost during transmission;
  • The message has not been prevented from reaching the recipient; and
  • The message has not reached the recipient twice.

3.2.3 Authentication of the Sender and Recipient

This requires that:
  • The sender can be sure that the message reaches the intended recipient, and only the intended recipient; and
  • The recipient can be sure that the message came from the sender and not an imposter. The act by an imposter of sending such a message is referred to as 'spoofing'.

3.2.4 Non-Repudiation by the Sender and Recipient

This requires that:
  • The sender cannot credibly deny that the message was sent by them; and
  • The recipient cannot credibly deny that the message was received by them.  
3.3 Concept of Cryptography
 
Data that can be read and understood without any special measures is called plaintext or clear text; this is the message or data that has to be secured. The method of disguising plaintext in such a way as to hide its substance is called encryption. Encrypting plaintext results in unreadable gibberish called cipher text. Encryption is used to ensure that information is hidden from anyone for whom it is not intended, even those who can see the encrypted data. The process of reverting cipher text to its original plaintext is called decryption. A cryptographic algorithm, or cipher, is a mathematical function used in the encryption and decryption process. A cryptographic algorithm works in combination with a key (a word, number, or phrase) to encrypt the plaintext. The same plaintext encrypts to different cipher text with different keys. The security of encrypted data is entirely dependent on two things: the strength of the cryptographic algorithm and the secrecy of the key. A cryptographic algorithm, plus all possible keys and all the protocols that make it work, comprise a cryptosystem; PGP is one example of a cryptosystem. Cryptography is an important element of any strategy to address message transmission security requirements. It is the practical art of converting messages or data into a different form, such that no one can read them without having access to the 'key'. A message may be converted using a 'code' (in which case each character or group of characters is substituted by an alternative one) or a 'cipher' (in which case the message as a whole is converted, rather than individual characters).
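As a toy illustration of these terms, the following Python sketch uses a repeating-key XOR cipher. This cipher is trivially breakable and serves only to show plaintext, key, and cipher text in action:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Trivially breakable; used only to illustrate the terminology."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"k3y"
ciphertext = xor_cipher(plaintext, key)   # unreadable without the key
recovered = xor_cipher(ciphertext, key)   # the same operation decrypts
assert recovered == plaintext
# A different key yields different cipher text for the same plaintext.
assert xor_cipher(plaintext, b"other") != ciphertext
```

Note how the security here rests entirely on the secrecy of the key: the algorithm itself is public and symmetric, with the same function performing encryption and decryption.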
  • Cryptography is the science of using mathematics to "encrypt" and "decrypt" data. It enables us to store sensitive information or transmit it across insecure networks like the Internet so that no one other than the intended recipient can read it.
  • Cryptanalysis is the art of breaking ciphers, that is, retrieving the original message without knowing the proper key. Cryptography deals with all aspects of secure messaging, authentication, digital signatures, electronic money, and other applications.
There are two ways of defeating an encryption scheme, according to the type of attack:
  • Breaking encryption scheme.
  • Cracking encryption scheme.
There are two kinds of encryption schemes, according to encryption strength:
  • Strong encryption scheme.
  • Weak encryption scheme.
 
 
3.3.1 Strong encryption scheme
A 'strong encryption scheme' is one that cannot be cracked quickly enough to threaten the security of the message, even using the most powerful computers available for the task.
3.3.2 Weak encryption scheme
With a 'weak encryption scheme', on the other hand, there is a significant risk that the key can be discovered by an organization that has access to sufficient computing power. Discussions in this area generally relate to the computing power owned by the U.S. National Security Agency (NSA). There are several aspects of a scheme that determine its strength. One of particular importance is the key length needed to make it highly unlikely that a cracking attempt will succeed.
There are two kinds of cryptosystem:
 
  • Symmetric Cryptosystem (private key)
  • Asymmetric Cryptosystem (public key)
 
3.3.3 Private Key Cryptography processes
 
There are two kinds of private key cryptography processes, as follows:
 
  • Private key Encryption Process
  • Private key Decryption process
 
 
3.3.3.1 Private key Encryption Process
 
In private-key cryptography, the sender and recipient agree beforehand on a secret private key. The plaintext is combined with the key to create the cipher text. The method of combination is such that, it is hoped, an adversary cannot determine the meaning of the message without decrypting it, for which he needs the key. The problems of key distribution are solved by public-key cryptography, the concept of which was introduced by Whitfield Diffie and Martin Hellman in 1975. (There is now evidence that the British Secret Service invented it a few years before Diffie and Hellman, but kept it a military secret and did nothing with it.) Public-key cryptography uses a pair of keys: a public key, which encrypts data, and a corresponding private key for decryption. Because it uses two keys, it is sometimes called asymmetric cryptography. Figure 3.1 illustrates the private key encryption process.
 
 
Figure: 3.1 Private key encryption
 
3.3.3.2 Private key Decryption process
 
To break a message encrypted with private-key cryptography, an adversary must either exploit a weakness in the encryption algorithm itself, or else try an exhaustive search of all possible keys (the brute-force method). If the key is large enough (e.g. 128 bits), such a search would take an astronomically long time, even with very powerful computers.
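A rough calculation shows why a 128-bit exhaustive search is out of reach. The assumed rate of 10**12 keys tested per second is deliberately generous to the attacker:

```python
# Back-of-the-envelope: exhaustive search of a 128-bit keyspace, assuming
# a (very generous) attacker that tests 10**12 keys per second.
keys = 2 ** 128
rate = 10 ** 12                        # keys per second (assumption)
seconds_per_year = 60 * 60 * 24 * 365
years = keys // rate // seconds_per_year
assert years > 10 ** 18                # on the order of 10**19 years
```

On average only half the keyspace must be searched before the key is found, but halving a number of this magnitude changes nothing in practice.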
Figure: 3.2 Private key decryption
 
Private-key methods are efficient and difficult to break. However, one major drawback is that the key must be exchanged between the sender and recipient beforehand, raising the issue of how to protect the secrecy of the key. When the president of the United States exchanges launch codes with a nuclear weapons site under his command, the key is accompanied by a team of armed couriers. Banks likewise use high security in transferring their keys between branches. These types of key exchanges are not practical, however, for e-commerce between, say, Amazon.com and a casual web surfer. Figure 3.2 shows the private key decryption process.
 
3.3.4 Public key Cryptography processes
 
There are two kinds of public key cryptography processes, as follows:
 

  • Public key Encryption Process
  • Public key decryption process
 
3.3.4.1 Public key Encryption Process
 
Public key cryptography uses two keys: a private key (known only to the recipient) and a public key (known to everybody). The public key is used to encrypt the message, which is then sent to the recipient, who decrypts it using the private key. A message encrypted with the public key cannot be decrypted with any key other than its corresponding private key. Figure 3.3 illustrates the encryption process in public key cryptography.
[Figure: the message to be encrypted (plain text) and the public key, known to everyone, are fed to the encryption algorithm, which outputs the encrypted message or cipher text.]
Figure: 3.3 Public key encryption
 
3.3.4.2 Public key decryption process
 
Figure: 3.4 Public key Decryption processes
The public-key algorithm uses a one-way function to translate plaintext to cipher text. Without the private key, it is very difficult for anyone (including the sender) to reverse the process, i.e., to translate the cipher text back to plaintext. A common one-way function used in public-key cryptography involves factoring very large numbers. The idea is that it is relatively easy to multiply numbers, even large ones, with a computer, but very difficult to factor large numbers. The only known algorithms essentially perform a sort of exhaustive search (does 2 divide the number? does 3? 5? 7? and so on). With numbers hundreds of bits long, such a search requires an astronomical number of tests. Figure 3.4 shows the public key decryption process.
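The factoring-based one-way function can be illustrated with textbook RSA using deliberately tiny primes. This is a teaching sketch, not a secure implementation; real RSA uses primes hundreds of digits long together with randomized padding:

```python
# Textbook RSA with deliberately tiny primes: insecure, illustration only.
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e mod phi

m = 65                    # a message, encoded as a number smaller than n
c = pow(m, e, n)          # encrypt with the public key (e, n)
assert pow(c, d, n) == m  # decrypt with the private key (d, n)
# Deriving d from the public key alone requires factoring n: trivial for
# 3233 = 61 * 53, but infeasible when p and q are hundreds of digits long.
```

Modular exponentiation (the `pow(base, exp, mod)` call) is cheap in both directions once the exponent is known; it is recovering `d` without factoring `n` that is believed to be hard.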
 

3.3.5 Public key’s characteristics

Asymmetric algorithms rely on one key for encryption and a different but related key for decryption. A public-key encryption scheme has six ingredients, as follows.
  • Plaintext: This is the readable message or data that is fed into the algorithm as input.
  • Encryption algorithm: The encryption algorithm performs various transformations on the plaintext.
  • Public and private keys: This is a pair of keys that have been selected so that if one is used for encryption, the other is used for decryption. The exact transformations performed by the algorithm depend on the public or private key that is provided as input.
  • Ciphertext: This is the scrambled message produced as output. It depends on the plaintext and the key. For a given message, two different keys will produce two different ciphertexts.
  • Decryption algorithm: This algorithm accepts the ciphertext and the matching key and produces the original plaintext.
The essential steps are the following:
  1. Each user generates a pair of keys to be used for the encryption and decryption of messages.
  2. Each user places one of the two keys in a public register or other accessible file. This is the public key. The companion key is kept private. As Figure 3.1 suggests, each user maintains a collection of public keys obtained from others.
  3. If Bob wishes to send a confidential message to Ali, Bob encrypts the message using Ali's public key.
  4. When Ali receives the message, she decrypts it using her private key. No other recipient can decrypt the message because only Ali knows Ali's private key.
With this approach, all participants have access to public keys, and private keys are generated locally by each participant and therefore need never be distributed. As long as a user's private key remains protected and secret, incoming communication is secure. At any time, a system can change its private key and publish the companion public key to replace its old public key.

To discriminate between symmetric and public-key encryption, we refer to the key used in symmetric encryption as a secret key. The two keys used for asymmetric encryption are referred to as the public key and the private key. Invariably, the private key is kept secret, but it is called a private key rather than a secret key to avoid confusion with symmetric encryption. The following notation is used consistently throughout. A secret key is represented by Km, where m is some modifier; for example, Ka is a secret key owned by user A. A public key is represented by PUa, for user A, and the corresponding private key is PRa. Encryption of plaintext X can be performed with a secret key, a public key, or a private key, denoted by E(Ka, X), E(PUa, X), and E(PRa, X), respectively. Similarly, decryption of ciphertext C can be performed with a secret key, a public key, or a private key, denoted by D(Ka, C), D(PUa, C), and D(PRa, C), respectively.

3.3.6 Applications for Public key Cryptosystems

Before proceeding, we need to clarify one aspect of public-key cryptosystems that is otherwise likely to lead to confusion. Public-key systems are characterized by the use of a cryptographic algorithm with two keys, one held private and one available publicly. Depending on the application, the sender uses either the sender's private key or the receiver's public key, or both, to perform some type of cryptographic function. In broad terms, we can classify the use of public-key cryptosystems into the following categories:
  • Encryption/decryption: The sender encrypts a message with the recipient's public key.
  • Digital signature: The sender "signs" a message with its private key. Signing is achieved by a cryptographic algorithm applied to the message or to a small block of data that is a function of the message.
  • Key exchange: Two sides cooperate to exchange a session key. Several different approaches are possible, involving the private key(s) of one or both parties.
  • Randomize Authentication: A randomized protocol is a deterministic protocol that uses an extra random string in addition to its normal input.
Some algorithms are suitable for all three applications, whereas others can be used only for one or two of these applications. Table 3.1 indicates the applications supported by the algorithms discussed in this dissertation.
 
Table 3.1 Applications supported by the algorithms of public key encryption

Algorithm        Encryption/Decryption   Digital Signature   Key Exchange   Randomized Authentication
RSA              Yes                     Yes                 Yes            No
Elliptic Curve   Yes                     Yes                 Yes            No
Diffie-Hellman   No                      No                  Yes            No
DSS              No                      Yes                 No             No
 
3.3.7 Public-Key Cryptanalysis
As with symmetric encryption, a public-key encryption scheme is vulnerable to a brute-force attack. The countermeasure is the same: use large keys. However, there is a tradeoff to be considered. Public-key systems depend on the use of some sort of invertible mathematical function, and the complexity of calculating these functions may not scale linearly with the number of bits in the key but grow more rapidly than that. Thus, the key size must be large enough to make brute-force attack impractical but small enough for practical encryption and decryption. In practice, the key sizes that have been proposed do make brute-force attack impractical but result in encryption/decryption speeds that are too slow for general-purpose use. Instead, as was mentioned earlier, public-key encryption is currently confined to key management and signature applications.

Another form of attack is to find some way to compute the private key given the public key. To date, it has not been mathematically proven that this form of attack is infeasible for any particular public-key algorithm. Thus, any given algorithm, including the widely used RSA algorithm, is suspect. The history of cryptanalysis shows that a problem that seems insoluble from one perspective can be found to have a solution if looked at in an entirely different way.

Finally, there is a form of attack that is peculiar to public-key systems: in essence, a probable-message attack. Suppose, for example, that a message is to be sent that consists solely of a 56-bit DES key. An adversary could encrypt all possible 56-bit DES keys using the public key and discover the transmitted key by matching the resulting ciphertexts against the transmitted ciphertext. Thus, no matter how large the key size of the public-key scheme, the attack is reduced to a brute-force attack on a 56-bit key. This attack can be thwarted by appending some random bits to such simple messages.
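The probable-message attack can be sketched with a toy, deterministic public-key encryption (unpadded textbook RSA with made-up small parameters); here the candidate message space is tiny, which is exactly the condition the attack exploits:

```python
# Probable-message attack: when public-key encryption is deterministic
# (no random padding) and the message space is small, the adversary can
# encrypt every candidate message under the public key and match the
# intercepted ciphertext.
n, e = 3233, 17               # toy RSA public key (insecure parameters)
intercepted = pow(42, e, n)   # ciphertext of a message unknown to the attacker

recovered = next(m for m in range(n) if pow(m, e, n) == intercepted)
assert recovered == 42
# Appending random bits to the plaintext before encryption (padding)
# defeats this matching, since each candidate then has many ciphertexts.
```

The attacker never touches the private key: the public key alone suffices when the plaintext is guessable and encryption is deterministic.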
3.3.8 Confusion about cryptographic complexity
 
Complexity has always been a part of our environment, and therefore many scientific fields have dealt with complex systems and phenomena. Indeed, some would say that only what is somehow complex, displaying variation without being random, is worthy of interest. The term complex is often confused with the term complicated. In today's systems, this is the difference between myriad connecting "stovepipes" and effective "integrated" solutions.[3] Complex is the opposite of independent, while complicated is the opposite of simple. While this has led some fields to come up with specific definitions of complexity, there is a more recent movement to regroup observations from different fields and study complexity in itself, whether it appears in anthills, human brains, or stock markets. One such interdisciplinary group of fields is relational order theories.
3.4 Defining complexity using big-O notation
 
We express complexity using big-O notation. For a problem of size n:
  • a constant-time method is "order 1": O(1)
  • a linear-time method is "order n": O(n)
  • a quadratic-time method is "order n squared": O(n²)
Note that big-O expressions do not have constants or low-order terms, because when n gets large enough, constants and low-order terms don't matter (a constant-time method will be faster than a linear-time method, which will be faster than a quadratic-time method). Formal definition:
A function T(n) is O(f(n)) if for some constant c and for all values of n greater than some value n0:
 
                                                    T(n) <= c * f(n)
 
The idea is that T(n) is the exact complexity of a method or algorithm as a function of the problem size n, and that f(n) is an upper bound on that complexity (i.e., the actual time/space for a problem of size n will be no worse than f(n)). In practice, we want the smallest such f(n), the least upper bound on the actual complexity. For example, consider T(n) = 2 * n² + 5. We can show that T(n) is O(n²) by choosing c = 4 and n0 = 2, because for all values of n greater than 2:
 
                                                              2 * n^2 + 5 <= 4 * n^2
T(n) is not O(n), because whatever constant c and value n0 we choose, we can always find a value of n greater than n0 such that 2 * n^2 + 5 is greater than c * n. Thus O(n^2) is the least upper bound, and O(n^2) is the complexity.
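A quick empirical check of the argument above can be sketched in Python. The function names are illustrative; the constants c = 4 and n0 = 2 come from the text.

```python
# T(n) = 2*n^2 + 5 is O(n^2): with c = 4, T(n) <= c * n^2 for all n > 2.
def T(n):
    return 2 * n**2 + 5

def bound_holds(n, c=4):
    return T(n) <= c * n**2

# The bound holds for every n > n0 = 2 ...
assert all(bound_holds(n) for n in range(3, 1000))

# ... but T(n) is not O(n): for any fixed c, T(n) eventually exceeds
# c * n. Here c = 100 already fails at n = 100.
assert T(100) > 100 * 100
```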

3.4.1 How to Determine the Complexity of an Algorithm

In general, the running time of a piece of code depends on the kinds of statements it contains.
  • Sequence of statements
statement 1;
statement 2;
...
statement k;
(Note: this is code that really is exactly k statements; this is not an unrolled loop like the n calls to add shown above.) The total time is found by adding the times for all statements.
Total time = time(statement 1) + time(statement 2) + … + time(statement k)
If each statement is "simple" (involving only basic operations) then the time for each statement is constant and the total time is also constant: O(1).
  • if-then-else statements
if (condition) {
    sequence of statements 1
}
else {
    sequence of statements 2
}
Here, either sequence 1 will execute or sequence 2 will execute. Therefore, the worst-case time is the slower of the two possibilities: max(time(sequence 1), time(sequence 2)). For example, if sequence 1 is O(n) and sequence 2 is O(1), the worst-case time for the whole if-then-else statement is O(n).
  • for loops
for (i = 0; i < n; i++) {
    sequence of statements
}
The loop executes n times, so the sequence of statements also executes n times. Since we assume the statements are O(1), the total time for the for loop is n * O(1), which is O(n) overall.
  • Nested loops
First we'll consider loops where the number of iterations of the inner loop is independent of the value of the outer loop's index. For example:
for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++) {
        sequence of statements
    }
}
The outer loop executes n times. Every time the outer loop executes, the inner loop executes m times. As a result, the statements in the inner loop execute a total of n * m times, so the complexity is O(n * m). In the common special case where the stopping condition of the inner loop is j < n instead of j < m (i.e., the inner loop also executes n times), the total complexity for the two loops is O(n^2). Now let's consider nested loops where the number of iterations of the inner loop depends on the value of the outer loop's index. For example:
for (i = 0; i < n; i++) {
    for (j = i+1; j < n; j++) {
        sequence of statements
    }
}
Now we can't just multiply the number of iterations of the outer loop by the number of iterations of the inner loop, because the inner loop has a different number of iterations each time. Instead, count the iterations of the inner loop: it runs n-1 times on the first pass of the outer loop, n-2 times on the second, and so on, for a total of (n-1) + (n-2) + ... + 1 + 0 = n(n-1)/2 iterations. Therefore the total is O(n^2).
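The counting argument above can be verified directly; the helper name is illustrative.

```python
# Count the iterations of the dependent nested loop from the text:
# for each i in 0..n-1, the inner loop runs for j in i+1..n-1.
def count_iterations(n):
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1
    return count

# The inner body runs (n-1) + (n-2) + ... + 1 + 0 = n*(n-1)/2 times,
# which grows as O(n^2).
for n in (1, 5, 100):
    assert count_iterations(n) == n * (n - 1) // 2
```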
 
3.4.2 Time complexity of mathematical operations in big-O notation
Assume that a, b and c are n-bit numbers. Then we have:

Operation    Complexity
a + b        O(n)
a * b        O(n^2)
a mod c      O(n^2)

In general, the big-O notation keeps only the term that grows the fastest. For example,
  • O(ax^7 + 2x^2 + sin(x)) = O(ax^7) = O(x^7)
  • O(e^n + an^10) = O(e^n)
  • O(n! + n^50) = O(n!)
There is much more to the big-O notation, with fascinating ramifications; the interested reader can find excellent accounts in the literature.
An algorithm with an input of size n is said to be
  • Linear: If the running time is O(n)
  • Polynomial: If the running time is O(n^t) for some constant t
  • Exponential: If the running time is O(t^h(n)) for some constant t and polynomial h(n)
Random process:
Input is O(n) and output is O(n); in key exchange the complexity is O(n) and in digital signature it is O(n).

3.4.3 Best-case and Average-case Complexity

Some methods may require different amounts of time on different calls, even when the problem size is the same for both calls. For example, consider the add method that adds an item to the end of the list. In the worst case (the array is full), that method requires time proportional to the number of items in the list (because it has to copy all of them into the new, larger array). However, when the array is not full, add will only have to copy one value into the array, so in that case its time is independent of the length of the list; i.e., constant time.
In general, we may want to consider the best and average time requirements of a method as well as its worst-case time requirements. Which is considered the most important will depend on several factors. For example, if a method is part of a time-critical system like one that controls an airplane, the worst-case times are probably the most important (if the plane is flying towards a mountain and the controlling program can't make the next course correction until it has performed a computation, then the best-case and average-case times for that computation are not relevant; the computation needs to be guaranteed to be fast enough to finish before the plane hits the mountain).
On the other hand, if occasionally waiting a long time for an answer is merely inconvenient (as opposed to life-threatening), it may be better to use an algorithm with a slow worst-case time and a fast average-case time, rather than one with so-so times in both the average and worst cases. Note that calculating the average-case time for a method can be tricky. You need to consider all possible values for the important factors, and whether they will be distributed evenly.
3.4.4 Memory complexity
Here we provide an analysis of the computational and memory complexity of H.264/AVC decoders when applying VCR functionalities. We denote the index of the first I-packet as 0, and define N as the number of packets in one GOP (group of pictures) and M as the inter-packet distance between every two successive I/P-packets, which we assume fixed throughout the whole GOP. We will also use RP, RBL0, and RBL1 as the number of reference packets used for P-packets, B-packet List 0, and B-packet List 1, respectively. The bitstreams are encoded with a conventional prediction structure where every P-packet is predicted from the nearest forward I/P-packet and every B-packet uses the nearest I/P-packets as references. Forward reference packets are in List 0 and backward reference packets are in List 1. When random access to a specific packet is required, the packet to be displayed and any packets that it depends on need to be decoded. The complexity to access the j-th packet in one GOP is denoted as CRA(j), which indicates the number of packets to be decoded.
 
 
 

CRA(j) = 1                              for an I-packet
CRA(j) = j/M + 1                        for a P-packet
CRA(j) = min([j/M] + R, N/M + 1)        for a B-packet
Within one GOP, the decoding complexity for a P-packet is determined by its index j, since a large j means more previously encoded P-packets need to be decoded due to motion-compensated prediction (MCP) reference dependencies.
3.4.5 Communication complexity (CC)
The notion of communication complexity (CC) was introduced by Yao in 1979, who investigated the following problem involving two separated parties (Alice and Bob). Alice receives an n-bit string x and Bob another n-bit string y, and the goal is for one of them (say Bob) to compute a certain function f(x, y) with the least amount of communication between them. Note that here we are not concerned with the number of computational steps or the size of the computer memory used. Communication complexity tries to quantify the amount of communication required for such distributed computations. In the worst case, CC is O(n).

3.4.7 Constants Matter in Complexity

Recall that when we use big-O notation, we drop constants and low-order terms, because when the problem size gets sufficiently large those terms don't matter. However, this means that two algorithms can have the same big-O time complexity even though one is always faster than the other. For example, suppose algorithm 1 requires N^2 time and algorithm 2 requires 10 * N^2 + N time. For both algorithms the time is O(N^2), but algorithm 1 will always be faster than algorithm 2. In this case, the constants and low-order terms do matter in terms of which algorithm is actually faster. However, constants do not matter in terms of how an algorithm "scales" (i.e., how the algorithm's time changes when the problem size doubles). Although an algorithm that requires N^2 time will always be faster than an algorithm that requires 10 * N^2 time, for both algorithms the actual time quadruples when the problem size doubles. When two algorithms have different big-O time complexities, the constants and low-order terms only matter when the problem size is small. For example, even if there are large constants involved, a linear-time algorithm will always eventually be faster than a quadratic-time algorithm. This can be seen by comparing the value of 100 * N (a time that is linear in N) with the value of N^2 / 100 (a time that is quadratic in N): for values of N less than 10^4 the quadratic time is smaller than the linear time, but for all values of N greater than 10^4 the linear time is smaller.
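The crossover described above can be demonstrated directly. The cost functions are taken from the text; the function names are illustrative.

```python
# Compare a linear cost 100*N against a quadratic cost N^2/100.
# The quadratic algorithm is cheaper for small N, but past the
# crossover point N = 10**4 the linear algorithm always wins.
def linear(n):
    return 100 * n

def quadratic(n):
    return n * n // 100

assert quadratic(1000) < linear(1000)      # small N: quadratic cheaper
assert linear(10**4) == quadratic(10**4)   # crossover at N = 10**4
assert linear(10**5) < quadratic(10**5)    # large N: linear cheaper
```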
3.5 Mechanism of cryptography
This section studies queuing and scheduling algorithms as mechanisms related to the data slicing and allocation algorithm. The importance of queuing and scheduling algorithms is explained here, with a focus on the queuing algorithms that support WiMAX, WLAN, WiFi, SpicoNET and VoIP applications. These concepts are adapted to WSN technology in the BR encryption algorithm.
 
3.5.1 Scheduling
 
Scheduling is the bit-allocating process for data traffic. It organizes the order in which packets are transmitted at the output of each queue, ensuring that packets from different applications meet their QoS requirements. Scheduling algorithms provide mechanisms for data allocation and multiplexing at the packet level. A suitable scheduling mechanism will provide the necessary QoS guarantees required by heterogeneous classes of traffic while utilizing the resources as efficiently as possible. Scheduling is an important concept in IEEE 802.16 networks: admission control and congestion control policies all depend on the specific scheduling disciplines used. Scheduling refers to the way traffic is assigned priorities according to the queuing algorithm. This task is carried out by software known as a scheduler.
 
3.5.2 Queuing
 
Queuing is the way in which traffic packets are placed in order while they wait to be processed by the scheduler. The scheduler decides which queue, and which packets from that queue, should be transmitted. The order in which the scheduler selects packets to process can affect network performance, as shown in the following sections.
 
3.5.3 Different queuing applications
 
There are several types of queuing techniques, such as priority queuing, FIFO queuing, round robin queuing, custom queuing and weighted fair queuing.
 
3.5.3.1 Priority Queuing
 
Priority Queuing (PQ) is a well-known queuing technique. Traffic is prioritized with a priority queue list: packets are classified based on user-specified criteria and placed into one of four output queues (high, medium, normal, and low) according to the assigned priority. When the scheduler is ready to allocate data to traffic, it searches the high queue for a request. If there is one, it gets answered. If not, the medium queue is checked, then the normal queue, and finally the low priority queue. The process is repeated for the next cycle. In PQ, higher priority traffic can starve lower priority queues of channel access: if there is enough traffic in the higher queues, the rest of the queues will never be answered. Figure 3.1 illustrates the working principle of PQ. PQ gives higher priority queues absolute preferential treatment over lower priority queues. When a packet is to be sent out to an interface, the high priority queue is scanned and serviced first; the packet at the head of the highest queue is chosen for transmission. This procedure is repeated until the highest queue is empty, and the lower queues are then serviced in sequence. Strict PQ has been used for the bandwidth allocation of the four different service flows (UGS, rtPS, nrtPS and BE) in a hierarchical bandwidth-allocation structure for the IEEE 802.16 standard. The authors of that work point out that one disadvantage of the strict PQ scheduling algorithm is that higher priority connections can starve the bandwidth of lower priority connections. To overcome this problem, they include a traffic-policing module in each SS which forces the connection's bandwidth demand to stay within its traffic contract.
This approach prevents higher priority connections from using more than their allocated bandwidth, meaning every connection has a hard threshold on its bandwidth demand. It has also been suggested that PQ is suitable for voice traffic in a network, since PQ offers a queue that gives the highest priority to specific traffic.
 
Figure: 3.1 Priority Queuing
 
The problem is that congestion can occur in the lower priority queues, since their traffic may not receive adequate bandwidth.
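The strict priority discipline described above can be sketched in a few lines. This is a hypothetical illustration, not an implementation from the text; the class name and packet representation are illustrative.

```python
from collections import deque

# Strict priority queuing: four queues are scanned in order
# high -> medium -> normal -> low, and a packet is always taken
# from the highest non-empty queue.
LEVELS = ("high", "medium", "normal", "low")

class PriorityQueuing:
    def __init__(self):
        self.queues = {level: deque() for level in LEVELS}

    def enqueue(self, packet, level):
        self.queues[level].append(packet)

    def dequeue(self):
        for level in LEVELS:              # strict priority scan
            if self.queues[level]:
                return self.queues[level].popleft()
        return None                       # all queues empty

pq = PriorityQueuing()
pq.enqueue("p1", "low")
pq.enqueue("p2", "high")
pq.enqueue("p3", "normal")
# The high-priority packet is served first, regardless of arrival order.
```

Note how "p1" in the low queue can be starved indefinitely if high-priority packets keep arriving, which is exactly the disadvantage discussed above.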
 
3.5.3.2 First-in First-out Queuing
 
Figure: 3.2 FIFO Scheduler
 
First-in first-out queuing is also called first-come first-served. It works like the queue at a supermarket checkout: the first item to arrive is the first to be processed. As new packets arrive, they are simply added to the end of the queue. If the queue becomes full, the analogy with the supermarket stops: newly arriving packets are then discarded. This is known as tail-drop. Figure 3.2 illustrates the concept of FIFO.
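FIFO with tail-drop can be sketched as follows; this is an illustrative toy, and the class name and capacity are assumptions.

```python
from collections import deque

# FIFO queuing with tail-drop: when the queue is full, newly
# arriving packets are simply discarded.
class FifoQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            return False           # tail-drop: discard the new packet
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

q = FifoQueue(capacity=2)
assert q.enqueue("a") and q.enqueue("b")
assert not q.enqueue("c")          # queue full: "c" is tail-dropped
assert q.dequeue() == "a"          # first in, first out
```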
 
3.5.3.3 Custom Queuing
 
CQ has been available for a number of years and allows approximate bandwidth allocation for different queues. In CQ, traffic is classified into multiple queues with configurable queue limits by specifying the number of packets or bytes to be serviced for each class of traffic. In Queue 0, the system queues high priority packets such as signaling packets, so that these packets are always serviced first. System queue 0 is not considered in this study since we focus on traffic queues. For the other queues, packets are sent from each queue by cycling through the queues in a round-robin way, ensuring that each queue is serviced and that no individual queue can starve other queues. As each packet is sent, a byte counter is incremented. When the byte counter reaches or exceeds the default or configured threshold for the queue, transmission moves to the next queue. When one queue is empty, transmission moves to the next queue.
 
Figure: 3.3 Custom Queuing
 
Figure 3.3 illustrates CQ. Service providers determine how to assign bandwidth to different applications according to the service demands by custom queuing. Queue 0 is used for system traffic and is not user configurable. The size of queues 1 through 16 can be specified by associating each queue with a configurable byte count, which specifies how many bytes of data the scheduler should send from the current queue before moving on to the next queue. For instance, a service provider may assign queue 1 a 20% quota of the link bandwidth or frame capacity, queue 2 a 20% quota, and so forth. CQ is fair in its division of bandwidth, although the bandwidth used by a specific queue can only be specified indirectly in terms of byte count and queue length. When a particular queue is being processed, packets are sent until the byte count or packet limit is met or the queue is empty. CQ thus ensures that no application or specified class of service achieves more than a predetermined proportion of the overall capacity. As in the case of PQ, CQ is statically configured and does not automatically adapt to changing network conditions.
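The byte-count round-robin behavior described above can be sketched as follows. This is a hedged toy model, not router code; packet sizes, queue contents and the function name are illustrative.

```python
from collections import deque

# Custom queuing sketch: cycle through the queues round-robin, sending
# packets from each queue until that queue's configured byte count is
# reached (or the queue is empty), then move to the next queue.
def custom_queuing(queues, byte_counts):
    """queues: list of deques of (name, size) packets;
    byte_counts: per-queue byte threshold."""
    sent = []
    while any(queues):
        for q, threshold in zip(queues, byte_counts):
            sent_bytes = 0
            while q and sent_bytes < threshold:
                name, size = q.popleft()
                sent.append(name)
                sent_bytes += size
    return sent

q1 = deque([("a1", 100), ("a2", 100), ("a3", 100)])
q2 = deque([("b1", 100)])
order = custom_queuing([q1, q2], byte_counts=[200, 200])
# q1 sends two 100-byte packets (its 200-byte quota), then q2 is
# served, then the cycle returns to q1 for its remaining packet.
```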
 
3.5.3.4 Weighted Fair Queuing
 
Weighted Fair Queuing (WFQ) is a hashing algorithm which places flows into separate queues, where weights are used to determine how many packets are processed at a time. The weights depend on the service requirements; there is no unique rule for weight determination. WFQ works like having several doors: when a packet arrives, it is classified and put through one of the doors. Different doors have different weights, assigned to the different applications according to their QoS requirements. Each door is the entry to a queue that is served, together with the other queues, in weighted round robin order. Figure 3.4 illustrates weighted fair queuing.
Figure: 3.4 Weighted Fair Queuing
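The weighted round robin service order mentioned above can be sketched as follows. This is an illustrative approximation of WFQ under the assumption that each queue is served a number of packets per cycle proportional to its weight; queue names and weights are invented for the example.

```python
from collections import deque

# Weighted round robin: each queue is served `weight` packets per
# cycle, so higher-weight flows get proportionally more service.
def weighted_round_robin(queues, weights):
    sent = []
    while any(queues):
        for q, weight in zip(queues, weights):
            for _ in range(weight):
                if q:
                    sent.append(q.popleft())
    return sent

voice = deque(["v1", "v2", "v3", "v4"])
data = deque(["d1", "d2"])
order = weighted_round_robin([voice, data], weights=[2, 1])
# Voice, with weight 2, gets two packets served per cycle for each
# data packet: v1, v2, d1, v3, v4, d2.
```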
 
3.5.3.5 Round Robin Scheduling
Round robin scheduling is a simple and well-known algorithm for scheduling processes in an operating system: it assigns time slices to each process in equal portions, treating all processes as having the same priority. Round robin scheduling is starvation-free. It can also be applied to other scheduling problems, such as network resource scheduling. In wireless networks where many stations share one channel, the round robin algorithm gives every station the ability to transmit or receive on the shared channel at a regular interval. Round robin is fair by nature, but it is not efficient if the processes have different processing capacities: some processes are likely to receive more requests than they can handle while other processes will use only part of their resources. Figure 3.5 illustrates the round robin scheduling algorithm.
 

 
Figure: 3.5 Round Robin Scheduling
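The equal-time-slice behavior described above can be sketched as follows; process names and work units are illustrative.

```python
from collections import deque

# Round robin scheduling: each process gets one equal time slice per
# cycle, then rejoins the end of the ready queue if work remains, so
# no process is ever starved.
def round_robin(processes, slice_units=1):
    """processes: dict of name -> remaining work units."""
    order = []
    ready = deque(processes.items())
    while ready:
        name, remaining = ready.popleft()
        order.append(name)                   # run one time slice
        remaining -= slice_units
        if remaining > 0:
            ready.append((name, remaining))  # rejoin the queue
    return order

order = round_robin({"A": 3, "B": 1, "C": 2})
# Each process runs in turn: A B C A C A.
```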
 
3.5.3.6 Data Slicing System in cryptography
 
There are different types of data slicing:

  • Segmentation is the process of chopping streams of data into smaller chunks.
  • Fragmentation (re-segmentation) is the process of chopping larger chunks of data into smaller chunks.
 
3.6.1 Segmentation
Segmentation is the term used to describe the process of chopping streams of data into smaller chunks as shown in Figure 3.6. Segmentation usually occurs fairly early in the communication process and it is almost always software that performs the segmentation process. The segmentation process is performed prior to transfer of data across a network or before storage on a peripheral device. Segmentation is necessary because today's communication systems use what is called packetized communication. Chopping the data into segments has several advantages.
  • Once segmented, the segments are referred to individually as protocol data units.
  • These PDU's are encapsulated into packets.
  • Packets can be sent along more than one path to the destination.
  • This increases both the reliability, and
  • The speed at which data can travel across a network.
  • Packets can travel across a packet switched network without tying up a communications circuit expressly for that purpose.
  • Multiple conversations between different parties can therefore share a single communications link.
  • If any single packet is lost, it can be retransmitted instead of having to start the entire conversation all over again.
An example of segmentation is when the Transmission Control Protocol (TCP, of TCP/IP fame) chops an e-mail into segments, encapsulates each segment with remote and local TCP port numbers, and then passes the completed protocol data unit to the Internet Protocol (IP) to be stamped with a sequence number, have source and destination addresses added, and have a checksum calculated.
Figure: 3.6 Processes of Segmentation
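The segmentation process can be sketched in one function; the segment size and message content are illustrative.

```python
# Segmentation sketch: a stream of data is chopped into fixed-size
# segments (protocol data units) before transmission.
def segment(data, segment_size):
    return [data[i:i + segment_size]
            for i in range(0, len(data), segment_size)]

message = b"this is an e-mail message to be sent across the network"
segments = segment(message, segment_size=16)

# Each segment can travel independently and be retransmitted alone if
# lost; reassembly concatenates the segments in sequence order.
assert b"".join(segments) == message
```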
 
3.6.2 Fragmentation
 
Fragmentation, also called re-segmentation, is the process of chopping larger chunks of data into smaller chunks, as shown in Figure 3.7. Fragmentation is usually performed at the hardware level; when data is chopped into fragments, each fragment is referred to as a frame. Fragmentation occurs so that data can be transmitted across a connection without overwhelming the memory buffers on either side of the connection, and it allows for the coordination of data transmission amongst devices connected to a common transmission medium.
Figure: 3.7 Processes of Fragmentation
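Fragmentation can be sketched similarly; here each frame is tagged with its offset so the receiver can reassemble in order. The MTU value and payload are illustrative.

```python
# Fragmentation sketch: a larger chunk is chopped into frames small
# enough not to overwhelm the receiver's memory buffers.
def fragment(chunk, mtu):
    """Split one chunk into frames of at most mtu bytes, tagging each
    frame with its byte offset for in-order reassembly."""
    return [(offset, chunk[offset:offset + mtu])
            for offset in range(0, len(chunk), mtu)]

frames = fragment(b"10111110" * 4, mtu=8)
# Frames may arrive out of order; sorting by offset restores the chunk.
reassembled = b"".join(payload for _, payload in sorted(frames))
assert reassembled == b"10111110" * 4
```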
 
3.6.3 Process of Encapsulation System
All communications on a network originate at a source and are sent to a destination. The information sent on a network is referred to as data or data packets. If one computer (host A) wants to send data to another computer (host B), the data must first be packaged through a process called encapsulation. Encapsulation wraps data with the necessary protocol information before network transit. Therefore, as the data packet moves down through the layers of the OSI model, it receives headers, trailers, and other information. To see how encapsulation occurs, examine the manner in which data travels through the layers, as illustrated in Figure 3.8. Once the data is sent from the source, it travels through the application layer down through the other layers.
Figure: 3.8 Processes of Encapsulation
The packaging and flow of the data that is exchanged goes through changes as the layers perform their services for end users. Networks must perform the following five conversion steps in order to encapsulate data.
3.6.3.1 Package the data for end-to-end transport
The data is packaged for internet-work transport. By using segments, the transport function ensures that the message hosts at both ends of the e-mail system can reliably communicate.
3.6.3.2 Add the network IP address to the header
The data is put into a packet or datagram that contains a packet header with source and destination logical addresses. These addresses help network devices send the packets across the network along a chosen path.
3.6.3.3 Add the data link layer header and trailer
Each network device must put the packet into a frame. The frame allows connection to the next directly connected network device on the link. Each device in the chosen network path requires framing in order for it to connect to the next device.
3.6.3.4 Convert to bits for transmission
The frame must be converted into a pattern of 1s and 0s (bits) for transmission on the medium. A clocking function enables the devices to distinguish these bits as they travel across the medium. The medium on the physical internetwork can vary along the path used. For example, the e-mail message can originate on a LAN, cross a campus backbone, and go out a WAN link until it reaches its destination on another remote host.
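The encapsulation steps above can be sketched end to end. This is a hypothetical illustration: the header and trailer names (TCP_HDR, IP_HDR, MAC_HDR, MAC_TRL) are placeholders, not real wire formats.

```python
# Encapsulation sketch: each layer wraps the data from the layer above
# with its own header (and, at the data link layer, a trailer), and the
# frame is finally converted to a pattern of 1s and 0s.
def encapsulate(data):
    segment = b"TCP_HDR|" + data                     # 1. transport segment
    packet = b"IP_HDR|" + segment                    # 2. network packet
    frame = b"MAC_HDR|" + packet + b"|MAC_TRL"       # 3. data link frame
    bits = "".join(f"{byte:08b}" for byte in frame)  # 4. bits on the medium
    return bits

bits = encapsulate(b"hello")
# What goes on the wire is nothing but a pattern of 1s and 0s.
assert set(bits) <= {"0", "1"}
```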
3.7 Sending of data
As a user sends a message, its alphanumeric characters are converted to data that can travel across the internetwork. The process continues in this way until all the packets of data have been produced and sent.
3.8 Summary
Cryptographic requirements are not linear; they are always dynamic. Supported protocols, a strong encryption algorithm, cryptographic mechanisms and complexity analysis are all included here for effective secure communication. These requirements should always be considered first when building any communication security system, whether for voice communication or otherwise.
 

CHAPTER FOUR

COMPARISON AMONG THE DIFFERENT ALGORITHMS

4.1 Introduction

This chapter discusses, evaluates and summarizes different algorithms: the Diffie-Hellman key exchange, the Rivest-Shamir-Adleman (RSA) algorithm, the Elliptic Curve Cryptography (ECC) algorithm and the Digital Signature Standard (DSS). Each algorithm has different complexities (time complexity, memory complexity and communication complexity) and requires associated components such as different types of queues, schedulers and scanners.

4.2        Diffie-Hellman Key Exchange Algorithm

The Diffie-Hellman key exchange algorithm is an algorithm for secure communication. The first published public-key algorithm appeared in the seminal paper by Diffie and Hellman that defined public-key cryptography, and it is generally referred to as Diffie-Hellman key exchange. A number of commercial products employ this key exchange technique. The purpose of the algorithm is to enable two users to securely exchange a key that can then be used for subsequent encryption of messages. The Diffie-Hellman algorithm depends for its effectiveness on the difficulty of computing discrete logarithms, which we briefly define in the following way.

  • First, we define a primitive root of a prime number p as one whose powers modulo p generate all the integers from 1 to p-1. That is, if a is a primitive root of the prime number p, then the numbers a mod p, a^2 mod p, ..., a^(p-1) mod p are distinct and consist of the integers from 1 through p-1 in some permutation.
  • For any integer b and a primitive root a of prime number p, we can find a unique exponent i such that b = a^i (mod p), where 0 <= i <= (p-1). The exponent i is referred to as the discrete logarithm of b for the base a, mod p. We express this value as dloga,p(b).

4.2.1 Concept of the Diffie-Hellman Key Exchange Algorithm

For this algorithm, there are two publicly known numbers: a prime number q and an integer a that is a primitive root of q. Suppose the users A and B wish to exchange a key. User A selects a random integer XA < q and computes YA = a^XA mod q. Similarly, user B independently selects a random integer XB < q and computes YB = a^XB mod q. Each side keeps the X value private and makes the Y value available publicly to the other side. User A computes the key as K = (YB)^XA mod q and user B computes the key as K = (YA)^XB mod q. These two calculations produce identical results:
 
K = (YB)^XA mod q
  = (a^XB mod q)^XA mod q
  = (a^XB)^XA mod q        (by the rules of modular arithmetic)
  = a^(XB XA) mod q
  = a^(XA XB) mod q
  = (a^XA)^XB mod q
  = (a^XA mod q)^XB mod q
  = (YA)^XB mod q
The result is that the two sides have exchanged a secret value. Furthermore, because XA and XB are private, an adversary only has the following ingredients to work with: q, a, YA, and YB. Thus, the adversary is forced to take a discrete logarithm to determine the key. For example, to determine the private key of user B, an adversary must compute
                                              XB = dloga,q (YB)
The adversary can then calculate the key K in the same manner as user B calculates it. The security of the Diffie-Hellman key exchange lies in the fact that, while it is relatively easy to calculate exponentials modulo a prime, it is very difficult to calculate discrete logarithms. For large primes, the latter task is considered infeasible. Here is an example. Key exchange is based on the use of the prime number q = 353 and a primitive root of 353, in this case a = 3. A and B select secret keys XA = 97 and XB = 233, respectively. Each computes its public key:
A computes YA = 3^97 mod 353 = 40.
B computes YB = 3^233 mod 353 = 248.
 
After they exchange public keys, each can compute the common secret key:
A computes K = (YB)^XA mod 353 = 248^97 mod 353 = 160.
B computes K = (YA)^XB mod 353 = 40^233 mod 353 = 160.
 
Suppose an attacker has available the following information:
                                     q = 353; a = 3; YA = 40; YB = 248
In this simple example, it would be possible by brute force to determine the secret key 160. In particular, an attacker E can determine the common key by discovering a solution to the equation 3^a mod 353 = 40 or the equation 3^b mod 353 = 248. The brute-force approach is to calculate powers of 3 modulo 353, stopping when the result equals either 40 or 248. The desired answer is reached with the exponent value of 97, which gives 3^97 mod 353 = 40. With larger numbers, the problem becomes impractical.
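The worked example above can be reproduced directly with Python's three-argument pow(), which performs modular exponentiation; all values come from the text.

```python
# Diffie-Hellman key exchange with the example values from the text:
# q = 353, primitive root a = 3, private keys XA = 97 and XB = 233.
q, a = 353, 3
XA, XB = 97, 233                 # private keys

YA = pow(a, XA, q)               # A's public key: 3^97 mod 353
YB = pow(a, XB, q)               # B's public key: 3^233 mod 353
assert (YA, YB) == (40, 248)

K_A = pow(YB, XA, q)             # A computes (YB)^XA mod q
K_B = pow(YA, XB, q)             # B computes (YA)^XB mod q
assert K_A == K_B == 160         # both sides derive the same secret
```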

4.2.2 Limitations

  • Man-in-the-Middle Attack: The protocol as depicted is insecure against a man-in-the-middle attack.

  • Limited to the exchange of secret values: The purpose of the algorithm is to enable two users to securely exchange a key that can then be used for subsequent encryption of messages. The algorithm itself is limited to the exchange of secret values.
  • Difficulty of computing discrete logarithms: The Diffie-Hellman algorithm depends for its effectiveness on the difficulty of computing discrete logarithms
4.2.3 Complexities
 

Diffie-Hellman Key Exchange Algorithm contains different complexities like as time complexity, Memory complexity and communication complexity.

 
  • Encryption Time Complexity: The minimum time complexity of the Diffie-Hellman key exchange algorithm is (YA)^XB mod q, or O(n), and the maximum time complexity is ((YA)^XB) mod q, or O(n).
  • Decryption Time Complexity: The minimum time complexity of the Diffie-Hellman key exchange algorithm is (YA)^XB mod q, or O(n), and the maximum time complexity is ((YA)^XB) mod q, or O(n).
  • Memory Complexity: The maximum memory complexity of the Diffie-Hellman key exchange algorithm is O(n), where n is the key size.
  • Communication Complexity: The maximum communication complexity of the Diffie-Hellman key exchange algorithm is O(n).

4.3 Introduction of RSA Algorithm

RSA is one of the most popular and successful public-key cryptography algorithms, and it has been implemented in many commercial applications. It is named after its inventors Ronald L. Rivest, Adi Shamir, and Leonard Adleman, who invented the algorithm in 1977. The Rivest-Shamir-Adleman (RSA) algorithm has since reigned supreme as the most widely accepted and implemented general-purpose approach to public-key encryption. The inventors utilized the fact that when prime numbers are chosen as a modulus, operations behave "conveniently": if we use a prime for the modulus, then raising a number to the power (prime - 1) gives 1. The RSA algorithm is a block cipher in which the plaintext and ciphertext are integers between 0 and n - 1 for some n. A typical size for n is 1024 bits, or 309 decimal digits; that is, n is less than 2^1024. The RSA algorithm simply capitalizes on the fact that there is no efficient way to factor very large integers, and the security of the whole algorithm relies on that fact. If someone comes up with an easy way of factoring a large number, then that is the end of the RSA algorithm, and any message encrypted with it is no longer secure.

4.3.1 Concept of RSA Algorithm

The scheme developed by Rivest, Shamir, and Adleman makes use of an expression with exponentials. Plaintext is encrypted in blocks, with each block having a binary value less than some number n. That is, the block size must be less than or equal to log2(n); in practice, the block size is i bits, where 2^i < n ≤ 2^(i+1). Encryption and decryption are of the following form, for some plaintext block M and ciphertext block C:
                                           C = M^e mod n
M = C^d mod n = (M^e)^d mod n = M^(ed) mod n
Both sender and receiver must know the value of n. The sender knows the value of e, and only the receiver knows the value of d. Thus, this is a public-key encryption algorithm with a public key of PU = {e, n} and a private key of PR = {d, n}. For this algorithm to be satisfactory for public-key encryption, the following requirements must be met:
  1. It is possible to find values of e, d, n such that M^(ed) mod n = M for all M < n.
  2. It is relatively easy to calculate M^e mod n and C^d mod n for all values of M < n.
  3. It is infeasible to determine d given e and n.
For now, we focus on the first requirement and consider the other questions later. We need to find a relationship of the form
                                              M^(ed) mod n = M
The preceding relationship holds if e and d are multiplicative inverses modulo φ(n), where φ(n) is the Euler totient function. For p, q prime, φ(pq) = (p - 1)(q - 1). The relationship between e and d can be expressed as below.
                                               ed mod φ(n) = 1
This is equivalent to saying
                                               ed ≡ 1 mod φ(n)
                                               d ≡ e^(-1) mod φ(n)
That is, e and d are multiplicative inverses mod φ(n). Note that, according to the rules of modular arithmetic, this is true only if d (and therefore e) is relatively prime to φ(n). Equivalently, gcd(φ(n), d) = 1.
To state the RSA scheme, the ingredients are the following:
p, q, two prime numbers (private, chosen)
n = pq (public, calculated)
e, with gcd(φ(n), e) = 1; 1 < e < φ(n) (public, chosen)
d ≡ e^(-1) (mod φ(n)) (private, calculated)
 
The private key consists of {d, n} and the public key consists of {e, n}. Suppose that user A has published its public key and that user B wishes to send the message M to A. Then B calculates C = M^e mod n and transmits C. On receipt of this ciphertext, user A decrypts by calculating
                                                    M = C^d mod n
The basic elements of the RSA algorithm can be summarized as follows. Given two prime numbers p and q, with n = pq and a message block M < n, two integers e and d are chosen such that
                                                   M^(ed) mod n = M
It is stated that the preceding relationship holds if e and d are multiplicative inverses modulo φ(n), where φ(n) is the Euler totient function. It can be shown that for p, q prime, φ(n) = (p - 1)(q - 1). The relationship between e and d can be expressed as ed mod φ(n) = 1.
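As a quick numeric check of this relationship, here is a sketch with small illustrative primes of my own choosing (toy values, not secure):

```python
# Toy check that ed mod φ(n) = 1 makes M^(ed) mod n = M.
p, q = 17, 11
n = p * q                  # n = 187
phi = (p - 1) * (q - 1)    # φ(n) = 16 * 10 = 160
e, d = 7, 23               # e*d = 161 ≡ 1 (mod 160)
assert (e * d) % phi == 1

M = 88                     # plaintext block, M < n
C = pow(M, e, n)           # C = M^e mod n
assert pow(C, d, n) == M   # C^d mod n recovers M
```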

4.3.2 Proof of the RSA Algorithm

Before proving the algorithm, we summarize some basic results. A property of modular arithmetic is the following:
                      [(a mod n) x (b mod n)] mod n = (a x b) mod n
From this, it should be easy to see that if x mod n = 1, then x^2 mod n = 1 and, for any integer y, x^y mod n = 1. Similarly, if x mod n = 0, then for any integer y, x^y mod n = 0. Another property of modular arithmetic is [(a mod n) - (b mod n)] mod n = (a - b) mod n. If integers a and n are relatively prime, then a^φ(n) mod n = 1 (Euler's theorem).
4.3.2.1 Mathematics behind RSA
 
The RSA algorithm works as follows. It first finds two prime numbers and generates a key pair using those two prime numbers. Then encryption and decryption are done using the key pair. p and q are distinct primes: n = p·q
 
Find a, b such that: a·b ≡ 1 mod (p-1)(q-1)
Encryption Key: e = (b, n)
Decryption Key: d = (a, n)
Encryption of message m: Ee(m) = m^b mod n = C (cipher)
Decryption of C: Dd(C) = C^a mod n = m
 
So in order for RSA to work, we must have the property:
 
                                              (m^b)^a ≡ m mod n
 
Here we have to prove that the above equation is true. If the above equation is proved, then we can say that the RSA algorithm really works.
 

4.3.3 RSA Algorithm Processes

 

The encryption and decryption in the RSA algorithm are done as follows. Before encryption and decryption are done, we have to generate the key pair, and then those keys are used for encryption and decryption.

·         Key Generation: The first step in RSA encryption is to generate a key pair. Two keys are generated of which one is used as the public key and the other is used as the private key. The keys are generated with the help of two large prime numbers. The keys are generated as follows

1. Generate two large random primes p and q.
2. Compute n which is equal to product of those two prime numbers, n = pq
3. Compute φ(n) = (p-1)(q-1).
4. Choose an integer e, 1 < e < φ(n), such that gcd(e, φ(n)) = 1.
5. Compute the secret exponent d, 1 < d < φ(n), such that ed ≡ 1 (mod φ(n)).
6. The public key is (n, e) and the private key is (n, d). The values of p, q, and φ(n) should also be kept secret.
  • n is known as the modulus.
  • e is known as the public exponent or encryption exponent.
  • d is known as the secret exponent or decryption exponent.
  • Encryption: Encryption is done using the public key component e and the modulus n. To whomever we need to send the message, we encrypt the message with their public key (e,n). Encryption is done by taking an exponentiation of the message m with the public key e and then taking a modulus of it. The following steps are done in encryption.
 
            1. Obtain the recipient’s public key (n,e)
            2. Represent the plaintext message as a positive integer m < n
            3. Compute the ciphertext c = m^e mod n.
            4. Send the ciphertext c to the recipient.

·         Decryption: Decryption is done using the Private key. The person who is receiving the encrypted message uses his own private key to decrypt the message. Decryption is similar to the encryption except that the keys used are different.

            1. Recipient uses his private key (n,d) to compute m = c^d mod n.
            2. Extract the plaintext from the integer representative m.
 
           The RSA algorithm has been implemented in many applications, and it is currently
           one of the most popularly used encryption algorithms.
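The key generation, encryption, and decryption steps above can be sketched in Python. This is a toy illustration: the small primes 61 and 53, the choice e = 17, and the helper names are my own, and Python 3.8+ is assumed for the three-argument `pow(e, -1, phi)` modular inverse.

```python
def generate_keys(p, q, e):
    """Steps 2-6 of key generation: derive (n, e) and (n, d) from primes p, q."""
    n = p * q
    phi = (p - 1) * (q - 1)          # φ(n) = (p-1)(q-1)
    d = pow(e, -1, phi)              # secret exponent: e*d ≡ 1 (mod φ(n))
    return (n, e), (n, d)            # public key, private key

def encrypt(m, public_key):
    n, e = public_key
    return pow(m, e, n)              # c = m^e mod n

def decrypt(c, private_key):
    n, d = private_key
    return pow(c, d, n)              # m = c^d mod n

# Toy primes for illustration only; real RSA uses primes of 1024+ bits.
public, private = generate_keys(61, 53, e=17)
m = 65                               # plaintext as a positive integer m < n
c = encrypt(m, public)
assert decrypt(c, private) == m      # round trip recovers the plaintext
```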

·         Prime Number Hunt: But how would one actually find the prime numbers? There are many theorems available that can be used to test whether a given number is prime. One of the most popularly used is Fermat's little theorem. It states that for a prime p and any a less than p, a^(p-1) ≡ 1 mod p. This theorem can be used to test the primality of a number, which is called primality testing. The proof of the theorem is given below.

           Since p is prime, a ∈ Z*_p and φ(p) = p - 1. Thus a^(p-1) = a^φ(p) ≡ 1 mod p.
 
            Now, to find a prime number, we can do the following steps:
 
      Step 1: Choose a number p randomly. If large, this number has a chance of being
      prime on the order of 1 in several thousand.
 
      Step 2: Then choose a number a < p randomly. We use a to test the primality
      of p. First, we know that if p is prime, then a^(p-1) ≡ 1 (mod p). Second, we know that
      if for some x (where x is not 1 or p-1) we have x^2 ≡ 1 (mod p), then p is not prime.
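The two steps above amount to a probabilistic primality test. Here is a minimal sketch of the Fermat-test part; the function name, round count, and optional fixed-bases parameter are my own choices, and note that Carmichael numbers can fool the pure Fermat test, which is why the square-root check of Step 2 is added in practice.

```python
import random

def is_probably_prime(p, rounds=20, bases=None):
    """Fermat test: a prime p satisfies a^(p-1) ≡ 1 (mod p) for every
    base a with 1 < a < p; a single failing base proves p composite."""
    if p < 2:
        return False
    if p in (2, 3):
        return True
    if bases is None:
        bases = [random.randrange(2, p - 1) for _ in range(rounds)]
    return all(pow(a, p - 1, p) == 1 for a in bases)

assert is_probably_prime(97)                   # a prime passes every base
assert not is_probably_prime(15, bases=[2])    # 2^14 mod 15 = 4, so composite
```

Passing fixed bases makes the result deterministic for testing; with random bases the answer "probably prime" can be wrong only with very small probability.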
 
4.3.4 Limitation of RSA
 
The RSA algorithm has several limitations; these are described as follows.
 
  • The security of the RSA algorithm lies in the fact that there is no good way of factoring numbers. No one yet knows an efficient way to factorize a large number into its prime factors. As long as no one finds a way, RSA will be safe and will remain one of the best encryption algorithms in use. If someone comes up with a way to factorize large numbers efficiently, then that's the end of RSA.
  • The computational complexity of the RSA algorithm completely depends upon the key length and the length of the modulus n. We raise the message to the power of the key and take the modulus of that value with the modulus n, so the computational complexity depends upon these two factors. To find the exponentiation, we can repeatedly square the message and multiply intermediate results. For example, to find 5^8, we can first find 5^2 by squaring 5, then find 5^4 by squaring 5^2, and then find 5^8 by squaring 5^4. Hence the complexity of the encryption and decryption depends on how long the key is.
  • To compute the complexity of RSA, we have to look at all the steps involved in the protocol, right from the prime number generation. Let us leave the complexity of generating prime numbers aside for a while, since a deterministic polynomial-time algorithm can be used to find the prime p, and start with computing the complexities of the other steps in RSA.
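The repeated-squaring idea described above can be written as a short routine. This is a generic right-to-left square-and-multiply sketch with names of my own choosing; Python's built-in three-argument `pow` does the same thing.

```python
def square_and_multiply(m, exp, n):
    """Compute m^exp mod n by repeated squaring, as in the 5^8 example:
    square the base at every step, multiply it in when the bit is 1."""
    result = 1
    base = m % n
    while exp:
        if exp & 1:                  # current binary digit of exp is 1
            result = (result * base) % n
        base = (base * base) % n     # square at every step
        exp >>= 1
    return result

assert square_and_multiply(5, 8, 1000) == 625   # 5^8 = 390625, mod 1000
assert square_and_multiply(88, 7, 187) == pow(88, 7, 187)
```

The loop runs once per bit of the exponent, which is why the cost grows with the key length rather than with the exponent's value.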
 
4.3.5 Complexities

The RSA Algorithm can be analyzed in terms of several complexities: time complexity, memory complexity, and communication complexity.

 
  • Encryption Time Complexity: The computational complexity of the RSA algorithm completely depends upon the key length and the length of the modulus n (here n = 200). We raise the message to the power of the key and take the modulus of that value with the modulus n, so the computational complexity depends upon these two factors. To find the exponentiation, we can repeatedly square the message and multiply intermediate results; for example, to find 5^8, we first find 5^2 by squaring 5, then 5^4 by squaring 5^2, and then 5^8 by squaring 5^4. Hence the complexity of the encryption and decryption depends on how long the key is. The minimum time complexity of the RSA algorithm, dominated by computing C = m^e mod n, is O(n) or O(log^3 n), and the maximum time complexity of the RSA algorithm is O(n).
  • Decryption Time Complexity: The minimum time complexity of the RSA algorithm, dominated by computing m = c^d mod n, is O(n), and the maximum time complexity is O(n).
  • Memory Complexity: Maximum memory complexity of the RSA algorithm is O(n). Here n is the key size.
  • Communication Complexity: Maximum communication complexity of the RSA algorithm is O(n).
 
4.4 Concept of Elliptic Curve Algorithm
An elliptic curve E is a set of points (x, y) with coordinates x and y lying in the field F_2^m and satisfying the equation y^2 + xy = x^3 + ax^2 + b, where a and b are constant field elements which will be specified later. For a particular choice of a and b, the points (x, y) form a commutative group under "addition"; the rule for "addition" involves several field operations, including computing a reciprocal. The elliptic curve analogue of the Diffie-Hellman key exchange method uses an elliptic curve E_a,b defined over a field F, and a point P = (x0, y0) which generates the whole addition group of E. E_a,b, F, P, x0 and y0 are all system-wide public parameters. When user A wants to start a conversation, he chooses a secret integer multiplier KA and computes KA·P by iterating addition of P using a "double and add" scheme. User A sends the x and y coordinates of the point KA·P to user B. User B selects his own secret multiplier KB and computes and sends to user A the point KB·P. Each user can then compute the point (KA·KB)·P. Some bits are selected from the coordinates to become the secret session key for their conversation. Insofar as is known, there is no effective method for recovering (KA·KB)·P by eavesdropping on this conversation, other than solving the discrete logarithm problem. The elliptic curve operations require addition, multiplication, squaring, and inversion in the underlying field. The number of applications of each operation depends on the exact details of the implementation; in all implementations the inversion operation is by far the most expensive (by a factor of 5 to 20 over multiplication).
 

4.4.1 Importance of Elliptic Curve Algorithm

Most of the products and standards that use public-key cryptography for encryption and digital signatures use RSA. As we have seen, the key length for secure RSA use has increased over recent years, and this has put a heavier processing load on applications using RSA. This burden has ramifications, especially for electronic commerce sites that conduct large numbers of secure transactions. Recently, a competing system has begun to challenge RSA: elliptic curve cryptography (ECC). Already, ECC is showing up in standardization efforts, including the IEEE P1363 Standard for Public-Key Cryptography.

The principal attraction of ECC, compared to RSA, is that it appears to offer equal security for a far smaller key size, thereby reducing processing overhead. On the other hand, although the theory of ECC has been around for some time, it is only recently that products have begun to appear and that there has been sustained cryptanalytic interest in probing for weaknesses. Accordingly, the confidence level in ECC is not yet as high as that in RSA. ECC is fundamentally more difficult to explain than either RSA or Diffie-Hellman, and a full mathematical description is beyond the scope of this part. This section and the next give some background on elliptic curves and ECC. We first examine the concept of elliptic curves defined over the real numbers. This is followed by a look at elliptic curves defined over finite fields. Finally, we examine elliptic curve ciphers.
  • Elliptic curve arithmetic can be used to develop a variety of elliptic curve cryptography (ECC) schemes, including key exchange, encryption, and digital signature.
  • For purposes of ECC, elliptic curve arithmetic involves the use of an elliptic curve equation defined over a finite field. The coefficients and variables in the equation are elements of a finite field. Schemes using Zp and GF(2m) have been developed.

4.4.2 Operation Steps ECC

An abelian group, sometimes denoted by {G, •}, is a set of elements with a binary operation, denoted by •, that associates to each ordered pair (a, b) of elements in G an element (a • b) in G, such that the following axioms are obeyed [5]. The operator • is generic and can refer to addition, multiplication, or some other mathematical operation.
Closure:              If a and b belong to G, then a • b is also in G.
Associative:        a • (b • c) = (a • b) • c for all a, b, c in G.
Identity element: There is an element e in G such that a • e = e • a = a for all a in G.
Inverse element: For each a in G there is an element a' in G such that a • a' = a' • a = e.
Commutative:     a • b = b • a for all a, b in G.
A number of public-key ciphers are based on the use of an abelian group. For example, Diffie-Hellman key exchange involves multiplying pairs of nonzero integers modulo a prime number q. Keys are generated by exponentiation over the group, with exponentiation defined as repeated multiplication. For example, a^k mod q = (a × a × ... × a) mod q, with k factors. To attack Diffie-Hellman, the attacker must determine k given a and a^k; this is the discrete log problem.

For elliptic curve cryptography, an operation over elliptic curves, called addition, is used. Multiplication is defined by repeated addition. For example, a × k = (a + a + ... + a), with k terms, where the addition is performed over an elliptic curve. Cryptanalysis involves determining k given a and (a × k).

An elliptic curve is defined by an equation in two variables, with coefficients. For cryptography, the variables and coefficients are restricted to elements in a finite field, which results in the definition of a finite abelian group. Before looking at this, we first look at elliptic curves in which the variables and coefficients are real numbers. This case is perhaps easier to visualize.

4.4.3 Elliptic Curve Encryption/Decryption

Several approaches to encryption/decryption using elliptic curves have been analyzed in the literature. We look at perhaps the simplest. The first task in this system is to encode the plaintext message m to be sent as an x-y point Pm. It is the point Pm that will be encrypted as a ciphertext and subsequently decrypted. Note that we cannot simply encode the message as the x or y coordinate of a point, because not all such coordinates are in Eq(a, b). Again, there are several approaches to this encoding, which we will not address here, but suffice it to say that there are relatively straightforward techniques that can be used. As with the key exchange system, an encryption/decryption system requires a point G and an elliptic group Eq(a, b) as parameters. Each user A selects a private key nA and generates a public key PA = nA x G.
To encrypt and send a message Pm to B, A chooses a random positive integer k and produces the ciphertext Cm consisting of the pair of points:
                                                    Cm = {kG, Pm + kPB}
Note that A has used B's public key PB. To decrypt the ciphertext, B multiplies the first point in the pair by B's secret key and subtracts the result from the second point:
                                     Pm + kPB - nB(kG) = Pm + k(nBG) - nB(kG) = Pm
A has masked the message Pm by adding kPB to it. Nobody but A knows the value of k, so even though PB is a public key, nobody can remove the mask kPB. However, A also includes a "clue," which is enough to remove the mask if one knows the private key nB. For an attacker to recover the message, the attacker would have to compute k given G and kG, which is assumed hard.
As an example of the encryption process (taken from the literature), take p = 751; Ep(-1, 188), which is equivalent to the curve y^2 = x^3 - x + 188; and G = (0, 376). Suppose that A wishes to send a message to B that is encoded in the elliptic point Pm = (562, 201) and that A selects the random number k = 386. B's public key is PB = (201, 5). We have 386(0, 376) = (676, 558), and (562, 201) + 386(201, 5) = (385, 328). Thus A sends the ciphertext {(676, 558), (385, 328)}.
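The scheme above can be checked with a short sketch over the same curve E751(-1, 188) with G = (0, 376) and Pm = (562, 201). Since the example does not give B's private key, we pick our own illustrative n_b = 85 and derive P_B from it; the helper names are mine, and Python 3.8+ is assumed for the three-argument `pow` modular inverse.

```python
P_MOD, A = 751, -1               # curve y^2 = x^3 - x + 188 over Z_751
G = (0, 376)                     # base point from the example
# The point at infinity O is represented by None.

def ec_neg(P):
    return None if P is None else (P[0], -P[1] % P_MOD)

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                          # P + (-P) = O
    if P == Q:                               # tangent slope for doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:                                    # chord slope for addition
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mult(k, P):                           # double-and-add scalar multiply
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

n_b = 85                         # B's private key (assumed for this sketch)
P_b = ec_mult(n_b, G)            # B's public key P_B = n_B x G
P_m = (562, 201)                 # encoded message point from the example
k = 386                          # A's random k from the example

C = (ec_mult(k, G), ec_add(P_m, ec_mult(k, P_b)))     # Cm = {kG, Pm + kPB}
recovered = ec_add(C[1], ec_neg(ec_mult(n_b, C[0])))  # Pm + kPB - nB(kG)
assert recovered == P_m          # decryption recovers the message point
```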

4.4.4 Security of Elliptic Curve Cryptography

The security of ECC depends on how difficult it is to determine k given kP and P. This is referred to as the elliptic curve logarithm problem. The fastest known technique for taking the elliptic curve logarithm is the Pollard rho method. It can be seen that a considerably smaller key size can be used for ECC compared to RSA. Furthermore, for equal key lengths, the computational effort required for ECC and RSA is comparable. Thus, there is a computational advantage to using ECC with a shorter key length than a comparably secure RSA.
4.4.5 Application with an Elliptic Curve
 
There are several applications of ECC; we discuss these step by step.
 
4.4.5.1 Adding and Doubling Points
 
The two elliptic curve operations that are most relevant to the complexity of multiplying a group element by a constant are the Add and Double operations. We also include the Negation operation, since it offers a speed increase. The operations are presented here slightly modified from their original presentation. The elliptic curve E_a,b is the set of all solutions (x, y) to the equation y^2 + xy = x^3 + ax^2 + b. a and b are constants from the field F_2^m; b must be non-zero. x and y are also elements of F_2^m. A solution (x, y) is called a point of the curve. An extra point O is also needed to represent the group identity. We use (0, 0) to represent O. (Because b ≠ 0, (0, 0) is never a solution of our equation.) The Add and Double routines make a special check to see if an input is O. The Double routine is not just an optimized special case of the Add routine: the Add formula fails when the two input points have equal x coordinates, since the formula calls for a division by 0. A check is made for this case; if the y coordinates are also equal, the two input points are equal, and the Double routine is called. If the y coordinates differ, the two points are inverses in the elliptic curve group, so O is returned.
 
4.4.5.2 Rule for Adding two points
 
 Let (x1, y1) ∈ E(F_2^m) and (x2, y2) ∈ E(F_2^m) be two points.
  • If either point is O, return the other point as the sum.
  • If x1 = x2 and y1 = y2, use the Doubling rule.
  • If x1 = x2 but y1 ≠ y2, return O as the sum.
  • If x1 ≠ x2, then (x1, y1) + (x2, y2) = (x3, y3), where
x3 = λ^2 + λ + x1 + x2 + a,   y3 = λ(x1 + x3) + x3 + y1   and   λ = (y1 + y2)/(x1 + x2)
 
4.4.5.3 Rule for doubling a point
 
Let (x1, y1) ∈ E(F_2^m) be a point.
  • If x1 = 0, then 2(x1, y1) = O
  • If x1 ≠ 0, then 2(x1, y1) = (x2, y2), where
x2 = λ^2 + λ + a,   y2 = x1^2 + (λ + 1)x2   and   λ = x1 + (y1/x1)
An alternative presentation gives the formula for x2 as x2 = x1^2 + b/x1^2. Our formula is equivalent, but saves the multiplication by the constant b.
 
 
 
4.4.5.4 Rule for negating a point
 
 Let (x1, y1) ∈ E(F_2^m) be a point. Then -1(x1, y1) = (x1, x1 + y1).
From these formulas, we can determine the number of field operations required for each kind of elliptic curve operation. An Addition step usually requires eight additions, two multiplications, one squaring, three reductions mod T(u), and one inversion. A Doubling step usually requires four additions, two multiplications, two squarings, four reductions mod T(u), and one inversion. A Negation step requires one addition. The important contributors to the run time are the multiplications and inversions.
 
4.4.6 Choosing different option
 
Choosing the Curves: The constant a in the elliptic curve can be chosen to simplify the operations of doubling a point and of adding two points. Here we use a = 0, which eliminates one addition from the formula for the x coordinate in both operations. The size of the group depends on the choice of a and b, and the group order can be computed with the improvements of Atkin, Elkies, Morain, and Couveignes [9] for determining the group order. The order is always close to the number of field elements. For maximum security, the order should have as large a prime factor as possible. In our setting, with a = 0, the best possible order is 4p (with p a prime near 2^153). (If we select a ≠ 0, the order can have the form 2p with p near 2^154, giving a small amount of extra security.) Lay and Zimmer give a method for creating a curve with a given order, but we are reluctant to use their scheme, since it produces curves closely related to rational curves with an extra structural property called complex multiplication. This extra structure might be insecure. We recommend trying small values of b, computing the curve order and checking whether the order is of the form 4p. For curves based on F_2^155, a few hundred tries may be necessary. One scheme for selecting the bs is to try small values for (x, y) and to compute b from the equation y^2 + xy = x^3 + b. The best known methods for computing elliptic curve discrete logarithms take time proportional to the square root of the largest prime factor of the group order. In our case, the largest prime factor will be about 2^153, so finding discrete logarithms will take about 2^76 operations.
 
Choosing a Multiplier: The number of additions and doublings necessary for computing nP (where P is a point on the curve and n is an integer) is an important factor in the speed of the DH algorithm. We have implemented the straightforward double-and-add approach based on the binary expansion of n. For a random 155-bit multiplier n, computing nP will require 154 doubling steps and an average of 77 addition steps. (The number of addition steps depends on the number of 1 bits in the binary representation of n.) Although the number of doubling steps is roughly fixed, it is possible to reduce considerably the number of addition steps needed. This problem is studied in a large literature on addition chains. We mention a few points:
 
  • Menezes discusses the idea of using a low Hamming weight multiplier: if one is content to select from among a restricted set of multipliers, while allowing multipliers somewhat larger than otherwise, then one can select a multiplier of low Hamming weight.
  • The operation to negate a point is cheap, so we can use addition-subtraction chains instead of simple addition chains. The extra flexibility permits shorter chains.
  • If one multiplier is used for many connections, it pays to invest some effort in finding a good addition-subtraction chain for it. The speedup available from using a good addition-subtraction chain with a random multiplier and a novel point is 20-30%.
  • Since we are free to select a random multiplier, one approach is to select a random addition-subtraction chain instead. This allows more selection freedom at each chain step, so fewer addition or subtraction steps are needed to generate a comparably large set of possible multipliers. Some care must be taken in the selection algorithm, to avoid having some multipliers be overly likely. (This can happen because many steps in the addition-subtraction chains commute.)
  • When the starting point P is known ahead of time, we can prepare tables which give a considerable speedup. This is the situation for the first two of the four point multiplications in Diffie-Hellman key exchange.
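One concrete addition-subtraction-chain technique consistent with the bullets above is recoding the multiplier into non-adjacent form (NAF). This sketch illustrates the general idea (it is not necessarily the exact scheme referenced in the text): a double-and-add-or-subtract loop then needs one point addition or subtraction per nonzero signed digit.

```python
def naf(n):
    """Recode n into non-adjacent form: signed digits in {-1, 0, 1},
    least significant first, with no two adjacent nonzero digits."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n % 4)    # choose +1 or -1 so the next digit is 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

d = naf(255)
assert sum(di * (1 << i) for i, di in enumerate(d)) == 255  # value preserved
assert sum(1 for di in d if di) == 2   # 2 nonzero digits vs. 8 ones in binary
```

On average a random multiplier's NAF has about one third nonzero digits, versus one half for plain binary, which is where the saving in addition steps comes from.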
 
4.4.7 Limitation of ECC
 
ECC has several limitations; these are as follows:
  • The discrete logarithm problem is hard for elliptic curves, because the index calculus attack, which is so effective in ordinary modular arithmetic, does not work.
  • Furthermore, using ECC, with or without precomputed values, to perform key exchange is not that much faster than RSA. So the only real advantage of using ECC to perform key exchange is key and transmission size.
  • Another disadvantage of ECC is certificates. Public-key crypto does not really work without digital certificates, and digital certificates don't really work without certificate authorities. It is hard to find ECC digital certificates, so even if you want to use ECC, you might not be able to get a certificate.
  • People say that ECC is much faster than RSA, but actually ECC is significantly faster than RSA only when used with precomputed values. That is, you can store an ECC key in a small space, but if you want the performance advantage, you also have to store tables of precomputed values. These tables can be as large as 20,000 bytes. If you don't have 20,000 bytes of storage space lying around (say, on a smart card), you may not be able to use the precomputed tables, and then ECC is not that much faster than RSA. With ECC you can sign fast or save storage space, but you can't do both. Of course, saving storage space and transmission size may be reason enough for ECC.
 
4.4.8 Complexities
 
The ECC Algorithm can be analyzed in terms of several complexities: time complexity, memory complexity, and communication complexity.
  • Encryption Time Complexity: The computational complexity of the ECC algorithm completely depends upon the key length and the length of the modulus n. The minimum time complexity of the ECC algorithm, dominated by computing PA = nA x G, is O(n) or O(log^3 n), and the maximum time complexity is O(n).
  • Decryption Time Complexity: The minimum time complexity of the ECC algorithm, dominated by computing Cm = {kG, Pm + kPB}, is O(n), and the maximum time complexity is O(n).
  • Memory Complexity: Maximum memory complexity of the ECC algorithm is O(n). Here n is the key size.
  • Communication Complexity: Maximum communication complexity of the ECC algorithm is O(n).

4.5 History of Digital Signature

The National Institute of Standards and Technology (NIST) has published Federal Information Processing Standard FIPS 186, known as the Digital Signature Standard (DSS). The DSS makes use of the Secure Hash Algorithm (SHA) described here and presents a new digital signature technique, the Digital Signature Algorithm (DSA). The DSS was originally proposed in 1991 and revised in 1993 in response to public feedback concerning the security of the scheme. There was a further minor revision in 1996. In 2000, an expanded version of the standard was issued as FIPS 186-2. This latest version also incorporates digital signature algorithms based on RSA and on elliptic curve cryptography. In this section, we discuss the original DSS algorithm.

4.5.1 Concept of Digital Signature algorithm

The DSS uses an algorithm that is designed to provide only the digital signature function. Unlike RSA, it cannot be used for encryption or key exchange. Nevertheless, it is a public-key technique. The DSS approach for generating digital signatures can be contrasted with that used with RSA. In the RSA approach, the message to be signed is input to a hash function that produces a secure hash code of fixed length. This hash code is then encrypted using the sender's private key to form the signature. Both the message and the signature are then transmitted. The recipient takes the message and produces a hash code. The recipient also decrypts the signature using the sender's public key. If the calculated hash code matches the decrypted signature, the signature is accepted as valid. Because only the sender knows the private key, only the sender could have produced a valid signature.

The DSS approach also makes use of a hash function. The hash code is provided as input to a signature function along with a random number k generated for this particular signature. The signature function also depends on the sender's private key (PRa) and a set of parameters known to a group of communicating principals. We can consider this set to constitute a global public key (PUG). The result is a signature consisting of two components, labeled s and r. It is also possible to allow these additional parameters to vary with each user so that they are a part of a user's public key. In practice, it is more likely that a global public key will be used that is separate from each user's public key.

At the receiving end, the hash code of the incoming message is generated. This plus the signature is input to a verification function. The verification function also depends on the global public key as well as the sender's public key (PUa), which is paired with the sender's private key. The output of the verification function is a value that is equal to the signature component r if the signature is valid. The signature function is such that only the sender, with knowledge of the private key, could have produced the valid signature.

4.5.2 The Digital Signature Standard (DSS)

The DSS is based on the difficulty of computing discrete logarithms and is based on schemes originally presented by ElGamal and Schnorr. There are three parameters that are public and can be common to a group of users. A 160-bit prime number q is chosen. Next, a prime number p is selected with a length between 512 and 1024 bits such that q divides (p - 1). Finally, g is chosen to be of the form h^((p-1)/q) mod p, where h is an integer between 1 and (p - 1) with the restriction that g must be greater than 1. In number-theoretic terms, g is of order q mod p.
Global Public-Key Components:
  • p, prime number where 2^(L-1) < p < 2^L for 512 ≤ L ≤ 1024 and L a multiple of 64; i.e., bit length of between 512 and 1024 bits in increments of 64 bits
  • q, prime divisor of (p - 1), where 2^159 < q < 2^160; i.e., bit length of 160 bits
  • g = h^((p-1)/q) mod p, where h is any integer with 1 < h < (p - 1) such that h^((p-1)/q) mod p > 1
User's Private Key:
                              x= random or pseudorandom integer with 0 < x < q
 
User's Public Key:
                                y = g^x mod p
 
User's Per-Message Secret Number:
                               k= random or pseudorandom integer with 0 < k < q
 
Signing:                     
                                    r = (g^k mod p) mod q
                              s = [k^(-1) (H(M) + xr)] mod q
                                   Signature = (r, s)
 
Verifying:
                               w = (s')^(-1) mod q
                               u1 = [H(M')w] mod q
                               u2 = (r')w mod q
                               v = [(g^u1 y^u2) mod p] mod q
TEST: v = r'
                               M= message to be signed
                               H(M)= hash of M using SHA-1
                               M', r', s'= received versions of M, r, s
With these numbers in hand, each user selects a private key and generates a public key. The private key x must be a number from 1 to (q 1) and should be chosen randomly or pseudo randomly. The public key is calculated from the private key as y = gx mod p. The calculation of y given x is relatively straightforward. However, given the public key y, it is believed to be computationally infeasible to determine x, which is the discrete logarithm of y to the base g, mod p.To create a signature, a user calculates two quantities, r and s, that are functions of the public key components (p, q, g), the user's private key (x), the hash code of the message, H(M), and an additional integer k that should be generated randomly or pseudo randomly and be unique for each signing. At the receiving end, verification is performed using the formulas. The receiver generates a quantity v that is a function of the public key components, the sender's public key, and the hash code of the incoming message. If this quantity matches the r component of the signature, then the signature is validated. The structure of the algorithm is quite interesting. Note that the test at the end is on the value r, which does not depend on the message at all. Instead, r is a function of k and the three global public-key components. The multiplicative inverse of k (mod q) is passed to a function that also has as inputs the message hash code and the user's private key. The structure of this function is such that the receiver can recover r using the incoming message and signature, the public key of the user, and the global public key. Another point worth nothing is that the only computationally demanding task in signature generation is the exponential calculation gk mod p. Because this value does not depend on the message to be signed, it can be computed ahead of time. Indeed, a user could precalculate a number of values of r to be used to sign documents as needed. 
The only other somewhat demanding task is the determination of a multiplicative inverse, k^(−1) mod q. Again, a number of these values can be precalculated. The digital signature is the most important development from the work on public-key cryptography: it provides a set of security capabilities that would be difficult to implement in any other way.
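The signing and verification flow described above can be sketched with toy parameters (the modulus, the small prime sizes, and the helper names here are illustrative assumptions, nowhere near DSS-compliant key sizes):

```python
# Toy DSA-style signing/verification with tiny, insecure parameters
# (real DSS uses a 2048-bit p and a 224- or 256-bit q).
import hashlib
import random

# Global public components: prime q divides p - 1; g has order q mod p.
p, q = 10091, 1009                 # 10090 = 2 * 5 * 1009
g = pow(2, (p - 1) // q, p)        # g = h^((p-1)/q) mod p, with h = 2

def H(msg: bytes) -> int:
    """Hash code of the message, reduced mod q."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)     # private key x, 1 <= x <= q - 1
    return x, pow(g, x, p)         # public key y = g^x mod p

def sign(msg: bytes, x: int):
    while True:
        k = random.randrange(1, q)              # fresh per-signature secret
        r = pow(g, k, p) % q                    # r depends only on k, not on M
        s = pow(k, -1, q) * (H(msg) + x * r) % q
        if r and s:
            return r, s

def verify(msg: bytes, r: int, s: int, y: int) -> bool:
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)                           # w = s^-1 mod q
    u1, u2 = H(msg) * w % q, r * w % q
    v = pow(g, u1, p) * pow(y, u2, p) % p % q   # recover r from M', r', s'
    return v == r                               # signature valid iff v == r

x, y = keygen()
r, s = sign(b"transfer 100", x)
print(verify(b"transfer 100", r, s, y))         # True
print(verify(b"transfer 999", r, s, y))         # altered message: rejected
```

Because r = (g^k mod p) mod q depends only on k, the expensive exponentiation can indeed be precomputed before the message exists, as noted above.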

4.5.3 Requirements of DSS

Message authentication protects two parties who exchange messages from any third party. However, it does not protect the two parties against each other, and several forms of dispute between them are possible. Using a shared secret-key scheme, consider the following disputes that could arise:
  • Mina may forge a different message and claim that it came from Nina. Mina would simply have to create a message and append an authentication code using the key that Nina and Mina share.
  • Nina can deny sending the message. Because it is possible for Mina to forge a message, there is no way to prove that Nina did in fact send it.
An example of the first dispute is an electronic funds transfer in which the receiver increases the amount of funds transferred and claims that the larger amount arrived from the sender. An example of the second is an electronic mail message containing instructions to a stockbroker for a transaction that subsequently turns out badly, after which the sender pretends that the message was never sent.
In situations where there is not complete trust between sender and receiver, something more than authentication is needed. The most attractive solution to this problem is the digital signature. The digital signature is analogous to the handwritten signature. It must have the following properties:
  • It must verify the author and the date and time of the signature.
  • It must authenticate the contents at the time of the signature.
  • It must be verifiable by third parties, to resolve disputes.
Thus, the digital signature function includes the authentication function. On the basis of these properties, we can formulate the following requirements for a digital signature:
  • The signature must be a bit pattern that depends on the message being signed.
  • The signature must use some information unique to the sender, to prevent both forgery and denial.
  • It must be relatively easy to produce the digital signature.
  • It must be relatively easy to recognize and verify the digital signature.
  • It must be computationally infeasible to forge a digital signature, either by constructing a new message for an existing digital signature or by constructing a fraudulent digital signature for a given message.
  • It must be practical to retain a copy of the digital signature in storage.
A secure hash function, embedded in a scheme such as DSS, satisfies these requirements. A variety of approaches have been proposed for the digital signature function. These approaches fall into two categories: direct and arbitrated.

4.5.4 Complexities

The DSS algorithm involves several kinds of complexity, such as time complexity, memory complexity and communication complexity.

 
  • Encryption time complexity: The computational complexity of the DSS algorithm depends entirely on the key length and the length of the modulus n. The dominant operation is the modular exponentiation g^x mod p, giving a minimum time complexity of O(n) modular multiplications, i.e. O(log³ n) bit operations.
  • Decryption time complexity: The minimum time complexity of the DSS algorithm is that of g^x mod p, i.e. O(n), and the maximum is that of (g^k mod p) mod q, also O(n).
  • Memory complexity: The maximum memory complexity of the DSS algorithm is O(n), where n is the key size.
  • Communication complexity: The maximum communication complexity of the DSS algorithm is O(n).
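Since the dominant cost above is the modular exponentiation g^x mod p, a minimal square-and-multiply sketch (a standard technique, not code from this chapter) shows why it takes O(n) modular multiplications for an n-bit exponent:

```python
def mod_exp(g: int, x: int, p: int) -> int:
    """Square-and-multiply: computes g^x mod p with O(log x) modular
    multiplications, i.e. O(n) multiplications for an n-bit exponent."""
    result, base = 1, g % p
    while x:
        if x & 1:                      # multiply when the current bit is set
            result = result * base % p
        base = base * base % p         # square once per exponent bit
        x >>= 1
    return result

print(mod_exp(64, 101, 607) == pow(64, 101, 607))   # True
```

Each loop iteration consumes one bit of the exponent, so the number of iterations is exactly the bit length of x.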
 
4.6 Comparisons of Complexities for Public-Key Cryptosystems
 
Table 4.1 illustrates the complexities of public-key cryptosystems for different algorithms such as RSA, Elliptic Curve, Diffie-Hellman, DSS and the Bit Re-allocation Algorithm.
 
 
Table 4.1 Complexities for Public-Key Cryptosystems
Algorithm      | Regular Encryption | Minimum Encryption | Maximum Encryption | Authorized Decryption | Unauthorized Decryption
RSA            | O(n)               | O(n)               | O(n)               | O(n)                  | O(n)
Elliptic Curve | O(n)               | O(n)               | O(n)               | O(n)                  | O(n)
Diffie-Hellman | O(n)               | O(n)               | O(n)               | O(n)                  | O(n)
DSS            | O(n)               | O(n)               | O(n)               | O(n)                  | O(n)
BRA            | O(n)               | O(n)               | O(n)               | O(n)                  | O(2^n)
4.7 Application techniques for Public-Key Cryptosystems
 Table 4.2 illustrates the application techniques for Public-Key Cryptosystems.
 
Table 4.2 Applications for Public-Key Cryptosystems
Algorithm      | Encryption/Decryption | Digital Signature | Key Exchange | Randomizing | Significant always
RSA            | Yes                   | Yes               | Yes          | Yes         | No
Elliptic Curve | Yes                   | Yes               | Yes          | No          | No
Diffie-Hellman | No                    | No                | Yes          | No          | No
DSS            | No                    | Yes               | No           | No          | No
BRA            | Yes                   | Yes               | Yes          | Yes         | Yes
 
4.8 Summary
 
Every algorithm has some limitations, but for these algorithms the limitations, like the complexities, are acceptable for application. Many algorithms have been proposed for security, but very few are acceptable for implementation in the communication field, because a practical algorithm must combine low encryption complexity, high decryption complexity for unauthorized parties, and strong security.
 
CHAPTER FIVE
 
PROPOSED ALGORITHM
 
5.1 Introduction
 
The proposed algorithm will also benefit WSN providers because it aims to maximize one-stop security and the simplicity of the voice communication system while minimizing complexity and requirements. In other words, it can serve multiple security functions concurrently without decreasing the quality of service, in an avant-garde hybrid design.
 
5.2 Development of the Bit Re-allocation Algorithm Approach
 
Figure 5.1 illustrates the development of the bit re-allocation algorithm approach. The constructions of the approach are discussed step by step.
 
5.2.1 Construction 1
 
The algorithm proposed in this study is a bit re-allocation algorithm for WSN. The approach uses and develops features based on FIFO Queuing, Priority Queuing (PQ), Custom Queuing (CQ) and Round Robin (RR) scheduling in a WSN system. It can improve the fragmentation and re-allocation service for future WSN users because it achieves acceptable co-services, QoS guarantees and fairness for all information services.
 
5.2.2 Construction 2
 
The literature review and the performance comparisons among ECC, RSA, DSS, DH and the proposed BRA show that these algorithms can satisfy the QoS requirements of higher-security services but ignore the QoS of lower-security services. ECC is significantly faster than the others only when used with precomputed values; it can provide efficient guarantees for allocated information in different services but fails to consider unformed or over-formal information or miscomputed data utilization. The proposed algorithm (BRA) overcomes this problem with a blind (binary bits) technique while retaining the fast precomputation mechanism of ECC.
 
5.2.3 Construction 3
 
RSA has a well-developed certificate infrastructure, and currently RSA is winning in industry. Key size, transmission size and signature performance concern makers of small devices, but they often find that RSA is fast and small enough: it is not the fastest signer or the smallest key, but it works, though only with prime numbers. The security of the RSA algorithm lies in the fact that there is no efficient way of factoring large numbers; no one yet knows how to factorize such a number into its prime factors quickly, and as long as that remains true RSA will be safe and one of the best encryption algorithms in use. A future efficient factoring algorithm would mean the end of RSA. The proposed algorithm (BRA) overcomes this problem with a bit re-allocation technique while retaining the transmission-key mechanism of RSA.
 
5.2.4 Construction 4
 
The most important development from the work on public-key cryptography is the digital signature, which provides a set of security capabilities that would be difficult to implement in any other way; many authentication protocols depend on it. Message authentication protects two parties who exchange messages from any third party, but it does not protect the two parties against each other. DSS is used for message protection and authentication, not for voice security, and its requirements are high while its communication QoS is low. The proposed algorithm (BRA) overcomes this problem with a parallel bit processing technique while retaining the unique authentication mechanism of DSS.
 
     
 
 
 
[Figure body: DSS feature -> Authentications; DH feature -> Keys Exchange; Queuing feature -> Fragmentation; Scheduling feature -> Allocation; combined through the Fragmentation, Randomizing, and Control bit & Queue concepts.]
Figure: 5.1 The development of Bit Re-allocation Algorithm Approach
 
5.2.5 Construction 5
DH is a popular algorithm and has been in use for a long time. Its purpose is to enable two users to securely exchange a key that can then be used for subsequent encryption of messages; the algorithm itself is limited to the exchange of secret values. The Diffie-Hellman algorithm (DH) depends for its effectiveness on the difficulty of computing discrete logarithms. The proposed algorithm (BRA) overcomes this limitation with a multiple and adaptive re-allocation technique while retaining the key-exchange mechanism of DH.
5.3 Control unit
 
The control unit contains the control value, control link, control counter and control effect of the algorithm, together with the unauthorized authentication complexity; each is described step by step.
 
5.3.1 Control Value
 
This is the most important part of the proposed algorithm: all counters take their values from it. The random unit takes the value of n for the (n × n) matrix, the counter value and the number of active bit position queues from the control value. The fragmentation unit takes the counter value and the number of active queues from the control value, and all schedulers take their counter values from the control value. There are two kinds: the default control value and the custom control value. In the proposed algorithm, 4 is used as the control value, as shown in Figure 5.2.
 

 
 
 
 
 
 

Figure: 5.2 Control value
 

  • Default control value:
      The default control value is set by the designer or programmer and may be 0, 1, 2, 3, …, n.
  • Custom control value:
      The custom control value is set by the administrator or user and may be 0, 1, 2, 3, …, n.
 
5.3.2 Control link
 
There are many links in the full algorithm between the control value and the different counters and schedulers. Each arrow from the control value to another component of the algorithm is called a control link, as shown in Figure 5.2. The linked components are the random bit position generator, the counters such as the random 2n bit position counter, the control bit position segmentation counter and the data bit fragmentation counter, and the schedulers such as the data bit queue scheduler.
 
5.3.3 Control counter
 
A counter that takes a counter value equal to the control value is called a control counter. The random 2n bit position counter, the control bit position segmentation counter and the data bit fragmentation counter all take counter values equal to the control value.
5.3.4 Control effect on the algorithm
 
The random bit position generator depends on the control value, whether custom or default, which fixes how many number elements it will generate. All counters, especially the random 2n bit position counter, the control bit position segmentation counter and the data bit fragmentation counter, the schedulers such as the data bit queue scheduler and the header & secure key scheduler, and the queues such as the secure data buildup queue stack and the control bit FIFO queue, all depend on the control value.
 
5.3.5 Unauthorized authentication complexity
 
The random bit position generator generates a matrix of decimal number elements of size (n × n). Here (n × n) is (4 × 4), which corresponds to a (256 × 256), i.e. 65536 × 65536, matrix with 65536 columns and 65536 rows. Every row of the matrix contains 65536 number elements, so the 65536 rows together contain 4294967296 number elements, and no element repeats the same digits. More than four billion number elements is a large volume, generated in a moment by the random bit position generator. The random bit position scanner then scans only 2^n elements from these 4294967296 numbers, and every number element is an indicator of the bit positions of the control queue. For example, 2565 is a number element in which 5 indicates the bit position of the zero fragmented queue (FQ0), 6 the bit position of the first fragmented queue (FQ1), 5 the bit position of the second fragmented queue (FQ2) and 2 the bit position of the third fragmented queue (FQ3). The access complexity for unauthorized authentication is therefore very high. When n = 4, an unauthorized party faces 65519456536 candidate decryption options, of which only one applies; evaluating so many options in the time the random unit takes to produce them is impossible for any user.
 
5.4 Random unit
 
The random unit contains the random bit position generator, the random 2n bit position scanner and the control bit position segmentation; each is described step by step.
 
5.4.1 Random bit position generator
 
The random bit position generator generates a matrix of decimal numbers, and every decimal number is called a number element here. It may be an electronic circuit or a small program that can randomize a matrix of (n × n), as shown in Figure 5.3. Here (n × n) is (4 × 4), which corresponds to a (256 × 256), i.e. 65536 × 65536, matrix with 65536 columns and 65536 rows. Every row of the matrix contains 65536 number elements, so the 65536 rows together contain 4294967296 number elements, and no element repeats the same digits. This large volume of more than four billion number elements is generated in a moment by the random bit position generator, and every number element is an indicator of the bit positions of the control queue.
 
 
 
 
 
 
 

Figure: 5.3 Random bit position generator
 
5.4.2 Random 2n Bit Position scanner
 
The random bit position scanner scans decimal numbers, which are called number elements. Here the matrix is (4 × 4), i.e. a (256 × 256) or 65536 × 65536 matrix with 4294967296 number elements, none of which repeats the same digits. From this large volume of number elements, generated in a moment by the random bit position generator, only 2^n = 2^4 = 16 elements are received randomly by the random 2n bit position scanner, and every number element is an indicator of the bit positions of the control queue, as shown in Figure 5.4. For example, 2565 is a number element in which 5 indicates the bit position of the zero fragmented queue (FQ0), 6 the bit position of the first fragmented queue (FQ1), 5 the bit position of the second fragmented queue (FQ2) and 2 the bit position of the third fragmented queue (FQ3).
 
 

 
 
[Figure body: a stream of scanned number elements, e.g. 2310, 2134, 0012, …, 3212, 4567.]
 
 
 

Figure: 5.4 Random 2n Bit Position scanner
 
5.4.3 Control Bit position Segmentation
 
The random bit position scanner receives only 2^n = 2^4 = 16 elements at random, and every number element is an indicator of the bit positions of the control queue. For example, in the number element 2134, 4 indicates the bit position of the zero fragmented queue (FQ0), 3 the bit position of the first fragmented queue (FQ1), 1 the bit position of the second fragmented queue (FQ2) and 2 the bit position of the third fragmented queue (FQ3). Setting the indicators of the fragmented queues requires segmenting 2134 and storing the digits in n queues (here 4 queues). The element scheduler schedules 2134 in FIFO order to store it as the indicator of the bit position queues: 4 is stored in the zero bit position queue, 3 in the first bit position queue, 1 in the second bit position queue and 2 in the third bit position queue, as shown in Figure 5.5.
 
 
Figure: 5.5 Control Bits position Fragmentation
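The digit-to-queue mapping described above can be sketched as follows (the function name is hypothetical; it simply reverses the digits of a number element so that the rightmost digit indexes FQ0, as in the 2134 example):

```python
def segment_positions(element: str, n: int = 4) -> list[int]:
    """Split one number element into per-queue control bit positions.
    The rightmost digit gives the position for FQ0, the next for FQ1,
    and so on, matching the 2134 example in the text."""
    assert len(element) == n, "element must have one digit per queue"
    return [int(d) for d in reversed(element)]

print(segment_positions("2134"))   # [4, 3, 1, 2] for FQ0..FQ3
print(segment_positions("2565"))   # [5, 6, 5, 2] for FQ0..FQ3
```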
 
5.5 Data bit unit
 
The data bit unit contains the data bit scanner, the fragmentation, the queue & bit scheduler and the header & secure key scheduler.
 
5.5.1 Data bit scanner
 
It is an endless queue that can read binary data bits only, as shown in Figure 5.6. It reads the data bits of the encapsulated packet one by one from the encapsulation channel and sends the header bits to the header & secure key scheduler.
 

 
 
[Figure body: the incoming bit stream 1101111010110111 entering the scanner.]
 

 
 
Figure: 5.6 Data bit scanner
 
 
5.5.2 Fragmentation
 
Fragmentation, also called re-segmentation here, is the process of chopping larger chunks of data into smaller chunks, as shown in Figure 5.7. Fragmentation requires data bit scheduling, and the fragmented data bits are stored in the fragmentation queues stack.
 
5.5.2.1 Data bits scheduler
 
The data bits scheduler is a minimal scheduler that distributes the data bits according to the control value. If the control value is n, it stores the data into n queues sequentially, in FIFO order. The queues from the first to the second-to-last contain the average number of bits, while the last queue contains all remaining bits, which is never more than twice the average and never fewer than one bit; as a result, the last queue may overflow or underflow. With 4 queues (q0, q1, q2, q3) and 16 data bits, the average is 16/4 = 4 bits, so every queue contains 4 data bits, as shown in Figure 5.7.
5.5.2.2 Fragmentation data queues stack
 
The fragmentation data queues stack segments all data bits in FIFO order according to the control value. It contains n queues, of which only the required queues become active: if the control value is 4, then only 4 queues of the stack are active. The queues from the first to the second-to-last contain the average number of bits, while the last queue contains all remaining bits, which is never more than twice the average and never fewer than one bit, so the last queue may overflow or underflow. Packet bits / control value = average bits per queue. With 4 queues (q0, q1, q2, q3) and 16 data bits, the average is 16/4 = 4 bits, so every queue contains 4 data bits, as shown in Figure 5.7. The last data bit queue falls into one of three conditions:
 

  • All queues contain an equal number of data bits, called the average.
  • The last queue contains fewer bits than the other queues, but not fewer than one bit; this is called underflow.
  • The last queue contains more bits than the other queues, but not more than twice as many; this is called overflow.
 
 
 
 
 
 
 
 
 

[Figure body: the encapsulated data receiver fragments the stream into queues holding 1011, 1110 and 1101.]
 
 
 
 
 
 

Figure: 5.7 Data bits Fragmentation
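The average/overflow/underflow rule above can be sketched as follows (the function name is hypothetical; it follows the 16-bit example with control value 4):

```python
def fragment(bits: str, n: int) -> list[str]:
    """Chop a packet's bit string into n fragment queues: the first n-1
    queues each hold the average (len // n) bits; the last queue holds
    whatever remains (the overflow/underflow case described above)."""
    avg = len(bits) // n
    queues = [bits[i * avg:(i + 1) * avg] for i in range(n - 1)]
    queues.append(bits[(n - 1) * avg:])   # rest of the bits, may over/underflow
    return queues

print(fragment("1101111010110111", 4))   # ['1101', '1110', '1011', '0111']
print(fragment("110111101", 4))          # last queue overflows: ['11', '01', '11', '101']
```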
 
5.5.3 Queue & Bit Scheduler
 
The queue and bit scheduler reads two queues in parallel and simultaneously, one from the fragmentation queues stack and another from the bit position queues stack, scheduling the queues according to the control value. It schedules the data bits from the first to the last bit of one queue of the fragmentation queues stack, schedules the control bit according to the bit position value from each queue of the fragmentation queues stack, and completes the scheduling process queue by queue; when all queues are completed, it continues with the next round of the same process, as shown in Figure 5.8.
 

 
 
 
 
 

         

                                                                               

 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Figure: 5.8 Queues & Bits Scheduler
 
5.5.4 Header & secure key scheduler
 
The header & secure key scheduler reads the header of the encapsulated packets and the bit position values in parallel and simultaneously, and creates and schedules a secure key (SK) built from the destination, the source and the control bits' positions, as shown in Figure 5.9. The SK imposes a very high access complexity on unauthorized users.
 

Figure: 5.9 Header & Secure key Scheduler
 
Figure 5.10 illustrates the surroundings of the header & secure key scheduler: the bit position data queue, the secure data buildup queue and the fragmentation data queues stack.
 

       
   
 
 
 
 
 
 
 
 

 

[Figure body: the secure key SK is assembled from header and bit position values; sample queue contents 0000 and 111 feed the surrounding stacks, with the data bits flowing below.]
 
 
Figure: 5.10 Surroundings of Header & Secure key Scheduler
 
5.6 Secure data Buildup queue Stack
 
The secure data buildup queue stack contains one control bit queue, called Cq, the header queues H1, H2 (or more), and the other data bit queues (n data bit queues in total, here n = 4). All queues use a FIFO mechanism, and the secure data buildup queue stack also uses a parallel round robin mechanism. The queue and bit scheduler reads two queues in parallel and simultaneously, one from the fragmentation queues stack and another from the bit position queues stack, scheduling the queues according to the control value. It schedules the control bit according to the bit position value and stores it in Cq in the secure data buildup queue stack. It schedules the data bits from the first to the last bit of one queue of the fragmentation queues stack, completes the scheduling process queue by queue, and stores the bits in the FIFO queues of the secure data buildup queue stack until every queue of the fragmentation queues stack is done. The header & secure key scheduler reads the header of the encapsulated packets and the bit position values in parallel and simultaneously, creates a secure key value and stores it in the H1 and H2 queues of the secure data buildup queue stack, as shown in Figure 5.11.
 
 

 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Figure: 5.11 Secure data buildup queue stack
 
5.7 Control bit FIFO Queue
 
The control queue (Cq) contains only control bits. Every bit of Cq is read according to the control value and the bit position elements. If the control value is n, then n bits are stored in Cq (here n = 4). Every bit of Cq is collected from a queue of the fragmentation data queues stack according to that queue's control bit position. This is the main mechanism of the algorithm: the bit re-allocation algorithm depends entirely on Cq and its mechanism, and all other parts of the proposed algorithm work together to fulfill the requirements of Cq. Cq is shown in Figure 5.12.

 
 
 
 
 
[Figure body: a FIFO queue holding the four control bits 0 0 0 0.]
 
 
 
 
 
 

Figure: 5.12 Control bit FIFO queue
 
5.8 Queue Scheduler
 
The queue scheduler is a parallel scheduler that schedules all FIFO queues of the secure data buildup queue stack toward the cipher data queue, as shown in Figure 5.13.
 
 
 
 Figure: 5.13 Queue Scheduler

5.9 Cipher data queue

 
Reassembly is the reverse of segmentation: protocol data units are put back together in the correct order to reassemble a stream of data as a data bit packet. The cipher data queue receives re-allocated data bits from the queue scheduler, builds up the cipher data packet and then transmits it toward the transmission channel of the proposed algorithm, as shown in Figure 5.14.
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Figure: 5.14 Cipher data queue
 
5.10 Proposed Bits Re-allocation algorithm
 
The Bit Re-allocation algorithm is proposed for voice security with encryption and decryption. The flowchart of the proposed algorithm, the algorithm itself, the architecture of the proposed algorithm and its working procedure are discussed step by step.
 
5.10.1 Flow chart of the algorithm
 
The flow chart of the algorithm is shown in Figure 5.15.

 
 
 
 
 
 

[Flowchart body: nodes include Data?, Scan data bits, Fragment data queues and Bit positions (Bp) of Cq, with the decision tests C = n?, C = 2? and C = Qb? and their yes/no branches controlling the loops.]
Figure: 5.15 Flowchart of the proposed algorithm
 
5.10.2 Proposed Bits Re-allocation algorithm
 
Step1: Randomize the bit positions ║ Read the Data bits (every 1/8 second)
 
Step2: Fragmentation of Data bits ║ Segmentation bit positions
 
          {
 
Step3: Count=i++ until i=n   (here n=4 and n is the number of FQ )
       
              {
 
Step4:       Count=j++ until j=q[m]      (here m is bits number of a queue)
 
                           {
 
Step5:                    Count=p++ until p == q[t];
 
(here q[t] is the control bit position queue and  t is control bit position.)
 
Step6:                    Store  bit in control Queue(Cq)
    
                              Otherwise
 
Step7:                    Store  bit in data Queue
 
                            }
 
 Step8:         Goto  step4
 
                  }
 
Step9:  Goto  step3
 
           }
 
Step10:  Go to step1
 
 
5.10.3 Architectural design of Proposed Bit Re-allocation (BR) Algorithm
 
The BR algorithm is shown in Figure 5.16. All units, schedulers and queues are included here to build up the total procedural design of the Bit Re-allocation algorithm. An oval is the symbol for a scheduler and a rectangle is the symbol for a queue; unidirectional or bidirectional arrows show the direction of data flow or operations. The control value here is 4. Several small ovals are used for counters.
 
 
           
 

[Figure body: design of the proposed BR algorithm, including the fragmentator (oval callout) and the 2^4 bit position scanner (rectangular callout).]
Figure: 5.16 Design of Proposed BR Algorithm
5.10.4 Working procedure of Proposed Bit Re-allocation (BR) Algorithm
 
Step 1: Randomize the bit positions and read the data bits in parallel, simultaneously.
Step 2: Fragment the data bits and segment the bit positions in parallel, simultaneously.
Step 3: One scheduler schedules the data bits according to a counter i until the counter equals n, the control value.
Step 4: One scheduler schedules the data bits according to a counter j until the counter equals q(m), the queue's bit number.
Step 5: One scheduler schedules the data bits according to a counter p until the counter equals q(t), the control bit position queue, where t is the control bit position.
Step 6: Store the bit in the control queue (Cq) or store the bit in a data queue.
Step 7: Store the bit in the cipher data queue.
Step 8: Transmit the data bits into the tunnel.
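One possible reading of Steps 1-8 can be sketched as a round-trip pair (these helpers are illustrative assumptions, not the author's reference implementation; the header and secure-key handling is omitted):

```python
import random

def bra_encrypt(bits: str, n: int = 4):
    """A minimal sketch of one BRA round, as read from Steps 1-7:
    fragment the bits, pick a random control bit position per fragment,
    and pull one control bit per fragment queue into Cq."""
    avg = len(bits) // n
    fqs = [bits[i * avg:(i + 1) * avg] for i in range(n - 1)]
    fqs.append(bits[(n - 1) * avg:])                  # Steps 1-2: fragment
    t = [random.randrange(len(q)) for q in fqs]       # random bit positions
    cq = "".join(q[p] for q, p in zip(fqs, t))        # Steps 3-6: build Cq
    dq = [q[:p] + q[p + 1:] for q, p in zip(fqs, t)]  # remaining data queues
    return cq, dq, t

def bra_decrypt(cq: str, dq: list, t: list) -> str:
    """Reverse: re-insert each control bit at its recorded position."""
    return "".join(q[:p] + c + q[p:] for c, q, p in zip(cq, dq, t))

bits = "1101111010110111"
cq, dq, t = bra_encrypt(bits)
print(bra_decrypt(cq, dq, t) == bits)   # True: lossless round trip
```

An eavesdropper who captures cq and dq but not the bit positions t must guess one position per queue, which is where the scheme's claimed unauthorized-access complexity comes from.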
    
5.10.5 Proof of Proposed Algorithm for key exchange
 
Equation of encryption:
                                  Y_U = C^X_U mod q
 
Equation of decryption:
                                  Y_V = C^X_V mod q
For this algorithm, there are two publicly known numbers: a prime number q and an integer C that is a primitive root of q. Suppose the sensor U and the data storage or base station V wish to exchange a key. U selects a random integer X_U < q and computes Y_U = C^X_U mod q. Similarly, V independently selects a random integer X_V < q and computes Y_V = C^X_V mod q. Each side keeps its X value private and makes its Y value publicly available to the other side. User U computes the key as K = (Y_V)^X_U mod q and user V computes the key as K = (Y_U)^X_V mod q. These two calculations produce identical results:

K = (Y_V)^X_U mod q
  = (C^X_V mod q)^X_U mod q
  = (C^X_V)^X_U mod q        [by the rules of modular arithmetic]
  = C^(X_V X_U) mod q
  = (C^X_U)^X_V mod q
  = (C^X_U mod q)^X_V mod q
  = (Y_U)^X_V mod q
 
The result is that the two sides have exchanged a secret value. Furthermore, because X_U and X_V are private, an adversary has only the following ingredients to work with: q, C, Y_U and Y_V.
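The exchange above can be checked with a small sketch in the same notation (the prime q and base C below are toy assumptions; the equality K_U = K_V holds by the derivation just shown):

```python
import random

# Toy Diffie-Hellman in the thesis notation: public prime q and base C.
q, C = 10007, 5        # assumption: toy-sized prime, far too small for real use

XU = random.randrange(1, q - 1)   # sensor U's private value
XV = random.randrange(1, q - 1)   # base station V's private value
YU = pow(C, XU, q)                # U publishes Y_U = C^X_U mod q
YV = pow(C, XV, q)                # V publishes Y_V = C^X_V mod q

K_U = pow(YV, XU, q)              # U computes K = (Y_V)^X_U mod q
K_V = pow(YU, XV, q)              # V computes K = (Y_U)^X_V mod q
print(K_U == K_V)                 # True: both sides derive the same secret
```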
5.11 Simulation model of Wireless Sensor Network (WSN)
 
Based on the IEEE 802.15.4 standard, the WSN system was set up as illustrated in Figure 5.17. The simulation model is developed with the OMNeT++, GNED and PLOVE simulation modeling tools; OMNeT++ can be described as an object-oriented, modular, discrete event simulation system. It is a network-simulating tool that allows component-based modeling of communication networks.
 

 
Figure: 5.17 Simulation model of Wireless Sensor Network (WSN)
 
Several wireless sensors, one data storage node, one micro-power generator, two monitors, two high-gain receivers, one hacker and one attacker are included to build up the wireless sensor network (WSN) simulation model in OMNeT++. The GNED and PLOVE modeling tools of OMNeT++ help design and develop the simulation model of the WSN and create an experimental field for secure voice communication.
 
 
5.11.1 A slice of source’s Voice in simulation experiment
 
Figure 5.18 illustrates the voice signal of the source. It is a pre-stored voice slice of the source. The X-axis contains the phases of the voice signal for ten bits in digital form; the Y-axis contains the amplitude of the voice signal in digital form.
 

Figure: 5.18 Voice Signal of Source
 
The X-axis of the source voice signal runs from 0 to 10 and is divided into 5 portions: 0 to 2, 2 to 4, 4 to 6, 6 to 8 and 8 to 10. Every portion of the signal contains the phases of two binary bits, so 10 bits make up the total voice signal; the total signal from 0 to 10 contains 10 phases for 10 bits of voice. The Y-axis contains the amplitude of every phase of the voice signal, from 0 to 18 units in height.
 
5.11.2 A slice of Encrypt Voice signal in simulation experiment
 
Figure 5.19 illustrates the encrypted version of the source's voice signal. It is a slice of the encrypted signal of the pre-stored source voice signal. The X-axis contains the phases of the encrypted voice signal for ten bits in digital form; the Y-axis contains the amplitude of the encrypted voice signal in digital form.
 

Figure: 5.19 Encrypt Voice signal
 
The X-axis of the encrypted voice signal runs from 0 to 10 and is divided into 5 portions: 0 to 2, 2 to 4, 4 to 6, 6 to 8 and 8 to 10. Every portion of the signal contains the phases of two binary bits, so 10 bits make up the total voice signal; the total signal from 0 to 10 contains 10 phases for 10 bits of voice. The Y-axis contains the amplitude of every phase of the voice signal, from 0 to 14 units in height.
 
5.11.3 A slice of Receiver Voice signal in simulation experiment
 
Figure 5.20 illustrates the received voice signal. It overlaps the pre-stored source voice signal as received by the receiver. The X-axis contains the phases of the received voice signal for ten bits in digital form; the Y-axis contains the amplitude of the received voice signal in digital form.
 
 
 

 
Figure: 5.20 Received Voice signal
 
The X-axis of the received voice signal runs from 0 to 10 and is divided into 5 portions: 0 to 2, 2 to 4, 4 to 6, 6 to 8 and 8 to 10. Every portion of the signal contains the phases of two binary bits, so 10 bits make up the total voice signal; the total signal from 0 to 10 contains 10 phases for 10 bits of voice. The Y-axis contains the amplitude of every phase of the voice signal, from 0 to 18 units in height, like the source voice signal.
 
5.11.4 Comparison among the different Voice signals in simulation
 
Figure 5.21 illustrates the comparison of the source, encrypted and received voice signals in the simulation system. The source and received signals overlap here. The X-axis contains the phases of the voice signals for ten bits in digital form; the Y-axis contains the amplitude of the voice signals in digital form.
 

Figure 5.21: Comparison of the source, encrypted and received voice signals
 
The X-axis of the compared voice signals runs from 0 to 10 and is divided into five portions: 0 to 2, 2 to 4, 4 to 6, 6 to 8 and 8 to 10. Each portion contains the phases of two binary bits, so ten bits make up the entire voice signal; the full span from 0 to 10 therefore contains ten phases for ten bits of voice. The Y-axis shows the amplitude of each phase, ranging from 0 to 18 units. Three signals are included here. The source and received voice signals overlap, which is why the orange signal is not visible; the green signal is the encrypted voice signal.

5.11.5 Comparison of the source, encrypted and received voice signals in simulation
 
Figure 5.22 illustrates the comparison of the source and received voice signals in the simulation system. The source and received signals overlap, while the encrypted voice signal differs from both. The X-axis shows the phases of the different voice signals for ten bits in digital form; the Y-axis shows their amplitudes in digital form.
 
 
 

 
 
Figure 5.22: Comparison of the source and received voice signals
 
The X-axis of the source and received voice signals runs from 0 to 10 and is divided into five portions: 0 to 2, 2 to 4, 4 to 6, 6 to 8 and 8 to 10. Each portion contains the phases of two binary bits, so ten bits make up the entire voice signal; the full span from 0 to 10 therefore contains ten phases for ten bits of voice. The Y-axis shows the amplitude of each phase, ranging from 0 to 18 units. Two signals are included here. The source and received voice signals overlap, which is why the orange signal (the source signal) is not visible.
 
5.12 Comparison of complexities among the algorithms

 
 
Figure 5.23: Encryption complexities
 
Figure 5.23 shows the encryption complexities of the different algorithms at different key sizes.
 

Figure 5.24: Decryption complexities
 
Figure 5.24 shows the decryption complexities of the different algorithms at different key sizes.
 
5.13 Summary
 
Based on the accuracy of the simulation-based experiment and the simplicity of its design and development, the Bit Re-allocation Algorithm is shown to be a workable encryption algorithm for secure voice communication. Its complexities are acceptable, and its operational and developmental simplicity makes it a candidate for a one-stop security algorithm. Considering everything, it may become a powerful, applicable algorithm in unattended and hostile environments because of the high complexity it imposes on unauthorized access. Other algorithms increase their encryption and decryption complexities as their key sizes grow, whereas the Bit Re-allocation Algorithm can increase its unauthorized-decryption complexity with key size without increasing its encryption complexity; its encryption complexity therefore remains a horizontal line parallel to the X-axis.
 
CONCLUSION
 
6.1 Conclusion
 
The conclusion is an important part of a dissertation, as it brings the work to a close. This chapter presents the concluding points step by step.
 
6.2 Contributions of the thesis
 
Contributions are the most important part of a research project; without effective contributions, the arguments and assumptions of a study carry little weight. The essential contributions of this research address several of those assumptions, and the most important ones are presented step by step below.
 
6.2.1 Contribution 1
 
The proposed algorithm will also benefit WSN providers because it aims to maximize one-stop security and the simplicity of the voice communication system while minimizing complexity and requirements. In other words, its avant-garde hybrid design can serve multiple security functions concurrently without decreasing the quality of service.
 
6.2.2 Contribution 2
 
The algorithm proposed in this study is a Bit Re-allocation algorithm for WSNs. The approach uses and develops features based on Priority Queuing (PQ), Custom Queuing (CQ) and the Round Robin (RR) algorithm in a WSN system. It can improve the fragmentation and re-allocation service for future WSN users because it achieves acceptable co-services, QoS guarantees and fairness for all information services.
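As a rough illustration of the Round Robin discipline mentioned above, the sketch below serves one packet from each non-empty queue per cycle. The queue names and packet labels are hypothetical; PQ and CQ differ mainly in how the service order and per-turn quanta are chosen.

```python
from collections import deque

# Hedged sketch: round-robin (RR) service across per-class queues,
# as used alongside Priority Queuing and Custom Queuing in the text.
# The queue contents below are illustrative values only.
queues = {
    "voice": deque(["v1", "v2", "v3"]),
    "data":  deque(["d1", "d2"]),
}

def round_robin(queues):
    """Serve one packet from each non-empty queue per cycle
    until every queue is drained; return the service order."""
    served = []
    while any(queues.values()):
        for name, q in queues.items():
            if q:
                served.append(q.popleft())
    return served

order = round_robin(queues)  # each traffic class gets a turn per cycle
```

With the queues above, voice and data packets are interleaved cycle by cycle, which is what gives RR its fairness across information services.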
 
6.2.3 Contribution 3
 
The literature review and the performance comparisons among ECC, RSA, DSS, DH and the proposed BRA show that the existing algorithms can satisfy the QoS requirements of higher-security services but ignore the QoS of lower-security services. ECC is significantly faster than the others only when used with precomputed values; it can provide efficient guarantees for allocated information in different services but fails to consider unformed or over-formal information and miscomputed data utilization. The proposed algorithm (BRA) overcomes this problem with a blind (binary-bit) technique while retaining the significantly faster mechanism of ECC.
 
6.2.4 Contribution 4
 
RSA has a well-developed certificate infrastructure, and it currently dominates the industry. Key size, transmission size and signature performance concern makers of small devices, but they often find that RSA is fast and small enough. It is not the fastest signer, nor does it have the smallest keys, but it works well with prime numbers. The security of the RSA algorithm rests on the fact that there is no efficient way of factoring large numbers into their prime factors. As long as no such method is found, RSA will remain safe and one of the best encryption algorithms in use; if an efficient factoring method is discovered, that is the end of RSA. The proposed algorithm (BRA) addresses this limitation with its bit re-allocation technique while retaining the key-transmission mechanism of RSA.
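The factoring argument can be made concrete with a toy RSA example. The primes below are deliberately tiny so that n could be factored by hand; real RSA moduli use primes of a thousand or more bits, which is what makes factoring infeasible.

```python
# Hedged toy RSA sketch illustrating the factoring argument above.
# The values are classic textbook demonstration numbers, not secure ones.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent via modular inverse (Python 3.8+)

m = 65                    # a small message
c = pow(m, e, n)          # encrypt: c = m^e mod n
assert pow(c, d, n) == m  # decrypt: m = c^d mod n recovers the message
```

Anyone who can factor n = 3233 into 61 × 53 can recompute phi and hence d, which is exactly why RSA's security stands or falls with the hardness of factoring.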
 
6.2.5 Contribution 5
 
The most important development from the work on public-key cryptography is the digital signature, which provides a set of security capabilities that would be difficult to implement in any other way; many authentication protocols depend on it. Message authentication protects two parties who exchange messages from any third party, but it does not protect the two parties against each other. DSS is used for message protection and authentication, not for voice security; its requirements are high while its communication QoS is low. The proposed algorithm (BRA) overcomes this problem with a parallel bit-processing technique while retaining the unique authentication mechanism of DSS.
 
6.2.6 Contribution 6
DH is a popular key-exchange algorithm that has been in use for a long time. Its purpose is to enable two users to securely exchange a key that can then be used for subsequent encryption of messages; the algorithm itself is limited to the exchange of secret values. The effectiveness of the Diffie-Hellman algorithm (DH) depends on the difficulty of computing discrete logarithms. The proposed algorithm (BRA) builds on this with a multiple and adaptive re-allocation technique while retaining the exchange mechanism of DH.
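The exchange of secret values can be sketched with toy Diffie-Hellman parameters. The modulus and generator below are the standard textbook demonstration values; real deployments use primes of 2048 bits or more, where computing discrete logarithms is infeasible.

```python
# Hedged Diffie-Hellman sketch with toy parameters (illustrative only).
p, g = 23, 5             # public prime modulus and generator

a, b = 6, 15             # private values chosen by the two users
A = pow(g, a, p)         # user 1 transmits A = g^a mod p
B = pow(g, b, p)         # user 2 transmits B = g^b mod p

# Each side computes the shared secret from the other's public value.
s1 = pow(B, a, p)        # user 1: B^a mod p
s2 = pow(A, b, p)        # user 2: A^b mod p
assert s1 == s2          # both derive the same key
```

An eavesdropper sees p, g, A and B but must solve a discrete logarithm to recover a or b, which is the hardness assumption the exchange relies on.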

6.2.7 Contribution 7
The different complexities of the Bit Re-allocation algorithm are acceptable: its time complexity is less than O(n^3); its memory, hardware and communication overheads are near zero (about 0.005%, 0.00% and 0.05% respectively); and its decryption complexity is O(n) for each 1/8 second of voice.

6.3 Prospective
 
The Bit Re-allocation algorithm should quickly gain popularity for its strong voice security, acceptable access complexity, operational simplicity and, above all, the high complexity it imposes on unauthorized access.
 
6.4 Limitations of this Study 
 
It is known that QoS improvement and system resource utilization require consideration of all sub-parts of the network, which is difficult. This study focuses on the encryption algorithm presented above and delimits itself from the following issues:
 

  • WSN in this report refers to the IEEE 802.15-2004 standard. The IEEE 802.15 family includes several standards for both fixed and mobile wireless access; only the IEEE 802.15-2004 standard will be adopted and evaluated.
 
  • IEEE 802.15-2004 supports many multimedia applications. Only voice and data services will be applied and evaluated in an integrated voice-and-data WSN network. In this context, voice traffic refers to voice over IP (VoIP) traffic, and data traffic refers to data-based services such as e-mail, web access and file transfer.
 
  • The WSN architecture has mesh and Point-to-Multipoint (PMP) modes; PMP will be adopted.
 
  • A number of traffic queuing and scheduling algorithms have been proposed and studied. Only the Round Robin (RR) algorithm, Priority Queuing (PQ) and Custom Queuing (CQ) will be used.
 
  • The performance evaluation among ECC, RSA, DSS, DH and the proposed BRA covers only encryption and related issues.
 
  • The performance of BRA is evaluated in simulation on a slice of digital voice, with comparisons carried out in OMNeT++.
6.5 Further study scopes
There are several related options of the proposed algorithm for further study.
  • Application of bit re-allocation algorithm in sequential voice communication
  • Application of bit re-allocation algorithm in secure video communication
  • Study of efficiency of bit re-allocation algorithm
  • Study of significance of bit re-allocation algorithm
  • Study of QoS activities of bit re-allocation algorithm
References
 
 
1. Encyclopedia online.
    http://en.wikipedia.org
 
2. Codan analog system
    http://en.wikipedia.org/wiki/Codan
 
3. Federal Information Processing Standards Publication 197,November 26, 2001
    Announcing the ADVANCED ENCRYPTION STANDARD (AES).
    http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf
 

4. Mixed Excitation Linear Prediction

    http://en.wikipedia.org/wiki/Mixed_Excitation_Linear_Prediction

 

5. Multi-purpose Military Secure base, mobile or manpack Transceiver MMR-78
    http://www.nabishi.com/mmr-78-military_mobile_manpack_tacticle_secure_radio.htm
 
6. Secure Press to Talk (PTT) System for instant communication
    http://www.speechsecurity.co.uk/Secure_PTT_system.htm
 
7. The cellular voice Security.
     http://www.spyworld.com/equipment/listings/l0004.html
8. VOIP/Voice Security in VOIP system/Voice over internet protocol system.. 
    http://en.wikipedia.org/wiki/VoIP_VPN.
 
9. Voice Security in GSM system
    http://hubpages.com/hub/Secure-GSM-Phone-Secure-GSM
10. Voice Security in Thuraya system.
      http://www.thuraya.com
11. John Paul Walters, Zhengqiang Liang, Weisong Shi, and Vipin Chaudhary;
     “Wireless Sensor Network Security: A Survey”
      Department of Computer Science;Wayne State University
      E-mail: {jwalters, sean, weisong, vipin}@wayne.edu
 
12. Emerson.White Paper, CISCO. Integrating an Industrial Wireless Sensor Network with ;
      Your Plant’s Switched Ethernet and IP Network.
 
13. voice sensor information
      http://www.sunrom.com/sensors/voice-detecting-sensor-with-delay
 
 
14. WSN device
      http://www.google.com/search?hl=en&rlz=1B2GGFB_enBD383BD384&q
     wireless+sensor+networl+device&aq=f&aqi=g-l4g-lm1&aql=&oq=&gs_rfai
 
15. Karthik Karuppaiya :Project Report,CSE 450/598 “Design and Analysis of Algorithms,RSA”
      Public Key Cryptography Algorithm(Project Id: P12),
      Computer Science Department,Arizona State University,
      Karthik.k@asu.edu , www.rsasecurity.com , ww.cse.iitk.ac.in/jprimality..pdf
      www.cse.iitk.ac.in/news/primality.html
 
16. William Stallings. “Cryptography and Network Security: Principles and Practices”,
      Fourth Edition, Prentice Hall, November 16, 2005. Print ISBN-10: 0-13-187316-4,
      eText ISBN-13: 978-0-13-187319-3.
 
17. Jian Lou, Shan Liu, Anthony Vetro, and Ming-Ting Sun. “Complexity and Memory Efficient
      GOP Structures Supporting VCR Functionalities in H.264/AVC”, TR2008-041, May 2008.
      http://www.merl.com/papers/docs/TR2008-041.pdf
 
18. WebArticles MEDIA NETWORK, “The Round Robin Scheduling Algorithm”,
      http://www.osdcom.info/content/view/29/39/
 
19. OpenBSD, “PF: Packet Queueing and Prioritization”,
      http://www.openbsd.org/faq/pf/queueing.html
 
20. “Cisco IOS Quality of Service Solutions Configuration Guide – Congestion Management
      Overview”  http://hsnlab.tmit.bme.hu/~vidacs/education/traffic_management/qcfconmg.pdf
 
21.Kitti Wongthavarawat, Aura Ganz, “Packet Scheduling for QoS Support in IEEE 802.16
     Broadband Wireless Access Systems”, International Journal of Communication Systems,
     Vol. 16, 2003, pp. 81 – 96.
 
22. JING QIU, “An adaptive Dynamic bandwidth allocation algorithm in
      WiMAX System..” ,2007;
      libserv5.tut.ac.za:7780/pls/eres/…/Qiu.pdf;
23. Peter J. Welcher, “QoS (Quality of Service) Features”,
      http://www.netcraftsmen.net/welcher/papers/qos1.html
 
24. Huawei Technologies Co., Ltd., “Huawei VRP QoS white paper”,
      http://network.51cto.com/art/200606/27613.htm,
 
25. Tutorial-Reports.com, “VoIP Tutorial”,     
      http://www.tutorialreports.com/internet/telephony/voip/technology.php?
 
 
26. Kushalnagar, N., Montenegro, G., and C. Schumacher,
      "IPv6 over Low-Power Wireless Personal Area Networks (6LoWPANs): Overview,   
      Assumptions, Problem Statement, and Goals", RFC 4919, August 2007.    
      http://tools.ietf.org/html/rfc4919. 
 
27. Jason Lester Hill. “System Architecture for Wireless Sensor Networks”.
      A dissertation submitted in partial satisfaction of the requirements for the degree of
      Doctor of Philosophy in Computer Science, University of California, Berkeley.
 
28. John Paul Walters, Zhengqiang Liang, Weisong Shi, and Vipin Chaudhary;
      “Wireless Sensor Network : A Survey”
      Department of Computer Science; State University
      E-mail: {jwalters, sean, weisong, vipin}@wayne.edu
 
29. Özlem Durmaz Incel. “Multi-Channel Wireless Sensor Networks: Protocols, Design
      and Evaluation”; ISBN 978-90-365-2812-2; DOI 10.3990/1.9789036528122;
      Copyright © 2009 Özlem Durmaz Incel, The Netherlands. Publicly defended on
      Friday, March 20, 2009.
 
30. Gutierrez, J.A., Callaway, E. Barrett, R., “Low Rate Wireless Personal Area Networks:
      Enabling Wireless Sensors with IEEE 802.15.4”, IEEE Press, 2nd. Edition, 2007.
 
31. Cisco Catalyst 6500: Virtual Switching System 1440.
      http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps9336/product_at_a_glance
      0900aecd806ee2d4.pdf.
 
32. Nicky van Foreest. “Simulating Queueing Networks with OMNeT++”, April 3, 2002.
      OMNeT++ versions 1.5, January 2001 and May 2001.
 
33. D.E. Knuth. The art of computer programming, volume 2,Seminumerical algorithms. 3
      edition, 1997.
 
34. S.M. Ross. Introduction to Probability Models. Academic Press,5th edition, 1993.
35. Chris Bryant, CCIE # 12933, “Flow-Based Weighted Fair, Priority, and Custom Queuing
      A Comparison of FBWFQ, PQ, and CQ”,
      http://www.thebryantadvantage.com/ComparingWFQPQCQ.htm
 
36. OMNeT++ Discrete Simulation Event System,
      http://www.omnetpp.org,