Computer restarts after loading the Windows Logo screen


Posted by mady | Posted in | Posted on 5:50 AM

Has this ever happened to you? You try to start your computer. Everything seems fine until it gets to the Windows logo screen, and then it restarts again. If you let it go, it could keep doing this forever.
There are two main causes for this problem. These are not the only possible answers, but they are the most common ones I have found.
1. Faulty random access memory (RAM)
Having faulty memory can cause a number of symptoms, including a continual restart of Windows. I have tested this theory on a number of computers, including Acer and Dell machines. The computers were working fine until I installed the faulty memory. As soon as the bad memory was removed and working memory put back in, the computer was fine again. Faulty memory can also cause your computer to show a completely black screen or even the blue screen of death. Also see Computer has a black screen and will not boot or start up.

If you have two sticks of memory in your computer, remove one and then start your computer. If this doesn't work, swap them over and try the other one. If the computer still restarts after loading the Windows logo screen, try the next option.
2. The hard drive is dead or the boot sector is destroyed
The computer will not boot or start if there is no hard drive to boot from. I repaired a computer with exactly this problem. It loaded the Windows logo screen and then kept restarting. I first thought it was a memory problem; however, it turned out to be the hard drive. I removed the hard drive and placed it in an external case, and still could not access it. I fitted a new hard drive to the computer, put a Windows XP disc in the CD drive, and away it went, formatting and installing Windows XP. The odd thing is that you always presume the computer will come up with an error message saying that there is no disk present, but this doesn't always happen.

So next time your computer loads the Windows logo screen and then restarts, be sure to check for these two common problems before investigating further.

How to share file between Windows XP and a Vista computer


Posted by mady | Posted in | Posted on 5:49 AM

On Vista the default workgroup is MSHOME, while on XP it is WORKGROUP, so make sure both systems are in the same workgroup before attempting to share files between XP and Vista.

First thing to do:
In XP, go to Control Panel > System. Then, on the “Computer Name” tab, click the Change button and edit the workgroup to be the same as Vista’s (MSHOME).


On the Vista machine:
1. First go to the Network and Sharing Center and make sure you have file sharing turned on. Having your network set to Private also helps.



2. After that, go to the folder you want to share and right-click it to bring up a menu > choose Properties > then the Sharing tab. Click Advanced Sharing.
3. Tick the Share this folder check box, and then click Permissions.
4. The Permissions window should open.
5. Click Add to add a user. Then type guest in the textbox that says Enter the object names to select.
6. Click Check Names, which should find Vista’s guest account and prefix your PC’s name in this format: YourComputerName\guest.
7. Click OK to go back to the Permissions window.
8. You should now have the Guest account under Group or user names.
9. Highlight the guest account and give it the relevant permissions (Full Control, Change or Read).
10. Click OK and you’re done!

That is how simple file sharing is between Windows XP computers, and now you can have it working with Vista too!

Networking commands


Posted by mady | Posted in | Posted on 5:47 AM

Here are some networking commands that I have found useful. They will help newcomers quickly understand networking basics.

1. IPCONFIG
2. PING
3. NSLOOKUP
4. TRACERT
5. ARP
6. NETSTAT
7. NBTSTAT
8. ROUTE
9. WHOIS database
10. TELNET
11. Installing IPv6

1.IPCONFIG
The IPCONFIG command is used to display the IP configuration of the system.
By default, IPCONFIG displays the IP address, subnet mask and default gateway for each
adapter bound to TCP/IP. It helps in detecting bad IP addresses, incorrect subnet masks,
and improper broadcast addresses.
EXAMPLE:
c:>ipconfig
Windows IP Configuration
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . :
IP Address. . . . . . . . . . . . : 192.168.6.124
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.6.254
Two common commands:
1.To display full configuration information
c:>ipconfig /all
Windows IP Configuration
Host Name . . . . . . . . . . . . : LABVIEW
Primary Dns Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Unknown
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No

Ethernet Adapter Local Area Connection :
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Etherne
Physical Address. . . . . . . . . : 00-16-17-20-1D-9F
Dhcp Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 192.168.6.124
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.6.254
DNS Servers . . . . . . . . . . . : 192.168.1.250


2.To release and renew (get) an address from the DHCP server
c:>ipconfig /release

c:>ipconfig /renew

2.PING
The ping command indicates whether a remote host can be reached.
It also displays information about packet loss and packet delivery time.
If the host is reachable, it sends back a reply message ("Reply from" that specific IP address); otherwise you get "Destination host unreachable".
EXAMPLE:
C:>ping 192.168.1.250
Reply from 192.168.1.250: bytes=32 time<1ms TTL=63
Reply from 192.168.1.250: bytes=32 time<1ms TTL=63
Reply from 192.168.1.250: bytes=32 time<1ms TTL=63
Reply from 192.168.1.250: bytes=32 time<1ms TTL=63

Ping statistics for 192.168.1.250:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
Normal Use: send four packets
Usage: ping target_host
EXAMPLE:
c:>ping 192.168.1.250

Send ping packets count times
Usage: ping -n count target_host
Example:
c:>ping -n 10 192.168.1.250

Send ping packets with a specific buffer size (byte)
Usage: ping -l size target_host
Example:
C:> ping -l 100 192.168.5.254

3.TRACERT
The TRACERT command prints information about each routing hop that
packets take going from your system to a remote system.
EXAMPLE:
c:>tracert
Usage: tracert [-d] [-h maximum_hops] [-j host-list] [-w timeout] target_name
Options:
-d Do not resolve addresses to hostnames.
-h maximum_hops Maximum number of hops to search for target.
-j host-list Loose source route along host-list.
-w timeout Wait timeout milliseconds for each reply.








Real Time Streaming Protocol (RTSP)


Posted by mady | Posted in | Posted on 6:07 AM

Abstract:
Traffic on the Internet is becoming more and more involved with multimedia applications. To cope with such bandwidth-demanding data, specific protocols have been created. The most difficult area is real-time (or near-real-time) streams, which enable things such as video phone calls and remote group work in a much more efficient way.
The Real Time Streaming Protocol, or RTSP, is an application-level client-server protocol for control over the delivery of data with real-time properties.
RTSP provides an extensible framework to enable controlled, on-demand delivery of real-time data, such as audio and video. Sources of data can include both live data feeds and stored clips.
This protocol is intended to control multiple data delivery sessions, provide a means for choosing delivery channels such as UDP, multicast UDP and TCP, and provide a means for choosing delivery mechanisms based upon RTP.
The protocol defines the connection between a streaming media client and a streaming media server, and also provides a standard way for clients and servers from multiple vendors to stream multimedia content.
It takes advantage of Internet and intranet infrastructure such as IP multicast, RTCP and RTP, providing a means for choosing delivery mechanisms.
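To make the control flow concrete, here is a hypothetical exchange in the style of RFC 2326 between a client (C) and a server (S); the URL, port numbers and session ID are invented for illustration:

```
C->S: DESCRIBE rtsp://example.com/movie RTSP/1.0
      CSeq: 1
S->C: RTSP/1.0 200 OK
      CSeq: 1
      Content-Type: application/sdp
      (SDP description of the audio/video streams)

C->S: SETUP rtsp://example.com/movie/video RTSP/1.0
      CSeq: 2
      Transport: RTP/AVP;unicast;client_port=8000-8001
S->C: RTSP/1.0 200 OK
      CSeq: 2
      Session: 12345678

C->S: PLAY rtsp://example.com/movie RTSP/1.0
      CSeq: 3
      Session: 12345678
S->C: RTSP/1.0 200 OK
      CSeq: 3

C->S: TEARDOWN rtsp://example.com/movie RTSP/1.0
      CSeq: 4
      Session: 12345678
S->C: RTSP/1.0 200 OK
      CSeq: 4
```

Note that only the control messages travel over RTSP; the media itself flows separately, typically over RTP.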

RESOURCE SHARING ON INTERNET


Posted by mady | Posted in | Posted on 6:05 AM

CHAPTER 1
______________________________________________________
1.1 Introduction
It is commonly observed that the continued exponential growth in the capacity of fundamental computing resources (processing power, communication bandwidth, and storage) is working a revolution in the capabilities and practices of the research community. It has become increasingly evident that the most revolutionary applications of this superabundance use resource sharing to enable new possibilities for collaboration and mutual benefit. Over the past 30 years, two basic models of resource sharing with different design goals have emerged. The differences between these two approaches, which are distinguished as the Computer Center and the Internet models, tend to generate divergent opportunity spaces, and it therefore becomes important to explore the alternative choices they present as we plan for and develop an information infrastructure for the scientific community in the next decade.
Interoperability and scalability are necessary design goals for distributed systems based on resource sharing, but the two models we consider differ in the positions they take along the continuum between total control and complete openness. The difference affects the tradeoffs they tend to make in fulfilling their other design goals. The Computer Center model, which came to maturity with the NSF Supercomputer Centers of the 80s and 90s, was developed in order to allow scarce and extremely valuable resources to be shared by a select community in an environment where security and accountability are major concerns. The form of sharing it implements is necessarily highly controlled – authentication and access control are its characteristic design issues. In the last few years this approach has given rise to a resource sharing paradigm known as information technology “Grids.” Grids are designed to flexibly aggregate various types of highly distributed resources into unified platforms on which a wide range of “virtual organizations” can build. By contrast, the Internet paradigm, which was developed over the same 30 year period, seeks to share network bandwidth for the purpose of universal communication among an international community of indefinite size. It uses lightweight allocation of network links via packet routing in a public infrastructure to create a system that is designed to be open and easy to use, both in the sense of giving easy access to a basic set of network services and of allowing easy addition of privately provisioned resources to the public infrastructure. While admission and accounting policies are difficult to implement in this model, the power of the universality and generality of the resource sharing it implements is undeniable.
Though experience with the Internet suggests that the transformative power of information technology is at its highest when the ease and openness of resource sharing is at its greatest, the Computer Center model is experiencing a rebirth in the Grid, while the Internet paradigm has yet to be applied to any resource other than communication bandwidth. But the Internet model can be applied to other kinds of resources, and with the current Internet and the Web as a foundation, such an application can lead to similarly powerful results. The storage technology developed to test this hypothesis and explore its implications is called the Internet Backplane Protocol (IBP). IBP is our primary tool in the study of logistical networking, a field motivated by viewing data transmission and storage within a unified framework.

1.2 What Is IBP?
The proliferation of applications that are performance-limited by network speeds leads us to explore new ways to exploit data locality in distributed settings. Currently, standard networking protocols (such as TCP/IP) support end-to-end resource control. An application can specify the endpoints associated with a communication stream, and possibly the buffering strategy to use at each endpoint, but it has little control over how the communicated data is managed while it is traversing the network. In particular, it is not possible for the application to influence where and when data may be stored (other than at the endpoints) so that it can be accessed efficiently.
To support domain and application-specific optimization of data movement, the Internet Backplane Protocol (IBP) has been developed for managing storage within the network fabric itself. IBP allows an application to implement inter-process communication in terms of intermediate data staging operations so that locality can be exploited and scarce buffer resources managed more effectively.
In the case of the Internet, a large distributed system providing a communication service to systems at its periphery, such considerations of robustness and scalability led its designers to choose a stateless model, which is the networking equivalent of the functional model of computation. Routers perform stateless data transfer operations and control is calculated by routing algorithms which require only that the routers behave correctly, not that they maintain shared state. Indeed, the end-to-end model of IP networking has served the Internet well, allowing it to scale far beyond its original design while maintaining stable levels of performance and reliability.
It is important to note that the designers of large-scale information systems often follow a shared memory model because the functional model puts undesirable limitations on performance and control. It is difficult to express the management of stored data in functional terms, and resource management is the key to high performance. Moreover, people think naturally in terms of processes, not just computations, and persistence is the key to defining a process. Increasingly the design of Internet-based information systems is moving toward shared-memory approaches that support the management of distributed data, rather than just access. A rapidly growing global industry is being built up around the caching and replication of content on the World Wide Web and massive scientific datasets are being distributed via networked systems of large storage resources.
But the management of shared state in such systems is in no way a part of the underlying network model. All such management is consequently pushed up to the application level and must be implemented using application-specific mechanisms that are rarely interoperable. The result has been a balkanization of state management capabilities that prevents applications that need to manage distributed state from benefiting from the kind of standardization, interoperability, and scalability that have made the Internet into such a powerful communication tool. The Internet Backplane Protocol (IBP) has been defined in order to provide a uniform interface to state management that is better integrated with the Internet.
Because IBP is a compromise between two conflicting design models, it does not fit comfortably into either of the usual categories of mechanisms for state management: IBP can be viewed as a mechanism to manage either communication buffers or remote files, and both characterizations are equally valid and useful in different situations. If, in order to allow our terminology to be as neutral as possible, we simply refer to the units of data that IBP manages as byte arrays, then these different views of IBP can be presented as follows:


IBP as buffer management:
In communication between nodes on the Internet, which is built upon the basic operation of delivering packets from sender to receiver, each packet is buffered at each intermediate node. In other words, the communication makes use of storage in the network. Because the capacity of even large storage systems is tiny compared with the amount of data that flows through the Internet, allocation of communication buffers must be time limited. In current routers and switches, time-limited allocation is implemented by use of FIFO buffers, serviced under the constraints of fair queuing.
Against this background, IBP byte arrays can be viewed as application-managed communication buffers in the network. IBP allows the use of time-limited allocation and FIFO disciplines to constrain the use of storage. These buffers can improve communication by way of application-driven staging of data, or they may support better network utilization by allowing applications to perform their own coarse-grained routing of data.
IBP as file management:
Since high-end Internet applications often transfer gigabytes of data, the systems to manage storage resources for such applications are often on the scale of gigabytes to terabytes in size. Storage on this scale is usually managed using highly structured file systems or databases with complex naming, protection and robustness semantics. Normally such storage resources are treated as part of a host system and therefore as more or less private.
From this point of view IBP byte arrays can be viewed as files that live in the network. IBP allows an application to read and write data stored at remote sites, and direct the movement of data among storage sites and to receivers. In this way IBP creates a shareable network resource for storage in the same way that standard networks provide shareable bandwidth for file transfer.

This characterization of IBP as a mechanism for the management of state in the network supplies an operational understanding of the proposed approach, but does not provide a unified view that synthesizes storage and networking together. In order to arrive at this more general view, we note that routing of packets through a network is a series of spatial choices that allows for control of only one aspect of data movement. An incoming packet is sent out on one of several alternative links, but any particular packet is held in communication buffers for as short a time as possible.

IBP, on the other hand, allows for data to be stored at one location while en route from sender to receiver, adding the ability to control data movement temporally as well as spatially. This generalized notion of data movement is termed logistical networking, drawing an analogy with systems of warehouses and distribution channels commonly used in the logistics of transporting military and industrial material. Logistical networking can improve an application’s performance by allowing files to be staged near where they will be used, data to be collected near its source, or content to be replicated close to its users.

1.3 Background
The Internet Backplane Protocol is a mechanism developed for the purpose of sharing storage resources across networks ranging from rack mounted clusters in a single machine room to global networks. To approximate the openness of the Internet paradigm for the case of storage, the design of IBP parallels key aspects of the design of IP, in particular IP datagram delivery. This service is based on packet delivery at the link level, but with more powerful and abstract features that allow it to scale globally. Its leading feature is the independence of IP datagrams from the attributes of the particular link layer, which is established as follows:
· Aggregation of link layer packets masks its limits on packet size;
· Fault detection with a single, simple failure model (faulty datagrams are dropped) masks the variety of different failure modes;
· Global addressing masks the difference between local area network addressing schemes and masks the local network’s reconfiguration.

This higher level of abstraction allows a uniform IP model to be applied to network resources globally, and it is crucial to creating the most important difference between link layer packet delivery and IP datagram service. Namely,
Any participant in a routed IP network can make use of any link layer connection in the network regardless of who owns it. Routers aggregate individual link layer connections to create a global communication service.
This IP-based aggregation of locally provisioned, link layer resources for the common purpose of universal connectivity constitutes the form of sharing that has made the Internet the foundation for a global information infrastructure. IBP is designed to enable the sharing of storage resources within a community in much the same manner. Just as IP is a more abstract service based on link-layer datagram delivery, IBP is a more abstract service based on blocks of data (on disk, tape, or other media) that are managed as “byte arrays.” The independence of IBP byte arrays from the attributes of the particular access layer (which is our term for storage service at the local level) is established as follows:

· Aggregation of access layer blocks masks the fixed block size;
· Fault detection with a very simple failure model (faulty byte arrays are discarded) masks the variety of different failure modes;
· Global addressing based on global IP addresses masks the difference between access layer addressing schemes.

This higher level of abstraction allows a uniform IBP model to be applied to storage resources globally, and this is essential to creating the most important difference between access layer block storage and IBP byte array service:
Any participant in an IBP network can make use of any access layer storage resource in the network regardless of who owns it. The use of IP networking to access IBP storage resources creates a global storage service.
Regardless of the strengths of the IP paradigm, however, its application here leads directly to two problems. First, in the case of storage, the chronic vulnerability of IP networks to Denial of Use (DoU) attacks is greatly amplified. The free sharing of communication within a routed IP network leaves every local network open to being overwhelmed by traffic from the wide area network, and consequently open to the unfortunate possibility of DoU from the network. While DoU attacks in the Internet can be detected and corrected, they cannot be effectively avoided. Yet this problem is not debilitating for two reasons: on the one hand, each datagram sent over a link uses only a tiny portion of the capacity of that link, so that DoU attacks require constant sending from multiple sources; on the other hand, monopolizing remote communication resources cannot profit the attacker in any way, except, of course, for the economic side-effects of attacking a competitor's resource. Unfortunately, neither of these factors holds true for access layer storage resources. Once a data block is written to a storage medium, it occupies that portion of the medium until it is de-allocated, so no constant sending is required. Moreover, it is clear that monopolizing remote storage resources can be very profitable for an attacker.
The second problem with sharing storage network style is that the usual definition of a storage service is based on processor-attached storage, and so it includes strong semantics (near-perfect reliability and availability) that are difficult to implement in the wide area network. Even in “storage area” or local area networks, these strong semantics can be difficult to implement and are a common cause of error conditions. When extended to the wide area, it becomes impossible to support such strong guarantees for storage access.
Both issues are addressed through special characteristics of the way IBP allocates storage:
Allocations of storage in IBP can be time limited. When the lease on an allocation expires, the storage resource can be reused and all data structures associated with it can be deleted. An IBP allocation can be refused by a storage resource in response to over allocation, much as routers can drop packets, and such “admission decisions” can be based on both size and duration. Forcing time limits puts transience into storage allocation, giving it some of the fluidity of datagram delivery.

The semantics of IBP storage allocation are weaker than those of a typical storage service. Because they are chosen to model storage accessed over the network, they assume that an IBP storage resource can be transiently unavailable. Since the user of remote storage resources depends on so many uncontrolled remote variables, it may be necessary to assume that storage can be permanently lost. Thus, IBP is a "best effort" storage service. To encourage the sharing of idle resources, IBP even supports "volatile" storage allocation semantics, where allocated storage can be revoked at any time. In all cases, such weak semantics mean that the level of service must be characterized statistically.

Because of IBP’s limitations on the size and duration of allocation and its weak allocation semantics, IBP does not directly implement reliable storage abstractions such as conventional files. Instead, these must be built on top of IBP using techniques such as redundant storage, much as TCP builds on IP’s unreliable datagram delivery in order to provide reliable transport.

CHAPTER 2 IBP STRUCTURE AND CLIENT API
________________________________________________
IBP has been designed to be a minimal abstraction of storage to serve the needs of logistical networking, i.e. to manage the path of data through both time and space. The fundamental operations are:
1. Allocating a byte array for storing data.
2. Moving data from a sender to a byte array.
3. Delivering data from a byte array to a receiver (either another byte array or a client).

A client API for IBP consisting of seven procedure calls, along with server daemon software that makes local storage available for remote management, has been defined and implemented. Currently, connections between clients and servers are made through TCP/IP sockets.
IBP client calls may be made by anyone who can attach to an IBP server (which we also call an IBP depot to emphasize its logistical functionality). IBP depots require only storage and networking resources, and running one does not necessarily require supervisory privileges. These servers implement policies that allow an initiating user some control over how IBP makes use of storage. An IBP server may be restricted to use only idle physical memory and disk resources, or to enforce a time-limit on all allocations, ensuring that the host machine is either not impacted at all, or only impacted for a finite duration. The goal of these policies is to encourage users to experiment with logistical networking without over-committing server resources.
Logically speaking, the IBP client sees a depot’s storage resources as a collection of append-only byte arrays. There are no directory structures or client-assigned file names. Clients initially gain access to byte arrays by allocating storage on an IBP server. If the allocation is successful, the server returns three capabilities to the client: one for reading, one for writing, and one for management. These capabilities can be viewed as names that are assigned by the server. Currently, each capability is a text string encoded with the IP identity of the IBP server, plus other information to be interpreted only by the server. This approach enables applications to pass IBP capabilities among themselves without registering these operations with IBP, and in this way supports high performance without sacrificing the correctness of applications. The IBP client API consists of seven procedure calls, broken into the following three groups:
Allocation:
IBP_cap_set IBP_allocate(char *host, int size, IBP_attributes attr)
Reading / Writing:
IBP_store(IBP_cap write_cap, char *data, int size)
IBP_remote_store(IBP_cap write_cap, char *host, int port, int size)
IBP_read(IBP_cap read_cap, char *buf, int size, int offset)
IBP_deliver(IBP_cap read_cap, char *host, int port, int size, int offset)
IBP_copy(IBP_cap source, IBP_cap target, int size, int offset)
Management:
IBP_manage(IBP_cap manage_cap, int cmd, int capType, IBP_status info)

2.1 Allocation

The heart of IBP’s innovative storage model is its approach to allocation. Storage resources that are part of the network, as logistical networking intends them to be, cannot
be allocated in the same way as they are on a host system. Any network is a collection of shared resources that are allocated among the members of a highly distributed community. A public network serves a community which is not closely affiliated and which may have no social or economic basis for cooperation. To understand how IBP needs to treat allocation of storage for the purposes of logistical networking, it is helpful to consider the problem of sharing resources in the Internet, and how that situation compares with the allocation of storage resources on host systems.
In the Internet, the basic shared resources are data transmission and routing. The greatest impediment to sharing these resources is the risk that their owners will be denied the use of them. The reason that the Internet can function in the face of the possibility of denial-of-use attacks is that it is not possible for the attacker to profit in proportion to their own effort, expense and risk. When other resources, such as disk space in spool directories, are shared, we tend to find administrative mechanisms that limit their use by restricting either the size of allocations or the amount of time for which data will be held.
By contrast, a user of a host storage system is usually an authenticated member of some community that has the right to allocate certain resources and to use them indefinitely. Consequently, sharing of resources allocated in this way cannot extend to an arbitrary community. For example, an anonymous FTP server with open write permissions is an invitation for someone to monopolize those resources; such servers must be allowed to delete stored material at will.
In order to make it possible to treat storage as a shared network resource, IBP supports some of these administrative limits on allocation, while at the same time seeking to provide guarantees that are as strong as possible for the client. So, for example, under IBP allocation can be restricted to a certain length of time, or specified in a way that permits the server to revoke the allocation at will. Clients who want to find the maximum resources available to them must choose the weakest form of allocation that their application can use.
To allocate storage at a remote IBP depot, the client calls IBP allocate(). The maximum storage requirements of the byte array are noted in the size parameter, and additional attributes (described below) are included in the attr parameter. If the allocation is successful, a trio of capabilities is returned.
There are several allocation attributes that the client can specify:
Permanent vs. time-limited: The client can specify whether the storage is intended to live forever, or whether the server should delete it after a certain period of time.
Volatile vs. stable: The client can specify whether the server may revoke the storage at any time (volatile) or whether the server must maintain the storage for the lifetime of the buffer (stable).
Byte-array / pipe / circular-queue: The client can specify whether the storage is to be accessed as an append-only byte array, as a FIFO pipe (read from one end, write to the other), or as a circular queue where writes to one end push data off of the other end once a certain queue length has been attained.

We anticipate that applications making use of shared network storage (storage that they do not explicitly own) will be constrained to allocate either permanent and volatile storage, or time-limited and stable storage.


2.2 Reading / Writing

All reading and writing to IBP byte arrays is done through a small set of reading/writing calls.
These calls allow clients to read from and write to IBP buffers. IBP_store() and IBP_read() allow clients to write from and read into their own memory. IBP_remote_store() and IBP_deliver() allow clients to direct an IBP server to connect to a third party (via a socket) for writing/reading. Finally, IBP_copy() allows a client to copy an IBP buffer from one server to another. Note that IBP_remote_store(), IBP_deliver() and IBP_copy() all allow a client to direct an interaction between two other remote entities. The support that these three calls provide for third-party transfers is an important part of what makes IBP different from, for example, typical distributed file systems. The semantics of IBP_store(), IBP_remote_store(), and IBP_copy() are append-only. IBP_read(), IBP_deliver() and IBP_copy() allow portions of IBP buffers to be read by the client or a third party. If an IBP server has removed a buffer, due to a time-limit expiration or volatility, these calls simply fail, encoding the reason for failure in an IBP_errno variable.
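The append-only write / random-access read semantics can be modeled in a few lines. This Python sketch is an illustration of the buffer semantics only, not the depot's actual implementation or wire protocol; the class and method names are invented:

```python
class ByteArray:
    """Toy model of an IBP byte-array allocation."""
    def __init__(self, max_size):
        self.max_size = max_size
        self.data = bytearray()

    def store(self, payload):
        # Append-only, like IBP_store(): writes always go at the end.
        if len(self.data) + len(payload) > self.max_size:
            raise IOError("allocation exhausted")
        self.data.extend(payload)

    def read(self, offset, length):
        # Random-access read of any stored portion, like IBP_read().
        return bytes(self.data[offset:offset + length])

buf = ByteArray(64)
buf.store(b"hello ")
buf.store(b"world")
assert buf.read(6, 5) == b"world"
```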

2.3 Management / Monitoring
All management of IBP byte arrays is performed through the IBP_manage() call. This procedure requires the client to pass the management capability returned by the initial IBP_allocate() call. With IBP_manage(), the client may manipulate a server-controlled reference count of the read and write capabilities. When the reference count of a byte array's write capability reaches zero, the byte array becomes read-only. When the read reference count reaches zero, the byte array is deleted. Note that the byte array may also be removed by the server, due to time-limited allocation or volatility; in this case, the server invalidates all of the byte array's capabilities. The client may also use IBP_manage() to probe the state of a byte array and its IBP server, to modify the time limit on time-limited allocations, and to modify the maximum size of a byte array.
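The reference-count rules stated above can be captured in a short sketch. The following Python model is illustrative only (the real counts live on the depot and are manipulated through IBP_manage()); names are invented:

```python
class ManagedBuffer:
    """Toy model of a depot-side byte array's reference counts."""
    def __init__(self):
        self.read_refs = 1
        self.write_refs = 1
        self.deleted = False

    @property
    def read_only(self):
        # Zero write references: the byte array becomes read-only.
        return self.write_refs == 0

    def decrement(self, which):
        if which == "write" and self.write_refs > 0:
            self.write_refs -= 1
        elif which == "read" and self.read_refs > 0:
            self.read_refs -= 1
            if self.read_refs == 0:
                # Zero read references: the depot reclaims the storage.
                self.deleted = True

b = ManagedBuffer()
b.decrement("write")
assert b.read_only and not b.deleted
b.decrement("read")
assert b.deleted
```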


CHAPTER 3 IBP DESIGN ISSUES
________________________________________________
A design for shared network storage technology that draws on the strengths of the Internet model must also cope with its weaknesses and limitations. In this section we sketch the key design issues that we have encountered in developing IBP.

3.1 Security
The basis of IBP network security is the capability. Capabilities are created by the depot in response to an allocation request and returned to the client in the form of long, cryptographically secure byte strings. Every subsequent request for actions on the allocated byte array must then present the appropriate capability. As long as capabilities are transmitted securely between client and server, and the security of the depot itself is not compromised, only someone who has obtained the capability from the client can perform operations on the byte array. It is important to note that this is the only level of security that IBP must deal with: data encryption, because of its end-to-end nature, has to be handled in the layer(s) above IBP. For example, the IBP-Mail application (described in Chapter 5) encrypts all the data it sends, but this is, and has to be, completely transparent to the IBP depot. The same applies to data compression and any other end-to-end service.
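The text requires only that capabilities be long, cryptographically secure byte strings. One common way to mint such strings — shown here purely as an illustration, not as IBP's actual scheme — is for the depot to MAC the allocation identifier and access mode with a secret key, so that a valid string could only have come from the depot:

```python
import hmac, hashlib, secrets

# Hypothetical depot secret; a real depot would persist this securely.
DEPOT_KEY = secrets.token_bytes(32)

def mint_capability(alloc_id, mode):
    """Produce an unforgeable capability string for one access mode."""
    tag = hmac.new(DEPOT_KEY, f"{alloc_id}:{mode}".encode(), hashlib.sha256)
    return f"{alloc_id}:{mode}:{tag.hexdigest()}"

def check_capability(cap):
    """Verify a presented capability without any per-client state."""
    alloc_id, mode, tag = cap.split(":")
    good = hmac.new(DEPOT_KEY, f"{alloc_id}:{mode}".encode(), hashlib.sha256)
    return hmac.compare_digest(tag, good.hexdigest())

cap = mint_capability("4217", "read")
assert check_capability(cap)
# Tampering with the mode invalidates the MAC.
assert not check_capability(cap.replace("read", "write"))
```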


3.2 Timeouts

Timeouts are implemented in both the IBP library and server for different purposes. The client library deals with the possibility that faults in the network or server will cause a particular request to hang. On the server side, the fact that reading and writing to a byte array may occur simultaneously means that a particular IBP_load or IBP_store operation may not be able to fulfill its byte count immediately; to avoid such problems, a server timeout (or Server Sync) has been implemented to let the server wait a specified amount of time for the conditions requested to be available before answering a particular request.
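The server-sync behavior described above — a read that waits a bounded time for a concurrent writer to supply the requested bytes — can be sketched with a condition variable. Names here are illustrative, not the server's actual code:

```python
import threading

class SyncBuffer:
    """Toy model of a depot buffer with a server-sync read timeout."""
    def __init__(self):
        self.data = bytearray()
        self.cond = threading.Condition()

    def store(self, payload):
        with self.cond:
            self.data.extend(payload)
            self.cond.notify_all()     # wake any waiting readers

    def load(self, nbytes, timeout):
        with self.cond:
            # Wait up to `timeout` seconds for the byte count to be met.
            ok = self.cond.wait_for(lambda: len(self.data) >= nbytes, timeout)
            return bytes(self.data[:nbytes]) if ok else None

buf = SyncBuffer()
# A writer arrives shortly after the read begins.
threading.Timer(0.1, buf.store, args=(b"ready",)).start()
assert buf.load(5, timeout=2.0) == b"ready"
# A request that can never be satisfied fails after the timeout.
assert buf.load(99, timeout=0.05) is None
```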


3.3 Data Mover

Since the primary intent of IBP is to provide a common abstraction of storage, it is arguable that third-party transfer is unnecessary. Indeed, it is logically possible to build an external service for moving data between depots that accesses IBP allocations using only IBP_load and IBP_store; however, such a service would have to act as a proxy for clients, and this immediately raises trust and performance concerns. The IBP_copy and IBP_mcopy data movement calls were provided in order to allow a simple implementation to avoid these concerns, even though software architectures based on external data movement operations are still of great interest to us. The intent of the basic IBP_copy call is to provide access to a simple data transfer over a TCP stream. This call is built into the IBP depot itself to offer a simple solution for data movement; to achieve the transfer, the depot that receives the call acts as a client doing an IBP_store on the target depot. IBP_mcopy is a more general facility, designed to provide access to operations that range from simple variants of basic TCP-based data transfer to highly complex protocols using multicast and real-time adaptation of the transmission protocol, depending on the nature of the underlying backbone and of traffic concerns. In all cases, the caller has the capacity to determine the appropriateness of the operation to the depot's network environment, and to select what it believes to be the best data transfer strategy; similar control is given over any error correction plan, should the data movement call return in failure.

3.4 Depot Allocation Policy and Client Allocation Strategies

One of the key elements of the IBP design is the ability of depots to refuse allocations, and the strategies used by clients in the face of such refusal. IBP depots implement an allocation policy based on both time and space. Any allocation has to lie below the curve defined by space * time = K, where K varies linearly with the remaining free space. This permits smaller allocations to have a longer duration. This policy gives us some defense against denial-of-use (DoU) attacks, because any allocation of storage that depletes a large fraction of the remaining space in a particular depot will tend to be limited to a short duration, thereby limiting the impact on other depot users.
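The space * time = K policy can be made concrete with a small sketch. The constant below is invented for illustration; the text specifies only that K scales linearly with remaining free space:

```python
def max_duration(size, free_space, k_per_byte=3600):
    """Longest permissible duration (seconds) for an allocation of `size`
    bytes on a depot with `free_space` bytes remaining. k_per_byte is an
    assumed policy constant."""
    if size > free_space:
        return 0                        # cannot allocate at all
    k = k_per_byte * free_space         # K grows linearly with free space
    return k // size                    # size * duration <= K

# A large allocation on a nearly-full depot is short-lived...
assert max_duration(90, free_space=100) < max_duration(10, free_space=100)
# ...and an allocation bigger than the free space is refused outright.
assert max_duration(200, free_space=100) == 0
```

This is exactly the DoU defense described above: grabbing most of a depot's space buys only a brief hold on it.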


CHAPTER 4 THE EXNODE
_______________________________________________
One of the key strategies of the IBP project was adhering to a very simple philosophy: IBP models access layer storage resources as closely as possible while still maintaining the ability to share those resources between applications. While this gives rise to a very general service, it results in semantics that might be too weak to be conveniently used directly by many applications. As in the networking field, the IP protocol alone does not guarantee many highly desirable characteristics and it needs to be complemented by a transport protocol such as TCP. What is needed is a service at the next layer that, working in synergy with IBP, implements stronger allocation properties such as reliability and arbitrary duration that IBP does not support.

4.1 Introduction
Following the example of the Unix inode, which aggregates disk blocks to implement a file, a single generalized data structure, called an external node, or exNode, is implemented in order to manage aggregate allocations that can be used in implementing network storage with many different strong semantic properties. Rather than aggregating blocks on a single disk volume, the exNode aggregates storage allocations on the Internet, and the exposed nature of IBP makes IBP byte arrays exceptionally well adapted to such aggregations (Figure 4.1). In the present context the key point about the design of the exNode is that it has allowed us to create an abstraction of a network file to layer over IBP-based storage in a way that is completely consistent with the exposed-resource approach.
The exNode library has a set of calls (Table 2) that allow an application to create and destroy an exNode, to add or delete a mapping from it, to query it with a set of criteria, and to produce an XML serialization, to allow the use of XML-based tools and interoperability with XML systems and protocols.

Figure 4.1: Inode compared to exNode – the exNode implements a file turned inside out.




Table 2: The exNode API

exNode Management:    xndCreateExNode, xndFreeExNode
Segment Management:   xndCreateSegment, xndFreeSegment, xndAppendSegment, xndDeleteSegment
Segment Query:        xndQuery, xndEnumNext, xndFreeEnum
exNode Serialization: xndSerialize, xndDeserialize

4.2 Mobile Control State Of A Network File
The exNode implements an abstract data structure that represents information known about the storage resources implementing a single file. The exNode is a set of declarations and assertions that together describe the state of the file. For purposes of illustration, in this section we introduce a small subset of the exNode specification and a minimal example of its application.

4.2.1 A Simple exNode API

In this minimal formulation, the exNode is a set of mappings, each of which specifies that a particular portion of the file’s byte extent during a certain period of time is mapped to a storage resource specifier that is given by a string (a URL or IBP capability). A minimal exNode API must give us a means to create and destroy these sets of mappings, as well as a way of building them.
1. Creation and destruction are implemented by simple constructor and destructor functions.
xnd_t n = xnd_create()
xnd_destroy(xnd_t n)
2. An exNode is built by an operation that adds a mapping by specifying a data extent (start byte and length), a temporal extent (start time and duration), and a storage resource specifier.
xnd_addmap(xnd_t n, unsigned int data_start, unsigned int data_length, time_t time_start, time_t time_length, char *storage)
3. The simplest possible useful query to an exNode simply finds one (of possibly many) storage resource descriptor and offset that holds the nth byte of the data extent at a specified time.
xnd_bytequery(xnd_t n, unsigned int byte_pos, time_t when, char **storage, unsigned int *offset);
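The three calls above can be made concrete with a short runnable sketch, with Python standing in for the C signatures. The internal representation (a flat list of mapping tuples) is an assumption made for illustration, not the library's actual structure:

```python
class ExNode:
    """Toy model of the minimal exNode: a set of (data, time, storage)
    mappings, mirroring xnd_create / xnd_addmap / xnd_bytequery."""
    def __init__(self):
        self.mappings = []  # (data_start, data_len, time_start, time_len, storage)

    def addmap(self, data_start, data_length, time_start, time_length, storage):
        self.mappings.append(
            (data_start, data_length, time_start, time_length, storage))

    def bytequery(self, byte_pos, when):
        """Return (storage, offset) for one mapping holding byte_pos at
        time `when`, or None if no mapping covers it."""
        for ds, dl, ts, tl, storage in self.mappings:
            if ds <= byte_pos < ds + dl and ts <= when < ts + tl:
                return storage, byte_pos - ds
        return None

# A 200-byte file split across two (hypothetical) IBP allocations.
n = ExNode()
n.addmap(0, 100, 0, 1000, "ibp://depot-a/cap1")
n.addmap(100, 100, 0, 1000, "ibp://depot-b/cap2")
assert n.bytequery(150, 500) == ("ibp://depot-b/cap2", 50)
```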
This minimal exNode API can be extended in a number of ways that have been left out of this account for the sake of clarity, and to keep from having to introduce additional structure. Some of these extensions include:
1. Queries can be much more complex, specifying ranges of data and time, and returning sets of storage resources with associated metadata to direct the process of retrieving data.
2. Mappings can be annotated to specify read-only or write-only data.
3. As storage allocations expire or become unavailable it will be necessary to manage the exNode by finding and deleting mappings, and this will require additional mapping management calls.
4. By associating a mapping with a set of storage specifiers and an aggregation function, it is possible to model group allocations such as RAID-like error correction.
5. By defining metrics on the location or performance or other characteristics of different storage allocations it is possible to inform the user of the exNode which of multiple alternatives to choose.

4.2.2 XML Serialization of the exNode
The mobility of the exNode is based on two premises:
1. It is possible to populate the exNode exclusively with network-accessible storage resources.
2. The exNode can be encoded in a portable way that can be interpreted at any node in the network.
Today, XML is the standard tool used to implement portable encoding of structured data, and so we are defining a standard XML serialization of the exNode. The serialization is based on the abstract exNode data structure, and so allows each node or application to define its own local data structure.
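An XML serialization round-trip can be sketched as follows. The element and attribute names here are invented for illustration; they are not the project's actual schema:

```python
import xml.etree.ElementTree as ET

def serialize(mappings):
    """Encode a list of mapping dicts as a portable XML string."""
    root = ET.Element("exnode")
    for m in mappings:
        ET.SubElement(root, "mapping",
                      offset=str(m["offset"]), length=str(m["length"]),
                      storage=m["storage"])
    return ET.tostring(root, encoding="unicode")

def deserialize(xml_text):
    """Rebuild the local mapping structure from the XML encoding."""
    root = ET.fromstring(xml_text)
    return [{"offset": int(e.get("offset")), "length": int(e.get("length")),
             "storage": e.get("storage")} for e in root.findall("mapping")]

maps = [{"offset": 0, "length": 512, "storage": "ibp://depot-a/capXYZ"}]
# Any node can rebuild its own local data structure from the wire form.
assert deserialize(serialize(maps)) == maps
```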

4.2.3 Sample exNode Applications
1. IBP-Mail is a simple application that uses IBP to store mail attachments rather than include them in the SMTP payload using MIME encoding. IBP-Mail builds an exNode to represent the attached file and then sends the XML serialization of that file in the SMTP payload. The receiver can then rebuild an exNode data structure and use it to access the stored attachment.
2. A simple distributed file system can be built by storing serialized exNodes in the host file system and using them like Unix soft links. Programs that would normally access a file instead find the exNode serialization, build an exNode data structure and use it to access the file. Caching can be implemented by creating a copy of accessed data on a nearby IBP depot and entering the additional mappings into the stored exNode.
3. A mobile agent that uses IBP depots to store part of its state can carry that state between hosts in the form of a serialized exNode. If the hosts understand the exNode serialization, then they can perform file system tasks for the agent while it is resident, returning the updated exNode to the agent when it migrates.

4.3 Active File Management Using The exNode
In conventional file systems, many users consider the mapping of files to storage resources as static or changing only in response to end-user operations, but in fact this is far from true:
1. Even in a conventional disk-based file system, detection of impending failure of the physical medium can result in the movement of data and remapping of disk block addresses.
2. Defragmentation and backups are other examples of autonomous movement of data by the file system not driven by end-user operations.
3. In a RAID storage system, partial or complete failure of a disk results in regeneration and remapping of data to restore fault tolerance.
4. In network-based systems, scanning for viruses can be a necessary autonomous action resulting in deletion of files or movement to a quarantine area.
The exNode is most closely analogous to the state of a process when it is used to implement an autonomous activity that is not under the direct control of a client, and may be completely hidden. The following activities are examples of “file-process” activity.

4.3.1 Active Probing to Maintain Fault Tolerance

The exNode can be used to express simple redundant storage of data or, with the appropriate extension, to express storage aggregation functions such as redundant storage of error correcting codes. However, as with RAID systems, when failures are detected, data must be copied to reestablish redundancy. In the case of the network, which can experience partitions and other extreme events that cause depots to become unreachable, active probing may be required in order to ensure that data losses are detected in a timely manner. This can be accomplished by the host setting a timer and actively probing every depot for reachability. Because transient network partitions are common, this data must be correlated over time to deduce the likelihood of data loss or long-term unavailability. The frequency of probing may be adjusted to account for network conditions or the urgency of constant availability required by the application.

4.3.2 Defragmentation

In any system that responds to congestion by locally restricting the size of an allowable allocation, fragmentation can result. In this regard IBP is no exception: when depots become full, they limit the size of an allowable allocation, and clients will begin fragmenting the files they represent using the exNode (see Figure 4.1). For the case of network storage, fragmentation results in a loss of reliability, requiring increased forward error correction in the form of error coding or full duplication of stored data. This can put undesirable loads on the host system and actually increases the global demand for storage – at some point allocations simply fail.

Congestion can be caused by the under-provisioning of the network with storage resources, but it can also be caused by network partitioning that makes remote storage resources unavailable to the host system. Thus, storage congestion can be transient, but when it is relieved, the fragmentation that has been caused by it can persist. What is required is the merging of fragmented allocations. While it would be possible to attempt the wholesale defragmentation of an exNode, this may place a burden on the host system, and if attempted at a time when congestion is not yet relieved may be fruitless. Instead,
the host system may choose to attempt defragmentation through more aggressive lease renewal, combined with attempted merging of adjacent fragments. Over time, this will lead to a natural defragmentation up to the point where depots resist larger allocations.

4.3.3 Asynchronous Transfer Management
The current IBP client API implements a simple set of synchronous calls for all storage allocation, management and data transfer operations. However, large data transfers can take a very long time, sometimes longer than the exNode will reside at the client system at which the transfer was initiated. In this case, the data transfer call must itself generate a capability to represent the state of the transfer in progress. In order to find out the result of the data transfer operation and service the exNode accordingly, the receiver of the exNode must obtain the data transfer capability. Thus, data transfer capabilities and metadata describing the transfers they represent must be part of the exNode in transit. When the exNode is transferred between hosts before reaching its final destination (as when it is held by a mail spool server), that intermediate host can interpret and service the exNode (for instance, one representing a mail attachment). A further complication arises when the intent of the host is not to initiate a single data transfer but a set of dependent transfers, as when a mail message is routed to a set of receivers, making copies in a tree-structured manner. In this case, the sequence of operations and the dependencies between them must be encoded in the exNode, and the processing of the exNode can involve issuing new data transfers as their dependencies are resolved.


CHAPTER 5 EXAMPLE: IBP MAIL
________________________________________________
IBP-Mail is an improvement to the current state of the art in mailing large files over the Internet. It arises from the addition of writable storage to the pool of available Internet resources. With IBP-Mail, a sender registers a large file with an IBP-Mail agent, and stores it into a network storage depot. The sender then mails a pointer to the file to the recipient using the standard SMTP mailing protocol. When the receiver reads the mail message, he or she contacts the agent to discover the file’s whereabouts, and then downloads the file. In the meantime, the agent can route the file to a storage depot close to the receiver. IBP-Mail allows for an efficient transfer of the file from sender to receiver that makes use of the asynchronous nature of mail transactions, does not expend extra spooling resources of the sender or receiver, and utilizes network resources efficiently and safely.

5.1 Introduction
Electronic mail is the de facto mechanism for communication on the Internet. In an electronic mail transaction, a sender desires to transmit a message to one or more recipients. Typically, this is done using the SMTP protocol. Electronic mail works extremely well for most uses. However, the model has limitations when the sender wants to transmit large data files to one or more receivers. Given the current network infrastructure, there are four basic ways a sender can transmit a large data file to one or more recipients:
1. Send the file as an encoded mail message:
One of the earliest email tools is Unix's uuencode. Uuencode transforms any binary data file into an ASCII message that can be sent and received by virtually all mail clients. Uudecode then transforms the message back into a binary data file. Later, the MIME protocol was developed as a standard format for attaching non-textual files to mail messages and identifying their types, so that mail clients can bundle multiple data files into an email message, send them to recipients, and then have the recipients unbundle them and launch file-specific applications to act on them. While the MIME standard has certainly made the task of sending and receiving files much easier than using
uuencode/uudecode, they share the same fundamental limitations when the files are very large. First, in order to work across all SMTP daemons, the files must be encoded into printable ASCII, which expands them roughly by a factor of 1.4. Second, spooler space at both the sender and the receiver must be expended for the file. Typically, spooler space is allocated by an institution’s system administrator and is shared by all users of the system. Often, there is a limit on the amount of data that may be stored in the spooler space, meaning that files over a certain size cannot be sent with MIME or uuencode encoding. Moreover, if a sender sends to multiple recipients, a separate copy of the mail message will be stored for each recipient, even if recipients share the same spooling space. Therefore, for large files, the uuencode/MIME mechanism for sending the file is often untenable.
2. Store the file locally and mail a pointer:
A typical example has the sender storing the file in a local HTTP or anonymous FTP server, and mailing the receiver a pointer to it. When the receiver reads the mail, he or she downloads the file. This solution solves the problem of expending spooler resources at the sender and receiver, but still has limitations. First, the sender may not have access to an HTTP or FTP server that contains sufficient storage. Second, a significant period of time may pass between the time that the sender sends the message and the time that the receiver downloads the file. When the file is served at a local HTTP or FTP server, this time is wasted, because the receiver will not start the download until he or she reads the message. Third, there is no automatic mechanism to inform the sender that the file has been downloaded, and may therefore be deleted from the server. And fourth, since HTTP and anonymous FTP downloads may be initiated by anyone on the Internet and both protocols have directory listing operations, the file may be discovered and read by unintended recipients.
3. Upload the file remotely and mail a pointer:
A typical example of this is when the receiver owns an anonymous FTP server with an incoming directory in which anonymous users may upload files. The sender then uploads the file and mails the receiver a pointer to it. The receiver may then download the file upon reading the mail, and delete it when the download is finished. This solution solves many of the problems with the previous two solutions, but has three limitations. First, the sender has to wait for the file to be stored remotely before he or she may mail the pointer. Second, the receiver, in allowing anonymous users to write to his or her FTP server, opens the server to invasion by malicious users. Finally, the receiver may not have access to such a server.
4. Save the file to tape and use surface mail:
While this solution seems truly primitive, it is sometimes the only way to transmit very large files from a sender to a receiver.

5.2 The IBP-Mail Solution
The IBP-Mail solution to this problem relies on the existence of writable storage depots in the network. Given such depots, the sending of large data files can be performed with near optimal efficiency. With IBP-Mail, the storage depots are registered with IBP-Mail agents. The sender registers with one of these agents and stores the file at a nearby depot. The sender then mails a message with the file’s ID, as assigned by the agent. Meanwhile, the agent attempts to move the file to a depot close to the receiver. When the receiver reads the message, he or she contacts the agent to discover the whereabouts of the file, and downloads it from the appropriate depot. The file may then be deleted from the depot. The IBP-Mail solution exhibits none of the problems of the above solutions:
1. The file is not expanded due to mailer limitations.
2. No extra storage resources are required at the sender or receiver.
3. If enough time has passed between the message’s initiation and the receiver’s reading of the message, the file should be close to the receiver for quick downloading.
4. The sender does not have to wait for the file to reach the receiver before sending the message.
5. Storage resources are not opened to malicious users.
6. Multiple recipients may download shared copies of the file.
Additionally, the IBP-Mail model may be extended to allow other interesting functionalities such as compression, data mining, fragmentation of very large data files, and uses of remote compute cycles.
IBP is software that allows applications to manage and make use of remote storage resources. It is structured as server daemons that run at the storage sites, serving up dedicated disk, spare disk, and physical memory to IBP clients. IBP clients can run anywhere on the Internet and do not have to be authenticated with the IBP servers. Thus, when an IBP server is running, anyone may use it. There are several features of IBP’s design that make offering storage as a network resource feasible:
There are no user-defined names:
IBP clients allocate storage, and if the allocation is successful, the server returns three capabilities to the client — one each for reading, writing, and management. These capabilities are text strings, and may be viewed as server-defined names for the storage. The elimination of user-defined names facilitates scalability, since no global namespace needs to be maintained. Additionally, there is no way to query an IBP server for these capabilities, so unlike an anonymous FTP server, there is no way for a user to gain access to a file unless he or she knows its name a priori.
Storage may be constrained to be volatile or time limited:
An important issue when serving storage to Internet applications is being able to reclaim the storage. IBP servers may be configured so that the storage allocated to IBP clients is volatile, meaning it can go away at any time, or time-limited, meaning that it goes away after a specified time period.
Clients can direct remote data operations:
One of IBP’s primitive operations is the ability to copy a file from one IBP server to another. This is ideal for applications that need to route data from one place to another, and direct this routing from a third party.
Reference counts are maintained on the files:
One operation that IBP clients may perform with management capabilities is an explicit increment and decrement of reference counts for reading and writing. When the write reference count is zero, the file becomes read-only. When the read reference count becomes zero, the file is deleted.

5.3 The Structure Of IBP-Mail

IBP-Mail has been designed with three goals in mind. First, it must work, and be available to general Internet users with a minimum of effort. Second, it must solve the problems with mailing large data files as detailed in the Introduction. Third, it must facilitate testing.
IBP-Mail consists of three components, as depicted in Figure 1. These are:
A pool of IBP servers:
Only one server is necessary in the pool for IBP-Mail to work. Ideally the pool will consist of a collection of servers with wide geographic distribution.
IBP-RQ: A registration/query server:
This is a server that maintains the state of the IBP server pool. Servers register with the IBP-RQ server, and clients may query the IBP-RQ server for information about servers. More information about the IBP-RQ server is given below.
IBP-NT: An agent for naming and transport:
The IBP-NT keeps track of where the data file is in the IBP server pool, and directs the movement of the file from server to server. More information about the IBP-NT is given below.

An IBP-Mail transaction takes the following nine steps, also outlined in Figure 1:
1. The sender contacts an IBP-NT agent. In the current implementation, there is only one such agent, but multiple agents are possible. The size of the file is communicated to the agent.
2. The IBP-NT agent queries the IBP-RQ server to list appropriate IBP servers from the server pool that can store the file.
3. The IBP-NT agent allocates storage for the file on an IBP server, and receives the capabilities for the storage.
4. The IBP-NT agent creates a name for the transaction and returns that name, plus the write capability of the file, to the sender.
5. The sender stores the file into the IBP server.
6. The sender sends mail to the receiver with the name of the transaction. At the same time, the sender informs the agent that the file has been written to the IBP server.
7. The receiver presents the name to the IBP-NT agent.
8. The IBP-NT agent returns the read and manage capabilities of the file to the receiver.
9. The receiver downloads the file from the IBP server, and may delete it if desired, by decrementing the file’s read reference count. Note that if a file is shared by multiple recipients, the agent may increment its reference count to equal the number of recipients, and then the file will be deleted only after each recipient has decremented the reference count (or when the time limit expires).
There are two steps in Figure 1 labeled with an asterisk. These may be performed by the IBP-NT agent after the sender stores the file, if the agent determines that it may be able to move the file close to the receiver. If enough time passes between steps 6 and 7 (due to the receiver not reading his or her email instantly), then this time may be used by the agent to move the file close to the receiver(s), thereby improving the time for downloading. If a receiver tries to download the file before it has been copied, it may do so from the original IBP server, with reduced performance. The current implementation of each individual component is described below.
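The nine-step transaction above can be traced end to end in a small simulation. Everything here is an illustrative stand-in: one in-process dictionary plays the depot pool, and the function names mimic (but are not) the real IBP calls:

```python
import secrets

depot_data = {}   # allocation id -> stored bytes (the "depot")
cap_table = {}    # capability string -> (allocation id, access mode)

def ibp_allocate():
    """Steps 2-3: the agent allocates storage and gets capabilities."""
    alloc = secrets.token_hex(4)
    depot_data[alloc] = b""
    caps = {}
    for mode in ("read", "write"):
        cap = secrets.token_hex(8)
        cap_table[cap] = (alloc, mode)
        caps[mode] = cap
    return caps

def ibp_store(write_cap, data):
    alloc, mode = cap_table[write_cap]
    assert mode == "write"
    depot_data[alloc] += data

def ibp_read(read_cap):
    alloc, mode = cap_table[read_cap]
    assert mode == "read"
    return depot_data[alloc]

# Steps 1-4: sender contacts the agent; the agent allocates storage,
# names the transaction, and returns the name plus write capability.
transactions = {}
caps = ibp_allocate()
name = secrets.token_hex(4)
transactions[name] = caps

# Step 5: the sender uploads the attachment with the write capability.
ibp_store(caps["write"], b"large attachment")

# Steps 6-9: only the short name travels by SMTP; the receiver trades
# it for the read capability and downloads the file from the depot.
assert ibp_read(transactions[name]["read"]) == b"large attachment"
```

The point the simulation makes is the one the chapter argues: the mail message carries only a small name, while the bulk data moves depot-to-depot outside the SMTP path.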
5.3.1 The IBP-RQ Server
The IBP-RQ server provides basic directory registration and query services to the IBP-NT agent. The directory part of the server is a two-level structure, with groups containing hosts (IBP servers). A host can belong to more than one group at the same time.
While the directory structure can be implemented using existing technologies (e.g. LDAP), what sets the IBP-RQ server apart is the query component. Queries in IBP-RQ provide a ranking of IBP servers according to selection criteria embedded into IBP-RQ. Currently, two rankings are implemented: free storage available, and proximity to a given host. The former is easy to implement given IBP semantics; the latter is more difficult. Currently, each IBP server runs a SonarDNS daemon, and the IBP-RQ can force all its servers to perform Sonar performance queries to the given host. While this implementation is not a long-term solution (for example, it is not scalable to a large number of IBP servers), it suffices for our current implementation. Similarly, IBP server failures are currently not detected, and the IBP-RQ is not a distributed service, problems which will have to be solved as IBP-Mail scales.

5.3.2 The IBP-NT Agent

The IBP-NT agent provides naming and transport to IBP-Mail clients. Currently there is only one, but there could potentially be many IBP-NT agents working independently. A client starts a mail transaction by contacting an IBP-NT agent and providing initial information about the transaction: sender location and file size. The agent then queries the IBP-RQ server to find an IBP server in which the sender should insert the file. Currently, this query attempts to find a server with enough storage that is closest to the sender (using Sonar DNS’s metrics for closeness). In the future, this may be improved to take account of server load or reliability as well. The agent allocates the storage, and then stores the capabilities for that storage into a second IBP file located at a server on or near the agent’s host. We will call this the name file. The read capability of name file is returned to the sender, along with the write capability of the first IBP file. The sender stores the data into the first IBP file, and then sends mail to the receiver with the capability name file. This capability serves as a name for the mail transaction.
At the same time, the sender informs the agent that the IBP file has been stored, and gives the location(s) of the receiver(s). Meanwhile, the agent is free to transport the data file to a server near the receiver (or receivers). This may be done with a simple third-party IBP copy() call. When finished, the agent updates the information in the IBP name file and deletes the initial data file. When the receiver presents the agent with the name, the agent returns the read and manage capabilities of the data file, and the receiver downloads it and decrements the read capability if desired.
The decision to use IBP for the name file was made for flexibility. With this structure, it is possible to have multiple agents manage the transport, to have agents restart upon failure without losing information, and even to have the receiver find the location of the file without contacting the agent. The allocation of the data files and the name files is performed using IBP's time-limited allocation policy. Currently, the time limit defaults to 10 days, and this may be adjusted by the sender. As discussed above, time-limited allocation is necessary for storage owners to be able to reclaim their storage resources from the network. It also solves the lost-pointer problem in the case of agent failure or lost email. Additionally, it frees the sender and receiver from having to explicitly delete the file. One ramification of this is that there is a new failure mode for mail: time limit expiration on the data file. There are several ways to address this mode: send warning or error-reporting mail back to the sender, send warning mail to the receiver, allow the sender to extend the time limit, or simply delete the file.
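The name-file indirection described above can be sketched in a few lines of Python. This is a toy in-memory model, not the real IBP API: the class, its methods, and the single opaque capability per allocation are all simplifications for illustration (real IBP issues separate read, write and manage capabilities, and enforces time limits).

```python
# Toy in-memory model of the IBP-Mail name-file indirection.
# All class and method names are illustrative, not the real IBP API.
import uuid

class IBPServer:
    """Toy IBP server: allocations addressed by opaque capabilities."""
    def __init__(self, name):
        self.name = name
        self.store = {}          # capability -> stored data

    def allocate(self):
        cap = str(uuid.uuid4())  # one opaque capability stands in for the
        self.store[cap] = None   # separate read/write/manage capabilities
        return cap

    def put(self, cap, data):
        self.store[cap] = data

    def get(self, cap):
        return self.store[cap]

# 1. Agent allocates the data file near the sender and a name file near itself.
near_sender, near_agent = IBPServer("near-sender"), IBPServer("near-agent")
data_cap = near_sender.allocate()
name_cap = near_agent.allocate()

# 2. Sender stores the attachment; agent records its location in the name file.
near_sender.put(data_cap, b"attachment bytes")
near_agent.put(name_cap, (near_sender.name, data_cap))

# 3. Receiver resolves the name file, then fetches the data it points to.
server_name, cap = near_agent.get(name_cap)
assert near_sender.get(cap) == b"attachment bytes"
```

Because the receiver only ever holds the name-file capability, the agent can later move the data file and update the name file without invalidating the name the receiver was mailed.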

5.3.3 The Current Mail Interface

There are currently two sender interfaces to IBP-Mail. The first is a command-line Unix program that works like mpack for sending MIME mail attachments. It may be obtained on the web at http://www.cs.utk.edu/~elwasif/ibp-mail. The second interface is web-based. Senders may use this interface by pointing their web browsers to the URL http://www.cs.utk.edu/~elwasif/cgi-bin/ibp-mail.cgi. This is a CGI-enabled web page that allows the sender to specify a file name, plus a message, and have the message/file sent to a set of recipients using IBP-Mail. File uploading in HTTP requires the file to go to the server of the web page; therefore the web interface requires that all attachments be uploaded to Tennessee's web server, which then acts as a proxy sender of the message. This makes the web interface less efficient than the command-line version, since all files must go through the server at Tennessee. We are working to have the web server redirect the sender to a closer web server in a manner analogous to the receiving protocol described below. We will also write a Windows 95/98/NT sending program in the near future.
The message that the receiver gets has a MIME-encoded URL that points to the IBP-NT agent and contains the capabilities of the name file. The IBP-NT agent must be running on a host that contains a web server, so that it can resolve this URL. When the receiver clicks on the URL, the IBP-NT agent is contacted through a CGI script on its web server, and it returns a web page with an HTTP redirection link. This link contains a URL for the IBP server that holds the data file. The receiver’s web browser, upon receiving the redirection, automatically attempts to get this new URL. We have written a small HTTP gateway daemon that can run on IBP hosts so that these redirection requests may be served directly by IBP. In this way, the receiver’s browser downloads the data file directly from IBP without any knowledge of IBP.
An alternative to this interface would be to provide a stand-alone application that retrieves IBP-Mail files, which could be launched as a special MIME type by MIME enabled mailers. Or we could modify a mailer such as Pine or MH to recognize IBP-Mail attachments and download the files accordingly. We chose the web-based alternative so that users can receive IBP-Mail files without needing to install any code; indeed, they do not even have to understand what IBP-Mail is.

ROUTING PROTOCOLS FOR AD HOC WIRELESS LANs


Posted by mady | Posted in | Posted on 6:03 AM



ABSTRACT



The idea of forming an ad-hoc, on-the-fly network of mobile devices opens up an exciting new world of possibilities. Because ad-hoc networks do not need any pre-existing infrastructure, they can solve many interesting problems of spontaneous link establishment, i.e. communication on the fly. In such situations, ad-hoc networks have a clear advantage over classic, wire-bound connections.

An ad-hoc mobile network is a collection of mobile nodes that are dynamically and arbitrarily located in such a manner that the interconnections between nodes are capable of changing on a continual basis. In order to facilitate communication within the network, a routing protocol is used to discover routes between nodes. The primary goal of such an ad-hoc network routing protocol is correct and efficient route establishment between a pair of nodes so that messages may be delivered in a timely manner. Route construction should be done with a minimum of overhead and bandwidth consumption.

This report examines firstly the mathematical dynamism of such ad hoc networks, which spawns the need for a different approach towards routing. Then it goes on to explain various routing protocols for ad-hoc networks and evaluates these protocols based on a given set of parameters. The paper provides an overview of various protocols by presenting their characteristics and functionality, and then provides a comparison and discussion of their respective merits and drawbacks.

1. INTRODUCTION


The widespread reliance on networking in business and the meteoric growth of the Internet and online services are strong testimonies to the benefits of shared data and resources. With Wireless LANs, users can access such shared information without having to look for a place to plug in, and network managers can set up or augment networks without installing or moving wires. Thus Wireless LANs have gained considerable popularity in a variety of industries and applications over the last few years: health-care, retail, manufacturing, warehousing, and academics, to name a few. These industries have profited from the productivity gains of using hand-held terminals and notebook computers to transmit real-time information to centralized hosts for processing.

Today Wireless LANs are becoming more widely recognized as a general-purpose connectivity alternative for a broad range of business customers and are expected to diversify in terms of both coverage and revenue returns in due course of time. Let us look at what they are all about.

1.1 Local Area Network (LAN)

A network is defined as a set of independent computing entities (such as workstations, mini computers, standalone printers etc.) that are equipped to communicate with each other.

By definition, a Local Area Network is “local”, i.e. limited in its physical extent. Thus a Local Area Network consists of the aforementioned computing entities restricted to a certain geographical range. A LAN provides fast, flexible and economical movement of data between systems.

1.2 Wireless LAN (WLAN)

Wireless LANs provide all the functionality of wired LANs, but without the physical constraints of the wire itself. A wireless LAN (WLAN) is a flexible data communication system implemented as an extension to, or an alternative for, a wired LAN within a building, campus or any small area (the limits on the size of this area depend on the wireless technology used). Wireless LANs combine data connectivity with user mobility and, through simplified configuration, enable movable LANs.

Wireless LANs use electromagnetic airwaves (radio and infrared) to communicate information from one point to another without relying on any physical connection. Radio waves are often referred to as radio carriers because they simply perform the function of delivering energy to a remote receiver. The data being transmitted is superimposed on the radio carrier so that it can be accurately extracted at the receiving end. This is generally referred to as modulation of the carrier by the information being transmitted. Once data is superimposed (modulated) onto the radio carrier, the radio signal occupies more than a single frequency, since the frequency or bit rate of the modulating information adds to the carrier.

Wireless LANs offer the following productivity, convenience, and cost advantages over traditional wired networks:

§ Mobility
§ Installation Speed and Simplicity
§ Installation Flexibility
§ Reduced cost of Ownership
§ Scalability



1.3 Wireless LAN Configurations

Depending on the connection hardware used, Wireless LANs can exist in two configurations:

§ Independent (Ad hoc) Wireless LANs
§ Infrastructure Wireless LANs

1.3.1 Independent (Ad hoc) Wireless LAN

Wireless LANs can be simple or complex. At its most basic, two PCs equipped with wireless adapter cards can set up an independent network whenever they are within range of one another. This is called a peer-to-peer network. On-demand networks such as in this example require no administration or pre-configuration. Several mobile nodes (e.g. notebook computers) may get together in a small area (e.g. a conference room) and establish peer-to-peer communications among themselves without the help of any infrastructure such as a wired/wireless backbone. In this case each client has access only to the resources of the other clients, and not to a central server.

Example applications of Ad hoc networks are emergency search-and-rescue operations, meetings or conventions in which persons wish to quickly share information, and data acquisition operations in inhospitable terrains.

Fig. 1.1 Ad hoc network of three mobile devices

Installing an access point can extend the range of an ad hoc network, effectively doubling the range at which the devices can communicate.

1.3.2 Infrastructure LANs

Despite the exciting research avenues and applications of ad-hoc networking, some applications might require communications with services located in a pre-existing infrastructure.



Fig. 1.2 Infrastructure LANs using multiple access points

Such an infrastructure is typically a higher-speed wired (or wireless) backbone. Therefore we can divide typical network traffic into two directions: uplink (into the backbone) and downlink (from the backbone). The contact points to the backbone are called access points. The access points can be either base stations for wired infrastructures or wireless bridges for wireless infrastructures. Repeaters may also be used for enlarging the coverage area of communication.

1.4 Microcells, Roaming and Handoffs

Wireless communication is limited by how far signals carry for a given power output. WLANs use cells, called microcells, similar to the cellular telephone system, to extend the range of wireless connectivity. At any point in time, a mobile PC equipped with a WLAN adapter is associated with a single access point and its microcell, or area of coverage. Individual microcells overlap to allow continuous communication within the wired network (Fig. 1.3). They handle low-power signals and hand off users as they roam through a given geographic area.

Fig. 1.3 Handoff of the WLAN Connection Between Access Points

Cellular structures are adopted to increase the effective total bandwidth by using different frequencies in different microcells. This concept is known as frequency reuse. As a result of frequency reuse, the total available communication bandwidth for all users is much larger than the transmission speed. A function that allows a mobile node to communicate with the access point in a cell and then switch to the access point in another cell is called handoff or handover. The purpose of the handoff is to keep continuous or seamless service to mobile nodes through different cell coverages. Handoff is consequently a special feature to deal with the mobility issue for wireless networks.
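As a rough illustration of the association step that underlies roaming, a mobile can simply pick whichever access point it currently hears at the strongest signal level. The function and the dBm figures below are invented for illustration; real handoff logic also applies hysteresis and thresholds to avoid ping-ponging between access points.

```python
def associate(signal_strengths):
    """Pick the access point heard at the strongest (least negative) dBm level.

    signal_strengths maps access point names to received signal strength in dBm.
    """
    return max(signal_strengths, key=signal_strengths.get)

# As the mobile roams, AP2's signal overtakes AP1's and a handoff occurs.
assert associate({"AP1": -55, "AP2": -80}) == "AP1"
assert associate({"AP1": -75, "AP2": -60}) == "AP2"
```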


2. AD HOC NETWORK ROUTING CONSIDERATIONS


An ad hoc network is the cooperative engagement of a collection of mobile nodes without the required intervention of any centralized access point or existing infrastructure. In ad hoc networks all nodes are mobile and can be connected dynamically in an arbitrary manner. All nodes of these networks behave as routers and take part in discovery and maintenance of routes to other nodes in the network.

2.1 Applications of Ad hoc Networks

Akin to packet radio networks, ad-hoc wireless networks have an important role to play in military applications. Soldiers equipped with multi-mode mobile communicators can now communicate in an ad-hoc manner, without the need for fixed wireless base stations. In addition, small vehicular devices equipped with audio sensors and cameras can be deployed at targeted regions to collect important location and environmental information which will be communicated back to a processing node via ad-hoc mobile communications. Ship-to-ship ad-hoc mobile communication is also desirable since it provides alternate communication paths without reliance on ground- or space-based communication infrastructures.

Commercial scenarios for ad-hoc wireless networks include:
§ conferences/meetings/lectures,
§ emergency services and
§ law enforcement.

People today attend meetings and conferences with their laptops, palmtops and notebooks. It is therefore attractive to have instant network formation, in addition to file and information sharing without the presence of fixed base stations and systems administrators. A presenter can multicast slides and audio to intended recipients. Attendees can ask questions and interact on a commonly-shared white board. Ad-hoc mobile communication is particularly useful in relaying information (status, situation awareness, etc.) via data, video and/or voice from one rescue team member to another over a small handheld or wearable wireless device. Again, this applies to law enforcement personnel as well.

2.2 Routing Protocols

A routing protocol is needed whenever a packet must be handed over several nodes to arrive at its destination. A routing protocol has to find a route and see that the packet is delivered to the correct destination. These protocols can be classified along three axes:

· Centralized or distributed.
· Adaptive or static.
· Reactive or proactive or hybrid.

When a routing protocol is centralized, all decisions are made at a central node, whereas in a distributed routing protocol all nodes share the routing decision. An adaptive protocol may change its behavior according to network status, such as congestion on a link or many other possible factors. A reactive protocol takes actions such as discovering routes only when they are needed, whereas a proactive protocol discovers routes before they are needed. Reactive methods are also called on-demand routing protocols; since they run on demand, their control packet overhead is greatly reduced. Proactive methods keep tables of routes and maintain those tables periodically. Hybrid methods combine both approaches; the Zone Routing Protocol is an example of a hybrid method. Associativity-Based Routing (ABR) is an adaptive protocol, where associativity relates to the spatial, temporal and connection stability of a mobile host.

2.3 Routing Considerations

Ad hoc networks differ significantly from existing networks. First of all, the topology of interconnections may be quite dynamic. Secondly, most users will not wish to perform any administrative actions to set up such a network. Moreover, in order to provide service in the most general situation, one cannot assume that every computer is within communication range of every other computer. This lack of complete connectivity would certainly be a reasonable characteristic of, say, a population of mobile computers in a large room which relied on infrared transceivers to effect their data communications.

Currently, there is no method available which enables mobile computers with wireless data communications equipment to freely roam about while still maintaining connections with each other, unless special assumptions are made about the way the computers are situated with respect to each other. One mobile computer may often be able to exchange data with two other mobile computers which cannot themselves directly exchange data. As a result, computer users in a conference room may be unable to predict which of their associates’ computers could be relied upon to maintain network connection, especially as the users moved from place to place within the room.

Routing protocols for existing networks have not been designed specifically to provide the kind of dynamic, self-starting behavior needed for ad-hoc networks. Most protocols exhibit their least desirable behavior when presented with a highly dynamic interconnection topology. Although mobile computers could naturally be modeled as routers, it is also clear that existing routing protocols would place too heavy a computational burden on each mobile computer. Moreover, the convergence characteristics of existing routing protocols did not seem good enough to fit the needs of ad-hoc networks.

Lastly, the wireless medium differs in important ways from wired media, which would require that we make modifications to whichever routing protocol we might choose to experiment with. For instance, mobile computers may well have only a single network interface adapter, whereas most existing routers have network interfaces to connect two separate networks together. Besides, wireless media are of limited and variable range, in distinction to existing wired media.


2.4 Existing Ad hoc Routing Protocols

Since the advent of DARPA packet radio networks in the early 1970s, numerous protocols have been developed for ad-hoc mobile networks. Such protocols must deal with the typical limitations of these networks, which include high power consumption, low bandwidth, and high error rates.




Fig. 2.1 Categorization of Ad hoc Routing Protocols



As shown in Figure 2.1, these routing protocols may generally be categorized as:

(a) table-driven and
(b) source-initiated on-demand driven.

Solid lines in this figure represent direct descendants while dotted lines depict logical descendants. Despite being designed for the same type of underlying network, the characteristics of each of these protocols are quite distinct. The following chapters describe these routing protocols in detail.


3. TABLE-DRIVEN ROUTING PROTOCOLS


The table-driven routing protocols attempt to maintain consistent, up-to-date routing information from each node to every other node in the network. These protocols require each node to maintain one or more tables to store routing information, and they respond to changes in network topology by propagating updates throughout the network in order to maintain a consistent network view. The areas where they differ are the number of necessary routing-related tables and the methods by which changes in network structure are broadcast. The following sections discuss some of the existing table-driven ad-hoc routing protocols.

3.1 Destination-Sequenced Distance-Vector Routing (DSDV)

The Destination-Sequenced Distance-Vector Routing (DSDV) protocol is a table-driven algorithm based on the classical Bellman-Ford routing mechanism [14]. The improvements made to the Bellman-Ford algorithm include freedom from loops in routing tables.

Every mobile node in the network maintains a routing table in which all of the possible destinations within the network and the number of hops to each destination are recorded. Each entry is marked with a sequence number assigned by the destination node. The sequence numbers enable the mobile nodes to distinguish stale routes from new ones, thereby avoiding the formation of routing loops. Routing table updates are periodically transmitted throughout the network in order to maintain table consistency.

To help alleviate the potentially large amount of network traffic that such updates can generate, route updates can employ two possible types of packets. The first is known as a full dump. This type of packet carries all available routing information and can require multiple network protocol data units (NPDUs). During periods of occasional movement, these packets are transmitted infrequently. Smaller incremental packets are used to relay only that information which has changed since the last full dump. Each of these broadcasts should fit into a standard size NPDU, thereby decreasing the amount of traffic generated.

The mobile nodes maintain an additional table where they store the data sent in the incremental routing information packets. New route broadcasts contain the address of the destination, the number of hops to reach the destination, the sequence number of the information received regarding the destination, as well as a new sequence number unique to the broadcast [11]. The route labeled with the most recent sequence number is always used. In the event that two updates have the same sequence number, the route with the smaller metric is used in order to optimize (shorten) the path. Mobiles also keep track of the settling time of routes, or the weighted average time that routes to a destination will fluctuate before the route with the best metric is received (see [11]). By delaying the broadcast of a routing update by the length of the settling time, mobiles can reduce network traffic and optimize routes by eliminating those broadcasts that would occur if a better route was discovered in the very near future.
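The core DSDV table-update rule described above (the freshest sequence number wins, with ties broken by the smaller metric) can be sketched as follows. The field and variable names are illustrative, not taken from the DSDV specification.

```python
# Sketch of the DSDV route-update decision: prefer the freshest destination
# sequence number, and break ties on the smaller hop-count metric.

def dsdv_update(table, dest, seq, metric, next_hop):
    """Install an advertised route if it is newer, or equally new but shorter."""
    current = table.get(dest)
    if current is None or seq > current["seq"] or (
            seq == current["seq"] and metric < current["metric"]):
        table[dest] = {"seq": seq, "metric": metric, "next_hop": next_hop}

table = {}
dsdv_update(table, "D", seq=100, metric=3, next_hop="B")
dsdv_update(table, "D", seq=100, metric=2, next_hop="C")  # same seq, shorter: wins
dsdv_update(table, "D", seq=98,  metric=1, next_hop="A")  # stale seq: ignored
assert table["D"] == {"seq": 100, "metric": 2, "next_hop": "C"}
```

Because a stale advertisement can never displace a fresher sequence number, routes that loop back through out-of-date state are rejected, which is how DSDV stays loop-free.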

3.2 Clusterhead Gateway Switch Routing (CGSR)

The Clusterhead Gateway Switch Routing (CGSR) protocol differs from the previous protocol in the type of addressing and network organization scheme employed. Instead of a flat network, CGSR uses a clustered multihop mobile wireless network with several heuristic routing schemes. By having a cluster head control a group of ad-hoc nodes, a framework for code separation (among clusters), channel access, routing and bandwidth allocation can be achieved. A cluster head selection algorithm is utilized to elect a node as the cluster head using a distributed algorithm within the cluster. The disadvantage of a cluster head scheme is that frequent cluster head changes can adversely affect routing protocol performance, since nodes are busy with cluster head selection rather than packet relaying. Hence, instead of invoking cluster head reselection every time the cluster membership changes, a Least Cluster Change (LCC) clustering algorithm is introduced. Using LCC, cluster heads only change when two cluster heads come into contact, or when a node moves out of contact of all other cluster heads.


Fig. 3.1 CGSR: Routing from Node 1 to Node 8.

CGSR uses DSDV as the underlying routing scheme, and hence has much of the same overhead as DSDV. However, it modifies DSDV by using a hierarchical cluster head-to-gateway routing approach to route traffic from source to destination. Gateway nodes are nodes that are within communication range of two or more cluster heads. A packet sent by a node is first routed to its cluster head, and then the packet is routed from the cluster head to a gateway to another cluster head, and so on until the cluster head of the destination node is reached. The packet is then transmitted to the destination. Figure 3.1 illustrates an example of this routing scheme. Using this method, each node must keep a cluster member table where it stores the destination cluster head for each mobile node in the network. These cluster member tables are broadcast by each node periodically using the DSDV algorithm. Nodes update their cluster member tables on the reception of such a table from a neighbor.

In addition to the cluster member table, each node must also maintain a routing table, which is used to determine the next hop in order to reach the destination. On receiving a packet, a node consults its cluster member table and routing table to determine the nearest cluster head along the route to the destination. Next the node checks its routing table to determine the next hop toward the selected cluster head, and transmits the packet to that node.
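The two-table lookup that CGSR forwarding performs can be sketched as follows; the table contents and node names are invented for illustration.

```python
# Sketch of CGSR forwarding: look up the destination's cluster head in the
# cluster member table, then the next hop toward that head in the routing table.

def cgsr_next_hop(cluster_members, routing_table, dest):
    head = cluster_members[dest]   # cluster head serving the destination
    return routing_table[head]     # next hop on the route to that head

cluster_members = {"node8": "head3"}   # destination -> its cluster head
routing_table = {"head3": "gateway2"}  # cluster head -> next hop toward it
assert cgsr_next_hop(cluster_members, routing_table, "node8") == "gateway2"
```

The indirection through the cluster head is what keeps the routing table small: a node only needs next hops toward cluster heads, not toward every mobile in the network.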

3.3 The Wireless Routing Protocol (WRP)

The Wireless Routing Protocol (WRP) is a table-based protocol with the goal of maintaining routing information among all nodes in the network. Each node in the network is responsible for maintaining four tables:

1. distance table,
2. routing table,
3. link-cost table, and
4. message retransmission list (MRL) table.

Each entry of the MRL contains the sequence number of the update message, a retransmission counter, an acknowledgment-required flag vector with one entry per neighbor, and a list of updates sent in the update message. The MRL records which updates in an update message need to be retransmitted and which neighbors should acknowledge the retransmission.

Mobiles inform each other of link changes through the use of update messages. An update message is sent only between neighboring nodes and contains a list of updates (the destination, the distance to the destination, and the predecessor of the destination), as well as a list of responses indicating which mobiles should acknowledge (ACK) the update. Mobiles send update messages after processing updates from neighbors or detecting a change in a link to a neighbor. In the event of the loss of a link between two nodes, the nodes send update messages to their neighbors. The neighbors then update their distance table entries and check for new possible paths through other nodes. Any new paths are relayed back to the original nodes so that they can update their tables accordingly.

Nodes learn of the existence of their neighbors from the receipt of acknowledgments and other messages. If a node is not sending messages, it must send a hello message within a specified time period to ensure connectivity. Otherwise, the lack of messages from the node indicates the failure of that link; however, this may be a false alarm. When a mobile receives a hello message from a new node, that new node is added to the mobile's routing table, and the mobile sends the new node a copy of its routing table information.

Part of the novelty of WRP stems from the way in which it achieves loop freedom. In WRP, routing nodes communicate the distance and the second-to-last hop for each destination in the wireless network. WRP belongs to the class of path-finding algorithms, with an important exception: it avoids the count-to-infinity problem [1] by forcing each node to perform consistency checks of the predecessor information reported by all its neighbors. This ultimately (though not instantaneously) eliminates looping situations and provides faster route convergence when a link failure occurs.
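One way to picture the predecessor consistency check is as a walk back along the path implied by a neighbor's reported second-to-last hops: if that path would pass through the checking node itself, accepting the route would create a loop. The sketch below is a simplified illustration of this idea, not WRP's actual message processing.

```python
# Simplified loop check inspired by WRP: reject a route through a neighbor if
# the predecessor (second-to-last hop) chain that the neighbor reports for the
# destination already contains the checking node itself.

def path_is_loop_free(me, dest, predecessors):
    """predecessors maps each destination to its reported second-to-last hop."""
    hop, seen = dest, set()
    while hop in predecessors and hop not in seen:  # walk back along the path
        seen.add(hop)
        hop = predecessors[hop]
        if hop == me:          # the path would pass back through us: a loop
            return False
    return True

# Neighbor reports a path to D ending ... -> B -> C -> D: no loop through "me".
assert path_is_loop_free("me", "D", {"D": "C", "C": "B"}) is True
# Neighbor reports a path ending ... -> me -> C -> D: accepting it would loop.
assert path_is_loop_free("me", "D", {"D": "C", "C": "me"}) is False
```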


4. SOURCE-INITIATED ON-DEMAND ROUTING


A different approach from table-driven routing is source-initiated on-demand routing. This type of routing creates routes only when desired by the source node. When a node requires a route to a destination, it initiates a route discovery process within the network. This process is completed once a route is found or all possible route permutations have been examined. Once a route has been established, it is maintained by some form of route maintenance procedure until either the destination becomes inaccessible along every path from the source or until the route is no longer desired.

4.1 Ad-hoc On-Demand Distance Vector Routing (AODV)

The Ad-hoc On-Demand Distance Vector (AODV) routing protocol described in [7] builds on the DSDV algorithm previously described. AODV is an improvement on DSDV because it typically minimizes the number of required broadcasts by creating routes on an on-demand basis, as opposed to maintaining a complete list of routes as in the DSDV algorithm. The authors of AODV classify it as a pure on-demand route acquisition system, as nodes that are not on a selected path do not maintain routing information or participate in routing table exchanges.

When a source node desires to send a message to some destination node and does not already have a valid route to that destination, it initiates a Path Discovery process to locate the other node. It broadcasts a route request (RREQ) packet to its neighbors, which then forward the request to their neighbors, and so on, until either the destination or an intermediate node with a fresh enough route to the destination is located. Figure 4.1(a) illustrates the propagation of the broadcast RREQs across the network. AODV utilizes destination sequence numbers to ensure all routes are loop-free and contain the most recent route information. Each node maintains its own sequence number, as well as a broadcast ID. The broadcast ID is incremented for every RREQ the node initiates, and together with the node's IP address, uniquely identifies a RREQ. Along with its own sequence number and the broadcast ID, the source node includes in the RREQ the most recent sequence number it has for the destination. Intermediate nodes can reply to the RREQ only if they have a route to the destination whose corresponding destination sequence number is greater than or equal to that contained in the RREQ.


Fig. 4.1 AODV Route Discovery


During the process of forwarding the RREQ, intermediate nodes record in their route tables the address of the neighbor from which the first copy of the broadcast packet is received, thereby establishing a reverse path. If additional copies of the same RREQ are later received, these packets are discarded. Once the RREQ reaches the destination or an intermediate node with a fresh enough route, the destination/intermediate node responds by unicasting a route reply (RREP) packet back to the neighbor from which it first received the RREQ [Figure 4.1(b)]. As the RREP is routed back along the reverse path, nodes along this path set up forward route entries in their route tables which point to the node from which the RREP came. These forward route entries indicate the active forward route. Associated with each route entry is a route timer which will cause the deletion of the entry if it is not used within the specified lifetime. Because the RREP is forwarded along the path established by the RREQ, AODV only supports the use of symmetric links.
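The reverse-path construction described above can be simulated with a breadth-first flood over a graph of nodes: each node records the neighbor from which it first received the RREQ, later duplicates are dropped, and the RREP then retraces those records back to the source. The graph and node names below are invented for illustration, and sequence numbers are omitted for brevity.

```python
# Toy flood of an AODV RREQ over an undirected graph. Each node remembers the
# neighbor it first heard the request from (the reverse path); the RREP
# retraces that path, yielding the forward route.
from collections import deque

def aodv_discover(graph, src, dest):
    reverse = {src: None}            # node -> neighbor it first heard RREQ from
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dest:
            break
        for nbr in graph[node]:
            if nbr not in reverse:   # first copy wins; later copies are dropped
                reverse[nbr] = node
                queue.append(nbr)
    if dest not in reverse:
        return None                  # no route found
    # "Unicast" the RREP back along the reverse path, collecting the route.
    path, hop = [], dest
    while hop is not None:
        path.append(hop)
        hop = reverse[hop]
    return list(reversed(path))

graph = {"S": ["A", "B"], "A": ["S", "D"], "B": ["S", "D"], "D": ["A", "B"]}
route = aodv_discover(graph, "S", "D")
assert route[0] == "S" and route[-1] == "D" and len(route) == 3
```

Note how retracing the reverse pointers assumes every link can be traversed in both directions, which is exactly why AODV, as stated above, only supports symmetric links.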

Routes are maintained as follows. If a source node moves, it is able to reinitiate the route discovery protocol to find a new route to the destination. If a node along the route moves, its upstream neighbor notices the move and propagates a link failure notification message (an RREP with infinite metric) to each of its active upstream neighbors to inform them of the erasure of that part of the route. These nodes in turn propagate the link failure notification to their upstream neighbors, and so on until the source node is reached. The source node may then choose to re-initiate route discovery for that destination if a route is still desired.

An additional aspect of the protocol is the use of hello messages, periodic local broadcasts by a node to inform each mobile node of other nodes in its neighborhood. Hello messages can be used to maintain the local connectivity of a node. However the use of hello messages is not required. Nodes listen for retransmissions of data packets to ensure the next hop is still within reach. If such a retransmission is not heard, the node may use any one of a number of techniques, including the reception of hello messages, to determine whether the next hop is within communication range. The hello messages may list the other nodes from which a mobile has heard, thereby yielding a greater knowledge of the network connectivity.

4.2 Dynamic Source Routing (DSR)

The Dynamic Source Routing (DSR) protocol is an on-demand routing protocol that is based on the concept of source routing. Mobile nodes are required to maintain route caches that contain the source routes of which the mobile is aware. Entries in the route cache are continually updated as new routes are learned. The protocol consists of two major phases: route discovery and route maintenance. When a mobile node has a packet to send to some destination, it first consults its route cache to determine whether it already has a route to the destination. If it has an unexpired route to the destination, it will use this route to send the packet. On the other hand, if the node does not have such a route, it initiates route discovery by broadcasting a route request packet. This route request contains the address of the destination, along with the source node's address and a unique identification number.



Fig 4.2 Creation of the Route Record in DSR

Each node receiving the packet checks whether it knows of a route to the destination. If it does not, it adds its own address to the route record of the packet and then forwards the packet along its outgoing links. To limit the number of route requests propagated on the outgoing links of a node, a mobile only forwards the route request if the request has not yet been seen by the mobile and if the mobile's address does not already appear in the route record.

A route reply is generated when either the route request reaches the destination itself, or when it reaches an intermediate node which contains in its route caches an unexpired route to the destination. By the time the packet reaches either the destination or such an intermediate node, it contains a route record yielding the sequence of hops taken. Figure 4.2(a) illustrates the formation of the route record as the route request propagates through the network. If the node generating the route reply is the destination, it places the route record contained in the route request into the route reply. If the responding node is an intermediate node, it will append its cached route to the route record and then generate the route reply. To return the route reply, the responding node must have a route to the initiator. If it has a route to the initiator in its route cache, it may use that route. Otherwise, if symmetric links are supported, the node may reverse the route in the route record. If symmetric links are not supported, the node may initiate its own route discovery and piggyback the route reply on the new route request. Figure 4.2(b) shows the transmission of the route reply with its associated route record back to the source node.
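The accumulation of the route record can be sketched as below. This is a simplified model (it relies only on the route-record check, so it enumerates all loop-free paths; real DSR additionally suppresses duplicates by the request's unique identification number):

```python
def dsr_discover(graph, source, destination):
    """Flood a route request; return every route record that reaches the destination."""
    routes = []

    def forward(node, record):
        record = record + [node]             # each node appends its own address
        if node == destination:
            routes.append(record)            # destination copies record into the reply
            return
        for neighbor in graph[node]:
            if neighbor not in record:       # address already in record: drop request
                forward(neighbor, record)

    forward(source, [])
    return routes
```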

Route maintenance is accomplished through the use of route error packets and acknowledgments. Route error packets are generated at a node when the data link layer encounters a fatal transmission problem. When a route error packet is received, the hop in error is removed from the node's route cache and all routes containing the hop are truncated at that point. In addition to route error messages, acknowledgments are used to verify the correct operation of the route links. Such acknowledgments include passive acknowledgments, where a mobile is able to hear the next hop forwarding the packet along the route.
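The cache-truncation rule for route errors can be sketched as follows (the list-of-routes cache structure is an illustrative assumption):

```python
def apply_route_error(cache, bad_hop):
    """Truncate every cached source route at the failed hop (u, v)."""
    u, v = bad_hop
    cleaned = []
    for route in cache:
        for i in range(len(route) - 1):
            if (route[i], route[i + 1]) == (u, v):
                route = route[:i + 1]        # keep only the prefix before the failed link
                break
        cleaned.append(route)
    return cleaned
```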

4.3 Temporally-Ordered Routing Algorithm (TORA)

TORA (Temporally-Ordered Routing Algorithm) is a highly adaptive, loop-free, distributed routing algorithm based on the concept of link reversal. TORA is proposed to operate in a highly dynamic mobile networking environment. It is source-initiated and provides multiple routes for any desired source/destination pair. The key design concept of TORA is the localization of control messages to a very small set of nodes near the occurrence of a topological change. To accomplish this, nodes need to maintain routing information about adjacent (1-hop) nodes. The protocol performs three basic functions:

I. route creation,
II. route maintenance, and
III. route erasure.
During the route creation and maintenance phases, nodes use a height metric to establish a directed acyclic graph (DAG) rooted at the destination. Thereafter, links are assigned a direction (upstream or downstream) based on the relative height metric of neighboring nodes, as shown in Figure 4.3(a). This process of establishing a DAG is similar to the query/reply process proposed in LMR (Lightweight Mobile Routing). In times of node mobility, the DAG route is broken and route maintenance is necessary to re-establish a DAG rooted at the same destination. As shown in Figure 4.3(b), upon failure of the last downstream link, a node generates a new reference level, which is propagated by neighboring nodes, effectively coordinating a structured reaction to the failure. Links are reversed to reflect the change in adapting to the new reference level; this has the same effect as reversing the direction of one or more links when a node has no downstream links.


Fig 4.3 (a) Route creation (showing link direction assignment), (b) Route Maintenance (showing link reversal phenomenon) in TORA.

Timing is an important factor for TORA because the height metric is dependent on the logical time of a link failure; TORA assumes all nodes have synchronized clocks (accomplished via an external time source such as the Global Positioning System). TORA's height metric is a quintuple consisting of:

1. logical time of a link failure,
2. the unique ID of the node that defined the new reference level,
3. a reflection indicator bit,
4. a propagation ordering parameter, and
5. the unique ID of the node.

The first three elements collectively represent the reference level. A new reference level is defined each time a node loses its last downstream link due to a link failure. TORA's route erasure phase essentially involves flooding a broadcast clear packet (CLR) throughout the network to erase invalid routes.
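Because heights are quintuples, they can be compared lexicographically, and each link is directed from the higher node toward the lower (downstream) node. A minimal sketch, with illustrative values:

```python
from collections import namedtuple

# (logical time of failure, originator id, reflection bit, ordering parameter, node id)
Height = namedtuple('Height', 'tau oid r delta node_id')

def link_direction(h_a, h_b):
    """Links point from the higher height to the lower; tuples compare lexicographically."""
    return 'a->b' if h_a > h_b else 'b->a'

# The destination is the DAG root and holds the lowest height.
h_node = Height(tau=0, oid=0, r=0, delta=2, node_id=5)
h_dest = Height(tau=0, oid=0, r=0, delta=0, node_id=0)
```

When a node loses its last downstream link, it picks a new reference level (a larger first three elements), which makes its height greater than its neighbors' and thereby reverses the incident links.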

In TORA, there is a potential for oscillations to occur, especially when multiple sets of coordinating nodes are concurrently detecting partitions, erasing routes, and building new routes based on each other. Because TORA uses internodal coordination, its instability problem is similar to the count-to-infinity problem in distance-vector routing protocols, except that such oscillations are temporary and route convergence will ultimately occur.

4.4 Associativity-Based Routing (ABR)

The Associativity-Based Routing (ABR) protocol is free from loops, deadlock, and packet duplicates, and defines a new routing metric for ad-hoc mobile networks known as the degree of association stability. In ABR, a route is selected based on the degree of association stability of mobile nodes. Each node periodically generates a beacon to signify its existence. When these beacons are received by neighboring nodes, their associativity tables are updated: for each beacon received, the associativity tick of the current node with respect to the beaconing node is incremented. Association stability is defined by the connection stability of one node with respect to another node over time and space. A high degree of association stability may indicate a low state of node mobility, while a low degree may indicate a high state of node mobility. Associativity ticks are reset when the neighbors of a node, or the node itself, move out of proximity. A fundamental objective of ABR is to derive longer-lived routes for ad-hoc mobile networks.



Fig 4.4 Route Maintenance for Source and Destination Movement in ABR.

The three phases of ABR are:

1. route discovery,
2. route re-construction (RRC), and
3. route deletion.

The route discovery phase is accomplished by a broadcast query and await-reply (BQ-REPLY) cycle. A node desiring a route broadcasts a BQ message in search of mobiles that have a route to the destination. All nodes receiving the query (that are not the destination) append their addresses and their associativity ticks with their neighbors along with QoS information to the query packet. A successor node erases its upstream node neighbors' associativity tick entries and retains only the entry concerned with itself and its upstream node. In this way, each resultant packet arriving at the destination will contain the associativity ticks of the nodes along the route to the destination. The destination is then able to select the best route by examining the associativity ticks along each of the paths. In the case where multiple paths have the same overall degree of association stability, the route with the minimum number of hops is selected. The destination then sends a REPLY packet back to the source along this path. Nodes propagating the REPLY mark their routes as valid. All other routes remain inactive and the possibility of duplicate packets arriving at the destination is avoided.
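The destination's selection rule (highest overall association stability, ties broken by fewest hops) can be sketched as follows; the candidate structure is an illustrative assumption:

```python
def select_route(candidates):
    """Pick the best route from (route, associativity_ticks_along_route) pairs.

    Maximizes total associativity ticks; among equally stable routes,
    picks the one with the minimum number of hops.
    """
    def key(candidate):
        route, ticks = candidate
        return (-sum(ticks), len(route) - 1)   # (-stability, hop count)
    return min(candidates, key=key)[0]
```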

Route re-construction may consist of partial route discovery, invalid route erasure, valid route updates, and new route discovery, depending on which node(s) along the route move. Movement by the source results in a new BQ-REPLY process, as shown in Figure 4.4(a). The RN[1] message is a route notification used to erase the route entries associated with downstream nodes. When the destination node moves, the immediate upstream node erases its route and determines whether the node is still reachable by a localized query (LQ[H]) process, where H refers to the hop count from the upstream node to the destination (Figure 4.4(b)). If the destination receives the LQ packet, it replies with the best partial route; otherwise, the initiating node times out and the process backtracks to the next upstream node. Here an RN[0] message is sent to the next upstream node to erase the invalid routes and to inform this node that it should invoke the LQ[H] process. If this process results in backtracking more than halfway to the source, the LQ process is discontinued and a new BQ process is initiated at the source.

When a discovered route is no longer desired, the source node initiates a route delete (RD) broadcast so that all nodes along the route update their routing tables. The RD message is propagated by a full broadcast, as opposed to a directed broadcast, because the source node may not be aware of any route node changes that occurred during route re-constructions.



4.5 Signal Stability Routing (SSR)

Another on-demand protocol is the Signal Stability based Adaptive Routing protocol (SSR). Unlike the algorithms described so far, SSR selects routes based on the signal strength between nodes and on a node's location stability. This route selection criterion has the effect of choosing routes that have stronger connectivities. SSR can be divided into two cooperative protocols: the Dynamic Routing Protocol (DRP) and the Static Routing Protocol (SRP).

The DRP is responsible for the maintenance of the Signal Stability Table (SST) and the Routing Table (RT). The SST records the signal strength of neighboring nodes, which is obtained by periodic beacons from the link layer of each neighboring node. The signal strength may be recorded as either a strong or weak channel. All transmissions are received by, and processed in, the DRP. After updating all appropriate table entries, the DRP passes a received packet to the SRP.

The SRP processes a packet by passing it up the stack if the node is the intended receiver, or by looking up the destination in the RT and forwarding the packet otherwise. If no entry is found in the RT for the destination, a route-search process is initiated to find a route. Route requests are propagated throughout the network but are only forwarded to the next hop if they are received over strong channels and have not been previously processed (to prevent looping). The destination chooses the first arriving route-search packet to send back, because it is most probable that this packet arrived over the shortest and/or least congested path. The DRP then reverses the selected route and sends a route-reply message back to the initiator. The DRPs of the nodes along the path update their RTs accordingly.

Route-search packets arriving at the destination have necessarily chosen the path of strongest signal stability, as the packets are dropped at a node if they have arrived over a weak channel. If there is no route-reply message received at the source within a specific timeout period, the source changes the PREF field in the header to indicate that weak channels are acceptable, as these may be the only links over which the packet can be propagated.
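The per-node forwarding decision, including the PREF fallback, can be sketched as follows. The dBm threshold is an assumed example value, not from the protocol specification:

```python
def should_forward(signal_strength_dbm, already_processed, weak_ok=False,
                   strong_threshold_dbm=-70.0):
    """Decide whether a route-search packet is forwarded to the next hop.

    weak_ok models the PREF field the source sets after a timeout to
    indicate that weak channels are acceptable.
    """
    if already_processed:
        return False                              # duplicate: prevent looping
    if signal_strength_dbm >= strong_threshold_dbm:
        return True                               # received over a strong channel
    return weak_ok                                # weak channel only if PREF allows
```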

When a failed link is detected within the network, the intermediate nodes send an error message to the source indicating which channel has failed. The source then initiates another route-search process to find a new path to the destination. The source also sends an erase message to notify all nodes of the broken link.


5. COMPARISONS


The following sections provide comparisons of the previously described routing algorithms. Section 5.1 compares table-driven protocols, and Section 5.2 compares on-demand protocols. Section 5.3 presents a discussion of the two classes of algorithms. In Tables 5.1 and 5.2, Time Complexity is defined as the number of steps needed to perform a protocol operation, and Communication Complexity is the number of messages needed to perform a protocol operation. Also, the values for these metrics represent worst case behavior.

5.1 Table-Driven Protocols

Our discussion here will be based on Table 5.1. As stated earlier, DSDV routing is essentially a modification of the basic Bellman-Ford routing algorithm. The modifications include the guarantee of loop-free routes and a simple route update protocol. While only providing one path to any given destination, DSDV selects the shortest path based on the number of hops to the destination. DSDV provides two types of update messages, one of which is significantly smaller than the other. The smaller update message can be used for incremental updates so that the entire routing table need not be transmitted for every change in network topology. However, DSDV is inefficient because of the requirement of periodic update transmissions, regardless of the number of changes in the network topology. This effectively limits the number of nodes that can connect to the network since the overhead grows as O(N²).
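The quadratic growth of DSDV's periodic overhead follows from each of the N nodes advertising a table with N entries every update period; a back-of-the-envelope sketch (the 12-byte entry size is an assumed figure):

```python
def dsdv_overhead(num_nodes, entry_size_bytes=12):
    """Bytes of full routing-table advertisements per update period."""
    # N nodes each broadcast a table of N entries: N * N * entry_size bytes
    return num_nodes * num_nodes * entry_size_bytes
```

Doubling the number of nodes quadruples the per-period overhead, which is what limits the network size.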

In CGSR, DSDV is used as the underlying routing protocol. Routing in CGSR occurs over cluster heads and gateways. A cluster head table is necessary in addition to the routing table. One advantage of CGSR is that several heuristic methods can be employed to improve the protocol's performance. These methods include priority token scheduling, gateway code scheduling, and path reservation.

Parameters | DSDV | CGSR | WRP

Time Complexity (link addition/failure) | O(d) | O(d) | O(h)
Communication Complexity (link addition/failure) | O(x = N) | O(x = N) | O(x = N)
Routing Philosophy | Flat | Hierarchical | Flat
Loop Free | Yes | Yes | Yes, but not instantaneous
Multicast Capability | No | No | No
Number of Required Tables | Two | Two | Four
Frequency of Update Transmissions | Periodically & as needed | Periodically | Periodically & as needed
Updates Transmitted to | Neighbors | Neighbors & cluster head | Neighbors
Utilizes Sequence Numbers | Yes | Yes | Yes
Utilizes "Hello" Messages | Yes | No | Yes
Critical Nodes | No | Yes (cluster head) | No
Routing Metric | Shortest Path | Shortest Path | Shortest Path

Table 5.1: Comparisons of the Characteristics of Table-Driven Routing Protocols.

Abbreviations:
N = Number of nodes in the network
d = Network diameter
h = Height of routing tree
x = Number of nodes affected by a topological change

The WRP protocol differs from the other protocols in several ways. WRP requires each node to maintain four routing tables. This can lead to substantial memory requirements, especially when the number of nodes in the network is large. Furthermore, WRP requires the use of hello packets whenever there are no recent packet transmissions from a given node. These hello packets consume bandwidth and prevent a node from entering sleep mode. However, although it belongs to the class of path-finding algorithms, WRP has an advantage over other path-finding algorithms: by verifying predecessor information, as described in Section 3.3, it avoids the temporary routing loops those algorithms can create.

Having discussed the operation and characteristics of each of the existing table-driven routing protocols, it is important to highlight their differences. During link failures, WRP has lower time complexity than DSDV since it only informs neighboring nodes about link status changes. During link additions, hello messages are used as a presence indicator so that the routing table entry can be updated; again, this only affects neighboring nodes. In CGSR, because routing performance is dependent on the status of specific nodes (cluster head, gateway, or normal node), the time complexity of a link failure associated with a cluster head is higher than in DSDV, given the additional time needed to perform cluster head reselection. The same applies to link additions associated with the cluster head. There is no gateway selection in CGSR, since each node declares itself a gateway to its neighbors if it is responding to multiple radio codes. If a gateway node moves out of range, the routing protocol is responsible for routing the packet to another gateway.

In terms of communication complexity, since DSDV, CGSR and WRP use distance vector shortest-path routing as the underlying routing protocol, they all have the same degree of complexity during link failures and additions.

5.2 Source-Initiated On-Demand Routing Protocols

Table 5.2 presents a comparison of AODV, DSR, TORA, ABR and SSR. The AODV protocol employs a route discovery procedure similar to DSR; however, there are a couple of important distinctions. The most notable is that the overhead of DSR is potentially larger than that of AODV, since each DSR packet must carry full routing information, whereas in AODV packets need only contain the destination address. Similarly, the route replies in DSR are larger because they contain the address of every node along the route, whereas in AODV route replies need only carry the destination IP address and sequence number. Also, the memory overhead may be slightly greater in DSR because of the need to remember full routes, as opposed to only next-hop information in AODV. A further advantage of AODV is its support for multicast [7]. None of the other algorithms considered in this paper currently incorporate multicast communication. On the downside, AODV requires symmetric links between nodes, and hence cannot utilize routes with asymmetric links. In this respect, DSR is superior, as it does not require the use of such links and can utilize asymmetric links when symmetric links are not available.

The DSR algorithm is intended for networks in which the mobiles move at a moderate speed with respect to packet transmission latency. Assumptions that the algorithm makes for operation are that the network diameter is relatively small and that the mobile nodes can enable a promiscuous receive mode, whereby every received packet is delivered to the network driver software without filtering by destination address. An advantage of DSR over some of the other on-demand protocols is that DSR does not make use of periodic routing advertisements, thereby saving bandwidth and reducing power consumption. Hence the protocol does not incur any overhead when there are no changes in network topology. Additionally, DSR allows nodes to keep multiple routes to a destination in their cache. Hence, when a link on a route is broken, the source node can check its cache for another valid route. If such a route is found, route reconstruction does not need to be reinvoked. In this case, route recovery is faster than in many of the other on-demand protocols. However, if there are no additional routes to the destination in the source node's cache, route discovery must be reinitiated, as in AODV, if the route is still required.

On the other hand, because of the small diameter assumption and because of the source routing requirement, DSR is not scalable to large networks. Furthermore, as previously stated, the need to place the entire route in both route replies and data packets causes greater control overhead than in AODV.

TORA is a link reversal algorithm that is best-suited for networks with large, dense populations of nodes. Part of the novelty of TORA stems from its creation of DAGs to aid route establishment. One of the advantages of TORA is its support for multiple routes.


Parameters | AODV | DSR | TORA | ABR | SSR

Time Complexity (initialization) | O(2d) | O(2d) | O(2d) | O(d + x) | O(d + x)
Time Complexity (post failure) | O(2d) | O(2d) or 0 (cache hit) | O(2d) | O(l + x) | O(l + x)
Communication Complexity (initialization) | O(2N) | O(2N) | O(2N) | O(N + y) | O(N + y)
Communication Complexity (post failure) | O(2N) | O(2N) | O(2x) | O(x + y) | O(x + y)
Routing Philosophy | Flat | Flat | Flat | Flat | Flat
Loop Free | Yes | Yes | Yes | Yes | Yes
Multicast Capability | Yes | No | No | No | No
Beaconing Requirements | No | No | No | Yes | Yes
Multiple Route Possibilities | No | Yes | Yes | No | No
Routes Maintained in | Route table | Route cache | Route table | Route table | Route table
Utilizes Route Cache / Expiration Timers | Yes | Yes | No | No | No
Route Reconfiguration Methodology | Erase route; notify source | Erase route; notify source | Link reversal; route repair | Localized broadcast query | Erase route; notify source
Routing Metric | Freshest & shortest path | Shortest path | Shortest path | Associativity & shortest path | Associativity & stability

Table 5.2: Comparisons of the Characteristics of Source-Initiated On-Demand Ad-Hoc Routing Protocols.

Abbreviations:
l = Diameter of the affected network segment
y = Total number of nodes forming the directed path where the REPLY packet transits
z = Diameter of the directed path where the REPLY packet transits
TORA and DSR are the only on-demand protocols considered here which retain multiple route possibilities for a single source/destination pair. Route reconstruction is not necessary until all known routes to a destination are deemed invalid, so bandwidth can potentially be conserved because fewer route rebuildings are needed.

Another advantage of TORA is its support for multicast. Unlike AODV, TORA does not incorporate multicast into its basic operation; however, it functions as the underlying protocol for the Lightweight Adaptive Multicast Algorithm (LAM), and together the two protocols provide multicast capability. TORA's reliance on synchronized clocks, while a novel idea, inherently limits its applicability: if a node does not have a GPS receiver or some other external time source, it cannot use the algorithm, and if the external time source fails, the algorithm will cease to operate. Further, route rebuilding in TORA may not occur as quickly as in the other algorithms due to the potential for oscillations during this period, which can lead to lengthy delays while waiting for the new routes to be determined.

ABR is a compromise between broadcast and point-to-point routing and uses the connection-oriented packet forwarding approach. Route selection is primarily based on the aggregated associativity ticks of nodes along the path. Hence, although the resulting path does not necessarily have the smallest possible number of hops, it tends to be longer-lived than other routes. A long-lived route requires fewer route reconstructions and therefore yields higher throughput. Another benefit of ABR is that, like the other protocols, it is guaranteed to be free from packet duplicates; the reason is that only the best route is marked valid while all other possible routes remain passive. ABR, however, relies on each node beaconing periodically. The beaconing interval must be short enough to accurately reflect the spatial, temporal, and connectivity state of the mobile hosts. This beaconing requirement may result in additional power consumption; however, experimental results obtained in [24] reveal that the inclusion of periodic beaconing has a minute influence on the overall battery power consumption. Unlike DSR, ABR does not utilize route caches.

The SSR algorithm is a logical descendant of ABR. It utilizes a new technique of selecting routes based on the signal strength and location stability of nodes along the path. As in ABR, while the paths selected by this algorithm are not necessarily shortest in hop count, they do tend to be more stable and longer-lived, resulting in fewer route reconstructions.

One of the major drawbacks of the SSR protocol is that, unlike AODV and DSR, it does not allow intermediate nodes to reply to route requests sent towards a destination; this results in potentially long delays before a route can be discovered. Additionally, when a link failure occurs along a path, the route discovery algorithm must be re-invoked from the source to find a new path to the destination. No attempt is made to use partial route recovery (unlike ABR), i.e., to allow intermediate nodes to attempt to rebuild the route themselves. AODV and DSR also do not specify intermediate node rebuilding. While this may lead to longer route reconstruction times, since link failures cannot be resolved locally without the intervention of the source node, an intermediate node's failed attempt to rebuild a route would cause a longer delay than if the source node had attempted the rebuilding as soon as the broken link was noticed. Thus it remains to be seen whether intermediate node route rebuilding is preferable to source node route rebuilding.

5.3 Table-Driven vs. On-Demand Routing


Parameters | On-Demand | Table-Driven

Availability of Routing Information | Available when needed | Always available regardless of need
Routing Philosophy | Flat | Mostly flat, except for CGSR
Periodic Route Updates | Not required | Required
Coping with Mobility | Localized route discovery, as in ABR & SSR | Inform other nodes to achieve a consistent routing table
Signaling Traffic Generated | Grows with the mobility of active routes (as in ABR) | Greater than that of on-demand routing
Quality of Service Support | Few can support QoS | Mainly shortest path as the QoS metric

Table 5.3: Overall Comparisons of On-Demand versus Table-Driven Based Routing Protocols.


As discussed earlier, the table-driven ad-hoc routing approach is similar to the connectionless approach of forwarding packets, with no regard to when and how frequently such routes are desired. It relies on an underlying routing table update mechanism that involves the constant propagation of routing information. This is, however, not the case for on-demand routing protocols. When a node using an on-demand protocol desires a route to a new destination, it will have to wait until such a route can be discovered. On the other hand, because routing information is constantly propagated and maintained in table-driven routing protocols, a route to every other node in the ad-hoc network is always available, regardless of whether or not it is needed. This feature, although useful for datagram traffic, incurs substantial signaling traffic and power consumption. Since both bandwidth and battery power are scarce resources in mobile computers, this becomes a serious limitation.

Table 5.3 lists some of the basic differences between the two classes of algorithms. Another consideration is whether a flat or hierarchical addressing scheme should be used. All of the protocols considered here, except for CGSR, use a flat addressing scheme. In [6], a discussion of the two addressing schemes is presented. While flat addressing may be less complicated and easier to use, there are doubts as to its scalability.


6. AD HOC ROUTING PROTOCOLS: THE NEXT GENERATION


Current challenges for ad-hoc wireless networks include:
§ multicast,
§ QoS support,
§ power-aware routing [13], and
§ location-aided routing [12].

Multicast is desirable to support multi-party wireless communications. Since the multicast tree is no longer static (i.e., its topology is subject to change over time), the multicast routing protocol must be able to cope with mobility, including multicast membership dynamics (such as leave and join). In terms of QoS, it is inadequate to consider QoS merely at the network level without considering the underlying media access control layer. Again, given the problems associated with the dynamics of nodes, hidden terminals, and fluctuating link characteristics, supporting end-to-end QoS is a non-trivial issue that requires in-depth investigation. Currently, there is a trend towards an adaptive QoS approach instead of the plain resource reservation method with hard QoS guarantees.

Another important factor is the limited power supply in handheld devices which can seriously prohibit packet forwarding in an ad-hoc mobile environment. Hence, routing traffic based on nodes' power metric is one way to distinguish routes that are more long-lived than others. Finally, instead of using beaconing or broadcast search, location-aided routing uses positioning information to define associated regions so that the routing is spatially-oriented and limited. This is analogous to associativity-oriented and restricted broadcast in ABR.

Current ad-hoc routing approaches have introduced several new paradigms, such as exploiting user's demand, the use of location, power, and association parameters. Adaptivity and self-configuration are key features of these approaches. However, flexibility is also important. A flexible ad-hoc routing protocol could responsively invoke table-driven approaches and/or on-demand approaches based on situations and communication requirements. The toggle between these two approaches may not be trivial since concerned nodes must be in-sync with the toggling. Co-existence of both approaches may also exist in spatially clustered ad-hoc groups, with intra-cluster employing the table-driven approach and inter-cluster employing the demand-driven approach or vice versa.

Further work is necessary to investigate the feasibility and performance of hybrid ad-hoc routing approaches. Lastly, in addition to the above, further research in the areas of media access control, security, service discovery, and internet protocol operability is required before the potential of ad-hoc mobile networking can be realized.


CONCLUSION


In this seminar report we have provided descriptions of several routing schemes proposed for ad-hoc mobile networks, along with a classification of these schemes according to routing strategy, i.e., table-driven and on-demand. We have presented a comparison of these two categories of routing protocols, highlighting their features, differences and characteristics. Finally, we have identified possible applications and challenges facing ad-hoc mobile wireless networks. While it is not clear that any particular algorithm or class of algorithms is the best for all scenarios, each protocol has definite advantages and disadvantages and is well-suited to certain situations. The field of ad-hoc mobile networks is rapidly growing and changing, and while there are still many challenges that need to be met, it is likely that such networks will see widespread use within the next few years.