Shutdown Computer at Specific Time

6

Posted by mady | Posted in | Posted on 11:02 PM

Sometimes you are downloading files from the internet and have to get up in the middle of the night to shut down your system, or you are watching a movie and want the system to shut down by itself afterwards. Here is a trick for you.
Steps:
1. Go to the Start Menu.
2. Choose Run.
3. Type shutdown -s -t 3600
Here 3600 is the time in seconds.
4. Press Enter.

Your system will now shut down automatically one hour from now.
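If you would rather script this than type the command each time, here is a minimal Python sketch of the same idea; it just calls the standard Windows XP shutdown.exe with the flags used above (-s to shut down, -t for the delay in seconds), plus -a, which aborts a pending shutdown. Treat it as an illustration, not an official tool.

import subprocess

DELAY_SECONDS = 3600  # one hour, the same value used in the steps above

# Schedule the shutdown (equivalent to Run -> "shutdown -s -t 3600").
subprocess.call(["shutdown", "-s", "-t", str(DELAY_SECONDS)])

# If you change your mind, abort the pending shutdown with:
# subprocess.call(["shutdown", "-a"])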

Create your own Diary on your system

8

Posted by mady | Posted in | Posted on 10:35 PM

Here are a few steps by which you can create your own diary on your system.
STEPS:
1. Go to the Start Menu of your system.
2. Select Notepad and open it.
3. Type .LOG (in capital letters) as the very first line.
4. Save the file with any name you wish, e.g. mady.txt
5. Now open "mady.txt" again. Every time you open the file, Notepad appends the current time and date, so you can write your entry underneath.
That's your diary. Enjoy it.
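If you prefer to keep the diary from a script rather than Notepad, here is a small Python sketch of the same idea; the file name diary.txt and the timestamp format are just examples I have chosen for illustration.

import datetime

# Append a timestamped entry to a plain-text diary, much like Notepad's .LOG trick.
entry = input("Write your diary entry: ")
with open("diary.txt", "a", encoding="utf-8") as diary:
    diary.write(datetime.datetime.now().strftime("%I:%M %p %m/%d/%Y") + "\n")
    diary.write(entry + "\n\n")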

Close Multiple Windows

5

Posted by mady | Posted in | Posted on 7:35 AM

If you have opened many related windows, such as a folder inside a folder, inside yet another folder, and so on, there is an easier way to close all of them at once.
Just hold down the Shift key and click the X (close) button in the upper-right corner of the last window you opened.
This closes all the windows that were opened on the way to that window.

Classic Start Menu

6

Posted by mady | Posted in | Posted on 7:23 AM

If you want to have only the system icons on your Desktop, rather than many entries in your Start Menu, you can get the Classic Start Menu on your system by following these steps.
1. Right-click the Start button.
2. Choose Properties.
3. A small window will appear. Select Classic Start menu from the bottom half of the window.
4. Click OK.
Now you will get the Classic Start Menu on your system.

Shutdown XP Faster

9

Posted by mady | Posted in | Posted on 11:30 AM

Hey guys, sometimes Windows XP takes a long time to shut down or restart your system.
You can solve this problem as follows:
1. Start your system.
2. Go to the Start Menu.
3. Select Control Panel.
4. Select Sounds, Speech, and Audio Devices.
5. Click Sound and Audio Devices.
6. Open the Sounds tab.
7. From Sound scheme, select No Sounds.
8. Click OK.
And see the result: with no shutdown sound to play, Windows can close noticeably faster.

Useful Windows Run Commands

8

Posted by mady | Posted in | Posted on 10:58 PM

To Access - Run Command


Accessibility Controls - access.cpl


Add Hardware Wizard - hdwwiz.cpl


Add/Remove Programs - appwiz.cpl


Administrative Tools - control admintools


Automatic Updates - wuaucpl.cpl


Bluetooth Transfer Wizard - fsquirt


Calculator - calc


Certificate Manager - certmgr.msc


Character Map - charmap


Check Disk Utility - chkdsk


Clipboard Viewer - clipbrd


Command Prompt - cmd


Component Services - dcomcnfg


Computer Management - compmgmt.msc


Date and Time Properties - timedate.cpl


DDE Shares - ddeshare


Device Manager - devmgmt.msc


DirectX Control Panel (If Installed) - directx.cpl


DirectX Troubleshooter - dxdiag


Disk Cleanup Utility - cleanmgr


Disk Defragment - dfrg.msc


Disk Management - diskmgmt.msc


Disk Partition Manager - diskpart


Display Properties - control desktop


Display Properties - desk.cpl


Display Properties (w/Appearance Tab Preselected) - control color



Driver Verifier Utility - verifier


Event Viewer - eventvwr.msc


File Signature Verification Tool - sigverif


Findfast - findfast.cpl


Folders Properties - control folders


Fonts - control fonts


Fonts Folder - fonts


Free Cell Card Game - freecell


Game Controllers - joy.cpl


Group Policy Editor (XP Prof) - gpedit.msc


Hearts Card Game - mshearts


Iexpress Wizard - iexpress


Indexing Service - ciadv.msc


Internet Properties - inetcpl.cpl


IP Configuration (Display Connection Configuration) - ipconfig /all


IP Configuration (Display DNS Cache Contents) - ipconfig /displaydns


IP Configuration (Delete DNS Cache Contents) - ipconfig /flushdns


IP Configuration (Release All Connections) - ipconfig /release


IP Configuration (Renew All Connections) - ipconfig /renew


IP Configuration (Refreshes DHCP & Re - Registers DNS) - ipconfig /registerdns


IP Configuration (Display DHCP Class ID) - ipconfig /showclassid


IP Configuration (Modifies DHCP Class ID) - ipconfig /setclassid


Java Control Panel (If Installed) - jpicpl32.cpl


Java Control Panel (If Installed) - javaws


Keyboard Properties - control keyboard


Local Security Settings - secpol.msc


Local Users and Groups - lusrmgr.msc


Logs You Out Of Windows - logoff


Microsoft Chat - winchat


Minesweeper Game - winmine


Mouse Properties - control mouse


Mouse Properties - main.cpl


Network Connections - control netconnections


Network Connections - ncpa.cpl


Network Setup Wizard - netsetup.cpl


Notepad - notepad


Nview Desktop Manager (If Installed) - nvtuicpl.cpl


Object Packager - packager


ODBC Data Source Administrator - odbccp32.cpl


On Screen Keyboard - osk


Opens AC3 Filter (If Installed) - ac3filter.cpl


Password Properties - password.cpl


Performance Monitor - perfmon.msc


Performance Monitor - perfmon


Phone and Modem Options - telephon.cpl


Power Configuration - powercfg.cpl


Printers and Faxes - control printers


Printers Folder - printers


Private Character Editor - eudcedit


Quicktime (If Installed) - QuickTime.cpl


Regional Settings - intl.cpl


Registry Editor - regedit


Registry Editor - regedt32


Remote Desktop - mstsc


Removable Storage - ntmsmgr.msc


Removable Storage Operator Requests - ntmsoprq.msc


Resultant Set of Policy (XP Prof) - rsop.msc


Scanners and Cameras - sticpl.cpl


Scheduled Tasks - control schedtasks


Security Center - wscui.cpl


Services - services.msc


Shared Folders - fsmgmt.msc


Shuts Down Windows - shutdown


Sounds and Audio - mmsys.cpl


Spider Solitaire Card Game - spider


SQL Client Configuration - cliconfg


System Configuration Editor - sysedit


System Configuration Utility - msconfig


System File Checker Utility (Scan Immediately) - sfc /scannow


System File Checker Utility (Scan Once At Next Boot) - sfc /scanonce


System File Checker Utility (Scan On Every Boot) - sfc /scanboot


System File Checker Utility (Return to Default Setting) - sfc /revert


System File Checker Utility (Purge File Cache) - sfc /purgecache


System File Checker Utility (Set Cache Size to size x) - sfc /cachesize=x


System Properties - sysdm.cpl


Task Manager - taskmgr


Telnet Client - telnet


User Account Management - nusrmgr.cpl


Utility Manager - utilman


Windows Firewall - firewall.cpl


Windows Magnifier - magnify


Windows Management Instrumentation - wmimgmt.msc


Windows System Security Tool - syskey


Windows Update Launches - wupdmgr


Windows XP Tour Wizard - tourstart


Wordpad - write






Reference:
www.Funlok.com

WHAT IS A WEB SERVICE?

9

Posted by mady | Posted in , | Posted on 9:48 PM

Technically speaking, a Web Service is "a component of programmable
application logic that can be accessed using standard web protocols".
That is, it's quite similar to the components we considered earlier
on, but it lets us access all its functionality via the Web. In principle,
anyone who can browse the Web can see and use a web service.

Think of a Web Service as a black box resource that accepts requests
from a consumer (some kind of program running on the web client),
performs a specific task, and returns the results of that task. In
some respects, a search engine such as Google (www.google.com) is a
kind of Web Service – you submit a search expression, and it compiles
a list of matching sites, which it sends back to your browser.

Currently the term Web Service is something of a buzzword within
the sphere of software development, thanks to a number of new
protocols that have opened up the scope of what we can expect Web
Services to do. XML plays a central role in all these technologies.

There's a very important distinction between a web service like
Google and the kind of XML Web Service that we're going to be talking
about: on Google, you submit the search expression, and you read the
list of sites that gets sent back. Okay, the browser provides you with
a textbox, and parses the response stream so that it looks nice – but
it doesn't actually understand the information you've submitted, let
alone the HTML that Google sends back.

If you are using an XML Web Service, you can assume the results will
be returned as some kind of XML document that is explicitly structured
and self-describing. It is therefore quite straightforward to write a
program that interprets these results and perhaps even formulates a
new submission.

As we're going to see, ASP.NET makes it very easy to build XML Web Services,
and just as easy to use them – ultimately you need only reference
the Web Service in your code, and you can use it just as if it were a
local component. As with normal components, we don't need to know
anything about how the service functions, just the tasks it can do,
the type of information it needs to do them, and the type of results
we're going to get back.
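As a rough illustration of what consuming such a service looks like outside ASP.NET, here is a minimal Python sketch. The endpoint URL, the phoneNumber parameter, and the OwnerName element are all invented for the example, so treat it purely as a sketch of the idea rather than a real service.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical XML Web Service endpoint and parameter (for illustration only).
URL = "http://example.com/LookupService/GetOwner"
params = urllib.parse.urlencode({"phoneNumber": "5551234567"}).encode()

with urllib.request.urlopen(URL, data=params) as response:
    document = ET.parse(response)  # the service replies with structured XML

# Because the result is self-describing XML, a program can read it directly
# and could even use it to formulate a new request.
print("Owner:", document.getroot().findtext(".//OwnerName"))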


1. KINDS OF WEB SERVICES

Practical applications of web services technologies fall into three groups:

1.1 PLUG-IN FUNCTIONALITY.
The simplest and most prevalent web services in use today add
third-party functions to web pages and portals. Common examples
include external news feeds and stock quotes, banner ad serving, and
search boxes. Few use XML and SOAP, relying instead on HTML-based
technologies such as JavaScript, CGI calls and Java.

1.2 REMOTE INFRASTRUCTURE SERVICES.
Third-party providers use web services technologies to deliver
behind-the-scenes functionality for commercial websites, such as user
authentication, payment processing and visitor traffic analysis.

1.3 ENTERPRISE APPLICATION INTEGRATION.
Web services technologies are rapidly finding favour as a solution to
the complex integration challenges of linking applications within the
enterprise, or across a value chain. EAI implementations are the most
likely to use formal web services standards.

2. REQUIREMENTS OF WEB SERVICES

The original Internet made it possible to send and receive email and
to share access to files. The World Wide Web added a software layer
that made it easier to publish and access other content. The final
development has been the emergence of web
services, enriching the software layer with application functionality.
The task of the web services infrastructure is to support
commercial-grade application functionality.

The difference between content and applications comes from the
addition of process — the sequence of events that need to happen in
order to produce a result. When participants in the application are
distributed across the web, the need to complete processes adds some
important new operating requirements:

2.1 CONSISTENCY
Each separate component must act within certain parameters, such as
response times and availability.

2.2 AUTHENTICITY
There must be some way of assuring the identity of each of the
participants in the process.

2.3 TIMELINESS
Each step in the process must execute in the correct order, and
promptly — especially if a user is waiting on the result in order to
continue with their work.

2.4 INTEGRITY
There must be mechanisms to avoid data becoming corrupted as it passes
between participants.

2.5 PERSISTENCE
Each participant in the process must maintain a permanent record of
their part in the transaction.

The web services infrastructure must provide a platform that supports
these requirements and more. That demands an array of tools and
services to enhance, monitor and maintain quality-of-service.

WHY WEB SERVICES?

3

Posted by mady | Posted in | Posted on 9:45 PM

Web Services is an emerging technology driven by the will to securely
expose business logic beyond the firewall. Through Web services
companies can encapsulate existing business processes, publish them as
services, search for and subscribe to other services, and exchange
information throughout and beyond the enterprise. Web services will
enable application-to-application e-marketplace interaction, removing
the inefficiencies of human intervention.

The key benefits of Web Services are:

1. SOFTWARE AS A SERVICE

As opposed to packaged products, Web Services can be delivered and
paid for as streams of services and allow ubiquitous access from any
platform. Web services allow for encapsulation. Components can be
isolated such that only the business-level services are exposed. This
results in decoupling between components and more stable and flexible
systems.

2. DYNAMIC BUSINESS INTEROPERABILITY

New business partnerships can be constructed dynamically and
automatically since Web Services ensure complete interoperability
between systems.

3. ACCESSIBILITY

Business services can be completely decentralized and distributed over
the Internet and accessed by a wide variety of communication devices.

4. EFFICIENCIES

Businesses can be released from the burden of complex, slow and
expensive software development and focus instead on value added and
mission critical tasks. Web
services constructed from applications meant for internal use can be
easily exposed for external use without changing code. Incremental
development using Web services is natural and easy and since Web
Services are declared and implemented in a human readable format there
is easier bug tracking and fixing. The overall result is risk
reduction and more efficient deployability.

5. UNIVERSALLY AGREED SPECIFICATIONS

Web Services are based on universally agreed specifications for
structured data exchange, messaging, discovery of services, interface
description, and business process orchestration.

6. LEGACY INTEGRATION

Greater agility and flexibility result from increased integration
between legacy systems.


7. NEW MARKET OPPORTUNITIES

Dynamic enterprises and dynamic value-chain businesses become far more
feasible.

SECURITY IN WEB SERVICES

4

Posted by mady | Posted in | Posted on 9:41 PM

Web Services security has been the most talked about thing in the Web
Services arena for quite some time now. If there's one thing that has
slowed the widespread acceptance and implementation of Web Services,
it's their lack of security standards. There also seems to be a
cautious implementation schedule for many companies that are thinking
about moving to the .NET platform. Partly because of security concerns
I would imagine, and partly to give the technology time to grab hold.
Nevertheless, security is still a major concern that holds back most
of the Web Service implementations today.

In April 2002, IBM, Microsoft, and VeriSign published a new Web
Services security specification, WS-Security. The specification aims
to help enterprises build secure Web Services, and applications based
on them that are broadly interoperable. Eventually, this specification
would be submitted for consideration as a standard, and looking at the
amount of commitment that IBM, Microsoft, and VeriSign have invested
in it, it should soon go that way. This specification proposes a
standard set of SOAP extensions that can be used when building secure
Web Services to implement integrity and confidentiality.

1. QUALITY OF PROTECTION

When we talk about security in Web Services, there are three types of
potential threats that need to be considered and addressed:

· The SOAP message could be modified or read by hackers.
· A hacker could send messages to a service that, while well-formed,
lack the appropriate security claims needed for processing to continue.


· Service theft. For example, a subscription based Web Service that
doesn't authenticate or is not well secured is open to service
leeching by unauthorized users. Not necessarily hackers per se, but
people who are taking advantage of a hole in the service to get the
service for free.

A message security model is defined to understand these threats.

2. MESSAGE SECURITY MODEL

The WS-Security specification defines an abstract message security
model in terms of security tokens combined with digital signatures as
proof of possession of the security token, referred to as a key.
Security tokens assert claims, and signatures provide a mechanism for
authenticating the sender's knowledge of the key. The signature can
also be used to bind the claims in the security token to the message.
This assumes that the token is trusted. It is interesting to note that
the specification does not mandate a particular method of
authentication; it only indicates that security tokens may be bound to
messages. This is where the power and extensibility of WS-Security
lies.

A claim can be either endorsed or unendorsed by a trusted authority. A
set of endorsed claims is usually represented as a signed security
token that is digitally signed or encrypted by the authority. An X.509
certificate, claiming the binding between one's identity and public
key, is an example of a signed security token.

An unendorsed claim, on the other hand, can be trusted if there is a
trust relationship between the sender and the receiver.

One special type of unendorsed claim is Proof-of-Possession. Such a
claim proves that the sender has a particular piece of knowledge that
is verifiable by appropriate actors.
For example, a username/password combination is a security token with
this type of claim. A Proof-of-Possession claim is sometimes combined
with other security tokens to prove the claims of the sender.
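For instance, a username/password claim travels inside the SOAP header as a wsse:UsernameToken. The sketch below builds such a header as a plain string purely for illustration; the namespace URI shown is the commonly cited OASIS one from the later standardized version of WS-Security, and a real implementation would use a digest password, a nonce, and a proper XML library rather than string formatting.

# Illustrative only: a simplified WS-Security UsernameToken header as a string.
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def username_token_header(username: str, password: str) -> str:
    # Real deployments normally send a password digest plus nonce, not plain text.
    return (
        f'<wsse:Security xmlns:wsse="{WSSE_NS}">'
        "<wsse:UsernameToken>"
        f"<wsse:Username>{username}</wsse:Username>"
        f"<wsse:Password>{password}</wsse:Password>"
        "</wsse:UsernameToken>"
        "</wsse:Security>"
    )

print(username_token_header("mady", "secret"))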

3. MESSAGE PROTECTION

The primary security concerns in Web Services are confidentiality and
integrity. WS-Security provides a means to protect messages by
encrypting and/or digitally signing a body, a header, an attachment,
or any combination of these. Message integrity is provided by using
XML Signature in conjunction with security tokens to ensure that
messages are transmitted without modifications. The integrity
mechanisms are designed to support multiple signatures, potentially by
multiple actors, and to be extensible to support additional signature
formats.

Message confidentiality leverages XML Encryption in conjunction with
security tokens to keep portions of a SOAP message confidential. The
encryption mechanisms are designed to support additional encryption
processes and operations by multiple actors.

4. MISSING OR INAPPROPRIATE CLAIMS

The message receiver should reject a message with an invalid
signature, or missing or inappropriate claims, as if it is an
unauthorized (or malformed) message, as would be expected in a secure
environment. WS-Security provides a flexible way for the message
sender to claim the security properties by associating zero or more
security tokens with the message.

Issues related to Web Services

4

Posted by mady | Posted in | Posted on 9:39 PM

1. SERVICE HIJACKING (OR PIGGYBACKING)

Once your Web Service is available to the public, you may attract a
client who is particularly interested in the service you provide. They
are so interested, in fact, that they consider wrapping your powerful
Web Service inside their own and representing it as their own
product. Without security safeguards in place (and legal documents as
well), a client may repackage your Web Service as if it were their own
function, and there's no way for you to detect that this is being done
(though you may become suspicious by examining your usage log when
your client who occasionally uses your Web Service suddenly shows an
enormous increase in activity). Given the level of abstraction that
Web Services provide, it would also be nearly impossible for any
customers of your unethical client to know who owns the functionality.

Some organizations use a combination of usage logging and per-use
charges. Another, simpler way to detect piggybacking is to use false
data tests. We could create an undocumented function within our Web
Service that produces a result only our logic could produce. We would
then be able to determine whether the client is piggybacking on our
Web Service or truly using its own logic. For example, say the Web
Service takes a phone number as input and returns the name of the
person or organization who owns that number, and suppose we are sure
there is no entry for a phone number containing only zeroes. We make
sure that when such a number is entered, the service returns a message
that is specific and known only to us. We could then test this on the
company we suspect of reselling our Web Service. Since this hidden
functionality is not published, it provides a great way to prove that
a company was reselling our Web Service's logic without our legal
approval.
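Here is a minimal sketch of that "false data test" in Python. Everything in it is hypothetical: lookup_owner stands for whatever client function you write against the suspect company's service, and the canary number and marker are simply the undocumented values you would agree on internally.

# Hypothetical canary test for service piggybacking (all names are illustrative).
CANARY_NUMBER = "0000000000"           # no real entry exists for this number
CANARY_MARKER = "KTF-CANARY-RESPONSE"  # undocumented reply only our service returns

def is_reselling_our_service(lookup_owner) -> bool:
    """lookup_owner is assumed to be a client function for the suspect service."""
    try:
        result = lookup_owner(CANARY_NUMBER)
    except Exception:
        return False  # the suspect service rejected the number; inconclusive
    return result == CANARY_MARKER

# Example usage, given some client function suspect_lookup you have written:
# if is_reselling_our_service(suspect_lookup):
#     print("The suspect service appears to be wrapping our Web Service.")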

2. PROVIDER SOLVENCY

Once you decide the Web Service model is a viable solution, you're probably
eager to add third-party functionality to your core information systems and
mission-critical applications. As Web Services become more and more
interdependent, it becomes increasingly necessary to research the
companies from where you consume Web Services. You'll want to be sure
that these providers appear to have what it takes to remain in
business. UDDI goes a long way towards helping you with this research
by providing company information for each registered Web Service
provider. In the business world, nothing seems to impact and force
sweeping changes more than insolvency, and if you find yourself in the
unfortunate circumstance of lost functionality due to a bankrupt Web
Service provider, you'll realize how painful the hurried search for a
new vendor can be (with little room to bargain with your ex-service's
competitors). Although the initial work can be a bit tedious, it is
important to know, as far as you can, whether a potential Web Service
vendor will still be in business five years from now.

3. THE INTERDEPENDENCY SCENARIO

The basis for all these and other Web Service considerations is the
issue of interdependency. The potential exists for you to wake up any
given morning, start an application that has worked for years, and
find that the Web Service that it relies on is no longer available.

To some extent, thanks to the UDDI search capabilities, you can
investigate and assess potential providers, but at the end of the day
a degree of faith needs to be put into the services of each provider
that you choose to consume.

Zero Configuration Networking

4

Posted by mady | Posted in | Posted on 5:42 AM

Today’s networks are becoming increasingly dynamic in their configuration. With the emergence of wireless LANs, a modern network can expect to have devices removed and added frequently. Naturally these networks rely on common TCP/IP protocols such as DNS, DHCP, MADCAP and LDAP, which in turn require an administrative staff. For increasingly popular ad-hoc and small home networks, the technical knowledge of end-users is often limited and administrative skill can be lacking. In a world where networks are beginning to connect not only computer users of varying technical skill but also a huge variety of personal digital devices, the end-user can't always be expected to have the time, desire, or knowledge to configure their network.

From hotel rooms to airplanes, cars, and campuses, computer users are routinely connecting to networks where they have no knowledge of the services (LDAP and printing, for example) or the primary hosts (DHCP or DNS servers, for example). In situations where there are no administrators, either because they are unavailable or because they don't exist at all, these networks need protocols that require zero configuration and administration.

The evolution of the IP standards suite has concentrated on achieving a reliable and scalable networking architecture. Emphasis has always been placed on mechanisms that allow decentralized administration. Individual networks have been operated with local configuration, while Internet-wide configuration has been coordinated through different agencies handling registration of domain names, network numbers, and other parameters. Network operation requires consistent configuration of all hosts and servers, and normally requires centralized, knowledgeable network administration and increasingly complex configuration management services.

Several computer software companies have taken the initiative to enhance the IP suite to address this challenge. The Internet Engineering Task Force (IETF) has begun work on zero configuration networking for IP. The goal is to allow hosts to communicate using IP without requiring any prior configuration or the presence of network services. True to the traditional architectural principles of the IP suite, care is being taken to ensure that zero configuration networking protocols and operation do not detract from the scalability of larger configured networks with fully administered services. The central issue of this type of networking is the emergence of protocols for operation without services or configuration. Work in the area of zero configuration protocols has been motivated by new demands in the marketplace.
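One concrete piece of this idea is IPv4 link-local address self-assignment: a host with no DHCP server simply picks an address for itself in the 169.254.0.0/16 range. The toy Python sketch below only shows the address-picking step; a real implementation (RFC 3927 is the relevant standard, as far as I know) also probes with ARP to detect conflicts and retries with a new address if one is found.

import random

# Toy sketch of IPv4 link-local self-assignment, the core zero-configuration idea.
# 169.254.0.x and 169.254.255.x are reserved, so the third octet is chosen from 1-254.
def pick_link_local_address() -> str:
    return f"169.254.{random.randint(1, 254)}.{random.randint(1, 254)}"

print("Self-assigned address:", pick_link_local_address())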

Blu-ray versus DVD versus CD

2

Posted by mady | Posted in | Posted on 5:17 AM

Blu-ray is a new optical storage technology competing to be the successor of DVD. In this topic you will learn everything you need to know about this technology.
With the introduction of high-definition TV (HDTV), DVD storage capacity proved insufficient for this application. DVD supports a resolution up to 720x480 pixels, while HDTV works with resolutions as high as 1920x1080 pixels. Just to give you an idea, two hours of high-definition video with data compression requires 22 GB of storage space. Keep in mind that the maximum capacity of a DVD is 17 GB, and only if a DVD-18 disc is used (a dual-sided, dual-layer disc).
In fact, a Blu-ray or an HD-DVD disc is essentially a DVD-style disc with a much higher storage capacity, allowing you to store high-definition content. It is important to note that the main motivation for creating a DVD successor was the introduction of HDTV, which requires a higher disc storage capacity, a feature a regular DVD cannot provide. But how is a Blu-ray disc able to store more data than a DVD?
Blu-ray technology was developed in February 2002 to be DVD’s successor by a consortium of companies that includes Apple, Dell, Hitachi, HP, JVC, LG, Mitsubishi, Panasonic, Pioneer, Philips, Samsung, Sharp, Sony, TDK and Thomson. HD-DVD, on the other hand, was created by Toshiba and recently got support from Microsoft, HP and Intel.
Both Blu-ray and HD-DVD discs have the same physical size as DVD discs (and CDs), with a diameter of 12 cm (120 mm, around 4 ¾”).
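A quick back-of-the-envelope comparison makes the point. Using the figure above of roughly 22 GB for two hours of compressed high-definition video (about 11 GB per hour), and the disc capacities quoted in this post and the Blu-ray posts that follow, the Python snippet below estimates how many minutes of HD video each format can hold; the numbers are rough and only meant to illustrate why CD and DVD fall short.

# Rough minutes of compressed HD video per disc, assuming ~11 GB per hour (22 GB / 2 h).
GB_PER_HOUR_HD = 22 / 2

capacities_gb = {
    "CD": 0.7,                              # ~700 MB
    "DVD-5 (single layer)": 4.7,
    "DVD-18 (dual-sided, dual-layer)": 17,
    "Blu-ray (single layer)": 25,
    "Blu-ray (dual layer)": 50,
}

for disc, capacity in capacities_gb.items():
    minutes = capacity / GB_PER_HOUR_HD * 60
    print(f"{disc}: about {minutes:.0f} minutes of HD video")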

Blu-ray Disc

3

Posted by mady | Posted in | Posted on 5:17 AM

In 1997, a new technology emerged that brought digital sound and video into homes all over the world. It was called DVD, and it revolutionized the movie industry.
The industry is set for yet another revolution with the introduction of Blu-ray Discs (BD). With their high storage capacity, Blu-ray discs can hold and play back large quantities of high-definition video and audio, as well as photos, data and other digital content.
Blu-ray, also known as Blu-ray Disc (BD), is the name of a next-generation optical disc format jointly developed by the Blu-ray Disc Association (BDA), a group of the world’s leading consumer electronics, personal computer and media manufacturers (including Apple, Dell, Hitachi, HP, JVC, LG, Mitsubishi, Panasonic, Pioneer, Philips, Samsung, Sharp, Sony, TDK and Thomson). The format was developed to enable recording, rewriting and playback of high-definition video (HD), as well as storing large amounts of data. The format offers more than five times the storage capacity of traditional DVDs and can hold up to 25GB on a single-layer disc and 50GB on a dual-layer disc. This extra capacity combined with the use of advanced video and audio codecs will offer consumers an unprecedented HD experience.
Blu-ray is currently supported by more than 180 of the world’s leading consumer electronics, personal computer, recording media, video game and music companies. The format also has broad support from the major movie studios as a successor to today’s DVD format. In fact, seven of the eight major movie studios (Disney, Fox, Warner, Paramount, Sony, Lionsgate and MGM) are supporting the Blu-ray format and five of them (Disney, Fox, Sony, Lionsgate and MGM) are releasing their movies exclusively in the Blu-ray format. Many studios have also announced that they will begin releasing new feature films on Blu-ray Disc day-and-date with DVD, as well as a continuous slate of catalog titles every month. For more information about Blu-ray movies, check out our Blu-ray movies section which offers information about new and upcoming Blu-ray releases, as well as what movies are currently available in the Blu-ray format.

1 Why the name Blu-ray?
The name Blu-ray is derived from the underlying technology, which utilizes a blue-violet laser to read and write data. The name is a combination of “Blue” (blue-violet laser) and “Ray” (optical ray). According to the Blu-ray Disc Association the spelling of “Blu-ray” is not a mistake; the character “e” was intentionally left out so the term could be registered as a trademark.

The correct full name is Blu-ray Disc, not Blu-ray Disk (incorrect spelling).
The correct shortened name is Blu-ray, not Blu-Ray (incorrect capitalization) or Blue-ray (incorrect spelling).
The correct abbreviation is BD, not BR or BRD (wrong abbreviation).
The name Blu-ray came from the fact that the laser beam which reads the data from the new discs is blue instead of red which is used for current DVDs and CDs. This new blue laser is at the heart of Blu-ray Disc technology (i.e. blue ray of light).



2 What is Blu-ray Disc?

A current, single-sided, standard DVD can hold 4.7 GB (gigabytes) of information. That’s about the size of an average two-hour, standard-definition movie with a few extra features. But a high-definition movie, which has a much clearer image (see How Digital Television Works), takes up about five times more bandwidth and therefore requires a disc with about five times more storage. As TV sets and movie studios make the move to high definition, consumers are going to need playback systems with a lot more storage capacity.

Blu-ray, also known as Blu-ray Disc (BD) is the name of a next-generation optical disc format. The format was developed to enable recording, rewriting and playback of high-definition video (HD), as well as storing large amounts of data. The format offers more than five times the storage capacity of traditional DVDs and can hold up to 25GB on a single-layer disc and 50GB on a dual-layer disc.
Blu-ray is the next-generation digital video disc. It can record, store and play back high-definition video and digital audio, as well as computer data. The advantage to Blu-ray is the sheer amount of information it can hold:
• A single-layer Blu-ray disc, which is roughly the same size as a DVD, can hold up to 27 GB of data – that’s more than two hours of high-definition video or about 13 hours of standard video.
• A double-layer Blu-ray disc can store up to 50 GB, enough to hold about 4.5 hours of high-definition video or more than 20 hours of standard video. And there are even plans in the works to develop a disc with twice that amount of storage.



3 Key features of Blu-ray Disc
Blu-ray Disc is a next-generation, optical disc format that enables the ultimate high-def entertainment experience. Blu-ray Disc provides these key features and advantages:
• Maximum picture resolution. Blu-ray Disc delivers full 1080p* video resolution to provide pristine picture quality.
• Largest capacity available anywhere (25 GB single layer/50 GB dual layer). Blu-ray Disc offers up to 5X the capacity of today’s DVDs.
• Best audio possible. Blu-ray Disc provides as many as 7.1 channels of native, uncompressed surround sound for crystal-clear audio entertainment.
• Enhanced interactivity. Enjoy such capabilities as seamless menu navigation, exciting, new bonus features, and network/Internet connectivity.
• Broadest industry support from brands you trust. More than 90% of major Hollywood studios, virtually all leading consumer electronics companies, four of the top computer brands, the world’s two largest music companies, PLAYSTATION® 3 and the leading gaming companies, all support Blu-ray Disc.
• The largest selection of high-def playback devices. Blu-ray Disc is supported by many of the leading consumer electronics and computing manufacturers. That means you can maximize the use of your HDTV and your home entertainment system with the widest selection of high-def playback devices—including players, recorders, computers, aftermarket drives and the PLAYSTATION® 3 game console.
• Backward compatibility**. Blu-ray Disc players enable you to continue to view and enjoy your existing DVD libraries.
• Disc robustness. Breakthroughs in hard-coating technologies enable Blu-ray Disc to offer the strongest resistance to scratches and fingerprints.

Blu-ray Disc

2

Posted by mady | Posted in | Posted on 5:17 AM

The "Blu-ray Disc Association" was founded in 2002 by nine leading electronic companies: Matsushita, Pioneer, Phillips, Thomson, LG Electronics, Hitachi, Sharp, Samsung and Sony as contrast to the DVD Forum. Spearheaded by Sony Corporation on February 19th 2002 the companies announced they were the "Founders" of the Blu-ray Disc and later changed their name to the "Blu-ray Disc Association" in order to achieve more companies joining their development. Some examples of companies that signed in include Apple, TDK, Dell, Hewlett Packard, Walt Disney, Warner Bros. and Universal Music Group. At the moment the there are more than 250 members and supporters of the Association.
In today’s world, data storage is one of the key concerns, especially in the technology field. In the early days of computing, data was stored on punch cards, which held barely a few bytes, but this is a field that upgrades constantly. Later, magnetic media came into use: first tape, and then various storage media such as VHS and floppy disks, so that with magnetic disks the amount of data that could be carried around reached tens of megabytes.

Then came a revolutionary technology in which the storage device was the compact disc, using an optical beam to store data in the hundreds of megabytes. In a CD the beam used is a red laser, and one disc can store up to 700 MB of data.
With the blue laser beam used by Blu-ray, up to 25 GB of data can be stored on a disc.

Open Source Software Model

3

Posted by mady | Posted in | Posted on 12:56 AM

The Linux operating system kernel is a very successful example of a
large software system in widespread use that has been developed using
an "open source" development (OSD) model. If a conclusion has to be
drawn, it is that freedom, ownership of the source, the absence of
license fees, and the availability of skills and resources to fix
problems or to develop enhancements are all readily obtainable. The
software can also be distributed freely to smaller companies or to
acquisitions of the main business. A company's needs dictate the
choice: freedom, cost-effectiveness, availability of resources, a
reliable and flexible solution that works well, and the ability to
retain existing office tools and integration with popular market
products, services or communication methods are all key to the
consideration of a business solution.
So why is it not a perfect model for the commercial world? Habit and
poor understanding are the cause of most issues related to acceptance
of the Open Source model. It is also ignorance and inertia that keep
many businesses from researching and investigating such technologies.
Ironically, it is far simpler to replace a system with one based on
the Open Source model, with numerous organizations available to guide
any organization through the process, than it is to retain the large
supplier that creates the locked-in psyche. The Open Source model will
eventually be freely accepted commercially, and one day there will be
no preferred alternative; the timeline for globalization of such
methods is certainly within the next five years, so hold on to your
hats and do not get left behind. It is your choice, and a free one.

Who Develops Linux Code?

3

Posted by mady | Posted in | Posted on 12:56 AM

Tens of thousands of independent programmers contribute code to
project maintainers for inclusion in Linux. Improvements and bug fixes
developed and submitted by companies and individual programmers are
included in Linux releases based on technical merit alone. When a new
Linux kernel is released, it is put up on the main Linux kernel site,
www.kernel.org.
There are a number of good Linux news web sites keeping the programmer
and user community continuously updated about the latest developments
in the kernel. The best way to keep track of the kernel development
though is undoubtedly the Linux kernel mailing list. One can find all
the Linux source code in the /usr/src/ directory of a native Linux
partition. Also, detailed documentation, including discussions of
major issues such as kernel hacking, is kept in the
/usr/src/linux*/Documentation directory. Users interested in
extracting details about the configuration and workings of the OS at
runtime can take a look at the contents of the /proc file system, which
is used by the kernel to provide information to user programs. A
user who has worked on Microsoft Windows OS will understand the
difference here.
A program to extract and print the currently running kernel's
information can be as small as 5 lines of C code in Linux. To achieve
the same task under MS Windows would leave the programmer clueless as
to how to hack the system to get these details. When a Linux system
crashes on a user's machine, he may post the crash log to a mailing
list such as the one mentioned above, and he might find the bug-fix
within hours of posting. If such a system crash happens on any other
proprietary OS, the user may need to wait for months or even years
until the next expensive release/update comes onto the market. The above
two examples show us where Open Source Software scores over its
rivals.
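As an illustration of just how little code that takes, here is a rough Python equivalent of the five-line C program mentioned above; it simply reads the kernel information that the /proc file system exposes on a Linux machine (the C version would typically call the uname() function instead).

# Print information about the currently running kernel, as exposed through /proc.
# (A C program of roughly the same length would call uname() from <sys/utsname.h>.)
with open("/proc/version") as f:
    print(f.read().strip())

with open("/proc/sys/kernel/osrelease") as f:
    print("Kernel release:", f.read().strip())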

What Does A Linux Distribution Vendor Do?

1

Posted by mady | Posted in | Posted on 12:54 AM

A Linux distribution includes the Linux kernel plus utilities,
programming tools, window managers, and other software that make up a
full operating system. Distribution companies, such as Caldera, Red
Hat, SuSE, Turbo Linux, and nonprofit organizations such as Debian,
download the latest Open Source packages from the Internet, QA them,
add utilities such as installation programs, and package them on a
CD-ROM with a manual.

The underlying code in each distribution is exactly the same. Slight
differences may occur in the following:
• Hardware installation programs
• Default X-windows configuration
• Graphical systems management tools
• Proprietary software packages (very few)
In the vast majority of cases, Linux applications are compatible with
all distributions of Linux, which accounts for the aphorism "Linux is
Linux is Linux."
Distribution vendors take the kernel as is, with all changes and fixes
that are contributed by members of the development community. Each
distribution company releases new distributions about twice a year.
The Open Source development model discourages distribution vendors
from forking the Linux code base into incompatible code streams. The
GPL specifies that additions, modifications, and extensions to Linux
be distributed in source code form whenever executables are made
available. If a distribution company were to acquire development
expertise and attempt to build unique features into Linux, its
innovations would be released back to the development community. Truly
valuable changes would then be included in the next release of Linux
and/or freely adopted by other distribution vendors, eliminating any
competitive advantage. Currently, independent developers contribute
the vast majority of fixes, patches, and additions to Linux. Each one
of these modifications improves the stability and functionality of
Linux.

How is the Linux Kernel Developed and Updated?

4

Posted by mady | Posted in | Posted on 12:53 AM

The Linux kernel is developed and updated following the Open Source
development model you might already know about. Linus Torvalds is the project
maintainer, with final authority over what goes into the kernel.
Because of the complexity of the project, he is aided by a group of
appointed project maintainers who are responsible for various
components of the code.
A large number of developers worldwide contribute to improvements to
Linux. Any developer can submit a patch that includes source code
changes to the kernel mailing list. Linus and his project maintainers
review the patch. They decide whether or not to include it in the next
release based on technical merit, not commercial reasons. Thus, there
is no single company directing the development path of the Linux
kernel.

SOME SUCCESSFUL OPEN SOURCE PROJECTS

3

Posted by mady | Posted in | Posted on 12:50 AM

Although OSS has recently become a hot topic in the press, it has
actually been in existence since the 1960s and has shown a successful
track record to-date. The open source movement gained momentum in big
business in 1998, when IBM, Corel, Oracle, and Informix endorsed open
source software. Examples of popular open source products include
Emacs, GNU toolset, Apache, Sendmail, GIMP, Samba and Linux. While
Linux and Apache are among the most well known open source-based
applications, there are many others, including BSD, Debian, and other
applications based on the GNU license.

1. GNU Software:
Emacs was one of the first open source products, and its success led
to the GNU project. It is a text editor that is widely used for
software development. The GNU project consists of an operating system
kernel and associated UNIX tools. The GNU tools have been ported to a
wide variety of platforms, including Windows NT, and are widely used by
software developers to produce both open source and proprietary
software.


2. Apache Web Server:
The Apache web server is a freely available web server distributed
under an open source license. Apache web servers are known for their
functionality and reliability. They form the backbone infrastructure
running the Internet. Today the Apache web server is arguably the most
widely used web server in the world garnering almost 50 percent of the
web server market. Apache was built and is maintained by a group of 20
core developers and 10 major contributors from around the world. A
large pool of developers regularly suggests and implements minor
adjustments and bug fixes to the core group.

3. Sendmail:
Sendmail is a platform for moving mail from one machine to another.
The Sendmail Consortium, a nonprofit organization, runs the open
source program and maintains a website to serve as a resource.
Sendmail is estimated to carry nearly 90 percent of e-mail traffic.


4. PERL:
While Emacs, GNU toolset, Apache, Sendmail, and Linux are examples of
open source products, the Practical Extraction and Reporting Language
(Perl) is an example of an open source process. Perl is a system
administration and computer-programming language widely used
throughout the Internet. It is the standard scripting language for all
Apache web servers, and is commonly used on UNIX. There are an
estimated one million Perl users today.

5. Netscape:
On January 22, 1998, Netscape announced that it would make the source
code to its flagship client software, Netscape Communicator, freely
available for modification and redistribution on the Internet.
Netscape's pioneering decision to distribute software via the Internet
has become an integral strategy for every software company worldwide.

THE OPEN SOURCE SOFTWARE MODEL IS NOT QUITE PERFECT

3

Posted by mady | Posted in | Posted on 12:49 AM

The mantra of the open source community is that OSS increases the
reliability of software because it is peer-reviewed by many
developers, all performing their own tests, making bug corrections,
and tweaking the software until it is complete. Proponents claim this
process creates mature, stable code more quickly than conventional
software development, lowers overhead, and, through operating system
porting, broadens the market for the software. The nature of open
source development also increases the interaction between the
developers of software and the customers who will ultimately use it.
As an ideal, OSS sounds wonderful—a utopia for software development.
Unfortunately, it's not as perfect as it seems. All of the claims made
by the Open Source Initiative are true, but only if the community of
engineers and developers is actually interested in the success of a
piece of software. If the open source community is not motivated to
work on a certain software project, it will languish in obscurity.
This seems to be the scenario that has plagued Netscape and its
Mozilla project. Netscape had hoped its willingness to reveal design
secrets would attract outside programmers and yield new features,
better code, and faster development. However, very few programmers
jumped on the bandwagon. The Mozilla project was to be the loss leader
that would open the market for Netscape's enterprise software;
instead, the effort has been a commercial flop.
The major problem with OSS is the lack of objective information on how
open source works in an enterprise and on how much it actually costs.
Open source software is not a new concept, but its application in an
enterprise environment is a recent phenomenon. Until open source
software development moves out of the "philosophical" into the
"practical," management decisions on its viability will be suspect and
ill-advised.
There is no doubt that open source standards have benefits for
enterprise computing needs, but the technology is not quite ready for
prime time. Infrastructure and standards of practice are being
developed and true cost analysis will be implemented. Only then will
it be possible to say with any certainty that open source is the
future.

HOW DO OPEN SOURCE COMPANIES MAKE MONEY?

4

Posted by mady | Posted in | Posted on 12:48 AM

While it is true that an open source business may not make money
directly from its products, it is untrue that open source companies do
not generate stable and scalable revenue streams. In actuality, in the
21st century web technology market, it is the open source company that
has the greatest long-term strategic advantage. This is demonstrated
by projects and companies such as Linux, Apache, and Netscape, a host of
web-specific technologies such as Java, Perl, and Tcl, and a host of
web-specific technology companies such as Sendmail. The open source
business model relies on shifting the commercial value away from the
actual products and generating revenue from the 'Product Halo,' or
ancillary services like systems integration, support, tutorials and
documentation.
This focus on the product halo is rooted in the firm understanding
that in the real world, the value of software lies in the value-added
services of the product halo and not in the product or any
intellectual property that the product represents. In actuality, the
value of software products approaches zero in the fast-paced, highly
customized, ever-changing world of information technology. But it is
not simply an acknowledgement of the revenue streams generated by the
product halo that makes open source a compelling business strategy.
Open source also cuts down on essential research and development costs
while at the same time speeding up delivery of new products.
This paradoxical situation arises from the fact that within an open
source project, the community members themselves provide free research
and development by contributing new solutions, features, and ideas
back to the community as a whole. The company that sits at the center
of any successful open source project may reap the rewards of the work
of thousands of highly skilled developers without paying them a cent.
A final strength of the open source business model lies in its ability
to market itself. Because open source products are typically released
for free, open source companies that can produce quality products and
generate a good reputation can almost immediately grab huge shares of
any market based on the complex and far-reaching global referral
networks generated by users.
In fact, in the web technology space, almost every global standard has
been based upon open source technology. By using the open source
technology model, we can create a superior product, which immediately
has a competitive advantage, and which generates multiple scalable
revenue streams while being freely available throughout the community.

Why use Open Source Software/Free Software?

5

Posted by mady | Posted in | Posted on 12:47 AM

This paper provides quantitative data showing that, in many cases, using
open source software / free software is a reasonable or even superior
approach compared with its proprietary competition, according to various
measures. This paper's goal is to show that you should consider using
OSS/FS when acquiring software. There are many good reasons to use
OSS/FS, and there is actually quantitative data justifying some of its
claims (such as higher reliability).

7.1 Major Projects
A list of important OSS/FS programs that are generally recognized as mature.
Major OSS/FS Projects include:
1. Linux kernel,
2. Apache (web server),
3. Samba (supports interoperability with Windows clients by acting as
a Windows file and print server),
4. GNOME (a desktop environment),
5. KDE (also a desktop environment),
6. The GIMP (bitmapped image editor),
7. MySQL (database emphasizing speed),
8. PostgreSQL (database emphasizing functionality),
9. PHP (hypertext preprocessor used for web development),
10. Mailman (mailing list manager),
11. XFree86 (graphics infrastructure which implements the X window system),
12. bind (domain naming service, a critical Internet infrastructure service),
13. GNU Compiler Collection (GCC, a suite of compilation tools for C,
C++, and several other languages),
14. Perl (programming/scripting language),
15. Python (another programming/scripting language),
16. Mozilla (web browser and email client),
17. OpenOffice.org (office suite, including word processor,
spreadsheet, and presentation software),
18. the open source BSD operating systems:
FreeBSD (general purpose),
OpenBSD (security-focused), and
NetBSD (portability-focused).
A great deal of documentation is available at the Linux Documentation
Project (LDP).
A number of up and coming projects are at an alpha or beta level. Some
projects that have the potential to be very important, have running
code, and are working toward more functionality or stability include
the following: Wine (a program to allow Windows programs to run on
Unix-like systems), AbiWord (a word processor), Gnumeric (spreadsheet),
KOffice (office suite), and GnuCash (money management).
Web projects around the world often use LAMP, an abbreviation for
Linux, Apache, MySQL (sometimes replaced with PostgreSQL), and
PHP/Perl/Python. More complex web projects may use major libraries or
frameworks, such as PHP-Nuke (based on PHP) and Zope (based on
Python).

OPEN-SOURCE SOFTWARE LICENSES

6

Posted by mady | Posted in | Posted on 12:45 AM

Open Source refers to software distributed under a legal license, such
as the GNU General Public License (GPL), that permits free
distribution and requires open availability of the source code. The
essential portions of the Linux operating system—its heart, or
kernel, and most of the utilities that make up the operating
system—are published under the GPL. There are several licensing models
for Open Source. Some require that all changes made to the source must
be freely distributed with the modified product. Other licenses permit
an organization to make changes and keep the changes private.

• Open Source licenses, such as the GPL, guarantee anyone the right to
read, redistribute, modify, and use the software freely.
• Under many Open Source licenses, including the GPL, modifications of
existing software must be distributed under the same license as the
original software. The source code to any changes or improvements must
be made available to the public.
• The GPL is one example of an Open Source license. Other examples
include the BSD license, the MIT X License, the Artistic License, and
the IBM Public License.
All accomplish the same basic objectives: free distribution and openly
available source code. All Open Source licenses meet the Open Source
Definition, which is described at http://opensource.org/osd.html.
Many people have heard that all open-source licenses are the same, and
that open-source software infects everything around it, destroying all
the proprietary value in a company's intellectual property. In fact,
there are many different licenses. Some allow commercialization for
free. Many work quite well with proprietary licensing strategies. The
two most common licenses are the General Public License, or GPL, and
the Berkeley Software Distribution, or BSD, license. The GPL allows
anyone to use, change and share the source code. If you make changes,
though, you must share them freely. The BSD license, by contrast,
allows you to keep your changes private. We can conclude that to be
OSI certified the open source software must be distributed under a
license that guarantees the right to read, redistribute, modify, and
use the software freely.

KEY ROLES IN OPEN SOURCE DEVELOPMENT PROCESS

7

Posted by mady | Posted in | Posted on 12:45 AM

To understand the Open Source software development process it is
important to acknowledge the roles of the various participants who
take part in creating the code.

1. Project Maintainer/Developer
• Determines the software license
• Writes the first code release and puts it up on the Internet
• Sets up a Web site, mailing lists, and version control services (e.g. CVS)
• Builds and leads the development team, usually from volunteers
• Approves official releases

2. Development Team
• Adds features, fixes bugs, creates patches, writes documentation

3. Users/Debuggers
• Find bugs, point out design flaws, and request new features

After the project maintainer puts up the first release, both users and
the development teams submit ideas to the project mailing lists.
Patches come in from developers to the project maintainer. The
maintainer incorporates improvements and releases a new version to the
development team and users. As momentum builds, more people get
involved, and the software evolves. Developers are rewarded by the
immediately visible recognition of their contributions to the product.
Linux is probably the best-known example of a successful Open Source
development project.

The Advantages of Open Source Software over Proprietary Software

7

Posted by mady | Posted in | Posted on 12:44 AM

• Stability: Open Source software is often more reliable and stable
than proprietary software. This is because Open Source projects have
large numbers of contributors and follow an iterative development,
debugging, and testing cycle. The best-known Open Source projects such
as Linux have more contributors and testers than a traditional
software company could afford to deploy on a project.

• Cost: Open Source software is free. This results in immediate
savings on licensing fees and upgrading costs. And the larger the
project, the greater the savings; for example, there is no charge for
additional client connections to an Open Source database.

• Security: In the proprietary software model, developers compete to
discover and exploit or publicize security holes. The Open Source peer
review process redirects developer competition toward preventing
security breaches in the first place. Additionally, there are no
hidden APIs that can be exploited.

• Flexibility: Open Source code can be modified to fit customer
requirements. Drivers can be developed or modified without
reverse-engineering unpublished APIs.

• Choice of vendors: In the Open Source model, vendors compete purely
on the basis of their ability to add value to a shared platform, not
on the basis of proprietary secrets.

• Reduced risk: The Open Source development model effectively spreads
risks over a large pool of programming talent. And it provides a hedge
against obsolescence-for example, if a company that develops Open
Source software goes out of business, the code could thereafter be
maintained in perpetuity by other developers. Cisco Systems recently
decided to release print spooler software under an Open Source license
to reduce its dependency on in-house programming staff.

Open Source Development vs. Traditional Processes

8

Posted by mady | Posted in | Posted on 12:43 AM

Once the originator is ready to invite others into the project, he
makes the code base available and development proceeds. Typically,
anyone may contribute towards the development of the system, but the
originator/owner is free to decide which contributions may or may not
become part of the official release. The open source development (OSD)
model differs from traditional in-house commercial development
processes in several fundamental ways. First, the usual goal of an
open source project is to create a system that is useful or
interesting to those who are working on it, not to fill a commercial
void.
Developers are often unpaid volunteers, who contribute towards the
project as a hobby; in return, they receive peer recognition and
whatever personal satisfaction their efforts bring to them. Sometimes
this means that much of the effort on an OSD project concentrates on
what part-time programmers find interesting, rather than on what might
be more essential. It can be difficult to direct development toward
particular goals, since the project owner holds little power over the
contributing developers. This freedom also means that it can be
difficult to convince developers to perform essential tasks, such as
systematic testing or code restructuring that are not as exciting as
writing new code.

HOW THE OPEN SOURCE MODEL DIFFERS FROM PROPRIETARY SOFTWARE MODELS

7

Posted by mady | Posted in | Posted on 12:42 AM

With traditional proprietary software, the purchaser obtains only
executable code, the ones and zeros that computers understand but that
are unreadable by humans. The company that develops the software holds
the worldwide monopoly on its source code, and becomes the only place
where the code can be modified, updated, or fixed. With Open Source
software, the source code is freely available, giving developers the
ability to isolate and fix bugs and to customize the software to their
needs. A common illustration equates using proprietary software with
driving a car with the hood permanently welded shut. Under this
scenario, if the engine were to break down, the owner would have to
return the car to the manufacturer for repair. Without access to the
engine, neither the owner nor the car dealer would be able to fix the
problem. Open Source software is like a car with a hood that opens.
Car owners can fix problems themselves, or choose a repair service
that best fits their needs. In the proprietary software model, the
best company to provide support is the company that manufactures the
software. The manufacturer is the only company that truly understands
the source code, has access to it, and can modify or fix it when it
breaks.
In the Open Source software model, there is no single manufacturer.
Distributed teams of programmers around the world develop Open Source
software, so there is no exclusive source for expertise,
modifications, or bug fixes. Distribution vendors such as Caldera, Red
Hat, and SuSE are not primarily manufacturers, but rather packagers
and distributors of free software developed by others.

The Open Source model unties the knot between the product vendor and
support services. Because source code is available to all, vendors are
able to focus on a part of the value chain and build competitive
services without fear of proprietary lockouts.
Therefore, the best provider of shrink-wrapped Linux products is the
vendor that best understands packaging, distribution, point-of-sale
promotion, and branding. The best provider of Linux customer services
is the vendor that specializes in service, building deep technical
expertise and superior service delivery systems. The bottom line is
that the Open Source software development model, by creating and
protecting an open playing field, encourages vendor specialization and
fosters honest competition, ultimately giving the customer more
choice, flexibility, and control.

The Pros and Cons of Open Source

3

Posted by mady | Posted in | Posted on 12:41 AM

Some of the benefits of Open Source software include high quality,
flexibility, stable code, cost savings and frequent incremental
releases. Disadvantages include uncertain release schedules and
dependence on the continued interest of a large community of
volunteers. There are many unnoticed advantages of this model, such as
freedom to choose from different vendors, access across multi-vendor
environments, protection of investment in existing computer systems,
the ability to use and share information anywhere in the world, and
interoperability/portability across various platforms.

The concept of open source software (OSS) has become more than a mere
blip on the radar screens of IT professionals. However, the question
of whether open source is a viable, cost-effective system for
developing software for actual business applications has yet to be
answered.
To be certified as OSS, developers must follow the Open Source
Definition (www.opensource.org/osd). The Open Source Web site
(www.opensource.org) cites principles of the definition, including:

• Free Redistribution: No party can be restricted from selling or
giving away the software as a component of an aggregate software
distribution containing programs from several different sources. The
license may not require a fee for such sale.
• Source Code: The program must include source code and must allow
distribution in source code as well as compiled form. If some form of
a product is not distributed with source
code, there must be a publicized means of obtaining the source code
for no more than a reasonable reproduction cost—preferably downloading
via the Internet without charge. The source code must be the preferred
form in which a programmer would modify the program. Deliberately
obfuscated source code is not allowed.
• Derived works: Modifications and derived works are allowed and can
be distributed under the same terms as the license of the original
software.
• Integrity of the Author's Source Code: Source code can be restricted
from distribution in modified form only if the license allows the
distribution of "patch files" with the source code for the purpose of
modifying the program at build time. Software built from modified
source code may be distributed but may be required to carry a
different name or version number from the original software.
• Distribution of License: The rights attached to the program must
apply to all users. No additional licenses are needed.
• License Must Not Be Specific To A Product: The rights attached to
the program must not depend on the program being part of a particular
software distribution.
• License Must Not Contaminate Other Software: No restrictions should
be placed on other software distributed with the licensed software.
For example, all other programs distributed on the same medium need
not be open source software.
Because of its wide-open management methods and unusual fee
structures, OSS, as a business model, seems to fly in the face of
conventional development wisdom. According to the Open Source
Initiative, the organization that maintains the Open Source
Definition, companies can make money with OSS using these four
business models:
• Support Sellers: Companies give away the software product but sell
distribution, branding, and after-sale service.
• Loss Leader: Companies give away open source as a loss leader to
establish market position for closed software.
• Widget Frosting: A hardware company goes open source to get better
and cheaper drivers and interface tools.
• Accessories: Companies sell accessories—books, compatible hardware,
and complete systems—with open source software pre-installed.

The possibility that OSS can be and, some would argue, is viable in a
business enterprise raises questions about how much it costs, what
support is available, and what training is required. All of these are
practical questions that IT professionals need to consider before
jumping into the open source system with both feet.

Benefits and Risks of Open Source Software Compared To Traditional COTS (Commercial Off-The-Shelf)

5

Posted by mady | Posted in | Posted on 12:40 AM

Due to the different development models, Program Managers can achieve
many benefits over traditional COTS by using OSS. Popular open source
products have access to extensive technical expertise, and this
enables the software to achieve a high level of efficiency, using
fewer lines of code than its COTS counterparts. The rapid release rate of
OSS distributes fixes and patches quickly, potentially an order of
magnitude faster than those of commercial software. OSS is relatively
easy to manage because it often incorporates elements such as central
administration and remote management. Because the source code is
publicly available, Program Managers can have the code tailored to
meet their specific needs and tightly control system resources.

Moreover, Program Managers can re-use code written by others for
similar tasks or purposes. This enables Program Managers to
concentrate on developing the features unique to their current task,
instead of spending their effort on rethinking and re-writing code
that has already been developed by others.
Code re-use reduces development time and provides predictable results.
With access to the source code, the lifetime of OSS systems and their
upgrades can be extended indefinitely. In contrast, the lifetime of
traditional COTS systems and their upgrades cannot be extended if the
vendor does not share its code and either goes out of business, raises
its prices prohibitively, or lets the quality of the software
degrade. The open source model builds open standards and
achieves a high degree of interoperability. While traditional COTS
typically depend on monopoly support with one company providing
support and "holding all the cards" (i.e., access to the code) for a
piece of software, the publicly available source code for OSS enables
many vendors to learn the platform and provide support. Because OSS
vendors compete against one another to provide support, the quality of
support increases while the end-user cost of receiving the support
decreases.
Open source can create support that lasts as long as there is demand,
even if one support vendor goes out of business. For government
acquisition purposes, OSS adds potential as a second-source
"bargaining chip" to improve COTS support. OSS can be a long-term
viable solution with significant benefits, but there are issues and
risks to Program Managers. Poor code often results if the open source
project is too small or fails to attract the interest of enough
skilled developers; thus, Program Managers should make sure that the
OSS community is large, talented, and well organized to offer a viable
alternative to COTS. Highly technical, skilled developers tend to
focus on the technical user at the expense of the non-technical user.
As a result, OSS tends to have a relatively weak graphical user
interface (GUI) and fewer compatible applications, making it more
difficult to use and less practical, in particular, for desktop
applications (although some OSS products are greatly improving in this
area). Version control can become an issue if the OSS system requires
integration and development.

As new versions of the OSS are released, Program Managers need to make
sure that the versions to be integrated are compatible, ensure that
all developers are working with the proper version, and keep track of
changes made to the software.
Without a formal corporate structure, OSS faces a risk of
fragmentation of the code base, or code forking, which transpires when
multiple, inconsistent versions of the project's code base evolve.
This can occur when developers try to create alternative means for
their code to play a more significant role than achieved in the base
product. Sometimes fragmentation occurs for good reasons (e.g., if the
maintainer is doing a poor job) and sometimes it occurs for bad
reasons (e.g., a personality conflict between lead developers). The
Linux kernel code has not yet forked; this can be attributed to its
accepted leadership structure, open membership and long-term
contribution potential, GNU General Public License (GPL) licensing
that removes the economic motivations for fragmentation, and the
threat that a fork would split the pool of developers.
Ninety-nine percent of the code shipped by Linux distributions is the same. The small
amount of fragmentation between different Linux distributions is good
because it allows them to cater to different segments. Users benefit
by choosing a Linux distribution that best meets their needs. Finally,
there is a risk of companies developing competitive strategies
specifically focused against OSS.
When comparing long-term economic costs and benefits of open source
usage and maintenance to traditional COTS, the winner varies according
to each specific use and set of circumstances. Typically, open source
compares favorably in many cases for server and embedded system
implementations that may require some customization, but fares no
better than COTS for typical desktop applications.

SIGNIFICANCE OF OPEN SOURCE SOFTWARE MODEL

5

Posted by mady | Posted in | Posted on 12:39 AM

The open source development process differs sharply from the
traditional commercial off-the-shelf (COTS) model. Eric Raymond likens
the corporate or traditional COTS model, whereby a corporation
produces and sells proprietary software, to a cathedral and the open
source model to a bazaar. In the corporate model, individuals or small
groups of individuals quietly and reverently develop software in
isolation, without releasing a beta version before it is deemed ready.
In contrast, the open source model relies on a network of "volunteer"
programmers, with differing styles and agendas, who develop and debug
the code in parallel. The delegated leader chooses which of the
submitted modifications to accept. If the leader thinks a modification
will benefit many users, he will choose the best code from all of the
submittals and incorporate it into the OSS updates. The software is
released early and often.

WHAT IS OPEN SOURCE?

4

Posted by mady | Posted in | Posted on 12:38 AM

Common Public Views:

Open source has burst upon the software development scene as the new
paradigm of faster turnaround and more reliable software. With the
open source development model, a computer program's source code is
given away freely along with the program itself. This allows any
programmer to view, modify, and redistribute the program. By allowing
the outside world to adapt and propagate the source code, the
development lifecycle is greatly reduced and the final product is much
more stable and versatile, proponents advocate. The best thing about
Open source is that it propels innovation, as users are free to tailor
the software to suit their own needs and circulate those changes.
Most Open Source software is not developed by one single vendor, but
by a distributed group of programmers. Typically, open source software
development is guided by project maintainers who address technical or
end-user requirements rather than vendor agendas.
Nobody "owns" Open Source software, which is freely available for
download over the Internet.
Closed-source software is the kind that most people know best. For
decades, software companies have shipped their products on floppy
disks and CD-ROMs. People can install and use those programs but
cannot change them or fix them. The human-readable version of the
software, the source code, is jealously guarded by the software maker.
One may think that open-source software is less secure or less
reliable than closed source. This isn't true. For example, it is now
widely accepted in the computer industry that the open-source
Apache Web server is a much more secure alternative to Microsoft's
closed-source Internet Information Server. Open source lets engineers
around the world examine the code for security flaws and other bugs.
Unlike most commercial software, the core code of such software can be
easily studied by other programmers and improved upon; the only
proviso being that such improvements must also be revealed publicly
and distributed freely in a process that encourages continual
innovation.


The Formal Framework:

Open source, by definition, means that the source code is available.
Open source software (OSS) is software with its source code available
that may be used, copied, and distributed with or without
modifications, and that may be offered either with or without a fee.
If the end-user makes any alterations to the software, he can either
choose to keep those changes private or return them to the community
so that they can potentially be added to future releases. The Open
Source Initiative (OSI), an unincorporated nonprofit research and
educational association with the mission to own and defend the open
source trademark and advance the cause of OSS, certifies open source
licenses. The open source community consists of individuals or groups
of individuals who contribute to a particular open source product or
technology. The open source process refers to the approach for
developing and maintaining open source products and technologies,
including software, computers, devices, technical formats, and
computer languages.
Open source software, by definition, includes any program or
application in which the programming code is open and visible. The
concept of open source software dates to the earliest days of computer
programming. The term came into popular usage following a February
1998 meeting in Palo Alto, California. A group of leading free
software advocates, reacting to Netscape's announcement that it
planned to make the source code for its browser widely available, came
to the realization that open source software had to be promoted and
marketed based on pragmatic business strategies to compete effectively
against closed source vendors.

Open Source Software Model

8

Posted by mady | Posted in | Posted on 12:37 AM

We've all heard a lot of talk about open source, a software
application development paradigm that puts development into the hands
of a loosely defined community of programmers. Linux in particular, an
open source operating system begun by Linus Torvalds in 1991,
seems to be the poster child for the movement.
Open source is nothing new to computing; it has been the underpinning
of the Internet for years. Open source software is an idea whose time
has finally come. For twenty years it has been building momentum in
the technical cultures that built the Internet and the World Wide Web.
Now it's breaking out into the commercial world, and that's changing
all the rules.
Open Source software puts a new marketing face on a long tradition of
enterprise-class free software. Unlike closed-source, packaged
applications, Open Source software gives you the source code, which
you can modify to fit your needs. Depending on the license, you can
often incorporate Open Source code into commercial products. Open Source
solutions are available for almost any conceivable application. Some
of the world's largest companies, as well as the Internet itself,
depend on Open Source for enterprise applications.
The basic idea behind open source is very simple: When programmers can
read, redistribute, and modify the source code for a piece of
software, the software evolves. People improve it, people adapt it,
and people fix bugs. And this can happen at a speed that, if one is
used to the slow pace of conventional software development, seems
astonishing.
The motives (or at least the emphasis) of the people who use the term
"open source" are sometimes different from those of people who use the
term "Free Software." The term "open source software" (a term
championed by Eric Raymond) is often used by people who wish to stress
aspects such as the high reliability and flexibility of the resulting
program as the primary motivation for developing such software. In
contrast, the term "Free Software" (used in this way) stresses
freedom from control by another (the standard explanation is "think
free speech, not free beer").

Proof of Signature Verification.

8

Posted by mady | Posted in | Posted on 12:33 AM

The purpose of this section is to show that if M' = M, r' = r and s' = s
in the signature verification, then v = r', where M is the sent
message, the pair of numbers r and s is the signature of the message
M, M' is the received message, and r' and s' is the received signature.
Before proving the final result, we state and prove the following lemma.

Lemma:
Let p and q be primes such that q divides p - 1, h a positive integer
less than p, and
g = h^((p-1)/q) mod p.
Then g^q mod p = 1, and if m mod q = n mod q, then g^m mod p = g^n mod p.
Proof:
We have
g^q mod p = (h^((p-1)/q) mod p)^q mod p
= h^(p-1) mod p
= 1, by Fermat's Little Theorem.

Now let m mod q = n mod q, i.e., m = n + kq for some integer k. Then
g^m mod p = g^(n+kq) mod p
= (g^n · g^(kq)) mod p
= ((g^n mod p) · (g^q mod p)^k) mod p
= g^n mod p,

since g^q mod p = 1.
We are now ready to prove the main result.

THEOREM:
If M' = M, r' = r, and s' = s in the signature verification, then v = r'.
Proof: We have,
w = (s')^-1 mod q = s^-1 mod q
u1 = (SHA(M') · w) mod q = (SHA(M) · w) mod q
u2 = (r' · w) mod q = (r · w) mod q.
Now y = g^x mod p,

so that by the lemma,

v = ((g^u1 · y^u2) mod p) mod q

= ((g^(SHA(M)·w) · y^(r·w)) mod p) mod q

= ((g^(SHA(M)·w) · g^(x·r·w)) mod p) mod q

= ((g^((SHA(M)+x·r)·w)) mod p) mod q.

Also,
s = (k^-1 · (SHA(M) + x·r)) mod q.

Hence,
w = (k · (SHA(M) + x·r)^-1) mod q

(SHA(M) + x·r) · w mod q = k mod q.

Thus by the lemma,
v = (g^k mod p) mod q = r = r'.

Hence the theorem is proved.
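
As a concrete illustration, here is a small numeric check of the
theorem in Python. The values p = 23, q = 11, x = 5, k = 3 and the
stand-in digest H = 10 are toy assumptions chosen only for this
sketch; they are far too small for real use, and pow(·, -1, q) for the
modular inverse requires Python 3.8 or later.

# Toy check of the theorem with tiny, insecure parameters.
p, q = 23, 11                    # q divides p - 1
h = 2
g = pow(h, (p - 1) // q, p)      # g = h^((p-1)/q) mod p, so g has order q

x = 5                            # private key, 0 < x < q
y = pow(g, x, p)                 # public key, y = g^x mod p

H = 10                           # toy stand-in for SHA(M)
k = 3                            # per-signature secret, 0 < k < q

# Signature generation: r = (g^k mod p) mod q, s = (k^-1 (H + x r)) mod q
r = pow(g, k, p) % q
s = (pow(k, -1, q) * (H + x * r)) % q

# Signature verification: v should equal r when nothing was altered
w = pow(s, -1, q)
u1 = (H * w) % q
u2 = (r * w) % q
v = (pow(g, u1, p) * pow(y, u2, p)) % p % q

print(v == r)                    # True, as the theorem predicts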

ASPECTS OF DIGITAL SIGNATURE

31

Posted by mady | Posted in | Posted on 12:31 AM

In various countries the complete process of using digital signatures
has been standardized under the protection of law. The Information
Technology Act of India has a number of sections which define the
Digital Signature Standard. The rules for certifying authorities, such
as the process of obtaining a license to issue digital signature
certificates, are laid down under this Act. A few of these provisions
are discussed below to give an idea of the legal recognition of
digital signatures.


1. Secure digital signature

If, by application of a security procedure agreed to by the parties
concerned, it can
be verified that a digital signature, at the time it was affixed, was:-

(a) unique to the subscriber affixing it.

(b) capable of identifying such subscriber.

(c) created in a manner or using a means under the exclusive control
of the subscriber and is linked to the electronic record to which it
relates in such a manner that if the electronic record was altered the
digital signature would be invalidated, then such digital signature
shall be deemed to be a secure digital signature.


2. Rules for Certifying Authority.

2.1. Certifying Authority to follow certain procedures.

Every Certifying Authority shall,
(a) make use of hardware, software and procedures that are secure from
intrusion and misuse.

(b) provide a reasonable level of reliability in its services which
are reasonably suited to the performance of intended functions.

(c) adhere to security procedures to ensure that the secrecy and
privacy of the digital signatures are assured.

(d) observe such other standards as may be specified by regulations.

(e) ensure that every person employed or otherwise engaged by it
complies, in the course of his employment or engagement, with the
provisions of this Act, rules, regulations and orders made there
under.

2.2. Certifying Authority to issue Digital Signature Certificate.

(1) Any person may make an application to the Certifying Authority for
the issue of a Digital Signature Certificate in such form as may be
prescribed by the Central Government.

(2) Every such application shall be accompanied by such fee as may be
prescribed by the Central Government, to be paid to the Certifying
Authority.

(3) Every such application shall be accompanied by a certification
practice statement or where there is no such statement, a statement
containing such particulars, as may be specified by regulations.

(4) On receipt of an application the Certifying Authority may, after
consideration of the Certification practice statement or the other
statement and after making such enquiries as it may deem fit, grant
the Digital Signature Certificate or for reasons to be recorded in
writing, reject the application.

Provided that no Digital Signature Certificate shall be granted unless
the Certifying Authority is satisfied that-

(a) the applicant holds the private key corresponding to the public
key to be listed in the Digital Signature Certificate.

(b) the applicant holds a private key, which is capable of creating a
digital signature.

(c) the public key to be listed in the certificate can be used to
verify a digital signature affixed by the private key held by the
applicant.

Provided further that, no application shall be rejected unless the
applicant has been given a reasonable opportunity of showing cause
against the proposed rejection.

DRAWBACKS OF USING DIGITAL SIGNATURE

100

Posted by mady | Posted in | Posted on 12:29 AM

Although the digital signature technique is a very effective method of
maintaining integrity and authentication of data, there are some
drawbacks associated with this method. They are discussed in this
section.


1. The private key must be kept in a secure manner. The loss of the
private key can cause severe damage, since anyone who obtains it can
use it to send signed messages; the corresponding public key will
verify these messages as valid, and the receivers will believe that
the messages were sent by the authentic private key holder.

2. The process of generating and verifying a digital signature
requires a considerable amount of time, so for frequent exchange of
messages the speed of communication is reduced.

3. When the digital signature is not verified by the public key, the
receiver simply marks the message as invalid, but he does not know
whether the message was corrupted or a false private key was used.

4. To use digital signatures, the user has to obtain a private and
public key pair, and the receiver also has to obtain the digital
signature certificate. This requires them to pay an additional amount
of money.

5. If a user changes his private key after every fixed interval of
time, then a record of all these changes must be kept. If a dispute
arises over a previously sent message, the old key pair needs to be
consulted. Thus, storage of all the previous keys is another
overhead.

6. Although a digital signature provides authenticity, it does not
ensure secrecy of the data. To provide secrecy, some other technique,
such as encryption, needs to be used.

APPLICATIONS OF DIGITAL SIGNATURE

10

Posted by mady | Posted in | Posted on 12:28 AM

The scope of Digital Signature is not just limited to exchange of
messages. The handwritten signature is commonly used in all kinds of
applications to prove the identity of the signer. In the same way, a
digital signature can be used for all kinds of electronic records. Any
field in which the integrity and validity of the data is crucial, can
make use of a Digital Signature. Here we discuss a few of these
applications.

1. Electronic Mail.

When we send an e-mail to a mailbox, it is desired that the owner of
the mailbox should get the e-mail in its original form. If during
transport, the content changes either accidentally or due to intrusion
by a third party, then the receiving end should be able to recognize
this change in the content. Also no person should be able to send
e-mail in the disguise of another person. Both these factors are taken
care of by the Digital signature. Any change in the e-mail will affect
the message digest generated by the SHA and thus the digital signature
will be marked as unverified. So the recipient will reject that
message.
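
As a small illustration (assuming Python with its standard hashlib
module, and using SHA-1 as the digest, as in the DSA description later
in this blog), even a one-word change to an e-mail produces a
completely different digest, so a signature computed over the original
digest no longer verifies. The message text is made up for this sketch.

import hashlib

original = b"Please transfer the funds on Friday."
tampered = b"Please transfer the funds on Monday."

# The two digests differ completely, so a signature computed over the
# original digest fails verification for the tampered e-mail.
print(hashlib.sha1(original).hexdigest())
print(hashlib.sha1(tampered).hexdigest())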

2. Data storage.

This is one more interesting application of Digital Signature.
Suppose a large amount of data is stored on a computer. Only
authorized people are allowed to make changes to the data. In such a
case, a signature can be stored as an attachment along with the data.
This signature is generated from the data digest and the private key.
If any changes are made to the data by an unauthorized person, they
will be easily recognized at the time of signature verification and
that copy of the data will be discarded.

3. Electronic funds transfer.

Applications such as online banking and e-commerce come under this
category. In these applications the information being exchanged by the
two sides is vital, and thus extreme secrecy and authenticity must be
maintained. A digital signature can ensure the authentication of the
information but, the secrecy should be maintained by using some
encryption techniques. So before generating the message digest, the
message should be encrypted. Then the digital signature is generated
and attached to the message. At the receiving end after verification
of signature, the message is decrypted to recover the original
message.

4. Software Distribution.
Software developers often distribute their software using some
electronic medium, for example, the Internet. In this case, a Digital
Signature can be used to ensure that the software remains unmodified
and its source is genuine. The developer signs the software and the
users verify the signature before using it. Only if the signature is
verified can the users be sure about the validity of that software.

Example of Digital Signature

74

Posted by mady | Posted in | Posted on 12:27 AM

Suppose there is a company XYZ, which is a client of ABC consultancy.
XYZ seeks advice from ABC in company matters. They used to
communicate with each other through letters for many years. Now, with
the computerization of XYZ, they decide to use electronic media for
information exchange. Since the information exchanged by the two
parties is of professional importance, they decide to use some
authentication protocol, so that reliable communication is possible.
The representatives from ABC and XYZ conduct a meeting and decide the
use of digital signature, as a means of authentic message transport.
To establish the digital signature system they perform the following
actions,

1. Each party contacts the authority responsible for allocating the
private and public key. By paying the required amount, each of them
gets a unique key pair.

2. Then each of them makes an application to the Certifying Authority
for getting the Digital Signature Certificate for the public key of
other party.

3. The Certifying authority asks the applicants to produce the private
key corresponding to the public key, to be listed in the digital
signature certificate, i.e. for ABC to obtain a certificate for the
public key of XYZ, it should ask XYZ to produce its private key and
public key before the concerned Certifying Authority.

4. The Certifying Authority verifies the functioning of the key pair
i.e. they are capable of generation and verification of digital
signature.

5. On confirming the working of key pair, it issues a Digital
Signature Certificate to the applicant.

6. Now the company XYZ has the Certificate, which lists the public key
of ABC. While ABC has the Certificate, which lists the public key of
XYZ.

7. They install the software necessary for generation and verification
of each other's digital signature. This software must be same for both
the parties, so that they use the same hashing algorithm.

8. With this set-up they are ready to use digital signatures with
their messages. Each party can sign a message using its private key,
and the recipient party can verify it using the corresponding public
key listed on the digital signature certificate; a minimal sketch of
this exchange is shown below.
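
A minimal sketch of such a sign-and-verify exchange, using the
third-party Python package `cryptography` (an assumption; any
DSA-capable library would do). The key pair is generated locally here
for brevity, whereas in the scenario above the keys and the
certificate come from the Certifying Authority; the variable names and
the message text are illustrative only.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.exceptions import InvalidSignature

# ABC's key pair (stand-in for the pair obtained from the key-issuing authority)
abc_private_key = dsa.generate_private_key(key_size=2048)
abc_public_key = abc_private_key.public_key()   # the key listed on XYZ's certificate for ABC

message = b"Advice: postpone the expansion until next quarter."
signature = abc_private_key.sign(message, hashes.SHA256())

# XYZ verifies the received message with ABC's public key
try:
    abc_public_key.verify(signature, message, hashes.SHA256())
    print("Signature verified; the advice is accepted as coming from ABC.")
except InvalidSignature:
    print("Signature invalid; the message is rejected.")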

To understand how this system behaves in different circumstances we
consider number of cases of usage of this system.
CASE 1:
Company XYZ needs an advice from ABC consultancy regarding the
financial strategy of the company. So, it creates a message addressed
to ABC and attaches the digital signature to the message using the
correct private key. ABC receives the message from XYZ, and it applies
the public key of XYZ to the message. Suppose the message gets
verified.

Conclusion: Since the message got verified, ABC is assured that the
message was sent by XYZ and the content of that message is intact
since it was sent by XYZ.

CASE 2:
On receiving the above message, ABC decides to send an advice to XYZ.
So, ABC writes a message addressed to XYZ and uses its private key to
generate the digital signature. On receiving this message XYZ applies
the corresponding public key and verifies the message. It finds that
the signature gets verified.

Conclusion: Verification of the message is an indication of the
authenticity of the sender and integrity of the data. Thus XYZ can
safely assume that it has an unmodified message from ABC only, and no
one else.

CASE 3:
Suppose company XYZ takes action according to the advice given by ABC
consultancy and XYZ suffers a major financial loss due to this
action. XYZ holds ABC responsible for the loss and wants to take legal
action against ABC. Thus XYZ files a case in the court, accusing ABC
for giving wrong advice and demands compensation for the loss
suffered. The consultancy ABC denies giving such advice to XYZ. The
court asks XYZ to prove their claim against ABC.

Then XYZ produces the copy of message received from ABC and the
Digital signature certificate, which lists the public key of ABC. It
shows that the signature on the message gets verified by ABC's public
key, so that message was indeed sent by ABC. The court accepts the
claim of XYZ and orders ABC to give compensation to XYZ.

Conclusion: A digital signature can be used to prove the identity of
the sender to a third party.

CASE 4:
A company LMN is a business rival of XYZ and it knows about the
communication of XYZ with consultancy ABC. So LMN sends a fake
message, containing a false advice, to XYZ pretending to be ABC. On
receiving this message, XYZ verifies it with the public key of ABC. It
finds that the signature doesn't get verified. So, it rejects the
message considering it as invalid. Thus it is saved from getting wrong
advice.

Conclusion: Any message not signed by the proper private key will not
get verified by the public key corresponding to the correct private
key.
CASE 5:
Failing to mislead XYZ, LMN now decides to use some different method.
By some means LMN manages to modify the content of a message sent by
ABC to XYZ. When XYZ receives the message and verifies it with the
public key, it finds that message is invalid. Thus it rejects the
advice. So again XYZ is safeguarded from the attempt to intrude into
the communication. XYZ immediately informs ABC about the rejection of
the message and asks them to resend the message.

Conclusion: Although the proper private key is used to generate the
message, if the message content gets modified, then the message digest
generated at the receiver's end is different, due to which the
signature will never get verified.


CASE 6:
With the failure of one more attempt to misinform the company XYZ,
LMN decides to steal the private key from ABC and somehow succeeds
in obtaining it. LMN writes a message to XYZ in the disguise of ABC
and digitally signs the message using the stolen key. On receiving the
message, XYZ verifies the digital signature and finds it to be valid.
Thus it accepts the advice and acts accordingly. Following the wrong
advice it suffers a loss, and XYZ accuses ABC of causing the loss. The
court, finding the signature valid, accepts the claim of XYZ, and ABC
is asked to give compensation.

Conclusion: Security of the private key is the responsibility of the
key holder. If the key is lost, the key owner will be responsible for
any damage done using the key.


On the basis of all the above cases, we can conclude that a Digital
Signature can protect the subscribers from any attempt at forgery,
provided that the private key is kept in a secure manner. This system
is also considered valid in legal matters. So using a digital
signature is definitely an excellent option for preserving the
integrity of data and the authenticity of the user's identity.

DSA PARAMETERS

9

Posted by mady | Posted in | Posted on 12:26 AM

1. Specification of parameters.

The DSA (Digital Signature Algorithm) makes use of the following parameters:

1. p is a prime number, where 2^(L-1) < p < 2^L for 512 <= L <= 1024 and
L a multiple of 64.
2. q is a prime divisor of p - 1, where 2^159 < q < 2^160.
3. g = h^((p-1)/q) mod p, where h is any integer with 1 < h < p - 1 such that
h^((p-1)/q) mod p > 1 (g has order q mod p).
4. x = a randomly generated integer with 0 < x < q.
5. y = g^x mod p.
6. k = a randomly or pseudorandomly generated integer with 0 < k < q.

The integers p, q, g can be public and they can be common to a group
of users. A user's private and public keys are x and y, respectively.
They are normally fixed for a period of time. Parameters x and k are
used for signature generation only, and must be kept secret. Parameter
k must be regenerated for each signature.
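
The relationships among these parameters can be sketched in Python
with toy sizes. The values p = 23, q = 11 and h = 2 are assumptions
for this sketch only, nowhere near the 512 to 1024 bit sizes specified
above.

import secrets

p, q = 23, 11                       # toy primes with q dividing p - 1
assert (p - 1) % q == 0

h = 2                               # any 1 < h < p - 1 with h^((p-1)/q) mod p > 1
g = pow(h, (p - 1) // q, p)
assert pow(g, q, p) == 1            # g has order q mod p

x = secrets.randbelow(q - 1) + 1    # private key, 0 < x < q
y = pow(g, x, p)                    # public key, y = g^x mod p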


2. Signature Generation.

The signature of a message M is the pair of numbers r and s computed
according to the equations below.

r = (g^k mod p) mod q and
s = (k^-1 · (SHA(M) + x·r)) mod q.

The value of SHA (M) is a 160-bit string output by the Secure Hash
Algorithm. For use in computing s, this string must be converted to an
integer. As an option, one may wish to check if r = 0 or s = 0. If
either r = 0 or s = 0, a new value of k should be generated and the
signature should be recalculated (it is extremely unlikely that r = 0
or s = 0 if signatures are generated properly).

The signature is transmitted along with the message to the verifier.
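
A sketch of this step in Python, assuming the toy parameters p, q, g
and private key x from the previous sketch. SHA-1 from hashlib stands
in for the Secure Hash Algorithm, pow(k, -1, q) needs Python 3.8+, and
the helper names sha_int and sign exist only for this sketch.

import hashlib, secrets

def sha_int(message: bytes) -> int:
    # SHA(M) converted to an integer, as required for computing s
    return int.from_bytes(hashlib.sha1(message).digest(), "big")

def sign(message: bytes, p: int, q: int, g: int, x: int):
    while True:
        k = secrets.randbelow(q - 1) + 1      # fresh per-signature secret, 0 < k < q
        r = pow(g, k, p) % q
        s = (pow(k, -1, q) * (sha_int(message) + x * r)) % q
        if r != 0 and s != 0:                 # otherwise pick a new k, as the text notes
            return r, s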


3. Signature Verification.

Prior to verifying the signature in a signed message, p, q and g plus
the sender's public key and identity are made available to the
verifier in an authenticated manner.

Let M', r' and s' be the received versions of M, r, and s,
respectively, and let y be the public key of the signatory. The
verifier first checks to see that 0 < r' < q and 0 < s' < q; if either
condition is violated the signature shall be rejected. If these two
conditions are satisfied, the verifier computes

w = (s')^-1 mod q
u1 = (SHA(M') · w) mod q
u2 = (r' · w) mod q
v = ((g^u1 · y^u2) mod p) mod q.

If v = r', then the signature is verified and the verifier can have
high confidence that the received message was sent by the party
holding the secret key x corresponding to y. For a proof that v = r'
when M' = M, r' = r, and s' = s, see Appendix 1.

If v does not equal r', then the message may have been modified, the
message may have been incorrectly signed by the signatory, or the
message may have been signed by an impostor. The message should be
considered invalid.
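
A matching verification sketch, under the same assumptions as the
signing sketch above (the helper name verify is illustrative): it
performs the range check on r' and s', computes w, u1, u2 and v, and
accepts only when v equals r'.

def verify(message: bytes, r: int, s: int, p: int, q: int, g: int, y: int) -> bool:
    # Reject the signature if either value is out of range
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1 = (sha_int(message) * w) % q
    u2 = (r * w) % q
    v = (pow(g, u1, p) * pow(y, u2, p)) % p % q
    return v == r

# Example usage: a message signed with x verifies with y; a tampered message does not.
# r, s = sign(b"hello", p, q, g, x)
# print(verify(b"hello", r, s, p, q, g, y))    # True
# print(verify(b"hello!", r, s, p, q, g, y))   # False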

