To schedule an automatic shutdown on Windows:
Steps:
1. Go to the Start menu.
2. Choose Run.
3. Type shutdown -s -t 3600 (here 3600 is the time in seconds).
4. Press Enter.
Your system will now shut down automatically one hour from now.
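If you prefer to run this from a script rather than the Run box, here is a minimal sketch (Windows only, and it assumes Python is installed) that converts minutes to seconds before issuing the same command:

    import subprocess

    def schedule_shutdown(minutes: int) -> None:
        """Schedule an automatic Windows shutdown after the given number of minutes."""
        seconds = minutes * 60  # shutdown expects the delay in seconds
        subprocess.run(["shutdown", "-s", "-t", str(seconds)], check=True)

    schedule_shutdown(60)   # shut down one hour from now
    # A scheduled shutdown can be cancelled with: shutdown -a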
Think of a Web Service as a black box resource that accepts requests
from a consumer (some kind of program running on the web client),
performs a specific task, and returns the results of that task. In
some respects, a search engine such as Google (www.google.com) is a
kind of Web Service – you submit a search expression, and it compiles
a list of matching sites, which it sends back to your browser.
Currently, the term "Web Service" is something of a buzzword within
the sphere of software development, thanks to a number of new
protocols that have opened up the scope of what we can expect Web
Services to do. XML plays a central role in all these technologies.
There's a very important distinction between a web service like
Google and the kind of XML Web Service that we're going to be talking
about: on Google, you submit the search expression, and you read the
list of sites that gets sent back. Okay, the browser provides you with
a textbox, and parses the response stream so that it looks nice – but
it doesn't actually understand the information you've submitted, let
alone the HTML that Google sends back.
If you are using an XML Web Service, you can assume the results will
be returned as some kind of XML document that is explicitly structured
and self-describing. It is therefore quite straightforward to write a
program that interprets these results and perhaps even formulates a
new submission.
As we're going to see, ASP.NET makes it very easy to build XML Web Services,
and just as easy to use them – ultimately you need only to reference
the Web Service in your code, and you can use it just as if it were a
local component. As with normal components, we don't need to know
anything about how the service functions, just the tasks it can do,
the type of information it needs to do them, and the type of results
we're going to get back.
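As a rough sketch of what consuming such a service looks like (the URL and the response fields here are hypothetical, not a real service), a small program can request the XML and read the structured result directly:

    import urllib.request
    import xml.etree.ElementTree as ET

    # Hypothetical XML Web Service endpoint; a real service publishes its own URL and schema.
    URL = "http://example.com/stockquote?symbol=MSFT"

    with urllib.request.urlopen(URL) as response:
        document = ET.parse(response)   # the reply is a self-describing XML document

    # Because the result is explicitly structured, the program reads values directly
    # instead of scraping HTML that was formatted for a human reader.
    price = document.findtext("price")
    print("Quoted price:", price)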
1. KINDS OF WEB SERVICES
Practical applications of web services technologies fall into three groups:
1.1 PLUG-IN FUNCTIONALITY.
The simplest and most prevalent web services in use today add
third-party functions to web pages and portals. Common examples
include external news feeds and stock quotes, banner ad serving, and
search boxes. Few of these use XML and SOAP, relying instead on HTML-based
technologies such as JavaScript, CGI calls, and Java.
1.2 REMOTE INFRASTRUCTURE SERVICES.
Third-party providers use web services technologies to deliver
behind-the-scenes functionality for commercial websites, such as user
authentication, payment processing and visitor traffic analysis.
1.3 ENTERPRISE APPLICATION INTEGRATION.
Web services technologies are rapidly finding favour as a solution to
the complex integration challenges of linking applications within the
enterprise, or across a value chain. EAI implementations are the most
likely to use formal web services standards.
2. REQUIREMENTS OF WEB SERVICES
The original Internet made it possible to send and receive email and
to share access to files. The World Wide Web added a software layer
that made it easier to publish and access other content. The final
development has been the emergence of web
services, enriching the software layer with application functionality.
The task of the web services infrastructure is to support
commercial-grade application functionality.
The difference between content and applications comes from the
addition of process — the sequence of events that need to happen in
order to produce a result. When participants in the application are
distributed across the web, the need to complete processes adds some
important new operating requirements:
2.1 CONSISTENCY
Each separate component must act within certain parameters, such as
response times and availability.
2.2 AUTHENTICITY
There must be some way of assuring the identity of each of the
participants in the process.
2.3 TIMELINESS
Each step in the process must execute in the correct order, and
promptly — especially if a user is waiting on the result in order to
continue with their work.
2.4 INTEGRITY
There must be mechanisms to avoid data becoming corrupted as it passes
between participants.
2.5 PERSISTENCE
Each participant in the process must maintain a permanent record of
their part in the transaction.
The web services infrastructure must provide a platform that supports
these requirements and more. That demands an array of tools and
services to enhance, monitor
and maintain quality-of-service.
The key benefits of Web Services are:
1. SOFTWARE AS A SERVICE
As opposed to packaged products, Web Services can be delivered and
paid for as streams of services and allow ubiquitous access from any
platform. Web services allow for encapsulation. Components can be
isolated such that only the business-level services are exposed. This
results in decoupling between components and more stable and flexible
systems.
2. DYNAMIC BUSINESS INTEROPERABILITY
New business partnerships can be constructed dynamically and
automatically since Web Services ensure complete interoperability
between systems.
3. ACCESSIBILITY
Business services can be completely decentralized and distributed over
the Internet and accessed by a wide variety of communications devices.
4. EFFICIENCIES
Businesses can be released from the burden of complex, slow, and
expensive software development and focus instead on value-added and
mission-critical tasks. Web services constructed from applications
meant for internal use can be easily exposed for external use without
changing code. Incremental development using Web services is natural
and easy, and since Web Services are declared and implemented in a
human-readable format, bug tracking and fixing are easier. The overall
result is risk reduction and more efficient deployability.
5. UNIVERSALLY AGREED SPECIFICATIONS
Web Services are based on universally agreed specifications for
structured data exchange, messaging, discovery of services, interface
description, and business process orchestration.
6. LEGACY INTEGRATION
Greater agility and flexibility result from increased integration
between legacy systems.
7. NEW MARKET OPPORTUNITIES
Dynamic enterprises and dynamic value-chain businesses become more
feasible.
In April 2002, IBM, Microsoft, and VeriSign published a new Web
Services security specification, WS-Security. The specification aims
to help enterprises build secure Web Services, and applications based
on them that are broadly interoperable. Eventually, this specification
would be submitted for consideration as a standard, and given the
amount of commitment that IBM, Microsoft, and VeriSign have invested
in it, it should soon go that way. This specification proposes a
standard set of SOAP extensions that can be used when building secure
Web Services to implement integrity and confidentiality.
1. QUALITY OF PROTECTION
When we talk about security in Web Services, there are three types of
potential threats that need to be considered and addressed:
· The SOAP message could be modified or read by hackers.
· A hacker could send messages that, while well-formed, lack
appropriate security claims yet are still processed by the service.
· Service theft. For example, a subscription based Web Service that
doesn't authenticate or is not well secured is open to service
leeching by unauthorized users. Not necessarily hackers per se, but
people who are taking advantage of a hole in the service to get the
service for free.
A message security model is defined to address these threats.
2. MESSAGE SECURITY MODEL
The WS-Security specification specifies an abstract message security
model in terms of security tokens combined with digital signatures as
proof of possession of the security token referred to as a key.
Security tokens assert claims, and signatures provide a mechanism for
authenticating the sender's knowledge of the key. This signature can
also be used to bind with the claims in the security token. This
assumes that the token is trusted. It may be interesting to note that
we do not specify a particular method for authentication. The
specification only indicates that security tokens may be bound to
messages. This is where the power and extensibility of WS-Security
lies.
A claim can be either endorsed or unendorsed by a trusted authority. A
set of endorsed claims is usually represented as a signed security
token that is digitally signed or encrypted by the authority. An X.509
certificate, claiming the binding between one's identity and public
key, is an example of a signed security token.
An unendorsed claim, on the other hand, can be trusted if there is a
trust relationship between the sender and the receiver.
One special type of unendorsed claim is Proof-of-Possession. Such a
claim proves that the sender has a particular piece of knowledge that
is verifiable by appropriate actors.
For example, a username/password combination is a security token with
this type of claim. A Proof-of-Possession claim is sometimes combined
with other security tokens to prove the claims of the sender.
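As a heavily simplified sketch (hypothetical credentials, hand-built markup), this is roughly what a SOAP envelope carrying a WS-Security UsernameToken looks like; real implementations use XML and WS-Security libraries rather than string formatting, and usually sign or encrypt the body as well:

    import base64
    import hashlib
    import os
    from datetime import datetime, timezone

    username, password = "alice", "s3cret"   # hypothetical credentials
    nonce = os.urandom(16)
    created = datetime.now(timezone.utc).isoformat()

    # Password digest in the style of the UsernameToken profile:
    # Base64( SHA-1( nonce + created + password ) )
    digest = base64.b64encode(
        hashlib.sha1(nonce + created.encode() + password.encode()).digest()
    ).decode()

    # The Password Type attribute value is abbreviated here; the profile defines the full URI.
    envelope = f"""<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <soap:Header>
        <wsse:Security>
          <wsse:UsernameToken>
            <wsse:Username>{username}</wsse:Username>
            <wsse:Password Type="...#PasswordDigest">{digest}</wsse:Password>
            <wsse:Nonce>{base64.b64encode(nonce).decode()}</wsse:Nonce>
          </wsse:UsernameToken>
        </wsse:Security>
      </soap:Header>
      <soap:Body><!-- application payload, optionally signed and/or encrypted --></soap:Body>
    </soap:Envelope>"""
    print(envelope)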
3. MESSAGE PROTECTION
The primary security concerns in Web Services are confidentiality and
integrity. WS-Security provides a means to protect messages by
encrypting and/or digitally signing a body, a header, an attachment,
or any combination of these. Message integrity is provided by using
XML Signature in conjunction with security tokens to ensure that
messages are transmitted without modifications. The integrity
mechanisms are designed to support multiple signatures, potentially by
multiple actors, and to be extensible to support additional signature
formats.
Message confidentiality leverages XML Encryption in conjunction with
security tokens to keep portions of a SOAP message confidential. The
encryption mechanisms are designed to support additional encryption
processes and operations by multiple actors.
4. MISSING OR INAPPROPRIATE CLAIMS
The message receiver should reject a message with an invalid
signature, or missing or inappropriate claims, as if it is an
unauthorized (or malformed) message, as would be expected in a secure
environment. WS-Security provides a flexible way for the message
sender to claim the security properties by associating zero or more
security tokens with the message.
Once your Web Service is available to the public, you may attract a
client who is particularly interested in the service you provide.
They're so interested, in fact, that they consider wrapping your powerful
Web Service inside of their own and representing it as their own
product. Without security safeguards in place (and legal documents as
well), a client may repackage your Web Service as if it were their own
function, and there's no way for you to detect that this is being done
(though you may become suspicious by examining your usage log when
your client who occasionally uses your Web Service suddenly shows an
enormous increase in activity). Given the level of abstraction that
Web Services provide, it would also be nearly impossible for any
customers of your unethical client to know who owns the functionality.
Some organizations use a combination of usage logging and per-use
charges. Another, simpler way to detect piggybacking is to use false
data tests. We could create an undocumented function within our Web
Service that produces a result only our own logic could produce, and
use it to determine whether the client is piggybacking on our Web
Service or is truly using its own logic. For example, the Web Service
could provide a specific result which is undocumented and known only
to us. Say the Web Service takes a phone number as input and returns
the name of the person or organization that owns that number, and
suppose we are sure that there is no entry for a phone number
containing only zeroes. We make sure that when such a number is
entered, the Service returns a message which is specific and known
only to us. We could then test this on the
piggybacking company we suspect is stealing our Web Service. Since this hidden
functionality would not be published, it would provide a great way to
prove that a company was reselling your Web Service's logic without
your legal approval.
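A minimal sketch of such a false-data test, using the hypothetical phone-number service described above: the all-zeroes number returns a sentinel answer that only our own logic can produce.

    # Hypothetical phone-number lookup service with an undocumented "canary" response.
    CANARY_NUMBER = "0000000000"
    CANARY_RESPONSE = "Record held by KTF Labs"     # sentinel text known only to us, never documented

    DIRECTORY = {"5551234567": "Acme Corporation"}  # stand-in for the real data source

    def lookup_owner(phone_number: str) -> str:
        if phone_number == CANARY_NUMBER:
            # No real entry exists for an all-zeroes number, so only our service returns this string.
            return CANARY_RESPONSE
        return DIRECTORY.get(phone_number, "Unknown")

    # If a suspect competitor's service returns CANARY_RESPONSE for the all-zeroes number,
    # it is almost certainly just wrapping and reselling our Web Service.
    print(lookup_owner("0000000000"))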
2. PROVIDER SOLVENCY
Since the Web Service model is a viable solution, you're probably
eager to add such services to your core information systems and
mission-critical applications. As Web Services become more and more
interdependent, it becomes increasingly necessary to research the
companies from which you consume Web Services. You'll want to be sure
that these providers appear to have what it takes to remain in
business. UDDI goes a long way towards helping you with this research
by providing company information for each registered Web Service
provider. In the business world, nothing seems to impact and force
sweeping changes more than insolvency, and if you find yourself in the
unfortunate circumstance of lost functionality due to a bankrupt Web
Service provider, you'll realize how painful the hurried search for a
new vendor can be (with little room to bargain with your ex-service's
competitors). Although the initial work can be a bit tedious, it is
important to know, as far as you can, whether a potential Web Service
vendor will still be in business five years from now.
3. THE INTERDEPENDENCY SCENARIO
The basis for all these and other Web Service considerations is the
issue of interdependency. The potential exists for you to wake up any
given morning, start an application that has worked for years, and
find that the Web Service that it relies on is no longer available.
To some extent, thanks to the UDDI search capabilities, you can
investigate and assess potential providers, but at the end of the day
a degree of faith needs to be put into the services of each provider
that you choose to consume.
The underlying code in each distribution is exactly the same. Slight
differences may occur in the following:
• Hardware installation programs
• Default X-windows configuration
• Graphical systems management tools
• Proprietary software packages (very few)
In the vast majority of cases, Linux applications are compatible with
all distributions of Linux, which accounts for the aphorism "Linux is
Linux is Linux."
Distribution vendors take the kernel as is, with all changes and fixes
that are contributed by members of the development community. Each
distribution company releases new distributions about twice a year.
The Open Source development model discourages distribution vendors
from forking the Linux code base into incompatible code streams. The
GPL specifies that additions, modifications, and extensions to Linux
be distributed in source code form whenever executables are made
available. If a distribution company were to acquire development
expertise and attempt to build unique features into Linux, its
innovations would be released back to the development community. Truly
valuable changes would then be included in the next release of Linux
and/or freely adopted by other distribution vendors, eliminating any
competitive advantage. Currently, independent developers contribute
the vast majority of fixes, patches, and additions to Linux. Each one
of these modifications improves the stability and functionality of
Linux.
1. GNU Software:
Emacs, a text editor that is widely used for software development, was
one of the first open source products; its success led to the GNU
program. The GNU project consists of an operating system
kernel and associated UNIX tools. The GNU tools have been ported to a
wide variety of platforms, including Windows NT and are widely used by
software developers to produce both open source and proprietary
software.
2. Apache Web Server:
The Apache web server is a freely available web server distributed
under an open source license. Apache web servers are known for their
functionality and reliability. They form the backbone infrastructure
running the Internet. Today the Apache web server is arguably the most
widely used web server in the world garnering almost 50 percent of the
web server market. Apache was built and is maintained by a group of 20
core developers and 10 major contributors from around the world. A
large pool of developers regularly suggests and implements minor
adjustments and bug fixes to the core group.
3. Sendmail:
Sendmail is a platform for moving mail from one machine to another.
The Sendmail Consortium, a nonprofit organization, runs the open
source program and maintains a website to serve as a resource.
Sendmail is estimated to carry nearly 90 percent of e-mail traffic.
4. PERL:
While Emacs, the GNU toolset, Apache, Sendmail, and Linux are examples of
open source products, the Practical Extraction and Reporting Language
(Perl) is an example of an open source process. Perl is a system
administration and computer-programming language widely used
throughout the Internet. It is the standard scripting language for all
Apache web servers, and is commonly used on UNIX. There are an
estimated one million Perl users today.
5. Netscape:
On January 22, 1998, Netscape announced that it would make the source
code to its flagship client software, Netscape Communicator, freely
available for modification and redistribution on the Internet.
Netscape's pioneering decision to distribute software via the Internet
has become an integral strategy for every software company worldwide.
7.1 Major Projects
The following is a list of important OSS/FS programs that are generally
recognized as mature. Major OSS/FS projects include:
1. Linux kernel,
2. Apache (web server),
3. Samba (supports interoperability with Windows clients by acting as
a Windows file and print server),
4. GNOME (a desktop environment),
5. KDE (also a desktop environment),
6. The GIMP (bitmapped image editor),
7. MySQL (database emphasizing speed),
8. PostgreSQL (database emphasizing functionality),
9. PHP (hypertext preprocessor used for web development),
10. Mailman (mailing list manager),
11. XFree86 (graphics infrastructure which implements the X window system),
12. bind (domain naming service, a critical Internet infrastructure service),
13. GNU Compiler Collection (GCC, a suite of compilation tools for C,
C++, and several other languages),
14. Perl (programming/scripting language),
15. Python (another programming/scripting language),
16. Mozilla (web browser and email client),
17. OpenOffice.org (office suite, including word processor,
spreadsheet, and presentation software),
18. the open source BSD operating systems: FreeBSD (general purpose),
OpenBSD (security-focused), and NetBSD (portability-focused).
A great deal of documentation is available at the Linux Documentation
Project (LDP).
A number of up-and-coming projects are at an alpha or beta level. Some
projects that have the potential to be very important, have running
code, and are working toward more functionality or stability include
the following: Wine (a program to allow Windows programs to run on
Unix-like systems), AbiWord (a word processor), Gnumeric (a spreadsheet),
KOffice (an office suite), and GnuCash (money management).
Web projects around the world often use LAMP, an abbreviation for
Linux, Apache, MySQL (sometimes replaced with PostgreSQL), and
PHP/Perl/Python. More complex web projects may use major libraries or
frameworks, such as PHP-Nuke (based on PHP) and Zope (based on
Python).
• Open Source licenses, such as the GPL, guarantee anyone the right to
read, redistribute, modify, and use the software freely.
• Under many Open Source licenses, including the GPL, modifications of
existing software must be distributed under the same license as the
original software. The source code to any changes or improvements must
be made available to the public.
• The GPL is one example of an Open Source license. Other examples
include the BSD license, the MIT X License, the Artistic License, and
the IBM Public License.
All accomplish the same basic objectives: free distribution and openly
available source code. All Open Source licenses meet the Open Source
Definition, which is described at http://opensource.org/osd.html.
Many people have heard that all open-source licenses are the same, and
that open-source software infects everything around it, destroying all
the proprietary value in a company's intellectual property. In fact,
there are many different licenses. Some allow commercialization for
free. Many work quite well with proprietary licensing strategies. The
two most common licenses are the General Public License, or GPL, and
the Berkeley Software Distribution, or BSD, license. The GPL allows
anyone to use, change and share the source code. If you make changes,
though, you must share them freely. The BSD license, by contrast,
allows you to keep your changes private. We can conclude that, to be
OSI certified, open source software must be distributed under a
license that guarantees the right to read, redistribute, modify, and
use the software freely.
1. Project Maintainer/Developer
• Determines the software license
• Writes the first code release and puts it up on the Internet
• Sets up a Web site, mailing lists, and version control services (e.g. VCS)
• Builds and leads the development team, usually from volunteers
• Approves official releases
2. Development Team
• Adds features, fixes bugs, creates patches, writes documentation
3. Users/Debuggers
• Find bugs, point out design flaws, and request new features
After the project maintainer puts up the first release, both users and
the development teams submit ideas to the project mailing lists.
Patches come in from developers to the project maintainer. The
maintainer incorporates improvements and releases a new version to the
development team and users. As momentum builds, more people get
involved, and the software evolves. Developers are rewarded by the
immediately visible recognition of their contributions to the product.
Linux is probably the best-known example of a successful Open Source
development project.
• Cost: Open Source software is free. This results in immediate
savings on licensing fees and upgrading costs. And the larger the
project, the greater the savings; for example, there is no charge for
additional client connections to an Open Source database.
• Security: In the proprietary software model, developers compete to
discover and exploit or publicize security holes. The Open Source peer
review process redirects developer competition toward preventing
security breaches in the first place. Additionally, there are no
hidden APIs that can be exploited.
• Flexibility: Open Source code can be modified to fit customer
requirements. Drivers can be developed or modified without
reverse-engineering unpublished APIs. The best-known Open Source
projects such as Linux have more contributors and testers than a
traditional software company could afford to deploy on a project.
• Choice of vendors: In the Open Source model, vendors compete purely
on the basis of their ability to add value to a shared platform, not
on the basis of proprietary secrets.
• Reduced risk: The Open Source development model effectively spreads
risks over a large pool of programming talent. And it provides a hedge
against obsolescence; for example, if a company that develops Open
Source software goes out of business, the code could thereafter be
maintained in perpetuity by other developers. Cisco Systems recently
decided to release print spooler software under an Open Source license
to reduce its dependency on in-house programming staff.
The concept of open source software (OSS) has become more than a mere
blip on the radar screens of IT professionals. However, the question
of whether open source is a viable, cost-effective system for
developing software for actual business applications has yet to be
answered.
To be certified as OSS, developers must follow the Open Source
Definition (www.opensource.org/osd). The Open Source Web site
(www.opensource.org) cites principles of the definition, including:
• Free Redistribution: No party can be restricted from selling or
giving away the software as a component of an aggregate software
distribution containing programs from several different sources. The
license may not require a fee for such sale.
• Source Code: The program must include source code and must allow
distribution. If some form of a product is not distributed with source
code, there must be a publicized means of obtaining the source code
for no more than a reasonable reproduction cost—preferably downloading
via the Internet without charge. The source code must be the preferred
form in which a programmer would modify the program. Deliberately
obfuscated source code is not allowed.
• Derived works: Modifications and derived works are allowed and can
be distributed under the same terms as the license of the original
software.
• Integrity of the Author's Source Code: Source code can be restricted
from distribution in modified form only if the license allows the
distribution of "patch files" with the source code for the purpose of
modifying the program at build time. Software built from modified
source code may be distributed but may be required to carry a
different name or version number from the original software.
• Distribution of License: The rights attached to the program must
apply to all users. No additional licenses are needed.
• License Must Not Be Specific To A Product: The rights attached to
the program must not depend on the program being part of a particular
software distribution.
• License Must Not Contaminate Other Software: No restrictions should
be placed on other software distributed with the licensed software.
For example, all other programs distributed on the same medium need
not be open source software.
Because of its wide-open management methods and unusual fee
structures, OSS, as a business model, seems to fly in the face of
conventional development wisdom. According to the Open Source
Initiative, the organization that maintains the Open Source
Definition, companies can make money with OSS using these four
business models:
• Support Sellers: Companies give away the software product but sell
distribution, branding, and after-sale service.
• Loss Leader: Companies give away open source as a loss leader to
establish market position for closed software.
• Widget Frosting: A hardware company goes open source to get better
and cheaper drivers and interface tools.
• Accessories: Companies sell accessories—books, compatible hardware,
and complete systems—with open source software pre-installed.
The possibility that OSS can be and, some would argue, is viable in a
business enterprise raises questions about how much it costs, what
support is available, and what training is required. All of these are
practical questions that IT professionals need to consider before
jumping into the open source system with both feet.
Moreover, Program Managers can re-use code written by others for
similar tasks or purposes. This enables Program Managers to
concentrate on developing the features unique to their current task,
instead of spending their effort on rethinking and re-writing code
that has already been developed by others.
Code re-use reduces development time and provides predictable results.
With access to the source code, the lifetime of OSS systems and their
upgrades can be extended indefinitely. In contrast, the lifetime of
traditional COTS systems and their upgrades cannot be extended if the
vendor does not share its code and either goes out of business, raises
its prices prohibitively, or reduces the quality of the software
prohibitively. The open source model builds open standards and
achieves a high degree of interoperability. While traditional COTS
typically depend on monopoly support with one company providing
support and "holding all the cards" (i.e., access to the code) for a
piece of software, the publicly available source code for OSS enables
many vendors to learn the platform and provide support. Because OSS
vendors compete against one another to provide support, the quality of
support increases while the end-user cost of receiving the support
decreases.
Open source can create support that lasts as long as there is demand,
even if one support vendor goes out of business. For government
acquisition purposes, OSS adds potential as a second-source
"bargaining chip" to improve COTS support. OSS can be a long-term
viable solution with significant benefits, but there are issues and
risks to Program Managers. Poor code often results if the open source
project is too small or fails to attract the interest of enough
skilled developers; thus, Program Managers should make sure that the
OSS community is large, talented, and well organized to offer a viable
alternative to COTS. Highly technical, skilled developers tend to
focus on the technical user at the expense of the non-technical user.
As a result, OSS tends to have a relatively weak graphical user
interface (GUI) and fewer compatible applications, making it more
difficult to use and less practical, in particular, for desktop
applications (although some OSS products are greatly improving in this
area). Version control can become an issue if the OSS system requires
integration and development.
As new versions of the OSS are released, Program Managers need to make
sure that the versions to be integrated are compatible, ensure that
all developers are working with the proper version, and keep track of
changes made to the software.
Without a formal corporate structure, OSS faces a risk of
fragmentation of the code base, or code forking, which transpires when
multiple, inconsistent versions of the project's code base evolve.
This can occur when developers try to create alternative means for
their code to play a more significant role than achieved in the base
product. Sometimes fragmentation occurs for good reasons (e.g., if the
maintainer is doing a poor job) and sometimes it occurs for bad
reasons (e.g., a personality conflict between lead developers). The
Linux kernel code has not yet forked, and this can be attributed to
its accepted leadership structure, open membership and long-term
contribution potential, GNU General Public License (GPL) licensing
eliminating the economic motivations for fragmentation, and the
subsequent threat of a fragmented pool of developers.
Ninety-nine percent of Linux distributed code is the same. The small
amount of fragmentation between different Linux distributions is good
because it allows them to cater to different segments. Users benefit
by choosing a Linux distribution that best meets their needs. Finally,
there is a risk of companies developing competitive strategies
specifically focused against OSS.
When comparing long-term economic costs and benefits of open source
usage and maintenance to traditional COTS, the winner varies according
to each specific use and set of circumstances. Typically, open source
compares favorably in many cases for server and embedded system
implementations that may require some customization, but fares no
better than COTS for typical desktop applications.
Open source has burst upon the software development scene as the new
paradigm of faster turnaround and more reliable software. With the
open source development model, a computer program's source code is
given away freely along with the program itself. This allows any
programmer to view, modify, and redistribute the program. By allowing
the outside world to adapt and propagate the source code, the
development lifecycle is greatly reduced and the final product is much
more stable and versatile, proponents advocate. The best thing about
Open source is that it propels innovation, as users are free to tailor
the software to suit their own needs and circulate those changes.
Most Open Source software is not developed by one single vendor, but
by a distributed group of programmers. Typically, open source software
development is guided by project maintainers who address technical or
end-user requirements rather than vendor agendas.
Nobody "owns" Open Source software, which is freely available for
download over the Internet.
Closed-source software is the kind that most people know best. For
decades, software companies have shipped their products on floppy
disks and CD-ROMs. People can install and use those programs but
cannot change them or fix them. The human-readable version of the
software, the source code, is jealously guarded by the software maker.
One may think that open-source software is less secure or less
reliable than closed source. This isn't true. For example, it is now
universally accepted in the computer industry that the open-source
Apache Web server is a much more secure alternative to Microsoft's
closed-source Internet Information Server. Open source lets engineers
around the world examine the code for security flaws and other bugs.
Unlike most commercial software, the core code of such software can be
easily studied by other programmers and improved upon--the only
proviso being that such improvements must also be revealed publicly
and distributed freely in a process that encourages continual
innovation.
The Formal Framework:
Open source, by definition, means that the source code is available.
Open source software (OSS) is software with its source code available
that may be used, copied, and distributed with or without
modifications, and that may be offered either with or without a fee.
If the end-user makes any alterations to the software, he can either
choose to keep those changes private or return them to the community
so that they can potentially be added to future releases. The Open
Source Initiative (OSI), an unincorporated nonprofit research and
educational association whose mission is to own and defend the open
source trademark and advance the cause of OSS, certifies open source
licenses. The open source community consists of individuals or groups
technology. The open source process refers to the approach for
developing and maintaining open source products and technologies,
including software, computers, devices, technical formats, and
computer languages.
Open source software, by definition, includes any program or
application in which the programming code is open and visible. The
concept of open source software dates to the earliest days of computer
programming. The term came into popular usage following a February
1998 meeting in Palo Alto, California. A group of leading free
software advocates, reacting to Netscape's announcement that it
planned to make the source code for its browser widely available, came
to the realization that open source software had to be promoted and
marketed based on pragmatic business strategies to compete effectively
against closed source vendors.
Lemma:
Let p and q be primes such that q divides p - 1, h a positive integer
less than p, and
g = h^((p-1)/q) mod p.
Then g^q mod p = 1, and if m mod q = n mod q, then g^m mod p = g^n mod p.
Proof:
We have
g^q mod p = (h^((p-1)/q) mod p)^q mod p
= h^(p-1) mod p
= 1, by Fermat's Little Theorem.
Now let m mod q = n mod q, i.e., m = n + k.q for some integer k. Then
g^m mod p = g^(n + k.q) mod p
= (g^n . g^(k.q)) mod p
= ((g^n mod p) . (g^q mod p)^k) mod p
= g^n mod p,
since g^q mod p = 1.
We are now ready to prove the main result.
THEOREM:
If M' = M, r' = r, and s' = s in the signature verification, then v = r'.
Proof: We have
w = (s')^(-1) mod q = s^(-1) mod q
u1 = (SHA(M') . w) mod q = (SHA(M) . w) mod q
u2 = (r' . w) mod q = (r . w) mod q.
Now y = g^x mod p, so that by the lemma,
v = ((g^u1 . y^u2) mod p) mod q
= ((g^(SHA(M).w) . y^(r.w)) mod p) mod q
= ((g^(SHA(M).w) . g^(x.r.w)) mod p) mod q
= ((g^((SHA(M) + x.r).w)) mod p) mod q.
Also,
s = (k^(-1) . (SHA(M) + x.r)) mod q.
Hence
w = (k . (SHA(M) + x.r)^(-1)) mod q, and therefore
(SHA(M) + x.r) . w mod q = k mod q.
Thus, by the lemma,
v = (g^k mod p) mod q = r = r'.
Hence the theorem is proved.
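As a quick numerical sanity check of the lemma (toy values only, nothing like real DSA sizes), the two congruences can be verified directly:

    # Toy parameters: q = 11 is a prime divisor of p - 1 = 22, and h = 2.
    p, q, h = 23, 11, 2
    g = pow(h, (p - 1) // q, p)           # g = h^((p-1)/q) mod p

    assert pow(g, q, p) == 1              # g^q mod p = 1

    m, n = 7, 7 + 3 * q                   # m mod q == n mod q
    assert pow(g, m, p) == pow(g, n, p)   # hence g^m mod p == g^n mod p
    print("Lemma holds for the toy parameters, g =", g)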
1. Secure digital signature
If, by application of a security procedure agreed to by the parties
concerned, it can
be verified that a digital signature, at the time it was affixed, was:-
(a) unique to the subscriber affixing it.
(b) capable of identifying such subscriber.
(c) created in a manner or using a means under the exclusive control
of the subscriber and is linked to the electronic record to which it
relates in such a manner that if the electronic record was altered the
digital signature would be invalidated, then such digital signature
shall be deemed to be a secure digital signature.
2. Rules for Certifying Authority.
2.1. Certifying Authority to follow certain procedures.
Every Certifying Authority shall,
(a) make use of hardware, software and procedures that are secure from
intrusion and misuse.
(b) provide a reasonable level of reliability in its services which
are reasonably suited to the performance of intended functions.
(c) adhere to security procedures to ensure that the secrecy and
privacy of the digital signatures are assured.
(d) observe such other standards as may be specified by regulations.
(e) ensure that every person employed or otherwise engaged by it
complies, in the course of his employment or engagement, with the
provisions of this Act, rules, regulations and orders made there
under.
2.2. Certifying Authority to issue Digital Signature Certificate.
(1) Any person may make an application to the Certifying Authority for
the issue of a Digital Signature Certificate in such form as may be
prescribed by the Central Government.
(2) Every such application shall be accompanied by such fee as may be
prescribed by the Central Government, to be paid to the Certifying
Authority.
(3) Every such application shall be accompanied by a certification
practice statement or where there is no such statement, a statement
containing such particulars, as may be specified by regulations.
(4) On receipt of an application the Certifying Authority may, after
consideration of the Certification practice statement or the other
statement and after making such enquiries as it may deem fit, grant
the Digital Signature Certificate or for reasons to be recorded in
writing, reject the application.
Provided that no Digital Certificate shall be granted unless the
Certifying Authority is satisfied that-
(a) the applicant holds the private key corresponding to the public
key to be listed in the Digital Signature Certificate.
(b) the applicant holds a private key, which is capable of creating a
digital signature.
(c) the public key to be listed in the certificate can be used to
verify a digital signature affixed by the private key held by the
applicant.
Provided further that, no application shall be rejected unless the
applicant has been given a reasonable opportunity of showing cause
against the proposed rejection.
1. The private key must be kept in a secure manner. Losing it can
cause severe damage, since anyone who obtains the private key can use
it to send signed messages, the corresponding public key will verify
those messages as valid, and the receivers will believe that the
messages were sent by the authentic private-key holder.
2. The process of generating and verifying a digital signature
requires a considerable amount of time, so for frequent exchange of
messages the speed of communication is reduced.
3. When the digital signature is not verified by the public key, the
receiver simply marks the message as invalid but does not know whether
the message was corrupted or a wrong private key was used.
4. To use digital signatures, the sender has to obtain a private and
public key pair, and the receiver has to obtain the digital signature
certificate as well. This requires them to pay an additional amount of
money.
5. If a user changes his private key after every fixed interval of
time, then a record of all these changes must be kept. If a dispute
arises over a previously sent message, the old key pair needs to be
referred to. Thus storage of all the previous keys is another
overhead.
6. Although a digital signature provides authenticity, it does not
ensure secrecy of the data. To provide secrecy, some other technique,
such as encryption and decryption, needs to be used.
1. Electronic Mail.
When we send an e-mail to a mailbox, it is desired that the owner of
the mailbox should get the e-mail in its original form. If during
transport, the content changes either accidentally or due to intrusion
by a third party, then the receiving end should be able to recognize
this change in the content. Also no person should be able to send
e-mail in the disguise of another person. Both these factors are taken
care of by the Digital signature. Any change in the e-mail will affect
the message digest generated by the SHA and thus the digital signature
will be marked as unverified. So the recipient will reject that
message.
2. Data storage.
This is one more interesting application of Digital Signature.
Suppose a large amount of data is stored on a computer. Only
authorized people are allowed to make changes to the data. In such a
case, along with the data, a signature can also be stored as an
attachment. This signature is generated from the data digest and the
private key. So if any changes are made in the data by some
unauthorized person, then they will get easily recognized at the time
of signature verification and thus that copy of data will be
discarded.
3. Electronic funds transfer.
Applications like online banking and e-commerce come under this
category. In these applications the information being exchanged by the
two sides is vital and thus extreme secrecy and authenticity must be
maintained. A digital signature can ensure the authentication of the
information but, the secrecy should be maintained by using some
encryption techniques. So before generating the message digest, the
message should be encrypted. Then the digital signature is generated
and attached to the message. At the receiving end after verification
of signature, the message is decrypted to recover the original
message.
5. Software Distribution.
Software developers often distribute their software using some
electronic media, for example, the internet. In this case, in order to
ensure that the software remains unmodified and its source is genuine,
Digital Signature can be used. The developer signs the software and
the users verify the signature before using it. Only if the signature
is verified can the users be sure of the validity of that software.
1. Each party contacts the authority responsible for allocating the
private and public key. By paying the required amount, each of them
gets a unique key pair.
2. Then each of them makes an application to the Certifying Authority
for getting the Digital Signature Certificate for the public key of
other party.
3. The Certifying Authority asks the applicants to produce the private
key corresponding to the public key to be listed in the digital
signature certificate; i.e., for ABC to obtain a certificate for the
public key of XYZ, the Certifying Authority asks XYZ to produce its
private key and public key before it.
4. The Certifying Authority verifies the functioning of the key pair,
i.e., that it is capable of generating and verifying digital
signatures.
5. On confirming the working of the key pair, it issues a Digital
Signature Certificate to the applicant.
6. Now the company XYZ has the Certificate which lists the public key
of ABC, while ABC has the Certificate which lists the public key of
XYZ.
7. They install the software necessary for generation and verification
of each other's digital signatures. This software must be the same for
both parties, so that they use the same hashing algorithm.
8. With this set-up they are ready to use the digital signature with
their messages. Each party can sign the messages by using the private
key and the recipient party can verify these messages using the
corresponding public key, listed on the digital signature certificate.
To understand how this system behaves in different circumstances, we
consider a number of cases of its usage.
CASE 1:
Company XYZ needs an advice from ABC consultancy regarding the
financial strategy of the company. So, it creates a message addressed
to ABC and attaches the digital signature to the message using the
correct private key. ABC receives the message from XYZ, and it applies
the public key of XYZ to the message. Suppose the message gets
verified.
Conclusion: Since the message got verified, ABC is assured that the
message was sent by XYZ and that its content is intact.
CASE 2:
On receiving the above message, ABC decides to send an advice to XYZ.
So, ABC writes a message addressed to XYZ and uses its private key to
generate the digital signature. On receiving this message XYZ applies
the corresponding public key and verifies the message. It finds that
the signature gets verified.
Conclusion: Verification of the message is an indication of the
authenticity of the sender and integrity of the data. Thus XYZ can
safely assume that it has an unmodified message from ABC only, and no
one else.
CASE 3:
Suppose company XYZ takes action according to the advice given by ABC
consultancy and suffers a major financial loss due to this action. XYZ
holds ABC responsible for the loss and wants to take legal action
against it. Thus XYZ files a case in the court, accusing ABC of giving
wrong advice, and demands compensation for the loss suffered. The
consultancy ABC denies giving such advice to XYZ. The court asks XYZ
to prove their claim against ABC.
Then XYZ produces a copy of the message received from ABC and the
Digital signature certificate, which lists the public key of ABC. It
shows that the signature on the message gets verified by ABC's public
key, so that message was indeed sent by ABC. The court accepts the
claim of XYZ and orders ABC to give compensation to XYZ.
Conclusion: A digital signature can be used to prove the identity of
the sender to a third party.
CASE 4:
A company LMN is a business rival of XYZ and it knows about the
communication of XYZ with consultancy ABC. So LMN sends a fake
message, containing a false advice, to XYZ pretending to be ABC. On
receiving this message, XYZ verifies it with the public key of ABC. It
finds that the signature doesn't get verified. So, it rejects the
message considering it as invalid. Thus it is saved from getting wrong
advice.
Conclusion: Any message not signed by the proper private key will not
get verified by the public key corresponding to the correct private
key.
CASE 5:
Failing to mislead XYZ, LMN now decides to use some different method.
By some means LMN manages to modify the content of a message sent by
ABC to XYZ. When XYZ receives the message and verifies it with the
public key, it finds that message is invalid. Thus it rejects the
advice. So again XYZ is safeguarded from the attempt to intrude into
the communication. XYZ immediately informs ABC about the rejection of
the message and asks them to resend the message.
Conclusion: Although the proper private key is used to generate the
signature, if the message content gets modified then the message
digest generated at the receiving end is different, and so the
signature will never get verified.
CASE 6:
With the failure of one more attempt to misinform the company XYZ,
LMN decides to steal the private key from ABC and somehow it succeeds
in obtaining it. LMN writes a message to XYZ in the disguise of ABC
and digitally signs the message using the stolen key. On receiving the
message, XYZ verifies the digital signature and finds it to be a valid
one. Thus it accepts the advice and acts accordingly. Following the
wrong advice, it suffers a loss, and XYZ accuses ABC of causing it. The
court, finding the signature valid, accepts the claim of XYZ, and ABC
is asked to pay compensation.
Conclusion: Security of the private key is the responsibility of the
key holder. If the key is lost or stolen, the key owner will be held
responsible for the damage done using the key.
On the basis of all the above cases, we can conclude that a digital
signature can protect subscribers from attempts at forgery, provided
that the private key is kept in a secure manner. This system is also
considered valid in legal matters. So using a digital signature is
definitely an excellent option for preserving the integrity of data
and the authenticity of the user's identity.
The DSA (Digital Signature Algorithm) makes use of the following parameters:
1. p is a prime number, where 2^(L-1) < p < 2^L for 512 <= L <= 1024 and
L a multiple of 64.
2. q is a prime divisor of p - 1, where 2^159 < q < 2^160.
3. g = h^((p-1)/q) mod p, where h is any integer with 1 < h < p - 1 such
that h^((p-1)/q) mod p > 1 (g has order q mod p).
4. x = a randomly generated integer with 0 < x < q.
5. y = g^x mod p.
6. k = a randomly generated integer with 0 < k < q.
The integers p, q, g can be public and they can be common to a group
of users. A user's private and public keys are x and y, respectively.
They are normally fixed for a period of time. Parameters x and k are
used for signature generation only, and must be kept secret. Parameter
k must be regenerated for each signature.
2. Signature Generation.
The signature of a message M is the pair of numbers r and s computed
according to the equations below.
r = (g^k mod p) mod q, and
s = (k^(-1) (SHA(M) + x.r)) mod q.
The value of SHA (M) is a 160-bit string output by the Secure Hash
Algorithm. For use in computing s, this string must be converted to an
integer. As an option, one may wish to check if r = 0 or s = 0. If
either r = 0 or s = 0, a new value of k should be generated and the
signature should be recalculated (it is extremely unlikely that r = 0
or s = 0 if signatures are generated properly).
The signature is transmitted along with the message to the verifier.
3. Signature Verification.
Prior to verifying the signature in a signed message, p, q and g plus
the sender's public key and identity are made available to the
verifier in an authenticated manner.
Let M', r', and s' be the received versions of M, r, and s,
respectively, and let y be the public key of the signatory. The
verifier first checks that 0 < r' < q and 0 < s' < q; if either
condition is violated the signature shall be rejected. If these two
conditions are satisfied, the verifier computes
w = (s')^(-1) mod q
u1 = (SHA(M') . w) mod q
u2 = (r' . w) mod q
v = ((g^u1 . y^u2) mod p) mod q.
If v = r', then the signature is verified and the verifier can have
high confidence that the received message was sent by the party
holding the secret key x corresponding to y. For a proof that v = r'
when M' = M, r' = r, and s' = s, see Appendix 1.
If v does not equal r', then the message may have been modified, the
message may have been incorrectly signed by the signatory, or the
message may have been signed by an impostor. The message should be
considered invalid.
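To make the generation and verification procedures concrete, here is a minimal end-to-end sketch with toy parameters (values assumed for illustration and far too small for real security; it uses SHA-1 from hashlib in the role of SHA, and needs Python 3.8+ for the modular-inverse form of pow):

    import hashlib
    import secrets

    # Toy DSA parameters, chosen only to keep the arithmetic visible.
    p, q = 283, 47                 # q = 47 is a prime divisor of p - 1 = 282
    g = pow(2, (p - 1) // q, p)    # g = h^((p-1)/q) mod p with h = 2, so g has order q mod p

    x = 24                         # private key, 0 < x < q
    y = pow(g, x, p)               # public key, y = g^x mod p

    def sha(message: bytes) -> int:
        # The 160-bit hash output interpreted as an integer.
        return int.from_bytes(hashlib.sha1(message).digest(), "big")

    def sign(message: bytes):
        while True:
            k = secrets.randbelow(q - 1) + 1                   # fresh secret k, 0 < k < q
            r = pow(g, k, p) % q                               # r = (g^k mod p) mod q
            s = (pow(k, -1, q) * (sha(message) + x * r)) % q   # s = k^(-1)(SHA(M) + x.r) mod q
            if r != 0 and s != 0:                              # otherwise regenerate k, as the text advises
                return r, s

    def verify(message: bytes, r: int, s: int) -> bool:
        if not (0 < r < q and 0 < s < q):
            return False                                       # reject out-of-range values outright
        w = pow(s, -1, q)
        u1 = (sha(message) * w) % q
        u2 = (r * w) % q
        v = ((pow(g, u1, p) * pow(y, u2, p)) % p) % q
        return v == r

    msg = b"transfer 100 units to account 42"
    r, s = sign(msg)
    print(verify(msg, r, s))                     # True: the signature checks out
    print(verify(b"transfer 999 units", r, s))   # False: any modification invalidates the signature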
A digital signature is nothing but an attachment to any piece of
electronic information, which represents the content of the document
and the identity of the originator of that document uniquely. The
digital signature is intended for use in electronic mail, electronic
funds transfer, electronic data interchange, software distribution,
data storage, and other applications which require data integrity
assurance and data origin authentication.
When a message is received, the recipient may desire to verify that
the message has not been altered in transit. Furthermore, the
recipient may wish to be certain of the originator's identity. Both of
these services can be provided by the digital signature. A digital
signature is an electronic analogue of a written signature in that the
digital signature can be used in proving to the recipient or a third
party that the message was, in fact, signed by the originator. Digital
signatures may also be generated for stored data and programs so that
the integrity of the data and programs may be verified at any later
time.
Although there are various approaches to implement the digital
signature, this report discusses the 'Digital Signature Standard'. It
specifies the Digital Signature Algorithm (DSA) which is appropriate
for applications requiring a digital rather than written signature.
The DSA is considered the standard procedure to generate and verify
digital signatures. A DSA digital signature is a pair of large numbers
represented in a computer as strings of binary digits.
The first section of this report deals with the basic requirements
for using the digital signature. The next sections contain detailed
explanation of the process of generation and verification of the
digital signature. In addition to this the applications of the digital
Signature are also discussed. The report also focuses on some legal
aspects of digital signature, with reference to the Information
Technology Act. The use of digital signature has been illustrated with
an example in a practical scenario.
This report is an attempt to make the readers familiar with the
concepts related to the digital signature and give them an idea of
usefulness of a digital signature in the world of electronic
information exchange.