Thursday 26 January 2012

REST (Representational State Transfer)

Representational state transfer (REST) is a style of software architecture for distributed hypermedia systems such as the World Wide Web. The term representational state transfer was introduced and defined in 2000 by Roy Fielding in his doctoral dissertation.[1][2] Fielding is one of the principal authors of the Hypertext Transfer Protocol (HTTP) specification versions 1.0 and 1.1.[3][4]


History

The REST architectural style was developed in parallel with HTTP/1.1, based on the existing design of HTTP/1.0.[6] The largest implementation of a system conforming to the REST architectural style is the World Wide Web. REST exemplifies how the Web's architecture emerged by characterizing and constraining the macro-interactions of the four components of the Web, namely origin servers, gateways, proxies and clients, without imposing limitations on the individual participants. As such, REST essentially governs the proper behavior of participants.

Concept

REST-style architectures consist of clients and servers. Clients initiate requests to servers; servers process requests and return appropriate responses. Requests and responses are built around the transfer of representations of resources. A resource can be essentially any coherent and meaningful concept that may be addressed. A representation of a resource is typically a document that captures the current or intended state of a resource.
The client begins sending requests when it is ready to make the transition to a new state. While one or more requests are outstanding, the client is considered to be in transition. The representation of each application state contains links that may be used the next time the client chooses to initiate a new state transition.[7]
The name "Representational State Transfer" is intended to evoke an image of how a well-designed Web application behaves: "a network of web pages (a virtual state-machine), where the user progresses through the application by selecting links (state transitions), resulting in the next page (representing the next state of the application) being transferred to the user and rendered for their use" (Fielding's PhD thesis, section 6.1).
REST was initially described in the context of HTTP, but is not limited to that protocol. RESTful architectures can be based on other Application Layer protocols if they already provide a rich and uniform vocabulary for applications based on the transfer of meaningful representational state. RESTful applications maximize the use of the pre-existing, well-defined interface and other built-in capabilities provided by the chosen network protocol, and minimize the addition of new application-specific features on top of it.

HTTP examples

HTTP, for example, has a very rich vocabulary in terms of verbs (or "methods"), URIs, Internet media types, request and response codes, etc. REST uses these existing features of the HTTP protocol, and thus allows existing layered proxy and gateway components to perform additional functions on the network such as HTTP caching and security enforcement.
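As a concrete illustration of that vocabulary, here is a minimal sketch in Python using the third-party requests library; the http://example.com/resources/ URL, the path and the headers are illustrative assumptions, not part of any real service.

    # Minimal sketch of a client that relies only on standard HTTP vocabulary.
    # Assumes `pip install requests` and an imaginary service at example.com.
    import requests

    BASE = "http://example.com/resources/"

    # GET retrieves a representation; the Accept header negotiates the media type.
    response = requests.get(BASE + "orders/17",
                            headers={"Accept": "application/json"})

    print(response.status_code)                    # standard response code, e.g. 200 or 404
    print(response.headers.get("Content-Type"))    # Internet media type of the representation
    print(response.headers.get("Cache-Control"))   # caching hints usable by proxies and gateways
    if response.ok:
        order = response.json()                    # parse the JSON representation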

SOAP RPC contrast

SOAP RPC over HTTP, on the other hand, encourages each application designer to define a new and arbitrary vocabulary of nouns and verbs (for example getUsers(), savePurchaseOrder(...)), usually overlaid onto the HTTP POST verb. This disregards many of HTTP's existing capabilities such as authentication, caching and content type negotiation, and may leave the application designer re-inventing many of these features within the new vocabulary.[8] Examples of doing so may include the addition of methods such as getNewUsersSince(Date date), savePurchaseOrder(string customerLogon, string password, ...).
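The contrast is easiest to see side by side. The hedged sketch below (Python with the requests library; all URLs and payloads are hypothetical) shows the same two intents expressed first in the RPC style, where every operation is tunnelled through POST to one endpoint, and then in the resource style, using the uniform HTTP interface.

    import requests

    # RPC style: one endpoint, operation names hidden inside the payload.
    requests.post("http://example.com/api",
                  json={"method": "getNewUsersSince",
                        "params": {"date": "2012-01-01"}})

    # Resource style: the same intent expressed with a standard method and a URI.
    requests.get("http://example.com/users", params={"since": "2012-01-01"})

    # RPC style: savePurchaseOrder(customerLogon, password, ...).
    requests.post("http://example.com/api",
                  json={"method": "savePurchaseOrder",
                        "params": {"customerLogon": "alice", "password": "secret",
                                   "items": [{"sku": "1234", "quantity": 2}]}})

    # Resource style: create a new order in the customer's collection; credentials
    # travel in standard HTTP authentication rather than in the payload.
    requests.post("http://example.com/customers/alice/orders",
                  json={"items": [{"sku": "1234", "quantity": 2}]},
                  auth=("alice", "secret"))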

Constraints

The REST architectural style describes the following six constraints applied to the architecture, while leaving the design of the individual components free:
Client–server
A uniform interface separates clients from servers. This separation of concerns means that, for example, clients are not concerned with data storage, which remains internal to each server, so that the portability of client code is improved. Servers are not concerned with the user interface or user state, so that servers can be simpler and more scalable. Servers and clients may also be replaced and developed independently, as long as the interface is not altered.
Stateless
The client–server communication is further constrained by no client context being stored on the server between requests. Each request from any client contains all of the information necessary to service the request, and any session state is held in the client (see the sketch after this list). The server can be stateful; this constraint merely requires that server-side state be addressable by URL as a resource. This not only makes servers more visible for monitoring, but also makes them more reliable in the face of partial network failures, as well as further enhancing their scalability.
Cacheable
As on the World Wide Web, clients can cache responses. Responses must therefore, implicitly or explicitly, define themselves as cacheable, or not, to prevent clients reusing stale or inappropriate data in response to further requests. Well-managed caching partially or completely eliminates some client–server interactions, further improving scalability and performance.
Layered system
A client cannot ordinarily tell whether it is connected directly to the end server, or to an intermediary along the way. Intermediary servers may improve system scalability by enabling load-balancing and by providing shared caches. They may also enforce security policies.
Code on demand (optional)
Servers are able temporarily to extend or customize the functionality of a client by the transfer of executable code. Examples of this may include compiled components such as Java applets and client-side scripts such as JavaScript.
Uniform interface
The uniform interface between clients and servers, discussed below, simplifies and decouples the architecture, which enables each part to evolve independently. The four guiding principles of this interface are detailed below.
The only optional constraint of REST architecture is code on demand. If a service violates any other constraint, it cannot strictly be considered RESTful.
Complying with these constraints, and thus conforming to the REST architectural style, will enable any kind of distributed hypermedia system to have desirable emergent properties, such as performance, scalability, simplicity, modifiability, visibility, portability and reliability.
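To make the stateless constraint concrete, here is a minimal sketch (Python with the requests library; the bearer-token scheme, URLs and paths are assumptions for illustration): every request carries all the information needed to service it, so any server replica or intermediary can handle it.

    import requests

    BASE = "http://example.com/resources/"
    TOKEN = "abc123"   # hypothetical credential obtained out of band

    def get_resource(path):
        # Each request is self-contained: credentials and content negotiation
        # travel with it, so no server-side session is required.
        return requests.get(BASE + path,
                            headers={"Authorization": "Bearer " + TOKEN,
                                     "Accept": "application/json"})

    first = get_resource("orders/17")    # establishes no session on the server
    second = get_resource("orders/18")   # could be answered by a different server entirely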

Guiding principles of the interface

The uniform interface that any REST service must provide is considered fundamental to its design.[9]
Identification of resources
Individual resources are identified in requests, for example using URIs in web-based REST systems. The resources themselves are conceptually separate from the representations that are returned to the client. For example, the server does not send its database, but rather, perhaps, some HTML, XML or JSON that represents some database records expressed, for instance, in Finnish and encoded in UTF-8, depending on the details of the request and the server implementation.
Manipulation of resources through these representations
When a client holds a representation of a resource, including any metadata attached, it has enough information to modify or delete the resource on the server, provided it has permission to do so.
Self-descriptive messages
Each message includes enough information to describe how to process the message. For example, which parser to invoke may be specified by an Internet media type (previously known as a MIME type). Responses also explicitly indicate their cacheability.[1]
Hypermedia as the engine of application state
Clients make state transitions only through actions that are dynamically identified within hypermedia by the server (e.g. by hyperlinks within hypertext). Except for simple fixed entry points to the application, a client does not assume that any particular actions will be available for any particular resources beyond those described in representations previously received from the server.
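To illustrate the hypermedia principle just described, the following sketch (Python with the requests library; the entry-point URL and the JSON link layout are assumptions, not a standard format) shows a client discovering its next actions from the representations it receives rather than from hard-coded URLs.

    import requests

    # Start from the single fixed entry point; everything else is discovered.
    entry = requests.get("http://example.com/",
                         headers={"Accept": "application/json"}).json()

    # The representation is assumed to carry links, e.g.:
    #   {"links": {"orders": "http://example.com/orders"}}
    orders_url = entry["links"]["orders"]
    orders = requests.get(orders_url,
                          headers={"Accept": "application/json"}).json()

    # Follow the first order's own link to drive the next state transition.
    first_order_url = orders["items"][0]["links"]["self"]
    order = requests.get(first_order_url,
                         headers={"Accept": "application/json"}).json()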

Key goals

Key goals of REST include:
  • scalability of component interactions
  • generality of interfaces
  • independent deployment of components
  • intermediary components to reduce interaction latency, enforce security and encapsulate legacy systems
REST has been applied to describe the desired web architecture, to help identify existing problems, to compare alternative solutions, and to ensure that protocol extensions would not violate the core constraints that make the Web successful.
Fielding describes REST's effect on scalability thus:
REST's client–server separation of concerns simplifies component implementation, reduces the complexity of connector semantics, improves the effectiveness of performance tuning, and increases the scalability of pure server components. Layered system constraints allow intermediaries—proxies, gateways, and firewalls—to be introduced at various points in the communication without changing the interfaces between components, thus allowing them to assist in communication translation or improve performance via large-scale, shared caching. REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability.[10]

Central principle

An important concept in REST is the existence of resources (sources of specific information), each of which is referenced with a global identifier (e.g., a URI in HTTP). In order to manipulate these resources, components of the network (user agents and origin servers) communicate via a standardized interface (e.g., HTTP) and exchange representations of these resources (the actual documents conveying the information). For example, a resource that represents a circle may accept and return a representation that specifies a center point and radius, formatted in SVG, but may also accept and return a representation that specifies any three distinct points along the curve (since this also uniquely identifies a circle) as a comma-separated list.
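As a hedged sketch of that circle example (Python with the requests library; the URL and the plain-text "three points" representation are purely illustrative assumptions), the same resource can be asked for in either representation through content negotiation:

    import requests

    CIRCLE = "http://example.com/shapes/circle/42"   # hypothetical resource identifier

    # Ask for the centre-point-and-radius representation as SVG.
    svg = requests.get(CIRCLE, headers={"Accept": "image/svg+xml"})
    print(svg.text)      # e.g. an <svg> document containing <circle cx="0" cy="0" r="5"/>

    # Ask for the three-points-on-the-curve representation as plain text.
    points = requests.get(CIRCLE, headers={"Accept": "text/plain"})
    print(points.text)   # e.g. "0,5,5,0,0,-5" as a comma-separated list of coordinates

    # Update the resource by sending back a modified representation in either form.
    requests.put(CIRCLE, data="0,6,6,0,0,-6",
                 headers={"Content-Type": "text/plain"})
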
Any number of connectors (e.g., clients, servers, caches, tunnels, etc.) can mediate the request, but each does so without "seeing past" its own request (referred to as "layering," another constraint of REST and a common principle in many other parts of information and networking architecture). Thus, an application can interact with a resource by knowing two things: the identifier of the resource and the action required—it does not need to know whether there are caches, proxies, gateways, firewalls, tunnels, or anything else between it and the server actually holding the information. The application does, however, need to understand the format of the information (representation) returned, which is typically an HTML, XML or JSON document of some kind, although it may be an image, plain text, or any other content.

RESTful web services

A RESTful web service (also called a RESTful web API) is a simple web service implemented using HTTP and the principles of REST. It is a collection of resources, with four defined aspects:
  • the base URI for the web service, such as http://example.com/resources/
  • the Internet media type of the data supported by the web service (this is often JSON, XML or YAML but can be any other valid Internet media type)
  • the set of operations supported by the web service using HTTP methods (e.g., GET, PUT, POST, or DELETE)
  • the requirement that the API be hypertext driven[11]
The following table shows how the HTTP methods are typically used to implement a web service.

Collection URI, such as http://example.com/resources/
  • GET: List the URIs and perhaps other details of the collection's members.
  • PUT: Replace the entire collection with another collection.
  • POST: Create a new entry in the collection. The new entry's URI is usually assigned automatically and returned by the operation.
  • DELETE: Delete the entire collection.
Element URI, such as http://example.com/resources/item17
  • GET: Retrieve a representation of the addressed member of the collection, expressed in an appropriate Internet media type.
  • PUT: Replace the addressed member of the collection, or if it does not exist, create it.
  • POST: Not generally used. Treat the addressed member as a collection in its own right and create a new entry in it.
  • DELETE: Delete the addressed member of the collection.
The PUT and DELETE methods are idempotent methods. The GET method is a safe method, meaning that calling it produces no side-effects (this also implies idempotence).
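A short sketch of those properties (Python with the requests library; the element URI is hypothetical): repeating a safe or idempotent request leaves the server in the same state, which is what allows clients and intermediaries to retry them safely.

    import requests

    ITEM = "http://example.com/resources/item17"   # hypothetical element URI

    # GET is safe: calling it any number of times produces no side effects.
    requests.get(ITEM)
    requests.get(ITEM)

    # PUT is idempotent: sending the same full representation twice leaves the
    # resource in exactly the same state as sending it once.
    payload = {"name": "widget", "price": 10}
    requests.put(ITEM, json=payload)
    requests.put(ITEM, json=payload)

    # DELETE is idempotent for the same reason; the second call simply finds nothing to delete.
    requests.delete(ITEM)

    # POST is neither safe nor idempotent: repeating it typically creates a second entry.
    requests.post("http://example.com/resources/", json=payload)
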
Unlike SOAP-based web services, there is no "official" standard for RESTful web services.[13] This is because REST is an architecture, unlike SOAP, which is a protocol. Even though REST is not a standard, a RESTful implementation such as the Web can use standards like HTTP, URI, XML, etc.

Public implementations

REST can be found in a number of places on the public Web:
  • Sones GraphDB is a graph-oriented database written in C# that provides a RESTful interface

Outside the Web

Software that may interact with a number of different kinds of objects or devices can do so by virtue of a uniform, agreed interface.

CMIP

The Common Management Information Protocol (CMIP) was designed to allow the control of network resources by presenting their manageable characteristics as object attributes. The objects have parent-child relationships that are identified using distinguished names and attributes, which are read and modified by a set of CRUD operations. The notable non-RESTful aspect of CMIP is the M_ACTION operation, although, wherever possible, designers of management information bases (MIBs) would typically endeavour to represent controllable and stateful aspects of network equipment through attributes.

Monday 23 January 2012

Single Sign-On (SSO)


According to Wikipedia:

Single sign-on (SSO) is a property of access control of multiple related, but independent software systems. With this property a user logs in once and gains access to all systems without being prompted to log in again at each of them. Single sign-off is the reverse property whereby a single action of signing out terminates access to multiple software systems.
As different applications and resources support different authentication mechanisms, single sign-on has to internally translate to and store different credentials compared to what is used for initial authentication.
Benefits include:
  • Reduces phishing success, because users are not trained to enter passwords everywhere without thinking
  • Reduces password fatigue from different user name and password combinations
  • Reduces time spent re-entering passwords for the same identity
  • Can support conventional authentication such as Windows credentials (i.e., username/password)
  • Reduces IT costs due to a lower number of IT help desk calls about passwords
  • Provides security on all levels of entry/exit/access to systems without the inconvenience of re-prompting users
  • Provides centralized reporting for compliance adherence
SSO uses centralized authentication servers that all other applications and systems utilize for authentication purposes, and combines this with techniques to ensure that users do not have to actively enter their credentials more than once.
With SSO, users need not remember as many passwords to log in to different systems or applications.
The term enterprise reduced sign-on is preferred by some authors who believe single sign-on to be impossible in real use cases.
As single sign-on provides access to many resources once the user is initially authenticated ("keys to the castle"), it increases the negative impact in case the credentials are available to other persons and misused. Therefore, single sign-on requires an increased focus on the protection of the user credentials, and should ideally be combined with strong authentication methods like smart cards and one-time password tokens.
Single sign-on also makes the authentication systems highly critical; a loss of their availability can result in denial of access to all systems unified under the SSO. SSO can thus be undesirable for systems to which access must be guaranteed at all times, such as security or plant-floor systems.


Common Single Sign-On Configurations


Kerberos based

  • Initial sign-on prompts the user for credentials, and gets a Kerberos ticket-granting ticket (TGT).
  • Additional software applications requiring authentication, such as email clients, wikis, revision control systems, etc., use the ticket-granting ticket to acquire service tickets, proving the user's identity to the mail server / wiki server / etc. without prompting the user to re-enter credentials.
Windows environment - Windows login fetches the TGT. Active Directory-aware applications fetch service tickets, so the user is not prompted to re-authenticate.
UNIX/Linux environment - Login via Kerberos PAM modules fetches the TGT. Kerberized client applications such as Evolution, Firefox, and SVN use service tickets, so the user is not prompted to re-authenticate.
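As a hedged sketch of how an application can ride on the user's existing ticket (Python; assumes the third-party requests and requests-kerberos packages, a hypothetical intranet URL, and that a desktop login or kinit has already obtained the TGT), the application itself never handles a password:

    import requests
    from requests_kerberos import HTTPKerberosAuth, OPTIONAL   # assumed third-party package

    # The TGT was obtained at login; the library requests a service ticket for the
    # target host and attaches it, so the user is not prompted for credentials again.
    auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)
    response = requests.get("http://wiki.example.corp/page", auth=auth)
    print(response.status_code)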


Smart card based

Initial sign on prompts the user for the smart card. Additional software applications also use the smart card, without prompting the user to re-enter credentials. Smart card-based single sign-on can either use certificates or passwords stored on the smart card.


OTP Token

Also referred to as one-time password token. Two-factor authentication with OTP tokens [1] follows industry best practices for authenticating users.[2] This OTP token method is more secure and effective at prohibiting unauthorized access than other authentication methods.[3]


Integrated Windows Authentication

Integrated Windows Authentication is a term associated with Microsoft products and refers to the SPNEGO, Kerberos, and NTLMSSP authentication protocols with respect to SSPI functionality introduced with Microsoft Windows 2000 and included with later Windows NT-based operating systems. The term is used more commonly for the automatically authenticated connections between Microsoft Internet Information Services and Internet Explorer. Cross-platform Active Directory integration vendors have extended the Integrated Windows Authentication paradigm to UNIX, Linux and Mac systems.


Shared authentication schemes which are not single sign-on

Single sign-on requires that users literally sign in once to establish their credentials. Systems which require the user to log in multiple times to the same identity are inherently not single sign-on. For example, an environment where users are prompted to log in to their desktop, then log in to their email using the same credentials, is not single sign-on.


According to another analysis:




What Is Single Sign On?



Single Sign On (SSO) (also known as Enterprise Single Sign On or "ESSO") is the ability for a user to enter the same id and password to log on to multiple applications within an enterprise. As passwords are the least secure authentication mechanism, single sign on has now become known as reduced sign on (RSO), since more than one type of authentication mechanism is used according to enterprise risk models.

For example, in an enterprise using SSO software, the user logs on with their id and password. This gains them access to low risk information and multiple applications such as the enterprise portal. However, when the user tries to access higher risk applications and information, like a payroll system, the single sign on software requires them to use a stronger form of authentication. This may include digital certificates, security tokens, smart cards, biometrics or combinations thereof.

Single sign on can also take place between enterprises using federated authentication. For example, a business partner's employee may successfully log on to their enterprise system. When they click on a link to your enterprise's application, the business partner's single sign on system will provide a security assertion token to your enterprise using a protocol like SAML, Liberty Alliance, WS-Federation or Shibboleth. Your enterprise's SSO software receives the token, checks it, and then allows the business partner's employee to access your enterprise application without having to sign on.

Single sign on federated authentication also works with your employees. For example, an employee who is trying to access your outsourced benefits supplier to update their benefits information would click on the benefits link on your intranet. Your enterprise's single sign on software would then send a security assertion token to the benefits supplier. The benefits supplier's SSO system would then take the token, check it and grant access to your employee without making them sign on.
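The sketch below is a deliberately simplified stand-in for that exchange (Python standard library only; a real deployment would issue a signed SAML, Liberty Alliance, WS-Federation or Shibboleth assertion rather than this hypothetical HMAC token): the identity provider signs a short-lived statement about the user, and the relying enterprise verifies it instead of prompting the user to sign on again.

    import hmac, hashlib, json, time

    SHARED_SECRET = b"pre-agreed-between-the-two-enterprises"   # stand-in for real PKI trust

    def issue_assertion(user_id):
        """Identity provider side: assert who the user is and sign the claim."""
        claim = json.dumps({"sub": user_id, "issued": int(time.time())}).encode()
        signature = hmac.new(SHARED_SECRET, claim, hashlib.sha256).hexdigest()
        return claim, signature

    def verify_assertion(claim, signature, max_age=300):
        """Relying party side: check signature and freshness, then admit the user."""
        expected = hmac.new(SHARED_SECRET, claim, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature):
            return None
        data = json.loads(claim)
        if time.time() - data["issued"] > max_age:
            return None
        return data["sub"]            # user is granted access without signing on again

    claim, sig = issue_assertion("partner-employee-42")
    print(verify_assertion(claim, sig))   # -> "partner-employee-42"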



Single Sign On Benefits


Single sign on benefits are:
  • Ability to enforce uniform enterprise authentication and/or authorization policies across the enterprise
  • End-to-end user audit sessions to improve security reporting and auditing
  • Relieves application developers of having to understand and implement identity security in their applications
  • Usually results in significant password help desk cost savings
Since HTTP is stateless, the single sign on software must check every request by the user's browser to see if there is an authentication policy pertaining to the resource or application the user is trying to access. In a medium to large enterprise, this means that every time the user clicks on a different URL, there is traffic between the user's browser, the web or application servers and the security server. This traffic can become large and cumbersome from a performance perspective. Therefore, most modern single sign on systems use LDAP (Lightweight Directory Access Protocol) directories to store the authentication and authorization policies. LDAP directories are designed for high-performance lookups, which addresses the heavy traffic load. Further, the LDAP directories are often the source against which the single sign on system authenticates users.
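A hedged sketch of such a lookup follows (Python; assumes the third-party ldap3 package, and the server name, DNs and attribute names are illustrative rather than taken from any particular product):

    from ldap3 import Server, Connection, ALL   # assumed third-party package

    server = Server("ldap.example.com", get_info=ALL)

    # 1. Authenticate the user by binding with the credentials supplied once at sign-on.
    user = Connection(server, "uid=jsmith,ou=people,dc=example,dc=com", "secret")
    authenticated = user.bind()                 # True if the directory accepts the credentials

    # 2. Look up the authorization policy attached to the application being accessed
    #    (the policy entry layout and attribute names are illustrative only).
    sso = Connection(server, "cn=sso-service,dc=example,dc=com", "service-secret",
                     auto_bind=True)
    sso.search("ou=policies,dc=example,dc=com", "(cn=payroll-app)",
               attributes=["description", "member"])
    for entry in sso.entries:
        print(entry)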

Single sign on systems in medium to large enterprises can become a single point of enterprise failure if not properly designed. If the single sign on system goes down but the applications remain up, no user can access any resource or application protected by the SSO system. Many enterprises have experienced this painful condition resulting in productivity loss. Therefore, it is essential that your enterprise single sign on system have a good and well tested failover and disaster recovery design.

Finally, single sign on systems in medium to large enterprises require good identity data governance. The security features offered by the single sign on system are only as good as the underlying identity data. Thus it is critical that all enterprise identity data be maintained by good, responsive business processes that pick up any change to an identity, such as new identity creation, identity termination or role changes. Without this, enterprise SSO systems are vulnerable to creating enterprise security holes.