Friday 16 August 2013

OpenERP CMS

Hello everyone,

Here I am with CMS implementation in OpenERP.

Before we start with the CMS design introduction, let's have a look at decorators in Python, as they are widely used here.

For decorators you can either go through the Python standard documentation or go through my following blog post, which describes them briefly.

http://mishekha.blogspot.in/2013/08/python-decorators.html

A CMS, or content management system, is a system where your content is managed, allowing publishing and editing of content. The biggest example is blogging platforms, where you put up your content, edit it and publish it. In the same way, the OpenERP CMS allows you to build a website inside OpenERP itself.

So let's start with the CMS:
OpenERP has developed a website module. This module has a website controller (main.py of the website module) inheriting the Home controller (main.py of the web module) and overriding its index method, which is called when OpenERP is requested with the / path (the root path; go through main.py -> the Home class's index method, check the route and you will find the path '/', which means a '/' request is handled by the index method). What we did here: if someone enters just the / path, we open the website view, and if someone enters /admin, we open the regular OpenERP client. As described in the decorators post, a decorator is used here: @http.route('/') is applied when the function is defined, the current function is passed as an argument to the route method of the http module, and the resulting wrapper runs around the entry and exit of the function.
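To see how a routing decorator like this works in principle, here is a toy sketch (not OpenERP's actual http.route code; all names here are illustrative) of a decorator that registers handler functions against paths:

```python
# Toy sketch of path-based routing with a decorator -- not OpenERP's real
# http.route, just an illustration of the registration mechanism.
ROUTES = {}

def route(path):
    def decorator(func):
        ROUTES[path] = func   # register the handler at decoration time
        return func           # the function itself is left unchanged
    return decorator

class Home(object):
    @route('/')
    def index(self):
        return "website homepage"

    @route('/admin')
    def admin(self):
        return "regular OpenERP client"

def dispatch(path):
    # Look up the registered handler for the path and call it.
    return ROUTES[path](Home())

print(dispatch('/'))        # website homepage
print(dispatch('/admin'))   # regular OpenERP client
```

The point is that registration happens once, at decoration time, which is why all paths are known as soon as the module is loaded.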

We have a page method in the website class which renders a page given a page path; the path given is nothing but the page name. For example, for the root path we redirect to the website.homepage page. First we get the rendering context by calling the get_rendering_context method (described below), and then we render the path, i.e. the website.homepage page, by calling the render method of ir.ui.view. Go through the render method of ir.ui.view, and also through qweb.py in server tools, where all the logic to render a QWeb template by parsing it resides. The ultimate result of this method is HTML, as it is returned as the response of the request and displayed in the browser, and the browser understands HTML.

get_rendering_context:
--------------------------------------
While defining a template you will need dynamic values from some object like res.company. Say, for example, we have a template where I have set title = <title><t t-esc="title or res_company.name"/></title>; here the value of the title comes from the res.company name. So we have a get_rendering_context method in our website model (your CMS module's model) which sets the context with a res.company browse record for the current user, so that while rendering the template we can take data from the context. To check, go to the website controller (main.py) and look at the page method there.
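As a rough illustration of the idea (hypothetical names and values, not the actual website model code), the rendering context is just a dict of values the template can reference:

```python
# Hypothetical sketch of a rendering context: the template references
# res_company.name, so the context must carry a company record.
from collections import namedtuple

Company = namedtuple('Company', ['name'])

def get_rendering_context(values=None):
    ctx = {'res_company': Company(name='My Company')}
    if values:
        ctx.update(values)
    return ctx

def render_title(ctx):
    # Stand-in for <title><t t-esc="title or res_company.name"/></title>
    title = ctx.get('title') or ctx['res_company'].name
    return '<title>%s</title>' % title

print(render_title(get_rendering_context()))                   # falls back to the company name
print(render_title(get_rendering_context({'title': 'Home'})))  # uses the explicit title
```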



Defined templates are stored as QWeb templates. You have to give modulename.pagename (the template name); it is partitioned by a . (dot), the first part is taken as the module and the second part as the view name, and the view is found by get_object_reference of ir.model.data, which returns an id and a model (the id of the view in ir.model.data, and the model, i.e. ir.ui.view). We then call the read_combined method; this method reads the view together with the views inheriting from it, which is why it is named read_combined. The view returned here is the QWeb template, which is converted into HTML by the render method of qweb.py in tools; the qweb module has a QwebXML class whose render method parses the view and converts it into HTML.
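The modulename.pagename lookup can be sketched like this (a simplified stand-in for get_object_reference; the lookup table and ids below are made up for illustration):

```python
def split_template_ref(template_name):
    # 'website.homepage' -> module 'website', view name 'homepage'
    module, _, view_name = template_name.partition('.')
    return module, view_name

# A tiny stand-in for the ir.model.data lookup table (fake data).
MODEL_DATA = {('website', 'homepage'): ('ir.ui.view', 42)}

def get_object_reference(module, name):
    # Returns (model, id) for the external identifier, like ir.model.data does.
    model, res_id = MODEL_DATA[(module, name)]
    return model, res_id

module, view = split_template_ref('website.homepage')
print(get_object_reference(module, view))   # ('ir.ui.view', 42)
```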

One can go through the CMS examples for HR, CRM, Event, Mail and Sale. They all have different controller paths. To develop a new CMS module you need to follow this structure:

Directory structure:
website_modulename
        |
        -> controllers
        |        |
        |        -> __init__.py
        |           main.py
        |
        -> static
             |
             -> src
                 |
                 -> css -> your CSS file, where the styles for your
                                views/elements reside.
                 |
                 -> js -> your js file
                 |
                 -> xml -> your client-side QWeb templates, if any
        |
        -> views
            |
            -> your_module_view.xml -> these are the views which are going to be
                stored as QWeb views and loaded by path name,
               for example /event/page/contactus (the render method of QWeb does the
               job of fetching the view from the database and rendering it)
        |
        -> modulename.py -> your model definition file, where you define the
             models for your website
        |
        -> __init__.py
        |
        -> __openerp__.py -> Module manifest file.


Now write a model the way you need it. If you need any extra fields on some model for the website: as an example, you can see that we added a field "website_published" on the model mail.message in the website_mail module. Also write the views for your website, for example for your pages (Home page, Contact Us, About Us menus etc.) as QWeb templates (for reference you can take the website module's website_view.xml as an example), and set a unique name for each template.
After doing this much, develop a controller in main.py, for example:

class myModule(http.Controller):
    @http.route('/myModule/page/home', type='http', auth="public")
    def myFunction(self, **kwargs):
        # your logic: build the rendering context (get_rendering_context)
        values = {}
        # render your view by passing the view name:
        return website.render('viewName', values)

Explanation of the above signature:
-------------------------------------------------------
The path shows that when a request comes in with this path, the given function is called (all these paths are loaded/registered when the server starts). The type parameter shows which kind of request it will be, http or json: when you enter this path in the URL, like localhost:8069/myModule/page/home, it is an HTTP request. (For further reference you can read up on the difference between plain HTTP and JSON requests; the basic point is that a plain HTTP request carries form- or query-encoded data, while JSON (JavaScript Object Notation), as the name suggests, transfers data as a JavaScript-like object {key: value}, which is lightweight, which is why modern web applications use JSON as the format for their request and response data.)
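For comparison, when the web client talks to the server with type='json', the request body is a JSON-RPC style object. A hypothetical example (the model, method and arguments here are placeholders, not a specific documented call):

```json
{
    "jsonrpc": "2.0",
    "method": "call",
    "params": {"model": "res.partner", "method": "read", "args": [[1]]},
    "id": 1
}
```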
auth: auth can have one of three values: 1. user, 2. admin, 3. none
``user``: The user must be authenticated, and the current request will be performed using the rights of that user.
``admin``: The user may not be authenticated, and the current request will be performed using the admin user.
``none``: The method is always active, even if there is no database. Mainly used by the framework and
        authentication modules. The request code will not have any facilities to access the database, nor any
        configuration indicating the current database or the current user.

And the website module adds one more kind of authentication, 'public', so that a method can be accessed publicly; to access all database-related stuff, the public uid is used.


Now, where do you need JS and CSS?
You write styles for your template elements in CSS, and if you need something special, like control over elements and views, client-side events etc., then you handle it with JS. For example, one of the elements of my template is an input element, and when I change that element's value I need to reflect it in another element, like computing a total (onchange). There are lots of tasks which can be accomplished on the client side itself, with no need for a server round trip, so we do them using JS.

That's it. (Note: the CMS is still evolving and there may be large changes in the future, but this post describes the basic design of the CMS as of now.)


Feel free to raise your queries; your feedback is always welcome....

Thursday 15 August 2013

Python decorators.

What Can You Do With Decorators?

Decorators allow you to inject or modify code in functions or classes. Sounds a bit like Aspect-Oriented Programming (AOP) in Java, doesn't it? Except that it's both much simpler and (as a result) much more powerful. For example, suppose you'd like to do something at the entry and exit points of a function (such as perform some kind of security, tracing, locking, etc. -- all the standard arguments for AOP). With decorators, it looks like this:

@entryExit
def func1():
    print "inside func1()"

@entryExit
def func2():
    print "inside func2()"

The @ indicates the application of the decorator.
Function Decorators

A function decorator is applied to a function definition by placing it on the line before that function definition begins. For example:

@myDecorator
def aFunction():
    print "inside aFunction"

When the compiler passes over this code, aFunction() is compiled and the resulting function object is passed to the myDecorator code, which does something to produce a function-like object that is then substituted for the original aFunction().

What does the myDecorator code look like? Well, most introductory examples show this as a function, but I've found that it's easier to start understanding decorators by using classes as decoration mechanisms instead of functions. In addition, it's more powerful.

The only constraint upon the object returned by the decorator is that it can be used as a function -- which basically means it must be callable. Thus, any classes we use as decorators must implement __call__.

What should the decorator do? Well, it can do anything but usually you expect the original function code to be used at some point. This is not required, however:

class myDecorator(object):

    def __init__(self, f):
        print "inside myDecorator.__init__()"
        f() # Prove that function definition has completed

    def __call__(self):
        print "inside myDecorator.__call__()"

@myDecorator
def aFunction():
    print "inside aFunction()"

print "Finished decorating aFunction()"

aFunction()

When you run this code, you see:

inside myDecorator.__init__()
inside aFunction()
Finished decorating aFunction()
inside myDecorator.__call__()

Notice that the constructor for myDecorator is executed at the point of decoration of the function. Since we can call f() inside __init__(), it shows that the creation of f() is complete before the decorator is called. Note also that the decorator constructor receives the function object being decorated. Typically, you'll capture the function object in the constructor and later use it in the __call__() method (the fact that decoration and calling are two clear phases when using classes is why I argue that it's easier and more powerful this way).

When aFunction() is called after it has been decorated, we get completely different behavior; the myDecorator.__call__() method is called instead of the original code. That's because the act of decoration replaces the original function object with the result of the decoration -- in our case, the myDecorator object replaces aFunction. Indeed, before decorators were added you had to do something much less elegant to achieve the same thing:

def foo(): pass
foo = staticmethod(foo)

With the addition of the @ decoration operator, you now get the same result by saying:

@staticmethod
def foo(): pass

This is the reason why people argued against decorators, because the @ is just a little syntax sugar meaning "pass a function object through another function and assign the result to the original function."

The reason I think decorators will have such a big impact is because this little bit of syntax sugar changes the way you think about programming. Indeed, it brings the idea of "applying code to other code" (i.e.: macros) into mainstream thinking by formalizing it as a language construct.
Slightly More Useful

Now let's go back and implement the first example. Here, we'll do the more typical thing and actually use the code in the decorated functions:

class entryExit(object):

    def __init__(self, f):
        self.f = f

    def __call__(self):
        print "Entering", self.f.__name__
        self.f()
        print "Exited", self.f.__name__

@entryExit
def func1():
    print "inside func1()"

@entryExit
def func2():
    print "inside func2()"

func1()
func2()

The output is:

Entering func1
inside func1()
Exited func1
Entering func2
inside func2()
Exited func2

You can see that the decorated functions now have the "Entering" and "Exited" trace statements around the call.

The constructor stores the argument, which is the function object. In the call, we use the __name__ attribute of the function to display that function's name, then call the function itself.
Using Functions as Decorators

The only constraint on the result of a decorator is that it be callable, so it can properly replace the decorated function. In the above examples, I've replaced the original function with an object of a class that has a __call__() method. But a function object is also callable, so we can rewrite the previous example using a function instead of a class, like this:

def entryExit(f):
    def new_f():
        print "Entering", f.__name__
        f()
        print "Exited", f.__name__
    return new_f

@entryExit
def func1():
    print "inside func1()"

@entryExit
def func2():
    print "inside func2()"

func1()
func2()
print func1.__name__

new_f() is defined within the body of entryExit(), so it is created and returned when entryExit() is called. Note that new_f() is a closure, because it captures the actual value of f.

Once new_f() has been defined, it is returned from entryExit() so that the decorator mechanism can assign the result as the decorated function.

The output of the line print func1.__name__ is new_f, because the new_f function has been substituted for the original function during decoration. If this is a problem you can change the name of the decorator function before you return it:

def entryExit(f):
    def new_f():
        print "Entering", f.__name__
        f()
        print "Exited", f.__name__
    new_f.__name__ = f.__name__
    return new_f
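The standard library automates this renaming: functools.wraps copies __name__, __doc__ and other metadata from the wrapped function onto the wrapper, so the version above can also be written as:

```python
import functools

def entryExit(f):
    @functools.wraps(f)   # copies f.__name__, f.__doc__, etc. onto new_f
    def new_f():
        print("Entering", f.__name__)
        f()
        print("Exited", f.__name__)
    return new_f

@entryExit
def func1():
    print("inside func1()")

func1()
print(func1.__name__)   # func1
```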

The information you can dynamically get about functions, and the modifications you can make to those functions, are quite powerful in Python.


I would also recommend going through the following document, which gives a very good explanation with lower-level information.

http://pythonconquerstheuniverse.wordpress.com/2009/08/06/introduction-to-python-decorators-part-1/

Friday 12 April 2013

Etherpad configuration with OpenERP: run Etherpad on your own instance and configure it with OpenERP.

Install the Etherpad client from one of the following links.

http://etherpad.org/#download
https://github.com/ether/etherpad-lite#installation

For Windows

Prebuilt windows package

This package works out of the box on any windows machine, but it's not very useful for developing purposes...
  1. Download the latest windows package
  2. Extract the folder
Now, run start.bat and open http://localhost:9001 in your browser.

Fancy install

You'll need node.js and (optionally, though recommended) git.
  1. Grab the source (download it, or clone it with git)
  2. start bin\installOnWindows.bat
Now, run start.bat and open http://localhost:9001 in your browser.
Update to the latest version with git pull origin, then run bin\installOnWindows.bat, again.


Note: Node.js is required for the Etherpad installation.

Ubuntu

For Ubuntu, download the tar.gz, extract it and run the run.sh script. This may ask you to install some other packages, so install those first and then run the sh file again.

The Etherpad client requires Node.js; to install Node.js on your machine, run the following commands.


sudo apt-get install python g++ make
mkdir ~/nodejs && cd $_
wget -N http://nodejs.org/dist/node-latest.tar.gz
tar xzvf node-latest.tar.gz && cd `ls -rd node-v*`
./configure
make install

When you run Etherpad's run.sh, you will get the following messages:

[2013-04-12 15:11:16.946] [WARN] console - You need to set a sessionKey value in settings.json, this will allow your users to reconnect to your Etherpad Instance if your instance restarts
[2013-04-12 15:11:16.949] [WARN] console - DirtyDB is used. This is fine for testing but not recommended for production.
[2013-04-12 15:11:19.316] [INFO] console - Installed plugins: ep_etherpad-lite
[2013-04-12 15:11:19.351] [WARN] console - Can't get git version for server header
ENOENT, no such file or directory '/home/msh/Downloads/ether-etherpad-lite-069319f/.git/HEAD'
[2013-04-12 15:11:19.352] [INFO] console - Report bugs at https://github.com/ether/etherpad-lite/issues
[2013-04-12 15:11:19.562] [INFO] console -    info  - 'socket.io started'
[2013-04-12 15:11:19.917] [INFO] console - You can access your Etherpad-Lite instance at http://0.0.0.0:9001/
[2013-04-12 15:11:19.917] [WARN] console - Admin username and password not set in settings.json.  To access admin please uncomment and edit 'users' in settings.json

See the first message, which shows that you have to set a session key inside settings.json.

Also read the log line "You can access your Etherpad-Lite instance at http://0.0.0.0:9001/": your Etherpad is running on your system on port 9001.

Now you can go to the Etherpad folder, wherever your Etherpad resides, and open the api.txt file. Copy the API key and add it in OpenERP -> Settings -> Companies -> open the company and, in the configuration tab, add the API key, and also add your host:port in Pad Server.

Then install the pad_project module and just open a Task: see that the description field has widget="pad", so you get an Etherpad view for that field, served by the Etherpad client running on your system.

We have used the Python API for Etherpad; you can download the
HTTP API client libraries from

https://github.com/ether/etherpad-lite/wiki/HTTP-API-client-libraries

Embed this API into your code as we did in the standard pad module of OpenERP.
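The HTTP API itself is just parameterized GET requests against the Etherpad server. As a rough sketch (the host, port, API version, key and pad ID below are placeholder values; check api.txt and the Etherpad API docs for your version), a client call is formed like this:

```python
# Sketch of how an Etherpad HTTP API URL is formed. All concrete values
# below are placeholders, not real credentials.
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

def etherpad_api_url(host, port, method, apikey, **params):
    query = dict(params, apikey=apikey)
    return 'http://%s:%s/api/1/%s?%s' % (host, port, method, urlencode(query))

url = etherpad_api_url('localhost', 9001, 'getText', 'SECRETKEY', padID='test-pad')
print(url)
# -> the full GET URL, including the apikey and padID query parameters;
#    fetching it returns a JSON response from the Etherpad server.
```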

Very important note: this API only works over HTTP, as the name of the library suggests, so you cannot run your Etherpad client on HTTPS. If you run it on HTTPS, it may create issues: whenever you open your pad widget from a task and write something in it, it sends an HTTP request to your Etherpad server, which is running on HTTPS, so it will give a Bad Gateway error.


Sunday 17 March 2013

OpenERP Translation



How are translations/POTs generated? (Export a translation with the tgz option; this will generate POT files for the installed modules.)


Settings -> Translation -> Import/Export Translation -> Export Translation wizard: export the translation with the tgz option. This calls the trans_export method (note that this wizard has fields like lang and modules, a many2many field with a relation to ir.module.module, used to pass modules in the trans_export signature; buffer in the method definition is an object of cStringIO.StringIO()). This method calls the trans_generate method of translate.py, which is responsible for generating the translations for views, models, JS and web XMLs.

The trans_generate method executes two SQL queries: one on ir.model (which contains information regarding the model, like the model description), used to translate _constraints and _sql_constraints, and a second on ir.model.data (which contains data related to the model, like views, wizards, reports and fields). Then, based on the model, it translates views (ir.ui.view) by calling trans_parse_view, wizards (ir.actions.wizard, which also calls trans_parse_view), fields (ir.model.fields, converting the field string, help, and selection values), and reports (ir.actions.report.xml). This method gets the addons path, as passed as a command line argument or set in the .cfg file, iterates through *.py, *.mako, *.js and *.xml files (also web XMLs, because web is also a module), and calls babel_extract_terms, which collects the translation terms of .js and QWeb files. Every term found is pushed by calling push_translation, which appends translations to the _to_translate list, which is returned to trans_export. After collecting the translations, trans_export calls _process, which generates the POT file. If we are generating a POT file, we do not pass the lang parameter to trans_export, because based on the lang parameter it generates either a .po or a .pot file; if no lang parameter is defined, it generates a .pot file with translation = '' for each term.
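For illustration, here is what the generated entries look like (a hypothetical term; real entries also carry source-location comments for the module and view):

```po
# In the .pot file, every term has an empty translation:
msgid "Contact us"
msgstr ""

# In a language's .po file (e.g. fr.po), the same entry is filled in:
msgid "Contact us"
msgstr "Contactez-nous"
```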


How are entries created in ir.translation?


The load method of the ir.translation object is called from the update_translation method of ir.module.module (which is basically called by the Load object button method). This load method goes through each module (passed from update_translation of ir.module.module, which collects those whose state is 'installed') and calls the trans_load method (translate.py) for each module, which in turn calls trans_load_data of translate.py. This trans_load_data method goes through the language.po file (for the language you are going to load) and fetches an ir.translation cursor through the _get_import_cursor method of the ir.translation object, which returns an object of ir_translation_import_cursor (an object which creates a temporary cursor to insert mass data into ir.translation; that is, it creates a TEMPORARY table modeled on the ir.translation table). This temporary cursor has a push method which creates an entry in the temporary table whenever you call ir_translation_import_cursor.push, so trans_load_data goes through each term of the .po file and calls that temporary cursor's push method. After enumerating the whole .po file, it calls the ir_translation_import_cursor.finish method, which transfers the whole data into ir.translation in one batch; after the bulk insert from the temporary table, the temporary table is dropped.
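The temporary-table trick is a generic bulk-load pattern. Here is a minimal sketch of it using sqlite3 (OpenERP uses PostgreSQL and its own cursor wrapper, so the table and column names here are illustrative only):

```python
# Minimal sketch of the "push into a temporary table, then transfer in one
# batch" pattern used by ir_translation_import_cursor. Illustrative only.
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE ir_translation (lang TEXT, src TEXT, value TEXT)")

# _get_import_cursor equivalent: create the temporary table once...
cur.execute("CREATE TEMP TABLE tmp_translation (lang TEXT, src TEXT, value TEXT)")

def push(term):
    # push(): one cheap insert per .po entry, into the temporary table
    cur.execute("INSERT INTO tmp_translation VALUES (?, ?, ?)", term)

for term in [('fr_FR', 'Contact us', 'Contactez-nous'),
             ('fr_FR', 'Home', 'Accueil')]:
    push(term)

def finish():
    # finish(): transfer everything into the real table in one batch,
    # then drop the temporary table
    cur.execute("INSERT INTO ir_translation SELECT * FROM tmp_translation")
    cur.execute("DROP TABLE tmp_translation")
    conn.commit()

finish()
print(cur.execute("SELECT COUNT(*) FROM ir_translation").fetchone()[0])  # 2
```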

So this is the way ir.translation gets updated, and as we all know, entries in the .po files are updated through the .pot whenever a translation is updated on Launchpad.

VERY IMPORTANT NOTE: it is very important to note that, from version 7.0, all translations are stored in the ir.translation table, and by all I mean including WEB: translations of web static terms are also stored in ir.translation. This is the reason I have written this doc on translation. As the translation entries in the POT are already generated by the trans_export method of translate.py, whenever the .po files get updated, the translations of web-specific terms will also be available in ir.translation.


How are views, fields and web XMLs (i.e. QWeb templates) translated?


Now, the main topic for which I dug into translation is the web XMLs, as this is different from the previous version. In the previous version, for web-specific terms, we read the .po file of the user's language when the web instance was initiated and created a translation database on the client side; so whenever static translation terms were needed, we took them from that translation database bundle through the _t or _lt methods of that object.

But from version 7.0 everything is stored in ir.translation, so now when the web instance is initiated, we read the ir.translation table instead of reading the .po file. The rest of the process is the same: we create the translation database bundle, and wherever a translation is required, we take the translated term from that bundle. Say, for example, there is a button in the web with the label "Validate"; to translate that string, the web side uses _t("Validate"), the same way we use the _ (underscore) function in server addons.

Up to this point it is clear that static terms in JS can be translated using _t or _lt, but what about the web XML files, the translation of templates? Just look inside a template of pos.xml: there are lots of static terms and messages, and no method is used to translate those static messages (for server addons views, the server's _view_look_dom method returns the translated view). So the answer is that the web itself translates all templates. There is a function named instance.web.qweb.preprocess_node which is called by QWeb (our rendering engine) itself for each element. This preprocess_node method looks at the element, checks attributes like 'label', 'title', 'alt' and 'placeholder', and translates them with _t, replacing the translated term on the element. This method also checks for the text data inside the element, for example <p>Hello World!</p>, where Hello World! is the text; it replaces these texts with translated terms too. So before rendering, QWeb processes the nodes for translation.
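The preprocess_node idea can be sketched with a plain XML walk (the real method lives in the web client's JS; the translation table below is fake and only for illustration):

```python
# Sketch of what a preprocess_node-style pass does: translate selected
# attributes and text nodes before rendering. The lookup table is fake.
import xml.etree.ElementTree as ET

TRANSLATIONS = {'Hello World!': 'Bonjour le monde !', 'Name': 'Nom'}
TRANSLATED_ATTRS = ('label', 'title', 'alt', 'placeholder')

def _t(term):
    return TRANSLATIONS.get(term, term)

def preprocess_node(node):
    for attr in TRANSLATED_ATTRS:
        if node.get(attr):
            node.set(attr, _t(node.get(attr)))
    if node.text and node.text.strip():
        node.text = _t(node.text.strip())
    for child in node:
        preprocess_node(child)

root = ET.fromstring('<div><p>Hello World!</p><input placeholder="Name"/></div>')
preprocess_node(root)
html = ET.tostring(root, encoding='unicode')
print(html)
```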

All of the above is about OpenERP translation; mainly I have written this note about web translation and how it works. Please add your inputs if you know something else regarding the translation topic, and correct me if I am wrong anywhere.


Friday 4 January 2013

REST Protocol

Representational state transfer (REST) is a style of software architecture for distributed hypermedia systems such as the World Wide Web. The term representational state transfer was introduced and defined in 2000 by Roy Fielding in his doctoral dissertation.[1][2] Fielding is one of the principal authors of the Hypertext Transfer Protocol (HTTP) specification versions 1.0 and 1.1.[3][4]

History

The REST architectural style was developed in parallel with HTTP/1.1, based on the existing design of HTTP/1.0.[6] The largest implementation of a system conforming to the REST architectural style is the World Wide Web. REST exemplifies how the Web's architecture emerged by characterizing and constraining the macro-interactions of the four components of the Web, namely origin servers, gateways, proxies and clients, without imposing limitations on the individual participants. As such, REST essentially governs the proper behavior of participants.

Concept

REST-style architectures consist of clients and servers. Clients initiate requests to servers; servers process requests and return appropriate responses. Requests and responses are built around the transfer of representations of resources. A resource can be essentially any coherent and meaningful concept that may be addressed. A representation of a resource is typically a document that captures the current or intended state of a resource.
The client begins sending requests when it is ready to make the transition to a new state. While one or more requests are outstanding, the client is considered to be in transition. The representation of each application state contains links that may be used next time the client chooses to initiate a new state transition.[7]
The name "Representational State Transfer" is intended to evoke an image of how a well-designed Web application behaves: a network of web pages (a virtual state-machine), where the user progresses through the application by selecting links (state transitions), resulting in the next page (representing the next state of the application) being transferred to the user and rendered for their use. Fielding's PhD thesis, section 6.1
REST was initially described in the context of HTTP, but is not limited to that protocol. RESTful architectures can be based on other Application Layer protocols if they already provide a rich and uniform vocabulary for applications based on the transfer of meaningful representational state. RESTful applications maximize the use of the pre-existing, well-defined interface and other built-in capabilities provided by the chosen network protocol, and minimize the addition of new application-specific features on top of it.

HTTP examples

HTTP, for example, has a very rich vocabulary in terms of verbs (or "methods"), URIs, Internet media types, request and response codes, etc. REST uses these existing features of the HTTP protocol, and thus allows existing layered proxy and gateway components to perform additional functions on the network such as HTTP caching and security enforcement.
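For example, REST maps the standard HTTP verbs onto resource operations instead of inventing new ones (the URIs below are illustrative):

```
GET    /users       list the user resources
GET    /users/42    fetch the representation of user 42
POST   /users       create a new user from the representation sent
PUT    /users/42    replace user 42 with the representation sent
DELETE /users/42    delete user 42
```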

SOAP RPC contrast

SOAP RPC over HTTP, on the other hand, encourages each application designer to define a new and arbitrary vocabulary of nouns and verbs (for example getUsers(), savePurchaseOrder(...)), usually overlaid onto the HTTP POST verb. This disregards many of HTTP's existing capabilities such as authentication, caching and content type negotiation, and may leave the application designer re-inventing many of these features within the new vocabulary.[8] Examples of doing so may include the addition of methods such as getNewUsersSince(Date date), savePurchaseOrder(string customerLogon, string password, ...).

Constraints

The REST architectural style describes the following six constraints applied to the architecture, while leaving the implementation of the individual components free to design:
Client–server
A uniform interface separates clients from servers. This separation of concerns means that, for example, clients are not concerned with data storage, which remains internal to each server, so that the portability of client code is improved. Servers are not concerned with the user interface or user state, so that servers can be simpler and more scalable. Servers and clients may also be replaced and developed independently, as long as the interface is not altered.
Stateless
The client–server communication is further constrained by no client context being stored on the server between requests. Each request from any client contains all of the information necessary to service the request, and any session state is held in the client. The server can be stateful; this constraint merely requires that server-side state be addressable by URL as a resource. This not only makes servers more visible for monitoring, but also makes them more reliable in the face of partial network failures as well as further enhancing their scalability.
Cacheable
As on the World Wide Web, clients can cache responses. Responses must therefore, implicitly or explicitly, define themselves as cacheable, or not, to prevent clients reusing stale or inappropriate data in response to further requests. Well-managed caching partially or completely eliminates some client–server interactions, further improving scalability and performance.
Layered system
A client cannot ordinarily tell whether it is connected directly to the end server, or to an intermediary along the way. Intermediary servers may improve system scalability by enabling load-balancing and by providing shared caches. They may also enforce security policies.
Code on demand (optional)
Servers are able temporarily to extend or customize the functionality of a client by the transfer of executable code. Examples of this may include compiled components such as Java applets and client-side scripts such as JavaScript.
Uniform interface
The uniform interface between clients and servers, discussed below, simplifies and decouples the architecture, which enables each part to evolve independently. The four guiding principles of this interface are detailed below.
The only optional constraint of REST architecture is code on demand. If a service violates any other constraint, it cannot strictly be considered RESTful.
Complying with these constraints, and thus conforming to the REST architectural style, will enable any kind of distributed hypermedia system to have desirable emergent properties, such as performance, scalability, simplicity, modifiability, visibility, portability and reliability.

Guiding principles of the interface

The uniform interface that any REST interface must provide is considered fundamental to the design of any REST service.[9]
Identification of resources
Individual resources are identified in requests, for example using URIs in web-based REST systems. The resources themselves are conceptually separate from the representations that are returned to the client. For example, the server does not send its database, but rather, perhaps, some HTML, XML or JSON that represents some database records expressed, for instance, in Finnish and encoded in UTF-8, depending on the details of the request and the server implementation.
Manipulation of resources through these representations
When a client holds a representation of a resource, including any metadata attached, it has enough information to modify or delete the resource on the server, provided it has permission to do so.
Self-descriptive messages
Each message includes enough information to describe how to process the message. For example, which parser to invoke may be specified by an Internet media type (previously known as a MIME type). Responses also explicitly indicate their cacheability.[1]
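A short sketch of what "self-descriptive" means in practice: the receiver selects a parser from the Content-Type header and reads cacheability from Cache-Control, without out-of-band knowledge about the message. The response structure here is a hypothetical stand-in for a real HTTP library's response object.

```python
import json

# The message itself says how to process it: Content-Type selects the
# parser, Cache-Control states whether the response may be cached.
PARSERS = {
    "application/json": json.loads,
    "text/plain": lambda body: body,
}

def process(response):
    """Parse a response using only the metadata it carries."""
    media_type = response["headers"]["Content-Type"].split(";")[0].strip()
    data = PARSERS[media_type](response["body"])
    cacheable = "no-store" not in response["headers"].get("Cache-Control", "")
    return data, cacheable

resp = {
    "headers": {"Content-Type": "application/json", "Cache-Control": "max-age=3600"},
    "body": '{"status": "ok"}',
}
data, cacheable = process(resp)
```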
Hypermedia as the engine of application state
Clients make state transitions only through actions that are dynamically identified within hypermedia by the server (e.g. by hyperlinks within hypertext). Except for simple fixed entry points to the application, a client does not assume that any particular actions will be available for any particular resources beyond those described in representations previously received from the server.
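The hypermedia constraint can be sketched as a client that follows only the links the server advertises in the representation, rather than hard-coding URI patterns. The link-relation names and the order document below are illustrative.

```python
import json

# The server embeds the available transitions in the representation;
# the client discovers them instead of assuming them.
order = json.loads("""
{
  "id": 17,
  "status": "unpaid",
  "links": [
    {"rel": "self",    "href": "/orders/17"},
    {"rel": "payment", "href": "/orders/17/payment"}
  ]
}
""")

def find_action(representation, rel):
    """Return the href advertised for a link relation, or None if the
    server did not offer that transition."""
    for link in representation.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None

next_step = find_action(order, "payment")
```

If the server later stops advertising the `payment` link (say, once the order is paid), `find_action` returns None and the client simply cannot take that transition, which is the point of the constraint.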

Key goals

Key goals of REST include:
  • scalability of component interactions
  • generality of interfaces
  • independent deployment of components
  • intermediary components to reduce interaction latency, enforce security and encapsulate legacy systems
REST has been applied to describe the desired web architecture, to help identify existing problems, to compare alternative solutions, and to ensure that protocol extensions would not violate the core constraints that make the Web successful.
Fielding describes REST's effect on scalability thus:
REST's client–server separation of concerns simplifies component implementation, reduces the complexity of connector semantics, improves the effectiveness of performance tuning, and increases the scalability of pure server components. Layered system constraints allow intermediaries—proxies, gateways, and firewalls—to be introduced at various points in the communication without changing the interfaces between components, thus allowing them to assist in communication translation or improve performance via large-scale, shared caching. REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability.[10]

Central principle

An important concept in REST is the existence of resources (sources of specific information), each of which is referenced with a global identifier (e.g., a URI in HTTP). In order to manipulate these resources, components of the network (user agents and origin servers) communicate via a standardized interface (e.g., HTTP) and exchange representations of these resources (the actual documents conveying the information). For example, a resource that represents a circle may accept and return a representation that specifies a center point and radius, formatted in SVG, but may also accept and return a representation that specifies any three distinct points along the curve (since this also uniquely identifies a circle) as a comma-separated list.
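The circle example works because three distinct non-collinear points determine a unique circle, so the two representations carry the same information. A minimal sketch of converting the three-points representation back to center and radius (the standard circumcircle formula; function and variable names are illustrative):

```python
import math

def circle_from_points(p1, p2, p3):
    """Return ((cx, cy), radius) of the unique circle through three
    non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear; no unique circle")
    s1, s2, s3 = x1**2 + y1**2, x2**2 + y2**2, x3**2 + y3**2
    cx = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    cy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    radius = math.hypot(x1 - cx, y1 - cy)
    return (cx, cy), radius

# Three points on the unit circle recover center (0, 0) and radius 1.
center, radius = circle_from_points((1, 0), (0, 1), (-1, 0))
```

A server accepting both representations would normalize incoming three-point lists through a conversion like this before storing or re-serializing the resource.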
Any number of connectors (e.g., clients, servers, caches, tunnels, etc.) can mediate the request, but each does so without "seeing past" its own request (referred to as "layering," another constraint of REST and a common principle in many other parts of information and networking architecture). Thus, an application can interact with a resource by knowing two things: the identifier of the resource and the action required—it does not need to know whether there are caches, proxies, gateways, firewalls, tunnels, or anything else between it and the server actually holding the information. The application does, however, need to understand the format of the information (representation) returned, which is typically an HTML, XML or JSON document of some kind, although it may be an image, plain text, or any other content.

RESTful web services

A RESTful web service (also called a RESTful web API) is a simple web service implemented using HTTP and the principles of REST. It is a collection of resources, with four defined aspects:
  • the base URI for the web service, such as http://example.com/resources/
  • the Internet media type of the data supported by the web service. This is often JSON, XML or YAML but can be any other valid Internet media type.
  • the set of operations supported by the web service using HTTP methods (e.g., GET, PUT, POST, or DELETE).
  • the requirement that the API be hypertext-driven.[11]
The following table shows how the HTTP methods are typically used to implement a web service.
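The typical mapping can also be sketched in code: an in-memory collection resource where GET reads, POST creates a new element within the collection, PUT replaces an element at a known identifier, and DELETE removes it. This is an illustrative sketch, not a real web framework; the class and method names are assumptions.

```python
class Collection:
    """Minimal in-memory stand-in for a collection resource and its
    element resources, dispatching on HTTP method names."""

    def __init__(self):
        self._items = {}
        self._next_id = 1

    def handle(self, method, item_id=None, body=None):
        if item_id is None:                      # collection URI, e.g. /orders
            if method == "GET":                  # list element identifiers
                return sorted(self._items)
            if method == "POST":                 # create a new element
                new_id = self._next_id
                self._next_id += 1
                self._items[new_id] = body
                return new_id
        else:                                    # element URI, e.g. /orders/1
            if method == "GET":                  # retrieve the element
                return self._items[item_id]
            if method == "PUT":                  # replace the element
                self._items[item_id] = body
                return item_id
            if method == "DELETE":               # remove the element
                del self._items[item_id]
                return None
        raise ValueError("method not allowed: %s" % method)

orders = Collection()
oid = orders.handle("POST", body={"status": "unpaid"})
```

Note that PUT on an element is idempotent (replaying it leaves the same state), while POST on the collection is not (each replay creates a new element), which is why the two map to different URIs.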