Thursday, March 8, 2012

Database Triggers


A database trigger is a PL/SQL block that is executed in response to an event in the database.  The event is related to a particular data manipulation on a table, such as inserting, deleting or updating a row of the table.


Triggers may be used for any of the following: 

  • To implement complex business rules that cannot be implemented using integrity constraints. 
  • To audit processes.  For example, to keep track of changes made to a table.  
  • To automatically perform an action when another related action takes place.  For example, updating a table whenever a row is inserted into another table. 

Triggers are similar to stored procedures, but stored procedures are called explicitly and 
triggers are called implicitly by Oracle when the concerned event occurs. 


Note: Triggers are automatically executed by Oracle and their execution is transparent to 
users.



Types of Triggers 


Depending on when a trigger is fired, it may be classified as: 
  • Statement-level trigger 
  • Row-level trigger 
  • Before triggers 
  • After triggers

Statement-level Triggers 

A statement-level trigger is fired only once for a DML statement, irrespective of the number of rows affected by the statement. For example, if you execute the following UPDATE command on the STUDENTS table, the statement-level trigger for UPDATE is executed only once. 

update students set bcode = 'b3' where bcode = 'b2'; 

However, statement-level triggers cannot be used to access the data that is being inserted, updated or deleted. In other words, they do not have access to the keywords NEW and OLD, which are used to access that data.

Statement-level triggers are typically used to enforce rules that are not related to the data itself. For example, it is possible to implement a rule that says "nobody can modify the BATCHES table after 9 P.M." 

A statement-level trigger is the default type of trigger. 

Row-level Trigger 

A row-level trigger is fired once for each row that is affected by a DML command.  For example, if an UPDATE command updates 100 rows, the row-level trigger is fired 100 times, whereas a statement-level trigger is fired only once. 

Row-level triggers are used to check the validity of the data. They are typically used to implement rules that cannot be implemented by integrity constraints.  

Row-level triggers are implemented by using the FOR EACH ROW option in the CREATE TRIGGER statement.

Before Triggers 

While defining a trigger, you can specify whether the trigger is to be fired before the command (INSERT, DELETE, and UPDATE) is executed or after the command is executed. Before triggers are commonly used to check the validity of the data before the action is performed. For instance, you can use a before trigger to prevent the deletion of a row if the deletion should not be allowed in the given case. 

AFTER Triggers 

After triggers are fired after the triggering action is completed. For example, if an after trigger is associated with the INSERT command, it is fired after the row is inserted into the table. 

Possible Combinations 

The following are the various possible combinations of database triggers. 

  • Before Statement 
  • Before Row 
  • After Statement 
  • After Row 

Note: Each of the above triggers can be associated with INSERT, DELETE, and UPDATE 
commands resulting in a total of 12 triggers. 




Tuesday, November 15, 2011

HashMap and Hashtable


Hash table based implementation of the Map interface. This implementation provides all of the optional map operations, and permits null values and the null key. (The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.)

This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.

How Hash Map Works:

HashMap works on the principle of hashing; it provides put() and get() methods for storing and retrieving objects. When we pass a key and a value to the put() method, HashMap uses the key object's hashCode() method to calculate a hash code and then, by applying hashing on that hash code, identifies the bucket location for storing the value object.

While retrieving, it uses the key object's equals() method to find the correct key-value pair and returns the value object associated with that key.

HashMap uses a linked list in case of a collision, and the colliding object is stored in the next node of the linked list. Each node of the linked list stores the complete key-value pair.

What will happen if two different HashMap key objects have the same hashcode?

They will be stored in the same bucket, but in separate nodes of that bucket's linked list, and the keys' equals() method will be used to identify the correct key-value pair in the HashMap, as the sketch below illustrates.
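As a small illustration, the sketch below uses a made-up key class whose hashCode() always returns the same value, so every key lands in the same bucket and equals() has to tell the keys apart:

import java.util.HashMap;
import java.util.Map;

// Hypothetical key class: every instance returns the same hash code, so all keys
// collide into one bucket and equals() is what distinguishes them.
class CollidingKey {
    private final String name;

    CollidingKey(String name) { this.name = name; }

    @Override
    public int hashCode() { return 42; }              // same bucket for every key

    @Override
    public boolean equals(Object other) {
        return other instanceof CollidingKey
                && ((CollidingKey) other).name.equals(this.name);
    }
}

public class CollisionDemo {
    public static void main(String[] args) {
        Map<CollidingKey, String> map = new HashMap<CollidingKey, String>();
        map.put(new CollidingKey("a"), "first");
        map.put(new CollidingKey("b"), "second");     // same bucket, stored in the next node

        System.out.println(map.get(new CollidingKey("a")));   // prints: first
        System.out.println(map.get(new CollidingKey("b")));   // prints: second
    }
}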

 Difference between HashMap and Hashtable:
  • The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls (HashMap allows a null key and null values, whereas Hashtable doesn't allow nulls).
  • HashMap does not guarantee that the order of the map will remain constant over time.
  • HashMap is non synchronized whereas Hashtable is synchronized.
  • The Iterator in HashMap is fail-fast, while the Enumeration for Hashtable is not. A fail-fast iterator throws ConcurrentModificationException if any other thread modifies the map structurally by adding or removing any element, other than through the Iterator's own remove() method. However, this is not a guaranteed behavior; the JVM makes only a best effort. A small sketch follows this list.
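A minimal sketch of the fail-fast behaviour; here the structural modification happens on the same thread, outside the Iterator, which triggers the same exception:

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<String, String>();
        map.put("one", "1");
        map.put("two", "2");
        map.put("three", "3");

        Iterator<String> keys = map.keySet().iterator();
        System.out.println(keys.next());   // consume one entry

        map.put("four", "4");              // structural modification outside the iterator

        keys.next();                       // typically throws ConcurrentModificationException
    }
}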

Hashtable Performance :

To get better performance from your Java Hashtable, use the initialCapacity and loadFactor constructor arguments, and use them wisely, when instantiating the Hashtable.
initialCapacity is the number of buckets to be created at the time of Hashtable instantiation. The number of buckets and the probability of collision are inversely proportional: if you have more buckets than needed, there is less chance of a collision.

For example, if you are going to store 10 elements and you set the initialCapacity to 100, you will have 100 buckets. You calculate hashCode() only 10 times against a spectrum of 100 buckets, so the possibility of a collision is very low.

But if you supply an initialCapacity of 10 for the Hashtable, the possibility of a collision is much larger. The loadFactor decides when to automatically increase the size of the Hashtable. The default initialCapacity is 11 and the default loadFactor is 0.75, which means that when the Hashtable is three-quarters full, its size is increased.

New capacity in java Hashtable is calculated as follows:
int newCapacity = oldCapacity * 2 + 1;

If you give a smaller capacity and load factor, the Hashtable performs rehash() often, which causes performance issues. Therefore, for efficient Hashtable performance in Java, give an initialCapacity about 25% larger than the number of elements you need to store and a loadFactor of 0.75 when you instantiate it.
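A small sketch of this sizing rule of thumb; the numbers are illustrative only:

import java.util.Hashtable;
import java.util.Map;

public class HashtableSizingDemo {
    public static void main(String[] args) {
        int expectedEntries = 10;

        // Follow the rule of thumb above: roughly 25% more capacity than the
        // number of entries you expect, with the default load factor of 0.75.
        int initialCapacity = (int) (expectedEntries * 1.25);
        float loadFactor = 0.75f;

        Map<String, Integer> table = new Hashtable<String, Integer>(initialCapacity, loadFactor);
        for (int i = 0; i < expectedEntries; i++) {
            table.put("key-" + i, i);
        }
        System.out.println("stored " + table.size() + " entries");
    }
}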

ConcurrentHashMap:

It is a hash table supporting full concurrency of retrievals and adjustable expected concurrency for updates. This class obeys the same functional specification as Hashtable and includes versions of methods corresponding to each method of Hashtable.

However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is no support for locking the entire table in a way that prevents all access.

Retrieval operations (including get) generally do not block, so may overlap with update operations (including put and remove). Retrievals reflect the results of the most recently completed update operations holding upon their onset.

For aggregate operations such as putAll and clear, concurrent retrievals may reflect the insertion or removal of only some entries. Additionally, the iterators and enumerations do not throw ConcurrentModificationException. However, iterators are designed to be used by only one thread at a time.

Like Hashtable but unlike HashMap, this class does not allow null to be used as either a key or a value.

Difference between Hashtable and ConcurrentHashMap :

Both can be used in a multithreaded environment, but once the size of a Hashtable becomes considerably large, performance degrades because the whole table has to be locked for a longer duration during iteration.

ConcurrentHashMap introduced the concept of segmentation: however large it becomes, only a certain part of it gets locked to provide thread safety, so many other readers can still access the map without waiting for the iteration to complete.

In summary, ConcurrentHashMap locks only a certain portion of the map, while Hashtable locks the full map while doing iteration. 
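A minimal sketch of the difference in practice: one thread keeps adding entries to a ConcurrentHashMap while another iterates over it, without locking the whole map and without a ConcurrentModificationException (class and value names are made up):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        final Map<Integer, String> map = new ConcurrentHashMap<Integer, String>();

        Thread writer = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    map.put(i, "value-" + i);
                }
            }
        });

        Thread reader = new Thread(new Runnable() {
            public void run() {
                for (int pass = 0; pass < 5; pass++) {
                    int seen = 0;
                    for (Map.Entry<Integer, String> entry : map.entrySet()) {
                        seen++;                     // weakly consistent view, no table-wide lock
                    }
                    System.out.println("pass " + pass + " saw " + seen + " entries");
                }
            }
        });

        writer.start();
        reader.start();
        writer.join();
        reader.join();

        // map.put(null, "x");  // would throw NullPointerException: nulls are not allowed
    }
}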



Monday, November 14, 2011

Servlets

A Servlet is a Java class that runs within a web container in an application server, servicing multiple client requests concurrently as they are forwarded through the server and the web container. The web browser establishes a socket connection to the host server in the URL and sends the HTTP request. Servlets can forward requests to other servers and servlets and can also be used to balance load among several servers.

A browser and a servlet communicate using the HTTP protocol (a stateless request/response based protocol).

The ServletRequest encapsulates the request from the client, and the ServletResponse encapsulates the communication from the servlet back to the client.

Servlet's Life Cycle :

The Web container is responsible for managing the servlet's life cycle. The Web container creates an instance of the servlet and then calls the init() method. At the completion of the init() method the servlet is in the ready state to service requests from clients. The container calls the servlet's service() method for handling each request, spawning a new thread for each request from the Web container's thread pool (it is also possible to have a single-threaded servlet; see the section on making a servlet thread safe below). Before destroying the instance the container calls the destroy() method. After destroy() the servlet becomes a potential candidate for garbage collection.
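A minimal life-cycle skeleton, assuming the javax.servlet API; the class name and method bodies are illustrative:

import java.io.IOException;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The container calls init() once, service()/doGet() once per request
// (usually on a pooled thread), and destroy() before unloading the instance.
public class LifeCycleServlet extends HttpServlet {

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);        // always call super so getServletConfig() works
        // one-time initialisation: open resources, read init parameters, etc.
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");
        response.getWriter().println("servlet is in the ready state");
    }

    @Override
    public void destroy() {
        // release resources before the instance becomes eligible for garbage collection
    }
}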




Get and Post Methods :

All client requests are handled through the service() method. The service() method dispatches the request to an appropriate method such as doGet() or doPost() to handle that request.

The diagram below summarizes the differences between the GET and POST methods:


Prefer using doPost() because it does not expose request parameters in the URL and it can send much more information to the server.

If you want a servlet to take the same action for both GET and POST requests, you should have doGet() call doPost(), or vice versa. Below is a code snippet illustration.
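A minimal sketch of this idiom (the class name is made up):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SameActionServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        doPost(request, response);        // GET requests take the same action as POST
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // common handling for both GET and POST
        response.getWriter().println("handled");
    }
}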


ServletConfig:

The ServletConfig parameters are for a particular servlet. The parameters are specified in web.xml (i.e. the deployment descriptor). The ServletConfig object is created after a servlet is instantiated and is used to pass initialization information to the servlet.


ServletContext:

The ServletContext parameters are specified for the entire Web application. The parameters are specified in web.xml (i.e. the deployment descriptor). The servlet context is common to all servlets, so all servlets share information through the ServletContext.
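A small sketch of reading both kinds of parameters; the parameter names "jdbcUrl" and "appName" are hypothetical entries that would have to be declared in web.xml as an <init-param> and a <context-param> respectively:

import java.io.IOException;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ConfigDemoServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // ServletConfig parameter: visible only to this servlet
        String jdbcUrl = getServletConfig().getInitParameter("jdbcUrl");

        // ServletContext parameter: shared by every servlet in the web application
        ServletContext context = getServletContext();
        String appName = context.getInitParameter("appName");

        response.getWriter().println(appName + " uses " + jdbcUrl);
    }
}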

Servlet Life Cycle Events:

Servlet lifecycle events work like the Swing events.
  • Any listener interested in observing the ServletContext lifecycle can implement the ServletContextListener interface
  • Listener interested in the ServletContext attribute lifecycle can implement the ServletContextAttributesListener interface.
  • The session listener model is similar to the ServletContext listener model. ServletContext's and Session's listener objects are notified when servlet contexts and sessions are initialized and destroyed, as well as when attributes are added to or removed from a context or session.
The server creates an instance of the listener class to receive events and uses introspection to determine what listener interface (or interfaces) the class implements.
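As an illustration, a minimal ServletContextListener; it assumes the listener class is registered in web.xml with a <listener> element:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class StartupShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        event.getServletContext().log("web application started");
        // e.g. create shared resources and store them as context attributes
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        event.getServletContext().log("web application shutting down");
        // release shared resources
    }
}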

RequestDispatcher:

Defines an object that receives requests from the client and sends them to any resource (such as a servlet, HTML file, or JSP file) on the server. The servlet container creates the RequestDispatcher object, which is used as a wrapper around a server resource located at a particular path or given by a particular name.

This interface is intended to wrap servlets, but a servlet container can create RequestDispatcher objects to wrap any type of resource.

It is used in two ways, illustrated in the sketch after this list:
  • Forward - rd.forward(request, response): used when the servlet needs to forward control to another servlet or JSP to generate the response. This method allows one servlet to do preliminary processing of a request and another resource to generate the response.
  • Include - rd.include(request, response): used when the servlet needs to include the content of a resource such as a servlet, JSP, HTML file or image in the calling servlet's response.
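A sketch of both uses; the resource paths "/result.jsp" and "/header.jsp" are hypothetical resources in the same web application:

import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DispatchServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // forward: do preliminary processing here, then let another resource
        // generate the whole response (uncommitted output is cleared first)
        request.setAttribute("result", "42");
        RequestDispatcher view = request.getRequestDispatcher("/result.jsp");
        view.forward(request, response);
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // include: merge another resource's output into this servlet's own response
        response.getWriter().println("before the included fragment");
        RequestDispatcher fragment = request.getRequestDispatcher("/header.jsp");
        fragment.include(request, response);
        response.getWriter().println("after the included fragment");
    }
}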

Following image depicts the difference between Forward or Include and sendRedirect:



Following image depicts the difference between the getRequestDispatcher(String path) method of “ServletRequest” interface and ServletContext interface:


How to make Servlet thread safe:

One approach is to use the single-threaded model of a servlet by implementing the marker (null) interface javax.servlet.SingleThreadModel. The container will use one of the following approaches to ensure thread safety:
  • Instance pooling where container maintains a pool of servlets.
  • Sequential processing where new requests will wait while the current request is being processed.
Best practice: use multi-threading and stay away from the single-threaded model of the servlet. The single-thread model can adversely affect performance. Shared resources can be synchronized, used in a read-only manner, or shared values can be stored in a session, as hidden fields, or in a database table.

It is better to avoid instance and static variables.

Pre-initialization of a Servlet :

By default the container does not initialize the servlets as soon as it starts up. It initializes a servlet when it receives a request for the first time for that servlet. This is called lazy loading. The servlet deployment descriptor (web.xml) defines the <load-on-startup> element, which can be configured to make the servlet container load and initialize the servlet as soon as it starts up. The process of loading a servlet before any request comes in is called pre-loading or pre-initializing a servlet. We can also specify the order in which the servlets are initialized.

Servlet clustering :

The clustering promotes high availability and scalability. The considerations for servlet clustering are:
  • Objects stored in a session should be serializable to support in-memory replication of sessions. Also consider the overhead of serializing very large objects; test the performance to make sure it is acceptable.
  • Design for idempotence. Failure of a request or impatient users clicking again can result in duplicate requests being submitted. So the Servlets should be able to tolerate duplicate requests.
  • Avoid using instance and static variables in read and write mode because different instances may exist on different JVMs. Any state should be held in an external resource such as a database.
  • Avoid storing values in a ServletContext. A ServletContext is not serializable and also the different instances may exist in different JVMs.
  • Avoid using java.io.* because the files may not exist on all backend machines. Instead use getResourceAsStream().

Session Replication :

Session replication is the term that is used when your current service state is being replicated across multiple application instances. Session replication occurs when we replicate the information (i.e. session attributes) that are stored in your HttpSession. The container propagates the changes only when you call the setAttribute(..) method.

Note: Mutating the objects in a session and then bypassing setAttribute(...) will not replicate the state change. Example: if you have an ArrayList in the session representing shopping cart objects, and you just call getAttribute(...) to retrieve the ArrayList and then add or change something without calling setAttribute(...), the container may not know that you have added or changed something in the ArrayList, so the session will not be replicated.
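A sketch of the shopping-cart example above; the servlet name and the attribute name "cart" are made up:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class AddToCartServlet extends HttpServlet {

    @Override
    @SuppressWarnings("unchecked")
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        HttpSession session = request.getSession(true);

        List<String> cart = (List<String>) session.getAttribute("cart");
        if (cart == null) {
            cart = new ArrayList<String>();
        }
        cart.add(request.getParameter("item"));   // mutation alone may not be replicated

        // Call setAttribute() again so the container knows the session changed
        // and propagates the new state to the other cluster members.
        session.setAttribute("cart", cart);
    }
}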

Constructors in Servlets :

Constructors for dynamically loaded Java classes such as servlets cannot accept arguments. Therefore init() is used to initialize the servlet by passing the ServletConfig object and other needed parameters.

Also java constructors cannot be declared in an interface and javax.servlet.Servlet is an interface.

However, a constructor can be defined in the servlet, but the ServletConfig object is not accessible in the constructor.







Monday, September 26, 2011

JTA : Java Transaction API

Introduction

The Java™ Transaction API (JTA) allows applications to perform distributed transactions; that is, transactions that access and update data on two or more networked computer resources. 

To demarcate a JTA transaction, you invoke the begin, commit, and rollback methods of the javax.transaction.UserTransaction interface.
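A sketch of programmatic demarcation with UserTransaction; "java:comp/UserTransaction" is the standard JNDI name under which a Java EE container exposes the interface, while the class and the debit/credit operations are illustrative stand-ins for real work against one or more resource managers:

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class TransferService {

    public void transfer(int fromAccount, int toAccount, double amount) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");

        utx.begin();                       // start the (possibly distributed) transaction
        try {
            debit(fromAccount, amount);    // e.g. an update against database A
            credit(toAccount, amount);     // e.g. an update against database B
            utx.commit();                  // both updates succeed together...
        } catch (Exception e) {
            utx.rollback();                // ...or neither is applied
            throw e;
        }
    }

    private void debit(int account, double amount) { /* hypothetical resource update */ }

    private void credit(int account, double amount) { /* hypothetical resource update */ }
}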


Distributed Transaction


A distributed transaction is simply a transaction that accesses and updates data on two or more networked resources, and therefore must be coordinated among those resources.

The distributed transaction processing (DTP) model defines several components:
* The application
* The application server
* The transaction manager
* The resource adapter
* The resource manager

The Resource Manager


The resource manager is generally a relational database management system (RDBMS), such as Oracle or SQL Server. All of the actual database management is handled by this component.

The Resource Adapter


The resource adapter is the component that is the communications channel, or request translator, between the "outside world" (in this case the application) and the resource manager. In a Java context, this is a JDBC driver.

The Application Server


Application servers handle the bulk of application operations and take some of the load off the end-user application. The application server adds another process tier to the transaction.

The first step of the distributed transaction process is for the application to send a request for the transaction to the transaction manager.

Transaction Manager


The transaction manager is responsible for making the final decision either to commit or rollback any distributed transaction. A commit decision should lead to a successful transaction; rollback leaves the data in the database unaltered. JTA specifies standard Java interfaces between the transaction manager and the other components in a distributed transaction: the application, the application server, and the resource managers.

Most enterprises use transaction managers and application servers because they manage distributed transactions much more efficiently than an application can.

Transaction Branch


Although the final commit/rollback decision treats the transaction as a single logical unit, there can be many transaction branches involved. A transaction branch is associated with a request to each resource manager involved in the distributed transaction.

Requests to three different RDBMSs, therefore, require three transaction branches. Each transaction branch must be committed or rolled back by the local resource manager.



This relationship is illustrated in the following diagram:
The numbered boxes around the transaction manager correspond to the three interface portions of JTA:

  1. UserTransaction: The javax.transaction.UserTransaction interface provides the application with the ability to control transaction boundaries programmatically. The UserTransaction.begin() method starts a global transaction and associates the transaction with the calling thread.
  2. TransactionManager: The javax.transaction.TransactionManager interface allows the application server to control transaction boundaries on behalf of the application being managed.
  3. XAResource: The javax.transaction.xa.XAResource interface is a Java mapping of the industry-standard XA interface based on the X/Open CAE Specification (Distributed Transaction Processing: The XA Specification).
Notice that a critical link is support of the XAResource interface by the JDBC driver. The JDBC driver must support both normal JDBC interactions, through the application and/or the application server, as well as the XAResource portion of JTA. DataDirect Connect for JDBC drivers provide this support.

Two-Phase Commit Protocol


The transaction manager controls the boundaries of the transaction and is responsible for the final decision as to whether the total transaction should commit or roll back. This decision is made in two phases, known as the two-phase commit protocol.

In the first phase, the transaction manager polls all of the resource managers (RDBMSs) involved in the distributed transaction to see if each one is ready to commit. If a resource manager cannot commit, it responds negatively and rolls back its particular part of the transaction so that data is not altered.

In the second phase, the transaction manager determines if any of the resource managers have responded negatively and, if so, rolls back the whole transaction. If there are no negative responses, the transaction manager commits the whole transaction and returns the results to the application.

 

Serializability


An important concept to understanding isolation through transactions is serializability. Transactions are serializable when the effect on the database is the same whether the transactions are executed in serial order or in an interleaved fashion.

Degrees of isolation (degrees of Consistency)


Degrees of isolation:

degree 0 - a transaction does not overwrite the dirty data (uncommitted updates) of other transactions

degree 1 - degree 0 plus a transaction does not commit any writes until it completes all its writes (until the end of transaction)

degree 2 - degree 1 plus a transaction does not read dirty data from other transactions

degree 3 - degree 2 plus other transactions do not dirty data read by a transaction before the transaction commits

Returning without committing

In a stateless session bean with bean-managed transactions, a business method must commit or roll back a transaction before returning. However, a stateful session bean does not have this restriction.
In a stateful session bean with a JTA transaction, the association between the bean instance and the transaction is retained across multiple client calls. Even if each business method called by the client opens and closes the database connection, the association is retained until the instance completes the transaction.
In a stateful session bean with a JDBC transaction, the JDBC connection retains the association between the bean instance and the transaction across multiple calls. If the connection is closed, the association is not retained.

Methods not allowed in Bean-Managed Transactions

Do not invoke the getRollbackOnly and setRollbackOnly methods of the EJBContext interface in bean-managed transactions. These methods should be used only in container-managed transactions. For bean-managed transactions, invoke the getStatus and rollback methods of the UserTransaction interface.

Wednesday, September 21, 2011

JMS : Java Message Service


Introduction

Enterprise messaging systems

Enterprise messaging systems, often known as message-oriented middleware (MOM), provide a mechanism for integrating applications in a loosely coupled, flexible manner. They provide asynchronous delivery of data between applications on a store-and-forward basis; that is, the applications do not communicate directly with each other, but instead communicate with the MOM, which acts as an intermediary.
The MOM provides assured delivery of messages (or at least makes its best effort) and relieves application programmers from knowing the details of remote procedure calls (RPC) and networking/communications protocols.

What is JMS?


JMS is a set of interfaces and associated semantics that define how a JMS client accesses the facilities of an enterprise messaging product.
The key to JMS portability is the fact that the JMS API is provided by Sun as a set of interfaces. Products that provide JMS functionality do so by supplying a provider that implements these interfaces.
As a developer, you build a JMS application by defining a set of messages and a set of client applications that exchange those messages.
  
JMS overview and architecture

Application


A JMS application comprises the following elements:

  • JMS clients. Java programs that send and receive messages using the JMS API.
  • Non-JMS clients. It is important to realize that legacy programs will often be part of an overall JMS application, and their inclusion must be anticipated in planning.
  • Messages. The format and content of messages to be exchanged by JMS and non-JMS clients is integral to the design of a JMS application.
  • JMS provider. As was stated previously, JMS defines a set of interfaces for which a provider must supply concrete implementations specific to its MOM product.
  • Administered objects. An administrator of a messaging-system provider creates objects that are isolated from the proprietary technologies of the provider. 
Administered objects

 To keep JMS clients portable, objects that implement the JMS interfaces must be isolated from a provider's proprietary technologies.
The mechanism for doing this is administered objects. These objects, which implement JMS interfaces, are created by an administrator of the provider's messaging system and are placed in the JNDI namespace.
The objects are then retrieved by JMS programs and accessed through the JMS interfaces that they implement. The JMS provider must supply a tool that allows creation of administered objects and their placement in the JNDI namespace.

There are two types of administered objects:

  • ConnectionFactory: Used to create a connection to the provider's underlying messaging system.
  • Destination: Used by the JMS client to specify the destination of messages being sent or the source of messages being received.
Although the administered objects themselves are instances of classes specific to a provider's implementation, they are retrieved using a portable mechanism (JNDI) and accessed through portable interfaces (JMS). The JMS program needs to know only the JNDI name and the JMS interface type of the administered object; no provider-specific knowledge is required.

Interfaces

The high-level interfaces are:

  • ConnectionFactory: An administered object that creates a Connection. 
  • Connection: An active connection to a provider.
  • Destination: An administered object that encapsulates the identity of a message destination, such as where messages are sent to or received from.
  • Session: A single-threaded context for sending and receiving messages. For reasons of simplicity and because Sessions control transactions, concurrent access by multiple threads is restricted. Multiple Sessions can be used for multithreaded applications.
  • MessageProducer: Used for sending messages.
  • MessageConsumer: Used for receiving messages.
  
Developing a JMS program

A typical JMS program goes through the following steps to begin producing and consuming messages:
  1. Look up a ConnectionFactory through JNDI.
  2. Look up one or more Destinations through JNDI.
  3. Use the ConnectionFactory to create a Connection. 
  4. Use the Connection to create one or more Sessions.
  5. Use a Session and a Destination to create the required MessageProducers and MessageConsumers.
  6. Start the Connection.
At this point, messages can begin to flow, and the application can receive, process, and send messages, as required.
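A minimal sketch of these steps using the JMS 1.1 unified interfaces; the JNDI names "jms/ConnectionFactory" and "jms/OrderQueue" are provider-specific examples:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class SimpleJmsClient {

    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();

        // 1. and 2. look up the administered objects
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
        Destination destination = (Destination) jndi.lookup("jms/OrderQueue");

        // 3. and 4. create the Connection and a non-transacted, auto-acknowledged Session
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // 5. create a producer and a consumer for the same destination
        MessageProducer producer = session.createProducer(destination);
        MessageConsumer consumer = session.createConsumer(destination);

        // 6. start the connection so that messages can flow
        connection.start();

        producer.send(session.createTextMessage("hello, JMS"));

        Message received = consumer.receive(5000);     // synchronous receive with a timeout
        if (received instanceof TextMessage) {
            System.out.println(((TextMessage) received).getText());
        }

        connection.close();                            // also closes session, producer, consumer
    }
}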


Header







The following list gives the name of each header field of Message, its corresponding Java type, and a short description of the field:

  • JMSDestination (Destination): the destination to which the message is sent.
  • JMSDeliveryMode (int): the delivery mode (PERSISTENT or NON_PERSISTENT) specified when the message was sent.
  • JMSMessageID (String): a value that uniquely identifies each message sent by a provider.
  • JMSTimestamp (long): the time a message was handed off to a provider to be sent.
  • JMSCorrelationID (String): used by a client to link one message with another, for example a response with its request.
  • JMSReplyTo (Destination): the destination to which a reply to the message should be sent.
  • JMSRedelivered (boolean): indicates that the message was delivered earlier but its receipt was not acknowledged at that time.
  • JMSType (String): a message type identifier supplied by the client when the message is sent.
  • JMSExpiration (long): the time at which the message expires (0 means the message does not expire).
  • JMSPriority (int): the message priority, from 0 (lowest) to 9 (highest).
JMS permits an administrator to configure JMS to override the client-specified values for JMSDeliveryMode, JMSExpiration and JMSPriority. If this is done, the header field value must reflect the administratively specified value.





Properties

The following list gives the name of each standard property of Message. JMS reserves the "JMSX" property name prefix for these and future JMS-defined properties.



Using Properties

Property values are set prior to sending a message. When a client receives a message, its properties are in read-only mode. If a client attempts to set properties at this point, a  MessageNotWriteableException is thrown.

Message Body

There are five forms of message body, and each form is defined by an interface that extends Message. These interfaces are:
  • StreamMessage: Contains a stream of Java primitive values that are filled and read sequentially using standard stream operations.
  • MapMessage: Contains a set of name-value pairs; the names are of type string and the values are Java primitives.
  • TextMessage: Contains a String. 
  • ObjectMessage: Contains a Serializable Java object; JDK 1.2 collection classes can be used.
  • BytesMessage: Contains a stream of uninterpreted bytes; allows encoding a body to match an existing message format.
Each provider supplies classes specific to its product that implement these interfaces. It is important to note that the JMS specification mandates that providers must be prepared to accept and handle a Message object that is not an instance of one of its own Message classes.

Transaction

A JMS transaction groups a set of produced messages and a set of consumed messages into an atomic unit of work. If an error occurs during a transaction, the production and consumption of messages that occurred before the error can be "undone."
Session objects control transactions, and a Session can be denoted as transacted when it is created. A transacted Session always has a current transaction, that is, there is no begin(); commit() and rollback() end one transaction and automatically begin another.
Distributed transactions can be supported by the Java Transaction API (JTA) XAResource API, though this is optional for providers.

Message Selection

JMS provides a mechanism, called a message selector, for a JMS program to filter and categorize the messages it receives.
The message selector is a String that contains an expression whose syntax is based on a subset of SQL92. The message selector is evaluated when an attempt is made to receive a message, and only messages that match the selection criteria of the selector are made available to the program.
Selection is based on matches to header fields and properties; body values cannot be used for selection. 
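A small sketch of a selector; the properties "region" and "amount" are hypothetical application-defined properties, and the Session and Destination are assumed to have been created as shown earlier:

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;

public class SelectorExample {

    static MessageConsumer createFilteredConsumer(Session session, Destination destination)
            throws JMSException {
        // Only messages whose properties match this SQL92-style expression are delivered.
        String selector = "region = 'EMEA' AND amount > 1000";
        return session.createConsumer(destination, selector);
    }

    static TextMessage createMatchingMessage(Session session) throws JMSException {
        TextMessage message = session.createTextMessage("large EMEA order");
        message.setStringProperty("region", "EMEA");     // properties are set before sending
        message.setIntProperty("amount", 2500);
        return message;
    }
}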


Acknowledgment

Acknowledgment is the mechanism whereby a provider is informed that a message has been successfully received.
If the Session receiving the message is transacted, acknowledgment is handled automatically. If the Session is not transacted, then the type of acknowledgment is determined when the Session is created.

There are three types of acknowledgment:
  • Session.DUPS_OK_ACKNOWLEDGE: Lazy acknowledgment of message delivery; reduces overhead by minimizing work done to prevent duplicates; should be used only if duplicate messages are expected and can be handled.
  • Session.AUTO_ACKNOWLEDGE: Message delivery is automatically acknowledged upon completion of the method that receives the message.
  • Session.CLIENT_ACKNOWLEDGE: Message delivery is explicitly acknowledged by calling the acknowledge() method on the Message.

A session’s recover method is used to stop a session and restart it with its first unacknowledged message. In effect, the session’s series of delivered messages is reset to the point after its last acknowledged message. The messages it now delivers may be different from those that were originally delivered due to message expiration and the arrival of higher-priority messages.
A session must set the redelivered flag of messages it redelivers due to a recovery.

It is up to a JMS application to deal with this ambiguity. In some cases, this may cause a client to produce functionally duplicate messages. A message that is redelivered due to session recovery is not considered a duplicate message.

Duplicate Delivery of Messages

A JMS provider must never deliver a second copy of an acknowledged message. When a client uses the AUTO_ACKNOWLEDGE mode, it is not in direct control of message acknowledgment. Since such clients cannot know for certain if a particular message has been acknowledged, they must be prepared for redelivery of the last consumed message. This can be caused by the client completing its work just prior to a failure that prevents the message acknowledgment from occurring. Only a session's last consumed message is subject to this ambiguity. The JMSRedelivered message header field will be set for a message redelivered under these circumstances.

Duplicate Production of Messages

JMS providers must never produce duplicate messages. This means that a client that produces a message can rely on its JMS provider to insure that consumers of the message will receive it only once. No client error can cause a provider to duplicate a message. If a failure occurs between the time a client commits its work on a Session and the commit method returns, the client cannot determine if the transaction was committed or rolled back. The same ambiguity exists when a failure occurs between the non-transactional send of a PERSISTENT message and the return from the sending method.

Synchronous Delivery

A client can request the next message from a MessageConsumer using one of its receive methods. There are several variations of receive that allow a client to poll or wait for the next message.

Asynchronous Delivery

A client can register an object that implements the JMS MessageListener interface with a MessageConsumer. As messages arrive for the consumer, the provider delivers them by calling the listener’s onMessage method. It is possible for a listener to throw a RuntimeException.
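A minimal sketch of asynchronous delivery; the listener class is illustrative, and the MessageConsumer is assumed to have been created as shown earlier:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class OrderListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            // a listener should handle its own failures rather than throw a RuntimeException
            e.printStackTrace();
        }
    }

    static void register(MessageConsumer consumer) throws JMSException {
        consumer.setMessageListener(new OrderListener());
        // from now on the provider calls onMessage() as messages arrive
    }
}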

Message Delivery Mode

JMS supports two modes of message delivery.

The NON_PERSISTENT mode is the lowest-overhead delivery mode because it does not require that the message be logged to stable storage. A JMS provider failure can cause a NON_PERSISTENT message to be lost.

The PERSISTENT mode instructs the JMS provider to take extra care to insure the message is not lost in transit due to a JMS provider failure. A JMS provider must deliver a NON_PERSISTENT message at-most-once. This means that it may lose the message, but it must not deliver it twice.
A JMS provider must deliver a PERSISTENT message once-and-only-once. This means a JMS provider failure must not cause it to be lost, and it must not deliver it twice.

PERSISTENT (once-and-only-once) and NON_PERSISTENT (at-most-once) message delivery are a way for a JMS client to select between delivery techniques that may lose messages if a JMS provider dies and those which take extra effort to insure that messages can survive such a failure. 




There is typically a performance/reliability trade-off implied by this choice. When a client selects the NON_PERSISTENT delivery mode, it is indicating that it values performance over reliability; a selection of PERSISTENT reverses the requested trade-off. The use of PERSISTENT messages does not guarantee that all messages are always delivered to every eligible consumer. 

The table below shows an overview of durable and non-durable subscriptions:

JMS Point-to-Point Model


A point-to-point (PTP) product or application is built around the concept of message queues, senders, and receivers. Each message is addressed to a specific queue, and receiving clients extract messages from the queue(s) established to hold their messages. Queues retain all messages sent to them until the messages are consumed or until the messages expire. PTP messaging has the following characteristics and is illustrated in the figure below:
  • Each message has only one consumer.
  • A sender and a receiver of a message have no timing dependencies. The receiver can fetch the message whether or not it was running when the client sent the message.
  • The receiver acknowledges the successful processing of a message.
Use PTP messaging when every message you send must be processed successfully by one consumer.

JMS Publish/Subscribe Model

In a publish/subscribe (pub/sub) product or application, clients address messages to a topic. Publishers and subscribers are generally anonymous and may dynamically publish or subscribe to the content hierarchy. The system takes care of distributing the messages arriving from a topic's multiple publishers to its multiple subscribers. Topics retain messages only as long as it takes to distribute them to current subscribers. Pub/sub messaging has the following characteristics.
  • Each message may have multiple consumers.
  • Publishers and subscribers have a timing dependency. A client that subscribes to a topic can consume only messages published after the client has created a subscription, and the subscriber must continue to be active in order for it to consume messages.
The JMS API relaxes this timing dependency to some extent by allowing clients to create durable subscriptions. Durable subscriptions can receive messages sent while the subscribers are not active. Durable subscriptions provide the flexibility and reliability of queues but still allow clients to send messages to many recipients. Use pub/sub messaging when each message can be processed by zero, one, or many consumers. The diagram below illustrates pub/sub messaging.
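A sketch of a durable subscription; the JNDI names, client ID and subscription name are illustrative and must match the provider's configuration:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import javax.naming.InitialContext;

public class DurableSubscriberExample {

    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
        Topic topic = (Topic) jndi.lookup("jms/NewsTopic");

        Connection connection = factory.createConnection();
        connection.setClientID("news-reader-1");   // identifies this subscriber to the provider

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "daily-news");

        connection.start();
        // messages published while this subscriber is offline are held by the provider
        // and delivered the next time a subscriber with the same client ID and
        // subscription name ("daily-news") is created
        System.out.println(subscriber.receive(5000));

        connection.close();
    }
}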



The following table shows the JMS common interfaces and their PTP and pub/sub domain-specific counterparts.

JMS common interface      PTP domain                     Pub/sub domain
ConnectionFactory         QueueConnectionFactory         TopicConnectionFactory
Connection                QueueConnection                TopicConnection
Destination               Queue                          Topic
Session                   QueueSession                   TopicSession
MessageProducer           QueueSender                    TopicPublisher
MessageConsumer           QueueReceiver, QueueBrowser    TopicSubscriber


Unification of domains with the common interfaces results in some domain-specific classes inheriting methods that are not suited for that domain. The JMS provider is required to throw an IllegalStateException should this occur in client code.


JMS Exceptions


Following are the standard JMS Exceptions.