Modes of Deployment

Stage mode—The Administration Server copies the archive files from their source location to a location on each of the targeted Managed Servers that deploy the archive. For example, if you deploy a J2EE Application to three servers in a cluster, the Administration Server copies the application archive files to each of the three servers. Each server then deploys the J2EE Application using its local copy of the archive files.
Stage mode is the default mode when deploying to more than one WebLogic Server instance.

Nostage mode—The Administration Server does not copy the archive files from their source location. Instead, each targeted server must access the archive files from a single source directory for deployment. For example, if you deploy a J2EE Application to three servers in a cluster, each server must be able to access the same application archive files (from a shared or network-mounted directory) to deploy the application.
Nostage mode is the default mode when deploying only to the Administration Server (for example, in a single-server domain). You can also select nostage mode if you run a cluster of server instances on the same machine.

External_stage mode—External_stage mode is similar to stage mode, in that the deployment files must reside locally to each targeted server. However, in external_stage mode the Administration Server does not automatically copy the deployment files to the targeted servers; instead, you must copy the files manually, or use a third-party application to copy them for you.
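
The staging mode can also be set explicitly when deploying from the command line. As a sketch (the admin URL, credentials, target, and archive name below are placeholders for your own environment), the weblogic.Deployer tool accepts -stage, -nostage, and -external_stage options:

java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic -password welcome1 -deploy -stage -targets myCluster myapp.ear

Replacing -stage with -nostage or -external_stage selects one of the other two modes described above.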

WebLogic Server as a Unix Daemon Process

Start WebLogic Server as a Unix Daemon Process

The following procedure outlines the steps required to start WLS as a Unix daemon:

1) Copy the startWebLogic.sh script to the /etc/init.d directory. For the purpose of these instructions, name it "startWLS_daemon.sh".

2) Modify the java command line used to start WLS by adding a path that points to your domain directory. For example:
-Dweblogic.RootDirectory=/opt/bea/user_projects/domains/mydomain

3) Test your startup script by executing it from the /etc/init.d directory.

4) When you have successfully resolved any path issues, add the following command line options:
JAVA_OPTIONS="-Dweblogic.Stdout="outfilename" -Dweblogic.Stderr="errorfilename" -Dweblogic.RootDirectory=/bib00d10/boxi/user_projects/domains/BOXI/ ${SAVE_JAVA_OPTIONS}"

Specify a fully qualified file path in place of outfilename and errorfilename, for example /var/log/wls_out.log.

5) Next, create a link to startWLS_daemon.sh in the /etc/rc2.d directory.
>cd /etc/rc2.d
>ln -s /etc/init.d/startWLS_daemon.sh startWLS

6) Reboot your Unix machine and examine the stdout and stderr output in the file(s) listed in step 4) to ensure that the WLS daemon started successfully.

If your domain is in production mode, you must also modify the startup script to pass the user name and password:
WLS_USER=
WLS_PW=
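
Putting steps 2) and 4) together, the modified portion of startWLS_daemon.sh might look like the following. This is only a sketch: the log paths and domain directory are examples to adapt, and WLS_USER/WLS_PW (shown here with placeholder values) are needed only in production mode.

#!/bin/sh
# startWLS_daemon.sh - modified copy of startWebLogic.sh in /etc/init.d
# Boot credentials for production mode (placeholders - use your own).
WLS_USER=weblogic
WLS_PW=welcome1
# Redirect stdout/stderr to files and point WLS at the domain directory.
JAVA_OPTIONS="-Dweblogic.Stdout=/var/log/wls_out.log -Dweblogic.Stderr=/var/log/wls_err.log -Dweblogic.RootDirectory=/opt/bea/user_projects/domains/mydomain ${SAVE_JAVA_OPTIONS}"
export WLS_USER WLS_PW JAVA_OPTIONS
# ... the remainder of the original startWebLogic.sh follows unchanged ...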

Java Client Program to test WLS MultiDatasource Failover/Load Balancing

Client to test the Multi Data Source Failover/Load Balancing feature of WLS. The client looks up the data source bound at JNDI name "mds", then repeatedly obtains a connection and inserts a row into a table named test, pausing a second between iterations so you can shut down a member pool mid-run and watch failover or load balancing happen.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class MyDsClient
{
    public final static String JNDI_FACTORY = "weblogic.jndi.WLInitialContextFactory";
    private static String serverUrl = "t3://localhost:7001";

    public static void main(String[] args) throws Exception
    {
        // Build an InitialContext pointing at the WebLogic server.
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_FACTORY);
        env.put(Context.PROVIDER_URL, serverUrl);
        InitialContext ic = new InitialContext(env);

        // Look up the multi data source once; each getConnection() call is
        // then routed to a member pool by the failover/load-balancing policy.
        DataSource ds = (DataSource) ic.lookup("mds");

        for (int i = 0; i < 1000; i++)
        {
            Connection con = null;
            PreparedStatement pr = null;
            try
            {
                con = ds.getConnection();
                System.out.println("\n\nGot the Connection : " + con);
                pr = con.prepareStatement("insert into test values(?)");
                pr.setInt(1, i);
                int rows = pr.executeUpdate();
                System.out.println(i + " - ROWS UPDATED: " + rows);
                // Pause so a member pool can be shut down mid-run and the
                // failover to the other pool observed on later iterations.
                Thread.sleep(1000);
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
            finally
            {
                // Release per-iteration JDBC resources.
                if (pr != null) pr.close();
                if (con != null) con.close();
            }
        }
    }
}
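
To run the client (a sketch; the jar location varies by installation), compile it and include the WebLogic client classes on the classpath, for example:

javac MyDsClient.java
java -cp .:/opt/bea/weblogic10/server/lib/weblogic.jar MyDsClient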

Distributed Environment

WebLogic Server Distributed Environment

Node Manager must be installed and running on the second physical machine so that the Administration Server (on the first physical machine) can reach it.
The step-by-step procedure for creating a Managed Server on another physical machine and associating it with the Administration Server is:
1) Install WebLogic Server on the second physical machine.
2) Install Node Manager and start it.
3) Start the WebLogic Scripting Tool (Start > BEA > Tools > WebLogic Scripting Tool).
4) Connect to the Administration Server: type connect() and press Enter.
5) WLST prompts for the admin user name, password, and Administration Server URL. Supply the required information.
6) Enroll the machine: nmEnroll registers the second machine's Node Manager with the domain so the Administration Server can manage servers through it. The command is nmEnroll('the path to the nodemanager folder'), e.g. nmEnroll('C:\bea\weblogic10\common\nodemanager'). A WLST session sketch follows this list.
7) Go to the admin console.
8) From the admin console, create the Managed Server and give it the listen address of the second machine. Also create a Machine and, in its Node Manager settings, enter the remote IP address (i.e., the IP of the second physical machine). Associate this Machine with the newly created Managed Server. In all, the remote IP address is entered in two places: once in the Node Manager settings and once while creating the Managed Server.
9) This creates the Managed Server on the second physical machine, associated with the Administration Server running on the first physical machine.
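
As a rough sketch, the WLST portion of the procedure (steps 4 to 6) might look like the session below. The user name, password, URL, and path are placeholders for your own environment, and since WLST is Jython, forward slashes are the safest way to write Windows paths:

wls:/offline> connect('weblogic', 'welcome1', 't3://adminhost:7001')
wls:/mydomain/serverConfig> nmEnroll('C:/bea/weblogic10/common/nodemanager')
wls:/mydomain/serverConfig> disconnect()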

Directly Connecting to a Database Without Using a DataSource

//package _Connect;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import java.util.Calendar;
import java.util.Date;
import java.util.Properties;

public class TryQuery
{
    public static void main(String[] args) throws Exception
    {
        Calendar cal = Calendar.getInstance();
        long startMilis = 0;
        long endMilis = 0;
        Connection con = null;
        try
        {
            // Load the Oracle thin driver and connect directly through
            // DriverManager - no application server or pool involved.
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Properties props = new Properties();
            props.put("user", "abcd");
            props.put("password", "abcd");
            con = DriverManager.getConnection("jdbc:oracle:thin:@IPADDRESS:PORT_NO:SID", props);

            DatabaseMetaData dbmd = con.getMetaData();
            System.out.println("Connected to : " + dbmd.getDatabaseProductVersion());
            System.out.println("JDBC Driver: " + dbmd.getDriverVersion());
            System.out.println(dbmd.getURL());

            // slow_query is assumed to be a user-defined PL/SQL function that
            // runs long enough to exercise the 5-second query timeout below.
            PreparedStatement ps = con.prepareStatement("select slow_query(20) from dual");
            ps.setQueryTimeout(5);

            cal.setTime(new Date());
            startMilis = cal.getTimeInMillis();
            System.out.println("start query ");
            ps.execute();

            ResultSet rs = ps.getResultSet();
            int counter = 0;
            while (rs.next())
            {
                counter++;
            }
            System.out.println("Rows fetched: " + counter);

            rs.close();
            ps.close();
        }
        catch (Exception e)
        {
            System.out.println(e);
        }
        finally
        {
            // Report elapsed time only if the query actually started.
            if (startMilis > 0)
            {
                cal.setTime(new Date());
                endMilis = cal.getTimeInMillis();
                System.out.println("end query - Elapsed : " + ((endMilis - startMilis) / 1000) + " seconds");
            }
            if (con != null)
            {
                con.close();
            }
        }
    }
}
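
To run this standalone test (a sketch; the driver jar name and location depend on your Oracle client installation), put the Oracle JDBC driver on the classpath:

javac TryQuery.java
java -cp .:ojdbc14.jar TryQuery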

Java Code for Connecting to a Database Using a DataSource

//package _Connect;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import java.util.Calendar;
import java.util.Date;
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceQuery
{
    public static void main(String[] args) throws Exception
    {
        Calendar cal = Calendar.getInstance();
        long startMilis = 0;
        long endMilis = 0;
        Connection con = null;

        // JNDI environment for the WebLogic initial context.
        Hashtable<String, String> ht = new Hashtable<String, String>();
        ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        ht.put(Context.PROVIDER_URL, "t3://localhost:7001");

        try
        {
            Context ctx = new InitialContext(ht);
            // Enter the JNDI name of your data source here.
            DataSource ds = (DataSource) ctx.lookup("JNDI_Name");
            con = ds.getConnection();

            DatabaseMetaData dbmd = con.getMetaData();
            System.out.println("Connected to : " + dbmd.getDatabaseProductVersion());
            System.out.println("JDBC Driver: " + dbmd.getDriverVersion());
            System.out.println(dbmd.getURL());

            PreparedStatement ps = con.prepareStatement("select * from dual");
            ps.setQueryTimeout(5);

            cal.setTime(new Date());
            startMilis = cal.getTimeInMillis();
            System.out.println("start query ");
            ps.execute();

            ResultSet rs = ps.getResultSet();
            int counter = 0;
            while (rs.next())
            {
                counter++;
            }
            System.out.println("Rows fetched: " + counter);

            rs.close();
            ps.close();
        }
        catch (Exception e)
        {
            System.out.println(e);
        }
        finally
        {
            // Report elapsed time only if the query actually started.
            if (startMilis > 0)
            {
                cal.setTime(new Date());
                endMilis = cal.getTimeInMillis();
                System.out.println("end query - Elapsed : " + ((endMilis - startMilis) / 1000) + " seconds");
            }
            // Return the pooled connection to the data source.
            if (con != null)
            {
                con.close();
            }
        }
    }
}
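
As with the multi data source client above, this program runs outside the server, so it needs the WebLogic client classes on the classpath (a sketch; adjust the jar path to your installation):

javac DataSourceQuery.java
java -cp .:weblogic.jar DataSourceQuery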

Presenting the Permanent Generation

Have you ever wondered how the permanent generation fits into our generational system? Ever been curious about what's in the permanent generation? Are objects ever promoted into it? Ever promoted out? Well, you're not alone. Here are some of the answers.

Java objects are instantiations of Java classes. Our JVM has an internal representation of those Java objects and those internal representations are stored in the heap (in the young generation or the tenured generation). Our JVM also has an internal representation of the Java classes and those are stored in the permanent generation. That relationship is shown in the figure below.

[Figure: Java objects live in the heap (young and tenured generations); the internal representations of Java classes live in the permanent generation.]

The internal representation of a Java object and an internal representation of a Java class are very similar. From this point on let me just call them Java objects and Java classes and you'll understand that I'm referring to their internal representation. The Java objects and Java classes are similar to the extent that during a garbage collection both are viewed just as objects and are collected in exactly the same way. So why store the Java classes in a separate permanent generation? Why not just store the Java classes in the heap along with the Java objects?

Well, there is a philosophical reason and a technical reason. The philosophical reason is that the classes are part of our JVM implementation and we should not fill up the Java heap with our data structures. The application writer has a hard enough time understanding the amount of live data the application needs and we shouldn't confuse the issue with the JVM's needs.

The technical reason comes in parts. Firstly the origins of the permanent generation predate my joining the team so I had to do some code archaeology to get the story straight (thanks Steffen for the history lesson).

Originally there was no permanent generation. Objects and classes were just stored together.

Back in those days classes were mostly static. Custom class loaders were not widely used and so it was observed that not much class unloading occurred. As a performance optimization the permanent generation was created and classes were put into it. The performance improvement was significant back then. With the amount of class unloading that occurs with some applications, it's not clear that it's always a win today.

It might be a nice simplification to not have a permanent generation, but the recent implementation of the parallel collector for the tenured generation (aka parallel old collector) has made a separate permanent generation again desirable. The issue with the parallel old collector has to do with the order in which objects and classes are moved. If you're interested, I describe this at the end.

So the Java classes are stored in the permanent generation. What all does that entail? Besides the basic fields of a Java class there are:

Methods of a class (including the bytecodes)
Names of the classes (in the form of an object that points to a string also in the permanent generation)
Constant pool information (data read from the class file, see chapter 4 of the JVM specification for all the details).
Object arrays and type arrays associated with a class (e.g., an object array containing references to methods).
Internal objects created by the JVM (java/lang/Object or java/lang/Exception, for instance)
Information used for optimization by the compilers (JITs)

That's it for the most part. There are a few other bits of information that end up in the permanent generation but nothing of consequence in terms of size. All these are allocated in the permanent generation and stay in the permanent generation. So now you know.
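
As a quick aside (not from the original post), you can watch the permanent generation's occupancy from inside a Java program using the standard memory pool MXBeans. A minimal sketch, assuming a HotSpot JVM where the permanent generation pool's name contains "Perm" (e.g. "PS Perm Gen"):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PermGenWatcher
{
    public static void main(String[] args)
    {
        // Walk all memory pools and report the one backing the permanent generation.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans())
        {
            if (pool.getName().contains("Perm"))
            {
                MemoryUsage usage = pool.getUsage();
                System.out.println(pool.getName() + ": used=" + usage.getUsed()
                        + " bytes, max=" + usage.getMax() + " bytes");
            }
        }
    }
}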

This last part is really, really extra credit. During a collection the garbage collector needs to have a description of a Java object (i.e., how big is it and what does it contain). Say I have an object X and X has a class K. I get to X in the collection and I need K to tell me what X looks like. Where's K? Has it been moved already? With a permanent generation during a collection we move the permanent generation first so we know that all the K's are in their new location by the time we're looking at any X's.

How do the classes in the permanent generation get collected while the classes are moving? Classes also have classes that describe their content. To distinguish these classes from those classes we spell the former klasses. The classes of klasses we spell klassKlasses. Yes, conversations around the office can be confusing. Klasses are instantiations of klassKlasses, so the klassKlass KZ of klass Z has already been allocated before Z can be allocated. Garbage collections in the permanent generation visit objects in allocation order and that allocation order is always maintained during the collection. That is, if A is allocated before B then A always comes before B in the generation. Therefore if a Z is being moved it's always the case that KZ has already been moved.

And why not use the same knowledge about allocation order to eliminate the permanent generations even in the parallel old collector case? The parallel old collector does maintain allocation order of objects, but objects are moved in parallel. When the collection gets to X, we no longer know if K has been moved. It might be in its new location (which is known) or it might be in its old location (which is also known) or part of it might have been moved (but not all of it). It is possible to keep track of where K is exactly, but it would complicate the collector and the extra work of keeping track of K might make it a performance loser. So we take advantage of the fact that classes are kept in the permanent generation by collecting the permanent generation before collecting the tenured generation. And the permanent generation is currently collected serially.


Reference : http://blogs.sun.com/jonthecollector/entry/presenting_the_permanent_generation