States of thread
A thread can be in one of the following states: ready, running, waiting, and dead. These states are explained below.
Ready State: A thread in this state is ready for execution but is not currently being executed. Once a thread in the ready state gets access to the CPU, it moves to the running state.
Running State: A thread is in the running state while it is being executed; it has access to the CPU.
Dead State: A thread reaches the dead state when its run method has finished executing. A dead thread cannot be executed again.
Waiting State: In this state the thread is waiting for some action to happen; once it does, the thread moves back to the ready state. A waiting thread can be in one of the following sub-states: sleeping, suspended, blocked, or waiting for a monitor.
>>A dead Thread cannot be restarted.
>>If you call its start() method after its death, an IllegalThreadStateException will be thrown.
>>Even though the Thread is dead, you can still call its other methods. Why? Simple: the Thread is just another Java object. For example, if you call its run() method again, it is just a sequential method call, with no concurrent execution at all.
thread.join(): Calling jobT.join() makes the current thread wait for jobT to complete.
thread.isAlive(): The isAlive method returns true if the thread has been started and has not yet died. If isAlive returns false, the thread is either a new thread (not yet started) or dead.
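A minimal sketch of these two methods; the thread name "worker" and the sleep duration are arbitrary:

```java
// Sketch: isAlive() before start and after join(), and join() itself.
public class JoinDemo {
    public static String run() {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                // simulate some work
                try { Thread.sleep(50); } catch (InterruptedException e) { }
            }
        }, "worker");
        boolean beforeStart = worker.isAlive();   // false: a new thread, not yet started
        worker.start();
        try { worker.join(); } catch (InterruptedException e) { }  // wait for worker to die
        boolean afterJoin = worker.isAlive();     // false: worker is now dead
        return beforeStart + "," + afterJoin;
    }
    public static void main(String[] args) {
        System.out.println(run());   // false,false
    }
}
```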
Friday, June 22, 2007
inner, nested, anonymous classes
>>There is no such thing as a "static inner class." There are top-level classes and nested classes, and nested classes are by definition divided into static nested classes and inner classes.
>>If it is static, it is not an inner class; you can use it just like a top-level class, qualified as EnclosingClass.EnclosedClass.
>>If an inner class is a member of the enclosing class, it must be attached to an instance of the enclosing class, as in new EnclosingClass().new EnclosedClass().
>>Local and anonymous classes are also inner classes; they can be defined in methods and initializers and are local to that enclosing scope.
>>Since that scope can be instance or static, local and anonymous classes can be defined in an instance or a static context.
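The two instantiation forms above can be sketched as follows; Outer, Nested and Inner are invented names:

```java
// Sketch: instantiating a static nested class vs an inner (member) class.
public class Outer {
    static class Nested {                 // static nested: no Outer instance needed
        String id() { return "nested"; }
    }
    class Inner {                         // inner: must be attached to an Outer instance
        String id() { return "inner"; }
    }
    public static void main(String[] args) {
        Outer.Nested n = new Outer.Nested();       // qualified, like a top-level class
        Outer.Inner i = new Outer().new Inner();   // needs an enclosing instance
        System.out.println(n.id() + " " + i.id()); // nested inner
    }
}
```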
>>Can a class be defined in an interface?
interface IFace {
    public class X { // not a member inner class
        void method_Of_X() {
            System.out.println("Without the static modifier, it is still a static nested class.");
        }
    }
    static class S { // static nested class
        void method_Of_S() {
            System.out.println("static nested class.");
        }
    }
}
Conclusion: all classes defined in an interface are implicitly static.
>>An anonymous class always extends the class or implements the interface named after the keyword new. For anonymous classes, the extends or implements relationship can only be implicit.
>>An anonymous class object can access static and instance fields when it has an associated instance of the enclosing class (i.e., it is defined in an instance method).
>>An anonymous class can access only static fields when it is in a static context (i.e., defined in a static method).
>>What it cannot access is local variables, unless they are declared final. Note: parameters of the method are treated the same as local variables, since they are passed by value and a local copy is what is actually used.
>>A local class defined in a method can access final local variables.
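A small sketch of the final-local-variable rule; the class and variable names are invented, and note that Java 8 and later relax "final" to "effectively final":

```java
// Sketch: a local class may capture a local variable only if it is final.
public class LocalCapture {
    public static String greet(final String name) {   // a parameter is treated like a local
        final String prefix = "Hello, ";              // must be final to be captured
        class Greeter {                               // local class defined in a method
            String message() { return prefix + name; }
        }
        return new Greeter().message();
    }
    public static void main(String[] args) {
        System.out.println(greet("world"));   // Hello, world
    }
}
```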
Static, Interface and abstract class
Implicit access modifiers in interface:
>>All the variables defined in an interface must be static final, i.e. constants. Happily, the values need not be known at compile time; you can do some computation at class load time to compute them. The variables need not be just simple ints and Strings; they can be of any type.
>>All methods in an interface are implicitly declared public and abstract. All variables in an interface must be constants; they are implicitly declared public static final.
>>An interface is abstract, but not vice versa. Nothing may be implemented in an interface, whereas an abstract class can be partially implemented.
>>A class can extend only one class, abstract classes included.
>>A class can implement multiple interfaces.
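A brief sketch of an interface constant whose value is computed at class initialization time; the interface name and values are made up:

```java
// Sketch: interface fields are implicitly public static final, but their
// values can still be computed when the interface is initialized.
public interface Limits {
    int MAX = Integer.parseInt("100");          // computed at class load time
    long[] POWERS = {1L, 10L, 100L, 1000L};     // constants need not be simple ints
}
```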
Can an abstract class or an interface have a static method?
Ex:
abstract class A {
    // OK
    static void doSomething() {
    }
    // illegal combination of modifiers: abstract and static
    //abstract static void doOtherthing();
}
interface B {
    // modifier static not allowed here
    // static void doSomething();
}
static and interfaces:
1) Nested interfaces are implicitly static, whether or not you write the static modifier. (Note: nested interfaces are very rarely used.)
2) Top-level interfaces are NOT static, just like top-level classes.
3) All members (fields and methods) of an interface are implicitly public.
4) All fields defined in an interface are implicitly static and final.
5) Methods defined in an interface are NOT static.
6) All classes defined in an interface are implicitly static.
An abstract class cannot be instantiated.
abstract public class A {
}
// The following lines would not compile:
// A a = new A();
// a.aMethod();
Which Java methods do not participate in polymorphism?
Java methods are always polymorphic, except in the following three situations:
The method is declared as final
The method is declared as private
The method is declared as static
In those three cases, static binding is performed by the compiler: the compiler neither knows nor cares whether the method was, is, or will be overridden by any subclass(es). For all other methods, the decision is made at runtime, which is called dynamic binding.
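For example, a private method is bound statically, so a same-named method in a subclass is not an override at all; the class names here are invented:

```java
// Sketch: a private method does not participate in polymorphism.
class Base {
    private String name() { return "Base"; }
    public String who() { return name(); }    // always calls Base.name()
}
class Sub extends Base {
    private String name() { return "Sub"; }   // an unrelated method, NOT an override
}
public class BindingDemo {
    public static void main(String[] args) {
        System.out.println(new Sub().who());  // Base
    }
}
```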
Thursday, June 21, 2007
Q. When should I use inheritance, when aggregation?
A: They are two different relationships!
He is a human, human has a heart!
ISA: inheritance //He is a human,
HASA: aggregation //human has a heart
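A sketch of the analogy in code; all names are invented:

```java
// IS-A via inheritance, HAS-A via aggregation.
class Heart { }
class Human {
    private final Heart heart = new Heart();   // HAS-A: a Human has a Heart
    Heart getHeart() { return heart; }
}
class He extends Human { }                     // IS-A: He is a Human
public class RelationDemo {
    public static void main(String[] args) {
        He he = new He();
        System.out.println(he instanceof Human);    // true: IS-A
        System.out.println(he.getHeart() != null);  // true: HAS-A
    }
}
```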
static method override
A subclass cannot override methods that are declared static in the superclass. In other words, a subclass cannot override a class method. A subclass can hide a static method in the superclass by declaring a static method in the subclass with the same signature as the static method in the superclass.
Ex:
public class Test {
    public static void main(String[] args) {
        baseClass bb = new subClass();
        subClass ss = new subClass();
        ss.testme();        // output: This is inside the sub class
        bb.testme();        // output: This is inside the sub class
        // Conclusion: you are able to override the testme method.
        ss.testmeStatic();  // output: This is inside the sub class static
        bb.testmeStatic();  // output: This is inside the base class static
        // Conclusion: you are not able to override the testmeStatic method;
        // the subclass version merely hides it, and the call is resolved by
        // the compile-time type of the reference.
    }
}
class subClass extends baseClass {
    public void testme() {
        System.out.println("This is inside the sub class");
    }
    public static void testmeStatic() {
        System.out.println("This is inside the sub class static");
    }
}
class baseClass {
    public void testme() {
        System.out.println("This is inside the base class");
    }
    public static void testmeStatic() {
        System.out.println("This is inside the base class static");
    }
}
Difference between new String("hello") and "hello"
public class test
{
    public static void main(String[] args)
    {
        String a = new String("hello");
        String b = new String("hello");
        if (a == b)
            System.out.println("Equal");
        else
            System.out.println("Not Equal");
        String c = "hello";
        String d = "hello";
        if (c == d)
            System.out.println("Equal");
        else
            System.out.println("Not Equal");
    }
}
I expected to get the following, since a, b, c and d are references to different objects (or maybe I am wrong):
"Not Equal"
"Not Equal"
However, what I get is:
"Not Equal"
"Equal"
Isn't it the case that c and d are references to different objects?
Strings are pooled and shared in Java, for efficiency reasons. By constructing a String object with the new keyword, you explicitly tell the JVM not to use a string from the pool. Therefore a and b reference two different String objects (located at different addresses in memory), whereas c and d both reference the same pooled string at the same address in memory.
== compares the memory addresses, not the contents of the variables.
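A small sketch tying this together; String.intern() returns the pooled copy of a string:

```java
// Sketch: reference equality vs content equality, and the string pool.
public class PoolDemo {
    public static void main(String[] args) {
        String a = new String("hello");   // explicitly NOT from the pool
        String c = "hello";               // pooled literal
        System.out.println(a == c);            // false: different addresses
        System.out.println(a.equals(c));       // true: same contents
        System.out.println(a.intern() == c);   // true: intern() returns the pooled copy
    }
}
```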
equals
public boolean equals(Object obj) -- indicates whether some other object is "equal to" this one.
The equals method implements an equivalence relation:
It is reflexive: for any reference value x, x.equals(x) should return true.
It is symmetric: for any reference values x and y, x.equals(y) should return true if and only if y.equals(x) returns true.
It is transitive: for any reference values x, y, and z, if x.equals(y) returns true and y.equals(z) returns true, then x.equals(z) should return true.
It is consistent: for any reference values x and y, multiple invocations of x.equals(y) consistently return true or consistently return false, provided no information used in equals comparisons on the object is modified.
For any non-null reference value x, x.equals(null) should return false.
The equals method for class Object implements the most discriminating possible equivalence relation on objects; that is, for any reference values x and y, this method returns true if and only if x and y refer to the same object (x==y has the value true).
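A minimal equals() sketch that respects the contract above; the Point class and its fields are invented for illustration, and hashCode() is overridden as well because equal objects must have equal hash codes:

```java
// Sketch: a reflexive, symmetric, transitive, consistent, null-safe equals().
public class Point {
    private final int x, y;
    public Point(int x, int y) { this.x = x; this.y = y; }
    @Override public boolean equals(Object obj) {
        if (this == obj) return true;               // reflexive, and a fast path
        if (!(obj instanceof Point)) return false;  // also handles null
        Point p = (Point) obj;
        return x == p.x && y == p.y;                // symmetric and transitive
    }
    @Override public int hashCode() { return 31 * x + y; }
}
```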
Remember
final variables, whether static or instance, must be initialized explicitly; final instance variables may alternatively be initialized in every constructor. The first snippet below is wrong; the three after it show the choices for making the code right:
public class Test {
    static int sn;
    int n;
    final static int fsn; // wrong: never initialized
    final int fn;         // wrong: never initialized
}
// Choice 1: initialize in the declaration
public class Test {
    static int sn;
    int n;
    final static int fsn = 3;
    final int fn = 6;
}
// Choice 2: initialize in static / instance initializers
public class Test {
    static int sn;
    int n;
    final static int fsn;
    final int fn;
    static { fsn = 6; }
    { fn = 8; }
}
// Choice 3: initialize the instance final in every constructor
public class Test {
    static int sn;
    int n;
    final static int fsn;
    final int fn;
    static { fsn = 6; }
    Test() {
        fn = 8;
    }
    Test(int pn) {
        fn = pn;
    }
}
A subclass cannot override methods that are declared final in the superclass (by definition, final methods cannot be overridden). If you attempt to override a final method, the compiler reports an error and refuses to compile the program.
A subclass cannot override methods that are declared static in the superclass; in other words, a subclass cannot override a class method (it can only hide it).
transient and static variables are not serialized.
transient and static can be used together to modify the same variable.
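A sketch of transient in action: the field is skipped by serialization and comes back as its default value. The Session class and its fields are invented, and the in-memory round trip is just for demonstration:

```java
import java.io.*;

// Sketch: a transient field is not written to the stream and deserializes to null.
public class Session implements Serializable {
    String user = "alice";
    transient String password = "secret";   // not written to the stream

    static Session roundTrip(Session s) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(s);
            oos.flush();
            ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return (Session) ois.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
    public static void main(String[] args) {
        Session copy = roundTrip(new Session());
        System.out.println(copy.user + " " + copy.password);   // alice null
    }
}
```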
The default constructor implicitly generated by the Java compiler has the same accessibility as the class; i.e., a public class will have a public default constructor.
Q. Can we declare an object as final?
final variables cannot be changed.
final methods cannot be overridden.
final classes cannot be subclassed.
However, there is an easy point of confusion here: there is no final object in Java. You cannot declare an object final, only an object reference. A final object reference cannot be changed to refer to anything else, but the object it refers to can still be modified. It is as if the address of your house were final: the address cannot change, but you can still remodel the house or add a room. Your house is not final; the reference to your house is final. By the way, use final as much as is reasonable (without overusing it, of course): because the compiler knows such values are not going to change, the bytecode generated for them can be more efficient.
What is modifier for the default constructor implicitly generated by Java compiler?
The default constructor implicitly generated by the Java compiler has the same accessibility as the class: a public class gets a public default constructor, and a package-private ("friendly") class gets a package-private constructor, unless you explicitly define one otherwise.
Does Java initialize static variable first when we instantiate an Object?
Wrong question! Do not mix up static field initialization with object construction or instantiation!
>>Static field initialization happens at class loading time, once and only once. The field's value might have been changed long before your object is instantiated!
>>When you compile and run a toy program it might look correct. However, real-world programming is NOT a toy!
>>Someone might think "what the article said should hold at least once, at least for the first object instantiated." Not necessarily true either: you might call a static method of the class long before the first object is instantiated, so the static value may have been initialized, and changed, well before then.
What are the variable initialization rules in Java?
Member variables (both static and instance) are implicitly initialized by default:
Numeric primitives are default-initialized to zero (and char to '\u0000'), not null!
boolean variables are default-initialized to false, not zero, not null!
Only object references are default-initialized to null!
final variables must be initialized explicitly, in the declaration or (for instance final variables) in constructors.
Local variables are not initialized:
They are not default-initialized to anything, unless the programmer initializes them explicitly!
The code will not compile if you use them before assigning them a value!
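A sketch of these defaults; the field names are invented:

```java
// Sketch: member fields get default values; local variables get none.
public class Defaults {
    static int count;       // default 0
    static boolean flag;    // default false
    static Object ref;      // default null
    public static void main(String[] args) {
        System.out.println(count + " " + flag + " " + ref);   // 0 false null
        // int local;
        // System.out.println(local);  // would not compile: "might not have been initialized"
    }
}
```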
In Java, everything is pass-by-value
Some people will say incorrectly that objects are passed "by reference." In programming language design, the term pass by reference properly means that when an argument is passed to a function, the invoked function gets a reference to the original value, not a copy of its value. If the function modifies its parameter, the value in the calling code will be changed because the argument and parameter use the same slot in memory. The Java programming language does not pass objects by reference; it passes object references by value. Because two copies of the same reference refer to the same actual object, changes made through one reference variable are visible through the other. There is exactly one parameter passing mode -- pass by value -- and that helps keep things simple.
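A sketch of the difference, with invented names: reassigning the copied reference is invisible to the caller, while mutating the referenced object is visible:

```java
// Sketch: Java copies the reference, not the object.
public class PassDemo {
    static void reassign(StringBuilder sb) { sb = new StringBuilder("new"); }  // lost on return
    static void mutate(StringBuilder sb) { sb.append(" world"); }              // visible to caller
    public static void main(String[] args) {
        StringBuilder s = new StringBuilder("hello");
        reassign(s);
        System.out.println(s);   // hello
        mutate(s);
        System.out.println(s);   // hello world
    }
}
```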
Wednesday, June 20, 2007
How to properly dispose of (close) JDBC resources
There is only one right way to close JDBC resources properly, and it includes all of the following:
you must call the close method;
you must close the resources in the opposite order to that in which you opened them: ResultSets first, Statements second, and Connections last;
you must close all your resources in a finally block.
Example 1 - the correct way
Connection conn = null;
PreparedStatement ps = null;
ResultSet rs = null;
try {
    conn = // get connection
    ps = conn.prepareStatement(sqlString);
    rs = ps.executeQuery();
} catch (SQLException sqle) {
    // whatever exception handling is appropriate for your program for now...
    sqle.printStackTrace();
} finally {
    if (rs != null) {
        try {
            rs.close();
        } catch (SQLException closeRsEx) {
            // you should probably log this exception
        }
    }
    if (ps != null) {
        try {
            ps.close();
        } catch (SQLException closePsEx) {
            // you should probably log this exception
        }
    }
    if (conn != null) {
        try {
            conn.close();
        } catch (SQLException closeConnEx) {
            // you should probably log this exception
        }
    }
}
Please note that the point of the above example is not that all your JDBC code must go into one method; the point is to demonstrate all the steps that go into closing JDBC resources properly. Namely: we call close on all our JDBC objects, we do it in the inverse order from creation, and we close all our resources in a finally block to ensure that the closing code runs. You should close your resources as soon as you are done with them.
Please also note that Statements were mentioned but a PreparedStatement was used. The rules for closing JDBC resources regarding Statement apply just as much, if not more so, to the sub-interfaces of Statement (PreparedStatement and CallableStatement).
Common mistakes
Sometimes the best way to learn what is right is to study what goes wrong when mistakes are made. So in that light, here are a few examples of commonly seen mistakes in code dealing with JDBC resources.
One mistake is to set the Connection, Statement and ResultSet variables to null instead of calling their close methods. This seems to spring from the notion that the Java garbage collector will then "deal" with them and all will be fine. It is a mistake for two reasons. One, you may be leaving resources tied up locally for longer than necessary. Two, you may not be closing resources on the database properly, if at all, thus causing all of the database-server problems listed previously. To avoid this mistake you must call close on every instance of Connection, Statement and ResultSet you use.
A second mistake often seen is to close only some of the resources, such as the Connection but not the ResultSets or Statements. This one is based on the theory that closing the Connection is "good enough" and the JDBC driver will deal with the rest. It is a mistake for three reasons. One, you will probably leave resources, both local and on the database, tied up for longer than necessary (thus degrading performance). Two, it relies on the JDBC driver and database implicitly cleaning up for you, which may or may not happen correctly; it is better to call close explicitly than implicitly. Three, it may not work at all in some scenarios, for example with a connection pool where the connections are never in fact closed but recycled.
Replace "." using regexp
String.replaceAll takes a regexp as argument, and a single dot means any character in the regexp world. The solution to replacing a single dot is to escape the dot. A backslash is used for escaping in regular expressions, but we need to use two backslashes. One to escape the dot, and another one to escape the backslash (since we want to place it within a string).
String text = "..text with.. dots..";
String withoutDots = text.replaceAll("\\.", "");
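As an alternative sketch, java.util.regex.Pattern.quote can produce the escaped literal for you, avoiding hand-written backslashes:

```java
import java.util.regex.Pattern;

// Sketch: Pattern.quote(".") yields a regex that matches a literal dot.
public class QuoteDemo {
    public static void main(String[] args) {
        String text = "..text with.. dots..";
        String withoutDots = text.replaceAll(Pattern.quote("."), "");
        System.out.println(withoutDots);   // text with dots
    }
}
```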
Friday, June 8, 2007
Friday, June 1, 2007
Difference between an error and an exception?
An error is an irrecoverable condition occurring at runtime, such as an OutOfMemoryError. These are JVM errors, and you cannot repair them at runtime. Exceptions, by contrast, are conditions that occur because of bad input and the like: for example, a FileNotFoundException is thrown if the specified file does not exist, and a NullPointerException occurs if you try to use a null reference. In most cases it is possible to recover from an exception (probably by giving the user feedback so they can enter proper values).
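A small sketch of recovering from an exception; the path and return strings are invented:

```java
import java.io.*;

// Sketch: a FileNotFoundException can be caught and recovered from,
// e.g. by falling back to a default or re-prompting the user.
public class ErrDemo {
    static String readOrDefault(String path) {
        try {
            FileReader r = new FileReader(path);   // throws if the file does not exist
            r.close();
            return "opened";
        } catch (IOException e) {
            return "recovered";                    // recover from the exception
        }
    }
    public static void main(String[] args) {
        System.out.println(readOrDefault("/no/such/file"));   // recovered
    }
}
```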
Thursday, May 31, 2007
DB Connection pooling with J2EE
Manage access to shared, server-side resources for high performance
By Siva Visveswaran, JavaWorld.com, 10/27/00
Connection pooling is a technique that was pioneered by database vendors to allow multiple clients to share a cached set of connection objects that provide access to a database resource. In this article, I examine connection pooling in a J2EE environment for server-side resources such as databases, message queues, directories, and enterprise systems.
Why pool resource connections?
Consider the following code example where an EJB accesses a database resource using JDBC 1.0, without connection pooling:
...
import java.sql.*;
import javax.sql.*;
...
public class AccountBean implements EntityBean {
...
public Collection ejbFindByLastName(String lName) {
try {
String dbdriver = new InitialContext().lookup("java:comp/env/DBDRIVER").toString();
Class.forName(dbdriver).newInstance();
String dburl = new InitialContext().lookup("java:comp/env/DBURL").toString();
Connection conn = DriverManager.getConnection(dburl, "userID", "password");
...
conn.close();
}
...
}
Evidently, the main problem in this example is the repeated opening and closing of connections. Because entity beans are shared components, the database connections are acquired and released several times for every client request.
You can see from Figure 1 that acquiring and releasing database connections via the database manager, using JDBC 1.0, will impact the performance on the EJB layer. That impact is due to the overhead in creating and destroying those objects by the database resource manager process. Typically, the application server process takes around one to three seconds to establish a database connection (that includes communicating with the server, authenticating, and so forth), and that needs to be done for every client (EJB) request.

Figure 1. Connection management using JDBC 1.0
Connection pooling using service provider facilities
Now I will look at what connection pooling facilities are currently available for database and nondatabase resource types in the J2EE environment.
JDBC 2.0 Standard Extension API
The JDBC 2.0 Standard Extension API specifies that a database service provider can implement a pooling technique that can allow multiple connection objects from a resource pool to be shared transparently among the requesting clients. In that situation, a J2EE component can use connection objects without causing overheads on the database resource manager, since a pool manager creates the connection objects upfront, at startup. The application service provider implements the pool manager in its memory space and can optimize resource usage by dynamically altering the pool size, based on demand. That is illustrated in Figure 2.

Figure 2. Connection pooling using JDBC 2.0 Standard extension
Using the DataSource interface (JDBC 2.0) or the DriverManager (JDBC 1.0) interface, a J2EE component could get physical database connection objects. To obtain logical (pooled) connections, the J2EE component must use these JDBC 2.0 pooling manager interfaces:
* A javax.sql.ConnectionPoolDataSource interface that serves as a resource manager connection factory for pooled java.sql.Connection objects. Each database server vendor provides the implementation for that interface (for example, Oracle implements the oracle.jdbc.pool.OracleConnectionPoolDataSource class).
* A javax.sql.PooledConnection interface that encapsulates the physical connection to a database. Again, the database vendor provides the implementation.
An XA (X/Open specification) equivalent exists for each of those interfaces as well as for XA connections.
The following code example shows how an EJB application might access a database resource by using pooled connection objects (based on JDBC 2.0). The EJB component in this example uses a JNDI lookup to locate the database connection pool resource. The JNDI 1.2 Standard Extension API lets Java applications access objects in disparate directories and naming systems in a common way. Using the JNDI API, an application can look up a directory to locate any type of resource such as database servers, LDAP servers, print servers, message servers, file servers, and so forth. For a good overview of JNDI, refer to "The Java Naming and Directory Interface (JNDI): A More Open and Flexible Model."
Note: The actual code will vary depending on the database vendor implementation classes.
import java.sql.*;
import javax.sql.*;
// import here vendor specific JDBC drivers
public ProductPK ejbCreate() {
try {
// initialize JNDI lookup parameters
Context ctx = new InitialContext(parms);
...
ConnectionPoolDataSource cpds = (ConnectionPoolDataSource)ctx.lookup(cpsource);
...
// Following parms could all come from a JNDI look-up
// (these setters live on the vendor's implementation class, not the interface)
cpds.setDatabaseName("PTDB");
cpds.setUser("XYZ");
...
PooledConnection pc = cpds.getPooledConnection();
Connection conn = pc.getConnection();
...
// do business logic
conn.close();
}
...
}
The key difference between the above code (using JDBC 2.0) and using JDBC 1.0 is that a getConnection() gets an already open connection from the pool, and close() simply releases the connection object back to the pool. JDBC 2.0 drivers are available today from almost every database server vendor such as Oracle, DB2, Sybase, and Informix. And most application server vendors (IBM, BEA, iPlanet, IONA, etc.) today support JDBC 2.0.
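That key difference can be sketched without any JDBC at all. The toy pool below (entirely hypothetical, not the JDBC API) shows the mechanic: acquire() hands out an already-created resource, and release() puts it back instead of destroying it, which is exactly what the pooled close() does:

```java
import java.util.Collection;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy resource pool; a real pool manager also validates connections
// and grows or shrinks the pool based on demand.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(Collection<T> resources) {
        idle = new ArrayBlockingQueue<>(resources.size(), false, resources);
    }

    public T acquire() {
        T r = idle.poll();  // non-blocking here to keep the sketch small
        if (r == null) throw new IllegalStateException("pool exhausted");
        return r;
    }

    public void release(T resource) {
        idle.offer(resource);  // the "close" that recycles rather than destroys
    }

    public int available() {
        return idle.size();
    }

    public static void main(String[] args) {
        SimplePool<String> pool = new SimplePool<>(List.of("conn1", "conn2"));
        String c = pool.acquire();
        System.out.println("in use: " + c + ", idle: " + pool.available());
        pool.release(c);
        System.out.println("after release, idle: " + pool.available());
    }
}
```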
Source>>http://www.javaworld.com/javaworld/jw-10-2000/jw-1027-pool.html
Wednesday, May 30, 2007
joins, self join, inner join, outer join
A join combines records from two or more tables in a relational database. In the Structured Query Language (SQL), there are two types of joins: "inner" and "outer". Outer joins are subdivided further into left outer joins, right outer joins, and full outer joins.
Cross joins
Cross joins are aptly named, because if you try to perform one on a large database, your users and systems programmers will get very cross at you. A cross join merges two tables on every record in a geometric fashion – every record of one table is combined with every record from the other table. Two tables of 100 records each in a cross join will create a table of 10,000 (100 times 100) records. Imagine the result set with tables of 20,000 or 30,000 records!
Inner join
This is the default join method if nothing else is specified. An inner join essentially finds the intersection between the two tables. The join takes all the records from table A and finds the matching record(s) from table B. If no match is found, the record from A is not included in the results. If multiple results are found in B that match the predicate then one row will be returned for each (the values from A will be repeated).
Special care must be taken when joining tables on columns that can be NULL, since NULL values will never match each other.
Left outer join
A left outer join is very different from an inner join. Instead of limiting results to those in both tables, it limits results to those in the "left" table (A). This means that if the ON clause matches 0 records in B, a row in the result will still be returned—but with NULL values for each column from B.
Right outer join
A right outer join is much like a left outer join, except that the tables are reversed. Every record from the right side, B, will be returned, and NULL values will be returned for those that have no matching record in A.
Full outer join
Full outer joins are the combination of left and right outer joins. These joins will show records from both tables, and fill in NULLs for missing matches on either side.
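The join types above can be written out against a pair of hypothetical tables, employee(emp_id, name, dept_id) and department(dept_id, dept_name); note that syntax support varies by vendor (MySQL, for instance, lacks FULL OUTER JOIN):

```sql
-- Inner join: only employees whose dept_id matches a department
SELECT e.name, d.dept_name
FROM employee e
INNER JOIN department d ON e.dept_id = d.dept_id;

-- Left outer join: every employee; dept_name is NULL when there is no match
SELECT e.name, d.dept_name
FROM employee e
LEFT OUTER JOIN department d ON e.dept_id = d.dept_id;

-- Right outer join: every department, even those with no employees
SELECT e.name, d.dept_name
FROM employee e
RIGHT OUTER JOIN department d ON e.dept_id = d.dept_id;

-- Full outer join: all rows from both sides, NULLs filling the gaps
SELECT e.name, d.dept_name
FROM employee e
FULL OUTER JOIN department d ON e.dept_id = d.dept_id;

-- Cross join: every employee paired with every department (use with care)
SELECT e.name, d.dept_name
FROM employee e
CROSS JOIN department d;
```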
Normalization
Normalization is the process of efficiently organizing data in a database.
There are two goals of the normalization process:
1)Eliminating redundant data (for example, storing the same data in more than one table)
2)Ensuring data dependencies make sense (only storing related data in a table).
Both of these are worthy goals as they reduce the amount of space a database consumes and ensure that data is logically stored.
________________________________________
1NF Eliminate Repeating Groups - Make a separate table for each set of related attributes, and give each table a primary key.
2NF Eliminate Redundant Data - If an attribute depends on only part of a multi-valued key, remove it to a separate table.
3NF Eliminate Columns Not Dependent On Key - If attributes do not contribute to a description of the key, remove them to a separate table.
BCNF Boyce-Codd Normal Form - If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.
4NF Isolate Independent Multiple Relationships - No table may contain two or more 1:n or n:m relationships that are not directly related.
5NF Isolate Semantically Related Multiple Relationships - There may be practical constrains on information that justify separating logically related many-to-many relationships.
ONF Optimal Normal Form - a model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
DKNF Domain-Key Normal Form - a model free from all modification anomalies.
________________________________________
1. Eliminate Repeating Groups
In the original member list, each member name is followed by any databases that the member has experience with. Some might know many, and others might not know any. To answer the question, "Who knows DB2?" we need to perform an awkward scan of the list looking for references to DB2. This is inefficient and an extremely untidy way to store information.
Moving the known databases into a separate table helps a lot. Separating the repeating groups of databases from the member information results in first normal form. The MemberID in the database table matches the primary key in the member table, providing a foreign key for relating the two tables with a join operation. Now we can answer the question by looking in the database table for "DB2" and getting the list of members.

________________________________________
2. Eliminate Redundant Data
In the Database Table, the primary key is made up of the MemberID and the DatabaseID. This makes sense for other attributes like "Where Learned" and "Skill Level" attributes, since they will be different for every member/database combination. But the database name depends only on the DatabaseID. The same database name will appear redundantly every time its associated ID appears in the Database Table.
Suppose you want to reclassify a database - give it a different DatabaseID. The change has to be made for every member that lists that database! If you miss some, you'll have several members with the same database under different IDs. This is an update anomaly.
Or suppose the last member listing a particular database leaves the group. His records will be removed from the system, and the database will not be stored anywhere! This is a delete anomaly. To avoid these problems, we need second normal form.
To achieve this, separate the attributes depending on both parts of the key from those depending only on the DatabaseID. This results in two tables: "Database" which gives the name for each DatabaseID, and "MemberDatabase" which lists the databases for each member.
Now we can reclassify a database in a single operation: look up the DatabaseID in the "Database" table and change its name. The result will instantly be available throughout the application.

________________________________________
3. Eliminate Columns Not Dependent On Key
The Member table satisfies first normal form - it contains no repeating groups. It satisfies second normal form - since it doesn't have a multivalued key. But the key is MemberID, and the company name and location describe only a company, not a member. To achieve third normal form, they must be moved into a separate table. Since they describe a company, CompanyCode becomes the key of the new "Company" table.
The motivation for this is the same for second normal form: we want to avoid update and delete anomalies. For example, suppose no members from the IBM were currently stored in the database. With the previous design, there would be no record of its existence, even though 20 past members were from IBM!
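The third-normal-form layout the walkthrough arrives at might look like the following DDL (table and column names and types are guesses based on the example, not the original schema):

```sql
-- Company facts live only in COMPANY (3NF)
CREATE TABLE COMPANY (
    COMPANY_CODE  CHAR(5)     NOT NULL PRIMARY KEY,
    COMPANY_NAME  VARCHAR(40) NOT NULL,
    LOCATION      VARCHAR(40) NOT NULL);

CREATE TABLE MEMBER (
    MEMBER_ID     CHAR(9)     NOT NULL PRIMARY KEY,
    MEMBER_NAME   VARCHAR(40) NOT NULL,
    COMPANY_CODE  CHAR(5)     REFERENCES COMPANY (COMPANY_CODE));

-- Database name depends only on DATABASE_ID (2NF)
CREATE TABLE DATABASE_TBL (
    DATABASE_ID   CHAR(5)     NOT NULL PRIMARY KEY,
    DATABASE_NAME VARCHAR(30) NOT NULL);

-- Repeating groups moved out of the member list (1NF);
-- attributes here depend on the whole composite key
CREATE TABLE MEMBER_DATABASE (
    MEMBER_ID     CHAR(9)     NOT NULL REFERENCES MEMBER (MEMBER_ID),
    DATABASE_ID   CHAR(5)     NOT NULL REFERENCES DATABASE_TBL (DATABASE_ID),
    WHERE_LEARNED VARCHAR(30),
    SKILL_LEVEL   INTEGER,
    PRIMARY KEY (MEMBER_ID, DATABASE_ID));
```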
Define Primary key , unique key, candidate key, alternate key, composite key.
Both primary key and unique enforce uniqueness of the column on which they are defined.
But by default a primary key creates a clustered index on the column, whereas unique creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL (in SQL Server; other databases allow multiple NULLs in a unique column).
A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys.
A key formed by combining at least two or more columns is called composite key.
What is denormalization and when would you go for it?
Denormalization is the process of attempting to optimize the performance of a database by adding redundant data. As the name indicates, denormalization is the reverse process of normalization. It is the controlled introduction of redundancy into the database design. It helps improve query performance, as the number of joins can be reduced.
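A minimal sketch of the trade-off, using invented ORDERS and CUSTOMERS tables:

```sql
-- Normalized: getting a customer name with each order requires a join
SELECT o.ORDER_ID, c.CUSTOMER_NAME
FROM ORDERS o
JOIN CUSTOMERS c ON o.CUSTOMER_ID = c.CUSTOMER_ID;

-- Denormalized: CUSTOMER_NAME copied into ORDERS, trading redundancy
-- (and the duty to keep the copies in sync) for a join-free read
ALTER TABLE ORDERS ADD CUSTOMER_NAME VARCHAR(40);

SELECT ORDER_ID, CUSTOMER_NAME FROM ORDERS;
```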
Triggers
A trigger is a compiled SQL procedure in the database used to perform actions based on other actions that occur within the database.
A trigger is a form of stored procedure that is executed when a specified Data Manipulation Language (DML) action is performed on a table. The trigger can be executed before or after an INSERT, DELETE, or UPDATE.
Triggers can also be used to check data integrity before an INSERT, DELETE, or UPDATE. Triggers can roll back transactions, and they can modify data in one table and read from another table in another database.
Triggers, for the most part, are very good functions to use; they can, however, cause more I/O overhead. Triggers should not be used when a stored procedure or a program can accomplish the same results with less overhead.
A trigger can be created using the CREATE TRIGGER statement.
The Microsoft SQL Server syntax to create a trigger is as follows:
CREATE TRIGGER trigger_name
ON table_name
FOR { INSERT | UPDATE | DELETE [, ..]}
AS
sql_statements
[ RETURN ]
The basic syntax for Oracle is as follows:
CREATE [ OR REPLACE ] TRIGGER trigger_name
[ BEFORE | AFTER]
[ DELETE | INSERT | UPDATE]
ON [ user.table_name ]
[ FOR EACH ROW ]
[ WHEN condition ]
[ PL/SQL BLOCK ]
The following is an example trigger (the :NEW/:OLD correlation syntax shown is Oracle PL/SQL):
CREATE TRIGGER EMP_PAY_TRIG
AFTER UPDATE ON EMPLOYEE_PAY_TBL
FOR EACH ROW
BEGIN
INSERT INTO EMPLOYEE_PAY_HISTORY
(EMP_ID, PREV_PAY_RATE, PAY_RATE, DATE_LAST_RAISE,
TRANSACTION_TYPE)
VALUES
(:NEW.EMP_ID, :OLD.PAY_RATE, :NEW.PAY_RATE,
:NEW.DATE_LAST_RAISE, 'PAY CHANGE');
END;
The preceding example shows the creation of a trigger called EMP_PAY_TRIG. This trigger inserts a row into the EMPLOYEE_PAY_HISTORY table, reflecting the changes made every time a row of data is updated in the EMPLOYEE_PAY_TBL table.
Note:
The body of a trigger cannot be altered. You must either replace or re-create the trigger. Some implementations allow a trigger to be replaced (if the trigger with the same name already exists) as part of the CREATE TRIGGER statement.
The DROP TRIGGER Statement
A trigger can be dropped using the DROP TRIGGER statement. The syntax for dropping a trigger is as follows:
DROP TRIGGER TRIGGER_NAME
Source>>http://www.samspublishing.com/library/content.asp?b=STY_Sql_24hours&seqNum=184&rl=1
Integrity constraints
Integrity constraints are used to ensure accuracy and consistency of data in a relational database. Data integrity is handled in a relational database through the concept of referential integrity. There are many types of integrity constraints that play a role in referential integrity (RI).
Primary Key Constraint
Unique Constraint
Foreign Key Constraint
NOT NULL Constraint
Check Constraint
Primary Key Constraints
Primary key is the term used to identify one or more columns in a table that make a row of data unique. Although the primary key typically consists of one column in a table, more than one column can comprise the primary key. For example, either the employee's Social Security number or an assigned employee identification number is the logical primary key for an employee table. The objective is for every record to have a unique primary key or value for the employee's identification number. Because there is probably no need to have more than one record for each employee in an employee table, the employee identification number makes a logical primary key. The primary key is assigned at table creation.
The following example identifies the EMP_ID column as the PRIMARY KEY for the EMPLOYEE_TBL table:
CREATE TABLE EMPLOYEE_TBL
(EMP_ID CHAR(9) NOT NULL PRIMARY KEY,
EMP_NAME VARCHAR (40) NOT NULL,
EMP_ST_ADDR VARCHAR (20) NOT NULL,
EMP_CITY VARCHAR (15) NOT NULL,
EMP_ST CHAR(2) NOT NULL,
EMP_ZIP INTEGER(5) NOT NULL,
EMP_PHONE INTEGER(10) NULL,
EMP_PAGER INTEGER(10) NULL);
This method of defining a primary key is accomplished during table creation. The primary key in this case is an implied constraint. You can also specify a primary key explicitly as a constraint when setting up a table, as follows:
CREATE TABLE EMPLOYEE_TBL
(EMP_ID CHAR(9) NOT NULL,
EMP_NAME VARCHAR (40) NOT NULL,
EMP_ST_ADDR VARCHAR (20) NOT NULL,
EMP_CITY VARCHAR (15) NOT NULL,
EMP_ST CHAR(2) NOT NULL,
EMP_ZIP INTEGER(5) NOT NULL,
EMP_PHONE INTEGER(10) NULL,
EMP_PAGER INTEGER(10) NULL,
PRIMARY KEY (EMP_ID));
The primary key constraint in this example is defined after the column comma list in the CREATE TABLE statement.
A primary key that consists of more than one column can be defined by either of the following methods:
CREATE TABLE PRODUCTS
(PROD_ID VARCHAR2(10) NOT NULL,
VEND_ID VARCHAR2(10) NOT NULL,
PRODUCT VARCHAR2(30) NOT NULL,
COST NUMBER(8,2) NOT NULL,
PRIMARY KEY (PROD_ID, VEND_ID));
Or
ALTER TABLE PRODUCTS
ADD CONSTRAINT PRODUCTS_PK PRIMARY KEY (PROD_ID, VEND_ID);
Unique Constraints
A unique column constraint in a table is similar to a primary key in that the value in that column for every row of data in the table must have a unique value. While a primary key constraint is placed on one column, you can place a unique constraint on another column even though it is not actually for use as the primary key.
Study the following example:
CREATE TABLE EMPLOYEE_TBL
(EMP_ID CHAR(9) NOT NULL PRIMARY KEY,
EMP_NAME VARCHAR (40) NOT NULL,
EMP_ST_ADDR VARCHAR (20) NOT NULL,
EMP_CITY VARCHAR (15) NOT NULL,
EMP_ST CHAR(2) NOT NULL,
EMP_ZIP INTEGER(5) NOT NULL,
EMP_PHONE INTEGER(10) NULL UNIQUE,
EMP_PAGER INTEGER(10) NULL);
The primary key in this example is EMP_ID, meaning that the employee identification number is the column that is used to ensure that every record in the table is unique. The primary key is a column that is normally referenced in queries, particularly to join tables.
The column EMP_PHONE has been designated as UNIQUE, meaning that no two employees can have the same telephone number. There is not a lot of difference between the two, except that a table can have only one primary key but several unique constraints, and the primary key is the column normally used to join related tables.
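A minimal sketch of the UNIQUE behavior, using Python's sqlite3 module (column names follow the example above; the exception type is SQLite's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee_tbl (
        emp_id    TEXT PRIMARY KEY,
        emp_phone TEXT UNIQUE
    )
""")
conn.execute("INSERT INTO employee_tbl VALUES ('E1', '3175551234')")
# A second employee with the same phone number violates the UNIQUE constraint.
try:
    conn.execute("INSERT INTO employee_tbl VALUES ('E2', '3175551234')")
    phone_reused = True
except sqlite3.IntegrityError:
    phone_reused = False
print(phone_reused)  # False
```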
Foreign Key Constraints
A foreign key is a column in a child table that references a primary key in the parent table. A foreign key constraint is the main mechanism used to enforce referential integrity between tables in a relational database. A column defined as a foreign key is used to reference a column defined as a primary key in another table.
Study the creation of the foreign key in the following example:
CREATE TABLE EMPLOYEE_PAY_TBL
(EMP_ID CHAR(9) NOT NULL,
POSITION VARCHAR2(15) NOT NULL,
DATE_HIRE DATE NULL,
PAY_RATE NUMBER(4,2) NOT NULL,
DATE_LAST_RAISE DATE NULL,
CONSTRAINT EMP_ID_FK FOREIGN KEY (EMP_ID) REFERENCES EMPLOYEE_TBL (EMP_ID));
The EMP_ID column in this example has been designated as the foreign key for the EMPLOYEE_PAY_TBL table. This foreign key, as you can see, references the EMP_ID column in the EMPLOYEE_TBL table.
This foreign key ensures that for every EMP_ID in the EMPLOYEE_PAY_TBL, there is a corresponding EMP_ID in the EMPLOYEE_TBL. This is called a parent/child relationship. The parent table is the EMPLOYEE_TBL table, and the child table is the EMPLOYEE_PAY_TBL table.
Study Figure 3.2 for a better understanding of the parent table/child table relationship.

In this figure, the EMP_ID column in the child table references the EMP_ID column in the parent table. In order for a value to be inserted for EMP_ID in the child table, there must first exist a value for EMP_ID in the parent table. Likewise, for a value to be removed for EMP_ID in the parent table, all corresponding values for EMP_ID must first be removed from the child table. This is how referential integrity works.
A foreign key can be added to a table using the ALTER TABLE command, as shown in the following example:
ALTER TABLE EMPLOYEE_PAY_TBL
ADD CONSTRAINT ID_FK FOREIGN KEY (EMP_ID)
REFERENCES EMPLOYEE_TBL (EMP_ID);
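Both directions of the referential-integrity check described above can be sketched with sqlite3 (note that SQLite enforces foreign keys only when the PRAGMA is enabled; table names mirror the examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE employee_tbl (emp_id TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE employee_pay_tbl (
        emp_id   TEXT NOT NULL REFERENCES employee_tbl (emp_id),
        pay_rate REAL NOT NULL
    )
""")
conn.execute("INSERT INTO employee_tbl VALUES ('E1')")
conn.execute("INSERT INTO employee_pay_tbl VALUES ('E1', 15.00)")  # parent exists: OK
# Inserting a child row with no matching parent fails.
try:
    conn.execute("INSERT INTO employee_pay_tbl VALUES ('E9', 15.00)")
    orphan_allowed = True
except sqlite3.IntegrityError:
    orphan_allowed = False
# Deleting a parent row that child rows still reference also fails.
try:
    conn.execute("DELETE FROM employee_tbl WHERE emp_id = 'E1'")
    parent_delete_allowed = True
except sqlite3.IntegrityError:
    parent_delete_allowed = False
print(orphan_allowed, parent_delete_allowed)  # False False
```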
NOT NULL Constraints
Previous examples use the keywords NULL and NOT NULL listed on the same line as each column and after the data type. NOT NULL is a constraint that you can place on a table's column. This constraint disallows the entrance of NULL values into a column; in other words, data is required in a NOT NULL column for each row of data in the table. NULL is generally the default for a column if NOT NULL is not specified, allowing NULL values in a column.
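The NOT NULL rule can be demonstrated the same way (sqlite3 again; the nullable pager column accepts NULL, the NOT NULL id column does not):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee_tbl (emp_id TEXT NOT NULL, emp_pager TEXT NULL)")
conn.execute("INSERT INTO employee_tbl VALUES ('E1', NULL)")  # NULL OK in nullable column
# A NULL in the NOT NULL column is rejected.
try:
    conn.execute("INSERT INTO employee_tbl VALUES (NULL, '123')")
    null_id_allowed = True
except sqlite3.IntegrityError:
    null_id_allowed = False
print(null_id_allowed)  # False
```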
Using Check Constraints
Check (CHK) constraints can be utilized to check the validity of data entered into particular table columns. Check constraints are used to provide back-end database edits, although edits are commonly found in the front-end application as well. General edits restrict values that can be entered into columns or objects, whether within the database itself or on a front-end application. The check constraint is a way of providing another protective layer for the data.
The following example illustrates the use of a check constraint:
CREATE TABLE EMPLOYEE_TBL
(EMP_ID CHAR(9) NOT NULL,
EMP_NAME VARCHAR2(40) NOT NULL,
EMP_ST_ADDR VARCHAR2(20) NOT NULL,
EMP_CITY VARCHAR2(15) NOT NULL,
EMP_ST CHAR(2) NOT NULL,
EMP_ZIP NUMBER(5) NOT NULL,
EMP_PHONE NUMBER(10) NULL,
EMP_PAGER NUMBER(10) NULL,
PRIMARY KEY (EMP_ID),
CONSTRAINT CHK_EMP_ZIP CHECK ( EMP_ZIP = '46234'));
The check constraint in this table has been placed on the EMP_ZIP column, ensuring that all employees entered into this table have a ZIP code of '46234'. Perhaps that is a little restricting. Nevertheless, you can see how it works.
If you wanted to use a check constraint to verify that the ZIP code is within a list of values, your constraint definition could look like the following:
CONSTRAINT CHK_EMP_ZIP CHECK ( EMP_ZIP in ('46234','46227','46745') );
If there is a minimum pay rate that can be designated for an employee, you could have a constraint that looks like the following:
CREATE TABLE EMPLOYEE_PAY_TBL
(EMP_ID CHAR(9) NOT NULL,
POSITION VARCHAR2(15) NOT NULL,
DATE_HIRE DATE NULL,
PAY_RATE NUMBER(4,2) NOT NULL,
DATE_LAST_RAISE DATE NULL,
CONSTRAINT EMP_ID_FK FOREIGN KEY (EMP_ID) REFERENCES EMPLOYEE_TBL (EMP_ID),
CONSTRAINT CHK_PAY CHECK ( PAY_RATE > 12.50 ) );
In this example, any employee entered in this table must be paid more than $12.50 an hour. You can use just about any condition in a check constraint, as you can with a SQL query.
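The PAY_RATE check can be exercised with sqlite3 (the table is trimmed to the relevant columns; SQLite raises IntegrityError on a CHECK violation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee_pay_tbl (
        emp_id   TEXT NOT NULL,
        pay_rate REAL NOT NULL,
        CONSTRAINT chk_pay CHECK (pay_rate > 12.50)
    )
""")
conn.execute("INSERT INTO employee_pay_tbl VALUES ('E1', 15.00)")  # passes the check
# A pay rate at or below the minimum is rejected by the CHECK constraint.
try:
    conn.execute("INSERT INTO employee_pay_tbl VALUES ('E2', 10.00)")
    low_pay_allowed = True
except sqlite3.IntegrityError:
    low_pay_allowed = False
print(low_pay_allowed)  # False
```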
Dropping Constraints
Any constraint that you have defined can be dropped using the ALTER TABLE command with the DROP CONSTRAINT option. For example, to drop the primary key constraint in the EMPLOYEES table, you can use the following command:
ALTER TABLE EMPLOYEES DROP CONSTRAINT EMPLOYEES_PK;
Output:
Table altered.
Some implementations may provide shortcuts for dropping certain constraints. For example, to drop the primary key constraint for a table in Oracle, you can use the following command:
ALTER TABLE EMPLOYEES DROP PRIMARY KEY;
Output:
Table altered.
Note
Some implementations allow you to disable constraints. Instead of permanently dropping a constraint from the database, you may want to temporarily disable the constraint, and then enable it later.
Source>>http://www.samspublishing.com/library/content.asp?b=STY_Sql_24hours&seqNum=27&rl=1
Syntax to create user-defined functions in SQL Server 2000
Syntax:
CREATE FUNCTION [ owner_name. ] function_name
(
[ { @parameter_name [ AS ] data_type }[ ,...n ] ]
)
RETURNS data_type
[ AS ]
BEGIN
function_body
RETURN scalar_expression
END
Example:
CREATE FUNCTION sampleFunc(@num1 INT,@num2 INT)
RETURNS INT
AS
BEGIN
RETURN (@num1 * @num2)
END
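The T-SQL syntax above is specific to SQL Server, but the underlying idea, a scalar function you define once and then call from SQL, can be sketched with sqlite3's create_function (the function name mirrors sampleFunc from the example):

```python
import sqlite3

def sample_func(num1, num2):
    """Scalar function body, like the RETURN (@num1 * @num2) above."""
    return num1 * num2

conn = sqlite3.connect(":memory:")
# Register the Python function as a SQL scalar function taking 2 arguments.
conn.create_function("sampleFunc", 2, sample_func)
result = conn.execute("SELECT sampleFunc(6, 7)").fetchone()[0]
print(result)  # 42
```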
About CallableStatement Class:
A CallableStatement object provides a way to call stored procedures in a standard way for all DBMSs.
A stored procedure is stored in a database; the call to the stored procedure is what a CallableStatement object contains. This call is written in an escape syntax that may take one of two forms: one form with a result parameter, and the other without one.
A result parameter, a kind of OUT parameter, is the return value for the stored procedure. Both forms may have a variable number of parameters used for input (IN parameters), output (OUT parameters), or both (INOUT parameters). A question mark serves as a placeholder for a parameter.
The syntax for invoking a stored procedure in JDBC is shown below. Note that the square brackets indicate that what is between them is optional; they are not themselves part of the syntax.
{call procedure_name[(?, ?, ...)]}
The syntax for a procedure that returns a result parameter is:
{? = call procedure_name[(?, ?, ...)]}
The syntax for a stored procedure with no parameters would look like this:
{call procedure_name}
Example:
String command = "{? = call TestingStoredProcedure(?, ?, ?)}";
CallableStatement cstmt = conn.prepareCall (command);
// Register arg1 OUT parameter
cstmt.registerOutParameter(1, Types.INTEGER);
// Pass in value for IN parameter
cstmt.setInt(2, 4);
// Register arg3 OUT parameter
cstmt.registerOutParameter(3, Types.INTEGER);
// Execute TestingStoredProcedure
ResultSet rs = cstmt.executeQuery();
// executeQuery returns values via a resultSet
while (rs.next())
{
// get value returned by TestingStoredProcedure
boolean b = rs.getBoolean(1);
System.out.println("return value from TestingStoredProcedure= " + b);
}
// Retrieve OUT parameters from TestingStoredProcedure
int i = cstmt.getInt(1);
System.out.println("arg1 OUT parameter value = " + i);
int k = cstmt.getInt(3);
System.out.println("arg3 OUT parameter value = " + k);
The best reasons for using PreparedStatements are these:
(1) Executing the same query multiple times in a loop, binding different parameter values each time, and
(2) Using the setDate()/setString() methods to escape dates and strings properly, in a database-independent way.
(3) SQL injection attacks on a system are virtually impossible when using Prepared Statements.
SQL injection:
Suppose your web application asks the user for their ID number. They type it into a box and click submit. This ends up calling the following method:
public List processUserID(String idNumber)
throws SQLException
{
String query = "SELECT role FROM roles WHERE id = '" + idNumber + "'";
Statement statement = this.connection.createStatement();
ResultSet rs = statement.executeQuery(query);
// ... process results ...
}
If out of a sense of informed malice, your user enters the following text into the ID number field:
12345'; TRUNCATE roles; SELECT '
They may be able to drop the contents of your roles table, because the string that ends up in "query" will be:
SELECT role FROM roles WHERE id = '12345'; TRUNCATE roles; SELECT ''
They have successfully injected SQL into your application that wasn't there before, hence the name. The specifics of this depend to some extent on your database, but there's some pretty portable SQL you can use to achieve this.
On the other hand, if you use a prepared statement:
public List processUserID(String idNumber)
throws SQLException
{
String query = "SELECT role FROM roles WHERE id = ?";
PreparedStatement statement = this.connection.prepareStatement(query);
statement.setString(1, idNumber);
ResultSet rs = statement.executeQuery();
// ... process results ...
}
The database is told to compile the SQL in query first. The parameter is then submitted, so whatever you put into it will never get executed as SQL (well, OK, it's possible if you're passing it as a parameter to a stored proc, but it's very unlikely). The query will just return no matching records, because there won't be any users with id "12345'; TRUNCATE roles; SELECT '".
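Both behaviors described above can be reproduced with sqlite3 (table and value names are illustrative; executescript stands in for a driver that runs multiple statements, which is what makes the concatenated version exploitable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE roles (id TEXT, role TEXT)")
conn.execute("INSERT INTO roles VALUES ('12345', 'admin')")

malicious = "12345'; DELETE FROM roles; SELECT '"

# Unsafe: string concatenation splices the attacker's text into the SQL itself.
unsafe_query = "SELECT role FROM roles WHERE id = '" + malicious + "'"
conn.executescript(unsafe_query)  # runs all three statements, including DELETE
rows_after_attack = conn.execute("SELECT COUNT(*) FROM roles").fetchone()[0]

# Safe: a placeholder keeps the attacker's text as a plain value, never as SQL.
conn.execute("INSERT INTO roles VALUES ('12345', 'admin')")
rows = conn.execute("SELECT role FROM roles WHERE id = ?", (malicious,)).fetchall()
print(rows_after_attack, rows)  # 0 []
```

The concatenated query empties the table; the parameterized query simply finds no user whose id equals the whole malicious string.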
Source>>http://forum.java.sun.com/thread.jspa?tstart=0&forumID=48&threadID=538747&trange=15
Question: What is the difference between a Statement and a PreparedStatement?
Short answer:
1. The PreparedStatement is a slightly more powerful version of a Statement, and should always be at least as quick and easy to handle as a Statement.
2. The PreparedStatement may be parametrized.
Longer answer: Most relational databases handle a JDBC / SQL query in four steps:
1. Parse the incoming SQL query
2. Compile the SQL query
3. Plan/optimize the data acquisition path
4. Execute the optimized query / acquire and return data
A Statement will always proceed through the four steps above for each SQL query sent to the database. A PreparedStatement pre-executes steps (1) - (3) in the execution process above. Thus, when creating a PreparedStatement some pre-optimization is performed immediately. The effect is to lessen the load on the database engine at execution time.
Code samples
Statement example:
// Assume a database connection, conn.
Statement stmnt = null;
ResultSet rs = null;
try
{
// Create the Statement
stmnt = conn.createStatement();
// Execute the query to obtain the ResultSet
rs = stmnt.executeQuery("select * from aTable");
}
catch(Exception ex)
{
System.err.println("Database exception: " + ex);
}
PreparedStatement example:
// Assume a database connection, conn.
PreparedStatement stmnt = null;
ResultSet rs = null;
try
{
// Create the PreparedStatement
stmnt = conn.prepareStatement("select * from aTable");
// Execute the query to obtain the ResultSet
rs = stmnt.executeQuery();
}
catch(Exception ex)
{
System.err.println("Database exception: " + ex);
}
Another advantage of the PreparedStatement class is the ability to create an incomplete query and supply parameter values at execution time. This type of query is well suited for filtering queries which may differ in parameter value only:
SELECT firstName FROM employees WHERE salary > 50
SELECT firstName FROM employees WHERE salary > 200
To create a parametrized prepared statement, use the following syntax:
// Assume a database connection, conn.
PreparedStatement stmnt = null;
ResultSet rs = null;
try
{
// Create the PreparedStatement, leaving a '?'
// to indicate placement of a parameter.
stmnt = conn.prepareStatement(
"SELECT firstName FROM employees WHERE salary > ?");
// Complete the statement
stmnt.setInt(1, 200);
// Execute the query to obtain the ResultSet
rs = stmnt.executeQuery();
}
catch(Exception ex)
{
System.err.println("Database exception: " + ex);
}
How does a PreparedStatement increase performance?
Using a prepared statement is less expensive because it pre-executes the following steps:
Step 1: Parse the incoming SQL query
Step 2: Compile the SQL query
Step 3: Plan/optimize the data acquisition path
Where are the pre-executed steps stored, i.e., in the application server or in the database server?
Source>>http://jguru.com/faq/view.jsp?EID=693
Monday, April 23, 2007
SingleThreadModel (Controlling Concurrent Access to Shared Resources)
Controlling Concurrent Access to Shared Resources
In a multithreaded server, it is possible for shared resources to be accessed concurrently. Besides scope object attributes, shared resources include in-memory data such as instance or class variables, and external objects such as files, database connections, and network connections. Concurrent access can arise in several situations:
* Multiple Web components accessing objects stored in the Web context
* Multiple Web components accessing objects stored in a session
* Multiple threads within a Web component accessing instance variables.
A Web container will typically create a thread to handle each request. If you want to ensure that a servlet instance handles only one request at a time, a servlet can implement the SingleThreadModel interface. If a servlet implements this interface, you are guaranteed that no two threads will execute concurrently in the servlet's service method. A Web container can implement this guarantee by synchronizing access to a single instance of the servlet, or by maintaining a pool of Web component instances and dispatching each new request to a free instance. This interface does not prevent synchronization problems that result from Web components accessing shared resources such as static class variables or external objects.
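The "synchronize access to a single instance" strategy can be sketched in Python with a lock: one shared object plays the role of the servlet instance, and the lock guarantees no two threads are inside the handler at once (names are illustrative):

```python
import threading

class Handler:
    """Shared instance state, like a servlet's instance variable."""
    def __init__(self):
        self.count = 0
        self.lock = threading.Lock()

    def handle_request(self):
        # Serialize access to the instance, analogous to dispatching
        # one request at a time to a SingleThreadModel servlet.
        with self.lock:
            current = self.count      # read shared state
            self.count = current + 1  # write it back without interleaving

handler = Handler()
threads = [threading.Thread(target=handler.handle_request) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(handler.count)  # 10
```

Without the lock, the read-then-write on shared state could interleave; with it, the ten "requests" update the counter one at a time.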
Source>>http://java.sun.com/j2ee/tutorial/1_3-fcs/doc/Servlets5.html#73895
Nice interview
A Bad day in Java Land
At 8:24 AM on Apr 2, 2007, Akshay wrote:
It's interview season here and I happened to have one a while ago.
Bad experience, partly because I was not sure, partly because (I believe) the interviewer didn't know what he was talking about. (He did sound sure, though.)
So let me outline what transpired.
He began by reading out parts of my profile, asked me a couple of personal questions, and bang:
# what's the difference between tomcat, weblogic and websphere?
easy question- back I replied.
a murmur of approval
# are servlets thread-safe?
I said no, a servlet instance is not inherently thread-safe, but the meaning of thread-safety depends on the scenario.
(What I had in mind here was: yes, there might be concurrent access problems with servlet instances, but not if they have no instance variables.)
so he asked me
# how can we make them thread safe?
I replied you could implement the SingleThreadModel interface but that would not solve concurrency problems.
and he again asked me the same question thinking I did not know the meaning of thread-safety. so I told him that I knew the meaning of thread-safety, but as I said it was a bit contextual.
murmurs of disapproval
he asks me
# is jsp thread safe?
this time, I'm smarter, I say- no, they're not thread safe by default.
he asks no?
I say no.
how sure are you?
100%
he asks why not
I tell him every jsp is going to be translated into a servlet and a servlet is not thread safe by default unless you implement the SingleThreadModel.
and he asks me how do I implement the SingleThreadModel?
so I say you just need to set the isThreadSafe attribute of the page directive to true.
So he asks me, will doing so, make the generated servlet class implement SingleThreadModel?
so I say yes.
And he asks, if you don't have that attribute, it will not implement that interface?
I say no.
(I'm getting irritated now, how many times do I need to say it)
So he says lets revisit the question again.
Is a jsp thread safe.
I say no, the isThreadSafe attribute has a default value of false.
now he asks me
# suppose there are ten concurrent requests to a servlet s1 that implements SingleThreadModel and s2 that doesn't, how many threads and processes are going to be created in each scenario?
(now I really don't understand this thing about processes, so I tell him about my understanding) I say- in either case, assuming there is only one instance of each servlet, the container creates one thread for each request irrespective of what kind of servlet instance it is. So in either case, there are only 10 threads, only the way they're scheduled to execute will differ. Because in the single-threaded case, the execution of one thread would block the other threads and they would execute only one after the other.
and then he asks me a strange question (at least it sounded strange to me)
he asked
# what about the servlet? is that a separate thread or a process? (And I thought WTH)
I told him it was neither, it's an object :-"
(Please let me know if I missed something here)
I show a little irritation
and he switches to JDBC this time.
and I don't know where he got this fancy for classloaders
he starts shooting-
# How do you register a driver?
I say when you do a Class.forName(), it loads the class and registers the driver class.
I said you could alternatively use the registerDriver method too...
he says ok (after a long time)
and then he asks
# what happens when you do Class.forName("java.lang.String")
I say, it loads the String class.
# are you sure?
Yes
# Why does a driver get registered when I do a Class.forName() and not a String?
(now, I'm not too sure, but I take a calculated guess)
I say its not just loading that occurs, the class also gets linked and initialized, and in the initialization process, the driver registers itself.
# what happens in the String case?
It probably sets up the pool, I'm not sure.
he goes on
# I have 4 Class.forName() calls each loading a different DB driver and then I call DriverManager.getConnection(). which of these drivers will it use?
I said the getConnection method takes a dbURL string. It resolves the driver to use from the protocol and subprotocol used in the dbURL.
# How does it resolve?
(a snappy)I don't know.
and then he goes to ask me about problems with the Singleton design pattern. I say the instance is subject to concurrency problems.
so he asks how do you solve it.
I say you could make the getInstance method synchronized.
He asks are there problems with this approach?
I say- yes, it would be a bottleneck
How do you solve it?
(a tired) I don't know.
and then he goes on to ask me some simple questions on EJB's isIdentical method, and I easily mess it up and thats the end.
That was one tough nut to crack. Looking forward to your thoughts and opinions and most importantly ANSWERS.
:)
Source>> http://www.javalobby.org/java/forums/t92806.html
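For the record, the final singleton question the poster couldn't answer has a well-known resolution: the initialization-on-demand holder idiom avoids synchronizing every getInstance() call by leaning on the JVM's guarantee that a class is initialized exactly once, thread-safely. A sketch (the class name ConfigService is illustrative):

```java
// Lazy, thread-safe singleton without synchronizing every getInstance() call.
// The JVM initializes a class at most once, under an internal lock, so the
// nested Holder class acts as a lock-free lazy initializer.
public class ConfigService {
    private ConfigService() {
        // private constructor prevents outside instantiation
    }

    // Holder is not loaded/initialized until getInstance() first touches it
    private static class Holder {
        static final ConfigService INSTANCE = new ConfigService();
    }

    public static ConfigService getInstance() {
        return Holder.INSTANCE; // no synchronization needed on the hot path
    }
}
```

Every call returns the same instance, and the synchronized bottleneck disappears entirely; the laziness is preserved because class initialization is deferred until first use.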
Wednesday, February 28, 2007
DB: Configure a Server to Listen on a Specific TCP Port
How to: Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager)
For Microsoft SQL Server 2005:
To assign a TCP/IP port number to the SQL Server Database Engine 2005:
Open SQL Server Configuration Manager from Start->Programs->Microsoft SQL Server 2005->Configuration Tools->SQL Server Configuration Manager.
In SQL Server Configuration Manager, in the console pane, expand SQL Server 2005 Network Configuration, expand Protocols for <instance name>, and then double-click TCP/IP.
In the TCP/IP Properties dialog box, on the IP Addresses tab, several IP addresses appear, in the format IP1, IP2, up to IPAll. One of these is for the IP address of the loopback adapter, 127.0.0.1. Additional IP addresses appear for each IP address on the computer. Right-click each address, and then click Properties to identify the IP address that you wish to configure.
If the TCP Dynamic Ports dialog box contains 0, indicating the Database Engine is listening on dynamic ports, delete the 0.
In the IPn Properties area box, in the TCP Port box, type the port number you wish this IP address to listen on, and then click OK.
In the console pane, click SQL Server 2005 Services.
In the details pane, right-click SQL Server (<instance name>) and then click Restart, to stop and restart SQL Server.
Source:
http://msdn2.microsoft.com/en-us/library/ms177440.aspx
For Microsoft SQL Server 2000:
To assign a TCP/IP port number to the SQL Server Database Engine 2000:
Open Server Network Utility from Start->Programs->Microsoft SQL Server->Server Network Utility.
In Server Network Utility, in the console pane, select TCP/IP and move it to the Enabled Protocols box. In the Enabled Protocols box select TCP/IP and click the Properties button. Now set the port number to 1433 or another port of your choice. Click the OK button and restart the server.
Wednesday, February 21, 2007
Java: Byte Streams
Programs use byte streams to perform input and output of 8-bit bytes. All byte stream classes are descended from InputStream and OutputStream. There are many byte stream classes, e.g. AudioInputStream, ByteArrayInputStream, FilterInputStream, ObjectInputStream, PipedInputStream, SequenceInputStream, StringBufferInputStream.
Using file byte streams:
FileInputStream in = new FileInputStream("xanadu.txt");
FileOutputStream out = new FileOutputStream("outagain.txt");
FileInputStream: Constructor summary
FileInputStream(File file): Creates a FileInputStream by opening a connection to an actual file, the file named by the File object file in the file system.
FileInputStream(FileDescriptor fdObj): Creates a FileInputStream by using the file descriptor fdObj, which represents an existing connection to an actual file in the file system.
FileInputStream(String name): Creates a FileInputStream by opening a connection to an actual file, the file named by the path name name in the file system.
FileInputStream: Method summary
int available(): Returns an estimate of the number of remaining bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream.
void close(): Closes this file input stream and releases any system resources associated with the stream.
protected void finalize(): Ensures that the close method of this file input stream is called when there are no more references to it.
FileChannel getChannel(): Returns the unique FileChannel object associated with this file input stream.
FileDescriptor getFD(): Returns the FileDescriptor object that represents the connection to the actual file in the file system being used by this FileInputStream.
int read(): Reads a byte of data from this input stream.
int read(byte[] b): Reads up to b.length bytes of data from this input stream into an array of bytes.
int read(byte[] b, int off, int len): Reads up to len bytes of data from this input stream into an array of bytes.
long skip(long n): Skips over and discards n bytes of data from the input stream.
FileOutputStream: Constructor summary
FileOutputStream(File file): Creates a file output stream to write to the file represented by the specified File object.
FileOutputStream(File file, boolean append): Creates a file output stream to write to the file represented by the specified File object; if append is true, bytes are written to the end of the file rather than the beginning.
FileOutputStream(FileDescriptor fdObj): Creates an output file stream to write to the specified file descriptor, which represents an existing connection to an actual file in the file system.
FileOutputStream(String name): Creates an output file stream to write to the file with the specified name.
FileOutputStream(String name, boolean append): Creates an output file stream to write to the file with the specified name; if append is true, bytes are written to the end of the file rather than the beginning.
FileOutputStream: Method summary
void close(): Closes this file output stream and releases any system resources associated with this stream.
protected void finalize(): Cleans up the connection to the file, and ensures that the close method of this file output stream is called when there are no more references to this stream.
FileChannel getChannel(): Returns the unique FileChannel object associated with this file output stream.
FileDescriptor getFD(): Returns the file descriptor associated with this stream.
void write(byte[] b): Writes b.length bytes from the specified byte array to this file output stream.
void write(byte[] b, int off, int len): Writes len bytes from the specified byte array starting at offset off to this file output stream.
void write(int b): Writes the specified byte to this file output stream.
Source:
http://java.sun.com/docs/books/tutorial/essential/io/bytestreams.html
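The constructors and read/write methods above combine into the tutorial's byte-by-byte copy loop; a minimal sketch along those lines (the class name CopyBytes follows the tutorial's example; error handling kept minimal):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class CopyBytes {
    // Copies one file to another, one byte at a time.
    public static void copy(String from, String to) throws IOException {
        FileInputStream in = null;
        FileOutputStream out = null;
        try {
            in = new FileInputStream(from);
            out = new FileOutputStream(to);
            int c;
            // read() returns -1 at end of stream, otherwise the byte read
            while ((c = in.read()) != -1) {
                out.write(c);
            }
        } finally {
            // the finally block guarantees both streams are closed
            if (in != null) in.close();
            if (out != null) out.close();
        }
    }
}
```

Note that read() returns an int, not a byte, precisely so that -1 can signal end of stream without colliding with a legitimate byte value.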
Ejb: Transaction Attributes and isolation levels
More clear about Transactions attributes:
The Enterprise JavaBeans model supports six different transaction rules:
TX_BEAN_MANAGED. The TX_BEAN_MANAGED setting indicates that the enterprise bean manually manages its own transaction control. EJB supports manual transaction demarcation using the Java Transaction API. This is very tricky and should not be attempted without a really good reason.
TX_NOT_SUPPORTED. The TX_NOT_SUPPORTED setting indicates that the enterprise bean cannot execute within the context of a transaction. If a client (i.e., whatever called the method-either a remote client or another enterprise bean) has a transaction when it calls the enterprise bean, the container suspends the transaction for the duration of the method call.
TX_SUPPORTS. The TX_SUPPORTS setting indicates that the enterprise bean can run with or without a transaction context. If a client has a transaction when it calls the enterprise bean, the method will join the client's transaction context. If the client does not have a transaction, the method will run without a transaction.
TX_REQUIRED. The TX_REQUIRED setting indicates that the enterprise bean must execute within the context of a transaction. If a client has a transaction when it calls the enterprise bean, the method will join the client's transaction context. If the client does not have a transaction, the container automatically starts a new transaction for the method.
TX_REQUIRES_NEW. The TX_REQUIRES_NEW setting indicates that the enterprise bean must execute within the context of a new transaction. The container always starts a new transaction for the method. If the client has a transaction when it calls the enterprise bean, the container suspends the client's transaction for the duration of the method call.
TX_MANDATORY. The TX_MANDATORY setting indicates that the enterprise bean must always execute within the context of the client's transaction. If the client does not have a transaction when it calls the enterprise bean, the container throws the TransactionRequiredException.
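These TX_* names come from the EJB 1.0 deployment API; in an EJB 1.1-style XML deployment descriptor the same policy is declared per method with a trans-attribute element. A hypothetical fragment (the bean name AccountBean is illustrative):

```xml
<!-- ejb-jar.xml fragment: declares that every method of AccountBean
     must run in a transaction (the TX_REQUIRED rule in the older naming) -->
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>AccountBean</ejb-name>
      <method-name>*</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
</assembly-descriptor>
```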
Jsp: Output comment and Hidden Comment
Output Comment:
A comment that is sent to the client in the viewable page source. The JSP engine handles an output comment as uninterpreted HTML text, returning the comment in the HTML output sent to the client. You can see the comment by viewing the page source from your Web browser.
Hidden Comment:
A comment that documents the JSP page but is not sent to the client. The JSP engine ignores a hidden comment, and does not process any code within hidden comment tags. A hidden comment is not sent to the client, either in the displayed JSP page or the HTML page source. The hidden comment is useful when you want to hide or "comment out" part of your JSP page.
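Side by side, the two comment styles look like this in a page fragment:

```jsp
<%-- hidden comment: the JSP engine ignores this; it never reaches the client --%>
<!-- output comment: passed through to the generated HTML, visible via "view source" -->
```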
Monday, February 19, 2007
Thursday, February 15, 2007
Jsp page and Jsp document
The JSP specification supports two types of JSP pages: regular JSP pages containing any type of text or markup, and JSP Documents, which are well-formed XML documents; i.e., documents with XHTML and JSP elements. To satisfy the well-formed-ness requirements, JSP directives and scripting elements in a JSP Document must be written with a different syntax than in a regular JSP page.
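For instance, the page directive and an expression are written differently in the two forms (a minimal comparison; the attribute values are illustrative):

```jsp
<%-- regular JSP page syntax --%>
<%@ page contentType="text/html" %>
<%= request.getParameter("name") %>

<%-- equivalent JSP Document (XML) syntax --%>
<jsp:directive.page contentType="text/html" />
<jsp:expression>request.getParameter("name")</jsp:expression>
```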
Wednesday, February 7, 2007