Wednesday, October 15, 2014

File Search and Recursive grep

File Search and Recursive grep:
    //These commands search the current directory and its sub-directories for a file named FileName.sh.
    //Prefer "find ." since "find *" relies on shell globbing and will miss hidden files and directories.
    find * -name "FileName.sh" -print
    find . -name "FileName.sh" -print

NOTE: The -print option will print out the path of any file that is found with that name. 

//This command lists only regular files whose names match the pattern eG*.sh.
    find . -type f -name 'eG*.sh'

//Search for a file named httpd.conf across the entire filesystem (starting from /):
    find / -type f -name httpd.conf

//Search for a file named httpd.conf under the /usr/local directory:
    $ find /usr/local -type f -name httpd.conf

-----------------------------grep command--------------------------
The grep command is used to search for content in a file or a list of files. By default the search is
case-sensitive; the options shown below can be used to change this.

For Non-Zip files:
    grep 'SMS' applicationNonZip.log.txt     (CASE-SENSITIVE "SMS")
    grep -i 'SMS' applicationNonZip.log.txt  (CASE-INSENSITIVE "SMS")

For Zip files:
    zgrep 'SMS' application.log.gz
    zgrep -i 'SMS' application.log.gz

For Searching Recursively in current directory and its sub-directories:
    grep -r --include="*.sh" MAIN_CLUSTER .
Searches for MAIN_CLUSTER in *.sh files in the current directory and its sub-directories; it prints the file name and the matched line contents.

grep -rl SMS *
Searches for SMS recursively and prints ONLY the names of the files that contain it; it will not print the matched line contents.

Search String Forward (in vi or less):   /STRING  (press ENTER; press "n" for the next occurrence, "N" (SHIFT+n) for the previous occurrence).
SHIFT+g (i.e. G) moves the cursor to the end of the file; pressing g (in less) or gg (in vi) moves to the beginning of the file.

Search and Replace first occurrence in the current line:    :s/EG3_OLD/EG2_NEW
Search and Replace all occurrences in the current line:     :s/EG3_OLD/EG2_NEW/g
Search and Replace all occurrences in the entire file:      :%s/EG3_OLD/EG2_NEW/g

//Check Running Processes
ps -eaf | cut -c -125 | grep  _PROCESSNAME

//Setting Java Home Path in Unix
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_60
export CLASSPATH=$JAVA_HOME/lib/tools.jar:.:
export PATH=$JAVA_HOME/bin:$PATH:

//Search and Replace String in a File without opening
sed -i -- 's/OLD/NEW/g' FileName.sh

//List of Users in ubuntu
compgen -u

//List of User Groups in ubuntu
compgen -g

//Create user in ubuntu
sudo adduser lion     (It will ask for password and details, enter those...)

//To know which group current logged in user belongs to:
groups

//To know which group any user belongs to:
groups USERNAME

----------------------Setting Oracle Home in Unix------------------------
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME:$PATH:.

//Login to Oracle System
sqlplus
Enter user-name: USERNAME
Enter password:  password

//Login to Oracle System as SYSDBA
sqlplus / as sysdba
SQL> startup   (Start Oracle Instance)
SQL> exit

lsnrctl start  (Start Oracle Listener)

//Setting linesize and pagesize for SQL*Plus output. Run the commands below at the SQL prompt.
SET linesize 300;
SET pagesize 2000;
//Setting AUTOTRACE ON prints the execution plan and statistics for every SQL query you run in Oracle. Run the command below at the SQL prompt.
SET autotrace ON;

//Manual Index Query in Oracle
select /*+ index(TABLE_NAME index_name) */ Count(*) from TABLE_NAME where 
    time_stamp >='06-NOV-13' and time_stamp <='07-NOV-13' and card_no = 'XXXXX';

----------------------END------------------------

Wednesday, July 30, 2014

Removing JMX Console and EJBInvokerServlet in JBOSS


To Remove:
    invoker/EJBInvokerServlet
    invoker/JMXInvokerServlet
From JBoss servers for security and VA (vulnerability assessment) scans:

Remove the below services (if available) under:
JBOSS/server/WEB_COMPONENT_NAME/deploy/           //WEB_COMPONENT_NAME is your web app name
    jmx-console.war
    web-console.war
    http-invoker.sar
    jmx-invoker-adaptor-server.sar   
  
Restart JBoss AS 5.    
Now check the URLs below; they should no longer be accessible:
http://X.X.X.2:18081/invoker/EJBInvokerServlet
http://X.X.X.2:18081/invoker/JMXInvokerServlet 
http://X.X.X.2:18081/jmx-console/

Sunday, June 29, 2014

Clustering and Load Balancer

Clustering and Load Balancer:

A cluster is a group of application servers that transparently run your J2EE application as if it were a single entity.
Clustering means you run a program on several machines (nodes). One reason why you want to do this is: Load balancing.
If you have too much load/work to do for a single machine you can use a cluster of machines instead. A load balancer
then can distribute the load over the nodes in the cluster.
More and more mission-critical and large scale applications are now running on Java 2, Enterprise Edition (J2EE).
Those mission-critical applications such as banking and billing ask for more high availability (HA), while those
large scale systems such as Google and Yahoo ask for more scalability. The importance of high availability and
scalability in today's increasingly inter-connected world is illustrated by a well-known incident: a 22-hour service
outage at eBay in June 1999 interrupted around 2.3 million auctions and caused a 9.2 percent drop in
eBay's stock value.


Clustering Setup can be done either at the request level or session level. Request level means that each request may go
to a different node - this is ideal since the traffic would be balanced across all nodes, and if a node goes down, the
user has no idea. Unfortunately this requires session replication between all nodes, not just of HttpSession, but ANY
session state.

Session level clustering means that if your application requires a login or other forms of session state, and one
or more of your server nodes goes down, the user will be asked to log in again on their next request, since they will hit a
different node which does not hold any stored session data for that user.

This is still an improvement on a non-clustered environment where, if your node goes down, you have no application at all!
And we still get the benefits of load balancing, which allows us to scale our application horizontally across many machines.


Basic Terminology:

Scalability:
In some large-scale systems, it is hard to predict the number and behavior of end users. Scalability refers to a system’s
ability to support rapidly increasing numbers of users. The intuitive way to scale up the number of concurrent sessions
handled by a server is to add resources (memory, CPU or hard disk) to it. Clustering is an alternative way to resolve the
scalability issue. It allows a group of servers to share the heavy tasks, and operate as a single server logically.


High Availability:
The single-server solution to scalability (adding memory and CPU) is not a robust one, because the server itself is a single point of failure.
Those mission-critical applications such as banking and billing cannot tolerate service outage even for one single minute.
It is required that those services are accessible with reasonable/predictable response times at any time. Clustering is a
solution to achieve this kind of high availability by providing redundant servers in the cluster in case one server fails
to provide service.


Load balancing:
Load balancing is one of the key technologies behind clustering, which is a way to obtain high availability and better
performance by dispatching incoming requests to different servers. A load balancer can be anything from a simple Servlet
or Plug-in (a Linux box using ipchains to do the work, for example), to expensive hardware with an SSL accelerator embedded
in it. In addition to dispatching requests, a load balancer should perform some other important tasks such as
“session stickiness” to have a user session live entirely on one server and “health check” (or “heartbeat”) to
prevent dispatching requests to a failing server. Sometimes the load balancer will participate in the “Failover” process,
which will be mentioned later.


Fault Tolerance:
Highly available data is not necessarily strictly correct data. In a J2EE cluster, when a server instance fails, the service
is still available, because new requests can be handled by other redundant server instances in the cluster. But requests
that were being processed on the failed server at the moment of failure may not get correct data, whereas a fault-tolerant
service always guarantees strictly correct behavior despite a certain number of faults.


Failover:
Failover is another key technology behind clustering, used to achieve fault tolerance. When the original node fails, the
process continues on another node in the cluster. Failing over to another node can be coded explicitly or performed
automatically by the underlying platform which transparently reroutes communication to another server.


Idempotent methods:
Pronounced “i-dim-po-tent”, these are methods that can be called repeatedly with the same arguments and achieve the same results.
These methods shouldn’t impact the state of the system and can be called repeatedly without worry of altering the system.
For example, a “getUsername()” method is idempotent, while a “deleteFile()” method is not. Idempotency is an important concept
when discussing HTTP Session failover and EJB failover; a short example follows below.
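
A minimal illustration of the difference, using a hypothetical UserAccount class (the class and method names are only
examples for this post, not part of any framework API):

public class UserAccount {
    private final String username;
    private double balance;

    public UserAccount(String username, double balance) {
        this.username = username;
        this.balance = balance;
    }

    // Idempotent: calling it once or a hundred times returns the same result
    // and never changes the state of the object.
    public String getUsername() {
        return username;
    }

    // NOT idempotent: every call changes the balance, so blindly replaying the
    // call after a failover would debit the account twice.
    public double debit(double amount) {
        balance = balance - amount;
        return balance;
    }
}

During failover, a load balancer or container can safely retry an idempotent call on another node, but a non-idempotent
call must not be replayed without extra checks.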


Issues in implementing Clusters:
1) Static Variables: When an application needs to share a state among many objects the most popular solution is to store the
state in a static variable. Many Java applications do it, and so do many J2EE applications -- and that's where the problem is.
This approach works absolutely fine on a single server, but fails miserably in a cluster. Each node in the cluster would maintain
its own copy of the static variable, thereby creating as many different values for the state as the number of nodes in the cluster.
Hence if one node updates the value, the other nodes will not see the updated value. One solution is a kind of ClusterNotifyManager which
notifies all nodes whenever such a shared variable changes; the same mechanism works for other kinds of updates too.
Ex: if a database parameter used by the whole application, such as Application Version, SessionTimeout or UserExpiryTime, is
changed from the user interface, the change needs to be communicated to all the server nodes. The notification manager can be designed
as an application running on an IP and port, with every server node listening on that port through a handler. A change event
triggers the nodes to reload their parameter values (a rough sketch of this follows after point 2).

2) You may also need a periodic sync mechanism that refreshes "less write, more read" variables on all server nodes at a fixed
interval. Since each node keeps a cache of application variables, the caches can go stale if the notification in point 1 fails
(see the second sketch below).
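
A rough sketch of the notification idea from point 1. It assumes a plain TCP listener on every node and a hypothetical
ParameterCache class holding the cached values; none of this is a standard API, it only illustrates the design:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical holder for the application parameters cached on this node.
class ParameterCache {
    static void reloadFromDatabase() {
        // re-read Application Version, SessionTimeout, UserExpiryTime, ... from the DB
        System.out.println("Parameters reloaded from database");
    }
}

// Runs on every node: listens on a well-known port and reloads the cached
// parameters whenever the notification manager sends a change event.
public class ClusterNotifyListener implements Runnable {
    private final int port;

    public ClusterNotifyListener(int port) {
        this.port = port;
    }

    public void run() {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                try (Socket socket = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(socket.getInputStream()))) {
                    String event = in.readLine();              // e.g. "RELOAD SessionTimeout"
                    if (event != null && event.startsWith("RELOAD")) {
                        ParameterCache.reloadFromDatabase();
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        new Thread(new ClusterNotifyListener(9090)).start();   // 9090 is an arbitrary example port
    }
}

The central notification manager then simply opens a socket to each registered node and writes a "RELOAD ..." line whenever
a parameter is changed from the UI.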
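
And a sketch of the interval-based sync from point 2, using the standard ScheduledExecutorService and the same hypothetical
ParameterCache class as in the previous sketch:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Periodically refreshes "less write, more read" values so a node that missed
// a change notification still converges within one interval.
public class ParameterSyncTask {
    public static void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    ParameterCache.reloadFromDatabase();
                } catch (Exception e) {
                    e.printStackTrace();   // never let the scheduled task die silently
                }
            }
        }, 0, 5, TimeUnit.MINUTES);        // refresh interval: every 5 minutes (example value)
    }
}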

Monday, May 19, 2014

Read and Write Binary Files to Database in Java


Dear reader,
Many times we need to store and read binary files such as images, mp3, mp4 or any other non-human-readable files
in a DB. This is typically required when you work on a Case Management System or anything else that requires
file storage in a database.

I have written a very simple and complete example, with the DB script, to store and read a binary file in a DB.
The sequence of tasks is:
1) Create the table.
2) Take a few files in a directory which you want to store in the DB.
3) Read the content back from the DB and create a duplicate file in the same directory from which the original
   file was read and stored.
4) A screenshot of the complete example.

-------------------------------------------------------------
Step 1: 
create table FILE_STORE( 
       ID integer(5) not null,
       FILE_NAME varchar(100) not null,  
       USER_NAME varchar(100) not null,  
       BINARY_FILE mediumblob,  
       MOBILE varchar(15),
       primary key (ID) 
); 

--MEDIUMBLOB - 16,777,215 bytes (2^24 - 1)
-------------------------------------------------------------

Step 2:
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SaveBinaryFileToDB {
    static String directoryLocation="E:\\Eguard_Merged_Workspace\\TestProject\\inputFiles\\";
    //static String fileName="Hint_Oracle_History.png";
    static String fileName="Zoobi_Doobi.mp3";
    static File file = new File(directoryLocation+fileName);

    public static void main(String[] args) throws SQLException{
        String connectionURL = "jdbc:mysql://192.168.111.111:3306/deepak_temp"; //Change IP address and Port.
        Connection connection = null;
        ResultSet rs = null;  
        PreparedStatement psmnt = null;  
        FileInputStream fis = null;
        try{
            fis = new FileInputStream(file);
        }
        catch(Exception e){
            e.printStackTrace();
            System.exit(0);
        }

        try {  
            Class.forName("com.mysql.jdbc.Driver").newInstance();  
            connection = DriverManager.getConnection(connectionURL, "root", "root");  
            psmnt = connection.prepareStatement
                    ("INSERT INTO FILE_STORE(ID, FILE_NAME, USER_NAME, BINARY_FILE, MOBILE) values(?,?,?,?,?)");  
            psmnt.setInt(1,1);
            psmnt.setString(2,fileName);  
            psmnt.setString(3,"DeepakModi,Enstage,Bangalore");  
            
            
            // the FileInputStream was already opened above; it feeds the BLOB column below
            psmnt.setBinaryStream(4, (InputStream)fis, (int)(file.length()));  
            psmnt.setString(5,"+919916473353");  
            int s = psmnt.executeUpdate();  
            if(s>0) {  
                System.out.println("Binary File Uploaded successfully !");  
            }  
            else {  
                System.out.println("Unsucessfull to upload Binary File.");  
            }  
        }  
        catch (Exception ex) {  
            System.out.println("Found some error : "+ex);  
        }  
        finally {  
            // close in reverse order of creation, guarding against nulls if getConnection failed
            if (psmnt != null) psmnt.close();  
            if (connection != null) connection.close();  
        }  
    }
}
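
As a side note, on Java 7 and later the same insert can be written with try-with-resources so that the connection, statement
and stream are always closed, even when an exception is thrown. A minimal sketch, assuming the same FILE_STORE table and
connection settings as above (a new ID value is used because ID is the primary key):

import java.io.File;
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SaveBinaryFileToDB2 {
    public static void main(String[] args) throws Exception {
        String connectionURL = "jdbc:mysql://192.168.111.111:3306/deepak_temp"; // change IP address and port
        File file = new File("E:\\Eguard_Merged_Workspace\\TestProject\\inputFiles\\Zoobi_Doobi.mp3");

        Class.forName("com.mysql.jdbc.Driver");
        // try-with-resources closes the connection, statement and stream in reverse order,
        // even if executeUpdate() throws.
        try (Connection connection = DriverManager.getConnection(connectionURL, "root", "root");
             PreparedStatement psmnt = connection.prepareStatement(
                     "INSERT INTO FILE_STORE(ID, FILE_NAME, USER_NAME, BINARY_FILE, MOBILE) values(?,?,?,?,?)");
             FileInputStream fis = new FileInputStream(file)) {

            psmnt.setInt(1, 2);                        // use a new ID; ID is the primary key
            psmnt.setString(2, file.getName());
            psmnt.setString(3, "DeepakModi,Enstage,Bangalore");
            psmnt.setBinaryStream(4, fis, (int) file.length());
            psmnt.setString(5, "+919916473353");
            System.out.println(psmnt.executeUpdate() > 0
                    ? "Binary File Uploaded successfully!" : "Upload failed.");
        }
    }
}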

-------------------------------------------------------------
Step 3:
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ReadBinaryFileFromDB {
    static String directoryLocation="E:\\Eguard_Merged_Workspace\\TestProject\\inputFiles\\";
    //static String fileName="Hint_Oracle_History.png";
    static String fileName="Zoobi_Doobi.mp3";
    static String fileNameSuffix="_Duplicate.mp3";
    //static String fileNameSuffix="_Duplicate.png";
    static File file = new File(directoryLocation+fileName+fileNameSuffix);
    static OutputStream fos = null;
    static InputStream is = null; 

    public static void main(String[] args) throws Exception{
        String connectionURL = "jdbc:mysql://192.168.111.111:3306/deepak_temp"; //Change IP address and Port.
        Connection connection = null;
        ResultSet rs = null;  
        PreparedStatement psmnt = null;  
        try {  
            Class.forName("com.mysql.jdbc.Driver").newInstance();  
            connection = DriverManager.getConnection(connectionURL, "root", "root");
            psmnt = connection.prepareStatement("SELECT BINARY_FILE from FILE_STORE where ID=? and FILE_NAME=?");  
            psmnt.setInt(1,1);
            psmnt.setString(2,fileName);
            rs=psmnt.executeQuery();  
            fos = new FileOutputStream(directoryLocation+fileName+fileNameSuffix);
            
            if(rs.next()){  
                is = rs.getBinaryStream(1);
                System.out.println("Bytes available from the DB stream: " + is.available());
                byte[] buf = new byte[1024];   // 1 KB copy buffer
                int read = 0;
                while ((read = is.read(buf)) > 0) {
                    fos.write(buf, 0, read);
                }
            }
        }
        catch (Exception ex) {  
            System.out.println("Found some error : "+ex);  
            ex.printStackTrace();
        }
        finally {
            // close resources even if the copy fails (main declares "throws Exception")
            if (fos != null) fos.close();
            if (is != null) is.close();
            if (connection != null) connection.close();
        }
    }
}
-------------------------------------------------------------
Screen Shot: 

The attached screenshot contains 5 markers, corresponding to the points below:
1. Project Name
2. Original Binary File
3. Re-Created Binary File from DB
4. Mysql JDBC Jar
5. DB Script file




Monday, March 24, 2014

Truncate running nohup.out file in Unix

Sometimes, because of a faulty logger implementation or a process that has been running for a long time on a Unix
operating system, the background console output file “nohup.out” keeps growing in size, until finally we have
to restart the process just to remove or truncate it.

Either of the commands below will empty the “nohup.out” file without impacting the running process:


UNIX_SHELL$   >   nohup.out


Or

UNIX_SHELL$  cat   /dev/null   >   nohup.out

Tuesday, January 28, 2014

JMAP, HISTO, Thread Dump, High CPU Utilization

Dear Reader,

In a production environment, when CPU usage reaches 100% or more, developers and production support teams
have a nightmare fixing the issue while handling client calls. Java profiling helps detect such CPU usage, but it is not
an option on production systems, so we take a thread dump and a histo and use TDA or other tools to pinpoint the faulty
code. It is a real panic scenario when you are handling a production environment and the issue comes at odd hours.
This post is written only for Unix environments (Ubuntu and Solaris) where these Unix commands run.

Fortunately, Java comes with some great debugging tools; we just need to combine them with Linux commands.

I am going to explain below items here:
    1) Introduction about Java threads and its relation to Linux LWP (Light Weight Process).
    2) Step-by-step process to take thread dump and analyze CPU utilization.
    3) "jmap" - Memory Map (dump), Command to get this.
    4) "jmap" - Histo, Command to get this.
    5) Command to see list of open files in Unix.
    6) Command to achieve the same (Resolving High CPU issue) in Sun Solaris Systems.

    

1) Introduction: A Java program starts when the JVM calls the main method; this creates a thread called the main thread, and any 
    thread you create in Java code is derived from the main thread. The exact same behavior occurs at the Linux OS 
    level: the Java main thread corresponds to an OS process, and for every thread you create in Java the OS creates a 
    Light-Weight Process, or LWP. In short: Java main thread = Linux process, and Java supporting threads = Linux LWPs.
    The LWP id is also referred to by the alias Native ID.

    The solution requires:
        Ask Linux which LWP is eating the CPU.
        Ask Java for a Thread Dump.
        Map this LWP to a Java thread.
        Get the part of code causing the issue.    
    
2) Step-by-step process:    
        Get the PID: the very first step is to find the Java process ID, using one of the Linux commands 
        below (we grep for our "DAPPNAME"; you can use anything, such as "grep java"):
        jps -v | cut -c -106 | grep DAPPNAME
        jps -mvl | cut -c -106 | grep DAPPNAME
        ps -eaf | cut -c -106 | grep DAPPNAME
        ps -ef | cut -c -106 | grep DAPPNAME
        ps -eaf | grep java
        
        Below is sample output when you execute the command:
        dmodi@MyDirectory:~$ jps -mlv | cut -c -106 | grep DAPPNAME
        8243 org.quickserver.net.server.QuickServer -load config/DmodiServer.xml -DAPPNAME=CLIENT    
        13712 org.quickserver.net.server.QuickServer -load ./conf/DmodiDNXServer.xml -DAPPNAME=SERVER
        12229 org.quickserver.net.server.QuickServer -load ./config/DmodiPOSServer.xml -DAPPNAME=SERVER2
        
        Explanation: "jps" - Java Virtual Machine Process Status Tool, a command in Unix. "106" shows 106 
        characters we want to display in console.
        
        
        The next step is to get the CPU usage of each LWP belonging to the main process, using the command below:
        //Replace PROCESS_ID with the numeric process id (obtained from the "top" command in Unix) that is showing more than 100% 
        CPU usage.
        
        ps -eLo pid,lwp,nlwp,ruser,pcpu,stime,etime,args | grep PROCESS_ID | cut -c -106 > ThreadsList.txt
        
  
        The newly created file ThreadsList.txt will contain lines similar to the following (the header row shown here will not 
        appear in the file):        
        PID   LWP  NLWP RUSER    %CPU STIME  ELAPSED     COMMAND
        8243  8243 3  dmodi  0.0 May13  1-19:20:18 java -Dprogram.name=run.sh -Xms64m -Xmx100m -Dsun.rmi.dgc
        8243  8244 3  dmodi  0.0 May13  1-19:20:18 java -Dprogram.name=run.sh -Xms64m -Xmx100m -Dsun.rmi.dgc
        8243  8245 3  dmodi 99.9 May13  1-19:20:18 java -Dprogram.name=run.sh -Xms64m -Xmx100m -Dsun.rmi.dgc
         
         
         To see the headers too, execute the command below (but it will list all processes, since no process id is specified):
         ps -eLo pid,lwp,nlwp,ruser,pcpu,stime,etime,args > ThreadsList.txt
         

         Explanation: PID is the process id.
            LWP: the light-weight processes (Java threads for the given process id) belonging to the above PID. These values are in decimal.
            NLWP: the number of LWPs created for the above PID.            
            We can see that LWP (thread) 8245 is eating the CPU. We need to convert this value to a hex value, which gives "2035".
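
        If you do not have a hex calculator handy, the decimal-to-hex conversion is a one-liner in Java; this tiny snippet is
        only for illustration (in thread dumps the native id appears in hex as nid=0x...):

        public class LwpToHex {
            public static void main(String[] args) {
                int lwp = 8245;                               // decimal LWP taken from the ps output
                System.out.println(Integer.toHexString(lwp)); // prints 2035
            }
        }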
         
        Now take the thread dump and kill the process Id: 8243. 
        See below command:
        //Taking thread dump
   
       
        jstack -l 8243 > ThreadDump_15_May_2014_13_PM.txt
             
//Killing process kill -9 8243 kill -3 8243 (Can also be used in Ubuntu. If used for Solaris, It will generate Thread dump file along with killing).
Open the thread dump file ThreadDump_15_May_2014_13_PM.txt and search the hexa value "2035". Also if you are using TDA (Thread dump analyzer tool) to see this dump file, you can see Native-ID column. You can see the decimal thread Id (LWP): 8245 too. Click the link in TDA for this ThreadId, it will display the faulty code in TDA console.. 3) "jmap" - Memory Map (dump): Prints shared object memory maps or heap memory details of a given JVM process. dmodiUnixUser@productName79:~$ jmap -dump:file=deepak.bin 8243 Dumping heap to /home/dmodiUnixUser/deepak.bin ... Heap dump file created dmodiUnixUser@productName79:~$ ls deepak.bin This newly created file will be big in size (of 5-10 MB around). You can't see this content using "less" or "cat" command. You need tool to see this. We don't use this generally, so not mentioning here. 4) "jmap" - Histo: See below command: dmodiUnixUser@productName79:~$ jmap -histo:live 8243 > deepak.txt Contents of this file "deepak.txt" will have similar like above: num #instances #bytes class name ---------------------------------------------- 1: 14452 2229096 <constMethodKlass> 2: 14452 1740720 <methodKlass> 3: 1004 1406296 <constantPoolKlass> 4: 1336 1270504 [B 5: 25057 1060840 <symbolKlass> 6: 835 809368 <constantPoolCacheKlass> 7: 1004 787096 <instanceKlassKlass> 5) List of open files in Linux: lsof - list open files dmodiUnixUser@productName79:~$ lsof | grep home/dmodi/productName/dist/ COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME java 16460 dmodiUnixUser mem REG 9,2 25680 8127062 home/dmodi/productName/dist/sample1.jar java 16460 dmodiUnixUser mem REG 9,2 66770 8127061 home/dmodi/productName/dist/sample2.jar dmodiUnixUser@productName79:~$ lsof | grep PROCESS_ID > help.txt dmodiUnixUser@productName79:~$ less help.txt 6) The Above mentioned few commands may not work for Sun Solaris system. Hence to track high CPU consuming Process and ThreadId, "prstat" is used. The syntax is "prstat -L -p ". Example: prstat -L -p 22991 >> 22991.txt This will generate file name 22991.txt having all the Threads and CPU Usage details for ProcessId 22991. The headers will be like this : PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/LWPID 22991 root 7667M 4953M cpu18 0 0 14:47:00 99.6% java/119286 22991 root 7667M 4953M sleep 59 0 0:02:14 0.0% java/46 22991 root 7667M 4953M sleep 59 0 0:00:15 0.0% java/15 22991 root 7667M 4953M sleep 59 0 0:00:15 0.0% java/14 Here "PID corresponds to Solaris Java process ID". "CPU corresponds to the CPU utilization % of a particular Java Thread". "PROCESS/LWPID corresponds to Light Weight Process ID e.g. your native Java ThreadID belonging to ProcessID 22991". HERE prstat says, ThreadId #119286 is the top CPU contributor with 99.6% utilization and hence the faulty code must be fixed. Hence take thread dump immediately using command "kill -3 22991". This command will generate a Thread Dump from Java process HotSpot VM format. Convert Thread ID #119286 which is in decimal format to HEXA, corresponds value is 0X1D1F6 (this HEXA format, see 0X prefix). HEXA value is "1D1F6". Now search this HEXA value in Thread Dump file, you will get the exactly faulty code stacktrace. ----------------------------------END-----------------------------------------

Wednesday, January 22, 2014

Execution Plan of a Query in Oracle

Working with Oracle in Unix and getting Execution Plan of a Query:

1) Login to Unix Machine where Oracle is installed.

2) You need to check whether Oracle is running (Listener service is running). To check this type below command:
        oracle@companyName:~$ lsnrctl status (press ENTER)
   This will give around 30 lines of output; at the end it shows whether the listener is running/ready or not. 
   If it is not running, start it using the command below:         
        oracle@companyName:~$ lsnrctl start (press ENTER)

3) Type the commands below in Unix (these set and export the ORACLE_HOME path):  
        oracle@companyName:~$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 (press ENTER)
        oracle@companyName:~$ export ORACLE_SID=orcl (press ENTER)
        oracle@companyName:~$ export PATH=$ORACLE_HOME/bin:$ORACLE_HOME:$PATH:. (press ENTER)

4) Now you need to log in to the database, so type the command below: 
        oracle@companyName:~$ sqlplus (press ENTER)
            Enter user-name: YOUR_ORACLE_USER_NAME  (press ENTER)
            Enter password:  YOUR_ORACLE_PASSWORD (press ENTER)
        SQL> (You are logged in now...)

/*
For logging in as SYSDBA, required only for DBA activities; don't run it often:        
        oracle@companyName:~$ sqlplus / as sysdba
        SQL>
        SQL> exit
*/

5) Set linesize and pagesize, otherwise the SQL*Plus output will appear fragmented. Type the commands below:
        SQL> set linesize 300;
        SQL> set pagesize 2000;

6) Now, to print the execution plan of a query and see CPU utilization, IO cost, bytes read, index scans etc., use the queries below:
        SQL> SET autotrace ON;
        SQL> SELECT /*+ index(TABLE_NAME INDEX_TIME_STAMP) */  * FROM TABLE_NAME WHERE 
             TIME_STAMP>=To_Date('10/01/2014 00:04:20','dd/mm/yyyy hh24:mi:ss') AND TIME_STAMP<=To_Date('15/01/2014 17:04:20','dd/mm/yyyy hh24:mi:ss')
             ORDER BY CARD_NO ASC;

             Your output will look like below:    
ROW DATA 1........             
ROW DATA 2........
ROW DATA 3........
3 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1246977483

----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                           |    18 |  4122 |     4  (25)| 00:00:01 |
|   1 |  SORT ORDER BY               |                           |    18 |  4122 |     4  (25)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| TABLE_NAME            |    18 |  4122 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | INDEX_TIME_STAMP |    18 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("TIME_STAMP">=TIMESTAMP' 2014-01-10 00:04:20' AND "TIME_STAMP"<=TIMESTAMP'
              2014-01-15 17:04:20')
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
       4397  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         10  rows processed

SQL> exit

7) The same query output as above can also be obtained from Oracle UI tools such as SQL Tools (everything above was executed 
    in the Unix command-line interface) by executing the two queries below in the UI tool:

    explain plan for
    SELECT /*+ index(TABLE_NAME INDEX_TIME_STAMP) */  * FROM TABLE_NAME WHERE 
    TIME_STAMP>=To_Date('10/01/2014 00:04:20','dd/mm/yyyy hh24:mi:ss') AND TIME_STAMP<=To_Date('15/01/2014 17:04:20','dd/mm/yyyy hh24:mi:ss')
    ORDER BY CARD_NO ASC;

    SELECT PLAN_ID,CPU_COST,IO_COST,TIME,BYTES,COST,DEPTH,OPTIMIZER,OBJECT_TYPE,OBJECT_NAME,OBJECT_OWNER,OPTIONS,OPERATION,
    TIMESTAMP FROM PLAN_TABLE;
    
==========================================END==============================================