Saturday, October 15, 2011

Accidentally issued an alter user command to change the apps password

Sol 1:
APPS User Locks After Manually Altering Password [ID 566127.1] for 12.0.4
To implement the solution, please execute the following steps:
1. Back up the FND_USER and FND_ORACLE_USERID tables (a quick way to do this is sketched after step 2).
2. Change the password in the following sequence.
SQL> alter user applsys identified by apps;
SQL> alter user apps identified by apps;
SQL> alter user apps account unlock;
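
For step 1, a simple way to take the backups is a CTAS copy into scratch tables; the _bak table names here are only illustrative:

SQL> create table fnd_user_bak as select * from fnd_user;
SQL> create table fnd_oracle_userid_bak as select * from fnd_oracle_userid;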
Sol 2:
APP-FND-01496 Error After Changing Apps And Applsys Users Using Alter User [ID 445153.1] for Release 11.5.10
1. Restore the FND_ORACLE_USERID and FND_USER tables from a backup.
2. Then run FNDCPASS to change the APPLSYS password:
FNDCPASS apps/apps 0 Y system/<system_password> SYSTEM APPLSYS WELCOME

Troubleshooting Guide For Login and Changing Applications Passwords [ID 1306938.1]

R12 Upgrade patch fails with ORA-04021: timeout occurred while waiting to lock object

R12 Upgrade patch fails with ORA-04021: timeout occurred while waiting to lock object for multiple objects.
adwork001.log:ORA-04021: timeout occurred while waiting to lock object CS.CS_INCIDENTS_ALL
adwork002.log:ORA-04021: timeout occurred while waiting to lock object CN.CN_REV_CLASS_API_ALL
adwork003.log:ORA-04021: timeout occurred while waiting to lock object PJM.PJM_TASK_ATTR_USAGES
adwork004.log:ORA-04021: timeout occurred while waiting to lock object INV.MTL_ITEM_CATALOG_GROUPS_S
adwork005.log:ORA-04021: timeout occurred while waiting to lock object JA.JA_IN_AP_TDS_SERVICES
adwork005.log:ORA-04021: timeout occurred while waiting to lock object JA.JA_IN_REQN_INTERFACE
adwork006.log:ORA-04021: timeout occurred while waiting to lock object JA.JA_IN_57F4_LINES_S
adwork007.log:ORA-04021: timeout occurred while waiting to lock object JA.JAI_ITM_TEMPL_ATTRIBS
adwork008.log:ORA-04021: timeout occurred while waiting to lock object HR.PER_JP_BANK_LOOKUPS
adwork009.log:ORA-04021: timeout occurred while waiting to lock object INV.MTL_ITEM_CATALOG_GROUPS_S
adwork009.log:ORA-04021: timeout occurred while waiting to lock object PJM.PJM_TASK_ATTRIBUTES
adwork010.log:ORA-04021: timeout occurred while waiting to lock object ZX.ZX_ID_TCC_MAPPING
adwork013.log:ORA-04021: timeout occurred while waiting to lock object JA.JA_IN_GEM_TAX_CODES_S1
adwork014.log:ORA-04021: timeout occurred while waiting to lock object CS.CS_INCIDENTS_ALL
adwork017.log:ORA-04021: timeout occurred while waiting to lock object PJM.PJM_TASK_ATTR_USAGES
adwork025.log:ORA-04021: timeout occurred while waiting to lock object JA.JA_IN_AP_FORM16_DTL
adwork027.log:ORA-04021: timeout occurred while waiting to lock object CN.CN_REV_CLASS_API_ALL
adwork028.log:ORA-04021: timeout occurred while waiting to lock object PJM.PJM_TASK_ATTRIBUTES

Solution:
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

and restart the failed workers.
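
Before (or after) dropping it, you can confirm whether supplemental logging is actually enabled; this is a generic check against v$database, not part of the patch log:

SQL> select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;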

Missing log records in receiving log queue for subscriber SUB_BILLPLAN_QUEUE errorcode = 416

Solution (the scripts below need to be run against each subscriber id having the error)

begin
DBMS_MGWADM.CLEANUP_GATEWAY (action => DBMS_MGWADM.RESET_SUB_MISSING_LOG_REC, sarg => '<SUBSCRIBER_ID>');
end;
/

begin
dbms_mgwadm.reset_subscriber('<SUBSCRIBER_ID>');
end;
/


***********************************************
Refer mgw_subscribers for subscriber_id
***********************************************
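
To identify the subscriber ids currently in error, the MGW_SUBSCRIBERS view mentioned above can be queried directly; column names vary by release, so a plain select is shown:

SQL> select * from mgw_subscribers;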

Discoverer: Unable to connect to Oracle Applications database: invalid username/password

unable to connect to: user@123@sid - Failed to connect to database - Unable to connect to Oracle Applications database: invalid username/password.
Solution:
1. Turn Applications server authentication off with the following command:
java oracle.apps.fnd.security.AdminAppServer apps/apps AUTHENTICATION OFF DBC=file.dbc
Alternatively, change the value of AUTHENTICATION from secure to off in the xml file.
2. Create a folder named secure in the client Oracle Home, and copy the dbc file from the server ($FND_SECURE) into it.

Patch 5980072 can also be applied on the client to resolve this issue.

Routine FND_DCP.REQUEST_SESSION_LOCK received a result code of 1 from the call to DBMS_LOCK.Request.

In a Parallel Concurrent Processing environment, concurrent managers only come up on one node, either nodeA or nodeB, but not on both nodes.

Encounter the following error in the Internal Concurrent Manager logfile:

Routine &ROUTINE has attempted to start the internal concurrent manager.
The ICM is already running. Contact your system administrator for further assistance.

afpdlrq received an unsuccessful result from PL/SQL procedure or function FND_DCP.Request_Session_Lock.
Routine FND_DCP.REQUEST_SESSION_LOCK received a result code of 1 from the call
to DBMS_LOCK.Request.
Possible DBMS_LOCK.Request result
Call to establish_icm failed.
The Internal Concurrent Manager has encountered an error.
Changes
Implementing parallel concurrent processing.
Cause
The concurrent manager startup script is being executed on both nodes.
This causes two instances of the ICM (internal concurrent manager) to be running in one application instance, which relates to error message in manager logfile.
Moreover, FNDSM is not able to complete its job of starting respective processes on the defined nodes.
Fix
1. Ensure that APPLDCP is set to ON in your $APPL_TOP/.env file.
2. Echo environment variable on command line prior to starting concurrent managers.
3. Execute adcmctl.sh only on the primary node of the Internal Concurrent Manager.
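
To verify which node each manager is targeting and its control status, a generic query like the following can be run as APPS (TARGET_NODE, NODE_NAME and CONTROL_CODE are standard FND_CONCURRENT_QUEUES columns):

select concurrent_queue_name, target_node, node_name, control_code
from fnd_concurrent_queues
where enabled_flag = 'Y'
order by concurrent_queue_name;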

FNDCPVCM.fmb

concurrent requests are stuck in pending status

Below are several different possible solutions to the problem where concurrent requests are stuck in pending status:

1.  When shutting down the concurrent managers, are there any FNDLIBR processes still running at the OS level?  If so, do a kill -9 on them,
then restart the concurrent managers.

2.  Try Relinking $FND_TOP.

3.  Rebuild the concurrent manager views. This is non-destructive.
Ensure that the concurrent managers are shut down, then as applmgr run the following command at the OS command line:

FNDLIBR FND FNDCPBWV apps/apps SYSADMIN 'System Administrator' SYSADMIN

Restart the concurrent managers.

4.  The Profile Option 'Concurrent: OPS Request Partitioning' may be set incorrectly. This profile option should always be set to
OFF, regardless of whether you are running OPS(RAC) or not, because the profile is obsolete.

5.  The System Profile Option 'Concurrent: Active Request Limit' is possibly set to 0.
        a.  Log into Oracle Applications as SYSADMIN.
        b.  Select System Administrator responsibility.
        c.  Navigate to PROFILE > SYSTEM.
        d.  Query for %CONC%ACTIVE%.
        e.  Change the profile option for 'Concurrent: Active Request Limit' to Null (blank).
        f.  Exit Oracle Applications and log in again for the change to take effect.
        g.  Run a new concurrent request.

6.  The Concurrent managers were brought down, while an outstanding request was still running in the background.  In which case, update the
FND_CONCURRENT_REQUESTS table as follows:

sql>
update fnd_concurrent_requests
set status_code='X', phase_code='C'
where status_code='T';
sql> commit;

7.   The control_code for concurrent_queue_name = 'FNDCRM' is 'N' in the FND_CONCURRENT_QUEUES table,  which means 'Target node/queue unavailable'.
This value should be NULL (CRM is running; target and actual process amount are the same), or 'A' ('Activate concurrent manager' control status).
Set the control_code to 'A' in fnd_concurrent_queues for the Conflict Resolution Manager:
       a.  Logon to Oracle Applications database server as 'applmgr'.
       b.  Verify the Applications environment is setup correctly ($ORACLE_HOME and $ORACLE_SID).
       c.  Logon to SQL*Plus as 'APPS' and run the following SQL statement:
            update fnd_concurrent_queues
            set control_code = 'A'
            where concurrent_queue_name = 'FNDCRM';
            commit;
       d.  Verify the status of the concurrent managers through the  Concurrent -> Manager -> Administer form.

If the CRM is still not active, bounce (deactivate, activate) the Internal Concurrent Manager.  This is done through the Concurrent > Manager >  Administer form
from the 'System Administrator' responsibility. It can also be done through the CONCSUB command at the command level.     

Setting the control_code to 'A' in the fnd_concurrent_queues table for the Conflict Resolution Manager indicates that this concurrent manager
is to be activated with the parameter values specified through this table for this manager (MAX_PROCESSES, CACHE_SIZE, etc).

8.  What is the cache size?   Try increasing the cache size then stop/restart the concurrent managers.
If concurrent requests are rarely prioritized and there are managers that service short-running requests, consider setting the cache size to
equal at least twice the number of target processes.  This increases the throughput of the concurrent managers by attempting to avoid any sleep time.
For example:
If more than one manager or worker processes the same type of requests with only a small cache size, it may be unable to process any jobs in a
single processing cycle, because other processes have already run the cached requests.
When this happens, it is important to note that the manager will sleep before refreshing its cache.  To increase manager throughput where there
are sufficient requests of the required type in the queue, increase the cache size to improve the chance of the manager finding work to process
and thus avoid having to enter a sleep phase.

TIP: Ensure that the system is not resource-constrained before attempting to increase the rate of concurrent processing in this way, otherwise,
these changes may actually reduce concurrent processing throughput because jobs take longer to run.
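
While working through the scenarios above, a quick breakdown of the pending backlog by status helps confirm whether requests are actually moving; a generic query (phase_code 'P' means Pending):

select status_code, count(*)
from fnd_concurrent_requests
where phase_code = 'P'
group by status_code;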

Changes in R12 (12.2)

1. Oracle E-Business Suite R12 (12.2) uses Oracle Fusion Middleware 11g R1 PS3 (11.1.1.4), including WebLogic 10.3.4, as its application server. (Note: in previous R12 versions, i.e. 12.0.X and 12.1.X, this is 10g R3, i.e. 10.1.3.X.) Oracle HTTP Server in R12 (12.2) is 11.1.1.4.
2. Oracle E-Business Suite R12 (12.2) uses Oracle Application Server 10g R3 (10.1.2.3) for Forms & Reports. (Note: in previous R12 versions, i.e. 12.0.X and 12.1.X, Forms and Reports are of the same version, i.e. 10.1.2.3.)
3. The default database for Oracle E-Business Suite R12 (12.2) is 11g R2 (11.2.X). (Note: in previous R12 versions, the default database is 11.1 for 12.0.X and 11.2 for 12.1.X.)
4. The Oracle JSP Compiler (OJSP) 10.1.3.5 used in 12.0.X and 12.1.X is replaced by the WebLogic JSP Compiler 11.1.1.4 in R12 version 12.2.
5. Online Patching (OLP), introduced in Oracle Apps 12.2, uses Oracle Database edition-based redefinition to reduce patch downtime (more on edition-based redefinition coming soon). A secondary file system for the application tier is introduced in Apps R12 (12.2) to support Online Patching (OLP).
6. Oracle Apps R12.2 cloning will also support Fusion Middleware (11.1.1.4, discussed in point 1) cloning. For standalone cloning of Fusion Middleware 11gR1 click here.
7. Oracle Web Applications Desktop Integrator (Web ADI) in R12.2 is now certified with Microsoft Office 32-bit and 64-bit.

Socket to servlet or vice versa in Oracle Apps R12

$FND_TOP/bin/txkrun.pl -script=ChangeFormsMode \
[-contextfile=<CONTEXT_FILE>] \
-mode=servlet \
[-port=<Forms port number>] \
-runautoconfig=<No or Yes> \
-appspass=<APPS password>

FRM-92050: Failed to connect to Server: /forms/lservlet:-1

This issue only occurs in servlet mode and on IE version 8.
Solution:
In Internet Explorer the following navigation can be followed:
Tools - Internet Options - Security - Custom Level - Enable XSS filter - Disable
Also refer below Metalink Notes:
R12.1 FRM-92050: FAILED TO CONNECT TO SERVER: /FORMS/LSERVLET [ID 1070263.1]
IE8 AND R12 SECURITY SETTING REQUIREMENT ON CROSS SITE SCRIPTING (XSS) [ID 1069497.1]

Huge archive Generation in OID - Could not delete the Instance OIDLDAPD Instance 1 deletion failed

Abstract:
*** Huge archive generation in OID
*** oidldapd Fails to Start With "Bind Failed On Communication Endpoint (13)" [ID 558296.1]
*** Why the ODS_PROCESS Table Should NOT Be Truncated [ID 889673.1]
**************** Error Message *******************************
Deleting OIDLDAPD instance 1 from Process table
Instance cn=instance1,cn=cndassfzdbop18,cn=osdldapd,cn=subregistrysubentry not found. Could not delete the Instance OIDLDAPD Instance 1 deletion failed
Solution: to remove the entry from the ODS_PROCESS table, do the following:
opmnctl stopall
oidmon connect=ssf start
oidctl connect=ssf server=oidldapd instance=1 configset=0 stop
oidmon connect=ssf stop
opmnctl startall
(Note: Refer ods_process table for instance and configset value)
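
To see the current instance and configset values before running the commands above, the ODS_PROCESS table (normally owned by the ODS schema; adjust the schema name if yours differs) can be queried:

SQL> select * from ods.ods_process;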

Saturday, June 25, 2011

Locks in Database


SELECT DECODE(request,0,'Holder: ','Waiter: ')||sid sess, id1, id2, lmode, inst_id, request, type
FROM gv$lock
WHERE (id1, id2, type) IN (SELECT id1, id2, type FROM gv$lock WHERE request > 0)
ORDER BY id1, request;


select sid,serial#,module,status,action,to_char(logon_time,'dd-mon-yyyy hh24:mi:ss') logon_time from v$session where sid=&sid;
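
Once a holder SID is identified by the first query, a generic join of v$session to v$sqlarea shows what that session is currently executing (returns no rows if the session is idle):

select s.sid, s.serial#, s.username, q.sql_text
from v$session s, v$sqlarea q
where s.sql_address = q.address
and s.sid = &sid;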

Important tables in oracle APPS

Concurrent Manager
FND_CONCURRENT_QUEUES
FND_CONCURRENT_PROGRAMS
FND_CONCURRENT_REQUESTS
FND_CONCURRENT_PROCESSES
FND_CONCURRENT_QUEUE_SIZE
FND
FND_APPL_TOPS
FND_LOGINS
FND_USER
FND_DM_NODES
FND_TNS_ALIASES
FND_NODES
FND_RESPONSIBILITY
FND_DATABASES
FND_UNSUCCESSFUL_LOGINS
FND_LANGUAGES
FND_APPLICATION
FND_PROFILE_OPTION_VALUES
AD / Patches
AD_APPLIED_PATCHES
AD_PATCH_DRIVERS
AD_BUGS
AD_INSTALL_PROCESSES
AD_SESSIONS
AD_APPL_TOPS

finding installed patches in apps

Hi

     If you want to find installed patch details in Oracle Apps, then you can run the following queries.

 select DRIVER_FILE_NAME
from AD_PATCH_DRIVERS
where DRIVER_FILE_NAME like '%3258830%';

select BUG_NUMBER,CREATION_DATE,LAST_UPDATE_DATE from ad_bugs where BUG_NUMBER='10052153';

select PATCH_NAME,PATCH_TYPE,CREATION_DATE,LAST_UPDATE_DATE from ad_applied_patches where PATCH_NAME='10052153';

If you want to know about localization patch details:

select PATCH_NUMBER,STATUS,LOG_FILE,PATCH_DATE from jai_applied_patches where PATCH_NUMBER='<patch number>';











Oracle Application Concurrent Manager

Inside the Oracle Concurrent Manager

by Lokesh Rustagi - Oracle Apps DBA at IBM, Noida


The concurrent managers in the Oracle e-Business suite serve several important administrative functions. Foremost, the concurrent managers ensure that the applications are not overwhelmed with requests; the second area of function is the management of batch processing and report generation.

This article will explore tools that are used by experienced administrators to gain insight and improved control over the concurrent management functions. We will explore how the concurrent managers can be configured via the GUI, and also explore scripts and dictionary queries that are used to improve the functionality of concurrent management.

The Master Concurrent Managers

There is a lot of talk about "the" concurrent manager in Oracle Applications. Actually, there are many Concurrent Managers, each governing flow within a particular Oracle Apps area. In addition there are "super" Concurrent Managers whose job is to govern the behavior of the slave Concurrent Managers. The Oracle e-Business suite has three important master Concurrent Managers:

    * Internal Concurrent Manager — The master manager is called the Internal Concurrent Manager (ICM) because it controls the behavior of all of the other managers, and because the ICM is the boss, it must be running before any other managers can be activated. The main functions of the ICM are to start up and shut down the individual concurrent managers, and to reset the other managers after one of them has a failure.

    * Standard Manager — Another important master Concurrent Manager is called the Standard Manager (SM). The SM functions to run any reports and batch jobs that have not been defined to run in any specific product manager. Examples of specific concurrent managers include the Inventory Manager, CRP Inquiry Manager, and the Receivables Tax Manager.

    * Conflict Resolution Manager — The Conflict Resolution Manager (CRM) functions to check concurrent program definitions for incompatibility rules. However, the ICM can be configured to take over the CRM's job to resolve incompatibilities.

Now that we understand the functions of the master Concurrent Managers, let's take a quick look at techniques that are used by Oracle Apps DBAs to monitor and tune the behavior of the Concurrent Managers.

Tuning the Concurrent Manager

All successful Oracle Apps DBAs must understand how to monitor and tune each of the Concurrent Managers. This article will explore some of the important techniques for monitoring and tuning the Oracle Apps Concurrent Manager processes. The topics will include:

    * Tuning the Concurrent Manager
          o Tuning the Internal Concurrent Manager
          o Purging Concurrent Requests
          o Troubleshooting Oracle Apps performance problems
          o Adjusting the Concurrent Manager Cache Size
          o Analyzing the Oracle Apps Dictionary Tables
    * Monitoring Pending Requests in the Concurrent Manager
    * Changing the dispatching priority within the Concurrent Manager

Let's start by looking at tuning the ICM, and drill-down into more detail.

Tuning the Internal Concurrent Manager (ICM)

The ICM performance is affected by the three important Oracle parameters PMON cycle, queue size, and sleep time.

    * PMON cycle — This is the number of sleep cycles that the ICM waits between checks for concurrent manager failures; it defaults to 20. You should change the PMON cycle to a number lower than 20 if your concurrent managers are having problems with abnormal terminations.

    * Queue Size — The queue size is the number of PMON cycles that the ICM waits between checking for disabled or new concurrent managers. The default for queue size of 1 PMON cycle should be used.

    * Sleep Time — The sleep time parameter indicates the seconds that the ICM should wait between checking for requests that are waiting to run. The default sleep time is 60, but you can lower this number if you see a lot of requests waiting (Pending/Normal). However, reducing this number to a very low value may cause excessive CPU utilization.

All of the concurrent managers, with the exception of the ICM and CRM, can be configured to run as many processes as needed, as well as the time and days a manager can process requests. However, the number of processes needed is dependent on each organization's environment. An Applications DBA must monitor the concurrent processing in order to decide how to configure each manager. For a fresh install of the applications, initially configure the standard manager to run with five processes, and all the other managers with two processes. After the applications have been in operation for a while, the concurrent managers should be monitored to determine if more operating system processes should be allocated.

Purging Concurrent Requests


One important area of Concurrent Manager tuning is monitoring the space usage for the subsets within each concurrent manager. When the number of records in FND_CONCURRENT_PROCESSES and FND_CONCURRENT_REQUESTS exceeds 50K, you can start to experience serious performance problems within your Oracle Applications. When you experience these space problems, a specific request called "Purge Concurrent Requests And/Or Manager Data" should be scheduled to run on a regular basis. This request can be configured to purge the request data from the FND tables as well as the log files and output files that accumulate on disk.
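
A rough way to check whether you are approaching that volume is a simple count of both tables, run as APPS:

select count(*) from fnd_concurrent_requests;
select count(*) from fnd_concurrent_processes;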

Adjusting the Concurrent Manager Cache Size

Concurrent manager performance can also be enhanced by increasing the manager cache size to be at least twice the number of target processes. The cache size specifies the number of requests that will be cached each time the concurrent manager reads from the FND_CONCURRENT_REQUESTS table. Increasing the cache size will boost the throughput of the managers by attempting to avoid sleep time.
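
The current cache size of each manager can be compared against its target processes with a generic query; CACHE_SIZE and MAX_PROCESSES are standard FND_CONCURRENT_QUEUES columns:

select concurrent_queue_name, max_processes, cache_size
from fnd_concurrent_queues
order by concurrent_queue_name;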

Analyzing Oracle Apps Dictionary Tables for High Performance

It is also very important to run the request Gather Table Statistics on these tables:

    * FND_CONCURRENT_PROCESSES
    * FND_CONCURRENT_PROGRAMS
    * FND_CONCURRENT_REQUESTS
    * FND_CONCURRENT_QUEUES.

Run the request "Analyze All Index Column Statistics" on the indexes of these tables. Since the APPLSYS user is the owner of these tables, you can also just run the request Analyze Schema Statistics for APPLSYS.

To troubleshoot performance, a DBA can use three types of trace. A module trace, such as PO or AR, can be set by enabling the module's profile option Debug Trace from within the applications. Second, most concurrent requests can be set to generate a trace file by changing the request parameters. To enable trace for a specific request, log in as a user with the System Administrator responsibility. Navigate to Concurrent -> Program -> Define. Query for the request that you want to enable trace. At the bottom right of the screen you can check the box Enable Trace. (Figure 1)

Figure 1: Troubleshooting Concurrent Manager Performance.

Another popular way to troubleshoot the Concurrent Managers is to generate a trace file. This is done by setting the OS environment variable FNDSQLCHK to FULL, and running the request from the command line.

Monitoring Pending Requests in the Concurrent Managers

Occasionally, you may find that requests are stacking up in the concurrent managers with a status of "pending". This can be caused by any of these conditions:

1. The concurrent managers were brought down while a request was still running.

2. The database was shutdown before shutting down the concurrent managers.

3. There is a shortage of RAM or CPU resources.

When you get a backlog of pending requests, you can first allocate more processes to the manager that is having the problem in order to allow most of the requests to process, then make a list of the requests that will not complete so they can be resubmitted later, and cancel them.
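
To see which requests are stacking up and for how long, a generic listing of pending requests (oldest first) can be taken from FND_CONCURRENT_REQUESTS:

select request_id, phase_code, status_code, requested_start_date
from fnd_concurrent_requests
where phase_code = 'P'
order by requested_start_date;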

To allocate more processes to a manager, log in as a user with the System Administrator responsibility. Navigate to Concurrent -> Manager -> Define. Increase the number in the Processes column. Also, you may not need all the concurrent managers that Oracle supplies with an Oracle Applications install, so you can save resources by identifying the unneeded managers and disabling them.

Figure 2: Allocating more processes to the Concurrent Manager.

However, you can still have problems. If the request remains in a phase of RUNNING and a status of TERMINATING after allocating more processes to the manager, then shutdown the concurrent managers, kill any processes from the operating system that won't terminate, and execute the following sqlplus statement as the APPLSYS user to reset the managers in the FND_CONCURRENT_REQUESTS table:

update fnd_concurrent_requests
set status_code='X', phase_code='C'
where status_code='T';

Changing Dispatching Priority within the Concurrent Manager

If there are requests that have a higher priority to run over other requests, you can navigate to Concurrent --> Program --> Define to change the priority of a request. If a priority is not set for a request, it will have the same priority as all other requests, or it will be set to the value specified in the user's profile option Concurrent:Priority.

Also, you can specify that a request run using an SQL optimizer mode of FIRST_ROWS, ALL_ROWS, RULE, or CHOOSE, and this can radically affect the performance of the SQL inside the concurrent request. If several long running requests are submitted together, they can cause fast running requests to wait unnecessarily. If this is occurring, try to schedule as many long running requests as possible to run after peak business hours. Additionally, a concurrent manager can be created to run only fast running requests.

Using data Dictionary Scripts with the Concurrent Manager

Few Oracle Applications DBAs understand that sophisticated data dictionary queries can be run to reveal details about the workings within each Concurrent Manager. Oracle provides several internal tables that can be queried from SQL*Plus to see the status of the concurrent requests, and the most important are FND_CONCURRENT_PROGRAMS and FND_CONCURRENT_REQUESTS.

Oracle supplies several useful scripts, (located in $FND_TOP/sql directory), for monitoring the concurrent managers:

afcmstat.sql - Displays all the defined managers, their maximum capacity, pids, and their status.

afimchk.sql - Displays the status of the ICM and PMON method in effect, the ICM's log file, and determines if the concurrent manager monitor is running.

afcmcreq.sql - Displays the concurrent manager and the name of its log file that processed a request.

afrqwait.sql - Displays the requests that are pending, held, and scheduled.

afrqstat.sql - Displays a summary of concurrent request execution time and status since a particular date.

afqpmrid.sql - Displays the operating system process id of the FNDLIBR process based on a concurrent request id. The process id can then be used with the ORADEBUG utility.

afimlock.sql - Displays the process id, terminal, and process id that may be causing locks that the ICM and CRM are waiting to get. You should run this script if there are long delays when submitting jobs, or if you suspect the ICM is in a gridlock with another Oracle process.

In addition to these canned scripts you can still write custom Concurrent Manager scripts. For example, the following query can be executed to identify requests based on the number of minutes the request ran:

conc_stat.sql

set echo off
set feedback off
set linesize 97
set verify off
col request_id format 9999999999 heading "Request ID"
col exec_time format 999999999 heading "Exec Time|(Minutes)"
col start_date format a10 heading "Start Date"
col conc_prog format a20 heading "Conc Program Name"
col user_conc_prog format a40 trunc heading "User Program Name"

spool long_running_cr.lst

SELECT
  fcr.request_id request_id,
  TRUNC(((fcr.actual_completion_date - fcr.actual_start_date)/(1/24))*60) exec_time,
  fcr.actual_start_date start_date,
  fcp.concurrent_program_name conc_prog,
  fcpt.user_concurrent_program_name user_conc_prog
FROM
  fnd_concurrent_programs fcp,
  fnd_concurrent_programs_tl fcpt,
  fnd_concurrent_requests fcr
WHERE
  TRUNC(((fcr.actual_completion_date - fcr.actual_start_date)/(1/24))*60) > NVL('&min',45)
  and fcr.concurrent_program_id = fcp.concurrent_program_id
  and fcr.program_application_id = fcp.application_id
  and fcr.concurrent_program_id = fcpt.concurrent_program_id
  and fcr.program_application_id = fcpt.application_id
  and fcpt.language = USERENV('Lang')
ORDER BY
  TRUNC(((fcr.actual_completion_date - fcr.actual_start_date)/(1/24))*60) desc;

spool off

Note that this script prompts you for the number of minutes. The output from this query with a value of 60 produced the following output on my database. Here we can see important details about long-running requests, including the request ID, the execution time in minutes, the start date, the concurrent program name, and the user-visible program name.

Enter value for min: 60

                       Exec Time
 Request ID  (Minutes) Start Date Conc Program Name    User Program Name
----------- ---------- ---------- -------------------- --------------------------------------
    1445627        218 01-SEP-02  MWCRMRGA             Margin Analysis Report(COGS Breakups)
     444965        211 03-JUL-01  CSTRBICR5G           Cost Rollup - No Report GUI
    1418262        208 22-AUG-02  MWCRMRGA             Margin Analysis Report(COGS Breakups)
     439443        205 28-JUN-01  CSTRBICR5G           Cost Rollup - No Report GUI
     516074        178 10-AUG-01  CSTRBICR6G           Cost Rollup - Print Report GUI
    1417551        164 22-AUG-02  MWCRMRGA             Margin Analysis Report(COGS Breakups)

Important sql in oracle apps

Oracle Applications has a useful collection of ready SQL scripts under $AD_TOP/sql. The following list shows the description of each script. For more details, such as whether a specific script is available on a specific Apps version, check Metalink note 108207.1. The scripts and their descriptions are listed below.
adcompsc.pls
Compile objects in a given schema

adcpresp.sql
The script duplicates rows in FND_RESPONSIBILITY in the following way: Find data_group_id dg_id for the given data_group_name. For each row with data_group_id 0 in FND_RESPONSIBILITY, look for a corresponding row with data_group_id dg_id, with the same application_id, and responsibility_name that only differs in the given suffix string suffix_string. If such a row does not exist for the data_group_id dg_id, insert it.

aderrch2.sql
Reports all compilation errors for a given schema.

aderrchk.sql
Same as aderrch2.sql plus it fails if there are any errors

adtresp.sql
A fix for customers who have more than one set of books and they installed languages other than AMERICAN English. The symptom of the bug is that responsibility names are not translated properly for non-Standard data groups.

adutcobj.sql
Count objects by object type in schema

adutconf.sql
Utility script to display configuration of Applications

adutfip.sql
Utility script to display worker information

adutfpd.sql
Utility script to display product dependency information

ADXANLYZ.sql
Analyze all tables in an ORACLE ID with estimate sample 20%

ADXCKPIN.sql
Query the shared_pool area to determine space used by PL/SQL objects and whether they have been pinned.

ADXGNPIN.sql
Creates and runs a "pin" script for all packages and functions in a given schema

ADXGNPNS.sql
Creates and runs a "pin" script for all sequences in a given schema

ADXINMAI.sql
Install tables and views used by the Applications*DBA sql scripts.

adxirc.sql
AD - index - report columns

ADXLMCBC.sql
Live Monitor, Categorize Block Contention

ADXLMLSO.sql
Live Monitor, List Session Objects

ADXLMQMS.sql
Live Monitor, Query Monitor Statistics

ADXLPFLS.sql
Lock Problem, Find Lock Source

ADXLPSLU.sql
Lock Problem, Show Lock Users

adxpriv7.sql
grant privileges to a user

ADXRCSDC.sql
Report Configuration, Show Database Configuration formerly, config.sql (rollback, tablespace, data files)

ADXRCSTG.sql
Report Configuration, Select Table Grants

ADXRSEBH.sql
Estimate the effect of a bigger SGA cache on cache hit rate.

ADXRSESH.sql
Estimate the effect of a smaller SGA cache on cache hit rate.

ADXRSFIS.sql
Find the size (blocks, extents, extpct) of the given index.

ADXRSFTS.sql
Find the size (blocks, extents, extpct) of the given table.

ADXRSFUA.sql
Report the number of blocks used and the number of extents used for every table or index in every user in the database.

ADXRSLFS.sql
Report free extents in each tablespace.

ADXRSQDP.sql
Check for cache effectiveness for dc_xxxxx parameters' values.

ADXRSRTS.sql
Produce a brief database used space report.

ADXRSSIE.sql
Generate a list of tables and indexes whose next extent to be grabbed would be too large to be allocated in their corresponding tablespaces.

ADXRSSMF.sql
List tables and indexes with a number of allocated extents close to their max_extents.

ADXRSSMS.sql
Find space used for one's own segments.

ADXRSSRS.sql
Show v$rollstat statistics.

ADXRSSSU.sql
For a username, report the number of blocks used and the number of extents used for every table or index in that username.

ADXRSSTF.sql
Produce a brief report of database fragmentation by tablespace.

ADXRSSUS.sql
Report how much space each user has.

ADXUPLUP.sql
Generate a list of processes which the given user (NOT the database account's username) owns.

ADXUPSRU.sql
Show all users that have active transactions per Rollback Segment that they are writing to.

Wait event session detail

select sid, serial#, module, action, sql_address from v$session
where sid in (select sid from v$session_wait where event like 'db file scattered read');

If you want session information for another wait event, replace 'db file scattered read' with that event's name.
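
A variant that shows the wait details alongside the session; for 'db file scattered read', p1 is the file number and p2 the starting block:

select s.sid, s.serial#, s.module, w.event, w.p1 file#, w.p2 block#
from v$session s, v$session_wait w
where s.sid = w.sid
and w.event like 'db file scattered read';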

R12 Log Files locations

A. Startup/Shutdown Log files for Application Tier in R12

Instance Top is new TOP added in R12 (to read more click here)

–Startup/Shutdown error message text files like adapcctl.txt, adcmctl.txt…
$INST_TOP/apps/$CONTEXT_NAME/logs/appl/admin/log

–Startup/Shutdown error message related to tech stack (10.1.2, 10.1.3 forms/reports/web)
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/  (10.1.2 & 10.1.3)
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/Apache/error_log[timestamp]
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/opmn/ (OC4J~…, oa*, opmn.log)
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.2/network/ (listener log)
$INST_TOP/apps/$CONTEXT_NAME/logs/appl/conc/log  (CM log files)

B. Log files related to cloning in R12

Preclone log files in source instance
i) Database Tier - /$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/(StageDBTier_MMDDHHMM.log)

ii) Application Tier - $INST_TOP/apps/$CONTEXT_NAME/admin/log/ (StageAppsTier_MMDDHHMM.log)

Clone log files in target instance

Database Tier - $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDBTier_<time>.log
Apps Tier  - $INST_TOP/apps/$CONTEXT_NAME/admin/log/ApplyAppsTier_<time>.log

—–
If your clone on DB Tier fails while running txkConfigDBOcm.pl  (Check metalink note - 415020.1)
During the clone step on the DB Tier it prompts for “Target System base directory for source homes”; here you have to give the base install directory (like ../../r12) and not the Oracle Home (like ../../r12/db/tech_st_10.2.0)
—–

C. Patching related log files in R12

i) Application Tier adpatch log - $APPL_TOP/admin/$SID/log/
ii) Developer (Developer/Forms & Reports 10.1.2) Patch - $ORACLE_HOME/.patch_storage
iii) Web Server (Apache) patch - $IAS_ORACLE_HOME/.patch_storage
iv) Database Tier opatch log - $ORACLE_HOME/.patch_storage

D. Autoconfig related log files in R12
i) Database Tier Autoconfig log :
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/adconfig.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/NetServiceHandler.log

ii) Application Tier Autoconfig log -  $INST_TOP/apps/$CONTEXT_NAME/admin/log/$MMDDHHMM/adconfig.log

Autoconfig context file location in R12 - $INST_TOP/apps/$CONTEXT_NAME/appl/admin/$CONTEXT_NAME.xml

E. R12 Installation Logs

Database Tier Installation

RDBMS $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/<MMDDHHMM>.log
RDBMS $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDBTechStack_<MMDDHHMM>.log
RDBMS $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ohclone.log
RDBMS $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/make_<MMDDHHMM>.log
RDBMS $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/installdbf.log
RDBMS $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/adcrdb_<SID>.log
RDBMS $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDatabase_<MMDDHHMM>.log
RDBMS $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/<MMDDHHMM>/adconfig.log
RDBMS $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/<MMDDHHMM>/NetServiceHandler.log

Application Tier Installation

$INST_TOP/logs/<MMDDHHMM>.log
$APPL_TOP/admin/$CONTEXT_NAME/log/ApplyAppsTechStack.log
$INST_TOP/logs/ora/10.1.2/install/make_<MMDDHHMM>.log
$INST_TOP/logs/ora/10.1.3/install/make_<MMDDHHMM>.log
$INST_TOP/admin/log/ApplyAppsTechStack.log
$INST_TOP/admin/log/ohclone.log
$APPL_TOP/admin/$CONTEXT_NAME/log/installAppl.log
$APPL_TOP/admin/$CONTEXT_NAME/log/ApplyAppltop_<MMDDHHMM>.log
$APPL_TOP/admin/$CONTEXT_NAME/log/<MMDDHHMM>/adconfig.log
$APPL_TOP/admin/$CONTEXT_NAME/log/<MMDDHHMM>/NetServiceHandler.log

Inventory Registration:

$Global Inventory/logs/cloneActions<timestamp>.log
$Global Inventory/logs/oraInstall<timestamp>.log
$Global Inventory/logs/silentInstall<timestamp>.log

F. Other log files in R12
1) Database Tier
1.1) Relink Log files :
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/make_$MMDDHHMM.log

1.2) Alert Log Files :
$ORACLE_HOME/admin/$CONTEXT_NAME/bdump/alert_$SID.log

1.3) Network Logs :
$ORACLE_HOME/network/admin/$SID.log

1.4) OUI Logs :
OUI Inventory Logs :
$ORACLE_HOME/admin/oui/$CONTEXT_NAME/oraInventory/logs

2) Application Tier
$ORACLE_HOME/j2ee/DevSuite/log
$ORACLE_HOME/opmn/logs
$ORACLE_HOME/network/logs

Tech Stack Patch 10.1.3 (Web/HTTP Server)
$IAS_ORACLE_HOME/j2ee/forms/logs
$IAS_ORACLE_HOME/j2ee/oafm/logs
$IAS_ORACLE_HOME/j2ee/oacore/logs
$IAS_ORACLE_HOME/opmn/logs
$IAS_ORACLE_HOME/network/log
$INST_TOP/logs/ora/10.1.2
$INST_TOP/logs/ora/10.1.3
$INST_TOP/logs/appl/conc/log
$INST_TOP/logs/appl/admin/log

Servlets and Sockets in oracle APPS

The default Forms connection mode in Oracle Applications R12 is “SERVLET”, whereas in Oracle Apps 11i the default Forms connection mode is “SOCKET”. So:

What is difference between socket and servlet mode in Forms ?
What are advantages and disadvantages of each ?
Can we change default R12 forms mode from servlet to Socket ?

Oracle Form Servlet Overview in apps R12
——————————————

i) In this mode, Java servlet handles communication between forms client(java based) and Oracle Forms Service (10g).

ii) All connection is via HTTP Server so there is no need to start form server and no need to open form server port on firewall between client machine and application tier.

iii) More secure as compared to Forms Socket Mode.

iv) Network traffic is higher, as the HTTP protocol is more chatty, so servlet mode is a little more network bandwidth hungry when compared with SOCKET mode.

v) No additional certificate requirement during SSL implementation for application tier, single certificate will handle both forms & web connection.

How to change from default Servlet mode (in apps R12) to Socket mode ?
———————————————————————

Refer to Oracle Metalink Note # 384241.1 Using Forms Socket Mode with Oracle E-Business Suite Release 12

Are there any network overheads of using Forms in Servlet Mode ?
—————————————————————-

Affected modules by an application patch

Query to find out modules which will be affected by a specific patch.

select distinct aprb.application_short_name as "Affected Modules"
from ad_applied_patches aap,
ad_patch_drivers apd,
ad_patch_runs apr,
ad_patch_run_bugs aprb
where aap.applied_patch_id = apd.applied_patch_id
and apd.patch_driver_id = apr.patch_driver_id
and apr.patch_run_id = aprb.patch_run_id
and aprb.applied_flag = 'Y'
and aap.patch_name in ('7654736','9440370','7666111','9466179')

Query to find out patches nodewise.

select aap.patch_name, aat.name, apr.end_date,apr.SUCCESS_FLAG
from ad_applied_patches aap, ad_patch_drivers apd,
ad_patch_runs apr,
ad_appl_tops aat
where aap.applied_patch_id = apd.applied_patch_id
and apd.patch_driver_id = apr.patch_driver_id
and aat.appl_top_id = apr.appl_top_id
and aap.patch_name like '%4562325%'



Thursday, April 28, 2011

Create a temporary tablespace

Hi

     I am giving the script to create a temporary tablespace. One temporary tablespace must be the database's default temporary tablespace. If you want to replace it with a new temporary tablespace, create the new one, make it the default temporary tablespace, and then you can drop the older one.


create temporary tablespace temp1 tempfile '/a01m/oradata/ilproddata/temp1_01.dbf' size 5000m;

alter database  default temporary tablespace temp1;

Drop tablespace temp including contents and datafiles;
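
Before dropping the old temporary tablespace, you can confirm which one is currently the database default with a standard dictionary query:

select property_value from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';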


 

Example fo create database script.

Hi

        Find below an example of creating a database in Oracle.

             create database shiva
             logfile group 1 ('D:\log\log1a.log','D:\log\log1b.log') size 10m,
             group 2 ('D:\log\log2a.log','D:\log\log2b.log') size 10m
             maxlogfiles 3
             maxlogmembers 3
             maxinstances 1
             maxdatafiles 100
             maxloghistory 100
             datafile 'D:\dat\sysdata.dbf' size 200m
             undo tablespace undo1 datafile 'D:\dat\undo1.dbf' size 100m
             default temporary tablespace temp1 tempfile 'D:\dat\temp1.dbf' size 100m;

How to compile FMB in oracle apps.(use of f60gen)

Hi

      To compile an FMB in Oracle Apps we use the f60gen command. I am giving an example of an f60gen script.

f60gen Module=VERP_RMC_UPLOAD_DPC.fmb Userid=apps/h34b1bb4 Module_Type=FORM Module_Access=FILE Output_File=$XXONT_TOP/forms/US/VERP_RMC_UPLOAD_DPC.fmx Compile_All=special

Detail about how schedule DBMS_JOBS

Hi

       If you want to schedule different dbms_jobs, here are some examples to solve your problems regarding scheduling dbms jobs.

        
DBMS_JOB


Path: {ORACLE_HOME}/rdbms/admin/dbmsjob.sql
===========================================================
Important Views:
job$      dba_jobs      all_jobs      user_jobs
dba_jobs_running        all_jobs_running  
user_jobs_running

=======================================================================

Error which are commonly generated while operating jobs
Error Code Reason:      ORA-00001 Unique constraint (SYS.I_JOB_JOB) violated
                                    ORA-23420 Interval must evaluate to a time in the future
=========================================================================
Interval Definitions:     Execute daily 'SYSDATE + 1'
              Execute once per week 'SYSDATE + 7'
              Execute hourly 'SYSDATE + 1/24'
              Execute every 10 min. 'SYSDATE + 10/1440'
              Execute every 30 sec. 'SYSDATE + 30/86400'
              Do not re-execute NULL
===============================================================
Why any job Failed:
Oracle has failed to successfully execute the job after 16 attempts.

OR
You have marked the job as broken, using the procedure DBMS_JOB.BROKEN

Once a job has been marked as broken, Oracle will not attempt to execute the job until it is either marked not broken, or forced to execute by calling DBMS_JOB.RUN.
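
To check how many times a job has failed and whether it is currently marked broken, query the standard DBA_JOBS columns (job 42 is just an example number):

select job, what, last_date, next_date, failures, broken from dba_jobs where job = 42;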
=======================================================================
If you want to make any job forcefully broken then:

Use : exec dbms_job.broken(42, TRUE)

The following example marks job 14144 as not broken and sets its next execution date to the following Monday:
exec dbms_job.broken(14144, FALSE, NEXT_DAY(SYSDATE, 'MONDAY'));
===========================================================
Change any attribute of JOB:
exec dbms_job.change(14144, NULL, NULL, 'SYSDATE + 3');

=======================================================================
Assign a specific instance to run a job:

To do so, First get the instance information by:
SELECT instance_number FROM gv$instance;
then run the job :
exec dbms_job.instance(42, 1);
===========================================================
Reset the job Interval:
exec dbms_job.interval(179, 'TRUNC(SYSDATE) + 24/24');
Submit a Job:
exec dbms_job.isubmit(4242, 'MYPROC', SYSDATE);
Reset next Execution:
exec dbms_job.next_date(134, SYSDATE + 1/24);
To remove a job from the job queue: First find the job to remove through:
SELECT job FROM user_jobs;
then
exec dbms_job.remove(23);
To run a job immediatly:
exec dbms_job.run(job_no);
Various method to schedule a job to run with different timing:
-- To run everynight at midnight starting tonight
exec dbms_job.submit(:v_JobNo, 'proc1;', TRUNC(SYSDATE)+1, 'TRUNC(SYSDATE)+1');

-- To run every hour, on the hour, starting at the top of the hour
exec dbms_job.submit(:v_JobNo, 'proc2;', TRUNC(SYSDATE+(1/24), 'HH'),
'TRUNC(SYSDATE+(1/24),''HH'')');

-- To run every hour, starting now
exec dbms_job.submit(:v_JobNo, 'proc3;', INTERVAL => 'SYSDATE+(1/24)');

-- To run every ten minutes at 0,10,20,etc. minutes past the hour,
-- starting at the top of the hour

exec dbms_job.submit(:v_JobNo, 'proc4;', TRUNC(SYSDATE+(1/24), 'HH'),
'TRUNC(SYSDATE+(10/24/60),''MI'')');

-- To run every 2 min., on the minute, starting at the top of the
-- minute

exec dbms_job.submit(:v_JobNo, 'proc5;', TRUNC(SYSDATE+(1/24/60), 'MI'),
'TRUNC(SYSDATE+(2/24/60),''MI'')');

-- To run every two minutes, starting now
exec dbms_job.submit(:v_JobNo, 'proc6;', INTERVAL => 'SYSDATE+(2/24/60)');

-- To run every half hour, starting at the top of the hour
exec dbms_job.submit(:v_JobNo, 'proc7;', TRUNC(SYSDATE+(1/24), 'HH'),
'TRUNC(SYSDATE+(30/24/60),''MI'')');
execute dbms_lock.sleep(300);

===========================================================

Online Activity Performed on LGSOUTH DATABASE 10.100.201.12

select distinct job,log_user from dba_jobs where log_user='CSN';

job=62,122,149,185

BEGIN   DBMS_JOB.ISUBMIT(JOB=>62,WHAT=>'SP_CALLSUM_REPORT;',
NEXT_DATE=>TO_DATE('2009-01-25:06:00:00','YYYY-MM-DD:HH24:MI:SS'),
INTERVAL=>'TRUNC(SYSDATE)+1+(6/24)',NO_PARSE=>TRUE); END;

PL/SQL procedure successfully completed.

EXEC DBMS_JOB.NEXT_DATE(723,NEXT_DATE=>TO_DATE('2009-04-08:10:00:00','YYYY-MM-DD:HH24:MI:SS'));


BEGIN   DBMS_JOB.ISUBMIT(JOB=>122,WHAT=>'SP_REPAIRLIST_REPORT_CENTRAL;',
NEXT_DATE=>TO_DATE('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),INTERVAL=>'TRUNC(SYSDATE)+1+(1/24)',
NO_PARSE=>TRUE); END;

PL/SQL procedure successfully completed.

BEGIN  DBMS_JOB.ISUBMIT(JOB=>149,WHAT=>'SP_PDP_LIVE_REPORT;',
NEXT_DATE=>TO_DATE('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),
INTERVAL=>'FNC_DYNAMIC_SCHEDULE(8,18,90)',NO_PARSE=>TRUE); END;

PL/SQL procedure successfully completed.


BEGIN DBMS_JOB.ISUBMIT(JOB=>185,WHAT=>'SYNC_PO_RECEIPT;',
NEXT_DATE=>TO_DATE('2009-01-25:08:15:00','YYYY-MM-DD:HH24:MI:SS'),
INTERVAL=>'FNC_DYNAMIC_SCHEDULE (8.25, 15, 465)',NO_PARSE=>TRUE); END;

PL/SQL procedure successfully completed.




Mandatory AIX commands for Oracle DBA and Apps DBA

Hi

     Here you will find some AIX commands which help to manage the database.


BASIC FILE HANDLING
ls
- list files in directory; use with options
·         -l (long format)
·         -a (list . files too)
·         -r (reverse order)
·         -t (newest appears first)
·         -d (do not go beyond current directory)
·         -i (show inodes)
For a more detailed description of ls see ls -l
more
- used to control input by pages - like the dos /p argument with dir. e.g.
$ more /etc/motd
*******************************************************************************
* *
* *
* Welcome to AIX Version 4.1! *
* *
* *
* Please see the README file in /usr/lpp/bos for information pertinent to *
* this release of the AIX Operating System. *
* *
* *
*******************************************************************************
motd: END
Useful keys for use with more:
·         b (back a page)
·         ' (go to top)
·         v (vi the file)
·         / (Search)
·         q (quit)
·         ' ' (down a page)
·         Control-G (View current line number
·         <CR> (down a line)
See also pg which is extremely similar
pg
- used to control input by pages - like the dos /p argument. pg performs the same function as the more command but has different control, as it is based on ex
Helpful keys for pg:
·         1 (go to top)
·         $ (go to bottom)
·         h (help)
·         / (Search)
·         ? (Search back)
·         q (quit)
·         -1 (back a page)
pwd
- show present working directory. e.g.
$ pwd
/usr/live/data/epx/vss2

To change the current working directory use cd
cd
- change directory (without arguments, this is the same as $ cd $HOME or $ cd ~)
cp
<source> <destination> - copies a file from one location to another. e.g.
$ cp /etc/hosts /etc/hosts.backup # make a backup of the hosts file
$ cp /etc/motd /tmp/jon/ # Copy file /etc/motd to directory /tmp/jon/

Options
·         -f (to force the copy to occur)
·         -r (to recursively copy a directory)
·         -p (to attempt to preserve permissions when copying)
synonym: copy
mv
<source> <destination> - move a file from one location to another. e.g.
$ mv /tmp/jon/handycommands.txt . # move handycommands in /tmp/jon to current directory
$ mv -f vihelp vihelp.txt # Move file vihelp to vihelp.txt (forced)

Options
·         -f (to force the move to occur)
·         -r (to recursively move a directory)
·         -p (to attempt to preserve permissions when moving)
synonym: move
.
rm
<filename> - removes a file. e.g.
$ rm /tmp/jon/*.unl # remove all *.unl files in /tmp/jon
$ rm -r /tmp/jon/usr # remove all files recursively
Options
·         -f (to force the removal of the file)
·         -r (to recursively remove a directory)
du
Recursively lists directories and their sizes. e.g.
$ du /etc # list recursively all directories off /etc
712 /etc/objrepos
64 /etc/security/audit
536 /etc/security
104 /etc/uucp
8 /etc/vg
232 /etc/lpp/diagnostics/data
240 /etc/lpp/diagnostics
248 /etc/lpp
16 /etc/aliasesDB
16 /etc/acct
8 /etc/ncs
8 /etc/sm
8 /etc/sm.bak
4384 /etc
The sizes displayed are in 512-byte blocks. To view them in 1024-byte (1K) blocks use the option -k
lp -d<Printername> <Filename>
send file to printer. e.g. $ lp -dhplas14 /etc/motd # send file /etc/motd to printer hplas14
$ lp /etc/motd # send file /etc/motd to default printer
cat
- print a file to stdout (screen). e.g.
$ cat /etc/motd # display file /etc/motd to screen
*******************************************************************************
* *
* *
* Welcome to AIX Version 4.1! *
* *
* *
* Please see the README file in /usr/lpp/bos for information pertinent to *
* this release of the AIX Operating System. *
* *
* *
*******************************************************************************
cat is also useful for concatenating several files. e.g.
$ cat fontfile IN* > newfile # appends fontfile and all files beginning with IN to newfile
Though this might seem an essentially useless command, because most unix commands always take a filename argument, it does in fact come in extremely useful at more advanced levels. Awards are given out occasionally for the most useless usage of cat. If an option of '-' is specified, cat will take its input from stdin.
INPUTS, OUTPUTS AND WILDCARDS

Unix commands generally get their information from the screen, and output to it. There are three main 'streams' which unix uses to get/place it's information on. These streams are called:
·         stdin (Standard Input) - normally, what you type into the screen
·         stdout (Standard Output) - normally, what is output to the screen
·         stderr (Standard Error) - normally, error messages which go to the screen

any of these may be redirected by the following symbols:
·         < <filename> take input from <filename> rather than the screen. e.g.
$ ksh < x # will read all commands from the file x and execute them using the Korn shell.
·         > <filename> take output from the command and place it in <filename>. e.g.
$ ls > x will place the output of the command 'ls' in the file x
·         >> <filename> take output from the command and append it to <filename>. e.g.
$ ls /tmp >> x will place the output of the command 'ls' and append it to the file x
·         2> <filename> take any error messages from the command and put it in <filename>. e.g.
$ ls /tmp 2>/dev/null would throw away any error messages that are produced by ls (sorry, /dev/null is a file that, if written to, the information disappears never to be seen again).
·         command1 | command2 Pipe - Takes the standard output of the first command, and turns it into the standard input of the second command. The output of the second command will then be put on the standard output (which, again, may be a pipe) e.g.
$ ls | more will send the output of 'ls' into the command 'more', thus producing a directory listing which stops after every page. This method is called piping.

command1 & - the ampersand (&) forces command1 to run in the background, so that you may continue to type other commands in the shell while command1 executes. It is not advisable to run a command in the background if it outputs to the screen, or takes its input from the screen

See also tee which allows splitting of the input stream and output to several different places at once.

Wildcards
B Bib Baby Fox Fib

There are various wildcards which you may use. One is '*' which means 0 or more characters. e.g. 'B*' will match 'B,Bib and Baby' from the list above, another wildcard is '?' which matches 1 character, e.g. '?ib' will match 'Bib and Fib'. Wildcards differ depending on the program in use: awk derivatives (awk,sed,grep,ex,vi,expr and others) have the following special characters:
·         ^ beginning of the line
·         $ end of the line
·         . any character
·         * zero or more of the preceding character
·         .* any number of characters
·         \n Carriage return
·         \t Tab character
·         \<char> Treat <char> as is (so, \$ would try to match a '$')
Given the following four lines:

Chargeable calls in bundle: $47.50
Chargeable calls out of bundle: $20.50
Other bundle charges: $0.00
Total Charge: $20.50

$ grep "^Charg.*bundle.*\$.*"
would match the first two lines.
In english - match all lines which start with 'Charg', then have any number of characters and then the word 'bundle', then have any number of characters, and then a dollar symbol, and then have any number of characters following to the end of the line
OTHER FILE HANDLING COMMANDS
type <command>
- show where the source of a command is: e.g.
$ type sendmail
sendmail is /usr/sbin/sendmail

This command is merely an alias for 'whence -v'
whence <command>
- show where the source of a command is: shell builtin command. See type
Use option: -v for verbose mode
which <command>
- show where the source of a command is held. Almost the same as type and whence
chmod <Octal Permissions> <file(s)>
- change file permissions. e.g.
$ chmod 666 handycommands
changes the permissions (seen by ls -l) of the file handycommands to -rw-rw-rw-
r = 4, w = 2, x = 1. In the above example if we wanted read and write permission for a particular file then we would use r + w = 6. If we then wanted to have the file have read-write permissions for User, Group and All, then we would have permissions of 666. Therefore the command to change is that above.
$ chmod 711 a.out
Changes permissions to: -rwx--x--x
Additional explanation of file permissions and user/group/all meaning are given in the description of ls -l
You may specify chmod differently - by expressing it in terms of + and - variables. For example
$ chmod u+s /usr/bin/su
will set the "setuid bit" on su, which allows whoever runs it to gain the same access on the file as the owner of it. What it means is "add s permission to user". So a file that started off with permissions of "-rwxr-xr-x" will change to "-rwsr-xr-x" when the above command is executed. You may use "u" for owner permissions, "g" for group permissions and "a" for all.
chown <Login Name> <file(s)>
- Change ownership of a file. Must be done as root. e.g.
chown informix *.dat # change all files ending .dat to be owned by informix
chgrp <Group Name> <file(s)>
- Change group ownership of a file. Must be done as root. e.g.
chgrp sys /.netrc # change file /.netrc to be owned by the group sys
mvdir <Source Directory> <Destination Directory>
- move a directory - can only be done within a volume group. To move a directory between volume groups you need to use mv -r
or find <dirname> -print | cpio -pdumv <dirname2>; rm -r <dirname>
cpdir <Source Directory> <Destination Directory>
- copy a directory. See mvdir
rmdir <Directory>
- this is crap - use rm -r instead
mkdir <Directory>
- Creates a directory. e.g.
$ mkdir /tmp/jon/ # create directory called /tmp/jon/
find <pathname> -name "searchkey" -print
- search for files - e.g.
$ find . -name "system.log" -print # will find all files (with full path names) called system.log - Wildcards are allowed, e.g.
$ find /tmp -name "sl.*" -atime +0 -print # will print out all files in /tmp/ that start sl. and which haven't been accessed for a day. Helpful for finding lost files, or finding stuff in enormous directories. Other useful options include:
·         -atime +<days> - finds files that haven't been accessed for 1+days also, ctime (creation time) and mtime (modify time)
·         -prune - stay in current directory - don't look in dirs off the directory specified in path names - e.g.
$ find /tmp -user "compgnc" -prune -print # will find all files in /tmp which user compgnc owns and will not search lower directories (e.g. /tmp/usr)
·         -size +<blocks> - finds files that are bigger than <blocks>
·         -exec rm {} \; - remove all files found...dangerous command - e.g.
$ find /tmp -name "sl.*" -atime +0 -prune -print -exec rm {} \; # will remove all files in /tmp starting 'sl.' that haven't been accessed for a day. Spacing of this command is important! Most exec commands are possible:
$ find /usr2/calltest -name "*.4gl" -print -exec grep "CHECK" {} \; | pg
·         -ok - like exec only it prompts for confirmation after each occurrence. e.g.
$ find /tmp/disk7 -name "*" -print -ok doswrite -a {} {} \; # Please note that you MUST end any exec or ok option with an escaped semicolon (\;).
·         -user <username> - finds all files owned by <username>
·         -group <groupname> - finds all files with a group of <groupname>
ln -s <Directory> <symbolic link>
- create a symbolic link to a different directory from current directory: e.g.
$ ln -s /usr/uniplex/compgnc /u/compgnc/uni # would create a link called 'uni' in the directory /u/compgnc. From then on, typing cd uni would cd to /usr/uniplex/compgnc. You can also give two files the same name. e.g.
$ ln make.e_enquiry makefile # would link the two files so that they are identical, and when you change one, you change the other. You may also create a symbolic link to a host(!). Instead of typing 'rlogin hpserver' every time, type
$ ln -s /usr/bin/rsh hpserver # this creates a link (put it in a directory on your PATH) so that whenever you type 'hpserver' it will execute a remote shell on that machine.
Option -f forces the link to occur
head -<Number> <FileName>
- prints out the first few lines of a file to screen. Specify number to indicate how many lines (default is 10). e.g. If you sent something to a labels printer and it wasn't lined up, then you could print the first few labels again using:
$ head -45 label1.out | lp -dlocal1
tail -<Number> <FileName>
- prints out the end of a file. Very similar to head but with a very useful option '-f' which allows you to follow the end of a file as it is being created. e.g.
$ tail -f vlink.log # follow end of vlink.log file as it is created.
wc -<options> <FileName>
- Word Count (wc) program. Counts the number of chars, words, and lines in a file or in a pipe. Options:
·         -l (lines)
·         -c (chars)
·         -w (words)
To find out how many files there are in a directory do ls | wc -l
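Another example (the filename here is just an illustration):
$ wc -l /etc/hosts # count the number of lines in /etc/hosts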
split -<split> <FileName>
- Splits a file into several files. e.g.
$ split -5000 CALLS1 # will split file CALLS1 into smaller files of 5000 lines each called xaa, xab, xac, etc.
tr <character> <other character>
- translates characters. e.g.
$ cat handycommands | tr "\t" " " # will take the file handycommands and translate all tabs into spaces. Useful when messing about with awk or you need to convert some input (e.g. that from tty) to a unique filename that does not contain special characters. e.g.
$ tty | tr "/" "." # produces for example .dev.pts.7
od <options> <filename>
- od converts nasty (binary save) files into character representations. Useful when back-compiling, examining raw .dat files, etc. Use with option '-c' for character display (recommended).
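e.g. (the filename is only an example):
$ od -c active.dat | pg # show the raw contents of active.dat character by character, a page at a time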
script
- starts recording everything in the shell to a file by default 'typescript'. Press ^D to finish the script. Provides a log of everything used. Has almost the same effect as $ ksh | tee typescript
Used for debugging shells, seeing error messages which flash off the screen too quickly, etc.
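e.g. (the log filename here is just a suggestion):
$ script /tmp/install.log # record everything typed and displayed into /tmp/install.log instead of ./typescript - press ^D to stop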
cut
- cut's the file or pipe into various fields. e.g.
$ cut -d "|" -f1,2,3 active.unl # will take the file active.unl which is delimited by pipe symbols and print the first 3 fields options:
·         -d <delimiter>
·         -f <fields>
Not too useful as you can't specify the delimiter as merely white space (it defaults to tab). Alternatively, you can 'cut' up files by character positioning (useful with a fixed width file). e.g.
$ cut -c8-28 "barcode.txt" # would cut columns 8 to 28 out of the barcode.txt file.
paste
- paste will join two files together horizontally rather than just tacking one on to the end of the other. e.g. If you had one file with two lines:
Name:
Employee Number:
and another file with the lines:
Fred Bloggs
E666
then by doing:
$ paste file1 file2 > file3 # this would then produce (in file3).
Name: Fred Bloggs
Employee Number: E666
Note that paste puts horizontal tabs between the files, so you may need something like sed 's/<tab>//g' (typing a literal tab character between the first two slashes) to get rid of these.
sort <filename>
- sorts the information from the file and displays the result on standard output (stdout). e.g.
$ sort /tmp/list_of_names # will sort the file into alphabetical order, and display it to the screen. Useful with option '-u' to filter out duplicates.
uniq <filename>
- filters out all duplicate lines from a file or input stream (file or stream must be sorted!). Useful with option -c which merely produces a count of unique lines.
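e.g.
$ sort /tmp/list_of_names | uniq -c # sort first, then show each distinct line together with a count of how many times it appeared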
ex <filename>
- ex is an old line editor, and almost never used now (similar to DOS edlin if you remember that - me, I've repressed it). You are most likely to come across ex within the vi editor - all commands beginning with a colon (:) are ex commands
EXTREMELY USEFUL COMMANDS
ls -l
- lists files in a directory in long format. You cannot do without this. Here's a more detailed explanation. e.g.
$ ls -l

Part 1      Part 2  Part 3    Part 4  Part 5  Part 6        Part 7
-rw-rw-rw-  1       root      staff   28      Jan 16 09:52  README
-rw-------  1       compjmd   staff   4304    Jun 24 12:21  tabledict
drwxrwxrwx  2       compjmd   staff   512     Jul 1 16:30   testdir
-rwxrwx---  1       compjmd   system  0       Jul 1 16:30   a.out


... is a sample listing.

·         Part 1: Permissions - see chmod for explanation of these. If the first character is anything other than '-', then the entry is not really a file at all, but something else, key:
·         -: normal file
·         d: directory
·         l: symbolic link created by 'ln'
·         c or b: device of some sort
You may sometimes see an 's' where the 'x' should be in the permissions - this is normally on executable files which change other files. e.g. Permissions of 'sqlexec', the file that executes all informix queries, should be '-rwsr-sr-x' - this then accesses tables with permissions of '-rw-rw----', where the table files are owned by informix (group informix). The 's' flag allows changing of the database tables at a program level, but not at a unix level (you can change contents via sqlexec but not use the 'rm' command on the db file).
·         Part 2: Number of links to this file (directories always have 2+).
·         Part 3: The owner of the file - e.g. If the owner is 'compjmd' and permissions are set to -rw------- then only the user 'compjmd' may read or write to that file. Again, if owner is "compjmd" and permissions are -r-x------ then only the user compjmd may read or execute that file. Only the owner of a file or root may chmod it.
·         Part 4: The group ownership of the file - (bloody hell, this is getting complicated). On a unix system there are certain 'groups' which users can belong to, held in the file '/etc/group'. You will notice that in this file there will be a main group, e.g. 'staff' which contains every user. Which means that any user listed under staff is in that group.....right...every file has a group attached to it. Which means that if a file had permissions ----rw---- and a group reference of 'system', then only users who were part of the group system could modify that file. To see which groups the current user belongs to do id. Sorry if this wasn't comprehensible but you should never need to use this anyway(!).
·         Part 5: Size of the file in bytes
·         Part 6: Time of last modification
·         Part 7: The name of the file
Useful options (and there are loads more). All may be combined except where specified:
·         ls -a show files starting with '.' too
·         ls -A show files starting with '.' but not '.' or '..'
·         ls -c must be used with either option l and/or t - displays/sorts by modification time
·         ls -d do not show subdirectory listings
·         ls -i display the i-node number of each file
·         ls -t Put the listing in time order (see options u and c)
·         ls -r Put the listing in reverse order - usually used with a -t
·         ls -u must be used with either options l and/or t - displays/sorts by last-access time
vi <filename>
- love it or loathe it - the standard operating system text-file editor. See Related help file. Vi You can also use 'view' which forces Read only (-R opt). vi +<number> enters the file at the specified line no. Also, vi +/<Search pattern> will enter the file and move to the first occurrence of <Search pattern>. e.g.
$ vi +/"love it or loathe it" handycommands
Users new to vi hate it. I personally managed to get through University without using it ever (I used Joe's own editor instead). If I accidentally went into vi, I had to ^Z and kill the job. Sigh. Five years of using vi means that I'm getting a little better at it now... (I'm actually typing this now in a vi-clone for Windows).
grep <pattern> <file(s)>
- a phenomenally useful command which matches strings within files - e.g.
$ grep D7523 mcall_reps.out # will find all the lines in mcall_reps.out that have the string "D7523" in it. Also incredibly useful for things like pipes,e.g.
$ du | grep cred # (in /home directory will show all users that have 'cred' in their title). You may use regular expression matching - e.g.
$ grep "main.*{" x.c # would match any line containing 'main' and an open curly bracket at any point in the line afterwards. There are two variations of grep - fgrep and egrep - which do virtually the same thing as grep, but are either faster (having fewer options) or more complex (but slower). See also the section on Wildcards
Options:
·         -v : show all lines that do not contain pattern.
·         -i : don't bother matching case
·         -y : same as -i (an older form found on some versions of grep)
·         -c : show count of matching lines rather than the lines themselves
·         -l : show filenames instead of matching lines.
ksh -o vi
- The Korn Shell - pros might notice that I don't mention using the C-Shell at all - I've never used it, so that's why it doesn't appear. A Shell is a program that you run your commands in. Typing exit will end the current shell. The -o vi option of the korn shell allows vi commands to work at the shell prompt after pressing escape. For example, pressing escape and then 'k' will bring up the last command used in the shell.
awk
- this would be a damn useful command if I knew how to use it properly. see alternative page awkhelp
man <command>
- look at the manual, e.g.
$ man ps # will list the manual page for the command ps
GENERAL INFORMATION COMMANDS
smon
- monitors system usage - F5 shows processes which are hogging the machine. Not available on AIX 4.1 and above, sadly.
uptime
- shows how long the system has been up and how hard it is being hammered. The load average fields show how many jobs on average are waiting. <1 or less is very good, around 5 is pretty bad (though not unusual), >10 the machine is being seriously hammered.
who
- list users who are currently logged on (useful with option 'am i' - i.e. 'who am i' or 'whoami')
w
- list users and what they are doing, including idle time. The first line is the output from uptime
id
- similar to whoami except that it does a direct check to see who you are - who only checks /etc/utmp so any su commands will be ignored.
ps
- list processes currently running, by default on the current shell. Useful with options:
·         -t <tty> - show all processes running on a terminal
·         -ef - show all processes
·         -u <loginname> - show all processes owned by a user
·         -flp <processid> - show as much information as you can about a process number
·         -aux - show processes in order of usage of the processors. Useful to see what processes are hogging system resources.
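e.g. (the process name here is only an illustration):
$ ps -ef | grep informix # list every process on the box, then keep only the lines mentioning informix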
fuser -u <filename>
- show who is using a file.(system hogging command). Useful when trying to work out who has locked a row or table in an informix database for example.
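e.g. (hypothetical filename):
$ fuser -u /usr2/calltest/active.unl # show the process ids (and the login name of each) of anyone holding the file open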
lpstat -p <printer>
- show the current status of a printer and any jobs in the queue. lpstat without arguments prints all of them.
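e.g. (the printer name is only an example):
$ lpstat -p local1 # show the status of the printer local1 and any jobs queued on it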
enable <printer>
- enable a printer queue. You must be root or a member of the printq group to run this command.
disable <printer>
- disable a printer queue. You must be root or a member of the printq group to run this command.
enq <various parameters>
- examine spool queue for printers.
uname -a
- will show you what machine you're currently on.
ipcs
- list semaphores and shared memory.
ipcrm -s <semaphorenumber>
- remove semaphore or shared memory.
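e.g. (the id is hypothetical - take the real one from the ipcs listing):
$ ipcs -s # list just the semaphores
$ ipcrm -s 12345 # remove the semaphore with id 12345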
crontab
- use -l to list all regular scheduled jobs. To alter them, use option -e
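e.g.
$ crontab -l > /tmp/crontab.bak # take a copy of your scheduled jobs (a sensible precaution before using crontab -e)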
at <now + ?? seconds/minutes/hours/days/years>
- perform a job at a specified time. (Useful for running something at a later date). at retains the current environment. e.g.
$ at now + 5 minutes
echo "Phone Julie McNally" > /dev/tty616
^D
job compjmd.389748732 will be run at ???
Will echo to tty616 the message "Phone Julie McNally" in 5 minutes. e.g.2
$ at 0331235930
echo "April fools day!" > /dev/console
^D
will echo "April fools day!" to the console at 23:59:30 (just before midnight) on the 31st of March. Format for this is: [YYYY]MMDDhhmmss. at jobs are sometimes used in place of crontabs because if the machine is off when the crontab job is meant to take place, the job never happens; at jobs run automatically when the machine is switched back on if it was down at the time. Typing at -l will show you all the at jobs you have queued, and at -r <atjob> will remove an at job (only the owner or root is allowed to do this).
date
- show current date and time. This command may also be used to set the system clock (ONLY WHEN EVERYONE IS LOGGED OFF) with a root user id. A date change is never simple, even when adjusting things by an hour. The safest way to do it is to change the date then reboot the machine because otherwise the crontab daemon may start doing jobs at odd times. I believe there might be a 'go slow/fast' option to set the clock, and the clock will then run 'slower/quicker' until it catches up with the required time.
last <username>
- shows a list of recent logins. It looks at /var/adm/wtmp so it only shows initial logins, and not whether those users have been su'd to.
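e.g.
$ last compjmd | head # show the ten most recent logins by user compjmd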
fileplace -pv <filename>
- show the physical (as in disk location) location of a file. Useful for tracing informix files, and perhaps for working out whether defragmentation copying is required.
SYSTEM COMMANDS
kill -<Signal> <process>
- sends a signal (normally a kill) to a process. kill -9 terminates the job no questions asked, kill -15 tries to clear up as much as possible - e.g. remove semaphores and such-like. Other signals may be sent as well, see manual and /usr/include/sys/signal.h to see what signals you can send to a process.
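e.g. (the process id is hypothetical - get the real one from ps):
$ kill -15 24680 # ask process 24680 to shut down cleanly; if it ignores that, resort to kill -9 24680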
renice <priority> <process>
- make a process not hog the system so much by setting its nice value.
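e.g. (hypothetical process id - exact syntax varies a little between unix flavours):
$ renice -n 10 -p 24680 # lower the priority of process 24680 so it takes a smaller share of the CPU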
smit
- system admin program for AIX
df
- list volume groups + usage. see also lsvg. Usually used with the -k flag so the number of blocks is displayed in 1024-blocks.
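e.g. (the filesystem name is only an example):
$ df -k /usr2 # show the size, free space and %used of the filesystem holding /usr2, in 1024-byte blocks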
cu -l <device>
- log on to device such as a pad or a modem. See related files /etc/uucp/* and /etc/locks and /etc/services
stty sane
- Changes terminal settings back to normal. If a tetra module for example crashes your screen so that no keys function except ^C which doesn't even do very much then typing ^Jstty sane^J should cure the problem. To fully cure the problem you also need to type stty tab3 (and stty -ixon if you're feeling a little overzealous)
stty
- allows you to change terminal settings such as the interrupt key, quit key, etc. e.g.
$ stty intr ^A # would change the interrupt key to being control-A
$ stty quit ^L # would set the quit key (normally ^\) to control-L. Other key changes are:
·         erase (normally ^H)
·         xon (normally ^Q)
·         xoff (normally ^S)
·         eof (normally ^D)
To really annoy a systems administrator, change interrupt to 't' and quit to '^D' . hehehehehehe
lscfg
- show all connected devices
lsvg
- list volume groups (see related file diskhelp)
lspv
- list physical disks (and see related file diskhelp)
lspv without arguments will produce a list of all the hard-disks used. lspv <hard-disk-name> will produce a list of information about the hard disk. lspv -l <hard-disk-name> will show any logical volumes which are mapped on to that drive.
lsdev
- list devices. Options:
·         -C list Configured devices
·         -P list Possible devices
produces different output when you are root.
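e.g. (a sketch - device class names may differ slightly between AIX releases):
$ lsdev -C -c tty # list all the configured tty devices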
mkdev
- make devices. e.g. To make a tty:
# Script to add a tty. Options that need amending are:
# -l name of tty to be created - e.g. '-l tty600' will create
# a tty called 'tty600'
# -p RAN name
# -w Port number on RAN
# -a Attributes (e.g. to set up auto login, etc.)
mkdev -c tty -t 'tty' -s 'rs232' -l tty433 -p sa2 -w 2 -a term='wyse50' -a forcedcd='enable' -a login='enable' -a speed='19200'
e.g. To create a printer (raw device):
mkdev -c printer -t 'osp' -s 'rs232' -p 'sa3' -w '10' -l label2 -a xon='yes' -a dtr='no' -a col=500
It is highly recommended that you make and change devices using smit
chdev
- change devices. See mkdev
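e.g. (hypothetical - uses the tty created in the mkdev example above; a busy device may need to be stopped first):
$ chdev -l tty433 -a speed='9600' # change the line speed of tty433 to 9600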
cc
- c compiler, use with
·         -o <object> to specify a target instead of a.out
·         -O optimise
·         -w or -W all warning flags.
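e.g. (the filenames are only an illustration):
$ cc -O -o parser parser.c # compile parser.c with optimisation and call the result 'parser' instead of a.out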
shutdown
- shutdown the system so that it may be switched off. Rather obviously, this may only be run by root. Options:
·         -f shuts the system down immediately (rather than waiting for a minute)
·         -R reboot the system immediately after halt
oslevel
- show the current revision of the operating system.
CONNECTIVITY
exit
- end current shell process. If you log in, then type this command, it will return you to login. ^D (control-D) and logout (in some shells) does the same.
rlogin
- login to a remote machine, e.g.
$ rlogin hollandrs # log in to machine called hollandrs
Useful with -l option to specify username - e.g.
$ rlogin cityrs -l ismsdev # log in to machine cityrs as user ismsdev. For further info about the trust arrangement see the .rhosts file and /etc/hosts.equiv.
telnet
- very similar to rlogin except that it is more flexible (just type telnet with no arguments and then '?' to see the options). Useful because you can specify a telnet to a different port.
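e.g. (the hostname and port are only an illustration):
$ telnet hollandrs 25 # connect to port 25 (the mail port) on hollandrs rather than the normal login port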
ftp
- File Transfer Protocol - a quick and easy method for transferring files between machines. The .netrc file in your $HOME directory holds initial commands. type ftp without arguments and then '?' to see options)
rcp
- Remote copy. Copies a file from one unix box to another, as long as they trust each other (see the .rhosts file or /etc/hosts.equiv). Options
·         -f (to force the copy to occur)
·         -r (to recursively copy a directory)
·         -p (to attempt to preserve permissions when copying)
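e.g. (hypothetical filenames - the two machines must trust each other):
$ rcp -p /tmp/chk hollandrs:/tmp/chk # copy /tmp/chk to the same place on hollandrs, keeping its permissions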
su - <loginname>
- switch user; the '-' option means that the user's .profile is run, while without it you merely assume the id and permissions of the user, without (for example) changing PATH and DBPATH, e.g.
$ su - root # become root
$ su root # gain permissions of root but don't change the current environment variables
$ su - vlink # switch to user vlink
If you are root, you may su to any other user without being prompted for a password. su without arguments is the same as 'su root'. Note that the 'su' option is not available on all UNIX machines as it can crash some of them.
ping <hostname>
- check that <hostname> is alive and well (do not expect an immediate response from a machine that is linked over an ISDN line). Firewalls often block ping packets after the Ping of Death so quite often you'll find you can't ping internet sites either. Options include:
·         -q ping quietly
·         -i<no> wait no of seconds between each packet sending. The default is 1 second. If you are using ping to keep an ISDN line up then using something like $ ping -i 5 -q hollandrs is ideal.
·         -f Never use this! Sends as many packets as it possibly can as fast as possible, used for network debugging and is likely to slow networks horribly when used. Known as 'flood' pinging.
·         -c <no> send no of packets before giving up
To check that your machine can ping at all, try pinging 127.0.0.1 - this is the loopback address, so it tests the local TCP/IP setup without the packets ever leaving the machine.
rsh <hostname> <commands>
- remote shell - e.g.
$ rsh altos more /tmp/chk # will run the command 'more /tmp/chk' on the machine called altos. Useful in pipes for example. rsh on its own will execute a login. Use option '-l' to specify the logon name. You can also use rcmd and remsh on other flavours of unix.
host <ip address>
- lookup the ip address in the /etc/hosts file and give its name
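e.g. (the address is only an illustration):
$ host 192.168.1.10 # prints the hostname that the address maps to (giving a name instead returns the address)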
TAPES AND DISKS
Please see this page for more information on disks in AIX

dd if=<filename or device> of=<filename or device> bs=<Block Size> conv=sync
- direct (and I mean DIRECT) copy, normally to tape. Archaic syntax and very rarely used. flags:
·         if - input filename or device
·         of - output filename
·         bs - block size
·         conv - conversion options; conv=sync pads each input block out to the full block size
e.g. To write a file to tape use
$ dd if=/etc/hosts of=/dev/rmt0 bs=1024 conv=sync # write hosts file to tape using dd
cpio
stands for copy in-out, and is extremely powerful if you can cope with the innumerable flags that you have to use(!)
$ cpio -iBcvumd "etc/hosts" </dev/rmt0 # Grab /etc/hosts file from tape
$ find /etc -print | cpio -oBcv >/dev/rmt0 # Write the contents of the /etc directory to tape
$ find /etc -print | cpio -pdumv /usr2/etcbackup/ # copy directory /etc to /usr2/etcbackup and retain all permissions.
meaning of the flags:
·         i - input
·         o - output
·         B - Block size of 5120 bytes
·         c - read/write header info
·         v - list file names
·         u - unconditional copy - overwrites existing file.
·         m - keep modification dates
·         d - creates directories as needed.
·         t - generate listing of what is on the tape.
·         p - preserve permissions.
tapeutil -f <devicename> <commands>
- A program which came with the tape library to control its workings. Called without arguments it gives a menu. It is useful for doing things like moving tapes from the slot to the drive. e.g.
$ tapeutil -f /dev/smc0 move -s 10 -d 23 # which moves the tape in slot 10 to the drive (obviously, this will depend on your own individual tape library, may I suggest the manual?).
doswrite -a <unix file> <dos file>
- copy unixfile to rs6000's floppy disk drive in DOS format. -a option expands certain characters, for certain ascii conversions.
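e.g. (hypothetical filenames):
$ doswrite -a /tmp/report.txt report.txt # copy the unix file /tmp/report.txt to the floppy as REPORT.TXT, with ascii conversion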
dosdir <directory>
- show list of files on a dos floppy disk. Useful with option -l (long format). Like dos command 'dir'
dosread -a <DOS file> <unix file>
- copy dos file in floppy disk drive to unix - if UNIXFILE is omitted, it outputs to the screen.
dosdel <DOS file>
- delete dos file on floppy disk.
dosformat
- format dos floppy disk (High Density)
tar
- Read/Write stuff to archive.
tar cvf /dev/rmt0 <filenames> # will write files to tape
tar xvf /dev/rmt0 # will read files from tape
tar tvf /dev/rmt0 # will give a listing of what's on the tape
If you're using an archive file then replace /dev/rmt0 in the examples above with the name of the archive file.
SCREEN COMMUNICATION
echo
- a command mainly used in shell scripts. Examples:
$ echo "Hello" # will print Hello on your screen
$ echo "Hello" > /dev/tty616 # will print Hello on someone else's screen (warning - can crash their screen!)
$ echo $DESTF10 # will print the value of the environment variable DESTF10
$ echo "\033Fdemo demo" # will echo demo to the status bar at the top of a wyse terminal
See also file shellscripts
read
- will read text from standard input and place it in the variable name specified. See file shellscripts
line
- waits until the user presses return before carrying on (writes what is typed to standard output). If used in a crontab/at job this instruction is ignored. See file shellscripts
talk <user>
- set up an interactive communication dialogue box between two users. Looks good but isn't really that useful.
write <user>
- writes a message to someone else's screen. Try typing 'write root' and then type a message, finishing with control-D.
banner <message>
- writes <message> in huge letters across your screen! (max: 10 chars per word)
wall <message>
- send a message to all people on a system. Can only be executed by root (I think).
tput <argument>
- tty type independent attribute setting (requires TERM variable and TERMCAP to be set). I only know these few bits:
·         tput cnorm - turns the screen cursor on
·         tput civis - turns the screen cursor off
·         tput clear - clears the screen
·         tput smso - turns all new text to bold
·         tput rmso - turns all bold text off
tee (-a) <filename>
- command used in pipes to take a copy of the standard output. e.g.
$ ls | tee /tmp/x # would output ls normally and put a copy in /tmp/x. The option '-a' is used to append rather than replace files.
SOURCE CODE CONTROL SYSTEM (SCCS)
SCCS Overview
The source code control system allows versions of a program to be stored in a special file, so that any version may be retrieved. There are a few commands involved (not all of them listed here). All source code files start with 's.'
get -r<revision> <source code file>
- get a program out of source code to read only. Missing out the -r flag gets the most recent version. e.g.
$ get $SCUK/s.parser.c # extracts file parser.c from source code file $SCUK/s.parser.c as read only. See get -e for editing.
get -e <source code file>
- get a piece of code out for edit, so that the code may be modified and a new version created using 'delta'. e.g.
$ get -e $SCUK/s.parser.c # extracts file parser.c from source code file $SCUK/s.parser.c for editing. See get for read-only.
delta <source code file>
- you must be in the directory with the modified piece of code when you execute this command. This adds the latest version to the source code file. e.g.
$ delta $SCUK/s.parser.c # writes file parser.c to the source code file $SCUK/s.parser.c . See get -e for information on how to extract the file from source code.
prs <source code file>
- show comments/details on source code file.
admin -r <revision no> -i <program> <source code file>
- create a new source code file from a program. -r specifies the initial revision of the program and may be missed out (default is 1.1 I think). Must be spaced correctly! admin is also used for sccs administration, but it gets to fear and loathing time pretty fast. e.g.
admin -iparser.c $SCUK/s.parser.c # creates a new source code file called $SCUK/s.parser.c from the file parser.c
unget <source code file>
- cancels a get -e
MISCELLANEOUS
strip <binary compiled file>
- Removes all linking information within a compiled program - basically a way of cutting down the size of an executable.
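e.g.
$ strip a.out # remove the symbol and linking information from a.out, making the executable considerably smaller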
yes <word>
- yes outputs the word 'yes' as fast as its little legs can go. Never called on its own. Always used in pipes. For example:
$ yes | rm *.o # would confirm 'yes' whenever rm prompts for confirmation. You can also use it to output a different word e.g.
$ yes please # would output 'please' to the screen until you kill it (prob. immediately).
SHELL SCRIPT COMMANDS
Are all held on a separate page now. Commands covered are export,if,for, shift, test, while, case, and a few others.


sed '<pattern>'
- used by myself for quick substitutions when tr doesn't seem to be doing its job properly. The syntax of the pattern is similar to vi ex command line. E.g. To substitute all spaces with colon symbols the command is
sed 's/ /:/g' file1 # substitute all occurrences of spaces with colons in file1 and output to stdout.
-------------- End of HandyCommands File ------------