Source : http://lefterhs.blogspot.co.id/2012/07/archived-log-generation-volume-and.html
V$ARCHIVED_LOG displays information about the archived logs generated by your database that have not yet aged out of the control file.
We are going to use this view to generate a report displaying some useful information about them.
The BLOCKS column stores an archived log's size in blocks and BLOCK_SIZE stores the block size in bytes, so multiplying them (BLOCKS*BLOCK_SIZE) gives the size in bytes.
ARCHIVED indicates whether the log has been archived, and DELETED indicates whether RMAN has deleted it.
The following query displays per day the volume in MBytes of archived logs generated, deleted and of those that haven’t yet been deleted by RMAN.
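The query itself was not captured in this excerpt. A sketch of such a report, built only from the V$ARCHIVED_LOG columns described above (the rounding and column aliases are my own choices), might look like this:

```sql
-- Per-day archived log volume in MB: generated, deleted by RMAN, and remaining.
-- BLOCKS * BLOCK_SIZE gives each log's size in bytes; /1024/1024 converts to MB.
SELECT TRUNC(completion_time)                                AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024)         AS generated_mb,
       ROUND(SUM(CASE WHEN deleted = 'YES'
                      THEN blocks * block_size
                      ELSE 0 END) / 1024 / 1024)             AS deleted_mb,
       ROUND(SUM(CASE WHEN deleted = 'NO'
                      THEN blocks * block_size
                      ELSE 0 END) / 1024 / 1024)             AS not_deleted_mb
FROM   v$archived_log
WHERE  archived = 'YES'
GROUP  BY TRUNC(completion_time)
ORDER  BY day;
```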
Bigfile tablespaces are tablespaces with a single large datafile (up to 4G blocks). In contrast, normal (smallfile) tablespaces can have several datafiles, but each is limited in size. The system default is to create a smallfile tablespace. The SYSTEM and SYSAUX tablespaces are always created using the system default type.
Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment space management, with three exceptions: locally managed undo tablespaces, temporary tablespaces, and the SYSTEM tablespace.
Advantages of using Bigfile Tablespaces:
By allowing tablespaces to have a single large datafile, the total capacity of the database is increased. An Oracle database can have a maximum of 64,000 datafiles, which limits its total capacity. A bigfile tablespace can be up to eight exabytes (eight million terabytes) in size, significantly increasing the storage capacity of an Oracle database.
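As a quick sketch of the syntax (the tablespace name and sizes below are my own illustration, not from the original post), creating a bigfile tablespace looks like:

```sql
-- Create a bigfile tablespace with a single Oracle-managed datafile.
-- Requires locally managed extents with automatic segment space management
-- (the default in recent releases). Name and sizes are illustrative.
CREATE BIGFILE TABLESPACE big_data
  DATAFILE SIZE 10G
  AUTOEXTEND ON NEXT 1G MAXSIZE 100G;

-- Verify: the BIGFILE column in DBA_TABLESPACES shows YES for it.
SELECT tablespace_name, bigfile FROM dba_tablespaces;
```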
You can easily change timezone in CentOS Linux using the following methods.
CentOS timezone config files and directories
/usr/share/zoneinfo/ – The system timezone directory contains the files as per timezone name. For example, the file /usr/share/zoneinfo/America/New_York represents time zone for New York.
/etc/localtime – A symlink to (or copy of) the correct timezone file under the /usr/share/zoneinfo/ directory.
How do I see the current time zone?
Type the date command or the ls command:
$ ls -l /etc/localtime
To find list of all available time zones, run:
# timedatectl list-timezones
##*** Grep possible Asian timezones ***##
# timedatectl list-timezones | grep Asia
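Putting it together, here is a short sketch of viewing and changing the zone. The zone Asia/Jakarta is just an example; timedatectl requires systemd, and the symlink method works without it:

```shell
# Show the current timezone abbreviation and UTC offset (portable).
date +"%Z %z"

# With systemd, set the zone in one step (run as root; zone is an example):
# timedatectl set-timezone Asia/Jakarta

# Without systemd, repoint /etc/localtime at the zone file directly:
# ln -sf /usr/share/zoneinfo/Asia/Jakarta /etc/localtime
```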
Recently I published a post about an issue with RMAN duplication of an Oracle database when a standby database was used as the target. Later, in updates to that post, I confirmed that with the fix for BUG:11715084 everything works as expected.
Today I decided to write a post with the step-by-step procedure I used to test that feature: creating a standby database from a standby database.
I will Simplify the setup As Much As Possible (SAMAP):
- a single machine will be used as the host for all 3 databases
- OMF (Oracle Managed Files) will be used to manage file locations and file names
- the FRA (Fast/Flash Recovery Area) will be used for archived logs
1) Network Setup
Because I use only a single server, all modifications will be done locally.
I will configure two more databases: orcl2 and orcl3.
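As an illustration of the local-only network setup (the port and service names are assumptions based on the orcl2/orcl3 names above, not the author's actual files), the tnsnames.ora entries might look like:

```
ORCL2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl2))
  )

ORCL3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl3))
  )
```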
Knowledge From : http://gavinsoorma.com/2012/04/performing-a-database-clone-using-a-data-guard-physical-standby-database/
A common DBA task is to perform regular clones and database refreshes of the production database for the purpose of setting up training or test or development environments.
If we have a physical standby Data Guard environment, we can easily offload the potentially I/O- and CPU-intensive backup process required for creating these clone or duplicate databases to the standby site.
Here are a few examples of using the physical Standby database in a Data Guard environment to create a clone of the primary production database.
In the first example we use RMAN to perform the backup and restore, and in the second example we use OS commands to simply copy the files ONLINE from the standby host to the target host.
Note that in 11g we can take the backup of the control file from the standby database; in 10g we have to take the backup of the control file from the primary database.
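A minimal sketch of the 11g RMAN variant, assuming network duplication with DUPLICATE ... FROM ACTIVE DATABASE (the connect strings and the clone name "clonedb" are my own placeholders, not the author's):

```
# Connect to the standby as TARGET and the new clone instance as AUXILIARY;
# both service names here are illustrative.
#   rman TARGET sys@standby AUXILIARY sys@clonedb

DUPLICATE TARGET DATABASE TO clonedb
  FROM ACTIVE DATABASE
  NOFILENAMECHECK;
```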
I was installing an NFS server on an otherwise public host recently, and noticed that the conventional wisdom about securing an NFS server is somewhat dated. My goal was to expose NFS on two internal interfaces without exposing it to the whole wide Internet (assumptions about network security have changed a lot since NFS was designed, sadly).
For a start, you are probably running rpcbind instead of portmap on recent Debian installations, so you will need to modify the flags passed to rpcbind on startup:
root@rsync1:~# cat /etc/default/rpcbind
OPTIONS="-w -l -h 172.16.10.2 -h 192.168.0.219"
You will also need to add the following line to /etc/hosts.deny:
root@rsync1:~# grep rpcbind /etc/hosts.deny
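The grep output did not survive in this excerpt; the conventional hosts.deny/hosts.allow pairing for this setup (my assumption about what was used, with the allowed networks taken from the -h addresses above) would be:

```
# /etc/hosts.deny -- deny rpcbind to everyone by default (typical entry;
# the original post's exact line was not captured in this excerpt):
rpcbind: ALL

# /etc/hosts.allow would then whitelist only the internal networks, e.g.:
# rpcbind: 172.16.10.0/255.255.255.0 192.168.0.0/255.255.255.0
```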
Now you will notice that rpcinfo -p still works fine on localhost. That's because rpcbind always adds the loopback address, so we have to test it from another machine:
root@rsync1-dev:~# rpcinfo -p 192.168.0.219
rpcinfo: can't contact portmapper: RPC: Authentication error; why = Client credential too weak