Exploring GPDB gpcrondump command and files created by gpcrondump
Post date: Jan 09, 2015 3:07:43 PM
Direct use of gp_dump is deprecated in later versions of GPDB; gpcrondump is the utility commonly used for GPDB parallel backups.
1. Getting the command syntax
[gpadmin@sachi ~]$ gpcrondump --help
COMMAND NAME: gpcrondump
A wrapper utility for gp_dump, which can be called directly or from a crontab entry.
*****************************************************
SYNOPSIS
*****************************************************
gpcrondump -x <database_name>
[-s <schema> | -t <schema>.<table> | -T <schema>.<table>]
[--table-file="<filename>" | --exclude-table-file="<filename>"]
[-u <backup_directory>] [-R <post_dump_script>]
[-c] [-z] [-r] [-f <free_space_percent>] [-b] [-h] [-j | -k]
[-g] [-G] [-C] [-d <master_data_directory>] [-B <parallel_processes>]
[-a] [-q] [-y <reportfile>] [-l <logfile_directory>] [-v]
{ [-E <encoding>] [--inserts | --column-inserts] [--oids]
[--no-owner | --use-set-session-authorization]
[--no-privileges] [--rsyncable] [--ddboost] }
gpcrondump --ddboost-host <ddboost_hostname> --ddboost-user <ddboost_user>
gpcrondump --ddboost-config-remove
gpcrondump -o
gpcrondump -?
gpcrondump --version
*****************************************************
DESCRIPTION
*****************************************************
gpcrondump is a wrapper utility for gp_dump. By default, dump files are created in their respective master and segment data directories in a directory named db_dumps/YYYYMMDD. The data dump files are compressed by default using gzip.
gpcrondump allows you to schedule routine backups of a Greenplum database using cron (a scheduling utility for UNIX operating systems). Cron jobs
that call gpcrondump should be scheduled on the master host.
gpcrondump is used to schedule Data Domain Boost backup and restore operations. gpcrondump is also used to set or remove one-time
credentials for Data Domain Boost.
**********************
Return Codes
**********************
The following is a list of the codes that gpcrondump returns.
0 - Dump completed with no problems
1 - Dump completed, but one or more warnings were generated
2 - Dump failed with a fatal error
**********************************************
EMAIL NOTIFICATIONS
**********************************************
To have gpcrondump send out status email notifications, you must place a file named mail_contacts in the home directory of the Greenplum superuser (gpadmin) or in the same directory as the gpcrondump utility ($GPHOME/bin). This file should contain one email address per line. gpcrondump will issue a warning if it cannot locate a mail_contacts file in either location. If both locations have a mail_contacts file, then the one in $HOME takes precedence.
*****************************************************
OPTIONS
*****************************************************
-a (do not prompt)
Do not prompt the user for confirmation.
-b (bypass disk space check)
Bypass disk space check. The default is to check for available disk space.
Note: Bypassing the disk space check generates a warning message. With a warning message, the return code for gpcrondump is 1 if the dump is successful. (If the dump fails, the return code is 2, in all cases.)
-B <parallel_processes>
The number of segments to check in parallel for pre/post-dump validation. If not specified, the utility will start up to 60 parallel processes depending on how many segment instances it needs to dump.
-c (clear old dump files first)
Clear out old dump files before doing the dump. The default is not to clear out old dump files. This will remove all old dump directories in the db_dumps directory, except for the dump directory of the current date.
-C (clean old catalog dumps)
Clean out old catalog schema dump files before creating new ones.
--column-inserts
Dump data as INSERT commands with column names.
-d <master_data_directory>
The master host data directory. If not specified, the value set for $MASTER_DATA_DIRECTORY will be used.
--ddboost
Use Data Domain Boost for this backup. Before using Data Domain Boost, set up the Data Domain Boost credential, as described in the next option below.
The following option is recommended if --ddboost is specified.
* -z option (uncompressed)
Backup compression (turned on by default) should be turned off with the -z option. Data Domain Boost will deduplicate and compress the backup data before sending it to the Data Domain System.
When running a mixed backup that backs up to both a local disk and to Data Domain, use the -u option to specify that the backup to the local disk does not use the default directory.
The -f, -G, -g, -R, and -u options change if --ddboost is specified. See the options for details.
Important: Never use the Greenplum Database default backup options with Data Domain Boost.
To maximize Data Domain deduplication benefits, retain at least 30 days of backups.
--ddboost-host <ddboost_hostname> --ddboost-user <ddboost_user>
Sets the Data Domain Boost credentials. Do not combine these options with any other gpcrondump options. Do not enter just part of this option.
<ddboost_hostname> is the IP address of the host. There is a 30-character limit.
<ddboost_user> is the Data Domain Boost user name. There is a 30-character limit.
Example:
gpcrondump --ddboost-host 172.28.8.230 --ddboost-user ddboostusername
After running gpcrondump with these options, the system verifies the limits on the host and user names and prompts for the Data Domain Boost password.
Enter the password when prompted; the password is not echoed on the screen. There is a 40-character limit on the password that can include lowercase
letters (a-z), uppercase letters (A-Z), numbers (0-9), and special characters ($, %, #, +, etc.).
The system verifies the password. After the password is verified, the system creates a file .ddconfig and copies it to all segments.
Note: If there is more than one operating system user using Data Domain Boost for backup and restore operations, repeat this configuration process for
each of those users.
Important: Set up the Data Domain Boost credential before running any Data Domain Boost backups with the --ddboost option, described above.
--ddboost-config-remove
Removes all Data Domain Boost credentials from the master and all segments on the system. Do not enter this option with any other gpcrondump option.
-E <encoding>
Character set encoding of dumped data. Defaults to the encoding of the database being dumped.
-f <free_space_percent>
When doing the check to ensure that there is enough free disk space to create the dump files, specifies a percentage of free disk space that should remain after the dump completes. The default is 10 percent.
-f is not supported if --ddboost is specified.
-g (copy config files)
Secure a copy of the master and segment configuration files postgresql.conf, pg_ident.conf, and pg_hba.conf. These configuration files are dumped in the master or segment data directory to db_dumps/YYYYMMDD/config_files_<timestamp>.tar
If --ddboost is specified, the files are located in the db_dumps directory on the default storage unit.
-G (dump global objects)
Use pg_dumpall to dump global objects such as roles and tablespaces. Global objects are dumped in the master data directory to db_dumps/YYYYMMDD/gp_global_1_1_<timestamp>.
If --ddboost is specified, the files are located in the db_dumps directory on the default storage unit.
-h (record dump details)
Record details of the database dump in the table public.gpcrondump_history in the database supplied via the -x option. The utility will create the table if it does not already exist.
--inserts
Dump data as INSERT, rather than COPY commands.
-j (vacuum before dump)
Run VACUUM before the dump starts.
-k (vacuum after dump)
Run VACUUM after the dump has completed successfully.
-l <logfile_directory>
The directory to write the log file. Defaults to ~/gpAdminLogs.
--no-owner
Do not output commands to set object ownership.
--no-privileges
Do not output commands to set object privileges (GRANT/REVOKE commands).
-o (clear old dump files only)
Clear out old dump files only, but do not run a dump. This will remove the oldest dump directory except the current date's dump directory. All dump sets within that directory will be removed.
If --ddboost is specified, only the old files on DD Boost are deleted.
--oids
Include object identifiers (oid) in dump data.
-q (no screen output)
Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.
-r (rollback on failure)
Rollback the dump files (delete a partial dump) if a failure is detected. The default is to not rollback.
-r is not supported if --ddboost is specified.
-R <post_dump_script>
The absolute path of a script to run after a successful dump operation. For example, you might want a script that moves completed dump files
to a backup host. This script must reside in the same location on the master and all segment hosts.
--rsyncable
Passes the --rsyncable flag to the gzip utility to synchronize the output occasionally, based on the input during compression. This synchronization increases the file size by less than 1% in most cases. When this flag is passed, the rsync(1) program can synchronize compressed files much more efficiently. The gunzip utility cannot differentiate between a compressed file created with this option, and one created without it.
-s <schema_name>
Dump only the named schema in the named database.
-t <schema>.<table_name>
Dump only the named table in this database.
The -t option can be specified multiple times.
-T <schema>.<table_name>
A table name to exclude from the database dump. The -T option can be specified multiple times.
--exclude-table-file="<filename>"
Exclude all tables listed in <filename> from the database dump. The file <filename> contains any number of tables, listed one per line.
--table-file="<filename>"
Dump only the tables listed in <filename>. The file <filename> contains any number of tables, listed one per line.
-u <backup_directory>
Specifies the absolute path where the backup files will be placed on each host. If the path does not exist, it will be created, if possible. If not specified, defaults to the data directory of each instance to be backed up. Using this option may be desirable if each segment host has multiple segment
instances as it will create the dump files in a centralized location rather than the segment data directories.
-u is not supported if --ddboost is specified.
--use-set-session-authorization
Use SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to set object ownership.
-v | --verbose
Specifies verbose mode.
--version (show utility version)
Displays the version of this utility.
-x <database_name>
Required. The name of the Greenplum database to dump. Multiple databases can be specified in a comma-separated list.
-y <reportfile>
Specifies the full path name where the backup job log file will be placed on the master host. If not specified, defaults to the master data directory or if running remotely, the current working directory.
-z (no compression)
Do not use compression. Default is to compress the dump files using gzip. We recommend using this option for NFS and Data Domain Boost backups.
-? (help)
Displays the online help.
*****************************************************
EXAMPLES
*****************************************************
Example 1: Call gpcrondump directly and dump sachi (and global objects):
$gpcrondump -x sachi -c -g -G
Here the options used are:
-x -> database name
-c -> clear old dump files first
-g -> copy config files
-G -> dump global objects
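The return codes listed in the help output (0, 1, 2) can be acted on in a wrapper script. A minimal sketch follows; the gpcrondump invocation is shown commented out because it requires a live cluster, and the messages simply restate the documented codes:

```shell
#!/bin/bash
# Map gpcrondump return codes to human-readable messages:
# 0 = completed with no problems, 1 = completed with warnings, 2 = fatal error.
describe_rc() {
    case "$1" in
        0) echo "Dump completed with no problems" ;;
        1) echo "Dump completed, but warnings were generated" ;;
        2) echo "Dump failed with a fatal error" ;;
        *) echo "Unexpected return code: $1" ;;
    esac
}

# In a real wrapper you would run the dump and pass $? to describe_rc:
#   gpcrondump -x sachi -c -g -G -a -q
#   describe_rc $?
```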
Example 2: Using a cron job and shell script
A crontab entry that runs a backup of the sales database (and global objects) nightly at one minute past midnight:
01 0 * * * /home/gpadmin/gpdump.sh >> gpdump.log
The content of dump script gpdump.sh is:
#!/bin/bash
export GPHOME=/usr/local/greenplum-db
export MASTER_DATA_DIRECTORY=/data/gpdb_p1/gp-1
. $GPHOME/greenplum_path.sh
gpcrondump -x sales -c -g -G -a -q
Here the options used are:
-x -> database name
-c -> clear old dump files first
-g -> copy config files
-G -> dump global objects
-a -> do not prompt
-q -> no screen output
Example 3: Back up a schema (sachi2014) of database sachi and explore the output files
$gpcrondump -x sachi -s sachi2014
Output on the screen:
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Starting gpcrondump with args: -x sachi -s sachi2014
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:----------------------------------------------------
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Master Greenplum Instance dump parameters
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:----------------------------------------------------
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump type = Single database
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Database to be dumped = sachi
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Schema to be dumped = sachi2014
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Master port = 5432
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Master data directory = /home/gpmaster/gpsne-1
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Run post dump program = Off
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Rollback dumps = Off
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump file compression = On
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Clear old dump files = Off
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Update history table = Off
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Secure config files = Off
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump global objects = Off
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Vacuum mode type = Off
20150108:18:46:31:017728 gpcrondump:sachi:gpadmin-[INFO]:-Ensuring remaining free disk > 10
Continue with Greenplum dump Yy|Nn (default=N):
> y
20150108:18:46:35:017728 gpcrondump:sachi:gpadmin-[INFO]:-Directory /home/gpmaster/gpsne-1/db_dumps/20150108 exists ===>mark1
20150108:18:46:35:017728 gpcrondump:sachi:gpadmin-[INFO]:-Checked /home/gpmaster/gpsne-1 on master
20150108:18:46:35:017728 gpcrondump:sachi:gpadmin-[INFO]:-Configuring for single database dump
20150108:18:46:36:017728 gpcrondump:sachi:gpadmin-[INFO]:-Adding compression parameter
20150108:18:46:36:017728 gpcrondump:sachi:gpadmin-[INFO]:-Adding schema name sachi2014
20150108:18:46:36:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump command line gp_dump -p 5432 -U gpadmin --gp-d=db_dumps/20150108 --gp-r=/home/gpmaster/gpsne-1/db_dumps/20150108 --gp-s=p --gp-c -n "\"sachi2014\"" sachi
20150108:18:46:36:017728 gpcrondump:sachi:gpadmin-[INFO]:-Starting dump process
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump process returned exit code 0
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Timestamp key = 20150108184636
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Checked master status file and master dump file.
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump status report
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:----------------------------------------------------
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Target database = sachi
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump subdirectory = 20150108
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Clear old dump directories = Off
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump start time = 18:46:36
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump end time = 18:46:48
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Status = COMPLETED
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump key = 20150108184636
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Dump file compression = On
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Vacuum mode type = Off
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-Exit code zero, no warnings generated
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:----------------------------------------------------
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[WARNING]:-Found neither /usr/local/greenplum-db/./bin/mail_contacts nor /home/gpadmin/mail_contacts
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[WARNING]:-Unable to send dump email notification
20150108:18:46:48:017728 gpcrondump:sachi:gpadmin-[INFO]:-To enable email notification, create /usr/local/greenplum-db/./bin/mail_contacts or /home/gpadmin/mail_contacts containing required email addresses
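The warning in the output above can be resolved by creating the mail_contacts file the log message asks for. A minimal sketch, with placeholder addresses:

```shell
# Create a mail_contacts file in the gpadmin user's home directory,
# one email address per line. The addresses below are placeholders.
cat > "$HOME/mail_contacts" <<'EOF'
dba-team@example.com
oncall-dba@example.com
EOF
```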
Now let's go to the master and segment servers and look at the files and content gpcrondump created.
Note: gpcrondump creates a subdirectory named after the current date, as shown at mark1 above. If this directory already exists, it is reused.
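Since the subdirectory name is just the current date in YYYYMMDD form, a script can derive the day's dump location itself. A sketch, assuming MASTER_DATA_DIRECTORY is set as in the earlier example (the default below is a placeholder matching this single-node setup):

```shell
# Derive today's dump directory from the master data directory.
# MASTER_DATA_DIRECTORY is assumed to be exported; the fallback path here
# is a placeholder matching the single-node example above.
MASTER_DATA_DIRECTORY="${MASTER_DATA_DIRECTORY:-/home/gpmaster/gpsne-1}"
DUMP_DIR="$MASTER_DATA_DIRECTORY/db_dumps/$(date +%Y%m%d)"
echo "$DUMP_DIR"
# List today's dump files, if any exist yet.
if [ -d "$DUMP_DIR" ]; then
    ls -ltr "$DUMP_DIR"
fi
```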
[gpadmin@sachi 20150108]$ pwd
/home/gpmaster/gpsne-1/db_dumps/20150108
[gpadmin@sachi 20150108]$ ls -ltr
total 180
-rw-------. 1 gpadmin gpadmin 111 Jan 8 18:46 gp_cdatabase_1_1_20150108184636
-rw-------. 1 gpadmin gpadmin 12452 Jan 8 18:46 gp_dump_1_1_20150108184636.gz
-rw-------. 1 gpadmin gpadmin 893 Jan 8 18:46 gp_dump_status_1_1_20150108184636 ---> status of the dump process
-rw-------. 1 gpadmin gpadmin 432 Jan 8 18:46 gp_dump_1_1_20150108184636_post_data.gz
-rw-rw-r--. 1 gpadmin gpadmin 961 Jan 8 18:46 gp_dump_20150108184636.rpt ----> dump report file
-rw-------. 1 gpadmin gpadmin 1108 Jan 8 18:51 gp_restore_status_1_1_20150108184636 --> will discuss later. created when you restore this backup using gpdbrestore command
Note: I copied the original files (the sachi_* and sachitest_* files below) so I could view their contents without affecting the originals.
-rw-------. 1 gpadmin gpadmin 136936 Jan 8 21:12 sachi_1_1_20150108184636
-rw-------. 1 gpadmin gpadmin 992 Jan 8 21:12 sachi_1_1_20150108184636_post_data
-rw-------. 1 gpadmin gpadmin 111 Jan 8 21:17 sachitest_1_1_20150108184636
[gpadmin@sachi 20150108]$
There are three main files:
-rw-------. 1 gpadmin gpadmin 111 Jan 8 18:46 gp_cdatabase_1_1_20150108184636
-rw-------. 1 gpadmin gpadmin 12452 Jan 8 18:46 gp_dump_1_1_20150108184636.gz
-rw-------. 1 gpadmin gpadmin 432 Jan 8 18:46 gp_dump_1_1_20150108184636_post_data.gz
1. gp_cdatabase_1_1_20150108184636: This file contains the script to create the database. If the database already exists, the restore ignores it; it does not drop and recreate the database.
[gpadmin@sachi 20150108]$ cat gp_cdatabase_1_1_20150108184636
--
-- Database creation
--
CREATE DATABASE sachi WITH TEMPLATE = template0 ENCODING = 'UTF8' OWNER = gpadmin;
[gpadmin@sachi 20150108]$
2. gp_dump_1_1_20150108184636.gz: This file contains DDLs for the schema objects.
Here are a few lines from the top:
--
-- Greenplum Database database dump
--
SET statement_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET default_with_oids = false;
--
-- Name: sachi2014; Type: SCHEMA; Schema: -; Owner: sachi
--
CREATE SCHEMA sachi2014;
ALTER SCHEMA sachi2014 OWNER TO sachi;
SET search_path = sachi2014, pg_catalog;
SET default_tablespace = '';
--
-- Name: foo; Type: TABLE; Schema: sachi2014; Owner: gpadmin; Tablespace:
--
CREATE TABLE foo (
x timestamp without time zone
) DISTRIBUTED RANDOMLY;
ALTER TABLE sachi2014.foo OWNER TO gpadmin;
--
-- Name: foo1; Type: TABLE; Schema: sachi2014; Owner: gpadmin; Tablespace:
--
CREATE TABLE foo1 (
x timestamp without time zone
) DISTRIBUTED RANDOMLY;
ALTER TABLE sachi2014.foo1 OWNER TO gpadmin;
3. gp_dump_1_1_20150108184636_post_data.gz: This file contains the DDL to create constraints, indexes, etc.
Here are the contents of this file:
--
-- Greenplum Database database dump
--
SET statement_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET search_path = sachi2014, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: firstkey; Type: CONSTRAINT; Schema: sachi2014; Owner: gpadmin; Tablespace:
--
ALTER TABLE ONLY films
ADD CONSTRAINT firstkey PRIMARY KEY (code);
--
-- Name: abc_id; Type: INDEX; Schema: sachi2014; Owner: gpadmin; Tablespace:
--
CREATE INDEX abc_id ON abc USING btree (id);
--
-- Name: idx_bloattest_id; Type: INDEX; Schema: sachi2014; Owner: gpadmin; Tablespace:
--
CREATE INDEX idx_bloattest_id ON bloattest USING btree (id);
--
-- Name: index_abc; Type: INDEX; Schema: sachi2014; Owner: gpadmin; Tablespace:
--
CREATE INDEX index_abc ON abc USING btree (id);
--
-- Greenplum Database database dump complete
--
[gpadmin@sachi 20150108]$
4. gp_dump_status_1_1_20150108184636: This file contains status details of the master dump.
[gpadmin@sachi 20150108]$ cat gp_dump_status_1_1_20150108184636
20150108:18:46:36|gp_dump_agent-[INFO]:-Starting monitor thread
20150108:18:46:36|gp_dump_agent-[INFO]:-Dumping database "sachi"...
20150108:18:46:36|gp_dump_agent-[INFO]:-Dumping CREATE DATABASE statement for database "sachi"
20150108:18:46:38|gp_dump_agent-[INFO]:-TASK_SET_SERIALIZABLE
20150108:18:46:38|gp_dump_agent-[INFO]:-TASK_GOTLOCKS
20150108:18:46:38|gp_dump_agent-[INFO]:-Succeeded
20150108:18:46:38|gp_dump_agent-[INFO]:-Finished pre-data schema successfully
20150108:18:46:38|gp_dump_agent-[INFO]:-Finished successfully
20150108:18:46:38|gp_dump_agent-[INFO]:-Starting monitor thread
20150108:18:46:38|gp_dump_agent-[INFO]:-Dumping database "sachi"...
20150108:18:46:40|gp_dump_agent-[INFO]:-TASK_SET_SERIALIZABLE
20150108:18:46:40|gp_dump_agent-[INFO]:-TASK_GOTLOCKS
20150108:18:46:40|gp_dump_agent-[INFO]:-Succeeded
20150108:18:46:40|gp_dump_agent-[INFO]:-Finished successfully
[gpadmin@sachi 20150108]$
5. gp_dump_20150108184636.rpt: The report file whose contents are sent to the mail contacts.
[gpadmin@sachi 20150108]$ cat gp_dump_20150108184636.rpt
Greenplum Database Backup Report
Timestamp Key: 20150108184636
gp_dump Command Line: -p 5432 -U gpadmin --gp-d=db_dumps/20150108 --gp-r=/home/gpmaster/gpsne-1/db_dumps/20150108 --gp-s=p --gp-c -n ""sachi2014"" sachi
Pass through Command Line Options: -n "\"sachi2014\""
Compression Program: gzip
Individual Results
segment 1 (dbid 3) Host sachi Port 40001 Database sachi BackupFile /disk2/gpdata2/gpsne1/db_dumps/20150108/gp_dump_0_3_20150108184636.gz: Succeeded
segment 0 (dbid 2) Host sachi Port 40000 Database sachi BackupFile /disk1/gpdata1/gpsne0/db_dumps/20150108/gp_dump_0_2_20150108184636.gz: Succeeded
Master (dbid 1) Host sachi Port 5432 Database sachi BackupFile /home/gpmaster/gpsne-1/db_dumps/20150108/gp_dump_1_1_20150108184636.gz: Succeeded
Master (dbid 1) Host sachi Port 5432 Database sachi BackupFile /home/gpmaster/gpsne-1/db_dumps/20150108/gp_dump_1_1_20150108184636.gz_post_data: Succeeded
gp_dump utility finished successfully.
[gpadmin@sachi 20150108]$
Now let's move to the segment servers and look at the files and their contents.
As shown in the gp_dump_20150108184636.rpt file, the following files are created on the segments:
segment 1 (dbid 3) Host sachi Port 40001 Database sachi BackupFile /disk2/gpdata2/gpsne1/db_dumps/20150108/gp_dump_0_3_20150108184636.gz: Succeeded
segment 0 (dbid 2) Host sachi Port 40000 Database sachi BackupFile /disk1/gpdata1/gpsne0/db_dumps/20150108/gp_dump_0_2_20150108184636.gz: Succeeded
[gpadmin@sachi 20150108]$ cd /disk2/gpdata2/gpsne1/db_dumps/20150108/
[gpadmin@sachi 20150108]$ ls
gp_dump_0_3_20150108184636.gz gp_dump_status_0_3_20150108184636 gp_restore_status_0_3_20150108184636
[gpadmin@sachi 20150108]$ ls -ltr
total 83648
-rw-------. 1 gpadmin gpadmin 360 Jan 8 18:46 gp_dump_status_0_3_20150108184636
-rw-------. 1 gpadmin gpadmin 85644160 Jan 8 18:46 gp_dump_0_3_20150108184636.gz
-rw-------. 1 gpadmin gpadmin 549 Jan 8 18:51 gp_restore_status_0_3_20150108184636
[gpadmin@sachi 20150108]$
1. gp_dump_status_0_3_20150108184636: Tracks the status of the dump on this segment.
[gpadmin@sachi 20150108]$ cat gp_dump_status_0_3_20150108184636
20150108:18:46:36|gp_dump_agent-[INFO]:-Starting monitor thread
20150108:18:46:36|gp_dump_agent-[INFO]:-Dumping database "sachi"...
20150108:18:46:38|gp_dump_agent-[INFO]:-TASK_SET_SERIALIZABLE
20150108:18:46:38|gp_dump_agent-[INFO]:-TASK_GOTLOCKS
20150108:18:46:48|gp_dump_agent-[INFO]:-Succeeded
20150108:18:46:48|gp_dump_agent-[INFO]:-Finished successfully
[gpadmin@sachi 20150108]$
2. gp_dump_0_3_20150108184636.gz: This file contains the COPY commands that load data into the tables.
[gpadmin@sachi 20150108]$ cp gp_dump_0_3_20150108184636.gz sachi_0_3_20150108184636.gz
[gpadmin@sachi 20150108]$ gunzip sachi_0_3_20150108184636.gz
[gpadmin@sachi 20150108]$ ls -ltr
total 1435332
-rw-------. 1 gpadmin gpadmin 360 Jan 8 18:46 gp_dump_status_0_3_20150108184636
-rw-------. 1 gpadmin gpadmin 85644160 Jan 8 18:46 gp_dump_0_3_20150108184636.gz
-rw-------. 1 gpadmin gpadmin 549 Jan 8 18:51 gp_restore_status_0_3_20150108184636
-rw-------. 1 gpadmin gpadmin 1384123537 Jan 9 09:59 sachi_0_3_20150108184636
[gpadmin@sachi 20150108]$ view sachi_0_3_20150108184636
-- Greenplum Database database dump
--
SET statement_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET search_path = sachi2014, pg_catalog;
SET default_with_oids = false;
--
-- Data for Name: abc; Type: TABLE DATA; Schema: sachi2014; Owner: gpadmin
--
COPY abc (id, name) FROM stdin;
\.
--
-- Data for Name: bloattest; Type: TABLE DATA; Schema: sachi2014; Owner: gpadmin
--
COPY bloattest (id, int_1, int_2, int_3, ts_1, ts_2, ts_3, text_1, text_2, text_3) FROM stdin;
2 6527368 1774693 6621748 2014-08-07 12:39:38.854845-04 2013-12-24 15:23:49.059645-05 2013-12-15 05:59:51.795645-05 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2
4 4495918 2751623 8940072 2014-09-16 22:40:41.523645-04 2013-12-14 08:11:29.274045-05 2014-08-22 00:13:21.018045-04 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_1 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2 text_2
...
...
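Rather than copying the segment dump and running gunzip as above, zcat can stream the compressed file directly. A small sketch; the file name is from the example above, so substitute your own timestamp key:

```shell
# Peek at the first lines of a compressed segment dump without extracting it.
# The file name is from the example above; substitute your own timestamp key.
DUMP_FILE="gp_dump_0_3_20150108184636.gz"
if [ -f "$DUMP_FILE" ]; then
    zcat "$DUMP_FILE" | head -20
fi
```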