Post date: Feb 26, 2015 7:53:4 PM
Note: This method works well for small tables. For large tables, use the Greenplum parallel unload/load method instead.
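For reference, the parallel approach typically uses gpfdist with external tables. A minimal sketch is below; the host, port, staging directory, and file name are placeholders, not values from this test case:

# Start a gpfdist instance on a host both clusters can reach (placeholder directory and port)
gpfdist -d /data/staging -p 8081 &

-- On the source (DEV) database: unload through a writable external table
CREATE WRITABLE EXTERNAL TABLE sachi1_unload (LIKE sachi1)
LOCATION ('gpfdist://etl:8081/sachi1.dat')
FORMAT 'TEXT';
INSERT INTO sachi1_unload SELECT * FROM sachi1;

-- On the target (TEST) database: load through a readable external table
CREATE EXTERNAL TABLE sachi1_load (LIKE sachi1)
LOCATION ('gpfdist://etl:8081/sachi1.dat')
FORMAT 'TEXT';
INSERT INTO sachi1 SELECT * FROM sachi1_load;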
Test case: Copy a small table from the DEV database to the TEST database running on a different DCA configuration
1. Create a simple table in the DEV database
2. Insert a few rows
3. Create the same table in the TEST database
4. Copy the data using the psql COPY command
5. Validate the result
[sachi@etl ~]$ psqldev
psql (8.2.15)
Type "help" for help.
sachi_dev=> create table sachi1(id numeric(10), name varchar(64));
NOTICE: Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'id' as the Greenplum Database data distribution key for this table.
HINT: The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
CREATE TABLE
sachi_dev=> insert into sachi1 values(1,'a');
INSERT 0 1
sachi_dev=> insert into sachi1 values(2,'b');
INSERT 0 1
sachi_dev=> select * from sachi1;
id | name
----+------
1 | a
2 | b
(2 rows)
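The NOTICE above appears because the table was created without a distribution key, so Greenplum picked the first column. If the right key is known, it can be declared explicitly to suppress the notice, for example:

create table sachi1(id numeric(10), name varchar(64)) distributed by (id);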
[sachi@etl ~]$ psqltst
psql (8.2.15)
Type "help" for help.
sachi_tst=> create table sachi1(id numeric(10), name varchar(64));
NOTICE: Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'id' as the Greenplum Database data distribution key for this table.
HINT: The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
CREATE TABLE
sachi_tst=>
sachi_tst=> \q
[sachi@etl ~]$ psqldev -c "copy gpdba.sachi1 to STDOUT" | psqltst -c "COPY gpdba.sachi1 from STDIN"
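psqldev and psqltst above appear to be local wrapper scripts or aliases for psql. With plain psql the same pipe would look roughly like the sketch below; the master host names are placeholders, while the database names come from the prompts in this post:

psql -h dev-master -d sachi_dev -c "copy gpdba.sachi1 to STDOUT" | psql -h tst-master -d sachi_tst -c "COPY gpdba.sachi1 from STDIN"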
[sachi@etl ~]$ psqltst
psql (8.2.15)
Type "help" for help.
sachi_tst=> select * from sachi1;
id | name
----+------
2 | b
1 | a
(2 rows)
sachi_tst=>
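For tables too large to eyeball with SELECT *, a quick row-count comparison between the two databases is a reasonable sanity check (using the same aliases as above):

psqldev -c "select count(*) from gpdba.sachi1"
psqltst -c "select count(*) from gpdba.sachi1"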