Wednesday, January 25, 2017

AWS Athena to read DynamoDB backup data

Our data scientists wanted to read some data from DynamoDB. To do that directly, we had to increase the Read Capacity of the table to a high value, which was costly. So we architected a system using Athena where the data scientists could run their queries whenever they wanted.

We used the AWS Data Pipeline template to back up the DynamoDB table, so we had our data archived in S3 as multiple JSON-formatted files. This is where Athena came into the picture. Athena is a new service from AWS that runs Presto underneath. So we created a table in Athena over the DynamoDB backup location and started exploring the data:

CREATE EXTERNAL TABLE IF NOT EXISTS default.test ( key string, sortkey string, details string)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ('ignore.malformed.json' = 'true')
LOCATION 's3://bucket-name/DynamodbDBBackup/2016-12-23-20-28-20/'

This created a table in Athena. Time to run a query:

select * from default.test limit 3;
Sample Data:

key          | sortkey          | details
{"s":"key1"} | {"s":"sortkey1"} | {"s":"{""keys"":[""abc""],""usage"":32,""temparature"":89,""size"":21}"}
{"s":"key2"} | {"s":"sortkey1"} | {"s":"{""keys"":[""abc""],""usage"":32,""temparature"":67,""size"":42}"}
{"s":"key1"} | {"s":"sortkey2"} | {"s":"{""keys"":[""abc""],""usage"":21,""temparature"":69,""size"":45}"}


Now we had to extract this data so that we got only the values for key and sortkey, and from the details attribute only the usage field.

As mentioned, Athena runs Presto, so we used a Presto function to extract data from the JSON. The function is described in the Presto JSON functions documentation:
json_extract(json, json_path)

The json_path expression can use dot notation and takes $ as the root path. More details are available in the JsonPath documentation.

Using the above function, we wrote our query:

SELECT cast(json_extract(key, '$.s') as varchar) as key,
       cast(json_extract(sortkey, '$.s') as varchar) as sortkey,
       json_extract(replace(cast(json_extract(details, '$.s') as varchar), '""', '"'), '$.usage') as usage
FROM test
LIMIT 10;

And that gave us the data as:


key  | sortkey  | usage
key1 | sortkey1 | 32
key2 | sortkey1 | 32
key1 | sortkey2 | 21

It was really fun to use Athena.
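
For completeness, the same kind of query can also be submitted from the AWS CLI, which makes it easy for the data scientists to script. This is just a sketch; the output bucket is a placeholder and the query-execution id comes back from the first call:

# Submit a query against the table created above
aws athena start-query-execution \
    --query-string "SELECT count(*) FROM test" \
    --query-execution-context Database=default \
    --result-configuration OutputLocation=s3://my-athena-results-bucket/output/

# The call returns a QueryExecutionId; fetch the rows once the query has finished
aws athena get-query-results --query-execution-id <query-execution-id>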




Wednesday, December 28, 2016

DynamoDB restore using Data Pipeline with a newer EMR version

We faced an interesting issue here.

We had a task to restore a DynamoDB table from a backup stored in S3. We took the backup using AWS Data Pipeline. But when we started the restore, something we had been doing for a long time, we found that the cluster could not be provisioned and was failing at bootstrapping.
Digging into it, we found that the AWS DynamoDB import template for Data Pipeline uses the default subnet under the default VPC. We don't have an Internet Gateway for this VPC, and since EMR needs an Internet Gateway to work successfully, the pipeline was failing with "internal error".

We started talking with AWS support. After some discussion, we switched to our private subnet. Again the EMR provisioning failed, because the DynamoDB restore template provided by AWS uses AMI version 3.9.0, which does not support private subnets.

So we decided to change AMI version 3.9.0 to release label "emr-4.5.0", which we had been using for all our EMR clusters so far. Again we failed, with the error:
Unable to create resource for @EmrClusterForLoad_2016-12-28T20:33:21 due to: The supplied bootstrap action(s): 'bootstrap-action.7237c1e1-31de-4c02-ae68-c546dd581732' are not supported by release 'emr-4.5.0'. (Service: AmazonElasticMapReduce; Status Code: 400; Error Code: ValidationException; Request ID: e8be350e-cd3c-11e6-8e60-cb10b4c3228c)

That is, the template provided by AWS does not support EMR release label 4.5.0. To overcome the problem, we had to modify the EmrCluster bootstrap action in the pipeline definition, which was:
s3://#{myDDBRegion}.elasticmapreduce/bootstrap-actions/configure-hadoop, --mapred-key-value,mapreduce.map.speculative=false

This is only supported by AMI 3.9.0. For release label emr-4.5.0, it has to be added as configuration properties instead, as follows:
--
        {
            "configuration": {
                "ref": "EmrConfigurationId_XXWNE"
            },
            "releaseLabel": "emr-4.5.0",
            "type": "EmrCluster",
            ...
       },
       {
            "property": {
                "ref": "PropertyId_3ghq7"
            },
            "type": "EmrConfiguration",
            "id": "EmrConfigurationId_XXWNE",
            "classification": "mapred-site",
            "name": "DefaultEmrConfiguration1"
        },
        {
            "key": "mapreduce.map.speculative",
            "type": "Property",
            "id": "PropertyId_3ghq7",
            "value": "false",
            "name": "DefaultProperty1"
        },
--

Now we exported the pipeline definition and added the above configuration. The final pipeline definition looked like this:

{
  "objects": [
    {
      "property": [
        {
          "ref": "PropertyId_3ghq7"
        }
      ],
      "name": "DefaultEmrConfiguration1",
      "id": "EmrConfigurationId_XXWNE",
      "type": "EmrConfiguration",
      "classification": "mapred-site"
    },
    {
      "output": {
        "ref": "DDBDestinationTable"
      },
      "input": {
        "ref": "S3InputDataNode"
      },
      "maximumRetries": "1",
      "name": "TableLoadActivity",
      "step": "s3://dynamodb-emr-#{myDDBRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbImport,#{input.directoryPath},#{output.tableName},#{output.writeThroughputPercent}",
      "runsOn": {
        "ref": "EmrClusterForLoad"
      },
      "id": "TableLoadActivity",
      "type": "EmrActivity",
      "resizeClusterBeforeRunning": "true"
    },
    {
      "subnetId": "subnet-xxxxxxx",
      "name": "EmrClusterForLoad",
      "coreInstanceCount": "1",
      "coreInstanceType": "m3.xlarge",
      "releaseLabel": "emr-4.5.0",
      "id": "EmrClusterForLoad",
      "masterInstanceType": "m3.xlarge",
      "region": "#{myDDBRegion}",
      "type": "EmrCluster",
      "terminateAfter": "23 Hours",
      "configuration": {
                "ref": "EmrConfigurationId_XXWNE"
            }
    },
    {
      "failureAndRerunMode": "CASCADE",
      "resourceRole": "PSS-BDP-QA-DataPipelineDefaultResourceRole",
      "pipelineLogUri": "s3://pss-bdp-qa-logfiles/datapipeline-logs/PSS-BDP-SQA-Dynamodb-Import-1/",
      "role": "PSS-BDP-DataPipelineDefaultRole",
      "scheduleType": "ONDEMAND",
      "name": "Default",
      "id": "Default"
    },
    {
      "writeThroughputPercent": "#{myDDBWriteThroughputRatio}",
      "name": "DDBDestinationTable",
      "id": "DDBDestinationTable",
      "type": "DynamoDBDataNode",
      "tableName": "#{myDDBTableName}"
    },
    {
      "directoryPath": "#{myInputS3Loc}",
      "name": "S3InputDataNode",
      "id": "S3InputDataNode",
      "type": "S3DataNode"
    },
    {
        "key": "mapreduce.map.speculative",
        "type": "Property",
        "id": "PropertyId_3ghq7",
        "value": "false",
        "name": "DefaultProperty1"
    }
  ],
  "parameters": [
    {
      "description": "Input S3 folder",
      "id": "myInputS3Loc",
      "type": "AWS::S3::ObjectKey"
    },
    {
      "description": "Target DynamoDB table name",
      "id": "myDDBTableName",
      "type": "String"
    },
    {
      "default": "0.25",
      "watermark": "Enter value between 0.1-1.0",
      "description": "DynamoDB write throughput ratio",
      "id": "myDDBWriteThroughputRatio",
      "type": "Double"
    },
    {
      "default": "us-east-1",
      "watermark": "us-east-1",
      "description": "Region of the DynamoDB table",
      "id": "myDDBRegion",
      "type": "String"
    }
  ],
  "values": {
    "myDDBRegion": "us-east-1",
    "myDDBTableName": "TABLE_TEST",
    "myDDBWriteThroughputRatio": "1",
    "myInputS3Loc": "s3://my-dynamobackup/TABLE_TEST_201609/2016-12-22-22-55-57"
  }
}
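
One way to do the export/edit/import round trip is with the AWS CLI (a sketch; the pipeline ID is a placeholder, and the console's Export/Import buttons work just as well):

# Export the current definition to a local file
aws datapipeline get-pipeline-definition --pipeline-id df-XXXXXXXXXXXX > pipeline-definition.json

# Edit the JSON to add the EmrConfiguration/Property objects shown above, then upload it
aws datapipeline put-pipeline-definition --pipeline-id df-XXXXXXXXXXXX --pipeline-definition file://pipeline-definition.json

# Activate the pipeline to start the restore
aws datapipeline activate-pipeline --pipeline-id df-XXXXXXXXXXXX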
Saturday, August 29, 2015

PowerShell Password-less SSH to Linux

Using PowerShell version 3:

PS C:\Windows\system32> $PSVersionTable
Name                           Value
----                           -----
WSManStackVersion              3.0
PSCompatibleVersions           {1.0, 2.0, 3.0}
SerializationVersion           1.1.0.1
BuildVersion                   6.2.9200.16398
PSVersion                      3.0
CLRVersion                     4.0.30319.1
PSRemotingProtocolVersion      2.2

PS C:\Windows\system32> $PSVersionTable.PSVersion
Major  Minor  Build  Revision
-----  -----  -----  --------
3      0      -1     -1
Download the SSH-Sessions module (SSH-SessionsPSv3.zip) from its download section.
Extract SSH-SessionsPSv3.zip; this creates:
C:\Downloads\SSH-SessionsPSv3\SSH-Sessions

Get the Module Paths:

PS C:\Windows\system32> $env:PSModulePath
C:\Users\rumman\Documents\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules\

This is the per-user module path: C:\Users\rumman\Documents\WindowsPowerShell\Modules
Copy "C:\Downloads\SSH-SessionsPSv3\SSH-Sessions" To "C:\Users\rumman\Documents\WindowsPowerShell\Modules"

Now, at the shell prompt, use the following commands to ensure that the module is available:
PS C:\Windows\system32> Get-Module *ssh*
ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Script     SSH-Sessions                        {ConvertFrom-SecureToPlain, Enter-SshSession, Get-SshSession, 
PS C:\Windows\system32> Get-Command *ssh*
CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Function        Enter-SshSession                                   SSH-Sessions
Function        Get-SshSession                                     SSH-Sessions
Function        Invoke-SshCommand                                  SSH-Sessions
Function        New-SshSession                                     SSH-Sessions
Function        Remove-SshSession                                  SSH-Sessions

Import Module:

PS C:\Users\rumman> Import-Module SSH-Sessions
Import-Module : Could not load file or assembly
'file:///C:\Users\rumman\Documents\WindowsPowerShell\Modules\SSH-Sessions\Renci.SshNet.dll' or one of its
dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)
At line:1 char:1
+ Import-Module SSH-Sessions
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [Import-Module], FileLoadException
    + FullyQualifiedErrorId : FormatXmlUpdateException,Microsoft.PowerShell.Commands.ImportModuleCommand

I faced the above issue, and what I did was unblock the files.
Go to C:\Users\rumman\Documents\WindowsPowerShell\Modules\SSH-Sessions
There are 3 files:
- Renci.SshNet.dll
- SSH-Sessions.psd1
- SSH-Sessions.psm1

For each file, right-click -> Properties -> Unblock
Then, load the module again:
PS C:\Users\rumman> Import-Module SSH-Sessions
PS C:\Users\rumman>

Verify SSH Connection:

PS C:\Users\rumman> New-SSHSession -ComputerName "10.0.0.16" -Username "postgres"
No key provided. Enter SSH password for postgres: ******
Successfully connected to 10.0.0.16
PS C:\Users\rumman> Get-SSHSession
ComputerName                                                                                                  Connected
------------                                                                                                  ---------
10.0.0.16                                                                                                          True

PS C:\Users\rumman> Remove-SSHSession -ComputerName "10.0.0.16"
10.0.0.16 should now be disconnected and disposed.

Set up password-less SSH:

Download putty.exe and puttygen.exe from the PuTTY download site.

There are plenty of sites describing the steps to use puttygen to set up password-less SSH.
I saved the private key at C:\putty\private_key.ppk.
I also used the menu Conversions -> Export OpenSSH key
and saved it as exported_openssh_key.
This exported key is usable by other applications.


When password-less SSH is working from PuTTY, try it from PowerShell:
PS C:\Users\rumman> New-SSHSession -ComputerName "10.0.0.16" -UserName "postgres" -KeyFile C:\putty\exported_openssh_key
Key file specified. Will override password. Trying to read key file...
PS C:\Users\rumman> Get-SSHSession
ComputerName                                                                                                  Connected
------------                                                                                                  ---------
10.0.0.16                                                                                                          True




Thursday, September 4, 2014

Transfer user grants from one PostgreSQL instance to another

Transferring roles from one host to another:

pg_dumpall -r -h source_host > roles.sql
psql -h destination_host -f roles.sql

Transfer role grants from one host to another:

pg_dump -h source_host -s | egrep '(GRANT|REVOKE)' > grants.sql
psql -h destination_host -f grants.sql 
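
A quick sanity check after the import is to compare the role and privilege listings on both hosts (a sketch; the database name is a placeholder):

# Role lists should now match between the two hosts
psql -h source_host -c '\du'
psql -h destination_host -c '\du'

# Table privileges in the transferred database should match as well
psql -h destination_host -d some_database -c '\dp'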
Friday, July 25, 2014

PostgreSQL single-user mode and recovering from transaction wraparound

We ran into a problem with the following error:
"FATAL: database is not accepting commands to avoid wraparound data loss in database "testdb""

This means the PostgreSQL database ran out of transaction IDs (XIDs).
There is a good document on why this happens:
http://www.postgresql.org/docs/9.2/static/routine-vacuuming.html
23.1.5. Preventing Transaction ID Wraparound Failures

So to recover, I stopped the running PostgreSQL instance and started it in single-user mode.

~/bin/postgres --single -D /opt/msp/pkg/postgres/data/ testdb

Then I started a VACUUM FULL on testdb.

backend> vacuum full analyze verbose;
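
Once the vacuum completes, the instance can be started normally and the remaining wraparound margin checked, for example (a sketch; adjust the start command to however this instance is normally managed):

# pg_ctl is assumed to live next to the postgres binary used above
~/bin/pg_ctl -D /opt/msp/pkg/postgres/data start
# a low age() here means the database is well away from the wraparound limit again
psql -d testdb -c "SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;"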


However, I found a problem when running the postgres command to get into single-user mode.

-bash-4.1$ ~/bin/postgres --help
/opt/msp/pkg/postgres/bin/postgres: /lib64/libz.so.1: version `ZLIB_1.2.3.3' not found (required by /opt/msp/pkg/postgres/bin/../lib/libxml2.so.2)

I saw that libz.so.1 existed both at /opt/msp/pkg/postgres/bin/../lib and at /lib64/, and the older one in /lib64/ was being picked up, which was creating the problem.

I set LD_LIBRARY_PATH to /opt/msp/pkg/postgres/bin/../lib:
export LD_LIBRARY_PATH=/opt/msp/pkg/postgres/bin/../lib

Now it works:
~/bin/postgres --help
postgres is the PostgreSQL server.
Usage:
  postgres [OPTION]...
Options:
  -B NBUFFERS     number of shared buffers
  -c NAME=VALUE   set run-time parameter
  -d 1-5          debugging level
  -D DATADIR      database directory
  -e              use European date input format (DMY)
  -F              turn fsync off
  -h HOSTNAME     host name or IP address to listen on
  -i              enable TCP/IP connections
  -k DIRECTORY    Unix-domain socket location
  -l              enable SSL connections
  -N MAX-CONNECT  maximum number of allowed connections
  -o OPTIONS      pass "OPTIONS" to each server process (obsolete)
  -p PORT         port number to listen on
  -s              show statistics after each query
  -S WORK-MEM     set amount of memory for sorts (in kB)
  --NAME=VALUE    set run-time parameter
  --describe-config  describe configuration parameters, then exit
  --help          show this help, then exit
  --version       output version information, then exit
.... 
Monday, February 10, 2014

Understanding XID Wraparound


A few days back, I faced a disaster with one of our databases due to an XID wraparound issue. I started to dig into the details of the actual cause of the failure. There are some very good articles about the XID mechanism; I added those as references below.

I have also added my test results here.

Created a brand new cluster:

initdb
   
postgres=# select datname, datfrozenxid from pg_database;
  datname  | datfrozenxid
-----------+--------------
 template1 |         1798
 template0 |         1798
 postgres  |         1798
(3 rows)
   

postgres=# select txid_current();
 txid_current
--------------
         1809
(1 row)

postgres=# \c testdb
You are now connected to database "testdb" as user "postgres92".
testdb=# create table t ( i int);
CREATE TABLE
testdb=# insert into t values (1);
INSERT 0 1
testdb=# select txid_current();
 txid_current
--------------
         1812
(1 row)


testdb=# select datname, datfrozenxid from pg_database;
  datname  | datfrozenxid
-----------+--------------
 template1 |         1798
 template0 |         1798
 postgres  |         1798
 testdb    |         1798
(4 rows)


testdb=# select txid_current();
 txid_current
--------------
         1854
(1 row)


Identified in the source:


In postgresql-9.2.4/src/backend/access/varsup.c,
I found functions:
 TransactionId GetNewTransactionId(bool isSubXact)
 TransactionId ReadNewTransactionId(void)
 void SetTransactionIdLimit(TransactionId oldest_datfrozenxid, Oid oldest_datoid)

After every start of the database and after every run of the autovacuum process, PostgreSQL calls "SetTransactionIdLimit" to set the transaction ID limits.
It uses the following formulas:

xidWrapLimit = oldest_datfrozenxid + (MaxTransactionId >> 1);
xidStopLimit = xidWrapLimit - 1000000;   // (1M)
xidWarnLimit = xidStopLimit - 10000000;  // (10M)
xidVacLimit  = oldest_datfrozenxid + autovacuum_freeze_max_age;

When the database XID crosses xidVacLimit, it starts the autovacuum process.
When the database XID crosses xidWarnLimit, it warns us.
When the database XID crosses xidStopLimit, it stops generating new XIDs and raises errors.



In our case:


oldest_datfrozenxid = 1798
MaxTransactionId = 4294967295

xidWrapLimit = oldest_datfrozenxid + (MaxTransactionId >> 1);
MaxTransactionId >> 1 = 2147483647
xidWrapLimit = 2147485445

xidStopLimit = xidWrapLimit - 1000000;  (1M)
xidStopLimit = 2146485445


// We'll start complaining loudly when we get within 10M transactions of the stop point.
xidWarnLimit = xidStopLimit - 10000000; (10M)
xidWarnLimit = 2136485445

//We'll start trying to force autovacuums when oldest_datfrozenxid gets to be more than autovacuum_freeze_max_age transactions old.
xidVacLimit = oldest_datfrozenxid + autovacuum_freeze_max_age;
xidVacLimit = 1798 + 100000000 (100M) = 100001798


So, I modified the values in the source as follows to trigger the alerts quickly:
printf("\nRumman: SetTransactionId: assign\n");
xidVacLimit = 1900;
xidWarnLimit = 1910;
xidStopLimit = 2000;
xidWrapLimit = 2500;

So when I started the db, I got the following in the log:
  
Rumman: SetTransactionId: assign

Rumman: SetTransactionID() : xidVacLimit = 1900
Rumman: SetTransactionID() : xidWarnLimit = 1910
Rumman: SetTransactionID() : xidStopLimit = 2000
Rumman: SetTransactionID() : xidWrapLimit = 2500
Rumman: SetTransactionId: curXid = 1903
Rumman: SetTransactionID: passed xidVacLimit

Then I advanced the XID using txid_current(), crossed the value 1910, and got:
Rumman: getTransactionId: passed vacuum limit
WARNING:  database "testdb" must be vacuumed within 590 transactions
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.
    You might also need to commit or roll back old prepared transactions.
WARNING:  database "testdb" must be vacuumed within 589 transactions
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.
    You might also need to commit or roll back old prepared transactions.
WARNING:  database "postgres" must be vacuumed within 588 transactions
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.
    You might also need to commit or roll back old prepared transactions.


Next, I modified the values for xidStopLimit and xidWrapLimit to the maximum and ran the db:
Rumman: getTransactionId: passed vacuum limit

Rumman: SetTransactionId: assign

Rumman: SetTransactionID() : xidVacLimit = 1900
Rumman: SetTransactionID() : xidWarnLimit = 1910

Rumman: SetTransactionId: curXid = 1912
Rumman: SetTransactionID: passed xidVacLimit

Rumman: Set TransactionID (): passed xidWarnLimit
In psql console:

postgres=# \c testdb
You are now connected to database "testdb" as user "postgres92".
testdb=# select datname, datfrozenxid from pg_database;
  datname  | datfrozenxid
-----------+--------------
 template1 |         1798
 template0 |         1798
 postgres  |         1798
 testdb    |         1798
(4 rows)


Next, I executed VACUUM FULL and the datfrozenxid changed:
VACUUM FULL;

testdb=# select datname, datfrozenxid from pg_database;
  datname  | datfrozenxid
-----------+--------------
 template1 |         1798
 template0 |         1798
 postgres  |         1798
 testdb    |         1905
(4 rows)

testdb=# insert into t values (1);
ERROR:  database is not accepting commands to avoid wraparound data loss in database "postgres"
HINT:  Stop the postmaster and use a standalone backend to vacuum that database.
You might also need to commit or roll back old prepared transactions.

Then I reverted varsup.c to its original source, ran the db, and it worked fine.

So, I came to this conclusion:
xidVacLimit = oldest_datfrozenxid + autovacuum_freeze_max_age
We should always keep an eye on
"datfrozenxid + autovacuum_freeze_max_age"
in order to prevent XID wraparound issues.
Here,
datfrozenxid is the column of a database's pg_database row that gives a lower bound on the normal XIDs appearing in that database; it is just the minimum of the per-table relfrozenxid values within the database.

And relfrozenxid is the column of a table's pg_class row that contains the freeze cutoff XID used by the last whole-table VACUUM of that table. All normal XIDs older than this cutoff XID are guaranteed to have been replaced by FrozenXID within the table.
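
A simple way to keep an eye on this is to compare each database's XID age against autovacuum_freeze_max_age, for example from a cron job (a sketch; connection options are omitted):

# Databases whose xid_age approaches freeze_max_age are the ones autovacuum will soon force-freeze
psql -c "SELECT datname,
                age(datfrozenxid) AS xid_age,
                current_setting('autovacuum_freeze_max_age')::int AS freeze_max_age
         FROM pg_database
         ORDER BY age(datfrozenxid) DESC;"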

References:

http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html
http://www.databasesoup.com/2012/10/freezing-your-tuples-off-part-2.html


Wednesday, January 22, 2014

Exclude tables during pg_restore

Say we have a database like:

postgres=# \c testdb
You are now connected to database "testdb" as user "postgres".
testdb=# \d
        List of relations
 Schema | Name | Type  |  Owner  
--------+------+-------+----------
 public | t    | table | postgres
 public | t1   | table | postgres
 public | t2   | table | postgres
(3 rows)


testdb=# insert into t1 select c, c::text||'abc' as some_text from generate_series(1,10) as c;
INSERT 0 10
testdb=# insert into t2 select c, c::text||'abc' as some_text from generate_series(1,10) as c;
INSERT 0 10
testdb=# insert into t select c, c::text||'abc' as some_text from generate_series(1,10) as c;
INSERT 0 10


Take a dump of the database:

pg_dump testdb -Fc -v > testdb.sqlc

Now our task is to restore the database excluding table "t1":

Create a new database:

create database testdb_new;

Restore schema only:

pg_restore -d testdb_new -s -v testdb.sqlc

pg_restore: connecting to database for restore
Password:
pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating EXTENSION plpgsql
pg_restore: creating COMMENT EXTENSION plpgsql
pg_restore: creating TABLE t
pg_restore: creating TABLE t1
pg_restore: creating TABLE t2
pg_restore: creating CONSTRAINT t1_pkey
pg_restore: creating CONSTRAINT t_pkey
pg_restore: setting owner and privileges for SCHEMA public
pg_restore: setting owner and privileges for COMMENT SCHEMA public
pg_restore: setting owner and privileges for ACL public
pg_restore: setting owner and privileges for EXTENSION plpgsql
pg_restore: setting owner and privileges for COMMENT EXTENSION plpgsql
pg_restore: setting owner and privileges for TABLE t
pg_restore: setting owner and privileges for TABLE t1
pg_restore: setting owner and privileges for TABLE t2
pg_restore: setting owner and privileges for CONSTRAINT t1_pkey
pg_restore: setting owner and privileges for CONSTRAINT t_pkey

Drop table "t2":

postgres=# \c testdb_new
You are now connected to database "testdb_new" as user "postgres".
testdb_new=# drop table  t1;
DROP TABLE

Restore data:

pg_restore -d testdb_new -a -v testdb.sqlc

pg_restore: connecting to database for restore
Password:
pg_restore: restoring data for table "t"
pg_restore: restoring data for table "t1"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2728; 0 35908 TABLE DATA t1 postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "t1" does not exist
    Command was: COPY t1 (i, some_text) FROM stdin;

pg_restore: restoring data for table "t2"
pg_restore: setting owner and privileges for TABLE DATA t
pg_restore: setting owner and privileges for TABLE DATA t1
pg_restore: setting owner and privileges for TABLE DATA t2
WARNING: errors ignored on restore: 1

Verify:

testdb_new=# \d
        List of relations
 Schema | Name | Type  |  Owner  
--------+------+-------+----------
 public | t    | table | postgres
 public | t2   | table | postgres
(2 rows)
testdb_new=# select * from t;
 i  | some_text
----+-----------
  1 | 1abc
  2 | 2abc
  3 | 3abc
  4 | 4abc
  5 | 5abc
  6 | 6abc
  7 | 7abc
  8 | 8abc
  9 | 9abc
 10 | 10abc
(10 rows)
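
An alternative to the schema-restore/drop/data-restore dance above is to build a table of contents with pg_restore -l, strip the unwanted entries, and restore with -L. A sketch (the grep pattern is naive and depends on your table names):

# List the archive's table of contents and drop every entry that mentions table t1
pg_restore -l testdb.sqlc | grep -v ' t1 ' > restore.list

# Restore only the entries that remain in the list
pg_restore -d testdb_new -L restore.list testdb.sqlc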


