Lift uses IBM Aspera under the covers to move your data to the cloud at blazing fast speeds.
Lift uses Aspera's patented transport technology that leverages existing WAN infrastructure and commodity hardware to achieve speeds that are hundreds of times faster than FTP and HTTP.
Lift automatically recovers from common problems you might hit during the migration. For example, if your file upload is interrupted mid-transfer, Lift resumes where it left off. File uploads are stable and robust, even over the most bandwidth-constrained networks.
Nobody wants to end up on the front page of the news. Any data moved over the wire to the IBM Cloud is secured with a 256-bit encrypted connection.
We want you to try our cloud data services. Cost shouldn't be an issue.
Every data migration is split into three steps: extract from source, transport over the wire, and load into target. Our CLI gives you the flexibility to perform these three steps separately so that your data migration works around your schedule, not the other way around.
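To illustrate, moving a single table can be three separate commands, each run whenever it fits your schedule. The values here are placeholders; the tutorials below give the full per-source option lists:

% lift extract --source-schema <schema> --source-table <table> --source-database <database> --source-host <host> --source-user <user> --source-password <password> --source-database-port <port> --file <path-to-csv-file>
% lift put --file <path-to-csv-file> --target-user <user> --target-password <password> --target-host <host>
% lift load --filename <csv-file-name> --target-schema <schema> --target-table <table> --file-origin <file-origin> --target-user <user> --target-password <password> --target-host <host>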
You'll install the Lift CLI only once on your on-premises machine. Under the covers, the CLI works with the Lift Core Services running in the IBM Cloud to help get your data to your Watson Data Platform persistent store. Like any other cloud app, Lift never requires an update. New features are instantly available to you without you having to lift a finger.
For Db2 sources, the Lift CLI now lets you create several database objects (including tables) in a target database with a single command. Specify a series of object names in the command, or use wildcards to apply the command to all objects whose names match the specified pattern. This can be done for tables, indexes, views, sequences, procedures, functions, aliases, audit policies, and triggers. For more information, see the Command Reference and FAQ sections of the Lift CLI documentation.
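For example, a single lift ddl --migrate invocation (using the options shown in the Db2 tutorial later on this page) might create every object whose name matches a pattern. The SALES schema and the "ORD*" pattern here are illustrative placeholders:

% lift ddl --migrate --source-schema SALES --source-object "ORD*" --source-database <source-database-name> --source-user <source-user-name> --source-password <source-password> --source-host <source-database-host-name> --source-database-port <source-database-port> --source-database-type db2 --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>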
[Diagram: how the Lift CLI moves data from on-premises to the IBM Cloud]

- Source Database: The source of the data you want to move to the IBM Cloud.
- extract: The 'extract' command extracts data, as flat files, from your database to your local machine. Each flat CSV file contains the data from a single table in your source database.
- put: The 'put' command moves a file from your local disk to the Db2 Warehouse on Cloud landing zone: a space for staging data before ingest.
- ls / df / rm: The 'ls' command lists the files in the cloud landing zone, the 'df' command shows its available disk space, and the 'rm' command removes a file from it.
- load: The 'load' command ingests the data from the landing zone into the Db2 Warehouse on Cloud engine, your fully managed data warehouse in the cloud.
Want to migrate from IBM PureData System for Analytics to IBM Db2 Warehouse on Cloud?
It's a two-step process: convert your schema and migrate your data.
To convert your schema, start by downloading the IBM Database Conversion Workbench. The workbench walks you through converting your source database DDL so that it is compatible with the target. It also produces a report that tells you where manual action is required. Once your schema is in place, you'll use the Lift CLI to migrate your data.
You need to keep feeding your warehouse with new data constantly, and the Lift CLI is here to help.
Start by generating a set of CSV files that represent your incremental changes, per database table. Use the Lift CLI to scoop up those delimited files, push them over the wire, and import the files into IBM Db2 Warehouse on Cloud. Throw these steps in a script, set up a cron job, and you've got an ongoing incremental update of your data warehouse.
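Here's a minimal sketch of such a script, assuming user-generated delta CSV files and using only the lift put and lift load options shown in the tutorials below; the table names, paths, schema, and environment variables are placeholders:

#!/bin/sh
# incremental_load.sh -- illustrative only; adjust tables, paths, and
# credentials to your environment.
for TABLE in ORDERS LINEITEM; do
    # Stage the delta file in the target's landing zone.
    lift put --file /data/increments/$TABLE.csv --target-user $TARGET_USER --target-password $TARGET_PASSWORD --target-host $TARGET_HOST
    # Ingest the staged file; --file-origin user marks a user-generated CSV,
    # and --header-row says the first row holds column headings.
    lift load --filename $TABLE.csv --target-schema MYSCHEMA --target-table $TABLE --header-row --file-origin user --target-user $TARGET_USER --target-password $TARGET_PASSWORD --target-host $TARGET_HOST
done

Schedule it with a crontab entry such as 0 2 * * * /opt/scripts/incremental_load.sh to refresh the warehouse nightly.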
You can use the Lift CLI to migrate data from multiple databases or data sources into a single IBM Db2 Warehouse on Cloud MPP cluster. Lift gives you the flexibility to take tables from multiple data sources and import them under a single schema in IBM Db2 Warehouse on Cloud so that you can decommission your existing database cluster.
Don't slam your transactional data store with reporting queries.
Your customers don't care that you need to run analytics on their buying behavior. They just want a snappy user experience.
Spin up a cloud data warehouse, such as IBM Db2 Warehouse on Cloud, to run analytics on data from your transactional data store. Keep your reports and dashboards up to date by sending small amounts of data from the source, and always have an up-to-date view of your business.
Source | IBM Db2 Warehouse on Cloud | IBM Db2 on Cloud |
---|---|---|
IBM Db2 | Yes | Yes |
IBM Db2 Warehouse | Yes | No |
IBM Integrated Analytics System | Yes | No |
IBM PureData System for Analytics | Yes | No |
Oracle Database | Yes | Yes |
Microsoft SQL Server | Yes | Yes |
CSV file format | Yes | Yes |
The IBM Database Conversion Workbench helps you migrate your source schema to IBM Db2 Warehouse on Cloud. It examines your source DDL and automatically converts it to be compatible with your target engine. If the Database Conversion Workbench can't convert something automatically, you'll get a report detailing the steps you need to take to complete the conversion.
As a prerequisite, you need an instance of Db2 on Cloud or Db2 Warehouse on Cloud. If you have an instance, you're all set! If not, create an instance of one of those IBM Cloud managed services before continuing.
Download the version of the Lift CLI for your operating system.
Unzip the package to a <zip-extract-directory> directory on your hard drive.
To install the Lift CLI, open a terminal window (macOS or Linux) or command prompt (Windows), and navigate to the <zip-extract-directory> directory. Install the Lift CLI by running the following command:
% <zip-extract-directory>/install <lift-home>
For example:
On Linux: $ sudo <zip-extract-directory>/install /opt/lift-cli
On macOS: % sudo <zip-extract-directory>/install /opt/lift-cli
On Windows: > <zip-extract-directory>\install.bat C:\lift-cli
The lift command executable is located in <lift-home>/bin. After the installation completes, add <lift-home>/bin to your PATH environment variable. For the rest of this tutorial, we assume that <lift-home>/bin is in your PATH and that the lift command is accessible from your terminal.
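For example, using the install directories from above:

On Linux: $ export PATH=$PATH:/opt/lift-cli/bin
On macOS: % export PATH=$PATH:/opt/lift-cli/bin
On Windows: > set PATH=%PATH%;C:\lift-cli\bin

On Linux or macOS, add the export line to your shell profile to make it persistent.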
Tip: The lift command can be run by using a properties file to include settings for command options, such as login credentials. You can reference that file from within the command by using the --properties-file or -pf option. The following is an example of a lift extract command that specifies a properties file:

% lift extract --properties-file <path-to-properties-file>

The following example shows the format of a properties file that contains login credentials:

source-user=user1
source-password=pass1
target-user=user1
target-password=pass1
max-load-errors=500
Target database on IBM Cloud | Data source
---|---
IBM Db2 on Cloud | IBM Db2
IBM Db2 on Cloud | Oracle Database
IBM Db2 on Cloud | Microsoft SQL Server
IBM Db2 on Cloud | CSV file format
IBM Db2 Warehouse on Cloud | IBM Db2
IBM Db2 Warehouse on Cloud | IBM Db2 Warehouse
IBM Db2 Warehouse on Cloud | IBM Integrated Analytics System
IBM Db2 Warehouse on Cloud | IBM PureData System for Analytics
IBM Db2 Warehouse on Cloud | Oracle Database
IBM Db2 Warehouse on Cloud | Microsoft SQL Server
IBM Db2 Warehouse on Cloud | CSV file format
You can use the following sample data set to complete the tutorial, or you can use your own data file and DDL. Download the Boston Property Assessment FY2016 (45.6MB) sample data set (courtesy of Analyze Boston). This package contains a schema file (boston_property_assessment_fy2016.schema.sql) and a data file (BOSTON_PROPERTY_ASSESSMENT_FY2016.csv).
Log in to your Db2 Warehouse on Cloud or Db2 on Cloud console.
To create a table, complete the following steps:
Copy the contents of boston_property_assessment_fy2016.schema.sql into the DDL box under the Run SQL tab.
Specify a schema by concatenating the schema name with the table name separated by a period. For example, <SCHEMA_NAME>.BOSTON_PROPERTY_ASSESSMENT_FY2016. If a schema is not specified, the table is created in your default schema. The default schema name is your user name in uppercase.
Click Run All. The result is a table called BOSTON_PROPERTY_ASSESSMENT_FY2016 in the specified or default schema.
Move the data file to the Db2 Warehouse on Cloud or Db2 on Cloud landing zone. This landing zone stages your CSV file before it's ingested into the database. You need your database credentials. You can get these credentials from the console by clicking Connect in the side navigation bar.
Move the data file to the landing zone of the target database by running the following lift put command:
% lift put --file <path-to-csv-file>/BOSTON_PROPERTY_ASSESSMENT_FY2016.csv --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
Load the data set CSV file into the target database by running the following lift load command:
% lift load --filename BOSTON_PROPERTY_ASSESSMENT_FY2016.csv --target-schema <your-schema-name> --target-table BOSTON_PROPERTY_ASSESSMENT_FY2016 --header-row --remove --file-origin user --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
The `--header-row` option specifies that the first row of the data set contains the column headings. The `--file-origin user` option specifies that this CSV file is user-generated and was not extracted by using the lift extract command.
You're done. You can now run SQL queries on the sample data set from the database console.
Create the schema and table structures on your Db2 Warehouse on Cloud database target. You have several options to do this, but the most effective way is to download, install, and use the IBM Database Conversion Workbench (DCW). This tool can help you convert your existing Netezza schema to one that's compatible with the Db2 Warehouse on Cloud engine. After completing the conversion, the DCW produces a report that identifies which parts of your source database DDL were converted automatically and which parts require manual intervention. For more information, see the included step-by-step DCW guide.
After your table structures are in place on the target database, you can start moving your PureData System for Analytics (Netezza) tables. Start by extracting a table to a CSV file. Move that file over the network, stage it in the landing zone on Db2 Warehouse on Cloud, and then load it into the database.
Extract the table to a CSV file by running the following lift extract command:
% lift extract --source-schema <source-schema-name> --source-table <source-table-name> --source-database ADMIN --source-host <source-database-host-name> --source-user <source-user-name> --source-password <source-password> --source-database-port <source-database-port> --file <path-to-csv-file>
Move your CSV file and stage it in the landing zone of the target database by running the following lift put command:
% lift put --file <path-to-csv-file> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
Load your CSV file into the target engine database by running the following lift load command:
% lift load --filename <csv-file-name> --target-schema <target-schema-name> --target-table <target-table-name> --file-origin extract-pda --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
The `--file-origin extract-pda` option specifies that the CSV file was extracted by using the lift extract command.
You're done. You can now run SQL queries on your data from the database console.
Create the table structures on your Db2 Warehouse on Cloud or Db2 on Cloud database target.
To migrate table structures, use the lift ddl command instead of the Database Conversion Workbench (DCW) tool. Choose one of the following methods:
Migrate table structures by using Lift without any intervention (recommended). This method involves a single command that extracts the DDL from the source database and applies it to the target database:
% lift ddl --migrate --source-schema <source-schema-name> --source-object <source-object-name> --source-database <source-database-name> --source-user <source-user-name> --source-password <source-password> --source-host <source-database-host-name> --source-database-port <source-database-port> --source-database-type <source-database-type> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
Extract the table DDL by using the Lift CLI, optionally modify the generated DDL, and apply the DDL to the target database. This method involves two command options:
1. Generate the DDL.
2. Execute the generated DDL.
Use this method if you want to customize the DDL extracted from the source database.
% lift ddl --generate --source-schema <source-schema-name> --source-object <source-object-name> --source-database <source-database-name> --source-user <source-user-name> --source-password <source-password> --source-host <source-database-host-name> --source-database-port <source-database-port> --source-database-type <source-database-type> [options]
% lift ddl --execute --file <path-to-ddl-file> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
Run the lift ddl --help command to see more available command options.
After your table structures are in place on the target database, you can start moving your tables. Start by extracting a table to a CSV file. Move that file over the network, stage it in the landing zone of the target database, and then load it into the database.
Extract the table to a CSV file by running the following lift extract command:
% lift extract --source-schema <source-schema-name> --source-table <source-table-name> --source-database <source-database-name> --source-host <source-host-name> --source-user <source-user-name> --source-password <source-password> --source-database-port <source-database-port> --source-database-type <ias/db2/db2w> --file <path-to-csv-file>
The `ias`, `db2`, and `db2w` settings for the `--source-database-type` command option specify the source database type.
Move your CSV file and stage it in the landing zone of the target database by running the following lift put command:
% lift put --file <path-to-csv-file> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
Load your CSV file into the target engine database by running the following lift load command:
% lift load --filename <csv-file-name> --target-schema <target-schema-name> --target-table <target-table-name> --file-origin <extract-ias/extract-db2/extract-db2w> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
The `extract-ias`, `extract-db2`, and `extract-db2w` settings for the `--file-origin` command option specify that the CSV file was extracted from the corresponding database type by using the lift extract command.
You're done. You can now run SQL queries on your data from the database console.
Lift uses the following modes to extract your data:
- Native
- JDBC
The native mode of the lift extract command leverages Oracle's native tools for extraction and is recommended for the best performance. It requires the Oracle client tools (the Basic and Tools packages), which provide the native extract tooling: install and configure the Oracle client on the same machine as the Lift CLI, and add the client path to the system environment variables so that the Lift CLI can find and use it. If the Oracle client tools cannot be installed or configured properly, the Lift CLI automatically switches to JDBC mode, which is comparatively slower.
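For example, on Linux, assuming the Oracle Instant Client Basic and Tools packages were unpacked to the placeholder path /opt/oracle/instantclient:

$ export PATH=$PATH:/opt/oracle/instantclient
$ export LD_LIBRARY_PATH=/opt/oracle/instantclient:$LD_LIBRARY_PATH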
The native mode extract uses the server encoding irrespective of the encoding on the client machine. If the table name or column names contain multi-byte character set (MBCS) characters, the encoding of the client must match the encoding of the server so that the table and column names can be specified properly in the Lift CLI commands.
Recommended path for migration: migrate table structures to the target database -> move data by using the Lift CLI -> migrate indexes and other constraints (optional).
Create the schema and table structures on your IBM Cloud target database (Db2 Warehouse on Cloud or Db2 on Cloud). You have several options to do this, but the most effective way is to download, install, and use the
IBM Database Conversion Workbench (DCW). This tool can help you convert your existing Oracle schema to one that's compatible with the Db2 Warehouse on Cloud or Db2 on Cloud engine. After completing the conversion, the DCW produces a report that identifies which parts of your source DDL were converted automatically and which parts require manual intervention. For more information, see the included step-by-step DCW guide.
After your table structures are in place, you can start moving your Oracle tables. Start by extracting a table to a CSV file. Move that file over the network, stage it in the landing zone on Db2 Warehouse on Cloud or on Db2 on Cloud, and then load it into the database.
Extract the table to a CSV file by running the following lift extract command:
% lift extract --source-schema <oracle-schema-name> --source-table <oracle-table-name> --source-database <oracle-database-name> --source-host <oracle-host-name> --source-user <oracle-user-name> --source-password <oracle-password> --source-database-port <oracle-database-port> --source-database-type oracle --file <path-to-csv-file>
Move your CSV file and stage it in the landing zone of the target database by running the following lift put command:
% lift put --file <path-to-csv-file> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
Load your CSV file into the target engine database by running the following lift load command:
% lift load --filename <csv-file-name> --target-schema <target-schema-name> --target-table <target-table-name> --file-origin extract-oracle --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
The `--file-origin extract-oracle` option specifies that the CSV file was extracted by using the lift extract command.
You're done. You can now run SQL queries on your data from the database console.
Optionally, you can move indexes and other constraints from the source database to the target database after completing the data movement.
Recommended path for migration: migrate table structures to the target database -> move data by using the Lift CLI -> migrate indexes and other constraints (optional).
Create the schema and table structures on your IBM Cloud target database (Db2 Warehouse on Cloud or Db2 on Cloud).
After your table structures are in place, you can start moving your Microsoft SQL Server tables. Start by extracting a table to a CSV file. Move that file over the network, stage it in the landing zone on Db2 Warehouse on Cloud or on Db2 on Cloud, and then load it into the database.
Extract the table to a CSV file by running the following lift extract command:
% lift extract --source-schema <sqlserver-schema-name> --source-table <sqlserver-table-name> --source-database <sqlserver-database-name> --source-host <sqlserver-host-name> --source-user <sqlserver-user-name> --source-password <sqlserver-password> --source-database-port <sqlserver-database-port> --source-database-type sqlserver --file <path-to-csv-file>
Move your CSV file and stage it in the landing zone of the target database by running the following lift put command:
% lift put --file <path-to-csv-file> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
Load your CSV file into the target engine database by running the following lift load command:
% lift load --filename <csv-file-name> --target-schema <target-schema-name> --target-table <target-table-name> --file-origin extract-sqlserver --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>
The `--file-origin extract-sqlserver` option specifies that the CSV file was extracted by using the lift extract command.
You're done. You can now run SQL queries on your data from the database console.
Optionally, you can move indexes and other constraints from the source database to the target database after completing the data movement.