Hadoop: Sqoop import into a Hive table

This loads data into the existing Hive table temp10:

sqoop import --connect jdbc:mysql://localhost/mysql --table temp --hive-import --hive-table temp10 --username root -m 1

hive> select * from temp10;
10    10
10    20
Time taken: 0.242 seconds, Fetched: 2 row(s)
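
Note that re-running the same command appends a second copy of the rows to temp10. To replace the table's contents on re-import instead, --hive-overwrite can be added — a sketch, assuming the same local MySQL source as above:

```shell
# Replace (rather than append to) the existing contents of temp10.
# Assumes the same MySQL source and credentials as the command above.
sqoop import --connect jdbc:mysql://localhost/mysql --table temp \
  --hive-import --hive-table temp10 --hive-overwrite \
  --username root -m 1
```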

This creates a new Hive table tempnew and imports the data into it:

sqoop import --connect jdbc:mysql://localhost/mysql --table temp --hive-import --hive-table tempnew --create-hive-table --username root -m 1

hive> select * from tempnew;
10    10
10    20
Time taken: 0.324 seconds, Fetched: 2 row(s)
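
With --create-hive-table set, the import is expected to abort if the target table already exists, so running the same command a second time should fail rather than append — a sketch of that behavior:

```shell
# Second run of the same import: tempnew already exists in Hive,
# so with --create-hive-table the job should fail instead of
# appending to the existing table.
sqoop import --connect jdbc:mysql://localhost/mysql --table temp \
  --hive-import --hive-table tempnew --create-hive-table \
  --username root -m 1
```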


Argument                        Description
--hive-home <dir>               Override $HIVE_HOME.
--hive-import                   Import tables into Hive (uses Hive's default delimiters if none are set).
--hive-overwrite                Overwrite existing data in the Hive table.
--create-hive-table             If set, the job fails if the target Hive table already exists. Off by default.
--hive-table <table-name>       Sets the table name to use when importing into Hive.
--hive-drop-import-delims       Drops \n, \r, and \01 from string fields when importing into Hive.
--hive-delims-replacement <s>   Replaces \n, \r, and \01 in string fields with a user-defined string when importing into Hive.
--hive-partition-key <key>      Name of the Hive column on which the imported data is partitioned.
--hive-partition-value <v>      String value that serves as the partition-key value for the data imported into Hive in this job.
--map-column-hive <map>         Overrides the default mapping from SQL type to Hive type for the configured columns.
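
Several of these arguments can be combined in one import. The sketch below is illustrative only — the partition column dt, its value 2015-01-01, the target table temp_part, and the id column are hypothetical, not taken from the tables above:

```shell
# Hypothetical example: import into a Hive table partitioned by a
# column "dt", drop Hive's delimiter characters from string fields,
# and force a column named "id" to Hive type BIGINT.
sqoop import --connect jdbc:mysql://localhost/mysql --table temp \
  --hive-import --hive-table temp_part \
  --hive-partition-key dt --hive-partition-value 2015-01-01 \
  --hive-drop-import-delims \
  --map-column-hive id=bigint \
  --username root -m 1
```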