Fix Under-replicated blocks in HDFS manually

https://community.hortonworks.com/articles/4427/fix-under-replicated-blocks-in-hdfs-manually.html

Short Description:

Quick instructions for fixing under-replicated blocks in HDFS manually

Article

To fix under-replicated blocks in HDFS, use the quick procedure below (a verification sketch follows the steps):

Fix under-replicated blocks:

  1. su <$hdfs_user>
  2. bash-4.1$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' >> /tmp/under_replicated_files
  3. bash-4.1$ for hdfsfile in `cat /tmp/under_replicated_files`; do echo "Fixing $hdfsfile :" ; hadoop fs -setrep 3 $hdfsfile; done
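
Once the loop completes, you can verify the result by re-running fsck. This is a quick sketch, not part of the original article, reusing the file list written in step 2:

# The fsck summary should now report zero under-replicated blocks
$ hdfs fsck / | grep -i 'under-replicated'
# Or spot-check one of the files that was fixed
$ hdfs fsck $(head -1 /tmp/under_replicated_files) -files -blocks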

Excellent blog on HDFS (Hadoop Distributed File System)

The Architecture of Open Source Applications: The Hadoop Distributed File System

This article answers many questions on HDFS for those who want to understand HDFS in depth.

Hadoop: hdfs block size, replication

Block Size Parameters:

$ hdfs dfs -D dfs.blocksize=1024 -put lfile /user/test
put: Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 1024 < 1048576
$ hdfs dfs -D dfs.blocksize=1048576 -put lfile /user/test

dfs.namenode.fs-limits.min-block-size (default: 1048576): Minimum block size in bytes, enforced by the NameNode at create time. This prevents the accidental creation of files with tiny block sizes (and thus many blocks), which can degrade performance.
dfs.namenode.fs-limits.max-blocks-per-file (default: 1048576): Maximum number of blocks per file, enforced by the NameNode on write. This prevents the creation of extremely large files, which can degrade performance.
dfs.blocksize (default: 134217728): The default block size for new files, in bytes. You can use the following suffixes (case insensitive): k (kilo), m (mega), g (giga), t (tera), p (peta), e (exa) to specify the size (such as 128k, 512m, 1g), or provide the complete size in bytes (such as 134217728 for 128 MB); see the sketch after this list.
dfs.client.block.write.retries (default: 3): Number of times the client retries writing a block to the DataNodes before signalling failure.
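
As a quick illustration of the dfs.blocksize suffixes mentioned above, a minimal sketch (lfile is the same hypothetical local file used in the earlier example):

# 128m is equivalent to 134217728 bytes
$ hdfs dfs -D dfs.blocksize=128m -put lfile /user/test/lfile_128m
# Verify the block size recorded for the file (%o prints the block size in bytes)
$ hdfs dfs -stat "blocksize=%o" /user/test/lfile_128m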

Replication Parameters:

dfs.replication (default: 3): Default block replication. The actual number of replicas can be specified when the file is created (see the sketch after this list); the default is used if replication is not specified at create time.
dfs.replication.max (default: 512): Maximum block replication.
dfs.namenode.replication.min (default: 1): Minimum block replication.
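
As noted in the dfs.replication entry above, the replication factor can be set per file at create time and changed later. A minimal sketch, assuming the same hypothetical local file lfile used in the block size example:

# Write a file with a non-default replication factor
$ hdfs dfs -D dfs.replication=2 -put lfile /user/test/lfile2
# Check the replication factor that was applied (%r)
$ hdfs dfs -stat "replication=%r" /user/test/lfile2
# Change it afterwards; -w waits until re-replication completes
$ hdfs dfs -setrep -w 3 /user/test/lfile2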

Hadoop: FAQ on hdfs snapshot

$ hdfs dfs -mkdir /user/test
$ hdfs dfs -put output23.txt /user/test
$ hdfs dfs -du /user/test
67335  /user/test/output23.txt
$ hdfs dfsadmin -allowSnapshot /user/test
Allowing snaphot on /user/test succeeded
$ hdfs dfs -createSnapshot /user/test
Created snapshot /user/test/.snapshot/s20150924-050641.662

How to recover a file from the .snapshot directory?

$ hdfs dfs -rm /user/test/output23.txt
15/09/24 05:08:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://sandbox.hortonworks.com:8020/user/test/output23.txt' to trash at: hdfs://sandbox.hortonworks.com:8020/user/hdfs/.Trash/Current

Confirm the file is deleted:
$ hdfs dfs -ls /user/test/output23.txt
ls: `/user/test/output23.txt': No such file or directory

$ hdfs dfs -cp /user/test/.snapshot/s20150924-050641.662/output23.txt /user/test

15/09/24 05:12:02 WARN hdfs.DFSClient: DFSInputStream has been closed already
$ hdfs dfs -ls /user/test/output23.txt
-rw-r--r--   1 hdfs hdfs      67335 2015-09-24 05:12 /user/test/output23.txt

Can we delete files from the .snapshot copy?

Try to delete a file from .snapshot.
Since .snapshot is a read-only copy, it does not allow files to be deleted.

$ hdfs dfs -rm /user/test/.snapshot/s20150924-050641.662/output23.txt
rm: Failed to move to trash: hdfs://sandbox.hortonworks.com:8020/user/test/.snapshot/s20150924-050641.662/output23.txt: ".snapshot" is a reserved name.

How to list snapshottable directories?

$ hdfs lsSnapshottableDir
drwxr-xr-x 0 hdfs hdfs 0 2015-09-24 05:12 1 65536 /user/test

How to rename a snapshot?

$ hdfs dfs -renameSnapshot /user/test s20150924-050641.662 b4patch.24sep15

$ hdfs dfs -ls /user/test/.snapshot
drwxr-xr-x   - hdfs hdfs          0 2015-09-24 05:06 /user/test/.snapshot/b4patch.24sep15

How to show the difference between two snapshots?

$ touch testfile
$ hdfs dfs -put testfile /user/test
$ hdfs dfs -ls /user/test
Found 2 items
-rw-r--r--   1 hdfs hdfs      67335 2015-09-24 05:12 /user/test/output23.txt
-rw-r--r--   1 hdfs hdfs          0 2015-09-24 05:32 /user/test/testfile

$ hdfs dfs -createSnapshot /user/test
Created snapshot /user/test/.snapshot/s20150924-053313.181
$ hdfs dfs -renameSnapshot /user/test s20150924-053313.181 afterPatch.24sep15

$ hdfs snapshotDiff /user/test b4patch.24sep15 afterPatch.24sep15
Difference between snapshot b4patch.24sep15 and snapshot afterPatch.24sep15 under directory /user/test:
M    .
+    ./output23.txt
+    ./testfile
-    ./output23.txt

(In snapshotDiff output, M = modified, + = created, - = deleted, R = renamed. output23.txt appears as both created and deleted because the original file was moved to trash and a new copy was restored from the snapshot.)

How to delete a snapshot?

$ hdfs dfs -deleteSnapshot /user/test afterPatch.24sep15
$ hdfs dfs -deleteSnapshot /user/test b4patch.24sep15

Reference:

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html#Delete_Snapshots

Hadoop: HDFS: Recover files deleted in HDFS from .Trash

When files or directories are deleted, Hadoop moves them to the .Trash directory, provided the trash feature is enabled and a deletion interval is set (as reported by the "TrashPolicyDefault: Namenode trash configuration: Deletion interval" log line).

hadoop fs -rm -r -f /user/root/employee

15/07/26 05:12:14 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.

Moved: 'hdfs://sandbox.hortonworks.com:8020/user/root/employee' to trash at: hdfs://sandbox.hortonworks.com:8020/user/root/.Trash/Current

# hadoop fs -ls /user/root/employee

ls: `/user/root/employee': No such file or directory

Notes on .Trash:

The Hadoop trash feature helps prevent accidental deletion of files and directories. If trash is enabled and a file or directory is deleted using the Hadoop shell, the file is moved to the .Trash directory in the user’s home directory instead of being deleted. Deleted files are initially moved to the Current sub-directory of the .Trash directory, and their original path is preserved. Files in .Trash are permanently removed after a user-configurable time interval. The interval setting also enables trash checkpointing, where the Current directory is periodically renamed using a timestamp. Files and directories in the trash can be restored simply by moving them to a location outside the .Trash directory.
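
Restoring from trash is simply a move back out of the .Trash directory, since the original path is preserved under Current. A minimal sketch, assuming the /user/root/employee directory deleted above and the default trash location shown in the output:

# Move the deleted directory out of trash back to its original location
$ hadoop fs -mv /user/root/.Trash/Current/user/root/employee /user/root/employee
# Optionally checkpoint and purge old trash contents right away
$ hadoop fs -expunge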

Where is the configuration located?

# grep -ri -a1 trash /etc/hadoop/conf/

/etc/hadoop/conf/core-site.xml: <name>fs.trash.interval</name>

/etc/hadoop/conf/core-site.xml- <value>360</value>

In Cloudera Manager you can enable/disable trash and configure the interval:

http://www.cloudera.com/content/cloudera/en/documentation/cloudera-manager/v4-latest/Cloudera-Manager-Managing-Clusters/cmmc_hdfs_trash.html

From Ambari, you can change the setting using the path below:

HDFS ==> Configs ==> Advanced Core Site ==> fs.trash.interval
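
Whichever tool you use, the value lands in fs.trash.interval (in minutes; 0 disables trash). As a quick sanity check, sketched here with the standard getconf utility, you can print the effective value on a node; it should match the 360 seen in the core-site.xml grep above:

# Print the trash interval the HDFS client actually sees (minutes; 0 = trash disabled)
$ hdfs getconf -confKey fs.trash.interval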