MySQL passwordless access to root on localhost

mysql_config_editor is the answer for managing MySQL without displaying the password at a prompt or passing it around in plain-text files.

The mysql_config_editor tool creates an encrypted .mylogin.cnf file, which resolves the age-old problem of cleartext passwords sitting in MySQL scripts.

The command has the following options:

set [command options]     Sets user name/password/host name/socket/port for a given login path (section).
remove [command options]  Removes a login path from the login file.
print [command options]   Prints all the options for a specified login path.
reset [command options]   Deletes the contents of the login file.
help                      Displays this usage/help information.
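
A minimal sketch of the workflow (the login-path name "local" and the root@localhost credentials are placeholders):

# Store the credentials once; the password is prompted for interactively
# and written encrypted to ~/.mylogin.cnf
mysql_config_editor set --login-path=local --host=localhost --user=root --password

# Inspect what was stored (the password is shown obfuscated)
mysql_config_editor print --login-path=local

# Connect without typing or embedding a password
mysql --login-path=local -e "SELECT VERSION();"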

TERASORT – benchmark using hadoop-mapreduce-examples.jar

# Generate 10,000,000,000 rows of 100 bytes each (~1 TB) into /teraInput
hadoop jar /usr/hdp/2.5.0.0-1245/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 10000000000 /teraInput

# (optional) archive the input of a previous run out of the way first
# hdfs dfs -mv /teraInput /user/root/10000000

# Sort the generated data; terasort takes only the input and output directories
hadoop jar /usr/hdp/2.5.0.0-1245/hadoop-mapreduce/hadoop-mapreduce-examples.jar terasort /teraInput /teraOutput

# Validate that /teraOutput is globally sorted; the report is written to /teraValidate
hadoop jar /usr/hdp/2.5.0.0-1245/hadoop-mapreduce/hadoop-mapreduce-examples.jar teravalidate /teraOutput /teraValidate
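
To benchmark wall-clock time, the terasort step can be wrapped in time, and the validate report confirms the sort afterwards (paths as above; the report file name can vary slightly by Hadoop version):

time hadoop jar /usr/hdp/2.5.0.0-1245/hadoop-mapreduce/hadoop-mapreduce-examples.jar terasort /teraInput /teraOutput
hdfs dfs -cat /teraValidate/part-*    # any "misorder" lines mean the output is not sorted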

REF: Running-TeraSort-MapReduce-Benchmark

Ambari LDAP Setup and Kerberos Setup

Steps to configure LDAP for Ambari

Note: These steps were performed on Ambari 2.2.2 with HDP 2.3.2 (Hortonworks Hadoop).
  1. Configure /etc/ambari-server/conf/ambari.properties

The ambari-ldap-ad.sh script was used to append these settings to the ambari.properties file:

cat <<-'EOF' | sudo tee -a /etc/ambari-server/conf/ambari.properties
authentication.ldap.baseDn=dc=example,dc=com
authentication.ldap.managerDn=cn=ldap-connect,ou=users,ou=hdp,dc=example,dc=com
authentication.ldap.primaryUrl=activedirectory.example.com:389
authentication.ldap.bindAnonymously=false
authentication.ldap.dnAttribute=distinguishedName
authentication.ldap.groupMembershipAttr=member
authentication.ldap.groupNamingAttr=cn
authentication.ldap.groupObjectClass=group
authentication.ldap.useSSL=false
authentication.ldap.userObjectClass=user
authentication.ldap.usernameAttribute=sAMAccountName
EOF
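
Before running the Ambari setup, it can be worth verifying the bind DN and search base with a plain ldapsearch (from the openldap-clients package; the filter and user1 below are placeholders):

ldapsearch -H ldap://activedirectory.example.com:389 \
  -D "cn=ldap-connect,ou=users,ou=hdp,dc=example,dc=com" -W \
  -b "dc=example,dc=com" "(sAMAccountName=user1)" dn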

Now run the command:

# ambari-server setup-ldap

It reads the required properties from the ambari.properties file set up above. The important prompts are:

Primary URL* (host:port): (activedirectory.example.com:389)

Base DN* (dc=example,dc=com)

Manager DN* (cn=ldap-connect,ou=users,ou=hdp,dc=example,dc=com)

Enter Manager Password*: ******

Re-enter password: ******

ambari-server start

Ambari LDAP Sync

Create users.txt and groups.txt with the users and groups to be synced with Ambari:

echo "user1,user2,user3" > users.txt

echo "group1,group2,group3" > groups.txt

ambari-server sync-ldap --users users.txt

ambari-server sync-ldap --groups groups.txt

Enter Ambari Admin login: admin

Enter Ambari Admin password: *******
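
Two other sync modes exist; confirm the exact flags on your version with ambari-server sync-ldap --help:

ambari-server sync-ldap --all        # sync all users and groups found in LDAP
ambari-server sync-ldap --existing   # refresh only the users/groups Ambari already knows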

AMBARI KERBEROS SETUP

Prerequisite: get the service principal (an AD service account, if AD is configured as the Kerberos KDC).

Steps to create a new service principal, set its password, and create a keytab (AD with Centrify configuration):
  1. Create the Ambari service principal (a service account in Active Directory; typically the AD admin team helps create it):

ambari-adm@example.com

adkeytab --new --upn ambari-adm@example.com --keytab ambari-adm.keytab --container "OU=Hadoop,OU=Application,DC=example,DC=com" -V ambari-adm --user adadmin@example.com --ignore

2. Set a password for the new principal (AD service account):

adpasswd  -a adadmin@example.com ambari-adm@example.com

3. Generate a keytab file for this user account (again, the AD admin can help):

adkeytab -A --ignore -u adadmin@example.com -K ambari-adm.keytab -e arcfour-hmac-md5 --force --newpassword P@$$w0rd -S ambari-adm ambari-adm -V
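
Before wiring the keytab into Ambari, it can be verified with the standard MIT Kerberos tools (the realm EXAMPLE.COM follows the Hive principal shown later; adjust to your environment):

klist -kte ambari-adm.keytab                                # list entries and encryption types
kinit -kt ambari-adm.keytab ambari-adm@EXAMPLE.COM && klist # confirm it can actually authenticate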
Now set up Ambari with Kerberos:

# ambari-server setup-security

Select option: 3

Setup Ambari Kerberos JAAS configuration.

Enter Ambari Server's Kerberos principal name: ambari-adm@example.com

Enter keytab path: /root/ambari-adm.keytab

Note: keep the keytab file permissions at 600.
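
For example, assuming the path entered above and that ambari-server runs as root (the default):

chmod 600 /root/ambari-adm.keytab
chown root:root /root/ambari-adm.keytab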

Once setup is done, the Kerberos principal needs to be configured for the Ambari views.

Hive View configuration:

Hive Authentication=auth=KERBEROS;principal=hive/<hive host fqdn>@EXAMPLE.COM;hive.server2.proxy.user=${username}

WebHDFS Authentication=auth=KERBEROS;proxyuser=ambari-adm@EXAMPLE.COM

This requires proxy-user (impersonation) configuration in the Hadoop configuration: setup_HDFS_proxy_user

hadoop.proxyuser.ambari-adm.hosts=*
hadoop.proxyuser.ambari-adm.groups=*
hadoop.proxyuser.ambari-adm.users=*
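
After adding these properties (in Ambari: HDFS > Configs > Custom core-site) and restarting HDFS, the effective values can be confirmed with hdfs getconf:

hdfs getconf -confKey hadoop.proxyuser.ambari-adm.hosts
hdfs getconf -confKey hadoop.proxyuser.ambari-adm.groups
hdfs getconf -confKey hadoop.proxyuser.ambari-adm.users
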
HDFS Disk Failures

An interesting discussion on the Hortonworks Community about disk failures:

how-to-alert-for-hdfs-disk-failures.html

> When a single drive fails on a worker node in HDFS, can this adversely affect performance of jobs running on this node?

The answer to this question is: it depends. If the node is running a job that is accessing blocks on the failed volume, then yes. The job may also be treated as failed if dfs.datanode.failed.volumes.tolerated is not greater than 0. At the default of 0, HDFS treats the loss of a single volume as catastrophic and marks the DataNode as failed; with a value greater than zero, the node keeps working until more volumes than the tolerated number are lost.
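
A quick way to check the current setting on a node (reads the effective configuration; the default is 0):

hdfs getconf -confKey dfs.datanode.failed.volumes.tolerated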

> If this could cause a performance impact, how can our customers monitor for these drive failures in order to take corrective action?

This is a hard question to answer without further details. I am tempted to say that the performance benefit of monitoring and relying on a human being to take corrective action is doubtful: YARN / MR, or whatever execution engine you are using, will probably be far more efficient at re-scheduling your jobs.
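
That said, failed drives are still worth alerting on for capacity and re-replication reasons. Two hedged starting points (the port is the HDP 2.x NameNode default; exact metric names vary by Hadoop version):

# Cluster-wide failed-volume count from the NameNode JMX servlet (look for VolumeFailuresTotal)
curl -s 'http://<namenode-host>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' | grep -i volume

# Per-DataNode health, including dead nodes whose blocks are being re-replicated
hdfs dfsadmin -report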

> Or does the DataNode process quickly mark the drive and its HDFS blocks as “unusable”?

The DataNode does mark the volume as failed, and the NameNode learns that all the blocks on that failed volume are no longer available on that DataNode. This happens via “block reports”. Once the NameNode learns that a DataNode has lost its replica of a block, it initiates re-replication of that block elsewhere. And since the NameNode knows about the lost blocks, further jobs that need access to those blocks would most probably not be scheduled on that node, though this again depends on the scheduler and its policies.