Having tried full Hadoop deployments such as Hortonworks or Cloudera (note: they are the same company now), I found they install far more features than I need here. So instead, I followed setup guides based on the Apache project sites to build a mini environment with only HDFS and HiveServer2.
Reportedly, Hive needs some extra configuration before the INSERT syntax works, and hdfs_fdw currently only supports queries anyway, so that is outside the scope of this setup.
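For reference only (none of this is applied in this environment): getting INSERT to work generally means turning on Hive's transactional (ACID) support, and the target table usually has to be stored as ORC and marked transactional. A rough sketch of the hive-site.xml properties typically involved:

  <property>
    <name>hive.support.concurrency</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.txn.manager</name>
    <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
  </property>
  <property>
    <name>hive.compactor.initiator.on</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.compactor.worker.threads</name>
    <value>1</value>
  </property>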
[aaa@labenv-lxc ~]$ wget http://ftp.twaren.net/Unix/Web/apache/hadoop/common/hadoop-3.2.0/hadoop-3.2.0.tar.gz http://ftp.twaren.net/Unix/Web/apache/hive/hive-3.1.1/apache-hive-3.1.1-bin.tar.gz [aaa@labenv-lxc ~]$
Initialize the LXC container, check the IP it was assigned, and then copy the Hadoop and Hive binaries into it.
[aaa@labenv-lxc ~]$ sudo ./addcontainer.sh hivesvr
. . . (omitted) . . .
[aaa@labenv-lxc ~]$ sudo mv hadoop-3.2.0.tar.gz apache-hive-3.1.1-bin.tar.gz /var/lib/lxc/hivesvr/rootfs/root/
[aaa@labenv-lxc ~]$ sudo ./startenv.sh hivesvr
Starting container environment hivesvr
[aaa@labenv-lxc ~]$ sudo lxc-info -n hivesvr
Name:           hivesvr
State:          RUNNING
PID:            3528
IP:             192.168.68.21
CPU use:        0.09 seconds
BlkIO use:      208.00 KiB
Memory use:     1.18 MiB
KMem use:       0 bytes
Link:           vethLGFV67
 TX bytes:      1.64 KiB
 RX bytes:      656 bytes
 Total bytes:   2.28 KiB
[aaa@labenv-lxc ~]$
Enter the container and check its hostname.
[aaa@labenv-lxc ~]$ sudo ./envlogin.sh hivesvr
Connected to tty 0
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
CentOS Linux 7 (Core)
Kernel 3.10.0-957.10.1.el7.x86_64 on an x86_64
hivesvr login: root
Password:
[root@hivesvr ~]# hostname
hivesvr.c.testaaa-151709.internal
[root@hivesvr ~]# hostname -s
hivesvr
[root@hivesvr ~]#
Set up /etc/hosts inside the container: the LXC container has a fully qualified hostname, so simply put the result obtained above into /etc/hosts.
[root@hivesvr ~]# echo "192.168.68.21 hivesvr.c.testaaa-151709.internal" >> /etc/hosts
[root@hivesvr ~]# cat /etc/hosts
127.0.0.1 localhost hivesvr
192.168.68.21 hivesvr.c.testaaa-151709.internal
[root@hivesvr ~]#
Install Java and rsync (and also the which command, which the container image omits).
[root@hivesvr ~]# yum install -y java rsync which
Extract the tarballs and verify that the software runs.
[root@hivesvr ~]# tar -xvf ~/hadoop-3.2.0.tar.gz -C /opt
. . (omitted) . .
[root@hivesvr ~]# tar -xvf ~/apache-hive-3.1.1-bin.tar.gz -C /opt
. . (omitted) . .
[root@hivesvr ~]# export JAVA_HOME=/etc/alternatives/jre/
[root@hivesvr ~]# /opt/hadoop-3.2.0/bin/hadoop version
Hadoop 3.2.0
Source code repository https://github.com/apache/hadoop.git -r e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
Compiled by sunilg on 2019-01-08T06:08Z
Compiled with protoc 2.5.0
From source with checksum d3f0795ed0d9dc378e2c785d3668f39
This command was run using /opt/hadoop-3.2.0/share/hadoop/common/hadoop-common-3.2.0.jar
[root@hivesvr ~]#
Set up the environment variables, mainly the locations of the relevant executables. Two files are involved: $HOME/.bashrc and $HADOOP_HOME/etc/hadoop/hadoop-env.sh.
[root@hivesvr ~]# cat << "EOF" >> ~/.bashrc
export JAVA_HOME=/etc/alternatives/jre/
export HADOOP_HOME=/opt/hadoop-3.2.0
export HIVE_HOME=/opt/apache-hive-3.1.1-bin
export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HIVE_HOME/bin
EOF
[root@hivesvr ~]#
[root@hivesvr ~]# source ~/.bashrc
[root@hivesvr ~]#
[root@hivesvr ~]# sed 's@^# export JAVA_HOME=.*@&\n export JAVA_HOME=/etc/alternatives/jre/@' -i $HADOOP_HOME/etc/hadoop/hadoop-env.sh
[root@hivesvr ~]# sed 's@^# export HADOOP_HOME=.*@&\n export HADOOP_HOME=/opt/hadoop-3.2.0@' -i $HADOOP_HOME/etc/hadoop/hadoop-env.sh
[root@hivesvr ~]# sed 's/^# export HDFS_NAMENODE_USER=.*/&\n export HDFS_NAMENODE_USER=root/' -i $HADOOP_HOME/etc/hadoop/hadoop-env.sh
[root@hivesvr ~]#
Prepare the HDFS configuration files: only the necessary entries are filled in here.
[root@hivesvr ~]# mv $HADOOP_HOME/etc/hadoop/core-site.xml $HADOOP_HOME/etc/hadoop/core-site.xml.orig
[root@hivesvr ~]# cat << EOF >> $HADOOP_HOME/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>
EOF
[root@hivesvr ~]#
[root@hivesvr ~]# mv $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HADOOP_HOME/etc/hadoop/hdfs-site.xml.orig
[root@hivesvr ~]# cat << EOF >> $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
[root@hivesvr ~]#
Prepare the YARN configuration files: again, only the necessary entries.
[root@hivesvr ~]# mv $HADOOP_HOME/etc/hadoop/mapred-site.xml $HADOOP_HOME/etc/hadoop/mapred-site.xml.orig
[root@hivesvr ~]# cat << EOF >> $HADOOP_HOME/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF
[root@hivesvr ~]#
[root@hivesvr ~]# mv $HADOOP_HOME/etc/hadoop/yarn-site.xml $HADOOP_HOME/etc/hadoop/yarn-site.xml.orig
[root@hivesvr ~]# cat << EOF >> $HADOOP_HOME/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF
[root@hivesvr ~]#
[root@hivesvr ~]# yum install -y https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm
[root@hivesvr ~]# yum group install -y "PostgreSQL Database Server 11 PGDG"
[root@hivesvr ~]#
Initialize the database. Note that by default PostgreSQL's listen_addresses is localhost only; to allow logins it needs to be adjusted.
In addition, to slim things down, wal_level is set to minimal and the WAL sender feature is turned off (optional).
[root@hivesvr ~]# /usr/pgsql-11/bin/postgresql-11-setup initdb
Initializing database ... OK
[root@hivesvr ~]#
[root@hivesvr ~]# echo "listen_addresses = '*'" >> /var/lib/pgsql/11/data/postgresql.auto.conf
[root@hivesvr ~]# cat << EOF >> /var/lib/pgsql/11/data/postgresql.auto.conf
wal_level = 'minimal'
max_wal_senders = 0
EOF
[root@hivesvr ~]# sed "s/peer$/trust/g" -i /var/lib/pgsql/11/data/pg_hba.conf
[root@hivesvr ~]# sed "s/ident$/trust/g" -i /var/lib/pgsql/11/data/pg_hba.conf
[root@hivesvr ~]# service postgresql-11 start
Redirecting to /bin/systemctl start postgresql-11.service
[root@hivesvr ~]# chkconfig postgresql-11 on
Note: Forwarding request to 'systemctl enable postgresql-11.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/postgresql-11.service to /usr/lib/systemd/system/postgresql-11.service.
[root@hivesvr ~]#
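As a quick check I added (not in the original steps), PostgreSQL can be asked over a TCP connection what it is now listening on; with the trust setting above no password prompt is expected:

[root@hivesvr ~]# psql -h localhost -U postgres -c "show listen_addresses;"

It should answer with *.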
Install the PostgreSQL JDBC driver and create a soft link to it in the Hive directory.
[root@hivesvr ~]# yum install -y postgresql-jdbc
[root@hivesvr ~]# ln -s /usr/share/java/postgresql-jdbc.jar /opt/apache-hive-3.1.1-bin/jdbc/
Create the metastore database metastore_db for Hive, owned by the account hive with password hive.
[root@hivesvr ~]# createuser -U postgres --pwprompt hive
Enter password for new role: hive
Enter it again: hive
[root@hivesvr ~]# createdb -U postgres --owner=hive metastore_db
Set up the Hive configuration file: only the necessary entries are filled in, including the HDFS connection, the metadata database connection information, and the NOSASL authentication mode (needed for the hdfs_fdw testing later).
[root@hivesvr ~]# cat << EOF >> /opt/apache-hive-3.1.1-bin/conf/hive-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>For YARN this configuration variable is called fs.defaultFS.</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:postgresql://localhost:5432/metastore_db</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.postgresql.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
  <property>
    <name>hive.server2.authentication</name>
    <value>NOSASL</value>
  </property>
</configuration>
EOF
[root@hivesvr ~]#
Initialize the Hive Metastore database: this reads the hive-site.xml settings above.
[root@hivesvr ~]# schematool -dbType postgres -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.1-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:postgresql://localhost:5432/metastore_db
Metastore Connection Driver : org.postgresql.Driver
Metastore connection User: hive
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.postgres.sql
. . . (omitted) . . .
Initialization script completed
schemaTool completed
[root@hivesvr ~]#
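As a sanity check I added here, the freshly created metastore tables can be listed directly in PostgreSQL (no password prompt is expected, since pg_hba.conf was switched to trust above):

[root@hivesvr ~]# psql -U hive -d metastore_db -c '\dt' | head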
Set up passwordless SSH login for root (the Hadoop start/stop scripts ssh into localhost).
[root@hivesvr ~]# sed -e 's/#PermitRootLogin yes/PermitRootLogin yes/g' \
-e 's/#PubkeyAuthentication yes/PubkeyAuthentication yes/g' \
-e 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' \
-e 's/PasswordAuthentication no/#PasswordAuthentication no/g' \
-i /etc/ssh/sshd_config
[root@hivesvr ~]# systemctl restart sshd
[root@hivesvr ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:0kPSKS4dizmk5/e7hrh2AIbvlQMmL16rDqoXJrFOLUM root@hdp-single.c.testaaa-151709.internal
The key's randomart image is:
+---[RSA 2048]----+
| |
| . . |
| . . + + |
|+E=o = B |
|.Ooo*.= S |
|++===o . . |
|==+oo+.. |
|ooo.o.o.. |
|=+...o .+o |
+----[SHA256]-----+
[root@hivesvr ~]#
[root@hivesvr ~]# ssh-copy-id root@localhost
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:y2T8393hoC8ezL/RKE6r+XTkZwhNQYzLtsdAnI3FFwc.
ECDSA key fingerprint is MD5:52:3d:3e:ab:e3:d8:64:52:e3:be:ca:27:31:58:c5:2c.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@localhost's password: root
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@localhost'"
and check to make sure that only the key(s) you wanted were added.
[root@hivesvr ~]#
[root@hivesvr ~]# ssh root@hivesvr
The authenticity of host 'hivesvr (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:y2T8393hoC8ezL/RKE6r+XTkZwhNQYzLtsdAnI3FFwc.
ECDSA key fingerprint is MD5:52:3d:3e:ab:e3:d8:64:52:e3:be:ca:27:31:58:c5:2c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hivesvr' (ECDSA) to the list of known hosts.
Last login: Sat Apr 27 00:43:40 2019
[root@hivesvr ~]# exit
logout
Connection to hivesvr closed.
[root@hivesvr ~]#
Format the Hadoop NameNode: the full output is recorded here.
[root@hivesvr ~]# hdfs namenode -format 2019-05-06 08:37:08,533 INFO namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = hivesvr.c.testaaa-151709.internal/192.168.68.21 STARTUP_MSG: args = [-format] STARTUP_MSG: version = 3.2.0 STARTUP_MSG: classpath = /opt/hadoop-3.2.0/etc/hadoop:/opt/hadoop-3.2.0/share/hadoop/common/lib/accessors-smart-1.2.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/asm-5.0.4.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/avro-1.7.7.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-codec-1.11.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-io-2.5.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-lang3-3.7.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-net-3.6.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/commons-text-1.4.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/curator-client-2.12.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/curator-framework-2.12.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/dnsjava-2.1.7.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/hadoop-annotations-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/hadoop-auth-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/httpclient-4.5.2.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/httpcore-4.4.4.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jackson-annotations-2.9.5.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jackson-core-2.9.5.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jackson-databind-2.9.5.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jersey-core-1.19.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jersey-json-1.19.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jersey-server-1.19.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jersey-servlet-1.19.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jetty-serv
let-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jsch-0.1.54.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/json-smart-2.3.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerb-client-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerb-common-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerb-core-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerb-server-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerb-util-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerby-config-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerby-util-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/netty-3.10.5.Final.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/re2j-1.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/snappy-java-1.0.5.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/stax2-api-3.1.4.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/token-provider-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/zookeeper-3.4.13.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/opt/hadoop-3.2.0/share/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/hadoop-3.2.0/share/hadoop/common/hadoop-common-3.2.0-tests.jar:/opt/hadoop-3.2.0/share/hadoop/common/hadoop-common-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/hadoop-nfs-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/common/hadoop-kms-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/okio-1.6.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/hadoop-auth-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/o
pt/hadoop-3.2.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/json-smart-2.3.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/asm-5.0.4.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/zookeeper-3.4.13.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-io-2.5.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/hadoop-annotations-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-net-3.6.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jettison-1.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/opt/hadoop-3.2.0/shar
e/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-lang3-3.7.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-text-1.4.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/avro-1.7.7.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/paranamer-2.3.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/xz-1.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/gson-2.2.4.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-databind-2.9.5.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-annotations-2.9.5.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-core-2.9.5.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/lib/dnsjava-2.1.7.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-3.2.0-tests.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-client-3.2.0-tests.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-client-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0-tests.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0-tests.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0-tests.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/guice-4.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.9.5.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/opt/hadoop-3.2.0
/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/java-util-1.9.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/jersey-client-1.19.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/json-io-2.5.1.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/objenesis-1.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-api-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-client-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-common-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-registry-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-common-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-router-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-services-api-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-services-core-3.2.0.jar:/opt/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-submarine-3.2.0.jar STARTUP_MSG: build = https://github.com/apache/hadoop.git -r e97acb3bd8f3befd27418996fa5d4b50bf2e17bf; compiled by 'sunilg' on 2019-01-08T06:08Z STARTUP_MSG: java = 1.8.0_212 ************************************************************/ 2019-05-06 08:37:08,574 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 2019-05-06 08:37:08,810 INFO namenode.NameNode: createNameNode [-format] Formatting using clusterid: CID-5c45adb7-480a-491b-9084-0092ec3d5056 2019-05-06 08:37:10,345 INFO namenode.FSEditLog: Edit logging is async:true 2019-05-06 08:37:10,366 INFO namenode.FSNamesystem: KeyProvider: null 2019-05-06 08:37:10,368 INFO namenode.FSNamesystem: fsLock is fair: true 2019-05-06 08:37:10,370 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false 2019-05-06 08:37:10,390 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE) 2019-05-06 08:37:10,390 INFO namenode.FSNamesystem: supergroup = supergroup 2019-05-06 08:37:10,390 INFO namenode.FSNamesystem: isPermissionEnabled = true 2019-05-06 08:37:10,390 INFO namenode.FSNamesystem: HA Enabled: false 2019-05-06 08:37:10,464 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. 
Disabling file IO profiling 2019-05-06 08:37:10,498 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000 2019-05-06 08:37:10,498 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true 2019-05-06 08:37:10,504 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 2019-05-06 08:37:10,504 INFO blockmanagement.BlockManager: The block deletion will start around 2019 May 06 08:37:10 2019-05-06 08:37:10,506 INFO util.GSet: Computing capacity for map BlocksMap 2019-05-06 08:37:10,507 INFO util.GSet: VM type = 64-bit 2019-05-06 08:37:10,509 INFO util.GSet: 2.0% max memory 409.9 MB = 8.2 MB 2019-05-06 08:37:10,509 INFO util.GSet: capacity = 2^20 = 1048576 entries 2019-05-06 08:37:10,522 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled 2019-05-06 08:37:10,522 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false 2019-05-06 08:37:10,537 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS 2019-05-06 08:37:10,537 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 2019-05-06 08:37:10,537 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0 2019-05-06 08:37:10,537 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000 2019-05-06 08:37:10,537 INFO blockmanagement.BlockManager: defaultReplication = 1 2019-05-06 08:37:10,537 INFO blockmanagement.BlockManager: maxReplication = 512 2019-05-06 08:37:10,537 INFO blockmanagement.BlockManager: minReplication = 1 2019-05-06 08:37:10,537 INFO blockmanagement.BlockManager: maxReplicationStreams = 2 2019-05-06 08:37:10,537 INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms 2019-05-06 08:37:10,537 INFO blockmanagement.BlockManager: encryptDataTransfer = false 2019-05-06 08:37:10,537 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000 2019-05-06 08:37:10,620 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911 2019-05-06 08:37:10,621 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215 2019-05-06 08:37:10,621 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215 2019-05-06 08:37:10,621 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215 2019-05-06 08:37:10,633 INFO util.GSet: Computing capacity for map INodeMap 2019-05-06 08:37:10,633 INFO util.GSet: VM type = 64-bit 2019-05-06 08:37:10,633 INFO util.GSet: 1.0% max memory 409.9 MB = 4.1 MB 2019-05-06 08:37:10,633 INFO util.GSet: capacity = 2^19 = 524288 entries 2019-05-06 08:37:10,638 INFO namenode.FSDirectory: ACLs enabled? false 2019-05-06 08:37:10,639 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true 2019-05-06 08:37:10,639 INFO namenode.FSDirectory: XAttrs enabled? 
true 2019-05-06 08:37:10,639 INFO namenode.NameNode: Caching file names occurring more than 10 times 2019-05-06 08:37:10,645 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536 2019-05-06 08:37:10,647 INFO snapshot.SnapshotManager: SkipList is disabled 2019-05-06 08:37:10,655 INFO util.GSet: Computing capacity for map cachedBlocks 2019-05-06 08:37:10,656 INFO util.GSet: VM type = 64-bit 2019-05-06 08:37:10,656 INFO util.GSet: 0.25% max memory 409.9 MB = 1.0 MB ]# 2019-05-06 08:37:10,667 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10 2019-05-06 08:37:10,667 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10 2019-05-06 08:37:10,667 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 2019-05-06 08:37:10,671 INFO namenode.FSNamesystem: Retry cache on namenode is enabled 2019-05-06 08:37:10,672 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 2019-05-06 08:37:10,673 INFO util.GSet: Computing capacity for map NameNodeRetryCache 2019-05-06 08:37:10,673 INFO util.GSet: VM type = 64-bit 2019-05-06 08:37:10,674 INFO util.GSet: 0.029999999329447746% max memory 409.9 MB = 125.9 KB 2019-05-06 08:37:10,674 INFO util.GSet: capacity = 2^14 = 16384 entries 2019-05-06 08:37:10,739 INFO namenode.FSImage: Allocated new BlockPoolId: BP-488037708-192.168.68.21-1557131830724 2019-05-06 08:37:10,809 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted. 2019-05-06 08:37:10,842 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression 2019-05-06 08:37:11,113 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 399 bytes saved in 0 seconds . 2019-05-06 08:37:11,134 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0 2019-05-06 08:37:11,142 INFO namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at hivesvr.c.testaaa-151709.internal/192.168.68.21 ************************************************************/ [root@hivesvr ~]#
Start the Hadoop services: here start-all.sh is used to bring them all up at once.
[root@hivesvr ~]# which start-all.sh
/opt/hadoop-3.2.0/sbin/start-all.sh
[root@hivesvr ~]#
[root@hivesvr ~]# start-all.sh
Starting namenodes on [localhost]
Last login: Mon May 6 15:12:07 UTC 2019 on lxc/console
Starting datanodes
Last login: Mon May 6 15:14:09 UTC 2019 on lxc/console
Starting secondary namenodes [hivesvr.c.testaaa-151709.internal]
Last login: Mon May 6 15:14:12 UTC 2019 on lxc/console
Starting resourcemanager
Last login: Mon May 6 15:14:20 UTC 2019 on lxc/console
Starting nodemanagers
Last login: Mon May 6 15:14:31 UTC 2019 on lxc/console
[root@hivesvr ~]#
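As a quick check that HDFS really came up (my addition, not in the original notes), the NameNode can be asked for a datanode report:

[root@hivesvr ~]# hdfs dfsadmin -report | grep 'Live datanodes'

With the single-node setup above it should report one live datanode.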
From the host, check which ports the container is listening on.
[aaa@labenv-lxc ~]$ nmap 192.168.68.21
Starting Nmap 6.40 ( http://nmap.org ) at 2019-05-06 09:02 UTC
Nmap scan report for 192.168.68.21
Host is up (0.00053s latency).
Not shown: 995 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
5432/tcp open  postgresql
8031/tcp open  unknown
8042/tcp open  fs-agent
8088/tcp open  radan-http
Nmap done: 1 IP address (1 host up) scanned in 0.22 seconds
[aaa@labenv-lxc ~]$
Next, test HDFS by putting a small file into it to confirm that it works.
[root@hivesvr ~]# echo 'hello~~' >> ~/test.txt
[root@hivesvr ~]# hadoop fs -put ~/test.txt /
[root@hivesvr ~]# hdfs dfs -ls /
Found 1 items
-rw-r--r-- 1 root supergroup 8 2019-05-06 09:18 /test.txt
[root@hivesvr ~]#
[root@hivesvr ~]# hdfs dfs -cat /test.txt
hello~~
[root@hivesvr ~]#
Next, start the Hive server connected to HDFS. First try launching the runtime directly in interactive mode, then switch to running HiveServer2 as a background process.
[root@hivesvr ~]# hive which: no hbase in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/opt/hadoop-3.2.0/sbin:/opt/hadoop-3.2.0/bin:/opt/apache-hive-3.1.1-bin/bin) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.1-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Hive Session ID = 47dba19e-10a2-4c42-afb6-16822e9214e4 Logging initialized using configuration in jar:file:/opt/apache-hive-3.1.1-bin/lib/hive-common-3.1.1.jar!/hive-log4j2.properties Async: true Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. Hive Session ID = 99f34547-1565-4e77-921b-0b65ab21c9e4 hive> show databases; OK default Time taken: 2.119 seconds, Fetched: 1 row(s) hive> exit; [root@hivesvr ~]#
[root@hivesvr ~]# hive --config /opt/apache-hive-3.1.1-bin/conf/hive-site.xml --service hiveserver2 & [1] 4913 which: no hbase in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/opt/hadoop-3.2.0/sbin:/opt/hadoop-3.2.0/bin:/opt/apache-hive-3.1.1-bin/bin) 2019-05-06 14:26:35: Starting HiveServer2 SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.1-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Hive Session ID = f51b740c-4d6b-4b5b-bbd9-c0c1fd50e35f Hive Session ID = 7d889962-f2d5-481b-9611-10f07d54ad8e Hive Session ID = a343c9e7-25ae-48f2-9610-9f98b702ed61 ↲ [root@hivesvr ~]#
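HiveServer2 can take a little while before it starts listening; as an extra check I added here (assuming the default Thrift port 10000), scanning just that port from the host should now show it open:

[aaa@labenv-lxc ~]$ nmap -p 10000 192.168.68.21

It should report 10000/tcp as open, in addition to the ports seen in the earlier scan.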
Import test data:
Hive has objects such as databases and tables. The test data used here is weblogs_parse.zip (the example used by hdfs_fdw); see the hdfs_fdw website. Download it and drop it into the environment.
The steps are: (1) import the CSV data into HDFS, (2) create the Hive meta-table, and (3) load the data into the table. Note that this environment cannot perform INSERTs (the manual mentions the syntax, but it needs extra configuration to work).
Here we go in directly through the hive CLI, create database d1 and table weblogs, and load the data file into Hive (note: Hive ships with a database called default).
[root@hivesvr ~]# unzip weblogs_parse.zip
Archive:  weblogs_parse.zip
  inflating: weblogs_parse.txt
[root@hivesvr ~]#
[root@hivesvr ~]# hadoop fs -mkdir /weblogs
[root@hivesvr ~]# hadoop fs -put ~/weblogs_parse.txt /weblogs
[root@hivesvr ~]#
[root@hivesvr ~]# hive which: no hbase in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/hadoop-3.2.0/sbin:/opt/hadoop-3.2.0/bin:/opt/apache-hive-3.1.1-bin/bin:/root/bin) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.1-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Hive Session ID = 651d7c88-7851-4c54-b4fe-a9194b51ca79 Logging initialized using configuration in jar:file:/opt/apache-hive-3.1.1-bin/lib/hive-common-3.1.1.jar!/hive-log4j2.properties Async: true Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. Hive Session ID = c66dc593-8ef2-4393-803d-ad3fa3aaaa41 hive> create database d1; OK Time taken: 2.022 seconds hive> use d1; OK Time taken: 1.134 seconds hive> CREATE TABLE weblogs ( client_ip STRING, full_request_date STRING, day STRING, month STRING, month_num INT, year STRING, hour STRING, minute STRING, second STRING, timezone STRING, http_verb STRING, uri STRING, http_status_code STRING, bytes_returned STRING, referrer STRING, user_agent STRING) row format delimited fields terminated by '\t'; OK Time taken: 2.955 seconds hive> ; hive> ; hive> load data inpath '/weblogs/weblogs_parse.txt' into table weblogs; Loading data to table d1.weblogs OK Time taken: 2.234 seconds hive> ; hive> ; hive> use d1; OK Time taken: 0.17 seconds hive> select * from weblogs limit 3; OK 612.57.72.653 03/Jun/2012:09:12:23 -0500 03 Jun 6 2012 09 12 23 -0500 GET /product/product2 200 0 /product/product2 Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30) 612.57.72.653 03/Jun/2012:09:14:50 -0500 03 Jun 6 2012 09 14 50 -0500 GET /product/product3 200 0 /product/product2 Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30) 612.57.72.653 03/Jun/2012:09:15:05 -0500 03 Jun 6 2012 09 15 05 -0500 GET /product/product3 200 0 /product/product3 Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30) Time taken: 3.508 seconds, Fetched: 3 row(s) hive> ; hive>
To log in by IP and port with a username and password, you need beeline, the HiveServer2 client. LDAP authentication is not tested here; the NOSASL authentication mode is. (Note: for Hive to accept NOSASL logins, the proxy-user settings in core-site.xml from the earlier step must be in place, otherwise you get the error "User: root is not allowed to impersonate hive".)
In the connection string, you can use localhost or the actual IP.
Also, a somewhat odd Java exception shows up here (it apparently cannot parse the XML in hive-site.xml...), but as long as the connection works, that's good enough.
[root@hivesvr ~]# beeline SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.1-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Beeline version 3.1.1 by Apache Hive beeline> beeline> beeline> !connect jdbc:hive2://192.168.68.21:10000/default;auth=noSasl hive hive Connecting to jdbc:hive2://192.168.68.21:10000/default;auth=noSasl NoViableAltException(358@[]) at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1387) at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:220) at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74) at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:67) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:616) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768) at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197) at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:260) at org.apache.hive.service.cli.operation.Operation.run(Operation.java:247) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:541) at org.apache.hive.service.cli.session.HiveSessionImpl.access$000(HiveSessionImpl.java:89) at org.apache.hive.service.cli.session.HiveSessionImpl$GlobalHivercFileProcessor.processCmd(HiveSessionImpl.java:224) at org.apache.hadoop.hive.common.cli.HiveFileProcessor.processLine(HiveFileProcessor.java:87) at org.apache.hadoop.hive.common.cli.HiveFileProcessor.processReader(HiveFileProcessor.java:66) at org.apache.hadoop.hive.common.cli.HiveFileProcessor.processFile(HiveFileProcessor.java:37) at org.apache.hive.service.cli.session.HiveSessionImpl.processGlobalInitFile(HiveSessionImpl.java:252) at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:194) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) at com.sun.proxy.$Proxy42.open(Unknown Source) at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:425) at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:373) at 
org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:195) at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:472) at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:322) at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1497) at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1482) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) FAILED: ParseException line 1:0 cannot recognize input near '<' '?' 'xml' Connected to: Apache Hive (version 3.1.1) Driver: Hive JDBC (version 3.1.1) Transaction isolation: TRANSACTION_REPEATABLE_READ 0: jdbc:hive2://192.168.68.21:10000/default> 0: jdbc:hive2://192.168.68.21:10000/default> 0: jdbc:hive2://192.168.68.21:10000/default> show databases; OK INFO : Compiling command(queryId=root_20190510082415_7d5f0bdd-3aa6-4946-8cc8-5f0b35566dbd): show databases INFO : Concurrency mode is disabled, not creating a lock manager INFO : Semantic Analysis Completed (retrial = false) INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null) INFO : Completed compiling command(queryId=root_20190510082415_7d5f0bdd-3aa6-4946-8cc8-5f0b35566dbd); Time taken: 0.069 seconds INFO : Concurrency mode is disabled, not creating a lock manager INFO : Executing command(queryId=root_20190510082415_7d5f0bdd-3aa6-4946-8cc8-5f0b35566dbd): show databases INFO : Starting task [Stage-0:DDL] in serial mode INFO : Completed executing command(queryId=root_20190510082415_7d5f0bdd-3aa6-4946-8cc8-5f0b35566dbd); Time taken: 0.047 seconds INFO : OK INFO : Concurrency mode is disabled, not creating a lock manager +----------------+ | database_name | +----------------+ | d1 | | default | +----------------+ 2 rows selected (0.339 seconds) 0: jdbc:hive2://192.168.68.21:10000/default> 0: jdbc:hive2://192.168.68.21:10000/default> use d1; OK INFO : Compiling command(queryId=root_20190510082455_f419a01a-1ecb-4c02-b1f8-d306812f5ee0): use d1 INFO : Concurrency mode is disabled, not creating a lock manager INFO : Semantic Analysis Completed (retrial = false) INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) INFO : Completed compiling command(queryId=root_20190510082455_f419a01a-1ecb-4c02-b1f8-d306812f5ee0); Time taken: 0.036 seconds INFO : Concurrency mode is disabled, not creating a lock manager INFO : Executing command(queryId=root_20190510082455_f419a01a-1ecb-4c02-b1f8-d306812f5ee0): use d1 INFO : Starting task [Stage-0:DDL] in serial mode INFO : Completed executing command(queryId=root_20190510082455_f419a01a-1ecb-4c02-b1f8-d306812f5ee0); Time taken: 0.038 seconds INFO : OK INFO : Concurrency mode is disabled, not creating a lock manager No rows affected (0.105 seconds) 0: jdbc:hive2://192.168.68.21:10000/default> 0: jdbc:hive2://192.168.68.21:10000/default> show tables; OK INFO : Compiling 
command(queryId=root_20190510082506_a3deb8a2-31f2-4900-9013-b50c5450a4a9): show tables INFO : Concurrency mode is disabled, not creating a lock manager INFO : Semantic Analysis Completed (retrial = false) INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null) INFO : Completed compiling command(queryId=root_20190510082506_a3deb8a2-31f2-4900-9013-b50c5450a4a9); Time taken: 0.048 seconds INFO : Concurrency mode is disabled, not creating a lock manager INFO : Executing command(queryId=root_20190510082506_a3deb8a2-31f2-4900-9013-b50c5450a4a9): show tables INFO : Starting task [Stage-0:DDL] in serial mode INFO : Completed executing command(queryId=root_20190510082506_a3deb8a2-31f2-4900-9013-b50c5450a4a9); Time taken: 0.046 seconds INFO : OK INFO : Concurrency mode is disabled, not creating a lock manager +-----------+ | tab_name | +-----------+ | weblogs | +-----------+ 2 rows selected (0.152 seconds) 0: jdbc:hive2://192.168.68.21:10000/default> 0: jdbc:hive2://192.168.68.21:10000/default> describe weblogs; OK INFO : Compiling command(queryId=root_20190510082814_585aad52-a80c-4436-b988-f6d1d13ca942): describe weblogs INFO : Concurrency mode is disabled, not creating a lock manager INFO : Semantic Analysis Completed (retrial = false) INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col_name, type:string, comment:from deserializer), FieldSchema(name:data_type, type:string, comment:from deserializer), FieldSchema(name:comment, type:string, comment:from deserializer)], properties:null) INFO : Completed compiling command(queryId=root_20190510082814_585aad52-a80c-4436-b988-f6d1d13ca942); Time taken: 0.122 seconds INFO : Concurrency mode is disabled, not creating a lock manager INFO : Executing command(queryId=root_20190510082814_585aad52-a80c-4436-b988-f6d1d13ca942): describe weblogs INFO : Starting task [Stage-0:DDL] in serial mode INFO : Completed executing command(queryId=root_20190510082814_585aad52-a80c-4436-b988-f6d1d13ca942); Time taken: 0.054 seconds INFO : OK INFO : Concurrency mode is disabled, not creating a lock manager +--------------------+------------+----------+ | col_name | data_type | comment | +--------------------+------------+----------+ | client_ip | string | | | full_request_date | string | | | day | string | | | month | string | | | month_num | int | | | year | string | | | hour | string | | | minute | string | | | second | string | | | timezone | string | | | http_verb | string | | | uri | string | | | http_status_code | string | | | bytes_returned | string | | | referrer | string | | | user_agent | string | | +--------------------+------------+----------+ 16 rows selected (0.25 seconds) 0: jdbc:hive2://192.168.68.21:10000/default> 0: jdbc:hive2://192.168.68.21:10000/default> select uri from weblogs limit 3; OK INFO : Compiling command(queryId=root_20190510082913_c2cd10e0-70c6-4151-b89f-3e5ec38b59e6): select uri from weblogs limit 3 INFO : Concurrency mode is disabled, not creating a lock manager INFO : Semantic Analysis Completed (retrial = false) INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:uri, type:string, comment:null)], properties:null) INFO : Completed compiling command(queryId=root_20190510082913_c2cd10e0-70c6-4151-b89f-3e5ec38b59e6); Time taken: 3.619 seconds INFO : Concurrency mode is disabled, not creating a lock manager INFO : Executing command(queryId=root_20190510082913_c2cd10e0-70c6-4151-b89f-3e5ec38b59e6): select uri from weblogs limit 3 
INFO : Completed executing command(queryId=root_20190510082913_c2cd10e0-70c6-4151-b89f-3e5ec38b59e6); Time taken: 0.017 seconds INFO : OK INFO : Concurrency mode is disabled, not creating a lock manager +--------------------+ | uri | +--------------------+ | /product/product2 | | /product/product3 | | /product/product3 | +--------------------+ 3 rows selected (4.951 seconds) 0: jdbc:hive2://192.168.68.21:10000/default>
To exit, just press Ctrl + C.
Alternatively, beeline can take the connection string directly on the command line.
[root@hivesvr ~]# beeline -u "jdbc:hive2://192.168.68.21:10000/default;auth=noSasl" -n hive -p hive
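beeline can also run a single statement non-interactively with -e, which is handy for quick checks; a small example reusing the same connection string:

[root@hivesvr ~]# beeline -u "jdbc:hive2://192.168.68.21:10000/default;auth=noSasl" -n hive -p hive -e "select count(*) from d1.weblogs;"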
At this point the basic Hive setup is complete.
Finally, a summary.
How to start up the environment (make sure the environment variables above are loaded):
Because the binaries were just downloaded, there is no init script or systemd unit file, so everything has to be started manually (of course, you could also piece together an init script yourself and drop it into /etc/init.d/; a rough sketch follows after the transcript below).
[root@hivesvr ~]# service postgresql-11 start Redirecting to /bin/systemctl start postgresql-11.service [root@hivesvr ~]# start-all.sh Starting namenodes on [localhost] Last login: Tue May 7 01:37:05 UTC 2019 on lxc/console Starting datanodes Last login: Tue May 7 01:40:52 UTC 2019 on lxc/console Starting secondary namenodes [hivesvr.c.testaaa-151709.internal] Last login: Tue May 7 01:40:54 UTC 2019 on lxc/console Starting resourcemanager Last login: Tue May 7 01:41:02 UTC 2019 on lxc/console Starting nodemanagers Last login: Tue May 7 01:41:13 UTC 2019 on lxc/console [root@hivesvr ~]# [root@hivesvr ~]# hive --config /opt/apache-hive-3.1.1-bin/conf/hive-site.xml --service hiveserver2 & [1] 1503 [root@hivesvr ~]# which: no hbase in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/hadoop-3.2.0/sbin:/opt/hadoop-3.2.0/bin:/opt/apache-hive-3.1.1-bin/bin:/root/bin) 2019-05-07 01:42:20: Starting HiveServer2 SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.1-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Hive Session ID = 8295f3b3-3c0a-4035-9432-48a5a5285033 Hive Session ID = b8686ceb-23fd-4711-91d6-de8c36d58305 ↲ [root@hivesvr ~]#
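If you would rather not retype these commands every time, a minimal wrapper along these lines should do (my own sketch, assuming the paths used above; the file name /root/start-hive-env.sh is made up for illustration):

#!/bin/bash
# start-hive-env.sh: bring up PostgreSQL, HDFS/YARN, then HiveServer2 (sketch only)
source /root/.bashrc                          # loads JAVA_HOME, HADOOP_HOME, HIVE_HOME, PATH
service postgresql-11 start                   # Hive Metastore database
start-all.sh                                  # HDFS + YARN daemons
nohup hive --config $HIVE_CONF_DIR --service hiveserver2 \
    > /var/log/hiveserver2.log 2>&1 &         # HiveServer2 in the background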
To stop the services, it is recommended to stop the following two services first, and then shut down the container.
[root@hivesvr ~]# service postgresql-11 stop Redirecting to /bin/systemctl stop postgresql-11.service [root@hivesvr ~]# [root@hivesvr ~]# stop-all.sh Stopping namenodes on [localhost] Last login: Tue May 7 05:40:45 UTC 2019 on lxc/console Stopping datanodes Last login: Tue May 7 05:41:19 UTC 2019 on lxc/console Stopping secondary namenodes [hivesvr.c.omnibrandon-151709.internal] Last login: Tue May 7 05:41:20 UTC 2019 on lxc/console Stopping nodemanagers Last login: Tue May 7 05:41:22 UTC 2019 on lxc/console Stopping resourcemanager Last login: Tue May 7 05:41:25 UTC 2019 on lxc/console [root@hivesvr ~]#
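One extra note from me: HiveServer2 was started in the background with & and is not covered by stop-all.sh, so it is worth terminating it first, for example:

[root@hivesvr ~]# kill %1                 # if it is still job 1 in the current shell
[root@hivesvr ~]# pkill -f HiveServer2    # otherwise, match the Java process by name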
References
Hadoop+Hive single-machine test environment (with the metadata database switched to PostgreSQL)
Importing test data