Introduction

Apache Hadoop 2.7.3 + Spark 2.0 cluster setup guide
Author: Yu Hui, version V1

Contents

I. Environment
  1. Hardware
  2. Linux version
  3. JDK version
  4. Cluster nodes
  5. HOST configuration
  6. Software versions
II. Preparation
III. Batch start/stop commands
  1. Batch shutdown
  2. Batch reboot
  3. Hadoop start and stop
  4. Batch Zookeeper start
  5. Batch Zookeeper stop
  6. HBase start
  7. Hive start
IV. Zookeeper installation
V. Hadoop installation
VI. HBase installation on the master node
VII. MySQL installation on the master node
VIII. Hive installation and startup on the master node
IX. Flume setup
  1. Flume download address
  2. Installation
  3. Configuration
  4. Adding the Java path
  5. Testing the configuration
X. Kafka installation and usage
  1. Download and unpack
  2. Installation and configuration
  3. Startup
  4. Testing
XI. Scala installation
XII. Spark installation
XIII. Startup order and process descriptions
  1. Process descriptions
  2. Startup order
  3. Shutdown order
  4. Web UIs
  5. Hadoop start/stop commands
XIV. Error collection
  1. Error: MySQL
  2. Error: HBase
  3. Error: HBase
  4. Error: HBase
  5. Error: HBase cluster connection
  6. Error: HDFS cluster connection
  7. Error: NameNode
  8. Error: Hive01
  9. Error: Hive02
  10. Error: Hive03
  11. Error: Hive04

I. Environment
1. Hardware: one physical machine with 16 GB of RAM.
2. Linux version

    [root@hadoop11 app]# cat /etc/issue
    CentOS release 6.7 (Final)
    Kernel \r on an \m
3. JDK version

    [root@hadoop11 app]# java -version
    java version "1.8.0_77"
    Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
    Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)
4. Cluster nodes: three nodes, hadoop11 (Master), hadoop12 (Slave), hadoop13 (Slave).
5. HOST configuration (/etc/hosts)

    192.168.200.11 hadoop11
    192.168.200.12 hadoop12
    192.168.200.13 hadoop13
6. Software versions

    jdk-8u77-linux-x64.tar.gz
    zookeeper-3.4.8.tar.gz
    hadoop-2.7.3.tar.gz
    hbase-1.2.6-bin.tar
    hive-0.12.0-bin.tar
    apache-flume-1.6.0-bin.tar
    kafka_2.10-0.8.1.1.tar.gz

II. Preparation

1. Install the Java JDK:

    [root@hadoop11 app]# java -version
    java version "1.8.0_77"
    Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
    Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

2. Set up passwordless SSH between the nodes (a key-setup sketch follows the shutdown scripts in the next section).
3. Download the Hadoop release.
4. Put all of the software under the /app directory:

    [root@hadoop11 app]# ls
    flume1.6  hadoop-2.7.3  hbase-1.2.6  hive-0.12.0  jdk1.8.0_77  kafka2.10  zookeeper-3.4.8

III. Batch start/stop commands

1. Batch shutdown

all_pc_halt.sh

    #!/bin/sh
    ssh root@hadoop11 "bash" < /root/hadoop-halt.sh
    ssh root@hadoop12 "bash" < /root/hadoop-halt.sh
    ssh root@hadoop13 "bash" < /root/hadoop-halt.sh

pc-halt.sh

    #!/bin/sh

    halt
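
As referenced in the preparation section, a minimal sketch of setting up the passwordless SSH that these batch scripts rely on. It assumes root logins and the default key location; adjust as needed:

    #!/bin/sh
    # Generate an RSA key pair on hadoop11 if one does not exist yet.
    [ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
    # Copy the public key to every node (including hadoop11 itself) so the
    # batch scripts in this section can run over ssh without a password.
    for host in hadoop11 hadoop12 hadoop13; do
        ssh-copy-id root@$host
    done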
2. Batch reboot

all_pc_restart.sh

    #!/bin/sh
    ssh root@hadoop11 "bash" < /root/hadoop-restart.sh
    ssh root@hadoop12 "bash" < /root/hadoop-restart.sh
    ssh root@hadoop13 "bash" < /root/hadoop-restart.sh

hadoop-restart.sh

    #!/bin/sh

    reboot
3. Hadoop start and stop

hadoop-start.sh

    #!/bin/sh
    sh /usr/app/hadoop-2.7.3/sbin/start-dfs.sh
    sh /usr/app/hadoop-2.7.3/sbin/start-yarn.sh

hadoop-stop.sh

    #!/bin/sh
    sh /usr/app/hadoop-2.7.3/sbin/stop-dfs.sh
    sh /usr/app/hadoop-2.7.3/sbin/stop-yarn.sh

4. Batch Zookeeper start

all-zookeeper-start.sh

    #!/bin/sh
    ssh root@hadoop11 "bash" < /root/zookeeper-start.sh
    ssh root@hadoop12 "bash" < /root/zookeeper-start.sh
    ssh root@hadoop13 "bash" < /root/zookeeper-start.sh

zookeeper-start.sh

    #!/bin/sh

    /usr/app/zookeeper-3.4.8/bin/./zkServer.sh start
5. Batch Zookeeper stop

all-zookeeper-stop.sh

    #!/bin/sh
    ssh root@hadoop11 "bash" < /root/zookeeper-stop.sh
    ssh root@hadoop12 "bash" < /root/zookeeper-stop.sh
    ssh root@hadoop13 "bash" < /root/zookeeper-stop.sh

zookeeper-stop.sh

    #!/bin/sh

    /usr/app/zookeeper-3.4.8/bin/./zkServer.sh stop
6. HBase start

hbase-start.sh

    #!/bin/sh
    sh /usr/app/hbase-1.2.6/bin/start-hbase.sh

7. Hive start

hive-start.sh

    #!/bin/sh
    sh /usr/app/hive-0.12.0/bin/hive

IV. Zookeeper installation

1. Upload the Zookeeper package:

    [root@hadoop11 app]# ls
    hadoop-2.7.3.tar.gz  jdk1.8.0_77  zookeeper-3.4.8.tar.gz

2. Unpack it:

    tar -zxvf zookeeper-3.4.8.tar.gz -C /usr/app/

3. Configure (on one node first).

3.1 Create a zoo.cfg configuration file:

    cd zookeeper-3.4.8/conf/
    cp -r zoo_sample.cfg zoo.cfg

3.2 Edit zoo.cfg. Create the /usr/app/zookeeper-3.4.8/data directory:

    mkdir /usr/app/zookeeper-3.4.8/data

In zoo.cfg point dataDir at it (the directory where the snapshot is stored):

    dataDir=/usr/app/zookeeper-3.4.8/data

and append at the end of the file:

    server.1=hadoop11:2888:3888
    server.2=hadoop12:2888:3888
    server.3=hadoop13:2888:3888

3.3 In the dataDir (/usr/app/zookeeper-3.4.8/data) create a myid file whose content is the N from server.N (for server.2 the content is 2):

    echo "1" > myid

3.4 Copy the configured Zookeeper to the other nodes:

    scp -r /usr/app/zookeeper-3.4.8/ root@hadoop12:/usr/app
    scp -r /usr/app/zookeeper-3.4.8/ root@hadoop13:/usr/app

3.5 Note: the myid content must be changed on the other nodes. On hadoop12 set it to 2 (echo "2" > myid); on hadoop13 set it to 3 (echo "3" > myid).

4. Start the cluster. Start Zookeeper on every node:

    /usr/app/zookeeper-3.4.8/bin/./zkServer.sh start

A leader and followers are then elected. The start and stop commands are:

    /usr/app/zookeeper-3.4.8/bin/./zkServer.sh start
    /usr/app/zookeeper-3.4.8/bin/./zkServer.sh stop

5. Check the startup status:

    /usr/app/zookeeper-3.4.8/bin/./zkServer.sh status

6. Zookeeper operations. Data is kept in sync across the nodes.
API docs: http://zookeeper.apache.org/doc/r3.3.3/api/org/apache/zookeeper/ZooKeeper.html
Reference: http://blog.csdn.net/ganglia/article/details/11606807

Client startup command:

    bash zkCli.sh -server localhost:2181
    [zk: localhost:2181(CONNECTED) 7] ls /
    [zookeeper]

Basic commands (command, path, data):

    create /hadoop "myData"
    ls /
    get /hadoop
    set /hadoop "11"
    rmr /hadoop
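
A small helper sketch (not part of the original steps) to confirm that the ensemble came up with one leader and two followers; it simply reuses the status command above over ssh:

    #!/bin/sh
    # Print the Zookeeper mode (leader/follower) reported by each node.
    for host in hadoop11 hadoop12 hadoop13; do
        echo "== $host =="
        ssh root@$host "source /etc/profile; /usr/app/zookeeper-3.4.8/bin/zkServer.sh status"
    done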

V. Hadoop installation

This uses the downloaded hadoop-2.7.3.tar.gz package.

1. Unpack:

    tar -xzvf hadoop-2.7.3.tar.gz

2. Configure environment variables (vi /etc/profile):

    export JAVA_HOME=/usr/app/jdk1.8.0_77
    export HADOOP_HOME=/usr/app/hadoop-2.7.3
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

3. Main configuration files, all under /usr/app/hadoop-2.7.3/etc/hadoop.

3.1 hadoop-env.sh: set JAVA_HOME.

    # The java implementation to use.
    export JAVA_HOME=/usr/app/jdk1.8.0_77

3.2 slaves: add the slave nodes.

    hadoop11
    hadoop12
    hadoop13

3.3 core-site.xml: core Hadoop settings (HDFS port 9000, temp dir /usr/app/hadoop-2.7.3/tmp, Zookeeper quorum).

    <configuration>
      <property><name>fs.defaultFS</name><value>hdfs://ns1</value></property>
      <property><name>hadoop.tmp.dir</name><value>/usr/app/hadoop-2.7.3/tmp</value></property>
      <property><name>ha.zookeeper.quorum</name><value>hadoop11:2181,hadoop12:2181,hadoop13:2181</value></property>
    </configuration>

3.4 hdfs-site.xml: HDFS HA settings (nameservice, NameNode RPC/HTTP addresses, JournalNodes, automatic failover).

    <configuration>
      <property><name>dfs.nameservices</name><value>ns1</value></property>
      <property><name>dfs.ha.namenodes.ns1</name><value>nn1,nn2</value></property>
      <property><name>dfs.namenode.rpc-address.ns1.nn1</name><value>hadoop11:9000</value></property>
      <property><name>dfs.namenode.http-address.ns1.nn1</name><value>hadoop11:50070</value></property>
      <property><name>dfs.namenode.rpc-address.ns1.nn2</name><value>hadoop12:9000</value></property>
      <property><name>dfs.namenode.http-address.ns1.nn2</name><value>hadoop12:50070</value></property>
      <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://hadoop11:8485;hadoop12:8485;hadoop13:8485/ns1</value></property>
      <property><name>dfs.journalnode.edits.dir</name><value>/usr/app/hadoop-2.7.3/journal/data</value></property>
      <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
      <property><name>dfs.client.failover.proxy.provider.ns1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence
    shell(/bin/true)</value>
      </property>
      <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
      <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
    </configuration>

3.5 mapred-site.xml: MapReduce settings (use the YARN framework).

    <configuration>
      <property><name>mapreduce.framework.name</name><value>yarn</value></property>
    </configuration>

3.6 yarn-site.xml: YARN ResourceManager HA settings.

    <configuration>
      <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
      <property><name>yarn.resourcemanager.cluster-id</name><value>yrc</value></property>
      <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
      <property><name>yarn.resourcemanager.hostname.rm1</name><value>hadoop11</value></property>
      <property><name>yarn.resourcemanager.hostname.rm2</name><value>hadoop12</value></property>
      <property><name>yarn.resourcemanager.zk-address</name><value>hadoop11:2181,hadoop12:2181,hadoop13:2181</value></property>
      <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
    </configuration>

4. Copy the configured hadoop directory to the other slave machine:

    [root@hadoop11 app]# scp -r hadoop-2.7.3 root@hadoop13:/usr/app

5. Start the JournalNodes (run on hadoop11, hadoop12 and hadoop13):

    cd /usr/app/hadoop-2.7.3
    sbin/hadoop-daemon.sh start journalnode

Run the jps command to verify: hadoop11, hadoop12 and hadoop13 should each now show a JournalNode process.
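
A convenience sketch (not part of the original steps) that checks all three nodes in one go, assuming the passwordless SSH set up earlier:

    #!/bin/sh
    # Verify that the JournalNode process is running on every node.
    for host in hadoop11 hadoop12 hadoop13; do
        echo "== $host =="
        ssh root@$host "source /etc/profile; jps" | grep JournalNode || echo "JournalNode NOT running"
    done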

6. Format HDFS. On hadoop11, in the /usr/app/hadoop-2.7.3/bin directory, run:

    ./hdfs namenode -format

Formatting generates files under the hadoop.tmp.dir configured in core-site.xml (here /usr/app/hadoop-2.7.3/tmp). Then copy /usr/app/hadoop-2.7.3/tmp to hadoop12:

    scp -r /usr/app/hadoop-2.7.3/tmp/ root@hadoop12:/usr/app/hadoop-2.7.3/

Alternatively, run the following on hadoop12:

    hdfs namenode -bootstrapStandby

7. Format ZKFC (run on hadoop11 only):

    hdfs zkfc -formatZK

8. Start HDFS (on hadoop11):

    sbin/start-dfs.sh

    [root@hadoop11 sbin]# jps
    2960 QuorumPeerMain
    3698 NameNode
    4116 DFSZKFailoverController
    3828 DataNode
    4215 Jps
    3287 JournalNode

    http://hadoop11:50070
    http://hadoop12:50070

9. Start YARN (on hadoop11). Note: start-yarn.sh is run on hadoop11 here; the NameNode and ResourceManager are often separated onto different machines for performance reasons, since both consume a lot of resources, and in that case each is started on its own machine.

    sbin/start-yarn.sh

    http://hadoop11:8088/cluster

10. Verification.

Verify HDFS HA. First upload a file to HDFS:

    hadoop fs -put /etc/profile /
    hadoop fs -ls /

Then kill the active NameNode:

    kill -9 <NameNode pid>

Open http://192.168.200.12:50070 in a browser: NameNode 'hadoop12:9000' (active). The NameNode on hadoop12 has become active. Run the command again:

    hadoop fs -ls /
    -rw-r--r--   3 root supergroup   2198 2017-07-22 01:25 /profile

The file uploaded earlier is still there. Manually restart the NameNode that was killed:

    sbin/hadoop-daemon.sh start namenode

Open http://192.168.200.11:50070 in a browser; this NameNode now shows as (standby).

Verify YARN by running the WordCount demo that ships with Hadoop:

    hadoop jar /usr/app/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /profile /out

Done. Web UIs:

    HDFS: http://hadoop11:50070/
    RM:   http://hadoop11:8088/

VI. HBase installation on the master node

1. Upload the HBase package hbase-1.2.6-bin.tar.gz.

2. Unpack:

    tar -zxvf hbase-1.2.6-bin.tar.gz

3. Configure the HBase cluster.

3.1 Three files need to be modified (the Zookeeper cluster must already be installed; HBase runs an HMaster and RegionServers). Note: copy Hadoop's hdfs-site.xml and core-site.xml into hbase/conf.

3.2 Edit the environment variables (vi /etc/profile):

    export JAVA_HOME=/usr/app/jdk1.8.0_77
    export HADOOP_HOME=/usr/app/hadoop-2.7.3
    export HBASE_HOME=/usr/app/hbase-1.2.6
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin

Note: run source /etc/profile to reload the environment.

3.3 Edit /usr/app/hbase-1.2.6/conf/hbase-env.sh:

    export JAVA_HOME=/usr/app/jdk1.8.0_77
    # tell HBase to use the external Zookeeper
    export HBASE_MANAGES_ZK=false

3.4 Edit hbase-site.xml (vim hbase-site.xml):

    <configuration>
      <property><name>hbase.rootdir</name><value>hdfs://ns1/hbase</value></property>
      <property><name>hbase.cluster.distributed</name><value>true</value></property>
      <property><name>hbase.master.info.port</name><value>60010</value></property>
      <property><name>hbase.zookeeper.quorum</name><value>hadoop11:2181,hadoop12:2181,hadoop13:2181</value></property>
    </configuration>

4. Add the slave nodes (vim /usr/app/hbase-1.2.6/conf/regionservers). RegionServers are deployed on the DataNodes; whichever node starts HBase becomes the master:

    hadoop11
    hadoop12
    hadoop13

5. Copy HBase to the other nodes:

    scp -r /usr/app/hbase-1.2.6 root@hadoop12:/usr/app/
    scp -r /usr/app/hbase-1.2.6 root@hadoop13:/usr/app/

6. After copying the configured HBase to every node, synchronize the clocks.

7. Start HBase. Prerequisite: Zookeeper and HDFS must be running.

Start Zookeeper on each node:

    ./zkServer.sh start

Start HDFS:

    start-dfs.sh

Start HBase on the master node:

    /usr/app/hbase-1.2.6/bin/start-hbase.sh

8. Open the HBase management page in a browser:

    http://192.168.200.11:16010/master-status

9. For cluster reliability, start additional HMasters:

    hbase-daemon.sh start master
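
A minimal smoke test (an added sketch, not from the original guide) run from the HBase shell on the master node; the table name t_smoke is an assumption:

    # Create a table, write one cell, read it back, then clean up.
    /usr/app/hbase-1.2.6/bin/hbase shell <<'EOF'
    create 't_smoke', 'cf'
    put 't_smoke', 'row1', 'cf:msg', 'hello hbase'
    scan 't_smoke'
    disable 't_smoke'
    drop 't_smoke'
    EOF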

VII. MySQL installation on the master node

Account: root, password: 123456.

Install the MySQL server:

    yum install mysql-server

Enable it at boot:

    chkconfig mysqld on

Start the MySQL service:

    service mysqld start

Set the initial root password when prompted:

    mysqladmin -u root password 123456

Enter the MySQL command line:

    mysql -uroot -p123456

Execute these four steps in MySQL:

    create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
    create database amon DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
    grant all privileges on *.* to 'root'@'%' identified by '123456' with grant option;
    flush privileges;

Remark: the error "ERROR 1130: Host '192.168.200.1' is not allowed to connect to this MySQL server" means the user still needs to be granted remote access (see the grants above). Exactly what the two databases are used for is not explained further here.

Create the following databases:

    # hive
    create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci;

    # activity monitor
    create database amon DEFAULT CHARSET utf8 COLLATE utf8_general_ci;

Grant root access to all of the databases above:

    # allow the root user on the master node to access all databases
    grant all privileges on *.* to 'root'@'n1' identified by 'xxxx' with grant option; flush privileges;
    grant all privileges on *.* to 'root'@'%' identified by '123456' with grant option; flush privileges;

General user grant syntax:

    mysql> grant rights on database.* to user@host identified by "pass";

Example 1: add a user test1 with password abc who can log in from any host and has select, insert, update and delete privileges on all databases:

    grant select,insert,update,delete on *.* to test1@"%" identified by "abc";

The *.* specifier in the ON clause means "all databases, all tables".

Example 2: add a user test2 with password abc who can only log in from localhost and can select, insert, update and delete on the database mydb:

    grant select,insert,update,delete on mydb.* to test2@localhost identified by "abc";

VIII. Hive installation and startup on the master node

Hive installation outline:

1. Upload and unpack the package.
2. Install the MySQL server.
3. Create a new hive-site.xml in Hive's conf directory.
4. Put the MySQL connection information into hive-site.xml.
5. Copy the MySQL driver jar into Hive's lib directory (app/hive-0.12.0/lib).
6. Start Hive:

    sh /usr/app/hive-0.12.0/bin/hive

Environment variables (vi /etc/profile):

    export JAVA_HOME=/usr/app/jdk1.8.0_77
    export HADOOP_HOME=/usr/app/hadoop-2.7.3
    export HBASE_HOME=/usr/app/hbase-1.2.6
    export HIVE_HOME=/usr/app/hive-0.12.0
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin

Append these three lines to the end of /usr/app/hive-0.12.0/conf/hive-env.sh:

    export JAVA_HOME=/usr/app/jdk1.8.0_77
    export HADOOP_HOME=/usr/app/hadoop-2.7.3
    export HBASE_HOME=/usr/app/hbase-1.2.6

Rename hive-default.xml.template to hive-site.xml and edit /usr/app/hive-0.12.0/conf/hive-site.xml. The main hive-site.xml settings:

    <configuration>
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hadoop11:3306/hive?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
        <description>username to use against metastore database</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>123456</value>
        <description>password to use against metastore database</description>
      </property>
      <property>
        <name>hive.server2.thrift.sasl.qop</name>
        <value>auth</value>
      </property>
      <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
      </property>
    </configuration>

Verify the Hive installation:

    sh /usr/app/hive-0.12.0/bin/hive
    hive> create table test(id int,name string);
    OK
    Time taken: 8.292 seconds
    hive> show tables;
    OK
    test

    [root@hadoop13 ~]# hadoop fs -lsr /
    drwxr-xr-x   - root supergroup          0 2016-01-10 20:57 /user
    drwxr-xr-x   - root supergroup          0 2016-01-10 20:57 /user/hive
    drwxr-xr-x   - root supergroup          0 2016-01-11 01:46 /user/hive/warehouse
    drwxr-xr-x   - root supergroup          0 2016-01-11 01:46 /user/hive/warehouse/test
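
A slightly fuller smoke test (an added sketch, not from the original guide) that pushes data through the metastore just configured; the table name t_demo and the sample file path are assumptions:

    # Build a two-row tab-separated sample file and run it through Hive.
    echo -e "1\talice\n2\tbob" > /tmp/t_demo.txt
    hive -e "CREATE TABLE IF NOT EXISTS t_demo(id INT, name STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
             LOAD DATA LOCAL INPATH '/tmp/t_demo.txt' INTO TABLE t_demo;
             SELECT * FROM t_demo;"
    # The table directory should also appear in the warehouse on HDFS.
    hadoop fs -ls /user/hive/warehouse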
IX. Flume setup

1. Flume download address: apache-flume-1.6.0-bin.tar.gz
   http://pan.baidu.com/s/1o81nR8e  s832
   Official site: https://flume.apache.org/download.html
2. Installation

    [root@hadoop11 ~]# cd /usr/app/
    [root@hadoop11 app]# tar -zxvf apache-flume-1.6.0-bin.tar.gz
    [root@hadoop11 app]# mv apache-flume-1.6.0-bin flume1.6
3. Configuration

1) /etc/profile

    [root@hadoop11 app]# vi /etc/profile
    [root@hadoop11 app]# source /etc/profile

    export JAVA_HOME=/usr/app/jdk1.8.0_77
    export HADOOP_HOME=/usr/app/hadoop-2.7.3
    export HBASE_HOME=/usr/app/hbase-1.2.6
    export HIVE_HOME=/usr/app/hive-0.12.0
    export Flume_HOME=/usr/app/flume1.6
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$Flume_HOME/bin

2) /usr/app/flume1.6/conf/flume-env.sh

    [root@hadoop11 conf]# cp -r flume-env.sh.template flume-env.sh
    [root@hadoop11 conf]# chmod 777 flume-env.sh
    [root@hadoop11 conf]# vi flume-env.sh
4. Add the Java path (export JAVA_HOME=/usr/app/jdk1.8.0_77) to flume-env.sh and set permissions:

    [root@hadoop11 conf]# chmod 777 flume-env.sh
5. Test the configuration.
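
The original lists this step without commands; a minimal sketch, assuming the Flume_HOME set above and a throw-away agent config named netcat-test.conf:

    # Confirm the binary is on PATH and picks up the configured Java.
    flume-ng version

    # A tiny netcat -> logger agent to verify the install end to end.
    cat > /usr/app/flume1.6/conf/netcat-test.conf <<'EOF'
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444
    a1.channels.c1.type = memory
    a1.sinks.k1.type = logger
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
    EOF
    flume-ng agent -n a1 -c /usr/app/flume1.6/conf \
        -f /usr/app/flume1.6/conf/netcat-test.conf -Dflume.root.logger=INFO,console
    # In another terminal: echo "hello flume" | nc localhost 44444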
X. Kafka installation and usage

1. Download and unpack kafka_2.10-0.8.1.1.tgz:

    [root@hadoop11 app]# tar -zxvf kafka_2.10-0.8.1.1.tgz
    [root@hadoop11 app]# mv kafka_2.10-0.8.1.1 kafka2.10
    [root@hadoop11 kafka2.10]# pwd
    /usr/app/kafka2.10
2. Installation and configuration

Go into the unpacked kafka directory and edit the configuration file:

    [root@hadoop11 kafka2.10]# vi /usr/app/kafka2.10/config/server.properties

    # Hostname the broker will bind to. If not set, the server will bind to all interfaces
    host.name=hadoop11

Set host.name to the local machine name (the same as the hostname). Then copy Kafka to the remaining machines:

    [root@hadoop11 kafka2.10]# scp -r /usr/app/kafka2.10 root@hadoop12:/usr/app
    [root@hadoop11 kafka2.10]# scp -r /usr/app/kafka2.10 root@hadoop13:/usr/app
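
After the copy, each broker still needs node-specific values. A hedged sketch; the broker.id numbers are assumptions (only host.name is called out in the original), but each broker's id must be unique:

    # Make broker.id and host.name unique on the other two nodes.
    ssh root@hadoop12 "sed -i 's/^broker.id=.*/broker.id=1/; s/^host.name=.*/host.name=hadoop12/' /usr/app/kafka2.10/config/server.properties"
    ssh root@hadoop13 "sed -i 's/^broker.id=.*/broker.id=2/; s/^host.name=.*/host.name=hadoop13/' /usr/app/kafka2.10/config/server.properties"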
3. Startup

Zookeeper must be started before Kafka. Start Kafka on each machine where it is installed (hadoop11, hadoop12, hadoop13):

    [root@hadoop11 kafka2.10]# /usr/app/kafka2.10/bin/kafka-server-start.sh /usr/app/kafka2.10/config/server.properties &

4. Testing

The Kafka scripts live in /usr/app/kafka2.10/bin.

Create a Kafka topic on hadoop11:

    ./kafka-topics.sh --create --topic orcale --replication-factor 1 --partitions 2 --zookeeper hadoop11:2181

Kafka producer command:

    ./kafka-console-producer.sh --broker-list hadoop11:9092 --sync --topic orcale

Kafka consumer command:

    ./kafka-console-consumer.sh --zookeeper hadoop11:2181 --topic orcale --from-beginning
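
A non-interactive round trip (an added sketch; the --max-messages option is assumed to be available in this console-consumer version):

    # Pipe one message through the topic and read it back.
    echo "hello kafka" | ./kafka-console-producer.sh --broker-list hadoop11:9092 --topic orcale
    ./kafka-console-consumer.sh --zookeeper hadoop11:2181 --topic orcale --from-beginning --max-messages 1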

List the Kafka topics:

    [root@hadoop11 kafka2.10]# ./kafka-topics.sh --list --zookeeper localhost:2181

Delete a Kafka topic:

    ./kafka-topics.sh --zookeeper localhost:2181 --topic oracle --delete

Stop the Kafka process:

    /usr/app/kafka2.10/bin/kafka-server-stop.sh

XI. Scala installation

Download address: http://www.scala-lang.org/download/2.11.11.html

    [root@hadoop11 ~]# cd /usr/app/
    [root@hadoop11 app]# tar -zxvf scala-2.11.11.tgz

Configure the environment:

    export JAVA_HOME=/usr/app/jdk1.8.0_77
    export HADOOP_HOME=/usr/app/hadoop-2.7.3
    export HBASE_HOME=/usr/app/hbase-1.2.6
    export HIVE_HOME=/usr/app/hive-0.12.0
    export Flume_HOME=/usr/app/flume1.6
    export Scala_HOME=/usr/app/scala-2.11.11
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$Flume_HOME/bin:$Scala_HOME/bin

Reload the profile and check the Scala version:

    [root@hadoop11 app]# source /etc/profile
    [root@hadoop11 app]# scala -version
    Scala code runner version 2.11.11 -- Copyright 2002-2017, LAMP/EPFL

Check the Scala REPL:

    [root@hadoop11 app]# scala
    Welcome to Scala 2.11.11 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
    Type in expressions for evaluation. Or try :help.

    scala> print("Hello Scala")
    Hello Scala

XII. Spark installation

Download address: http://spark.apache.org/downloads.html

Download and unpack:

    [root@hadoop11 ~]# cd /usr/app/
    [root@hadoop11 app]# tar -zxvf spark-2.0.0-bin-hadoop2.7.tgz

Add Spark_HOME to the environment variables:

    [root@hadoop11 app]# vi /etc/profile

    export JAVA_HOME=/usr/app/jdk1.8.0_77
    export HADOOP_HOME=/usr/app/hadoop-2.7.3
    export HBASE_HOME=/usr/app/hbase-1.2.6
    export HIVE_HOME=/usr/app/hive-0.12.0
    export Flume_HOME=/usr/app/flume1.6
    export Spark_HOME=/usr/app/spark-2.0.0-bin-hadoop2.7
    export Scala_HOME=/usr/app/scala-2.11.11
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$Flume_HOME/bin:$Spark_HOME/bin:$Scala_HOME/bin

Configure ./conf/slaves. First copy the template:

    [root@hadoop11 conf]# cp -r slaves.template slaves

Edit the slaves file:

    hadoop11
    hadoop12
    hadoop13

Configure ./conf/spark-env.sh. Likewise copy the template:

    [root@hadoop11 conf]# cp -r spark-env.sh.template spark-env.sh

Edit conf/spark-env.sh:

    export JAVA_HOME=/usr/app/jdk1.8.0_77
    export Scala_HOME=/usr/app/scala-2.11.11
    export SPARK_MASTER_IP=hadoop11
    export SPARK_WORKER_MEMORY=2g
    export MASTER=spark://hadoop11:7077

Finally copy the spark-2.0.0-bin-hadoop2.7 folder to the other two nodes. Environment variables on the other two machines:

    export JAVA_HOME=/usr/app/jdk1.8.0_77
    export HADOOP_HOME=/usr/app/hadoop-2.7.3
    export HBASE_HOME=/usr/app/hbase-1.2.6
    export Spark_HOME=/usr/app/spark-2.0.0-bin-hadoop2.7
    export Scala_HOME=/usr/app/scala-2.11.11
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$Spark_HOME/bin:$Scala_HOME/bin

Start Spark and open a shell:

    [root@hadoop11 spark-2.0.0-bin-hadoop2.7]# ./sbin/start-all.sh
    [root@hadoop11 bin]# spark-shell
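
A hedged verification run against the standalone master configured above; the examples jar name below follows the usual layout of the spark-2.0.0-bin-hadoop2.7 distribution and should be checked locally:

    /usr/app/spark-2.0.0-bin-hadoop2.7/bin/spark-submit \
        --master spark://hadoop11:7077 \
        --class org.apache.spark.examples.SparkPi \
        /usr/app/spark-2.0.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.0.0.jar 10
    # The finished job should also appear in the standalone master web UI.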
XIII. Startup order and process descriptions

1) Process descriptions

    DataNode                  Hadoop; runs on the nodes listed in slaves
    NameNode                  Hadoop; assigns tasks
    HRegionServer             HBase slave
    QuorumPeerMain            Zookeeper
    HMaster                   HBase master
    JournalNode               stores the NameNode edits metadata (location set by dfs.journalnode.edits.dir)
    DFSZKFailoverController   zkfc, the NameNode failover guard daemon
    ResourceManager           the YARN master
    NodeManager               the YARN worker

2) Startup order

    /usr/app/zookeeper-3.4.8/bin/./zkServer.sh start                (every node)
    /usr/app/hadoop-2.7.3/sbin/start-dfs.sh                         (hadoop11)
    /usr/app/hadoop-2.7.3/sbin/start-yarn.sh                        (hadoop11)
    /usr/app/hbase-1.2.6/bin/start-hbase.sh                         (hadoop11)
    /usr/app/hbase-1.2.6/bin/hbase-daemon.sh start master           (hadoop12)
    /usr/app/kafka2.10/bin/kafka-server-start.sh /usr/app/kafka2.10/config/server.properties &   (every Kafka node)

3) Shutdown order

    /usr/app/kafka2.10/bin/kafka-server-stop.sh                     (every Kafka node)
    /usr/app/hbase-1.2.6/bin/hbase-daemon.sh stop master            (hadoop12)
    /usr/app/hbase-1.2.6/bin/stop-hbase.sh                          (hadoop11)
    /usr/app/hadoop-2.7.3/sbin/stop-yarn.sh                         (hadoop11)
    /usr/app/hadoop-2.7.3/sbin/stop-dfs.sh                          (hadoop11)
    /usr/app/zookeeper-3.4.8/bin/./zkServer.sh stop                 (every node)

4) Web UIs

    http://192.168.200.11:50070   (HDFS management UI, active)
    http://192.168.200.12:50070   (HDFS management UI, standby)
    http://192.168.200.11:8088    (YARN management UI; available once YARN is started)
    http://192.168.200.11:60010   (HBase management page)
    http://192.168.200.12:60010   (HBase management page, backup master)

5) Hadoop start/stop commands

    start-all.sh                                 start all Hadoop daemons: NameNode, Secondary NameNode, DataNode, JobTracker, TaskTracker
    stop-all.sh                                  stop all Hadoop daemons: NameNode, Secondary NameNode, DataNode, JobTracker, TaskTracker
    start-dfs.sh                                 start the HDFS daemons: NameNode, SecondaryNameNode and DataNode
    stop-dfs.sh                                  stop the HDFS daemons: NameNode, SecondaryNameNode and DataNode
    hadoop-daemons.sh start namenode             start only the NameNode daemon
    hadoop-daemons.sh stop namenode              stop only the NameNode daemon
    hadoop-daemons.sh start datanode             start only the DataNode daemon
    hadoop-daemons.sh stop datanode              stop only the DataNode daemon
    hadoop-daemons.sh start secondarynamenode    start only the SecondaryNameNode daemon
    hadoop-daemons.sh stop secondarynamenode     stop only the SecondaryNameNode daemon
    start-mapred.sh                              start the MapReduce daemons: JobTracker and TaskTracker
    stop-mapred.sh                               stop the MapReduce daemons: JobTracker and TaskTracker
    hadoop-daemons.sh start jobtracker           start only the JobTracker daemon
    hadoop-daemons.sh stop jobtracker            stop only the JobTracker daemon
    hadoop-daemons.sh start tasktracker          start only the TaskTracker daemon
    hadoop-daemons.sh stop tasktracker           stop only the TaskTracker daemon

XIV. Error collection
1. Error: MySQL

MySQL "ERROR 1045 (28000): Access denied for user 'root'@'localhost'". To fix it:
1) Stop MySQL:

    service mysqld stop
2) Skip the grant tables:

    mysqld_safe --skip-grant-tables
3) Open a new terminal and run:

    mysql -u root mysql
    mysql> UPDATE user SET Password=PASSWORD('123456') where USER='root';
    mysql> FLUSH PRIVILEGES;
    mysql> \q
2. Error: HBase

"Could not locate executable null\bin\winutils.exe in the Hadoop binaries" (when running a client from Windows). Set the Hadoop home directory in the client code, for example:

    System.setProperty("hadoop.home.dir", "G:/hadoop/hadoop-2.4.1");

and download hadoop2.6(x64)V0.2.zip and put its contents into D:\Java\hadoop-2.6.0\bin.
3. Error: HBase

Reference: http://blog.csdn.net/zzu09huixu/article/details/28448705

    2017-03-05 02:51:23,887 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
    java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2487)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2502)
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2485)
        ... 5 more
    Caused by: java.net.BindException: Problem binding to hadoop11/192.168.200.11:16020 : Address already in use
        at org.apache.hadoop.hbase.ipc.RpcServer.bind(RpcServer.java:2371)
        at org.apache.hadoop.hbase.ipc.RpcServer$Listener.<init>(RpcServer.java:524)
        at org.apache.hadoop.hbase.ipc.RpcServer.<init>(RpcServer.java:1899)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:790)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.createRpcServices(HRegionServer.java:575)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:492)
        ... 10 more
    Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.hbase.ipc.RpcServer.bind(RpcServer.java:2369)

The RegionServer port (16020) is already in use on hadoop11, i.e. a stale RegionServer process is still bound to it.
4. Error: HBase

    starting master, logging to /usr/app/hbase-1.2.6/bin/../logs/hbase-root-master-hadoop11.out
    hadoop12: starting regionserver, logging to /usr/app/hbase-1.2.6/bin/../logs/hbase-root-regionserver-hadoop12.out
    hadoop11: starting regionserver, logging to /usr/app/hbase-1.2.6/bin/../logs/hbase-root-regionserver-hadoop11.out
    hadoop13: starting regionserver, logging to /usr/app/hbase-1.2.6/bin/../logs/hbase-root-regionserver-hadoop13.out
    hadoop11: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
    hadoop11: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
    hadoop13: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
    hadoop13: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

Solution: the Zookeeper node setting was indeed wrong; there was a stray space in the middle of the Zookeeper configuration string. After fixing it the error disappeared.
5. Error: HBase cluster connection

    17/04/16 16:09:14 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop11/192.168.200.11:2181, sessionid = 0x15b762a42df0008, negotiated timeout = 40000
    java.io.IOException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
        at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:414)
        at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:407)
        at org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal(ConnectionManager.java:285)
        at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:207)
        at cn.orcale.com.bigdata.hbase.HbaseDao.createTable(HbaseDao.java:73)
        at cn.orcale.com.bigdata.hbase.HbaseDao.main(HbaseDao.java:48)
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
        ... 6 more
    Caused by: java.lang.VerifyError: class org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Result overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:211)
        at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
        at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
        at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:850)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
        ... 11 more
    17/04/16 16:09:14 INFO hbase.HbaseDao: end create table ......

Solution: jars added from outside the project conflict with the ones pulled in by Maven.

6. Error: HDFS cluster connection
    17/04/16 16:39:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Exception in thread "main" java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$AppendRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
        at java.lang.Class.privateGetPublicMethods(Class.java:2902)
        at java.lang.Class.privateGetPublicMethods(Class.java:2911)
        at java.lang.Class.getMethods(Class.java:1615)
        at sun.misc.ProxyGenerator.generateClassFile(ProxyGenerator.java:451)
        at sun.misc.ProxyGenerator.generateProxyClass(ProxyGenerator.java:339)
        at java.lang.reflect.Proxy$ProxyClassFactory.apply(Proxy.java:639)
        at java.lang.reflect.Proxy$ProxyClassFactory.apply(Proxy.java:557)
        at java.lang.reflect.WeakCache$Factory.get(WeakCache.java:230)
        at java.lang.reflect.WeakCache.get(WeakCache.java:127)
        at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:419)
        at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:719)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getProxy(ProtobufRpcEngine.java:105)
        at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:570)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:420)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:316)
        at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:124)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
        at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:183)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:664)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:608)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
        at cn.orcale.com.bigdata.hdfs.HdfsTest.uploadFile(HdfsTest.java:36)
        at cn.orcale.com.bigdata.hdfs.HdfsTest.main(HdfsTest.java:122)

Solution: jars added from outside the project conflict with the ones pulled in by Maven.

7. Error: NameNode

    2017-03-05 10:53:19,411 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
    org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/app/hadoop-2.6.0/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
    2017-03-05 10:53:19,413 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
    2017-03-05 10:53:19,423 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at hadoop12/192.168.200.12
    ************************************************************/
8. Error: Hive01

Reference: http://www.cnblogs.com/simple-focus/p/6184581.html

    hive> show databases;
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
    hive> create table test(id int,name string);
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

Solution:

1) Copy the MySQL driver jar (mysql-connector-java-5.1.36-bin.jar) into Hive's lib directory, app/hive-0.12.0/lib.

2) Create the hive databases in MySQL and grant privileges. Run the following in MySQL:

    create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
    create database amon DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
    grant all privileges on *.* to 'root'@'%' identified by '123456' with grant option;
    flush privileges;
    grant all privileges on *.* to 'root'@'n1' identified by 'xxxx' with grant option; flush privileges;
    grant all privileges on *.* to 'root'@'%' identified by '123456' with grant option; flush privileges;
9. Error: Hive02

    hive> CREATE TABLE dummy(value STRING);
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes
    com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes

Solution:

    mysql> alter database hive character set latin1;

After that, creating the table in Hive works.

10. Error: Hive03

Reference: http://www.micmiu.com/bigdata/hive/hive-exception-not-a-host-port-parir-pbuf/
Solution: http://www.aboutyun.com/thread-7881-1-1.html

11. Error: Hive04

    hive> create table test(id int,name string);
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes
    com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes

Solution:

    mysql> alter database hive character set latin1;

After that, creating the table in Hive works.
