
Compiling, packaging, and installing ambari 2.8 + ambari-metrics 3.0 + bigtop 3.2

Programming Notes
2024-07-30 16:10

Compiling bigtop

Resources:

Software and source-code mirrors

Development-package mirrors

GitHub access

Edit the hosts file so GitHub is reachable: look up each domain at https://www.ip138.com/ to find a working IP, then add the domain/IP pairs to hosts:

140.82.112.4 github.com
199.232.69.194 github.global.ssl.fastly.net
185.199.108.133 raw.githubusercontent.com
185.199.109.133 raw.githubusercontent.com
185.199.110.133 raw.githubusercontent.com
185.199.111.133 raw.githubusercontent.com

Build background

Technical notes

  • Maven command-line flags worth knowing
    • The different ways to skip things
      • -Drat.skip=true: the RAT plugin checks that every source file carries a proper license header; this flag skips the license check
      • -Dmaven.test.skip=true: skips both compiling and running the tests
      • -DskipTests: skips running the tests but still compiles the test code
      • Some components (e.g. most flink submodules) need the test code compiled; others (the io.confluent-related ones) do not.
    • Timestamps in the Maven log: -Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.dateTimeFormat="yyyy-MM-dd HH:mm:ss.SSS"
    • If a Maven build fails, fix the cause (e.g. a network problem), then resume from the failed module: mvn <args> -rf :xxx, where xxx is the module that failed last time
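Combining the flags above, a resumed build can be assembled like this (a sketch: the module name flink-runtime-web stands in for whatever module your log reported as failed):

```shell
# Compose a Maven resume command with timestamped logging.
# The failed-module name below is a hypothetical example.
TS_FLAGS='-Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.dateTimeFormat="yyyy-MM-dd HH:mm:ss.SSS"'
RESUME_FROM=':flink-runtime-web'   # module reported as failed in the previous run

CMD="mvn package install -DskipTests $TS_FLAGS -rf $RESUME_FROM"
echo "$CMD"
```

Note the space in `-rf :module`; Maven expects the module selector (`:artifactId`) as a separate argument.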

Bigtop build flow and lessons learned

  • Directory layout
    • bigtop.bom: defines the version of every component
    • package.gradle: defines the overall gradle flow for building components
    • Component config directory ("packages" for short): bigtop/bigtop-packages/src/common/xxx/, where xxx is the component name
      • do-component-build: this is usually where the Maven command is wrapped for the gradle build
    • Component source-tarball download directory ("dl"): bigtop/dl/xxx, where xxx is the component name
    • Component build directory ("build"): bigtop/build/xxx, where xxx is the component name
      • rpm/SOURCES: the files from the packages directory are copied here, along with the source tarball from the dl directory
      • rpm/BUILD: the component source is unpacked and built here (this directory is regenerated on every build, so editing files here before a build has no effect)
      • rpm/RPMS: where the built rpm packages land; if files already exist here, gradle skips the build
      • rpm/SRPMS: where the srpm packages land
      • tar: holds the source tarballs
  • Component download/build flow (read package.gradle carefully):
    • File flow: dl/spark-xxx [tar.gz|zip|tgz] -> build/spark/rpm/SOURCES/spark/[tar.gz&sourcecode] -> build/spark/rpm/BUILD/spark/spark-xxx[sourcecode] -> build/spark/rpm/RPMS/[src.rpm] -> output/spark/[src.rpm]
      1. Download the component source tarball into dl; if it already exists, it is not downloaded again (use this to pre-download and save time, or to repack after changing the source)
      2. Unpack the tarball from dl into the build directory and apply the configuration from the packages directory (this step repeats until the build succeeds, so editing the unpacked code is futile; it gets overwritten)
      3. Build inside the build directory; if a configuration change does not take effect (e.g. npm/yarn/bower mirror settings), delete the build directory and rerun the xxx-rpm gradle task
  • Build tips
    • Do your configuration in the packages directory before building, above all switching to domestic mirrors.
    • Pre-download the source tarballs into dl; some tarballs need code changes and repacking
    • If a front-end project fails to build (e.g. tez-ui), cd into its directory and run the node package install commands: npm install, yarn install, bower install
      • Note: do not use the global npm/yarn/bower; locate the project-local copies and invoke them by full path
    • If a Maven build fails, fix the cause, then prefer -rf :xxx to rebuild from that module onward; once Maven succeeds end to end, rebuild the whole component with gradle
      • The exact Maven command can be found in the gradle build log
    • Keep the gradle build logs, so that after a failure you can find the cause, the maven and front-end commands, and the locations of the local npm/yarn/bower binaries
  • Parallel builds: bigtop 3.2.0 apparently does not support parallel builds; 3.3.0 adds --parallel --parallel-threads=N
  • Keep logs, both to recover the exact mvn commands and for troubleshooting:
    • gradle build:
      • ./gradlew tez-rpm -PparentDir=/usr/bigtop >> tez-rpm.log 2>> tez-rpm.log
    • maven build (take the mvn command from the gradle log as authoritative):
      • mvn package install -DskipTests -Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.dateTimeFormat="yyyy-MM-dd HH:mm:ss.SSS" >> /soft/code/bigtop/spark-rpm.log 2>> /soft/code/bigtop/spark-rpm.log
    • front-end builds:
      • npm install
      • yarn install
      • bower install --allow-root
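Step 1 of the flow above is worth exploiting: because the download is skipped when the tarball already exists, dl/ can be pre-seeded. A sketch (a temp directory stands in for a real bigtop checkout):

```shell
# Pre-seed dl/ so the gradle download step becomes a no-op.
BIGTOP_HOME=$(mktemp -d)           # stand-in for your real bigtop checkout
TARBALL=spark-3.2.3.tar.gz

mkdir -p "$BIGTOP_HOME/dl"
touch "$BIGTOP_HOME/dl/$TARBALL"   # in practice: cp /path/to/$TARBALL "$BIGTOP_HOME/dl/"

# Mirror the existence check that happens before downloading
if [ -f "$BIGTOP_HOME/dl/$TARBALL" ]; then
  STATUS="skip download: $TARBALL already present"
else
  STATUS="would download $TARBALL"
fi
echo "$STATUS"
```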

Per-component build difficulty and rough duration (pure build time, excluding download and troubleshooting time)

  • Component  Difficulty (out of 5 stars)  Pure build time
  • hadoop *** (1 hour) [success, no log]
  • zookeeper * (30 min) [success, no log]
  • hive * (1 hour) [success]
  • hbase * (30 min) [success]
  • phoenix * (30 min) [success]
  • tez *** (1 hour) [success]
  • bigtop-ambari-mpack * (5 min) [success]
  • bigtop-groovy * (5 s) [success]
  • bigtop-jsvc * (5 s) [success]
  • bigtop-select * (5 s) [success]
  • bigtop-utils * (5 s) [success]
  • ranger (not present)
  • solr *** (1 hour) [success]
  • kafka *** (30 min) [success]
  • spark ***** (150 min) [success; dropped the local R environment but kept SparkR]
  • flink [failed]
  • zeppelin []

Building branch-3.2 on a physical CentOS machine

Hardware:

  • Lenovo ThinkPad W540: 32 GB RAM, 4-core/8-thread CPU, 1 TB SSD
  • Build environment: VMware CentOS 7.9, 2 cores/4 threads (half the physical machine), 12 GB RAM, 500 GB disk (at least 100 GB recommended; see the post-build disk usage below)
  • Note: the CentOS VM files live on an SSD partition to speed things up

Build steps

Download the code and check out the branch

git clone https://gitee.com/piaolingzxh/bigtop.git
cd bigtop/
git checkout -b branch-3.2 origin/branch-3.2
  • Notes:
    • This build ran from 2024-07-18 to 2024-07-29.
    • branch-3.2 keeps receiving commits, so your checkout may be newer than mine; some of the problems below may already be fixed, and new ones may appear.
    • At the time, the newest branch-3.2 commit was dated 2024-06-02, git hash 3ffe75e05e8428f353a018aafc9c003be72ca6ff.
    • branch-3.2 already contains the code of the 3.2.0 and 3.2.1 releases, i.e. it is newer than both tags; some commits are even newer than master.

Domestic mirror configuration

#Edit bigtop/bigtop.bom; there are two places to change (change both, or only the one for version=bigtop3.2.x)
#1. Switch the mirrors to domestic ones (lines 103-104)
    APACHE_MIRROR = "https://repo.huaweicloud.com/apache"
    APACHE_ARCHIVE = "https://mirrors.aliyun.com/apache"
#2. Uncomment the bigtop-select component (delete lines 273 and 281)

Note: some artifacts may not be found on aliyun and need the original addresses again:
     APACHE_MIRROR = "https://apache.osuosl.org"
     APACHE_ARCHIVE = "https://archive.apache.org/dist"
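The mirror switch can be scripted; a sketch that matches on the variable names rather than line numbers (which drift between commits). It runs against a temp copy here; point BOM at the real bigtop.bom to apply it:

```shell
# Rewrite APACHE_MIRROR / APACHE_ARCHIVE in (a copy of) bigtop.bom.
BOM=$(mktemp)
cat > "$BOM" <<'EOF'
  APACHE_MIRROR = "https://apache.osuosl.org"
  APACHE_ARCHIVE = "https://archive.apache.org/dist"
EOF

sed -i \
  -e 's|^\( *APACHE_MIRROR *= *\).*|\1"https://repo.huaweicloud.com/apache"|' \
  -e 's|^\( *APACHE_ARCHIVE *= *\).*|\1"https://mirrors.aliyun.com/apache"|' \
  "$BOM"
cat "$BOM"
```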

Base environment

Dependency installation (yum)

#Install the dependencies needed to build the components
#1. hadoop dependencies
yum -y install fuse-devel cmake cmake3 lzo-devel openssl-devel protobuf* cyrus-*
#Make cmake3 the default cmake
mv /usr/bin/cmake /usr/bin/cmake.bak
ln -s /usr/bin/cmake3 /usr/bin/cmake
#2. zookeeper dependencies
yum -y install cppunit-devel
#3. spark dependencies
yum -y install R* harfbuzz-devel fribidi-devel libcurl-devel libxml2-devel freetype-devel libpng-devel libtiff-devel libjpeg-turbo-devel pandoc* libgit2-devel
# My Rscript was not installed, and the spark build skipped the local R make anyway
Rscript -e "install.packages(c('knitr', 'rmarkdown', 'devtools', 'testthat', 'e1071', 'survival'), repos='http://mirrors.tuna.tsinghua.edu.cn/CRAN/')"
Rscript -e "install.packages(c('devtools'), repos='http://mirrors.tuna.tsinghua.edu.cn/CRAN/')"
Rscript -e "install.packages(c('evaluate'), repos='https://mirrors.bfsu.edu.cn/CRAN/')"

# Errors from the local R make:
package 'evaluate' is not available (for R version 3.6.0) 
dependency 'evaluate' is not available for package 'knitr'
Rscript -e "install.packages(c('knitr'), repos='http://mirrors.tuna.tsinghua.edu.cn/CRAN/')"
Rscript -e "install.packages(c('evaluate'), repos='http://mirrors.tuna.tsinghua.edu.cn/CRAN/')"

Dependency configuration

  • GitHub hosts entries
140.82.112.4 github.com
199.232.69.194 github.global.ssl.fastly.net
185.199.108.133 raw.githubusercontent.com
185.199.109.133 raw.githubusercontent.com
185.199.110.133 raw.githubusercontent.com
185.199.111.133 raw.githubusercontent.com
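Appending these entries can be made idempotent so repeated runs do not duplicate lines; a sketch writing to a temp file (point HOSTS at /etc/hosts, as root, for real use; only two of the six entries are shown):

```shell
# Add each GitHub entry only if the host name is not present yet.
HOSTS=$(mktemp)                  # use /etc/hosts (as root) in real life
add_host() {
  ip=$1; host=$2
  grep -q "[[:space:]]$host\$" "$HOSTS" || printf '%s %s\n' "$ip" "$host" >> "$HOSTS"
}

add_host 140.82.112.4    github.com
add_host 185.199.108.133 raw.githubusercontent.com
add_host 140.82.112.4    github.com   # duplicate call: no second line added
cat "$HOSTS"
```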

Domestic mirrors | global tool configuration

  • Maven mirror configuration
  • node mirror: ~/.npmrc, or npm config set registry https://registry.npmmirror.com
  • yarn mirror: ~/.yarnrc
  • bower mirror: ~/.bowerrc



# ~/.bowerrc
{
  "directory": "bower_components",
  "registry": "https://registry.bower.io",
  "analytics": false,
  "resolvers": [
    "bower-shrinkwrap-resolver-ext"
  ],
  "strict-ssl": false
}
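For completeness alongside the ~/.bowerrc above, minimal registry-only versions of the other two files might look like this (the yarn 1.x key syntax is the assumption here):

```ini
# ~/.npmrc
registry=https://registry.npmmirror.com

# ~/.yarnrc   (yarn 1.x)
registry "https://registry.npmmirror.com"
```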

Patching some component sources

Download the component sources

#1. Download first
./gradlew tez-download zeppelin-download flink-download spark-download
#2. Enter the download directory
cd dl
#3. Unpack these tarballs
tar -zxvf flink-1.15.3.tar.gz
tar -zxvf apache-tez-0.10.1-src.tar.gz
tar -zxvf zeppelin-0.10.1.tar.gz
tar -zxvf spark-3.2.3.tar.gz

Code changes

hadoop changes

In dl/hadoop-3.3.6-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml, add an execution node between the yarn install and bower install steps, as follows:

<execution>
  <phase>generate-resources</phase>
  <id>bower install moment-timezone</id>
  <configuration>
    <arguments>install moment-timezone=https://hub.nuaa.cf/moment/moment-timezone.git#=0.5.1 --allow-root</arguments>
  </configuration>
  <goals>
    <goal>bower</goal>
  </goals>
</execution>
flink changes
1) node/npm versions and related settings
vi flink-1.15.3/flink-runtime-web/pom.xml
At line 275 change nodeVersion to v12.22.1
At line 276 change npmVersion to 6.14.12
bigtop/dl/flink-1.15.3/flink-runtime-web/pom.xml
<arguments>ci --cache-max=0 --no-save ${npm.proxy}</arguments>  (delete this line and replace it with the next one; note that ci is gone)
<arguments> install -g --registry=https://registry.npmmirror.com --cache-max=0 --no-save</arguments>
2) Disable the failing test code (rename the files so they are not compiled)
cd dl/flink-1.15.3/flink-formats/flink-avro-confluent-registry/src/test/java/org/apache/flink/formats/avro/registry/confluent/
mv CachedSchemaCoderProviderTest.java CachedSchemaCoderProviderTest.java1
mv RegistryAvroFormatFactoryTest.java RegistryAvroFormatFactoryTest.java1
cd dl/flink-1.15.3/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/
mv SQLClientSchemaRegistryITCase.java SQLClientSchemaRegistryITCase.java1
3) The io.confluent artifacts cannot be downloaded: download them manually and install them into the local repository
wget https://packages.confluent.io/maven/io/confluent/common-config/6.2.2/common-config-6.2.2.jar ./
wget https://packages.confluent.io/maven/io/confluent/common-utils/6.2.2/common-utils-6.2.2.jar ./
wget https://packages.confluent.io/maven/io/confluent/kafka-avro-serializer/6.2.2/kafka-avro-serializer-6.2.2.jar ./
wget http://packages.confluent.io/maven/io/confluent/kafka-schema-registry-client/6.2.2/kafka-schema-registry-client-6.2.2.jar ./

# Install the jars into the local repository
mvn install:install-file -Dfile=/soft/ambari-develop/common-config-6.2.2.jar -DgroupId=io.confluent -DartifactId=common-config -Dversion=6.2.2 -Dpackaging=jar
mvn install:install-file -Dfile=/soft/ambari-develop/common-utils-6.2.2.jar -DgroupId=io.confluent -DartifactId=common-utils -Dversion=6.2.2 -Dpackaging=jar
mvn install:install-file -Dfile=/soft/ambari-develop/kafka-avro-serializer-6.2.2.jar -DgroupId=io.confluent -DartifactId=kafka-avro-serializer -Dversion=6.2.2 -Dpackaging=jar
mvn install:install-file -Dfile=/soft/ambari-develop/kafka-schema-registry-client-6.2.2.jar -DgroupId=io.confluent -DartifactId=kafka-schema-registry-client -Dversion=6.2.2 -Dpackaging=jar
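The four download/install pairs above follow one pattern, so they can be generated in a loop. A dry-run sketch that only composes the commands (it assumes the jars are fetched into the current directory rather than the /soft/ambari-develop path used above):

```shell
# Generate wget + mvn install:install-file commands for the io.confluent jars.
VER=6.2.2
BASE=https://packages.confluent.io/maven/io/confluent
CMDS=""
for a in common-config common-utils kafka-avro-serializer kafka-schema-registry-client; do
  CMDS="${CMDS}wget $BASE/$a/$VER/$a-$VER.jar
mvn install:install-file -Dfile=$a-$VER.jar -DgroupId=io.confluent -DartifactId=$a -Dversion=$VER -Dpackaging=jar
"
done
printf '%s' "$CMDS"             # pipe to sh to actually run them
```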

npm install -g @angular/cli@13.0.0

tez changes
vi apache-tez-0.10.1-src/tez-ui/pom.xml
At line 37 change allow-root-build to --allow-root=true

Domestic bower mirror: change https://bower.herokuapp.com to https://registry.bower.io
- Files involved:
  - bigtop/bigtop-packages/src/common/ambari/patch13-AMBARI-25946.diff
  - bigtop/bigtop-packages/src/common/tez/patch6-TEZ-4492.diff
zeppelin changes
vi zeppelin-0.10.1/pom.xml
At line 209 change plugin.gitcommitid.useNativeGit to true
vi zeppelin-0.10.1/spark/pom.xml
At line 50 change spark.src.download.url to https://repo.huaweicloud.com/apache/spark/${spark.archive}/${spark.archive}.tgz
At line 53 change spark.bin.download.url to https://repo.huaweicloud.com/apache/spark/${spark.archive}/${spark.archive}-bin-without-hadoop.tgz
vi zeppelin-0.10.1/rlang/pom.xml
At line 41 change spark.src.download.url to https://repo.huaweicloud.com/apache/spark/${spark.archive}/${spark.archive}.tgz
At line 44 change spark.bin.download.url to https://repo.huaweicloud.com/apache/spark/${spark.archive}/${spark.archive}-bin-without-hadoop.tgz
vi zeppelin-0.10.1/flink/flink-scala-parent/pom.xml
At line 45 change flink.bin.download.url to https://repo.huaweicloud.com/apache/flink/flink-${flink.version}/flink-${flink.version}-bin-scala_${flink.scala.binary.version}.tgz
spark changes
cd bigtop/dl
vim spark-3.2.3/dev/make-distribution.sh
#BUILD_COMMAND=("$MVN" clean package -DskipTests $@)
BUILD_COMMAND=("$MVN" clean package -DskipTests -Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.dateTimeFormat="yyyy-MM-dd HH:mm:ss.SSS" $@) # timestamp flags added to this line
tar zcvf spark-3.2.3.tar.gz spark-3.2.3

Repack the component sources

  • Note: the repacked tarball must keep exactly the same name (including extension) as the original
tar zcvf flink-1.15.3.tar.gz flink-1.15.3
tar zcvf apache-tez-0.10.1-src.tar.gz apache-tez-0.10.1-src
tar zcvf zeppelin-0.10.1.tar.gz zeppelin-0.10.1
tar zcvf spark-3.2.3.tar.gz spark-3.2.3

Full build [not recommended]

  • Expect many hours, with assorted errors along the way; build the components one by one first, and only run a full build once every component succeeds
  • Build parameters
    • -PparentDir=/usr/bigtop: default install prefix of the resulting rpm packages, analogous to ambari's default /usr/hdp path
    • -Dbuildwithdeps=true: if a dependency component has not been built yet, build it first
    • -PpkgSuffix:
      • the bigtop version of this build, similar to an HDP version number, e.g. 3.2.0, 3.2.1, 3.3.0
      • this version number ends up in the package file names; always pass this flag when building, or ambari will not find the packages afterwards
      • example package name: hadoop_3_2_1-hdfs-namenode-3.3.6-1.el7.x86_64.rpm, where 3_2_1 is the bigtop version
    • $component-rpm: component is the component name, e.g. spark, hive, hbase
    • allclean: deletes build/*, output/ and /dist; use with caution, since a full rebuild is extremely time-consuming and, with the network blocked, may hit errors that did not appear before. Back up those three directories manually before running it

Important
Always pass -PpkgSuffix when building; otherwise the packages carry no bigtop version number, ambari cannot recognize them when installing the big-data components later, and you get errors like:
Error getting repository data for BIGTOP-3.2.0, repository not found
No package found for hadoop_${stack_version}(expected name: hadoop_3_2_0)

Full build command (run in the bigtop root directory)
./gradlew bigtop-groovy-rpm bigtop-jsvc-rpm bigtop-select-rpm bigtop-utils-rpm flink-rpm hadoop-rpm hbase-rpm hive-rpm kafka-rpm solr-rpm spark-rpm tez-rpm zeppelin-rpm zookeeper-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-full.log 2>> logs/bigtop-full.log

Building component by component

Build the small supporting components first:

  • bigtop-ambari-mpack-rpm
  • bigtop-groovy-rpm
  • bigtop-jsvc-rpm
  • bigtop-select-rpm
  • bigtop-utils-rpm
./gradlew bigtop-ambari-mpack-rpm bigtop-groovy-rpm bigtop-jsvc-rpm bigtop-select-rpm bigtop-utils-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-dep.log 2>> logs/bigtop-dep.log
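The per-component invocations in the sections below all share one shape, so they can be generated with a loop. A dry-run sketch that only prints the commands (the component list is an illustrative subset):

```shell
# Build each component's rpm with its own log file (dry run: compose only).
GRADLE_OPTS='-Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test'
PLAN=""
for c in zookeeper hadoop hbase hive tez; do
  PLAN="${PLAN}./gradlew ${c}-rpm $GRADLE_OPTS >> logs/bigtop-${c}.log 2>&1
"
done
printf '%s' "$PLAN"             # pipe to sh (or copy one line) to run for real
```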

zookeeper (21 submodules)

  • ./gradlew zookeeper-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-zookeeper.log 2>> logs/bigtop-zookeeper.log

hadoop (111 submodules)

  • gradle build command: ./gradlew hadoop-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-hadoop.log 2>> logs/bigtop-hadoop.log
  • Example maven command:
cd /soft/code/bigtop/build/hadoop/rpm/BUILD/hadoop-3.3.6-src
mvn -Pdist -Pnative -Psrc -Pyarn-ui -Dtar -Dzookeeper.version=3.6.4 -Dhbase.profile=2.0 -DskipTests -DskipITs install -rf :hadoop-yarn-ui >> /soft/code/bigtop/logs/bigtop-hadoop-mvn.log 2>> /soft/code/bigtop/logs/bigtop-hadoop-mvn.log

Problems and fixes

  • 1) cmake3: see the base environment setup above
  • 2) hadoop-yarn-ui build
    • hadoop-yarn-ui pom.xml -> add: bower install moment-timezone=https://hub.nuaa.cf/moment/moment-timezone.git#=0.5.1 --allow-root
  • 3) bower needs to reach github.com; make sure the network is open

Details

  • 1) When building hadoop-yarn-ui, first test whether GitHub is reachable
  • git ls-remote --tags --heads https://github.com/moment/moment-timezone.git checks whether GitHub branch and tag info can be fetched
  • 2) If moment-timezone#0.5.1 or ember#2.8.0 cannot be installed and the build reports ENORES component No version found that was able to satisfy,
  • then add bower install moment-timezone=https://hub.nuaa.cf/moment/moment-timezone.git#=0.5.1 --allow-root before the bower install command in hadoop-yarn-ui's pom.xml; see the hadoop code changes above
  • Front-end command locations:
    • where the node/yarn/bower commands actually run (check the log): bigtop/build/hadoop/rpm/BUILD/hadoop-3.3.6-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/component/webapp
    • local yarn: webapp/node/yarn/dist/bin/yarn install
    • local bower: webapp/node_modules/bower/bin/bower; when building as root, add --allow-root

hbase

  • ./gradlew hbase-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-hbase.log 2>> logs/bigtop-hbase.log

hive

  • ./gradlew hive-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-hive.log 2>> logs/bigtop-hive.log

phoenix

  • ./gradlew bigtop-ambari-mpack-rpm phoenix-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-phoenix.log 2>> logs/bigtop-phoenix.log

tez

  • ./gradlew tez-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-tez.log 2>> logs/bigtop-tez.log

Problems | fixes

  • 1) Building as root: in apache-tez-0.10.2-src/tez-ui/pom.xml, at line 37 change allow-root-build to --allow-root=true
  • 2) bower mirror: change https://bower.herokuapp.com to https://registry.bower.io
    • Files involved:
      • bigtop/bigtop-packages/src/common/ambari/patch13-AMBARI-25946.diff
      • bigtop/bigtop-packages/src/common/tez/patch6-TEZ-4492.diff
  • 3) Expired certificate: codemirror#5.11.0 certificate has expired
    • export BOWER_STRICT_SSL=false
    • or set "strict-ssl": false in the global bower config; see the ~/.bowerrc above


spark (29 submodules)

  • gradle command: ./gradlew spark-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-spark.log 2>> logs/bigtop-spark.log
  • maven command (take the one in the gradle log as authoritative): mvn clean package -DskipTests -Divy.home=/root/.ivy2 -Dsbt.ivy.home=/root/.ivy2 -Duser.home=/root -Drepo.maven.org= -Dreactor.repo=file:///root/.m2/repository -Dhadoop.version=3.3.6 -Dyarn.version=3.3.6 -Pyarn -Phadoop-3.2 -Phive -Phive-thriftserver -Psparkr -Pkubernetes -Pscala-2.12 -Dguava.version=27.0-jre -DskipTests -Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.dateTimeFormat="yyyy-MM-dd HH:mm:ss.SSS" >> /soft/code/bigtop/spark-rpm-mvn.log 2>> /soft/code/bigtop/spark-rpm-mvn.log
  • Note: gradle first runs dev/make-distribution.sh in the unpacked spark source (an mvn clean package), then runs maven install, so the wall time is about twice a single pass; pure build time totals roughly 150 minutes

Problems | fixes

  • 1) SparkR itself packages fine, but the local MAKE_R step keeps failing; consider disabling the local make_r
  • Fix: in bigtop/bigtop-packages/src/common/spark/do-component-build, remove --r from the line ./dev/make-distribution.sh --mvn mvn --r $BUILD_OPTS -DskipTests
  • 2) The mixed Scala/Java build takes long; make the Maven log show timestamps
  • vim dl/spark-3.2.3/dev/make-distribution.sh
  • comment out: BUILD_COMMAND=("$MVN" clean package -DskipTests $@)

  • BUILD_COMMAND=("$MVN" clean package -DskipTests -Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.dateTimeFormat="yyyy-MM-dd HH:mm:ss.SSS" $@) # timestamp flags added to this line
  • vim bigtop/bigtop-packages/src/common/spark/do-component-build
  • mvn $BUILD_OPTS install -DskipTests=$SPARK_SKIP_TESTS -Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.dateTimeFormat="yyyy-MM-dd HH:mm:ss.SSS" # timestamp flags added to this line

Error messages

  • 1) Error in loadVignetteBuilder(pkgdir, TRUE) : vignette builder 'knitr' not found
  • Fix: disable the local make_r

kafka

  • ./gradlew kafka-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-kafka.log 2>> logs/bigtop-kafka.log

Notes

  • 1) Needs access to raw.githubusercontent.com
  • Fix: hosts entries (see above)

flink (207 submodules)

  • ./gradlew flink-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-flink.log 2>> logs/bigtop-flink.log

  • Note: this project cannot skip compiling the test code

Problems | fixes

  • 1) The io.confluent jars cannot be downloaded

  • Fix: download them by hand and register them in the local repository, as described above

  • 2) Failing test code:

    • CachedSchemaCoderProviderTest.java
    • RegistryAvroFormatFactoryTest.java
    • SQLClientSchemaRegistryITCase.java
  • 3) node/npm downloads fail:
    • Downloading https://nodejs.org/dist/v16.13.2/node-v16.13.2-linux-x64.tar.gz to /root/.m2/repository/com/github/eirslett/node/16.13.2/node-16.13.2-linux-x64.tar.gz
    • Downloading https://registry.npmjs.org/npm/-/npm-8.1.2.tgz to /root/.m2/repository/com/github/eirslett/npm/8.1.2/npm-8.1.2.tar.gz
    • Fix: download by hand and place into the target path (keep the file name and extension exactly as expected)
    • mv /home/zxh/soft/ambari-develop/node-v16.13.2-linux-x64.tar.gz /root/.m2/repository/com/github/eirslett/node/16.13.2/node-16.13.2-linux-x64.tar.gz
    • mv /home/zxh/soft/ambari-develop/npm-8.1.2.tgz /root/.m2/repository/com/github/eirslett/npm/8.1.2/npm-8.1.2.tar.gz

solr

  • ./gradlew solr-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-solr.log 2>> logs/bigtop-solr.log
  • Notes:
    • 1) This project builds with ivy; configuring a domestic mirror is strongly recommended

zeppelin

  • ./gradlew zeppelin-rpm -Dbuildwithdeps=true -PparentDir=/usr/bigtop -PpkgSuffix -x test >> logs/bigtop-zeppelin.log 2>> logs/bigtop-zeppelin.log
#Download the zeppelin source tarball
./gradlew zeppelin-download
#Unpack the zeppelin source
cd dl
tar -zxvf zeppelin-0.10.1.tar.gz
#Edit the pom files
vi zeppelin-0.10.1/pom.xml
At line 209 change plugin.gitcommitid.useNativeGit to true
vi zeppelin-0.10.1/spark/pom.xml
At line 50 change spark.src.download.url to https://repo.huaweicloud.com/apache/spark/${spark.archive}/${spark.archive}.tgz
At line 53 change spark.bin.download.url to https://repo.huaweicloud.com/apache/spark/${spark.archive}/${spark.archive}-bin-without-hadoop.tgz
vi zeppelin-0.10.1/rlang/pom.xml
At line 41 change spark.src.download.url to https://repo.huaweicloud.com/apache/spark/${spark.archive}/${spark.archive}.tgz
At line 44 change spark.bin.download.url to https://repo.huaweicloud.com/apache/spark/${spark.archive}/${spark.archive}-bin-without-hadoop.tgz
vi zeppelin-0.10.1/flink/flink-scala-parent/pom.xml
At line 45 change flink.bin.download.url to https://repo.huaweicloud.com/apache/flink/flink-${flink.version}/flink-${flink.version}-bin-scala_${flink.scala.binary.version}.tgz
#Repack the zeppelin source
tar -zcvf zeppelin-0.10.1.tar.gz zeppelin-0.10.1
#Build
./gradlew zeppelin-rpm -PparentDir=/usr/bigtop

Problem:
[INFO] Downloading https://registry.npmjs.org/npm/-/npm-6.9.0.tgz to /root/.m2/repository/com/github/eirslett/npm/6.9.0/npm-6.9.0.tar.gz
I/O exception (java.net.SocketException) caught when processing request to {s}->https://registry.npmjs.org:443: Connection reset
Fix: download it from an aliyun or huawei mirror and place it at the target path above, renaming the file as needed:
https://mirrors.aliyun.com/macports/distfiles/npm6/npm-6.9.0.tgz
mvn -Dhadoop3.2.version=3.3.6 -Dlivy.version=0.7.1-incubating -Pscala-2.11 -Phadoop3 -Pbuild-distr -DskipTests clean package -pl '!beam,!hbase,!pig,!jdbc,!flink,!ignite,!kylin,!lens,!cassandra,!elasticsearch,!bigquery,!alluxio,!scio,!groovy,!sap,!java,!geode,!neo4j,!hazelcastjet,!submarine,!sparql,!mongodb,!ksql,!scalding' -am -rf :zeppelin-client-examples  >> /soft/code/bigtop/logs/bigtop-zeppelin.log 2>> /soft/code/bigtop/logs/bigtop-zeppelin.log    
Problem: Bower resolver not found: bower-shrinkwrap-resolver-ext
See https://www.cnblogs.com/jameBo/p/10615444.html

Post-build disk usage

  • Disk usage:
    • bigtop 11G
    • ambari 2.1G
    • ambari 6G
    • shared caches:
      • .m2: 12G
      • .ant: 2M
      • .ivy2: 200M
      • .gradle: 1.2G
      • .nvm: 75M
      • .npm: 400M
      • .cache: 400M
  • Total disk usage: 34G+

Per-component build times on branch-3.2 [pure build time, excluding download time]

  • Note: assuming no errors

hbase

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Apache HBase 2.4.17:
[INFO] 
[INFO] Apache HBase ....................................... SUCCESS [  5.172 s]
[INFO] Apache HBase - Checkstyle .......................... SUCCESS [  1.053 s]
[INFO] Apache HBase - Annotations ......................... SUCCESS [  0.786 s]
[INFO] Apache HBase - Build Configuration ................. SUCCESS [  0.313 s]
[INFO] Apache HBase - Logging ............................. SUCCESS [  1.018 s]
[INFO] Apache HBase - Shaded Protocol ..................... SUCCESS [ 40.242 s]
[INFO] Apache HBase - Common .............................. SUCCESS [ 10.561 s]
[INFO] Apache HBase - Metrics API ......................... SUCCESS [  6.383 s]
[INFO] Apache HBase - Hadoop Compatibility ................ SUCCESS [ 10.353 s]
[INFO] Apache HBase - Metrics Implementation .............. SUCCESS [  6.360 s]
[INFO] Apache HBase - Hadoop Two Compatibility ............ SUCCESS [  8.477 s]
[INFO] Apache HBase - Protocol ............................ SUCCESS [  9.247 s]
[INFO] Apache HBase - Client .............................. SUCCESS [  8.566 s]
[INFO] Apache HBase - Zookeeper ........................... SUCCESS [  7.584 s]
[INFO] Apache HBase - Replication ......................... SUCCESS [  7.105 s]
[INFO] Apache HBase - Resource Bundle ..................... SUCCESS [  0.327 s]
[INFO] Apache HBase - HTTP ................................ SUCCESS [  9.274 s]
[INFO] Apache HBase - Asynchronous FileSystem ............. SUCCESS [  9.537 s]
[INFO] Apache HBase - Procedure ........................... SUCCESS [  6.110 s]
[INFO] Apache HBase - Server .............................. SUCCESS [ 19.575 s]
[INFO] Apache HBase - MapReduce ........................... SUCCESS [ 11.442 s]
[INFO] Apache HBase - Testing Util ........................ SUCCESS [ 13.362 s]
[INFO] Apache HBase - Thrift .............................. SUCCESS [ 14.274 s]
[INFO] Apache HBase - RSGroup ............................. SUCCESS [ 10.479 s]
[INFO] Apache HBase - Shell ............................... SUCCESS [ 13.807 s]
[INFO] Apache HBase - Coprocessor Endpoint ................ SUCCESS [ 11.806 s]
[INFO] Apache HBase - Integration Tests ................... SUCCESS [ 13.822 s]
[INFO] Apache HBase - Rest ................................ SUCCESS [ 10.202 s]
[INFO] Apache HBase - Examples ............................ SUCCESS [ 14.512 s]
[INFO] Apache HBase - Shaded .............................. SUCCESS [  0.385 s]
[INFO] Apache HBase - Shaded - Client (with Hadoop bundled) SUCCESS [ 31.657 s]
[INFO] Apache HBase - Shaded - Client ..................... SUCCESS [ 18.075 s]
[INFO] Apache HBase - Shaded - MapReduce .................. SUCCESS [ 28.745 s]
[INFO] Apache HBase - External Block Cache ................ SUCCESS [  9.085 s]
[INFO] Apache HBase - HBTop ............................... SUCCESS [  7.789 s]
[INFO] Apache HBase - Assembly ............................ SUCCESS [02:21 min]
[INFO] Apache HBase - Shaded - Testing Util ............... SUCCESS [01:08 min]
[INFO] Apache HBase - Shaded - Testing Util Tester ........ SUCCESS [ 10.186 s]
[INFO] Apache HBase Shaded Packaging Invariants ........... SUCCESS [ 11.475 s]
[INFO] Apache HBase Shaded Packaging Invariants (with Hadoop bundled) SUCCESS [  7.806 s]
[INFO] Apache HBase - Archetypes .......................... SUCCESS [  0.088 s]
[INFO] Apache HBase - Exemplar for hbase-client archetype . SUCCESS [  8.892 s]
[INFO] Apache HBase - Exemplar for hbase-shaded-client archetype SUCCESS [ 11.462 s]
[INFO] Apache HBase - Archetype builder ................... SUCCESS [  0.595 s]
[INFO] ------------------------------------------------------------------------

Phoenix

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Apache Phoenix 5.1.3:
[INFO] 
[INFO] Apache Phoenix ..................................... SUCCESS [  7.259 s]
[INFO] Phoenix Hbase 2.5.0 compatibility .................. SUCCESS [ 26.320 s]
[INFO] Phoenix Hbase 2.4.1 compatibility .................. SUCCESS [ 15.803 s]
[INFO] Phoenix Hbase 2.4.0 compatibility .................. SUCCESS [ 16.274 s]
[INFO] Phoenix Hbase 2.3.0 compatibility .................. SUCCESS [ 21.425 s]
[INFO] Phoenix Hbase 2.2.5 compatibility .................. SUCCESS [ 13.561 s]
[INFO] Phoenix Hbase 2.1.6 compatibility .................. SUCCESS [ 12.690 s]
[INFO] Phoenix Core ....................................... SUCCESS [ 57.483 s]
[INFO] Phoenix - Pherf .................................... SUCCESS [ 17.866 s]
[INFO] Phoenix - Tracing Web Application .................. SUCCESS [ 11.662 s]
[INFO] Phoenix Client Parent .............................. SUCCESS [  0.046 s]
[INFO] Phoenix Client ..................................... SUCCESS [05:54 min]
[INFO] Phoenix Client Embedded ............................ SUCCESS [04:47 min]
[INFO] Phoenix Server JAR ................................. SUCCESS [ 59.698 s]
[INFO] Phoenix Assembly ................................... SUCCESS [ 22.830 s]
[INFO] ------------------------------------------------------------------------

tez

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for tez 0.10.2:
[INFO] 
[INFO] tez ................................................ SUCCESS [  5.045 s]
[INFO] hadoop-shim ........................................ SUCCESS [  6.835 s]
[INFO] tez-api ............................................ SUCCESS [ 16.603 s]
[INFO] tez-build-tools .................................... SUCCESS [  0.765 s]
[INFO] tez-common ......................................... SUCCESS [  3.871 s]
[INFO] tez-runtime-internals .............................. SUCCESS [  4.840 s]
[INFO] tez-runtime-library ................................ SUCCESS [  9.267 s]
[INFO] tez-mapreduce ...................................... SUCCESS [  5.369 s]
[INFO] tez-examples ....................................... SUCCESS [  3.140 s]
[INFO] tez-dag ............................................ SUCCESS [ 12.564 s]
[INFO] tez-tests .......................................... SUCCESS [  6.626 s]
[INFO] tez-ext-service-tests .............................. SUCCESS [  5.906 s]
[INFO] tez-ui ............................................. SUCCESS [01:36 min]
[INFO] tez-plugins ........................................ SUCCESS [  0.072 s]
[INFO] tez-protobuf-history-plugin ........................ SUCCESS [  6.192 s]
[INFO] tez-yarn-timeline-history .......................... SUCCESS [  7.515 s]
[INFO] tez-yarn-timeline-history-with-acls ................ SUCCESS [  8.769 s]
[INFO] tez-yarn-timeline-cache-plugin ..................... SUCCESS [ 31.190 s]
[INFO] tez-yarn-timeline-history-with-fs .................. SUCCESS [  7.541 s]
[INFO] tez-history-parser ................................. SUCCESS [ 22.187 s]
[INFO] tez-aux-services ................................... SUCCESS [ 17.969 s]
[INFO] tez-tools .......................................... SUCCESS [  0.159 s]
[INFO] tez-perf-analyzer .................................. SUCCESS [  0.095 s]
[INFO] tez-job-analyzer ................................... SUCCESS [  4.962 s]
[INFO] tez-javadoc-tools .................................. SUCCESS [  1.292 s]
[INFO] hadoop-shim-impls .................................. SUCCESS [  0.074 s]
[INFO] hadoop-shim-2.8 .................................... SUCCESS [  1.260 s]
[INFO] tez-dist ........................................... SUCCESS [ 51.987 s]
[INFO] Tez ................................................ SUCCESS [  0.416 s]
[INFO] ------------------------------------------------------------------------

spark

  • mvn clean package
2024-07-22 15:37:08.290 [INFO] ------------------------------------------------------------------------
2024-07-22 15:37:08.290 [INFO] Reactor Summary for Spark Project Parent POM 3.2.3:
2024-07-22 15:37:08.290 [INFO]
2024-07-22 15:37:08.292 [INFO] Spark Project Parent POM ........................... SUCCESS [  7.644 s]
2024-07-22 15:37:08.292 [INFO] Spark Project Tags ................................. SUCCESS [ 17.558 s]
2024-07-22 15:37:08.292 [INFO] Spark Project Sketch ............................... SUCCESS [ 14.904 s]
2024-07-22 15:37:08.293 [INFO] Spark Project Local DB ............................. SUCCESS [  4.335 s]
2024-07-22 15:37:08.293 [INFO] Spark Project Networking ........................... SUCCESS [  9.101 s]
2024-07-22 15:37:08.293 [INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  4.797 s]
2024-07-22 15:37:08.294 [INFO] Spark Project Unsafe ............................... SUCCESS [ 18.140 s]
2024-07-22 15:37:08.294 [INFO] Spark Project Launcher ............................. SUCCESS [  3.284 s]
2024-07-22 15:37:08.294 [INFO] Spark Project Core ................................. SUCCESS [06:30 min]  ~6 min
2024-07-22 15:37:08.294 [INFO] Spark Project ML Local Library ..................... SUCCESS [01:39 min]
2024-07-22 15:37:08.295 [INFO] Spark Project GraphX ............................... SUCCESS [01:31 min]
2024-07-22 15:37:08.295 [INFO] Spark Project Streaming ............................ SUCCESS [03:09 min]  ~3 min
2024-07-22 15:37:08.295 [INFO] Spark Project Catalyst ............................. SUCCESS [07:35 min]  ~7 min
2024-07-22 15:37:08.296 [INFO] Spark Project SQL .................................. SUCCESS [12:56 min]  ~12 min
2024-07-22 15:37:08.296 [INFO] Spark Project ML Library ........................... SUCCESS [08:11 min]  ~8 min
2024-07-22 15:37:08.296 [INFO] Spark Project Tools ................................ SUCCESS [ 18.675 s]
2024-07-22 15:37:08.296 [INFO] Spark Project Hive ................................. SUCCESS [05:02 min]  ~5 min
2024-07-22 15:37:08.296 [INFO] Spark Project REPL ................................. SUCCESS [01:13 min]
2024-07-22 15:37:08.297 [INFO] Spark Project YARN Shuffle Service ................. SUCCESS [ 21.930 s]
2024-07-22 15:37:08.297 [INFO] Spark Project YARN ................................. SUCCESS [02:53 min]
2024-07-22 15:37:08.297 [INFO] Spark Project Kubernetes ........................... SUCCESS [02:39 min]
2024-07-22 15:37:08.297 [INFO] Spark Project Hive Thrift Server ................... SUCCESS [02:12 min]
2024-07-22 15:37:08.298 [INFO] Spark Project Assembly ............................. SUCCESS [  6.690 s]
2024-07-22 15:37:08.298 [INFO] Kafka 0.10+ Token Provider for Streaming ........... SUCCESS [ 59.635 s]
2024-07-22 15:37:08.298 [INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [01:28 min]
2024-07-22 15:37:08.299 [INFO] Kafka 0.10+ Source for Structured Streaming ........ SUCCESS [02:50 min]
2024-07-22 15:37:08.299 [INFO] Spark Project Examples ............................. SUCCESS [01:51 min]
2024-07-22 15:37:08.299 [INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [ 19.932 s]
2024-07-22 15:37:08.300 [INFO] Spark Avro ......................................... SUCCESS [02:07 min]
2024-07-22 15:37:08.300 [INFO] ------------------------------------------------------------------------
2024-07-22 15:37:08.301 [INFO] BUILD SUCCESS
2024-07-22 15:37:08.301 [INFO] ------------------------------------------------------------------------
2024-07-22 15:37:08.302 [INFO] Total time:  01:07 h
2024-07-22 15:37:08.302 [INFO] Finished at: 2024-07-22T15:37:08+08:00
2024-07-22 15:37:08.303 [INFO] ------------------------------------------------------------------------


Open question: builds at home were noticeably slower, with lower CPU usage; my guess is that it was because the machine had been running for a long time without a shutdown
2024-07-22 07:21:01.933 [INFO] ------------------------------------------------------------------------
2024-07-22 07:21:01.934 [INFO] Reactor Summary for Spark Project Parent POM 3.2.3:
2024-07-22 07:21:01.936 [INFO] 
2024-07-22 07:21:01.939 [INFO] Spark Project Parent POM ........................... SUCCESS [ 24.693 s]
2024-07-22 07:21:01.941 [INFO] Spark Project Tags ................................. SUCCESS [ 47.200 s]
2024-07-22 07:21:01.943 [INFO] Spark Project Sketch ............................... SUCCESS [ 53.843 s]
2024-07-22 07:21:01.945 [INFO] Spark Project Local DB ............................. SUCCESS [ 15.175 s]
2024-07-22 07:21:01.947 [INFO] Spark Project Networking ........................... SUCCESS [ 33.738 s]
2024-07-22 07:21:01.949 [INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 17.500 s]
2024-07-22 07:21:01.951 [INFO] Spark Project Unsafe ............................... SUCCESS [01:06 min]
2024-07-22 07:21:01.952 [INFO] Spark Project Launcher ............................. SUCCESS [ 12.097 s]
2024-07-22 07:21:01.954 [INFO] Spark Project Core ................................. SUCCESS [23:34 min]
2024-07-22 07:21:01.957 [INFO] Spark Project ML Local Library ..................... SUCCESS [04:01 min]
2024-07-22 07:21:01.960 [INFO] Spark Project GraphX ............................... SUCCESS [04:43 min]
2024-07-22 07:21:01.962 [INFO] Spark Project Streaming ............................ SUCCESS [08:30 min]
2024-07-22 07:21:01.962 [INFO] Spark Project Catalyst ............................. SUCCESS [24:34 min]
2024-07-22 07:21:01.963 [INFO] Spark Project SQL .................................. SUCCESS [38:07 min]
2024-07-22 07:21:01.965 [INFO] Spark Project ML Library ........................... SUCCESS [25:03 min]
2024-07-22 07:21:01.966 [INFO] Spark Project Tools ................................ SUCCESS [01:09 min]
2024-07-22 07:21:01.969 [INFO] Spark Project Hive ................................. SUCCESS [15:42 min]
2024-07-22 07:21:01.972 [INFO] Spark Project REPL ................................. SUCCESS [03:50 min]
2024-07-22 07:21:01.973 [INFO] Spark Project YARN Shuffle Service ................. SUCCESS [01:20 min]
2024-07-22 07:21:01.975 [INFO] Spark Project YARN ................................. SUCCESS [08:42 min]
2024-07-22 07:21:01.976 [INFO] Spark Project Hive Thrift Server ................... SUCCESS [08:33 min]
2024-07-22 07:21:01.976 [INFO] Spark Project Assembly ............................. SUCCESS [ 36.134 s]
2024-07-22 07:21:01.978 [INFO] Kafka 0.10+ Token Provider for Streaming ........... SUCCESS [03:52 min]
2024-07-22 07:21:01.981 [INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [05:26 min]
2024-07-22 07:21:01.982 [INFO] Kafka 0.10+ Source for Structured Streaming ........ SUCCESS [10:05 min]
2024-07-22 07:21:01.983 [INFO] Spark Project Examples ............................. SUCCESS [06:48 min]
2024-07-22 07:21:01.984 [INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [01:18 min]
2024-07-22 07:21:01.985 [INFO] Spark Avro ......................................... SUCCESS [08:12 min]
2024-07-22 07:21:01.987 [INFO] ------------------------------------------------------------------------
2024-07-22 07:21:01.987 [INFO] BUILD SUCCESS
2024-07-22 07:21:01.988 [INFO] ------------------------------------------------------------------------
2024-07-22 07:21:01.991 [INFO] Total time:  03:28 h
2024-07-22 07:21:01.994 [INFO] Finished at: 2024-07-22T07:21:01+08:00
2024-07-22 07:21:01.995 [INFO] ------------------------------------------------------------------------
mvn install:
2024-07-22 16:43:49.807 [INFO] ------------------------------------------------------------------------
2024-07-22 16:43:49.808 [INFO] Reactor Summary for Spark Project Parent POM 3.2.3:
2024-07-22 16:43:49.808 [INFO]
2024-07-22 16:43:49.809 [INFO] Spark Project Parent POM ........................... SUCCESS [  8.358 s]
2024-07-22 16:43:49.809 [INFO] Spark Project Tags ................................. SUCCESS [ 11.288 s]
2024-07-22 16:43:49.810 [INFO] Spark Project Sketch ............................... SUCCESS [  7.351 s]
2024-07-22 16:43:49.810 [INFO] Spark Project Local DB ............................. SUCCESS [  9.763 s]
2024-07-22 16:43:49.810 [INFO] Spark Project Networking ........................... SUCCESS [ 19.142 s]
2024-07-22 16:43:49.810 [INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 19.644 s]
2024-07-22 16:43:49.811 [INFO] Spark Project Unsafe ............................... SUCCESS [  9.358 s]
2024-07-22 16:43:49.811 [INFO] Spark Project Launcher ............................. SUCCESS [  8.332 s]
2024-07-22 16:43:49.811 [INFO] Spark Project Core ................................. SUCCESS [06:43 min]
2024-07-22 16:43:49.811 [INFO] Spark Project ML Local Library ..................... SUCCESS [01:14 min]
2024-07-22 16:43:49.811 [INFO] Spark Project GraphX ............................... SUCCESS [01:23 min]
2024-07-22 16:43:49.812 [INFO] Spark Project Streaming ............................ SUCCESS [02:23 min]
2024-07-22 16:43:49.812 [INFO] Spark Project Catalyst ............................. SUCCESS [07:44 min]
2024-07-22 16:43:49.812 [INFO] Spark Project SQL .................................. SUCCESS [12:42 min]
2024-07-22 16:43:49.812 [INFO] Spark Project ML Library ........................... SUCCESS [08:15 min]
2024-07-22 16:43:49.812 [INFO] Spark Project Tools ................................ SUCCESS [ 15.477 s]
2024-07-22 16:43:49.813 [INFO] Spark Project Hive ................................. SUCCESS [04:28 min]
2024-07-22 16:43:49.813 [INFO] Spark Project REPL ................................. SUCCESS [01:13 min]
2024-07-22 16:43:49.814 [INFO] Spark Project YARN Shuffle Service ................. SUCCESS [ 26.235 s]
2024-07-22 16:43:49.814 [INFO] Spark Project YARN ................................. SUCCESS [02:35 min]
2024-07-22 16:43:49.815 [INFO] Spark Project Kubernetes ........................... SUCCESS [02:32 min]
2024-07-22 16:43:49.815 [INFO] Spark Project Hive Thrift Server ................... SUCCESS [02:35 min]
2024-07-22 16:43:49.815 [INFO] Spark Project Assembly ............................. SUCCESS [  5.722 s]
2024-07-22 16:43:49.816 [INFO] Kafka 0.10+ Token Provider for Streaming ........... SUCCESS [ 59.883 s]
2024-07-22 16:43:49.816 [INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [01:30 min]
2024-07-22 16:43:49.816 [INFO] Kafka 0.10+ Source for Structured Streaming ........ SUCCESS [02:57 min]
2024-07-22 16:43:49.817 [INFO] Spark Project Examples ............................. SUCCESS [02:22 min]
2024-07-22 16:43:49.817 [INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [ 17.156 s]
2024-07-22 16:43:49.817 [INFO] Spark Avro ......................................... SUCCESS [02:15 min]
2024-07-22 16:43:49.818 [INFO] ------------------------------------------------------------------------
2024-07-22 16:43:49.818 [INFO] BUILD SUCCESS
2024-07-22 16:43:49.818 [INFO] ------------------------------------------------------------------------
2024-07-22 16:43:49.818 [INFO] Total time:  01:06 h
2024-07-22 16:43:49.819 [INFO] Finished at: 2024-07-22T16:43:49+08:00
2024-07-22 16:43:49.819 [INFO] ------------------------------------------------------------------------
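The three reactor summaries above are most useful when compared module by module (Spark SQL, Catalyst, Core, and ML Library dominate in every run). A minimal sketch, assuming a POSIX awk, of a hypothetical helper that turns such a summary into `module<TAB>seconds` pairs for comparison:

```shell
# Hypothetical helper: convert a Maven reactor summary into "module<TAB>seconds",
# normalizing both "[ 18.675 s]" and "[05:02 min]" style durations.
parse_reactor() {
  awk '/SUCCESS \[/ {
    # The duration column after "SUCCESS ", e.g. "[05:02 min]" or "[ 18.675 s]"
    match($0, /SUCCESS \[[^]]*\]/)
    t = substr($0, RSTART + 9, RLENGTH - 10)
    gsub(/^ +| +$/, "", t)
    if (t ~ / min$/) { sub(/ min$/, "", t); split(t, a, ":"); sec = a[1] * 60 + a[2] }
    else             { sub(/ s$/, "", t); sec = t }
    name = $0
    sub(/ *\.\.+.*/, "", name)      # drop the dot leaders and status column
    sub(/^.*INFO\] */, "", name)    # drop the timestamp/log prefix
    printf "%s\t%.1f\n", name, sec
  }'
}
```

Usage, e.g. `parse_reactor < mvn.log | sort -t"$(printf '\t')" -k2 -rn | head`, lists the slowest modules first; running it on two logs makes the home-vs-office timing gap above easy to quantify.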

flink:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO] 
[INFO] Flink :                                                            [pom]
[INFO] Flink : Annotations                                                [jar]
[INFO] Flink : Architecture Tests                                         [pom]
[INFO] Flink : Architecture Tests : Base                                  [jar]
[INFO] Flink : Test utils :                                               [pom]
[INFO] Flink : Test utils : Junit                                         [jar]
[INFO] Flink : Metrics :                                                  [pom]
[INFO] Flink : Metrics : Core                                             [jar]
[INFO] Flink : Core                                                       [jar]
[INFO] Flink : Table :                                                    [pom]
[INFO] Flink : Table : Common                                             [jar]
[INFO] Flink : Table : API Java                                           [jar]
[INFO] Flink : Java                                                       [jar]
[INFO] Flink : Connectors :                                               [pom]
[INFO] Flink : Connectors : File Sink Common                              [jar]
[INFO] Flink : RPC :                                                      [pom]
[INFO] Flink : RPC : Core                                                 [jar]
[INFO] Flink : RPC : Akka                                                 [jar]
[INFO] Flink : RPC : Akka-Loader                                          [jar]
[INFO] Flink : Queryable state :                                          [pom]
[INFO] Flink : Queryable state : Client Java                              [jar]
[INFO] Flink : FileSystems :                                              [pom]
[INFO] Flink : FileSystems : Hadoop FS                                    [jar]
[INFO] Flink : Runtime                                                    [jar]
[INFO] Flink : Streaming Java                                             [jar]
[INFO] Flink : Table : API bridge base                                    [jar]
[INFO] Flink : Table : API Java bridge                                    [jar]
[INFO] Flink : Table : Code Splitter                                      [jar]
[INFO] Flink : Optimizer                                                  [jar]
[INFO] Flink : Clients                                                    [jar]
[INFO] Flink : DSTL                                                       [pom]
[INFO] Flink : DSTL : DFS                                                 [jar]
[INFO] Flink : State backends :                                           [pom]
[INFO] Flink : State backends : RocksDB                                   [jar]
[INFO] Flink : State backends : Changelog                                 [jar]
[INFO] Flink : Test utils : Utils                                         [jar]
[INFO] Flink : Libraries :                                                [pom]
[INFO] Flink : Libraries : CEP                                            [jar]
[INFO] Flink : Table : Runtime                                            [jar]
[INFO] Flink : Scala                                                      [jar]
[INFO] Flink : Table : SQL Parser                                         [jar]
[INFO] Flink : Table : SQL Parser Hive                                    [jar]
[INFO] Flink : Table : API Scala                                          [jar]
[INFO] Flink : Test utils : Connectors                                    [jar]
[INFO] Flink : Architecture Tests : Test                                  [jar]
[INFO] Flink : Connectors : Base                                          [jar]
[INFO] Flink : Connectors : Files                                         [jar]
[INFO] Flink : Examples :                                                 [pom]
[INFO] Flink : Examples : Batch                                           [jar]
[INFO] Flink : Connectors : Hadoop compatibility                          [jar]
[INFO] Flink : Tests                                                      [jar]
[INFO] Flink : Streaming Scala                                            [jar]
[INFO] Flink : Table : API Scala bridge                                   [jar]
[INFO] Flink : Table : Planner                                            [jar]
[INFO] Flink : Formats :                                                  [pom]
[INFO] Flink : Format : Common                                            [jar]
[INFO] Flink : Formats : Csv                                              [jar]
[INFO] Flink : Formats : Hadoop bulk                                      [jar]
[INFO] Flink : Formats : Orc                                              [jar]
[INFO] Flink : Formats : Orc nohive                                       [jar]
[INFO] Flink : Formats : Avro                                             [jar]
[INFO] Flink : Formats : Parquet                                          [jar]
[INFO] Flink : Connectors : Hive                                          [jar]
[INFO] Flink : Python                                                     [jar]
[INFO] Flink : Table : SQL Client                                         [jar]
[INFO] Flink : Connectors : AWS Base                                      [jar]
[INFO] Flink : Connectors : Cassandra                                     [jar]
[INFO] Flink : Formats : Json                                             [jar]
[INFO] Flink : Connectors : Elasticsearch base                            [jar]
[INFO] Flink : Connectors : Elasticsearch 6                               [jar]
[INFO] Flink : Connectors : Elasticsearch 7                               [jar]
[INFO] Flink : Connectors : Google PubSub                                 [jar]
[INFO] Flink : Connectors : HBase base                                    [jar]
[INFO] Flink : Connectors : HBase 1.4                                     [jar]
[INFO] Flink : Connectors : HBase 2.2                                     [jar]
[INFO] Flink : Connectors : JDBC                                          [jar]
[INFO] Flink : Metrics : JMX                                              [jar]
[INFO] Flink : Formats : Avro confluent registry                          [jar]
[INFO] Flink : Connectors : Kafka                                         [jar]
[INFO] Flink : Connectors : Amazon Kinesis Data Streams                   [jar]
[INFO] Flink : Connectors : Kinesis                                       [jar]
[INFO] Flink : Connectors : Nifi                                          [jar]
[INFO] Flink : Connectors : Pulsar                                        [jar]
[INFO] Flink : Connectors : RabbitMQ                                      [jar]
[INFO] Flink : Architecture Tests : Production                            [jar]
[INFO] Flink : FileSystems : Hadoop FS shaded                             [jar]
[INFO] Flink : FileSystems : S3 FS Base                                   [jar]
[INFO] Flink : FileSystems : S3 FS Hadoop                                 [jar]
[INFO] Flink : FileSystems : S3 FS Presto                                 [jar]
[INFO] Flink : FileSystems : OSS FS                                       [jar]
[INFO] Flink : FileSystems : Azure FS Hadoop                              [jar]
[INFO] Flink : FileSystems : Google Storage FS Hadoop                     [jar]
[INFO] Flink : Runtime web                                                [jar]
[INFO] Flink : Connectors : HCatalog                                      [jar]
[INFO] Flink : Connectors : Amazon Kinesis Data Firehose                  [jar]
[INFO] Flink : Connectors : SQL : Elasticsearch 6                         [jar]
[INFO] Flink : Connectors : SQL : Elasticsearch 7                         [jar]
[INFO] Flink : Connectors : SQL : HBase 1.4                               [jar]
[INFO] Flink : Connectors : SQL : HBase 2.2                               [jar]
[INFO] Flink : Connectors : SQL : Hive 1.2.2                              [jar]
[INFO] Flink : Connectors : SQL : Hive 2.2.0                              [jar]
[INFO] Flink : Connectors : SQL : Hive 2.3.6                              [jar]
[INFO] Flink : Connectors : SQL : Hive 3.1.2                              [jar]
[INFO] Flink : Connectors : SQL : Kafka                                   [jar]
[INFO] Flink : Connectors : SQL : Amazon Kinesis Data Streams             [jar]
[INFO] Flink : Connectors : SQL : Amazon Kinesis Data Firehose            [jar]
[INFO] Flink : Connectors : SQL : Kinesis                                 [jar]
[INFO] Flink : Connectors : SQL : Pulsar                                  [jar]
[INFO] Flink : Connectors : SQL : RabbitMQ                                [jar]
[INFO] Flink : Formats : Sequence file                                    [jar]
[INFO] Flink : Formats : Compress                                         [jar]
[INFO] Flink : Formats : Avro AWS Glue Schema Registry                    [jar]
[INFO] Flink : Formats : JSON AWS Glue Schema Registry                    [jar]
[INFO] Flink : Formats : SQL Orc                                          [jar]
[INFO] Flink : Formats : SQL Parquet                                      [jar]
[INFO] Flink : Formats : SQL Avro                                         [jar]
[INFO] Flink : Formats : SQL Avro Confluent Registry                      [jar]
[INFO] Flink : Examples : Streaming                                       [jar]
[INFO] Flink : Examples : Table                                           [jar]
[INFO] Flink : Examples : Build Helper :                                  [pom]
[INFO] Flink : Examples : Build Helper : Streaming State machine          [jar]
[INFO] Flink : Examples : Build Helper : Streaming Google PubSub          [jar]
[INFO] Flink : Container                                                  [jar]
[INFO] Flink : Queryable state : Runtime                                  [jar]
[INFO] Flink : Dist-Scala                                                 [jar]
[INFO] Flink : Kubernetes                                                 [jar]
[INFO] Flink : Yarn                                                       [jar]
[INFO] Flink : Table : API Java Uber                                      [jar]
[INFO] Flink : Table : Planner Loader Bundle                              [jar]
[INFO] Flink : Table : Planner Loader                                     [jar]
[INFO] Flink : Libraries : Gelly                                          [jar]
[INFO] Flink : Libraries : Gelly scala                                    [jar]
[INFO] Flink : Libraries : Gelly Examples                                 [jar]
[INFO] Flink : External resources :                                       [pom]
[INFO] Flink : External resources : GPU                                   [jar]
[INFO] Flink : Metrics : Dropwizard                                       [jar]
[INFO] Flink : Metrics : Graphite                                         [jar]
[INFO] Flink : Metrics : InfluxDB                                         [jar]
[INFO] Flink : Metrics : Prometheus                                       [jar]
[INFO] Flink : Metrics : StatsD                                           [jar]
[INFO] Flink : Metrics : Datadog                                          [jar]
[INFO] Flink : Metrics : Slf4j                                            [jar]
[INFO] Flink : Libraries : CEP Scala                                      [jar]
[INFO] Flink : Libraries : State processor API                            [jar]
[INFO] Flink : Dist                                                       [jar]
[INFO] Flink : Yarn Tests                                                 [jar]
[INFO] Flink : E2E Tests :                                                [pom]
[INFO] Flink : E2E Tests : CLI                                            [jar]
[INFO] Flink : E2E Tests : Parent Child classloading program              [jar]
[INFO] Flink : E2E Tests : Parent Child classloading lib-package          [jar]
[INFO] Flink : E2E Tests : Dataset allround                               [jar]
[INFO] Flink : E2E Tests : Dataset Fine-grained recovery                  [jar]
[INFO] Flink : E2E Tests : Datastream allround                            [jar]
[INFO] Flink : E2E Tests : Batch SQL                                      [jar]
[INFO] Flink : E2E Tests : Stream SQL                                     [jar]
[INFO] Flink : E2E Tests : Distributed cache via blob                     [jar]
[INFO] Flink : E2E Tests : High parallelism iterations                    [jar]
[INFO] Flink : E2E Tests : Stream stateful job upgrade                    [jar]
[INFO] Flink : E2E Tests : Queryable state                                [jar]
[INFO] Flink : E2E Tests : Local recovery and allocation                  [jar]
[INFO] Flink : E2E Tests : Elasticsearch 6                                [jar]
[INFO] Flink : Quickstart :                                               [pom]
[INFO] Flink : Quickstart : Java                              [maven-archetype]
[INFO] Flink : Quickstart : Scala                             [maven-archetype]
[INFO] Flink : E2E Tests : Quickstart                                     [jar]
[INFO] Flink : E2E Tests : Confluent schema registry                      [jar]
[INFO] Flink : E2E Tests : Stream state TTL                               [jar]
[INFO] Flink : E2E Tests : SQL client                                     [jar]
[INFO] Flink : E2E Tests : File sink                                      [jar]
[INFO] Flink : E2E Tests : State evolution                                [jar]
[INFO] Flink : E2E Tests : RocksDB state memory control                   [jar]
[INFO] Flink : E2E Tests : Common                                         [jar]
[INFO] Flink : E2E Tests : Metrics availability                           [jar]
[INFO] Flink : E2E Tests : Metrics reporter prometheus                    [jar]
[INFO] Flink : E2E Tests : Heavy deployment                               [jar]
[INFO] Flink : E2E Tests : Connectors : Google PubSub                     [jar]
[INFO] Flink : E2E Tests : Streaming Kafka base                           [jar]
[INFO] Flink : E2E Tests : Streaming Kafka                                [jar]
[INFO] Flink : E2E Tests : Plugins :                                      [pom]
[INFO] Flink : E2E Tests : Plugins : Dummy fs                             [jar]
[INFO] Flink : E2E Tests : Plugins : Another dummy fs                     [jar]
[INFO] Flink : E2E Tests : TPCH                                           [jar]
[INFO] Flink : E2E Tests : Streaming Kinesis                              [jar]
[INFO] Flink : E2E Tests : Elasticsearch 7                                [jar]
[INFO] Flink : E2E Tests : Common Kafka                                   [jar]
[INFO] Flink : E2E Tests : TPCDS                                          [jar]
[INFO] Flink : E2E Tests : Netty shuffle memory control                   [jar]
[INFO] Flink : E2E Tests : Python                                         [jar]
[INFO] Flink : E2E Tests : HBase                                          [jar]
[INFO] Flink : E2E Tests : Pulsar                                         [jar]
[INFO] Flink : E2E Tests : Avro AWS Glue Schema Registry                  [jar]
[INFO] Flink : E2E Tests : JSON AWS Glue Schema Registry                  [jar]
[INFO] Flink : E2E Tests : Scala                                          [jar]
[INFO] Flink : E2E Tests : Kinesis SQL tests                              [jar]
[INFO] Flink : E2E Tests : Kinesis Firehose SQL tests                     [jar]
[INFO] Flink : E2E Tests : SQL                                            [jar]
[INFO] Flink : State backends : Heap spillable                            [jar]
[INFO] Flink : Table : Test Utils                                         [jar]
[INFO] Flink : Contrib :                                                  [pom]
[INFO] Flink : Contrib : Connectors : Wikiedits                           [jar]
[INFO] Flink : FileSystems : Tests                                        [jar]
[INFO] Flink : Docs                                                       [jar]
[INFO] Flink : Walkthrough :                                              [pom]
[INFO] Flink : Walkthrough : Common                                       [jar]
[INFO] Flink : Walkthrough : Datastream Java                  [maven-archetype]
[INFO] Flink : Walkthrough : Datastream Scala                 [maven-archetype]
[INFO] Flink : Tools : CI : Java                                          [jar]
[INFO] 
[INFO] -------------------< org.apache.flink:flink-parent >--------------------
[INFO] Flink : ............................................ SUCCESS [  4.064 s]
[INFO] Flink : Annotations ................................ SUCCESS [  6.429 s]
[INFO] Flink : Architecture Tests ......................... SUCCESS [  0.333 s]
[INFO] Flink : Architecture Tests : Base .................. SUCCESS [  1.564 s]
[INFO] Flink : Test utils : ............................... SUCCESS [  0.271 s]
[INFO] Flink : Test utils : Junit ......................... SUCCESS [  4.591 s]
[INFO] Flink : Metrics : .................................. SUCCESS [  0.271 s]
[INFO] Flink : Metrics : Core ............................. SUCCESS [  2.729 s]
[INFO] Flink : Core ....................................... SUCCESS [ 58.288 s]
[INFO] Flink : Table : .................................... SUCCESS [  0.241 s]
[INFO] Flink : Table : Common ............................. SUCCESS [ 18.151 s]
[INFO] Flink : Table : API Java ........................... SUCCESS [  8.986 s]
[INFO] Flink : Java ....................................... SUCCESS [ 11.097 s]
[INFO] Flink : Connectors : ............................... SUCCESS [  0.222 s]
[INFO] Flink : Connectors : File Sink Common .............. SUCCESS [  1.121 s]
[INFO] Flink : RPC : ...................................... SUCCESS [  0.306 s]
[INFO] Flink : RPC : Core ................................. SUCCESS [  1.293 s]
[INFO] Flink : RPC : Akka ................................. SUCCESS [ 16.250 s]
[INFO] Flink : RPC : Akka-Loader .......................... SUCCESS [  4.921 s]
[INFO] Flink : Queryable state : .......................... SUCCESS [  0.227 s]
[INFO] Flink : Queryable state : Client Java .............. SUCCESS [  1.497 s]
[INFO] Flink : FileSystems : .............................. SUCCESS [  0.189 s]
[INFO] Flink : FileSystems : Hadoop FS .................... SUCCESS [  6.945 s]
[INFO] Flink : Runtime .................................... SUCCESS [01:38 min]
[INFO] Flink : Streaming Java ............................. SUCCESS [ 30.549 s]
[INFO] Flink : Table : API bridge base .................... SUCCESS [  0.893 s]
[INFO] Flink : Table : API Java bridge .................... SUCCESS [  2.253 s]
[INFO] Flink : Table : Code Splitter ...................... SUCCESS [  4.346 s]
[INFO] Flink : Optimizer .................................. SUCCESS [  8.160 s]
[INFO] Flink : Clients .................................... SUCCESS [  4.826 s]
[INFO] Flink : DSTL ....................................... SUCCESS [  0.247 s]
[INFO] Flink : DSTL : DFS ................................. SUCCESS [  1.941 s]
[INFO] Flink : State backends : ........................... SUCCESS [  0.202 s]
[INFO] Flink : State backends : RocksDB ................... SUCCESS [  4.197 s]
[INFO] Flink : State backends : Changelog ................. SUCCESS [  1.939 s]
[INFO] Flink : Test utils : Utils ......................... SUCCESS [  3.685 s]
[INFO] Flink : Libraries : ................................ SUCCESS [  0.208 s]
[INFO] Flink : Libraries : CEP ............................ SUCCESS [  6.168 s]
[INFO] Flink : Table : Runtime ............................ SUCCESS [ 16.743 s]
[INFO] Flink : Scala ...................................... SUCCESS [01:28 min]
[INFO] Flink : Table : SQL Parser ......................... SUCCESS [ 10.096 s]
[INFO] Flink : Table : SQL Parser Hive .................... SUCCESS [  7.248 s]
[INFO] Flink : Table : API Scala .......................... SUCCESS [ 22.686 s]
[INFO] Flink : Test utils : Connectors .................... SUCCESS [  2.327 s]
[INFO] Flink : Architecture Tests : Test .................. SUCCESS [  1.135 s]
[INFO] Flink : Connectors : Base .......................... SUCCESS [  3.811 s]
[INFO] Flink : Connectors : Files ......................... SUCCESS [  5.432 s]
[INFO] Flink : Examples : ................................. SUCCESS [  0.301 s]
[INFO] Flink : Examples : Batch ........................... SUCCESS [ 22.082 s]
[INFO] Flink : Connectors : Hadoop compatibility .......... SUCCESS [ 12.312 s]
[INFO] Flink : Tests ...................................... SUCCESS [01:11 min]
[INFO] Flink : Streaming Scala ............................ SUCCESS [01:03 min]
[INFO] Flink : Table : API Scala bridge ................... SUCCESS [ 21.680 s]
[INFO] Flink : Table : Planner ............................ SUCCESS [07:30 min]  (about 8 minutes)
[INFO] Flink : Formats : .................................. SUCCESS [  0.258 s]
[INFO] Flink : Format : Common ............................ SUCCESS [  0.341 s]
[INFO] Flink : Formats : Csv .............................. SUCCESS [  2.072 s]
[INFO] Flink : Formats : Hadoop bulk ...................... SUCCESS [  2.403 s]
[INFO] Flink : Formats : Orc .............................. SUCCESS [  3.152 s]
[INFO] Flink : Formats : Orc nohive ....................... SUCCESS [  2.388 s]
[INFO] Flink : Formats : Avro ............................. SUCCESS [  6.456 s]
[INFO] Flink : Formats : Parquet .......................... SUCCESS [ 15.684 s]
[INFO] Flink : Connectors : Hive .......................... SUCCESS [ 34.813 s]
[INFO] Flink : Python ..................................... SUCCESS [01:08 min]
[INFO] Flink : Table : SQL Client ......................... SUCCESS [  4.035 s]
[INFO] Flink : Connectors : AWS Base ...................... SUCCESS [  2.094 s]
[INFO] Flink : Connectors : Cassandra ..................... SUCCESS [  7.046 s]
[INFO] Flink : Formats : Json ............................. SUCCESS [  2.567 s]
[INFO] Flink : Connectors : Elasticsearch base ............ SUCCESS [  3.339 s]
[INFO] Flink : Connectors : Elasticsearch 6 ............... SUCCESS [  2.120 s]
[INFO] Flink : Connectors : Elasticsearch 7 ............... SUCCESS [  2.034 s]
[INFO] Flink : Connectors : Google PubSub ................. SUCCESS [  1.555 s]
[INFO] Flink : Connectors : HBase base .................... SUCCESS [  1.983 s]
[INFO] Flink : Connectors : HBase 1.4 ..................... SUCCESS [  6.194 s]
[INFO] Flink : Connectors : HBase 2.2 ..................... SUCCESS [  5.611 s]
[INFO] Flink : Connectors : JDBC .......................... SUCCESS [  5.551 s]
[INFO] Flink : Metrics : JMX .............................. SUCCESS [  0.784 s]
[INFO] Flink : Formats : Avro confluent registry .......... SUCCESS [  1.002 s]
[INFO] Flink : Connectors : Kafka ......................... SUCCESS [ 15.116 s]
[INFO] Flink : Connectors : Amazon Kinesis Data Streams ... SUCCESS [  3.460 s]
[INFO] Flink : Connectors : Kinesis ....................... SUCCESS [ 39.773 s]
[INFO] Flink : Connectors : Nifi .......................... SUCCESS [  1.233 s]
[INFO] Flink : Connectors : Pulsar ........................ SUCCESS [ 18.759 s]
[INFO] Flink : Connectors : RabbitMQ ...................... SUCCESS [  1.263 s]
[INFO] Flink : Architecture Tests : Production ............ SUCCESS [  2.390 s]
[INFO] Flink : FileSystems : Hadoop FS shaded ............. SUCCESS [  6.516 s]
[INFO] Flink : FileSystems : S3 FS Base ................... SUCCESS [  1.879 s]
[INFO] Flink : FileSystems : S3 FS Hadoop ................. SUCCESS [ 12.980 s]
[INFO] Flink : FileSystems : S3 FS Presto ................. SUCCESS [01:35 min]
[INFO] Flink : FileSystems : OSS FS ....................... SUCCESS [ 27.004 s]
[INFO] Flink : FileSystems : Azure FS Hadoop .............. SUCCESS [ 33.770 s]
[INFO] Flink : FileSystems : Google Storage FS Hadoop ..... SUCCESS [ 40.728 s]
[INFO] Flink : Runtime web ................................ SUCCESS [01:53 min]  (a frequent build-failure spot)
[INFO] Flink : Connectors : HCatalog ...................... SUCCESS [ 25.040 s]
[INFO] Flink : Connectors : Amazon Kinesis Data Firehose .. SUCCESS [  7.678 s]
[INFO] Flink : Connectors : SQL : Elasticsearch 6 ......... SUCCESS [ 18.119 s]
[INFO] Flink : Connectors : SQL : Elasticsearch 7 ......... SUCCESS [ 16.245 s]
[INFO] Flink : Connectors : SQL : HBase 1.4 ............... SUCCESS [ 12.682 s]
[INFO] Flink : Connectors : SQL : HBase 2.2 ............... SUCCESS [ 24.552 s]
[INFO] Flink : Connectors : SQL : Hive 1.2.2 .............. SUCCESS [ 25.486 s]
[INFO] Flink : Connectors : SQL : Hive 2.2.0 .............. SUCCESS [ 23.168 s]
[INFO] Flink : Connectors : SQL : Hive 2.3.6 .............. SUCCESS [ 20.671 s]
[INFO] Flink : Connectors : SQL : Hive 3.1.2 .............. SUCCESS [ 28.815 s]
[INFO] Flink : Connectors : SQL : Kafka ................... SUCCESS [  3.212 s]
[INFO] Flink : Connectors : SQL : Amazon Kinesis Data Streams SUCCESS [  5.737 s]
[INFO] Flink : Connectors : SQL : Amazon Kinesis Data Firehose SUCCESS [  6.406 s]
[INFO] Flink : Connectors : SQL : Kinesis ................. SUCCESS [ 15.596 s]
[INFO] Flink : Connectors : SQL : Pulsar .................. SUCCESS [  8.630 s]
[INFO] Flink : Connectors : SQL : RabbitMQ ................ SUCCESS [  1.694 s]
[INFO] Flink : Formats : Sequence file .................... SUCCESS [  3.445 s]
[INFO] Flink : Formats : Compress ......................... SUCCESS [  3.588 s]
[INFO] Flink : Formats : Avro AWS Glue Schema Registry .... SUCCESS [ 10.882 s]
[INFO] Flink : Formats : JSON AWS Glue Schema Registry .... SUCCESS [  6.255 s]
[INFO] Flink : Formats : SQL Orc .......................... SUCCESS [  1.724 s]
[INFO] Flink : Formats : SQL Parquet ...................... SUCCESS [  3.187 s]
[INFO] Flink : Formats : SQL Avro ......................... SUCCESS [  3.286 s]
[INFO] Flink : Formats : SQL Avro Confluent Registry ...... SUCCESS [  3.646 s]
[INFO] Flink : Examples : Streaming ....................... SUCCESS [ 37.441 s]
[INFO] Flink : Examples : Table ........................... SUCCESS [ 21.271 s]
[INFO] Flink : Examples : Build Helper : .................. SUCCESS [  1.212 s]
[INFO] Flink : Examples : Build Helper : Streaming State machine SUCCESS [  3.890 s]
[INFO] Flink : Examples : Build Helper : Streaming Google PubSub SUCCESS [ 11.414 s]
[INFO] Flink : Container .................................. SUCCESS [  2.133 s]
[INFO] Flink : Queryable state : Runtime .................. SUCCESS [  4.881 s]
[INFO] Flink : Dist-Scala ................................. SUCCESS [  4.323 s]
[INFO] Flink : Kubernetes ................................. SUCCESS [ 24.012 s]
[INFO] Flink : Yarn ....................................... SUCCESS [ 15.309 s]
[INFO] Flink : Table : API Java Uber ...................... SUCCESS [  7.425 s]
[INFO] Flink : Table : Planner Loader Bundle .............. SUCCESS [  8.391 s]
[INFO] Flink : Table : Planner Loader ..................... SUCCESS [  9.858 s]
[INFO] Flink : Libraries : Gelly .......................... SUCCESS [ 18.030 s]
[INFO] Flink : Libraries : Gelly scala .................... SUCCESS [ 43.604 s]
[INFO] Flink : Libraries : Gelly Examples ................. SUCCESS [ 28.281 s]
[INFO] Flink : External resources : ....................... SUCCESS [  0.674 s]
[INFO] Flink : External resources : GPU ................... SUCCESS [  1.818 s]
[INFO] Flink : Metrics : Dropwizard ....................... SUCCESS [  1.815 s]
[INFO] Flink : Metrics : Graphite ......................... SUCCESS [  1.660 s]
[INFO] Flink : Metrics : InfluxDB ......................... SUCCESS [  3.359 s]
[INFO] Flink : Metrics : Prometheus ....................... SUCCESS [  2.531 s]
[INFO] Flink : Metrics : StatsD ........................... SUCCESS [  1.492 s]
[INFO] Flink : Metrics : Datadog .......................... SUCCESS [  2.686 s]
[INFO] Flink : Metrics : Slf4j ............................ SUCCESS [  1.298 s]
[INFO] Flink : Libraries : CEP Scala ...................... SUCCESS [ 29.310 s]
[INFO] Flink : Libraries : State processor API ............ SUCCESS [  7.720 s]
[INFO] Flink : Dist ....................................... SUCCESS [ 34.103 s]
[INFO] Flink : Yarn Tests ................................. SUCCESS [ 16.673 s]
[INFO] Flink : E2E Tests : ................................ SUCCESS [  0.682 s]
[INFO] Flink : E2E Tests : CLI ............................ SUCCESS [  2.041 s]
[INFO] Flink : E2E Tests : Parent Child classloading program SUCCESS [  2.058 s]
[INFO] Flink : E2E Tests : Parent Child classloading lib-package SUCCESS [  1.393 s]
[INFO] Flink : E2E Tests : Dataset allround ............... SUCCESS [  1.368 s]
[INFO] Flink : E2E Tests : Dataset Fine-grained recovery .. SUCCESS [  1.900 s]
[INFO] Flink : E2E Tests : Datastream allround ............ SUCCESS [  5.782 s]
[INFO] Flink : E2E Tests : Batch SQL ...................... SUCCESS [  1.579 s]
[INFO] Flink : E2E Tests : Stream SQL ..................... SUCCESS [  1.804 s]
[INFO] Flink : E2E Tests : Distributed cache via blob ..... SUCCESS [  2.322 s]
[INFO] Flink : E2E Tests : High parallelism iterations .... SUCCESS [ 12.869 s]
[INFO] Flink : E2E Tests : Stream stateful job upgrade .... SUCCESS [  3.227 s]
[INFO] Flink : E2E Tests : Queryable state ................ SUCCESS [  5.336 s]
[INFO] Flink : E2E Tests : Local recovery and allocation .. SUCCESS [  1.871 s]
[INFO] Flink : E2E Tests : Elasticsearch 6 ................ SUCCESS [  8.550 s]
[INFO] Flink : Quickstart : ............................... SUCCESS [  3.347 s]
[INFO] Flink : Quickstart : Java .......................... SUCCESS [  2.686 s]
[INFO] Flink : Quickstart : Scala ......................... SUCCESS [  1.510 s]
[INFO] Flink : E2E Tests : Quickstart ..................... SUCCESS [  2.555 s]
[INFO] Flink : E2E Tests : Confluent schema registry ...... SUCCESS [  5.685 s]
[INFO] Flink : E2E Tests : Stream state TTL ............... SUCCESS [ 11.581 s]
[INFO] Flink : E2E Tests : SQL client ..................... SUCCESS [  3.902 s]
[INFO] Flink : E2E Tests : File sink ...................... SUCCESS [  1.857 s]
[INFO] Flink : E2E Tests : State evolution ................ SUCCESS [  3.031 s]
[INFO] Flink : E2E Tests : RocksDB state memory control ... SUCCESS [  3.005 s]
[INFO] Flink : E2E Tests : Common ......................... SUCCESS [  5.961 s]
[INFO] Flink : E2E Tests : Metrics availability ........... SUCCESS [  1.442 s]
[INFO] Flink : E2E Tests : Metrics reporter prometheus .... SUCCESS [  1.308 s]
[INFO] Flink : E2E Tests : Heavy deployment ............... SUCCESS [ 16.170 s]
[INFO] Flink : E2E Tests : Connectors : Google PubSub ..... SUCCESS [  3.637 s]
[INFO] Flink : E2E Tests : Streaming Kafka base ........... SUCCESS [  2.733 s]
[INFO] Flink : E2E Tests : Streaming Kafka ................ SUCCESS [ 14.646 s]
[INFO] Flink : E2E Tests : Plugins : ...................... SUCCESS [  0.571 s]
[INFO] Flink : E2E Tests : Plugins : Dummy fs ............. SUCCESS [  1.221 s]
[INFO] Flink : E2E Tests : Plugins : Another dummy fs ..... SUCCESS [  1.397 s]
[INFO] Flink : E2E Tests : TPCH ........................... SUCCESS [  4.258 s]
[INFO] Flink : E2E Tests : Streaming Kinesis .............. SUCCESS [ 39.958 s]
[INFO] Flink : E2E Tests : Elasticsearch 7 ................ SUCCESS [  7.996 s]
[INFO] Flink : E2E Tests : Common Kafka ................... SUCCESS [  6.187 s]
[INFO] Flink : E2E Tests : TPCDS .......................... SUCCESS [  6.090 s]
[INFO] Flink : E2E Tests : Netty shuffle memory control ... SUCCESS [  1.916 s]
[INFO] Flink : E2E Tests : Python ......................... SUCCESS [  7.097 s]
[INFO] Flink : E2E Tests : HBase .......................... SUCCESS [  8.455 s]
[INFO] Flink : E2E Tests : Pulsar ......................... SUCCESS [  6.309 s]
[INFO] Flink : E2E Tests : Avro AWS Glue Schema Registry .. SUCCESS [  5.409 s]
[INFO] Flink : E2E Tests : JSON AWS Glue Schema Registry .. SUCCESS [  7.582 s]
[INFO] Flink : E2E Tests : Scala .......................... SUCCESS [ 17.373 s]
[INFO] Flink : E2E Tests : Kinesis SQL tests .............. SUCCESS [  2.019 s]
[INFO] Flink : E2E Tests : Kinesis Firehose SQL tests ..... SUCCESS [  2.118 s]
[INFO] Flink : E2E Tests : SQL ............................ SUCCESS [  3.836 s]
[INFO] Flink : State backends : Heap spillable ............ SUCCESS [  3.277 s]
[INFO] Flink : Table : Test Utils ......................... SUCCESS [  4.389 s]
[INFO] Flink : Contrib : .................................. SUCCESS [  0.957 s]
[INFO] Flink : Contrib : Connectors : Wikiedits ........... SUCCESS [  3.463 s]
[INFO] Flink : FileSystems : Tests ........................ SUCCESS [  3.867 s]
[INFO] Flink : Docs ....................................... SUCCESS [ 12.552 s]
[INFO] Flink : Walkthrough : .............................. SUCCESS [  1.111 s]
[INFO] Flink : Walkthrough : Common ....................... SUCCESS [  5.820 s]
[INFO] Flink : Walkthrough : Datastream Java .............. SUCCESS [  1.744 s]
[INFO] Flink : Walkthrough : Datastream Scala ............. SUCCESS [  1.685 s]
[INFO] Flink : Tools : CI : Java .......................... SUCCESS [  2.370 s]


[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Flink : 1.15.3:
[INFO] 
[INFO] Flink : ............................................ SUCCESS [  7.995 s]
[INFO] Flink : Annotations ................................ SUCCESS [  8.043 s]
[INFO] Flink : Architecture Tests ......................... SUCCESS [  0.385 s]
[INFO] Flink : Architecture Tests : Base .................. SUCCESS [  1.766 s]
[INFO] Flink : Test utils : ............................... SUCCESS [  0.265 s]
[INFO] Flink : Test utils : Junit ......................... SUCCESS [  7.279 s]
[INFO] Flink : Metrics : .................................. SUCCESS [  0.262 s]
[INFO] Flink : Metrics : Core ............................. SUCCESS [  3.096 s]
[INFO] Flink : Core ....................................... SUCCESS [ 59.517 s]
[INFO] Flink : Table : .................................... SUCCESS [  0.241 s]
[INFO] Flink : Table : Common ............................. SUCCESS [ 18.535 s]
[INFO] Flink : Table : API Java ........................... SUCCESS [  8.766 s]
[INFO] Flink : Java ....................................... SUCCESS [ 10.929 s]
[INFO] Flink : Connectors : ............................... SUCCESS [  0.243 s]
[INFO] Flink : Connectors : File Sink Common .............. SUCCESS [  1.152 s]
[INFO] Flink : RPC : ...................................... SUCCESS [  0.322 s]
[INFO] Flink : RPC : Core ................................. SUCCESS [  1.329 s]
[INFO] Flink : RPC : Akka ................................. SUCCESS [ 17.470 s]
[INFO] Flink : RPC : Akka-Loader .......................... SUCCESS [  5.843 s]
[INFO] Flink : Queryable state : .......................... SUCCESS [  0.191 s]
[INFO] Flink : Queryable state : Client Java .............. SUCCESS [  1.487 s]
[INFO] Flink : FileSystems : .............................. SUCCESS [  0.273 s]
[INFO] Flink : FileSystems : Hadoop FS .................... SUCCESS [  7.420 s]
[INFO] Flink : Runtime .................................... SUCCESS [01:57 min]
[INFO] Flink : Streaming Java ............................. SUCCESS [ 33.004 s]
[INFO] Flink : Table : API bridge base .................... SUCCESS [  1.097 s]
[INFO] Flink : Table : API Java bridge .................... SUCCESS [  2.483 s]
[INFO] Flink : Table : Code Splitter ...................... SUCCESS [  4.870 s]
[INFO] Flink : Optimizer .................................. SUCCESS [ 10.957 s]
[INFO] Flink : Clients .................................... SUCCESS [  7.801 s]
[INFO] Flink : DSTL ....................................... SUCCESS [  0.325 s]
[INFO] Flink : DSTL : DFS ................................. SUCCESS [  2.526 s]
[INFO] Flink : State backends : ........................... SUCCESS [  0.458 s]
[INFO] Flink : State backends : RocksDB ................... SUCCESS [  7.054 s]
[INFO] Flink : State backends : Changelog ................. SUCCESS [  2.164 s]
[INFO] Flink : Test utils : Utils ......................... SUCCESS [  4.810 s]
[INFO] Flink : Libraries : ................................ SUCCESS [  0.210 s]
[INFO] Flink : Libraries : CEP ............................ SUCCESS [  5.664 s]
[INFO] Flink : Table : Runtime ............................ SUCCESS [ 16.529 s]
[INFO] Flink : Scala ...................................... SUCCESS [01:32 min]
[INFO] Flink : Table : SQL Parser ......................... SUCCESS [  9.768 s]
[INFO] Flink : Table : SQL Parser Hive .................... SUCCESS [  6.073 s]
[INFO] Flink : Table : API Scala .......................... SUCCESS [ 22.689 s]
[INFO] Flink : Test utils : Connectors .................... SUCCESS [  2.072 s]
[INFO] Flink : Architecture Tests : Test .................. SUCCESS [  1.176 s]
[INFO] Flink : Connectors : Base .......................... SUCCESS [  3.125 s]
[INFO] Flink : Connectors : Files ......................... SUCCESS [  5.213 s]
[INFO] Flink : Examples : ................................. SUCCESS [  0.302 s]
[INFO] Flink : Examples : Batch ........................... SUCCESS [ 21.655 s]
[INFO] Flink : Connectors : Hadoop compatibility .......... SUCCESS [ 11.743 s]
[INFO] Flink : Tests ...................................... SUCCESS [01:11 min]
[INFO] Flink : Streaming Scala ............................ SUCCESS [01:00 min]
[INFO] Flink : Table : API Scala bridge ................... SUCCESS [ 19.945 s]
[INFO] Flink : Table : Planner ............................ SUCCESS [05:36 min]
[INFO] Flink : Formats : .................................. SUCCESS [  0.134 s]
[INFO] Flink : Format : Common ............................ SUCCESS [  0.343 s]
[INFO] Flink : Formats : Csv .............................. SUCCESS [  1.905 s]
[INFO] Flink : Formats : Hadoop bulk ...................... SUCCESS [  2.757 s]
[INFO] Flink : Formats : Orc .............................. SUCCESS [  2.996 s]
[INFO] Flink : Formats : Orc nohive ....................... SUCCESS [  2.281 s]
[INFO] Flink : Formats : Avro ............................. SUCCESS [  6.245 s]
[INFO] Flink : Formats : Parquet .......................... SUCCESS [ 16.343 s]
[INFO] Flink : Connectors : Hive .......................... SUCCESS [ 33.385 s]
[INFO] Flink : Python ..................................... SUCCESS [01:00 min]
[INFO] Flink : Table : SQL Client ......................... SUCCESS [  4.377 s]
[INFO] Flink : Connectors : AWS Base ...................... SUCCESS [  1.983 s]
[INFO] Flink : Connectors : Cassandra ..................... SUCCESS [  6.715 s]
[INFO] Flink : Formats : Json ............................. SUCCESS [  2.139 s]
[INFO] Flink : Connectors : Elasticsearch base ............ SUCCESS [  3.682 s]
[INFO] Flink : Connectors : Elasticsearch 6 ............... SUCCESS [  3.211 s]
[INFO] Flink : Connectors : Elasticsearch 7 ............... SUCCESS [  1.792 s]
[INFO] Flink : Connectors : Google PubSub ................. SUCCESS [  1.583 s]
[INFO] Flink : Connectors : HBase base .................... SUCCESS [  2.133 s]
[INFO] Flink : Connectors : HBase 1.4 ..................... SUCCESS [  6.742 s]
[INFO] Flink : Connectors : HBase 2.2 ..................... SUCCESS [  5.883 s]
[INFO] Flink : Connectors : JDBC .......................... SUCCESS [  4.605 s]
[INFO] Flink : Metrics : JMX .............................. SUCCESS [  0.625 s]
[INFO] Flink : Formats : Avro confluent registry .......... SUCCESS [  1.078 s]
[INFO] Flink : Connectors : Kafka ......................... SUCCESS [  7.781 s]
[INFO] Flink : Connectors : Amazon Kinesis Data Streams ... SUCCESS [  1.645 s]
[INFO] Flink : Connectors : Kinesis ....................... SUCCESS [ 28.342 s]
[INFO] Flink : Connectors : Nifi .......................... SUCCESS [  1.144 s]
[INFO] Flink : Connectors : Pulsar ........................ SUCCESS [ 19.607 s]
[INFO] Flink : Connectors : RabbitMQ ...................... SUCCESS [  1.307 s]
[INFO] Flink : Architecture Tests : Production ............ SUCCESS [  2.979 s]
[INFO] Flink : FileSystems : Hadoop FS shaded ............. SUCCESS [  6.329 s]
[INFO] Flink : FileSystems : S3 FS Base ................... SUCCESS [  2.042 s]
[INFO] Flink : FileSystems : S3 FS Hadoop ................. SUCCESS [ 13.426 s]
[INFO] Flink : FileSystems : S3 FS Presto ................. SUCCESS [01:31 min]
[INFO] Flink : FileSystems : OSS FS ....................... SUCCESS [ 21.164 s]
[INFO] Flink : FileSystems : Azure FS Hadoop .............. SUCCESS [ 30.780 s]
[INFO] Flink : FileSystems : Google Storage FS Hadoop ..... SUCCESS [ 33.817 s]
[INFO] Flink : Runtime web ................................ FAILURE [  7.514 s]
[INFO] Flink : Connectors : HCatalog ...................... SKIPPED
[INFO] Flink : Connectors : Amazon Kinesis Data Firehose .. SKIPPED
[INFO] Flink : Connectors : SQL : Elasticsearch 6 ......... SKIPPED
[INFO] Flink : Connectors : SQL : Elasticsearch 7 ......... SKIPPED
[INFO] Flink : Connectors : SQL : HBase 1.4 ............... SKIPPED
[INFO] Flink : Connectors : SQL : HBase 2.2 ............... SKIPPED
[INFO] Flink : Connectors : SQL : Hive 1.2.2 .............. SKIPPED
[INFO] Flink : Connectors : SQL : Hive 2.2.0 .............. SKIPPED
[INFO] Flink : Connectors : SQL : Hive 2.3.6 .............. SKIPPED
[INFO] Flink : Connectors : SQL : Hive 3.1.2 .............. SKIPPED
[INFO] Flink : Connectors : SQL : Kafka ................... SKIPPED
[INFO] Flink : Connectors : SQL : Amazon Kinesis Data Streams SKIPPED
[INFO] Flink : Connectors : SQL : Amazon Kinesis Data Firehose SKIPPED
[INFO] Flink : Connectors : SQL : Kinesis ................. SKIPPED
[INFO] Flink : Connectors : SQL : Pulsar .................. SKIPPED
[INFO] Flink : Connectors : SQL : RabbitMQ ................ SKIPPED
[INFO] Flink : Formats : Sequence file .................... SKIPPED
[INFO] Flink : Formats : Compress ......................... SKIPPED
[INFO] Flink : Formats : Avro AWS Glue Schema Registry .... SKIPPED
[INFO] Flink : Formats : JSON AWS Glue Schema Registry .... SKIPPED
[INFO] Flink : Formats : SQL Orc .......................... SKIPPED
[INFO] Flink : Formats : SQL Parquet ...................... SKIPPED
[INFO] Flink : Formats : SQL Avro ......................... SKIPPED
[INFO] Flink : Formats : SQL Avro Confluent Registry ...... SKIPPED
[INFO] Flink : Examples : Streaming ....................... SKIPPED
[INFO] Flink : Examples : Table ........................... SKIPPED
[INFO] Flink : Examples : Build Helper : .................. SKIPPED
[INFO] Flink : Examples : Build Helper : Streaming State machine SKIPPED
[INFO] Flink : Examples : Build Helper : Streaming Google PubSub SKIPPED
[INFO] Flink : Container .................................. SKIPPED
[INFO] Flink : Queryable state : Runtime .................. SKIPPED
[INFO] Flink : Dist-Scala ................................. SKIPPED
[INFO] Flink : Kubernetes ................................. SKIPPED
[INFO] Flink : Yarn ....................................... SKIPPED
[INFO] Flink : Table : API Java Uber ...................... SKIPPED
[INFO] Flink : Table : Planner Loader Bundle .............. SKIPPED
[INFO] Flink : Table : Planner Loader ..................... SKIPPED
[INFO] Flink : Libraries : Gelly .......................... SKIPPED
[INFO] Flink : Libraries : Gelly scala .................... SKIPPED
[INFO] Flink : Libraries : Gelly Examples ................. SKIPPED
[INFO] Flink : External resources : ....................... SKIPPED
[INFO] Flink : External resources : GPU ................... SKIPPED
[INFO] Flink : Metrics : Dropwizard ....................... SKIPPED
[INFO] Flink : Metrics : Graphite ......................... SKIPPED
[INFO] Flink : Metrics : InfluxDB ......................... SKIPPED
[INFO] Flink : Metrics : Prometheus ....................... SKIPPED
[INFO] Flink : Metrics : StatsD ........................... SKIPPED
[INFO] Flink : Metrics : Datadog .......................... SKIPPED
[INFO] Flink : Metrics : Slf4j ............................ SKIPPED
[INFO] Flink : Libraries : CEP Scala ...................... SKIPPED
[INFO] Flink : Libraries : State processor API ............ SKIPPED
[INFO] Flink : Dist ....................................... SKIPPED
[INFO] Flink : Yarn Tests ................................. SKIPPED
[INFO] Flink : E2E Tests : ................................ SKIPPED
[INFO] Flink : E2E Tests : CLI ............................ SKIPPED
[INFO] Flink : E2E Tests : Parent Child classloading program SKIPPED
[INFO] Flink : E2E Tests : Parent Child classloading lib-package SKIPPED
[INFO] Flink : E2E Tests : Dataset allround ............... SKIPPED
[INFO] Flink : E2E Tests : Dataset Fine-grained recovery .. SKIPPED
[INFO] Flink : E2E Tests : Datastream allround ............ SKIPPED
[INFO] Flink : E2E Tests : Batch SQL ...................... SKIPPED
[INFO] Flink : E2E Tests : Stream SQL ..................... SKIPPED
[INFO] Flink : E2E Tests : Distributed cache via blob ..... SKIPPED
[INFO] Flink : E2E Tests : High parallelism iterations .... SKIPPED
[INFO] Flink : E2E Tests : Stream stateful job upgrade .... SKIPPED
[INFO] Flink : E2E Tests : Queryable state ................ SKIPPED
[INFO] Flink : E2E Tests : Local recovery and allocation .. SKIPPED
[INFO] Flink : E2E Tests : Elasticsearch 6 ................ SKIPPED
[INFO] Flink : Quickstart : ............................... SKIPPED
[INFO] Flink : Quickstart : Java .......................... SKIPPED
[INFO] Flink : Quickstart : Scala ......................... SKIPPED
[INFO] Flink : E2E Tests : Quickstart ..................... SKIPPED
[INFO] Flink : E2E Tests : Confluent schema registry ...... SKIPPED
[INFO] Flink : E2E Tests : Stream state TTL ............... SKIPPED
[INFO] Flink : E2E Tests : SQL client ..................... SKIPPED
[INFO] Flink : E2E Tests : File sink ...................... SKIPPED
[INFO] Flink : E2E Tests : State evolution ................ SKIPPED
[INFO] Flink : E2E Tests : RocksDB state memory control ... SKIPPED
[INFO] Flink : E2E Tests : Common ......................... SKIPPED
[INFO] Flink : E2E Tests : Metrics availability ........... SKIPPED
[INFO] Flink : E2E Tests : Metrics reporter prometheus .... SKIPPED
[INFO] Flink : E2E Tests : Heavy deployment ............... SKIPPED
[INFO] Flink : E2E Tests : Connectors : Google PubSub ..... SKIPPED
[INFO] Flink : E2E Tests : Streaming Kafka base ........... SKIPPED
[INFO] Flink : E2E Tests : Streaming Kafka ................ SKIPPED
[INFO] Flink : E2E Tests : Plugins : ...................... SKIPPED
[INFO] Flink : E2E Tests : Plugins : Dummy fs ............. SKIPPED
[INFO] Flink : E2E Tests : Plugins : Another dummy fs ..... SKIPPED
[INFO] Flink : E2E Tests : TPCH ........................... SKIPPED
[INFO] Flink : E2E Tests : Streaming Kinesis .............. SKIPPED
[INFO] Flink : E2E Tests : Elasticsearch 7 ................ SKIPPED
[INFO] Flink : E2E Tests : Common Kafka ................... SKIPPED
[INFO] Flink : E2E Tests : TPCDS .......................... SKIPPED
[INFO] Flink : E2E Tests : Netty shuffle memory control ... SKIPPED
[INFO] Flink : E2E Tests : Python ......................... SKIPPED
[INFO] Flink : E2E Tests : HBase .......................... SKIPPED
[INFO] Flink : E2E Tests : Pulsar ......................... SKIPPED
[INFO] Flink : E2E Tests : Avro AWS Glue Schema Registry .. SKIPPED
[INFO] Flink : E2E Tests : JSON AWS Glue Schema Registry .. SKIPPED
[INFO] Flink : E2E Tests : Scala .......................... SKIPPED
[INFO] Flink : E2E Tests : Kinesis SQL tests .............. SKIPPED
[INFO] Flink : E2E Tests : Kinesis Firehose SQL tests ..... SKIPPED
[INFO] Flink : E2E Tests : SQL ............................ SKIPPED
[INFO] Flink : State backends : Heap spillable ............ SKIPPED
[INFO] Flink : Table : Test Utils ......................... SKIPPED
[INFO] Flink : Contrib : .................................. SKIPPED
[INFO] Flink : Contrib : Connectors : Wikiedits ........... SKIPPED
[INFO] Flink : FileSystems : Tests ........................ SKIPPED
[INFO] Flink : Docs ....................................... SKIPPED
[INFO] Flink : Walkthrough : .............................. SKIPPED
[INFO] Flink : Walkthrough : Common ....................... SKIPPED
[INFO] Flink : Walkthrough : Datastream Java .............. SKIPPED
[INFO] Flink : Walkthrough : Datastream Scala ............. SKIPPED
[INFO] Flink : Tools : CI : Java .......................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  24:55 min
[INFO] Finished at: 2024-07-23T20:20:59+08:00
[INFO] ------------------------------------------------------------------------
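The run above stopped at "Flink : Runtime web" and skipped everything after it. Once the underlying cause is fixed (for this module it is typically an npm/network problem while building the web UI), the reactor can be resumed from the failed module with the `mvn <args> -rf :<module>` pattern mentioned earlier, instead of rebuilding the ~90 modules that already succeeded. A minimal sketch; `:flink-runtime-web` is an assumed artifactId for "Flink : Runtime web", so confirm it against your own reactor output:

```shell
# Resume the Maven reactor from the failed module so the modules
# that already succeeded are not rebuilt.
# ":flink-runtime-web" is an assumption -- verify the artifactId
# in your own log before running.
resume_cmd='mvn package -DskipTests -Drat.skip=true -rf :flink-runtime-web'
echo "$resume_cmd"   # shown here instead of executed
```

The resumed summary that follows ("Reactor Summary for Flink : Runtime web 1.15.3") matches this pattern: the reactor restarts at Runtime web and builds only the remaining modules.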


--------------------------------------------------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Flink : Runtime web 1.15.3:
[INFO] 
[INFO] Flink : Runtime web ................................ SUCCESS [03:43 min]   (the 93rd sub-project)
[INFO] Flink : Connectors : HCatalog ...................... SUCCESS [ 27.049 s]
[INFO] Flink : Connectors : Amazon Kinesis Data Firehose .. SUCCESS [  7.150 s]
[INFO] Flink : Connectors : SQL : Elasticsearch 6 ......... SUCCESS [ 21.324 s]
[INFO] Flink : Connectors : SQL : Elasticsearch 7 ......... SUCCESS [ 15.506 s]
[INFO] Flink : Connectors : SQL : HBase 1.4 ............... SUCCESS [ 13.099 s]
[INFO] Flink : Connectors : SQL : HBase 2.2 ............... SUCCESS [ 22.985 s]
[INFO] Flink : Connectors : SQL : Hive 1.2.2 .............. SUCCESS [ 24.233 s]
[INFO] Flink : Connectors : SQL : Hive 2.2.0 .............. SUCCESS [ 21.787 s]
[INFO] Flink : Connectors : SQL : Hive 2.3.6 .............. SUCCESS [ 18.690 s]
[INFO] Flink : Connectors : SQL : Hive 3.1.2 .............. SUCCESS [ 28.055 s]
[INFO] Flink : Connectors : SQL : Kafka ................... SUCCESS [  2.946 s]
[INFO] Flink : Connectors : SQL : Amazon Kinesis Data Streams SUCCESS [  6.017 s]
[INFO] Flink : Connectors : SQL : Amazon Kinesis Data Firehose SUCCESS [  5.719 s]
[INFO] Flink : Connectors : SQL : Kinesis ................. SUCCESS [ 15.532 s]
[INFO] Flink : Connectors : SQL : Pulsar .................. SUCCESS [  8.219 s]
[INFO] Flink : Connectors : SQL : RabbitMQ ................ SUCCESS [  1.337 s]
[INFO] Flink : Formats : Sequence file .................... SUCCESS [  2.858 s]
[INFO] Flink : Formats : Compress ......................... SUCCESS [  2.911 s]
[INFO] Flink : Formats : Avro AWS Glue Schema Registry .... SUCCESS [  7.371 s]
[INFO] Flink : Formats : JSON AWS Glue Schema Registry .... SUCCESS [  6.201 s]
[INFO] Flink : Formats : SQL Orc .......................... SUCCESS [  1.209 s]
[INFO] Flink : Formats : SQL Parquet ...................... SUCCESS [  2.485 s]
[INFO] Flink : Formats : SQL Avro ......................... SUCCESS [  2.524 s]
[INFO] Flink : Formats : SQL Avro Confluent Registry ...... SUCCESS [  2.992 s]
[INFO] Flink : Examples : Streaming ....................... SUCCESS [ 33.861 s]
[INFO] Flink : Examples : Table ........................... SUCCESS [ 19.095 s]
[INFO] Flink : Examples : Build Helper : .................. SUCCESS [  1.024 s]
[INFO] Flink : Examples : Build Helper : Streaming State machine SUCCESS [  2.983 s]
[INFO] Flink : Examples : Build Helper : Streaming Google PubSub SUCCESS [  9.270 s]
[INFO] Flink : Container .................................. SUCCESS [  1.725 s]
[INFO] Flink : Queryable state : Runtime .................. SUCCESS [  4.084 s]
[INFO] Flink : Dist-Scala ................................. SUCCESS [  3.771 s]
[INFO] Flink : Kubernetes ................................. SUCCESS [ 22.354 s]
[INFO] Flink : Yarn ....................................... SUCCESS [ 12.413 s]
[INFO] Flink : Table : API Java Uber ...................... SUCCESS [  6.933 s]
[INFO] Flink : Table : Planner Loader Bundle .............. SUCCESS [  7.476 s]
[INFO] Flink : Table : Planner Loader ..................... SUCCESS [  7.382 s]
[INFO] Flink : Libraries : Gelly .......................... SUCCESS [ 16.101 s]
[INFO] Flink : Libraries : Gelly scala .................... SUCCESS [ 42.011 s]
[INFO] Flink : Libraries : Gelly Examples ................. SUCCESS [ 26.385 s]
[INFO] Flink : External resources : ....................... SUCCESS [  0.745 s]
[INFO] Flink : External resources : GPU ................... SUCCESS [  1.493 s]
[INFO] Flink : Metrics : Dropwizard ....................... SUCCESS [  1.778 s]
[INFO] Flink : Metrics : Graphite ......................... SUCCESS [  1.301 s]
[INFO] Flink : Metrics : InfluxDB ......................... SUCCESS [  3.347 s]
[INFO] Flink : Metrics : Prometheus ....................... SUCCESS [  2.463 s]
[INFO] Flink : Metrics : StatsD ........................... SUCCESS [  1.234 s]
[INFO] Flink : Metrics : Datadog .......................... SUCCESS [  1.967 s]
[INFO] Flink : Metrics : Slf4j ............................ SUCCESS [  1.424 s]
[INFO] Flink : Libraries : CEP Scala ...................... SUCCESS [ 28.689 s]
[INFO] Flink : Libraries : State processor API ............ SUCCESS [  8.257 s]
[INFO] Flink : Dist ....................................... SUCCESS [ 29.859 s]
[INFO] Flink : Yarn Tests ................................. SUCCESS [ 13.865 s]
[INFO] Flink : E2E Tests : ................................ SUCCESS [  0.558 s]
[INFO] Flink : E2E Tests : CLI ............................ SUCCESS [  1.343 s]
[INFO] Flink : E2E Tests : Parent Child classloading program SUCCESS [  1.706 s]
[INFO] Flink : E2E Tests : Parent Child classloading lib-package SUCCESS [  1.407 s]
[INFO] Flink : E2E Tests : Dataset allround ............... SUCCESS [  1.325 s]
[INFO] Flink : E2E Tests : Dataset Fine-grained recovery .. SUCCESS [  1.579 s]
[INFO] Flink : E2E Tests : Datastream allround ............ SUCCESS [  4.573 s]
[INFO] Flink : E2E Tests : Batch SQL ...................... SUCCESS [  1.514 s]
[INFO] Flink : E2E Tests : Stream SQL ..................... SUCCESS [  1.850 s]
[INFO] Flink : E2E Tests : Distributed cache via blob ..... SUCCESS [  1.870 s]
[INFO] Flink : E2E Tests : High parallelism iterations .... SUCCESS [ 12.986 s]
[INFO] Flink : E2E Tests : Stream stateful job upgrade .... SUCCESS [  3.228 s]
[INFO] Flink : E2E Tests : Queryable state ................ SUCCESS [  5.598 s]
[INFO] Flink : E2E Tests : Local recovery and allocation .. SUCCESS [  1.794 s]
[INFO] Flink : E2E Tests : Elasticsearch 6 ................ SUCCESS [  7.156 s]
[INFO] Flink : Quickstart : ............................... SUCCESS [  2.048 s]
[INFO] Flink : Quickstart : Java .......................... SUCCESS [  1.930 s]
[INFO] Flink : Quickstart : Scala ......................... SUCCESS [  1.234 s]
[INFO] Flink : E2E Tests : Quickstart ..................... SUCCESS [  3.613 s]
[INFO] Flink : E2E Tests : Confluent schema registry ...... SUCCESS [  6.589 s]
[INFO] Flink : E2E Tests : Stream state TTL ............... SUCCESS [ 11.081 s]
[INFO] Flink : E2E Tests : SQL client ..................... SUCCESS [  3.873 s]
[INFO] Flink : E2E Tests : File sink ...................... SUCCESS [  1.760 s]
[INFO] Flink : E2E Tests : State evolution ................ SUCCESS [  2.769 s]
[INFO] Flink : E2E Tests : RocksDB state memory control ... SUCCESS [  3.049 s]
[INFO] Flink : E2E Tests : Common ......................... SUCCESS [  5.366 s]
[INFO] Flink : E2E Tests : Metrics availability ........... SUCCESS [  1.496 s]
[INFO] Flink : E2E Tests : Metrics reporter prometheus .... SUCCESS [  1.555 s]
[INFO] Flink : E2E Tests : Heavy deployment ............... SUCCESS [ 15.932 s]
[INFO] Flink : E2E Tests : Connectors : Google PubSub ..... SUCCESS [  2.823 s]
[INFO] Flink : E2E Tests : Streaming Kafka base ........... SUCCESS [  2.473 s]
[INFO] Flink : E2E Tests : Streaming Kafka ................ SUCCESS [ 12.795 s]
[INFO] Flink : E2E Tests : Plugins : ...................... SUCCESS [  0.654 s]
[INFO] Flink : E2E Tests : Plugins : Dummy fs ............. SUCCESS [  1.148 s]
[INFO] Flink : E2E Tests : Plugins : Another dummy fs ..... SUCCESS [  1.301 s]
[INFO] Flink : E2E Tests : TPCH ........................... SUCCESS [  4.617 s]
[INFO] Flink : E2E Tests : Streaming Kinesis .............. SUCCESS [ 35.494 s]
[INFO] Flink : E2E Tests : Elasticsearch 7 ................ SUCCESS [  8.015 s]
[INFO] Flink : E2E Tests : Common Kafka ................... SUCCESS [  5.963 s]
[INFO] Flink : E2E Tests : TPCDS .......................... SUCCESS [  6.296 s]
[INFO] Flink : E2E Tests : Netty shuffle memory control ... SUCCESS [  1.781 s]
[INFO] Flink : E2E Tests : Python ......................... SUCCESS [  6.986 s]
[INFO] Flink : E2E Tests : HBase .......................... SUCCESS [  6.613 s]
[INFO] Flink : E2E Tests : Pulsar ......................... SUCCESS [  5.003 s]
[INFO] Flink : E2E Tests : Avro AWS Glue Schema Registry .. SUCCESS [  4.342 s]
[INFO] Flink : E2E Tests : JSON AWS Glue Schema Registry .. SUCCESS [  6.271 s]
[INFO] Flink : E2E Tests : Scala .......................... SUCCESS [ 14.172 s]
[INFO] Flink : E2E Tests : Kinesis SQL tests .............. SUCCESS [  2.021 s]
[INFO] Flink : E2E Tests : Kinesis Firehose SQL tests ..... SUCCESS [  2.207 s]
[INFO] Flink : E2E Tests : SQL ............................ SUCCESS [  3.718 s]
[INFO] Flink : State backends : Heap spillable ............ SUCCESS [  3.408 s]
[INFO] Flink : Table : Test Utils ......................... SUCCESS [  3.781 s]
[INFO] Flink : Contrib : .................................. SUCCESS [  0.773 s]
[INFO] Flink : Contrib : Connectors : Wikiedits ........... SUCCESS [  2.721 s]
[INFO] Flink : FileSystems : Tests ........................ SUCCESS [  3.393 s]
[INFO] Flink : Docs ....................................... SUCCESS [ 10.543 s]
[INFO] Flink : Walkthrough : .............................. SUCCESS [  0.871 s]
[INFO] Flink : Walkthrough : Common ....................... SUCCESS [  4.019 s]
[INFO] Flink : Walkthrough : Datastream Java .............. SUCCESS [  1.421 s]
[INFO] Flink : Walkthrough : Datastream Scala ............. SUCCESS [  1.426 s]
[INFO] Flink : Tools : CI : Java .......................... SUCCESS [  1.850 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  18:12 min
[INFO] Finished at: 2024-07-23T21:18:32+08:00
[INFO] ------------------------------------------------------------------------

Building in a CentOS 7 Docker container

docker pull docker.fxxk.dedyn.io/bigtop/slaves:3.2.0-centos-7
# Map the local source tree G:\OpenSource\Data\platform\bigtop to /ws in the container
# Map the local Maven repository directory to /root so that Maven, Gradle, and Ant can reuse its cache
docker run -d -it -p 8000:8000 --network ambari -v G:\OpenSource\Data\platform\bigtop:/ws -v F:\docker\data\bigtop:/root --workdir /ws --name repo bigtop/slaves:3.2.0-centos-7


docker pull docker.fxxk.dedyn.io/bigtop/slaves:3.2.0-centos-7
docker pull docker.fxxk.dedyn.io/bigtop/slaves:trunk-centos-7
docker pull docker.fxxk.dedyn.io/bigtop/puppet:trunk-centos-7
docker pull mariadb:10.2
docker pull centos:7  
docker pull mysql:5.7
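
Once the images are pulled and the container is running, the typical workflow (a sketch based on the commands above; the container name `repo` and the mount points `/ws` and `/root` come from the `docker run` example) is to attach to the container and work from the mapped source tree:

```shell
# Attach to the running build container started above (named "repo")
docker exec -it repo bash

# Inside the container, /ws is the bigtop checkout mapped from the host,
# and /root carries the shared Maven/Gradle/Ant caches between runs
cd /ws
ls bigtop.bom packages.gradle
```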

bigtop source code analysis

packages.gradle

  • Task list
    • packages-help: All package build related tasks information
    • bom-json: List the components of the stack in json format
    • all-components: List the components of the stack
    • Per-component tasks (e.g. zookeeper)
      • ${component}-download: Download $component artifacts
      • ${component}-tar: Preparing a tarball for $component artifacts
      • $component-deb: Building DEB for $component artifacts
      • $component-sdeb: Building SDEB for $component artifacts
      • $component-rpm: Building RPM for $component artifacts
      • $component-srpm: Building SRPM for $component artifacts
      • $component-pkg: Invoking a native binary packaging component $ptype
      • $component-spkg: Invoking a native binary packaging component s$ptype
      • $component-pkg-ind: Invoking a native binary packaging for $component in Docker
      • $component-version: Show version of $component component
      • ${component}_vardefines: variable definitions
      • $component-info: Info about $component component build
      • $component-relnotes: Preparing release notes for $component. Not yet implemented!!!
      • $component-clean: Removing $component component build and output directories
      • $component-help: List of available tasks for $component
    • Tasks covering all components
      • srpm: Build all SRPM packages for the stack components
      • rpm: Build all RPM packages for the stack
      • sdeb: Build all SDEB packages for the stack components
      • deb: Build all DEB packages for the stack components
      • pkgs: Build all native packages for the stack components
      • pkgs-ind: Build all native packages for the stack components inside Docker
      • allclean: Removing $BUILD_DIR, $OUTPUT_DIR, and $DIST_DIR; cleans all components' build and output directories
      • realclean: Removing $DL_DIR
    • apt: Creating APT repository
    • yum: Creating YUM repository
    • repo: Invoking a native repository target $
    • repo-ind: Invoking a native repository in Docker
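
The tasks above are invoked through the Gradle wrapper at the top of the bigtop checkout. As a sketch of a typical sequence (using `zookeeper` as the example component, matching the task list above):

```shell
# List every component defined in bigtop.bom
./gradlew all-components

# Build the SRPM and RPM for a single component
./gradlew zookeeper-srpm
./gradlew zookeeper-rpm

# Wipe the component's build/ and output/ directories, e.g. when changes
# under bigtop-packages do not seem to take effect
./gradlew zookeeper-clean

# Assemble a local YUM repository from the packages under output/
./gradlew yum
```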

Final notes:

  • This was a first attempt at compiling ambari+bigtop, so some parts may be rough and some problems may have gone unrecorded; additions and corrections are welcome.
  • If you run into problems or have additions, leave a reply in the comments, or join one of the following QQ groups:
  • QQ group 1: 722014912 (created by an expert);
  • QQ group 2: 160074759 (created by me); QR codes for the companion WeChat group are posted there periodically for discussion.
From: https://www.cnblogs.com/piaolingzxh/p/18332934