
Flume Hands-On Examples


1. Official example: monitoring port data

(1) Requirement: Flume monitors local port 44444; we then use the telnet tool to send messages to that port, and Flume displays the received data on the console in real time.

(2) Requirements analysis:

(3) Implementation steps:

① Install the telnet tool

Create a flume-telnet folder under /opt/software:

[luomk@hadoop102 software]$ mkdir flume-telnet

Then copy the RPM packages (xinetd-2.3.14-40.el6.x86_64.rpm, telnet-0.17-48.el6.x86_64.rpm, and telnet-server-0.17-48.el6.x86_64.rpm) into /opt/software/flume-telnet and install them:

[luomk@hadoop102 software]$ sudo rpm -ivh xinetd-2.3.14-40.el6.x86_64.rpm

[luomk@hadoop102 software]$ sudo rpm -ivh telnet-0.17-48.el6.x86_64.rpm

[luomk@hadoop102 software]$ sudo rpm -ivh telnet-server-0.17-48.el6.x86_64.rpm

② Check whether port 44444 is already in use

[luomk@hadoop102 flume-telnet]$ sudo netstat -tunlp | grep 44444
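If the command prints a line, another process already owns the port and must be stopped before the agent can bind to it. A minimal sketch, where 3021 is a placeholder PID; read the real one from the PID/Program name column of the netstat output:

[luomk@hadoop102 flume-telnet]$ sudo kill 3021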

③ Create the Flume agent configuration file flume-telnet-logger.conf

Create a job folder under the flume directory and enter it.

[luomk@hadoop102 flume]$ mkdir job

[luomk@hadoop102 flume]$ cd job/

Create the agent configuration file flume-telnet-logger.conf inside the job folder.

[luomk@hadoop102 job]$ touch flume-telnet-logger.conf

Edit flume-telnet-logger.conf and add the following content:

[luomk@hadoop102 job]$ vim flume-telnet-logger.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Note: this configuration is based on the official user guide: http://flume.apache.org/FlumeUserGuide.html

④ Start Flume to listen on the port

[luomk@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/flume-telnet-logger.conf -Dflume.root.logger=INFO,console

Parameter notes:

--conf conf/ : the configuration directory is conf/.

--name a1 : names the agent a1.

--conf-file job/flume-telnet-logger.conf : the configuration file Flume reads for this run is flume-telnet-logger.conf in the job folder.

-Dflume.root.logger=INFO,console : -D overrides the flume.root.logger property at runtime, directing log output to the console at the INFO level. The available levels are debug, info, warn, and error.
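flume-ng also accepts short forms of these options (-c for --conf, -n for --name, -f for --conf-file), so an equivalent launch is:

[luomk@hadoop102 flume]$ bin/flume-ng agent -c conf/ -n a1 -f job/flume-telnet-logger.conf -Dflume.root.logger=INFO,console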

⑤ Use telnet to send data to local port 44444

$ telnet localhost 44444

⑥ Result:
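After sending a line such as hello from the telnet session, the logger sink should print the event on the agent console. The exact format varies by version, but it looks roughly like this (the body is shown as hex bytes plus a text preview):

Event: { headers:{} body: 68 65 6C 6C 6F hello }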

2. Reading a local file into HDFS in real time

(1) Requirement: monitor the Hive log in real time and upload it to HDFS.

(2) Requirements analysis:

(3) Implementation steps:

① To write data to HDFS, Flume must have the relevant Hadoop JARs

Copy commons-configuration-1.6.jar, hadoop-auth-2.7.2.jar, hadoop-common-2.7.2.jar, hadoop-hdfs-2.7.2.jar, commons-io-2.4.jar, and htrace-core-3.1.0-incubating.jar into /opt/module/flume/lib.

Note: the JARs listed above are the ones Flume 1.99 must reference; other versions may not need all of them.

② Create the flume-file-hdfs.conf file

[luomk@hadoop102 job]$ touch flume-file-hdfs.conf

Note: reading a file on a Linux system means running a Linux command. Since the Hive log lives on the Linux filesystem, the source type chosen is exec (short for execute): the source reads the file by executing a Linux command.
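A side note on the command itself: tail -F (capital F) follows the file by name and reopens it after rotation or truncation, whereas tail -f follows the original file descriptor and goes silent once the log rotates. For a periodically rotated log like hive.log, -F is the safer choice:

tail -F /opt/module/hive/logs/hive.log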

[luomk@hadoop102 job]$ vim flume-file-hdfs.conf

Add the following content:

# Name the components on this agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2

# Describe/configure the source
a2.sources.r2.type = exec
a2.sources.r2.command = tail -F /opt/module/hive/logs/hive.log
a2.sources.r2.shell = /bin/bash -c

# Describe the sink
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://hadoop102:9000/flume/%Y%m%d/%H
# Prefix for uploaded files
a2.sinks.k2.hdfs.filePrefix = logs-
# Round down event timestamps (time-based folder rolling)
a2.sinks.k2.hdfs.round = true
# Number of time units per new folder
a2.sinks.k2.hdfs.roundValue = 1
# Unit used for rounding
a2.sinks.k2.hdfs.roundUnit = hour
# Use the local timestamp instead of one from the event headers
a2.sinks.k2.hdfs.useLocalTimeStamp = true
# Number of events to accumulate before flushing to HDFS
a2.sinks.k2.hdfs.batchSize = 1000
# File type (DataStream = plain text; compressed types are also supported)
a2.sinks.k2.hdfs.fileType = DataStream
# Roll to a new file every 600 seconds
a2.sinks.k2.hdfs.rollInterval = 600
# Roll the file once it reaches this size (just under 128 MB)
a2.sinks.k2.hdfs.rollSize = 134217700
# Never roll based on the number of events
a2.sinks.k2.hdfs.rollCount = 0
# Minimum number of block replicas
a2.sinks.k2.hdfs.minBlockReplicas = 1

# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2

③ Run the monitoring configuration

[luomk@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/flume-file-hdfs.conf

④ Start Hadoop and Hive, then run Hive operations to generate log output

[luomk@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh

[luomk@hadoop103 hadoop-2.7.2]$ sbin/start-yarn.sh

[luomk@hadoop102 hive]$ bin/hive

⑤ View the files on HDFS.
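One way to check, assuming the sink path configured above (the date/hour folders depend on when the agent ran):

[luomk@hadoop102 hadoop-2.7.2]$ bin/hadoop fs -ls -R /flume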

3. Reading directory files into HDFS in real time

(1) Requirement: use Flume to monitor an entire directory for files.

(2) Requirements analysis:

(3) Implementation steps:

① Create the configuration file flume-dir-hdfs.conf

[luomk@hadoop102 job]$ touch flume-dir-hdfs.conf

[luomk@hadoop102 job]$ vim flume-dir-hdfs.conf

Add the following content:

a3.sources = r3
a3.sinks = k3
a3.channels = c3

# Describe/configure the source
a3.sources.r3.type = spooldir
a3.sources.r3.spoolDir = /opt/module/flume/upload
a3.sources.r3.fileSuffix = .COMPLETED
a3.sources.r3.fileHeader = true
# Ignore files ending in .tmp; do not upload them
a3.sources.r3.ignorePattern = ([^ ]*\.tmp)

# Describe the sink
a3.sinks.k3.type = hdfs
a3.sinks.k3.hdfs.path = hdfs://hadoop102:9000/flume/upload/%Y%m%d/%H
# Prefix for uploaded files
a3.sinks.k3.hdfs.filePrefix = upload-
# Round down event timestamps (time-based folder rolling)
a3.sinks.k3.hdfs.round = true
# Number of time units per new folder
a3.sinks.k3.hdfs.roundValue = 1
# Unit used for rounding
a3.sinks.k3.hdfs.roundUnit = hour
# Use the local timestamp instead of one from the event headers
a3.sinks.k3.hdfs.useLocalTimeStamp = true
# Number of events to accumulate before flushing to HDFS
a3.sinks.k3.hdfs.batchSize = 100
# File type (DataStream = plain text; compressed types are also supported)
a3.sinks.k3.hdfs.fileType = DataStream
# Roll to a new file every 600 seconds
a3.sinks.k3.hdfs.rollInterval = 600
# Roll the file once it reaches this size (just under 128 MB)
a3.sinks.k3.hdfs.rollSize = 134217700
# Never roll based on the number of events
a3.sinks.k3.hdfs.rollCount = 0
# Minimum number of block replicas
a3.sinks.k3.hdfs.minBlockReplicas = 1

# Use a channel which buffers events in memory
a3.channels.c3.type = memory
a3.channels.c3.capacity = 1000
a3.channels.c3.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r3.channels = c3
a3.sinks.k3.channel = c3

② Start the directory-monitoring agent

[luomk@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/flume-dir-hdfs.conf

Notes on using the Spooling Directory Source:

Do not create and then keep modifying files inside the monitored directory.

Files that have been fully uploaded are renamed with a .COMPLETED suffix.

The monitored directory is scanned for file changes every 500 milliseconds.

③ Add files to the upload folder

Create an upload folder under /opt/module/flume:

[luomk@hadoop102 flume]$ mkdir upload

Add files to the upload folder:

[luomk@hadoop102 upload]$ touch luomk.txt

[luomk@hadoop102 upload]$ touch luomk.tmp

[luomk@hadoop102 upload]$ touch luomk.log

④ Check the data on HDFS
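As before, a quick listing against the configured sink path (a sketch; folder names depend on the run time):

[luomk@hadoop102 hadoop-2.7.2]$ bin/hadoop fs -ls -R /flume/upload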

4. Single data source, multiple sinks (fan-out)

(1) Requirement: flume-1 monitors file changes and passes them to flume-2, which stores them in HDFS; at the same time, flume-1 passes the changes to flume-3, which writes them to the local filesystem.

(2) Implementation steps:

① Preparation

Create a group1 folder under /opt/module/flume/job and enter it:

[luomk@hadoop102 job]$ mkdir group1

[luomk@hadoop102 job]$ cd group1/

Create a flume3 folder under /opt/module/datas/:

[luomk@hadoop102 datas]$ mkdir flume3

② Create flume-file-flume.conf

Configure one source that reads the log file, plus two channels and two sinks that feed flume-flume-hdfs and flume-flume-dir respectively.

[luomk@hadoop102 group1]$ touch flume-file-flume.conf

[luomk@hadoop102 group1]$ vim flume-file-flume.conf

Add the following content:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# Replicate the data flow to every channel
a1.sources.r1.selector.type = replicating

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log
a1.sources.r1.shell = /bin/bash -c

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop102
a1.sinks.k1.port = 4141
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop102
a1.sinks.k2.port = 4142

# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2

Note: Avro is a language-neutral data serialization and RPC framework created by Doug Cutting, the founder of Hadoop.

RPC (Remote Procedure Call) is a protocol for requesting a service from a program on a remote computer over the network, without needing to understand the underlying network technology.

③ Create flume-flume-hdfs.conf

Configure a source that receives the upstream Flume output and a sink that writes to HDFS.

Create and open the configuration file:

[luomk@hadoop102 group1]$ touch flume-flume-hdfs.conf

[luomk@hadoop102 group1]$ vim flume-flume-hdfs.conf

Add the following content:

# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# Describe/configure the source
a2.sources.r1.type = avro
a2.sources.r1.bind = hadoop102
a2.sources.r1.port = 4141

# Describe the sink
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://hadoop102:9000/flume2/%Y%m%d/%H
# Prefix for uploaded files
a2.sinks.k1.hdfs.filePrefix = flume2-
# Round down event timestamps (time-based folder rolling)
a2.sinks.k1.hdfs.round = true
# Number of time units per new folder
a2.sinks.k1.hdfs.roundValue = 1
# Unit used for rounding
a2.sinks.k1.hdfs.roundUnit = hour
# Use the local timestamp instead of one from the event headers
a2.sinks.k1.hdfs.useLocalTimeStamp = true
# Number of events to accumulate before flushing to HDFS
a2.sinks.k1.hdfs.batchSize = 100
# File type (DataStream = plain text; compressed types are also supported)
a2.sinks.k1.hdfs.fileType = DataStream
# Roll to a new file every 600 seconds
a2.sinks.k1.hdfs.rollInterval = 600
# Roll the file once it reaches this size (just under 128 MB)
a2.sinks.k1.hdfs.rollSize = 134217700
# Never roll based on the number of events
a2.sinks.k1.hdfs.rollCount = 0
# Minimum number of block replicas
a2.sinks.k1.hdfs.minBlockReplicas = 1

# Describe the channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

④ Create flume-flume-dir.conf

Configure a source that receives the upstream Flume output and a sink that writes to a local directory.

Create and open the configuration file:

[luomk@hadoop102 group1]$ touch flume-flume-dir.conf

[luomk@hadoop102 group1]$ vim flume-flume-dir.conf

Add the following content:

# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c2

# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop102
a3.sources.r1.port = 4142

# Describe the sink
a3.sinks.k1.type = file_roll
a3.sinks.k1.sink.directory = /opt/module/datas/flume3

# Describe the channel
a3.channels.c2.type = memory
a3.channels.c2.capacity = 1000
a3.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r1.channels = c2
a3.sinks.k1.channel = c2

Note: the local output directory must already exist; if it does not, the file_roll sink will not create it.
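Also worth knowing: by default the file_roll sink rolls to a new local file every 30 seconds (its sink.rollInterval property), even when no events arrive, so expect a steady stream of small files. To confirm output later:

[luomk@hadoop102 datas]$ ls flume3/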

⑤ Run the configuration files

Start the agents for each configuration file in turn: flume-flume-dir, flume-flume-hdfs, then flume-file-flume.

[luomk@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group1/flume-flume-dir.conf

[luomk@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group1/flume-flume-hdfs.conf

[luomk@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group1/flume-file-flume.conf

⑥ Start Hadoop and Hive

[luomk@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh

[luomk@hadoop103 hadoop-2.7.2]$ sbin/start-yarn.sh

[luomk@hadoop102 hive]$ bin/hive

hive (default)>

⑦ Check the data on HDFS
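A sketch of checking both fan-out destinations, using the paths from the configurations above:

[luomk@hadoop102 hadoop-2.7.2]$ bin/hadoop fs -ls -R /flume2

[luomk@hadoop102 datas]$ ls flume3/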

5. Aggregating multiple data sources

(1) Requirement: flume-1 monitors the file hive.log, and flume-2 monitors a data stream on a port; both send their data to flume-3, which writes the combined stream to HDFS.

(2) Implementation steps:

① Preparation

Create a group2 folder under /opt/module/flume/job:

[luomk@hadoop102 job]$ mkdir group2

② Create flume-file-flume.conf

Configure a source that monitors the hive.log file and a sink that forwards the data to the next-tier Flume agent.

Create and open the configuration file:

[luomk@hadoop102 group2]$ touch flume-file-flume.conf

[luomk@hadoop102 group2]$ vim flume-file-flume.conf

Add the following content:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log
a1.sources.r1.shell = /bin/bash -c

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop102
a1.sinks.k1.port = 4141

# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

③ Create flume-telnet-flume.conf

Configure a source that monitors the data stream on port 44444 and a sink that forwards it to the next-tier Flume agent.

Create and open the configuration file:

[luomk@hadoop102 group2]$ touch flume-telnet-flume.conf

[luomk@hadoop102 group2]$ vim flume-telnet-flume.conf

Add the following content:

# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# Describe/configure the source
a2.sources.r1.type = netcat
a2.sources.r1.bind = hadoop102
a2.sources.r1.port = 44444

# Describe the sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = hadoop102
a2.sinks.k1.port = 4141

# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

④ Create flume-flume-hdfs.conf

Configure a source that receives the streams sent by flume-file-flume and flume-telnet-flume, merges them, and sinks the result to HDFS.

Create and open the configuration file:

[luomk@hadoop102 group2]$ touch flume-flume-hdfs.conf

[luomk@hadoop102 group2]$ vim flume-flume-hdfs.conf

Add the following content:

# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1

# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop102
a3.sources.r1.port = 4141

# Describe the sink
a3.sinks.k1.type = hdfs
a3.sinks.k1.hdfs.path = hdfs://hadoop102:9000/flume3/%Y%m%d/%H
# Prefix for uploaded files
a3.sinks.k1.hdfs.filePrefix = flume3-
# Round down event timestamps (time-based folder rolling)
a3.sinks.k1.hdfs.round = true
# Number of time units per new folder
a3.sinks.k1.hdfs.roundValue = 1
# Unit used for rounding
a3.sinks.k1.hdfs.roundUnit = hour
# Use the local timestamp instead of one from the event headers
a3.sinks.k1.hdfs.useLocalTimeStamp = true
# Number of events to accumulate before flushing to HDFS
a3.sinks.k1.hdfs.batchSize = 100
# File type (DataStream = plain text; compressed types are also supported)
a3.sinks.k1.hdfs.fileType = DataStream
# Roll to a new file every 600 seconds
a3.sinks.k1.hdfs.rollInterval = 600
# Roll the file once it reaches this size (just under 128 MB)
a3.sinks.k1.hdfs.rollSize = 134217700
# Never roll based on the number of events
a3.sinks.k1.hdfs.rollCount = 0
# Minimum number of block replicas
a3.sinks.k1.hdfs.minBlockReplicas = 1

# Describe the channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1

⑤ Run the configuration files

Start the agents for each configuration file in turn: flume-flume-hdfs.conf, flume-telnet-flume.conf, then flume-file-flume.conf.

[luomk@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group2/flume-flume-hdfs.conf

[luomk@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group2/flume-telnet-flume.conf

[luomk@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group2/flume-file-flume.conf

⑥ Start Hadoop and Hive

[luomk@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh

[luomk@hadoop103 hadoop-2.7.2]$ sbin/start-yarn.sh

[luomk@hadoop102 hive]$ bin/hive

hive (default)>

⑦ Send data to port 44444

[luomk@hadoop102 flume]$ telnet hadoop102 44444

⑧ Check the data on HDFS
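Events from both the Hive log and the telnet session should land under the same path, since a1 and a2 both send to a3 on port 4141. A sketch of the check:

[luomk@hadoop102 hadoop-2.7.2]$ bin/hadoop fs -ls -R /flume3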
