
Flume Simple Chain Configuration Example


(1) Simple chain


This architecture chains multiple Flume agents one after another, transferring data from the initial node at the source end all the way to the node at the far end. It is not recommended to chain too many agents: each additional hop can lower the overall data transfer rate, and if any single agent in the chain fails at runtime (for example, it crashes or stalls), data transfer through the entire pipeline is interrupted.
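A rough sketch of the data flow (my own illustration, since the original figure is not reproduced here): each agent passes events to the next one through an Avro sink/source pair, so the chain looks roughly like this:

    [Agent 1]  source ------> channel ------> avro sink
                                                  |
                                                  v  (RPC over the network)
    [Agent 2]  avro source -> channel ------> avro sink
                                                  |
                                                  v
                                                 ...
    [Agent N]  avro source -> channel ------> final sink (logger, HDFS, ...)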

(The Hadoop cluster does not need to be started for this example.)

Example:

Here I use two nodes, hadoop102 and hadoop103, chained together as the example.

1. Distribute Flume to the hadoop102 and hadoop103 nodes, and create a configuration file on each.
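For example, assuming Flume is installed under /opt/module/flume on hadoop102 (this path is just an assumption for illustration; substitute your actual installation directory), it can be copied to hadoop103 with scp, or with a cluster sync script such as xsync if your environment provides one:

    # Copy the Flume installation from hadoop102 to hadoop103 (adjust paths as needed)
    [atguigu@hadoop102 module]$ scp -r /opt/module/flume atguigu@hadoop103:/opt/module/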

On hadoop102, create the configuration file netcat-flume-avro.conf:

    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1
    
    # Describe/configure the source
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = hadoop102
    a1.sources.r1.port = 44444
    
    # Describe the sink
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = hadoop103
    a1.sinks.k1.port = 4141
    
    # Use a channel which buffers events in memory
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100
    
    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1

On hadoop103, create the configuration file netcat-flume-avro.conf:

    # Name the components on this agent
    a2.sources = r2
    a2.sinks = k2
    a2.channels = c2
    
    # Describe/configure the source
    a2.sources.r2.type = avro
    a2.sources.r2.bind = hadoop103
    a2.sources.r2.port = 4141
    
    # Describe the sink
    a2.sinks.k2.type = logger
    
    # Use a channel which buffers events in memory
    a2.channels.c2.type = memory
    a2.channels.c2.capacity = 1000
    a2.channels.c2.transactionCapacity = 100
    
    # Bind the source and sink to the channel
    a2.sources.r2.channels = c2
    a2.sinks.k2.channel = c2

Once the configuration files are in place, start Flume on hadoop103 first, because the Avro source on hadoop103 must already be listening on port 4141 before the Avro sink on hadoop102 tries to connect to it:

    [atguigu@hadoop103 flume]$ bin/flume-ng agent -n a2 -c conf/ -f job/netcat-flume-avro.conf -Dflume.root.logger=INFO,console

Then start Flume on hadoop102:

    [atguigu@hadoop102 flume]$ bin/flume-ng agent -n a1 -c conf/ -f job/netcat-flume-avro.conf

Open a new terminal window on hadoop102 and connect with nc:

    [atguigu@hadoop102 ~]$ nc hadoop102 44444

Then send a message through nc, and hadoop103 will receive it.
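For example, typing hello in the nc window should look roughly like the session below; the logger sink on hadoop103 prints each received event to its console as the headers plus the body in hex and text (the log line shown is only an approximation of the real output):

    [atguigu@hadoop102 ~]$ nc hadoop102 44444
    hello
    OK

    # Approximate output on hadoop103's console:
    Event: { headers:{} body: 68 65 6C 6C 6F                                  hello }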

