hadoop - Flume to load data to local file system


I am using Hadoop 2.2 on Linux. Can anyone tell me how to use the file roll sink in Flume? I know the file roll sink writes data to the local file system, but how do I set it up?

Thanks in advance.

In order to use the file roll sink, you need to configure it in your Flume configuration file. The example config below fetches data with a spooling directory source watching /logs/source and sends it through a memory channel to a file roll sink that writes into /logs/sink.

There are other configuration options you should have a look at in the Flume User Guide here.

# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory

# Spooling directory source reading files dropped into /logs/source
agent1.sources.spool.type = spooldir
agent1.sources.spool.channels = ch1
agent1.sources.spool.spoolDir = /logs/source
agent1.sources.spool.fileHeader = true

# File roll sink writing events to the local directory /logs/sink
agent1.sinks.fr1.type = file_roll
agent1.sinks.fr1.channel = ch1
agent1.sinks.fr1.sink.directory = /logs/sink

# Wire the components to the agent
agent1.channels = ch1
agent1.sources = spool
agent1.sinks = fr1
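By default the file roll sink rolls over to a new output file every 30 seconds; if that is too frequent you can adjust the sink.rollInterval property (for example, agent1.sinks.fr1.sink.rollInterval = 300 for five-minute files, or 0 to never roll). Once the configuration is saved (the path below, /etc/flume/conf/agent1.conf, is just an example), the agent can be started with the flume-ng command, roughly along these lines:

flume-ng agent \
  --conf /etc/flume/conf \
  --conf-file /etc/flume/conf/agent1.conf \
  --name agent1 \
  -Dflume.root.logger=INFO,console

Files dropped into /logs/source are then picked up by the spooling directory source (renamed with a .COMPLETED suffix once ingested), and their contents appear as rolled files in /logs/sink.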
