Class ShellBolt
- All Implemented Interfaces:
  Serializable, IBolt
- Direct Known Subclasses:
  BlobStoreAPIWordCountTopology.SplitSentence, FluxShellBolt, PythonShellMetricsBolt, RichShellBolt, WordCountTopology.SplitSentence, WordCountTopologyNode.SplitSentence
To run a ShellBolt on a cluster, the scripts that are shelled out to must be in the resources directory within the jar submitted to the master. During development/testing on a local machine, that resources directory just needs to be on the classpath.
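During local development, a quick way to confirm that a script is visible where the text above says it must be is a plain classpath lookup (no Storm dependency needed; the script name `resources/mybolt.py` here is only illustrative):

```java
public class ResourceCheck {
    public static void main(String[] args) {
        // Looks for resources/mybolt.py on the classpath, the location the
        // docs above describe for local development. Name is illustrative.
        java.net.URL url = ResourceCheck.class.getClassLoader()
                .getResource("resources/mybolt.py");
        System.out.println(url != null ? "found: " + url : "not found");
    }
}
```

If this prints `not found`, the resources directory is not on the classpath and the shelled-out script will not be locatable during local testing.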
When creating topologies using the Java API, subclass this bolt and implement the IRichBolt interface to create components for the topology that use other languages. For example:
```java
public class MyBolt extends ShellBolt implements IRichBolt {
    public MyBolt() {
        super("python3", "mybolt.py");
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("field1", "field2"));
    }
}
```
Field Summary

- HEARTBEAT_STREAM_ID
- LOG

Constructor Summary

- ShellBolt

Method Summary

| Modifier and Type | Method | Description |
| --- | --- | --- |
| void | changeChildCWD(boolean changeDirectory) | Set if the current working directory of the child process should change to the resources dir extracted from the jar, or if it should stay the same as the worker process to access things from the blob store. |
| void | cleanup() | Called when an IBolt is going to be shutdown. |
| void | execute(Tuple input) | Process a single tuple of input. |
| void | prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) | Called when a task for this component is initialized within a worker on the cluster. |
| void | setEnv(Map<String, String> env) | |
| boolean | shouldChangeChildCWD() | |
Field Details

HEARTBEAT_STREAM_ID

LOG

public static final org.slf4j.Logger LOG
Constructor Details

ShellBolt

ShellBolt
Method Details

setEnv

shouldChangeChildCWD

public boolean shouldChangeChildCWD()

changeChildCWD

public void changeChildCWD(boolean changeDirectory)

Set if the current working directory of the child process should change to the resources dir extracted from the jar, or if it should stay the same as the worker process to access things from the blob store.
- Parameters:
  changeDirectory - true to change the directory (the default); false to leave the directory the same as the worker process.
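ShellBolt runs the shelled-out script as a child process, so the choice made by changeChildCWD determines what relative paths mean inside that script. The effect of redirecting a child's working directory can be illustrated with plain ProcessBuilder, independent of Storm (the temp directory here merely stands in for the extracted resources dir; `pwd` assumes a Unix-like system):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChildCwdDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Scratch directory standing in for the resources dir extracted from the jar.
        Path dir = Files.createTempDirectory("resources-demo");

        // Launch a child process with its working directory redirected,
        // analogous to changeChildCWD(true) pointing the script at the resources dir.
        Process p = new ProcessBuilder("pwd")
                .directory(dir.toFile())
                .start();
        String childCwd = new String(p.getInputStream().readAllBytes()).trim();
        p.waitFor();

        // The child sees the redirected directory, not the parent's CWD.
        System.out.println(childCwd.equals(dir.toRealPath().toString()));
    }
}
```

With changeChildCWD(false) the analogue would be omitting the `.directory(...)` call: the child inherits the worker's working directory, which keeps blob-store-relative paths valid.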
prepare

public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector)

Description copied from interface: IBolt

Called when a task for this component is initialized within a worker on the cluster. It provides the bolt with the environment in which the bolt executes.
- Specified by:
  prepare in interface IBolt
- Parameters:
  topoConf - The Storm configuration for this bolt. This is the configuration provided to the topology merged in with cluster configuration on this machine.
  context - This object can be used to get information about this task's place within the topology, including the task id and component id of this task, input and output information, etc.
  collector - The collector is used to emit tuples from this bolt. Tuples can be emitted at any time, including the prepare and cleanup methods. The collector is thread-safe and should be saved as an instance variable of this bolt object.
-
execute

public void execute(Tuple input)

Description copied from interface: IBolt

Process a single tuple of input. The Tuple object contains metadata on it about which component/stream/task it came from. The values of the Tuple can be accessed using Tuple#getValue. The IBolt does not have to process the Tuple immediately. It is perfectly fine to hang onto a tuple and process it later (for instance, to do an aggregation or join).

Tuples should be emitted using the OutputCollector provided through the prepare method. It is required that all input tuples are acked or failed at some point using the OutputCollector. Otherwise, Storm will be unable to determine when tuples coming off the spouts have been completed.

For the common case of acking an input tuple at the end of the execute method, see IBasicBolt which automates this.
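The ack-or-fail contract above applies to any IBolt (ShellBolt itself handles acking on behalf of the shelled-out script via the multilang protocol). A minimal hand-written bolt that honors the contract might look like the following sketch; it assumes storm-client on the classpath, and the class and field names are illustrative:

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Sketch: a bolt that emits and then acks every input tuple, satisfying the
// ack-or-fail contract described above.
public class UppercaseBolt implements IRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context,
                        OutputCollector collector) {
        // Save the collector as an instance variable, as the prepare javadoc advises.
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        // Anchor the emitted tuple to the input, then ack the input.
        collector.emit(input, new Values(input.getString(0).toUpperCase()));
        collector.ack(input); // every input tuple must be acked or failed
    }

    @Override
    public void cleanup() {}

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}
```

For this emit-then-ack-at-the-end pattern, extending BaseBasicBolt (via IBasicBolt) removes the explicit ack call.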
-
cleanup

public void cleanup()

Description copied from interface: IBolt

Called when an IBolt is going to be shut down. Storm will make a best-effort attempt to call this if the worker shutdown is orderly. The Config.SUPERVISOR_WORKER_SHUTDOWN_SLEEP_SECS setting controls how long orderly shutdown is allowed to take. There is no guarantee that cleanup will be called if shutdown is not orderly, or if the shutdown exceeds the time limit. The one context where cleanup is guaranteed to be called is when a topology is killed when running Storm in local mode.