The number of seconds a blacklisted slot or supervisor stays blacklisted before it is resumed.
The number of failures (hit count) within the tolerance time that will trigger blacklisting.
The number of seconds over which the blacklist scheduler tracks bad slots or supervisors (the tolerance time).
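The three blacklist settings above work together: failures are counted over the tolerance window, blacklisting triggers once the hit count is reached, and the entry expires after the resume time. Below is a minimal sketch of how such overrides might be expressed in code; the key strings and values are illustrative assumptions, not verified defaults, and in practice these are cluster-side settings.

    import java.util.HashMap;
    import java.util.Map;

    Map<String, Object> clusterConf = new HashMap<>();
    clusterConf.put("blacklist.scheduler.tolerance.time.secs", 300);  // assumed key: window over which failures are counted
    clusterConf.put("blacklist.scheduler.tolerance.count", 3);        // assumed key: failures in the window before blacklisting
    clusterConf.put("blacklist.scheduler.resume.time.secs", 1800);    // assumed key: how long the blacklist entry lasts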
This port on Storm DRPC is used by DRPC topologies to receive function invocations and send results back.
DRPC invocations thrift server worker threads.
The maximum buffer size thrift should use when reading messages for DRPC.
This port is used by Storm DRPC for receiving DRPC requests from clients.
DRPC thrift server queue size.
The timeout on DRPC requests within the DRPC server.
DRPC thrift server worker threads.
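The DRPC request port described above is what external clients connect to. A minimal client-side sketch follows, assuming the DRPCClient constructor that takes a conf map, host, and port; the host name, the function name "reach", and its argument are placeholders (3772 is the usual default DRPC port).

    import java.util.HashMap;
    import org.apache.storm.utils.DRPCClient;

    public static String callDrpc() throws Exception {
        // Host, function name, and argument are placeholders for illustration only.
        DRPCClient client = new DRPCClient(new HashMap<>(), "drpc-host.example.com", 3772);
        try {
            return client.execute("reach", "http://example.com");
        } finally {
            client.close();
        }
    }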
How often executor metrics should be reported to the master; used for RPC heartbeat mode.
How many minutes since a log was last modified for the log to be considered for clean-up.
How often to clean up old log files.
Storm Logviewer HTTPS port.
The maximum total size, in MB, that a single worker's log files can take up.
The maximum total size, in MB, that all worker log files combined can take up.
HTTP UI port for log viewer.
During operations with the blob store, via master, how long a connection is idle before nimbus considers it dead and drops the
session and any associated connections.
How often nimbus should wake the cleanup thread to clean the inbox.
How often nimbus should wake up to renew credentials if needed.
A number representing the maximum number of executors any single topology can acquire.
How often nimbus should wake up to check heartbeats and do reassignments.
Nimbus thrift server queue size, default is 100000.
A number representing the maximum number of workers any single topology can acquire.
How long a supervisor can go without heartbeating before nimbus considers it dead and stops assigning new work to it.
A special timeout used when a task is initially launched.
How long without heartbeating a task can go before nimbus will consider the task dead and reassign it to another location.
The maximum buffer size thrift should use when reading messages.
Which port the Thrift interface of Nimbus should run on.
The number of threads that should be used by the nimbus thrift server.
The maximum number of threads that should be used by the Pacemaker client.
The maximum number of threads that should be used by the Pacemaker.
The port Pacemaker should run on.
Pacemaker Thrift Max Message Size (bytes).
The maximum number of times that the Resource Aware Scheduler (RAS) will attempt to schedule a topology.
How long before a scheduler considers its config cache expired.
The frequency at which the plugin will call out to Artifactory instead of returning the most recently cached result.
The amount of time an HTTP connection to the Artifactory server will wait before timing out.
Max time to attempt to schedule one topology.
The chunk size the Storm client uses when uploading dependency jars.
The buffer size to use for blobstore uploads.
Set replication factor for a blob in HDFS Blobstore Implementation.
Please use STORM_SUPERVISOR_MEMORY_LIMIT_TOLERANCE_MARGIN_MB instead.
How often cluster metrics data is published to metrics consumer.
Netty based messaging: The netty write buffer high watermark in bytes.
Netty based messaging: The netty write buffer low watermark in bytes.
Netty based messaging: The buffer size for send/recv buffer.
Netty based messaging: The max # of milliseconds that a peer will wait.
Netty based messaging: The min # of milliseconds that a peer will wait.
Netty based messaging: The # of worker threads for the server.
Netty based messaging: Sets the backlog value to specify when the channel binds to a local address.
If the memory usage of a worker goes over its limit by this value, it is shot immediately.
A multiplier for the memory limit of a worker that will have the supervisor shoot it immediately. 1.0 means shoot the worker as soon
as it goes over. 2.0 means shoot the worker if its usage is double what was requested.
If the amount of memory that is free in the system (either on the box or in the supervisor's cgroup) is below this number (in MB),
consider the system to be in low memory mode and start shooting workers if they are over their limit.
The number of milliseconds that a worker is allowed to be over their limit when there is a medium amount of memory free in the
system.
If the amount of memory that is free in the system (either on the box or in the supervisor's cgroup) is below this number (in MB),
consider the system to be a little low on memory and start shooting workers if they are over their limit for a given grace period,
STORM_SUPERVISOR_MEDIUM_MEMORY_GRACE_PERIOD_MS.
Memory given to each worker for free (because Java and Storm have some overhead).
The manually set CPU share for each CGroup on a supervisor node.
The manually set memory limit (in MB) for each CGroup on a supervisor node.
This config indicates the minimum percentage of a CPU core that a worker will use.
The number of hours a worker token is valid for.
The port Storm will use to connect to each of the ZooKeeper servers.
Maximum number of retries a supervisor is allowed to make for downloading a blob.
What blobstore download parallelism the supervisor should use.
The total amount of CPU resources a supervisor is allowed to give to its workers.
The distributed cache cleanup interval.
The distributed cache target size in MB.
The distributed cache interval for checking for blobs to update.
The total amount of memory (in MiB) a supervisor is allowed to give to its workers.
How often the supervisor checks the worker heartbeats to see if any of them need to be restarted.
How many seconds to allow for graceful worker shutdown when killing workers before resorting to force kill.
How long a worker can go without heartbeating during the initial launch before the supervisor tries to restart the worker process.
How long a worker can go without heartbeating before the supervisor tries to restart the worker process.
How often a task should sync credentials, worst case.
How often a task should heartbeat its status to the Pacemaker.
How often a task should sync its connections with other tasks (if a task is reassigned, the other tasks sending messages to it need
to refresh their connections).
This config indicates the percentage of a CPU core that an instance (executor) of an acker will use.
How many executors to spawn for ackers.
The maximum amount of memory an instance of an acker will take off heap.
The maximum amount of memory an instance of an acker will take on heap.
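For the acker settings above, the number of acker executors is most commonly set on the topology's Config when it is submitted. A minimal sketch using the org.apache.storm.Config API; the value is illustrative only.

    import org.apache.storm.Config;

    Config conf = new Config();
    conf.setNumAckers(2);  // two acker executors for this topology; 0 disables acking entirely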
How often a worker should check and notify upstream workers about its tasks that are no longer experiencing back pressure (BP) and
are able to receive new messages.
Configures park time if using WaitStrategyPark for BackPressure.
Configures the steps used to determine progression to the next level of wait when using WaitStrategyProgressive for BackPressure.
Configures the steps used to determine progression to the next level of wait when using WaitStrategyProgressive for BackPressure.
Configures sleep time if using WaitStrategyProgressive for BackPressure.
How often to send flush tuple to the executors for flushing out batched events.
Configures park time for WaitStrategyPark.
Configures number of iterations to spend in level 1 of WaitStrategyProgressive, before progressing to level 2.
Configures number of iterations to spend in level 2 of WaitStrategyProgressive, before progressing to level 3.
Configures sleep time for WaitStrategyProgressive.
Bolt-specific configuration for windowed bolts to specify the maximum time lag of the tuple timestamp in milliseconds.
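The maximum tuple timestamp lag described above is typically set per bolt through the windowed-bolt builder methods. A minimal sketch, where MyWindowedBolt is a hypothetical user-defined bolt extending BaseWindowedBolt and the durations are illustrative:

    import org.apache.storm.topology.base.BaseWindowedBolt;
    import org.apache.storm.topology.base.BaseWindowedBolt.Duration;

    // MyWindowedBolt is a placeholder for a user-defined bolt extending BaseWindowedBolt.
    BaseWindowedBolt bolt = new MyWindowedBolt()
        .withWindow(Duration.seconds(30), Duration.seconds(10))  // 30 s window sliding every 10 s
        .withTimestampField("ts")                                 // tuple field carrying the event timestamp
        .withLag(Duration.seconds(5));                            // maximum allowed lag of the tuple timestamp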
This config indicates the percentage of a CPU core that an instance (executor) of a component will use.
The maximum amount of memory an instance of a spout/bolt will take off heap.
The maximum amount of memory an instance of a spout/bolt will take on heap.
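The per-component CPU and memory settings above are usually attached to individual spouts and bolts when the topology is built. A minimal sketch, where MySpout and MyBolt are hypothetical user components and the numbers are illustrative:

    import org.apache.storm.topology.TopologyBuilder;

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("words", new MySpout(), 2);   // hypothetical spout, 2 executors
    builder.setBolt("counter", new MyBolt(), 4)    // hypothetical bolt, 4 executors
           .setNumTasks(8)                         // 8 task instances spread over those executors
           .setMemoryLoad(256.0, 64.0)             // on-heap and off-heap memory per instance, in MB
           .setCPULoad(25.0)                       // percentage of a CPU core per instance
           .shuffleGrouping("words");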
How many executors to spawn for event logger.
If the number of items in a task's overflowQ exceeds this, new messages coming from other workers to this task will be dropped. This
prevents the OutOfMemoryException that can occur in rare scenarios in the presence of BackPressure.
The size of the receive queue for each executor.
The maximum number of machines that should be used by this topology.
This signifies the load congestion among target tasks in scope.
This signifies the load congestion among target tasks in scope.
The maximum number of tuples that can be pending on a spout task at any given time.
The maximum parallelism allowed for a component in this topology.
The maximum amount of time given to the topology to fully process a message emitted by a spout.
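The spout pending limit, component parallelism cap, and message timeout above are commonly set together on the topology Config at submission time. A minimal sketch; the concrete values are illustrative only.

    import org.apache.storm.Config;

    Config conf = new Config();
    conf.setMaxSpoutPending(500);    // at most 500 tuples pending per spout task at any time
    conf.setMessageTimeoutSecs(60);  // a tuple tree must complete within 60 seconds or it is failed
    conf.setMaxTaskParallelism(16);  // cap on the parallelism of any single component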
This config indicates the percentage of a CPU core that an instance (executor) of a metrics consumer will use.
The maximum amount of memory an instance of a metrics consumer will take off heap.
The maximum amount of memory an instance of a metrics consumer will take on heap.
Sets the priority for a topology.
The number of tuples to batch before sending to the destination executor.
How many ackers to place in each newly launched worker, until we run out of ackers.
The maximum number of states that will be searched looking for a solution in resource aware strategies.
The maximum number of seconds to spend scheduling a topology using resource aware strategies.
Max pending tuples in one ShellBolt.
The number of milliseconds the SleepEmptyEmitStrategy should sleep for.
Check recvQ after every N invocations of Spout's nextTuple() [when ACKing is disabled].
Configures park time for WaitStrategyPark for spout.
Configures number of iterations to spend in level 1 of WaitStrategyProgressive, before progressing to level 2.
Configures number of iterations to spend in level 2 of WaitStrategyProgressive, before progressing to level 3.
Configures sleep time for WaitStrategyProgressive.
Topology configuration to specify the checkpoint interval (in millis) at which the topology state is saved when IStatefulBolt bolts are involved.
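One way to set the checkpoint interval above is via the topology Config. A minimal sketch; the constant name Config.TOPOLOGY_STATE_CHECKPOINT_INTERVAL is assumed here rather than confirmed, and the value is illustrative.

    import org.apache.storm.Config;

    Config conf = new Config();
    // Save topology state every 5 seconds when IStatefulBolt bolts are involved
    // (TOPOLOGY_STATE_CHECKPOINT_INTERVAL is an assumed constant name).
    conf.put(Config.TOPOLOGY_STATE_CHECKPOINT_INTERVAL, 5000);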
The maximum amount of time a component gives a source of state to synchronize before it requests synchronization again.
The percentage of tuples to sample to produce stats for a task.
How long a subprocess can go without heartbeating before the ShellSpout/ShellBolt tries to terminate itself.
How many instances to create for a spout/bolt.
The size of the transfer queue for each worker.
The size of the transfer queue for each worker.
How often a batch can be emitted in a Trident topology.
Maximum number of tuples that can be stored in the in-memory cache in windowing operators for fast access without fetching them from the store.
Topology configuration to specify the V2 metrics tick interval in seconds.
A per topology config that specifies the maximum amount of memory a worker can use for that specific topology.
Topology configurable worker heartbeat timeout before the supervisor tries to restart the worker process.
How many processes should be spawned around the cluster to execute this topology.
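The worker count just above, and the per-topology worker memory cap described a few lines earlier, are both topology-level settings. A minimal sketch; setTopologyWorkerMaxHeapSize is assumed to take the cap in MB, and both values are illustrative.

    import org.apache.storm.Config;

    Config conf = new Config();
    conf.setNumWorkers(4);                      // spawn four worker processes for this topology
    conf.setTopologyWorkerMaxHeapSize(1024.0);  // assumed: per-topology cap on worker heap, in MB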
The port to use to connect to the transactional zookeeper servers.
The size of the header buffer for the UI in bytes.
This port is used by Storm UI for receiving HTTPS (SSL) requests from clients.
Storm UI binds to this port.
Interval at which the worker checks for updated blobs and refreshes worker state accordingly.
The default heap memory size in MB per worker, used in the jvm -Xmx opts for launching the worker.
How often this worker should heartbeat to the supervisor.
How often a worker should check dynamic log level timeouts for expiration.