All Classes and Interfaces

Class
Description
This abstract class can be extended to implement concrete classes capable of (de)serializing generic avro objects across a Topology.
This is a base class for DNS to Switch mappings.
 
The base class for auto credential plugins that abstracts out some of the common functionality.
The base class for auto credential plugins that abstracts out some of the common functionality.
 
 
 
This abstract bolt provides the basic behavior of bolts that rank objects according to their count.
AbstractRedisBolt is a class for users to implement custom bolts which interact with Redis.
AbstractRedisMapState is the base class of any RedisMapState, which implements IBackingMap.
AbstractRedisStateQuerier is the base class of any RedisStateQuerier, which implements BaseQueryFunction.
AbstractRedisStateUpdater is the base class of any RedisStateUpdater, which implements BaseStateUpdater.
 
Basic functionality to manage trident tuple events using WindowManager and WindowsStore for storing tuples and triggers related information.
For topology-related code reuse.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
This is code intended to enforce ZK ACLs.
 
A Tuple that is addressed to a destination.
 
 
 
Utility class of Aether.
Specifies the available timeframes for Metric aggregation.
 
An example that illustrates the global aggregate.
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
This class implements the DNSToSwitchMapping interface. It alternates between RACK1 and RACK2 for the hosts.
 
 
 
 
 
 
A dynamic loader that can load scheduler configurations for user resource guarantees from Artifactory (an artifact repository manager).
Factory class for ArtifactoryConfigLoader.
The `Assembly` interface provides a means to encapsulate logic applied to a Stream.
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
A service for distributing master assignments to supervisors; it makes assignment notification asynchronous.
Downloads and caches blobs locally.
An output stream where all of the data is committed on close, or can be canceled with cancel.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
Auto credentials plugin for HBase implementation.
Command tool of HBase credential renewer.
Auto credentials nimbus plugin for HBase implementation.
Auto credentials plugin for HDFS implementation.
Command tool of HDFS credential renewer.
Auto credentials nimbus plugin for HDFS implementation.
Auto credentials plugin for Hive implementation.
Command tool of Hive credential renewer.
Auto credentials nimbus plugin for Hive implementation.
This plugin is intended to be used for user topologies to send SSL keystore/truststore files to the remote workers.
Automatically take a user's TGT, and push it, and renew it in Nimbus.
Custom LoginModule to enable Auto Login based on cached ticket.
Custom LoginModule extended for testing.
 
 
 
AvroScheme uses generic (without code generation) readers instead of specific (with code generation) readers.
AvroSerializer uses generic (without code generation) writers instead of specific (with code generation) writers.
 
 
 
 
Tracks the BackPressure status.
 
 
 
 
 
Base implementation of an iterator over a KeyValueState whose encoded key and value types are both binary.
 
 
 
 
 
Convenience implementation of the Operation interface.
 
 
Different node sorting types available.
 
 
 
Base class that abstracts the common logic for executing bolts in a stateful topology.
 
 
Base implementation of iterator over KeyValueState.
 
 
This class is based on BaseRichBolt, but is aware of tick tuples.
 
 
 
Holds a count value for count based windows and sliding intervals.
Holds a Time duration for time based windows and sliding intervals.
 
A BaseWorkerHook is a noop implementation of IWorkerHook.
 
A container that runs processes on the local box.
Launch containers with no security using standard java commands.
A simple client for connecting to DRPC.
This topology is a basic example of doing distributed RPC on top of Storm.
 
 
 
 
 
 
 
 
 
 
Top level marker interface for processors that compute results for a batch of tuples, such as Aggregate and Join.
 
 
 
A representation of a Java object that is uniquely identifiable and, given a className, constructor arguments, and properties, can be instantiated.
A bean list reference is a list of bean references.
A bean reference is simply a string pointer to another id.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
A function that accepts two arguments and produces a result.
 
 
 
Callback for when a localized blob is going to change.
 
 
Provides a way to store blobs that can be downloaded.
The blob store implements its own iterator to list the blobs.
Provides common handling of acls for Blobstores.
 
 
 
 
Provides a base implementation for creating a blobstore based on file-backed storage.
 
 
Apply Blowfish encryption for tuple communication to bolts.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
BoltDeclarer includes grouping APIs for storm topology.
Bean representation of a Storm bolt.
 
 
 
 
BoltMsg is an object that represents the data sent from a shell component to a bolt process that implements a multi-language protocol.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
Manage mvn repository.
An example that demonstrates the usage of Stream.branch(Predicate[]) to split a stream into multiple branches based on predicates.
 
 
 
 
 
 
 
Mapped response for bulk index.
Based on the Kafka topic a given ConsumerRecord came from, it will be translated to a Storm tuple and emitted to a given stream.
 
 
Useful to layer over a map that communicates with a database; you generally layer an opaque map over this, and this over your database store.
 
 
Capture running topologies for load gen later on.
 
 
 
 
 
Deprecated.
Report CPU used in the cgroup.
Deprecated.
Reports the guaranteed CPU percentage this worker has requested.
Deprecated.
Report the percentage of the cpu guaranteed for the worker.
 
Class that implements ResourceIsolationInterface that manages cgroups.
Deprecated.
Reports the current memory limit of the cgroup for this worker.
Deprecated.
Reports the current memory usage of the cgroup for this worker.
Deprecated.
Base class for checking if CGroups are enabled, etc.
An interface to implement the basic functions to manage cgroups, such as mounting a hierarchy and creating cgroups.
 
 
 
 
 
 
A composite context that holds a chain of ProcessorContext.
 
Emits checkpoint tuples which are used to save the state of the IStatefulComponent across the topology.
Captures the current state of the transaction in CheckpointSpout.
 
 
Wraps IRichBolt and forwards checkpoint tuples in a stateful topology.
 
 
 
 
A Netty client for sending task messages to a remote destination (Netty server).
 
The ClientBlobStore has two concrete implementations.
 
SASL client side callback handler.
 
Stats calculations needed by storm client code.
 
 
 
 
 
 
 
 
The current state of the storm cluster.
 
This class is intended to provide runtime-context to StateStorageFactory implementors, giving information such as what daemon is creating it.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Co-group by key implementation.
A database table can be defined as a list of rows and each row can be defined as a list of columns where each column instance has a name, a value and a type.
 
 
 
 
 
Interface for aggregating values.
 
 
 
 
 
 
 
 
Object representing metadata committed to Kafka.
Generates and reads commit metadata.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
Abstract Aggregator for comparing two values in a stream.
 
 
 
This is a Java ClassLoader that will attempt to load a class from a string of source code.
Thrown when code cannot be compiled.
 
The param class for the `Testing.completeTopology`.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
Topology configs are specified as a plain old map.
The user interface to create a concrete IConfigLoader instance for use.
 
 
 
Extensions of this class take a reference to one or more configuration files.
 
Provides functionality for validating configuration fields.
 
 
 
Checks if the named type derives from the specified Class.
 
 
Validates an entry for ImpersonationAclUser.
 
Validates an Integer.
Validates Kryo Registration.
Validates each entry in a list against a list of custom Validators.
Validates each entry in a list.
Validates a list of a list of Strings.
Validates each key and each value against the respective arrays of validators.
Validates each key and value in a map of a certain type.
 
 
 
Validates that a list has no duplicates.
Validates if an object is not null.
 
 
Validates a positive number.
Validates that a number is a power of 2.
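The power-of-2 check this validator performs reduces to a standard bit trick; the class and method names below are illustrative, not Storm's API:

```java
public class PowerOfTwo {
    // A positive integer is a power of 2 iff it has exactly one bit set,
    // i.e. n > 0 and (n & (n - 1)) == 0.
    public static boolean isPowerOfTwo(long n) {
        return n > 0 && (n & (n - 1)) == 0;
    }
}
```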
 
Validates basic types.
Validates a String or a list of Strings.
 
 
 
Note: every annotation interface must have a `validatorClass()` method, and for every annotation there must be a validator class to do the validation. To add another annotation for config validation, add another annotation `@interface` class.
For custom validators.
 
 
Custom validator where exactly one of the validations must be successful.
 
 
 
Validates each entry in a list with a list of validators. Validator fields: validatorClass and entryValidatorClass.
Validates that each entry in a list is of a certain type.
Validates each key and value in a Map with a list of validators. Validator fields: validatorClass, keyValidatorClasses, valueValidatorClasses.
Validates the type of each key and value in a map. Validator fields: validatorClass, keyValidatorClass, valueValidatorClass.
Validates that there are no duplicates in a list.
 
Checks if a number is positive, optionally treating zero as valid. Validator fields: validatorClass, includeZero.
 
Validators with fields: validatorClass.
 
Complex/custom type validators.
Validators with fields: validatorClass and type.
Validates that an object is not null.
 
Field names for annotations.
 
Declares methods for validating configuration values.
Declares a method for validating configuration values that is nestable.
Read a value from the topology config map.
This class provides a mechanism to utilize the Confluent Schema Registry (https://github.com/confluentinc/schema-registry) for Storm to (de)serialize Avro generic records across a topology.
Provides a database connection.
 
 
 
 
 
 
 
Component constraint as derived from configuration.
 
 
ConstSpout -> IdBolt -> DevNullBolt. This topology measures the speed of messaging between spout->bolt and bolt->bolt. ConstSpout continuously emits a constant string; IdBolt clones and emits input tuples; DevNullBolt discards incoming tuples.
This topo helps measure the messaging peak throughput between a spout and a bolt.
This topo helps measure how fast a spout can produce data (so no bolts are attached).
Represents an operation that accepts a single input argument and returns no result.
Represents an operation that accepts a single input argument and returns no result.
 
This is here to enable testing.
 
Represents a container that a worker will run in.
 
Launches containers.
 
Could not recover the container.
 
 
 
Coordination requires the request ids to be globally unique for a while.
 
 
 
 
 
Computes the count of values.
 
 
 
An eviction policy that tracks event counts and can evict based on a threshold count.
 
 
Keeps track of approximate counts for the last 10 mins, 3 hours, 1 day, and all time.
SyncPolicy implementation that will trigger a file system sync after a certain number of tuples have been processed.
SyncPolicy implementation that will trigger a file system sync after a certain number of tuples have been processed.
A trigger that tracks event counts and calls back TriggerHandler.onTrigger() when the count threshold is hit.
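The count-threshold behavior described above (track events, fire a callback when the threshold is hit) can be sketched without any Storm types; the class below is illustrative, using a plain Runnable in place of TriggerHandler:

```java
public class CountTrigger {
    private final int threshold;
    private final Runnable handler;
    private int count;

    // Illustrative sketch of a count-based trigger: every `threshold`
    // tracked events, fire the handler and reset the count.
    public CountTrigger(int threshold, Runnable handler) {
        this.threshold = threshold;
        this.handler = handler;
    }

    public void track() {
        if (++count >= threshold) {
            count = 0;
            handler.run();
        }
    }
}
```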
 
 
 
 
A metric using Sigar to get User and System CPU utilization for a worker.
 
 
Provider interface for credential key.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
CsvScheme uses the standard RFC 4180 CSV parser. One difference from the TSV format is that fields with embedded commas will be quoted.
CsvSerializer uses the standard RFC 4180 CSV parser. One difference from the TSV format is that fields with embedded commas will be quoted.
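The RFC 4180 quoting rule these classes rely on (a field containing a comma, double quote, or line break is wrapped in double quotes, with embedded quotes doubled) can be sketched as a standalone helper; this is illustrative and not part of CsvSerializer's API:

```java
public class Rfc4180 {
    // Quote a field per RFC 4180: wrap it in double quotes if it contains
    // a comma, a double quote, or a line break; double any embedded quotes.
    public static String quoteField(String field) {
        boolean needsQuoting = field.contains(",") || field.contains("\"")
                || field.contains("\n") || field.contains("\r");
        if (!needsQuoting) {
            return field;
        }
        return "\"" + field.replace("\"", "\"\"") + "\"";
    }
}
```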
 
 
 
 
Storm configs are specified as a plain old map.
 
 
The type of process/daemon that this server is running as.
 
 
 
 
 
Filter for debugging purposes.
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
The default strategy used for blacklisting hosts.
Default implementation of EsLookupResultOutput.
 
 
 
Creates file names with the following format:
Creates file names with the following format:
 
 
 
A pool of machines that anyone can use, but topologies are not isolated.
Storm can be configured to launch worker processes as a given user.
This class implements the DNSToSwitchMapping interface. It returns the DEFAULT_RACK for every host.
 
 
 
Default implementation of resources declarer.
This is the default class to manage worker processes, including launching, killing, profiling, etc.
 
 
 
Basic SequenceFormat implementation that uses LongWritable for keys and Text for values.
Basic SequenceFormat implementation that uses LongWritable for keys and Text for values.
Default implementation of ShellLogHandler.
Default state encoder class for encoding/decoding key values.
A default implementation that uses Kryo to serialize and de-serialize the state.
 
 
 
 
 
RecordFormat implementation that uses field and record delimiters.
RecordFormat implementation that uses field and record delimiters.
 
An authorization implementation that denies everything, for testing purposes.
 
 
Resolver class of dependencies.
Main class of dependency resolver.
 
A class that is called when a TaskMessage arrives.
A class that represents a device in Linux.
 
 
 
 
 
Class that allows using a ScheduledReporter to report V2 task metrics with support for dimensions.
 
Provide methods to help Logviewer to clean up files in directories and to get a list of files without worrying about excessive memory usage.
Facility to synchronize access to HDFS directory.
 
 
 
 
 
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
An interface that must be implemented to allow pluggable DNS-name/IP-address to RackID resolvers.
 
Encapsulates the docker exec command and its command line arguments.
Encapsulates the docker inspect command and its command line arguments.
For security, we can launch worker processes inside the docker container.
 
Encapsulates the docker rm command and its command line arguments.
Encapsulates the docker run command and its command line arguments.
Encapsulates the docker stop command and its command line arguments.
Encapsulates the docker wait command and its command line arguments.
 
 
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
 
 
 
A context that emits the results to downstream processors which are in another bolt.
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Configuration for Elasticsearch connection.
Constants used in ElasticSearch examples.
Basic bolt for storing tuple to ES document.
Demonstrates an ElasticSearch Storm topology.
The user data spout.
Basic bolt for looking up document in ES.
The adapter to convert the results fetched from Elasticsearch to values.
Basic bolt for retrieving matched percolate queries.
StateFactory for providing EsState.
Estimate the throughput of all topologies.
TupleMapper defines how to extract source, index, type, and id from tuple for ElasticSearch.
 
 
 
 
 
An event is a wrapper object that gets stored in the window.
 
 
 
 
 
Context information that can be used by the eviction policy.
Eviction policy tracks events and decides whether an event should be evicted from the window or not.
The action to be taken when EvictionPolicy.evict(Event) is invoked.
An example JMS topology.
 
 
This is a basic example of a Storm topology.
This is a basic example of a Storm topology.
 
 
 
A more accurate sleep implementation.
 
 
 
Compiled executable expression.
Container for all the objects required to instantiate a topology.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
A callback that can accept an integer.
 
 
WordCount but the spout does not stop, and the bolts are implemented in java.
 
 
 
 
 
 
Uses the field with a given index to select the topic name from a tuple.
Describe each column of the field.
 
 
 
Uses the field name to select the topic name from a tuple.
Collection of unique named fields used in an ITuple.
 
 
 
Very basic blob store impl with no ACL handling.
Scheduler configuration loader which loads configs from a file.
Factory class for FileConfigLoader.
Facility to synchronize access to HDFS files.
 
Formatter interface for determining HDFS file names.
Formatter interface for determining HDFS file names.
 
 
 
This topo helps measure speed of word count.
Used by the HdfsBolt to decide when to rotate files.
Used by the HdfsBolt to decide when to rotate files.
File rotation policy that will rotate files when a certain file size is reached.
File rotation policy that will rotate files when a certain file size is reached.
 
 
 
 
Filters take in a tuple as input and decide whether or not to keep that tuple.
 
 
 
Simple `Filter` implementation that filters out any tuples that have fields with a value of `null`.
FilterOptions provides a method to select various filtering options for doing a scan of the metrics database.
 
An Assembly implementation.
 
 
Defines how the spout seeks the offset to be used in the first poll to Kafka upon topology deployment.
A class to help (de)serialize a pre-defined set of Avro schemas.
 
 
 
 
 
A function that accepts one argument and returns an Iterable of elements as its result.
A one to many transformation function.
 
 
 
Flux entry point.
 
Static utility methods for parsing flux YAML.
A generic `ShellBolt` implementation that allows you to specify output fields and even streams without having to subclass `ShellBolt` to do so.
A generic `ShellSpout` implementation that allows you to specify output fields and even streams without having to subclass `ShellSpout` to do so.
 
 
A context that emits the results to downstream processors which are in the same bolt.
All of the machines that currently have nothing assigned to them.
 
 
 
 
A simple interface to allow compatibility with non java 8 code bases.
Represents a function that accepts one argument and produces a result.
A function takes in a set of input fields and emits zero or more tuples as output.
 
A default implementation of the AvroSerializer that will just pass literal schemas back and forth.
A generic org.apache.storm.topology.IRichBolt implementation for testing/debugging the Storm JMS Spout and example topologies.
 
 
 
 
Generate a simulated load.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Used as a way to give feedback that the listener is ready for the caller to change the blob.
 
 
 
 
 
 
 
 
 
 
 
A bridge between CustomStreamGrouping and LoadAwareCustomStreamGrouping.
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Bean representation of a Storm stream grouping.
Types of stream groupings Storm allows.
The different types of groupings that are supported.
Always writes gzip out, but tests incoming to see if it's gzipped.
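Testing whether incoming bytes are gzipped, as this serialization delegate does, typically means checking the two-byte gzip magic number from RFC 1952; the helper below is a sketch of that check, not Storm's implementation:

```java
public class GzipDetect {
    // The gzip format starts with the magic bytes 0x1f 0x8b (RFC 1952).
    public static boolean looksGzipped(byte[] data) {
        return data != null && data.length >= 2
                && (data[0] & 0xff) == 0x1f
                && (data[1] & 0xff) == 0x8b;
    }
}
```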
Note, this assumes it's deserializing a gzip byte stream, and will err if it encounters any other serialization.
Note, this assumes it's deserializing a gzip byte stream, and will err if it encounters any other serialization.
UserGroupInformation#loginUserFromKeytab(String, String) changes the static fields of UserGroupInformation, especially the current logged-in user, and UserGroupInformation itself is not thread-safe.
 
 
This class provides util methods for storm-hbase connector communicating with secured HBase.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
Provides a HDFS file system backed blob store implementation.
 
HDFS blob store impl.
 
Client to access the HDFS blobStore.
Create a HDFS sink based on the URI and properties.
 
 
 
 
 
This class provides util methods for storm-hdfs connector communicating with secured HDFS.
 
This topo helps measure speed of reading from Hdfs.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Holds a cache of heartbeats from the workers.
 
 
A class that describes a cgroup hierarchy.
 
A metric wrapping an HdrHistogram.
 
Maps an org.apache.storm.tuple.Tuple object to a row in a Hive table.
 
This class provides util methods for the storm-hive connector communicating with secured Hive.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Listens for all metrics and POSTs them serialized to a configured URL.
A server that can listen for metrics from the HttpForwardingMetricsConsumer.
 
 
Nimbus could be configured with an authorization plugin.
Provides a way to automatically push credentials to a topology and to retrieve them in the worker.
 
 
 
 
 
 
 
An IBolt represents a component that takes tuples as input and produces tuples as output.
 
 
 
 
 
 
 
Common methods for all possible components in a topology.
 
 
 
 
A class that is called when a TaskMessage arrives.
This interface needs to be implemented for messaging plugin.
Allows a bolt or a spout to be informed when the credentials of the topology have changed.
Provides a way to renew credentials on behalf of a user.
 
 
 
A Function that returns the input argument itself as the result.
 
 
 
EventLogger interface for logging the event info to a sink like log file or db for inspecting the events via UI for debugging.
A wrapper for the fields that we would log.
 
 
 
Interface for handling credentials in an HttpServletRequest.
 
An interface that controls the Kryo instance used by Storm for serialization.
The interface for leader election.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Interface for storing local assignments.
This is here mostly for backwards compatibility.
Close this class to kill the topology.
This is here mostly for backwards compatibility.
 
The image manifest.
Produces metrics.
 
 
 
 
Interface to allow registration of metrics.
 
 
 
Represents an include.
 
Elasticsearch document fragment with "_index", "_type" and "_id" fields.
Elasticsearch document fragment with a single "index" field, used in bulk requests.
 
 
 
Elasticsearch document fragment containing extended index information, used in bulk index responses.
Elasticsearch document with a single "index" field, used in bulk responses.
 
 
 
Nimbus auto credential plugin that will be called on the nimbus host during topology submission.
An assignment backend which will keep all assignments and id-info in memory.
An in-memory implementation of the State.
This ITridentWindowManager instance stores all the tuples and trigger-related information in memory.
In-memory store implementation of WindowsStore, which can be backed by a persistent store.
InMemoryWindowsStoreFactory contains a single instance of InMemoryWindowsStore which will be used for storing tuples and triggers of the window.
 
 
 
 
 
A local Zookeeper instance available for testing.
 
 
A set of measurements about a stream so we can statistically reproduce it.
 
 
Annotation to inform users of how much to rely on a particular package, class or method not changing over time.
Evolving, but can break compatibility at minor releases (i.e., m.x).
Can evolve while retaining compatibility for minor release boundaries; can break compatibility only at major releases (i.e., at m.0).
No guarantee is provided as to reliability or stability across any level of release granularity.
This bolt ranks incoming objects by their count.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
This defines a transactional spout which does *not* necessarily replay the same batch every time it emits a batch for a transaction id.
Coordinator for batches.
 
 
 
This interface defines a transactional spout that reads its tuples from a partitioned set of brokers.
 
 
This interface is implemented by classes, instances of which can be passed into certain Util functions which test some collection for elements matching the IPredicate.
Storm can be configured to launch worker processes as a given user.
 
Reports the blacklist to the alert system.
 
When writing topologies using Java, IRichBolt and IRichSpout are the main interfaces to use to implement components of the topology.
When writing topologies using Java, IRichBolt and IRichSpout are the main interfaces to use to implement components of the topology.
 
 
 
 
 
 
An interface that provides access to the current scheduling state.
 
The ISerializer interface describes the methods that an object should implement to provide serialization and de-serialization capabilities to non-JVM language components.
 
 
 
A pool of machines that can be used to run isolated topologies.
 
ISpout is the core interface for implementing spouts.
Methods are not expected to be thread safe.
 
An ISqlStreamsDataSource specifies how an external data source produces and consumes data.
A bolt abstraction for supporting stateful computation.
Common methods for stateful components in the topology.
 
A windowed bolt abstraction for supporting windowing operation with state.
 
 
StateStorage provides the API for the pluggable state store used by the Storm daemons.
 
An interface for implementing different scheduling strategies for resource aware scheduling.
If the FQCN of an implementation of this interface is specified by setting the config storm.topology.submission.notifier.plugin.class, that class's notify method will be invoked when a topology is successfully submitted via the StormSubmitter class.
 
 
 
 
A plugin interface that gets invoked any time there is an action for a topology.
 
Interface for Thrift Transport plugin.
 
ITridentDataSource marks all spouts that provide data into Trident.
This interface is implemented by various Trident classes in order to gather and propagate resources that have been set on them.
 
 
 
Window manager to handle trident tuple events.
 
 
 
 
Implementation of VersionInfo that uses a properties file to get the VersionInfo.
 
 
A bolt abstraction for supporting time and count based sliding & tumbling windows.
Interface for strategy to recover heartbeats when master gains leadership.
An IWorkerHook represents a topology component that can be executed when a worker starts, and when a worker shuts down.
Provides passwords out of a jaas conf for typical MD5-DIGEST authentication support.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
 
Basic bolt for writing to any Database table.
Basic bolt for querying from any database.
 
 
 
 
 
 
 
Configuration for JedisCluster.
Builder for initializing JedisClusterConfig.
Container for managing JedisCluster.
Interfaces for containers which stores instances implementing JedisCommands.
Container builder which abstracts the two environments: single Redis instance or Redis Cluster.
Adapter for providing a unified interface for running commands over both Jedis and JedisCluster instances.
Configuration for JedisPool.
Builder for initializing JedisPoolConfig.
Batch coordination metadata object for the TridentJmsSpout.
A JmsBolt receives org.apache.storm.tuple.Tuple objects from a Storm topology and publishes JMS Messages to a destination (topic or queue).
 
JmsMessageProducer implementations are responsible for translating a org.apache.storm.tuple.Values instance into a javax.jms.Message object.
A JmsProvider object encapsulates the ConnectionFactory and Destination JMS objects the JmsSpout needs to manage a topic/queue connection over the course of its lifecycle.
A Storm Spout implementation that listens to a JMS topic or queue and outputs tuples based on the messages it receives.
 
 
 
Interface to define classes that can produce Storm Values objects from a javax.jms.Message object.
 
 
 
 
 
Describes how to join the other stream with the current stream.
 
 
 
 
An example that demonstrates the usage of PairStream.join(PairStream) to join multiple streams.
This enum defines how the output fields of a JOIN are constructed.
Provides equi-join implementation based on simple hash-join.
 
 
 
 
 
 
 
Response builder for JSON.
 
JsonSerializer implements the JSON multilang protocol.
 
 
A simple JmsTupleProducer that expects to receive JMS TextMessage objects with a body in JSON format.
Bolt implementation that can send Tuple data to Kafka.
This topology helps measure the speed of reading from Kafka and writing to HDFS.
 
Benchmark topology for measuring spout read/emit/ack performance.
Create a Kafka spout/sink based on the URI and properties.
Class representing the log head offsets, spout offsets and the lag for a topic.
Utility class for querying offset lag for kafka spout.
This class is used to manage both the partition and topic level offset metrics.
Partition level metrics.
Topic level metrics.
 
 
 
KafkaSpoutConfig defines the required configuration to connect a consumer to a consumer group, as well as the subscribing topics.
 
This enum controls when the tuple with the ConsumerRecord for an offset is marked as processed, i.e. when the offset can be committed to Kafka.
 
 
Implementation of KafkaSpoutRetryService using the exponential backoff formula.
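The exponential backoff idea behind this retry service can be sketched with a small self-contained helper. This is a generic, hypothetical sketch of the pattern (delay grows geometrically per retry, capped at a maximum), not the exact formula or class shipped in storm-kafka-client:

```java
// Hypothetical exponential-backoff helper, sketching the general pattern
// used by retry services such as KafkaSpoutRetryExponentialBackoff.
public class BackoffSketch {
    /**
     * Delay before the next retry: initialMs * ratio^retries, capped at maxMs.
     *
     * @param retries   how many retries have already happened (0 for the first)
     * @param initialMs delay before the first retry
     * @param ratio     growth factor applied per retry
     * @param maxMs     upper bound on the delay
     */
    public static long delayMillis(int retries, long initialMs, double ratio, long maxMs) {
        double delay = initialMs * Math.pow(ratio, retries);
        return Math.min((long) delay, maxMs);
    }
}
```

The cap keeps a tuple that fails many times from being deferred indefinitely far into the future.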
 
Represents the logic that manages the retrial of failed tuples.
 
This example sets up 3 topologies to put data in Kafka via the KafkaBolt, and shows how to set up a topology that reads from some Kafka topics using the KafkaSpout.
This example is similar to KafkaSpoutTopologyMainNamedTopics, but demonstrates subscribing to Kafka topics with a regex.
 
 
 
Wraps transaction batch information.
Defines the required Kafka-related configuration for the Trident spouts.
 
 
 
 
 
ISpoutPartition that wraps TopicPartition information.
 
 
A list of Values in a tuple that can be routed to a given stream: RecordTranslator.apply(org.apache.kafka.clients.consumer.ConsumerRecord<K, V>).
The KafkaTupleListener handles state changes of a kafka tuple inside a KafkaSpout.
Map a kerberos principal to a local user.
 
Implements SASL logic for storm worker client processes.
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
KeyFactory defines conversion of state key (which could be compounded) -> Redis key.
Default Key Factory.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Class that hands over the key sequence number, which indicates the number of updates made to a blob.
Specifies all the valid types of keys and their values.
A state that supports key-value mappings.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
 
 
 
 
Keeps track of approximate latency for the last 10 mins, 3 hours, 1 day, and all time.
 
A callback function when nimbus gains leadership.
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
Implements missing sequence functions in Java compared to Clojure.
 
 
 
Represents the load that a Bolt is currently under to help in deciding where to route a tuple, to help balance the load.
 
 
A bolt that simulates a real world bolt based off of statistics about it.
Configuration for a simulated spout.
 
Holds a list of the current loads.
A metrics server that records and reports metrics for a set of running topologies.
 
A spout that simulates a real world spout based off of statistics about it.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Factory class for creating local assignments.
A stand alone storm cluster that runs inside a single process.
Simple way to configure a LocalCluster to meet your needs.
 
Launch Containers in local mode.
A Local way to test DRPC.
 
Provides a local file system backed blob store implementation for Nimbus.
 
Called periodically on a non-leader nimbus to update its blobs, based on the state stored in ZooKeeper, so that it stays in sync with the operations performed on the leader nimbus.
Represents a resource that is localized on the supervisor.
A set of resources that we can look at to see which ones we retain and which ones should be removed.
Represents a blob that is cached locally on disk by the supervisor.
A locally cached blob for the topology.
 
A Client blob store for LocalMode.
 
Local Resource requested by the topology.
A simple, durable, atomic K/V database.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Cleans dead workers logs and directories.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
Listens for cluster related metrics, dumps them to log.
Listens for all metrics, dumps them to log.
 
Simple bolt that does nothing other than LOG.info() every tuple received.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
Constants which are used across logviewer related classes.
 
 
 
 
 
Handles HTTP requests for Logviewer.
 
The main entry of Logviewer.
Launch a sub process and write files out to logs.
Computes the long sum of the input values.
Mapped response for document lookup.
 
 
 
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
A function used to assign partitions to this spout.
 
A one-one transformation function.
 
 
 
Processor for executing Stream.map(MapFunction) and Stream.flatMap(FlatMapFunction) functions.
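The one-one (map) versus one-many (flatMap) distinction this processor handles can be sketched with plain java.util.stream, independent of Storm's streams API (the class and method names below are illustrative only):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MapFlatMapSketch {
    // map: exactly one output element per input element (one-one)
    public static List<Integer> lengths(List<String> lines) {
        return lines.stream()
                .map(String::length)
                .collect(Collectors.toList());
    }

    // flatMap: zero or more output elements per input element (one-many)
    public static List<String> words(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.split(" ")))
                .collect(Collectors.toList());
    }
}
```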
 
 
 
 
 
This aggregator computes the maximum of aggregated tuples in a stream.
This aggregator computes the maximum of aggregated tuples in a stream.
 
 
 
 
 
 
 
 
Encapsulates the state used for batching up messages.
 
 
Class containing metric values and all identifying fields to be stored in a MetricStore.
A MetricException is used to describe an error using a MetricStore.
 
Class for removing expired metrics and unused metadata from the RocksDB store.
 
 
 
 
Interface used to callback metrics results from a scan.
 
 
 
 
 
 
This aggregator computes the minimum of aggregated tuples in a stream.
This aggregator computes the minimum of aggregated tuples in a stream.
The param arg for `Testing.withSimulatedTimeCluster`, `Testing.withTrackedCluster` and `Testing.withLocalCluster`.
 
 
 
 
 
 
 
 
 
 
 
 
Acts as a MultiCount stat, but keeps track of approximate counts for the same keys over the last 10 mins, 3 hours, 1 day, and all time.
Keeps track of approximate latency for the same keys over the last 10 mins, 3 hours, 1 day, and all time.
This is a basic example of a Storm topology.
 
 
 
 
 
 
Some topologies might spawn some threads within bolts to do some work and emit tuples from those threads.
 
 
 
 
Filter that returns all partitions for the specified topics.
A `Filter` implementation that inverts another delegate `Filter`.
 
 
 
 
 
Class representing the information needed to query Kafka for log head offsets and consumer offsets, and the lag between them, for the new Kafka spout using the new consumer API.
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
NimbusBlobStore is a USER facing client API to perform basic operations such as create, update, delete and read for local and hdfs blob store.
Client used for connecting to nimbus.
 
An interface to allow callbacks with a thrift nimbus client.
Test for nimbus heartbeats max throughput; this is a client that collects the statistics.
 
This exception class is used to signify a problem with nimbus leader identification.
Implementation of WorkerMetricsProcessor that sends metric data to Nimbus for processing.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Represents a single node in the cluster.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
A pool of nodes that can be used to run topologies.
 
Place executors into slots in a round robin way, taking into account component spreading among different hosts.
 
Interface for calculating the number of existing executors scheduled on an object (rack or node).
 
Interface for calculating the number of existing executors scheduled on an object (rack or node).
 
 
 
A no-op authorization implementation that illustrates the info available for authorization decisions.
 
 
A NoOutputException states that no data has been received from the connected non-JVM process.
Stats related to something with a normal distribution, and a way to randomly simulate it.
An offer of resources with normalized resource names.
A resource request with normalized resource names.
Resources that have been normalized.
Intended for NormalizedResources wrappers that handle memory.
File rotation policy that will never rotate...
File rotation policy that will never rotate...
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
Annotation that can be used to explicitly call out public static final String fields that are not configs.
This class tracks the time-since-last-modify of a "thing" in a rolling fashion.
The NullPartitioner partitions every tuple to the empty string.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
A representation of a Java object that given a className, constructor arguments, and properties, can be instantiated.
 
Class to keep track of resources on a rack or node.
A class containing individual object resources as well as cumulative stats.
 
 
 
 
 
 
 
 
 
 
Manages acked and committed offsets for a TopicPartition.
This allows you to submit a Runnable with a key.
 
 
 
 
The parent interface for any kind of streaming operation.
Parent interface for Trident `Filter`s and `Function`s.
A one to many transformation function which is aware of Operation (lifecycle of the Trident component).
A one-one transformation function which is aware of Operation (lifecycle of the Trident component).
Options of State.
A data structure whose fields are all public; you can access and modify all of them directly.
This output collector exposes the API for emitting tuples from an IRichBolt.
 
 
 
A set of measurements about a stream so we can statistically reproduce it.
 
Provides an API to simulate the output of a stream.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
 
A pair of values.
A function that accepts one argument and returns an Iterable of Pair as its result.
A function that accepts an argument and produces a key-value Pair.
Represents a stream of key-value pairs.
A ValueJoiner that joins two values to produce a Pair of the two values as the result.
Extracts a typed key-value pair from a tuple.
Table that can be converted to a stream.
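The key-value pair shape shared by the Pair-related classes above can be sketched as a minimal immutable container. This is a hypothetical stand-in for illustration; Storm ships its own Pair class with a similar shape:

```java
// Hypothetical minimal immutable pair, illustrating the shape that
// PairFunction / PairValueJoiner-style APIs produce and consume.
public class PairSketch<K, V> {
    public final K first;
    public final V second;

    private PairSketch(K first, V second) {
        this.first = first;
        this.second = second;
    }

    public static <K, V> PairSketch<K, V> of(K first, V second) {
        return new PairSketch<>(first, second);
    }
}
```

A key-value stream is then simply a stream of such pairs, with grouping and joining keyed on `first`.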
 
 
This exception is thrown when parse errors are encountered.
A variation on FieldGrouping.
This interface is responsible for choosing a subset of the target tasks to use for a given key.
A basic implementation of target selection.
This implementation of AssignmentCreator chooses two arbitrary tasks.
This interface chooses one element from a task assignment to send a specific Tuple to.
 
 
 
 
A very basic API that will provide a password for a given user name.
Utility functions to make Path manipulation slightly less verbose.
Filter that returns all partitions for topics matching the given Pattern.
 
Mapped response for percolate.
 
 
Wraps a IStatefulWindowedBolt and handles the execution.
An example that demonstrates the usage of IStatefulWindowedBolt with window persistence.
 
Deprecated.
Deprecated.
 
Represents a predicate (boolean-valued function) of a value.
Serializable callback for use with the KafkaProducer on KafkaBolt.
 
 
 
 
A consumer that prints the input to stdout.
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
A processor processes a stream of elements and produces some result.
Context information passed to the Processor.
 
Node that wraps a processor in the stream.
 
In local mode, ProcessSimulator keeps track of Shutdownable objects in place of actual processes (in cluster mode).
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
 
Emits a random integer and a timestamp value (offset by one day), every 100 ms.
This spout generates random whole numbers, up to the given maxNumber, for the given fields.
 
 
 
This class wraps an objects and its associated count, including any additional data fields.
 
Blacklisting strategy just like the default one, but specifically set up for use with the resource aware scheduler.
Represents a single node in the cluster.
 
A Counter metric that also implements a Gauge to report the average rate of events per second over 1 minute.
This class is a utility to track the rate of something.
 
 
This is a good example of doing complex Distributed RPC on top of Storm.
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
The read-only interface to a StringMetadataCache allowed to be used by any thread.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Formats a Tuple object into a byte array that will be written to HDFS.
Formats a Tuple object into a byte array that will be written to HDFS.
Translate a ConsumerRecord to a tuple.
RecordTranslator that delegates to a Scheme.
Container for managing JedisCluster.
IBackingMap implementation for Redis Cluster environment.
RedisClusterMapState.Factory provides Redis Cluster environment version of StateFactory.
Implementation of State for Redis Cluster environment.
RedisClusterState.Factory implements StateFactory for Redis Cluster environment.
BaseQueryFunction implementation for Redis Cluster environment.
BaseStateUpdater implementation for Redis Cluster environment.
This interface represents Jedis methods exhaustively which are used on storm-redis.
Adapter class to make Jedis instance play with BinaryRedisCommands interface.
Adapter class to make JedisCluster instance play with BinaryRedisCommands interface.
The binary version of the container builder that abstracts over the two environments: a single Redis instance or a Redis Cluster.
Interfaces for containers which store instances implementing RedisCommands.
 
Create a Redis sink based on the URI and properties.
RedisDataTypeDescription defines the data type, and an additional key if needed, for looking up / storing tuples.
 
Basic bolt for querying from Redis and filters out if key/field doesn't exist.
RedisFilterMapper defines the spec used for querying a value from Redis and filtering.
A redis based implementation that persists the state in Redis.
An iterator over RedisKeyValueState.
 
Basic bolt for querying from Redis and emits response as tuple.
RedisLookupMapper defines the spec used for querying a value from Redis and converting the response to a tuple.
RedisMapper is for defining data type for querying / storing from / to Redis.
IBackingMap implementation for single Redis environment.
RedisMapState.Factory provides single Redis environment version of StateFactory.
Implementation of State for single Redis environment.
RedisState.Factory implements StateFactory for single Redis environment.
BaseQueryFunction implementation for single Redis environment.
BaseStateUpdater implementation for single Redis environment.
Basic bolt for writing to Redis.
RedisStoreMapper defines the spec used for storing a value to Redis.
 
 
 
 
The Reducer performs an operation on two values of the same type producing a result of the same type.
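An operation of this shape (two values of one type in, one value of the same type out) corresponds to java.util.function.BinaryOperator in the standard library. A minimal sum-reduction sketch, using only the JDK:

```java
import java.util.function.BinaryOperator;
import java.util.stream.Stream;

public class ReducerSketch {
    // A reducer: takes two values of one type, returns a value of the same type.
    static final BinaryOperator<Long> SUM = (a, b) -> a + b;

    // Fold a stream of values down to one result, starting from an identity.
    public static long reduce(Stream<Long> values, long identity) {
        return values.reduce(identity, SUM);
    }
}
```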
 
 
 
 
Provides reference counting of tuples.
 
 
This class is used as part of testing Storm.
 
 
 
 
 
Runnable that reports locally collected worker heartbeats to the master. The supervisor should take care of heartbeat integrity for the master's heartbeat recovery: a non-null node id means the heartbeats are complete, so during a heartbeat recovery the master can go on to check and wait for other nodes.
Get maven repository instance.
Request context.
 
 
 
 
 
 
 
This is a new base interface that can be used by anything that wants to mirror RAS's basic API.
A plugin to support resource isolation and limitation within Storm.
Provides translation between normalized resource maps and resource value arrays.
 
Provides resource name normalization for resource maps.
 
 
 
 
 
Compiles a scalar expression (RexNode) to Java source code String.
 
 
 
 
 
 
Class representing the data used as a Key in RocksDB.
Class designed to perform all metrics inserts into RocksDB.
 
 
This bolt aggregates counts from multiple upstream bolts.
This bolt performs rolling counts of incoming objects, i.e. sliding window based counting.
This topology does a continuous computation of the top N words that the topology has seen in terms of cardinality.
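The rolling-count idea behind these bolts can be sketched with a slot-based counter: counts are kept per time slice in a ring of slots, and advancing to a new slice forgets the oldest one. This is a simplified, hypothetical sketch of the technique (storm-starter implements it with its own counter classes):

```java
import java.util.HashMap;
import java.util.Map;

// Slot-based rolling counter: totals per object over the last numSlots
// time slices; advance() drops the oldest slice.
public class RollingCounterSketch<T> {
    private final Map<T, long[]> slots = new HashMap<>();
    private final int numSlots;
    private int head = 0;

    public RollingCounterSketch(int numSlots) {
        this.numSlots = numSlots;
    }

    public void increment(T obj) {
        slots.computeIfAbsent(obj, k -> new long[numSlots])[head]++;
    }

    /** Move to the next time slice; the new head replaces the oldest slice. */
    public void advance() {
        head = (head + 1) % numSlots;
        for (long[] counts : slots.values()) {
            counts[head] = 0;
        }
    }

    /** Total count for obj across all retained slices. */
    public long total(T obj) {
        long[] counts = slots.get(obj);
        if (counts == null) {
            return 0;
        }
        long sum = 0;
        for (long c : counts) {
            sum += c;
        }
        return sum;
    }
}
```

Emitting `total()` for each object on every advance yields the sliding-window counts a downstream ranking bolt can consume.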
Expires keys that have not been updated in the configured number of seconds.
 
 
 
 
Assign partitions in a round robin fashion for all spouts, not just the ones that are alive.
 
 
Send and receive SASL tokens.
Implements SASL logic for storm worker client processes.
Deprecated.
 
 
 
Authorize or deny client requests based on existence and completeness of client's SASL authentication.
 
Base class for SASL authentication plugin.
 
 
 
 
A utility class to cache the scheduler config and refresh after the cache expires.
 
This class serves as a mechanism to return results and messages from a scheduling strategy to the Resource Aware Scheduler.
 
 
 
 
A set of topology names that will be killed when this is closed, or when the program exits.
 
SequenceFileReader<KeyT extends org.apache.hadoop.io.Writable,ValueT extends org.apache.hadoop.io.Writable>
 
 
 
 
 
 
Interface for converting Tuple objects to HDFS sequence file key-value pairs.
Interface for converting TridentTuple objects to HDFS sequence file key-value pairs.
 
 
 
 
 
 
Allow Utils to delegate meta serialization.
 
 
Provides a way using a service loader to register Kryo serializers with the SerializationFactory without needing to modify the config.
Interface to be implemented for serlializing and de-serializing the state.
 
 
SASL server side callback handler for kerberos auth.
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Elasticsearch document fragment containing shard success information, used in percolate and bulk index responses.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
A request for a shared memory region off heap between workers on a node.
A request for a shared memory region off heap, but only within a worker.
A request for a shared memory region on heap.
 
A bolt that shells out to another process to process tuples.
A data structure for ShellBolt that includes two FIFO queues: one for task ids (unbounded) and one for bolt messages (bounded).
Contains convenience functions for running shell commands for cases that are too simple to need a full ShellUtils implementation.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Handle logging from non-JVM processes.
ShellMsg is an object that represents the data sent to a shell component from a process that implements a multi-language protocol.
 
 
 
 
 
This is an IOException with an exit code added.
OSType detection.
A simple shell command executor.
 
 
An authorization implementation that simply checks if a user is allowed to perform specific operations.
An implementation of interface CharStream, where the stream is assumed to contain only ASCII characters (without Unicode processing).
 
 
 
 
 
 
A client callback handler that supports a single username and password.
 
Simple transport for Thrift plugin.
Takes a version string and parses out a Major.Minor version.
An authorization implementation that simply checks a whitelist of users that are allowed to use the cluster.
A simple implementation that evicts the largest un-pinned entry from the cache.
 
 
 
 
Example of a simple custom bolt for joining two streams.
Example of using a simple custom join bolt.
A Cluster that only allows modification to a single topology.
A Principal that represents a user.
 
This topology does a continuous computation of the top N words that the topology has seen in terms of cardinality.
Represents configuration of sliding window based on count of events.
This class represents a sliding window strategy based on the sliding window count and sliding interval count from the given slidingCountWindow configuration.
Represents configuration of sliding window based on duration.
This class represents a sliding window strategy based on the sliding window duration and sliding interval from the given slidingDurationWindow configuration.
Computes sliding window sum.
Windowing based on tuple timestamp (e.g. the time when the tuple is generated rather than when it is processed).
Computes sliding window sum.
This class counts objects in a sliding window fashion.
A sliding window specification based on a window length and sliding interval.
Computes sliding window sum.
A sample topology that demonstrates the usage of IWindowedBolt to calculate sliding window sum.
 
This class provides per-slot counts of the occurrences of objects.
A repeating pattern of skew in processing times.
 
 
 
The Bolt implementation for Socket.
Create a Socket data source based on the URI and properties.
Spout for Socket data.
Elasticsearch document fragment with a single field, "source", used in bulk requests.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
Bean representation of a Storm spout.
 
 
 
SpoutMsg is an object that represents the data sent from a shell spout to a process that implements a multi-language spout.
 
 
This output collector exposes the API for emitting tuples from an IRichSpout.
Methods are not thread safe.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
A JmsProvider that uses the Spring framework to obtain a JMS ConnectionFactory and Destination objects.
 
 
Define the keywords that can occur in a CREATE TABLE statement.
 
The state of the component that is either managed by the framework (e.g. in case of IStatefulBolt) or managed by the individual components themselves.
There are three different kinds of state:
The interface of State Encoder.
A factory for creating State instances.
 
Wraps a IStatefulBolt and manages the state of the bolt.
Top-level interface for processors that do stateful processing.
An example topology that demonstrates the use of IStatefulBolt to manage state.
 
Wraps a IStatefulWindowedBolt and handles the execution.
A simple example that demonstrates the usage of IStatefulWindowedBolt to save the state of the windowing operation to avoid re-computation in case of failures.
Window manager that handles windows with state persistence.
A stateful word count that uses PairStream.updateStateByKey(StateUpdater) to save the counts in a key value state.
 
 
Used by the StateFactory to create new state instances.
An example that uses Stream.stateQuery(StreamState) to query the state.
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
Interface for updating state.
 
 
This window manager uses WindowsStore for storing tuples and other trigger related information.
Root resource (exposed at "storm" path).
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
This is a hack to use Calcite Context.
This is based on SlimDataContext in Calcite and borrows some code from DataContextImpl in Calcite.
Client for connecting to Elasticsearch.
 
 
 
 
 
 
SQL parser, generated from Parser.jj by JavaCC.
Token literal values and constants.
Token Manager.
 
 
 
 
 
 
The StormSql class provides standalone, interactive interfaces to execute SQL statements over streaming data.
 
 
 
 
 
 
Use this class to submit topologies to run on the Storm cluster.
Interface used to track progress of file upload.
 
The timer defined in this file is very similar to java.util.Timer, except it integrates with Storm's time simulation capabilities.
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Represents a stream of values.
A Stream represents the core data model in Trident, and can be thought of as a "stream" of tuples that are processed as a series of small batches.
A builder for constructing a StormTopology via the Storm streams API (DSL).
Represents a stream of tuples from one Storm component (Spout or Bolt) to another (an edge in the topology DAG).
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
A wrapper for the stream state which can be used to query the state via Stream.stateQuery(StreamState).
Utility class for (Input/Output)Stream.
 
This topology helps measure the speed of writing to HDFS.
 
 
This class provides a method to pass data from the test bolts and spouts to the test method, via the worker log.
Spout that pre-computes a list of 30k fixed-length random strings.
 
Class to create and use a cache that stores metadata string information in memory.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
This Exception is thrown when registered ISubmitterHook could not be initialized or invoked.
A class that implements operations that can be performed on a cgroup subsystem.
An enum class describing the subsystems that can be used.
A Bolt that does processing for a subsection of the complete graph.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Client for interacting with the Supervisor server; currently the supervisor server is used mainly for the cases below.
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
An authorization implementation that simply checks if a user is allowed to perform specific operations.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
A runnable which will synchronize assignments to node local and then worker processes.
 
Interface for controlling when the HdfsBolt syncs and flushes the filesystem.
Interface for controlling when the HdfsBolt syncs and flushes the filesystem.
 
A class that implements system operations for using cgroups.
 
 
 
Class to store task-related V2 metrics dimensions.
Metric repository to allow reporting of task-specific metrics.
 
 
 
 
 
 
 
 
A utility that helps with testing topologies, Bolts and Spouts.
A topology that has all messages captured and can be read later on.
Simply produces a boolean to see if a specific state is true or false.
A simple UI filter that should only be used for testing purposes.
This is the core interface for Storm Java testing; unit testing logic usually goes in the run method.
 
 
 
Prints the tuples to stdout.
 
Facilitates testing of non-public methods in the parent class.
 
 
 
 
 
 
 
 
 
 
 
The purpose for which the Thrift server is created.
 
 
 
 
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
WordCount but the spout goes at a predefined rate and we collect proper latency statistics.
 
 
 
This class implements time simulation support.
 
Deprecated.
 
 
 
 
 
 
 
 
 
 
 
 
Eviction policy that evicts events based on time duration.
Wait for a node to report worker heartbeats until a configured timeout.
 
 
 
 
Interface to be implemented for extracting timestamp from a tuple.
Invokes TriggerHandler.onTrigger() after the duration.
 
 
Describes the input token stream.
Token Manager Error.
Handles assigning partitions to the consumer and updating the rebalance listener.
 
Singleton comparator of TopicPartitions.
 
Cache topologies and topology confs from the blob store.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Actions that can be done to a topology in nimbus.
TopologyBuilder exposes the Java API for specifying a topology for Storm to execute.
A `TopologyContext` is given to bolts and spouts in their `prepare()` and `open()` methods, respectively.
Bean representation of a topology.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
Configuration for a simulated topology.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
Marker interface for objects that can produce `StormTopology` objects.
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
This bolt merges incoming Rankings.
A tracked topology keeps metrics for every bolt and spout.
 
Class that contains the logic to extract the transactional state info from zookeeper.
 
Deprecated.
 
 
 
 
 
 
 
 
 
Interface for publishing tuples to a stream and reporting exceptions (to be displayed in Storm UI).
 
A Trident topology example.
A fixed batch spout.
 
 
 
Trident implementation of the JmsSpout.
This example sets up a few topologies to put random strings in Kafka topics via the KafkaBolt, and shows how to set up a Trident topology that reads from some Kafka topics using the KafkaSpout.
This example is similar to TridentKafkaClientTopologyNamedTopics, but demonstrates subscribing to Kafka topics with a regex.
 
 
 
 
A simple example that demonstrates the usage of Stream.map(MapFunction) and Stream.flatMap(FlatMapFunction) functions.
This class demonstrates different usages of the Stream.minBy(String) and Stream.maxBy(String) operations on a trident Stream.
This class demonstrates different usages of the Stream.minBy(String, Comparator), Stream.min(Comparator), Stream.maxBy(String, Comparator), and Stream.max(Comparator) operations on a trident Stream.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Extends AbstractList so that it can be emitted directly as a Storm tuple.
 
 
 
 
 
Sample application of trident windowing which uses an in-memory store for storing tuples in windows.
 
 
The callback fired by TriggerPolicy when the trigger condition is satisfied.
Triggers the window calculations based on the policy.
 
 
TsvScheme uses a simple delimited-format implementation based on string splitting, and it supports a user-defined delimiter.
TsvSerializer uses a simple delimited-format implementation based on string splitting, and it supports a user-defined delimiter.
Represents tumbling count window configuration.
This class represents a tumbling window strategy based on the window count from the given tumblingCountWindow configuration.
Represents tumbling duration window configuration.
This class represents a tumbling window strategy based on the window duration from the given tumblingDurationWindow configuration.
Computes sliding window sum.
Computes sliding window sum.
A tumbling window specification.
The tuple is the main data structure in Storm.
A tuple of ten elements along the lines of Scala's Tuple.
A tuple of three elements along the lines of Scala's Tuple.
A tuple of four elements along the lines of Scala's Tuple.
A tuple of five elements along the lines of Scala's Tuple.
A tuple of six elements along the lines of Scala's Tuple.
A tuple of seven elements along the lines of Scala's Tuple.
A tuple of eight elements along the lines of Scala's Tuple.
A tuple of nine elements along the lines of Scala's Tuple.
 
 
A TimestampExtractor that extracts timestamp from a specific field in the tuple.
 
 
TupleMapper defines how to extract key and value from tuple for Redis.
 
Interface defining a mapping from storm tuple to kafka key and message.
 
A generic interface for mapping a Tuple to typed values.
Factory for constructing typed tuples from a Tuple based on indices.
A Window that contains Tuple objects.
Holds the expired, new and current tuples in a window.
An iterator based implementation over the events in a window.
 
An example that illustrates the usage of typed tuples (TupleN<..>) and TupleValueMappers.
 
Main class.
 
 
 
 
 
Convenience utility class for building URLs.
 
 
 
 
 
 
 
A thread that can answer if it is sleeping in the case of simulated time.
 
An interface that is used to inform config validation what to look at.
An interface for joining two values to produce a result.
Extracts a single typed value from a tuple.
 
A convenience class for making tuple values using new Values("field1", 2, 3) syntax.
Constructs a Values from a Tuple based on indices.
 
 
 
 
 
 
Abstract parent class of component definitions.
 
A Progressive Wait Strategy.
 
An eviction policy that tracks count based on watermark ts and evicts events up to the watermark based on a threshold count.
A trigger policy that tracks event counts and sets the context for eviction policy to evict based on latest watermark time.
Watermark event used for tracking progress of time when processing event based ts.
Tracks tuples across input streams and periodically emits watermark events.
An eviction policy that evicts events based on time duration taking watermark time and event lag into account.
Handles watermark events and triggers TriggerHandler.onTrigger() for each window interval that has events to be processed up to the watermark ts.
 
 
 
The window specification within Stream.
A view of events in a sliding window.
Windowing configuration with window and sliding length.
 
An IWindowedBolt wrapper that does the windowing of tuples.
 
A windowed word count example.
Kryo serializer/deserializer for values that are stored as part of windowing.
A callback for expiry, activation of events tracked by the WindowManager.
Tracks a window of events and fires WindowLifecycleListener callbacks on expiry of events or activation of the window due to TriggerPolicy.
Node that captures the windowing configurations.
A loading cache abstraction for caching WindowState.WindowPartition.
Builder interface for WindowPartitionCache.
The interface for loading entries into the cache.
The reason why an entry got evicted from the cache.
A callback interface for handling removal of events from the cache.
State implementation for windowing operation.
StateFactory instance for creating WindowsState instances.
StateUpdater<WindowState> instance which removes successfully emitted triggers from store.
Store for storing window-related entities like windowed tuples, triggers, etc.
This class wraps key and value objects which can be passed to putAll method.
Factory to create instances of WindowsStore.
A wrapper around the window related states that are checkpointed.
 
Strategy for windowing which will have respective trigger and eviction policies.
TridentProcessor implementation for windowing operations on trident stream.
 
 
Connects to the 'WordCount' HBase table and prints counts for each word.
This bolt is used by the HBase example.
 
 
 
An example that computes word counts and finally emits the results to an external bolt (sink).
This topology demonstrates Storm's stream groupings and multilang capabilities.
 
This topology demonstrates Storm's stream groupings and multilang capabilities.
 
 
 
 
 
 
 
 
 
 
Factory class for recovery strategy.
Bean representation of a Storm worker hook.
A class that knows how to operate on the worker log directory.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
 
 
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
Allow for SASL authentication using worker tokens.
A Client callback handler for a WorkerToken.
 
The set of fields this struct contains, along with convenience methods for finding and manipulating them.
The WorkerTokenManager manages the life cycle of worker tokens in nimbus.
 
 
 
 
Wraps the generated TException to allow getMessage() to return a valid string.
Wraps the generated TException to allow getMessage() to return a valid string.
Wraps the generated TException to allow getMessage() to return a valid string.
Wraps the generated TException to allow getMessage() to return a valid string.
 
Wraps the generated TException to allow getMessage() to return a valid string.
Wraps the generated TException to allow getMessage() to return a valid string.
Wraps the generated TException to allow getMessage() to return a valid string.
Wraps the generated TException to allow getMessage() to return a valid string.
The writable interface to a StringMetadataCache, intended to be used by a single RocksDbMetricsWriter instance.