Last Updated: 5/8/2026
feldera.pipeline_builder module
PipelineBuilder
class feldera.pipeline_builder.PipelineBuilder(
client: FelderaClient,
name: str,
sql: str,
udf_rust: str = '',
udf_toml: str = '',
description: str = '',
compilation_profile: CompilationProfile = CompilationProfile.OPTIMIZED,
runtime_config: RuntimeConfig = <feldera.runtime_config.RuntimeConfig object>,
runtime_version: str | None = None,
)
Bases: object
A builder for creating a Feldera Pipeline.
Parameters:
- client – The FelderaClient instance
- name – The name of the pipeline
- description – The description of the pipeline
- sql – The SQL code of the pipeline
- udf_rust – Rust code for UDFs
- udf_toml – Rust dependencies required by UDFs (in the TOML format)
- compilation_profile – The CompilationProfile to use
- runtime_config – The RuntimeConfig to use. Enables configuring the runtime behavior of the pipeline, such as fault tolerance, storage, and Resources
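Putting the parameters together, a typical builder flow looks like the sketch below. The SQL program, pipeline name, and connection URL are invented for illustration, and the FelderaClient constructor arguments are an assumption; the feldera import is deferred into the function so the sketch reads standalone.

```python
# Hypothetical SQL program, for illustration only.
SQL = """
CREATE TABLE prices (ticker VARCHAR, price DOUBLE);
CREATE MATERIALIZED VIEW max_price AS
SELECT ticker, MAX(price) AS max_price FROM prices GROUP BY ticker;
"""

def build_and_start(api_url: str):
    # Deferred import: the sketch can be read (and parsed) without the SDK installed.
    from feldera import FelderaClient, PipelineBuilder

    client = FelderaClient(api_url)  # constructor arguments are an assumption
    builder = PipelineBuilder(client, name="prices_demo", sql=SQL)
    pipeline = builder.create_or_replace()  # replaces the pipeline if it already exists
    pipeline.start()
    return pipeline
```

With a running Feldera instance, `build_and_start("http://localhost:8080")` would compile the program and start the pipeline.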
__init__()
__init__(
client: FelderaClient,
name: str,
sql: str,
udf_rust: str = '',
udf_toml: str = '',
description: str = '',
compilation_profile: CompilationProfile = CompilationProfile.OPTIMIZED,
runtime_config: RuntimeConfig = <feldera.runtime_config.RuntimeConfig object>,
runtime_version: str | None = None,
)
create()
create(wait: bool = True) -> Pipeline
Create the pipeline if it does not exist.
Parameters:
- wait – Whether to wait for the pipeline to be created. True by default
Returns:
- The created pipeline
create_or_replace()
create_or_replace(wait: bool = True) -> Pipeline
Creates a pipeline if it does not exist and replaces it if it exists.
If the pipeline exists and is running, it will be stopped and replaced.
Parameters:
- wait – Whether to wait for the pipeline to be created. True by default
Returns:
- The created pipeline
Pipeline
class feldera.pipeline.Pipeline(client: FelderaClient)
Bases: object
__init__()
__init__(client: FelderaClient)
activate()
activate(wait: bool = True, timeout_s: float | None = None) -> PipelineStatus | None
Activates the pipeline when starting from STANDBY mode. Only applicable when the pipeline is starting from a checkpoint in object store.
Parameters:
- wait – Set True to wait for the pipeline to activate. True by default
- timeout_s – The maximum time (in seconds) to wait for the pipeline to activate.
all()
all(client: FelderaClient) -> List[Pipeline]
Get all pipelines.
Parameters:
- client – The FelderaClient instance.
Returns:
- A list of Pipeline objects.
approve()
approve()
Approves the pipeline to proceed with bootstrapping.
This method is used when a pipeline has been started with bootstrap_policy=BootstrapPolicy.AWAIT_APPROVAL and is currently in the AWAITING_APPROVAL state. The pipeline will wait for explicit user approval before proceeding with the bootstrapping process.
bootstrap_policy()
bootstrap_policy() -> BootstrapPolicy | None
Return the bootstrap policy of the pipeline.
checkpoint()
checkpoint(wait: bool = False, timeout_s: float | None = None) -> int
Checkpoints this pipeline.
Parameters:
- wait – If true, will block until the checkpoint completes.
- timeout_s – The maximum time (in seconds) to wait for the checkpoint to complete (defaults to None = no timeout is enforced).
Returns:
- The checkpoint sequence number.
Raises:
- FelderaAPIError – If enterprise features are not enabled.
checkpoint_status()
checkpoint_status(seq: int) -> CheckpointStatus
Checks the status of the given checkpoint.
Parameters:
- seq – The checkpoint sequence number.
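With wait=False, checkpoint() returns a sequence number that can be polled via checkpoint_status(). The loop below makes that polling explicit as an illustrative sketch: `get_status` stands in for a bound `pipeline.checkpoint_status` call, and the string statuses mirror the CheckpointStatus member names listed later on this page.

```python
import time

def wait_for_checkpoint(get_status, seq: int, timeout_s: float = 30.0,
                        poll_interval_s: float = 0.25) -> str:
    """Poll a checkpoint's status until it finishes (illustrative sketch).

    `get_status` stands in for `pipeline.checkpoint_status`; here it returns
    one of the CheckpointStatus member names as a string
    ("InProgress", "Success", "Failure", "Unknown").
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status(seq)
        if status in ("Success", "Failure"):
            return status
        time.sleep(poll_interval_s)
    raise TimeoutError(f"checkpoint {seq} did not complete within {timeout_s}s")
```

With the real SDK, passing wait=True to checkpoint() achieves the same effect; the loop only spells out what the wait involves.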
checkpoints()
checkpoints() -> List[CheckpointMetadata]
Returns the list of checkpoints for this pipeline.
clear_storage()
clear_storage(wait: bool = True, timeout_s: float | None = None, poll_interval_s: float = 0.25)
Clears the storage of the pipeline. Once started, this action cannot be canceled and will delete all pipeline storage.
Parameters:
- wait – Set True to wait for the pipeline storage to become cleared, or set False to immediately return. Default is True.
- timeout_s – Timeout waiting for storage to become cleared. None = no timeout is enforced (default). Not used if wait=False.
- poll_interval_s – Polling interval at which to check while waiting if storage is cleared (default is every 0.25 seconds). Not used if wait=False.
commit_transaction()
commit_transaction(
transaction_id: int | None = None,
wait: bool = True,
timeout_s: float | None = None,
)
Commit the currently active transaction.
Parameters:
- transaction_id – If provided, the function verifies that the currently active transaction matches this ID. If the active transaction ID does not match, the function raises an error.
- wait – If True, the function blocks until the transaction either commits successfully or the timeout is reached. If False, the function initiates the commit and returns immediately without waiting for completion. The default value is True.
- timeout_s – Maximum time (in seconds) to wait for the transaction to commit when wait is True. If None, the function will wait indefinitely.
Raises:
- RuntimeError – If there is currently no transaction in progress.
- ValueError – If the provided transaction_id does not match the current transaction.
- TimeoutError – If the transaction does not commit within the specified timeout (when wait is True).
- FelderaAPIError – If the pipeline fails to commit a transaction.
completion_token_status()
completion_token_status(token: str) -> CompletionTokenStatus
Returns the status of the completion token.
created_at()
created_at() -> datetime
Return the creation time of the pipeline.
delete()
delete(clear_storage: bool = False)
Deletes the pipeline.
The pipeline must be stopped, and the storage cleared before it can be deleted.
Parameters:
- clear_storage – True if the storage should be cleared before deletion. False by default
Raises:
- FelderaAPIError – If the pipeline is not in STOPPED state or the storage is still bound.
deployment_config()
deployment_config() -> Mapping[str, Any]
Return the deployment config of the pipeline.
deployment_desired_status()
deployment_desired_status() -> DeploymentDesiredStatus
Return the desired deployment status of the pipeline. This is the next state that the pipeline should transition to.
deployment_error()
deployment_error() -> Mapping[str, Any]
Return the deployment error of the pipeline. Returns an empty string if there is no error.
deployment_location()
deployment_location() -> str
Return the deployment location of the pipeline. The deployment location is where the pipeline can be reached at runtime (a TCP port number or a URI).
deployment_resources_desired_status()
deployment_resources_desired_status() -> DeploymentResourcesDesiredStatus
Return the desired status of the deployment resources.
deployment_resources_status()
deployment_resources_status() -> DeploymentResourcesStatus
Return the status of the deployment resources.
deployment_runtime_desired_status()
deployment_runtime_desired_status() -> DeploymentRuntimeDesiredStatus
Return the deployment runtime desired status.
deployment_runtime_status()
deployment_runtime_status() -> DeploymentRuntimeStatus
Return the deployment runtime status.
deployment_runtime_status_details()
deployment_runtime_status_details() -> dict | None
Return the deployment runtime status details.
deployment_status_since()
deployment_status_since() -> datetime
Return the timestamp when the current deployment status of the pipeline was set.
description()
description() -> str
Return the description of the pipeline.
dismiss_error()
dismiss_error()
Dismisses the deployment_error of the pipeline.
errors()
errors() -> List[Mapping[str, Any]]
Returns a list of all errors in this pipeline.
event()
event(event_id: str, *, selector: str = 'status') -> dict
Retrieves a specific pipeline event.
Parameters:
- event_id – Identifier (UUID) of the event to retrieve, or "latest" for the latest event.
- selector – (Optional) Limit the returned fields. Valid values: “all”, “status” (default).
Returns:
- Event (fields limited based on selector).
events()
events(*, selector: str = 'status') -> List[dict]
Retrieves all pipeline events (status fields only).
Parameters:
- selector – (Optional) Limit the returned fields. Valid values: “all”, “status” (default).
Returns:
- List of pipeline events.
execute()
execute(query: str)
Executes an ad-hoc SQL query on the current pipeline, discarding its result. Unlike the query() method, which returns a generator for retrieving query results lazily, this method processes the query eagerly and fully before returning.
This method is suitable for SQL operations like INSERT and
DELETE, where the user needs confirmation of successful query
execution, but does not require the query result. If the query fails,
an exception will be raised.
Important:
If you try to INSERT or DELETE data from a table while the
pipeline is paused, it will block until the pipeline is resumed.
Parameters:
- query – The SQL query to be executed.
Raises:
- FelderaAPIError – If the pipeline is not in a RUNNING state.
- FelderaAPIError – If the query is invalid.
foreach_chunk()
foreach_chunk(view_name: str, callback: Callable[[DataFrame, int], None])
Run the given callback on each chunk of the output of the specified view.
The callback will only receive changes from the point in time when the listener is created.
In order to receive all changes since the pipeline started, you can create the pipeline in the PAUSED state
using start_paused(), attach listeners and unpause the pipeline using resume().
Parameters:
- view_name – The name of the view.
- callback – The callback to run on each chunk. It takes two arguments: chunk (the chunk as a pandas DataFrame) and seq_no (the sequence number). The sequence number is a monotonically increasing integer that starts from 0; it is unique for each chunk, but not necessarily contiguous.
The callback should not block for long: backpressure is enabled by default, so a slow callback will block the pipeline.
Note
- The callback must be thread-safe, as it will be run in a separate thread.
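Since the callback runs on a separate thread, any shared state it touches should be guarded. The sketch below illustrates that pattern; a plain list stands in for the pandas DataFrame the real callback receives, so the sketch runs without pandas, and the foreach_chunk wiring is shown only as a comment because it needs a running pipeline.

```python
import threading

# The real callback receives a pandas DataFrame; a plain list stands in here.
totals = {"rows": 0}
lock = threading.Lock()

def on_chunk(chunk, seq_no: int) -> None:
    # Runs on the listener thread: keep it quick and guard shared state.
    with lock:
        totals["rows"] += len(chunk)

# Wiring (requires a running pipeline):
# pipeline.foreach_chunk("my_view", on_chunk)
on_chunk([1, 2, 3], 0)   # simulate two chunks arriving
on_chunk([4, 5], 1)
```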
generate_completion_token()
generate_completion_token(table_name: str, connector_name: str) -> str
Returns a completion token that can be passed to Pipeline.completion_token_status() to check whether the pipeline has finished processing all inputs received from the connector before the token was generated.
get()
get(name: str, client: FelderaClient) -> Pipeline
Get the pipeline if it exists.
Parameters:
- name – The name of the pipeline.
- client – The FelderaClient instance.
get_samply_profile()
get_samply_profile(output_path: str | None = None) -> bytes
Returns the samply profile as gzipped bytes. The gzip file contains a profile that can be inspected with the samply tool.
Parameters:
- output_path – Optional path to save the samply profile file. If None, the samply profile is only returned as bytes.
Returns:
- The samply profile as bytes (GZIP file)
Raises:
- FelderaAPIError – If the pipeline does not exist or if there’s an error
id()
id() -> str
Return the ID of the pipeline.
input_connector_stats()
input_connector_stats(table_name: str, connector_name: str) -> InputEndpointStatus
Get the status of the specified input connector.
input_json()
input_json(
table_name: str,
data: Dict | list,
update_format: str = 'raw',
force: bool = False,
wait: bool = True,
**kwargs,
) -> str
Push this JSON data to the specified table of the pipeline.
The pipeline must be in either the RUNNING or PAUSED state to push data. An error will be raised if the pipeline is in any other state.
Parameters:
- table_name – The name of the table to push data into.
- data – The JSON-encoded data to be pushed to the pipeline. The data should be in the form {'col1': 'val1', 'col2': 'val2'} or [{'col1': 'val1', 'col2': 'val2'}, {'col1': 'val1', 'col2': 'val2'}]
- update_format – The update format of the JSON data to be pushed to the pipeline. Must be one of: "raw", "insert_delete". See https://docs.feldera.com/formats/json#the-insertdelete-format
- force – True to push data even if the pipeline is paused. False by default.
- wait – If True, blocks until this input has been processed by the pipeline
- kwargs – Additional keyword arguments forwarded to the Client push_to_pipeline method
Returns:
- The completion token to this input.
Raises:
- ValueError – If the update format is invalid.
- FelderaAPIError – If the pipeline is not in a valid state to push data.
- RuntimeError – If the pipeline is paused and force is not set to True.
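The two update formats differ in payload shape. The sketch below contrasts them; the table and column names are invented for illustration, and the input_json call is shown only as a comment since it needs a running pipeline (see the linked format docs for the authoritative definition).

```python
import json

# "raw" format: each record is simply a row to insert.
raw_rows = [{"ticker": "AAPL", "price": 210.5}]

# "insert_delete" format: each record wraps a row in an "insert" or
# "delete" key, so a single call can both add and retract rows.
insert_delete_rows = [
    {"insert": {"ticker": "AAPL", "price": 210.5}},
    {"delete": {"ticker": "AAPL", "price": 199.0}},
]

# token = pipeline.input_json("prices", insert_delete_rows,
#                             update_format="insert_delete")
payload = json.dumps(insert_delete_rows)
```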
input_pandas()
input_pandas(table_name: str, df: DataFrame, force: bool = False)
Push all rows in a pandas DataFrame to the pipeline.
The pipeline must be in either the RUNNING or PAUSED state to push data. An error will be raised if the pipeline is in any other state.
The dataframe must have the same columns as the table in the pipeline.
Parameters:
- table_name – The name of the table to insert data into.
- df – The pandas DataFrame to be pushed to the pipeline.
- force – True to push data even if the pipeline is paused. False by default.
Raises:
- ValueError – If the table does not exist in the pipeline.
- RuntimeError – If the pipeline is not in a valid state to push data.
- RuntimeError – If the pipeline is paused and force is not set to True.
is_complete()
is_complete() -> bool
Check if the pipeline has completed processing all input records.
Returns True if (1) all input connectors attached to the pipeline have finished reading their input data sources and issued end-of-input notifications to the pipeline, and (2) all inputs received from these connectors have been fully processed and corresponding outputs have been sent out through the output connectors.
last_successful_checkpoint_sync()
last_successful_checkpoint_sync() -> UUID
Returns the UUID of the last successfully synced checkpoint.
Returns:
- The UUID of the last successfully synced checkpoint.
listen()
listen(view_name: str) -> OutputHandler
Follow the change stream (i.e., the output) of the provided view. Returns an output handle to read the changes.
When the pipeline is stopped, the handle is dropped.
The handle will only receive changes from the point in time when the listener is created.
In order to receive all changes since the pipeline started, you can create the pipeline in the PAUSED state
using start_paused(), attach listeners and unpause the pipeline using resume().
Parameters:
- view_name – The name of the view to listen to.
logs()
logs() -> Generator[str, None, None]
Gets the pipeline logs.
modify()
modify(
sql: str | None = None,
udf_rust: str | None = None,
udf_toml: str | None = None,
program_config: Mapping[str, Any] | None = None,
runtime_config: Mapping[str, Any] | None = None,
description: str | None = None,
)
Modify the pipeline.
Modify the values of pipeline attributes: SQL code, UDF Rust code, UDF Rust dependencies (TOML), program config, runtime config, and description. Only the provided attributes will be modified. Other attributes will remain unchanged.
The pipeline must be in the STOPPED state to be modified.
Raises:
- FelderaAPIError – If the pipeline is not in a STOPPED state.
name (property)
name
Return the name of the pipeline.
output_connector_stats()
output_connector_stats(view_name: str, connector_name: str) -> OutputEndpointStatus
Get the status of the specified output connector.
pause()
pause(wait: bool = True, timeout_s: float | None = None)
Pause the pipeline.
The pipeline can only transition to the PAUSED state from the RUNNING state. If the pipeline is already paused, it will remain in the PAUSED state.
Parameters:
- wait – Set True to wait for the pipeline to pause. True by default
- timeout_s – The maximum time (in seconds) to wait for the pipeline to pause.
pause_connector()
pause_connector(table_name: str, connector_name: str)
Pause the specified input connector.
Connectors allow Feldera to fetch data from a source or write data to a sink. This method allows users to PAUSE a specific INPUT connector. All connectors are RUNNING by default.
Refer to the connector documentation for more information:
https://docs.feldera.com/connectors/#input-connector-orchestration
Parameters:
- table_name – The name of the table that the connector is attached to.
- connector_name – The name of the connector to pause.
Raises:
- FelderaAPIError – If the connector is not found, or if the pipeline is not running.
platform_version()
platform_version() -> str
Return the Feldera platform version with which the program was compiled.
program_code()
program_code() -> str
Return the program SQL code of the pipeline.
program_config()
program_config() -> Mapping[str, Any]
Return the program config of the pipeline.
program_error()
program_error() -> Mapping[str, Any]
Return the program error of the pipeline. If there are no errors, the exit_code field inside both sql_compilation and rust_compilation will be 0.
program_info()
program_info() -> Mapping[str, Any]
Return the program info of the pipeline. This is the output returned by the SQL compiler, including: the list of input and output connectors, the generated Rust code for the pipeline, and the SQL program schema.
program_status()
program_status() -> ProgramStatus
Return the program status of the pipeline.
Program status is the status of compilation of this SQL program. We first compile the SQL program to Rust code, and then compile the Rust code to a binary.
program_status_since()
program_status_since() -> datetime
Return the timestamp when the current program status was set.
program_version()
program_version() -> int
Return the program version of the pipeline.
query()
query(query: str) -> Generator[Mapping[str, Any], None, None]
Executes an ad-hoc SQL query on this pipeline and returns a generator that yields the rows of the result as Python dictionaries. For INSERT and DELETE queries, consider using execute() instead. All floating-point numbers are deserialized as Decimal objects to avoid precision loss.
Note:
You can only SELECT from materialized tables and views.
Important:
This method is lazy. It returns a generator and is not evaluated until you consume the result.
Parameters:
- query – The SQL query to be executed.
Returns:
- A generator that yields the rows of the result as Python dictionaries.
Raises:
- FelderaAPIError – If the pipeline is not in a RUNNING or PAUSED state.
- FelderaAPIError – If querying a non-materialized table or view.
- FelderaAPIError – If the query is invalid.
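The Decimal behavior noted above can be reproduced with the standard library, independent of the SDK: parsing JSON with parse_float=Decimal yields exact decimal values instead of binary floats, which is the effect query() provides for floating-point columns.

```python
import json
from decimal import Decimal

# query() yields floating-point columns as Decimal rather than float.
# The same effect, reproduced with the standard json module:
row = json.loads('{"price": 0.1}', parse_float=Decimal)

assert isinstance(row["price"], Decimal)   # exact, no binary-float rounding
```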
query_hash()
query_hash(query: str)
Executes an ad-hoc SQL query on this pipeline and returns a hash of the result set. This is useful for quickly checking whether the result set has changed without retrieving the entire result.
Note:
For a stable hash, the query must be deterministic, which means its result should be sorted (e.g., with an ORDER BY clause).
Parameters:
- query – The SQL query to be executed.
Raises:
- FelderaAPIError – If the pipeline is not in a RUNNING or PAUSED state.
- FelderaAPIError – If querying a non-materialized table or view.
- FelderaAPIError – If the query is invalid.
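The note above means that the same logical result set, arriving in a different row order, hashes differently unless the query imposes an ORDER BY. The pure-Python sketch below illustrates why; it is not the hash Feldera computes server-side.

```python
import hashlib
import json

def result_set_hash(rows):
    # Illustration only: serialize the rows in order and hash the bytes.
    canonical = json.dumps(rows, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

ordered = result_set_hash([{"id": 1}, {"id": 2}])
reordered = result_set_hash([{"id": 2}, {"id": 1}])
# Same logical result set, different row order -> different hash,
# which is why the query should be sorted (e.g. ORDER BY id).
```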
query_parquet()
query_parquet(query: str, path: str)
Executes an ad-hoc SQL query on this pipeline and saves the result to the specified path as a Parquet file. If the extension isn't .parquet, it will be automatically appended to path.
Note:
You can only SELECT from materialized tables and views.
Parameters:
- query – The SQL query to be executed.
- path – The path of the parquet file.
Raises:
- FelderaAPIError – If the pipeline is not in a RUNNING or PAUSED state.
- FelderaAPIError – If querying a non-materialized table or view.
- FelderaAPIError – If the query is invalid.
query_tabular()
query_tabular(query: str) -> Generator[str, None, None]
Executes a SQL query on this pipeline and returns the result as a formatted string.
Note:
You can only SELECT from materialized tables and views.
Important:
This method is lazy. It returns a generator and is not evaluated until you consume the result.
Parameters:
- query – The SQL query to be executed.
Returns:
- A generator that yields a string representing the query result in a human-readable, tabular format.
Raises:
- FelderaAPIError – If the pipeline is not in a RUNNING or PAUSED state.
- FelderaAPIError – If querying a non-materialized table or view.
- FelderaAPIError – If the query is invalid.
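Like query(), this method does no work until its generator is consumed. A stand-in generator (not the SDK) makes the laziness visible:

```python
state = {"started": False}

def fake_query_tabular(query: str):
    # Stand-in for pipeline.query_tabular() (not the SDK): the body does not
    # run until the generator is consumed.
    state["started"] = True
    yield "col1 | col2"

gen = fake_query_tabular("SELECT 1")
assert state["started"] is False   # lazy: nothing has executed yet
first = next(gen)
assert state["started"] is True    # consuming the generator triggers the work
```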
rebalance()
rebalance()
Initiate rebalancing.
Initiate immediate rebalancing of the pipeline. Normally rebalancing is initiated automatically when the drift in the size of joined relations exceeds a threshold. This method forces the balancer to reevaluate and apply an optimal partitioning policy regardless of the threshold.
This operation is a no-op unless the adaptive_joins feature is enabled in dev_tweaks.
refresh()
refresh(field_selector: PipelineFieldSelector)
Calls the backend to get the updated, latest version of the pipeline.
Parameters:
- field_selector – Choose what pipeline information to refresh; see PipelineFieldSelector enum definition.
Raises:
- FelderaConnectionError – If there is an issue connecting to the backend.
restart()
restart(
bootstrap_policy: BootstrapPolicy | None = None,
timeout_s: float | None = None,
dismiss_error: bool = True,
)
Restarts the pipeline.
This method forcibly STOPS the pipeline regardless of its current state and then starts it again. No checkpoints are made when stopping the pipeline.
Parameters:
- bootstrap_policy – The bootstrap policy to use.
- timeout_s – The maximum time (in seconds) to wait for the pipeline to restart.
- dismiss_error – Set True to dismiss any deployment error before starting; set False to make it fail in that case. True by default.
resume()
resume(wait: bool = True, timeout_s: float | None = None)
Resumes the pipeline from the PAUSED state. If the pipeline is already running, it will remain in the RUNNING state.
Parameters:
- wait – Set True to wait for the pipeline to resume. True by default
- timeout_s – The maximum time (in seconds) to wait for the pipeline to resume.
resume_connector()
resume_connector(table_name: str, connector_name: str)
Resume the specified connector.
Connectors allow Feldera to fetch data from a source or write data to a sink. This method allows users to RESUME / START a specific INPUT connector. All connectors are RUNNING by default.
Refer to the connector documentation for more information:
https://docs.feldera.com/connectors/#input-connector-orchestration
Parameters:
- table_name – The name of the table that the connector is attached to.
- connector_name – The name of the connector to resume.
Raises:
- FelderaAPIError – If the connector is not found, or if the pipeline is not running.
runtime_config()
runtime_config() -> RuntimeConfig
Return the runtime config of the pipeline.
set_runtime_config()
set_runtime_config(runtime_config: RuntimeConfig)
Updates the runtime config of the pipeline. The pipeline must be stopped. Changing some pipeline configuration, such as the number of workers, requires storage to be cleared.
For example, to set ‘min_batch_size_records’ on a pipeline:
runtime_config = pipeline.runtime_config()
runtime_config.min_batch_size_records = 500
pipeline.set_runtime_config(runtime_config)
start()
start(
bootstrap_policy: BootstrapPolicy | None = None,
wait: bool = True,
timeout_s: float | None = None,
dismiss_error: bool = True,
)
Starts this pipeline.
- The pipeline must be in STOPPED state to start.
- If the pipeline is in any other state, an error will be raised.
- If the pipeline is in PAUSED state, use resume() instead.
Parameters:
- bootstrap_policy – The bootstrap policy to use.
- timeout_s – The maximum time (in seconds) to wait for the pipeline to start.
- wait – Set True to wait for the pipeline to start. True by default
- dismiss_error – Set True to dismiss any deployment error before starting; set False to make it fail in that case. True by default.
Raises:
- RuntimeError – If the pipeline is not in STOPPED state.
start_paused()
start_paused(
bootstrap_policy: BootstrapPolicy | None = None,
wait: bool = True,
timeout_s: float | None = None,
dismiss_error: bool = True,
)
Starts the pipeline in the paused state.
Parameters:
- bootstrap_policy – The bootstrap policy to use.
- wait – Set True to wait for the pipeline to start. True by default.
- timeout_s – The maximum time (in seconds) to wait for the pipeline to start (defaults to None = no timeout is enforced).
- dismiss_error – Set True to dismiss any deployment error before starting; set False to make it fail in that case. True by default.
start_samply_profile()
start_samply_profile(duration: int = 30)
Starts profiling this pipeline using samply.
Parameters:
- duration – The duration of the profile in seconds (default: 30)
Raises:
- FelderaAPIError – If the pipeline does not exist or if there’s an error
start_standby()
start_standby(
bootstrap_policy: BootstrapPolicy | None = None,
wait: bool = True,
timeout_s: float | None = None,
dismiss_error: bool = True,
)
Starts the pipeline in the standby state.
Parameters:
- bootstrap_policy – The bootstrap policy to use.
- wait – Set True to wait for the pipeline to start. True by default.
- timeout_s – The maximum time (in seconds) to wait for the pipeline to start (defaults to None = no timeout is enforced).
- dismiss_error – Set True to dismiss any deployment error before starting; set False to make it fail in that case. True by default.
start_transaction()
start_transaction() -> int
Start a new transaction.
Returns:
- Transaction ID.
Raises:
- FelderaAPIError – If the pipeline fails to start a transaction, e.g., if the pipeline is not running or there is already an active transaction.
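The interplay between start_transaction() and commit_transaction() is easiest to see in miniature. The toy class below (not the SDK class) mimics the documented checks: RuntimeError when no transaction is active, ValueError when the provided transaction_id does not match.

```python
class FakePipeline:
    """Toy stand-in (not the SDK class) mimicking the documented
    commit_transaction() checks."""

    def __init__(self):
        self._active_txn = None
        self._next_id = 1

    def start_transaction(self) -> int:
        self._active_txn = self._next_id
        self._next_id += 1
        return self._active_txn

    def commit_transaction(self, transaction_id=None):
        if self._active_txn is None:
            raise RuntimeError("no transaction in progress")
        if transaction_id is not None and transaction_id != self._active_txn:
            raise ValueError("transaction_id does not match the active transaction")
        self._active_txn = None  # committed
```

Against a real pipeline, the flow is: txn = pipeline.start_transaction(), push data, then pipeline.commit_transaction(txn) to verify the ID on commit.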
stats()
stats() -> PipelineStatistics
Gets the pipeline metrics and performance counters.
status()
status() -> PipelineStatus
Return the current status of the pipeline.
stop()
stop(force: bool, wait: bool = True, timeout_s: float | None = None)
Stops the pipeline regardless of its current state.
Parameters:
- force – Set True to immediately scale compute resources to zero. Set False to automatically checkpoint before stopping.
- wait – Set True to gracefully shutdown listeners and wait for the pipeline to stop. True by default.
- timeout_s – The maximum time (in seconds) to wait for the pipeline to stop.
storage_status()
storage_status() -> StorageStatus
Return the storage status of the pipeline.
storage_status_details()
storage_status_details() -> dict
Return the storage status details of the pipeline.
support_bundle()
support_bundle(
output_path: str | None = None,
*,
circuit_profile: bool = True,
heap_profile: bool = True,
metrics: bool = True,
logs: bool = True,
stats: bool = True,
pipeline_config: bool = True,
system_config: bool = True,
dataflow_graph: bool = True,
) -> bytes
Generate a support bundle containing diagnostic information from this pipeline.
This method collects various diagnostic data from the pipeline including circuit profile, heap profile, metrics, logs, stats, and connector statistics, and packages them into a single ZIP file for support purposes.
Parameters:
- output_path – Optional path to save the support bundle file. If None, the support bundle is only returned as bytes.
- circuit_profile – Whether to collect circuit profile data (default: True)
- heap_profile – Whether to collect heap profile data (default: True)
- metrics – Whether to collect metrics data (default: True)
- logs – Whether to collect logs data (default: True)
- stats – Whether to collect stats data (default: True)
- pipeline_config – Whether to collect pipeline configuration data (default: True)
- system_config – Whether to collect system configuration data (default: True)
- dataflow_graph – Whether to collect dataflow graph (default: True)
Returns:
- The support bundle as bytes (ZIP archive)
Raises:
- FelderaAPIError – If the pipeline does not exist or if there’s an error
sync_checkpoint()
sync_checkpoint(wait: bool = False, timeout_s: float | None = None) -> str
Syncs the pipeline's most recent checkpoint to object store.
Parameters:
- wait – If true, will block until the checkpoint sync operation completes.
- timeout_s – The maximum time (in seconds) to wait for the checkpoint to complete syncing.
Raises:
- FelderaAPIError – If no checkpoints have been made.
- RuntimeError – If syncing the checkpoint fails.
sync_checkpoint_status()
sync_checkpoint_status(uuid: str) -> CheckpointStatus
Checks the status of the given checkpoint sync operation. If the checkpoint is currently being synchronized, returns CheckpointStatus.Unknown.
Failures are not raised as runtime errors and must be checked explicitly.
Parameters:
- uuid – The checkpoint uuid.
tables()
tables() -> List[SQLTable]
Return the tables of the pipeline.
testing_force_update_platform_version()
testing_force_update_platform_version(platform_version: str)
Used to simulate a pipeline compiled with a different platform version than the one currently in use. This is useful for testing platform upgrade behavior without actually upgrading Feldera.
This method is only available when Feldera runs with the “testing” unstable feature enabled.
transaction_id()
transaction_id() -> int | None
Gets the ID of the currently active transaction, or None if there is no active transaction.
Returns:
- The ID of the transaction.
transaction_status()
transaction_status() -> TransactionStatus
Get the pipeline's transaction handling status.
Returns:
- Current transaction handling status of the pipeline.
Raises:
- FelderaAPIError – If pipeline’s status couldn’t be read, e.g., because the pipeline is not currently running.
udf_rust()
udf_rust() -> str
Return the Rust code for UDFs.
udf_toml()
udf_toml() -> str
Return the Rust dependencies required by UDFs (in the TOML format).
update_runtime()
update_runtime()
Recompile a pipeline with the Feldera runtime version included in the currently installed Feldera platform.
Use this endpoint after upgrading Feldera to rebuild pipelines that were compiled with older platform versions. In most cases, recompilation is not required: pipelines compiled with older versions will continue to run on the upgraded platform.
Situations where recompilation may be necessary:
- To benefit from the latest bug fixes and performance optimizations.
- When backward-incompatible changes are introduced in Feldera. In this case, attempting to start a pipeline compiled with an unsupported version will result in an error.
If the pipeline is already compiled with the current platform version, this operation is a no-op.
Note that recompiling the pipeline with a new platform version may change its query plan. If the modified pipeline is started from an existing checkpoint, it may require bootstrapping parts of its state from scratch. See Feldera documentation for details on the bootstrapping process.
Raises:
- FelderaAPIError – If the pipeline is not in a STOPPED state.
version()
version() -> int
Return the version of the pipeline.
views()
views() -> List[SQLView]
Return the views of the pipeline.
wait_for_completion()
wait_for_completion(force_stop: bool = False, timeout_s: float | None = None)
Block until the pipeline has completed processing all input records.
This method blocks until (1) all input connectors attached to the pipeline have finished reading their input data sources and issued end-of-input notifications to the pipeline, and (2) all inputs received from these connectors have been fully processed and corresponding outputs have been sent out through the output connectors.
This method will block indefinitely if at least one of the input connectors attached to the pipeline is a streaming connector, such as Kafka, that does not issue the end-of-input notification.
Parameters:
- force_stop – If True, the pipeline will be forcibly stopped after completion. False by default. No checkpoints will be made.
- timeout_s – Optional. The maximum time (in seconds) to wait for the pipeline to complete. The default is None, which means wait indefinitely.
Raises:
- RuntimeError – If the pipeline returns unknown metrics.
wait_for_idle()
wait_for_idle(
idle_interval_s: float = 5.0,
timeout_s: float | None = None,
poll_interval_s: float = 0.2,
)
Wait for the pipeline to become idle, then return.
Idle is defined as a sufficiently long interval in which the number of input and processed records reported by the pipeline do not change, and they equal each other (thus, all input records present at the pipeline have been processed).
Parameters:
- idle_interval_s – Idle interval duration (default is 5.0 seconds).
- timeout_s – Timeout waiting for idle (None = no timeout is enforced).
- poll_interval_s – Polling interval, should be set substantially smaller than the idle interval (default is 0.2 seconds).
Raises:
- ValueError – If idle interval is larger than timeout, poll interval is larger than timeout, or poll interval is larger than idle interval.
- RuntimeError – If the metrics are missing or the timeout was reached.
wait_for_status()
wait_for_status(expected_status: PipelineStatus, timeout: int | None = None) -> None
Wait for the pipeline to reach the specified status.
Parameters:
- expected_status – The status to wait for
- timeout – Maximum time to wait in seconds. If None, waits forever (default: None)
Raises:
- TimeoutError – If the expected status is not reached within the timeout
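This method follows the usual poll-until-status pattern. A minimal standalone version, where `fetch_status` is a hypothetical callable (not part of the SDK):

```python
import time

def wait_for_status(fetch_status, expected_status, timeout=None,
                    poll_interval_s=0.2):
    """Poll fetch_status() until it returns expected_status;
    raise TimeoutError if the deadline passes first."""
    deadline = None if timeout is None else time.monotonic() + timeout
    while True:
        if fetch_status() == expected_status:
            return
        if deadline is not None and time.monotonic() >= deadline:
            raise TimeoutError(f"pipeline did not reach {expected_status!r}")
        time.sleep(poll_interval_s)
```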
wait_for_token()
wait_for_token(token: str)
Blocks until the pipeline processes all inputs represented by the completion token.
BootstrapPolicy
class feldera.enums.BootstrapPolicy(*values)
Bases: Enum
ALLOW
AWAIT_APPROVAL
REJECT
from_str()
from_str(value)
BuildMode
class feldera.enums.BuildMode(*values)
Bases: Enum
CREATE
GET
GET_OR_CREATE
CheckpointStatus
class feldera.enums.CheckpointStatus(*values)
Bases: Enum
Failure
InProgress
Success
Unknown
init()
__init__(value)
error
get_error()
get_error() -> str | None
Returns the error, if any.
CompilationProfile
class feldera.enums.CompilationProfile(*values)
Bases: Enum
The compilation profile to use when compiling the program.
DEV
The development compilation profile.
OPTIMIZED
The optimized compilation profile; the default for this API.
OPTIMIZED_SYMBOLS
The optimized compilation profile with debug symbols, useful for profiling and debugging.
SERVER_DEFAULT
The compiler server's default compilation profile.
UNOPTIMIZED
The unoptimized compilation profile.
CompletionTokenStatus
class feldera.enums.CompletionTokenStatus(*values)
Bases: Enum
COMPLETE
Feldera has completed processing all inputs represented by this token.
IN_PROGRESS
Feldera is still processing the inputs represented by this token.
DeploymentDesiredStatus
class feldera.enums.DeploymentDesiredStatus(*values)
Bases: Enum
Deployment desired status of the pipeline.
PAUSED
RUNNING
STANDBY
STOPPED
SUSPENDED
UNAVAILABLE
from_str()
from_str(value)
DeploymentResourcesDesiredStatus
class feldera.enums.DeploymentResourcesDesiredStatus(*values)
Bases: Enum
The desired status of deployment resources of the pipeline.
PROVISIONED
STOPPED
from_str()
from_str(value)
DeploymentResourcesStatus
class feldera.enums.DeploymentResourcesStatus(*values)
Bases: Enum
The status of deployment resources of the pipeline.
PROVISIONED
PROVISIONING
STOPPED
STOPPING
from_str()
from_str(value)
DeploymentRuntimeDesiredStatus
class feldera.enums.DeploymentRuntimeDesiredStatus(*values)
Bases: Enum
Deployment runtime desired status of the pipeline.
PAUSED
RUNNING
STANDBY
SUSPENDED
UNAVAILABLE
from_str()
from_str(value)
DeploymentRuntimeStatus
class feldera.enums.DeploymentRuntimeStatus(*values)
Bases: Enum
Deployment runtime status of the pipeline.
AWAITINGAPPROVAL
BOOTSTRAPPING
INITIALIZING
PAUSED
REPLAYING
RUNNING
STANDBY
SUSPENDED
UNAVAILABLE
from_str()
from_str(value)
FaultToleranceModel
class feldera.enums.FaultToleranceModel(*values)
Bases: Enum
The fault tolerance model.
AtLeastOnce
Each record is output at least once. Crashes may duplicate output, but no input or output is dropped.
ExactlyOnce
Each record is output exactly once. Crashes do not drop or duplicate input or output.
from_str()
from_str(value)
PipelineFieldSelector
class feldera.enums.PipelineFieldSelector(*values)
Bases: Enum
ALL
Select all fields of a pipeline.
STATUS
Select only the fields required to know the status of a pipeline.
PipelineStatus
class feldera.enums.PipelineStatus(*values)
Bases: Enum
Represents the state that this pipeline is currently in.
AWAITINGAPPROVAL
BOOTSTRAPPING
INITIALIZING
NOT_FOUND
PAUSED
PROVISIONING
REPLAYING
RUNNING
STANDBY
STOPPED
STOPPING
SUSPENDED
UNAVAILABLE
from_str()
from_str(value)
ProgramStatus
class feldera.enums.ProgramStatus(*values)
Bases: Enum
CompilingRust
CompilingSql
Pending
RustError
SqlCompiled
SqlError
Success
SystemError
init()
__init__(value)
error
from_value()
from_value(value)
get_error()
get_error() -> dict | None
Returns the compilation error, if any.
StorageStatus
class feldera.enums.StorageStatus(*values)
Bases: Enum
Represents the current storage usage status of the pipeline.
CLEARED
The pipeline has not been started before, or the user has cleared storage. In this state, the pipeline has no storage resources bound to it.
CLEARING
The pipeline is in the process of becoming unbound from its storage resources. If storage resources are configured to be deleted upon clearing, their deletion occurs before transitioning to CLEARED; otherwise, no actual work is required and the transition happens immediately. If storage is not deleted during clearing, responsibility for managing or deleting those resources lies with the user.
INUSE
The pipeline was started (or a start was attempted), transitioning from STOPPED to PROVISIONING, which caused the storage status to become INUSE. Being INUSE restricts certain edits while the pipeline is STOPPED. The pipeline remains in this state until the user invokes /clear, transitioning it to CLEARING.
from_str()
from_str(value)
TransactionStatus
class feldera.enums.TransactionStatus(*values)
Bases: Enum
Represents the transaction handling status of a pipeline.
CommitInProgress
A commit is currently in progress.
NoTransaction
There is currently no active transaction.
TransactionInProgress
There is an active transaction in progress.
from_str()
from_str(value)
OutputHandler
class feldera.output_handler.OutputHandler(
client: FelderaClient,
pipeline_name: str,
view_name: str,
)
Bases: object
init()
__init__(client: FelderaClient, pipeline_name: str, view_name: str)
Initializes the output handler but does not start it. To start it, call OutputHandler.start().
start()
start()
Starts the output handler in a separate thread.
to_dict()
to_dict(clear_buffer: bool = True)
Returns the output of the pipeline as a list of Python dictionaries.
Parameters:
- clear_buffer – Whether to clear the buffer after getting the output.
to_pandas()
to_pandas(clear_buffer: bool = True)
Returns the output of the pipeline as a pandas DataFrame.
Parameters:
- clear_buffer – Whether to clear the buffer after getting the output.
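The clear_buffer semantics can be illustrated with a minimal buffering class. This is a hypothetical stand-in, not the SDK's OutputHandler:

```python
class BufferedOutput:
    """Minimal sketch of an output buffer with
    to_dict(clear_buffer) semantics."""

    def __init__(self):
        self._buffer = []

    def push(self, record: dict):
        # In the real handler, records arrive from a background thread.
        self._buffer.append(record)

    def to_dict(self, clear_buffer: bool = True):
        # Snapshot the buffer; optionally drain it so the next call
        # only returns records received after this one.
        out = list(self._buffer)
        if clear_buffer:
            self._buffer.clear()
        return out
```

With clear_buffer=True (the default), each call returns only the output produced since the previous call; with clear_buffer=False, the accumulated output is returned repeatedly.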
Resources
class feldera.runtime_config.Resources(
config: Mapping[str, Any] | None = None,
cpu_cores_max: int | None = None,
cpu_cores_min: int | None = None,
memory_mb_max: int | None = None,
memory_mb_min: int | None = None,
storage_class: str | None = None,
storage_mb_max: int | None = None,
)
Bases: object
Class used to specify the resource configuration for a pipeline.
Parameters:
- config – A dictionary containing all the configuration values.
- cpu_cores_max – The maximum number of CPU cores to reserve for an instance of the pipeline.
- cpu_cores_min – The minimum number of CPU cores to reserve for an instance of the pipeline.
- memory_mb_max – The maximum memory in Megabytes to reserve for an instance of the pipeline.
- memory_mb_min – The minimum memory in Megabytes to reserve for an instance of the pipeline.
- storage_class – The storage class to use for the pipeline. The class determines storage performance such as IOPS and throughput.
- storage_mb_max – The storage in Megabytes to reserve for an instance of the pipeline.
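As a sketch, a `config` mapping for Resources might look like the following. The field names mirror the keyword parameters listed above; the values (and the storage class name) are purely illustrative:

```python
# Illustrative resource configuration; keys mirror the Resources
# keyword parameters above, values are made up for the example.
resources_config = {
    "cpu_cores_min": 2,
    "cpu_cores_max": 4,
    "memory_mb_min": 1024,
    "memory_mb_max": 4096,
    "storage_class": "standard",   # hypothetical storage class name
    "storage_mb_max": 10240,
}

# Resources(config=resources_config) and
# Resources(cpu_cores_min=2, cpu_cores_max=4, ...) are the two
# construction styles suggested by the signature above.
```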
init()
__init__(
config: Mapping[str, Any] | None = None,
cpu_cores_max: int | None = None,
cpu_cores_min: int | None = None,
memory_mb_max: int | None = None,
memory_mb_min: int | None = None,
storage_class: str | None = None,
storage_mb_max: int | None = None,
)
RuntimeConfig
class feldera.runtime_config.RuntimeConfig(
workers: int | None = None,
hosts: int | None = None,
storage: Storage | bool | None = None,
tracing: bool | None = False,
tracing_endpoint_jaeger: str | None = '',
cpu_profiler: bool = True,
max_buffering_delay_usecs: int = 0,
min_batch_size_records: int = 0,
clock_resolution_usecs: int | None = None,
provisioning_timeout_secs: int | None = None,
resources: Resources | None = None,
fault_tolerance_model: FaultToleranceModel | None = None,
checkpoint_interval_secs: int | None = None,
dev_tweaks: dict | None = None,
env: dict[str, str] | None = None,
logging: str | None = None,
)
Bases: object
Runtime configuration class to define the configuration for a pipeline.
To create a runtime config from a dictionary, use RuntimeConfig.from_dict().
Documentation:
https://docs.feldera.com/pipelines/configuration/#runtime-configuration
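As a sketch, a dictionary for RuntimeConfig.from_dict() might look like the following. The keys mirror the constructor parameters above; the values are illustrative only, and the environment variable is a made-up example:

```python
# Illustrative runtime configuration dictionary; keys mirror the
# RuntimeConfig constructor parameters, values are examples only.
runtime_config_dict = {
    "workers": 8,                    # number of worker threads
    "storage": True,                 # enable storage with default settings
    "checkpoint_interval_secs": 60,  # periodic checkpointing
    "env": {"RUST_LOG": "info"},     # hypothetical environment variable
}

# RuntimeConfig.from_dict(runtime_config_dict) would build the config
# object; to_dict() goes the other way.
```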
init()
__init__(
workers: int | None = None,
hosts: int | None = None,
storage: Storage | bool | None = None,
tracing: bool | None = False,
tracing_endpoint_jaeger: str | None = '',
cpu_profiler: bool = True,
max_buffering_delay_usecs: int = 0,
min_batch_size_records: int = 0,
clock_resolution_usecs: int | None = None,
provisioning_timeout_secs: int | None = None,
resources: Resources | None = None,
fault_tolerance_model: FaultToleranceModel | None = None,
checkpoint_interval_secs: int | None = None,
dev_tweaks: dict | None = None,
env: dict[str, str] | None = None,
logging: str | None = None,
)
default()
default() -> RuntimeConfig
from_dict()
from_dict(d: Mapping[str, Any])
Create a RuntimeConfig object from a dictionary.
to_dict()
to_dict() -> dict
Storage
class feldera.runtime_config.Storage(
config: Mapping[str, Any] | None = None,
min_storage_bytes: int | None = None,
)
Bases: object
Storage configuration for a pipeline.
Parameters:
- min_storage_bytes – The minimum estimated number of bytes in a batch of data to write it to storage.
init()
__init__(config: Mapping[str, Any] | None = None, min_storage_bytes: int | None = None)
CommitProgressSummary
class feldera.stats.CommitProgressSummary
Bases: object
Progress of a transaction commit.
init()
__init__()
Initializes an empty status.
from_dict()
from_dict(d: Mapping[str, Any])
CompletedWatermark
class feldera.stats.CompletedWatermark
Bases: object
Latest completed watermark reported by input connector status.
init()
__init__()
from_dict()
from_dict(d: Mapping[str, Any])
ConnectorError
class feldera.stats.ConnectorError
Bases: object
Represents a connector error item reported by connector status endpoints.
init()
__init__()
from_dict()
from_dict(d: Mapping[str, Any])
ConnectorHealth
class feldera.stats.ConnectorHealth
Bases: object
Health status reported for a connector.
HEALTHY
UNHEALTHY
init()
__init__()
from_dict()
from_dict(d: Mapping[str, Any])
ConnectorTransactionPhase
class feldera.stats.ConnectorTransactionPhase
Bases: object
Connector transaction phase with an optional label.
init()
__init__()
Initializes an empty status.
from_dict()
from_dict(d: Mapping[str, Any])
label
phase
GlobalPipelineMetrics
class feldera.stats.GlobalPipelineMetrics
Bases: object
Represents the "global_metrics" object within the pipeline's "/stats" endpoint reply.
init()
__init__()
Initializes as an empty set of metrics.
bootstrap_in_progress
buffered_input_bytes
buffered_input_records
commit_progress
cpu_msecs
from_dict()
from_dict(d: Mapping[str, Any])
incarnation_uuid
initial_start_time
output_stall_msecs
pipeline_complete
rss_bytes
runtime_elapsed_msecs
start_time
state
storage_bytes
storage_mb_secs
total_completed_records
total_completed_steps
total_initiated_steps
total_input_bytes
total_input_records
total_processed_bytes
total_processed_records
transaction_id
transaction_initiators
transaction_msecs
transaction_records
transaction_status
uptime_msecs
InputEndpointMetrics
class feldera.stats.InputEndpointMetrics
Bases: object
Represents the "metrics" member within an input endpoint status in the pipeline's "/stats" endpoint reply.
init()
__init__()
buffered_bytes
buffered_records
end_of_input
from_dict()
from_dict(d: Mapping[str, Any])
num_parse_errors
num_transport_errors
total_bytes
total_records
InputEndpointStatus
class feldera.stats.InputEndpointStatus
Bases: object
Represents one member of the "inputs" array within the pipeline's "/stats" endpoint reply.
init()
__init__()
Initializes an empty status.
barrier
completed_frontier
config
endpoint_name
fatal_error
from_dict()
from_dict(d: Mapping[str, Any])
health
metrics
parse_errors
paused
transport_errors
OutputEndpointMetrics
class feldera.stats.OutputEndpointMetrics
Bases: object
Represents the "metrics" member within an output endpoint status in the pipeline's "/stats" endpoint reply.
init()
__init__()
buffered_batches
buffered_records
from_dict()
from_dict(d: Mapping[str, Any])
memory
num_encode_errors
num_transport_errors
queued_batches
queued_records
total_processed_input_records
total_processed_steps
transmitted_bytes
transmitted_records
OutputEndpointStatus
class feldera.stats.OutputEndpointStatus
Bases: object
Represents one member of the "outputs" array within the pipeline's "/stats" endpoint reply.
init()
__init__()
Initializes an empty status.
config
encode_errors
endpoint_name
fatal_error
from_dict()
from_dict(d: Mapping[str, Any])
health
metrics
transport_errors
PipelineStatistics
class feldera.stats.PipelineStatistics
Bases: object
Represents statistics reported by a pipeline's "/stats" endpoint.
init()
__init__()
Initializes as an empty set of statistics.
from_dict()
from_dict(d: Mapping[str, Any])
global_metrics
inputs
outputs
suspend_error
TransactionInitiators
class feldera.stats.TransactionInitiators
Bases: object
Initiators for an ongoing transaction.
init()
__init__()
Initializes an empty status.
from_dict()
from_dict(d: Mapping[str, Any])
initiated_by_api
initiated_by_connectors
transaction_id
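The from_dict pattern shared by these stats classes can be sketched generically: start empty, then copy known keys from the mapping onto attributes, ignoring keys the class does not model. This is a hypothetical illustration of the pattern, not the SDK implementation:

```python
from typing import Any, Mapping


class StatsRecord:
    """Generic sketch of the from_dict pattern used by the stats
    classes above (illustrative field names)."""

    FIELDS = ("total_input_records", "total_processed_records", "state")

    def __init__(self):
        # Initializes as an empty record, mirroring the __init__()
        # docstrings above.
        for f in self.FIELDS:
            setattr(self, f, None)

    @classmethod
    def from_dict(cls, d: Mapping[str, Any]):
        # Populate only the fields the class knows about; unknown
        # keys in the reply are ignored.
        obj = cls()
        for f in cls.FIELDS:
            if f in d:
                setattr(obj, f, d[f])
        return obj
```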