Extending the SDK¶
ldclient.interfaces module¶
This submodule contains interfaces for various components of the SDK.
They may be useful in writing new implementations of these components, or for testing.
- class ldclient.interfaces.BigSegmentStore[source]¶
Bases: object
Interface for a read-only data store that allows querying of user membership in Big Segments.
Big Segments are a specific type of user segments. For more information, read the LaunchDarkly documentation: https://docs.launchdarkly.com/home/users/big-segments
- abstract get_membership(context_hash: str) dict | None [source]¶
Queries the store for a snapshot of the current segment state for a specific context.
The context_hash is a base64-encoded string produced by hashing the context key as defined by the Big Segments specification; the store implementation does not need to know the details of how this is done, because it deals only with already-hashed keys, but the string can be assumed to only contain characters that are valid in base64.
The return value should be either a dict, or None if the context is not referenced in any Big Segments. Each key in the dictionary is a “segment reference”, which is how segments are identified in Big Segment data. This string is not identical to the segment key; the SDK will add other information. The store implementation should not be concerned with the format of the string. Each value in the dictionary is True if the context is explicitly included in the segment, or False if the context is explicitly excluded from the segment and is not also explicitly included (that is, if both an include and an exclude existed in the data, the include would take precedence). If the context’s status in a particular segment is undefined, there should be no key or value for that segment.
This dictionary may be cached by the SDK, so it should not be modified after it is created. It is a snapshot of the segment membership state at one point in time.
- Parameters:
context_hash – the hashed context key
- Returns:
True/False values for Big Segments that reference this context
- abstract get_metadata() BigSegmentStoreMetadata [source]¶
Returns information about the overall state of the store. This method will be called only when the SDK needs the latest state, so it should not be cached.
- Returns:
the store metadata
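For illustration, here is a minimal in-memory sketch of this interface; the class name and the seeding helper are hypothetical, but the two abstract methods and the BigSegmentStoreMetadata constructor are as documented above.

```python
import time
from typing import Optional

from ldclient.interfaces import BigSegmentStore, BigSegmentStoreMetadata


class InMemoryBigSegmentStore(BigSegmentStore):
    """Hypothetical read-only store backed by a plain dict (for tests or examples)."""

    def __init__(self):
        # Maps hashed context keys to {segment_ref: True/False} snapshots.
        self._memberships: dict = {}
        self._last_updated: Optional[int] = None

    def seed(self, context_hash: str, membership: dict) -> None:
        # Test helper (not part of the interface) for loading data.
        self._memberships[context_hash] = membership
        self._last_updated = int(time.time() * 1000)

    def get_membership(self, context_hash: str) -> Optional[dict]:
        # The SDK may cache the returned dict, so it must not be mutated afterwards.
        return self._memberships.get(context_hash)

    def get_metadata(self) -> BigSegmentStoreMetadata:
        # Unix epoch millisecond timestamp of the last update, or None if never updated.
        return BigSegmentStoreMetadata(self._last_updated)
```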
- class ldclient.interfaces.BigSegmentStoreMetadata(last_up_to_date: int | None)[source]¶
Bases: object
Values returned by BigSegmentStore.get_metadata().
- property last_up_to_date: int | None¶
The Unix epoch millisecond timestamp of the last update to the BigSegmentStore. It is None if the store has never been updated.
- class ldclient.interfaces.BigSegmentStoreStatus(available: bool, stale: bool)[source]¶
Bases: object
Information about the state of a Big Segment store, provided by BigSegmentStoreStatusProvider.
Big Segments are a specific type of user segments. For more information, read the LaunchDarkly documentation: https://docs.launchdarkly.com/home/users/big-segments
- property available: bool¶
True if the Big Segment store is able to respond to queries, so that the SDK can evaluate whether a user is in a segment or not.
If this property is False, the store is not able to make queries (for instance, it may not have a valid database connection). In this case, the SDK will treat any reference to a Big Segment as if no users are included in that segment. Also, the ldclient.evaluation.EvaluationDetail.reason() associated with any flag evaluation that references a Big Segment when the store is not available will have a bigSegmentsStatus of "STORE_ERROR".
- property stale: bool¶
True if the Big Segment store is available, but has not been updated within the amount of time specified by BigSegmentsConfig.stale_after.
This may indicate that the LaunchDarkly Relay Proxy, which populates the store, has stopped running or has become unable to receive fresh data from LaunchDarkly. Any feature flag evaluations that reference a Big Segment will be using the last known data, which may be out of date. Also, the ldclient.evaluation.EvaluationDetail.reason() associated with those evaluations will have a bigSegmentsStatus of "STALE".
- class ldclient.interfaces.BigSegmentStoreStatusProvider[source]¶
Bases: object
An interface for querying the status of a Big Segment store.
The Big Segment store is the component that receives information about Big Segments, normally from a database populated by the LaunchDarkly Relay Proxy. Big Segments are a specific type of user segments. For more information, read the LaunchDarkly documentation: https://docs.launchdarkly.com/home/users/big-segments
An implementation of this abstract class is returned by ldclient.client.LDClient.big_segment_store_status_provider(). Application code never needs to implement this interface.
There are two ways to interact with the status. One is to simply get the current status; if its available property is true, then the SDK is able to evaluate user membership in Big Segments, and the stale property indicates whether the data might be out of date.
The other way is to subscribe to status change notifications. Applications may wish to know if there is an outage in the Big Segment store, or if it has become stale (the Relay Proxy has stopped updating it with new data), since then flag evaluations that reference a Big Segment might return incorrect values. Use add_listener() to register a callback for notifications.
- abstract add_listener(listener: Callable[[BigSegmentStoreStatus], None]) None [source]¶
Subscribes for notifications of status changes.
The listener is a function or method that will be called with a single parameter: the new BigSegmentStoreStatus.
- Parameters:
listener – the listener to add
- abstract remove_listener(listener: Callable[[BigSegmentStoreStatus], None]) None [source]¶
Unsubscribes from notifications of status changes.
- Parameters:
listener – a listener that was previously added with add_listener(); if it was not, this method does nothing
- abstract property status: BigSegmentStoreStatus¶
Gets the current status of the store.
- Returns:
the status
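A short usage sketch, assuming client is an already-configured ldclient.client.LDClient (property-style access to big_segment_store_status_provider is assumed here):

```python
from ldclient.interfaces import BigSegmentStoreStatus

provider = client.big_segment_store_status_provider

# One-off check of the current status.
status = provider.status
if not status.available:
    print("Big Segment store unavailable: evaluations will report bigSegmentsStatus STORE_ERROR")
elif status.stale:
    print("Big Segment store is stale: evaluations may use out-of-date data")

# Subscribe to future status changes.
def on_status_change(new_status: BigSegmentStoreStatus) -> None:
    print(f"Big Segment store: available={new_status.available}, stale={new_status.stale}")

provider.add_listener(on_status_change)
```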
- class ldclient.interfaces.DataSourceErrorInfo(kind: DataSourceErrorKind, status_code: int, time: float, message: str | None)[source]¶
Bases: object
A description of an error condition that the data source encountered.
- __init__(kind: DataSourceErrorKind, status_code: int, time: float, message: str | None)[source]¶
- property kind: DataSourceErrorKind¶
- Returns:
The general category of the error
- property message: str | None¶
- Returns:
An error message if applicable, or None
- property status_code: int¶
- Returns:
An HTTP status code if applicable, or zero.
- property time: float¶
- Returns:
Unix timestamp when the error occurred
- class ldclient.interfaces.DataSourceErrorKind(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: Enum
Enumeration representing the types of errors a data source can encounter.
- ERROR_RESPONSE = 'error_response'¶
The LaunchDarkly service returned an HTTP response with an error status.
- INVALID_DATA = 'invalid_data'¶
The SDK received malformed data from the LaunchDarkly service.
- NETWORK_ERROR = 'network_error'¶
An I/O error such as a dropped connection.
- STORE_ERROR = 'store_error'¶
The data source itself is working, but when it tried to put an update into the data store, the data store failed (so the SDK may not have the latest data).
Data source implementations do not need to report this kind of error; it will be automatically reported by the SDK when exceptions are detected.
- UNKNOWN = 'unknown'¶
An unexpected error, such as an uncaught exception.
- class ldclient.interfaces.DataSourceState(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: Enum
Enumeration representing the states a data source can be in at any given time.
- INITIALIZING = 'initializing'¶
The initial state of the data source when the SDK is being initialized.
If it encounters an error that requires it to retry initialization, the state will remain at DataSourceState.INITIALIZING until it either succeeds and becomes DataSourceState.VALID, or permanently fails and becomes DataSourceState.OFF.
- INTERRUPTED = 'interrupted'¶
Indicates that the data source encountered an error that it will attempt to recover from.
In streaming mode, this means that the stream connection failed, or had to be dropped due to some other error, and will be retried after a backoff delay. In polling mode, it means that the last poll request failed, and a new poll request will be made after the configured polling interval.
- OFF = 'off'¶
Indicates that the data source has been permanently shut down.
This could be because it encountered an unrecoverable error (for instance, the LaunchDarkly service rejected the SDK key; an invalid SDK key will never become valid), or because the SDK client was explicitly shut down.
- VALID = 'valid'¶
Indicates that the data source is currently operational and has not had any problems since the last time it received data.
In streaming mode, this means that there is currently an open stream connection and that at least one initial message has been received on the stream. In polling mode, it means that the last poll request succeeded.
- class ldclient.interfaces.DataSourceStatus(state: DataSourceState, state_since: float, last_error: DataSourceErrorInfo | None)[source]¶
Bases: object
Information about the data source’s status and about the last status change.
- __init__(state: DataSourceState, state_since: float, last_error: DataSourceErrorInfo | None)[source]¶
- property error: DataSourceErrorInfo | None¶
- Returns:
A description of the last error, or None if there are no errors since startup
- property since: float¶
- Returns:
Unix timestamp of the last state transition.
- property state: DataSourceState¶
- Returns:
The basic state of the data source.
- class ldclient.interfaces.DataSourceStatusProvider[source]¶
Bases: object
An interface for querying the status of the SDK’s data source. The data source is the component that receives updates to feature flag data; normally this is a streaming connection, but it could be polling or file data depending on your configuration.
An implementation of this interface is returned by ldclient.client.LDClient.data_source_status_provider(). Application code never needs to implement this interface.
- abstract add_listener(listener: Callable[[DataSourceStatus], None])[source]¶
Subscribes for notifications of status changes.
The listener is a function or method that will be called with a single parameter: the new DataSourceStatus.
- Parameters:
listener – the listener to add
- abstract remove_listener(listener: Callable[[DataSourceStatus], None])[source]¶
Unsubscribes from notifications of status changes.
- Parameters:
listener – a listener that was previously added with add_listener(); if it was not, this method does nothing
- abstract property status: DataSourceStatus¶
Returns the current status of the data source.
All the built-in data source implementations are guaranteed to update this status whenever they successfully initialize, encounter an error, or recover after an error.
For a custom data source implementation, it is the responsibility of the data source to push status updates to the SDK; if it does not do so, the status will always be reported as DataSourceState.INITIALIZING.
- Returns:
the status
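As a sketch, a listener that logs data source state transitions might look like this (client is an already-configured ldclient.client.LDClient; property-style access is assumed):

```python
from ldclient.interfaces import DataSourceState, DataSourceStatus

provider = client.data_source_status_provider

def on_data_source_status(status: DataSourceStatus) -> None:
    # 'error' describes the most recent error, if any, since startup.
    if status.state == DataSourceState.INTERRUPTED and status.error is not None:
        print(f"data source interrupted: {status.error.kind} (HTTP status {status.error.status_code})")
    elif status.state == DataSourceState.OFF:
        print("data source has shut down permanently")

provider.add_listener(on_data_source_status)

# The current status can also be read on demand.
print(f"current data source state: {provider.status.state}")
```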
- class ldclient.interfaces.DataSourceUpdateSink[source]¶
Bases: object
Interface that a data source implementation will use to push data into the SDK.
The data source interacts with this object, rather than manipulating the data store directly, so that the SDK can perform any other necessary operations that must happen when data is updated.
- abstract delete(kind: VersionedDataKind, key: str, version: int)[source]¶
Attempt to delete an entity if it exists. Deletion should only succeed if the version parameter is greater than the existing entity’s version; otherwise, the method should do nothing.
- Parameters:
kind – The kind of object to delete
key – The key of the object to be deleted
version – The version for the delete operation
- abstract init(all_data: Mapping[VersionedDataKind, Mapping[str, dict]])[source]¶
Initializes (or re-initializes) the store with the specified set of entities. Any existing entries will be removed. Implementations can assume that this data set is up to date; there is no need to perform individual version comparisons between the existing objects and the supplied features.
If possible, the store should update the entire data set atomically. If that is not possible, it should iterate through the outer dictionary and then the inner dictionary using the existing iteration order of those dictionaries (the SDK will ensure that the items were inserted into the dictionaries in the correct order), storing each item, and then delete any leftover items at the very end.
- Parameters:
all_data – All objects to be stored
- abstract update_status(new_state: DataSourceState, new_error: DataSourceErrorInfo | None)[source]¶
Informs the SDK of a change in the data source’s status.
Data source implementations should use this method if they have any concept of being in a valid state, a temporarily disconnected state, or a permanently stopped state.
If new_state is different from the previous state, and/or new_error is non-null, the SDK will start returning the new status (adding a timestamp for the change) from DataSourceStatusProvider.status, and will trigger status change events to any registered listeners.
A special case is that if new_state is DataSourceState.INTERRUPTED, but the previous state was DataSourceState.INITIALIZING, the state will remain at DataSourceState.INITIALIZING because DataSourceState.INTERRUPTED is only meaningful after a successful startup.
- Parameters:
new_state – The updated state of the data source
new_error – An optional error if the new state is an error condition
- abstract upsert(kind: VersionedDataKind, item: dict)[source]¶
Attempt to add an entity, or update an existing entity with the same key. An update should only succeed if the new item’s version is greater than the old one; otherwise, the method should do nothing.
- Parameters:
kind – The kind of object to update
item – The object to update or insert
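For illustration, a custom polling data source might push a full payload and report its status as sketched below; the helper names are hypothetical, the import path for the standard data kinds is an assumption, and how the sink is supplied to the data source depends on your configuration.

```python
import time

from ldclient.interfaces import DataSourceErrorInfo, DataSourceErrorKind, DataSourceState
from ldclient.versioned_data_kind import FEATURES, SEGMENTS  # assumed location of the standard kinds


def publish_poll_result(sink, flags: dict, segments: dict) -> None:
    # 'sink' is the DataSourceUpdateSink made available to the data source;
    # 'flags' and 'segments' map keys to already-deserialized dicts.
    sink.init({FEATURES: flags, SEGMENTS: segments})
    sink.update_status(DataSourceState.VALID, None)


def report_poll_failure(sink, message: str) -> None:
    # A transient failure: the SDK keeps the last known data, and the reported state
    # becomes INTERRUPTED (or stays INITIALIZING if startup never succeeded).
    error = DataSourceErrorInfo(DataSourceErrorKind.NETWORK_ERROR, 0, time.time(), message)
    sink.update_status(DataSourceState.INTERRUPTED, error)
```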
- class ldclient.interfaces.DataStoreStatus(available: bool, stale: bool)[source]¶
Bases: object
Information about the data store’s status.
- property available: bool¶
Returns true if the SDK believes the data store is now available.
This property is normally true. If the SDK receives an exception while trying to query or update the data store, then it sets this property to false (notifying listeners, if any) and polls the store at intervals until a query succeeds. Once it succeeds, it sets the property back to true (again notifying listeners).
- Returns:
true if the store is available
- property stale: bool¶
Returns true if the store may be out of date due to a previous outage, so the SDK should attempt to refresh all feature flag data and rewrite it to the store.
This property is not meaningful to application code.
- Returns:
true if data should be rewritten
- class ldclient.interfaces.DataStoreStatusProvider[source]¶
Bases: object
An interface for querying the status of a persistent data store.
An implementation of this interface is returned by ldclient.client.LDClient.data_store_status_provider(). Application code should not implement this interface.
- abstract add_listener(listener: Callable[[DataStoreStatus], None])[source]¶
Subscribes for notifications of status changes.
Applications may wish to know if there is an outage in a persistent data store, since that could mean that flag evaluations are unable to get the flag data from the store (unless it is currently cached) and therefore might return default values.
If the SDK receives an exception while trying to query or update the data store, then it notifies listeners that the store appears to be offline (DataStoreStatus.available is false) and begins polling the store at intervals until a query succeeds. Once it succeeds, it notifies listeners again with DataStoreStatus.available set to true.
This method has no effect if the data store implementation does not support status tracking, such as if you are using the default in-memory store rather than a persistent store.
- Parameters:
listener – the listener to add
- abstract is_monitoring_enabled() bool [source]¶
Indicates whether the current data store implementation supports status monitoring.
This is normally true for all persistent data stores, and false for the default in-memory store. A true value means that any listeners added with add_listener() can expect to be notified if there is any error in storing data, and then notified again when the error condition is resolved. A false value means that the status is not meaningful and listeners should not expect to be notified.
- Returns:
true if status monitoring is enabled
- abstract remove_listener(listener: Callable[[DataStoreStatus], None])[source]¶
Unsubscribes from notifications of status changes.
This method has no effect if the data store implementation does not support status tracking, such as if you are using the default in-memory store rather than a persistent store.
- Parameters:
listener – the listener to remove; if no such listener was added, this does nothing
- abstract property status: DataStoreStatus¶
Returns the current status of the store.
This is only meaningful for persistent stores, or any custom data store implementation that makes use of the status reporting mechanism provided by the SDK. For the default in-memory store, the status will always be reported as “available”.
- Returns:
the latest status
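A usage sketch, assuming client is an already-configured ldclient.client.LDClient with a persistent data store (property-style access is assumed):

```python
from ldclient.interfaces import DataStoreStatus

provider = client.data_store_status_provider

if provider.is_monitoring_enabled():
    def on_store_status(status: DataStoreStatus) -> None:
        if not status.available:
            print("persistent store outage: evaluations may fall back to cached or default values")
        else:
            print("persistent store is available again")

    provider.add_listener(on_store_status)
```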
- class ldclient.interfaces.DataStoreUpdateSink[source]¶
Bases: object
Interface that a data store implementation can use to report information back to the SDK.
- abstract property listeners: Listeners¶
Access the listeners associated with this sink instance.
- abstract status() DataStoreStatus [source]¶
Inspect the data store’s operational status.
- abstract update_status(status: DataStoreStatus)[source]¶
Reports a change in the data store’s operational status.
This is what makes the status monitoring mechanisms in DataStoreStatusProvider work.
- Parameters:
status – the updated status properties
- class ldclient.interfaces.DiagnosticDescription[source]¶
Bases: object
Optional interface for components to describe their own configuration.
- class ldclient.interfaces.EventProcessor[source]¶
Bases: object
Interface for the component that buffers analytics events and sends them to LaunchDarkly. The default implementation can be replaced for testing purposes.
- abstract flush()[source]¶
Specifies that any buffered events should be sent as soon as possible, rather than waiting for the next flush interval. This method is asynchronous, so events still may not be sent until a later time. However, calling stop() will synchronously deliver any events that were not yet delivered prior to shutting down.
- class ldclient.interfaces.FeatureRequester[source]¶
Bases: object
Interface for the component that acquires feature flag data in polling mode. The default implementation can be replaced for testing purposes.
- class ldclient.interfaces.FeatureStore[source]¶
Bases: object
Interface for a versioned store for feature flags and related objects received from LaunchDarkly. Implementations should permit concurrent access and updates.
An “object”, for FeatureStore, is simply a dict of arbitrary data which must have at least three properties: key (its unique key), version (the version number provided by LaunchDarkly), and deleted (True if this is a placeholder for a deleted object).
Delete and upsert requests are versioned: if the version number in the request is less than the currently stored version of the object, the request should be ignored.
These semantics support the primary use case for the store, which synchronizes a collection of objects based on update messages that may be received out-of-order.
- abstract all(kind: ~ldclient.versioned_data_kind.VersionedDataKind, callback: ~typing.Callable[[~typing.Any], ~typing.Any] = <function FeatureStore.<lambda>>) Any [source]¶
Retrieves a dictionary of all associated objects of a given kind. The retrieved dict of keys to objects can be transformed by the specified callback.
- Parameters:
kind – The kind of objects to get
callback – A function that accepts the retrieved data and returns a transformed value
- abstract delete(kind: VersionedDataKind, key: str, version: int)[source]¶
Deletes the object associated with the specified key, if it exists and its version is less than the specified version. The object should be replaced in the data store by a placeholder with the specified version and a “deleted” property of True.
- Parameters:
kind – The kind of object to delete
key – The key of the object to be deleted
version – The version for the delete operation
- abstract get(kind: ~ldclient.versioned_data_kind.VersionedDataKind, key: str, callback: ~typing.Callable[[~typing.Any], ~typing.Any] = <function FeatureStore.<lambda>>) Any [source]¶
Retrieves the object to which the specified key is mapped, or None if the key is not found or the associated object has a deleted property of True. The retrieved object, if any (a dict), can be transformed by the specified callback.
- Parameters:
kind – The kind of object to get
key – The key whose associated object is to be returned
callback – A function that accepts the retrieved data and returns a transformed value
- Returns:
The result of executing callback
- abstract init(all_data: Mapping[VersionedDataKind, Mapping[str, dict]])[source]¶
Initializes (or re-initializes) the store with the specified set of objects. Any existing entries will be removed. Implementations can assume that this set of objects is up to date; there is no need to perform individual version comparisons between the existing objects and the supplied data.
- Parameters:
all_data – All objects to be stored
- abstract property initialized: bool¶
Returns whether the store has been initialized.
- abstract upsert(kind: VersionedDataKind, item: dict)[source]¶
Updates or inserts the object associated with the specified key. If an item with the same key already exists, it should update it only if the new item’s version property is greater than the old one.
- Parameters:
kind – The kind of object to update
item – The object to update or insert
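The following condensed sketch shows the versioning semantics described above in an in-memory form; the class is hypothetical and not thread-safe, whereas a real implementation should permit concurrent access.

```python
from typing import Any, Mapping

from ldclient.interfaces import FeatureStore
from ldclient.versioned_data_kind import VersionedDataKind


class DictFeatureStore(FeatureStore):
    """Hypothetical in-memory store illustrating the versioned upsert/delete rules."""

    def __init__(self):
        self._items: dict = {}       # {kind: {key: item_dict}}
        self._initialized = False

    def init(self, all_data: Mapping[VersionedDataKind, Mapping[str, dict]]):
        self._items = {kind: dict(items) for kind, items in all_data.items()}
        self._initialized = True

    def get(self, kind, key, callback=lambda x: x) -> Any:
        item = self._items.get(kind, {}).get(key)
        if item is not None and item.get('deleted', False):
            item = None  # deleted placeholders are hidden from callers
        return callback(item)

    def all(self, kind, callback=lambda x: x) -> Any:
        items = {k: v for k, v in self._items.get(kind, {}).items() if not v.get('deleted', False)}
        return callback(items)

    def upsert(self, kind, item: dict):
        old = self._items.setdefault(kind, {}).get(item['key'])
        if old is None or old['version'] < item['version']:
            self._items[kind][item['key']] = item  # out-of-order updates are ignored

    def delete(self, kind, key: str, version: int):
        # Deletion is an upsert of a placeholder, subject to the same version check.
        self.upsert(kind, {'key': key, 'version': version, 'deleted': True})

    @property
    def initialized(self) -> bool:
        return self._initialized
```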
- class ldclient.interfaces.FeatureStoreCore[source]¶
Bases: object
Interface for a simplified subset of the functionality of FeatureStore, to be used in conjunction with ldclient.feature_store_helpers.CachingStoreWrapper. This allows developers of custom FeatureStore implementations to avoid repeating logic that would commonly be needed in any such implementation, such as caching. Instead, they can implement only FeatureStoreCore and then create a CachingStoreWrapper.
- abstract get_all_internal(callback) Mapping[str, dict] [source]¶
Returns a dictionary of all associated objects of a given kind. The method should not attempt to filter out any items based on their deleted property, nor to cache any items.
- Parameters:
kind – The kind of objects to get
- Returns:
A dictionary of keys to items
- abstract get_internal(kind: VersionedDataKind, key: str) dict [source]¶
Returns the object to which the specified key is mapped, or None if no such item exists. The method should not attempt to filter out any items based on their deleted property, nor to cache any items.
- Parameters:
kind – The kind of object to get
key – The key of the object
- Returns:
The object to which the specified key is mapped, or None
- abstract init_internal(all_data: Mapping[VersionedDataKind, Mapping[str, dict]])[source]¶
Initializes (or re-initializes) the store with the specified set of objects. Any existing entries will be removed. Implementations can assume that this set of objects is up to date; there is no need to perform individual version comparisons between the existing objects and the supplied data.
- Parameters:
all_data – A dictionary of data kinds to item collections
- abstract initialized_internal() bool [source]¶
Returns true if this store has been initialized. In a shared data store, it should be able to detect this even if init_internal was called in a different process, i.e. the test should be based on looking at what is in the data store. The method does not need to worry about caching this value; CachingStoreWrapper will only call it when necessary.
- abstract upsert_internal(kind: VersionedDataKind, item: dict) dict [source]¶
Updates or inserts the object associated with the specified key. If an item with the same key already exists, it should update it only if the new item’s version property is greater than the old one. It should return the final state of the item, i.e. if the update succeeded then it returns the item that was passed in, and if the update failed due to the version check then it returns the item that is currently in the data store (this ensures that CachingStoreWrapper will update the cache correctly).
- Parameters:
kind – The kind of object to update
item – The object to update or insert
- Returns:
The state of the object after the update
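A minimal sketch of a core implementation follows; the class is hypothetical, the kind parameter of get_all_internal follows the parameter description above (the signature and parameter list disagree on its name), and caching plus deleted-item filtering are left to CachingStoreWrapper.

```python
from typing import Mapping

from ldclient.interfaces import FeatureStoreCore


class DictFeatureStoreCore(FeatureStoreCore):
    """Hypothetical core backed by a dict; meant to be wrapped by CachingStoreWrapper."""

    def __init__(self):
        self._data: dict = {}        # {kind: {key: item_dict}}
        self._initialized = False

    def init_internal(self, all_data):
        self._data = {kind: dict(items) for kind, items in all_data.items()}
        self._initialized = True

    def get_internal(self, kind, key):
        # No filtering of deleted placeholders and no caching here, as documented above.
        return self._data.get(kind, {}).get(key)

    def get_all_internal(self, kind) -> Mapping[str, dict]:
        return dict(self._data.get(kind, {}))

    def upsert_internal(self, kind, item):
        old = self._data.setdefault(kind, {}).get(item['key'])
        if old is None or old['version'] < item['version']:
            self._data[kind][item['key']] = item
            return item   # update succeeded: return the item that was passed in
        return old        # version check failed: return what is currently stored

    def initialized_internal(self) -> bool:
        return self._initialized
```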
- class ldclient.interfaces.FlagChange(key: str)[source]¶
Bases: object
Change event fired when some aspect of the flag referenced by the key has changed.
- property key: str¶
- Returns:
The flag key that was modified by the store.
- class ldclient.interfaces.FlagTracker[source]¶
Bases: object
An interface for tracking changes in feature flag configurations.
An implementation of this interface is returned by ldclient.client.LDClient.flag_tracker. Application code never needs to implement this interface.
- abstract add_flag_value_change_listener(key: str, context: Context, listener: Callable[[FlagValueChange], None])[source]¶
Registers a listener to be notified of a change in a specific feature flag’s value for a specific evaluation context.
When you call this method, it first immediately evaluates the feature flag. It then uses add_listener() to start listening for feature flag configuration changes, and whenever the specified feature flag changes, it re-evaluates the flag for the same context. It then calls your listener if and only if the resulting value has changed.
All feature flag evaluations require an instance of ldclient.context.Context. If the feature flag you are tracking does not have any context targeting rules, you must still pass a dummy context such as ldclient.context.Context.create("for-global-flags"). If you do not want the user to appear on your dashboard, use the anonymous property, which can be set via the context builder.
The returned listener represents the subscription that was created by this method call; to unsubscribe, pass that object (not your listener) to remove_listener().
- Parameters:
key – The flag key to monitor
context – The context to evaluate against the flag
listener – The listener to trigger if the value has changed
- abstract add_listener(listener: Callable[[FlagChange], None])[source]¶
Registers a listener to be notified of feature flag changes in general.
The listener will be notified whenever the SDK receives any change to any feature flag’s configuration, or to a user segment that is referenced by a feature flag. If the updated flag is used as a prerequisite for other flags, the SDK assumes that those flags may now behave differently and sends flag change events for them as well.
Note that this does not necessarily mean the flag’s value has changed for any particular evaluation context, only that some part of the flag configuration was changed so that it may return a different value than it previously returned for some context. If you want to track flag value changes, use add_flag_value_change_listener() instead.
It is possible, given current design restrictions, that a listener might be notified when no change has occurred. This edge case will be addressed in a later version of the SDK. It is important to note this issue does not affect add_flag_value_change_listener() listeners.
If using the file data source, any change in a data file will be treated as a change to every flag. Again, use add_flag_value_change_listener() (or just re-evaluate the flag yourself) if you want to know whether this is a change that really affects a flag’s value.
Change events only work if the SDK is actually connecting to LaunchDarkly (or using the file data source). If the SDK is only reading flags from a database then it cannot know when there is a change, because flags are read on an as-needed basis.
The listener will be called from a worker thread.
Calling this method for an already-registered listener has no effect.
- Parameters:
listener – listener to call when flag has changed
- abstract remove_listener(listener: Callable[[FlagChange], None])[source]¶
Unregisters a listener so that it will no longer be notified of feature flag changes.
Calling this method for a listener that was not previously registered has no effect.
- Parameters:
listener – the listener to remove
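A usage sketch combining both kinds of listener (client is an already-configured ldclient.client.LDClient; the flag key and context key are placeholders):

```python
from ldclient.context import Context
from ldclient.interfaces import FlagChange, FlagValueChange

tracker = client.flag_tracker

# Coarse-grained: called whenever any flag (or a segment it references) changes.
def on_flag_change(change: FlagChange) -> None:
    print(f"flag configuration changed: {change.key}")

tracker.add_listener(on_flag_change)

# Fine-grained: called only when the evaluated value changes for this context.
def on_value_change(change: FlagValueChange) -> None:
    print(f"{change.key} changed from {change.old_value} to {change.new_value}")

tracker.add_flag_value_change_listener("my-flag-key", Context.create("service-context"), on_value_change)
```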
- class ldclient.interfaces.FlagValueChange(key, old_value, new_value)[source]¶
Bases: object
Change event fired when the evaluated value for the specified flag key has changed.
- property key¶
- Returns:
The flag key that was modified by the store.
- property new_value¶
- Returns:
The new evaluation result after the flag was changed
- property old_value¶
- Returns:
The old evaluation result prior to the flag changing
- class ldclient.interfaces.UpdateProcessor[source]¶
Bases: BackgroundOperation
Interface for the component that obtains feature flag data in some way and passes it to a FeatureStore. The built-in implementations of this are the client’s standard streaming or polling behavior. For testing purposes, there is also ldclient.integrations.Files.new_data_source().
ldclient.feature_store_helpers module¶
This submodule contains support code for writing feature store implementations.
- class ldclient.feature_store_helpers.CachingStoreWrapper(core: FeatureStoreCore, cache_config: CacheConfig)[source]¶
Bases: DiagnosticDescription, FeatureStore
A partial implementation of ldclient.interfaces.FeatureStore.
This class delegates the basic functionality to an implementation of ldclient.interfaces.FeatureStoreCore - while adding optional caching behavior and other logic that would otherwise be repeated in every feature store implementation. This makes it easier to create new database integrations by implementing only the database-specific logic.
- __init__(core: FeatureStoreCore, cache_config: CacheConfig)[source]¶
Constructs an instance by wrapping a core implementation object.
- Parameters:
core – the implementation object
cache_config – the caching parameters
- describe_configuration(config)[source]¶
Used internally by the SDK to inspect the configuration.
- Parameters:
config – the full configuration, in case this component depends on properties outside itself
- Returns:
a string describing the type of the component, or None
- init(all_encoded_data: Mapping[VersionedDataKind, Mapping[str, Dict[Any, Any]]])[source]¶
- property initialized: bool¶
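For example, a FeatureStoreCore implementation such as the DictFeatureStoreCore sketched earlier could be wrapped and plugged into the SDK configuration as below; the CacheConfig import path and CacheConfig.default() are assumptions.

```python
from ldclient.config import Config
from ldclient.feature_store import CacheConfig  # assumed import path
from ldclient.feature_store_helpers import CachingStoreWrapper

# The wrapper adds caching (default TTL here) and exposes the full FeatureStore interface.
store = CachingStoreWrapper(DictFeatureStoreCore(), CacheConfig.default())

config = Config(sdk_key='my-sdk-key', feature_store=store)
```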
ldclient.versioned_data_kind module¶
This submodule is used only by the internals of the feature flag storage mechanism.
If you are writing your own implementation of ldclient.interfaces.FeatureStore, the VersionedDataKind tuple type will be passed to the kind parameter of the feature store methods; its namespace property tells the feature store which collection of objects is being referenced (“features”, “segments”, etc.). The intention is for the feature store to treat storable objects as completely generic JSON dictionaries, rather than having any special logic for features or segments.
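For instance, a persistent store might use the namespace to build its storage keys; this helper is hypothetical, and the prefix and separator are arbitrary choices for the sketch.

```python
from ldclient.versioned_data_kind import VersionedDataKind


def storage_key(kind: VersionedDataKind, item_key: str, prefix: str = "launchdarkly") -> str:
    # e.g. "launchdarkly:features:my-flag" or "launchdarkly:segments:beta-testers"
    return f"{prefix}:{kind.namespace}:{item_key}"
```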
- class ldclient.versioned_data_kind.VersionedDataKind(namespace: str, request_api_path: str, stream_api_path: str, decoder: Callable[[dict], Any] | None = None)[source]¶
Bases: object
- property namespace: str¶
- property request_api_path: str¶
- property stream_api_path: str¶
- class ldclient.versioned_data_kind.VersionedDataKindWithOrdering(namespace: str, request_api_path: str, stream_api_path: str, decoder: Callable[[dict], Any] | None, priority: int, get_dependency_keys: Callable[[dict], Iterable[str]] | None)[source]¶
Bases: VersionedDataKind
- property get_dependency_keys: Callable[[dict], Iterable[str]] | None¶
- property priority: int¶