Integrating with other services

ldclient.integrations module

This submodule contains factory/configuration methods for integrating the SDK with services other than LaunchDarkly.
class ldclient.integrations.Consul

Bases: object

Provides factory methods for integrations between the LaunchDarkly SDK and Consul.

DEFAULT_PREFIX = 'launchdarkly'
static new_feature_store(host=None, port=None, prefix=None, consul_opts=None, caching=CacheConfig.default())

Creates a Consul-backed implementation of ldclient.interfaces.FeatureStore. For more details about how and why you can use a persistent feature store, see the SDK reference guide.

To use this method, you must first install the python-consul package. Then, put the object returned by this method into the feature_store property of your client configuration (ldclient.config.Config):

```python
from ldclient.config import Config
from ldclient.integrations import Consul

store = Consul.new_feature_store()
config = Config(feature_store=store)
```

Note that python-consul is not available for Python 3.3 or 3.4, so this feature cannot be used in those Python versions.

Parameters:
- host (Optional[str]) – hostname of the Consul server (uses localhost if omitted)
- port (Optional[int]) – port of the Consul server (uses 8500 if omitted)
- prefix (Optional[str]) – a namespace prefix to be prepended to all Consul keys
- consul_opts (Optional[dict]) – optional parameters for configuring the Consul client, if you need to set any of them besides host and port, as defined in the python-consul API
- caching (CacheConfig) – specifies whether local caching should be enabled and, if so, sets the cache properties; defaults to ldclient.feature_store.CacheConfig.default()

Return type: ldclient.interfaces.FeatureStore
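The caching parameter only controls the local read-through cache that sits in front of Consul. The sketch below illustrates the behavior it configures; this is a simplified illustration under stated assumptions, not the SDK's actual implementation, and TTLCache and load_from_store are hypothetical names:

```python
import time

class TTLCache:
    """Minimal read-through cache with a time-to-live, illustrating
    what a persistent feature store's local caching layer does."""

    def __init__(self, expiration=15):
        self.expiration = expiration  # seconds before an entry goes stale
        self._entries = {}

    def get(self, key, loader):
        entry = self._entries.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.expiration:
            return entry[0]  # entry is fresh: no round trip to the store
        value = loader(key)  # stale or missing: read through to the store
        self._entries[key] = (value, now)
        return value

store_reads = []

def load_from_store(key):
    store_reads.append(key)  # stand-in for an actual Consul read
    return {"key": key, "on": True}

cache = TTLCache(expiration=30)
flag = cache.get("my-flag", load_from_store)
flag = cache.get("my-flag", load_from_store)  # second call is served locally
```

Disabling caching (CacheConfig.disabled()) corresponds to always falling through to the loader, at the cost of one store round trip per flag evaluation.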
class ldclient.integrations.DynamoDB

Bases: object

Provides factory methods for integrations between the LaunchDarkly SDK and DynamoDB.
static new_big_segment_store(table_name, prefix=None, dynamodb_opts={})

Creates a DynamoDB-backed Big Segment store.

Big Segments are a specific type of user segments. For more information, read the LaunchDarkly documentation: https://docs.launchdarkly.com/home/users/big-segments

To use this method, you must first install the boto3 package for the AWS SDK. Then, put the object returned by this method into the store property of your Big Segments configuration (see ldclient.config.Config):

```python
from ldclient.config import Config, BigSegmentsConfig
from ldclient.integrations import DynamoDB

store = DynamoDB.new_big_segment_store("my-table-name")
config = Config(big_segments=BigSegmentsConfig(store=store))
```

Note that the DynamoDB table must already exist; the LaunchDarkly SDK does not create the table automatically, because it has no way of knowing what additional properties (such as permissions and throughput) you would want it to have. The table must have a partition key called “namespace” and a sort key called “key”, both with a string type.

By default, the DynamoDB client will try to get your AWS credentials and region name from environment variables and/or local configuration files, as described in the AWS SDK documentation. You may also pass configuration settings in dynamodb_opts.

Parameters:
- table_name (str) – the name of an existing DynamoDB table
- prefix (Optional[str]) – an optional namespace prefix to be prepended to all DynamoDB keys
- dynamodb_opts (Mapping[str, Any]) – optional parameters for configuring the DynamoDB client, as defined in the boto3 API
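Since the SDK will not create the table for you, you must provision it yourself. With boto3's create_table, the key schema described above (partition key “namespace”, sort key “key”, both strings) would look roughly like this; only the key schema is dictated by the SDK, while the table name and billing mode here are illustrative choices:

```python
# Arguments for a boto3 DynamoDB create_table call. The KeySchema and
# AttributeDefinitions match the layout the SDK requires; everything
# else (name, billing, permissions) is up to you.
table_spec = {
    "TableName": "my-table-name",
    "KeySchema": [
        {"AttributeName": "namespace", "KeyType": "HASH"},   # partition key
        {"AttributeName": "key", "KeyType": "RANGE"},        # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "namespace", "AttributeType": "S"},  # string type
        {"AttributeName": "key", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
# boto3.client("dynamodb").create_table(**table_spec)  # requires boto3 + AWS credentials
```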
static new_feature_store(table_name, prefix=None, dynamodb_opts={}, caching=CacheConfig.default())

Creates a DynamoDB-backed implementation of ldclient.interfaces.FeatureStore. For more details about how and why you can use a persistent feature store, see the SDK reference guide.

To use this method, you must first install the boto3 package for the AWS SDK. Then, put the object returned by this method into the feature_store property of your client configuration (ldclient.config.Config):

```python
from ldclient.config import Config
from ldclient.integrations import DynamoDB

store = DynamoDB.new_feature_store("my-table-name")
config = Config(feature_store=store)
```

Note that the DynamoDB table must already exist; the LaunchDarkly SDK does not create the table automatically, because it has no way of knowing what additional properties (such as permissions and throughput) you would want it to have. The table must have a partition key called “namespace” and a sort key called “key”, both with a string type.

By default, the DynamoDB client will try to get your AWS credentials and region name from environment variables and/or local configuration files, as described in the AWS SDK documentation. You may also pass configuration settings in dynamodb_opts.

Parameters:
- table_name (str) – the name of an existing DynamoDB table
- prefix (Optional[str]) – an optional namespace prefix to be prepended to all DynamoDB keys
- dynamodb_opts (Mapping[str, Any]) – optional parameters for configuring the DynamoDB client, as defined in the boto3 API
- caching (CacheConfig) – specifies whether local caching should be enabled and, if so, sets the cache properties; defaults to ldclient.feature_store.CacheConfig.default()

Return type: ldclient.interfaces.FeatureStore
class
ldclient.integrations.
Files
[source]¶ Bases:
object
Provides factory methods for integrations with filesystem data.
static new_data_source(paths, auto_update=False, poll_interval=1, force_polling=False)

Provides a way to use local files as a source of feature flag state. This would typically be used in a test environment, to operate using a predetermined feature flag state without an actual LaunchDarkly connection.

To use this component, call new_data_source, specifying the file path(s) of your data file(s) in the paths parameter; then put the value returned by this method into the update_processor_class property of your LaunchDarkly client configuration (ldclient.config.Config):

```python
from ldclient.config import Config
from ldclient.integrations import Files

data_source = Files.new_data_source(paths=[myFilePath])
config = Config(update_processor_class=data_source)
```

This will cause the client not to connect to LaunchDarkly to get feature flags. The client may still make network connections to send analytics events, unless you have disabled this in your configuration with send_events or offline.

The format of the data files is described in the SDK Reference Guide on Reading flags from a file. Note that in order to use YAML, you will need to install the pyyaml package.

If the data source encounters any error in any file (malformed content, a missing file, or a duplicate key), it will not load flags from any of the files.

Parameters:
- paths (List[str]) – the paths of the source files for loading flag data. These may be absolute paths or relative to the current working directory. Files will be parsed as JSON unless the pyyaml package is installed, in which case YAML is also allowed.
- auto_update (bool) – (default: false) true if the data source should watch for changes to the source file(s) and reload flags whenever there is a change. The default implementation of this feature is based on polling the filesystem, which may not perform well; if you install the watchdog package, its native file-watching mechanism will be used instead. Note that auto-updating will only work if all of the files you specified have valid directory paths at startup time.
- poll_interval (float) – (default: 1) the minimum interval, in seconds, between checks for file modifications; used only if auto_update is true and the native file-watching mechanism from watchdog is not being used.
- force_polling (bool) – (default: false) true if the data source should implement auto-update via polling the filesystem even if a native mechanism is available. This is mainly for SDK testing.

Return type: object

Returns: an object (actually a lambda) to be stored in the update_processor_class configuration property
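The data file format (described in detail in the SDK reference guide) supports full flag definitions as well as a simplified flagValues map from flag keys to the values the SDK should return. The sketch below writes a minimal file of the simplified kind and reads it back; the flag keys and values are made up for illustration:

```python
import json
import tempfile

# A minimal data file using the simplified "flagValues" form, which maps
# flag keys directly to the values the SDK should serve (made-up flags).
flag_data = {
    "flagValues": {
        "my-boolean-flag": True,
        "my-string-flag": "on",
    }
}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(flag_data, f)
    my_file_path = f.name

# This path would then be handed to Files.new_data_source(paths=[my_file_path]).
with open(my_file_path) as f:
    loaded = json.load(f)
```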
class
ldclient.integrations.
Redis
[source]¶ Bases:
object
Provides factory methods for integrations between the LaunchDarkly SDK and Redis.
-
DEFAULT_MAX_CONNECTIONS
= 16¶
-
DEFAULT_PREFIX
= 'launchdarkly'¶
-
DEFAULT_URL
= 'redis://localhost:6379/0'¶
static new_big_segment_store(url='redis://localhost:6379/0', prefix='launchdarkly', max_connections=16, redis_opts={})

Creates a Redis-backed Big Segment store.

Big Segments are a specific type of user segments. For more information, read the LaunchDarkly documentation: https://docs.launchdarkly.com/home/users/big-segments

To use this method, you must first install the redis package. Then, put the object returned by this method into the store property of your Big Segments configuration (see ldclient.config.Config):

```python
from ldclient.config import Config, BigSegmentsConfig
from ldclient.integrations import Redis

store = Redis.new_big_segment_store()
config = Config(big_segments=BigSegmentsConfig(store=store))
```

Parameters:
- url (str) – the URL of the Redis host; defaults to DEFAULT_URL
- prefix (str) – a namespace prefix to be prepended to all Redis keys; defaults to DEFAULT_PREFIX
- max_connections (int) – the maximum number of Redis connections to keep in the connection pool; defaults to DEFAULT_MAX_CONNECTIONS. This parameter is deprecated and will later be dropped in favor of setting redis_opts['max_connections'].
- redis_opts (Dict[str, Any]) – extra options for initializing the Redis connection from the URL; see redis.connection.ConnectionPool.from_url for more details. Note that if you set max_connections here, it will take precedence over the deprecated max_connections parameter.

Return type: ldclient.interfaces.BigSegmentStore
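redis_opts is passed through to the redis package when the connection pool is created, so any keyword accepted by redis.connection.ConnectionPool.from_url can go here. As one sketch, you could set a socket timeout and a pool size; socket_timeout and max_connections are standard redis-py options, and the values below are arbitrary illustrative choices:

```python
# Extra connection options forwarded to the redis package.
# socket_timeout is in seconds; max_connections set here takes
# precedence over the deprecated max_connections parameter.
redis_opts = {
    "socket_timeout": 5,
    "max_connections": 32,
}
# store = Redis.new_big_segment_store(redis_opts=redis_opts)  # requires the redis package
```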
static new_feature_store(url='redis://localhost:6379/0', prefix='launchdarkly', max_connections=16, caching=CacheConfig.default(), redis_opts={})

Creates a Redis-backed implementation of FeatureStore. For more details about how and why you can use a persistent feature store, see the SDK reference guide.

To use this method, you must first install the redis package. Then, put the object returned by this method into the feature_store property of your client configuration (ldclient.config.Config):

```python
from ldclient.config import Config
from ldclient.integrations import Redis

store = Redis.new_feature_store()
config = Config(feature_store=store)
```

Parameters:
- url (str) – the URL of the Redis host; defaults to DEFAULT_URL
- prefix (str) – a namespace prefix to be prepended to all Redis keys; defaults to DEFAULT_PREFIX
- max_connections (int) – the maximum number of Redis connections to keep in the connection pool; defaults to DEFAULT_MAX_CONNECTIONS. This parameter is deprecated and will later be dropped in favor of setting redis_opts['max_connections'].
- caching (CacheConfig) – specifies whether local caching should be enabled and, if so, sets the cache properties; defaults to ldclient.feature_store.CacheConfig.default()
- redis_opts (Dict[str, Any]) – extra options for initializing the Redis connection from the URL; see redis.connection.ConnectionPool.from_url for more details. Note that if you set max_connections here, it will take precedence over the deprecated max_connections parameter.

Return type: ldclient.interfaces.FeatureStore
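The prefix parameter exists so that several environments or applications can safely share one Redis database: every key the store writes begins with the prefix. The sketch below shows the idea; the exact key layout the SDK uses is not specified here, so namespaced_key is an illustrative stand-in:

```python
def namespaced_key(prefix, kind, key):
    """Build a Redis key under a namespace prefix so that two
    environments (e.g. staging vs. production) sharing one Redis
    database never collide (illustrative layout, not the SDK's)."""
    return "%s:%s:%s" % (prefix, kind, key)

staging = namespaced_key("ld-staging", "features", "my-flag")
production = namespaced_key("ld-production", "features", "my-flag")
```

Giving each environment its own prefix (rather than its own Redis database) keeps operational overhead down while still isolating the data sets.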