Session
kedro.framework.session ¶
kedro.framework.session provides access to KedroSession, which is responsible for the project lifecycle.
| Module | Description |
|---|---|
| kedro.framework.session.session | Implements Kedro session responsible for project lifecycle. |
| kedro.framework.session.store | Implements a dict-like store object used to persist Kedro sessions. |
kedro.framework.session.session ¶
This module implements the Kedro session, which is responsible for the project lifecycle.
AbstractConfigLoader ¶
AbstractConfigLoader(conf_source, env=None, runtime_params=None, **kwargs)
Bases: UserDict
AbstractConfigLoader is the abstract base class for all ConfigLoader implementations. All user-defined ConfigLoader implementations should inherit from AbstractConfigLoader and implement all relevant abstract methods.
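As an illustration, a minimal user-defined loader might look like the following sketch (MyConfigLoader and its behaviour are hypothetical; real implementations such as OmegaConfigLoader do considerably more):
Example: ::
>>> from kedro.config import AbstractConfigLoader
>>>
>>> class MyConfigLoader(AbstractConfigLoader):
...     def __getitem__(self, key):
...         # Hypothetical: resolve every config key from runtime_params only.
...         return (self.runtime_params or {}).get(key, {})
>>>
>>> loader = MyConfigLoader(conf_source="conf", runtime_params={"parameters": {"lr": 0.01}})
>>> loader["parameters"]
{'lr': 0.01}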
Source code in kedro/config/abstract_config.py
get ¶
get(key, default=None)
D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.
Source code in kedro/config/abstract_config.py
AbstractRunner ¶
AbstractRunner(is_async=False)
Bases: ABC
AbstractRunner is the base class for all Pipeline runner implementations.
Parameters:
- is_async (bool, default: False) – If True, the node inputs and outputs are loaded and saved asynchronously with threads. Defaults to False.
Source code in kedro/runner/runner.py
run ¶
run(pipeline, catalog, hook_manager=None, run_id=None, only_missing_outputs=False)
Run the Pipeline using the datasets provided by catalog and save results back to the same objects.
Parameters:
- pipeline (Pipeline) – The Pipeline to run.
- catalog (CatalogProtocol | SharedMemoryCatalogProtocol) – An implemented instance of CatalogProtocol or SharedMemoryCatalogProtocol from which to fetch data.
- hook_manager (PluginManager | None, default: None) – The PluginManager to activate hooks.
- run_id (str | None, default: None) – The id of the run.
- only_missing_outputs (bool, default: False) – Run only nodes with missing outputs.
Raises:
- ValueError – Raised when Pipeline inputs cannot be satisfied.
Returns:
- dict[str, Any] – Dictionary with pipeline outputs, where keys are dataset names and values are dataset objects.
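For illustration, a minimal sketch of calling run on a concrete runner, here SequentialRunner, with a one-node pipeline and an in-memory catalog (the node function and dataset names are arbitrary):
Example: ::
>>> from kedro.io import DataCatalog, MemoryDataset
>>> from kedro.pipeline import node, pipeline
>>> from kedro.runner import SequentialRunner
>>>
>>> def double(x):
...     return x * 2
>>>
>>> p = pipeline([node(double, inputs="x", outputs="y")])
>>> catalog = DataCatalog({"x": MemoryDataset(data=21)})
>>> outputs = SequentialRunner().run(p, catalog)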
Source code in kedro/runner/runner.py
BaseSessionStore ¶
BaseSessionStore(path, session_id)
Bases: UserDict
BaseSessionStore is the base class for all session stores. BaseSessionStore is an ephemeral store implementation that doesn't persist the session data.
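A persistent store can be built by subclassing BaseSessionStore and overriding read and save. The sketch below is a minimal, hypothetical illustration (the JSONStore class is not part of Kedro, and it assumes the base class keeps the constructor arguments in _path and _session_id):
Example: ::
>>> import json
>>> from pathlib import Path
>>> from kedro.framework.session.store import BaseSessionStore
>>>
>>> class JSONStore(BaseSessionStore):
...     # Hypothetical sketch: persist the session data as a JSON file.
...     @property
...     def _location(self):
...         return Path(self._path) / f"{self._session_id}.json"
...
...     def read(self):
...         loc = self._location
...         return json.loads(loc.read_text()) if loc.exists() else {}
...
...     def save(self):
...         self._location.parent.mkdir(parents=True, exist_ok=True)
...         self._location.write_text(json.dumps(self.data))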
Source code in kedro/framework/session/store.py
read ¶
read()
Read the data from the session store.
Returns:
- dict[str, Any] – A mapping containing the session store data.
Source code in kedro/framework/session/store.py
save ¶
save()
Persist the session store.
Source code in kedro/framework/session/store.py
KedroContext ¶
KedroContext is the base class which holds the configuration and Kedro's main functionality.
Create a context object by providing the root of a Kedro project and the environment configuration subfolders (see kedro.config.OmegaConfigLoader).
Raises:
KedroContextError: If there is a mismatch between Kedro project version and package version.
Args:
project_path: Project path to define the context for.
config_loader: Kedro's OmegaConfigLoader for loading the configuration files.
env: Optional argument for configuration default environment to be used for running the pipeline. If not specified, it defaults to "local".
package_name: Package name for the Kedro project the context is created for.
hook_manager: The PluginManager to activate hooks, supplied by the session.
runtime_params: Optional dictionary containing runtime project parameters. If specified, will update (and therefore take precedence over) the parameters retrieved from the project configuration.
catalog property ¶
catalog
Read-only property referring to Kedro's catalog for this context.
Returns:
- CatalogProtocol – catalog defined in catalog.yml.
Raises: KedroContextError: Incorrect catalog registered for the project.
params property ¶
params
Read-only property referring to Kedro's parameters for this context.
Returns:
- dict[str, Any] – Parameters defined in parameters.yml with the addition of any extra parameters passed at initialization.
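For illustration, a context is typically obtained from a KedroSession rather than constructed directly; a minimal sketch, assuming a bootstrapped project:
Example: ::
>>> from pathlib import Path
>>> from kedro.framework.session import KedroSession
>>> from kedro.framework.startup import bootstrap_project
>>>
>>> bootstrap_project(Path("<project_root>"))
>>> with KedroSession.create() as session:
...     context = session.load_context()
...     catalog = context.catalog  # catalog defined in catalog.yml
...     params = context.params    # parameters.yml plus any runtime params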
KedroSession ¶
KedroSession(session_id, package_name=None, project_path=None, save_on_close=False, conf_source=None)
KedroSession is the object that is responsible for managing the lifecycle of a Kedro run. Use KedroSession.create() as a context manager to construct a new KedroSession with session data provided (see the example below).
Example: ::
>>> from kedro.framework.session import KedroSession
>>> from kedro.framework.startup import bootstrap_project
>>> from pathlib import Path
>>> # If you are creating a session outside of a Kedro project (i.e. not using
>>> # `kedro run` or `kedro jupyter`), you need to run `bootstrap_project` to
>>> # let Kedro find your configuration.
>>> bootstrap_project(Path("<project_root>"))
>>> with KedroSession.create() as session:
...     session.run()
Source code in kedro/framework/session/session.py
close ¶
close()
Close the current session and save its store to disk if the save_on_close attribute is True.
Source code in kedro/framework/session/session.py
create classmethod ¶
create(project_path=None, save_on_close=True, env=None, runtime_params=None, conf_source=None)
Create a new instance of KedroSession with the session data.
Parameters:
- project_path (Path | str | None, default: None) – Path to the project root directory. Default is current working directory Path.cwd().
- save_on_close (bool, default: True) – Whether or not to save the session when it's closed.
- conf_source (str | None, default: None) – Path to a directory containing configuration.
- env (str | None, default: None) – Environment for the KedroContext.
- runtime_params (dict[str, Any] | None, default: None) – Optional dictionary containing extra project parameters for underlying KedroContext. If specified, will update (and therefore take precedence over) the parameters retrieved from the project configuration.
Returns:
- KedroSession – A new KedroSession instance.
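For illustration, a sketch of creating a session with an explicit environment and runtime parameters (the environment name and parameter key are hypothetical):
Example: ::
>>> with KedroSession.create(
...     env="staging",
...     runtime_params={"model_options": {"test_size": 0.3}},
... ) as session:
...     session.run()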
Source code in kedro/framework/session/session.py
load_context ¶
load_context()
Return an instance of the project context.
Source code in kedro/framework/session/session.py
run ¶
run(pipeline_name=None, tags=None, runner=None, node_names=None, from_nodes=None, to_nodes=None, from_inputs=None, to_outputs=None, load_versions=None, namespaces=None, only_missing_outputs=False)
Runs the pipeline with a specified runner.
Parameters:
- pipeline_name (str | None, default: None) – Name of the pipeline that is being run.
- tags (Iterable[str] | None, default: None) – An optional list of node tags which should be used to filter the nodes of the Pipeline. If specified, only the nodes containing any of these tags will be run.
- runner (AbstractRunner | None, default: None) – An optional parameter specifying the runner that you want to run the pipeline with.
- node_names (Iterable[str] | None, default: None) – An optional list of node names which should be used to filter the nodes of the Pipeline. If specified, only the nodes with these names will be run.
- from_nodes (Iterable[str] | None, default: None) – An optional list of node names which should be used as a starting point of the new Pipeline.
- to_nodes (Iterable[str] | None, default: None) – An optional list of node names which should be used as an end point of the new Pipeline.
- from_inputs (Iterable[str] | None, default: None) – An optional list of input datasets which should be used as a starting point of the new Pipeline.
- to_outputs (Iterable[str] | None, default: None) – An optional list of output datasets which should be used as an end point of the new Pipeline.
- load_versions (dict[str, str] | None, default: None) – An optional flag to specify a particular dataset version timestamp to load.
- namespaces (Iterable[str] | None, default: None) – The namespaces of the nodes that are being run.
- only_missing_outputs (bool, default: False) – Run only nodes with missing outputs.
Raises:
ValueError: If the named or __default__ pipeline is not defined by register_pipelines.
Exception: Any uncaught exception during the run will be re-raised after being passed to the on_pipeline_error hook.
KedroSessionError: If more than one run is attempted during a single session.
Returns:
Any node outputs that cannot be processed by the DataCatalog. These are returned in a dictionary, where the keys are defined by the node outputs.
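For illustration, a sketch of a filtered run (the pipeline name and tag are hypothetical):
Example: ::
>>> with KedroSession.create() as session:
...     outputs = session.run(
...         pipeline_name="data_processing",
...         tags=["preprocess"],
...         only_missing_outputs=True,
...     )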
Source code in kedro/framework/session/session.py
KedroSessionError ¶
Bases: Exception
KedroSessionError is raised by KedroSession in the case that multiple runs are attempted in one session.
ParallelRunner ¶
ParallelRunner(max_workers=None, is_async=False)
Bases: AbstractRunner
ParallelRunner is an AbstractRunner implementation. It can be used to run the Pipeline in parallel groups formed by toposort.
Please note that this runner implementation validates datasets using the _validate_catalog method, which checks if any of the datasets are single-process only using the _SINGLE_PROCESS dataset attribute.
Parameters:
- max_workers (int | None, default: None) – Number of worker processes to spawn. If not set, calculated automatically based on the pipeline configuration and CPU core count. On Windows machines, the max_workers value cannot be larger than 61 and will be set to min(61, max_workers).
- is_async (bool, default: False) – If True, the node inputs and outputs are loaded and saved asynchronously with threads. Defaults to False.
Raises: ValueError: If bad parameters are passed.
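For illustration, a sketch of selecting this runner for a session run (the worker count is arbitrary):
Example: ::
>>> from kedro.runner import ParallelRunner
>>>
>>> with KedroSession.create() as session:
...     session.run(runner=ParallelRunner(max_workers=4))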
Source code in kedro/runner/parallel_runner.py
SequentialRunner ¶
SequentialRunner(is_async=False)
Bases: AbstractRunner
SequentialRunner is an AbstractRunner implementation. It can be used to run the Pipeline in a sequential manner using a topological sort of provided nodes.
Parameters:
- is_async (bool, default: False) – If True, the node inputs and outputs are loaded and saved asynchronously with threads. Defaults to False.
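For illustration, a sketch of enabling asynchronous loading and saving on this runner:
Example: ::
>>> from kedro.runner import SequentialRunner
>>>
>>> with KedroSession.create() as session:
...     session.run(runner=SequentialRunner(is_async=True))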
Source code in kedro/runner/sequential_runner.py
SharedMemoryDataCatalog ¶
SharedMemoryDataCatalog(datasets=None, config_resolver=None, load_versions=None, save_version=None)
Bases: DataCatalog
A specialized DataCatalog for managing datasets in a shared memory context.
The SharedMemoryDataCatalog extends the base DataCatalog to support multiprocessing by ensuring that datasets are serializable and synchronized across threads or processes. It provides additional functionality for managing shared memory datasets, such as setting a multiprocessing manager and validating dataset compatibility with multiprocessing.
Attributes:
- default_runtime_patterns (ClassVar) – A dictionary defining the default runtime pattern for datasets of type kedro.io.SharedMemoryDataset.
Example: ::
>>> from multiprocessing.managers import SyncManager
>>> from kedro.io import MemoryDataset
>>> from kedro.io.data_catalog import SharedMemoryDataCatalog
>>>
>>> # Create a shared memory catalog
>>> catalog = SharedMemoryDataCatalog(
... datasets={"shared_data": MemoryDataset(data=[1, 2, 3])}
... )
>>>
>>> # Set a multiprocessing manager
>>> manager = SyncManager()
>>> manager.start()
>>> catalog.set_manager_datasets(manager)
>>>
>>> # Validate the catalog for multiprocessing compatibility
>>> catalog.validate_catalog()
Source code in kedro/io/data_catalog.py
set_manager_datasets ¶
set_manager_datasets(manager)
Associate a multiprocessing manager with all shared memory datasets in the catalog.
This method iterates through all datasets in the catalog and sets the provided multiprocessing manager for datasets of type SharedMemoryDataset. This ensures that these datasets are properly synchronized across threads or processes.
Parameters:
- manager (SyncManager) – A multiprocessing manager to be associated with shared memory datasets.
Example: ::
>>> from multiprocessing.managers import SyncManager
>>> from kedro.io import MemoryDataset
>>> from kedro.io.data_catalog import SharedMemoryDataCatalog
>>>
>>> catalog = SharedMemoryDataCatalog(
... datasets={"shared_data": MemoryDataset(data=[1, 2, 3])}
... )
>>>
>>> manager = SyncManager()
>>> manager.start()
>>> catalog.set_manager_datasets(manager)
>>> print(catalog)
# {'shared_data': kedro.io.memory_dataset.MemoryDataset(data='<list>')}
Source code in kedro/io/data_catalog.py
validate_catalog ¶
validate_catalog()
Validate the catalog to ensure all datasets are serializable and compatible with multiprocessing.
This method checks that all datasets in the catalog are serializable and do not include non-proxied memory datasets as outputs. Non-serializable datasets or datasets that rely on single-process memory cannot be used in a multiprocessing context. If any such datasets are found, an exception is raised with details.
Raises:
- AttributeError – If any datasets are found to be non-serializable or incompatible with multiprocessing.
Example: ::
>>> from kedro.io import MemoryDataset
>>> from kedro.io.data_catalog import SharedMemoryDataCatalog
>>>
>>> catalog = SharedMemoryDataCatalog(
... datasets={"shared_data": MemoryDataset(data=[1, 2, 3])}
... )
>>>
>>> try:
... catalog.validate_catalog()
... except AttributeError as e:
... print(f"Validation failed: {e}")
# No error
Source code in kedro/io/data_catalog.py
_create_hook_manager ¶
_create_hook_manager()
Create a new PluginManager instance and register Kedro's hook specs.
Source code in kedro/framework/hooks/manager.py
_describe_git ¶
_describe_git(project_path)
Source code in kedro/framework/session/session.py
_jsonify_cli_context ¶
_jsonify_cli_context(ctx)
Source code in kedro/framework/session/session.py
_register_hooks ¶
_register_hooks(hook_manager, hooks)
Register all hooks as specified in hooks with the global hook_manager.
Parameters:
- hook_manager (PluginManager) – Hook manager instance to register the hooks with.
- hooks (Iterable[Any]) – Hooks that need to be registered.
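For illustration, a sketch of wiring a hook manager by hand (MyHooks is a hypothetical hooks class; within a project, KedroSession does this for you):
Example: ::
>>> from kedro.framework.hooks.manager import _create_hook_manager, _register_hooks
>>>
>>> class MyHooks:
...     pass  # @hook_impl methods would go here
>>>
>>> hook_manager = _create_hook_manager()
>>> _register_hooks(hook_manager, hooks=[MyHooks()])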
Source code in kedro/framework/hooks/manager.py
_register_hooks_entry_points ¶
_register_hooks_entry_points(hook_manager, disabled_plugins)
Register pluggy hooks from Python package entrypoints.
Parameters:
- hook_manager (PluginManager) – Hook manager instance to register the hooks with.
- disabled_plugins (Iterable[str]) – An iterable returning the names of plugins whose hooks must not be registered; any already registered hooks will be unregistered.
Source code in kedro/framework/hooks/manager.py
find_kedro_project ¶
find_kedro_project(current_dir)
Given a path, find a Kedro project associated with it. The result can be:
- The path itself, if it is the root directory of a Kedro project.
- One of its parents, if the path is not a Kedro project root but one of its parent paths is.
- None, if neither the path nor any of its parents is a Kedro project.
Returns:
- Any – Kedro project associated with a given path, or None if no relevant Kedro project is found.
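For illustration, a minimal sketch:
Example: ::
>>> from pathlib import Path
>>> from kedro.utils import find_kedro_project
>>>
>>> project_root = find_kedro_project(Path.cwd())
>>> if project_root is None:
...     print("Not inside a Kedro project")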
Source code in kedro/utils.py
generate_timestamp ¶
generate_timestamp()
Generate the timestamp to be used by versioning.
Returns:
- str – String representation of the current timestamp.
Source code in kedro/io/core.py
validate_settings ¶
validate_settings()
Eagerly validate that the settings module is importable if it exists. This is desirable to surface any syntax or import errors early. In particular, without eagerly importing the settings module, dynaconf would silence any import error (e.g. missing dependency, missing/mislabelled pipeline), and users would instead get a cryptic error message: Expected an instance of `ConfigLoader`, got `NoneType` instead.
More info on the dynaconf issue: https://github.com/dynaconf/dynaconf/issues/460
Source code in kedro/framework/project/__init__.py
kedro.framework.session.store ¶
This module implements a dict-like store object used to persist Kedro sessions.
BaseSessionStore ¶
BaseSessionStore(path, session_id)
Bases: UserDict
BaseSessionStore is the base class for all session stores. BaseSessionStore is an ephemeral store implementation that doesn't persist the session data.
Source code in kedro/framework/session/store.py
read ¶
read()
Read the data from the session store.
Returns:
- dict[str, Any] – A mapping containing the session store data.
Source code in kedro/framework/session/store.py
save ¶
save()
Persist the session store.
Source code in kedro/framework/session/store.py