
Pipeline

kedro.pipeline.Pipeline

Pipeline(nodes, *, inputs=None, outputs=None, parameters=None, tags=None, namespace=None, prefix_datasets_with_namespace=True)

A Pipeline defined as a collection of Node objects. This class treats nodes as part of a graph representation and provides inputs, outputs and execution order.

Parameters:

  • nodes (Iterable[Node | Pipeline] | Pipeline) –

    The iterable of nodes the Pipeline will be made of. If you provide pipelines among the list of nodes, those pipelines will be expanded and all their nodes will become part of this new pipeline.

  • inputs (str | set[str] | dict[str, str] | None, default: None ) –

    A name or collection of input names to be exposed as connection points to other pipelines upstream. This is optional; if not provided, the pipeline inputs are automatically inferred from the pipeline structure. When str or set[str] is provided, the listed input names will stay the same as they are named in the provided pipeline. When dict[str, str] is provided, current input names will be mapped to new names. Must only refer to the pipeline's free inputs.

  • outputs (str | set[str] | dict[str, str] | None, default: None ) –

    A name or collection of names to be exposed as connection points to other pipelines downstream. This is optional; if not provided, the pipeline outputs are automatically inferred from the pipeline structure. When str or set[str] is provided, the listed output names will stay the same as they are named in the provided pipeline. When dict[str, str] is provided, current output names will be mapped to new names. Can refer to both the pipeline's free outputs, as well as intermediate results that need to be exposed.

  • parameters (str | set[str] | dict[str, str] | None, default: None ) –

    A name or collection of parameters to namespace. When str or set[str] is provided, the listed parameter names will stay the same as they are named in the provided pipeline. When dict[str, str] is provided, current parameter names will be mapped to new names. The parameters can be specified without the params: prefix.

  • tags (str | Iterable[str] | None, default: None ) –

    Optional set of tags to be applied to all the pipeline nodes.

  • namespace (str | None, default: None ) –

    A prefix to give to all dataset names, except those explicitly named with the inputs/outputs arguments, and parameter references (params: and parameters).

  • prefix_datasets_with_namespace (bool, default: True ) –

    A flag to specify whether the inputs, outputs, and parameters of the nodes should be prefixed with the namespace. It is set to True by default. Turning it off is useful when namespacing is used only to group nodes for deployment purposes.

Raises:

  • ValueError

    When an empty list of nodes is provided, or when not all nodes have unique names.

  • CircularDependencyError

    When visiting all the nodes is not possible due to the existence of a circular dependency.

  • OutputNotUniqueError

    When multiple Node instances produce the same output.

  • ConfirmNotUniqueError

    When multiple Node instances attempt to confirm the same dataset.

  • PipelineError

    When inputs, outputs or parameters are incorrectly specified, or they do not exist on the original pipeline.

Example:

>>> from kedro.pipeline import Pipeline
>>> from kedro.pipeline import node
>>>
>>> # In the following scenario first_ds and second_ds
>>> # are datasets provided by io. Pipeline will pass these
>>> # datasets to first_node function and provides the result
>>> # to the second_node as input.
>>>
>>> def first_node(first_ds, second_ds):
>>>     return dict(third_ds=first_ds+second_ds)
>>>
>>> def second_node(third_ds):
>>>     return third_ds
>>>
>>> pipeline = Pipeline([
>>>     node(first_node, ['first_ds', 'second_ds'], ['third_ds']),
>>>     node(second_node, dict(third_ds='third_ds'), 'fourth_ds')])
>>>
>>> pipeline.describe()
>>>
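
The namespacing arguments compose as follows. Here is a minimal sketch, with hypothetical process/clean_raw names, of wrapping an existing pipeline in a namespace while keeping one dataset exposed as a free input:

>>> def process(df):
>>>     return df
>>>
>>> base = Pipeline([node(process, "raw", "clean", name="clean_raw")])
>>> namespaced = Pipeline(base, namespace="data_science", inputs={"raw"})
>>>
>>> namespaced.inputs()   # {'raw'} - exposed, not prefixed
>>> namespaced.outputs()  # {'data_science.clean'}
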
Source code in kedro/pipeline/pipeline.py
def __init__(  # noqa: PLR0913
    self,
    nodes: Iterable[Node | Pipeline] | Pipeline,
    *,
    inputs: str | set[str] | dict[str, str] | None = None,
    outputs: str | set[str] | dict[str, str] | None = None,
    parameters: str | set[str] | dict[str, str] | None = None,
    tags: str | Iterable[str] | None = None,
    namespace: str | None = None,
    prefix_datasets_with_namespace: bool = True,
):
    """Initialise ``Pipeline`` with a list of ``Node`` instances.

    Args:
        nodes: The iterable of nodes the ``Pipeline`` will be made of. If you
            provide pipelines among the list of nodes, those pipelines will
            be expanded and all their nodes will become part of this
            new pipeline.
        inputs: A name or collection of input names to be exposed as connection points
            to other pipelines upstream. This is optional; if not provided, the
            pipeline inputs are automatically inferred from the pipeline structure.
            When str or set[str] is provided, the listed input names will stay
            the same as they are named in the provided pipeline.
            When dict[str, str] is provided, current input names will be
            mapped to new names.
            Must only refer to the pipeline's free inputs.
        outputs: A name or collection of names to be exposed as connection points
            to other pipelines downstream. This is optional; if not provided, the
            pipeline outputs are automatically inferred from the pipeline structure.
            When str or set[str] is provided, the listed output names will stay
            the same as they are named in the provided pipeline.
            When dict[str, str] is provided, current output names will be
            mapped to new names.
            Can refer to both the pipeline's free outputs, as well as
            intermediate results that need to be exposed.
        parameters: A name or collection of parameters to namespace.
            When str or set[str] are provided, the listed parameter names will stay
            the same as they are named in the provided pipeline.
            When dict[str, str] is provided, current parameter names will be
            mapped to new names.
            The parameters can be specified without the `params:` prefix.
        tags: Optional set of tags to be applied to all the pipeline nodes.
        namespace: A prefix to give to all dataset names,
            except those explicitly named with the `inputs`/`outputs`
            arguments, and parameter references (`params:` and `parameters`).
        prefix_datasets_with_namespace: A flag to specify if the inputs, outputs, and parameters of the nodes
            should be prefixed with the namespace. It is set to True by default. It is
            useful to turn off when namespacing is used for grouping nodes for deployment purposes.

    Raises:
        ValueError:
            When an empty list of nodes is provided, or when not all
            nodes have unique names.
        CircularDependencyError:
            When visiting all the nodes is not
            possible due to the existence of a circular dependency.
        OutputNotUniqueError:
            When multiple ``Node`` instances produce the same output.
        ConfirmNotUniqueError:
            When multiple ``Node`` instances attempt to confirm the same
            dataset.
        PipelineError: When inputs, outputs or parameters are incorrectly
            specified, or they do not exist on the original pipeline.
    Example:
    ::

        >>> from kedro.pipeline import Pipeline
        >>> from kedro.pipeline import node
        >>>
        >>> # In the following scenario first_ds and second_ds
        >>> # are datasets provided by io. Pipeline will pass these
        >>> # datasets to first_node function and provides the result
        >>> # to the second_node as input.
        >>>
        >>> def first_node(first_ds, second_ds):
        >>>     return dict(third_ds=first_ds+second_ds)
        >>>
        >>> def second_node(third_ds):
        >>>     return third_ds
        >>>
        >>> pipeline = Pipeline([
        >>>     node(first_node, ['first_ds', 'second_ds'], ['third_ds']),
        >>>     node(second_node, dict(third_ds='third_ds'), 'fourth_ds')])
        >>>
        >>> pipeline.describe()
        >>>

    """
    if isinstance(nodes, Pipeline):
        nodes = nodes.nodes

    if any([inputs, outputs, parameters, namespace]):
        nodes = self._map_nodes(
            nodes=nodes,
            inputs=inputs,
            outputs=outputs,
            parameters=parameters,
            tags=tags,
            namespace=namespace,
            prefix_datasets_with_namespace=prefix_datasets_with_namespace,
        )

    if nodes is None:
        raise ValueError(
            "'nodes' argument of 'Pipeline' is None. It must be an "
            "iterable of nodes and/or pipelines instead."
        )
    nodes_list = list(nodes)  # in case it's a generator
    _validate_duplicate_nodes(nodes_list)

    nodes_chain = list(
        chain.from_iterable(
            [[n] if isinstance(n, Node) else n.nodes for n in nodes_list]
        )
    )
    _validate_transcoded_inputs_outputs(nodes_chain)
    _tags = set(_to_list(tags))

    if _tags:
        tagged_nodes = [n.tag(_tags) for n in nodes_chain]
    else:
        tagged_nodes = nodes_chain

    self._nodes_by_name = {node.name: node for node in tagged_nodes}
    _validate_unique_outputs(tagged_nodes)
    _validate_unique_confirms(tagged_nodes)

    # input -> nodes with input
    self._nodes_by_input: dict[str, set[Node]] = defaultdict(set)
    for node in tagged_nodes:
        for input_ in node.inputs:
            self._nodes_by_input[_strip_transcoding(input_)].add(node)

    # output -> node with output
    self._nodes_by_output: dict[str, Node] = {}
    for node in tagged_nodes:
        for output in node.outputs:
            self._nodes_by_output[_strip_transcoding(output)] = node

    self._nodes = tagged_nodes
    node_parents = self.node_dependencies
    self._toposorter = TopologicalSorter(node_parents)

    # test for circular dependencies without executing the toposort for efficiency
    try:
        self._toposorter.prepare()
    except CycleError as exc:
        loop = list(set(exc.args[1]))
        message = f"Circular dependencies exist among the following {len(loop)} item(s): {loop}"
        raise CircularDependencyError(message) from exc

    self._toposorted_nodes: list[Node] = []
    self._toposorted_groups: list[list[Node]] = []
    self._validate_namespaces(node_parents)

_nodes instance-attribute

_nodes = tagged_nodes

_nodes_by_input instance-attribute

_nodes_by_input = defaultdict(set)

_nodes_by_name instance-attribute

_nodes_by_name = {node.name: node for node in tagged_nodes}

_nodes_by_output instance-attribute

_nodes_by_output = {}

_toposorted_groups instance-attribute

_toposorted_groups = []

_toposorted_nodes instance-attribute

_toposorted_nodes = []

_toposorter instance-attribute

_toposorter = TopologicalSorter(node_parents)

grouped_nodes property

grouped_nodes

Return a list of the pipeline nodes in topologically ordered groups, i.e. if node A needs to be run before node B, it will appear in an earlier group.

Returns:

  • list[list[Node]]

    The pipeline nodes in topologically ordered groups.
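
A minimal sketch (using a hypothetical identity function) showing two chained nodes landing in consecutive groups:

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p = Pipeline([
>>>     node(identity, "a", "b", name="first"),
>>>     node(identity, "b", "c", name="second"),
>>> ])
>>> [[n.name for n in group] for group in p.grouped_nodes]
>>> # [['first'], ['second']]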

node_dependencies property

node_dependencies

All dependencies of nodes where the first Node has a direct dependency on the second Node.

Returns:

  • dict[Node, set[Node]]

    Dictionary where keys are nodes and values are sets made up of their parent nodes. Independent nodes have empty sets.
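
Continuing the sketch from grouped_nodes above, the mapping pairs each node with its parents:

>>> {child.name: sorted(parent.name for parent in parents)
>>>  for child, parents in p.node_dependencies.items()}
>>> # {'first': [], 'second': ['first']}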

nodes property

nodes

Return a list of the pipeline nodes in topological order, i.e. if node A needs to be run before node B, it will appear earlier in the list.

Returns:

  • list[Node]

    The list of all pipeline nodes in topological order.

__add__

__add__(other)
Source code in kedro/pipeline/pipeline.py
def __add__(self, other: Any) -> Pipeline:
    if not isinstance(other, Pipeline):
        return NotImplemented
    return Pipeline(set(self._nodes + other._nodes))

__and__

__and__(other)
Source code in kedro/pipeline/pipeline.py
def __and__(self, other: Any) -> Pipeline:
    if not isinstance(other, Pipeline):
        return NotImplemented
    return Pipeline(set(self._nodes) & set(other._nodes))

__or__

__or__(other)
Source code in kedro/pipeline/pipeline.py
def __or__(self, other: Any) -> Pipeline:
    if not isinstance(other, Pipeline):
        return NotImplemented
    return Pipeline(set(self._nodes + other._nodes))

__radd__

__radd__(other)
Source code in kedro/pipeline/pipeline.py
def __radd__(self, other: Any) -> Pipeline:
    if isinstance(other, int) and other == 0:
        return self
    return self.__add__(other)

__repr__

__repr__()

Pipeline ([node1, ..., node10 ...], name='pipeline_name')

Source code in kedro/pipeline/pipeline.py
def __repr__(self) -> str:  # pragma: no cover
    """Pipeline ([node1, ..., node10 ...], name='pipeline_name')"""
    max_nodes_to_display = 10

    nodes_reprs = [repr(node) for node in self.nodes[:max_nodes_to_display]]
    if len(self.nodes) > max_nodes_to_display:
        nodes_reprs.append("...")
    sep = ",\n"
    nodes_reprs_str = f"[\n{sep.join(nodes_reprs)}\n]" if nodes_reprs else "[]"
    constructor_repr = f"({nodes_reprs_str})"
    return f"{self.__class__.__name__}{constructor_repr}"

__sub__

__sub__(other)
Source code in kedro/pipeline/pipeline.py
def __sub__(self, other: Any) -> Pipeline:
    if not isinstance(other, Pipeline):
        return NotImplemented
    return Pipeline(set(self._nodes) - set(other._nodes))
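
Taken together, these operators make pipelines behave like sets of nodes. A small sketch (hypothetical identity function):

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p1 = Pipeline([node(identity, "a", "b", name="n1"),
>>>                node(identity, "b", "c", name="n2")])
>>> p2 = p1.only_nodes("n2")
>>>
>>> (p1 + p2).nodes  # union: n1 and n2 (shared nodes collapse)
>>> (p1 & p2).nodes  # intersection: n2 only
>>> (p1 - p2).nodes  # difference: n1 only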

_copy_node

_copy_node(node, mapping, namespace, prefix_datasets_with_namespace=True)
Source code in kedro/pipeline/pipeline.py
def _copy_node(
    self,
    node: Node,
    mapping: dict,
    namespace: str | None,
    prefix_datasets_with_namespace: bool = True,
) -> Node:
    new_namespace = node.namespace
    if namespace:
        new_namespace = (
            f"{namespace}.{node.namespace}" if node.namespace else namespace
        )
    return node._copy(
        inputs=self._process_dataset_names(
            node._inputs, mapping, namespace, prefix_datasets_with_namespace
        ),
        outputs=self._process_dataset_names(
            node._outputs, mapping, namespace, prefix_datasets_with_namespace
        ),
        namespace=new_namespace,
        confirms=self._process_dataset_names(
            node._confirms, mapping, namespace, prefix_datasets_with_namespace
        ),
    )

_get_nodes_with_inputs_transcode_compatible

_get_nodes_with_inputs_transcode_compatible(datasets)

Retrieves nodes that use the given datasets as inputs. If a transcoded dataset is referenced by name alone, without a format, all nodes that use inputs with that name are included; otherwise only the fully-qualified name (i.e. name@format) is matched.

Raises:

  • ValueError

    if any of the given datasets do not exist in the Pipeline object

Returns:

  • set[Node]

    Set of Nodes that use the given datasets as inputs.

Source code in kedro/pipeline/pipeline.py
def _get_nodes_with_inputs_transcode_compatible(
    self, datasets: set[str]
) -> set[Node]:
    """Retrieves nodes that use the given `datasets` as inputs.
    If provided a name, but no format, for a transcoded dataset, it
    includes all nodes that use inputs with that name, otherwise it
    matches to the fully-qualified name only (i.e. name@format).

    Raises:
        ValueError: if any of the given datasets do not exist in the
            ``Pipeline`` object

    Returns:
        Set of ``Nodes`` that use the given datasets as inputs.
    """
    missing = sorted(
        datasets - self.datasets() - self._transcode_compatible_names()
    )
    if missing:
        raise ValueError(f"Pipeline does not contain datasets named {missing}")

    relevant_nodes = set()
    for input_ in datasets:
        if _strip_transcoding(input_) == input_:
            relevant_nodes.update(self._nodes_by_input[_strip_transcoding(input_)])
        else:
            for node_ in self._nodes_by_input[_strip_transcoding(input_)]:
                if input_ in node_.inputs:
                    relevant_nodes.add(node_)
    return relevant_nodes

_get_nodes_with_outputs_transcode_compatible

_get_nodes_with_outputs_transcode_compatible(datasets)

Retrieves nodes that output to the given datasets. If a transcoded dataset is referenced by name alone, without a format, the node that outputs to that name is included; otherwise only the fully-qualified name (i.e. name@format) is matched.

Raises:

  • ValueError

    if any of the given datasets do not exist in the Pipeline object

Returns:

  • set[Node]

    Set of Nodes that output to the given datasets.

Source code in kedro/pipeline/pipeline.py
def _get_nodes_with_outputs_transcode_compatible(
    self, datasets: set[str]
) -> set[Node]:
    """Retrieves nodes that output to the given `datasets`.
    If provided a name, but no format, for a transcoded dataset, it
    includes the node that outputs to that name, otherwise it matches
    to the fully-qualified name only (i.e. name@format).

    Raises:
        ValueError: if any of the given datasets do not exist in the
            ``Pipeline`` object

    Returns:
        Set of ``Nodes`` that output to the given datasets.
    """
    missing = sorted(
        datasets - self.datasets() - self._transcode_compatible_names()
    )
    if missing:
        raise ValueError(f"Pipeline does not contain datasets named {missing}")

    relevant_nodes = set()
    for output in datasets:
        if _strip_transcoding(output) in self._nodes_by_output:
            node_with_output = self._nodes_by_output[_strip_transcoding(output)]
            if (
                _strip_transcoding(output) == output
                or output in node_with_output.outputs
            ):
                relevant_nodes.add(node_with_output)

    return relevant_nodes

_group_by_namespace

_group_by_namespace()
Source code in kedro/pipeline/pipeline.py
def _group_by_namespace(self) -> list[GroupedNodes]:
    grouped_nodes_map: dict[str, GroupedNodes] = {}

    for node in self.nodes:
        key = node.namespace.split(".")[0] if node.namespace else node.name

        if key not in grouped_nodes_map:
            grouped_nodes_map[key] = GroupedNodes(
                name=key,
                type="namespace" if node.namespace else "nodes",
            )

        grouped_nodes_map[key].nodes.append(node.name)

        dependencies = grouped_nodes_map[key].dependencies
        unique_dependencies = set(dependencies)

        for parent in sorted(self.node_dependencies[node], key=lambda n: n.name):
            parent_key = (
                parent.namespace.split(".")[0] if parent.namespace else parent.name
            )
            if parent_key != key and parent_key not in unique_dependencies:
                dependencies.append(parent_key)
                unique_dependencies.add(parent_key)

    return list(grouped_nodes_map.values())

_group_by_none

_group_by_none()
Source code in kedro/pipeline/pipeline.py
def _group_by_none(self) -> list[GroupedNodes]:
    return [
        GroupedNodes(
            name=node.name,
            type="nodes",
            nodes=[node.name],
            dependencies=[
                parent.name
                for parent in sorted(
                    self.node_dependencies[node], key=lambda n: n.name
                )
            ],
        )
        for node in self.nodes
    ]

_map_nodes

_map_nodes(nodes, inputs=None, outputs=None, parameters=None, tags=None, namespace=None, prefix_datasets_with_namespace=True)

Map namespace to the inputs, outputs, parameters and nodes of the pipeline.

Source code in kedro/pipeline/pipeline.py
def _map_nodes(  # noqa: PLR0913
    self,
    nodes: Iterable[Node | Pipeline] | Pipeline,
    inputs: str | set[str] | dict[str, str] | None = None,
    outputs: str | set[str] | dict[str, str] | None = None,
    parameters: str | set[str] | dict[str, str] | None = None,
    tags: str | Iterable[str] | None = None,
    namespace: str | None = None,
    prefix_datasets_with_namespace: bool = True,
) -> list[Node]:
    """Map namespace to the inputs, outputs, parameters and nodes of the pipeline."""
    if isinstance(nodes, Pipeline):
        # To ensure that we are always dealing with a *copy* of pipe.
        pipe = Pipeline([nodes], tags=tags)
    else:
        pipe = Pipeline(nodes, tags=tags)

    inputs = _get_dataset_names_mapping(inputs)
    outputs = _get_dataset_names_mapping(outputs)
    parameters = _get_param_names_mapping(parameters)

    _validate_datasets_exist(inputs.keys(), outputs.keys(), parameters.keys(), pipe)
    _validate_inputs_outputs(inputs.keys(), outputs.keys(), pipe)

    mapping = {**inputs, **outputs, **parameters}
    new_nodes = [
        self._copy_node(n, mapping, namespace, prefix_datasets_with_namespace)
        for n in pipe.nodes
    ]
    return new_nodes

_process_dataset_names

_process_dataset_names(datasets, mapping, namespace, prefix_datasets_with_namespace=True)
Source code in kedro/pipeline/pipeline.py
def _process_dataset_names(
    self,
    datasets: str | list[str] | dict[str, str] | None,
    mapping: dict,
    namespace: str | None,
    prefix_datasets_with_namespace: bool = True,
) -> str | list[str] | dict[str, str] | None:
    if datasets is None:
        return None
    if isinstance(datasets, str):
        return self._rename(
            datasets, mapping, namespace, prefix_datasets_with_namespace
        )
    if isinstance(datasets, list):
        return [
            self._rename(name, mapping, namespace, prefix_datasets_with_namespace)
            for name in datasets
        ]
    if isinstance(datasets, dict):
        return {
            key: self._rename(
                value, mapping, namespace, prefix_datasets_with_namespace
            )
            for key, value in datasets.items()
        }
    raise ValueError(
        f"Unexpected input {datasets} of type {type(datasets)}"
    )  # pragma: no cover

_remove_intermediates

_remove_intermediates(datasets)
Source code in kedro/pipeline/pipeline.py
def _remove_intermediates(self, datasets: set[str]) -> set[str]:
    intermediate = {_strip_transcoding(i) for i in self.all_inputs()} & {
        _strip_transcoding(o) for o in self.all_outputs()
    }
    return {d for d in datasets if _strip_transcoding(d) not in intermediate}

_rename

_rename(name, mapping, namespace, prefix_datasets_with_namespace)
Source code in kedro/pipeline/pipeline.py
def _rename(
    self,
    name: str,
    mapping: dict,
    namespace: str | None,
    prefix_datasets_with_namespace: bool,
) -> str:
    def _prefix_dataset(name: str) -> str:
        return f"{namespace}.{name}"

    def _prefix_param(name: str) -> str:
        _, param_name = name.split("params:")
        return f"params:{namespace}.{param_name}"

    def _is_transcode_base_in_mapping(name: str) -> bool:
        base_name, _ = _transcode_split(name)
        return base_name in mapping

    def _map_transcode_base(name: str) -> str:
        from kedro.pipeline.transcoding import TRANSCODING_SEPARATOR

        base_name, transcode_suffix = _transcode_split(name)
        return TRANSCODING_SEPARATOR.join((mapping[base_name], transcode_suffix))

    rules = [
        # if name mapped to new name, update with new name
        (lambda n: n in mapping, lambda n: mapping[n]),
        # if name refers to the set of all "parameters", leave as is
        (_is_all_parameters, lambda n: n),
        # if transcode base is mapped to a new name, update with new base
        (_is_transcode_base_in_mapping, _map_transcode_base),
    ]

    # Add rules for prefixing only if prefix_datasets_with_namespace is True
    if prefix_datasets_with_namespace:
        rules.extend(
            [
                # if name refers to a single parameter and a namespace is given, apply prefix
                (
                    lambda n: bool(namespace) and _is_single_parameter(n),
                    _prefix_param,
                ),
                # if namespace given for a dataset, prefix name using that namespace
                (lambda n: bool(namespace), _prefix_dataset),
            ]
        )

    for predicate, processor in rules:
        if predicate(name):  # type: ignore[no-untyped-call]
            processor_name: str = processor(name)  # type: ignore[no-untyped-call]
            return processor_name

    # leave name as is
    return name

_transcode_compatible_names

_transcode_compatible_names()
Source code in kedro/pipeline/pipeline.py
def _transcode_compatible_names(self) -> set[str]:
    return {_strip_transcoding(ds) for ds in self.datasets()}

_validate_namespaces

_validate_namespaces(node_parents)
Source code in kedro/pipeline/pipeline.py
def _validate_namespaces(self, node_parents: dict[Node, set[Node]]) -> None:
    from warnings import warn

    visited: dict[str, int] = dict()
    path: list[str] = []
    node_children = defaultdict(set)
    for child, parents in node_parents.items():
        for parent in parents:
            node_children[parent].add(child)

    def dfs(n: Node, last_namespace: str) -> None:
        curr_namespace = n.namespace or ""
        if curr_namespace and curr_namespace in visited:
            warn(
                f"Namespace '{curr_namespace}' is interrupted by nodes {path[visited[curr_namespace]:]} and thus invalid.",
                UserWarning,
            )

        # If the current namespace is different from the last namespace and isn't a child namespace,
        # mark the last namespace and all unrelated parent namespaces as visited to detect potential future interruptions
        backtracked: dict[str, int] = dict()
        if (
            last_namespace
            and curr_namespace != last_namespace
            and not curr_namespace.startswith(last_namespace + ".")
        ):
            parts = last_namespace.split(".")
            prefix = ""
            for p in parts:
                prefix += p
                if not curr_namespace.startswith(prefix):
                    backtracked[prefix] = len(path)
                prefix += "."

            visited.update(backtracked)

        path.append(n.name)
        for child in node_children[n]:
            dfs(child, n.namespace or "")
        path.pop()
        for key in backtracked.keys():
            visited.pop(key, None)

    start = (n for n in node_parents if not node_parents[n])
    for n in start:
        dfs(n, "")

all_inputs

all_inputs()

All inputs for all nodes in the pipeline.

Returns:

  • set[str]

    All node input names as a Set.

Source code in kedro/pipeline/pipeline.py
def all_inputs(self) -> set[str]:
    """All inputs for all nodes in the pipeline.

    Returns:
        All node input names as a Set.

    """
    return set.union(set(), *(node.inputs for node in self._nodes))

all_outputs

all_outputs()

All outputs of all nodes in the pipeline.

Returns:

  • set[str]

    All node outputs.

Source code in kedro/pipeline/pipeline.py
def all_outputs(self) -> set[str]:
    """All outputs of all nodes in the pipeline.

    Returns:
        All node outputs.

    """
    return set.union(set(), *(node.outputs for node in self._nodes))

datasets

datasets()

The names of all datasets used by the Pipeline, including inputs and outputs.

Returns:

  • set[str]

    The set of all pipeline datasets.

Source code in kedro/pipeline/pipeline.py
def datasets(self) -> set[str]:
    """The names of all datasets used by the ``Pipeline``,
    including inputs and outputs.

    Returns:
        The set of all pipeline datasets.

    """
    return self.all_outputs() | self.all_inputs()
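
A small sketch (hypothetical identity function) contrasting all_inputs, all_outputs and datasets:

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p = Pipeline([node(identity, "a", "b", name="n1"),
>>>               node(identity, "b", "c", name="n2")])
>>> p.all_inputs()   # {'a', 'b'}
>>> p.all_outputs()  # {'b', 'c'}
>>> p.datasets()     # {'a', 'b', 'c'}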

describe

describe(names_only=True)

Obtain the order of execution and expected free input variables in a loggable pre-formatted string. The order of nodes matches the order of execution given by the topological sort.

Parameters:

  • names_only (bool, default: True ) –

    If True, describe the pipeline with node names only; if False, include each node's full string representation.

Example:

>>> pipeline = Pipeline([ ... ])
>>>
>>> logger = logging.getLogger(__name__)
>>>
>>> logger.info(pipeline.describe())

After invocation the following will be printed as an info level log statement:

#### Pipeline execution order ####
Inputs: C, D

func1([C]) -> [A]
func2([D]) -> [B]
func3([A, D]) -> [E]

Outputs: B, E
##################################

Returns:

  • str

    The pipeline description as a formatted string.

Source code in kedro/pipeline/pipeline.py
def describe(self, names_only: bool = True) -> str:
    """Obtain the order of execution and expected free input variables in
    a loggable pre-formatted string. The order of nodes matches the order
    of execution given by the topological sort.

    Args:
        names_only: The flag to describe names_only pipeline with just
            node names.

    Example:
    ::

        >>> pipeline = Pipeline([ ... ])
        >>>
        >>> logger = logging.getLogger(__name__)
        >>>
        >>> logger.info(pipeline.describe())

    After invocation the following will be printed as an info level log
    statement:
    ::

        #### Pipeline execution order ####
        Inputs: C, D

        func1([C]) -> [A]
        func2([D]) -> [B]
        func3([A, D]) -> [E]

        Outputs: B, E
        ##################################

    Returns:
        The pipeline description as a formatted string.

    """

    def set_to_string(set_of_strings: set[str]) -> str:
        """Convert set to a string but return 'None' in case of an empty
        set.
        """
        return ", ".join(sorted(set_of_strings)) if set_of_strings else "None"

    nodes_as_string = "\n".join(
        node.name if names_only else str(node) for node in self.nodes
    )

    str_representation = (
        "#### Pipeline execution order ####\n"
        "Inputs: {0}\n\n"
        "{1}\n\n"
        "Outputs: {2}\n"
        "##################################"
    )

    return str_representation.format(
        set_to_string(self.inputs()), nodes_as_string, set_to_string(self.outputs())
    )

filter

filter(tags=None, from_nodes=None, to_nodes=None, node_names=None, from_inputs=None, to_outputs=None, node_namespaces=None)

Creates a new Pipeline object with the nodes that meet all of the specified filtering conditions.

The new pipeline object is the intersection of pipelines that meet each filtering condition. This is distinct from chaining multiple filters together.

Parameters:

  • tags (Iterable[str] | None, default: None ) –

    A list of node tags which should be used to lookup the nodes of the new Pipeline.

  • from_nodes (Iterable[str] | None, default: None ) –

    A list of node names which should be used as a starting point of the new Pipeline.

  • to_nodes (Iterable[str] | None, default: None ) –

    A list of node names which should be used as an end point of the new Pipeline.

  • node_names (Iterable[str] | None, default: None ) –

    A list of node names which should be selected for the new Pipeline.

  • from_inputs (Iterable[str] | None, default: None ) –

    A list of inputs which should be used as a starting point of the new Pipeline.

  • to_outputs (Iterable[str] | None, default: None ) –

    A list of outputs which should be the final outputs of the new Pipeline.

  • node_namespaces (Iterable[str] | None, default: None ) –

    A list of node namespaces which should be used to select nodes in the new Pipeline.

Returns:

  • Pipeline

    A new Pipeline object with nodes that meet all of the specified filtering conditions.

Raises:

  • ValueError

    The filtered Pipeline has no nodes.

Example:

>>> pipeline = Pipeline(
>>>     [
>>>         node(func, "A", "B", name="node1"),
>>>         node(func, "B", "C", name="node2"),
>>>         node(func, "C", "D", name="node3"),
>>>     ]
>>> )
>>> pipeline.filter(node_names=["node1", "node3"], from_inputs=["A"])
>>> # Gives a new pipeline object containing node1 and node3.
Source code in kedro/pipeline/pipeline.py
def filter(  # noqa: PLR0913
    self,
    tags: Iterable[str] | None = None,
    from_nodes: Iterable[str] | None = None,
    to_nodes: Iterable[str] | None = None,
    node_names: Iterable[str] | None = None,
    from_inputs: Iterable[str] | None = None,
    to_outputs: Iterable[str] | None = None,
    node_namespaces: Iterable[str] | None = None,
) -> Pipeline:
    """Creates a new ``Pipeline`` object with the nodes that meet all of the
    specified filtering conditions.

    The new pipeline object is the intersection of pipelines that meet each
    filtering condition. This is distinct from chaining multiple filters together.

    Args:
        tags: A list of node tags which should be used to lookup
            the nodes of the new ``Pipeline``.
        from_nodes: A list of node names which should be used as a
            starting point of the new ``Pipeline``.
        to_nodes:  A list of node names which should be used as an
            end point of the new ``Pipeline``.
        node_names: A list of node names which should be selected for the
            new ``Pipeline``.
        from_inputs: A list of inputs which should be used as a starting point
            of the new ``Pipeline``
        to_outputs: A list of outputs which should be the final outputs of
            the new ``Pipeline``.
        node_namespaces: A list of node namespaces which should be used to select
            nodes in the new ``Pipeline``.

    Returns:
        A new ``Pipeline`` object with nodes that meet all of the specified
            filtering conditions.

    Raises:
        ValueError: The filtered ``Pipeline`` has no nodes.

    Example:
    ::

        >>> pipeline = Pipeline(
        >>>     [
        >>>         node(func, "A", "B", name="node1"),
        >>>         node(func, "B", "C", name="node2"),
        >>>         node(func, "C", "D", name="node3"),
        >>>     ]
        >>> )
        >>> pipeline.filter(node_names=["node1", "node3"], from_inputs=["A"])
        >>> # Gives a new pipeline object containing node1 and node3.
    """

    filter_methods = {
        self.only_nodes_with_tags: tags,
        self.from_nodes: from_nodes,
        self.to_nodes: to_nodes,
        self.only_nodes: node_names,
        self.from_inputs: from_inputs,
        self.to_outputs: to_outputs,
        self.only_nodes_with_namespaces: [node_namespaces]
        if node_namespaces
        else None,
    }

    subset_pipelines = {
        filter_method(*filter_args)  # type: ignore
        for filter_method, filter_args in filter_methods.items()
        if filter_args
    }

    # Intersect all the pipelines subsets. We apply each filter to the original
    # pipeline object (self) rather than incrementally chaining filter methods
    # together. Hence, the order of filtering does not affect the outcome, and the
    # resultant pipeline is unambiguously defined.
    # If this were not the case then, for example,
    # pipeline.filter(node_names=["node1", "node3"], from_inputs=["A"])
    # would give different outcomes depending on the order of filter methods:
    # only_nodes and then from_inputs would give node1, while only_nodes and then
    # from_inputs would give node1 and node3.
    filtered_pipeline = Pipeline(self._nodes)
    for subset_pipeline in subset_pipelines:
        filtered_pipeline &= subset_pipeline

    if not filtered_pipeline.nodes:
        raise ValueError(
            "Pipeline contains no nodes after applying all provided filters. "
            "Please ensure that at least one pipeline with nodes has been defined."
        )
    return filtered_pipeline

from_inputs

from_inputs(*inputs)

Create a new Pipeline object with the nodes which depend directly or transitively on the provided inputs. If a transcoded input is referenced by name alone, without a format, all the nodes that use inputs with that name are included; otherwise only the fully-qualified name (i.e. name@format) is matched.

Parameters:

  • *inputs (str, default: () ) –

    A list of inputs which should be used as a starting point of the new Pipeline

Raises:

  • ValueError

    Raised when any of the given inputs do not exist in the Pipeline object.

Returns:

  • Pipeline

    A new Pipeline object, containing a subset of the nodes of the current one such that only nodes depending directly or transitively on the provided inputs are being copied.
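
A sketch of the transcoding-aware matching (hypothetical dataset names):

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p = Pipeline([node(identity, "data@pandas", "model", name="train")])
>>> p.from_inputs("data")         # base name matches any format
>>> p.from_inputs("data@pandas")  # matches this exact format only
>>> p.from_inputs("data@spark")   # raises ValueError: not in the pipeline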

Source code in kedro/pipeline/pipeline.py
def from_inputs(self, *inputs: str) -> Pipeline:
    """Create a new ``Pipeline`` object with the nodes which depend
    directly or transitively on the provided inputs.
    If provided a name, but no format, for a transcoded input, it
    includes all the nodes that use inputs with that name, otherwise it
    matches to the fully-qualified name only (i.e. name@format).

    Args:
        *inputs: A list of inputs which should be used as a starting point
            of the new ``Pipeline``

    Raises:
        ValueError: Raised when any of the given inputs do not exist in the
            ``Pipeline`` object.

    Returns:
        A new ``Pipeline`` object, containing a subset of the
            nodes of the current one such that only nodes depending
            directly or transitively on the provided inputs are being
            copied.

    """
    starting = set(inputs)
    result: set[Node] = set()
    next_nodes = self._get_nodes_with_inputs_transcode_compatible(starting)

    while next_nodes:
        result |= next_nodes
        outputs = set(chain.from_iterable(node.outputs for node in next_nodes))
        starting = outputs

        next_nodes = set(
            chain.from_iterable(
                self._nodes_by_input[_strip_transcoding(input_)]
                for input_ in starting
            )
        )

    return Pipeline(result)

from_nodes

from_nodes(*node_names)

Create a new Pipeline object with the nodes which depend directly or transitively on the provided nodes.

Parameters:

  • *node_names (str, default: () ) –

    A list of node_names which should be used as a starting point of the new Pipeline.

Raises:

  • ValueError

    Raised when any of the given names do not exist in the Pipeline object.

Returns:

  • Pipeline

    A new Pipeline object, containing a subset of the nodes of the current one such that only nodes depending directly or transitively on the provided nodes are being copied.
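
For example (hypothetical identity function), selecting a node and its downstream dependants:

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p = Pipeline([node(identity, "a", "b", name="n1"),
>>>               node(identity, "b", "c", name="n2"),
>>>               node(identity, "c", "d", name="n3")])
>>> sorted(n.name for n in p.from_nodes("n2").nodes)
>>> # ['n2', 'n3'] - n2 plus everything downstream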

Source code in kedro/pipeline/pipeline.py
def from_nodes(self, *node_names: str) -> Pipeline:
    """Create a new ``Pipeline`` object with the nodes which depend
    directly or transitively on the provided nodes.

    Args:
        *node_names: A list of node_names which should be used as a
            starting point of the new ``Pipeline``.
    Raises:
        ValueError: Raised when any of the given names do not exist in the
            ``Pipeline`` object.
    Returns:
        A new ``Pipeline`` object, containing a subset of the nodes of
            the current one such that only nodes depending directly or
            transitively on the provided nodes are being copied.

    """

    res = self.only_nodes(*node_names)
    res += self.from_inputs(*map(_strip_transcoding, res.all_outputs()))
    return res

group_nodes_by

group_nodes_by(group_by='namespace')

Return a list of grouped nodes based on the specified strategy.

Parameters:

  • group_by (str | None, default: 'namespace' ) –

    Strategy for grouping. Supported values:

      - "namespace": groups nodes by their top-level namespace.
      - None or "none": no grouping; each node is its own group.

Returns:

  • list[GroupedNodes]

    A list of GroupedNodes instances.
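
A minimal sketch (hypothetical identity function and namespace):

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p = Pipeline([node(identity, "a", "b", name="n1")], namespace="prep")
>>> groups = p.group_nodes_by("namespace")
>>> groups[0].name, groups[0].type, groups[0].nodes
>>> # ('prep', 'namespace', ['prep.n1'])
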
Source code in kedro/pipeline/pipeline.py
def group_nodes_by(
    self,
    group_by: str | None = "namespace",
) -> list[GroupedNodes]:
    """Return a list of grouped nodes based on the specified strategy.

    Args:
        group_by: Strategy for grouping. Supported values:
            - "namespace": Groups nodes by their top-level namespace.
            - None or "none": No grouping, each node is its own group.

    Returns:
        A list of GroupedNodes instances.
    """
    if group_by is None or group_by.lower() == "none":
        return self._group_by_none()
    if group_by.lower() == "namespace":
        return self._group_by_namespace()
    raise ValueError(f"Unsupported group_by strategy: {group_by}")

inputs

inputs()

The names of free inputs that must be provided at runtime so that the pipeline is runnable. Does not include intermediate inputs which are produced and consumed by the inner pipeline nodes. Resolves transcoded names where necessary.

Returns:

  • set[str]

    The set of free input names needed by the pipeline.

Source code in kedro/pipeline/pipeline.py
def inputs(self) -> set[str]:
    """The names of free inputs that must be provided at runtime so that
    the pipeline is runnable. Does not include intermediate inputs which
    are produced and consumed by the inner pipeline nodes. Resolves
    transcoded names where necessary.

    Returns:
        The set of free input names needed by the pipeline.

    """
    return self._remove_intermediates(self.all_inputs())
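
For example (hypothetical identity function), intermediates are excluded:

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p = Pipeline([node(identity, "a", "b", name="n1"),
>>>               node(identity, "b", "c", name="n2")])
>>> p.inputs()  # {'a'} - 'b' is produced inside the pipeline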

only_nodes

only_nodes(*node_names)

Create a new Pipeline which will contain only the specified nodes by name.

Parameters:

  • *node_names (str, default: () ) –

    One or more node names. The returned Pipeline will only contain these nodes.

Raises:

  • ValueError

    When some invalid node name is given.

Returns:

  • Pipeline

    A new Pipeline, containing only the specified nodes.
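
Note that namespaced nodes must be addressed by their full name; a sketch (hypothetical identity function):

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p = Pipeline([node(identity, "a", "b", name="n1")], namespace="prep")
>>> p.only_nodes("prep.n1")  # OK
>>> p.only_nodes("n1")       # ValueError, suggesting 'prep.n1'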

Source code in kedro/pipeline/pipeline.py
def only_nodes(self, *node_names: str) -> Pipeline:
    """Create a new ``Pipeline`` which will contain only the specified
    nodes by name.

    Args:
        *node_names: One or more node names. The returned ``Pipeline``
            will only contain these nodes.

    Raises:
        ValueError: When some invalid node name is given.

    Returns:
        A new ``Pipeline``, containing only ``nodes``.

    """
    unregistered_nodes = set(node_names) - set(self._nodes_by_name.keys())
    if unregistered_nodes:
        # check if unregistered nodes are available under namespace
        namespaces = []
        for unregistered_node in unregistered_nodes:
            namespaces.extend(
                [
                    node_name
                    for node_name in self._nodes_by_name.keys()
                    if node_name.endswith(f".{unregistered_node}")
                ]
            )
        if namespaces:
            raise ValueError(
                f"Pipeline does not contain nodes named {list(unregistered_nodes)}. "
                f"Did you mean: {namespaces}?"
            )
        raise ValueError(
            f"Pipeline does not contain nodes named {list(unregistered_nodes)}."
        )

    nodes = [self._nodes_by_name[name] for name in node_names]
    return Pipeline(nodes)

only_nodes_with_inputs

only_nodes_with_inputs(*inputs)

Create a new Pipeline object with the nodes which depend directly on the provided inputs. If a transcoded input is referenced by name alone, without a format, all the nodes that use inputs with that name are included; otherwise only the fully-qualified name (i.e. name@format) is matched.

Parameters:

  • *inputs (str, default: () ) –

    A list of inputs which should be used as a starting point of the new Pipeline.

Raises:

  • ValueError

    Raised when any of the given inputs do not exist in the Pipeline object.

Returns:

  • Pipeline

    A new Pipeline object, containing a subset of the nodes of the current one such that only nodes depending directly on the provided inputs are being copied.

Source code in kedro/pipeline/pipeline.py
def only_nodes_with_inputs(self, *inputs: str) -> Pipeline:
    """Create a new ``Pipeline`` object with the nodes which depend
    directly on the provided inputs.
    If provided a name, but no format, for a transcoded input, it
    includes all the nodes that use inputs with that name, otherwise it
    matches to the fully-qualified name only (i.e. name@format).

    Args:
        *inputs: A list of inputs which should be used as a starting
            point of the new ``Pipeline``.

    Raises:
        ValueError: Raised when any of the given inputs do not exist in the
            ``Pipeline`` object.

    Returns:
        A new ``Pipeline`` object, containing a subset of the
            nodes of the current one such that only nodes depending
            directly on the provided inputs are being copied.

    """
    starting = set(inputs)
    nodes = self._get_nodes_with_inputs_transcode_compatible(starting)

    return Pipeline(nodes)

only_nodes_with_namespaces

only_nodes_with_namespaces(node_namespaces)

Creates a new Pipeline containing only nodes with the specified namespaces.

Parameters:

  • node_namespaces (list[str]) –

    A list of node namespaces.

Raises:

  • ValueError

    When pipeline contains no nodes with the specified namespaces.

Returns:

  • Pipeline

    A new Pipeline containing nodes with the specified namespaces.

Source code in kedro/pipeline/pipeline.py
def only_nodes_with_namespaces(self, node_namespaces: list[str]) -> Pipeline:
    """Creates a new ``Pipeline`` containing only nodes with the specified
    namespaces.

    Args:
        node_namespaces: A list of node namespaces.

    Raises:
        ValueError: When pipeline contains no nodes with the specified namespaces.

    Returns:
        A new ``Pipeline`` containing nodes with the specified namespaces.
    """
    nodes = []
    unmatched_namespaces = []  # Track namespaces that don't match any nodes

    for node_namespace in node_namespaces:
        matching_nodes = []
        for n in self._nodes:
            if n.namespace and (
                n.namespace == node_namespace
                or n.namespace.startswith(f"{node_namespace}.")
            ):
                matching_nodes.append(n)

        if not matching_nodes:
            unmatched_namespaces.append(node_namespace)
        nodes.extend(matching_nodes)

    if unmatched_namespaces:
        raise ValueError(
            f"Pipeline does not contain nodes with the following namespaces: {unmatched_namespaces}"
        )

    return Pipeline(nodes)

only_nodes_with_outputs

only_nodes_with_outputs(*outputs)

Create a new Pipeline object with the nodes which are directly required to produce the provided outputs. If a transcoded dataset is referenced by name alone, without a format, all the nodes that output to that name are included; otherwise only the fully-qualified name (i.e. name@format) is matched.

Parameters:

  • *outputs (str, default: () ) –

    A list of outputs which should be the final outputs of the new Pipeline.

Raises:

  • ValueError

    Raised when any of the given outputs do not exist in the Pipeline object.

Returns:

  • Pipeline

    A new Pipeline object, containing a subset of the nodes of the current one such that only nodes which are directly required to produce the provided outputs are being copied.

Source code in kedro/pipeline/pipeline.py
def only_nodes_with_outputs(self, *outputs: str) -> Pipeline:
    """Create a new ``Pipeline`` object with the nodes which are directly
    required to produce the provided outputs.
    If provided a name, but no format, for a transcoded dataset, it
    includes all the nodes that output to that name, otherwise it matches
    to the fully-qualified name only (i.e. name@format).

    Args:
        *outputs: A list of outputs which should be the final outputs
            of the new ``Pipeline``.

    Raises:
        ValueError: Raised when any of the given outputs do not exist in the
            ``Pipeline`` object.

    Returns:
        A new ``Pipeline`` object, containing a subset of the nodes of the
        current one such that only nodes which are directly required to
        produce the provided outputs are being copied.
    """
    starting = set(outputs)
    nodes = self._get_nodes_with_outputs_transcode_compatible(starting)

    return Pipeline(nodes)

only_nodes_with_tags

only_nodes_with_tags(*tags)

Creates a new Pipeline object with the nodes which contain any of the provided tags. The resulting Pipeline is empty if no tags are provided.

Parameters:

  • *tags (str, default: () ) –

    A list of node tags which should be used to lookup the nodes of the new Pipeline.

Returns:

  • Pipeline

    A new Pipeline object, containing a subset of the nodes of the current one such that only nodes containing any of the tags provided are being copied.

Source code in kedro/pipeline/pipeline.py
def only_nodes_with_tags(self, *tags: str) -> Pipeline:
    """Creates a new ``Pipeline`` object with the nodes which contain *any*
    of the provided tags. The resulting ``Pipeline`` is empty if no tags
    are provided.

    Args:
        *tags: A list of node tags which should be used to lookup
            the nodes of the new ``Pipeline``.
    Returns:
        Pipeline: A new ``Pipeline`` object, containing a subset of the
            nodes of the current one such that only nodes containing *any*
            of the tags provided are being copied.
    """
    unique_tags = set(tags)
    nodes = [node for node in self._nodes if unique_tags & node.tags]
    return Pipeline(nodes)

outputs

outputs()

The names of outputs produced when the whole pipeline is run. Does not include intermediate outputs that are consumed by other pipeline nodes. Resolves transcoded names where necessary.

Returns:

  • set[str]

    The set of final pipeline outputs.

Source code in kedro/pipeline/pipeline.py
def outputs(self) -> set[str]:
    """The names of outputs produced when the whole pipeline is run.
    Does not include intermediate outputs that are consumed by
    other pipeline nodes. Resolves transcoded names where necessary.

    Returns:
        The set of final pipeline outputs.

    """
    return self._remove_intermediates(self.all_outputs())
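
Mirroring the inputs() example above (hypothetical identity function):

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p = Pipeline([node(identity, "a", "b", name="n1"),
>>>               node(identity, "b", "c", name="n2")])
>>> p.outputs()  # {'c'} - 'b' is consumed by n2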

tag

tag(tags)

Tags all the nodes in the pipeline.

Parameters:

  • tags (str | Iterable[str]) –

    The tags to be added to the nodes.

Returns:

  • Pipeline

    New Pipeline object with nodes tagged.
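
A short sketch (hypothetical identity function):

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p = Pipeline([node(identity, "a", "b", name="n1")])
>>> tagged = p.tag(["baseline", "regression"])
>>> sorted(tagged.nodes[0].tags)  # ['baseline', 'regression']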

Source code in kedro/pipeline/pipeline.py
def tag(self, tags: str | Iterable[str]) -> Pipeline:
    """Tags all the nodes in the pipeline.

    Args:
        tags: The tags to be added to the nodes.

    Returns:
        New ``Pipeline`` object with nodes tagged.
    """
    nodes = [n.tag(tags) for n in self._nodes]
    return Pipeline(nodes)

to_json

to_json()

Return a JSON representation of the pipeline.
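
Based on the structure visible in the source below, the payload can be decoded like this (sketch, assuming a pipeline p):

>>> import json
>>>
>>> payload = json.loads(p.to_json())
>>> payload["kedro_version"]                  # version used to build it
>>> [n["name"] for n in payload["pipeline"]]  # node names, e.g. ['n1']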

Source code in kedro/pipeline/pipeline.py
def to_json(self) -> str:
    """Return a json representation of the pipeline."""
    transformed = [
        {
            "name": n.name,
            "inputs": list(n.inputs),
            "outputs": list(n.outputs),
            "tags": list(n.tags),
        }
        for n in self._nodes
    ]
    pipeline_versioned = {
        "kedro_version": kedro.__version__,
        "pipeline": transformed,
    }

    return json.dumps(pipeline_versioned)

to_nodes

to_nodes(*node_names)

Create a new Pipeline object with the nodes required directly or transitively by the provided nodes.

Parameters:

  • *node_names (str, default: () ) –

    A list of node_names which should be used as an end point of the new Pipeline.

Raises:

  • ValueError

    Raised when any of the given names do not exist in the Pipeline object.

Returns:

  • Pipeline

    A new Pipeline object, containing a subset of the nodes of the current one such that only nodes required directly or transitively by the provided nodes are being copied.
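
For example (hypothetical identity function), selecting a node and its upstream requirements:

>>> from kedro.pipeline import Pipeline, node
>>>
>>> def identity(x):
>>>     return x
>>>
>>> p = Pipeline([node(identity, "a", "b", name="n1"),
>>>               node(identity, "b", "c", name="n2"),
>>>               node(identity, "c", "d", name="n3")])
>>> sorted(n.name for n in p.to_nodes("n2").nodes)
>>> # ['n1', 'n2'] - n2 plus everything upstream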

Source code in kedro/pipeline/pipeline.py
def to_nodes(self, *node_names: str) -> Pipeline:
    """Create a new ``Pipeline`` object with the nodes required directly
    or transitively by the provided nodes.

    Args:
        *node_names: A list of node_names which should be used as an
            end point of the new ``Pipeline``.
    Raises:
        ValueError: Raised when any of the given names do not exist in the
            ``Pipeline`` object.
    Returns:
        A new ``Pipeline`` object, containing a subset of the nodes of the
            current one such that only nodes required directly or
            transitively by the provided nodes are being copied.

    """

    res = self.only_nodes(*node_names)
    res += self.to_outputs(*map(_strip_transcoding, res.all_inputs()))
    return res

to_outputs

to_outputs(*outputs)

Create a new Pipeline object with the nodes which are directly or transitively required to produce the provided outputs. If a transcoded dataset is referenced by name alone, without a format, all the nodes that output to that name are included; otherwise only the fully-qualified name (i.e. name@format) is matched.

Parameters:

  • *outputs (str, default: () ) –

    A list of outputs which should be the final outputs of the new Pipeline.

Raises:

  • ValueError

    Raised when any of the given outputs do not exist in the Pipeline object.

Returns:

  • Pipeline

    A new Pipeline object, containing a subset of the nodes of the current one such that only nodes which are directly or transitively required to produce the provided outputs are being copied.

Source code in kedro/pipeline/pipeline.py
def to_outputs(self, *outputs: str) -> Pipeline:
    """Create a new ``Pipeline`` object with the nodes which are directly
    or transitively required to produce the provided outputs.
    If provided a name, but no format, for a transcoded dataset, it
    includes all the nodes that output to that name, otherwise it matches
    to the fully-qualified name only (i.e. name@format).

    Args:
        *outputs: A list of outputs which should be the final outputs of
            the new ``Pipeline``.

    Raises:
        ValueError: Raised when any of the given outputs do not exist in the
            ``Pipeline`` object.


    Returns:
        A new ``Pipeline`` object, containing a subset of the nodes of the
        current one such that only nodes which are directly or transitively
        required to produce the provided outputs are being copied.

    """
    starting = set(outputs)
    result: set[Node] = set()
    next_nodes = self._get_nodes_with_outputs_transcode_compatible(starting)

    while next_nodes:
        result |= next_nodes
        inputs = set(chain.from_iterable(node.inputs for node in next_nodes))
        starting = inputs

        next_nodes = {
            self._nodes_by_output[_strip_transcoding(output)]
            for output in starting
            if _strip_transcoding(output) in self._nodes_by_output
        }

    return Pipeline(result)