API Documentation
This section contains the complete API documentation for py3plex, automatically generated from docstrings.
Tip
Looking for algorithm documentation? Visit Algorithm Roadmap for a conceptual overview of all algorithms organized by category with working examples. Then come back here for detailed API signatures.
For auto-generated module documentation, run:
cd docfiles
sphinx-apidoc -o AUTOGEN_results -f ../py3plex
make html
Core Modules
- py3plex.core.multinet.ensure(*args, **kwargs)
- py3plex.core.multinet.invariant(*args, **kwargs)
- class py3plex.core.multinet.multi_layer_network(verbose: bool = True, network_type: str = 'multilayer', directed: bool = True, dummy_layer: str = 'null', label_delimiter: str = '---', coupling_weight: int | float = 1)
Bases: object
Main class for multilayer network analysis and manipulation.
This class provides a comprehensive toolkit for creating, analyzing, and visualizing multilayer networks where nodes can exist in multiple layers and edges can connect nodes within or across layers.
- Supported Network Types (network_type parameter):
multilayer (default): General multilayer networks with arbitrary layer structure. Each layer can have a different set of nodes. Suitable for heterogeneous networks (e.g., authors-papers-venues) or networks where nodes naturally appear in only some layers.
multiplex: Special case where all layers share the same node set but with different edge types. After loading a network, automatic coupling edges are created between each node and its counterparts in other layers. Suitable for social networks with multiple relationship types (e.g., friend, colleague, family layers).
- Choosing the Right Network Type:
If your layers can have different node sets or mix entity types, choose 'multilayer'; if the same set of entities appears in every layer and layers differ only by relation type, choose 'multiplex' (coupling edges are then added automatically). A minimal sketch follows.
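The same choice in code (layer names are hypothetical; constructor behavior as documented above):
>>> # Different node sets per layer -> 'multilayer'
>>> hetero = multi_layer_network(network_type='multilayer', directed=False)
>>> hetero.add_nodes([{'source': 'paper1', 'type': 'papers'},
...                   {'source': 'alice', 'type': 'authors'}])
>>> # Same entities, several relation types -> 'multiplex'
>>> # (coupling edges are added automatically after load_network())
>>> social = multi_layer_network(network_type='multiplex', directed=False)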
- Key Features:
Dict-based API for adding nodes and edges (see add_nodes() and add_edges())
NetworkX interoperability via to_networkx() and from_networkx()
Multiple I/O formats (edgelist, GML, GraphML, gpickle, etc.)
Visualization methods for multilayer layouts
Community detection and centrality analysis
Random walk and embedding generation
- Hypergraph Support:
This class does NOT natively support true hypergraphs (edges connecting more than two nodes). For hypergraph-like structures, consider:
Using bipartite projections (nodes and hyperedges as separate node types)
The incidence gadget encoding via to_homogeneous_hypergraph()
External hypergraph libraries with conversion utilities
Notes
Nodes in multilayer networks are represented as (node_id, layer) tuples
Use add_nodes() and add_edges() with dict format for easiest interaction
See examples/ directory for usage patterns and best practices
Examples
>>> # Create a general multilayer network (different node sets per layer)
>>> net = multi_layer_network(network_type='multilayer', directed=False)
>>>
>>> # Add nodes to different layers
>>> net.add_nodes([
...     {'source': 'A', 'type': 'social'},
...     {'source': 'B', 'type': 'social'},
...     {'source': 'A', 'type': 'email'}  # Same node, different layer
... ])
>>>
>>> # Add edges (intra-layer and inter-layer)
>>> net.add_edges([
...     {'source': 'A', 'target': 'B',
...      'source_type': 'social', 'target_type': 'social'},
...     {'source': 'A', 'target': 'A',
...      'source_type': 'social', 'target_type': 'email'}
... ])
>>>
>>> print(net)  # Shows network statistics
<multi_layer_network: type=multilayer, directed=False, nodes=3, edges=2, layers=2>
>>> # Create a multiplex network (same nodes across relationship layers)
>>> # Note: coupling edges are auto-added after load_network()
>>> multiplex_net = multi_layer_network(network_type='multiplex')
- add_dummy_layers()
Internal function for conversion between objects.
- add_edges(edge_dict_list: List[Dict] | List[List] | Tuple, input_type: str = 'dict') multi_layer_network
Add edges to the multilayer network.
This method supports multiple input formats for specifying edges between nodes in different layers. The most common format is dict-based.
- Parameters:
edge_dict_list – Edge data in one of the supported formats (see below)
input_type – Format of edge data (‘dict’, ‘list’, or ‘px_edge’)
- Returns:
Returns self for method chaining
- Return type:
self
- Supported Formats:
Dict format (recommended):
{
    'source': 'node1',        # Source node ID
    'target': 'node2',        # Target node ID
    'source_type': 'layer1',  # Source layer name
    'target_type': 'layer2',  # Target layer name (can be same as source)
    'weight': 1.0,            # Optional: edge weight
    'type': 'interaction'     # Optional: edge type/label
}
List format: [node1, layer1, node2, layer2]
px_edge format: ((node1, layer1), (node2, layer2), {'weight': 1.0})
Examples
>>> # Add single intra-layer edge
>>> net = multi_layer_network()
>>> net.add_edges([{
...     'source': 'A',
...     'target': 'B',
...     'source_type': 'protein',
...     'target_type': 'protein'
... }])
<multi_layer_network: type=multilayer, directed=True, nodes=2, edges=1, layers=1>
>>> # Method chaining
>>> net = multi_layer_network()
>>> net.add_edges([
...     {'source': 'A', 'target': 'B', 'source_type': 'layer1', 'target_type': 'layer1'}
... ]).add_edges([
...     {'source': 'B', 'target': 'C', 'source_type': 'layer1', 'target_type': 'layer1'}
... ])
<multi_layer_network: type=multilayer, directed=True, nodes=3, edges=2, layers=1>
>>> # Add inter-layer edge with weight
>>> net.add_edges([{
...     'source': 'gene1',
...     'target': 'protein1',
...     'source_type': 'genes',
...     'target_type': 'proteins',
...     'weight': 0.95,
...     'type': 'expression'
... }])
<multi_layer_network: type=multilayer, directed=True, nodes=5, edges=3, layers=3>
>>> # Add multiple edges at once
>>> edges = [
...     {'source': 'A', 'target': 'B', 'source_type': 'layer1', 'target_type': 'layer1'},
...     {'source': 'B', 'target': 'C', 'source_type': 'layer1', 'target_type': 'layer1'}
... ]
>>> net.add_edges(edges)
<multi_layer_network: type=multilayer, directed=True, nodes=5, edges=5, layers=3>
- Raises:
Exception – If input_type is not one of ‘dict’, ‘list’, or ‘px_edge’
Notes
For intra-layer edges, use the same layer for source_type and target_type
For inter-layer edges, use different layers
Edge weights default to 1.0 if not specified
- add_nodes(node_dict_list: List[Dict] | Dict, input_type: str = 'dict') multi_layer_network
Add nodes to the multilayer network.
Nodes in a multilayer network are identified by both their ID and the layer they belong to. This method adds nodes using a dict-based format.
- Parameters:
node_dict_list – Node data as a dict or list of dicts (see format below)
input_type – Format of node data (currently only ‘dict’ is supported)
- Returns:
Returns self for method chaining
- Return type:
self
- Dict Format:
{
    'source': 'node_id',   # Node identifier (can be string or number)
    'type': 'layer_name',  # Layer this node belongs to
    'weight': 1.0,         # Optional: node weight/importance
    'label': 'display'     # Optional: display label
    # ... any other node attributes
}
Examples
>>> # Add single node
>>> net = multi_layer_network()
>>> net.add_nodes([{'source': 'A', 'type': 'layer1'}])
<multi_layer_network: type=multilayer, directed=True, nodes=1, edges=0, layers=1>
>>> # Method chaining
>>> net = multi_layer_network()
>>> net.add_nodes([{'source': 'A', 'type': 'layer1'}]).add_nodes([{'source': 'B', 'type': 'layer1'}])
<multi_layer_network: type=multilayer, directed=True, nodes=2, edges=0, layers=1>
>>> # Add multiple nodes to the same layer
>>> nodes = [
...     {'source': 'A', 'type': 'protein'},
...     {'source': 'B', 'type': 'protein'},
...     {'source': 'C', 'type': 'protein'}
... ]
>>> net.add_nodes(nodes)
<multi_layer_network: type=multilayer, directed=True, nodes=5, edges=0, layers=2>
>>> # Add nodes with attributes
>>> net.add_nodes([{
...     'source': 'gene1',
...     'type': 'genes',
...     'weight': 0.8,
...     'label': 'BRCA1',
...     'chromosome': '17'
... }])
<multi_layer_network: type=multilayer, directed=True, nodes=6, edges=0, layers=3>
>>> # Add nodes to multiple layers
>>> multi_layer_nodes = [
...     {'source': 'entity1', 'type': 'layer1'},
...     {'source': 'entity1', 'type': 'layer2'},  # Same entity, different layer
...     {'source': 'entity2', 'type': 'layer1'}
... ]
>>> net.add_nodes(multi_layer_nodes)
<multi_layer_network: type=multilayer, directed=True, nodes=9, edges=0, layers=5>
Notes
The same node ID can exist in multiple layers
Each (node_id, layer) combination is treated as a unique node
Additional attributes beyond ‘source’ and ‘type’ are preserved
Nodes must be added before edges referencing them
- aggregate_edges(metric='count', normalize_by='degree')
Edge aggregation method.
Aggregate edge weights across layers and return a single weighted network.
- Parameters:
metric – Aggregation operator ('count' is the default)
normalize_by – Normalization applied to the aggregated values (default: 'degree')
- Returns:
A simplified network.
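Examples
A minimal usage sketch (assuming the defaults documented above; layer names are hypothetical):
>>> net = multi_layer_network(directed=False)
>>> net.add_edges([
...     {'source': 'A', 'target': 'B', 'source_type': 'L1', 'target_type': 'L1'},
...     {'source': 'A', 'target': 'B', 'source_type': 'L2', 'target_type': 'L2'}
... ])
>>> simplified = net.aggregate_edges(metric='count')  # A-B is supported by two layers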
- assign_partition(partition: Dict[Tuple[Any, Any], int]) None
Assign community partition to network nodes.
This method stores the community assignments as node attributes and computes community-level statistics.
- Parameters:
partition (dict) – Dictionary mapping (node, layer) tuples to community IDs.
Examples
>>> from py3plex.core import multinet
>>> net = multinet.multi_layer_network(directed=False)
>>> net.add_edges([['A', 'L1', 'B', 'L1', 1]], input_type='list')
>>> partition = {('A', 'L1'): 0, ('B', 'L1'): 0}
>>> net.assign_partition(partition)
>>> print(net.community_sizes)
{0: 2}
- basic_stats(target_network=None)
A method for obtaining a network’s statistics.
Displays:
Basic network info (nodes, edges)
Total unique nodes (counting each (node, layer) as unique)
Unique node IDs (across all layers)
Per-layer node counts
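Examples
A minimal sketch (the exact printed layout may vary between versions):
>>> net = multi_layer_network()
>>> net.add_edges([{'source': 'A', 'target': 'B',
...                 'source_type': 'layer1', 'target_type': 'layer1'}])
>>> net.basic_stats()  # prints counts and the per-layer breakdown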
- compute_ollivier_ricci(mode: str = 'core', layers: List[Any] | None = None, alpha: float = 0.5, weight_attr: str = 'weight', curvature_attr: str = 'ricciCurvature', verbose: str = 'ERROR', backend_kwargs: Dict[str, Any] | None = None, inplace: bool = True, interlayer_weight: float = 1.0) Dict[str, Any]
Compute Ollivier-Ricci curvature on the multilayer network.
This method provides flexible computation of Ollivier-Ricci curvature at different levels of the multilayer network:
core mode: Compute curvature on the aggregated (flattened) network
layers mode: Compute curvature separately for each layer
supra mode: Compute curvature on the full supra-graph including both intra-layer and inter-layer edges
- Parameters:
mode – Scope of computation. Options: “core”, “layers”, “supra”.
layers – List of layer identifiers to process (only for mode=”layers”). If None, all layers are processed.
alpha – Ollivier-Ricci parameter in [0, 1] controlling the mass distribution. Default: 0.5.
weight_attr – Name of edge attribute containing weights. Default: “weight”.
curvature_attr – Name of edge attribute to store curvature values. Default: “ricciCurvature”.
verbose – Verbosity level. Options: “INFO”, “DEBUG”, “ERROR”. Default: “ERROR”.
backend_kwargs – Additional keyword arguments for OllivierRicci constructor.
inplace – If True, update internal graphs. If False, return new graphs without modifying the network. Default: True.
interlayer_weight – Weight for inter-layer coupling edges (only for mode=”supra”). Default: 1.0.
- Returns:
Dictionary mapping scope identifiers to NetworkX graphs with computed curvatures:
mode="core": {"core": graph_with_curvature}
mode="layers": {layer_id: graph_with_curvature, ...}
mode="supra": {"supra": supra_graph_with_curvature}
- Raises:
RicciBackendNotAvailable – If GraphRicciCurvature is not installed.
ValueError – If mode is invalid or layers contains invalid identifiers.
Examples
>>> from py3plex.core import multinet
>>> net = multinet.multi_layer_network()
>>> net.add_edges([
...     ['A', 'layer1', 'B', 'layer1', 1],
...     ['B', 'layer1', 'C', 'layer1', 1],
... ], input_type="list")
>>>
>>> # Compute on aggregated network
>>> result = net.compute_ollivier_ricci(mode="core")
>>>
>>> # Compute per layer
>>> result = net.compute_ollivier_ricci(mode="layers")
>>>
>>> # Compute on supra-graph
>>> result = net.compute_ollivier_ricci(mode="supra", inplace=False)
- compute_ollivier_ricci_flow(mode: str = 'core', layers: List[Any] | None = None, alpha: float = 0.5, iterations: int = 10, method: str = 'OTD', weight_attr: str = 'weight', curvature_attr: str = 'ricciCurvature', verbose: str = 'ERROR', backend_kwargs: Dict[str, Any] | None = None, inplace: bool = True, interlayer_weight: float = 1.0) Dict[str, Any]
Compute Ollivier-Ricci flow on the multilayer network.
Ricci flow iteratively adjusts edge weights based on their Ricci curvature, effectively revealing and enhancing community structure. After Ricci flow, edges with negative curvature (community boundaries) have reduced weights, while edges with positive curvature have increased weights.
- Parameters:
mode – Scope of computation. Options: “core”, “layers”, “supra”.
layers – List of layer identifiers to process (only for mode=”layers”). If None, all layers are processed.
alpha – Ollivier-Ricci parameter in [0, 1]. Default: 0.5.
iterations – Number of Ricci flow iterations. Default: 10.
method – Ricci flow method. Options: “OTD” (Optimal Transport Distance, recommended), “ATD” (Average Transport Distance). Default: “OTD”.
weight_attr – Name of edge attribute containing weights. After Ricci flow, these weights are updated to reflect the flow metric.
curvature_attr – Name of edge attribute for curvature values. Default: “ricciCurvature”.
verbose – Verbosity level. Options: “INFO”, “DEBUG”, “ERROR”. Default: “ERROR”.
backend_kwargs – Additional keyword arguments for OllivierRicci constructor.
inplace – If True, update internal graphs. If False, return new graphs. Default: True.
interlayer_weight – Weight for inter-layer coupling edges (only for mode=”supra”). Default: 1.0.
- Returns:
Dictionary mapping scope identifiers to NetworkX graphs with Ricci flow applied:
mode="core": {"core": graph_with_flow}
mode="layers": {layer_id: graph_with_flow, ...}
mode="supra": {"supra": supra_graph_with_flow}
- Raises:
RicciBackendNotAvailable – If GraphRicciCurvature is not installed.
ValueError – If mode is invalid or layers contains invalid identifiers.
Examples
>>> from py3plex.core import multinet
>>> net = multinet.multi_layer_network()
>>> net.add_edges([
...     ['A', 'layer1', 'B', 'layer1', 1],
...     ['B', 'layer1', 'C', 'layer1', 1],
... ], input_type="list")
>>>
>>> # Apply Ricci flow to aggregated network
>>> result = net.compute_ollivier_ricci_flow(mode="core", iterations=20)
>>>
>>> # Apply to each layer
>>> result = net.compute_ollivier_ricci_flow(mode="layers", iterations=10)
- property edge_count: int
Number of edges in the network.
- Returns:
Total count of edges
- Return type:
int
Examples
>>> net = multi_layer_network()
>>> net.add_edges([{'source': 'A', 'target': 'B',
...                 'source_type': 'layer1', 'target_type': 'layer1'}])
>>> net.edge_count
1
- edges_from_temporal_table(edge_df)
Convert a temporal edge DataFrame to edge tuple list.
Extracts edges from a pandas DataFrame with temporal/activity information and converts them to a list of edge tuples suitable for network construction.
- Parameters:
edge_df – pandas DataFrame with columns:
node_first: Source node identifier
node_second: Target node identifier
layer_name: Layer identifier
- Returns:
- List of edge tuples in format:
(node_first, node_second, layer_first, layer_second, weight) where weight is always 1
- Return type:
list
Notes
All values are converted to strings
All edges are assigned weight=1
Both source and target are assumed to be in the same layer
Examples
>>> import pandas as pd
>>> net = multi_layer_network()
>>> df = pd.DataFrame({
...     'node_first': ['A', 'B'],
...     'node_second': ['B', 'C'],
...     'layer_name': ['L1', 'L1']
... })
>>> result = net.edges_from_temporal_table(df)
>>> len(result) >= 2
True
See also
fill_tmp_with_edges: Add these edges to layer graphs
- execute_query(query: str) Dict[str, Any]
Execute a DSL query on this multilayer network.
This is a convenience method that provides first-class access to the py3plex DSL (Domain-Specific Language) for querying multilayer networks.
Supports both SELECT and MATCH queries:
- SELECT queries:
net.execute_query('SELECT nodes WHERE layer="transport"')
net.execute_query('SELECT * FROM nodes IN LAYER "ppi" WHERE degree > 10')
- MATCH queries (Cypher-like):
net.execute_query('MATCH (g:Gene)-[r]->(t:Gene) RETURN g, t')
net.execute_query('MATCH (a)-[e]->(b) IN LAYER "ppi" WHERE a.degree > 5 RETURN a, b')
- Parameters:
query – DSL query string
- Returns:
For SELECT queries: ‘nodes’ or ‘edges’ list, ‘count’, optional ‘computed’
For MATCH queries: ‘bindings’ list, ‘count’, ‘type’
- Return type:
Dictionary containing query results
- Raises:
DSLSyntaxError – If query syntax is invalid
DSLExecutionError – If query cannot be executed
Examples
>>> net = multi_layer_network(directed=False)
>>> net.add_nodes([{'source': 'A', 'type': 'layer1'}])
>>> net.add_edges([{'source': 'A', 'target': 'B',
...                 'source_type': 'layer1', 'target_type': 'layer1'}])
>>> result = net.execute_query('SELECT nodes WHERE layer="layer1"')
>>> result['count'] >= 0
True
>>> # Using MATCH syntax
>>> result = net.execute_query('MATCH (a:layer1)-[r]->(b:layer1) RETURN a, b')
>>> 'bindings' in result
True
See also
py3plex.dsl.execute_query() for the standalone function
py3plex.dsl.format_result() for formatting results
- fill_tmp_with_edges(edge_df)
Fill temporary layer graphs with edges from a DataFrame.
Populates the emptied layer graphs (created by remove_layer_edges) with edges from a temporal/activity DataFrame. Useful for temporal network analysis where edge sets change over time.
- Parameters:
edge_df – pandas DataFrame with columns:
node_first: Source node identifier
node_second: Target node identifier
layer_name: Layer identifier
Notes
Requires remove_layer_edges() to be called first
Edges are grouped by layer
Modifies self.tmp_layers in place
Each edge is stored as ((node_first, layer), (node_second, layer))
- Raises:
AttributeError – If self.tmp_layers doesn’t exist (call remove_layer_edges first)
Examples
These examples require proper network setup and are for illustration only.
>>> import pandas as pd
>>> net = multi_layer_network()
>>> net.split_to_layers()
>>> net.remove_layer_edges()
>>> df = pd.DataFrame({
...     'node_first': ['A', 'B'],
...     'node_second': ['B', 'C'],
...     'layer_name': ['L1', 'L1']
... })
>>> net.fill_tmp_with_edges(df)
See also
remove_layer_edges: Creates empty layer graphs
edges_from_temporal_table: Convert DataFrame to edge list
- classmethod from_edges(edges: List[Dict | List], network_type: str = 'multilayer', directed: bool = False, input_type: str = 'dict') multi_layer_network
Create a multilayer network directly from a list of edges.
This is a convenience factory method that creates a network and populates it with edges in a single call, supporting method chaining patterns.
- Parameters:
edges – List of edges in dict or list format
network_type – Type of network (‘multilayer’ or ‘multiplex’)
directed – Whether the network is directed
input_type – Format of edge data (‘dict’ or ‘list’)
- Returns:
New network instance with edges added
- Return type:
multi_layer_network
Examples
>>> # Create from dict format
>>> net = multi_layer_network.from_edges([
...     {'source': 'A', 'target': 'B',
...      'source_type': 'layer1', 'target_type': 'layer1'},
...     {'source': 'B', 'target': 'C',
...      'source_type': 'layer1', 'target_type': 'layer1'}
... ])
>>> len(net)
3
>>> # Create from list format
>>> net = multi_layer_network.from_edges([
...     ['A', 'layer1', 'B', 'layer1', 1],
...     ['B', 'layer1', 'C', 'layer1', 1]
... ], input_type='list')
>>> net.edge_count
2
- from_homogeneous_hypergraph(H)
Decode a homogeneous graph created by to_homogeneous_hypergraph.
This method reconstructs a multiplex network from its incidence gadget encoding. It identifies edge-nodes by their degree and cycle structure, then reconstructs the original layers based on cycle lengths (prime numbers).
- Parameters:
H (networkx.Graph) – Homogeneous graph created by to_homogeneous_hypergraph().
- Returns:
dict – Dictionary mapping layer names to lists of edges: {layer: [(u, v), …]}
Examples
Example requires proper network setup - for illustration only.
>>> network = multi_layer_network()
>>> network.add_layer("A")
>>> network.add_nodes([("1", "A"), ("2", "A")])
>>> network.add_edges([(("1", "A"), ("2", "A"))])
>>> H, node_map, edge_info = network.to_homogeneous_hypergraph()
>>> recovered = network.from_homogeneous_hypergraph(H)
>>> print(recovered)
{'layer_with_prime_2': [('1', '2')]}
Notes
The decoded layer names indicate the prime number used for encoding:
"layer_with_prime_2" corresponds to the first layer
"layer_with_prime_3" corresponds to the second layer, etc.
- classmethod from_networkx(G: Graph, network_type: str = 'multilayer', directed: bool | None = None) multi_layer_network
Create a multi_layer_network from a NetworkX graph.
This class method converts a NetworkX graph into a py3plex multi_layer_network. For multilayer networks, nodes should be tuples of (node_id, layer).
- Parameters:
G – NetworkX graph to convert
network_type – Type of network (‘multilayer’ or ‘multiplex’)
directed – Whether to treat the network as directed. If None, inferred from G.
- Returns:
A new multi_layer_network instance
- Return type:
multi_layer_network
Examples
>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_nodes_from([('A', 'layer1'), ('B', 'layer1')])
>>> G.add_edge(('A', 'layer1'), ('B', 'layer1'))
>>> net = multi_layer_network.from_networkx(G)
>>> print(net)
<multi_layer_network: type=multilayer, directed=False, nodes=2, edges=1, layers=1>
Notes
For proper multilayer behavior, ensure nodes are (node_id, layer) tuples
Edge attributes are preserved during conversion
The input graph is copied, not referenced
- get_decomposition(heuristic='all', cycle=None, parallel=False, alpha=1, beta=0)
Core method for obtaining a network’s decomposition in terms of relations
- get_decomposition_cycles(cycle=None)
A supporting method for obtaining decomposition triplets
- get_degrees()
A simple wrapper which computes node degrees.
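Examples
A minimal sketch (the exact return structure follows the underlying NetworkX degree call and may vary between versions):
>>> net = multi_layer_network()
>>> net.add_edges([{'source': 'A', 'target': 'B',
...                 'source_type': 'layer1', 'target_type': 'layer1'}])
>>> degrees = net.get_degrees()  # degree per (node, layer) pair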
- get_edges(data: bool = False, multiplex_edges: bool = False) Any
Iterate over edges in the network.
This method behaves differently based on the network_type:
multilayer: Returns all edges without filtering.
multiplex: By default, filters out coupling edges (auto-generated inter-layer edges connecting each node to itself in other layers). Set multiplex_edges=True to include coupling edges.
- Parameters:
data – If True, return edge data along with edge tuples
multiplex_edges – If True, include coupling edges in multiplex networks. Only relevant when network_type=’multiplex’. Coupling edges are automatically added to connect each node to its counterparts in other layers.
- Yields:
Edge tuples, optionally with data. For multiplex networks with multiplex_edges=False, coupling edges are excluded.
- Raises:
ValueError – If network_type is not ‘multilayer’ or ‘multiplex’.
Examples
>>> net = multi_layer_network(network_type='multilayer')
>>> net.add_edges([
...     {'source': 'A', 'target': 'B',
...      'source_type': 'layer1', 'target_type': 'layer1'}
... ])
<multi_layer_network: type=multilayer, directed=True, nodes=2, edges=1, layers=1>
>>> list(net.get_edges())
[(('A', 'layer1'), ('B', 'layer1'))]
See also
__init__() for the difference between multilayer and multiplex
_couple_all_edges() for how coupling edges are created
- get_label_matrix()
Return network labels
- get_layers(style='diagonal', compute_layouts='force', layout_parameters=None, verbose=True)
A method for obtaining layerwise distributions
- get_neighbors(node_id: str, layer_id: str | None = None) Any
Get neighbors of a node in a specific layer.
- Parameters:
node_id – Node identifier
layer_id – Layer identifier (optional)
- Returns:
Iterator of neighbor nodes
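Examples
A minimal sketch of neighbor lookup (the returned neighbors follow the (node_id, layer) convention noted above):
>>> net = multi_layer_network(directed=False)
>>> net.add_edges([{'source': 'A', 'target': 'B',
...                 'source_type': 'layer1', 'target_type': 'layer1'}])
>>> neighbors = list(net.get_neighbors('A', layer_id='layer1'))  # contains B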
- get_node_attribute(node: Any, attribute: str, layer: Any | None = None) Any
Get an attribute value for a given node.
- Parameters:
node (Any) – Node identifier.
attribute (str) – Name of the attribute to retrieve.
layer (Any, optional) – Layer identifier. If None and node is a tuple, assumes node is (node_id, layer).
- Returns:
The attribute value, or None if not found.
- Return type:
Any
Examples
>>> from py3plex.core import multinet
>>> net = multinet.multi_layer_network(directed=False)
>>> net.add_edges([['A', 'L1', 'B', 'L1', 1]], input_type='list')
>>> net.set_node_attribute('A', 'score', 42.0, 'L1')
>>> print(net.get_node_attribute('A', 'score', 'L1'))
42.0
- get_nodes(data: bool = False) Any
A method for obtaining a network’s nodes
- Parameters:
data – If True, return node data along with node identifiers
- Yields:
Node identifiers, optionally with data
- get_nx_object()
Return only core network with proper annotations
- get_partition(node: Any, layer: Any | None = None) int | None
Get the community/partition ID for a given node.
- Parameters:
node (Any) – Node identifier.
layer (Any, optional) – Layer identifier. If None and node is a tuple, assumes node is (node_id, layer).
- Returns:
Community ID, or None if node doesn’t have a partition assigned.
- Return type:
int or None
Examples
>>> from py3plex.core import multinet
>>> net = multinet.multi_layer_network(directed=False)
>>> net.add_edges([['A', 'L1', 'B', 'L1', 1]], input_type='list')
>>> partition = {('A', 'L1'): 0, ('B', 'L1'): 1}
>>> net.assign_partition(partition)
>>> print(net.get_partition('A', 'L1'))
0
- get_supra_adjacency_matrix(mtype='sparse')
Get sparse representation of the supra matrix.
- Parameters:
mtype – ‘sparse’ or ‘dense’ - matrix representation type
- Returns:
Supra-adjacency matrix in requested format
Warning
For large multilayer networks, dense matrices can consume significant memory (N*L)^2 * 8 bytes for float64.
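Example
For instance, under the estimate above, N=10,000 nodes across L=5 layers give a supra matrix of side N*L=50,000, i.e. roughly 50,000² × 8 B ≈ 20 GB as dense float64. A sketch of the memory-safe call:
>>> A = net.get_supra_adjacency_matrix(mtype='sparse')  # scipy sparse matrix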
- get_tensor(sparsity_type='bsr')
Get sparse tensor representation of the multilayer network.
Returns the supra-adjacency matrix in the specified sparse format. This method provides a tensor-like view of the multilayer network, useful for mathematical analysis and matrix-based algorithms.
- Parameters:
sparsity_type – Sparse matrix format to use. Options include:
'bsr' (default): Block Sparse Row format
'csr': Compressed Sparse Row format
'csc': Compressed Sparse Column format
'coo': Coordinate format
'lil': List of Lists format
'dok': Dictionary of Keys format
- Returns:
Supra-adjacency matrix in specified format
- Return type:
scipy.sparse matrix
Example
>>> net = multi_layer_network()
>>> tensor = net.get_tensor(sparsity_type='csr')
>>> print(tensor.shape)
Note
The returned matrix is the same as get_supra_adjacency_matrix(mtype=’sparse’) but with control over the specific sparse format used.
- get_unique_entity_counts()
Count unique entities in the network.
- Returns:
- (total_unique_nodes, unique_node_ids, nodes_per_layer)
total_unique_nodes: count of unique (node, layer) tuples
unique_node_ids: count of unique node IDs (across all layers)
nodes_per_layer: dict mapping layer to count of nodes in that layer
- Return type:
tuple
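Examples
A minimal sketch of the three return values (the values in the comment follow the definitions above):
>>> net = multi_layer_network()
>>> net.add_nodes([{'source': 'A', 'type': 'L1'},
...                {'source': 'A', 'type': 'L2'}])
>>> total, unique_ids, per_layer = net.get_unique_entity_counts()
>>> # total == 2 node-layer pairs; unique_ids == 1 distinct ID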
- invert(override_core=False)
Invert nodes to edges to obtain the "edge graph" (line graph): each node of the result corresponds to an edge of the original network.
- property is_empty: bool
Check if the network is empty (has no nodes).
- Returns:
True if network has no nodes
- Return type:
bool
Examples
>>> net = multi_layer_network()
>>> net.is_empty
True
>>> net.add_nodes([{'source': 'A', 'type': 'layer1'}])
>>> net.is_empty
False
- property layer_count: int
Number of unique layers in the network.
- Returns:
Count of distinct layers
- Return type:
int
Examples
>>> net = multi_layer_network()
>>> net.add_nodes([
...     {'source': 'A', 'type': 'layer1'},
...     {'source': 'B', 'type': 'layer2'}
... ])
>>> net.layer_count
2
- property layers: List[Any]
List of unique layer identifiers in the network.
- Returns:
Sorted list of layer identifiers
- Return type:
list
Examples
>>> net = multi_layer_network()
>>> net.add_nodes([
...     {'source': 'A', 'type': 'social'},
...     {'source': 'B', 'type': 'work'}
... ])
>>> net.layers
['social', 'work']
- load_embedding(embedding_file)
Embedding loading method
- load_layer_name_mapping(mapping_name, header=False)
Layer-node mapping loader method
- Parameters:
mapping_name – The name of the mapping file.
- Returns:
self.layer_name_map is filled, returns nothing.
- load_network(input_file: str | None = None, directed: bool = False, input_type: str = 'gml', label_delimiter: str = '---') multi_layer_network
Load a network from file.
This method loads and prepares a given network. The behavior depends on the network_type set during initialization:
multilayer: Network is loaded as-is. No automatic edges are added.
multiplex: After loading, coupling edges are automatically created between each node and its counterparts in other layers. These edges have type=’coupling’ and can be filtered via get_edges().
- Parameters:
input_file – Path to the network file to load
directed – Whether the network is directed
input_type – Format of the input file. Supported values:
'gml': Graph Modeling Language format
'graphml': GraphML XML format
'edgelist': Simple edge list (source target [weight])
'multiedgelist': Multilayer edge list (node1 layer1 node2 layer2 weight)
'multiplex_edges': Multiplex format (layer node1 node2 weight)
'multiplex_folder': Folder with layer files
'gpickle': Python pickle format
'nx': NetworkX graph object
'sparse': Sparse matrix format
label_delimiter – Delimiter used to separate layer names in node labels
- Returns:
Self for method chaining. Populates self.core_network, self.labels, and self.activity.
Note
For multiplex networks, use input_type=’multiplex_edges’ or ‘multiplex_folder’ with network_type=’multiplex’ to get automatic coupling edges.
Examples
>>> # Load multilayer network (no automatic coupling)
>>> net = multi_layer_network(network_type='multilayer')
>>> net.load_network('data.gml', input_type='gml')
>>> # Load multiplex network (automatic coupling edges)
>>> net = multi_layer_network(network_type='multiplex')
>>> net.load_network('data.edges', input_type='multiplex_edges')
- load_network_activity(activity_file)
Network activity loader
- Parameters:
activity_file – The name of the generic activity file. Each line has the form n1 n2 timestamp layer_name, e.g. 65432 61888 1377688175 RE. Note that layer-node mappings MUST be loaded first in order to map nodes to activity properly.
- Returns:
self.activity is filled.
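Examples
A sketch of the required call order (file names are hypothetical):
>>> net = multi_layer_network()
>>> net.load_layer_name_mapping('layer_mapping.txt')  # mappings must come first
>>> net.load_network_activity('activity.txt')         # then the activity file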
- load_temporal_edge_information(input_file=None, input_type='edge_activity', directxed=False, layer_mapping=None)
A method for loading temporal edge information
- merge_with(target_px_object)
Merge two px objects.
- monitor(message)
A simple monitor method for logging
- monoplex_nx_wrapper(method, kwargs=None)
A generic networkx function wrapper.
- Parameters:
method (str) – Name of the NetworkX function to call (e.g., ‘degree_centrality’, ‘betweenness_centrality’)
kwargs (dict, optional) – Keyword arguments to pass to the NetworkX function. For example, for betweenness_centrality you can pass:
weight: Edge attribute to use as weight
normalized: Whether to normalize betweenness values
distance: Edge attribute to use as distance (for closeness_centrality)
- Returns:
The result of the NetworkX function call.
- Raises:
AttributeError – If the specified method does not exist in NetworkX.
Example
>>> # Unweighted betweenness centrality
>>> centralities = network.monoplex_nx_wrapper("betweenness_centrality")
>>> # Weighted betweenness centrality
>>> centralities = network.monoplex_nx_wrapper("betweenness_centrality",
...     kwargs={"weight": "weight"})
>>> # With multiple parameters
>>> centralities = network.monoplex_nx_wrapper("betweenness_centrality",
...     kwargs={"weight": "weight", "normalized": True})
- property node_count: int
Number of nodes in the network.
- Returns:
Total count of nodes (node-layer pairs)
- Return type:
int
Examples
>>> net = multi_layer_network()
>>> net.add_nodes([{'source': 'A', 'type': 'layer1'}])
>>> net.node_count
1
- read_ground_truth_communities(cfile)
Parse a ground truth community file and map the communities to the original nodes. The mapping is based on node IDs; support for exact (node, layer) tuples is to be added.
- Parameters:
cfile – Path to the ground truth communities file.
- Returns:
self.ground_truth_communities
- remove_edges(edge_dict_list: List[Dict] | List[List], input_type: str = 'list') None
A method for removing edges.
- Parameters:
edge_dict_list – Edge data in dict or list format
input_type – Format of edge data (‘dict’ or ‘list’)
- Raises:
Exception – If input_type is not valid
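Examples
A minimal sketch using the default list format (assuming the same list layout as in the add_edges() examples):
>>> net = multi_layer_network()
>>> net.add_edges([['A', 'L1', 'B', 'L1', 1]], input_type='list')
>>> net.remove_edges([['A', 'L1', 'B', 'L1', 1]], input_type='list')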
- remove_layer_edges()
Remove all edges from separate layer graphs while keeping nodes.
This method creates empty copies of each layer graph with all nodes intact but no edges. Useful for reconstructing networks with different edge sets or for temporal network analysis.
Notes
Requires split_to_layers() to be called first
Stores empty layer graphs in self.tmp_layers
Original graphs in self.separate_layers remain unchanged
All nodes and their attributes are preserved
- Raises:
RuntimeError – If split_to_layers() hasn’t been called yet
See also
split_to_layers: Must be called before this method
fill_tmp_with_edges: Add edges back to emptied layers
- remove_nodes(node_dict_list, input_type='dict')
Remove nodes from the network
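Examples
A minimal sketch, assuming the same dict format as add_nodes():
>>> net = multi_layer_network()
>>> net.add_nodes([{'source': 'A', 'type': 'layer1'}])
>>> net.remove_nodes([{'source': 'A', 'type': 'layer1'}])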
- save_network(output_file=None, output_type='edgelist')
Save the network to a file in various formats.
This method exports the multilayer network to different file formats for persistence, sharing, or use with other tools.
- Parameters:
output_file – Path where the network should be saved
output_type – Format for saving (‘edgelist’, ‘multiedgelist’, ‘multiedgelist_encoded’, or ‘gpickle’)
- Supported Formats:
‘edgelist’: Simple edge list format (standard NetworkX)
‘multiedgelist’: Multilayer edge list with layer information
‘multiedgelist_encoded’: Multilayer edge list with integer encoding
‘gpickle’: Python pickle format (preserves all attributes)
Examples
>>> net = multi_layer_network()
>>> net.add_nodes([{'source': 'A', 'type': 'layer1'}])
>>> net.add_edges([{'source': 'A', 'target': 'B',
...                 'source_type': 'layer1', 'target_type': 'layer1'}])
>>> net.save_network('network.txt', output_type='multiedgelist')
>>> # For faster I/O with all metadata preserved
>>> net.save_network('network.gpickle', output_type='gpickle')
Notes
‘gpickle’ format preserves all node/edge attributes
‘multiedgelist_encoded’ creates node_map and layer_map attributes
Edge weights and types are preserved in supported formats
- serialize_to_edgelist(edgelist_file='./tmp/tmpedgelist.txt', tmp_folder='tmp', out_folder='out', multiplex=False)
Serialize the multilayer network to an edgelist file.
Converts the network to a numeric edgelist format suitable for external tools and algorithms that require integer node/layer identifiers.
- Parameters:
edgelist_file – Path to output edgelist file (default: “./tmp/tmpedgelist.txt”)
tmp_folder – Temporary folder for intermediate files (default: “tmp”)
out_folder – Output folder for results (default: “out”)
multiplex – If True, use multiplex format (node layer node layer weight); if False, use simple edgelist format (node1 node2 weight)
- Returns:
- Inverse node mapping (numeric_id -> original_node_tuple)
Use this to decode results from external algorithms
- Return type:
dict
- File Formats:
Multiplex format: node1_id layer1_id node2_id layer2_id weight
Simple format: node1_id node2_id weight
Notes
Creates tmp_folder and out_folder if they don’t exist
Nodes are mapped to sequential integers starting from 0
Layers are mapped to sequential integers starting from 0 (multiplex mode)
All edges have weight 1 unless explicitly specified
Examples
Example requires file output - for illustration only.
>>> net = multi_layer_network()
>>> # ... build network ...
>>> node_mapping = net.serialize_to_edgelist(
...     edgelist_file='network.txt',
...     multiplex=True
... )
>>> # Use node_mapping to decode results
>>> original_node = node_mapping[0]  # Get original node for id 0
See also
load_network: Load networks from file
save_network: Alternative serialization method
- set_node_attribute(node: Any, attribute: str, value: Any, layer: Any | None = None) None
Set an attribute value for a given node.
- Parameters:
node (Any) – Node identifier.
attribute (str) – Name of the attribute to set.
value (Any) – Value to assign to the attribute.
layer (Any, optional) – Layer identifier. If None and node is a tuple, assumes node is (node_id, layer).
Examples
>>> from py3plex.core import multinet
>>> net = multinet.multi_layer_network(directed=False)
>>> net.add_edges([['A', 'L1', 'B', 'L1', 1]], input_type='list')
>>> net.set_node_attribute('A', 'score', 42.0, 'L1')
>>> print(net.get_node_attribute('A', 'score', 'L1'))
42.0
- sparse_to_px(directed=None)
Convert sparse matrix to py3plex format
- Parameters:
directed – Whether the network is directed (uses self.directed if None)
- split_to_layers(style='diagonal', compute_layouts='force', layout_parameters=None, verbose=True, multiplex=False, convert_to_simple=False)
A method for obtaining layerwise distributions
- subnetwork(input_list=None, subset_by='node_layer_names')
Construct a subgraph based on a set of nodes.
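Examples
A minimal sketch (assuming input_list takes (node, layer) tuples when subset_by='node_layer_names'):
>>> net = multi_layer_network()
>>> net.add_edges([{'source': 'A', 'target': 'B',
...                 'source_type': 'L1', 'target_type': 'L1'}])
>>> sub = net.subnetwork([('A', 'L1'), ('B', 'L1')], subset_by='node_layer_names')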
- summary()
Generate a summary of network statistics.
Computes and returns key metrics about the multilayer network structure.
- Returns:
- Network statistics including:
’Number of layers’: Count of unique layers
’Nodes’: Total number of nodes
’Edges’: Total number of edges
’Mean degree’: Average node degree
’CC’: Number of connected components
- Return type:
dict
Examples
>>> net = multi_layer_network()
>>> net.add_nodes([{'source': 'A', 'type': 'layer1'}])
>>> net.add_edges([{'source': 'A', 'target': 'B',
...                 'source_type': 'layer1', 'target_type': 'layer1'}])
>>> stats = net.summary()
>>> print(f"Network has {stats['Nodes']} nodes and {stats['Edges']} edges")
Network has 2 nodes and 1 edges
Notes
Connected components are computed on the undirected version
Mean degree is averaged across all nodes in all layers
- test_scale_free()
Test the scale-freeness of the network
- to_homogeneous_hypergraph()
Transform a multiplex network into a homogeneous graph using incidence gadget encoding.
This method encodes the multiplex structure where each layer is represented by a unique prime number signature. Each edge becomes an edge-node connected to its endpoints and a cycle of length prime-1 that encodes the layer.
- Returns:
tuple (H, node_mapping, edge_info) –
- H (networkx.Graph):
Homogeneous unlabeled graph encoding the multiplex structure.
- node_mapping (dict):
Maps each original node to its vertex-node in H.
- edge_info (dict):
Mapping from each edge-node in H to its (layer, endpoints) tuple.
Examples
Example requires the sympy dependency - for illustration only.
>>> network = multi_layer_network(directed=False)  # doctest: +SKIP
>>> network.add_nodes([{'source': '1', 'type': 'A'},
...                    {'source': '2', 'type': 'A'}], input_type='dict')  # doctest: +SKIP
>>> network.add_edges([{'source': '1', 'target': '2',
...                     'source_type': 'A', 'target_type': 'A'}], input_type='dict')  # doctest: +SKIP
>>> H, node_map, edge_info = network.to_homogeneous_hypergraph()  # doctest: +SKIP
>>> print(f"Homogeneous graph has {len(H.nodes())} nodes")  # doctest: +SKIP
Notes
This transformation uses prime-based signatures to encode layers:
Each layer is assigned a unique prime number (2, 3, 5, 7, ...)
Each edge in a layer with prime p is connected to a cycle of length p
The cycle structure uniquely identifies the layer
- to_json()
A method for exporting the graph to a JSON file.
- to_networkx() Graph
Convert the multilayer network to a NetworkX graph.
Returns a copy of the core network as a NetworkX graph. The returned graph preserves all node and edge attributes, including layer information for multilayer networks (where nodes are typically (node_id, layer) tuples).
- Returns:
A NetworkX graph (MultiGraph or MultiDiGraph depending on network type)
- Return type:
nx.Graph
Examples
>>> net = multi_layer_network(directed=False)
>>> net.add_nodes([{'source': 'A', 'type': 'layer1'}])
>>> nx_graph = net.to_networkx()
>>> print(type(nx_graph))
<class 'networkx.classes.multigraph.MultiGraph'>
Notes
For multilayer networks, nodes are tuples: (node_id, layer)
All edge attributes (weight, type, etc.) are preserved
The returned graph is a copy, not a reference
- to_sparse_matrix(replace_core=False, return_only=False)
Convert the matrix to a scipy-sparse version. This is useful for classification.
- visualize_matrix(kwargs=None)
Plot the matrix – this plots the supra-adjacency matrix
- visualize_network(style='diagonal', parameters_layers=None, parameters_multiedges=None, show=False, compute_layouts='force', layouts_parameters=None, verbose=True, orientation='upper', resolution=0.01, axis=None, fig=None, no_labels=False, linewidth=1.7, alphachannel=0.3, linepoints='-.', legend=False)
Visualize the multilayer network.
Supports multiple visualization styles:
'diagonal': Layer-centric diagonal layout with inter-layer edges
'hairball': Aggregate hairball plot of all layers
'flow' or 'alluvial': Layered flow visualization with horizontal bands
'sankey': Sankey diagram showing inter-layer flow strength
- Parameters:
style – Visualization style (‘diagonal’, ‘hairball’, ‘flow’, ‘alluvial’, or ‘sankey’)
parameters_layers – Custom parameters for layer drawing
parameters_multiedges – Custom parameters for edge drawing
show – Show plot immediately
compute_layouts – Layout algorithm (currently unused)
layouts_parameters – Layout parameters (currently unused)
verbose – Enable verbose output
orientation – Edge orientation for diagonal style
resolution – Resolution for edge curves
axis – Optional matplotlib axis to draw on
fig – Optional matplotlib figure (currently unused)
no_labels – Hide network labels
linewidth – Width of edge lines
alphachannel – Alpha channel for edge transparency
linepoints – Line style for edges
legend – Show legend (for hairball style)
- Returns:
Matplotlib axis object
- Raises:
Exception – If style is not recognized
- Performance Notes:
For large networks (>500 nodes), visualization performance may degrade:
Layout computation can be slow (O(n²) for force-directed layouts)
Rendering many edges is memory and CPU intensive
Consider filtering or sampling for exploratory visualization
Use simpler layouts or increase layout iteration limits
Approximate rendering times on typical hardware:
100 nodes: <1 second
500 nodes: 5-10 seconds
1000 nodes: 30-60 seconds
5000+ nodes: Several minutes, may run out of memory
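Examples
A minimal usage sketch (requires matplotlib; styling defaults as documented above):
>>> import matplotlib.pyplot as plt
>>> net = multi_layer_network(directed=False)
>>> net.add_edges([{'source': 'A', 'target': 'B',
...                 'source_type': 'L1', 'target_type': 'L1'}])
>>> ax = net.visualize_network(style='diagonal', show=False)
>>> plt.show()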
- visualize_ricci_core(alpha: float = 0.5, iterations: int = 10, layout_type: str = 'mds', dim: int = 2, **kwargs)
Visualize the aggregated core network using Ricci-flow-based layout.
This method is a high-level wrapper for Ricci-flow-based visualization of the core (aggregated) network. It automatically computes Ricci flow if not already done and creates an informative layout that emphasizes geometric structure and communities.
- Parameters:
alpha – Ollivier-Ricci parameter for flow computation. Default: 0.5.
iterations – Number of Ricci flow iterations. Default: 10.
layout_type – Layout algorithm (“mds”, “spring”, “spectral”). Default: “mds”.
dim – Dimensionality of layout (2 or 3). Default: 2.
**kwargs – Additional arguments passed to visualize_multilayer_ricci_core.
- Returns:
Tuple of (figure, axes, positions_dict).
- Raises:
RicciBackendNotAvailable – If GraphRicciCurvature is not installed.
Examples
Example requires GraphRicciCurvature library - for illustration only.
>>> from py3plex.core import multinet
>>> net = multinet.multi_layer_network()
>>> net.add_edges([
...     ['A', 'layer1', 'B', 'layer1', 1],
...     ['B', 'layer1', 'C', 'layer1', 1],
... ], input_type="list")
>>> fig, ax, pos = net.visualize_ricci_core()
>>> import matplotlib.pyplot as plt
>>> plt.show()
See also
visualize_ricci_layers: Per-layer visualization with Ricci flow
visualize_ricci_supra: Supra-graph visualization with Ricci flow
- visualize_ricci_layers(layers: List[Any] | None = None, alpha: float = 0.5, iterations: int = 10, layout_type: str = 'mds', share_layout: bool = True, **kwargs)
Visualize individual layers using Ricci-flow-based layouts.
This method creates visualizations of individual layers with layouts derived from Ricci flow. Layers can share a common coordinate system (for easier comparison) or have independent layouts.
- Parameters:
layers – List of layer identifiers to visualize. If None, uses all layers.
alpha – Ollivier-Ricci parameter. Default: 0.5.
iterations – Number of Ricci flow iterations. Default: 10.
layout_type – Layout algorithm. Default: “mds”.
share_layout – If True, use shared coordinates across layers. Default: True.
**kwargs – Additional arguments passed to visualize_multilayer_ricci_layers.
- Returns:
Tuple of (figure, layer_positions_dict).
- Raises:
RicciBackendNotAvailable – If GraphRicciCurvature is not installed.
Examples
>>> fig, pos_dict = net.visualize_ricci_layers(
...     arrangement="grid", share_layout=True
... )
>>> import matplotlib.pyplot as plt
>>> plt.show()
See also
visualize_ricci_core: Core network visualization with Ricci flow
visualize_ricci_supra: Supra-graph visualization with Ricci flow
- visualize_ricci_supra(alpha: float = 0.5, iterations: int = 10, layout_type: str = 'mds', dim: int = 2, **kwargs)
Visualize the full supra-graph using Ricci-flow-based layout.
This method visualizes the complete multilayer structure including both intra-layer edges (within layers) and inter-layer edges (coupling between layers) using a layout derived from Ricci flow.
- Parameters:
alpha – Ollivier-Ricci parameter. Default: 0.5.
iterations – Number of Ricci flow iterations. Default: 10.
layout_type – Layout algorithm. Default: “mds”.
dim – Dimensionality (2 or 3). Default: 2.
**kwargs – Additional arguments passed to visualize_multilayer_ricci_supra.
- Returns:
Tuple of (figure, axes, positions_dict).
- Raises:
RicciBackendNotAvailable – If GraphRicciCurvature is not installed.
Examples
>>> fig, ax, pos = net.visualize_ricci_supra(dim=3)
>>> import matplotlib.pyplot as plt
>>> plt.show()
See also
visualize_ricci_core: Core network visualization with Ricci flow
visualize_ricci_layers: Per-layer visualization with Ricci flow
- py3plex.core.multinet.require(*args, **kwargs)
- py3plex.core.parsers.ensure(*args, **kwargs)
- py3plex.core.parsers.load_edge_activity_file(fname: str, layer_mapping: str | None = None) DataFrame
- py3plex.core.parsers.load_edge_activity_raw(activity_file: str, layer_mappings: dict) DataFrame
Basic parser for loading generic activity files. Temporal edges are returned as tuples, which can easily be transformed into, for example, a pandas DataFrame.
- Parameters:
activity_file – Path to activity file
layer_mappings – Dictionary mapping layer names to IDs
- Returns:
DataFrame with edge activity data
- py3plex.core.parsers.load_temporal_edge_information(input_network: str, input_type: str, layer_mapping: str | None = None) DataFrame | None
- py3plex.core.parsers.parse_detangler_json(file_path: str, directed: bool = False) Tuple[MultiGraph | MultiDiGraph, None]
Parser for generic Detangler files.
- Parameters:
file_path – Path to Detangler JSON file
directed – Whether to create directed graph
- py3plex.core.parsers.parse_edgelist_multi_types(input_name: str, directed: bool) Tuple[MultiGraph | MultiDiGraph, None]
Parse an edgelist file with multiple edge types.
Reads a text file where each line represents an edge, optionally with weights and edge types. Lines starting with ‘#’ are treated as comments.
- File Format:
node1 node2 [weight] [edge_type]
Lines starting with ‘#’ are ignored (comments)
- Parameters:
input_name – Path to edgelist file
directed – Whether to create a directed graph
- Returns:
(parsed_graph, None for labels)
- Return type:
Tuple[Union[nx.MultiGraph, nx.MultiDiGraph], None]
Notes
All nodes are assigned to a “null” layer
Default weight is 1 if not specified
Edge type is optional (4th column)
Handles both 2-column (node pairs) and 3+ column formats
Examples
>>> # File content:
>>> # A B 1.0 friendship
>>> # B C 2.0 collaboration
>>> graph, _ = parse_edgelist_multi_types('edges.txt', directed=False)
- py3plex.core.parsers.parse_embedding(input_name: str) Tuple[ndarray, ndarray]
Loader for a generic embedding as output by Gensim.
- Parameters:
input_name – Path to embedding file
- Returns:
Tuple of (embedding matrix, embedding indices)
- py3plex.core.parsers.parse_gml(file_name: str, directed: bool) Tuple[MultiGraph | MultiDiGraph, None]
Parse a gml network.
- Parameters:
file_name – Path to GML file
directed – Whether to create directed graph
- Returns:
Tuple of (multigraph, possible labels)
- Contracts:
Precondition: file_name must be a non-empty string
Postcondition: result graph is not None
Postcondition: result is a MultiGraph or MultiDiGraph
- py3plex.core.parsers.parse_gpickle(file_name: str, directed: bool = False, layer_separator: str | None = None) Tuple[MultiGraph | MultiDiGraph, None]
A parser for generic Gpickle as stored by Py3plex.
- Parameters:
file_name – Path to gpickle file
directed – Whether to create directed graph
layer_separator – Optional separator for layer parsing
- Contracts:
Precondition: file_name must be a non-empty string
Postcondition: result graph is not None
Postcondition: result is a MultiGraph or MultiDiGraph
- py3plex.core.parsers.parse_gpickle_biomine(file_name: str, directed: bool) Tuple[MultiGraph | MultiDiGraph, None]
Gpickle parser for BioMine graphs.
- Parameters:
file_name – Path to gpickle containing BioMine data
directed – Whether to create directed graph
- py3plex.core.parsers.parse_matrix(file_name: str, directed: bool) Tuple[Any, Any]
Parser for matrices.
- Parameters:
file_name – Path to .mat file
directed – Whether the graph is directed
- Returns:
Tuple of (network, group) from the .mat file
- Contracts:
Precondition: file_name must be a non-empty string
Postcondition: network must not be None
- py3plex.core.parsers.parse_matrix_to_nx(file_name: str, directed: bool) Graph | DiGraph
Parser for matrices to NetworkX graph.
- Parameters:
file_name – Path to .mat file
directed – Whether to create directed graph
- Returns:
NetworkX Graph or DiGraph
- py3plex.core.parsers.parse_multi_edgelist(input_name: str, directed: bool) Tuple[MultiGraph | MultiDiGraph, None]
A generic multiedgelist parser. Each line has the form: node layer node layer weight.
- Parameters:
input_name – Path to text file containing multiedges
directed – Whether to create directed graph
- py3plex.core.parsers.parse_multiedge_tuple_list(network: list, directed: bool) Tuple[MultiGraph | MultiDiGraph, None]
Parse a list of edge tuples into a multilayer network.
- Parameters:
network – List of edge tuples (node_first, node_second, layer_first, layer_second, weight)
directed – Whether to create directed graph
- py3plex.core.parsers.parse_multiplex_edges(input_name: str, directed: bool) Tuple[MultiGraph | MultiDiGraph, None]
Parse a multiplex edgelist file where each line specifies layer and edge.
- File Format:
layer node1 node2 [weight]
Each line: layer_id source_node target_node [optional_weight]
- Parameters:
input_name – Path to multiplex edgelist file
directed – Whether to create a directed graph
- Returns:
(parsed_graph, None for labels)
- Return type:
Tuple[Union[nx.MultiGraph, nx.MultiDiGraph], None]
Notes
Each edge belongs to a specific layer (first column)
Nodes are represented as (node_id, layer) tuples
Default weight is 1 if not specified
All edges have type=’default’ attribute
Automatically couples nodes across layers for multiplex structure
Examples
>>> # File content:
>>> # layer1 A B 1.5
>>> # layer2 A B 2.0
>>> # layer1 B C 1.0
>>> graph, _ = parse_multiplex_edges('multiplex.txt', directed=False)
>>> # Creates nodes: (A, layer1), (A, layer2), (B, layer1), etc.
- py3plex.core.parsers.parse_multiplex_folder(input_folder: str, directed: bool) Tuple[MultiGraph | MultiDiGraph, None, DataFrame]
Parse a folder containing multiplex network files.
Expects a folder with specific file formats for edges, layers, and optional activity.
- Expected Files:
*.edges: Edge information (format: layer_id node1 node2 weight)
layers.txt: Layer definitions (format: layer_id layer_name)
activity.txt: Optional temporal activity (format: node1 node2 timestamp layer_name)
- Parameters:
input_folder – Path to folder containing multiplex network files
directed – Whether to create a directed graph
- Returns:
Union[nx.MultiGraph, nx.MultiDiGraph]: Parsed multilayer graph
None: Placeholder for labels (not used)
pd.DataFrame: Time series activity data (empty if no activity.txt)
- Return type:
Tuple containing
Notes
Uses glob to find files with specific extensions
Layer mapping is built from layers.txt
Activity data is optional and returned as pandas DataFrame
Nodes are represented as (node_id, layer_id) tuples
Examples
>>> # Folder structure:
>>> # my_network/
>>> #   network.edges
>>> #   layers.txt
>>> #   activity.txt (optional)
>>> graph, _, activity_df = parse_multiplex_folder('my_network/', directed=False)
- py3plex.core.parsers.parse_network(input_name: str | Any, f_type: str = 'gml', directed: bool = False, label_delimiter: str | None = None, network_type: str = 'multilayer') Tuple[Any, Any, Any]
A wrapper method for the available parsers.
- Parameters:
input_name – Path to network file or network object
f_type – Type of file format to parse
directed – Whether to create directed graph
label_delimiter – Optional delimiter for labels
network_type – Type of network (multilayer or multiplex)
- Returns:
Tuple of (parsed_network, labels, time_series)
- py3plex.core.parsers.parse_nx(nx_object: Graph, directed: bool) Tuple[Graph, None]
Core parser for networkx objects.
- Parameters:
nx_object – A networkx graph
directed – Whether the graph is directed
- Returns:
Tuple of (graph, None)
- Contracts:
Precondition: nx_object must not be None
Precondition: nx_object must be a NetworkX graph
Postcondition: result graph is not None
- py3plex.core.parsers.parse_simple_edgelist(input_name: str, directed: bool) Tuple[Graph | DiGraph, None]
Parse a simple edgelist. Each line has the form: node node weight.
- Parameters:
input_name – Path to text file
directed – Whether to create directed graph
- py3plex.core.parsers.parse_spin_edgelist(input_name: str, directed: bool) Tuple[Graph, None]
Parse SPIN format edgelist file.
SPIN format includes node pairs with edge tags and optional weights.
- File Format:
node1 node2 tag [weight]
Each line: source_node target_node edge_tag [optional_weight]
- Parameters:
input_name – Path to SPIN edgelist file
directed – Whether to create directed graph (currently creates undirected)
- Returns:
(parsed_graph, None for labels)
- Return type:
Tuple[nx.Graph, None]
Notes
Currently always returns nx.Graph (undirected) regardless of directed parameter
Edge tag is stored in edge ‘type’ attribute
Default weight is 1 if not specified (4th column)
Examples
>>> # File content:
>>> # A B protein_interaction 0.95
>>> # B C gene_regulation 0.80
>>> graph, _ = parse_spin_edgelist('spin_edges.txt', directed=False)
- py3plex.core.parsers.require(*args, **kwargs)
- py3plex.core.parsers.save_edgelist(input_network: Graph, output_file: str, attributes: bool = False) None
Save network to edgelist format.
For multilayer networks (where nodes are tuples of (node_id, layer)), saves in format: node1 layer1 node2 layer2
For regular networks, saves in format: node1 node2
- py3plex.core.parsers.save_gpickle(input_network: Any, output_file: str) None
- py3plex.core.parsers.save_multiedgelist(input_network: Any, output_file: str, attributes: bool = False, encode_with_ints: bool = False) Tuple[Dict[Any, str], Dict[Any, str]] | None
Save a multiedgelist in the format: n1 l1 n2 l2 w
- Returns:
When encode_with_ints is True, returns tuple of (node_encodings, type_encodings) Otherwise returns None
- py3plex.core.converters.compute_layout(network: Graph, compute_layouts: str, layout_parameters: Dict[str, Any] | None, verbose: bool) Graph
Compute and normalize layout for a network.
- Parameters:
network – NetworkX graph to compute layout for
compute_layouts – Layout algorithm to use (‘force’, ‘random’, ‘custom_coordinates’)
layout_parameters – Optional parameters for layout algorithms
verbose – Whether to print verbose output
- Returns:
Network with ‘pos’ attribute added to nodes
- Contracts:
Precondition: network must not be None and must have at least one node
Precondition: compute_layouts must be a valid algorithm name
Postcondition: all nodes have ‘pos’ attribute (layout preserves nodes)
- py3plex.core.converters.ensure(*args, **kwargs)
- py3plex.core.converters.prepare_for_parsing(multinet)
Compute layout for a hairball visualization
- Parameters:
multinet (obj) – multilayer object
- Returns:
(names, prepared network)
- Return type:
tuple
- py3plex.core.converters.prepare_for_visualization(multinet: Graph, network_type: str = 'multilayer', compute_layouts: str = 'force', layout_parameters: Dict[str, Any] | None = None, verbose: bool = True, multiplex: bool = False) Tuple[List[Any], List[Graph], Any]
This function takes a multilayer object and returns the individual layers, their names, and the multilayer edges spanning multiple layers.
- Parameters:
multinet – multilayer network object
network_type – “multilayer” or “multiplex”
compute_layouts – Layout algorithm (‘force’, ‘random’, etc.)
layout_parameters – Optional layout parameters
verbose – Whether to print progress information
multiplex – Whether to treat as multiplex network
- Returns:
- (layer_names, layer_networks_list, multiedges)
layer_names: List of layer names
layer_networks_list: List of NetworkX graph objects for each layer
multiedges: Dictionary of edges spanning multiple layers
- Return type:
tuple
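Example
A hedged sketch; it assumes net is a loaded multi_layer_network whose underlying NetworkX graph is exposed as net.core_network:
>>> layer_names, layer_graphs, multiedges = prepare_for_visualization(
...     net.core_network, network_type='multilayer',
...     compute_layouts='force', verbose=False)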
- py3plex.core.converters.prepare_for_visualization_hairball(multinet, compute_layouts=False)
Compute layout for a hairball visualization
- Parameters:
multinet – multilayer network object
- Returns:
(names, prepared network)
- Return type:
tuple
- py3plex.core.converters.require(*args, **kwargs)
- class py3plex.core.random_generators.SBMMetadata(block_memberships: ndarray, block_matrix: ndarray, node_ids: List[str], layer_names: List[str])
Bases:
objectGround-truth information for a multilayer SBM sample.
- block_memberships
Array of shape (n_nodes,) with integer block labels in {0, …, n_blocks-1}. Shared across layers in this simple multiplex model.
- Type:
np.ndarray
- block_matrix
Array of shape (n_blocks, n_blocks) with edge probabilities p_ab. Same for all layers in this simple model.
- Type:
np.ndarray
- node_ids
Node identifiers used in the resulting multi_layer_network (e.g. “v0”, “v1”, …).
- Type:
List[str]
- layer_names
Names of the layers used in the multi_layer_network (e.g. “L0”, “L1”, …).
- Type:
List[str]
- block_matrix: ndarray
- block_memberships: ndarray
- layer_names: List[str]
- node_ids: List[str]
- py3plex.core.random_generators.ensure(*args, **kwargs)
- py3plex.core.random_generators.random_multilayer_ER(n: int, l: int, p: float, directed: bool = False) Any
Generate random multilayer Erdős-Rényi network.
- Parameters:
n – Number of nodes (must be positive)
l – Number of layers (must be positive)
p – Edge probability in [0, 1]
directed – If True, generate directed network
- Returns:
multi_layer_network object
- Contracts:
Precondition: n > 0 - must have at least one node
Precondition: l > 0 - must have at least one layer
Precondition: 0 <= p <= 1 - probability must be valid
Postcondition: result is not None - must return valid network
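Examples
A minimal sketch; the same calling pattern applies to random_multiplex_ER below:
>>> from py3plex.core.random_generators import random_multilayer_ER
>>> net = random_multilayer_ER(n=50, l=3, p=0.05)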
- py3plex.core.random_generators.random_multilayer_SBM(n_layers: int, n_nodes: int, n_blocks: int, p_in: float, p_out: float, coupling: float = 0.0, directed: bool = False, seed: int | None = None) Tuple[multi_layer_network, SBMMetadata]
Generate a simple multiplex multilayer stochastic block model (SBM) network.
This function creates a multilayer network with:
n_nodes nodes shared across all layers,
n_layers layers,
n_blocks latent communities (blocks),
within-block edge probability p_in,
between-block edge probability p_out,
optional diagonal inter-layer coupling with probability coupling between replicas of the same node across layers.
The block memberships are shared across layers in this simple model, and the same block probability matrix is used for all layers.
- Parameters:
n_layers (int) – Number of layers.
n_nodes (int) – Number of nodes (shared across all layers).
n_blocks (int) – Number of blocks (communities).
p_in (float) – Edge probability for edges within the same block.
p_out (float) – Edge probability for edges between different blocks.
coupling (float, optional) – Probability of inter-layer edges between replicas of the same node in different layers. Defaults to 0.0 (no inter-layer edges).
directed (bool, optional) – Whether to generate directed layers. Defaults to False.
seed (int, optional) – Random seed for reproducibility.
- Returns:
network (multi_layer_network) – The generated multilayer network.
metadata (SBMMetadata) – Ground-truth metadata containing block memberships and block matrix.
Notes
Node IDs are strings “v0”, “v1”, …, “v{n_nodes-1}”.
Layer names are strings “L0”, “L1”, …, “L{n_layers-1}”.
Edges are inserted via the py3plex list-based API: [src_node, src_layer, dst_node, dst_layer, weight].
Examples
>>> from py3plex.core.random_generators import random_multilayer_SBM
>>> net, meta = random_multilayer_SBM(
...     n_layers=3, n_nodes=20, n_blocks=2,
...     p_in=0.5, p_out=0.05, coupling=0.1, seed=42
... )
>>> print(len(meta.block_memberships))
20
>>> print(len(meta.layer_names))
3
- py3plex.core.random_generators.random_multiplex_ER(n: int, l: int, p: float, directed: bool = False) Any
Generate random multiplex Erdős-Rényi network.
- Parameters:
n – Number of nodes (must be positive)
l – Number of layers (must be positive)
p – Edge probability in [0, 1]
directed – If True, generate directed network
- Returns:
multi_layer_network object
- Contracts:
Precondition: n > 0 - must have at least one node
Precondition: l > 0 - must have at least one layer
Precondition: 0 <= p <= 1 - probability must be valid
Postcondition: result is not None - must return valid network
- py3plex.core.random_generators.random_multiplex_generator(n: int, m: int, d: float = 0.9) MultiGraph
Generate a multiplex network from a random bipartite graph.
- Parameters:
n – Number of nodes (must be positive)
m – Number of layers (must be positive)
d – Layer dropout to avoid cliques, in [0, 1] (default: 0.9)
- Returns:
Generated multiplex network as a MultiGraph
- Contracts:
Precondition: n > 0 - must have at least one node
Precondition: m > 0 - must have at least one layer
Precondition: 0 <= d <= 1 - dropout must be valid probability
Postcondition: result is not None
Postcondition: result is a NetworkX MultiGraph
- py3plex.core.random_generators.require(*args, **kwargs)
Supporting methods for parsers and converters.
This module provides utility functions for network parsing and conversion, including layer splitting, multiplex edge addition, and GAF parsing.
- py3plex.core.supporting.add_mpx_edges(input_network: Graph) Graph
Add multiplex edges between corresponding nodes across layers.
Multiplex edges connect nodes representing the same entity across different layers of a multilayer network.
- Parameters:
input_network – NetworkX graph with multilayer structure.
- Returns:
Network with added multiplex edges between corresponding nodes.
- Contracts:
Precondition: input_network must not be None
Precondition: input_network must be a NetworkX graph
Postcondition: result is a NetworkX graph
Example
>>> network = nx.Graph()
>>> network.add_node(('A', 'layer1'))
>>> network.add_node(('A', 'layer2'))
>>> network = add_mpx_edges(network)
- py3plex.core.supporting.ensure(*args, **kwargs)
- py3plex.core.supporting.parse_gaf_to_uniprot_GO(gaf_mappings: str, filter_terms: int | None = None) Dict[str, List[str]]
Parse Gene Association File (GAF) to map UniProt IDs to GO terms.
- Parameters:
gaf_mappings – Path to GAF file.
filter_terms – Optional minimum occurrence threshold for GO terms.
- Returns:
Dictionary mapping UniProt IDs to lists of associated GO terms.
Example
>>> mappings = parse_gaf_to_uniprot_GO("gaf_file.gaf", filter_terms=5)
- py3plex.core.supporting.require(*args, **kwargs)
- py3plex.core.supporting.split_to_layers(input_network: Graph) Dict[Any, Graph]
Split a multilayer network into separate layer subgraphs.
- Parameters:
input_network – NetworkX graph containing nodes from multiple layers.
- Returns:
Dictionary mapping layer names to their corresponding subgraphs.
- Contracts:
Precondition: input_network must not be None
Precondition: input_network must be a NetworkX graph
Postcondition: result is a dictionary
Postcondition: all values are NetworkX graphs
Example
>>> network = nx.Graph()
>>> network.add_node(('A', 'layer1'))
>>> network.add_node(('B', 'layer2'))
>>> layers = split_to_layers(network)
NetworkX compatibility layer for py3plex.
This module provides compatibility functions for different NetworkX versions.
- py3plex.core.nx_compat.ensure(*args, **kwargs)
- py3plex.core.nx_compat.is_string_like(obj: Any) bool
Check if obj is string-like (compatible with NetworkX < 3.0).
- Parameters:
obj – Object to check
- Returns:
True if string-like
- Return type:
bool
- py3plex.core.nx_compat.nx_from_scipy_sparse_matrix(A: Any, parallel_edges: bool = False, create_using: Graph | None = None, edge_attribute: str = 'weight') Graph
Create a graph from scipy sparse matrix (compatible with NetworkX < 3.0 and >= 3.0).
- Parameters:
A – scipy sparse matrix
parallel_edges – Whether to create parallel edges (ignored in NetworkX 3.0+)
create_using – Graph type to create
edge_attribute – Edge attribute name for weights
- Returns:
NetworkX graph
- Contracts:
Precondition: A must not be None
Precondition: edge_attribute must be a non-empty string
Postcondition: returns a NetworkX graph
- py3plex.core.nx_compat.nx_info(G: Graph) str
Get network information (compatible with NetworkX < 3.0 and >= 3.0).
- Parameters:
G – NetworkX graph
- Returns:
Network information
- Return type:
str
- Contracts:
Precondition: G must not be None and must be a NetworkX graph
Postcondition: returns a non-empty string
- py3plex.core.nx_compat.nx_read_gpickle(path: str) Graph
Read a graph from a pickle file (compatible with NetworkX < 3.0 and >= 3.0).
- Parameters:
path – File path
- Returns:
NetworkX graph
- Contracts:
Precondition: path must be a non-empty string
Postcondition: returns a NetworkX graph
- py3plex.core.nx_compat.nx_to_scipy_sparse_matrix(G: Graph, nodelist: list | None = None, dtype: Any | None = None, weight: str = 'weight', format: str = 'csr') Any
Convert graph to scipy sparse matrix (compatible with NetworkX < 3.0 and >= 3.0).
- Parameters:
G – NetworkX graph
nodelist – List of nodes
dtype – Data type
weight – Edge weight attribute
format – Sparse matrix format
- Returns:
scipy sparse matrix
- Contracts:
Precondition: G must not be None and must be a NetworkX graph
Precondition: weight must be a non-empty string
Precondition: format must be a non-empty string
Postcondition: returns a non-None sparse matrix
- py3plex.core.nx_compat.nx_write_gpickle(G: Graph, path: str) None
Write a graph to a pickle file (compatible with NetworkX < 3.0 and >= 3.0).
- Parameters:
G – NetworkX graph
path – File path
- Contracts:
Precondition: G must not be None and must be a NetworkX graph
Precondition: path must be a non-empty string
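Example
A round-trip sketch for the two gpickle helpers (the temporary path is illustrative):
>>> import networkx as nx
>>> from py3plex.core.nx_compat import nx_write_gpickle, nx_read_gpickle
>>> g = nx.karate_club_graph()
>>> nx_write_gpickle(g, '/tmp/karate.gpickle')
>>> g2 = nx_read_gpickle('/tmp/karate.gpickle')
>>> g2.number_of_nodes() == g.number_of_nodes()
True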
- py3plex.core.nx_compat.require(*args, **kwargs)
HINMINE Network Decomposition
- py3plex.core.HINMINE.decomposition.aggregate_sum(input_thing, classes, universal_set)
- py3plex.core.HINMINE.decomposition.aggregate_weighted_sum(input_thing, classes, universal_set)
- py3plex.core.HINMINE.decomposition.calculate_importance_chi(classes, universal_set, linked_nodes, n, **kwargs)
Calculates the importance of a single midpoint using chi-squared weighting.
- Parameters:
classes – List of all classes
universal_set – Set of all indices to consider
linked_nodes – Set of all nodes linked by the midpoint
n – Number of elements of the universal set
- Returns:
List of weights of the midpoint for each label in classes
- py3plex.core.HINMINE.decomposition.calculate_importance_delta(classes, universal_set, linked_nodes, n, **kwargs)
Calculates the importance of a single midpoint using delta-idf weighting.
- Parameters:
classes – List of all classes
universal_set – Set of all indices to consider
linked_nodes – Set of all nodes linked by the midpoint
n – Number of elements of the universal set
- Returns:
List of weights of the midpoint for each label in classes
- py3plex.core.HINMINE.decomposition.calculate_importance_gr(classes, universal_set, linked_nodes, n, **kwargs)
Calculates the importance of a single midpoint using gain ratio (GR) weighting.
- Parameters:
classes – List of all classes
universal_set – Set of all indices to consider
linked_nodes – Set of all nodes linked by the midpoint
n – Number of elements of the universal set
- Returns:
List of weights of the midpoint for each label in classes
- py3plex.core.HINMINE.decomposition.calculate_importance_idf(classes, universal_set, linked_nodes, n, **kwargs)
Calculates the importance of a single midpoint using idf weighting.
- Parameters:
classes – List of all classes
universal_set – Set of all indices to consider
linked_nodes – Set of all nodes linked by the midpoint
n – Number of elements of the universal set
- Returns:
List of weights of the midpoint for each label in classes
- py3plex.core.HINMINE.decomposition.calculate_importance_ig(classes, universal_set, linked_nodes, n, **kwargs)
Calculates the importance of a single midpoint using information gain (IG) weighting.
- Parameters:
classes – List of all classes
universal_set – Set of all indices to consider
linked_nodes – Set of all nodes linked by the midpoint
n – Number of elements of the universal set
- Returns:
List of weights of the midpoint for each label in classes
- py3plex.core.HINMINE.decomposition.calculate_importance_okapi(classes, universal_set, linked_nodes, n, degrees=None, avgdegree=None)
- py3plex.core.HINMINE.decomposition.calculate_importance_rf(classes, universal_set, linked_nodes, n, **kwargs)
Calculates the importance of a single midpoint using rf weighting.
- Parameters:
classes – List of all classes
universal_set – Set of all indices to consider
linked_nodes – Set of all nodes linked by the midpoint
n – Number of elements of the universal set
- Returns:
List of weights of the midpoint for each label in classes
- py3plex.core.HINMINE.decomposition.calculate_importance_tf(classes, universal_set, linked_nodes, n, **kwargs)
Calculates the importance of a single midpoint using term frequency weighting.
- Parameters:
classes – List of all classes
universal_set – Set of all indices to consider
linked_nodes – Set of all nodes linked by the midpoint
n – Number of elements of the universal set
- Returns:
List of weights of the midpoint for each label in classes
- py3plex.core.HINMINE.decomposition.calculate_importance_w2w(classes, universal_set, linked_nodes, n, **kwargs)
- py3plex.core.HINMINE.decomposition.calculate_importances(midpoints, classes, universal_set, method, degrees=None, avgdegree=None)
- py3plex.core.HINMINE.decomposition.chi_value(actual_pos_num, predicted_pos_num, tp, n)
- py3plex.core.HINMINE.decomposition.get_aggregation_method(method_name)
- py3plex.core.HINMINE.decomposition.get_calculation_method(method_name)
- py3plex.core.HINMINE.decomposition.gr_value(actual_pos_num, predicted_pos_num, tp, n)
- py3plex.core.HINMINE.decomposition.hinmine_decompose(network, heuristic, cycle=None, parallel=False)
- py3plex.core.HINMINE.decomposition.hinmine_get_cycles(network, cycle=None)
- py3plex.core.HINMINE.decomposition.ig_value(actual_pos_num, predicted_pos_num, tp, n)
- py3plex.core.HINMINE.decomposition.np_calculate_importance_chi(predicted, label_matrix, actual_pos_nums)
- py3plex.core.HINMINE.decomposition.np_calculate_importance_tf(predicted, label_matrix)
- py3plex.core.HINMINE.decomposition.rf_value(predicted_pos_num, tp)
- py3plex.core.HINMINE.IO.load_hinmine_object(infile, label_delimiter='---', weight_tag=False, targets=True)
Configuration and Utilities
Centralized configuration for py3plex.
This module provides default settings for visualization, layout algorithms, and other configurable aspects of the library. Users can override these settings by modifying the values after import.
Example
>>> from py3plex import config
>>> config.DEFAULT_NODE_SIZE = 15
>>> config.DEFAULT_EDGE_ALPHA = 0.5
- py3plex.config.get_color_palette(name: str | None = None) List[str]
Get a color palette by name.
- Parameters:
name – Palette name. If None, returns the default palette.
- Returns:
List of color hex codes.
- Raises:
ValueError – If palette name is not recognized.
Example
>>> from py3plex.config import get_color_palette
>>> colors = get_color_palette("rainbow")
>>> print(colors[0])
'#FF6B6B'
- py3plex.config.reset_to_defaults() None
Reset all configuration values to their defaults.
This is useful for testing or when you want to start fresh.
Example
>>> from py3plex import config
>>> config.DEFAULT_NODE_SIZE = 20
>>> config.reset_to_defaults()
>>> print(config.DEFAULT_NODE_SIZE)
10
Utility functions for py3plex.
This module provides common utilities used across the library, including random state management for reproducibility and deprecation warnings.
- py3plex.utils.deprecated(reason: str, version: str | None = None, alternative: str | None = None) Callable[[Callable], Callable]
Decorator to mark functions/methods as deprecated.
This decorator will issue a DeprecationWarning when the decorated function is called, providing information about why it’s deprecated and what to use instead.
- Parameters:
reason – Explanation of why the function is deprecated
version – Version in which the function was deprecated (optional)
alternative – Suggested alternative function/method (optional)
- Returns:
Decorator function
Example
>>> @deprecated(
...     reason="This function is obsolete",
...     version="0.95a",
...     alternative="new_function()"
... )
... def old_function():
...     pass
- py3plex.utils.ensure(*args, **kwargs)
- py3plex.utils.get_background_knowledge_dir() str
Get the absolute path to the background knowledge directory.
- Returns:
Absolute path to the background_knowledge directory
- Return type:
str
Examples
>>> from py3plex.utils import get_background_knowledge_dir
>>> dir_path = get_background_knowledge_dir()
- py3plex.utils.get_background_knowledge_path(filename: str) str
Get the absolute path to a background knowledge file or directory.
Convenience wrapper around get_data_path() specifically for background knowledge files.
- Parameters:
filename – Name or relative path of the background knowledge file. Use empty string or ‘.’ to get the background_knowledge directory itself.
- Returns:
Absolute path to the background knowledge file or directory
- Return type:
str
Examples
>>> from py3plex.utils import get_background_knowledge_path
>>> path = get_background_knowledge_path("bk.n3")
>>> dir_path = get_background_knowledge_path(".")
- py3plex.utils.get_data_path(relative_path: str) str
Get the absolute path to a data file in the repository.
This function searches for data files in multiple locations to support both:
Running examples from a cloned repository
Running scripts/notebooks from any directory with datasets locally available
Search order:
1. Relative to the calling script’s directory (for examples in cloned repo)
2. Relative to current working directory (for notebooks/user scripts)
3. Relative to py3plex package location (for editable installs)
- Parameters:
relative_path – Path relative to repository root (e.g., “datasets/intact02.gpickle”)
- Returns:
Absolute path to the file
- Return type:
str
- Raises:
Py3plexIOError – If the file cannot be found in any search location
Examples
>>> from py3plex.utils import get_data_path
>>> path = get_data_path("datasets/intact02.gpickle")
>>> os.path.exists(path)
True
Note
When py3plex is installed via pip, datasets are not included in the package. Users should either:
Clone the repository and run examples from there
Download datasets separately and place them relative to their scripts
Use a datasets folder in the current working directory
- py3plex.utils.get_dataset_path(filename: str) str
Get the absolute path to a dataset file.
Convenience wrapper around get_data_path() specifically for dataset files.
- Parameters:
filename – Name or relative path of the dataset file
- Returns:
Absolute path to the dataset file
- Return type:
str
Examples
>>> from py3plex.utils import get_dataset_path
>>> path = get_dataset_path("intact02.gpickle")
>>> os.path.exists(path)
True
- py3plex.utils.get_example_image_path(filename: str) str
Get the absolute path to an example image file.
Convenience wrapper around get_data_path() specifically for example image files.
- Parameters:
filename – Name or relative path of the image file
- Returns:
Absolute path to the example image file
- Return type:
str
Examples
>>> from py3plex.utils import get_example_image_path
>>> path = get_example_image_path("intact_10_BH.png")
- py3plex.utils.get_multilayer_dataset_path(relative_path: str) str
Get the absolute path to a multilayer dataset file.
Convenience wrapper around get_data_path() specifically for multilayer dataset files.
- Parameters:
relative_path – Relative path within multilayer_datasets directory
- Returns:
Absolute path to the multilayer dataset file
- Return type:
str
Examples
>>> from py3plex.utils import get_multilayer_dataset_path
>>> path = get_multilayer_dataset_path("MLKing/MLKing2013_multiplex.edges")
- py3plex.utils.get_rng(seed: int | Generator | None = None) Generator
Get a NumPy random number generator with optional seed.
This provides a unified interface for random state management across the library, ensuring reproducibility when a seed is provided.
- Parameters:
seed – Random seed for reproducibility. Can be:
None: Use default unseeded generator
int: Seed value for the generator
np.random.Generator: Pass through existing generator
- Returns:
Initialized random number generator
- Return type:
np.random.Generator
Examples
>>> rng = get_rng(42)
>>> rng.random()  # Reproducible random number
0.7739560485559633
>>> rng1 = get_rng(42)
>>> rng2 = get_rng(42)
>>> rng1.random() == rng2.random()
True
>>> existing_rng = np.random.default_rng(123)
>>> rng = get_rng(existing_rng)
>>> rng is existing_rng
True
- Contracts:
Postcondition: result is a NumPy random Generator
Note
Uses numpy.random.Generator (modern API introduced in NumPy 1.17) rather than the legacy numpy.random.RandomState API.
Negative seeds are converted to positive values by taking the absolute value, ensuring compatibility with NumPy’s SeedSequence.
- py3plex.utils.require(*args, **kwargs)
- py3plex.utils.validate_multilayer_input(network_data: Any) None
Validate multilayer network input data.
Performs sanity checks on multilayer network structures to catch common errors early.
- Parameters:
network_data – Network data to validate (can be various formats)
- Raises:
ValueError – If the network data is invalid
- Contracts:
Precondition: network_data must not be None
Example
>>> from py3plex.utils import validate_multilayer_input
>>> validate_multilayer_input(my_network)
- py3plex.utils.warn_if_deprecated(feature_name: str, reason: str, alternative: str | None = None) None
Issue a deprecation warning for a feature.
This is useful for deprecating specific usage patterns or parameter combinations rather than entire functions.
- Parameters:
feature_name – Name of the deprecated feature
reason – Explanation of why it’s deprecated
alternative – Suggested alternative (optional)
Example
>>> def my_function(old_param=None, new_param=None):
...     if old_param is not None:
...         warn_if_deprecated(
...             "old_param",
...             "This parameter is no longer used",
...             "new_param"
...         )
Custom exception types for the py3plex library.
This module defines domain-specific exceptions to provide clear error messages and enable better error handling throughout the library.
Py3plex follows Rust’s approach to error messages:
Clear, descriptive error messages
Error codes (e.g., PX101, PX201)
Helpful suggestions for fixing issues
“Did you mean?” suggestions for typos
Context showing the relevant location in files
- Example usage:
>>> from py3plex.exceptions import InvalidLayerError
>>> raise InvalidLayerError(
...     "social",
...     available_layers=["work", "family", "social_media"],
...     suggestion="Did you mean 'social_media'?"
... )
- exception py3plex.exceptions.AlgorithmError(message: str, *, algorithm_name: str | None = None, valid_algorithms: List[str] | None = None, **kwargs)
Bases:
Py3plexExceptionException raised when an algorithm execution fails.
Error code: PX301
- default_code: str = 'PX301'
- exception py3plex.exceptions.CentralityComputationError(message: str, *, algorithm_name: str | None = None, valid_algorithms: List[str] | None = None, **kwargs)
Bases:
AlgorithmErrorException raised when centrality computation fails.
Error code: PX301
- default_code: str = 'PX301'
- exception py3plex.exceptions.CommunityDetectionError(message: str, *, algorithm_name: str | None = None, valid_algorithms: List[str] | None = None, **kwargs)
Bases:
AlgorithmErrorException raised when community detection fails.
Error code: PX301
- default_code: str = 'PX301'
- exception py3plex.exceptions.ConversionError(message: str, *, code: str | None = None, suggestions: List[str] | None = None, notes: List[str] | None = None, did_you_mean: str | None = None, context: Dict[str, Any] | None = None)
Bases:
Py3plexExceptionException raised when format conversion fails.
Error code: PX501
- default_code: str = 'PX501'
- exception py3plex.exceptions.DecompositionError(message: str, *, algorithm_name: str | None = None, valid_algorithms: List[str] | None = None, **kwargs)
Bases:
AlgorithmErrorException raised when network decomposition fails.
Error code: PX301
- default_code: str = 'PX301'
- exception py3plex.exceptions.EmbeddingError(message: str, *, algorithm_name: str | None = None, valid_algorithms: List[str] | None = None, **kwargs)
Bases:
AlgorithmErrorException raised when embedding generation fails.
Error code: PX301
- default_code: str = 'PX301'
- exception py3plex.exceptions.ExternalToolError(message: str, *, code: str | None = None, suggestions: List[str] | None = None, notes: List[str] | None = None, did_you_mean: str | None = None, context: Dict[str, Any] | None = None)
Bases:
Py3plexExceptionException raised when external tool execution fails.
Error code: PX001
- default_code: str = 'PX001'
- exception py3plex.exceptions.IncompatibleNetworkError(message: str, *, code: str | None = None, suggestions: List[str] | None = None, notes: List[str] | None = None, did_you_mean: str | None = None, context: Dict[str, Any] | None = None)
Bases:
Py3plexExceptionException raised when network format is incompatible with an operation.
Error code: PX304
- default_code: str = 'PX304'
- exception py3plex.exceptions.InvalidEdgeError(message: str, *, code: str | None = None, suggestions: List[str] | None = None, notes: List[str] | None = None, did_you_mean: str | None = None, context: Dict[str, Any] | None = None)
Bases:
Py3plexExceptionException raised when an invalid edge is specified.
Error code: PX203
- default_code: str = 'PX203'
- exception py3plex.exceptions.InvalidLayerError(layer_name: str, *, available_layers: List[str] | None = None, **kwargs)
Bases:
Py3plexExceptionException raised when an invalid layer is specified.
Error code: PX201
Example
>>> raise InvalidLayerError(
...     "social",
...     available_layers=["work", "family"],
... )
- default_code: str = 'PX201'
- exception py3plex.exceptions.InvalidNodeError(node_id: str, *, available_nodes: List[str] | None = None, **kwargs)
Bases:
Py3plexExceptionException raised when an invalid node is specified.
Error code: PX202
- default_code: str = 'PX202'
- exception py3plex.exceptions.NetworkConstructionError(message: str, *, code: str | None = None, suggestions: List[str] | None = None, notes: List[str] | None = None, did_you_mean: str | None = None, context: Dict[str, Any] | None = None)
Bases:
Py3plexExceptionException raised when network construction fails.
Error code: PX208
- default_code: str = 'PX208'
- exception py3plex.exceptions.ParsingError(message: str, *, file_path: str | None = None, line_number: int | None = None, expected: str | None = None, got: str | None = None, **kwargs)
Bases:
Py3plexExceptionException raised when parsing input data fails.
Error code: PX105
- default_code: str = 'PX105'
- exception py3plex.exceptions.Py3plexException(message: str, *, code: str | None = None, suggestions: List[str] | None = None, notes: List[str] | None = None, did_you_mean: str | None = None, context: Dict[str, Any] | None = None)
Bases:
ExceptionBase exception class for all py3plex-specific exceptions.
- code
Error code (e.g., “PX001”)
- suggestions
List of suggestions for fixing the error
- notes
Additional context/notes
- did_you_mean
Suggested correction for typos
- default_code: str = 'PX001'
- format_message(use_color: bool = True) str
Format the exception with Rust-style error formatting.
- Parameters:
use_color – Whether to use ANSI colors
- Returns:
Formatted error message string
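Example
A hedged sketch of catching and formatting an error; the message and suggestion text are illustrative, and the exact rendered output depends on the Rust-style formatter:
>>> from py3plex.exceptions import Py3plexException
>>> try:
...     raise Py3plexException(
...         "failed to load network",
...         suggestions=["check that the input file exists"],
...     )
... except Py3plexException as e:
...     print(e.format_message(use_color=False))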
- exception py3plex.exceptions.Py3plexFormatError(message: str, *, valid_formats: List[str] | None = None, input_format: str | None = None, **kwargs)
Bases:
Py3plexExceptionException raised when input format is invalid or cannot be parsed.
Error code: PX103
- default_code: str = 'PX103'
- exception py3plex.exceptions.Py3plexIOError(message: str, *, code: str | None = None, suggestions: List[str] | None = None, notes: List[str] | None = None, did_you_mean: str | None = None, context: Dict[str, Any] | None = None)
Bases:
Py3plexExceptionException raised when I/O operations fail (file reading, writing, etc.).
Error code: PX101
- default_code: str = 'PX101'
- exception py3plex.exceptions.Py3plexLayoutError(message: str, *, code: str | None = None, suggestions: List[str] | None = None, notes: List[str] | None = None, did_you_mean: str | None = None, context: Dict[str, Any] | None = None)
Bases:
Py3plexExceptionException raised when layout computation or visualization positioning fails.
Error code: PX402
- default_code: str = 'PX402'
- exception py3plex.exceptions.Py3plexMatrixError(message: str, *, code: str | None = None, suggestions: List[str] | None = None, notes: List[str] | None = None, did_you_mean: str | None = None, context: Dict[str, Any] | None = None)
Bases:
Py3plexExceptionException raised when matrix operations fail or matrix is invalid.
Error code: PX001
- default_code: str = 'PX001'
- exception py3plex.exceptions.VisualizationError(message: str, *, code: str | None = None, suggestions: List[str] | None = None, notes: List[str] | None = None, did_you_mean: str | None = None, context: Dict[str, Any] | None = None)
Bases:
Py3plexExceptionException raised when visualization operations fail.
Error code: PX401
- default_code: str = 'PX401'
Domain-Specific Language (DSL)
DSL v2 for Multilayer Network Queries.
This module provides a Domain-Specific Language (DSL) version 2 for querying and analyzing multilayer networks. DSL v2 introduces:
Unified AST representation
Pythonic builder API (Q, L, Param)
Multilayer-specific abstractions (layer algebra, intralayer/interlayer)
Improved ergonomics (ORDER BY, LIMIT, EXPLAIN, rich results)
DSL Extensions (v2.1):
5. Network comparison (C.compare())
6. Null models (N.model())
7. Path queries (P.shortest(), P.random_walk())
8. Plugin system for user-defined operators (@dsl_operator)
- Example Usage:
>>> from py3plex.dsl import Q, L, Param, C, N, P
>>>
>>> # Build a query using the builder API
>>> q = (
...     Q.nodes()
...     .from_layers(L["social"] + L["work"])
...     .where(intralayer=True, degree__gt=5)
...     .compute("betweenness_centrality", alias="bc")
...     .order_by("bc", desc=True)
...     .limit(20)
... )
>>>
>>> # Execute the query
>>> result = q.execute(network, k=5)
>>> df = result.to_pandas()
>>>
>>> # Compare two networks
>>> comparison = C.compare("baseline", "treatment").using("multiplex_jaccard").execute(networks)
>>>
>>> # Generate null models
>>> nullmodels = N.configuration().samples(100).seed(42).execute(network)
>>>
>>> # Find paths
>>> paths = P.shortest("Alice", "Bob").crossing_layers().execute(network)
>>>
>>> # Define custom operators
>>> @dsl_operator("my_measure")
... def my_custom_measure(context, param: float = 1.0):
...     # Use context.graph, context.current_layers, etc.
...     return result
- The DSL also supports a string syntax:
SELECT nodes
FROM LAYER("social") + LAYER("work")
WHERE intralayer AND degree > 5
COMPUTE betweenness_centrality AS bc
ORDER BY bc DESC
LIMIT 20
TO pandas
All frontends (string DSL, builder API) compile into a single AST which is executed by the same engine, ensuring consistent behavior.
- class py3plex.dsl.AttrType(value)
Bases:
EnumAttribute type for static analysis.
- BOOLEAN = 'boolean'
- CATEGORICAL = 'categorical'
- DATETIME = 'datetime'
- EDGE_REF = 'edge_ref'
- LAYER_REF = 'layer_ref'
- NODE_REF = 'node_ref'
- NUMERIC = 'numeric'
- UNKNOWN = 'unknown'
- supports_operator(op: str) bool
Check if this type supports a given operator.
- class py3plex.dsl.BooleanExpression(condition: ConditionExpr, negated: bool = False)
Bases:
objectRepresents a boolean expression that can be combined with & (AND), | (OR), and ~ (NOT).
This class wraps ConditionExpr AST nodes and provides operator overloading for building complex boolean logic.
- _condition
The underlying ConditionExpr AST node
- _negated
Whether this expression is negated
- to_condition_expr() ConditionExpr
Convert to AST ConditionExpr.
- Returns:
ConditionExpr that can be used in SelectStmt
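Example
A sketch of composing boolean logic, assuming comparisons on the F field proxy (documented under FieldProxy below) return BooleanExpression objects:
>>> from py3plex.dsl import F
>>> expr = (F.degree > 5) & ~(F.layer == "coupling")
>>> cond = expr.to_condition_expr()  # usable in a SelectStmt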
- class py3plex.dsl.C
Bases:
objectCompare factory for creating CompareBuilder instances.
Example
>>> C.compare("baseline", "intervention").using("multiplex_jaccard")
- static compare(network_a: str, network_b: str) CompareBuilder
Create a comparison builder for two networks.
- class py3plex.dsl.CompareBuilder(network_a: str, network_b: str)
Bases:
objectBuilder for COMPARE statements.
Example
>>> from py3plex.dsl import C, L
>>>
>>> result = (
...     C.compare("baseline", "intervention")
...     .using("multiplex_jaccard")
...     .on_layers(L["social"] + L["work"])
...     .measure("global_distance", "layerwise_distance")
...     .execute(networks)
... )
- execute(networks: Dict[str, Any]) ComparisonResult
Execute the comparison.
- Parameters:
networks – Dictionary mapping network names to network objects
- Returns:
ComparisonResult with comparison results
- measure(*measures: str) CompareBuilder
Specify which measures to compute.
- Parameters:
*measures – Measure names (e.g., “global_distance”, “layerwise_distance”)
- Returns:
Self for chaining
- on_layers(layer_expr: LayerExprBuilder) CompareBuilder
Filter by layers using layer algebra.
- Parameters:
layer_expr – Layer expression (e.g., L[“social”] + L[“work”])
- Returns:
Self for chaining
- to(target: str) CompareBuilder
Set export target.
- Parameters:
target – Export format (‘pandas’, ‘json’)
- Returns:
Self for chaining
- to_ast() CompareStmt
Export as AST CompareStmt object.
- using(metric: str) CompareBuilder
Set the comparison metric.
- Parameters:
metric – Metric name (e.g., “multiplex_jaccard”)
- Returns:
Self for chaining
- class py3plex.dsl.CompareStmt(network_a: str, network_b: str, metric_name: str, layer_expr: ~py3plex.dsl.ast.LayerExpr | None = None, measures: ~typing.List[str] = <factory>, export_target: str | None = None)
Bases:
objectCOMPARE statement for network comparison.
- DSL Example:
COMPARE NETWORK baseline, intervention
USING multiplex_jaccard
ON LAYER("offline") + LAYER("online")
MEASURE global_distance
TO pandas
- network_a
Name/key for first network
- Type:
str
- network_b
Name/key for second network
- Type:
str
- metric_name
Comparison metric (e.g., “multiplex_jaccard”)
- Type:
str
- layer_expr
Optional layer expression for filtering
- Type:
py3plex.dsl.ast.LayerExpr | None
- measures
List of measure types (e.g., [“global_distance”, “layerwise_distance”])
- Type:
List[str]
- export_target
Optional export format
- Type:
str | None
- export_target: str | None = None
- measures: List[str]
- metric_name: str
- network_a: str
- network_b: str
- class py3plex.dsl.Comparison(left: str, op: str, right: str | float | int | ParamRef)
Bases:
objectA comparison expression.
- left
Attribute name (e.g., “degree”, “layer”)
- Type:
str
- op
Comparison operator (‘>’, ‘>=’, ‘<’, ‘<=’, ‘=’, ‘!=’)
- Type:
str
- right
Value to compare against
- Type:
str | float | int | py3plex.dsl.ast.ParamRef
- left: str
- op: str
- class py3plex.dsl.ComputeItem(name: str, alias: str | None = None, uncertainty: bool = False, method: str | None = None, n_samples: int | None = None, ci: float | None = None, bootstrap_unit: str | None = None, bootstrap_mode: str | None = None, n_null: int | None = None, null_model: str | None = None, random_state: int | None = None)
Bases:
objectA measure to compute.
- name
Measure name (e.g., ‘betweenness_centrality’)
- Type:
str
- alias
Optional alias for the result (e.g., ‘bc’)
- Type:
str | None
- uncertainty
Whether to compute uncertainty for this measure
- Type:
bool
- method
Uncertainty estimation method (e.g., ‘bootstrap’, ‘perturbation’, ‘null_model’)
- Type:
str | None
- n_samples
Number of samples for uncertainty estimation
- Type:
int | None
- ci
Confidence interval level (e.g., 0.95 for 95% CI)
- Type:
float | None
- bootstrap_unit
What to resample for bootstrap: “edges”, “nodes”, or “layers”
- Type:
str | None
- bootstrap_mode
Resampling mode: “resample” or “permute”
- Type:
str | None
- n_null
Number of null model replicates
- Type:
int | None
- null_model
Null model type: “degree_preserving”, “erdos_renyi”, “configuration”
- Type:
str | None
- random_state
Random seed for reproducibility
- Type:
int | None
- alias: str | None = None
- bootstrap_mode: str | None = None
- bootstrap_unit: str | None = None
- ci: float | None = None
- method: str | None = None
- n_null: int | None = None
- n_samples: int | None = None
- name: str
- null_model: str | None = None
- random_state: int | None = None
- property result_name: str
Get the name to use in results (alias or original name).
- uncertainty: bool = False
- class py3plex.dsl.ConditionAtom(comparison: Comparison | None = None, function: FunctionCall | None = None, special: SpecialPredicate | None = None)
Bases:
objectA single atomic condition.
Exactly one of comparison, function, or special should be non-None.
- comparison
Simple comparison (e.g., degree > 5)
- Type:
py3plex.dsl.ast.Comparison | None
- function
Function call (e.g., reachable_from(“Alice”))
- Type:
py3plex.dsl.ast.FunctionCall | None
- special
Special predicate (e.g., intralayer)
- Type:
- comparison: Comparison | None = None
- function: FunctionCall | None = None
- property is_comparison: bool
- property is_function: bool
- property is_special: bool
- special: SpecialPredicate | None = None
- class py3plex.dsl.ConditionExpr(atoms: ~typing.List[~py3plex.dsl.ast.ConditionAtom] = <factory>, ops: ~typing.List[str] = <factory>)
Bases:
objectCompound condition expression.
Represents conditions joined by logical operators (AND, OR).
- atoms
List of condition atoms
- Type:
- ops
List of logical operators (‘AND’, ‘OR’) between atoms
- Type:
List[str]
- atoms: List[ConditionAtom]
- ops: List[str]
- class py3plex.dsl.DSLExecutionContext(graph: Any, current_layers: List[str] | None = None, current_nodes: List[Any] | None = None, params: Mapping[str, Any] | None = None)
Bases:
objectExecution context passed to DSL operators.
This context object provides operators with access to the network, current selection state, and query parameters.
- graph
The underlying multilayer network object
- Type:
Any
- current_layers
Currently selected layers (None = all layers)
- Type:
List[str] | None
- current_nodes
Currently selected nodes (None = all nodes)
- Type:
List[Any] | None
- params
Query parameters (e.g., from Param() in builder API)
- Type:
Mapping[str, Any]
- current_layers: List[str] | None = None
- current_nodes: List[Any] | None = None
- graph: Any
- params: Mapping[str, Any] = None
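Example
A hedged sketch of a user-defined operator receiving this context via the @dsl_operator decorator mentioned above; the operator name and body are illustrative:
>>> from py3plex.dsl import dsl_operator
>>> @dsl_operator("selected_node_count")
... def selected_node_count(context, scale: float = 1.0):
...     # current_nodes is None when no node selection is active
...     selected = context.current_nodes or []
...     return len(selected) * scale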
- exception py3plex.dsl.DSLExecutionError
Bases:
ExceptionException raised for DSL execution errors.
- class py3plex.dsl.DSLOperator(name: str, func: Callable[[...], Any], description: str | None = None, category: str | None = None)
Bases:
objectMetadata for a registered DSL operator.
- name
Operator name (normalized)
- Type:
str
- func
Python callable implementing the operator
- Type:
Callable[[…], Any]
- description
Optional human-readable description
- Type:
str | None
- category
Optional category (e.g., “centrality”, “dynamics”, “io”)
- Type:
str | None
- category: str | None = None
- description: str | None = None
- func: Callable[[...], Any]
- name: str
- exception py3plex.dsl.DSLSyntaxError
Bases:
ExceptionException raised for DSL syntax errors.
- class py3plex.dsl.Diagnostic(code: str, severity: Literal['error', 'warning', 'info', 'hint'], message: str, span: Tuple[int, int], suggested_fix: SuggestedFix | None = None)
Bases:
objectA linting diagnostic (error, warning, info, or hint).
- code
Diagnostic code (e.g., “DSL001”, “PERF301”)
- Type:
str
- severity
Severity level
- Type:
Literal[‘error’, ‘warning’, ‘info’, ‘hint’]
- message
Human-readable message
- Type:
str
- span
Tuple of (start_index, end_index) in the query string
- Type:
Tuple[int, int]
- suggested_fix
Optional suggested fix
- Type:
- code: str
- message: str
- severity: Literal['error', 'warning', 'info', 'hint']
- span: Tuple[int, int]
- suggested_fix: SuggestedFix | None = None
- exception py3plex.dsl.DslError(message: str, query: str | None = None, line: int | None = None, column: int | None = None)
Bases:
ExceptionBase exception for all DSL errors.
- format_message() str
Format the error message with context.
- exception py3plex.dsl.DslExecutionError(message: str, query: str | None = None, line: int | None = None, column: int | None = None)
Bases:
DslErrorException raised for DSL execution errors.
- exception py3plex.dsl.DslMissingMetricError(metric: str, required_by: str | None = None, autocompute_enabled: bool = True, query: str | None = None, line: int | None = None, column: int | None = None)
Bases:
DslErrorException raised when a required metric is missing and cannot be autocomputed.
This error occurs when:
A query references a metric that hasn’t been computed
Autocompute is disabled or the metric is not autocomputable
The metric is required for an operation (e.g., top_k, where clause)
- metric
The missing metric name
- required_by
The operation that requires the metric
- autocompute_enabled
Whether autocompute was enabled
- exception py3plex.dsl.DslSyntaxError(message: str, query: str | None = None, line: int | None = None, column: int | None = None)
Bases:
DslErrorException raised for DSL syntax errors.
- class py3plex.dsl.DynamicsBuilder(process_name: str, **params)
Bases:
objectBuilder for DYNAMICS statements.
Example
>>> from py3plex.dsl import Q, L
>>>
>>> result = (
...     Q.dynamics("SIS", beta=0.3, mu=0.1)
...     .on_layers(L["contacts"] + L["travel"])
...     .seed(Q.nodes().where(degree__gt=10))
...     .parameters_per_layer({
...         "contacts": {"beta": 0.4},
...         "travel": {"beta": 0.2}
...     })
...     .run(steps=100, replicates=10)
...     .execute(network)
... )
- execute(network: Any) Any
Execute dynamics simulation.
- Parameters:
network – Multilayer network
- Returns:
DynamicsResult with simulation outputs
- on_layers(layer_expr: LayerExprBuilder) DynamicsBuilder
Filter by layers using layer algebra.
- Parameters:
layer_expr – Layer expression (e.g., L[“social”] + L[“work”])
- Returns:
Self for chaining
- parameters_per_layer(layer_params: Dict[str, Dict[str, Any]]) DynamicsBuilder
Set per-layer parameter overrides.
- Parameters:
layer_params – Dictionary mapping layer names to parameter dictionaries
- Returns:
Self for chaining
Example
>>> builder.parameters_per_layer({
...     "contacts": {"beta": 0.3},
...     "travel": {"beta": 0.1}
... })
- random_seed(seed: int) DynamicsBuilder
Set random seed for reproducibility.
- Parameters:
seed – Random seed
- Returns:
Self for chaining
- run(steps: int = 100, replicates: int = 1, track: str | List[str] | None = None) DynamicsBuilder
Set execution parameters.
- Parameters:
steps – Number of time steps to simulate
replicates – Number of independent runs
track – Measures to track (“all” or list of specific measures)
- Returns:
Self for chaining
- seed(query_or_fraction: float | QueryBuilder) DynamicsBuilder
Set initial seeding for the dynamics.
- Parameters:
query_or_fraction – Either a fraction (e.g., 0.01 for 1%) or a QueryBuilder for selecting specific nodes to seed
- Returns:
Self for chaining
Examples
>>> # Seed 1% randomly
>>> builder.seed(0.01)
>>> # Seed high-degree nodes
>>> builder.seed(Q.nodes().where(degree__gt=10))
- to(target: str) DynamicsBuilder
Set export target.
- Parameters:
target – Export format
- Returns:
Self for chaining
- to_ast() DynamicsStmt
Export as AST DynamicsStmt object.
- with_states(**state_mapping) DynamicsBuilder
Explicitly define state labels (optional).
- Parameters:
**state_mapping – State labels (e.g., S=”susceptible”, I=”infected”)
- Returns:
Self for chaining
Note
This is optional metadata and doesn’t affect execution, but helps with documentation and trajectory queries.
- class py3plex.dsl.DynamicsStmt(process_name: str, params: ~typing.Dict[str, ~typing.Any] = <factory>, layer_expr: ~py3plex.dsl.ast.LayerExpr | None = None, seed_query: ~py3plex.dsl.ast.SelectStmt | None = None, seed_fraction: float | None = None, layer_params: ~typing.Dict[str, ~typing.Dict[str, ~typing.Any]] = <factory>, steps: int = 100, replicates: int = 1, track: ~typing.List[str] = <factory>, seed: int | None = None, export_target: str | None = None)
Bases:
objectDYNAMICS statement for declarative process simulation.
- DSL Example:
DYNAMICS SIS WITH beta=0.3, mu=0.1
ON LAYER("contacts") + LAYER("travel")
SEED FROM nodes WHERE degree > 10
PARAMETERS PER LAYER contacts: {beta=0.4}, travel: {beta=0.2}
RUN FOR 100 STEPS, 10 REPLICATES
TRACK prevalence, incidence
- process_name
Name of the process (e.g., “SIS”, “SIR”, “RANDOM_WALK”)
- Type:
str
- params
Global process parameters (e.g., {“beta”: 0.3, “mu”: 0.1})
- Type:
Dict[str, Any]
- layer_expr
Optional layer expression for filtering
- Type:
py3plex.dsl.ast.LayerExpr | None
- seed_query
Optional SELECT query for seeding initial conditions
- Type:
py3plex.dsl.ast.SelectStmt | None
- seed_fraction
Optional fraction for random seeding (e.g., 0.01 for 1%)
- Type:
float | None
- layer_params
Optional per-layer parameter overrides
- Type:
Dict[str, Dict[str, Any]]
- steps
Number of simulation steps
- Type:
int
- replicates
Number of independent runs
- Type:
int
- track
List of measures to track (e.g., [“prevalence”, “incidence”])
- Type:
List[str]
- seed
Optional random seed for reproducibility
- Type:
int | None
- export_target
Optional export format
- Type:
str | None
- export_target: str | None = None
- layer_params: Dict[str, Dict[str, Any]]
- params: Dict[str, Any]
- process_name: str
- replicates: int = 1
- seed: int | None = None
- seed_fraction: float | None = None
- seed_query: SelectStmt | None = None
- steps: int = 100
- track: List[str]
- class py3plex.dsl.EdgeLayerConstraint(kind: str, src_layer: str | None = None, dst_layer: str | None = None, layer: str | None = None)
Bases:
objectLayer constraint for an edge.
- kind
Type of constraint (“within”, “between”, “any”)
- Type:
str
- src_layer
Source layer constraint (for “between”)
- Type:
str | None
- dst_layer
Destination layer constraint (for “between”)
- Type:
str | None
- layer
Layer constraint (for “within”)
- Type:
str | None
- static any_layer() EdgeLayerConstraint
Create constraint that accepts any edge.
- static between(src_layer: str, dst_layer: str) EdgeLayerConstraint
Create constraint for edges between two layers.
- dst_layer: str | None = None
- kind: str
- layer: str | None = None
- matches(src_layer: str, dst_layer: str) bool
Check if an edge satisfies this constraint.
- src_layer: str | None = None
- to_dict() Dict[str, Any]
Convert to dictionary for serialization.
- static within(layer: str) EdgeLayerConstraint
Create constraint for edges within a single layer.
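Example
A sketch of the expected semantics, assuming matches() compares the layers of both edge endpoints:
>>> from py3plex.dsl import EdgeLayerConstraint
>>> c = EdgeLayerConstraint.within("social")
>>> c.matches("social", "social")
True
>>> c.matches("social", "work")
False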
- class py3plex.dsl.EntityRef(entity_type: str, layer: str | None = None, attribute: str | None = None)
Bases:
objectReference to an entity (node/edge) in the schema.
- class py3plex.dsl.ExecutionPlan(steps: ~typing.List[~py3plex.dsl.ast.PlanStep] = <factory>, warnings: ~typing.List[str] = <factory>)
Bases:
objectExecution plan for EXPLAIN queries.
- steps
List of execution steps
- Type:
List[py3plex.dsl.ast.PlanStep]
- warnings
List of performance or correctness warnings
- Type:
List[str]
- warnings: List[str]
- class py3plex.dsl.ExplainResult(ast_summary: str, type_info: Dict[str, str], cost_estimate: str, diagnostics: List[Diagnostic], plan_steps: List[str])
Bases:
objectResult of an EXPLAIN query with linting information.
- ast_summary
Human-readable summary of the AST
- Type:
str
- type_info
Dictionary mapping node IDs to inferred types
- Type:
Dict[str, str]
- cost_estimate
Rough cost classification
- Type:
str
- diagnostics
List of diagnostics from linting
- Type:
- plan_steps
List of execution plan steps
- Type:
List[str]
- ast_summary: str
- cost_estimate: str
- diagnostics: List[Diagnostic]
- plan_steps: List[str]
- type_info: Dict[str, str]
- class py3plex.dsl.ExportSpec(path: str, fmt: str = 'csv', columns: ~typing.List[str] | None = None, options: ~typing.Dict[str, ~typing.Any] = <factory>)
Bases:
objectSpecification for exporting query results to a file.
Used to declaratively export results as part of the DSL pipeline.
- path
Output file path
- Type:
str
- fmt
Format type (‘csv’, ‘json’, ‘tsv’, etc.)
- Type:
str
- columns
Optional list of columns to include/order
- Type:
List[str] | None
- options
Additional format-specific options (e.g., delimiter, orient)
- Type:
Dict[str, Any]
Example
ExportSpec(path='results.csv', fmt='csv', columns=['node', 'score'])
ExportSpec(path='output.json', fmt='json', options={'orient': 'records'})
- columns: List[str] | None = None
- fmt: str = 'csv'
- options: Dict[str, Any]
- path: str
- class py3plex.dsl.ExportTarget(value)
Bases:
EnumExport target for query results.
- ARROW = 'arrow'
- NETWORKX = 'networkx'
- PANDAS = 'pandas'
- class py3plex.dsl.ExtendedQuery(kind: str, explain: bool = False, select: SelectStmt | None = None, compare: CompareStmt | None = None, nullmodel: NullModelStmt | None = None, path: PathStmt | None = None, dynamics: DynamicsStmt | None = None, trajectories: TrajectoriesStmt | None = None, dsl_version: str = '2.0')
Bases:
objectExtended query supporting multiple statement types.
This extends the basic Query to support COMPARE, NULLMODEL, PATH, DYNAMICS, and TRAJECTORIES statements in addition to SELECT statements.
- kind
Query type (“select”, “compare”, “nullmodel”, “path”, “dynamics”, “trajectories”)
- Type:
str
- explain
If True, return execution plan instead of results
- Type:
bool
- select
SELECT statement (if kind == “select”)
- Type:
py3plex.dsl.ast.SelectStmt | None
- compare
COMPARE statement (if kind == “compare”)
- Type:
py3plex.dsl.ast.CompareStmt | None
- nullmodel
NULLMODEL statement (if kind == “nullmodel”)
- Type:
- path
PATH statement (if kind == “path”)
- Type:
py3plex.dsl.ast.PathStmt | None
- dynamics
DYNAMICS statement (if kind == “dynamics”)
- Type:
py3plex.dsl.ast.DynamicsStmt | None
- trajectories
TRAJECTORIES statement (if kind == “trajectories”)
- Type:
- dsl_version
DSL version for compatibility
- Type:
str
- compare: CompareStmt | None = None
- dsl_version: str = '2.0'
- dynamics: DynamicsStmt | None = None
- explain: bool = False
- kind: str
- nullmodel: NullModelStmt | None = None
- select: SelectStmt | None = None
- trajectories: TrajectoriesStmt | None = None
- class py3plex.dsl.FieldExpression(field: str)
Bases:
objectRepresents a field reference that can be compared to values.
This class implements operator overloading to create comparison expressions that are then converted to AST ConditionExpr objects.
- _field
The field name being referenced
- class py3plex.dsl.FieldProxy
Bases:
objectProxy for creating field expressions via F.field_name syntax.
- This allows for intuitive syntax like:
F.degree > 5
F.layer == "social"
F.is_infected
- class py3plex.dsl.FunctionCall(name: str, args: ~typing.List[str | float | int | ~py3plex.dsl.ast.ParamRef] = <factory>)
Bases:
objectA function call in a condition.
- name
Function name (e.g., “reachable_from”)
- Type:
str
- args
List of arguments
- Type:
List[str | float | int | py3plex.dsl.ast.ParamRef]
- name: str
- exception py3plex.dsl.GroupingError(message: str, query: str | None = None, line: int | None = None, column: int | None = None)
Bases:
DslErrorException raised when a grouping operation is used incorrectly.
This error is raised when operations that require active grouping (like coverage) are called without proper grouping context.
- class py3plex.dsl.LayerConstraint(kind: str, value: str | Set[str] | Callable | None = None)
Bases:
objectLayer constraint for a node.
- kind
Type of constraint (“one”, “set”, “wildcard”, “predicate”)
- Type:
str
- value
Layer name, set of layer names, or predicate function
- Type:
str | Set[str] | Callable | None
- kind: str
- matches(layer: str) bool
Check if a layer satisfies this constraint.
- static one(layer: str) LayerConstraint
Create constraint for a specific layer.
- static set_of(layers: Set[str]) LayerConstraint
Create constraint for a set of layers.
- to_dict() Dict[str, Any]
Convert to dictionary for serialization.
- value: str | Set[str] | Callable | None = None
- static wildcard() LayerConstraint
Create wildcard constraint (any layer).
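Example
A sketch of the expected semantics for node-layer constraints:
>>> from py3plex.dsl import LayerConstraint
>>> LayerConstraint.one("social").matches("social")
True
>>> LayerConstraint.wildcard().matches("anything")
True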
- class py3plex.dsl.LayerExpr(terms: ~typing.List[~py3plex.dsl.ast.LayerTerm] = <factory>, ops: ~typing.List[str] = <factory>)
Bases:
objectLayer expression with optional algebra operations.
- Supports:
Union: LAYER(“a”) + LAYER(“b”)
Difference: LAYER(“a”) - LAYER(“b”)
Intersection: LAYER(“a”) & LAYER(“b”)
- terms
List of layer terms
- Type:
- ops
List of operators between terms (‘+’, ‘-’, ‘&’)
- Type:
List[str]
- get_layer_names() List[str]
Get all layer names referenced in this expression.
- ops: List[str]
- class py3plex.dsl.LayerExprBuilder(term: str)
Bases:
objectBuilder for layer expressions.
- Supports layer algebra:
Union: L[“social”] + L[“work”]
Difference: L[“social”] - L[“bots”]
Intersection: L[“social”] & L[“work”]
- class py3plex.dsl.LayerProxy
Bases:
objectProxy for creating layer expressions via L[“name”] syntax.
- Supports both simple layer names and advanced string expressions:
L[“social”] → single layer (backward compatible)
L[“social”, “work”] → union of layers (backward compatible)
L[”* - coupling”] → string expression with algebra (NEW)
L[“(ppi | gene) & disease”] → complex expression (NEW)
The proxy automatically detects whether to use the old LayerExprBuilder (for simple names) or the new LayerSet (for expressions with operators).
- static clear_groups() None
Clear all defined layer groups.
- static define(name: str, layer_expr: LayerExprBuilder | LayerSet) None
Define a named layer group for reuse.
- Parameters:
name – Group name
layer_expr – LayerExprBuilder or LayerSet to associate with the name
Example
>>> bio = L["ppi"] | L["gene"] | L["disease"]
>>> L.define("bio", bio)
>>>
>>> # Later use the group
>>> result = Q.nodes().from_layers(L["bio"]).execute(net)
- static list_groups() Dict[str, Any]
List all defined layer groups.
- Returns:
Dictionary mapping group names to layer expressions
- class py3plex.dsl.LayerSet(name_or_expr: str | LayerExpr)
Bases:
objectRepresents an unevaluated layer expression.
LayerSet objects are immutable and composable. They maintain an internal AST representation that is only evaluated when resolve() is called.
- expr
Internal expression AST
Example
>>> # Create from layer name
>>> social = LayerSet("social")
>>>
>>> # Set operations
>>> both = social | LayerSet("work")
>>> non_coupling = LayerSet("*") - LayerSet("coupling")
>>>
>>> # Parse from string
>>> layers = LayerSet.parse("* - coupling - transport")
>>>
>>> # Resolve to actual layer names
>>> active = layers.resolve(network)
>>> print(active)  # {'social', 'work', 'hobby'}
- static clear_groups() None
Clear all defined layer groups.
Useful for testing or resetting state.
- static define_group(name: str, layer_set: LayerSet) None
Define a named layer group for reuse.
- Parameters:
name – Group name
layer_set – LayerSet to associate with the name
Example
>>> bio = LayerSet.parse("ppi | gene | disease")
>>> LayerSet.define_group("bio", bio)
>>>
>>> # Later, reference the group
>>> layers = LayerSet("bio") & LayerSet("*")
- explain(network: Any | None = None) str
Generate human-readable explanation of the layer expression.
- Parameters:
network – Optional network to resolve against (shows actual layers)
- Returns:
Formatted explanation string
Example
>>> layers = LayerSet("*") - LayerSet("coupling")
>>> print(layers.explain())
LayerSet: difference( all_layers("*"), layer("coupling") )
- static list_groups() Dict[str, LayerSet]
List all defined layer groups.
- Returns:
Dictionary mapping group names to LayerSet objects
- static parse(expr_str: str) LayerSet
Parse a layer expression from string.
- Supports:
Layer names: “social”, “work”
Wildcard: “*”
Union: “social | work” or “social + work”
Intersection: “social & work”
Difference: “social - work”
Complement: “~social” (future)
Parentheses: “(social | work) & ~coupling”
Named groups: “bio” (if defined via define_group)
- Parameters:
expr_str – Expression string to parse
- Returns:
LayerSet object
- Raises:
DslSyntaxError – If expression is invalid
Example
>>> layers = LayerSet.parse("* - coupling - transport")
>>> layers = LayerSet.parse("(ppi | gene) & disease")
- resolve(network: Any, *, strict: bool = False, warn_empty: bool = True) Set[str]
Resolve the layer expression to a set of actual layer names.
This is where late evaluation happens. The expression is evaluated against the network’s actual layers.
- Parameters:
network – Multilayer network object
strict – If True, raise error for unknown layers (default: False)
warn_empty – If True, warn when result is empty (default: True)
- Returns:
Set of layer names (as strings)
- Raises:
UnknownLayerError – If strict=True and a referenced layer doesn’t exist
Example
>>> layers = LayerSet("social") | LayerSet("work")
>>> active = layers.resolve(network)
>>> print(active)  # {'social', 'work'}
- py3plex.dsl.LayerSetExpr
alias of LayerExpr
- class py3plex.dsl.LayerTerm(name: str)
Bases: object
A single layer reference in a layer expression.
- name
Layer name (e.g., "social", "work")
- Type:
str
- name: str
- class py3plex.dsl.MatchRow(bindings: ~typing.Dict[str, ~typing.Any] = <factory>, edge_bindings: ~typing.Dict[str, ~typing.Tuple[~typing.Any, ~typing.Any]] | None = None)
Bases: object
Represents a single match result.
- bindings
Dictionary mapping variable names to node IDs
- Type:
Dict[str, Any]
- edge_bindings
Optional dictionary mapping edge vars to edge tuples
- Type:
Dict[str, Tuple[Any, Any]] | None
- bindings: Dict[str, Any]
- edge_bindings: Dict[str, Tuple[Any, Any]] | None = None
- to_dict() Dict[str, Any]
Convert to dictionary.
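Example
A small sketch of constructing a match row by hand and serializing it; the bindings shown are illustrative and the exact to_dict() output shape may differ:
>>> row = MatchRow(bindings={"a": ("Alice", "social"), "b": ("Bob", "social")})
>>> row.to_dict()  # e.g. {'a': ('Alice', 'social'), 'b': ('Bob', 'social')}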
- class py3plex.dsl.N
Bases: object
NullModel factory for creating NullModelBuilder instances.
Example
>>> N.model("configuration").samples(100).seed(42)
- static configuration() NullModelBuilder
Create a configuration model builder.
- static edge_swap() NullModelBuilder
Create an edge swap model builder.
- static erdos_renyi() NullModelBuilder
Create an Erdős-Rényi model builder.
- static layer_shuffle() NullModelBuilder
Create a layer shuffle model builder.
- static model(model_type: str) NullModelBuilder
Create a null model builder.
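Example
The named constructors mirror N.model() with the corresponding model type; a minimal sketch, assuming network is a loaded multilayer network:
>>> result = N.configuration().samples(50).seed(1).execute(network)
>>> # equivalent spelling via the generic factory
>>> result = N.model("configuration").samples(50).seed(1).execute(network)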
- class py3plex.dsl.NetworkSchemaProvider(network: Any)
Bases: object
Schema provider backed by a py3plex multilayer network.
- get_attribute_type(entity_ref: EntityRef, attr: str) AttrType | None
Get attribute type by sampling nodes/edges.
- get_edge_count(layer: str | None = None) int
Get edge count.
- get_node_count(layer: str | None = None) int
Get node count.
- list_edge_types(layer: str | None = None) List[str]
Get list of edge types.
- list_layers() List[str]
Get list of all layers.
- list_node_types(layer: str | None = None) List[str]
Get list of node types.
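Example
A sketch of inspecting a network through this provider; net is assumed to be a loaded multilayer network and the printed values are illustrative:
>>> provider = NetworkSchemaProvider(net)
>>> provider.list_layers()  # e.g. ['social', 'work']
>>> provider.get_node_count("social")  # node count for one layer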
- class py3plex.dsl.NullModelBuilder(model_type: str)
Bases: object
Builder for NULLMODEL statements.
Example
>>> from py3plex.dsl import N, L
>>>
>>> result = (
...     N.model("configuration")
...     .on_layers(L["social"])
...     .with_params(preserve_degree=True)
...     .samples(100)
...     .seed(42)
...     .execute(network)
... )
- execute(network: Any) NullModelResult
Execute null model generation.
- Parameters:
network – Multilayer network
- Returns:
NullModelResult with generated samples
- on_layers(layer_expr: LayerExprBuilder) NullModelBuilder
Filter by layers using layer algebra.
- Parameters:
layer_expr – Layer expression
- Returns:
Self for chaining
- samples(n: int) NullModelBuilder
Set number of samples to generate.
- Parameters:
n – Number of samples
- Returns:
Self for chaining
- seed(seed: int) NullModelBuilder
Set random seed.
- Parameters:
seed – Random seed
- Returns:
Self for chaining
- to(target: str) NullModelBuilder
Set export target.
- Parameters:
target – Export format
- Returns:
Self for chaining
- to_ast() NullModelStmt
Export as AST NullModelStmt object.
- with_params(**params) NullModelBuilder
Set model parameters.
- Parameters:
**params – Model parameters
- Returns:
Self for chaining
- class py3plex.dsl.NullModelStmt(model_type: str, layer_expr: ~py3plex.dsl.ast.LayerExpr | None = None, params: ~typing.Dict[str, ~typing.Any] = <factory>, num_samples: int = 1, seed: int | None = None, export_target: str | None = None)
Bases: object
NULLMODEL statement for generating randomized networks.
- DSL Example:
NULLMODEL configuration ON LAYER("social") + LAYER("work") WITH preserve_degree=True, preserve_layer_sizes=True SAMPLES 100 SEED 42
- model_type
Type of null model (e.g., "configuration", "erdos_renyi", "layer_shuffle")
- Type:
str
- layer_expr
Optional layer expression for filtering
- Type:
py3plex.dsl.ast.LayerExpr | None
- params
Model parameters
- Type:
Dict[str, Any]
- num_samples
Number of samples to generate
- Type:
int
- seed
Optional random seed
- Type:
int | None
- export_target
Optional export format
- Type:
str | None
- export_target: str | None = None
- model_type: str
- num_samples: int = 1
- params: Dict[str, Any]
- seed: int | None = None
- class py3plex.dsl.OrderItem(key: str, desc: bool = False)
Bases: object
Ordering specification.
- key
Attribute or computed value to order by
- Type:
str
- desc
True for descending order, False for ascending
- Type:
bool
- desc: bool = False
- key: str
- class py3plex.dsl.P
Bases: object
Path factory for creating PathBuilder instances.
Example
>>> P.shortest("Alice", "Bob").crossing_layers()
>>> P.random_walk("Alice").with_params(steps=100, teleport=0.1)
- static all_paths(source: Any, target: Any) PathBuilder
Create an all-paths query builder.
- static flow(source: Any, target: Any) PathBuilder
Create a flow analysis query builder.
- static random_walk(source: Any) PathBuilder
Create a random walk query builder.
- static shortest(source: Any, target: Any) PathBuilder
Create a shortest path query builder.
- class py3plex.dsl.Param
Bases: object
Factory for parameter references.
Parameters are placeholders in queries that are bound at execution time.
Example
>>> q = Q.nodes().where(degree__gt=Param.int("k"))
>>> result = q.execute(network, k=5)
- class py3plex.dsl.ParamRef(name: str, type_hint: str | None = None)
Bases: object
Reference to a query parameter.
Parameters are placeholders in queries that are bound at execution time.
- name
Parameter name (e.g., "k" for :k in DSL)
- Type:
str
- type_hint
Optional type hint for validation
- Type:
str | None
- name: str
- type_hint: str | None = None
- exception py3plex.dsl.ParameterMissingError(parameter: str, provided_params: List[str] | None = None, query: str | None = None, line: int | None = None, column: int | None = None)
Bases: DslError
Exception raised when a required parameter is not provided.
- parameter
The missing parameter name
- provided_params
List of provided parameter names
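Example
A hedged sketch of triggering and handling this exception; the parameter name k follows the Param example above:
>>> from py3plex.dsl import Q, Param, ParameterMissingError
>>> q = Q.nodes().where(degree__gt=Param.int("k"))
>>> try:
...     q.execute(network)  # no binding supplied for k
... except ParameterMissingError as e:
...     print(e.parameter)  # 'k'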
- class py3plex.dsl.PathBuilder(path_type: str, source: Any, target: Any | None = None)
Bases: object
Builder for PATH statements.
Example
>>> from py3plex.dsl import P, L
>>>
>>> result = (
...     P.shortest("Alice", "Bob")
...     .on_layers(L["social"] + L["work"])
...     .crossing_layers()
...     .execute(network)
... )
- crossing_layers(allow: bool = True) PathBuilder
Allow or disallow cross-layer paths.
- Parameters:
allow – Whether to allow cross-layer paths
- Returns:
Self for chaining
- execute(network: Any) PathResult
Execute path query.
- Parameters:
network – Multilayer network
- Returns:
PathResult with found paths
- limit(n: int) PathBuilder
Limit number of results.
- Parameters:
n – Maximum number of results
- Returns:
Self for chaining
- on_layers(layer_expr: LayerExprBuilder) PathBuilder
Filter by layers using layer algebra.
- Parameters:
layer_expr – Layer expression
- Returns:
Self for chaining
- to(target: str) PathBuilder
Set export target.
- Parameters:
target – Export format
- Returns:
Self for chaining
- with_params(**params) PathBuilder
Set additional parameters.
- Parameters:
**params – Additional parameters
- Returns:
Self for chaining
- class py3plex.dsl.PathStmt(path_type: str, source: str | ~py3plex.dsl.ast.ParamRef, target: str | ~py3plex.dsl.ast.ParamRef | None = None, layer_expr: ~py3plex.dsl.ast.LayerExpr | None = None, cross_layer: bool = False, params: ~typing.Dict[str, ~typing.Any] = <factory>, limit: int | None = None, export_target: str | None = None)
Bases: object
PATH statement for path queries and flow analysis.
- DSL Example:
PATH SHORTEST FROM "Alice" TO "Bob" ON LAYER("social") + LAYER("work") CROSSING LAYERS LIMIT 10
- path_type
Type of path query ("shortest", "all", "random_walk", "flow")
- Type:
str
- source
Source node identifier
- Type:
str | py3plex.dsl.ast.ParamRef
- target
Optional target node identifier
- Type:
str | py3plex.dsl.ast.ParamRef | None
- layer_expr
Optional layer expression for filtering
- Type:
py3plex.dsl.ast.LayerExpr | None
- cross_layer
Whether to allow cross-layer paths
- Type:
bool
- params
Additional parameters (e.g., max_length, teleport probability)
- Type:
Dict[str, Any]
- limit
Optional limit on results
- Type:
int | None
- export_target
Optional export format
- Type:
str | None
- cross_layer: bool = False
- export_target: str | None = None
- limit: int | None = None
- params: Dict[str, Any]
- path_type: str
- class py3plex.dsl.PatternEdge(src: str, dst: str, directed: bool = False, etype: str | None = None, predicates: ~typing.List[~py3plex.dsl.patterns.ir.Predicate] = <factory>, layer_constraint: ~py3plex.dsl.patterns.ir.EdgeLayerConstraint | None = None)
Bases: object
Represents an edge between two node variables in a pattern.
- src
Source variable name
- Type:
str
- dst
Destination variable name
- Type:
str
- directed
Whether the edge is directed
- Type:
bool
- etype
Optional edge type/relation
- Type:
str | None
- predicates
List of predicates for filtering
- Type:
List[py3plex.dsl.patterns.ir.Predicate]
- layer_constraint
Optional layer constraint
- Type:
py3plex.dsl.patterns.ir.EdgeLayerConstraint | None
- directed: bool = False
- dst: str
- etype: str | None = None
- layer_constraint: EdgeLayerConstraint | None = None
- src: str
- to_dict() Dict[str, Any]
Convert to dictionary for serialization.
- class py3plex.dsl.PatternEdgeBuilder(parent: PatternQueryBuilder, src: str, dst: str, directed: bool = False, etype: str | None = None)
Bases: object
Builder for configuring a pattern edge.
This builder is returned by PatternQueryBuilder.edge() and provides chainable methods for specifying edge predicates and constraints.
Methods that don’t return self will return the parent QueryBuilder, allowing for seamless chaining back to the main pattern builder.
- any_layer() PatternQueryBuilder
Allow edge to be in any layer.
- Returns:
Parent PatternQueryBuilder for chaining
- between_layers(src_layer: str, dst_layer: str) PatternQueryBuilder
Constrain edge to be between two specific layers.
- Parameters:
src_layer – Source layer name
dst_layer – Destination layer name
- Returns:
Parent PatternQueryBuilder for chaining
- where(**kwargs) PatternQueryBuilder
Add predicates to the edge.
- Parameters:
**kwargs – Predicate specifications
- Returns:
Parent PatternQueryBuilder for chaining
- within_layer(layer: str) PatternQueryBuilder
Constrain edge to be within a single layer.
- Parameters:
layer – Layer name
- Returns:
Parent PatternQueryBuilder for chaining
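Example
A sketch combining the edge constraints above in one pattern; the layer names are illustrative, and the closing .returning() call follows the PatternQueryBuilder example:
>>> pq = (
...     Q.pattern()
...     .node("a")
...     .node("b")
...     .node("c")
...     .edge("a", "b").within_layer("social")
...     .edge("b", "c").between_layers("social", "work")
...     .returning("a", "b", "c")
... )
>>> matches = pq.execute(network)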
- class py3plex.dsl.PatternGraph(nodes: ~typing.Dict[str, ~py3plex.dsl.patterns.ir.PatternNode] = <factory>, edges: ~typing.List[~py3plex.dsl.patterns.ir.PatternEdge] = <factory>, constraints: ~typing.List[str] = <factory>, return_vars: ~typing.List[str] | None = None)
Bases: object
Represents a complete pattern query.
- nodes
Dictionary mapping variable names to PatternNode objects
- Type:
Dict[str, py3plex.dsl.patterns.ir.PatternNode]
- edges
List of PatternEdge objects
- Type:
List[py3plex.dsl.patterns.ir.PatternEdge]
- constraints
List of global constraints (e.g., all-different)
- Type:
List[str]
- return_vars
List of variables to return (defaults to all)
- Type:
List[str] | None
- add_constraint(constraint: str) None
Add a global constraint.
- add_edge(edge: PatternEdge) None
Add an edge to the pattern.
- add_node(node: PatternNode) None
Add a node to the pattern.
- constraints: List[str]
- edges: List[PatternEdge]
- get_return_vars() List[str]
Get the list of variables to return.
- nodes: Dict[str, PatternNode]
- return_vars: List[str] | None = None
- to_dict() Dict[str, Any]
Convert to dictionary for serialization.
- class py3plex.dsl.PatternNode(var: str, labels: ~typing.Set[str] | None = None, predicates: ~typing.List[~py3plex.dsl.patterns.ir.Predicate] = <factory>, layer_constraint: ~py3plex.dsl.patterns.ir.LayerConstraint | None = None)
Bases: object
Represents a node variable in a pattern.
- var
Variable name (e.g., "a", "b")
- Type:
str
- labels
Optional semantic labels (metadata only in v1)
- Type:
Set[str] | None
- predicates
List of predicates for filtering
- Type:
List[py3plex.dsl.patterns.ir.Predicate]
- layer_constraint
Optional layer constraint
- Type:
py3plex.dsl.patterns.ir.LayerConstraint | None
- labels: Set[str] | None = None
- layer_constraint: LayerConstraint | None = None
- to_dict() Dict[str, Any]
Convert to dictionary for serialization.
- var: str
- class py3plex.dsl.PatternNodeBuilder(parent: PatternQueryBuilder, var: str, labels: str | List[str] | None = None)
Bases: object
Builder for configuring a pattern node variable.
This builder is returned by PatternQueryBuilder.node() and provides chainable methods for specifying node predicates and constraints.
Methods that don’t return self will return the parent QueryBuilder, allowing for seamless chaining back to the main pattern builder.
- in_layers(layers: str | List[str]) PatternQueryBuilder
Specify layer constraint for the node.
- Parameters:
layers – Single layer name, list of layers, or "*" for wildcard
- Returns:
Parent PatternQueryBuilder for chaining
- label(*labels: str) PatternNodeBuilder
Add labels to the node.
- Parameters:
*labels – Label names
- Returns:
Self for chaining
- where(**kwargs) PatternQueryBuilder
Add predicates to the node.
- Supports the same predicate syntax as Q.nodes().where():
layer="social" → layer constraint
degree__gt=5 → degree > 5
any_attribute__op=value
- Parameters:
**kwargs – Predicate specifications
- Returns:
Parent PatternQueryBuilder for chaining
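Example
A sketch chaining in_layers() and where() on node variables; the variable names, layer, and threshold are illustrative:
>>> pq = (
...     Q.pattern()
...     .node("hub").in_layers("social")
...     .node("x").where(degree__gt=2)
...     .edge("hub", "x")
...     .returning("hub", "x")
... )
>>> matches = pq.execute(network)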
- class py3plex.dsl.PatternPlan(pattern: ~py3plex.dsl.patterns.ir.PatternGraph, root_var: str, join_order: ~typing.List[~py3plex.dsl.patterns.compiler.JoinStep] = <factory>, variable_plans: ~typing.Dict[str, ~py3plex.dsl.patterns.compiler.VariablePlan] = <factory>, estimated_complexity: int = -1)
Bases: object
Complete execution plan for a pattern.
- pattern
Original pattern graph
- root_var
Variable to start matching from
- Type:
str
- join_order
Sequence of join steps
- Type:
List[py3plex.dsl.patterns.compiler.JoinStep]
- variable_plans
Plans for each variable
- Type:
Dict[str, py3plex.dsl.patterns.compiler.VariablePlan]
- estimated_complexity
Rough complexity estimate
- Type:
int
- estimated_complexity: int = -1
- join_order: List[JoinStep]
- pattern: PatternGraph
- root_var: str
- to_dict() Dict[str, Any]
Convert to dictionary for serialization/display.
- variable_plans: Dict[str, VariablePlan]
- class py3plex.dsl.PatternQueryBuilder
Bases: object
Main builder for pattern queries.
Provides a fluent API for constructing pattern queries. The builder accumulates pattern elements (nodes, edges, constraints) and produces a PatternGraph IR object that can be compiled and executed.
Example
>>> pq = (
...     Q.pattern()
...     .node("a").where(degree__gt=3)
...     .node("b")
...     .edge("a", "b", directed=False)
...     .returning("a", "b")
... )
>>> matches = pq.execute(network)
- constraint(expr: str) PatternQueryBuilder
Add a global constraint.
- Currently supports:
"a != b" for all-different constraints
"all_distinct([a, b, c])" for multi-variable all-different
- Parameters:
expr – Constraint expression
- Returns:
Self for chaining
- edge(src: str, dst: str, directed: bool = False, etype: str | None = None) PatternEdgeBuilder
Add an edge between two node variables.
- Parameters:
src – Source variable name
dst – Destination variable name
directed – Whether the edge is directed
etype – Optional edge type
- Returns:
PatternEdgeBuilder for configuring the edge
- execute(network: Any, backend: str = 'native', max_matches: int | None = None, timeout: float | None = None) PatternQueryResult
Execute the pattern query on a network.
- Parameters:
network – Multilayer network object
backend – Execution backend (currently only "native" supported)
max_matches – Maximum number of matches (overrides .limit())
timeout – Optional timeout in seconds
- Returns:
PatternQueryResult with matches
- explain() Dict[str, Any]
Generate and return the compilation plan.
- Returns:
Dictionary with compilation plan details
- limit(n: int) PatternQueryBuilder
Limit the number of matches.
- Parameters:
n – Maximum number of matches
- Returns:
Self for chaining
- node(var: str, labels: str | List[str] | None = None) PatternNodeBuilder
Add a node variable to the pattern.
- Parameters:
var – Variable name (e.g., "a", "b")
labels – Optional semantic labels
- Returns:
PatternNodeBuilder for configuring the node
- order_by(key: str, desc: bool = False) PatternQueryBuilder
Order matches by a computed attribute (future enhancement).
- Parameters:
key – Attribute key for ordering
desc – Whether to sort descending
- Returns:
Self for chaining
- path(vars: List[str] | Tuple[str, ...], directed: bool = False, etype: str | None = None, length: int | None = None) PatternQueryBuilder
Add a path pattern.
Creates edges between consecutive variables in the list. For example, path(["a", "b", "c"]) creates edges a-b and b-c.
- Parameters:
vars – List of variable names representing the path
directed – Whether edges are directed
etype – Optional edge type for all edges
length – Optional length constraint (currently ignored, for future use)
- Returns:
Self for chaining
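Example
A sketch of a three-node path pattern; as described above, this creates edges a-b and b-c:
>>> pq = Q.pattern().path(["a", "b", "c"]).returning("a", "b", "c")
>>> matches = pq.execute(network)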
- returning(*vars: str) PatternQueryBuilder
Specify which variables to return in results.
- Parameters:
*vars – Variable names to return
- Returns:
Self for chaining
- triangle(a: str, b: str, c: str, directed: bool = False) PatternQueryBuilder
Add a triangle motif.
Creates edges a-b, b-c, and c-a.
- Parameters:
a – First variable name
b – Second variable name
c – Third variable name
directed – Whether edges are directed
- Returns:
Self for chaining
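Example
A sketch counting triangles over distinct nodes, combining triangle() with the constraint() syntax shown above:
>>> pq = (
...     Q.pattern()
...     .triangle("a", "b", "c")
...     .constraint("all_distinct([a, b, c])")
... )
>>> print(pq.execute(network).count)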
- class py3plex.dsl.PatternQueryResult(pattern: PatternGraph, matches: List[MatchRow], meta: Dict[str, Any] | None = None)
Bases: object
Result container for pattern matching queries.
Provides access to matches with multiple export formats and projections.
- pattern
Original pattern graph
- matches
List of MatchRow objects
- meta
Metadata about the query execution
- property count: int
Get number of matches.
- Returns:
Number of matches
- filter(predicate) PatternQueryResult
Filter matches using a predicate function.
- Parameters:
predicate – Function that takes a MatchRow and returns bool
- Returns:
New PatternQueryResult with filtered matches
- limit(n: int) PatternQueryResult
Limit the number of matches.
- Parameters:
n – Maximum number of matches
- Returns:
New PatternQueryResult with limited matches
- property rows: List[Dict[str, Any]]
Get matches as list of dictionaries.
- Returns:
List of dictionaries mapping variable names to node IDs
- to_edges(var_pairs: List[Tuple[str, str]] | None = None) List[Tuple[Any, Any]]
Extract edges from matches.
Infers edges from pairs of node variables in the pattern.
- Parameters:
var_pairs – Optional list of (src_var, dst_var) tuples to extract. If None, uses pattern edges.
- Returns:
List of (src_node, dst_node) tuples
- to_nodes(vars: List[str] | None = None, unique: bool = True) List[Any] | Set[Any]
Extract node IDs from matches.
- Parameters:
vars – Optional list of variables to include (defaults to all)
unique – If True, return unique nodes as a set
- Returns:
List or set of node IDs
- to_pandas(include_meta: bool = False)
Export matches to pandas DataFrame.
- Parameters:
include_meta – If True, include metadata columns
- Returns:
pandas.DataFrame with matches
- Raises:
ImportError – If pandas is not available
- to_subgraph(network: Any, per_match: bool = False) Any
Extract induced subgraph(s) from matches.
- Parameters:
network – Original network object
per_match – If True, return a list of subgraphs (one per match). If False, return a single subgraph with all matched nodes.
- Returns:
NetworkX graph or list of graphs
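Example
A sketch chaining the projections above; matches is assumed to be the result of a previous pattern query:
>>> nodes = matches.to_nodes(unique=True)  # set of matched node IDs
>>> edges = matches.to_edges()  # (src, dst) tuples inferred from pattern edges
>>> sub = matches.to_subgraph(network)  # one induced subgraph over all matches
>>> df = matches.to_pandas()  # one row per match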
- class py3plex.dsl.PlanStep(description: str, estimated_complexity: str | None = None)
Bases: object
A step in the execution plan.
- description
Human-readable description of the step
- Type:
str
- description: str
- estimated_complexity: str | None = None
- class py3plex.dsl.Predicate(attr: str, op: str, value: Any)
Bases: object
A predicate for filtering nodes or edges.
- attr
Attribute name (e.g., "degree", "weight")
- Type:
str
- op
Comparison operator (">", ">=", "<", "<=", "=", "!=")
- Type:
str
- value
Value to compare against
- Type:
Any
- attr: str
- op: str
- to_dict() Dict[str, Any]
Convert to dictionary for serialization.
- value: Any
- class py3plex.dsl.Q
Bases: object
Query factory for creating QueryBuilder instances.
Example
>>> Q.nodes().where(layer="social").compute("degree")
>>> Q.edges().where(intralayer=True)
>>> Q.nodes(autocompute=False).where(degree__gt=5)  # Disable autocompute
>>> Q.dynamics("SIS", beta=0.3).run(steps=100)  # Dynamics simulation
>>> Q.trajectories("sim_result").at(50)  # Query trajectories
- static dynamics(process_name: str, **params) DynamicsBuilder
Create a dynamics simulation builder.
- Parameters:
process_name – Name of the process (e.g., "SIS", "SIR", "RANDOM_WALK")
**params – Process parameters (e.g., beta=0.3, mu=0.1)
- Returns:
DynamicsBuilder for configuring and running simulations
Example
>>> sim = (
...     Q.dynamics("SIS", beta=0.3, mu=0.1)
...     .on_layers(L["contacts"])
...     .seed(0.01)
...     .run(steps=100, replicates=10, track="all")
...     .execute(network)
... )
- static edges(autocompute: bool = True) QueryBuilder
Create a query builder for edges.
- Parameters:
autocompute – Whether to automatically compute missing metrics (default: True)
- Returns:
QueryBuilder for edges
- static nodes(autocompute: bool = True) QueryBuilder
Create a query builder for nodes.
- Parameters:
autocompute – Whether to automatically compute missing metrics (default: True)
- Returns:
QueryBuilder for nodes
- static pattern() PatternQueryBuilder
Create a pattern matching query builder.
- Returns:
PatternQueryBuilder for constructing pattern queries
Example
>>> pq = (
...     Q.pattern()
...     .node("a").where(layer="social", degree__gt=3)
...     .node("b").where(layer="social")
...     .edge("a", "b", directed=False).where(weight__gt=0.2)
...     .returning("a", "b")
... )
>>> matches = pq.execute(network)
>>> df = matches.to_pandas()
- static trajectories(process_ref: str) TrajectoriesBuilder
Create a trajectories query builder.
- Parameters:
process_ref – Reference to a simulation result or process name
- Returns:
TrajectoriesBuilder for querying simulation outputs
Example
>>> result = (
...     Q.trajectories("sim_result")
...     .at(50)
...     .measure("peak_time", "final_state")
...     .execute(context)
... )
- class uncertainty
Bases: object
Global defaults for uncertainty estimation.
This class provides a way to configure default parameters for uncertainty estimation that will be used when uncertainty=True is passed to compute() but specific parameters are omitted.
Example
>>> from py3plex.dsl import Q
>>>
>>> # Set global defaults
>>> Q.uncertainty.defaults(
...     enabled=True,
...     n_boot=200,
...     ci=0.95,
...     bootstrap_unit="edges",
...     bootstrap_mode="resample",
...     random_state=42
... )
>>>
>>> # Now compute() will use these defaults
>>> Q.nodes().compute("degree", uncertainty=True).execute(net)
>>>
>>> # Reset to defaults
>>> Q.uncertainty.reset()
- classmethod defaults(**kwargs) None
Set global defaults for uncertainty estimation.
- Parameters:
enabled – Whether uncertainty is enabled by default (default: False)
n_boot – Number of bootstrap replicates (default: 50)
n_samples – Alias for n_boot (default: 50)
ci – Confidence interval level (default: 0.95)
bootstrap_unit – What to resample: "edges", "nodes", or "layers" (default: "edges")
bootstrap_mode – Resampling mode: "resample" or "permute" (default: "resample")
method – Uncertainty estimation method: "bootstrap", "perturbation", or "seed" (default: "bootstrap")
random_state – Random seed for reproducibility (default: None)
n_null – Number of null model replicates (default: 200)
null_model – Null model type: "degree_preserving", "erdos_renyi", or "configuration" (default: "degree_preserving")
Example
>>> Q.uncertainty.defaults(
...     enabled=True,
...     n_boot=500,
...     ci=0.95,
...     bootstrap_unit="edges"
... )
- classmethod get(key: str, default: Any | None = None) Any
Get a default value.
- Parameters:
key – Parameter name
default – Value to return if key not found
- Returns:
Default value for the parameter
- classmethod get_all() Dict[str, Any]
Get all current defaults as a dictionary.
- Returns:
Dictionary of all default values
- classmethod reset() None
Reset all defaults to their initial values.
- class py3plex.dsl.Query(explain: bool, select: SelectStmt, dsl_version: str = '2.0')
Bases: object
Top-level query representation.
- explain
If True, return execution plan instead of results
- Type:
bool
- select
The SELECT statement
- dsl_version
DSL version for compatibility
- Type:
str
- dsl_version: str = '2.0'
- explain: bool
- select: SelectStmt
- class py3plex.dsl.QueryBuilder(target: Target, autocompute: bool = True)
Bases: object
Chainable query builder.
Use Q.nodes() or Q.edges() to create a builder, then chain methods to construct the query.
- after(t: float) QueryBuilder
Add temporal constraint for edges/nodes after a specific time.
Convenience method equivalent to .during(t, None). Filters to only include edges/nodes active after (and at) time t.
- Parameters:
t – Lower bound timestamp (inclusive)
- Returns:
Self for chaining
Examples
>>> # Get all edges after time 100
>>> Q.edges().after(100.0).execute(network)
>>> # Nodes active after 2024-01-01
>>> Q.nodes().after(1704067200.0).execute(network)
- aggregate(**aggregations) QueryBuilder
Aggregate columns with support for lambdas and builtin functions.
This method computes aggregations over the result set. It supports:
- Built-in aggregation functions: mean(), sum(), min(), max(), std(), count()
- Direct attribute references for last/first value
- Lambda functions for custom aggregations
The aggregations are computed after grouping if active, otherwise globally.
- Parameters:
**aggregations – Named aggregations where the key is the output column name and the value is either:
A string like "mean(degree)" or "sum(weight)"
A string attribute name (gets the value directly)
A lambda function receiving each item
- Returns:
Self for chaining
Example
>>> Q.nodes().per_layer().aggregate(
...     avg_degree="mean(degree)",
...     max_bc="max(betweenness_centrality)",
...     node_count="count()",
...     layer_name="layer"  # Direct attribute
... )
>>> # With lambda
>>> Q.nodes().aggregate(
...     community_size=lambda n: network.community_sizes[network.get_partition(n)]
... )
- arrange(*columns: str, desc: bool = False) QueryBuilder
Sort results by specified columns (dplyr-style alias for order_by).
This is a convenience method that provides dplyr-style syntax. Columns can be prefixed with "-" to indicate descending order.
- Parameters:
*columns – Column names to sort by (prefix with "-" for descending)
desc – Default sort direction (only used if column has no prefix)
- Returns:
Self for chaining
Example
>>> Q.nodes().compute("degree").arrange("degree")  # ascending
>>> Q.nodes().compute("degree").arrange("-degree")  # descending
>>> Q.nodes().compute("degree", "betweenness").arrange("degree", "-betweenness")
- at(t: float) QueryBuilder
Add temporal snapshot constraint (AT clause).
Filters edges to only those active at a specific point in time. For point-in-time edges (with a 't' attribute), includes edges where t_edge == t. For interval edges (with 't_start', 't_end'), includes edges where t is in [t_start, t_end].
- Parameters:
t – Timestamp for snapshot
- Returns:
Self for chaining
Examples
>>> # Snapshot at specific time
>>> Q.edges().at(150.0).execute(network)
- before(t: float) QueryBuilder
Add temporal constraint for edges/nodes before a specific time.
Convenience method equivalent to .during(None, t). Filters to only include edges/nodes active before (and at) time t.
- Parameters:
t – Upper bound timestamp (inclusive)
- Returns:
Self for chaining
Examples
>>> # Get all edges before time 100
>>> Q.edges().before(100.0).execute(network)
>>> # Nodes active before 2024-01-01
>>> Q.nodes().before(1704067200.0).execute(network)
- centrality(*metrics: str, **aliases: str) QueryBuilder
Compute centrality metrics (convenience wrapper for compute).
This is a domain-specific convenience method for computing common centrality measures. It’s equivalent to calling compute() with the metric names.
- Supported metrics:
degree
betweenness (or betweenness_centrality)
closeness (or closeness_centrality)
eigenvector (or eigenvector_centrality)
pagerank
clustering (or clustering_coefficient)
- Parameters:
*metrics – Centrality metric names
**aliases – Optional aliases for metrics (alias=metric_name)
- Returns:
Self for chaining
Example
>>> Q.nodes().centrality("degree", "betweenness", "pagerank")
>>> Q.nodes().centrality("degree", bc="betweenness_centrality")
- compute(*measures: str, alias: str | None = None, aliases: Dict[str, str] | None = None, uncertainty: bool | None = None, method: str | None = None, n_samples: int | None = None, ci: float | None = None, bootstrap_unit: str | None = None, bootstrap_mode: str | None = None, n_boot: int | None = None, n_null: int | None = None, null_model: str | None = None, random_state: int | None = None) QueryBuilder
Add measures to compute with optional uncertainty estimation.
- Parameters:
*measures – Measure names to compute
alias – Alias for single measure
aliases – Dictionary mapping measure names to aliases
uncertainty – Whether to compute uncertainty for these measures. If None, uses Q.uncertainty defaults or the global uncertainty context.
method – Uncertainty estimation method ('bootstrap', 'perturbation', 'seed', 'null_model')
n_samples – Number of samples for uncertainty estimation (default: from Q.uncertainty.defaults)
ci – Confidence interval level (default: from Q.uncertainty.defaults)
bootstrap_unit – What to resample: "edges", "nodes", or "layers" (default: from Q.uncertainty.defaults)
bootstrap_mode – Resampling mode: "resample" or "permute" (default: from Q.uncertainty.defaults)
n_boot – Alias for n_samples (for bootstrap)
n_null – Number of null model replicates (default: from Q.uncertainty.defaults)
null_model – Null model type: "degree_preserving", "erdos_renyi", or "configuration" (default: from Q.uncertainty.defaults)
random_state – Random seed for reproducibility (default: from Q.uncertainty.defaults)
- Returns:
Self for chaining
Example
>>> # Without uncertainty
>>> Q.nodes().compute("degree", "betweenness_centrality")
>>> # With uncertainty using explicit parameters
>>> Q.nodes().compute(
...     "degree", "betweenness_centrality",
...     uncertainty=True,
...     method="bootstrap",
...     n_samples=500,
...     ci=0.95
... )
>>> # With uncertainty using global defaults
>>> Q.uncertainty.defaults(n_boot=500, ci=0.95)
>>> Q.nodes().compute("degree", uncertainty=True)
- coverage(mode: str = 'all', k: int | None = None, threshold: int | None = None, p: float | None = None, group: str | None = None, id_field: str = 'id') QueryBuilder
Configure coverage filtering across groups.
Coverage determines which items appear in the final result based on how many groups they appear in after grouping and top_k filtering.
- Parameters:
mode – Coverage mode:
"all": Keep items that appear in ALL groups
"any": Keep items that appear in AT LEAST ONE group
"at_least": Keep items that appear in at least k groups (requires the k/threshold parameter)
"exact": Keep items that appear in exactly k groups (requires the k/threshold parameter)
"fraction": Keep items that appear in at least a fraction p (0-1) of groups (requires the p parameter)
k – Threshold for "at_least" or "exact" modes
threshold – Alias for the k parameter
p – Fraction threshold (0.0-1.0) for "fraction" mode; e.g., p=0.67 means at least 67% of groups
group – Group attribute for coverage (defaults to primary grouping context)
id_field – Field to use for identity matching (default: "id" for nodes)
- Returns:
Self for chaining
- Raises:
ValueError – If mode is invalid or required parameters are missing
ValueError – If called without prior grouping
Example
>>> # Nodes that are top-5 hubs in ALL layers
>>> Q.nodes().per_layer().top_k(5, "betweenness").coverage(mode="all")
>>> # Nodes that are top-5 in at least 2 layers
>>> Q.nodes().per_layer().top_k(5, "degree").coverage(mode="at_least", k=2)
>>> # Or equivalently:
>>> Q.nodes().per_layer().top_k(5, "degree").coverage(mode="at_least", threshold=2)
>>> # Nodes in top-10 in at least 70% of layers (0.7 fraction)
>>> Q.nodes().per_layer().top_k(10, "degree").coverage(mode="fraction", p=0.7)
- distinct(*columns: str) QueryBuilder
Return unique rows based on specified columns.
If columns are specified, deduplicates based on those columns only. If no columns are specified, deduplicates based on all columns.
- Parameters:
*columns – Optional column names to use for uniqueness check
- Returns:
Self for chaining
Example
>>> # Unique (node, layer) pairs
>>> Q.nodes().distinct()
>>> # Unique communities per layer
>>> Q.nodes().distinct("community", "layer")
- drop(*columns: str) QueryBuilder
Remove specified columns from the result.
This operation filters out the specified columns from the output. Complementary to select() - use drop() when it’s easier to specify what to remove rather than what to keep.
- Parameters:
*columns – Column names to remove from the result
- Returns:
Self for chaining
Example
>>> Q.nodes().compute("degree", "betweenness", "closeness").drop("closeness")
- during(t0: float | None = None, t1: float | None = None) QueryBuilder
Add temporal range constraint (DURING clause).
Filters edges to only those active during a time range [t0, t1]. For point-in-time edges, includes edges where t is in [t0, t1]. For interval edges, includes edges where the interval overlaps [t0, t1].
- Parameters:
t0 – Start of time range (None means -infinity)
t1 – End of time range (None means +infinity)
- Returns:
Self for chaining
Examples
>>> # Time range query
>>> Q.edges().during(100.0, 200.0).execute(network)
>>> # Open-ended ranges
>>> Q.edges().during(100.0, None).execute(network)  # From 100 onwards
>>> Q.edges().during(None, 200.0).execute(network)  # Up to 200
- end_grouping() QueryBuilder
Marker for the end of grouping configuration.
This is purely for API readability and has no effect on execution. It helps visually separate grouping operations from post-grouping operations.
- Returns:
Self for chaining
Example
>>> (Q.nodes()
...  .per_layer()
...  .top_k(5, "degree")
...  .end_grouping()
...  .coverage(mode="all"))
- execute(network: Any, **params) QueryResult
Execute the query.
- Parameters:
network – Multilayer network object
**params – Parameter bindings
- Returns:
QueryResult with results and metadata
- explain() ExplainQuery
Create EXPLAIN query for execution plan.
- Returns:
ExplainQuery that can be executed to get the plan
- export(path: str, fmt: str = 'csv', columns: List[str] | None = None, **options) QueryBuilder
Attach a file export specification to the query.
This adds a side-effect to write query results to a file when executed. The query will still return the QueryResult as normal.
- Parameters:
path – Output file path (string)
fmt – Format type ('csv', 'json', 'tsv')
columns – Optional list of column names to include/order
**options – Format-specific options (e.g., delimiter=';', orient='records')
- Returns:
Self for chaining
- Raises:
ValueError – If format is not supported
Example
>>> q = (
...     Q.nodes()
...     .compute("degree")
...     .export("results.csv", fmt="csv", columns=["node", "degree"])
... )
- export_csv(path: str, columns: List[str] | None = None, delimiter: str = ',', **options) QueryBuilder
Export query results to CSV file.
Convenience wrapper around .export() for CSV format.
- Parameters:
path – Output CSV file path
columns – Optional list of columns to include/order
delimiter – CSV delimiter (default: ',')
**options – Additional CSV-specific options
- Returns:
Self for chaining
- export_json(path: str, columns: List[str] | None = None, orient: str = 'records', **options) QueryBuilder
Export query results to JSON file.
Convenience wrapper around .export() for JSON format.
- Parameters:
path – Output JSON file path
columns – Optional list of columns to include/order
orient – JSON orientation ('records', 'split', 'index', 'columns', 'values')
**options – Additional JSON-specific options
- Returns:
Self for chaining
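Example
A sketch of both convenience exporters; the file paths are illustrative, and each query still returns its QueryResult:
>>> (Q.nodes()
...  .compute("degree")
...  .export_csv("degrees.csv", columns=["node", "degree"], delimiter=";")
...  .execute(network))
>>> (Q.edges()
...  .export_json("edges.json", orient="records")
...  .execute(network))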
- from_layers(layer_expr: LayerExprBuilder | LayerSet) QueryBuilder
Filter by layers using layer algebra.
Supports both LayerExprBuilder (backward compatible) and LayerSet (new).
- Parameters:
layer_expr – Layer expression (e.g., L["social"] + L["work"] or L["* - coupling"])
- Returns:
Self for chaining
Example
>>> # Old style (still works)
>>> Q.nodes().from_layers(L["social"] + L["work"])
>>>
>>> # New style with string expressions
>>> Q.nodes().from_layers(L["* - coupling"])
>>> Q.nodes().from_layers(L["(ppi | gene) & disease"])
- group_by(*fields: str) QueryBuilder
Group result items by given fields.
This is the low-level grouping primitive used by per_layer(). Once grouping is established, you can apply per-group operations like top_k().
- Parameters:
*fields – Attribute names to group by (e.g., "layer")
- Returns:
Self for chaining
Example
>>> Q.nodes().group_by("layer").top_k(5, "degree")
- has_community(predicate) QueryBuilder
Filter nodes based on a community-related predicate.
This method filters nodes based on their community membership or community-related attributes. The predicate can be:
- A callable: called with each node tuple, should return bool
- A value: direct equality check against the "community" attribute
- Parameters:
predicate – Either a callable(node_tuple) -> bool or a value to match
- Returns:
Self for chaining
Example
>>> # Filter by community ID
>>> Q.nodes().has_community(3)
>>> # Filter by custom predicate
>>> Q.nodes().has_community(
...     lambda n: network.get_node_attribute(n, "disease_enriched") is True
... )
- limit(n: int) QueryBuilder
Limit number of results.
- Parameters:
n – Maximum number of results
- Returns:
Self for chaining
- node_type(node_type: str) QueryBuilder
Filter nodes by node_type attribute.
This is a convenience method that adds a WHERE condition filtering by the "node_type" attribute. Equivalent to .where(node_type=node_type).
- Parameters:
node_type – Node type to filter by (e.g., "gene", "protein", "drug")
- Returns:
Self for chaining
Example
>>> Q.nodes().node_type("gene").compute("degree")
- order_by(*keys: str, desc: bool = False) QueryBuilder
Add ORDER BY clause.
- Parameters:
*keys – Attribute names to order by (prefix with "-" for descending)
desc – Default sort direction
- Returns:
Self for chaining
- per_community() QueryBuilder
Group results by community (sugar for group_by("community")).
Similar to per_layer(), but groups by community attribute. Useful after community detection has been run and community assignments are stored in node attributes.
- Returns:
Self for chaining
Example
>>> # Find top nodes per community
>>> Q.nodes().per_community().top_k(5, "betweenness_centrality")
- per_layer() QueryBuilder
Group results by layer (sugar for group_by("layer")).
This is the most common grouping operation for multilayer queries. After calling this, you can apply per-layer operations like top_k().
Note: Only valid for node queries. For edge queries, use per_layer_pair().
- Returns:
Self for chaining
- Raises:
DslExecutionError – If called on an edge query
Example
>>> Q.nodes().per_layer().top_k(5, "betweenness_centrality")
- per_layer_pair() QueryBuilder
Group edge results by (src_layer, dst_layer) pair.
This is the grouping operation for edge queries in multilayer networks. After calling this, you can apply per-layer-pair operations like top_k().
Note: Only valid for edge queries. For node queries, use per_layer().
- Returns:
Self for chaining
- Raises:
DslExecutionError – If called on a node query
Example
>>> Q.edges().per_layer_pair().top_k(5, "edge_betweenness_centrality")
- rank_by(attr: str, method: str = 'dense') QueryBuilder
Add rank column based on specified attribute.
Computes ranks within the current grouping context. If grouping is active, ranks are computed per group. Otherwise, ranks are global.
The rank column will be named "{attr}_rank".
- Parameters:
attr – Attribute to rank by
method – Ranking method: "dense", "min", "max", "average", or "first" (follows pandas.Series.rank semantics)
- Returns:
Self for chaining
Example
>>> # Global ranking
>>> Q.nodes().compute("degree").rank_by("degree")
>>> # Per-layer ranking
>>> Q.nodes().compute("degree").per_layer().rank_by("degree", "dense")
- rename(**mapping: str) QueryBuilder
Rename columns in the result.
Provide keyword arguments where the key is the new name and the value is the old column name.
- Parameters:
**mapping – Mapping from new names to old names (new=old)
- Returns:
Self for chaining
Example
>>> Q.nodes().compute("degree", "betweenness_centrality").rename(
...     deg="degree", bc="betweenness_centrality"
... )
- select(*columns: str) QueryBuilder
Keep only specified columns in the result.
This operation filters the output columns, keeping only the ones specified. Useful for reducing result size and focusing on specific attributes.
- Parameters:
*columns – Column names to keep in the result
- Returns:
Self for chaining
Example
>>> Q.nodes().compute("degree", "betweenness_centrality").select("id", "degree")
- sort(by: str, descending: bool = False) QueryBuilder
Sort results by a column (convenience alias for order_by).
This provides a more intuitive API matching common data analysis patterns (e.g., pandas DataFrame.sort_values).
- Parameters:
by – Column name to sort by
descending – If True, sort in descending order (default: False)
- Returns:
Self for chaining
Example
>>> Q.nodes().compute("degree").sort(by="degree", descending=True)
- summarize(**aggregations: str) QueryBuilder
Aggregate over the current grouping context.
Computes summary statistics per group when grouping is active, or globally if no grouping is set. Aggregation expressions are strings like "mean(degree)", "max(degree)", "n()".
- Supported aggregations:
n() : count of items
mean(attr) : mean value
sum(attr) : sum of values
min(attr) : minimum value
max(attr) : maximum value
std(attr) : standard deviation
var(attr) : variance
- Parameters:
**aggregations – Named aggregations (name=expression)
- Returns:
Self for chaining
- Raises:
ValueError – If aggregation expression is invalid
Example
>>> Q.nodes().from_layers(L["*"]).compute("degree").per_layer().summarize(
...     mean_degree="mean(degree)",
...     max_degree="max(degree)",
...     n="n()"
... )
- to(target: str) QueryBuilder
Set export target.
- Parameters:
target – Export format ('pandas', 'networkx', 'arrow')
- Returns:
Self for chaining
- to_dsl() str
Export as DSL string.
- Returns:
DSL query string
- top_k(k: int, key: str | None = None) QueryBuilder
Keep the top-k items per group, ordered by the given key.
Requires that group_by() or per_layer() has been called first.
- Parameters:
k – Number of items to keep per group
key – Attribute/measure to sort by (descending). If None, uses existing order_by.
- Returns:
Self for chaining
- Raises:
ValueError – If called without prior grouping
Example
>>> Q.nodes().per_layer().top_k(5, "betweenness_centrality")
- uncertainty(method: str | None = 'perturbation', n_samples: int | None = 50, ci: float | None = 0.95, seed: int | None = None, **kwargs) QueryBuilder
Alias for uq() - set query-scoped uncertainty configuration.
See uq() for full documentation.
- uq(method: str | None = 'perturbation', n_samples: int | None = 50, ci: float | None = 0.95, seed: int | None = None, **kwargs) QueryBuilder
Set query-scoped uncertainty quantification configuration.
This method establishes uncertainty defaults for all metrics computed in this query, unless overridden on a per-metric basis in compute().
- Parameters:
method – Uncertainty estimation method ('bootstrap', 'perturbation', 'seed', 'null_model'). Pass None to disable query-level uncertainty.
n_samples – Number of samples for uncertainty estimation (default: 50)
ci – Confidence interval level (default: 0.95 for 95% CI)
seed – Random seed for reproducibility (default: None)
**kwargs – Additional method-specific parameters (e.g., bootstrap_unit='edges', bootstrap_mode='resample', null_model='configuration')
- Returns:
Self for chaining
Example
>>> # Set uncertainty defaults for the query
>>> (Q.nodes()
...  .uq(method="perturbation", n_samples=100, ci=0.95, seed=42)
...  .compute("betweenness_centrality")
...  .where(betweenness_centrality__mean__gt=0.1)
...  .execute(net))
>>> # Use a UQ profile (see UQ class for presets)
>>> (Q.nodes()
...  .uq(UQ.fast(seed=7))
...  .compute("degree")
...  .execute(net))
>>> # Disable query-level uncertainty
>>> Q.nodes().uq(method=None).compute("degree").execute(net)
- where(*args, **kwargs) QueryBuilder
Add WHERE conditions.
Supports two styles:
- Keyword arguments:
layer="social" → equality
degree__gt=5 → comparison (gt, ge, lt, le, eq, ne)
intralayer=True → intralayer predicate
interlayer=("social", "work") → interlayer predicate
- Expression objects (using F):
where(F.degree > 5)
where((F.degree > 5) & (F.layer == "social"))
where((F.degree > 10) | (F.clustering < 0.5))
- Can mix both styles:
where(F.degree > 5, layer="social")
- Parameters:
*args – BooleanExpression objects from F
**kwargs – Conditions as keyword arguments
- Returns:
Self for chaining
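Example
A sketch mixing both styles; importing F from py3plex.dsl alongside Q is an assumption here:
>>> from py3plex.dsl import Q, F
>>> Q.nodes().where(F.degree > 5, layer="social").execute(network)
>>> Q.edges().where(intralayer=True).execute(network)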
- window(window_size: float | str, step: float | str | None = None, start: float | None = None, end: float | None = None, aggregation: str = 'list') QueryBuilder
Add sliding window specification for temporal analysis.
Enables queries that operate over sliding time windows, useful for streaming algorithms and temporal pattern analysis.
- Parameters:
window_size – Size of each window. Can be numeric (treated as timestamp units) or a duration string like "7d", "1h", "30m"
step – Step size between windows (defaults to window_size for non-overlapping). Same format as window_size.
start – Optional start time for windowing (defaults to network’s first timestamp)
end – Optional end time for windowing (defaults to network’s last timestamp)
aggregation – How to aggregate results across windows: "list" returns a list of per-window results, "concat" concatenates DataFrames, "avg" averages numeric columns
- Returns:
Self for chaining
Examples
>>> # Non-overlapping windows of size 100
>>> Q.nodes().compute("degree").window(100.0).execute(tnet)
>>> # Overlapping windows: size 100, step 50
>>> Q.nodes().compute("degree").window(100.0, step=50.0).execute(tnet)
>>> # Duration strings (for datetime timestamps)
>>> Q.edges().window("7d", step="1d").execute(tnet)
Note
Window queries require a TemporalMultiLayerNetwork instance. For regular multi_layer_network, an error will be raised.
- zscore(*attrs: str) QueryBuilder
Compute z-scores for specified attributes.
For each attribute, computes the z-score (standardized value) within the current grouping context. If grouping is active, z-scores are computed per group. Otherwise, they are global.
Creates new columns named "{attr}_zscore".
- Parameters:
*attrs – Attribute names to compute z-scores for
- Returns:
Self for chaining
Example
>>> # Global z-scores
>>> Q.nodes().compute("degree", "betweenness").zscore("degree", "betweenness")
>>> # Per-layer z-scores
>>> Q.nodes().compute("degree").per_layer().zscore("degree")
- class py3plex.dsl.QueryResult(target: str, items: List[Any], attributes: Dict[str, List[Any] | Dict[Any, Any]] | None = None, meta: Dict[str, Any] | None = None, computed_metrics: set | None = None)
Bases: object
Rich result object from DSL query execution.
Provides access to query results with multiple export formats and execution metadata.
- target
'nodes' or 'edges'
- items
Sequence of node/edge identifiers
- attributes
Dictionary of computed attributes (column -> values or dict)
- meta
Metadata about the query execution
- computed_metrics
Set of metrics that were computed during query execution
- property count: int
Get number of items in result.
- property edges: List[Any]
Get edges (raises if target is not 'edges').
- group_summary()
Return a summary DataFrame with one row per group.
Returns a pandas DataFrame containing:
- Grouping key columns (e.g., "layer", "src_layer", "dst_layer")
- n_items: number of items (nodes/edges) in each group
- Any group-level coverage metrics if available
This method only uses information already present in the result and does not recompute expensive measures.
- Returns:
pandas.DataFrame with one row per group
- Raises:
ImportError – If pandas is not available
ValueError – If result does not have grouping metadata
- property nodes: List[Any]
Get nodes (raises if target is not 'nodes').
- to_arrow()
Export results to Apache Arrow table.
- Returns:
pyarrow.Table with items and computed attributes
- Raises:
ImportError – If pyarrow is not available
- to_dict() Dict[str, Any]
Export results as a dictionary.
- Returns:
Dictionary with target, items, attributes, and metadata
- to_networkx(network: Any | None = None)
Export results to NetworkX graph.
For node queries: returns a subgraph containing the selected nodes.
For edge queries: returns a subgraph containing the selected edges and their endpoints.
- Parameters:
network – Optional source network to extract subgraph from
- Returns:
networkx.Graph subgraph containing result items
- Raises:
ImportError – If networkx is not available
- to_pandas(multiindex: bool = False, include_grouping: bool = True, expand_uncertainty: bool = False)
Export results to pandas DataFrame.
For node queries: returns a DataFrame with an 'id' column plus computed attributes.
For edge queries: returns a DataFrame with 'source', 'target', 'source_layer', 'target_layer', 'weight' columns plus computed attributes.
- Parameters:
multiindex – If True and grouping metadata is present, set the DataFrame index to the grouping keys (e.g., ["layer"] or ["src_layer", "dst_layer"])
include_grouping – If True and grouping metadata is present, ensure grouping key columns are included in the DataFrame
expand_uncertainty – If True, expand uncertainty metrics into multiple columns: metric (point estimate/mean), metric_std (standard deviation), metric_ci95_low (95% CI lower bound), metric_ci95_high (95% CI upper bound), metric_ci95_width (CI width)
- Returns:
pandas.DataFrame with items and computed attributes
- Raises:
ImportError – If pandas is not available
Example
>>> result = Q.nodes().uq(UQ.fast()).compute("degree").execute(net)
>>> df = result.to_pandas(expand_uncertainty=True)
>>> # df now has columns: id, layer, degree, degree_std,
>>> # degree_ci95_low, degree_ci95_high, degree_ci95_width
- class py3plex.dsl.SchemaProvider(*args, **kwargs)
Bases: Protocol
Protocol for schema providers.
Schema providers allow the linter to query information about available layers, node/edge types, and attributes in a network.
- get_attribute_type(entity_ref: EntityRef, attr: str) AttrType | None
Get the type of an attribute for a given entity.
- Parameters:
entity_ref – Reference to the entity (node/edge + layer)
attr – Attribute name
- Returns:
Attribute type or None if unknown
- get_edge_count(layer: str | None = None) int
Get approximate edge count, optionally for a specific layer.
- get_node_count(layer: str | None = None) int
Get approximate node count, optionally for a specific layer.
- list_edge_types(layer: str | None = None) List[str]
Get list of edge types, optionally filtered by layer.
- list_layers() List[str]
Get list of all layers in the network.
- list_node_types(layer: str | None = None) List[str]
Get list of node types, optionally filtered by layer.
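Example
A minimal sketch of a custom provider satisfying this protocol; because it is a Protocol, structural typing suffices, and every value returned below is an illustrative stand-in:
>>> class StaticSchemaProvider:
...     # Toy provider with fixed answers, e.g. for linting without a network
...     def list_layers(self):
...         return ["social", "work"]
...     def list_node_types(self, layer=None):
...         return ["person"]
...     def list_edge_types(self, layer=None):
...         return ["knows"]
...     def get_node_count(self, layer=None):
...         return 100
...     def get_edge_count(self, layer=None):
...         return 250
...     def get_attribute_type(self, entity_ref, attr):
...         return None  # attribute type unknown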
- class py3plex.dsl.SelectStmt(target: ~py3plex.dsl.ast.Target, layer_expr: ~py3plex.dsl.ast.LayerExpr | None = None, layer_set: ~typing.Any | None = None, where: ~py3plex.dsl.ast.ConditionExpr | None = None, compute: ~typing.List[~py3plex.dsl.ast.ComputeItem] = <factory>, order_by: ~typing.List[~py3plex.dsl.ast.OrderItem] = <factory>, limit: int | None = None, export: ~py3plex.dsl.ast.ExportTarget | None = None, file_export: ~py3plex.dsl.ast.ExportSpec | None = None, temporal_context: ~py3plex.dsl.ast.TemporalContext | None = None, window_spec: ~py3plex.dsl.ast.WindowSpec | None = None, group_by: ~typing.List[str] = <factory>, limit_per_group: int | None = None, coverage_mode: str | None = None, coverage_k: int | None = None, coverage_p: float | None = None, coverage_group: str | None = None, coverage_id_field: str = 'id', select_cols: ~typing.List[str] | None = None, drop_cols: ~typing.List[str] | None = None, rename_map: ~typing.Dict[str, str] | None = None, summarize_aggs: ~typing.Dict[str, str] | None = None, distinct_cols: ~typing.List[str] | None = None, rank_specs: ~typing.List[~typing.Tuple[str, str]] | None = None, zscore_attrs: ~typing.List[str] | None = None, post_filters: ~typing.List[~typing.Dict[str, ~typing.Any]] | None = None, aggregate_specs: ~typing.Dict[str, ~typing.Any] | None = None, autocompute: bool = True, uq_config: ~py3plex.dsl.ast.UQConfig | None = None)
Bases: object
A SELECT statement.
- target
What to select (nodes or edges)
- Type:
py3plex.dsl.ast.Target
- layer_expr
Optional layer expression for filtering
- Type:
py3plex.dsl.ast.LayerExpr | None
- where
Optional WHERE conditions
- Type:
py3plex.dsl.ast.ConditionExpr | None
- compute
List of measures to compute
- Type:
List[py3plex.dsl.ast.ComputeItem]
- order_by
List of ordering specifications
- Type:
List[py3plex.dsl.ast.OrderItem]
- limit
Optional limit on results
- Type:
int | None
- export
Optional export target (for result format conversion)
- Type:
py3plex.dsl.ast.ExportTarget | None
- file_export
Optional file export specification (for writing to files)
- Type:
py3plex.dsl.ast.ExportSpec | None
- temporal_context
Optional temporal context for time-based queries
- Type:
py3plex.dsl.ast.TemporalContext | None
- window_spec
Optional window specification for sliding window analysis
- Type:
py3plex.dsl.ast.WindowSpec | None
- group_by
List of attribute names to group by (e.g., ["layer"])
- Type:
List[str]
- limit_per_group
Optional per-group limit for top-k filtering
- Type:
int | None
- coverage_mode
Coverage filtering mode ("all", "any", "at_least", "exact", "fraction")
- Type:
str | None
- coverage_k
Threshold for "at_least" or "exact" coverage modes
- Type:
int | None
- coverage_p
Fraction threshold for "fraction" coverage mode
- Type:
float | None
- coverage_group
Group attribute for coverage (defaults to primary grouping)
- Type:
str | None
- coverage_id_field
Field to use for coverage identity (default: "id")
- Type:
str
- select_cols
Optional list of columns to keep (for select() operation)
- Type:
List[str] | None
- drop_cols
Optional list of columns to drop (for drop() operation)
- Type:
List[str] | None
- rename_map
Optional mapping of old column names to new names
- Type:
Dict[str, str] | None
- summarize_aggs
Optional dict of name -> aggregation expression for summarize()
- Type:
Dict[str, str] | None
- distinct_cols
Optional list of columns for distinct operation
- Type:
List[str] | None
- rank_specs
Optional list of (attr, method) tuples for rank_by()
- Type:
List[Tuple[str, str]] | None
- zscore_attrs
Optional list of attributes to compute z-scores for
- Type:
List[str] | None
- post_filters
Optional list of filter specifications to apply after computation
- Type:
List[Dict[str, Any]] | None
- aggregate_specs
Optional dict of name -> aggregation spec for aggregate()
- Type:
Dict[str, Any] | None
- autocompute
Whether to automatically compute missing metrics (default: True)
- Type:
bool
- uq_config
Optional query-scoped uncertainty quantification configuration
- Type:
py3plex.dsl.ast.UQConfig | None
- aggregate_specs: Dict[str, Any] | None = None
- autocompute: bool = True
- compute: List[ComputeItem]
- coverage_group: str | None = None
- coverage_id_field: str = 'id'
- coverage_k: int | None = None
- coverage_mode: str | None = None
- coverage_p: float | None = None
- distinct_cols: List[str] | None = None
- drop_cols: List[str] | None = None
- export: ExportTarget | None = None
- file_export: ExportSpec | None = None
- group_by: List[str]
- layer_set: Any | None = None
- limit: int | None = None
- limit_per_group: int | None = None
- post_filters: List[Dict[str, Any]] | None = None
- rank_specs: List[Tuple[str, str]] | None = None
- rename_map: Dict[str, str] | None = None
- select_cols: List[str] | None = None
- summarize_aggs: Dict[str, str] | None = None
- temporal_context: TemporalContext | None = None
- where: ConditionExpr | None = None
- window_spec: WindowSpec | None = None
- zscore_attrs: List[str] | None = None
- class py3plex.dsl.SpecialPredicate(kind: str, params: ~typing.Dict[str, ~typing.Any] = <factory>)
Bases:
object
Special multilayer predicates.
- Supported kinds:
‘intralayer’: Edges within the same layer
‘interlayer’: Edges between specific layers
‘motif’: Motif pattern matching
‘reachable_from’: Cross-layer reachability
- kind
Predicate type
- Type:
str
- params
Additional parameters for the predicate
- Type:
Dict[str, Any]
- kind: str
- params: Dict[str, Any]
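Example

A minimal construction sketch; the 'intralayer' kind needs no extra parameters, so params falls back to its empty-dict default:

>>> pred = SpecialPredicate(kind="intralayer")
>>> pred.params
{}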
- class py3plex.dsl.SuggestedFix(replacement: str, span: Tuple[int, int])
Bases:
object
A suggested fix for a diagnostic.
- replacement
The replacement text
- Type:
str
- span
Tuple of (start_index, end_index) in the query string
- Type:
Tuple[int, int]
- replacement: str
- span: Tuple[int, int]
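Example

A hedged illustration of how a fix's span indexes into the query string; the replacement and span values here are made up for demonstration:

>>> q = 'SELECT nodes WHERE degre > 5'
>>> fix = SuggestedFix(replacement='degree', span=(19, 24))
>>> q[:fix.span[0]] + fix.replacement + q[fix.span[1]:]
'SELECT nodes WHERE degree > 5'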
- class py3plex.dsl.Target(value)
Bases:
Enum
Query target - what to select from the network.
- EDGES = 'edges'
- NODES = 'nodes'
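Example

Standard Enum semantics apply; a quick sanity check:

>>> Target.NODES.value
'nodes'
>>> Target('edges') is Target.EDGES
True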
- class py3plex.dsl.TemporalContext(kind: str, t0: float | None = None, t1: float | None = None, range_name: str | None = None)
Bases:
object
Temporal context for time-based queries.
This represents temporal constraints on a query, specified via AT or DURING clauses.
- kind
Type of temporal constraint (“at” for point-in-time, “during” for interval)
- Type:
str
- t0
Start time for interval queries (None for point-in-time)
- Type:
float | None
- t1
End time for interval queries (None for point-in-time)
- Type:
float | None
- range_name
Optional named range reference (e.g., “Q1_2023”)
- Type:
str | None
Examples
>>> # Point-in-time: AT 1234567890
>>> TemporalContext(kind="at", t0=1234567890.0, t1=1234567890.0)

>>> # Time range: DURING [100, 200]
>>> TemporalContext(kind="during", t0=100.0, t1=200.0)

>>> # Named range: DURING RANGE "Q1_2023"
>>> TemporalContext(kind="during", range_name="Q1_2023")
- kind: str
- range_name: str | None = None
- t0: float | None = None
- t1: float | None = None
- class py3plex.dsl.TrajectoriesBuilder(process_ref: str)
Bases:
object
Builder for TRAJECTORIES statements.
Example
>>> from py3plex.dsl import Q
>>>
>>> result = (
...     Q.trajectories("sim_result")
...     .where(replicate=5)
...     .at(50)
...     .measure("peak_time", "final_state")
...     .order_by("node_id")
...     .limit(100)
...     .execute()
... )
- at(t: float) TrajectoriesBuilder
Filter to specific time point.
- Parameters:
t – Timestamp
- Returns:
Self for chaining
- during(t0: float, t1: float) TrajectoriesBuilder
Filter to time range.
- Parameters:
t0 – Start time
t1 – End time
- Returns:
Self for chaining
- execute(context: Any | None = None) Any
Execute trajectory query.
- Parameters:
context – Optional context containing simulation results
- Returns:
QueryResult with trajectory data
- limit(n: int) TrajectoriesBuilder
Limit number of results.
- Parameters:
n – Maximum number of results
- Returns:
Self for chaining
- measure(*measures: str) TrajectoriesBuilder
Add trajectory measures to compute.
- Parameters:
*measures – Measure names
- Returns:
Self for chaining
- order_by(key: str, desc: bool = False) TrajectoriesBuilder
Add ordering specification.
- Parameters:
key – Attribute to order by
desc – If True, descending order
- Returns:
Self for chaining
- to(target: str) TrajectoriesBuilder
Set export target.
- Parameters:
target – Export format
- Returns:
Self for chaining
- to_ast() TrajectoriesStmt
Export as AST TrajectoriesStmt object.
- where(**kwargs) TrajectoriesBuilder
Add WHERE conditions on trajectories.
- Parameters:
**kwargs – Conditions (e.g., replicate=5, node="Alice")
- Returns:
Self for chaining
- class py3plex.dsl.TrajectoriesStmt(process_ref: str, where: ConditionExpr | None = None, temporal_context: TemporalContext | None = None, measures: List[str] = <factory>, order_by: List[OrderItem] = <factory>, limit: int | None = None, export_target: str | None = None)
Bases:
object
TRAJECTORIES statement for querying simulation results.
- DSL Example:
TRAJECTORIES FROM process_result
WHERE replicate = 5
AT time = 50
MEASURE peak_time, final_state
ORDER BY node_id
LIMIT 100
- process_ref
Reference to a dynamics process or result
- Type:
str
- where
Optional WHERE conditions on trajectories
- Type:
py3plex.dsl.ast.ConditionExpr | None
- temporal_context
Optional temporal filtering (at specific time, during range)
- Type:
py3plex.dsl.ast.TemporalContext | None
- measures
List of trajectory measures to compute
- Type:
List[str]
- order_by
List of ordering specifications
- Type:
List[py3plex.dsl.ast.OrderItem]
- limit
Optional limit on results
- Type:
int | None
- export_target
Optional export format
- Type:
str | None
- export_target: str | None = None
- limit: int | None = None
- measures: List[str]
- process_ref: str
- temporal_context: TemporalContext | None = None
- where: ConditionExpr | None = None
- class py3plex.dsl.TypeEnvironment
Bases:
object
Type environment for DSL queries.
Tracks types of attributes, computed values, and other entities in a query context.
- add_layer(layer: str)
Add a layer reference.
- has_layer(layer: str) bool
Check if a layer is known.
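Example

A minimal sketch of layer tracking with the two documented methods:

>>> env = TypeEnvironment()
>>> env.add_layer("social")
>>> env.has_layer("social")
True
>>> env.has_layer("transport")
False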
- exception py3plex.dsl.TypeMismatchError(attribute: str, expected_type: str, actual_type: str, query: str | None = None, line: int | None = None, column: int | None = None)
Bases:
DslError
Exception raised when there’s a type mismatch.
- attribute
The attribute with the type mismatch
- expected_type
Expected type
- actual_type
Actual type received
- class py3plex.dsl.UQ
Bases:
object
Uncertainty quantification profiles for ergonomic one-liners.
This class provides convenient presets for common uncertainty estimation scenarios. Each profile returns a UQConfig that can be passed to .uq().
Example
>>> from py3plex.dsl import Q, UQ
>>>
>>> # Fast exploratory analysis
>>> Q.nodes().uq(UQ.fast(seed=42)).compute("degree").execute(net)
>>>
>>> # Default balanced settings
>>> Q.nodes().uq(UQ.default()).compute("betweenness_centrality").execute(net)
>>>
>>> # Publication-quality with more samples
>>> Q.nodes().uq(UQ.paper(seed=123)).compute("closeness").execute(net)
- static default(seed: int | None = None) UQConfig
Default balanced profile.
Settings: perturbation, n=50, ci=0.95
Use this for general-purpose uncertainty estimation with reasonable computational cost.
- Parameters:
seed – Random seed for reproducibility (default: None)
- Returns:
UQConfig with default settings
Example
>>> Q.nodes().uq(UQ.default()).compute("betweenness_centrality").execute(net)
- static fast(seed: int | None = None) UQConfig
Fast exploratory profile with minimal samples.
Settings: perturbation, n=25, ci=0.95
Use this for quick exploratory analysis when speed matters more than precision.
- Parameters:
seed – Random seed for reproducibility (default: None)
- Returns:
UQConfig with fast settings
Example
>>> Q.nodes().uq(UQ.fast(seed=0)).compute("degree").execute(net)
- static paper(seed: int | None = None) UQConfig
Publication-quality profile with thorough sampling.
Settings: bootstrap, n=300, ci=0.95
Use this for publication-quality results where precision is critical and computational cost is acceptable.
- Parameters:
seed – Random seed for reproducibility (default: None)
- Returns:
UQConfig with publication-quality settings
Example
>>> Q.nodes().uq(UQ.paper(seed=123)).compute("closeness").execute(net)
- class py3plex.dsl.UQConfig(method: str | None = None, n_samples: int | None = None, ci: float | None = None, seed: int | None = None, kwargs: Dict[str, Any] = <factory>)
Bases:
object
Query-scoped uncertainty quantification configuration.
This dataclass stores uncertainty estimation settings at the query level, providing defaults for all metrics computed in the query unless explicitly overridden on a per-metric basis.
- method
Uncertainty estimation method (‘bootstrap’, ‘perturbation’, ‘seed’, ‘null_model’)
- Type:
str | None
- n_samples
Number of samples for uncertainty estimation
- Type:
int | None
- ci
Confidence interval level (e.g., 0.95 for 95% CI)
- Type:
float | None
- seed
Random seed for reproducibility
- Type:
int | None
- kwargs
Additional method-specific parameters (e.g., bootstrap_unit, bootstrap_mode)
- Type:
Dict[str, Any]
Example
>>> uq = UQConfig(method="perturbation", n_samples=100, ci=0.95, seed=42)
>>> uq = UQConfig(method="bootstrap", n_samples=200, ci=0.95,
...               kwargs={"bootstrap_unit": "edges", "bootstrap_mode": "resample"})
- ci: float | None = None
- kwargs: Dict[str, Any]
- method: str | None = None
- n_samples: int | None = None
- seed: int | None = None
- exception py3plex.dsl.UnknownAttributeError(attribute: str, known_attributes: List[str] | None = None, query: str | None = None, line: int | None = None, column: int | None = None)
Bases:
DslError
Exception raised when an unknown attribute is referenced.
- attribute
The unknown attribute name
- known_attributes
List of valid attribute names
- suggestion
Suggested alternative, if any
- exception py3plex.dsl.UnknownLayerError(layer: str, known_layers: List[str] | None = None, query: str | None = None, line: int | None = None, column: int | None = None)
Bases:
DslError
Exception raised when an unknown layer is referenced.
- layer
The unknown layer name
- known_layers
List of valid layer names
- suggestion
Suggested alternative, if any
- exception py3plex.dsl.UnknownMeasureError(measure: str, known_measures: List[str] | None = None, query: str | None = None, line: int | None = None, column: int | None = None)
Bases:
DslError
Exception raised when an unknown measure is referenced.
- measure
The unknown measure name
- known_measures
List of valid measure names
- suggestion
Suggested alternative, if any
- py3plex.dsl.compile_pattern(pattern: PatternGraph) PatternPlan
Compile a pattern graph into an execution plan.
The compilation strategy:
1. Select the root variable with the most restrictive predicates
2. Build the join order using a greedy approach (expand along edges)
3. Estimate selectivity for each variable
- Parameters:
pattern – Pattern graph to compile
- Returns:
PatternPlan with execution strategy
- py3plex.dsl.compute_centrality_for_layer(network: Any, layer: str, centrality: str = 'betweenness_centrality') Dict[Any, float]
Compute centrality for all nodes in a layer.
- Parameters:
network – Multilayer network object
layer – Layer identifier
centrality – Centrality measure name
- Returns:
Dictionary mapping nodes to centrality values
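Example

A short usage sketch, assuming net is a loaded multi_layer_network with a "social" layer:

>>> bc = compute_centrality_for_layer(net, layer="social",
...                                   centrality="betweenness_centrality")
>>> sorted(bc.items(), key=lambda kv: -kv[1])[:3]  # top-3 central nodes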
- py3plex.dsl.describe_operator(name: str) Dict[str, Any] | None
Get detailed information about a registered operator.
- Parameters:
name – Operator name
- Returns:
Dictionary with operator metadata, or None if not found
Example
>>> info = describe_operator("layer_resilience")
>>> print(info["description"])
Compute resilience score for current layers.
- py3plex.dsl.detect_communities(network: Any, layer: str | None = None) Dict[str, Any]
Detect communities in the network using Louvain algorithm via DSL.
- Parameters:
network – Multilayer network object
layer – Optional layer to filter nodes by
- Returns:
Dictionary containing:
'partition': Dict mapping nodes to community IDs
'num_communities': Number of communities detected
'community_sizes': Dict mapping community ID to size
'biggest_community': Tuple (community_id, size)
'smallest_community': Tuple (community_id, size)
'size_distribution': List of community sizes
- Return type:
dict
Example
>>> from py3plex.core import multinet
>>> from py3plex.dsl import detect_communities
>>>
>>> # Create a sample network
>>> network = multinet.multi_layer_network()
>>> # ... add nodes and edges ...
>>>
>>> # Detect communities
>>> result = detect_communities(network)
>>> print(f"Found {result['num_communities']} communities")
>>> print(f"Biggest community has {result['biggest_community'][1]} nodes")
- py3plex.dsl.dsl_operator(name: str | None = None, *, description: str | None = None, category: str | None = None, overwrite: bool = False)
Decorator to register a Python function as a DSL operator.
This decorator allows users to define custom operators that can be used in DSL queries. The decorated function should accept a DSLExecutionContext as its first argument, followed by any keyword arguments.
- Parameters:
name – Operator name (defaults to function name if not provided)
description – Human-readable description (defaults to function docstring)
category – Optional category for organization (e.g., “centrality”, “dynamics”)
overwrite – If True, allow replacing existing operators
- Returns:
Decorator function that registers the operator
Example
>>> @dsl_operator("layer_resilience", category="dynamics")
... def layer_resilience_op(context: DSLExecutionContext, alpha: float = 0.1):
...     '''Compute resilience score for current layers.'''
...     # Access context.graph, context.current_layers, etc.
...     return 42.0

>>> # Use in DSL:
>>> # measure layer_resilience(alpha=0.2) on layers ["infra", "power"]
- py3plex.dsl.execute_ast(network: Any, query: Query, params: Dict[str, Any] | None = None) QueryResult | ExecutionPlan
Execute an AST query on a multilayer network.
- Parameters:
network – Multilayer network object
query – Query AST
params – Parameter bindings
- Returns:
QueryResult or ExecutionPlan (if explain=True)
- py3plex.dsl.execute_query(network: Any, query: str) Dict[str, Any]
Execute a DSL query on a multilayer network.
Supports both SELECT and MATCH queries:
- SELECT queries:
SELECT nodes WHERE layer="transport" AND degree > 5
SELECT * FROM nodes IN LAYER 'ppi' WHERE degree > 10
SELECT id, degree FROM nodes IN LAYERS ('ppi', 'coexpr') WHERE color = 'red'
- MATCH queries (Cypher-like):
MATCH (g:Gene)-[r:REGULATES]->(t:Gene) IN LAYER 'reg' WHERE g.degree > 10 RETURN g, t
- Parameters:
network – Multilayer network object (multi_layer_network instance)
query – DSL query string
- Returns:
Dictionary containing:
'nodes' or 'edges' or 'bindings': List of selected items / pattern matches
'computed': Dictionary of computed measures (if COMPUTE used)
'query': Original query string
- Return type:
dict
- Raises:
DSLSyntaxError – If query syntax is invalid
DSLExecutionError – If query cannot be executed
Examples
>>> from py3plex.core import multinet
>>> net = multinet.multi_layer_network()
>>> net.add_nodes([{'source': 'A', 'type': 'transport'}])
>>> net.add_nodes([{'source': 'B', 'type': 'transport'}])
>>> net.add_nodes([{'source': 'C', 'type': 'social'}])
>>> net.add_edges([
...     {'source': 'A', 'target': 'B', 'source_type': 'transport', 'target_type': 'transport'},
...     {'source': 'B', 'target': 'C', 'source_type': 'social', 'target_type': 'social'}
... ])
>>>
>>> # Select all nodes in "transport" layer
>>> result = execute_query(net, 'SELECT nodes WHERE layer="transport"')
>>> result['count'] >= 0
True
>>>
>>> # Select high-degree nodes and compute centrality
>>> result = execute_query(net, 'SELECT nodes WHERE degree > 0 COMPUTE betweenness_centrality')
>>> 'computed' in result
True
>>>
>>> # Complex query with multiple conditions
>>> result = execute_query(net, 'SELECT nodes WHERE layer="social" AND degree >= 0')
>>> result['count'] >= 0
True
- py3plex.dsl.explain(query: Query, graph: Any | None = None, schema: SchemaProvider | None = None) ExplainResult
Explain a DSL query with type information and cost estimates.
Provides detailed information about:
- Query structure (AST)
- Inferred types for all expressions
- Estimated execution cost
- Potential issues (via linting)
- Parameters:
query – Query AST to explain
graph – Optional py3plex network for schema extraction
schema – Optional schema provider
- Returns:
ExplainResult with detailed query information
Example
>>> result = explain(query, graph=network)
>>> print(result.ast_summary)
>>> print(f"Cost: {result.cost_estimate}")
>>> for diag in result.diagnostics:
...     print(diag)
- py3plex.dsl.export_result(result: Any, spec: ExportSpec) None
Export query result to a file according to the export specification.
This is the main entry point for file exports. It normalizes the result into a tabular format and dispatches to format-specific writers.
- Parameters:
result – Query result (QueryResult, dict, or other supported type)
spec – Export specification with path, format, columns, and options
- Raises:
DslExecutionError – If export fails or result type is not supported
- py3plex.dsl.format_result(result: Dict[str, Any], limit: int = 10) str
Format query result as human-readable string.
- Parameters:
result – Result dictionary from execute_query
limit – Maximum number of items to display
- Returns:
Formatted string representation
- py3plex.dsl.get_biggest_community(network: Any, layer: str | None = None) Tuple[int, int, List[Any]]
Get the largest community in the network.
- Parameters:
network – Multilayer network object
layer – Optional layer to filter nodes by
- Returns:
Tuple of (community_id, size, list_of_nodes)
Example
>>> community_id, size, nodes = get_biggest_community(network)
>>> print(f"Community {community_id} has {size} nodes")
>>> print(f"Nodes: {nodes}")
- py3plex.dsl.get_community_partition(network: Any, layer: str | None = None) Dict[Any, int]
Get community partition (mapping of nodes to community IDs).
- Parameters:
network – Multilayer network object
layer – Optional layer to filter nodes by
- Returns:
Dictionary mapping nodes to community IDs
Example
>>> partition = get_community_partition(network)
>>> for node, community_id in partition.items():
...     print(f"Node {node} is in community {community_id}")
- py3plex.dsl.get_community_size_distribution(network: Any, layer: str | None = None) List[int]
Get the distribution of community sizes.
- Parameters:
network – Multilayer network object
layer – Optional layer to filter nodes by
- Returns:
List of community sizes sorted in descending order
Example
>>> distribution = get_community_size_distribution(network)
>>> print(f"Largest community: {distribution[0]} nodes")
>>> print(f"Smallest community: {distribution[-1]} nodes")
>>> print(f"Average size: {sum(distribution) / len(distribution):.1f}")
- py3plex.dsl.get_community_sizes(network: Any, layer: str | None = None) Dict[int, int]
Get the size of each community in the network.
- Parameters:
network – Multilayer network object
layer – Optional layer to filter nodes by
- Returns:
Dictionary mapping community ID to its size
Example
>>> sizes = get_community_sizes(network)
>>> for comm_id, size in sizes.items():
...     print(f"Community {comm_id}: {size} nodes")
- py3plex.dsl.get_num_communities(network: Any, layer: str | None = None) int
Get the number of communities in the network.
- Parameters:
network – Multilayer network object
layer – Optional layer to filter nodes by
- Returns:
Number of communities detected
Example
>>> num_communities = get_num_communities(network)
>>> print(f"Found {num_communities} communities")
- py3plex.dsl.get_operator(name: str) DSLOperator | None
Get a registered operator by name from the global registry.
This is a convenience function that delegates to operator_registry.get().
- Parameters:
name – Operator name (will be normalized)
- Returns:
DSLOperator instance or None if not found
- py3plex.dsl.get_smallest_community(network: Any, layer: str | None = None) Tuple[int, int, List[Any]]
Get the smallest community in the network.
- Parameters:
network – Multilayer network object
layer – Optional layer to filter nodes by
- Returns:
Tuple of (community_id, size, list_of_nodes)
Example
>>> community_id, size, nodes = get_smallest_community(network)
>>> print(f"Community {community_id} has {size} nodes")
>>> print(f"Nodes: {nodes}")
- py3plex.dsl.lint(query: Query, graph: Any | None = None, schema: SchemaProvider | None = None) List[Diagnostic]
Lint a DSL query.
Analyzes the query for potential issues including:
- Unknown layers or attributes
- Type mismatches
- Unsatisfiable or redundant predicates
- Performance issues
- Parameters:
query – Query AST to lint
graph – Optional py3plex network for schema extraction
schema – Optional schema provider (auto-created from graph if not provided)
- Returns:
List of diagnostics found
Example
>>> from py3plex.dsl import Q, L, lint
>>> from py3plex.core import multinet
>>>
>>> network = multinet.multi_layer_network()
>>> # ... build network ...
>>>
>>> query = Q.nodes().from_layers(L["social"]).where(degree__gt=5).build()
>>> diagnostics = lint(query, graph=network)
>>>
>>> for diag in diagnostics:
...     print(f"{diag.severity}: {diag.message}")
- py3plex.dsl.list_operators() Dict[str, DSLOperator]
List all registered operators from the global registry.
This is a convenience function that delegates to operator_registry.list_operators().
- Returns:
Dictionary mapping operator names to DSLOperator instances
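Example

A short sketch that enumerates registered operators, pairing each name with the metadata returned by describe_operator (documented above):

>>> for name in list_operators():
...     info = describe_operator(name)
...     print(name, "-", info["description"])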
- py3plex.dsl.match_pattern(network: Any, pattern: PatternGraph, plan: PatternPlan, limit: int | None = None, timeout: float | None = None) List[MatchRow]
Execute pattern matching on a network.
- Parameters:
network – Multilayer network object
pattern – Pattern graph to match
plan – Compiled execution plan
limit – Maximum number of matches to return
timeout – Optional timeout in seconds
- Returns:
List of MatchRow objects representing matches
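Example

A minimal sketch of the compile-then-match flow; it assumes pattern is an existing PatternGraph (e.g., produced from a parsed MATCH query) and net is a multi_layer_network:

>>> plan = compile_pattern(pattern)  # greedy join order + selectivity estimates
>>> rows = match_pattern(net, pattern, plan, limit=100, timeout=5.0)
>>> for row in rows:
...     print(row)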
- py3plex.dsl.register_operator(name: str, func: Callable[[...], Any], description: str | None = None, category: str | None = None, overwrite: bool = False) None
Register a DSL operator with the global registry.
This is a convenience function that delegates to operator_registry.register().
- Parameters:
name – Operator name (will be normalized)
func – Python callable implementing the operator
description – Optional description
category – Optional category for organization
overwrite – If True, allow replacing existing operators
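Example

A hedged sketch of imperative registration; the operator body is illustrative only and mirrors the dsl_operator example above (including the context.graph access):

>>> def toy_density(context):
...     '''Edge-to-node ratio of the current graph (illustrative only).'''
...     g = context.graph
...     return g.number_of_edges() / max(g.number_of_nodes(), 1)
>>> register_operator("toy_density", toy_density, category="structure")
>>> # ... use in queries, then clean up (e.g., in tests):
>>> unregister_operator("toy_density")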
- py3plex.dsl.select_high_degree_nodes(network: Any, min_degree: int, layer: str | None = None) List[Any]
Select nodes with degree greater than threshold.
- Parameters:
network – Multilayer network object
min_degree – Minimum degree threshold (exclusive - nodes must have degree > min_degree)
layer – Optional layer to filter by
- Returns:
List of nodes with degree > min_degree
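Example

A short usage sketch, assuming net is a loaded multi_layer_network:

>>> hubs = select_high_degree_nodes(net, min_degree=5, layer="social")
>>> len(hubs)  # counts nodes with degree strictly greater than 5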
- py3plex.dsl.select_nodes_by_layer(network: Any, layer: str) List[Any]
Select all nodes in a specific layer.
- Parameters:
network – Multilayer network object
layer – Layer identifier
- Returns:
List of nodes in the specified layer
- py3plex.dsl.unregister_operator(name: str) None
Unregister an operator from the global registry.
This is a convenience function that delegates to operator_registry.unregister(). Primarily useful for testing and cleanup.
- Parameters:
name – Operator name (will be normalized)
Uncertainty Quantification
First-class uncertainty support for py3plex.
This module provides types and utilities for representing statistics with uncertainty information. The core idea is that every statistic is an object that can carry a distribution:
Deterministic mode: object has mean with std=None (certainty 1.0)
Uncertainty mode: object has mean, std, quantiles populated
This makes uncertainty “first-class” - it’s baked into how numbers exist in the library, not bolted on later.
Examples
>>> from py3plex.uncertainty import StatSeries
>>> # Deterministic result
>>> result = StatSeries(
... index=['a', 'b', 'c'],
... mean=np.array([1.0, 2.0, 3.0])
... )
>>> result.is_deterministic
True
>>> result.certainty
1.0
>>> # Uncertain result
>>> result_unc = StatSeries(
... index=['a', 'b', 'c'],
... mean=np.array([1.0, 2.0, 3.0]),
... std=np.array([0.1, 0.2, 0.15]),
... quantiles={
... 0.025: np.array([0.8, 1.6, 2.7]),
... 0.975: np.array([1.2, 2.4, 3.3])
... }
... )
>>> result_unc.is_deterministic
False
>>> np.array(result_unc) # Backward compat - gives mean
array([1., 2., 3.])
- class py3plex.uncertainty.CommunityStats(labels: Dict[Any, int], modularity: float | None = None, modularity_std: float | None = None, coassoc: StatMatrix | None = None, stability: Dict[Any, float] | None = None, n_communities: int = 0, meta: Dict[str, Any] = <factory>)
Bases:
object
Statistics from community detection with optional uncertainty.
Wraps cluster labels, modularity, co-association matrix, and stability indices computed from multiple runs.
- Parameters:
labels (dict[Any, int]) – Node -> community ID mapping (from deterministic run or consensus).
modularity (float or None) – Modularity score (mean if multiple runs).
modularity_std (float or None) – Standard deviation of modularity across runs.
coassoc (StatMatrix or None) – Co-association matrix (probability nodes are in same community).
stability (dict[Any, float] or None) – Per-node stability index (how often node stays in same cluster).
n_communities (int) – Number of communities detected.
meta (dict[str, Any]) – Optional metadata.
Examples
>>> cs = CommunityStats(
...     labels={'a': 0, 'b': 0, 'c': 1},
...     modularity=0.42,
...     n_communities=2
... )
>>> cs.is_deterministic
True
>>> cs.labels['a']
0
- property certainty: float
Return certainty level (1.0 if deterministic, 0.0 otherwise).
- coassoc: StatMatrix | None = None
- property is_deterministic: bool
Return True if no uncertainty info is present.
- labels: Dict[Any, int]
- meta: Dict[str, Any]
- modularity: float | None = None
- modularity_std: float | None = None
- n_communities: int = 0
- stability: Dict[Any, float] | None = None
- class py3plex.uncertainty.ResamplingStrategy(value)
Bases:
Enum
Strategy for estimating uncertainty via resampling.
- SEED
Run with different random seeds (Monte Carlo).
- Type:
str
- BOOTSTRAP
Bootstrap resampling of nodes or edges.
- Type:
str
- JACKKNIFE
Leave-one-out jackknife resampling.
- Type:
str
- PERTURBATION
Add noise/perturbations to network structure or parameters.
- Type:
str
- BOOTSTRAP = 'bootstrap'
- JACKKNIFE = 'jackknife'
- PERTURBATION = 'perturbation'
- SEED = 'seed'
- class py3plex.uncertainty.StatMatrix(index: List[Any], mean: ndarray, std: ndarray | None = None, quantiles: Dict[float, ndarray] | None = None, meta: Dict[str, Any] = <factory>)
Bases:
object
A matrix of statistics with optional uncertainty.
Used for adjacency matrices, co-association matrices, distance matrices, etc.
- Parameters:
index (list[Any]) – Row/column labels (assumed square matrix for simplicity).
mean (np.ndarray) – The mean matrix, shape (n, n).
std (np.ndarray or None) – The standard deviation matrix, shape (n, n), or None.
quantiles (dict[float, np.ndarray] or None) – Quantile matrices, e.g., {0.025: (n, n), 0.975: (n, n)}.
meta (dict[str, Any]) – Optional metadata.
Examples
>>> import numpy as np
>>> m = StatMatrix(
...     index=['a', 'b', 'c'],
...     mean=np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
... )
>>> m.is_deterministic
True
>>> np.array(m).shape
(3, 3)
- property certainty: float
Return certainty level (1.0 if deterministic, 0.0 otherwise).
- index: List[Any]
- property is_deterministic: bool
Return True if this is a deterministic result.
- mean: ndarray
- meta: Dict[str, Any]
- quantiles: Dict[float, ndarray] | None = None
- std: ndarray | None = None
- class py3plex.uncertainty.StatSeries(index: List[Any], mean: ndarray, std: ndarray | None = None, quantiles: Dict[float, ndarray] | None = None, meta: Dict[str, Any] = <factory>)
Bases:
object
A series of statistics with optional uncertainty information.
This is the canonical result type for statistics that return a value per node, time point, or other index.
In deterministic mode (uncertainty=False):
- mean contains the single-run values
- std = None
- quantiles = None
- certainty = 1.0

In uncertain mode (uncertainty=True):
- mean contains the average across runs
- std contains the standard deviation
- quantiles contains percentile arrays (e.g., {0.025: arr, 0.975: arr})
- certainty < 1.0
- Parameters:
index (list[Any]) – The index labels (e.g., node IDs, time points).
mean (np.ndarray) – The mean values, shape (n,).
std (np.ndarray or None) – The standard deviations, shape (n,), or None if deterministic.
quantiles (dict[float, np.ndarray] or None) – Quantile arrays, e.g., {0.025: (n,), 0.975: (n,)}, or None.
meta (dict[str, Any]) – Optional metadata (e.g., algorithm parameters, run info).
Examples
>>> import numpy as np
>>> # Deterministic
>>> s = StatSeries(
...     index=['a', 'b', 'c'],
...     mean=np.array([1.0, 2.0, 3.0])
... )
>>> s.is_deterministic
True
>>> s.certainty
1.0
>>> np.array(s)
array([1., 2., 3.])

>>> # With uncertainty
>>> s_unc = StatSeries(
...     index=['a', 'b', 'c'],
...     mean=np.array([1.0, 2.0, 3.0]),
...     std=np.array([0.1, 0.2, 0.15]),
...     quantiles={0.025: np.array([0.8, 1.6, 2.7]),
...                0.975: np.array([1.2, 2.4, 3.3])}
... )
>>> s_unc.is_deterministic
False
>>> s_unc.certainty
0.0
- property certainty: float
Return certainty level.
Returns 1.0 if deterministic, 0.0 otherwise. In the future, this could return a richer metric.
- index: List[Any]
- property is_deterministic: bool
Return True if this is a deterministic result (no uncertainty).
- mean: ndarray
- meta: Dict[str, Any]
- quantiles: Dict[float, ndarray] | None = None
- std: ndarray | None = None
- to_dict() Dict[Any, Dict[str, Any]]
Convert to dictionary mapping index -> stats dict.
- Returns:
Dictionary with keys from index, values are dicts with ‘mean’, optionally ‘std’ and ‘quantiles’.
- Return type:
dict
- class py3plex.uncertainty.UncertaintyConfig(mode: UncertaintyMode = UncertaintyMode.OFF, default_n_runs: int = 50, default_resampling: ResamplingStrategy = ResamplingStrategy.SEED)
Bases:
object
Configuration for uncertainty estimation.
- mode
The current uncertainty mode.
- Type:
UncertaintyMode
- default_n_runs
Default number of runs for uncertainty estimation.
- Type:
int
- default_resampling
Default resampling strategy.
- Type:
ResamplingStrategy
- default_n_runs: int = 50
- default_resampling: ResamplingStrategy = 'seed'
- mode: UncertaintyMode = 'off'
- class py3plex.uncertainty.UncertaintyMode(value)
Bases:
Enum
Global mode for uncertainty computation.
- OFF
Always deterministic, std=None.
- Type:
str
- ON
Try to compute uncertainty when supported.
- Type:
str
- AUTO
Only do it if explicitly requested by a function.
- Type:
str
- AUTO = 'auto'
- OFF = 'off'
- ON = 'on'
- py3plex.uncertainty.bootstrap_metric(graph: multi_layer_network, metric_fn: Callable[[multi_layer_network], Dict[Any, float]], n_boot: int = 50, unit: str = 'edges', mode: str = 'resample', ci: float = 0.95, random_state: int | None = None) Dict[str, ndarray]
Bootstrap a metric for uncertainty estimation.
This function resamples the graph by unit (edges, nodes, or layers) and recomputes the metric on each bootstrap sample to estimate uncertainty.
- Parameters:
graph (multi_layer_network) – The multilayer network to analyze.
metric_fn (callable) – Function that takes a network and returns a dict mapping items (usually nodes) to metric values. Must have signature: metric_fn(network) -> Dict[item_id, float]
n_boot (int, default=50) – Number of bootstrap replicates.
unit (str, default="edges") – What to resample: “edges”, “nodes”, or “layers”.
mode (str, default="resample") – Resampling mode:
- "resample": Sample with replacement (classic bootstrap)
- "permute": Permute the units (permutation test)
ci (float, default=0.95) – Confidence interval level (e.g., 0.95 for 95% CI).
random_state (int, optional) – Random seed for reproducibility.
- Returns:
Dictionary with keys:
- "mean": np.ndarray of shape (n_items,) with mean values
- "std": np.ndarray of shape (n_items,) with standard errors
- "ci_low": np.ndarray of shape (n_items,) with lower CI bounds
- "ci_high": np.ndarray of shape (n_items,) with upper CI bounds
- "index": List of item IDs
- "n_boot": Number of bootstrap replicates used
- "method": String describing the bootstrap method
- Return type:
dict
Examples
>>> from py3plex.core import multinet
>>> from py3plex.uncertainty.bootstrap import bootstrap_metric
>>>
>>> # Create a network
>>> net = multinet.multi_layer_network(directed=False)
>>> net.add_edges([["a", "L0", "b", "L0", 1.0]], input_type="list")
>>>
>>> # Define metric function
>>> def degree_metric(network):
...     result = {}
...     for node in network.get_nodes():
...         result[node] = network.core_network.degree(node)
...     return result
>>>
>>> # Bootstrap
>>> boot_result = bootstrap_metric(
...     net, degree_metric, n_boot=100, unit="edges"
... )
>>> boot_result["mean"]  # Mean degree values
>>> boot_result["ci_low"]  # Lower CI bounds
Notes
For “edges” unit: resamples edges with replacement
For “nodes” unit: resamples nodes with replacement
For “layers” unit: resamples layers with replacement
The metric_fn must be able to handle graphs with different structure
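A hedged follow-up sketch: the returned dict maps directly onto a StatSeries (the quantile levels below assume the default ci=0.95):

>>> from py3plex.uncertainty import StatSeries
>>> series = StatSeries(
...     index=boot_result["index"],
...     mean=boot_result["mean"],
...     std=boot_result["std"],
...     quantiles={0.025: boot_result["ci_low"], 0.975: boot_result["ci_high"]},
... )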
- py3plex.uncertainty.estimate_uncertainty(network: multi_layer_network, metric_fn: Callable[[multi_layer_network], Dict[Any, float] | float | ndarray], *, n_runs: int | None = None, resampling: ResamplingStrategy | None = None, random_seed: int | None = None, perturbation_params: Dict[str, Any] | None = None) StatSeries | float
Estimate uncertainty for a network statistic.
This is the main entry point for adding uncertainty to any statistic. It runs the metric function multiple times with different random seeds or network perturbations, then computes mean, std, and quantiles.
- Parameters:
network (multi_layer_network) – The network to analyze.
metric_fn (callable) – Function that computes a statistic. Must accept a network and return:
- dict[node, float] for per-node statistics
- float for scalar statistics
- np.ndarray for array statistics
n_runs (int, optional) – Number of runs for uncertainty estimation. If None, uses the default from the current uncertainty config.
resampling (ResamplingStrategy, optional) – Strategy for resampling. If None, uses default from config.
random_seed (int, optional) – Random seed for reproducibility.
perturbation_params (dict, optional) – Parameters for perturbation strategies. For example: {“edge_drop_p”: 0.05, “node_drop_p”: 0.02}
- Returns:
If metric_fn returns a dict: StatSeries with uncertainty info. If metric_fn returns a scalar: float (mean value). The returned object has mean, std, and quantiles populated.
- Return type:
StatSeries or float
Examples
>>> from py3plex.uncertainty import estimate_uncertainty, ResamplingStrategy
>>> from py3plex.core import multinet
>>>
>>> # Create a simple network
>>> net = multinet.multi_layer_network(directed=False)
>>> net.add_edges([["a", "L0", "b", "L0", 1.0]], input_type="list")
>>>
>>> # Define a metric function
>>> def my_metric(network):
...     # Return per-node degree
...     degrees = {}
...     for node in network.get_nodes():
...         degrees[node] = network.core_network.degree(node)
...     return degrees
>>>
>>> # Estimate uncertainty
>>> result = estimate_uncertainty(
...     net,
...     my_metric,
...     n_runs=50,
...     resampling=ResamplingStrategy.PERTURBATION,
...     perturbation_params={"edge_drop_p": 0.1}
... )
>>> result.mean  # Mean degree values
>>> result.std  # Std deviation of degrees
>>> result.quantiles  # Confidence intervals
Notes
For SEED strategy: runs the metric with different random seeds
For PERTURBATION strategy: applies edge/node drops then recomputes
For BOOTSTRAP: resamples nodes/edges with replacement (not yet implemented)
For JACKKNIFE: leave-one-out resampling (not yet implemented)
- py3plex.uncertainty.get_uncertainty_config() UncertaintyConfig
Get the current uncertainty configuration.
- Returns:
The current configuration from the context.
- Return type:
UncertaintyConfig
Examples
>>> from py3plex.uncertainty import get_uncertainty_config
>>> cfg = get_uncertainty_config()
>>> cfg.mode
<UncertaintyMode.OFF: 'off'>
>>> cfg.default_n_runs
50
- py3plex.uncertainty.null_model_metric(graph: multi_layer_network, metric_fn: Callable[[multi_layer_network], Dict[Any, float]], n_null: int = 200, model: str = 'degree_preserving', random_state: int | None = None) Dict[str, ndarray]
Compute metric on null models for statistical significance testing.
This function generates null models of the network (e.g., via degree-preserving rewiring) and computes the metric on each null network. It then computes z-scores and p-values for the observed metric values.
- Parameters:
graph (multi_layer_network) – The multilayer network to analyze.
metric_fn (callable) – Function that takes a network and returns a dict mapping items (usually nodes) to metric values. Must have signature: metric_fn(network) -> Dict[item_id, float]
n_null (int, default=200) – Number of null model replicates to generate.
model (str, default="degree_preserving") – Null model type:
- "degree_preserving": Rewire edges while preserving degree sequence
- "erdos_renyi": Random graph with same density
- "configuration": Configuration model matching degree distribution
random_state (int, optional) – Random seed for reproducibility.
- Returns:
Dictionary with keys:
- "observed": np.ndarray of shape (n_items,) with observed metric values
- "mean_null": np.ndarray of shape (n_items,) with mean null values
- "std_null": np.ndarray of shape (n_items,) with std of null values
- "zscore": np.ndarray of shape (n_items,) with z-scores
- "pvalue": np.ndarray of shape (n_items,) with two-tailed p-values
- "index": List of item IDs
- "n_null": Number of null replicates used
- "model": String describing the null model
- Return type:
dict
Examples
>>> from py3plex.core import multinet
>>> from py3plex.uncertainty.null_models import null_model_metric
>>>
>>> # Create a network
>>> net = multinet.multi_layer_network(directed=False)
>>> net.add_edges([["a", "L0", "b", "L0", 1.0]], input_type="list")
>>>
>>> # Define metric function
>>> def degree_metric(network):
...     result = {}
...     for node in network.get_nodes():
...         result[node] = network.core_network.degree(node)
...     return result
>>>
>>> # Compute null model statistics
>>> null_result = null_model_metric(
...     net, degree_metric, n_null=100, model="degree_preserving"
... )
>>> null_result["zscore"]  # Z-scores for each node
>>> null_result["pvalue"]  # P-values for each node
Notes
Z-scores indicate how many standard deviations the observed value is from the null distribution mean
P-values are two-tailed by default: P(|Z| >= |Z_observed|) under the null. For one-tailed tests, users can compute p_one_sided = p_two_sided / 2 and check the sign of z-score to determine the direction.
High |z-score| and low p-value indicate statistical significance
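A small sketch of the one-tailed conversion described above, applied to the null_result dict from the example:

>>> import numpy as np
>>> z, p_two = null_result["zscore"], null_result["pvalue"]
>>> # H1: observed greater than null; flip the condition for the other tail
>>> p_greater = np.where(z > 0, p_two / 2, 1 - p_two / 2)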
- py3plex.uncertainty.set_uncertainty_config(config: UncertaintyConfig) Token
Set the uncertainty configuration.
- Parameters:
config (UncertaintyConfig) – The new configuration to set.
- Returns:
A token that can be used to reset the configuration.
- Return type:
Token
Examples
>>> from py3plex.uncertainty import set_uncertainty_config, UncertaintyConfig
>>> from py3plex.uncertainty import UncertaintyMode
>>> cfg = UncertaintyConfig(mode=UncertaintyMode.ON, default_n_runs=100)
>>> token = set_uncertainty_config(cfg)
>>> # ... do work ...
>>> _uncertainty_ctx.reset(token)  # restore previous config
- py3plex.uncertainty.uncertainty_enabled(*, n_runs: int | None = None, resampling: ResamplingStrategy | None = None)
Context manager to enable uncertainty estimation.
Within this context, all supported functions will compute uncertainty by default (unless explicitly disabled with uncertainty=False).
- Parameters:
n_runs (int, optional) – Number of runs for uncertainty estimation. If None, uses the default from the current config.
resampling (ResamplingStrategy, optional) – Resampling strategy to use. If None, uses the default from the current config.
- Yields:
None
Examples
>>> from py3plex.uncertainty import uncertainty_enabled
>>> from py3plex.algorithms.centrality_toolkit import multilayer_pagerank
>>>
>>> # Without uncertainty
>>> result = multilayer_pagerank(network)
>>> result.is_deterministic
True
>>>
>>> # With uncertainty
>>> with uncertainty_enabled(n_runs=100):
...     result = multilayer_pagerank(network)
>>> result.is_deterministic
False
>>> result.std is not None
True
Notes
This uses contextvars, so it’s thread-safe and async-safe. Each context gets its own configuration.
Core statistic types for first-class uncertainty representation.
This module defines the fundamental types that wrap statistics with optional uncertainty information (standard deviations, confidence intervals, quantiles).
- class py3plex.uncertainty.types.CommunityStats(labels: Dict[Any, int], modularity: float | None = None, modularity_std: float | None = None, coassoc: StatMatrix | None = None, stability: Dict[Any, float] | None = None, n_communities: int = 0, meta: Dict[str, Any] = <factory>)
Bases:
object
Statistics from community detection with optional uncertainty.
Wraps cluster labels, modularity, co-association matrix, and stability indices computed from multiple runs.
- Parameters:
labels (dict[Any, int]) – Node -> community ID mapping (from deterministic run or consensus).
modularity (float or None) – Modularity score (mean if multiple runs).
modularity_std (float or None) – Standard deviation of modularity across runs.
coassoc (StatMatrix or None) – Co-association matrix (probability nodes are in same community).
stability (dict[Any, float] or None) – Per-node stability index (how often node stays in same cluster).
n_communities (int) – Number of communities detected.
meta (dict[str, Any]) – Optional metadata.
Examples
>>> cs = CommunityStats(
...     labels={'a': 0, 'b': 0, 'c': 1},
...     modularity=0.42,
...     n_communities=2
... )
>>> cs.is_deterministic
True
>>> cs.labels['a']
0
- property certainty: float
Return certainty level (1.0 if deterministic, 0.0 otherwise).
- coassoc: StatMatrix | None = None
- property is_deterministic: bool
Return True if no uncertainty info is present.
- labels: Dict[Any, int]
- meta: Dict[str, Any]
- modularity: float | None = None
- modularity_std: float | None = None
- n_communities: int = 0
- stability: Dict[Any, float] | None = None
- class py3plex.uncertainty.types.ResamplingStrategy(value)
Bases:
Enum
Strategy for estimating uncertainty via resampling.
- SEED
Run with different random seeds (Monte Carlo).
- Type:
str
- BOOTSTRAP
Bootstrap resampling of nodes or edges.
- Type:
str
- JACKKNIFE
Leave-one-out jackknife resampling.
- Type:
str
- PERTURBATION
Add noise/perturbations to network structure or parameters.
- Type:
str
- BOOTSTRAP = 'bootstrap'
- JACKKNIFE = 'jackknife'
- PERTURBATION = 'perturbation'
- SEED = 'seed'
- class py3plex.uncertainty.types.StatMatrix(index: List[Any], mean: ndarray, std: ndarray | None = None, quantiles: Dict[float, ndarray] | None = None, meta: Dict[str, Any] = <factory>)
Bases:
object
A matrix of statistics with optional uncertainty.
Used for adjacency matrices, co-association matrices, distance matrices, etc.
- Parameters:
index (list[Any]) – Row/column labels (assumed square matrix for simplicity).
mean (np.ndarray) – The mean matrix, shape (n, n).
std (np.ndarray or None) – The standard deviation matrix, shape (n, n), or None.
quantiles (dict[float, np.ndarray] or None) – Quantile matrices, e.g., {0.025: (n, n), 0.975: (n, n)}.
meta (dict[str, Any]) – Optional metadata.
Examples
>>> import numpy as np
>>> m = StatMatrix(
...     index=['a', 'b', 'c'],
...     mean=np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
... )
>>> m.is_deterministic
True
>>> np.array(m).shape
(3, 3)
- property certainty: float
Return certainty level (1.0 if deterministic, 0.0 otherwise).
- index: List[Any]
- property is_deterministic: bool
Return True if this is a deterministic result.
- mean: ndarray
- meta: Dict[str, Any]
- quantiles: Dict[float, ndarray] | None = None
- std: ndarray | None = None
- class py3plex.uncertainty.types.StatSeries(index: List[Any], mean: ndarray, std: ndarray | None = None, quantiles: Dict[float, ndarray] | None = None, meta: Dict[str, Any] = <factory>)
Bases:
object
A series of statistics with optional uncertainty information.
This is the canonical result type for statistics that return a value per node, time point, or other index.
In deterministic mode (uncertainty=False):
- mean contains the single-run values
- std = None
- quantiles = None
- certainty = 1.0

In uncertain mode (uncertainty=True):
- mean contains the average across runs
- std contains the standard deviation
- quantiles contains percentile arrays (e.g., {0.025: arr, 0.975: arr})
- certainty < 1.0
- Parameters:
index (list[Any]) – The index labels (e.g., node IDs, time points).
mean (np.ndarray) – The mean values, shape (n,).
std (np.ndarray or None) – The standard deviations, shape (n,), or None if deterministic.
quantiles (dict[float, np.ndarray] or None) – Quantile arrays, e.g., {0.025: (n,), 0.975: (n,)}, or None.
meta (dict[str, Any]) – Optional metadata (e.g., algorithm parameters, run info).
Examples
>>> import numpy as np
>>> # Deterministic
>>> s = StatSeries(
...     index=['a', 'b', 'c'],
...     mean=np.array([1.0, 2.0, 3.0])
... )
>>> s.is_deterministic
True
>>> s.certainty
1.0
>>> np.array(s)
array([1., 2., 3.])

>>> # With uncertainty
>>> s_unc = StatSeries(
...     index=['a', 'b', 'c'],
...     mean=np.array([1.0, 2.0, 3.0]),
...     std=np.array([0.1, 0.2, 0.15]),
...     quantiles={0.025: np.array([0.8, 1.6, 2.7]),
...                0.975: np.array([1.2, 2.4, 3.3])}
... )
>>> s_unc.is_deterministic
False
>>> s_unc.certainty
0.0
- property certainty: float
Return certainty level.
Returns 1.0 if deterministic, 0.0 otherwise. In the future, this could return a richer metric.
- index: List[Any]
- property is_deterministic: bool
Return True if this is a deterministic result (no uncertainty).
- mean: ndarray
- meta: Dict[str, Any]
- quantiles: Dict[float, ndarray] | None = None
- std: ndarray | None = None
- to_dict() Dict[Any, Dict[str, Any]]
Convert to dictionary mapping index -> stats dict.
- Returns:
Dictionary with keys from index, values are dicts with ‘mean’, optionally ‘std’ and ‘quantiles’.
- Return type:
dict
- class py3plex.uncertainty.types.UncertaintyConfig(mode: UncertaintyMode = UncertaintyMode.OFF, default_n_runs: int = 50, default_resampling: ResamplingStrategy = ResamplingStrategy.SEED)
Bases:
object
Configuration for uncertainty estimation.
- mode
The current uncertainty mode.
- Type:
UncertaintyMode
- default_n_runs
Default number of runs for uncertainty estimation.
- Type:
int
- default_resampling
Default resampling strategy.
- Type:
ResamplingStrategy
- default_n_runs: int = 50
- default_resampling: ResamplingStrategy = 'seed'
- mode: UncertaintyMode = 'off'
- class py3plex.uncertainty.types.UncertaintyMode(value)
Bases:
Enum
Global mode for uncertainty computation.
- OFF
Always deterministic, std=None.
- Type:
str
- ON
Try to compute uncertainty when supported.
- Type:
str
- AUTO
Only do it if explicitly requested by a function.
- Type:
str
- AUTO = 'auto'
- OFF = 'off'
- ON = 'on'
Context management for global uncertainty settings.
This module provides context variables and context managers for controlling uncertainty estimation globally across a pipeline or workflow.
- py3plex.uncertainty.context.get_uncertainty_config() UncertaintyConfig
Get the current uncertainty configuration.
- Returns:
The current configuration from the context.
- Return type:
UncertaintyConfig
Examples
>>> from py3plex.uncertainty import get_uncertainty_config
>>> cfg = get_uncertainty_config()
>>> cfg.mode
<UncertaintyMode.OFF: 'off'>
>>> cfg.default_n_runs
50
- py3plex.uncertainty.context.set_uncertainty_config(config: UncertaintyConfig) Token
Set the uncertainty configuration.
- Parameters:
config (UncertaintyConfig) – The new configuration to set.
- Returns:
A token that can be used to reset the configuration.
- Return type:
Token
Examples
>>> from py3plex.uncertainty import set_uncertainty_config, UncertaintyConfig
>>> from py3plex.uncertainty import UncertaintyMode
>>> cfg = UncertaintyConfig(mode=UncertaintyMode.ON, default_n_runs=100)
>>> token = set_uncertainty_config(cfg)
>>> # ... do work ...
>>> _uncertainty_ctx.reset(token)  # restore previous config
- py3plex.uncertainty.context.uncertainty_disabled()
Context manager to disable uncertainty estimation.
Within this context, all functions will compute deterministic results (unless explicitly requested with uncertainty=True).
- Yields:
None
Examples
>>> from py3plex.uncertainty import uncertainty_disabled, uncertainty_enabled
>>>
>>> with uncertainty_enabled():
...     # Nested context to temporarily disable
...     with uncertainty_disabled():
...         result = multilayer_pagerank(network)
>>> result.is_deterministic
True
- py3plex.uncertainty.context.uncertainty_enabled(*, n_runs: int | None = None, resampling: ResamplingStrategy | None = None)
Context manager to enable uncertainty estimation.
Within this context, all supported functions will compute uncertainty by default (unless explicitly disabled with uncertainty=False).
- Parameters:
n_runs (int, optional) – Number of runs for uncertainty estimation. If None, uses the default from the current config.
resampling (ResamplingStrategy, optional) – Resampling strategy to use. If None, uses the default from the current config.
- Yields:
None
Examples
>>> from py3plex.uncertainty import uncertainty_enabled
>>> from py3plex.algorithms.centrality_toolkit import multilayer_pagerank
>>>
>>> # Without uncertainty
>>> result = multilayer_pagerank(network)
>>> result.is_deterministic
True
>>>
>>> # With uncertainty
>>> with uncertainty_enabled(n_runs=100):
...     result = multilayer_pagerank(network)
>>> result.is_deterministic
False
>>> result.std is not None
True
Notes
This uses contextvars, so it’s thread-safe and async-safe. Each context gets its own configuration.
Uncertainty estimation helpers.
This module provides the main helper function for estimating uncertainty in network statistics via resampling or perturbation strategies.
- py3plex.uncertainty.estimation.estimate_uncertainty(network: multi_layer_network, metric_fn: Callable[[multi_layer_network], Dict[Any, float] | float | ndarray], *, n_runs: int | None = None, resampling: ResamplingStrategy | None = None, random_seed: int | None = None, perturbation_params: Dict[str, Any] | None = None) StatSeries | float
Estimate uncertainty for a network statistic.
This is the main entry point for adding uncertainty to any statistic. It runs the metric function multiple times with different random seeds or network perturbations, then computes mean, std, and quantiles.
- Parameters:
network (multi_layer_network) – The network to analyze.
metric_fn (callable) – Function that computes a statistic. Must accept a network and return:
- dict[node, float] for per-node statistics
- float for scalar statistics
- np.ndarray for array statistics
n_runs (int, optional) – Number of runs for uncertainty estimation. If None, uses the default from the current uncertainty config.
resampling (ResamplingStrategy, optional) – Strategy for resampling. If None, uses default from config.
random_seed (int, optional) – Random seed for reproducibility.
perturbation_params (dict, optional) – Parameters for perturbation strategies. For example: {“edge_drop_p”: 0.05, “node_drop_p”: 0.02}
- Returns:
If metric_fn returns a dict: StatSeries with uncertainty info. If metric_fn returns a scalar: float (mean value). The returned object has mean, std, and quantiles populated.
- Return type:
StatSeries or float
Examples
>>> from py3plex.uncertainty import estimate_uncertainty, ResamplingStrategy
>>> from py3plex.core import multinet
>>>
>>> # Create a simple network
>>> net = multinet.multi_layer_network(directed=False)
>>> net.add_edges([["a", "L0", "b", "L0", 1.0]], input_type="list")
>>>
>>> # Define a metric function
>>> def my_metric(network):
...     # Return per-node degree
...     degrees = {}
...     for node in network.get_nodes():
...         degrees[node] = network.core_network.degree(node)
...     return degrees
>>>
>>> # Estimate uncertainty
>>> result = estimate_uncertainty(
...     net,
...     my_metric,
...     n_runs=50,
...     resampling=ResamplingStrategy.PERTURBATION,
...     perturbation_params={"edge_drop_p": 0.1}
... )
>>> result.mean  # Mean degree values
>>> result.std  # Std deviation of degrees
>>> result.quantiles  # Confidence intervals
Notes
For SEED strategy: runs the metric with different random seeds
For PERTURBATION strategy: applies edge/node drops then recomputes
For BOOTSTRAP: resamples nodes/edges with replacement (not yet implemented)
For JACKKNIFE: leave-one-out resampling (not yet implemented)
Algorithms
Community Detection
- py3plex.algorithms.community_detection.community_wrapper.NoRC_communities(network, verbose=True, clustering_scheme='kmeans', output='mapping', prob_threshold=0.001, parallel_step=8, community_range=None, fine_range=3)
- py3plex.algorithms.community_detection.community_wrapper.infomap_communities(graph: Graph, binary: str = './infomap', edgelist_file: str = './tmp/tmpedgelist.txt', multiplex: bool = False, verbose: bool = False, overlapping: bool = False, iterations: int = 200, output: str = 'mapping', seed: int | None = None) Dict[Any, int] | Dict[Any, List[int]]
Detect communities using the Infomap algorithm.
- Parameters:
graph – Input graph (NetworkX graph or multi_layer_network)
binary – Path to Infomap binary (default: “./infomap”)
edgelist_file – Temporary file for edgelist (default: “./tmp/tmpedgelist.txt”)
multiplex – Whether to use multiplex mode (default: False)
verbose – Whether to show verbose output (default: False)
overlapping – Whether to detect overlapping communities (default: False)
iterations – Number of iterations (default: 200)
output – Output format - “mapping” or “partition” (default: “mapping”)
seed – Random seed for reproducibility (default: None). Note: requires an Infomap binary that supports the --seed parameter
- Returns:
Dict mapping nodes to community IDs (if output=”mapping”) or Dict mapping community IDs to lists of nodes (if output=”partition”)
- Raises:
FileNotFoundError – If Infomap binary is not found
PermissionError – If Infomap binary is not executable
Examples
>>> # Using with seed for reproducibility
>>> partition = infomap_communities(graph, seed=42)
>>>
>>> # Get partition format instead of mapping
>>> communities = infomap_communities(graph, output="partition")
- py3plex.algorithms.community_detection.community_wrapper.louvain_communities(network, output='mapping')
- py3plex.algorithms.community_detection.community_wrapper.parse_infomap(outfile)
- py3plex.algorithms.community_detection.community_wrapper.run_infomap(infile: str, multiplex: bool = True, overlapping: bool = False, binary: str = './infomap', verbose: bool = True, iterations: int = 1000, seed: int | None = None) None
Multilayer Modularity Maximization (Mucha et al., 2010)
This module implements multilayer modularity quality function and optimization algorithms for community detection in multilayer/multiplex networks.
References
Mucha et al., “Community Structure in Time-Dependent, Multiscale, and Multiplex Networks”, Science 328:876-878 (2010)
- py3plex.algorithms.community_detection.multilayer_modularity.build_supra_modularity_matrix(network: Any, gamma: float | Dict[Any, float] = 1.0, omega: float | ndarray = 1.0, weight: str = 'weight') Tuple[ndarray, List[Tuple[Any, Any]]]
Build the supra-modularity matrix B for multilayer network.
The supra-modularity matrix is: B_{iα,jβ} = (A^[α]_ij - γ^[α] k_i^α k_j^α / 2m_α) δ_αβ + δ_ij ω_αβ
This matrix can be used for spectral community detection methods.
- Parameters:
network – py3plex multi_layer_network object
gamma – Resolution parameter(s)
omega – Inter-layer coupling strength
weight – Edge weight attribute
- Returns:
Tuple of (modularity_matrix, node_layer_list):
- modularity_matrix: Supra-modularity matrix B (NL × NL)
- node_layer_list: List of (node, layer) tuples corresponding to matrix indices
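Example

The matrix can feed spectral methods, as noted above; a minimal sketch of a two-way spectral split via the leading eigenvector, assuming an undirected network net so that B is symmetric:

>>> import numpy as np
>>> B, node_layers = build_supra_modularity_matrix(net, gamma=1.0, omega=1.0)
>>> vals, vecs = np.linalg.eigh(B)
>>> leading = vecs[:, np.argmax(vals)]
>>> split = {nl: int(v > 0) for nl, v in zip(node_layers, leading)}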
- py3plex.algorithms.community_detection.multilayer_modularity.louvain_multilayer(network: Any, gamma: float | Dict[Any, float] = 1.0, omega: float | ndarray = 1.0, weight: str = 'weight', max_iter: int = 100, random_state: int | None = None) Dict[Tuple[Any, Any], int]
Generalized Louvain algorithm for multilayer networks.
This implements the multilayer Louvain method as described in Mucha et al. (2010), which greedily maximizes the multilayer modularity quality function.
- Complexity:
- Time: O(n × L × d × k) per iteration, where:
n = number of nodes per layer
L = number of layers
d = average degree
k = number of communities
Typical: O(n × L) iterations until convergence
Worst case: O((n×L)²) for dense networks
Space: O((n×L)²) for supra-adjacency matrix (use sparse for large networks)
- Parameters:
network – py3plex multi_layer_network object
gamma – Resolution parameter(s)
omega – Inter-layer coupling strength
weight – Edge weight attribute
max_iter – Maximum number of iterations
random_state – Random seed for reproducibility
- Returns:
Dictionary mapping (node, layer) tuples to community IDs
Examples
>>> from py3plex.core import multinet
>>> from py3plex.algorithms.community_detection.multilayer_modularity import louvain_multilayer
>>>
>>> network = multinet.multi_layer_network(directed=False)
>>> network.add_edges([
...     ['A', 'L1', 'B', 'L1', 1],
...     ['B', 'L1', 'C', 'L1', 1],
...     ['A', 'L2', 'C', 'L2', 1]
... ], input_type='list')
>>>
>>> communities = louvain_multilayer(network, gamma=1.0, omega=1.0, random_state=42)
>>> print(communities)
Note
For reproducible results, always set random_state parameter.
- py3plex.algorithms.community_detection.multilayer_modularity.multilayer_modularity(network: Any, communities: Dict[Tuple[Any, Any], int], gamma: float | Dict[Any, float] = 1.0, omega: float | ndarray = 1.0, weight: str = 'weight') float
Calculate multilayer modularity quality function (Mucha et al., 2010).
The multilayer modularity is defined as: Q = (1/2μ) Σ_{ijαβ} [(A^[α]_ij - γ^[α]P^[α]_ij)δ_αβ + δ_ij ω_αβ] δ(g_iα, g_jβ)
where:
- A^[α]_ij is the adjacency matrix of layer α
- P^[α]_ij is the null model (e.g., Newman-Girvan: k_i^α k_j^α / (2m_α))
- γ^[α] is the resolution parameter for layer α
- ω_αβ is the inter-layer coupling strength
- δ_αβ = 1 if α=β, else 0 (Kronecker delta)
- δ_ij = 1 if i=j, else 0
- δ(g_iα, g_jβ) = 1 if node i in layer α and node j in layer β are in the same community
- μ is the total edge weight in the supra-network
- Parameters:
network – py3plex multi_layer_network object
communities – Dictionary mapping (node, layer) tuples to community IDs
gamma – Resolution parameter(s). Can be:
- Single float: same resolution for all layers
- Dict[layer, float]: layer-specific resolution parameters
omega – Inter-layer coupling strength. Can be:
- Single float: uniform coupling between all layer pairs
- np.ndarray: layer-pair specific coupling matrix (L×L)
weight – Edge weight attribute (default: “weight”)
- Returns:
Modularity value Q ∈ [-1, 1]
Examples
>>> from py3plex.core import multinet
>>> from py3plex.algorithms.community_detection.multilayer_modularity import multilayer_modularity
>>>
>>> # Create a simple multilayer network
>>> network = multinet.multi_layer_network(directed=False)
>>> network.add_edges([
...     ['A', 'L1', 'B', 'L1', 1],
...     ['B', 'L1', 'C', 'L1', 1],
...     ['A', 'L2', 'C', 'L2', 1]
... ], input_type='list')
>>>
>>> # Assign communities
>>> communities = {
...     ('A', 'L1'): 0, ('B', 'L1'): 0, ('C', 'L1'): 1,
...     ('A', 'L2'): 0, ('C', 'L2'): 0
... }
>>>
>>> # Calculate modularity
>>> Q = multilayer_modularity(network, communities, gamma=1.0, omega=1.0)
>>> print(f"Modularity: {Q:.3f}")
This module implements Louvain community detection.
- class py3plex.algorithms.community_detection.community_louvain.Status
Bases:
object
Holds several pieces of data in one struct.
Could be replaced by a namedtuple, but that would introduce a dependency on Python 2.6.
- copy()
Perform a deep copy of status
- degrees: dict = {}
- gdegrees: dict = {}
- init(graph, weight, part=None)
Initialize the status of a graph with every node in one community
- internals: dict = {}
- node2com: dict = {}
- total_weight = 0
- py3plex.algorithms.community_detection.community_louvain.best_partition(graph: Graph, partition: Dict | None = None, weight: str = 'weight', resolution: float = 1.0, randomize: bool = False) Dict
Compute the partition of the graph nodes which maximises the modularity (or tries to) using the Louvain heuristics
This is the partition of highest modularity, i.e. the highest partition of the dendrogram generated by the Louvain algorithm.
- Parameters:
graph (networkx.Graph) – the networkx graph which is decomposed
partition (dict, optional) – the algorithm will start from this partition of the nodes; a dictionary where keys are nodes and values are communities
weight (str, optional) – the key in graph to use as weight. Defaults to ‘weight’
resolution (double, optional) – Changes the size of the communities; defaults to 1. Represents the time described in “Laplacian Dynamics and Multiscale Modular Structure in Networks”, R. Lambiotte, J.-C. Delvenne, M. Barahona
randomize (boolean, optional) – Randomizes the node and community evaluation order to produce different partitions at each call
- Returns:
partition – The partition, with communities numbered from 0 to number of communities - 1
- Return type:
dictionary
- Raises:
NetworkXError – If the graph is directed (the Louvain method requires an undirected graph).
Notes
Uses Louvain algorithm
References
Blondel, V.D. et al., “Fast unfolding of communities in large networks”. J. Stat. Mech 10008, 1-12 (2008).
Examples
>>> # Basic usage
>>> G = nx.erdos_renyi_graph(100, 0.01)
>>> part = best_partition(G)
>>> # Another example: display a graph with its communities
>>> # (better with karate_club_graph() as in the networkx examples;
>>> # Erdős–Rényi graphs have no true community structure)
>>> import matplotlib.pyplot as plt
>>> G = nx.erdos_renyi_graph(30, 0.05)
>>> # first compute the best partition
>>> partition = best_partition(G)
>>> # drawing
>>> size = float(len(set(partition.values())))
>>> pos = nx.spring_layout(G)
>>> count = 0.
>>> for com in set(partition.values()):
...     count += 1.
...     list_nodes = [node for node in partition.keys() if partition[node] == com]
...     nx.draw_networkx_nodes(G, pos, list_nodes, node_size=20,
...                            node_color=str(count / size))
>>> nx.draw_networkx_edges(G, pos, alpha=0.5)
>>> plt.show()
- py3plex.algorithms.community_detection.community_louvain.generate_dendrogram(graph: Graph, part_init: Dict | None = None, weight: str = 'weight', resolution: float = 1.0, randomize: bool = False) List[Dict]
Find communities in the graph and return the associated dendrogram
A dendrogram is a tree in which each level is a partition of the graph nodes. Level 0 is the first partition, which contains the smallest communities; the best partition is at level len(dendrogram) - 1. The higher the level, the bigger the communities
- Parameters:
graph (networkx.Graph) – the networkx graph which will be decomposed
part_init (dict, optional) – the algorithm will start from this partition of the nodes; a dictionary where keys are nodes and values are communities
weight (str, optional) – the key in graph to use as weight. Defaults to ‘weight’
resolution (double, optional) – Changes the size of the communities; defaults to 1. Represents the time described in “Laplacian Dynamics and Multiscale Modular Structure in Networks”, R. Lambiotte, J.-C. Delvenne, M. Barahona
- Returns:
dendrogram – a list of partitions, i.e. dictionaries where the keys of level i+1 are the values of level i, and the keys of the first level are the nodes of the graph
- Return type:
list of dictionaries
- Raises:
TypeError – If the graph is not a networkx.Graph
Notes
Uses Louvain algorithm
References
Blondel, V.D. et al., “Fast unfolding of communities in large networks”. J. Stat. Mech 10008, 1-12 (2008).
Examples
>>> G = nx.erdos_renyi_graph(100, 0.01)
>>> dendo = generate_dendrogram(G)
>>> for level in range(len(dendo) - 1):
...     print("partition at level", level, "is", partition_at_level(dendo, level))
- py3plex.algorithms.community_detection.community_louvain.induced_graph(partition: Dict, graph: Graph, weight: str = 'weight') Graph
Produce the graph where nodes are the communities
There is a link of weight w between two communities if the sum of the weights of the links between their elements is w
- Parameters:
partition (dict) – a dictionary where keys are graph nodes and values the part the node belongs to
graph (networkx.Graph) – the initial graph
weight (str, optional) – the key in graph to use as weight. Default to ‘weight’
- Returns:
g – a networkx graph where nodes are the parts
- Return type:
networkx.Graph
Examples
>>> n = 5
>>> g = nx.complete_graph(2 * n)
>>> part = dict()
>>> for node in g.nodes():
...     part[node] = node % 2
>>> ind = induced_graph(part, g)
>>> goal = nx.Graph()
>>> goal.add_weighted_edges_from([(0, 1, n * n), (0, 0, n * (n - 1) / 2),
...                               (1, 1, n * (n - 1) / 2)])
>>> nx.is_isomorphic(ind, goal)
True
- py3plex.algorithms.community_detection.community_louvain.load_binary(data)
Load a binary graph as used by the C++ implementation of this algorithm
- py3plex.algorithms.community_detection.community_louvain.modularity(partition: Dict, graph: Graph, weight: str = 'weight') float
Compute the modularity of a partition of a graph
- Parameters:
partition (dict) – the partition of the nodes, i.e. a dictionary where keys are nodes and values are communities
graph (networkx.Graph) – the networkx graph which is decomposed
weight (str, optional) – the key in graph to use as weight. Defaults to ‘weight’
- Returns:
modularity – The modularity
- Return type:
float
- Raises:
KeyError – If the partition is not a partition of all graph nodes
ValueError – If the graph has no links
TypeError – If graph is not a networkx.Graph
References
Newman, M.E.J. & Girvan, M., “Finding and evaluating community structure in networks”. Physical Review E 69, 26113 (2004).
Examples
>>> G = nx.erdos_renyi_graph(100, 0.01)
>>> part = best_partition(G)
>>> modularity(part, G)
- py3plex.algorithms.community_detection.community_louvain.partition_at_level(dendrogram: List[Dict], level: int) Dict
Return the partition of the nodes at the given level
A dendrogram is a tree in which each level is a partition of the graph nodes. Level 0 is the first partition, which contains the smallest communities; the best partition is at level len(dendrogram) - 1. The higher the level, the bigger the communities
- Parameters:
dendrogram (list of dict) – a list of partitions, i.e. dictionaries where the keys of level i+1 are the values of level i.
level (int) – the level which belongs to [0..len(dendrogram)-1]
- Returns:
partition – A dictionary where keys are the nodes and values are the community each node belongs to
- Return type:
dictionary
- Raises:
KeyError – If the dendrogram is not well formed or the level is too high
Examples
>>> G = nx.erdos_renyi_graph(100, 0.01)
>>> dendrogram = generate_dendrogram(G)
>>> for level in range(len(dendrogram) - 1):
...     print("partition at level", level, "is", partition_at_level(dendrogram, level))
Community quality measures and metrics.
This module provides various measures for assessing the quality of community partitions in networks, including modularity, size distribution, and other statistical metrics.
- py3plex.algorithms.community_detection.community_measures.modularity(G: Graph, communities: Dict[Any, List[Any]], weight: str = 'weight') float
Calculate modularity of a graph partition.
- Parameters:
G – NetworkX graph
communities – Dictionary mapping community IDs to node lists
weight – Edge weight attribute (default: “weight”)
- Returns:
Modularity value
- py3plex.algorithms.community_detection.community_measures.number_of_communities(network_partition: Dict[Any, List[Any]]) int
Count number of communities in a partition.
- Parameters:
network_partition – Dictionary mapping community IDs to node lists
- Returns:
Number of communities
- py3plex.algorithms.community_detection.community_measures.size_distribution(network_partition: Dict[Any, List[Any]]) ndarray
Calculate size distribution of communities.
- Parameters:
network_partition – Dictionary mapping community IDs to node lists
- Returns:
Array of community sizes
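Examples
A small end-to-end sketch tying the three helpers above together; the two-block partition of the karate club graph is a toy example:
>>> import networkx as nx
>>> G = nx.karate_club_graph()
>>> partition = {0: list(range(0, 17)), 1: list(range(17, 34))}  # community ID -> node list
>>> Q = modularity(G, partition)
>>> n_comms = number_of_communities(partition)  # 2
>>> sizes = size_distribution(partition)        # array of community sizes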
Multilayer Synthetic Graph Generation for Community Detection Benchmarks
This module implements synthetic multilayer/multiplex graph generators with ground-truth community structure for benchmarking community detection algorithms.
Includes:
- Multilayer LFR (Lancichinetti-Fortunato-Radicchi) benchmark
- Coupled/Interdependent Erdős-Rényi models
- Support for overlapping communities across layers
- Support for partial node presence across layers
References
Lancichinetti et al., “Benchmark graphs for testing community detection algorithms”, Phys. Rev. E 78, 046110 (2008)
Granell et al., “Benchmark model to assess community structure in evolving networks”, Phys. Rev. E 92, 012805 (2015)
- py3plex.algorithms.community_detection.multilayer_benchmark.generate_coupled_er_multilayer(n: int, layers: List[str], p: float | List[float] = 0.1, omega: float = 1.0, coupling_probability: float = 1.0, directed: bool = False, seed: int | None = None) Any
Generate coupled/interdependent Erdős-Rényi multilayer networks.
Creates random Erdős-Rényi graphs in each layer and couples nodes across layers with specified coupling strength and probability.
- Parameters:
n – Number of nodes
layers – List of layer names
p – Edge probability per layer. Can be float (same for all layers) or list of floats per layer.
omega – Inter-layer coupling strength (weight of identity links)
coupling_probability – Probability that a node has inter-layer coupling. Range: [0, 1]
- 1.0 = all nodes coupled (full multiplex)
- <1.0 = partial coupling (interdependent networks)
directed – Whether to generate directed networks
seed – Random seed for reproducibility
- Returns:
py3plex multi_layer_network object
Examples
>>> from py3plex.algorithms.community_detection.multilayer_benchmark import generate_coupled_er_multilayer
>>>
>>> # Full multiplex ER network
>>> network = generate_coupled_er_multilayer(
...     n=100,
...     layers=['L1', 'L2', 'L3'],
...     p=0.1,
...     omega=1.0,
...     coupling_probability=1.0
... )
>>>
>>> # Partially coupled (interdependent)
>>> network = generate_coupled_er_multilayer(
...     n=100,
...     layers=['L1', 'L2'],
...     p=0.1,
...     omega=0.5,
...     coupling_probability=0.5  # Only 50% of nodes coupled
... )
- py3plex.algorithms.community_detection.multilayer_benchmark.generate_multilayer_lfr(n: int, layers: List[str], tau1: float = 2.0, tau2: float = 1.5, mu: float | List[float] = 0.1, avg_degree: float | List[float] = 10.0, min_community: int = 20, max_community: int | None = None, community_persistence: float = 1.0, node_overlap: float = 1.0, overlapping_nodes: int = 0, overlapping_membership: int = 2, directed: bool = False, seed: int | None = None) Tuple[Any, Dict[Tuple[Any, str], Set[int]]]
Generate multilayer LFR benchmark networks with controllable community structure.
This extends the LFR benchmark to multilayer networks, allowing control over:
- Community persistence across layers (how many nodes keep their community)
- Node overlap across layers (which nodes appear in which layers)
- Overlapping communities (nodes belonging to multiple communities)
- Parameters:
n – Number of nodes
layers – List of layer names
tau1 – Power-law exponent for degree distribution (typically 2-3)
tau2 – Power-law exponent for community size distribution (typically 1-2)
mu – Mixing parameter (fraction of edges outside community). Can be float (same for all layers) or list of floats per layer. Range: [0, 1], where 0 = perfect communities, 1 = random
avg_degree – Average degree per layer. Can be float (same for all layers) or list of floats per layer.
min_community – Minimum community size
max_community – Maximum community size (default: n/2)
community_persistence – Probability that a node keeps its community from one layer to the next. Range: [0, 1]
- 1.0 = identical communities across all layers
- 0.0 = completely independent communities per layer
node_overlap – Fraction of nodes present in all layers. Range: [0, 1]
- 1.0 = all nodes in all layers (full multiplex)
- <1.0 = some nodes absent from some layers
overlapping_nodes – Number of nodes that belong to multiple communities within each layer
overlapping_membership – Number of communities each overlapping node belongs to
directed – Whether to generate directed networks
seed – Random seed for reproducibility
- Returns:
Tuple of (network, ground_truth_communities)
- network: py3plex multi_layer_network object
- ground_truth_communities: Dict mapping (node, layer) to Set of community IDs
Examples
>>> from py3plex.algorithms.community_detection.multilayer_benchmark import generate_multilayer_lfr
>>>
>>> # Generate with identical communities across layers
>>> network, communities = generate_multilayer_lfr(
...     n=100,
...     layers=['L1', 'L2', 'L3'],
...     mu=0.1,
...     community_persistence=1.0
... )
>>>
>>> # Generate with evolving communities
>>> network, communities = generate_multilayer_lfr(
...     n=100,
...     layers=['T0', 'T1', 'T2'],
...     mu=0.1,
...     community_persistence=0.7  # 70% of nodes keep their community
... )
- py3plex.algorithms.community_detection.multilayer_benchmark.generate_sbm_multilayer(n: int, layers: List[str], communities: List[Set[int]], p_in: float | List[float] = 0.3, p_out: float | List[float] = 0.05, community_persistence: float = 1.0, directed: bool = False, seed: int | None = None) Tuple[Any, Dict[Tuple[Any, str], int]]
Generate multilayer stochastic block model (SBM) networks.
Creates networks where nodes are divided into communities with different intra- and inter-community connection probabilities.
- Parameters:
n – Number of nodes
layers – List of layer names
communities – List of node sets defining initial communities
p_in – Intra-community edge probability per layer
p_out – Inter-community edge probability per layer
community_persistence – Probability nodes keep their community across layers
directed – Whether to generate directed networks
seed – Random seed for reproducibility
- Returns:
Tuple of (network, ground_truth_communities)
- network: py3plex multi_layer_network object
- ground_truth_communities: Dict mapping (node, layer) to community ID
Examples
>>> from py3plex.algorithms.community_detection.multilayer_benchmark import generate_sbm_multilayer
>>>
>>> # Define initial communities
>>> communities = [
...     {0, 1, 2, 3, 4},  # Community 0
...     {5, 6, 7, 8, 9}   # Community 1
... ]
>>>
>>> network, ground_truth = generate_sbm_multilayer(
...     n=10,
...     layers=['L1', 'L2'],
...     communities=communities,
...     p_in=0.7,
...     p_out=0.1,
...     community_persistence=0.8
... )
Statistics
Multilayer Network Statistics
This module implements various statistics for multilayer and multiplex networks, following standard definitions from multilayer network analysis literature.
References
Kivelä et al. (2014), “Multilayer networks”, J. Complex Networks 2(3), 203-271
De Domenico et al. (2013), “Mathematical formulation of multilayer networks”, PRX 3, 041022
Mucha et al. (2010), “Community Structure in Time-Dependent, Multiscale, and Multiplex Networks”, Science 328, 876-878
Authors: py3plex contributors Date: 2025
- py3plex.algorithms.statistics.multilayer_statistics.algebraic_connectivity(network: Any) float
Calculate algebraic connectivity (λ₂).
Formula: λ₂(ℒ)
Second smallest eigenvalue of the supra-Laplacian (Fiedler value).
Indicates global connectivity and diffusion efficiency of the multilayer system.
- Properties:
λ₀ = 0 always (associated with the constant eigenvector), using the 0-indexed ordering λ₀ ≤ λ₁ ≤ …
λ₁ > 0 if and only if the multilayer network is connected (this second smallest eigenvalue is the λ₂ of the 1-indexed notation above)
Larger λ₁ indicates better connectivity and faster diffusion/synchronization
- Parameters:
network – py3plex multi_layer_network object
- Returns:
Second smallest eigenvalue (Fiedler value)
Examples
>>> alg_conn = algebraic_connectivity(network)
- Reference:
Fiedler (1973), Sole-Ribalta et al. (2013)
- py3plex.algorithms.statistics.multilayer_statistics.community_participation_coefficient(network: Any, communities: Dict[Tuple[Any, Any], int], node: Any) float
Calculate participation coefficient for a node across community structure.
Measures how evenly a node’s connections are distributed across different communities, across all layers. A node with connections to many communities has high participation.
Formula: Pᵢ = 1 - Σₛ (kᵢₛ / kᵢ)²
where kᵢₛ is the number of connections node i has to community s, and kᵢ is the total degree of node i across all layers.
- Parameters:
network – py3plex multi_layer_network object
communities – Dictionary mapping (node, layer) to community ID
node – Node identifier (not node-layer tuple)
- Returns:
Participation coefficient value between 0 and 1
Examples
>>> communities = detect_communities(network)
>>> pc = community_participation_coefficient(network, communities, 'Alice')
>>> print(f"Participation: {pc:.3f}")
- Reference:
Guimerà & Amaral (2005), “Functional cartography of complex metabolic networks”
- py3plex.algorithms.statistics.multilayer_statistics.community_participation_entropy(network: Any, communities: Dict[Tuple[Any, Any], int], node: Any) float
Calculate participation entropy for a node across community structure.
Shannon entropy-based measure of how evenly a node distributes its connections across different communities. Higher entropy indicates more diverse community participation.
Formula: Hᵢ = -Σₛ (kᵢₛ / kᵢ) log(kᵢₛ / kᵢ)
where kᵢₛ is connections to community s, kᵢ is total degree.
- Parameters:
network – py3plex multi_layer_network object
communities – Dictionary mapping (node, layer) to community ID
node – Node identifier (not node-layer tuple)
- Returns:
Entropy value (higher = more diverse participation)
Examples
>>> entropy = community_participation_entropy(network, communities, 'Alice')
>>> print(f"Participation entropy: {entropy:.3f}")
- Reference:
Based on Shannon entropy applied to community structure
- py3plex.algorithms.statistics.multilayer_statistics.compute_modularity_score(network: Any, communities: Dict[Tuple[Any, Any], int], gamma: float = 1.0, omega: float = 1.0) float
Compute explicit multislice modularity score.
Direct computation of the modularity quality function for a given community partition, without running detection algorithms.
Formula: Q = (1/2μ) Σᵢⱼₐᵦ [(Aᵢⱼᵅ - γ·kᵢᵅkⱼᵅ/(2mₐ))δₐᵦ + ω·δᵢⱼ] δ(cᵢᵅ, cⱼᵝ)
- Parameters:
network – py3plex multi_layer_network object
communities – Dictionary mapping (node, layer) to community ID
gamma – Resolution parameter (default: 1.0)
omega – Inter-layer coupling strength (default: 1.0)
- Returns:
Modularity score Q (higher is better)
Examples
>>> communities = {('A', 'L1'): 0, ('B', 'L1'): 0, ('C', 'L1'): 1}
>>> Q = compute_modularity_score(network, communities)
>>> print(f"Modularity: {Q:.3f}")
- Reference:
Mucha et al. (2010), Science 328, 876-878
- py3plex.algorithms.statistics.multilayer_statistics.cross_layer_mutual_information(network: Any, layer_i: str, layer_j: str, bins: int = 10) float
Calculate cross-layer mutual information (I(Lᵢ; Lⱼ)).
Formula: I(Lᵢ; Lⱼ) = H(Lᵢ) + H(Lⱼ) - H(Lᵢ, Lⱼ)
Measures statistical dependence between degree distributions in two layers; quantifies how much knowing a node’s degree in one layer tells us about its degree in another layer.
- Variables:
H(Lᵢ) = entropy of degree distribution in layer i
H(Lⱼ) = entropy of degree distribution in layer j
H(Lᵢ, Lⱼ) = joint entropy of degree distributions
- Properties:
I = 0 when layers are independent
I > 0 indicates statistical dependence (higher = stronger)
I(Lᵢ; Lⱼ) ≤ min(H(Lᵢ), H(Lⱼ))
- Parameters:
network – py3plex multi_layer_network object
layer_i – First layer identifier
layer_j – Second layer identifier
bins – Number of bins for discretizing degree distributions
- Returns:
Mutual information value in bits
Examples
>>> mi = cross_layer_mutual_information(network, 'L1', 'L2', bins=10)
>>> print(f"Mutual information: {mi:.3f} bits")
- Reference:
Cover & Thomas (2006), “Elements of Information Theory”; De Domenico et al. (2015), “Structural reducibility”
- py3plex.algorithms.statistics.multilayer_statistics.cross_layer_redundancy_entropy(network: Any) float
Calculate cross-layer redundancy entropy (H_redundancy).
Formula: H_r = -Σᵢⱼ rᵢⱼ log₂(rᵢⱼ), where rᵢⱼ is the normalized edge overlap between layers i and j.
Measures diversity in structural redundancy across layer pairs; high entropy indicates varied overlap patterns, low entropy indicates uniform redundancy.
- Variables:
rᵢⱼ = edge_overlap(i,j) / Σₐᵦ edge_overlap(α,β)
Normalized overlap proportion for each layer pair
- Parameters:
network – py3plex multi_layer_network object
- Returns:
Entropy value in bits
Examples
>>> entropy = cross_layer_redundancy_entropy(network)
>>> print(f"Cross-layer redundancy entropy: {entropy:.3f}")
- Reference:
Bianconi (2018), “Multilayer Networks: Structure and Function”
- py3plex.algorithms.statistics.multilayer_statistics.degree_vector(network: Any, node: Any, weighted: bool = False) Dict[str, float]
Calculate degree vector (kᵢ).
Formula: kᵢ = (kᵢ¹, kᵢ², …, kᵢᴸ)
Node degree in each layer; can be analyzed via mean, variance, or entropy to capture node versatility.
- Variables:
kᵢᵅ = degree of node i in layer α
For undirected networks: kᵢᵅ = Σⱼ Aᵢⱼᵅ
- Parameters:
network – py3plex multi_layer_network object
node – Node identifier
weighted – If True, return strength instead of degree
- Returns:
Dictionary mapping layer to degree/strength
Examples
>>> degrees = degree_vector(network, 'A')
>>> print(f"Degree in layer L1: {degrees['L1']}")
- Reference:
Kivelä et al. (2014), J. Complex Networks 2(3), 203-271
- py3plex.algorithms.statistics.multilayer_statistics.edge_overlap(network: Any, layer_i: str, layer_j: str) float
Calculate edge overlap (ω^αβ).
Formula: ω^αβ = |Eₐ ∩ Eᵦ| / |Eₐ ∪ Eᵦ|
Jaccard similarity of edge sets between two layers; measures structural redundancy.
- Variables:
Eₐ = set of edges in layer α
Eᵦ = set of edges in layer β
|·| = cardinality (number of elements)
- Parameters:
network – py3plex multi_layer_network object
layer_i – First layer identifier (α)
layer_j – Second layer identifier (β)
- Returns:
Overlap coefficient between 0 and 1 (Jaccard similarity)
Examples
>>> overlap = edge_overlap(network, 'L1', 'L2')
- Reference:
Kivelä et al. (2014), J. Complex Networks 2(3), 203-271
- py3plex.algorithms.statistics.multilayer_statistics.entropy_of_multiplexity(network: Any) float
Calculate entropy of multiplexity (Hₘ).
Formula: Hₘ = -Σₐ pₐ log₂(pₐ), where pₐ = Eₐ / Σᵦ Eᵦ
Shannon entropy of layer contributions; measures layer diversity.
- Variables:
pₐ = proportion of edges in layer α
Eₐ = number of edges in layer α
log₂ gives entropy in bits
- Properties:
Hₘ = 0 when all edges are in one layer (minimum entropy/diversity)
Hₘ = log₂(L) when edges are uniformly distributed across L layers (maximum entropy)
- Parameters:
network – py3plex multi_layer_network object
- Returns:
Entropy value in bits
Examples
>>> entropy = entropy_of_multiplexity(network)
- Reference:
De Domenico et al. (2013), Shannon (1948)
- py3plex.algorithms.statistics.multilayer_statistics.inter_layer_assortativity(network: Any, layer_i: str, layer_j: str) float
Calculate inter-layer assortativity (rᴵ).
Formula: r^αβ = cov(k^α, k^β) / (σₐ σᵦ) = corr(k^α, k^β)
Measures whether nodes with similar degrees tend to connect across different layers.
- Variables:
k^α = degree vector in layer α
k^β = degree vector in layer β
σₐ, σᵦ = standard deviations of degrees in layers α and β
Equivalent to the Pearson correlation of the degree vectors
- Parameters:
network – py3plex multi_layer_network object
layer_i – First layer identifier (α)
layer_j – Second layer identifier (β)
- Returns:
Assortativity coefficient
Examples
>>> assort = inter_layer_assortativity(network, 'L1', 'L2')
- Reference:
Newman (2002), Nicosia & Latora (2015)
- py3plex.algorithms.statistics.multilayer_statistics.inter_layer_coupling_strength(network: Any, layer_i: str, layer_j: str) float
Calculate inter-layer coupling strength (C^αβ).
Formula: C^αβ = (1/N_αβ) Σᵢ wᵢ^αβ
Average weight of inter-layer connections between corresponding nodes in two layers. Quantifies cross-layer connectivity.
- Variables:
N_αβ = number of nodes present in both layers α and β
wᵢ^αβ = weight of the inter-layer edge connecting node i in layer α to node i in layer β
- Parameters:
network – py3plex multi_layer_network object
layer_i – First layer identifier (α)
layer_j – Second layer identifier (β)
- Returns:
Average coupling strength
Examples
>>> coupling = inter_layer_coupling_strength(network, 'L1', 'L2')
- Reference:
De Domenico et al. (2013), Physical Review X 3(4), 041022
- Contracts:
Precondition: network must not be None
Precondition: layer_i and layer_j must be non-empty strings
Postcondition: result is non-negative (weights are non-negative)
Postcondition: result is not NaN
- py3plex.algorithms.statistics.multilayer_statistics.inter_layer_degree_correlation(network: Any, layer_i: str, layer_j: str) float
Calculate inter-layer degree correlation (r^αβ).
Formula: r^αβ = Σᵢ(kᵢᵅ - k̄ᵅ)(kᵢᵝ - k̄ᵝ) / [√(Σᵢ(kᵢᵅ - k̄ᵅ)²) √(Σᵢ(kᵢᵝ - k̄ᵝ)²)]
Pearson correlation of node degrees between two layers; reveals if highly connected nodes in one layer are also central in others.
- Variables:
kᵢᵅ = degree of node i in layer α
k̄ᵅ = mean degree in layer α
The sum runs over nodes present in both layers
- Parameters:
network – py3plex multi_layer_network object
layer_i – First layer identifier (α)
layer_j – Second layer identifier (β)
- Returns:
Pearson correlation coefficient between -1 and 1
Examples
>>> corr = inter_layer_degree_correlation(network, 'L1', 'L2')
- Reference:
Battiston et al. (2014), Nicosia & Latora (2015)
- py3plex.algorithms.statistics.multilayer_statistics.inter_layer_dependence_entropy(network: Any, layer_i: str, layer_j: str) float
Calculate inter-layer dependence entropy (H_dep).
Formula: H_dep = -Σₙ pₙ log₂(pₙ), where pₙ is the proportion of inter-layer edges for each node n connecting layers i and j.
Measures heterogeneity in how nodes couple the two layers; high entropy indicates diverse coupling patterns, low entropy indicates uniform coupling.
- Variables:
pₙ = proportion of inter-layer edges incident to node n
The total runs over all nodes connecting the two layers
- Parameters:
network – py3plex multi_layer_network object
layer_i – First layer identifier
layer_j – Second layer identifier
- Returns:
Entropy value in bits
Examples
>>> entropy = inter_layer_dependence_entropy(network, 'L1', 'L2')
>>> print(f"Inter-layer dependence entropy: {entropy:.3f}")
- Reference:
De Domenico et al. (2015), “Ranking in interconnected multilayer networks”
- py3plex.algorithms.statistics.multilayer_statistics.interdependence(network: Any, sample_size: int = 100) float
Calculate interdependence (λ).
Formula: λ = ⟨dᴹᴸ⟩ / ⟨dᵃᵛᵍ⟩
Quantifies how much shortest-path communication depends on inter-layer connections.
- Variables:
dᵢⱼᴹᴸ = shortest path from node i to node j in the full multilayer network
dᵢⱼᵃᵛᵍ = (1/L) Σₐ dᵢⱼᵅ is the average shortest path across individual layers
⟨·⟩ = average over sampled node pairs
- Interpretation:
λ < 1: multilayer connectivity reduces path lengths (positive interdependence)
λ ≈ 1: inter-layer connections provide little benefit
λ > 1: multilayer structure increases path lengths (rare)
- Parameters:
network – py3plex multi_layer_network object
sample_size – Number of node pairs to sample for estimation
- Returns:
Interdependence ratio
Examples
>>> interdep = interdependence(network, sample_size=50)
- Reference:
Gomez et al. (2013), Buldyrev et al. (2010)
- py3plex.algorithms.statistics.multilayer_statistics.interlayer_degree_correlation_matrix(network: Any) Tuple[ndarray, list]
Calculate inter-layer degree correlation matrix.
Computes Pearson correlation coefficients for node degrees between all pairs of layers, organized as a symmetric correlation matrix.
Formula: Matrix[α,β] = r^αβ = corr(k^α, k^β)
where k^α and k^β are degree vectors for layers α and β over common nodes.
- Properties:
Diagonal elements are 1.0 (self-correlation)
Off-diagonal elements in [-1, 1]
Symmetric matrix
Positive values indicate positive degree correlation
Negative values indicate negative degree correlation
- Parameters:
network – py3plex multi_layer_network object
- Returns:
correlation_matrix: 2D numpy array of shape (num_layers, num_layers)
layer_labels: List of layer names corresponding to matrix indices
- Return type:
Tuple of (correlation_matrix, layer_labels)
Examples
>>> corr_matrix, layers = interlayer_degree_correlation_matrix(network)
>>> import matplotlib.pyplot as plt
>>> import seaborn as sns
>>> sns.heatmap(corr_matrix, annot=True, xticklabels=layers,
...             yticklabels=layers, cmap='coolwarm', center=0,
...             vmin=-1, vmax=1)
>>> plt.title('Inter-layer Degree Correlation Matrix')
>>> plt.show()
- Reference:
Nicosia & Latora (2015), “Measuring and modeling correlations in multiplex networks”; Battiston et al. (2014), “Structural measures for multiplex networks”
- py3plex.algorithms.statistics.multilayer_statistics.layer_connectivity_entropy(network: Any, layer: str) float
Calculate entropy of layer connectivity (H_connectivity).
Formula: H_c = -Σᵢ (kᵢ/Σⱼkⱼ) log₂(kᵢ/Σⱼkⱼ)
Shannon entropy of degree distribution within a layer; measures heterogeneity of node connectivity patterns.
- Variables:
kᵢ = degree of node i in the layer
Σⱼkⱼ = sum of all degrees (2 × edges for undirected networks)
- Properties:
H_c = log₂(N) when all N nodes have the same degree (uniform distribution, maximum entropy)
H_c is small when the degree distribution is highly uneven, i.e. connectivity is concentrated on a few nodes
- Parameters:
network – py3plex multi_layer_network object
layer – Layer identifier
- Returns:
Entropy value in bits
Examples
>>> from py3plex.core import multinet
>>> network = multinet.multi_layer_network(directed=False)
>>> network.add_edges([
...     ['A', 'L1', 'B', 'L1', 1],
...     ['B', 'L1', 'C', 'L1', 1]
... ], input_type='list')
>>> entropy = layer_connectivity_entropy(network, 'L1')
>>> print(f"Connectivity entropy: {entropy:.3f}")
- Reference:
Solé-Ribalta et al. (2013), “Spectral properties of complex networks”; Shannon (1948), “A Mathematical Theory of Communication”
- py3plex.algorithms.statistics.multilayer_statistics.layer_density(network: Any, layer: str) float
Calculate layer density (ρₐ).
- Formula: ρₐ = (2Eₐ) / (Nₐ(Nₐ - 1)) [undirected]
ρₐ = Eₐ / (Nₐ(Nₐ - 1)) [directed]
Measures the fraction of possible edges present in a specific layer, indicating how densely connected that layer is.
- Variables:
Eₐ = number of edges in layer α
Nₐ = number of nodes in layer α
- Parameters:
network – py3plex multi_layer_network object
layer – Layer identifier
- Returns:
Density value between 0 and 1
Examples
>>> from py3plex.core import multinet
>>> network = multinet.multi_layer_network(directed=False)
>>> network.add_edges([
...     ['A', 'L1', 'B', 'L1', 1],
...     ['B', 'L1', 'C', 'L1', 1]
... ], input_type='list')
>>> density = layer_density(network, 'L1')
>>> print(f"Layer L1 density: {density:.3f}")
- Reference:
Kivelä et al. (2014), J. Complex Networks 2(3), 203-271
- Contracts:
Precondition: network must not be None
Precondition: layer must be a non-empty string
Postcondition: result is in [0, 1] (fundamental property of density)
Postcondition: result is not NaN
- py3plex.algorithms.statistics.multilayer_statistics.layer_influence_centrality(network: Any, layer: str, method: str = 'coupling', sample_size: int = 100) float
Calculate layer influence centrality (Iᵅ).
Formula (coupling): Iᵅ = Σᵦ≠ᵅ C^αβ / (L-1)
Formula (flow): Iᵅ = Σᵦ≠ᵅ F^αβ / (L-1)
Quantifies how much a layer influences other layers through inter-layer connections (coupling method) or information flow (flow method).
- Variables:
C^αβ = inter-layer coupling strength between layers α and β
F^αβ = flow from layer α to layer β (random walk transition probability)
L = total number of layers
- Properties:
Higher values indicate layers that strongly influence others
Useful for identifying critical layers in the multilayer structure
- Parameters:
network – py3plex multi_layer_network object
layer – Layer identifier
method – ‘coupling’ for structural influence, ‘flow’ for dynamic influence
sample_size – Number of random walk steps for flow simulation
- Returns:
Influence centrality value
Examples
>>> influence = layer_influence_centrality(network, 'L1', method='coupling')
>>> print(f"Layer L1 influence: {influence:.3f}")
- Reference:
Cozzo et al. (2013), “Mathematical formulation of multilayer networks”; De Domenico et al. (2014), “Identifying modular flows”
- py3plex.algorithms.statistics.multilayer_statistics.layer_redundancy_coefficient(network: Any, layer_i: str, layer_j: str) float
Calculate layer redundancy coefficient.
Measures the proportion of edges in one layer that are redundant (also present) in another layer. Values close to 1 indicate high redundancy, while values close to 0 indicate complementary layers.
Formula: Rᵅᵝ = |Eᵅ ∩ Eᵝ| / |Eᵅ|
where Eᵅ and Eᵝ are edge sets of layers α and β.
- Parameters:
network – py3plex multi_layer_network object
layer_i – First layer identifier
layer_j – Second layer identifier
- Returns:
Redundancy coefficient between 0 and 1
Examples
>>> redundancy = layer_redundancy_coefficient(network, 'social', 'work')
>>> print(f"Redundancy: {redundancy:.2%}")
- Reference:
Nicosia & Latora (2015), “Measuring and modeling correlations in multiplex networks”
- py3plex.algorithms.statistics.multilayer_statistics.layer_similarity(network: Any, layer_i: str, layer_j: str, method: str = 'cosine') float
Calculate layer similarity (S^αβ).
Formula: S^αβ = ⟨Aₐ, Aᵦ⟩ / (‖Aₐ‖ ‖Aᵦ‖) = Σᵢⱼ AᵢⱼᵅAᵢⱼᵝ / (√(Σᵢⱼ(Aᵢⱼᵅ)²) √(Σᵢⱼ(Aᵢⱼᵝ)²))
Cosine or Jaccard similarity between adjacency matrices of two layers.
- Variables:
Aₐ, Aᵦ = adjacency matrices for layers α and β
⟨·,·⟩ = Frobenius inner product
‖·‖ = Frobenius norm
- Parameters:
network – py3plex multi_layer_network object
layer_i – First layer identifier (α)
layer_j – Second layer identifier (β)
method – ‘cosine’ or ‘jaccard’
- Returns:
Similarity value between 0 and 1
Examples
>>> similarity = layer_similarity(network, 'L1', 'L2', method='cosine')
- Reference:
De Domenico et al. (2013), Physical Review X 3(4), 041022
- py3plex.algorithms.statistics.multilayer_statistics.multilayer_betweenness_surface(network: Any, normalized: bool = True, weight: str | None = None) ndarray
Calculate multilayer betweenness surface (tensor representation).
Computes betweenness centrality for each node-layer pair and organizes the results as a 2D array (nodes × layers) that can be visualized as a heatmap or surface plot.
Formula: Surface[i,α] = Bᵢᵅ
where Bᵢᵅ is the betweenness centrality of node i in layer α.
- Parameters:
network – py3plex multi_layer_network object
normalized – Whether to normalize betweenness values
weight – Edge weight attribute name (None for unweighted)
- Returns:
Tuple of (surface, (node_labels, layer_labels)), where surface is a 2D numpy array of shape (num_nodes, num_layers) containing betweenness values and the label lists correspond to the array axes (see the example below).
Examples
>>> surface, (nodes, layers) = multilayer_betweenness_surface(network)
>>> import matplotlib.pyplot as plt
>>> plt.imshow(surface, aspect='auto', cmap='viridis')
>>> plt.xlabel('Layers')
>>> plt.ylabel('Nodes')
>>> plt.xticks(range(len(layers)), layers)
>>> plt.yticks(range(len(nodes)), nodes)
>>> plt.colorbar(label='Betweenness Centrality')
>>> plt.title('Multilayer Betweenness Surface')
>>> plt.show()
- Reference:
De Domenico et al. (2015), “Structural reducibility of multilayer networks”
- py3plex.algorithms.statistics.multilayer_statistics.multilayer_clustering_coefficient(network: Any, node: Any | None = None) float | Dict[Any, float]
Calculate multilayer clustering coefficient (Cᴹ).
Formula: Cᵢᴹ = Tᵢ / Tᵢᵐᵃˣ
Extends transitivity to account for triangles that span multiple layers.
- Variables:
Tᵢ = number of closed triplets (triangles) involving node i across all layers
Tᵢᵐᵃˣ = maximum possible triplets = Σₐ kᵢᵅ(kᵢᵅ - 1)/2 for undirected networks
Average over all nodes: Cᴹ = (1/N) Σᵢ Cᵢᴹ
- Parameters:
network – py3plex multi_layer_network object
node – If specified, compute for single node; otherwise compute for all
- Returns:
Clustering coefficient value or dict of values per node
Examples
>>> clustering = multilayer_clustering_coefficient(network)
>>> node_clustering = multilayer_clustering_coefficient(network, node='A')
- Reference:
Battiston et al. (2014), Section III.C
- py3plex.algorithms.statistics.multilayer_statistics.multilayer_modularity(network: Any, communities: Dict[Tuple[Any, Any], int], gamma: float | Dict[Any, float] = 1.0, omega: float | ndarray = 1.0, weight: str = 'weight') float
Calculate multilayer modularity (Qᴹᴸ).
This is a wrapper for the existing multilayer_modularity implementation in py3plex.algorithms.community_detection.multilayer_modularity.
Formula: Qᴹᴸ = (1/2μ) Σᵢⱼₐᵦ [(Aᵢⱼᵅ - γₐPᵢⱼᵅ)δₐᵦ + ωₐᵦδᵢⱼ] δ(gᵢᵅ, gⱼᵝ)
Extension of Newman-Girvan modularity to multiplex networks (Mucha et al., 2010). Measures community quality across layers.
- Variables:
μ = total edge weight in the supra-network
Aᵢⱼᵅ = adjacency matrix element for layer α
Pᵢⱼᵅ = kᵢᵅkⱼᵅ/(2mₐ) is the null model (configuration model)
γₐ = resolution parameter for layer α
ωₐᵦ = inter-layer coupling strength
δₐᵦ = Kronecker delta (1 if α=β, 0 otherwise)
δᵢⱼ = Kronecker delta (1 if i=j, 0 otherwise)
δ(gᵢᵅ, gⱼᵝ) = 1 if node i in layer α and node j in layer β are in the same community
- Parameters:
network – py3plex multi_layer_network object
communities – Dictionary mapping (node, layer) tuples to community IDs
gamma – Resolution parameter(s)
omega – Inter-layer coupling strength
weight – Edge weight attribute
- Returns:
Modularity value Q
Examples
>>> communities = {('A', 'L1'): 0, ('B', 'L1'): 0, ('C', 'L1'): 1}
>>> Q = multilayer_modularity(network, communities)
- Reference:
Mucha et al. (2010), Science 328(5980), 876-878
- py3plex.algorithms.statistics.multilayer_statistics.multilayer_motif_frequency(network: Any, motif_size: int = 3) Dict[str, float]
Calculate multilayer motif frequency (fₘ).
Formula: fₘ = nₘ / Σₖ nₖ
Frequency of recurring subgraph patterns across layers.
- Variables:
nₘ = count of motif type m Σₖ nₖ = total count of all motifs
Note: This is a simplified implementation counting basic patterns (intra-layer vs. inter-layer triangles). Complete multilayer motif enumeration includes many more configurations and is computationally expensive.
- Parameters:
network – py3plex multi_layer_network object
motif_size – Size of motifs to count (default: 3 for triangles)
- Returns:
Dictionary of motif type frequencies
Examples
>>> motifs = multilayer_motif_frequency(network, motif_size=3)
- Reference:
Battiston et al. (2014), Section IV
- py3plex.algorithms.statistics.multilayer_statistics.multiplex_betweenness_centrality(network: Any, normalized: bool = True, weight: str | None = None) Dict[Tuple[Any, Any], float]
Calculate multiplex betweenness centrality.
Computes betweenness centrality on the supra-graph, accounting for paths that traverse inter-layer couplings. This extends the standard betweenness definition to multiplex networks where paths can cross layers.
Formula: Bᵢᵅ = Σₛ≠ᵢ≠ₜ (σₛₜ(iα) / σₛₜ)
where σₛₜ is the total number of shortest paths from s to t, and σₛₜ(iα) is the number of those paths passing through node i in layer α.
- Parameters:
network – py3plex multi_layer_network object
normalized – Whether to normalize by the number of node pairs
weight – Edge weight attribute name (None for unweighted)
- Returns:
Dictionary mapping (node, layer) tuples to betweenness centrality values
Examples
>>> betweenness = multiplex_betweenness_centrality(network)
>>> top_nodes = sorted(betweenness.items(), key=lambda x: x[1], reverse=True)[:5]
- Reference:
De Domenico et al. (2015), “Structural reducibility of multilayer networks”
- py3plex.algorithms.statistics.multilayer_statistics.multiplex_closeness_centrality(network: Any, normalized: bool = True, weight: str | None = None, variant: str = 'standard') Dict[Tuple[Any, Any], float]
Calculate multiplex closeness centrality.
Computes closeness centrality on the supra-graph, where shortest paths can traverse inter-layer edges. This captures how quickly a node-layer can reach all other node-layers in the multiplex network.
Standard closeness formula: Cᵢᵅ = (N*L - 1) / Σⱼᵝ≠ᵢᵅ d(iα, jβ)
Harmonic closeness formula: HCᵢᵅ = Σⱼᵝ≠ᵢᵅ 1/d(iα, jβ)
where d(iα, jβ) is the shortest path distance from node i in layer α to node j in layer β, and N*L is the total number of node-layer pairs.
- Parameters:
network – py3plex multi_layer_network object
normalized – Whether to normalize by network size
weight – Edge weight attribute name (None for unweighted)
variant –
Closeness variant to use. Options:
- ‘standard’: Classic closeness (reciprocal of the sum of distances). Can produce biased values for nodes in disconnected components.
- ‘harmonic’: Harmonic closeness (sum of reciprocal distances). Recommended for disconnected multilayer networks.
- ‘auto’: Automatically selects ‘harmonic’ if the network has multiple connected components, otherwise uses ‘standard’.
Default is ‘standard’ for backward compatibility.
- Returns:
Dictionary mapping (node, layer) tuples to closeness centrality values
Examples
>>> closeness = multiplex_closeness_centrality(network)
>>> central_nodes = {k: v for k, v in closeness.items() if v > 0.5}
>>>
>>> # For disconnected networks, use the harmonic variant
>>> closeness = multiplex_closeness_centrality(network, variant='harmonic')
- Reference:
De Domenico et al. (2015), “Structural reducibility of multilayer networks”; Boldi, P. & Vigna, S. (2014), “Axioms for Centrality”, Internet Math.
- py3plex.algorithms.statistics.multilayer_statistics.multiplex_rich_club_coefficient(network: Any, k: int, normalized: bool = True) float
Calculate multiplex rich-club coefficient.
Measures the tendency of high-degree nodes to be more densely connected to each other than expected by chance, accounting for the multiplex structure.
Formula: φᴹ(k) = Eᴹ(>k) / (Nᴹ(>k) * (Nᴹ(>k)-1) / 2)
where Eᴹ(>k) is the number of edges among nodes with overlapping degree > k, and Nᴹ(>k) is the number of such nodes.
- Parameters:
network – py3plex multi_layer_network object
k – Degree threshold
normalized – Whether to normalize by random expectation
- Returns:
Rich-club coefficient value
Examples
>>> rich_club = multiplex_rich_club_coefficient(network, k=10)
>>> print(f"Rich-club coefficient: {rich_club:.3f}")
- Reference:
Alstott et al. (2014), “powerlaw: A Python Package for Analysis of Heavy-Tailed Distributions”; extended here to multiplex networks
- py3plex.algorithms.statistics.multilayer_statistics.node_activity(network: Any, node: Any) float
Calculate node activity (aᵢ).
Formula: aᵢ = (1/L) Σₐ 𝟙(vᵢ ∈ Vₐ)
Fraction of layers in which node i is active (has at least one connection).
- Variables:
L = total number of layers
𝟙(vᵢ ∈ Vₐ) = indicator function (1 if node i is active in layer α, 0 otherwise)
Vₐ = set of active nodes in layer α
- Parameters:
network – py3plex multi_layer_network object
node – Node identifier
- Returns:
Activity value between 0 and 1
Examples
>>> activity = node_activity(network, 'A')
- Reference:
Kivelä et al. (2014), J. Complex Networks 2(3), 203-271
- Contracts:
Precondition: network must not be None
Precondition: node must not be None
Postcondition: result is in [0, 1] (fraction of layers)
Postcondition: result is not NaN
- py3plex.algorithms.statistics.multilayer_statistics.percolation_threshold(network: Any, removal_strategy: str = 'random', trials: int = 10) float
Estimate percolation threshold for the multiplex network.
Determines the fraction of nodes that must be removed before the network fragments into disconnected components. Uses sampling to estimate threshold.
- Parameters:
network – py3plex multi_layer_network object
removal_strategy – ‘random’, ‘degree’, or ‘betweenness’
trials – Number of trials for averaging
- Returns:
Estimated percolation threshold (fraction of nodes)
Examples
>>> threshold = percolation_threshold(network, removal_strategy='degree')
>>> print(f"Percolation threshold: {threshold:.2%}")
- Reference:
Buldyrev et al. (2010), “Catastrophic cascade of failures in interdependent networks”
- py3plex.algorithms.statistics.multilayer_statistics.resilience(network: Any, perturbation_type: str = 'layer_removal', perturbation_param: str | float | None = None) float
Calculate resilience (R).
Formula: R = S’ / S₀
Ratio of largest connected component after perturbation to original size.
- Variables:
S₀ = size of the largest connected component in the original network
S’ = size of the largest connected component after perturbation
- Perturbation types:
Layer removal: Remove all nodes/edges in a specific layer
Coupling removal: Remove a fraction of inter-layer edges
- Properties:
R = 1 indicates full resilience (no impact from the perturbation)
R = 0 indicates complete fragmentation
0 < R < 1 indicates partial resilience
- Parameters:
network – py3plex multi_layer_network object
perturbation_type – ‘layer_removal’ or ‘coupling_removal’
perturbation_param – Layer to remove or fraction of inter-layer edges
- Returns:
Resilience ratio between 0 and 1
Examples
>>> r = resilience(network, 'layer_removal', perturbation_param='L1')
>>> r = resilience(network, 'coupling_removal', perturbation_param=0.5)
- Reference:
Buldyrev et al. (2010), Nature 464, 1025-1028
- py3plex.algorithms.statistics.multilayer_statistics.supra_laplacian_spectrum(network: Any, k: int = 10) ndarray
Calculate supra-Laplacian spectrum (Λ).
Formula: ℒ = 𝒟 - 𝒜
Eigenvalue spectrum of the supra-Laplacian matrix; captures diffusion properties. Uses sparse eigenvalue computation when beneficial.
- Variables:
𝒜 = supra-adjacency matrix (NL × NL block matrix containing all layers and inter-layer couplings)
𝒟 = supra-degree matrix (diagonal matrix with the row sums of 𝒜)
ℒ = supra-Laplacian matrix
Λ = {λ₀, λ₁, …, λₙₗ₋₁} with 0 = λ₀ ≤ λ₁ ≤ … ≤ λₙₗ₋₁
- Parameters:
network – py3plex multi_layer_network object
k – Number of smallest eigenvalues to compute
- Returns:
Array of k smallest eigenvalues
Examples
>>> spectrum = supra_laplacian_spectrum(network, k=10)
- Reference:
De Domenico et al. (2013), Gomez et al. (2013)
Notes
Uses sparse eigsh() for sparse matrices (more efficient)
Falls back to dense computation for small matrices (n < 100) or when k is large relative to n
Laplacian is always symmetric for undirected graphs (PSD with smallest eigenvalue = 0)
- py3plex.algorithms.statistics.multilayer_statistics.targeted_layer_removal(network: Any, layer: str, return_resilience: bool = False) Any | Tuple[Any, float]
Simulate targeted removal of an entire layer.
Removes all edges in a specified layer and returns the modified network or resilience score.
- Parameters:
network – py3plex multi_layer_network object
layer – Layer identifier to remove
return_resilience – If True, return resilience score instead of network
- Returns:
Modified network or resilience score
Examples
>>> resilience = targeted_layer_removal(network, 'social', return_resilience=True)
>>> print(f"Resilience after removing social layer: {resilience:.3f}")
- Reference:
Buldyrev et al. (2010), “Catastrophic cascade of failures”
- py3plex.algorithms.statistics.multilayer_statistics.unique_redundant_edges(network: Any, layer_i: str, layer_j: str) Tuple[int, int]
Count unique and redundant edges between two layers.
Returns the number of edges unique to the first layer and the number of edges present in both layers (redundant).
- Parameters:
network – py3plex multi_layer_network object
layer_i – First layer identifier
layer_j – Second layer identifier
- Returns:
Tuple of (unique_edges, redundant_edges)
Examples
>>> unique, redundant = unique_redundant_edges(network, 'social', 'work')
>>> print(f"Unique: {unique}, Redundant: {redundant}")
- py3plex.algorithms.statistics.multilayer_statistics.versatility_centrality(network: Any, centrality_type: str = 'degree', alpha: Dict[str, float] | None = None) Dict[Any, float]
Calculate versatility centrality (Vᵢ).
Formula: Vᵢ = Σₐ wₐ Cᵢᵅ
Weighted combination of node centrality values across layers; measures overall influence.
- Variables:
wₐ = weight for layer α (typically 1/L for uniform weighting, with Σₐ wₐ = 1)
Cᵢᵅ = centrality of node i in layer α (can be degree, betweenness, closeness, etc.)
- Parameters:
network – py3plex multi_layer_network object
centrality_type – Type of centrality (‘degree’, ‘betweenness’, ‘closeness’)
alpha – Layer weights (default: uniform weights)
- Returns:
Dictionary mapping nodes to versatility centrality values
Examples
>>> versatility = versatility_centrality(network, centrality_type='degree')
- Reference:
De Domenico et al. (2015), Nature Communications 6, 6868
- py3plex.algorithms.statistics.topology.basic_pl_stats(degree_sequence: List[int]) Tuple[float, float]
Calculate basic power law statistics for a degree sequence.
- Parameters:
degree_sequence – Degree sequence of individual nodes
- Returns:
Tuple of (alpha, sigma) values
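Examples
A usage sketch, assuming a degree sequence taken from any NetworkX graph:
>>> import networkx as nx
>>> G = nx.barabasi_albert_graph(1000, 3)
>>> degree_sequence = [d for _, d in G.degree()]
>>> alpha, sigma = basic_pl_stats(degree_sequence)
>>> print(f"alpha={alpha:.2f}, sigma={sigma:.3f}")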
- py3plex.algorithms.statistics.topology.plot_power_law(degree_sequence: List[int], title: str, xlabel: str, plabel: str, ylabel: str = 'Number of nodes', formula_x: int = 70, formula_y: float = 0.05, show: bool = True, use_normalization: bool = False) Any
- py3plex.algorithms.statistics.correlation_networks.default_correlation_to_network(matrix: ndarray, input_type: str = 'matrix', preprocess: str = 'standard') ndarray
Convert correlation matrix to network using optimal thresholding.
- Parameters:
matrix – Input data matrix
input_type – Type of input (default: “matrix”)
preprocess – Preprocessing method (default: “standard”)
- Returns:
Binary adjacency matrix
- py3plex.algorithms.statistics.correlation_networks.pick_threshold(matrix: ndarray) float
Pick optimal threshold for correlation network construction.
- Parameters:
matrix – Input data matrix
- Returns:
Optimal threshold value
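Examples
A sketch of the intended pipeline for the two functions above, using synthetic data (the random matrix is illustrative only):
>>> import numpy as np
>>> rng = np.random.default_rng(42)
>>> data = rng.normal(size=(100, 20))                 # 100 samples, 20 variables
>>> adjacency = default_correlation_to_network(data)  # binary adjacency matrix
>>> threshold = pick_threshold(data)                  # threshold used for construction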
- py3plex.algorithms.statistics.basic_statistics.core_network_statistics(G: Graph, labels: Any | None = None, name: str = 'example') DataFrame
Compute core statistics for a network.
- Parameters:
G – NetworkX graph to analyze
labels – Optional label matrix with shape attribute
name – Name identifier for the network (default: “example”)
- Returns:
DataFrame containing network statistics
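Examples
A minimal sketch, assuming only a NetworkX graph (the labels argument is optional):
>>> import networkx as nx
>>> G = nx.karate_club_graph()
>>> stats = core_network_statistics(G, name='karate')
>>> print(stats)  # DataFrame of summary statistics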
- py3plex.algorithms.statistics.basic_statistics.ensure(*args, **kwargs)
- py3plex.algorithms.statistics.basic_statistics.identify_n_hubs(G: Graph, top_n: int = 100, node_type: str | None = None) Dict[Any, int]
Identify the top N hub nodes in a network based on degree centrality.
- Parameters:
G – NetworkX graph to analyze
top_n – Number of top hubs to return (default: 100)
node_type – Optional filter for specific node type
- Returns:
Dictionary mapping node identifiers to their degree values
- Contracts:
Precondition: top_n must be positive
Postcondition: result has at most top_n entries
Postcondition: all degree values are non-negative integers
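Examples
A usage sketch on a plain NetworkX graph; node_type filtering only applies when nodes carry type information:
>>> import networkx as nx
>>> G = nx.barabasi_albert_graph(200, 2)
>>> hubs = identify_n_hubs(G, top_n=5)
>>> for node, degree in hubs.items():
...     print(node, degree)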
- py3plex.algorithms.statistics.basic_statistics.require(*args, **kwargs)
Multilayer Algorithms
Multilayer/Multiplex Network Centrality Measures
This module implements various centrality measures for multilayer and multiplex networks, following standard definitions from multilayer network analysis literature.
Weight Handling Notes:
For path-based centralities (betweenness, closeness):
- Edge weights from the supra-adjacency matrix are converted to distances (inverse of weight; see the sketch after these notes)
- NetworkX algorithms use these distances for shortest path computation
- betweenness_centrality: the weight parameter specifies the edge attribute for path computation
- closeness_centrality: the distance parameter specifies the edge attribute for path computation
For disconnected graphs:
- closeness_centrality uses the wf_improved parameter (Wasserman-Faust scaling) by default
- When wf_improved=True, scores are normalized by reachable nodes only
- When wf_improved=False, unreachable nodes contribute infinite distance
Weight Constraints:
- Weights should be positive (> 0) for shortest path algorithms
- Zero or negative weights will cause undefined behavior
- For unweighted analysis, use weighted=False parameters
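A minimal sketch of the weight-to-distance conversion described above, on a plain NetworkX graph; the attribute name 'distance' is illustrative:
>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_edge('a', 'b', weight=4.0)  # strong tie
>>> G.add_edge('b', 'c', weight=0.5)  # weak tie
>>> for u, v, data in G.edges(data=True):
...     data['distance'] = 1.0 / data['weight']  # stronger ties become shorter distances
>>> nx.shortest_path_length(G, 'a', 'c', weight='distance')
2.25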
Authors: py3plex contributors Date: 2025
- class py3plex.algorithms.multilayer_algorithms.centrality.MultilayerCentrality(network: Any)
Bases:
object
Class for computing centrality measures on multilayer networks.
This class provides implementations of various centrality measures specifically designed for multilayer/multiplex networks, including degree-based, eigenvector-based, and path-based measures.
- accessibility_centrality(h=2)
Compute accessibility centrality (entropy-based reach within h steps).
Accessibility measures the diversity of nodes reachable within h steps using entropy of the probability distribution.
Access_r = exp(H_r) where H_r is the entropy of the h-step distribution
- Parameters:
h – Number of steps (default: 2)
- Returns:
{(node, layer): accessibility}
- Return type:
dict
Note
Accessibility is measured as the effective number of h-step destinations, using the entropy of the random walk distribution.
Dangling Node Handling: For nodes with no outgoing edges (dangling nodes), this implementation uses uniform teleportation to all nodes. This differs from the original Travencolo-Costa definition which does not include teleportation. The teleportation ensures a well-defined random walk distribution but may affect accessibility values for nodes near dangling nodes compared to implementations that handle dangling nodes differently.
References
Travencolo, B. A. N., & Costa, L. D. F. (2008). Accessibility in complex networks. Physics Letters A, 373(1), 89-95.
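Examples
A usage sketch, assuming a loaded multi_layer_network object named network:
>>> mc = MultilayerCentrality(network)
>>> access = mc.accessibility_centrality(h=2)
>>> best = max(access, key=access.get)  # (node, layer) with the most diverse 2-step reach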
- aggregate_to_node_level(node_layer_centralities, method='sum', weights=None)
Aggregate node-layer centralities to node level.
- Parameters:
node_layer_centralities – dict with {(node, layer): value} entries
method – ‘sum’, ‘mean’, ‘max’, ‘weighted_sum’
weights – dict with {layer: weight} for weighted_sum method
- Returns:
{node: aggregated_value}
- Return type:
dict
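Examples
A sketch combining this method with any node-layer centrality from the class; the layer names in the weights dict are hypothetical:
>>> per_layer = mc.accessibility_centrality(h=2)  # mc as in the sketch above
>>> by_mean = mc.aggregate_to_node_level(per_layer, method='mean')
>>> by_weight = mc.aggregate_to_node_level(
...     per_layer, method='weighted_sum',
...     weights={'L1': 0.7, 'L2': 0.3})  # hypothetical layer names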
- bridging_centrality()
Compute bridging centrality for nodes in the supra-graph.
Bridging centrality combines betweenness with the bridging coefficient, which measures how much a node connects sparse regions of the network.
Bridging(u) = B(u) * BCoeff(u) where BCoeff(u) = (1 / k_u) * sum_{v in N(u)} [1 / k_v]
- Returns:
{(node, layer): bridging_centrality}
- Return type:
dict
Note
Nodes with higher bridging centrality act as important bridges connecting different parts of the network.
- collective_influence(radius=2)
Compute collective influence (CI_ℓ) for multiplex networks.
Collective influence identifies influential spreaders by considering not just immediate neighbors but also nodes at distance ℓ.
CI_ℓ(u) = (k_u - 1) * sum_{v in ∂Ball_ℓ(u)} (k_v - 1)
- Parameters:
radius – Radius ℓ for the ball boundary (default: 2)
- Returns:
{(node, layer): collective_influence}
- Return type:
dict
Note
Uses overlapping degree across all layers for each physical node.
- communicability_betweenness_centrality(normalized=True)
Compute communicability betweenness centrality on the supra-graph.
This measure quantifies how much a node contributes to the communicability between other pairs of nodes. It uses the matrix exponential to account for all walks between nodes.
- Parameters:
normalized – If True (default), normalize scores by dividing by the maximum value (min-max scaling to [0, 1] range). Note that this is simple rescaling, not the theoretical Estrada-Hatano normalization from the original literature.
- Returns:
{(node, layer): communicability_betweenness}
- Return type:
dict
Note
This implementation uses NetworkX’s communicability_betweenness_centrality, which operates on unweighted graphs. Edge weights from the supra-adjacency matrix are used only to determine edge existence (weight > 0), not for weighted communicability computation. For truly weighted communicability analysis, a custom implementation using the weighted matrix exponential would be required.
This is computationally expensive as it requires computing the matrix exponential multiple times. For large networks, this may take significant time.
- current_flow_betweenness_centrality()
Compute current-flow betweenness centrality via supra Laplacian pseudoinverse.
This measure is based on the electrical current flow through each node.
- Returns:
{(node, layer): current_flow_betweenness}
- Return type:
dict
- current_flow_closeness_centrality()
Compute current-flow closeness centrality via supra Laplacian pseudoinverse.
This measure is based on the resistance distance in electrical networks.
- Returns:
{(node, layer): current_flow_closeness}
- Return type:
dict
- edge_betweenness_centrality(normalized=True)
Compute edge betweenness centrality on the supra-graph.
Edge betweenness measures the fraction of shortest paths that pass through each edge.
- Returns:
{(source, target): edge_betweenness}
- Return type:
dict
Note
This returns edge-level centrality, not node-level.
- flow_betweenness_centrality(samples=100)
Compute flow betweenness based on maximum flow sampling.
Flow betweenness measures how much flow passes through a node when maximizing flow between randomly sampled source-target pairs.
- Parameters:
samples – Number of source-target pairs to sample (default: 100)
- Returns:
{(node, layer): flow_betweenness}
- Return type:
dict
Note
This is a sampling-based approximation of flow betweenness. The implementation samples random pairs of supra-graph nodes and computes the maximum flow through each intermediate node.
Unlike classical Freeman flow betweenness which uses a normalization factor based on the number of nodes, this implementation returns the average flow per sampled pair without additional normalization. For multilayer networks, the number of supra-graph nodes (N * L for N physical nodes and L layers) affects the raw values.
For large networks, this sampling approach is more computationally feasible than computing exact flow betweenness for all pairs.
- harmonic_closeness_centrality()
Compute harmonic closeness centrality on the supra-graph.
Harmonic closeness handles disconnected graphs better than standard closeness by summing the reciprocals of distances instead of taking the reciprocal of the sum.
HC(u) = sum_{v≠u} (1 / d(u,v)) for finite distances
- Returns:
{(node, layer): harmonic_closeness}
- Return type:
dict
Note
This measure naturally handles disconnected components as unreachable nodes contribute 0 (instead of infinity) to the sum.
Weight Interpretation: Edge weights from the supra-adjacency matrix are interpreted as connection strengths (larger weight = stronger connection = shorter distance). They are converted to distances via 1/weight for shortest path computation. If your edge weights already represent distances, do not use this function directly—you would need to invert them first or use the distance values directly in a custom computation.
- hits_centrality(max_iter=1000, tol=1e-06)
Compute HITS (hubs and authorities) centrality on the supra-graph.
For undirected networks, this equals eigenvector centrality. For directed networks, computes separate hub and authority scores.
- Parameters:
max_iter – Maximum number of iterations.
tol – Tolerance for convergence.
- Returns:
- If directed network: {‘hubs’: {(node, layer): score}, ‘authorities’: {(node, layer): score}}
- If undirected network: {(node, layer): score} (equivalent to eigenvector centrality)
- Return type:
dict
- information_centrality()
Compute Information Centrality (Stephenson-Zelen style) on the supra-graph.
Information centrality measures the importance of a node based on the information flow through the network. It uses the inverse of a modified Laplacian matrix.
- Returns:
{(node, layer): information_centrality}
- Return type:
dict
Note
This implementation returns values for each node-layer pair in the supra-graph, not aggregated physical node values. To obtain physical node-level information centrality, use the aggregate_to_node_level() method on the returned dictionary.
The implementation uses NetworkX’s information_centrality for computation. Falls back to harmonic closeness if information centrality computation fails.
Information centrality is defined only for undirected graphs. For directed networks, the graph is symmetrized with a warning.
References
Stephenson, K., & Zelen, M. (1989). Rethinking centrality: Methods and examples. Social Networks, 11(1), 1-37.
- katz_bonacich_centrality(alpha=0.1, beta=None)
Compute Katz-Bonacich centrality on the supra-graph.
z = Σ_{t=0}^∞ α^t M^t b = (I - αM)^{-1} b
- Parameters:
alpha – Attenuation parameter (should be < 1/ρ(M)). If None, automatically computes a safe value as 0.85/ρ(M) where ρ(M) is the spectral radius.
beta – Exogenous preference vector. If None, uses vector of ones.
- Returns:
{(node, layer): centrality_value}
- Return type:
dict
- layer_degree_centrality(layer: str | None = None, weighted: bool = False, direction: str = 'out') Dict[str | Tuple[str, str], float]
Compute layer-specific degree (or strength) centrality.
- For undirected networks:
k^[α]_i = Σ_j 1(A^[α]_ij > 0) [unweighted]
s^[α]_i = Σ_j A^[α]_ij [weighted]
- For directed networks:
k^[α,out]_i = Σ_j 1(A^[α]_ij > 0) [out-degree]
k^[α,in]_i = Σ_j 1(A^[α]_ji > 0) [in-degree]
- Parameters:
layer – Layer to compute centrality for. If None, compute for all layers.
weighted – If True, compute strength instead of degree.
direction – ‘out’, ‘in’, or ‘both’ for directed networks.
- Returns:
- {(node, layer): centrality_value} if layer is None,
{node: centrality_value} if layer is specified.
- Return type:
dict
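A minimal usage sketch (assuming calc wraps a multiplex network as in the class-level example):
>>> per_layer = calc.layer_degree_centrality()               # {(node, layer): degree}
>>> strengths = calc.layer_degree_centrality(weighted=True)  # strengths instead of counts
>>> in_deg = calc.layer_degree_centrality(direction='in')    # meaningful for directed networks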
- load_centrality()
Compute load centrality (shortest-path load).
Load centrality measures the fraction of shortest paths that pass through each node, counting all paths (not just unique pairs).
Load[k] = sum over all (s,t) pairs of [number of shortest paths through k / total shortest paths]
- Returns:
{(node, layer): load_centrality}
- Return type:
dict
Note
This is similar to betweenness but counts all paths rather than normalizing by the number of node pairs.
- local_efficiency_centrality()
Compute local efficiency centrality.
Local efficiency measures how efficiently information is exchanged among a node’s neighbors when the node is removed. It quantifies the fault tolerance of the network.
LE(u) = (1 / (|N_u|*(|N_u|-1))) * sum_{i≠j in N_u} [1 / d(i,j)]
- Returns:
{(node, layer): local_efficiency}
- Return type:
dict
Note
For nodes with fewer than 2 neighbors, local efficiency is 0.
- lp_aggregated_centrality(layer_centralities, p=2, weights=None, exclude_missing=True)
Compute Lp-aggregated per-layer centrality.
Aggregates per-layer centrality values using Lp norm.
C_i = (Σ_{ℓ in L_i} w_ℓ * |c^ℓ[i]|^p)^{1/p} for p < ∞
C_i = max_{ℓ in L_i} (w_ℓ * |c^ℓ[i]|) for p = ∞
where L_i is the set of layers where node i exists.
- Parameters:
layer_centralities – dict of {layer: {node: centrality}} or {(node, layer): centrality}
p – Lp norm parameter (default: 2). Use float(‘inf’) for L-infinity norm.
weights – dict of {layer: weight}. If None, uniform weights are used.
exclude_missing – If True (default), only aggregate over layers where the node exists (has a centrality value). If False, treat nodes absent from a layer as having zero centrality, which may bias scores for nodes with sparse layer participation.
- Returns:
{node: aggregated_centrality}
- Return type:
dict
Note
This is a framework for aggregating any per-layer centrality measure. Input can be degree, PageRank, eigenvector, etc.
Sparse Layer Participation: When exclude_missing=True (default), nodes that only appear in a subset of layers are aggregated only over those layers where they exist. This prevents bias against nodes with sparse layer participation. When exclude_missing=False, missing layers contribute zero to the aggregation, which may penalize nodes that don’t appear in all layers.
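A minimal sketch of the Lp aggregation on toy per-layer values (under unit layer weights the L2 norm for node A would be (3² + 4²)^{1/2} = 5, though the exact scaling of the default uniform weights is implementation-defined; calc is an assumed MultilayerCentrality instance):
>>> per_layer = {'social': {'A': 3.0, 'B': 4.0}, 'email': {'A': 4.0, 'B': 0.0}}
>>> l2 = calc.lp_aggregated_centrality(per_layer, p=2)
>>> linf = calc.lp_aggregated_centrality(per_layer, p=float('inf'))  # max over layers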
- multilayer_betweenness_centrality(normalized=True, endpoints=False)
Compute betweenness centrality on the supra-graph.
For each node-layer pair (i,α), computes the fraction of shortest paths between all pairs of nodes that pass through (i,α).
- Parameters:
normalized – Whether to normalize the betweenness values.
endpoints – Whether to include endpoints in path counts.
- Returns:
{(node, layer): betweenness_centrality}
- Return type:
dict
Note
This is computationally expensive for large networks as it requires computing shortest paths between all pairs of nodes.
Weight handling: Edge weights from the supra-adjacency matrix are converted to distances (1/weight) for shortest path computation. Weights must be positive (> 0). Zero or negative weights will cause undefined behavior.
- multilayer_closeness_centrality(normalized=True, wf_improved=True, variant='standard')
Compute closeness centrality on the supra-graph.
For each node-layer pair (i,α), computes:
- Standard closeness:
C_c(i,α) = (n-1) / Σ_{(j,β)} d((i,α), (j,β))
- Harmonic closeness (recommended for disconnected networks):
HC(i,α) = Σ_{(j,β)≠(i,α)} 1/d((i,α), (j,β))
where d((i,α), (j,β)) is the shortest path distance in the supra-graph.
- Parameters:
normalized – This parameter is kept for API compatibility but has no effect. Standard closeness is always normalized by (n-1) per the NetworkX implementation. Harmonic closeness returns unnormalized sums of reciprocal distances by definition.
wf_improved – If True, use Wasserman-Faust improved closeness scaling for disconnected graphs. Default is True. This affects the magnitude and ordering of scores in graphs with multiple components (e.g., low interlayer coupling). See NetworkX documentation for details. Only used when variant=’standard’.
variant –
Closeness variant to use. Options:
- ‘standard’: Classic closeness (reciprocal of sum of distances). For disconnected graphs, uses Wasserman-Faust scaling if wf_improved=True. Can produce biased values for nodes in small or disconnected components.
- ‘harmonic’: Harmonic closeness (sum of reciprocal distances). Mathematically well-defined for disconnected networks: unreachable nodes contribute 0 instead of infinity. Recommended for disconnected multilayer networks.
- ‘auto’: Automatically selects ‘harmonic’ if the supra-graph has multiple connected components, otherwise uses ‘standard’.
Default is ‘standard’ for backward compatibility.
- Returns:
{(node, layer): closeness_centrality}
- Return type:
dict
Note
This implementation uses NetworkX’s shortest path algorithms on the supra-graph representation. For large networks, this can be computationally expensive.
For disconnected multilayer graphs (e.g., layers with no inter-layer coupling, or networks with isolated components), use variant=’harmonic’ or variant=’auto’ to get mathematically consistent closeness values. The harmonic variant naturally handles unreachable nodes by summing 1/d for finite distances only (infinite distances contribute 0).
Weight Interpretation: Edge weights from the supra-adjacency matrix are interpreted as connection strengths (larger weight = stronger connection). They are converted to distances via 1/weight for shortest path computation. If your edge weights already represent distances, use them directly without this function’s weight inversion.
Examples
>>> # For connected networks, standard closeness works well
>>> closeness = calc.multilayer_closeness_centrality(variant='standard')
>>> # For potentially disconnected networks, use harmonic
>>> closeness = calc.multilayer_closeness_centrality(variant='harmonic')
>>> # Let the algorithm decide based on connectivity
>>> closeness = calc.multilayer_closeness_centrality(variant='auto')
References
Wasserman, S., & Faust, K. (1994). Social Network Analysis.
Boldi, P., & Vigna, S. (2014). Axioms for Centrality. Internet Math.
De Domenico, M., et al. (2015). Structural reducibility of multilayer networks.
- multiplex_coreness()
Alias for multiplex_k_core for compatibility.
- Returns:
{(node, layer): core_number}
- Return type:
dict
- multiplex_eigenvector_centrality(max_iter: int = 1000, tol: float = 1e-06) Dict
Compute multiplex eigenvector centrality (node-layer level).
x = (1/λ_max) * M * x where x_{iα} is the centrality of node i in layer α, and λ_max is the spectral radius of the supra-adjacency matrix M.
- Parameters:
max_iter – Maximum number of iterations.
tol – Tolerance for convergence.
- Returns:
{(node, layer): centrality_value}
- Return type:
dict
- multiplex_eigenvector_versatility(max_iter: int = 1000, tol: float = 1e-06) Dict
Compute node-level eigenvector versatility.
x̄_i = Σ_α x_{iα}
- Parameters:
max_iter – Maximum number of iterations.
tol – Tolerance for convergence.
- Returns:
{node: versatility_value}
- Return type:
dict
- multiplex_k_core()
Compute multiplex k-core decomposition.
A node belongs to the k-core if it has at least k neighbors in the multilayer network. This implementation computes the core number for each node-layer pair.
- Returns:
{(node, layer): core_number}
- Return type:
dict
- overlapping_degree_centrality(weighted: bool = False) Dict
Compute overlapping degree/strength centrality (node level).
k^{over}_i = Σ_α k^[α]_i [unweighted]
s^{over}_i = Σ_α s^[α]_i [weighted]
- Parameters:
weighted – If True, compute overlapping strength.
- Returns:
{node: centrality_value}
- Return type:
dict
- pagerank_centrality(damping=0.85, max_iter=1000, tol=1e-06)
Compute PageRank centrality on the supra-graph.
Uses the standard PageRank algorithm on the supra-adjacency matrix representing the multilayer network. Properly handles dangling nodes (nodes with no outgoing edges) via teleportation.
This implementation preserves sparsity when possible for memory efficiency.
- Parameters:
damping – Damping parameter (typically 0.85).
max_iter – Maximum number of iterations.
tol – Tolerance for convergence.
- Returns:
{(node, layer): centrality_value}
- Return type:
dict
- Mathematical Invariants:
PageRank values sum to 1.0 (within tol=1e-6)
All values are non-negative
Converges for strongly connected components or with teleportation
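These invariants can be checked directly (a minimal sketch; network construction follows the example pattern used elsewhere in this documentation):
>>> from py3plex.core import random_generators
>>> from py3plex.algorithms.multilayer_algorithms.centrality import MultilayerCentrality
>>> net = random_generators.random_multiplex_ER(40, 3, 0.1)
>>> pr = MultilayerCentrality(net).pagerank_centrality(damping=0.85)
>>> abs(sum(pr.values()) - 1.0) < 1e-6  # values sum to 1
True
>>> all(v >= 0 for v in pr.values())    # all non-negative
True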
- participation_coefficient(weighted: bool = False) Dict
Compute participation coefficient across layers.
Measures how evenly a node’s degree is distributed across layers: P_i = 1 - Σ_α (k^[α]_i / k^{over}_i)^2
Set P_i = 0 if k^{over}_i = 0.
- Parameters:
weighted – If True, use strength instead of degree.
- Returns:
{node: participation_coefficient}
- Return type:
dict
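A minimal sketch (assuming calc wraps a multiplex network with L layers, as in the class-level example):
>>> pc = calc.participation_coefficient()
>>> # pc[i] == 0: degree concentrated in a single layer;
>>> # pc[i] approaches 1 - 1/L when degree is spread evenly across all L layers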
- percolation_centrality(edge_activation_prob=0.5, trials=100)
Compute percolation centrality using bond percolation Monte Carlo simulation.
This implementation measures the average relative component size that a node belongs to across multiple bond percolation realizations, providing an estimate of a node’s importance for network connectivity under random edge failures.
- Parameters:
edge_activation_prob – Probability that an edge is active (default: 0.5)
trials – Number of Monte Carlo trials (default: 100)
- Returns:
{(node, layer): percolation_centrality}
- Return type:
dict
Note
This is a component-size-based percolation measure, not the path-based percolation betweenness from the original Piraveenan et al. literature, which requires recomputing betweenness on each percolated realization. The original percolation centrality is computationally expensive (O(n³) per trial), so this implementation provides a more efficient alternative that captures related connectivity information.
Values are normalized to [0, 1] range where higher values indicate nodes that tend to belong to larger connected components across percolation realizations.
References
Piraveenan, M., Prokopenko, M., & Hossain, L. (2013). Percolation centrality: Quantifying graph-theoretic impact of nodes during percolation in networks. PloS one, 8(1), e53095.
- spreading_centrality(beta=0.2, mu=0.1, trials=50, steps=100)
Compute spreading (epidemic) centrality using SIR model.
Spreading centrality measures how influential a node is in spreading information or disease through the network, based on Monte Carlo simulations of discrete-time SIR dynamics.
- Parameters:
beta – Infection rate per edge per time step (default: 0.2)
mu – Recovery rate per time step (default: 0.1)
trials – Number of simulation trials per node (default: 50)
steps – Maximum simulation steps (default: 100)
- Returns:
{(node, layer): spreading_centrality}
- Return type:
dict
Note
This measures the average outbreak size (fraction of nodes ever infected) when seeding the epidemic from each node. Values are normalized by the total number of nodes, producing scores in the range [1/n, 1] where n is the number of supra-graph nodes.
This is an empirical simulation-based measure, not normalized by theoretical epidemic threshold or branching factor as in some literature definitions. The normalization allows comparison of relative spreading power within the network but may not be directly comparable across networks of different sizes or structures.
References
Kitsak, M., et al. (2010). Identification of influential spreaders in complex networks. Nature physics, 6(11), 888-893.
- subgraph_centrality()
Compute subgraph centrality via matrix exponential of the supra-adjacency matrix.
Subgraph centrality counts closed walks of all lengths starting and ending at each node. SC_i = (e^A)_ii where A is the adjacency matrix.
- Returns:
{(node, layer): subgraph_centrality}
- Return type:
dict
- supra_degree_centrality(weighted: bool = False) Dict
Compute supra degree/strength centrality (node-layer level).
k_{iα} = Σ_{j,β} 1(M_{(i,α),(j,β)} > 0) [unweighted]
s_{iα} = Σ_{j,β} M_{(i,α),(j,β)} [weighted]
- Parameters:
weighted – If True, compute strength instead of degree.
- Returns:
{(node, layer): centrality_value}
- Return type:
dict
- total_communicability()
Compute total communicability via matrix exponential.
Total communicability is the row sum of the matrix exponential: TC_i = sum_j (e^A)_ij
- Returns:
{(node, layer): total_communicability}
- Return type:
dict
- py3plex.algorithms.multilayer_algorithms.centrality.communicability_centrality(supra_matrix: spmatrix, normalize: bool = True, use_sparse: bool = True, max_iter: int = 100, tol: float = 1e-06) ndarray
Compute communicability centrality for each node-layer pair.
Communicability centrality measures the weighted sum of all walks between nodes, with exponentially decaying weights for longer walks. It is computed as the row sum of the matrix exponential:
c_i = sum_j (exp(A))_ij
This implementation uses scipy.sparse.linalg.expm_multiply for efficient sparse matrix exponential computation.
- Parameters:
supra_matrix – Sparse supra-adjacency matrix (n x n).
normalize – If True, normalize output to sum to 1.
use_sparse – If True, use sparse matrix operations. Falls back to dense for matrices smaller than 10000 elements.
max_iter – Maximum number of iterations for sparse approximation (currently unused).
tol – Tolerance for convergence (currently unused).
- Returns:
Communicability centrality scores for each node-layer pair (n,).
- Return type:
np.ndarray
- Raises:
Py3plexMatrixError – If matrix is invalid (non-square, empty, etc.).
Example
>>> from py3plex.core import random_generators
>>> net = random_generators.random_multiplex_ER(50, 3, 0.1)
>>> A = net.get_supra_adjacency_matrix()
>>> comm = communicability_centrality(A)
>>> print(f"Communicability centrality computed for {len(comm)} node-layer pairs")
- py3plex.algorithms.multilayer_algorithms.centrality.compute_all_centralities(network, include_path_based=False, include_advanced=False, include_extended=False, preset=None, wf_improved=True, closeness_variant='standard')
Compute all available centrality measures for a multilayer network.
- Parameters:
network – py3plex multi_layer_network object
include_path_based – Whether to include computationally expensive path-based measures (betweenness, closeness). Default: False
include_advanced – Whether to include advanced measures (HITS, current-flow, communicability, k-core). Default: False
include_extended – Whether to include extended measures (information, accessibility, percolation, spreading, collective influence, load, flow betweenness, harmonic, bridging, local efficiency). Default: False
preset – Convenience parameter to set all inclusion flags at once. Options:
- ‘basic’: Only degree and eigenvector-based measures (default behavior)
- ‘standard’: Includes path-based measures
- ‘advanced’: Includes path-based and advanced measures
- ‘all’: Includes all measures (path-based, advanced, and extended)
- None: Use individual flags (default)
wf_improved – If True, use Wasserman-Faust improved scaling for closeness centrality in disconnected graphs. Default: True. Only used when closeness_variant=’standard’.
closeness_variant –
Variant of closeness centrality to use. Options:
- ‘standard’: Classic closeness (reciprocal of sum of distances). Uses Wasserman-Faust scaling if wf_improved=True.
- ‘harmonic’: Harmonic closeness (sum of reciprocal distances). Recommended for disconnected multilayer networks.
- ‘auto’: Automatically selects ‘harmonic’ if the network has disconnected components, otherwise uses ‘standard’.
Default: ‘standard’ for backward compatibility.
- Returns:
- Dictionary containing all computed centrality measures with keys:
Degree-based: “layer_degree”, “layer_strength”, “supra_degree”, “supra_strength”, “overlapping_degree”, “overlapping_strength”, “participation_coefficient”, “participation_coefficient_strength”
Eigenvector-based: “multiplex_eigenvector”, “eigenvector_versatility”, “katz_bonacich”, “pagerank”
Path-based (if include_path_based=True): “closeness”, “betweenness”
Advanced (if include_advanced=True): “hits”, “current_flow_closeness”, “current_flow_betweenness”, “subgraph_centrality”, “total_communicability”, “multiplex_k_core”
Extended (if include_extended=True): “information”, “communicability_betweenness”, “accessibility”, “harmonic_closeness”, “local_efficiency”, “edge_betweenness”, “bridging”, “percolation”, “spreading”, “collective_influence”, “load”, “flow_betweenness”
- Return type:
dict
Note
Path-based, advanced, and extended measures are computationally expensive for large networks. Use flags or presets to control which measures are computed.
For disconnected multilayer graphs (e.g., networks without inter-layer coupling, or with isolated components), use closeness_variant=’harmonic’ or ‘auto’ to get mathematically consistent closeness values.
Examples
>>> # Compute only basic measures (fast)
>>> results = compute_all_centralities(network)
>>> # Use preset for standard analysis
>>> results = compute_all_centralities(network, preset='standard')
>>> # Compute everything
>>> results = compute_all_centralities(network, preset='all')
>>> # Fine-grained control
>>> results = compute_all_centralities(
...     network,
...     include_path_based=True,
...     include_extended=True
... )
>>> # For disconnected multilayer networks, use harmonic closeness
>>> results = compute_all_centralities(
...     network,
...     include_path_based=True,
...     closeness_variant='harmonic'
... )
- py3plex.algorithms.multilayer_algorithms.centrality.katz_centrality(supra_matrix: spmatrix, alpha: float | None = None, beta: float = 1.0, tol: float = 1e-06) ndarray
Compute Katz centrality for each node-layer pair.
Katz centrality measures node influence by accounting for all paths with exponentially decaying weights. It is computed as:
x = (I - alpha * A)^{-1} * beta * 1
where alpha < 1/lambda_max(A) to ensure convergence. If alpha is not provided, it defaults to 0.85 / lambda_max(A).
- Parameters:
supra_matrix – Sparse supra-adjacency matrix (n x n).
alpha – Attenuation parameter. Must be less than 1/spectral_radius(A). If None, defaults to 0.85 / lambda_max(A).
beta – Weight of exogenous influence (typically 1.0).
tol – Tolerance for eigenvalue computation.
- Returns:
- Katz centrality scores for each node-layer pair (n,).
Normalized to sum to 1.
- Return type:
np.ndarray
- Raises:
Py3plexMatrixError – If matrix is invalid or alpha is out of valid range.
Example
>>> from py3plex.core import random_generators
>>> net = random_generators.random_multiplex_ER(50, 3, 0.1)
>>> A = net.get_supra_adjacency_matrix()
>>> katz = katz_centrality(A)
>>> print(f"Katz centrality computed for {len(katz)} node-layer pairs")
>>> # With custom alpha
>>> katz_custom = katz_centrality(A, alpha=0.05)
Built-in multilayer centrality toolkit.
Implements core multilayer variants of centrality measures:
- Multilayer PageRank
- Multilayer betweenness centrality
- Multilayer eigenvector centrality
- Multiplex degree centrality
These algorithms properly account for the multilayer structure of networks.
All centrality functions now support first-class uncertainty estimation via the uncertainty parameter.
Authors: py3plex contributors Date: 2025
- py3plex.algorithms.centrality_toolkit.aggregate_centrality_across_layers(centrality_dict: Dict[Tuple, float], aggregation: str = 'sum') Dict[Any, float]
Aggregate node centrality values across layers.
Given centrality scores for (node, layer) tuples, aggregate to get per-node scores.
- Parameters:
centrality_dict – Dictionary mapping (node, layer) -> score
aggregation – Aggregation method (‘sum’, ‘mean’, ‘max’, ‘min’)
- Returns:
Dictionary mapping node_id -> aggregated score
Example
>>> scores = {('A', 'L1'): 0.5, ('A', 'L2'): 0.3, ('B', 'L1'): 0.7}
>>> aggregate_centrality_across_layers(scores, 'mean')
{'A': 0.4, 'B': 0.7}
- py3plex.algorithms.centrality_toolkit.multilayer_betweenness_centrality(network: Any, normalized: bool = True, weight: str | None = None) Dict[Tuple, float]
Compute multilayer betweenness centrality.
Computes betweenness centrality on the supra-graph, where shortest paths can traverse multiple layers.
- Parameters:
network – Multilayer network object
normalized – Whether to normalize by number of pairs
weight – Edge attribute to use as weight (None for unweighted)
- Returns:
Dictionary mapping (node, layer) tuples to betweenness scores
- Algorithm:
For each pair of nodes (s,t), count the fraction of shortest paths passing through each node v:
BC(v) = Σ_{s≠v≠t} σ_{st}(v) / σ_{st}
where σ_{st} is the number of shortest paths from s to t, and σ_{st}(v) is the number passing through v.
References
De Domenico, M., et al. (2015). “Ranking in interconnected multilayer networks reveals versatile nodes.” Nature Communications, 6, 6868.
- py3plex.algorithms.centrality_toolkit.multilayer_eigenvector_centrality(network: Any, max_iter: int = 100, tol: float = 1e-06) Dict[Tuple, float]
Compute multilayer eigenvector centrality.
Computes the principal eigenvector of the supra-adjacency matrix. Nodes are important if connected to other important nodes across layers.
- Parameters:
network – Multilayer network object
max_iter – Maximum number of power iteration steps
tol – Convergence tolerance
- Returns:
Dictionary mapping (node, layer) tuples to eigenvector centrality scores
- Algorithm:
Find the principal eigenvector of the supra-adjacency matrix:
A * x = λ * x
where x is the eigenvector with largest eigenvalue λ.
References
Solá, L., et al. (2013). “Eigenvector centrality of nodes in multiplex networks.” Chaos, 23(3), 033131.
- py3plex.algorithms.centrality_toolkit.multilayer_pagerank(network: Any, alpha: float = 0.85, max_iter: int = 100, tol: float = 1e-06, personalization: Dict | None = None, uncertainty: bool = False, n_runs: int | None = None, resampling: ResamplingStrategy | None = None, random_seed: int | None = None) Dict[Tuple, float] | StatSeries
Compute multilayer PageRank centrality with optional uncertainty estimation.
Implements PageRank on the supra-adjacency matrix, accounting for random walks across layers.
- Parameters:
network – Multilayer network object
alpha – Damping factor (teleportation probability = 1-alpha)
max_iter – Maximum number of iterations
tol – Convergence tolerance
personalization – Optional personalization vector (node -> weight)
uncertainty – If True, estimate uncertainty via resampling
n_runs – Number of runs for uncertainty estimation (default from config)
resampling – Resampling strategy (default from config)
random_seed – Random seed for reproducibility
- Returns:
If uncertainty=False: StatSeries with deterministic values (std=None).
If uncertainty=True: StatSeries with mean, std, and quantiles.
- Return type:
StatSeries
- Algorithm:
PR = (1-α)/N + α * A^T * PR
where A is the column-normalized supra-adjacency matrix
References
Halu, A., et al. (2013). “Multiplex PageRank.” PLoS ONE, 8(10), e78293.
Examples
>>> # Deterministic
>>> result = multilayer_pagerank(network)
>>> result[('A', 'L1')]  # Dict-like access
{'mean': 0.25}
>>> np.array(result)  # Array access (backward compat)
>>> # With uncertainty
>>> result = multilayer_pagerank(network, uncertainty=True, n_runs=50)
>>> result.mean  # Average PageRank values
>>> result.std  # Standard deviations
>>> result.quantiles  # Confidence intervals
- py3plex.algorithms.centrality_toolkit.multiplex_degree_centrality(network: Any, normalized: bool = True, consider_interlayer: bool = True) Dict[Tuple, float]
Compute multiplex degree centrality.
Sums degree across all layers for each node. For multiplex networks where nodes exist in all layers.
- Parameters:
network – Multiplex network object
normalized – Whether to normalize by maximum possible degree
consider_interlayer – Whether to count inter-layer edges
- Returns:
Dictionary mapping (node, layer) tuples to degree centrality scores
- Algorithm:
For node i: DC(i) = Σ_α k_i^α
where k_i^α is the degree of node i in layer α.
References
Battiston, F., et al. (2014). “Structural measures for multiplex networks.” Physical Review E, 89(3), 032804.
- py3plex.algorithms.centrality_toolkit.versatility_score(centrality_dict: Dict[Tuple, float], normalized: bool = True) Dict[Any, float]
Compute versatility score for each node.
Measures how evenly a node’s centrality is distributed across layers. High versatility means the node is important in multiple layers.
- Parameters:
centrality_dict – Dictionary mapping (node, layer) -> centrality score
normalized – Whether to normalize to [0, 1]
- Returns:
Dictionary mapping node_id -> versatility score
- Algorithm:
V(i) = 1 - Σ_α (c_i^α / c_i^total)^2
where c_i^α is the centrality of node i in layer α. This equals one minus the Herfindahl-Hirschman concentration index of the per-layer centrality shares.
References
Battiston, F., et al. (2014). “Structural measures for multiplex networks.” Physical Review E, 89(3), 032804.
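A minimal sketch with toy centrality values (invented for illustration):
>>> from py3plex.algorithms.centrality_toolkit import versatility_score
>>> scores = {('A', 'L1'): 0.5, ('A', 'L2'): 0.5, ('B', 'L1'): 1.0}
>>> v = versatility_score(scores)
>>> # 'A' splits its centrality evenly across two layers -> high versatility;
>>> # 'B' is concentrated in a single layer -> versatility near 0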
MultiXRank: Random Walk with Restart on Universal Multilayer Networks
This module implements the MultiXRank algorithm described in Baptista et al. (2022), “Universal multilayer network exploration by random walk with restart”, Communications Physics, 5, 170.
MultiXRank performs random walk with restart (RWR) on a supra-heterogeneous adjacency matrix built from multiple multiplexes connected by bipartite blocks.
- class py3plex.algorithms.multilayer_algorithms.multixrank.MultiXRank(restart_prob: float = 0.4, epsilon: float = 1e-06, max_iter: int = 100000, verbose: bool = True)
Bases:
object
MultiXRank: Universal multilayer network exploration by random walk with restart.
This class implements the MultiXRank algorithm for node prioritization and ranking in universal multilayer networks. It builds a supra-heterogeneous adjacency matrix from multiple multiplexes and bipartite inter-multiplex connections, then performs random walk with restart.
- multiplexes
Dictionary of multiplex supra-adjacency matrices
- Type:
Dict[str, sp.spmatrix]
- bipartite_blocks
Inter-multiplex connection matrices
- Type:
Dict[Tuple[str, str], sp.spmatrix]
- restart_prob
Restart probability (r) for RWR
- Type:
float
- epsilon
Convergence threshold
- Type:
float
- max_iter
Maximum number of iterations
- Type:
int
- node_order
Node ordering for each multiplex
- Type:
Dict[str, List]
- supra_matrix
Built supra-heterogeneous adjacency matrix
- Type:
sp.spmatrix
- transition_matrix
Column-stochastic transition matrix
- Type:
sp.spmatrix
- add_bipartite_block(multiplex_from: str, multiplex_to: str, bipartite_matrix: spmatrix | ndarray, weight: float = 1.0)
Add a bipartite block connecting two multiplexes.
- Parameters:
multiplex_from – Name of source multiplex
multiplex_to – Name of target multiplex
bipartite_matrix – Matrix of connections (rows: from, cols: to)
weight – Optional weight to scale this bipartite block
- add_multiplex(name: str, supra_adjacency: spmatrix | ndarray, node_order: List | None = None)
Add a multiplex to the universal multilayer network.
- Parameters:
name – Unique identifier for this multiplex
supra_adjacency – Supra-adjacency matrix for the multiplex (can be from multi_layer_network.get_supra_adjacency_matrix())
node_order – Optional list of node IDs in the order they appear in the matrix. If None, uses integer indices.
- aggregate_scores(scores: ndarray, aggregation: str = 'sum') Dict[str, Dict]
Aggregate scores per multiplex and optionally per physical node.
- Parameters:
scores – Probability vector from RWR (length = total supra-matrix dimension)
aggregation – How to aggregate scores (‘sum’, ‘mean’, ‘max’)
- Returns:
Dictionary mapping multiplex names to dictionaries of node scores
- build_supra_heterogeneous_matrix(block_weights: Dict[str | Tuple[str, str], float] | None = None)
Build the supra-heterogeneous adjacency matrix S.
This constructs the universal multilayer network matrix by:
1. Placing each multiplex supra-adjacency on the block diagonal
2. Adding bipartite blocks for inter-multiplex connections
- Parameters:
block_weights – Optional dictionary to weight blocks. Keys can be:
- Multiplex names (to weight within-multiplex edges)
- Tuple (multiplex_from, multiplex_to) for bipartite blocks
- Returns:
The supra-heterogeneous adjacency matrix
- column_normalize(handle_dangling: str = 'uniform') spmatrix
Column-normalize the supra-heterogeneous matrix to create a stochastic transition matrix.
This ensures each column sums to 1, making the matrix suitable for RWR.
- Parameters:
handle_dangling – How to handle dangling nodes (columns with zero sum):
- ‘uniform’: Distribute mass uniformly across all nodes
- ‘self’: Add self-loop (mass stays at the node)
- ‘ignore’: Leave as zero (not recommended for RWR)
- Returns:
Column-stochastic transition matrix
- get_top_ranked(scores: ndarray, k: int = 10, multiplex: str | None = None, exclude_seeds: bool = True, seed_nodes: List[int] | Dict[str, List] | None = None) List[Tuple]
Get top-k ranked nodes from RWR scores.
- Parameters:
scores – Probability vector from RWR
k – Number of top nodes to return
multiplex – If specified, only return nodes from this multiplex
exclude_seeds – Whether to exclude seed nodes from results
seed_nodes – Seed nodes to exclude (if exclude_seeds=True)
- Returns:
List of (global_index, score) tuples, sorted by score descending
- random_walk_with_restart(seed_nodes: List[int] | Dict[str, List] | ndarray, seed_weights: ndarray | None = None, multiplex_name: str | None = None) ndarray
Perform Random Walk with Restart (RWR) from seed nodes.
- Parameters:
seed_nodes – Seed node specification. Can be:
- List of global indices in the supra-matrix
- Dict mapping multiplex names to lists of local node indices
- NumPy array of global indices
seed_weights – Optional weights for seed nodes (must match length of seed_nodes). If None, uniform weights are used.
multiplex_name – If seed_nodes is a list/array of local indices, specify which multiplex they belong to
- Returns:
Steady-state probability vector (length = total nodes across all multiplexes)
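An end-to-end sketch of the typical workflow on toy data (the two supra-adjacency matrices and the bipartite block below are invented for illustration; parameter values follow the defaults documented above):
>>> import numpy as np
>>> import scipy.sparse as sp
>>> from py3plex.algorithms.multilayer_algorithms.multixrank import MultiXRank
>>> mxr = MultiXRank(restart_prob=0.4, verbose=False)
>>> A1 = sp.csr_matrix(np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float))
>>> A2 = sp.csr_matrix(np.array([[0, 1], [1, 0]], dtype=float))
>>> mxr.add_multiplex('m1', A1)
>>> mxr.add_multiplex('m2', A2)
>>> B = sp.csr_matrix(np.array([[1, 0], [0, 1], [0, 0]], dtype=float))  # rows: m1, cols: m2
>>> mxr.add_bipartite_block('m1', 'm2', B)
>>> S = mxr.build_supra_heterogeneous_matrix()
>>> T = mxr.column_normalize(handle_dangling='uniform')
>>> scores = mxr.random_walk_with_restart(seed_nodes={'m1': [0]})
>>> top = mxr.get_top_ranked(scores, k=3, seed_nodes={'m1': [0]})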
- py3plex.algorithms.multilayer_algorithms.multixrank.multixrank_from_py3plex_networks(networks: Dict[str, multinet.multi_layer_network], bipartite_connections: Dict[Tuple[str, str], spmatrix | ndarray] | None = None, seed_nodes: Dict[str, List] | None = None, restart_prob: float = 0.4, epsilon: float = 1e-06, max_iter: int = 100000, verbose: bool = True) Tuple[MultiXRank, ndarray]
Convenience function to run MultiXRank on py3plex multi_layer_network objects.
- Parameters:
networks – Dictionary mapping names to multi_layer_network objects
bipartite_connections – Optional dict of inter-network connection matrices
seed_nodes – Dict mapping network names to lists of seed node IDs
restart_prob – Restart probability for RWR
epsilon – Convergence threshold
max_iter – Maximum iterations
verbose – Whether to log progress
- Returns:
Tuple of (MultiXRank object, scores array)
Example
>>> from py3plex.core import multinet
>>> net1 = multinet.multi_layer_network()
>>> net1.load_network('network1.edgelist', ...)
>>> net2 = multinet.multi_layer_network()
>>> net2.load_network('network2.edgelist', ...)
>>>
>>> networks = {'net1': net1, 'net2': net2}
>>> seed_nodes = {'net1': ['node1', 'node2']}
>>>
>>> mxr, scores = multixrank_from_py3plex_networks(
...     networks, seed_nodes=seed_nodes
... )
>>>
>>> # Get aggregated scores per network
>>> aggregated = mxr.aggregate_scores(scores)
Authors: Benjamin Renoust (github.com/renoust) Date: 2018/02/13 Description: Loads a Detangler JSON format graph and computes unweighted entanglement analysis with Py3Plex
- py3plex.algorithms.multilayer_algorithms.entanglement.build_occurrence_matrix(network: Any) Tuple[ndarray, List[Any]]
Build occurrence matrix from multilayer network.
- Parameters:
network – Multilayer network object
- Returns:
Tuple of (c_matrix, layers) where c_matrix is the normalized occurrence matrix and layers is the list of layer names
- py3plex.algorithms.multilayer_algorithms.entanglement.compute_blocks(c_matrix: ndarray) Tuple[List[List[int]], List[ndarray]]
Compute block decomposition of occurrence matrix.
- Parameters:
c_matrix – Occurrence matrix
- Returns:
Tuple of (indices, blocks) where indices are the layer indices in each block and blocks are the submatrices for each block
- py3plex.algorithms.multilayer_algorithms.entanglement.compute_entanglement(block_matrix: ndarray) Tuple[List[float], List[float]]
Compute entanglement metrics for a block.
- Parameters:
block_matrix – Block submatrix
- Returns:
Tuple of ([intensity, homogeneity, normalized_homogeneity], gamma_layers)
- py3plex.algorithms.multilayer_algorithms.entanglement.compute_entanglement_analysis(network: Any) List[Dict[str, Any]]
Compute full entanglement analysis for a multilayer network.
- Parameters:
network – Multilayer network object
- Returns:
List of block analysis dictionaries with entanglement metrics
Supra Matrix Function Centralities for Multilayer Networks.
This module implements centrality measures based on matrix functions of the supra-adjacency matrix, including: - Communicability Centrality (Estrada & Hatano, 2008) - Katz Centrality (Katz, 1953)
These measures operate directly on the sparse supra-adjacency matrix obtained from multi_layer_network.get_supra_adjacency_matrix().
References
Estrada, E., & Hatano, N. (2008). Communicability in complex networks. Physical Review E, 77(3), 036111.
Katz, L. (1953). A new status index derived from sociometric analysis. Psychometrika, 18(1), 39-43.
Authors: py3plex contributors Date: October 2025 (Phase II)
- py3plex.algorithms.multilayer_algorithms.supra_matrix_function_centrality.communicability_centrality(supra_matrix: spmatrix, normalize: bool = True, use_sparse: bool = True, max_iter: int = 100, tol: float = 1e-06) ndarray
Compute communicability centrality for each node-layer pair.
Communicability centrality measures the weighted sum of all walks between nodes, with exponentially decaying weights for longer walks. It is computed as the row sum of the matrix exponential:
c_i = sum_j (exp(A))_ij
This implementation uses scipy.sparse.linalg.expm_multiply for efficient sparse matrix exponential computation.
- Parameters:
supra_matrix – Sparse supra-adjacency matrix (n x n).
normalize – If True, normalize output to sum to 1.
use_sparse – If True, use sparse matrix operations. Falls back to dense for matrices smaller than 10000 elements.
max_iter – Maximum number of iterations for sparse approximation (currently unused).
tol – Tolerance for convergence (currently unused).
- Returns:
Communicability centrality scores for each node-layer pair (n,).
- Return type:
np.ndarray
- Raises:
Py3plexMatrixError – If matrix is invalid (non-square, empty, etc.).
Example
>>> from py3plex.core import random_generators
>>> net = random_generators.random_multiplex_ER(50, 3, 0.1)
>>> A = net.get_supra_adjacency_matrix()
>>> comm = communicability_centrality(A)
>>> print(f"Communicability centrality computed for {len(comm)} node-layer pairs")
- py3plex.algorithms.multilayer_algorithms.supra_matrix_function_centrality.katz_centrality(supra_matrix: spmatrix, alpha: float | None = None, beta: float = 1.0, tol: float = 1e-06) ndarray
Compute Katz centrality for each node-layer pair.
Katz centrality measures node influence by accounting for all paths with exponentially decaying weights. It is computed as:
x = (I - alpha * A)^{-1} * beta * 1
where alpha < 1/lambda_max(A) to ensure convergence. If alpha is not provided, it defaults to 0.85 / lambda_max(A).
- Parameters:
supra_matrix – Sparse supra-adjacency matrix (n x n).
alpha – Attenuation parameter. Must be less than 1/spectral_radius(A). If None, defaults to 0.85 / lambda_max(A).
beta – Weight of exogenous influence (typically 1.0).
tol – Tolerance for eigenvalue computation.
- Returns:
- Katz centrality scores for each node-layer pair (n,).
Normalized to sum to 1.
- Return type:
np.ndarray
- Raises:
Py3plexMatrixError – If matrix is invalid or alpha is out of valid range.
Example
>>> from py3plex.core import random_generators
>>> net = random_generators.random_multiplex_ER(50, 3, 0.1)
>>> A = net.get_supra_adjacency_matrix()
>>> katz = katz_centrality(A)
>>> print(f"Katz centrality computed for {len(katz)} node-layer pairs")
>>> # With custom alpha
>>> katz_custom = katz_centrality(A, alpha=0.05)
Multiplex Participation Coefficient (MPC) for multiplex networks.
This module implements the Multiplex Participation Coefficient metric for multiplex networks (networks with identical node sets across all layers).
Authors: py3plex contributors Date: 2025
- py3plex.algorithms.multicentrality.ensure(*args, **kwargs)
- py3plex.algorithms.multicentrality.multiplex_participation_coefficient(multinet: Any, normalized: bool = True, check_multiplex: bool = True) Dict[Any, float]
Compute the Multiplex Participation Coefficient (MPC) for multiplex networks. MPC measures how evenly a node participates across layers.
- Parameters:
multinet (py3plex.core.multinet.multi_layer_network) – Multiplex network object (same node set across layers).
normalized (bool, optional) – Whether to normalize MPC to [0,1]. Default: True.
check_multiplex (bool, optional) – Validate that all layers share the same node set.
- Returns:
Node → MPC value mapping.
- Return type:
dict
Notes
- The MPC is computed as:
MPC(i) = 1 - Σ_α (k_i^α / k_i^total)^2
where:
- k_i^α is the degree of node i in layer α
- k_i^total is the total degree of node i across all layers
When normalized=True, the result is multiplied by L/(L-1) to normalize to the [0,1] range, where L is the number of layers.
References
Battiston, F., et al. (2014). “Structural measures for multiplex networks.” Physical Review E, 89(3), 032804.
De Domenico, M., et al. (2015). “Identifying modular flows on multilayer networks reveals highly overlapping organization in interconnected systems.” Physical Review X, 5(1), 011027.
Harooni, M., et al. (2025). “Centrality in Multilayer Networks: Accurate Measurements with MultiNetPy.” The Journal of Supercomputing, 81(1), 92. DOI: 10.1007/s11227-025-07197-8
- Contracts:
Precondition: multinet must not be None
Precondition: normalized and check_multiplex must be booleans
Postcondition: returns a dictionary with numeric values
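A minimal usage sketch (network construction follows the random-generator example used elsewhere in this documentation):
>>> from py3plex.core import random_generators
>>> from py3plex.algorithms.multicentrality import multiplex_participation_coefficient
>>> net = random_generators.random_multiplex_ER(30, 3, 0.2)
>>> mpc = multiplex_participation_coefficient(net, normalized=True)
>>> # values near 1: degree spread evenly across the 3 layers; near 0: concentrated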
- py3plex.algorithms.multicentrality.require(*args, **kwargs)
General Algorithms
Random walk primitives for graph-based algorithms.
This module provides foundation for higher-level algorithms like Node2Vec, DeepWalk, and diffusion processes. Implements both basic and second-order (biased) random walks with proper edge weight handling and multilayer support.
- Key Features:
Basic random walks with weighted edge sampling
Second-order (Node2Vec-style) biased random walks with p/q parameters
Multiple simultaneous walks with deterministic reproducibility
Support for directed, weighted, and multilayer networks
Efficient sparse adjacency handling
References
Grover, A., & Leskovec, J. (2016). node2vec: Scalable feature learning for networks. KDD ‘16. https://doi.org/10.1145/2939672.2939754
Perozzi, B., Al-Rfou, R., & Skiena, S. (2014). DeepWalk: Online learning of social representations. KDD ‘14. https://doi.org/10.1145/2623330.2623732
- py3plex.algorithms.general.walkers.basic_random_walk(G: Graph, start_node: int | str, walk_length: int, weighted: bool = True, seed: int | None = None) List[int | str]
Perform a basic random walk on a graph with proper edge weight handling.
The next step is sampled proportionally to the normalized edge weights of the current node. For unweighted graphs, transitions are uniform.
- Parameters:
G – NetworkX graph (directed or undirected, weighted or unweighted)
start_node – Node to start the walk from
walk_length – Number of steps in the walk
weighted – Whether to use edge weights (default: True)
seed – Random seed for reproducibility (default: None)
- Returns:
List of nodes representing the walk path (includes start_node)
- Raises:
ValueError – If start_node not in graph or walk_length < 1
Examples
>>> G = nx.Graph()
>>> G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 2.0), (1, 3, 1.0)])
>>> walk = basic_random_walk(G, 0, walk_length=3, seed=42)
>>> len(walk)
4
>>> walk[0]
0
Note
Handles disconnected nodes by terminating walk early
Edge weights must be positive for weighted walks
Sum of transition probabilities from any node equals 1.0
- py3plex.algorithms.general.walkers.general_random_walk(G, start_node, iterations=1000, teleportation_prob=0)
Legacy random walk with teleportation (for backward compatibility).
Deprecated since version 0.95a: Use basic_random_walk() or node2vec_walk() instead. This function will be removed in version 1.0.
- Parameters:
G – NetworkX graph
start_node – Starting node
iterations – Number of steps
teleportation_prob – Probability of teleporting to random visited node
- Returns:
List of visited nodes (excluding start_node)
- py3plex.algorithms.general.walkers.generate_walks(G: Graph, num_walks: int, walk_length: int, start_nodes: List[int | str] | None = None, p: float = 1.0, q: float = 1.0, weighted: bool = True, return_edges: bool = False, seed: int | None = None) List[List[int | str]] | List[List[Tuple[int | str, int | str]]]
Generate multiple random walks from specified or all nodes.
This interface supports multiple simultaneous walks with deterministic reproducibility under fixed RNG seed. Can return either node sequences or edge sequences.
- Parameters:
G – NetworkX graph
num_walks – Number of walks to generate per start node
walk_length – Number of steps in each walk
start_nodes – Nodes to start walks from (if None, uses all nodes)
p – Return parameter for Node2Vec (1.0 = no bias)
q – In-out parameter for Node2Vec (1.0 = no bias)
weighted – Whether to use edge weights
return_edges – Return edge sequences instead of node sequences
seed – Random seed for reproducibility
- Returns:
List of walks, where each walk is either:
- a list of nodes (if return_edges=False)
- a list of edges as tuples (if return_edges=True)
- Return type:
List[List[int | str]] | List[List[Tuple[int | str, int | str]]]
Examples
>>> G = nx.karate_club_graph()
>>> # Generate 10 walks from each node
>>> walks = generate_walks(G, num_walks=10, walk_length=5, seed=42)
>>> len(walks)
340
>>> # Generate walks with Node2Vec bias
>>> walks = generate_walks(G, num_walks=5, walk_length=10, p=0.5, q=2.0, seed=42)
>>> # Get edge sequences
>>> edge_walks = generate_walks(G, num_walks=3, walk_length=5, return_edges=True, seed=42)
Note
With same seed, generates identical walks across runs
If p == q == 1.0, uses basic random walk (faster)
Empty walks are included if node has no neighbors
- py3plex.algorithms.general.walkers.layer_specific_random_walk(G: Graph, start_node: int | str, walk_length: int, layer: str | None = None, cross_layer_prob: float = 0.0, weighted: bool = True, seed: int | None = None) List[int | str]
Perform random walk with layer constraints for multilayer networks.
In a multilayer network represented in py3plex format (where node names include layer information), this function can constrain walks to specific layers with occasional inter-layer transitions.
- Parameters:
G – NetworkX graph (multilayer network in py3plex format)
start_node – Node to start walk from (may include layer info)
walk_length – Number of steps in the walk
layer – Target layer to constrain walk to (None = no constraint)
cross_layer_prob – Probability of crossing to different layer (0-1)
weighted – Whether to use edge weights
seed – Random seed for reproducibility
- Returns:
List of nodes representing the walk path
Examples
>>> # Multilayer network with layer-specific walks
>>> from py3plex.core import multinet
>>> network = multinet.multi_layer_network()
>>> network.add_layer("social")
>>> network.add_layer("biological")
>>> # ... add nodes and edges ...
>>> walk = layer_specific_random_walk(
...     network.core_network,
...     "nodeA---social",
...     walk_length=10,
...     layer="social",
...     cross_layer_prob=0.1
... )
Note
If layer is None, behaves like basic_random_walk
cross_layer_prob controls inter-layer transitions
Node format should follow the py3plex convention: "nodeID---layerID"
- py3plex.algorithms.general.walkers.node2vec_walk(G: Graph, start_node: int | str, walk_length: int, p: float = 1.0, q: float = 1.0, weighted: bool = True, seed: int | None = None) List[int | str]
Perform a second-order (biased) random walk following Node2Vec logic.
When transitioning from node t → v → x, the probability of choosing x is biased by parameters p (return) and q (in-out):
- If x == t (return to previous): weight / p
- If x is a neighbor of t (stay close): weight / 1
- If x is not a neighbor of t (explore): weight / q
- Parameters:
G – NetworkX graph (directed or undirected, weighted or unweighted)
start_node – Node to start the walk from
walk_length – Number of steps in the walk
p – Return parameter (higher p = less likely to return to previous node)
q – In-out parameter (higher q = less likely to explore further)
weighted – Whether to use edge weights (default: True)
seed – Random seed for reproducibility (default: None)
- Returns:
List of nodes representing the walk path (includes start_node)
- Raises:
ValueError – If p <= 0 or q <= 0 or start_node not in graph
Examples
>>> G = nx.Graph()
>>> G.add_edges_from([(0, 1), (1, 2), (1, 3), (0, 2)])
>>> # Low p, high q: tends to backtrack
>>> walk = node2vec_walk(G, 0, walk_length=5, p=0.1, q=10.0, seed=42)
>>> # High p, low q: tends to explore outward
>>> walk2 = node2vec_walk(G, 0, walk_length=5, p=10.0, q=0.1, seed=42)
Note
First step is always a basic random walk (no previous node)
Properly normalizes probabilities at each step
Handles disconnected nodes by terminating early
References
Grover & Leskovec (2016), node2vec: Scalable feature learning for networks
Benchmark algorithms for node classification performance evaluation.
This module provides algorithms for benchmarking node classification performance, including oracle-based F1 score evaluation.
- py3plex.algorithms.general.benchmark_classification.evaluate_oracle_F1(probs: ndarray, Y_real: ndarray) tuple[float, float]
Evaluate oracle F1 scores for multi-label classification.
This function computes micro and macro F1 scores by selecting the top-k predictions for each sample, where k is determined by the ground truth number of labels.
- Parameters:
probs – Predicted probability matrix of shape (n_samples, n_labels).
Y_real – Ground truth binary label matrix of shape (n_samples, n_labels).
- Returns:
micro: Micro-averaged F1 score
macro: Macro-averaged F1 score
- Return type:
A tuple containing
Example
>>> probs = np.array([[0.9, 0.1, 0.2], [0.1, 0.8, 0.7]])
>>> Y_real = np.array([[1, 0, 0], [0, 1, 1]])
>>> micro, macro = evaluate_oracle_F1(probs, Y_real)
Node Ranking
- py3plex.algorithms.node_ranking.node_ranking.authority_matrix(graph: Graph) spmatrix
Get the authority matrix of a graph.
- Parameters:
graph – NetworkX graph
- Returns:
Authority matrix
- py3plex.algorithms.node_ranking.node_ranking.hub_matrix(graph: Graph) spmatrix
Get the hub matrix of a graph.
- Parameters:
graph – NetworkX graph
- Returns:
Hub matrix
- py3plex.algorithms.node_ranking.node_ranking.hubs_and_authorities(graph: Graph) Tuple[dict, dict]
Compute hubs and authorities scores using HITS algorithm.
- Parameters:
graph – NetworkX graph
- Returns:
Tuple of (hubs dictionary, authorities dictionary)
- py3plex.algorithms.node_ranking.node_ranking.modularity(G: Graph, communities: List[List[Any]], weight: str = 'weight') float
Calculate modularity of a graph partition.
- Parameters:
G – NetworkX graph
communities – List of communities (each community is a list of nodes)
weight – Edge weight attribute name
- Returns:
Modularity value
- py3plex.algorithms.node_ranking.node_ranking.page_rank_kernel(index_row: int) Tuple[int, ndarray]
PageRank kernel for parallel computation.
Note: This function expects global variables G, damping_hyper, spread_step_hyper, spread_percent_hyper, and graph to be defined. It’s designed for use with multiprocessing.Pool.map().
- Parameters:
index_row – Row index to compute PageRank for
- Returns:
Tuple of (index, PageRank vector)
- py3plex.algorithms.node_ranking.node_ranking.sparse_page_rank(matrix: spmatrix, start_nodes: List[int] | range | None, epsilon: float = 1e-06, max_steps: int = 100000, damping: float = 0.5, spread_step: int = 10, spread_percent: float = 0.3, try_shrink: bool = False) ndarray
Compute sparse PageRank with personalization.
- Parameters:
matrix – Sparse adjacency matrix (column-stochastic)
start_nodes – List of starting node indices for personalization (can be range or None)
epsilon – Convergence threshold
max_steps – Maximum number of iterations
damping – Damping factor (teleportation probability)
spread_step – Maximum steps for spread calculation
spread_percent – Percentage threshold for spread
try_shrink – Whether to try matrix shrinking optimization
- Returns:
PageRank vector
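A minimal sketch pairing this function with stochastic_normalization() from the same module (assuming the normalized matrix is the column-stochastic form this function expects):
>>> import networkx as nx
>>> import scipy.sparse as sp
>>> from py3plex.algorithms.node_ranking.node_ranking import (
...     sparse_page_rank, stochastic_normalization)
>>> A = sp.csr_matrix(nx.to_scipy_sparse_array(nx.karate_club_graph()))
>>> M = stochastic_normalization(A)
>>> pr = sparse_page_rank(M, start_nodes=[0], damping=0.85)  # personalized on node 0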
- py3plex.algorithms.node_ranking.node_ranking.stochastic_normalization(matrix: spmatrix) spmatrix
Normalize a sparse matrix stochastically.
- Parameters:
matrix – Sparse matrix to normalize
- Returns:
Stochastically normalized sparse matrix
- py3plex.algorithms.node_ranking.node_ranking.stochastic_normalization_hin(matrix: spmatrix) spmatrix
Normalize a heterogeneous information network matrix stochastically.
- Parameters:
matrix – Sparse matrix to normalize
- Returns:
Stochastically normalized sparse matrix
Network Classification
- py3plex.algorithms.network_classification.label_propagation.label_propagation(graph_matrix: spmatrix, class_matrix: ndarray, alpha: float = 0.001, epsilon: float = 1e-12, max_steps: int = 100000, normalization: str | List[str] = 'freq') ndarray
Propagate labels through a graph.
- Parameters:
graph_matrix – Sparse graph adjacency matrix
class_matrix – Initial class label matrix
alpha – Propagation weight parameter
epsilon – Convergence threshold
max_steps – Maximum number of iterations
normalization – Normalization scheme(s) to apply
- Returns:
Propagated label matrix
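Example
A sketch propagating two seed labels over the karate club graph; the normalization helper and parameter choices here are illustrative:
>>> import numpy as np
>>> import networkx as nx
>>> from py3plex.algorithms.network_classification.label_propagation import (
...     label_propagation, label_propagation_normalization)
>>> G = nx.karate_club_graph()
>>> A = label_propagation_normalization(
...     nx.to_scipy_sparse_array(G, format="csr"))
>>> Y = np.zeros((G.number_of_nodes(), 2))
>>> Y[0, 0] = 1.0   # seed node for class 0
>>> Y[33, 1] = 1.0  # seed node for class 1
>>> probs = label_propagation(A, Y, alpha=0.001)  # scores for every node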
- py3plex.algorithms.network_classification.label_propagation.label_propagation_normalization(matrix: spmatrix) spmatrix
Normalize a matrix for label propagation.
- Parameters:
matrix – Sparse matrix to normalize
- Returns:
Normalized sparse matrix
- py3plex.algorithms.network_classification.label_propagation.label_propagation_tf() None
TensorFlow-based label propagation (TODO: implement).
Placeholder for future TensorFlow implementation.
- py3plex.algorithms.network_classification.label_propagation.normalize_amplify_freq(mat: ndarray) ndarray
Normalize and amplify matrix by frequency.
- Parameters:
mat – Matrix to normalize
- Returns:
Normalized and amplified matrix
- py3plex.algorithms.network_classification.label_propagation.normalize_exp(mat: ndarray) ndarray
Apply exponential normalization.
- Parameters:
mat – Matrix to normalize
- Returns:
Exponentially normalized matrix
- py3plex.algorithms.network_classification.label_propagation.normalize_initial_matrix_freq(mat: ndarray) ndarray
Normalize matrix by frequency.
- Parameters:
mat – Matrix to normalize
- Returns:
Normalized matrix
- py3plex.algorithms.network_classification.label_propagation.normalize_none(mat: ndarray) ndarray
No normalization (identity function).
- Parameters:
mat – Matrix to return unchanged
- Returns:
Original matrix
- py3plex.algorithms.network_classification.label_propagation.validate_label_propagation(core_network: spmatrix, labels: ndarray | spmatrix, dataset_name: str = 'test', repetitions: int = 5, normalization_scheme: str | List[str] = 'basic', alpha_value: float = 0.001, random_seed: int = 123, verbose: bool = False) DataFrame
Validate label propagation with cross-validation.
- Parameters:
core_network – Sparse network adjacency matrix
labels – Label matrix
dataset_name – Name of the dataset
repetitions – Number of repetitions
normalization_scheme – Normalization scheme to use
alpha_value – Alpha parameter for propagation
random_seed – Random seed for reproducibility
verbose – Whether to print progress
- Returns:
DataFrame with validation results
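Example
A sketch reusing A and Y from the label_propagation example above; the exact columns of the returned DataFrame are implementation-defined:
>>> from py3plex.algorithms.network_classification.label_propagation import (
...     validate_label_propagation)
>>> results = validate_label_propagation(A, Y, dataset_name="karate",
...     repetitions=3, verbose=False)
>>> results.head()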
Visualization
- py3plex.visualization.multilayer.draw_multiedges(network_list: List[Graph] | Dict[Any, Graph], multi_edge_tuple: List[Any], input_type: str = 'nodes', linepoints: str = '-.', alphachannel: float = 0.3, linecolor: str = 'black', curve_height: float = 1, style: str = 'curve2_bezier', linewidth: float = 1, invert: bool = False, linmod: str = 'both', resolution: float = 0.001, ax: Any | None = None) Any
Draw edges connecting multiple layers.
Draws curved or straight edges that connect nodes across different layers in a multilayer network visualization. Typically used after draw_multilayer_default to add inter-layer connections.
- Parameters:
network_list – List of NetworkX graphs (layers) or dict of layer_name -> graph
multi_edge_tuple – List of tuples specifying edges to draw, e.g. [(node1, node2), …]
input_type – Type of input (“nodes” or other)
linepoints – Line style (e.g., "-.", "--", "-")
alphachannel – Transparency level (0.0 to 1.0)
linecolor – Color of the lines
curve_height – Height of curved edges
style – Style of edges (“curve2_bezier”, “line”, “curve3_bezier”, “curve3_fit”, “piramidal”)
linewidth – Width of lines
invert – Whether to invert drawing direction
linmod – Line modification mode
resolution – Resolution for curve drawing
ax – Matplotlib Axes to draw on. If None, uses current axes (plt.gca())
- Returns:
Matplotlib Axes object containing the visualization.
Example
>>> import matplotlib.pyplot as plt >>> from py3plex.visualization import draw_multilayer_default, draw_multiedges >>> fig, ax = plt.subplots(figsize=(10, 10)) >>> ax = draw_multilayer_default(graphs, ax=ax) >>> ax = draw_multiedges(graphs, edges, ax=ax) >>> plt.savefig("multilayer_with_edges.png")
- py3plex.visualization.multilayer.draw_multilayer_default(network_list: List[Graph] | Dict[Any, Graph], display: bool = False, node_size: int = 10, alphalevel: float = 0.13, rectanglex: float = 1, rectangley: float = 1, background_shape: str = 'circle', background_color: str = 'rainbow', networks_color: str = 'rainbow', labels: bool = False, arrowsize: float = 0.5, label_position: int = 1, verbose: bool = False, remove_isolated_nodes: bool = False, ax: Any | None = None, edge_size: float = 1, node_labels: bool = False, node_font_size: int = 5, scale_by_size: bool = False, *, axis: Any | None = None) Any
Core multilayer drawing method.
Draws a diagonal multilayer network visualization where each layer is offset to create a 3D-like effect. Nodes within each layer are drawn with their positions, and background shapes indicate layer boundaries.
- Parameters:
network_list – List of NetworkX graphs to visualize (or dict of layer_name -> graph)
display – If True, calls plt.show() after drawing. Default is False to let the caller control rendering.
node_size – Base size of nodes
alphalevel – Transparency level for background shapes
rectanglex – Width of rectangular backgrounds
rectangley – Height of rectangular backgrounds
background_shape – Background shape type (“circle” or “rectangle”)
background_color – Background color scheme (“default”, “rainbow”, or None)
networks_color – Network color scheme (“rainbow” or “black”)
labels – Whether to display layer labels
arrowsize – Size of edge arrows
label_position – Position offset for layer labels
verbose – Whether to log network information
remove_isolated_nodes – Whether to remove isolated nodes
ax – Matplotlib Axes to draw on. If None, uses current axes (plt.gca())
edge_size – Width of edges
node_labels – Whether to display node labels
node_font_size – Font size for node labels
scale_by_size – Whether to scale node size by degree
axis – Deprecated. Use ax instead.
- Returns:
Matplotlib Axes object containing the visualization.
Example
>>> import matplotlib.pyplot as plt >>> from py3plex.visualization import draw_multilayer_default >>> # Create figure and get axes >>> fig, ax = plt.subplots(figsize=(10, 10)) >>> # Draw on the axes (returns the axes) >>> ax = draw_multilayer_default(graphs, ax=ax) >>> # Caller controls when to display >>> plt.savefig("multilayer.png") # or plt.show()
- py3plex.visualization.multilayer.draw_multilayer_flow(graphs: List[Graph], multilinks: Dict[str, List[Tuple]], labels: List[str] | None = None, node_activity: Dict[Any, float] | None = None, ax: Any | None = None, display: bool = True, layer_gap: float = 3.0, node_size: float = 30, node_cmap: str = 'viridis', flow_alpha: float = 0.3, flow_min_width: float = 0.2, flow_max_width: float = 4.0, aggregate_by: Tuple[str, ...] = ('u', 'v', 'layer_u', 'layer_v'), **kwargs) Any
Draw multilayer network as layered flow visualization (alluvial-style).
Shows each layer as a horizontal band with nodes positioned along the x-axis. Intra-layer activity is encoded as node color/size, and inter-layer edges are shown as thick flow ribbons (Bezier curves) where width encodes edge weight.
- Parameters:
graphs – List of NetworkX graphs, one per layer (from multi_layer_network.get_layers())
multilinks – Dictionary mapping edge_type -> list of multi-layer edges
labels – Optional list of layer labels. If None, uses layer indices
node_activity – Optional dict mapping node_id -> activity value. If None, computes intra-layer degree
ax – Matplotlib axes to draw on. If None, creates new figure
display – If True, calls plt.show() at the end
layer_gap – Vertical distance between layer bands
node_size – Base marker size for nodes
node_cmap – Matplotlib colormap name for node activity coloring
flow_alpha – Base transparency for flow ribbons
flow_min_width – Minimum line width for flows
flow_max_width – Maximum line width for flows
aggregate_by – Tuple of keys for aggregating flows (currently not used, for future extension)
**kwargs – Reserved for future extensions
- Returns:
Matplotlib axes object
Examples
>>> network = multi_layer_network() >>> network.load_network("data.txt", input_type="multiedgelist") >>> labels, graphs, multilinks = network.get_layers() >>> draw_multilayer_flow(graphs, multilinks, labels=labels)
- py3plex.visualization.multilayer.generate_random_multiedges(network_list: List[Graph], random_edges: int, style: str = 'line', linepoints: str = '-.', upper_first: int = 2, lower_first: int = 0, lower_second: int = 2, inverse_tag: bool = False, pheight: float = 1) None
Generate and draw random multi-layer edges.
- Parameters:
network_list – List of NetworkX graphs (layers)
random_edges – Number of random edges to generate
style – Style of edges to draw
linepoints – Line style
upper_first – Upper bound for first layer
lower_first – Lower bound for first layer
lower_second – Lower bound for second layer
inverse_tag – Whether to invert drawing
pheight – Height parameter for curves
- py3plex.visualization.multilayer.generate_random_networks(number_of_networks: int) List[Graph]
Generate random networks for testing.
- Parameters:
number_of_networks – Number of random networks to generate
- Returns:
List of NetworkX graphs with random layouts
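Example
A sketch combining the two helpers above with the default drawing routine; that generate_random_multiedges draws onto the current axes is inferred from its None return type:
>>> import matplotlib.pyplot as plt
>>> from py3plex.visualization.multilayer import (
...     generate_random_networks, draw_multilayer_default,
...     generate_random_multiedges)
>>> graphs = generate_random_networks(3)
>>> ax = draw_multilayer_default(graphs)
>>> generate_random_multiedges(graphs, 10, style="line")
>>> plt.savefig("random_multilayer.png")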
- py3plex.visualization.multilayer.hairball_plot(g: Graph | Any, color_list: List[str] | List[int] | None = None, display: bool = False, node_size: float = 1, text_color: str = 'black', node_sizes: List[float] | None = None, layout_parameters: dict | None = None, legend: Any | None = None, scale_by_size: bool = True, layout_algorithm: str = 'force', edge_width: float = 0.01, alpha_channel: float = 0.5, labels: List[str] | None = None, draw: bool = True, label_font_size: int = 2, ax: Any | None = None) Any | None
Draw a force-directed “hairball” visualization of a network.
Creates a force-directed layout visualization where nodes are colored by type/layer and sized by degree. This is a common visualization for showing the overall structure of a network.
- Parameters:
g – NetworkX graph to visualize
color_list – List of colors for nodes. If None, colors are assigned based on node types.
display – If True, calls plt.show() after drawing. Default is False to let the caller control rendering.
node_size – Base size of nodes
text_color – Color for node labels
node_sizes – Custom list of node sizes (overrides node_size and scale_by_size)
layout_parameters – Parameters for the layout algorithm (e.g., {“pos”: {…}})
legend – If True, display a legend mapping colors to node types
scale_by_size – If True, scale node sizes by log(degree)
layout_algorithm – Layout algorithm to use. Options:
“force”: Force-directed layout (spring layout)
“random”: Random layout
“custom_coordinates”: Use positions from layout_parameters[“pos”]
“custom_coordinates_initial_force”: Use custom positions as initial layout
edge_width – Width of edges
alpha_channel – Transparency level (0.0 to 1.0)
labels – List of node labels to display (None for no labels)
draw – If True, draw the network. If False, only compute layout and return data.
label_font_size – Font size for node labels
ax – Matplotlib Axes to draw on. If None, uses current axes (plt.gca())
- Returns:
If draw=True: Matplotlib Axes object containing the visualization
If draw=False: Tuple of (graph, node_sizes, color_mapping, positions)
- Return type:
Matplotlib Axes (draw=True) or tuple (draw=False)
Example
>>> import matplotlib.pyplot as plt >>> from py3plex.visualization import hairball_plot >>> fig, ax = plt.subplots(figsize=(10, 10)) >>> ax = hairball_plot(network.core_network, ax=ax, legend=True) >>> plt.savefig("hairball.png")
- py3plex.visualization.multilayer.interactive_diagonal_plot(network_list: List[Graph] | Dict[Any, Graph], layer_labels: List[str] | None = None, layout_algorithm: str = 'force', layer_gap: float = 4.0, node_size_base: int = 8, layer_colors: List[str] | None = None, show_interlayer_edges: bool = True, interlayer_edges: List[Tuple[Any, Any]] | None = None) bool | Any
Create an interactive 2.5D diagonal multilayer plot using Plotly.
This function creates an interactive version of the diagonal multilayer visualization, mimicking the traditional 2D diagonal layout but in an interactive 3D environment. Each layer is positioned diagonally with clear visual separation, similar to the static diagonal visualization.
- Parameters:
network_list – List of NetworkX graphs (layers) or dict of layer_name -> graph
layer_labels – Optional labels for each layer
layout_algorithm – Layout algorithm for nodes (“force”, “circular”, “random”)
layer_gap – Distance between layers in diagonal direction (default: 4.0)
node_size_base – Base size for nodes (default: 8)
layer_colors – Optional list of colors for each layer (HTML color names or hex)
show_interlayer_edges – Whether to show inter-layer edges
interlayer_edges – List of tuples (node1, node2) for inter-layer connections
- Returns:
False if Plotly is not available, otherwise a Plotly figure object
Examples
>>> from py3plex.core import multinet >>> net = multinet.multi_layer_network() >>> net.load_network("network.txt", input_type="multiedgelist") >>> labels, graphs, multilinks = net.get_layers("diagonal") >>> fig = interactive_diagonal_plot(graphs, layer_labels=labels)
- py3plex.visualization.multilayer.interactive_hairball_plot(G: Graph, nsizes: List[float], final_color_mapping: dict, pos: dict, colorscale: str = 'Rainbow') bool | Any
Create an interactive 3D hairball plot using Plotly.
- Parameters:
G – NetworkX graph to visualize
nsizes – Node sizes
final_color_mapping – Mapping of nodes to colors
pos – Node positions
colorscale – Color scale to use
- Returns:
False if Plotly is not available, otherwise a Plotly figure object
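Example
A sketch with hand-built inputs; the exact structure expected for final_color_mapping (here node -> numeric value) is an assumption:
>>> import networkx as nx
>>> from py3plex.visualization.multilayer import interactive_hairball_plot
>>> G = nx.karate_club_graph()
>>> pos = nx.spring_layout(G, seed=42)
>>> sizes = [G.degree(n) for n in G.nodes()]
>>> colors = {n: float(G.degree(n)) for n in G.nodes()}
>>> fig = interactive_hairball_plot(G, sizes, colors, pos)
>>> if fig:  # False when Plotly is missing
...     fig.show()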
- py3plex.visualization.multilayer.onclick(event: Any) None
Handle mouse click events on plots.
- Parameters:
event – Matplotlib event object
- py3plex.visualization.multilayer.plot_edge_colored_projection(multilayer_network, layout: str = 'spring', node_size: int = 50, layer_colors: Dict[Any, str] | None = None, aggregate_multilayer_edges: bool = True, figsize: Tuple[float, float] = (12, 9), edge_alpha: float = 0.7, **kwargs)
Create an aggregated projection where edge colors indicate layer membership.
This visualization projects all layers onto a single 2D graph, using edge colors to distinguish which layer each edge belongs to. Useful for seeing the overall structure while maintaining layer information.
- Parameters:
multilayer_network – A MultiLayerNetwork instance
layout – Layout algorithm to use (“spring”, “circular”, “random”, “kamada_kawai”)
node_size – Size of nodes
layer_colors – Optional dict mapping layer names to colors; if None, auto-generated
aggregate_multilayer_edges – If True, show edges from all layers with distinct colors
figsize – Figure size as (width, height) tuple
edge_alpha – Transparency level for edges (0-1)
**kwargs – Additional arguments for customization
- Returns:
The created figure
- Return type:
matplotlib.figure.Figure
- py3plex.visualization.multilayer.plot_ego_multilayer(multilayer_network, ego, layers: List[Any] | None = None, max_depth: int = 1, layout: str = 'spring', figsize: Tuple[float, float] | None = None, max_cols: int = 3, node_size: int = 500, ego_node_size: int = 1200, **kwargs)
Create an ego-centric multilayer visualization.
This visualization focuses on a single node (ego) and shows its neighborhood across different layers, highlighting the ego node’s position in each layer.
- Parameters:
multilayer_network – A MultiLayerNetwork instance
ego – The ego node to focus on
layers – Optional list of specific layers to visualize; if None, uses all layers
max_depth – Maximum depth of neighborhood to include (number of hops)
layout – Layout algorithm for each ego graph
figsize – Optional figure size; if None, auto-calculated
max_cols – Maximum columns in subplot grid
node_size – Size of regular nodes
ego_node_size – Size of the ego node (highlighted)
**kwargs – Additional drawing parameters
- Returns:
The created figure
- Return type:
matplotlib.figure.Figure
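Example
A sketch; whether ego is given as a bare node ID or a (node, layer) tuple depends on how the network names its nodes, so "A" below is illustrative:
>>> from py3plex.core import multinet
>>> net = multinet.multi_layer_network()
>>> net.load_network("network.txt", input_type="multiedgelist")
>>> fig = plot_ego_multilayer(net, ego="A", max_depth=2)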
- py3plex.visualization.multilayer.plot_radial_layers(multilayer_network, base_radius: float = 1.0, radius_step: float = 1.0, node_size: int = 500, draw_inter_layer_edges: bool = True, figsize: Tuple[float, float] = (12, 12), edge_alpha: float = 0.5, draw_layer_bands: bool = True, band_alpha: float = 0.25, **kwargs)
Create a radial/concentric visualization with layers as rings.
This visualization arranges layers as concentric circles, with nodes positioned on rings based on their layer. Inter-layer edges appear as radial connections.
- Parameters:
multilayer_network – A MultiLayerNetwork instance
base_radius – Radius of the innermost layer
radius_step – Distance between consecutive layer rings
node_size – Size of nodes (default: 500 for better visibility)
draw_inter_layer_edges – If True, draw edges between layers
figsize – Figure size as (width, height) tuple
edge_alpha – Transparency for edges
draw_layer_bands – If True, draw semi-transparent circular bands around layers
band_alpha – Transparency for layer bands (default: 0.25)
**kwargs – Additional drawing parameters
- Returns:
The created figure
- Return type:
matplotlib.figure.Figure
- py3plex.visualization.multilayer.plot_small_multiples(multilayer_network, layout: str = 'spring', max_cols: int = 3, node_size: int = 50, shared_layout: bool = True, show_layer_titles: bool = True, figsize: Tuple[float, float] | None = None, **kwargs)
Create a small multiples visualization with one subplot per layer.
This visualization shows each layer as a separate subplot in a grid layout, making it easy to compare the structure of different layers side-by-side.
- Parameters:
multilayer_network – A MultiLayerNetwork instance
layout – Layout algorithm to use (“spring”, “circular”, “random”, “kamada_kawai”)
max_cols – Maximum number of columns in the subplot grid
node_size – Size of nodes in each subplot
shared_layout – If True, compute one layout and reuse for all layers; if False, compute independent layouts per layer
show_layer_titles – If True, show layer names as subplot titles
figsize – Optional figure size as (width, height) tuple
**kwargs – Additional arguments passed to nx.draw_networkx
- Returns:
The created figure
- Return type:
matplotlib.figure.Figure
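Example
A sketch, with net constructed as in the visualize_multilayer_network examples below:
>>> fig = plot_small_multiples(net, layout="spring", max_cols=2,
...     shared_layout=True)
>>> fig.savefig("layers_small_multiples.png")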
- py3plex.visualization.multilayer.plot_supra_adjacency_heatmap(multilayer_network, include_inter_layer: bool = False, inter_layer_weight: float = 1.0, node_order: List[Any] | None = None, cmap: str = 'viridis', figsize: Tuple[float, float] = (10, 10), **kwargs)
Create a supra-adjacency matrix heatmap visualization.
This visualization shows the multilayer network as a block matrix where each block represents the adjacency matrix of one layer. Optionally includes inter-layer connections.
- Parameters:
multilayer_network – A MultiLayerNetwork instance
include_inter_layer – If True, include inter-layer edges/couplings
inter_layer_weight – Default weight for inter-layer connections
node_order – Optional list specifying node ordering; if None, uses sorted order
cmap – Colormap name for the heatmap
figsize – Figure size as (width, height) tuple
**kwargs – Additional arguments for imshow
- Returns:
The created figure
- Return type:
matplotlib.figure.Figure
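Example
A sketch (net as above; the colormap choice is illustrative):
>>> fig = plot_supra_adjacency_heatmap(net, include_inter_layer=True,
...     cmap="magma")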
- py3plex.visualization.multilayer.supra_adjacency_matrix_plot(matrix: ndarray, display: bool = False, ax: Any | None = None, cmap: str = 'binary') Any
Plot a supra-adjacency matrix as a heatmap.
Visualizes the supra-adjacency matrix of a multilayer network, where the matrix shows both intra-layer and inter-layer connections. The matrix is displayed as a heatmap with configurable colormap.
- Parameters:
matrix – Supra-adjacency matrix to plot (numpy ndarray or scipy sparse matrix)
display – If True, calls plt.show() after drawing. Default is False to let the caller control rendering.
ax – Matplotlib Axes to draw on. If None, uses current axes (plt.gca())
cmap – Colormap to use for the heatmap (default: “binary”)
- Returns:
Matplotlib Axes object containing the visualization.
Example
>>> import matplotlib.pyplot as plt >>> from py3plex.visualization import supra_adjacency_matrix_plot >>> fig, ax = plt.subplots(figsize=(8, 8)) >>> ax = supra_adjacency_matrix_plot(supra_matrix, ax=ax, cmap="viridis") >>> plt.colorbar(ax.images[0]) >>> plt.savefig("supra_matrix.png")
- py3plex.visualization.multilayer.visualize_multilayer_network(multilayer_network, visualization_type: str = 'diagonal', **kwargs)
High-level function to visualize multilayer networks with multiple visualization modes.
This function provides a unified interface for various multilayer network visualization techniques, making it easy to switch between different visual representations.
- Parameters:
multilayer_network – A MultiLayerNetwork instance from py3plex.core.multinet
visualization_type – Type of visualization to use. Options:
“diagonal”: Default layer-centric diagonal layout (existing behavior)
“small_multiples”: One subplot per layer with shared or independent layouts
“edge_colored_projection”: Aggregate projection with edge colors by layer
“supra_adjacency_heatmap”: Matrix representation of multilayer structure
“radial_layers”: Concentric circles for layers with radial inter-layer edges
“ego_multilayer”: Ego-centric view focused on a specific node
**kwargs – Additional keyword arguments specific to each visualization type. See individual plot functions for details.
- Returns:
The created figure object
- Return type:
matplotlib.figure.Figure
- Raises:
ValueError – If visualization_type is not recognized
Examples
>>> from py3plex.core import multinet >>> net = multinet.multi_layer_network() >>> net.load_network("network.txt", input_type="multiedgelist") >>> >>> # Use default diagonal visualization >>> fig = visualize_multilayer_network(net) >>> >>> # Use small multiples view >>> fig = visualize_multilayer_network(net, visualization_type="small_multiples") >>> >>> # Use edge-colored projection >>> fig = visualize_multilayer_network(net, visualization_type="edge_colored_projection")
- py3plex.visualization.drawing_machinery.draw(G, pos=None, ax=None, **kwds)
Draw the graph G with Matplotlib.
Draw the graph as a simple representation with no node or edge labels, using the full Matplotlib figure area and no axis labels by default. See draw_networkx() for more full-featured drawing that allows titles, axis labels, etc.
- Parameters:
G (graph) – A networkx graph
pos (dictionary, optional) – A dictionary with nodes as keys and positions as values. If not specified, a spring layout positioning will be computed. See networkx.drawing.layout for functions that compute node positions.
ax (Matplotlib Axes object, optional) – Draw the graph in the specified Matplotlib axes.
kwds (optional keywords) – See networkx.draw_networkx() for a description of optional keywords.
Examples
>>> G = nx.dodecahedral_graph() >>> nx.draw(G) >>> nx.draw(G, pos=nx.spring_layout(G)) # use spring layout
See also
draw_networkx, draw_networkx_nodes, draw_networkx_edges, draw_networkx_labels, draw_networkx_edge_labels
Notes
This function has the same name as pylab.draw and pyplot.draw so beware when using
>>> from networkx import *
since you might overwrite the pylab.draw function.
With pyplot use
>>> import matplotlib.pyplot as plt >>> import networkx as nx >>> G = nx.dodecahedral_graph() >>> nx.draw(G) # networkx draw() >>> plt.draw() # pyplot draw()
Also see the NetworkX drawing examples at https://networkx.github.io/documentation/latest/auto_examples/index.html
- py3plex.visualization.drawing_machinery.draw_circular(G, **kwargs)
Draw the graph G with a circular layout.
- Parameters:
G (graph) – A networkx graph
kwargs (optional keywords) – See networkx.draw_networkx() for a description of optional keywords, with the exception of the pos parameter which is not used by this function.
- py3plex.visualization.drawing_machinery.draw_kamada_kawai(G, **kwargs)
Draw the graph G with a Kamada-Kawai force-directed layout.
- Parameters:
G (graph) – A networkx graph
kwargs (optional keywords) – See networkx.draw_networkx() for a description of optional keywords, with the exception of the pos parameter which is not used by this function.
- py3plex.visualization.drawing_machinery.draw_networkx(G, pos=None, arrows=True, with_labels=True, **kwds)
Draw the graph G using Matplotlib.
Draw the graph with Matplotlib with options for node positions, labeling, titles, and many other drawing features. See draw() for simple drawing without labels or axes.
- Parameters:
G (graph) – A networkx graph
pos (dictionary, optional) – A dictionary with nodes as keys and positions as values. If not specified, a spring layout positioning will be computed. See networkx.drawing.layout for functions that compute node positions.
arrows (bool, optional (default=True)) – For directed graphs, if True draw arrowheads. Note: Arrows will be the same color as edges.
arrowstyle (str, optional (default='-|>')) – For directed graphs, choose the style of the arrowheads. See :py:class: matplotlib.patches.ArrowStyle for more options.
arrowsize (int, optional (default=10)) – For directed graphs, choose the size of the arrow head's length and width. See :py:class: matplotlib.patches.FancyArrowPatch attribute mutation_scale for more info.
with_labels (bool, optional (default=True)) – Set to True to draw labels on the nodes.
ax (Matplotlib Axes object, optional) – Draw the graph in the specified Matplotlib axes.
nodelist (list, optional (default G.nodes())) – Draw only specified nodes
edgelist (list, optional (default=G.edges())) – Draw only specified edges
node_size (scalar or array, optional (default=300)) – Size of nodes. If an array is specified it must be the same length as nodelist.
node_color (color string, or array of floats, (default='r')) – Node color. Can be a single color format string, or a sequence of colors with the same length as nodelist. If numeric values are specified they will be mapped to colors using the cmap and vmin,vmax parameters. See matplotlib.scatter for more details.
node_shape (string, optional (default='o')) – The shape of the node. Specification is as matplotlib.scatter marker, one of ‘so^>v<dph8’.
alpha (float, optional (default=1.0)) – The node and edge transparency
cmap (Matplotlib colormap, optional (default=None)) – Colormap for mapping intensities of nodes
vmin (float, optional (default=None)) – Minimum and maximum for node colormap scaling
vmax (float, optional (default=None)) – Minimum and maximum for node colormap scaling
linewidths ([None | scalar | sequence]) – Line width of symbol border (default =1.0)
width (float, optional (default=1.0)) – Line width of edges
edge_color (color string, or array of floats (default='r')) – Edge color. Can be a single color format string, or a sequence of colors with the same length as edgelist. If numeric values are specified they will be mapped to colors using the edge_cmap and edge_vmin,edge_vmax parameters.
edge_cmap (Matplotlib colormap, optional (default=None)) – Colormap for mapping intensities of edges
edge_vmin (floats, optional (default=None)) – Minimum and maximum for edge colormap scaling
edge_vmax (floats, optional (default=None)) – Minimum and maximum for edge colormap scaling
style (string, optional (default='solid')) – Edge line style (solid|dashed|dotted|dashdot)
labels (dictionary, optional (default=None)) – Node labels in a dictionary keyed by node of text labels
font_size (int, optional (default=12)) – Font size for text labels
font_color (string, optional (default='k' black)) – Font color string
font_weight (string, optional (default='normal')) – Font weight
font_family (string, optional (default='sans-serif')) – Font family
label (string, optional) – Label for graph legend
Notes
For directed graphs, arrows are drawn at the head end. Arrows can be turned off with keyword arrows=False.
Examples
>>> G = nx.dodecahedral_graph() >>> nx.draw(G) >>> nx.draw(G, pos=nx.spring_layout(G)) # use spring layout
>>> import matplotlib.pyplot as plt >>> limits = plt.axis('off') # turn off axis
Also see the NetworkX drawing examples at https://networkx.github.io/documentation/latest/auto_examples/index.html
- py3plex.visualization.drawing_machinery.draw_networkx_edge_labels(G, pos, edge_labels=None, label_pos=0.5, font_size=10, font_color='k', font_family='sans-serif', font_weight='normal', alpha=1.0, bbox=None, ax=None, rotate=True, **kwds)
Draw edge labels.
- Parameters:
G (graph) – A networkx graph
pos (dictionary) – A dictionary with nodes as keys and positions as values. Positions should be sequences of length 2.
ax (Matplotlib Axes object, optional) – Draw the graph in the specified Matplotlib axes.
alpha (float) – The text transparency (default=1.0)
edge_labels (dictionary) – Edge labels in a dictionary keyed by edge two-tuple of text labels (default=None). Only labels for the keys in the dictionary are drawn.
label_pos (float) – Position of edge label along edge (0=head, 0.5=center, 1=tail)
font_size (int) – Font size for text labels (default=10)
font_color (string) – Font color string (default=’k’ black)
font_weight (string) – Font weight (default=’normal’)
font_family (string) – Font family (default=’sans-serif’)
bbox (Matplotlib bbox) – Specify text box shape and colors.
clip_on (bool) – Turn on clipping at axis boundaries (default=True)
- Returns:
dict of labels keyed on the edges
- Return type:
dict
Examples
>>> G = nx.dodecahedral_graph() >>> edge_labels = nx.draw_networkx_edge_labels(G, pos=nx.spring_layout(G))
Also see the NetworkX drawing examples at https://networkx.github.io/documentation/latest/auto_examples/index.html
- py3plex.visualization.drawing_machinery.draw_networkx_edges(G, pos, edgelist=None, width=1.0, edge_color='k', style='solid', alpha=1.0, arrowstyle='-|>', arrowsize=10, edge_cmap=None, edge_vmin=None, edge_vmax=None, ax=None, arrows=True, label=None, node_size=300, nodelist=None, node_shape='o', **kwds)
Draw the edges of the graph G.
This draws only the edges of the graph G.
- Parameters:
G (graph) – A networkx graph
pos (dictionary) – A dictionary with nodes as keys and positions as values. Positions should be sequences of length 2.
edgelist (collection of edge tuples) – Draw only specified edges (default=G.edges())
width (float, or array of floats) – Line width of edges (default=1.0)
edge_color (color string, or array of floats) – Edge color. Can be a single color format string (default=’r’), or a sequence of colors with the same length as edgelist. If numeric values are specified they will be mapped to colors using the edge_cmap and edge_vmin,edge_vmax parameters.
style (string) – Edge line style (default='solid') (solid|dashed|dotted|dashdot)
alpha (float) – The edge transparency (default=1.0)
edge_cmap (Matplotlib colormap) – Colormap for mapping intensities of edges (default=None)
edge_vmin (floats) – Minimum and maximum for edge colormap scaling (default=None)
edge_vmax (floats) – Minimum and maximum for edge colormap scaling (default=None)
ax (Matplotlib Axes object, optional) – Draw the graph in the specified Matplotlib axes.
arrows (bool, optional (default=True)) – For directed graphs, if True draw arrowheads. Note: Arrows will be the same color as edges.
arrowstyle (str, optional (default=’-|>’)) – For directed graphs, choose the style of the arrow heads. See :py:class: matplotlib.patches.ArrowStyle for more options.
arrowsize (int, optional (default=10)) – For directed graphs, choose the size of the arrow head's length and width. See :py:class: matplotlib.patches.FancyArrowPatch attribute mutation_scale for more info.
label ([None| string]) – Label for legend
- Returns:
matplotlib.collections.LineCollection – LineCollection of the edges
list of matplotlib.patches.FancyArrowPatch – FancyArrowPatch instances of the directed edges
Depending on whether the drawing includes arrows or not.
Notes
For directed graphs, arrows are drawn at the head end. Arrows can be turned off with keyword arrows=False. Be sure to include node_size as a keyword argument; arrows are drawn considering the size of nodes.
Examples
>>> G = nx.dodecahedral_graph() >>> edges = nx.draw_networkx_edges(G, pos=nx.spring_layout(G))
>>> G = nx.DiGraph() >>> G.add_edges_from([(1, 2), (1, 3), (2, 3)]) >>> arcs = nx.draw_networkx_edges(G, pos=nx.spring_layout(G)) >>> alphas = [0.3, 0.4, 0.5] >>> for i, arc in enumerate(arcs): # change alpha values of arcs ... arc.set_alpha(alphas[i])
Also see the NetworkX drawing examples at https://networkx.github.io/documentation/latest/auto_examples/index.html
- py3plex.visualization.drawing_machinery.draw_networkx_labels(G, pos, labels=None, font_size=1, font_color='k', font_family='sans-serif', font_weight='normal', alpha=1.0, bbox=None, ax=None, **kwds)
Draw node labels on the graph G.
- Parameters:
G (graph) – A networkx graph
pos (dictionary) – A dictionary with nodes as keys and positions as values. Positions should be sequences of length 2.
labels (dictionary, optional (default=None)) – Node labels in a dictionary keyed by node of text labels. Node-keys in labels should appear as keys in pos. If needed use: {n: lab for n, lab in labels.items() if n in pos}
font_size (int) – Font size for text labels (default=1)
font_color (string) – Font color string (default=’k’ black)
font_family (string) – Font family (default=’sans-serif’)
font_weight (string) – Font weight (default=’normal’)
alpha (float) – The text transparency (default=1.0)
ax (Matplotlib Axes object, optional) – Draw the graph in the specified Matplotlib axes.
- Returns:
dict of labels keyed on the nodes
- Return type:
dict
Examples
>>> G = nx.dodecahedral_graph() >>> labels = nx.draw_networkx_labels(G, pos=nx.spring_layout(G))
Also see the NetworkX drawing examples at https://networkx.github.io/documentation/latest/auto_examples/index.html
- py3plex.visualization.drawing_machinery.draw_networkx_nodes(G, pos, nodelist=None, node_size=300, node_color='r', node_shape='o', alpha=1.0, cmap=None, vmin=None, vmax=None, ax=None, linewidths=None, edgecolors=None, label=None, **kwds)
Draw the nodes of the graph G.
This draws only the nodes of the graph G.
- Parameters:
G (graph) – A networkx graph
pos (dictionary) – A dictionary with nodes as keys and positions as values. Positions should be sequences of length 2.
ax (Matplotlib Axes object, optional) – Draw the graph in the specified Matplotlib axes.
nodelist (list, optional) – Draw only specified nodes (default G.nodes())
node_size (scalar or array) – Size of nodes (default=300). If an array is specified it must be the same length as nodelist.
node_color (color string, or array of floats) – Node color. Can be a single color format string (default=’r’), or a sequence of colors with the same length as nodelist. If numeric values are specified they will be mapped to colors using the cmap and vmin,vmax parameters. See matplotlib.scatter for more details.
node_shape (string) – The shape of the node. Specification is as matplotlib.scatter marker, one of ‘so^>v<dph8’ (default=’o’).
alpha (float or array of floats) – The node transparency. This can be a single alpha value (default=1.0), in which case it is applied to all nodes. Otherwise, if it is an array, the elements of alpha are applied to the node colors in order (cycling through alpha multiple times if necessary).
cmap (Matplotlib colormap) – Colormap for mapping intensities of nodes (default=None)
vmin (floats) – Minimum and maximum for node colormap scaling (default=None)
vmax (floats) – Minimum and maximum for node colormap scaling (default=None)
linewidths ([None | scalar | sequence]) – Line width of symbol border (default =1.0)
edgecolors ([None | scalar | sequence]) – Colors of node borders (default = node_color)
label ([None| string]) – Label for legend
- Returns:
PathCollection of the nodes.
- Return type:
matplotlib.collections.PathCollection
Examples
>>> G = nx.dodecahedral_graph() >>> nodes = nx.draw_networkx_nodes(G, pos=nx.spring_layout(G))
Also see the NetworkX drawing examples at https://networkx.github.io/documentation/latest/auto_examples/index.html
- py3plex.visualization.drawing_machinery.draw_random(G, **kwargs)
Draw the graph G with a random layout.
- Parameters:
G (graph) – A networkx graph
kwargs (optional keywords) – See networkx.draw_networkx() for a description of optional keywords, with the exception of the pos parameter which is not used by this function.
- py3plex.visualization.drawing_machinery.draw_shell(G, **kwargs)
Draw networkx graph with shell layout.
- Parameters:
G (graph) – A networkx graph
kwargs (optional keywords) – See networkx.draw_networkx() for a description of optional keywords, with the exception of the pos parameter which is not used by this function.
- py3plex.visualization.drawing_machinery.draw_spectral(G, **kwargs)
Draw the graph G with a spectral layout.
- Parameters:
G (graph) – A networkx graph
kwargs (optional keywords) – See networkx.draw_networkx() for a description of optional keywords, with the exception of the pos parameter which is not used by this function.
- py3plex.visualization.drawing_machinery.draw_spring(G, **kwargs)
Draw the graph G with a spring layout.
- Parameters:
G (graph) – A networkx graph
kwargs (optional keywords) – See networkx.draw_networkx() for a description of optional keywords, with the exception of the pos parameter which is not used by this function.
Color utilities for py3plex visualization.
- py3plex.visualization.colors.RGB_to_hex(RGB: List[int]) str
Convert RGB list to hex color.
- Parameters:
RGB – RGB values as [R, G, B] list
- Returns:
Hex color string like “#FFFFFF”
- py3plex.visualization.colors.color_dict(gradient: List[List[int]]) Dict[str, List]
Takes in a list of RGB sub-lists and returns a dictionary of colors in RGB and hex form for use in graphing functions.
- Parameters:
gradient – List of RGB color values
- Returns:
Dictionary with ‘hex’, ‘r’, ‘g’, ‘b’ keys
- py3plex.visualization.colors.hex_to_RGB(hex: str) List[int]
Convert hex color to RGB list.
- Parameters:
hex – Hex color string like “#FFFFFF”
- Returns:
RGB values as [R, G, B] list
- py3plex.visualization.colors.linear_gradient(start_hex: str, finish_hex: str = '#FFFFFF', n: int = 10) Dict[str, List]
Returns a gradient list of (n) colors between two hex colors.
- Parameters:
start_hex – Starting color as six-digit hex string (e.g., “#FFFFFF”)
finish_hex – Ending color as six-digit hex string (default: “#FFFFFF”)
n – Number of colors in gradient (default: 10)
- Returns:
Dictionary with ‘hex’, ‘r’, ‘g’, ‘b’ keys containing gradient colors
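Example
A round-trip sketch for the three helpers above; exact hex casing and gradient midpoints are implementation-defined, so they are noted as comments rather than asserted:
>>> from py3plex.visualization.colors import (
...     RGB_to_hex, hex_to_RGB, linear_gradient)
>>> hex_to_RGB("#FF0000")       # -> [255, 0, 0]
>>> RGB_to_hex([255, 0, 0])     # -> "#FF0000" (casing may vary)
>>> grad = linear_gradient("#000000", "#FFFFFF", n=3)
>>> len(grad["hex"])            # three colors from black to white
3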
- py3plex.visualization.layout_algorithms.compute_force_directed_layout(g: Graph, layout_parameters: Dict[str, Any] | None = None, verbose: bool = True, gravity: float = 0.2, strongGravityMode: bool = False, barnesHutTheta: float = 1.2, edgeWeightInfluence: float = 1, scalingRatio: float = 2.0, forceImport: bool = True, seed: int | None = None) Dict[Any, ndarray]
Compute force-directed layout for a graph using ForceAtlas2 or NetworkX spring layout.
- Parameters:
g – NetworkX graph to layout
layout_parameters – Optional parameters to pass to layout algorithm
verbose – Whether to print progress information
gravity – Attraction force towards the center (must be non-negative)
strongGravityMode – Use strong gravity mode
barnesHutTheta – Barnes-Hut approximation parameter
edgeWeightInfluence – Influence of edge weights on layout
scalingRatio – Scaling factor for the layout (must be positive)
forceImport – Whether to use ForceAtlas2 (if available)
seed – Random seed for reproducibility in fallback spring layout
- Returns:
Dictionary mapping nodes to 2D position arrays
Note
For large networks (>1000 nodes), this may be slow. Consider using faster layouts (circular, random, spectral) or matrix visualization.
- Contracts:
Precondition: graph must not be None and be a NetworkX graph
Precondition: graph must have at least one node
Precondition: gravity must be non-negative
Precondition: scalingRatio must be positive
Postcondition: result is a dictionary
Postcondition: result has positions for all nodes
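Example
A sketch exercising the documented contracts; the seed only affects the spring-layout fallback when ForceAtlas2 is unavailable:
>>> import networkx as nx
>>> from py3plex.visualization.layout_algorithms import (
...     compute_force_directed_layout)
>>> G = nx.karate_club_graph()
>>> pos = compute_force_directed_layout(G, verbose=False, seed=42)
>>> len(pos) == G.number_of_nodes()  # postcondition: every node has a position
True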
- py3plex.visualization.layout_algorithms.compute_random_layout(g: Graph, seed: int | None = None) Dict[Any, ndarray]
Compute a random layout for the graph.
- Parameters:
g – NetworkX graph
seed – Random seed for reproducibility
- Returns:
Dictionary mapping nodes to 2D positions
- Contracts:
Precondition: graph must not be None and be a NetworkX graph
Precondition: graph must have at least one node
Postcondition: result is a dictionary
Postcondition: result has positions for all nodes
- py3plex.visualization.layout_algorithms.ensure(*args, **kwargs)
- py3plex.visualization.layout_algorithms.require(*args, **kwargs)
- py3plex.visualization.bezier.bezier_calculate_dfy(mp_y: float, path_height: float, x0: float, midpoint_x: float, x1: float, y0: float, y1: float, dfx: ndarray, mode: str = 'upper') ndarray
Calculate y-coordinates for bezier curve.
- Parameters:
mp_y – Midpoint y-coordinate
path_height – Height of the path
x0 – Start x-coordinate
midpoint_x – Midpoint x-coordinate
x1 – End x-coordinate
y0 – Start y-coordinate
y1 – End y-coordinate
dfx – Array of x-coordinates
mode – Mode for curve calculation (“upper” or “bottom”)
- Returns:
Array of y-coordinates
- py3plex.visualization.bezier.draw_bezier(total_size: int, p1: Tuple[float, float], p2: Tuple[float, float], mode: str = 'quadratic', inversion: bool = False, path_height: float = 2, linemode: str = 'both', resolution: float = 0.1) Tuple[ndarray, ndarray]
Draw bezier curve between two points.
- Parameters:
total_size – Total size of the drawing area
p1 – First point coordinates (x0, x1)
p2 – Second point coordinates (y0, y1)
mode – Drawing mode (default: “quadratic”)
inversion – Whether to invert the curve
path_height – Height of the path
linemode – Line drawing mode (“upper”, “bottom”, or “both”)
resolution – Resolution for curve sampling
- Returns:
Tuple of (x-coordinates, y-coordinates) arrays
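Example
A sketch following the parameter docs above, which describe p1 and p2 as the paired x-coordinates (x0, x1) and y-coordinates (y0, y1) of the two endpoints:
>>> from py3plex.visualization.bezier import draw_bezier
>>> xs, ys = draw_bezier(10, (0.0, 5.0), (0.0, 0.0),
...     path_height=2, resolution=0.05)
>>> # xs, ys sample a curved path between (0, 0) and (5, 0)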
- py3plex.visualization.benchmark_visualizations.generic_grouping(fname: DataFrame, score_name: str, threshold: float = 1.0, percentages: bool = True) DataFrame
- py3plex.visualization.benchmark_visualizations.plot_core_macro(fname: DataFrame) int
A very simple visualization of the results.
- py3plex.visualization.benchmark_visualizations.plot_core_macro_box(fname: str) int
- py3plex.visualization.benchmark_visualizations.plot_core_macro_gg(fnamex: DataFrame) None
- py3plex.visualization.benchmark_visualizations.plot_core_micro(fname: DataFrame) int
A very simple visualization of the results.
- py3plex.visualization.benchmark_visualizations.plot_core_micro_gg(fnamex: DataFrame) None
- py3plex.visualization.benchmark_visualizations.plot_core_micro_grid(fname: str) None
- py3plex.visualization.benchmark_visualizations.plot_core_time(fnamex: DataFrame) int
- py3plex.visualization.benchmark_visualizations.plot_core_time_gg(fname: str) None
- py3plex.visualization.benchmark_visualizations.plot_core_variability(fname: str) None
- py3plex.visualization.benchmark_visualizations.plot_critical_distance(fname: str, num_algo: int = 14) None
- py3plex.visualization.benchmark_visualizations.plot_mean_times(fn: DataFrame) None
- py3plex.visualization.benchmark_visualizations.plot_robustness(infile: DataFrame) None
- py3plex.visualization.benchmark_visualizations.table_to_latex(fname, outfolder='../final_results/tables/', threshold=1)
Wrappers
- class py3plex.wrappers.benchmark_nodes.TopKRanker(estimator, *, n_jobs=None, verbose=0)
Bases: OneVsRestClassifier
- predict(X: ndarray, top_k_list: List[int]) List[List]
Predict top K labels for each sample.
- Parameters:
X – Feature matrix
top_k_list – List of K values for each sample
- Returns:
List of predicted labels for each sample
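Example
A minimal sketch; the random features, the LogisticRegression base estimator, and the assumption that the base estimator exposes predict_proba are all illustrative:
>>> import numpy as np
>>> from sklearn.linear_model import LogisticRegression
>>> from py3plex.wrappers.benchmark_nodes import TopKRanker
>>> X = np.random.rand(6, 4)
>>> Y = np.array([[1, 0], [0, 1], [1, 1], [1, 0], [0, 1], [1, 1]])
>>> ranker = TopKRanker(LogisticRegression()).fit(X, Y)
>>> preds = ranker.predict(X, top_k_list=[1, 2, 1, 1, 2, 2])  # top-k labels per sample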
- set_partial_fit_request(*, classes: bool | None | str = '$UNCHANGED$') TopKRanker
Configure whether metadata should be requested to be passed to the partial_fit method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to partial_fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to partial_fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
- Parameters:
classes (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for classes parameter in partial_fit.
- Returns:
self – The updated object.
- Return type:
object
- set_predict_request(*, top_k_list: bool | None | str = '$UNCHANGED$') TopKRanker
Configure whether metadata should be requested to be passed to the predict method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
- Parameters:
top_k_list (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for top_k_list parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
- set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') TopKRanker
Configure whether metadata should be requested to be passed to the score method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
- Parameters:
sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.
- Returns:
self – The updated object.
- Return type:
object
- py3plex.wrappers.benchmark_nodes.benchmark_node_classification(path: str, core_network: Any, labels_matrix: Any, percent: Any = 'all') Dict[Any, Any]
Benchmark node classification using embeddings.
- Parameters:
path – Path to embeddings file
core_network – Network adjacency matrix
labels_matrix – Labels for nodes
percent – Training percentage or “all” for multiple percentages
- Returns:
Dictionary of classification results
- py3plex.wrappers.benchmark_nodes.main()
- py3plex.wrappers.benchmark_nodes.sparse2graph(x: spmatrix) Dict[str, List[str]]
Convert sparse matrix to graph dictionary.
- Parameters:
x – Sparse matrix representation of graph
- Returns:
Dictionary mapping node IDs to lists of neighbor IDs
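Example
A sketch; the string-keyed adjacency format shown in the comment is the documented intent, not verified output:
>>> import scipy.sparse as sp
>>> from py3plex.wrappers.benchmark_nodes import sparse2graph
>>> m = sp.csr_matrix([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
>>> adj = sparse2graph(m)  # e.g. {'0': ['1'], '1': ['0', '2'], '2': ['1']}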
I/O Operations
For the complete auto-generated API documentation, see the AUTOGEN_results directory after building the docs.
Aggregation and Network Operations
Vectorized multiplex aggregation for multilayer networks.
This module provides optimized implementations for aggregating edges across multiple layers using vectorized NumPy and SciPy sparse operations, replacing slower Python loops with efficient matrix operations.
Performance targets:
≥3× speedup for 1M edges across 4 layers compared to legacy loop-based methods
Memory-efficient sparse matrix output by default
Float tolerance of 1e-6 for numerical equivalence with legacy methods
Author: py3plex development team
License: MIT
- py3plex.multinet.aggregation.aggregate_layers(edges: ndarray | list, weight_col: str | int = 'w', reducer: Literal['sum', 'mean', 'max'] = 'sum', to_sparse: bool = True) csr_matrix | ndarray
Aggregate edge weights across multiple layers using vectorized operations.
This function replaces Python loops with efficient NumPy and SciPy sparse matrix operations for superior performance on large multilayer networks.
Complexity: O(E) where E is the number of edges
Memory: O(E) for sparse output, O(N²) for dense output
- Parameters:
edges – Edge data as ndarray with shape (E, >=3) containing (layer, src, dst, [weight]) columns. If no weight column, assumes weight=1.0 for all edges. Can also accept list of lists.
weight_col – Column name or index for weights (default “w”). When edges is an ndarray, this must be a 0-based integer index; column 3 is used if present, otherwise all weights default to 1.0.
reducer – Aggregation method, one of:
“sum”: Sum weights for edges appearing in multiple layers (default)
“mean”: Average weights across layers
“max”: Take maximum weight across layers
to_sparse – If True, return scipy.sparse.csr_matrix (default, memory-efficient). If False, return dense numpy.ndarray.
- Returns:
Aggregated adjacency matrix in requested format (sparse CSR or dense). Shape is (N, N) where N is the maximum node ID + 1.
- Raises:
ValueError – If edges array has wrong shape or reducer is invalid.
TypeError – If edges is not ndarray or list-like.
Examples
>>> import numpy as np >>> # Create edge list: (layer, src, dst, weight) >>> edges = np.array([ ... [0, 0, 1, 1.0], ... [0, 1, 2, 2.0], ... [1, 0, 1, 0.5], # Same edge in layer 1 ... [1, 2, 3, 1.5], ... ]) >>> mat = aggregate_layers(edges, reducer="sum") >>> mat.shape (4, 4) >>> mat[0, 1] # Sum of weights from both layers 1.5
>>> # With mean aggregation >>> mat_mean = aggregate_layers(edges, reducer="mean", to_sparse=False) >>> mat_mean[0, 1] # Average of 1.0 and 0.5 0.75
Notes
Node IDs are assumed to be integers starting from 0
Self-loops are supported
For directed graphs, (i,j) and (j,i) are different edges
Sparse output recommended for large networks (N > 1000)
Deterministic output for fixed input order
- Performance:
Achieves ≥3× speedup vs loop-based aggregation on 1M edges:
Legacy loop: ~2.5s for 1M edges, 4 layers
Vectorized: ~0.8s for the same dataset (measured on standard hardware)
- py3plex.multinet.aggregation.ensure(*args, **kwargs)
- py3plex.multinet.aggregation.require(*args, **kwargs)
Profiling Utilities
Performance profiling utilities for py3plex.
This module provides decorators and utilities for tracking function execution time, memory usage, and performance metrics. These tools enable performance regression detection and optimization efforts.
Example
>>> from py3plex.profiling import profile_performance
>>>
>>> @profile_performance
... def slow_function():
... # ... computation
... pass
>>>
>>> slow_function() # Logs execution time automatically
- class py3plex.profiling.PerformanceMonitor
Bases: object
Global performance monitoring registry.
Tracks execution times and call counts for profiled functions. Can be used to generate performance reports and detect regressions.
- enabled
Whether profiling is enabled globally
- stats
Dictionary mapping function names to performance statistics
- clear()
Clear all collected statistics.
- get_report() str
Generate a performance report.
- Returns:
String containing formatted performance statistics
- record(func_name: str, elapsed: float, memory_delta: float | None = None)
Record performance metrics for a function call.
- Parameters:
func_name – Name of the function
elapsed – Execution time in seconds
memory_delta – Memory usage change in MB (optional)
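Example
A sketch using only the methods documented above:
>>> from py3plex.profiling import get_monitor
>>> monitor = get_monitor()
>>> monitor.record("my_function", elapsed=0.12)
>>> print(monitor.get_report())
>>> monitor.clear()  # reset all collected statistics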
- py3plex.profiling.benchmark(func: Callable, iterations: int = 100, warmup: int = 10, args: tuple = (), kwargs: dict | None = None) Dict[str, float]
Benchmark a function with multiple iterations.
Runs the function multiple times and collects statistics about execution time. Includes a warmup phase to allow JIT compilation and caching.
- Parameters:
func – Function to benchmark
iterations – Number of iterations to run (default: 100)
warmup – Number of warmup iterations (default: 10)
args – Positional arguments for the function
kwargs – Keyword arguments for the function
- Returns:
mean: Average execution time (seconds)
median: Median execution time (seconds)
min: Minimum execution time (seconds)
max: Maximum execution time (seconds)
std: Standard deviation (seconds)
total: Total time for all iterations (seconds)
- Return type:
Dictionary containing benchmark statistics
Example
>>> from py3plex.profiling import benchmark >>> >>> def my_function(n): ... return sum(range(n)) >>> >>> stats = benchmark(my_function, iterations=1000, args=(1000,)) >>> print(f"Average time: {stats['mean']*1000:.3f}ms")
- py3plex.profiling.get_monitor() PerformanceMonitor
Get the global performance monitor instance.
- Returns:
Global performance monitoring instance
- Return type:
PerformanceMonitor
Example
>>> from py3plex.profiling import get_monitor >>> monitor = get_monitor() >>> print(monitor.get_report())
- py3plex.profiling.profile_performance(func: Callable | None = None, *, log_args: bool = False, track_memory: bool = False) Callable
Decorator to track function execution time and optionally memory usage.
This decorator measures the wall-clock time taken by a function and logs it. It can also track memory usage changes if requested. Performance metrics are stored in the global performance monitor for later analysis.
- Parameters:
func – Function to decorate (when used without arguments)
log_args – If True, log function arguments (default: False)
track_memory – If True, track memory usage (default: False)
- Returns:
Decorated function that logs execution time
Examples
Basic usage: >>> @profile_performance ... def my_function(x, y): ...     return x + y
With options: >>> @profile_performance(log_args=True, track_memory=True) ... def expensive_function(data): ...     # ... expensive computation ...     return result
Manual wrapping: >>> def my_function(): ...     pass >>> profiled_func = profile_performance(my_function)
Note
Memory tracking requires the tracemalloc module and adds overhead. Use it only when investigating memory issues.
- py3plex.profiling.timed_section(name: str, log_level: str = 'info')
Context manager for timing code blocks.
Useful for timing specific sections of code without wrapping entire functions.
- Parameters:
name – Name of the code section being timed
log_level – Logging level (‘debug’, ‘info’, ‘warning’, ‘error’)
Example
>>> from py3plex.profiling import timed_section >>> >>> with timed_section("data loading"): ... data = load_large_dataset() >>> >>> with timed_section("computation", log_level="debug"): ... result = complex_calculation()
- Yields:
None
Logging Configuration
Logging configuration for py3plex.
This module provides centralized logging configuration for the py3plex library.
- py3plex.logging_config.get_logger(name: str | None = None, level: int = 20) Logger
Get or create a logger for py3plex modules.
- Parameters:
name – Logger name. If None, returns the root py3plex logger.
level – Logging level (default: INFO)
- Returns:
Configured logger instance
Example
>>> from py3plex.logging_config import get_logger >>> logger = get_logger(__name__) >>> logger.info("Processing network...")
- py3plex.logging_config.setup_logging(level: int = 20, format_string: str | None = None) Logger
Configure logging for py3plex with custom settings.
- Parameters:
level – Logging level (default: INFO)
format_string – Custom format string (optional)
- Returns:
Root py3plex logger
Example
>>> import logging >>> from py3plex.logging_config import setup_logging >>> logger = setup_logging(level=logging.DEBUG)
I/O Schema and Validation
Schema definitions for multilayer graphs using dataclasses.
This module provides dataclass representations of multilayer graph components with built-in validation and serialization support.
- class py3plex.io.schema.Edge(src: ~typing.Hashable, dst: ~typing.Hashable, src_layer: ~typing.Hashable, dst_layer: ~typing.Hashable, key: int = 0, attributes: ~typing.Dict[str, ~typing.Any] = <factory>)
Bases:
object
Represents an edge in a multilayer network.
- src
Source node ID
- Type:
Hashable
- dst
Destination node ID
- Type:
Hashable
- src_layer
Source layer ID
- Type:
Hashable
- dst_layer
Destination layer ID
- Type:
Hashable
- key
Optional edge key for multigraphs (default: 0)
- Type:
int
- attributes
Dictionary of edge attributes (must be JSON-serializable)
- Type:
Dict[str, Any]
- attributes: Dict[str, Any]
- dst: Hashable
- dst_layer: Hashable
- edge_tuple() Tuple[Hashable, Hashable, Hashable, Hashable, int]
Return edge as a tuple for uniqueness checking.
- Returns:
Tuple of (src, dst, src_layer, dst_layer, key)
- classmethod from_dict(data: Dict[str, Any]) Edge
Create edge from dictionary.
- Parameters:
data – Dictionary containing edge data
- Returns:
Edge instance
- key: int = 0
- src: Hashable
- src_layer: Hashable
- to_dict() Dict[str, Any]
Convert edge to dictionary.
- Returns:
Dictionary representation of the edge
- class py3plex.io.schema.Layer(id: ~typing.Hashable, attributes: ~typing.Dict[str, ~typing.Any] = <factory>)
Bases:
object
Represents a layer in a multilayer network.
- id
Unique identifier for the layer
- Type:
Hashable
- attributes
Dictionary of layer attributes (must be JSON-serializable)
- Type:
Dict[str, Any]
- attributes: Dict[str, Any]
- classmethod from_dict(data: Dict[str, Any]) Layer
Create layer from dictionary.
- Parameters:
data – Dictionary containing layer data
- Returns:
Layer instance
- id: Hashable
- to_dict() Dict[str, Any]
Convert layer to dictionary.
- Returns:
Dictionary representation of the layer
- class py3plex.io.schema.MultiLayerGraph(nodes: ~typing.Dict[~typing.Hashable, ~py3plex.io.schema.Node] = <factory>, layers: ~typing.Dict[~typing.Hashable, ~py3plex.io.schema.Layer] = <factory>, edges: ~typing.List[~py3plex.io.schema.Edge] = <factory>, directed: bool = True, attributes: ~typing.Dict[str, ~typing.Any] = <factory>)
Bases:
object
Represents a complete multilayer graph.
- nodes
Dictionary mapping node IDs to Node objects
- Type:
Dict[Hashable, py3plex.io.schema.Node]
- layers
Dictionary mapping layer IDs to Layer objects
- Type:
Dict[Hashable, py3plex.io.schema.Layer]
- edges
List of Edge objects
- Type:
List[py3plex.io.schema.Edge]
- directed
Whether the graph is directed
- Type:
bool
- attributes
Dictionary of graph-level attributes
- Type:
Dict[str, Any]
- add_edge(edge: Edge)
Add an edge to the graph.
- Parameters:
edge – Edge to add
- Raises:
ReferentialIntegrityError – If edge references non-existent nodes or layers
SchemaValidationError – If edge is duplicate
- add_layer(layer: Layer)
Add a layer to the graph.
- Parameters:
layer – Layer to add
- Raises:
SchemaValidationError – If layer ID already exists
- add_node(node: Node)
Add a node to the graph.
- Parameters:
node – Node to add
- Raises:
SchemaValidationError – If node ID already exists
- attributes: Dict[str, Any]
- directed: bool = True
- edges: List[Edge]
- classmethod from_dict(data: Dict[str, Any]) MultiLayerGraph
Create graph from dictionary.
- Parameters:
data – Dictionary containing graph data
- Returns:
MultiLayerGraph instance
- layers: Dict[Hashable, Layer]
- nodes: Dict[Hashable, Node]
- to_dict() Dict[str, Any]
Convert graph to dictionary.
- Returns:
Dictionary representation of the graph
- class py3plex.io.schema.Node(id: ~typing.Hashable, attributes: ~typing.Dict[str, ~typing.Any] = <factory>)
Bases:
object
Represents a node in a multilayer network.
- id
Unique identifier for the node
- Type:
Hashable
- attributes
Dictionary of node attributes (must be JSON-serializable)
- Type:
Dict[str, Any]
- attributes: Dict[str, Any]
- classmethod from_dict(data: Dict[str, Any]) Node
Create node from dictionary.
- Parameters:
data – Dictionary containing node data
- Returns:
Node instance
- id: Hashable
- to_dict() Dict[str, Any]
Convert node to dictionary.
- Returns:
Dictionary representation of the node
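Taken together, Node, Layer, Edge, and MultiLayerGraph compose a validated in-memory graph. A minimal sketch, assuming the dataclass constructors and add_* methods behave as documented above (the IDs and attributes here are illustrative):
>>> from py3plex.io.schema import Edge, Layer, MultiLayerGraph, Node
>>> graph = MultiLayerGraph(directed=True)
>>> graph.add_layer(Layer(id='social'))
>>> graph.add_layer(Layer(id='email'))
>>> graph.add_node(Node(id='A', attributes={'role': 'analyst'}))
>>> graph.add_node(Node(id='B'))
>>> # add_edge enforces referential integrity against nodes/layers
>>> graph.add_edge(Edge(src='A', dst='B', src_layer='social', dst_layer='social'))
>>> restored = MultiLayerGraph.from_dict(graph.to_dict())  # dict round-trip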
Public API for reading and writing multilayer graphs.
This module provides the main entry points for I/O operations with format detection and a registry system for extensibility.
- py3plex.io.api.ensure(*args, **kwargs)
- py3plex.io.api.read(filepath: str | Path, format: str | None = None, **kwargs) MultiLayerGraph
Read a multilayer graph from a file.
- Parameters:
filepath – Path to the input file
format – Format name (e.g., ‘json’, ‘csv’). If None, auto-detected from extension
**kwargs – Additional arguments passed to the format-specific reader
- Returns:
MultiLayerGraph instance
- Raises:
FormatUnsupportedError – If format is not supported or cannot be detected
FileNotFoundError – If file does not exist
Example
>>> graph = read('network.json')
>>> graph = read('network.csv', format='csv')
- py3plex.io.api.register_reader(format_name: str, reader_func: Callable[[...], MultiLayerGraph]) None
Register a reader function for a specific format.
- Parameters:
format_name – Name of the format (e.g., ‘json’, ‘csv’, ‘graphml’)
reader_func – Function that takes (filepath, **kwargs) and returns MultiLayerGraph
Example
>>> def my_reader(filepath, **kwargs):
...     # Custom reading logic
...     return MultiLayerGraph(...)
>>> register_reader('myformat', my_reader)
- Contracts:
Precondition: format_name must be a non-empty string
Precondition: reader_func must be callable
Postcondition: reader is registered in _READERS
- py3plex.io.api.register_writer(format_name: str, writer_func: Callable[[...], None]) None
Register a writer function for a specific format.
- Parameters:
format_name – Name of the format (e.g., ‘json’, ‘csv’, ‘graphml’)
writer_func – Function that takes (graph, filepath, **kwargs) and writes to file
Example
>>> def my_writer(graph, filepath, **kwargs):
...     # Custom writing logic
...     pass
>>> register_writer('myformat', my_writer)
- Contracts:
Precondition: format_name must be a non-empty string
Precondition: writer_func must be callable
Postcondition: writer is registered in _WRITERS
- py3plex.io.api.require(*args, **kwargs)
- py3plex.io.api.supported_formats(read: bool = True, write: bool = True) Dict[str, List[str]]
Get list of supported formats for read and/or write operations.
- Parameters:
read – Include formats that support reading
write – Include formats that support writing
- Returns:
Dictionary with ‘read’ and/or ‘write’ keys containing lists of format names
Example
>>> formats = supported_formats()
>>> print(formats)
{'read': ['json', 'jsonl', 'csv'], 'write': ['json', 'jsonl', 'csv']}
- Contracts:
Postcondition: result is a dictionary
Postcondition: result contains ‘read’ key when read=True
Postcondition: result contains ‘write’ key when write=True
- py3plex.io.api.write(graph: MultiLayerGraph, filepath: str | Path, format: str | None = None, **kwargs) None
Write a multilayer graph to a file.
- Parameters:
graph – MultiLayerGraph to write
filepath – Path to the output file
format – Format name (e.g., ‘json’, ‘csv’). If None, auto-detected from extension
**kwargs – Additional arguments passed to the format-specific writer
- Raises:
FormatUnsupportedError – If format is not supported or cannot be detected
Example
>>> write(graph, 'network.json')
>>> write(graph, 'network.csv', format='csv', deterministic=True)
Hedwig Rule Learning
- py3plex.algorithms.hedwig.build_graph(kwargs: Dict[str, Any]) Any
- py3plex.algorithms.hedwig.generate_rules_report(kwargs: ~typing.Dict[str, ~typing.Any], rules_per_target: ~typing.List[~typing.Tuple[~typing.Any, ~typing.List[~typing.Any]]], human: ~typing.Callable[[~typing.Any, ~typing.Any], ~typing.Any] = <function <lambda>>) str
- py3plex.algorithms.hedwig.rule_kernel(target: Any) Tuple[Any, List[Any]]
- py3plex.algorithms.hedwig.run(kwargs: Dict[str, Any], cli: bool = True, generator_tag: bool = False, num_threads: str | int = 'all') List[Tuple[Any, List[Any]]]
- py3plex.algorithms.hedwig.run_learner(kwargs: Dict[str, Any], kb: ExperimentKB, validator: Validate, generator: bool = False, num_threads: str | int = 'all') List[Tuple[Any, List[Any]]]
Example-related classes.
@author: anze.vavpetic@ijs.si
- class py3plex.algorithms.hedwig.core.example.Example(id, label, score, annotations=None, weights=None)
Bases:
object
Represents an example with its score, label, id and annotations.
- ClassLabeled = 'class'
- Ranked = 'ranked'
Predicate-related classes.
@author: anze.vavpetic@ijs.si
- class py3plex.algorithms.hedwig.core.predicate.BinaryPredicate(label, pairs, kb, producer_pred=None)
Bases:
Predicate
A binary predicate.
- class py3plex.algorithms.hedwig.core.predicate.Predicate(label, kb, producer_pred)
Bases:
object
Represents a predicate as a member of a certain rule.
- i = -1
- class py3plex.algorithms.hedwig.core.predicate.UnaryPredicate(label, members, kb, producer_pred=None, custom_var_name=None, negated=False)
Bases:
Predicate
A unary predicate.
The rule class.
@author: anze.vavpetic@ijs.si
- class py3plex.algorithms.hedwig.core.rule.Rule(kb, predicates=None, target=None)
Bases:
object
Represents a rule, along with its description, examples and statistics.
- clone()
Returns a clone of this rule. The predicates themselves are NOT cloned.
- clone_append(predicate_label, producer_pred, bin=False)
Returns a copy of this rule where ‘predicate_label’ is appended to the rule.
- clone_negate(target_pred)
Returns a copy of this rule where ‘target_pred’ is negated.
- clone_swap_with_subclass(target_pred, child_pred_label)
Returns a copy of this rule where ‘target_pred’ is swapped for ‘child_pred_label’.
- examples(positive_only=False)
Returns the covered examples.
- property positives
- precision()
- rule_report(show_uris=False, latex=False)
Rule as string with some statistics.
- static ruleset_examples_json(rules_per_target, show_uris=False)
- static ruleset_report(rules, show_uris=False, latex=False, human=<function Rule.<lambda>>)
- similarity(rule)
Calculates the similarity between this rule and ‘rule’.
- size()
Returns the number of conjuncts.
- static to_json(rules_per_target, show_uris=False)
Force Atlas 2 Visualization
- class py3plex.visualization.fa2.forceatlas2.ForceAtlas2(outboundAttractionDistribution=False, linLogMode=False, adjustSizes=False, edgeWeightInfluence=1.0, jitterTolerance=1.0, barnesHutOptimize=True, barnesHutTheta=1.2, multiThreaded=False, scalingRatio=2.0, strongGravityMode=False, gravity=1.0, verbose=True)
Bases:
object
- forceatlas2(G, pos=None, iterations=30)
- forceatlas2_igraph_layout(G, pos=None, iterations=100, weight_attr=None)
- forceatlas2_networkx_layout(G, pos=None, iterations=100)
- init(G, pos=None)
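Only the signatures are documented above; a hedged layout sketch using the documented constructor defaults (the karate club graph is just a convenient test input, and the node -> (x, y) position format is assumed from the NetworkX layout convention):
>>> import networkx as nx
>>> from py3plex.visualization.fa2.forceatlas2 import ForceAtlas2
>>> G = nx.karate_club_graph()
>>> fa2 = ForceAtlas2(scalingRatio=2.0, gravity=1.0, verbose=False)
>>> pos = fa2.forceatlas2_networkx_layout(G, pos=None, iterations=200)
>>> nx.draw(G, pos=pos)  # pos is assumed to map node -> (x, y)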
- class py3plex.visualization.fa2.forceatlas2.Timer(name='Timer')
Bases:
object
- display()
- start()
- stop()
- class py3plex.visualization.fa2.fa2util.Edge
Bases:
object
- class py3plex.visualization.fa2.fa2util.Node
Bases:
object
- class py3plex.visualization.fa2.fa2util.Region(nodes)
Bases:
object
- applyForce(n, theta, coefficient=0)
- applyForceOnNodes(nodes, theta, coefficient=0)
- buildSubRegions()
- updateMassAndGeometry()
- py3plex.visualization.fa2.fa2util.adjustSpeedAndApplyForces(nodes, speed, speedEfficiency, jitterTolerance)
- py3plex.visualization.fa2.fa2util.apply_attraction(nodes, edges, distributedAttraction, coefficient, edgeWeightInfluence)
- py3plex.visualization.fa2.fa2util.apply_gravity(nodes, gravity, useStrongGravity=False)
- py3plex.visualization.fa2.fa2util.apply_repulsion(nodes, coefficient)
- py3plex.visualization.fa2.fa2util.linAttraction(n1, n2, e, distributedAttraction, coefficient=0)
- py3plex.visualization.fa2.fa2util.linGravity(n, g)
- py3plex.visualization.fa2.fa2util.linRepulsion(n1, n2, coefficient=0)
- py3plex.visualization.fa2.fa2util.linRepulsion_region(n, r, coefficient=0)
- py3plex.visualization.fa2.fa2util.strongGravity(n, g, coefficient=0)
Embedding Visualization
- py3plex.visualization.embedding_visualization.embedding_visualization.visualize_embedding(multinet, labels=None, verbose=True)
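Only the signature is documented; a heavily hedged usage sketch, assuming multinet is a multi_layer_network whose embedding has already been computed (e.g. via the node2vec wrappers documented below):
>>> from py3plex.visualization.embedding_visualization.embedding_visualization import visualize_embedding
>>> # 'net' is assumed to be a multi_layer_network with an embedding attached
>>> visualize_embedding(net, labels=None, verbose=True)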
Network Generation and Benchmarking
Additional Statistics and Analysis
See correlated_ttest module for explanations
- py3plex.algorithms.statistics.bayesiantests.heaviside(X)
Compute the Heaviside step function.
The Heaviside function returns 1 for positive values, 0.5 for zero, and 0 for negative values. This is used in signed-rank tests for Bayesian comparisons.
- Parameters:
X – Input array or scalar
- Returns:
- Array with same shape as X containing:
1.0 where X > 0
0.5 where X == 0
0.0 where X < 0
- Return type:
np.ndarray
Examples
>>> heaviside(np.array([-1, 0, 1, 2]))
array([0. , 0.5, 1. , 1. ])
- py3plex.algorithms.statistics.bayesiantests.hierarchical(diff, rope, rho, upperAlpha=2, lowerAlpha=1, lowerBeta=0.01, upperBeta=0.1, std_upper_bound=1000, verbose=False, names=('C1', 'C2'))
Perform hierarchical Bayesian test for comparing algorithms across multiple datasets.
This test accounts for the hierarchical structure of the data (multiple datasets, each with multiple folds) and correlations due to overlapping training sets.
- Parameters:
diff – Array of differences between classifier scores
rope – Width of the region of practical equivalence (ROPE)
rho – Correlation between folds (typically around 1/n_folds)
upperAlpha – Upper bound for alpha parameter of Gamma prior (default: 2)
lowerAlpha – Lower bound for alpha parameter of Gamma prior (default: 1)
lowerBeta – Lower bound for beta parameter of Gamma prior (default: 0.01)
upperBeta – Upper bound for beta parameter of Gamma prior (default: 0.1)
std_upper_bound – Upper bound multiplier for standard deviation prior (default: 1000) Posterior is insensitive to this if large enough (>100)
verbose – Whether to print probability results (default: False)
names – Tuple of classifier names for verbose output (default: (“C1”, “C2”))
- Returns:
- (p_left, p_rope, p_right)
p_left: Probability that first classifier is worse
p_rope: Probability that classifiers are practically equivalent
p_right: Probability that first classifier is better
- Return type:
Tuple[float, float, float]
Notes
The Gamma distribution parameters control the prior on degrees of freedom
The hierarchical structure models between-dataset and within-dataset variance
Use when comparing algorithms across multiple datasets with cross-validation
References
Benavoli, A., Corani, G., & Mangili, F. (2016). Should we really use post-hoc tests based on mean-ranks? The Journal of Machine Learning Research.
See also
hierarchical_MC: Monte Carlo sampling version
correlated_ttest: Simpler test for single dataset comparisons
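A hedged sketch of a typical call on synthetic data, assuming 5 datasets with 10 folds each (rho ≈ 1/n_folds as noted above; the pystan backend must be installed):
>>> import numpy as np
>>> from py3plex.algorithms.statistics import bayesiantests as bt
>>> rng = np.random.default_rng(0)
>>> diff = rng.normal(0.01, 0.05, size=(5, 10))  # synthetic score differences
>>> p_left, p_rope, p_right = bt.hierarchical(diff, rope=0.01, rho=0.1, verbose=True)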
- py3plex.algorithms.statistics.bayesiantests.hierarchical_MC(diff, rope, rho, upperAlpha=2, lowerAlpha=1, lowerBeta=0.01, upperBeta=0.1, std_upper_bound=1000, names=('C1', 'C2'))
Monte Carlo sampling for hierarchical Bayesian test.
Generates Monte Carlo samples from the posterior distribution for hierarchical comparison of algorithms across multiple datasets with cross-validation.
- Parameters:
diff – Array of differences between classifier scores (shape: n_datasets x n_folds)
rope – Width of the region of practical equivalence (ROPE)
rho – Correlation between folds due to overlapping training sets
upperAlpha – Upper bound for alpha parameter of Gamma prior (default: 2)
lowerAlpha – Lower bound for alpha parameter of Gamma prior (default: 1)
lowerBeta – Lower bound for beta parameter of Gamma prior (default: 0.01)
upperBeta – Upper bound for beta parameter of Gamma prior (default: 0.1)
std_upper_bound – Upper bound multiplier for standard deviation prior (default: 1000)
names – Tuple of classifier names for identification (default: (“C1”, “C2”))
- Returns:
- Monte Carlo samples with shape (n_samples, 3)
Each row contains [p_left, p_rope, p_right] for one MC sample
- Return type:
np.ndarray
Notes
Uses PyStan for Bayesian inference with hierarchical model
Data is rescaled by mean standard deviation for numerical stability
Hierarchical structure captures both within-dataset and between-dataset variance
Requires pystan package to be installed
- Implementation:
Uses Stan’s NUTS sampler with 4 chains
Each chain runs 100 iterations (including warmup)
Total posterior samples: ~200 after warmup
See also
hierarchical: Main function that processes MC samples
correlated_ttest_MC: Simpler MC version for single dataset
- Raises:
ImportError – If pystan is not installed
- py3plex.algorithms.statistics.bayesiantests.plot_posterior(samples, names=('C1', 'C2'), proba_triplet=None)
- Parameters:
samples (array) – a vector of differences or a 2d array with pairs of scores.
names (pair of str) – the names of the two classifiers
- Returns:
matplotlib.pyplot.figure
- py3plex.algorithms.statistics.bayesiantests.plot_simplex(points, names=('C1', 'C2'), proba_triplet=None)
- py3plex.algorithms.statistics.bayesiantests.signrank(x, rope, prior_strength=0.6, prior_place=1, nsamples=50000, verbose=False, names=('C1', 'C2'))
- Parameters:
x (array) – a vector of differences or a 2d array with pairs of scores.
rope (float) – the width of the rope
prior_strength (float) – prior strength (default: 0.6)
prior_place (LEFT, ROPE or RIGHT) – the region to which the prior is assigned (default: ROPE)
nsamples (int) – the number of Monte Carlo samples
verbose (bool) – report the computed probabilities
names (pair of str) – the names of the two classifiers
- Returns:
p_left, p_rope, p_right
- py3plex.algorithms.statistics.bayesiantests.signrank_MC(x, rope, prior_strength=0.6, prior_place=1, nsamples=50000)
- Parameters:
x (array) – a vector of differences or a 2d array with pairs of scores.
rope (float) – the width of the rope
prior_strength (float) – prior strength (default: 0.6)
prior_place (LEFT, ROPE or RIGHT) – the region to which the prior is assigned (default: ROPE)
nsamples (int) – the number of Monte Carlo samples
- Returns:
2-d array with rows corresponding to samples and columns to probabilities [p_left, p_rope, p_right]
- py3plex.algorithms.statistics.bayesiantests.signtest(x, rope, prior_strength=1, prior_place=1, nsamples=50000, verbose=False, names=('C1', 'C2'))
- Parameters:
x (array) – a vector of differences or a 2d array with pairs of scores.
rope (float) – the width of the rope
prior_strength (float) – prior strength (default: 1)
prior_place (LEFT, ROPE or RIGHT) – the region to which the prior is assigned (default: ROPE)
nsamples (int) – the number of Monte Carlo samples
verbose (bool) – report the computed probabilities
names (pair of str) – the names of the two classifiers
- Returns:
p_left, p_rope, p_right
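A minimal sketch on synthetic per-fold differences; the rope width and classifier names are illustrative:
>>> import numpy as np
>>> from py3plex.algorithms.statistics import bayesiantests as bt
>>> x = np.random.normal(0.02, 0.1, size=30)  # per-fold score differences
>>> p_left, p_rope, p_right = bt.signtest(x, rope=0.01, verbose=True, names=('A', 'B'))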
- py3plex.algorithms.statistics.bayesiantests.signtest_MC(x, rope, prior_strength=1, prior_place=1, nsamples=50000)
- Parameters:
x (array) – a vector of differences or a 2d array with pairs of scores.
rope (float) – the width of the rope
prior_strength (float) – prior strength (default: 1)
prior_place (LEFT, ROPE or RIGHT) – the region to which the prior is assigned (default: ROPE)
nsamples (int) – the number of Monte Carlo samples
- Returns:
2-d array with rows corresponding to samples and columns to probabilities [p_left, p_rope, p_right]
Community Detection Advanced
Node Ranking and Clustering (NoRC) module for community detection.
This module implements algorithms for node ranking and hierarchical clustering in networks, including parallel PageRank computation and hierarchical merging.
- py3plex.algorithms.community_detection.NoRC.NoRC_communities_main(input_graph, clustering_scheme='hierarchical', max_com_num=100, verbose=False, parallel_step=None, prob_threshold=0.0005, community_range=None, fine_range=3, lag_threshold=10)
- py3plex.algorithms.community_detection.NoRC.page_rank_kernel(index_row)
Compute normalized PageRank vector for a given node.
- Parameters:
index_row – Node index for which to compute PageRank
- Returns:
Tuple of (node_index, normalized_pagerank_vector)
Community-based node ranking framework.
This module implements a framework for ranking nodes within and across communities using PageRank-based metrics and hierarchical clustering.
- py3plex.algorithms.community_detection.community_ranking.create_tree(centers: ndarray) Dict[int, Dict[str, List]]
- py3plex.algorithms.community_detection.community_ranking.page_rank_kernel(index_row: int) Tuple[int, ndarray]
- py3plex.algorithms.community_detection.community_ranking.return_infomap_communities(network: Any) List[List[int]]
Node Ranking and Classification
Node ranking algorithms for multilayer networks.
This module provides various node ranking algorithms including PageRank variants, HITS (Hubs and Authorities), and personalized PageRank (PPR) for network analysis.
- Key Functions:
sparse_page_rank: Compute PageRank scores using sparse matrix operations
run_PPR: Run Personalized PageRank in parallel across multiple cores
hubs_and_authorities: Compute HITS scores for nodes
stochastic_normalization: Normalize adjacency matrix to stochastic form
Notes
The PageRank implementations use sparse matrices for memory efficiency and support parallel computation for large-scale networks.
- py3plex.algorithms.node_ranking.authority_matrix(graph: Graph) ndarray
Get the authority matrix representation of a graph.
Computes the matrix A = A.T @ A where A is the adjacency matrix. The authority matrix is used in HITS algorithm computation.
- Parameters:
graph – NetworkX graph
- Returns:
Authority matrix (N x N) where N is number of nodes
- Return type:
np.ndarray
Notes
For directed graphs: A[i,j] = number of nodes that point to both i and j
Used internally by HITS algorithm to compute authority scores
See also
hub_matrix: Complementary hub matrix
hubs_and_authorities: Compute actual hub/authority scores
- py3plex.algorithms.node_ranking.damping_hyper: float
- py3plex.algorithms.node_ranking.hub_matrix(graph: Graph) ndarray
Get the hub matrix representation of a graph.
Computes the matrix H = A @ A.T where A is the adjacency matrix. The hub matrix is used in HITS algorithm computation.
- Parameters:
graph – NetworkX graph
- Returns:
Hub matrix (N x N) where N is number of nodes
- Return type:
np.ndarray
Notes
For directed graphs: H[i,j] = number of nodes pointed to by both i and j
Used internally by HITS algorithm to compute hub scores
See also
authority_matrix: Complementary authority matrix
hubs_and_authorities: Compute actual hub/authority scores
- py3plex.algorithms.node_ranking.hubs_and_authorities(graph: Graph) Tuple[dict, dict]
Compute HITS (Hubs and Authorities) scores for all nodes in a graph.
Implements the Hyperlink-Induced Topic Search (HITS) algorithm to identify hub nodes (nodes that point to many authorities) and authority nodes (nodes pointed to by many hubs) in a network.
- Parameters:
graph – NetworkX graph (directed or undirected)
- Returns:
- (hub_scores, authority_scores)
hub_scores: Dictionary mapping node -> hub score
authority_scores: Dictionary mapping node -> authority score
- Return type:
Tuple[dict, dict]
Notes
Uses scipy-based implementation from NetworkX (nx.hits_scipy)
Scores are normalized so that the sum of squares equals 1
For undirected graphs, hub and authority scores are identical
Converges using power iteration method
Examples
>>> import networkx as nx
>>> G = nx.DiGraph([(0, 1), (0, 2), (1, 2)])
>>> hubs, authorities = hubs_and_authorities(G)
>>> # Node 0 has high hub score (points to others)
>>> # Node 2 has high authority score (pointed to by others)
See also
hub_matrix: Get the hub matrix representation
authority_matrix: Get the authority matrix representation
- py3plex.algorithms.node_ranking.page_rank_kernel(index_row: int) Tuple[int, ndarray]
Compute PageRank vector for a single starting node (multiprocessing kernel).
This function is designed to be called in parallel via multiprocessing.Pool.map(). It computes the personalized PageRank vector starting from a single node.
- Parameters:
index_row – Index of the starting node for personalized PageRank
- Returns:
- (node_index, normalized_pagerank_vector)
node_index: The input index (for tracking results)
pagerank_vector: L2-normalized PageRank scores for all nodes
- Return type:
Tuple[int, np.ndarray]
Notes
Accesses global variables: __graph_matrix, damping_hyper, spread_step_hyper, spread_percent_hyper (set by run_PPR before parallel execution)
Returns zero vector if normalization fails
L2 normalization ensures comparable magnitudes across different starting nodes
See also
run_PPR: Main function that sets up parallel execution
sparse_page_rank: Core PageRank computation
- py3plex.algorithms.node_ranking.run_PPR(network: spmatrix, cores: int | None = None, jobs: List[range] | None = None, damping: float = 0.85, spread_step: int = 10, spread_percent: float = 0.3, targets: List[int] | None = None, parallel: bool = True) Generator[Tuple[int, ndarray] | List[Tuple[int, ndarray]], None, None]
Run Personalized PageRank (PPR) in parallel for multiple starting nodes.
Computes personalized PageRank vectors for multiple nodes using parallel processing. This is useful for creating node embeddings or analyzing node importance from different perspectives in the network.
- Parameters:
network – Sparse adjacency matrix (will be automatically normalized to stochastic form)
cores – Number of CPU cores to use (default: all available cores)
jobs – Custom job batches as list of ranges (default: auto-generated)
damping – Damping factor for PageRank (default: 0.85) Higher values (0.85-0.99) emphasize network structure
spread_step – Steps to check spread pattern for optimization (default: 10)
spread_percent – Max node fraction for shrinkage optimization (default: 0.3)
targets – Specific node indices to compute PPR for (default: all nodes)
parallel – Enable parallel processing (default: True) Set to False for debugging or single-core execution
- Yields:
Union[Tuple[int, np.ndarray], List[Tuple[int, np.ndarray]]] –
If parallel=True: lists of (node_index, pagerank_vector) tuples (batched)
If parallel=False: individual (node_index, pagerank_vector) tuples
Notes
Automatically normalizes input matrix to column-stochastic form
Uses multiprocessing.Pool for parallel execution
Global variables are used to share the graph matrix across processes
Results are yielded incrementally (generator pattern) to save memory
Each pagerank_vector is L2-normalized for comparability
Examples
>>> import scipy.sparse as sp
>>> # Create a small network
>>> adj = sp.csr_matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
>>>
>>> # Compute PPR for all nodes in parallel
>>> for batch in run_PPR(adj, cores=2, parallel=True):
...     for node_idx, pr_vector in batch:
...         print(f"Node {node_idx}: {pr_vector}")
>>>
>>> # Compute PPR for specific nodes without parallelism
>>> for node_idx, pr_vector in run_PPR(adj, targets=[0, 1], parallel=False):
...     print(f"Node {node_idx}: {pr_vector}")
- Performance:
Parallel speedup scales with number of cores (up to ~0.8 * cores efficiency)
Memory usage: O(N * N_targets) for storing results
For large networks (>100K nodes), consider processing targets in batches
See also
sparse_page_rank: Core PageRank computation
page_rank_kernel: Worker function for parallel execution
stochastic_normalization: Matrix normalization (called internally)
- py3plex.algorithms.node_ranking.sparse_page_rank(matrix: spmatrix, start_nodes: List[int] | range, epsilon: float = 1e-06, max_steps: int = 100000, damping: float = 0.5, spread_step: int = 10, spread_percent: float = 0.3, try_shrink: bool = True) ndarray
Compute personalized PageRank using sparse matrix operations.
Implements an efficient personalized PageRank algorithm with adaptive sparsification to reduce memory usage and computation time. The algorithm uses a power iteration method with early stopping based on convergence criteria.
- Parameters:
matrix – Column-stochastic sparse adjacency matrix (use stochastic_normalization first)
start_nodes – List or range of starting nodes for personalized PageRank
epsilon – Convergence threshold for L1 norm difference (default: 1e-6)
max_steps – Maximum number of iterations (default: 100000)
damping – Damping factor / teleportation probability (default: 0.5) Higher values (e.g., 0.85) favor network structure over random jumps
spread_step – Number of steps to check for sparsity pattern (default: 10)
spread_percent – Maximum fraction of nodes to consider for shrinkage (default: 0.3)
try_shrink – Enable adaptive shrinkage to reduce computation (default: True)
- Returns:
PageRank scores for all nodes, with start_nodes set to 0
- Return type:
np.ndarray
Notes
Assumes matrix is column-stochastic (use stochastic_normalization first)
Adaptive shrinkage identifies nodes unreachable from start_nodes and excludes them from computation for efficiency
Convergence is measured by L1 norm of rank vector difference
Start nodes are zeroed out in the final result to avoid self-importance
- Complexity:
Time: O(k * E) where k is iterations and E is edges
Space: O(N) for rank vectors, plus matrix storage
Examples
>>> import scipy.sparse as sp
>>> # Create and normalize adjacency matrix
>>> adj = sp.csr_matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
>>> adj_norm = stochastic_normalization(adj)
>>> # Compute PageRank from node 0
>>> pr = sparse_page_rank(adj_norm, [0], damping=0.85)
>>> pr  # PageRank scores (node 0 will be 0)
array([0. , 0.5, 0.5])
- Raises:
AssertionError – If start_nodes is empty
See also
stochastic_normalization: Required preprocessing step
run_PPR: Parallel wrapper for computing multiple PageRank vectors
- py3plex.algorithms.node_ranking.spread_percent_hyper: float
- py3plex.algorithms.node_ranking.spread_step_hyper: int
- py3plex.algorithms.node_ranking.stochastic_normalization(matrix: spmatrix) spmatrix
Normalize a sparse matrix to stochastic form (column-stochastic).
Converts an adjacency matrix to a stochastic matrix where each column sums to 1. This normalization is required for PageRank-style random walk algorithms.
- Parameters:
matrix – Sparse adjacency matrix to normalize
- Returns:
Column-stochastic sparse matrix where each column sums to 1
- Return type:
sp.spmatrix
Notes
Removes self-loops (sets diagonal to 0) before normalization
Handles zero-degree nodes by leaving corresponding columns as zeros
Preserves sparsity structure for memory efficiency
Examples
>>> import scipy.sparse as sp
>>> adj = sp.csr_matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
>>> stoch_adj = stochastic_normalization(adj)
>>> stoch_adj.sum(axis=0).A1  # each column should sum to 1
array([1., 1., 1.])
- py3plex.algorithms.network_classification.benchmark_classification(matrix: spmatrix, labels: ndarray, alpha_value: float = 0.85, iterations: int = 30, normalization_scheme: str = 'freq', dataset_name: str = 'example', verbose: bool = False, test_size: float | None = None) DataFrame
- py3plex.algorithms.network_classification.label_propagation(graph_matrix: spmatrix, class_matrix: ndarray, alpha: float = 0.001, epsilon: float = 1e-12, max_steps: int = 100000, normalization: str | List[str] = 'freq') ndarray
Propagate labels through a graph.
- Parameters:
graph_matrix – Sparse graph adjacency matrix
class_matrix – Initial class label matrix
alpha – Propagation weight parameter
epsilon – Convergence threshold
max_steps – Maximum number of iterations
normalization – Normalization scheme(s) to apply
- Returns:
Propagated label matrix
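A small end-to-end sketch on a toy path graph, assuming one-hot seed labels and the normalization helper documented just below:
>>> import numpy as np
>>> import scipy.sparse as sp
>>> from py3plex.algorithms.network_classification import (
...     label_propagation, label_propagation_normalization)
>>> A = sp.csr_matrix([[0, 1, 0, 0],
...                    [1, 0, 1, 0],
...                    [0, 1, 0, 1],
...                    [0, 0, 1, 0]])
>>> A_norm = label_propagation_normalization(A)
>>> Y = np.zeros((4, 2))  # 4 nodes, 2 classes
>>> Y[0, 0] = 1.0         # node 0 seeded with class 0
>>> Y[3, 1] = 1.0         # node 3 seeded with class 1
>>> F = label_propagation(A_norm, Y, alpha=0.001)
>>> F.argmax(axis=1)      # predicted class per node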
- py3plex.algorithms.network_classification.label_propagation_normalization(matrix: spmatrix) spmatrix
Normalize a matrix for label propagation.
- Parameters:
matrix – Sparse matrix to normalize
- Returns:
Normalized sparse matrix
- py3plex.algorithms.network_classification.label_propagation_tf() None
- py3plex.algorithms.network_classification.normalize_amplify_freq(mat: ndarray) ndarray
Normalize and amplify matrix by frequency.
- Parameters:
mat – Matrix to normalize
- Returns:
Normalized and amplified matrix
- py3plex.algorithms.network_classification.normalize_exp(mat: ndarray) ndarray
Apply exponential normalization.
- Parameters:
mat – Matrix to normalize
- Returns:
Exponentially normalized matrix
- py3plex.algorithms.network_classification.normalize_initial_matrix_freq(mat: ndarray) ndarray
Normalize matrix by frequency.
- Parameters:
mat – Matrix to normalize
- Returns:
Normalized matrix
- py3plex.algorithms.network_classification.validate_label_propagation(core_network: spmatrix, labels: ndarray | spmatrix, dataset_name: str = 'test', repetitions: int = 5, normalization_scheme: str | List[str] = 'basic', alpha_value: float = 0.001, random_seed: int = 123, verbose: bool = False) DataFrame
Validate label propagation with cross-validation.
- Parameters:
core_network – Sparse network adjacency matrix
labels – Label matrix
dataset_name – Name of the dataset
repetitions – Number of repetitions
normalization_scheme – Normalization scheme to use
alpha_value – Alpha parameter for propagation
random_seed – Random seed for reproducibility
verbose – Whether to print progress
- Returns:
DataFrame with validation results
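Continuing the toy example above, a hedged sketch of the cross-validated variant (A and Y as constructed earlier; parameter values are illustrative):
>>> results = validate_label_propagation(A, Y, dataset_name='toy',
...                                      repetitions=3, verbose=False)
>>> results.head()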
Embeddings and Wrappers
- py3plex.wrappers.train_node2vec_embedding.call_node2vec_binary(input_graph: str, output_graph: str, p: float = 1, q: float = 1, dimension: int = 128, directed: bool = False, weighted: bool = True, binary: str | None = None, timeout: int = 300) None
Call the Node2Vec C++ binary with specified parameters.
- Parameters:
input_graph – Path to input graph file
output_graph – Path to output embedding file
p – Return parameter
q – In-out parameter
dimension – Embedding dimension
directed – Whether graph is directed
weighted – Whether graph is weighted
binary – Path to node2vec binary (defaults to PY3PLEX_NODE2VEC_BINARY env var or “./node2vec”)
timeout – Maximum execution time in seconds
- Raises:
ExternalToolError – If binary is not found or execution fails
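A hedged invocation sketch; the binary location and file names are placeholders, with PY3PLEX_NODE2VEC_BINARY read as documented above:
>>> import os
>>> from py3plex.wrappers.train_node2vec_embedding import call_node2vec_binary
>>> os.environ['PY3PLEX_NODE2VEC_BINARY'] = '/opt/node2vec/node2vec'  # placeholder path
>>> call_node2vec_binary('graph.edgelist', 'graph.emb',
...                      p=1.0, q=0.5, dimension=128,
...                      directed=False, weighted=True, timeout=600)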
- py3plex.wrappers.train_node2vec_embedding.learn_embedding(core_network: Any, labels: List[Any] | None = None, ssize: float = 0.5, embedding_outfile: str = 'out.emb', p: float = 0.1, q: float = 0.1, binary_path: str | None = None, parameter_range: str = '[0.25,0.50,1,2,4]', timeout: int = 300) Tuple[str, float]
Learn node embeddings for a network.
- Parameters:
core_network – NetworkX graph
labels – Node labels
ssize – Sample size
embedding_outfile – Output file for embeddings
p – Return parameter
q – In-out parameter
binary_path – Path to node2vec binary (defaults to PY3PLEX_NODE2VEC_BINARY env var or “./node2vec”)
parameter_range – String representation of parameter range list
timeout – Maximum execution time in seconds per call
- Returns:
Tuple of (method_name, elapsed_time)
- py3plex.wrappers.train_node2vec_embedding.n2v_embedding(G: Any, targets: Any, verbose: bool = False, sample_size: float = 0.5, outfile_name: str = 'test.emb', p: float | None = None, q: float | None = None, binary_path: str | None = None, parameter_range: List[float] | None = None, embedding_dimension: int = 128, timeout: int = 300) None
Train Node2Vec embeddings with parameter optimization.
- Parameters:
G – NetworkX graph
targets – Target labels for nodes
verbose – Whether to print verbose output
sample_size – Sample size for training
outfile_name – Output embedding file name
p – Return parameter (None triggers grid search)
q – In-out parameter (None triggers grid search)
binary_path – Path to node2vec binary (defaults to PY3PLEX_NODE2VEC_BINARY env var or “./node2vec”)
parameter_range – Range of parameters to search
embedding_dimension – Dimension of embeddings
timeout – Maximum execution time in seconds per call
Network Motifs and Patterns
HINMINE Data Structures
- class py3plex.core.HINMINE.dataStructures.Class(lab_id, name, members)
Bases:
object
- class py3plex.core.HINMINE.dataStructures.HeterogeneousInformationNetwork(network, label_delimiter, weight_tag=False, target_tag=True)
Bases:
object
- add_label(node, label_id, label_name=None)
- calculate_decomposition_candidates(max_decomposition_length=10)
- calculate_schema()
- create_label_matrix(weights=None)
- decompose_from_iterator(name, weighing, summing, generator=None, degrees=None, parallel=False, pool=None)
- midpoint_generator(node_sequence, edge_sequence)
- process_network(label_delimiter)
- split_to_indices(train_indices=(), validate_indices=(), test_indices=())
- split_to_parts(lst, n)
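Only the method names are documented; a heavily hedged sketch, assuming network is a NetworkX-style heterogeneous graph whose node labels are encoded with the delimiter (here '---', mirroring the multi_layer_network default):
>>> from py3plex.core.HINMINE.dataStructures import HeterogeneousInformationNetwork
>>> # 'G' is assumed to carry node-type and label annotations
>>> hin = HeterogeneousInformationNetwork(G, label_delimiter='---')
>>> hin.calculate_schema()
>>> candidates = hin.calculate_decomposition_candidates(max_decomposition_length=4)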
Command-Line Interface
Command-line interface for py3plex.
This module provides a comprehensive CLI tool for multilayer network analysis with full coverage of main algorithms.
- py3plex.cli.cmd_aggregate(args: Namespace) int
Aggregate multilayer network into single layer.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_centrality(args: Namespace) int
Compute node centrality measures.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_check(args: Namespace) int
Lint and validate a graph data file.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success, non-zero for errors)
- py3plex.cli.cmd_community(args: Namespace) int
Detect communities in the network.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_convert(args: Namespace) int
Convert network between different formats.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_create(args: Namespace) int
Create a new multilayer network.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_dsl_lint(args: Namespace) int
Lint and analyze DSL queries.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for no errors, 1 for errors found, 2 for command error)
- py3plex.cli.cmd_help(args: Namespace) int
Show detailed help information.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_load(args: Namespace) int
Load and inspect a network.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_query(args: Namespace) int
Execute DSL queries on networks with Unix piping support.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_quickstart(args: Namespace) int
Run quickstart demo - creates a tiny demo graph and demonstrates basic operations.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_run_config(args: Namespace) int
Run workflow from configuration file.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_selftest(args: Namespace) int
Run self-test to verify installation and core functionality.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_stats(args: Namespace) int
Compute multilayer network statistics.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_tutorial(args: Namespace) int
Run interactive tutorial mode to learn py3plex step by step.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.cmd_visualize(args: Namespace) int
Visualize the multilayer network.
- Parameters:
args – Parsed command-line arguments
- Returns:
Exit code (0 for success)
- py3plex.cli.create_parser() ArgumentParser
Create and configure the argument parser for py3plex CLI.
- Returns:
Configured ArgumentParser instance
- py3plex.cli.main(argv: List[str] | None = None) int
Main entry point for the CLI.
- Parameters:
argv – Command-line arguments (defaults to sys.argv)
- Returns:
Exit code (0 for success, non-zero for errors)
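Both entry points can be driven programmatically. A sketch; only the help listing is exercised because per-subcommand flags are not documented here, and the 'quickstart' subcommand name is an assumption mirroring cmd_quickstart:
>>> from py3plex.cli import create_parser, main
>>> parser = create_parser()
>>> parser.print_help()  # lists the available subcommands
>>> # Dispatch through main(); a zero exit code signals success
>>> code = main(['quickstart'])  # assumed subcommand name (see cmd_quickstart)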
Validation Utilities
Input validation utilities for Py3plex.
This module provides pre-validation functions to catch common input errors early and provide clear, actionable error messages to users.
- py3plex.validation.validate_csv_columns(file_path: str, required_columns: List[str], optional_columns: List[str] | None = None) None
Validate that a CSV file has required columns.
- Parameters:
file_path – Path to CSV file
required_columns – List of column names that must be present
optional_columns – List of column names that are optional
- Raises:
ParsingError – If required columns are missing
- Contracts:
Precondition: file_path must be a non-empty string
Precondition: required_columns must be a non-empty list of strings
- py3plex.validation.validate_edgelist_format(file_path: str, delimiter: str | None = None) None
Validate simple edgelist file format (source target weight).
- Parameters:
file_path – Path to edgelist file
delimiter – Optional delimiter (default: whitespace)
- Raises:
ParsingError – If file format is invalid
- Contracts:
Precondition: file_path must be a non-empty string
Precondition: delimiter must be None or a string
- py3plex.validation.validate_file_exists(file_path: str) None
Validate that a file exists and is readable.
- Parameters:
file_path – Path to file to validate
- Raises:
ParsingError – If file doesn’t exist or isn’t readable
- Contracts:
Precondition: file_path must be a non-empty string
- py3plex.validation.validate_input_type(input_type: str, valid_types: Set[str] | None = None) None
Validate that input_type is recognized.
- Parameters:
input_type – The input type string to validate
valid_types – Optional set of valid types (uses default if None)
- Raises:
ParsingError – If input_type is not valid
- Contracts:
Precondition: input_type must be a non-empty string
Precondition: valid_types must be None or a set
- py3plex.validation.validate_multiedgelist_format(file_path: str, delimiter: str | None = None) None
Validate multiedgelist file format (source target layer weight).
- Parameters:
file_path – Path to multiedgelist file
delimiter – Optional delimiter (default: whitespace)
- Raises:
ParsingError – If file format is invalid
- Contracts:
Precondition: file_path must be a non-empty string
Precondition: delimiter must be None or a string
- py3plex.validation.validate_network_data(file_path: str, input_type: str) None
Validate network data before parsing.
This is the main entry point for validation. It performs appropriate validation based on the input type.
- Parameters:
file_path – Path to network file
input_type – Type of input file
- Raises:
ParsingError – If validation fails
- Contracts:
Precondition: file_path must be a string
Precondition: input_type must be a non-empty string
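A short pre-flight sketch; the input_type value 'multiedgelist' is assumed from the validator names above, and the file name is a placeholder:
>>> from py3plex.validation import validate_file_exists, validate_network_data
>>> validate_file_exists('network.txt')  # raises ParsingError if missing/unreadable
>>> validate_network_data('network.txt', input_type='multiedgelist')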
Network Comparison and Testing
Hedwig Learning Algorithms
Main learner class.
@author: anze.vavpetic@ijs.si
- class py3plex.algorithms.hedwig.learners.learner.Learner(kb, n=None, min_sup=1, sim=1, depth=4, target=None, use_negations=False, optimal_subclass=False)
Bases:
object
Learner class, supporting various types of induction from the knowledge base.
- Default = 'default'
- Improvement = 'improvement'
- Similarity = 'similarity'
- can_specialize(rule)
Is the rule good enough to be further refined?
- extend(rules, specializations)
Extends the ruleset in the given way.
- extend_replace_worst(rules, specializations)
Extends the list by replacing the worst rules.
- extend_with_similarity(rules, specializations)
Extends the list based on how similar ‘new_rule’ is to the rules contained in ‘rules’.
- get_subclasses(pred)
- get_superclasses(pred)
- group_score(rules)
Calculates the score of the whole list of rules.
- induce()
Induces rules for the given knowledge base.
- is_implicit_root(pred)
- non_redundant(rule, new_rule)
Is the rule non-redundant compared to its immediate generalization?
- specialize(rule)
Returns a list of all specializations of ‘rule’.
- specialize_add_relation(rule)
Specialize with new binary relation.
Main learner class.
@author: anze.vavpetic@ijs.si
- class py3plex.algorithms.hedwig.learners.bottomup.BottomUpLearner(kb, n=None, min_sup=1, sim=1, depth=4, target=None, use_negations=False)
Bases:
object
Bottom-up learner.
- Default = 'default'
- Improvement = 'improvement'
- Similarity = 'similarity'
- bottom_clause()
- get_subclasses(pred)
- get_superclasses(pred)
- induce()
Induces rules for the given knowledge base.
- is_implicit_root(pred)
Main learner class.
@author: anze.vavpetic@ijs.si
Hedwig Statistics and Scoring
Score function definitions.
@author: anze.vavpetic@ijs.si
- py3plex.algorithms.hedwig.stats.scorefunctions.chisq(rule)
- py3plex.algorithms.hedwig.stats.scorefunctions.enrichment_score(rule)
- py3plex.algorithms.hedwig.stats.scorefunctions.interesting(rule)
Checks whether a given rule is interesting under the given score function.
- py3plex.algorithms.hedwig.stats.scorefunctions.kaplan_meier_AUC(rule)
- py3plex.algorithms.hedwig.stats.scorefunctions.leverage(rule)
- py3plex.algorithms.hedwig.stats.scorefunctions.lift(rule)
- py3plex.algorithms.hedwig.stats.scorefunctions.precision(rule)
- py3plex.algorithms.hedwig.stats.scorefunctions.t_score(rule)
- py3plex.algorithms.hedwig.stats.scorefunctions.wracc(rule)
- py3plex.algorithms.hedwig.stats.scorefunctions.z_score(rule)
Module for ruleset validation.
@author: anze.vavpetic@ijs.si
Hedwig Core Components
- py3plex.algorithms.hedwig.core.converters.convert_mapping_to_rdf(input_mapping_file, extract_subnode_info=False, split_node_by=':', keep_index=1, layer_type='uniprotkb', annotation_mapping_file='test.gaf', go_identifier='GO:', prepend_string=None)
- py3plex.algorithms.hedwig.core.converters.obo2n3(obofile, n3out, gaf_file)
Knowledge-base class.
@author: anze.vavpetic@ijs.si
- class py3plex.algorithms.hedwig.core.kb.ExperimentKB(triplets, score_fun, instances_as_leaves=True)
Bases:
object
The knowledge base for one specific experiment.
- add_sub_class(sub, obj)
Adds the resource ‘sub’ as a subclass of ‘obj’.
- bits_to_indices(bits)
Converts the bitset to a set of indices.
- get_domains(predicate)
Returns the bitsets for input and output examples of the binary predicate ‘predicate’.
- get_empty_domain()
Returns a bitset covering no examples.
- get_examples()
Returns all examples for this experiment.
- get_full_domain()
Returns a bitset covering all examples.
- get_members(predicate, bit=True)
Returns the examples for this predicate, either as a bitset or a set of ids.
- get_reverse_members(predicate, bit=True)
Returns the examples for this predicate, either as a bitset or a set of ids.
- get_root()
Root predicate, which covers all examples.
- get_score(ex_idx)
Returns the score for example id ‘ex_idx’.
- get_subclasses(predicate, producer_pred=None)
Returns a list of subclasses (as predicate objects) for ‘predicate’.
- indices_to_bits(indices)
Converts the indices to a bitset.
- is_discrete_target()
- n_examples()
Returns the number of examples.
- n_members(predicate)
- super_classes(pred)
Returns all super classes of pred (with transitivity).
Global settings file.
@author: anze.vavpetic@ijs.si
- class py3plex.algorithms.hedwig.core.settings.Defaults
Bases:
object
- ADJUST = 'fwer'
- ALPHA = 0.05
- BEAM_SIZE = 20
- COVERED = None
- DEPTH = 5
- FDR_Q = 0.05
- FORMAT = 'n3'
- LEARNER = 'heuristic'
- LEAVES = False
- MODE = 'subgroups'
- NEGATIONS = False
- NO_CACHE = False
- OPTIMAL_SUBCLASS = False
- OUTPUT = None
- SCORE = 'lift'
- SUPPORT = 0.1
- TARGET = None
- URIS = False
- VERBOSE = False
Time Series and Temporal Analysis
Advanced Visualization
Sankey diagram visualization for multilayer networks.
This module provides inter-layer flow visualization to show connection strength between layers in multilayer networks. The visualization displays flows as arrows with widths proportional to the number of inter-layer connections.
Note: This uses a simplified flow diagram approach rather than matplotlib’s Sankey class, as the Sankey class is designed for more complex flow networks and doesn’t map directly to multilayer network inter-layer connections.
- py3plex.visualization.sankey.draw_multilayer_sankey(graphs: List[Graph], multilinks: Dict[str, List[Tuple]], labels: List[str] | None = None, ax: Any | None = None, display: bool = False, **kwargs) Any
Draw inter-layer flow diagram showing connection strength in multilayer networks.
Creates a flow visualization where:
- Each layer is represented in the diagram
- Flows between layers show the strength (number) of inter-layer connections
- Flow width/text indicates the number of inter-layer edges
- Parameters:
graphs – List of NetworkX graphs, one per layer
multilinks – Dictionary mapping edge_type -> list of multi-layer edges
labels – Optional list of layer labels. If None, uses layer indices
ax – Matplotlib axes to draw on. If None, creates new figure
display – If True, calls plt.show() after drawing. Default is False to let the caller control rendering.
**kwargs – Reserved for future extensions
- Returns:
Matplotlib axes object
Examples
>>> import matplotlib.pyplot as plt
>>> from py3plex.visualization import draw_multilayer_sankey
>>> network = multi_layer_network()
>>> network.load_network("data.txt", input_type="multiedgelist")
>>> labels, graphs, multilinks = network.get_layers()
>>> fig, ax = plt.subplots(figsize=(12, 8))
>>> ax = draw_multilayer_sankey(graphs, multilinks, labels=labels, ax=ax)
>>> plt.savefig("sankey.png")
Note
This visualization is most effective for networks with 2-5 layers. For networks with many layers, the diagram may become cluttered. The implementation uses a simplified flow visualization approach.