Example Notebook: BioCypher and Pandas
Introduction
The main purpose of BioCypher is to facilitate the pre-processing of biomedical data, saving development time in the maintenance of curated knowledge graphs while allowing the simple and efficient creation of task-specific, lightweight knowledge graphs in a user-friendly and biology-centric fashion.
We are going to use a toy example to familiarise the user with the basic functionality of BioCypher. One central task of BioCypher is the harmonisation of dissimilar datasets describing the same entities. Thus, in this example, the input data - which in the real-world use case could come from any type of interface - are represented by simulated data containing some examples of differently formatted biomedical entities such as proteins and their interactions.
There are two other versions of this tutorial, which only differ in the output format. The first uses a CSV output format to write files suitable for Neo4j admin import, and the second creates an in-memory collection of Pandas dataframes. You can find the former in the tutorial directory of the BioCypher repository. This tutorial simply takes the latter, in-memory approach to a Jupyter notebook.
While BioCypher was designed as a graph-focused framework, it also supports Pandas DataFrames, owing to commonalities in bioinformatics workflows. This allows integration with methods that use tabular data, such as machine learning and statistical analysis, for instance in the scVerse framework.
Setup
To run this tutorial interactively, you will first need to perform some setup steps specific to running on Google Colab. You can collapse this section and run the setup steps with one click, as they are not required for the explanation of BioCypher's functionality. You can of course also run the steps one by one if you want to see what is happening. The real tutorial starts with section 1, "Adding data" (do not follow this link on Colab, as you will be taken back to the website; please scroll down instead).
!pip install biocypher
Requirement already satisfied: biocypher (0.9.1) and all of its dependencies. [pip output truncated]
Tutorial files
In the biocypher root directory, you will find a tutorial directory with the files for this tutorial. The data_generator.py file contains the simulated data generation code, and the other files, specifically the .yaml files, are named according to the tutorial step they are used in.
Let's download these:
import yaml
import requests
import subprocess
schema_path = "https://raw.githubusercontent.com/biocypher/biocypher/main/tutorial/"
!wget -O data_generator.py "https://github.com/biocypher/biocypher/raw/main/tutorial/data_generator.py"
--2025-03-07 20:32:24-- https://github.com/biocypher/biocypher/raw/main/tutorial/data_generator.py ... 200 OK ... ‘data_generator.py’ saved [7083/7083]
owner = "biocypher"
repo = "biocypher"
path = "tutorial" # The path within the repository (optional, leave empty for the root directory)
github_url = "https://api.github.com/repos/{owner}/{repo}/contents/{path}"
api_url = github_url.format(owner=owner, repo=repo, path=path)
response = requests.get(api_url)
# Get list of yaml files from the repo
files = response.json()
yamls = []
for file in files:
    if file["type"] == "file":
        if file["name"].endswith(".yaml"):
            yamls.append(file["name"])
# wget all yaml files
for yaml_file in yamls:  # distinct name so the loop variable does not shadow the yaml module
    url_path = schema_path + yaml_file
    subprocess.run(["wget", url_path])
[wget output truncated: each tutorial configuration file (01 through 07, biocypher and schema configs) was resolved and saved successfully.]
Let's also define a helper function to print these YAML files:
# helper function to print yaml files
import yaml
def print_yaml(file_path):
    with open(file_path, 'r') as file:
        yaml_data = yaml.safe_load(file)
    print("--------------")
    print(yaml.dump(yaml_data, sort_keys=False, indent=4))
    print("--------------")
Configuration
BioCypher is configured using a YAML file; it comes with a default (which you can see in the Configuration section). You can use it, for instance, to select an output format, the output directory, separators, the logging level, and other options. For this tutorial, we will use a dedicated configuration file for each of the steps. The configuration files are located in the tutorial directory and are called using the biocypher_config_path argument at instantiation of the BioCypher interface. For more information, see also the Quickstart Configuration section.
Section 1: Adding data
Input data stream ("adapter")
The basic operation of adding data to the knowledge graph requires two components: an input stream of data (which we call an adapter) and a configuration for the resulting desired output (the schema configuration). The former will be simulated by calling the Protein class of our data generator three times.
# create a list of proteins to be imported
from data_generator import Protein
n_proteins = 3
proteins = [Protein() for _ in range(n_proteins)]
Each protein in our simulated data has a UniProt ID, a label ("uniprot_protein"), and a dictionary of properties describing it. This is - purely by coincidence - very close to the input BioCypher expects (for nodes):
- a unique identifier
- an input label (to allow mapping to the ontology, see the second step below)
- a dictionary of further properties (which can be empty)
These should be presented to BioCypher in the form of a tuple. To achieve this representation, we can use a generator function that iterates through our simulated input data and, for each entity, forms the corresponding tuple. The use of a generator allows for efficient streaming of larger datasets where required.
def node_generator(proteins):
    for protein in proteins:
        yield (
            protein.get_id(),
            protein.get_label(),
            protein.get_properties(),
        )
entities = node_generator(proteins)
The concept of an adapter can become arbitrarily complex and involve programmatic access to databases, API requests, asynchronous queries, context managers, and other complicating factors. However, it always boils down to providing the BioCypher driver with a collection of tuples, one for each entity in the input data. For more info, see the section on Adapters.
As described above, nodes possess:
- a mandatory ID,
- a mandatory label, and
- a property dictionary,
while edges possess:
- an (optional) ID,
- two mandatory IDs for source and target,
- a mandatory label, and
- a property dictionary.
How these entities are mapped to the ontological hierarchy underlying a BioCypher graph is determined by their mandatory labels, which connect the input data stream to the schema configuration. This we will see in the following section.
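For illustration, here is a minimal sketch of one node tuple and one edge tuple in this format; the identifiers, labels, and properties are made up for this example and are not part of the tutorial data:
# a node tuple: (id, input label, properties) - values are purely illustrative
example_node = (
    "P00533",            # unique identifier
    "uniprot_protein",   # input label, mapped via the schema configuration
    {"taxon": "9606"},   # property dictionary (may be empty)
)
# an edge tuple: (id, source id, target id, input label, properties); the id is optional and may be None
example_edge = (
    None,
    "P00533",
    "P01116",
    "interacts_with",
    {"method": "two-hybrid"},
)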
Schema configuration
How each BioCypher graph is structured is determined by the schema configuration YAML file that is given to the BioCypher interface. This also serves to ground the entities of the graph in the biomedical realm by using an ontological hierarchy. In this tutorial, we refer to the Biolink model as the general backbone of our ontological hierarchy. The basic premise of the schema configuration YAML file is that each component of the desired knowledge graph output should be configured here; an entity will be part of our knowledge graph if, and only if, it is represented in the schema configuration and present in the input data stream.
In our case, since we only import proteins, we only require a few lines of configuration:
print_yaml('01_schema_config.yaml')
-------------- protein: represented_as: node preferred_id: uniprot input_label: uniprot_protein --------------
The first line (protein) identifies our entity and connects to the ontological backbone; here we define the first class to be represented in the graph. In the configuration YAML, we represent entities - similar to the internal representation of Biolink - in lower sentence case (e.g., "small molecule"). Conversely, for class names, in file names, and property graph labels, we use PascalCase instead (e.g., "SmallMolecule") to avoid issues with handling spaces. The transformation is done by BioCypher internally. BioCypher does not strictly enforce the entities allowed in this class definition; in fact, we provide several methods of extending the existing ontological backbone ad hoc by providing custom inheritance or hybridising ontologies. However, every entity should at some point be connected to the underlying ontology, otherwise the multiple hierarchical labels will not be populated.
Following this first line are three indented values of the protein class.
The second line (represented_as) tells BioCypher in which way each entity should be represented in the graph; the only options are node and edge. Representation as an edge is only possible when source and target IDs are provided in the input data stream. Conversely, relationships can be represented as either node or edge, depending on the desired output. When a relationship should be represented as a node, i.e., "reified", BioCypher takes care to create a set of two edges and a node in place of the relationship. This is useful when we want to connect the relationship to other entities in the graph, for example literature references.
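To make the reification idea concrete, the sketch below uses the same tuple notation as above to show how a single relationship could be replaced by one node and two connecting edges; the labels and identifiers are hypothetical placeholders, not BioCypher's exact internal output:
# original relationship (purely illustrative)
relationship = (None, "P00533", "P01116", "interacts_with", {"method": "two-hybrid"})
# reified form: one node for the interaction itself ...
interaction_node = ("interaction_1", "interacts_with", {"method": "two-hybrid"})
# ... plus two edges connecting it to the original source and target (hypothetical labels)
edge_to_source = (None, "interaction_1", "P00533", "is_source_of", {})
edge_to_target = (None, "interaction_1", "P01116", "is_target_of", {})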
The third line (preferred_id) informs the uniqueness of represented entities by selecting an ontological namespace around which the definition of uniqueness should revolve. In our example, if a protein has its own UniProt ID, it is understood to be a unique entity. When there are multiple protein isoforms carrying the same UniProt ID, they are understood to be aggregated to result in only one unique entity in the graph. Decisions around the uniqueness of graph constituents sometimes require some consideration in task-specific applications. Selection of a namespace also has effects on identifier mapping; in our case, for protein nodes that do not carry a UniProt ID, identifier mapping will attempt to find a UniProt ID given the other identifiers of that node. To account for the broadest possible range of identifier systems while also dealing with parsing of namespace prefixes and validation, we refer to the Bioregistry project namespaces, which should be the preferred values for this field.
Finally, the fourth line (input_label) connects the input data stream to the configuration; here we indicate which label to expect in the input tuple for each class in the graph. In our case, we expect "uniprot_protein" as the label for each protein in the input data stream; all other input entities that do not carry this label are ignored, as long as they are not in the schema configuration.
Creating the graph (using the BioCypher interface)
All that remains to be done now is to instantiate the BioCypher interface (as the main means of communicating with BioCypher) and call the function to create the graph.
from biocypher import BioCypher
bc = BioCypher(
biocypher_config_path='01_biocypher_config_pandas.yaml',
schema_config_path='01_schema_config.yaml',
)
# Add the entities that we generated above to the graph
bc.add(entities)
INFO -- This is BioCypher v0.9.1. INFO -- Logging into `biocypher-log/biocypher-20250307-203226.log`. INFO -- Running BioCypher with schema configuration from 01_schema_config.yaml. INFO -- Loading ontologies... INFO -- Instantiating OntologyAdapter class for https://github.com/biolink/biolink-model/raw/v3.2.1/biolink-model.owl.ttl.
# Print the graph as a dictionary of pandas DataFrame(s) per node label
bc.to_df()["protein"]
| | node_id | node_label | sequence | description | taxon | id | preferred_id |
|---|---|---|---|---|---|---|---|
0 | F6M7Z7 | protein | IGQSELYWNAWQEWLDYDNFVQIRGIRPQMQIDEFGLDMIDTRAFTE | Lorem ipsum mbgrm | 9606 | F6M7Z7 | uniprot |
1 | O8Y0M7 | protein | SCEMHQTMEMQLFGSWHYQPIQILFEIYYWPVD | Lorem ipsum mwpax | 9606 | O8Y0M7 | uniprot |
2 | O5P9G9 | protein | GNFFNKHGYTTQKKDCFRPKMCPLLMCRRIWHGVSSVVWRGSCMP | Lorem ipsum wcjeq | 9606 | O5P9G9 | uniprot |
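Since this is an ordinary Pandas DataFrame, all standard Pandas operations are available; a small sketch using the columns shown above (assuming, as in the table, that taxon is stored as a string):
df = bc.to_df()["protein"]
# count proteins per taxon and list the identifiers of the human (9606) ones
print(df["taxon"].value_counts())
print(df.loc[df["taxon"] == "9606", "node_id"].tolist())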
Section 2: Merging data
Plain merge
Using the workflow described above with minor changes, we can merge data from different input streams. If we do not want to introduce additional ontological subcategories, we can simply add the new input stream to the existing one and add the new label to the schema configuration (the new label being entrez_protein). In this case, we would add the following to the schema configuration:
from data_generator import Protein, EntrezProtein
print_yaml('02_schema_config.yaml')
-------------- protein: represented_as: node preferred_id: uniprot input_label: - uniprot_protein - entrez_protein --------------
# Create a list of proteins to be imported
proteins = [
    p for sublist in zip(
        [Protein() for _ in range(n_proteins)],
        [EntrezProtein() for _ in range(n_proteins)],
    ) for p in sublist
]
# Create a new BioCypher instance
bc = BioCypher(
biocypher_config_path='02_biocypher_config_pandas.yaml',
schema_config_path='02_schema_config.yaml',
)
# Run the import
bc.add(node_generator(proteins))
INFO -- Running BioCypher with schema configuration from 02_schema_config.yaml. INFO -- Loading ontologies... INFO -- Instantiating OntologyAdapter class for https://github.com/biolink/biolink-model/raw/v3.2.1/biolink-model.owl.ttl.
bc.to_df()["protein"]
| | node_id | node_label | sequence | description | taxon | id | preferred_id |
|---|---|---|---|---|---|---|---|
0 | M5L6M9 | protein | VQLVILKLMFKAKLVANNLYAPWHH | Lorem ipsum cjusl | 9606 | M5L6M9 | uniprot |
1 | 767429 | protein | WIMWMQHYCKIVQRTRQSCTGAIS | Lorem ipsum bcuez | 9606 | 767429 | uniprot |
2 | G3H8N9 | protein | HQCIWAEPNSYEGEVHALFAAVGVTVHDVKNQIM | Lorem ipsum otkfx | 9606 | G3H8N9 | uniprot |
3 | 774050 | protein | SHRTMQMYQRSVGKGIPCDSC | Lorem ipsum byuxg | 9606 | 774050 | uniprot |
4 | C2G0M3 | protein | RRPIWQDYPNPTYTWSQCEVLSLIKYWC | Lorem ipsum rygko | 9606 | C2G0M3 | uniprot |
5 | 264315 | protein | KEMHFLWKCQSFYFGFFEACRK | Lorem ipsum kvuni | 9606 | 264315 | uniprot |
This again creates a single DataFrame for both protein types, now including both input streams (note both UniProt- and Entrez-style IDs in the id column). However, we are generating our entrez proteins as having Entrez IDs, which could result in problems in querying. Additionally, a strict import mode including regex pattern matching of identifiers will fail at this point due to the difference in pattern between UniProt and Entrez IDs. This issue could be resolved by mapping the Entrez IDs to UniProt IDs, but we will instead use the opportunity to demonstrate how to merge data from different sources into the same ontological class using ad hoc subclasses.
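To illustrate why pattern matching would fail here, compare the two identifier styles with a rough regular expression; the pattern below only approximates the UniProt accession format and is not the exact pattern used by BioCypher or the Bioregistry:
import re
# approximate UniProt accession pattern (illustrative only); Entrez gene IDs are purely numeric
uniprot_like = re.compile(r"^([OPQ][0-9][A-Z0-9]{3}[0-9]|[A-NR-Z][0-9][A-Z0-9]{3}[0-9])$")
entrez_like = re.compile(r"^[0-9]+$")
for identifier in ["M5L6M9", "767429"]:
    print(identifier, bool(uniprot_like.match(identifier)), bool(entrez_like.match(identifier)))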
Ad hoc subclassing
In the previous section, we saw how to merge data from different sources into the same ontological class. However, we did not resolve the issue of the entrez proteins living in a different namespace than the uniprot proteins, which could result in problems in querying. For proteins, it would probably be more appropriate to solve this problem using identifier mapping, but in other categories, e.g., pathways, this may not be possible because of a lack of one-to-one mapping between different data sources. Thus, if we so desire, we can merge datasets into the same ontological class by creating ad hoc subclasses implicitly through BioCypher, by providing multiple preferred identifiers. In our case, we update our schema configuration as follows:
print_yaml('03_schema_config.yaml')
-------------- protein: represented_as: node preferred_id: - uniprot - entrez input_label: - uniprot_protein - entrez_protein --------------
This will "implicitly" create two subclasses of the protein class, which will inherit the entire hierarchy of the protein class. The two subclasses will be named using a combination of their preferred namespace and the name of the parent class, separated by a dot, i.e., uniprot.protein and entrez.protein. In this manner, they can be identified as proteins regardless of their source by any query for the generic protein class, while still carrying information about their namespace and avoiding identifier conflicts.
Let's create a DataFrame with the same nodes as above, but with a different schema configuration:
bc = BioCypher(
biocypher_config_path='03_biocypher_config_pandas.yaml',
schema_config_path='03_schema_config.yaml',
)
bc.add(node_generator(proteins))
for name, df in bc.to_df().items():
    print(name)
    display(df)
INFO -- Running BioCypher with schema configuration from 03_schema_config.yaml. INFO -- Loading ontologies... INFO -- Instantiating OntologyAdapter class for https://github.com/biolink/biolink-model/raw/v3.2.1/biolink-model.owl.ttl.
uniprot.protein
| | node_id | node_label | sequence | description | taxon | id | preferred_id |
|---|---|---|---|---|---|---|---|
0 | M5L6M9 | uniprot.protein | VQLVILKLMFKAKLVANNLYAPWHH | Lorem ipsum cjusl | 9606 | M5L6M9 | uniprot |
1 | G3H8N9 | uniprot.protein | HQCIWAEPNSYEGEVHALFAAVGVTVHDVKNQIM | Lorem ipsum otkfx | 9606 | G3H8N9 | uniprot |
2 | C2G0M3 | uniprot.protein | RRPIWQDYPNPTYTWSQCEVLSLIKYWC | Lorem ipsum rygko | 9606 | C2G0M3 | uniprot |
entrez.protein
| | node_id | node_label | sequence | description | taxon | id | preferred_id |
|---|---|---|---|---|---|---|---|
0 | 767429 | entrez.protein | WIMWMQHYCKIVQRTRQSCTGAIS | Lorem ipsum bcuez | 9606 | 767429 | entrez |
1 | 774050 | entrez.protein | SHRTMQMYQRSVGKGIPCDSC | Lorem ipsum byuxg | 9606 | 774050 | entrez |
2 | 264315 | entrez.protein | KEMHFLWKCQSFYFGFFEACRK | Lorem ipsum kvuni | 9606 | 264315 | entrez |
Now we see two separate DataFrames, one for each subclass of the protein class.
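Because both implicit subclasses still belong to the protein class, a query for proteins in general can simply combine the per-class DataFrames; a minimal sketch in plain Pandas:
import pandas as pd
dfs = bc.to_df()
# combine the implicit subclasses into one view of all proteins
all_proteins = pd.concat([dfs["uniprot.protein"], dfs["entrez.protein"]], ignore_index=True)
print(all_proteins[["node_id", "node_label", "preferred_id"]])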
Section 3: Handling properties
While ID and label are mandatory components of our knowledge graph, properties are optional and can include different types of information on the entities. In source data, properties are represented in arbitrary ways, and designations rarely overlap even for the most trivial of cases (spelling differences, formatting, etc.). Additionally, some data sources contain a large wealth of information about entities, most of which may not be needed for the given task. Thus, it is often desirable to filter out properties that are not needed to save time, disk space, and memory.
Maintaining consistent properties per entity type is particularly important when using the admin import feature of Neo4j, which requires consistency between the header and data files. Properties that are introduced into only some of the rows will lead to column misalignment and import failure. In "online mode", this is not an issue.
We will take a look at how to handle property selection in BioCypher in a way that is flexible and easy to maintain.
Designated properties
The simplest and most straightforward way to ensure that properties are consistent for each entity type is to designate them explicitly in the schema configuration. This is done by adding a properties key to the entity type configuration. The value of this key is another dictionary, where in the standard case the keys are the names of the properties that the entity type should possess, and the values give the type of the property. Possible values are:
- str (or string),
- int (or integer, long),
- float (or double, dbl),
- bool (or boolean),
- arrays of any of these types (indicated by square brackets, e.g. string[]).
In the case of properties that are not present in (some of) the source data, BioCypher will add them to the output with a default value of None. Additional properties in the input that are not represented in these designated property names will be ignored. Let's imagine that some, but not all, of our protein nodes have a mass value. If we want to include the mass value on all proteins, we can add the following to our schema configuration:
print_yaml('04_schema_config.yaml')
-------------- protein: represented_as: node preferred_id: - uniprot - entrez input_label: - uniprot_protein - entrez_protein properties: sequence: str description: str taxon: str mass: int --------------
This will add the mass property to all proteins (in addition to the three we had before); if not encountered, the column will be empty. Implicit subclasses will automatically inherit the property configuration; in this case, both uniprot.protein and entrez.protein will have the mass property, even though the entrez proteins do not have a mass value in the input data.
from data_generator import EntrezProtein, RandomPropertyProtein
# Create a list of proteins to be imported (now with properties)
proteins = [
    p for sublist in zip(
        [RandomPropertyProtein() for _ in range(n_proteins)],
        [EntrezProtein() for _ in range(n_proteins)],
    ) for p in sublist
]
# New instance, populated, and to DataFrame
bc = BioCypher(
biocypher_config_path='04_biocypher_config_pandas.yaml',
schema_config_path='04_schema_config.yaml',
)
bc.add(node_generator(proteins))
for name, df in bc.to_df().items():
    print(name)
    display(df)
INFO -- Running BioCypher with schema configuration from 04_schema_config.yaml. INFO -- Loading ontologies... INFO -- Instantiating OntologyAdapter class for https://github.com/biolink/biolink-model/raw/v3.2.1/biolink-model.owl.ttl.
uniprot.protein
| | node_id | node_label | sequence | description | taxon | mass | id | preferred_id |
|---|---|---|---|---|---|---|---|---|
0 | B4F6M1 | uniprot.protein | KQGAYLKNAHCLPAAMISPWSCSPNFVWKTKDNEDDILTEAAGEQWQS | Lorem ipsum wqmjt | 6116 | NaN | B4F6M1 | uniprot |
1 | J8S4Q8 | uniprot.protein | EMYWSCPEVTHEGEMYPYADFYAFNLICIGKCRYLME | Lorem ipsum pghkf | 4135 | NaN | J8S4Q8 | uniprot |
2 | D5M8K6 | uniprot.protein | KEHLAAMVTDPLGPWSMMGGLALFLPINSEEWLMMQYAYEHPQTNETDR | Lorem ipsum taqsx | 4535 | 6811.0 | D5M8K6 | uniprot |
entrez.protein
| | node_id | node_label | sequence | description | taxon | mass | id | preferred_id |
|---|---|---|---|---|---|---|---|---|
0 | 285343 | entrez.protein | EMFSHFMMQLTDPWKNWNECHWRHSAPHPSIMLFTFSSPYNWIIEL | Lorem ipsum wqrqh | 9606 | None | 285343 | entrez |
1 | 678056 | entrez.protein | DFSKSCPEGGVTIPPLIYNIWDCKESAIWTHFRRDMMSNDEILQHW... | Lorem ipsum ibutr | 9606 | None | 678056 | entrez |
2 | 131909 | entrez.protein | KMCSINAQWAWCQPTGNAQWAPGN | Lorem ipsum pbjcm | 9606 | None | 131909 | entrez |
Inheriting properties
Sometimes, explicit designation of properties requires a lot of maintenance work, particularly for classes with many properties. In these cases, it may be more convenient to inherit properties from a parent class. This is done by adding a properties key to a suitable parent class configuration, and then defining inheritance via the is_a key in the child class configuration and setting the inherit_properties key to true.
Let's say we have an additional protein isoform class, which can reasonably inherit from protein and should carry the same properties as the parent. We can add the following to our schema configuration:
from data_generator import RandomPropertyProteinIsoform
print_yaml('05_schema_config.yaml')
-------------- protein: represented_as: node preferred_id: - uniprot - entrez input_label: - uniprot_protein - entrez_protein properties: sequence: str description: str taxon: str mass: int protein isoform: is_a: protein inherit_properties: true represented_as: node preferred_id: uniprot input_label: uniprot_isoform --------------
This allows maintenance of property lists for many classes at once. If the child class already defines properties of its own, those not present in the parent class are kept, while any that are also defined in the parent are replaced by the parent class properties.
Again, apart from adding the protein isoforms to the input stream, the code for this example is identical to the previous one except for the reference to the updated schema configuration.
We now create three separate DataFrames, all of which are children of the protein class: two implicit children (uniprot.protein and entrez.protein) and one explicit child (protein isoform).
# create a list of proteins to be imported
proteins = [
    p for sublist in zip(
        [RandomPropertyProtein() for _ in range(n_proteins)],
        [RandomPropertyProteinIsoform() for _ in range(n_proteins)],
        [EntrezProtein() for _ in range(n_proteins)],
    ) for p in sublist
]
# Create BioCypher driver
bc = BioCypher(
biocypher_config_path='05_biocypher_config_pandas.yaml',
schema_config_path='05_schema_config.yaml',
)
# Run the import
bc.add(node_generator(proteins))
for name, df in bc.to_df().items():
    print(name)
    display(df)
INFO -- Running BioCypher with schema configuration from 05_schema_config.yaml. INFO -- Loading ontologies... INFO -- Instantiating OntologyAdapter class for https://github.com/biolink/biolink-model/raw/v3.2.1/biolink-model.owl.ttl.
uniprot.protein
| | node_id | node_label | sequence | description | taxon | mass | id | preferred_id |
|---|---|---|---|---|---|---|---|---|
0 | F8X2E7 | uniprot.protein | VEYFPWPYEEWGQAITNEQEPKHDNSVHVNLGPRQWNFQNYT | Lorem ipsum sucas | 2449 | NaN | F8X2E7 | uniprot |
1 | W9Y6I4 | uniprot.protein | IKDRRTDVNCSIRKTSNGEEECCPMWHMDYVLSSMACEKDQETW | Lorem ipsum iimgv | 9323 | 8324.0 | W9Y6I4 | uniprot |
2 | Z0Y5B3 | uniprot.protein | YQCVLFEMQVTLCITYIMSQENPNISI | Lorem ipsum ljnbm | 8891 | NaN | Z0Y5B3 | uniprot |
protein isoform
| | node_id | node_label | sequence | description | taxon | mass | id | preferred_id |
|---|---|---|---|---|---|---|---|---|
0 | H8F9L4 | protein isoform | QVLIYDLIDLNCTRCWDWGTWWNL | Lorem ipsum binuk | 8081 | NaN | H8F9L4 | uniprot |
1 | A0Q9J8 | protein isoform | KFSRHINNGGEKAVQEWEQTSEGLVMGFRNSRWPYKWQQY | Lorem ipsum ebspk | 5773 | NaN | A0Q9J8 | uniprot |
2 | L1K7W3 | protein isoform | HVWMRWWFHGWINNYKEHWNSMMSITHASVLHKQEYQMEAGA | Lorem ipsum fxyad | 4565 | 7372.0 | L1K7W3 | uniprot |
entrez.protein
| | node_id | node_label | sequence | description | taxon | mass | id | preferred_id |
|---|---|---|---|---|---|---|---|---|
0 | 814463 | entrez.protein | YRLSSQRYIYQKDGPDHQVL | Lorem ipsum sfcix | 9606 | None | 814463 | entrez |
1 | 887207 | entrez.protein | KFESNGEDSYHNYWDAQHYSMFVVRCPMPLGHGVNNWE | Lorem ipsum jkddr | 9606 | None | 887207 | entrez |
2 | 681494 | entrez.protein | KQFRNGMKHWMFANLEYKYWEHNPFRECT | Lorem ipsum auede | 9606 | None | 681494 | entrez |
Section 4: Handling relationships
Naturally, we do not only want nodes in our knowledge graph, but also edges. In BioCypher, the configuration of relationships is very similar to that of nodes, with some key differences. First the similarities: the top-level class configuration of edges is the same; class names refer to ontological classes or are an extension thereof. Similarly, the is_a key is used to define inheritance, and the inherit_properties key is used to inherit properties from a parent class. Relationships also possess a preferred_id key, an input_label key, and a properties key, which work in the same way as for nodes.
Relationships also have a represented_as key, which in this case can be either node or edge. The node option is used to "reify" the relationship in order to be able to connect it to other nodes in the graph. In addition to the configuration of nodes, relationships also have fields for the source and target node types, which refer to the ontological classes of the respective nodes and are currently optional.
To add protein-protein interactions to our graph, we can modify the schema configuration above to the following:
print_yaml('06_schema_config_pandas.yaml')
-------------- protein: represented_as: node preferred_id: - uniprot - entrez input_label: - uniprot_protein - entrez_protein properties: sequence: str description: str taxon: str mass: int protein isoform: is_a: protein inherit_properties: true represented_as: node preferred_id: uniprot input_label: uniprot_isoform protein protein interaction: is_a: pairwise molecular interaction represented_as: edge preferred_id: intact input_label: interacts_with properties: method: str source: str --------------
Now that we have added protein protein interaction as an edge, we have to simulate some interactions:
from data_generator import InteractionGenerator
# Simulate edges for proteins we defined above
ppi = InteractionGenerator(
    interactors=[p.get_id() for p in proteins],
    interaction_probability=0.05,
).generate_interactions()
# naturally interactions/edges contain information about the interacting source and target nodes
# let's look at the first one in the list
interaction = ppi[0]
f"{interaction.get_source_id()} {interaction.label} {interaction.get_target_id()}"
'814463 interacts_with 681494'
# similarly to nodes, it also has a dictionary of properties
interaction.get_properties()
{'method': 'Lorem ipsum sqeyz'}
As with nodes, we first create a new BioCypher instance, and then populate it with nodes as well as edges:
bc = BioCypher(
biocypher_config_path='06_biocypher_config_pandas.yaml',
schema_config_path='06_schema_config_pandas.yaml',
)
INFO -- Running BioCypher with schema configuration from 06_schema_config_pandas.yaml.
# Extract id, source, target, label, and property dictionary
def edge_generator(ppi):
    for interaction in ppi:
        yield (
            interaction.get_id(),
            interaction.get_source_id(),
            interaction.get_target_id(),
            interaction.get_label(),
            interaction.get_properties(),
        )
bc.add(node_generator(proteins))
bc.add(edge_generator(ppi))
INFO -- Loading ontologies... INFO -- Instantiating OntologyAdapter class for https://github.com/biolink/biolink-model/raw/v3.2.1/biolink-model.owl.ttl.
Let's look at the interaction DataFrame:
bc.to_df()["protein protein interaction"]
| | relationship_id | source_id | target_id | relationship_label | method | source |
|---|---|---|---|---|---|---|
0 | None | 814463 | 681494 | protein protein interaction | Lorem ipsum sqeyz | None |
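Because nodes and edges are all plain DataFrames, they can also be joined with standard Pandas operations; the sketch below annotates each interaction with the description of its source node, first stacking the three node DataFrames into one table:
import pandas as pd
dfs = bc.to_df()
edges = dfs["protein protein interaction"]
# stack all node DataFrames (uniprot.protein, entrez.protein, protein isoform) into one table
nodes = pd.concat(
    [df for name, df in dfs.items() if name != "protein protein interaction"],
    ignore_index=True,
)
# left-join the description of each interaction's source node
annotated = edges.merge(
    nodes[["node_id", "description"]],
    left_on="source_id",
    right_on="node_id",
    how="left",
)
print(annotated[["source_id", "target_id", "description"]])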
Finally, it is worth noting that BioCypher relies on ontologies, which are machine-readable representations of domains of knowledge that we use to ground the contents of our knowledge graphs. While details about ontologies are out of scope for this tutorial and are described in detail in the BioCypher documentation, we can still get a glimpse of the ontology that we used implicitly in this tutorial:
bc.show_ontology_structure()
INFO -- Showing ontology structure based on https://github.com/biolink/biolink-model/raw/v3.2.1/biolink-model.owl.ttl
INFO --
entity
├── association
│   └── gene to gene association
│       └── pairwise gene to gene interaction
│           └── pairwise molecular interaction
│               └── protein protein interaction
└── named thing
    └── biological entity
        └── polypeptide
            └── protein
                ├── entrez.protein
                ├── protein isoform
                └── uniprot.protein
<treelib.tree.Tree at 0x13058fd60>